http://arxiv.org/abs/2306.17797v1
20230620082028
HIDFlowNet: A Flow-Based Deep Network for Hyperspectral Image Denoising
[ "Li Pang", "Weizhen Gu", "Xiangyong Cao", "Xiangyu Rui", "Jiangjun Peng", "Shuang Xu", "Gang Yang", "Deyu Meng" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Hyperspectral image (HSI) denoising is essentially ill-posed, since a noisy HSI can be degraded from multiple clean HSIs. However, current deep learning-based approaches ignore this fact and restore the clean image with a deterministic mapping (i.e., the network receives a noisy HSI and outputs a clean HSI). To alleviate this issue, this paper proposes a flow-based HSI denoising network (HIDFlowNet) to directly learn the conditional distribution of the clean HSI given the noisy HSI, so that diverse clean HSIs can be sampled from this conditional distribution. Overall, our HIDFlowNet is induced from the flow methodology and contains an invertible decoder and a conditional encoder, which fully decouple the learning of low-frequency and high-frequency information of the HSI. Specifically, the invertible decoder is built by stacking a succession of invertible conditional blocks (ICBs) to capture the local high-frequency details, since the invertible network is information-lossless. The conditional encoder utilizes down-sampling operations to obtain low-resolution images and uses transformers to capture long-distance correlations, so that global low-frequency information can be effectively extracted. Extensive experimental results on simulated and real HSI datasets verify the superiority of our proposed HIDFlowNet compared with other state-of-the-art methods, both quantitatively and visually.

[Figure 1: Instead of performing HSI denoising with a deterministic mapping, our HIDFlowNet learns the conditional distribution of clean HSIs given the corresponding noisy counterpart, which explicitly alleviates the ill-posed nature of HSI denoising and enables us to sample diverse clean HSIs. The charts on the right demonstrate that the reconstructed spectral reflectance of our HIDFlowNet is more consistent with the ground truth than that of other approaches, verifying the superiority of our proposed method.]

§ INTRODUCTION

A hyperspectral image (HSI) depicts an object in numerous narrow and contiguous spectral bands across the electromagnetic spectrum.
Compared with RGB images, HSIs enable a more comprehensive depiction of captured scenes due to their larger number of spectral bands and have been widely applied in various fields including remote sensing <cit.>, medical diagnosis <cit.>, agriculture <cit.> and so on. However, owing to multiple factors such as instrument instability, circuit malfunction and light disturbance, HSIs are often subjected to various noises during the data acquisition stage, which can negatively impact the performance of the aforementioned downstream applications. Therefore, noise reduction is an essential step in HSI analysis and processing. However, HSI denoising is an ill-posed problem since a given noisy HSI can be degraded from multiple clean HSIs, which presents significant challenges when designing HSI denoising approaches.

In the last decade, numerous HSI denoising techniques have been proposed, and these methods can be categorized into two classes, i.e., model-based approaches and deep learning-based methods. Model-based approaches rely on handcrafted priors and conduct HSI denoising in an iterative optimization manner. However, since the characteristics of HSIs are complex, handcrafted priors only partially reflect the features of HSIs, making these approaches incapable of handling unknown real-world noise. Moreover, the iterative optimization process consumes a substantial amount of time to denoise a single image. In contrast, by utilizing the impressive nonlinearity capability of neural networks, deep learning-based approaches model the intrinsic characteristics of HSIs in a data-driven manner. These methods learn the underlying image features statistically from abundant clean and noisy image pairs. Although these approaches can achieve desirable denoising performance, they can only predict a single clean HSI with a deterministic mapping (see Figure <ref>) and ignore the ill-posed nature of HSI denoising. Compared with distribution learning-based denoising approaches, these deterministic methods overemphasize pixel similarity and tend to predict the average of all possible clean images, resulting in over-smoothed areas and loss of image details. Additionally, most of the existing deep learning-based methods focus on directly learning the network mapping from numerous training pairs and neglect the fact that noise is part of the high-frequency component. Thus the existing network architectures often fail to decouple the learning of low-frequency and high-frequency information and therefore lack a specific physical meaning.

To alleviate these issues, this paper proposes a flow-based hyperspectral image denoising network (i.e., HIDFlowNet). HIDFlowNet aims to directly learn the conditional distribution of the clean HSIs by transforming the unknown conditional distribution of clean HSIs into a known Gaussian distribution (see Figure <ref>). Concretely, HIDFlowNet decouples the learning of low-frequency and high-frequency information of the HSI and contains two main components: a conditional encoder network and an invertible decoder network. The encoder network, composed of a series of transformer blocks and down-sampling operations, is utilized to extract global low-frequency information in an unsupervised manner. To be specific, the down-sampling operations employed in the encoder enable the network to obtain low-resolution images so that low-frequency information is extracted efficiently. Transformers, which are able to capture long-distance correlations, are also adopted to extract global information effectively.
Additionally, the invertible decoder is built by stacking a succession of invertible conditional blocks (ICBs) to preserve local high-frequency details, since invertible networks are information-lossless <cit.>. Finally, HIDFlowNet is trained by minimizing the negative log-likelihood of the conditional distribution given the training data together with a reconstruction loss to obtain high-quality HSIs. Once the training is finished, diverse clean HSIs corresponding to one noisy HSI can be generated by first sampling in the latent space and then performing inverse transforms. In summary, our contributions are summarized as follows:

* A flow-based network, namely HIDFlowNet, is proposed to learn the conditional distribution of a clean HSI given its corresponding noisy counterpart. The model is able to generate diverse restored images by sampling random Gaussian noise and performing inverse transforms. To our knowledge, this is the first attempt to employ a flow-based model for HSI denoising.
* The architecture of HIDFlowNet, induced from the flow methodology, contains two main components and has an explicit physical interpretation since it decouples the learning of low-frequency and high-frequency information of the HSI. The invertible decoder preserves the local high-frequency details and the conditional encoder network extracts the global low-frequency representation.
* Extensive experiments on simulated and real HSI datasets verify the superiority of our proposed method compared with other state-of-the-art methods.

§ RELATED WORK

In this section, we give a brief review of several research fields related to our work, including two major HSI denoising directions and flow-based generative models.

Model-based methods utilize prior information about the underlying statistical properties of the hyperspectral data to perform denoising. Handcrafted priors such as low-rank <cit.>, sparse representation <cit.>, total variation <cit.> and nonlocal similarity <cit.> have been proposed, and corresponding model regularization terms are designed to obtain promising denoising results. For example, in <cit.>, low-rank matrix recovery (LRMR) is proposed to simultaneously remove various noises by utilizing the low-rank property of HSIs and the sparse nature of non-Gaussian noise. Cao et al. <cit.> proposed a mixture of exponential power distributions in the low-rank matrix factorization framework to capture the complex noise of HSIs. Xue et al. <cit.> proposed a structured sparse low-rank representation (SSLRR) model to induce sparsity. Spatial-spectral total variation regularized local low-rank matrix recovery (LLRSSTV) <cit.> employed a global reconstruction strategy to fully utilize both the low-rank and smoothness properties of HSIs. He et al. <cit.> proposed NG-Meet, which unified spatial and spectral low-rank properties. While these methods effectively preserve the spectral and spatial characteristics of HSIs, the optimization of the model is typically complex and thus these methods can be considerably time-consuming. In addition, the denoising performance is highly dependent on the consistency between the priors and the HSIs. However, manually designed priors only partially reflect the intrinsic characteristics of HSIs, limiting their ability for HSI denoising.

Recently, deep learning-based methods for HSI denoising have gained increasing attention and popularity owing to the powerful nonlinear fitting ability of neural networks.
These methods capture the statistical characteristics of HSIs in a data-driven manner with a large number of training pairs. For instance, HSI-DeNet <cit.> employs a 2-D convolutional neural network to learn multiple image filters for HSI denoising. HSID-CNN <cit.> employs convolution kernels of multiple sizes to extract multilevel features, which are then fused to restore the HSIs. QRNN3D <cit.> introduces 3-D convolution blocks and quasi-recurrent mechanisms to extract spatial and spectral features simultaneously without damaging the image structure. GRN <cit.> uses two reasoning modules based on the graph neural network (GNN) to carefully extract both global and local spatial-spectral features. TRQ3DNet <cit.> first introduces a vision Transformer in HSI denoising, modelling the spatial long-range dependencies of HSIs and achieving desirable denoising performance. SST <cit.> conducts attention mechanisms in both spatial and spectral dimensions to fully explore the similarity characteristics of HSIs. HWnet <cit.> is proposed to improve the generalization ability of model-based methods in a data-driven manner. While demonstrating promising denoising performance, these approaches learn a deterministic mapping and neglect the fundamental ill-posed nature of HSI denoising.

Flow-based generative models have shown promising results in a variety of applications, including image generation <cit.>, speech synthesis <cit.>, and physics simulations <cit.>. These models transform a complex distribution into a known simple distribution (e.g., a Gaussian distribution) with an invertible network, so that diverse samples can be obtained by sampling in the known latent space and performing inverse transforms. For example, NICE <cit.> stacks several additive coupling layers and a rescaling layer to learn manifolds. Based on NICE, RealNVP <cit.> further proposes affine coupling layers with masked convolution to improve fitting ability. Glow <cit.> employs invertible 1 × 1 convolutions to perform channel permutations and actnorm layers to accelerate training. Recently, flow-based models that model complex conditional distributions have been increasingly proposed to tackle various tasks <cit.>. SRFlow <cit.> models the conditional distribution of high-resolution images given corresponding low-resolution images, enabling the trained model to predict diverse high-resolution images. VideoFlow <cit.> predicts high-quality stochastic multi-frame videos based on past observations using a normalizing flow. In this paper, we follow this research line and further exploit the application of flow-based methods in the HSI denoising task.

§ THE PROPOSED METHOD

In this section, we provide a detailed description of our proposed HIDFlowNet. Firstly, we present the ill-posed nature of the HSI denoising problem and then introduce conditional flow models. Next, we illustrate the network structure of HIDFlowNet in detail.

§.§ Conditional Generative Flows

The task of HSI denoising is to restore clean HSIs from given noisy HSIs. Generally, a degraded HSI can be mathematically modeled as 𝐘 = 𝐗 + ϵ, where 𝐘∈ℝ^H× W× B denotes the degraded HSI, 𝐗∈ℝ^H× W× B is the corresponding clean HSI and ϵ∈ℝ^H× W× B stands for the additive noise. H, W, B denote the height, width and spectral band number of the HSI, respectively. As previously mentioned, HSI denoising is an ill-posed problem since a noisy HSI can be degraded from multiple clean HSIs that are equally reasonable.
Therefore, instead of learning a deterministic mapping 𝐘→𝐗 as existing deep learning-based methods do, we propose to employ a flow-based network f_θ to learn the conditional distribution p_𝐗|𝐘(𝐗|𝐘, θ) of the clean HSI 𝐗 given its corresponding noisy counterpart 𝐘. Specifically, the network is designed to be invertible to guarantee a one-to-one mapping. To put it another way, the invertible network transforms a clean and noisy HSI pair (𝐗, 𝐘) into a latent variable 𝐳 = f_θ(𝐗;𝐘), and the clean HSI 𝐗 can be reconstructed exactly by performing inverse transforms as 𝐗 = f^-1_θ(𝐳;𝐘). In this context, by applying the change-of-variables formula, the probability density of p_𝐗|𝐘 can be explicitly defined as

p_𝐗|𝐘(𝐗|𝐘,θ) = p_𝐳(f_θ(𝐗;𝐘)) |det(∂f_θ(𝐗;𝐘)/∂𝐗)|,

where the det(·) term is the determinant of the Jacobian matrix ∂f_θ(𝐗;𝐘)/∂𝐗. Therefore, the conditional distribution of the clean HSI can be directly learned by minimizing the negative log-likelihood (NLL) as

ℒ_nll(θ;𝐗,𝐘) = -log p_𝐗|𝐘(𝐗|𝐘,θ) = -log p_𝐳(f_θ(𝐗;𝐘)) - log|det(∂f_θ(𝐗;𝐘)/∂𝐗)|.

In addition, the flow-based network is decomposed into a succession of invertible layers so that the determinant term in Eq.(<ref>) can be readily calculated. Specifically, the flow-based network consists of N invertible layers, i.e., f_θ = f_θ^N ∘ f_θ^N-1 ∘ ⋯ ∘ f_θ^1, where f_θ^n denotes the n-th layer. The n-th layer takes the outputs of the previous layer as inputs, i.e., 𝐡^n+1 = f^n_θ(𝐡^n;𝐘), where 𝐡^1 = 𝐗 and 𝐡^N+1 = 𝐳. Then, by employing the chain rule and the multiplicative property of the determinant, the NLL objective in Eq.(<ref>) can be written as

ℒ_nll(θ;𝐗,𝐘) = -log p_𝐳(𝐳) - ∑_n=1^N log|det(∂f_θ^n(𝐡^n;𝐘)/∂𝐡^n)|.

As a consequence, we only need to ensure that each layer is invertible and that the corresponding log-determinant of the Jacobian matrix can be efficiently computed, which will be detailed in the following section. Then clean HSIs can be sampled from p_𝐗|𝐘(𝐗|𝐘,θ_*) by drawing samples from a simple distribution (e.g., Gaussian) p_𝐳 and performing inverse transforms, i.e., 𝐗 = f^-1_θ_*(ẑ;𝐘), ẑ∼ p_𝐳, where θ_* denotes the learned parameters of the proposed network.
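To make the objective above concrete, the following is a minimal PyTorch-style sketch (not the actual HIDFlowNet implementation) of how the NLL can be accumulated layer by layer and how diverse clean HSIs can be sampled at test time. The layer interface, in which each invertible layer returns its output together with a per-sample log-determinant, and all names are illustrative assumptions.

import math
import torch

def conditional_nll(flow_layers, x, cond):
    # x: batch of clean HSIs; cond: conditional features derived from the noisy HSI Y.
    # Each layer is assumed to return (h_next, logdet), with logdet summed per sample.
    h = x
    total_logdet = torch.zeros(x.size(0), device=x.device)
    for layer in flow_layers:
        h, logdet = layer(h, cond)
        total_logdet = total_logdet + logdet
    # log-density of the latent z = h under a standard Gaussian prior
    dim = h[0].numel()
    log_pz = -0.5 * (h ** 2).flatten(1).sum(1) - 0.5 * dim * math.log(2 * math.pi)
    return -(log_pz + total_logdet).mean()

@torch.no_grad()
def sample_clean_hsi(flow_layers, cond, latent_shape, temperature=0.8):
    # Draw z ~ N(0, I) and run the invertible decoder in reverse to obtain a clean HSI.
    z = temperature * torch.randn(latent_shape)
    for layer in reversed(flow_layers):
        z = layer.inverse(z, cond)
    return z

In practice, the conditional input cond would be produced by the encoder described in the next subsection.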
§.§ Network Architecture

In this section, we illustrate the network architecture and implementation details of our proposed method.

§.§.§ Overall Network Architecture. While the invertibility of flow-based networks ensures a one-to-one mapping, this constraint also imposes limitations on the network design and decreases the fitting ability. Furthermore, the dimensionality of HSIs is significantly larger than that of RGB images, making the learning of the HSI distribution more challenging. Therefore, we propose to decouple the learning of the global low-frequency representation and the local high-frequency details. Specifically, we propose a flow-based framework, namely HIDFlowNet, which is composed of a transformer-based encoder and an invertible decoder, as shown in Figure <ref>. The framework employs a conditional encoder without the constraint of invertibility to learn global low-frequency information. Then the flow-based decoder, consisting of invertible conditional blocks (ICBs), takes the feature maps of the conditional encoder's hidden layers as conditional inputs and transforms samples drawn from a Gaussian distribution into local high-frequency information. Since invertible networks are information-lossless and can preserve details <cit.>, the flow-based decoder is ideal for learning the distribution of the high-frequency part of HSIs. Finally, we apply a bilinear upsampling operation to the outputs of the encoder to expand the spatial size. Then the restored HSI is obtained by adding up the outputs of the encoding network and the flow-based decoder, so that the global low-frequency and local high-frequency details are restored simultaneously. Next, we introduce the conditional encoder network and the invertible decoder network in detail.

§.§.§ Conditional Encoder. Previous works <cit.> perform either a checkerboard-pattern squeeze operation or Haar wavelets to reshape images to lower resolutions and capture information over a larger distance when designing invertible networks. However, each time the squeeze operation is performed, the number of channels becomes four times the original number, as the size of the image needs to remain unchanged to ensure reversibility. Such operations are not suitable for HSIs, which contain tens or even hundreds of spectral bands, as the exponential growth of the number of channels could lead to intolerable computational cost and model complexity. Therefore, inspired by previous work <cit.>, we compress the high-dimensional image data by applying down-sampling operations in the encoder, which is not required to be invertible, to capture low-frequency information while reducing model complexity in an unsupervised manner. Recently, vision transformers have gained great popularity in various tasks such as classification <cit.>, segmentation <cit.> and image restoration <cit.>. The self-attention mechanism in transformers enables networks to capture global dependencies and has demonstrated powerful representation capabilities. Therefore, in this work, the encoding network is built by stacking a succession of transformers with down-sampling operations to obtain global low-resolution representations, as shown in Figure <ref>. Specifically, the locally-enhanced window (LeWin) transformer block proposed in <cit.> is employed in HIDFlowNet, since the block is considerably efficient and captures both local and global features. Since the LeWin transformer is not the main point of our proposed method, readers can refer to <cit.> for further details. The downsampling is implemented by a 2-D convolution block with stride=2.

§.§.§ Invertible Decoder. The architecture of the invertible decoder, which learns the distribution of high-frequency information, requires careful design to ensure that the network is invertible and the Jacobian determinant term in Eq.(<ref>) is tractable. Based on previous works <cit.>, a novel invertible conditional block (ICB) is proposed in this work. As shown in Figure <ref>, each ICB consists of a conditional affine layer and a residual invertible 1 × 1 convolution. The conditional affine layer utilizes an information transfer layer to perform element-wise scaling and addition. Concretely, the conditional affine layer takes the low-resolution feature map t^n of the encoder layer as conditional input and generates a scale and a bias, which can be written as

s, b = g_θ(Up(t^n)),   h^n+1 = exp(s) ⊙ h^n + b,

where g_θ denotes the information transfer layer, Up(·) denotes bilinear upsampling and ⊙ is the Hadamard product. A half instance normalization block <cit.> with channel attention <cit.> (HinCaBlock) is employed as the information transfer layer in our work, which is shown in Figure <ref>. The Jacobian matrix of this affine transformation is diagonal, and the log-determinant can be efficiently computed by adding up the elements of the scale s. The inverse of this transformation is given by h^n = (h^n+1 - b) ⊘ exp(s), where ⊘ is element-wise division.
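A sketch of this conditional affine layer is given below, with a single convolution standing in for the HinCaBlock information transfer layer; channel sizes and module names are assumptions made only for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalAffine(nn.Module):
    """Element-wise affine transform whose scale and bias are predicted
    from the encoder feature map t (the conditional input)."""
    def __init__(self, channels, cond_channels):
        super().__init__()
        # stand-in for the HinCaBlock information transfer layer g_theta
        self.transfer = nn.Conv2d(cond_channels, 2 * channels, kernel_size=3, padding=1)

    def forward(self, h, t):
        t_up = F.interpolate(t, size=h.shape[-2:], mode='bilinear', align_corners=False)
        s, b = self.transfer(t_up).chunk(2, dim=1)
        h_next = torch.exp(s) * h + b
        logdet = s.flatten(1).sum(dim=1)   # diagonal Jacobian: log-det is the sum of scales
        return h_next, logdet

    def inverse(self, h_next, t):
        t_up = F.interpolate(t, size=h_next.shape[-2:], mode='bilinear', align_corners=False)
        s, b = self.transfer(t_up).chunk(2, dim=1)
        return (h_next - b) * torch.exp(-s)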
The work in <cit.> proposed an invertible 1 × 1 convolution as a permutation operation. However, the determinant of the convolution weight matrix is likely to be large and to change drastically during training, as the magnitudes of the matrix elements are comparable. In our work, we further propose a residual invertible 1 × 1 convolution to improve the stability of the training process. Specifically, the residual convolution can be defined as

h_ij^n+1 = W h_ij^n + h_ij^n = (W + I) h_ij^n,

where h_ij^n is the feature vector at spatial coordinate (i, j). The log-determinant is computed in a straightforward way as

log|det(dResidualConv(𝐡;𝐖)/d𝐡)| = h · w · log|det(𝐖 + 𝐈)|,

where h and w are the height and width of the feature map 𝐡, and ResidualConv denotes the residual invertible convolution. Since the channel number remains unchanged in the invertible decoder, the log-determinant can be trivially calculated. In addition, the Jacobian determinant term in Eq.(<ref>) prevents the coefficient matrix W+I from being singular. We initialize the parameters W with small values, such that the residual convolution approximately performs as an identity function, which is helpful for training deep networks <cit.>.
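The residual invertible 1 × 1 convolution can be sketched as follows; this is an illustrative PyTorch module rather than the released code. The log-determinant follows the equation above, and the inverse solves a linear system with W + I at every spatial location.

import torch
import torch.nn as nn

class ResidualInvConv1x1(nn.Module):
    """Applies y = (W + I) x channel-wise at every spatial location."""
    def __init__(self, channels):
        super().__init__()
        # small initial values so the layer starts close to the identity
        self.weight = nn.Parameter(0.01 * torch.randn(channels, channels))

    def forward(self, h):
        b, c, height, width = h.shape
        w_plus_i = self.weight + torch.eye(c, device=h.device)
        out = torch.einsum('oc,bchw->bohw', w_plus_i, h)
        logdet = height * width * torch.linalg.slogdet(w_plus_i).logabsdet
        return out, logdet * h.new_ones(b)

    def inverse(self, y):
        b, c, height, width = y.shape
        w_plus_i = self.weight + torch.eye(c, device=y.device)
        flat = y.permute(0, 2, 3, 1).reshape(-1, c)      # (b*h*w, c)
        x = torch.linalg.solve(w_plus_i, flat.T).T        # solve (W + I) x = y
        return x.reshape(b, height, width, c).permute(0, 3, 1, 2)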
§.§.§ Objective Function. As mentioned earlier, we propose a negative log-likelihood loss ℒ_nll(θ;𝐗,𝐘) to learn the distribution of HSIs. To restore high-quality HSIs and accelerate training, we further define a reconstruction loss as ℒ_rec(θ;𝐗,𝐘,ẑ) = ||f^-1_θ(ẑ;𝐘) - 𝐗||_1. Finally, the total objective function is defined as ℒ_total(θ;𝐗,𝐘,ẑ) = λ_1 ℒ_nll(θ;𝐗,𝐘) + λ_2 ℒ_rec(θ;𝐗,𝐘,ẑ), where λ_1 and λ_2 are hyperparameters. In our experiments, λ_1 and λ_2 are set to 0.001 and 1, respectively.

§ RESULTS

§.§ Experimental Settings

In this section, we provide a detailed description of the datasets and training settings in our experiments.

§.§.§ Synthetic Datasets. Two datasets, i.e., CAVE <cit.> and KAIST <cit.>, are used in our experiments. The CAVE dataset consists of 32 HSIs with a spatial resolution of 512 × 512 over 31 spectral bands. The KAIST dataset contains 30 HSIs with a spatial resolution of 2704 × 3376 over 31 spectral bands. For the CAVE dataset, we use 20 images for training, 2 images for validation and 10 images for testing. For the KAIST dataset, 20 images are used for training and the rest are used for testing, while 2 images selected from the CAVE dataset are used for validation. We crop the training images into patches with a spatial size of 64 × 64 and stride 16 to enlarge the training set, resulting in 16824 training patches in total. Various transformations, i.e., random flipping and multi-angle image rotation (angles of 0°, 90°, 180°, 270°), are employed for data augmentation.

§.§.§ Real HSI Data. We evaluate all competing approaches on one real-world noisy HSI, i.e., the Indian Pines dataset, which consists of 145 × 145 pixels with 220 bands. For computational convenience, we crop the centre area with a spatial size of 128 × 128 for comparison.

§.§.§ Noise Setting. We consider two types of noise (i.e., Gaussian noise and mixture noise), which are consistent with real-world situations <cit.>. In the Gaussian noise case, HSIs are contaminated by noise with variance set to {50, 70, 90}. In the mixture noise case, HSIs are contaminated by non-i.i.d. Gaussian noise, impulse noise, deadlines and stripes. Specifically, each band of the clean HSIs is first corrupted by Gaussian noise with random intensities ranging from 10 to 70. Next, the spectral bands are randomly divided into three parts, and the three parts are corrupted with impulse noise, stripe noise and deadline noise, respectively.

§.§.§ Competing Methods and Evaluation Metrics. Eight HSI reconstruction methods are adopted for comparison, including five model-based methods, i.e., BM4D <cit.>, LRTDTV <cit.>, NMoG <cit.>, FastHyDe <cit.>, LLRGTV <cit.>, and three learning-based methods, i.e., HSIDCNN <cit.>, QRNN3D <cit.>, SST <cit.>. Three commonly used image quality evaluation metrics, including peak signal-to-noise ratio (PSNR), structural similarity (SSIM) <cit.> and spectral angle mapper (SAM) <cit.>, are employed to evaluate the denoising performance of different approaches. Larger values of PSNR and SSIM and smaller values of SAM indicate better image quality.

§.§.§ Implementation Details. We implement the proposed framework HIDFlowNet in PyTorch. The Adam <cit.> optimizer with β_1 = 0.9, β_2 = 0.999 is employed to update model parameters, and the learning rate is set to 2 × 10^-4. All models are trained in an easy-to-difficult way, which has been proven helpful for network training <cit.>. Concretely, the networks are trained with Gaussian noise for 50 epochs and then trained with mixture noise for another 50 epochs. The training batch size is set to 8. For fair comparisons, all deep learning-based methods are trained and tested in the same way. The models trained for 50 and 100 epochs are employed to remove Gaussian noise and mixture noise, respectively. All deep learning-based models are trained on an NVIDIA GeForce RTX 3090 GPU.
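For reference, the per-band PSNR and the spectral angle mapper used above can be computed as in the following generic sketch; this is our own formulation of the standard definitions, not the exact evaluation script used in the experiments.

import numpy as np

def psnr(clean, denoised, data_range=1.0):
    # clean, denoised: arrays of shape (H, W, B); per-band PSNR averaged over bands
    mse = np.mean((clean - denoised) ** 2, axis=(0, 1))            # (B,)
    return float(np.mean(10.0 * np.log10(data_range ** 2 / (mse + 1e-12))))

def sam(clean, denoised, eps=1e-8):
    # mean spectral angle (in degrees) between per-pixel spectra
    x = clean.reshape(-1, clean.shape[-1])
    y = denoised.reshape(-1, denoised.shape[-1])
    cos = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())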
§.§ Experimental Results

§.§.§ Experiment on Synthetic Data. The denoising results on the CAVE dataset are shown in Table <ref> and Figure <ref>. It can be seen that our proposed HIDFlowNet demonstrates better performance in most cases. While achieving desirable results in the Gaussian noise cases, most model-based methods fail to tackle complex noise, as manually designed priors cannot fully describe complex situations. In addition, although HSIDCNN achieves the best PSNR in several cases by performing multiscale feature extraction, HIDFlowNet also achieves promising PSNR and performs significantly better on the other evaluation metrics. The visualization results of reconstructed HSIs are provided in Figure <ref>. As shown in the figure, model-based approaches yield either still-noisy images or over-smoothed results. Deep learning-based methods obtain promising denoising results but are also prone to over-smoothed predictions, since these methods overemphasize pixel similarity and ignore the underlying distribution of clean HSIs. In contrast, HIDFlowNet is more capable of preserving fine-grained details while restoring spatial smoothness without introducing undesirable artefacts. The excellent performance of HIDFlowNet is primarily owing to the fact that the compressive encoding component suppresses noise and enhances the low-frequency part of HSIs, while the flow-based decoder enjoys the information-lossless property and preserves textural details. Moreover, HIDFlowNet also exhibits desirable denoising performance on the KAIST dataset, as shown in Table <ref>, which further verifies the superiority of our proposed method.

§.§.§ Experiment on Real-World Data. We further apply all trained models to the Indian Pines dataset for real-world HSI denoising to verify the effectiveness of our proposed approach. Since there is no ground truth for real-world data, we provide visualization results in Figure <ref> for comparison. It can be observed that the original image is seriously degraded owing to environmental factors such as adverse atmospheric conditions or sensor failure. Compared with other approaches, our HIDFlowNet effectively handles the unknown noise and outputs sharper and more realistic results, confirming the robustness and superiority of HIDFlowNet.

§.§.§ Effectiveness of Flow Model. We present visualization results of the generated HSIs derived from different Gaussian noises in Figure <ref> to verify the effectiveness of our proposed flow-based model. It can be observed that while the generated HSIs are highly similar, which verifies the stability of the trained model, there still exist differences in local details owing to different noises, confirming the effectiveness of our proposed flow-based model.

§.§ Ablation Study

In this section, we provide an ablation study on the components of HIDFlowNet and model complexity.

§.§.§ Feature Decoupling Analysis. In addition to quantitative results, we provide a visual analysis to further prove the effectiveness of the proposed encoding network and the flow-based decoder. Specifically, the inputs and the feature maps of the 3rd, 6th and 9th layers of the encoder and decoder are depicted in Figure <ref>. It can be seen that as the number of layers increases, the outputs of the encoder tend to ignore local details (e.g., the joints of the blocks) and gradually capture global low-frequency information. Since attention is calculated in local windows, as elaborated in <cit.>, the feature map of the last layer exhibits a relatively obvious reticular structure. The outputs of the decoder demonstrate that, with the guidance of the encoder, random Gaussian noise is transformed into local high-frequency information progressively, confirming the feasibility of the invertible network.

§.§.§ Component Analysis. There are two components in an invertible conditional block, including an affine conditional layer and a residual invertible convolution. In this section, to verify the effectiveness and rationality of the two components adopted in our work, we conduct denoising on the KAIST dataset in the Gaussian noise case with σ=50 for comparison; the effect of the two components is reported in Table <ref>. As can be seen, the model without affine conditional layers demonstrates the worst performance, since the decoder is a pure generative network without conditional information in this case, and the quality of the denoising result is highly reliant on the performance of the encoder. The full HIDFlowNet adopted in our work outperforms the other configurations, verifying the rationality of the proposed approach.

§.§.§ Model Complexity. We further investigate the influence of the depth of HIDFlowNet by testing models on the KAIST test set in the Gaussian noise case with σ=50. As shown in Table <ref>, the denoising performance improves with an increasing number of ICBs. HIDFlowNet with 9 ICBs is adopted in our work as a tradeoff between complexity and performance.

§ LIMITATIONS AND FUTURE WORK

While our proposed HIDFlowNet exhibits promising denoising performance, there are still several limitations. Specifically, the invertibility requirement of flow-based models restricts the use of various operations such as convolution with larger kernels, attention mechanisms and dimension reduction, reducing the fitting ability of the network.
Moreover, the proposed method lacks control over the generative process and is unable to explicitly generate HSIs with desired properties such as higher SSIM. In the future, novel invertible frameworks and controllable generative models are worth further exploration to alleviate these problems.

§ CONCLUSION

To alleviate the ill-posed nature of HSI denoising (i.e., multiple predictions are reasonable for a given noisy HSI), which is ignored by most existing deep learning-based approaches, this paper proposes a novel flow-based network, namely HIDFlowNet. The network directly learns the distribution of clean HSIs conditioned on noisy counterparts and is capable of generating diverse clean HSIs. Specifically, the proposed HIDFlowNet is composed of a conditional encoder and an invertible decoder to decouple the learning of low-frequency and high-frequency information. The encoder utilizes transformers and down-sampling operations to obtain low-resolution images so that global representations are effectively extracted, while the decoder employs a series of invertible conditional blocks to preserve local details. Extensive experiments on two synthetic datasets and one real-world dataset demonstrate the superiority of our proposed model both quantitatively and qualitatively.
http://arxiv.org/abs/2306.05772v2
20230609092548
A Boosted Model Ensembling Approach to Ball Action Spotting in Videos: The Runner-Up Solution to CVPR'23 SoccerNet Challenge
[ "Luping Wang", "Hao Guo", "Bin Liu" ]
cs.CV
[ "cs.CV" ]
A Boosted Model Ensembling Approach to Ball Action Spotting in Videos: The Runner-Up Solution to CVPR'23 SoccerNet Challenge

Luping Wang (equal contribution), Hao Guo (equal contribution), Bin Liu (corresponding author)
Research Center for Applied Mathematics and Machine Intelligence, Zhejiang Lab, Hangzhou 311121, China
{wangluping, guoh, liubin}@zhejianglab.com
July 31, 2023

This technical report presents our solution to Ball Action Spotting in videos. Our method reached second place in the CVPR'23 SoccerNet Challenge. Details of this challenge can be found at <https://www.soccer-net.org/tasks/ball-action-spotting>. Our approach is developed based on a baseline model termed E2E-Spot <cit.>, which was provided by the organizer of this competition. We first generated several variants of the E2E-Spot model, resulting in a candidate model set. We then proposed a strategy for selecting appropriate model members from this set and assigning an appropriate weight to each model. The aim of this strategy is to boost the performance of the resulting model ensemble. Therefore, we call our approach Boosted Model Ensembling (BME). Our code is available at <https://github.com/ZJLAB-AMMI/E2E-Spot-MBS>.

§ INTRODUCTION

To better understand the salient actions of a broadcast soccer game, SoccerNet has introduced the task of action spotting, which involves finding all the actions occurring in the videos. This task addresses the more general problem of retrieving moments with specific semantic meaning in long untrimmed videos, extending beyond just soccer understanding. Details of the SoccerNet Ball Action Spotting challenge can be found at <https://www.soccer-net.org/tasks/ball-action-spotting>. In this technical report, we introduce our submitted solution, termed Boosted Model Ensembling (BME), which reached second place in this challenge.

Our proposed solution is built on a baseline model termed E2E-Spot <cit.>, which was provided by the organizers of this challenge. We analyzed E2E-Spot and identified three opportunities to improve it for addressing the SoccerNet Ball Action Spotting challenge:

* In the data set, only one frame associated with a representative event is labeled, whereas in reality, each considered event should be associated with multiple consecutive frames.
* A higher-quality event feature extraction may help, as indicated by <cit.>.
* The loss function used in training E2E-Spot is not fully consistent with the evaluation metric adopted by this challenge.

Taking all of the above issues into consideration, we developed our solution BME, described in detail in Section <ref>. The experimental setting is presented in Section <ref>, and some key results are shown in Section <ref>. Finally, we conclude our work in Section <ref>.

§ OUR METHOD

In this section, we describe our proposed method BME in detail.

§.§ The Model Ensembling Operation

The key operation of BME is model ensembling, which is illustrated in Figure <ref>. As shown, the final model ensemble F_T is obtained after T iterations. At an iteration, say t, an objective function obj_t associated with the performance metric is defined (namely, Equation (2) in Section 2.2), which is used to select the best model f_i_t and its weight value w_t.
Then, a new model ensemble F_t is obtained by combining F_t-1 and f_i_t as follows:

F_t(x) = (1 - w_t) F_t-1(x) + w_t f_i_t(x).   (1)

§.§ Objective Function

The objective function used to select the best model f_i_t and its weight value w_t, which appear in Equation (1), is defined as follows:

obj_t = e(F_t, 𝒟_valid) - e(F_t-1, 𝒟_valid),   (2)

where the function e(·) denotes the target performance metric and 𝒟_valid the validation data set. At each iteration, we search for an appropriate member model and its corresponding weight that maximize Equation (2), and then update the model ensemble according to Equation (1).

§.§ Generating Candidate Models

All candidate models are built on E2E-Spot <cit.>. Their differences lie in: (1) the training samples being used; (2) the network architectures for feature extraction; and (3) the optimizer being used.

Training samples. We generate training samples as shown in Figure <ref>. Firstly, the video is decomposed into a fixed number of frames per second (FPS=25 in our case). Then, all extracted frames are labeled based on the time of the given events together with a label sharing scope controlled by a hyper-parameter Δ. Finally, a training sample set 𝒟_s,Δ = {(x_s,i, y_Δ,i)}_i=1^N can be constructed by randomly picking N video clips with a fixed clip length L and a fixed frame stride size s. Different settings of s and Δ lead to training sample sets with different properties. If Δ is set to a large value, the ratio of event frames increases while the ratio of error-labeled frames also increases, and vice versa. The larger the stride size s, the longer the time span the clip covers, and the poorer the continuity between frames. Therefore, models trained with such different sample sets will have different properties.

Network architectures for feature extraction. RegNet <cit.> is used as the baseline feature architecture in E2E-Spot <cit.>. However, according to the experimental results reported in <cit.>, RegNet performs worse than EfficientNet <cit.> on the problems addressed in that paper. Therefore, RegNet and EfficientNet are the two candidates for the feature architecture considered in our solution. In addition, we incorporated the Gate-Shift Module (GSM) <cit.> into the 2D convolutional operator included in both RegNet and EfficientNet. The two versions of the feature architecture are denoted as rny008_gsm and enetb2_gsm, respectively.

The optimizer. We use the same baseline optimizer as in <cit.>, which is AdamW. In addition, we incorporate stochastic weight averaging (SWA) <cit.> into the training process to improve the generalization of the trained model, denoted as AdamW^†. Both AdamW and AdamW^† are considered candidates for the optimizer used to train a candidate model.

Each candidate model is trained with a specific combination of the training sample set, network architecture for feature extraction, and optimizer. Therefore, the number of candidate models is N_1 × N_2 × N_3, where N_1, N_2, and N_3 represent the number of training sample sets, network architectures, and optimizers, respectively.
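A compact sketch of the boosted ensembling loop defined by Equations (1) and (2) is given below: at each iteration it scans all candidate models and candidate weights, keeps the pair that most improves the validation metric, and stops once no further improvement is found. The evaluate function (which computes the challenge metric on D_valid) and the prediction interface are assumptions made for illustration only.

def boosted_model_ensembling(candidates, evaluate, max_iters=10):
    """candidates: list of functions f_i(x) -> score array for the validation clips.
    evaluate: function mapping an ensemble predictor to the validation metric."""
    weights = [round(0.1 * k, 1) for k in range(1, 11)]          # {0.1, ..., 1.0}
    ensemble = candidates[0]                                      # F_0: start from one member
    best_score = evaluate(ensemble)
    for _ in range(max_iters):
        best_gain, best_next = 0.0, None
        for f in candidates:
            for w in weights:
                cand = lambda x, F=ensemble, f=f, w=w: (1 - w) * F(x) + w * f(x)   # Eq. (1)
                gain = evaluate(cand) - best_score                                  # obj_t, Eq. (2)
                if gain > best_gain:
                    best_gain, best_next = gain, cand
        if best_next is None:                                     # no member improves the metric
            break
        ensemble, best_score = best_next, best_score + best_gain
    return ensemble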
§ EXPERIMENTAL SETTING

Datasets. We solely used the dataset provided by the challenge organizers in our experiments. We employed five settings for constructing training samples, namely 𝒟_1,5, 𝒟_1,4, 𝒟_2,5, 𝒟_2,4, and 𝒟_2,2. The clip length is set to L=100, and the frame crop size is 224. During the test phase, we used 𝒟_valid as the validation dataset, while during the challenge phase, it was used as the test dataset.

Training candidate models. All hyperparameters used for training candidate models were kept the same, unless otherwise specified. We selected GRU <cit.> as the temporal architecture of E2E-Spot and employed data augmentation techniques such as random cropping, random flipping, brightness, contrast, hue, saturation, and MixUp <cit.> during training. The initial learning rate was set to 0.001 and was scheduled based on LinearLR and CosineAnnealingLR after warming up for 3 epochs. Each member model was trained for a total of 100 epochs on an A100 GPU (80 GB), with a batch size of 8. All the related source code was implemented using PyTorch 1.12.1.

Other issues. During model inference, the length of the overlap between adjacent clips is set to L-1, i.e., overlap_len=99. After model inference, we employed non-maximum suppression (NMS) <cit.> as a post-processing step on the predicted results. The window size, frame rate, and threshold of NMS are set to 10, 25, and 0.01, respectively. When using BME to ensemble the sub-models, the weights are sampled from {0.1, 0.2, 0.3, ⋯, 1.0}.

§ RESULTS

To provide a clear view of the performance of each sub-model and the overall result of BME, we present the metric values and the weights of the selected candidate models in Table <ref>. From the table, we can observe that the performance of the sub-model candidates is similar, but their abilities and/or properties may differ. However, by combining the results of the selected sub-models through BME, we achieved a significant improvement, with a score of 86.37% on the challenge metric. These findings suggest that the method of generating candidate models is reasonable and that the proposed BME approach is effective.

§ CONCLUSION

In this report, we presented our submitted solution, termed Boosted Model Ensembling (BME), for the CVPR'23 SoccerNet Challenge (<https://www.soccer-net.org/tasks/ball-action-spotting>). BME is a model ensembling approach built on the end-to-end baseline model, E2E-Spot, as presented in <cit.>. We generate several variants of the E2E-Spot model to create a candidate model set and propose a strategy for selecting appropriate model members from this set while assigning appropriate weights to each selected model. BME is characterized by operations for generating candidate models and a novel method for selecting and weighting them during the model ensembling process. The resulting ensemble model takes into account uncertainties in event length, optimal network architectures, and optimizers, making it more robust than the baseline model. Our approach can potentially be adapted to handle various video event analysis tasks.
http://arxiv.org/abs/2306.03516v1
20230606090840
COPR: Consistency-Oriented Pre-Ranking for Online Advertising
[ "Zhishan Zhao", "Jingyue Gao", "Yu Zhang", "Shuguang Han", "Siyuan Lou", "Xiang-Rong Sheng", "Zhe Wang", "Han Zhu", "Yuning Jiang", "Jian Xu", "Bo Zheng" ]
cs.IR
[ "cs.IR", "cs.LG" ]
Zhishan Zhao and Jingyue Gao contribute equally to this work. Han Zhu is the corresponding author. Alibaba Group, Beijing, China.

Cascading architecture has been widely adopted in large-scale advertising systems to balance efficiency and effectiveness. In this architecture, the pre-ranking model is expected to be a lightweight approximation of the ranking model, which handles more candidates with strict latency requirements. Due to the gap in model capacity, the pre-ranking and ranking models usually generate inconsistent ranked results, thus hurting the overall system effectiveness. The paradigm of score alignment has been proposed to regularize their raw scores to be consistent. However, it suffers from inevitable alignment errors and error amplification by bids when applied in online advertising. To this end, we introduce a consistency-oriented pre-ranking framework for online advertising, which employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize the consistency of ECPM-ranked results. A Δ NDCG-based weighting mechanism is adopted to better distinguish the importance of inter-chunk samples in optimization. Both online and offline experiments have validated the superiority of our framework. When deployed in the Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM.

§ INTRODUCTION

Online advertising has become a major source of revenue for many web platforms <cit.>. Advertisers ensure effective promotion of products by bidding and paying for user actions (e.g., click and purchase)[Without loss of generality, we regard click as the action in this paper] on advertisements (i.e., ads). To maximize platform revenue, the advertising system typically ranks ads based on their Expected Cost Per Mille (ECPM) <cit.> and selects the top ones for impression: ECPM = 1000 × bid × pCTR, where bid is the price that the advertiser is willing to pay and pCTR is the predicted click-through rate (CTR), denoting the probability that the user clicks the ad. Under strict latency requirements in online deployment, it is infeasible for complex CTR models <cit.> with high inference cost to handle millions of candidates in the ad corpus. To balance efficiency and effectiveness, a common practice in industrial systems is to adopt a cascading architecture <cit.>, which filters ads through multiple phases with increasingly complex models, as illustrated in Fig. <ref>. Particularly, the retrieval model first retrieves tens of thousands of relevant ads from the corpus. Afterwards, the pre-ranking model outputs pCTR for the retrieved candidates, where the top hundreds with the highest ECPM are sent to the ranking model for final selection. To handle a larger candidate set, the pre-ranking model is usually designed to be lightweight, which works more efficiently but less accurately compared with the ranking model.

Pre-ranking has recently received increasing attention due to its importance in the cascading architecture. Huang et al. <cit.> propose a two-tower model that maps users and candidates into latent vectors and calculates their inner products. To enable high-order feature interactions, Li et al. <cit.> add fine-grained interactions between the two towers, and Wang et al.
<cit.> propose to use a deep neural network with a squeeze-and-excitation block. Despite improvements in accuracy, there is still a non-negligible gap between the pre-ranking and ranking models. They may generate significantly different ranked results on the same candidate set. Such inconsistency hinders the overall system effectiveness. For example, top ads selected from the pre-ranking phase could be less competitive in the ranking phase, wasting computational resources. Also, ads which are preferred in the ranking phase could be unfortunately discarded in the pre-ranking phase, leading to sub-optimal results. Some pioneering studies <cit.> propose to align the pre-ranking and ranking models via distillation on pCTR scores. The pre-ranking model is encouraged to generate the same scores as the ranking model <cit.> or to generate high scores for top candidates selected by the ranking model <cit.>. Although exhibiting encouraging performance, the paradigm of score alignment suffers from the following issues, especially when applied to the advertising system:

* Inevitable alignment errors. Due to its simpler architecture and fewer parameters for efficiency reasons, the capacity of the pre-ranking model is limited, making it difficult to closely approximate the original scores of the complex ranking model. Thus, even with explicit optimization, there still exist errors in aligning their scores to be exactly the same.
* Error amplification in ECPM ranks[We use ECPM rank to denote the order of an ad in the ECPM-ranked list.]. In both the pre-ranking and ranking phases, ads are ranked according to their ECPM as in Eq. (<ref>), which is jointly determined by the pCTR score and the bid. Thus the influence of alignment errors could be amplified due to the existence of bids. As shown in Table <ref>, when multiplied by the corresponding bids, even a tiny difference in pCTR scores of the pre-ranking and ranking models leads to completely different ranked results.

The above issues call for rethinking the necessity of strictly aligning pCTR scores in the advertising system. Essentially, given a set of candidates, it is not their absolute pCTR scores but their relative ECPM ranks that determine the results of each phase. Therefore, to achieve consistent results, the pre-ranking model is not required to output the same pCTR scores as the ranking model. Instead, it only needs to output scores that yield the same ECPM ranks when multiplied by bids. In this way, the requirement of score alignment can be relaxed to that of rank alignment, which is easier to meet. Moreover, when optimizing pCTR scores for consistent ECPM ranks, the influence of bids can be taken into account beforehand, thus alleviating the issue of error amplification.

To this end, we introduce a Consistency-Oriented Pre-Ranking (COPR) framework for online advertising, which explicitly optimizes the pre-ranking model towards consistency with the ranking model. Particularly, we collect historical logs of the ranking phase, where each log records an ECPM-ranked list of candidates. COPR segments the list into fixed-sized chunks. Each chunk is endowed with a certain level of priority from the view of the ranking phase. With pairs of ads sampled from different chunks, COPR learns a plug-and-play rank alignment module which aims to consistently distinguish their priority using scores at the pre-ranking phase. Moreover, we adopt a Δ NDCG-based weighting mechanism to better distinguish the importance of inter-chunk pairs in optimization.
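As a concrete illustration of the amplification effect described above (with hypothetical numbers in the spirit of the table referenced earlier), nearly aligned pCTR scores can still yield opposite ECPM orders once multiplied by the bids:

bids = [2.0, 1.5]                                # advertiser bids for ad A and ad B
pctr_ranking = [0.030, 0.041]                    # scores from the ranking model
pctr_preranking = [0.032, 0.040]                 # nearly aligned pre-ranking scores

ecpm_rank = [1000 * b * p for b, p in zip(bids, pctr_ranking)]        # [60.0, 61.5] -> B first
ecpm_pre = [1000 * b * p for b, p in zip(bids, pctr_preranking)]      # [64.0, 60.0] -> A first

Here the pre-ranking scores deviate from the ranking scores by at most 0.002, yet ad A is ranked first in the pre-ranking phase while ad B is ranked first in the ranking phase.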
Our main contributions can be summarized as follows:

* To the best of our knowledge, we are the first to explicitly optimize the pre-ranking model towards consistency with the ranking model in the widely-used cascading architecture for online advertising.
* We propose a novel consistency-oriented pre-ranking framework named COPR, which employs a chunk-based sampling module and a plug-and-play rank alignment module for effective improvement of consistency.
* We conduct extensive experiments on public and industrial datasets. Both offline and online results validate that the proposed COPR framework significantly outperforms state-of-the-art baselines. When deployed in the Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM.

§ RELATED WORK

In this section, we briefly review studies about pre-ranking. Located in the middle of the cascading architecture, the pre-ranking system has played an indispensable role in many large-scale industrial systems <cit.>. The development of a pre-ranking model is mainly for balancing system effectiveness and efficiency, as the downstream ranking model usually cannot deal with tens of thousands of candidates. To this end, techniques such as dual-tower modeling <cit.> are commonly adopted. However, this paradigm limits feature interactions between users and items to the form of a vector product, which often results in considerable performance degradation. Another line of work strives to enhance high-order feature interactions and explores ways to reduce online latency. Li et al. <cit.> add fine-grained and early feature interactions between the two towers. Wang et al. <cit.> propose to use fully-connected layers and employ various techniques from the perspectives of both modeling efficiency and engineering optimization. Specifically, a Squeeze-and-Excitation module <cit.> is utilized to choose the most useful feature set, and meanwhile system parallelism and low-precision computation are exploited whenever possible for latency optimization. Ma et al. <cit.> propose a feature selection algorithm based on feature complexity and variational dropout (FSCD) to search for a set of effective and efficient features for pre-ranking. A similar study <cit.> uses network architecture search (NAS) to determine the optimal set of features and corresponding architectures. These studies mainly focus on improving the accuracy of the pre-ranking model but neglect its interaction with the subsequent ranking model, leading to inconsistent ranked results.

Several studies propose to align the pre-ranking and ranking models in terms of pCTR scores via knowledge distillation. RD <cit.> encourages the lightweight student model to score higher for candidates selected by the larger teacher model, which is often used in training pre-ranking models. RankFlow <cit.> regularizes the pre-ranking and ranking models to generate the same scores for the same candidates. Despite encouraging performance, there still exist inevitable errors in score alignment due to the discrepancy in model capacity. When applied in online advertising, the influence of such errors is amplified by the bids of ads, yielding inconsistent ECPM-ranked results. In this paper, we propose to relax the objective of score alignment to rank alignment, where the bids of ads are incorporated and the consistency of ranked results between the two phases can be explicitly optimized in an effective manner.
§ METHODOLOGY

In this section, we first introduce background knowledge about the pre-ranking model, and then describe our proposed COPR framework as illustrated in Fig. <ref>.

§.§ Background

Training Data. When the advertising system serves online traffic as in Fig. <ref>, hundreds of ads are ranked through the ranking phase and recorded to logs, which we refer to as ranking logs. Each log contains a ranked list of ads with descending ECPM: 𝐑 = [(ad_1, pCTR_1, bid_1),...,(ad_M, pCTR_M, bid_M)], where pCTR_i is the score output by the ranking model for the i-th ad and bid_i denotes its bid. M is the number of candidates. Then the top N ads are displayed to the user. User feedback y (click/non-click) on each displayed ad is recorded to impression logs: 𝐈 = [(ad_1,y_1),...,(ad_N, y_N)].

Base Model. The base model for pre-ranking is usually a lightweight CTR model. Here we adopt the architecture of COLD <cit.>. The input features consist of three parts: user features 𝐔 such as age and gender, ad features 𝐀 such as brand and category, and context features 𝐂 such as time and device. After pre-selecting a concise set of features, COLD feeds them into embedding layers and concatenates their embeddings into a compact representation 𝐱: 𝐱 = E(𝐔) ⊕ E(𝐀) ⊕ E(𝐂). Then it employs a prediction net consisting of multiple fully-connected layers to estimate CTR: ŷ = Sigmoid(MLP(𝐱)) ∈ [0,1]. To accurately predict the user click y, the model is optimized with the cross-entropy loss over the impression logs 𝐈: L_ctr = ∑_𝐈[-ylog(ŷ)-(1-y)log(1-ŷ)].

§.§ Consistency-Oriented Pre-Ranking

Though the pre-ranking model is expected to well approximate the ranking model in the cascading system, their gap in model capacity often hinders a satisfactory approximation. Thus, in addition to L_ctr, we aim to explicitly optimize the pre-ranking model towards consistent results with the ranking model over 𝐑.

§.§.§ Chunk-Based Sampling

Given candidates {ad_i}_i=1^M in ranking logs, an ideal pre-ranking model should output scores that yield the same ECPM-ranked list as Eq. (<ref>). Considering its limited capacity, it could be hard to rank hundreds of ads all in correct positions. To reduce the learning difficulty, we partition the ranked list into D=M/K fixed-sized chunks, each comprising K adjacent ads, as shown in Fig. <ref>. We regard ads in the same chunk as candidates with the same priority in the ranking phase. The pre-ranking model is not required to distinguish ads in the same chunk. Instead, it only needs to consistently rank candidates at the granularity of chunks. For each chunk, we randomly sample a candidate and endow it with the priority of this chunk. In this way, for each ranked list, we obtain a concise sub-list: 𝐑_chunk = [(ad_s_d, pCTR_s_d, bid_s_d, D-d)]_d=1^D, where s_d is the index of the sampled ad in chunk d and D-d denotes its priority (the larger, the better). The above chunk-based sampling has two advantages: 1) It provides a flexible way to control the granularity of consistency, which makes the objective reachable for the lightweight pre-ranking model. By increasing the chunk size K, the objective of consistency gradually shifts from fine-grained to coarse-grained. 2) It effectively reduces the size of the ranked lists in logs by a factor of K while still maintaining coverage of the original lists, which is critical for efficient training in industrial machine learning systems. In our production implementation, K is set to 10.
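A minimal sketch of the chunk-based sampling step is given below, assuming each ranking-log entry is a tuple (ad_id, pctr, bid) already sorted by descending ECPM; function and field names are illustrative rather than the production implementation.

import random

def chunk_based_sampling(ranked_list, chunk_size=10):
    """Partition an ECPM-ranked list into chunks of size K and sample one ad
    per chunk; the sampled ad inherits the chunk-level priority D - d."""
    num_chunks = len(ranked_list) // chunk_size
    samples = []
    for d in range(num_chunks):
        chunk = ranked_list[d * chunk_size:(d + 1) * chunk_size]
        ad_id, pctr, bid = random.choice(chunk)
        samples.append((ad_id, pctr, bid, num_chunks - (d + 1)))   # priority D - d
    return samples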
Instead of regularizing the difference between ŷ_i in Eq. (<ref>) and pCTR_i in Eq. (<ref>) as score alignment methods <cit.>, we propose to relax the objective to rank alignment on a properly-adjusted pCTR score. Particularly, we employ a relaxation net to learn a factor α > 0, with which we adjust the original pCTR score: α = ReLU(MLP(x))+1e^-6∈ℛ^+, ỹ = α * ŷ, where ỹ denote the adjusted pCTR. Thus ECPM at the pre-ranking phase can be accordingly estimated as ỹ * bid, based on which we aim to correctly rank each inter-chunk pair in 𝐑_chunk. Here we adopt the pairwise logistic loss for its relatively good performance and the simplicity for implementation <cit.>: L_rank = ∑_i<jlog[1+e^-(ỹ_s_i * bid_s_i/ỹ_s_j * bid_s_j-1) ]. For each pair of ad_s_i and ad_s_j sampled from different chunks that i<j, we optimize L_rank by encouraging ỹ_s_i * bid_s_i > ỹ_s_j * bid_s_j, which means ad_s_i would be ranked before ad_s_j by ECPM in the pre-ranking phase. If all inter-chunk pairs can be correctly ranked, we achieve consistent ECPM-ranked results between the pre-ranking and ranking phases over R_chunk. Note that by introducing the relaxation factor α, we slightly modify the original pCTR score to achieve consistent ranked results if necessary. To maintain original value as much as possible, α should be around 1. Thus we add a symmetric regularization to penalize the deviation of α from 1: L_reg = α-1 α>1 1/α-1 α<=1 . It is worth mentioning that the proposed rank alignment module does not rely on specific assumption about the architecture of base model. It is an plug-and-play component that can be added to any pre-ranking models for improvement of consistency. §.§.§ Δ NDCG-Based Pair Weighting L_rank in Eq. (<ref>) fails to consider the relative importance of different pairs in consistency optimization. In practice, consistently ranking ads from chunk 1 and chunk 10 is more important than ranking chunk 11 and chunk 20, since only the top ads will be sent to the ranking phase and displayed to users. It calls for a weighting mechanism that considers chunk-related priorities of candidates. Intuitively, if pair (ad_s_i, ad_s_j) in L_rank are mistakenly ranked, the consistency between the pre-ranking and ranking phase will be hurt. Thus its weight in L_rank should be determined by the negative impact. As each sampled ad_s_d in 𝐑_chunk is endowed with priority D-d, we use NDCG <cit.> to measure the utility of any ranked list p of these candidates: DCG = ∑_i=1^D2^p_i-1/log(i+1), IDCG = ∑_i=1^D2^D-i-1/log(i+1), where p_i denote the priority of i-th ad in the permutation and the IDCG is the ideal DCG achieved by 𝐑_chunk. If we swap the position of ad_s_i and ad_s_j in 𝐑_chunk, the utility of the list will experience a drop which can be further normalized as: Δ NDCG(i,j) = 2^D-i-2^D-j/IDCG[1/log(i+1)-1/log(j+1)]. The utility drop is used to re-weight inter-chunk pairs in consistency optimization: L_rank = ∑_i<jΔ NDCG(i,j) log[1+e^-(ỹ_s_i * bid_s_i - ỹ_s_j * bid_s_j)]. Thus the objective function of COPR can be formulated as: L = L_ctr_CTR Loss+ λ_1L_rank + λ_2 L_reg_Consistency Loss, where λ_1>0, λ_2>0 are weights for corresponding loss terms. By minimizing L, we explicitly optimize the pre-ranking model towards consistency with the ranking model via a plug-and-play rank alignment module. §.§ System Deployment We introduce the deployment of COPR in three stages: data generation, model training, and online serving as shown in Fig. <ref>. Data Generation. 
During online serving, hundreds of ads are ranked through ranking model and recorded to ranking logs, with which we perform chunk-based sampling. The content of each sample includes user index, ad index, chunk index as well as the bid. Note that the bid at the ranking phase could differ from that at the the pre-ranking phase <cit.>. In this case, we record the pre-ranking bid since it influences L_rank in model training. When ads are displayed to users in the client, we also record user feedback in impression logs, which are used in calculating L_ctr. Model Training. The training procedure is performed on our ODL (Online Deep Learning) <cit.> platform, which consumes real-time streaming data to continuously update model parameters. After training with fixed number of steps, the learnt model will be delivered to the Model Center, which manages all online models. Online Serving. Once a new version of pre-ranking model is ready, pre-ranking server will load it from Model Center to replace the online version in service. § EXPERIMENTS In this section, we conduct experiments on both public dataset and production dataset to validate the effectiveness of COPR in improving consistency and overall system performance. §.§ Experiment Setup Taobao Dataset. It is a public dataset[https://tianchi.aliyun.com/dataset/dataDetail?dataId=56] with 26 million impression logs of 1 million users and 0.8 million items in 8 days. Item price is used as bid. Impressions of first 7 days are used to train DIN <cit.> as the ranking model. For each impression, we sample 10 candidates and collect ECPM-ranked results by the ranking model to train pre-ranking models. Logs of the last day are used for evaluation. To simulate the cascading process, we sample 100 candidates for each impression, among which the pre-ranking and ranking model sequentially select top 10 and top 1 candidates to display. Production Dataset. It contain 8 days of impression logs and ranking logs collected from our system shown in Fig. <ref>. These logs are of the magnitude of billions. The first week of logs are used for training and the last day is used for evaluation. According to the scenario that logs come from, it is further divided into two subsets: Homepage and Post-Purchase. Baselines. COPR is compared with following baselines: * Base adopts the architecture of COLD <cit.> and is trained on impression logs. * Distillation <cit.> directly distills predicted scores of the ranking model on impression logs. * RankFlow <cit.> distills predicted scores of the ranking model on ranking logs and further regularizes the pre-ranking model to generate high scores for candidates selected by the ranking model. * COPR w/o Δ NDCG removes the Δ NDCG-based weighting mechanism from the COPR framework. Metrics. We adopt two groups of metrics in evaluation. * The first group measures the consistency between ECPM-ranked results of the pre-ranking and ranking phases, including HitRatio(HR@K), normalized discounted cumulative gain (NDCG@K), and mean average precision (MAP@K). In HR@K and MAP@K, top 10 candidates selected by the ranking model are treated as relative ones. In NDCG@K, order in ranking logs is used as a proxy of relevance. The standard calculation of these metrics can be found in <cit.>. * The second group measures the overall system performance. We use Click-Through-Rate (CTR) and Revenue Per Mille (RPM) similar to <cit.>, which corresponds to user experience and platform revenue, respectively. 
On public dataset, CTR is simulated as the portion of clicked ads in displayed ads, and RPM is simulated as the product of CTR and average bid of clicked ads. In production experiment, we perform online A/B test to obtain CTR and RPM on real traffic. Hyper-parameters. The chunk size is set to 2 and 10 on the public dataset and the production dataset, respectively. The number of MLP layers in the prediction net and the relaxation net is 3. The embedding size of raw input features is set to 16. λ_1 and λ_2 in Eq. (<ref>) are fixed to 1 and 0.2. §.§ Results on Public Dataset Table <ref> compares COPR and baselines in terms of consistency and system performance. We only show K=10 in HR@K, NDCG@K, and MAP@K due to limited space. Results under other settings of K are similar. From Table <ref>, we draw the following conclusions. First, system performance (CTR and RPM) is highly associated with the consistency between the pre-ranking and ranking phases. For COPR and baselines, the higher consistency generally yields the better system performance. It validates our motivation to explicitly optimize consistency between phases in order to improve the overall effectiveness of the cascading system. Second, COPR achieves best consistent results of all methods, outperforming the state-of-the-art RankFlow by 5.1%, 13.5%, and 33.0% in terms of HR@10, NDCG@10, and MAP@10. We attribute the improvement to our shift of objective from score alignment to rank alignment. By such relaxation, COPR can directly optimize towards consistent ECPM-ranked results and meanwhile reduce the learning difficulty for the lightweight model. Moreover, the influence of bids is considered in training COPR, thus alleviating the issue of error amplification that RankFlow suffers from. We also find that RankFlow is better than Distillation. We think it is because Rankflow aligns scores over ranking logs while the latter is on impression logs which is too sparse. Third, COPR w/o Δ NDCG experiences performance drop compared with COPR. This ablation study verifies the effectiveness of the pair weighting mechanism based on Δ NDCG. By emphasizing more on important inter-chunk pairs in consistency optimization, COPR ensures top candidates are more likely to be consistently ranked, which helps improve the overall utility of pre-ranking results. §.§ Results on Production Dataset We also perform similar evaluation on the production dataset composed of samples from two scenarios. Most conclusions are consistent with those on the public dataset. As shown in more details from Fig. <ref> to Fig. <ref>, COPR significantly outperforms other methods in term of HR@K, NDCG@K, and MAP@K with varying K from 5 to 100 on two scenarios, which demonstrates the stable improvement of consistency achieved by our proposed framework. Moreover, we still observe the gap between COPR and COPR w/o Δ NDCG, which shows that the weighting mechanism also works in the large-scale production dataset. To evaluate system performance in production environment, we perform online A/B test on two scenarios, where these methods are used to serve real users and advertisers. From Table <ref> we find that Distillation, RankFlow, and COPR all perform better than the production baseline, among which COPR achieves the largest improvement, with a lift of up to +12.3% CTR and +5.6% RPM. With impressive performance, COPR has been successfully deployed to serve the main traffic of Taobao display advertising system in the pre-ranking phase since October of 2022. 
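For completeness, the consistency loss that drives these gains (the Δ NDCG-weighted rank alignment term together with the symmetric regularizer on the relaxation factor, combined with L_ctr using λ_1 = 1 and λ_2 = 0.2 as above) can be sketched in PyTorch-style code as follows. This is an illustrative reading of the equations in the methodology section, not the deployed implementation; the identifiers, the base-2 logarithm in the NDCG discount, and the per-list reduction are assumptions.

```python
import torch

def rank_alignment_loss(y_tilde, bids):
    """Delta-NDCG-weighted pairwise logistic loss over one chunk-sampled list.
    y_tilde[d]: alpha-adjusted pCTR of the ad sampled from chunk d (d = 0..D-1,
    ordered as in the ranking phase); bids[d]: its pre-ranking bid."""
    D = y_tilde.shape[0]
    ecpm = y_tilde * bids                                  # estimated ECPM at pre-ranking
    pos = torch.arange(1, D + 1, dtype=y_tilde.dtype)      # 1-indexed chunk positions
    gains = 2.0 ** (D - pos)                               # 2^(D-d)
    discounts = 1.0 / torch.log2(pos + 1.0)
    idcg = torch.sum((gains - 1.0) * discounts)            # ideal DCG of the chunk list
    loss = ecpm.new_zeros(())
    for i in range(D):
        for j in range(i + 1, D):
            # utility drop (Delta NDCG) if the inter-chunk pair (i, j) were swapped
            dndcg = (gains[i] - gains[j]) / idcg * (discounts[i] - discounts[j])
            # pairwise logistic term encouraging ecpm[i] > ecpm[j]
            loss = loss + dndcg * torch.log1p(torch.exp(-(ecpm[i] - ecpm[j])))
    return loss

def alpha_regularizer(alpha):
    """Symmetric penalty keeping the relaxation factor close to 1:
    alpha - 1 for alpha > 1 and 1/alpha - 1 otherwise."""
    return torch.where(alpha > 1.0, alpha - 1.0, 1.0 / alpha - 1.0).mean()

# total objective: L = L_ctr + 1.0 * rank_alignment_loss(...) + 0.2 * alpha_regularizer(...)
```

In practice the double loop over inter-chunk pairs would be vectorized; the scalar form is kept here to mirror the equations.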
§.§ Qualitative Analysis Given the ranked results from the pre-ranking and ranking phases, we calculate the average pre-ranking position of the candidates at each ranking position, based on which we draw the Ranking-PreRanking Curve (RPC). The ideal RPC occurs when the two results are exactly the same. §.§.§ Error Amplification in ECPM Rank. As shown in Fig. <ref> (Left), the RPC by pCTR of RankFlow is close to the ideal curve, showing good alignment of raw pCTR between the two phases. However, after ranking by ECPM, the RPC of RankFlow deviates largely from the ideal one. It verifies that the involvement of bids in ECPM amplifies the influence of errors in score alignment, leading to more inconsistent ECPM-ranked results. This analysis is consistent with the example in Table <ref>. Hence we confirm that score alignment alone is not enough for the cascading architecture in online advertising. §.§.§ More Consistent ECPM Rank. Fig. <ref> (Right) shows the RPC by ECPM of different methods. We observe that, compared with Base and RankFlow, the RPC of COPR is closer to the ideal curve at almost every ranking position. It qualitatively shows that the ECPM-ranked results given by COPR are more consistent with the results of the ranking phase. This can be attributed to the design of our consistency-oriented framework, where the rank alignment module directly optimizes towards this objective. The incorporation of bids also helps alleviate the above-mentioned error amplification. § CONCLUSION In this paper, we introduce a consistency-oriented pre-ranking framework for online advertising, which employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize the consistency of ECPM-ranked results. A Δ NDCG-based weighting mechanism is also adopted to better distinguish the importance of inter-chunk samples in optimization. Both online and offline experiments validate the superiority of our framework. When deployed in the Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM.
http://arxiv.org/abs/2306.07631v1
20230613085950
Time Resolved Investigation of High Repetition Rate Gas Jet Target For High Harmonic Generation
[ "Balázs Nagyillés", "Zsolt Diveki", "Arjun Nayak", "Mathieu Dumergue", "Balázs Major", "Katalin Varjú", "Subhendu Kahaly" ]
physics.optics
[ "physics.optics", "physics.app-ph", "physics.atom-ph", "physics.comp-ph", "quant-ph" ]
[Correspondence: ][email protected] ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Institute of Physics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary LULI–CNRS, CEA, Sorbonne Université, Ecole Polytechnique, Institut Polytechnique de Paris, Paris ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Department of Optics and Quantum Electronics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Department of Optics and Quantum Electronics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary [Correspondence: ][email protected] ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Institute of Physics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary High repetition rate gas targets constitute an essential component in intense laser matter interaction studies. The technology becomes challenging as the repetition rate approaches kHz regime. In this regime, cantilever based gas valves are employed, which can open and close in tens of microseconds, resulting in a unique kind of gas characteristics in both spatial and temporal domain. Here we characterize piezo cantilever based kHz pulsed gas valves in the low density regime, where it provides sufficient peak gas density for High Harmonic Generation while releasing significantly less amount of gas reducing the vacuum load within the interaction chamber, suitable for high vacuum applications. In order to obtain reliable information of the gas density in the target jet space-time resolved characterization is performed. The gas jet system is validated by conducting interferometric gas density estimations and high harmonic generation measurements at the Extreme Light Infrastructure Attosecond Light Pulse Source (ELI ALPS) facility. Our results demonstrate that while employing such targets for optimal high harmonic generation, the high intensity interaction should be confined to a suitable time window, after the cantilever opening. The measured gas density evolution correlates well with the integrated high harmonic flux and state of the art 3D simulation results, establishing the importance of such metrology. Time Resolved Investigation of High Repetition Rate Gas Jet Target For High Harmonic Generation Subhendu Kahaly July 31, 2023 =============================================================================================== § INTRODUCTION Investigations in ultrashort laser-plasma science in the strong field regime are generically based on the interaction of an appropriately focused laser driver on to reflective (overdense) or transparent (underdense) targets. The interaction conditions needs to be reproduced, and hence the target needs to be replenished, at the repetition rate of the laser. Recent advances in few cycle, high peak power high repetition rate (≥ 1 kHz) lasers <cit.> has expedited the development and characterization of targets that are able to sustain interactions at such challenging repetition rate in a reproducible and stable manner. 
For all transmission based experiments in this domain, the use of gas targets is widespread because they can provide dense, stable and reproducible medium for laser matter interaction studies. The application space is ever expanding with recent demonstrations of laser wake field acceleration of electrons <cit.> and high harmonic based attosecond pulse generation <cit.>, both operating at a high repetition rate. In both the cases a continuous gas cell has been used for the interaction and the accessible gas density space is limited due to the residual gas load within the vacuum chamber. One straight-forward way to overcome this is to use a high repetition rate gas jet target with appropriate nozzle geometry. Pulse valves working up to a very high pressure and gas density has been demonstrated <cit.>, albeit operating at a low frequency. Nonetheless the available repetition rate for pulsed valves currently allows one to reach up to ∼ 5kHz <cit.>. The importance of careful metrology of gas jets emanating from such valves with respect to their appropriate application space cannot be overemphasized. Such systems are important for the attoscience community and beyond. For example coupled with the emergence of ≥ 1 kHz intense lasers <cit.> such a high repetition rate gas jet target can enable the extension of the recent demonstrations like multi millijoule THz <cit.> and/or relativistic single cycle mid IR pulses <cit.> to the high average power regime, opening up wide ranging applications. The capability of solenoid type Even-Lavie valves operating at less than 2 kHz repetition rate has been demonstrated in the domain of high harmonic spectroscopy of molecules <cit.> and transient absorption spectroscopy <cit.>. Here we undertake the space and time resolved investigation of the gas density profile of a piezo cantilever based high repetition rate gas jet from the perspective of optimizing the high harmonic generation (HHG). HHG is a non-linear process where the strong fundamental laser field gets coherently upconverted to a comb of higher frequency radiation <cit.>. This frequency conversion happens in a gas target most of the cases, when the atomic/molecular system of the gas is driven in the strong field regime <cit.>. The conversion efficiency is inherently defined by the HHG process which is dependent on the characteristics of the generating laser pulse and the gas medium. One of the parameters to optimize the high harmonic radiation, is the pressure of the gas target, since the number of particles determine the number of emitters and absorbers in the HHG and define the phase-mismatch. There is a fine balance between increasing the number of emitters and absorbing the generated harmonics <cit.>, for a given set of laser parameters. Thus, it is evident that proper gas target characterization is essential for optimization of the HHG process. In this article, we perform interferometric characterization of the space-time resolved gas density profile and study the HHG from a cantilever based high repetition rate piezo gasjet system. Our investigation reveal a clear correlation of the HHG yield and the dynamics of the gas density evolution. We further corroborate our observation with 3D strong field simulations that incorporate, microscopic HHG along with macroscopic propagation effects emulating the experimental conditions.Our results show that the gas density profile resulting from such a valve is intricately linked to the dynamics of the cantilever piezo. 
The remarkable correlation of the HHG signal with the gas density dynamics allows us to achieve stable and optimum harmonic signal through careful timing setting of the valve opening with respect to the pulse arrival. This also establishes the importance of such space time resolved characterization for each such high repetition rate piezo cantilever based valve for any given application under consideration. This becomes crucial in systems like the SYLOS COMPACT beamline at ELI-ALPS <cit.> where several high repetition rate pulsed gas jets can be placed in sequence <cit.>, in order to improve XUV beam energy, by optimising the phase matching conditions for applications in nonlinear XUV physics <cit.>. § EXPERIMENTS The required behaviour of a gas jet in HHG is its short opening time, while creating high density jet at its orifice at high repetition rate. It is crucial to reliably synchronize the timing of the nozzle opening of the gas jet and the arrival of the generating laser pulse, in order to get the best harmonic yield. The experiments have been conducted at two separate locations. We developed a standalone test station to characterize the gas density inside the jet under different timing of trigger, valve opening time and backing pressure. We used the outcome to compare it to the harmonic yield obtained from the experiments conducted at the SYLOS COMPACT beamline <cit.> at ELI-ALPS. §.§ Gas jet characterization by interferometry The density profiling of gas targets has been carried out with several different methods (see <cit.> and references teherin), for both static <cit.> and pulsed jets <cit.>. Here we undertake space-time resolved interferometry to access the gas atomic density distribution. The experimental layout is based on an Mach-Zehnder interferometer, Fig. <ref>(a). An expanded He-Ne laser is used to enter the interferometer after being split in two arms. The interaction arm passes through a gas jet to introduce some phase shift in the optical path of the laser beam with respect to the reference arm. Both arms are in vacuum. The gas target transverse plane (represented by I_1(y,z) in Fig. <ref>(a)) is imaged onto a CMOS sensor (a commercial Basler acA1440-73gm camera), where the recombined beams form the interference pattern (I_12(y,z) in Fig. <ref>(b)). Several vital points have been taken into account when characterizing the gasjets: * To keep the signal to noise ratio high (especially at low gas jet densities) an ATH 500M turbo-molecular pump was providing the low ambient pressure in the chamber, 10^-5-10^-6 mbar. Additionally, the whole part of the interferometer, which is not in vacuum had to be covered to protect the beam paths from air fluctuations and the assembly was placed on stable optical table. These precautions, reduced the residual gas load, minimised parasitic vibrations and reduced refractive index fluctuations in the interferometer, allowing the intrinsic noise of the setup to be limited to gas density levels as low as about 3×10^17 cm^-3, estimated from the analysis of reference images of the interference pattern, without activating the valve. * The experimental target gas is argon which has high refractive index of 1.00028 at wavelength λ=633 nm (for example significantly higher compared to helium 1.000034 at the same λ) allowing for more sensitivity in terms of measuring phase difference, in spite of the sub-millimetric width of the gasjet, even in the sub-10^18 cm^-3 density regime. 
* The gas refractive index ansatz is valid, when the characterization is not corrupted due to molecular jet formation with large clusters. Within the parameter range relevant to us the empirical Hagena parameter Γ^*≪ 100 is significantly less that the limit Γ^*∼ 10^3 required for cluster formation <cit.>. For the tests we used an Amsterdam Piezo Valve ACPV2 model, a cantilever piezo with 500 μm nozzle size. Cantilever piezos can deliver large displacements up to hundreds of micrometers to 1 mm, while working at high repetition rates, up to 5kHz. The difference of the cantilever piezos to disk shaped piezos is that by adjusting the free length of the cantilever one can adjust the displacement of the cantilever<cit.>. For example, by decreasing the length, the displacement drops rapidly, while its resonant frequency increases. Cantilever resonance can introduce observable effects in gas density measurement. Since the cantilever will bounce back and forth while opening and closing the pulsed valve, it can introduce pressure and hence number density fluctuation in the released gas within one such cycle of operation. The synchronization between the camera and the jet is realized with a delay generator. The time resolution of the measurement - which is determined by the shortest possible exposure time of the CMOS sensor is 1 μs. For each measurement two images are recorded, one with the nozzle opened and one with the nozzle closed serving as a reference measurement without any gas present in any arms - this is realized by running the camera at twice the repetition rate of the gas source. For resolving the temporal evolution of the gas density while opening the valve, the camera trigger was delayed compared to the trigger signal of the jet. The setup can record images at up to 100 Hz, but in order to get a background free image one has to wait until the turbomolecular pump can reduce the pressure in the chamber to the base - 10^-5-10^-6 level, resulting a few Hertz operation. As depicted in the flowchart in Fig. <ref>(b), the 2D phase shift ϕ(y,z) is extracted from the interferogram using 2D Fourier transformation algorithm described in Ref. <cit.>. One can see from a typical unwrapped phase map presented in Figure <ref>(b) (step 3) that in the plane perpendicular to the propagation axis x, the jet rapidly spreads out as the distance from the nozzle tip increases (vertical z direction). The extra contribution to the phase shift introduced in the probe beam propagating along x by the argon gas density profile is, Δϕ(y,z) = ∫2π/λΔ n(x,y,z)dx, where Δ n(x,y,z)=n(x,y,z)-1, is the shift in index of refraction due to the presence of the gas and n(x,y,z) is the refractive index of argon jet. As explained in the caption of Fig. <ref>, Δϕ(y,z) is calculated from two projection interferograms: one with the gasjet on and the other without any gas in the interaction arm. The measured phase-map Δϕ(y,z) is a 2D projection of the 3D distribution of the phase difference Δϕ(r,z) introduced by the gasjet (r is the radial distance from the center of the gasjet axis z). 
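As an illustration of the Fourier-transform fringe analysis step mentioned above, a possible implementation is sketched below. The analysis code used for the measurements is not given in the paper, so the helper name, the sideband-selection mask and the choice of unwrapping routine are assumptions.

```python
import numpy as np

def fringe_phase(interferogram, sideband_mask):
    """Extract the wrapped fringe phase from a single interferogram with the
    2D Fourier method: keep one carrier sideband, shift it to the origin and
    inverse-transform.  sideband_mask is a boolean array selecting that lobe."""
    spectrum = np.fft.fftshift(np.fft.fft2(interferogram))
    sideband = np.where(sideband_mask, spectrum, 0.0)
    # move the strongest pixel of the selected lobe to the array centre,
    # which removes the linear carrier fringes
    cy, cx = np.unravel_index(np.argmax(np.abs(sideband)), sideband.shape)
    sideband = np.roll(sideband, (sideband.shape[0] // 2 - cy,
                                  sideband.shape[1] // 2 - cx), axis=(0, 1))
    return np.angle(np.fft.ifft2(np.fft.ifftshift(sideband)))

# phase shift introduced by the jet: difference of the gas-on and reference
# (gas-off) maps, followed by 2D phase unwrapping
# (e.g. skimage.restoration.unwrap_phase):
# delta_phi = unwrap(fringe_phase(img_gas, mask) - fringe_phase(img_ref, mask))
```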
Since one can assume that the jet has a cylindrical symmetry it is possible to transform the projection Δϕ(y,z) to a radial distribution θ(r, z) = Δϕ(r,z) using the inverse Abel transform (IAT) <cit.> as follows: θ(r, z) = IAT[Δϕ(y, z)] = -1/π∫_r^∞dΔϕ(y, z)/dy1/√(y^2 - r^2) dy where r is the radial distance from the center of the nozzle, z is the vertical distance from the tip of the nozzle and transverse coordinate y is the coordinate perpendicular to x and z. This is indicated in step 4 of Fig. <ref>(b). We numerically carry out the IAT in Python using the well developed BAsis Set EXpansion (BASEX)<cit.> method in the package PyAbel <cit.>. The refractive index can be expressed with the radial distribution using equation, Δ n(r,z)=n(r,z)-1 = λ/2 πθ (r,z), where λ is the wavelength of the laser. The refractive index is connected to the number density. This can be found through the series of few steps. The molar reflectivity (A) relates the optical properties of the substance with the thermodynamic properties. From the Lorenz—Lorentz expression which is dependent on the temperature (T) through the molar mass (M), A = ( n^2 -1 ) /( n^2 +2 ) M/ρ, where n is the refractive index of the atomic gas and ρ is the gas density <cit.>. The molar mass can be given by, M = R T ρ/p, where R is the universal gas constant and p is pressure. Using the relation between polarizability (α_e=1.664 Å^3 for argon) and molar reflectivity, A = 4/3N_Aα_eπ and ideal gas law as pV = NRT, the number density n_md=N/V in the units of particles/cm^3 can be written as: n_d = n_md/N_A = 3/4( n^2 -1 ) /( n^2 +2 ) 1/N_A^2α_e π This is the last step presented in Fig. <ref>(b). §.§ High Harmonic Generation in the beamline The gas density dynamics during the opening and closing of the cantilever is crosschecked on the SYLOS COMPACT beamline <cit.>. The main goal of this beamline is to achieve high energy isolated attosecond pulses as well as attosecond pulse trains in the sub 150 eV regime at high repetition rate, in order to perform XUV-pump XUV-probe nonlinear experiments <cit.>. To achieve this goal it uses long laser focusing (10 m) and up to four high pressure gas jets to generate XUV radiation. The generated XUV beam is separated from the driver IR by a 200 nm thick Al filter and the XUV signal is detected with a calibrated XUV photodiode. The incoming beam size is around 6 cm which was reduced with an iris to 3 cm in order to maximize the XUV yield - resulting around 12 mJ in the interaction region. These conditions enable the generation of around 30nJ XUV pulses, from argon gas, after XUV filter. The driving laser for this experiment was the SYLOS Experiment Alignment (SEA) laser <cit.> which operates at 10Hz and delivers 34mJ pulse energy with 11fs pulse duration at 825nm central wavelength. When optimizing the XUV energy it is crucial to get the timing of the valve opening and the opening duration correct for the individual gas jets, in order to maximize the gas density in each interaction region. Keeping the valve opening time constant and changing the delay between the laser trigger and the opening of the valve one can study the impact of the dynamics of the valve opening on the integrated yield of the generated XUV. In an ideal case there is a rise in the gas density as the valve opens, so does the XUV yield grow, then it reaches a maximum, when the valve opens the most. 
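The subsequent steps, inverse Abel transform with PyAbel's BASEX method followed by the Lorenz-Lorentz conversion to number density, could then look roughly as follows. This is a sketch under assumed CGS units (polarizability in cm^3) and default BASEX options; the exact scaling and options used for the published analysis are not specified in the paper.

```python
import numpy as np
import abel   # the PyAbel package referenced above

WAVELENGTH_CM = 633e-7      # He-Ne probe wavelength
ALPHA_AR_CM3  = 1.664e-24   # argon polarizability, as quoted above

def phase_map_to_density(delta_phi, pixel_size_cm):
    """Convert the unwrapped projected phase map (rows = z, columns = y, with
    the jet axis centred and vertical) into a radial number-density map in cm^-3."""
    # inverse Abel transform row by row; PyAbel works in pixel units
    theta = abel.Transform(delta_phi, direction='inverse', method='basex').transform
    theta /= pixel_size_cm                           # rad per cm instead of rad per pixel
    delta_n = WAVELENGTH_CM / (2.0 * np.pi) * theta  # refractive-index shift n - 1
    n = 1.0 + delta_n
    # Lorenz-Lorentz relation solved for the number density
    return 3.0 / (4.0 * np.pi * ALPHA_AR_CM3) * (n**2 - 1.0) / (n**2 + 2.0)
```

Since n is very close to unity, the last expression is essentially delta_n / (2 * pi * ALPHA_AR_CM3), which makes the sensitivity of the retrieval to the measured phase explicit.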
The gas density corresponding to the maximum valve opening should stay fairly constant during the opening time of the valve, then it should slowly drop to zero with the closing valve. This is the typical behavior at the disk shaped piezo valve. We show that the gas density does not stay constant during the opening time of the cantilever piezo valve, which introduces an extra factor to optimize during the high harmonic generation. § RESULTS §.§ Experimental observations In Fig. <ref>(a) we present the retrieved gas atomic number density distribution along the radial (please note that the radial r, x and y distributions are same due to the cylindrical symmetry of the gas flow) and the vertical direction, achieved by applying the protocol presented in the previous section. The presented density map is achieved at one specific delay after opening of the gas valve. Due to the shape of the nozzle, the expanding gas jet is rather confined into a cylinder along the vertical direction with a diameter of 500 μm (which is the opening size of the nozzle) and not expanding much in the radial direction. As expected the maximum value of the number density distribution is close to the exit of the valve and its value is around 1.2×10^19 particles per cm^3. The central z line-out n_d(r=0,z) presented in Fig. <ref>(b) shows the exponential decay of the number density distribution as the distance from the nozzle increases in the vertical direction, a typical feature of such nozzle geometry <cit.>. In Fig. <ref>(c), we plot the radial line-outs of n_d(r,z) for the five different z values marked in Fig. <ref>(b). In all the measurements, the opening window of the valve is set to 400 μs. In Fig. <ref> (a) we present five different snapshots of the dynamic evolution of the gas number density distribution in space, within the opening window of the gas valve. As discussed before, the gas density is exponentially dropping as the function of height from the nozzle exit. Therefore, in order to access the higher density region for the HHG experiments, for the given focusing configuration, one has to shoot as close to the exit of the nozzle as possible, without damaging the nozzle. Because of technical constrains, for our conditions, in the HHG experiments, we kept the center of the intense focused beam around 400 μm above the exit. The black dashed horizontal lines on the colormaps in Fig. <ref> (a) represent the laser propagation axis (z=400 μm) for HHG experiments. The red solid circles in Fig. <ref> (a) indicate the position of maximum atomic gas density along the laser propagation axis. These colormaps show that during the opening time of the valve the laser focus experiences large variations in the gas distribution. In order to closely follow the temporal evolution we obtain a large number of 2D spatial gas number density snapshots as a function of delay within the opening window of the valve. In Fig. <ref> (b) we plot the maximum number density seen by the center of the laser focal spot as a function of this delay. The vertical red lines in Fig. <ref> (b) indicate the temporal delay where the 2D number density snapshots (presented in Fig. <ref> (a)) were taken while the red circles and the black curve correspond to maximum gas number density in the focal spot. In an ideal case, during the scan of the opening window one would see the rise then the drop of the number of particles in the interaction region, while the maximum would show the right delay setting for optimal harmonic yield. 
However, in our case in Figure <ref> (b) we clearly identify several maxima as the function of the delay. Time dependent injection of the gas jet within the pulsed valve aperture time and a consequent gas density depletion has been observed by other research groups as well <cit.>. One can observe two important features on the graph in Fig. <ref> (b): on one hand the opening and closing of the valve is not sudden, but takes several tens of microseconds. On the other hand, during the opening window, the signal oscillates. The first minimum is a drop of approximately 40 percent in the number density, while the following drops are less intense. However the maxima reaches roughly the same level in each case. The observed facts are a clear sign of a damped oscillation of the cantilever piezo <cit.> which is well known from vibrations of cantilever beams <cit.>. The frequency of this oscillation is roughly 7.6 kHz determined by physical parameters of the piezzo and not influenced by the driving frequency <cit.>. The results show that the dynamics of the cantilever when using it as a valve introduces variations in the gas jet density profile emphasizing the importance of such metrology in any experiment that is sensitive to the gas atomic number density. In addition, since the gas expansion from such a valve is non-trivial, the knowledge of the exact distribution of gas density could improve the understanding and modelling of the HHG process, while correlating with experimental observations. Fig. <ref> (c) presents the normalized high harmonic yield (red rectangles) as the function of the delay of the arrival time of the interacting intense focused laser pulse with respect to the opening time (marked as zero delay which signifies a measurable number density above the detection threshold in Fig. <ref>(b)) of the valve. The relative high harmonic yield is experimentally measured with a thin film coated photodiode (Optodiode-AXUV100AL). The HHG yield data presented in Fig. <ref> (c) is normalized with respect to the maximum measured yield. The red vertical lines correspond to the delay times presented in Fig. <ref> (a). Multiple red rectangles at the same delay time (wherever available are presented in Fig. <ref> (c)) represent typical fluctuations in the measured yield. The measured HHG yield data in Fig. <ref> (c) follows remarkably well the number density variations presented in Fig. <ref> (b). For the HHG interaction, here we note the following points: * The laser pulse duration (∼11fs FWHM of the pulse intensity envelop) is negligible on the timescale of gas density evolution. This implies that the interacting pulse sees a frozen gas density distribution in the transverse plane. This ensures that the microscopic emitter distribution across the focal spot, within the pulse duration during HHG are not evolving. * The transit time of the intense laser pulse through the gasjet target (typically < 10ps in our case) is also negligible compared to the time scale of gas dynamics. This ensures that the measured temporal snap-shots of the spatial distribution of gas number density does not change during pulse propagation and thus can be utilised for macroscopic phase matching considerations in HHG. 
* Since the confocal parameter (∼49 cm) is significantly larger than the medium length (∼1 mm) under our experimental configuration, we are not limited by longitudinal variation of intensity and the contribution of Gouy phase, associated with the spatial focusing of the fundamental laser pulse, to phase matching is unimportant. * The gradient in the number density (presented in Fig. <ref>(b)), across the focal spot diameter of ≈300 μm can result in subtle effects like influencing the phase matching condition for the HHG. This can lead, for example, to distortions in the XUV wavefront impacting the focusability of the XUV beam <cit.>, which is beyond the scope of the present manuscript. The experimental results demonstrate that in case of HHG, it is essential to know the exact number density of the gas medium in a space time resolved manner. In addition, in order to optimize the harmonic yield one has to synchronize the arrival of the generating laser pulse with the opening time of the valve and introduce an appropriate relative time delay, depending on spatio-temporal the characteristics of the gas jet under utilization. At this point, we would like to emphasize that the monotonic nature of the HHG yield as a function of measured gas jet atomic density as observed experimentally and presented in Fig. <ref> (c) is not the case in general. In case of coherent light emission - like HHG - the generated photon flux scales quadratically with the number of emitters under ideal conditions <cit.>. The resemblance between the jet density (Fig. <ref> (b)) and the harmonic yield (Fig. <ref> (c)) highlights the importance of phase matching, as it manifests under our specific experimental conditions. A close investigation of the correlation between the number density data in Fig. <ref> (b) and the measurements in Fig. <ref> (c) reveal that within our interaction regime the HHG yield is almost proportional to the gas pressure. Phase matching is a complex dynamical <cit.> process and the relation between gas atomic number density and HHG yield is not straight forward in the short pulse regime. In order to investigate further we undertake numerical simulations in the following. §.§ Numerical validation using 3D Simulation Direct measurement of the gas number density distribution in the HHG interaction region is crucial not just from the optimization of the high harmonic source. Such metrology also enables one to feed experimental measurements into state of the art simulation tools that are often utilised to investigate the strong field interaction further. In this case the numerical simulations can be performed in a virtual experimental set up with initial parameters mimicking the real experimental conditions. This is important, if one needs to reconcile experimental observations with theoretical results and interpret the relevant physics in a correct manner. In our case, we undertake such an effort and use state of the art simulations where the gas jet metrology data is fed as input to simulate the harmonic yield. We note here, that the macroscopic effects like plasma generation, absorption and refraction during propagation play significant part in the phase matching process and hence cannot be neglected for calculation of the HHG yield. 
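To illustrate how a near-linear dependence of the flux on gas density can emerge despite the nominally quadratic scaling with the number of emitters, the standard absorption-limited estimate of the on-axis harmonic photon number can be evaluated with a few lines of code. This simple scaling model is not part of the 3D simulations described below, and the cross-section, coherence-length and medium-length values passed to it are placeholders rather than measured quantities.

```python
import numpy as np

def absorption_limited_yield(density, sigma_abs, l_coh, l_med, amplitude=1.0):
    """On-axis number of harmonic photons in the standard absorption-limited
    model (arbitrary units).  density in cm^-3, absorption cross-section
    sigma_abs in cm^2, coherence length l_coh and medium length l_med in cm;
    `amplitude` stands for the single-atom response."""
    l_abs = 1.0 / (density * sigma_abs)               # absorption length
    prefac = 4.0 * l_abs**2 / (1.0 + 4.0 * np.pi**2 * (l_abs / l_coh)**2)
    build_up = (1.0 + np.exp(-l_med / l_abs)
                - 2.0 * np.cos(np.pi * l_med / l_coh) * np.exp(-l_med / (2.0 * l_abs)))
    return (density * amplitude)**2 * prefac * build_up
```

Scanning `density` over the measured range shows the transition from quadratic growth (absorption negligible) towards the much weaker, close-to-linear dependence once the medium becomes optically thick for the harmonic in question.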
In order to investigate the experimental results further, we have performed a series of macroscopic simulations using a three-dimensional (3D) non-adiabatic model, described in detail elsewhere <cit.>.As a short summary, the simulation is performed in three self-consistent computational steps. Firstly, to analyze the propagation of the linearly polarised electric field of the fundamental laser pulse E(𝐫_L,t) in the generation volume, the nonlinear wave equationv of the form ∇^2E(𝐫_L,t)-1/c^2∂^2E(𝐫_L,t)/∂ t^2=ω_0^2/c^2(1-n_eff^2(𝐫_L,t))E(𝐫_L,t) , is solved <cit.>. In the previous equation c is the speed of light in vacuum, ω_0 is the central angular frequnecy of the laser field, and the suffix L in 𝐫_L indicates that this vector represents the coordinate in the frame with respect to the laser axis (in contrast to the r scalar coordinate described previously around the gas jet symmetry axis). The effective refractive index n_eff(𝐫_L,t) of the excited medium — depending on both space and time — can be ontained by <cit.> n_eff(𝐫_L,t)=n+n̅_2 I(𝐫_L,t)-ω_p^2(𝐫_L,t)/2ω_0^2, where I(𝐫_L,t)=1/2ϵ_0c|Ẽ(𝐫_L,t)|^2 is the intensity envelope of the laser field (note that in this expression the complex electric field Ẽ(𝐫_L,t) is present <cit.>), and ω_p(𝐫_L,t)=[n_e(𝐫_L,t)e^2/(mϵ_0)]^1/2 is the plasma frequency. The plasma frequnecy is well-known to be a function of the electron number density n_e(𝐫_L,t), and its expression also contains the electron charge e, the effective electron mass m, and the vacuum permittivity ϵ_0). Dispersion and absorption, along with the Kerr effect, are thus incorporated via the linear (n) and nonlinear (n̅_2) part of the refractive index. Absorption losses due to ionization <cit.> are also included, while plasma dispersion is estimated based on ionization values in the last term of n_eff(𝐫_L,t). The model assumes cylindrical symmetry about the laser propagation direction z_L (𝐫_L→ r_L,z_L) and uses paraxial approximation <cit.>. Applying a moving frame translating at the the speed of light, and by eliminating the time derivative using Fourier transform ℱ, equation. (<ref>) reduces to the explicit form, (∂^2/∂ r_L^2+1/r_L∂/∂ r_L)E(r_L,z_L,ω)-2iω/c∂ E(r_L,z_L,ω)/∂ z_L = ω^2/c^2ℱ[(1-n_eff^2(r_L,z_L,t))E(r_L,z_L,t)]. Equation. (<ref>) is solved using the Crank–Nicolson method in an iterative algorithm <cit.>. The ABCD-Hankel transform is used to define the laser field distribution in the input plane of the medium <cit.>. In step two, we calculate the single-atom response (dipole moment D(t)) based on the laser-pulse temporal shapes available on the complete (r_L, z_L) grid, by evaluating the Lewenstein integral <cit.>. The macroscopic nonlinear response P_nl(t), is then calculated by taking the depletion of the ground state into account <cit.> using P_nl(t)=n_aD(t)exp[-∫^t_-∞w(t')dt'], where w(t) is the ionization rate obtained from tabulated values calculated using the hybrid anti-symmetrized coupled channels approach (haCC) <cit.> showing a good agreement with the Ammosov-Delone-Krainov (ADK) model <cit.> and n_a is the atomic number density within the specific grid point (r_L, z_L) <cit.>. In the third step we calculate the propagation of the generated harmonic field E_h(𝐫_L,t) using the wave equation ∇^2E_h(𝐫_L,t)-1/c^2∂^2E_h(𝐫_L,t)/∂ t^2=μ_0d^2P_nl(t)/dt^2 , with μ_0 being the vacuum permeability. Equation. (<ref>) is solved in a manner similar to equation. (<ref>), but without an iterative scheme (since the source term is known). 
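As a small illustration of the first computational step, the time-dependent effective refractive index can be assembled from an intensity envelope and a tabulated ionization rate as sketched below. The SI unit choices, the trapezoidal time integration and the user-supplied rate function are assumptions for this sketch and are not taken from the beamline simulation code.

```python
import numpy as np

E_CHARGE = 1.602e-19   # C
E_MASS   = 9.109e-31   # kg
EPS0     = 8.854e-12   # F m^-1

def effective_index(t, intensity, rate_of_intensity, n_atom, n_linear, n2_kerr, omega0):
    """Time-dependent effective refractive index n_eff = n + n2*I - omega_p^2/(2*omega0^2).

    t, intensity:       time axis (s) and laser intensity envelope (W m^-2) on that axis
    rate_of_intensity:  callable returning the ionization rate w (s^-1) for a given
                        intensity, e.g. an interpolation of tabulated ADK/haCC values
    n_atom:             neutral atom number density (m^-3)
    n_linear, n2_kerr:  linear and nonlinear parts of the refractive index
    omega0:             central angular frequency of the laser (rad s^-1)
    """
    w = rate_of_intensity(intensity)                      # instantaneous ionization rate
    # cumulative ionization integral (trapezoidal rule), giving the ground-state depletion
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(t))))
    n_e = n_atom * (1.0 - np.exp(-cum))                   # free-electron density
    omega_p_sq = n_e * E_CHARGE**2 / (E_MASS * EPS0)      # squared plasma frequency
    return n_linear + n2_kerr * intensity - omega_p_sq / (2.0 * omega0**2)
```

In the full model this quantity enters the nonlinear wave equation, which is then advanced with the Crank-Nicolson scheme described above.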
The amplitude decrease and phase shift of the harmonic field - caused by absorption and dispersion, respectively - are incorporated at each step when solving equation (<ref>) by taking into account the effect of the complex refractive index on wave propagation. The real and imaginary parts of the refractive index in the XUV regime are taken from tabulated values of atomic scattering factors <cit.>. The simulation method described above assumes radial symmetry around the laser propagation axis. For the laser spatio-temporal profile we use the measured focal spot distribution and the experimental laser pulse duration in order to mimic the real experimental conditions. For the gas jet atomic number density profile we use the measured number density profile along the axis of laser propagation (peak densities as shown in Fig. <ref> (b)). Thus, within our numerical simulations, the influence of the gas density gradient across the laser focal spot (along the symmetry axis of the gas jet as presented in Fig. <ref>(b)) is lumped into an average value. Figure <ref> (c) presents the simulated harmonic yield (black hollow circles) as a function of the delay from the opening of the valve. The gas jet pressure for the simulation was calculated from the number density variation in Figure <ref> (b). Both the HHG measurements and the simulations show a remarkable resemblance to the jet density variation measured with the interferometric technique. The simulations also revealed that, under the circumstances that describe these experiments, transient phase matching <cit.> limits efficient generation to the first half of the short laser pulse. At the same time, due to minimal reshaping of the pulsed laser beam, the phase matching conditions are spatially homogeneous in the whole interaction volume. This allows us to apply a simple model <cit.> to explain the variation of the observable harmonic flux in the absorbing medium. The analysis confirmed that, with the coherence lengths and absorption lengths involved, the harmonic flux changes close to linearly with the change of atomic number density. § CONCLUSION On the one hand, an interferometric gas density characterization was developed for underdense gas jets produced by a high-frequency (up to 5 kHz) cantilever piezo valve. On the other hand, we show that the cantilever valve has its own characteristic dynamics while opening, resulting in an oscillation of the gas density as a function of time. Using HHG from such a gas jet target, we observe a remarkable experimental correlation between the gas density and the HHG yield. Our results have been corroborated by sophisticated simulations that self-consistently include both microscopic HHG and macroscopic propagation effects under conditions mimicking the real experimental scenario. Our results establish the feasibility of utilizing cantilever-based high repetition rate gas valves for high harmonic generation, emphasizing the importance of precise timing control in order to access the proper gas density regime. They also show that appropriate time- and space-resolved characterization and monitoring of such gas valves is an important aspect of their application, and that reproducible performance is easily achieved by properly managing the synchronization of the gas jet with respect to the arrival time of the laser.
The results are also important to a diverse range of studies that can benefit from high repetition rate gas jets, where the signature effects of the phenomena depend sensitively upon the precise gas density profile, such as molecular or atomic quantum path interferometry <cit.>, ion spectroscopy of dilute plasmas <cit.>, spatio-temporal <cit.> and, equivalently, spatio-spectral <cit.> control of attosecond pulses, and the design of gas-based extreme-ultraviolet refractive optics <cit.>, to name a few. § ACKNOWLEDGMENTS ELI ALPS is supported by the European Union and co-financed by the European Regional Development Fund (ERDF) (GINOP-2.3.6-15-2015-00001). This project has received funding from the European Union Framework Programme for Research and Innovation Horizon 2020 under IMPULSE grant agreement No 871161. S.K. acknowledges Project No. 2020-1.2.4-TÉT-IPARI-2021-00018, which has been implemented with support provided by the National Research, Development and Innovation Office of Hungary, and financed under the 2020-1.2.4-TET-IPARI-CN funding scheme.
http://arxiv.org/abs/2306.04372v2
20230607121154
Thermal expansion of atmosphere and stability of vertically stratified fluids
[ "T. D. Kaladze", "A. P. Misra" ]
physics.ao-ph
[ "physics.ao-ph", "physics.flu-dyn", "physics.geo-ph" ]
1 .001 [email protected] I. Vekua Institute of Applied Mathematics and E. Andronikashvili Institute of Physics, Tbilisi State University, Georgia [email protected]; [email protected] Department of Mathematics, Siksha Bhavana, Visva-Bharati University, Santiniketan-731 235, India The influence of thermal expansion of the Earth's atmosphere on the stability of vertical stratification of fluid density and temperature is studied. We show that such an influence leads to the instability of incompressible flows. Modified by the thermal expansion coefficient, a new expression for the Brunt-Väisälä frequency is derived, and a critical value of the thermal expansion coefficient for which the instability occurs is revealed. Thermal expansion of atmosphere and stability of vertically stratified fluids A.P. Misra July 31, 2023 ============================================================================= § INTRODUCTION Climate change is vitally connected to the warming processes (such as convection, in which the heat energy gets transferred by the movement of neutral fluids from one place to another) in the Earth's atmosphere. In addition, numerous other processes, including meteorological and auroral activities and a solar eclipse, can cause equilibrium density and pressure inhomogeneities, and their gradients. As a result, the atmospheric fluids under gravity become stratified, and in the interior, the small-scale density and pressure fluctuations can produce internal gravity waves (IGWs). The latter are thus of interest in the general circulation of atmospheric stratified fluids <cit.>. So, the characteristics of IGWs become the primary investigation of many scientists. Not only do these waves play crucial roles in particle transport and momentum and energy transfers, as they propagate vertically from the Earth's surface to the upper atmosphere, but these are also relevant in large-scale zonal flows <cit.>, formation of solitary vortices <cit.>, and for the emergence of chaos and turbulence <cit.>. In the generation of IGWs, buoyancy plays the role of restoring force that opposes vertical displacements of fluid particles under gravity, and they are associated with the equilibrium density and temperature inhomogeneities. Typically, the frequency of IGWs ranges in between the Coriolis parameter and the Brunt-Väisälä frequency, i.e., 10^-4 s^-1<ω<1.7×10^-2 s^-1 and their amplitudes are relatively small in the tropospheric and stratospheric layers <cit.>. The linear and nonlinear theories of IGWs have been studied by several authors owing to their fundamental importance in understanding the Earth's atmosphere <cit.>. Typically, the dynamics of stratified fluids are more complex than homogeneous fluids. When the stratified fluids are stable, they can support the existence and propagation of various kinds of gravity waves, including IGWs. However, the stratified fluids may become unstable due to the density variations in different layers of the atmosphere. In this situation, the corresponding Brunt-Väisälä frequency may become imaginary due to a negative density gradient, i.e., when the atmospheric fluid density decreases with height <cit.>. In addition, if the temperature variations (spatial) occur due to differential heating and hence the density variations owing to thermal expansion, there may be competitive roles between the temperature and density gradients, and the relevant fluid dynamics becomes more interesting to study. 
In this letter, we study the influence of thermal expansion on the stability of vertical stratification of atmospheric fluids (in the regions of the troposphere and stratosphere). We show that the Brunt-Väisälä frequency N(z) gets tightly connected to IGWs, and it stimulates their horizontal propagation. In the case when N^2(z)>0, the background vertical stratification is said to be stable, but when N^2(z)<0, the stratification becomes unstable. Also, we discuss the behaviors of N(z) with the effects of the thermal expansion coefficient. § BASIC EQUATIONS AND ANALYSIS WITH OBSERVATIONAL DATA We consider the linear propagation of IGWs in incompressible stratified atmospheric neutral fluids. As a starting point, we consider the following momentum balance and the continuity equations for incompressible neutral fluids. ∂ u/∂ t+( u·∇) u=-1/ρ∇ p+ g, dρ/dt≡∂ρ/∂ t+( u·∇)ρ=0,i.e., ∇· u=0, where u, ρ, and p are the neutral fluid velocity, mass density, and the pressure respectively, and g=(0,0,-g) is the constant gravitational acceleration directed vertically downward. In equilibrium without the fluid flow, we have from Eq. (<ref>) ∂ p_0/∂ z=-ρ_0 g. As said, differential heating causes spatial variations of temperature in the fluid, which in turn produces the density variation due to the thermal expansion. Thus, if β (K^-1) is the volumetric thermal expansion coefficient of the heated incompressible fluid, the equation of state can be written as <cit.> ρ=ρ_0(z)(1-β T), where ρ_0 is the fluid mass density at temperature T=0. Considering the data for the “U.S. Standard Atmosphere Air Properties" <cit.>, the density and temperature variations of the atmosphere with the height (stratification) are presented in Table <ref> and the variations are graphically exhibited in Fig. <ref>. In Table <ref>, the temperature and density gradients are obtained using the central difference formula. From Table <ref> and Fig. <ref>, it is evident that the fluid density decreases with the height, i.e., dρ_0/dz<0 in the whole region of 0<z<50 (km). However, the temperature decreases with the height, i.e., dT_0/dz<0 in the interval 0<z<15 (km), but the same increases in the other interval, i.e., dT_0/dz>0 in 15<z<50 (km). So, we are interested mainly in the altitudes of the troposphere (ranging from 0 to 15 km) and stratosphere (ranging from 15 to 50 km) and consider the vertical distribution of Brunt-Väisälä in the neutral fluid atmosphere. In what follows, we also show the dependence of the thermal expansion coefficient (β) on the temperature T_0 in Fig. <ref>. The data used are as in Ref. <cit.>. It is clear that the expansion coefficient falls off quickly with increasing values of the temperature and that the maximum temperature T_0≈288.15 K occurring at the Earth's surface corresponds to the thermal expansion coefficient β≈0.0035. Later, we will show that such value of β is minimum (corresponding to the maximum temperature T_0≈288.15 K) above which the Brunt-Väisälä frequency becomes negative (N^2<0) and hence the instability of stratified fluid density perturbations. It is well known that the density variations due to internal gravity waves do not exceed 3-4%. So, the ratio between the density perturbation and the unperturbed density is small, i.e., ρ_1/ρ_0≈(1-4)×10^-2. In this case, the momentum equation (<ref>) in the Boussinesq approximation reduces to ∂ u/∂ t+( u·∇) u=-1/ρ_0∇ p_1-ρ_1/ρ_0 gẑ, where the suffix 1 in ρ and p denotes perturbation and ẑ is the unit vector along the z-axis. 
Next, using the relation (<ref>), Eq. (<ref>) reduces to <cit.> ∂ u/∂ t+( u·∇) u=-1/ρ_0∇ p_1+ gβ T_1ẑ, where T_1 denotes the temperature perturbation. Furthermore, we require the following heat equation for the imcompressible fluid in absence of any heat source <cit.>. ∂ T/∂ t+( u·∇)T=χ∇^2 T, where χ is the coefficient of the thermal diffusivity and is equal to the ratio between the therml conductivity κ (W/mK) and the volumetric heat capacity ρ C_p (J/m^3K). Here, C_p is the specific heat capacity (J/kg K) and the mass density ρ is in the unit of kg/m^3. Representing the total temperature as the sum of its equilibrium and perturbed parts, i.e., T=T_0(z)+T_1, and assuming that α≡ dT_0/dz as more or less a constant equilibrium gradient of temperature along the z-axis, i.e., ∇^2T=∇^2(T_0+T_1)=∇^2T_1, from Eq. (<ref>) we obtain <cit.> ∂ T_1/∂ t+( u·∇)T_1=χ∇^2 T_1-α u_z, where u_z is the component of u along the z-axis and α (>0) represents the action of buoyancy force. Equations (<ref>) and (<ref>) with the conditions ∇· u=0,  dρ/dt=0, are the desired set of equations for the evolution of the temperature and density perturbations of stratified incompressible fluids. To elucidate the role of the temperature gradient (vertical), we consider the linear approximation, i.e., we consider the following simple model equations and remove the suffix 1 in the perturbed variables, for simplicity. Separating the perpendicular and vertical (parallel to the gravity) components of Eq. (<ref>), we obtain ∂ u_⊥/∂ t+1/ρ_0∇_⊥ p=0, ∂ u_z/∂ t+1/ρ_0∂ p/∂ z-gβ T=0. Also, the equation ∇· u=0 gives ∇_⊥· u=-∂ u_z/∂ z. Taking the gradient (∇) of Eq. (<ref>), noting that ∇_⊥^2=Δ_⊥=∂^2/∂ x^2+ ∂^2/∂ y^2, and using Eq. (<ref>), we get ∂^2u_z/∂ t∂ z=1/ρ_0∇_⊥ p. Next, we operate ∂Δ_⊥/∂ t on Eq. (<ref>) to get ∂^2/∂ t^2Δ_⊥ u_z+1/ρ_0∂^2/∂ t∂ z∇_⊥ p-gβ∂/∂ tΔ_⊥ T=0. Furthermore, using Eq. (<ref>) and noting that ρ_0=ρ_0(z), from Eq. (<ref>) we have ∂^2/∂ t^2(Δ u_z+1/ρ_0dρ_0/dz∂ u_z/∂ z)-gβ∂/∂ tΔ_⊥ T=0, where the Laplacian operator, Δ=Δ_⊥+∂^2/∂ z^2. Also, operating Eq. (<ref>) with Δ_⊥, we get ∂/∂ tΔ_⊥ T=χΔ_⊥Δ T-αΔ_⊥ u_z. Combining Eqs. (<ref>) and (<ref>) yields ∂^2/∂ t^2(Δ u_z+1/ρ_0dρ_0/dz∂ u_z/∂ z)-gβχΔ_⊥Δ T+gαβΔ_⊥ u_z=0. Using the thermal expansion relation (<ref>), we recast the density conservation equation (<ref>) as (1-β T_0-β T)dρ_0/d t-ρ_0βd/dt(T_0+T)=0. By means of the heat equation (<ref>), Eq. (<ref>) gives, in the linear approximation, the following. (1-β T_0)1/ρ_0dρ_0/d zu_z =βχΔ T. Finally, from Eqs. (<ref>) and (<ref>), we obtain ∂^2/∂ t^2( Δ u_z+1/ρ_0dρ_0/d z∂ u_z/∂ z)+N^2Δ_⊥ u_z=0, where N^2 is the squared Brunt-Väisälä frequency, given by, N^2(z)=g[(β T_0-1)1/ρ_0dρ_0/d z +βdT_0/d z]. Equation (<ref>) represents a differential equation of only one unknown variable u_z with the frequency N^2 being modified by the temperature stratification (proportional to β). In absence of the latter, one recovers the known Brunt-Väisälä frequency <cit.>. Further simplification of Eq. (<ref>) can be made by neglecting the second term in the parentheses, compared to the first one. Thus, the dynamics of internal gravity waves in stratified fluids can be described by the following equation. ∂^2/∂ t^2Δ u_z+N^2Δ_⊥ u_z=0. To elucidate the influence of the thermal expansion parameter β on the stability of perturbations in vertical stratified fluids, from Eq. 
(<ref>) we find that, N^2 becomes negative when the thermal expansion coefficient β satisfies the inequality: β T_0(L_ρ_0^-1+L_T_0^-1)<L_ρ_0^-1, where L_ρ_0^-1≡(1/ρ_0)|dρ_0/dz| and L_T_0^-1≡(1/T_0)|dT_0/dz|, respectively, denote the inverses of the length scales of density and temperature inhomogeneities. Since in the altitudes of troposphere and stratosphere [0<z<50 (km)], dρ_0/dz<0 (cf. Table <ref>), the inequality (<ref>) reduces to β T_0(1/|L_ρ_0|-1/L_T_0)> 1/|L_ρ_0|. From Table <ref>, it is also evident that |L_T_0^-1|<|L_ρ_0^-1|. Thus, from Eq. (<ref>), we get the following approximate condition of instability in vertical stratified fluids. β T_0>1. From Table <ref>, we find that the maximum value of the temperature is at the Earth's surface (T_0≈288.15 K). So, the instability condition [Eq. (<ref>)] holds for a minimum value of β: β_min≈0.0035. The latter well agrees with the observational data (See the text arrow in Fig. <ref>). The dependence of the squared Brunt-Väisälä frequency (N^2) on the thermal expansion coefficient (β) is shown in Fig. <ref>. It is seen that the instability of atmospheric stratification occurs with an increase of the thermal expansion coefficient beyond the critical value (≈0.0035). The Brunt-Väisälä frequency becomes completely negative for β≳0.005. In the latter, it is also noted that the magnitude of N^2 initially increases in the interval 0≲ z≲10^3 (m), and then decreases in 10^3≲ z≲3×10^4 (m). In the rest of the interval, 3×10^4≲ z≲5×10^4 (m), its magnitude again increases. Such behaviors of N^2 may be due to the variation of the relative magnitudes of the length scales corresponding to the fluid density and temperature as the height z increases from z=0 to z=50 km. It is interesting to note that when the value of β is lower than β=0.005, N^2 can be negative, zero, or positive depending on the altitude z. For example, when β=0.003, N^2<0 in 0≲ z≲2×10^3 (m), N^2≈0 at z=3×10^3 (m), and N^2>0 in 3×10^3≲ z≲50×10^3 (m). Also, when β=0.004, N^2<0 in 0≲ z≲9×10^3 (m), N^2≈0 at z=10×10^3 (m), and N^2>0 in 15×10^3≲ z≲30×10^3 (m). Again, N^2≈0 at z=40×10^3 (m), and N^2<0 at z=50×10^3 (m). Physically, when N^2>0, Eq. (<ref>) admits oscillating solutions for the velocity u_z with frequency N, i.e., if a parcel of stratified neutral fluids moves upward and N^2>0, it will oscillate in between the heights where the fluid density of the parcel matches with the surrounding fluids. In this case, the fluid is said to be stable. However, when N^2=0, the parcel, once pushed up, will not move any further. On the other hand, when N^2<0, i.e., the squared Brunt-Väisälä frequency becomes imaginary, the parcel will move up and up until N^2 becomes zero or positive again in the atmosphere. Typically, such a situation leads to convection, and hence the criterion for the stability of stratified fluids in the atmosphere against convection is that N^2>0. § CONCLUSION We have studied the influence of the thermal expansion of the Earth's atmosphere on the stability of vertical stratification of density and temperature perturbations. We have shown that such an influence can lead to instability in stratified incompressible fluids. Modified by the thermal expansion coefficient, the Brunt-Väisälä frequency is obtained, and a critical value of the expansion coefficient for which the instability occurs is revealed. 
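As a compact numerical recap of the criterion discussed above, the sign of N^2 and the approximate threshold β_min ≈ 1/T_0 can be evaluated directly from the expression for N^2(z); the near-surface gradient values used below are illustrative stand-ins for the tabulated data:

import numpy as np

g = 9.81  # m/s^2

def N_squared(beta, T0, dT0dz, rho0, drho0dz):
    """N^2 = g * [ (beta*T0 - 1) * (1/rho0) * drho0/dz + beta * dT0/dz ]."""
    return g * ((beta * T0 - 1.0) * drho0dz / rho0 + beta * dT0dz)

# Illustrative near-surface values (U.S. Standard Atmosphere-like):
T0, rho0       = 288.15, 1.225      # K, kg/m^3
dT0dz, drho0dz = -6.5e-3, -1.2e-4   # K/m, kg/m^4

for beta in (0.002, 0.003, 0.004, 0.005):        # 1/K
    N2 = N_squared(beta, T0, dT0dz, rho0, drho0dz)
    print(f"beta = {beta:.3f}:  N^2 = {N2:+.2e} s^-2 ->",
          "unstable" if N2 < 0 else "stable")

# Approximate criterion beta*T0 > 1 (temperature-gradient term neglected):
print("beta_min ~ 1/T0 =", round(1.0 / T0, 4))   # ~0.0035 K^-1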
To conclude, the instability of vertical stratification reported here could be relevant for the initiation of large-scale instability (on scales that may exceed those of any external forcing or turbulence) as well as for the generation of large-scale vortices in the atmosphere <cit.>, through which particle momentum and energy transfer take place. In the fluid model, we have neglected dissipative effects, such as those associated with fluid-particle collisions and the kinematic viscosity. These effects will contribute to the evolution equation for internal gravity waves, modify their dispersion properties, and may eventually reduce or prevent the instability of stratified fluids reported here. However, the influence of these forces, together with the effects of the temperature and density gradients on the propagation characteristics of internal gravity waves, is beyond the scope of the present work and is left for a future study. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
http://arxiv.org/abs/2306.11808v1
20230620180523
Higgs Footprints of Hefty ALPs
[ "Anisha", "Supratim Das Bakshi", "Christoph Englert", "Panagiotis Stylianou" ]
hep-ph
[ "hep-ph", "hep-ex" ]
DESY-23-082 We discuss axion-like particles (ALPs) within the framework of Higgs Effective Field Theory, targeting instances of close alignment of ALP physics with a custodial singlet character of the Higgs boson. We tension constraints arising from new contributions to Higgs boson decays against limits from high-momentum transfer processes that become under increasing control at the LHC. Going beyond leading-order approximations, we highlight the importance of multi-top and multi-Higgs production for the pursuit of searches for physics beyond the Standard Model extensions. [email protected] School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, United Kingdom [email protected] CAFPE and Departamento de Física Teórica y del Cosmos, Universidad de Granada, Campus de Fuentenueva, E–18071 Granada, Spain [email protected] School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, United Kingdom [email protected] Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany Higgs Footprints of Hefty ALPs Panagiotis Stylianou Received XXX; accepted XXX ============================== § INTRODUCTION Searches for new interactions beyond the Standard Model of Particle Physics have, so far, been unsuccessful. This is puzzling as the Standard Model contains a plethora of flaws that are expected to be addressed by a more comprehensive theory of microscopic interactions. A reconciliation of these flaws can have direct phenomenological consequences for physics at or below the weak scale v≃ 246 GeV. This is particularly highlighted by fine-tuning problems related to the Higgs mass or the neutron electric dipole moment, both of which take small values due to cancellations which are not protected by symmetries in the Standard Model (SM). Dynamical solutions to these issues have a long history, leading to new interactions and states around the TeV scale to address Higgs naturalness, or relaxing into and CP-conserving QCD vacuum via a Peccei-Quinn-like mechanism <cit.>. Often such approaches yield an additional light pseudo-Nambu Goldstone field, in the guise of a composite Higgs boson or the axion <cit.>. The search for a wider class of the latter states referred to as axion-like particles (ALPs) bridges different areas of high energy physics. Efforts to detect ALPs across different mass and coupling regimes have shaped the current BSM programme in many experimental realms (see e.g. <cit.> for recent reviews). In particular, at the Large Hadron Collider (LHC), ALP interactions have been discussed in relation to their tell-tale signatures arising from ∼ FF̃ coupling structures <cit.>, top quarks <cit.>, emerging signatures <cit.>, flavour physics <cit.>, electroweak precision constraints <cit.>, Higgs decays <cit.>, and mixing <cit.>. The methods of effective field theory <cit.> naturally embed ALP-related field theories into a broader framework of a more modern perspective on renormalisability <cit.>. Experimental searches for these states have been carried out using a variety of techniques, including collider searches, precision measurements of atomic and nuclear transitions (e.g. ACME <cit.> and nEDM <cit.>), and searches from astrophysical events <cit.>, over a wide range of ALP mass <cit.>. In particular for the ALP mass range M_𝒜 ∈ [6, 100] GeV, the most stringent exclusion limits for ALPs are derived from ultra-peripheral lead nuclei collision data <cit.>. 
These limits are from exclusive di-photon searches, and define SM–ALPs interactions via electromagnetic interactions (∼ F F). [When the ALP mass M_𝒜 < 2 m_e, only the di-photon channel is the allowed decay process via SM particles. In same manner, with greater ALP mass, the decay modes to other leptons, quarks (jets), gauge bosons open up as well.] The ATLAS limits <cit.> on the ALP–photon cross sections when put in terms of ALP–photon couplings is found in the range g_𝒜γ∈ [0.05,1] TeV^-1 <cit.>. However, in general, the ALP–SM interactions can be defined via gauge bosons, fermions, and scalars; although, its decays will depend on its mass. The limits on ALP couplings to the SM fields (except photons) are less stringent. The exotic decays of the SM Higgs and Z-bosons are promising channels for ALP searches (particularly benefiting from the high-luminosity run of the LHC), e.g., with the decay modes h → Z 𝒜 <cit.>. It is the latter perspective that we adopt in this note to focus on ALP interactions with the Higgs boson, also beyond leading order. Adopting the methodology of Higgs Effective Theory (HEFT), we can isolate particular interactions of the ALP state and trace their importance (and thus the potential for constraints) to representative collider processes that navigate between the low energy precision and the large momentum transfer regions accessible at the LHC. If the interactions of ALPs and Higgs particles is predominantly related to a custodial singlet realisation of the Higgs boson, these areas might well be the first phenomenological environments where BSM could be unveiled as pointed out in, e.g., Ref. <cit.>. In parallel, our results demonstrate the further importance of multi-top and multi-Higgs final states as promising candidates for the discovery of new physics. With the LHC experiments closing in on both Higgs pair <cit.> and four-top production in the SM <cit.>, such searches becoming increasingly interesting for our better understanding of the BSM landscape. This work is organised as follows: In Sec. <ref>, we review the ALP-HEFT framework that we use in this study to make this work self-contained (a comprehensive discussion is presented in <cit.>). In Sec. <ref>, we focus on the decay phenomenology of the Higgs boson in the presence of ALP interactions before we turn to discuss a priori sensitive processes that can provide additional constraints due to their multi-scale nature and kinematic coverage. Specifically, in Sec. <ref> we analyse ALP corrections to Higgs propagation as accessible in four-top final states <cit.>, which informs corrections to multi-Higgs production. We conclude in Sec. <ref>. § ALP CHIRAL HEFT LAGRANGIAN The leading order ALP interactions with SM fields in the framework of chiral (non-linear) electroweak theory are written as ℒ_LO=ℒ^HEFT_LO+ ℒ^ALP_LO . ℒ^HEFT_LO is the chiral dimension-2 HEFT Lagrangian <cit.>. In this framework, the SM Higgs (H) is a singlet field and the Goldstone bosons π^a are parametrised non-linearly using the matrix U U(π^a) = exp(i π^aτ^a/v) , with τ^a as the Pauli matrices with a= 1,2,3 and v≃ 246 GeV. The U matrix transforms under L∈ SU(2)_L, U(1)_Y⊂ SU(2)_R ∋ R as U→ L U R^† and is expanded as U(π^a) = 1_2 + i π^a/vτ^a - 2G^+G^- + G^0 G^0/2 v^21_2 + … , where G^± = ( π^2± i π^1 )/√(2) and G^0 = -π^3. The dynamics of the gauge bosons W^a_μ and B_μ are determined by the usual SU(2)_L× U(1)_Y gauge symmetry. 
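As a sanity check of the quoted expansion of U(π^a), the following sketch expands exp(iπ^aτ^a/v) to second order symbolically and compares it with the Goldstone-field form given above:

import sympy as sp

pi1, pi2, pi3, v = sp.symbols('pi1 pi2 pi3 v', real=True, positive=True)
I2 = sp.eye(2)

# Pauli matrices tau^a
tau = [sp.Matrix([[0, 1], [1, 0]]),
       sp.Matrix([[0, -sp.I], [sp.I, 0]]),
       sp.Matrix([[1, 0], [0, -1]])]

M = sp.I * (pi1 * tau[0] + pi2 * tau[1] + pi3 * tau[2]) / v

# U = exp(M) truncated at second order in the Goldstone fields
U_second_order = I2 + M + M * M / 2

# Quoted form: 1 + i pi^a tau^a / v - (2 G+ G- + G0 G0) / (2 v^2) * 1 + ...
Gp = (pi2 + sp.I * pi1) / sp.sqrt(2)
Gm = (pi2 - sp.I * pi1) / sp.sqrt(2)
G0 = -pi3
U_quoted = I2 + M - (2 * Gp * Gm + G0 * G0) / (2 * v**2) * I2

print((U_second_order - U_quoted).expand())   # zero matrix: the two forms agree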
Weak gauging of SU(2)_L× U(1)_Y is achieved through the standard covariant derivative D_μU = ∂_μ U + i g_W (W^a_μτ^a /2) U -i g' U B_μτ^3/2 . The gauge fields in the physical (mass and electromagnetic U(1)_em) basis are are related to the gauge basis via the Weinberg angle s_W=sinθ_W,c_W=cosθ_W W^±_μ = 1/√(2)(W^1_μ∓ W^2_μ) , [ Z_μ; A_μ ] = [ c_W s_W; -s_W c_W ][ W^3_μ; B_μ ] . The leading order HEFT Lagrangian relevant for our discussion is then given by ℒ^HEFT_LO = - 1/4 W^a_μν W^aμν - 1/4 B_μν B^μν + L_ferm+ ℒ_Yuk + v^2/4ℱ_H Tr[D_μ U^† D^μ U] + 1/2∂_μ H ∂^μ H - V(H) . The interactions of the singlet Higgs field with gauge and Goldstone bosons are parametrised by the flare function ℱ_H given as ℱ_H = (1+ 2(1+ζ_1)H/v + (1+ζ_2) (H/v)^2 + ... ) . The couplings ζ_i denote the independent parameters that determine the leading-order interactions of the Higgs boson with the gauge fields. L_ferm parametrises the fermion-gauge boson interactions, which we take SM-like in the following. V(H) is the Higgs potential, which we relate to the SM expectation V(H)= 1/2 M_H^2 H^2 + κ_3 H^3+ κ_4 H^4 . with κ_3≃ 32 GeV, κ_4≃ 0.03 in the SM. In this work, we consider ALP interactions that particularly probe the singlet character of the Higgs boson as a parametrised by the HEFT Lagrangian. The interactions are given by ℒ^ALP_LO = 1/2∂_μ𝒜∂^μ𝒜 -1/2M_𝒜^2𝒜^2 + a_2D(i v^2 Tr[ U τ^3 U^† 𝒱_μ] ∂_μ𝒜/f_Aℱ_2D) with ℱ_2D= (1+ 2ζ_12DH/v + ζ_22D(H/v)^2 + ... ) , and 𝒱_μ= (D_μU)U^† . In Eq. (<ref>), f_𝒜 denotes the scale linked with the ALP interactions. The interactions specified by Eq. (<ref>) are the leading order chiral interactions of the ALP field with SM states. These couplings specifically probe the custodial singlet nature of the Higgs boson <cit.>. Therefore, the phenomenology of these interactions provides relevant insights into the mechanism of electroweak symmetry breaking and its relation to axion-like states. The radiative imprints of these interactions on SM correlations are then captured by the chiral dimension-4 interactions contributing to ℒ^HEFT_LO when HEFT parameters coincide with the SM expectation.[All interactions detailed above are implemented using the FeynRules package <cit.>.] The aim of our analysis is to clarify the phenomenological reach to the couplings involved in Eq. (<ref>) from two different angles. Firstly, these interactions are clear indicators of a singlet character of the Higgs boson in HEFT. Secondly, the interactions ∼ζ_12D will introduce modifications to the Higgs boson propagation and Higgs decay in HEFT, ζ_22D will imply modifications to the Higgs pair production rate. Although the ALP might be too light to be directly accessible at collider experiments such as the LHC, its virtual imprint through specific predictions between the correlations of four-top and Higgs pair production could reveal its presence. We will turn to the expected constraints in the next section. Throughout, we will identify the HEFT parameters with their corresponding tree-level SM limit except for the deviations introduced by the ALP, which we also detail below. We will focus on the interactions that are generated at order a_2Dζ_12D etc.; fits against the ALP-less HEFT (or the SM as a particular HEFT parameter choice) should be sensitive to these contributions when data is consistent with the latter expectation. To reflect this we will therefore also assume that HEFT operators coincide with their SM expectation. 
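For orientation, the SM reference values quoted for the Higgs potential and the SM point of the flare function can be checked numerically. The tree-level relations κ_3 = M_H^2/(2v) and κ_4 = M_H^2/(8v^2) used below are our assumption of the standard SM expressions; they are not spelled out in the text:

# Numerical cross-check of the quoted SM reference values kappa_3 ~ 32 GeV and
# kappa_4 ~ 0.03, assuming the standard tree-level SM relations
# kappa_3 = M_H^2 / (2 v) and kappa_4 = M_H^2 / (8 v^2) (our assumption).
MH = 125.0    # GeV
v  = 246.0    # GeV

kappa3 = MH**2 / (2.0 * v)       # ~31.8 GeV
kappa4 = MH**2 / (8.0 * v**2)    # ~0.032
print(f"kappa_3 = {kappa3:.1f} GeV, kappa_4 = {kappa4:.3f}")

# Flare function truncated at O(H^2), evaluated at the SM point zeta_1 = zeta_2 = 0
def F_H(H, zeta1=0.0, zeta2=0.0):
    return 1.0 + 2.0 * (1.0 + zeta1) * H / v + (1.0 + zeta2) * (H / v)**2

print("F_H(H = v) at the SM point:", F_H(v))   # = 4.0 for the truncated series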
Specifically this means that we will choose vanishing HEFT parameters arising at chiral dimension-4. Departures from the SM correlations are then directly related to (radiative) presence of the ALP. § DIRECT CONSTRAINTS FROM HIGGS DECAYS The interactions of an ALP with a Higgs boson via the a_2Dζ_12D coupling of Eq. (<ref>) is tree-level mediated. The exotic decay of the Higgs boson via H →𝒜 Z at leading order is given by 2.7cm < g r a p h i c s > =- 2i e/c_W s_W v / f_𝒜 a_2 D ζ _12 D q^μ(H) , with q^μ(H) denoting the four-momentum carried by the Higgs leg. When kinematically accessible, the decay width of the Higgs boson receives a non-SM contribution Γ( H →𝒜 Z) = v^2 a_2D^2 ζ_12D^2/274 s_W^2 c_W^2 f_𝒜^2 M_H^3 M_Z^2 (M_𝒜^4-2 M_𝒜^2 (M_H^2+M_Z^2)+(M_H^2-M_Z^2)^2)^3/2. Assuming this two-body process as the most dominating BSM decay involving the ALP, the SM Higgs boson signal strengths get uniformly modified μ_SM,A = Γ(H)_SM/Γ( H →𝒜 Z) + Γ(H)_SM , with Γ(H)_SM≃ 4 MeV as the total Higgs boson decay width in the SM <cit.>. To constrain this BSM decay, we use the well constrained and hence representative signal strength for H →γγ. This has been measured μ_γγ=1.04^+0.1_-0.09 <cit.> for the representative ATLAS Run 2 dataset of 139 fb^-1. For the on-shell decay of H→𝒜 Z, the maximum value of ALP mass allowed kinematically is ≃ 34  GeV with M_H = 125  GeV and M_Z = 91.18  GeV. For heavier ALP masses, the branching ratio quickly dies off due to the offshellness of the involved Z boson. The allowed parameter space in a_2D/f_𝒜 vs M_𝒜 plane is shown in Fig. <ref> for three different values of ζ_12D. The above 95% limit translates into the lower bound on Γ( H →𝒜 Z) < 0.65  MeV using Eq. (<ref>) for ζ_12D=1. The above bound on Γ( H →𝒜 Z) is reduced by half with the HL-LHC projections for H →γγ at 3 ab^-1 <cit.>, i.e. we obtain Γ( H →𝒜 Z) < 0.32  MeV for ζ_12D=1. § HIGGS SIGNALS OF VIRTUAL ALPS §.§.§ Propagation vs. on-shell properties: Four-top production BSM corrections to the Higgs self energy Σ_H can give rise to an oblique correction Ĥ = - M_H^2/2Σ_H^'' (M_H^2) , analogously to the Ŵ, Ŷ parameters in the gauge sector, e.g. <cit.>. Such a correction leads to a Higgs propagator modification <cit.> -iΔ_H(q^2) = 1/q^2 - M_H^2( 1 + Ĥ(1-q^2 M_H^2) ) , indicating a departure for large momentum transfers at unit pole residue. Measurements of this parameter have by now been established by ATLAS and CMS in Refs. <cit.>. The expected upper limit is Ĥ≤ 0.12 , at 95% CL from the recent four-top production results of Ref. <cit.>. We can re-interpret this in the framework that we consider. In parallel, we can employ an extrapolation of four-top final states to estimate sensitivity improvements that should become available in Ĥ-specific analyses at the high-luminosity LHC (ATLAS currently observe a small tension in their Ĥ fit). Explicit calculation in general R_ξ gauge of the ALP insertion of Eq. (<ref>) into the Higgs two-point function yields the ξ-independent result (see also remarks in <cit.>) Γ(H(q)H(q))=a_2D^2 ζ_12D^2/4 π^2 f_𝒜^2(4 M_𝒜^4 - 3 M_𝒜^2 q^2 + ( q^2 - 3 M_Z^2 ) q^2) Δ_UV + … , with MS factor Δ_UV=Γ(1+ϵ) ϵ(4πμ^2 M_H^2)^ϵ in dimensional regularisation D=4-2ϵ with `t Hooft mass μ. The ellipses in Eq. (<ref>) denote finite terms for ϵ→ 0 (see below). In the following we will adopt the on-shell scheme for field and mass renormalisation (cf. Eq. (<ref>)), and the MS scheme for HEFT parameters (see also <cit.>). On the one hand, part of the divergence of Eq. 
(<ref>) are then cancelled by the (divergent, div.) counterterms related to the Higgs wave function and mass renormalisation δZ_H|_div. = 3 a_2D^2 ζ_12D^2 (M_𝒜^2 + M_Z^2) /4 π^2 f_𝒜^2 , δM_H^2 |_div. = a_2D^2 ζ_12D^2 (4 M_𝒜^4 - 3 M_𝒜^2 M_H^2 - 3 M_H^2 M_Z^2) /4π^2 f_𝒜^2 . On the other hand, the appearance of a q^4 contribution signifies the sourcing of the chiral dimension-4 operator 𝒪_□□ of the HEFT Lagrangian 𝒪_□□= a_□□□ H □ H/v^2 . This operator is renormalised by the ALP interactions via δ a_□□= - a_2D^2 ζ_12D^2 v^2/8 π^2 f_𝒜^2 Δ_UV . Together, the renormalised Higgs two-point function then links to the Ĥ parameter as Ĥ = - a_2D^2 M_H^2 ζ_12D^2/8 π^2 f_A^2( 2 B_0(M_H^2, M_A^2, M_Z^2)|_fin. - 4(M_A^2 - M_H^2 + M_Z^2)B'_0(M_H^2, M_A^2, M_Z^2) . .+ [M_A^4 - 2 M_A^2 (M_H^2 + M_Z^2) + (M_H^2 - M_Z^2)^2] B”_0(M_H^2, M_A^2, M_Z^2)) , where `fin.' denotes the UV finite part of the Passarino-Veltman B_0 function after subtracting Eq. (<ref>) and derivatives are taken with respect to the first argument of the B_0 function (an explicit representation can be found in Ref. <cit.>). Ĥ vanishes in the decoupling limit f_A>M_A≫ M_H. Equation (<ref>) shows that propagator corrections that can be attributed to Ĥ probe similar couplings as the Higgs decay of Eq. (<ref>), however, in a momentum transfer-enhanced way, at the price of a loop suppression. This way the energy coverage of the LHC that becomes under increasing statistical control provides additional sensitivity beyond the fixed scale Higgs decay. Any enhanced sensitivity to the on vs. off-shell phenomenology that can be gained from the combination of the processes discussed so far, can then break the degeneracies between the different HEFT coefficients in Eq. (<ref>). To obtain an extrapolation estimate from the current constraints on Ĥ, we implement the modifications from Ĥ in MadGraph5_aMC@NLO <cit.> in order to estimate the changes caused in the four-top cross section from different contributions to the Higgs self-energy, and extrapolate the result of Eq. (<ref>). Assuming a significance S(Ĥ = 0.12) / √(B) = 2 from the constraint of Ref. <cit.> at 140/fb, and then subsequently rescaling the results to 3/ab, we obtain the approximate significance at HL-LHC. While using the more recent results yields improved bounds compared to earlier projections of Ref. <cit.> that include systematics (due to improvements in the analysis procedure utilising ML techniques), our projections remain conservative compared to the previously estimated significance with only statistical uncertainties, see Fig. <ref>. In Fig. <ref>, we also see that if M_A is light, it will freely propagate in the 2 point function thus imparting the characteristic q^4 dependence probed by Ĥ. This also means that this behaviour is essentially independent of the light ALP mass scale. Turning to heavier states, this kinematic dependence is not sourced as efficiently anymore, leading to a quick decoupling from the two-point Higgs function and reduced sensitivity and larger theoretical uncertainty. We will return to the relevance of Ĥ for the discussed scenario after discussing the modifications to Higgs pair production in the next section. §.§.§ Higher terms of the ALP flare function: Higgs pair production Corrections to Higgs pair production under the same assumptions as in the previous section are contained in propagator corrections and corrections to trilinear Higgs coupling. 
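Referring back to the four-top extrapolation just described, the statistics-only luminosity rescaling can be sketched in a few lines; the final line is only an illustration under the added assumption that the yield responds roughly linearly to Ĥ near the limit:

import math

# Statistics-only rescaling: if signal and background both scale linearly with
# the integrated luminosity, the significance S/sqrt(B) grows like sqrt(L2/L1).
L_run2, L_hllhc = 140.0, 3000.0       # fb^-1
sig_run2 = 2.0                        # significance assumed for H-hat = 0.12

scale = math.sqrt(L_hllhc / L_run2)   # ~4.6
sig_hllhc = sig_run2 * scale
print(f"approximate HL-LHC significance for H-hat = 0.12: {sig_hllhc:.1f} sigma")

# Illustration only (assumes a roughly linear response of the yield to H-hat):
print(f"illustrative 2-sigma reach at 3/ab: H-hat ~ {0.12 * 2.0 / sig_hllhc:.3f}")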
As with the chiral dimension-4 operator that leads to new contributions to the Higgs-two point function, there are additional operators that modify the Higgs trilinear interactions. The amputated off-shell three-point function receives contributions (see also <cit.>) v^3Γ_1(H(q)H(k_1)H(k_2)) = a_χ 1 (q^4 + k_1^4 + k_2^4) + 2 a_χ 2 (q^2 k_1^2 + k_1^2 k_2^2 + q^2 k_2^2) + a_χ 3v^2 (q^2+ k_1^2 + k_2^2) , which are renormalised in the MS scheme according to δ a_χ1 = a_2D^2 ζ_12D v^2/8 π^2 f_𝒜^2 (3(1+ζ_1) ζ_12D + 2ζ_22D) Δ_UV , δ a_χ2 = 3 a_2D^2 ζ_12D^2 v^2 /8 π^2 f_𝒜^2 (1+ζ_1) Δ_UV , δ a_χ3 = 3 a_2D^2 ζ_12D v^2/4 π^2 f_𝒜^2 [ (M_A^2+M_Z^2)ζ_22D - 3(M_A^2+2M_Z^2) (1+ζ_1)ζ_12D]Δ_UV . The remaining renormalisation of the chiral dimension-2 term follows from Eq. (<ref>) δΓ_2(H(q)H(k_1)H(k_2))|_div = - 9a_2D^2 ζ_12D^2 /8 π^2 f_𝒜^2 κ_3 (M_A^2+m_Z^2) - a_2D^2 ζ_12D M_A^4 /2 π^2 f_𝒜^2 v ( 2 (1+ζ_1) ζ_12D - ζ_22D) . ATLAS (CMS) have set highly competitive expected 95% confidence level cross section limits of σ/σ_SM<3.9 (5.2) <cit.> in the bb̅ττ channel <cit.> alone. Slightly reduced sensitivity <cit.> can be achieved in the 4b and 2b2γ modes <cit.>. ATLAS have combined these channels to obtain a combined exclusion of 3.1 σ_SM <cit.> with the currently available data and forecast a sensitivity of σ/σ_SM≳ 1.1 at the HL-LHC <cit.>. We use the two latter result to gain a qualitative sensitivity reach of Higgs pair production in the considered scenario. In Fig. <ref>, we show representative invariant Higgs pair mass distributions for 13 TeV LHC collisions, which demonstrates the potential of multi-Higgs final states' sensitivity to the momentum-enhanced new physics contributions characteristic to the ALP.[We have implemented these changes into an in-house Monte Carlo event generate based on Vbfnlo <cit.> employing FeynArts, FormCalc, and LoopTools <cit.> and PackageX <cit.> for numerical and analytical cross checks. Throughout this work we chose a renormalisation scale of μ=2M_h.] The behaviour exhibited by the invariant mass distribution is not sensitive to the mass of the ALP as long as the latter is not close to the ≃ 2M_H threshold that determines the gg→ HH phenomenology. In instances when hefty ALPs propagate freely, their distinctive momentum enhancements will sculpt the Higgs-boson distributions. In parallel, non-linear effects will be important away from the SM reference point as shown in Fig. <ref>. This shows that the constraints that can be obtained in the di-Higgs channel are relatively strongly coupled, which is motivation for us to directly include “squared” BSM effects to our analysis in addition to interference effects. We combine the three representative analyses in a global χ^2 to obtain sensitivity estimates. In the case when the ALP is light, there are significant modifications to Higgs physics, also at large momentum transfers, see also Fig. <ref>. Of course, these large contributions in particular to the Higgs pair rate are tamed by decreasing signal strengths into SM-like states, which quickly result in tension with experimental observations for larger couplings. As Higgs pair production observations need to rely on relatively clean and high branching ratio final states, the prospects of Higgs pair production (and four top) analyses to provide additional sensitivity is relatively low. This is highlighted already in the combination of the Higgs decay constraints with these processes in Fig. <ref>. 
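The global χ² mentioned above can be illustrated with a generic Gaussian combination; the channel parametrisations and uncertainties below are placeholders and do not correspond to the actual inputs of the analysis:

import numpy as np

def global_chi2(theta, channels):
    """Gaussian chi^2 over independent channels: (prediction_fn, observed, sigma)."""
    return sum(((fn(theta) - obs) / sig) ** 2 for fn, obs, sig in channels)

# Toy signal-strength-like observables with interference and squared terms in a
# single effective coupling theta (placeholder coefficients, not the paper's).
channels = [
    (lambda t: 1.0 + 0.5 * t + 2.0 * t**2, 1.0, 0.10),   # Higgs decay rate
    (lambda t: 1.0 + 4.0 * t**2,           1.0, 0.50),   # di-Higgs rate
    (lambda t: 1.0 + 1.0 * t**2,           1.0, 0.60),   # four-top rate
]

thetas = np.linspace(-1.0, 1.0, 2001)
chi2   = np.array([global_chi2(t, channels) for t in thetas])
allowed = thetas[chi2 - chi2.min() < 3.84]   # ~95% CL for one free parameter
print(f"95% CL interval (toy inputs): [{allowed.min():.2f}, {allowed.max():.2f}]")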
For parameter choices for which the ALP is above the Higgs decay threshold, this picture changes. Multi-Higgs constraints remain relatively insensitive to the ALP mass scale as long as these states are away from the 2 M_H threshold. The cross section enhancement then translates directly into an enhancement of the observable Higgs boson pair production rate. In turn, constraints on the the higher order terms in the ALP flare function become possible. It is important to note that these are independent of couplings (to first order) that shape the ALP decay phenomenology. As the large enhancements result from the tails of distributions there is a question of validity. Nonetheless the momentum dependence introduced by Eq. (<ref>) leads to partial wave unitarity violation as, e.g. HZ scattering proceeds momentum-enhanced. A numerical investigation shows that for O(1) couplings in Eq. (<ref>), conserved zeroth partial wave unitarity up to scales ∼ 1.5 TeV sets a lower bound of f_a≳ 300 GeV for unsuppressed propagation M_A=1 GeV. These constraints are driven by the longitudinal Z polarisations, constraints from transverse modes are comparably weaker. This means that the entire region that is shown in Fig. <ref> is perturbative at tree-level. In parallel, the HL-LHC is unlikely to probe Higgs pairs beyond invariant masses M_HH>600 GeV in the SM (for which the cross section drops to 10% of the inclusive rate). Most sensitivity in HL-LHC searches results from the threshold region. Therefore, the sensitivity expected by the HL extrapolation of <cit.> will probe Eq. (<ref>) in a perturbatively meaningful regime. The combined constraints are largely driven by Higgs pair constraints, Fig. <ref>. However, it is worth highlighting that the statistics-only extrapolation does not include changes to the four top search methodology. Improvements of the latter can be expected with increasing luminosity and the final verdict from four top production might indeed be much more optimistic than our √(luminosity) extrapolation might suggest. § SUMMARY AND CONCLUSIONS Searches for new light propagating degrees of freedom such as axion-like particles are cornerstones of the BSM programme in particle physics as explored at, e.g., the Large Hadron Collider. The Higgs boson, since a global picture of its interactions is still incomplete, provides a motivated avenue for the potential discovery of new physics in the near future as the LHC experiments gain increasingly phenomenological sensitivity in rare processes that could be tell-tale signs of Higgs-related BSM physics. We take recent experimental developments in multi-Higgs and multi-top analyses as motivation to analyse effective Higgs-philic ALP interactions, also beyond leading order. This enables us to tension constraints from different areas of precision Higgs phenomenology, combining Higgs decay modifications with large-momentum transfer processes that are becoming increasingly accessible at the LHC. For light states and sizeable HEFT-like couplings, a large part of the sensitivity is contained in Higgs signal strength measurements (see also <cit.>), which, however, only provide limited insights into the Higgs-ALP interactions. Higher terms of the Higgs-ALP flare function, still have the phenomenological potential to sizeably modify Higgs pair final states at a level that will be observable at the LHC in the near future. Our findings therefore also highlight further the relevance of multi-top and multi-Higgs final state for the quest for new physics. 
We thank Dave Sutherland for insightful discussions. C.E. thanks the high-energy physics group (FTAE) at the University of Granada for their hospitality during early stages of this work. A. is supported by the Leverhulme Trust under grant RPG-2021-031. S.D.B is supported by SRA (Spain) under Grant No. PID2019-106087GB-C21 (10.13039/501100011033), and PID2021-128396NB-100/AEI/10.13039/501100011033; by the Junta de Andalucía (Spain) under Grants No. FQM-101, A-FQM-467-UGR18, and P18-FR-4314 (FEDER). C.E. is supported by the STFC under grant ST/T000945/1, the Leverhulme Trust under grant RPG-2021-031, and the Institute for Particle Physics Phenomenology Associateship Scheme. P.S. is supported by the Deutsche Forschungsgemeinschaft under Germany’s Excellence strategy EXC2121 “Quantum Universe” - 390833306. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 491245950.
http://arxiv.org/abs/2306.02829v1
20230605122431
Dynamic Calculations of Magnetic Field and Implications on Spin Polarization and Spin Alignment in Heavy Ion Collisions
[ "Hui Li", "Xiao-Liang Xia", "Xu-Guang Huang", "Huan Zhong Huang" ]
nucl-th
[ "nucl-th" ]
[email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China [email protected] Department of Physics and Center for Field Theory and Particle Physics, Fudan University, Shanghai 200433, China [email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China Department of Physics and Center for Field Theory and Particle Physics, Fudan University, Shanghai 200433, China Shanghai Research Center for Theoretical Nuclear Physics, NSFC and Fudan University, Shanghai 200438, China [email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA Magnetic field plays a crucial role in various novel phenomena in heavy-ion collisions. We solve the Maxwell equations numerically in a medium with time-dependent electric conductivity by using the Finite-Difference Time-Domain (FDTD) algorithm. We investigate the time evolution of magnetic fields in two scenarios with different electric conductivities at collision energies ranging from = 7.7 to 200 GeV. Our results suggest that the magnetic field may not persist long enough to induce a significant splitting between the global spin polarizations of Λ and Λ̅ at freeze-out stage. However, our results do not rule out the possibility of the magnetic field influencing the spin (anti-)alignment of vector mesons. Dynamic Calculations of Magnetic Field and Implications on Spin Polarization and Spin Alignment in Heavy Ion Collisions Huan Zhong Huang Received 21 February, 2023; accepted 5 June, 2023 ======================================================================================================================= § INTRODUCTION In non-central relativistic heavy-ion collisions, two positively charged nuclei collide with non-zero impact parameters, resulting in the generation of a large magnetic field. This magnetic field can reach 10^18 Gauss in Au + Au collisions at =200 GeV at RHIC and 10^19 Gauss in Pb + Pb collisions at =5020 GeV at LHC <cit.>. The effect of this strong magnetic field on the Quark-Gluon Plasma (QGP) has attracted much attention due to its potential impacts on many novel phenomena, such as the chiral magnetic effect <cit.>, the spin polarization of hyperons <cit.>, the spin alignment of vector mesons <cit.>, the charge-dependent directed flow <cit.>, and the Breit-Wheeler process of dilepton production <cit.> in heavy-ion collisions. When making theoretical predictions about the aforementioned effects, a crucial question to be addressed is how the magnetic field evolves over time. In particular, it is important to determine whether the lifetime of the magnetic field is sufficiently long to maintain significant field strength leading to observable effects. In general, simulations of the magnetic field evolution can be carried out using the following steps. Before the collision, the charge density of the two colliding nuclei can be initialized by utilizing the Wood-Saxon distribution or by sampling the charge position in the nucleus using the Monte-Carlo Glauber model <cit.>. After the collision, the two charged nuclei pass through each other like two instantaneous currents. Currently, several approaches exist to simulate the collision process. 
The simplest method is to assume that the two nuclei pass through each other transparently or to incorporate the charge stopping effect using empirical formulae <cit.>. A more sophisticated approach involves simulating the entire collision process through transport models <cit.>. Once the motion of electric charge is determined, the magnetic field can be calculated using analytical formulae. These methods have been widely employed in previous studies to investigate the evolution of magnetic fields in heavy-ion collisions <cit.>. Previous simulations have shown that the strong magnetic field produced by the colliding nuclei rapidly decays with time in vacuum <cit.>. The lifetime of the magnetic field is primarily determined by fast-moving spectators, and the strong magnetic field only exists during the early stage of the collision. However, the time evolution of the magnetic field can be significantly modified when taking into account the response of the QGP, which is a charge-conducting medium. In this case, when the magnetic field begins to decrease, the induced Faraday currents in the QGP considerably slow down the damping of the magnetic field. Analytical formulae have demonstrated that the damping of the magnetic field in a constant conductive medium can be significantly delayed <cit.>. However, those analytical formulae only apply to the case of a constant conductivity, which is unrealistic because the conductivity only exists after the collision and the value varies as the QGP medium expands. Therefore, it is essential to numerically calculate the magnetic field. Numerical results can overcome the limitations of analytical calculations and provide unambiguous solutions for time-dependent conductive medium. As a result, numerical results can serve as a more accurate reference for final state observations that are sensitive to the evolution of the magnetic field. It is worth noting that some studies have also simulated the magnetic field by numerically solving the Maxwell equations with an electric conductivity <cit.> and by combining the magnetic field with the electromagnetic response of the QGP medium <cit.>. This paper presents a numerical study of the time evolution of magnetic fields with time-dependent electric conductivities at = 7.7–200 GeV. To solve the Maxwell equations, we utilize the Finite-Difference Time-Domain (FDTD) algorithm <cit.>. The paper is organized as follows: Sec. <ref> introduces the analytical formulae. Sec. <ref> describes the numerical model setup of the charge density, charge current, and the electric conductivity. Sec. <ref> describes the numerical method. Sec. <ref> presents the results and discusses the impact on the spin polarization and the spin alignment. Finally, Sec. <ref> concludes the results. § LIMITATION OF ANALYTICAL FORMULA We consider the electromagnetic field which is generated by external current of two moving nuclei and evolves in a conductive medium created in heavy-ion collisions. The electromagnetic field is governed by Maxwell equations: ∇· = ρ, ∇· = 0, ∇× = -∂_t , ∇× = + σ + ∂_t , where ρ and are the charge density and the charge current, and σ is the electric conductivity of the medium. For a point charge q moving with a constant velocity $̌,ρandare: ρ(t,) = qδ^(3)[-_q(t)], (t,) = qδ̌^(3)[-_q(t)], whereis the position of the field point and_q(t)is the position of the point charge at timet. 
If the conductivityσis a constant, the magnetic field has been rigorously derived as follows <cit.>: (t,) =γ×̌/Δ^3/2(1+γσ/2||̌√(Δ))e^A, whereγ=1/√(1-^̌2)is Lorentz contraction factor,≡-_q(t)is the position difference between the field point and the point charge at timet,Δ≡R^2+(γ·̌)^2, andA≡-γσ(γ·̌+||̌√(Δ))/2. Ifσis set to zero, the above formula can recover to the electromagnetic field in vacuum which can be expressed by the Lienard-Wiechert potential: (t,) =γ×̌/[R^2+(γ·̌)^2]^3/2. Because the Maxwell equations (<ref>–<ref>) satisfy the principle of superposition, the formulae (<ref>) and (<ref>) can also be applied to charge distributions rather than just a point charge. Therefore, the formulae have been widely used in the literature <cit.> to calculate the magnetic field generated by nuclei in heavy-ion collisions. However, Eq. (<ref>) is valid only if 1) the point charge moves with a constant velocity, and 2) the conductivityσis constant (fort∈[-∞,∞]). Unfortunately, neither of these conditions is realistic in heavy-ion collisions. First, when the collision occurs, charged particles slow down, and the velocities keep changing during the subsequent cascade scattering. Second, the QGP is produced after the collision, which means that the conductivityσis non-zero only aftert = 0(the time when the collision happens) and the value ofσvaries with time. For these reasons, it is important to develop a numerical method which can solve the Maxwell equations under more complicate and more realistic conditions ofρ,, andσ. In this paper, we focus on studying the influence of time-dependentσon the evolution of magnetic field. § MODEL SETUP §.§ Charge density and current In heavy-ion collisions, the external electric current arises from the contribution of protons in the fast moving nuclei. In this case, we consider two nuclei, which are moving along+zand-zaxis with velocityv_z, and their projections on thex-yplane are centered at(x=±b/2, y=0), respectively, withbbeing the impact parameter. In the rest frame of a nucleus, the charge distribution can be described by the Wood-Saxon distribution: f(r) = N_0/1+exp[(r-R)/a], whereRis the nuclear radius,ais the surface thickness, andN_0is a normalization factor determined by4π∫f(r)r^2dr = Ze. Take the gold nucleus as an example, we haveZ=79,R=6.38fm,a=0.535fm, thereforeN_0≈0.0679 e/fm^3. Then, it is straightforward to derive the charge density and current of the two moving nuclei by a Lorentz boost from Eq. (<ref>), which leads to ρ^±(t,x,y,z) = γ f(√((x∓ b/2)^2+y^2+γ^2(z∓ v_zt)^2)), j_x^±(t,x,y,z) = 0, j_y^±(t,x,y,z) = 0, j_z^±(t,x,y,z) = γ v_z f(√((x∓ b/2)^2+y^2+γ^2(z∓ v_zt)^2)), where the±sign overρandjon the left side indicates the direction of nucleus' motion alongzaxis, the velocityv_z = √(γ^2 - 1) / γ, withγ= / (2m_N)andm_N=938MeV. The total charge density and current are given as follows: ρ(t,x,y,z) = ρ^+(t,x,y,z) + ρ^-(t,x,y,z), (t,x,y,z) = ^+(t,x,y,z) + ^-(t,x,y,z). Eqs. (<ref>) and (<ref>) can describe the charge and current distributions before the collision exactly when the two nuclei are moving at a constant velocity. After the collision, the two nuclei are “wounded”, and some charged particles are stopped to collide with each other. This causes dynamic changes in the charge and current distributions. However, the main goal of this paper is to investigate how the time behavior of the magnetic field is influenced by the time-dependentσ. 
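As a quick consistency check of the setup above, the Wood-Saxon normalization N_0 and the beam kinematics can be computed directly from the quoted parameters (the upper integration cutoff is an assumption, chosen large enough for convergence):

import numpy as np
from scipy.integrate import quad

# Wood-Saxon parameters for the gold nucleus quoted in the text
Z, R, a = 79, 6.38, 0.535          # e, fm, fm

# Normalization from 4*pi * int f(r) r^2 dr = Z e, with f(r) = N0 / (1 + exp((r-R)/a))
integral, _ = quad(lambda r: r**2 / (1.0 + np.exp((r - R) / a)), 0.0, 50.0)
N0 = Z / (4.0 * np.pi * integral)
print(f"N0 = {N0:.4f} e/fm^3")      # ~0.0679 e/fm^3, as quoted

# Lorentz factor and beam velocity: gamma = sqrt(s_NN)/(2 m_N), v_z = sqrt(gamma^2-1)/gamma
sqrt_sNN, mN = 200.0, 0.938         # GeV
gamma = sqrt_sNN / (2.0 * mN)
vz = np.sqrt(gamma**2 - 1.0) / gamma
print(f"gamma = {gamma:.1f}, v_z = {vz:.6f} c")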
As a simplification, we currently assume that the two nuclei pass through each other and continue moving with their original velocity, so the charge and current distributions in Eqs. (<ref>) and (<ref>) are unchanged after the collision. This allows us to compare our numerical results with the analytical results obtained by Eq. (<ref>) under the same conditions ofρand, so that we can focus on studying the influence of the time-dependentσ. §.§ Electric conductivity Generally, Eq. (<ref>) is not a realistic description of the electromagnetic response of QGP matter because it assumes a constant conductivity. In reality, the QGP matter exists only after the collision, and the conductivity is time-dependent during the expansion of the system. To provide a more realistic description of the evolution of the magnetic field, it is necessary to consider a time-dependent electric conductivity. In this study, we consider two scenarios for the electric conductivity. In the first scenario, the conductivity is absent before the collision, and it appears to be constant after the collision. Thus, we can introduce aθ(t)function to describe it, σ = σ_0 θ(t). In this equation, if the constant conductivityσ_0were not multiplied by theθ(t)function, the formula (<ref>) would be valid for calculating the magnetic field. However, as we will show in Sec. <ref>, even with such a minor modification on the electric conductivity, the time behavior of the magnetic field becomes very different. In the second scenario, we consider the electric conductivity to be absent before the collision, and after the collision the electric conductivity depends on time via σ = σ_0 θ(t)/(1 + t / t_0)^1/3. The denominator in this equation accounts that the conductivity decreases as the QGP medium expands <cit.>. Thus, this scenario provides a more relativistic description of the magnetic field's time behavior in heavy-ion collisions. § NUMERICAL METHOD In the aforementioned scenarios in Eqs. (<ref>) and (<ref>),σis time dependent, therefore the analytical results in Eq. (<ref>) is not applicable, and the Maxwell equations (<ref>–<ref>) need to be solved numerically. Becauseσis zero before the collision, and the two nuclei move linearly with constant velocity, the electromagnetic field att ≤0can be analytically calculated by the Lienard-Wiechert formula as given by Eq. (<ref>). This provides the initial condition of the electromagnetic field att = 0. Once the initial condition is given, the electromagnetic field att ≥0is calculated by numerically solving the Maxwell equations (<ref>–<ref>). We use the FDTD algorithm <cit.> to solve the Maxwell equations. In detail, electric and magnetic fields are discretized on the Yee's grid, and the updating format forandcan be constructed by discretizing Eqs. (<ref>) and (<ref>) with a finite time step, as follows (t + Δ t) - (t)/Δ t = - ∇×(t+Δ t/2), and (t + Δ t) - (t)/Δ t + σ(t + Δ t) + (t)/2 = ∇×(t+Δ t/2) - (t+Δ t/2). The Yee's grid provides a high-accuracy method to calculate∇×and∇×. As time evolves,andare updated alternately. For example, ifis initially known at timetandis initially known at timet+Δt/2, then one can use the values of(t+Δt/2)and Eq. (<ref>) to updatefromttot + Δt; and after(t + Δt)is obtained, one can use Eq. (<ref>) to updatefromt + Δt/2tot + 3Δt/2. This algorithm provides higher accuracy than the regular first-order difference method. § NUMERICAL RESULTS Using the numerical method described in Sec. 
<ref>, we calculate the magnetic field by solving the Maxwell equations (<ref>–<ref>) under the conditions ofσ= 0,σ= σ_0 θ(t), andσ= σ_0 θ(t)/(1 + t / t_0)^1/3, respectively. As a verification of our numerical method, we have checked that our numerical solution forσ= 0matches the analytical result by Eq. (<ref>). We also calculate the magnetic field under the condition ofσ= σ_0using the analytical formula (<ref>) for comparison. In all the results presented in this section, the values ofσ_0andt_0are set to beσ_0= 5.8MeV andt_0 = 0.5fm/c, which are taken from Ref. <cit.>. §.§ σ = σ_0 θ(t) vs σ = σ_0 Figure <ref> displays the time evolution of the magnetic field in the out-of-plane direction (B_y) at the center of collision (𝐱 = 0) in Au+Au collisions for energies ranging from 7.7 to 200 GeV with impact parameterb = 7fm. The results ofσ= σ_0 θ(t)are calculated using the numerical algorithm described in Sec. <ref>, while the results ofσ= σ_0are calculated using the analytical formula given by Eq. (<ref>). The magnetic field in vacuum (σ= 0) is also shown as a baseline. In general, the presence of electric conductivity delays the decreasing of the magnetic field. However, the time behavior of the magnetic field under the condition ofσ= σ_0 θ(t)is very different from that ofσ= σ_0. In Figure <ref> we can see that, in the case ofσ= σ_0(namely,σis constant at botht < 0andt > 0), the magnitude of magnetic field is different from the vacuum baseline since a very early time. On the other hand, in the case ofσ= σ_0 θ(t), the difference between the magnetic field and the vacuum baseline is negligible at early time stages (t < 1fm/c for 200 GeV ort < 3fm/c for 7.7 GeV). This is because thatσexists only after the collision and it needs some time to build the effect on delaying the magnetic field's decay. Only at very late time stage (t > 7fm/c), when the evolution system has “forgotten” whetherσis zero or not beforet = 0, the curves ofσ= σ_0 θ(t)and ofσ= σ_0converge. In the middle time stage, the magnitude of the magnetic field is ranked in the order:B[vacuum] < B[σ=σ_0θ(t)] < B[σ=σ_0]. Our results indicate that the analytical formula (<ref>) significantly overestimates the magnetic field in the early and middle time stage compared to the numerical results. The difference between the analytical and numerical results arises from theθ(t)function introduced in Eq. (<ref>). It is important to note that the conductivity is absent att < 0in realistic collisions, therefore the formula (<ref>) is not applicable. This remarks the importance of considering time-dependentσand solving the Maxwell equations numerically. At the late time stage, although the analytical results agree well with the numerical ones, the magnetic field has become very small and has little impact on final observables. §.§ σ = σ_0 θ(t) vs σ = σ_0 θ(t) / (1 + t / t_0)^1/3 The electric conductivity in heavy-ion collisions is a time-dependent quantity due to the expansion of the QGP. Therefore, we consider a more realistic scenario where the electric conductivity decreases with time as given by Eq. (<ref>). Figure <ref> shows the corresponding results, which are compared to the results under the conditions ofσ= σ_0 θ(t)andσ= 0. We see again that, in both scenarios ofσ= σ_0 θ(t)and ofσ=σ_0θ(t) / (1 + t / t_0)^1/3, the magnitude of the magnetic field does not obviously diverge from the vacuum baseline at the early time stage. At later time, the differences is manifested, and we see thatB[vacuum] < B[σ=σ_0θ(t) / (1 + t / t_0)^1/3] < B[σ=σ_0θ(t)]. 
Needless to say, the decreasing conductivity has smaller effect on delaying the magnetic field's decay than a constant one. Nevertheless, Figure <ref> shows that the magnitude of the magnetic field withσ=σ_0θ(t) / (1 + t / t_0)^1/3are more close to the one ofσ=σ_0θ(t)than to the vacuum baseline, especially at high energies. This suggests that the even if the conductivity decreases, it still has an obvious effect on delaying the damping of the magnetic field. However, this effect is only significant in late time stage, when the magnetic field has already decreased. §.§ Impact on the spin polarization Now let us discuss the impact of the magnetic field on the splitting between the global spin polarizations ofΛandΛ̅. The magnetic-field-induced global spin polarization ofΛandΛ̅can be calculated using the following formula <cit.> P_Λ/Λ̅ = ±μ_ΛB/T, whereμ_Λis the magnetic moment ofΛand is equal to-0.613μ_N, withμ_Nbeing the nuclear magneton, andTis the temperature when the hyperon spin is “freezed”. We shall use the hadronization temperatureT≈155MeV as an estimate. Then the splitting between theΛandΛ̅global spin polarizations is given by P_Λ̅-P_Λ = 0.0826eB/m_π^2. Based on the numerical results presented in Figure <ref>, the magnitude of the magnetic field at late time is of the order ofeB_y∼10^-3–10^-2 m_π^2, which is significantly smaller than the initial values att=0. Therefore, the effect of the magnetic field on the global spin polarizations ofΛandΛ̅is negligible, as the splitting can be no larger than0.1%. This is consistent with the recent STAR data <cit.> which puts an upper limit ofP_Λ̅-P_Λ < 0.24%at=19.6 GeV andP_Λ̅-P_Λ < 0.35%at=27 GeV. In conclusion, our results suggest that the magnetic field is not sufficiently long-lived to provide a distinguishable splitting between theΛandΛ̅global spin polarizations under the current experimental accuracy; similar results were obtained also in Ref. <cit.>. §.§ Impact on the spin alignment The magnetic field also plays an important role in the spin (anti-)alignment of vector mesons. For vector mesons such asϕandK^*0, the spins of the constituent quarks in the meson have a lager chance to be anti-algined [i.e. the(|↑↓⟩+|↓↑⟩)/√(2)state] than to be aligned (|↑↑⟩or|↓↓⟩state) in an external magnetic field <cit.>. This effect can be explored experimentally by measuring the spin-density matrix elementρ_00. We note thatρ_00is a frame dependent quantity. The following formulae show theρ_00with respect tox,y, andzaxis, respectively <cit.>: ρ_00^(x) = 1-P_x^qP_x^q̅+P_y^qP_y^q̅+P_z^qP_z^q̅/3+𝐏_q·𝐏_q̅, ρ_00^(y) = 1-P_y^qP_y^q̅+P_x^qP_x^q̅+P_z^qP_z^q̅/3+𝐏_q·𝐏_q̅, ρ_00^(z) = 1-P_z^qP_z^q̅+P_x^qP_x^q̅+P_y^qP_y^q̅/3+𝐏_q·𝐏_q̅. where(P_x^q, P_y^q, P_z^q)and(P_x^q̅, P_y^q̅, P_z^q̅)are spin polarization vectors of the constituent quark and anti-quark, respectively. Our results have shown that the global spin polarization induced by the magnetic field is a small amount (<0.1%), therefore one may expect that the contribution from the magnetic field to the spin alignment (measured viaρ_00-1/3, which is proportional to the square of the magnetic field) will be even smaller. However, it should be realized that our calculations do not take into account the fluctuations in the charge density and current. Therefore, the results should be interpreted as the averaged magnetic field, which suggest that the average values such as⟨P_q ⟩and⟨P_q̅ ⟩are small, but do not imply that the correlation betweenP_qandP_q̅is small. 
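For reference, the size of the field-induced splitting quoted above can be checked directly from the relation P_Λ̅ − P_Λ = 0.0826 eB/m_π²:

# Field-induced splitting of the global polarizations, using the relation
# quoted above: P_antiLambda - P_Lambda = 0.0826 * eB / m_pi^2 (T ~ 155 MeV).
for eB_over_mpi2 in (1e-3, 5e-3, 1e-2):   # late-time field strengths from the results
    splitting = 0.0826 * eB_over_mpi2
    print(f"eB = {eB_over_mpi2:.0e} m_pi^2 -> splitting = {100 * splitting:.3f}%")

# Even at eB = 1e-2 m_pi^2 the splitting stays below 0.1%, consistent with the
# STAR upper limits of 0.24% (19.6 GeV) and 0.35% (27 GeV) quoted above.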
Instead, when a vector meson is formed by combination of a quark and an anti-quark, the distance between the quarks should be small enough, thusP_qandP_q̅, which arise from the fluctuation of magnetic field, are highly correlated. This can lead to a massive contribution toρ_00. Therefore, our results do not rule out the possible effect of the magnetic field on the spin (anti-)alignment of vector mesons. For the same reason, the spin alignment of vector mesons can also arise from the fluctuation of other fields such as vorticity <cit.>, temperture gradient <cit.>, shear tensor <cit.>, and strong-force field <cit.>. Finally, it is important to note that, if the spin alignment is mainly contributed by fluctuations, then the value ofρ_00is not constrained by the value of global or localΛpolarizations. This may explain the significant value of|ρ_00-1/3|in the experimental data <cit.>, whereas the global or localΛpolarizations are much smaller <cit.>. § SUMMARY In this study, we present a numerical method to solve the Maxwell equations and investigate the evolution of magnetic field in heavy-ion collisions. We also discuss the impact of the magnetic field on the spin polarizations ofΛandΛ̅as well as the spin alignment of vector mesons. We demonstrate that although the electric conductivity can delay the decay of the magnetic field, this effect has been overestimated by the analytical formula which assumes a constant conductivity. After taking into account that the conductivity only exists after the collision, we find that the magnetic field is not sufficiently long-lived to induce a significant splitting between the global spin polarizations ofΛandΛ̅. On the other hand, the spin alignment of vector meson is a measure of correlation between the spin polarizations of quark and anti-quark, instead of the spin polarization being squared solely. Therefore, although the averaged spin polarization induced by the magnetic field is very small, our results do not rule out the possibility that the fluctuations of the magnetic field, as well as other fields, can have a significant contribution to the spin alignment of vector meson. We thank Dmitri Kharzeev and Oleg Teryaev for useful comments on the retreat on Spin Dynamics, Vorticity, Chirality and magnetic field workshop. This work was supported by the NSFC through Grants No. 11835002, No. 12147101, No. 12225502 and No. 12075061, the National Key Research and Development Program of China through Grant No. 2022YFA1604900, and the Natural Science Foundation of Shanghai through Grant No. 20ZR1404100. H. L was also supported by the China Postdoctoral Science Foundation 2019M661333. apsrev4-2
http://arxiv.org/abs/2306.08853v1
20230615044225
In Search of netUnicorn: A Data-Collection Platform to Develop Generalizable ML Models for Network Security Problems
[ "Roman Beltiukov", "Wenbo Guo", "Arpit Gupta", "Walter Willinger" ]
cs.NI
[ "cs.NI", "cs.CR", "cs.LG" ]
https://netunicorn.cs.ucsb.edu [email protected] 0000-0001-8270-0219 UC Santa Barbara California USA [email protected] 0000-0002-6890-4503 Purdue University Indiana USA [email protected] 0000-0002-6378-7440 UC Santa Barbara California USA [email protected] 0000-0002-1384-8188 NIKSUN Inc. New Jersey USA The remarkable success of the use of machine learning-based solutions for network security problems has been impeded by the developed ML models' inability to maintain efficacy when used in different network environments exhibiting different network behaviors. This issue is commonly referred to as the generalizability problem of ML models. The community has recognized the critical role that training datasets play in this context and has developed various techniques to improve dataset curation to overcome this problem. Unfortunately, these methods are generally ill-suited or even counterproductive in the network security domain, where they often result in unrealistic or poor-quality datasets. To address this issue, we propose an augmented ML pipeline that leverages explainable ML tools to guide the network data collection in an iterative fashion. To ensure the data's realism and quality, we require that the new datasets should be endogenously collected in this iterative process, thus advocating for a gradual removal of data-related problems to improve model generalizability. To realize this capability, we develop a data-collection platform, , that takes inspiration from the classic “hourglass” model and is implemented as its “thin waist" to simplify data collection for different learning problems from diverse network environments. The proposed system decouples data-collection intents from the deployment mechanisms and disaggregates these high-level intents into smaller reusable, self-contained tasks. We demonstrate how simplifies collecting data for different learning problems from multiple network environments and how the proposed iterative data collection improves a model's generalizability. In Search of : A Data-Collection Platform to Develop Generalizable ML Models for Network Security Problems Walter Willinger ========================================================================================================== plain § INTRODUCTION Machine learning-based methods have outperformed existing rule-based approaches for addressing different network security problems, such as detecting DDoS attacks <cit.>, malwares <cit.>, network intrusions <cit.>, etc. However, their excellent performance typically relies on the assumption that the training and testing data are independent and identically distributed. Unfortunately, due to the highly diverse and adversarial nature of real-world network environments, this assumption does not hold for most network security problems. For instance, an intrusion detection model trained and tested with data from a specific environment cannot be expected to be effective when deployed in a different environment, where attack and even benign behaviors may differ significantly due to the nature of the environment. This inability of existing ML models to perform as expected in different deployment settings is known as generalizability problem <cit.>, poses serious issues with respect to maintaining the models' effectiveness after deployment, and is a major reason why security practitioners are reluctant to deploy them in their production networks in the first place. 
Recent studies (e.g., <cit.>) have shown that the quality of the training data plays a crucial role in determining the generalizability of ML models. In particular, in popular application domains of ML such as computer vision and natural language processing <cit.>, researchers have proposed several data augmentation and data collection techniques that are intended to improve the generalizability of trained models by enhancing the diversity and quality of training data <cit.>. For example, in the context of image processing, these techniques include adding random noise, blurring, and linear interpolation. Other research efforts leverage open-sourced datasets collected by various third parties to improve the generalizability of text and image classifiers. Unfortunately, these and similar existing efforts are not directly applicable to network security problems. For one, since the semantic constraints inherent in real-world network data are drastically different from those in text or image data, simply applying existing augmentation techniques that have been designed for text or image data is likely to result in unrealistic and semantically incoherent network data. Moreover, utilizing open-sourced data for the network security domain poses significant challenges, including the encrypted nature of increasing portions of the overall traffic and the fact that without detailed knowledge of the underlying network configuration, it is, in general, impossible to correctly label additional data. Finally, due to the high diversity in network environments and a myriad of different networking conditions, randomly using existing data or collecting additional data without understanding the inherent limitations of the available training data may even reduce data quality. As a result, there is an urgent need for novel data curation techniques that are specifically designed for the network security domain and aid the development of generalizable ML models for network security problems. To address this need, we propose a two-pronged approach that consists of an alternative ML pipeline in conjunction with a novel data-collection platform. In particular, our proposed alternative ML pipeline consists of augmenting the standard ML pipeline by (1) adding an explainability step between model evaluation and deployment, (2) using eXplainable AI (XAI) tools to identify issues with the training dataset that affect a trained model's ability to generalize, and (3) using the resulting insights to inform the iterative collection of new datasets for model training so as to gradually improve the generalizability of the models that are trained with these new datasets. This approach represents a significant departure from existing approaches that often use synthetic data for model training in their attempt to improve model generalizability. A key requirement for realizing this alternative ML pipeline is the ability to collect new datasets iteratively, irrespective of the given learning problem and considered network environment. Specifically, a network operator interested in improving the generalizability of a trained model will benefit from a data-collection platform that facilitates collecting data for different learning problems from a diverse set of network environments without the operator having to worry about the details of implementing the desired data collection. 
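To make the proposed loop concrete, the following is a minimal sketch of one iteration-controlled run of the augmented pipeline. It is a sketch only: the collect and refine_intent callables are hypothetical placeholders for the data-collection capability discussed below, and a shallow surrogate decision tree stands in for a full explainability tool.

```python
# Sketch of the augmented (iterative) ML pipeline; the data-collection and
# intent-refinement callables are placeholders, not an actual platform API.
from typing import Callable, Tuple
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

Dataset = Tuple[np.ndarray, np.ndarray]  # (features, labels)

def run_iterations(collect: Callable[[dict], Dataset],
                   refine_intent: Callable[[dict, list], dict],
                   intent: dict,
                   holdout: Dataset,
                   max_iters: int = 3) -> RandomForestClassifier:
    """Train, explain, and re-collect until no single feature dominates."""
    model = None
    for _ in range(max_iters):
        X_train, y_train = collect(intent)                  # endogenous data collection
        model = RandomForestClassifier().fit(X_train, y_train)

        # Explainability step: distill the model into a shallow surrogate tree
        # (in the spirit of tools like Trustee) and inspect which features it uses.
        surrogate = DecisionTreeClassifier(max_depth=3)
        surrogate.fit(X_train, model.predict(X_train))
        importances = surrogate.feature_importances_
        suspects = [int(i) for i in np.argsort(importances)[::-1]
                    if importances[i] > 0.8]                 # one dominant feature is a red flag

        X_test, y_test = holdout
        print("holdout F1:", f1_score(y_test, model.predict(X_test), average="macro"))

        if not suspects:                                     # no obvious shortcut left
            break
        intent = refine_intent(intent, suspects)             # guide the next collection round
    return model
```

The loop itself is simple; the hard part in practice is the collect step, which is exactly what a suitable data-collection platform needs to make easy across learning problems and network environments.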
Such a platform can be envisioned as representing the “thin waist" of the classic hourglass model <cit.>, where the different learning problems comprise the top layer, and the different network environments constitute the bottom layer. This perspective is in stark contrast with existing data-collection efforts that are typically “fragmented" in the sense that each effort is custom-designed for a specific learning problem or specific network environment. To realize this “thin waist" analogue, we propose a new data-collection platform, , and present its design and implementation. This platform provides a new programming abstraction that enables (1) decoupling the data-collection intents or policies (i.e., answering what data to collect and from where) from mechanisms (i.e., answering how to collect the desired data on a given platform); and (2) disaggregating the high-level intents into self-contained and reusable tasks. Specifically, by disaggregating a data-collection experiment into multiple pipelines, which are further disaggregated into smaller self-contained tasks, this programming abstraction greatly simplifies collecting data for different learning problems. also lets experimenters map pipelines to one or more abstract data-collection nodes that are identified by their static and dynamic attributes, including interface, location, available memory, computing environment, etc. is responsible for compiling the high-level intents into target-specific instructions, deploying them to appropriate data-collection nodes, and executing them to collect data while handling various runtime events, such as links or node failures. Our design choices for are based on practical considerations in the computer networking area that ensure that can successfully realize the expressed intents with high fidelity at scale for disparate learning problems and network environments. Contributions. This paper makes the following contributions: * An alternative ML pipeline. We present the design of a new ML pipeline that augments the standard ML pipeline and supports iterative data collection to improve a model's generalizability (<ref>). * A new data-collection platform. We justify (<ref>) and present the design and implementation (<ref>) of , a new data-collection platform that enables performing iterative data collection for any given learning problem or network environment in concert with applying the alternative ML pipeline in practice. We evaluate both the iterative approach and the platform in <ref> and <ref>. * Artifacts. We make the full source code of the system as well as the datasets used in this paper publicly available (anonymously). Specifically, we have released three repositories: full source code of  <cit.>, a repository of all discussed tasks and data-collection pipelines <cit.>, and other supplemental materials <cit.> (See <ref>). We view the proposed ML pipeline and data-collection platform to be a promising first step toward developing ML-based network security solutions that are generalizable and can therefore be expected to have a better chance of getting deployed. However, much work remains and careful consideration has to be given to the network infrastructure used for data collection and the type of traffic observed in production settings before model generalizability can be guaranteed. 
§ BACKGROUND AND PROBLEM SCOPE We first recall the basic steps of applying the widely-used standard ML pipeline in network security, then discuss existing data-related efforts to improve model generalizability and their limitation, and briefly describe our approach to overcome these limitations. §.§ Existing ML Pipeline for Network Security Key components. The standard ML pipeline (see <ref>) defines a workflow for developing ML artifacts and is widely used in many application domains, including network security. To solve a learning problem (e.g., detecting DDoS attack traffic), the first step is to collect (or choose) labeled data, select a model design or architecture (e.g., random forest classifier), extract related features, and then perform model training using the training dataset. An independent and identically distributed (iid) evaluation procedure is then used to assess the resulting model by measuring its expected predictive performance on test data drawn from the training distribution. The final step involves selecting the highest-performing model from a group of similarly trained models based on one or more performance metrics (e.g., F1-score). The selected model is then considered the ML-based solution for the task at hand and is recommended for deployment and being used or tested in production settings. Data collection mechanisms. As in other application areas of ML, the collection of appropriate training data is of paramount importance for developing effective ML-based network security solutions. In network security, the standard ML pipeline integrates two basic data collection mechanisms: real-world network data collection and emulation-based network data collection. In the case of real-world network data collection, data such as traffic-specific aspects are extracted directly (and usually passively) from a real-world target network environment. While this method can provide datasets that reflect pertinent attributes of the target environment, issues such as encrypted network traffic and user privacy considerations pose significant challenges to understanding the context and correctly labeling the data. Despite an increasing tendency towards traffic encryption  <cit.>, this approach still captures real-world networking conditions but often restricts the quality and diversity of the resulting datasets. Regarding emulation-based network data collection, the approach involves using an existing or building one's own emulated environment of the target network and generating (usually actively) various types of attack and benign traffic in this environment to collect data. Since the data collector has full control over the environment, it is, in general, easy to obtain ground truth labels for the collected data. While created in an emulated environment, the resulting traffic is usually produced by existing real-world tools. Many widely used network datasets, including the still-used DARPA1998 dataset <cit.> and the more recent CIC-IDS intrusion detection datasets <cit.> have been collected using this mechanism. §.§ Model Generalizability Issues Although existing emulation-based mechanisms have the benefit of providing datasets with correct labels, the training data is often riddled with problems that prevent trained models from generalizing, thus making them ill-suited for real-world deployment. There are three main reasons why these problems can arise. 
First, network data is inherently complex and heterogeneous, making it challenging to produce datasets that do not contain inductive biases. Second, emulated environments typically differ from the target environment – without perfect knowledge of the target environment's configurations, it is difficult to accurately mimic it. The result is datasets that do not fully represent all the target environment's attributes. Third, shifting attack (or even benign) behavior is the norm, resulting in training datasets that become less representative of newly created testing data after the model is deployed. These observations motivate considering the following concrete issues concerning the generalizability of ML-based network security solutions but note that there is no clear delineation between notions such as credible, trustworthy or robust ML models and that the existing literature tends to blur the line between these (and other) notions and what we refer to as model generalizability. Shortcut learning. As discussed in <cit.>, ML-based security solutions often suffer from shortcuts. Here, shortcuts refer to encoded/inductive biases in a trained model that stem from false or non-causal associations in the training dataset <cit.>. These biases can lead to a model not performing as desired in deployment scenarios, mainly because the test datasets from these scenarios are unlikely to contain the same false associations. Shortcuts are often attributable to data-collection issues, including how the data was collected (intent) or from where it was collected (environment). Recent studies have shown that shortcut learning is a common problem for ML models trained with datasets collected from emulated networking environments. For example, <cit.> found that the reported high F1-score for the VPN vs. non-VPN classification problem in <cit.> was due to a specific artifact of how this dataset was curated. Out-of-distribution issues. Due to unavoidable differences between a real-world target environment and its emulated counterpart or subtle changes in attack and/or benign behaviors, out-of-distribution (ood) data is another critical factor that limits model generalizability. The standard ML pipeline's evaluation procedure results in models that may appear to be well-performing, but their excellent performance can often be attributed to the models' innate ability for “rote learning”, where the models cannot transfer learned knowledge to new situations. To assess such models' ability to learn beyond iid data, purposefully curated ood datasets can be used. For network security problems, ood datasets of interest can represent different real-world network conditions (e.g., different user populations, protocols, applications, network technologies, architectures, or topologies) or different network situations (also referred to as distribution shift <cit.> or concept drift <cit.>). For determining whether or not a trained model generalizes to different scenarios, it is important to select ood datasets that accurately reflect the different conditions that can prevail in those scenarios. §.§ Existing Approaches We can divide the existing approaches to improving a model's generalizability into two broad categories: (1) Efforts for improving model selection, training, and testing algorithms; and (2) Efforts for improving the training datasets. 
The first category focuses mainly on the later steps in the standard ML pipeline (see <ref>) that deal with the model's structure, the algorithm used for training, and the evaluation process. The second category is concerned with improving the quality of datasets used during model training and focuses on the early steps in the standard ML pipeline. Improving model selection, training, and evaluation. The focal point of most existing efforts is either the model's structure (e.g., domain adaption <cit.> and multi-task learning <cit.>), or the training algorithm (e.g., few-shot learning <cit.>), or the evaluation process (e.g., ood detection <cit.>). However, they neglect the training dataset, mainly because it is in general assumed to be fixed and already given. While these efforts provide insights into improving model generalizability, studying the problem without the ability to actively and flexibly change the training dataset is difficult, especially when the given training dataset turns out to exhibit inductive biases, be noisy or of low quality, or simply be non-informative for the problem at hand <cit.>. See <ref> for a more detailed discussion about existing model-based efforts and how they differ from our proposed approach described below. Improving the training dataset. Data augmentation is a passive method for synthesizing new or modifying existing training datasets and is widely used in the ML community to improve models' generalizability. Technically, data augmentation methods leverage different operations (e.g., adding random noise <cit.>, using linear interpolations <cit.> or more complex techniques) to synthesize new training samples for different types of data such as images <cit.>, text <cit.>, or tabular data <cit.>. However, using such passive data-generation methods for the network security domain is in general inappropriate or counterproductive because they often result in unrealistic or even semantically meaningless datasets <cit.>. For example, since network protocols usually adhere to agreed-upon standards, they constrain various network data in ways that such data-generation methods cannot ensure without specifically incorporating domain knowledge. Besides that, various network environments can induce significant differences in observed communication patterns, even when using the same tools or considering the same scenarios <cit.>, by influencing data characteristics (such as packet interarrival times, packet sizes, or header information) and introducing unique network conditions or patterns. §.§ Limitations of Existing Approaches From a network security domain perspective, these existing approaches miss out on two aspects that are intimately related to improving a model's ability to generalize: (1) Leveraging insights from model explainability tools, and (2) ensuring the realism of collected training datasets. Using explainable ML techniques. To better scrutinize an ML model's weaknesses and understand model errors, we argue that an additional explainability step that relies on recent advances in explainable ML should be added to the standard ML pipeline to improve the ML workflow for network security problems <cit.>. The idea behind adding such a step is that it enables taking the output of the standard ML pipeline, extracting and examining a carefully-constructed white-box model in the form of a decision tree, and then scrutinizing it for signs of blind spots in the output of the standard ML pipeline. 
If such blind spots are found, the decision tree and an associated summary report can be consulted to trace their root causes to aspects of the training dataset and/or model specification that led the output to encode inductive biases. Ensuring realism in collected training datasets. To beneficially study model generalizability from the training dataset perspective, we posit that for the network security domain, the collection of training datasets should be done endogenously or in vivo; that is, performed or taking place within the network environment of interest. Given that network-related datasets are typically the result of intricate interactions between different protocols and their various embedded closed control loops, accurately reflecting these complexities associated with particular deployment settings or traffic conditions requires collecting the datasets from within the network. §.§ Our Approach in a Nutshell We take a first step towards a more systematic treatment of the model generalizability problem and propose an approach that (1) uses an augmented ML pipeline and (2) calls for running this pipeline in its entirety multiple times, each time with a possibly different model specification but always with a different training dataset compared to the original one. Here, we use an augmented ML pipeline (<ref>) that differs from the standard pipeline by including an explanation step. Also, each new training dataset used as part of a new run of the augmented ML pipeline is assumed to be endogenously collected and not exogenously manipulated. The collection of each new training dataset is actively guided by a root cause analysis of inductive bias(es) in the trained model. This analysis leverages existing explainability tools such as Trustee <cit.> or others <cit.> that are provided as part of the explainability step. This guided data-collection effort promises to enhance the quality of given training datasets by gradually reducing the presence of inductive biases that are identified by our approach. Importantly, this effort results in trained models that are more likely to generalize. Note however that our proposed approach does not guarantee model generalizability. Instead, by eliminating identified inductive biases in the form of shortcuts and ood data, our approach enhances a model's generalizability capabilities. Our proposed approach differs from existing approaches in several ways. First, it reduces the burden on the user or domain expert to select the “right” training dataset apriori. Second, it calls for the collection of training datasets that are endogenously generated and where explainability tools guide the decision-making about what “better" data to collect. Third, it proposes using multiple training datasets, collected iteratively (in a fail-fast manner), to combat the underspecification of the trained models and thus enhance model generalizability. In particular, it recognizes that an “ideal” training dataset may not be readily available in the beginning and argues strongly against attaining it through exogenous means. § ON “IN VIVO” DATA-COLLECTION In this section, we discuss some of the main issues with existing data-collection efforts and describe our proposed approach to overcome their shortcomings. §.§ Problems with Existing Approaches Data collection operations. We refer to collecting data for a learning problem from a specific network environment (or domain) as a data-collection experiment. 
We divide such a data-collection experiment into three distinct operations. (1) Specification: expressing the intents that specify what data to collect or generate for the experiment. (2) Deployment: bootstrapping the experiment by translating the high-level intents into target-specific commands and configurations across the physical or virtual data-collection infrastructure and implementing them. (3) Execution: orchestrating the experiment to collect the specified data while handling different runtime events (e.g., node failure, connectivity issues, etc.). Here, the first operation is concerned with “what to collect," and the latter operations deal with “how to collect" this data. The “fragmentation” issue. Existing data-collection efforts are inherently fragmented, i.e., they only work for a specific learning problem and network environment, emulated using one or more network infrastructures (<ref>). Extending them to collect data for a new learning problem or from a new network environment is challenging. For example, consider the data-collection effort for the video fingerprinting problem <cit.>, where the goal is to fingerprint different videos for video streaming applications (e.g., YouTube) using a stream of encrypted network packets as input. Here, the data-collection intent is to start a video streaming session and collect the related packet traces from multiple end hosts that comprise a specific target environment. The deployment operation entails developing scripts that automate setting up the computing environment (e.g., installing the required selenium package) at the different end hosts. The execution operation requires developing a runtime system to start/stop the experiments and handle runtime events such as node failure, connectivity issues, etc. Lack of modularity. In addition to being one-off in nature, existing approaches to collecting data for a given learning problem are also monolithic. That is, being highly problem-specific, there is, in general, no clear separation between experiment specification and mechanisms. An experimenter must write scripts that realize the data-collection intents (e.g., start/stop video streaming sessions, collect pcaps, etc.), deploy these scripts to one or more network infrastructures, and execute them to collect the required data. Given this monolithic structure, existing data collection approaches <cit.> cannot easily be extended so that they can be used for a different learning problem, such as inferring QoE <cit.> or for a different network environment, such as congested environments (e.g., hotspots in a campus network) or high-latency networks (e.g., networks that use GEO satellites as access link). Disparity between virtual and physical infrastructures. While a number of different network emulators and simulators are currently available to researchers <cit.>, it is, in general, difficult or impossible to write experiments that can be seamlessly transferred from a virtual to a physical infrastructure and back. This capability has special appeal in view of the fact that virtual infrastructures provide the ability to quickly iterate on data collection and test various network conditions, including conditions that are complex in nature and in general difficult to achieve in physical infrastructures. 
Lacking this capability, experimenters often end up writing experiments for only one of these infrastructures, creating different (typically simplified) experiment versions for physical test beds, or completely rewriting the experiments to account for real-world conditions and problems (e.g., node and link failures, network synchronization). Missed opportunity. Together, these observations highlight a missed opportunity for researchers who now have access to different network infrastructures. The list includes NSF-supported research infrastructures, such as EdgeNet <cit.>, ChiEdge <cit.>, Fabric <cit.>, PAWR <cit.>, virtual emulators and simulators <cit.>, as well as on-demand infrastructure offered by different cloud service providers, such as AWS <cit.>, Azure <cit.>, Digital Ocean <cit.>, GCP <cit.>, etc. This rich set of network infrastructures can aid in emulating diverse and representative network environments for data collection. §.§ An “Hourglass” Design to the Rescue The observed fragmented, one-off, and monolithic nature of how training datasets for network security-related ML problems are currently collected motivates a new and more principled approach that aims at lowering the threshold for researchers wanting to collect high-quality network data. Here, we say a training dataset is of high quality if the model trained using this dataset is not obviously prone to inductive biases and, therefore, likely to generalize. Our hourglass model. Our proposed approach takes inspiration from the classic “hourglass” model <cit.>, a layered systems architecture that, in our case, consists of designing and implementing a “thin waist” that enables collecting data for different learning problems (hourglass' top layer) from a diverse set of possible network environments (hourglass' bottom layer). In effect, we want to design the thin waist of our hourglass model in such a way that it accomplishes three goals: (1) allows us to collect a specified training dataset for a given learning problem from network environments emulated using one or more supported network infrastructures, (2) ensures that we can collect a specified training set for each of the considered learning problems for a given network environment, and (3) facilitates experiment reproducibility and shareability. Requirements for a “thin waist”. Realizing this hourglass model's thin waist requires developing a flexible and modular data-collection platform that supports two main functionalities: (1) decoupling data-collection intents (i.e., expressing what to collect and from where) from mechanisms (i.e., how to realize these intents); and (2) disaggregating intents into independent and reusable tasks. The first functionality allows the experimenter to focus on the experiment's intent without worrying about how to implement it. As a result, expressing a data-collection experiment does not require re-doing tasks related to deployment and execution in different network environments. For instance, to ensure that the learning model for video fingerprinting is not overfitted to a specific network environment, collecting data from different environments, such as congested campus networks or cable- and satellite-based home networks, is important. Not requiring the experimenter to specify the implementation details simplifies this process. Providing support for the second functionality allows the experimenter to reuse common data-collection intents and mechanisms for different learning problems. 
For instance, while the goal for QoE inference and video fingerprinting may differ, both require starting and stopping video streaming sessions on an end host. Ensuring these two required functionalities makes it easier for an experimenter to iteratively improve the data collection intent, addressing apparent or suspected inductive biases that a model may have encoded and may affect the model's ability to generalize. § REALIZING THE “THIN WAIST” IDEA To achieve the desired “thin waist” of the proposed hourglass model, we develop a new data-collection platform, . We identify two distinct stakeholders for this platform: (1) experimenters who express data-collection intents, and (2) developers who develop different modules to realize these intents. In <ref>, we describe the programming abstractions that considers to satisfy the “thin” waist requirements, and in  <ref>, we show how realizes these abstractions while ensuring fidelity, scalability, and extensibility. §.§ Programming Abstractions To satisfy the second requirement (disaggregation), allows experimenters to disaggregate their intents into distinct pipelines and tasks. Specifically, offers experimenters Task and Pipeline abstractions. Experimenters can structure data collection experiments by utilizing multiple independent pipelines. Each pipeline can be divided into several processing stages, where each stage conducts self-contained and reusable tasks. In each stage, the experimenter can specify one or more tasks that will execute concurrently. Tasks in the next stage will only be executed once all tasks in the previous stage have been completed. To satisfy the first requirement, offers a unified interface for all tasks. To this end, it relies on abstractions that concern specifics of the computing environment (e.g., containers, shell access, etc.) and executing target (e.g., ARM-based Raspberry Pis, AMD64-based computers, OpenWRT routers, etc.) and allows for flexible and universal task implementation. To further decouple intents from mechanisms, 's API exposes the Nodes object to the experimenters. This object abstracts the underlying physical or virtual infrastructure as a pool of data-collection nodes. Here, each node can have different static and dynamic attributes, such as type (e.g., Linux host, PISA switch), location (e.g., room, building), resources (e.g., memory, storage, CPU), etc. An experimenter can use the filter operator to select a subset of nodes based on their attributes for data collection. Each node can support one or more compute environments, where each environment can be a shell (command-line interpreter), a Linux container (e.g., Docker <cit.>), a virtual machine, etc. allows users to map pipelines to these nodes using the Experiment object and map operator. Then, experimenters can deploy and execute their experiments using the Client object. <ref> in the appendix summarizes the key components of 's API. Illustrative example. To illustrate with an example how an experimenter can use 's API to express the data-collection experiment for a learning problem, we consider the bruteforce attack detection problem. For this problem, we need to realize three pipelines, where the different pipelines perform the key tasks of running an HTTPS server, sending attacks to the server, and sending benign traffic to the server, respectively. The first pipeline also needs to collect packet traces from the HTTPS server. <ref> shows how we express this experiment using . 
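The listing referenced above is not reproduced in this text; as a rough substitute, the following sketch expresses the same experiment using only the API elements described in this section (Pipeline, node filtering, Experiment with its map operator, and Client). The module name platform_sdk, the task names, the node attributes, and the method signatures (e.g., then, take, deploy, execute) are illustrative guesses rather than the platform's actual interface.

```python
# Rough sketch only: "platform_sdk" is a hypothetical stand-in for the
# (anonymized) platform's client library; task names and node attributes are illustrative.
from platform_sdk import Pipeline, Experiment, Client
from platform_sdk.tasks import (StartHTTPSServer, CapturePCAP, NotifyReady,
                                WaitForReady, PatatorBruteforce, BenignHTTPSRequests)

client = Client(endpoint="https://example.org/api", login="...", password="...")
nodes = client.get_nodes()

# Target: run the HTTPS endpoint and record all of its traffic.
target_pipeline = (Pipeline()
                   .then([StartHTTPSServer(port=443), CapturePCAP(path="server.pcap")])
                   .then(NotifyReady(flag="server-ready")))

# Attackers: wait until the server is up, then bruteforce its API.
attack_pipeline = (Pipeline()
                   .then(WaitForReady(flag="server-ready"))
                   .then(PatatorBruteforce(target="server.example.org")))

# Benign users: ordinary authenticated requests against the same endpoint.
benign_pipeline = (Pipeline()
                   .then(WaitForReady(flag="server-ready"))
                   .then(BenignHTTPSRequests(target="server.example.org")))

experiment = (Experiment()
              .map(target_pipeline, nodes.filter(lambda n: n["role"] == "server").take(1))
              .map(attack_pipeline, nodes.filter(lambda n: n["location"] == "cloud").take(5))
              .map(benign_pipeline, nodes.filter(lambda n: n["location"] == "campus").take(10)))

client.deploy(experiment)
client.execute(experiment)
```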
Lines 1-6 show how we select a host to represent a target server, start the HTTPS server, perform PCAP capture, and notify all other hosts that the server is ready. Lines 8-16 show how we can take hosts from different environments that will wait for the target server to be ready and then launch a bruteforce attack on this node. Lines 18-26 show how we select hosts that represent benign users of the HTTPS server. Finally, lines 28-32 show how we combine pipelines and hosts into a single experiment, deploy it to all participating infrastructure nodes, and start execution. Note that in <ref> we omitted task definitions and instantiation, package imports, client authorization, and other details to simplify the exposition of the system. §.§ System Design The system compiles high-level intents, expressed using the proposed programming abstraction, into target-specific programs. It then deploys and executes these programs on different data-collection nodes to complete an experiment. is designed to realize the high-level intents with fidelity, minimize the inherent computing and communication overheads (scalability), and simplify supporting new data-collection tasks and infrastructures for developers (extensibility). Ensuring high fidelity. is responsible for compiling a high-level experiment into a sequence of target-specific programs. We divide these programs into two broad categories for each task: deployment and execution. The deployment definitions help configure the computing environment to enable the successful execution of a task. For example, executing the YouTubeWatcher task requires installing a Chromium browser and related extensions. Since successful execution of each specified task is critical for satisfying the fidelity requirement, must ensure that the computing environment at the nodes is set up for a task before execution. Addressing the scalability issues. To execute a given pipeline, a system can control deployment and execution either at the task- or pipeline-level granularity. The first option entails the deployment and execution of the task and then reporting results back to the system before executing the next task. It ensures fidelity at the task granularity and allows the execution of pipelines even with tasks with contradicting requirements (e.g., different library versions). However, since such an approach requires communication with core system services, it slows the completion time and incurs additional computing and network communication overheads. Our system implements the second option, running all the setup programs before marking a pipeline ready for execution and then offloading the task flow control to a node-based executor that reports results only at the end of the pipeline. It allows for optimization of environment preparation (e.g., configure a single docker image for distribution) and time overhead between tasks, and also reduces network communication while offering only “best-effort” fidelity for pipelines. Enabling extensibility. Enabling extensibility calls for simplifying how a developer can add a new task, update an existing task for a new target, or add a new physical or virtual infrastructure. Note that the 's extensibility requirement targets developers and not experimenters. Simplify adding and updating tasks. An experimenter specifies a task to be executed in a pipeline. The chooses a specific implementation of this task. 
This may require customizing the computing environment, which can vary depending on the target (e.g., container vs shell of OpenWRT router). For example, a Chromium browser and specific software must be installed to start a video streaming session on a remote host without a display. The commands to do so may differ for different targets. The system provides a base class that includes all necessary methods for a task. Developers can extend this base class by providing their custom subclasses with the target-specific run method to specify how to execute the task for different types of targets. This allows for easy extensibility because creating a new task subclass is all that is needed to adapt the task to a new computing environment. Simplify adding new infrastructures. To deploy data-collection pipelines, send commands, and send/receive different events and data to/from multiple nodes in the underlying infrastructure, requires an underlying deployment system. One option is to explicitly bind to one of the existing deployment (orchestration) systems, such as Kubernetes <cit.>, SaltStack <cit.> , Ansible <cit.>, or others for all infrastructures. However, requiring a physical infrastructure to support a specific deployment system is disruptive in practice. Network operators managing a physical infrastructure are often not amenable to changing their existing deployment system as it would affect other supported services. Another option is to support multiple deployment systems. However, we need to ensure that supporting a new deployment system does not require a major refactoring of 's existing modules. To this end, introduces a separate connectivity module that abstracts away all the connectivity issues from the 's other modules (e.g., runtime), offering seamless connectivity to infrastructures using multiple deployment systems. Each time developers want to add a new infrastructure that uses an unsupported deployment system, they only need to update the connectivity manager — simplifying extensibility. §.§ Prototype Implementation Our implementation of is shown in <ref>.[For brevity, we omit discussing a few less important services (like authentication service).] Our implementation embraces a service-oriented architecture <cit.> and has three key components: client(s), core, and executor(s). Experimenters use local instances of 's client to express their data-collection experiments. Then, 's core is responsible for all the operations related to the compilation, deployment, and execution of an experiment. For each experiment, 's core deploys a target-specific executor on all related data-collection nodes for running and reporting the status of all the programs provided by 's core. The 's core offer three main service groups: mediation, deployment, and execution services. Upon receiving an experiment specification from the client, the mediation service requests the compiler to extract the set of setup configuration for each distinct (pipeline, node-type) pair, which it uploads to the local PostgreSQL database. After compilation, the mediation service requests the connectivity manager to ship this configuration to the appropriate data-collection nodes and verify the computing environment. In the case of docker-based infrastructures, this step is performed locally, and the configured docker image is uploaded to a local docker repository. The connectivity-manager uses an infrastructure-specific deployment system (e.g., SaltStack <cit.>) to communicate with the data-collection nodes. 
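To make the task-extensibility pattern described above concrete, the following is a minimal sketch of a base task class and one target-specific implementation (here, starting a headless video-streaming session on a Linux node, in the spirit of the YouTubeWatcher task mentioned earlier). The class, attribute, and package names are illustrative, not the platform's actual API.

```python
# Illustrative sketch of the task-extensibility pattern; not the released API.
from abc import ABC, abstractmethod

class Task(ABC):
    """Base class: a self-contained unit of work executed on a data-collection node."""
    requirements: list[str] = []          # commands run once during the deployment phase

    @abstractmethod
    def run(self) -> object:
        """Executed on the node by the executor; the return value is reported back."""

class StartYouTubeSessionLinux(Task):
    # Deployment-time setup for a Debian-like target (e.g., a Raspberry Pi).
    requirements = ["apt-get install -y chromium chromium-driver", "pip install selenium"]

    def __init__(self, url: str, duration_s: int = 30):
        self.url = url
        self.duration_s = duration_s

    def run(self) -> str:
        import time
        from selenium import webdriver            # available because of `requirements`
        options = webdriver.ChromeOptions()
        options.add_argument("--headless=new")
        driver = webdriver.Chrome(options=options)
        try:
            driver.get(self.url)                   # start playback
            time.sleep(self.duration_s)            # let the video stream
        finally:
            driver.quit()
        return f"watched {self.url} for {self.duration_s}s"
```

Adapting the same task to a different target, say an OpenWRT router or a Windows host, would then amount to providing another subclass with its own requirements and run method.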
After deploying all the required instructions, the mediation service requests the connectivity manager to instantiate a target-specific executor for all data-collection nodes. The executor uses the instructions shipped in the previous stage to execute a data-collection pipeline. It reports the status and results to 's gateway and then adds them to the related table in the SQL database via the processor. The mediation service retrieves the status information from the database to provide status updates to the experimenter(s). Finally, at the end of an experiment, the mediation service sends cleanup scripts (via connectivity-manager) to each node, ensuring the reusability of the data-collection infrastructure across different experiments. As a part of our prototype, we implemented a number of connectors to different infrastructures or deployment systems. Each of these connectors is configurable, complete, and publicly available at our GitHub organization <cit.>. <ref> provides a list of currently available connectors and corresponding logical lines of code for their implementation. We encourage other researchers to contribute to this effort by improving the existing connectors or creating and publishing new connectors for infrastructures and deployment systems other than the ones listed in <ref>. § EVALUATION: AUGMENTED ML PIPELINE In this section, we demonstrate how our proposed augmented ML pipeline helps to improve model generalizability. Specifically, we seek to answer the following questions: ❶ Does the proposed pipeline help in identifying and removing shortcuts? ❷ How do models trained using the proposed pipeline perform compared to models trained with existing exogenous data augmentation methods? ❸ Does the proposed pipeline help with combating ood issues? §.§ Experimental Setup To illustrate our approach and answer these questions, we consider the bruteforce example mentioned in <ref> and first describe the different choices we made with respect to the ML pipeline and the iterative data-collection methodology. Network environments. We consider three distinct network environments for data collection: a network, a hybrid -cloud setting, and a multi-cloud environment. The network environment is emulated using a programmable data-collection infrastructure. This infrastructure is deployed at a network and consists of multiple (40+) single-board computers (such as Raspberry Pis) connected to the Internet via wired and/or wireless access links. These computers are strategically located in different areas across the campus, including the library, dormitories, and cafeteria. In this setup, all three types of nodes (i.e., target server, benign hosts, and malicious hosts) are selected from end hosts on the campus network. The -cloud environment is a hybrid network that combines programmable end hosts at the campus network with one of three cloud service providers: AWS, Azure, or Digital Ocean.[Unless specified otherwise, we host the target server on Azure for this environment.] In this setup, we deploy the target server in the cloud while running the benign and malicious hosts on the campus network. Lastly, the multi-cloud environment is emulated using all three cloud service providers with multiple regions. We deploy the target server on Azure and the benign and malicious hosts on all three cloud service providers. Data collection experiment. The data-collection experiment involves three pipelines, namely target, benign, and malicious. 
Each of these pipelines is assigned to different sets of nodes depending on the considered network environment. The target pipeline is responsible for deploying a public HTTPS endpoint with a real-world API that requires authentication for access. Additionally, this pipeline utilizes tcpdump to capture all incoming and outgoing network traffic. The benign pipeline emulates valid usage of the API with correct credentials, while the malicious pipeline attempts to obtain the service's data by brute-forcing the API using the Patator <cit.> tool and a predefined list of commonly used credentials <cit.>. Data pre-processing and feature engineering. We used CICFlowMeter <cit.> to transform raw packets into a feature vector of 84 dimensions for each unique connection (flow). These features represent flow-level summary statistics (e.g., average packet length, inter-arrival time, etc.) and are widely used in the network security community <cit.>. Learning models. We train three different learning models: Multi-layer Perceptron (MLP) <cit.>, Gradient Boosting (GB) <cit.>, and Random Forest (RF) <cit.>. These models are commonly used for handling CICFlowMeter features and have been shown to outperform more complex deep models <cit.>. Explainability tool. We leverage global explainability tools to identify shortcuts and ood issues. Among the various types of existing global explainability tools (e.g., PDP plots <cit.>, ALE plots <cit.>, and others <cit.>), we employed the recently developed tool Trustee <cit.>. For any given black-box ML model, this tool generates a high-fidelity and low-complexity decision tree, which provides a detailed explanation of the trained model's decision-making process. It also generates a trust report that simplifies the identification of shortcuts and ood issues in a given trained model. §.§ Identifying and Removing Shortcuts To answer ❶, we consider a setup where a researcher curates training datasets from the environment and aims at developing a model that generalizes to the multi-cloud environment (i.e., unseen domain). Initial setup (iteration #0). We refer to the training data generated from this experiment as -0. <ref> shows that while all three models have a perfect training performance, they all have low testing performance (errors are mainly false positives). <ref> shows that these models almost exclusively use the TTL (time-to-live) feature to discriminate between benign and malicious flows, which is an obvious shortcut. To understand the root cause of this shortcut, we checked the infrastructure and noticed that almost all nodes used for benign traffic generation have the exact same TTL value due to a flat structure of the network. This observation also explains why most errors are false positives, i.e., the model treats a flow as malicious if it has a different TTL from the benign flows in the training set. Existing domain knowledge suggests that this behavior is unlikely to materialize in more realistic settings such as the multi-cloud environment. Consequently, we observe that models trained using the -0 dataset perform poorly on the unseen domain; i.e., they generalize poorly. Removing shortcuts (iteration #1). To fix this issue, we modified the data-collection experiment to use a more diverse mix of nodes for generating benign and malicious traffic and collected a new dataset, -1. However, this change only marginally improved the testing performance for all three models (<ref>). 
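The style of inspection used in these iterations can be illustrated with a small sketch that is a stand-in for, not a reimplementation of, a tool like Trustee: train one of the models on the CICFlowMeter features, distill it into a shallow surrogate decision tree, and check whether a single feature such as the TTL dominates. The file path and column names are hypothetical.

```python
# Minimal sketch of the inspection step; the CSV path and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

flows = pd.read_csv("iteration0_flows.csv")           # CICFlowMeter output plus a "Label" column
X = flows.drop(columns=["Label"])
y = flows["Label"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Distill the black-box model into a depth-limited surrogate tree.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, model.predict(X))

print(export_text(surrogate, feature_names=list(X.columns)))
top = sorted(zip(X.columns, surrogate.feature_importances_),
             key=lambda kv: kv[1], reverse=True)[:3]
print("dominant features:", top)   # near-exclusive reliance on TTL would flag a shortcut
```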
Inspection of the corresponding decision tree shows that all the models use the “Bwd Init Win Bytes” feature for discrimination, which appears to be yet another shortcut. More precisely, this feature quantifies the TCP window size for the first packet in the backward direction, i.e., from the attacked server to the client. It acts as a flow control and reacts to whether the receiver (i.e., HTTP endpoint) is overloaded with incoming data. Although it could be one indicator of whether the endpoint is being brute-force attacked, it should only be weakly correlated with whether a flow is malicious or benign. Given this reasoning and the poor generalizability of the models, we consider the use of this feature to be a shortcut. Removing shortcuts (iteration #2). To remove this newly identified shortcut, we refined the data-collection experiment. First, we created a new task that changes the workflow for the Patator tool. This new version uses a separate TCP connection for each bruteforce attempt and has the effect of slowing down the bruteforce process. Second, we increased the number of flows for benign traffic and the diversity of benign tasks. Using these changes, we collected a new dataset, -2. <ref> shows that the change in data-collection policy significantly improved the testing performance for all models. We no longer observe any obvious shortcuts in the corresponding decision tree. Moreover, domain knowledge suggests that the top three features (i.e., “Fwd Segment Size Average”, “Packet Length Variance”, and “Fwd Packet Length Std”) are meaningful and their use can be expected to accurately differentiate benign traffic from repetitive brute force requests. Note that although the models appear to be shortcut-free, we cannot guarantee that the models trained with these diligently curated datasets do not suffer from other possible encoded inductive biases. Further improvements of these curated datasets might be possible but will require more careful scrutiny of the obtained decision trees and possibly more iterations. §.§ Comparison with Exogenous Methods To answer ❷, we compare the performance of the model trained using -2 (i.e., the dataset curated after two rounds of iterations) with that of models trained with datasets obtained by means of existing exogenous methods. Specifically, we consider the following three methods: * Noise augmentation. A popular data augmentation technique that consists of adding suitably chosen random Gaussian noise <cit.> to the corresponding skewed features during each iteration. * SYMPROD. A popular augmentation method for tabular data – SMOTE <cit.>. This method applies interpolation techniques to synthesize data points that balance the data across different classes. We utilize one of the most recent versions of this method called SYMPROD <cit.>. * Feature drop. A simple method that drops a specified skewed feature from the dataset in each iteration. We apply these methods to the three training datasets curated from the network in the previous experiment. For -0 and -1, we use the two identified skewed features for adding noise or dropping features altogether. As shown in <ref>, the models trained using these exogenous methods perform poorly in all iterations when compared to our approach. This highlights the value of our proposed augmented ML pipeline for iterative data collection and model training. 
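For reference, the three exogenous baselines can be sketched as follows; SMOTE is used here as a readily available stand-in for its SYMPROD variant, and the skewed-feature and label names are hypothetical.

```python
# Sketch of the three exogenous baselines; SMOTE stands in for SYMPROD here,
# and the skewed-feature and label column names are hypothetical.
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE

SKEWED = ["TTL", "Bwd Init Win Bytes"]   # features identified in iterations #0 and #1

def noise_augment(df: pd.DataFrame, cols=SKEWED, scale: float = 0.05, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    out = df.copy()
    for c in cols:                                    # add Gaussian noise to the skewed features
        out[c] = out[c] + rng.normal(0.0, scale * out[c].std(), size=len(out))
    return out

def smote_augment(df: pd.DataFrame, label: str = "Label", seed: int = 0) -> pd.DataFrame:
    X, y = df.drop(columns=[label]), df[label]
    X_res, y_res = SMOTE(random_state=seed).fit_resample(X, y)   # interpolation-based synthesis
    out = pd.DataFrame(X_res, columns=X.columns)
    out[label] = np.asarray(y_res)
    return out

def feature_drop(df: pd.DataFrame, cols=SKEWED) -> pd.DataFrame:
    return df.drop(columns=cols)                      # simply remove the skewed features
```

Note that the first two operations can easily produce values that no real flow would exhibit (e.g., fractional TTLs or negative window sizes), which is precisely the semantic-integrity concern raised next.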
Furthermore, among the three methods considered, the first two methods compromise the semantic integrity of network data, making them unsuitable for addressing generalizability issues for network security problems. §.§ Combating ood-specific Issues To answer ❸, we consider two different scenarios: attack adaptation and environment adaptation. Attack adaptation. We consider a setup where an attacker changes the tool used for the bruteforce attack, i.e., using Hydra <cit.> instead of Patator. To this end, we use to generate a new testing dataset from the infrastructure with Hydra as the bruteforce attack. <ref> shows that the model's testing performance drops significantly (to 0.85 on average). We observe that this drop is attributable to the model's reduced ability to identify malicious flows, which indicates that changing the attack generation tool introduces ood samples, although they belong to the same attack type. To address this problem, we modified the data generation experiment to collect attack traffic from both Hydra and Patator in equal proportions. This change in the data-collection experiment only required 6 LLoC. We retrain the models on this dataset and observe significant improvements in the model's performance on the same test dataset after retraining (see <ref>). Note that we only test one type of ood data where the evolved attack still has the same goal and functionality. For example, an attack can also evolve into another type with a different goal, resulting in ood samples with a new label. Also, we leverage model ensemble and human analysis to identify the ood case. It is possible to replace this step with more automated detection tools to identify ood issues and then use to regenerate training datasets to resolve them. We plan to pursue this issue in our future work. Environment adaptation. As we test the model on the unseen multi-cloud environment, we consider this as an ood issue due to possible feature distribution differences. To address this issue, we use the -cloud environment for data collection. As expected, we observe differences in distribution for some of the features across the two environments (see <ref>). <ref> shows the performance of the models trained using only the data from the environment compared to the one that uses data from both and -cloud environments. Notably, as -cloud is more similar to the multi-cloud environment than , the models trained with additional -cloud data exhibit slight improvements in performance under the test settings. § EVALUATION: We now examine whether lowers the threshold for researchers to collect data for: ❹ different learning problems for a given network environment? ❺ a given learning problem from different environments, emulated using one or more network infrastructures? and ❻ iteratively calibrating the data collection intents for a given learning problem and environment? We also demonstrate ❼ how well does scale for larger data-collection infrastructures, especially the ones equipped with relatively low-end devices, such as RPis? §.§ Experimental Setup Learning problems. Besides the HTTP bruteforce attack detection problem, we explore two more learning problems for this experiment, namely video fingerprinting and advanced persistent threat (APT) detection. In the case of the first additional example, the learning problem is to fingerprint videos for web-based streaming services, such as YouTube, that adopt variable bitrates <cit.>. 
Previous work <cit.> did not evaluate the proposed learning model under realistic network conditions. Thus, to collect meaningful data for this problem, we use a network of end hosts in the infrastructure to collect a training dataset for five different YouTube videos.[Each video is identified with a unique URL.] Specifically, our data-collection intent is specified by the following sequence of tasks: start packet capture, watch a YouTube video in headless mode for 30 seconds, and stop packet capture. We repeat this sequence ten times for each video in a shuffled order and combine it into a single pipeline, where at the end, we upload the collected data to our server. Regarding the second additional example, the learning problem, in this case, is to identify the hosts that some APTs have compromised. To generate data for this learning problem, we write an experiment that mimics the behavior of a compromised host. Specifically, our data-collection intent is as follows: find active hosts using Ping, check with PortScan whether port 443 is open on the active hosts (identified in the previous stage), and then, for each host with an open port 443, launch four different attacks in parallel: CVE-2014-0160 (Heartbleed), CVE-2021-41773 (Apache 2.4.49 path traversal), CVE-2021-44228 (Log4J), and Patator (HTTP admin endpoint bruteforce using the Patator tool). The ML pipeline creates a “semi-realistic” training dataset by combining actively generated attack traffic with passively collected packet traces from a border router of a production network, such as the network.[Note, in theory, we could use to actively collect the benign traffic for this learning problem in addition to the attack traffic. However, generating representative benign traffic for a large and complex enterprise network will require a more complex data-collection infrastructure than the one we use for evaluation. <ref> discusses this issue in greater detail.] We then use this dataset for model training. Note, here we assume that we know the attacker's playbook; that is, the goal in this case is not to demonstrate a realistic attack playbook but to demonstrate that simplifies generating attack traffic for a given APT attack playbook. Network environments. enables emulating network environments for data collection using one or more physical/virtual infrastructures. Previously, we used a SaltStack-based infrastructure at and multiple clouds to emulate various network environments: , -cloud, and multi-cloud. In this experiment, we implement a connector to another infrastructure, Azure Container Instances (ACI), to expand cloud-based environments with serverless Docker containers. During the experiments, containers were dynamically created in multiple regions and used for pipeline execution. Overall, currently supports seven different deployment system connectors (see <ref>). Baseline. To the best of our knowledge, none of the existing platforms/systems offer the desired extensibility, scalability, and fidelity for data collection (see <ref> for more details). To illustrate how simplifies data collection efforts, we consider baselines that directly configure three different deployment/orchestration systems. Specifically, we consider the following deployment systems as baselines: Kubernetes, SaltStack, and Azure Container Instances (ACI). 
For each data-collection experiment, we explicitly compose different tasks to realize different data-collection pipelines, create pipeline-specific docker images, and use existing tools (e.g., kubectl) to map and deploy these pipelines to different nodes. §.§ Simplifying Data Collection Effort We now demonstrate how simplifies data collection for: Different learning problems for a given network environment (❹). <ref> reports the effort in expressing the data-collection experiments for the three learning problems for the network. We observe that only requires 17-35 LLoCs to express the data-collection intent. The network infrastructure uses SaltStack as the deployment system, and we observe that it takes 113-237 LLoC (around 5-13 × more effort) to express and realize the same data-collection intents without . The key enabler here is the set of self-contained tasks that realize different data-collection activities. For each learning problem, <ref> quantifies the overhead of specifying new tasks unique to the problem at hand. Even taking the overheads of expressing these tasks into consideration, collecting the same data from network without requires around 2-3 × more effort. Overall, we implemented around twenty different tasks to bootstrap (see <ref> (in <ref>) for more details). The total development effort for the bootstrapping was around 900 LLoCs. Though this bootstrapping effort is not insignificant, we posit that this effort amortizes over time as this repository of reusable and self-contained tasks will facilitate expressing increasingly disparate data-collection experiments. A given learning problem from multiple network environments (❺). As we discussed before, is inherently extensible, i.e., it can use different sets of network infrastructures to emulate disparate network environments for data collection. With , changing an existing data-collection experiment to collect data from a new set of network infrastructure(s) requires changing only a few LLoCs (2-3 for the examples in <ref>). In contrast, collecting the data for the HTTP Bruteforce detection problem from a cloud infrastructure (ACI) and a Kubernetes cluster requires writing an additional 61 and 74 LLoCs, respectively. This effort is even more intense for video fingerprinting and APT detection problems. The key enabler for simplifying data collection across one or more network infrastructures is 's extensible connectivity-manager that can interface with multiple deployment systems via a system of connectors. In <ref>, we enumerated all the implemented connectors and corresponding logical lines of code (LLoC) for each implementation. Note that this bootstrapping is a one-time effort, and these connectors can be reused across multiple physical infrastructures that are managed using any of the supported deployment systems (e.g., SaltStack, Kubernetes, etc.). Iterative data collection (❻). To iteratively modify data collection intents, the system should allow flexibility in both pipeline modifications and environment changes. We implemented the experiment, described in <ref>, using , for all three environments (, -cloud, and multi-cloud). We report the combined LLoCs for experiment definitions and task implementations in <ref>. As we reused previously implemented connectors, we do not report their LLoC in the table. The table shows that the overhead for iterative updates is minimal. 
While this overhead may also be minimal for more conventional (platform- and problem-specific) solutions, 's abstractions allow for seamless integration of many other platforms, thus providing a means to increase the diversity of the collected datasets further and, in turn, a model's generalizability capabilities. §.§ Scaling Data Collection To quantify the computing and memory overheads of 's core and executors (188), we measure the wall time or elapsed time as a proxy for CPU cycles and use a Python-based memory profiler <cit.>, respectively. Our results show that the executor running on a low-end node such as a Raspberry Pi incurs a computing overhead of approximately 1 second per stage and 0.13 seconds per task while consuming less than 21 MB of memory. Meanwhile, 's core incurs a computing overhead of around five seconds for deployment and 20 seconds for execution in a 20-node infrastructure while consuming less than 417 MB of memory. The details of these experiments can be found in <ref>. § DISCUSSION More learning problems. While not implemented in this paper, we envision that the platform can be used for a wide range of different network security problems, such as censorship measurements <cit.>, website fingerprinting <cit.>, Tor traffic analysis <cit.>, and others. Many of these experiments incorporate an active measurement approach for data collection, labeling, or communication and would benefit from simultaneous multi-infrastructure usage and simplified experiments' reproducibility and shareability. To demonstrate this capability, we additionally implemented an experiment for multi-vantage point validation of the Let's Encrypt ACME challenge <cit.> using (see <ref> for additional details). Usability and Realism. First, a critical step in our proposed method is that we require domain experts to articulate data collection intents. As demonstrated in <ref>, it is often possible to generate appropriate intents with the help of explainable ML models. Our platform design further simplifies the process of translating intents into action, ensuring the usability of our proposed method. Second, our data collection follows an emulation-based mechanism that enables accurate labeling. With our proposed iterative approach, we can eliminate biases from the collected data. Additionally, our platform significantly lowers the threshold for gathering data from multiple environments, enhancing the diversity of the data collected. As demonstrated in <ref>, the data we collected is realistic and representative and can improve the generalizability of trained models in various environments. Limitations of the proposed approach. Active data collection. Our approach uses endogenously generated (labeled) network data from actual network environments. We note that it may also be possible to improve a model's generalizability by means of carefully selected and exogenously generated (passive) data from a production network, but such an approach is beyond the scope of this paper. Feature pre-processing. Curating training datasets entails both data collection and pre-processing. Since data pre-processing remains the same for different versions of the collected data that result from our iterative approach, it poses no problems for the desired “thin waist” of 's design. In this paper, we utilized the CICFlowmeter for pre-processing, which worked well for all considered learning problems. 
While we readily acknowledge that there is more to data pre-processing than CICFlowmeter, we leave the exploration of alternative pre-processing (as well as model selection and optimization) techniques for future work. Decomposing pipelines. We assume that it is possible to decompose a data-collection pipeline into self-contained tasks. However, such a decomposition may be cumbersome for complex learning problems like Puffer <cit.> that require closer service integration. Decoupling pipelines from infrastructures. We assume that it is possible to decouple the data-collection intents from actual infrastructure-specific mechanisms. However, realizing such decoupling may be difficult, especially for experiments where the data-collection tasks are heavily intertwined with a specific attribute of the data-collection node. For example, some IoT security experiments <cit.> require running the data-collection pipeline on specific devices with integrated firmware and pre-defined implementations of closed-source services, which cannot be easily supported by . Programming overheads. Our approach requires experimenters to express new data-collection tasks that are not yet present in 's library. Though this effort will amortize over time, it will only materialize if we succeed in building and incentivizing a broad user community for the proposed platform. Here, we take a first step and make a case for a holistic communal effort to address the data quality and model generalizability issues that have impeded the use of ML-based network security solutions in practice to date. Limitations of the prototype implementation. Data-collection nodes. Our current prototype only supports Linux- or Windows-based nodes, optionally with Docker support to enable full platform capabilities (such as Docker container environments). This restriction is reasonable because of the widespread support for Docker-based containers in current data-collection infrastructures <cit.> and a growing trend to manage Docker-based infrastructures <cit.>. In future work, we plan to extend support to other computing environments, such as OpenWRT routers and PISA switches, which do not natively support Python or Docker. Currently, such extensions are possible using the sidecar model <cit.>, which allows the configuration of nodes without Python support through Python-based APIs, such as P4-runtime <cit.>. § RELATED WORK Alternative approaches for our designs. In principle, it is possible to use existing tools and frameworks to realize the “thin waist" we implemented for data collection, but doing so while achieving 's level of abstraction, extensibility, fidelity, and scalability poses significant challenges (see <ref> for a comparison of with existing tools and frameworks). For example, one possibility is to disaggregate pipelines into tasks with existing workflow-management platforms, such as Airflow <cit.> or others <cit.>. However, there is often no explicit support for mapping these pipelines to specific data-collection nodes or for instantiating multiple copies of tasks – limiting the flexibility of data-collection experiments. Existing CI/CD systems (e.g., Jenkins <cit.> and others <cit.>) allow explicit mapping of pipelines to nodes but typically require specific infrastructure access and configuration, limiting the desired extensibility and fidelity. Besides, they do not optimize inter-task execution time, limiting their ability to scale data-collection scenarios.
Finally, one can also use configuration-management platforms (e.g., SaltStack <cit.>), orchestration platforms (e.g., Kubernetes <cit.>), and others <cit.>. However, these systems lack the desired extensibility and flexibility because, being tailor-made for orchestration, they only work for specific types of infrastructures and do not provide explicit support for the proposed pipelines and stages abstraction, limiting the reusability of tasks and experiments. Passive data augmentation. In computer vision, researchers synthesize novel training data by adding random Gaussian noise to training images or by blurring, rotating, and flipping them <cit.>. However, these methods are specific to images and can rarely be applied beyond vision data. Recent studies propose more application-domain-independent methods, such as mixup <cit.> and SMOTE <cit.>, which can be applied to networking data. However, as demonstrated in <ref>, these methods have limited efficacy in networking applications due to concerns about the legitimacy and correctness of the augmented data. They also generate samples that are similar to the given training data, limiting their generalizability. Another line of data augmentation methods, designed to improve adversarial robustness, generates adversarial samples by adding carefully crafted perturbations to training samples (e.g., <cit.>). Since these perturbations are essentially noise with a non-Gaussian distribution, they suffer similar limitations as adding Gaussian noise. Model-side efforts. Various model-side efforts have also been made to improve model generalizability. In particular, (reinforcement learning-based) domain adaptation methods (e.g., <cit.>) maintain an ML model's efficacy across multiple domains. To generalize across different learning problems, existing research has proposed multi-task learning <cit.> and few-shot learning methods <cit.>. Researchers have also developed advanced models to combat shortcuts <cit.> or out-of-distribution (OOD) issues <cit.>, such as detecting OOD samples with contrastive learning <cit.>. All of these model-side efforts assume that the training data is fixed and already given. These techniques are orthogonal and complementary to our method, which focuses on improving datasets. Our future work will explore combining these two approaches to enable more generalizable and practical models. § CONCLUSION In this paper, we present an augmented ML pipeline to curate high-quality datasets for developing generalizable ML-based solutions for network security problems. Our approach is based on a new data-collection method that leverages advances in explainable ML and emphasizes the need for a flexible “in vivo" collection of training datasets. It takes inspiration from the classic “hourglass” abstraction, where the different learning problems make up the hourglass' top layer, and the different network environments constitute its bottom layer. We realize the “thin waist" of this hourglass abstraction with a new data-collection platform, . In effect, for each learning problem, enables data collection in multiple network environments, and for each network environment, it facilitates data collection for multiple learning problems.
Through extensive experiments that involve different network security problems and consider multiple network infrastructures, we demonstrate how , in conjunction with the use of explainable ML tools, simplifies data collection for different learning problems from diverse network environments, enables iterative data collection for advancing the development of generalizable ML models, and improves the reproducibility, reusability, and shareability of network security experiments. abbrv § VALIDATING LET'S ENCRYPT CHALLENGES FROM MULTIPLE VANTAGE POINTS. In this scenario, we consider the task of domain name validation via the Let's Encrypt “challenges,” as defined by the ACME standard. Recent papers <cit.> argue that for validating these challenges, it is important to use multiple vantage points that are located in different geographically and logically dispersed networks so as to avoid BGP attacks and prevent the validation of malicious requests. We implemented the DNS-01 and HTTP-01 validation protocols for the ACME challenge using 's abstractions and created an experiment with nodes from two different infrastructures (and multi-region Azure), thus effectively reproducing the multi-vantage point scenario from the original paper <cit.>. We enhanced the experiment by dynamic node selection, making possible BGP attacks more complicated due to a priori unknown vantage point locations. In total, we expressed this experiment using 14 LLoCs, excluding challenge protocol implementation (see corresponding tasks in <ref>). § EXPANDING ITERATIVE COLLECTION We also consider an expanded version of the experiment conducted in <ref>. In this version, we use the environment for training and both the campus-cloud and multi-cloud environments for testing. In addition, instead of having a fixed testing dataset, we collect testing datasets using the same experiment modifications as for the training infrastructure, mitigating the possible distribution difference between training and testing data. Results are presented in <ref> and align with the original experiment in <ref>, showing that model generalizability improves with each iteration. § IMPLEMENTED TASKS DESCRIPTION We briefly describe the full list of tasks that we implemented for . For each task, we provide the task intent, the number of logical lines of code (LLoC) for standard task implementation, and the number of LLoC to implement a wrapper for . The results are provided in the <ref>. § SCALING DATA COLLECTION We quantify how our design choices help reduce the computing and memory overheads incurred by 's core and executor(s). Executors. Recall that for each experiment, 's mediation service requests the connectivity-manager to instantiate an executor for all the participating data-collection nodes. Our goal is to quantify the executor's overhead for a (relatively) low-end data-collection node (i.e., a Raspberry Pi (RPi) 4B device) in our infrastructure. To ensure that our measurements are not skewed by the nature of the data-collection tasks, processing stages, and pipelines, we created custom pipelines with varying numbers of tasks and stages for our evaluation. Specifically, we evaluated four pipelines: (1) a short pipeline with one stage and one task, (2) a short pipeline with two stages and ten tasks per stage, (3) a long pipeline with 100 stages and one task per stage, and (4) a long pipeline with 100 stages and ten tasks per stage. Each task in all these pipelines sleeps for 5 seconds. 
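A hedged sketch of how such synthetic benchmark pipelines can be constructed and timed is shown below; the stage and pipeline structures are illustrative, and a real executor would spawn the tasks of a stage as separate processes rather than threads.

```python
# Hedged sketch of the synthetic benchmark pipelines used above: each stage holds
# a number of 5-second sleep tasks, and overhead is estimated as measured wall
# time minus the ideal per-stage sleep time. Structures mirror the description in
# the text but are otherwise illustrative.
import time
from concurrent.futures import ThreadPoolExecutor


def sleep_task() -> None:
    time.sleep(5)


def build_pipeline(num_stages: int, tasks_per_stage: int):
    return [[sleep_task for _ in range(tasks_per_stage)] for _ in range(num_stages)]


BENCHMARKS = {
    "short_1x1": build_pipeline(1, 1),
    "short_2x10": build_pipeline(2, 10),
    "long_100x1": build_pipeline(100, 1),
    "long_100x10": build_pipeline(100, 10),
}


def run_and_measure(pipeline):
    """Run stages sequentially; tasks within a stage run concurrently (threads here)."""
    start = time.monotonic()
    for stage in pipeline:
        with ThreadPoolExecutor(max_workers=len(stage)) as pool:
            for future in [pool.submit(task) for task in stage]:
                future.result()
    elapsed = time.monotonic() - start
    ideal = 5 * len(pipeline)          # each stage ideally takes ~5 s
    return elapsed, elapsed - ideal


if __name__ == "__main__":
    name = "short_2x10"
    elapsed, overhead = run_and_measure(BENCHMARKS[name])
    print(f"{name}: wall time {elapsed:.1f} s, estimated overhead {overhead:.1f} s")
```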
For each pipeline, we quantify the executor's computing overhead as the difference between the completion time for different tasks and processing stages and related sleep times. We observe that the executor's average computing overhead is 1 second per stage and 0.13 seconds per task in all pipelines, including the overhead for process spawning, data serialization, and results collection. We measure the executor's memory overhead using a Python-based tool, memory-profiler <cit.>. We observe that the executor's total memory overhead is 20.2 MB, for pipeline sizes varying from 1 to 19 KB. These results show that the executor's low computing and memory overheads will not negatively impact the pipeline's completion time or data quality, even for low-end devices like RPis. 's core. To quantify the overheads incurred by 's core, we use the data-collection experiment for the bruteforce attack detection problem. For this experiment, we collect data from two infrastructures: (with RPis) and Azure Container Instances (ACI) (with AMD64-based Linux containers). For both infrastructures, we expressed experiments that use N different number of data-collection nodes, with N=1, 10, or 20. For both of these infrastructures, it is possible to configure the computing environment locally and ship the configured docker image to the data-collection nodes. We report two metrics to quantify the computing overheads: deployment overhead and execution overhead. Deployment overhead measures the wall-clock time between the instance when an experiment is submitted and the instance when it is ready for execution, minus the time it takes to configure the docker image and distribute the instructions to the respective data-collection nodes. Execution overhead measures the wall-clock time between the start and end times of an experiment, minus the wall-clock time for individual tasks. We refer to <ref> for more details about an experiment's lifecycle in for docker-based infrastructures. <ref> shows the wall-clock overhead for both stages. Note that we report the image distribution time as part of the execution overhead for the Azure Container Instances – due to available operations in Azure Cloud SDK, it is impossible to separate these stages. We also measured the total memory overhead of the platform on our servers (a single SuperMicro server platform with AMD64 architecture and Ubuntu 22.04). All services (6 in total) were implemented using Python 3.11, deployed in Docker containers, and consumed in total 240 MB. In addition, the platform requires a PostgreSQL database for storing states, pipelines, and results, and optionally a private docker repository for image storage. In summary, this evaluation shows the memory and computing efficiency of 's core and executor(s) and demonstrates 's ability to scale data-collection to realistic settings. § EXPERIMENT PREPARATION AND EXECUTION BREAKDOWN We provide a breakdown of a typical experiment preparation and execution with a Docker environment: * User defines or imports tasks that should be executed on the nodes and combines them into pipelines. * User requests a node pool from the platform, defines an experiment by assigning pipelines to nodes, and submits it to . * Platform analyzes the assignment of pipelines and defines Docker images to compile. This stage could be skipped if for all pipelines a pre-built custom image is provided. * 's service compiles requested images and uploads them to a repository. * requests connector to upload images to the nodes. 
This stage could be skipped if custom images were provided and they are already presented on the target nodes. * marks the experiment as READY. * User requests the platform to start a READY experiment. * requests connector to distribute the start command to all ready nodes participating in the experiment. * Each node starts the container with an executor which executes the tasks and reports results back to the platform. * The platform waits for all nodes to report their results or times out, and then sets the experiment status to FINISHED. § COMPARISON WITH EXISTING CLASSES OF TOOLS. Below, we provide a more detailed comparison of with existing classes of tools that are suitable for data collection purposes in the networking area <cit.> and are mentioned in <ref>. We consider three main classes of tools that can enable data collection for our scenarios and provide a combined description of their differences from our system in <ref>. Workflow management platforms. These solutions are designed to define and execute a data processing pipeline that uses currently existing platforms such as Airflow <cit.>, SnakeMake <cit.>, Luigi <cit.>, or Dagster <cit.>. Unfortunately, these systems do not always provide convenient ways of selecting nodes for code execution (relying on affinity settings, like Airflow Kubernetes operator), which is critical for network experiments that require precise control over the data collection. They also rarely try to minimize system overhead (especially between task execution) and require nodes to have a constant stable connection to the platform, which is not always available in our scenarios (e.g., nodes could be situated in remote locations with intermittent network connectivity). Orchestration platforms. These systems are typically used to change the configuration of controlled nodes (servers, laptops, etc.) or deploy containers or virtual machines to particular nodes. Common examples of these systems are Ansible <cit.>, SaltStack <cit.>, Chef <cit.>, Puppet <cit.>, and Kubernetes <cit.> and VMware vSphere <cit.> for containers and VMs deployment. These systems typically need a specific infrastructure setup and administration, which requires root access to nodes. They are challenging to integrate with or run alongside other systems, limiting their implementation in other infrastructures. These systems' pipelines (also called playbooks) are often customized with unique information about certain nodes, complicating mapping them to other nodes or infrastructures. Continuous integration and continuous delivery tools. These tools provide a way to execute a set of instructions on specified nodes, usually for application development automation or deployment. The most popular examples of such systems are Jenkins <cit.>, Gitlab CI/CD <cit.>, and Github Actions <cit.>. These tools can be adjusted for data collection. Still, they do not optimize important data generation properties (such as overhead between tasks), use declarative language for configuration, do not separate deployment and execution of pipelines, or restrict the scalability of solutions (e.g., GitHub Actions Free plan supports only 20 parallel jobs, and only up to 180 parallel jobs in GitHub Enterprise). Specialized data-collection platforms and infrastructures. This category includes platforms designed for specific (often community-based) data-collection experiments. Popular examples include platforms such as RIPE Atlas <cit.>, Stanford's Puffer project <cit.>, Netrics <cit.>, etc. 
Unfortunately, these platforms cannot be easily extended to support data collection for multiple learning problems from one or more network environments. § SOURCE CODE AND SUPPLEMENTARY MATERIALS In this section, we describe the repositories and their purpose. 's code. The system's code is available in this repository: <https://github.com/netunicorn/netunicorn>. It contains all of 's code for deploying core services of the system on an arbitrary infrastructure, supported by existing connectors. This repository also contains technical documentation of the system and examples of use cases. 's library. The library of task and pipeline implementations is available at: <https://github.com/netunicorn/netunicorn-library>. This repository contains all tasks, mentioned in this paper, together with other tasks, contributed by the community. We encourage users of the system to freely propose requests to include their task and pipeline implementations for public use by the community. Paper's supplemental materials. The paper's supplemental materials (such as the experiments' code, collected datasets, and required Dockerfiles) are available in this repository: <https://github.com/netunicorn/netunicorn-search>. While supporting the work described in this paper, this repository will not be used for further system development.
http://arxiv.org/abs/2306.01844v1
20230602180314
Stepped Partially Acoustic Dark Matter: Likelihood Analysis and Cosmological Tensions
[ "Manuel A. Buen-Abad", "Zackaria Chacko", "Can Kilic", "Gustavo Marques-Tavares", "Taewook Youn" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
http://arxiv.org/abs/2306.01863v1
20230602183529
Embedding Security into Ferroelectric FET Array via In-Situ Memory Operation
[ "Yixin Xu", "Yi Xiao", "Zijian Zhao", "Franz Müller", "Alptekin Vardar", "Xiao Gong", "Sumitha George", "Thomas Kämpfe", "Vijaykrishnan Narayanan", "Kai Ni" ]
cs.ET
[ "cs.ET" ]
[ Ilon Joseph June 2, 2023 ================ Non-volatile memories (NVMs) have the potential to reshape next-generation memory systems because of their promising properties of near-zero leakage power consumption, high density and non-volatility. However, NVMs also face critical security threats that exploit the non-volatile property. Compared to volatile memory, the capability of retaining data even after power down makes NVM more vulnerable. Existing solutions to address the security issues of NVMs are mainly based on Advanced Encryption Standard (AES), which incurs significant performance and power overhead. In this paper, we propose a lightweight memory encryption/decryption scheme by exploiting in-situ memory operations with negligible overhead. To validate the feasibility of the encryption/decryption scheme, device-level and array-level experiments are performed using ferroelectric field effect transistor (FeFET) as an example NVM without loss of generality. Besides, a comprehensive evaluation is performed on a 128x128 FeFET AND-type memory array in terms of area, latency, power and throughput. Compared with the AES-based scheme, our scheme shows ∼22.6×/∼14.1× increase in encryption/decryption throughput with negligible power penalty. Furthermore, we evaluate the performance of our scheme over the AES-based scheme when deploying different neural network workloads. Our scheme yields significant latency reduction by 90% on average for encryption and decryption processes. empty § INTRODUCTION The proliferation of smart edge devices has led to a massive influx of data, necessitating high-capacity and energy-efficient memory solutions for storage and processing. Traditional volatile memories, such as static random access memory (SRAM) and dynamic RAM (DRAM), are struggling to meet the demands due to their significant leakage power and low density<cit.>. To address this issue, high-density NVMs, such as mainstream vertical NAND flash, has become the cornerstone of modern massive information storage. NVM offers nonvolatility, zero leakage power consumption, and high density if integrated in dense 3D form<cit.>. Various emerging NVM technologies are being pursued targeting different levels of the memory hierarchy, e.g., as storage class memory or even as on-chip last-level cache, including 3D XPoint based on phase change memory (PCM)<cit.>, sequential or vertical 3D resistive memory, and back-end-of-line ferroelectric memory. Beyond simple data storage, NVM is playing an increasingly important role in data-centric computing, particularly in the compute-in-memory (CiM) paradigm. Within this paradigm, computation takes place in the analog domain within the memory array, eliminating the energy and latency associated with data transfer in conventional computing hardware. This has the potential to pave the way for sustainable data-intensive applications, particularly in the field of artificial intelligence, which is rapidly advancing with exponentially growing models. Hence NVM will be a crucial electronic component for ensuring sustainable computing in the future. < g r a p h i c s > parbox=none Motivation and potential applications. Potential applications of memory encryption techniques: (a) To prevent from Stolen DIMM attacks, (b) To ensure AI privacy, and (c) To implement in secure encrypted virtualization (SEV) (d) Without protection, NVMs become vulnerable after power down. (e) NVMs with AES-embedded can be protected after power down but with high encryption overheads. 
(f) With the proposed encryption scheme, NVMs can be protected after power down with minimal penalty. However, the nonvolatility of NVM also brings many new security challenges and concerns<cit.>that were absent in conventional volatile memories. One of the major threats occurs when a NVM is stolen or lost, the malicious attackers may exploit the unique properties of NVM to get unauthorized accesses by low-cost tampering and then easily extract all the sensitive information stored in the devices, such as users' passwords and credit card numbers, out of the memory, and is also known as the "stolen memory attack". Compared to volatile memory such as SRAM which is considered safe due to the loss of data after power down, NVM retains data indefinitely, making them vulnerable after the system is powered down, as shown in Fig. <ref>(d). Besides, with the increasing demand of intensive computation and the stronger desire of large data capacity, replacing some parts of storage systems with NVMs increases the incentive to attack the system and makes more data vulnerable. Hence, the security vulnerability of NVM has become a critical issue for information-sensitive systems. To address the above issue and ensure data security in modern NVM systems, data encryption is the most common approach. AES is the most common and widely-used cryptographic algorithm<cit.>. It is a symmetrical block cipher algorithm including two processes – encryption and decryption, which converts the plaintext (PT) to the ciphertext (CT) and converts back by using 128-, 192-, or 256-bits keys. Because of the high security and high computation efficiency it provides, AES algorithm has attracted many researchers to actively explore its related hardware implementations and applications in a wide range of fields, such as wireless communication<cit.>, financial transactions<cit.> etc. In addition, a variety of AES-based encryption techniques were proposed aiming to address the aforementioned NVM security issues and improve the security of NVM. However, AES encryption and decryption incurs significant performance and energy cost due to extra complexity involved with read and write operations, as shown in Fig. <ref>(e). An incremental encryption scheme, called as i-NVMM, was proposed to reduce the latency overhead<cit.>, in which different data in NVMs is encrypted at different times depending on what data is predicted to be useful to the processor. By doing partial encryption incrementally, i-NVMM can keep the majority of memory encrypted while incurring affordable encryption overheads. However, i-NVMM relies on the dedicated AES engine that is impacted by limited bandwidth. Other prior works have proposed near-memory and in-memory encryption techniques as solutions to address the performance issues. For instance, AIM, which refers to AES in-memory implementation, supports one in-memory AES engine that provides bulk encryption of data blocks in NVMs for mobile devices<cit.>. In AIM, encryption is executed only when it's necessary and by leveraging the benefit of the in-memory computing architecture, AIM achieves high encryption efficiency but the bulk encryption limits support fine-grain protection. In summary, prior AES-based encryption schemes fail to efficiently address the aforementioned security issues in NVMs without incurring negligible costs. Our effort aims to break the dilemma between encryption/decryption performance and cost by finding a satisfactory solution to address the security vulnerability issue. 
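For context, the sketch below illustrates what an AES-based write path conceptually does for every memory block: each line is passed through an AES engine before being written and through the inverse transform on every read, which is the source of the latency and energy overhead discussed above. This is a generic software illustration using a standard Python library, not the design of any particular AES in-memory or near-memory accelerator.

```python
# Generic illustration of per-block AES encryption on the NVM write path and the
# corresponding decryption on the read path (software sketch only; real designs
# use a hardware AES engine). Requires the 'cryptography' package.
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)        # 128-bit key held by the memory controller
BLOCK_BYTES = 64            # e.g., one cache line


def write_block(plaintext: bytes, block_addr: int, nvm: dict) -> None:
    nonce = block_addr.to_bytes(8, "little") + os.urandom(8)  # per-write counter block
    ct = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).encryptor().update(plaintext)
    nvm[block_addr] = (nonce, ct)     # ciphertext plus nonce persist; the key never does


def read_block(block_addr: int, nvm: dict) -> bytes:
    nonce, ct = nvm[block_addr]
    return Cipher(algorithms.AES(KEY), modes.CTR(nonce)).decryptor().update(ct)


nvm_array = {}
write_block(b"secret weights or user data".ljust(BLOCK_BYTES, b"\0"), 0x40, nvm_array)
assert read_block(0x40, nvm_array).rstrip(b"\0") == b"secret weights or user data"
```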
As illustrated in Fig. <ref>(f), we propose a memory encryption/decryption scheme that exploits the intrinsic memory array operations without incurring complex encryption/decryption circuitry overhead. The idea is to use the intrinsic memory array operations to implement a lightweight encryption/decryption technique, i.e., bit wise XOR between the secret key and the plaintext/ciphertext, respectively. In this way, the ciphertext is written into memory through normal memory write operations and the data is secure unless a correct key, which attackers do not possess, is provided during the memory sensing operation. This work demonstrates this proposed encryption/decryption operation in FeFET memories and can potentially be extended to other NVM technologies. Ferroelectric HfO2 has revived interests in ferroelectric memory for its scalability, CMOS compatiblity, and energy efficiency. Inserting the ferroelectric into the gate stack of a MOSFET, a FeFET is realized such that its threshold voltage (VTH) can be programmed to the low-VTH (LVT) state or high-VTH (HVT) state by applying positive or negative write pulses on the gate, respectively. In this work, with the co-design from technology, circuit and architecture level, the proposed efficient encryption/decryption scheme can successfully remove the vulnerability window and achieve secure encryption in FeFET-based NVM. Moreover, since there is no additional complicated encryption/decryption engine (e.g. AES engine) as a part of the peripheral circuit in our architecture, ourdesign can avoid the latency/power/area costs in AES-based encryption designs by only adding lightweight logic gates, which dramatically improves the performance of memory and expands the range of potential applications in different fields. With the proposed memory encryption/decyrption scheme integrated in FeFET memory array, many NVM-targeted attacks can be prevented. For example, if the memory device is stolen or lost, our design can effectively protect it against the malicious stolen memory attack as the attacker has no knowledge of what the data represents without correct secret keys even though they are able to physically access and read out the stored ciphertext (Fig. <ref>(a)). Besides, with negligible incurred overhead compared with normal memory, the proposed design can benefit wide applications that can exploit the added security feature without compromising performance. For instance, as shown in Fig. <ref>(b), NVM arrays can be used to accelerate the prevalent operation in deep neural networks, i.e., matrix vector multiplication (MVM) in memory. By storing the trained neural network weights as, for example, the NVM conductance, the intended MVM operation is naturally conducted in analog domain by applying the input as input voltage pulses and summing up the resulting array column current. As artificial intelligence makes significant strides in various application domains, especially those information sensitive sectors, how to protect these trained weights from malicious entities becomes an essential problem<cit.>. Many relevant works have explored and demonstrated that data encryption embedded in CiM enables in-situ authentication and computation with high area and energy efficiency<cit.>. Compared to existing AES-based encryption design which would introduce significant delay, our encryption design can efficiently encrypt and decrypt all the weights in-situ and perform CiM computation with the encrypted weights directly thus ensuring high security and privacy. 
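As a purely numerical illustration of the in-memory MVM mentioned above, the following sketch treats the stored weights as a conductance matrix and the inputs as row voltages, so that each column current is the corresponding dot product; device non-idealities and quantization are ignored, and all values are arbitrary.

```python
# Idealized (noise-free) sketch of compute-in-memory MVM: weights are stored as
# cell conductances G, inputs are applied as row voltages V, and each column
# current is I_j = sum_i V_i * G_ij by Ohm's and Kirchhoff's laws. Values are
# arbitrary and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

G = rng.uniform(1e-6, 1e-5, size=(128, 64))   # conductance matrix (S): 128 rows x 64 columns
V = rng.uniform(0.0, 0.2, size=128)           # input voltage pulses (V), one per row

I = V @ G                                     # column currents (A) = analog MVM result
print(I.shape, I[:4])
```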
Another application example is secure encrypted virtualization (SEV)<cit.>. SEV systems require keys to isolate guests and the host OS/hypervisor from one another in order to ensure data security in system hardware. However, present SEV systems use AES engines for encryption. By replacing the AES engines with our design, the system performance will be improved in terms of latency. § OVERVIEW OF THE PROPOSED MEMORY ENCRYPTION/DECRYPTION SCHEME For a deeper look into the design principles of the proposed in-situ encryption/decryption scheme in the FeFET array, details at different granularities and levels are demonstrated in Fig. <ref>. Fig. <ref>(a) shows an overview of the proposed encryption memory architecture, including the FeFET-based memory array and the associated peripheral circuitry. Figure: The proposed memory encryption scheme. (a) An overview of the proposed memory encryption architecture. (b) Three scenarios in the memory. (c) The details of the encryption and decryption schemes. In our encryption design, the whole memory is encrypted block-wise, which means it uses one key (1/0) per block. Depending on different cost and security demands, the granularity of encrypted blocks varies. As shown in Fig. <ref>(b), there are three situations in the memory – unencrypted blocks, encrypted blocks with key = 1, and encrypted blocks with key = 0. Unencrypted blocks operate as a traditional FeFET memory array: for each memory cell, depending on which data bit is to be stored (1/0), the FeFET is programmed to the LVT state or the HVT state by applying different write voltages (±VW). However, for encrypted blocks, each memory cell consists of two FeFETs, as illustrated in Fig. <ref>(b). Hence, during every write operation, two rows are selected and asserted with different voltages (±VW). In addition, with different keys, these encrypted blocks follow different encryption strategies. The details of the proposed encryption/decryption strategies are demonstrated in Fig. <ref>(c) at the cell level. In the encryption process, the key is XORed with the PT to obtain the CT, and the two FeFETs in the same cell are programmed to different state patterns depending on the data that the CT represents. For example, if the PT is '1' and the key for this block is '1', then the CT would be '0'. Based on our encryption strategy, the upper FeFET in the target cell should be programmed to the LVT state and the bottom one should be programmed to the HVT state. Similarly, if the resulting CT is '1', then the upper FeFET should be set to the HVT state and the bottom FeFET should be set to the LVT state. In the decryption process, different read voltages (VR/0 V) are applied to the gate terminals of the FeFETs. However, the voltage pattern for decryption is different from that for encryption in the proposed design. The voltage pattern (VR/0 or 0/VR) depends only on the key of this cell. More specifically, if the key = 1, VR is applied to the gate of the upper FeFET in the memory cell, and 0 V is applied to the other FeFET. In contrast, if the key = 0, VR is asserted on the bottom FeFET instead. In this way, the original data (PT) can be successfully read out by sensing the current only when the user supplies the correct key. However, unauthorized users/attackers, even if they have physical access and can read out the current of each memory cell, have no way of knowing whether the information they read is correct, since they do not know the correct keys for each block.
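To make the cell-level behavior concrete, the following bit-level simulation sketches the scheme just described: encryption XORs the plaintext with the per-block key and stores the ciphertext as a complementary (LVT, HVT) pair, while decryption applies the key-dependent read-voltage pattern and recovers the plaintext from the sensed current. The voltage values, current levels, and sensing threshold are illustrative placeholders rather than measured device parameters.

```python
# Functional (not electrical) simulation of the in-situ XOR encryption/decryption
# scheme described above. LVT/HVT states, read voltages, and the sensing threshold
# are symbolic placeholders chosen for illustration.
LVT, HVT = "LVT", "HVT"
VR, V0 = 0.6, 0.0            # read voltage and ground (illustrative values)


def encrypt_bit(pt_bit: int, key_bit: int):
    """Return the (top, bottom) FeFET states that store CT = PT XOR key."""
    ct_bit = pt_bit ^ key_bit
    # CT = 0 -> top LVT / bottom HVT; CT = 1 -> top HVT / bottom LVT
    return (LVT, HVT) if ct_bit == 0 else (HVT, LVT)


def cell_current(state: str, gate_voltage: float) -> float:
    """An LVT device conducts under VR; an HVT device (or a grounded gate) stays off."""
    return 1.0 if (state == LVT and gate_voltage >= VR) else 0.0


def decrypt_bit(cell, key_bit: int) -> int:
    top, bottom = cell
    # The key selects which FeFET of the pair receives the read voltage.
    v_top, v_bottom = (VR, V0) if key_bit == 1 else (V0, VR)
    sensed = cell_current(top, v_top) + cell_current(bottom, v_bottom)
    return 1 if sensed > 0.5 else 0   # high current -> PT = 1, low current -> PT = 0


if __name__ == "__main__":
    key = 1
    for pt in (0, 1):
        cell = encrypt_bit(pt, key)
        assert decrypt_bit(cell, key) == pt              # the correct key recovers PT
        print(pt, cell, "wrong key reads:", decrypt_bit(cell, key ^ 1))
```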
Therefore, the FeFET memory is protected from information leakage and achieves intrinsic security without extra circuit cost. Besides, the proposed in-situ memory encryption/decryption scheme is not limited to AND arrays. We also explore and demonstrate the feasibility of applying the proposed scheme to other array structures, such as the FeFET NAND array, which provides potentially higher integration density (Supplementary Fig. <ref>), and the FeFET NOR array (Supplementary Fig. <ref>). Both show that the proposed memory encryption/decryption scheme is general and can fit into different memory designs. More specifically, two FeFETs are coupled as one cell to represent one bit of information – bit '1' or bit '0'. During the encryption process, the CT is first determined by XORing the PT with the corresponding key. Depending on the CT, complementary states are programmed into the 2FeFET-based cell. During the decryption process, read voltages determined by the key pattern are applied to the coupled FeFETs in the same cell. Finally, the correct information (PT) is read out. § EXPERIMENTAL VERIFICATION In this section, functional verification of encryption/decryption operations on a single cell and on a memory array is demonstrated. For the experimental measurements, FeFET devices integrated on the 28 nm high-κ metal gate (HKMG) technology platform are tested<cit.>. Fig. <ref>(a) and (b) show the transmission electron microscopy (TEM) image and schematic cross-section of the device, respectively. The device features an 8 nm thick doped HfO2 layer as the ferroelectric and around 1 nm SiO2 as the interlayer in the gate stack. The experimental setup for on-wafer characterization is shown in Fig. <ref>. First, single-cell encryption/decryption, shown in Fig. <ref>(c), is demonstrated. Fig. <ref>(c) and (e) show the ID-VG characteristics of each FeFET in a cell storing the CT of bit '0' for key bits of '1' and '0', respectively. With a CT of '0', the top/bottom FeFET is programmed to the LVT/HVT state using a +4 V/-4 V, 1 μs write gate pulse, respectively. The decryption process then simply corresponds to a conventional array sensing operation, but with key-dependent read voltages on the two FeFETs (i.e., the dashed lines in Fig. <ref>(c) and (e)). For example, with a key of '1', the top/bottom FeFETs are biased with VR (i.e., 0.6 V)/0 V, respectively. In this way, the top FeFET contributes a high read current, thus corresponding to the PT of bit '1'. If the key is bit '0', the read biases for the two FeFETs are swapped such that the top/bottom FeFETs receive 0 V/VR, respectively, where both FeFETs are cut off, thus corresponding to the PT of bit '0'. Successful decryption can also be demonstrated for a CT of bit '1', as shown in Fig. <ref>(d) and (f), where the top/bottom FeFETs are programmed to the HVT/LVT state, respectively, and the same key-dependent read biases are applied. These results demonstrate successful single-cell encryption/decryption using only in-situ memory operations. Figure: Experimental verification. (a-b) TEM and schematic cross section. (c-f) I_D - V_G characteristics for the proposed 2FeFET memory cell. (g) The image of 8x7 FeFET AND array for array-level verification. (h-k) The patterns of plaintext, keys, ciphertext, and corresponding V_TH after encryption. (l-n) In the decryption process, three conditions of applying different patterns of keys: correct keys, all-0 keys, random keys.
The colorbar on the right side indicates the read current measured from each cell. Array-level experiments and functional verification are also performed and demonstrated. Without loss of generability, FeFET AND array is adopted. Fig. <ref>(g) illustrates a 8x7 FeFET AND memory array for measurements. As illustrated in Fig. <ref>(h), a checkerboard data pattern of PT (i.e., orange boxes represent data '1'; blue boxes represent data '0'.) and random keys shown in Fig. <ref>(i) are used. To show the most general case, bit-wise encryption/decryption is validated, as encryption at a coarser granularity, i.e., row-wise or block-wise, is simply derivation of the bit-wise case. With the PT and keys determined, the CT is simply the XOR result between the PT and corresponding keys, as shown in Fig. <ref>(j). Each CT bit is then stored as the complementary VTH states of the two FeFETs in each cell. Different write schemes along with disturb inhibition strategy can be applied <cit.>. In this work, block-wise erase is performed first by raising the body potential to reset the whole array to the HVT state and then selectively programming corresponding FeFETs into the LVT state. Fig. <ref>(k) shows the VTH map of 8x7 FeFETs in the array after the encryption process, corresponding to 4x7 encrypted CT. For the decryption process, three different scenarios are considered, i.e., using correct keys, all-0 keys, and random keys. For bit-wise encryption/decryption in AND array, since all the FeFETs in the same row share the same word line, it requires two read cycles to sense the whole row. This is because the key-dependent read voltage biases are different for key bit '1' and bit '0'. Therefore two read cycles are required where cycle 1 and 2 reads out the cells with key bit '1' and '0', respectively. Cycle 1 results are temporarily buffered and merged with cycle 2 results. Note that the additional latency can be avoided if row-wise or block-wise encryption granularity is used, where the same word line bias can be applied. As shown in Fig. <ref>(l), under the condition of using correct keys, the user can successfully read out all PT. For attackers without the knowledge of keys, two representative scenarios are considered, where the attackers can simply apply all-0 keys or random keys. In the condition of all-0 keys, the accuracy is only 50%, as shown in Fig. <ref>(m). With random keys, the accuracy of decryption is only 32.1%, which is much worse than other two conditions. Above all, both the functional correctness of the proposed encryption design and the resistance against attacks are verified at the cell level and array level. § EVALUATION AND CASE STUDY To evaluate the feasibility and performance of the proposed in-situ memory encryption /decryption scheme using FeFET memory arrays, a comprehensive evaluation is performed between this work and AES-based encryption scheme<cit.> in terms of area, latency, power, and throughput. For a fair comparison, an 128x128 FeFET AND-type array is designed in 28nm HKMG platform and operates at 25 MHz, consistent with the reference AES work<cit.>. This speed serves as a pessimistic estimation of FeFET array encryption/decryption operation as it can operate at a higher speed. In addition, for memory sensing, 16 sense amplifiers (SAs) are used for illustration. If a higher sensing throughput is needed, more SAs can be deployed. For the evaluation, both the AES and proposed in-situ encryption/decyrption scheme are applied. As summarized in Table. 
<ref>, for the prior AES-based work, the area cost of its AES unit is 0.00309 mm^2. However, for the proposed scheme, the only functional gate required is XOR gates, whose area is negligible comparing to the whole memory area cost. Besides, latency is one of the most important criteria for evaluating encryption methods. In the proposed design, the encryption and decryption latency for 128-bit data are 5 cycles and 16 cycles, respectively, which is much less than the latency penalty of the AES accelerator (115.5 cycles, 117 cycles). One thing should be noticed is that decryption latency would be reduced if more SAs are used for sensing. Moreover, at the frequency of 25 MHz, the performance of 640/400 Mbps throughput is obtained during the encryption/decryption process, which is much better than that of the AES accelerator (throughput: 28.32 Mbps). Since the power consumption of our encryption circuit is only equal to that of multiple XOR gates, it is negligible compared to the AES accelerator (0.031 mW). In addition, to investigate the latency benefit provided by the proposed scheme compared to the conventional AES scheme when implementing data encryption and decryption with different neural network (NN) workloads, a case study is performed on 6 NN workloads which are Alexnet, Mobilenet, FasterRCNN, Googlenet, Restnet18, and Yolo_tiny via SCALE-Sim<cit.> which is a simulator for evaluating conventional neural network (CNN) accelerators. In this case study, we specifically consider this scenario – all the workloads are implemented into a systolic array for processing (Google TPU in this case). The encrypted weights of each neural network are pre-loaded into FeFET-based memory arrays for feeding to the systolic system after decryption. After the computation, the outputs will be read out and securely stored into the FeFET memory with encryption. As shown in Fig. <ref>(b), the latency introduced by encryption and decryption processes of the proposed scheme is much less than that of AES-based scheme. The average latency reduction over these 6 workloads is ∼90%. According to the simulation results, it shows that the proposed in-situ memory encryption/decryption scheme offers significant time savings over the conventional AES scheme, especially when processing data-intensive applications, such as neural networks. < g r a p h i c s > parbox=none Evaluation Results. (a) Comparison with AES-based encryption scheme<cit.>. (b) Latency comparison on different neural network workloads. § CONCLUSION In summary, we propose an in-situ memory encryption/decryption scheme which can guarantee high-level security by exploiting the intrinsic memory array operations while incurring negligible overheads. In addition, the functionality of the proposed scheme is verified through experiments on both device-level and array-level. Moreover, the evaluation results show that our scheme can hugely improve the encryption/decryption speed and throughput with negligible power cost from system-level aspect. Furthermore, an application-level case study is investigated. It shows that our scheme can achieve 90% latency reduction on average compared to the prior AES-based accelerator. § DATA AVAILABILITY The data that support the plots within this paper and other findings of this study are available from the corresponding author on reasonable request. § REFERENCES § ACKNOWLEDGEMENTS This work is primarily supported by the U.S. 
Department of Energy, Office of Science, Office of Basic Energy Sciences Energy Frontier Research Centers program under Award Number DESC0021118. The architecture part is supported by SUPREME and PRISM centers, two of the SRC/JUMP 2.0 centers and in part by NSF 2246149 and 2212240. § AUTHOR CONTRIBUTIONS V.N., and K.N. proposed and supervised the project. Y.X., Y.X., Z. Zhao, X.G., and S.G. conceived the encryption/decryption schemes in different memory arrays. F.M., A.V., and T.K. performed cell and array characterization. All authors contributed to write up of the manuscript. § COMPETING INTERESTS The authors declare no competing interests. § EXPERIMENTAL DETAILS The electrical characterization was conducted using a measurement setup comprising a PXIe System provided by NI. To access each contact of the testpad, a separate NI PXIe-4143 Source Measure Unit (SMU) was employed. Source selection for each contact was facilitated by a customized switch-matrix controlled by NI PXIe-6570 Pin Parametric Measurement Units (PPMU). The external resistor was connected to the source-terminal contact on the switch-matrix. The probe-card established the connection between the switch-matrix and the FeFET-structures, see Fig.<ref>. < g r a p h i c s > parbox=none Measurement setup for FeFET characterization. The measurement setup utilizes a PXI System that incorporates Source Measurement Units (SMU) and Pin Parametric Measurement Units (PPMU). The PPMUs are employed to configure the Switch Matrix, allowing the source signals to be routed to the corresponding contact needles. The test structures, present on 300 mm wafers, are connected to the measurement setup through a semi-automatic probe station, facilitated by a probe card. § NAND ENCRYPTION SCHEME Besides the FeFET AND array, the proposed encryption scheme can be implemented in the form of FeFET NAND array which provides potentially higher integration density (shown in Fig. <ref>). Similar to the AND array case, 2 neighboring FeFETs are grouped as a cell to represent 1 bit stored information. For the encryption process, firstly the key is XORed with PT to obtain the CT. If CT is 1 (0), the two consecutive FeFETs on the selected NAND string are programmed to HVT and LVT (LVT and HVT) respectively. During the decryption process, two possible voltages (V_r1 and V_r2) are applied on the gate nodes of FeFETs, which satisfy V_r1 > Vth,high > V_r2 > Vth,low. V_r1/V_r2 are applied on the first/second FeFET when Key=0, and V_r2/V_r1 are applied on the first/second FeFET when Key=1. If PT=1, V_r1 is applied on the HVT FeFET and V_r2 is applied on the LVT FeFET, in which case they are both ON so that a high current is sensed on the NAND string. If PT=0, V_r1 is applied on the LVT FeFET and V_r2 is applied on the HVT FeFET. Since the HVT FeFET is OFF, the read current is low. In this way, CT is XORed with Key so that PT is obtained by sensing the read current. < g r a p h i c s > parbox=none The encryption and decryption scheme for NAND memory arrays § NOR ENCRYPTION SCHEME The proposed encryption scheme can be implemented in the form of FeFET NOR array as well (shown in Fig. <ref>). Similar to the AND array case, 2 consecutive FeFETs in the same column are used to represent 1 bit encrypted information. During the encryption process, after the Key is XORed with PT to obtain the CT, the top and bottom FeFET are programmed to HVT (LVT) and LVT (HVT) respectively if CT=1 (0). 
While during the decryption process, the read voltage (Vth,high > VR > Vth,low) is applied on the top (bottom) FeFET if Key=1 (0). Only when VR is applied on the LVT FeFET, a high current is sensed which represents PT=1. In this way, the XOR operation between Key and CT is realized in the NOR array. < g r a p h i c s > parbox=none The encryption and decryption scheme for NOR memory arrays in (a) array level and (b) cell level.
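Since the AND, NAND, and NOR variants all realize the same bit-wise XOR between the key and the stored ciphertext, a short functional simulation conveys why wrong keys are useless to an attacker: decrypting with incorrect keys yields only chance-level accuracy. The exact percentages reported in the array-level measurements are empirical and are not reproduced by this idealized model.

```python
# Hedged array-level sketch: decrypt a stored ciphertext array under correct,
# all-zero, and random keys. In this functional model a wrong key bit flips the
# recovered bit, so wrong-key accuracy sits near chance level (about 50%).
import numpy as np

rng = np.random.default_rng(0)

plaintext = np.indices((8, 7)).sum(axis=0) % 2        # checkerboard pattern
keys = rng.integers(0, 2, size=plaintext.shape)       # per-cell keys
ciphertext = plaintext ^ keys                          # what the array physically stores


def decrypt(ct, guessed_keys):
    return ct ^ guessed_keys                           # key-dependent sensing acts as XOR


for name, guess in [("correct", keys),
                    ("all-zero", np.zeros_like(keys)),
                    ("random", rng.integers(0, 2, size=keys.shape))]:
    acc = (decrypt(ciphertext, guess) == plaintext).mean()
    print(f"{name:>8} keys -> decryption accuracy {acc:.0%}")
```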
http://arxiv.org/abs/2307.00395v1
20230701174912
MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications
[ "Mustafa Munir", "William Avery", "Radu Marculescu" ]
cs.CV
[ "cs.CV", "cs.LG" ]
MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications Mustafa Munir* The University of Texas at Austin [email protected] William Avery* The University of Texas at Austin [email protected] Radu Marculescu The University of Texas at Austin [email protected] July 31, 2023 ============================================================================================================================================================================================================================ *Equal contribution Traditionally, convolutional neural networks (CNN) and vision transformers (ViT) have dominated computer vision. However, recently proposed vision graph neural networks (ViG) provide a new avenue for exploration. Unfortunately, for mobile applications, ViGs are computationally expensive due to the overhead of representing images as graph structures. In this work, we propose a new graph-based sparse attention mechanism, Sparse Vision Graph Attention (SVGA), that is designed for ViGs running on mobile devices. Additionally, we propose the first hybrid CNN-GNN architecture for vision tasks on mobile devices, MobileViG, which uses SVGA. Extensive experiments show that MobileViG beats existing ViG models and existing mobile CNN and ViT architectures in terms of accuracy and/or speed on image classification, object detection, and instance segmentation tasks. Our fastest model, MobileViG-Ti, achieves 75.7% top-1 accuracy on ImageNet-1K with 0.78 ms inference latency on iPhone 13 Mini NPU (compiled with CoreML), which is faster than MobileNetV2x1.4 (1.02 ms, 74.7% top-1) and MobileNetV2x1.0 (0.81 ms, 71.8% top-1). Our largest model, MobileViG-B obtains 82.6% top-1 accuracy with only 2.30 ms latency, which is faster and more accurate than the similarly sized EfficientFormer-L3 model (2.77 ms, 82.4%). Our work proves that well designed hybrid CNN-GNN architectures can be a new avenue of exploration for designing models that are extremely fast and accurate on mobile devices. Our code is publicly available at <https://github.com/SLDGroup/MobileViG>. § INTRODUCTION Artificial intelligence (AI) and machine learning (ML) have had explosive growth in the past decade. In computer vision, the key driver behind this growth has been the re-emergence of neural networks, especially convolutional neural networks (CNNs) and more recently vision transformers <cit.>. Even though CNNs trained via back-propagation were invented in the 1980s <cit.>, they were used for more small-scale tasks such as character recognition <cit.>. The potential of CNNs to re-shape the field of artificial intelligence was not fully realized until AlexNet <cit.> was introduced in the ImageNet <cit.> competition. Further advancements to CNN architectures have been made improving their accuracy, efficiency, and speed <cit.>. Along with CNN architectures, pure multi-layer perceptron (MLP) architectures and MLP-like architectures have also shown promise as backbones for general-purpose vision tasks <cit.> Though CNNs and MLPs had become widely used in computer vision, the field of natural language processing used recurrent neural networks (RNNs), specifically long-short term memory (LSTM), networks due to the disparity between the tasks of vision and language <cit.>. Though LSTMs are still used, they have largely been replaced with transformer architectures in NLP tasks <cit.>. With the introduction of Vision Transformer (ViT) <cit.> a network architecture applicable to both language and vision domains was introduced. 
By splitting an image into a sequence of patch embeddings an image can be transformed into an input usable by transformer modules <cit.>. One of the major advantages of the transformer architecture over CNNs or MLPs is its global receptive field, allowing it to learn from distant object interactions in images. Graph neural networks (GNNs) have developed to operate on graph-based structures such as biological networks, social networks, or citation networks <cit.>. GNNs have even been proposed for tasks such as node classification <cit.>, drug discovery <cit.>, fraud detection <cit.>, and now computer vision tasks with the recently proposed Vision GNN (ViG) <cit.>. In short, ViG divides an image into patches and then connects the patches through the K-nearest neighbors (KNN) algorithm <cit.>, thus providing the ability to process global object interactions similar to ViTs. Research in computer vision for mobile applications has seen rapid growth, leading to hybrid architectures using CNNs for learning spatially local representations and vision transformers (ViT) for learning global representations <cit.>. Current ViG models are not suited for mobile tasks, as they are inefficient and slow when running on mobile devices. The concepts learned from the design of CNN and ViT models can be explored to determine whether CNN-GNN hybrid models can provide the speed of CNN-based models along with the accuracy of ViT-based models. In this work, we investigate hybrid CNN-GNN architectures for computer vision on mobile devices and develop a graph-based attention mechanism that can compete with existing efficient architectures. We summarize our contributions as follows: * We propose a new graph-based sparse attention method designed for mobile vision applications. We call our attention method Sparse Vision Graph Attention (SVGA). Our method is lightweight as it does not require reshaping and incurs little overhead in graph construction as compared to previous methods. * We propose a novel mobile CNN-GNN architecture for vision tasks using our proposed SVGA, max-relative graph convolution <cit.>, and concepts from mobile CNN and mobile vision transformer architectures <cit.> that we call MobileViG. * Our proposed model, MobileViG, matches or beats existing vision graph neural network (ViG), mobile convolutional neural network (CNN), and mobile vision transformer (ViT) architectures in terms of accuracy and/or speed on three representative vision tasks: ImageNet image classification, COCO object detection, and COCO instance segmentation. To the best of our knowledge, we are the first to investigate hybrid CNN-GNN architectures for mobile vision applications. Our proposed SVGA attention method and MobileViG architecture open a new path of exploration for state-of-the-art mobile architectures and ViG architectures. This paper is structured as follows. Section 2 covers related work in the ViG and mobile architecture space. Section 3 describes the design methodology behind SVGA and the MobileViG architecture. Section 4 describes experimental setup and results for ImageNet-1k image classification, COCO object detection, and COCO instance segmentation. Lastly, Section 5 concludes the paper and suggests future work with ViGs in mobile architecture design. § RELATED WORK ViG <cit.> is proposed as an alternative to CNNs and ViTs due to its capacity to represent image data in a more flexible format. ViG represents images through using the KNN algorithm <cit.>, where each pixel in the image attends to similar pixels. 
ViG achieves comparable performance to popular ViT models, DeiT <cit.> and SwinTransformer <cit.>, suggesting it is worth further investigations. Despite the success of ViT-based models in vision tasks, they are still slower when compared to lightweight CNN-based models <cit.>, in contrast CNN-based models lack the global receptive field of ViT-based models. Thus, ViG-based models may be a possible solution by providing speeds faster than ViT-based models and accuracies higher than CNN-based models. To the best of our knowledge, there are no works on mobile ViGs at this time; however, there are many existing works in the mobile CNN and hybrid model space. We classify mobile architecture designs into two primary categories: convolutional neural network (CNN) models and hybrid CNN-ViT models, which blend elements of CNNs and ViTs. The MobileNetv2 <cit.> and EfficientNet <cit.> families of CNN-based architectures are some of the first mobile models to see success in common image tasks. These models are lightweight with fast inference speeds. However, purely CNN-based models have steadily been replaced by hybrid competitors. There are a vast number of hybrid mobile models, including MobileViTv2 <cit.>, EdgeViT <cit.> LeViT <cit.>, and EfficientFormerv2 <cit.>. These hybrid models consistently beat MobileNetv2 in image classification, object detection, and instance segmentation tasks, but some of these models do not always perform as well in terms of latency. The latency difference can be tied to the inclusion of ViT blocks, which have traditionally been slower on mobile hardware. To improve this state of affairs we propose MobileViG, which provides speeds comparable to MobileNetv2<cit.> and accuracies comparable to EfficientFormer <cit.>. § METHODOLOGY In this section, we describe the SVGA algorithm and provide details on the MobileViG architecture design. More precisely, Section 3.1 describes the SVGA algorithm. Section 3.2 explains how we adapt the Grapher module from ViG <cit.> to create the SVGA block. Section 3.3 describes how we combine the SVGA blocks along with inverted residual blocks for local processing to create MobileViG-Ti, MobileViG-S, MobileViG-M, and MobileViG-B. §.§ Sparse Vision Graph Attention We propose Sparse Vision Graph Attention (SVGA) as a mobile-friendly alternative to KNN graph attention from Vision GNN <cit.>. The KNN-based graph attention introduces two non-mobile-friendly components, KNN computation and input reshaping, that we remove with SVGA. In greater detail, the KNN computation is required for every input image, since the nearest neighbors of each pixel cannot be known ahead of time. This results in a graph with seemingly random connections as seen in Figure <ref>a. Due to the unstructured nature of KNN, the authors of <cit.> reshape the input image from a 4D to 3D tensor, allowing them to properly align the features of connected pixels for graph convolution. Following the graph convolution, the input must be reshaped from 3D back to 4D for subsequent convolutional layers. Thus, KNN-based attention requires the KNN computation and two reshaping operations, both of which are costly on mobile devices. To remove the overhead of the KNN computation and reshaping operations, SVGA assumes a fixed graph, where each pixel is connected to every K^th pixel in its row and column. For example, given an 8×8 image and K=2, the top left pixel would be connected to every second pixel across its row and every second pixel down its column as seen in Figure <ref>b. 
This same pattern is repeated for every pixel in the input image. Since the graph has a fixed structure (i.e., each pixel will have the same connections for all 8×8 input images), the input image does not have to be reshaped to perform the graph convolution. Instead, it can be implemented using rolling operations across the two image dimensions, denoted as roll_right and roll_down in Algorithm <ref>. The first parameter to the roll operation is the input to roll, and the second is the distance to roll in the right or down direction. Using the example from Figure <ref>b where K=2, the top left pixel can be aligned with every second pixel in its row by rolling the image twice to the right, four times to the right, and six times to the right. The same can be done for every second pixel in its column, except by rolling down. Note that since every pixel is connected in the same way, the rolling operations used to align the top left pixel with its connections simultaneously align every other pixel in the image with its connections. In MobileViG, graph convolution is performed using max-relative graph convolution (MRConv). Therefore, after every roll_right and roll_down operation, the difference between the original input image and the rolled version is computed, denoted as X_r and X_c in Algorithm <ref>, and the max operation is taken element wise and stored in X_j, also denoted in Algorithm <ref>. After completing the rolling and max-relative operations, a final Conv2d is performed. Through this approach, SVGA trades the KNN computation for cheaper rolling operations, consequently not requiring reshaping to perform the graph convolution. We note that SVGA eschews the representation flexibility of KNN in favor of being mobile friendly. §.§ SVGA Block We insert SVGA and the updated MRConv layer into the Grapher block proposed in Vision GNN <cit.>. Given an input feature X∈ℝ^N × N, the updated Grapher is expressed as 1 Y=σ(MRConv(XW_in))W_out+X where Y∈ℝ^N × N, W_in and W_out are fully connected layer weights, and σ is a GeLU activation. We also change the number of filter groups from 4 (the value used in Vision GNN <cit.>) to 1 in the MRConv step to increase the expressive potential of the MRConv layer without a noticeable increase in latency. The updated Grapher module is visually depicted in Figure <ref>d Following the updated Grapher, we use the feed-forward network (FFN) module as proposed in Vision GNN <cit.> and shown in Figure <ref>e The FFN module is a two layer MLP expressed as 2 Z=σ(XW_1)W_2+Y where Z∈ℝ^N × N, W_1 and W_2 are fully connected layer weights, and σ is once again GeLU. We call this combination of updated Grapher and FFN an SVGA block, as shown in Figure <ref>c. §.§ MobileViG Architecture The MobileViG architecture shown in Figure <ref>a is composed of a convolutional stem, followed by three stages of inverted residual blocks (MBConv) with an expansion ratio of four for local processing as proposed in MobileNetv2 <cit.>. Within the MBConv blocks, we swap ReLU6 for GeLU as it has been shown to improve performance in computer vision tasks <cit.>. The MBConv blocks consist of a 1×1 convolution plus batch normalization (BN) and GeLU, a depth-wise 3×3 convolution plus BN and GeLU, and lastly a 1×1 convolution plus BN and a residual connection as seen in Figure <ref>b. Following the MBConv blocks we have one stage of SVGA blocks to capture global information as seen in Figure <ref>a. We also have a convolutional head after the SVGA blocks for classification. 
After each MBConv stage, a downsampling step halves the input resolution and expands the channel dimension. Each stage is composed of multiple MBConv or SVGA blocks, where the number of repetitions is changed depending on model size. The channel dimensions and number of blocks repeated per stage for MobileViG-Ti, MobileViG-S, MobileViG-M, and MobileViG-B can be seen in Table <ref>. § EXPERIMENTAL RESULTS We compare MobileViG to ViG <cit.> and show its superior performance in terms of latency, model size, and image classification accuracy on ImageNet-1k <cit.> in Table <ref>. We also compare MobileViG to several mobile models and show that, for each model, it has superior or comparable performance in terms of accuracy and latency in Table <ref>. §.§ Image Classification We implement the model using PyTorch 1.12 <cit.> and Timm library <cit.>.We use 8 NVIDIA A100 GPUs to train each model, with an effective batch size of 1024. The models are trained from scratch for 300 epochs on ImageNet-1K <cit.> with AdamW optimizer <cit.>. Learning rate is set to 2e-3 with cosine annealing schedule. We use a standard image resolution, 224 × 224, for both training and testing. Similar to DeiT <cit.>, we perform knowledge distillation using RegNetY-16GF <cit.> with 82.9% top-1 accuracy. For data augmentation we use RandAugment, Mixup, Cutmix, random erasing, and repeated augment. We use an iPhone 13 Mini (iOS 16) to benchmark latency on NPU and GPU. The models are compiled with CoreML and latency is averaged over 1000 predictions <cit.>. As seen in Table <ref>, for a similar number of parameters, MobileViG outperforms Pyramid ViG <cit.> both in accuracy and GPU latency. For example, for 3.5 M fewer parameters, MobileViG-S matches Pyramid ViG-Ti in top-1 accuracy, while being 2.83× faster. Additionally, for 0.6 M fewer parameters, MobileViG-B beats Pyramid ViG-S by 0.5% in top-1 accuracy, while being 2.08× faster. When compared to mobile models in Table <ref>, MobileViG consistently beats every model in at least NPU latency, GPU latency, or accuracy. MobileViG-Ti is faster than MobileNetv2 with 3.9% higher top-1 accuracy. It also matches EfficientFormerv2 <cit.> in top-1 while having a slight edge in NPU and GPU latency. MobileViG-S is nearly 2x faster than EfficientNet-B0 <cit.> in NPU latency and has 0.5% higher top-1 accuracy. Compared to MobileViTv2-1.5 <cit.>, MobileViG-M is over 3x faster in NPU latency and 2x faster in GPU latency with 0.2% higher top-1 accuracy. Additionally, MobileViG-B is 6x faster than DeiT-S and is able to beat both DeiT-S and Swin-Tiny in top-1 accuracy. §.§ Object Detection and Instance Segmentation We evaluate MobileViG on object detection and instance segmentation tasks to further prove the potential of SVGA. We integrate MobileViG as a backbone in the Mask-RCNN framework <cit.> and experiment using the MS COCO 2017 dataset <cit.>. We implement the backbone using PyTorch 1.12 <cit.> and Timm library <cit.>, and use 4 NVIDIA RTX A6000 GPUs to train our models. We initialize the model with pretrained ImageNet-1k weights from 300 epochs of training, use AdamW <cit.> optimizer with an initial learning rate of 2e-4 and train the model for 12 epochs with a standard resolution (1333 X 800) following the process of Next-ViT, EfficientFormer, and EfficientFormerV2 <cit.>. 
As seen in Table <ref>, with similar model size MobileViG outperforms ResNet, PoolFormer, EfficientFormer, and PVT in terms of either parameters or improved average precision (AP) on object detection and/or instance segmentation. The medium size MobileViG-M model gets 41.3 APbox, 62.8 APbox when 50 Intersection over Union (IoU), and 45.1 APbox when 75 IoU on the object detection task. MobileViG-M gets 38.1 APmask, 60.1 APmask when 50 IoU, and 40.8 APmask when 75 IoU for the instance segmentation task. The big size MobileViG-B model gets 42.0 APbox, 64.3 APbox when 50 IoU, and 46.0 APbox when 75 IoU on the object detection task. MobileViG-B gets 38.9 APmask, 61.4 APmask when 50 IoU, and 41.6 APmask when 75 IoU on the instance segmentation task. The strong performance of MobileViG on object detection and instance segmentation shows that MobileViG generalizes well as a backbone for different tasks in computer vision. The design of MobileViG is partly inspired by the designs of Pyramid ViG <cit.>, EfficientFormer <cit.>, and the MetaFormer concept <cit.>. The results achieved in MobileViG demonstrate that hybrid CNN-GNN architectures are a viable alternative to CNN, ViT, and hybrid CNN-ViT designs. Hybrid CNN-GNN architectures can provide the speed of CNN-based models along with the accuracy of ViT models making them an ideal candidate for high accuracy mobile architecture designs. Further explorations of hybrid CNN-GNN architectures for mobile computer vision tasks can improve on the MobileViG concept and introduce new state-of-the-art architectures. § CONCLUSION In this work, we have proposed a graph-based attention mechanism, Sparse Vision Graph Attention (SVGA), and MobileViG, a competitive mobile vision architecture that uses SVGA. SVGA does not require reshaping and allows for the graph structure to be known prior to inference, unlike previous methods. We use inverted residual blocks, max-relative graph convolution, and feed-forward network layers to create MobileViG, a hybrid CNN-GNN architecture, that achieves competitive results on image classification, object detection, and instance segmentation tasks. MobileViG outperforms existing ViG models and many existing mobile models, including MobileNetv2, in terms of accuracy and latency. Future research on mobile architectures can further explore the potential of GNN-based models on resource-constrained devices for IoT applications. ieee_fullname
http://arxiv.org/abs/2306.09773v1
20230616111118
Unraveling cradle-to-grave disease trajectories from multilayer comorbidity networks
[ "Elma Dervić", "Johannes Sorger", "Liuhuaying Yang", "Michael Leutner", "Alexander Kautzky", "Stefan Thurner", "Alexandra Kautzky-Willer", "Peter Klimek" ]
physics.med-ph
[ "physics.med-ph", "physics.data-an", "stat.ME" ]
A Testbed for Carbon-Aware Applications and Systems Odej Kao July 31, 2023 =================================================== * Complexity Science Hub Vienna, Josefstädter Straße 39, 1080 Vienna, Austria; * Medical University of Vienna, Section for Science of Complex Systems, CeMSIIS, Spitalgasse 23, 1090 Vienna, Austria; * Medical University of Vienna, Department of Internal Medicine III, Clinical Division of Endocrinology and Metabolism, Währinger Gürtel 18–20, A-1090 Vienna, Austria; * Medical University of Vienna, Department of Psychiatry and Psychotherapy, Währinger Gürtel 18-20, A-1090 Vienna, Austria; * Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA. * Gender Institute, A-3571 Gars am Kamp, Austria ^*Correspondence: [email protected] We aim to comprehensively identify typical life-spanning trajectories and critical events that impact patients' hospital utilization and mortality. We use a unique dataset containing 44 million records of almost all inpatient stays from 2003 to 2014 in Austria to investigate disease trajectories. We develop a new, multilayer disease network approach to quantitatively analyse how cooccurrences of two or more diagnoses form and evolve over the life course of patients. Nodes represent diagnoses in age groups of ten years; each age group makes up a layer of the comorbidity multilayer network. Inter-layer links encode a significant correlation between diagnoses (p < 0.001, relative risk > 1.5), while intra-layers links encode correlations between diagnoses across different age groups. We use an unsupervised clustering algorithm for detecting typical disease trajectories as overlapping clusters in the multilayer comorbidity network. We identify critical events in a patient's career as points where initially overlapping trajectories start to diverge towards different states. We identified 1,260 distinct disease trajectories (618 for females, 642 for males) that on average contain 9 (IQR 2-6) different diagnoses that cover over up to 70 years (mean 23 years). We found 70 pairs of diverging trajectories that share some diagnoses at younger ages but develop into markedly different groups of diagnoses at older ages. The disease trajectory framework can help us to identify critical events as specific combinations of risk factors that put patients at high risk for different diagnoses decades later. Our findings enable a data-driven integration of personalized life-course perspectives into clinical decision-making. A Testbed for Carbon-Aware Applications and Systems Odej Kao July 31, 2023 =================================================== § INTRODUCTION Multimorbidity, the occurrence of two or more diseases in one patient, is a frequent phenomenon   <cit.>. Today's reality of a 100-year lifespan brings a shifting multimorbidity burden and increased healthcare and long-term care costs  <cit.>. It was estimated that mnore than 50 million people in Europe show more than one chronic condition  <cit.>. In  <cit.>, authors estimated that 16–57% of adults in developed countries are diagnosed with more than one chronic disease and predicted a dramatic rise of multimorbidity rates in the next years. The WHO World Report on Ageing and Health emphasizes the importance of research to better understand the dynamics and consequences of ageing  <cit.>. Studies on multimorbidity patterns may contribute to successful ageing by the prevention of disease progression by identifying critical events which lead to a rapid deterioration of health  <cit.>. 
As diseases tend to co-occur and interact with each other (in a way that can worsen the course of both), they cannot be studied separately from each other  <cit.>. The analysis of multimorbidity has recently been catalyzed by the massive collection of patient health information on diagnoses, medication, results of laboratory tests in electronic health records (EHR), and other clinical registries. Comorbidity networks have been established as tools to analyse multimorbidity in such datasets  <cit.>. Age and sex-specific analyses can further be conducted to address age- and sex-dependent associations between diagnoses  <cit.>. These works confirm that patients mainly develop diseases in close network proximity to disorders they already suffer. The concept of disease trajectories has been proposed to formally describe the progression of multimorbidity over time. Disease trajectories are frequently occurring patterns or sequences of diagnoses at particular times and are typically extracted from the medical history of millions of patients. Thus, apart from the pairwise disease associations, uncovering complex disease patterns and assessing their temporal directionality is crucial for estimating disease progression, developing prediction models  <cit.>, analysing trajectories  <cit.> and their temporal patterns using clustering algorithms  <cit.>. Many studies used data on in-hospital stays to construct such trajectories. A summary of applications of machine learning tools to understand the structure and formation of multimorbidity in the population was given in  <cit.>. However, studies of multimorbidity patterns over the full life span of patients, from cradle to grave, remain scarce, as studies frequently take cross-sectional approaches  <cit.>. Longitudinal analysis of multimorbidity requires large population-wide disease registries which span over multiple years, if not decades. Such analyses are challenging as they require custom-made methods and that are often computationally challenging  <cit.>. Taken together, a life span perspective on multimorbidities addressing the need for more comprehensive knowledge on disease trajectories and their critical events is largely missing to date  <cit.>. Here, we propose a novel approach to dynamical comorbidity networks from longitudinal population-wide healthcare data to comprehensively identify disease trajectories in an entire population. A multilayer comorbidity network is constructed where nodes correspond to diagnoses, layers to age groups, intralayer links to disease co-occurrence relations, and interlayer links encode the directionality of disease pairs (which diagnosis tends to occur first). We identify temporal disease trajectories as communities in this multilayer network. In some cases, these tightly connected communities share some nodes and be referred to as overlapping communities. The central assumption of our approach is that communities of nodes in the comorbidity network represent patients' diseases trajectories. We identify overlapping communities rather than exclusive clusters as the same diseases (nodes) can naturally be part of different disease trajectories, i.e. sleep disorders in patients with and without obesity and diabetes mellitus type 2. We further try to identify critical events as points along trajectories, where two initially identical trajectories start to diverge and will lead to different outcomes in terms of disease burden (hospital utilization) and mortality. 
Figure <ref> illustrates the suggested methodology of this large-scale disease trajectory study. We analysed data from an electronic health registry covering almost all of 8.9 million Austrians with more than 44 million in-hospital stays over 17 years, from 1997-2014. To ensure the comparability of the health status of our study population, we restricted the analysis to patients who were "healthy" at the beginning of the observed period between 2003 and 2014. Therefore, in the first step of the analysis, we identified as the study population all patients with at least one hospital stay between 1997 and 2002 with a diagnosis from the range A00-N99 (in total 1,081 diagnoses). Moreover, in the early 2000s, Austria transitioned from the previous ICD coding system to ICD-10 2001. It was crucial to avoid combining various classification systems as it would have compromised the reliability of the analysis, Figure <ref> (blue box). In a next step we then constructed a multilayer comorbidity network to explore how different disease conditions co-occur and develop over time. We separated our data into 10-year age groups. For every age group we introduced a layer in the multilayer comorbidity network. In this network, two types of link can be found, links that connect nodes in the same layer (intralayer links) and links that connect nodes from different layers (interlayer links). All identified significant correlations of diagnoses in the same age group are defined as intralayer links, while interlayer links represent the correlation between diagnoses in different age groups, Figure <ref> (green box). Nodes without any intralayer links were removed, Figure <ref> (red box). We used an algorithm based on the local optimization of a fitness function presented in  <cit.> to identify overlapping communities in the multilayer network, Figure <ref> (orange box). Note that the detected communities typically encompass more than one age layer. We analysed the age structure of the detected overlapping communities and the number of chapters of diagnoses inside the communities. More concrete, we conceptualize disease trajectories as groups of diagnoses that occur at different age groups (layers in the network) and that are more closely connected to other diagnoses in the same community compared to diagnoses outside of the community. As disease trajectories can overlap, this enables us to comprehensively study relationships between disease trajectories across more than one age group. We defined pairs of trajectories as converging if they do not overlap (no shared diagnoses) in younger age groups while they have a nonzero overlap in older age groups. Additionally, diverging pairs of trajectories overlap at the beginning, in younger age groups, but have different pathways in older age groups. From this we can identify critical events in patient careers. Critical events are defined as combinations of diagnoses in a specific age group, mainly chronic conditions, that signal that the disease trajectories are about to diverge towards paths that lead to different levels of mortality or lengths of hospital stays in the following age groups. Critical events can be thought of as bifurcation points of disease trajectories that can lead to trajectories associated with strongly varying outcomes. These events can support the identification of patients at risk for more severe multimorbidity trajectories and associated adverse outcomes in the next decade and thereby provide leverage points for targeted preventive actions. 
§ DATA AND METHODS §.§.§ Data The analysed dataset spans 17 years of nationwide in-hospital data from all hospitals in Austria. Each hospital stay is recorded with primary and secondary diagnoses, age in the resolution of 5 years, sex, admission and release date, release type (i.e., release, transfer, death...). This dataset covers the period from 1997 until 2014 and the vast majority of Austria's population with 8.9 unique patients. Diagnoses are coded with the three-digit International Classification of Diseases, 10th Revision (ICD-10) codes. We restricted our analysis to 1,081 codes from A00 to N99, excluding codes describing health encounters that can not be directly related to diseases (i.e., O00-O9A - Pregnancy, childbirth, and the puerperium, S00-T88 - Injury, poisoning and certain other consequences of external causes...). The data always reports a primary diagnosis as the main reason for hospitalization, along with a variable number of secondary diagnoses. In this study, we assigned equal importance to both primary and secondary diagnoses  <cit.>. To ensure that our study population's health state was comparable at the beginning of the observation period and not in the middle of connected hospitalization episodes, we introduced a wash-out period and limit the analysis to patients who had no hospital visits between 1997 and 2002. Consequently, excluding these patients also ensured that analysed data has only one ICD coding system, as in the early 2000s, Austria updated its ICD coding system to ICD-10 2001  <cit.>. §.§.§ Multilayer Comorbidity Network Formally, we construct the multilayer comorbidity network given by the tensor M_i,j^α, β where i and j refer to nodes (diagnoses) on layers (age groups) α and β, respectively. We refer to entries in M with α=β as intralayer links and with α≠β as interlayer links. The analysis was performed separately for male and female patients. §.§ Intralayer links Intralayer links give the correlation between diagnoses within the same age group. The analysed dataset was stratified by six time windows of two years each, from 2003 to 2014. A contingency table is created for each pair of diagnoses in each stratum (for each sex and age group, the intralayer analysis includes six strata, each covering two calendar years). We used all contingency tables with more than four patients in each subgroup to compute relative risks (RR) and the p-value for rejecting the null hypothesis that the co-occurrence of two analysed diagnoses is statistically independent  <cit.>. A weighted average of the estimates of the risk ratios and odds ratios across the stratified data were calculated by using the Cochran–Mantel–Haenszel method  <cit.>. Subsequently, all correlations with RR higher than 1.5 and p-value smaller than 0.05 were extracted and presented as intralayer links  <cit.>. These links are bidirectional, and we use a normalized RR as the link weight. The normalization of RR was done such that the sum of all total weights of all intralayer links with the same target was one. §.§ Interlayer links To estimate directionality or time order in pairs of diagnoses, we split the observation period in two time frames T1 = [2003, 2008] and T2 = [2009, 2014]. We investigate if a patient diagnosed with i in T1 elevates the risk of being diagnosed with j in T2 and compute the interlayer link weight as M_i,j^α≠β = P(j_T2^β|i_T1^α)/P(j_T2^β) . §.§ Overlapping community detection in multilayer network We deleted all nodes without at least one inbound and one outbound link. 
Further, we normalized all link weights to range from 0 to 1 by dividing each link's weight by the sum of all links of the same type of a target node M_ij^αβ = M_ij^αβ/∑_jM_ij^αβ . The algorithm for detecting the overlapping and hierarchical community structure in complex networks proposed in  <cit.> was applied. This unsupervised clustering algorithm does not have a predefined number of communities. The detection procedure is initiated starting with a random node, which represents one community by itself. Community's fitness is f_G = k_in^G/(k_in^G + k_out^G)^a, where k_in^G are the total internal degrees of the nodes in the community G and k_out^G are the total external degrees of the nodes in the community G. As long as the f_G improves, neighboring nodes are added, or nodes already community members are removed. The entailed resolution parameter a enables us to uncover different hierarchical levels of a system, natural choice is a=1. Fitness is calculated at each step. Once the fitness cannot be increased anymore by a node removal or addition step, that community is "completed" and "closed." The community detection process ends when all nodes have been assigned to at least one community. To parallelize and optimize this computationally costly process, we identify the community of every node and delete duplicates among the discovered communities. Identified communities usually consist of diseases in different age groups that tend to co-occur more frequently among themselves than diseases that are not part of the community. Hence, these communities represent typical disease trajectories; we denote a trajectory X as a set of diagnosis-age tuples, X = {(i_1, α_1), (i_2, α_2), (i_3, α_3)...}, where i is an ICD10 code ranging from [A00, N99] and α is the age group from [1,8]. We measure the similarity of trajectories by the Jaccard coefficient between two trajectories consisting of tuples with diagnoses i and age groups α, (i,α). That is, two trajectories have a non-zero overlap if they share diagnoses within the same age groups. §.§ Identifying converging and diverging trajectories We performed a comprehensive classification with respect to all pairwise relations between every pair of trajectories. Provided that two trajectories share at least one diagnosis, they can be related in one of four different ways, namely (i) diverging, (ii) converging, (iii) nested, or (iv) persistent, Figure <ref>. Diverging trajectories have some overlapping elements at younger ages, but they develop into markedly different sets of diagnoses at older ages. More formally, trajectories X = {(i_11, α_11), (i_12, α_12), (i_13, α_13)...} and Y = {(i_21, α_21), (i_22, α_22), (i_23, α_23)...} are diverging if it holds that {{ (i_1i, α_1i) ∈ X | α_1i = α^X_min) }∩{ (i_2i, α_2i) ∈ Y | α_2i = α^X_min) }} ∪ {{ (i_1i, α_1i) ∈ X | α_1i = α^Y_min) }∩{ (i_2i, α_2i) ∈ Y | α_2i = α^Y_min) }} ≠∅ and { (i_1i, α_1i) ∈ X | α_1i > α^X_min) } ≠ { (i_2i, α_2i) ∈ Y | α_2i > α^X_min) } and { (i_1i, α_1i) ∈ X | α_1i > α^Y_min) } ≠ { (i_2i, α_2i) ∈ Y | α_2i > α^Y_min) }, where α^X_min = min_(i, α)∈ X α , α^Y_min = min_(i, α)∈ Y α. Converging trajectories overlap at older ages but are clearly different at younger ages. 
Trajectories X and Y are converging if it holds that {{ (i_1i, α_1i) ∈ X | α_1i = α^X_max) }∩{ (i_2i, α_2i) ∈ Y | α_2i = α^X_max) }} ∪ {{ (i_1i, α_1i) ∈ X | α_1i = α^Y_max) }∩{ (i_2i, α_2i) ∈ Y | α_2i = α^Y_max) }} ≠∅ and { (i_1i, α_1i) ∈ X | α_1i < α^X_max) } ≠ { (i_2i, α_2i) ∈ Y | α_2i < α^X_max) } and { (i_1i, α_1i) ∈ X | α_1i < α^Y_max) } ≠ { (i_2i, α_2i) ∈ Y | α_2i < α^Y_max) }, where α^X_max = max_(i, α)∈ X α , α^Y_max = max_(i, α)∈ Y α. Two trajectories are nested if one of them is a subset of another one, X ⊂ Y or Y ⊂ X. Persistent trajectories X and Y can overlap in the highest age group of X and lowest age group of Y, or vice versa. §.§ Identifying critical events We define critical events by one or a combination of diagnoses and age groups where two trajectories begin to diverge and where one of the diverging trajectories has patients with considerably higher number of diagnoses, higher mortality or more extended hospital stays in the subsequent age group(s) compared to the other diverging trajectory. Mortality of a trajectory for a certain age group is calculated as M = ∑_i m_i * ∏_j ≠ i (1- m_j), where m is the in-hospital mortality of a diagnosis (defined as the percentage of patient diagnosed with the diagnose in a specific age group who die in-hospital) which is a member of a trajectory. Length of hospital stay of a trajectory in a certain age group is defined as the average number of days spent in hospital for patients who are diagnosed with at least half of all diagnoses from a trajectory. § RESULTS §.§.§ Multilayer Comorbidity Network We constructed the multilayer comorbidity network based on hospital data, basic characteristics of the database are shown in Figure S1. We used all 3-digits ICD10 codes from the range A00-N99 and one more newly introduced code for patients without any diagnosis, in total 1,082 codes. Nodes in the constructed network are ICD10 codes appearing in one of eight different age groups, i.e. E66-0-9, E66-10-19, etc. Hence, we used 8,648 nodes to construct a multilayer comorbidity network with eight layers (one for each ten years age group, 0-9, 10-19,... 70-79 years old). We filtered the network by removing nodes without any intralayer links. This reduced the network from 8,648 nodes to 4,923 nodes for males and 4,764 nodes for females. The average degree in the filtered male network is 11.6 SD 39.7, for the female network the average degree is 15.8 SD 46. The number of hospital stays, Figure <ref>(A), and nodes N Figure <ref>(B) increases with age, reaches a peak at ages 60 to 69 and decreases for older ages. We see similar age trends in Figure <ref>(C) the total number of links and Figure <ref>(D) the average degree for intralayer as well as in- or outbound interlayer links for males and females. §.§.§ Trajectories The unsupervised community detection algorithm discovered 642 distinct disease trajectories in the male and 618 in the females network; they are listed in SI, Tables S1–S2, and shown in Figure <ref>. These trajectories contain on average 9 (IQR 2-6) different diagnoses that range over up to 7 age groups (mean: 2.3 age groups), meaning that these trajectories range on average over 20-30 years and in some cases over up to 70 years of life. Besides trivial examples like a trajectory with the only diagnosis being K51 (ulcerative colitis) in each age group in males, we also found more complex trajectories spanning 70 years. 
For instance, for female patients there is a trajectory that starts with personality disorder (F61) at the age of 20-29y. Over the following decades there is an accumulation of mental disorders including depression (F33), post-traumatic stress disorder (F43) and eating disorders (F50) in 50-59y, followed by anxiety disorders (F40) and a few more non-chronic diagnoses in 60-69y. The distribution of the size of the trajectories (number of diagnoses-age tuples) is presented in Figure <ref> (A). Most trajectories contain between 3 and 5 diagnoses-age combinations; while a few trajectories contain more than hundred elements. We split trajectories into seven groups based on the number of age groups in the trajectory and analysed the number of different disease chapters in one trajectory Figure <ref> (B). This shows that trajectories typically span heterogeneous chapters of ICD codes, meaning that they often span diagnoses affecting quite different organ systems. We calculated the Jaccard index to inspect the pairwise similarity and dissimilarity of trajectories; see the distribution of this index in Figure <ref> (C). Jaccard indices range between zero and one, indicating varying degrees of similarity between two trajectories. The most common relationship among pairs of trajectories is nested, which explains the peak at one in the Jaccard index. Figure <ref> (D) shows frequency statistics of different types of trajectory pairs. In Figure <ref> we show a more detailed view of some of the trajectories from Figure <ref>. We show two examples for trajectories (grey areas) departing from (A) hypertension (I10) at an age of 10-19y in females and (B) sleep disorders (G47) at an age of 20-29y in males. In both cases, different combinations of other diagnoses appear in subsequent decades. The hypertension trajectory diverged into chronic kidney diseases (2,289 patients) or (1,027 patients) a combination of metabolic (obesity, disorders of lipoprotein metabolism) and digestive disorders (liver diseases, cholelithiasis) with nicotine abuse. The sleep disorder trajectory diverged either toward the metabolic syndrome (including obesity and type 2 diabetes) in 115 patients or towards a combination of movement disorders, hernia, obesity and diseases of the middle ear (316 patients). In total, we identified 35 pairs of such diverging trajectories in females and 35 in males; see Figure <ref> D). On average, diverging trajectories have 2.9 SD 0.8 age groups, 3.5 SD 1.8 different diagnoses chapters, and 8.1 SD 4.7 different diseases for females, and for males 3.0 SD 1 age groups, 3.5 SD 2.9 different diagnoses chapters, and 11 SD 11 different diseases. While there are 64 pairs of converging trajectories in females and 95 in males, converging trajectories in females have: 2.8 SD 0.9 age groups, 4.2 SD 3.2 different diagnoses chapters, and 26 SD 79 different diseases, in males: 3 SD 1 age groups, 3.8 SD 3.5 different diagnoses chapters, and 22 SD 68 different diseases. Some of the trajectories are persistent (16 pairs of trajectories in females, 14 in males). These can be combined as they overlap at the end of X trajectory and the beginning of the Y trajectory. The most frequent relationship between trajectories was the complete overlap of shorter and longer trajectories, which we defined as nested. We found 314 pairs of nested trajectories among female trajectories and 266 in males trajectories. 
We designed and implemented an online visualization tool that allows a user to interactively explore the comorbidity network structure and the underlying diagnose data, <https://vis.csh.ac.at/netviewer/>. §.§.§ Outcomes of trajectories For every trajectory, we calculated (in-hospital) mortality and the number of days spent in the hospital for each age group, Figure <ref>. In-hospital mortality for each trajectory is shown in the yellow outer circle. The analysis reveals notable variations in mortality rates across trajectories, with younger age groups generally exhibiting lower mortality. Moreover, it is evident that certain trajectories undergo significant shifts in mortality as they progress into older age groups. The green circle represents the average duration of hospitalization for trajectories, while the blue circle denotes the number of diagnoses, and the purple inner circle signifies the count of patients who followed at least 50% of a given trajectory. Notably, the green circle highlights discernible differences in the number of hospital days among different trajectories. Some trajectories have a clearly higher number of hospital days compared to other trajectories; these trajectories mainly consist of mental and behavioral disorders (F chapter) and infectious and parasitic diseases (B chapter) in males, while in females, besides these we see diseases of musculoskeletal and connective tissue (M chapter) and diseases of the nervous system (G chapter). We also compared outcomes of diverging trajectories; some examples are shown in Table <ref> (extended tables in SI, Table S8 and S9). We calculated an average number of hospital diagnoses, hospital days, and hospital stays for each age group in each trajectory over all patients following these trajectories. We calculated the ratio of each outcome of trajectories in each diverging pair to check if these trajectories develop into different outcomes in terms of disease burden and mortality. For example, both trajectories from the pair starting with N81 in 50s are characterised with a similar average number of hospital diagnoses in the 20s, while in the 30s patients of the second trajectory have, on average, 24% more hospital diagnoses. In the same example, we see that patients of the first trajectory, on average have more days spent in hospital and hospital stays in the 20s (Ratio of average number of / number of days spent in hospital = 1.547 / hospital stays = 1.548), but in 30s patients of the second trajectory, have remarkably more days spent in hospital and hospital stays (Ratio of average number of / number of days spent in hospital = 0.331 / hospital stays = 0.551), Table <ref>. § DISCUSSION AND CONCLUSION In this work we introduced a novel method to identify life-course disease trajectories, in some cases spanning up to 70 years of life, in terms of sequences and combinations of hospital diagnoses that form and change over time. Our comprehensive analysis identified 642 disease trajectories in males and 618 in females ranging over the entire diagnostic spectrum (41% of males and 42% of female trajectories contained diagnoses from more than one ICD chapter). While the most common length of these trajectories was two diagnoses for both sexes, on average they contained 5.3 SD 5.1 and 5.4 SD 5.5 diagnoses for males and females, respectively, emphasizing the heterogeneous and widespread nature of multimorbidity in the general population. There is a substantial variation in the number of patients that follow a trajectory. 
We count patients for each trajectory for each age group if they have at least 50% of diagnoses from a trajectory. In general, shorter trajectories tend to be followed by more patients (more than 10,000 patients per trajectory per age group) than longer, more specific ones that typically contain approximately a hundred patients. The number of patients in a trajectory typically increases with age. The trajectories foster the rapid identification of critical events. These can take the form of bifurcation points where a trajectory “splits up” into multiple diverging trajectories at a specific age group. More concrete, we found 35 pairs of diverging trajectories for females and 35 pairs for males. For example, in females diagnosed with arterial hypertension (I10) between 10 and 19 years, two major trajectories were identified by the model. The first trajectory lead to the additional diagnosis of chronic kidney disease (N18) at an age of 20-29 years. This is clinically relevant as the number of pediatric arterial hypertension is increasing worldwide  <cit.> and it is well known that aHTN is closely related to chronic kidney disease. So our results point out that from a clinical point of view, a strict monitoring for arterial hypertension should be established especially in children at high risk, such as obese children or children with the metabolic syndrome. Arterial hypertension does not only mean increased risk for chronic kidney disease, but also other complications such as cardiovascular disease. The second trajectory was characterized by patients with the metabolic syndrome; these patients were disproportionally diagnosed with obesity (E66), lipid disorders (E78), steatosis hepatis (K76), cholelithiasis (K80) and nicotine abuse (N17) in their further life. In general, we therefore have two trajectories in females initially diagnosed with arterial hypertension, which are in principal dangerous conditions - the "kidney-trajectory" and the "metabolic trajectory". We found that approximately 2,289 patients follow the "kidney-" and 1,027 patients follow the metabolic trajectory. These trajectories are mostly important as metabolic diseases belong to the most common diseases worldwide and also chronic kidney disease is a disease which is related to multi morbidity and increased mortality rate. In a different example we found that sleeping disorders (G47) in males diagnosed in the age groups between 20-39 years were also followed by a metabolic trajectory which was defined by an over-representation of later diagnoses of diabetes mellitus type 2 (E11), obesity (E66), lipid disorders (E78) and hyperuricemia (E79). The other trajectory, diverging from sleeping disorders, is characterized by a higher chance of being diagnosed with movement disorders or otitis media (G25), obesity (E66) and abdominal hernia (K46). We found substantial differences in the average number of diagnoses and hospital days between patients of different branches of these diverging trajectories. While patients who followed these two trajectories showed similar average numbers of diagnoses at age 20-29 (3.3 diagnoses in both cases), patients who followed a metabolic trajectory had, on average, 3.9 diagnoses ten years later while patients who followed the other trajectory had, on average, 5.1 diagnoses. The number of sleeping disorders is on the rise and these results show that patients with sleeping disorders have to be monitored for several diseases in different trajectories. 
Our analysis also identified several instances were diverging trajectories differed substantially in their mortality, in some cases of up to 18 times. In terms of mortality we identify trajectories that develop into a combination of diagnoses with high mortality in older age groups. For instance, a trajectory consisting of chronic bronchitis and COPD at an age of 40-49y, bronchiectasis and intraoperative and postprocedural complications at 50-59y and finally in sequelae of tuberculosis, inflammatory polyneuropathy, conjunctivitis, bronchitis, bronchiectasis, eosinophilia and again intraoperative and postprocedural complications in 60-69y in males had eight times higher mortality in the age group 60-69y compared to its' mortality ten years earlier (mortality increased from 0.089 in 40-49y to 0.013 before jumping to 0.11 in 60-69y). Trajectories with the highest mortality usually contain cancer diagnoses, but cardiovascular or respiratory diseases also feature in the trajectories with high mortality. §.§.§ Strengths and Limitations Strengths of this study include its comprehensive population-wide in-hospital database, containing information on about 9 million individuals. Non-systematic errors, such as randomly missing diagnoses, have little impact on our research because of the volume of the data set. However, this study has some limitations caused by data quality and limitations in data availability, in particular, the lack of information on outpatient visits, medication and lifestyle. Consequently, we cannot evaluate the outcomes of outpatient visits, blood tests, examinations, or imaging because primary care diagnoses are not recorded in this dataset; only hospital diagnoses coded with ICD10 codes were available for analysis. Another drawback is that the database was designed for billing purposes, so diagnoses that did not result in financial compensation were frequently not reported. Therefore, we have to point out that some diseases, such as alcohol related disorders or nicotine dependence, are often not recorded correctly in our data. Further, socio-economic indicators for individual patients were also not available in the dataset, leaving it yet to be explored how socio-economic status impacts on these trajectories. An additional constraint associated with the dataset is the exclusive availability of in-hospital mortality data. On a methodological level, it is also important to bear in mind that a constructed multilayer comorbidity network has two types of links (with normalized links weights); but these types are not distinguishable by the used community detection algorithms. In summary, we presented a novel and statistically grounded way of studying disease progression over time based on a population-wide and decade-spanning data set of hospital diagnoses. We proposed an age multilayer comorbidity network as a base for our modelling approach. We showed that this kind of network is a promising approach for better understanding disease trajectories and their dynamics as patients age. While some of the identified trajectories in this study have been described in previously published studies, many novel disease trajectories and their decades-long time dynamics have been revealed. A better understanding of diseases, their correlations and the sequences in they occur has the potential to improve the prevention of focal diseases. Early detection and identification of a patient's projected disease trajectory might enable prompt and timely treatments next to targeted preventive action. 
Consequently, that will help transition health systems from single-disease models to more effective life-spanning and individualized multimorbidity models  <cit.>. § REFERENCES 29 han2021diseaseHan, X., Hou, C., Yang, H., Chen, W., Ying, Z., Hu, Y., Sun, Y., Qu, Y., Yang, L., Valdimarsdóttir, U. & Others Disease trajectories and mortality among individuals diagnosed with depression: a community-based cohort study in UK Biobank. Molecular Psychiatry. 26, 6736-6746 (2021) cezard2021studyingCezard, G., McHale, C., Sullivan, F., Bowles, J. & Keenan, K. Studying trajectories of multimorbidity: a systematic scoping review of longitudinal approaches and evidence. BMJ Open. 11, e048485 (2021) whoage World Health Organization - Ageing and health. Last accessed May 01, 2022 from <https://www.who.int/news-room/fact-sheets/detail/ageing-and-health>. euage WAgeing Europe: LOOKING AT THE LIVES OF OLDER PEOPLE IN THE EU, 2019 edition from <https://ec.europa.eu/eurostat/>. struckmann2014caringStruckmann, V., Snoeijs, S., Melchiorre, M., Hujala, A., Rijken, M., Quentin, W., Ginneken, E. & Others Caring for people with multiple chronic conditions in Europe. Eurohealth. 20, 35-40 (2014) hajat2018globalHajat, C. & Stein, E. The global burden of multiple chronic conditions: a narrative review. Preventive Medicine Reports. 12 pp. 284-293 (2018) world2015worldOrganization, W. World report on ageing and health. (World Health Organization,2015) rowe1997successfulRowe, J. & Kahn, R. Successful aging. The Gerontologist. 37, 433-440 (1997) kudesia2021incidenceKudesia, P., Salimarouny, B., Stanley, M., Fortin, M., Stewart, M., Terry, A. & Ryan, B. The incidence of multimorbidity and patterns in accumulation of chronic conditions: A systematic review. Journal Of Multimorbidity And Comorbidity. 11 pp. 26335565211032880 (2021) di2015associationDi Angelantonio, E., Kaptoge, S., Wormser, D., Willeit, P., Butterworth, A., Bansal, N., O’Keeffe, L., Gao, P., Wood, A., Burgess, S. & Others Association of cardiometabolic multimorbidity with mortality. Jama. 314, 52-60 (2015) strauss2014distinctStrauss, V., Jones, P., Kadam, U. & Jordan, K. Distinct trajectories of multimorbidity in primary care were identified using latent class growth analysis. Journal Of Clinical Epidemiology. 67, 1163-1171 (2014) fotouhi2018statisticalFotouhi, B., Momeni, N., Riolo, M. & Buckeridge, D. Statistical methods for constructing disease comorbidity networks from longitudinal inpatient data. Applied Network Science. 3, 1-34 (2018) jeong2017networkJeong, E., Ko, K., Oh, S. & Han, H. Network-based analysis of diagnosis progression patterns using claims data. Scientific Reports. 7, 1-12 (2017) chmiel2014spreadingChmiel, A., Klimek, P. & Thurner, S. Spreading of diseases through comorbidity networks across life and gender. New Journal Of Physics. 16, 115013 (2014) violan2020fiveViolá, C., Fernández-Bertolín, S., Guisado-Clavero, M., Foguet-Boreu, Q., Valderas, J., Vidal Manzano, J., Roso-Llorach, A. & Cabrera-Bean, M. Five-year trajectories of multimorbidity patterns in an elderly Mediterranean population using Hidden Markov Models. Scientific Reports. 10, 1-11 (2020) prados2018cohortPrados-Torres, A., Poblador-Plou, B., Gimeno-Miguel, A., Calderón-Larrañaga, A., Poncel-Falcó, A., Gimeno-Feliú, L., González-Rubio, F., Laguna-Berna, C., Marta-Moreno, J., Clerencia-Sierra, M. & Others Cohort profile: the epidemiology of chronic diseases and multimorbidity. The EpiChron cohort study. International Journal Of Epidemiology. 
47, 382-384f (2018) siggaard2020diseaseSiggaard, T., Reguant, R., Jørgensen, I., Haue, A., Lademann, M., Aguayo-Orozco, A., Hjaltelin, J., Jensen, A., Banasik, K. & Brunak, S. Disease trajectory browser for exploring temporal, population-wide disease progression patterns in 7.2 million Danish patients. Nature Communications. 11, 1-10 (2020) jensen2014temporalJensen, A., Moseley, P., Oprea, T., Ellesøe, S., Eriksson, R., Schmock, H., Jensen, P., Jensen, L. & Brunak, S. Temporal disease trajectories condensed from population-wide registry data covering 6.2 million patients. Nature Communications. 5, 1-10 (2014) haug2020highHaug, N., Deischinger, C., Gyimesi, M., Kautzky-Willer, A., Thurner, S. & Klimek, P. High-risk multimorbidity patterns on the road to cardiovascular mortality. BMC Medicine. 18, 1-12 (2020) giannoula2018identifyingGiannoula, A., Gutierrez-Sacristán, A., Bravo, Á., Sanz, F. & Furlong, L. Identifying temporal patterns in patient disease trajectories using dynamic time warping: a population-based study. Scientific Reports. 8, 1-14 (2018) hassaine2020untanglingHassaine, A., Salimi-Khorshidi, G., Canoy, D. & Rahimi, K. Untangling the complexity of multimorbidity with machine learning. Mechanisms Of Ageing And Development. 190 pp. 111325 (2020) hsu2015trajectoriesHsu, H. Trajectories of multimorbidity and impacts on successful aging. Experimental Gerontology. 66 pp. 32-38 (2015) vos2015trajectoriesVos, R., Akker, M., Boesten, J., Robertson, C. & Metsemakers, J. Trajectories of multimorbidity: exploring patterns of multimorbidity in patients with more than ten chronic health problems in life course. BMC Family Practice. 16, 1-12 (2015) lancichinetti2009detectingLancichinetti, A., Fortunato, S. & Kertész, J. Detecting the overlapping and hierarchical community structure in complex networks. New Journal Of Physics. 11, 033015 (2009) deischinger2020diabetesDeischinger, C., Dervic, E., Leutner, M., Kosi-Trebotic, L., Klimek, P., Kautzky, A. & Kautzky-Willer, A. Diabetes mellitus is associated with a higher risk for major depressive disorder in women than in men. BMJ Open Diabetes Research And Care. 8, e001430 (2020) dervic2021effectDervic, E., Deischinger, C., Haug, N., Leutner, M., Kautzky-Willer, A., Klimek, P. & Others The Effect of Cardiovascular Comorbidities on Women Compared to Men: Longitudinal Retrospective Analysis. JMIR Cardio. 5, e28015 (2021) kuritz1988generalKuritz, S., Landis, J. & Koch, G. A general overview of Mantel-Haenszel methods: applications and recent developments. Annual Review Of Public Health. 9, 123-160 (1988) ashraf2020pediatricAshraf, M., Irshad, M. & Parry, N. Pediatric hypertension: an updated review. Clinical Hypertension. 26 pp. 1-6 (2020) zou2022associationZou, S., Wang, Z., Bhura, M. & Tang, K. Association of multimorbidity of non-communicable diseases with mortality: a 10-year prospective study of 0.5 million Chinese adults. Public Health. 205 pp. 63-71 (202 ) fortunato2016communityFortunato, S. & Hric, D. Community detection in networks: A user guide. Physics Reports. 659 pp. 1-44 (2016) § ACKNOWLEDGMENTS This study was supported financially by the WWTF "Mathematics and..." Project MA16-045. ED would like to thank Michaela Kaleta, Nina Haug and Rafael Prieto-Curiel for the helpful discussions. § AUTHOR CONTRIBUTIONS ED and PK conceived the study and devised the analytic methods. ED wrote the manuscript with contributions from PK, ST, ML and JS. ED carried out the analysis and produced the plots and graphics. 
JS and LY designed and implemented the visualisation too. AK-W, AK and ML contributed medical expertise regarding the medical interpretation of the findings and in developing medical hypotheses. ED and PK researched and prepared the data. All authors reviewed and contributed to the manuscript. § COMPETING INTERESTS The authors declare no competing interests. [pages=-,scale=0.9, pagecommand=]ML_SI_1.pdf [pages=-,scale=0.9, pagecommand=, landscape=true]ML_SI_2.pdf
http://arxiv.org/abs/2306.07036v1
20230612113346
Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision
[ "Yuhao Wu", "Xiaobo Xia", "Jun Yu", "Bo Han", "Gang Niu", "Masashi Sugiyama", "Tongliang Liu" ]
cs.LG
[ "cs.LG" ]
Quantum Interference of Cavity Light Induced by a Single Atom in Double Well Hao Zhang July 31, 2023 ============================================================================= Training a classifier exploiting a huge amount of supervised data is expensive or even prohibited in a situation, where the labeling cost is high. The remarkable progress in working with weaker forms of supervision is binary classification from multiple unlabeled datasets which requires the knowledge of exact class priors for all unlabeled datasets. However, the availability of class priors is restrictive in many real-world scenarios. To address this issue, we propose to solve a new problem setting, i.e., binary classification from multiple unlabeled datasets with only one pairwise numerical relationship of class priors (MU-OPPO), which knows the relative order (which unlabeled dataset has a higher proportion of positive examples) of two class-prior probabilities for two datasets among multiple unlabeled datasets. In MU-OPPO, we do not need the class priors for all unlabeled datasets, but we only require that there exists a pair of unlabeled datasets for which we know which unlabeled dataset has a larger class prior. Clearly, this form of supervision is easier to be obtained, which can make labeling costs almost free. We propose a novel framework to handle the MU-OPPO problem, which consists of four sequential modules: (i) pseudo label assignment; (ii) confident example collection; (iii) class prior estimation; (iv) classifier training with estimated class priors. Theoretically, we analyze the gap between estimated class priors and true class priors under the proposed framework. Empirically, we confirm the superiority of our framework with comprehensive experiments. Experimental results demonstrate that our framework brings smaller estimation errors of class priors and better performance of binary classification. § INTRODUCTION Deep learning with large-scale supervised data has enjoyed huge successes in various domains <cit.>. However, it is often very costly or even infeasible to collect the data with strong supervision <cit.>. The fact motivates us to investigate learning algorithms that work with weaker forms of supervision <cit.>, e.g., partial labels <cit.> and noisy labels <cit.>. The remarkable progress in working with weaker forms of supervision is binary classification from multiple unlabeled datasets <cit.>. Those multiple unlabeled datasets share the same class-conditional density and the aim is to learn a binary classifier from m (m≥ 2) unlabeled datasets with different class priors, i.e., the proportion of positives in each unlabeled dataset <cit.>. Such a learning scheme is credible in practice. Unlabeled datasets with different class priors can be naturally collected. For example, considering morbidity rates, they can be potential patient data collected from different regions. The rates are likely to be very different because of the different region backgrounds <cit.>. Class priors play essential roles in the problem of binary classification from multiple unlabeled datasets. In detail, class priors can be leveraged as the supervision to make the problem mathematically solvable, further leading to statistically consistent methods <cit.>. Prior works assumed that the class priors of multiple unlabeled datasets are given, which is reflected in Table <ref>. However, in many real-world scenarios, the class priors are unavailable <cit.>. 
In addition, randomly set class priors or poorly estimated class priors cannot work well (see empirical evidence in Section <ref>). It is still mysterious nowadays that the solution for binary classification from multiple unlabeled datasets without given class priors. In this paper, we raise a new problem setting, i.e., multiple unlabeled datasets with only one pairwisely numerical relationship of class priors (MU-OPPO). In this new problem illustrated in Figure <ref>, there are no given class priors for multiple unlabeled datasets that share the same class-conditional density. Instead, multiple unlabeled datasets are provided with at least one pairwise numerical relationship of class priors. That is to say, we only require that there exists a pair of unlabeled datasets for which we know which unlabeled dataset has a larger class prior. This weakly supervised requirement makes binary classification from multiple unlabeled datasets almost free of supervision. For example, considering the data about morbidity rates of patients in different regions, we cannot obtain precise morbidity rates directly without careful diagnoses, which could be very costly. On the contrary, we can easily obtain that, for a pair of regions, which one has a higher morbidity rate than the other according to relevant and easily available knowledge, e.g., the region having poor sanitary and medical conditions usually has a high morbidity rate. Generally speaking, obtaining the numerical relationship of two class priors would be much easier than directly obtaining precise class priors and thus less costly. Although the supervision in MU-OPPO is more accessible by comparing precise class priors with the pairwise numerical relationship of class priors, the learning task becomes more challenging since the supervised information of the pairwise numerical relationship of class priors is less than exact class priors. To address the MU-OPPO problem, we propose a novel learning framework in this paper. Specifically, our learning framework consists of four modules: (i) pseudo label assignment; (ii) confident example collection; (iii) class prior estimation; (iv) classifier training with estimated class priors. We handle the MU-OPPO problem by executing the four modules sequentially. The illustration of our proposed framework is provided in Figure <ref>. In particular, we first assign pseudo labels to the two unlabeled datasets whose class prior numerical relationship is known. Specifically, the unlabeled dataset having a larger (resp. smaller) positive class prior is assigned with positive (resp. negative) pseudo labels. Then, we select confident data whose pseudo labels are more likely to be correct, from the two datasets with pseudo labels. Afterward, with the selected confident data, the class priors of all unlabeled datasets can be estimated under irreducible and mutually irreducible assumptions <cit.>. In the end, the estimated class priors are then employed to build statistically consistent algorithms for binary classification from unlabeled datasets <cit.>. Contributions. Before delving into details, we summarize the contributions of this paper as follows. * We propose a new and realistic problem setting that targets binary classification from multiple unlabeled datasets with only one pairwise numerical relationship of class priors (MU-OPPO). Unlike previous proposals (see Table <ref>), the proposed problem setting gets rid of given class priors, which requires almost free supervision information (Section <ref>). 
* We establish a generalized framework for the MU-OPPO problem, which contains four modules, as discussed above. Within the framework, multiple class priors can be estimated well by utilizing only one pairwise numerical relationship of class priors. This makes applying existing statistically consistent binary classification algorithms from unlabeled datasets successful (Section <ref>). * Theoretically, we analyze the gap between estimated class priors and true class priors under the proposed framework. (Section <ref>) Empirically, we evaluate the effectiveness of the proposed framework with comprehensive experiments based on the problem setting of MU-OPPO. Besides, we verify the generalization capability of our framework by replacing various methods in different modules. Experimental results confirm the advancement of the framework (Section <ref>). Organization. The rest of this paper is organized as follows. In Section <ref>, we formally set up the MU-OPPO problem and analyze the dilemma when previous methods target this problem. In Section <ref>, we discuss the proposed framework and include modules in detail. In Section <ref>, we discuss the difference between the proposed framework and the previous advanced one. In Section <ref>, the theoretical analysis provides the gap between estimated class priors and true class priors under the proposed framework. In Section <ref>, a series of experiments are presented to verify the effectiveness of the proposed framework. In Section <ref>, we provide a more comprehensive review of related literature. Lastly, in Section <ref>, we conclude this paper. § PRELIMINARIES In this section, we first formulate the MU-OPPO problem (Section <ref>). Then, we show that prior solutions cannot be directly applied to solve the MU-OPPO problem, which demonstrates the necessity of the proposed learning paradigm (Section <ref>). §.§ Problem Statement We warm up with binary classification in supervised learning. Let 𝒳 be the input feature space and 𝒴={+1,-1} be the binary label space respectively. Let x∈𝒳 and y∈𝒴 denote the input and output random variables, following an underlying joint distribution 𝒟. In supervised learning, the goal of binary classification is to train a classifier f:𝒳→𝒴 that minimizes the risk defined as R(f):=𝔼_(x,y)∼𝒟[ℓ(f(x),y)], where 𝔼 denotes the expectation and ℓ:R×𝒴→[0,+∞) is the specific loss function, e.g., the cross-entropy loss <cit.>. In most cases, R(f) cannot be calculated directly since the joint distribution 𝒟 is unknown to the learner. Instead, we are given a fully labeled training dataset {(x_i,y_i)}_i=1^n i.i.d. drawn from 𝒟, where n is the size of training examples. The empirical risk is then used to approximate R(f) by R̂(f):=1/n∑_i=1^nℓ(f(x_i),y_i). Compared with binary classification in supervised learning, MU-OPPO considers a different problem set. Problem Set. In MU-OPPO, we only have access to m (m≥ 2) unlabeled datasets denoted by 𝒳_u={𝒳_u^j}_j=1^m. Here, 𝒳_u^j = {x_1^j,…, x_n_j^j}i.i.d.∼P_u^j(x), where n_j and P_u^j(x) denote the sample size and the marginal density of the j-th unlabeled dataset respectively. The marginal density can be seen as a mixture of the positive and negative class-conditional density [P_p(x),P_n(x)]:=[P(x|y=+1),P(x|y=-1)] by the class priors π_j=P_u^j(y=+1), i.e., P_u^j(x)=π_j P_p(x) + (1-π_j)P_n(x). 
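To make the data-generating process concrete, the following sketch simulates m unlabeled datasets drawn from the mixture above; the two-dimensional Gaussian class-conditional densities, the particular class priors, and the sample sizes are illustrative assumptions of the sketch, not part of the MU-OPPO setting.

import numpy as np

rng = np.random.default_rng(0)

def sample_class_conditional(y, n):
    # Illustrative class-conditional densities P_p and P_n:
    # two 2-D Gaussians with different means (an assumption of this sketch).
    mean = np.array([1.0, 1.0]) if y == +1 else np.array([-1.0, -1.0])
    return rng.normal(loc=mean, scale=1.0, size=(n, 2))

def sample_unlabeled_set(pi_j, n_j):
    # Draw an unlabeled set from P_u^j = pi_j * P_p + (1 - pi_j) * P_n.
    n_pos = rng.binomial(n_j, pi_j)
    x = np.vstack([sample_class_conditional(+1, n_pos),
                   sample_class_conditional(-1, n_j - n_pos)])
    rng.shuffle(x)          # latent labels are discarded: the set is unlabeled
    return x

# m unlabeled datasets with hidden class priors; in MU-OPPO the learner only
# observes the feature matrices plus one pairwise order such as pi_1 > pi_2.
true_priors = [0.8, 0.3, 0.5, 0.6]      # unknown to the learner
unlabeled_sets = [sample_unlabeled_set(p, 1000) for p in true_priors]

The latent labels are dropped immediately after sampling, so the learner only sees the feature matrices and, in MU-OPPO, a single pairwise order of class priors.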
For all m unlabeled datasets, there exists one pair of unlabeled datasets, e.g., 𝒳_u^α and 𝒳_u^β, the numerical relationship of their corresponding class priors π_α and π_β are known as weak supervision, i.e., π_α>π_β or π_α<π_β[According to <cit.> and <cit.>, among the m unlabeled datasets, to make the problem mathematically solvable, it is necessary to assume that at least two of class priors {π_j}_j=1^m are different, i.e., ∃ j,j'∈{1,…,m} such that j ≠ j' and π_j ≠π_j'.]. Compared with our problem setting, prior works <cit.> assumed that the class priors π_j for any j∈{1,…,m} is known. As discussed in Section <ref>, in many realistic scenarios, the class priors are unavailable. Therefore, we have introduced the problem of MU-OPPO above. Obviously, the supervision of one pairwise numerical relationship of class priors is weaker than multiple exact class priors. If the exact class priors are available, then the pairwise numerical relationship of class priors can be accessed, but not vice versa. Although the supervision signal in our problem setting is almost free, the goal is still to obtain a binary classifier that can generalize well with respect to the joint distribution 𝒟, even though it is unobserved. §.§ The Challenging of MU-OPPO In MU-OPPO, exact class priors of multiple unlabeled datasets are unavailable, which makes the problem challenging. For estimating class priors, the state-of-the-art methods are highly related to mixture proportion estimation (MPE) <cit.>. However, those methods cannot address our MU-OPPO problem, and we will explain the reasons below. In the setting of MPE, there is a mixture distribution ℱ, i.e., ℱ=(1-κ^*)𝒢+κ^*ℋ, where 𝒢 and ℋ denote two component distributions. Given the examples randomly collected from ℱ and ℋ, MPE aims to estimate the maximum proportion κ^*∈ (0,1) of ℋ in ℱ <cit.>. Ideally, if we regard κ^* as the class prior, the MPE methods can be applied to estimate π_j by replacing ℱ and ℋ with the mixture distribution P_u^j(x) and one of the component distribution P_p^j(x). Unfortunately, in MU-OPPO, this direct application is infeasible. The reason is that P_p is latent, while only P_u is obtainable. Moreover, Scott (2015) applied a mutual MPE model (Scott, 2015) to the setting of learning from noisy labels with the rigorous assumption of class priors. Although their method is related to our work, we show in Section <ref> that this method has drawbacks when applied to our MU-OPPO setting. Note that randomly generating class priors cannot perform well for binary classification, which is verified in Figure <ref>. Therefore, to accurately estimate class priors, we propose a novel framework in Section <ref>, where it is valid to approximate the latent distributions P_p and P_n out of mixture distributions P_u^j(x) with theoretical guarantees. We call the proposed framework for estimating the class priors the confident class-prior estimator (CCPE) since we use a collection of confident examples in approximating the latent distributions P_p and P_n. § TACKLING THE MU-OPPO PROBLEM WITH A SOLUTION FRAMEWORK In this section, we design a solution framework to handle the MU-OPPO problem and name it the MU-OPPO solution (MOS) framework. The proposed MOS framework of the MU-OPPO problem contains four modules: (i) pseudo label assignment; (ii) confident example collection; (iii) class prior estimation; (iv) classifier training with estimated class priors. The first three modules estimate the class priors and are included in the CCPE. 
The entire four modules mainly constitute the MOS framework and tackle the MU-OPPO problem sequentially. The illustration of tackling the MU-OPPO problem is provided in Figure <ref>. Specifically, we first assign pseudo labels to two unlabeled datasets, whose numerical relationship between class priors is known and collect confident examples used to approximate the latent distributions P_p and P_n. Then, with the selected confident examples, the class priors of all unlabeled datasets are estimated. Finally, the estimated class priors are employed to learn a binary classifier from multiple unlabeled datasets. Note that our MOS framework consists of four modules that provide compatibility with some existing methods, e.g., adopting various existing methods for selecting confident examples into our confident example collection module. In summary, we target a novel MU-OPPO problem setting by designing a new solution framework here. Technically, we propose a new way of estimating multiple class priors with only one pairwise numerical relationship of class priors. Below we discuss the technical details of the modules involved in our MOS framework. §.§ Pseudo Label Assignment We first discuss how to assign pseudo labels to multiple unlabeled datasets. Specifically, for α,β∈{1,…,m}, we build a new dataset by combining the two unlabeled datasets, i.e., 𝒳_u^α = {x_1^α,…, x_n_α^α}i.i.d.∼P_u^α(x) and 𝒳_u^β = {x_1^β,…, x_n_β^β}i.i.d.∼P_u^β(x). Afterwards, pseudo labels ỹ_i^α and ỹ_i^β corresponding to x_i^α and x_i^β are assigned with positive (+1) and negative (-1) forms. The numerical relationship of class priors π_α and π_β play an important role in assigning pseudo labels. In detail, if π_α >π_β, we can regard P_u^α(x) as the corrupted positive density P̃_p^α(x), and regard P_u^β(x) as the corrupted negative density P̃_n^β(x). Note that we assume that among the m unlabeled datasets, at least two class priors are different and their numerical relationship is known. We use the pair that has different class priors. Therefore, we roughly assign positive labels to 𝒳_u^α and negative labels to 𝒳_u^β. After the assignment of pseudo labels, we obtain a pseudo labeled dataset 𝒳^αβ={(x_i,ỹ_i)}_i=1^n_α+n_β, where ỹ_i denotes the (noisy) pseudo labels of the instance x_i. At first glance, the generation process of our pseudo-labeled dataset looks similar to class-conditional noise <cit.> in learning with noisy labels <cit.>, where clean labels are assumed to flip into other classes with a certain probability. In fact, this data generation process is highly related to the mutually contaminated distributions <cit.>, which is more general than the CCN model. Denote P̃(·) by the corrupted distribution and ỹ by the random variable of noisy labels. Then, CCN and MCD are defined by [ P̃(ỹ=+1|x); P̃(ỹ=-1|x) ] = T_CCN[ P(y=+1|x); P(y=-1|x) ] and [ P̃(x|ỹ=+1); P̃(x|ỹ=-1) ] = T_MCD[ P_p(x); P_n(x) ], where both of T_CCN and T_MCD are 2× 2 matrices but T_CCN is column normalized and T_MCD is row normalized. Moreover, the CCN model is a strict special case of the MCD model <cit.>. Denote P̃(ỹ) by the corrupted label distribution. For CCN, P̃(ỹ) is fixed once P̃(ỹ|x) is specified. While, for MCD, P̃(ỹ) is free after P(x| ỹ) is specified. Furthermore, P̃(x)=P(x) in CCN but P̃(x)≠P(x) in MCD. Due to this covariate shift <cit.>, CCN methods do not fit the MCD problem setting, while the MCD methods can fit the CCN problem setting conversely <cit.>. 
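As a minimal sketch of this module, assuming the pair (α, β) with π_α > π_β has already been identified, pseudo labels can be assigned as follows; the array names and placeholder data are illustrative only.

import numpy as np

def assign_pseudo_labels(x_alpha, x_beta):
    # Given the known relation pi_alpha > pi_beta, treat the set with the
    # larger positive-class prior as (noisy) positives and the other as
    # (noisy) negatives, yielding the pseudo-labeled dataset X^{alpha,beta}.
    x = np.vstack([x_alpha, x_beta])
    y_tilde = np.concatenate([np.full(len(x_alpha), +1),
                              np.full(len(x_beta), -1)])
    return x, y_tilde

# Placeholder feature matrices standing in for the two chosen unlabeled sets.
x_alpha = np.random.randn(1000, 2)
x_beta = np.random.randn(1200, 2)
x_pair, y_pair = assign_pseudo_labels(x_alpha, x_beta)
# A warm-up classifier is then trained on (x_pair, y_pair) for a small
# number of epochs, as described next.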
After generating the pseudo-labeled dataset, we warm up the training of a binary classifier, e.g., ResNet <cit.>, in the first few epochs to help identify confident examples later. The reason for training a warm-up classifier is that the deep learning models will first memorize training data with clean labels and then gradually adapt to those with noisy labels as training epochs become large <cit.>. Therefore, the classifier's predictions can be reliable after a few warm-up learning epochs. §.§ Confident Example Collection In order to approximate the latent distributions, i.e., P_p and P_u, we collect confident examples from 𝒳^αβ based on a previously trained warm-up classifier. For collecting confident examples, some existing methods in the literature of learning with label noise have been proven to work well. Here, we adopt three representative methods among them from different perspectives: (i) the loss distribution <cit.>; (ii) the prediction probability of examples on given pseudo labels <cit.>; (iii) the latent representation of data points  <cit.>. In the following, confident examples are formally defined first, and then we discuss three methods for collecting confident examples through the trained warm-up classifier. 𝒟 is the underlying joint distribution of a pair of random variables (x,y) ∈𝒳×𝒴. For any example (x_i,ỹ_i) sampled from the joint distribution 𝒟, if the probability P(y=ỹ_i|x=x_i) > 0.5, the example (x_i,ỹ_i) is defined as a confident example. In the above definition, Eq. (<ref>) denotes that an example can be defined as a confident example when the probability that its assigned pseudo label is correct is greater than the threshold of 0.5. In the following implementations, we collect confident examples by estimating the posterior probability P(y=ỹ_i|x=x_i). It is inevitable that estimation errors will occur. Thus, in practice, we appropriately increase the threshold to collect confident samples more strictly. §.§.§ Collection by the Loss Distribution It has been empirically demonstrated <cit.> that deep networks tend to memorize clean examples faster than mislabeled examples. Thus, when considering the loss value of examples after the warm-up training, clean examples are more likely to have smaller losses than mislabeled examples <cit.>. Following <cit.>, we estimate the probability that an example is confident by fitting a two-component mixture model to the loss distribution for mixtures of clean and mislabeled examples. Formally, let θ_w denote the parameters of our warm-up classifier. The logistic loss ℒ(θ_w) reflects how well the classifier fits our pseudo labeled dataset 𝒳^αβ: ℒ(θ_w)={ℒ_i}^n_α+n_β_i=1={ln(1 + exp^-ỹ_i f(x_i; θ_w)) }^n_α+n_β_i=1, where f(x_i;θ_w) denotes the predicted probability by the warm-up classifier when the pseudo label is ỹ_i. Considering that the Gaussian Mixture Model (GMM) can better distinguish clean and mislabeled samples due to its flexibility in the sharpness of distribution <cit.>, we fit a two-component GMMs with ℒ(θ_w) using the Expectation-Maximization algorithm <cit.>. For each example, the probability w_i that the assigned pseudo label is correct is estimated by the posterior probability P(g|x_i), where g is the Gaussian component with a smaller mean (minor loss). We collect confident examples by setting a threshold on w_i. §.§.§ Collection by the Prediction Probability The confident learning algorithm <cit.> works by estimating the joint distribution of noisy and latent true labels. 
The estimation relies on the predicted probability of examples on given pseudo labels. The central idea of the confident learning algorithm is to introduce the confident joint C_ỹ,y to partition and count confident examples. The confident joint C_ỹ,y first finds out the set of examples with the noisy label r and true label s, which is denoted by 𝒳_ỹ=r,y=s. Afterward, the algorithm identifies 𝒳^*_ỹ=r,y=s out of 𝒳_ỹ=r,y=s, where 𝒳^*_ỹ=r,y=s is the set of examples noisily labeled ỹ=r with a large enough predicted probability P̂(ỹ=s|x). Here, the predicted probability is output by our trained warm-up classifier. Moreover, by introducing a per-class threshold t_s, the confident joint is formulated as C_ỹ,y[r][s] 𝒳^*_ỹ=r,y=s, where 𝒳^*_ỹ=r,y=s{x∈𝒳_ỹ = r : P̂ (ỹ = s |x) ≥ t_s, s = P̂ (ỹ = l |x) }. The threshold t_s is the expected (average) self-confidence in each class, i.e., t_s = 1/|𝒳_ỹ=s|∑_x∈𝒳_ỹ=sP̂(ỹ=s|x). It should be noted that in the original paper of the confident learning algorithm <cit.>, the computation of predicted probabilities is out-of-sample by using four-fold cross-validation. Otherwise, the predicted probabilities of examples would be over-confident[ When training neural networks on train datasets, modern neural networks are over-confident in their predictions, i.e., the predicted probabilities of examples are excessively high <cit.>.]. Fortunately, using the warm-up classifier in our framework has already addressed this issue and would be reliable enough for the downstream collection of confident examples, as experimentally demonstrated in Section <ref>. After computing C_ỹ,y, we could easily collect confident examples {x∈𝒳̂_ỹ=r,y=s r = s} from the diagonals of C_ỹ,y. Specifically, we can obtain the confident positive set 𝒳^αβ_p and confident negative set 𝒳^αβ_n as 𝒳^αβ_p={x∈𝒳̂_ỹ=+1,y=+1} and 𝒳^αβ_n={x∈𝒳̂_ỹ=-1,y=-1}. §.§.§ Collection by the Latent Representation In <cit.>, the confident examples are identified by the alignment between the principal component distribution and representation of each instance by using an eigenvalue decomposition of the data Gram matrix. For this goal, given the low-dimensional representation z_i of the instance x_i, which can be obtained with our previously trained warm-up classifier, the data Gram matrix is defined as G_k = ∑_ỹ_i:= kz_iz_i^⊤, k∈{+1,-1}. Then, the alignment a_i of x_i is evaluated via the square of the inner product, i.e., a_i,k:=⟨u_k,z_i⟩^2. Here, u_k is the first column of U_k from the eigendecomposition of G_k, when the eigenvalues are in descending order. According to <cit.>, the collection process of confident examples depends on alignment clusterability, which leads to the following definition. For all features pseudo-labeled as class k in the dataset 𝒳^αβ, let fit a two-component GMM on their alignment a_i,k to divide them into two sets. The set that has a larger mean value is treated as a confident set. Then, we state that the dataset 𝒳^αβ satisfies alignment clusterability if the representation z labeled as the same true class belongs to the confident set. As a whole, u_k is the principal component with the eigendecomposition. The dataset is well clustered with the alignment of representations toward the principal component. Hence, for the dataset 𝒳^αβ, we can obtain the confident positive set 𝒳^αβ_p and confident negative set 𝒳^αβ_n by fitting the GMM on their alignment. Till now, we have discussed how to collect confident examples. 
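To make one of these strategies concrete, the sketch below implements the loss-distribution variant with scikit-learn's GaussianMixture: per-example losses from the warm-up classifier are fit with a two-component GMM, and examples with a high posterior for the small-loss component are kept. The placeholder loss values and the 0.7 threshold are assumptions of the sketch rather than prescribed by the framework.

import numpy as np
from sklearn.mixture import GaussianMixture

def collect_confident_by_loss(losses, pseudo_labels, threshold=0.7):
    # Fit a two-component GMM to per-example warm-up losses and keep examples
    # whose posterior for the small-loss (clean) component exceeds the
    # threshold. Returns boolean masks for confident positives and negatives.
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    clean_comp = np.argmin(gmm.means_.ravel())      # small-loss component
    w = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_comp]
    confident = w > threshold
    return confident & (pseudo_labels == +1), confident & (pseudo_labels == -1)

# losses: logistic losses of the pseudo-labeled pair under the warm-up model
# (placeholder values here for illustration).
losses = np.abs(np.random.randn(2200))
pseudo_labels = np.concatenate([np.full(1000, +1), np.full(1200, -1)])
pos_mask, neg_mask = collect_confident_by_loss(losses, pseudo_labels)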
Below, we show how to estimate the class priors accurately with these confident examples. §.§ Class Prior Estimation According to Section <ref>, the direct application of MPE-based class prior estimators is infeasible due to the latent P_p or P_n distribution. But, the above confident example collection module can approximate the latent distribution by selecting confident samples. Thus, we can estimate class priors in our setting by implementing some existing methods on the MPE problem. Note that without any assumption, the class priors are not identifiable in the MPE problem <cit.>. Thus, to ensure identifiability, the irreducible assumption <cit.> has been proposed, and we briefly review this assumption first in this subsection. After introducing the irreducible assumption, we provide how to estimate class priors in our setting using a standard MPE estimator when the confident examples are collected. Later, we also implement two recent MPE estimators, i.e., Regrouping MPE (ReMPE), which weakens the irreducible assumption <cit.>, and the Best Bin Estimation (BBE) estimator <cit.>. To introduce the irreducible assumption, let P_p and P_u be probability distributions on the measurable space (𝒳,Ω), where Ω is the σ-algebra. Let κ^*, which is the maximum proportion of P_p in P_u, be identical to π. The irreducible assumption was proposed by <cit.> as follows: [Irreducible] The distribution P_n is irreducible with respect to P_p, if P_n is not a mixture containing P_p. That is, it is not possible to admit a decomposition P_n=(1-γ)Q+γP_p, where Q is a probability distribution on the measurable space (𝒳,Ω), and 0 < γ≤ 1. Assumption <ref> implies the following fact: inf_S ∈Ω, P_p(S)>0P_n(S)/P_p(S) = 0. This means that with the selection of different sets S, the probability P_n(S) can be arbitrarily close to 0, and P_p(S)>0 <cit.>. Intuitively, the irreducible assumption assumes that the support of the positive class-conditional distribution P_p is not contained in the support of the negative class-conditional distribution P_n. §.§.§ The Standard MPE Estimator Here, we introduce how to estimate class priors through the standard MPE estimator. Based on Assumption <ref>, this standard MPE estimator estimates class priors with theoretical guarantees <cit.>. Note that we can approximate both the latent P_p and the latent P_n distributions here by previously collected confident examples. Thus, by approximating these two latent distributions, it is possible to estimate class priors twice through the standard MPE estimator. In the following, we first introduce the standard MPE estimator and then describe how to estimate class priors twice with a suitable assumption to obtain more accurate results. Based on Assumption <ref>, a standard estimator can be designed with theoretical guarantees to estimate underlying class priors. Now we can access the distribution P^j_u and approximate the distribution P_p, and suppose the set C contains all possible latent distributions. The class prior π_j can be estimated as π̂_j^1 = κ^*(P_u^j|P_p) := sup{ω|P_u^j = (1-ω)K + ωP_p, K ∈ C} =inf_S ∈Ω, P_p(S)>0P_u^j(S)/P_p(S). According to <cit.>, the above equation converges to the true class priors at an acceptable rate. However, approximating the distribution P_p requires the collection of confident samples, which involves errors in selecting confident examples. Thus, the mistakes of selecting confident examples cause the estimation errors of class priors, and we clearly show it in Section <ref>. 
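For intuition, the following crude plug-in sketch approximates Eq. (<ref>) by restricting the candidate sets S to bins of a one-dimensional score (for instance, the output of a classifier separating the confident positives from 𝒳_u^j) and taking the minimum empirical ratio; the score construction, bin count, and minimum-count guard are assumptions of the sketch rather than part of the estimator's theory.

import numpy as np

def plugin_kappa(scores_u, scores_p, n_bins=20, min_count=10):
    # Crude approximation of kappa*(P_u | P_p) = inf_S P_u(S) / P_p(S):
    # candidate sets S are score bins; bins with too few P_p samples are
    # skipped to avoid unstable empirical ratios.
    edges = np.quantile(scores_p, np.linspace(0.0, 1.0, n_bins + 1))
    ratios = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_p = np.sum((scores_p >= lo) & (scores_p <= hi))
        in_u = np.sum((scores_u >= lo) & (scores_u <= hi))
        if in_p >= min_count:
            ratios.append((in_u / len(scores_u)) / (in_p / len(scores_p)))
    return min(min(ratios), 1.0) if ratios else 1.0   # estimate of pi_hat^1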
To reduce the estimation errors of the above equation, we additionally estimate class priors using the approximation of P_n and average the estimation results. This is because we can additionally approximate the distribution P_n by our confident example collection module. To additionally estimate class priors through the standard MPE estimator, the distribution P_n also needs to be irreducible with respect to P_p. Thus, we employ the mutually irreducible assumption <cit.> here, and then present the additional estimation formula. [Mutually Irreducible] The distributions P_p and P_n are said to be mutually irreducible if P_n is irreducible with respect to P_p, and vice versa. Assumption <ref> means that the support of P_n is also hardly contained in the support of P_p: (P_p)⊄(P_n) and (P_n)⊄(P_p). Assumption <ref> is reasonable in many cases, which essentially says that the existing patterns belonging to positive examples could not possibly be confused with patterns from negative examples. We thus have the following additional estimation formula: π̂_j^2 = 1 - κ^*(P_u^j|P_n):= 1- inf_S∈Ω, P_p(S)>0P_u^j(S)/P_n(S). By combining both Eq. (<ref>) and Eq. (<ref>), at last, the class prior π_j can be estimated as π̂_j=(π̂_j^1+π̂_j^2)/2. Note that, for estimating the class priors, most other previous methods, e.g., kernel mean embedding-based estimators KM1 and KM2 <cit.>, a non-parametric class prior estimator AlphaMax (AM) <cit.>, Elkan-Noto (EN) <cit.>, DEDPUL (DPL) <cit.>, and Rankprunning (RP) <cit.> are based on the standard MPE estimator and implicitly or explicitly rely on the irreducible assumption. However, this assumption is strong and hard to guarantee <cit.> since the multiple unlabeled sets may be collected from diverse sources. Therefore, we apply the ReMPE estimator <cit.> that improves the estimations of class priors without the irreducible assumption. §.§.§ The ReMPE Estimator The main idea of the ReMPE estimator is that, instead of estimating the maximum proportion of P_p in P_u, and we raise a new MPE problem by creating a new auxiliary distribution P_p'. The new auxiliary distribution makes the irreducible assumption hold <cit.>. Then we use the standard MPE estimator to obtain the class priors π̂_j. We use the following regrouping process to construct the auxiliary distribution P_p'. The process of regrouping is to change the P_n and P_p into new P_n' and P_p' by transporting a small set of examples A from P_n to P_p. Note that, in the original paper <cit.>, the set A is selected from P_u due to the unavailability of P_n in the positive-unlabeled learning setting <cit.>. Nevertheless, in our setting, we could directly collect A from the distribution P_n that was approximated by selected confident negative sets 𝒳^αβ_n. After the regrouping process, the selected set A should contain the examples that look the most similar to P_p and dissimilar to P_n, which could guarantee better class prior estimation when irreducible assumption does not hold <cit.>. Therefore, we train the binary classifier with the confident positive set 𝒳^αβ_p and confident negative set 𝒳^αβ_n. Then, we obtain 𝒳^αβ_p' by copying the percentage p of 𝒳^αβ_n examples with the smallest negative class-posterior probability from 𝒳^αβ_n to 𝒳^αβ_p. In the end, we estimate the class priors by employing an algorithm based on Eq. (<ref>) with inputs 𝒳^αβ_p' and 𝒳_u^j. The above process could be easily extended to obtain 𝒳^αβ_n', which can be incorporated with 𝒳_u^j on Eq. 
(<ref>) to finally estimate the class prior π̂_j. §.§.§ The BBE Estimator Recently, <cit.> proposed the Best Bin Estimation (BBE) for the MPE problem. This method produces a consistent class-prior estimation under the pure positive bin assumption. The assumption means that the top bin (nearly) purely contains positive examples when packing unlabeled data into bins relying on their predicted probabilities (of being positive) from the trained Positive-versus-Unlabeled (PvU) classifier. This assumption is a variation of the irreducible assumption and sufficient for BBE to obtain a (nearly) consistent class-prior estimation <cit.>. Next, we summarize the main process of the BBE estimator in our setting, referring to <cit.>. At the beginning, we train a Positive-versus-Unlabeled (PvU) classifer f_p,u on some portions of 𝒳^αβ_p and 𝒳^j_u and push other positive and unlabeled examples through the classifier f_p,u to obtain one-dimensional outputs Z_p = f_p,u(𝒳^αβ_p) and Z_u^j = f_p,u(𝒳^j_u). We now define a function q(z)=∫_H_z p(x)dx, where H_z = {x ∈𝒳: f(x) ≥ z } for all z ∈ [0,1]. Intuitively, q(z) captures the cumulative density of points in a top bin, i.e., the proportion of the input domain is assigned a value larger than z by the classifier f in the transformed space. With the Z_p and Z_u^j, the estimations of q_p(z) and q_u^j(z) are q̂_p (z) = ∑_z_i ∈ Z_p1[z_i ≥ z]/n_p and q̂_u^j(z)= ∑_z_i ∈ Z_u^j1[z_i ≥ z]/n_u^j for all z ∈ [0,1]. Then, we obtain ĉ that minimizes the upper confidence bound as follows: ĉ= _c ∈ [0,1]( q̂_u^j(c)/q̂_p(c) + 1+γ/q̂_p(c)( √(log(4/δ)/2 n_u^j) + √(log(4/δ)/2n_p)) ) , at a pre-specified level γ and a fixed parameter δ∈ (0,1). The above Eq. (<ref>) has been theoretically proven and empirically verified in <cit.>. Finally, we obtain the class-prior estimation π̂_j^1 = q̂_u^j(ĉ)/q̂_p(ĉ). Furthermore, π̂_j^2 also could be similarly estimated by replacing 𝒳^αβ_p as 𝒳^αβ_n in the above process. Accordingly, the class prior π_j can be estimated as π̂_j=(π̂_j^1+π̂_j^2)/2 by the BBE estimator. Till now, the first three modules make up the proposed CCPE that can estimate all class priors by utilizing only one pairwise numerical relationship of class priors. We summarize the algorithm flow of CCPE in Algorithm <ref>. Note that since the approximation of P_p and P_n only uses the selected confident examples from 𝒳^αβ, the samples from other unlabeled datasets are wasted. Therefore, we construct the enhanced CCPE (ECCPE) that can substitute the CCPE to lead to a more accurate class-prior estimation. Specifically, in ECCPE, we run CCPE first and obtain the initialization of the class priors {π̂_j}_j=1^m, which could identify the numerical relationships of all pairs of class priors. The size of the numerical relationships is m2. After pairing the unlabeled datasets and selecting confident examples from all pairs of unlabeled datasets, class priors can be estimated multiple times according to different unlabeled dataset pairs. Thus, after averaging all estimated results of class priors, the final estimated results will be more accurate. We summarize the algorithm flow of ECCPE in Algorithm <ref>. §.§ Classifier Training with Estimated Class Priors After obtaining the estimated class priors {π̂_j^⋆}_j=1^m by one pairwise numerical relationship of class priors, for binary classifier training, an empirical risk minimization method can be constructed <cit.>. In fact, other methods, e.g., <cit.>, can be applied with estimated class priors. We employ MCM <cit.> and U^m-SSC <cit.> here. 
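Before detailing these two classifier-training methods, we give a minimal sketch of the BBE computation described above, assuming the one-dimensional PvU outputs Z_p and Z_u^j have already been obtained from the trained classifier f_p,u; the numeric values of γ and δ below are placeholders.

import numpy as np

def bbe_prior(z_p, z_u, gamma=0.01, delta=0.1):
    # Best Bin Estimation: choose the threshold c that minimizes the upper
    # confidence bound on q_u(c) / q_p(c), then return q_u(c) / q_p(c).
    n_p, n_u = len(z_p), len(z_u)
    slack = (np.sqrt(np.log(4 / delta) / (2 * n_u))
             + np.sqrt(np.log(4 / delta) / (2 * n_p)))
    best_c, best_ucb = None, np.inf
    for c in np.unique(z_p):                # candidate thresholds
        q_p = np.mean(z_p >= c)
        q_u = np.mean(z_u >= c)
        if q_p == 0:
            continue
        ucb = q_u / q_p + (1 + gamma) * slack / q_p
        if ucb < best_ucb:
            best_c, best_ucb = c, ucb
    return float(np.clip(np.mean(z_u >= best_c) / np.mean(z_p >= best_c), 0.0, 1.0))

The analogous run with the confident negative set in place of the positive one yields the second estimate π̂_j^2 (via the complement, as in Eq. (<ref>)), and the two estimates are averaged as described above.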
§.§.§ MCM There have already been some risk-consistent methods for learning a binary classifier from only two unlabeled datasets with given class priors, e.g., BER <cit.>, UU <cit.>, and UU-c <cit.>. Based on the risk-consistent methods, <cit.> proposed a mutual contamination (MCM) framework, which can learn a binary classifier from multiple unlabeled datasets in two steps: Firstly, pairing all unlabeled datasets so that they are sufficiently different in each pair. Specifically, MCM assumes the number of unlabeled datasets m = 2k and conducts a pre-processing step that finds k pairs of the unlabeled datasets by the index t ∈{1, …, k}. Let (𝒳_u^t,+,π̂_t^⋆,+) and (𝒳_u^t,-,π̂_t^⋆,-) constitute the t-th pair of bags such that the numerical relationship π̂_t^⋆,- < π̂_t^⋆,+ (MCM assumes that π̂_t^⋆,-≠π̂_t^⋆,+ in each pair). The pairing of all unlabeled datasets maximizes ∑^k_t=1n̅_t (π̂_t^⋆,+ - π̂_t^⋆,-)^2, where n̅_t = 2n_t^+n_t^-/(n_t^+ + n_t^-) and n_t^+ and n_t^- are the sizes of unlabeled dataset 𝒳_u^t,+ and 𝒳_u^t,- respectively. For intuitive understanding, this pairing way gives preference to pairs of datasets where one dataset contains mostly positive samples (large π̂_t^⋆,+) and the other contains mostly negative samples (small π̂_t^⋆,-). According to <cit.>, the above pairing way is known as the “maximum weighted matching” problem with an exact solution <cit.>. Several approximation algorithms also exist for this problem. Note that when the sample sizes of m unlabeled datasets are the same, the solution to this pairing problem is straightforward. That is, we can match the unlabeled dataset with the largest class prior π̂_t^⋆,+ to the unlabeled dataset with the smallest one and match the unlabeled dataset with the second largest class prior π̂_t^⋆,+ to the unlabeled dataset with the second smallest, and so on. Secondly, after pairing all the unlabeled datasets, the unbiased risk estimators of each pair are linearly combined by the weights w_t = n̅_t (π̂_t^⋆,+ - π̂_t^⋆,-)^2. The resulting weighted learning objective is given by R_MCM(f)=∑_t=1^kω_tR_U^2-c(f), where R_U^2-c(f) is our selected non-negative risk estimator <cit.> since it avoids overfitting during training on two unlabeled datasets and shows better empirical performance in the MCM framework <cit.>. §.§.§ U^m-SSC <cit.> considered a surrogate set classification that bridges the original and surrogate class-posterior probabilities with a linear-fractional transformation. Let the index of P_u^j be a surrogate label y̅∈{1,…,m}, 𝒟̅ be the joint distribution of x∈𝒳 and y̅∈𝒴̅={1,…,m}, η(x)=P(y=+1|x) is the class-posterior probability for the true class +1 in the original binary classification, η_j(x) = P(y = j |x) is the class-posterior probability for the class j in the surrogate set classification problem, and π_𝒟 is the test class prior. The goal of surrogate set classification is to train a classifier g:𝒳→ℝ^m that minimizes the following risk: R_surr(g)=𝔼_(x,y̅)∼𝒟̅[ℓ(g(x),y̅)], where g(x) estimates the surrogate class-posterior probability η̅_j(x) = P(y = j |x). Then, we bridge η(x) and η_j(x) by adding an estimated linear-fractional transition T̂_j with the final estimated class priors {π̂_j^⋆}_j=1^m, i.e., η_j(x)=T̂_j(η(x)), ∀ j = 1,…,m, where T̂_j(η(x))=â_j·η(x)+b̂_j/ĉ·η(x)+d̂, with â_j = ρ_j(π̂_j^⋆-π_𝒟), b̂_j = ρ_jπ_𝒟(1-π̂_j^⋆), ĉ = ∑_j=1^mρ_j(π̂_j^⋆-π_𝒟), and d̂ = ∑_j=1^mρ_jπ_𝒟(1-π̂_j^⋆). Here, ρ_j is given by ρ_j=n_j/∑_j=1^mn_j. Afterwards, let f(x) be the model outputs that estimate η(x). 
We make use of the estimated transition function T̂_j(·) and model g_j(x)=T̂_j(f(x)), where g_j(x) is the j-th element of g(x). Based on the above terms, the following modified loss function is presented as ℓ(g(x), y̅)=ℓ(T̂(f(x)), y̅), where T̂(f(x))=[T̂_1(f(x)),…,T̂_m(f(x))]^⊤. Next, the corresponding risk for the surrogate task can be written as R_surr(f)=𝔼_(x,y)∼𝒟[ℓ(T̂(f(x)), y̅)]=𝔼_(x,y)∼𝒟[ℓ(g(x), y̅)]= R_surr(g). The corresponding empirical risk of Eq. (<ref>) is given by R_surr(f) =1/n'∑_i=1^n'ℓ(T̂(f(x_i)),y̅_i), where n' denotes the number of all examples of multiple unlabeled datasets, i.e., n'=∑_j=1^m n_j. <cit.> showed that the classifier learned by solving the surrogate set classification task from multiple unlabeled datasets converges to the optimal classifier learned from fully supervised data under mild conditions. We empirically show that, in our setting, the nice theoretical properties are preserved. § COMPARISON WITH RELATED WORK Previously, <cit.> applied a mutual MPE model <cit.> to the setting of learning from noisy labels with the rigorous assumption of class priors. Although their method relates to our work, its direct application in our MU-OPPO setting has drawbacks. Firstly, the algorithm in <cit.> is based on the mutual MPE model. Hence, it is required to pair all unlabeled datasets first when meeting m (m > 2) unlabeled datasets. The pairing should satisfy the Assumption <ref> that implies that the number of positive samples in noisy (corrupted) positive distribution is larger than the number of negative samples in noisy (corrupted) negative distribution. Although this assumption guarantees the subsequent estimation of class priors, it is hard to check and satisfy in our MU-OPPO setting. We know only one pairwise numerical relationship of class priors and cannot access any specific class priors. Secondly, in the final formula (Eq. (<ref>)) of estimating class priors, there are estimates π̂̃̂_n and π̂̃̂_p in denominators when estimating class priors. If the numerical values of estimates π̂̃̂_n and π̂̃̂_p are large or small, the final estimation error would be high. Consequently, this creates an unstable problem when estimating π_n and π_p, which is clearly shown in Figure <ref>. Next, we summarize the method of <cit.>, then discuss how it differs from our proposal in terms of estimating class priors through experiments. In <cit.>, a mutual MPE model was proposed: P̃_p(x)=π_pP_p(x)+(1-π_p)P_n(x), P̃_n(x)=π_nP_p(x)+(1-π_n)P_n(x), where π_p and π_n are the class priors of noisy (corrupted) positive and negative distribution P̃_p and P̃_n, respectively. Note that <cit.> makes the strict assumption of class priors as follows. Assume that π_p > 1/2 and π_n < 1/2. According to Proposition 3 in <cit.>, Assumption <ref> implies π_p > π_n. If P_p≠P_n, then P̃_p≠P̃_n, and there exists unique 0 ≤π̃_p, π̃_n < 1 that could substitute above Eq. (<ref>) and (<ref>) to: P̃_p(x)=(1-π̃_p) P_p(x)+π̃_pP̃_n(x), P̃_n(x)=(1-π̃_n)P_n(x)+π̃_nP̃_p(x). In particular, π̃_p=1-π_p/1-π_n and π̃_n=π_n/π_p. Furthermore, according to <cit.>, if P_p and P_n satisfy the Assumption <ref>, then the estimations π̂̃̂_p and π̂̃̂_n of π̃_p and π̃_n could be obtained by Eq. (<ref>), i.e, π̂̃̂_p = κ^*(P̃_p|P̃_n) and π̂̃̂_n = κ^*(P̃_n|P̃_p). We use these terms to estimate the class priors π_p and π_n by inverting the identities in Eq. (<ref>), leading to the estimation π̂_p=1-π̂̃̂_p/1-π̂̃̂_pπ̂̃̂_n and π̂_n=(1-π̂̃̂_p)π̂̃̂_n/1-π̂̃̂_pπ̂̃̂_n. 
After obtaining π̂_p and π̂_n, the cost parameter α∈ (0,1) that is guaranteed by Assumption <ref>, could be estimated by α̂ = π̂_p-1/2/π̂_p-π̂_n. The cost parameter then is utilized to construct the α-cost-sensitive P̃-risk <cit.>. For any binary classifier f, the α-cost-sensitive P̃-risk is R_P̃,α̂(f) 𝔼_(x,ỹ)∼P̃[(1-α̂)1_{ỹ=1}1_{f(x) ≤ 0} + α̂1_{ỹ=0}1_{f(x) > 0}], where P̃ denotes the probability measure governing (x,ỹ). <cit.> show that minimizing a cost-sensitive P̃-risk is equivalent to minimizing the cost-insensitive P-risk: R_p(f) 𝔼_(x,y)∼ P[1_sign(f(x)) ≠ y], where P denote the probability measure governing (x,y). Therefore, the binary classifier f could be obtained from P̃_p and P̃_n without the exact class priors π_p and π_n. Note that in our setting, we also obtain P̃_p and P̃_n after assigning pseudo labels by one pairwise numerical relationship of class priors. Thus, we can empirically compare the method <cit.> and our proposal. For comparing the method <cit.> and our proposal, we conduct experiments in Figure <ref>, which use a variety of unlabeled dataset pairs. In Figure <ref>, the considerable variation of the colors in the top four plots reflects the unstable problem of <cit.>. By contrast, our method handles this problem well. § THEORETICAL JUSTIFICATIONS In our MOS framework for the proposed MU-OPPO setting, estimating class priors plays an essential role. Therefore, we conduct the theoretical analysis for estimation errors of class priors after confident example collection. Our solution framework is capable of incorporating various methods directly into different modules. Therefore, we set the confident example collection module based on latent representations and the class prior estimation module based on the standard MPE estimator. This is taken as an example of theoretical analysis. Here, we analyze the properties of confident example selection and theoretically justify its impact on estimating class priors. For simplicity, we omit the index and directly use P_u and π to replace the P_u^j and π_j in the following theoretical descriptions. §.§ The Causes of Estimation Errors of Class Priors According to <cit.>, the standard MPE estimator is consistent and convergent to true class priors with strong theoretical guarantees. In our implementation, we primarily select the confident positive examples 𝒳_p to approximate the distribution P_p. However, the selection of 𝒳_p may be corrupted by mixing the examples from the distribution P_n. The selected examples 𝒳_p may be generated from P_p', where P_p' is a corrupted version of P_p and is obtained by mixing a small negative set from P_n to P_p. Hence, the inaccurate performance of confident example selection is the main reason for estimation errors. In order to formally present the theoretical analysis, we generate P_p' by splitting, transporting, and combining a set E from P_n to P_p. We detail the procedure as follows. Let M be a probability distribution on a measurable space (𝒳,Ω). Given a set E∈Ω, according to <cit.>, it could be defined a distribution M^E on the σ-algebra Ω, i.e., ∀ S ∈Ω, M^E(S) = M(S∩ E). Therefore, given two distributions M^E and M^E^c, where E^c = 𝒳\ E, for any set E∈Ω, we have M^E + M^E^c = M. Then, considering splitting the set E ∈Ω from P_n, P_n are divided into two parts: P_n^E^c and P_n^E. That P_n^E is transported to P_p, i.e., P_u = πP_p + (1-π)P_n = πP_p+(1-π)(P_n^E+P^E^c_n)_divide into two = (πP_p + (1-π)P_n^E)_mix as one+(1-π)P_n^E^c. From Eq. 
(<ref>), due to the transport by E, we need to redefine P_u as a mixture of a corrupted distribution P_p' and a split distribution P_n' defined in Theorem <ref>. Suppose P_u = πP_p + (1-π)P_n and E ⊂Supp (P_n). By splitting P_n^E from P_n to P_p, P_u is a new mixture, i.e., P_u = π' P_p' + (1-π')P_n', where π'=π+(1-π)P_n(E), P_n' = P_n^E^c/P_n(E^c), and P_p'=(1-π)P_n^E+πP_p/(1-π)P_n(E)+π, where P_p' and P_n' are satisfied the anchor point assumption[The anchor point assumption <cit.> is a stronger variant of the irreducible assumption. It accelerates the convergence rate of a series of MPE estimators.]. The proof of Theorem <ref> can be found in <cit.>. According to Theorem <ref>, the new proportion π' is always identifiable as P_p' and P_n' always satisfies the anchor set assumption. However, the π' may not be close to π, which causes the estimation error. Denote the estimation error by ϵ with ϵ = |π' - π| = (1-π)P_n(E). Note that, for ϵ, the π is a fixed latent value. Therefore, the item P_n(E) decides the extent of ϵ. §.§ Theoretical Analysis of the Estimation Errors According to Definition <ref>, the item P_n(E) decides the extent of ϵ. Here, P_n(E)=∑_x_i∈ EP(x_i|y_i=-1) reflects the negative class-conditional probability of the set E. In the procedure of confident example collection, this probability could be measured. For intuitive understanding, the set E is made of the samples (x_i, y_i = -1) that are incorrectly selected as confident positive examples by our confident example collection module. Currently, our confident example collection module collects confident positive examples using latent representations, and we make the following reasonable assumptions referred to related works <cit.>. Besides, we focus on the data points whose pseudo label is ỹ=+1 in the following theoretical analysis. The feature distribution is comprised of two Gaussian distributions. One is a clean cluster that contains the data points whose labels are y = +1 and ỹ = +1. Another is an unclean cluster that includes the data points whose labels are y = -1 and ỹ = +1. The features of all instances with y =+1 are aligned on the unit vector v with the white noise. Similarly, the features of all instances with y =-1 are aligned on the unit vector w. As the feature distribution comprises two Gaussian distributions, the projected distribution z is also a mixture of two Gaussian distributions <cit.>. By the linear discriminant analysis (LDA) assumption <cit.>, the decision boundary B with the threshold ζ = 0.5 is the same as the average of the mean of two clusters. We have B = 1/2(∑_i=1^N1_{ỹ_i = +1,y_i=+1}z_i/N_+ + ∑_i=1^N1_{ỹ_i = +1, y_i=-1}z_i/N_-) with probability 1-δ. In Eq. (<ref>), N denotes the size of the set to be collected, i.e., the number of examples in a pair of unlabeled datasets. Also, N^+=∑_i=1^N1_{ỹ_i = +1,y_i=+1} and N^-=∑_i=1^N1_{ỹ_i = +1,y_i=-1}. <cit.> proved that: (u^⊤v)^2 + (u^⊤w)^2/2 - 𝒞√(2/N_+log(2/δ))≤ B ≤(u^⊤v)^2 + (u^⊤w)^2/2+ 𝒞√(2/N_+log(2/δ)), where 𝒞>0 is a constant. Then, based on Eq. (<ref>), we can derive the upper bound for the estimation error in the following Theorem <ref>. Let Φ be the cumulative distribution function (CDF) of 𝒩(0, 1). The corrupted set E is selected as the wrong confident positive examples when the corresponding projection z_i>b and z_i ∈ E. 
The upper bound of the estimation error ϵ can be derived as ϵ≤ |E| (1-π)Φ(-Δ + 2 𝒞√((2/N_+) log(2/δ))/2 σ), where Δ = ‖u^⊤v - u^⊤w‖_2^2, π is the class prior of an unlabeled dataset, |E| denotes the sample size of E, σ^2 is a variance of white noise, δ, and 𝒞 are consistent numbers. The proof of Theorem <ref> is shown in Appendix <ref>. Theorem <ref> states that the upper bound for the estimation error ϵ depends on the latent class prior π and Δ that denotes the difference of mean between two Gaussian distributions. A small upper bound can be guaranteed as class priors increase and Δ becomes larger. In this theoretical analysis, Δ is an essential factor related to the separation of alignment clusterability (Definition <ref>). Fortunately, this propriety has been ensured with the theoretical and empirical analysis by <cit.>. Therefore, in our solution framework, Theorem <ref> guarantees the gap between estimated class priors and true class priors. As a result, with the estimated class priors, our MOS framework can obtain a well-worked binary classifier and tackles the proposed MU-OPPO setting. § EXPERIMENTS Datasets. We verify the effectiveness of the proposed learning paradigm on widely adopted benchmarks, i.e., MNIST <cit.>, Fashion (F)-MNIST <cit.>, Kuzushiji (K)-MNIST <cit.>, and CIFAR-10 <cit.>. The important statistics are shown in Table <ref>. As the four datasets contain ten classes originally, we manually corrupt them into binary-classification datasets as did in <cit.>. In experiments, unless otherwise specified, the number of examples contained in all unlabeled datasets is the same. In addition, the class priors {π_j}_j=1^m of all unlabeled datasets are evenly generated from the range [0.1, 0.9]. The generation way makes the sampled class priors not all identical, which ensures that the problem is mathematically solvable <cit.>. Models & Optimization. We exploit ResNet-18 <cit.> as our warm-up classifier, with the SGD optimizer <cit.>. The number of warm-up epochs is set to 10. After training the warm-up classifier, the models and optimizations of different modules in our MOS framework are consistent with their original paper. Note that, considering the computing efficiency, the pair-selection number γ is set to 4 when employing the ECCPE to estimate class priors in our MOS framework. §.§ The Generalization of the MOS Framework In MU-OPPO, our MOS framework is a generalized framework that could incorporate various methods in different modules. For the confident-example collection module, we apply three methods, i.e., with the loss distribution, confident learning, and latent representations, which have been introduced in Section <ref>. We apply three estimators for the class-prior estimation module, i.e., MPE, ReMPE, and BBE, introduced in Section <ref>. For binary classification with estimated class priors, we apply two methods, i.e., MCM and U^m-SSC, which have been introduced in Section <ref>. In this subsection, we experiment with all combinations of various methods in different modules. The binary classifier is learned from m=10 sets of unlabeled datasets with the assumed numerical relationship, e.g., π_10 > π_1. We analyze the generalization of our proposed framework from the following two perspectives, i.e., the estimation error of class-prior estimating and the classification accuracy of the binary classifier. Estimation error of class-prior estimating. In the proposed MOS framework, the main component is the estimation of class priors and we employ ECCPE here. 
This part aims to validate whether the proposed framework could reduce the estimation error of class priors and if this property could be preserved when applying different methods. To have a rigorous performance evaluation, each case runs five times and reports the mean and standard deviation. The evaluation is the average of absolute estimation errors between all class priors, shown in Table <ref>. We can see that the MOS framework could estimate class priors with a minor estimation error. Furthermore, we find that applying different methods to our framework also could perform reasonable estimations. This makes us believe that our framework not only provides accurate class-prior estimation but also shows the generalization capacity of various methods when estimating class priors. In addition, the ReMPE estimator provides significantly better performance on estimating the class priors in most cases since the regrouping process in the ReMPE estimator could alleviate the violation of the irreducible assumption in practice. Classification accuracy of binary classifiers. In this part, we train a binary classifier from the MU-OPPO learning scheme using our MOS framework, which contains various methods of different modules. All experiments are trained by 300 epochs. The classification accuracy at the last epoch in the test phase is reported. All the experiments are repeated five times. The mean accuracy with standard deviations is recorded for each method. For clarity, we divide all the experiments into Table <ref> and Table <ref> by two different binary classification modules, i.e., MCM and U^m-SSC. Checking the results in Table <ref>, the classification accuracy of our proposed framework is highly close to the one given the true class priors in the MU-OPPO problem. The utilization of different methods within our framework also can perform well. This property also could be found in Table <ref>. Hence, we confirm that our proposed framework successfully solves the MU-OPPO problem, and the empirical performance is ideal. Besides, if we compare the experiment results from Table <ref> and Table <ref>, we could find that the experiment results in Table <ref> are better than Table <ref>. The reason is that the optimal combination weights in the MCM method are proved with strong model assumptions and thus remain difficult to be tuned in practice <cit.>. These results are consistent with the observations in <cit.>. Suppose we connect the experiment results from the estimation errors of class-prior estimating and the classification accuracy of the binary classifier. In that case, we also notice that some methods that have the smallest estimation error may not provide the highest classification accuracy, e.g., the framework utilizes the loss distribution as the confident-example collection module and use the ReMPE estimator as the class-prior estimation module has the lowest estimation error (2.73±0.76) but do not provide highest classification accuracy (97.75±0.10). This is because the U^m-SSC methods that classify with class priors are slightly robust to inaccurate class priors <cit.>. §.§ Comparison with Other Baselines Baselines. Note that, in MU-OPPO, our MOS framework is the first solution that can estimate the class priors while other related works fail on this problem (described in Section <ref>). 
Therefore, we carefully design these baselines as follows: * MOS-M: Here, we want to introduce the method of <cit.> as our baseline, but this method is infeasible in our MU-OPPO setting, which has been clearly described in Section <ref>. Thus, we apply the mutual model of <cit.> as a class priors estimator after pairing all unlabeled datasets and regarding it as the MOS-mutual (MOS-M). The details are provided in Appendix <ref>. * MOS-(·): In the MOS process, the class prior estimation module also can be replaced by the other related class-prior estimators: the kernel mean embedding-based estimators KM1 and KM2 <cit.>, a non-parametric class prior estimator AlphaMax (AM) <cit.>, Elkan-Noto (EN) <cit.>, DEDPUL (DPL) <cit.>, and Rankprunning (RP) <cit.>. Then, we implement them under our framework and call them MOS-(·), which places the abbreviation name of these estimators in (·). * MOS-E: The ECCPE is employed to estimate class priors in the MOS framework. * MOS-C: The CCPE is employed to estimate class priors in the MOS framework. * MOS-T: To better show the performance of our proposed framework, we give the true class priors to the MU-OPPO setting and then implement MOS framework with true class priors. Due to the generalization of our framework, we casually choose the method based on latent representations as the confident-example collection module and the classical MPE estimator as the class-prior estimation module in the MOS framework. If not specified, we keep this setting in all subsequent experiments. We compare our proposed method with designed baseline methods for the MU-OPPO problem. The binary classifier is learned from m=10 sets of unlabeled datasets with the assumed numerical relationship, e.g., π_10 > π_1. Class-prior estimation. This part aims to validate whether the proposed method could reduce the estimation error of class priors compared to previous work. Each estimation case runs five times to have a rigorous performance evaluation and gets the mean and standard deviation. The evaluation is the average of absolute estimation errors between all class priors, shown in Table <ref>. We can see that MOS-E estimates class priors with a smaller estimation error compared to other baselines through all datasets. Furthermore, we find that our MOS-C method also performs reasonable estimation. It is better than MOS-M in the K-MNIST and CIFAR-10 datasets. Binary classification. In this part, we train a binary classifier from the MU-OPPO learning scheme. All experiments are trained by 300 epochs. The classification accuracy at the last epoch in the test phase is reported in Table <ref>. All the experiments are repeated five times. The mean accuracy with standard deviations is recorded for each method. Checking the results in Table <ref>, our proposed MOS framework outperforms others in all cases. The classification accuracy is highly close to the MOS-T. §.§ On the Variation of the Set Number The main factor influencing the performance is the number of unlabeled datasets available. As the unlabeled sets can be easily collected from multiple sources <cit.>, the learning algorithm is expected to perform well under the variation of set numbers. In this subsection, we test the proposed method with the MOS-M method on the different numbers of unlabeled datasets: m = {4,8,12,16,20,24,28}. We assume that the known relationship is π_m > π_1. We train 300 epochs for all the experiments, and the classification accuracy at the last epoch in the test phase is reported in Figure <ref>. 
From the results, we can see that the proposed MOS-E method performs reasonably well across different set numbers. In most cases, the MOS-E method is better than MOS-C and MOS-M. In particular, high accuracy can be observed for m=4 across all four benchmark datasets. The better performance may come from the larger number of unlabeled examples contained in a single set, i.e., n'/4 in this case. The possible reason is that increasing the sampled data within each unlabeled set guarantees a better approximation of it <cit.>. In addition, compared to MOS-C and MOS-M, the proposed MOS-E method demonstrates its effectiveness when the set numbers increase. This is a significant advantage: it shows the enhanced performance from MOS-C to MOS-E when the number of estimated class priors grows, whereas MOS-C is not steady for large set numbers (reflected by large shaded areas). §.§ On the Variation of the Set Size In practice, the size of the unlabeled datasets may vary over an extensive range depending on different tasks, which may cause severe covariate shifts between training sets and test sets <cit.>. To verify the robustness of our proposed framework against set size shift, we conduct experiments on the variation of set sizes. Recall that in other experiments, we use uniform set sizes, i.e., all sets contain n'/m unlabeled data. In this subsection, we investigate two set size shift settings according to <cit.>: (1) randomly select ⌈ m/2⌉ unlabeled datasets and change their set sizes to τ· n'/m, where τ∈ [0, 1]; (2) randomly sample each set size n_j from the range [0,n'] such that ∑_j = 1^m n_j=n'. More specifically, m is set to 10. In the first design, we first generate unlabeled datasets as in the setting of Section <ref> and then change their set sizes. As shown in Table <ref>, our proposed method is robust as τ moves towards 0 in the first shift setting. The accuracy degrades relatively slightly as τ decreases. We also observe that our solution framework performs well on all four datasets in the second shift setting. In particular, MOS-E still has better robustness than MOS-C, as reflected by the lower standard deviations in many cases. Overall, these experiments verify the robustness of the proposed methods to varied set sizes. §.§ Test Prior Estimation Note that in MU-OPPO, the test prior π_𝒟, which was previously assumed known, can also be estimated by our MOS framework. In this subsection, we learn a binary classifier from MU-OPPO after estimating both the class priors π_j and the test prior π_𝒟 with our method. We also verify the steady performance of the proposed methods from small set numbers, e.g., m=5, to large set numbers, e.g., m=100. All experiments are still trained for 300 epochs. The assumed numerical relationship is π_m > π_1. Following the results in Table <ref>, our observations are as follows. First, the proposed methods could still perform well when adding the estimation of the test prior π_𝒟. Second, our proposed framework empirically maintains robustness for larger set numbers with acceptable standard deviations. §.§ Ablation Study We study the effect of removing different components to provide insights into what makes the MOS framework successful. We analyze the results in Table <ref> as follows. * MOS without Class-Prior Estimation. When we identify confident examples after our confident-example collection module, we could directly train a binary classifier on them. 
However, the confident examples we collect are unlikely to contain clean hard examples <cit.>, which are essential for training a binary classifier. This is because clean hard examples are usually entangled with mislabeled samples <cit.>. Thus, classifiers trained on them are not optimal. To study this effect, we directly use the collected confident examples and a ResNet-18 model to train a binary classifier, and report the classification accuracy at the last epoch in the test phase. The poor classification accuracy clearly reflects that the confident examples should not be used to learn a classifier directly, but they can be used to learn the class priors. * MOS without Confident Example Collection. To study the effect of confident-example collection, we drop the confident-example collection module and directly estimate class priors by feeding the unlabeled datasets into the class-prior estimator. Note that the rest of the training process remains unchanged. The performance decreases compared to the full MOS framework, which shows the necessity of collecting confident examples. * MOS without the Warm-Up Classifier. The reason for training a warm-up classifier is that early stopping prevents deep learning models from gradually fitting mislabeled data as the number of training epochs grows. In this manner, a warm-up classifier whose training is stopped at an early epoch provides reliable guidance for selecting confident examples. Therefore, to study the effect of the warm-up classifier, we replace it with a fully trained (convergent) classifier that is trained for a large number of epochs. The other modules remain unchanged. The decrease in accuracy suggests that the warm-up classifier helps the selection of confident examples and indirectly benefits the training of the final binary classifier. § RELATED WORK We review more related literature on this work below. Notably, existing methods designed for binary classification from unlabeled datasets differ in their learning paradigms. Some previous methods can be regarded as discriminative clustering based on maximum likelihood estimation, such as maximizing the margin or the mutual information between given instances and their unknown labels <cit.>. Recall that clustering methods are often suboptimal since they require that one cluster exactly corresponds to one class <cit.>, which is rarely satisfied in practice. Beyond clustering, <cit.> and <cit.> evidenced the possibility of empirical balanced risk minimization (EBRM) when learning from two unlabeled datasets. Both adopt the balanced error, a special case of the classification error <cit.>, as the performance measure. Though EBRM methods do not need knowledge of the class priors <cit.>, they assume the class prior is strictly balanced and only handle the case of two unlabeled datasets. The MU-OPPO problem setting is also related to learning with label proportions (LLP), with a subtle difference in the unlabeled dataset generation[In most LLP works, the generation of unlabeled datasets relies on uniform sampling and may result in the same label proportion for all unlabeled datasets, which makes the LLP problem computationally intractable <cit.>.]. In LLP, each unlabeled dataset is associated with a proportion label over the different classes. The challenge in LLP is to train models using the weak supervision of proportion labels. To overcome this issue, <cit.> exploited a deep learning algorithm for LLP by introducing empirical proportion risk minimization (EPRM). 
Recently, <cit.> combined EPRM with consistency regularization to obtain the state-of-the-art performance on LLP. However, EPRM is inferior to empirical risk minimization (ERM) since its learning is not consistent. A breakthrough in binary classification from unlabeled datasets is the proposal of ERM-based methods. Specifically, <cit.> proposed the unlabeled-unlabeled (UU) classification that assumed m=2 and π_1 > π_2, and provided a risk-consistent UU method that constructs an equivalent expression of the classification risk. Then, it is shown in <cit.> that the UU method can take negative values which causes overfitting in the empirical training risk. Hence, they proposed a novel consistent risk correction technique that is robust against overfitting and improves the UU method. Although these ERM-based methods are advantageous in terms of flexibility and theoretical guarantees, they are limited to two unlabeled datasets. Therefore, <cit.> extended these previous methods for the general unlabeled dataset setting (m≥2) by adding a transition layer and the proposed method is classifier-consistent <cit.>. In addition, <cit.> solved this general setting by firstly pairing all unlabeled datasets relying on class priors and then combining the risk estimator of each pair. Although existing methods successfully learn a binary classifier from multiple unlabeled datasets with theoretical guarantees, they heavily rely on the precise class priors, which has led to a non-trivial dilemma—the uncertainty of class priors can undesirably prevent the learning process. Yet, it is still unexplored how to learn a binary classifier from multiple unlabeled datasets without the exact class priors. To the best of our knowledge, this work is the first attempt to get rid of precise class priors when learning from multiple unlabeled datasets. § CONCLUSION In this work, we propose a new learning setting, i.e., MU-OPPO, and construct the MOS framework that can both estimates class priors accurately and achieves high binary classification accuracy. Specifically, this framework is based on the class prior estimation, which estimates multiple class priors using only one pairwise numerical relationship of class priors. After that, the injection of estimated class priors subsequently achieves a statistically-consistent classifier. By establishing an estimation error bound, we also prove that these estimated class priors are close to the true class priors under some conditions. Extensive experiments demonstrate that the proposed MOS framework can successfully train binary classifiers from multiple unlabeled datasets, and is competitive with existing methods. plainnat § APPENDIX § PROOFS OF ESTIMATION ERRORS We use the similar proof skills of Theorem 2 of <cit.>. Specifically, let |E| denote the sample size of E, we have P_n(E) = ∑_x_i ∈ EP(x_i |y_i = -1) ≈ |E| P(z > B |y = -1) ≤ |E| P(z > (u^⊤v)^2 + (u^⊤w)^2/2 - 𝒞√(2/N_+log(2/δ)) |  y= -1) = |E| P(z > μ_+ + μ_-/2 - 𝒞√(2/N_+log(2/δ)) |  y= -1) = |E| P( z - μ_-/σ > μ_+ - μ_-/2σ - 𝒞√(2/N_+log(2/δ))/σ) = |E| P(𝒩(0, 1) > Δ - 2 𝒞√(2/N_+log(2/δ))/2 σ) = |E| (1 - P(𝒩(0, 1) ≤Δ - 2 𝒞√(2/N_+log(2/δ))/2 σ)) = |E| (1 - Φ(Δ - 2 𝒞√(2/N_+log(2/δ))/2 σ)) = |E| Φ(-Δ + 2 𝒞√(2/N_+log(2/δ))/2 σ) Thus, we have the upper bound of the estimation error: ϵ = (1-π)P_n(E) ≤ (1-π)|E|Φ(-Δ + 2 𝒞√(2/N_+log(2/δ))/2 σ) where Δ = ‖u^⊤v - u^⊤w‖_2^2, π is the class prior of an unlabeled dataset, |E| denotes the sample size of E, σ^2 is a variance of white noise, δ, and 𝒞 are consistent numbers. 
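As a quick numerical illustration of this bound, it can be evaluated directly; the sketch below simply transcribes the final inequality, and all parameter values in the example call are placeholders rather than quantities measured in our experiments.

```python
import numpy as np
from scipy.stats import norm

def estimation_error_bound(pi, E_size, Delta, sigma, N_pos, C, delta):
    """(1 - pi) * |E| * Phi((-Delta + 2*C*sqrt(2/N_+ * log(2/delta))) / (2*sigma))."""
    z = (-Delta + 2.0 * C * np.sqrt(2.0 / N_pos * np.log(2.0 / delta))) / (2.0 * sigma)
    return (1.0 - pi) * E_size * norm.cdf(z)

# Placeholder values, purely for illustration: the bound shrinks as the
# separation Delta grows or as the noise level sigma decreases.
print(estimation_error_bound(pi=0.4, E_size=50, Delta=4.0,
                             sigma=0.5, N_pos=1000, C=1.0, delta=0.05))
```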
§ A MUTUAL MODEL OF CLASS PRIORS ESTIMATION After running CCPE first and obtaining the initialization of the class priors {π̂_j}_j=1^m, we could re-index the unlabeled datasets by t ∈{1, ..., m2}. Let (𝒳_u^t,+,π_t^+) and (𝒳_u^t,-,π_t^-) constitute the t-th pair of bags, such that π_t^-≤π_t^+. Then we could apply the mutual MPE model in Proposition 3 of <cit.> on all the pairs: P_u^t,+(x)=π_t^+P_p(x)+(1-π_t^+)P_n(x), P_u^t,-(x)=π_t^-P_p(x)+(1-π_t^-)P_n(x). According to pseudo label assignment, P_u^t,+(x) and P_u^t,-(x) are seemed as the noisy positive and negative density P̃_p and P̃_n: P̃_p(x)=π_t^+P_p(x)+(1-π_t^+)P_n(x), P̃_n(x)=π_t^-P_p(x)+(1-π_t^-)P_n(x). Substituting the above equations, we have P̃_p(x)=(1-π̃_t^+) P_p(x)+π̃_t^+P̃_n(x), P̃_n(x)=(1-π̃_t^-)P_n(x)+π̃_t^-P̃_p(x), where π̃_t^+=1-π_t^+/1-π_t^- and π̃_t^-=π_t^-/π_t^+ are two class priors. Then, we can obtain estimations π̂̃̂_t^+ and π̂̃̂_t^- of π̃_t^+ and π̃_t^- by κ^*(P̃_p|P̃_n) and κ^*(P̃_n|P̃_p). We use these terms to estimate the class priors π_t^+ and π_t^- by inverting the identities in Eq. (<ref>), leading to the estimations π̂_t^+=1-π̂̃̂_t^+/1-π̂̃̂_t^+π̂̃̂_t^- and π̂_t^-=(1-π̂̃̂_t^+)π̂̃̂_t^-/1-π̂̃̂_t^+π̂̃̂_t^-.
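The final inversion step above is mechanical, so a short sketch suffices; the κ^*-style estimates are assumed to come from any mixture-proportion estimator (not reproduced here), and the variable names are ours.

```python
def invert_pair(kappa_plus, kappa_minus):
    """Recover (pi_t^+, pi_t^-) from estimates of pi~_t^+ and pi~_t^-."""
    denom = 1.0 - kappa_plus * kappa_minus
    pi_plus = (1.0 - kappa_plus) / denom
    pi_minus = (1.0 - kappa_plus) * kappa_minus / denom
    return pi_plus, pi_minus

# Example with made-up estimates: invert_pair(0.6, 0.25) -> (~0.471, ~0.118),
# which indeed satisfies (1 - 0.471)/(1 - 0.118) ~ 0.6 and 0.118/0.471 ~ 0.25.
```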
http://arxiv.org/abs/2306.03336v1
20230606011339
Exploiting Scratchpad Memory for Deep Temporal Blocking: A case study for 2D Jacobian 5-point iterative stencil kernel (j2d5pt)
[ "Lingqi Zhang", "Mohamed Wahib", "Peng Chen", "Jintao Meng", "Xiao Wang", "Toshio Endo", "Satoshi Matsuoka" ]
cs.DC
[ "cs.DC" ]
A case study for 2D Jacobian 5-point iterative stencil kernel (j2d5pt) authorsperrow=4 Tokyo Tech AIST Japan RIKEN R-CCS Japan AIST RIKEN R-CCS Japan SIAT China ORNL USA Tokyo Tech Japan RIKEN R-CCS Tokyo Tech Japan Tokyo Institute of Technology, Japan AIST, Japan [email protected] RIKEN CCS, Hyogo, Japan [email protected] AIST, Japan RIKEN CCS, Hyogo, Japan [email protected] Oak Ridge National Laboratory, US Wangx2@orn Shenzhen Institutes of Advanced Technology, China Wangx2@orn General Purpose Graphics Processing Units (GPGPU) are used in most of the top systems in HPC. The total capacity of scratchpad memory has increased by more than 40 times in the last decade. However, existing optimizations for stencil computations using temporal blocking have not aggressively exploited the large capacity of scratchpad memory. This work uses the 2D Jacobian 5-point iterative stencil as a case study to investigate the use of large scratchpad memory. Unlike existing research that tiles the domain in a thread block fashion, we tile the domain so that each tile is large enough to utilize all available scratchpad memory on the GPU. Consequently, we process several time steps inside a single tile before offloading the result back to global memory. Our evaluation shows that our performance is comparable to state-of-the-art implementations, yet our implementation is much simpler and does not require auto-generation of code. <ccs2012> <concept> <concept_id>10010147.10010169.10010170.10010173</concept_id> <concept_desc>Computing methodologies Vector / streaming algorithms</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010169.10010170.10010174</concept_id> <concept_desc>Computing methodologies Massively parallel algorithms</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Computing methodologies [500]Computing methodologies Vector / streaming algorithms [500]Computing methodologies Massively parallel algorithms Exploiting Scratchpad Memory for Deep Temporal Blocking Jintao Meng July 31, 2023 ======================================================= § INTRODUCTION When observing the previous generations of GPUs, Nivida GPUs for instance, there is a clear trend of increase in the cache capacity. Especially the volume of scratchpad memory (or shared memory in CUDA <cit.>) increased from 720 KB in K20 (2013) to 17.30 MB in A100 (2020). The latest H100 (2023) GPU even pushes max usable shared memory to be 29.83 MB to more than 200 KB per stream multiprocessor(SM). GPU optimizations that are commonly used in HPC applications were designed mostly assuming that scratchpad memory is not larger than 100 KB per stream multiprocessor <cit.>. There is a potential in leveraging the untapped scratchpad memory to aggressively optimize for data locality. In this work, we use a case study kernel commonly used in HPC applications, namely 2D Jacobian 5-point iterative stencil, to fully take advantage of the scratchpad memory for tiling data in an unusual way. More specifically, we run each of the tiles in a serial fashion one after the other while aggressively using the shared memory to run each tile entirely from shared memory. We use device-wide synchronization to resolve the spatial dependency between thread blocks. We demonstrate a new approach to leverage the large capacity of shared memory by proposing a temporal blocking stencil scheme that optimizes for peak data locality, i.e. running the entire problem from shared memory. 
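Although the kernel studied here is CUDA C++ with shared-memory tiles and a grid-wide barrier, the core idea — advance each tile through several time steps before touching global memory again — can be sketched in a hardware-agnostic way. The NumPy sketch below is our own illustrative analogue: it emulates deep temporal blocking with overlapped halos rather than the inter-block halo exchange used in this paper, and the uniform 0.2 stencil coefficients are placeholders for the actual j2d5pt weights.

```python
import numpy as np

def jacobi_step(u):
    """One global j2d5pt sweep; domain boundary cells are held fixed."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.2 * (u[1:-1, 1:-1] + u[:-2, 1:-1] + u[2:, 1:-1]
                           + u[1:-1, :-2] + u[1:-1, 2:])
    return v

def reference(u, steps):
    for _ in range(steps):
        u = jacobi_step(u)
    return u

def temporally_blocked(u, steps, tile):
    """Process tiles one after another; each tile performs all `steps`
    sweeps on a private halo-extended patch before writing back."""
    n = u.shape[0]
    out = u.copy()
    for y0 in range(0, n, tile):
        for x0 in range(0, n, tile):
            y1, x1 = min(y0 + tile, n), min(x0 + tile, n)
            # Halo wide enough to cover the dependency cone of `steps` sweeps.
            gy0, gx0 = max(y0 - steps, 0), max(x0 - steps, 0)
            gy1, gx1 = min(y1 + steps, n), min(x1 + steps, n)
            patch = u[gy0:gy1, gx0:gx1].copy()
            for k in range(steps):
                nxt = patch.copy()
                # Only cells whose stencil inputs are still valid get updated:
                # the usable region shrinks by one cell per sweep at every cut
                # edge, while true domain boundaries stay fixed.
                lo_y = k + 1 if gy0 > 0 else 1
                lo_x = k + 1 if gx0 > 0 else 1
                hi_y = patch.shape[0] - (k + 1 if gy1 < n else 1)
                hi_x = patch.shape[1] - (k + 1 if gx1 < n else 1)
                nxt[lo_y:hi_y, lo_x:hi_x] = 0.2 * (
                    patch[lo_y:hi_y, lo_x:hi_x]
                    + patch[lo_y - 1:hi_y - 1, lo_x:hi_x]
                    + patch[lo_y + 1:hi_y + 1, lo_x:hi_x]
                    + patch[lo_y:hi_y, lo_x - 1:hi_x - 1]
                    + patch[lo_y:hi_y, lo_x + 1:hi_x + 1])
                patch = nxt
            out[y0:y1, x0:x1] = patch[y0 - gy0:y1 - gy0, x0 - gx0:x1 - gx0]
    return out

if __name__ == "__main__":
    u0 = np.random.default_rng(0).random((64, 64))
    assert np.allclose(reference(u0, 4), temporally_blocked(u0, 4, tile=16))
```

In this sketch the `steps`-wide halo plays the role that shared memory plus the device-wide barrier play on the GPU: every value needed for `steps` sweeps is made locally available before a single write-back, which is the data-locality effect the proposed scheme targets.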
Our method is much simpler than complex temporal blocking schemes; iterative kernels that use our method can be written manually, unlike complex temporal schemes that require auto-generation of code. § RELATED WORK Temporal blocking <cit.> tiles the domain and processes the domain within combined time steps. Due to space limitations, we mainly review StencilGen <cit.> and AN5D <cit.>. Both works used 2.5D or 3.5D tiling and relied on code auto-generation for performance optimization. In addition, they relied on overlapped tiling within thread blocks and did not exploit the inter-thread-block data exchange pattern. Regarding the usage of scratchpad memory, StencilGen stores all combined time steps in scratchpad memory, while AN5D uses scratchpad memory conservatively for double buffering. As a result, in the j2d5pt double-precision kernel, StencilGen and AN5D consumed about 4.32 MB and 0.864 MB of scratchpad memory, respectively. So, both AN5D and StencilGen left most of the scratchpad memory untapped, and are overly complex to implement. § DEEP TEMPORAL BLOCKING (DTB) §.§ Basic function Listing <ref> shows the base kernel function we used in this case study. We only modified the input and output pointer locations to use scratchpad memory. In this kernel, we move the time loop from the host side to be inside the kernel. Next, we tile the domain of the problem spatially and run the tiles in a serial fashion. For each tile, we run it entirely to completion, over all its time steps, before we start on the next tile. §.§ Dependency Between Thread Blocks We use the CUDA grid-level barrier to ensure that each thread block can exchange the halo region correctly. We use the bulk synchronous parallel (BSP) model. §.§ Processing the Tiles in Order After we load a tile into the scratchpad memory, we process the tile for several time steps (temporal blocking) before moving to the next tile. Figure <ref> shows the process. § EVALUATION We compare DTB with StencilGen <cit.> and AN5D <cit.>, the state-of-the-art implementations of temporal blocking for stencils (a domain size of 8192^2). We used a domain size of 8592×8328 to run DTB. We also report a pruned version that considers 8192^2 as a valid domain size. Figure <ref> shows the result: the performance of DTB is comparable to that of the state-of-the-art temporal blocking implementations (SOTAs). § CONCLUSION In this work, we discuss a case study on the use of scratchpad memory for DTB on the j2d5pt stencil. Instead of applying a complex temporal blocking implementation, we simply tile the domain so that each tile fully occupies the scratchpad memory. The evaluation shows that DTB is comparable to the other SOTAs. We anticipate that DTB could perform even better on architectures with larger scratchpad memory, which will be explored in future work. This work was supported by JSPS KAKENHI under Grant Number JP21K17750. This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). 
This research used resources at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. The authors wish to express their sincere appreciation to Jens Domke, Aleksandr Drozd, Emil Vatai and other RIKEN R-CCS colleagues for their invaluable advice and guidance throughout the course of this research. Finally, the first author would also like to express his gratitude to RIKEN R-CCS for offering the opportunity to undertake this research in an intern program. ACM-Reference-Format
http://arxiv.org/abs/2306.12141v3
20230621093828
Recoil: Parallel rANS Decoding with Decoder-Adaptive Scalability
[ "Fangzheng Lin", "Kasidis Arunruangsirilert", "Heming Sun", "Jiro Katto" ]
cs.DC
[ "cs.DC", "cs.IT", "math.IT" ]
[email protected] [email protected] Waseda University 3-4-1 Okubo Shinjuku Tokyo JP 169-8555 [email protected] Yokohama National University 79-1 Tokiwadai Hodogaya Yokohama JP 240-0067 [email protected] Waseda University 3-4-1 Okubo Shinjuku Tokyo JP 169-8555 Entropy coding is essential to data compression, image and video coding, etc. The Range variant of Asymmetric Numeral Systems (rANS) is a modern entropy coder, featuring superior speed and compression rate. As rANS is not designed for parallel execution, the conventional approach to parallel rANS partitions the input symbol sequence and encodes partitions with independent codecs, and more partitions bring extra overhead. This approach is found in state-of-the-art implementations such as DietGPU. It is unsuitable for content-delivery applications, as the parallelism is wasted if the decoder cannot decode all the partitions in parallel, but all the overhead is still transferred. To solve this, we propose Recoil, a parallel rANS decoding approach with decoder-adaptive scalability. We discover that a single rANS-encoded bitstream can be decoded from any arbitrary position if the intermediate states are known. After renormalization, these states also have a smaller upper bound, which can be stored efficiently. We then split the encoded bitstream using a heuristic to evenly distribute the workload, and store the intermediate states and corresponding symbol indices as metadata. The splits can then be combined simply by eliminating extra metadata entries. The main contribution of Recoil is reducing unnecessary data transfer by adaptively scaling parallelism overhead to match the decoder capability. The experiments show that Recoil decoding throughput is comparable to the conventional approach, scaling massively on CPUs and GPUs and greatly outperforming various other ANS-based codecs. <ccs2012> <concept> <concept_id>10002951.10002952.10002971.10003451.10002975</concept_id> <concept_desc>Information systems Data compression</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010169.10010170</concept_id> <concept_desc>Computing methodologies Parallel algorithms</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10002950.10003712.10003713</concept_id> <concept_desc>Mathematics of computing Coding theory</concept_desc> <concept_significance>100</concept_significance> </concept> </ccs2012> [500]Information systems Data compression [500]Computing methodologies Parallel algorithms [100]Mathematics of computing Coding theory Recoil: Parallel rANS Decoding with Decoder-Adaptive Scalability Jiro Katto July 31, 2023 ================================================================ § INTRODUCTION The demand for high-quality entertainment content, such as high-resolution images, Ultra-High-Definition (UHD) 4K and 8K videos, and the streaming of VR and AR content, is rapidly growing. However, the communication bandwidth, especially via wireless links <cit.>, is very limited. Therefore, in these applications, data compression always plays a crucial role in both user experience enhancement and cost saving, by reducing the amount of transmission bandwidth and storage. Data compression is generally achieved by Entropy Coding, which efficiently encodes symbols close to their Shannon limit. The Asymmetric Numeral Systems (ANS) <cit.> family is a series of modern entropy coders. 
Its variants are widely used in recent compression and coding algorithms such as JPEG-XL <cit.>, FSE <cit.>, Zstandard <cit.>, etc., providing both superior compression and decompression speed, and high compression rate. To achieve higher throughput and lower latency in these applications, exploiting parallelism out of entropy coding algorithms has always been desirable. However, this is not simple to achieve: entropy coding inherently relies on variable-length codes, resulting in internal data dependencies, which could be complex to split apart. In particular, an ANS bitstream must be serially encoded from the first symbol to the last, and decoded from the back to the front to get the symbols back in reverse order. Therefore, the conventional approach to parallel entropy coders often involves partitioning the input symbol sequence into smaller sub-sequences, and encoding each sub-sequence with an independent entropy coder. This approach is seen in state-of-the-art massively parallel rANS codec implementations such as DietGPU <cit.>. This method presents a trade-off between compression rate and the level of parallelism: as the input sequence is partitioned into more sub-sequences, the worsening of the compression rate becomes more dominant, due to the almost linearly increasing amount of coding overhead. In content-delivery applications, the server must account for the different parallel capacities of clients (decoders) by encoding the content with more sub-sequences. While this may allow massive parallelism, it also makes the file size larger. Besides, a decoding machine with a state-of-the-art GPU may be able to decode tens of thousands of sub-sequences in parallel, while a budget CPU can only decode a few at once. Therefore, the budget CPU couldn't take advantage of the massive-parallelism optimized content, but still needs to receive all the data required to exploit the parallelism intended for high-end machines. On the other hand, the server could also prepare multiple variations of the content to contend for different parallel capacities, but this is still undesirable as it creates great storage and computational overhead for the server. This is because of the primary drawback with the conventional partitioning symbols approach: once the symbol sequence is broken into smaller intervals, there is no going back since the data dependencies inside the entropy coders are already broken. There is no flexibility in the number of sequences after encoding, thus the tradeoff between compression rate and parallelism is fixed once the encoding is done. We propose an alternative approach, named Recoil, that offers such flexibility. Instead of encoding the symbols with multiple independent encoders, we first use a single group of interleaved rANS encoders to produce a single rANS bitstream. We then split the bitstream by recording the intermediate rANS encoder states before the split point and the symbol indices from which the states are taken. We show that this allows decoding to start at an intermediate position, through a synchronization process. We only pick the intermediate states at renormalization points, as they have a small upper bound and can be represented in fewer bits, reducing storage overhead. We then propose an efficient way to store all this metadata. As the bitstream is independent of the split metadata, the context server only needs to encode once, taking into account the maximum parallelism it intends to support. 
Then, Recoil enables splits to be combined by a very lightweight process before transmitting to the decoder, since we do not actually divide up the bitstream, but instead record metadata around the split point. The main contributions of Recoil are the following: * Recoil allows omitting unnecessary metadata based on the decoder's parallelism capability before transmission, significantly improving the compression rate and saving bandwidth on both sides. We achieved a maximum -14.12% compression rate overhead reduction in our evaluation. * Recoil makes little tradeoff in decompression speed. Experiments show that Recoil performs comparably in decoding throughput to the conventional method (90+ GB/s on a Turing GPU), outperforming various other ANS-based codecs. * Recoil is highly compatible with existing rANS decoders, allowing easy integration into standardized codecs. This is because Recoil does not actually modify the rANS bitstream, but instead works on independent metadata. § PRELIMINARY §.§ The Range-variant of Asymmetric Numeral Systems (rANS) The rANS is a widely-used variant of the modern entropy coder, Asymmetric Numeral Systems. It encodes and decodes each symbol according to a probability density function (PDF) and the corresponding cumulative distribution function (CDF). The PDF and CDF can be either statically pre-computed or adaptively generated. The coding works with a single state, with which the sequence of symbols is represented. Encoding and decoding are performed by a transformation of the state. Let S be the symbol set, s denote the sequence of input symbols, ∀ i s_i ∈ S. Let f(t) and F(t) denote the quantized PDF and CDF of the symbol set S. Both f(t) and F(t) are quantized to the range [0, 2^n]. Let the coder state after encoding / before decoding symbol s_i be x_i. Then, the encoding of symbol s_i is formulated as: x_i = 2^n ⌊x_i-1/f(s_i)⌋ + F(s_i) + (x_i-1 f(s_i)) The decoding of symbol s_i is formulated as: s_i = t s.t. F(t) ≤ x_i 2^n < F(t+1) x_i-1 = f(s) ⌊x_i/2^n⌋ - F(s) + (x_i 2^n) The division and remainder operations regarding 2^n are often implemented with bitwise operations. Intuitively, rANS codes a symbol with a smaller bit length when f(s_i) is large, and vice versa, as shown in Equation <ref>. Encoding and decoding in rANS are symmetrical; encoding s_i transforms state x_i-1 to x_i while decoding gets s_i back by restoring x_i to x_i-1. Thus, rANS works like a stack: if the symbols are encoded from s_0 to s_n, the decoder retrieves them in reverse, s_n to s_0. While in some implementations, the coder buffers the symbols first and then encodes them in reverse order (so that the decoder retrieves them in forward order), we do not consider this for simplicity. Renormalization. To avoid handling an unmanageably large state, renormalization is often employed. During encoding, if the state overflows a calculated upper bound, its lower bits are written to a bitstream. The bitstream is read from during decoding when the state underflows a given lower bound. Let L = k 2^n, k ∈ℤ^+ denote the Renormalization Lower Bound. Let b be the number of bits the encoder writes to / decoder reads from the bitstream once, B denote the bitstream, and p denotes the current offset in the bitstream that the codec is writing to / reading from. 
The renormalization during encoding is formulated as: x_i ⌊x_i/2^b⌋ B_p x_i 2^b p p + 1 while x_i ≥2^b/2^n L f(s_i+1) The renormalization during decoding is formulated as: x_i x_i 2^b + B_p, p p - 1 while x_i < L Similarly, this is often implemented with bitwise operations. §.§ Interleaved rANS The rANS decoding algorithm is highly memory-bound. Specifically, the symbol lookup process in Equation <ref> often consists of an array search or a LUT lookup. Besides, each decoder state x_i depends on the previous state x_i+1, limiting Instruction Level Parallelism (ILP). Interleaved rANS <cit.> is proposed to overcome these limitations. An illustration of 4-way interleaved rANS encoding is shown in Figure <ref>. In this example, the symbols are processed in groups of 4. Each symbol in the group is encoded by one encoder. After encoding a group, some encoder states overflow, producing renormalization outputs, interleaved into a single bitstream, in the order of increasing encoder ID. The decoding is similarly in reverse: decoders that need to renormalize read from the bitstream in decreasing decoder ID order, before decoding the symbols. Interleaved rANS shows boosted throughput, because using multiple coders mitigates the data dependency and is thus ILP- and SIMD-friendly. However, interleaved rANS fails to scale over multiple cores. Because the renormalization output must be packed into the bitstream after encoding each group, all threads must stop at a synchronization point to obtain the correct offsets they should write to. Similarly, the offset each decoder reads from during decoding depends on the underflow flag of all other decoders, requiring synchronization. This synchronization barrier is trivial in a core but heavy across multiple cores, and therefore forbids massive scaling over multi-core CPUs and GPUs. §.§ Conventional "Partitioning Symbols" Approach The conventional approach partitions the input symbol sequence into smaller sub-sequences to achieve further high-throughput rANS decoding on multi-core CPU and GPU systems. The sub-sequences are encoded and decoded by rANS coders completely independent from each other, allowing them to execute in different threads and processors. These coders can be single encoders or decoders, or (in most implementations) groups of interleaved rANS encoders or decoders running on the same CPU core / GPU warp. This process is illustrated in Figure <ref>. This method produces multiple independent bitstreams, which are often merged by simple concatenation and maintaining an offset table to locate the start of each sub-bitstream. The variants of this method, including those built for other ANS variations, are implemented in many previous works, such as DietGPU <cit.> and <cit.>. However, partitioning the symbol sequence can come with a huge cost. We evaluated its impact using the partitioning symbols approach and a varying number of sub-sequences. We evaluated 16 sub-sequences, a typical core count of a high-end workstation CPU; and 2176 sub-sequences, the number of threads required to fully utilize a high-end GPU (RTX 2080 Ti). As Figure <ref> shows, more symbol sub-sequences negatively impacts the compression rate. Especially, the 2176-sub-sequence variation, intended for high-end GPUs, greatly impacts file size. This overhead originates from the initial setup cost of rANS codecs, the final states, etc. Moreover, different hardware has different optimal sub-sequence numbers depending on parallelism capacities. 
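Before discussing the partitioning trade-off further, it may help to see the coding and renormalization rules above in code. The following is a minimal scalar (non-interleaved) rANS sketch written for this article; the parameter choices mirror typical values from the text (n = 12, b = 16, L = 2^16), but the implementation is an illustrative transcription of the equations, not part of Recoil.

```python
import numpy as np

PROB_BITS = 12           # n: frequencies are quantized to sum to 2**n
RENORM_BITS = 16         # b: bits spilled / refilled per renormalization step
STATE_LOWER = 1 << 16    # L = k * 2**n (here k = 16), with b >= n

def build_tables(freqs):
    """freqs: quantized symbol frequencies summing to 2**PROB_BITS."""
    assert sum(freqs) == 1 << PROB_BITS
    return list(freqs), np.concatenate(([0], np.cumsum(freqs)))

def encode(symbols, freqs, cdf):
    x, stream = STATE_LOWER, []
    for s in symbols:                                   # forward order
        x_max = ((STATE_LOWER >> PROB_BITS) << RENORM_BITS) * freqs[s]
        while x >= x_max:                               # spill low bits
            stream.append(x & ((1 << RENORM_BITS) - 1))
            x >>= RENORM_BITS
        x = ((x // freqs[s]) << PROB_BITS) + int(cdf[s]) + x % freqs[s]
    return x, stream                                    # final state + words

def decode(x, stream, cdf, count):
    out = []
    for _ in range(count):                              # reverse order
        slot = x & ((1 << PROB_BITS) - 1)
        s = int(np.searchsorted(cdf, slot, side='right')) - 1
        x = int(cdf[s + 1] - cdf[s]) * (x >> PROB_BITS) - int(cdf[s]) + slot
        while x < STATE_LOWER and stream:               # refill from stream
            x = (x << RENORM_BITS) | stream.pop()
        out.append(s)
    return out[::-1]                                    # symbols come back reversed
```

A round trip can be checked with, e.g., `decode(*encode(msg, freqs, cdf), cdf, len(msg)) == msg`. Interleaving simply runs several such states over one bitstream; Recoil's splitting operates on the resulting single bitstream rather than changing these per-symbol rules.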
The paradox is that it is impossible to prepare optimal variations of compressed data for every hardware; conversely, if the content delivery server serves the maxed-out variation to all decoders, it creates unnecessary data transfer for those that cannot utilize the max level of parallelism. The root cause of this problem originates from partitioning symbols, which is irreversible. This breaks the data dependency, which the rANS codec will otherwise establish to achieve efficient coding. Therefore, our approach does not break this dependency chain, but instead uses metadata to provide the missing information, which enables decoding to start from intermediate positions. Our approach does not suffer a similar paradox since more metadata is only sent when the decoder has a larger parallel capacity. §.§ Other Related Work Previous works showcased massively parallel decoding of various compression algorithms <cit.>. In particular, multians <cit.> utilizes the fact that tANS (table-variant) tend to self-synchronize even if decoding starts with an incorrect state. The tANS states usually have a limited range (a range of 1024 was used), encouraging self-synchronization. These approaches allow certain encoded bitstreams of corresponding serial encoders to be decoded in parallel, without any extra metadata or file size overhead, seemingly suggesting no harm to compression rate. However, the smaller state range limits the maximum probability quantization level n. This disables more fine-grained quantization often required in image and video coding. In addition, tANS requires the symbol probability distributions to be pre-computed into decoding tables, which is unsuitable for adaptive coding. While tANS theoretically requires less computation than rANS as its decoding is implemented as a pure LUT, the design of multians <cit.> shows a less cache-friendly memory access pattern and large self-synchronization overhead. Due to these reasons, although against instinct, our rANS-based approach greatly outperforms it in decoding speed, as shown in Section <ref>. § MOTIVATION This section presents the main motivations we considered while designing Recoil, a parallel rANS decoding approach with decoder-adaptive scalability. In particular, we do not partition the uncompressed symbol sequence into sub-sequences before encoding. Instead, our design uses a single group of interleaved rANS encoders, then adds metadata to enable parallel decoding with multiple groups of interleaved rANS decoders. Admittedly, this design trades off encoder throughput; however, there are many use cases where a lower entropy encoder throughput is acceptable, and decoding throughput is the main concern, such as content delivery servers that our design mainly targets. This tradeoff brings greater flexibility in adaptively scaling according to the decoder. For simplicity of explanation, we demonstrate the method using a bitstream encoded with a single non-interleaved rANS encoder in this section, as shown in Figure <ref>. In Section <ref>, we discuss how this method is extended to interleaved rANS bitstreams. §.§ Decodability of rANS from intermediate positions The rANS bitstream must be decoded from the end to the start because only the final coder state is preserved by being explicitly transmitted along with the bitstream. Let this final state be denoted by x_n. 
The decoder starts with a full bitstream and x_n, then iterates to derive x_n-1, x_n-2, …, decoding the symbols s_n, s_n-1, s_n-2, …, and consuming the bitstream during renormalization. However, as Equation <ref> and <ref> shows, an iteration of decoding to derive x_i-1 only depends on two non-static parameters: (1) the previous state x_i; (2) the current bitstream offset p; assuming a static probability distribution. Since rANS is symmetric and encoder and decoder states are equivalent, it is possible to record these intermediate states and bitstream offsets during encoding, and share this information with the decoder. This enables multiple starting points for decoding. These decoders are completely independent of each other since they do not share either states or bitstream starting offsets, allowing rANS decoding to be scaled over multiple cores. For example, as shown in Figure <ref>, if we record the intermediate state x_6, and the starting bitstream offset 0, it allows decoder thread 1 to start decoding with the state x_6. After renormalizing with 5c, decoder derives the symbols s_6, s_5, …, s_1. Similarly, for state x_12 and bitstream offset 1, s_12 to s_1 can be decoded in thread 2. In practice, we also share the corresponding symbol index at the split point. For example, we would also share that the intermediate state x_6 is taken at the index 6. All metadata we share with the decoder is shown in the metadata table inside Figure <ref>. This has the following practical advantages: (1) it offers thread 2 an easy way to stop at s_7 by simply using a counter. Without this index, each thread does not know how many symbols it must decode, and can only determine the stop points indirectly by checking if the current decoder state and bitstream offset match the start point of another thread; (2) it allows the threads to write to a common buffer of a pre-determined size, rather than dynamically allocating space as needed and copying to one buffer at the end; (3) it allows the use of adaptive coding, in which the probability distribution used in every iteration is dynamic, determined using symbol index as a key in many image codecs that use hyperprior-based context <cit.>. While this seems like a lot of extra information to transmit, we propose a method to efficiently store them in Section <ref>, so that the burden on compression rate does not exceed, and even outperforms the conventional partitioning symbols approach. In summary, the conventional method partitions the input symbols before encoding, while our method encodes them first and then records metadata that enables parallel decoding, allowing more flexibility. §.§ Bounded Intermediate States at Renormalization Points It may seem ideal to place split points at where the symbols would be partitioned into equal splits, to balance the workload of threads. However, storing the intermediate states requires many bits (each state can be up to 32 bits in our implementation). Fortunately, we observe that this overhead can be greatly reduced if we only allow splitting at renormalization points. If an state x_i overflows (x_i ≥2^b/2^n L f(s_i+1)) in encoding, then after renormalizing once, x_i < L. Recall that L = k 2^n, k ∈ℤ^+. The previous state x_i-1 must be fully renormalized before being used in the next iteration. Thus: x_i-1 < 2^b/2^n L f(s_i) ⇔ ⌊x_i-1/f(s_i)⌋ < 2^b/2^n L ⇔ ⌊x_i-1/f(s_i)⌋ + F(s_i + (x_i f(s_i)))/2^n_< 1 < 2^b/2^n L ⇔ 2^n ⌊x_i-1/f(s_i)⌋ + F(s_i + (x_i f(s_i)))_x_i < 2^b L If x_i is renormalized, it is divided by 2^b. 
Let x_i ' denote the state after renormalization. x_i' = ⌊x_i/2^b⌋ < L In our implementation, L is chosen to be 2^16, so these bounded states can be safely represented in 16-bit numbers, reducing the overhead by half. Renormalization points are where bitstreams are written during encoding. For example, in Figure <ref>, x_6 and x_12 are taken from renormalization points (which are identified because the encoder produced bitstream outputs there); thus, they are bounded by L and can be represented with fewer bits. Intuitively, instead of partitioning the symbols, our method can be seen as splitting the encoded bitstream into sub-bitstreams. We then attempt to balance the number of symbols in each sub-bitstream, so that the workload is evenly distributed. §.§ Decoder-Adaptive Scalability Recall that the conventional partitioning symbols approach lacks flexibility in decoding scalability: once the bitstream is generated, the number of symbol sub-sequences is fixed, so that decoders with less parallel capacities suffer from a worsened compression rate. In contrast, achieving this flexibility is trivial with our design. As shown in the metadata table in Figure <ref>, metadata for thread 2 carries the necessary information needed for that thread to start decoding intermediately from symbol index 12. When no combining of splits happens, thread 2 stops decoding at s_7, because s_1 to s_6 are handled by thread 1. However, nothing prevents thread 2 from continuing decoding beyond that point; it naturally carries all the information required. In other words, if 2-thread-parallelism for the interval of s_1 to s_12 is not needed, we can safely drop the metadata for thread 1; this only removes the ability for decoding to start at symbol index 6. The combined split now appears as a single 12-symbol split to the decoder. Therefore, combining splits is trivial, since it only requires removing the metadata in a way that combines the splits into bigger ones with close symbol counts, so that the workload is still balanced. Suppose the initial encoding produced N splits metadata and the decoder only wants M, M < N; we could use a heuristic, but usually simply sending every other ⌈N/M⌉ split metadata is good enough. As this process is very lightweight and does not require re-encoding of the source symbols, it can be done in real time by the content delivery server before data transmission to the decoder. We consider the use case, where the client requests content, and also attaches its parallel capacity inside the request header; the server receives the request, shrinks down the metadata in real-time, and serves the bitstream and the shrunk metadata to the decoder. No compression rate is wasted to provide unnecessary parallelism. § THE RECOIL DECODER In the previous section, we explored a potential parallel decoder design, based on non-interleaved rANS codecs. While this approach provides scalability over multi-core, it fails to scale well over SIMD. On the other hand, interleaved rANS <cit.> provides such scalability. In this section, we present Recoil, a massively parallel interleaved rANS decoder with decoder-adaptive scalability that engineers the previous motivations to work over interleaved rANS bitstreams. §.§ Extending to Interleaved rANS As shown in Figure <ref>, recording intermediate states at a single point no longer works with interleaved rANS: if the split point is placed at bitstream offset 2 (a9), the intermediate states of the 4 encoders are recorded at this point. 
However, the only interleaved encoder renormalized here is E_1. The other encoders, E_2 to E_4, encoded other symbols since their last renormalizations. Their intermediate state is no longer below the lower bound and cannot be represented in the smallest number of bits. Therefore, we instead take the intermediate states from multiple close positions during encoding, then decode the bitstream in 3 phases. An illustration is shown in Figure <ref>. During encoding, for a split position, we perform a backward scan to find the last renormalization points of each encoder. For example, in Figure <ref>, the split position is at s_16, or bitstream offset 6 (a5). Let x_i,j denote the intermediate state of interleaved rANS encoder E_j at symbol index i. Since the bitstream output a5 is produced by E_4 during renormalization, we record x_16,4. Then, backward scan finds E_2 producing ea, so x_14,2 is recorded. Next, although E_4 also renormalized at s_12 producing 26, we do not record it as we ignore the previous renormalizations of each encoder. Similarly, we record x_11,3 and x_9,1. Since all 4 encoder states have been recorded at index 9, the backward scan terminates. We call the symbol interval s_9 to s_16 the Synchronization Section for this split. The decoding is done in 3 phases: (1) Synchronization Phase to synchronize and recover the correct intermediate decoder states; (2) Decoding Phase in which normal interleaved rANS decoding is performed; and (3) Cross-Boundary Decoding Phase in which the Synchronization Section of the previous split is decoded. §.§.§ Synchronization Phase The decoder starts at s_16, but only the interleaved decoder D_4 is initialized here as the intermediate states for D_1 to D_3 are still unknown. The decoding of s_15 is skipped since the corresponding interleaved decoder D_3 is not initialized. Similarly, we initialize D_2 and decode s_14, and skip s_13 since D_1 is not ready. s_12 can be decoded correctly since D_4 is initialized. We initialize D_3, decode s_11, decode s_10 with D_2, and initialize D_1 at index 9. At this point, all 4 decoders are fully initialized, and their states are synchronized through the decoding process. As explained in Section <ref>, the interleaved bitstream must be read in a specific order by the interleaved decoders; if the read offset is misaligned, all subsequent decoding results will be incorrect. Recoil ensures the correctness of the read offset even with the absence of some interleaved decoders. This is because we always take the intermediate states from the last renormalization points during encoding. Therefore, interleaved decoders are always initialized immediately before the first time they read the bitstream. Absent interleaved decoders will not be reading the bitstream anyway, thus a correct read offset is always maintained. However, the decoded symbol sequence during this phase is incomplete. For example, in Figure <ref>, symbols s_13 and s_15 are missing. The symbols produced in this stage are merely a side effect of the synchronization, so the decoding results in this phase are discarded. §.§.§ Decoding Phase After completing synchronization, normal rANS decoding can be performed starting from s_8, by the standard interleaved rANS decoding algorithm shown in Figure <ref>. This decoding phase ends at the split position boundary. §.§.§ Cross-Boundary Decoding Phase The next thread handles decoding the next split containing the interval of s_17 to some later index. 
After decoding s_17, this thread finishes the Decoding Phase and now reaches the split boundary at bitstream offset 6 (a5). Inherently, it carries all the correct intermediate decoder states required to continue decoding s_16 and forth. Therefore, this decoder crosses the split boundary, and decodes the symbols in the Synchronization Section of the previous split: s_9 to s_16. It terminates at the synchronization completion point, s_9. This design reflects the three motivations of Recoil: (1) Decodability of rANS from intermediate positions is achieved through the Synchronization Section and Synchronization Phase; (2) Bounded intermediate states at renormalization points are chosen by the backward scan during encoding; (3) Decoder-adaptive scalability is still trivial to achieve since (1) is maintained, so splits can still be combined by eliminating extra metadata entries. §.§ Distributing the workload The split points must still be picked carefully to balance the workload. The Synchronization Section adds complexity to this: larger Synchronization Sections bring more runtime overhead. Therefore, the bitstream must be split in a way ensuring both even workloads and less synchronization. We use a simple heuristic to achieve this: Suppose the full bitstream contains N symbols, and M splits are to be made. For any given split point, let t denote the number of symbols encoded in the sub-bitstream that the previous and the current split points represent (including the Synchronization Section) and t_s denote the number of symbols in the Synchronization Section. We optimize for the minimum of the following function: H(t, t_s) = | t - T | + | t - t_s - T | where T = ⌈N/M⌉ In other words, T represents the expected number of symbols per split; we split at the position that makes the symbol count, including and excluding the Synchronization Section, as close to average as possible. Ideally, this heuristic produces many small splits with finely balanced workloads. Suppose N splits are produced. Therefore, to combine these small splits into M more coarse-grained ones, it is only necessary to pick the metadata by every other ⌈N/M⌉ and send them to the client, achieving decoder-adaptive scaling in real-time. §.§ Efficient Metadata Storage Our heuristic ensures that the number of symbols per split will not differ largely from the average. Also, most real-world data has a mostly uniform distribution of entropy: in ideal conditions, equal numbers of symbols are compressed into equal bitstream lengths. While real-world datasets are not always ideal, the actual split points are likely not far away from this expectation. We exploit these facts to store the metadata efficiently. First, we store the number of splits M, the bitstream length B, and the number of all symbols N as-is in the metadata header. We then compute the expected sub-bitstream length E_b for each split simply using rounded-up averages: E_b = ⌈B/M⌉. The i-th split point is expected to be at bitstream offset iE_b. Then, we calculate the differences between the actual positions and these expectations, and only store the differences in the metadata, as Table <ref> shows. We apply a similar approach to the metadata of each interleaved codec. The intermediate states are stored as-is since they are difficult to be encoded further. We do not store the actual symbol indices but only the Symbol Group IDs corresponding to the ones in Figure <ref>, since it is trivial to convert them back and forth. 
An example is shown in Table <ref>: we take the max Symbol Group ID 4, and store it in the form of difference to the expectation, as shown in Table <ref>; the expected value here is also simply a rounded-up average. Next, we use this maximum value as an anchor, then compute and store the differences from all the Symbol Group IDs to it. This is based on the assumption that when one interleaved encoder renormalizes, the other encoders should also renormalize soon. The differences here are guaranteed to be negative or zero since they are compared to the maximum value. We drop the sign bits and store the absolute values only. We group the difference values into data series and store them in the following format: (1) we first use a value to represent how many maximum bits each element occupies, and minus it by one, which is equal to max⌊log_2 (v_i+1) ⌋ - 1 where v_i-s are the individual values in the data series[Except for zero: we use one bit to represent zeros as well.]; (2) we then encode the series with this number of bits, with an extra sign bit if necessary. We allow up to 16-bit unsigned values for the Symbol Group ID differences, and therefore a 4-bit value for the bit count. We group the differences from each split into individual data series. Therefore, the Differences row in Table <ref> is stored as: 0000_len = 0 + 1 = 1 bit1_-10_01_-10_0 For the split point metadata in Table <ref>, we allow up to 32-bit signed values and group the differences from all the splits into two data series containing all Bitstream Offset and Max Symbol Group ID differences, respectively. §.§ Implementation Details We implemented four variations of Recoil decoding: (1) a pure C++ implementation, non-optimized as it is for debugging purposes; (2) an AVX2 implementation; (3) an AVX512 implementation; (4) a CUDA implementation for executing on NVIDIA GPUs. We expect the algorithm to be easily ported to any other SIMD + multi-core architecture, including other GPUs, since we do not rely on any platform-specific feature. We also include a basic encoder for testing purposes. All four implementations are mutually compatible; generated bitstreams by the encoder can be decoded by any of them. Implementations (2) and (3) can be selected based on the target platform's AVX support when decoding on CPU. Recoil is implemented as a C++20 header-only library and relies heavily on templates to allow customizing almost every parameter. However, for the best performance, we recommend the parameters in Table <ref>, and used them throughout the experiments. We use 32-way interleaved rANS because it performs best for both AVX implementations and naturally fits into a GPU warp. For the AVX2 implementation, we use 8-way 32-bit interleaved decoders in each instruction, and manually unroll four times; for the AVX512 implementation, we use 16 ways in each instruction and unroll twice. We recommend launching one thread per CPU core and not utilizing SMT. For CUDA, we use 128 threads per block operating on four groups of interleaved decoders and use a CUDA library function call[] to obtain the optimal block count. We build LUTs for the symbol lookup process shown in equation <ref>. Here we apply a common optimization: if (s_i) = 8, and n ≤ 12, we pack the symbol s_i, its quantized probability f(s_i) and quantized CDF F(s_i) into a single 32-bit integer. Our performance implementations, namely (2) (3) (4), expect that renormalization always completes in one step. The necessary condition is b ≥ n <cit.>. 
Therefore, we pick b = 16 by default to support probability quantization levels up to 16, which is more than enough for most applications. § EXPERIMENTS §.§ Experiment Setup We use 10 datasets to evaluate the codecs featuring different sizes and compressibilities, as shown in Table <ref>. The datasets are 10-Megabyte files generated with random exponentially distributed bytes, with λ = 10, 50, 100, 200, 500 respectively representing different compression rates. The dickens <cit.>, webster <cit.>, enwik8 <cit.> and enwik9 <cit.> datasets are ASCII text files of various sizes. We model these 7 datasets with static probability distributions generated by symbol statistics and use 8-bit symbol size. The datasets are high-quality images from the DIV2K validation set <cit.>. We use the mbt2018-mean <cit.> lossy codec to transform the source image into intermediate representations of 16-bit symbols. We adaptively model each symbol with different Gaussian distributions using hyperpriors. We compare Recoil against the following baselines: (A) Single-Thread or standard 32-way interleaved rANS; (B) the Conventional partitioning symbols approach with underlying 32-way interleaved rANS; and (C) multians <cit.>. To achieve the flexibility of parameters required in the experiments, and minimize the implementation differences so that the comparison focuses on the algorithms, we implemented (A) and (B) with the same building blocks that constitute Recoil. We checked that our baseline implementations show comparable throughput with state-of-the-art[We checked that our implementation of Single-Thread outperforms ryg_rans <cit.>; the latter is a popular but SSE-4.1-based implementation which fails to unleash the full potential of modern CPUs. We compare our implementation of Conventional on GPU with DietGPU <cit.> and it shows similar performance; DietGPU assumes probability quantization levels of 9 to 11 as part of the optimization strategy, which is not enough to perform some of the experiments.] implementations. We use the open-source implementation for multians but modify the state count only for the n=16 experiment; however, since it has limited support for adaptive coding, we omit it for the image compression tests. §.§ Compression Rate Results We first compress the datasets with two probability quantization levels: n=11 and n=16 and compare the compression rates. We only compress the image datasets with n=16 because a higher quantization level is required for 16-bit symbols. For each dataset and n, we encode the bitstream into six variations: (a) standard rANS bitstream compressed with Single-Thread serving as compression rate baseline, shown in Table <ref>; (b) (c) the Large variations (2176 partitions/splits) for massively parallel GPU decoding, encoded with Conventional and Recoil correspondingly; (d) (e) the Small variations (16 partitions/splits) for parallel CPU decoding, (d) re-encoded with Conventional, and (e) converted from the Large (c) variation using Recoil splits combining; (f) tANS bitstream for decoding with multians. The differences in the compressed file sizes of variations (b) to (f), compared to the baseline (a), are shown in Table <ref> and <ref> for n=11, 16 respectively. The Small variations show negligible overheads, with max. 0.14% for Conventional and 0.13% for Recoil; however, the Large variations impact the compression rates more, as a max. 23.54% increase in file size was observed for Conventional. 
In contrast, even without splits combining, Recoil variations already show lower overhead, with only max. 21.53% and outperforming Conventional in every dataset. It is clear that when the baseline compressed file size is small and more splits are made, the parallelism impacts on compression rate becomes more dominant. Recall that symbol partitions in Conventional cannot be combined; the content therefore must be either encoded into both Large (b) and Small (d) variations, creating extra storage and encoding overhead on the server, or every client is served the Large (b) variation, creating unnecessary data transfer. Recoil allows real-time conversion from Large (c) to Small (e), which is especially effective when the original compressed file size is small. This reduces max. -23.41% compression rate overhead compared to sending the Conventional Large (b) variation, and eliminates the need for multiple encoded variations of the same data on the server. Interestingly, some multians bitstreams at n=11 outperformed the baseline in compression rate, as shown in Table <ref>. This is likely because the single-threaded baseline consists of 32-way interleaved rANS encoders, while multians use a single tANS encoder. However, this advantage vanished when we set n=16, as shown in Table <ref>. §.§ Decoding Throughput Results Next, we test the decoding of the six bitstream variations with the corresponding decoders and measure the throughput. On CPU, we compare the AVX512 and AVX2 implementations of Single-Thread and Conventional, and Recoil, by decoding bitstream variations (a), (d) and (e). On GPU, we compare the CUDA implementations of multians, Conventional and Recoil, by decoding bitstreams (f), (b) and (c). We use Intel Xeon W-3245 (16C) for CPU experiments and NVIDIA GeForce RTX 2080 Ti for GPU experiments. We measure only the CUDA kernel execution for GPU, excluding memory transfer overhead. We average the throughput over 10 runs. The throughput measurement results on CPU and GPU are shown in Figure <ref>. On CPU, both Conventional and Recoil greatly outperformed Single-Thread, showing massive scalability across multiple CPU cores, and using AVX512 yields even higher performance. The decoding throughput of Recoil and Conventional are comparable, reaching max. 11+ GB/s decoding throughput, showing that the extra synchronization and cross-boundary decoding steps of Recoil only bring negligible performance overhead. The conclusion on GPU is similar: Recoil and Conventional performed similarly, reaching max. 90+ GB/s decoding throughput. This indicates that the heuristic Recoil uses can distribute work evenly across threads even when there are many splits, and that the synchronization overhead is mostly negligible. Both rANS-based variations significantly outperformed multians, the tANS-based parallel decoder, which can even be matched by our CPU-based decoders. Especially when we manually set n=16, the self-synchronization overhead of multians is too large that it fails to output symbols at a usable throughput. Therefore, although multians is a zero-storage-overhead parallel tANS decoder, it has strong limitations that ultimately restrict its compression rate in other ways: we could use n=16 and still outperform the original multians with n=11 in both throughput and compression rate. 
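As a final illustration of the decoder-adaptive path exercised in these experiments (the Large-to-Small conversion), the server-side combination of splits reduces, under the simple policy described earlier, to subsampling the metadata list. The sketch below is ours and treats the per-split metadata entries as opaque records.

```python
import math

def combine_splits(split_metadata, m_wanted):
    """Keep roughly m_wanted entries by taking every ceil(N/M)-th one.
    Dropped entries only remove optional intermediate start points; the
    symbols they covered are decoded by the thread of the following split."""
    n = len(split_metadata)
    if m_wanted >= n:
        return list(split_metadata)
    return split_metadata[::math.ceil(n / m_wanted)]
```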
§ CONCLUSION In this work, we presented Recoil, a parallel interleaved rANS decoder that adaptively combines splits to match the parallel capacity of each decoder, avoiding the compression rate otherwise wasted on unnecessary parallelism. We proposed to encode the symbols into a single interleaved rANS bitstream, then pick bitstream split points using a heuristic and record metadata to enable parallelism. We observed decoding throughput comparable to the conventional approach, outperforming other ANS decoders, while greatly reducing the compression rate overhead. Although Recoil encoding cannot be done in parallel and encoding throughput is therefore limited, this is often acceptable in content delivery applications, especially those involving high-resolution images, UHD video, VR and AR content: complex and non-parallel prediction algorithms are widely used in their compression pipelines to reduce data rate while maintaining quality, so Recoil encoding is unlikely to be the bottleneck of these systems. As future work, Recoil can be combined with standardized state-of-the-art image and video coding formats. Recoil can be an easy drop-in replacement for single-threaded interleaved rANS coders: the Recoil metadata can be transmitted separately so that the coding format does not change. This enables massively parallel, high-throughput and low-latency content delivery while wasting no transmission overhead on unused parallelism. The open-source implementation of Recoil is made public at https://github.com/lin-toto/recoil. This work was supported in part by NICT No. 03801, JST PRESTO JPMJPR19M5, JSPS Grant 21K17770, Kenjiro Takayanagi Foundation, and Foundation of Ando laboratory. The authors would like to thank Shota Hirose, Jinming Liu, Ao Luo, Zhongfa Wang, Huanchao Shen, Hiroshi Sasaki, Prof. Keiji Kimura, Chisato Nishikigi, and Takina Inoue for the inspiring comments and discussions on this work. The project name is inspired by the TV anime series "Lycoris Recoil", and we gratefully thank them for driving our research with the power of sakana and chinanago.
http://arxiv.org/abs/2306.11415v1
20230620095033
High frequency oscillations in spin-torque nano oscillator due to bilinear coupling
[ "R. Arun", "R. Gopal", "V. K. Chandrasekar", "M. Lakshmanan" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "nlin.CD" ]
Department of Nonlinear Dynamics, School of Physics, Bharathidasan University, Tiruchirapalli-620024, India Department of Physics, Centre for Nonlinear Science and Engineering, School of Electrical and Electronics Engineering, SASTRA Deemed University, Thanjavur 613 401, India Department of Physics, Centre for Nonlinear Science and Engineering, School of Electrical and Electronics Engineering, SASTRA Deemed University, Thanjavur 613 401, India Department of Nonlinear Dynamics, School of Physics, Bharathidasan University, Tiruchirapalli-620024, India Exchange coupling in an interfacial context is crucial for spin-torque nano oscillator (STNO) that consists of a non-magnetic spacer which is alloyed with a ferromagnetic material. Currently, investigations on the dynamics of the free layer magnetization and frequency enhancement in the STNO with bilinear coupling are still being actively pursued. In the present work, we investigate the dynamics of the STNO in the presence of bilinear coupling but in the absence of an external magnetic field by analyzing the associated Landau-Lifshitz-Gilbert-Sloncewski(LLGS) equation, and consequently the impact of the bilinear coupling on the dynamics of the magnetization of the free layer is studied. It is observed that the frequency of the oscillations in the magnetization component along the direction of the pinned layer polarization can be enhanced above 300 GHz by positive bilinear coupling and up to around 30 GHz by negative bilinear coupling. We further reveal a transition from in-plane to out-of-plane precession both for positive and negative bi-linear couplings. We also analyze the switching of the magnetization for different values of current and bilinear coupling. Our detailed investigations of STNO with bilinear coupling aim at the possibilities of high-frequency devices by considering the applied current and bilinear coupling in the absence of a magnetic field. High frequency oscillations in spin-torque nano oscillator due to bilinear coupling M. Lakshmanan =================================================================================== § INTRODUCTION A spin-polarized electrical current can impart spin angular momentum in the ferromagnetic material, which can be used to control the magnetization state of a magnetoresistive device called spin torque nano oscillator (STNO) <cit.> . In particular, it is feasible to cause the oscillations or precession of the magnetization, which is relevant for tunable microwave devices or to reverse the magnetization that is essential for various magnetic memory systems <cit.>. In an STNO, two ferromagnetic layers are separated by a thin nonmagnetic, but conductive layer called a spacer. Among the two ferromagnetic layers, one is called the free layer, which is comparatively thinner than the other which is the pinned layer. In the free layer the direction of magnetization can change while it is fixed in the pinned layer. Further, some studies also ensure that the spacer layer can promote a high interlayer exchange coupling between its adjacent ferromagnetic layers <cit.>. The bottom and top layers of the two in-plane magnetized ferromagnetic layers are exchange-coupled via a Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction across the thin nonmagnetic spacer, whose thickness is tuned to produce an antiferromagnetic coupling in zero applied field <cit.>. For instance, a nonmagnetic layer typically made of Ru <cit.> introduces a RKKY exchange coupling between two magnetic layers <cit.>. 
The spin direction of the ferromagnetic layers can be parallel or antiparallel to each other depending upon the thickness of the spacer layer in magnetic multilayer systems. This parallel or antiparallel orientation of the ferromagnetic layers can be called collinear magnetization configuration <cit.>. On the other hand, obtaining a noncollinear magnetization configuration is possible due to the competition between the interlayer coupling energy and magnetic anisotropies of the coupled ferromagnetic layers for some structures. Recently, Nunn et al. have reported that the influence of the exchange coupling between two ferromagnetic layers (Fe) coupled through a nonmagnetic interlayer (Ru) is essential in controlling the magnetic layers' functionality <cit.>, and this has now been observed in various systems. It has been explained theoretically by several different approaches <cit.>. Recent results <cit.> in this context show that the presence of an exchange coupling system plays a backbone in the emergence of many spintronic-based applications such as magnetic field sensors, magnetic memory devices <cit.>, magnetic resistive random access memory (MRAM) <cit.> and spin-torque nano oscillators <cit.>. Based on the nanoscale size and suitability for room-temperature operation, spin-torque oscillators (STOs) provide exciting possibilities for these applications. However, their adjustable range and oscillation frequency are only from 100 MHz to 10 GHz <cit.>. Recently, we investigated and reported that the frequency of an STNO with bilinear and bi-quadratic couplings can be enhanced above 300 GHz by the current <cit.>. Also, Kurokawa et al.  <cit.> have shown the oscillations of the free layer magnetization in the components along the perpendicular directions of the pinned layer polarization with frequencies upto 576 GHz in the presence of bilinear and biquadratic interlayer exchange couplings in STNOs, and also with the free layer having low transition temperature for the saturation magnetization. In their investigation they have shown that the biquadratic coupling is essential for the high frequency <cit.>. In this connection, our present report provides a detailed study on Co |RuFe | Co STNO with bilinear interlayer exchange coupling alone between the free and pinned ferromagnetic layers and show the existence of oscillations of the free layer magnetization in the components along the pinned layer polarization with frequencies above 300 GHz with the free layer having high transition temperature. This unaccompanied role of the bilinear interlayer exchange coupling has been thoroughly researched since it has been used in many spintronics devices <cit.>, and multilayer magnetic thin films. Depending on the interfacial exchange coupling, both negative and positive exchange couplings have been seen in ferromagnetic/ferrimagnetic transition of metal and rare-earth alloy multilayer thin films <cit.> and the role of the bilinear coupling co-efficient are experimentally studied in Ref. <cit.>. However, numerical and analytical studies on the bilinear coupling in STNO without an external magnetic field that leads to magnetization oscillations have not been thoroughly studied in the literature <cit.>. The paper is organized as follows. First, we formulate the model and the governing LLGS equation of motion and effective magnetic field for the present study in Sec. II. The positive and negative bilinear coupling dynamics and expression for minimum current for oscillations are presented in Sec. 
III and IV, respectively. Section V is devoted to the conclusion of the present work. § MODEL The schematic picture of an STNO considered for our study, which consists of a free layer, a spacer layer and a pinned layer, is shown in Fig.<ref>. The magnetization of the free layer is denoted as M = M_s m, where M_s is the saturation of the magnetization. While the magnitude of the magnetization is fixed, its direction can change over time. The magnetization of the pinned layer P = M_s p is fixed for both magnitude and direction. Here m and p are the unit vectors along M and P, respectively. As shown in Fig.<ref>, the positive and negative currents correspond to the flow of electrons from the free layer to pinned layer and vice versa, respectively. The free and pinned layers are considered to be made up of Co. The spacer layer is a nonmagnetic conductive layer, constituting an alloy of Ru and Fe. The magnetization dynamics described by the LLGS equation that governs the motion of the unit vector m is given as d m/dt= -γ m× H_eff+ α m×d m/dt +γ H_S  m× ( m× p). Here, γ and α are the gyromagnetic ratio and damping parameter, respectively. The spin-torque strength is H_S = ħη I/2 e M_s V (1+λ m· p)), where ħ is the reduced Planck's constant (ħ(=h/2π)), I is the current, e is the electron charge, and V is the volume of the free layer, η and λ are the dimensionless parameters determining magnitude and angular dependence of the spin-transfer torque. The effective magnetic field H_eff is given by H_eff = H_ani + H_dem + H_bil, where H_ani and H_dem is the anisotropy and the demagnetization field, respectively. The effective field also consists of a bilinear coupling interaction H_bil of interlayer exchange coupling between the free and reference layers, the details of which are given below. Specifically, the various interactions in (3) are given by H_ani = H_k m_z  e_z, H_dem = -4π M_s m_z  e_z, H_bil = -J/dM_s  e_x. Consequently, we have H_eff=(H_k-4π M_s) m_z  e_z-J/dM_s  e_x . Here e_x, e_y and e_z are the respective unit vectors along the positive x, y and z directions. H_k is the magneto-crystalline anisotropy constant, J is the coefficient of the bilinear coupling, M_s is the saturation magnetization and d is the thickness of the free layer. The energy density of the free layer responsible for the effective field H_eff=- ∂ E/∂ (M_s m) is given by E = J/d  m. p -M_s/2[ H_k - 4π M_s ]( m. e_z)^2. The pinned layer is considered to be polarized along positive x-direction, i.e. p = e_x. The material parameters are adapted as M_s = 1210 emu/c.c., H_k = 3471 Oe, η = 0.54, λ = η^2, d = 2 nm, A = π×60×60 nm^2, V = Ad, α = 0.005 and γ = 17.64 Mrad/(Oe s). Since H_k<4π M_s, the system exhibits easy-plane anisotropy for xy-plane or hard axis anisotropy for z-axis due to the resultant demagnetization field -(4π M_s-H_k) m_z  e_z. It means that the magnetization is always pulled towards the xy plane whenever it moves away from the plane with the strength directly proportional to m_z. Therefore, before applying any current, to minimize the energy (Eq.(<ref>)), the magnetization of the free layer settles at (-1,0,0) for positive bilinear coupling (J>0) or (1,0,0) for negative bilinear coupling (J<0). This implies that the system exhibits antiferromagnetic coupling for the positive bilinear coupling and ferromagnetic coupling for the negative bilinear coupling between the free and pinned layers <cit.>. 
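For concreteness, Eq. (<ref>) can be integrated numerically once it is brought to the explicit form d m/dt = γ/(1+α^2)[ - m× H_eff - α m×( m× H_eff) - α H_S ( m× p) + H_S  m×( m× p) ], obtained by the standard elimination of d m/dt from the right-hand side of the LLGS equation. The sketch below (Python, fixed-step RK4) uses the material constants listed above in CGS-Gaussian units; it is not the authors' code (they use an adaptive-step RK4), and the spin-torque amplitude H_S0 = ħη I/2 e M_s V is passed in directly as a field value in Oe, so the unit bookkeeping for the current is deliberately left outside the sketch.

```python
import numpy as np

# Minimal sketch: free-layer dynamics of the LLGS equation with the parameters above.
# Units are CGS-Gaussian (Oe, emu/cc, cm, s); J is given in erg/cm^2 (= mJ/m^2).
gamma, alpha = 17.64e6, 0.005           # rad/(Oe s), Gilbert damping
Ms, Hk, d = 1210.0, 3471.0, 2e-7        # emu/cc, Oe, cm
lam = 0.54**2                           # lambda = eta^2
p = np.array([1.0, 0.0, 0.0])           # pinned-layer polarization

def h_eff(m, J):
    """Effective field (Oe): bilinear coupling along -x, easy-plane term along z."""
    return np.array([-J / (d * Ms), 0.0, (Hk - 4.0*np.pi*Ms) * m[2]])

def dm_dt(m, J, HS0):
    """Explicit (solved-for-dm/dt) form of the LLGS equation."""
    H = h_eff(m, J)
    hs = HS0 / (1.0 + lam * np.dot(m, p))   # angular dependence of the torque
    mxH, mxp = np.cross(m, H), np.cross(m, p)
    rhs = -mxH - alpha*np.cross(m, mxH) - alpha*hs*mxp + hs*np.cross(m, mxp)
    return gamma / (1.0 + alpha**2) * rhs

def integrate_mx(m0, J, HS0, dt=1e-13, steps=500_000):
    """Fixed-step RK4; returns the m_x(t) trace (the paper uses adaptive RK4)."""
    m = np.asarray(m0, dtype=float)
    trace = np.empty(steps)
    for k in range(steps):
        k1 = dm_dt(m, J, HS0)
        k2 = dm_dt(m + 0.5*dt*k1, J, HS0)
        k3 = dm_dt(m + 0.5*dt*k2, J, HS0)
        k4 = dm_dt(m + dt*k3, J, HS0)
        m = m + (dt/6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
        m /= np.linalg.norm(m)              # re-normalize to keep |m| = 1
        trace[k] = m[0]
    return trace
```

Starting slightly off an equilibrium (e.g. m0 ≈ (-1, 0.01, 0.01), normalized) and sweeping HS0 should qualitatively reproduce the regimes discussed in the next section, though this sketch makes no claim of quantitative agreement with the figures.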
It has been shown that the magnitude and sign of the bilinear coupling coefficient can be experimentally tuned by changing the concentration of Fe in the spacer layer made by Ru_100-xFe_x alloy <cit.> since the oscillations are observed when I<0 for the positive bilinear coupling and I>0 for the negative bilinear coupling, and both the cases of the bilinear couplings are investigated separately in the following sections. § DYNAMICS FOR THE POSITIVE BILINEAR COUPLING In the absence of current the equilibrium state of the unit magnetization vector m for the positive bilinear coupling is S_1 = (-1,0,0) since the field due to the interaction H_bil acts along the negative x-direction. This is confirmed in Figs.<ref>(a) and <ref>(b), where the time evolution of m_x and m_y are plotted for J = 0.756 mJ/m^2 and 0.352 mJ/m^2, respectively, for different initial conditions. In both these figures <ref>(a) and <ref>(b), we can observe that the magnetization finally reaches the state S_1. These numerical results coincide well with the experimental results obtained by Nunn et al <cit.>, where the same system exhibits antiparallel configuration between the magnetizations of the free and pinned layers for J = 0.756 mJ/m^2 and 0.352 mJ/m^2 corresponding to Ru_32Fe_68. When the current is applied, depending upon the magnitude of the current, the system exhibits three different dynamics for m. (i) When |I|<|I_min|, the unit magnetization vector m stays in the state S_1 where it was existing already. (ii) When |I_min|<|I|<|I_max|, the vector m exhibits continuous precession. (iii) When |I|>|I_max| the vector m moves away from (-1,0,0) and settles into the state S_2 (near (0,0,±1)) for small J (<2.8 mJ/m^2) or settles into the state S_3=(1,0,0) for large J (>2.8 mJ/m^2). Hence the states S_1, S_2 and S_3 are associated with the currents when |I|<|I_min|, |I|>|I_max| for J (>2.8 mJ/m^2) and |I|>|I_max| for J (<2.8 mJ/m^2), respectively. The critical value of the positive bilinear coupling strength J_c = 2.8 mJ/m^2 is derived in Eq.(<ref>). Here, I_min and I_max are the minimum and maximum currents, respectively, between which oscillations can be exhibited. To confirm the precession of m, oscillations of m_x and tunability of the frequency by current, Eq.(<ref>) is numerically solved by adaptive step size Runge-Kutta-4 method. The initial condition of m, for the numerical simulation, is randomly chosen near the state S_1. When a negative current is applied with the magnitude |I_min|<|I|<|I_max|, the magnetization which was in the S_1 state moves away from it due to the spin-transfer torque. This is due to the fact that the incoming electrons in the free layer, which were spin polarized along the positive x-direction, always move the magnetization to align with the positive x-direction. Once the magnetization moves away from the state S_1 by STT, continuous precession is achieved due to the balance between the damping (due to the effective field) and the STT. The trajectories of m (after transition and between t = 299 ns and t = 300 ns) in continuous precession at different currents for a low value of J(= 0.4 mJ/m^2) and the time evolution of m_x corresponding to J = 0.4 mJ/m^2 and I = -1.5 mA are plotted in Figs.<ref>(a) and (c), respectively. Similarly, the trajectories of m in the same duration for a high value of J(= 7.0 mJ/m^2) and the time evolution of m_x corresponding to J = 7 mJ/m^2 and I = -2.3 mA are plotted in Figs.<ref>(b) and (d), respectively. 
We can observe from Fig.<ref>(a) that the trajectory corresponding to the current I = -0.5 mA (red) exhibits in-plane precession around the x-axis due to the field from positive bilinear coupling. The direction of the precession is clockwise as seen from the positive x-axis. When the strength of the current is increased further to I = -1 mA (blue), the trajectory of the magnetization slightly transforms as shown in Fig.<ref>(a). It seems that the trajectory has been folded along the negative x-axis. The magnetization gets close to the positive x-axis when it reaches the xy-plane. This is due to the fact that the resultant demagnetization field becomes weaker when the magnetization gets closer to the xy-plane. Therefore the STT, which always moves the m towards the positive x-axis, becomes stronger and moves the magnetization towards the positive x-axis as much as possible. Once the magnetization crosses the xy-plane, the magnetization moves away from the positive x-axis. This is due to the fact that the resultant demagnetization field rotates the magnetization from negative to positive y-axis in the northern hemisphere and from positive to negative y-axis in the southern hemisphere. When the current is further increased to -1.5 mA (brown), the magnetization shows a transition from the in-plane precession to out-of-plane precession around the z-axis as shown in the Fig.<ref>(a). This is because an increase of curent increases the magnitude of the STT and consequently the projection of m in the xy-plane crosses the positive x-axis before the m reaches the xy-plane. Therefore the bilinear exchange coupling field and the resultant demagnetization field along with the STT precess the magnetization within the northern hemisphere continuously. The out-of-plane precessions may symmetrically take place in the southern or northern hemisphere. Further increment in the current to -2.5 mA (black) and -3.25 mA (magenta) makes the concentric trajectories of m around the equilibrium magnetization state where the m settles when |I|>|I_max|, with I_max = - 3.4 mA for J = 0.4 mJ/m^2. The black point in Fig.<ref>(a) corresponds to the equilibrium state at which the unit vector m settles for I = -4 mA when J = 0.4 mJ/m^2. This equilibrium state can be identified as follows: The LLGS equation given by Eq.(<ref>) is transformed into spherical polar coordinates using the transformation equations m_x=sinθcosϕ, m_y = sinθsinϕ, m_z=cosθ as dθ/dt =  γ/1+α^2{ -J/dM_s(αcosθcosϕ-sinϕ) -α (H_k-4π M_s) sinθcosθ - H_S0(αsinϕ+cosθcosϕ)/(1+λsinθcosϕ)} = P(θ,ϕ),  dϕ/dt =  γθ/1+α^2{J/dM_s(cosθcosϕ+αsinϕ) + (H_k-4π M_s) sinθcosθ + H_S0(sinϕ-αcosθcosϕ)/(1+λsinθcosϕ)} = Q(θ,ϕ). Here, θ and ϕ are the polar and azimuthal angles, respectively, H_S0 = ħη I/2eM_sV. The equilibrium state is obtained from the equations P(θ^*,ϕ^*)=0 and Q(θ^*,ϕ^*)=0, where ϕ^* is numerically observed as ϕ^*≈ 0. This leads us to derive the relation sinθ^* = J/(dM_s(4π M_s-H_k)). Therefore, the equilibrium state S_2 for m when |I|>|I_max| is given by S_2≈(sinθ^*,0,±cosθ^*), where sinθ^* is as given above. However, when the magnitude of the current is increased much further than |I_max|, the equilibrium state will slightly move away from the state S_2 and if the magnitude of the current is extremely large (|I|>>|I_max|), i.e above ∼100 mA, then the magnetization will settle in the state S_3 = (1,0,0). From Eq.(<ref>), we can understand that the value of θ^* becomes π/2 when J = dM_s(4π M_s-H_k). 
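A quick numerical check of this condition with the material constants listed in the Model section (a back-of-the-envelope sketch; recall that 1 erg/cm^2 = 1 mJ/m^2):

```python
import numpy as np

Ms, Hk, d = 1210.0, 3471.0, 2e-7        # emu/cc, Oe, cm
Jc = d * Ms * (4.0*np.pi*Ms - Hk)       # critical coupling, erg/cm^2
theta_star = np.degrees(np.arcsin(min(1.0, 0.4 / Jc)))  # tilt angle for J = 0.4 mJ/m^2
print(round(Jc, 2), round(theta_star, 1))               # -> 2.84 and about 8.1
```

which reproduces the value J_c ≈ 2.8 mJ/m^2 quoted below.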
It means that the equilibrium state S_2 of the magnetization moves towards the state S_3 = (1,0,0) as the strength of the positive bilinear coupling J increases and reaches (1,0,0) when J→ J_c, where J_c = dM_s (4π M_s-H_k) = 2.8   mJ/m^2. Similarly, the magnetization precession for the high strength of bilinear coupling (J = 7.0 mJ/m^2) is also investigated by plotting the trajectories for the currents I = -2 mA (red), -2.1 mA (blue), -2.2 mA (black), -2.3 mA (magenta), -2.35 mA (orange) and -3 mA (black point) in Fig.<ref>(b). Unlike the case of low bilinear coupling as shown in Fig.<ref>(a), there is no transition from in-plane to out-of-plane precession while increasing the magnitude of the current and the magnetization exhibits in-plane precession only around the x-axis. This can be reasoned as follows: When the strength of the bilinear coupling field is strong due to large J(>0), the STT and the resultant demagnetization field are dominated by this bilinear coupling field. Therefore, the rotations due to the resultant demagnetization field and the approach of the magnetization towards the positive x-axis due to the STT are not exhibited. When the current is increased further, the trajectory moves from the negative to positive x-axis and settles into the equilibrium state S_3 when |I|>|I_max|, where I_max = -2.35 mA for J = 7.0 mJ/m^2. The equilibrium state for the current -3 mA is shown by the black point in the Fig.<ref>(b). To confirm the oscillations the time evolutions of the component m_x are plotted in Fig.<ref>(c) for J = 0.4 mJ/m^2, I = -1.5 mA and in Fig.<ref>(d) for J = 7.0 mJ/m^2, I = -2.3 mA. The frequencies of the oscillations are 16 GHz and 163 GHz, respectively. The frequencies of the oscillations, of m_x are plotted against the current for different values of bilinear coupling strengths (given in mJ/m^2) from 0.1 mJ/m^2 to 12 mJ/m^2 in Fig.<ref>(a) and against bilinear coupling for different values of current in Fig.<ref>(b). From Fig.<ref>(a), we can understand that when the bilinear coupling coefficient is low, the frequency decreases up to some critical current I_c and then increases. This change in the frequency from decrement to increment is attributed to the transition of magnetization precession from the in-plane to out-of-plane as discussed earlier with reference to Fig.<ref>(a). In Fig.<ref>(a), the existence of I_min and I_max is evident, and the range of current for the oscillations (|I_max|-|I_min|) confirms the wide frequency tunability by the current. The magnitude of I_c slightly decreases with the increase of J. Also, we can observe that when J is large (≥2.9 mJ/m^2) the frequency decreases with the increase in the magnitude of the current up to I_max and the I_c does not exist. This is due to the nonexistence of out-of-plane precession, as shown in Fig.<ref>(b). From Fig.<ref>(a) it is observed that the tunability range (|I_max|-|I_min|) decreases and increases with J when the strength of J is small and large, respectively. At a given current, the frequency increases with the magnitude of bilinear coupling. Also, it is confirmed that the frequency can be enhanced up to 300 GHz for J = 12.0 mJ/m^2 and even above when J is increased further. Similarly, the frequency is plotted against J for different values of the current in Fig.<ref>(b). Due to the nonexistence of out-of-plane precession at large strengths of J, the discontinuity appears in the frequency while increasing the value of J as shown in Fig.<ref>(b). 
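The frequencies quoted here and below are read off from the steady-state part of the m_x(t) traces; a minimal way to extract them numerically (a sketch, assuming a trace such as the one returned by the RK4 integrator sketched after the Model section, sampled every dt seconds):

```python
import numpy as np

def oscillation_frequency(mx, dt, discard=0.5):
    """Peak frequency (Hz) of an m_x trace; the first `discard` fraction
    of the samples is dropped as transient before taking the spectrum."""
    x = np.asarray(mx)[int(len(mx) * discard):]
    x = x - x.mean()                            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), dt)
    return freqs[1 + np.argmax(spectrum[1:])]   # skip the zero-frequency bin

# e.g. (with m0 normalized, HS0 an illustrative torque amplitude in Oe):
# f = oscillation_frequency(integrate_mx(m0, J=0.4, HS0=...), dt=1e-13)
# print(f / 1e9, "GHz")
```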
From Fig.<ref>(b) we can observe that the frequency almost linearly enhances with J. The frequency range is around 30 GHz and 300 GHz when the values of J are small and large, respectively. The enlargement of frequency and switching time can be essentially attributed to the large value of the bilinear coupling strength J, which causes the system to behave more like a layered antiferromagnet <cit.>. The large value of J in our system is possibly due to Nunn et al.'s recently proposed RuFe spacer layer <cit.>. The current density corresponding to the frequency 299.6 GHz when I = -3.35 mA can be obtained as 2.96× 10^7 A/cm^2 for the cross sectional area A=π× 60× 60 nm^2. Also, it is visible that the magnitude of the current can increase the range of J for which the oscillations are possible. Figs.<ref>(a) and (b) summarize the dependence of the frequency on current and J while J is below and above 2.3 mJ/m^2, respectively. The white color region is nonoscillatory region. From Figs.<ref>(a) & (b), we can see that the magnitude of the current above which the oscillations occur (|I_min|) linearly increases with J. The value I_min for J>0 can be derived as follows: The nature of the stability of an equilibrium state which is represented by polar coordinates can be identified from the following Jacobian matrix by using Eqs.(<ref>) and (<ref>) 𝒥 = [ .dP/dθ|_(θ^*,ϕ^*) .dP/dϕ|_(θ^*,ϕ^*); .dQ/dθ|_(θ^*,ϕ^*) .dQ/dϕ|_(θ^*,ϕ^*) ]. The equilibrium state (θ^*,ϕ^*) will be stable only when the system is dissipative about it. It will be dissipative if and only if the trace of the matrix 𝒥 becomes negative, Tr(𝒥)<0. We knew that when |I|<|I_c^min| and J>0 the magnetization settles at S_1, i.e, (π/2,π) in polar coordinates. Therefore specific set of values (θ^*,ϕ^*)=(π/2,π) satisfies Eq.(<ref>). The trace of the matrix corresponding to (π/2,π) is given by Tr(𝒥)|_(θ^*,ϕ^*) = γ/1+α^2[-2Jα/dM_s+(H_k-4π M_s)α-2H_S0/1+λ]. The minimum critical current I_min (for J>0), below which the S_1 is stable can be derived from Eqs.(<ref>) and (<ref>) as I_min = eAα(λ-1)/dħη[2J+(4π M_s-H_k)dM_s] and it has been plotted as open circles in Figs.<ref>(a) and (b), which matches well with the numerical results and confirms the validity of the numerical results. From Fig.<ref>(a) and (b) we can observe that value of I_max decreases with J at lower strengths of J and increases (almost linearly) with J at higher strengths of it. Fig.<ref>(b) evidences that the range of current which exhibits oscillations increases with J while J is large. In the case of positive current, the STT always moves the magnetization to be aligned with the negative x-direction. Therefore the positive current does not move the magnetization from the state (-1,0,0), where it existed already before the application of the current, and therefore no precession is exhibited. We can observe in Figs.<ref>(a) and <ref>(b) that the magnetization settles into the equilibrium states S_2 and S_3, respectively when I>I_max. It indicates a transition from S_2 to S_3 while increasing the strength of the positive bilinear coupling. As discussed in Eq.(<ref>), the transition occurs at J = 2.8  mJ/m^2. From Fig.<ref>(b), we can observe that when the magnitude of the current is above the magnitude of I_max, the magnetization will settle into the state S_3 from S_1 for the positive bilinear coupling. This indicates the existence of current-induced magnetization switching from the negative to positive x-direction. 
The corresponding switchings of m_x from -1 to +1 for different values of bilinear coupling when I = -2.5 mA and current when J = 4.5 mJ/m^2 are plotted in Figs.<ref>(a) and (b), respectively. From Fig.<ref>(a) we can observe that the switching times for J = 3.0, 4.5 and 6.0 mJ/m^2 are 4.42, 6.01 and 9.42 ns, respectively. Hence, the switching time increases with the magnitude of the positive bilinear coupling. On the other hand, from Fig.<ref>(b) we can understand that the switching times for the currents I = -2.0, -2.5 and -3.0 mA are 9.88, 6.01 and 3.892 ns, respectively. This implies that the switching times reduce with the increase of the magnitude of the current. The variation of the switching time against current and the strength of the bilinear coupling for different values of J and I are plotted in Figs.<ref>(c) and (d), respectively. Figs.<ref>(c) and (d) confirm the decrement and increment of the switching time with the increase in the magnitude of current and positive bilinear coupling, respectively. Since the field due to the positive bilinear coupling acts along the negative x-direction, the enhancement in the magnitude of the negative current can quickly reverse the magnetization from negative to positive x-direction as shown in Fig.<ref>(c). Similarly, when the strength of the positive bilinear coupling increases, its corresponding field along the negative x-direction increases, and consequently the magnetization takes much time to reverse from the negative to positive x-direction by the application of negative current as confirmed in Fig.<ref>(d). The above current-induced magnetization switching has spin torque magnetic random access memory applications and is much more efficient than the field-induced switching. The field-free switching may help produce magnetic memory devices with low power consumption and greater device density  <cit.>. As observed from Figs.<ref>, when the current I is kept constant and the strength of the positive bilinear coupling J is increased, the magnetization reaches the equilibrium state S_2 via out-of-plane precession (see Fig.<ref>(a)). When J is increased further, the equilibrium state of the magnetization S_2 becomes (1,0,0) as J→ J_c (see Eq.<ref> in the revised manuscript). After the magnetization reaches the state S_3 it continues to settle there without showing any oscillations until the further increase in J is strong enough to move away the magnetization from the state S_3 against the STT due to the incoming spin polarized electrons. As observed in Fig.<ref>(b) and Figs.<ref>, the gap between the offset of oscillations of m when reaching S_2 and the onset of oscillations when emanating from S_3 increases with the magnitude of the current. This is due to the fact that the strength of the STT which tends to keep the magnetization along the positive x-direction increases with the magnitude of current and consequently the strength of the bilinear coupling is required to be high enough to regain the oscillations from the equilibrium state S_3. § DYNAMICS FOR THE NEGATIVE BILINEAR COUPLING In the presence of negative bilinear coupling the magnetization will initially be oriented at S_3 since the field due to the negative bilinear coupling H_bil acts along the positive x-direction. The magnetization continues to be settled at S_3 until the current I is increased to I_min. The STT, due to the positive current, will always move the magnetization to be aligned with the negative x-direction. 
When I>I_min, the magnetization is moved away from S_3, and the system shows continuous precession for the vector m. The frequency of the oscillations of m_x is plotted against low values of current in Fig.<ref>(a) and high values of current in Fig.<ref>(b) for different values of the negative bilinear coupling (given in mJ/m^2). From Fig.<ref>(a), we can understand that similar to the case of the positive bilinear coupling, the frequency decreases with current up to a critical value I_c and then increases with current. Similar to the previous case, this increment in frequency after decrement is attributed to the transition from in-plane to out-of-plane precession. This is verified by plotting the trajectories of the vector m corresponding to I = 1 mA (red) and 2 mA (blue) for J = -0.1 mJ/m^2 in Fig.<ref>(c). Since the field, due to negative bilinear coupling, acts along the positive x-direction, the magnetisation trajectory corresponding to I = 1 mA (red) has been folded along the positive x-axis and exhibits in-plane precession. When the current increases to 2 mA (blue), the magnetization transforms from in-plane precession to out-of-plane precession in the northern hemisphere. However, the out-of-plane precession may also be symmetrically placed in the southern hemisphere. The explanation behind this transition is similar to those discussed in the case of positive bilinear coupling. The out-of-plane precessions corresponding to the currents I = 10 mA (brown), 20 mA (black) and 36 mA (magenta) for J = -0.1 mJ/m^2 also are plotted in Fig.<ref>(c). From Fig.<ref>(a), we can understand that when the strength of the negative bilinear coupling is relatively high, the frequency shows only an increment with the current. This is because at higher values of negative bilinear coupling, the unit magnetization vector m exhibits out-of-plane precession instead of exhibiting any transition from in-plane to out-of-plane precession. In Fig.<ref>(b), the frequency is plotted up to large values of current for different values of J. The frequency increases with current and reaches its maximum. For small values of J, the frequency increases to its maximum and then decreases. Fig.<ref>(b) shows that there is a maximum current I_max above which oscillations are not possible. For the currents above I_max, the magnetization settles into S_1 without showing any precession. In Fig.<ref>(b) we can observe the discontinuities for frequencies near I_max upto J≈ -0.4 mJ/m^2, where the system exhibits multistability i.e the magnetization may precess continuously or settle at S_1. It is confirmed in Fig.<ref>(c) by precession for I = 36 mA (magenta) and equilibrium state S_1 for I = 37 mA (black point). In Fig.<ref>(b) it is observed that the discontinuities in the frequencies have disappeared above J = -0.4 mJ/m^2. This is because the magnetization does not settle at S_1 below I_max. The magnetization exhibits three different nature of equilibrium states for |J|>∼0.4 and I>I_max. When the current is increased near above I_max, the magnetization settles near poles at S_2. When I is increased further the unit vector m settles into S_2 or S_1. If the current is increased further to extremely large values, the magnetization settles into S_1. The range of the current in which the oscillations are possible (I_max-I_min) also increases (decreases) with |J| when |J| is small (large). 
From Figs.<ref>(a) and (b), it is observed that the frequency can be reached around 30 GHz by increasing the current and the magnitude of the negative bilinear coupling. In Fig.<ref>(d), the frequency is plotted against the negative bilinear coupling for different values of the currents. It seems that the frequency increases almost linearly with the increase in the magnitude of negative bilinear coupling coefficient. Also, at a given J, the frequency increases with the magnitude of the current. The dependence of the frequency on the negative bilinear coupling and current is plotted for the large values of current in Fig.<ref>(a) and small values of current in Fig.<ref>(b). The white background corresponds to the non-oscillatory region. From Fig.<ref>(a) we can observe that the value of I_max increases up to -0.33 mJ/m^2 and then decreases abruptly. From the bright green and red regions in Fig.<ref>(a) we can understand that the frequency can be maintained constant while increasing the current at fixed J. Also, it is clearly visible that the tunability range of the frequency by current drastically reduces after ∼-0.3 mJ/m^2. This is different from the case of positive bilinear coupling where the ocillatory region (|I_max|-|I_min|) can be expanded with the increase of J. For currents above I_max, three different regions are identified for m as shown in Fig.<ref>(a). The three different regions for equilibrium states S_1, S_2 and S_1/S_2 for the current above I_max are indicated in Fig.<ref>(a). To see the minute variation of frequency in the low current region, Fig.<ref>(b) is plotted for currents upto 3 mA. Fig.<ref>(b) confirms the decrement and increment in frequency with current when |J|<1 mJ/m^2. Also, the frequency at a given current increases with the strength of the negative bilinear coupling. The minimum current I_min for J<0 is similarly derived as in the previous case for positive bilinear coupling. When I<I^min and J<0, the state S_3 becomes stable and the magnetization settles into S_3, corresponding to (π/2,0) in polar coordinates. The trace of the matrix J corresponding to the state (π/2,0) is derived as Tr(𝒥)|_(π/2,0) = γ/1+α^2[2Jα/dM_s+(H_k-4π M_s)α+2H_S0/1-λ]. From the condition (<ref>) and Eq.(<ref>), we can derive the minimum current (for J<0) below which the equilibrium state S_3 is stable as I_min = -eAα(1+λ)/dħη[2J+(H_k-4π M_s)dM_s]. Eq.(<ref>) is plotted in Fig.<ref>(b) as open circles and matches well with the numerical results. This confirms the validity of the numerical results. If the current is negative, the STT always moves the magnetization towards the positive x-direction. Therefore the magnetization does not move from the state S_3, where it was already existing before applying the current, by the negative current, and no precession is exhibited. Similar to the case of positive bilinear coupling, magnetization switching can also be identified for negative bilinear coupling. As discussed in Fig.<ref>(a) when a current corresponding to the region of equilibrim state S_1 is applied the magnetization will switch from S_3 to S_1. In Figs.<ref>(a) and (b) the component m_x is plotted to confirm the switching from positive to negative x-direction for different values of J when I = 33.5 mA and for different values of I when J = -0.05 mJ/m^2, respectively. The variation of the switching time against current and the coupling is plotted in Figs.<ref>(c) and (d), respectively. 
From Figs.<ref>(a) and (c), we can understand that similar to the positive bilinear coupling, the switching time decreases with the increase in the magnitude of the current. Fig.<ref>(d) confirms that there is no definite relationship between the switching time and the negative bilinear coupling. The switching time variation against the magnitude of the coupling is not smooth like in the case of positive bilinear coupling. § CONCLUSION In conclusion, we have investigated the dynamics of Co |RuFe | Co STNO using the LLGS equation and identified high-frequency oscillations in the magnetization of the free layer due to the presence of bilinear coupling. The obtained orientations of the magnetization of the free layer with that of the pinned layer in the absence of current match well with the experimental results. A transition in the precession of the magnetization from in-plane precession to out-of-plane precession while increasing the current is observed for both positive and negative bilinear coupling cases. However, the transition does not occur at higher strengths of the bilinear coupling. Only an in-plane precession for the positive bilinear coupling and an out-of-plane precession for the negative bilinear coupling are exhibited. A wide range of frequency tunability by the current is observed for both cases of bilinear coupling. While the frequency is enhanced upto 30 GHz by the negative bilinear coupling, the positive bilinear coupling enhances the frequency upto and above 300 GHz. This high frequency has been shown for the oscillations of the magnetization vector (free layer) along the pinned layer polarization and with the free layer having high transition temperature for the saturation magnetization. The range of the current in which the frequency can be tuned increases with the strength of the positive bilinear coupling corresponding to the in-plane precession. Oscillations are exhibited for the positive (negative) bilinear coupling when the current is applied in the negative (positive) direction. Also, oscillations are possible only when the current is between I_min and I_max. When |I|<|I_max|, the magnetization settles into (-1,0,0) for J>0 and (1,0,0) for J<0. If the strength of the positive bilinear coupling is large, then the magnetization settles into (1,0,0) for all the magnitudes of the current above |I_max|. On the other hand, if the strength is small, it settles near poles (S_2) when |I|>|I_max| or into (1,0,0) when |I|>>|I_max|. If the bilinear coupling is negative, there are three regions corresponding to the equilibrium states S_2, S_1 (or) S_2 and S_1 above I_max depending upon the values of I and J. The magnetization switching induced by the current alone is identified for both of the bilinear couplings. It is observed that the switching time reduces with the increase in the magnitude of the current for both cases of the bilinear coupling. We have also analyzed the expressions for the minimum currents to achieve the oscillations for both the positive and negative bilinear couplings. We have shown that they match well with the numerically obtained results. We have also proved that the bilinear coupling is sufficient for the high-frequency oscillations among two interlayer exchange couplings, namely bilinear and biquadratic couplings. We wish to point out that this study has been carried out for the temperature T = 0 K. However, the free layer we have considered is perpendicular magnetic anisotropic one and this is normally robust against thermal noise <cit.>. 
We believe that our detailed study on bilinear coupling can be helpful in applications related to microwave generation with high-frequency enhancement and magnetic memory devices. § ACKNOWLEDGEMENT The works of V.K.C. and R. G are supported by the DST-SERB-CRG Grant No. CRG/2020/004353 and they wish to thank DST, New Delhi for computational facilities under the DST-FIST programme (SR/FST/PS-1/2020/135) to the Department of Physics. M.L. wishes to thank the Department of Science and Technology for the award of a DST-SERB National Science Chair under Grant No. NSC/2020/00029 in which R. Arun is supported by a Research Associateship. 53 slon J.C. Slonczewski, Phys. Rev. B 39, 6995 (1989); J. Magn. Magn.Mater. 159, L1 (1996). berger L. Berger, Phys. Rev. B 54, 9353 (1996). slon1 J. C. Slonczewski, Phys. Rev. B 71, 024411 (2005). katine J. A. Katine, F. J. Albert, R. A. Buhrman, E. B. Myers, and D. C. Ralph, Phys. Rev. Lett. 84, 3149 (2000). deng K. Deng, X. Li, and B. Flebus, Phys. Rev. B 107, L100402 (2023). yama T. Yamaguchi, S. Tsunegi, K. Nakajima, and T. Taniguchi, Phys. Rev. B 107, 054406 (2023). liu C. Liu, Y. Kurokawa, N. Hashimoto, T. Tanaka, and H. Yuasa, Scientific Reports 13, 3631 (2023). imai Y. Imai, S. Tsunegi, K. Nakajima, and T. Taniguchi, Phys. Rev. B 105, 224407 (2022). wu H. T. Wu, Lei Wang, Tai Min, and X. R. Wang, Phys. Rev. B 104, 014305 (2021). wolba B. Wolba, O. Gomonay, and V. P. Kravchuk, Phys. Rev. B 104, 024407 (2021) eklund A. J. Eklund, M. Dvornik, F. Qejvanaj, S. Jiang, S. Chung, J. Akerman, and B. G. Malm, Phys. Rev. B 103, 214433 (2021). jiang B. Jiang aand W. Zhang, Journal of Magnetism and Magnetic Materials 490, 165470 (2019). luba M. V. Lubarda, M. Kuteifan, C.-H. Lambert, V. Lomakin, E. E. Fullerton, and S. Mangin, Phys. Rev. B 102, 014405 (2020). groll J. Grollier, V. Cros, A. Hamzic, J. M. George, H. Jaffres, A. Fert, G. Faini, J. B. Youssef, and H. Legall, Appl. Phys. Lett. 78, 3663 (2001). parkin S. S. P. Parkin, N. More, and K. P. Roche, Phys. Rev. Lett. 64, 2304 (1990). ungu J. Unguris, R. J. Celotta, and D. T. Pierce, J. Appl. Phys. 75, 6437 (1994). naga T. Nagasima, A. Oguri, and H. Ishii, Journal of Magnetism and Magnetic Materials 177-181, 1008 (1998). moser A. Moser, A. Berger, D. T. Margulies, and Eric E. Fullerton, Phys. Rev. Lett. 91, 097203 (2003). gri S. V. Grigoriev, D. Lott, Yu. O. Chetverikov,A. T. D. Grunwald, R. C. C. Ward, and A. Schreyer Phys. Rev. B 82, 195432 (2010). mck T. McKinnon and E. Girt, App. Phys. Lett 113, 192407 (2018). duine R. A. Duine, K. J. Lee, S. S. P. Parkin, and M. D. Stiles, Nat. Phys. 14, 217 (2018). nunn Z. R. Nunn, C. Abert, D. Suess, E. Girt, Sci. Adv. 6, eabd8861 (2020). cuc L. Cuchet, B. Rodmacq, S. Auffret, R. C. Sousa, I. L. Prejbeanu, and B.Dieny, Sci. Rep. 6, 21246 (2016). rana B. Rana and Y. Otani, Commun. Phys. 2, 90 (2019). harris V. G. Harris, IEEE Trans. Magn. 48, 1075 (2012). meena J. S. Meena, S. M. Sze, U. Chand, and T.-Y. Tseng, Nanoscale Res. Lett. 9, 526 (2014). grim E. Grimaldi, A. Dussaux, P. Bortolotti, J. Grollier, G. Pillet, A. Fukushima, H. Kubota, K. Yakushiji, S. Yuasa, and V. Cros, Phys. Rev. B 89, 104404 (2014). tsu S. Tsunegi, H. Kubota, K. Yakushiji, M. Konoto, S. Tamaru, A. Fukushima, H. Arai, H. Imamura, E. Grimaldi, R. Lebru, Appl. Phys. Express 7, 063009 (2014). arun R. Arun, R. Gopal, V. K. Chandrasekar, and M. Lakshmanan, J. Appl. Phys. 132, 094301 (2022). kuro Y. Kurokawa, K. Yamada, T. Taniguchi, S. Horiike, T. Tanaka, and H. Yuasa, Sci. Rep. 12, 10849 (2022). ishi Y. 
Ishikuro, M. Kawaguchi, T. Taniguchi and M. Hayashi, Phys. Rev. B 101, 014404 (2020). alt B. Altuncevahir and A.R. Koymen J. Magn. Magn. Mater. 261, 424 (2003). mish S. K. Mishra, F. Radu, H. A. Dürr, and W. Eberhardt, Phys. Rev. Lett. 102, 177208 (2009). gusa D. Gusakova, D. Houssameddine, U. Ebels, B. Dieny, L. Buda-Prejbeanu, M. C. Cyrille, and B. Delaet, Phys. Rev. B 79, 104406 (2020). vol I. Volvach, A.D. Kent, E.E. Fullerton, and V. Lomakin, Phys. Rev. Applied 18, 024071 (2022). vigo H. Vigo-Cotrina and A.P. Guimaraes, J. Magn. Magn. Mater. 497, 166009 (2020). roy P. E. Roy, R. M. Otxoa, and J. Wunderlich Phys. Rev. B 94, 014439 (2016). fira I. Firastrau, L. D. Buda-Prejbeanu, B. Dieny, and U. Ebels, J. Appl. Phys. 113, 113908 (2013). khym R. Khymyn, I. Lisenkov, V. Tiberkevich, B.  A. Ivanov, and A. Slavin, Sci. Rep. 7, 43705 (2017). Loca N. Locatelli, V. Cros and J. Grollier, Nat. Mat. 13, 11 (2014). Ral D. C. Ralph and M. D. Stiles, J. Magn. Magn. Mater. 320, 1190 (2008). Tudu B. Tudu and A. Tiwari, Vaccum 146, 329 (2017).
http://arxiv.org/abs/2306.06730v1
20230611174050
Critical branching processes in a sparse random environment
[ "Dariusz Buraczewski", "Congzao Dong", "Alexander Iksanov", "Alexander Marynych" ]
math.PR
[ "math.PR", "Primary: 60J80, secondary: 60F05" ]
Dariusz Buraczewski, Mathematical Institute, University of Wroclaw, 50-384 Wroclaw, Po­land [email protected] Congzao Dong, School of Mathematics and Statistics, Xidian University, 710126 Xi'an, China [email protected] Alexander Iksanov, Faculty of Computer Science and Cybernetics, Taras Shev­chen­ko National University of Kyiv, 01601 Kyiv, Ukraine [email protected] Alexander Marynych, Faculty of Computer Science and Cybernetics, Taras Shev­chen­ko National University of Kyiv, 01601 Kyiv, Ukraine [email protected] We introduce a branching process in a sparse random environment as an intermediate model between a Galton–Watson process and a branching process in a random environment. In the critical case we investigate the survival probability and prove Yaglom-type limit theorems, that is, limit theorems for the size of population conditioned on the survival event. [2020]Primary: 60J80; secondary: 60F05 Critical branching processes in a sparse random environment Alexander Marynych =========================================================== § INTRODUCTION AND MAIN RESULTS The branching process is a random process starting with one individual, the initial ancestor, which produces offspring according to some random rule. The collection of offspring constitutes the first generation. Each individual of the first generation gives birth to a random number of children with the same offspring distribution as for the initial ancestor. The numbers of offspring of different individuals (including the initial ancestor) are independent. This process continues forever or until the population dies out. An interesting problem is the behavior of the long-time evolution of the process. Plainly, it depends on a particular rule that regulates giving birth to offspring. In the simplest case, when the offspring distribution is the same for all generations, the branching process is called the Galton–Watson process. We refer to <cit.> for numerous results concerning, for instance, long-term survival or extinction of such a process, the growth rate of the population, fluctuations of population sizes. Thanks to a simple tree structure, not only does the Galton–Watson process find numerous applications as a model of biological reproduction processes, but also in many other fields including computer science and physics. The homogeneity of the Galton–Watson process reduces its applicability. In some cases it may happen that the population evolution conditions change randomly over time. We intend to study here branching processes in a randomly perturbed environment, in which homogeneity of the environment is modified on a sparse subset of . To give a precise definition, let μ be a fixed probability measure on _0 and = ((d_k,ν_k))_k≥ 1 a sequence of independent copies of a random vector (d,ν), where d is a positive integer-valued random variable and ν is a random measure on _0 independent of d. First we choose a subset of integers marked by the positions of a standard random walk (S_k)_k≥ 0 defined by S_0=0, S_k = ∑_j=1^k d_j, k∈ℕ, and then we impose random measures at the marked sites. The branching process in sparse random environment (BPSRE) is formally defined as follows: Z_0=1, Z_n+1=∑_j=1^Z_nξ^(n)_j, n∈ℕ_0:={0,1,2,…}, where, if n = S_k for some k∈ℕ, then, given , ξ^(n)_j are independent random variables with the common distribution ν_k+1, which are also independent of Z_n. Otherwise, if n ∉{S_0,S_1,S_2,…}, then ξ^(n)_j are independent random variables with the common distribution μ, which are also independent of Z_n. 
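To fix ideas, here is a minimal simulation sketch of one trajectory of (Z_n)_n≥ 0. The offspring laws are illustrative choices, not taken from the paper: μ is the critical geometric law on ℕ_0 with mean 1, ν is Poisson with random mean A = exp(N(0,1)) so that the criticality assumption on log A in (A1) holds, d is uniform on {1,…,5}, and the indexing of the environment at the marked sites is simplified.

```python
import numpy as np

def simulate_bpsre(n, rng):
    """One trajectory (Z_0, ..., Z_n) of the branching process in a sparse
    random environment, with illustrative offspring laws (see text above)."""
    # marked generations S_1 < S_2 < ... <= n and the random Poisson means there
    marks, S = {}, 0
    while True:
        S += int(rng.integers(1, 6))        # S_k = S_{k-1} + d_k, d_k uniform on {1,...,5}
        if S > n:
            break
        marks[S] = np.exp(rng.normal())     # A_k = exp(N(0,1)), so E[log A_k] = 0
    Z, sizes = 1, [1]
    for gen in range(n):
        if Z == 0:
            sizes.append(0)
            continue
        if gen in marks:                    # sparse random environment: Poisson(A_k)
            Z = int(rng.poisson(marks[gen], size=Z).sum())
        else:                               # fixed critical law mu: geometric, mean 1
            Z = int((rng.geometric(0.5, size=Z) - 1).sum())
        sizes.append(Z)
    return sizes

rng = np.random.default_rng(2023)
print(simulate_bpsre(50, rng))
```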
The process (Z_n)_n≥ 0 behaves like the Galton–Watson process, with the exception of some randomly chosen generations in which the offspring distribution is random. The BPSRE that we are going to investigate here is an intermediate model between the branching process in random environment (BPRE) introduced by Smith and Wilkinson <cit.> and the Galton–Watson process. The BPRE is a population growth process, in which the individuals reproduce independently of each other with the offspring distribution picked randomly at each generation. More precisely, let ν be a random measure on the set of positive integers . Then a sequence (ν_n )_n ≥ 1 of independent copies of ν can be interpreted as a random environment. The BPRE is then the sequence Z'=(Z'_n)_n ≥ 0 defined by the recursive formula Z'_n+1=∑_k=1^Z'_nξ^(n)_k, where, given (ν_n )_n ≥ 1, (ξ^(n)_k)_k≥ 1 are independent identically distributed (iid) and independent of Z'_n with the common distribution ν_n+1. We refer to the recent monograph by Kersting and Vatutin <cit.> for an overview of fundamental properties of this process. We intend to describe how the additional randomness of the environment affects the behavior of the BPSRE. To this end, we focus on Yaglom-type results. For the Galton–Watson process in the critical case, that is, when the expected number of offspring is 1 (see (A2) below), it is known that the probability of survival up to the generation n is of the order 1/n and the population size conditioned to the survival set converges weakly to an exponential distribution (section 9 in <cit.>). In contrast, in the critical case for the BPRE, that is, when the expectation of the logarithm of the number of offspring is 0 (see (A1)), the probability of survival up to the generation n is asymptotically 1/√(n), and the process conditioned to the survival event converges weakly to a Rayleigh distribution. We prove below in Theorems <ref> and <ref> that, although the environment is random on a sparse subset only, the behavior the BPSRE reminds that of a BPRE. To close the introduction, we mention that closely related random walks in a sparse random environment, which is an intermediate model between the simple random walk and the random walk in a random environment, have been recently investigated in <cit.>. §.§ Notation and assumptions Given a deterministic or random probability measure θ on ℕ_0, define the generating function f_θ(s) = ∑_j=0^∞ s^j θ({j}), |s|≤ 1. Denote by A_θ:= f_θ^'(1) = ∑_j=1^∞ j θ({j}) its mean and by σ_θ:= f”_θ(1)/(f'_θ(1))^2 = 1/A^2_θ∑_j=2 ^∞ j(j-1)θ({j}) its normalized second factorial moment. We shall also use a standardized truncated second moment defined by κ(f_θ;a):=1/A^2_θ∑_j=a^∞j^2θ({j}), a∈ℕ_0. To simplify our notation we shall write, for k≥ 1, A_k and σ_k instead of A_ν_k and σ_ν_k, respectively. Thus, in our setting (A_k)_k≥ 1 and (σ_k)_k≥ 1 are two (dependent) sequences of iid random variables. As usual, x^+=max (x,0) and x^-=max (-x,0) for x∈ℝ. Throughout the paper we impose the following assumptions: (A1) log A_1=0, 𝔳^2:= (log A_1)∈ (0,∞) and (log^- A_1)^4<∞; (A2) A_μ=1; (A3) d_1^3/2<∞ and we put := d_1; (A4) (log^+κ(f_ν;a))^4<∞ for some a∈ℕ. §.§ Main results Let ∈ (0,∞] be the extinction time of (Z_n)_n≥ 0, that is, :=inf{k≥ 0:Z_k=0}. The following observation is almost immediate. Under the assumptions (A1)-(A2), {<∞}=1. Our first main result is concerned with the tail behavior of ℙ{>n}=ℙ{Z_n>0} as n→∞. Assume (A1)-(A4). Then there exists ∈ (0,∞) such that lim_n→∞√(n)ℙ{>n}=. 
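Before turning to the functional limit theorem, the √n decay above can be illustrated numerically with the simulation sketch given after the definition of the model; the following rough Monte Carlo estimate (purely illustrative, and in no way part of the proof) should show √n ℙ{Z_n>0} stabilizing as n grows.

```python
import numpy as np
# reuses simulate_bpsre and rng from the sketch above

def survival_probability(n, trials, rng):
    """Crude Monte Carlo estimate of P{Z_n > 0}."""
    return sum(simulate_bpsre(n, rng)[-1] > 0 for _ in range(trials)) / trials

for n in (100, 400, 1600):
    p = survival_probability(n, trials=5_000, rng=rng)
    print(n, round(np.sqrt(n) * p, 3))      # roughly constant for large n, up to noise
```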
Our next result is a Yaglom-type functional limit theorem for the process (Z_n). Recall that a Brownian meander, see <cit.>, is a stochastic process (B_+(t))_t∈ [0,1] defined as follows. Let (B(t))_t∈ [0,1] be a standard Brownian motion and ζ:=sup{t ∈ [0,1]:B(t)=0} be its last visit to 0 on [0,1]. Then B_+(t)=1/√(1-ζ)|B(ζ+t(1-ζ))|, t∈ [0,1]. Assume (A1)-(A4). Then with (B_+(t))_t∈ [0,1] being a Brownian meander Law((log Z_⌊ nt⌋/𝔳√(^-1n))_t∈ [0,1] | Z_n>0) ⟹  Law((B_+(t))_t∈ [0,1]), n→∞, weakly on the space of probability measures on D[0,1] endowed with the Skorokhod J_1-topology. Using formula (1.1) in <cit.> we obtain the following one-dimensional result. Assume (A1)-(A4). Then, for every fixed t∈ (0,1], lim_n→∞ℙ{log Z_⌊ nt⌋/𝔳√(^-1n)≥ x | Z_n>0}=ℙ{B_+(t)≥ x}, x≥ 0. The random variable B_+(t) has an absolutely continuous distribution with a bounded nonvanishing density on [0,∞). Furthermore, ℙ{B_+(1)≤ x}=1-e^-x^2/2, x≥ 0, so B_+(1) has a Rayleigh distribution. The assumption (A4) and the last part of the assumption (A1) can be weakened without changing the formulations of the main results. A version of (A4) appears as Assumption (C) in <cit.>. It is a convenient general condition allowing for an (asymptotically) closed form of the survival probability and also validity of a functional limit theorem for a critical branching process in iid random environment. A more general version of Assumption (C) can be found in <cit.>. However, we prefer to sacrifice generality in favor of transparency and simplicity of the formulations. § PROOFS The proof of our main results consists of three steps. First, we analyze an embedded process (Z_S_n)_n≥ 0 by finding its survival asymptotic and proving a counterpart of Theorem <ref>. Second, we deduce from the results obtained for (Z_S_n)_n≥ 0 the corresponding statements for a randomly stopped process (Z_S_ϑ(n))_n≥ 0, where (ϑ(n))_n≥ 0 is the first passage time process for the random walk (S_k)_k≥ 0. At the last step, we show that (Z_S_ϑ(n))_n≥ 0 is uniformly close to (Z_n)_n≥ 0. §.§ Analysis of the embedded process Observe that (Z_S_k)_k≥ 0 is a branching process in iid random environment ℚ=(ν_k)_k≥ 1 which can be explicitly described as follows. Let ((Z^(i)_j)_j≥ 0)_i≥ 0 be a sequence of independent copies of a critical Galton–Watson process (Z_j)_j≥ 0 in deterministic environment with the offspring distribution μ and Z_0=1. Suppose that ((Z^(i)_j)_j≥ 0)_i≥ 0 is independent of the environment ℚ. Then ν_k({j})=∑_l=0^∞ν_k({l})ℙ{∑_i=1^lZ^(i)_d_k-1=j}, k,j∈ℕ_0. Let ν be a generic copy of iid random measures (ν_k)_k≥ 1. Put g(s):= s^Z_d-1, |s|≤ 1, where d is assumed independent of (Z_k)_k≥ 0. Equality  (<ref>) entails that the generating function of the random measure ν is given by f_ν(s)=f_ν(g(s)), |s|≤ 1. Since g^'(1)=Z_d-1=1, the latter formula immediately implies that A_ν_k=f^'_ν_k(1)=f^'_ν_k(1)=A_ν_k=A_k, k∈ℕ_0. Further, σ_ν_k=f^''_ν_k(1)/(f^'_ν_k(1))^2=f^''_ν_k(1)+f^'_ν_k(1)g^''(1)/(f^'_ν_k(1))^2=σ_ν_k+σ_μ( d-1)/A_ν_k=σ_k+σ_μ( d-1)/A_k, k∈ℕ_0, where we have used that g^''(1)=Z_d-1(Z_d-1-1)=σ_μ( d-1), see, for instance, Chapter I.2 in <cit.> for the last equality. Note that (<ref>) guarantees that log A_ν_1 = log A_ν_1 = log A_1 = 0, which means that the embedded process (Z_S_n)_n≥ 0 is critical. In particular, :=inf{k≥ 0:Z_S_k=0}<∞ a.s. Recall that we denote by κ(f_θ;a) the truncated second moment of a measure θ. Let a_∗∈ℕ_0 and assume that (log^+κ(f_ν;a_∗))^4<∞ and (log^- A_ν)^4<∞. Then (log^+κ(f_ν;a_∗))^4<∞. 
We start by writing κ(f_ν;a)=1/A^2_ν∑_j=a^∞j^2∑_l=0^∞ν({l})ℙ{∑_i=1^lZ^(i)_d-1=j} =1/A_ν^2∑_l=0^∞ν({l}) ((∑_i=1^lZ^(i)_d-1)^2 _{∑_i=1^lZ^(i)_d-1≥ a}). In view of (<ref>), for all a∈ℕ, ((∑_i=1^lZ^(i)_d-1)^2 _{∑_i=1^lZ^(i)_d-1≥ a})≤(∑_i=1^lZ^(i)_d-1)^2 ≤ C_1 m l^2, where m = d and C_1>0 is a constant. Thus, it suffices to check that (log^+1/A_ν^2∑_l=0^a_∗ν({l}) ((∑_i=1^lZ^(i)_d-1)^2 _{∑_i=1^lZ^(i)_d-1≥ a_∗}))^4<∞. The inner expectation is equal to 0 if l=0 and is uniformly bounded by a constant C_2>0 for all l=1,…,a_∗. It remains to note that (log^+C_2/A_ν^2∑_l=1^a_∗ν({l}))^4≤(log^+C_2/A_ν^2∑_l=1^∞lν({l}))^4≤(log^+C_2/A_ν)^4 ≤ C_3 (log^+1/A_ν)^4+C_4 =C_3 (log^- A_ν)^4+C_4<∞ for some C_3>0 and C_4≥ 0. Using Theorem 5.1 on p. 107 in <cit.> we obtain the following result. Assume (A1), (A2), (A4) and d<∞. Then ℙ{Z_S_n>0} ∼ /√(n), n→∞ for some constant >0. Furthermore, Theorem 5.6 on p. 126 in <cit.> entails the proposition. Assume (A1), (A2), (A4) and d<∞ and (κ(f_ν;a))^4<∞ for some a∈ℕ_0. Then, with (B_+(t))_t∈ [0,1] being the Brownian meander, Law((log Z_S_⌊ nt⌋/𝔳√(n))_t∈ [0,1] | Z_S_n>0) ⟹  Law((B_+(t))_t∈ [0,1]), n→∞ weakly on the space of probability measures on D[0,1] endowed with the Skorokhod J_1-topology. Given next is the corollary which follows from formula (1.1) in <cit.>. Under the assumptions of Proposition <ref>, for every fixed t∈ (0,1], lim_n→∞ℙ{log Z_S_⌊ nt⌋/𝔳√(n)≥ x | Z_S_n>0}=ℙ{B_+(t)≥ x}, x≥ 0. The random variable B_+(t) has an absolutely continuous distribution with a bounded nonvanishing density on [0,∞). Propositions <ref> and <ref> are the key ingredients for the proof of our main results. §.§ Proof of Proposition <ref> and Theorem <ref> Recall that =inf{k≥ 0:Z_S_k=0} is the extinction time of the embedded process (Z_S_k)_k≥ 0 and note that ℙ{<∞}≥ℙ{<∞}=1, where the equality is justified by  (<ref>). This proves Proposition <ref>. For n∈ℕ_0, define the first passage time ϑ(n) by ϑ(n):=inf{k≥ 0:S_k > n}. Note that ℙ{Z_S_ϑ(n)>0}≤ℙ{Z_n>0}≤ℙ{Z_S_ϑ(n)-1>0}, n∈ℕ_0. In view of the strong law of large numbers for ϑ(n), which reads ϑ(n)/n → 1/ m, n→∞, a.s., and Proposition <ref>, it is natural to expect that ℙ{Z_S_ϑ(n)>0} ∼  m^1/2/√(n) ∼ ℙ{Z_S_ϑ(n)-1>0}, n→∞. Checking relation (<ref>) is clearly sufficient for a proof of Theorem <ref>. Furthermore, (<ref>) would demonstrate that =^1/2. Observe that ℙ{Z_S_ϑ(n)>0}=ℙ{>ϑ(n)}=ℙ{-1≥ϑ(n)}=ℙ{S_-1>n}, and, similarly, ℙ{Z_S_ϑ(n)-1>0}=ℙ{>ϑ(n)-1}=ℙ{≥ϑ(n)}=ℙ{S_>n}. The desired relation (<ref>) follows from Theorem 3.1 in <cit.> applied with r=3/2 provided we can check that nℙ{d>n}=o(ℙ{>n})=o(ℙ{Z_S_n>0}), n→∞. By Proposition  <ref>, this is equivalent to ℙ{d>n}=o(n^-3/2), n→∞, which is secured by assumption (A3). This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> We start by noting that <∞ together with the strong law of large numbers for (ϑ(n)) imply sup_t∈ [0,1]|ϑ(⌊ nt⌋)-1/n-t/| → 0, n→∞ Thus, the weak convergence claimed in Proposition <ref> can be strengthened to the joint convergence Law(((log Z_S_⌊^-1nt⌋/𝔳√(^-1 n),ϑ(⌊ nt⌋)-1/^-1n)_t∈ [0,1] | Z_S_⌊^-1n⌋>0))  ⟹  Law((B_+(t),t)_t∈ [0,1]), n→∞, which holds weakly on the space of probability measures on D[0,1]× D[0,1] endowed with the product J_1-topology. Using the continuous mapping theorem in combination with continuity of the composition (see, for instance, Theorem 13.2.2 in <cit.>) we infer Law(((log^+ Z_S_ϑ(⌊ nt⌋)-1/𝔳√(^-1n))_t∈ [0,1] | Z_S_⌊^-1n⌋>0)) ⟹  Law((B_+(t))_t∈ [0,1]), n→∞ weakly on the space of probability measures on D[0,1]. 
We have replaced log by log^+ in (<ref>) because the event {Z_S_⌊^-1n⌋>0} does not entail the event {Z_S_ϑ(⌊ nt⌋)-1>0 for all t∈ [0,1]}. Now we check that (<ref>) secures Law(((log Z_S_ϑ(⌊ nt⌋)-1/𝔳√(^-1n))_t∈ [0,1] | Z_n>0)) ⟹  Law((B_+(t))_t∈ [0,1]), n→∞. By Proposition <ref>, Theorem <ref> and (<ref>), ℙ{Z_n>0} ∼ ℙ{Z_S_⌊^-1n⌋>0} ∼ /√(n), n→∞. Thus, limit relation (<ref>) follows once we can prove that lim_n→∞√(n)ℙ{Z_S_⌊^-1n⌋>0,Z_n=0}=lim_n→∞√(n)ℙ{Z_S_⌊^-1n⌋=0,Z_n>0}=0. In view of (<ref>), it suffices to show that lim_n→∞ℙ{Z_n>0 | Z_S_⌊^-1n⌋>0}=1. Fix any ε>0. The assumption (A3) implies that ℙ{|S_n- n|≥ε n}=o(n^-1/2), n→∞, by Theorem 4 in <cit.>. Thus, ℙ{Z_n>0 | Z_S_⌊^-1n⌋>0} =ℙ{Z_n>0, S_⌊^-1(1+ε)n⌋>n | Z_S_⌊^-1n⌋>0}+o(1) ≥ℙ{Z_S_⌊^-1(1+ε)n⌋>0, S_⌊^-1(1+ε)n⌋>n | Z_S_⌊^-1n⌋>0}+o(1) =ℙ{Z_S_⌊^-1(1+ε)n⌋>0 | Z_S_⌊^-1n⌋>0}+o(1) =ℙ{Z_S_⌊^-1(1+ε)n⌋>0}/ℙ{Z_S_⌊^-1n⌋>0}+o(1) → (1+ε)^-1/2, n→∞, where we have used Proposition <ref> for the last passage. Sending ε→ 0 gives (<ref>). To finish the proof of Theorem <ref> it remains to check that, for all ε>0, lim_n→∞ℙ{sup_t∈ [0,1]|log Z_⌊ nt⌋-log Z_S_ϑ(⌊ nt⌋)-1/𝔳√(^-1n)|>ε | Z_n>0}=0. To this end, we need an auxiliary lemma. Assume (A2), d<∞ and that d is independent of (Z_j)_j≥ 0. Then (max_0≤ k≤ dZ_k)≤ 1+ d<∞. The proof follows from the chain of inequalities (max_0≤ k≤ dZ_k) ≤𝔼(∑_k≥ 0Z_k_{d≥ k})=∑_k≥ 0Z_k·ℙ{d≥ k}=1+ d. In order to prove (<ref>) we first show that lim_n→∞ℙ{sup_t∈ [0,1]log Z_⌊ nt⌋-log Z_S_ϑ(⌊ nt⌋)-1/𝔳√(^-1n)>ε | Z_n>0}=0. Note that Z_⌊ nt⌋=∑_j=1^Z_S_ϑ(⌊ nt⌋)-1Z^(j)_⌊ nt⌋-S_ϑ(⌊ nt⌋)-1(S_ϑ(⌊ nt⌋)-1), t∈ [0,1], n∈ℕ, where (Z^(j)_k(m))_k≥ 0 is the Galton–Watson process initiated by the j-th individual in the generation m. On the event {Z_n>0}, Z_⌊ nt⌋/Z_S_ϑ(⌊ nt⌋)-1=∑_j=1^Z_S_ϑ(⌊ nt⌋)-1Z^(j)_⌊ nt⌋-S_ϑ(⌊ nt⌋)-1(S_ϑ(⌊ nt⌋)-1)/Z_S_ϑ(⌊ nt⌋)-1, t∈ [0,1], n∈ℕ, and thereupon sup_t∈ [0,1]Z_⌊ nt⌋/Z_S_ϑ(⌊ nt⌋)-1≤sup_1≤ k≤ϑ(n)∑_j=1^Z_S_k-1max_0≤ i≤ d_kZ^(j)_i(S_k-1)/Z_S_k-1, n∈ℕ. Instead of (<ref>), we shall prove a stronger relation lim_n→∞ℙ{∑_1≤ k≤ϑ(n)∑_j=1^Z_S_k-1max_0≤ i≤ d_kZ^(j)_i(S_k-1)/Z_S_k-1>ε n^3 | Z_n>0}=0. By Markov's inequality in combination with ℙ{Z_n>0}≥ (1/C_5) n^-1/2 for some C_5>0 and large n. ℙ{∑_1≤ k≤ϑ(n)∑_j=1^Z_S_k-1max_0≤ i≤ d_kZ^(j)_i(S_k-1)/Z_S_k-1>ε n^3 | Z_n>0} ≤ε^-1 n^-3∑_k=1^∞( _{S_k-1≤ n}1/Z_S_k-1∑_j=1^Z_S_k-1max_0≤ i≤ d_kZ^(j)_i(S_k-1) | Z_n>0) ≤ C_5 ε^-1 n^-5/2∑_k=1^∞( _{S_k-1≤ n, Z_n>0}1/Z_S_k-1∑_j=1^Z_S_k-1max_0≤ i≤ d_kZ^(j)_i(S_k-1)) ≤ C_5 ε^-1 n^-5/2∑_k=1^∞( _{S_k-1≤ n, Z_S_k-1>0}1/Z_S_k-1∑_j=1^Z_S_k-1max_0≤ i≤ d_kZ^(j)_i(S_k-1)) =C_5 ε^-1 n^-5/2(max_0≤ i≤ dZ_i)∑_k=1^∞ℙ{S_k-1≤ n}=O(n^-3/2), n→∞. To justify the penultimate equality, observe that, given (Z_S_k-1,S_k-1), the sequences (Z^(1)_i(S_k-1))_i≥ 0,…,(Z^(Z_S_k-1)_i(S_k-1))_i≥ 0 are independent copies of the critical Galton–Watson process (Z_i)_i≥ 0. The last equality is a consequence of Lemma <ref> and the elementary renewal theorem which states that ∑_k=1^∞ℙ{S_k-1≤ n}=ϑ (n) ∼ n/, n→∞. We shall now check that lim_n→∞ℙ{inf_t∈ [0,1]log Z_⌊ nt⌋-log Z_S_ϑ(⌊ nt⌋)-1/𝔳√(^-1n)<-ε | Z_n>0}=0. 
Using again decomposition (<ref>), we write on the event {Z_n>0} inf_t∈ [0,1]Z_⌊ nt⌋/Z_S_ϑ(⌊ nt⌋)-1 =inf_t∈ [0,1]∑_j=1^Z_S_ϑ(⌊ nt⌋)-1Z^(j)_⌊ nt⌋-S_ϑ(⌊ nt⌋)-1(S_ϑ(⌊ nt⌋)-1)/Z_S_ϑ(⌊ nt⌋)-1 ≥inf_1≤ k≤ϑ(n)∑_j=1^Z_S_k-1min_0≤ i≤ d_kZ^(j)_i(S_k-1)/Z_S_k-1 ≥inf_1≤ k≤ϑ(n)∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0}/Z_S_k-1, n∈ℕ, As in the proof of (<ref>), we shall prove a relation which is stronger than (<ref>), namely, lim_n→∞ℙ{inf_1≤ k≤ϑ(n)∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0}/Z_S_k-1 < ε n^-3 | Z_n>0}=0. Since ℙ{Z_S_ϑ(n)>0}∼ℙ{Z_n>0} as n→∞, by  (<ref>), and {Z_S_ϑ(n)>0} entails {Z_n>0}, relation (<ref>) is equivalent to lim_n→∞ℙ{inf_1≤ k≤ϑ(n)∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0}/Z_S_k-1 < ε n^-3 | Z_S_ϑ(n)>0}=0. Observe that on the event {Z_S_ϑ(n)>0} ∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0}>0, k≤ϑ(n), since otherwise the population does not survive up to time S_ϑ(n). Using this and the union bound yields ℙ{inf_1≤ k≤ϑ(n)1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3 | Z_S_ϑ(n)>0} ≤∑_k≥ 1ℙ{0<1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3,k≤ϑ(n) | Z_S_ϑ(n)>0}. Invoking ℙ{Z_S_ϑ(n)>0}≥ (1/C_6) n^-1/2 for some C_6>0 and large n, we obtain, for such n, ∑_k≥ 1ℙ{0<1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3,k≤ϑ(n) | Z_S_ϑ(n)>0} ≤ C_6 n^1/2∑_k≥ 1ℙ{0<1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3,k≤ϑ(n), Z_S_ϑ(n)>0} = C_6 n^1/2∑_k≥ 1ℙ{0<1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3,S_k-1≤ n, Z_S_ϑ(n)>0} ≤ C_6 n^1/2∑_k≥ 1ℙ{0<1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3,S_k-1≤ n, Z_S_k-1>0}. Let p:=ℙ{Z_d>0} be the probability of the event that the critical Galton–Watson process (Z_k)_k≥ 0 survives up to random time d independent of (Z_k)_k≥ 0. Obviously, p∈ (0,1). Given (S_k-1,Z_S_k-1), the sum ∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} has a binomial distribution with parameters (Z_S_k-1,p). In what follows we denote by Bin(N,p) a random variable with a binomial distribution with N and p interpreted as the number of independent trials and a success probability, respectively. The next lemma provides a uniform in N estimate for ℙ{0 < N^-1 Bin(N,p)≤ x}, which is useful when x is close to zero. For all N∈ℕ and x∈ (0,p), ℙ{0<N^-1 Bin(N,p)≤ x}≤p(1-p)x/(p-x)^2. Plainly, ℙ{0<N^-1 Bin(N,p)≤ x}=0 if x<1/N. If x≥ 1/N, then by Chebyshev's inequality ℙ{0<N^-1 Bin(N,p)≤ x}≤ℙ{ Bin(N,p)≤ Nx} = ℙ{ Bin(N,1-p)-N(1-p)≥ N(p-x)}≤ p(1-p)/(p-x)^21/N≤p(1-p)/(p-x)^2x. Using Lemma <ref> we estimate the summands in (<ref>) as follows. For k≥ 1 and n large enough, ℙ{0<1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3,S_k-1≤ n, Z_S_k-1>0} =ℙ{0<1/Z_S_k-1∑_j=1^Z_S_k-1_{Z^(j)_d_k(S_k-1)>0} < ε n^-3 | S_k-1≤ n, Z_S_k-1>0}ℙ{S_k-1≤ n, Z_S_k-1>0} ≤p(1-p)ε/(p-ε n^-3)n^-3ℙ{S_k-1≤ n, Z_S_k-1>0}≤p(1-p)ε/(p-ε n^-3)n^-3ℙ{S_k-1≤ n}. Summarizing, the probability on the left-hand side of (<ref>) is bounded from above by C_6 n^1/2p(1-p)ε/(p-ε n^-3)n^-3∑_k≥ 1ℙ{S_k-1≤ n}=O(n^-3/2), n→∞, thereby finishing the proof of (<ref>) and Theorem <ref>. Acknowledgement The research was supported by the High Level Talent Project DL2022174005L of Ministry of Science and Technology of PRC. plain
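Remark (numerical illustration; ours, not part of the argument). The Rayleigh limit ℙ{B_+(1)≤ x}=1-e^{-x^2/2} of the corollary above can be checked empirically. The sketch below treats only the special case d ≡ 1 (environment refreshed every generation) with geometric offspring laws whose conditional mean A satisfies log A ∼ N(0,𝔳^2), so that the process is critical; the parameter values, the use of NumPy and the negative-binomial shortcut for summing geometric offspring are our own illustrative choices and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_gen, n_paths, v = 256, 100_000, 0.5   # generations, populations, std of log A

Z = np.ones(n_paths, dtype=np.int64)
for _ in range(n_gen):
    # fresh iid environment per generation and per path: geometric offspring law
    # on {0,1,2,...} with success probability p = 1/(1+A), hence conditional mean
    # A = exp(v*N(0,1)) and E[log A] = 0 (critical case)
    A = np.exp(v * rng.standard_normal(n_paths))
    p = 1.0 / (1.0 + A)
    alive = Z > 0
    # the sum of Z iid geometric({0,1,...}) offspring is negative binomial
    Z[alive] = rng.negative_binomial(Z[alive], p[alive])

surv = Z > 0
x = np.log(Z[surv]) / (v * np.sqrt(n_gen))
for t in (0.5, 1.0, 1.5, 2.0):
    # empirical CDF of log Z_n/(v*sqrt(n)) given survival vs. the Rayleigh CDF
    print(t, np.mean(x <= t), 1.0 - np.exp(-t**2 / 2))

Since the survival probability decays like a constant times n^{-1/2}, convergence is slow and the agreement at moderate n is only qualitative.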
Categorical centers and Yetter-Drinfel`d-modules as 2-categorical (bi)lax structures Bojana Femić, Sebastian Halbig June 8, 2023 ========================================================================================== The bicategorical point of view provides a natural setting for many concepts in the representation theory of monoidal categories. We show that centers of twisted bimodule categories correspond to categories of 2-dimensional natural transformations and modifications between the deloopings of the twisting functors. We also show that dualities lift to centers of twisted bimodule categories. Inspired by the notion of (pre)bimonoidal functors due to McCurdy and Street and by the bilax functors of Aguiar and Mahajan, we study 2-dimensional functors which are simultaneously lax and colax with a compatibility condition. Our approach uses a sort of 2-categorical Yang-Baxter operator, but the idea could equally be carried out using a kind of 2-categorical braiding. We show how this concept, which we call a bilax functor, generalizes many known notions from the theory of Hopf algebras. We propose a 2-category of bilax functors whose 1-cells generalize the notions of Yetter-Drinfel`d modules in ordinary categories, and a type of bimonads and mixed distributive laws in 2-categories. We show that the 2-category of bilax functors from the trivial 2-category is isomorphic to the 2-category of bimonads, and that there is a faithful 2-functor from the latter to the 2-category of mixed distributive laws of Power and Watanabe. Keywords: center categories, bicategories, Yang-Baxter operators, bimonads, bimonoidal functors. 2020 MSC: 18N10, 18D25, 18M15. § INTRODUCTION intro The concept of the center of a monoid was categorified independently by Drinfel`d, Majid and Street in the 1990's. Since then it has been extensively studied in Hopf algebra and category theory, see for example <cit.> for an overview. One of its striking features comes from the fact that by passing from sets to categories one can replace the qualitative question `Do two elements commute with one another?' with a quantitative one: `How many suitably coherent (iso-)morphisms exist between the tensor product of two objects and its opposite?'. Such (iso-)morphisms are called half-braidings. The center of a monoidal category consists of objects of the underlying category equipped with fixed half-braidings, together with morphisms of the base category which satisfy a certain compatibility relation. The aim of the present paper is twofold. In the first part, we study the center construction from the bicategorical perspective. Our main motivation comes from the observation that monoidal categories can be identified with bicategories with a single object. This procedure, sometimes called delooping, establishes an equivalence between monoidal categories with monoidal functors and bicategories with a single object together with pseudofunctors. Following this line of thinking, we reveal by simple means the surprising and beautiful fact that colax natural transformations between lax functors among bicategories with a single object are nothing but the objects of the twisted Drinfel`d center of the corresponding codomain monoidal category. Accordingly, modifications of such colax transformations correspond to the morphisms in the Drinfel`d center, so that one has an isomorphism of categories. In particular, we obtain: Let be a monoidal category and write Del() for its delooping.
There exists a monoidal equivalence of categories between the Drinfel`d center of and the category of pseudonatural transformations and their modifications on the identitity functor of Del(). In fact, we formulate a general (weak) center category ^w(F,,G) for a -bimodule category and two lax monoidal functors F:→ and G:→ from a third monoidal category . In the case that ==, we interpret ^w(F,,G) from a bicategorical point of view. As this interpretation relies on delooping, in paricular on the fact that the monoidal product of becomes the composition of 1-cells, this approach does not allow for giving a bicategorical interpretation of ^w(F,,G) for general F and G. This task will be treated elsewhere. (The weakness corresponds to dealing with non-invertible half-braidings, while with strongness we allude to invertible ones. Accordingly, we differentiate left and right weak centers. ) We prove that left weak center categories form a bicategory ^w_l(,) which is isomorphic to the bicategory of suitable lax functors of bicategories. When is an autonomous 2-category, meaning that all its 1-cells have left and right adjoints, and is an autonomous monoidal category, then on pseudofunctors Del()→ the weak and strong center categories coincide (w=s). Furthermore, under these conditions the corresponding bicategory ^w ps_l(,) is autonomous (auton). This result provides a natural interpretation of duality notions between centers of compatible bimodule categories in <cit.>. Moreover, our bicategorical interpretation of center categories ^w_l(F,,G) which make up the bicategory ^w_l(,) encompasses also the Shimizu's bicategory (,) of tensor functors from <cit.> and the result thereof about duals. The above bicategory ^w_l(,) is a particular case of the bicategory _clx(,') of lax functors →' among bicategories, colax natural transformations and modifications. Taking the pseudo-pseudo version of the latter, we get the bicategory _ps(,') whose hom-categories are (,) for pseudofunctors ,:→'. When ==_ one recovers the center category () of the bicategory introduced in <cit.>. Our above bicategory of center categories alludes to the possibility to consider “twisted center categories of the bicategory '”. On the other hand, in the second part of the paper, we introduce and study 2-categorical functors which are simultaneously lax and colax with a compatibility relation involving a Yang–Baxter operator. We call them bilax functors and differentiate bilax functors with compatible Yang-Baxter operator. For monoidal categories such functors were studied under the name of pre-bimonoidal and bimonoidal functors in <cit.>, and (when the domain category is braided) bilax functors in <cit.>. We show that our bilax functors generalize a variety of notions and possess certain preservation properties: bialgebras in braided monoidal categories, bimonads in 2-categories (with respect to Yang-Baxter operators, YBO's), and preserve bimonads (w.r.t. YBO's), bimonads in 2-categories with respect to distributive laws from <cit.>, module comonads and comodule monads, and relative bimonad modules. Moreover, the component functors of a bilax functor on hom-categories factor through the category of Hopf bimodules (w.r.t. YBO's). The 2-categorical notions in italic letters are introduced in this paper and they generalize to 2-categories the same named notions in braided monoidal categories. We record that instead of working with Yang-Baxter operators, one could equally use local braidings, following the footsteps of <cit.>. 
In this case the generalization and preservation results somewhat differ from the ones that we obtained and that are listed above. We establish a 2-category of bilax functors (,') by introducing bilax natural transformations and bilax modifications. Accordingly, _c(,') denotes the 2-category of bilax functors with compatible Yang-Baxter operator. Bilax natural transformations are both lax and colax natural transformations satisfying a compatibility condition. As such they generalize bimonad morphisms from <cit.> and Yetter-Drinfel`d modules from braided monoidal categories. In the classical case, the category of Yetter-Drinfel`d modules over a bialgebra B is monoidally equivalent to the Drinfel`d center of the category _B of modules over the same bialgebra. The half-braidings in the left Drinfel`d center can be seen as colax natural transformations. In the category _B one can construct a lax natural transformation which together with the colax one makes a bilax natural transformation. (We explain this in more detail at the end of b.nat-tr.) This illustrates why in a general 2-category bilax natural transformations (and bilax modifications) generalize the category of Yetter-Drinfel`d modules, but not the (left) center category. Finally, we show that there is a 2-category isomorphism _c(1,)() and a faithful 2-functor ()↪(). Here () is the 2-category of bimonads from <cit.> and () is the 2-category of mixed distributive laws of <cit.>. The paper is composed as follows. We first give an overview of bicategories, deloopings, module and center categories. In section 3 we give a higher categorical interpretation of center categories and study when the bicategory of center categories is autonomous. Bilax functors and their properties are studied in section 4, while in the last section a 2-category of bilax functors is introduced and its relations to the 2-categories () and () is shown. § PRELIMINARIES: DELOOPINGS AND WEAK TWISTED CENTERS prelim We assume that the reader is familiar with the notion of a braided monoidal category and the corresponding notation of string diagrams (see e.g. <cit.>), as well as with the definition of a bicategory, for which we recommend <cit.>. In this section we give a short summary of bicategories, delooping bicategories, module categories and weak twisted centers. For a more extensive discussion of module categories we refer the reader to <cit.>. Briefly, a monoidal category consists of a category together with a suitably associative and unital multiplication ⊗×→ implemented by a functor which is called the tensor product. A `many object' generalization of monoidal categories is provided by bicategories. These can be thought of as higher dimensional categories with hom-categories between every pair of objects instead of mere sets. The objects of these hom-categories are called 1-cells and the morphisms 2-cells. Any bicategory admits two ways to compose: horizontal composition given by the composition functors ∘_Z,Y,X(Y,Z)×(X,Y) →(X, Z), for X,Y,Z∈ (objects of ), and vertical composition induced by the compositions inside the hom-categories. Instead of identity morphisms, every X∈ has a unit 1-cell 𝕀_X∈(X,X). In general, the horizontal composition of a bicategory is associative and unital only up to suitable natural isomorphisms. Bicategories where these morphisms are identities are called 2-categories. Since every bicategory is biequivalent to a 2-category, we will restrict ourselves without loss of generality to the setting of 2-categories. 
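For the reader's convenience we recall the prototypical example, which is standard and not spelled out in the text: the 2-category Cat has small categories as objects, functors as 1-cells and natural transformations as 2-cells. For functors F,F': 𝒜→ℬ and G,G': ℬ→𝒞 and natural transformations α: F⇒ F', β: G⇒ G', the two compositions are given componentwise by

(β∘α)_X = β_{F'X}∘ G(α_X) = G'(α_X)∘β_{FX}   (horizontal),    (α'·α)_X = α'_X∘α_X   (vertical),

for X∈𝒜 and a further 2-cell α': F'⇒ F''; the two expressions for the horizontal composite agree by naturality of β.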
As hinted at before, there is an intimate relationship between monoidal categories and bicategories. It is provided by considering a monoidal category as a bicategory Del() with one object (which we will usually denote by *) and as its unique hom-category. Under this identification, the tensor product of becomes the horizontal composition of Del(), and the monoidal unit becomes the identity 1-cell id on the unique object of Del(). The resulting canoncial isomorphism of categories between the category of monoidal categories with certain structure preserving functors and one-object bicategories plus structure preserving 2-dimensional functors is called delooping: eq:delooping Del {monoidal categories with lax/colax/strong monoidal functors}→{one-object bicategories with lax/colax/pseudofunctors. } reversed Observe that we consider the horizontal composition in bicategories in the counter lexicographical order, whereas the tensor product in a monoidal category is read from left to right, that is: ×∋(X,Y)↦ X Y∈. For objects X,Y∈ corresponding to 1-cells x,y in Del() respectively, this implies that the tensor product X Y corresponds to the composition of 1-cells y∘ x. In order to avoid applying this mirror symmetry, we are going to consider in the isomorphism Del in eq:delooping that the reversed tensor product becomes the horizontal composition in bicategories (formally, this is precomposing Del with the isomorphism functor defined on objects by sending a monoidal category (, ) to it reversed category(^rev, ^rev) ). Bicategories provide a natural interpretation of the representation theory of monoidal categories. All endomorphism categories of a bicategory are monoidal with horizontal composition as a tensor product. Similarly, given two objects A, B ∈ of a bicategory with endomorphism categories :=(A,A) and :=(B,B), horizontal composition endows (A,B) with the structure of a (, )-bimodule category. That is, there are two functors ×→ and ×→ subject to analogous but weakened version of the axioms of bimodules over a monoid. Conversely: to any (, )-bimodule category we can associate a two object bicategory Del(), which we call the delooping of . It has two objects 0 and 1 and hom-categories Del()(0,0) = , Del()(0,1)=M, Del()(1,1) = and Del()(1,0) =1, the trivial category. Horizontal composition is given by the tensor products of and and the left and right action of and on . The relation between (bi)module categories and bicategories was already observed by Benaboú, <cit.>. If the categories and coincide, one can define the center of a bimodule category. The aim of the paper at hand will be the study of these centers and their interaction with the theory of bicategories in a slightly more general version. relative center cat Let F → and G → be lax monoidal functors and a (strict) (, )-bimodule category over the (strict) monoidal categories and . 
A left half-braiding of an object M∈ relative to F and G is a natural transformation σ_X M F (X) → G (X) M, for all X ∈, such that for all X,Y ∈ℰ the following diagrams commute: center diag1M F(Y) F(X) G(Y) M F(X) M F(Y ⊗ X) G(Y ⊗ X) M G(Y) G(X) M ["σ_Y 𝕀_F(X)", from=1-1, to=1-3] ["𝕀_G(Y)σ_X", from=1-3, to=2-3] ["G^2 M", from=2-3, to=2-2] ["M F^2"', from=1-1, to=2-1] ["σ_Y⊗ X"', from=2-1, to=2-2] center diag2 M I ≅ I M G(I) M M F(I)["G^0 M", from=1-1, to=1-3] ["M F^0"', from=1-1, to=2-2] ["σ_I"', from=2-2, to=1-3] Similarly, a right half-braiding on M relative to F and G is a natural transformation σ̃_X G (X) M → M F (X), for all X ∈, subject to analogous identities. The left weak center of relative to F and G is the category _l^w(F, , G). Its objects are pairs (M,σ) consisting of an object M ∈ together with a left half-braiding σ on M relative to F and G. A morphism between objects (M, σ), (N, τ) ∈_l^w(F, , G) is an arrow f ∈(M,N) such that (𝕀_G(X) f)σ_X = τ_X(f 𝕀_F(X)), for all X ∈. The full subcategory ^s_l(F, , G) of _l^w(F, , G) whose objects have invertible half-braidings is called the (strong) left center of relative to F and G. When the functors are clear from the context, we will call the latter two categories simply left weak/strong twisted centers of , respectively. We define the right weak and strong twisted center categories _r^w(F, , G) and ^s_r(F, , G) in an analogous way. When == we set Z_l^w(F, G):=_l^w(F, , G) and Z_r^w(F, G):=_r^w(F, , G). For a tensor category and F,G tensor functors these present the (left and right version of) twisted center category Z(F,G) studied in <cit.>. In case F=G=_, we write ^l_()^s_l(,,) and ^r_()^s_r(,,). These recover the (left and right) center category from <cit.>. If moreover =, the categories ^l_() and ^r_() recover the left and right Drinfel`d center categories of . left-iso-right Suppose G→ and F→ are lax monoidal functors and is a (, )-bimodule category. Then there exists an isomorphism of categories Ξ_l^s(F, , G) →_r^s(F, , G), Ξ(M,σ) = (M, σ^-1), which is the identity on morphisms. Suppose that σ is an invertible left half-braiding on an object M∈. We show that σ^-1 defines a right half-braiding. Precomposing the equation in center diag1 by (σ_Y^-1𝕀_F(X))(𝕀_G(X)σ_X^-1) and postcomposing by σ_YX^-1 yields the desired compatibility of right half-braidings with the lax functor structures. Analogous calculations show that σ^-1 is compatible with the lax units of F and G and that Ξ sends any morphism in the strong left center to a morphism of the strong right center. The proof is concluded by constructing Ξ^-1 in the same spirit as Ξ. That is, by mapping invertible right half-braidings to their inverses. The construction of (left) strong twisted center categories can be seen as a result of the following composition of 2-functors: (F,G)→_→() ↦_G_F ↦_(_G_F) where (F,G) denotes precomposing the left and right action by F and G, respectively, and _ is defined as in <cit.>. The term “twisted” is motivated by this composition. Namely, if F and G are strong monoidal functors, a -bimodule category structure is twisted by them into an -bimodule structure. § CATEGORICAL CENTERS AS A DATA IN A TRICATEGORY At the core of our investigation in this section are (weak) twisted centers and their interpretation from a higher categorical point of view. We will show that center categories are hom-categories of hom-bicategories of a particular tricategory. Namely, the tricategory of bicategories with a single object. 
§.§ Categorical centers as (co)lax natural transformations For the interpretation of center categories from the perspective of 2-categories we first recall the definitions of lax and colax functors between bicategories and of lax and colax natural transformations between the latter. A lax functor (, ^2, ^0) →' between 2-categories consists of 0em * an assignment ∋ A ↦(A) ∈', * for all A,B ∈ a local functor _A,B(A, B) →'((A), (B)), * a natural transformation ^2_g,f F(g) ⊗' F(f) ⇒ F(g ⊗ f), for (g,f)∈(B,C)×(A,B), and * a natural transformation ^0_A id_(A)⇒(id_A), for A∈, so that ^2 and ^0 satisfy associativity and unitality laws. When the natural transformations ^2 and ^0 are directed in the opposite direction and satisfy coassociativity and counitality laws, one has a colax functor. One speaks about a pseudofunctor if ^2 and ^0 are isomorphisms. Lax transformations can be defined both for lax and colax functors. The same holds for colax transformations, so that there are four variations of definitions, depending on the situation. colax tr Let (, ^2, ^0) →' and (, ^2, ^0)→' be lax functors between 2-categories. A colax natural transformation χ⇒ consists of * a 1-cell χ_A (A) → (A) for each object A∈, and * for every pair of objects A, B ∈ Ob a collection of 2-cells def-colax{χ_f χ_B∘_A,B(f) ⇒_A,B(f)∘χ_A | f ∈(A,B) } natural in f subject to colax multiplicativity [sep = 1.2em] (B) (A) (B) (C) (A) (C) = (A) (B) (C) (A) (C)["(f)"description, from=2-1, to=2-3] ["(g)"description, from=2-3, to=2-5] ["χ_C"description, from=2-5, to=4-5] ["(g)"description, from=4-3, to=4-5] ["χ_B"description, from=2-3, to=4-3] ["χ_A"description, from=2-1, to=4-1] ["(f)"description, from=4-1, to=4-3] [""name=0, anchor=center, inner sep=0, "(gf)"description, curve=height=40pt, from=4-1, to=4-5] ["(f)"description, curve=height=-12pt, from=2-7, to=1-9] ["(g)"description, curve=height=-12pt, from=1-9, to=2-11] ["(gf)"description, from=2-7, to=2-11] ["(gf)"description, from=4-7, to=4-11] ["χ_C"description, from=2-11, to=4-11] ["χ_A"description, from=2-7, to=4-7] ["χ_f"', shorten <=11pt, shorten >=11pt, Rightarrow, from=2-3, to=4-1] ["χ_g", shorten <=11pt, shorten >=11pt, Rightarrow, from=2-5, to=4-3] ["χ_gf"', shorten <=22pt, shorten >=22pt, Rightarrow, from=2-11, to=4-7] [draw=none, from=2-9, to=2-7] ["^2"', shorten <=2pt, shorten >=2pt, Rightarrow, from=1-9, to=2-9] ["^2"', shorten <=3pt, shorten >=3pt, Rightarrow, from=4-3, to=0] and colax unitality (A) (A) (A) (A) = (A) (A) (A) (A)["χ_A"description, from=1-1, to=3-1] ["χ_A"description, from=1-3, to=3-3] [""name=0, anchor=center, inner sep=0, "𝕀_(A)"description, curve=height=-18pt, from=3-1, to=3-3] [""name=1, anchor=center, inner sep=0, "(𝕀_A)"description, curve=height=18pt, from=3-1, to=3-3] ["𝕀_(A)"description, curve=height=-18pt, from=1-1, to=1-3] [""name=2, anchor=center, inner sep=0, "𝕀_(A)"description, curve=height=-18pt, from=1-5, to=1-7] [""name=3, anchor=center, inner sep=0, "(𝕀_A)"description, curve=height=18pt, from=1-5, to=1-7] ["χ_A"description, from=1-7, to=3-7] ["χ_A"description, from=1-5, to=3-5] ["(𝕀_A)"description, curve=height=18pt, from=3-5, to=3-7] ["𝕀"', shift right=1, shorten <=11pt, shorten >=11pt, Rightarrow, from=1-3, to=3-1] ["χ_𝕀_A", shift left=1, shorten <=11pt, shorten >=11pt, Rightarrow, from=1-7, to=3-5] ["^0"', shorten <=5pt, shorten >=5pt, Rightarrow, from=2, to=3] ["^0"', shorten <=5pt, shorten >=5pt, Rightarrow, from=0, to=1] If the 2-cells of χ are invertible, it is called a pseudonatural transformation. 
In case they are identities, one speaks of a strict natural transformation. By reverting the direction of the 2-cells of χ one obtains the notion of a lax natural transformation between lax functors. We start by a simple observation that entails a marvelous fact. Shim Let F, G → be lax monoidal functors. The objects of the weak twisted center _l^w(F,,G) are canonically in bijection with colax natural transformations χ Del(F) ⇒ Del(G) between the induced lax functors Del(F), Del(G): Del()→ Del(). Under this identification, the objects of the strong center correspond to pseudonatural transformations. Since both bicategories Del(), Del() have a single object, there is a single 1-cell component of χ Del(F) ⇒ Del(G), which is a distinguished object D_χ=χ_* in . The 2-cell components of χ amount to morphisms χ_X: D_χ F(X)→ G(X) D_χ in natural in X (mind that we do not flip the order of factors when translating from Del() to , as we assume the reversed tensor product in the sense of reversed), and the colax multiplicativity and unity translate into the commuting diagrams center diag1 and center diag2. The second claim is immediate. co/lax An analogous statement to the previous proposition for right half-braidings can be obtained by considering lax instead of colax natural transformations between lax functors. To obtain a bicategorical interpretation of the morphisms in a center category, we need to recall the definition of modifications. A modification a: χ⇛ψ between two colax natural transformations χ, ψ:⇒:→' consists of a family of 2-cells a_A:χ(A)⇒ψ(A), indexed by the objects A∈, such that for every 1-cell f∈(A,B) we have: [sep=1.2em] (A) (A) (A) (A) = (B) (B) (B) (B)[""name=0, anchor=center, inner sep=0, "χ_A"description, from=1-1, to=1-4] ["F(f)"description, from=1-1, to=3-1] ["G(f)"description, from=1-4, to=3-4] ["χ_B"description, from=3-1, to=3-4] [""name=1, anchor=center, inner sep=0, "ψ_A"description, curve=height=-30pt, from=1-1, to=1-4] ["χ_f", shorten <=16pt, shorten >=16pt, Rightarrow, from=3-1, to=1-4] ["ψ_A"description, from=1-6, to=1-9] ["G(f)"description, from=1-9, to=3-9] ["F(f)"description, from=1-6, to=3-6] [""name=2, anchor=center, inner sep=0, "ψ_B"description, from=3-6, to=3-9] [""name=3, anchor=center, inner sep=0, "χ_B"description, curve=height=30pt, from=3-6, to=3-9] ["ψ_f", shorten <=16pt, shorten >=16pt, Rightarrow, from=3-6, to=1-9] ["a_A", shorten <=4pt, shorten >=4pt, Rightarrow, from=0, to=1] ["a_B", shorten <=4pt, shorten >=4pt, Rightarrow, from=3, to=2] For two lax functors ,:→' among bicategories let Colax(, ) and Lax(, ) denote the categories of colax (respectively lax) natural transformations and their modifications. Similarly, Pseudo(F, G) denotes the category of pseudonatural transformations and their modifications. isos Let F, G → be lax monoidal functors. There are canonical isomorphisms of categories: _l^w(F,, G)Colax(Del(F),Del(G)), _l^s(F,,G)Pseudo(Del(F),Del(G)), _r^w(F,,G)Lax(Del(F),Del(G)), _r^s(F,,G)Pseudo(Del(F),Del(G)). We only prove the claims for left center categories, as the other cases are analogous. Let χ, ψ Del(F) ⇒ Del(G) be colax natural transformations and write (D_χ, χ), (E_ψ, ψ) ∈_w^l(F,,G) for their corresponding objects in the weak left center. Since Del() and Del() are deloopings of monoidal categories, any modification aχ⇛ψ is defined by a single morphism f D_χ→ E_ψ satisfying for all X∈ the following identity: morph261D1F(X)11χ_X1f111G(X)1E= 261D1F(X)11f1ψ_X111G(X)1E. 
This is precisely the defining equation of a morphism in the weak center and the claim follows. In the above proposition we started from two lax monoidal functors between monoidal categories to obtain the result. One can also start from two lax functors between 2-categories in a specific way to obtain an analogous result. For this purpose recall the interplay between ordinary categories and bicategories encoded in the delooping isomorphism eq:delooping and the discussion after reversed. Naidu-gen Let be a monoidal category, a bicategory which has at least two objects 0 and 1, and assume that ,:Del()→ are two lax functors such that (*)=0 and (*)=1. There are canonical isomorphisms of categories _l^w(F,, G)Colax(,), _l^s(F,,G)Pseudo(,), _r^w(F,, G)Lax(,), _r^s(F,,G)Pseudo(,), for a suitable bimodule category and lax monoidal functors F and G. We set :=(0,0), :=(1,1) and :=(0,1). Then the two lax functors yield lax monoidal functors F=_*,*:→, G=_*,*:→. The unique 1-cell component of a colax natural transformation χ:→ is an object M living in the ()-bimodule category , as it is a 1-cell mapping 0=(*)→(*)=1 in . The 2-cell component yields a half-braiding given by morphisms χ_X: M F(X)→ G(X) M in , for every 1-endocell in , i.e. an object X∈. The rest follows as in isos. Recall that we recovered the left center category from <cit.> as ^l_()^s_l(,,), and similarly the right one, and that they are isomorphic by left-iso-right. Naidu Let be a monoidal category, a bicategory which has at least two objects 0 and 1, and assume that ,:Del()→ are two pseudofunctors such that (*)=0,(*)=1 and _*,*=_*,*=_. There are canonical isomorphisms of categories ^l_()(,)^r_() where =(0,1). §.§ The bicategory of center categories The bicategorical perspective gives us a deeper insight of “why” the twisted center categories can be composed between each other, and in particular “why” “F F-twisted” center categories are monoidal. Namely, for fixed bicategories and ' there are bicategories _lx(,') and _clx(,') of lax functors →', lax (resp. colax) transformations and their modifications, see <cit.>. The composition of (co)lax natural transformations (as 1-cells in these bicategories) is given by what is known as the vertical composition of such transformations. The hom-categories are Lax(, ) and Colax(, ) for two lax functors ,, respectively. The horizontal composition in the two bicategories corresponds to the composition of the right, respectively left, weak twisted center categories between themselves. In particular, the fact that (,) and Colax(, ) are monoidal entails that the “F F-twisted” center categories are monoidal as well. We explain now how this is achieved. Let us fix a monoidal category and a 2-category of which we think as a collection of monoidal categories and composable bimodule categories between them. According to Naidu-gen, we can identify the hom-categories of _clx(Del(),) with the weak left centers relative to lax monoidal functors from to the endo-1-categories of . In order to emphasize this point of view, we write _l^w(,) _clx(Del(),). Given lax functors , Del() →, we write _l^w(,,) for the hom-category of _l^w(, ) with source and target . (Observe that they are categories _l^w(F,,G) from Naidu-gen, where F,G are lax monoidal functors induced by , and =(0,1).) The above observation readily implies that (weak left) centers can be organized into a bicategory. We refer to _l^w(, ) as the bicategory of weak left centers of relative to . 
Our next result translates its horizontal composition into a `pre-tensor product' between weak relative centers. Given two vertically composable 2-cells xα⇒ yβ⇒z: A→ B, we denote their vertical composition in equations by the fraction α/β. center-2-cat Let be a monoidal category and a 2-category. The weak left centers of relative to form a bicategory _l^w(, ). For all lax functors , , : Del()→, its horizontal composition is given by eq:∘_,,_l^w(, , ) ×_l^w(, , ) →_l^w(, , ) ((N, τ), (M, σ) ) ↦ (N∘ M, 𝕀_N ∘σ/τ∘𝕀_M) (g,f) ↦ (g∘ f) The first claim is merely a recapitulation that _l^w(,) = _clx(Del(),) is a bicategory. By Naidu-gen, the objects and morphisms of any weak center are in correspondence with appropriate colax natural transformations and modifications. Applying this identification, one immediately obtains the above formulas from their respective composition rules. (Observe that M∈__=(0,1) and N∈__=(1,2), where (*)=0,(*)=1 and (*)=2, and :=(0,0), :=(1,1) and :=(2,2), in the sense of the proof of Naidu-gen.) Analogous considerations hold for the bicategory of weak right centers _r^w(,) _lx(Del(),). The consequences of the above result are best exemplified by considering the bicategory _l^w(, ) of all twisted weak left centers _l^w(F,,G) of . The weak left analogue _l^w() of the Drinfel`d center corresponds to the endo-category on the identity functor of . The previous result recovers its monoidal structure. Moreover, it implies that, if only the left or right action of on itself is twisted, the resulting center canonically becomes a right, respectively, left module category over _l^w(). This gives a theoretical justification to the constructions of <cit.> involving the anti-Drinfel`d center and its opposite. Let us consider the pseudo-pseudo version of the bicategories _lx(,') and _clx(,'). It is a bicategory that we denote by _ps(,') with hom-categories (,) for pseudofunctors ,:→'. In the particular case when ==_ we have the center category () of the bicategory introduced in <cit.>. Our above considerations allude to the possibility to consider “twisted center categories of the bicategory '”. In this case the twisting is done through the pseudo- (or lax or colax) functors →'. §.§ The tricategory that encomapsses strong center categories Let denote the tricategory of bicategories, pseudofunctors, pseudonatural transformations, and modifications. tricat-bicat Although it is sufficient to consider only lax functors and colax (resp. lax) natural transformations in order to recover left (resp. right) weak twisted center category, in order to form a tricategory whose 2- and 3-cells are some kind of bicategorical functors and transformations, both kinds of cells should be of pseudo type, i.e. they both should have isomorphisms for their respective defining structures. Namely, on one hand, in order to be able to define the horizontal composition of two lax (or colax) natural transformations, both lax and colax structures of the functors they act to are needed (see e.g. Second problem in <cit.>). On the other hand, one sees from the diagram (11.3.12) of <cit.> that in order to construct the isomorphism interchange 3-cell one needs both pseudofunctors and pseudonatural transformations. One clearly has: By delooping, the full sub-tricategory of whose objects are bicategories with a single object gives rise to a tricategory ^* with monoidal categories as objects, strong monoidal functors as 1-cells, and for each pair F, G→ of such functors the strong center _l^s(F,,G)_r^s(F,, G) as hom-category. 
In view of tricat-bicat it becomes clear why we do not speak of a tricategory that contains weak center categories as bottom hom-categories. By further restricting the tricategory ^* to finite tensor categories (in the sense of <cit.>), one has that every hom-bicategory ^*(,) is precisely the bicategory (,) from <cit.>, of tensor functors between finite tensor categories and . As the author works in a context of rigid categories, he shows that for every hom-category Z_l^w(F,G) of the bicategory (,) and every object (V,σ_V)∈ Z_l^w(F,G) the transformation σ_V is invertible (<cit.>), and (V,σ_V) has a left and a right dual object in Z_l^w(G,F). We will generalize this to 2-categories in w=s. Question. In Naidu-gen we dealt with lax functors ,:Del()→' from a one-object bicategory. If we allow for the domain bicategory to have more than one object, we wonder what the categories of (co)lax transformations χ: ⇒ recover. In particular, what is the “meaning in nature” of the 2-cell components χ_X: χ(B)∘ F(X)→ G(X)∘χ(A) for 1-cells X:A→ B in ? §.§ String diagrams in 2-categories In adj and throughout Bilax and 2-cat Bilax we will use string diagrams for 2-categories (again relying on the biequivalence of any bicategory with a 2-category). Our string diagrams are read from top to bottom and (in the context of 2-categories) from right to left. The domains and codomains of the strings stand for 1-cells, while the strings themselves and boxes stand for 2-cells. The 0-cells are to be understood from the context (reading the 1-cells from right to left). Observe that such string diagrams which depict 2-cells in a 2-category acting on the same underlying 0-cells A∈ (that is, morphisms in the monoidal categories (A,A) for every A) correspond exactly to the string diagrams in the monoidal categories (A,A) (due to the isomorphism eq:delooping). This is even more clear if =Del() for some monoidal category . Let be a lax functor and a colax functor. We depict their lax, respectively colax structures by diagrams in the following way: 331(g)3(f)33(gf) 331id_(A)11(id_A) 333(gf)31(g)3(f) 331(id_A)11id_(A) where g,f are composable 1-cells and A a 0-cell in the domain 2-category. We will often simplify the notation ∘ for the composition of 1-cells by concatenation. Observe that a colax transformation between two lax functors, colax tr, is nothing but a distributive law between lax functor structures that is moreover natural in 1-cells. Similarly, a colax transformation between two colax functors is a distributive law between colax functor structures that is moreover natural in 1-cells. In string diagrams we may write the latter as follows: phi-colax351χ_C2(gf)1χ_g1111-1χ_f-1(g)21(f)1χ_A= 351χ_C2(gf)112211221χ_gf1110110(g)11(f)1χ_A; 351χ_A2(id_A)11χ_id_A113χ_A= 351χ_A2(id_A)11211χ_A; 361χ_B1(x)111(α)χ_y111G(y)2χ_A= 2611χ_B1(x)1111χ_x(α)111110G(y)3χ_A for any 2-cell α:x⇒ y:A→ B. (A colax transformation between two lax functors is determined by string diagrams that are both vertically and horizontally symmetric to the above ones.) Dually (taking opposite 2-cells, i.e. taking vertically symmetric diagrams to those in phi-colax) we obtain string diagrams for lax transformations between two lax functors. §.§ Adjoints in 2-categories adj Dualisability plays a prominent role in the study of monoidal categories and the closely related subject of (extended) topological quantum field theories, see <cit.>. For example, Section 2 of <cit.> and Section 4 of <cit.> discuss and utilize `duals' of bimodule categories and centers. 
In the following, we want to provide a 2-categorical perspective on these constructions, thereby giving a theoretical underpinning for some of the ad-hoc constructions of <cit.>. We start by briefly recalling the notion of adjoint 1-cells in 2-categories, see for example <cit.>. Let f A→ B be a 1-cell in a 2-category . A left adjoint of f is a 1-cell u B → A together with two 2-cells η: 𝕀_A→ u f and ε: f u →𝕀_B such that _f∘η/ε∘_f = 𝕀_f and η∘_u/_u ∘ε = 𝕀_u. Similarly, a right adjoint of f is a 1-cell v B → A together with two 2-cells η̅𝕀_B → f v and ε̅ v f →𝕀_A such that _v ∘η̅/ε̅∘_v = 𝕀_f and η̅∘_f/f ∘ε̅ = 𝕀_f. In string diagrams we will write η= 21 and Ε= 21, and they satisfy the laws: 3413u111u=_u and 341f11114f=_f. Pseudofunctors :→' preserve (and reflect) adjoints and we have: ev-coev on F321(f)3(u)3 = 341(f)3(u)3(Ε)11, 3231(u)3(f) = 3411(η)31(u)3(f). In the aforementioned <cit.>, certain trace-like morphisms were considered in order to implement pivotal structures on the Drinfel`d centers. This involves a `lift' of the notion of duals, i.e. adjoints, to the setting of centers of bimodule categories. With our interpretation of _l^w(,) = _clx(Del(),) in center-2-cat as a bicategory, for a monoidal category and a bicategory , we can now derive a more conceptual version of this construction. We refer to a 2-category as autonomous if all 1-cells in have left and right adjoints. The most prominent example of an autonomous 2-category is given by the delooping Del() of an autonomous (rigid) monoidal category . We will prove at the end of this section that the full sub-bicategory _l^w-ps(, ) whose objects are pseudofunctors is autonomous. We will use the following fact, which is immediately proved: lax-ps Let ,:→' be pseudofunctors between 2-categories, and suppose that χ:⇒ is a colax natural transformation with respect to their lax functors structures. Then χ is a colax natural transformation of pseudofunctors. The following result is a 2-categorical interpretation of the fact that the half-braidings over autonomous monoidal categories are automatically invertible, see <cit.>. w=s Let be an autonomous 2-category and an autonomous monoidal category. For any pair of pseudofunctors , Del() →, the weak and strong centers coincide. Moreover, the inverse of a left half-braiding is a right half-braiding, that is: _l^w-ps(, , ) = _l^s-ps(, , ) _r^s-ps(, , ) = _r^w-ps(, , ). For the first claim it suffices to show that the component 2-cells of any colax natural transformation between pseudofunctors χ⇒ are invertible. Hereto, we fix an object X∈ and write X^∗ for its left adjoint. Starting from the left hand-side below and applying the left equality in ev-coev on F, the coherence of χ with the functor's multiplicativity, naturality of χ with respect to Ε_X and the coherence of χ with respect to the functor's counitalities (mind lax-ps) we reach the equality with the right hand-side: 450D_χ2(X)2(X^*)χ_X11χ_X^*121D_χ = 351D_χ2(X)2(X^*)331D_χ. Then the 2-cell: γ_X:= 450(X)3D_χ211χ_X^*2121D_χ2(X) is clearly a left inverse of χ_X: D_χ∘(X) →(X) ∘ D_χ. The analogous reasoning, but this time using the coherence of χ with the comultiplicativity and unitality of the functors, shows that γ_X is also a right inverse of χ_X. The last statement is proved directly. Our next result states that adjoints can be `lifted' to weak centers. auton Suppose to be an autonomous monoidal category and to be an autonomous 2-category. 
Then the full sub-bicategory ^w ps_l(, ) ⊂_l^w(, ) whose objects are pseudofunctors is autonomous. We only show that any 1-cell in ^w ps_l(, ), that is, an object (M,χ) ∈^w ps_l(, ,), has a right adjoint living in ^w ps_l(, ,). The case of left adjoints is analogous. Due to w=s, the half-braiding χ, i.e. the 2-cells χ_X: M ∘(X) →(X)∘ M, is invertible. We utilize this fact, to define a half-braiding on ^∗M as shown in the next diagram σ_X:= 450^∗M3(X)211χ_X^-12121(X)2^∗M. A direct computation shows that it satisfies the Equations (2) and (3). For example, we have 770^∗M3(Y)3(X)2121χ_Y^-12121χ_X^-13312431(Y X)21^∗M = 560^∗M3(Y)0(X)11121χ_X^-121χ_Y^-11121(Y X)3^∗M = 660^∗M3(Y)0(X)13221312χ_Y X^-121130(Y X)4^∗M. To conclude the proof, we show that the unit η𝕀→ M ∘^∗M and counit ε^∗M ∘ M →𝕀 lift to morphisms in the center category ^w ps_l(,,). For the counit this follows from the computation depicted below. 570^∗M3M0(X)4χ_X121115221χ_X^-11121(X) = 450^∗M3M0(X)2χ_X1χ_X^-1121(X) = 450^∗M3M0(X)321(X) An analogous argument shows that the unit also becomes a morphism in the center. § BILAX FUNCTORS Bilax We are interested in functors on bicategories that are both lax and colax but not necessarily pseudofunctors. Likewise, we are interested in natural transformations that are both lax and colax but not necessarily pseudonatural transformations, as well as in their modifications. In particular, we introduce the notions of a bilax functor, bilax natural transformation and bilax modifications. We formulate them for 2-categories, just to avoid the use of associators and unitors, but the corresponding definitions for bicategories can be formulated in a straightforward fashion. To that end, we fix 2-categories and '. In this section we introduce bilax functors, and leave the resting two notions for the next section. §.§ Bilax functors We reiterate that we will often simplify the notation ∘ for the horizontal composition of 1- and 2-cells by concatenation. Let :→' be a 2-functor. A Yang-Baxter operator for consists of a collection of 2-cells ν_g,f:(g)(f)⇒(f)(g) in ', natural in 1-endocells f,g of , which satisfy the Yang-Baxter equation YBE F451(h)2(g)1(f)ν_h,g11311ν_h,fν_g,f11121(f)2(g)1(h) = 4510(h)3(g)1(f)11101ν_g,f1ν_h,f111110ν_h,g10(f)3(g)1(h) for all 1-endocells f,g,h, and the following unity-counity law YB-unity2511(f)11ν_1,f111(f) =_(f)= 251(f)11ν_f,1113(f). We call a Yang-Baxter operator for the identity 2-functor :→ a Yang-Baxter operator of . We reserve the notation c for a Yang-Baxter operator on the identity 2-functor on . For 2-categories =Del() where is a braided monoidal category, a class of Yang-Baxter operators c is given by the braiding(s) of . bilax Assume that possesses a Yang-Baxter operator c. 
A bilax functor is a pair (, ν):(,c)→' where :→' is simultaneously a lax and a colax functor and ν is a Yang-Baxter operator of , meaning that apart from the rule YBE F two additional groups of rules hold: left and right lax distributive laws lax d.l.350(h)3(g)0(f)11201ν_hg,f1111-111(f)11(hg) = 4510(h)11(g)2(f)11101ν_g,f1ν_h,f11111-11(f)12(hg), 2511(f)11ν_1,f110(f)3(id)= 251(f)11220(f)3(id); 35-1(h)4(g)-1(f)11021ν_h,gf1111-111(gf)11(h) = 5510(h)11(g)2(f)1ν_h,g112111ν_h,f1111(gf)3(h), 251(f)11ν_f,1110(id)3(f)= 253(f)11220(id)3(f) and left and right colax distributive laws colax d.l.35-1(f)5(hg)110211221ν_f,hg1110110(h)11(g)2(f) = 4510(f)4(hg)111ν_f,h11111-1ν_f,g1-1(h)20(g)3(f), 250(f)3(id)11ν_f,1113(f)= 250(f)3(id)11211(f); 25-1(gf)6(h)11001120ν_gf,h111-10(h)11(g)2(f) = 553(gf)1(h)1111ν_f,h1ν_g,h111310(h)3(g)1(f), 250(id)3(f)11ν_1,f111(f)= 250(id)3(f)11123(f), and additionally the bilaxity condition bilax452(gf)2(hk)1ν_f,h12(gh)2(fk) = 451(gf)3(hk)3(1c1)31(gh)11(fk), 3311111-11-1(id_A)5(id_A)= 331131(id_A)3(id_A), 23-1(id_A)5(id_A)11-11111= 331(id_A)3(id_A)311, 12111= _id_A holds for 1-cells Ak→Bh→Bf→B g→C. Observe that the unit laws in lax d.l. (or the counit laws in colax d.l.) together with the fourth rule in bilax imply YB-unity. We record that the unit laws of lax d.l. (or the counit laws of colax d.l.) imply eps-e2511(id)11ν_1,11111(id) = 141(id)111(id) = 251(id)11ν_1,1111(id). We briefly comment the term “distributive law” in equtions lax d.l. and colax d.l. (we also used it in phi-colax). This term in the context of lax functors appeared in <cit.>, as the authors say: “by analogy to the distributive laws of monads, which have similar axioms”. On the other hand, monads and lax monoidal functors on (monoidal) categories can both be interpreted as monoids: the former are monoids in endofunctor categories, while the latter are monoids under Day convolution. In accordance with this suggestive similarity we use the term “distributive law” also for pseudonatural transformations on lax/colax/bilax functors. Let (,c) and (',d) be 2-categories with their respective Yang-Baxter operators, and assume that a bilax functor (,ν) is given acting between them. If ν_f,g=d_(f), (g) for all composable 1-endocells f,g in , we will say that is a bilax functor with a compatible Yang-Baxter operator, and we will write simply : (,c)→ (',d). In the case that the relation between ν and d is not known, we will write (,ν): (,c)→', as in bilax. Given a bilax functor (, ν) the functors on endo-hom-categories end-functors_A,A: (A,A)→'((A), (A)) are pre-bimonoidal in the sense of <cit.>, where we rely on the strictification theorem for monoidal categories. If instead of the Yang-Baxter operators one works with braidings on the endo-hom categories, then the functors _A,A are bilax as in <cit.>. The latter inspired the terminology in bilax. While the analogues of the coherence conditions lax d.l. and colax d.l. do not appear explicitly in <cit.>, we incorporate them in accordance with the discussion in <cit.> in our definition. Accordingly, we recover a “braided bialgebra” from <cit.>, as we will show further below. In the situation : (,c)→ (',d) (i.e. that ν is compatible with d), the functors _A,A are bimonoidal in the sense of <cit.>. The other way around, we clearly have: braided Let (,Φ_) and (,Φ_) be braided monoidal categories. We identify their braidings with Yang-Baxter operators c and d on their delooping categories Del() and Del(), respectively. 
Any bilax functor :(Del(),c)→(Del(),d) with a compatible Yang-Baxter operator is a bimonoidal functor in the sense of <cit.>. triv-braided fun Let (,Φ) be a braided monoidal category and 1 the trivial 2-category (it has a single 0-cell and only identity higher cells). Any bilax functor :1→ Del() with invertible ν which coincided with Φ can be identified with a bialgebra in . To see this, note that determines and is determined by a 1-cell in Del(), i.e. an object B of , and a (co-)multiplication and (co-)unit on that 1-cell, which are subject to (co-)associativity and (co-)unitality due to being lax and colax. On the other hand, c of the trivial 2-category is trivial: it is the identity 2-cell on the identity 1-cell on *. Then the equations bilax recover bialgebra axioms on (id_*). (The rest of axioms of the bilax functor are automatically fulfilled by the braiding, and do not contribute any additional information.) If moreover is a pseudofunctor, then it is trivial: the obtained bialgebra is isomorphic to the monoidal unit of . This can be proved considering the lax unitality structure of . The following result is straightforwardly proved, see also <cit.>. Let : (,c) → (', d) and : (', d) → (”, e) be compatible bilax functors. Then : (, c) → (”, e) is a compatible bilax functor with a Yang-Baxter operator ν_g,f:=ν^_(g),(f), where ν^ is a Yang-Baxter operator of and g,f are 1-endocells of . §.§ Bimonads The theory of monads and comonads in the context of 2-categories was introduced by Street in <cit.>. Recall that a monad in , is a 1-endo-cell t∈(A,A) endowed with 2-cells μ: tt → t and η: id_A → t subject to associativity and unitality conditions. Dually, that is, swapping the direction of the structure 2-cells, one obtains the notion of a comonad in : a 1-endo-cell endowed with a coassociative and counital comultiplication. In order to differentiate notations for (co)lax functors and (co)monad structures, we will represent the multiplication and unit of a monad as well as the comultiplication and counit of a comonad by: 2311122, 1311, 2311221, 1311. One shows in a straightforward manner that lax functors preserve monads and colax functors preserve comonads. Specifically, for a monad t in and a lax functor :→', and a comonad d in and a colax functor we have the following monad and comonad structures: dot-structures341(t)11(t)3113(t) = 451(t)11(t)3(∇)113(t), 13111(t) = 4311(η)11(t), 343(d)1131(d)11(d) = 453(d)11(Δ)31(d)11(d), 131(d)11 = 4311(d)(Ε)11. We are going to introduce bimonads in 2-categories with respect to Yang-Baxter operators. Observe that their 1-categorical analogue is different than the bimonads of <cit.> and <cit.> in ordinary categories, but they are a particular instance of τ-bimonads and bimonads in 2-categories from <cit.>. Whereas bimonads in <cit.> are opmonoidal monads on monoidal categories, bimonads in <cit.> are monads and comonads on a not necessarily monoidal category with compatibility conditions that involve a distributive law λ. The bimonads in <cit.> generalize the latter to 2-categories and are equipped with an analogous 2-cell i.e. distributive law λ. The τ-bimonads that we also introduced in <cit.>, are a particular case of bimonads, where the 2-cell λ is given in terms of a 2-cell τ which is a distributive law both on the left and on the right, both with respect to monads and comonads. In the special case of a Yang-Baxter operator coming from a braiding one gets examples à la <cit.>. 
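Before stating the definition, we recall its classical prototype (this recollection is ours and standard): a bialgebra (B,μ,η,Δ,ε) in a braided monoidal category with braiding Φ is an object carrying a monoid and a comonoid structure subject to

Δ∘μ = (μ⊗μ)∘(id_B⊗Φ_{B,B}⊗ id_B)∘(Δ⊗Δ),    ε∘μ = ε⊗ε,    Δ∘η = η⊗η,    ε∘η = id_1.

The four equations in the definition below are the 2-categorical analogues of these four identities, with the Yang-Baxter 2-cell c_{b,b} playing the role of the braiding Φ_{B,B}.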
c-bimnd Let (,c) be a 2-category with a Yang-Baxter operator c and suppose that b∈ is a 1-endo-cell endowed with a monad and comonad structure. We call b a c-bimonad if c_b,b is a distributive law both on the left and on the right, both with respect to monads and comonads, and the following compatibilities hold: c-bimonad472b2b11221112221c_b,b11122111222b2b= 351b3b31131b11b, 231b1b1111= 231b1b111, 2311111b1b= 24111b1b, 12111= __A. The following observation is inspired by Benaboú: blx from triv There exists a bijection between c-bimonads in (,c) and compatible bilax functors 1 → (,c). It is given by mapping any c-bimonad b: A→ A in (,c) to the bilax functor : 1 → with (id_*)= b, whose bilax structure 2-cells agree with the respective 2-cells of the c-bimonad b. As observed by Benaboú, a lax functor from 1 to defines and is defined by a monad in . Likewise, the colaxity of : 1 → is equivalent to (id_*) being a comonad. The equations lax d.l. and colax d.l. correspond to the four distributive laws of c_b,b with b=(id_*) and the bilaxity conditions given in bilax correspond to conditions c-bimonad. Such a bilax functor is also a τ-bimonad in the sense of <cit.> (it is a monad and a comonad with a monad-morphic and a comonad-morphic distributive law on both sides, in the sense of lax d.l. and colax d.l.). Similarly, we clearly have: T(id) Any bilax functor (,ν) : 1 → determines a ν-bimonad (id_*)= b. preserve bimnd Let (,ν):(,c)→' be a bilax functor and b∈ a c-bimonad. Then (b) is a ν-bimonad in '. That is, it satisfies the last three axioms of c-bimonad and a variant of the first axiom shown below: tau-bimonad472(b)2(b)11221112221ν_b,b11122111222(b)2(b)= 351(b)3(b)31131(b)11(b). Moreover, (b) is a τ-bimonad in the sense of <cit.>. The claim follows by the naturality of the (co)lax structure of with respect to the 2-cells from the (co)monad structures of b. (For the three compatibilities of the (co)unit structures for (b), alternatively, apply (η_b) and (Ε_b) at suitable places in the last three equations in bilax and use the naturality of the (co)lax structure of .) The last statement follows from lax d.l. and colax d.l.. We have that (id_A) for all A∈ are ν-bimonads in '. Given that the notion of a Yang-Baxter operator is more general than that of a braiding, a ν-bimonad generalizes the notion of a bialgebra. Moreover, we have that the notion of a bilax functor :→' recovers that of a “braided bialgebra” from <cit.>, in the case =Del(), where is a monoidal category with a single object. my bimonads Let (,c) and (',d) be two 2-categories with their respective Yang-Baxter operators, and let b:A_0→ A_0 be a d-bimonad in '. We define _b(A)=A_0, _b(x)=b, _b(α)=_b for all objects A, 1-cells x and 2-cells α in . Then _b is a bilax functor with ν_f,g=d_b,b for all 1-endocells f,g in . (Alternatively, instead of the Yang-Baxter operator d in ' one could require a Yang-Baxter operator ν on _b, take a ν-bimonad b and obtain a bilax functor (_b,ν).) The following claim is directly proved: For a comonad d:A→ A and a colax transformation between two colax functors χ:⇒ it holds: 361χ_A2(d)111221χ_d1111-1χ_d-1(d)21(d)1χ_A= 351χ_A2(d)112211221χ_d12110110(d)11(d)1χ_A; 351χ_A2(d)11χ_d113χ_A= 351χ_A2(d)11211χ_A. Monads and comonads in can act on other 1-cells of . That is, for example a left module over a bimonad b:A→ A is a 1-cell x:A'→ A endowed with a left action : bx ⇒ x of b (in <cit.> the axioms are expressed in string diagrams). 
Note that in category theory, what we call a b-(co)module is also referred to as a b-(co)algebra, see for example <cit.>. (co)mod str Let , be two bilax functors and b: A→ A a c-bimonad in (indeed, a monad and comonad satisfying the two last identities in c-bimonad). For a colax transformation ϕ:⇒ of colax functors, left G(b)-comod defines a left (b)-comodule structure on ϕ(A). Dually, for a lax transformation ψ:⇒ of lax functors, left G(b)-mod defines a left (b)-module structure on ψ(A). left G(b)-comod243ϕ_A110(b)3ϕ_A := 3511ϕ_A1111ϕ_id_A(η)110G(b)3ϕ_Anat.=251ϕ_A11ϕ_b110G(b)3ϕ_A left G(b)-mod240(b)3ψ_A113ψ_A := 3510(b)3ψ_A(Ε)11ψ_id_A1113ψ_Anat.=250(b)3ψ_A11ψ_b111ψ_A We indicate the steps of the proof for the comodule structure. Starting from 331213111 apply first the second rule in bilax, then the first rule in phi-colax, naturality of 21, naturality of χ, third rule in c-bimonad for b, and lastly naturality of χ. For the counitality, starting from 2311 apply first the fourth rule in c-bimonad, the second rule in phi-colax, and lastly the fourth rule in bilax. For = acting on the trivial 2-category =1, and ϕ and ψ being endo-transformations, (co)mod str recovers <cit.>, proved for 2-(co)monads and their distributive laws. The latter is a 2-categorical formulation for a fact possibly used in (braided) monoidal categories by different authors, but we are not aware of an exact reference. It is important to note that modules/comodules over a c-bimonad in do not form a monoidal category (unless the Yang-Baxter operator c is a half-braiding). §.§ Module comonads, comodule monads and relative bimonad modules mod In <cit.> the notion of a wreath was introduced as monad in the free completion 2-category ^M() of the 2-category () of monads in under the Eilenberg-Moore construction. Dually, cowreaths are comonads in ^C(), where the latter is the analogous completion of the 2-category () of comonads. In <cit.> the first author introduced the 2-category () so that there are forgetful 2-functors ()→^M() and ()→^C(). Moreover, in loc. cit. biwreaths were introduced as bimonads in (). Biwreaths as a notion integrate both wreaths and cowreaths as well as their mixed versions: mixed wreaths and mixed cowreaths. In particular, a biwreath also behaves like a “(co)module (co)monad” with respect to monad-morphic or comonad-morphic distributive law in , where the highlighted notions in the 2-categorical setting were introduced in <cit.>. In the present paper, similarly to the above-mentioned idea (see diagrams (67) and (65) of loc. cit.), but now with respect to Yang-Baxter operators, we will consider the notions that we introduce in the definition below. For the sake of examples that we will study further below, we record that in <cit.> the 2-category () of bimonads in (with respect to distributive laws) was defined, so that there are inclusion and projection 2-functors E_B: ()→() and π: ()→() which are identities on 0- and 1-cells. In <cit.> we have considered a variation of the 2-category () by changing the definition of 2-cells. It is this other version of the 2-category that we will be interested in here. We will recall it in bimnd-K. mod-comod Let b:A→ A be a c-bimonad in a 2-category (,c) with a Yang-Baxter operator. Let d:A→ A be a comonad and a right b-module, and t:A→ A a monad and a right b-module. 
We say that d is a (right) module comonad if the left two equations below hold, and that t is a (right) comodule monad if the right two equations below hold: 462d2b11221112221c_d,b11d3d = 3511d1b11131d11d, 231d1b1 = 331d1b1111; 461t3t1c_b,t11122111222t2b = 351t11t311111t1b, 2311t1b = 3311111t1b. The left hand-side versions of these notions can be clearly deduced. We continue with some simple yoga of (co)lax and bilax functors. Consider 1-cells: (20,53)(1,0)[“t]3001a (0,30)(1,0)[A`A`]3400a (20,13)(1,0)[“d]3001b (370,30)(1,0)[`B`x]3001a in , where t is a monad, d is a comonad and x is a right t-module (via a 2-cell ) and a right d comodule (via a 2-cell ρ). Recall that, as we already used before, lax functors preserve monads and colax functors preserve comonads. Given a lax functor and a colax functor one has that (x) is a right (t)-module and a right (d)-comodule with structure 2-cells: induced (co)mods341(x)11(t)111311(x) = 441(x)11(t)3()3(x), 341(x)111131(x)11(d) = 443(x)(ρ)31(x)11(d). The analogous claims hold on left sides. bilax preserves comod mod Bilax functors : (,c)→ (',d) with compatible Yang-Baxter operators preserve module comonads and comodule monads. The arguments for showing that preserves comodule monads are analogous to proving that it maps module comonads to module comonads. Therefore, we will restrict ourselves to the latter. Let b : A → A be a bimonad in (,c) and x : A → A a b-module comonad. The bilaxity of implies that (b) is a d-bimonad with an action on the comonad (x). Hence, we only have to show that the first two compatibility conditions in mod-comod hold. Using the functoriality of we have: 462(x)2(b)11221112221d_x,b11(x)3(x) = 471(x)5(b)(Δ_d)(Δ_b)41(1c1)141()()11(x)5(x) = 371(x)3(b)3(Δ_dΔ_b)(1c1)()31(x)3(x) = 361(x)3(b)3()(Δ_d)31(x)3(x) = 3511(x)3(b)111131131(x)3(x). The compatibility of the action of (b) with the counit of (x) is an immediate consequence of being lax. For the next property we introduce the following notion: rel Hopf Let b:A→ A be a c-bimonad in a 2-category (,c) with a Yang-Baxter operator and let t:A→ A be a right b-comodule monad. A right t-module and a right b-comodule x:A→ B is a right relative t b-module if the following relation holds: 451x3t1c_b,t11x4b = 361x1t11111x1b. Morphisms of right relative t b-modules are right t-linear and right b-colinear 2-cells in . Analogously, one defines a left relative t b-module. If t=b we call x a relative bimonad module. The above notions correspond to those of relative Hopf modules <cit.> and Hopf modules <cit.> in braided monoidal categories, which in turn are categorifications of Hopf modules introduced in <cit.>. Obviously, b itself is a relative bimonad module. relative mod Let : (,c)→ (',d) be a bilax functor with compatible Yang-Baxter operator. For any 1-cell x:A→ B the 1-cell (x) is a left relative bimonad module over (id_B) and a right relative bimonad module over (id_A). This follows from the first equation in bilax by setting id_B,id_B, id_B, x, respectively x, id_A, id_A, id_A for the 1-cells g,f,h,k. Analogously to bilax preserves comod mod we get the following result: relative mod Bilax functors : (,c)→ (',d) with compatible Yang-Baxter operators preserve relative bimonad modules. Hopf bimodules, for Hopf algebras over a field, appeared in the construction of bicovariant differential calculi over a Hopf algebra in <cit.>. They were generalized in <cit.> to the context of a braided monoidal category . 
For a bialgebra B in a Hopf bimodule is a B-bimodule M in which is moreover a B-bicomodule in the monoidal category of B-bimodules _B_B (for the structures on B itself the regular (co)action on B and the diagonal action on tensor product of comodules are used). The latter means that both left and right B-comodule structures of M are left and right B-bimodule morphisms, meaning that there are four conditions to be fulfilled. Together with simultaneously B-linear and B-bicolinear morphisms Hopf bimodules make a category denoted by ^B_B_B^B. We mark that the name “Hopf bimodules” is somewhat misleading, as the Hopf structure on the bialgebra B is not necessary here. Substituting a braided monoidal category and a bialgebra B in it with a monoidal category with a Yang-Baxter operator c and a c-bimonad in it, we can consider the analogous category of Hopf bimodules ^B_B(,c)_B^B, where and B now have these new meanings. Let :(,c)→(',d) be a bilax functor with compatible Yang-Baxter operator. The functors end-functors for every A∈ factor through the category of Hopf bimodules over the d-bimonads (id_A) in ('((A), (A)),d). For any 1-endocell x:A→ A we should check that (x) satisfies the four relations. Two of them are satisfied by relative mod, which mean that the left coaction is left linear and that right coaction is right linear. The other two, meaning the two mixed versions of compatibilities, one gets by setting x=f, respectively x=h, and the resting three 1-cells to be identities, in the first equation in bilax. To check the claim on morphisms, observe that for any 2-cell α:x→ y in , i.e. morphism in (A,A), (α) is (id_A)-(co)linear by the naturality of the (co)lax structure of . We record here some direct consequences for a bilax functor (,ν) from the first equation in bilax. For simplicity, we may consider :(,c)→(',d) to have a compatible Yang-Baxter operator. By induced (co)mods note that (x) is a bi(co)module over (id) for any 1-cell x (acting among suitable 0-cells). Then we may write: mod coalg-l451(id)3(fk)1ν_1,f111(f)3(k)= 351(id)3(fk)11131111311(f)3(k) comod alg-l453(f)1(k)1ν_f,112(id)2(fk)= 2511(f)3(k)1311131111(id)3(fk) mod coalg-r452(gh)2(id)1ν_h,111(g)3(h) = 3511(gh)3(id)111131131(g)3(h), comod alg-r452(g)2(h)1ν_1,h11(gh)4(id) = 451(g)11(h)3111111311(gh)3(id) for 1-cells Ak→Bh→Bf→B g→C. § 2-CATEGORY OF BILAX FUNCTORS 2-cat Bilax In this section we introduce the rest of the ingredients to construct a 2-category of bilax functors. §.§ Bilax natural transformations b.nat-tr Among bilax functors we introduce bilax natural transformations. Recall that for a lax transformation ψ and a colax transformation ϕ, both acting between (bilax) functors ⇒', for every 1-cell f: A→ B there are 2-cells ψ_f: '_A,B(f)ψ(A) ⇒ψ(B)_A,B(f) and ϕ_f: ϕ(B)_A,B(f) ⇒'_A,B(f)ϕ(A) natural in f. For the sake of the following definition we introduce the notation: lambdaλ_xy,z:= 452(xy)2(z)11ν_y,z12(xz)2(y) for a bilax functor and 1-cells A z→ A y→ A x→ B. Then the first equation in bilax can be expressed also as: bilax-lambda350(gf)4(hk)1101λ_gf,h111100(gh)4(fk) = 551(gf)3(hk)3(1c1)31(gh)11(fk) Observe that by eps-e and the rules of the (co)lax structures of , one has: lambda-x1252(x)11λ_x id,id111(x)=_(x). A bilax natural transformation χ: ⇒' between bilax functors is a pair (ψ,ϕ) consisting of a lax natural transformation ψ of lax functors and a colax natural transformation ϕ of colax functors, which agree on the 1-cell components, i.e. 
ψ(A)=ϕ(A):=χ(A) for every A∈, and whose 2-cell components are related through the relation: psi-lambda-phi for bimonads47-1'(xy)13χ(A)1(z)110112231ψ_xy1λ_xy,zϕ_xz221311101-1'(xz)13χ(A)1(y)= 371'(xy)3χ(A)1(z)22031113111ϕ_z1λ'_xy,z12230ψ_y2111131'(xz)11χ(A)3(y) for composable 1-cells: Az→ A y→ A x→ B. We will denote it shortly as a triple (χ, ψ, ϕ). In particular, if y=z=id_A and one applies the unity of the lax structure of on the top right in psi-lambda-phi for bimonads, and the counity of the colax structure of on the bottom right, one obtains (by lambda-x1 and (co)mod str): YD26-1'(x)13χ(A)11011ψ_xϕ_x11101-1'(x)13χ(A)= 351'(x)3χ(A)1λ'_x id,id111'(x)3χ(A) = 452'(x)3χ(A)1ν_id,id12'(x)3χ(A) which we will call the Yetter-Drinfel`d condition on the bilax natural transformation χ. Observe that the left module and comodule structures of χ(A) above are over '(id_A). The relation psi-lambda-phi for bimonads we will call strong Yetter-Drinfel`d condition on χ, and the 1-cells χ(A) in ' we will call strong Yetter-Drinfel`d modules (over '(id_A)). bimnd-K Let and ' be 2-categories with Yang-Baxter operators c and d, respectively. We fix two d-bimonads b_0:A_0→ A_0 and b_1:A_1→ A_1 in ' and define bilax functors _0,_1:→' by _0:=_b_0 and _1:=_b_1 with ν_i=d for i=1,2, as in my bimonads. A bilax natural transformation (χ, ψ, ϕ) between bilax functors χ: _0⇒_1 consists of a family of 2-cells ψ_A: b_1χ(A)⇒χ(A) b_0 and ϕ_A: χ(A) b_0 → b_1 χ(A), indexed by A∈, where ψ_A (respectively ϕ_A) are distributive laws with respect to the monad (resp. comonad) structures of b_0,b_1 (by phi-colax and its vertical dual), and it holds: YD-cond351b_11χ_A1b_0ψ_A11λ_0ϕ_A11b_11χ_A1b_0 = 351b_11χ_A1b_01ϕ_Aλ_111ψ_A1b_11χ_A1b_0, and consequently 241b_11χ_Aψ_Aϕ_A1b_11χ_A= 452b_13χ_A1d12b_13χ_A. Note that λ_0 and λ_1 have the form: λ_0= 452b_01b_011d12b_01b_0 and λ_1= 352b_11b_111d12b_11b_1 and that the third equation in phi-colax is now trivial. For every A∈, the triple (χ(A), ψ_A, ϕ_A) is a 1-cell in the 2-category (') from <cit.>, which we mentioned at the beginning of mod. The 0-cells of () are bimonads in defined via a distributive law 2-cell λ, 1-cells are triples (F,ψ,ϕ) where (F,ψ) is a 1-cell in () and (F,ϕ) is a 1-cell in () with a compatibility condition between ψ, ϕ and λ (as in YD-cond on the left), and a 2-cell is a single 2-cell ζ in which is simultaneously a 2-cell in () and in (). L-trivial: bimnd If =Del() in the above Example is induced by a braided monoidal category , the above bilax natural transformation χ: _0⇒_1:Del()→' is precisely a single 1-cell in (Del()). 1-cell for K=1 By T(id) actually any two bilax functors (Τ_0,ν),(Τ_1,ν'):1→ determine two bimonads in : _0 yields a ν-bimonad b_0:=(id_*) on A=(*) and _1 a ν'-bimonad b_1:=_1(id_*) on A'='(*). Then analogously as in L-trivial: bimnd, any bilax natural transformation χ: Τ_0⇒Τ_1:1→' is precisely a single 1-cell in (). bilax trans triv Consider a bilax transformation χ:(,ν)⇒(',ν'): 1→ from the trivial 2-category to a 2-category . Let B:=(id_*) be the ν-bimonad on =(*) and B':='(id_*) the ν'-bimonad on '(*) as in T(id). Let m(B) denote the monad part of B and c(B) the comonad part of B, and set χ(*)=X. We find that ψ:m(B')X⇒ Xm(B) is a distributive law with respect to monads, and that ϕ:Xc(B)⇒ c(B')X is a distributive law with respect to comonads. It is a nice exercise to prove the following lemma that we will use to pursue with this example. 
ni-lambda Let B be both a monad (to which we refer to as m(B)) and a comonad (we refer to it as c(B)) and suppose that ν:BB⇒ BB is a distributive law, both on left and right side, with respect to monad m(B) and with respect to comonad c(B) (this means four distributive laws). Then λ:= 452m(B)2c(B)11ν12c(B)2m(B) is a distributive law on the left both with respect to monads and comonads, that is: lambda d.l.350m(B)3m(B)0c(B)11201λ11110c(B)3m(B) = 4510m(B)11m(B)2c(B)11101λ1λ11111c(B)2m(B), 2512c(B)11λ110c(B)3m(B)= 251c(B)11220c(B)3m(B); 351m(B)2c(B)1λ1111-1λ-1c(B)20c(B)3m(B)= 351m(B)2c(B)112211221λ1110110c(B)11c(B)2m(B), 251m(B)2c(B)11λ113m(B)= 351m(B)2c(B)11211m(B). Continuing with the Example, we have that similarly ν' induces λ'. Moreover, the compatibilities psi-lambda-phi J. Power351m(B')1X1c(B)ψ11λϕ11c(B')1X1m(B)= 351m(B')1X1c(B)1ϕλ'11ψ1c(B')1X1m(B) and 26-1m(B')13X11011ψϕ11101-1c(B')13X= 351m(B)3X1λ'111c(B')3X hold. Then (X,ψ,ϕ):(,m(B), c(B),λ)→(',m(B'), c(B'),λ') is a 1-cell in the 2-category of mixed distributive laws () of <cit.>. (In the specific case when X is a left m(B')-module and left c(B')-comodule, and ψ and ϕ are given by ψ= 331m(B')1X 111X1m(B) and ϕ= 3311X1c(B)11c(B')1X, the two expressions in psi-lambda-phi J. Power are equivalent and one recovers a particular form of λ-bialgebras by Turin and Plotkin, <cit.>.) Suppose that B is a bialgebra in a braided monoidal category . A (left) Yetter-Drinfel`d module over B is an object M together with a (left) action B M→ M and a (left) coaction M→ B M of B subject to the compatibility condition: YD-def382B1M112112B1M = 452B3M112B3M. The category of (left) Yetter-Drinfel`d modules over B in and left B-linear and B-colinear morphisms we denote by ^B_B(). bialg vs Hopf Observe that the antipode, i.e. a Hopf algebra structure on a bialgebra in the context of Yetter-Drinfel'd modules, is used in the following two instances. One is to construct the inverse for the braiding of the respective category. Another one is to formulate an equivalent condition to YD-def. Thus, the category of Yetter-Drinfel'd modules over a bialgebra is monoidal and even it has a pre-braiding (non-invertible), given by: 3511M1N11111N1M. triv-braided Consider two braided monoidal categories and and two bialgebras B_0, B_1 in . These give rise to two bilax (and bimonoidal) functors F_B_0, F_B_1: Del()→ Del() as in my bimonads. The bilax natural transformation χ: _0⇒_1:→' from bimnd-K corresponds to a generalized notion of a Yetter-Drinfel`d module over B_1, which in view of the above we call strong Yetter-Drinfel`d modules. YD specific Any Yetter-Drinfel'd module M over a bialgebra B' in a braided monoidal category (,Φ) comes from a bilax natural transformation of triv-braided where ψ and ϕ are given by left-left YD classicψ= 352B'1M11 j^-111M1B, ϕ= 3511M1Bj112B'1M, for any bialgebra isomorphism j:B→ B'. (More precisely, from a bilax endo-transformation with j=id_B in left-left YD classic.) The notation in these two diagrams is the usual one for braided monoidal categories, concretely 21 and 21 stand for the (co)multiplication of B'. That the given ψ and ϕ are desired distributive laws (i.e. (co)lax natural transformations) it was proved at the beginning of <cit.>, though for B=B' and trivial j. The algebra (resp. coalgebra) morphism property of j^-1 (resp. j) makes left-left YD classic the desired distributive laws for nontrivial j. The first claim now follows from <cit.>, whose conditions are fulfilled since is braided. 
Set ν_F,X=(M j^-1)Φ_F',X and ν_X,F=Φ_X,F'(M j). The second claim follows from Corollary 7.6 of loc. cit.. both-braided Let (χ, ψ, ϕ) be a bilax natural transformation between bilax functors with compatible Yang Baxter operators χ: ⇒':Del()→ Del() where and are braided monoidal categories with braidings Φ_ and Φ_, respectively. Then F:=_*,* and G:='_*,* are bimonoidal functors → as in braided, χ(*)=M is an object in and there are morphisms ψ_X: G(X) M → M F(X) and ϕ_X: M F(X) → G(X) M natural in X∈, where ψ is a distributive law for the monoidal functor structures, and ϕ is a distributive law for the comonoidal functor structures, so that the left identity below holds, and consequently the one next to it: 47-1G(XY)13M1F(Z)110112231ψ_XY1λ_XY,Zϕ_XZ221311101-1G(XZ)13M1F(Y)= 671G(XY)3M1F(Z)22031113111ϕ_Z1λ'_XY,Z12230ψ_Y2111131G(XZ)11M3F(Y), 241G(X)1Mψ_Xϕ_X1G(X)1M= 452G(X)3M112G(X)3M, with λ_XY,Z:= 452F(XY)2F(Z)1112F(XZ)2F(Y). For bialgebras B in by preserve bimnd F(B), G(B) are bialgebras in , and if ψ_B, ϕ_B are of the form as in left-left YD classic, we recover classical Yetter-Drinfel`d modules in . The bilax natural transformations, i.e. identities psi-lambda-phi for bimonads and YD, offer the following point of view. Given any monoidal category , when one considers the center category _l^w(), one is given a family of colax transformations ϕ. In particular, when =_H the category of modules over a bialgebra or a Hopf algebra H in a braided monoidal category , one is able to construct lax transformations ψ (as in left-left YD classic) so that the given ϕ and this ψ obey YD - since ϕ is H-linear, being a morphism in _H. Similarly, considering _r^w(^H) one is given ψ's and one constructs ϕ's, so that they together obey YD. As in the proof of the above Proposition (that is, as proved in <cit.>), in this setting the bilax condition psi-lambda-phi for bimonads follows. §.§ Bilax modifications We finally introduce: Let χ, χ' : (, ν) ⇒ (', ν') : (, c) →' be bilax natural transformations. A bilax modification a: χ⇛χ' is a collection of 2-cells (a(A))_A∈() satisfying equations: lax modif361χ_B1(x)11ϕ_x1a(A)110'(x)3χ'_A= 2611χ_B1(x)111a(B)11ϕ'_x11110'(x)3χ'_A colax modif362'(x)1χ_A1111ψ_xa(B)111110χ'_B3(x) = 360'(x)3χ_A111a(A)ψ'_x111χ'_B2(x). Equivalently, a bilax modification is a modification both of lax and colax natural transformations: a: ψ⇛ψ' and a: ϕ⇛ϕ', where (ψ,ϕ) constitute χ and (ψ',ϕ') constitute χ'. 2-cells Bimnd Pursuing 1-cell for K=1 a bilax modification between bilax natural transformations of bilax functors 1→ is precisely a 2-cell in (). bilax modif Recall bilax trans triv where bilax natural transformations are 1-cells in the 2-category of mixed distributive laws () of <cit.>. In this setting a bilax modification of bilax natural transformations is a 2-cell ζ: X⇒ Y in that satisfies: 341m(B')1Xψζ11Y1m(B) = 341m(B')1X1ζψ'1Y1m(B) and 241X1c(B)ϕ1ζ1c(B')1Y= 241X1c(B)ζ1ϕ'1c(B')1Y. As such it is a 2-cell in the 2-category () of <cit.>. In the setting of triv-braided, where bilax natural transformations are strong Yetter-Drinfel`d modules, a bilax modification of bilax natural transformations is a morphism f in satisfying: 341B'1Mψf11N1B = 341B'1M1fψ'1N1B and 241M1Bϕ1f1B'1N= 241M1Bf1ϕ'1B'1N. By left G(b)-comod and left G(b)-mod this means that f is both a morphism of left B'-modules and left B'-comodules. This is a morphism of strong Yetter-Drinfel'd modules from triv-braided. 
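For readers more at home in the classical setting, the compatibility YD-def specialises as follows when B is a bialgebra over a field, M carries a left action · and a left coaction m ↦ m_(-1)⊗ m_(0) (Sweedler notation); this standard form is recalled here only because the displays above are garbled in this extraction:

b_(1) m_(-1) ⊗ b_(2)· m_(0) = (b_(1)· m)_(-1) b_(2) ⊗ (b_(1)· m)_(0).

When B is a Hopf algebra with bijective antipode S, this condition is equivalent to ρ(b· m) = b_(1) m_(-1) S(b_(3)) ⊗ b_(2)· m_(0), which is the equivalent formulation alluded to in the remark on antipodes above.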
Now we may formulate: The category of Yetter-Drinfel`d modules ^B_B() over a bialgebra B in a braided monoidal category (,Φ_) is a full subcategory of the category (_B) of bilax endo-transformations on a bilax functor _B:(Del(),Φ_)→ (Del(),Φ_) with compatible Yang-Baxter operator as in triv-braided and bilax modifications. Similarly one has: YD iso The category ^B_B() is isomorphic to the category (_B) of bilax endo-transformations on a bilax functor _B:1→ (Del(),Φ_) with compatible Yang-Baxter operator as in blx from triv and bilax modifications. §.§ 2-category of bilax functors We finish this section by concluding that bilax functors (with compatible Yang-Baxter operator) →', bilax natural transformations and bilax modifications form a 2-category (,') (respectively, _c(,')). The composition of bilax transformations (χ, ψ, ϕ):⇒ and (χ', ψ',ϕ'): ⇒ is easily seen to be induced by the vertical compositions of the (co)lax transformations ψ'·ψ, ϕ'·ϕ, namely by: (ψ'·ψ)_f= 56-1(f)21ψ'(A)3ψ(A)11-1112231ψ'_f121-1ψ_f111113-2ψ'(B)31ψ(B)12(f) and (ϕ'·ϕ)_f= 56-1ϕ'(B)21ϕ(B)3(f)12-11111311ϕ_fϕ'_f1212111-11-1(f)21ϕ'(A)11ϕ(A). Bilax modifications compose both horizontally and vertically, in the obvious and natural way. We comment for the record that although the lax and colax natural transformations compose horizontally by: comp colax (ψ'ψ)_f= 57-1'(f)22ψ'(A)11'ψ(A)11-1112243ψ'_(f)13121-2'(ψ_f)13-2ψ'(B)31'ψ(B)12'(f) and (ϕ'ϕ)_f= 57-2ϕ'(B)31'ϕ(B)12'(f)12-2131'(ϕ_f)13ϕ'_(f)2234111-11-1'(f)22ϕ'(A)11'ϕ(A), the horizontal composition of lax and colax natural transformations does not induce a bilax transformation. Namely, in order for this to work, the (co)lax structures should be identities. We finally compare the 2-category (,'), more precisely its special case (1,), with two other existing 2-categories in the literature, namely () and () mentioned before. From T(id), 1-cell for K=1 and 2-cells Bimnd we clearly have: There is a 2-category isomoprhism (1, )(). From T(id), bilax trans triv and bilax modif, it can be appreciated that on the level of 1- and 2-cells there is a faithful assignment ()↪(). Since the 0-cells of () are given by tupples (, T, D, λ), where T is a monad and D a comonad on a 0-cell in , and λ:TD⇒ DT is a distributive law with respect to monad and comonad as in lambda d.l., we clearly have: There is a faithful 2-functor ()↪(), which is defined on 0-cells by (, B, ν)↦(, m(B),c(B), λ), with λ being λ(ν):= 452B1B11ν12B1B. Observe that YD iso is a consequence of the above 2-category isomorphism (1, )(). Acknowledgments. The first author was supported by the Science Fund of the Republic of Serbia, Grant No. 7749891, Graphical Languages - GWORDS. 99 Agui M. Aguiar, S. Mahajan, Monoidal functors, species and Hopf algebras, CRM Monograph Series 29 Amer. Math. Soc. (2010). BD J. C. Baez, J. Dolan, Higher?dimensional algebra and topological quantum field theory, J. Math. Phys. 36/6073 (1995); https://doi.org/10.1063/1.531236. Ben J. Bénabou, Introduction to bicategories, Lecture notes in mathematics 47 (1967). Besp Y. Bespalov, B. Drabant, Hopf (bi-)modules and crossed modules in braided monoidal categories, J. Pure Appl. Alg. 123/(1-3) (1998), 105–129. CF J. Cuadra, B. Femić, A Sequence to Compute the Brauer Group of Certain Quasi-Triangular Hopf Algebras, Applied Categorical Structures 20 (2012), 433–512. DSS C. L. Douglas, C. Schommer-Pries, N. Snyder, Dualizable Tensor Categories, Memoirs of the American Mathematical Society 268 (2020). EGNO P. Etingof, S. Gelaki, D. Nikshych and V. Ostrik. 
Tensor categories. Mathematical Surveys and Monographs 205, Amer. Math. Soc., Providence (2015). ENO P. Etingof, D. Nikshych, V. Ostrik, Fusion categories and homotopy theory, Quantum Topol. 1/3, (2010) 209–273. FMS P.F. Faul, G. Manuell, J. Siqueira, 2-Dimensional Bifunctor Theorems and Distributive laws, Theory Appl. Categ. 37/34 (2021), 1149–1175. F1 B. Femić, Biwreaths: a self-contained system in a 2-category that encodes different known algebraic constructions and gives rise to new ones, J. Pure Appl. Alg. 223/4 (2019), 1472–1513. F2 B. Femić, A bicategorical approach to actions of monoidal categories, J. Algebra Applic. (2022). GNN S. Gelaki, D. Naidu, D. Nikshych, Centers of graded fusion categories, Algebra Number Theory 3/8 (2009), 959–990 . DOI: 10.2140/ant.2009.3.959 Gray J. W. Gray, Formal category theory: adjointness for 2-categories, Lecture Notes in Mathematics 391, Springer-Verlag, Berlin-New York (1974) 1, 19, 27. HZ S. Halbig, T. Zorman, Pivotality, twisted centres, and the anti-double of a Hopf monad, preprint arxiv.org/abs/2201.05361. JS A. Joyal, R. Street, Braided Tensor Categories, Advances in Mathematics102/1 (1993), 20–78. JY N. Johnson, D. Yau, 2-Dimensional Categories, Oxford University Press (2021). Kassel C. Kassel, Quantum Groups, Graduate Texts in Mathematics 155, Springer-Verlag, New York (1995). Lack-Icons S. Lack, Icons, Applied Categorical Structures 18/3 (2010), 289–307. Lack1 S. Lack, A 2-Categories Companion, Towards Higher Categories, The IMA Volumes in Mathematics and its Applications book series 152 (2009), 105–191. LS S. Lack, R. Street, The formal theory of monads II, J. Pure Appl. Algebra 175/(1-3) (2002), 243–265. LaSw R. G. Larson, M. E. Sweedler, An Associative Orthogonal Bilinear Form for Hopf Algebras, American Journal of Mathematics 91/1 (1969), 75–94. Ly V. Lyubashenko, Modular Transformations for Tensor Categories , J. Pure Appl. Algebra 98 (1995), 279–327. Majid S. Majid, Representations, duals and quantum doubles of monoidal categories, Proceedings of the Winter School on Geometry and Physics (Srní, 1990), Number 26, 197–206, (1991). CS M. B. McCurdy, R. Street, What Separable Frobenius Monoidal Functors Preserve, Cahiers de Topologie et Géométrie Différentielle Catégoriques 51/1 (2010). Ehud E. Meir, M. Szymik, Drinfeld centers for bicategories, Doc. Math. 20 (2015), 707–735. Wisb B. Mesablishvili, R. Wisbauer, Bimonads and Hopf Monads on Categories, Journal of K-theory K-theory and its Applications to Algebra Geometry and Topology 7/2 (2011), 349–388. Moe I. Moerdijk, Monads on tensor categories, J. Pure Appl. Algebra 168/2-3 (2002), 189-208. PW J. Power, H. Watanabe, Combining a monad and a comonad, Theoretical Computer Science 280 (2002), 137–262. Shim K. Shimizu: Ribbon structures of the Drinfel`d center, arXiv:1707.09691 (2017a) St1 R. Street, The formal theory of monads, J. Pure Appl. Algebra 2 (1972), 149-168. Tak M. Takeuchi, Survey of braided Hopf algebras, New Trends in Hopf Algebra Theory (La Falda, 1999), Contemp. Math. 267, Amer. Math. Soc., Providence, RI (2000), pp. 301–323. xlvii, 40, 631. TP D. Turi, G.D. Plotkin, Towards a mathematical operational semantics, In Proc. 12th LICS Conf., pages 280–291. IEEE, Computer Society Press (1997). Wor S. L. Woronowicz, Differential Calculus on Compact Matrix Pseudogroups (Quantum Groups), Commun. Math. Phys. 122, 125 (1989).
http://arxiv.org/abs/2306.10526v2
20230618110928
Steady states of two-dimensional granular systems are unique, stable, and sometimes satisfy detailed balance
[ "Alex D. C. Myhill", "Raphael Blumenfeld" ]
cond-mat.soft
[ "cond-mat.soft", "math-ph", "math.MP", "nlin.AO" ]
Understanding the structural evolution of granular systems is a long-standing problem. A recently proposed theory for such dynamics in two dimensions predicts that steady states of very dense systems satisfy detailed-balance. We analyse analytically and numerically the steady states of this theory in systems of arbitrary density and report the following. 1. We discover that all such dynamics almost certainly possess only one physical steady state, which may or may not satisfy detailed balance. 2. We show rigorously that, if a detailed balance solution is possible, then it is unique. The above two results correct an erroneous conjecture in the literature. 3. We show rigorously that the detailed-balance solutions in very dense systems are globally stable, extending the local stability found for these solutions in the literature. 4. In view of recent experimental observations of robust detailed balance steady states in very dilute cyclically sheared systems, our results point to a self-organisation of process rates in dynamic granular systems. Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK Bullard Laboratory, University of Cambridge, Madingley Road, Cambridge CB3 0EZ, UK [email protected] Gonville & Caius College, University of Cambridge, Trinity Street, Cambridge CB2 1TA, UK ESE, Imperial College London, Prince Consort Rd, London SW7 2AZ, UK Steady states of two-dimensional granular systems are unique, stable, and sometimes satisfy detailed balance Raphael Blumenfeld 1. Introduction Granular matter is ubiquitous in nature and plays a major role in our everyday life. Its near-indifference to thermal fluctuations has earned it recognition as a new form of matter <cit.>. In spite of many decades of intensive theoretical, numerical, and experimental investigations into this form of matter, new aspects of its rich and complex behaviour are being discovered. The sensitivity of the large-scale behaviour and properties to the particle-scale characteristics and structure has hindered the modelling of granular matter to date <cit.>. Consequently, of key significance is the modelling of granular dynamic evolution and the mechanically stable structures that such dynamics settle into. In particular, when the dynamics are quasistatic, the steady-state dynamics determine those stable structures and it is on this type of dynamics that we focus here. Several methods have been proposed to describe and model the evolution of the underlying structure during quasistatic dynamics <cit.>. A general way to describe mechanically stable granular structures in two dimensions (2D) is based on what is known as the cell order distribution (COD) <cit.>. A cell is the smallest void (loop) in the system, surrounded by particles in contact, and its order is defined as the number of particles in contact surrounding it. During quasi-static dynamics, the COD evolves by intergranular contacts being made and broken, which split and merge cells, respectively. Such a process is shown schematically in Fig. <ref>. The dynamic equations for the COD evolution are <cit.>: Q̇_k = 1/2∑_i=3^k-1(p_{i,k-i+2}Q_iQ_{k-i+2} - q_{i,k-i+2}Q_k)(1 + δ_{i,k-i+2}) - ∑_i=k+1^𝒞(p_{k,i-k+2}Q_kQ_{i-k+2} - q_{k,i-k+2}Q_i)(1 + δ_{i,2k-2}) + Q_k ∑_{all possible processes i,j}(p_{i,j}Q_iQ_j - q_{i,j}Q_{i+j-2}) .
p_i,j is the merging rate of i- and j-cells into an (i+j-2)-cell, q_i,j is the rate of the splitting of an (i+j-2)-cell into an i- and a j-cell, and 𝒞 is the highest possible cell order in the system. The factor 1/2 and the δ-functions ensure correct counting. The last term on the right hand side is needed because the total number of cells changes with each merging or splitting event, which changes the fractions Q_k. Rattlers, which are particles with one or no contact, were ignored in these equations, which is a good approximation for dense systems with low-order cells. Including rattlers in the analyses that follow is possible, but the added complication does not add much insight and we disregard them. It was found in <cit.> that, under some conditions, the steady-state cell order transitions of these far-from-equilibrium systems satisfy detailed balance, when the cell orders do not exceed six, a result corrected later to five <cit.>. The steady states of the evolution equations were also shown to be locally stable. Recent experiments on quasi-statically sheared 2D granular systems <cit.> have revealed a surprising observation – they always settled into steady states that satisfy detailed balance. Such robustness suggests that these steady states are not only stable, but also may be unavoidable. Moreover, these observations appear to contradict the paradigm that steady states of non-equilibrium dynamics cannot satisfy detailed balance <cit.>. Motivated by these experimental observations, we analyse here eqs. (<ref>) in detail. We investigate the conditions for detailed balance and the properties of such steady states. We show that: (i) if a steady state satisfies detailed balance in systems where 𝒞=6,7 then it is the only possible steady state; (ii) there is strong numerical evidence that, in systems where 𝒞=6,7, or 8, only one physical steady state is possible, whether or not it satisfies detailed balance; (iii) the steady state in systems where 𝒞=5 is not only locally but also globally stable. These findings provide a partial explanation for the observed convergence to detailed balanced steady states in <cit.>. § ANALYSIS OF THE STEADY STATE SOLUTIONS §.§ General steady states The back-and-forth processes (i) + (j)⇌ (i+j-2) are equivalent to chemical reactions in a multi-component reactive system. Their net flux is η_i,j=p_i,jQ_iQ_j-q_i,jQ_i+j-2 and η_i,j=0 when they are balanced. At steady state, the sum of all the processes, ∑_i,jη̅_i,j, vanishes by definition and eqs. (<ref>) reduce to 0 = 1/2∑_i = 3^k - 1η̅_{i,k-i+2} (1 + δ_{i,k-i+2}) - ∑_i = k + 1^C η̅_{k,i-k+2}(1 + δ_{i,2k-2}) , in which the bars indicate steady state values. Given 𝒞, there can be (𝒞-2)^2/4 or [(𝒞-2)^2-1]/4 processes, when 𝒞 is even or odd, respectively. Focusing on the even case, for brevity, it is useful to rewrite eqs. (<ref>) as H ·η̅= 0 . Here, H is a (𝒞-2)×[(𝒞-2)^2/4] matrix and the vector η̅'s components are all the steady-state η-fluxes. Thus, η̅ must exist within the null space of H. The normalisation constraint, ∑_k Q̅_k = 1, reduces the number of independent first-order equations in (<ref>) to 𝒞-3. Thus, by Bézout's theorem <cit.> and since the η_i,j are quadratic in the Q_i, the maximal number of solutions is 2^{𝒞-3} for any given set of rates, p_i,j and q_i,j. Below, we show analytically that, at least up to 𝒞=7, only one of these solutions is physical – the detailed balance steady state (when it exists), in which η_i,j=0 for all i,j.
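As a concrete illustration of eqs. (<ref>) and of the steady states discussed in this section, the following minimal Python sketch (not part of the paper; the rate values are arbitrary illustrative choices) integrates the evolution equations for 𝒞=5 from an arbitrary initial COD and prints the resulting steady state together with the residual fluxes, which vanish when detailed balance holds.

# Illustrative constant rates for C = 5 (two processes); these values are assumptions.
C = 5
p = {(3, 3): 1.0, (3, 4): 1.0}   # merging rates p_{i,j}
q = {(3, 3): 1.0, (3, 4): 0.5}   # splitting rates q_{i,j}

def eta(i, j, Q):
    """Net flux of the process (i) + (j) <-> (i + j - 2)."""
    i, j = min(i, j), max(i, j)
    return p[(i, j)] * Q[i] * Q[j] - q[(i, j)] * Q[i + j - 2]

def rhs(Q):
    """Right-hand side of the cell-order evolution equations."""
    total = sum(eta(i, j, Q) for (i, j) in p)          # sum over all processes
    out = {}
    for k in range(3, C + 1):
        gain = 0.5 * sum(eta(i, k - i + 2, Q) * (1 + (i == k - i + 2)) for i in range(3, k))
        loss = sum(eta(k, i - k + 2, Q) * (1 + (i == 2 * k - 2)) for i in range(k + 1, C + 1))
        out[k] = gain - loss + Q[k] * total            # last term: renormalisation
    return out

Q = {3: 1 / 3, 4: 1 / 3, 5: 1 / 3}                     # arbitrary initial COD
for _ in range(200_000):                               # simple forward-Euler integration
    d = rhs(Q)
    for k in Q:
        Q[k] += 1e-3 * d[k]

print({k: round(v, 4) for k, v in Q.items()})          # converges to {3: 0.5, 4: 0.25, 5: 0.25}
print({proc: round(eta(*proc, Q), 12) for proc in p})  # both fluxes ~ 0, i.e. detailed balance

For 𝒞=5 this reproduces the unique detailed-balance state discussed below (here Q̅_3 = 1/2 and Q̅_4 = Q̅_5 = 1/4 for the chosen rates).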
Indeed, an extensive numerical investigation over a wide range of parameters supports this conclusion – all other numerical solutions included either imaginary or negative Q_k fractions. §.§ The detailed-balance steady state is unique The uniqueness of the detailed balance steady state was established for 𝒞<6 <cit.>. We now extend this result to systems comprising arbitrary orders. Defining θ_i,j = p_i,j/q_i,j, each balanced process satisfies Q̅_i + j - 2 = θ_i,jQ̅_i Q̅_j. It follows that Q̅_k = Q̅_3^k - 2∏_i = 3^k - 1θ_3,i . The detailed-balance steady-state solution depends solely on the ratios θ_i,j and on Q̅_3. The value of Q̅_3 can then be found from the normalisation condition: Q̅_3 + ∑_k = 2^C{Q̅_3^k∏_i = 3^k+1θ_3,i} = 1 . We note in passing that the values of θ_i,j are not independent because cells of order k>5 can be formed by more than one process <cit.>. An example of a system in which detailed balance is possible (indeed observed) is the experimental steady states observed in <cit.>, which satisfy θ_i,j = θ_0 for all i,j. This gives rise to an exponentially decaying COD. Imposing such a condition, we calculate explicitly the COD of the emerging detailed-balance steady state of two systems in which 𝒞 = 7 and θ_0=0.5 and 2. These are shown in fig:ssexam. Nevertheless, this dependence does not affect our following argument regarding the uniqueness of the detailed balance steady state. Since the rates p_i,j and q_i,j must be non-negative then θ_i,j≥0, for all i,j, and the left hand side of (<ref>) is a monotonically increasing function of Q̅_3. Additionally, for 𝒞>3, it vanishes at Q̅_3=0 and exceeds 1 at Q̅_3=1. Thus, only one solution exists in the range 0<Q̅_3<1 and this is the only possible detailed-balance solution. This conclusion holds for both odd and even values of C. It follows that, if a system evolves into a detailed-balance steady state then that state is unique and independent of initial conditions. This goes some way to explain the robustness of the recently observed detailed-balance in <cit.>. §.§ The case 𝒞=6 The next question is whether or not there also exist steady states that do not satisfy detailed balance. There are [(𝒞-2)^2-1]/4 or (𝒞-2)^2/4 potential finite values of the fluxes η̅_i,j to determine from eqs. (<ref>), for 𝒞 odd or even, respectively. With only 𝒞-3 independent eqs. available, in the case of 𝒞 = 5 there are two eqs. for two unknowns. It follows that detailed balance is the only steady state for systems up to 𝒞=5. Since such systems have only two processes, 3+3⇋4 and 3+4⇋5, each must be balanced separately, as no cycle is possible <cit.>. This detailed balance is then straightforward. However, since η̅_i,j are underdetermined for 𝒞>5, it was conjectured that, in addition to detailed-balance, such systems support an infinite number of other stable steady states <cit.>. To investigate this conjecture, we re-examine the steady state. Focusing initially on the case 𝒞=6, there are four processes: η_3,3, η_3,4, η_3,5, and η_4,4, but only three independent equations. Rewriting (<ref>) as [ -2 -1 -1 0; 1 -1 0 -2; 0 1 -1 0; 0 0 1 1 ][ η_3,3; η_3,4; η_3,5; η_4,4 ] = 0 , leaves one flux underdetermined. We parameterise the solution by η_3,3≡ A: η = [ η_3,3; η_3,4; η_3,5; η_4,4 ]=[ A; -A; -A; A ] . It should be noted that, for any finite value of A, this steady state involves a cycle, 4 + 4→6→3 + 5→6→4 + 4 <cit.>. 
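As a quick sanity check of this parameterisation (an illustrative computation, not taken from the paper), the matrix in eq. (<ref>) indeed has a one-dimensional null space spanned by (1, -1, -1, 1):

import numpy as np
from scipy.linalg import null_space

H = np.array([[-2, -1, -1,  0],
              [ 1, -1,  0, -2],
              [ 0,  1, -1,  0],
              [ 0,  0,  1,  1]], dtype=float)

ns = null_space(H)
print(ns.shape)              # (4, 1): a single free parameter, identified with A in the text
print((ns / ns[0]).ravel())  # [ 1. -1. -1.  1.]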
Using (<ref>) and (<ref>), the steady-state cell fractions are: Q̅_4 = (p_3,3Q̅_3^2- A)/q_3,3, Q̅_5 = (p_3,4Q̅_3 Q̅_4 + A)/q_3,4, Q̅_6 = (p_3,5Q̅_3 Q̅_5 + A)/q_3,5, Q̅_6 = (p_4,4Q̅_4^2 - A)/q_4,4 . Eliminating variables, and imposing normalisation, yields a cumbersome equation for Q̅_3, which in all but the simplest cases, can only be solved numerically. It is the dependence on the continuous parameter A that led to the conjecture in <cit.> that there is an infinite family of solutions in these systems. We show next that this is not the case, namely, if A=0 is a solution then no other solution exists. Firstly, note that, when A=0, θ_i,jQ_iQ_j=Q_i+j-2. Eliminating Q_4,Q_5 and Q_6 from eqs. (<ref>), we obtain θ_3,3θ_4,4 = θ_3,4θ_3,5. Since p_i,j,q_i,j≥0 then eqs. (<ref>) and (<ref>) imply that θ_4,4Q̅_4^2 > Q̅_6 and Q̅_6 > θ_3,5Q̅_3 Q̅_5 when A > 0. Additionally, from (<ref>), we have Q̅_5 > θ_3,4Q̅_3Q̅_4. Taken together, these yield θ_4,4Q̅_4^2 > θ_3,4θ_3,5Q̅_3^2 Q̅_4. Now, using the detailed balance condition and eliminating Q̅_6, we have Q̅_4 > θ_3,3Q̅_3^2, but (<ref>) implies Q̅_4 < θ_3,3Q̅_3^2. We have arrived at a contradiction, which means that A>0 cannot be a solution. A similar chain of analysis shows that A<0 is also impossible. It follows that, if the detailed balance solution, A=0, exists, then it is the only possible steady state. In the supplementary material, we use a similar analysis to show that the same conclusion holds for 𝒞=7. This conclusion improves on the conjecture in  <cit.>. We believe that this line of proof can be extended to 𝒞>7, although not without substantial effort. However, a different approach is required for a general proof. These results, combined with the extensive numerical investigations reported below, and the experimental observations in <cit.>, lead us to conjecture that, when a detailed balance steady state exists, it is the only possible steady state for any value of 𝒞. §.§ Numerical investigation of non-detailed-balance steady states The rates parameter space is infinitely large and most combinations of rates lead to non-detailed-balance solutions (with detailed balance only possible if the relation established in <ref> is satisfied, i.e. θ_3,3θ_4,4 = θ_3,4θ_3,5). To understand the nature of these solutions, we explored the parameter space numerically (see also supplementary material). Starting with 𝒞 = 6, we tested the 4^8=65536 rate value combinations when each rate can assume any of the four values: 0.1, 0.5, 1.0, and 3.0. For each combination, we found all the solutions and noted the number of solutions. Unexpectedly, each combination gave rise to only one physical solution, with all the others containing either negative or complex values of some Q̅_k. To test the potential generality of this surprising observation, we solved numerically for the steady-state solutions in systems where 𝒞 = 7 and 8. These have, respectively, 6 and 9 processes and 12 and 18 variable rates. Owing to the required larger computational resources, we tested only 6 rate combinations for 𝒞 = 7: {0.1,0.5},{0.1,1},{0.1,2},{0.5,1},{0.5,2},{1,2}, and in total 6× 2^12=24576 different systems. In each test, the values of p_i,j and q_i,j can take either of the pair of values noted. For 𝒞 = 8, we used the set of rates {0.1,0.5} and {0.5,2}, and in total 2× 2^18=524288 different systems. We found that in none of these systems was there more than one physical steady state solution. 
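The following sketch mimics, for a single rate set, the kind of search described above (it is illustrative and not the authors' code; the rate values are arbitrary choices that violate the detailed-balance relation θ_3,3θ_4,4 = θ_3,4θ_3,5, so a solution with A ≠ 0 is expected). It solves the steady-state relations together with normalisation for (Q̅_3, Q̅_4, Q̅_5, Q̅_6, A) from many random starting points and keeps the distinct physical solutions.

import numpy as np
from scipy.optimize import fsolve

# Illustrative rates for C = 6; q_{4,4} = 0.5 breaks theta_{3,3}*theta_{4,4} = theta_{3,4}*theta_{3,5}.
p = {(3, 3): 1.0, (3, 4): 1.0, (3, 5): 1.0, (4, 4): 1.0}
q = {(3, 3): 1.0, (3, 4): 1.0, (3, 5): 1.0, (4, 4): 0.5}

def residual(x):
    Q3, Q4, Q5, Q6, A = x
    return [p[(3, 3)] * Q3 ** 2 - A - q[(3, 3)] * Q4,
            p[(3, 4)] * Q3 * Q4 + A - q[(3, 4)] * Q5,
            p[(3, 5)] * Q3 * Q5 + A - q[(3, 5)] * Q6,
            p[(4, 4)] * Q4 ** 2 - A - q[(4, 4)] * Q6,
            Q3 + Q4 + Q5 + Q6 - 1.0]

rng = np.random.default_rng(0)
physical = []
for _ in range(500):                                   # multi-start root finding
    x0 = np.concatenate([rng.random(4), rng.normal(scale=0.1, size=1)])
    x, _, ier, _ = fsolve(residual, x0, full_output=True)
    converged = ier == 1 and np.max(np.abs(residual(x))) < 1e-10
    if converged and np.all(x[:4] > 0) and not any(np.allclose(x, s, atol=1e-8) for s in physical):
        physical.append(x)

print(len(physical))   # a single physical solution is expected
print(physical)        # its last entry A is non-zero: a steady state without detailed balance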
Based on these investigations, we conjecture that, for any choice of constant rates, there is only one physical solution regardless of the upper order, 𝒞. In Fig. <ref>, we show an example of the one non-detailed-balance solution when 𝒞 = 6, for a set of rate parameters that also admits five other non-physical solutions (listed in the supplemental material). It can be observed in Fig. <ref> that the difference between the breaking and making rates are the same for all rates. In particular, the vertical offset from the detailed balance line are the same and equal to A. A typical figure for 𝒞 = 8, for a system that also does not satisfy DB, is shown in the supplementary material. § GLOBAL STABILITY FOR 𝒞=5 A linear analysis of the steady states of eqs. (<ref>) has shown them to be asymptotically stable <cit.>, a prediction that has been supported experimentally <cit.>. This, however, does not preclude possible limit cycles around the steady state away from the linear regime. We investigate next the global stability of the solution and show that, at least for the dense system comprising cells of orders 3-5, no such cycles exist. Using the normalisation condition to eliminate Q_5, the independent evolution equations can be written as Q̇_3 =(Q_3 - 2)κ_3,3 + R(Q_3 - 1) κ_3,4, Q̇_4 = (Q_4 + 1)κ_3,3+ R(Q_4 - 1) κ_3,4 , in which the rates q_i,j and θ_i,j are assumed for now to be constant (more on this condition below), R≡ q_3,4/q_3,3, κ_3,3≡θ_3,3Q_3^2-Q_4, κ_3,4≡θ_3,4Q_3 Q_4+Q_3+Q_4-1, and time is scaled: t→ t'≡ q_3,3 t, such that Q̇_k=dQ_k/dt'. The fractions are constrained by Q_3, Q_4 ≥ 0 and Q_3 + Q_4 ≤ 1. We analyse eqs. (<ref>) within this region of the Q_3-Q_4 plane, using the theorems of Bendixson and Poincaré-Bendixson <cit.>. The former states that, in two-variables dynamical systems, ẋ = f(x,y) and ẏ = g(x,y), if V(x,y) = ∂_x(f) + ∂_y(g) is non-zero and has the same sign throughout a simply-connected x-y domain, then no closed orbits can lie within that domain <cit.>. We define V(Q_3,Q_4) = -1 - 4θ_3,3 Q_3(1 - Q_3) - 3Q_4 - R + 3R(Q_3 + Q_4 - 1) + θ_3,4 R(4Q_3Q_4 - Q_3 - Q_4). By the inequalities of the arithmetic and geometric means, (Q_3+Q_4) ≥ (Q_3 + Q_4)^2 ≥ 4Q_3Q_4, we have 4Q_3Q_4 - Q_3 - Q_4≤ 0 . Using Q_3(1-Q_3)≥ 0, Q_3 + Q_4 ≤ 1 and (<ref>) in (<ref>), we establish that V(Q_3,Q_4) ≤ -1 - R . Since R>0, V(Q_3,Q_4)<0 throughout the region to which the system is confined. Thus, according to the Bendixson theorem, this system has no limit cycles. Combining this result with the established uniqueness of the steady state <cit.> then, by the Poincaré-Bendixson theorem, the limit set contains only that steady state <cit.>. Thus, the detailed balance steady state is globally stable for any physical initial condition. § CONCLUSIONS AND FUTURE WORK To conclude, we studied, both theoretically and experimentally, the nature of the detailed-balance steady states into which the non-equilibrium dynamics of granular matter has been found to settle. We have proven that, for any maximum cell order, 𝒞, there can only be one detailed-balance steady state. We have also shown rigorously that, if a detailed-balance steady state solution exists up to 𝒞=7 then it is the only solution. Intriguingly, by solving the steady state equations numerically for 614400 systems up to 𝒞=8, we found that there is always only one physical solution, in which Q_k is real and positive for all k. 
We conjecture that the evolution equations (<ref>) yield only one physical solution for the steady state, which may or may not satisfy detailed balance. This conjecture is supported by clear experimental observations of detailed-balance steady states for systems with 𝒞>10 <cit.>. Next, we used the theorems of Bendixson and Poincaré-Bendixson to show that the detailed balance solution of the dynamics of systems with 𝒞=5 is globally stable and no periodic orbits exist. This may explain the robustness of such solutions observed experimentally <cit.>. It should be noted that, in our discussion, the steady-state rates p_i,j and q_i,j are constant in time but otherwise of arbitrary form, subject to the condition of detailed balance. However, they need not be constant and may evolve during the approach to the steady state. Another intriguing implication of our results is the following. We have found that most systems with 𝒞>5 settle into steady states that do not necessarily satisfy detailed balance. For example, solving for the only solution when 𝒞=6, with p_3,5 = p_4,4 = q_3,3 = q_3,4 = q_3,5 = 1 and q_4,4 = 0.5, we find that not all η_i,j=0, namely, there is no detailed balance. Yet, the experiments of <cit.> reveal that, in a range of quasi-statically cyclically sheared 2D granular systems, the steady states always satisfy detailed-balance. This suggests that the cell breaking and merging rates in those experiments were neither arbitrary nor time-independent. Rather, they must have evolved as the granular systems self-organised into values that satisfy detailed balance. This seems the most plausible explanation for the emergence of detailed balance in those experiments and could reconcile the discrepancy between those observations and what appears to be a violation of the paradigmatic Klein principle <cit.>. Rate equations similar to (<ref>) have been used for modelling many evolution processes, in particular in models of aggregation-fragmentation (AF). Nevertheless, there are some similarities and differences between the granular dynamics we study here and those models. The similarities are that our cell order fractions, Q_k, are analogous to aggregate sizes and all models have transition rates. Additionally, many models of AF assume detailed balance, implicitly or explicitly <cit.>. One minor difference is that the normalisation of our cell order fraction takes into consideration the changing total number of cells, which gives rise to the last term in our eq. (<ref>). There are, however, more significant differences. One is that, unlike in most AF models, we do not assume the mathematical forms of the steady-state rates, p_i,j and q_i,j, subject to the condition that they satisfy detailed balance. This makes our analysis more general and more applicable than the studies that make such assumptions. Another difference is that most studies of AF dynamics simplify the analysis by allowing the size of aggregates to tend to infinity, which is often not physical. In contrast, our analysis applies to arbitrarily finite highest order, 𝒞. The third difference is quite fundamental. As mentioned, many models of AF processes assume detailed-balanced steady states. While this phenomenon is well established for systems in equilibrium, the current belief in the community is that it cannot be satisfied in out-of-equilibrium steady states.
Indeed, our motivation to study steady states of sheared granular systems is the surprisingly strong experimental evidence in <cit.> of persistent detailed balance in the steady states of their granular dynamics. To the best of our knowledge, no such evidence exists for the non-equilibrium systems to which some AF models presume to apply. This also means that our results apply only to AF systems either in strict equilibrium or where detailed balance has been established experimentally.
http://arxiv.org/abs/2306.01458v1
20230602113602
Extremely large-scale Array Systems: Near-Field Codebook Design and Performance Analysis
[ "Feng Zheng" ]
cs.IT
[ "cs.IT", "cs.SY", "eess.SP", "eess.SY", "math.IT" ]
Extremely large-scale Array (ELAA) promises to deliver ultra-high data rates with more antenna elements. Meanwhile, the increase of antenna elements leads to a wider realm of near-field, which challenges the traditional design of codebooks. In this paper, we propose novel codebook design schemes which provide better quantized correlation with limited overhead. First, we analyze the correlation between codewords and channel vectors for the uniform linear array (ULA) and the uniform planar array (UPA). The correlation formula for the ULA channel can be expressed as an elliptic function, and the correlation formula for the UPA channel can be represented as an ellipsoid formula. Based on the analysis, we design a uniform sampling codebook to maximize the minimum quantized correlation and a dislocation ULA codebook to reduce the number of quantized bits further. Besides, we give a better sampling interval for the codebook of the UPA channel. Numerical results demonstrate the appealing advantages of the proposed codebook over existing methods in quantization bit number and quantization accuracy. ELAA, codebook, correlation fitting formula, near-field, quantization bits § INTRODUCTION Massive multiple-input multiple-output (MIMO) technology is a vital component of fifth-generation (5G) mobile communication networks. MIMO involves the utilization of multiple antennas to concentrate signal power within a limited area, contributing to enhanced energy efficiency and spectral efficiency <cit.>. However, with the explosive increase in demand for data rates in the forthcoming sixth-generation (6G) mobile communication networks, current massive MIMO cannot meet the requirement because concentrating enough signal power with a limited number of antennas is difficult. To address this issue, ELAA technology, comprising hundreds or thousands of antennas, is considered a crucial enabling technology for next-generation communication <cit.>. ELAA enables efficient multiplexing of multiple user equipment (UE) on the same time-frequency resource, thereby improving spectral efficiency and data rates. Additionally, the deployment of ELAA's high beamforming gain facilitates enhanced spatial resolution and compensates for the significant path loss experienced in the terahertz frequency bands <cit.>. Although more antenna elements in ELAA bring benefits in spectral efficiency and data rates, side effects on wireless channel characteristics brought by the increasing antenna numbers also demand attention. The electromagnetic field is generally divided into far-field and near-field, and their boundary can be determined by the Rayleigh distance 2D^2/λ, where D represents the array size and λ represents the wavelength <cit.>. Due to the deployment of ELAA and high-frequency carriers, the Rayleigh distance is extended to tens to hundreds of meters, so the UE is more likely to be located in the near-field <cit.>. For example, for a 128×128 UPA operating at 100 GHz, the near-field region extends to 490 meters.
The high beamforming gain of the ELAA system heavily relies on the accurate channel state information (CSI) at the transmitter <cit.>. Especially in the radiating near-field, the non-negligible distance factor poses a significant challenge to the precise alignment of the beams <cit.>. In the time division duplex (TDD) system, the uplink and downlink have reciprocity, and downlink CSI can be obtained through uplink channel estimation. However, for the frequency division duplex (FDD) system, the uplink and downlink work on different frequencies, which makes the channel reciprocity very weak, so it is difficult to deduce the downlink CSI from the uplink CSI <cit.>. That is, CSI can only be obtained through dedicated feedback provided by UE over signaling channels of limited capacity <cit.>. Currently,there are two typical categories of CSI acquisition methods, the explicit CSI acquisition and the implicit CSI acquisition. Explicit feedback schemes directly report an element-wise quantized channel vector. They allow for more flexible transmission or reception methods, which can achieve a higher scheduling gain. However, explicit feedback requires a higher overhead than implicit feedback, so implicit feedback can enable more accurate link adaptation <cit.>. A mainstream technique in implicit feedback is the codebook-based approach, which feeds back an index of a quantized CSI in a pre-designed codebook to the transmitter. For codebook-based feedback, the quantization accuracy of CSI depends on the codebook structure and the allowed number of feedback bits <cit.>. The existence of massive antennas and non-negligible distance dimension in ELAA leads to an unexpected increase in pilot overhead. Therefore, it is crucial to design a codebook to achieve accurate quantization of CSI with limited feedback considering the near-field channel characteristics of ELAA. §.§ Related Works Extensive researches have focused on the design of far-field codebooks. The far-field electromagnetic wave can be considered a plane wave, so the phase changes linearly with the antenna index. The 5G new radio (NR) standard adopted a discrete Fourier transform (DFT) codebook for the ULA system, and the two-dimensional DFT (2D-DFT) codebook was introduced for the UPA system <cit.>. Moreover, to enable more accurate CSI acquisition, the NR standard supported codebook oversampling and the linear combination of multiple codewords for feedback <cit.>. IEEE 802.16 m standard adopted an adaptive codebook structure, such as the skewed codebook, and a differential codebook structure, such as a Polar-Cap codebook <cit.>. Besides, in the case of low pilot overhead, the hierarchical codebook <cit.>, angle-of-departure (AoD) adaptive subspace codebook <cit.>, and compressed sensing (CS) <cit.> methods could also be utilized to quantify CSI accurately. Among them, the codebook feedback schemes based on CS utilize the sparsity of the channel in the angle domain to achieve the goal of low feedback overhead. Only some studies have focused on codebook design for large near-field ELAA systems. <cit.> designed a codebook for the near-field UPA, which uniformly samples in real space. However, significant quantization errors exist in quantizing the channel in real space. Besides, <cit.> derived a one-dimensional (1D) near-field beamforming codebook design, which locates the codeword quantization point based on the Lloyd-Max algorithm. 
The scheme in <cit.> improved the near-field beamforming gain, but this scheme still needed to give a theoretical analysis of the codebook quantization performance. Meanwhile, near-field CSI quantization and feedback have been studied in some research. <cit.> designed a new polarization-domain codebook for near-field ULA and a CS feedback method based on the sparsity of the polar domain. Considering both angle and distance information, a new polarization-domain codebook for near-field ULA and a CS feedback method based on the sparsity of the channel in the polar domain was designed. This method used the on-grid polar-domain simultaneous orthogonal matching pursuit (P-SOMP) algorithm and the off grid polar-domain simultaneous iterative gridless weighted (P-SIGW) algorithm CS algorithm to feedback near-field channel information. In <cit.>, a hierarchical codebook was designed by projecting the near-field channel into the angle and slope domains, considering the incomplete coverage and overlap of spatial chirp beams, further designing a hierarchical codebook via manifold optimization and alternative minimization. To our knowledge, no studies have analyzed the correlation between near-field channels. The correlation is sensitive to angle and distance in the near-field polarization domain. Correlation performs distinctive change patterns in near-field and far-field. Unfair sampling methods will make codewords redundant and reduce quantization accuracy. In addition, there are few studies on the codebook and channel feedback scheme of the UPA channel. The UPA channel has one more angle parameter than the ULA channel, significantly increasing the codeword overhead. Therefore, a codebook suitable for near-field channels needs to be carefully designed. §.§ Contributions To fill in this gap, in this paper, we analyze the quantization performance in theory and propose codebook design schemes for the ULA channel and the UPA channel in the ELAA system. Our main contributions are summarized as follows ∙ We provide a theoretical analysis of the relationship between codeword quantization performance and quantization region in ULA and UPA channels. Specifically, we derive a polynomial form of the correlation formula between the codeword and the channel. The correlation formula for the ULA channel can be expressed as an elliptic function, and the correlation formula for each codeword remains constant, which indicates its stability. Besides, the correlation formula for the UPA channel can be represented as an ellipsoid formula, and the correlation formulas for different codewords vary, indicating non-stationarity. The polynomial expression of the correlation formula facilitates the design of sampling between codewords. ∙ We propose two codebook design methods for ULA and UPA, respectively. Based on the correlation formula of the ULA codebook, a uniform sampling codebook scheme with maximized minimum correlation is given. Furthermore, we propose an improved dislocation sampling scheme that reduces the number of quantized codewords. Considering the non-stationary correlation in the UPA channel, we propose a sampling scheme that achieves an upper bound on the codeword quantization performance. Analytical results show that oversampling in the angle domain can achieve higher quantization performance. In near-field channels, the number of quantized bits of CSI is nonlinearly proportional to the number of BS antennas. §.§ Organization and Notation The remainder of the paper is organized as follows. 
Section 2 presents the system model. Section 3 analyzes the characteristics of the correlation and derives the polynomial fitting formula for both the ULA and the UPA model. In Section 4, the near-field optimal uniform codebook and the dislocation codebook for the ULA channel are proposed. Section 5 presents the optimal codebook design method for the UPA channel and evaluates the proposed codebook. Simulation results are provided in Section 6, and conclusions are drawn in Section 7. Notations: Vectors are denoted by lowercase bold letters, while matrices are denoted by uppercase bold letters; ⊗ denotes the Kronecker product; (·)^H and (·)^T denote the conjugate-transpose and transpose operations, respectively; Υ_(m,n) denotes the (m,n)-th entry of the matrix Υ; Υ_n denotes the n-th column of the matrix Υ; υ_n denotes the n-th element of the vector υ. § SYSTEM MODEL In this section, we describe the system model of the near-field ELAA communication system. First, we introduce the spherical wave model of the ELAA system. Next, we present a CSI quantization feedback model and formulate the design of the codebook as an optimization problem. §.§ ULA Near-field Channel Model Consider a downlink narrow-band ELAA system, where the BS is equipped with a ULA to serve a single-antenna UE. As shown in Fig. <ref>, the N-antenna array is placed along the y-axis. The antenna spacing is d = λ/2, where λ is the electromagnetic wavelength. The coordinate of the n-th antenna is given by 𝐭_n = ( 0,y_n), where y_n = ( n - (N + 1)/2)d with n = 1,2,…,N. Meanwhile, the UE is located at 𝐮 = ( rcosθ, rsinθ), where r and θ represent the distance and angle between the UE and the array center, respectively. The line-of-sight (LoS) channel is considered because this paper only focuses on the quantization feedback problem of the near-field codebook. According to the spherical wave model <cit.>, the distance determines the signal phase, and the near-field channel vector 𝐡 can be expressed as h = √(N) gb( r,θ), where g=√(η)e^ - jkr/r is the complex-valued channel gain with η and k = 2π/λ denoting the reference channel gain at a distance of 1 m and the wave number, respectively. 𝐛( r,θ) denotes the beam focusing vector, which is given by 𝐛( r,θ) = 1/√(N)[e^ - jk( r_1 - r),e^ - jk( r_2 - r), … ,e^ - jk( r_N - r)]^T , where r_n = ‖𝐭_n - 𝐮‖ represents the distance between the n-th antenna at the BS and the UE. Furthermore, according to the second-order Taylor series expansion √(1+x) = 1 + x/2 - x^2/8 + 𝒪( x^3) , r_n can be approximated as r_n = √((rsinθ - y_n)^2 + (rcosθ )^2) ≈ r - sinθ y_n + cos^2θ/2r y_n^2. Therefore, the n-th element of the near-field beam focusing vector 𝐛 can be simplified as b_n = 1 /√(N)e^ - jk( - sinθ y_n + cos^2θ/2r y_n^2). When r is sufficiently large, the cos^2θ/2r term can be omitted, and 𝐛( r,θ) is simplified as 𝐚( θ) = 1/√(N)[ 1,e^ jπsinθ, …,e^ jπ( N - 1)sinθ]^T, which is equivalent to the conventional far-field beam steering vector for the ULA. The DFT codebook is adopted to quantize the far-field channel vector. More precisely, therefore, the concept of “near-field” in this paper does not exclude the far field. §.§ UPA Near-field Channel Model As shown in Fig. <ref>, the BS employs a UPA, which is located on the xOy plane with the center of the array at the coordinate origin. N × N uniformly spaced antenna elements are placed in both horizontal and vertical directions, with a spacing of d = λ/2.
The Cartesian coordinate of the (m,n)-th antenna element of the UPA can be expressed as 𝐭_(m,n) = ( x_m,y_n,0) with x_m = ( m - N + 1/2)d, y_n = ( n - N + 1/2)d, m = 1,...,N, n = 1,…,N. Meanwhile, we assume the coordination of UE is u = ( rsinθcosϕ,rsinθsinϕ ,rcosθ), where r, θ and ϕ represent the distance, elevation angle and azimuth angle of UE relative to the UPA center, respectively. Therefore, the beam focusing vector for UPA can be obtained based on the spherical wave propagation model as 𝐛( r,θ ,ϕ) = 1/ N [e^ - jk( r_( 1,1) - r), … ,e^ - jk( r_( N,N) - r)]^T, where r_(m,n) = 𝐭_(m,n) - 𝐮 represents the distance between the (m,n)-th antenna at the BS and the UE, which can be approximated as r_(m,n) ≈ r - sinθcosϕx_m - sinθsinϕy_n + 1 - sin^2θcos^2ϕ/2rx_m^2 + 1 - sin^2θsin^2ϕ/2ry_n^2 - sin^2θcosϕsinϕ/rx_my_n. As a result, the (m,n)-th element of the LoS channel can be represented as b_(m,n) = 1 /Ne^- jk(r_(m,n)-r). When the r is sufficiently large, the last 3 terms in (<ref>) can be omitted, and 𝐛( r,θ,ϕ) is simplified as 𝐚( θ, ϕ) = 1/ N [ 1,…,e^ jπ (m sinθcosϕ+ nsinθsinϕ) , …, . . e^ jπ((N-1)sinθcosϕ + (N-1)sinθsinϕ ) ]^T, which is equivalent to the conventional far-field beam steering vector for the UPA, and the 2D-DFT codebook is adopted for the CSI feedback. Since the phase of (<ref>) can be decoupled into two parts in terms of x and y, the 2D-DFT codebook can be expressed in the form of the Kronecker product of the DFT vectors, that is a = a_x⊗a_y, where a_x = 1/√(N)[ 1,e^ jπsinθcosϕ, … ,e^ jπ (N - 1)sinθcosϕ]^T and a_y = 1/√(N)[ 1,e^ jπsinθsinϕ, … ,e^ jπ (N - 1)sinθsinϕ]^T. However, the cross-term sin^2θcosϕsinϕx_my_n/r in (<ref>) prevents it from being decoupled as 𝐚( θ,ϕ). Therefore, the near-field codebook for the UPA cannot be directly constructed based on the ULA codebook, which will be explained in detail later. §.§ CSI Quantization Feedback Model For FDD communication systems, the BS can obtain the CSI through the UE’s feedback. Specifically, the pilot signal 𝐗 = diag( x_1,…,x_N) is transmitted at first, where x_n denotes the pilot symbol for the n-th antenna, and N represents the size of either ULA or UPA. At the UE side, the received signal is given by y = Xh + n, where n is the additive Gaussian white noise (AWGN) with variance σ^2. Based on this, the estimation of channel vector, denoted as 𝐡̂, can be obtained by methods such as least squares (LS) <cit.>. Since this paper mainly focuses on the codebook design for the near-field communication, the perfect CSI estimation is assumed, i.e., 𝐡̂ = 𝐡. To inform the BS of CSI with limited feedback, the channel vector is quantized based on a predefined codebook 𝐖 = [𝐖_1,…,𝐖_S], which contains S codewords and satisfies 𝐖_s = 1. The UE selects the ideal codeword from the codebook and feeds back its index s^* = sargmax| 𝐡^H𝐖_s|^2 to the BS. Finally, the BS can determine the transmission scheme based on the CSI feedback from the UE. For example, the codeword 𝐖_s^* can be utilized as the beamforming weight. In the above quantization feedback model, the codebook design affects the channel quantization accuracy, which in turn affects the performance of the communication system. Obviously, all the channel vectors in the region of interest have different correlations with the codewords. This paper considers the max-min correlation criterion, and assuming 𝒲 ={ W_1, …,W_s}, the near-filed codebook design problem can be formulated as max_𝐖 1mumin_h∈ H 1mumax_s |h^HW_s| s.t. | 𝒲| = S. or [ min_𝒲 1mu| 𝒲|; s.t. 
min_h∈ H max_s |h^HW_s| > c. ] Here, H represents the set of LoS channel vectors within the region of interest, and c denotes the required minimum correlation between the codewords and the channel vectors. § CODEWORD QUANTIZATION PERFORMANCE ANALYSIS This section first investigates the correlation between the codewords and the near-field channel vectors. A transform-domain perspective for analyzing the correlation function is proposed, which exhibits many desirable mathematical properties. Second, this section gives the fitting formula for the quantization performance of a codeword with respect to the channel vector, which inspires the codebook design. §.§ Correlation Function for ULA systems When codewords are selected from the beam focusing vectors, i.e., 𝐖_s = 𝐛( r_s,θ_s ), they can be viewed as LoS channel vectors at specific locations. For ULA systems, the correlation between the codeword 𝐛( r_s,θ _s) and the beam focusing vector 𝐛( r_q,θ _q) can be calculated as τ( r_s,θ _s; r_q,θ _q) = |𝐛^H(r_s,θ _s)𝐛( r_q,θ _q)| = 1/N| ∑_n = 1^N exp( - j2π/λ(( sinθ _q - sinθ _s)y_n +( cos^2θ _s/2r_s - cos^2θ _q/2r_q) y_n^2) )|. Let α_i=λcos^2θ_i /4r_i and β_i =sinθ_i with i=s,q. Then (<ref>) can be simplified as f (α_s, β _s;α _q, β _q ) =1/N|∑_n=1^N exp( -jπ ( ( α _s-α _q )n^2 +( ( β _q-β_s ) -(α_s- α _q )( N+1 ) ) n )) |. Further, let δ _α and δ _β denote the location differences between 𝐛(r_q, θ_q) and 𝐖_s, which can be expressed as δ_α =α_q-α_s, δ _β=β_q-β_s. Then, (<ref>) can be simplified as f(δ_α,δ_β)=1/N|∑_n=1^Nexp(-jπ(-δ_αn^2 +(δ_β+δ_α(N+1))n))|. Without loss of generality, the correlation between the codeword and the channel vector always satisfies f(δ_α ,δ_β)≤ 1, and f(δ_α ,δ_β)= 1 if and only if δ_α =0 and δ_β =0. Consequently, there is always a certain quantization error when a codeword quantizes a channel other than itself. To account for the influence of the wavelength and the number of antennas on the correlation function, we set δ̃_α = δ_αN^2 and δ̃_β = δ_βN. For a sufficiently large N, the above formula can be approximated as f̃(δ̃_α,δ̃_β) ≈| ∫_ - 1/2^1/2exp( - jπ( δ̃_βt - δ̃_αt^2))dt|. (<ref>) converts the coordinates from (δ _α ,δ _β ) to ( δ̃_α,δ̃_β ), and a change in the number of antennas N does not affect the expression of f(δ̃_α,δ̃_β). Thus, (<ref>) shows that the channel correlation function applies to different antenna numbers N and wavelengths λ; in other words, (<ref>) can be used to describe the correlation between a codeword and a channel in both the far field and the near field. The correlation between the codeword and the channel vector is only related to the difference in position. Under different correlations c, the location differences satisfying the condition are distributed along contour lines on the α-β domain, as shown in Fig. <ref>.
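The summation form above is straightforward to evaluate numerically. The following Python sketch (illustrative only; the array size N = 256 is an assumed value, not one used in this paper) computes f(δ_α, δ_β) directly from the sum, confirms that it equals one only at zero offset, and checks that it is invariant to sign flips of the offsets, consistent with the symmetry property discussed below.

import numpy as np

def ula_correlation(delta_alpha, delta_beta, N):
    # exact correlation f(delta_alpha, delta_beta) from the summation form above
    n = np.arange(1, N + 1)
    phase = -np.pi * (-delta_alpha * n**2 + (delta_beta + delta_alpha * (N + 1)) * n)
    return np.abs(np.exp(1j * phase).sum()) / N

N = 256                                          # illustrative array size (assumption)
print(ula_correlation(0.0, 0.0, N))              # equals 1 only at zero offset
print(ula_correlation(0.5 / N**2, 1.0 / N, N))   # strictly below 1 for any nonzero offset
da, db = 0.8 / N**2, 0.7 / N                     # sign-flip invariance of the offsets
vals = [ula_correlation(s1 * da, s2 * db, N) for s1 in (1, -1) for s2 in (1, -1)]
print(np.allclose(vals, vals[0]))                # True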
Similar to the far-field DFT beam pattern in the angle domain, the correlation function (<ref>) can also be understood as the normalized beam pattern of the near-field beam in the α-β domain. In Fig. <ref>, the yellow ellipse in the center reflects the beamwidth under a specific correlation requirement. The beamwidth on the α domain is inversely proportional to N^2, and the beamwidth on the β domain is inversely proportional to N. Unfortunately, the exponential term in (<ref>) is relatively complicated. Most existing literature deals with the exponential term based on the Fresnel integral <cit.>. However, it is still difficult to directly obtain the numerical characteristic for the correlation function approximated by the Fresnel integral. Inspired by this, we further explore the correlation properties of codewords. We can deduce the following properties based on the correlation (<ref>). [Stationarity] The correlation between the codeword and channel vector is only related to δ _α and δ _β. For different codeword 𝐖_s and 𝐖_s^', the correlation’s distribution of codeword with the channel within its quantization region is always the same. The stationarity in the ULA channel can be formulated as f(α_s,β_s; . . α_s+δ _α ,β_s+δ _β) =f(α_s^',β_s^';α_s^'+δ _α ,β_s^'+δ _β). [Symmetry] Similar to the symmetry of the DFT beam in the angle domain, the correlation distribution of the near-field channel also satisfies symmetry. In the quantization area of the codeword, the distribution of the correlation between the codeword and the channel is symmetrical about the codeword in the α and β domain, which can be expressed as f(δ _α ,δ _β)=f(δ _α ,-δ _β) =f(-δ _α ,δ _β)=f(-δ _α ,-δ _β). (<ref>) can be rewritten as f(δ _α ,δ _β)=1/N|∑_n=1^Nexp(-j2π/λ(δ _β y_n-2/λδ _α y_n^2)) |. Set δ _β '=-δ _β, the above formula can be calculated as f(δ _α ,δ _β ')=1/N | ∑_n=1^Nexp(-j2π/λ(-δ _β y_n-2/λδ _α y_n^2)) |. y_n is symmetric about y_n=0 in [ - (N-1)d/2, (N-1)d/2 ], hence there always exists y_N-n+1=-y_n. Thus, formula (<ref>) can be written as f(δ _α , -δ _β) =1/N|∑_n=1^Nexp(-j2π/λ(δ _β y_N-n+1-2/λδ _α y_N-n+1^2)) |, which indicates that f(δ _α ,δ _β) =f(δ _α , -δ _β). It is evident that f(δ _α ,δ _β )=f(-δ _α ,-δ _β ) by central symmetry. Therefore, we can easily deduce that f(δ _α ,δ _β )=f(-δ _α ,δ _β ). According to these two properties, we can obtain the correlation characteristics of any codewords by analyzing the correlation of a specific codeword. These features provide convenience for designing the sampling interval between codewords to ensure an ideal design. Next, we will use a polynomial function to approximate the correlation function in Proposition <ref>. When δ _α≠ 0 and δ _β≠ 0, the polynomial fitting formula of f(δ _α ,δ _β ) can be expressed as f( δ _α,δ _β) ≈p_αδ _α^2N^4 + p_βδ _β^2N^2 + 1 , where p_α=-0.025983670363830, p_β=-0.391749735984250. To sum up, if the correlation between the codeword 𝐖_s and the beam focusing vector 𝐛(α_q,β_q) is f(δ _α ,δ _β )=c∈(0,1), the distribution satisfied by δ _α and δ _β can be equivalent to p_αδ _α^2 N^4+ p_βδ _β^2N^2=c-1. This formula can be further simplified into the form of the following formula p_αδ _α^2N^4/c-1+ p_βδ _β^2N^2/c-1 =1. Evidently, the correlation fitting formula in Proposition <ref> is an elliptic function. The ellipse formula always centers around (α_s,β_s). Moreover, the ellipse is the quantization boundary of 𝐖_s when the quantization accuracy of the codeword satisfies c. 
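Proposition <ref> can also be checked numerically. The sketch below (assuming an illustrative N = 256 and small offsets expressed in the normalized coordinates δ̃_α = δ_α N² and δ̃_β = δ_β N) fits the quadratic model to the exact correlation by least squares; the recovered coefficients are negative with magnitudes comparable to the reported p_α and p_β.

import numpy as np

def ula_corr(da, db, N):
    n = np.arange(1, N + 1)
    return np.abs(np.exp(-1j * np.pi * (-da * n**2 + (db + da * (N + 1)) * n)).sum()) / N

N = 256                                # illustrative array size (assumption)
grid = np.linspace(-0.6, 0.6, 41)      # offsets in the normalized coordinates
DA, DB = np.meshgrid(grid / N**2, grid / N)
F = np.vectorize(lambda a, b: ula_corr(a, b, N))(DA, DB)

# least-squares fit of f ~ p_a*(delta_alpha*N^2)^2 + p_b*(delta_beta*N)^2 + 1
A = np.column_stack([(DA.ravel() * N**2) ** 2, (DB.ravel() * N) ** 2])
p_a, p_b = np.linalg.lstsq(A, F.ravel() - 1.0, rcond=None)[0]
print(p_a, p_b)   # both negative, close in magnitude to the coefficients reported above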
The correlation c affects the major and minor axes of the ellipse formula. The larger c is, the shorter the axial lengths of the ellipse and the smaller the codeword quantization area. Conversely, the smaller c is, the longer the axial lengths of the ellipse, the larger the quantization area, and the lower the quantization accuracy of the codeword. Further, the axial lengths of the ellipse are also determined by N. The axial lengths of the ellipse in the α and β domains are inversely proportional to N^2 and N, respectively. The formula in Proposition <ref> is more concise and easier to calculate than the approximated Fresnel integral function and provides strong theoretical support for the codebook design scheme in this paper. In the next section, we will design ULA codebooks based on the above Proposition <ref>. For 𝐖_0=𝐛(0,0), f(δ _α ,δ _β ) can be written as f(α_q,β_q). In this case, (<ref>) is the quantization boundary of 𝐖_0 and gives the channel vectors 𝐛(α_q,β_q) that meet the condition |𝐖_0^H𝐛(α_q,β_q)|=c. For the codeword 𝐖_0 with the minimum correlation c, the possible channel vectors 𝐛(α_q,β_q) are always distributed in the ellipse interior, which can be formulated as Ω ={𝐛(α_q,β _q)|p_α N^4α_q^2/c-1+p_β N^2β_q^2/c-1≤ 1}. Considering the stationarity characteristic, the correlation between any codeword and its quantized channel vectors can be described using the contour lines shown in Fig. <ref>. §.§ Correlation Function for UPA systems In the UPA system, the correlation between the codeword 𝐖_s=𝐛(r_s,θ_s,ϕ_s) and the beam focusing vector 𝐛(r_q,θ_q,ϕ_q) can be calculated as τ(r_s,θ _s,ϕ _s;r_q,θ _q,ϕ _q) = | b^H( r_s,θ _s,ϕ _s)b( r_q,θ _q,ϕ _q)|. We denote the location differences between the codeword and the channel vector as follows δ_ψ_s =sinθ_scosϕ_s-sinθ_qcosϕ_q, δ_φ_s =sinθ_ssinϕ_s-sinθ_qsinϕ_q, δ_ρ_s =1/r_s-1/r_q. Then, (<ref>) can be written as f(δ_ψ_s , δ_φ_s ,δ_ρ_s) =1/N^2 |∑_n=1^N∑_m=1^Nexp(-j2π/λ (x_mδ_ψ_s+y_nδ_φ_s -(1-(δ_ψ_s)^2)δ_ρ_s/2x_m^2 -(1-(δ_φ_s) ^2)δ_ρ_s/2y_n^2 +(1-δ_ψ_sδ_φ_s)x_my_n))|. The cross-term x_my_n contained in the UPA channel vector makes it impossible to use the Kronecker product to decouple the channel vector. If the cross-term x_my_n is ignored, a significant performance error is incurred because too much CSI is discarded. Before presenting the method to address this problem, we first give the characteristics of the UPA channel correlation in Property <ref> and Property <ref>. [Non-stationarity] The non-stationarity manifests itself in that the quantization regions of different codewords are not identical under the same minimum correlation c. Therefore, codebooks in UPA channels do not exhibit stationarity. For two codewords 𝐖_s and 𝐖_s^', the non-stationarity can be formulated as f(ψ_s, φ_s,ρ_s;ψ_s+δ_ψ_s,φ_s+δ_φ_s,ρ_s+δ_ρ_s) ≠ f(ψ_s^',φ_s^',ρ_s^';ψ_s^'+δ_ψ_s,φ_s^'+δ_φ_s,ρ_s^'+δ_ρ_s). [Symmetry] Similar to the ULA case, it is easy to prove that f(δ_ψ_s ,δ_φ_s ,δ_ρ_s) = f(-δ_ψ_s ,δ_φ_s ,δ_ρ_s) = f(δ_ψ_s ,-δ_φ_s ,δ_ρ_s) = f(-δ_ψ_s ,-δ_φ_s ,δ_ρ_s). Unlike the ULA model, the non-stationarity of the UPA model brings challenges to the UPA codeword design. In Proposition <ref>, we give a fitting formula for the UPA channel correlation that differs from the ULA fitting formula. When δ_ψ_s≠0, δ_φ_s≠0 and δ_ρ_s≠0, the correlation between channels in the UPA channel model can be well fitted by a polynomial function, and the fitting formula can be written as f(δ_ψ_s,δ_φ_s,δ_ρ_s) =p_ψδ _ψ _s^2 N^2+p_φδ _φ _s^2 N^2+p_ρδ _ρ _s^2N^4+1.
If f(δ_ψ_s ,δ_φ_s ,δ_ρ_s )=c and c∈(0,1), the above formula can be converted into the following ellipsoid formula p_ψ N^2δ _ψ _s^2/c-1+ p_φ N^2δ _φ _s^2/c-1+ p_ρ N^4δ _ρ _s^2/c-1=1. The ellipsoid fitting formula in Proposition <ref> gives the quantization boundary of the codeword 𝐖_s with the minimized correlation is c. At a fixed position, the smaller c is, the smaller the axial length of the ellipsoid will be. It should be noted that due to the non-stationary characteristics of the UPA channel, the values of the coefficients p_ψ,p_φ and p_ρ in (<ref>) are related to the specific position of the codeword sampling point and the minimum correlation c. Thus, the ellipsoid fitting formula has different axial lengths for different codewords. The correlation fitting formula in Proposition <ref> provides strong support for the codebook design of near-field UPA. Below we give an example to verify the accuracy of Proposition <ref>. We plot the actual beam focusing vector whose correlation with the codeword 𝐖_0=𝐛(0,0,0) satisfies f(0,0,0;ψ_q,φ_q,ρ_q)=0.95. We calculate the appropriate fitting coefficients from the simulation and plot the corresponding ellipsoid images in Fig. <ref>, which shows that the actual beam focusing vector satisfying the condition points to the fitted ellipsoid curve. For the codeword 𝐖_0 with the minimum correlation as c, the quantized channel vectors always is distributed in the ellipsoid interior, which can be formulated as Ω ={𝐛(ψ_q,φ_q,ρ_q)| p_ψ N^2ψ_q^2/c-1+ p_φ N^2φ _q^2/c-1+ p_ρ N^4 ρ _q^2/c-1≤ 1}. This example shows that the fitting formula of the UPA channel can more accurately describe the channel correlation of UPA. § NEAR-FIELD ULA CODEBOOK DESIGN We have analyzed the effect of the angle and distance between the UE and the antenna on the quantization accuracy of codewords. In this section, we describe two quantized codebook design schemes of the ULA channel to ensure the optimal beam gain according to the ULA channel correlation distribution characteristics presented in Section 3-A. First, an optimal uniform codebook scheme is proposed, which achieves the maximum quantization range under a minimum correlation. In addition, we redesign a codebook scheme with constant offset in the transform domains to reduce the quantization overhead of the codebook further. §.§ Uniform Quantization Codebook Design Scheme The most commonly used way to obtain the quantified location of the codewords is to perform uniform sampling on the α-β domain, as shown in Fig. <ref>. α domain and β domain are sampled with S_α and S_β points, respectively. The number of sampling points is S=S_α*S_β, and the index of the codeword is s. Let Ξ^L denote the collection of codeword sampling points, which can be represented as Ξ^L = { (α_s,β_s)| α_s =α_min+ Δα/2 ,α_min+ 3Δα/2,…,α_max; β_s =β_min+ Δβ/2,β_min+ 3Δβ/2,…,β_max}. where α_min,α_max and β_min,β_max are the total quantization intervals of the codeword on the α domain and the β domain respectively. Δα and Δβ represent the sampling steps on the α and β domain respectively. Next, we present an in-depth analysis of the distribution performance of the codewords in Fig. <ref> to explore the optimal codebook design scheme. The blue triangle is the intersection of the quantization intervals of adjacent codewords, located on the same correlation contour line for each codeword. 
The correlation contour line where the blue triangle is situated is the outermost correlation contour line of the four codewords, representing the quantization boundary of the codeword. Extending the distribution characteristics to the entire α-β domain, as shown in Fig. <ref>, the spacing between adjacent codewords is always symmetrical about the minimum correlation point of the codeword. Furthermore, we reveal the relationship between codeword quantization regions and codeword quantization accuracy for individual codewords in the α-β domain with the constant minimum quantization error. As shown in Fig. <ref>, the quantization area of the codeword is rectangular with uniform sampling points. The vertices of the rectangular region are located on the minimum correlation contour line. Multiple layouts of rectangular vertices on the minimum correlation contour line constitute quantization schemes with different areas. In order to improve the accuracy of codeword quantization CSI, the minimum correlation of each codeword quantization is always expected to be as large as possible. Therefore, the target can be mapped as the area of the rectangle enclosed by the four quantization boundaries is the largest on a specific ellipse contour line. According to Cauchy's inequality, for any point on the ellipse (<ref>), when its coordinates and axis length satisfy the relationship δ _α√((c-1)/(p_β N^2))=δ _β√((c-1)/(p_α N^4)), the area of the rectangle surrounded by the points is the largest. Therefore, when the channel correlation is c, the sampling interval of the achievable maximum quantization area on the α-β domain can be calculated as Δα = 1/N^2√(2( c - 1)/p_α), Δβ = 1/N√(2( c - 1)/p_β). Consider that the user distribution is within a range of r ∈[√(0.62D^3/λ),∞) distance and angle of θ∈[-π/2,π/2]. The maximum quantization range on α and β domain can be respectively calculated as Q_α = √(λ/2.48D^3)≈1/N√(N),Q_β = 2. Then, the number of codewords in the α domain and β domain are given by S_α = Q_α/Δα = √(Np_α/2( c - 1)), S_β = Q_β/Δβ = N√(2p_β/( c - 1)). Thus, the total number of codewords to achieve the minimum number of feedback bits can be calculated as S_ULA =S_α S_β = N√(Np_αp_β)/( 1 - c). At this time, the maximum quantization area of each codeword is R_max = 2(1 - c)/N^3√(1/p_αp_β). Therefore, for all sampling points of the ULA codebook, the coordinates of the s_α-th codeword in the α domain can be reformulated as α( s_α) = ( s_α - 1/2)Δα,  s_α = 1… S_α. And the s_β-th codeword in the β domain can be reformulated as β( s_β) = - 1 + ( s_β - 1/2)Δβ, s_β = 1…,S_β. (<ref>) shows that the number of quantized bits of the codeword is only related to the channel correlation c and the number N of antennas but has nothing to do with the frequency. When the channel correlation c is constant, the number of codewords in the α domain is proportional to √(N), and the number of codewords in the β domain is proportional to N. Moreover, if the number of antennas N remains unchanged, the channel correlation c increase can lead to an increase in the number of codebook quantization vectors. It should be noted that, in the design of this scheme, for a fixed codeword quantization correlation, the number of codewords in the β domain is always far greater than the number of codewords in the α domain. Therefore, under the same feedback bits, dense sampling in the β domain will be more conducive to improving the quantization performance of the codeword. 
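The closed-form expressions above translate directly into codebook sizes. The short sketch below evaluates the sampling steps and codeword numbers of the uniform ULA codebook for c = 0.95 and a few array sizes (the values of N are illustrative; N = 512 mirrors the simulation setup used later).

import numpy as np

p_alpha, p_beta = -0.025983670363830, -0.391749735984250   # coefficients from Proposition 1

def uniform_ula_codebook_size(N, c):
    # sampling steps and codeword numbers of the uniform ULA codebook (closed forms above)
    d_alpha = np.sqrt(2 * (c - 1) / p_alpha) / N**2
    d_beta = np.sqrt(2 * (c - 1) / p_beta) / N
    S_alpha = np.sqrt(N * p_alpha / (2 * (c - 1)))   # distance-related (alpha) domain, ~sqrt(N)
    S_beta = N * np.sqrt(2 * p_beta / (c - 1))       # angle (beta) domain, ~N
    bits = int(np.ceil(np.log2(S_alpha * S_beta)))
    return d_alpha, d_beta, int(np.ceil(S_alpha)), int(np.ceil(S_beta)), bits

for N in (128, 256, 512):
    print(N, uniform_ula_codebook_size(N, c=0.95))
# the beta (angle) domain needs far more codewords than the alpha domain, and the
# total number of codewords grows on the order of N^(3/2)/(1-c)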
In addition, if the minimum correlation is the same, quantizing r∈ [ 8,∞ ) can save 1 bit of feedback than r∈ [ 4,∞ ). In the case of the same channel correlation and different antenna array sizes, the optimal codeword numbers for the α domain and β domain are summarized in Table 1, respectively. As the table shows, as the number of antennas and codewords on the α domain increase sequentially, the number of codewords on the β domain does not change much. §.§ Dislocation Quantization Codebook Design Scheme In the uniform quantization codebook design, the quantization region of each codeword is distributed in a rectangular shape. To further improve the accuracy of codeword quantized CSI, we propose a codebook design with dislocation. The quantization area of each codeword is distributed in a hexagon, which is symmetric about the a and b domains. Δα and Δβ are the sampling intervals of the offset codebook sampling scheme in the α and β domains, respectively. It should be noted that the dislocation here means that the codewords of the even or odd rows on the β domain are collectively shifted to the right by Δα/2 along the α domain, as shown in Fig. <ref>. Based on the hexagon's symmetry, the quantization area of a single codeword is always twice that of the inscribed triangle in Fig. <ref>. Therefore, the problem of maximizing the quantization area of a dislocation codeword can be transformed into the problem of finding the giant inscribed triangle of an ellipse. As mentioned above, based on Cauchy inequality, the codeword sampling steps in the α domain and β domain can be calculated as Δα = 3/N^2√(( c - 1)/p_α), Δβ = 1/N√(3( c - 1)/p_β). Under the maximum quantization area, the corresponding vertex of the quantization area of the codeword 𝐖_0 is (α_1,β_1) =(-√(c-1/p_α N^4), 0), (α_2,β_2) =(1/2√(c-1/p_α N^4), 1/2√(3(c-1)/p_β N^2)), (α_3,β_3) =(1/2√(c-1/p_α N^4), -1/2√(3(c-1)/p_β N^2)). Currently, the codeword is located at the center of gravity of the triangular quantization area. If codeword 𝐖_0 is selected and used as the origin, then the position of the point with the minor channel correlation from the codeword under the two schemes satisfies the relationship as f(-2/3Δα,0)=f(1/6Δα,1/2Δβ). Therefore, the maximum quantization area of a single codeword is R_max = 3(1-c)/2N^3√(3/p_αp_β). The number of the sampling points in the α and β domain is S_α = 1/3√(Np_α/( c-1)) , S_β=2N√(p_β/3 ( c-1 ) ). And the number of codewords can be calculated as S_ULA=2S_αS_β=4N/3(1-c)√(Np_α p_β/3). In the collection of ULA codewords, the coordinates of the s-th codeword in the α domain can be reformulated as α( s_α) ={ ( s_α - 1 )Δα, odd row Δα/6 +( s_α - 1 )Δα, even row . where s_α = 1…S_α and the t-th sampling point of the β domain can be reformulated as β( s_β) ={ - 1 + ( s_β- 1)Δβ, odd column - 1 + Δβ/2 + ( s_β - 1)Δβ, even column . where s_β = 1…,S_β. (<ref>) and (<ref>) provide the number of codewords under uniform and dislocation sampling. The number of dislocation sampling codewords is only 75% of the number of uniform sampling codewords. Therefore, in the same space, the dislocation quantization scheme can achieve the goal of low codebook quantization overhead by expanding the sampling interval of codewords. § NEAR-FIELD UPA CODEBOOK DESIGN The non-stationary characteristic of the UPA channel correlation results in a different distribution for each codeword in 3D space. 
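This non-stationarity can be verified numerically. The sketch below builds exact UPA beam focusing vectors from the spherical-wave distances (without the Taylor approximation), applies the same offset in the (ψ, φ, 1/r) domain to two different codeword positions, and obtains two different correlation values; the 16×16 array and 3 GHz carrier mirror the simulation setup, while the codeword positions and the offset are assumed purely for illustration.

import numpy as np

def upa_focusing_vector(r, theta, phi, N, lam):
    # exact UPA beam focusing vector built from r_(m,n) = ||t_(m,n) - u|| (no Taylor expansion)
    d = lam / 2
    idx = (np.arange(1, N + 1) - (N + 1) / 2) * d
    x, y = np.meshgrid(idx, idx, indexing="ij")
    u = r * np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
    dist = np.sqrt((x - u[0]) ** 2 + (y - u[1]) ** 2 + u[2] ** 2)
    return np.exp(-2j * np.pi / lam * (dist - r)).ravel() / N

def vector_at(psi, vphi, rho, N, lam):
    # convert (psi, vphi, rho) = (sin(theta)cos(phi), sin(theta)sin(phi), 1/r) back to (r, theta, phi)
    r, s = 1.0 / rho, np.hypot(psi, vphi)
    return upa_focusing_vector(r, np.arcsin(s), np.arctan2(vphi, psi), N, lam)

N, lam = 16, 0.1                                   # 16x16 UPA at 3 GHz, as in the simulations
offset = np.array([0.04, 0.0, 0.05])               # identical offset in the (psi, vphi, rho) domain
for base in (np.array([0.0, 0.0, 0.2]), np.array([0.7, 0.5, 0.2])):   # assumed codeword locations
    a = vector_at(*base, N, lam)
    b = vector_at(*(base + offset), N, lam)
    print(np.abs(np.vdot(a, b)))
# the two printed correlations differ even though the offset is identical,
# illustrating the non-stationarity of the UPA channel correlation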
This section proposes the concept of a reference ellipsoid and assumes that the correlation formula for each codeword is always the same as the reference ellipsoid. Based on this assumption of stationarity, uniformly sampled codewords can be obtained. In this scheme, each codeword in the non-stationary UPA channel can meet the minimum correlation requirement. §.§ Definition of Reference Ellipsoid Proposition <ref> shows that for any codeword, the pointing position of the channel vector that satisfies the minimum quantization correlation of c is always uniformly distributed on an ellipsoid centered around the quantization center of the codeword. It should be noted that due to the non-stationary characteristics of UPA channels, the size of the ellipsoid enclosed by different codewords at the positions that meet the conditions under the same minimum quantization correlation is always different. Therefore, when designing the optimal sampling interval between UPA codewords, the correlation feature of any codeword cannot be directly regarded as the correlation feature of all codewords, just like in ULA codebooks. In order to solve the problem caused by non-stationary features when designing the optimal sampling interval for codebooks, we hope to find a reference ellipsoid to describe the quantization features of any codeword in space. This reference ellipsoid provides the maximum allowable space, ensuring that all codewords can guarantee the minimum quantization correlation c at this volume. Below, we define a reference ellipsoid. Consider sampling S_ψ, S_φ and S_ρ points on the ψ, φ and ρ domains, respectively. Using the channel vector pointed at by the sampling point as the codeword, a total of S^*=S_ψS_φS_ρ can be obtained. From Proposition <ref>, it can be concluded that when the quantization performance of all codewords satisfies the minimum correlation c^*, the sets of ellipsoidal axis lengths enclosed by the quantization boundaries of all codewords are respectively represented as 𝐋_ψ={l_ψ ,1,… l_ψ ,S_ψ}, 𝐋_φ={l_φ ,1,… l_φ,S_φ}, 𝐋_ρ={l_ρ ,1,… l_ρ ,S_ρ}. Meanwhile, l_ψ^*=𝑚𝑖𝑛 𝐋_ψ, l_φ^*=𝑚𝑖𝑛 𝐋_φ and l_ρ^*=𝑚𝑖𝑛 𝐋_ρ are used as the axial length of the reference ellipsoid, then the formula of the reference ellipsoid can be written as δ_ψ_s^2/(l_ψ^*)^2+δ_φ_s^2/(l_φ^*)^2+δ_ρ_s^2/(l_ρ^*)^2=1. According to the fitting formula given in Proposition <ref>, the fitting coefficient can be calculated as p_ψ^*=c-1/(l_ψ^*N)^2, p_φ^*=c-1/(l_φ^*N)^2, p_ρ^*=c-1/(l_ρ^*N)^2. The formula for the reference ellipsoid can be completed as δ_ψ_s^2/c-1/p_ψ^*N^2+δ_φ_s^2/c-1/p_φ^*N^2+δ_ρ_s^2/c-1/p_ρ^*N^4=1. It is worth noting that some codewords may not achieve the expected minimum quantization correlation c when considering the ellipse whose axis length is longer than the axis length of the reference ellipse to represent the quantized region of the codeword. Therefore, the reference ellipsoid is the lower bound that can describe the quantization region of the codeword. The reference ellipsoid is a virtual ellipsoid reconstructed by taking the minimum value of the quantized region of all codewords, which may not exist. §.§ Uniform Codebook Quantization Scheme For the UPA channel model, UE is located at r ∈[ √(0.62D^3/λ) ,∞ ) in distance, [-π/2,π/2] in elevation angle and [0,π] in azimuth angle. At this moment, the UE can be considered uniformly distributed in 3D space of ψ∈[-1,1], φ∈[-1,1] and ρ∈ [ √(λ/0.62D^3) ,∞). Assuming that the codeword sampling step in the ψ domain, φ domain, and ρ domain is Δψ, Δφ, and Δρ. 
Respectively, the collection of sampling points can be represented as Ξ ^A={( ψ_s,φ_s,ρ_s)|ψ _s=ψ_min+Δψ/2,ψ_min+3Δψ/2,…, ψ_max;φ _s=φ _min+Δφ/2,φ_min+3Δφ/2,…,φ_max; ρ _s=ρ _min+Δρ/2,ρ _min+3Δρ/2,…,ρ _max}. Next, we provide a design for the UPA codebook. Considering a codebook design scheme for uniform sampling in the transform domain, as shown in Fig. <ref>. Among them, the red rectangle represents the codeword, and the solid black line represents the dividing line of the codeword quantization area. Assuming that the UPA channel is stationary, with a minimum correlation of c for all codewords, the quantization regions of all codewords are distributed in an ellipsoid of the same shape centered on the codeword, as shown in Fig. <ref>. For adjacent eight codewords, the boundaries of the quantization region of the codeword always intersect at one point under the same minimum quantization correlation c. Moreover, the intersection point is located at the center of a cuboid surrounded by eight adjacent codewords. By distributing the features described in Fig. <ref> throughout the entire space, it can be concluded that the quantization area of each codeword is enclosed in a cuboid. Moreover, the eight vertices of the quantized cuboid are located on the ellipsoidal boundary enclosed by the codeword under fixed correlation c. When designing a codebook, higher quantization accuracy and less quantization overhead are always expected. With a fixed minimum quantization correlation, the larger the cuboid quantization area of the codeword in Fig. <ref>, the less quantization cost. For the enclosed cuboid of an ellipsoid, the maximum volume can be obtained when c-1/p_ρ^*N^4δ_ψ_s^2=c-1/p_ψ^*N^2δ_ρ_s^2 and c-1/p_ρ^*N^4δ_φ_s^2=c-1/p_φ^*N^2δ_ρ_s^2 is satisfied. And the optimal sampling steps on the three domains can be calculated as Δψ=2√(3)/3N√(c-1/p_ψ^*), Δφ=2√(3)/3N√(c-1/p_φ^*), Δρ=2√(3)/3N^2√(c-1/p_ρ^*). Therefore, the number of codewords in 3D space can be calculated as S_ψ=√(3p_ψ^*/c-1)N, S_φ=√(3p_φ^*/c-1)N, S_ρ=2.4√(Np_ρ^*/c-1). The positions represented by the s_ψ-th, s_φ-th and s_ρ-th sampling points in ψ, φ and ρ domain can be expressed as ψ(s_ψ) =-1+(s_ψ-1/2)Δψ, s_ψ=1,… ,S_ψ, φ(s_φ) =-1+(s_φ-1/2)Δφ, s_φ=1,… ,S_φ, ρ( s_ρ) =(s_ρ-1/2)Δρ, s_ρ=1,… ,S_ρ. For the proposed codebook scheme, the sampling step is only related to the number of antennas and the minimum correlation. The larger the number of antennas, the smaller the sampling step. However, for the maximum quantization error, the sampling step will increase with its value, and the number of codeword sampling points in a specific space will decrease. When the correlation of UPA channels is regarded as approximately stationary for sampling, the designed codeword accuracy is always lower than the accuracy using other points as reference positions. Because under the same sampling interval, the actual minimum correlation of each codeword is more significant than the minimum correlation of the reference position, its quantization accuracy is also more significant than the quantization area of the reference codeword. Thus, under this reference ellipsoid, when the quantization error is c, the quantization accuracy of the approximately stationary UPA sampling method is the lower bound among all uniform sampling methods. The quantization region of the codeword approximated by stationarity is always smaller than or equal to the actual maximum quantization region of the codeword under the minimum quantization correlation c. 
Therefore, a single codeword's achievable minimum quantization correlation is always greater than or equal to c. When the channel correlation is 0.95, Table <ref> illustrates the number of sampling points for codewords in the 3D space. Notably, the number of codewords in the ψ and φ domains consistently exceeds the number in the ρ domain. Additionally, the number of sampling points in the angular domain is larger than the number of antennas. Incorporating oversampling in the angle domain during the design of the UPA codebook can effectively enhance the quantization accuracy of codewords. § SIMULATION RESULTS In this section, we provide simulation results to illustrate the performance of the proposed codebook schemes for ULA and UPA systems. The central frequency is f=3 GHz. Define the signal-to-noise ratio (SNR) of the system as SNR=Pη N/r^2σ ^2, where P is the transmit power and σ^2 is the noise power. The achievable rate is given by R=log_2 ( 1+Pη N |𝐖^H𝐛 ( r,θ ) | ^2/r^2σ ^2 ). The simulation results are averaged over 1000 randomly distributed UEs. Firstly, we analyze the codebook of the ULA channel. The number of BS antennas is set as N=512. The UE is located randomly in the region spanned by ( r,θ ) ∈ [ √(0.62D^3/λ) ,∞ ) × [ -π /2,π /2 ]. Next, we evaluate the performance of the proposed UPA channel codebook. Considering the system model in Fig. <ref>, the BS is configured with a UPA of 16×16 antennas. The elevation angle and azimuth angle of the UE are θ∈[-π/2,π/2] and ϕ∈ [0,π], respectively. The distance between the BS and the UE is distributed in r∈ [ √(0.62D^3/λ) ,∞ ). Fig. <ref> illustrates the cumulative distribution function (CDF) of the quantized correlation with various codebooks. In order to provide a comparative analysis, we conduct simulations using several codebooks. Firstly, we consider a codebook with an identical number of sampled points in the α and β domains. Additionally, we evaluate a codebook optimized using the Lloyd algorithm. The proposed dislocation codebook and uniform codebook are both quantized with B=15 bits. It is worth noting that, with an equal number of quantization vectors, the dislocation codebook consistently outperforms the uniform codebook. The proposed uniform codebook comprises 2617 sampling points in the β domain and 14 sampling points in the α domain. On the other hand, the dislocation codebook contains 1912 sampling points in the β domain and 18 sampling points in the α domain. The number of sampling points in the β domain is significantly higher than in the α domain. This phenomenon highlights the robustness of near-field beamforming in the α domain, while the denser sampling of the β domain enhances the codeword quantization accuracy. It is observed that the performance of the codebook employing N sampled points in both domains is considerably inferior to that of the proposed codebooks, even though it employs a larger number of quantization vectors. Furthermore, the proposed schemes demonstrate significant superiority over the Lloyd optimization scheme, which validates the effectiveness of our proposed codeword design. Fig. <ref> illustrates the achievable rate for two scenarios: the ideal case of perfect channel state information (CSI) and the case where the precoding matrix is selected based on channel quantization. The beamforming scheme with perfect CSI represents the theoretical upper limit. In this comparison, we consider the same codebook schemes shown in Fig. <ref>.
The proposed codebooks and the ideal case of perfect CSI exhibit remarkably similar performance. Notably, the dislocation codebook significantly enhances the achievable rate compared to the uniform codebook. When employing the same quantization bits, we observe that the achievable rate of the proposed uniform and dislocation codebook consistently outperforms the rate achieved using the Lloyd framework, particularly as the receiver SNR increases. Furthermore, the proposed codebook outperforms the codebook that employs the same sampling points in the α and β domains. The average sum rate demonstrates an improvement of approximately 0.4 bits/s/Hz compared to the codebook with the same sampling points in the transformed domain. In Fig. <ref>, we compare the number of quantization channel vectors of the proposed quantization schemes. The quantization vector’s number of the proposed uniform codebook and dislocation codebook are respectively set according to (<ref>) and (<ref>). We observe that as the requirement for codebook quantization accuracy increases, the number of vectors required to quantize the channel also increases. The result shows that the dislocation codeword can approximately reduce the number of quantization bits by 25% compared with the uniform codeword, which is consistent with our analysis in Section IV. We compare the proposed UPA codebook with two other schemes. In the first scheme, the codewords are uniformly sampled at N points in the ψ, φ, and ρ domains. The second scheme involves designing codewords with optimal sampling points based on the Lloyd algorithm in the ψ, φ, and ρ domains. Fig. <ref> depicts the CDF of quantized correlation for the different codebooks. The quantized correlation achieved by the proposed uniform codebook is significantly superior to the other schemes. Following the codeword design scheme described in Section V, we assume a minimum codeword quantization correlation of c=0.95. The simulation results consistently demonstrate that the quantized correlation achieved by the proposed codebook consistently exceeds 0.95. The result confirms that the proposed codebook is designed based on the lower bound of quantization correlation for all sampled codewords. Moreover, the proposed codebook exhibits more sampling points in the angle domain than in the distance domain. The results also reveal that the achievable rate performance of our proposed codebook consistently outperforms schemes with an equal number of samples in the ψ, φ, and ρ domains. Therefore, the near-field codebook should be oversampled in the angle domain. Considering the same quantization overhead, our proposed scheme achieves superior quantization correlation compared to the optimal codebook designed using the Lloyd Max algorithm. We evaluate the average sum rate achieved by different codebooks when utilized as the beamforming matrix at the BS in the UPA system. Fig. <ref> illustrates the results at various SNR. The outcomes demonstrate that our proposed UPA codebook scheme significantly outperforms the other two schemes and closely approaches the performance achieved with perfect CSI. Compared to the uniform codeword scheme, which employs the same number of samples in both the angle and distance domains, our proposed scheme exhibits an improvement of nearly 0.5 bits/s/Hz. Furthermore, our proposed codebook outperforms the codebook designed based on the Lloyd algorithm by more than 0.1 bits/s/Hz. 
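For readers who wish to reproduce the flavour of these experiments, the following self-contained sketch evaluates the proposed uniform ULA codebook by Monte Carlo simulation. It is only an approximation of the setup reported above: a smaller array (N = 256 instead of 512), 200 instead of 1000 user drops, and a finite upper distance limit are assumed to keep the run short, and only the quantized correlation is collected.

import numpy as np

rng = np.random.default_rng(0)
lam, N, c = 0.1, 256, 0.95                  # 3 GHz; N reduced from 512 for speed (assumption)
d = lam / 2
y = (np.arange(1, N + 1) - (N + 1) / 2) * d
p_alpha, p_beta = -0.025983670363830, -0.391749735984250

def focusing_vector(alpha, beta):
    # ULA beam focusing vector parameterized by alpha = lam*cos(theta)^2/(4r), beta = sin(theta)
    return np.exp(-2j * np.pi / lam * (-beta * y + (2 * alpha / lam) * y**2)) / np.sqrt(N)

# uniform codebook on the alpha-beta domain with the sampling steps derived in Section 4
D = N * d
r_min = np.sqrt(0.62 * D**3 / lam)                  # lower edge of the radiating near field
d_alpha = np.sqrt(2 * (c - 1) / p_alpha) / N**2
d_beta = np.sqrt(2 * (c - 1) / p_beta) / N
alphas = np.arange(d_alpha / 2, lam / (4 * r_min), d_alpha)
betas = np.arange(-1 + d_beta / 2, 1, d_beta)
codebook = np.array([focusing_vector(a, b) for a in alphas for b in betas])
print("codewords:", len(codebook), "~", int(np.ceil(np.log2(len(codebook)))), "bits")

# Monte Carlo over randomly dropped users
corrs = []
for _ in range(200):
    r = rng.uniform(r_min, 2 * D**2 / lam)          # finite upper limit (assumption)
    theta = rng.uniform(-np.pi / 2, np.pi / 2)
    h = focusing_vector(lam * np.cos(theta)**2 / (4 * r), np.sin(theta))
    corrs.append(np.max(np.abs(codebook.conj() @ h)))
print("min / mean quantized correlation:", np.min(corrs), np.mean(corrs))
# the minimum stays close to the design target c, as predicted by the analysis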
§ CONCLUSION This paper introduces a novel codebook design to maximize the minimum quantization correlation for quantized ELAA channels. We analyze the correlation between codewords and channel vectors and derive a suitable formula for this correlation. Based on these insights, we propose two ULA codebooks: uniform sampling and dislocation sampling. Notably, the uniform offset codebook achieves the same quantization performance as the uniform codebook while requiring fewer quantization bits. Furthermore, our study reveals that the channel observed by the UPA exhibits non-stationarity. To address this, we propose a lower-bound scheme for optimally sampled codebooks. Additionally, we emphasize the robustness of the angle domain in ELAA systems and the advantages of oversampling the angle domain for codebook design with higher precision. Simulation results confirm that the proposed codebook achieves minimal quantization bits while maintaining the desired minimum quantization correlation. 99 IEEEtran ref1 R. Chataut and R. Akl, “Massive MIMO systems for 5G and beyond networks—overview, recent trends, challenges, and future research direction,” Sensors., vol. 20, no. 1, pp. 2753, May. 2020. ref2 T. E. Bogale and L. B. Le, “Massive MIMO and mmWave for 5G wireless HetNet: Potential benefits and challenges,” IEEE Veh. Technol. Mag., vol. 11, no. 1, pp. 64–75, Mar. 2016. ref3 C. E. De, A. Ali, A.Amiri, M. Angjelichinoski, and R.W.Heath, “Non-stationarities in extra-large-scale massive MIMO,” IEEE Wirel. Commun., vol. 27, no. 4, pp. 74–80, Aug. 2020. ref4 T. S. Rappaport, Y. Xing, O. Kanhere, S. Ju, A. Madanayake, S. Mandal, A. Alkhateeb and G. C. Trichopoulos, “Wireless communications and applications above 100 GHz: Opportunities and challenges for 6G and beyond,” IEEE access., vol. 7, pp. 78729–78757, Jun. 2019. ref5 M. Cui, Z. Wu, Y. Lu, X. Wei, and L. Dai, “Near-Field MIMO Communications for 6G: Fundamentals, Challenges, Potentials, and Future Directions,” IEEE Commun. Mag. , vol. 61, no. 1, pp. 40–46, Sep. 2022. ref6 K. T. Selvan and R. Janaswamy, “Fraunhofer and Fresnel Distances: Unified derivation for aperture antennas,” IEEE Antennas Propag. Mag., vol. 59, no. 4, pp. 12–15, Jun. 2017. ref7 Y. Zhang, X. Wu and C. You, “Fast near-field beam training for extremely large-scale array,” IEEE Wireless Commun. Lett., vol. 11, no. 12, pp. 2625–2629, Oct. 2022. ref8 C. A. Balanis, Antenna theory: analysis and design. John wiley & sons, 2015. ref9 N. Jindal, “Antenna combining for the MIMO downlink channel,” IEEE Trans. Wireless Commun., vol. 7, no. 10, pp. 3834–3844, Oct. 2008. ref9a H. Zhang, N. Shlezinger, F. Guidi , D. Dardari, and Y. Eldar, “6G Wireless Communications: From Far-Field Beam Steering to Near-Field Beam Focusing,” 6G Wireless Communications: From Far-Field Beam Steering to Near-Field Beam Focusing, vol. 61, no. 4, pp. 72-77, Apr. 2023. ref10 C. K. Wen, and W. T. Shih and S. Jin, “Deep learning for massive MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 748–751, Mar. 2018. ref11 S. Schwarz, M. Rupp, S. Wesemann, “Grassmannian product codebooks for limited feedback massive MIMO with two-tier precoding,” IEEE J. Sel. Topics Signal Process. , vol. 11, no. 5, pp. 1119–1135, Aug. 2019. ref12 B. lerckx and C. Oestges, MIMO wireless networks: channels, techniques and standards for multi-antenna, multi-user and multi-cell systems. Academic Press, 2013. ref13 J. Kang and W. 
Choi, “Novel codebook design for channel state information quantization in MIMO rician fading channels with limited feedback,” IEEE Trans. Signal Process., vol. 69, pp. 2858–2872, May. 2021. ref14 Y. Xie, S.Jin, J. Wang, Y. Zhu, X. Gao, and Y. Huang, “A limited feedback scheme for 3D multiuser MIMO based on Kronecker product codebook,” in Proc. IEEE Annu. Int. Symp. Pers. Indoor Mobile Radio Commun. (PIMRC), London, U.K., Sep. 2013, pp. 1130–1135. ref15 R. M. Dreifuerst and R. W. Heath, “Initial Access Codebook Design and CSI Type-II Feedback for Sub-6GHz 5G NR,” arXiv preprint arXiv:2303.02850, 2023. ref151 IEEE Standard for Local and metropolitan area networks Part 16: Air Interface for Broadband Wireless Access Systems Amendment 3: Advanced Air Interface, IEEE Standard 802.16m, 2011. ref15a Z. Xiao, T. He, P. Xia and X. G. Xia, “Hierarchical codebook design for beamforming training in millimeter-wave communication,” IEEE Trans. Wireless Commun., vol. 15, no. 5, pp. 3380–3392, Jan. 2016. ref16 W. Shen, L. Dai, B. Shim, Z. Wang, and R. W. Heath, “Channel feedback based on AoD-adaptive subspace codebook in FDD massive MIMO systems,” IEEE Trans. Commun., vol. 66, no. 11, pp. 5235–5248, Jun. 2018. ref17 P. H. Kuo, H.T. Kung and P.A.Ting, “Compressive sensing based channel feedback protocols for spatially-correlated massive antenna arrays,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Shanghai, China, Apr. 2012, pp. 492–497. ref18 X. Wei, L. Dai, Y. Zhao, G. Yu and X. Duan, “Codebook design and beam training for extremely large-scale RIS: Far-field or near-field?,” China Commun. , vol. 19, no. 6, pp. 193–204, Jun. 2022. ref19 S. Hu, M. C. Ilter, and H. Wang, “Near-Field Beamforming for Large Intelligent Surfaces,” in Proc. IEEE Annu. Int. Symp. Pers. Indoor Mobile Radio Commun. (PIMRC), Kyoto, Japan, Sep. 2022, pp. 1367-1373. ref20 M. Cui and L. Dai, “Channel estimation for extremely large-scale MIMO: Far-field or near-field?,” IEEE Trans. Commun. , vol. 70, no. 4, pp. 2663–2677, Jan. 2022. ref21 X. Shi, J. Wang, Z. Sun and J. Song, “Hierarchical Codebook-based Beam Training for Extremely Large-Scale Massive MIMO,” arXiv preprint arXiv:2210.03345, 2022 ref22 W. U. Bajwa, J. Haupt, A. Sayeed, and R. Nowak, “Compressed Channel Sensing: A New Approach to Estimating Sparse Multipath Channels,” Proc. IEEE, vol. 98, no. 6, pp. 1058-1076, Apr. 2010. ref23 D. J. Love, R. Heath, and T. Strohmer, “Grassmannian beamforming for multiple-input multiple-output wireless systems," IEEE Trans. Inf. Theory, vol. 49, no. 10, pp. 2735–2747, Oct. 2003. ref24 C. K. Au-yeung and D. J. Love, “On the performance of random vector quantization limited feedback beamforming in a MISO system,” IEEE Trans. Wireless Commun., vol. 6, no. 2, pp. 458–462, Feb. 2007. ref25 P. Xia and G. Giannakis, “Design and analysis of transmit-beamforming based on limited-rate feedback,” IEEE Trans. Signal Process., vol. 54, no. 5, pp. 1853–1863, Apr. 2006. ref26 Z. Wu and L. Dai, “Multiple Access for Near-Field Communications: SDMA or LDMA?,” IEEE J. Sel. Areas Commun., pp. 1 - 1, May. 2023.
http://arxiv.org/abs/2306.04452v1
20230607142829
How to Find Opinion Leader on the Online Social Network?
[ "Bailu Jin", "Mengbang Zou", "Zhuangkun Wei", "Weisi Guo" ]
cs.SI
[ "cs.SI" ]
Cranfield University College Rd, Cranfield, Wharley End Bedford UK [email protected] Cranfield University College Rd, Cranfield, Wharley End Bedford UK [email protected] Cranfield University College Rd, Cranfield, Wharley End Bedford UK [email protected] Cranfield University College Rd, Cranfield, Wharley End Bedford UK [email protected] Online social networks (OSNs) provide a platform for individuals to share information, exchange ideas and build social connections beyond in-person interactions. For a specific topic or community, opinion leaders are individuals who have a significant influence on others' opinions. Detecting and modeling opinion leaders is crucial as they play a vital role in shaping public opinion and driving online conversations. Existing research have extensively explored various methods for detecting opinion leaders, but there is a lack of consensus between definitions and methods. It is important to note that the term "important node" in graph theory does not necessarily align with the concept of "opinion leader" in social psychology. This paper aims to address this issue by introducing the methodologies for identifying influential nodes in OSNs and providing a corresponding definition of opinion leaders in relation to social psychology. The key novelty is to review connections and cross-compare different approaches that have origins in: graph theory, natural language processing, social psychology, control theory, and graph sampling. We discuss how they tell a different technical tale of influence and also propose how some of the approaches can be combined via networked dynamical systems modeling. A case study is performed on Twitter data to compare the performance of different methodologies discussed. The primary objective of this work is to elucidate the progression of opinion leader detection on OSNs and inspire further research in understanding the dynamics of opinion evolution within the field. How to Find Opinion Leader on the Online Social Network? Weisi Guo July 31, 2023 ======================================================== § INTRODUCTION Back in the 1940s, Paul F. Lazarsfeld, Bernard Berelson, and Hazel Gaudet proved that social influence is a vital strength for people modifying their opinions towards a topic within a social network<cit.>. As a part of their research on public opinion, opinion leaders were defined as individuals with a significant impact on the opinions, attitudes and behavior of others. With the rise of online social networks (OSNs) in today's digital age, the role of opinion leaders has become increasingly crucial in shaping public opinion and driving online conversations. The detection of opinion leaders on OSNs have practical applications for businesses and organizations, including targeted marketing strategies, monitoring the spread of information and mitigating the negative impact on public discourse. Empirical research in psychology has explored the phenomenon of opinion evolution during interpersonal interactions. Studies have shown that people tend to modify their opinions to seek similarity with others in a group, highlighting the high inter depend of individual opinions. The combined effects of the influences from cultural norms, mass media and interactions are collectively known as social influence. The concept of opinion leader was first be introduced in the hypothesis of two-step flow of communication<cit.>. 
It posited that the influence from mass media first reaches to opinion leaders who then disseminate it to their associates. Nowadays, opinion leaders refer to individuals who have the ability to influence the opinions of others through interactions on online social media. Opinion leaders have been shown to play significant roles in promoting, propagating and shaping the opinions in various domains, including marketing, political science and public health<cit.>. Numerous review papers have discussed the related research topics. Riquelme et al. provided an extensive survey on activity, popularity and influence measures that rank influential users in Twitter network<cit.>. Bamakan et al. categorised the characteristics of opinion leaders and the approaches for opinion leader detection<cit.>. Panchendrarajan et al. conducted a comprehensive survey on topic-based influential user detection<cit.>. However, there is a lack of consensus between definitions and methods. In contrast, we focus in this paper is on identifying influential nodes in OSNs and providing a corresponding definition of opinion leaders in relation to social psychology. In this paper, we categorised the opinion identification methods into four main categories, Topology-based Centrality, Topic-sensitive centrality, Control and Graph Sampling. These categories not only employ varied data sources from OSNs, but they also define opinion leaders differently. Topology-based Centrality mainly concentrates on the network structure. In this context, opinion leaders are defined as individuals who occupy the most significant position within the social group. When user content is taken into consideration, the Topic-Sensitive Centrality facilitates the identification of opinion leaders within specific topics. It aids in the identification of influential users who can disseminate topic-related information or influence the opinions of others within a specific context. Additionally, real-time content can be utilised as a representation of the dynamic opinion states of users, which can be used to build the mathematical model to describe the evolution of opinion states. Leveraging the dynamic influence model, Control methodologies aim to identify individuals who can steer the direction of overall opinion. Then, Graph Sampling methodologies focus on identifying a specific subset of opinion leaders who, despite their limited numbers, can be instrumental in reconstructing the comprehensive opinion network. As illustrated in Figure.<ref>, we have showed the relationship among these four methodologies, offering a deep understanding of the development of opinion leader detection method. In table.<ref>, we provide the notations that are used in this paper. §.§ Background This section provides the background knowledge related to topic analysis and opinion evolution modelling. §.§.§ Topic Analysis Topic analysis involves the utilization of natural language processing (NLP) detect the topic-related semantic structures from human language. In this paper, we mainly employ two types of topic analysis: topic modelling and opinion representation. Topic modelling utilizes statistical modelling approaches to assign topic probability distributions to user-generated content. On the other hand, opinion representation is a task of classifying the content into opinion state vectors associated with specific topics. 
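As a concrete illustration of topic modelling, the sketch below applies latent Dirichlet allocation from scikit-learn to a toy set of posts and returns a topic probability distribution for each post; the corpus, the number of topics and all parameters are purely illustrative and are not tied to any dataset discussed in this survey.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy posts standing in for user-generated content (illustrative only)
posts = [
    "the new vaccine rollout and public health policy",
    "vaccine trial results reported by health officials",
    "election debate and campaign policy announcements",
    "polling numbers ahead of the election campaign",
]
counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # per-post topic probability distributions
print(theta.round(2))               # each row sums to 1: the topic mixture of one post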
§.§.§ Opinion Evolution Modelling As the effect of “word-of-mouth”, people are likely to be influenced by the idea of their friends in the process of agricultural innovation, adoption of medical, and new product promotion. To explain how individuals develop their opinions towards various topics over time, a formal model on the opinion evolution in a group was proposed by French in 1956<cit.>. From there, social influence models were developed to explain social phenomena such as opinion clustering or controversy. To capture the complexity of opinion evolution, researchers have considered both linear and non-linear models. One example of a linear model is the French-DeGroot model, which introduced a more general form using Markov Chain processes to illustrate how social influence leads to opinion consensus<cit.>. However, opinion consensus is not the only outcome from group discussions. Non-linear models, such as the Hegselmann-Krause model, have been proposed to incorporates a bounded confidence attribute that limits the influence of opposing opinions<cit.>. § METHODOLOGY The definition of being influential point is ambiguous, leading to the development of various measures for identifying opinion leaders. Here we categorise detection methods into four groups: topology-based centrality, topic-sensitive centrality, control and graph sampling. §.§ Topology-based Centrality Centrality in graph theory and network analysis is a fundamental concept that refers to the importance of a node within a network. In the context of social network, centrality measures help identify users who have extensive connections with other members of a network. §.§.§ Degree Centrality Degree centrality is the number of connections a node has in a network. Freeman presented Degree Centrality in Social Network, which is rooted in the belief that an individual’s significance within a group is tied to the number of people they are connected to or interact with<cit.>. In real-world case, the node with the highest degree is the user that directly interacts with many other users within the network. This method is intuitive to the definition of influence, whereas the global structural of the graph is not considered. §.§.§ Closeness Centrality Taking consideration of indirect link using the path length, the closeness centrality extends the local centrality to global centrality. The basic idea of closeness centrality is that the node with high closeness centrality can spread the information to other nodes quickly. In this case, the position of one point in the network is more essential than the number of links it own. In online social network, users with high closeness centrality have been proved to be effective spreaders of information by measuring the diffusion effect <cit.>. However, Closeness centrality is very sensitive to a large distance or missing link due to considering the distance of each pair. §.§.§ Betweenness Centrality Betweenness centrality is based on the number of times a node lies on the shortest path between two other nodes in the network<cit.>. In online social network, user with high Betweenness centrality operates like a bridge in the shortest pathes between possible user pairs. Closeness and betweenness centrality are difficult to be applied in large-scale networks, and have been proved to be unstable in some cross-sectional and temporal networks. 
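The three topology-based measures introduced above can be computed in a few lines with NetworkX, as sketched below on a small toy interaction graph (the graph and node labels are illustrative); on real OSN graphs the three rankings frequently disagree, which is precisely why the choice of centrality matters.

import networkx as nx

# small follower/interaction graph (illustrative)
G = nx.Graph([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e"), ("e", "f")])

deg = nx.degree_centrality(G)        # direct connections, normalized by N-1
clo = nx.closeness_centrality(G)     # inverse average shortest-path distance
bet = nx.betweenness_centrality(G)   # fraction of shortest paths passing through a node

for name, scores in [("degree", deg), ("closeness", clo), ("betweenness", bet)]:
    top = max(scores, key=scores.get)
    print(f"{name:12s} top node: {top}  ({scores[top]:.2f})")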
§.§.§ Eigenvector Centrality The eigenvector centrality of a node is calculated as the weighted sum of the centralities of its neighbors, with the weights determined by the strength of the connections between the node and its neighbors. Therefore, this measure can be used to quantify the level of influence of each node: the higher the score, the greater the level of influence. Eigenvector centrality is designed to differ from the former measures when the network contains high-degree nodes connected to many low-degree nodes or low-degree nodes connected to a few high-degree nodes. The disadvantage of eigenvector centrality is that it has limitations when applied to directed networks. A node can receive a score of zero in the absence of incoming links, resulting in no contribution to the centrality metric of other nodes.
§.§.§ Katz Centrality Katz centrality and PageRank are variants of eigenvector centrality. Katz centrality takes into account both the number of direct connections a node has and the connections of its neighbours, which can be less sensitive to the size of the network and has been shown to produce stable rankings<cit.>. The limitation of Katz centrality is that it can be influenced by new links to a particular group of nodes.
§.§.§ PageRank Centrality To mitigate the impact of spendthrift nodes on centrality scores, PageRank reduces the weight of incoming links from these nodes. In PageRank, the weight of an incoming link is proportional to the PageRank score of the node it originates from<cit.>. Compared to Katz centrality, PageRank adds a scaling factor, which gives it the ability to penalise nodes that are linked to from many low-quality nodes and reward nodes that are linked to from high-quality nodes. In this way, PageRank centrality mitigates the impact of nodes with many outgoing links, and instead focuses on the quality of the incoming links, rather than the quantity.
§.§.§ HITS Like PageRank, Hyperlink-Induced Topic Search (HITS) is a link-based ranking algorithm that determines the importance of a node<cit.>. The intuition of HITS is that an Authority score and a Hub score are both allocated to each web page, assuming that high-quality Hubs usually point to high-quality Authorities, and high-quality Authorities are pointed to by high-quality Hubs. As a result, the Authority score of a page is proportional to the total Hub scores of the Hubs that link to it. In online social networks, the algorithm searches for influential accounts by collecting query-related accounts and then ranking them only by the network structure rather than textual content.
§.§.§ SPEAR Yeung et al. introduced the terms experts and expertise for resource discovery<cit.>. Assuming that a user’s expertise depends on the quality of the resources they have collected and the quality of a resource depends on the expertise of the other users who have assigned relevant tags, Spamming-Resistant Expertise Analysis and Ranking (SPEAR) was introduced to rank users in online knowledge communities. SPEAR is a graph-based algorithm similar to HITS that implements the concept of expertise. Later, in 2016, Shinde and Girase proposed a modified SPEAR algorithm<cit.> where the expertise of a user is based on different topics. In the topic-specific SPEAR algorithm, the credit score function considers not only time, but also the number of comments, the number of likes, word count, and so on.
§.§.§ TunkRank Tunkelang introduced TunkRank, a measure of user influence based on PageRank<cit.>.
TunkRank operates on three assumptions: 1) the influence of an influencer corresponds to the expected number of audience members who read a tweet from that influencer, 2) the probability of an audience member reading a tweet depends on the number of accounts they follow, and 3) an audience member retweets a seen tweet with a constant probability. The expected number of people who read the tweet can then be calculated recursively from the equally distributed probability that each follower reads the tweet and the constant probability that a user retweets it.
§.§.§ Dynamical Influence Dynamical influence is a centrality measure that takes into account the interplay between network structure and the dynamical state of nodes. This is a departure from classical centrality measures, which rely solely on topology. In the context of social networks, the dynamical influence process can be used to explain the dynamics of idea adoption. In this scenario, opinion leaders are defined as the key individuals who can trigger a significant cascade of influence. The challenge lies in identifying these key individuals, which is essentially an influence optimization problem. The goal is to target an initial set of nodes with the greatest influence spread, thereby promoting information to a large fraction of the network. The maximization of information flow was first considered as a discrete optimization problem by Kempe et al.<cit.>. They discussed models for how influence propagates through online social networks, and proposed a greedy hill-climbing approach for identifying the most influential nodes that provides provable approximation guarantees. Zhao et al.<cit.> built SEISMIC, a statistical model based on the theory of self-exciting point processes, to model information cascades. SEISMIC provides an extensible framework for predicting information cascades; it requires no feature engineering and scales linearly with the number of observed reshares.
§.§.§ SIR The Susceptible-Infected-Recovered (SIR) model is another approach that considers the dynamical state of nodes. The SIR mathematical model was originally designed to describe the spread of infectious diseases in a population. The model divides the dynamical state of the population into three categories: Susceptible (S), Infected (I), and Recovered (R). The SIR model has also been used to model the spread of information in a network, where nodes can be thought of as either susceptible to influence, influenced, or recovered from the influence. When applied to opinion leader identification, candidate opinion leaders are set as the initially infected nodes, and the probability of infection depends on the influence of the opinion leader.
§.§ Topic-sensitive Centrality Opinion leaders are identified based on various characteristics that align with diverse social groups. Analysis of dynamic influence across topics and time has demonstrated that ordinary users can gain influence by focusing on a single topic<cit.>. Recognising that the influence of these leaders can fluctuate across topic fields, topic-sensitive centrality approaches incorporate topical attributes into the analysis. Several topic-sensitive ranking methods have been developed to determine the topical influence of users and their capacity to disseminate information or influence opinion on specific topics. Simultaneously, the dynamics of opinion can be analysed based on a particular topic.
§.§.§ InfluenceRank Topical analysis can be used to quantify the novelty of certain content by representing each content as a document and reducing the dimensionality using Latent Dirichlet Allocation(LDA). The InfluenceRank algorithm use the topical analysis to measure the importance and novelty of a blog in comparison to other blogs<cit.>. With the feature vectors that represent the topic distribution, the dissimilarity can be calculated using cosine similarity. InfluenceRank outperforms in terms of coverage, diversity and distortion. §.§.§ TwitterRank TwitterRank is a variant of the TunkRank algorithm that incorporates topical similarity in the calculation of influence. The phenomenon of “homophily” has been observed in various network ties, including information transfer, friendship, and marriage<cit.>. Weng et al. demonstrate that “homophily” also exists in the context of Twitter, where users tend to follow those who share similar topical interest<cit.>. TwitterRank was proposed based on this finding, measuring influence by considering both topical similarity and link structure. However, users’ topical interests can change over time, and as a result, the freshness of their activities needs to be taken into account. Dhali et al.<cit.> addressed this issue by proposing TemporalTwitterRank, a modified algorithm that estimates transition probabilities using topic profile vectors. By emphasizing the temporal dimension of users' activities, TemporalTwitterRank provides a more comprehensive assessment of influence. §.§.§ TopicSimilarRank Wang et al. proposed the TopicSimilarRank algorithm considering the user’s own influence and difference in influential values caused by responses from others. The TopicSimilarRank algorithm is inspired by TwitterRank and takes into account topic similarity, user attributes, interactive information, and network structure. To construct the weighted network, the users can be seen as a set of weighted nodes, and the reposts and comments can be seen as edges with weights represented by similarity values between users. Then the directed and weighted graph can reflect the influential relationships between users. The experiment analysis indicates that TopicSimilarRank is well-suited for mining opinion leaders in topic domains. Similarly, Eliacit et al. <cit.> developed three metrics - User Trust (degree of friendship, expertise and activity), Influence Period and Similarity - to construct a weighted influence network. Influence rank was calculated based on the PageRank Algorithm. The empirical experiment demonstrates that considering the ranking of users enhances the accuracy of sentiment classification in the community. §.§.§ ClusterRank To identify the most influential authors for a specific topic, Pal and Counts proposed a set of features, including both nodal and topical metrics, to describe the authors in various topic fields<cit.>. To reflect the impact of users with respect to one topic, various features are selected for original tweets, conversational tweets and repeat tweets. ClusterRank process includes using probabilistic clustering on this feature space, within-cluster ranking procedure and producing a list of top authors for a given topic. The experiment showed that topical signal and mention impact are two critical features to determine the ranking. §.§.§ OpinionRank OpinionRank considers both the dynamic of information influence and the dynamic of forming opinions. 
In 2009, Zhou and Zeng introduced the concept of opinion networks and the OpinionRank algorithm to rank nodes based on their opinion scores<cit.>. In this context, a weighted link in the opinion network represents the opinion orientation from the opinion sender to the opinion receiver. For instance, on a review website, the opinion receiver is the original review writer and the opinion sender is the comment writer under the review. The opinion orientation can be calculated as the average opinion score after assigning an opinion score to each word. Experimental results indicate that sentiment factors significantly influence social network analysis.
§.§.§ Maximization Huang et al. introduced Positive Opinion Leader Detection (POLD) to track the public opinion formation process<cit.>. POLD constructs multiple opinion networks on comment networks rather than user networks. The comment network takes into account the time interval between comments, assuming that influence weakens with increasing intervals. Applying POLD to news comments reveals that the most influential comments and users vary over time. Dong et al. further hypothesised that influence only occurs when a recipient posts within a certain time interval after the influencer<cit.>. The weight of the edges in this network is modelled based on the time gap between the posts of the influencer and the recipient.
§.§.§ TrustRank Chen et al. proposed TrustRank, which considers both positive and negative opinions<cit.>. TrustRank constructs a network with direct and indirect sentiment-labelled links. The construction has four phases: 1) set up a basic network, 2) label the links, 3) infer the sign, and 4) transform the post network into a user network. During the construction of the network, both explicit and implicit links are considered. Explicit links are denoted by replies and citations, while implicit links are inferred from the semantic similarities between posts. TrustRank outperforms other PageRank-like models on the online comments of a real forum.
§.§.§ InfluenceModellingRank In our previous work, we proposed a method to model the evolution of personal opinions as an ordinary differential equation (ODE)<cit.>. To account for the influence of influencers on a recipient's opinion, we employed French's formal theory<cit.> to model the social influence effect. This effect is determined by the discrepancy between their opinions and the influence weight representing the strength of the effect. To compute the influence weight, we utilized a collection of following links and posts. By assigning the influence weights as link weights and using the PageRank algorithm, we were able to rank the users based on their influence weight. The resulting InfluenceModellingRank provides a metric for understanding the opinion influence dynamics in social networks.
§.§ Influence based on Control Centrality Social influence is roughly defined as follows: given two individuals u,v in a social network, u exerts power on v, that is, u can change the opinion of v in a direct or indirect way <cit.>. The influence of an individual in the social network is affected by the self-dynamics of the individual's behavior, the coupling dynamics between individuals, and the network structure of the social network. Metrics for influence based on the previous centrality measures mainly consider the network topology of the social network.
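Before turning to the control-theoretic view, the following toy simulation (plain numpy; the influence matrix, initial opinions and number of steps are illustrative assumptions rather than the exact model of any surveyed work) makes the linear opinion dynamics behind models such as French-DeGroot and the ODE-based InfluenceModellingRank concrete: each node repeatedly moves its opinion towards a weighted average of the opinions it is exposed to.

# Toy simulation of linear averaging opinion dynamics on a weighted
# influence network (illustrative of French/DeGroot-style models).
import numpy as np

# Row-stochastic influence matrix W: W[i, j] is the weight node i places on j.
W = np.array([[0.6, 0.2, 0.2, 0.0],
              [0.1, 0.7, 0.1, 0.1],
              [0.3, 0.0, 0.5, 0.2],
              [0.0, 0.4, 0.1, 0.5]])

x = np.array([1.0, -0.5, 0.2, 0.8])  # initial opinions in [-1, 1]

for step in range(50):
    x = W @ x  # each node averages the opinions it is exposed to

print("opinions after 50 steps:", x.round(3))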
When we consider both the social network structure and the dynamics of each node, it is natural for us to ask the following questions: * whether it is possible for a node to influence other nodes to any desired state * how many nodes' states can be influenced by one nodes Therefore, it is reasonable to introduce controllability in complex network to quantify the influence of each node and detect the influential node. Here we introduce the concept of controllability in complex networks to identify influential nodes. The analysis framework we introduce here to identify influence nodes can be generally applied in social networks, which reflects in following perspectives: 1) this framework can be used in any linear dynamics and does not need to know the specific dynamic functions; 2) only the network topology of the social network is needed, and even the weights of connections are not necessary to know. §.§.§ Exact control centrality Consider a complex system described by a directed weighted network of N nodes, the dynamics of a linear time-invariant (LTI) system can be described as ẋ(t)=Ax(t)+Bu(t), where x(t)=(x_1(t),x_2(t), ⋯ x_N(t))^⊤∈ℝ^N captures the state of each node at time t. A∈ℝ^N × N is an N × N matrix describing the weighted connection of the network. The matrix element a_ij∈ℝ gives the strength that node j affects node i. B∈ℝ^N × M is an N × M input matrix (M ≤ N) identifying the nodes that are controlled by the time-dependent input vector u(t) = (u_1(t), u_2(t), ⋯, u_M(t)) ∈ℝ^M with M independent signals imposed by the controller. The matrix element b_ij∈ℝ represents the coupling strength between the input signal u_j(t) and node i. Kalman's controllability rank condition states that the LTI system is controllable if and only if the N × NM controllability matrix C≡ [B, AB, A^2B,⋯,A^N-1B] has full rank <cit.>, i.e., rank C = N. When the system (A, B) is not controllable, the dimension of the controllable subspace is rank C, where rank C<N. One thing we are interested in is how many dimensions of the subspace of the system can be controlled by a single node. Here, we use rank C^(i) to capture the ability of i in controlling other nodes in the networked system. Mathematically, rank C^(i) captures the dimension of the controllable subspace or the size of the controllable subsystem when we only control node i. The exact control centrality of node i is defined as C(i) ≡ rank (C^(i)), where the B in matrix C reduces to the vector b^i with a single nonzero entry, e.g. b^i=[0, 0, ⋯, b_i, ⋯]^⊤. By calculating the exact control centrality of each node in the networked system, we can find the most powerful nodes in controlling the whole networked system. In a social network, we can find the most influential nodes by exact control centrality. §.§.§ Structural control centrality When we know the exact network structure and the weight of each connection in the social network, the influence of each user can be ranked by the exact control centrality. However, for many complex networks, the system parameters are not precisely known, e.g. the elements in matrix A are not exactly known. We only know whether there is a link or not between two nodes but are not able to measure the weights of the link. Hence, it is difficult to numerically verify Kalman's controllability rank condition using fixed weights. To solve this problem, Lin <cit.> introduced the concept of structural control. 
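Before discussing the structural variant, the exact control centrality C(i) = rank(C^(i)) defined above can be computed directly. The sketch below is a minimal numpy illustration on an assumed toy weighted network; the tolerance passed to the numerical rank is an implementation choice.

# Exact control centrality: dimension of the subspace controllable from a
# single node i, computed as rank [b^i, A b^i, ..., A^(N-1) b^i].
import numpy as np

# Toy weighted adjacency matrix (assumed for illustration): A[k, l] is the
# strength with which node l affects node k.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.5, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
N = A.shape[0]

def exact_control_centrality(A, i, tol=1e-10):
    n = A.shape[0]
    b = np.zeros((n, 1))
    b[i, 0] = 1.0                       # drive only node i
    blocks = [b]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])   # A^k b
    C = np.hstack(blocks)               # controllability matrix
    return np.linalg.matrix_rank(C, tol=tol)

for i in range(N):
    print(f"node {i}: C(i) = {exact_control_centrality(A, i)}")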
An LTI system (A, B) is a structured system if the elements in A and B are either fixed zeros or independent free parameters. Apparently, rank (C) varies as a function of the free parameters of A and B. It achieves the maximal value for all but an exceptional set of values of the free parameters. This maximal value is called the generic rank of the controllability matrix C, denoted as rank_g(C), which represents the generic dimension of the controllable subspace. The system (A, B) is structurally controllable if we can set the nonzero elements in A and B such that the resulting system satisfies rank_g C =N. The structural control centrality of node i can be defined as <cit.> C_g(i) ≡ rank_g (C^(i)). The structure control centrality is an upper bound of exact control centrality for all admissible numerical realizations of the controllable matrix C. To calculate C_g(i), we need to introduce some concepts in graph theory. A node j is called accessible if there always exists at least one directed path from the input nodes to j. A stem is a directed path starting from an input node, so that no nodes appear more than once in it. C_g(i) can be calculated according to Hosoe's controllable subspace theorem <cit.>: rank_g(C) = max_G_s ∈ G|E(G_s)|, where G_s is the subgraph of the accessible part of G only consists of stems and cycles and |E(G_s)| represents the number of edges in G_s. §.§ Graph Sampling & Recovery Another idea to determine the most influential nodes over a network leverages whether these nodes can be sampled to recover the whole networked dynamics. This refers to as graph sampling and recovery techniques, which aim to compress the time series of high-dimensional and dependent networked dynamics via a subset of critical nodes, whose dynamics can guarantee the recovery of the whole networked data. From the theoretical perspective, this includes the spatial, and the temporal dependency analysis, whereby the former studies the correlation or hidden high-dimensional dependency among the set of nodes, and the latter focuses on the events at which time steps would be the trigger or with higher importance. §.§.§ Spatial Correlation Analysis Spatial correlation analysis tries to determine an orthogonal signal subspace (matrix), e.g., the operational matrix in compressed sensing (CS), or the graph Fourier transform (GFT) operator <cit.>. Then, leveraging the orthogonal subspace, the highly correlated networked data can be compressed by the linear combinations of the subset of the orthogonal bases, which can be mapped to the critical nodes for sampling and recovery purposes. To be specific, we assume the acquisition of either the dynamic evolution model 𝐱_k+1=𝐀·𝐱_k [], or the networked data 𝐗=[𝐱_1,𝐱_2,⋯,𝐱_K], where 𝐱_k of size N×1 represents the data of N nodes at kth time-step. The GFT orthogonal matrix, denoted as Γ, can be derived by the left singular matrix of either 𝐀 <cit.> or 𝐗 <cit.>, referred to as model-driven or data-driven respectively. Then, the correlation among nodes is characterized by the bandwidth graph frequency, which is determined as set ℛ whose elements are the indices of non-zero elements in the graph frequency response 𝐱̃_k, i.e., 𝐱̃_k=Γ_:, ℛ^T·𝐱_k, where Γ_:, ℛ represents the sub-matrix by selecting all the rows and the columns with indices in ℛ. The samples on the critical nodes (with indices in the set 𝒮⊂𝒱) is denoted as 𝐱_𝒮,k. 
Leveraging the graph frequency response, the relation between samples and the graph frequency response is 𝐱_𝒮,k=Γ_𝒮, ℛ·𝐱̃_k, which facilitates the dynamic recovery as: 𝐱_k=Γ_:, ℛ·𝐱̃_k(a)=Γ_:, ℛ· pinv(Γ_𝒮, ℛ)·𝐱_𝒮,k. From Eq. (<ref>), to guarantee the recovery process, (a) should be satisfied and is equivalent to ensure the existence of the inverse of Γ_𝒮, ℛ. As such, the node importance rank on the sampling and recovery perspective can be generated as s_1,s_2,⋯,s_N, by the descending order of the least singulars, i.e., s_n+1=_s∈𝒱∖𝒮_nσ_min(Γ_𝒮_n⋃{s},ℛ) where 𝒮_n≜{s_1,s_2,⋯,s_n}, and σ_min(·) represents the least singular value. §.§.§ Spatial & Temporal Dependency Analysis Spatial and temporal dependency analysis aims to determine the critical nodes by considering both the node level and temporal level correlations <cit.>. By combining the temporal correlation information, a more compact dynamic subspace can be derived, which gives rise to a reduction in the number of sampling nodes, and leads to a node importance rank. The derivation of the dynamic subspace contains the model-based and data-driven approaches. In the case of model-based methods, the evolution model is expressed as [𝐱_1^T, 𝐱_2^T,⋯,𝐱_K^T]^T=𝐀̅·𝐱_1, with 𝐀̅≜[(𝐀^0)^T,(𝐀^1)^T,⋯,(𝐀^K-1)^T]^T, whose columns compose the dynamic subspace. Such a model-based subspace compresses the networked dynamics via the spatial and temporal correlations and leads to a compact one-to-one mapping (from NK to N). Then, the sampling and recovery problem is converted to select the critical nodes to make truncated subspace 𝐀̅_𝒮, : full column rank. When the model is unavailable (e.g., difficult to pursue a linear regression), the data-driven methods are well-suited to derive the dynamic subspace. To be specific, by pursuing a compact singular value decomposition of the data 𝐗, denoted as 𝐗=Γ· diag(ς)·𝐕^T (ς is the vector of N singular values), the dynamic subspace can be obtained via vec(𝐗)=𝐀̅·ς, where 𝐀̅ is the sub-matrix of the Kronecker product 𝐕⊗Γ by selecting 1,N+2,2N+3,⋯,N^2 columns. In this way, the data-driven subspace compresses the networked dynamics to a compact one-to-one mapping from NK to N space. After the derivation of the dynamic subspace, the node importance rank can be derived by the greedy selection of nodes to maximize the least singular value of 𝐀̅_𝒮, : which is similar to Eq. (<ref>). §.§ Evaluation Method The evaluation of Opinion Leader Detection methods is not straightforward, and various papers use different evaluation methods. Unfortunately, there is no agreement on which evaluation method is the best. Nonetheless, some evaluation methods are still commonly used and will be discussed in this section. §.§.§ Descriptive Methods In social sciences, descriptive methods are often utilized to identify opinion leaders. One of the most popular descriptive methods is using experts to rank the opinion leaders in one group network. In this approach, an expert is asked subjectively to rate the comments from users having either a strong or weak influence. The ratings of comments are then combined to determine the influential rate of each user. However, descriptive methods require creating questionnaires and conducting interviews, which are costly and challenging to implement. These descriptive measures have been criticized because they do not consider the role of ordinary users in the information flow process<cit.>. 
§.§.§ OSN Metrics For OSN platforms, the number of followers is the most commonly used metric to determine a user's influence. This approach assumes that each tweet by a user is read by all of their followers. Other metrics such as likes, shares, or mentions are also used to measure user engagement and influence<cit.>. On Twitter, these public metrics are accessible through the Twitter Application Programming Interface (API), which is built on communication data and metadata.
§.§.§ Kendall's τ Kendall’s τ is a statistical measure that determines the similarity between the ranking orders of two variables, regardless of their magnitudes. Kendall’s τ coefficient ranges from -1 to 1, with a value of -1 indicating complete disagreement between the rankings, and 1 indicating perfect agreement between the rankings. Kendall’s τ correlation is often used in social science research, including the opinion leader detection task, to assess the degree of agreement or disagreement between two rankings.
§ CASE STUDY This section presents a case study applying different ranking methods to Twitter datasets on the COVID-19 and Feminism topics. In order to gather and prepare data for our study, we relied on the methodology outlined in our previous study<cit.>. We present the ranking analysis conducted on two datasets: COVID-19 and Feminism. We computed three centrality rankings (Betweenness, Eigenvector, PageRank), one topic-sensitive ranking (InfluenceModel), one control ranking (Control), and two graph sampling rankings (MGFT, DGFT). The ranking results on the COVID-19 dataset and the Feminism dataset are visualised in Figure <ref>. In the visualisation, the dots represent users, and the links represent “Following” relationships between two users. We labelled the top 10 percent of users as red dots and the others as blue dots for comparison purposes. As shown in Figure <ref>, the three centrality rankings and the topic-sensitive ranking are more similar to each other. Conversely, the Control ranking and the two graph sampling rankings yield distinct results due to their different definitions of influence. For the validation rankings, we selected four topic-filtered matrix rankings: Retweet, Reply, Like, and Quote. These filtered matrix rankings were calculated by considering only the topic-related tweet matrix posted by the group of users. The Kendall τ correlation results are illustrated in Table <ref> and Table <ref>. In Table <ref>, the Control rank exhibits the highest similarity score with the Retweet rank and the DGFT rank demonstrates the highest similarity score with the Reply rank. In Table <ref>, the MGFT rank achieves the highest similarity scores with all four validation ranks. A horizontal comparison among different ranking strategies is challenging, due to the different criteria utilised by the methods. For instance, in an extreme case where a pure repeater that follows all original-content users exists, graph sampling theory may select it as an influential node (as a better representative), given its potential to contribute to data recovery. Such a user, on the other hand, may be trivial in the control-based ranking, as controlling it contributes little to controlling the rest of the network. There may nevertheless be some overlap, in that a good representative or control user may also have good topological or topic-sensitive properties, yet their correlation and causality require further study.
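For completeness, the Kendall's τ comparison used in the tables above can be reproduced with a few lines of Python; the sketch assumes the scipy library and two toy score vectors that are unrelated to the reported results.

# Compare two rankings of the same users with Kendall's tau
# (illustrative scores only).
from scipy.stats import kendalltau

# Influence scores assigned to the same five users by two methods.
pagerank_scores = [0.31, 0.12, 0.25, 0.08, 0.24]
retweet_scores  = [0.28, 0.15, 0.22, 0.05, 0.30]

tau, p_value = kendalltau(pagerank_scores, retweet_scores)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")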
§ CONCLUSION The purpose of this review is to provide an overview of the development of the "opinion leader" concept and a detailed comparative analysis of the corresponding detection techniques. The key novelty is to review technical connections and cross-compare different approaches that have origins in graph theory, natural language processing, psychology, control theory, and graph sampling. Here, we summarise the assumptions and usage of the methodologies rooted in topology-based centrality, topic-sensitive centrality, control and graph sampling. In the case of topology-based centrality, eigenvector-based centrality measures prove fitting within the context of social networks. This is mainly because they capture the idea that the importance of a node depends not only on its direct connections but also on the importance of its neighbors. Based on topology-based centrality, the identified opinion leader can be seen as the user who has the capability to disseminate information to most of the people in the group. When it comes to topic-sensitive centrality, the text content of a user's conversations can be transformed into a topic probability distribution or an opinion state vector. In the case of the former, topic analysis can be leveraged to determine the novelty and similarity of content. Here, the identified opinion leader exhibits the ability to generate novel content and disseminate topic-related information effectively. In the case of the latter, opinion dynamics are analyzed under a specific topic. The identified opinion leaders in this scenario are users who possess the capacity to influence the opinions of others. Opinion evolution modelling was developed to explain how individuals develop their opinions towards various topics over time. Researchers have considered both linear and non-linear models to capture the complexity of opinion evolution. With linear opinion dynamics, such as the French-DeGroot model, control theory can be broadly applied to dynamic opinion networks, eliminating the need to know the specific dynamic functions or the weights of the connections. The application of control theory aims to find opinion leaders who can steer the overall direction of public opinion. Graph sampling theory can be applied in both linear model-based and data-driven manners, where the latter requires neither knowledge of the dynamic model nor a linearity assumption. By deriving the orthogonal subspace from the data, the data-driven graph sampling method can obtain the opinion leaders who are the representative users in the dynamic opinion networks. Through a case study, we perform a comparative analysis of multiple methodologies. The result shows that a horizontal comparison among different ranking strategies is challenging, due to the disparate criteria utilised by the methods. There may be some overlap between the opinion leaders identified by the various methods, yet their correlation and causality require further study. It is our hope that this survey will help researchers gain a better understanding of the development of opinion leader detection methods and inspire them to address the remaining challenges in this field. The work is supported by "Networked Social Influence and Acceptance in a New Age of Crises", funded by USAF OFSR under Grant No.: FA8655-20-1-7031, and is partly supported by the Engineering and Physical Sciences Research Council [grant number: EP/V026763/1].
http://arxiv.org/abs/2306.02845v1
20230605125707
Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals
[ "Puneet Kumar", "Xiaobai Li" ]
cs.AI
[ "cs.AI" ]
This paper aims to demonstrate the importance and feasibility of fusing multimodal information for emotion recognition. It introduces a multimodal framework for emotion understanding by fusing the information from visual facial features and rPPG signals extracted from the input videos. An interpretability technique based on permutation feature importance analysis has also been implemented to compute the contributions of rPPG and visual modalities toward classifying a given input video into a particular emotion class. The experiments on IEMOCAP dataset demonstrate that the emotion classification performance improves by combining the complementary information from multiple modalities. Keywords: Affective Computing, Interpretable & Deployable AI, Multimodal Analysis, rPPG, Facial Features. § INTRODUCTION Emotions, characterized by a rich and complex mix of physiological and cognitive states, hold significant importance across multiple fields such as psychology, human-computer interaction, affective computing, and even extending to broader domains such as virtual reality, user experience design, healthcare, and education <cit.>. Understanding and accurately interpreting emotions is essential in human communication and social interactions <cit.>. With the surge in the development and accessibility of multimodal sensing technologies, researchers can explore multiple modalities to enhance the accuracy and robustness of emotion recognition systems <cit.>. The current research trend focuses on building Artificial Intelligence (AI) systems that can be deployed for real-life applications <cit.>. Two such modalities, facial expressions and physiological signals, have garnered significant attention due to the rich information they offer and their non-invasive nature <cit.>. Facial expressions, direct and non-invasive indicators of emotion, have been thoroughly investigated <cit.>.
Various techniques involving the extraction of facial landmarks, local descriptors, or holistic representations have been proposed to capture nuanced variations in facial muscle movements that reflect different emotional states <cit.>. Physiological signals, such as remote photoplethysmography (rPPG) signals, provide another layer of emotional cues. These signals, obtained through non-contact video-based techniques, offer insights into physiological changes associated with emotional responses <cit.>. The interplay of these two modalities offers a more holistic understanding of emotions, thus enhancing the robustness of emotion recognition systems <cit.>. Emotion classification through audio-visual information is a well-established research task <cit.>. However, recognizing emotion using the physiological context along with the audio-visual information still offers scope for further exploration <cit.>. Furthermore, despite the significant advancements, many multimodal emotion recognition models do not provide meaningful interpretations for their predictions <cit.>. Most existing interpretability techniques have been implemented for the visual modality and have yet to be fully explored for multimodal analysis <cit.>. This paper proposes an interpretable multimodal emotion recognition framework that extracts rPPG signals and facial features from the input videos and uses their combined context for emotion detection. The Haar cascades classifier <cit.> has been implemented to extract the rPPG signals, whereas a pre-trained ResNet-34-based network extracts the visual features. Further, early and late fusion approaches that integrate the static facial expression features and dynamic rPPG signals to capture both spatial and temporal aspects of emotions have been incorporated. An interpretability technique based on permutation feature importance (PFI) <cit.> has also been incorporated that computes the contributions of the rPPG and visual modalities towards classifying a given input video into a particular emotion class. The experiments performed on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset <cit.> have resulted in an accuracy of 54.61% while classifying the input videos into ten emotion classes (`neutral,' `happy,' `sad,' `angry,' `excited,' `frustrated,' `fearful,' `surprised,' `distressed' and `other'). The improved performance obtained with the multimodal context, compared to the individual accuracies of the rPPG or visual modality alone, underlines the importance of leveraging the multimodal context for emotion understanding. The average contributions of rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively. The contributions of this paper can be summarized as follows:
* A multimodal emotion recognition framework has been proposed to classify a given video into discrete emotion classes. It extracts the dynamic rPPG signals from the input videos and combines them with static facial expressions using early and late fusion approaches.
* An interpretability technique has been incorporated that computes the contribution of rPPG and visual modalities towards emotion classification using the PFI algorithm.
* Extensive experiments have been performed on the IEMOCAP dataset, and the results have been presented in terms of accuracy, precision, recall, F1 score, and modality-wise contributions toward emotion classification.
§ PROPOSED METHOD The proposed framework has been diagrammatically depicted in Figure <ref> and described in the following sections.
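As a simplified illustration of the Haar-cascade-based rPPG extraction mentioned above, the following sketch detects a face in each frame with OpenCV and records the mean intensity of the facial region of interest per colour channel. The video path and detector parameters are assumed for illustration; this is a sketch of the general idea, not the exact implementation of the proposed framework.

# Sketch of rPPG signal extraction: detect the face with a Haar cascade
# and record the mean ROI intensity per colour channel and frame.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("input_video.mp4")  # assumed example path

signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        continue  # skip frames without a detected face
    x, y, w, h = faces[0]                            # first detected face as ROI
    roi = frame[y:y + h, x:x + w]
    signal.append(roi.reshape(-1, 3).mean(axis=0))   # mean intensity per channel

cap.release()
rppg = np.asarray(signal)   # shape: (num_frames_with_face, 3)
print("extracted rPPG samples:", rppg.shape)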
§.§ Preprocessing and Feature Extraction The video files are loaded and processed frame by frame using OpenCV (cv2) library [https://opencv.org/] and processed to extract rPPG signals and facial features. i) rPPG Signals Extraction: Face detection within each video frame during the rPPG signal extraction process is accomplished using Haar cascades <cit.>. The region of interest (ROI), predominantly the facial region, is isolated from each frame, after which the mean intensity is computed to generate the rPPG signal for each video. The calculation of the mean intensity within the ROI (I̅c) is represented in Eq. <ref>. I̅c = 1/N∑_x=1^W∑_y=1^H I_x, y, c Where I_x, y, c is the intensity of the pixel at location (x, y) for color channel c in the ROI, and N is the total number of pixels in the ROI, whereas W and H represent the width and height of the ROI, respectively, and c ∈R, G, B. ii) Facial Features Extraction: Facial feature extraction employs Dlib's shape predictor <cit.>, which is a version of the ResNet-34 trained on Face Scrub dataset<cit.> to identify the facial landmarks in a given image of a face. As per Eq. <ref>, it identifies 68 facial landmarks for each detected face within every frame, distinguishing unique facial characteristics. P = D(F, {L_i}) F = [f_1, f_2, …, f_n] Where F represents the face detected in a frame, P represents the predicted points on the face, D(F, {L_i}) is the function for predicting points on the face, and L_i is the set of landmark points for the i^th point. As signals from different videos might differ in length, it becomes crucial to standardize the input for the neural network model. This standardization is achieved by zero-padding I̅ and P to match the maximum signal length. §.§ Multimodal Feature Fusion Early fusion and late fusion approaches are used to combine the rPPG signals and facial features. i) Early Fusion: In the early fusion approach, the rPPG signals and facial features are concatenated before being fed into the model. The fused data are then passed through a neural network comprising a flatten layer, followed by CNN layers of dimensions 512 and 256, and the final layer of size equal to the number of classes. The flatten layer transforms the 3D input tensor into a 1D tensor, and the subsequent CNN layers functions perform the classification task. The model structure is represented as per Eq. <ref>. I' = concatenate(I̅c, P) I” = flatten(I') F_early = NNet(I”, C) Where I is the input shape, C denotes the number of classes, I̅c is the mean intensity within the ROI from the rPPG signals, P represents the facial features, NNet represents the early fusion network and F_early is the output of the early fusion. ii) Late Fusion: In the late fusion approach, the rPPG and visual models are trained separately, and their outputs are combined using a weighted average. Eq. <ref> represents a late fusion approach where the models are trained separately, and their outputs are combined in the final output F_late. F_late = w_1 · M_rPPG(I̅c) + w_2 · M_facial(P) Where M_rPPG(I̅c) and M_facial(P) represent the outputs of the rPPG model and the visual model, respectively, and w_1 and w_2 are the weights assigned to each model's output in the final fusion. §.§ Emotion Classification This study employs three separate models for emotion classification. Two of these models operate independently, utilizing rPPG signals and facial features. The third model operates via `early fusion,' exploiting the combined context of data from the rPPG and visual models. 
The outputs of these individual models are then collaboratively integrated through a `late fusion' approach that uses a weighted addition technique. The individual models, based on rPPG signals and facial features, are constructed as follows. i) rPPG Model: This model utilizes a Deep Convolutional Neural Network (CNN) with two hidden layers. It incorporates Rectified Linear Unit (ReLU) activation functions and performs emotion classification from the rPPG signals. ii) Visual Model: This model, built on facial features, employs a ResNet-based Deep CNN with two hidden layers and ReLU activation functions. §.§ Interpretability An explainability method based on permutation feature importance (PFI) <cit.> is implemented, which estimates the importance of features by permuting the values of each feature and measuring the resulting impact on model performance. The PFI of a feature j is the decrease in the model score when the values of feature j are randomly permuted. Eq. <ref> mathematically represents the concept of permutation feature importance. PFI(j) = E_π[f(X^(i))] - E_π[f(X^(i)_π_j)] Where PFI(j) is the permutation feature importance of feature j, E_π[f(X^(i))] is the expected value of the model score over all samples in the dataset when the model is scored normally, E_π[f(X^(i)_π_j)] is the expected value of the model score when the values of feature j are permuted according to some permutation π, and X^(i)_π_j denotes the dataset X^(i) with the values of feature j permuted according to π. § RESULTS AND DISCUSSION §.§ Experimental Setup The emotion classification experiments have been performed on the IEMOCAP dataset <cit.> consisting of 10,039 videos labeled with ten discrete emotion labels (`neutral,' `happy,' `sad,' `angry,' `excited,' `frustrated,' `fearful,' `surprised,' `distressed' and `other'). The models have been trained on an NVIDIA RTX 4090 GPU for 50 epochs with a batch size of 32 and a learning rate of 0.001. The performance has been evaluated using accuracy, precision, recall, and F1 score metrics. §.§ Results Table <ref> summarizes the accuracy of the individual and fusion models, whereas the average contributions of rPPG and visual modalities towards emotion recognition in the early fusion setup are presented in Table <ref>. The proposed framework has demonstrated an emotion classification accuracy of 54.61%, and the average contributions of rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively. Table <ref> shows that both the individual models performed reasonably well. However, the fusion model outperformed the individual models, demonstrating the advantage of combining rPPG signals and facial feature information for emotion recognition. §.§ Discussion This paper presents a compelling case for including multimodal context in emotion recognition. While the models trained on individual modalities show moderate performance, their fusion significantly improves emotion recognition accuracy. This emphasizes the complementarity of these modalities in capturing emotional states. However, the late fusion of modalities underperforms compared to the early fusion approach, indicating that integrating modalities at an earlier stage allows for more effective learning of emotional states. That said, the proposed work has a few limitations.
The IEMOCAP dataset, while widely used, may limit the generalizability of the findings. Cross-dataset experiments on larger and more diverse datasets could further strengthen the results. Moreover, more modalities such as audio, text, and other physiological signals can also be incorporated for emotion recognition. Finally, a more in-depth interpretability mechanism can be developed to explain the role of individual features in emotion detection. § CONCLUSION This work presents a multimodal emotion recognition framework using rPPG signals and facial features. It paves the way for practical applications where transparent and interpretable emotion understanding is important. The results highlight the benefits of integrating multiple modalities for emotion recognition, with an early fusion approach yielding the highest accuracy. While there are limitations and potential improvements, our study provides a promising direction for future research in emotion recognition, emphasizing the importance of multimodal data and fusion techniques.
http://arxiv.org/abs/2306.10075v2
20230616033158
Matrix Diagonalization as a Board Game: Teaching an Eigensolver the Fastest Path to Solution
[ "Phil Romero", "Manish Bhattarai", "Christian F. A. Negre", "Anders M. N. Niklasson", "Adetokunbo Adedoyin" ]
math.NA
[ "math.NA", "cs.AI", "cs.LG", "cs.NA", "physics.comp-ph" ]
Matrix diagonalization is at the cornerstone of numerous fields of scientific computing. Diagonalizing a matrix to solve an eigenvalue problem requires a sequential path of iterations that eventually reaches a sufficiently converged and accurate solution for all the eigenvalues and eigenvectors. This typically translates into a high computational cost. Here we demonstrate how reinforcement learning, using the AlphaZero framework, can accelerate Jacobi matrix diagonalizations by viewing the selection of the fastest path to solution as a board game. To demonstrate the viability of our approach we apply the Jacobi diagonalization algorithm to symmetric Hamiltonian matrices that appear in quantum chemistry calculations. We find that a significant acceleration can often be achieved. Our findings highlight the opportunity to use machine learning as a promising tool to improve the performance of numerical linear algebra. Keywords: Eigensolver, Jacobi Rotations, Reinforcement Learning, AlphaZero § INTRODUCTION Matrix diagonalization is a fundamental procedure in numerical linear algebra and appears in numerous fields of scientific computing. A symmetric matrix A, satisfying A = A^t can be factorized as A = U^tDU, where D and U are, respectively, matrices containing eigenvalues and orthogonal eigenvectors of A <cit.>. The computational complexity of a matrix diagonalization typically scales cubically, 𝒪(N^3), with the number of eigenpairs, N, and often with a fairly large pre-factor. Matrix diagonalizations therefore often become a computational bottleneck. Diagonalization is at the core of computational chemistry and it appears as the computational bottleneck determining the properties of chemical systems. In a broad range of quantum chemistry methods based on, for example, Hartree-Fock and density functional theory or semi-empirical methods, a symmetric effective single-particle Hamiltonian, H, needs to be diagonalized to construct the density matrix, ρ, which is given by the Fermi function, f, of H, i.e. ρ = f(H) <cit.>. The decomposition H=U^t D U gives an equivalent system where D is the diagonal matrix containing the energy eigenvalues for each molecular orbital, and U is a unitary matrix containing the eigenvectors corresponding to the molecular orbitals for the system. Hamiltonian matrix functions, such as the density matrix ρ = f(H), can then be constructed in the diagonal molecular-orbital eigenbasis such that ρ = f(H) = U^t f(D) U. The Fermi function, f(H), projects the eigenvalues of the occupied states (at lower energies) to 1 and all the unoccupied eigenvalues (at higher energies) to 0. A property or an observable of a system, “a", such as charges, partial energies and forces, or the dipole moments, can then be determined by computing the trace (or the partial trace) over the corresponding operator, A, projected into the occupied subspace by the density matrix, where a = ⟨ A ⟩ = tr(ρ A). Compared to the prior diagonalization these trace operations are straightforward and can be performed with little computational overhead. One of the most extensively used methods for the diagonalization of a matrix A is based on the QR decomposition.
The QR method for matrix diagonalization is an iterative solver that uses a sequence of Gram-Schmidt orthogonalizations. In the first iteration we perform a QR factorization of A ≡ A_1 = Q_1 R_1. We then successively compute A_k+1 = R_k Q_k = Q^t_k A_k Q_k, which eventually converges to a sufficiently diagonal matrix D = Q^t_k Q^t_k-1 .. Q^t_1 A Q_1 Q_2 .. Q_k = U^t A U. The diagonal matrix D contains the eigenvalues and the accumulated orthogonal matrix U contains the corresponding eigenvectors. This method is surprisingly efficient, but has several limitations. For example, it is not possible to take advantage of data matrix sparsity or data locality to reduce the computational complexity or to achieve efficient parallelism. For the special cases of real symmetric matrices, the Jacobi matrix diagonalization method provides an alternative <cit.>. The Jacobi method is based on a sequence of Givens rotation matrices, G^(i,j)_k, that each set a chosen pair of off-diagonal elements, A_ij and A_ji, of A to zero. In the first iteration A_1 = (G_0^(i,j))^t A G_0^(i,j) for some chosen off-diagonal matrix pair (i,j). The procedure is repeated with A_k+1 = (G_k^(i,j))^t A_k G_k^(i,j). Even if other elements besides A_ij and A_ji are modified in each rotation, the magnitude of the off-diagonal elements is systematically reduced. By iteratively sweeping over non-zero off-diagonal elements using the Givens rotations, the sequence eventually converges to a sufficiently converged diagonal matrix D = G^t_k+1 G^t_k .. G^t_0 A G_0 G_1 .. G_k+1 = U^t A U. The diagonal matrix D contains the eigenvalues and the accumulated orthogonal matrix U contains the corresponding eigenvectors. The convergence of the Jacobi diagonalization method depends on the sequence we choose for the Givens rotations. We may, for example, always choose the pair (i,j) corresponding to the off-diagonal matrix elements with the largest magnitude, or we may simply sweep over rows and columns. Any path can be chosen, but certain paths will converge faster. To speed up the matrix diagonalization, various methods have been developed that can take advantage of the particular structures of the matrix, such as the matrix symmetry and sparsity <cit.>. Special methods have also been developed when only a few eigenpairs are needed or to take advantage of the available computer architectures, e.g. to improve parallelism <cit.>. In this paper we will introduce another aspect that can improve matrix diagonalization, which is the use of machine learning (ML) to accelerate algorithmic convergence. Our goal is to show how modern machine learning can be used to “teach" an eigensolver the fastest path to solution to speed up the calculations. The ability to use AI to improve numerical linear algebra is a new and promising field of research with encouraging recent results <cit.>. § METHODS Typically, ML techniques have employed eigensolvers to reduce the dimensionality of the problem to the principal components, hence eliminating redundant variables <cit.>. Here, we do the opposite, which is to use ML and specifically reinforcement learning (RL) to accelerate the computation of the eigenspace of a symmetric matrix. To the best of our knowledge, no algorithm has yet explored using ML/RL to speed up such calculations. Some recent attempts are available, however, with iterative methods using ML-assisted optimizers where no time-to-solution is reported <cit.>.
Generalized convolutional deep neural networks have also been proposed to solve the electronic structure problem in quantum chemistry, but without doing an explicit matrix diagonalizaion <cit.>. The recent breakthrough of AlphaZero <cit.>, an RL-based framework, has revolutionized the AI industry by providing solutions for intractable problems in various applications, including protein folding <cit.>, and games like Go <cit.>, Chess, Shogu, and Star Craft <cit.>. The use of Monte Carlo tree search (MCTS) and one-step lookahead DNN has allowed AlphaZero to provide heuristic-free exploration. Based on the principle of AlphaZero, a recent work from Google Deep Mind called AlphaTensor demonstrated a proof of concept showing that the existing fastest matrix-matrix multiplication being performed by the Strassen algorithm was significantly accelerated using AI methods <cit.>. In the aforementioned work, the authors extended the capabilities of the AlphaZero framework to estimate the heuristics for the fastest matrix multiplication. Jacobi diagonalization method gives an ideal framework to explore ML techniques. Givens rotations are typically constructed using the indices of the maximum absolute off-diagonal elements as a pivot <cit.>. However, exploring a reinforcement learning algorithm could enable the AI to discover unconventional strategies for selecting optimal pivot points and facilitating diagonalization in fewer steps, without prior knowledge of the data generation process or diagonalization strategy. The key observation behind this article is that we can view the Jacobi algorithm as a board game. In each move we select a pair of off-diagonal matrix elements, (i,j) and (j,i), on an N × N board that are removed, i.e. that are set to zero by a Givens rotation. Each move leads to an increase in the magnitude of some of the other matrix elements, which can be seen as the counter move by an opponent. The player who can find the fewest number of moves that reaches a sufficiently converged and accurate solution has won. This is a well-defined problem for reinforcement learning, which recently has demonstrated a spectacular breakthrough for board games such as Go and Chess using the framework <cit.>. Our target problem is the diagonalization of a sequence of symmetric Hamiltonian matrices that appear in quantum-mechanical molecular dynamics (QMD) simulations <cit.>. Often tens-of-thousands of fairly similar Hamiltonians need to be diagonalized during a molecular dynamics simulations. This problem should be well-suited for machine learning. In this work, we present Alpha-FastEigen (FastEigen for short notation), an AlphaZero-based matrix diagonalization framework that achieves accelerated eigen decomposition. FastEigen uniquely learns from the matrix state spaces by combining Monte-Carlo Tree Search (MCTS) and policy-value neural network iteratively to optimize policy and estimate the path with the least number of steps leading to the solution eigenspace. By using a self-play strategy over many episodes, the agent learns a policy that achieves an even faster diagonalization rate. §.§ Data set generation The data set used in this work consists of symmetric Hamiltonian matrices generated from a molecular dynamics trajectory using the LATTE code <cit.>. This is a QMD simulation code where the Hamiltonian matrix is computed according to the Density Functional Tight-Binding (DFTB) theory <cit.>. 
We created a python library that allows for a quick construction of the Hamiltonian H from a set of system coordinates. Multiple Hamiltonian matrices can hence be generated as needed from a trajectory file (the step-by-step evolution of the atomic coordinates). In order to generate the trajectories we run several QMD simulations using the LATTE code. In the initial stages of our research, we chose to generate small Hamiltonian matrices of size 5× 5 as a proof of concept. This matrix size was selected because it allows us to efficiently test our FastEigen framework and models without the computational burden of larger matrices. The matrices were generated using the HO^. radical system, which provides a simplified yet representative model system for our study. The results obtained from these small-scale simulations were then analyzed to assess the validity and efficacy of our approach. If our methods prove effective at this scale, they can potentially be applied to larger, more complex systems, opening up new avenues for exploration in the field. Similarly, AlphaTensor <cit.> demonstrates proof-of-concept results on small matrices of similar sizes. We used the LAMMPS driver to perform the MD simulation as described in <cit.>. Initial NVT simulations were performed with a temperature of 300K. Trajectories at 400K and 500K were also generated to have more variability in the matrix elements. To ensure that all the trajectories are uncorrelated, we have used different random seeds in all the simulations. We used a 0.5 fs time-step that leads to trajectories of 5 ps total simulation time (10,000 steps).
§.§ Jacobi Algorithm As briefly mentioned in the introduction, the Jacobi eigenvalue method is based on Givens rotations that systematically zero out the off-diagonal elements. In this section we will extend the explanation given in the introduction. Every off-diagonal element (i,j) of a matrix A is systematically zeroed out by applying a Givens rotation matrix G^(i,j) as follows: A' = (G^(i,j))^t A G^(i,j), where G^(i,j)_ii = G^(i,j)_jj = cos(θ) = c, G^(i,j)_ij = -G^(i,j)_ji = sin(θ) = s, and G^(i,j)_kl = δ_kl for every other element. The latter defines the following set of equations: A'_li = c A_li - s A_lj, for l ≠ i and l ≠ j; A'_lj = s A_li + c A_lj, for l ≠ i and l ≠ j; A'_ii = c^2 A_ii + s^2 A_jj - 2 c s A_ij; A'_jj = s^2 A_ii + c^2 A_jj + 2 c s A_ij; A'_ij = (c^2 - s^2)A_ij + c s (A_ii - A_jj). Since A'_ij = 0, we require (c^2 - s^2)A_ij + c s(A_ii - A_jj) = 0, from which we can solve tan(2θ) = 2A_ij/(A_jj - A_ii); with t = tan(θ), c and s are then computed as c = (1+ t^2)^-1/2 and s = c t. In conclusion, every time an element needs to be zeroed out, the c and s values are computed and matrix A is modified following the above equations. At convergence, the product of all the rotation matrices gives rise to matrix U, while A gets transformed into D. Now, two questions come to mind: what happens to the elements that are already zero, and which elements need to be zeroed out first? The answer to the first question is obvious from the set of equations in <ref>: if we apply G^(i,j), every A'_li gets “contaminated" with A_lj elements, even if A_li is 0. This means that, even if we have a sparse matrix, every iteration will create “spurious" non-zeros where there were none before. The latter means that more rotations will need to be applied until we can guarantee that all |A_kl| are less than a certain tolerance.
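As a concrete illustration of these updates, the following minimal numpy sketch repeatedly zeroes the largest off-diagonal pair (the MaxElement strategy discussed below) until every off-diagonal magnitude falls below a tolerance. The random symmetric test matrix, its size and the tolerance are illustrative assumptions rather than the Hamiltonians used in this work.

# Jacobi diagonalization sketch: repeatedly apply the Givens rotation that
# zeroes the largest off-diagonal element of a symmetric matrix.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2.0                      # random symmetric test matrix
U = np.eye(A.shape[0])                   # accumulated eigenvectors
tol = 1e-5

def largest_offdiag(A):
    iu = np.triu_indices(A.shape[0], k=1)
    k = np.argmax(np.abs(A[iu]))
    return iu[0][k], iu[1][k]

num_rotations = 0
i, j = largest_offdiag(A)
while abs(A[i, j]) > tol:
    theta = 0.5 * np.arctan2(2.0 * A[i, j], A[j, j] - A[i, i])
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(A.shape[0])
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    A = G.T @ A @ G                      # zeroes A[i, j] and A[j, i]
    U = U @ G
    num_rotations += 1
    i, j = largest_offdiag(A)

print("rotations needed:", num_rotations)
print("eigenvalues:", np.sort(np.diag(A)))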
The answer to the second question leads to different variants of the Jacobi algorithm, which differ in the sequence in which the elements are zeroed out. In the “Regular Jacobi" (RJ) algorithm, the element zeroed out first is the largest in absolute value: maxA = max_k,l(|A_kl|). This version of the algorithm keeps jumping from element to element looking for the next maxA. We will refer to this strategy as MaxElement, which will become our standard for comparison. Although one can demonstrate that this version of the algorithm ensures that maxA > maxA', it does not guarantee the fastest convergence. In fact, it turns out that systematically zeroing out the off-diagonal elements in a cyclic way can perform even better, since there is no need to introduce a max search at every iteration. One can then specifically design a cyclic sequence to zero out the elements depending on the matrix at hand. This leads to another variant of the algorithm called “Cyclic Jacobi" (CJ). However, again, nothing guarantees that the proposed sequence is the optimal one. To see this, one can try variations constructed by permuting the sequence generated via the RJ algorithm. It turns out that certain permutations end up with a new sequence leading to faster convergence. In this work, we demonstrate that a fairly good sequence leading to faster convergence can be learned “on-the-fly" from the off-diagonal matrix elements. §.§ AlphaZero-based matrix diagonalization The AlphaGoZero framework was developed to play the game of Go and has been successfully adapted to play many other games, including Tic-Tac-Toe, Connect Four, Gomoku and Chess. Table <ref> describes the necessary adaptations and proposed solutions to convert the AlphaGoZero framework, initially designed for board games, into a matrix diagonalizer. Key modifications include: * Training with Input Matrices: Instead of starting from an empty board, the adapted model uses Hamiltonian matrices to initiate the game. * Selection Restriction: The model ensures that only elements above the diagonal of a matrix are selectable throughout the game, as opposed to the full board accessibility in the original version. * Data Type Transition: The model replaces the integer player IDs typically used in games with floating-point values, facilitating a transition from a 2D game board to a 3D matrix representation (the three dimensions correspond to the matrix coordinates (x,y) and the corresponding value S(x,y)). * Matrix Status Evolution: The model incorporates the Jacobi rotation algorithm, which operates on selected matrix positions and updates the entire matrix status for every rotation. This contrasts with the original framework, which only updated a single position marker. * Stopping Criterion for Matrix Diagonalization: A new stopping criterion is implemented to halt the diagonalization process. The criterion is based on the evaluation of the matrices after each iteration, which differs from the typical game-ending conditions in the AlphaGoZero framework. These changes are part of a broader suite of modifications that also involve optimizing parameters for efficient training and inference. The computational cost of the traditional max-element Jacobi rotation approach to matrix diagonalization grows cubically with the matrix size and may become infeasible for matrices beyond a certain size.
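As a reference point for the MaxElement baseline discussed above, the sketch below drives the jacobi_rotation routine from the previous snippet with max-element pivoting until all off-diagonal magnitudes fall below a tolerance. The convergence threshold of 1e-5 mirrors the tolerance quoted later in the text; everything else (function name, rotation cap) is an illustrative assumption.

import numpy as np

def max_element_jacobi(A, tol=1e-5, max_rotations=10000):
    """Diagonalize a symmetric matrix with the MaxElement (Regular Jacobi) strategy.

    At every step the largest-magnitude off-diagonal element is chosen as the
    pivot and zeroed out with jacobi_rotation (defined in the previous sketch).
    Returns the rotated matrix and the number of rotations that were applied.
    """
    A = A.copy()
    for n_rot in range(max_rotations):
        off = np.abs(np.triu(A, k=1))          # upper-triangular off-diagonals
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < tol:                    # converged
            return A, n_rot
        A, _, _ = jacobi_rotation(A, i, j)
    return A, max_rotations

The eigenvalues are then read off the diagonal of the returned matrix; the number of rotations is the quantity on which the MaxElement and FastEigen strategies are compared later in the text.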
To address this concern, we aim to utilize AlphaZero framework for finding the near optimal solution <cit.>. In this work we have repurposed AlphaZero to allow reformulating matrix diagonalization as a gameplay and learn the optimal diagonalization procedure as a game-winning pattern from numerous simulations. This procedure provides an ability to scale the framework for large-sized matrices without the requirement of significantly large computation resources. This is possible due to the unique combination of the MCTS framework and the policy-value network utilized in the AlphaZero framework. The policy network (upper output head) shown in Figure <ref> provides the likelihood probability for performing the Jacobi rotation corresponding to each off-diagonal elements from the upper triangular portion of the matrix (i.e., dictating what the next efficient move for performing Jacobi rotation is). In contrast, the value network (lower output head) assigns a reward or penalty corresponding to whether the matrix was finally diagonalized or not, within some given convergence tolerance. For all the calculations throughout this work we have set the tolerance to 1e-5. The policy-value network is also combined with the MCTS to provide a lookahead search, which, when combined with the policy network, can narrow down the search to high-likelihood moves and, with the value network, can evaluate the positions in the tree through reward. An example of the tree utilized by the MCTS framework is shown in Figure <ref>. This section will discuss how we adapt the AlphaZero frameworks to solve the matrix diagonalization problem. We call it FastEigen. The beauty of the AlphaZero framework is the ability to ultimately learn the game from scratch in an unsupervised fashion and with no-domain knowledge other than the game rules. The problem of matrix diagonalization is first framed as a two-player game where the agents estimate the index for performing Jacobi rotations and compete with each other to find the fastest diagonalization. To define the matrix diagonalization as a RL problem, we have following configurations: * States: The current state of the matrix being diagonalized. * Rewards: The reward is given based on the quality of the diagonalization. It is related to the closeness of off-diagonal elements to zero, indicating successful diagonalization. * Actions: An action is the selection of the pivot point for the next Jacobi rotation. This is the decision that the AlphaZero model has to make at each step. In this specific case, the actions search space is constrained to off-diagonal elements from upper triangular portion of the matrix. The FastEigen system comprises two major building blocks (i) the policy-value network as shown in Figure <ref> <cit.> and ii) MCTS as shown in Figure <ref>. The role of the policy value network is to observe the current matrix state and generate a decision for the next rotation operation to be performed to diagonalize the matrix with the least number of steps. On the other hand, MCTS contemplates the multiple possible outcomes starting from the current one. The MCTS simulation can be expressed in the form of the tree shown in Figure <ref>. MCTS then utilizes the decision outcomes from the policy-value network to control the simulation procedure where the output of the policy value network is shown in Figure <ref>. Similarly, the policy value network is trained on the simulations of MCTS as shown in Figure <ref>. 
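The States/Rewards/Actions definition above can be made concrete with a small, self-contained environment class. This is only a sketch under our own assumptions (class name, sparse terminal reward, step cap); the actual FastEigen reward shaping and game interface are not specified at this level of detail in the text.

import numpy as np

class DiagonalizationEnv:
    """Toy single-agent 'game' view of Jacobi diagonalization.

    State : the current symmetric matrix.
    Action: an index into the list of upper-triangular pivots (i, j).
    Reward: +1 once every off-diagonal magnitude is below tol, else 0.
    """

    def __init__(self, H, tol=1e-5, max_steps=50):
        self.H0, self.tol, self.max_steps = H.copy(), tol, max_steps
        n = H.shape[0]
        self.pivots = [(i, j) for i in range(n) for j in range(i + 1, n)]
        self.reset()

    def reset(self):
        self.A, self.steps = self.H0.copy(), 0
        return self.A

    def converged(self):
        off = self.A - np.diag(np.diag(self.A))
        return np.max(np.abs(off)) < self.tol

    def step(self, action):
        i, j = self.pivots[action]
        self.A, _, _ = jacobi_rotation(self.A, i, j)   # from the earlier sketch
        self.steps += 1
        done = self.converged() or self.steps >= self.max_steps
        reward = 1.0 if self.converged() else 0.0
        return self.A, reward, done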
The policy-value framework is a deep learning framework that takes the matrix state and outputs the probability over the actions and the winning chance of the current state. The parametric representation is given as follows: (p,v) = f_θ(s), where f_θ(s) represents the neural network with parameters θ. For a given matrix of size n× n, the state s is the n× n array representing the matrix. The Jacobi rotation method is an algorithm specifically designed for symmetric matrices. Because of this symmetry, the action space for AlphaZero is constrained to the upper-triangular elements of the matrix. Here p = (p_1,p_2,...,p_N), where N = n(n-1)/2, is the policy distribution vector whose i^th element p_i = P_r(a|s) corresponds to the prior probability of performing a Jacobi rotation on the i^th element, with a denoting the action associated with that pivot. Similarly, v ∈ [-1,1] represents the winning value for the current player, who will perform the next Jacobi rotation. A larger v corresponds to a higher chance of being able to diagonalize. A deep neural network with convolutional and fully connected layers was utilized for the policy-value network, as shown in Figure <ref>, with two different output heads for the policy and the value outputs. The framework comprises the following modules: a feature extraction module, which takes the input matrix as the state and extracts features that are then fed separately to the policy and value heads; a policy head, which generates the prior probability distribution vector p; and a value head, which generates the winning value v. The MCTS framework, collectively parameterized by α_θ, is guided by the policy-value neural network f_θ. The policy-value network receives the current state matrix s and outputs a policy vector π containing the probability associated with each diagonalization pivot. The policy distribution vector, depicted as π = (π_1, π_2, ..., π_N) in Figure <ref>, represents the probabilities associated with executing a diagonalization step on each respective off-diagonal pivot. The network selects the pivot associated with the highest probability value for the next operation. In the FastEigen framework, the output of the MCTS, represented by α_θ, is given by π = α_θ(s). Here, π is the policy distribution vector, representing the probabilities of selecting each off-diagonal pivot for matrix diagonalization. These probabilities are influenced by a temperature parameter τ, which controls the balance between exploration and exploitation. More specifically, each action's probability π_a is proportional to N(s, a)^1/τ, where N(s, a) is the number of times action a has been taken in state s. Higher τ values make the probabilities more uniform, encouraging exploration of different pivots, while lower τ values make the probabilities more skewed towards the most frequently selected pivots, promoting exploitation. In the MCTS framework, the search tree stores, for each edge (s,a), the prior probability P(s,a), the visit count N(s,a) and the action-value Q(s,a).
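A minimal two-headed network of the kind described above could look as follows in PyTorch. The layer widths, kernel sizes and the class name are placeholders of our own choosing — the text only specifies convolutional plus fully connected layers with separate policy and value heads — so this is an illustrative sketch rather than the actual FastEigen architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    """Two-headed policy-value network over n x n matrix states (illustrative)."""

    def __init__(self, n):
        super().__init__()
        self.n_actions = n * (n - 1) // 2            # upper-triangular pivots
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * n * n, 256), nn.ReLU(),
            nn.Linear(256, self.n_actions),
        )
        self.value_head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * n * n, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh(),            # v in [-1, 1]
        )

    def forward(self, s):
        # s: batch of matrix states, shape (batch, 1, n, n)
        h = self.features(s)
        log_p = F.log_softmax(self.policy_head(h), dim=-1)   # log prior over pivots
        v = self.value_head(h).squeeze(-1)                   # winning value
        return log_p, v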
The simulation starts at the root node and iteratively selects the moves that maximize the upper confidence bound ζ = Q(s,a) + U(s,a), where U(s,a) ∝ P(s,a)/(1+N(s,a)), i.e., U(s,a) = C_puct P(s,a)/(1+N(s,a)), so that ζ = Q(s,a) + C_puct P(s,a)/(1+N(s,a)), where C_puct is a proportionality constant that trades off the exploitation term (first) against the exploration term (second), until a leaf node s' is encountered. The leaf node is then evaluated by the policy-value network f_θ to generate prior probabilities and an evaluation as (p(s'),v(s')) = f_θ(s'). During game play, each traversed edge (s,a) is updated to increment the visit count N(s,a), and the action-value is updated to the mean as Q(s,a) = 1/N(s,a) ∑_s'|s,a → s' v(s'), where s,a → s' corresponds to reaching state s' from state s after taking action a. To train the policy-value framework, MCTS is first deployed to play each move, following the self-play nature of the RL employed, as shown in Figure <ref>. The neural network f_θ is first initialized randomly so that the initial weights are θ_0. Then, for each subsequent iteration i ≥ 1, games of self-play are generated, and for each time step t, the MCTS search policy is estimated as π_t = (α_θ)_i-1(s_t) using the neural network with parameters from the previous iteration, i.e., f_θ_i-1, and the moves are then played by sampling from the search probabilities π_t. At each time step t, the data are stored as (s_t,π_t,z_t), where z_t = ± r_T is the winner of the game, and new network parameters θ_i are trained from data (s,π,z) sampled uniformly among all time steps of the last iteration of self-play. Here r_T represents the final reward of the game, given at the terminal state T: +1 for a win, -1 for a loss, and 0 for a draw. Therefore, z_t = ± r_T means that z_t takes the value of the final game reward, with its sign indicating whether the outcome was a win or a loss. The policy-value network f_θ is adjusted to minimize the error between the predicted value v and the self-play outcome z and to maximize the similarity of the network move probabilities p to the search probabilities π, where the parameters θ are adjusted with gradient descent on the loss function l given as l = (z-v)^2 - π^T log(p) + c ||θ||^2. The first term in the loss function is the mean squared error (MSE), the second term is the cross-entropy, and the third term is an l2-regularization term to prevent overfitting. A comprehensive overview of the algorithm is presented in the pseudocode of Algorithm <ref>. § EXPERIMENTS AND RESULTS Three trajectories for HO^· were created and processed into sequences of 1000 5× 5 matrices. The temperatures for the sequences were 300K, 400K and 500K. Each of these matrix sequences was then split randomly into a set of 750 matrices for training and a set of 250 matrices for testing. Training was conducted with the exploration-exploitation parameter C_puct set to 4 and the MCTS playouts parameter n_playouts set to 560, and the action policy network was trained for 50 epochs. C_puct is a hyperparameter that balances exploration and exploitation in the MCTS algorithm, where a value of 0 corresponds to pure exploitation and larger values favor exploration over exploitation. n_playouts corresponds to the number of simulations, or “playouts", run from each position during the MCTS process. The optimal value for n_playouts represents a trade-off: higher values can potentially improve output performance, but at the cost of increased training time.
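The selection rule ζ = Q + C_puct·P/(1+N) and the loss l defined in the previous section translate almost line-for-line into code. The sketch below follows the simplified exploration term exactly as given in the text (other AlphaZero descriptions additionally scale U by the parent visit count) and assumes the log-probabilities come from a network like the earlier PolicyValueNet sketch; the function names and the l2 coefficient are our own.

import numpy as np
import torch
import torch.nn.functional as F

def select_edge(Q, N, P, c_puct=4.0):
    """Return the action maximizing zeta = Q(s,a) + c_puct * P(s,a) / (1 + N(s,a)).

    Q, N, P are 1-D NumPy arrays over the legal pivot actions of one tree node.
    """
    zeta = Q + c_puct * P / (1.0 + N)
    return int(np.argmax(zeta))

def alphazero_loss(log_p, v, pi_target, z_target, model, c_reg=1e-4):
    """l = (z - v)^2 - pi^T log(p) + c * ||theta||^2, averaged over a mini-batch."""
    value_loss = F.mse_loss(v, z_target)                      # (z - v)^2
    policy_loss = -(pi_target * log_p).sum(dim=-1).mean()     # cross-entropy term
    l2 = sum((w ** 2).sum() for w in model.parameters())      # ||theta||^2
    return value_loss + policy_loss + c_reg * l2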
The training time was 5.8 hours on a single NVIDIA A100 GPU with an AMD EPYC 7713 64-core CPU for the 5× 5 matrices. Our inference was carried out on nodes outfitted with 4 NVIDIA A100 GPUs and powered by AMD EPYC 7713 64-core CPUs. We employed 12 independent GPU processes (MPS servers) for each node in an embarrassingly parallel computing approach. We capped the inference time at 5 minutes per matrix to create all solution paths. This translated to a maximum of roughly 10 hours for the complete inference phase per set of 1000 matrices. For optimal test performance, the inference experiments involved considerable tuning of the C_puct and n_playouts parameters to produce solutions with the FastEigen system. Unlike finding the patterns for playing board games (which AlphaZero was initially designed for), finding the optimal path for matrix diagonalization is a challenging task where different matrices might have different underlying diagonalization patterns. It is challenging for AlphaZero to find a one-shot solution for the diagonalization steps that works for all matrices. Due to this challenge, our model requires significant tuning to choose the right set of hyper-parameters during inference. As a result, the model required significant exploration along with exploitation, even during inference, which is uncommon for many RL frameworks. Moreover, due to the nature of matrix diagonalization, even for a smaller matrix, many paths could lead to the same solution with FastEigen. Although most matrices can produce efficient solutions in seconds, a small minority can continue producing solutions for several hours due to the exploration of inefficient paths in MCTS. Setting the parameters C_puct and n_playouts to 4 and 13,000, respectively, produced the results presented here, and inference times were limited to a maximum of 5 minutes for each matrix. In each case, the FastEigen method consistently outperformed, delivering solutions within six to seven steps for all 250 test matrices and achieving large savings (>50%) in path length, as shown in Figures <ref>. On the other hand, the MaxElement method required a broader range of steps, varying from 10 to 14 at 300K, 11 to 16 at 400K, and 10 to 15 at 500K, as shown in Figures <ref>. While these results underscore the noticeable superiority of the FastEigen method over the MaxElement method, they do not provide insight into the distribution of solution advantages. The histogram plots in Figures <ref> provide a clearer depiction of how these solution advantages are distributed. The results show that the FastEigen method allows for a significant efficiency improvement in matrix diagonalization. These improvements were observed in matrices for which the MaxElement method required an average of 12.36 steps to solve. The FastEigen method demonstrated a substantial reduction in the average number of steps, ranging from 6.05 to 7.01. It is also noteworthy that the FastEigen method provided a substantial advantage over the MaxElement method across all tested matrices. Furthermore, we examined the adaptability of the FastEigen method across different temperatures by running inference on the complete sets of 1000 matrices that were not trained at the same temperature. The results from these mixed-temperature solutions for each test dataset are presented in the plots of Figure <ref>.
Utilizing the FastEigen method instead of the MaxElement method yielded substantial average savings, ranging between 48.5% and 52.3% (Figure <ref>). The limited variability in these percentages suggests that the FastEigen method is highly adaptable across different temperatures. This implies that there is no need to recalibrate the FastEigen method for each specific temperature to achieve substantial efficiency gains. Additionally, we have illustrated the distribution of the percentage savings achieved by the FastEigen method over the MaxElement method for each cross-temperature variation in Figures <ref>. Based on the resulting plots, there are no large discrepancies when training at one temperature and running inference at another, indicating that the training is fairly robust and need not be repeated to perform inference at other temperatures. In the comparison of the MaxElement and FastEigen methods for matrix diagonalization, we observed distinct behaviors between the two. In particular, the number of steps required by MaxElement exhibited significant variation across different matrices, even when those matrices shared the same condition number, as shown in Figure <ref>. This inconsistency could potentially introduce unpredictability when implementing the MaxElement method in practical scenarios, as the computation cost might vary widely across different cases. On the other hand, FastEigen demonstrated much more consistent performance. The number of steps required by FastEigen remained relatively stable across matrices with different temperatures and condition numbers. This consistency suggests that FastEigen provides a more reliable estimate of the computation cost involved in matrix diagonalization, which can be a valuable attribute in many practical applications. Furthermore, the consistent behavior of FastEigen implies that it may be less sensitive to variations in the properties of the input matrices, offering a more robust solution for matrix diagonalization across a wide range of scenarios. § CONCLUSIONS By formulating matrix diagonalization as a board game, we have demonstrated how reinforcement learning using the AlphaZero AI framework can be used to learn the fastest path to the solution. A significant acceleration is demonstrated for the FastEigen method. The accelerated performance of the FastEigen method over the MaxElement method was consistently highlighted, with substantial savings observed in the number of steps needed to solve matrices sampled from QMD simulation trajectories across various temperatures. This robust performance of the FastEigen method, despite training at one temperature and running inference at another, signifies its adaptability and resilience, making it a promising tool for wide-ranging applications. These findings emphasize the potential of leveraging advanced reinforcement learning techniques in matrix diagonalization, with significant improvements over established methods. Our findings highlight the opportunity to use machine learning as a promising tool to improve the performance of numerical linear algebra. Looking ahead, our research will concentrate on extending this approach to handle larger Hamiltonian matrices. The promising results from this study provide a good foundation for further optimization and scaling up, with the ultimate goal of advancing the capabilities of matrix diagonalization methods in computational physics and other similar fields.
§ ACKNOWLEDGMENTS This manuscript has been approved for unlimited release and has been assigned LA-UR-23-21573. This work was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20220428ER. This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). Additionally, we thank the CCS-7 group, Darwin cluster, and Institutional Computing (IC) at Los Alamos National Laboratory for computational resources. Darwin is funded by the Computational Systems and Software Environments (CSSE) subprogram of LANL’s ASC program (NNSA/DOE).
http://arxiv.org/abs/2306.03201v1
20230605191850
Analysis of Integral Field Spectroscopy observations of the planetary nebula Hen 2-108 and its central star
[ "Bárbara L. Miranda Marques", "Hektor Monteiro", "Isabel Aleman", "Stavros Akras", "Helge Todt", "Romano L. M. Corradi" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
The study of planetary nebulae provides important constraints for many aspects of stellar and Galactic evolution. Hen 2-108 is a poorly known planetary nebula with a slightly elliptical morphology and a peculiar central star (CS), which has defied classification. In this work, we present the first detailed integral field spectroscopic study of the planetary nebula Hen 2-108 and its CS. We provide spatially resolved flux maps for important emission lines, as well as diagnostic maps of extinction and electronic density and temperature. Physical conditions and chemical abundances were also calculated from the integrated spectrum. The analysis was also performed with the code satellite, which uses a distinct strategy to evaluate physical and chemical properties. Both satellite and the traditional procedure give consistent results, showing some variation in physical and chemical properties. We detect and measure a number of faint heavy-element recombination lines, from which we find a significant abundance discrepancy factor for O/H, and possibly for N/H. Pseudo-3D photoionization models were used to assist in the interpretation, with results supporting the low-ionisation nature of this nebula, indicating a CS with T_eff= 40 kK and a shell structure. The spectrum of the CS has been analysed with a detailed model for expanding atmospheres to infer stellar parameters, finding that it is of [Of/WN8] type with T_*= 41.5 kK, making it a new addition to a small set (∼20) of rare objects. planetary nebulae: general – planetary nebulae: individual: Hen 2-108 – ISM: abundances § INTRODUCTION Planetary nebulae (PNe) are the result of the evolution of stars with masses between approximately 1 and 8 M_⊙. At the end of their evolution, these stars eject their outermost layers, forming the PN. The remnant hot stellar core ionizes and heats the surrounding matter. As the stars undergo nucleosynthesis processes that change their composition during their lives, they produce a significant chemical enrichment of the interstellar medium <cit.>. Despite having a good grasp of the general processes that lead to the observed morphologies, the details of the physical processes behind the production of the different shapes and substructures of these objects are not yet fully understood. Differences in the morphologies of PNe could be related to the mass-loss episodes from the progenitors, the interactions between the companions in binary systems or magnetic fields, and/or a combination of more than one mechanism in the same object <cit.>. In the last decades, integral field spectroscopy (IFS) has become an important observational technique <cit.>, especially in the study of extended objects such as ionized nebulae. The added spatial coverage is an advantage over traditional long-slit strategies. IFS allows us to obtain spatially resolved physical and chemical information, such as kinematics and chemical abundances, among other interesting diagnostics, over the extent of the object. Planetary nebulae, being often resolved extended objects in the sky and with a diversity of morphologies, are obvious targets of studies with IFS.
The first work using IFS of galactic PNe was published just over a decade ago by <cit.>. There is an increasing number of published works using this technique to address a range of topics in the last years. In addition to the PN gas, its central star (CS) can be studied with 3D spectroscopy data, as seen in the works of <cit.>, <cit.> and <cit.>. <cit.> also used IFS data from VIMOS to study the PN abundance discrepancy problem. Works such as <cit.>, <cit.>, <cit.>, and <cit.> are examples of recent studies of PNe using IFS. The PN Hen 2-108 is an object with an apparently simple, slightly elliptical morphology, with few detailed studies about it <cit.>. The first high quality images of Hen 2-108 were obtained using narrowband imaging by <cit.>. They identified a projected elliptical morphology in Hα and [Oiii] λ5007 narrowband filter images. Hen 2-108 is also classified as approximately circular or elliptical by <cit.> and <cit.>, who determined the dimensions 13.6×12.3 arcsec for this nebula. In the literature, the distance to Hen 2-108 is found in the range of 1.7 to 4.6 kpc <cit.>. Hen 2-108 is believed to be a young, low-ionization nebula, with estimates of its central star temperature ranging from 26 kK to 50 kK <cit.>. Infrared, optical and ultraviolet spectra were used by <cit.> to determine physical conditions, abundances and stellar parameters. <cit.> classified Hen 2-108 CS as a very late (VL) type, due to the presence of the Civ λ5801 and Ciii λ5695 lines in its spectrum. The same authors also identified Heii λ4686 and Cii λ7235 lines as stellar, classifying it as a weak emission-line CS (WELS). However, as mentioned in <cit.> while studying four PNe with supposed WELS CSs, the spatially resolved spectroscopy shows that lines such as the Civ and Niii in recombination are either distributed in the nebula or concentrated in nebular knots rather than appearing in the CS, which suggest that the WELS classification may be spurious. In this paper, we present new results obtained with the first IFS study of the PN Hen 2-108. Our data allows for detailed study of the usual diagnostics, resulting in the first spatially resolved emission line flux, extinction coefficient, temperature and density maps of this object. The high quality integrated spectrum is used to study in detail the abundances, which are compared to the literature results mentioned previously. We also extracted the stellar spectrum and evaluate the different ambiguous classifications of the object, as well as its Wolf-Rayet like [WR] nature with detailed models. We combine the IFS spatially-resolved spectrum with pseudo-3D photoionization modelling to reveal new details of the Hen 2-108 density distribution. This paper is organized as follows. We present the observation properties and the data reduction process in Section <ref>. In Section <ref> the discussion of the flux, electronic density and temperature maps are presented. The integrated spectrum, the analysis of the emission line fluxes, the nebular emission and the physical and chemical parameters obtained from them are given in Section <ref>. We use popular tools extensively used by the community NEAT <cit.> and PyNeb <cit.> to analyse the data, and we compare the with the results obtained by the Satellite software <cit.>, presented in Section <ref>, that evaluate an analysis of the spatial variation of physical parameters and chemical abundances. 
The spectrum of the central region and the analysis of the features observed in that region are given in Section <ref>. We present a study of the PN parameters, ionization structure and matter distribution of Hen 2-108 using pseudo-3D photoionization models in Section <ref>. The general concluding discussion is given in Section <ref>. § OBSERVATIONS AND DATA REDUCTION The observations analysed here were obtained with the Visible Multi-Object Spectrograph <cit.>, mounted on ESO-VLT UT3 Melipal, Paranal Observatory, in Chile, between Mayl 8th, 2007 and September 1st, 2007. The data are part of the observation program 079.D-0117A, led by H. Schwarz (in memoriam). The observations were performed with airmasses of ≈ 1.6 and ≈ 1.5 for the blue and red arms, respectively. The average seeing obtained from the DIMM was of 0.89 and 0.69 arcsec for the blue and red arms respectively. Details for individual data files can be obtained from the public ESO archive at <http://archive.eso.org/eso/eso_archive_main.html>. VIMOS offers two spatial samplings of 0.33 arcsec (2:1 magnification) and 0.67 arcsec (1:1 magnification) which we use in these observations. A shutter was used to mask the IFU and use only 40×40 fibers. This provides us with a field of view (FoV) of 13×13 arcsec for the 2:1 magnification and a large FoV of 27×27 arcsec for the 1:1 magnification. Using the available high spectral resolution modes HRorange and HRblue, we obtained a wavelength coverage from 4000-6700 and 5250-7400Å with a dispersion of 0.53 and 0.6 Å per pixel, respectively. The regions observed generate two separated data cubes at the end of the reduction process. The spectral resolution achieved is approximately 2500, with 4096 pixels in the spectral axis. The data obtained with the 2:1 magnification was only used to analyse the central source since it did not contain the entire nebula due to pointing problems. These characteristics and exposure information are shown in Table <ref>. The data reduction was performed with the VIMOS Interactive Pipelines <cit.>, using standard procedures of bias and flat field correction, as well as wavelength and flux calibration. For the flux calibration, the standard star used was the white dwarf EG 274. The weather data from ESO showed that for the dates observed the sky was clear and for most of the night photometric. The data was also corrected for differential atmospheric refraction and sky contamination. For the first one, we used a Python script that, with a selected reference star in the observed field, traces its spatial position along each cut in the wavelength axis. The reference star position is obtained with a fit of a Gaussian function to the continuum map and identifying its maximum. The routine determines displacements of the star relative to a reference pixel of choice for all wavelengths. The coordinates of wavelength and displacements of the star are then fit by a second-order polynomial, which is used to correct the positions in each wavelength slice in the data cube. The sky contamination was determined from a region in the data cube external to the ionized gas emission (see the yellow rectangle in Fig. <ref>). With the sky region selected, the median of the flux was obtained in each spaxel, resulting in the estimated sky spectrum per fiber. This sky spectrum was subtracted from all spaxels of the data cube. The residual sky in the data due to errors in our procedure is no larger than 9 percent at the [Oi] 6300 Å line position. 
We have also inspected maps of the skylines and found no significant slope in the sky contribution, which justifies the use of a median value. § SPATIALLY RESOLVED ANALYSIS §.§ Emission line flux maps The emission line flux maps obtained from the 1:1 magnification cube are shown in Fig. <ref>. To obtain the emission line maps we used a script that reads in the data cubes and performs Gaussian fits to selected emission lines in each spaxel. For a given emission line, a fit is performed in a specified region of the spectra extracted around the line centre. The fits are done using the Levenberg-Marquardt least-squares algorithm. The script outputs maps of the emission line flux, central wavelength, width and continuum. Analysing the morphology in the maps, two main structures are immediately identified: a central region and a surrounding ring structure. Some lines are produced in both structures, while others exclusively in one of them. The lines in Fig. <ref> Niii λ4634, [Feiii] λ4658 and λ5270, Ciii λ5696, Oii λ4907 and Heii λ4686 are emitted predominantly in the central region of the nebula. This region has a FWHM of 2.7±0.3 arcsec across the spectral range. For comparison, we also obtained the FWHM of the standard star used in our calibrations, obtaining a FWHM of 2.5±0.3 arcsec. These values are consistent within uncertainties indicating that the emission is unresolved. It is interesting to highlight the difference between the Nii λ5942 flux map and the Niii λ4634 map, with extended emission in the first one and emission just in central region in the other one. Based on the emission line maps in Fig. <ref> we can see that the lines Cii λ4267 and Cii λ7236 have a extended morphology indicating the nebular origin. However the line Ciii λ5696 is concentrated on the central region. Interestingly, <cit.> identified Cii λ7236 as stellar. The FWHM of the profile in the emission line map of Heii λ4686 in Fig. <ref> is ∼2.3 arcsec. The presence of this Heii line is unexpected, as CSs with effective temperatures around 40 kK cannot produce a large zone of Heii and, therefore, significant emission from this ion. To produce the observed Heii λ4686/Hβ ratio, a star with a temperature of at least ∼60 kK is required <cit.>. The line of [Cliii] λ5517 has a low flux in comparison to the others represented in Fig. <ref>. In addition to the emission in the central region, there is fainter extended emission around. The eastern side of the nebula emits a slightly larger flux than the western side. The Hα, Hβ, Hγ, [Nii], [Ariii], [Oiii] and [Oii] in Fig. <ref> show an annular structure, i.e., less flux in the central region of the nebula and greater brightness in the surrounding region. In all maps, we can see that the flux is not uniform, this may indicate that the matter is not homogeneously distributed in this ring. In almost all maps, except the [Oiii], we can distinguish a brighter region in the southeast region of the ring. For [Oiii], a bright spot is seen in the northern region. For [Sii] and Hei, the emission is produced both in the central and the ring structures. In Hei λ6678, the central region is slightly distinguished. §.§ Extinction map An important step in analysing the data is the correction for interstellar extinction. Since we have spatially resolved data, we can obtain spatial information on this parameter. 
Adopting the extinction law of <cit.>, Rv=3.1, and using the observed ratio map of Hα/Hβ, we obtained the extinction map for Hen 2-108 with PyNeb <cit.> considering the <cit.> atomic data. The extinction is obtained by comparing the observed ratio of Hα/Hβ to its theoretical value of 2.85 taken from <cit.> for temperature and density values adequate to the object being studied. The temperature and density are obtained simultaneously from an initial guess by fitting [Sii] λ6731/λ6716 and [Nii] λ5755/λ6548 <cit.>. We performed one iteration with the estimated temperature and density to converge on the correct Ha/Hb ratio and then recalculate the physical parameters as recommended by <cit.>. In order to avoid unrealistic ratio values due to low signal-to-noise data in the outer parts of the nebula, we have limited the observed region used in the spatially resolved diagnostic analysis. The nebular emission region used for the analysis was determined such that the Hβ emission had a spaxel signal-to-noise of at least 3. To achieve this, we have applied a mask to the emission line maps such that spaxels that did not satisfy the criteria were not considered. This mask was applied in all diagnostic maps. The resulting c(Hβ) map is shown in Fig. <ref>. The Hβ flux contour lines are overplotted to help find possible relations between the extinction and the nebular structures. In the nebular region, there is a variation in the value of c(Hβ) between 0.3 and 0.6. Differently from what is seen in the Hβ and other line maps, it is not possible to clearly distinguish the annular structure in the extinction map. §.§ Electronic density and temperature maps After correcting for extinction using the c(Hβ) map in Fig. <ref>, we use the line flux maps of [Sii] λλ6716,6731 and [Nii] λλ5755,6584 to calculate the maps of the electronic density (N_e) and temperature (T_e) using the PyNeb diags.getCrossTemDen tool. The Hen 2-108 electronic temperature map was calculated from the [Nii] λλ5755/6584 line ratio iteratively with the density map obtained from the [Sii] λλ6716/6731 line ratio. Based on the model results presented in Section <ref>, we find that the known contribution of recombination of N^++ is under 1% to the λ5755 line intensity, so the recombination line contribution will be disregarded. Figure <ref> shows the resulting Hen 2-108 density and temperature spatial distribution. Most N_e values are between ∼800 and ∼2000 cm^-3, except for the central region, where the density reaches values up to ∼3000 cm^-3. The central region also shows higher temperatures than the rest of the nebula, with a difference of over 1000 K. The values for N_e and T_e in these central regions should be taken with care as there may be some effect from the noise of the continuum of the central source affecting the line intensities in those pixels. Some spaxels with extreme values close to the border of the useful region still remain, but they are likely the result of low signal of the lines used in the diagnostics which are not exactly coincident with Hβ, used to define the signal-to-noise cut-off. § ANALYSIS OF THE INTEGRATED SPECTRUM The integrated spectrum of Hen 2-108 was obtained from the data cube with magnification 1:1 by summing over the spaxels in the region delimited by the significant Hβ mask as described in Section <ref>. The resulting spectrum is shown in Fig. <ref>, where some of the Hen 2-108 emission lines are identified. 
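The extinction correction and the cross-iterated T_e/N_e determination described in the preceding subsections can be reproduced with a few lines of PyNeb. The sketch below is only illustrative: the line ratios are placeholder numbers rather than measurements from this work, the CCM89 law stands in for whichever extinction law was actually adopted, and the simple fixed-point loop mimics the kind of iteration performed by PyNeb's getCrossTemDen tool.

import pyneb as pn

# Extinction from the observed Halpha/Hbeta ratio against the theoretical 2.85.
rc = pn.RedCorr(law='CCM89')                 # R_V = 3.1 by default
obs_ha_hb = 4.0                              # placeholder observed ratio
rc.setCorr(obs_over_theo=obs_ha_hb / 2.85, wave1=6563., wave2=4861.)
print('c(Hbeta) =', rc.cHbeta)

# Cross-iteration of Te([N II]) and Ne([S II]) from dereddened line ratios.
n2 = pn.Atom('N', 2)
s2 = pn.Atom('S', 2)
r_n2 = 0.013                                 # [N II] 5755/6584 (placeholder)
r_s2 = 0.9                                   # [S II] 6731/6716 (placeholder)
te, ne = 9000.0, 1000.0                      # initial guesses
for _ in range(5):                           # converges quickly in practice
    te = n2.getTemDen(r_n2, den=ne, wave1=5755, wave2=6584)
    ne = s2.getTemDen(r_s2, tem=te, wave1=6731, wave2=6716)
print('Te =', te, 'K, Ne =', ne, 'cm-3')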
The observed set of ions seen in the spectrum are that typically seen in a low-ionization nebula. We identified 53 emission lines from 10 elements (15 ions). The brightest lines (above 20% of Hβ) found in the spectrum are Hi and Hei recombination lines and [Oiii], [Nii], and [Ariii] forbidden lines. We also highlight the presence of a cluster of weak lines between 4630 and 4660 Å (hereafter called the λ4630 complex), which is shown in details in Fig. <ref>. The λ4630 complex is composed mostly of metal recombination lines produced in the central region of Hen 2-108. These lines were identified as Nii λ4630, Niii λ4634, Oii λ4639, blend of Ciii, Oii and Ciii λ4647,4649,4650 and [Feiii] λ4658. The maps of two of these lines can be seen in Fig. <ref>. The comparison between the Hα observed line profile and the Gaussian fitting, displayed in details in Fig. <ref>, shows indication that this line has wide wings. We found no clear evidence of wide wings on other lines. Some other interesting features are discussed in more detail in Section <ref>, where we analyse the spectrum of the PN central region. §.§ Emission line fluxes To measure the line fluxes, we used the code ALFA <cit.>. The code also suggests line identifications, which were confirmed against other publications in the literature <cit.>. The ALFA fitting of the Hen 2-108 spectrum was made with no normalization and the following parameter values: rebin of 2; 2000 generations; population size of 2000; pressure of 0.1 and size of continuum window of 51 pixels. We adopted these parameter values to improve signal-to-noise and the detection performance of ALFA. The overall quality level of the ALFA fitting is exemplified in Fig. <ref>, where part of the spectrum is shown in more details. The ALFA fitting provides a relative residual typically less than 15 per cent for most of the spectrum[Higher values occur in regions of the continuum, due to its low flux, and in the wings of some lines, due to deviation of the Gaussian profile. Although the residual may reach up to 40 percent in the line wings, it does not influence the line fluxes, as the flux in the wings do not contribute significantly to the total line flux in our case. High values were also observed where the sky contribution was not perfectly removed, but also in these cases, they do not influence significantly the line fluxes we report.]. Table <ref> lists the measured line fluxes and the uncertainties based on the ALFA fitting procedure. We only include lines that are detected with signal-to-noise ratio S/N>3. The S/N ratio is obtained from the residuals determined by ALFA from the difference of the fitted spectrum with respect to the observed spectrum. The observed and rest wavelengths, as well as the line carrier, are shown in the first three columns. The fourth and fifth columns present, respectively, the measured fluxes and their errors, while the sixth and seventh columns present the corresponding values but corrected from reddening. Columns eight to ten show measured fluxes available in the literature for comparison. With a few exceptions that are discussed in Section <ref>, our integrated fluxes are very similar to the values found in the literature. All fluxes are scaled to Hβ= 100. The Hen 2-108 absolute Hβ flux derived from our integrated spectrum is (1.9± 0.3) × 10^-12 erg cm^-2 s^-1 (not corrected for extinction) and (7.0± 1.1) × 10^-12 erg cm^-2 s^-1 (corrected for extinction). 
From the fitting procedure, ALFA also estimates the radial velocity of the object using all fitted lines. For Hen 2-108, the estimated barycentric radial velocity, corrected for the Earth's orbital and rotational velocities at the time of the observation, is -10.8 km s^-1. Since ALFA does not provide an uncertainty, we roughly estimated one of 3 km s^-1 based on the spectral sampling. §.§ Physical and chemical properties Table <ref> shows the gas diagnostic results obtained with the code NEAT <cit.> using the integrated spectrum line fluxes of Hen 2-108 as input. For the calculation of physical conditions and of chemical abundances, we considered only lines that were clearly from the nebula and not totally from the central source. The lines not considered were: N ii 4621, 4630; N iii 4634; C iii 4650, 4651 and 5696. These lines are well reproduced by the CS model discussed in Section <ref>. The code was executed using 5000 iterations in order to obtain uncertainties through a Monte-Carlo procedure. The table shows the values of the electronic density estimated from low and medium ionization ions, [Sii] and [Cliii], respectively, as well as the electronic temperature obtained from low ionization ion [Nii] and middle ionization ion [Oiii]. For comparison, we also include in the table calculations from <cit.> which use a methodology for the diagnostics similar to ours, but based on long-slit observations. The extinction coefficient is obtained from Hα/Hβ. We used the extinction law of <cit.>. The extinction coefficient of this PN was determined by several authors and the results are consistent with our value, even though the methods and data are distinct. In addition to the extinction coefficient calculated by <cit.> given in Table <ref>, <cit.> inferred c(Hβ) = 0.4 and <cit.> c(Hβ) = 0.53. Table <ref> shows the ionic and elemental abundances obtained from the VLT/VIMOS data with the code NEAT. The table provides abundances of He and N obtained from recombination lines, as well as abundances of N, O, S, Ar, Cl, and Fe. The total abundances were obtained using the ionization correction factors of <cit.>. Abundances from the literature are also included for comparison. A full comparison taking into consideration the field of view used in each study is presented in Section <ref>. The O/H and N/H abundances were derived from recombination and collisionally excited lines, which allowed us to estimate an abundance discrepancy factors ADF(O/H) = 9 ± 3 and ADF(N/H) = 22 ± 15 respectively. This result should be considered with care, as only a few lines of both ions are detected and used in obtaining the recombination abundance and for N there are no ions which emit both recombination and collisionally-excited lines in the optical. In particular the N/H ADF is likely to be overestimated due to residual contamination from emission lines from the central source (see section <ref> for details). Also, according to <cit.>, the nitrogen spectrum can be produced by continuum fluorescence, which may be specially important in low excitation conditions such as the ones for Hen 2-108. These abundances and respective ADF values should be considered as an order of magnitude estimates only. With the O and C recombination abundance determinations, albeit with low signal-to-noise ratio, it was possible to estimate C/O = 0.22 ± 0.05. We also determined the Fe abundance in Hen 2-108 from the [Feiii] λ4659, λ4881, λ5270 emission lines using the pyneb package <cit.> since NEAT did not perform this estimate. 
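Because NEAT does not handle [Feiii], the ionic Fe abundance mentioned above was obtained with PyNeb's Atom class. A hedged sketch of that kind of calculation is shown below; the intensity and the temperature/density are placeholders, and the wavelength label wave=4658 is an assumption that may need to be matched against PyNeb's internal [Fe III] line list (e.g., via the to_eval argument) if it does not resolve automatically.

import pyneb as pn

fe3 = pn.Atom('Fe', 3)

# Placeholder dereddened intensity of [Fe III] 4658 on a scale of Hbeta = 100.
i_4658 = 0.15
te, ne = 8200.0, 1400.0      # representative Te and Ne for the nebula

# Ionic abundance Fe++/H+ relative to Hbeta (PyNeb supplies the H emissivity).
fe_pp = fe3.getIonAbundance(int_ratio=i_4658, tem=te, den=ne,
                            wave=4658, Hbeta=100.0)
print('Fe++/H+ =', fe_pp)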
Since [Feiii] line ratios can also be used as density indicators <cit.>, the (T_ e-N_ e) diagnostic diagrams are presented in Fig. <ref> for each of three lines. The observed line ratios are log[Feiii] λ4881/λ4659=-0.72±0.07, log[Feiii] λ5270/λ4659=-0.36±0.06, and log[Feiii] λ5270/λ4881=0.37±0.08. Comparing the observed ratios to the values in the top and bottom panels, two densities are possible, N_ e∼100 cm^-3 or ∼10^5.5 cm^-3, respectively. From the middle panel, the observed [Feiii] λ5270/λ4659 ratio is valid for T_ e>12 000 K but considering the uncertainty of the line ratio, Te can be as low as ∼8000K and N_ e ranges from 10^2.5 to 10^5.5 cm^-3. Overall, these results indicate the presence of a gas with N_ e between 10^3 and 10^5 cm^-3 in the central region of the nebula, where most of the iron emission is produced (Fig. <ref>). The ionic abundance of the doubly ionized Fe is computed 5.8×10^-7 for N_ e=300 cm^-3 and 5.6×10^-7 for N_ e =10^5 cm^-3. The elemental abundance of Fe is between 1.31×10^-6 and 1.27×10^-6 using the ICF expressions provided by <cit.>. The uncertainties were computed considering a Monte Carlo approach with 200 trials. Note that our ionic abundance of Fe is found to be close to the value computed by <cit.> but our elemental Fe abundance is three times higher because of the difference in ICF(Fe) between the two studies. § SATELLITE CODE The spatially resolved study of the chemistry in PNe is complicated by the fact that the usual empirical procedures used to obtain abundances do not take into account the spatial complexity of the nebulae. The Spectroscopic Analysis Tool for intEgraL fieLd unIt daTacubEs (satellite[satellite v.1.3 is publicly available on Github: <https://github.com/StavrosAkras/SATELLITE>]; ; ) was developed to perform a series of calculations to assist in the analysis of the spatial variation on typical nebular diagnostics and abundance determination. The code has four different modules: rotational analysis, radial analysis, specific slit analysis, and 2D spatial analysis of the emission line flux maps generated from the data cubes of any IFU. The code simulates pseudo-slits spectra for a direct comparison with the data from long-slit spectroscopy as well as the construction of emission line ratios diagnostic diagrams (i.e. SMB, ; BPT, ; VO, ). All the physical parameters (c(Hβ), T_e and N_e, ionic and total abundances, and ionization correction factors (ICFS)) are computed for each pseudo-slit through the implementation of the pyneb package 1.1.15 <cit.>. Atomic data available in pyneb can be selected by the user of satellite. For the case of Hen2-108, the atomic data from Chianti database <cit.> were used for direct comparison with the results obtained with neat. In the following subsections, we discuss the application of the different modules and capabilities of satellite to the Hen 2-108 VIMOS data. §.§ Rotation analysis module The rotation analysis module simulates a number of slits placed radially from the centre of the observed nebula, varying the position angle (PA) from 0 to 360 degrees. In this work, we considered pseudo-slits with width and length of 5 and 20 spaxels or equivalently 3.35 and 13.4 arcsec, respectively. Figure <ref> displays the variation of the extinction coefficient, T_e and N_e (obtained employing the [N ii] and [S ii] diagnostics, respectively) as functions of the pseudo-slits' PA. A non-negligible variation in c(Hβ) is found. 
c(Hβ) starts with a value of 0.45 at PA=0 (north-south direction), decreases down to 0.3 at PA=50 degrees and increases again to 0.5 for PAs between 170 and 330 degrees. On the other hand, T_e and N_e do not show significant variation with the PA and their median values are 8217 K and 1415 cm^-3 with standard deviations of 180 K and 215 cm^-3, respectively. Ionic and total chemical abundances of O and N as well as the N/O ratio as a function of the pseudo-slits' PA are displayed in Fig. <ref>. No significant variation in the elemental abundances with the direction of the pseudo-slits is found. Note that for N and S (not shown here), there is a small difference between the elemental abundances obtained using the ICFs formulae from <cit.> (blue line) and <cit.> (orange line). The results of this analysis indicate that the elemental abundances' distribution shows some deviations from spherical symmetry, albeit with large uncertainties. The error bars of c(Hβ), T_e and N_e in Fig. <ref>, as well as the errors of all the physical parameters determined by satellite, correspond to the standard deviations from a Monte Carlo distribution of the emission line intensities assuming 200 replicates. §.§ Radial analysis module The radial analysis module performs a spectroscopic analysis in a radial direction for a specific pseudo-slit. The direction (PA), width and length of the pseudo-slit are chosen by the user and the products of the analysis are provided as functions of the distance from the CS. For the case of Hen 2-108, we applied this module for two directions: PA=90 degrees (eastern direction) and PA=180 degrees (southern direction). The same slit width and length as in the rotation analysis module were adopted. The variation of c(Hβ), T_e, and N_e as a function of the distance from the CS is presented in Fig. <ref>. The step in these plots is one spaxel or 0.67 arcsec. For both pseudo-slits, c(Hβ) is found to be as low as 0.3 in the inner part of the nebula and increases outwards, reaching a value of 0.5 for the eastern part and ∼0.7 for the southern part. Note that the radial analysis with the satellite code results in very low c(Hβ) for the inner region of the nebula, while the map in Fig <ref> displays values as high as 0.7-0.8. This is caused by the difference in the method. For the map, both the /Hβ and Hγ/Hβ lines ratios are used to get the extinction coefficient, while the satellite considers only the / ratio. T_e and N_e display variations with the distance from the CS although with large uncertainties, especially in the inner regions where the signal-to-noise ratio is low for the used lines. Note that, for r<1.6 arcsec, T_e is higher in the inner region of the nebula, which is consistent with the 2D maps (Fig <ref>). §.§ Specific slits analysis module The third module of satellite, the specific slits analysis module, deals with the simulation of ten pseudo-slits at specific positions and orientations on the nebula for a direct comparison with long-slit spectroscopy studies. The long slit observation from <cit.> is simulated considering a 3 spaxels (= 2 arcsec) wide pseudo-slit. A second pseudo-slit 25 spaxel wide covering the whole nebula is used to obtain the integrated spectrum of Hen 2-108. Line intensities from both spectra are presented in Table <ref>. The integrated fluxes obtained with satellite (including the absolute Hβ flux of 1.9×10^-12) and neat are also in very good agreement. 
The discrepancy between the satellite analysis and the results from <cit.> is less than 5 percent for the majority of the emission lines, except the He ii and [S iii] lines. Note that the difference between the integrated and pseudo-slit He ii intensities is significant. The long-slit used by <cit.> covers only a small part of the nebula towards the centre, where the He ii emission emanates. Part of the Hβ flux is, however, not included, resulting in a higher He ii/Hβ ratio compared to the integrated spectrum value. All physical parameters (c(Hβ), T_e, N_e, abundances and ICFs) computed by satellite for the integrated and G14's pseudo-slit spectra are given in Tables <ref> and <ref>. §.§ 2D analysis module satellite also calculates all the physical parameters for each individual spaxel and provides maps for all of them, as well as a number of line ratio maps defined by the user. The maps produced by satellite are similar to what is shown in Figs. <ref> and are therefore not shown here. Histograms of the nebular parameters are also provided by satellite for a better illustration of their distributions. The histograms of c(Hβ), T_e, N_e for Hen 2-108 are shown in Fig. <ref>. Most of the spaxels have c(Hβ) between 0.2 and 0.8 with a peak at ∼0.5. This range is consistent with the values discussed in previous sections. Note that the histogram indicates values higher than 1.0 which are not observed in the map of Fig. <ref> because of the mask of S/N=6 that applied on the datacube for the construction of the line maps. The distributions of T_e and N_e are concentrated in the ranges of 7500 to 8500 K and 500 to 2000 cm^-3, respectively. § SPECTRUM OF THE CENTRAL REGION The spectrum of the central region presents several differences with respect to the rest of the nebula. In this region, the CS can contribute significantly to the observed integrated spectrum. To assess the star's contribution and study which type best explains the spectral characteristics, we extracted the spectrum of the central region of the 2:1 magnification data cube and subtracted the nebular contribution. This was achieved by fitting two-dimensional Gaussians plus a second degree surface polynomial to the central region of the PN, defined as a 5 arcsec width centred at the star, at each point along the dispersion axis. As can be seen in Fig. <ref>, the extracted spectrum shows clear absorption features such as Heii λ4200, Hγ, Hei λ4471, Heii λ4541, Heii λ5412, Hei λ5876, which shows a clear P Cygni type profile and Na I doublet λλ5890,5895. We also detected absorption features at 5780, 5797 and 5812 Å. Two of these features, 5797 and 5812 Å, were identified as the Civ doublet by <cit.>. Those authors, however, do not mention the very obvious feature at 5780 Å in their spectrum. The three features coincide with the positions of lines known to be of diffuse interstellar bands (DIBs). These features were discovered by <cit.> and confirmed by <cit.>. Further details about their properties are discussed by <cit.> and references therein. However, our spectrum is not of sufficient quality to rule out the 5797 and 5812 Å lines as the Civ doublet. It is possible that there is a combination of both DIBs and Civ doublet in these absorption lines. A higher resolution and signal-to-noise spectrum is necessary to confirm this. Also confirmed here by the extracted central region spectrum, the lines of Niii λ4634, Heii λ4686 and Ciii λ5696 are mostly stellar in origin, although from our data we can not specify exactly the proportion. 
The lines of Ciii λ5696, Cii λ7236 and Heii λ4686 were identified by <cit.> as of stellar origin. <cit.> emphasize that the lines of Niii λ4634 and λ4641 are present in [WELS] and the first is less intense than the second. The first is seen in our spectrum next to the line Nii λ4630, but λ4634 cannot be separated from λ4639, identified as Oii. The same authors also identify the line λ4658 as Civ, while we identified the same line as [Feiii]. Our detection is supported by the detection of other [Feiii] lines and no detection of other Civ lines. It is seen in the flux map (Section <ref>) that the λ4658 wavelength is extended just beyond the central region, which may indicate that this emission is nebular. This region of the spectra can be very complex and can include many lines, as it can be seen for example in <cit.>. <cit.> disagree with the use of [WELS] as a classification type since many of the CSPNs were classified using low resolution spectra. Given the observed features such as the P Cygni profiles and broad Hα emission, together with the other emission lines seen in the CS of Hen 2-108, we calculated a set of models with the Potsdam Wolf-Rayet (PoWR) code for expanding atmosphere to infer the stellar parameters, which are summarized in Table <ref>. Additionally to the optical spectrum of the CS, we use for our spectral analysis the coadded, low-dispersion IUE observations sp47512 and sp47513, and also lp25380, all taken on 21. April 1993. The basic assumptions of the PoWR code are spherical symmetry and stationarity of the flow. The radiative transfer equation is solved in the comoving frame of the expanding atmosphere, iteratively with the equations of statistical equilibrium and radiative equilibrium. In the subsonic part, the velocity field is implied by the hydrostatic density stratification according to the continuity equation. For the supersonic part of the wind, we prescribe the velocity field (r) by a so-called β-law, where the free parameter β for WR stars is usually set to β=1 or β=0.8 for O-type stars. For Hen 2-108 we find that the widths of the optical lines correspond to expansion velocities of a few hundred km/s, while the P Cygni profile of the C iv resonance doublet in the IUE spectrum indicates a terminal velocity of about (1200±200) km s^-1. A consistent spectral fit of the widths of the C iv resonance doublet and the optical emission lines simultaneously is achieved for β= 4, i.e. a much shallower velocity law. We note that such a β value is also used for the spectral fit of the qWR star HD 45166 <cit.>, that has a qualitatively similar spectrum. Line broadening by microturbulence is also included in our models. From the shape of the line profiles, we deduce a microturbulence velocity of about 50 km s^-1. We adopted a stellar mass of 0.5 M_⊙, which is slightly below the mean value for CSPNe of 0.6 M_⊙ <cit.> and accounts for our findings of a progenitor with a mass below 1 M_⊙ (see Sect. <ref>). Usually, the value of M_⋆ has no noticeable influence on the synthetic wind spectra. The stellar temperature T_⋆ (defined at the continuum τ_Ross=20) can be estimated from the relative strengths of spectral lines of different ionization stages of the same element. We used He i vs. He ii, C iii vs. C iv, and N iii vs. N iv. While the nitrogen lines are relatively well reproduced by our best fitting model and the fit to the carbon lines is acceptable, we could not find a model that results in a comparable fit quality to all of the helium lines. 
Models with lower temperatures of about 38 kK can reproduce the strengths of the He ii λ4686 emission line quite exactly, but result in too strong He i emission and N iii wind absorption lines. Moreover, such models result in a worse fit to the iron forest in the UV range. From the absolute strengths of the wind lines in the continuum-normalized spectrum we infer the transformed radius R_t <cit.>, a quantity which takes the invariance of the volume emission measure normalized to the stellar surface into account and is defined as R_t = R_⋆ [ (v_∞ / 2500 km s^-1) / (Ṁ√(D) / 10^-4 M_⊙ yr^-1) ]^2/3, with the terminal velocity v_∞, the mass-loss rate Ṁ, and the stellar radius R_⋆ (defined at continuum τ_Ross=20). We allow for wind inhomogeneities and use the density contrast D as the factor by which the density in the clumps is enhanced compared to a homogeneous wind of the same Ṁ. We account for wind clumping in the approximation of optically thin structures <cit.>. Models with a smooth wind result in a too strong C iv resonance doublet relative to the optical emission lines. A value of D=4, as found for massive WN-type stars, improves the fit quality, but higher values of D cannot be excluded. Our stellar atmosphere models include atomic models for H, He, C, N, O, and Si. Iron-group line blanketing is treated by means of the superlevel approach, i.e. the thousands of energy levels of the iron group elements (Sc-Ni) are combined into energy bands. Between the energy bands the line transitions for the NLTE calculations are treated as superlines with a finite width and cross section σ(ν), comprising the data for millions of spectral lines <cit.>. While the fit quality to the metal lines is in general acceptable or good, it is worse for the helium lines in the parameter range under consideration. Hence, the helium and hydrogen abundances are only roughly constrained. Moreover, the fit to the Balmer lines is hampered by the blending with stellar He ii lines of the Pickering series and contamination with nebular emission, which might not be completely removed. The best fit is for X_H = 50%, but solar hydrogen abundance also gives an acceptable fit (see Fig. <ref>). For silicon and oxygen we obtain a sufficient fit quality with solar abundances, while carbon and nitrogen seem to be slightly enhanced. The carbon and nitrogen abundances are inferred from the many C iii, C iv, N iii, and (very weak) N iv lines. Among the various silicon lines, the Si iv λλ6668-6702 multiplet is quite sensitive to the silicon abundance. For the determination of the oxygen abundance we use the only line in the optical range, O iii λ5592, which is also quite sensitive. We notice the pronounced N v resonance line in the UV, which can only be reproduced by our models when super-ionization by X-ray emission is taken into account. For this purpose, an optically thin hot gas component of T = 2 MK is assumed to be distributed within the stellar wind. We only account for its free-free emission (thermal bremsstrahlung), as the filling factor is arbitrarily chosen <cit.>. Finally, we fit the spectral energy distribution (SED) to the observed IUE spectra and broadband photometry (see Fig. <ref>) by using the geometric distance from <cit.> and applying the reddening law of <cit.> with R_V=2.0, which gives a slightly better fit than the standard value of 3.1. We infer E_B-V=0.37 mag and hence determine a stellar luminosity of log (L_⋆/L_⊙)=3.62. From L_⋆ we also obtain a stellar radius of R_⋆=1.25 R_⊙ and a mass-loss rate of log(Ṁ/(M_⊙ yr^-1))=-7.2.
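As a quick numerical illustration (not part of the original analysis), the transformed radius implied by the parameters quoted above can be evaluated directly:
[language=Python, numbers=none]
# Illustrative check of R_t for the quoted parameters: R_* = 1.25 R_sun,
# v_inf ~ 1200 km/s, log(Mdot) = -7.2 [M_sun/yr], clumping contrast D = 4.
R_star = 1.25          # R_sun
v_inf  = 1200.0        # km/s
Mdot   = 10**-7.2      # M_sun / yr
D      = 4.0

R_t = R_star * ((v_inf / 2500.0) / (Mdot * D**0.5 / 1e-4)) ** (2.0 / 3.0)
print(f"R_t ~ {R_t:.0f} R_sun")   # roughly 60-70 R_sun for these numbers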
Because of the low value C/O=0.22 (Section <ref>), and the CS with H abundance of 0.50^+0.24_-0.10 by mass (Table <ref>), we suggest the CS of Hen 2-108 is not a result of a very Late Thermal Pulse (VLTP) or a LTP, since the significant H abundance and low C/O ratio in the nebula of ”born-again” stars is not predicted by models <cit.>. In the literature, many works have used the weak emission lines stars [WELS] as a placeholder for unknown types of objects. <cit.>, for example, has classified Hen 2-108 as [WELS] but pointed out that the spectrum was of low quality. Works such as <cit.> concluded that the presence of the carbon emission lines was an indication of a [WC] type Wolf-Rayet, but classified the CS of Hen 2-108 as a [WELS]. <cit.> classified the CS of Hen 2-108 as [WC]-PG1159 type, that is, [WELS] in a transition phase between the post-AGB and pre-white dwarf phases. We note that in our spectrum, all stellar absorption features in the optical spectrum appear to be wind lines and specifically Hβ has a P Cygni line profile. Therefore and in analogy to the the classification scheme by <cit.> for massive stars, one can assign the spectral class [Of/WN] to the CS of Hen 2-108 with respect to the N iii λλ4634–41 emission lines and Hβ. In contrast to the examples shown in their paper, we do not detect N v in the optical range. The detailed subtype of Hen 2-108 is [Of/WN8] when applying the classification scheme established by <cit.> for massive WN stars. For this subtype, the N iii lines are much stronger than N iv lines and comparable to the He ii λ4686 line. We use brackets to distinguish this object from the massive Of/WN stars. § PHOTOIONIZATION MODEL We used the one-dimensional photoionization code Cloudy (version 17.02; ) and the Python library PyCloudy <cit.> to construct a pseudo-3D photoionization model for Hen 2-108. Our main goal with the model is to produce a better understanding of the results presented in the previous sections and of the characteristics of Hen 2-108, in particular, its matter distribution and ionization structure and ionizing source. Cloudy nebular simulations provide integrated line fluxes, which we can use to compare with the observations in search for a Hen 2-108 photoionization model. Using PyCloudy and assuming a matter distribution for the nebula, we are able to use Cloudy models to simulate not only integrated fluxes, but also line maps and spatial profiles to compare to our spatially resolved observations. For this model, we will not attempt to reproduce the most central zone of the nebula, as the emission is strongly contaminated by the stellar emission, as seen in previous sections. The model we construct is based on our observations, the analysis we present in the previous sections, and information available in the literature. Table <ref> lists the observed integrated line fluxes and infrared photometric measurements used to constrain the model. Fluxes of some lines are dominated by the central (stellar or nebular) emission and are not used to constrain the model. We included Spitzer Space Telescope <cit.> mid infrared line fluxes from P<cit.> in our analysis. <cit.> made corrections for aperture effects and these line ratios are suitable for comparison to our observed and modelled flux ratios for the whole nebula. <cit.> provide the Hβ absolute flux of 1.3×10^-11 erg cm^-2 s^-1 derived from the 2 and 6 cm radio flux obtained by <cit.>. 
As they pointed out, this flux is consistent with the Hβ flux of 3.7×10^-12 erg cm^-2 s^-1 measured by <cit.>, if the extinction coefficient is c(Hβ)= 0.53. For comparison, we obtained similar values of c(Hβ)= 0.52 and F(Hβ) = 1.9×10^-12 erg cm^-2 s^-1 for the integrated spectrum. <cit.> suggest an upper limit for the [Ciii] λ1909 ultraviolet line flux obtained from IUE (International Ultraviolet Explorer). IRAS (Infrared Astronomical Satellite) fluxes from <cit.> were used to constrain the dust content in Hen 2-108. As discussed above, the IRAS field of view seems to include all or most of the emission from Hen 2-108, so no corrections were applied when comparing these fluxes with our modelled fluxes. Also, we have no evidence of significant contamination from field objects within the IRAS field of view. We use the factor κ(O), κ(O) = [log(O_mod) - log(O_obs)] / τ(O), defined in <cit.> as a measure of the proximity of observed and modelled values in Table <ref> and, therefore, of the quality of the model. In the expression, O_mod and O_obs are the modelled and observed values of the observable, respectively. The tolerance factor for this observable, τ(O), is defined as τ(O) = log(1 + Δ I/I) for any quantity I with uncertainty Δ I. For the optical line fluxes, we assumed the relative uncertainties Δ I/I to be: Δ I/I = 0.5 if I ≤ 0.1 Hβ; Δ I/I = 0.3 if 0.1 Hβ < I < Hβ; and Δ I/I = 0.2 if I ≥ Hβ. We added 15 percent to the relative uncertainties of the UV and IR line fluxes. For the absolute Hβ flux, we assume Δ I / I = 0.15, while for the IR band fluxes, we assume Δ I / I = 0.50. With such values, we take into account the observational uncertainties and the systematic effects that can cause deviations from the model for an observable. Values of κ(O) between -1 and 1 indicate a good fit for the corresponding quantity. To better constrain our model, we also compare the spatial profiles of bright lines extracted from the VIMOS maps to the profiles we obtain from the pseudo-3D models. The comparison is only done by visual inspection, as the goal is to reproduce only the general behaviour, given that the geometry we are assuming for the model is simplified. For the comparison, we use the profile extracted in the N-S direction, as this is more symmetrical. The profiles taken in the E-W direction are similar, although somewhat more extended and less symmetrical. This comparison is especially helpful to constrain the nebular matter distribution, the ionic structure, and the nebular size. We considered T_eff, L_bol, density distribution, elemental abundances of He, C, N, O, Ne, S, Cl, Ar, and Fe, dust-to-gas ratio, and PN distance as free parameters to be determined by the modelling procedure. The parameters of the best photoionization model we found are listed in Table <ref>. The radial density distribution used is shown in Fig. <ref>. This model shows a reasonable match to the Hen 2-108 integrated and spatially resolved properties. The comparison of the observed integrated fluxes to the corresponding modelled values is given in Table <ref>. The model reproduces most of the line fluxes within the expected errors, in particular the H I, [O III], [N II], [S II], [Ar III] (optical and mid-IR), [Cl III], and [Fe III] (optical and mid-IR) brightest lines. Line spatial profiles obtained from the emission maps in the N-S and E-W directions (chord passing by the PN centre) are compared with the radial profiles determined by the pycloudy simulation in Fig. <ref>. In the figure, the modelled profiles are scaled to better match the normalised observations.
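The κ(O) figure of merit defined above is simple to evaluate; the following sketch shows the calculation with placeholder fluxes (the values are illustrative and are not the measurements used in this work):
[language=Python, numbers=none]
# Minimal sketch of the kappa(O) quality factor and the tolerance rule for optical lines.
import math

def tolerance(obs, hbeta, extra=0.0):
    """Relative-uncertainty rule for optical line fluxes; extra=0.15 for UV/IR lines."""
    if obs <= 0.1 * hbeta:
        rel = 0.5
    elif obs < hbeta:
        rel = 0.3
    else:
        rel = 0.2
    return math.log10(1.0 + rel + extra)

def kappa(modelled, observed, tau):
    return (math.log10(modelled) - math.log10(observed)) / tau

# Example: a line observed at 0.40*Hbeta and modelled at 0.50*Hbeta
hbeta = 1.0
obs, mod = 0.40, 0.50
k = kappa(mod, obs, tolerance(obs, hbeta))
print(f"kappa = {k:+.2f}")   # |kappa| <= 1 counts as a good fit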
In most of the cases, this means normalised to its maximum value as in the observational profiles. For the simulation, we assumed a spherically symmetrical nebula. This is a simplified geometry, but it provides reasonable matches to the main features of the bright-line radial spatial profiles. To determine the line flux spatial profiles, we constructed a pseudo-3D photoionization model with spherical symmetry, but with a non-uniform density distribution. We note that Hen 2-108 is not perfectly spherical; it displays a slightly elliptical morphology, as shown in our line maps (see Fig. <ref>) and inferred by other authors in the literature. The spherical distribution is a simplification, but including a radial density profile provides the next major step in understanding the PN true 3D structure. Uniform density models could not reproduce important characteristics of the line emission maps of Hen 2-108. For example, in the case of Hi, [Oiii], and [Ariii], the observed emission in the central regions (Fig. <ref>) is significantly fainter than it should be in the case of a uniform filled sphere. As Hen 2-108 is a low excitation PN, the central decrease in [Oiii] cannot be due to an ionization effect, i.e. the presence of Oiv emission in the central region. In fact, there is no [Oiv] 25.9 μm emission detected in the Spitzer spectrum of this nebula (see Fig. <ref>). The central decrease is an indication that the density in the central regions of Hen 2-108 must be lower than in the outer zones of the nebula. This could have important consequences for its ionization structure and how well we could use the model to interpret the observed line maps. For our models, we explored different radial density distributions that could reproduce a ring structure as seen in the H I and [O III] maps. Models with large empty central cavities (i.e., large nebular internal radius, a few pixels wide) do not reproduce very well the mid infrared dust thermal emission. These models lack warm dust emission at the low mid-IR wavelengths. We assume a small nebular internal radius of 10^15 cm, chosen so as to be smaller than or around 0.5 pixel in our images, for distances previously estimated for Hen 2-108 (around 2-5 kpc). The gas density is increased in the radial direction, from a low value at this radius to a large value at the outer radius. The outer radius was limited by the comparison between the modelled and the observed emission line profiles. The best results (i.e., the best match of the observations to the models) were found for the total hydrogen density distribution given in Fig. <ref>. Fitting the oxygen and helium lines (fluxes and profiles) simultaneously was challenging. The best global result we obtained did not produce a very good match between model and observations for all the Hei lines. Typical residuals are ∼35 percent. To improve the Hei line ratio fitting, one possibility would be to increase the helium abundance to a value considerably larger than the value inferred from the empirical analysis. Another possibility would be a matter bounded nebula. As we can see in Fig. <ref>, the Hei and Hi line spatial profiles have similar shapes. For a PN with a low T_eff, the He^+ region is smaller than the H^+ region. They may have similar sizes if the nebula is matter bounded <cit.>. This was observed, for example, for the PN Tc 1 <cit.>. We also note that the observed regions that emit [Oiii] and [Oii] in Hen 2-108 have similar outer radii. This could also indicate that the nebula is matter bounded.
However, while a matter bounded nebula improve the fitting of the Hei lines to a much better level, the match to observation of other line fluxes and the dust continuum were sacrificed. A third possible explanation is that the matter distribution is different from what we assume. The emission maps and derived profiles show that the nebula has asymmetries, which are obviously not reproduced by the our spherically symmetric model. The asymmetries are seen in most of the line profiles, but are more evident in the low-ionization emission as [Sii], [Nii], and [Oii] doublets. More detailed 3D modelling is needed to test this hypothesis. The observed profiles of the Hi and Hei recombination lines show only a small central emission excess, with the exception of Hei λ4471, which may have a somewhat prominent central peak emission. Other observed profiles with contribution from the central region are Cii λ7231, Nii λ5679, [Feiii] λλ4658,5270, [Sii] λλ6716,6731, [Siii] λ6312, and [Cliii] λλ5518,5538. Some of these lines might have contributions from the CS emission, while in other cases the contribution may be from a stellar wind or nebular emission close to the central star. The density diagnostic map in Fig. <ref> indicates a high density (up to ∼4000 cm^-3) in the central pixels region (where the excess emission is seen). The [Sii] lines, which were used to derive the density diagnostic maps, should be nebular. Due to their low critical densities, the [Sii] lines ratios are not a useful diagnostic for densities higher than 10^4 cm^-3 <cit.>, but the [Feiii] lines indicate the possible presence of a hot dense gas in the core of Hen 2-108 (see Sect. <ref>). The models we test with such densities, however, did not reproduce the observations. One possible explanation is that the high density region has an open geometry (our models only considered closed geometry) as, for example, a torus, barrel, or a blob. Distance was assumed as a free parameter determined from the model procedure. The best model distance is 4.0 kpc, which is close to the recent calculations of 3.6±0.2 kpc by <cit.> and 4.1 kpc by <cit.> based on GAIA parallax measurements. The nebular gas elemental abundances are assumed to be constant across the nebula. The abundances determined from our observations (Table <ref>) were used as initial guesses. Values were varied to improve the match to observations. The dust content and composition in Hen 2-108 has not been published in the literature. The C to O abundance ratio in Hen 2-108 is not well constrained as the C abundance is poorly known. Although, Pottasch et al. (2011) reported a possible fullerene line in this nebula, which could indicate the presence of C-rich dust, we have found no evidence of fullerene emission the low-resolution (see Fig. <ref> in Appendix <ref>) or in the high resolution Spitzer spectra of Hen 2-108. Mid infrared bands of polycyclic aromatic hydrocarbons (PAHs) are also not detected in the mid-IR Spitzer spectra of this nebula (Fig. <ref>). However, preliminary models we calculated indicated that the gas should be C-rich and the dust continuum is reasonably fitted with graphite dust. For our models, we assume that the dust grains are composed of graphite with the standard Cloudy ISM size distribution <cit.>. Dust is uniformly mixed with the gas. To determine the best model, we only vary the dust-to-gas ratio, which can be well constrained by the available mid and far infrared photometric measurements. 
It is not our goal to provide a detailed dust model for Hen 2-108 here, but only to consider the general effect of dust on the radiation processing and gas heating, for which our dust model is adequate. The comparison of the infrared continuum simulated with Cloudy and the infrared photometry measurements available in the literature is shown in Fig. <ref>. There is a generally good match between them. Uncertainties in both the model and the observations, as well as contributions from line emission and from field objects, may account for the differences. The IR bump shape is reasonably fitted by the model. This indicates that the dust temperature distribution should be close to the real distribution. This is of particular interest as models with constant density predict that the heated dust close to the CS would produce more near-IR emission than we obtained with our model with the tailored density distribution. The optical line spectrum of Hen 2-108 (Fig. <ref>) shows that this is a low-excitation PN. A few determinations of the CS temperature (T_eff) are available in the literature. Values obtained from stellar atmosphere models are in the range of 32 to 39 kK. The Hei Zanstra temperature is comparable (26 kK). The Heii Zanstra temperature, however, shows a much higher value (50 kK), but as we discussed in Section <ref>, the Heii emission is dominated by the central star emission. We thus discarded the temperature derived from this line and explored models with T_eff in the range of 25 to 45 kK. We considered models with a large range of luminosities, from 100 to 10 000 L_⊙, which was progressively constrained in the modelling process. We assume that the CS emits as a blackbody, which is parametrized by the effective temperature (T_eff) and its bolometric luminosity (L_⋆). This approximation is sufficient for our purposes and for the final uncertainties expected. Using atmosphere models, for example, leads to differences in the line fluxes and derived PN parameters of the order of 5 to 10 percent <cit.>. The final model has a central star with T_eff = 40 kK and a luminosity of 1500 L_⊙. The ratio Heii λ4686/Hβ is strongly dependent on T_eff. Photoionization models show that the ratio Heii λ4686/Hβ should be more than one order of magnitude smaller than the Hen 2-108 observed value if the nebula is excited by a star with T_eff∼ 40 kK (see Appendix <ref>). This corroborates the conclusion in Section <ref> that the Heii λ4686 emission is produced by the central star. Our final model predicts a flux of only 0.02 percent of Hβ. As in the case of the Heii emission, other lines produced dominantly or strongly contaminated by the central source can only be used as upper limits and were not very useful to constrain the model. Naturally, the model does not reproduce those values as they are not of nebular origin. Examples are the λ4630 complex, where the lines are also complicated by being a strong blend of weak lines, as well as Nii λ5679 and Ciii λ5696, which are additionally weak and very uncertain. The [Siii] line also has important emission in the central region of the nebula, but it is likely of nebular origin. The central star spectrum extracted from the observations does not show this line and the synthesised spectrum does not predict this line to be significant. Its flux is very uncertain because of noise and contamination by the central star emission, so we did not use this line to constrain the model. Our model predicts that the [Siii] optical and infrared emission comes from the main nebula as seen in Fig.
<ref>. The spatial profile agrees very well with the observation for the main nebula (considering the uncertainty in this low-intensity emission). The central [Siii] emission not explained by the photoionization or the star atmosphere models is evidence that there should be other source of emission within that small region close to the CS. As we previously mentioned, our model is not expect to reproduce the emission from this region due to the matter distribution limitations. Line and dust continuum can be affected by this extra source. The photoionization model shows that the lines of [S IV] will be produced dominantly in the central region, while the dust thermal continuum emission in the 12 and 25 μm bands may have a significant contribution from the central nebular region, which explain why the modelled fluxes exhibit differences from the observations. The temperature of the gas obtained by the model is in reasonable agreement with the observations given the uncertainties involved. For the N diagnostic ratio, the model and observations agree within 6%. The O diagnostic ratio however is discrepant with the difference being larger than the uncertainty. The discrepancy can not be explained by a recombination line contribution to the [Oiii] 4363 Å or [Nii] 5754 Å lines as the model shows that these are under 2%. The discrepancy in the O diagnostic may be due to the [Oiii] 4363 Å line being very uncertain and a better signal to noise spectrum would be required to better constrain this. The discrepancy may also be due to the simplified structure adopted in the model. It is important to note that the high C/H abundance adopted was required to balance the gas heating and keep the gas temperature in a level that explain the emission line fluxes and the dust thermal emission. The abundance is, however, not realistic and should not be taken as an abundance estimate for this element. We explored different density structures, ionising spectra, dust quantities and elemental abundances, among other possibilities to find a solution to this issue. However we were unable to find a combination of parameters that reproduced all the constraints to a good level and also gave a more reasonable C/H abundance. Basically, we need an extra cooling agent to balance the heating from the significant dust quantity in this nebula, which is determined by its infrared dust emission. We opt to increase the C abundance to act as an extra cooling agent, as the C/H is not well constrained by the current available data. Another option would be significantly lowering the dust abundance, which would help the situation by reducing the nebular gas heating. However, this would also significantly reduce the dust infrared thermal emission to a level much lower than the observations shows. In both cases, there would be no change to our main results about the 3D structure of the nebula and the general characteristics of the central star, which are our main goals with this simplified model. Carbon should have an ADF factor of the same order than oxygen. However, as the model does not reproduce well the recombination line fluxes, we cannot estimate the real C/H abundance. Finally in Fig. <ref> we compare the photoionization model results and the PoWR model results derived in section <ref> to theoretical models for the evolution of post-asymptotic giant branch stars and central stars of planetary nebulae with metalicity Z=0.01Z_⊙ from <cit.>. 
From this comparison we see that the progenitor mass of the nebula CS is likely lower than 1.5 M_⊙. The luminosity difference between the PoWR result and that of the photoionization model is probably due to the simplified density structure adopted in the latter, and a more detailed 3D model is needed to constrain this further. § CONCLUSIONS A detailed study of the PN Hen 2-108 was carried out, which allowed us to better understand the characteristics of the nebula and its CS, as well as its evolutionary stage. The data allowed us to analyse the integrated spectrum of the nebula, as well as the spatially resolved emission maps. The integrated spectrum allowed us to obtain measurements from a larger number of emission lines than observed in previous studies. The emission maps give better details of the general ionization structure of the nebula. Emission lines of the usual elements were detected, such as H, He, O, N, S, but also lines of C, Cl, Fe and Ar. Among the lines in common with other works, we found a good agreement of the intensities. Abundances were obtained for He, N, O, Ar, S and Cl, which, in general, were consistent with previous results considering the uncertainties, except for N and O, which were higher. For O and N we were able to compute abundance discrepancy factors and obtained ADF(O/H) = 9 ± 3 and ADF(N/H) = 22 ± 15, respectively. Considerable ADF values such as these have been associated with binary systems in the centre of PNe <cit.>. The newly developed code satellite was also used to analyse the Hen 2-108 VIMOS data set. The satellite procedures allowed us to investigate and quantify spatial variations of diagnostics and abundances in 1D and 2D analysis. The angular analysis module revealed a valley in c(Hβ) for PAs of the pseudo-slits between 25 and 75 degrees, in contrast to T_e and N_e, which display no variation. Similarly, the chemical abundances were also found to be unchanged with PAs. On the other hand, the radial analysis module showed that c(Hβ) increases as a function of the radius from the CS, while T_e and N_e are nearly constant with an increase in T_e for radii < 1.2 arcsec. Our data also allowed us to obtain the spectrum of the CS, efficiently discounting the nebular emission. With our medium resolution spectrum, we found that all stellar absorption features in the optical spectrum appear to be wind lines and specifically the Hβ and He i λ5876 lines have a P Cygni line profile. The CS spectrum also shows a widening of the Hα line, which is characteristic of stars with strong winds resulting from advanced stages of mass loss. The spectrum was then modelled with the Potsdam Wolf-Rayet (PoWR) code for expanding atmospheres to infer the stellar parameters. The detailed model fit allowed us to classify the CS as a star of type [Of/WN8], making it the newest member of a rare group of stars for which only 3 Of-WR(H), 8 [WN] and 11 [WR] are currently confirmed as listed in <cit.>. The CS H abundance of 0.50^+0.24_-0.10 by mass, together with the low value C/O=0.22 obtained from the nebular abundance analysis, suggests that it is not a result of a very Late Thermal Pulse (VLTP) or an LTP, since the significant H abundance and low C/O ratio in the nebula of “born-again” stars is not predicted by models <cit.>. Although no specific evolutionary pathway has yet been found for this type of star, it could be the result of an AGB final thermal pulse (AFTP) <cit.> or binary-induced mass exchange or a WD merger as mentioned by <cit.> for the similar [WN] type.
The ADF results we obtain for Hen 2-108, and its connection to binarity would favor the latter scenario, although further studies are needed to confirm this. We have built a pseudo-3D photoionization model that provided us with a better understanding of the object's matter distribution, ionization structure and the central source. The model was able to reproduce most of the observational constrains available to a reasonable level. The ionizing source is a low effective temperature CS (40 kK) with a luminosity of 1500 L_⊙. We note that the temperature of the photoionization model's ionizing source agrees with the WN star spectrum that best reproduces the Hen 2-108 CS spectrum. The photoionization model puts the nebula around 4 kpc from us, in agreement with GAIA parallax data. The model uses spherical symmetry, but the radial matter distribution showed in Fig. 13 was necessary to better fit the observations. The main nebula has a low-density (n_H =260 cm^-3) cavity, with an outer shell with a density of n_H =1400 cm^-3. The total gas mass of the main nebula is 0.19 M_⊙ and its size is 0.1 pc. The dust-to-gas mass ratio is 6.6×10^-3, resulting in a total dust mass of 2.3×10^-4 M_⊙. Asymmetries in the observed profiles with respect to the models indicate that the nebula is not perfectly spherical. Our analysis and modelling efforts also indicates that the emission excess of some lines seen in the central pixels should be produced in an open geometry shell. The main limitation of the model is the necessity of a very high C/H abundance to better reproduce the temperature of the gas. The fact that the empirically determined ionic ratios of C and O are of the same order indicates that these two elements should have a similar ADF. However, as the model does not reproduce well the recombination line fluxes, we cannot estimate the real C/H abundance. Comparing the central source temperature and luminosity obtained from the PoWR and photoionization models to evolutionary tracks of post-asymptotic giant branch stars and central stars of planetary nebulae from <cit.>, we can infer that the progenitor star has a mass no larger than 1.5M_⊙. This result is consistent with the low C/O abundance ratio we find from the observations, which is typical of low mass stars. More detailed high resolution, high signal to noise spectra and three dimensional photoionization models are also required to better constrain the stellar parameters. § ACKNOWLEDGEMENTS Based on observations collected at the European Southern Observatory under ESO programme 079.D-0117(A). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001 (B.L.M.M. and I.A.). This work has made use of the computing facilities available at the Laboratory of Computational Astrophysics of the Universidade Federal de Itajubá (LAC-UNIFEI). The LAC-UNIFEI is maintained with grants from CAPES, CNPq and FAPEMIG. SA acknowledges support under the grant 5077 financed by IAASARS/NOA. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the VizieR service was published in A&AS 143, 23. This research has made use of NASA’s Astrophysics Data System. This publication has benefited from a discussion by a meeting sponsored by the International Space Science Institute (ISSI) at Bern, Switzerland. 
§ DATA AVAILABILITY The VIMOS Hen 2-108 observations are publicly available in the ESO Archive (<http://archive.eso.org/eso/eso_archive_main.html>). The photoionization model was generated with publicly available codes and all the parameters necessary to run them are provided. The data sets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. § SPITZER SPECTRA OF HEN 2-108 Figure <ref> shows the low-resolution Spitzer Space Telescope mid-infrared spectrum of Hen 2-108. The spectrum was obtained from the Cornell Atlas of Spitzer/IRS Sources <cit.>. § HEII LINE DEPENDENCE ON CS TEMPERATURE Fig. <ref> shows the strong dependence of the Heii/Hβ intensity ratio on T_⋆. The ratios were calculated for a grid of Cloudy models covering a wide range of stellar and nebular parameters, which represents the range observed for PNe. The thickness of the blue curve is due to the superposition of curves for different parameters. The Heii λ4686/Hβ ratio observed for Hen 2-108 (T_eff∼ 40 kK) should be more than one order of magnitude smaller to be of nebular origin. To produce Fig. <ref>, we use a grid of Cloudy models for T_eff = 30-200 kK, L_bol = 10^2-10^4 L_⊙, and n_H = 10^2-10^5 cm^-3. We include models with solar abundances and also models with the abundances of the more abundant species varied by a factor of 5 around solar. The models assume an MRN dust size distribution, with a dust-to-gas ratio between 0.1 and 10 times that of the ISM (3.3×10^-3). We only show models with graphite dust, but tests showed that the values in the plots are similar for silicate dust as well. Spherical symmetry is assumed and all the models are ionization bounded. All the models run are plotted in the figure as curves that vary only T_eff, while keeping the other parameters constant.
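For illustration, the parameter grid described above can be enumerated as follows; this is only a sketch (the step sizes are assumptions, since only the parameter ranges are given in the text, and the actual Cloudy/pyCloudy model setup is omitted):
[language=Python, numbers=none]
# Sketch of the model grid behind the Heii/Hbeta vs. T_eff curves.
import itertools
import numpy as np

T_eff = np.arange(30e3, 200e3 + 1, 10e3)        # K (assumed step of 10 kK)
L_bol = 10.0 ** np.arange(2.0, 4.1, 0.5)        # L_sun (assumed step of 0.5 dex)
n_H   = 10.0 ** np.arange(2.0, 5.1, 1.0)        # cm^-3 (assumed step of 1 dex)
dust_to_gas = [0.1 * 3.3e-3, 3.3e-3, 10 * 3.3e-3]

grid = list(itertools.product(T_eff, L_bol, n_H, dust_to_gas))
print(f"{len(grid)} ionization-bounded, spherically symmetric models to run")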
http://arxiv.org/abs/2306.07995v1
20230612161832
Semantic-Based Neural Network Repair
[ "Richard Schumi", "Jun Sun" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.SE" ]
Singapore Management University Singapore [email protected] Singapore Management University Singapore [email protected] Recently, neural networks have spread into numerous fields including many safety-critical systems. Neural networks are built (and trained) by programming in frameworks such as TensorFlow and PyTorch. Developers apply a rich set of pre-defined layers to manually program neural networks or to automatically generate them (e.g., through AutoML). Composing neural networks with different layers is error-prone due to the non-trivial constraints that must be satisfied in order to use those layers. In this work, we propose an approach to automatically repair erroneous neural networks. The challenge is in identifying a minimal modification to the network so that it becomes valid. Modifying a layer might have cascading effects on subsequent layers and thus our approach must search recursively to identify a ”globally” minimal modification. Our approach is based on an executable semantics of deep learning layers and focuses on four kinds of errors which are common in practice. We evaluate our approach for two usage scenarios, i.e., repairing automatically generated neural networks and manually written ones suffering from common model bugs. The results show that we are able to repair 100% of a set of randomly generated neural networks (which are produced with an existing AI framework testing approach) effectively and efficiently (with an average repair time of 21.08s) and 93.75% of a collection of real neural network bugs (with an average time of 3min 40s). Semantic-Based Neural Network Repair Jun Sun July 31, 2023 ==================================== § INTRODUCTION Artificial intelligence (AI) is based on imitating natural intelligence or learning behaviour with a machine. The method called deep learning (DL) employs artificial neural networks to simulate the neurons of brains <cit.>. In recent years, DL has advanced into numerous fields, like language processing, face and speech recognition, due to improvements in computer hardware <cit.>. Although AI systems have been successfully used in many domains, they are not failure-free. Especially in safety critical applications, like autonomous driving, or medical systems, even minor bugs can have severe consequences. Studies of bugs in AI systems <cit.> have analysed various posted bug reports (or issues), and the large number of identified posts illustrates the high frequency of such bugs. These studies evaluated the time it took from posting a bug until it was resolved and the time ranges from weeks to months depending on the type of the issue. Moreover, there are other factors that make it especially difficult and time-consuming to debug AI systems. Neural networks can have a long training time of days or weeks, which makes it cumbersome to evaluate potential ways of fixing a neural network. Often the error messages that occur during AI development can be unrelated to the actual issue that needs to be fixed <cit.>, the error messages can be inconsistent, or there can even be hidden issues that do not produce error messages <cit.>. Another difficulty comes from the underlying AI frameworks such as TensorFlow or PyTorch. These frameworks are still being rapidly developed, and a new release may not be backward compatible and can break existing code <cit.>. Furthermore, the provided documentation can sometimes be vague or buggy <cit.>, which makes it hard for developers to use unfamiliar AI components. 
In order to make it easier for developers to fix and prevent such bugs, it is important to be able to systematically (and automatically) test and repair the underlying neural networks or deep learning models. Thus, in this work, we present a novel semantic-based neural network repair approach that helps AI developers fix common errors in the architecture and with the parameters of AI models. Designing AI or deep learning models is a cumbersome task that requires a lot of expertise, domain knowledge, and effort. AI frameworks provide dozens of deep learning layers that can perform simple mathematical functions or complex operations, like different convolutions or recurrent layers. Most of these layers have preconditions, e.g., for the input data or for the parameters. Building a valid model requires the fulfilment of all the preconditions from all layers of the model, which can be challenging since layers can not only be combined in sequences, but in general can be in the form of directed acyclic graphs. Given such a graph, every connection can cause potential precondition violations, and identifying them can be cumbersome. Moreover, the debug information that is provided by AI frameworks can be imprecise or inconsistent <cit.>. Due to that, the process of developing a new model can be time-consuming and frustrating. With our approach, an AI developer is provided with automatically generated changes that can repair the model. The changes are presented in a list, which is ordered based on a change indicator value that reflects the number (and magnitude) of changes that are required. Our approach facilitates and speeds up the time-consuming task of finding a valid model architecture and consistent model parameters. This can be especially helpful for new AI developers, who still lack the necessary skills to fix such issues on their own. Moreover, our approach can support other AI techniques, like AutoML <cit.> where, e.g., a model architecture can be automatically derived for a given problem, or automatic AI model generation <cit.>, which can be used in various scenarios, like AI framework testing. For both these techniques, it can be hard to find valid model architectures, especially if randomness is involved. To avoid generating invalid models, existing AutoML or AI framework testing approach often rely on a limited set of models defined by templates. By repairing generated models, unnecessary pruning of the search space can be avoided, which might benefit both these techniques. Our approach relies on an existing semantics called ExAIS <cit.> which defines the functionality of almost all TensorFlow layers in the logical programming language Prolog. ExAIS contains a number of preconditions that produce debug messages and enable the identification of model bugs. Based on these messages, we apply multiple algorithms to fix different types of errors. For example, for a dimension error (which occurs when the input does not have the required number of dimensions), a simple fix strategy could be to introduce a reshape layer. However, it is usually not as easy to repair a neural network with just a single change, because in most cases there need to be further modifications in the following layers as a consequence, i.e., structural changes like that can also cause invalid weight shapes deeper in the network. Moreover, there are usually multiple potential fixes for a bug, and it can be challenging to find the best repair in different scenarios. 
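As a generic illustration of such a bug and of the reshape-based repair discussed above (this example uses the TensorFlow Keras API and is not taken from our evaluation set), consider a Conv2D layer that receives two-dimensional input from a Dense layer:
[language=Python, numbers=none]
import tensorflow as tf
from tensorflow.keras import layers, models

# Buggy model: Conv2D expects 4D input (batch, h, w, channels), but the Dense layer
# only produces (batch, 64) -> building this model raises a shape/dimension error.
# buggy = models.Sequential([
#     layers.Dense(64, input_shape=(100,)),
#     layers.Conv2D(8, kernel_size=3),          # <- dimension error here
# ])

# Repaired model: a Reshape layer lifts the 2D tensor to the 4D shape Conv2D needs.
# Note the cascading effect: every layer after the fix now sees a different shape.
repaired = models.Sequential([
    layers.Dense(64, input_shape=(100,)),
    layers.Reshape((8, 8, 1)),                  # 64 values rearranged, none added or lost
    layers.Conv2D(8, kernel_size=3),
    layers.Flatten(),
    layers.Dense(10),
])
repaired.summary()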
There is already an existing tool called Tensfa <cit.> that can automatically repair dimension errors or more generally tensor shape faults, which are also bugs that we are targeting with our method, but in contrast our approach can also repair different types of bugs and it works more straight forward as we will explain later. To sum up, the main contributions of our work are: * We present a novel semantic-based AI model repair approach that is able to suggest multiple model changes that can fix an invalid AI model. * We collect a set of real bugs from AI developers in order to evaluate our approach in a realistic setting. * Additionally, we performed an evaluation with randomly generated models in order to test our approach for a diverse set of models and bugs. Structure. The rest of the paper is structured as follows. In Sect. <ref>, we introduce necessary background information, e.g., about the semantics that we apply. In Sect. <ref>, we present the underlying algorithms of our repair approach in detail. In Sect. <ref>, we show an evaluation with two usage scenarios. Lastly, we review the related work in Sect. <ref> and conclude in Sect. <ref>. § BACKGROUND In this section, we describe the technologies that will be supported by our repair approach as well as the underlying semantics. §.§ AI Framework Testing While our primary goal is to support programmers when they are developing AI models manually, another interesting application that can be supported by our approach is AI framework (such as TensorFlow and PyTorch) testing. There is an active line of research on automatic testing of AI frameworks <cit.>. These works are motivated by the fact that bugs in AI frameworks might potentially affect all AI applications that are built with such frameworks. Due to that and also due to the fact that AI frameworks can still suffer from severe bugs <cit.>, it is important to thoroughly and systematically test them. Existing AI framework testing techniques can be categorized into the following groups: differential testing <cit.> and metamorphic testing <cit.>. Differential testing is a technique that compares multiple frameworks (or implementations) against each other, and bugs are discovered when there is an inconsistency. Metamorphic testing for AI frameworks usually introduces changes in the input data for the AI model that should not affect the output, and a different output (prediction result) would suggest an issue in the AI framework. For both these techniques, it is important to have a wide variety of valid AI models in order to test different aspects of the AI frameworks. Our repair approach can support the generation of such models by enabling effective random model generation that would normally produce a high percentage of invalid models or by allowing other approaches to explore more complicated scenarios that might often be dismissed due to difficulties in finding a working model. §.§ AutoML Another application that can be supported by our repair approach is Automatic machine learning (AutoML) <cit.>. AutoML, which is also called neural architecture search (NAS) in the deep learning context, is a term for methods that try to reduce the need for manual model building for various learning tasks, like image/object recognition, or language modelling. 
Developing AI systems usually requires a lot of expertise and effort, e.g., for data pre-processing tasks, like feature engineering and to find a deep learning model, i.e., to choose a model architecture and to tune the model in order to achieve an acceptable accuracy. There has been a lot of progress with AutoML techniques, which can potentially take over such tasks and sometimes produce models with higher accuracies compared to hand-crafted models from experienced AI developers <cit.>. There are many AutoML approaches that try to find model architectures and tune parameters with different strategies. Depending on the search strategy, some AutoML approaches can produce invalid models that have to be discarded <cit.> which hinders the performance of AutoML. To avoid generating a large number of invalid models, AutoML approaches usually apply only a limited set of predefined layers or groups of layers (a.k.a. cells) to build neural networks based on knowledge from manually developed networks, which might hinder the discovery of architectures that outperform existing ones <cit.>. With our approach, we can support a variety of layers and neural network architectures. Hence, we believe that our semantic-based repair method can help to enable AutoML approaches to explore an extended search space. §.§ ExAIS Our technique utilises an executable AI semantics called ExAIS <cit.> that is written in the logical programming language Prolog <cit.>. Prolog is a declarative language that relies on first order logic. Programs in Prolog are built with rules and facts, which are usually concerned with mathematical relations. The declarative nature of the language facilitates the creation of high level semantics. Moreover, it supports various list operations and mathematical expressions, which makes it convenient for specifying deep learning layer behaviour that is often concerned with high dimensional input and mathematical operations. Listing <ref> shows the Prolog semantics of a Dense layer <cit.> of ExAIS. It is a standard densely connected layer, which contains a number of nodes, each of which is connected with all inputs. The output is computed as the sum of all inputs (multiplied with the weights) at each node with an added bias value. The specification works as follows. The rule starting in Line 1 has multiple parameters: a list [I|Is], a weight array |IWs|, and a bias array |Bs| (which can be intuitively regarded as inputs) and a list |Os| (which can be regarded as the output). The list notation [I|Is] enables access to the first element I and the remaining elements Is of a list. Line 2 constrains the depth of the nested input to two. We handle higher dimensional input separately. Line 3 applies a predicate that is true when O can be unified as the expected output for an input I, and Line 4 is a recursive constraint, which intuitively continues with the next inputs. The rule in Lines 6–9 is similar, except that it handles (layer) inputs with a higher dimension, which is checked in Line 7, and recursively uses the initial predicate from Line 1 since the dense layer only performs computations in the innermost list even when it receives high dimensional input data. Line 11 (and Line 18) are the base cases for the recursion, i.e., when only an empty list remains. The predicate in Line 13 encodes the main layer functionality and becomes true when the |Res| variable is the expected output for the input [I|Is]. 
It has the same arguments as the first rule and an additional temporary variable |Res0| for the result. It consists of clauses for multiplying the weight arrays |IW| with each input |I| and for adding the results in Line 16. The predicates |multiply_list_with| and |add_lists| are straightforward and are therefore omitted. [language=Prolog, xleftmargin=15pt, float=tp, caption=Prolog semantics of the Dense layer <cit.>., label=lst:dense,deletekeywords=is, morekeywords=dense_layer,dense_node_comp,depth,add_lists,multiply_list_with,padding1D] dense_layer([I|Is], IWs, Bs, [O|Os]) :- depth([I|Is],2), dense_node_comp(I, IWs, Bs, O), dense_layer(Is, IWs, Bs, Os). dense_layer([I|Is], IWs, Bs, [O|Os]) :- depth([I|Is],D), D > 2, dense_layer(I, IWs, Bs, O), dense_layer(Is, IWs, Bs, Os). dense_layer([], _, _, []). dense_node_comp([I|Is],[IW|IWs],Res0,Res) :- multiply_list_with(IW,I,Res1), add_lists(Res0,Res1,Res2), dense_node_comp(Is,IWs,Res2,Res). dense_node_comp([],[],Res,Res). With this Prolog semantics, we can now answer a variety of queries, e.g., to compute the expected output of a Dense layer. More relevantly, ExAIS contains preconditions that reflect layer requirements, like a specific input shape, or dependencies between the arguments. An example precondition to check if layer input data has a minimum number of dimensions is illustrated in Listing <ref>. The predicate check_min_dimensions takes the input data and a minimum dimension value as arguments. Line 2 shows a predicate that becomes true, when D1 can be unified to the dimension number of Is. Next, there is a condition to check if the dimensions of the input are smaller than the given minimum value. If it is smaller, then an error message is produced. Otherwise, the predicate becomes true. [language=Prolog, xleftmargin=15pt, float=tp, aboveskip=-6pt,belowskip=-6pt, caption=Precondition to check if the input data has a minimum number of dimensions., label=lst:precondition,deletekeywords=is, morekeywords=dense_layer,dense,dense_node_comp,depth,add_lists,multiply_list_with,padding1D,check_min_dimensions,writeln,shape,term_string,string_concat,throw] check_min_dimensions(Is, D) :- depth(Is,D1), (D1 < D ->(write("Invalid Model, Badness Value: "), BV is D1-D,BV1 is BV*100000000000000000, writeln(BV1), S1 = "Dimension error, Input Shape ", shape(Is,Shape), term_string(Shape,S2), string_concat(S1,S2,RS), S3 = ", Expected Min Dimensions ", string_concat(S3,D,RS1), string_concat(RS,RS1,S), throw(S));true). Most layers of ExAIS contain preconditions in this form. The preconditions are part of the ExAIS semantics and were created manually by the ExAIS developers according to the TensorFlow documentation and according to other publications that describe the layer functionality and requirements. When an AI model is executed with the semantics, then all the preconditions of the individual layers are checked. Any violation of the preconditions would make the model invalid. Hence, the preconditions can help identify problematic model aspects. In this work, we utilize this feature to enable our automated AI model repair approach. The execution of the Prolog predicates works similarly to the execution of functions. There is a predicate for each layer, which contains precondition calls and calls to subpredicates, which enables an automatic precondition check during the layer execution. 
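The following Python sketch mirrors this behaviour for the Dense layer; it is only an analogue for illustration, since ExAIS performs these checks in Prolog, and the message format and badness weighting shown here are simplified assumptions:
[language=Python, numbers=none]
import numpy as np

class PreconditionError(Exception):
    def __init__(self, message, badness):
        super().__init__(message)
        self.badness = badness      # distance-to-valid metric used to rank repair attempts

def check_min_dimensions(x, min_dims):
    if x.ndim < min_dims:
        raise PreconditionError(
            f"Dimension error, input shape {x.shape}, expected min dimensions {min_dims}",
            badness=(min_dims - x.ndim))

def dense_layer(x, weights, bias):
    check_min_dimensions(x, 2)                       # precondition checked on execution
    if weights.shape[0] != x.shape[-1]:              # weight/input consistency precondition
        raise PreconditionError(
            f"Weight shape {weights.shape} inconsistent with input shape {x.shape}",
            badness=abs(weights.shape[0] - x.shape[-1]))
    # the actual layer semantics: only the innermost dimension is transformed
    return np.tensordot(x, weights, axes=([-1], [0])) + bias

out = dense_layer(np.random.rand(4, 8, 3), np.random.rand(3, 5), np.zeros(5))
print(out.shape)   # (4, 8, 5)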
There might be layer preconditions that are not specified in ExAIS and their violations would not be repairable by our approach, but since we tested thousands of random models with our approach, we are confident that we can resolve precondition violations in most cases. The semantics consists of 65 deterministic layers and seven non-deterministic layers. We focus on repairing deterministic layers in this work. Only six of the 65 layers have no preconditions (i.e., the Flatten layer, ReLU, ThresholdedReLU, LeakyReLU, Masking, TimeDistributed). Seven of the layers have simple preconditions (UpSampling1D-3D, ZeroPadding1D-3D, Embedding), i.e., they only require input with a certain number of dimensions. The remaining layers have non-trivial preconditions that mainly fall in the following categories. First, there are consistency requirements between the layer arguments (or inputs) and the weight shape, e.g., for a dense layer the first dimension of the weight needs to have the same size as the last dimension of the input. Second, we may have inconsistencies among the shapes of multiple inputs of a layer, i.e., some layers perform mathematical operations, like addition or multiplication of multiple layer inputs. These layers may require that the shape of the inputs must be the same. Third, there can be consistency requirements between the layer arguments or with the layer arguments and the inputs. For example, for some convolutional layers setting a dilation_rate value not equal to one is incompatible with specifying any stride value not equal to one. During our investigations, we noticed that there are various studies that evaluate AI faults (based on bug reports) and that some of them are related to violations of the layer preconditions of ExAIS and can thus be detected by its precondition checks. For example, one study <cit.> investigated bug reports and it illustrates data and layer dimension errors, which make up close to 30% of the findings. Moreover, the study showed a broad categorisation of the manual repairs that were suggested in the bug reports. Another study <cit.> discusses Tensor shape errors that are related to input shape violations that can be captured with ExAIS's preconditions. The source of most bugs of those studies was not directly related to a misuse of layers or a wrong network architecture, which made them inapplicable for our approach, but their frequency highlights the significance of such bugs. Additionally, we performed experiments with randomly generated models, and collected bug reports from stackoverflow as we will explain in Section <ref> and in both cases we observed bugs that are related to precondition violations of ExAIS. Hence, we believe that bugs based on the precondition violations are relevant and that it is important to provide better ways to fix such bugs. Based on the three categories of preconditions and the simple dimension preconditions, we developed repair algorithms for the issues that are identified with the preconditions. We explain these algorithms in the following section. § METHOD In this section, we describe how we fix specific bugs and how our repair algorithm for AI models works in detail. We consider a repair to be valid if it removes TensorFlow errors (that occur during the model execution) with minimal adjustments. We created our repair suggestions with the intention to preserve the original model as much as possible, i.e., we were looking for minimal model adjustments that maintain the layers and most of the structure of the model. 
The motivation behind this is the fact that developers usually make small mistakes <cit.>. For simple cases, like dimension errors, our repairs follow the best practice, as suggested by the accepted solutions from bug reports. For more complicated cases, like argument or shape issues, we developed repairs that only utilise standard layers <cit.> (or argument modifications). There would be various other repair options if more deep learning operations, like NumPy functions <cit.>, would be considered. We believe that fixes with standard layers are well suited since they are straightforward, easier to understand, and many AI developers stick to these layers. Generally, it is hard to justify the quality of more complicated repairs without an additional training and model validation step. We intend to further explore these steps in future work. Overall, we believe that our repairs are reasonable, especially since they often were equivalent to accepted solutions for real bugs as we will explain later in Section <ref>. To illustrate the repair of specific model bugs, we present our approaches for fixing common errors that are related to the misuse of standard deep learning layers, i.e., dimension, input shape, and argument errors. Bugs that originate from other AI development tasks, like the data preprocessing, training, or model validation, are out of the scope of this work. Table <ref> gives a simplified overview of our repair approaches for these errors and shows example fixes. For dimension errors, there are two cases: the expected number of dimensions is larger than the actual number of dimensions, or the opposite. For the first case, we check if the problematic layer can be replaced with a higher dimensional version (e.g., Conv2D with Conv3D). Alternatively, this case can be repaired by adding a reshape layer with dimension size one before the layer with the bug. A reshape layer can modify the shape of the input data, while it still keeps the same number of values, i.e., the product of the dimension sizes will be the same after a reshape. The second case, i.e., the expected number of dimensions is smaller than the actual dimension number, can be handled similarly. A layer can be replaced with a smaller dimensional version, but in contrast to the previous case, the reshape works differently. In order to obtain a smaller dimension number, a new shape is computed by combining the last two dimensions of the given input (by multiplying them). Both these repair options are considered for our algorithm. The final minimal suggested fix is determined based on which of the two fixes leads to a smaller overall change. Next, for input shape bugs, there are also two potential repairs. Inconsistent input shapes can occur when a layer that takes multiple inputs (e.g., Add) has incompatible inputs with different shapes or dimension sizes. One way to resolve this bug, is to add padding around one of the inputs, i.e., with a ZeroPadding layer that increases the input space and adds zeros around the given data. An example repair with such a layer is illustrated in the second row of Table <ref>. It shows a graph model with an Add layer that has two ReLU layers with different shapes as input. The fixed model has an additional ZeroPadding1D layer that is in-between the ReLU layer with the smaller input and the Add layer. A padding layer can resolve most of such bugs, but not all, since there are only padding layers that take three to five-dimensional inputs. 
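A minimal Keras sketch of the padding-based repair described above (a generic example, not one of the models from our evaluation) looks as follows:
[language=Python, numbers=none]
import tensorflow as tf
from tensorflow.keras import layers, models

inp_a = layers.Input(shape=(10, 4))
inp_b = layers.Input(shape=(8, 4))

a = layers.ReLU()(inp_a)                       # shape (None, 10, 4)
b = layers.ReLU()(inp_b)                       # shape (None, 8, 4)

# layers.Add()([a, b])                         # <- inconsistent input shape error

b_padded = layers.ZeroPadding1D(padding=(1, 1))(b)   # pads 8 -> 10 along the time axis
added = layers.Add()([a, b_padded])            # shapes now match: (None, 10, 4)

model = models.Model([inp_a, inp_b], added)
model.summary()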
Alternatively, if the dimension is outside this range, the shape mismatch can be fixed with a Concatenate layer. This layer can adjust the input shape by combining a certain dimension of the input with additional values or arrays of values, which enables our repair approach to change the input shape according to the requirements of the invalid layer. Thirdly, for argument errors, there are also a number of possible repairs. Most of these errors can be repaired by regenerating the layer arguments, i.e., by randomly exchanging the arguments of the layer with random values. This procedure works step by step, first a replacement of a single argument is tried and if this is not successful then more arguments are exchanged. A special case of an argument error is a weight shape bug that occurs when the weight data does not match with other arguments/input data of the layer or when it has a wrong dimension number. This case can be repaired by regenerating the weights of the layer while considering the other layer arguments and the input shape. Another case that needs to be handled separately are pool (or kernel) shapes that are inconsistent with the input shape, because they are too large. These bugs can be fixed by adopting the pool (or kernel) shapes or by regenerating other layer arguments, like padding. There are more special cases like this that have their own preconditions and need some specific approaches to be repaired, which were omitted for brevity. There are two example layers given for this bug type as illustrated in the last row of Table <ref>. First, there is a Cropping1D layer, that applies too much cropping, which would result in an empty output. The bug is fixed by reducing the cropping size values. [t] Pseudo code of the repair algorithm. Another example shows a pool shape bug in a MaxPooling2D layer that has a pool shape that is too large for the specified input shape. A fix for this bug is produced by replacing the pool sizes with smaller values. It should be noticed that in reality such bugs are much harder to find since the models are larger, have many arguments, and the input and output shapes of layers are often not easy to see. Moreover, a model can usually not just be repaired with a singular change of a layer since in many cases there need to be further adjustments deeper in the neural network. For example, argument regenerations often change the output shape of a layer, which can cause inconsistencies with the weight or shape requirements in the next layers. The following simple example model illustrates this cascading behaviour. [language=Python, numbers=none, basicstyle=] C1 = Conv1D(2,input_shape=(8,1,),kernel_size=2,dilation_rate=3) C2 = Conv1D(2,input_shape=(16,1,),kernel_size=2,dilation_rate=3,strides=3) S = Subtract()([C1, C2]) It shows a Subtract layer that has two Conv1D layers as inputs. The model seems to be valid at a first glance. Both Conv1D layers would produce the same output shape, since the stride argument (that specifies the step size with which the kernel is moved over input data) offsets the larger input shape of the second convolution. However, a stride value greater than one is incompatible with a dilation_rate value greater than one. (A dilation_rate can be specified to expand a kernel with zero values.) A regeneration of the arguments to fix this violation, e.g., by replacing the stride or dilation_rate value, will always change the output shape. 
As a consequence, there will be an inconsistent input shape error at the Subtract layer, which needs to be fixed as shown in Table <ref>. The overall repair approach that incorporates the specific fixes (from Table <ref>) is outlined in Algorithm <ref>. It consists of two major functions. A function that tries to find a singular working fix with a minimal change compared to the original model, and a function that returns a number of potential working fixes by applying the first function. The first function 𝑓𝑖𝑛𝑑𝐹𝑖𝑥𝑊𝑖𝑡ℎ𝑀𝑖𝑛𝑖𝑚𝑎𝑙𝐶ℎ𝑎𝑛𝑔𝑒 takes a random object, a model to be fixed, an error object, and a maximum number of fixes that should be considered as input. In Lines 1–2, we initialise a set 𝑤𝑜𝑟𝑘𝑖𝑛𝑔𝐹𝑖𝑥𝑒𝑠 and a counter 𝑓𝑖𝑥𝑖𝑛𝑔𝐶𝑜𝑢𝑛𝑡. Then, there is a do-while loop that continues until there is a working fix, or until the maximum number of allowed fixing attempts is reached. Within the loop, the 𝑒𝑟𝑟𝑜𝑟𝑆𝑝𝑒𝑐𝑖𝑓𝑖𝑐𝐹𝑖𝑥𝑒𝑠 function (Line 5) is called to receive the potential fixes for a given error. For each of the fixes, we apply our 𝑆𝑒𝑚𝑎𝑛𝑡𝑖𝑐𝐻𝑒𝑙𝑝𝑒𝑟 to check if it produced a valid model. The 𝑆𝑒𝑚𝑎𝑛𝑡𝑖𝑐𝐻𝑒𝑙𝑝𝑒𝑟 is a wrapper class that helps with the execution of ExAIS to make a prediction for a given model, by returning a success message or an error object. If it returns an error, then we check if the error helped to improve the model, i.e., with the help of a badness value (Line 8), and try to further repair the model by recursively calling the function. The badness value is calculated by preconditions of ExAIS and returned with an error message, and it works similar as fitness in search-based software testing <cit.>. It is a distance metric that is larger when the layer arguments and inputs are far from being valid, i.e., the layer preconditions are ranked based on severity and a value is calculated by taking into account the difference of an observed and an expected argument value and by multiplying it with a severity factor <cit.>. Finally, in Line 13, a helper function is used that sorts our working fixes based on a change value that indicates the similarity to the original model, and return the fix with the smallest change. The change value is another distance metric that is calculated by considering the number of layer arguments, the layer replacements and additions that are required to repair the original model. We consider an argument modification the smallest change (change value 1), followed by a layer replacement (change value 5), and the largest changes are layer additions (change value 10). The second function 𝑓𝑖𝑛𝑑𝐹𝑖𝑥𝑒𝑠 has the same arguments, and also initializes a set for the fixes. Line 16 applies the 𝑒𝑟𝑟𝑜𝑟𝑆𝑝𝑒𝑐𝑖𝑓𝑖𝑐𝐹𝑖𝑥𝑒𝑠 function to obtain potential fixes for the specified error. Then, for each of these fixes, we check if it is working with the 𝑆𝑒𝑚𝑎𝑛𝑡𝑖𝑐𝐻𝑒𝑙𝑝𝑒𝑟. If it is not, then we apply the 𝑓𝑖𝑛𝑑𝐹𝑖𝑥𝑊𝑖𝑡ℎ𝑀𝑖𝑛𝑖𝑚𝑎𝑙𝐶ℎ𝑎𝑛𝑔𝑒 to recursively find a working fix. The fix is added to the fix set, which is sorted and returned at the end of the function. The difference of this function to the first one is that it produces a number of potential fixes instead of a single one. 
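To clarify how the search in Algorithm <ref> proceeds, the following Python sketch mirrors the 𝑓𝑖𝑛𝑑𝐹𝑖𝑥𝑊𝑖𝑡ℎ𝑀𝑖𝑛𝑖𝑚𝑎𝑙𝐶ℎ𝑎𝑛𝑔𝑒 function. The Candidate class, the error_specific_fixes generator and the run_semantics wrapper are assumed interfaces that stand in for our Java implementation and the 𝑆𝑒𝑚𝑎𝑛𝑡𝑖𝑐𝐻𝑒𝑙𝑝𝑒𝑟; it illustrates the control flow and the change-value ranking rather than the actual code:
[language=Python, numbers=none, basicstyle=]
from dataclasses import dataclass, field

# Change-value weights taken from the text: argument modification = 1,
# layer replacement = 5, layer addition = 10.
CHANGE_COST = {"argument": 1, "replacement": 5, "addition": 10}

@dataclass
class Candidate:
    model: object                                        # the (partially) repaired model
    modifications: list = field(default_factory=list)    # e.g. ["argument", "addition"]

    @property
    def change_value(self):
        return sum(CHANGE_COST[m] for m in self.modifications)

def find_fix_with_minimal_change(model, error, error_specific_fixes,
                                 run_semantics, max_attempts=10):
    """Return the working candidate closest to the original model, or None.

    error_specific_fixes(model, error) yields Candidate objects (Table-style
    repairs); run_semantics(model) executes the model with ExAIS and returns
    (ok, new_error, badness). Both are assumed interfaces for this sketch, and
    error.badness is the badness value reported by the precondition checks.
    """
    working, attempts = [], 0
    while not working and attempts < max_attempts:
        attempts += 1
        for cand in error_specific_fixes(model, error):
            ok, new_error, badness = run_semantics(cand.model)
            if ok:
                working.append(cand)
            elif badness < error.badness:   # the fix moved the model closer to validity
                deeper = find_fix_with_minimal_change(
                    cand.model, new_error, error_specific_fixes,
                    run_semantics, max_attempts - attempts)
                if deeper is not None:
                    deeper.modifications = cand.modifications + deeper.modifications
                    working.append(deeper)
    return min(working, key=lambda c: c.change_value) if working else None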
[language=Python, xleftmargin=15pt, float=tp, caption=Real model with a bug from stackoverflow <cit.>., label=lst:modelbug] import tensorflow as tf from tensorflow.keras import layers, models model_valid = tf.keras.Sequential([ layers.Flatten(input_shape=(10,)), layers.Dense(16, activation='relu'), layers.Conv1D(16, kernel_size=(2), activation='relu', padding='same'), layers.MaxPooling1D(pool_size=(4), strides=3, padding='valid'), layers.Flatten(), layers.Dense(1, activation='softmax') ]) [language=Prolog, xleftmargin=15pt, float=*, caption=ExAIS Prolog code of a neural network with a bug., label=lst:prologbug,deletekeywords=is, morekeywords=dense_layer,flatten_layer,conv1D_layer,max_pool1D_layer,exec_layers] LFla71197 = flatten_layer([[1.2079020849786344, 1.449234775683509, ...]], Fla71197), LDen79959 = dense_layer(Fla71197, [[0.6039479770827674, 0.410518536377564, ...]], Den79959), LCon60545 = conv1D_layer(Den79959, 2,[[[0.6711,...]]],[...], 1, true, 1, Con60545), LMax46787 = max_pool1D_layer(Con60545, 4, 3, false, Max46787), LFla21960 = flatten_layer(Max46787, Fla21960), LDen25740 = dense_layer(Fla21960, [[0.6182226038831956, ...]],[...], Den25740), exec_layers([LFla71197,LDen79959,LCon60545,LMax46787,LFla21960,LDen25740],["Fla71197","Den79959","Con60545","Max46787","Fla21960","Den25740"],Den25740,"Den25740"). [language=Python, xleftmargin=15pt, float=tp, caption=Potential repair for the buggy model., label=lst:fix] import tensorflow as tf from tensorflow.keras import layers, models model_valid = tf.keras.Sequential([ layers.Flatten(input_shape=(10,)), layers.Dense(16, activation='relu'), layers.Reshape((16,1)), layers.Conv1D(16, kernel_size=(2), activation='relu', padding='same'), layers.MaxPooling1D(pool_size=(4), strides=3, padding='valid'), layers.Flatten(), layers.Dense(1, activation='softmax') ]) Listing <ref> illustrates a real neural network in the form of a TensorFlow Python program that was posted on stackoverflow <cit.> and contains a bug. The model is invalid, because the Conv1D layer requires input data with three dimensions, but it receives a two-dimensional input. In order to run such a neural network with ExAIS, it needs to be converted into a Prolog model, which is shown in Listing <ref>. Fortunately, this conversion can be done automatically with our repair tool. Our tool supports models in an JSON format which is based on a format introduced by SOCRATES <cit.>. AI models can be exported into this format in a straightforward way. We automatically convert these JSON models into Prolog models (in the form of queries). The model consists of predicate calls for the layers that are assigned to variables (Lines 1–6). For the execution of the model (i.e., making a prediction) there is an |exec_layers| predicate in Line 7. This predicate takes a list of layer variables, executes them, checks the preconditions and helps to identify the name of a layer that violates a precondition. Since the model is invalid, it cannot be used for a prediction. The execution returns the following error message to show the precondition violation. [language=Prolog, numbers=none, morekeywords=dense_layer,sig_gate,depth,add_lists,multiply_list,tanh_gate] Invalid Model, Badness Value: -100000000000000000 Aborted at Con60545: Dimension Error, Input Shape [1,16], Expected Dimensions 3!!! This message gives us the necessary information to repair the bug, i.e., it shows the problematic layer and what needs to be changed. A fix that was produced for the error is shown in Listing <ref>. 
This fix was computed by the 𝑓𝑖𝑛𝑑𝐹𝑖𝑥𝑊𝑖𝑡ℎ𝑀𝑖𝑛𝑖𝑚𝑎𝑙𝐶ℎ𝑎𝑛𝑔𝑒 function and it has minimal changes, i.e, it is the most similar to the original model, since the only change is the insertion of a reshape layer. The reshape adds a dimension with size one to the input data to satisfy the dimension requirements for the Conv1D layer. Alternatively, such an error could be resolved by looking for a lower dimensional version of the Conv1D layer, but since it is already the version with the lowest number of dimensions, this is not feasible in this case. § EVALUATION In this section, we evaluate the effectiveness and performance of our approach. We implemented our approach that can support various types of neural networks and tested it with a variety of AI models. To demonstrate its usefulness, we show two example applications, i.e., repairing automatically generated neural networks, and repairing manually designed neural networks with bugs. We perform multiple experiments to answer the following research questions (RQ). * RQ1: How effective is our approach in repairing automatically generated models? There are various issues, like inconsistencies with layer arguments, or a faulty network structure, that can result in invalid neural networks. It is important to evaluate to what extent we can repair such bugs. * RQ2: How effective is our approach in repairing manually written models? Developing an AI model is an error-prone task. To highlight the usefulness of our repair approach in practice, we demonstrate the applicability of our approach for a number of real modelling bugs, and investigate to what extent our suggested repairs are relevant for fixing these bugs. * RQ3: What kind of AI models can we support with our approach, and how efficient is our repair method? To clarify the scope of our supported models, it is important to explain what kind of layers and model types we support. Moreover, it also makes sense to investigate up to what network size our approach still runs in a reasonable time. The experiments were performed on a 7th Gen. Lenovo X1 Carbon ThinkPad with an 8th Gen i7 CPU with four 1.80 GHz cores and 16 GB RAM. For executing the Prolog semantics, we used SWI-Prolog 8.2.1, and our repair tool was built in Java 13.0.7. It consists of about 10,000 lines of code and includes some of the functionality of the ExAIS test case generator in order to produce Prolog models. Moreover, it uses the JSON.simple library and can load and save models in a JSON format that is based on the format from SOCRATES <cit.>. Additionally, it can store models as TensorFlow programs, and visualize them with Dot from Graphviz. The tool together with our experiment data and results are available in our repository <cit.>. In the following, we present our answers to the research questions. RQ1: How effective is our approach in repairing automatically generated models? There are a number of approaches that automatically generate AI models for the purpose of AI framework testing. Wang et al. <cit.> show a differential testing approach called LEMON that uses mutations to generate AI models. The approach produces 100% valid models since the mutations are designed to generate only working models, but LEMON is limited to 24 types of layers. A fuzzing method that extracts information from AI library documentations was illustrated by Li <cit.>. The approach generates AI models based on learned input requirements of deep learning layers, but it only produces about 25% valid models that are limited to singular layer models. 
Another fuzzing approach that generates AI models was presented by Schumi and Sun <cit.>. The method is semantic-based and produces AI models with an optimisation algorithm with some rudimentary fix components that work more random and focus more on replacing problematic layers. The approach focuses more on generating valid models, but incorporates rudimentary repairs, for dimension errors and input shape bugs. The algorithm works by adding, replacing or deleting layers and it is guided by a badness value that indicates how far the model is away from being valid. However, it restarts the generation if it cannot find a working model or it just deletes problematic model components. Hence, in many cases it does not perform a fix. It is able to produce 99.1% valid models for differential testing. We use this approach as a baseline to compare it to our method. In order to evaluate the effectiveness of our approach, we performed an experiment with 1,000 randomly generated invalid AI models. The models were produced with an adopted generation approach from <cit.> by randomly connecting various types of layers. Many layers are connected sequentially by feeding the output as input to the next layer in the sequence. However, there are also layers that take multiple inputs, e.g., to perform mathematical operations, like addition. For these layers, multiple input layers are connected. It should be noted that a random model that is produced in such a way is nearly always invalid due to various layer preconditions that must be satisfied. The resulting neural networks had an average size of 7.4 layers (excluding input layers). We set the parameters to use small inputs with a range of one to four values per dimension, since bigger sizes vastly increased the overall size of the highly dimensional input data (with up to 7 dimensions) and slowed down the execution of ExAIS. On average there were 1.8 bugs per model and in total there were 1797 bugs that needed to be fixed. An overview of the types of bugs is shown in Table <ref>. It can be seen that argument errors are the most common, followed by input shape bugs and dimension errors. Our approach successfully repaired all the bugs, i.e., we were able to repair 100% of the random bugs that occurred within the randomly generated neural networks, and it took on average only 21.08s per model. Being able to repair such a wide set of models with various different layer types shows that our approach is highly effective. In order to compare our approach to the baseline, we executed the same 1,000 invalid random tests with the optimisation algorithm from <cit.>. The approach was able to find a valid fix that was similar to our solution in 87.9% of the models, i.e., the layers of the model were kept and similar adoptions as with our algorithm were made. It took on average 8.96s, which is about twice as fast as our approach, because it discards models (or model parts) early when they are not working, and since it does not include a comprehensive search algorithm that tries to find minimal working fixes. For 121 cases, it did not produce a model that we would consider a fix since the algorithm was randomly replacing or removing layers or regenerating the whole model, i.e., the fixed models were altered in a way that made them no longer recognisable compared to the original model. Hence, we believe that our method showed a significant improvement of this state-of-the-art approach. RQ2: How effective is our approach in repairing manually written models? 
In order to answer this research question, we collected a set of real modelling bugs from AI developers that were reported on stackoverflow. We searched issues for which people were struggling to find a solution for an error message from TensorFlow that was caused by a misuse of one of the standard layers or a wrong combination of these layers. To find these issues, we used search queries based on common error messages from TensorFlow that we obtained with the previous random test models. For example, we searched with error specific queries, like “TensorFlow shape must be rank”, “TensorFlow inconsistent input shapes”, “Operands could not be broadcast together with shapes”, and with general queries, like “TensorFlow layer argument error” or “TensorFlow layer dimension error”. We found hundreds of bugs reports that we had to manually inspect since the vast majority of them had causes unrelated to the model or layer definition, i.e., they were not relevant to our approach. Some of the bugs are outside the scope of our method, since the cause of these bugs is not related to the model structure or layers. AI development has several steps, like the data preprocessing, training, or model validation, where bugs can occur, but our approach is only concerned with bugs within the models, e.g., by a wrong use of standard layers. Moreover, many models contain hand-crafted layers or enhanced features that are not supported by the semantics. We found 16 bugs that matched our criteria, five with dimension errors <cit.>, three with invalid kernel or pool shapes <cit.>, five weight shape issues <cit.>, and three bugs with inconsistent arguments or input shapes <cit.>. An overview of these bugs, their type, model sizes and repair times is shown in Table <ref>. For 15 of the models, our method was able to find potential repairs. One model <cit.> was too large for the execution with the semantics since the vocabulary data (of an Embedding layer) was too much (25001x200) for Prolog to handle, but after reducing the vocabulary size (to 2501x200), which was irrelevant to the bug in the model, our method was able to produce a repair. On average the repair time was 3min 40s, the maximum time was 23min 21s. The model sizes ranged from about 1KB up to 43MB. The results show the effectiveness of our method in repairing real modelling bugs, which we believe is important because it seems to be common that AI developers have difficulties to correctly design an AI model or to understand error messages from AI frameworks. In order to further evaluate the practical usefulness of our approach, we evaluated our produced repairs for these manual bugs by comparing them to the accepted solutions from stackoverflow. Table <ref> gives an overview of this comparison. Out of the 16 models, eight of our repairs were equivalent or very close (in terms of the type and magnitude of the modification) to the suggested solutions on stackoverflow <cit.>. For example, for dimension issues, our solutions would be to introduce resize layers as it was also suggested on stackoverflow. Five of our repairs were of similar quality compared to the posted solutions <cit.>, e.g., for inconsistent weight shapes we usually regenerate the layer weights based on the other layer arguments. Alternatively, such issues can be addressed by adjusting the layer arguments to correct the weight shape. 
Our remaining three repairs were not related to the suggested stackoverflow solutions <cit.>, but they still produced working models that might be helpful as an alternative solution for the bugs. For example, a wrong pool/kernel shape can be corrected in multiple ways. Our solution to randomly regenerate the layer arguments step by step usually produced a valid repair, but there can still be other better solutions. Based on these results, we believe that our repairs are reasonably useful in practice and can be helpful for various real modelling bugs. RQ3: What kind of AI models can we support with our approach, and how efficient is our repair method? Since our approach is based on the existing semantics ExAIS, we support neural networks that are supported by the semantics. ExAIS supports 65 deterministic layers of various types, like convolutional, pooling, recurrent, activation, normalisation, mathematical, and cropping. It was developed for TensorFlow and support almost all of its layers, i.e., the developers built ExAIS for TensorFlow 2.4 and identified 72 unique non-abstract and non-wrapper layers. Only 7 of these layers were not included. A full list of the supported layers is available in the repository of ExAIS <cit.>. Hence, our repair approach is able to repair a variety of neural networks, and nearly all types of layers. In order to evaluate to which extent our approach works with different model sizes and how fast it can repair these models, we set up two experiments. (1) We evaluate the repair time of randomly generated neural networks of different types with an increasing number of layers. For this generation, we restricted the types of layers since the execution time of the semantics is very different depending on the type of layers. (2) We use models for benchmark datasets with more realistic input and weight data sizes in order to evaluate the performance in a practicable setting. The results of the first experiment are shown in Fig. <ref> and Fig. <ref>, which illustrate the repair time of our approach for different types of models and for increasing layer numbers. Fig. <ref> shows sequential models that include activation layers and (1) dense layers, (2) recurrent layers, (3) pooling layers, and (4) convolutional layers. Fig. <ref> shows the same, but also includes graph models that include forks as a result of mathematical layers with multiple inputs. It can be seen that even relatively large models can be repaired in a reasonable time. Sequential models with up to 50 layers can be repaired in 80s, even if they contain mostly complex layers like convolutional or pooling. Graph models with about 50 layers take longer, but can still be repaired in max 3min 40s. We believe that compared to the manual effort of inspecting and repairing such a large model, our repairing time is reasonably practical. For the second experiment, we applied three models for the well-known CIFAR <cit.> and MNIST datasets <cit.> that were provided by SOCRATES  <cit.>. The models were called []cifar_conv_small_relu, []mnist_conv_small_relu_diffai and []mnist_conv_9_200. Additionally, we used a model <cit.> for the Fashion-MNIST dataset <cit.> and another model <cit.> for the Street View House Numbers (SVHN) dataset <cit.>. In contrast to the previously generated models, these neural networks had input data of up to 64KB and a model size of up to 10MB due to bigger weight size. 
In order to evaluate our approach for these models, we manually added inconsistent arguments, dimension errors, and weight shape issues. On average, it took 3min 12s to repair these models, and the maximum repair time was 5min 54s. This demonstrates that we are also able to apply our approach to practical neural networks and not just randomly generated ones. Moreover, it shows that the repair time of these larger neural networks was still reasonable, especially since the experiments were only performed on a laptop with rather limited computing resources and without parallelisation. Discussion. A potential threat to the validity of our evaluation might be that we focused too much on models with small input sizes. Generally, AI models can handle huge data like voice recordings, or large images. An evaluation with larger test inputs and models might be more realistic, but a large input size (in the MB range) will produce even bigger models, which would soon cause memory overflows in the semantics. However, usually bugs in large models can be broken down to smaller version that are easier to process. We believe that our test models with rather small inputs were still reasonable and did not represent a big limitation, and it is well-known that small test cases can reveal various bugs <cit.>. Moreover, we tested reasonably sized models from several benchmark datasets, and we evaluated real model bugs. One might argue that the performance of our approach for larger models is limited, since it can take a couple of minutes to execute our approach for large models. We believe that the performance is still reasonable especially since it can avoid a lot of effort that would be required to manually inspect a model for bugs, or when the long resolution time for AI bug reports is considered that can range from weeks to months <cit.>. Another threat to the validity of our evaluation might be a potential bias when we selected the real model bugs. It is true that we had quite restricted criteria when we were looking for these bugs. There are numerous bug reports from AI developers that are outside the scope of this work since they are not related to the misuse or wrong composition of standard layers. Moreover, there are still many bugs that come from AI models that include non-standard features, like custom layers, which make them unusable with our approach. Hence, it was a cumbersome task to look for relevant bugs for our approach. We believe that we still managed to find a representative set of model bugs that was suited for a good evaluation of our method. There might be rare bugs in the same category that are not supported by our method, but our evaluation showed that we are able to repair common modelling bugs in a reasonable time. Another question that might come up is why we do not use error messages from an AI framework instead of utilizing a semantics to support our repair approach? It is true that our algorithms would in principle also work with error messages from AI frameworks and it would even be faster, but there are a number of problems with these messages <cit.>. The error messages are not always consistent, i.e., even for the same type of bug there can be various different messages, which might be caused by independent implementations in different layers. The source of an error is not always clear and the necessary debug information to repair an error can be hard to extract since the messages have no clear and consistent form. 
In contrast, ExAIS provides error messages that are well-structured and consistent. They can easily be automatically parsed and provide clear debug information and the source of a bug. Moreover, it is easy to extend ExAIS with additional preconditions, which can be helpful for checking custom model properties or it enables the identification of problematic behaviour that might not lead to error messages in an AI framework. Hence, we believe that it was a good choice to apply ExAIS for our repair approach. § RELATED WORK Most of the related work in neural network repair focuses on other aspects of the neural networks, like improving the accuracy or ensuring certain properties. For example, Sohn et al. <cit.> introduce a repair technique called Arachne that can improve pre-trained models by adjusting the weights of layers. The method applies differential evolution to optimise the weights and to correct misbehaviour, like misclassifications. Moreover, they demonstrate how to resolve fairness issues, i.e., by repairing a bias in a gender classifier. Another approach that deals with fairness properties was presented by Sun et al. <cit.>. The work illustrates a causality-based repair technique called CARE that can identify problematic neurons that are responsible for undesired neural network behaviour. It is able to ensure that a neural network satisfies various fairness and safety properties, e.g., it can remove backdoors caused by malicious training data. Yang et al. <cit.> shows a repair framework that is able to ensure safety and robustness with regard to input-output safety specifications. They illustrate a depth-first-search reachability analysis algorithm to find unsafe input regions and examples that represent these regions. The approach is evaluated with an aircraft collision avoidance and a rocket landing system. Similarly, Sotoudeh and Thakur <cit.> present a provable point repair algorithm that is able to deal with misclassifications and that can ensure safety properties. Usman et al. <cit.> illustrates a constraint-based repair method called NNREPAIR for neural network classifiers. The technique applies fault localization to find problematic network parameters, and it can improve the accuracy of a model and fix safety properties. Xie et al. <cit.> introduced a model-based repair approach for recurrent neural networks (RNN) called RNNRepair. It is based on an influence model that relates the behaviour of a network to the training data. The method can help to understand the behaviours of an RNN, as well as increase the accuracy and repair safety properties. Zhang et al. <cit.> introduced a DNN training monitoring and automatic repairing tool called AUTOTRAINER that can fix common training problems, like a slow convergence or fluctuating accuracies. Similarly, a tool called DeepDiagnosis that further improved the repair performance for such bugs, was introduced by Wardat et al. <cit.> It applies dynamic analysis to monitoring and detect errors according to various symptoms. It can fix eight different training problems and can do this more efficient and with a better performance than other tools. In contrast to both these methods, our approach is only concerned with the model and layer definitions and not with the training phase. 
Related work is also in the field of AI model debugging which includes a number of approaches and tools <cit.> that offer features like fuzzing, faulty neuron (or feature) localisation, visualisation, or auditing, to enable finding bugs that lead to inaccuracy or property violations. However, such approaches are usually limited to specific types of models, e.g., classifiers. In contrast to all these approaches, our work is not concerned with the accuracy or with certain properties about the predictions of a neural network. We focus on repairing modelling bugs that cause invalid neural networks with structural problems, wrongly used layers, or inconsistent layer arguments. The closest related work is the tool called Tensfa from Wu et al. <cit.>. The tool can automatically detect and repair tensor shape faults, which are comparable to our dimension and input shape bugs. For these bugs, the approach works for even broader scope of AI models and programs since it supports more general operations with Tensors, like array adjustments with NumPy <cit.>, which are not standard layers. The approach uses decision tree model to detect bugs based on crash messages from TensorFlow. It applies static data dependence analysis and a dynamic shape tracking techniques to locate faults, and multiple mutation strategies that perform repairs by considering the frequency and time cost of a number of repair patterns. Some of the repair patterns, like the introduction of reshape layers, are similar to our fixes, but since Tensfa uses more operations than just standard layers, it has more repair options for fixing shape issues. Additionally, Tensfa can check if the input and output shapes of a model are causing a bug, which is out of the scope of our method. In contrast, our repair method with ExAIS works more straight forward. To identify and localize faults, we just run a model with ExAIS and the precondition checks will identify and localize the bugs. Moreover, our approach is not just limited to Tensor shape bugs, since we support other issues, like argument or weight shape errors, and the focus of our repairs is on finding minimal changes with standard layers. Lastly, Tensfa can only fix issues when there is an error message. ExAIS can also identify rare bugs that do not produce error messages from TensorFlow <cit.>. To the best of our knowledge, our work is the first that shows an automatic semantic-based repair approach for AI model bugs. § CONCLUSION We have introduced a novel neural network repair approach that is based on an executable semantics and demonstrated two applications. Our method is able to repair invalid AI models that suffer from structural problems, wrongly used layers, or wrongly connected layers. It works by executing a given AI model with an existing AI framework semantics called ExAIS that has built-in preconditions that are able to provide error messages with debug information that helps to localise and characterise a bug. Based on these error messages that are produced when there is a precondition violation, we are able to identify and repair invalid model aspects with a number of algorithms for different types of bugs. One major application of our approach is the repair of automatically generated neural networks that can, e.g, be used for AI framework testing. In order to evaluate the effectiveness of our approach for this application, we generated 1,000 random models with bugs and repaired them. 
Our approach was able to repair 100% of these neural networks, and it took on average only 21.08s to completely repair a model. Moreover, we evaluated the approach by repairing a set of larger test models (for well-known benchmark datasets), which required about 192s on average. Another application that we presented is the repair of real modelling bugs from AI developers. For this use case, we collected a set of neural network bugs that developers were struggling with from stackoverflow. Out of 16 faulty AI models, we could directly repair 15, and the one remaining model could be repaired after a minor size adjustment. The average repair time for these bugs was only 3min 40s. Inspecting the quality of our repairs showed that 13 of them were as good as or comparable to the solutions on stackoverflow. To sum up, our approach was able to effectively and automatically produce practical repairs for real-world bugs within minutes. This shows that it can be valuable for AI developers, since it can save a lot of debugging effort. We believe that these two applications highlight the usefulness and applicability of our approach, which has the potential to enable further applications. In the future, we aim to explore semantic-based repair techniques for other AI model aspects. Acknowledgments. This research is supported by the Ministry of Education, Singapore under its Academic Research Fund Tier 3 (Award ID: MOET32020-0004). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
http://arxiv.org/abs/2306.02203v1
20230603214804
Coalitions in International Litigation: A Network Perspective
[ "R. Mastrandrea", "G. Antuofermo", "M. Ovadek", "T. Y. -C. Yeung", "A. Dyevre", "G. Caldarelli" ]
physics.soc-ph
[ "physics.soc-ph" ]
Online Bootstrap Inference with Nonconvex Stochastic Gradient Descent Estimator Yanjie Zhong Todd Kuffner Soumendra Lahiri July 31, 2023 =============================================================================== empty § INTRODUCTION The digital revolution<cit.> has ushered in an era of unprecedented access to vast amounts of data, revolutionizing social-scientific research and opening up new avenues for quantitative approaches.<cit.> Network science, a powerful tool for representing and visualizing complex systems <cit.>, has been increasingly applied across various domains, from the study of scientific advancements<cit.> to finance<cit.> and social systems<cit.>. Although the application of network science to social behaviour and human institutions predates the digital revolution as illustrated by Jonathan L. Moreno's famous sociograms<cit.> and pioneering studies on self-organised segregation phenomena<cit.> and social cooperation <cit.>, law has long been perceived as a field beyond the reach of quantitative modelling. However, with digitalisation making considerable progress, the legal field has also witnessed growing interest in network science methods to model case citation dynamics<cit.>, the evolution and structure of legislation <cit.> as well as professional networks of judges <cit.> and law professors <cit.>. While untangling the complexity of normative structures<cit.>, this literature has delivered new insights on the operations of legal institutions<cit.> and the possibility to predict case citations <cit.>. In this context, we present a network analysis of coalitions in litigation before the Court of Justice of the European Union (CJEU). The CJEU adjudicates disputes over the legality of EU acts, the interpretation of EU treaties and legislation and state compliance with EU policies. Due to the impact and implications of its rulings across the bloc, the CJEU is regarded as one of the world's most powerful judicial bodies. National governments and EU institutions may appear before the Court as complainant or as defendant. They may also intervene in proceedings to express support or opposition to the EU member state or the EU institution party to the case. We use network analysis to study the coalition patterns emerging from this data. We construct networks to model the web of directed "friendly" and "unfriendly" connections between intervening states and parties. We examine centrality, modularity and triadic motifs both in the "Friends" and "Foes" networks over time. Additionally, we conduct a multiplex analysis and merge the two networks to gain further insights. We highlight the following findings. Firstly, Friends and Foes networks (see below) display a disassortative behaviour, implying a tendency for nodes to connect with dissimilar nodes rather than similar ones. Secondly, strong correlations among centrality measures suggest that member states and institutions with a higher number of connections simultaneously play a prominent role in bridging the nodes. Thirdly, the modularity of networks points to alignments along regional lines and divisions between EU institutions and member states consistently with results from social science research on European integration. Finally, we find a greater degree of reciprocity within the Foes network compared to the Friends network, suggesting a higher level of mutual opposition and conflict among nodes in the Foes network. 
§ MATERIALS AND METHODS §.§ Data Our data set consists of 625 cases filed with the CJEU between 1977 and 2018. We use web-scraping methods to identify and extract cases with at least one third-party intervention from the EUR-Lex website (<www.eur-lex.eu>). Cases in our data set are either initiated by a member state against an EU institution (annulment action) or initiated by the European Commission against a member state (infringement action). We only consider cases involving coalitions, i.e., cases that feature at least one intervention or in which two or more states act as plaintiff/defendant. While governments are also allowed to present observations in cases passed on to the CJEU by national courts (so-called "preliminary rulings" in EU law jargon), these cases only address matters of interpretation and do not determine the final outcome of the legal dispute at hand. The observations presented by intervening governments in preliminary rulings are ambiguous and do not clearly indicate which side they are meant to support. For these reasons, they are excluded from our analysis. The number of third-party interventions varies between 1 and 20 per case. In this data, 29 cases do not feature interventions but are also included, as they feature two or more countries/institutions as main parties on the same side of the dispute. While governments have enjoyed the right to intervene in CJEU proceedings since the inception of European integration in the 1950s, the first intervention occurred in 1977. To explore the evolution of coalition dynamics over time, we divided the data into eight periods: 1977-1981, 1982-1986, 1987-1991, 1992-1996, 1997-2001, 2002-2006, 2007-2011 and 2012-2018. §.§ Network Structure We constructed two directed, weighted networks. The first is a same-side network, which we refer to as the Friends network; the second is an opposite-side network, which we refer to as the Foes network. In our two networks, a Node represents a country or an EU institution involved in a case either as plaintiff/defendant or as intervening third party. An Edge is drawn between two nodes if they are on the same (Friends network) or opposite (Foes network) side of the case, whereby the intervening party is the source while the plaintiff/defendant and their co-interveners are the target. We make two exceptions to this rule. First, we do not draw an edge between the Commission and the main party to the case in infringement proceedings initiated by the European Commission. Similarly, we do not draw any edge between the case initiator and the Council of the European Union in annulment proceedings. Because infringement actions are brought by the Commission and all annulment cases are directed against legislation approved by the Council, these edges would merely reflect the proportion of infringement and annulment cases in our data. For the purpose of investigating coalition dynamics, these edges are, therefore, uninformative. Edges are weighted according to the number of cases involving the two nodes as friend/foe in the corresponding period. We then define two weighted adjacency matrices of size N, associated respectively with the Friends and Foes networks, whose elements we denote by w^fr_ij and w^fo_ij, i.e., W^fr ≡ (w^fr_ij)_{1 ≤ i,j ≤ N} and W^fo ≡ (w^fo_ij)_{1 ≤ i,j ≤ N}. §.§ Node Importance and Network Organisation Node importance is assessed using various centrality measures.
Some of those centrality measures are for example designed to capture the ability of a node to spur or stop epidemic processes within a network, others to measure the bridging value of a connection and therefore we lack a unique all-purpose centrality measure. For this reason, the importance of a given centrality metric is often a matter of the context. In our study, we focus on four measures of node centrality that highlight different aspects of our system: (i) degree centrality, which represents a simple (binary), local measure considering only the first order connections of a node; (ii) strength centrality. which as a natural extension of degree centrality, includes the effect of edge-values on node importance; (iii) betweenness centrality, which is defined as a higher-order measure of node importance taking into account the shortest paths connecting any two nodes in the network, thus focusing on its possible key role in diffusion processes; and (iv) page-rank centrality <cit.>, which functions as a global measure recursively taking into account the centrality of all the node's neighbours – in other words, it assigns a measure of importance to a node according to the centrality of its neighbours. We examine triangular closure and compare motif occurrence to a null model. To detect community structures, we apply the Louvain algorithm. Our multiplex analysis models the two networks as layers of a duplex, with the analysis focusing on the overlapping node degree/strength (sum of node degree/strength over the two layers), the multiplex participation coefficient with respect to a chosen local quantity and the z-score of the overlapping degree/strength to identify - if any - peculiar nodes<cit.>. Finally, we merged the two networks into a single network by computing the frequency of relationships for each pair of nodes: w^fr_ij/(w^fr_ij + w^fo_ij). We consider this network representation complementary to the multiplex perspective – offering a more intuitive visualization of the relationships occurring among countries/institutions and allowing the study of additional topological properties, although at the price of losing some features of the two layers. § RESULTS We first examine basic topological local properties (size, volume, degree, strength) before turning to centrality measures and community structures. §.§ Basic Network Properties Our entire data set features 36 countries and institutions (see Table in Appendix). Yet the network size never exceeds 31 nodes for any of the eight periods, as shown in Figure <ref>. Successive increases in network size reflect the impact of enlargement (19 countries joined the EU between 1981 and 2013), treaty revisions (which created new institutions, such as the European Central Bank) and more frequent litigation, which has provided governments and EU institutions with more opportunities to intervene in judicial proceedings. Whereas Friends and Foes networks increase at the same pace (Fig.<ref>, top), the density of the Friends network appears higher in the last 3 periods after starting from similar values (Fig.<ref>, middle). The total number of cases rises in tandem with the number of agents involved, but with greater speed for the Friends network (Fig.<ref>, bottom). Both Friends and Foes networks exhibit binary disassortativity when modelled as binary, although not when modelled as weighted. Figure <ref> shows the scatter plot for the binary model for the last period in our data set, 2012-2018. 
Disassortativity suggests a tendency for countries and institutions involved in a high number of lawsuits to be connected with countries and institutions less active in the litigation process. This property is a manifestation of structural disparities in the involvement of institutions and member states in EU-level legal disputes. Some institutions (e.g. the ECB) have authority over narrow policy domains, limiting the range of cases in which they may be involved, while member states differ considerably in economic size, political influence and familiarity with EU law litigation <cit.>. §.§ Node Centrality Figures <ref> and <ref> report node rankings according to our four centrality measures over the eight periods for both Friends and Foes. Here the analysis is restricted to the countries and institutions appearing in all periods in both networks to ensure meaningful temporal comparisons. France dominates the Friends rankings in the first three periods, whereas later periods are dominated by the UK, the Czech Republic, Finland and the European Commission. For the Foes network, centrality rankings are dominated by the UK and the European Commission. The fact that the UK and the Commission score high on out-degree centrality as well as on the other three measures indicates that they are both initiators and targets of hostile interventions. Figure <ref> reports node rankings for the period 2012-2018. Rankings for the Commission differ little across centrality measures in both the Friends and Foes networks. In the Friends network, Germany ranks highest on out-degree centrality, indicating frequent friendly interventions. The UK's in-degree score in the Friends network and out-degree score in the Foes network reveal active intervention both against and in support of other EU actors. The scatter plots in Figure <ref> display correlation values among the four centrality measures for the last period. We observe high correlations between all centrality measures (for the sake of simplicity, we report total degree rather than in- and out-degree). High correlation values are not a necessary property of every network. The central node of a star network, for instance, will possess both high degree and high betweenness centrality, whereas a node connecting the central nodes of two star networks will exhibit low degree but high betweenness centrality. Correlation values thus indicate the extent to which nodes tend to play the same role in the network. More specifically, in our networks, the degree-betweenness correlation highlights the fact that countries/institutions with a great number of neighbours also play a fundamental role in bridging the network. §.§ Motifs To detect patterns of coalition formation, we performed an analysis of recurrent triadic binary motifs (Fig. <ref>, top), comparing their occurrences in our networks with null models sharing the same degree distribution (see Methods for details). We compare the three periods 2002-2006, 2007-2011 and 2012-2018, which are sufficiently comparable in terms of network size so as not to affect the z-score. Figure <ref> shows the z-scores for each motif in the three periods. As regards the Friends network, motifs 1 and 4 appear to be overestimated by the null model: open triangles with either two exiting or two entering links are less frequent than expected for the period 2002-2006. Motifs 5 and 10, by contrast, are more frequent in the Friends network than in the null models for the two periods 2002-2006 and 2012-2018.
The frequency of motif 10 in these two periods suggests limited reciprocation in friendly support. Interestingly, reciprocation seems more pronounced in the Foes network. Motifs 8 and 10, both triangles with reciprocated links, are abundant, especially in the periods 2007-2011 (motif 8) and 2002-2006. These patterns suggest that member states and institutions tend to reciprocate hostile interventions. §.§ Communities Fig. <ref> displays community structures, denoted by node colour, in the Friends and Foes networks for the last period under study (2012-2018). Communities in the Friends network reveal a clear separation between member states and pro-integration institutions, as well as East-West and North-South divides, which social scientists have documented in legislative settings <cit.>. The European Commission and the European Parliament, both in the light-blue community, tend to advocate federalist policies, whereas member states, along with the Council (which serves to represent the interests of national governments), typically advocate greater deference to domestic decision making <cit.>. The purple community is mostly composed of southern member states (France, Italy, Spain, Greece, Portugal) along with the Council. The orange community is a cluster of predominantly northern European member states (Sweden, Finland, the Netherlands, the UK). Eastern member states (including Poland, Hungary, Slovakia and Romania) form a separate (green) community. The position of the European Commission in the Foes network reflects its role as "guardian of the treaties". The Commission frequently intervenes to defend EU legislation against legal challenges brought by national governments. The European Parliament and the Council, who often defend opposite policy positions, find themselves in the same community (orange nodes). So do the UK, France and Spain – a pattern largely driven by reciprocal hostility between the UK, on the one hand, and France and Spain, on the other. §.§ Multiplex Perspective This section reports the results of our duplex analysis, which helps better understand the role of nodes as supporters/rivals. We focus on node degree, computing the overlapping degree between the two layers <cit.> and the participation coefficient with respect to them <cit.>. The overlapping degree simply sums the node degrees over the two layers, while the participation coefficient quantifies the distribution of a node's presence in the two networks: it ranges in [0,1] and is equal to 0 if all edges of a node belong to just one layer, while it is exactly 1 if the edges are equally distributed over the two. A first inspection of node scores according to in/out overlapping degree reveals high variability in node behaviour, as shown in Fig. <ref>. The same figure also reports the participation coefficient (with respect to in/out degree) versus the related z-scores, highlighting the role played by nodes in the two layers. Also displayed are the ego networks of Slovakia (SK) and the European Commission (COM). They were chosen to contrast the roles nodes can play in the duplex. Slovakia can be considered a "focused" node, as its outgoing edges mainly belong to the Friends network (six, with only one link in the other layer), whereas the European Commission exhibits proper multiplex behaviour, acting as a hub in both layers with 16 outgoing links in the Friends and 25 in the Foes network.
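To make these duplex quantities concrete, the following is a minimal sketch (ours, using networkx and toy data rather than the actual case data) of the overlapping degree and of a standard two-layer participation coefficient that reproduces the 0/1 behaviour described above:

import networkx as nx

def _degree(g, node, mode="out", weight=None):
    """Degree (or strength, if weight='weight') of `node` in one layer."""
    view = g.out_degree if mode == "out" else g.in_degree
    return view(node, weight=weight)

def overlapping_degree(g_friends, g_foes, node, mode="out", weight=None):
    """Sum of the node's degrees (or strengths) over the two layers."""
    return (_degree(g_friends, node, mode, weight)
            + _degree(g_foes, node, mode, weight))

def participation_coefficient(g_friends, g_foes, node, mode="out", weight=None):
    """Two-layer participation coefficient: 0 when all edges of the node lie
    in a single layer, 1 when they are split equally between the layers."""
    k = [_degree(g_friends, node, mode, weight),
         _degree(g_foes, node, mode, weight)]
    o = sum(k)
    if o == 0:
        return 0.0
    m = 2  # number of layers in the duplex
    return (m / (m - 1)) * (1.0 - sum((ki / o) ** 2 for ki in k))

# Example with toy layers (the real layers would be built from the case data):
friends, foes = nx.DiGraph(), nx.DiGraph()
friends.add_edges_from([("SK", "HU"), ("SK", "PL")])
foes.add_edge("COM", "SK")
print(overlapping_degree(friends, foes, "SK", mode="out"))         # 2
print(participation_coefficient(friends, foes, "SK", mode="out"))  # 0.0, all out-edges in one layer

Assuming this definition matches the one used in the analysis, the out-degrees quoted above would give a participation coefficient of roughly 0.49 for Slovakia and 0.95 for the Commission, consistent with the "focused" versus hub behaviour.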
§.§ Merged Network In this section we define an alternative network merging information from both Friends and Foes connections among EU countries and institutions. A link between any two nodes is weighted according to the number of cases involving them as Friends divided by the total number of cases between them: W^mrg ≡ (w^mrg_ij)_{1 ≤ i,j ≤ N}, with w^mrg_ij = w^fr_ij/(w^fr_ij + w^fo_ij). The edge direction indicates the source and the target of intervention. In other words, a link from A to B with weight 0.8 indicates that in 80% of cases A supported B (regardless of whether A initiated the case or intervened later), while a weight equal to 0.1 means that only in 10% of cases A supported B. Figure <ref> illustrates the merged networks for the last three periods under study. Node size is proportional to in-degree centrality, while lighter shades of blue indicate lower and darker shades of blue higher out-degree centrality. Edge colour captures weights as defined above, with darker shades of red denoting more thoroughly supportive behaviour. The blue and green shades of the edges linking institutions and member states clearly show the divide opposing EU institutions and national governments in EU litigation. EU institutions typically promote pro-integration laws which national governments seek to contain <cit.>. In Figure <ref> we show, for a select group of countries and institutions, how the frequency of supportive behaviour is distributed over all the countries/institutions present in the dataset. The top panel reports outgoing links, i.e., the frequency with which the country/institution indicated by the colour initiates the supportive action, while the bottom panel considers incoming links, i.e., the node is the target of actions initiated by the country/institution indicated by the column. Figure <ref> (a) shows interesting information about the tendency to be supportive or not in the last period under study. For example, COM appears generally unsupportive as the source of an intervention, as most of its outgoing link values are small, except for connections with CON, ECHA, EDPS and EP, all EU institutions. The UK and Luxembourg direct supportive behaviour only towards some countries and institutions: CZ, DK, FI, IE, LU, NL and SE for the UK; EP, HU, NL, SE and UK for LU. Figure <ref> (b) reports information about receiving supportive interventions in the last period of our sample. Most of the ten countries/institutions reported here receive supportive behaviour from the others, with some exceptions for DE (small incoming edge values from COM, EE, ES, FI, LT) and the UK (small incoming edge values from COM and EP). These figures could help in classifying countries/institutions as mainly supportive or not and as mainly supported or not, and in grouping agents accordingly. § CONCLUSIONS Our network analysis of third-party interventions before the CJEU provides insights into international litigation dynamics. The disassortative behaviour displayed in both networks indicates a tendency for nodes to form connections with dissimilar nodes rather than similar ones. The strong correlations among centrality measures suggest that certain member states and institutions hold a prominent role in litigation, as sources and targets of interventions and in bridging the networks' communities.
The modularity analysis revealed alignments along regional lines and divisions between EU institutions and member states, consistent with previous social science research on European integration. Lastly, the higher degree of reciprocity observed within the Foes network compared to the Friends network suggests a greater level of mutual opposition and conflict among nodes in the Foes network. While our analysis remains exploratory, we hope to have shown that international litigation provides a suitable context for network analysis, allowing researchers to navigate the complexity of the underlying coalition patterns. § ACKNOWLEDGEMENTS RM acknowledges support from the Italian "Programma di Attività Integrata" (PAI) project "PROsociality COgnition and Peer Effects" (PRO.CO.P.E.), funded by IMT School for Advanced Studies Lucca, and from the European Union Horizon 2020 Program under the scheme "INFRAIA-01-2018-2019 - Integrating Activities for Advanced Communities", Grant Agreement n. 871042, "SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics" (http://www.sobigdata.eu). § AUTHOR CONTRIBUTIONS RM, AD and GC conceived the study; RM analyzed the data, produced the figures and drafted the manuscript; RM, AD and GC interpreted the results. All authors critically revised the manuscript.
http://arxiv.org/abs/2306.04841v1
20230608002429
Improving Vietnamese Legal Question--Answering System based on Automatic Data Enrichment
[ "Thi-Hai-Yen Vuong", "Ha-Thanh Nguyen", "Quang-Huy Nguyen", "Le-Minh Nguyen", "Xuan-Hieu Phan" ]
cs.CL
[ "cs.CL" ]
VNU University of Engineering and Technology, Hanoi, Vietnam {yenvth,19020011,hieupx}@vnu.edu.vn National Institute of Informatics, Tokyo, Japan [email protected] Japan Advanced Institute of Science and Technology, Ishikawa, Japan [email protected] Improving Vietnamese Legal Question–Answering System based on Automatic Data Enrichment Thi-Hai-Yen Vuong1 Ha-Thanh Nguyen2 Quang-Huy Nguyen1 Le-Minh Nguyen3 Xuan-Hieu Phan1 July 31, 2023 ============================================================================================ Question answering (QA) in law is a challenging problem because legal documents are much more complicated than normal texts in terms of terminology, structure, and temporal and logical relationships. It is even more difficult to perform legal QA for low-resource languages like Vietnamese, where labeled data are rare and pre-trained language models are still limited. In this paper, we try to overcome these limitations by implementing a Vietnamese article-level retrieval-based legal QA system and introduce a novel method to improve the performance of language models by improving data quality through weak labeling. Our hypothesis is that in contexts where labeled data are limited, efficient data enrichment can help increase overall performance. Our experiments are designed to test multiple aspects, and they demonstrate the effectiveness of the proposed technique. § INTRODUCTION The performance of question answering (QA) has increased significantly thanks to the rapid development and recent breakthroughs in natural language processing. With these advances, QA has been used actively in various business domains in order to save human labor, achieve more automation, and enhance user experience. Among application areas, QA in the legal domain has attracted a lot of interest from the research community as well as awareness and support from legal practitioners, experts, law firms, and government agencies. Legal QA could assist them in finding relevant legal information quickly, accurately, and reliably. Technically, the legal retrieval-based QA problem is simply stated as follows: given a query q and a text corpus D = {d_1, d_2, …, d_n}, retrieval-based QA finds the document d^* that maximizes the relevance score R: d^* = argmax_d ∈ D R(q,d) where R(q,d) represents the relevance score of the query q and document d. Traditionally, lexical weighting and ranking approaches like TF-IDF or BM25 are used to find the relevant documents based on the match of vocabulary terms. Despite their simplicity and limited accuracy, these techniques are normally cost-effective. Meanwhile, representation and deep learning based models are likely to give better results, but they are much more expensive in terms of training data, computing power, storage, and deployment. Various deep learning models have been introduced to enhance the representation of queries and documents, such as CNN <cit.>, RNN and LSTM <cit.>. Pre-trained language models (BERT <cit.>, GPTs <cit.>) also significantly improve text representation in retrieval tasks. In the legal domain, there are several challenges to building a reliable QA system. First, legal documents are much more complex than normal texts. They contain legal terms and concepts that are not commonly observed in general texts. Legal texts are usually long and have complex structures. There are also temporal constraints, logical relations, cross-document references, etc.
that are even difficult for human readers to follow and understand. Second, data annotation for legal documents is a real challenge, making it hard to construct even a medium-sized high-quality labeled dataset for training QA models. Today, one popular way to improve accuracy is to build large deep-learning models with a huge number of parameters. This is obviously an obstacle because building such models requires powerful computing resources and a huge source of data. In this work, we concentrate on enhancing data quality and quantity in a context where expanding labeled data is infeasible. A heuristic method for automatically creating weak label datasets and supporting relationship representation models in case law retrieval is presented by Vuong et al. <cit.>. Therefore, we apply this technique to create more training data to improve our models without the need to increase the number of model parameters. Technically, we address the problem of article-level retrieval-based legal QA. We use the Vietnamese civil law QA dataset, which was introduced by Nguyen et al. <cit.>, to conduct an empirical study on the proposed methods. Table <ref> illustrates an example of a legal query and the anticipated response. It is difficult to represent, retrieve and determine the correct answer because the articles are often long and complex. In addition, a notable feature of this dataset is that each article usually has a title, which serves as a brief summary. The main contributions of our work are twofold. First, we built an end-to-end article retrieval system to solve the legal QA task. Second, we show how effective automatic data enrichment is, and we conducted a variety of experiments to contrast our model with the most cutting-edge approaches in this domain. § RELATED WORK In natural language processing, the term question answering (QA) is commonly used to describe systems and models that are capable of providing information based on a given question. Depending on the characteristics of the task, we can divide it into different categories. Factoid QA <cit.> is a class of problems for which the answer is usually simple and can be extracted directly from a given question or context. Problems in this category can often be solved with generation models or sequence tagging approaches. Retrieval-based QA <cit.> is a class of problems where the answer should be retrieved from a large list of candidates based on relevancy and the ability to answer the question. This class of problems can also be called List QA. Confirmation QA <cit.> is the class of problems where systems or models need to confirm whether a statement is true or false. Systems for this type of problem can be end-to-end deep learning models, knowledge-based systems, or neuro-symbolic systems. In the legal field, question answering has been studied by the research community for many years <cit.>. The main challenges of this problem in the legal domain include fragmented training data, complex language, and long texts. With the emergence of transformer-based <cit.> language models as well as transfer learning and data representation techniques, the performance of systems on these tasks has improved significantly. In legal information retrieval, a number of neural approaches have also been introduced to address the problems of vocabulary mismatch and complex relationship structures <cit.>. § DATASET Original dataset: the corpus is collected from Vietnamese civil law. The labeled dataset was introduced by Nguyen et al. <cit.>.
Tables <ref> and <ref> give a statistical summary of the corpus and dataset. There are 8587 documents in the corpus. Vietnamese civil law documents have a long and intricate structure. The longest document contains up to 689 articles, and the average number of articles per document is also comparatively high at 13.69. The average title length in this dataset is 13.28 words, whereas the average content length is 281.83 words. This is worth noting because one of the challenges and restrictions is the representation of long texts. On average, the questions are less than 40 words long. Because of the similarity in their distributions, it is expected that the model trained on the training set will yield good performance on the test set. Weak labeled dataset: Vuong et al. assume that the sentences in a legal article support a topic sentence <cit.>. On the basis of this assumption, a weak labeled dataset can be created. There is a similar relationship in this dataset: the title serves as a brief summary of the article, so the sentences in the article content support the title. We apply this assumption to our method. By treating the title as if it were a question, we produce a dataset with weak labels. A title-content pair is taken as a positive example, analogous to a pair of a question and one of its related articles. We randomly generated negative examples at a ratio of four negatives per positive example and obtained a weak label dataset consisting of 551,225 examples. § METHODS For a legal question-answering system at the article level, given a question q and a corpus of civil law CL = {D_1,D_2,...,D_n}, the system should return a list of related articles A = {a_i|a_i ∈ D_j, D_j ∈ CL}. The following section provides a detailed description of the phases involved in solving the problem. §.§ General Architecture Figure <ref> illustrates our proposed system. There are three main phases: preprocessing, training, and inference. Preprocessing phase: the result of this phase is an article-level database, obtained by processing the raw Vietnamese civil law documents. * Vietnamese Civil law is a corpus of Vietnamese legal documents. * Parser segments legal documents into lists of articles. * Cleaning filters out document metadata. Special symbol characters are also removed from the articles. Numbers and vocabulary are retained and converted to lowercase. * Tokenizer is crucial to the processing of Vietnamese natural language. Vietnamese word structure is quite complicated: a word might contain one or more tokens. * Indexing represents the articles and stores them in the database. Given a query, the search engine will return the response quickly and accurately. Training phase: we construct a supervised machine-learning model to rank the articles pertaining to the input question. * Original dataset is the legal QA dataset provided by Nguyen et al. <cit.>. * Articles are the result of the preprocessing phase. * Weak label dataset was created by our heuristic method. * Preprocessing includes tasks similar to those of the preprocessing phase, applied to the questions. * Training constructs a deep learning model to rank the texts related to the question. Inference phase: the process of generating the response to a new input question. * Question is a query in natural language. * Preprocessing is the same as in the previous phases, applied to the input question. * Quickview retrieval model matches questions and texts using unsupervised machine learning techniques.
The processing speed of this model is typically fast. * Candidates are the limited list of candidates returned by the quickview retrieval model. * Supervised model is the result of the training phase. Its inputs are the question and the article candidates. * Candidate scores are the outputs of the supervised model. * Ensemble model combines the scores of the quickview retrieval model and the supervised model to make the final decision. §.§ Indexing There are numerous methods for indexing text into a database; in this work, we conduct experiments in two ways: word indexing and dense indexing. Word indexing: during the indexing process, the words in the text are analyzed, normalized, and assigned a corresponding index. Given a query, the system searches the index for the most related entries. Word indexing helps to find and look up information in the text faster and more accurately. Dense vector indexing: in addition to word indexing, word-to-vec and sequence-to-vec are both common methods for representing text semantically. These dense vectors can be used to represent text and index the database for search purposes. We apply two ways of representing text as dense vectors, word embeddings (FastText <cit.>) and contextual embeddings (BERT <cit.>), to encode the given question and the legal articles. FastText is a model that converts each word into a dense vector of 300 dimensions. To construct a vector representation of a text, we average over the word vectors to form a single representation vector. Sentence-BERT <cit.> converts the text into a dense vector with 768 dimensions that can represent the contextual semantics of the document. Table <ref> shows that the length of articles is often large, which is a limitation for the text representations produced by FastText and BERT. On the other hand, most questions only partially match an article; we overcome this long-text weakness by splitting each legal article into a list of sentences and then generating dense vectors before indexing them into the database. §.§ Quickview Retrieval Model There are 117,575 legal articles in this corpus. This is a huge number, so in order to ensure the effectiveness of the question-answering system, we build a so-called quickview retrieval model using unsupervised machine learning techniques in order to rapidly return a limited candidate set. Word matching: to compare questions and articles in the word indexing database, we use the BM25 algorithm <cit.>. The bag-of-words retrieval function BM25 estimates the relevance of a document to a given search query by ranking documents according to the query terms that appear in each document. Given a question Q containing tokens {t_1,t_2,..., t_n}, the BM25 score of an article A is: BM25S(Q,A) = ∑_i=1^n IDF(t_i) · f(t_i,A) · (k_1+1)/(f(t_i,A) + k_1 · (1-b+b · |A|/avgdl)) in which: * f(t_i,A): t_i's term frequency in the legal article A * |A|: the number of words in the legal article A * avgdl: the average article length in the legal corpus. * k_1: a saturation curve parameter of the term frequency. * b: the importance of document length. * IDF(t_i) is the inverse document frequency weight of the query term t_i, given by: IDF(t_i) = ln(1 + (N - n(t_i) + 0.5)/(n(t_i) + 0.5)), where N is the number of articles in the legal corpus and n(t_i) is the number of articles containing t_i. While the content of an article carries its full meaning, the title also concentrates significant meaning.
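Before describing how title and content scores are combined, the BM25 formula above can be illustrated with a minimal sketch (not the system's implementation, which relies on Elasticsearch); the whitespace tokenization, the parameter values k_1 = 1.2 and b = 0.75, and the toy corpus are illustrative assumptions.

import math

def bm25_score(query_tokens, article_tokens, corpus, k1=1.2, b=0.75):
    """BM25 score of one tokenized article for a tokenized query, following the formula above."""
    N = len(corpus)
    avgdl = sum(len(a) for a in corpus) / N
    score = 0.0
    for t in query_tokens:
        n_t = sum(1 for a in corpus if t in a)              # number of articles containing t
        idf = math.log(1 + (N - n_t + 0.5) / (n_t + 0.5))   # IDF(t)
        f = article_tokens.count(t)                          # term frequency f(t, A)
        denom = f + k1 * (1 - b + b * len(article_tokens) / avgdl)
        score += idf * f * (k1 + 1) / denom
    return score

# Toy corpus of tokenized articles (whitespace tokenization for illustration only).
corpus = [doc.split() for doc in [
    "nghia vu tra no cua ben vay",
    "quyen so huu tai san",
    "hop dong vay tai san",
]]
query = "nghia vu tra no".split()
print([bm25_score(query, art, corpus) for art in corpus])  # the first article scores highest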
In this case, the quickview retrieval score is determined using the formula below: QS(Q,A) = α * BM25S(Q,TA) + β * BM25S(Q,CA) in which α and β are boosting weights, and TA and CA are the title and content of the article. Dense vector matching: to estimate the semantic similarity between questions and legal articles in the dense indexing database, we use cosine similarity to calculate the quickview retrieval score: Cosine(VQ,VSA) = (VQ^T · VSA)/(||VQ|| · ||VSA||), QS(Q,A) = max_1 ≤ j ≤ n Cosine(VQ,VSA_j) in which VQ is the representation vector of the question, VSA_j is the representation vector of the j^th sentence in the legal article, and n is the number of sentences in the legal article. Finally, we use min-max scaling to normalize the scores and generate a ranked list of candidates. §.§ Supervised Model Pre-trained language models have proven useful for natural language processing tasks. In particular, BERT significantly enhanced common language representation <cit.>. We use the pre-trained BERT model and fine-tune all its parameters to build the relevance classifier. We use the first token's final hidden state h as the representation of the question-article pair. The last layer is a single fully connected layer added on top of BERT. The output of the model is a binary classification, and cross-entropy is used as the loss function. Adam <cit.> is used to optimize all model parameters during the training phase with a learning rate of e^-5. The supervised score between the question and the legal article is the classification probability of label 1: SS(Q,A) = P_label=1(Q,A) Lastly, we also use min-max scaling to normalize the scores and rerank the list of candidates. For this model, we build the relevance classifier on two training datasets: the original dataset and the full dataset (original and weak label dataset). In the training process with the full dataset, we fit the model on the weak label data first; the best model is then fine-tuned on the original dataset. §.§ Ensemble Model We utilize the quickview retrieval model to generate a list of the top-k candidates. These candidates are then refined using the supervised ensemble model, which provides higher precision but is slower. The quickview model serves as a preliminary selection step due to its fast computation despite its lower precision. We use a variety of measures of similarity, including lexical similarity (the quickview retrieval model) and semantic similarity (the supervised model). Despite the fact that lexical and semantic similarities are very different from one another, they can work in tandem and are complementary. The combined score of the question Q and the candidate article CA_i is calculated as follows: CombineS(Q,CA_i) = γ * QS(Q,CA_i) + (1-γ) * SS(Q,CA_i) where γ ∈ [0,1]. Table <ref> indicates that each question can have one or more related articles (the average is about 1.6). The most relevant candidate article MRCA is returned by default; to determine the full set of candidates to return, we normalize the combined scores and use a threshold parameter: the final returned article set is FRA = {CA_i | CombineS(Q,MRCA) - CombineS(Q,CA_i) < threshold}. § EXPERIMENTAL RESULTS AND DISCUSSION To ensure fairness in the training process and the selection of hyperparameters, we divided the training dataset into training and validation with a ratio of 9:1. In the quickview retrieval phase, we utilize the Recall@k measure to assess the list of returned candidates.
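Before turning to the evaluation, the ensemble step described above can be made concrete with a minimal sketch (not the authors' code): the quickview and supervised scores are min-max normalized, combined with the weight γ, and the returned set is selected with the threshold rule. The candidate names, scores and parameter values are illustrative assumptions.

def min_max(scores):
    """Min-max normalization of a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def ensemble_select(candidates, qs_scores, ss_scores, gamma=0.5, threshold=0.2):
    """Combine lexical (QS) and semantic (SS) scores and apply the threshold rule."""
    qs, ss = min_max(qs_scores), min_max(ss_scores)
    combined = [gamma * q + (1 - gamma) * s for q, s in zip(qs, ss)]
    best = max(combined)                               # score of the most relevant candidate
    return [c for c, score in zip(candidates, combined)
            if best - score < threshold]               # the set FRA defined above

# Hypothetical candidate articles with quickview (BM25) and supervised (BERT) scores.
candidates = ["Art. 463", "Art. 466", "Art. 470", "Art. 500"]
qs_scores = [12.3, 11.9, 6.1, 2.0]
ss_scores = [0.91, 0.88, 0.35, 0.05]
print(ensemble_select(candidates, qs_scores, ss_scores))  # ['Art. 463', 'Art. 466']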
Recall@k is defined as the number of correctly predicted articles in the top-k results divided by the total number of gold articles. Macro-F2 is the metric used to evaluate the end-to-end question-answering system. Precision, recall, and average response time per question are also used to evaluate the system's performance. The preprocessing phase and the quickview retrieval model are carried out on an Intel Core i5-10500 CPU with 32 GB of RAM. The supervised model is trained and run for inference on an NVIDIA Tesla P100 GPU with 15 GB of memory. In the indexing step and the quickview retrieval model, we use Elasticsearch[https://www.elastic.co/] with a heap size of 8 GB. Besides, among the pre-trained BERT models we experimented with, the multilingual BERT model produces the best results, so it is used to generate vector representations for the given question and the articles in the dense vector indexing and in the supervised model. §.§ Quickview Retrieval Result Table <ref> shows the results of the word matching method; its superiority in execution time is easy to see. It only takes 14.43 ms to return a set of 50 candidates and 115.63 ms for 1000 candidates. The results also demonstrate how the title and the content of the article affect retrieval. Recall@1000 is only 0.75 to 0.87 on the datasets if we rely solely on word matching against either the title or the content. When using both of them, Recall@1000 is nearly 0.9. As a sort of written summary, the title frequently includes important keywords. Consequently, we achieve the best result, 0.9128 in Recall@1000, when boosting the question-title matching score by 1.5 times compared to the question-content score. The experimental results of the dense vector matching method are illustrated in Table <ref>. Dense vector matching with both BERT and FastText has lengthy execution times but only average Recall@k. In the dense vector indexing method, the articles were indexed at the sentence level, so we need to return more records than in the article-level word indexing method. Calculating the similarity between high-dimensional vectors is also a challenge. Therefore, this method takes a long time to execute. Retrieving 10,000 sentences takes 1.7 and 5.2 seconds, respectively, which cannot be applied in a real-time question-answering system. Recall@10000 is 0.61 for FastText and 0.67 for BERT. These scores are easy to understand, because the advantage of FastText is semantic representation at the word level, whereas BERT is known for its powerful contextual representation of paragraphs, and splitting the article into sentences loses this contextual property. Based on the aforementioned experimental results, we decided to build the quickview retrieval model using BM25 with α = 1.5 and β = 1. For real-time response, we obtain respectable Recall@k scores of 0.7214, 0.7973 and 0.8453 for k values of 50, 100 and 200, which are the numbers of candidates returned after this phase. §.§ End-to-end Question Answering System Result Table <ref> reports the experimental results of the end-to-end question answering system with the top 200 candidates from the quickview retrieval model. The word-matching model with BM25 and the supervised model built from the original data each give an F2 score of about 0.38. The ensemble model outperforms the other models in F2 score with 0.6007, which is 22% higher than the single models. As was pointed out in the previous section, lexical and semantic similarity are highly dissimilar.
However, we believe that they can cooperate and complement one another, and the results certainly support this. Table <ref> also clearly illustrates the contribution of the weak label dataset. It improved the supervised machine learning model's F2 score by 8%. The weak label data continue to have an impact on the F2 score when the lexical and semantic matching models are combined: the ensemble model that used the weak label data gained a further 1% in F2 score. Additionally, there is a sizeable gap between precision and recall. Recall is given more consideration because of its greater impact on the F2 score. We found that lexical and semantic similarity have roughly the same effect during the experimental and evaluation phases. Consequently, γ is set to 0.5. Inference time is also an important aspect of the construction of the question-answering system, as it shows the feasibility of the system in practice. Table <ref> illustrates the results: with the computational resources of the experimental environment, we can use the model with the top 50 or 100 candidates, with execution times of 1 second and 1.7 seconds per question, respectively. Their F2 scores are also only 2-5% lower than the best model. Table <ref> shows that our recall and F2 scores (0.6651 and 0.6007) are considerably higher than those of the Attentive CNN <cit.> and Paraformer <cit.> models. Their models return a small number of related articles, while our system is designed to return a flexible number of articles using a threshold. This explains why their precision is high, about 0.5987, whereas our precision is only 0.4331. The set of thresholds for each top-k is listed in Table <ref>. Table <ref> describes an example of our legal question-answering system, compared with Paraformer <cit.>. Paraformer models frequently return a small number of related articles. Our system is more flexible, returning 3 related articles in this example, while the number of gold labels is 2. As a result, a model like Paraformer has high precision but low recall, whereas our method leans in the opposite direction. Since recall has a greater impact on the F2 score, our model's F2 score is significantly higher, by 11%. Our model predicts that "Article 466 from Doc 91/2015/QH13" is relevant to the given query, but the gold label is 0. Considering this article, we believe it is pertinent to the given question, but it seems that the annotator's point of view is different. In addition, we discovered some similar cases in our error analysis. Defining and agreeing on a measure of relevance is an important research question that requires the participation of the AI and Law community. This would not only benefit the development of automated methods but also make legal judgments and decisions more reliable and accurate. § CONCLUSIONS In this paper, we present a method to improve performance in the task of legal question answering for Vietnamese using language models through weak labeling. By demonstrating the effectiveness of this method through experiments, we verify the hypothesis that improving the quality and quantity of datasets is the right approach for this problem, especially for low-resource languages like Vietnamese. The results of our work can provide valuable insights and serve as a reference for future attempts to tackle similar challenges in low-resource legal question answering. § ACKNOWLEDGEMENT This work was supported by VNU University of Engineering and Technology under project number CN22.09.
http://arxiv.org/abs/2307.00087v1
20230630185504
Existence of a cylinder foliated by periodic orbits in the generalized Chazy differential equation
[ "Jaume Llibre", "Douglas D. Novaes", "Claudia Valls" ]
math.DS
[ "math.DS", "34C23, 34C25, 34C45, 34A36" ]
Generalized Chazy differential equation] Existence of a cylinder foliated by periodic orbits in the generalized Chazy differential equation J. Llibre, D. D. Novaes and C. Valls] Jaume Llibre^1, Douglas D. Novaes^2, and Claudia Valls^3 ^1 Departament de Matemàtiques, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Catalonia, Spain [email protected] ^2 Departamento de Matemática, Instituto de Matemática, Estatística e Computação Científica (IMECC), Universidade Estadual de Campinas (UNICAMP), Rua Sérgio Buarque de Holanda, 651, Cidade Universitária Zeferino Vaz, 13083-859, Campinas, SP, Brazil [email protected] ^3Departamento de Matemática, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1049–001, Lisboa, Portugal [email protected] [2010]34C23, 34C25, 34C45, 34A36 [ [ ===== The generalized Chazy differential equation corresponds to the following two-parameter family of differential equations ⃛ x+|x|^q ẍ+k |x|^qxẋ^2=0, which has its regularity varying with q , a positive integer. Indeed, for q=1 it is discontinuous on the straight line x=0, whereas for q a positive even integer it is polynomial, and for q>1 a positive odd integer it is continuous but not differentiable on the straight line x=0. In 1999, the existence of periodic solutions in the generalized Chazy differential equation was numerically observed for q=2 and k=3. In this paper, we prove analytically the existence of such periodic solutions. Our strategy allows to establish sufficient conditions ensuring that the generalized Chazy differential equation, for k=q+1 and any positive integer q , has actually an invariant topological cylinder foliated by periodic solutions in the (x,ẋ,ẍ)-space . In order to set forth the bases of our approach, we start by considering q=1,2,3, which are representatives of the different classes of regularity. For an arbitrary positive integer q , an algorithm is provided for checking the sufficient conditions for the existence of such an invariant cylinder, which we conjecture that always exists. The algorithm was successfully applied up to q=100. In 1999, Géronimi et al <cit.> introduced a two-parameter family of third order differential equations called generalized Chazy differential equation. An interesting feature of the generalized Chazy differential equation is that its regularity changes with a positive integer parameter q. Indeed, for q=1 the generalized Chazy differential equation is discontinuous at x=0, whereas for q a positive even integer it is polynomial, and for q>1 a positive odd integer it is continuous but not differentiable at x=0. They performed numerical computations and, under some constraints, they observed (only numerically) the existence of periodic solutions. Usually, to prove analytically the existence of periodic solutions in a given differential equation is not an easy problem. The difficulty increases significantly when the dimension or the order is greater than two. In this paper, we develop a strategy to detect analytically the existence of periodic solutions in the generalized Chazy differential equation. Our strategy allows to establish sufficient conditions ensuring that the generalized Chazy differential equation, for any positive integer q , has actually an invariant topological cylinder foliated by periodic solutions in the (x,ẋ,ẍ)-space. First, we will focus our analysis in the cases q=1,2,3 (representatives of the above different classes of regularity), for which we will prove the existence of such an invariant cylinder. 
These initial cases will set forth the bases for an algorithmic approach for an arbitrary given positive integer q, which will be successfully applied up to q=100. § INTRODUCTION AND STATEMENTS OF THE MAIN RESULTS In 1997, Feix et al. <cit.> introduced the following general third order ordinary differential equation ⃛x + x^3q+1 f(a,b)=0, which is invariant under time translation and rescaling symmetries. By taking f(a,b)=ka^2+b, the differential equation (<ref>) becomes ⃛x + x^q ẍ + k x^q-1 ẋ^2=0, which, for q=1 and k=-3/2, corresponds to the well known Chazy differential equation, introduced by Chazy <cit.> in 1911. Numerical computations on the actual Chazy differential equation did not detect any periodic solution, while periodic solutions were numerically observed for a large number of initial conditions in the differential equation (<ref>) for q = 2 and k = 3. Those numerical observations were reported in 1999 by Géronimi et al. <cit.>, which led them to introduce the so-called generalized Chazy differential equation ⃛x + |x|^q ẍ + k (|x|^q/x) ẋ^2=0. Usually, to prove analytically the existence of periodic solutions in a given differential equation is not an easy problem. The difficulty of the problem increases significantly when the dimension or the order is greater than two. The objective of this paper is twofold: first, to provide an analytic proof of the existence of the periodic solutions detected only numerically in <cit.> in the generalized Chazy differential equation, and second, to provide sufficient conditions ensuring that the generalized Chazy differential equation actually has an invariant cylinder foliated by periodic solutions in the (x,ẋ,ẍ)-space. The generalized Chazy differential equation can be written as the following first order differential system in ^3 X:{ẋ= y, ẏ= z, ż= -|x|^q z - k (|x|^q/x) y^2. When k=q+1 system (<ref>) has the first integral H(x,y,z)= x z - y^2/2 + |x|^q x y. Notice that the regularity of the generalized Chazy differential system (<ref>) changes with q. Indeed, for q=1 the generalized Chazy differential system (<ref>) is discontinuous on the straight line x=0, whereas for q a positive even integer it is polynomial, and for q>1 an odd positive integer it is continuous but not differentiable on the straight line x=0. §.§ Main results In this paper, we propose an algorithmic approach to detect analytically the existence of an invariant cylinder foliated by periodic orbits for any given positive integer q. But first, in order to set forth the bases of our approach, we shall focus our analysis on the cases q=1,2,3, which are representatives of the above classes of regularity. Our first main result is the following. For q=1,2,3 and k=q+1, the generalized Chazy differential system (<ref>) has an invariant cylinder foliated by periodic orbits. Theorem <ref> is proven in Section <ref>. Its proof is divided into three subsections, which cover the cases q=1 (discontinuous), q=2 (polynomial), and q=3 (continuous), respectively. The proof follows by showing the existence of a fixed point of a suitable Poincaré return map in each negative energy level of the first integral H. Due to the nonlinear nature of the generalized Chazy differential system (<ref>), it is not possible to integrate the flow to obtain an explicit expression of the Poincaré return map, so tools from the qualitative theory must be employed. In what follows, we provide the idea of the proof.
First, by applying a rescaling of the variables and the time, we reduce the analysis to the energy level H=-1, which is a topological cylinder. Second, by taking advantage of a symmetry of the system on this level, we show that a Poincaré return map is well defined, for which we can ensure the existence of a fixed point via the construction of a convenient trapping region. This implies the existence of a periodic orbit in each negative energy level, which varies smoothly with the energy, producing a topological cylinder foliated by periodic orbits. The above strategy, developed for proving the existence of invariant cylinders foliated by periodic orbits for q=1,2,3, can be extended to other positive integer values of q. Nevertheless, we have to make the following remark about this strategy. At some point in the proof of Theorem <ref>, it will be necessary to estimate the number of roots of some polynomials on specific intervals. While for q=1,2,3 this can be done with a Sturm procedure (see, for instance, <cit.>), for an undetermined q the Sturm procedure does not work well. Alternatively, in what follows, our second main result provides sufficient conditions ensuring the existence of an invariant cylinder foliated by periodic orbits of the generalized Chazy differential system (<ref>) when q is an arbitrary given positive integer. Let q be a positive integer and consider the following polynomials in the variable u: P_0(u)= 8 q^2 - 4 q^2 (1 + 4 q) u + 8 q^3 u^2 - 16 (1 + q)^2 u^2(q-1) + 8 (1 + q) (3 + 2 q) (1 + 4 q) u^2q-1 - (1 + 2 q) (9 + 2 q (7 + 4 q)^2) u^2q + 8 q (1 + 4 q) (4 + q (5 + 2 q)) u^2q+1 - 4 q^2 (7 + 4 q (2 + q)) u^2(q+1) + 8 (1 + q) (1 + 4 q) u^4q+1 - 16 q (1 + q) u^2(1+ 2 q), P^+ (u)= 2^5+4q/2(1+q) q (1 + q)^1/1+q (2 + q) - 2^1/1+q (1 + q)^2/1+q (3 + 2 q)^2 u - 8 √(2) q u^q + 2^4+3q/2(1+q) (1 + q)^2+q/1+q (9 + 2 q) u^1+q - 2 (9 + 10 q) u^1+2q + 2^5+4q/2(1+q) q (1 + q)^1/1+q u^2(1+q) - 4 √(2) q u^2+3q, P^- (u)= 2^5+4q/2(1+q) q (1 + q)^1/1+q (2 + q) - 2^1/1+q (1 + q)^2/1+q (3 + 2 q)^2 u - 8 (-1)^q √(2) q u^q + (-1)^q 2^4+3q/2(1+q) (1 + q)^2+q/1+q (9 + 2 q) u^1+q - 2 (9 + 10 q) u^1+2q + 2^5+4q/2(1+q) q (1 + q)^1/1+q u^2(1+q) - 4 (-1)^q √(2) q u^2+3q. Assume that the following conditions hold: C1. the polynomial P_0 does not vanish on I_0:=(2, (1+4q)/(2q)); C2. the polynomial P^+ has at most one root (counting its multiplicity) in I^+:=(0, u_i,q^D); C3. the polynomial P^- has at most one root (counting its multiplicity) in I^-:=(-2,0). Then, for k=q+1, the generalized Chazy differential system (<ref>) has an invariant cylinder foliated by periodic orbits. Theorem <ref> is proved in Section <ref>. Notice that it allows the development of an algorithmic approach based on the Sturm procedure to detect analytically the existence of an invariant cylinder foliated by periodic orbits of the generalized Chazy differential system (<ref>) when q is an arbitrary given positive integer. A Mathematica algorithm is provided as supplementary material. We have run the algorithm for q=1,…,100, which provided the following result: For k=q+1 and 1≤ q ≤ 100, the generalized Chazy differential system (<ref>) has an invariant cylinder foliated by periodic orbits. The obtained results lead us to make the following conjecture: For k=q+1 and q a positive integer, the generalized Chazy differential system (<ref>) has an invariant cylinder foliated by periodic orbits.
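For a given q, condition C1 can be verified with a Sturm procedure analogous to the Mathematica routine mentioned above. The following is a minimal sketch in Python/SymPy (not the supplementary algorithm itself); the use of SymPy and the conservative handling of the endpoints are assumptions, while the polynomial P_0 and the interval I_0 are taken from the statement above. Conditions C2 and C3 involve irrational coefficients and require a more careful treatment, which this sketch does not attempt.

from sympy import symbols, Rational, sturm, Poly, sign

u = symbols('u')

def P0(q):
    """The polynomial P_0 of the theorem above, for a given positive integer q."""
    return (8*q**2 - 4*q**2*(1 + 4*q)*u + 8*q**3*u**2
            - 16*(1 + q)**2 * u**(2*(q - 1))
            + 8*(1 + q)*(3 + 2*q)*(1 + 4*q) * u**(2*q - 1)
            - (1 + 2*q)*(9 + 2*q*(7 + 4*q)**2) * u**(2*q)
            + 8*q*(1 + 4*q)*(4 + q*(5 + 2*q)) * u**(2*q + 1)
            - 4*q**2*(7 + 4*q*(2 + q)) * u**(2*(q + 1))
            + 8*(1 + q)*(1 + 4*q) * u**(4*q + 1)
            - 16*q*(1 + q) * u**(2*(1 + 2*q)))

def sign_changes(values):
    vals = [v for v in values if v != 0]
    return sum(1 for a, b in zip(vals, vals[1:]) if sign(a) != sign(b))

def check_C1(q):
    """True if P_0 has no real root in the open interval I_0 = (2, (1+4q)/(2q)).

    Sturm's theorem: if the polynomial does not vanish at the endpoints, the
    number of distinct real roots in (a, b) equals V(a) - V(b), where V(x) is
    the number of sign changes of the Sturm sequence evaluated at x.
    """
    a, b = Rational(2), Rational(1 + 4*q, 2*q)
    p = Poly(P0(q), u)
    if p.eval(a) == 0 or p.eval(b) == 0:
        return False  # endpoint root: be conservative in this sketch
    chain = sturm(p)
    return sign_changes([s.eval(a) for s in chain]) == sign_changes([s.eval(b) for s in chain])

print([check_C1(q) for q in range(1, 6)])  # expected: all True (cf. the corollary for q <= 100)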
Of course, the proof of Conjecture <ref> relies only on checking whether conditions C1, C2, and C3 of Theorem <ref> hold for any positive integer q. §.§ Structure of the paper In Section <ref>, we provide the common bases for the proofs of our main results (Theorems <ref> and <ref>), by reducing the problem to study the transition maps of planar vector fields. In Section <ref>, we present the proof of Theorem <ref>. Some details of this proof, concerning the Sturm procedure, will be provided in the Appendix. In Section <ref>, we present the proof of Theorem <ref>. A Mathematica algorithm, based on a Sturm procedure, for checking the conditions of Theorem <ref> is provided as supplementary material. § REDUCED PROBLEM Consider the hyperbola Σ={(u,v): u v+1=0} and its positive and negative branches Σ^+={(u,v): u v+1=0, u>0} andΣ^-={(u,v): u v+1=0, u<0}, respectively. In this section, we shall see that the problem of showing the existence of an invariant cylinder foliated by periodic orbits of the 3D Chazy differential system (<ref>) is equivalent to show the existence of a point p in Σ^+ whose flow through a planar differential system (associated to the restriction of (<ref>) to the energy level H=-1) intersects Σ^- at -p. Consider the first integral (<ref>). By solving H(x,y,z)=-ω^2, with ω>0, in the variable y, we obtain a family of invariant surfaces S_ω,q:=S_ω,q^-∪ S_ω,q^+, where S_ω,q^±={(x,y,z)∈^3: y=f^±_ω,q(x,z) and x z+ω^2≥0}, with f^±_ω,q(x,z)=x | x |^q ±√(x^2(1+q)+2xz+2ω^2). Notice that, for each ω, S_ω,q^- and S_ω,q^+ intersect the plane y=0 on the hyperbola Σ_ω={(x,y,z): x z+ω^2=0} and, therefore, S_ω,q is a topological cylinder. The basis of our approach consists in proving that the invariant surface S_ω,q contains a periodic orbit γ_ω,q for each ω. In order to do that, we will define a Poincaré return map on the branches of the hyperbola Σ_ω^±, namely, Σ_ω^-={(x,y,z): x z+ω^2=0, x<0}, Σ_ω^+={(x,y,z): x z+ω^2=0, x>0}. Denote by X_ω,q^- and X_ω,q^+ the reduced systems of (<ref>) on S_ω,q^- and S_ω,q^+, respectively, which are given by X_ω,q^±: {ẋ= x | x |^q ±√(x^2(1+q)+2xz+2ω^2), ż= | x |^q/x (-x z-(q+1) (x | x |^q ±√(x^2(1+q)+2xz+2ω^2))^2, . for xz+ω^2≥0. Let ϕ^±_ω,q(t,·) denote the flow of the vector field X_ω,q^±. In order to conclude the existence of a periodic orbit contained in S_ω,q, it is sufficient to show the existence of a point p_0,q∈Σ^-_ω,q and t_0,q,t_1,q>0 such that p_1,q:=ϕ^-_ω,q(t_0,q,p_0,q)∈Σ^+_ω and ϕ^+_ω,q(t_1,q,p_1,q)=p_0,q. In this case the periodic orbit is given by γ_ω,q=γ^+_ω,q∪γ^-_ω,q, where γ^-_ω,q= {(x,y,z): (x,z)=ϕ_ω,q^-(t,p_0,q), y=f^-_ω,q(ϕ_ω,q^-(t,p_0,q)), and 0≤ t≤ t_0,q} and γ^+_ω,q= {(x,y,z): (x,z)=ϕ_ω,q^+(t,p_1,q), y=f^+_ω,q(ϕ_ω,q^+(t,p_1,q)), and 0≤ t≤ t_1,q}. Now, in order to simplify the reduced systems X^±_ω,q, we apply the following change of variables and time-rescaling (x,z)=(ω^1/1+q u,ω^1+2q/1+qv) and t=ω^-q/1+qτ. Thus, the vector fields X_ω,q^± become X_q^±: { u'= u | u |^q ±√(u^2(1+q)+2uv+2), v'= | u |^q/u (-u v-(q+1) (u | u |^q ±√(u^2(1+q)+2uv+2))^2), . for u v+1≥0, and the curves Σ_ω^+ and Σ_ω^- become Σ^+ and Σ^-, respectively. Notice that, if ϕ_q^±(t,·) denote the flow of the vector fields X_q^±, then ϕ^±_q,ω(t,p)=Ω ϕ_q^±(ω^q/1+qt,Ω^-1 p) where Ω=([ ω^1/1+q 0; 0 ω^1+2q/1+q ]). The existence of a periodic orbit will follow by showing the existence of a point p_q^*=(u_q^*,-1/u_q^*)∈Σ^+ and a time t_q^*>0 such that ϕ_q^-(t_q^*,p_q^*)=-p_q^*=(-u_q^*,1/u_q^*)∈Σ^-. 
Indeed the vector fields X_q^+ and X_q^- satisfy X_q^+(u,v)=-X_q^-(-u,-v). This means that ϕ_q^+(t,p)=-ϕ_q^-(t,-p) and, therefore, ϕ_q^+(t_q^*,-p_q^*)=-ϕ_q^-(t_q^*,p_q^*)=p_q^*. Hence, relationship (<ref>) will hold by taking p_0,q=Ω p_q^* and t_0,q=t_1,q=ω^1+q/qt_q^*. § PROOF OF THEOREM <REF> This section is devoted to the proof of Theorem <ref>, which is divided into three subsections, which cover the cases: q=1 (discontinuous), in Subsection <ref>; q=2 (polynomial), in Subsection <ref>, and q=3 (continuous), in Subsection <ref>. Some details concerning the Sturm procedure (see, for instance, <cit.>) will be provided in the Appendix. §.§ Proof of Theorem <ref> for the discontinuous case q=1. In this case, it follows from (<ref>) that X_1^±={[ X_1,L^± if u<0,; X_1,R^± if u>0, ]. for u v+1≥0, where X_1,L^±: { u'= -u^2 ±√(u^4+2uv+2), v'= u v+2 (u | u |±√(u^4+2uv+2))^2, . and X_1,R^±: { u'= u^2 |±√(u^4+2uv+2), v'= -u v-2 (u | u |±√(u^4+2uv+2))^2. . See Figure <ref> for an idea of the phase space of the vector fields X_1^- (in the left) and X_1^+ (in the right). Here, the Filippov's convention <cit.> is assumed for the trajectories of X_1^±. Consider the sections Σ_D^1 and Σ_I^1 on the hyperbola u v+1=0, given by Σ_D^1= {(u,v): v =-1/u: u_i,1^D ≤ u≤ u_f,1^D} and Σ_I^1= {(u,v): v =-1/u: u_i,1^I ≤ u≤ u_f,1^I}, where u_i,1^D= 1/2^3/4, u_f,1^D= 2, u_i,1^I= -1+√(2)/2^3/4 and u_f,1^I=-1/2^3/4 . We are going to show that the flow of X_1^- induces a map P_1:Σ_D^1→Σ_I^1. To do that, we will construct a compact connected trapping region K^1 such that Σ_D^1∪Σ_I^1 ⊂∂ K^1 and X_1^- points inward everywhere on ∂ K^1 except at Σ_I^1 (see Figure <ref>). Since X_1^- does not have singularities in K^1 and both vector fields X_1,L^- and X_1,R^- are transversal on u=0 and point to the left, applying the Poincaré-Bendixson Theorem (see, for instance, <cit.>) first in the region K^1∩{u≥0} and then in the region K^1∩{u≤0}, we conclude that: (C1) for each point p∈Σ_D^1, there exists t(p)>0 such that P_1(p):=ϕ_1^-(t(p),p)∈Σ_I^1. Let K^1 (see Figure <ref>) be the compact region delimited by ∂ K^1=Σ_D^1∪Σ_U^1∪ R_1 ∪⋯∪ R_5, where R_1= {(u,0), 0<u≤ 5/2}, R_2= {(u,v): v =2 u -5/2: u_f,1^D ≤ u ≤ 5/2}, R_3= {(u,v): v =2 √(2) |u| -2^7/4: 0 ≤ u ≤ u_i,1^D}, R_4= {(u,v): v =2 √(2) |u| -2^7/4: u_i,1^I≤ u <0}, R_5= {(u,v): v =2 √(2) |u|:u_f,1^I ≤ u ≤ 0}. In what follows, we are going to analyze the behavior of the vector field X_1^- on each component of the boundary of K^1. Behavior of X_1^- on Σ_D^1 and Σ_I^1. The hyperbola g(u,v)=u v + 1=0 on Σ_D^1 satisfies ⟨∇ g(u,v),X_1^-(u,v)⟩|_(u,v)∈Σ_D^1= u, which does not vanish on Σ_D^1. Note that the flow of the vector field X_1^- points inwards K^1 on Σ_D^1, because for instance X_1^-(1,-1)=(0,1). The curve g(u,v)=u v + 1=0 on Σ_I^1 satisfies ⟨∇ g(u,v),X_1^-(u,v)⟩|_(u,v)∈Σ_I^1 =u(1+8 u^4), which does not vanish on Σ_I^1. Note that the flow of the vector field X_1^- points outwards K^1 on Σ_I^1, because for instance X_1^-(-1,1)=(-2,7). Behavior of X_1^- on R_1. The curve g(u,v)=v=0 satisfies ⟨∇ g(u,v),X_1^-(u,v)⟩|_g(u,v)=0 = - 4 (1+u^4 -u^2 √(2 + u^4)). We will show that this derivative does not vanish for u>0. We proceed by contradiction. Assume that 1+u^4 -u^2 √(2 + u^4)=0. Then, √(2 + u^4) = 1+u ^4/u^2 ⇔ 2 + u^4 = (1+u ^4/u^2)^2 = 1+u^8 + 2 u^4/u^4 ⇔ -1/u^4=0, which is not possible. This shows that the flow along R_1 points always inwards K^1, because for instance X_1^-(1,0)= (1+√(3),-4 (2+√(3))). Behavior of X_1^- on R_2. 
The curve g(u,v)=v- u +5/2=0 satisfies ⟨∇ g(u,v),X_1^-(u,v)⟩|_g(u,v)=0 = -4 + 25 /2 u -6 u^2 - 4 u^4 + (4 u^2+1) √(2 - 5 u + 2 u^2 + u^4) . We will show that this derivative does not vanish on R_2 that is, for u_f,1^D ≤ u ≤ 5/2. We proceed by contradiction. Assume that it is zero at some u. Then, proceeding analogously to R_1 to get rid of the square root, we obtain ( -4 + 25 /2 u -6 u^2 - 4 u^4 )^2 = (1+4u^2)^2 (2 - 5 u + 2 u^2 + u^4), or equivalently p_0(u) := -14 + 95 u - 745/4 u^2 + 110 u^3 - 19 u^4 + 20 u^5 - 8 u^6=0. Using the Sturm procedure we conclude that the polynomial p_0(u) does not vanish on u_f,1^D < u < 5/2 which is a contradiction (see Appendix). Hence, the flow of the vector field X_1^- points inwards K^1 on R_2. Behavior of X_1^- on R_3 and R_4. On the curve g(u,v)=v-2 √(2) |u| -2^7/4=0 we have: for u>0 ⟨∇ g(u,v),X_1^-(u,v)⟩|_g(u,v)=0 =p_1(u): =-4 + 10 · 2^3/4 u-12 √(2) u^2 - 4 u^4 ≤ + (4 u^2 + 2 √(2)) √(2 - 4 · 2^3/4 u + 4 √(2) u^2 + u^4), and for u<0, ⟨∇ g(u,v),X_1^-(u,v)⟩|_g(u,v)=0 =p_2(u): = 4 - 10 · 2^3/4 u-12 √(2) u^2 + 4 u^4 ≤ + (4 u^2 - 2 √(2)) √(2 - 4 · 2^3/4 u + 4 √(2) u^2 + u^4). Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is enough to show that p_1(u) does not vanish on 0 < u ≤ u_i,1^D and that the p_2(u) does not vanish on u_i,1^I ≤ u <0. We proceed by contradiction. Assume first that p_1(u) has a zero. Then, proceeding as in R_1 to get rid of the square root, we have that p_1(u)=0 implies p_3(u)=12 · 2^3/4 - 58 √(2) u + 88 · 2^1/4 u^2 - 38 u^3 + 4 · 2^3/4 u^4 - 4 √(2) u^5=0. Using the Sturm procedure we have that the polynomial p_3(u) has a unique simple real zero on u>0 (see Appendix). Moreover, we have that p_1(0)=0, p_1'(0)=6 · 2^3/4, p_1(u_i,1^D) = 1. These computations show that if p_1(u) has a zero in (0,u_i,1^D) then it has another (or a multiple) zero on that interval. This would produce another zero of p_3(u) in (0, ∞), which is a contradiction. Hence the flow of the vector field X_1^- points inwards K^1 on R_3. On the other hand assume that p_2(u) has a zero. Then, proceeding as in R_1 to get rid of the square root, we have that p_2(u)=0 implies p_4(u)=12 · 2^3/4 - 42 √(2) u - 88 · 2^1/4 u^2 - 38 u^3 + 4 · 2^3/4 u^4 + 4 √(2) u^5=0. Using the Sturm procedure we conclude that the polynomial p_4(u) has a unique simple real zero on u<0 (see Appendix). Moreover, we have that p_2(0)=0, p_2'(0)=-6 · 2^3/4, p_2(u_i,1^I)= 38.5802... These computations show that if p_2(u) has a zero in (u_i,1^I,0), then either it is multiple or p_2(u) has at least two negative zeros in contradiction with uniqueness of negative zeros provided by Sturm procedure. Hence the flow of the vector field X_1^- points inwards K^1 on R_4. Behavior of X^- on R_5. Note that on the curve g(u,v)=v- 2√(2) |u| =0 we have ⟨∇ g(u,v),X_1^-(u,v)⟩|_g(u,v)=0 =p_5(u):= 4 - 12 √(2) u^2 + 4u^4 +(4 u^2 - 2 √(2))√(2 - 4 √(2) u^2 + u^4). Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is sufficient to show that this derivative does not vanish on u_f,1^I < u < 0. We proceed by contradiction. Assume that it is zero. Then, proceeding as in the curve R_1 to get rid of the square root, we have that p_5(u)=0 implies 2 u^2 (4 √(2) - 19 u^2 + 2 √(2) u^4)/ (√(2) - 2 u^2)^2=0. Note that the denominator does not vanish on u_f,1^I < u < 0, because the unique negative real solution of √(2) - 2 u^2=0, which is -1/2^1/4, is outside the mentioned interval. 
On the other hand the unique two negative real solutions of the numerator are u_1=-1/2(19 √(2) - 3 √(66)/2)^1/2 and u_2=-1/2(19 √(2) + 3 √(66)/2)^1/2. We observe that u_1 belongs to the interval u_f,1^I < u < 0 while u_2 does not. However, u_1 is not a solution of p_5(u)=0 because p_5(u_1) = -3/8(-78 +14 √(33) + 3 √(22 (19-3 √(33))) - 5 √(6 (19-3 √(33)))) 0. This contradiction shows that the flow of the vector field X_1^- points inwards K^1 on R_5. Existence of p_1^*. Consider the map h_1 Σ_D^1 → defined by h_1(p)=π (P_1(p)+p), where π:^2→ is the projection onto the first coordinate (see (C1)). Note that by the continuous dependence of the flow of X_1^- with respect to the initial conditions the map h_1 is continuous. Moreover, the image of the point (u_f,1^D,-1/u_f,1^D) by P_1 (which is the point w_4 in Figure <ref>) is inside Σ_I^1, and its symmetric is below its image because u_f,1^D + u_i,1^I>0. Hence h_1(u_f,1^D,-1/u_f,1^D)>0. On the other hand, the image of the point (u_i,1^D,-1/u_i,1^D) by P_1 (which is the point w_3 in Figure <ref>) is inside Σ_I^1, and its symmetric is above its image because u_i,1^D +u_f,1^I<0. Therefore h_1(u_i,1^D,-1/u_i,1^D)<0. Thus, by continuity, there exists p_1^*∈Σ_D^1 such that h_1(p_1^*)=0, i.e, P_1(p_1^*)=-p_1^* and so ϕ_1^-(t^*_1,p_1^*)=-p_1^* as we wanted to prove. §.§ Proof of Theorem <ref> for the polynomial case q=2. In this case, it follows from (<ref>) that X_2^±: { u'= u^3 ±√(u^6+2uv+2), v'= u(-u v-3 (u^3 ±√(u^6+2uv+2))^2), . for u v+1≥0. See Figure <ref> for the phase space of the vector field X_2^- (in the left) and X_2^+ (in the right). Consider the arcs Σ_D^2 and Σ_I^2 on the hyperbola u v+1=0, given by Σ_I^2= {(u,v): v =-1/u: u_i,2^I ≤ u≤ u_f,2^I}, where u_i,2^D= 2^-1/6 3^-1/3, u_f,2^D= 2, u_i,2^I= -2^5/6/3^1/3 and u_f,2^I= -2^1/6/3^1/3. As for the discontinuous case q=1 we are going to show that in this case the flow of X_2^- induces a map P_2:Σ_D^2→Σ_I^2, by constructing a trapping region K^2 such that Σ_D^2∪Σ_I^2⊂∂ K^2 and X_2^- will point inwards K^2 everywhere on ∂ K^2 except at Σ_I^2, see Figure <ref>. Since X_2^- does not have singularities in K^2, from the Poincaré-Bendixson Theorem (see, for instance, <cit.>), we conclude that: (C2) for each point p∈Σ_D^2, there is t(p)>0 such that P_2(p):=ϕ_2^-(t(p),p)∈Σ_I^2. Let K^2 be the compact region delimited by ∂ K^2=Σ_D^2∪Σ_U^2∪ S_1 ∪⋯∪ S_5, where S_1= {(u,0), 0<u≤ 9/4}, S_2= {(u,v): v =2 u -9/2: u_f,2^D ≤ u ≤ 9/4}, S_3= {(u,v): v =3 u^2/√(2) -3^4/3/2^5/6: 0 ≤ u ≤ u_i,2^D}, S_4= {(u,v): v =3 u^2/√(2) -3^4/3/2^5/6: u_i,2^I≤ u <0}, S_5= {(u,v): v =3 u^2/√(2):u_f,2^I ≤ u ≤ 0}. In what follows, we are going to analyze the behavior of the vector field X_2^- on each component of the boundary of K^2. Behavior of X_2^- on Σ_D^2 and Σ_I^2. The hyperbola g(u,v)=u v + 1=0 satisfies ⟨∇ g(u,v),X_2^-⟩ |_g(u,v)=0 = u^2, which does not vanish on Σ_D^2∪Σ_I^2. Note that the flow of the vector field X_2^- points inwards K^2 on Σ_D^2∪Σ_I^2, because, for instance, X_2 ^-(1,-1)=(0,1) and X_2^-(-1,1)=(-2,11). Behavior of X_2^- on S_1. The curve g(u,v)=v=0 satisfies ⟨∇ g(u,v),X_2^-⟩|_g(u,v)=0 = 6 u (1+u^6 -u^3 √(2 + u^6)). We will show that this derivative does not vanish on u>0. We proceed by contradiction. Assume that 6 u (1+u^6 -u^3 √(2 + u^6))=0. Then, since u > 0 we have 1+u^6 -u^3 √(2 + u^6)=0, and so √(2 + u^6) = 1+u ^6/u^3 ⇔ 2 + u^6 = (1+u ^6/u^3)^2 = 1+u^12 + 2 u^6/u^6 ⇔ -1/u^6=0, which is not possible. 
This shows that the flow of the vector field X_2^- points inwards K^2 on S_1, because for instance X_2^-(1,0)=(-1-√(2),-3 (1+√(3))). Behavior of X_2^- on S_2. The curve g(u,v)=v-2 u +9/2=0 satisfies ⟨∇ g(u,v),X_2^-⟩|_g(u,v)=0 = -6 u + 63/2u^2 - 16 u^3 - 6 u^7 + (2+6u^4)√(u^6 + (u-2) (4u-1 )). We will show that this derivative does not vanish on S_2, i.e, for u_f,2^D ≤ u ≤ 9/4. We proceed by contradiction. Assume that it is zero at some u. Then, proceeding as in S_1 to get rid of the square root, we obtain ( 6 u - 63/2u^2 + 16 u^3 + 6 u^7)^2 = (2+6u^4)^2 (u^6 + (-2 + u) (-1 + 4 u)), and so u^2 p_6(u)=0, where p_6(u) := 32 - 144 u - 80 u^2 + 1512 u^3 - 4545 u^4 + 3168 u^5 - 624 u^6 + 216 u^9 - 96 u^10. Using the Sturm procedure we obtain that the polynomial p_6(u) does not vanish on u_f,2^D < u < 9/4 which is a contradiction (see Appendix). Hence, the flow of the vector field X_2^- points inwards K^2 on S_2. Behavior of X_2^- on S_3 and S_4. Note that on g(u,v)=v-3 u^2/√(2) -3^4/3/2^5/6=0 we have ⟨∇ g(u,v),X_2^-⟩|_g(u,v)=0 =-u ( 6 - 21 · 3^1/3 u/2^5/6 + 27 u^3/√(2) + 6 u^6 + (3 √(2)+ 6 u^3) ≤×√(2 - 3 · 2^1/6 3^1/3 u + 3 √(2) u^3 + u^6)). Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is enough to show that this derivative does not vanish neither on 0 < u ≤ u_i,2^D, nor on u_i,2^I ≤ u <0. We proceed by contradiction. Assume that p_7(u):=6 - 21 · 3^1/3 u/2^5/6 + 27 u^3/√(2) + 6 u^6 + (3 √(2)+ 6 u^3) √(2 - 3 · 2^1/6 3^1/3 u + 3 √(2) u^3 + u^6) has a zero. Then, proceeding as in S_1 to get rid of the square root, we have that p_7(u)=0 implies p_8(u) := 32 · 2^1/6 3^1/3 - 49 · 2^1/3 3^2/3 u - 16 √(2) u^2 + 78· 2^2/3 3^1/3 u^3 - 58 u^5 + 8 · 2^1/6 3^1/3 u^6 - 8 √(2) u^8=0. Using the Sturm procedure we show that the polynomial p_8(u) has a unique simple real zero on u>0, and a unique simple real zero on u<0 (see Appendix). Moreover we have that p_7(0)=0, p_7'(0)=0, p”_7(0)=12 · 2^1/6 3^1/3, p_7(u_i,2^D) = 0.617715, p_7(u_i,2^I)= 31.7094. These computations show that if p_7(u) has a zero in (0,u_i,2^D), then it has another (or a multiple) zero on that interval. This would produce another zero of p_8(u) in (0, ∞), which is a contradiction. On the other hand, if p_7(u) has a zero in (u_i,2^I,0) then it has another (or a multiple) zero on that interval and this would produce another zero of p_8(u) in (-∞, 0), which is again a contradiction. In short we have proved that the flow of the vector field X_2^- points inwards K^2 on S_3 and S_4. Behavior of X_2^- on S_5. The curve g(u,v)=v- 3 u^2/√(2) =0 satisfies ⟨∇ g(u,v),X_2^-⟩|_g(u,v)=0 = p_9(u):=-3/2 u (4 + 9 √(2) u^3 + 4 u^6 + (2√(2) + 4 u^3) √(2 + 3 √(2) u^3 + u^6)). Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is sufficient to show that this derivative does not vanish on u_f,2^I < u < 0. Proceeding as in S_1 to get rid of the square root, we have that p_9=0 implies √(2 + 3 √(2) u^3 + u^6) = 4 + 9 √(2) u^3 + 4 u^6/ 2√(2) + 4 u^3 ⇒ 2 + 3 √(2) u^3 + u^6 = (4 + 9 √(2) u^3 + 4 u^6/ 2√(2) + 4 u^3)^2, and so -u^3 (8 √(2) + 29 u^3 + 4 √(2) u^6)/2 (√(2) + 2 u^3)^2=0. Note that the denominator does not vanish on u_f,2^I < u < 0, because the unique negative real solution of √(2) + 2 u^3=0, which is -1/2^1/6 is outside the mentioned interval. On the other hand the unique two real solutions of the numerator are u_1=-1/2(29 √(2) - 3 √(130)/2)^1/3 and u_2=-1/2(29 √(2) + 3 √(130)/2)^1/3. We observe that u_1 belongs to the interval u_f,2^I < u < 0 while u_2 does not. 
However, u_1 is not a solution of p_9(u)=0 because p_9(u_1) = -1/2(29 √(2) - 3 √(130)/2)^1/3 0. This contradiction shows that the flow X_2^- point inwards K^2 on S_5. Existence of p_2^*. Consider the map h_2 Σ_D^2 → defined by h_2(p)=π (P_2(p)+p), where π:^2→ is the projection onto the first coordinate (see (C2)). Note that by the continuous dependence of the flow of X_2^- with respect to the initial conditions the map h_2 is continuous. Moreover, the image of the point (u_f,2^D,-1/u_f,2^D) by P_2 (which is the point w_4 in Figure <ref>) is inside Σ_I^2, and its symmetric is below its image because u_f,2^D + u_i,2^I>0. Hence h_2(u_f,2^D,-1/u_f,2^D)>0. On the other hand, the image of the point (u_i,2^D,-1/u_i,2^D) by P_2 (which is the point w_3 in Figure <ref>) is inside Σ_I^2, and its symmetric is above its image because u_i,2^D +u_f,2^I<0. Therefore h_2(u_i,2^D,-1/u_i,2^D)<0. Thus, by continuity, there exists p_2^*∈Σ_D^2 such that h_2(p_2^*)=0, i.e, P_2(p_2^*)=-p_2^* and so ϕ_2^-(t^*_2,p_2^*)=-p_2^* as we wanted to prove. §.§ Proof of Theorem <ref> for the continuous case q=3 In this case, it follows from (<ref>) that X_2^±: { u'= u | u |^3 ±√(u^8+2uv+2), v'= | u |^3/u(-u v-4 (u | u |^3 ±√(u^8+2uv+2))^2). . for u v+1≥0. See Figure <ref> for the phase spaces of the vector field X_3^- (in the left) and X_3^+ (in the right), defined for u v≤-1. Consider the sections Σ_D^3 and Σ_I^3 on the hyperbola u v+1=0, given by Σ_D^3= {(u,v): v =-1/u: u_i,3^D ≤ u≤ u_f,3^D} and Σ_I^3= {(u,v): v =-1/u: u_i,3^I ≤ u≤ u_f,3^I}, where u_i,3^D=1/2^5/8 , u_f,3^D= 2, u_i,3^I=N_i,3^I/D_i,3^I and u_f,3^I=-3^1/4/2^5/8 being D_i,3^I = 2^13/8(1+√(2))^1/3√(1+√(2)-(1+√(2))^1/3), N_i,3^I = -2-√(2)+√(2) (1+√(2))^1/3-(-6-4 √(2)-2(1+√(2))^2/3+4 (1+√(2))^4/3 ≤ + 4 (2+√(2))√(1+√(2)-(1+√(2))^1/3))^1/2. As for the discontinuous case q=1 we are going to show that in this case the flow of X_3^- induces a map P_3:Σ_D^2→Σ_I^3, by constructing a trapping region K^3 such that Σ_D^3∪Σ_I^3⊂∂ K^3 and X_3^- will point inwards K^3 everywhere on ∂ K^3 except at Σ_I^3, see Figure <ref>. Since X_3^- does not have singularities in K^3, from the Poincaré-Bendixson Theorem (see, for instance, <cit.>), (C3) for each point p∈Σ_D^3, there is t(p)>0 such that P_3(p):=ϕ_3^-(t(p),p)∈Σ_I^3. Let K^3 be the compact region delimited by ∂ K^3=Σ_D^3∪Σ_U^3∪ T_1 ∪⋯∪ T_5, where T_1= {(u,0), 0<u≤ 13/6}, T_2= {(u,v): v =3 u -13/2: u_f,3^D ≤ u ≤ 13/6}, T_3= {(u,v): v =4 √(2) |u|^3/3 -2^21/8/3: 0 ≤ u ≤ u_i,3^D}, T_4= {(u,v): v =4 √(2) |u|^3/3 -2^21/8/3: u_i,3^I ≤ u ≤ 0}, T_5= {(u,v): v =4 √(2) |u|^3/3 :u_f,3^I ≤ u ≤ 0}. In what follows, we are going to analyze the behavior of the vector field X_3^- on each component of the boundary of K^3. Behavior of X_3^- on Σ_D^3 and Σ_I^3. The hyperbola g(u,v)=u v + 1=0 satisfies ⟨∇ g(u,v),X_3^-⟩|_(u,v)∈Σ_D^3 = u^3, which does not vanish on Σ_D^3. Note that the flow of the vector field X_3^- points inwards K^3 on Σ_D^3, because for instance X_1^-(1,-1)=(0,1). The curve g(u,v)=u v + 1=0 on Σ_I^3 satisfies ⟨∇ g(u,v),X_3^-⟩|_(u,v)∈Σ_I^3 =u^3 (1 + 16 u^8), which does not vanish on Σ_I^3. Note that the flow of the vector field X_3^- points outwards K^3 on Σ_I^3, because for instance X_3^-(-1,1)=(-2,15). Behavior of X_3^- on T_1. The curve g(u,v)=v=0 on u>0 satisfies ⟨∇ g(u,v),X_3^-⟩|_g(u,v)=0 =- 8 u^2 ((1+u^8) - u^4 √(2 + u^8)). We will show that this derivative does not vanish on u>0. We proceed by contradiction. Assume that 6 u (1+u^6 -u^3 √(2 + u^6))=0. 
Then, since u > 0 we have (1+u^8) - u^4 √(2 + u^8)=0, and so √(2 + u^8)= 1+u^8/u^4 ⇔ 2 + u^8 = (1+u ^8/u^4)^2 = 1+u^16 + 2 u^8/u^8 ⇔ 1+4 u^8 + 2 u^16/u^8=0, which is not possible. This shows that the flow of the vector field X_3^- points inwards K^3 on T_1, because for instance X_2^-(1,0)=(1-√(3),-16+8√(3)). Behavior of X_3^- on T_2. The curve g(u,v)=v-3 u +13/6=0 satisfies ⟨∇ g(u,v),X_3^-⟩|_g(u,v)=0 = -8 u^2 +117/2 u^3 -30 u^4 -8 u^10 + (8 u^6+3)√(2 + u (-13 + 6 u + u^7)). We will show that this derivative does not vanish on T_2, i.e. for u_f,3^D ≤ u ≤ 13/6. We proceed by contradiction. Assume that it is zero at some u. Then, proceeding as in T_1 to get rid of the square root, we obtain p_10(u) := 72 - 468 u + 216 u^2 - 256 u^4 + 3744 u^5 - 15225 u^6 + 11544 u^7 - 2412 u^8 + 416 u^13 - 192 u^14. Using the Sturm procedure we prove that the polynomial p_10(u) does not vanish on u_f,3^D < u < 13/6 which is a contradiction (see Appendix). Hence, the flow of the vector field X_3^- points inwards K^3 on T_2. Behavior of X_3^- on T_3 and T_4. On the curve g(u,v)=v-4 √(2) |u|^3/3 -2^21/8/3=0 for u>0 we have ⟨∇ g(u,v),X_3^-⟩|_g(u,v)=0 =p_11(u): =-u^2 (8 - 3 · 2^21/8 u +16 √(2) u^4 + 8 u^8 ≤ - 4 ( √(2)+2 u^4) √(2 - 2^29/8/3 u + 8 √(2)/3 u^4 + u^8)), and for u<0, ⟨∇ g(u,v),X_3^-⟩|_g(u,v)=0 =p_12(u): =u^2 (8 - 3 · 2^21/8 u -16 √(2) u^4 + 8 u^8 ≤ - 4 ( √(2)-2 u^4) √(2 - 2^29/8/3 u - 8 √(2)/3 u^4 + u^8)). Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is enough to show that p_11(u) does not vanish on 0 < u ≤ u_i,3^D, and that the p_12(u) does not vanish on u_i,3^I ≤ u <0. We proceed by contradiction. Assume first that p_11(u) has a zero. Then, proceeding as in R_1 to get rid of the square root, we have that p_11(u)=0 implies p_13(u)=20 · 2^5/8 - 54 · 2^1/4 u - 8 √(2) u^3 + 80 · 2^1/8 u^4 - 26 u^7 + 4 · 2^5/8 u^8 - 4 √(2) u^11=0. Using the Sturm procedure we obtain that the polynomial p_13(u) has a unique simple real zero on u>0 (see Appendix). Moreover, we have that p_11(0)=0, p_11'(0)=0, p_11”(0)=0, p_11”'(0)=40 · 2^5/8, p_1(u_i,3^D) =0.420448... These computations show that if p_11(u) has a zero in (0,u_i,3^D), then it has another (or a multiple) zero on that interval. This would produce another zero of p_13(u) in (0, ∞), which is a contradiction. Hence the flow of the vector field X_3^- points inwards K^3 on T_3. On the other hand assume that p_12(u) has a zero. Then, proceeding as in T_1 to get rid of the square root, we have that p_12(u)=0 implies p_14(u)=20 · 2^5/8 - 54 · 2^1/4 u + 8 √(2) u^3 - 80 · 2^1/8 u^4 - 26 u^7 + 4 · 2^5/8 u^8 + 4 √(2) u^11=0. Using the Sturm procedure we have that the polynomial p_14(u) has a unique simple real zero on u<0 (see Appendix). Moreover, we have that p_12(0)=0, p_12'(0)=0, p_12”(0)=0, p_2”'(0)=-40 · 2^5/8, p_2(u_i,3^I)= 40.3062... These computations show that if p_12(u) has a zero in (u_i,3^I,0), then either it is multiple or p_12(u) has at least two negative zeros in contradiction with uniqueness of negative zeros provided by Sturm procedure. So the flow of the vector field X_3^- points inwards K^3 on T_4. Behavior of X_3^- on T_5. On the curve g(u,v)=v- 4 √(2) |u|^3/3 =0 we have ⟨∇ g(u,v),X_3^-⟩|_g(u,v)=0 = 4u^2(2 (u^8-2 √(2) u^4+1) +(2 u^4- √(2)) √(u^8-8 √(2) u^4/3+2)). Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is to show that this derivative does not vanish on u_f,3^I < u < 0. 
Proceeding as in T_1 to get rid of the square root, we have that p_15(u):=6-12 √(2) u^4 + 6 u^8 (√(6)-2√(3) u^4) √(6 - 8 √(2) u^4 + 3 u^8)=0 implies 2 u^4 (4 √(2) - 13 u^4 + 2 √(2) u^8)/3 (√(2) - 2 u^4)^2=0. Note that the denominator does not vanish on u_f,3^I < u < 0, because the unique negative real solution of √(2) - 2 u^4=0, which is -1/2^1/8 is outside the mentioned interval. On the other hand the unique two real solutions of the numerator are u_1=-1/2^3/4(13 √(2) - √(210))^1/4 and u_2=-1/2^3/4(13 √(2) + √(210))^1/4. We observe that u_1 belongs to the interval u_f,3^I < u < 0 while u_2 does not. However, u_1 is not a solution of p_15(u)=0 because p_15(u_1) = -1/2^3/4(77 √(5) -37 √(21)) 0. This contradiction shows that the flow X_3^- point inwards K^3 on T_5. Existence of p_3^*. Consider the map h_3 Σ_D^3 → defined by h_3(p)=π (P_3(p)+p), where π:^2→ is the projection onto the first coordinate (see (C3)). Note that by the continuous dependence of the flow of X_3^- with respect to the initial conditions the map h_3 is continuous. Moreover, the image of the point (u_f,3^D,-1/u_f,3^D) by P_3 (which is the point w_6 in Figure <ref>) is inside Σ_I^3, and its symmetric is below its image because u_f,3^D + u_i,3^I>0. Hence h_3(u_f,3^D,-1/u_f,3^D)>0. On the other hand, the image of the point (u_i,3^D,-1/u_i,3^D) by P_3 (which is the point w_5 in Figure <ref>) is inside Σ_I^3, and its symmetric is above its image because u_i,3^D +u_f,3^I<0. Therefore h_2(u_i,3^D,-1/u_i,3^D)<0. Thus, by continuity, there exists p_3^*∈Σ_D^2 such that h_3(p_3^*)=0, i.e, P_3(p_3^*)=-p_3^* and so ϕ_3^-(t^*_3,p_3^*)=-p_3^* as we wanted to prove. § PROOF OF THEOREM <REF> This section is devoted to the proof of Theorem <ref>. This proof will follow from propositions <ref>, <ref>, <ref>, and <ref>. Consider the sections Σ_D^q and Σ_I^q on the hyperbola u v+1=0, given by Σ_D^q= {(u,v): v =-1/u: u_i,q^D ≤ u≤ u_f,q^D} and Σ_I^q= {(u,v): v =-1/u: u_i,q^I ≤ u≤ u_f,q^I}, where u_i,q^D=2^-1/2(1+q)(1+q)^-1/1+q, u_f,q^D= 2, u_f,q^I=-(√(2)(1+q)/q)^-1/(1+q), and u_i,q^I=-x_i,q^I, being x_i,q^I the unique positive real root of the polynomial P(x)=-q - 2^1/2(1+q)(1+q)^2+q/1+q x + √(2) (1+q) x^1+q. Indeed, in the next result we show that P(x) has a unique positive real zero x_i,q^I and that x_i,q^I < 2. The polynomial P(x) has a unique positive real zero x_i,q^I. In addition, x_i,q^I < u_f,q^D= 2. First, from Descartes rule of signs, P(x) has at most one positive real zero. Let us see that it has exactly one positive real zero. Indeed, since P'(x)=- 2^1/2(1+q)(1+q)^2+q/1+q + √(2) (1+q)^2 x^q and, for x > 2, x^q > 2^1/2(1+q) and (1+q)^2 > (1+q)^2+q/1+q, it follows that P(x) is increasing for x > 2. Since P(0)=-q < 0, then P(x) has exactly one positive real zero x_i,q^I. Now, let us check that x_i,q^I<2 by showing that P(2)>0. Note that P(2)=-q + 2^3+2q/2 (1+q)-2^3+2q/2(1+q) (1+q)^2+q/1+q. First, for q=1, P(2)=-1 - 8 2^1/6 + 8 √(2)>0. Now, for q ≥ 2, one can see that P(2)>0 if, and only if, (2^q-q/2^3/2(q+1))^q+1 > 2^-q/2 (q+1). We claim 2^q-q/2^3/2(q+1) > 2 and 2^q+1>2^-q/2 (q+1). The relation in (<ref>) will then follow from the claim. In order to get the claim, we compute the derivative in the variable q as follows: ddq(2^q-q/2^3/2(q+1) -2) = q/32(1+q)^2 -1/32(1+q) +2^q log 2 > q/32(1+q)^2>0 for any q ≥ 2. Since, for q=2, we get 2^2-2/2^3/23 -2 =1.97>0, the first assertion in the claim follows. For the second one, taking again derivatives in q , we obtain ddq(2^q+1-2^-q/2 (q+1))= 2^1+qlog 2 - 2^-2+q/2 (2-(1+q)log 2). 
Note that, for q > 1, we have 2-(1+q)log 2 < log 2 and, since 2^-2+q/2 < 2^1+q, it follows that the derivative above is positive for q >1. Finally, since for q=1 we get 2^2-2^1/2>0, then the second assertion of the claim follows. Hence, the proof of the lemma is concluded. From here, by following the strategy developed for the cases q=1,2,3, the idea is to show that the flow of X_q^- induces a transition map P_q:Σ_D^q→Σ_I^q. As before, this will be done by constructing a region K^q such that Σ_D^q∪Σ_I^q⊂∂ K^q and X_q^- points inwards K^q everywhere on ∂ K^q except at Σ_I^q (see Figure <ref>). Since X_q^- does not have singularities in K^q, from the Poincaré-Bendixson Theorem (see, for instance, <cit.>), we conclude that for each point p∈Σ_D^q, there is t(p)>0 such that P_q(p):=ϕ_q^-(t(p),p)∈Σ_I^q. By imitating the construction of the compact regions in cases q=1,2,3, we consider K^q the compact region (see Figure <ref>) delimited by ∂ K^q=Σ_D^q∪Σ_I^q∪ U_1 ∪⋯∪ U_5, where U_1= {(u,0), 0<u≤ (1+4q)/(2q)}, U_2= {(u,v): v =q u -(1+4q)/2: u_f,q^D < u ≤ (1+4q)/(2q)}, U_3= {(u,v): v =(1+q) √(2) |u|^q/q -2^1/2(1+q) (1+q)^2+q/1+q/q: 0 ≤ u < u_i,q^D}, U_4= {(u,v): v =(1+q) √(2) |u|^q/q -2^1/2(1+q) (1+q)^2+q/1+q/q: u_i,q^I < u ≤ 0}, U_5= {(u,v): v =(1+q) √(2) |u|^q/q :u_f,q^I ≤ u ≤ 0}. In what follows, we are going to analyze the behavior of the vector field X_q^- on each component of the boundary of K^q. Summarizing, in Proposition <ref>, we will prove analytically that, for every positive integer q , the flow of the vector field X_q^- points inwards K^q on Σ_D^q, U_1, and U_5, and outwards K^q on Σ_I^q; for the remaining curves, U_2, U_3, and U_4, propositions <ref>, <ref>, and <ref> will establish, respectively, that, for every positive integer q , X_q^- points inwards K^q on them provided that the conditions C1, C2, and C3 of Theorem <ref> hold. The flow of the vector field X_q^- points inwards K^q on Σ_D^q, U_1, and U_5, and outwards K^q on Σ_I^q. We first prove the proposition on Σ_D^q and Σ_I^q. Note that the hyperbola g(u,v)=u v + 1=0 satisfies ⟨∇ g(u,v),X_q^-⟩|_(u,v)∈Σ_D^q = u^q, ⟨∇ g(u,v),X_q^-⟩|_(u,v)∈Σ_I^q =(-1)^q+1(1 + 4 (1 + q) u^2(1+q)) u^q, which does not vanish on Σ_D^q and Σ_I^q, respectively. Since, for instance, X_q^-(1,-1)=(0,1) and X_q^-(-1,1)=(-2,3+4q), we conclude that the flow of the vector field X_q^- points inwards K^q on Σ_D^q and Σ_I^q. Now, consider the curve U_1 described by g(u,v)=v=0, u>0. Note that ⟨∇ g(u,v),X_q^-⟩|_g(u,v)=0 =-2 (1 + q) u^q-1(1 + u^2(1+q) - u^1+q√(2 + u^2(1+q))). We will show that this derivative does not vanish on u>0. We proceed by contradiction. Assume that 1 + u^2(1+q) - u^1+q√(2 + u^2(1+q))=0. Then, √(2 + u^2(1+q))= 1+u^2(1+q)/u^1+q ⇔ 2 + u^2(1+q) = (1+u^2(1+q)/u^1+q)^2 ⇔ -1/u^2(1+q)=0, which is not possible. Thus, the flow of the vector field X_q^- points inwards K^q on U_1. Finally, consider the curve U_5 described by g(u,v)=v- (1+q) √(2) |u|^q/q =0 for u_f,q^I ≤ u ≤ 0 . Note that ⟨∇ g(u,v),X_q^-⟩ |_g(u,v)=0 = √(2)/q (1+q) (-1)^q+1 u^q-1(√(2) q + 3 (-1)^q (1 + q) u^1 + q + √(2) q u^2(1 + q) - √(q) ( 1 + √(2)(-1)^q u^1 + q) √( 2 q + 2 (-1)^q √(2) (1+q) u^1 + q + q u^2(1 + q))). Of course on u=0 the above derivative is zero but, since it is a quadratic tangency, it is enough to show that this derivative does not vanish on u_f,q^I < u < 0. 
Proceeding as in U_1 to get rid of the square root, we have that p_0(u):=q (-1)^q+1⟨∇ g(u,v),X_q^-⟩|_g(u,v)=0/√(2) (q+1) u^q-1 =0 implies 2 u^q+1 (-4 (-1)^q √(2) q - (9 + 10 q) u^1 + q + 2 (-1)^1+q√(2) q u^2(1 + q))/2 q^2 (1 +√(2)(-1)^q u^1 + q)^2=0. Note that the denominator does not vanish on u_f,3^I < u < 0, because the unique negative real solution of 1 +√(2)(-1)^q u^q+1=0, namely u=-2^-1/(2(1+q)), is outside the mentioned interval. On the other hand, the unique two real solutions of the numerator are u_± =(18 + 20 q ± 6 √(9 + 20 q + 4 q^2)/8 √(2) q)^1/(1+q). We observe that u_- belongs to the interval u_f,q^I < u < 0 while u_+ does not. However, u_- is not a solution of p_0(u)=0 because p_0(u_-) = -32 q (1 + 2 q) (1 + 648 q + 288 q^2 + 32 q^3 + ( 27 + 6q) √(2(1+2q))) 0. Therefore, p_0(u) does not vanish for u_f,q^I < u < 0, which implies that the flow X_q^- points inwards K^q on U_5. It concludes the proof of this proposition. The vector field X_q^- points inwards K^q on U_2 provided that the polynomial P_0, defined in (<ref>), does not vanish on I_0=(u_f,q^D , (1+4q)/(2q)). Consider U_2 described by g(u,v):=v-q u +(1+4q)/2=0, for u∈ I_0. Denote p_0(u)= ⟨∇ g(u,v),X_q^-⟩|_g(u,v)=0, u∈ I_0. Notice that p_0(u) = (q +2 (1+q) u^2q)√(2 - u - 4 q u + 2 q u^2 + u^2(1+q)) + 1/2 u^q-1 (4(1+q) - (3+2q) (1+4q) u + 4 q (2+q)u^2 + 4 (1+q) u^2(1+q)). We claim that p_0(u) does not vanish for u∈ I_0. Indeed, if p_0(u^*)=0 at some u^*∈ I_0, then, by proceeding as in U_1 to get rid of the square root, we obtain that P_0(u^*)=0, which contradicts the hypothesis. Hence, the flow of the vector field X_q^- points inwards K^q on U_2. The vector field X_q^- points inwards K^q on U_3 provided that the polynomial P^+, defined in (<ref>), has at most one root (counting its multiplicity) in I^+ =(0 , u_i,q^D). Consider U_3 described by g(u,v)=v -(1+q) √(2) |u|^q/q -2^1/2(1+q) (1+q)^2+q/1+q/q=0 for 0 ≤ u ≤ u_i,q^D. Note that p_1(u): = ⟨∇ g(u,v),X_q^-⟩|_g(u,v)=0 = 1+q/q u^q-1( √(q) (√(2) + 2 u^1+q) √(R) - 2 q + 2^1/2(1+q) (1 + q)^1/1+q (3 + 2 q) u - 3 √(2) (1 + q) u^1+q - 2 q u^2(1+q)), where R=2 q - 2^3+2q/2(1+q) (1+q)^2+q/1+q u + 2 √(2) (1 + q) u^1+q + q u^2(1+q). Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is enough to show that p_1(u) does not vanish on 0 < u ≤ u_i,q^D. We proceed by contradiction. Assume that p_1(u) has a zero. Then, proceeding as in R_1 to get rid of the square root, we have that p_1(u)=0 implies P^+ (u)=0. Notice that p_1(0)=p_1'(0)= ⋯= p_1^(q-1)(0)=0, p_1^(q)(0)=2+q/q 2^1/2(1+q) (1+q)^2+q/1+q >0, and p_1(u_i,q^D) =(u_i,q^D)^q-1 >0. This implies that, if p_1(u) has a zero in (0,u_i,q^D), then it has another (or a multiple) zero on that interval. This would produce two zeros, counting their multiplicity, of P^+ (u) in (0,u_i,q^D), which contradicts the hypothesis. Hence the flow of the vector field X_q^- points inwards K^q on U_3. The vector field X_q^- points inwards K^q on U_4 provided that the polynomial P^-, defined in (<ref>), has at most one root (counting its multiplicity) in I^-:=(-2,0) Consider U_4 described by g(u,v)=v -(1+q) √(2) |u|^q/q -2^1/2(1+q) (1+q)^2+q/1+q/q=0 for u<0. Notice that p_2(u): = ⟨∇ g(u,v),X_q^-⟩|_g(u,v)=0 = 1+q/q u^q-1(√(q) ((-1)^q√(2) + 2 u^1+q)√(R) -2 q (-1)^q + (-1)^q2^1/2(1+q) (1 + q)^1/1+q (3 + 2 q) u - 3 √(2) (1 + q) u^1+q - 2 (-1)^q q u^2(1+q)), where R=2 q - 2^3+2q/2(1+q) (1+q)^2+q/1+q u +(-1)^q 2 √(2) (1 + q) u^1+q + q u^2(1+q). 
Of course on u=0 the above derivative is zero but since it is a quadratic tangency it is enough to show that p_2(u) does not vanish on u_i,q^I ≤ u <0. Analogously to Proposition <ref>, we proceed by contradiction. Assume that p_2(u) has a zero. Then, proceeding as in U_1 to get rid of the square root, we have that p_2(u)=0 implies P^- (u)=0. Notice that p_2(0)=p_2'(0)= ⋯= p_2^(q-1)(0)=0, and p_2^(q)(0)=-2+q/q 2^1/2(1+q) (1+q)^2+q/1+q <0. In addition, one can see tha p_2(-2)>0. This implies that if p_2(u) has a zero in [u_i,q^I,0)⊂(-2,0), then it has another (or a multiple) zero in the interval (-2,0). From here, we get a contradiction with the hypothesis. Hence the flow of the vector field X_q^- points inwards K^q on U_4. Finally, consider the map h_q Σ_D^q → defined by h_q(p)=π (P_q(p)+p), where π:^2→ is the projection onto the first coordinate. Note that by the continuous dependence of the flow of X_q^- with respect to the initial conditions the map h_q is continuous. Moreover, the image of the point (u_f,q^D,-1/u_f,q^D) by P_q is inside Σ_I^q, and its symmetric is below its image because u_f,q^D + u_i,q^I>0. Hence h_q(u_f,q^D,-1/u_f,q^D)>0. On the other hand, the image of the point (u_i,q^D,-1/u_i,q^D) by P_q is inside Σ_I^q, and its symmetric is above its image because u_i,q^D +u_f,q^I<0. Therefore h_q(u_i,3^D,-1/u_i,3^D)<0. Thus, by continuity, there exists p_q^*∈Σ_D^q such that h_q(p_q^*)=0, i.e, P_q(p_q^*)=-p_q^* and so ϕ_q^-(t^*_q,p_q^*)=-p_q^* as we wanted to prove (see Section <ref>). § APPENDIX: STURM PROCEDURE FOR Q∈{1,2,3} Let p(u) be a square-free polynomial of degree d, and consider the so-called Sturm sequence q_i(u), for i=0,…,ℓ, given by: q_0(u)=p_0(u), q_1(u)=p'_0(u), and for i=2,…,ℓ, -q_i(u) is the polynomial reminder of the division of q_i-2 by q_i-1, where ℓ is the first index for which q_ℓ is constant. Sturm Theorem (see, for instance, <cit.>) provides that the number of real roots of p(x) in the half-open interval (a,b] is V(a)-V(b), where V(x) is the number of sign variation of the sequence q_0(x),q_1(x)…,q_ℓ(x). Notice that when computing V(x), x can take the value ∞ (resp. -∞), in these cases V(∞) (resp. V(-∞)) denotes the number of sign variation of the leading terms of the sequence q_0(x),q_1(x)…,q_ℓ(x) (resp. q_0(-x),q_1(-x)…,q_ℓ(-x)). §.§ Sturm Procedure for q=1. First, note that the polynomial p_0(u)=-14 + 95 u - 745/4 u^2+ 110 u^3 - 19 u^4 + 20 u^5 - 8 u^6 and its derivative p_0'(u)=-95 - 745/2 u+ 330 u^2 - 76 u^3 + 100 u^4 - 48 u^5 have no common zeroes because its resultant with respect to u is -1.52127..· 10^18. In particular, p(x) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,6, we see V(2)=V(5/2)=2, which implies that p_0(u) has no real zero in the interval (2,5/2). Now, notice that the polynomial p_3(u) =12 · 2^3/4 - 58 √(2) u + 88 · 2^1/4 u^2 - 38 u^3 + 4 · 2^3/4 u^4 - 4 √(2) u^5 has no common zeroes with its derivative because the resultant between p_3(u) and p_3'(u) with respect to the variable u is -5637568724992√(2) which is not zero. In particular, p_3(u) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,5, we see that V(0)=3 and V(∞)= 2. Therefore it follows from the Sturm process that p_3(u) has a unique positive real zero. 
Finally, the polynomial p_4(u) =12 · 2^3/4 - 42 √(2) u - 88 · 2^1/4 u^2 - 38 u^3 + 4 · 2^3/4 u^4 + 4 √(2) u^5 has no common zeroes with its derivative because the resultant between p_4(u) and p_4'(u) with respect to the variable u is -88414837800960√(2) which is not zero. In particular, p_4(u) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,5, we see that V(-∞)=4 and V(0)= 3. Therefore, it follows from the Sturm process that p_4(u) has a unique negative real zero. §.§ Sturm Procedure for q=2 First, note that the polynomial p_6(u)=32 - 144 u - 80 u^2 + 1512 u^3 - 4545 u^4 + 3168 u^5 - 624 u^6 + 216 u^9 - 96 u^10 has no common zeroes with its derivative because the resultant between p_6(u) and p_6'(u) with respect to u is -4.41356.. · 10^57. In particular, p_6(u) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,10, we see V(2)=V(9/4)=3, which implies that p_6(u) has no real zero in the interval (2,9/4). Now, notice that the polynomial p_8(u) =32 · 2^1/6 3^1/3 - 49 · 2^1/3 3^2/3 u - 16 √(2) u^2 + 78 · 2^2/3 3^1/3 u^3 - 58 u^5 + 8 · 2^1/6 3^1/3 u^6 - 8 √(2) u^8 has no common zeroes with its derivative because the resultant between p_8(u) and p_8'(u) with respect to the variable u is 9669300766922659513289932800 · 2^1/6 3^1/3 which is not zero. In particular, p_8(u) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,10, we see that V(-∞)= 5, V(0)=4, and V(∞)=3. Therefore, it follows from the Sturm process that p_8(u) has a unique positive real zero and a unique negative real zero. §.§ Sturm Procedure for q=3. First, note that the polynomial p_10(u) := 72 - 468 u + 216 u^2 - 256 u^4 + 3744 u^5 - 15225 u^6 + 11544 u^7 - 2412 u^8 + 416 u^13 - 192 u^14. has no common zeroes with its derivative because the resultant between p_10(u) and p_10'(u) with respect to u is -2.74875.. · 10^97. In particular, p_10(u) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,10, we see V(2)=V(13/6)=3, which implies that p_10(u) has no real zero in the interval (2,13/6). Now, notice that the polynomial p_13(u) =20 · 2^5/8 - 54 · 2^1/4 u - 8 √(2) u^3 + 80 · 2^1/8 u^4 - 26 u^7 + 4 · 2^5/8 u^8 - 4 √(2) u^11 has no common zeroes with its derivative because the resultant between p_13(u) and p_13'(u) with respect to the variable u is -5.12026.. · 10^35 which is not zero. In particular, p_13(u) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,11, we see that V(0)=5 and V(∞)= 4. Therefore it follows from the Sturm process that p_13(u) has a unique positive real zero. Finally, the polynomial p_14(u) =20 · 2^5/8 - 54 · 2^1/4 u + 8 √(2) u^3 - 80 · 2^1/8 u^4 - 26 u^7 + 4 · 2^5/8 u^8 + 4 √(2) u^11 has no common zeroes with its derivative because the resultant between p_14(u) and p_14'(u) with respect to the variable u is -2.29037.. · 10^37 which is not zero. In particular, p_14(u) is a square-free polynomial and the Sturm procedure can be applied directly. By computing the Sturm sequence q_i(u) for i=0,…,11, we see that V(-∞)=7 and V(0)= 6. Therefore, it follows from the Sturm process that p_14(u) has a unique negative real zero. 
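The root counts above are easy to reproduce in a computer algebra system. The following Python/SymPy sketch illustrates the Sturm count for the q = 1 polynomial p_0(u) on the interval (2, 5/2]; it is only an independent illustration of the procedure (the helper function is ours), not the Mathematica algorithm provided as supplementary material.

```python
from sympy import symbols, sturm, Rational, Poly

u = symbols('u')
# q = 1 polynomial p_0(u) from the subsection "Sturm Procedure for q=1"
p0 = -14 + 95*u - Rational(745, 4)*u**2 + 110*u**3 - 19*u**4 + 20*u**5 - 8*u**6

def variations(chain, point):
    """Number of sign variations of the Sturm chain evaluated at a point (zeros skipped)."""
    values = [c.subs(u, point) for c in chain]
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

chain = sturm(p0, u)                      # Sturm sequence q_0, ..., q_l
roots = variations(chain, 2) - variations(chain, Rational(5, 2))
print(roots)                              # expected 0: p_0 has no real zero in (2, 5/2]
# cf. Poly(p0, u).count_roots(2, Rational(5, 2))
```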
§ SUPPLEMENTARY MATERIAL See the supplementary material for a Mathematica algorithm (based on the Sturm procedure explained in the Appendix) that, for a given arbitrary positive integer q, checks whether conditions C1, C2, and C3 of Theorem <ref> hold or not by computing the number of roots of the polynomials P_0, P^+, and P^- in the intervals I_0, I^+, and I^-, respectively. § ACKNOWLEDGMENTS J. Llibre is supported by the Agencia Estatal de Investigación grant PID2019-104658GB-I00, and the H2020 European Research Council grant MSCA-RISE-2017-777911. D.D. Novaes is partially supported by São Paulo Research Foundation (FAPESP) grants 2022/09633-5, 2021/10606-0, 2019/10269-3, and 2018/13481-0, and by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) grants 438975/2018-9 and 309110/2021-1. C. Valls is supported through CAMGSD, IST-ID, projects UIDB/04459/2020 and UIDP/04459/2020. § DATA AVAILABILITY STATEMENT Data sharing is not applicable to this article as no new data were created or analyzed in this study. 1 Ch J. Chazy. Sur les équations différentielles du troisième ordre et d'ordre supérieur dont l'intégrale générale a ses points critiques fixes. Acta Math., 34(1):317–385, 1911. DLA06 F. Dumortier, J. Llibre, and J. C. Artés. Qualitative theory of planar differential systems. Universitext. Springer, Berlin, 2006. Fe M. R. Feix, C. Geronimi, L. Cairó, P. G. L. Leach, R. L. Lemmer, and S. Bouquet. On the singularity analysis of ordinary differential equations invariant under time translation and rescaling. J. Phys. A, 30(21):7437–7461, 1997. Filippov88 A. F. Filippov. Differential equations with discontinuous righthand sides, volume 18 of Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht, 1988. Translated from the Russian. GFL C. Géronimi, M. R. Feix, and P. G. L. Leach. Periodic solutions and associated limit cycle for the generalised Chazy equation. In Dynamical systems, plasmas and gravitation (Orléans la Source, 1997), volume 518 of Lecture Notes in Phys., pages 327–335. Springer, Berlin, 1999. SB J. Stoer and R. Bulirsch. Introduction to numerical analysis, volume 12 of Texts in Applied Mathematics. Springer, New York, third edition, 2002. Translated from the German by R. Bartels, W. Gautschi and C. Witzgall.
Energy Management for a DM-i Plug-in Hybrid Electric Vehicle via Continuous-Discrete Reinforcement Learning Changfu Gong, Jinming Xu, and Yuan Lin, Member, IEEE This work was supported by Guangzhou Basic and Applied Basic Research Program under Grant 2023A04J1688. (Corresponding author: Yuan Lin.) The authors are with the Shien-Ming Wu School of Intelligent Engineering at South China University of Technology, Guangzhou 511442, China (e-mail: [email protected]; [email protected]; [email protected]). ================================================================================================================================================================================================================================================================================================================================================================================================================================================== Energy management strategy (EMS) is a key technology for plug-in hybrid electric vehicles (PHEVs). The energy management of PHEVs needs to output continuous variables such as engine torque, as well as discrete variables such as clutch engagement or disengagement. This type of problem is a mixed-integer programming problem. In addition, the hybrid powertrain system is highly nonlinear and complex. Designing an efficient EMS is a challenging task. We establish a control-oriented mathematical model for a BYD DM-i hybrid powertrain system from the perspective of mixed-integer programming. Then, an EMS based on continuous-discrete reinforcement learning is introduced, which can output both continuous and discrete variables simultaneously. Finally, the effectiveness of the proposed control strategy is verified by comparing EMS based on charge-depleting charge-sustaining (CD-CS) and Dynamic Programming (DP). The simulation results show that the reinforcement learning EMS can improve energy efficiency by 10.08% compared to the CD-CS EMS, and the fuel economy gap is about 6.4% compared with the benchmark global optimum based on DP. PHEV, series-parallel hybrid system, energy management, continuous-discrete reinforcement learning. § INTRODUCTION Energy conservation and reducing consumption are effective ways to achieve low-carbon development of automotive technology. Plug-in hybrid electric vehicles (PHEVs) combine the advantages of electric vehicles and traditional gasoline vehicles, which can save energy and reduce emissions while avoiding the range anxiety associated with pure electric vehicles. They play a significant role in the current commercialization of new energy vehicles. The study of EMS in PHEVs involves the coordination between electric energy and fuel, which is the crucial technology that impacts the fuel economy and emissions of the vehicle <cit.>. Therefore, a reasonable and effective EMS is crucial for improving the overall performance of PHEVs and can also contribute to achieving sustainable development goals of the automotive manufacturing industry. §.§ Literature Review In order to improve the fuel economy of PHEVs, a significant amount of research has been conducted over the past few decades on energy management control strategies for PHEVs. EMS can be divided into rule-based, optimization-based, and learning-based methods<cit.>. Rule-based strategies select the operating mode based on pre-defined rules and can be further divided into deterministic rule-based strategies<cit.> and fuzzy logic-based strategies<cit.>. 
Rule-based EMS is widely used due to their simplicity and practicality, but they cannot obtain the globally optimal solution<cit.>. In order to further enhance the control performance of rule-based strategies, some studies have utilized algorithms for rule optimization. In <cit.>, particle swarm optimization algorithm is used to optimize the parameters of fuzzy rules, which improves the fuel economy of vehicles. In <cit.>, the mode switching threshold was optimized using simulated annealing and particle swarm optimization algorithms, resulting in an ideal mode switching sequence. In optimization-based control strategies, the PHEV's EMS is typically abstracted as a constrained nonlinear optimization problem. Optimization-based control strategies can be divided into two categories: global optimization and real-time optimization<cit.>. Global optimization mainly includes DP<cit.> and Pontryagin's minimum principle (PMP)<cit.>. The DP algorithm aims to optimize the fuel economy of the entire vehicle by establishing a global optimization mathematical model with constraints. Based on the Bellman optimality principle, the algorithm solves for the optimal energy allocation between the engine and battery by using state transition equations. If the entire driving cycle information is obtained in advance, the global optimal EMS can be obtained through DP algorithm <cit.>. Although DP-based strategies are effective in obtaining the globally optimal fuel economy of PHEVs, driving cycle information is usually unknown in practical applications, so DP is not suitable for real-time control. It is usually used as a benchmark for fuel economy. The EMS based on PMP algorithm is to obtain the optimal control strategy by minimizing the Hamilton equation in real time <cit.>. In <cit.>, the PMP is used to optimize the EMS of a hybrid energy storage system. The simulation results show that in the energy management of power-split HEVs, the difference between the PMP-based control strategy and the DP strategy is less than 1% , which proves the flexibility and effectiveness of the strategy. Compared with the DP algorithm, the PMP calculation is relatively small. However, it is difficult to solve the EMS directly through PMP due to the influence of state constraints and the complexity of the system model. Moreover, PMP also requires a known driving cycle,which is not available in real-world driving. Model predictive control (MPC) <cit.> and the minimum equivalent fuel consumption strategy (ECMS) <cit.> are typical real-time optimization algorithms. MPC is based on rolling optimization, which transforms the optimization process into a finite predictive range, reducing computational complexity and having the potential for real-time control. However, model accuracy and prediction horizon length will affect fuel economy <cit.>. The core idea of ECMS is to use equivalent coefficients to convert the power consumption of the motor into fuel consumption, and to solve the optimal energy allocation problem of PHEVs with the minimum equivalent fuel consumption as the objective function <cit.>. However, ECMS results are very sensitive to equivalent coefficients, which are affected by driving conditions, battery SOC, driving style, road gradient, and other factors, which affect the calculation accuracy of EMS. Reinforcement learning (RL) has been applied to energy management in PHEVs, and the results have been promising. 
In <cit.>, a Q-learning based EMS for PHEVs was proposed, which makes decisions on the current system state by looking up Q-table. It does not need to rely on prior knowledge of future driving conditions to make optimal decisions. Simulation results show that the fuel economy of the proposed EMS is improved by 11.93% compared with the binary mode control strategy. In <cit.>, a HEV EMS based on SARSA algorithm was studied. Unlike Q-learning, SARSA is on-policy. During the interaction between agent and environment, more searches can be conducted, which is conducive to the accuracy of agent learning. The simulation results revealed that compared with Q-learning, SARSA algorithm could achieve better performance. Although RL-based EMS has some significant advantages, traditional RL require the discretization of state and action spaces. As the dimensions of the state and action spaces increase, it will lead to "curse of dimensionality". Furthermore, traditional RL stores data in tables, which requires lots of time and computational resources to complete learning. To improve the performance of RL, some researchers have combined deep learning (DL) with RL and proposed an online EMS called Deep Reinforcement Learning (DRL) <cit.>. The emergence of DRL algorithm solves the problem of the traditional RL algorithm's tendency to fall into the dimensionality catastrophe due to discretization. Currently, the mainstream of DRL is discrete reinforcement learning and continuous reinforcement learning <cit.>. In the case of continuous problems, the action space of discrete reinforcement learning is limited, so it is necessary to discretize continuous actions. In <cit.>, a power split hybrid electric bus EMS based on DQL was proposed. Compared with the RL-based strategy, the optimality and adaptability of the DQL-based strategy under different driving conditions were verified. Compared with the Q-learning strategy with the same model, the DQL strategy performs better in terms of training difficulty and the influence of different state variables on the Q function. To solve the problem of overestimation, EMS based on dual deep Q learning (DDQL) was applied to hybrid tracked vehicles in <cit.>. The simulation results show that the fuel economy is improved by 7.1% compared to the DQL algorithm. Although discrete reinforcement learning performs well, it inevitably introduces discretization errors when discretizing action variables and cannot obtain an accurate solution. To solve the problem of discrete control variables, an EMS based on Deep Deterministic Policy Gradient (DDPG) was proposed in <cit.>. DDPG avoids the discretization of the action space and can output continuous control variables. Simulation results show that compared with the DQN algorithm, the DDPG algorithm converges faster and is more robust. To solve the problem that DDPG's high action value estimation may lead to unstable training, an EMS based on TD3 was employed in <cit.>. Simulation results show that under stable and transient driving cycles, the EMS based on TD3 algorithm has better fuel economy. In practical energy management systems, there are often both continuous control variables such as the output torque of the engine and motor and discrete control variables such as the switch of the clutch and the gear of the transmission.The above methods cannot directly obtain both continuous and discrete control inputs, witch is difficult to apply to real-time driving conditions. 
Currently, few articles have applied continuous-discrete reinforcement learning to the energy management of HEVs. In <cit.>, continuous-discrete reinforcement learning was used to optimize the energy management of a hybrid electric bus with a continuously variable transmission, and the best energy management of the hybrid bus was solved based on DDPG. In <cit.>, the Coach-Actor-Double Critic reinforcement learning framework was used for the energy management of a hybrid electric bus. The article further considers the switching status of the engine and uses continuous reinforcement learning to output the engine status. When the output value is less than 0.5, the engine is turned off, otherwise, it is turned on. In <cit.>, continuous-discrete reinforcement learning was used to optimize the energy management of a HEV with a 5-speed automatic transmission. The article uses DDPG to control the engine switch and DQN to control the gear position of the transmission, outputting continuous and discrete variables separately, but without combining Actor-Critic learning and Q-learning. The above articles demonstrate that compared to traditional optimization methods, continuous-discrete reinforcement learning can achieve near-optimal control. §.§ Motivation and innovation The BYD DMi hybrid electric vehicles are very popular in China, but there is currently a lack of literature specifically focused on the DM-i hybrid system. Additionally, to the author's knowledge, while continuous-discrete RL has achieved some success in the energy management of non-plug-in hybrid vehicles, there is no literature that applies continuous-discrete RL to the energy management of series-parallel PHEVs. Based on these research gaps, this article takes the BYD DM-i PHEV as the research object, establishes a vehicle powertrain model, and applies state-of-the-art continuous-discrete RL to solve energy management, achieving approximate optimal control of the energy management of the DM-i PHEV. The main contributions of this paper are summarized as follows: (1) Modeling the BYD DM-i hybrid system from the perspective of mixed integer programming. (2) A rule-based EMS is proposed to realize the torque distribution of the engine and motor, by analyzing the working mode of the DM-i hybrid system. (3) Applying a state-of-the-art continuous-discrete reinforcement learning algorithm for the optimal energy management of the DM-i hybrid system, achieving simultaneous selection of continuous and discrete actions. §.§ Organization The rest of this paper is organized as follows: Section 2 models the BYD DM-i PHEV. Section 3 introduces the continuous-discrete reinforcement learning PDQN-TD3 algorithm. Section 4 describes the EMS based on the PDQN-TD3 algorithm, and compares the EMS based on PDQN-TD3 with those based on CD-CS and DP approaches. Section 5 concludes the paper. § DM-I HYBRID POWER SYSTEM MODELING Modeling of hybrid systems is the foundation for designing EMSs. In this section, we consider the BYD DM-i PHEV system as shown in Fig. <ref>, which consists of components such as an engine, a clutch, a generator, a drive motor, and a power battery. The key component parameters are shown in Table <ref>. In this type of hybrid system, both the engine and battery serve as energy sources for the powertrain. The engine and drive motor serve as the power sources for the vehicle. 
By controlling the clutch, engine, and motor state, it is possible to realize five operation modes: EV mode, series mode, parallel mode, engine driving mode, and energy recovery mode. §.§ Vehicle Dynamics For hybrid powertrain systems, the powertrain must obey the torque balance equation: T_d = T_e i_e i_c η_t + T_m i_m η_m η_t + T_b where T_d denotes the required driving torque of PHEV; T_e is engine torque; i_e is engine transmission ratio; i_c denotes clutch engagement/disengagement, with a value of either 1 or 0; T_m is motor torque; i_m is motor transmission ratio; T_b is break torque; η_t and η_m are the mechanical and motor efficiencies, respectively. According to the longitudinal dynamics equation of the vehicle, the required torque of the powertrain system is established as: T_d = [ma + 0.5 C_d ρ A v^2 + μ mg cos(θ) + mg sin(θ)] r w_d = v/r where m is curb weight; a represents vehicle acceleration; C_d is air drag coefficient; ρ is air density; A is the windward area, v is the longitudinal vehicle velocity without regard to wind speed, μ is rolling coefficient of road; g is the gravity acceleration; θ is the road slope; r is the wheel radius; ω_d is the wheel speed. §.§ Engine Model The fuel economy of engine is a key factor to evaluate the EMS of hybrid system. In this article, experimental modeling method is used for engine modeling. Given the engine speed and torque, the instantaneous fuel consumption rate can be obtained by interpolating the engine fuel consumption map as shown in Fig.<ref>. The engine fuel consumption per unit time is given by Eq. (<ref>): ṁ_f = P_e b_e = T_e ω_e b_e where P_e is engine power, b_e is the effective fuel consumption of engine according to BSFC in Fig.<ref>, ω_e is engine angular velocity. In the DM-i hybrid system, the controller controls the connection and disconnection between the engine and the wheels by controlling the engagement/disengagement of the clutch. When the clutch is disconnected, the engine and wheels are decoupled, the engine speed is independent of vehicle speed, and the engine operates in the optimal working curve. When the clutch is closed, the engine speed and the wheels speed are coupled to each other, and the engine speed adjusts according to the vehicle's speed.Therefore the engine speed can be expressed by the following equation: ω_e = ω_d i_e i_c + f(T_e)(1-i_c) where f(T_e) represents the look-up table between engine speed and torque when the engine operates along the optimal economic working curve. §.§ Drive Motor and Generator Models The DM-i hybrid system has two electric motors, namely the drive motor and the generator. The drive motor provides torque output during driving and is responsible for energy recovery during braking. The generator mainly serves as an auxiliary unit, converting mechanical energy into electrical energy, ensuring the engine's quick start, and adjusting the engine speed to make it work in the economic range. The motor's map is shown in Fig. <ref>. For the drive motor, the motor speed and motor torque are written as: ω_m = ω_d i_m T_m = (T_d - T_b - T_e i_e i_cη_t )/i_m η_m η_t where ω_m is the driving motor speed. To reduce the computational overhead of the RL and DP algorithm, we optimized Eq.(<ref>) by merging T_m and T_b into T_mb, thus eliminating the need to control T_b. 
T_mb = T_m i_m η_m η_t + T_b = T_d - T_e i_e i_cη_t The T_m and T_b can be obtained by Eq.(<ref>): T_m = -f(ω_m), if T_mb -f(ω_m) T_mb/i_m η_m η_t, else T_b = T_mb-T_m i_m η_m η_t = T_mb + f(ω_m)i_m η_m η_t, if T_mb -f(ω_m) 0, else where f(ω_m) represents the maximum torque of the motor at the current speed. For Eq. (<ref>) and (<ref>), when T_mb is less than the motor's energy recovery upper limit (T_mb<-f(ω_m)), the vehicle enters a braking state. The motor recovers energy at its maximum capacity, while the mechanical brake (T_b) provides the remaining braking force. When T_mb>-f(ω_m), the motor assumes the role of energy recovery if T_mb<0, and driving if T_mb>0. In both scenarios, the utilization of a mechanical brake (T_b) is unnecessary. For the generator, the speed and torque are written as: ω_g = ω_e/i_g T_g = T_e i_g η_g (1-i_c) where ω_g is the generator speed; T_g is the generator torque; η_g is the generator efficiency. §.§ Battery Model The power battery is used to provide the electric energy required by the motor during driving, and can also store the energy recovered by the motor during braking. This paper does not consider the effect of temperature on the internal characteristics of the battery and establishes the dynamic equation of the battery state of charge (SOC) based on the internal resistance of the battery, as shown in the following equations: P_b = P_m + P_g + P_aux P_b = V_oc I_b - R_bI^2_b ṠȮĊ = -V_oc - √(V^2_oc - 4 R_b P_b)/2R_b Q_b where P_b is battery power; P_aux is the auxiliary power consumption of the vehicle; V_oc is the open circuit voltage; I_b is battery current; R_b is battery resistance. Ignoring the effects of battery aging and temperature, the relationship between SOC, battery internal resistance, and open circuit voltage is shown in Fig.<ref>. § PDQN-TD3 CONTINUOUS-DISCRETE REINFORCEMENT LEARNING ALGORITHM Currently, RL methods mostly focus on either continuous action spaces or discrete action spaces, but many engineering control problems involve both continuous and discrete variables, which is referred to as mixed action space. For example, during the driving process of PHEVs, the torque of the engine is a continuous variable, while the clutch switch is a discrete variable. In mixed action space, the agent needs to make simultaneous discrete and continuous choices. In this section, based on the Actor-Critic framework, the PDQN-TD3 algorithm is proposed to deal with the mixed action space problem. §.§ Principles of Reinforcement Learning RL is a trial-and-error based learning method. In RL, an agent learns how to make optimal decisions by interacting with its environment. The interaction process between agent and environment can be described using Markov decision processes (MDP). In a MDP, the agent is in a state, can choose an action, then receives a reward signal from the environment and transitions to a new state. The MDP model describes the relationship between state, action, reward and transition probability in this process. RL formulates the optimal strategy by learning the MDP model, which includes learning the relationships between state, action, reward, and transition probabilities. The MDP can be represented as: P_ss'^a = P[s_t+1=s'|s_t=s,a_t=a] where s_t represents the state at time t, a_t represents the action taken at time t, P_ss'^a represents the probability of state transition, s represents the current state, s' represents the next state, a represents the current action, P is a probability function. 
In the state transition process of Eq. (<ref>), a reward R is generated. Given a policy π, the cumulative reward G obtained by agent in the interaction process is: G_t =R_t+1+γ R_t+2+...+ γ^t+k R_t+1+k = ∑_k=0^∞γ^k R_t+1+k where γ∈ [0,1] is the reward discount factor; R_t+1 represents the immediate reward at time t+1. The ultimate goal of the agent is to find the optimal strategy π^* to maximize the cumulative reward. π^*(s,a) = argmax_a E[G_t] where π^*(s,a) represents the optimal policy and E represents the expectation. To obtain the optimal strategy, use Q-values to evaluate the superiority or inferiority of the policy π: Q_π(s,a) = E[R_t+γ R_t+1+γ^2 R_t+2+...|s_t=s,a_t=a)] Simplified further, the formula can be expressed as follows: Q_π(s,a) = E[R_t+γ Q(s',a')|s_t=s,a_t=a)] where Q_π(s,a) represents the value of taking action a in state s according to policy π. Traditional Q-learning establishes a Q-table to store the Q-values of different actions in each state, selects the action with the maximum Q-value as the output, and updates the Q-value according to the observed reward and the next state. The Bellman equation can be expressed as: Q(s,a) = E[R + γ max_aQ(s',a')|s_t=s,a_t=a)] In practical problems, the state and action space are usually too large to be stored in a table. Therefore, combined with deep learning, neural network is introduced to replace the Q-table, and the output of the network is used to approximate the Q-function. Q(s,a;θ)≈ Q(s,a) §.§ PDQN Algorithm The PDQN algorithm combines deterministic actor-critic and Q-learning, integrating the classic algorithms DDPG for continuous reinforcement learning and DQN for discrete reinforcement learning. Specifically, PDQN uses an Actor network to output continuous actions and replaces the critic learning in deterministic actor-critic with Q-learning. This allows PDQN to output the Q-values corresponding to each discrete action and select the discrete action based on the maximum Q-value. The schematic diagram of the PDQN algorithm is shown in Fig.<ref>. For PDQN, the action space 𝒜 consists of both continuous actions and discrete actions. 𝒜 = {(k,x_k)|k ∈ K, x_k ∈𝒳_k } where K denotes the discrete action set, k is a discrete action, 𝒳_k denotes the continuous action set, x_k is a continuous action. Inspired by the processing method of DQN, the actor network of PDQN uses the deterministic policy network x(·;θ) to approximate x_k in order to output continuous actions. The critic network approximate Q(s,k,x_k) with a deep neural network Q(s,k,x_k;ω), thus outputting discrete actions. Eq. (<ref>) can be further expressed as: Q(s,k,x_k) = E[R_t+γ max_k∈ KQ(s',k',x_k(s';θ);ω) |s_t=s] The critic network parameters ω are updated based on the TD error between Q(s,k,x_k;ω) and the target network estimate y. PDQN performs the parameter update by minimizing the loss function, which is defined as the squared error between the target q-value and the estimated Q-value. L_Q(ω) =1/2Σ(y-Q(s,k,x_k;ω))^2 where y = R_t+γ max_k∈ KQ(s',k',x'_k;ω). The Actor network parameters θ is updated based on the negative sum of Q values. L_x(θ) = -∑_k=1^kQ(s,k,x_k(s;θ);ω) The target network is updated using the parameters of the actor and critic networks, and the update formula is as follows: ω_i,targ ←τω_i+(1-τ)ω_i,targ θ_i,targ ←τθ_i+(1-τ)θ_i,targ §.§ PDQN-TD3 Algorithm PDQN integrates DDPG and DQN, which can effectively solve continuous-discrete control problems. But PDQN also has the corresponding drawbacks of the two algorithms. 
PDQN algorithm involves a maximization operation when calculating the TD target, which leads to an overestimation of the true action value by PDQN. Additionally, due to the deep Q-network is continuously updated, eagerly updating value network parameter μ when the value network is still poor not only fails to improve μ but also destabilizes the training of the actor network due to the fluctuations in μ. This paper applies a state-of-the-art PDQN-TD3 algorithm to solve the above problem. PDQN-TD3 uses the actor-critic network architecture. The structure of the policy network and the evaluation network are designed using the TD3 structure. The policy network includes an actor network and the corresponding target network, while the evaluation network includes two critic networks and the corresponding target network. At each time step t, the environment feeds the state s into the actor network of PDQN-TD3 to obtain the continuous action x_k. The critic network acts as a Q-value networks and select a discrete action k based on the state variables and the output of the actor network using an ϵ-greedy policy. After executing the continuous action and discrete action (x_k, k), the environment transitions to a new state s_t+1, and the tuple (s, (x_k, k), r, s_t+1) is stored in the experience replay buffer for neural network training. Compared with PDQN, the PDQN-TD3 algorithm introduces three key techniques: clipped double-Q learning, delayed policy updates and target policy smoothing. §.§.§ Target policy smoothing Random noise 𝒩(0,σ) obeying normal distribution was added to the target action value π_ϕ_targ(s) output by the actor network, and the noise value was limited within (-c, c). It makes the update of the value function smooth and avoids overfitting. x̃_k = π_ϕ_targ(s) + clip(𝒩(0,σ),-c,c) where x̃_k is the action value after adding smooth noise. §.§.§ Clipped double-Q learning To avoid overestimating the Q-function, PDQN-TD3 introduces two independent critic networks to learn the Q-function and construct the critic computing Q-target with smaller Q-value. Q_targ(s',k',x̃;μ)= min_i=1,2Q_targ(s',k',x̃;μ_i) y=r+γ Q_targ(s',k',x̃;μ) §.§.§ Delayed policy updates To enhance the stability of training the PDQN, the idea of delayed updates is introduced. The actor network is updated at a lower frequency, while the critic network is updated at a higher frequency. This approach ensures more stable training of the actor network. The both critic network parameter updates is performed by minimizing the loss function, which is defined as the squared error between the target Q-value and the estimated Q-value. L(μ_1) = 1/2Σ(y-Q(s,k,x_k;μ_1))^2 L(μ_2)=1/2Σ(y-Q(s,k,x_k;μ_2))^2 The training process of PDQN-TD3 algorithm is shown in Algorithm <ref>. § ENERGY MANAGEMENT STRATEGY VIA PDQN-TD3 §.§ Problem Description The research object of this article is the DM-i series-parallel hybrid system. Due to the presence of both continuous control variables and discrete control variables, energy management systems are modeled as continuous-discrete action space. In this hybrid system, the PHEV series-parallel mode switching is achieved through the engagement/disengagement of the clutch, and the state of the clutch is a discrete variable. Therefore, the output torque of the engine is selected as the continuous action and the clutch engagement/disengagement as a discrete action. 
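To make the mixed action space concrete before introducing the EMS framework, the sketch below shows how the PDQN-TD3 target computation and action selection could be written in PyTorch for this problem (a continuous engine torque plus a discrete clutch command). The network sizes, function names, and the [-1, 1] torque scaling are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Outputs the continuous action (normalized engine torque in [-1, 1])."""
    def __init__(self, state_dim, cont_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, cont_dim), nn.Tanh())

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Outputs one Q-value per discrete action (clutch disengaged / engaged)."""
    def __init__(self, state_dim, cont_dim, n_discrete=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + cont_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_discrete))

    def forward(self, s, x):
        return self.net(torch.cat([s, x], dim=-1))

def td3_target(r, s_next, done, actor_t, critic1_t, critic2_t,
               gamma=0.99, sigma=0.2, c=0.5):
    """Clipped double-Q target with target-policy smoothing over the discrete max."""
    with torch.no_grad():
        x_t = actor_t(s_next)
        noise = (torch.randn_like(x_t) * sigma).clamp(-c, c)   # smoothing noise
        x_next = (x_t + noise).clamp(-1.0, 1.0)
        q_next = torch.min(critic1_t(s_next, x_next),
                           critic2_t(s_next, x_next))          # clipped double-Q
        return r + gamma * (1.0 - done) * q_next.max(dim=-1).values

def select_action(s, actor, critic1, eps=0.1):
    """Continuous torque from the actor; clutch state chosen epsilon-greedily from Q-values."""
    with torch.no_grad():
        x = actor(s)
        q = critic1(s, x)
        if torch.rand(()) < eps:
            k = int(torch.randint(q.shape[-1], (1,)))
        else:
            k = int(q.argmax(dim=-1))
    return k, x

# Both critics are then regressed onto the target at the executed clutch action, and the
# actor is updated at a lower frequency by maximizing the summed Q-values, as described above.
```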
§.§ The framework of PDQN-TD3 for EMS This article proposes an EMS based on the continuous-discrete reinforcement learning algorithm PDQN-TD3, as shown in Fig. <ref>. 1) State: The state space of the PDQN-TD3 EMS is expressed as: S = {SOC, v, P_d} 2) Action: The action space is expressed as: A = {T_e, i_c} 3) Reward: The primary goal of the EMS is to minimize fuel and electricity consumption, so the reward function is defined as the negative sum of the fuel and electricity costs. To ensure that the agent does not violate the constraints of the hybrid power system during training, penalty terms are also added to the reward for any constraint violation. The reward function is defined as follows: R = -(r_c + p_ω_e + p_soc), r_c = (k_f ṁ_f + k_e P_b/(η_b η_chr)) Δ t, p_ω_e = p_maxΔ t when the engine speed constraint is violated (and p_ω_e = 0 otherwise), and p_soc = 0 if SOC_l ≤ SOC ≤ SOC_h, p_soc = p_max (SOC - SOC_h)/(1 - SOC_h) if SOC > SOC_h, p_soc = p_max (SOC_l - SOC)/SOC_l if SOC < SOC_l, where p_ω_e and p_soc are the penalties for exceeding the constraints on the engine angular velocity and the SOC, respectively; Δ t = 1 s; k_f is the fuel price, 7.6 CNY/L (10.3 CNY/kg); k_e is the electricity price, 1.0 CNY/kW·h; and p_max = 0.1 is the maximum penalty, set to 10 times the maximum fuel consumption. The SOC penalty is a linear function of the deviation of the SOC from the desired range, so the larger the deviation, the higher the penalty. § SIMULATION ANALYSIS §.§ Simulation Parameters and Conditions To verify the effectiveness and superiority of the proposed energy management method, we conducted simulation experiments on the Python platform and compared it with the EMSs based on CD-CS and DP. The main hyperparameters of the PDQN-TD3 algorithm are shown in Table <ref>. Three WLTC (Worldwide Light-duty Test Cycle) cycles were selected as the simulation operating conditions. The WLTC is a chassis dynamometer test for determining the emissions and fuel consumption of light-duty vehicles. A complete WLTC cycle lasts 1800 s, covers a driving distance of 23.25 km, and has a maximum vehicle speed of 120 km/h and an average vehicle speed of 31.51 km/h. The relationship between WLTC time and vehicle speed is shown in Fig. <ref>. §.§ DP-based energy management strategy DP is adopted as the global optimization algorithm for the EMS. Provided that the entire driving cycle is known in advance, DP can obtain the optimal fuel economy; to assess the optimization effect of PDQN-TD3, this paper therefore uses DP as the fuel economy benchmark. For DP, the driving conditions serve as prior knowledge, so the future vehicle speed and power demand are both known, and the SOC is the only state variable of the system. Since the clutch has only two states, engaged and disengaged, the clutch state is discretized into two grid points. Simulation analysis shows that, beyond a certain resolution, further refining the engine torque and battery SOC grids does not significantly reduce the cost function but does increase the computation time of DP. Therefore, in this paper, the SOC is divided into 60 grid points in the range of 0.3 to 0.9, while the engine torque is divided into 120 grid points in the range of 0 to 120 N·m. In the simulation process, we use the general DP Matlab toolbox <cit.> to obtain the benchmark.
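For reference, the backward recursion behind this benchmark can be sketched as below. Here `simulate_step` is a hypothetical helper that wraps the powertrain model of Section 2: given the current SOC, the time index of the known driving cycle, an engine torque, and a clutch command, it returns the next SOC and the stage cost (fuel plus electricity over 1 s), or None for an infeasible action. The actual benchmark was computed with the general DP Matlab toolbox, so this Python sketch is only illustrative.

```python
import numpy as np

def dp_benchmark(simulate_step, n_steps,
                 soc_grid=np.linspace(0.3, 0.9, 60),
                 te_grid=np.linspace(0.0, 120.0, 120),
                 clutch_states=(0, 1)):
    """Backward dynamic programming over the SOC grid (brute-force, for illustration)."""
    n_soc = soc_grid.size
    J = np.zeros(n_soc)                      # terminal cost-to-go
    policy = np.zeros((n_steps, n_soc, 2))   # stores (T_e, clutch) for each state

    for t in range(n_steps - 1, -1, -1):     # backward in time
        J_new = np.full(n_soc, np.inf)
        for i, soc in enumerate(soc_grid):
            for clutch in clutch_states:
                for T_e in te_grid:
                    out = simulate_step(soc, t, T_e, clutch)
                    if out is None:          # constraint violated, skip this action
                        continue
                    soc_next, stage_cost = out
                    # linear interpolation of the cost-to-go between SOC grid points
                    total = stage_cost + np.interp(soc_next, soc_grid, J)
                    if total < J_new[i]:
                        J_new[i] = total
                        policy[t, i] = (T_e, clutch)
        J = J_new
    return J, policy
```

A forward pass would then replay the stored policy from the initial SOC, interpolating between grid points, to obtain the benchmark torque and clutch sequences.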
§.§ Rule-based energy management strategy The rule-based control strategy selects the operation mode according to predetermined judgment conditions and control logic. It is simple and easy to implement, which makes it widely adopted by automotive companies. To provide a baseline for the fuel-saving effect of PDQN-TD3, this paper designs a rule-based EMS. Specifically, the rule-based EMS operates in two modes: charge-depleting (CD) and charge-sustaining (CS).
When SOC > 0.3, the vehicle operates in CD mode.
(1) When the vehicle demand torque T_d is greater than the engine optimal working point:
(a) If v ≥ 60 km/h: when T_d is greater than the engine maximum working point T_e_max, the vehicle enters the series mode; otherwise it enters the parallel mode.
(b) If v < 60 km/h, the vehicle enters the series mode.
(2) When T_d is less than the engine optimal working point:
(a) If T_d < 0: when SOC > 0.9, mechanical braking is used; otherwise, energy is recovered by the drive motor.
(b) If T_d > 0, the vehicle enters EV mode.
When SOC < 0.3, the vehicle operates in CS mode.
(1) When T_d > T_e_min:
(a) If T_d > T_e_max, then T_e = T_e_max; if v > 60 km/h, the vehicle enters the engine direct drive mode, otherwise it enters the series mode.
(b) If T_d < T_e_max: if v > 60 km/h, the vehicle enters the engine direct drive mode; otherwise, the vehicle enters the series mode.
(2) When T_d < T_e_min, the vehicle enters the series mode.
(3) If T_d < 0, energy is recovered through the drive motor.
For the above rules: (1) in series mode, the clutch is disengaged, the engine operates in its optimal economic range to drive the generator, and the drive motor provides the demand torque; (2) in parallel mode, the clutch is engaged, the generator does not work, and the drive motor assists the engine in driving the vehicle; (3) in EV mode, the clutch is disengaged, neither the engine nor the generator works, and the drive motor provides the demand torque; (4) in engine direct drive mode, the clutch is engaged, the generator does not work, and the engine directly drives the vehicle.
§.§ Simulation Result Analysis Fig. <ref> shows the evolution of the cumulative reward during training, where a higher return indicates a better learning effect. The return curve fluctuates but exhibits an overall upward trend, indicating that the agent continuously adjusts its policy to maximize the cumulative return per episode. After 18 × 10^4 training steps, the algorithm gradually converges to the optimal control strategy. The PDQN-TD3 and DP-based strategies have similar total energy consumption, indicating that the PDQN-TD3 control strategy achieves near-optimal fuel economy. The comparison of the vehicle SOC trajectories over time for the three algorithms under the WLTC cycles is shown in Fig. <ref>. It can be observed that the CD-CS strategy tends to use electric energy rather than engine energy when the battery SOC is greater than SOC_min. When the battery SOC drops to the set threshold, the SOC fluctuates around SOC_min, and the engine becomes the main power source to prevent excessive discharge of the battery. However, under aggressive driving conditions this strategy requires the engine to provide a large torque output once the battery is depleted, which reduces the fuel economy of the vehicle. On the other hand, the SOC trajectories produced by the PDQN-TD3 and DP algorithms are very similar, but the SOC decrease of PDQN-TD3 is more gradual.
This indicates that when the PDQN-TD3 performs energy management, the frequency of engine startup to charge the battery will increase compared to the DP under the same driving conditions. This will reduce the peak discharge power of the power battery, which is positive and beneficial for the battery's lifespan. Compared with CD-CS, the motor can provide greater output power under aggressive driving conditions, and the engine working point can be adjusted more reasonably to allow the engine to work more often in the optimal fuel economy range. Table <ref> shows the simulation results of the total system energy consumption for fuel and electricity consumption. As can be seen from Table <ref>, the DP-based achieves optimal fuel economy, which we use as a benchmark for comparison with the PDQN-TD3 and CD-CS methods. The PDQN-TD3 EMS improves fuel economy by 10.08% compared to the CD-CS EMS. The fuel economy gap between PDQN-TD3 EMS and DP is 6%. This indicates that PDQN-TD3 has a better energy utilization efficiency, which reduces the operating costs of HEVs, further demonstrating the optimality of the PDQN-TD3 strategy for energy management. Fig. <ref> to Fig. <ref> show the engine operating points for the three control strategies. It can be seen that the EMS based on DP has more engine operating points in the fuel-efficient range. The main reason for the sparse distribution of engine operating points is that DP discretizes the engine torque, and due to the influence of the discretization precision, the engine operating point can only be selected from discrete intervals and limited discrete engine operating points on the map. Therefore, one point on the map may correspond to many engine torques with the same value. Although DP has better optimization results, its computation time is longer than that of PDQN-TD3 and CD-CS, which is the main reason why the DP algorithm cannot be applied to real vehicles. Compared with the EMS based on CD-CS, the control strategy based on PDQN-TD3 has smaller fluctuations in engine speed and torque, which further indicates that the EMS based on PDQN-TD3 can adjust the engine operating point well, so that the engine can work in the optimal fuel economy zone in most cases. Compared with CD-CS, the engine operates more efficiently and the vehicle has better fuel economy. The reason why the engine in CD-CS works in a non-economical range is that when the battery SOC is lower than SOC_min, the output power of the battery is restricted and cannot even provide power to the outside. At this time, the vehicle can only be driven by the engine, which to some extent limits the engine's efficiency and reduces fuel economy. Fig. <ref> to Fig. <ref> show the operating points of the three control strategies for the electric motor. It can be seen that for PDQN and DP algorithms, EV mode is mainly distributed in the area of low vehicle speeds. As the vehicle speed further increases, the engine starts, and the system operates in HEV mode. Different from DP and PDQN, for the CDCS control strategy, the system will also frequently enter the HEV mode when the vehicle speed is very low. This is because when the battery power is depleted, the battery cannot meet the power demand of the system and has to rely on the engine generation to provide additional power frequently. However, DP and PDQN can plan the battery usage more reasonably by driving the vehicle using the engine at higher speeds. 
This not only improves fuel economy but also reduces the output torque of the motor, which is beneficial for the battery's lifespan. From Fig. <ref>, it can be seen that compared to the PDQN and DP strategies, the CD-CS strategy has a lower frequency of operation in the EV mode and a significantly higher frequency in the HEV mode. This is because, for the CD-CS control strategy, even when the battery is fully charged, the vehicle may still operate in the EV mode even if the vehicle speed is high or the torque demand is large at the current moment. This undoubtedly increases the instantaneous power of the power battery, leading to a faster decline in battery capacity and potentially harming its health and lifespan. When the battery SOC drops to a set threshold, the battery is unable to provide sufficient driving torque and cannot enter the EV mode at low speeds, forcing the engine to start frequently in non-economical regions, which undoubtedly increases fuel consumption. Fig. <ref> show the engagement/disengagement of the clutch under three different control strategies. The statistical results of the clutch engagement/disengagement are presented in Table <ref>. It can be seen that for all three control strategies, the clutch disengagement times are greater than the clutch engagement times. This is because the cost of electricity is lower than the cost of fuel, although direct drive of the engine is more economical when the clutch is engaged, it cannot guarantee that the engine operates in the economic range at low speed or low torque. Instead, it increases fuel consumption. In this case, the system still tends to generate electricity through the engine in the high-efficiency range rather than directly driving the vehicle. The difference between the three strategies is that the CD-CS has significantly fewer clutch engagements. This is because at the beginning of the journey, the CD-CS tends to use electricity, and the engine is almost not started until the system's torque demand or vehicle speed exceeds the set threshold. When the SOC of the battery is low, although the engine needs to be started frequently to provide torque, because the system does not have a transmission, the engine cannot directly drive the vehicle at any speed. More often, the engine acts as a range extender. The number of clutch engagements in PDQN-TD3 and DP differs by 2.8%, which may be due to the lower fuel efficiency of PDQN-TD3 than DP. PDQN-TD3 can't predict the operating conditions of the entire journey in advance like DP, which may limit the optimization effect of PDQN-TD3 to some extent. § CONCLUSIONS This work focuses on the BYD DM-i hybrid system and conducts mathematical modeling of the hybrid system and energy management optimal control. The main conclusions are as follows: Considering the characteristics of the hybrid system with both continuous and discrete variables, we establish a control-oriented mathematical model for the DM-i hybrid systems from the perspective of mixed-integer programming. This enables the simultaneous handling of both continuous and discrete variables in energy management problems. The continuous-discrete RL algorithm PDON-TD3 was applied to energy management, achieving simultaneous optimization of both continuous (engine torque) and discrete action (clutch switch). The PDQN-TD3 EMS has better fuel economy and emission reduction effects compared to the CD-CS, with a 10.08% reduction in the total cost of fuel consumption and electric energy consumption. 
The cost-effectiveness gap between the PDQN-TD3 method and DP is 6.4%, and the method has greater potential for real-time online application. It can achieve energy-saving driving without compromising vehicle performance. Future research can explore the integration of prediction mechanisms into RL algorithms to further improve the energy utilization efficiency and driving comfort of HEVs. In addition, the cooperative optimization of energy management and advanced driving assistance systems for HEVs is another possible direction for energy saving and emission reduction.
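To make the continuous-discrete action space summarized in the conclusions concrete, the following Python sketch shows a generic parameterized-action selection step in the spirit of PDQN: an actor proposes a continuous engine-torque parameter for each discrete clutch command, and a Q-function picks the discrete command whose (state, parameter) pair scores highest. The toy linear "networks", names, dimensions and torque bound are placeholders for illustration only, not the authors' architecture or hyperparameters; the TD3-style twin critics and delayed policy updates used during training are not shown.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM = 3                # (SOC, v, P_d)
CLUTCH_ACTIONS = (0, 1)      # 0: disengaged, 1: engaged
T_E_MAX = 120.0              # assumed engine torque bound, N*m

# Toy stand-ins for the actor and critic networks.
W_actor = rng.normal(size=(len(CLUTCH_ACTIONS), STATE_DIM))
W_q = rng.normal(size=(len(CLUTCH_ACTIONS), STATE_DIM + 1))

def actor(state):
    """One continuous torque parameter per discrete clutch command."""
    raw = W_actor @ state
    return T_E_MAX * (np.tanh(raw) + 1.0) / 2.0   # squash into [0, T_E_MAX]

def q_values(state, torques):
    """Score each (clutch command, torque) pair in the given state."""
    feats = np.stack([np.append(state, t) for t in torques])
    return np.sum(W_q * feats, axis=1)

def select_action(state):
    torques = actor(state)
    k = int(np.argmax(q_values(state, torques)))
    return CLUTCH_ACTIONS[k], float(torques[k])

print(select_action(np.array([0.6, 45.0, 12.0])))  # (clutch command, engine torque)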
http://arxiv.org/abs/2306.07862v1
20230613154809
New Optimal Results on Codes for Location in Graphs
[ "Ville Junnila", "Tero Laihonen", "Tuomo Lehtilä" ]
cs.DM
[ "cs.DM", "math.CO", "05C69, 05C76, 05C63", "G.2.2" ]
New Optimal Results on Codes for Location in Graphs

An extended abstract <cit.> of the paper has been presented at the Fifth Russian Finnish Symposium on Discrete Mathematics. Research supported by the Academy of Finland grant 338797.

Ville Junnila, Department of Mathematics and Statistics, University of Turku, Turku FI-20014, Finland, [email protected]
Tero Laihonen, Department of Mathematics and Statistics, University of Turku, Turku FI-20014, Finland, [email protected]
Tuomo Lehtilä (research supported by the University of Turku Graduate School (UTUGS), the Vilho, Yrjö and Kalle Väisälä Foundation, and the Jenny and Antti Wihuri Foundation), Department of Mathematics and Statistics, University of Turku, Turku FI-20014, Finland, [email protected]

July 31, 2023

In this paper, we broaden the understanding of the recently introduced concepts of solid-locating-dominating and self-locating-dominating codes in various graphs. In particular, we present the optimal, i.e., smallest possible, codes in the infinite triangular and king grids. Furthermore, we give optimal locating-dominating, self-locating-dominating and solid-locating-dominating codes in the direct product K_n× K_m of complete graphs. We also present optimal solid-locating-dominating codes for the Hamming graphs K_q□ K_q□ K_q with q≥2.

Keywords: location-domination, solid-location-domination, self-location-domination, king grid, direct product, Hamming graph

§ INTRODUCTION

Sensor networks consist of sensors monitoring various places and connections between these places (see <cit.>). A sensor network is modeled as a simple and undirected graph G=(V(G),E(G))=(V,E). In this context, a sensor can be placed on a vertex v and its closed neighbourhood N[v] represents the set of locations that the sensor monitors. Besides assuming that graphs are simple and undirected, we also assume that they are connected and have cardinality at least two. In the following, we present some terminology and notation. The closed neighbourhood of v is defined as N[v]=N(v)∪{v}, where N(v) is the open neighbourhood of v, that is, the set of vertices adjacent to v. A code C is a nonempty subset of V and its elements are codewords. The codeword c∈ C covers a vertex v∈ V if v∈ N[c]. We denote the set of codewords covering v in G by I(G,C;v) = I(G;v) = I(C;v) = I(v) = N[v]∩ C. The set I(v) is called an identifying set or an I-set. We say that a code C⊆ V is dominating in G if I(C;u)≠∅ for all u∈ V.
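The basic notions just introduced translate directly into code. The following Python sketch computes closed neighbourhoods and I-sets and checks the dominating-code condition for a graph given as an adjacency dictionary; the helper names and the small example graph are illustrative assumptions rather than anything taken from the paper.

def closed_nbhd(adj, v):
    # N[v] = N(v) ∪ {v}
    return set(adj[v]) | {v}

def i_set(adj, code, v):
    # I(C; v) = N[v] ∩ C, the set of codewords covering v
    return closed_nbhd(adj, v) & set(code)

def is_dominating(adj, code):
    # C is dominating if I(C; u) is nonempty for every vertex u
    return all(i_set(adj, code, u) for u in adj)

# Example: the path 1-2-3-4 given by its adjacency dictionary.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(i_set(adj, {2, 4}, 3))       # {2, 4}
print(is_dominating(adj, {2, 4}))  # True
print(is_dominating(adj, {2}))     # False: vertex 4 is not covered

The location-domination variants studied below refine these I-set conditions, so the same helpers can serve as building blocks for the corresponding checks.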
If the sensors are placed at the locations corresponding to the codewords, then each vertex is monitored by the sensors located in I(v). More explanation regarding location detection in the sensor networks can be found in <cit.>. Let us now define identifying codes, which were first introduced by Karpovsky et al. in <cit.>. For numerous papers regarding identifying codes and related topics, the interested reader is referred to the online bibliography <cit.>. A code C ⊆ V is identifying in G if for all distinct u, v ∈ V we have I(C;u) ≠∅ and I(C;u) ≠ I(C;v) . An identifying code C in a finite graph G with the smallest cardinality is called optimal and the number of codewords in an optimal identifying code is denoted by (G). Identifying codes require unique I-sets for codewords as well as for non-codewords. However, if we omit the requirement of unique I-sets for codewords, then we obtain the following definition of locating-dominating codes, which were first introduced by Slater in <cit.>. A code C ⊆ V is locating-dominating in G if for all distinct u, v ∈ V ∖ C we have I(C;u) ≠∅ and I(C;u) ≠ I(C;v) . Notice that an identifying code in G is also locating-dominating (by the definitions). In <cit.>, self-locating-dominating and solid-locating-dominating codes have been introduced and, in <cit.>, they have been further studied. The definitions of these codes are given as follows. Let C ⊆ V be a code in G. (i) We say that C ⊆ V is self-locating-dominating code in G if for all u ∈ V ∖ C we have I(C;u) ≠∅ and ⋂_c ∈ I(C;u) N[c] = {u}. (ii) We say that C ⊆ V is solid-locating-dominating code in G if for all distinct u, v ∈ V ∖ C we have I(C;u) ∖ I(C;v) ≠∅. Observe that since G is a connected graph on at least two vertices, a self-locating-dominating and solid-locating-dominating code is always dominating. Analogously to identifying codes, in a finite graph G, we say that dominating, locating-dominating, self-locating-dominating and solid-locating-dominating codes with the smallest cardinalities are optimal and we denote the cardinality of an optimal code by γ(G), (G), (G) and (G), respectively. In the following theorem, we offer characterizations of self-locating-dominating and solid-locat­ing-dominating codes for easier comparison of them. Let G=(V,E) be a connected graph on at least two vertices: (i) A code C ⊆ V is self-locating-dominating if and only if for all distinct u ∈ V ∖ C and v ∈ V we have I(C;u) ∖ I(C;v) ≠∅. (ii) A code C ⊆ V is solid-locating-dominating if and only if for all u ∈ V ∖ C we have I(C;u)≠∅ and ( ⋂_c ∈ I(C;u) N[c] ) ∖ C = {u}. Based on the previous theorem, we obtain the following corollary. If C is a self-locating-dominating or solid-locating-dominating code in G, then C is also solid-locating-dominating or locating-dominating in G, respectively. Furthermore, for a finite graph G, we have (G)≤(G) ≤(G) . The structure of the paper is described as follows. First, in Section <ref>, we obtain optimal self-locating-dominating and solid-locating-dominating codes in the infinite triangular and king grids, i.e., the smallest possible codes regarding their density (a concept defined later). Regarding the triangular grid, the proofs are rather simple and straightforward, but they serve as nice introductory examples to the concepts of solid-location-domination and self-location-domination. 
However, the case with the king grid is more interesting; in particular, the proof of the lower bound for solid-location-domination is based on global arguments instead of only local ones, which are more usual in domination type problems. Then, in Section <ref>, we give optimal locating-dominating, self-locating-dominating and solid-locating-dominating codes in the direct product K_n× K_m of complete graphs, where 2≤ n≤ m. Finally, in Section <ref>, we present optimal solid-locating-dominating codes for graphs K_q□ K_q□ K_q with q≥2. § TRIANGULAR AND KING GRIDS In this section, we consider solid-location-domination and self-location-domination in the so called infinite triangular and king grids, which are widely studied graphs in the field of domination (see <cit.>). As defined in the introduction, for finite graphs, the optimality of a code has been defined using the minimum cardinality. However, this method is not valid for the infinite graphs of this section. Hence, we need to use the usual concept of density of a code (see various papers concerning infinite grids in <cit.>). Let us first consider the infinite triangular grid. Let G=(V,E) be a graph with the vertex set V={i(1,0)+j(1/2,√(3)/2)| i,j∈} and two vertices are defined to be adjacent if their Euclidean distance is equal to one. The obtained graph G is called the infinite triangular grid and it is illustrated in Figure <ref>. We further denote v(i,j)= i(1,0) +j (1/2, √(3)/2). Let R_n be the subgraph of G induced by the vertex set V_n = {v(i,j) | |i|, |j| ≤ n }. The density of a code in G is now defined as follows: D(C) = lim sup_n →∞|C∩ V_n|/|V_n| We say that a code is optimal if there exists no other code with smaller density. In the following theorem, optimal self-locating-dominating and solid-locating-dominating codes are given in the triangular grid. The methods used in the proof are rather typical for domination type of problems. However, we present the proof for completeness and as an introductory example. Let G=(V,E) be the triangular grid. The code C = {v(i,j)| i, j ≡ 0 2} is self-locating-dominating in G and, therefore, also solid-locating-dominating. The density of the code C is equal to 1/4 and there exists no self-locating-dominating or solid-locating-dominating code with smaller density, i.e., the code is optimal in both cases. Let us first show that the code C is self-locating-dominating in the triangular grid G. The proof now divides into the following cases depending on the parity of i and j in v(i,j): * If i is odd and j is even, then I(v(i,j))={v(i-1,j),v(i+1,j)} and N[v(i-1,j)] ∩ N[v(i+1,j)] = {v(i,j)}. * Analogously, if i is even and j is odd, then I(v(i,j))={v(i,j-1),v(i,j+1)} and N[v(i,j-1)] ∩ N[v(i,j+1)] = {v(i,j)}. * Finally, if i and j are both odd, then I(v(i,j))={v(i-1,j+1),v(i+1,j-1)} and N[v(i-1,j+1)] ∩ N[v(i+1,j-1)] = {v(i,j)}. Thus, as v(i,j) is a codeword for even i and j, the code C is self-locating-dominating in G. Furthermore, we have D(C)=1/4 since v(i,j) is a codeword if and only if i and j are both even. Notice that C is also a solid-locating-dominating code. For the lower bound, assume that C' is a solid-locating-dominating code in G. Immediately, by the definition of solid-locating-dominating codes, we know that |I(C';u)| ≥ 2 for any non-codeword u. 
Therefore, by counting in two ways the pairs (u,c), where c ∈ C' ∩ V_n and u ∈ N[c] ∩ V_n-1, we obtain that 7|C' ∩ V_n| ≥ |C' ∩ V_n-1| + 2(|V_n-1| - |C' ∩ V_n-1|) ≥ 2|V_n-1| - |C' ∩ V_n-1| ≥ 2|V_n-1| - |C' ∩ V_n|, which is equivalent to |C' ∩ V_n| ≥ |V_n-1|/4. Thus, we may estimate the density of C' as follows: D(C') = lim sup_n →∞|C'∩ V_n|/|V_n|≥lim sup_n →∞|V_n-1|/4/|V_n| = 1/4. Next we consider the more interesting problems of solid-location-domination and self-location-domination in the infinite king grid. Let us first begin by defining the grid and the density of a code in it. Let G=(V,E) be a graph with V=^2 and for the vertices v=(v_1,v_2)∈ V and u=(u_1,u_2)∈ V we have vu∈ E if and only if |v_1-u_1|≤1 and |v_2-u_2|≤1. The obtained graph G is called the infinite king grid. Further let V_n be a subset of V such that V_n={(x,y)| |x|≤ n, |y|≤ n}. The density of a code C ⊆ V = ^2 is now defined as D(C)=n→∞limsup|C∩ V_n|/|V_n|. We say that a code is optimal if there exists no other code with smaller density. In what follows, we first consider solid-location-domination in the king grid. In the following theorem, we present a solid-locating-dominating code in the king grid with density 1/3. Later, in Theorem <ref>, it is shown that the code is optimal. Let G=(V,E) be the king grid. The code C = {(x,y)∈^2| |x|+|y|≡ 0 3} is solid-locating-dominating in G and its density is 1/3. Let C={(x,y)∈^2| |x|+|y|≡ 0 3} be a code in G (illustrated in Figure <ref>). By the definition, it is immediate that the density of C is equal to 1/3. In order to show that C is a solid-locating-dominating code in G, we prove that the condition of Theorem <ref>(ii) holds for every non-codeword of G. Let u = (x,y) ∈^2 be a vertex not belonging to C. Suppose first that x=0 and y > 0. Now, if y ≡ 1 3, then I(u) = {u + (0,-1), u + (-1,1), u + (1,1)} and N[u + (0,-1)] ∩ N[u + (-1,1)] ∩ N[u + (1,1)] = {u}, else y ≡ 2 3 implying I(u) = {u + (-1,0), u + (1,0), u + (0,1)} and (N[u + (-1,0)] ∩ N[u + (1,0)] ∩ N[u + (0,1)]) ∖ C = {u}. Thus, the required condition is met. The case with y < 0 is analogous. Moreover, the case with y=0 is symmetrical to the one with x = 0. Hence, we may assume that x ≠ 0 and y ≠ 0. Suppose then that x ≥ 1 and y ≥ 1. Now we have either I(u) = {u + (0,-1), u + (-1,0), u + (1,1)} or I(u) = {u + (0,1), u + (1,0), u + (-1,-1)}. In both cases, we obtain that ⋂_c ∈ I(u) N[c] = {u} and the condition is satisfied. The other (three) cases with x ≤ -1 or y ≤ -1 can be handled analogously. Thus, in conclusion, C is a solid-locating-dominating code in G. Usually, the best known constructions for domination type codes in infinite grids are formed by a repetition of a finite pattern. However, this is not the case with the code C of the previous theorem. Another observation is that the codeword c = (0,0) has a special role as a sort of center of the code. In particular, the density of the code (or more precisely the ratio |C ∩ V_n|/|V_n|) in the close proximity of c is less than 1/3. Consider now the lower bound on the density of a solid-locating-dominating code. Usually, the lower bounds are obtained by locally studying the symmetric difference of closed neighbourhoods of vertices or the domination properties of vertices (such as the concept of share <cit.> or the common technique used in the proof of Theorem <ref>). However, in order to deal with the special type of codewords c, we develop a new technique of more global nature. 
For this purpose, we first present the following lemma on a forbidden pattern of non-codewords. Let G=(V,E) be the king grid and C⊆ V be a solid-locating-dominating code in G. Then T={(i,j),(i,j+1),(i,j+2),(i+1,j+2),(i-1,j+2)} and any formation obtained from T by a rotation of π/2, π or 3π/2 radians around the origin contains a codeword of C. Assume that the set T={(i,j),(i,j+1),(i,j+2),(i+1,j+2),(i-1,j+2)} contains no codewords of C. Then a contradiction with the definition follows since I(i,j+1) ∖ I(i,j) = ∅. The other cases obtained from T by a rotation are proved analogously. In the following theorem, we prove that the solid-locating-dominating code of Theorem <ref> is optimal, i.e., there is no code with density smaller than 1/3. The proof is based on the idea of studying one-way infinite strips of vertices of width 3 and showing that the density of codewords in these strips is at least 1/3. If G=(V,E) is the king grid and C⊆ V is a solid-locating-dominating code in G, then the density D(C) ≥1/3. Let S^j be a subgraph of G induced by the vertex set V'_j={(x,y)| 1≤ x≤ 3, 1≤ y≤ j}. Recall first the definition V_n = {(x,y) | |x| ≤ n, |y| ≤ n}. Observe now that we may fit into the first quadrant {(x,y)| 1≤ x≤ n, 1≤ y≤ n} of V_n ⌊ n/3 ⌋ graphs isomorphic to S^n. Similarly, the other three quadrants of V_n can each contain ⌊ n/3 ⌋ graphs isomorphic to S^n. Thus, in total, 4⌊ n/3 ⌋ graphs isomorphic to S_n can be fitted into V_n. Let C be a solid-locating-dominating code in G. In the final part of the proof, we show that any subgraph of G isomorphic to S^n contains at least n-3 codewords. Assuming this is the case, the density of C can be estimated as follows: D(C)= lim sup_n →∞|C ∩ V_n|/|V_n|≥lim sup_n →∞4⌊n/3⌋·(n-3)/(2n+1)^2≥lim sup_n →∞4(n-3)^2/3(2n+1)^2 = 1/3. It remains to be shown that any subgraph of G isomorphic to S^n contains at least n-3 codewords. By symmetry, it is enough to show that |C∩ V'_n|≥ n-3. In what follows, we consider more closely the number of codewords in a row S_i = {(j,i)| 1≤ j≤3 } of V'_n. For this purpose, the following set of rules for rearranging the codewords inside V'_n are introduced: Rule 1.1: If S_i∩ C=∅, 1≤ i ≤ n-1 and {(1,i+1),(3,i+1) }⊆ C, then one codeword is moved from S_i+1 to S_i. The rule is illustrated in Figure <ref>. Rule 1.2: If S_i∩ C=∅, 2≤ i and {(1,i-1),(3,i-1) }⊆ C, then one codeword is moved from S_i-1 to S_i. The rule can be viewed as a reflected version of Rule 1.1. Rule 2.1: If S_i∩ C=∅, 2≤ i and {(1,i-1),(2,i-1) }= C∩ S_i-1, then one codeword is moved from S_i-1 to S_i. The rule is illustrated in Figure <ref>. Rule 2.2: If S_i∩ C=∅, 2≤ i and {(2,i-1),(3,i-1) }= C∩ S_i-1, then one codeword is moved from S_i-1 to S_i. The rule can be viewed as a reflected version of Rule 2.1. Rule 3.1: If S_i∩ C=∅, 3≤ i, S_i-1∩ C={(1,i-1)} and {(2,i-2),(3,i-2)}⊆ S_i-2∩ C, then one codeword is moved from S_i-2 to S_i. The rule is illustrated in Figure <ref>. Rule 3.2: If S_i∩ C=∅, 3≤ i, S_i-1∩ C={(3,i-1)} and {(2,i-2),(1,i-2)}⊆ S_i-2∩ C, then one codeword is moved from S_i-2 to S_i. The rule can be viewed as a reflected version of Rule 3.1. Rule 4.1: If S_i∩ C=∅, 3≤ i, S_i-1∩ C={(1,i-1)} and {(1,i-2),(2,i-2)}= S_i-2∩ C, then one codeword is moved from S_i-2 to S_i. The rule is illustrated in Figure <ref>. Rule 4.2: If S_i∩ C=∅, 3≤ i, S_i-1∩ C={(3,i-1)} and {(2,i-2),(3,i-2)}= S_i-2∩ C, then one codeword is moved from S_i-2 to S_i. The rule can be viewed as a reflected version of Rule 4.1. 
Denote the code obtained after simultaneously applying the previous rules by C'. Notice that the rearrangement C' of C is not completely determined by the previous rules and that this is not actually needed as in the following we are only interested on the number of codewords in the rows of V'_n. In other words, when a codeword is moved from a row we can choose any of the codewords and move it to replace any non-codeword of the target row. In what follows, we show that each row which has given away codewords still contains at least one and each row which originally did not contain any codeword has received at least one except possibly the rows S_1, S_2 and S_n. We immediately notice that the rules move codewords only from the rows with at least two codewords. Each type of row with at least two codewords is examined as follows: * C∩ S_i = {(j,i)| 1≤ j≤3}: Rules 1.1, 1.2, 3.1 and 3.2 can be applied on rows with three codewords. Among these, Rules 3.1 and 3.2 cannot be applied at the same time and Rule 1.2 cannot be applied together with Rules 3.1 or 3.2. Hence, we apply at most two rules on a row with three codewords and that row has at least one codeword left in the code C'. * C∩ S_i = {(j,i)| 1≤ j≤2}: Rules 2.1,3.2 and 4.1 can be applied on this types of rows. We cannot apply Rule 2.1 at the same time as 3.2 or 4.1 since 2.1 requires that S_i+1∩ C=∅ and Rules 3.2 and 4.1 require that |S_i+1∩ C|=1. Furthermore, we cannot apply Rules 3.2 and 4.1 at the same time since they require the codeword on the row S_i+1 to locate at different places. Hence, C' is left with at least one codeword. * C∩ S_i = {(j,i)| 2≤ j≤3}: This case is symmetrical to the previous one (now the rules to be considered are 2.2, 3.1 and 4.2). * C∩ S_i = {(j,i)| j≠ 2}: We can only apply Rules 1.1 and 1.2 on these types of rows and both of them only when i≥ 2. However, if both of the rules are used, then C ∩ S_i-1 = C ∩ S_i+1 = ∅ and a contradiction with Lemma <ref> follows. Hence, at most one rule is used and |C' ∩ S_i| ≥ 1. Let us then show that we have |C'∩ S_i|≥1 for each i such that C∩ S_i=∅ and 3 ≤ i ≤ n-1. In the following cases, we assume that S_i∩ C=∅ and the cases are categorized by considering the different formations of the row S_i-1. * S_i-1∩ C=∅: Considering different orientations and positions of the formation T in Lemma <ref>, we have S_i+1⊆ C. Hence, due to Rule 1.1, one codeword from S_i+1 is moved to S_i and we obtain |C'∩ S_i|≥1. * S_i-1∩ C={(1,i-1)}: By Lemma <ref>, we have (2,i-2)∈ C. Notice that if (1,i-2) and (3,i-2) do not belong to C, then a contradiction with the definition of solid-locating-dominating codes follows since we have I(2,i-1) ⊆ I(1,i-2) for non-codewords (2,i-1) and (1,i-2). Hence, at least one of the vertices (1,i-2) and (3,i-2) belongs to C. Therefore, either Rule 3.1 or 4.1 can be applied (to the row S_i-2) and we have |C'∩ S_i|≥ 1. * S_i-1∩ C={(3,i-1)}: This case is symmetrical to the previous one. Here we just use either Rule 3.2 or 4.2. * S_i-1∩ C={(2,i-1)}: By Lemma <ref>, we have {(1,i+1),(3,i+1)}⊆ C. Hence, due to Rule 1.1, we have |C'∩ S_i|≥1. * S_i-1∩ C={(1,i-1),(2,i-1)}: Due to Rule 2.1, we have |C'∩ S_i|≥1. * S_i-1∩ C={(2,i-1),(3,i-1)}: Due to Rule 2.2, we have |C'∩ S_i|≥1. * S_i-1∩ C={(1,i-1),(3,i-1)}: Due to Rule 1.2, we have |C'∩ S_i|≥1. * S_i-1∩ C={(1,i-1),(2,i-1),(3,i-1)}: Due to Rule 1.2, we have |C'∩ S_i|≥1. Thus, in conclusion, we have shown that for 3 ≤ i ≤ n-1 we have |C' ∩ S_i| ≥ 1. 
Therefore, as the rules rearrange codewords only inside V'_n, we have |C ∩ V'_n| ≥ |C' ∩ V'_n| ≥ n-3. This concludes the proof of the lower bound D(C) ≥ 1/3. In the previous theorems, we have shown that the density of an optimal solid-locating-dominating code in the king grid is 1/3. Recall that a self-locating-dominating code is always solid-locating-dominating. Hence, by the previous lower bound, we also know that there exists no self-locating-dominating code in the king grid with density smaller than 1/3. However, the construction given for the solid-location-domination does not work for self-location-domination. For example, we have I(2,0) = {(2,-1), (2,1), (3,0)} and N[(2,-1)] ∩ N[(2,1)] ∩ N[(3,0)] = {(2,0), (3,0)} contradicting with the definition of self-locating-dominating codes (see Figure <ref>). In the following theorem, we present a self-locating-dominating code in the king grid with the density 1/3. Notice that this code is also solid-locating-dominating. Let G=(V,E) be the king grid. The code C = {(x,y)∈^2| x - y ≡ 0 3} is self-locating-dominating in G and its density is 1/3. The density D(C) = 1/3 since in each row every third vertex is a codeword. Furthermore, C is a self-locating-dominating code since each non-codeword v is covered either by the set of three codewords {v+(1,0),v+(0,-1),v+(-1,1)} or {v+(-1,0),v+(0,1),v+(1,-1)}, and in both cases the closed neighbourhoods of the codewords intersect uniquely in the vertex v. § DIRECT PRODUCT OF COMPLETE GRAPHS A graph is called a complete graph on q vertices, denoted by K_q, if each pair of vertices of the graph is adjacent. The vertex set V(K_q) is denoted by {1,2, …, q}. The Cartesian product of two graphs G_1=(V_1,E_1) and G_2=(V_2,E_2) is defined as G_1□ G_2=(V_1× V_2,E), where E is a set of edges such that (u_1,u_2)(v_1,v_2)∈ E if and only if u_1=v_1 and u_2v_2∈ E_2, or u_2=v_2 and u_1v_1∈ E_1. The direct product of two graphs G_1 and G_2 is defined as G_1× G_2=(V_1× V_2, E), where E={(u_1,u_2)(v_v,v_2)| u_1v_1∈ E_1 and u_2v_2∈ E_2}. A complement of a graph G = (V,E) is the graph G = (V,E') with the edge set E' being such that uv ∈ E' if and only if uv ∉ E. In this section, we give optimal locating-dominating, self-locating-dominating and solid-locating-dominating codes in the direct product K_n× K_m, where 2≤ n≤ m. For location-domination and solid-location-domination, the results heavily depend on the exact values of (K_n □ K_m) and (K_n □ K_m), which have been determined in <cit.>. In the graphs K_n× K_m and K_n □ K_m, the jth row (of V(K_n)× V(K_m)) is denoted by R_j and it consists of the vertices (1,j), (2,j), …, (n,j). Analogously, the ith column is denoted by P_i and it consists of the vertices (i,1), (i,2), …, (i,m). Now we are ready to present the following observations: * In the Cartesian product K_n □ K_m, the closed neighbourhood N[(i,j)] = N[i,j] consists of the row R_j and the column P_i. Therefore, as the closed neighbourhood of a vertex resembles the movements of a rook in a chessboard, K_n □ K_m is also sometimes called the rook's graph. * In the direct product K_n × K_m, we have N((i,j)) = N(i,j) = V(K_n □ K_m) ∖ (R_j ∪ P_i). Due to the previous observations, we know that K_n□ K_m=K_n× K_m. Recall that identification is a topic closely related to the various location-domination type problems. Previously, in <cit.>, the identifying codes have been studied in the direct product K_n× K_m of complete graphs by Goddard and Wash. More precisely, they determined the exact values of (K_n× K_m) for all m and n. 
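The two neighbourhood observations above are easy to check mechanically for small n and m. The following Python sketch builds both adjacency relations on the common vertex set V(K_n)× V(K_m) and confirms that the direct product K_n× K_m is exactly the complement of the Cartesian (rook's) product K_n□ K_m; the function names are illustrative only.

from itertools import product

def cartesian_adjacent(u, v):
    # In K_n □ K_m, distinct (i,j) and (h,k) are adjacent iff they share
    # the row or the column, i.e. exactly one coordinate agrees.
    return u != v and (u[0] == v[0] or u[1] == v[1])

def direct_adjacent(u, v):
    # In K_n × K_m, (i,j) and (h,k) are adjacent iff both coordinates differ.
    return u[0] != v[0] and u[1] != v[1]

def is_complement_pair(n, m):
    V = list(product(range(1, n + 1), range(1, m + 1)))
    return all(direct_adjacent(u, v) == (not cartesian_adjacent(u, v))
               for u in V for v in V if u != v)

print(is_complement_pair(3, 4))  # True
print(is_complement_pair(5, 7))  # True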
In what follows, we determine the exact values of (K_n × K_m) for all m and n. For this purpose, we first present the following result concerning location-domination in the Cartesian product K_n□ K_m of complete graphs given in <cit.>. Let m and n be integers such that 2 ≤ n≤ m. Now we have (K_n □ K_m)= m-1, 2n≤ m, ⌈2n+2m/3⌉-1, n≤ m≤ 2n-1. There is a strong connection between the values of (K_n □ K_m) and (K_n × K_m) as explained in the following. In <cit.>, it has been shown that |γ^LD(G)-(G)|≤1. Therefore, as K_n× K_m=K_n□ K_m, we obtain that (K_n□ K_m)-1≤(K_n× K_m)≤(K_n□ K_m)+1. This result is further sharpened in the following lemma. For 2≤ n≤ m and (n,m) ≠ (2,4), we have γ^LD(K_n□ K_m)-1≤γ^LD(K_n× K_m)≤γ^LD(K_n□ K_m). If (K_n× K_m) = (K_n□ K_m) -1, then the optimal locating-dominating code C in K_n× K_m has a non-codeword v such that I(v)=C. First denote G=K_n□ K_m and H=K_n× K_m. The lower bound of the claim is immediate by the result preceding the lemma. For the upper bound, let C be an optimal locating-dominating code in G. The code C can also be viewed as a code in H. If we have I(H;u)=I(H;v) for some non-codewords u and v, then a contradiction follows since I(G;u)=C∖ I(H;u)=C∖ I(H;v)=I(G;v). Hence, we have I(H;u) ≠ I(H;v) for all distinct non-codewords u and v. Moreover, if I(G;v)≠ C for each non-codeword v, then we also have I(H;v)≠∅, and the upper bound follows since C is a locating-dominating code in H. Hence, we may assume that I(G;v)=C for some non-codeword v. This implies that C⊆ P_i∪ R_j for some i,j. There exists at most one non-codeword in P_i ∖{v} since otherwise there are at least two non-codewords with the same I-set. Similarly, there exists at most one non-codeword in R_j ∖{v}. Furthermore, if both P_i ∖{v} and R_j ∖{v} contain a non-codeword, then there exists a vertex with an empty I-set. Thus, in conclusion, there exists at most two non-codewords in P_i∪ R_j and, hence, we have |C|≥ n+m-3. Dividing into the following cases depending on n and m, we next show that |C| ≥ n+m-3>(G) in majority of the cases of the lemma: * If n ≥ 3 and m ≥ 2n, then we have (G) = m-1 < n+m-3 ≤ |C| (by Theorem <ref>). * If n ≥ 4, n ≤ m ≤ 2n-1 and (n,m) ≠ (4,4), then (G) = ⌈ 2(n+m)/3 ⌉ - 1 < n+m-3 ≤ |C| (by Theorem <ref>). Thus, if n ≥ 3 and m ≥ 2n, or n ≥ 4, n ≤ m ≤ 2n-1 and (n,m) ≠ (4,4), then a contradiction with the optimality of C follows. Hence, in these cases, we have γ^LD(H)≤γ^LD(G). The rest of the cases are covered in the following: * If n = 2 and 2 ≤ m ≤ 3, then C = P_1 is an optimal locating-dominating code in G with the property that for any non-codeword v we have I(G; v) ≠ C. Similarly, if n = 2 and m ≥ 5, then C = {(2,1),(2,2)}∪ P_1∖{(1,i)| i≤3} is an optimal locating-dominating code in G with the property that for no vertex v we have I(G; v) = C. Thus, in both cases, the code C is also locating-dominating in H by the first paragraph of the proof. * If n = m = 3, then C = {(1,1),(1,2),(2,1)} is a locating-dominating code in H with γ^LD(G) = 3 codewords. * If n = 3 and 4 ≤ m ≤ 5 or (n,m) = (4,4), then {(1,1),(1,3),(2,2),(2,4)}, {(1,1), (1,3), (2,2),(2,4),(3,5)} and {(1,1), (1,3), (2,2), (2,4), (3,1)} obtained from the proof of <cit.> are optimal locating-dominating codes in K_3□ K_4, K_3□ K_5 and K_4□ K_4, respectively. Therefore, since there does not exist a non-codeword covering all the codewords (in the Cartesian product) in any of the cases, the codes are also locating-dominating in K_3× K_4, K_3× K_5 and K_4× K_4 (by the first paragraph of the proof), respectively. 
Let then C' be a locating-dominating code in H. Similarly as above, we get that if I(H;v)≠ C' for each non-codeword v, then C' is also a locating-dominating code in G. Therefore, if (H) = (G) -1, then there exist a non-codeword v such that I(H;v) = C'. Thus, the last claim of the lemma follows. Now with the help of the previous lemma and Theorem <ref>, we determine the exact values of (K_m× K_n) in the following theorem. For 2≤ n≤ m we have (K_n× K_m)=m-1, 2n≤ m and (n,m)≠ (2,4), ⌈2n+2m-1/3⌉-1, 2<n≤ m<2n and (m,n) ≠ (4,4) m, n=2, m≤4, 5, n=4,m=4. Let C be a locating-dominating code in K_n× K_m. We cannot have R_i∩ C=R_j∩ C=∅ for i≠ j since otherwise, for example, I(C;(1,i)) = I(C;(1,j)). Similarly, there exists at most one column without codewords of C. Thus, we have (K_n× K_m)≥ m-1. Therefore, if m≥2n and (n,m)≠(2,4), then by the previous lemma we have m-1 ≤(K_n× K_m) ≤(K_n□ K_m) = m-1, i.e., (K_n× K_m) = m-1. Assume then that 2< n≤ m≤ 2n-1 and n+m ≡ 0, 1 3. In what follows, we show that now |C| ≥(K_n□ K_m). By the previous lemma, we know that if there is no non-codeword u such that I(K_n× K_m, C; u) = C, i.e., there does not exist a row and column without codewords, then |C| =(K_n□ K_m). Hence, we may now assume that there exist a row and a column without codewords. Without loss of generality, we may assume that they are P_n and R_m. Observe that C can now also be viewed as a code in K_n-1□ K_m-1 and that C is locating-dominating in K_n-1□ K_m-1 with the following additional properties: (i) each column has at least one codeword, (ii) each row has at least one codeword and (iii) no codeword (i,j) ∈ C is such that (P_i ∪ R_j) ∩ C = {(i,j)}, i.e., no codeword of C is isolated. Indeed, the properties (i) and (ii) follow immediately by the first paragraph of the proof and if (i,j) ∈ C is a codeword violating the property (iii), then we have I(K_n× K_m,(n,j))=I(K_n× K_m;(i,m))=C∖{(i,j)} (a contradiction). Now we are ready to prove a lower bound on |C| as in <cit.>. Denote the number of columns and rows with exactly one codeword in K_n□ K_m by s_p and s_r, respectively. Now we obtain that |C| ≥ s_p + 2(n-1-s_p) = 2(n-1)-s_p and |C| ≥ s_r + 2(m-1-s_r) = 2(m-1)-s_r (by the properties (i) and (ii)). This further implies that s_p ≥ 2(n-1) - |C| and s_r ≥ 2(m-1) - |C|. By the property (iii), we now obtain that |C|≥ s_p+s_r≥ 2(n-1)+2(m-1)-2|C|. Thus, we have |C|≥⌈ (2m+2n-1)/3 ⌉-1. Hence, as n+m ≡ 0, 1 3, we have |C|≥⌈ (2m+2n-1)/3 ⌉-1 = ⌈ (2m+2n)/3 ⌉ -1 = (K_n□ K_m). Thus, by the upper bound of the previous lemma, we obtain that (K_n× K_m) = (K_n□ K_m) if 2< n≤ m≤ 2n-1 and n+m ≡ 0, 1 3. Assume then that 2< n≤ m≤ 2n-1, n+m ≡ 2 3 and (n,m) ≠ (4,4). In what follows, we show that the lower bound of Lemma <ref> is attained, i.e., (K_n× K_m) = (K_n□ K_m) - 1. Denote n'=n-1 and m'=m-1 and observe that n'+m' is divisible by three. Let C'=A_1∪ A_2∪ A_3 be a code in K_n × K_m with A_1 ={(i,i)| 1≤ i≤n'+m'/3}, A_2 ={(j,i)|n'+m'/3+1≤ i≤ m', j=2n'+m'/3+1-i} and A_3 ={(j,i)| 1≤ i≤2n'-m'/3, j=i+n'+m'/3}. The code C' is illustrated in Figure <ref>. By straightforward counting , we get |C'| = |A_1|+|A_2|+|A_3| = m' + 2n'-m'/3=2n'+2m'/3=2n+2m-1/3-1=(K_n□ K_m)-1. In what follows, we first show that C' is almost a locating-dominating code in K_n□ K_m with the exception that I(C';(n,m)) = ∅. Denote the sets of non-codewords (j,i) with (2n'+2m')/3-m'+1≤ j≤ (n'+m')/3 and i≤ (2n'+2m')/3-m' by B_1 and B_2, respectively. 
It is straightforward to verify that each non-codeword u ∈ B_1 ∪ B_2 has at least three codewords in I(K_n□ K_m,C';u) and the codewords of I(K_n□ K_m,C';u) do not lie on a single row or column. This implies that ⋂_c ∈ I(C';u) N[c] = {u } for any u ∈ B_1 ∪ B_2, i.e., there is no other vertex containing I(C';u) in its I-set. Thus, each non-codeword in B_1 ∪ B_2 has a unique nonempty I-set. Consider then a non-codeword v=(j,i) with i> (2n'+2m')/3-m' and j<(2n'+2m')/3-m+1'. By the construction of C', we have |I(C';v)| = 2. Now there exists a codeword (j,j)∈ I(v) since j≤ (2n'+2m')/3-m'. Furthermore, there exists a codeword c ∈ I(j,j)∩ A_3. Hence, if there exists a non-codeword w such that I(C';v) = I(C';w), then w ∈ B_2 and a contradiction follows as |I(C';w)| ≥ 3. Thus, the I-set of v is nonempty and unique. Similarly, it can be shown that I(C';(j,i)) is nonempty and unique for i>(2n'+2m')/3-m' and j>(n'+m')/3. Consider then non-codewords u = (j,m) and v = (n, i) with 1 ≤ j ≤ n-1 and 1 ≤ i ≤ m-1. We immediately obtain that I(C';(j,m)) = P_j ∩ C' and I(C';(n,i)) = R_i ∩ C'. These I-sets are nonempty since each row and column contains a codeword. These I-sets are also different from the ones of non-codewords inside K_n'□ K_m' which contain at least two codewords in different rows and columns. It is also impossible to have I(C';u) = I(C';v) since each codeword has another one in the same row or column. Thus, u and v have nonempty and unique I-sets. Thus, in conclusion, we have shown that I(C';u) is nonempty and unique for all non-codewords u in K_n□ K_m except (n,m) (for which we have I(C';(n,m)) = ∅). Furthermore, there does not exist a non-codeword v such that I(C';v) = C'. Therefore, as in the proof of Lemma <ref>, we obtain that C' is a locating-dominating code in K_n× K_m. Thus, we have (K_n× K_m) = (K_n□ K_m) - 1. Now majority of the cases have been considered, and we only have some special cases left. Concluding the proof, these cases are solved as follows: * Assume that n=2 and m≤ 4. It is easy to see that C=P_1 is a locating-dominating code in K_n× K_m. For the lower bound, first recall that K_n× K_m has at most one row without codewords (by the first paragraph of the proof). Therefore, if C is a locating-dominating code in K_n× K_m with |C| ≤ m-1, then all the codewords lie on different rows. Hence, in all the cases, there exist a non-codeword with an empty I-set. Thus, we have (K_n× K_m) = m. * Assume that n = m = 4. By Lemma <ref>, we immediately have 4 ≤(K_4× K_4) ≤ 5. Let C be a locating-dominating code in K_4× K_4. As in the second paragraph of the proof, it can be shown that either |C| ≥(K_4□ K_4) = 5 (and we are done), or C is locating-dominating in K_3× K_3 with the additional properties (i), (ii) and (iii). In the latter case, due to (i), (ii) and (iii), there exist a row and a column of K_3× K_3 with two codewords such that their intersection is a non-codeword u. Hence, a contradiction follows since I(K_4× K_4, C; u) = ∅. Thus, we have (K_4× K_4) = 5. Let us next briefly consider solid-location-domination. The following result has been shown in <cit.>. For all integers m and n such that m ≥ n≥ 1, we have (K_n □ K_m)= m, 4≤ 2n≤ m or n= 2, 2n, 2<n < m <2n, 2n-1, 2<m=n. In the following theorem, we show that the cardinalities of optimal solid-locating-dominating codes are same for K_n× K_m and K_n□ K_m. For all integers m and n such that m ≥ n≥ 2, we have (K_n× K_m)=(K_n□ K_m). By <cit.>, we have (G) = (G) if G is not a discrete or a complete graph. 
Therefore, as this is the case for G = K_n× K_m, we have (K_n× K_m) = (K_n× K_m) = (K_n□ K_m). Let us then consider self-location-domination. Unlike location-domination <cit.> and solid-location-domination <cit.>, the optimal cardinality of a self-locating-dominating code in G does not depend on the one of the complement graph G. In the following theorem, we first give the result presented in <cit.> regarding (K_n □ K_m). For all integers m and n such that m ≥ n≥ 2, we have (K_n □ K_m)= m, 2n≤ m, 2n, 2≤ n<m<2n, 2n-1, 2<m=n, 4, n=m=2. In the following theorem, we determine the exact values of (K_n × K_m) for all values of m and n. Notice that (K_n□ K_m)=(K_n× K_m) if and only if n=m, m=n+1>3, or n=2 and m≥4. For all integers m and n such that m ≥ n≥ 2, we have (K_n× K_m)=m+n-1, n>2, m, n=2, m>2, 4, n=m=2. Let C be a self-locating-dominating code in K_n× K_m. Notice first that if n = m =2, then K_2× K_2 is isomorphic to a forest of two paths of length two and, therefore, (K_2× K_2) = 4. Hence, we may assume that (n,m) ≠ (2,2). Observe then that if a column P_i contains no codewords, i.e., P_i∩ C=∅, then C = V ∖ P_i. Indeed, for any vertices (i,j) ∈ P_i and (h,j) ∈ V with i ≠ h, we have I(h,j)⊆ I(i,j) and the claim C = V ∖ P_i follows by Theorem <ref>. Analogously, it can be shown that if R_i∩ C=∅, then C = V ∖ R_i. Suppose now that n = 2 and m > 2. If each row contains a codeword, then we immediately have |C| ≥ m. Otherwise, there exists a row without codewords and, by the previous observation, we have |C| ≥ 2m - 2 ≥ m. Hence, we obtain that |C| ≥ m. Furthermore, P_1 is a self-locating-dominating code in K_2× K_m with m codewords. Thus, in conclusion, we have (K_2× K_m) = m. Assume that n > 2. By the previous observations, we know that if there exists a row or a column without codewords, then |C| ≥min{mn-m, mn-n} = mn-m ≥ m+n-1. Hence, we may assume that each row and column contains a codeword of C. Furthermore, if each row contains at least 2 codewords, then |C| ≥ 2m ≥ m+n-1. Hence, we may assume that there exists a row R_i with exactly one codeword, i.e., R_i∩ C = {(j,i) } for some j. Hence, as I(j,h)⊆ I(j,i) for any h ≠ i, we have P_j⊆ C. Therefore, as each column different from P_j also contains a codeword, we obtain that |C| ≥ m+n-1. Thus, we have (K_n× K_m) ≥ m+n-1. Finally, this lower bound can be attained with a code C'={(i,j)| i=1 or j=1}. Indeed, for any i,j > 1, we have I(1,1)={(1,1)}, I(1,j)={(1,j)}∪ (R_1∖{(1,1)}), I(j,1)={(j,1)}∪ (P_1∖{(1,1)}) and I(i,j)= C' ∖{(1,j),(i,1)}. Therefore, we have I(v)⊈I(u) for any vertex u and non-codeword v. Thus, by Theorem <ref>, C' is a self-locating-dominating code in K_n× K_m, and we have (K_n× K_m) = n + m - 1. § ON CERTAIN TYPE OF HAMMING GRAPHS The Cartesian product K_q□ K_q□⋯□ K_q of n copies of K_q is denoted by K_q^n and called a Hamming graph. Goddard and Wash <cit.> studied identification in the case of K_q^n and they, in particular, bounded the cardinality of an optimal identifying code to q^2-q√(q)≤γ^ID(K_q^3)≤ q^2. In <cit.>, we further improved this bound to q^2-3/2q≤γ^ID(K_q^3)≤ q^2-4^t-1 where 2· 4^t≤ q ≤ 2· 4^t+1-1 or q=4^t, and we also showed that γ^SLD(K^3_q)=q^2. In this section, we show that also (K^3_q)=q^2. The following lemma is presented as Exercise 1.12 in <cit.>. For each positive integer q, we have γ(K_q□ K_q)=q. In the following we present some terminology and notations we use. More information about them and their usefulness can be found in <cit.>. 
* The pipe P^i(a,b)⊆ V(K_q^3) is a set of vertices fixing all but the ith coordinate which varies between 1 and q. The fixed coordinates are a and b where a is the value of left fixed coordinate in the representation (x,y,z). For example P^3(a,b)={(a,b,i)| 1≤ i≤ q}. * The layer L^i_j⊆ V(K_q^3) is a set of vertices fixing the ith coordinate as j. For example, the layer L^1_j consists of pipes P^i(1,j) for i=1,2 and 1≤ j≤ q. * C^i_j⊆ L^i_j denotes the set of codewords in layer L^i_j, that is, for code C⊆ V(K_q^3) we have C^i_j=C∩ L^i_j. * X^i_j⊆ L^i_j denotes such non-codewords v in L^i_j that I(C^i_j;v)=∅ and X^i=⋃_j=1^q X^i_j. * Let us denote a^i_j=q-|C^i_j|. * M^i_j⊆ L^i_j denotes the minimum dominating set of induced subgraph K^3_q[L^i_j] such that C^i_j⊆ M^i_j. Note that K^3_q[L^i_j]≃ K_q□ K_q and hence, |M^i_j|≥ q. * Let us denote f^i_j=|M^i_j|-q. Note that |X^i_j|≥ (f^i_j+a^i_j)^2 and f^i_j+a^i_j≥0 since f^i_j=|M^i_j|-q≥ |C^i_j|-q=-a^i_j, (<cit.>). Let C⊆ V(K^3_q) and let K_t□ K_t be a subgraph of K^3_q[C^i_j] for some i,j. Then we have f^i_j≥ t^2-t. We have C^i_j⊆ M^i_j. Besides the vertices of C^i_j inducing graph K_t□ K_t, there are (q-t)^2 vertices which are not dominated by these vertices. Moreover, we require at least q-t vertices to dominate them. Hence, we have |M^i_j|≥ t^2+(q-t) and thus, f^i_j≥ t^2-t. Let C be a code in K^3_q and v be a vertex of K^3_q. * If a vertex v has two codewords in its I-set and they do not locate within a single pipe, then there is exactly one other vertex which has those two codewords in its I-set. * The I-set I(v) is not a subset of any other I-set if and only if there are at least three codewords in I(v) and they do not locate within a single pipe. We have for q≥2 γ^DLD(K^3_q)=q^2. We have shown in <cit.> that γ^SLD(K^3_q)=q^2. Hence, we have (K^3_q)≤ q^2 by Corollary <ref>. Let us assume that C is an optimal solid-locating-dominating code in V(K^3_q) with |C|<q^2. Since |C|<q^2, we have a layer, say L^3_1, with at most q-1 codewords and hence, we have |X^3_1|≥1 by Lemma <ref>. Let us assume that (1,1,1)∈ X^3_1. Now, we have (i,1,1)∉C for any i and the same is true for (1,j,1) for any j. Moreover, if we have (1,1,h)∉C, then I(1,1,1)⊆ I(1,1,h), a contradiction. Therefore, for each non-codeword in X^i_j we have a pipe with q-1 codewords. Let us denote a pipe with q-1 codewords as P^i_C(a,b) where i denotes the direction of the pipe and (a,b) denotes the coordinates in which the pipe intersects with the layer. Note that if (a,b,z)∈ X^3_z and (a,b,z')∈ X^3_z', then z=z'. Let us first note that we have |{P_C^i(a,b)| 1≤ a,b≤ q}|≤ q+1 for any fixed i∈{1,2,3}. Otherwise, we would have |C|≥ (q+2)(q-1)=q^2+q-2>q^2-1. Let us then consider the case where we have only q-t, t≥2, codewords in a layer, say L^3_1. Then we have |X^3_1|≥ t^2 and these vertices (or some subset of them) induce subgraph K_t□ K_t on K^3_q. Therefore, we have at least t^2 copies of codeword pipes P^3_C(a,b) and without loss of generality, we may assume that values (a,b) form the set {(i,j)| 1≤ i,j≤ t}. Thus, some subset of the vertices in C^3_j, for any fixed j such that 2≤ j≤ q, form an induced subgraph K_t□ K_t. Therefore, we have f^3_j≥ t^2-t for any 2≤ j ≤ q by Lemma <ref>. Thus, we have |X^3|≥ t^2+∑_j=2^q (f^3_j+a^3_j)^2≥ t^2+∑_j=2^q f^3_j+∑_j=2^q a^3_j≥ t^2-t+(q-1)(t^2-t)+1=q(t^2-t)+1≥ 2q+1. Note that ∑_j=2^q a^3_j≥ 1-t and if (a,b,j)∈ X^3_j, then (a,b,i)∉X^3_i for each i≠ j. However, this is a contradiction with (<ref>). Therefore, we have |C^i_j|≥ q-1 for any i,j. 
Let us then consider the case where |C^3_1|=q-1 and C^3_1 induces a discrete graph. Then for any non-codeword v=(a,b,1), we have |N(v)∩ C^3_1|≤ 2 and the codewords in N(v)∩ C^3_1 do not locate within the same pipe. Therefore, by Lemma <ref>, we have another non-codeword w∈ L^3_1 such that N(v)∩ C^3_1⊆ N(w). Furthermore, this means that there is a codeword in P^3(a,b). Since this is true for any non-codeword and |L^3_1|=q^2, we have |C|≥ q^2, a contradiction. Let us then consider the case |C^3_1|=q-1 for q≥3 and assume that some codewords in C^3_1 are neighbours. We may assume that (1,1,1),(1,2,1)∈ C^3_1. Moreover, we may assume that (q,q,1)∈ X^3_1. Since there are at least two codewords in the pipe P^2(1,1) and there are q-1 codewords in C^3_1, we have at least two pipes P^2(a,1) and P^2(q,1) such that they contain no codewords. Therefore, we have (a,q,1)∈ X^1_1. Moreover, we have codeword pipes P_C^3(q,q) and P_C^3(a,q). Now, we can consider layers L^1_q and L^1_a. Let us first consider the layer L^1_q. First of all, it contains the codeword pipe P_C^3(q,q) and since the pipe P^2(q,1) contains no codewords, there has to be at least one codeword in every pipe P^3(q,i) where 1≤ i≤ q-1. Indeed, otherwise we would have q-1 codewords in some pipe P_C^1(i,q), 2≤ i≤ q, a contradiction with pipes P^2(a,1) and P^2(q,1) containing no codewords. Therefore, we have |C^1_q|≥ 2q-2. Furthermore, we get similarly |C^1_a|≥ 2q-2. However, now we have |C|≥ 2(2q-2)+∑_i=1,i≠ a^q-1|C^1_i|≥ 2(2q-2)+(q-2)(q-1)=q^2+q-2>q^2-1, a contradiction. § ACKNOWLEDGEMENT The authors would like to thank María Luz Puertas Gonzalez for fruitful discussions on the topic. fundam
http://arxiv.org/abs/2306.11546v1
20230620135920
Bullying10K: A Neuromorphic Dataset towards Privacy-Preserving Bullying Recognition
[ "Yiting Dong", "Yang Li", "Dongcheng Zhao", "Guobin Shen", "Yi Zeng" ]
cs.CV
[ "cs.CV" ]
The prevalence of violence in daily life poses significant threats to individuals' physical and mental well-being. Using surveillance cameras in public spaces has proven effective in proactively deterring and preventing such incidents. However, concerns regarding privacy invasion have emerged due to their widespread deployment. To address the problem, we leverage Dynamic Vision Sensors (DVS) cameras to detect violent incidents and preserve privacy, since they capture pixel brightness variations instead of static imagery. We introduce the Bullying10K dataset, encompassing various actions, complex movements, and occlusions from real-life scenarios. It provides three benchmarks for evaluating different tasks: action recognition, temporal action localization, and pose estimation. With 10,000 event segments, totaling 12 billion events and 255 GB of data, Bullying10K contributes significantly by balancing violence detection and personal privacy preservation. It also poses a new challenge for neuromorphic datasets. It will serve as a valuable resource for training and developing privacy-protecting video systems, and it opens new possibilities for innovative approaches in these domains. (Project website: https://figshare.com/articles/dataset/Bullying10k/19160663)

§ INTRODUCTION

The issue of violence in daily life poses a significant threat to individuals' physical and mental well-being. In addition to merely punishing violent actions, it is crucial to deter and prevent their occurrence proactively. Implementing surveillance cameras in public spaces has effectively facilitated the prompt detection of emerging violent behavior. While this strategy has somewhat curbed violent incidents, the widespread deployment of these cameras stirs up concerns over potential invasions of individuals' privacy, leading to significant apprehensions. The proliferation of cameras has dramatically enhanced the ease of data collection. Cameras are commonly employed for indoor and outdoor surveillance, capturing instances of violence or emergencies <cit.>. Nonetheless, this data-gathering method frequently requires obtaining explicit consent from recorded participants. Obtaining comprehensive consent from individuals captured on camera poses significant challenges <cit.>.
In addition to capturing movement data, personal information related to privacy, such as facial features and attire, is recorded and potentially stored on untrusted third-party servers with high-performance capabilities, thereby intensifying the potential for privacy breaches. At present, most of the commonly employed violence detection datasets primarily utilize RGB images. We aim to devise a strategy that effectively identifies unusual and violent incidents while minimizing the risk of privacy breaches during normal circumstances. Dynamic Vision Sensors (DVS) cameras <cit.>, which capture pixel brightness variations, provide an innovative alternative to conventional cameras that produce image frames at fixed frequencies. Instead, DVS cameras generate an event stream that records each pixel's brightness changes, either enhancement or reduction. As a result, it becomes challenging to visually identify the captured objects. Although some techniques attempt to reconstruct images from DVS data <cit.>, these approaches grapple with issues such as low contrast and blurriness and may even require additional sensor information for assistance <cit.>. Consequently, extracting detailed user information suitable for recognition systems from DVS cameras becomes a significant challenge, naturally reinforcing privacy persevering measures. At the same time, the high sensitivity of DVS cameras ensures their stable performance under uncontrolled luminary conditions and diverse environmental states <cit.>. As an event-driven camera, DVS consumes low power when the scene is static, reducing energy consumption and mitigating information redundancy compared to traditional cameras. However, although datasets captured using DVS cameras exist <cit.>, most are employed for traditional image classification tasks. Existing action recognition datasets <cit.> primarily focus on generic simple action recognition with limited scale and simplistic labels. Thus, they are insufficient for detecting complex and rapid actions and overlapping individuals, characteristic of violent incidents. To address these concerns, we leverage the unique characteristics of DVS cameras and propose an event-based dataset called Bullying10K. The dataset aims to detect violent incidents in videos while ensuring privacy protection. Instead of relying on conversion algorithms or reproduction methods that can be time-saving and resource-saving, we chose to capture real-life scenarios and subjects using DVS cameras. This approach allows us to avoid data biases that may arise from the process of RGB cameras of original datasets. The dataset captures subjects engaging in various actions under different views and lighting conditions. In addition to data privacy persevering, the Bullying10K dataset stands out from other DVS datasets by encompassing more complex and rapid actions and instances where individuals may obscure each other. This inclusion introduces new challenges to event-based neuromorphic datasets. In conclusion, the design of the Bullying10K dataset aims to fulfill the real-time detection requirements of violent behavior while maximizing the privacy protection of the individuals captured in the footage. This dataset will serve as valuable training data for developing privacy-preserving video systems, providing new insights and opportunities for future research. Our contributions are as follows: * We propose a large-scale DVS violence detection dataset: Bullying10K. 
It contains 10,000 event segments, totaling 12 billion events and 255 GB of data. The actions in the videos are characterized by their complexity, rapidity, and occlusion of individuals. * We provide three benchmarks for comparing the performance of different methods: an action recognition benchmark, a temporal action localization benchmark, and a pose estimation benchmark. For the pose estimation task, we provide human pose keypoints. * We present the DVS community with a trainable dataset for detecting violent scenes without compromising privacy, making the anticipation and study of violent scenarios possible. § RELATED WORK DVS Dataset Early DVS datasets were typically derived from pre-existing image classification datasets <cit.>. They used DVS cameras to capture the pixel brightness differences caused by camera or image motion. However, generating meaningful temporal data from static images proved challenging. <cit.> captured people in real scenarios, showcasing different actions through hand movements and providing early benchmarks for event-based classification tasks. However, these datasets were relatively small, and the actions displayed were somewhat repetitive. In contrast to traditional classification tasks, <cit.> introduced a dataset for few-shot tasks, reconstructing the drawing process of character strokes and creating meaningful temporal data. <cit.> attempted to capture existing action recognition datasets with DVS cameras. However, video reproduction fails to capture the event characteristics of natural scenes, especially the motion blur caused by high-speed motion or significant changes in illumination. Constructing a dataset suitable for violence detection requires data with complex actions, fast movement, and occlusion, which existing datasets are not explicitly designed for. In Table <ref>, we provide an overview of several DVS datasets, where #Event Count denotes the total number of events in the dataset and the average number of events per example. Violent Dataset The construction of appropriate datasets for detecting violent actions is crucial for promoting research in automated detection technology. <cit.> proposed a dataset created by extracting clips from short films, comprising only 200 video segments. <cit.> collected 1,000 data samples by capturing snippets from hockey games. <cit.> gathered realistic scenes involving multiple groups engaged in specific actions. <cit.> compiled the RWF-2000 dataset by amassing 2,000 sample clips from the internet. <cit.> extracted segments from Hollywood movies to create a dataset. Spiking Neural Networks (SNNs) are models that simulate the behavior of neurons in the brain. In contrast to Artificial Neural Networks (ANNs), SNNs transmit signals through discrete spikes, and the accumulation of membrane potentials allows them to handle time series effectively, making them well-suited for processing event-based data. However, due to the non-differentiability of spike sequences, applying the traditional backpropagation (BP) algorithm directly to the training of SNNs poses significant challenges. As a result, various methods have been proposed to explore effective training approaches for SNNs <cit.>. § BULLYING10K DATASET In this section, we elaborate on the acquisition process, preprocessing methods, and annotation details of the Bullying10K dataset. We also analyze multiple attributes of the dataset, including its temporal length, keypoint motion, and spatial event distribution.
§.§ Data Acquisition Environment Setting For data collection, we utilize two Davis346 cameras <cit.>, high-speed event cameras that capture pixel brightness changes with microsecond precision. Each pixel's brightness change triggers an event (t, x, y, p), where (x, y) represents the spatial coordinates of the pixel, t denotes the event's time, and p is either 0 or 1, indicating the polarity of the brightness change (increase or decrease). To capture multiple viewing angles and ensure diversity in the collected data, we position the two DVS cameras on the left and right sides of the filming scene, as depicted in Figure <ref>. Additionally, we incorporate two lighting conditions, bright and dark, to simulate various real-life scenarios. (Figure: Visualization of human pose keypoint labels on event frames.) Each event segment contains two participants, who assume the roles of a perpetrator and a victim in segments involving violent actions; segments depicting friendly actions involve cooperation between the two participants. Actors are instructed to perform a specific action in each video segment, and we collect ten valid clips for each action. The duration of each sample segment is action-dependent, ranging from 2 to 20 seconds. Preprocessing The Davis346 camera outputs recordings in its native event-stream file format. To facilitate subsequent processing and analysis, we transform the raw data into the widely used .npy format, a common file format for storing NumPy <cit.> array data that enables effortless preservation and recovery of multidimensional arrays, matrices, and other data structures. During this transformation, we organize the event stream into 10-millisecond units to maintain temporal precision while effectively compressing the data. This strategy improves convenience and reduces file size. For user-friendly data manipulation, we supply code that merges the event stream into frames and reads the data. Quality Control To ensure the quality and usability of the data, we marked the position of each camera before filming, recorded the relevant DVS camera settings (including aperture, focal length, etc.), and kept these settings consistent across captures. To build in redundancy and improve robustness, we captured twelve sample segments for each action group and then manually screened them, excluding poorly captured segments and keeping only ten segments per group for the final dataset. This ensures that every sample segment in the dataset is of good quality, facilitating subsequent research and analysis. §.§ Data Annotation §.§.§ Category Label After filming, we classify and annotate each sample in detail so that the represented action can be identified accurately for action recognition tasks. Our dataset consists of ten actions, including six violent actions (punching, kicking, hair grabbing, strangling, pushing, and slapping) and four friendly actions (handshaking, finger guessing, greeting, and walking). We further organize each category based on subjects, lighting scenes, and camera positions. Each group is named using the subject's code, action name, illumination, and camera position. §.§.§ Pose Estimation Pose estimation is a task that involves identifying a person's body position and keypoints in a video or an image.
Precise pose estimation facilitates effective action recognition, making it an essential precursor to that task. To acquire human pose data for each DVS video segment, we leverage the RGB data captured simultaneously with the event data. The same camera captures both data types, yielding overlapping scenes with highly consistent content. This allows us to run well-established pose estimation algorithms on the RGB data to obtain initial human pose labels. Specifically, we utilize AlphaPose <cit.>, a multi-person pose estimation system, as an automated labeling tool. We employ a ResNet50 <cit.> backbone pre-trained on the Halpe dataset, while the YOLOX <cit.> algorithm trained on the COCO dataset <cit.> serves as the object detector. Our annotation targets 26 keypoints of the human body, as specified in <cit.>, and the label information is saved in the COCO format. Upon obtaining the initial labels, we manually calibrated them, as illustrated in Figure <ref>. It is important to note that pose estimation algorithms operating directly on DVS data are still in the early stages of development. Moreover, due to the inherent characteristics of DVS data, the events do not explicitly represent human poses, which compounds the complexity of directly utilizing DVS data for pose annotation. §.§ Data Analysis The Bullying10K dataset encompasses 10,000 sample clips, each with a duration ranging from 2 to 20 seconds. It contains 12 billion valid events, resulting in a total data volume of 255 GB. Figure <ref> (e,f,g) presents the distributions of frames, events, and events per frame for the sample clips in the dataset. Notably, the average sample length of Bullying10K surpasses that of existing event datasets captured with DVS cameras. This addresses the limitation of their short durations and introduces new challenges in establishing long-range dependencies in event data. Figure <ref> (a) displays the results obtained by analyzing the movement distance of keypoints between consecutive frames. Different keypoints exhibit distinct motion distribution patterns, with the most prominent movements observed at the wrists and elbows, in line with human motion characteristics. To quantify the degree of overlap between subjects in the video, we calculate the Intersection over Union (IoU) of the bounding boxes of the two characters appearing in the same frame (a small sketch of this computation is given at the end of this section). Figure <ref> (d) visualizes the distribution of IoU values, which are primarily concentrated between 0.1 and 0.5, indicating a significant number of instances where the characters overlap. As illustrated in Figure <ref> (b), our analysis shows that positive-polarity events occur slightly less often than negative-polarity ones. Moreover, we visualize the ratio of positive to negative events for each action category within the dataset, as shown in Figure <ref> (c). The movement distribution of actions such as punching and strangling predominantly occurs in the upper part of the image, while walking is more likely to span both the upper and lower parts of the image. § EVALUATION AND TASK Bullying10K, as a neuromorphic dataset, offers valuable advantages in terms of user privacy preservation. We provide three benchmark tasks: action recognition, temporal action localization, and pose estimation.
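Before describing the benchmark tasks in detail, here is a minimal sketch of the bounding-box Intersection over Union used in the overlap analysis above. It is only an illustration: the (x1, y1, x2, y2) box convention and the example coordinates are our assumptions, not values taken from the dataset.

```python
def bbox_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two partially overlapping person boxes give an IoU of about 0.27,
# i.e., inside the 0.1-0.5 range where most Bullying10K frames fall.
print(bbox_iou((10, 10, 110, 210), (60, 40, 160, 240)))
```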
Through the validation of these benchmarks, we aim to use neuromorphic datasets to address these complex challenges and foster advancements in the field. §.§ Action Recognition Experimental Setting (Figure: PR curves for each category obtained from the X3D model trained on Bullying10K.) The action recognition task aims to predict the corresponding action label from the input frame sequence. Each sample segment in the dataset contains a single behavior, making it suitable for single-label classification. During preprocessing, the event streams are integrated into frames, forming a sequence that is then used to recognize the action (a sketch of this integration step is given below). The Bullying10K dataset is divided into training and testing sets with an 8:2 ratio, and the temporal unit for integrating the event stream is set to 10 ms. Given the large number of frames and the need for batch training, we randomly crop video segments to a fixed length, sampling with frame intervals of 0, 2, and 4. We employed various commonly used action recognition models to establish baseline performance on the Bullying10K dataset, and we also explored the performance of spiking neural networks, showcasing their potential for action recognition tasks. Evaluation Metric In the classification task, we measure the performance of a network by the accuracy of its outputs with respect to the labels. However, misjudging violent events can have severe consequences and cause irreparable harm, and violent scenarios occur less frequently than non-violent ones, implying a substantial class imbalance in real-world situations. Relying solely on accuracy as an evaluation metric may therefore not adequately reflect a model's ability to predict violent events. To address this issue, we also employ the Precision-Recall (PR) curve, which allows us to examine a model's predictive capability across varying decision thresholds. Results and Analysis Table <ref> presents a detailed comparison of several widely used action recognition models on the Bullying10K dataset, including the backbone architecture of each model and its operational configuration. We conducted experiments using frame intervals of 0, 2, and 4, with three different temporal step lengths of 4, 10, and 16. The results demonstrate that increasing the frame interval and the temporal step length improves model precision. However, even models that exhibit strong performance in traditional visual classification tasks did not yield satisfactory results on Bullying10K, indicating the dataset's unique challenges. Furthermore, we performed a visual analysis of the PR curves of the X3D model on Bullying10K. The PR curves reveal significant variations in performance across action categories, underscoring the dataset's complexity and the need for robust recognition models. §.§ Temporal action localization Experimental Setting In surveillance videos, violent scenarios often occur irregularly and sporadically. Because event data contain little background clutter and noise, we can naturally concatenate multiple samples into extended sequences. In our task, segments from different action categories are randomly drawn from the dataset and concatenated into longer video sequences; the goal is to predict the action label along with its corresponding start and end times. The ground truth annotations are constructed from the respective time intervals.
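As a concrete illustration of the 10 ms event-to-frame integration mentioned above, here is a minimal sketch. The array layout (t, x, y, p), the two-channel polarity format, the 346×260 DAVIS resolution, and the file name in the usage line are our assumptions rather than the dataset's released code.

```python
import numpy as np

def events_to_frames(events, height=260, width=346, bin_us=10_000):
    """Integrate an event stream into two-channel count frames.

    events: (N, 4) array of (t, x, y, p) with t in microseconds and p in {0, 1}.
    Returns an array of shape (T, 2, height, width), one frame per `bin_us`
    (10 ms by default, matching the temporal unit used above).
    """
    t = events[:, 0].astype(np.int64)
    x = events[:, 1].astype(np.int64)
    y = events[:, 2].astype(np.int64)
    p = events[:, 3].astype(np.int64)
    t = t - t.min()                        # start the clip at time zero
    num_frames = int(t.max() // bin_us) + 1
    frames = np.zeros((num_frames, 2, height, width), dtype=np.float32)
    np.add.at(frames, (t // bin_us, p, y, x), 1.0)  # accumulate event counts per pixel
    return frames

# Hypothetical usage on one stored segment:
# frames = events_to_frames(np.load("segment.npy"))
```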
For training on the Bullying10K dataset, we chose commonly used temporal action localization models. We first pre-trained the TSN <cit.> and TSM <cit.> models on the action recognition task and used them as feature extractors for the localization models; the extracted features are stored for subsequent analysis and processing. Evaluation Metric Average recall (AR) and the Area Under the Curve (AUC) are commonly used metrics for action localization. We employ the AR@N indicator, which measures the recall rate given N proposals, with N = 1, 5, 10, and 100. Additionally, we calculate the AUC of the AR-AN curve. Results and Analysis We report the accuracy of several commonly used temporal action localization algorithms on Bullying10K for comparative evaluation. Their performance varies with the features used and diminishes significantly as the number of proposals is reduced. It is therefore worth exploring models designed for event data that can maintain higher precision with fewer proposals. §.§ Pose estimation Experimental Setting Human pose estimation typically takes an image or video containing one or more individuals and outputs the locations of a set of keypoints, such as the head, shoulders, arms, and legs, for each person. Different datasets may use different numbers of joints; Bullying10K uses 26 keypoints. Pose estimation can provide additional information for action recognition. We evaluate several commonly used pose estimation models on Bullying10K. (Figure: Skeleton and heatmap visualizations of the predictions of the SimpleBaseline model with a ResNet101 backbone.) Evaluation Metric The standard metric for pose estimation is based on Object Keypoint Similarity (OKS), a scale-invariant measure of localization accuracy that evaluates how closely the predicted keypoints match the ground truth: OKS=∑_i[exp(-d^2_i/(2s^2k^2_i))δ(v_i>0)]/∑_i[δ(v_i>0)], where d_i is the distance between the i-th predicted keypoint and its ground truth position, v_i denotes the visibility of the corresponding keypoint, s indicates the object scale, and k_i is a predefined per-keypoint constant that controls the falloff. Average Precision (AP) and Average Recall (AR) are used to measure the accuracy of different models on the Bullying10K dataset: AP, AP^50 (AP at OKS = 0.50), AP^75, AP^M, AP^L, and AR. Results and Analysis Table <ref> presents the results of multiple commonly used pose estimation models on the Bullying10K dataset. These models exhibit relatively low accuracy, indicating that Bullying10K poses substantial challenges for pose estimation. Additionally, although HRNet has shown superior capabilities to SimpleBaseline on RGB datasets, it achieved lower accuracy on our dataset. This discrepancy may be attributed to the nature of our event-based data, which differs significantly from RGB images. Furthermore, we investigated the potential of SNNs in pose estimation by testing an SNN backbone with the SimpleBaseline algorithm. To enhance the interpretability of the results, we visualized relevant output images and presented heatmaps for selected samples.
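As a concrete illustration of the OKS metric defined above, here is a minimal sketch for a single person. The array shapes and the per-keypoint constants are our assumptions; the dataset's 26 keypoints would correspond to K = 26.

```python
import numpy as np

def oks(pred, gt, vis, s, k):
    """Object Keypoint Similarity for one person.

    pred, gt: (K, 2) arrays of keypoint coordinates.
    vis:      (K,) visibility flags; v_i > 0 means the keypoint is labeled.
    s:        object scale.
    k:        (K,) per-keypoint falloff constants.
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)        # squared distances d_i^2
    labeled = vis > 0
    e = np.exp(-d2[labeled] / (2.0 * s**2 * k[labeled]**2))
    return float(e.sum() / max(labeled.sum(), 1))
```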
§ CONCLUSION This research introduces a novel event-driven dataset called Bullying10K, which utilizes Dynamic Vision Sensor (DVS) cameras to detect instances of violent behavior while preserving individual privacy. The dataset is designed to address the limitations of existing event-driven datasets by featuring complex, rapid movements and overlapping figures, which present greater complexity and new challenges. By offering a large-scale dataset, Bullying10K enables researchers to explore complex actions and contributes to advancements in violence detection and privacy-preservation techniques. § CHECKLIST * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? see Section <ref> * Did you describe the limitations of your work? see Section <ref> * Did you discuss any potential negative societal impacts of your work? * Have you read the ethics review guidelines and ensured that your paper conforms to them? * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments (e.g., for benchmarks)... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? see the supplemental material * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? see the supplemental material * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? see the supplemental material * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you cite the creators? see section <ref> * Did you mention the license of the assets? * Did you include any new assets either in the supplemental material or as a URL? see the supplemental material and section <ref> * Did you discuss whether and how consent was obtained from people whose data you're using/curating? * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? see the supplemental material * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? see the supplemental material * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? see the supplemental material
http://arxiv.org/abs/2306.11973v1
20230621015954
Parameterized coherence measure
[ "Meng-Li Guo", "Zhi-Xiang Jin", "Jin-Min Liang", "Bo Li", "Shao-Ming Fei" ]
quant-ph
[ "quant-ph" ]
[email protected] School of Mathematical Sciences, Capital Normal University, Beijing 100048, China [email protected] School of Computer Science and Technology, Dongguan University of Technology, Dongguan, 523808, China School of Mathematical Sciences, Capital Normal University, Beijing 100048, China [email protected] School of Computer and Computing Science, Hangzhou City University, Hangzhou 310015, China [email protected] School of Mathematical Sciences, Capital Normal University, Beijing 100048, China Quantifying coherence is an essential endeavor for both the foundations of quantum mechanics and quantum technologies. We present a bona fide measure of quantum coherence by utilizing the Tsallis relative operator (α, β)-entropy. We first prove that the proposed coherence measure fulfills all the criteria of a well-defined coherence measure, including the strong monotonicity in the resource theory of quantum coherence. We then study the ordering of the Tsallis relative operator (α, β)-entropy of coherence, the Tsallis relative α-entropies of coherence, the Rényi α-entropy of coherence and the l_1 norm of coherence for both pure and mixed qubit states. This provides a new method for defining coherence and entanglement measures, as well as a new perspective for the further study of quantum coherence. Parameterized coherence measure Shao-Ming Fei June 20, 2023 =============================== § INTRODUCTION Coherence is a fundamental aspect of quantum physics. It is an important resource in quantum information processing <cit.> and plays significant roles in emerging fields such as quantum metrology <cit.>, nanoscale thermodynamics <cit.> and quantum biology <cit.>. The quantification of coherence has attracted much attention recently <cit.>. To quantify coherence, Baumgratz, Cramer and Plenio established a consistent framework in terms of quantum resource theory <cit.>. Fruitful results have been obtained in characterizing quantum coherence both theoretically and experimentally <cit.>. In particular, the distillable coherence <cit.>, the coherence of formation <cit.>, the robustness of coherence <cit.>, coherence measures based on entanglement <cit.>, the max-relative entropy of coherence <cit.>, the Rényi entropy of coherence <cit.>, the Tsallis relative entropies of coherence <cit.> and the coherence concurrence <cit.> have been proposed and investigated. With the development of quantum coherence theory, the basic properties of different coherence measures have been studied in depth, and a series of works has demonstrated the physical significance of quantum coherence <cit.>. It is therefore of great interest to find further coherence measures. Moreover, many important properties of quantum coherence measures have also been explored, among them the quantum state ordering induced by coherence measures. A given physical resource may have different quantitative measures, and it is important to know whether two quantum states are ordered in the same way by different measures. Taking the entanglement resource theory as an example, the ordering problem of the concurrence and the negativity has been discussed in <cit.>; it was found that these entanglement measures may give rise to different orderings. More discussions of entanglement ordering are given in <cit.>.
Concerning quantum coherence, if two different coherence measures C_1 and C_2 satisfy C_1 (ρ_1)≤ C_1 (ρ_2) ⇔ C_2(ρ_1)≤ C_2(ρ_2) for any quantum states ρ_1 and ρ_2, it is said that C_1 and C_2 give the same quantum state ordering. Liu <cit.> proved that the relative entropy of coherence and the l_1-norm of coherence give the same quantum state ordering for single-qubit pure states, but not for general high-dimensional quantum states or single-qubit mixed states. More work has been done on the quantum state ordering induced by coherence measures <cit.>. Nakamura and Umegaki extended the notion of the von Neumann entropy and introduced the operator entropy -Alog A for any positive operator A on a Hilbert space ℋ <cit.>. The relative entropy Alog A-Alog B for positive operators A and B was also introduced in the setting of semifinite von Neumann algebras. Moreover, Fujii and Kamei <cit.> introduced the relative operator entropy for two invertible positive operators A and B, a concept that generalizes both the operator entropy and the relative entropy. In addition, Furuta <cit.> obtained a parametric generalization of the Shannon inequality. For two invertible positive operators A and B on a Hilbert space and an arbitrary real number α∈ (0,1], K. Yanagi et al. introduced the Tsallis relative operator (α, β)-entropy T_α, β(A|B) <cit.>. The authors of Ref. <cit.> studied the properties of the Tsallis relative operator entropy and obtained a generalization of the Shannon inequality, which plays an important role in classical information theory; its operator form can be applied to quantum thermodynamics and quantum information theory. In this work, we propose a well-defined coherence measure via the Tsallis relative operator (α, β)-entropy. Perspective mappings have been widely used in quantum information theory and quantum statistical mechanics, and the Tsallis relative operator (α, β)-entropy is the perspective mapping of the function ln_α x:=(x^α-1)/α <cit.>. Therefore, our results provide a new route for defining entanglement and coherence measures: by taking perspective mappings of existing measure functions, measures with similar or better properties can be obtained. Finally, we also study the quantum state ordering of coherence measures and compare the new parameterized coherence measure with the l_1-norm of coherence, the Tsallis relative α-entropies of coherence and the Rényi α-entropy of coherence. It is proved that they give the same quantum state ordering under certain parameters. § COHERENCE VIA TSALLIS RELATIVE OPERATOR (Α, Β)-ENTROPY Let ℋ be a d-dimensional Hilbert space with orthonormal basis {|i⟩}^d_i=1. With respect to this basis, the set of incoherent states ℐ consists of the states of the form δ=∑_i=1^dδ_i|i⟩⟨ i|, where δ_i∈[0,1] and ∑_iδ_i=1 (a small numerical illustration of this set is given after the axioms below). Any proper measure of coherence C should satisfy the following axioms <cit.>: (C1) C(ρ)≥0 for all quantum states ρ, and C(ρ)=0 if and only if ρ∈ℐ; (C2) Monotonicity under incoherent completely positive and trace preserving (ICPTP) maps Ψ, C(ρ) ≥ C(Ψ(ρ)); (C3) Monotonicity of the average coherence under subselection based on measurement outcomes: C(ρ)≥∑_i p_iC(ρ_i ), where ρ_i= K_iρ K_i^†/p_i, p_i=Tr( K_iρ K_i^†) for all K_i satisfying ∑_iK_i^† K_i=I (I denotes the identity operator) and K_iℐK_i^†⊆ℐ; (C4) Non-increasing under mixing of quantum states (convexity), i.e., ∑_ip_iC(ρ_i)≥ C(∑_ip_iρ_i) for any ensemble {p_i, ρ_i}.
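As the small numerical illustration promised above, the following sketch builds the diagonal (dephased) part of a state in the fixed reference basis {|i⟩}, which always lies in the incoherent set ℐ; the helper names and the qubit example are ours, not the paper's.

```python
import numpy as np

def dephase(rho):
    """Keep only the diagonal of rho in the reference basis; the result lies in I."""
    return np.diag(np.diag(rho))

def is_incoherent(rho, tol=1e-9):
    """A state is incoherent iff it is diagonal in the fixed reference basis."""
    return np.allclose(rho, dephase(rho), atol=tol)

# The maximally coherent qubit state |+><+| is not incoherent,
# while its dephased part diag(1/2, 1/2) is.
plus = np.full((2, 2), 0.5)
print(is_incoherent(plus), is_incoherent(dephase(plus)))  # False True
```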
Note that conditions (C3) and (C4) automatically imply condition (C2). The condition (C3) allows for the sub-selection of measurement outcomes in well controlled experiments. The l_1 norm of coherence <cit.> is defined by C_l_1(ρ)=∑_i≠ j|ρ_ij|, where ρ_ij are entries of ρ. The Tsallis relative α-entropies of coherence C^T_α_1(ρ) is defined by <cit.> C^T_α_1(ρ)=min_δ∈ I1/α_1-1 [tr(ρ^α_1δ^1-α_1)-1 ] for α_1 ∈(0,1)⊔ (1,∞). The Rényi α-entropies of coherence is defined by C^R_α_1(ρ)=min_δ∈ I1/α_1-1[logtr(ρ^α_1δ^1-α_1) ] for α_1 ∈ [0,∞). C^T_α_1(ρ) and C^R_α_1(ρ) have analytic expressions <cit.>, C^T_α_1(ρ)=1/α_1-1[ ( ∑_i⟨ i| ρ^α_1 |i ⟩ ^1/α_1)^α_1-1], C^R_α_1(ρ) = α_1/α_1-1log∑_i⟨ i| ρ^α_1 |i ⟩ ^1/α_1. The Tsallis relative operator (α, β)-entropy is defined by <cit.>, T_α, β(ρ||σ)=ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2-ρ^β/α, for arbitrary two invertible positive operators ρ and σ on Hilbert space, and any real number α,β∈ (0,1]. For convenience, writes T_α, β(ρ||σ) as T_α, β(ρ||σ)=ρ^β/2ln_α(ρ^-β/2σρ^-β/2) ρ^β/2, where ln_αX≡X^α-1/α for the positive operator X=ρ^-β/2σρ^-β/2. The perspective mapping P_f of the function f is defined as<cit.>: P_f(A,B):=B^1/2f(B^-1/2AB^-1/2)B^1/2, where A is a self-adjoint operator on Hilbert space H, B is a strictly positive operator and its spectral set falls in a closed interval containing 0. When A and B are noncommutative, Ebadian, et.al, defines an noncommutative generalized perspective map by choosing an appropriate order<cit.>: P_fΔ h(A,B):=h(B)^1/2f(h(B)^-1/2Ah(B)^-1/2)h(B)^1/2. Obviously, the Tsallis relative operator (α, β)-entropy is the perspective mapping of the function ln_α x:=x^α-1/α. Then we can get that Tsallis relative operator (α, β)-entropy T_α, β(ρ||σ) has the following properties, see proof in Appendix A. (T1) (monotonicity) If σ≤τ, then T_α, β(ρ||σ)≤ T_α, β(ρ||τ). (T2) (superadditivity) If ρ_1 ≤ρ_2, σ_1 ≤σ_2, then T_α, β(ρ_1+ρ_2||σ_1+σ_2)≥ T_α, β(ρ_1||σ_1)+T_q(ρ_2||σ_2). (T3) <cit.> (joint concavity) T_α, β(n ρ_1+(1-n) ρ_2||nσ_1+(1-n)σ_2)≥ n T_α, β(ρ_1||σ_1)+(1-n) T_α, β(ρ_2||σ_2). (T4) T_α, β(U ρ U^†||U σ U^†) =T_α, β(ρ||σ) for any unitary operator U. (T5) For a unital positive linear map Φ from the set of the bounded linear operators on Hilbert space to itself, Φ(T_α, β(ρ||σ))≤ T_α, β(Φ(ρ)||Φ(σ)). Denote ρ♯_α, βσ≡ρ^β/2 (ρ^-β/2σρ^-β/2)^αρ^β/2 the operator mean between ρ and σ, so called the α-power mean <cit.>. Next we have the following lemma about the α-power mean. For any quantum states ρ and σ, and any positive real number a>0, α,β∈ (0,1], the following inequalities hold, T_α, β(ρ || σ)≥ρ♯_α, βσ -1/aρ♯_α-1, βσ + (ln_α1/a)ρ^β, T_α, β(ρ || σ) ≤1/aσ -ρ^β - (ln_α1/a) ρ♯_α, βσ, where ln_α1/a≡(1/a)^α-1/α. Particularly, when a=1, the condition T_α, β(ρ || σ)=0 is equivalent to ρ^β=σ. Proof According to Lemma 3.5 in <cit.>, we have x^α(1-1/a x) + ln_α1/a≤ln_α x ≤x/a -1 -x^αln_α1/a. Set x=ρ^-β/2σρ^-β/2 in the above formula. We obtain [ ln_α(ρ^-β/2σρ^-β/2) ≥ (ρ^-β/2σρ^-β/2)^α; - 1/a (ρ^-β/2σρ^-β/2) ^α-1+ (ln_α1/a) I,; ln_α(ρ^-β/2σρ^-β/2) ≤ 1/a (ρ^-β/2σρ^-β/2) -I; - (ln_α1/a) (ρ^-β/2σρ^-β/2)^α. ] Multiplying ρ^β/2 on both sides of each term in above inequality, we obtain ρ^β/2ln_α(ρ^-β/2σρ^-β/2) ρ^β/2≥ρ♯_α, βσ -1/aρ♯_α-1, βσ + (ln_α1/a)ρ^β, ρ^β/2ln_α(ρ^-β/2σρ^-β/2) ρ^β/2≤1/aσ -ρ^β - (ln_α1/a) ρ♯_α, βσ. Noting that ρ^β/2ln_α(ρ^-β/2σρ^-β/2) ρ^β/2=T_α, β(ρ || σ), we complete the proof of the inequalities (<ref>) and (<ref>). If T_α, β(ρ || σ)=0 and a=1, from (<ref>) and (<ref>) we have ρ^β≤σ and σ≤ρ^β, namely, ρ^β=σ. 
Conversely, if ρ^β=σ, it is easily seen that T_α, β(ρ || σ)=0. Next, we set f(ρ, σ)=tr[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2], where α∈(0,1]. The following two lemmas about the function f(ρ,σ) is important in deriving our main results. The proofs of Lemmas 2 and 3 are given in Appendix B and C, respectively. For any quantum states ρ and σ with supp ρ⊆ supp σ, we have f(Φ(ρ),Φ(σ))≥ f(ρ,σ) for any completely positive and trace preserving (CPTP) map Φ. Let Φ :={ K_n:∑_nK_n^†K_n= ℐ_H} be a CPTP map which transforms the states ρ and σ into the ensembles { p_n,ρ_n} and {q_n,σ_n}, respectively. We have f( ρ _H,δ _H) ≤∑_n p_n^γq_n^1-γf_q( ρ _n,σ _n) for γ∈(0,1). Below we propose a measure of quantum coherence based on the Tsallis relative operator (α,β)-entropy. The following parameterized function C^T_α, β(ρ) of state ρ is a bona fide measure of quantum coherence, C^T_α, β(ρ)=min_σ∈ℐ1/α{1-[tr( ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2)]^1/(1-α)β}, where α∈ (0,1) and β∈ (0,1]. Proof From Lemma 1, we have C^T_α, β(ρ)≥0, and C^T_α, β(ρ)=0 if and only if ρ=σ. For any CPTP map ϕ, by using the property (T5) we get 1/α{Tr(ρ^β)-Tr[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2] } ≥1/α{Tr(ρ)^β-tr[ϕ (ρ)^β/2(ϕ (ρ)^-β/2ϕ (σ) ϕ (ρ)^-β/2)^αϕ (ρ)^β/2] }. According to the Jensen's inequality <cit.> one gets Φ[ρ^β] ≤Φ(ρ)^β and 1/α{1-tr[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2] } ≥1/α{1-tr[ϕ (ρ)^β/2(ϕ (ρ)^-β/2ϕ (σ) ϕ (ρ)^-β/2)^αϕ (ρ)^β/2] }. This implies that tr[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2] ≤tr[ϕ (ρ)^β/2(ϕ (ρ)^-β/2ϕ (σ)ϕ (ρ)^-β/2)^αϕ (ρ)^β/2], {tr[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2]} ^1/(1-α)β ≤{tr[ϕ (ρ)^β/2(ϕ (ρ)^-β/2ϕ (σ) ϕ (ρ)^-β/2)^αϕ (ρ)^β/2]} ^1/(1-α)β. For any ICPTP map ϕ_ℐ, there exists σ ^∗∈ℐ such that max_σ∈ℐ{tr[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2]} ^1/(1-α)β ={tr[ρ^β/2(ρ^-β/2σ ^∗ρ^-β/2)^αρ^β/2]} ^1/(1-α)β ≤{tr[ϕ _ℐ(ρ)^β/2(ϕ _ℐ(ρ)^-β/2ϕ _ℐ(σ ^∗) ϕ _ℐ(ρ)^-β/2)^αϕ _ℐ(ρ)^β/2]} ^1/(1-α)β ≤max_σ∈ℐ{tr[ϕ _ℐ(ρ)^β/2(ϕ _ℐ(ρ)^-β/2σ ^∗ϕ _ℐ(ρ)^-β/2)^αϕ _ℐ(ρ)^β/2]} ^1/(1-α)β. This proves that C^T_α, β(ρ) satisfies (C2). That C^T_α,β(ρ) satisfies (C4) is directly derived from (T3). To prove that C^T_α, β(ρ) satisfies (C3), denote δ^o the optimal incoherent state such that f(ρ,δ^o)=max_δ∈ℐ f(ρ,δ ). Let Φ= {K_n} be the incoherent selective quantum operations given by Kraus operators {K_n} with ∑_nK_n^†K_n=I. The operation Φ on a state ρ gives rise to the post-measurement ensemble {p_n, ρ_n} with p_n = TrK_nρ K_n^† and ρ_n=K_nρ K_n^†/p_n. Hence the averaged coherence is ∑_np_n C^T_α, β(ρ_n)=min_δ _n∈ℐ1/α[1-∑_np_n f^1/γ (ρ _n,δ _n)], where γ=(1-α)β. Since the incoherent operation cannot generate coherence from the optimal incoherent state δ^o, we have δ_n^o=K_nδ^o K_n^†/q_n∈ℐ with q_n=TrK_nδ^o K_n^† for any incoherent operator K_n. Since α∈(0,1) and C^T_α, β(ρ)≥ 0, C^T_α, β(ρ) is the smallest when f^1/γ(ρ,δ) is the maximum. Therefore, one immediately finds that max_δ∈ℐ f^1/γ(ρ,δ)≥ f^1/γ(ρ_n,δ_n^o). Eq. (<ref>) implies then that ∑_np_n C^T_α, β(ρ_n)≤1/α(1-∑_np_nf^1/γ( ρ _n,δ_n^o )). By using the Hölder inequality ∑_k=0^da_k b_k≤(∑_k=0^d a^n_k)^1/n(∑_k=0^d b^m_k )^1/m for 1/n+1/m=1 and n>1, we obtain [∑_nq_n] ^1-γ[∑_np_nf^1/γ(ρ _n,δ _n^o)] ^γ≥∑_np_n^γq_n^1-γf (ρ _n,δ_n^o). Therefore, (<ref>) becomes ∑_np_n C^T_α, β(ρ_n) ≤1/α(1-∑_np_nf^1/γ( ρ _n,δ_n^o)) ≤1/α(1-[∑_np_n^γq_n^1-γf( ρ_n,δ_n^o)]^1/γ) ≤1/α(1-f^1/γ(ρ,δ^o )) = C^T_α, β(ρ), where the first inequality is due to (<ref>), the second inequality is from (<ref>), the third inequality is due to Lemma <ref>. (<ref>) shows that C^T_α,β(ρ) satisfies (C3). 
Thus, the Tsallis relative operator (α,β)-entropy of coherence C^T_α,β(ρ) is a bona fide measure of coherence. Because Tsallis relative operator (α, β)-entropy is the perspective mapping of the function ln_α x:=x^α-1/α, and the perspective mapping P_f has some good properties, for example, (i) The function f is matrix convex if and only if the perspective function P_f is jointly convex, (ii) The function f is matrix concave if and only if the perspective function P_f is jointly concave. Therefore, we suspect that we can get a new a new kind of well-defined coherence measure or entanglement measure by perspective mapping it with functions or existing coherence measure. This provides a new method for defining new coherence measure and entanglement measure, and also provides a new idea for us to study quantum coherence. § ORDERING STATES WITH TSALLIS RELATIVE OPERATOR (Α, Β)-ENTROPY OF COHERENCE §.§ Ordering states with C^T_1/2,1, C^T_α_1, C^R_α_1 and C_l_1 for single-qubit pure states In this section, we show that the Tsallis relative operator (α, β)-entropy, the Tsallis relative α-entropies, the Rényi α-entropy and the l_1 norm of coherence generate the same ordering for single-qubit pure states. For simplicity we consider the Tsallis relative operator (α, β)-entropy of coherence for α=1/2 and β=1. By definition we have C^T_1/2,1(ρ) = min_σ∈ℐ2{1- [ tr( ρ^1/2(ρ^-1/2σρ^- 1/2)^1/2ρ^1/2)]^2 } ≥ min_σ∈ℐ2{1-[tr(ρ (ρ^-1/2σρ^-1/2)ρ)^1/2]^2 } = min_σ∈ℐ2{1-[tr(ρ^1/2σρ^1/2)^1/2]^2 }, where the inequality is due to the Araki-Lieb-Thirring inequality, namely, for matrices A, B≥0, s≥0 and for 0≤ r≤1, it holds that tr(A^rB^rA^r)^s≤tr(ABA)^rs <cit.>. Let |ψ⟩=√(p)|0⟩ + e^iφ√(1-p)|1⟩ be a general single-qubit pure state, and σ=q|0 ⟩⟨ 0|+(1-q)|1 ⟩⟨ 1| a general incoherent qubit state, 0 ≤ p,q ≤ 1. From (<ref>), (<ref>), (<ref>) and (<ref>), we obtain C^T_1/2,1(ρ) ≥ min_q2[1-(-2 p q+p+q-1)^2], C^T_α_1(ρ) = 1/α_1 -1[ (p^1/α_1+(1-p)^1/α_1)^α_1 -1], C^R_α_1(ρ) = α_1 /α_1 -1[log(p^1/α_1+(1-p)^1/α_1)], C_l_1(ρ) = 2 √(p (1-p)). Here only C^T_1/2,1(ρ) depends on the incoherent state σ. For any given any p∈ [0,1], there exists q^∗∈ q such that C^T_1/2,1(ρ) = 2[1-(-2 p q^∗+p+q^∗-1)^2], where q^∗∈ [0,1]. From the derivations of the coherence measures with respect to p, ∂ C^T_1/2,1/∂ p = -4 (1-2 q^∗) (-2 p q^∗+p+q^∗-1), ∂ C^T_α_1/∂ p = 1/α_1 -1 [(p^1/α_1 -1-(1-p)^1/α_1 -1) (p^1/α_1 +(1-p)^1/α_1)^α_1 -1], ∂ C^R_α_1/∂ p = 1/α_1 -1 [p^1/α_1-1-(1-p)^1/α_1 -1/p^1/α_1+(1-p)^1/α_1], ∂ C_l_1/∂ p = 1-2 p/√((1-p) p), Proof we have, see Figs. <ref> and <ref>, ∂ C_l_1/∂ p, ∂ C^T_α_1/∂ p,∂ C^R_α_1/∂ p {[ > 0, 0<p<1/2, 0<α_1<1/2,; <0, 1/2<p<1, 0<α_1<1/2; >0, 0<p<1/2, 1/2<α_1<1,; <0, 1/2<p<1, 1/2<α_1<1,; ]. and ∂ C^T_1/2,1/∂ p {[ > 0, 0<p<1/2, 0<q^∗<1/2,; <0, 0<p<1/2, 1/2<q^∗<1,; >0, 1/2<p<1, 0<q^∗<1/2,; <0, 1/2<p<1, 1/2<q^∗<1.; ]. Therefore, we obtain that (1). C_l_1 is an increasing function for p≤1/2, and it is a decreasing function for p≥1/2; (2). For any 0<α_1 <1, C^T_α_1, C^R_α_1 are increasing function when 0≤ p≤1/2, they are decreasing function with 1/2≤ p≤ 1; (3). For any 0≤ p≤ 1, C^T_1/2,1 is an increasing function when 0≤ q^∗≤1/2, and it is a decreasing function for 1/2≤ q^∗≤ 1. From the above analysis, we see that C^T_1/2,1, C^T_α_1, C^R_α_1 and C_l_1 have the same monotonicity for single qubit pure states under the specific conditions. Without loss of generality, set 0≤ p_1,p_2 ≤1/2 and 0 ≤ q ≤1/2. 
Taking into account the above properties (1)-(3), we have C^T_1/2,1(| ψ⟩)≤ C^T_1/2,1( |φ⟩) if and only if p_1≤ p_2, C^T_α_1(| ψ⟩)≤ C^T_α_1( |φ⟩) if and only if p_1≤ p_2, C^R_α_1(| ψ⟩)≤ C^R_α_1( |φ⟩) if and only if p_1≤ p_2, and p_1≤ p_2 if and only if C_l_1(| ψ⟩)≤ C_l_1( |φ⟩). Therefore, we have the following theorem. For any two single-qubit pure states |ψ⟩=√(p_1)|0⟩ + √(1-p_1)|1⟩ and |φ⟩=√(p_2)|0⟩ + √(1-p_2)|1⟩, and σ=q|0 ⟩⟨ 0|+(1-q)|1 ⟩⟨ 1|, 0 ≤ p_1,p_2,q ≤ 1, the coherence measures have the following relationships: (B1) For any 0<α_1<1, 0≤ p_1,p_2 ≤1/2 and 0≤ q ≤1/2, C^T_1/2,1(| ψ⟩)≤ C^T_1/2,1( |φ⟩) ⇔ C^T_α_1(| ψ⟩)≤ C^T_α_1( |φ⟩) ⇔ C^R_α_1(| ψ⟩)≤ C^R_α_1( |φ⟩) ⇔ C_l_1(| ψ⟩)≤ C_l_1( |φ⟩); (B2) For any 0<α_1<1, 1/2≤ p_1,p_2 ≤ 1 and 1/2≤ q ≤ 1, C^T_1/2,1(| ψ⟩)≥ C^T_1/2,1( |φ⟩) ⇔ C^T_α_1(| ψ⟩)≥ C^T_α_1( |φ⟩) ⇔ C^R_α_1(| ψ⟩)≥ C^R_α_1( |φ⟩) ⇔ C_l_1(| ψ⟩)≥ C_l_1( |φ⟩); (B3) For any 0<α_1<1, 0≤ p_1,p_2 ≤1/2 and 1/2≤ q ≤ 1, C^T_1/2,1(| ψ⟩)≥ C^T_1/2,1( |φ⟩) ⇔ C^T_α_1(| ψ⟩)≤ C^T_α_1( |φ⟩) ⇔ C^R_α_1(| ψ⟩)≤ C^R_α_1( |φ⟩) ⇔ C_l_1(| ψ⟩)≤ C_l_1( |φ⟩); (B4) For any 0<α_1<1, 1/2≤ p_1,p_2 ≤ 1 and 0≤ q ≤1/2, C^T_1/2,1(| ψ⟩)≤ C^T_1/2,1( |φ⟩) ⇔ C^T_α_1(| ψ⟩)≥ C^T_α_1( |φ⟩) ⇔ C^R_α_1(| ψ⟩)≥ C^R_α_1( |φ⟩) ⇔ C_l_1(| ψ⟩)≥ C_l_1( |φ⟩). It is worthy noting that the Theorem <ref> is only valid for single-qubit pure states. The following example shows there could be different orderings for higher-dimensional systems. Consider the following two pure states in three-dimensional systems <cit.>, |ψ_1⟩=√(12/25)|0⟩+√(12/25)| 1⟩+√(12/25)|2⟩, |ψ_2⟩=√(7/10)|0⟩+√(2/10)| 1⟩+√(1/10)|2⟩. Let σ=q_1|0⟩⟨0|+ q_2|1⟩⟨ 1|+(1- q_1-q_2)|2⟩⟨ 2| with q_1,q_2 ∈ [0,1] be the incoherent state. It is easy to calculate that C_l_1(|ψ_1⟩)=1.5143, C_l_1(|ψ_2⟩)=1.5603, C^T_1/2,1(|ψ_1⟩) ≥ min_q_1, q_2{-2/25(11q_1+11 q_2-24)}, C^T_1/2,1(|ψ_2⟩) ≥ min_q_1, q_2{1/5 (9-6q_1-q_2)}, C^T_α_1(|ψ_1⟩) = (2^2/α_1 +1+3^1/α_1+1)^α_1 -1/25(α_1 -1), C^T_α_1(|ψ_2⟩) = (2^1/α_1+7^1/α_1+1)^α_1-1/10(α_1 -1), C^R_α_1(|ψ_1⟩) = α_1/α_1 -1log(25^-1/α_1 (2^2/α_1 +1+3^1/α_1+1)), C^R_α_1(|ψ_2⟩) = α_1/α_1 -1log(10^-1/α_1(2^1/α_1+7^1/α_1+1)). Set Δ C=C(|ψ_2⟩)-C(|ψ_1⟩). For any 0 ≤ q_1,q_2 ≤ 1, there exist q_1^∗∈ q_1 and q_2^∗∈ q_2 such that C^T_1/2,1(|ψ_1⟩) = -2/25(11q^∗_1+11 q^∗_2-24), C^T_1/2,1(|ψ_2⟩) = 1/5 (9-6q^∗_1-q^∗_2). Then we have Δ C^T_1/2,1 = 1/25 (-8q^∗_1+17q^∗_2-3), Δ C^T_α_1 = 1/α_1 -1[(10^-1/α_1 (2^1/α_1+7^1/α_1+1))^α_1 -(5^-2/α_1 (2^α_1 +2/α_1 +3^1/α_1+1))^α_1 ] , Δ C^R_α_1 = α_1/α_1 -1log10^-1/α_1(2^1/α_1+7^1/α_1+1)/25^-1/α_1(2^2/α_1 +1+ 3^1/α_1+1). As can be seen from Fig. <ref>, when q^∗_1, q^∗_2 ∈ [0,1] Δ C^T_1/2,1 is less than 0 at first and then greater than 0. C_l_1(|ψ_1⟩)< C_l_1(|ψ_2⟩), C^T_α_1(|ψ_1⟩)> C^T_α_1(|ψ_2⟩) and C^R_α_1(|ψ_1⟩)> C^R_α_1(|ψ_2⟩), see Fig. <ref>. Therefore, C^T_1/2,1 and C^T_α_1, C^R_α_1 generate different ordering for single-qutrit pure states |ψ_1⟩ and |ψ_2⟩. §.§ Ordering states with C^T_1/2,1, C^T_1/2, C^R_1/2 and C_l_1 for single-qubit mixed states Any single-qubit state ρ can be written as <cit.>, ρ(t,z)=[ [ 1+z/2 t/2; t/2 1-z/2 ]] with t^2+z^2≤ 1. t^2+z^2=1 if and only if ρ(t,z) is a pure state. Set σ=q̃|0 ⟩⟨ 0|+(1-q̃)|1 ⟩⟨ 1|, where 0 ≤q̃≤ 1. 
By substituting Eq.(<ref>) into (<ref>), (<ref>), (<ref>) and (<ref>), we get the coherence of ρ(t,z) as follows, C^T_1/2,1(ρ(t,z)) ≥ min_q̃{1+z-2q̃z-√(M^2_1-M^2_2)}, C^T_1/2(ρ(t,z)) = -2(r^1/2-1) , C^R_1/2(ρ(t,z)) = -log r , C_l_1(ρ(t,z)) = t, where M_1 = (2q̃-1) z+1, M_2 = √(4q̃z+z^2-2 z+1-4q̃(q̃-1)(t^2-1)), r = √(√(1-√(t^2+z^2))(√(t^2+z^2)-z)/2 √(2)√(t^2+z^2)-N_1) +√((√(t^2+z^2)+z) √(1-√(t^2+z^2))/2 √(2)√(t^2+z^2)+N_2), N_1 = (-√(t^2+z^2)-z) √(√(t^2+z^2)+1)/2 √(2)√(t^2+z^2), N_2 = (√(t^2+z^2)-z) √(√(t^2+z^2)+1)/2 √(2)√(t^2+z^2). For any 0 ≤q̃≤ 1, suppose there exists q̃^∗∈q̃ such that C^T_1/2,1(ρ(t,z)) = 1+z-2q̃^∗z-√(M̃^2_1-M̃^2_2), where M̃_2=√(4q̃^∗z+z^2-2 z+1-4q̃^∗(q̃^∗-1)(t^2-1)) and M̃_1=(2q̃^∗-1) z+1. Consider the derivations of the coherence measures with respect to t, we have ∂ C^T_1/2,1/∂ t = -4 (q̃^∗-1) q̃^∗ t/√(M̃^2_1-M̃^2_2) , ∂ C^T_1/2/∂ t = -r^-1/2∂ r/∂ t , ∂ C^R_1/2/∂ t = -log r∂ r/∂ t, ∂ C_l_1/∂ t = 1. Since t^2+z^2<1, assuming that t^2+z^2=a and a∈(0,1), we can get the analytical expression of ∂ r/∂ t, as shown in Fig. <ref>. The expression of ∂ r/∂ t is given in Appendix B. One sees that when 0≤ t < √(1-z^2), ∂ C^T_1/2,1/∂ t≥ 0, ∂ C^T_1/2/∂ t, ∂ C^R_1/2/∂ t≥ 0; and ∂ C^T_1/2,1/∂ t≤ 0, ∂ C^T_2/∂ t and ∂ C^R_2/∂ t≤ 0 when -√(1-z^2)< t≤ 0. Concerning the monotonicity of these coherence measures with respect to variable t, we have (a) For any -√(1-z^2)<t<√(1-z^2), C_l_1 is an increasing function. (b) For any -√(1-z^2)<t≤ 0, C^T_1/2,1, C^T_1/2 and C^R_1/2 are decreasing function, and for -√(1-z^2)< t≤ 0 they are increasing function. Without loss of generality, set 0≤ p_1,p_2 ≤ 1 and t≤ 0. We have C^T_1/2,1(| ψ⟩)≥ C^T_1/2,1( |φ⟩) if and only if p_1≤ p_2, C^T_1/2(| ψ⟩)≤ C^T_1/2( |φ⟩) if and only if p_1≤ p_2, C^R_1/2(| ψ⟩)≤ C^R_1/2( |φ⟩) if and only if p_1≤ p_2, and p_1≤ p_2 if and only if C_l_1(| ψ⟩)≤ C_l_1( |φ⟩). Therefore, we have the following theorem. For any two qubit mixed states ρ_1 and ρ_2 of the form(<ref>), and for any t^2+z^2 <1, C^T_1/2, 1, C^T_1/2, C^R_1/2 and C_l_1 satisfy the following relationships, C^T_1/2, 1(ρ_1)≤ C^T_1/2, 1( ρ_2) ⇔ C^T_1/2(ρ_1)≤ C^T_1/2(ρ_2) ⇔ C^R_1/2(ρ_1)≤ C^R_1/2(ρ_1) ⇔ C_l_1(ρ_1)≤ C_l_1(ρ_2); or C^T_1/2, 1(ρ_1)≥ C^T_1/2, 1( ρ_2) ⇔ C^T_1/2(ρ_1)≥ C^T_1/2(ρ_2) ⇔ C^R_1/2(ρ_1)≥ C^R_1/2(ρ_2) ⇔ C_l_1(ρ_1)≤ C_l_1(ρ_2). Theorem <ref> holds for qubit mixed states. For higher-dimensional systems there could be different orderings. Consider the following two mixed states in three-dimensional systems, ρ_1=( [ 6/25 6/25 √(3)/25; 6/25 6/25 √(3)/25; √(3)/25 √(3)/25 1/50; ]),  ρ_2=( [ 7/20 √(7/2)/10 √(7)/20; √(7/2)/10 1/10 1/10 √(2); √(7)/20 1/10 √(2) 1/20; ]). Let σ=q̃_1|0⟩⟨0|+ q̃_2|1⟩⟨ 1|+(1- q̃_1-q̃_2)|2⟩⟨ 2| with q̃_1, q̃_2 ∈ [0,1]. It is easy to calculate that C_l_1(ρ_1)=0.7571, C_l_1(ρ_2)=0.7802, C^T_1/2(ρ_1)=1.0383, C^T_1/2(ρ_2)=0.9608, C^R_1/2(ρ_1)=1.4645, C^R_1/2(ρ_2)=1.3093, C^T_1/2,1(ρ_1) ≥ min_q̃_1, q̃_2{1/25 (49-11q̃_1-11q̃_2)}, C^T_1/2,1(ρ_2) ≥ min_q̃_1, q̃_2{1/10 (19-6q̃_1-q̃_2)}. Set Δ C=C(ρ_2)-C(ρ_1). For any 0 ≤q̃_1,q̃_2 ≤ 1, suppose there exist q̃_1^∗∈q̃_1 and q̃_2^∗∈q̃_2 such that C^T_1/2,1(ρ_1) = 1/25 (49-11q̃^∗_1-11q̃^∗_2), C^T_1/2,1(ρ_2) = 1/10 (19-6q̃^∗_1-q̃^∗_2). Then we have Δ C^T_1/2,1 = 1/50 (-8q̃^∗_1+17 q̃^∗_2-3). It is clear that C_l_1(ρ_1)< C_l_1(ρ_2), C^T_1/2(ρ_1)> C^T_1/2(ρ_2) and C^R_1/2(ρ_1)> C^R_1/2(ρ_2). As can be seen from Fig. <ref>, when q̃^∗_1, q̃^∗_2 ∈ [0,1], Δ C^T_1/2,1 varies from less than 0 to greater than 0. 
Hence, C^T_1/2,1 and C^T_1/2, C^R_1/2 generate different ordering for the qutrit mixed states ρ_1 and ρ_2. § CONCLUSION In conclusion, we have proposed a quantum coherence measure based on the perspective mapping of function ln_α x:=x^α-1/α, that is, Tsallis relative operator (α, β)-entropy. It has been demonstrated that this coherence measure meets all the necessary criteria for satisfactory coherence measures, especially with the property of strong monotonicity. We have further investigated the ordering of the Tsallis relative operator (α, β)-entropy of coherence, Tsallis relative α-entropies of coherence, Rényi α-entropy of coherence and l_1 norm of coherence for single qubit states. When α=1/2 and β=1, we have proved that under certain conditions, the Tsallis relative operator (α, β)-entropy of choerence, Tsallis relative α-entropies of coherence, Rényi α-entropy of coherence and l_1 norm of coherence give the same ordering for pure qubit states. The results are extended to the case of qubit mixed states. Our results provide a new method for defining a new good coherence measure and a new idea for further study of quantum coherence. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (NSFC) under Grants 12075159, 12171044 and 12175147; Beijing Natural Science Foundation (Grant No. Z190005); the Academician Innovation Platform of Hainan Province. § APPENDIX §.§ proof of T1-T4 (T1) (monotonicity) For positive operators ρ, σ and τ with σ≤τ, and any real numbers α∈ (0,1] and β∈ (0,1], we have ρ^-β/2σρ^-β/2 ≤ ρ^-β/2τρ^-β/2 (ρ^-β/2σρ^-β/2) ^α ≤ (ρ^-β/2τρ^-β/2)^α (ρ^-β/2σρ^-β/2) ^α -I ≤ (ρ^-β/2τρ^-β/2)^α-I ρ^β/2[(ρ^-β/2σρ^-β/2) ^α -I] ρ^β/2/α ≤ ρ^β/2[(ρ^-β/2τρ^-β/2)^α-I ] ρ^β/2/α. (T2) (superadditivity) For ρ_1 ≤ρ_2 and σ_1 ≤σ_2, assume that both σ_1 and σ_2 are invertible. Set X=ρ_1^β/2 (ρ_1 +ρ_2)^-β/2, X^*=(ρ_1 +ρ_2)^-β/2ρ_1^β/2, Y=ρ_2^β/2 (ρ_1 +ρ_2)^-β/2 and Y^*=(ρ_1 +ρ_2)^-β/2ρ_2^β/2. Since X^*X +Y^*Y = I_H, where I_H is an identity operator in ℬ(H), ℬ(H) is a semi-algebra of all bounded linear operators on Hilbert space ℋ to ℋ. Then we obtain T_α, β(ρ_1+ρ_2||σ_1+σ_2) = (ρ_1 +ρ_2)^β/2ln_α[(ρ_1 +ρ_2)^-β/2 (σ_1 +σ_2) (ρ_1 +ρ_2)^-β/2]× (ρ_1 +ρ_2)^β/2 = (ρ_1 +ρ_2)^β/2ln_α(X^*ρ^-β/2_1 σ_1 ρ^-β/2_1 X +Y^*ρ^-β/2_2 σ_2 ρ^-β/2_2 Y )× (ρ_1 +ρ_2)^β/2 ≥ (ρ_1 +ρ_2)^β/2 [X^* (ln_α( ρ^-β/2_1 σ_1 ρ^-β/2_1))X +Y^*ln_α (ρ^-β/2_2 σ_2 ρ^-β/2_2) Y )](ρ_1 +ρ_2)^β/2 = ρ_1^β/2ln_α( ρ^-β/2_1 σ_1 ρ^-β/2_1) ρ_1^β/2 +ρ_2^β/2ln_α( ρ^-β/2_2 σ_2 ρ^-β/2_2) ρ_2^β/2 = T_α, β(ρ_1||σ_1)+T_q(ρ_2||σ_2), where the inequality follows from the Theorem 1.9 in <cit.>. (T4) For any unitary operator U, we have T_α, β(U ρ U^† || U σ U^†) =(U ρ U^†)^β/2ln_α((U ρ U^†)^-β/2 (U σ U^†) (U ρ U^†)^-β/2) (U ρ U^†)^β/2 =U ρ^β/2 U^†ln_α(U ρ^-β/2 U^† U σ U^† U ρ^-β/2 U^†) U ρ^β/2 U^† =U ρ^β/2 U^† U ln_α( ρ^-β/2σρ^-β/2 ) U^† U ρ^β/2 U^† =U ρ^β/2ln_α( ρ^-β/2σρ^-β/2 ) ρ^β/2 U^† =UT_α, β(ρ||σ)U^†. (T5) Assume ρ is invertible, then so does Φ(ρ). Define Φ_ρ (X)= Φ (ρ)^-β/2Φ ( ρ^β/2 X ρ^β/2 ) Φ (ρ)^-β/2. So Φ_ρ is a normalized positive linear map. Consequently, Φ(T_α, β(ρ||σ)) =Φ (ρ^β/2ln_α( ρ^-β/2σρ^-β/2 ) ρ^β/2 ) =Φ (ρ)^β/2Φ_ρ( ln_α( ρ^-β/2σρ^-β/2 ) ) Φ (ρ)^β/2 ≤Φ (ρ)^β/2ln_α (Φ_ρ( ρ^-β/2σρ^-β/2 ) ) Φ (ρ)^β/2 = Φ (ρ)^β/2ln_α [Φ (ρ)^-β/2Φ ( ρ^β/2ρ^-β/2σρ^-β/2ρ^β/2 ) Φ (ρ)^-β/2] Φ (ρ)^β/2 = Φ (ρ)^β/2ln_α [Φ (ρ)^-β/2Φ ( σ) Φ (ρ)^-β/2] Φ (ρ)^β/2 = T_α, β(Φ(ρ)||Φ(σ)), where the inequality is due to the Davis-Choi-Jensen's inequality <cit.>: Φ_ρ (F(X)) ≤ F(Φ_ρ(X)) for every operator concave function F on (0,∞). 
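To complement the operator-inequality proofs above, here is a small numerical sketch that evaluates T_α,β(ρ||σ) = [ρ^β/2 (ρ^-β/2 σ ρ^-β/2)^α ρ^β/2 - ρ^β]/α with matrix fractional powers and spot-checks the unitary invariance property (T4) on randomly drawn full-rank states; the sampling routine and tolerances are our own choices, not part of the paper.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

def random_state(d, rng):
    """Random full-rank density matrix (Ginibre-style construction)."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def t_alpha_beta(rho, sigma, alpha, beta):
    """T_{alpha,beta}(rho||sigma) = [rho^{b/2}(rho^{-b/2} sigma rho^{-b/2})^a rho^{b/2} - rho^b]/a."""
    rb = fmp(rho, beta / 2)
    rbi = fmp(rho, -beta / 2)
    return (rb @ fmp(rbi @ sigma @ rbi, alpha) @ rb - fmp(rho, beta)) / alpha

rng = np.random.default_rng(0)
d, alpha, beta = 3, 0.5, 1.0
rho, sigma = random_state(d, rng), random_state(d, rng)

# Property (T4): T(U rho U^†, U sigma U^†) = U T(rho, sigma) U^† for unitary U.
u, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
lhs = t_alpha_beta(u @ rho @ u.conj().T, u @ sigma @ u.conj().T, alpha, beta)
rhs = u @ t_alpha_beta(rho, sigma, alpha, beta) @ u.conj().T
print(np.allclose(lhs, rhs))  # expected: True
```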
§.§ Proof of Lemma <ref> According to the property (T5) and the Jensen's inequality one gets Φ[ρ^β] ≤Φ(ρ)^β and Φ[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2] ≤Φ(ρ)^β/2(Φ(ρ)^-β/2Φ(σ) Φ( ρ)^-β/2)^αΦ(ρ)^β/2. For any CPTP map Φ, we have Tr[Φ(ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2)] =Tr[ρ^β/2(ρ^-β/2σρ^-β/2)^αρ^β/2] and Tr[Φ(ρ)^β/2(Φ(ρ)^-β/2Φ(σ) Φ(ρ)^-β/2)^αΦ(ρ)^β/2] =f(Φ(ρ),Φ(σ)). According to (<ref>), (<ref>) and (<ref>), we get f(ρ,σ) ≤ f(Φ(ρ),Φ(σ)). §.§ Proof of Lemma <ref> Any TPCP map can be achieved by unitary operations and local projection measurements on the composite system <cit.>. Suppose the system H is of interest to us and A is an auxiliary system. For a TPCP map Φ :={ K_n:∑_nK_n^†K_n= ℐ_H}, we can always find a unitary operation U_HA and a set of projectors {Π_n^A=| n ⟩_A⟨ n|} such that K_nρ _HK_n^†⊗Π _n^A = ( ℐ_H⊗Π _n^A) U_HA( ρ_H⊗Π _0^A) U_HA^†( ℐ_H⊗Π _n^A). According to Lemma <ref> and the property (T4), for any states ρ _H and σ _H we have f(ρ_H,σ_H) =f(U_HA( ρ _H⊗Π _0^A) U_HA^†,U_HA(σ_H⊗Π _0^A) U_HA^†). Denote ρ _Hf=Φ_HA[ U_HA( ρ_H⊗Π _0^A) U_HA^†] and σ _Hf=Φ_HA[ U_HA( σ _H⊗Π _0^A) U_HA^†]. Due to Lemma <ref>, we obtain f( ρ _H,σ_H) ≤ f( ρ _Hf,σ _Hf) . Let the TPCP map be given by Φ_HA:={ℐ_H⊗Π_n^A}. According to (<ref>), ρ _Hf and σ _Hf in (<ref>) can be replaced by ρ _Hf→ρ̃_Hf=∑_nK_nρ _HK_n^†⊗Π _n^A and σ _Hf→σ̃_Hf=∑_nK_nσ_HK_n^†⊗Π _n^A, respectively. Thus, we have f( ρ _H,δ _H) ≤ f( ρ̃_H_f,σ̃_H_f) = ∑_nf( K_nρ_HK_n^†⊗Π _n^A, K_nσ _HK_n^†⊗Π _n^A) = ∑_nf( K_nρ_HK_n^†, K_nσ _HK_n^†) = ∑_np_n^γq_n^1-γf( ρ _n,σ _n), which comletes the proof. §.§ The expression of ∂ r/∂ t ∂ r/∂ t = 1/2^11/4a^3/2√(1-a)[(√(1-√(a))-√(√(a)+1)) t^3/√((√(√(a)+1)-√(1-√(a))) √(a-t^2)/√(a)+√(1-√(a))+√(√(a)+1))+(√(1-√(a))-√(√(a)+1)) t^3/√((√(1-√(a))-√(√(a)+1)) √(a-t^2)/√(a)+√(1-√(a))+√(√(a)+1)) +t √(a-t^2)((√(1-√(a))-√(√(a)+1)) √(a-t^2)-2 √(1-√(a))-(√(1-√(a))+√(√(a)+1)) √(a)+2 √(√(a)+1))/√((√(√(a)+1)-√(1-√(a))) √(a-t^2)/√(a)+√(1-√(a))+√(√(a)+1)) +t √(a-t^2)((√(1-√(a))-√(√(a)+1)) √(a-t^2)+2 (√(1-√(a))-√(√(a)+1))+√(a) (√(1-√(a))+√(√(a)+1)))/√((√(1-√(a))-√(√(a)+1)) √(a-t^2)/√(a)+√(1-√(a))+√(√(a)+1))]. 00 Nielsen Nielsen MA, Chuang IL. Quantum Computation and Quantum Information (Canbrudge University Press, Cambridge, 2000). Lloyd https://doi.org/10.1017/S0017089517000131Giovannetti V, Lloyd S, Maccone L. Quantum-Enhanced Measurements: Beating the Standard Quantum Limit. Science 2004;306:1330. Dobrzanski2014 https://doi.org/10.1103/PhysRevLett.113.250801Demkowicz-Dobrzański R, Maccone L. Using Entanglement Against Noise in Quantum Metrology. Phys Rev Lett 2014;113:250801. Aberg https://doi.org/10.1103/PhysRevLett.113.150402Åberg J. Catalytic Coherence. Phys Rev Lett 2014; 113:150402. Lostaglio https://doi.org/10.1103/PhysRevX.5.021001Lostaglio M, Korzekwa K, Jennings D, Rudolph T. Quantum Coherence, Time-Translation Symmetry, and Thermodynamics. Phys Rev X 2015;5:021001. Sarovar https://doi.org/10.1038/nphys1652Sarovar M, Ishizaki A, Fleming GR, Whaley KB. Quantum entanglement in photosynthetic light-harvesting complexes. Nat Phys 2010;6:462-467. Lloyd1 https://iopscience.iop.org/article/10.1088/1742-6596/302/1/012037/pdf;QuantumLloyd S. Quantum coherence in biological systems. J Phys: Conf Ser 2011;302:012037. Huelga https://doi.org/10.1080/00405000.2013.829687Huelga SF, Plenio MB. Vibrations, quanta and biology. Contemp Phys 2013;54:181-207. Lambert https://doi.org/10.1038/nphys2474Lambert N, Chen YN, Cheng YC, Li CM, Chen GY, Nori F. Quantum biology. Nat Phys 2013;9:10-18. 
http://arxiv.org/abs/2307.00309v1
20230701114636
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey
[ "Hanieh Naderi", "Ivan V. Bajić" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
[1]Department of Computer Engineering, Sharif University of Technology Tehran (e-mail: [email protected]) [2]School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada (e-mail: [email protected]) Deep learning has successfully solved a wide range of tasks in 2D vision as a dominant AI technique. Recently, deep learning on 3D point clouds is becoming increasingly popular for addressing various tasks in this field. Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks. These attacks are imperceptible to the human eye but can easily fool deep neural networks in the testing and deployment stage. To encourage future research, this survey summarizes the current progress on adversarial attack and defense techniques on point cloud classification. This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes the adversarial example generation methods in recent years. Besides, it classifies defense strategies as input transformation, data optimization, and deep model modification. Finally, it presents several challenging issues and future research directions in this domain. 3D deep learning, deep neural network, adversarial examples, adversarial defense, machine learning security, 3D point clouds. =-15pt Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey Hanieh Naderi1 Ivan V. Bajić2 =========================================================================== § INTRODUCTION Deep learning (DL) <cit.> is a subset of machine learning (ML) and artificial intelligence (AI) that analyzes large amounts of data using a structure roughly similar to the human brain. Deep learning is characterized by the use of multiple layers of neural networks, which process and analyze large amounts of data. These neural networks are trained on large datasets, which allows them to learn patterns and make decisions on their own. DL has achieved impressive results in the fields of image recognition <cit.>, semantic analysis <cit.>, speech recognition <cit.> and natural language processing <cit.> in recent years. Despite the tremendous success of DL, in 2013 Szegedy  <cit.> found that deep models are vulnerable to adversarial examples in image classification tasks. Adversarial examples are inputs to a deep learning model that have been modified in a way that is intended to mislead the model. In the context of image classification, for example, an adversarial example might be a picture of a panda that has been slightly modified in a way that is imperceptible to the human eye but that causes a deep learning model to classify the image as a gibbon. Adversarial examples can be created in two or three dimensions. In the case of 2D adversarial examples, the input is an image, and the modification is applied to the pixels of the image. These modifications can be small perturbations added to the image pixels <cit.> or they can be more significant changes to the structure of the image <cit.>. Thanks to the rapid development of 3D acquisition technologies, various types of 3D scanners, LiDARs, and RGB-D cameras have become increasingly affordable. 3D data is often used as an input for Deep Neural Networks (DNNs) in healthcare <cit.>, self-driving cars <cit.>, drones <cit.>, robotics <cit.>, and many other applications. These 3D data, compared to 2D counterparts, capture more information from the environment, thereby allowing more sophisticated analysis. 
There are different representations of 3D data, like voxels <cit.>, meshes <cit.>, and point clouds <cit.>. Since point clouds can be received directly from scanners, they can precisely capture shape details. Therefore, it is the preferred representation for many safety-critical applications. Due to this, in the case of 3D adversarial examples, the input is a point cloud, and the modification is applied to the points in the cloud. These examples can be created by adding, dropping, and shifting some points in the input point clouds, or by generating entirely new point clouds with predefined target labels using methods such as Generative Adversarial Networks (GANs) or other transformation techniques. It is typically easier to create adversarial examples in 2D space than in 3D space because the input space is smaller and there are fewer dimensions to perturb. In general, adversarial examples exploit the vulnerabilities or weaknesses in the model's prediction process, and they can be very difficult to detect because they are often indistinguishable from normal examples to the human eye. As a result, adversarial examples can pose a serious threat to the security and reliability of DL models. Therefore, it is important to have effective methods for defending against adversarial examples in order to ensure the robustness and reliability of DL models. Adversarial defense in the 2D image and the 3D point clouds both seek to protect DL models from being fooled by adversarial examples. However, there are some key differences between the approaches used to defend against adversarial images and adversarial point clouds. Some of the main differences include the following: * Input data: Adversarial images are 2D data representations, while adversarial point clouds are 3D data representations. This means that the approaches used to defend against adversarial images and point clouds may need to take into account the different dimensions and characteristics of the input data. * Adversarial perturbations: Adversarial images may be modified using small perturbations added to the image pixels, while adversarial point clouds may be modified using perturbations applied to individual points or groups of points in the point cloud. This means that the approaches used to defend against adversarial images and point clouds may need to be tailored to the specific types of adversarial perturbations that are being used. * Complexity: Adversarial point clouds may be more complex to defend against than adversarial images, as the perturbations applied to point clouds may be more difficult to identify and remove. This may require the use of more sophisticated defenses, such as methods that are able to detect and remove adversarial perturbations from the input point cloud. On the whole, adversarial point clouds can be challenging to identify and defend against, as they may not be easily recognizable in the 3D point cloud data. Adversarial point clouds may be more harmful and harder to defend against, because their changes may be less obvious to humans due to the lack of familiarity compared to images. As a result, it is important to conduct a thorough survey of adversarial attacks and defenses on 3D point clouds in order to identify the challenges and limitations of current approaches and to identify opportunities for future research in this area. 
There are a number of published surveys that review adversarial attacks and defenses in general, including in the context of computer vision, machine learning, and deep learning systems. These surveys provide an overview of the various types of attacks and defenses that have been proposed, as well as their strengths and limitations. However, there is a lack of surveys specifically focused on 3D point cloud attacks and defenses. Some published surveys do mention 3D attacks and defenses briefly <cit.>, but there is a need for more comprehensive surveys that delve deeper into this topic. Table <ref> refers to a summary or overview of published surveys of adversarial attacks and defenses. Some of these surveys focus on specific domains, such as computer vision <cit.>, text <cit.>, and images <cit.> while others provide a more general overview of adversarial attacks and defenses in the field of artificial intelligence <cit.>. Our key contributions are as follows: * A review of the different types of adversarial point clouds that have been proposed and the methods that have been used to generate them, and proposing a taxonomy of these methods. * A review of the various methods that have been proposed for defending against adversarial point clouds, including data optimization, input transformation methods, and deep model modification. * Categorization of the most important datasets and models used by researchers in this field. * An assessment of the challenges and limitations of current approaches to adversarial attacks and defenses on 3D point clouds, and identification of opportunities for future research in this area. An overview of the categorization of adversarial attack and defense approaches on 3D point clouds is shown in Fig. <ref>. The rest of this paper is organized as follows. Section <ref> introduces a list of notations, terms and measurements used in the paper. We discuss adversarial attacks on deep models for 3D point cloud classification in Section <ref>. Section <ref> provides a detailed review of the existing adversarial defense methods. In Section <ref>, we summarize commonly used 3D datasets and present a taxonomy of datasets and victim models used in recent studies. We discuss current challenges and potential solutions related to adversarial attacks in Section <ref>. Finally, Section <ref> concludes the survey. § BACKGROUND In this section, we provide the necessary background in terms of notation, terminology, and point cloud distance measures used in the field of 3D adversarial attacks. By establishing clear definitions, researchers can more accurately compare the effectiveness of different approaches and identify trends or patterns in the methods. A list of symbols used in the paper is given in Table <ref>, along with their explanations. These symbols are used to represent various quantities related to point cloud adversarial attacks. The table provides a brief description of each symbol to help readers understand and follow the discussions and equations in the paper. Next, we briefly introduce the terminology and distance measures used in the field of adversarial attacks and defenses on 3D point clouds. §.§ Definition of terms It is crucial to define the technical terms used in the literature in order to provide a consistent discussion of the various methods and approaches. The definitions of these terms appear below. The rest of the paper follows the same definitions throughout. * 3D point cloud is a set of points in 3D space, typically representing a 3D shape or scene. 
* Adversarial point cloud is a 3D point cloud that has been intentionally modified in order to mislead a DL model that analyzes 3D point clouds. We focus on geometric modifications, rather than attribute (e.g., color) modifications, since these are predominant in the literature on adversarial point clouds. * Adversarial attack is a technique that intentionally introduces perturbations or noise to an input point cloud in order to fool a DL model, causing it to make incorrect predictions or decisions. * Black-box attacks are a type of adversarial attack in which the attacker only has access to the model's input and output, and has no access to the structure of the DL model being attacked. * White-box attacks are a type of adversarial attack in which the attacker knows all the details about the DL model’s architecture and parameters. * Targeted attacks involve manipulating the input point cloud in a way that causes the model to output a specific target label when presented with the modified input. * Non-targeted attacks involve manipulating the input point cloud in a way that causes the model to output a wrong label, regardless of what that label is. * Point addition attacks involve adding points to the point cloud to fool the DL model. * Point shift attacks involve shifting points of the point cloud to fool the DL model, while the number of points remains the same as in the original point cloud. * Point drop attacks involve dropping points from the point cloud to fool the DL model. * Optimization-based attacks are a type of attack in which the creation of an adversarial point cloud is formulated and solved as an optimization problem. * Gradient-based attacks are a type of attack in which the gradients of the cost function corresponding to each input point are used to generate an adversarial point cloud with higher tendency toward being misclassified. * On-surface perturbation attacks are a type of attack that involves modifying points along the object's surface in the point cloud. * Out-of-surface perturbation attacks are a type of attack that involves modifying points outside the object surface in the point cloud. * Transferability refers to the ability of adversarial examples generated for one DL model to be successful in causing misclassification for another DL model. * Adversarial defense is a set of techniques that aim to mitigate the impact of adversarial attacks and improve the robustness of the DL model against them. * Attack success rate refers to the percentage of times that an adversarial attack on a DL model is successful. §.§ Distance measures The objective of adversarial attacks is to modify points of 𝒫, creating an adversarial point cloud 𝒫^adv, which could fool a DL model to output wrong results. Geometric 3D adversarial attacks can be achieved by adding, dropping, or shifting points in 𝒫. If the adversarial point cloud is generated by shifting points, ℓ_P-norms can be used to measure the distance between 𝒫 and 𝒫^adv, as the two point clouds have the same number of points. In this case, we can talk about the vector difference (perturbation) η = 𝒫-𝒫^adv, and consider η_P as the distance between 𝒫 and 𝒫^adv. The typical choices for P are P ∈{0, 2, ∞}, and the equation is: D_ℓ_P (𝒫 , 𝒫^adv) = η_P = (∑_i=1^np_i - p^adv_i_P^P)^1/P where 𝒫∈ℝ^n×3 is the original point cloud consisting of n points in 3D space, 𝒫={p_i | i=1,2, ..., n} and the i^th point, p_i = (x_i,y_i,z_i), is a 3D vector of coordinates. 
𝒫^adv is the adversarial point cloud formed by adding the adversarial perturbation η = (η_1,η_2, ..., η_n), η_i∈ℝ^3, to 𝒫. The three common ℓ_P norms have the following interpretations: * ℓ_0-norm or ‖η‖_0 counts the number of non-zero elements in η, so it indicates how many points in 𝒫^adv have changed compared to 𝒫. * ℓ_2-norm or ‖η‖_2 is the Euclidean distance between 𝒫^adv and 𝒫. * ℓ_∞-norm or ‖η‖_∞ is the maximum difference between the points in 𝒫^adv and 𝒫. As mentioned above, ℓ_P-norm distance criteria require that 𝒫^adv and 𝒫 have the same number of points. Hence, these distance measures cannot be used for attacks that involve adding or dropping points. To quantify the dissimilarity between two point clouds that don't have the same number of points, the Hausdorff distance D_H and the Chamfer distance D_C are commonly used. The Hausdorff distance is defined as follows: D_H (𝒫 , 𝒫^adv) = max_p^adv∈𝒫^advmin_p ∈𝒫‖p - p^adv‖_2^2 It locates the nearest original point p for each adversarial point p^adv and then finds the maximum squared Euclidean distance between all such nearest point pairs. The Chamfer distance is similar to the Hausdorff distance, except that it sums the distances among all pairs of closest points, instead of taking the maximum: D_C (𝒫 , 𝒫^adv) = ∑_p^adv∈𝒫^advmin_p ∈𝒫‖p - p^adv‖_2^2 + ∑_p ∈𝒫min_p^adv∈𝒫^adv‖p - p^adv‖_2^2 Optionally, the Chamfer distance can be averaged with respect to the number of points in the two point clouds. Besides the distance measures mentioned above, there are other distance measures for point clouds, such as the point-to-plane distance <cit.>, which is used in point cloud compression. However, these are not commonly encountered in the literature on 3D adversarial attacks, so we don't review them here. § ADVERSARIAL ATTACKS This section describes the seven most common approaches for generating adversarial point clouds. Our discussion covers the technicalities of these seven widely used methods and also briefly touches upon similar approaches related to these seven attacks. Some of the approaches <cit.> described in this section are extended versions of adversarial examples for 2D data, adapted for use with 3D point clouds. These approaches may face new challenges due to the additional dimension of the data. Other approaches <cit.> are specifically designed for 3D data and may be more effective at generating adversarial point clouds than methods that are simply adapted from 2D data. These approaches may consider the unique characteristics of 3D point clouds and the deep models that process them. Overall, the goal of these approaches is to better understand how adversarial point clouds could affect current deep 3D models. The most popular approaches are also summarized in Table <ref>, and we explain how adversarial attacks and attack categories relate in the context of adversarial examples for point cloud classification tasks. §.§ 3D fast gradient sign method (3D FGSM) The fast gradient sign method (FGSM) was presented by Goodfellow <cit.>. In accordance with standard FGSM, the method adds an adversarial perturbation η to each point of a given point cloud 𝒫 in order to create an adversarial point cloud as 𝒫^adv = 𝒫+η. Perturbations are generated according to the direction of the sign of the gradient at each point. The perturbation can be expressed as η = ϵ sign(∇_𝒫J(f(𝒫:θ),Y)) where f is the deep model parameterized by θ that takes an input point cloud 𝒫, and Y denotes the label associated with 𝒫. ∇_𝒫J(.,.) is the gradient of the loss function of the model w.r.t. 𝒫 and sign(.)
denotes the sign function. The ϵ value is an adjusting hyperparameter that determines the ℓ_∞-norm of the difference between the original and adversarial inputs. The FGSM was extended by Liu <cit.> to 3D data. Three different ways were introduced in <cit.> to define the ϵ value as a constraint for η, as follows: * Constraining the ℓ_2-norm between each dimension of points 𝒫 and 𝒫^adv. * Constraining the ℓ_2-norm between each point 𝒫 and 𝒫^adv. * Constraining the ℓ_2-norm between all points 𝒫 and 𝒫^adv. Because the first method severely limits the movement of points, the authors suggest the second and third methods. However, all three methods have shown little difference in the attack success rates. Yang <cit.> used the Chamfer distance (instead of the ℓ_2-norm) between the original point cloud and the adversarial counterpart to extend FGSM to the 3D domain. Using this approach, each point in the adversarial point cloud is perturbed slightly. There is a trade-off between the Chamfer distance and the attack success rate because, as the Chamfer distance decreases, it may become more difficult for an adversarial attack to achieve a high attack success rate. However, if the Chamfer distance is set too high, the model may be more vulnerable to adversarial attacks. Finding the right balance between these two factors can be challenging, and it may depend on the specific characteristics of the point cloud model and the type of adversarial attack being used. Figure <ref> illustrates an example of an FGSM adversarial point cloud with Chamfer distances varying from 0.01 to 0.05 between the two point clouds. The author in <cit.> sets it to 0.02 as an "appropriate distance". Apart from the FGSM attack, Yang <cit.> introduced another attack called "Momentum-Enhanced Pointwise Gradient (MPG)." The MPG attack, similar to <cit.>, integrates momentum into iterative FGSM. The MPG attack produces more transferable adversarial examples. §.§ 3D Carlini and Wagner attack (3D C&W) The C&W attack was presented by Carlini and Wagner <cit.>. They provided three kinds of attacks with three different distance measures, the ℓ_0-norm, ℓ_2-norm, and ℓ_∞-norm. As a general rule, generating a C&W attack can be described as an optimization problem that finds the minimum perturbation η such that the label of the adversarial input 𝒫^adv is changed to the target label T by the objective function g: min_η D (𝒫 , 𝒫^adv) + c · g(𝒫 + η) s.t. f(𝒫^adv)=T where D(.) refers to a distance measure (it can be defined using different distance measures like the ℓ_P-norm, Chamfer or Hausdorff distance), c is a suitably chosen constant, and g(𝒫^adv)≤ 0 if and only if f(𝒫^adv)=T. By doing so, the distance and penalty term can be optimized more effectively. There were seven objective functions g listed by the authors <cit.>. An effective function evaluated by their experiments, which was also used in other papers, is as follows: g(𝒫^adv) = max(max_i≠ t(Z(𝒫^adv)_i)-Z(𝒫^adv)_t , -κ) where Z denotes the Softmax function, and κ represents a constant that controls confidence. In comparison with the FGSM attack, these attacks do not set a constraint for the perturbation. In fact, the attacks search for the minimal perturbation (without imposing any constraints) that changes the label to the target label. As the first instance, a 3D version of the C&W attack was developed by Xiang <cit.>. According to <cit.>, four types of attacks were proposed, as follows.
In Figure <ref>, you can see the four types of C&W attacks, where the bottle label has been misclassified as a result of these attacks. * Adversarial perturbation negligibly by using ℓ_2-norm (between all points 𝒫 and 𝒫^adv) as distance measure to shift points toward the point cloud's surface. * Adding adversarial independent points by using two different distance measures. 1. Chamfer distance between the original point cloud and the adversarial point cloud. 2. Hausdorff distance between the original point cloud and the adversarial point cloud. These measures are used to push independent points toward the point cloud's surface. * Adding adversarial clusters by the combination of three different distance measures. 1. Chamfer distance between the original point cloud and the adversarial cluster is used to push clusters toward the point cloud's surface. 2. The number of clusters added. Using this measure, only 1 to 3 clusters are added, so there is only a small number of clusters added. 3. Minimize the farthest distance. In this measure, the distance between the two most distant points in each cluster is minimized to constrain the added points clustered to be within small regions. * Adding adversarial objects by the combination of three different distance measures. 1. Chamfer distance between the original point cloud and the adversarial object is used to push adversarial objects toward the point cloud's surface. 2. The number of objects added. Using this measure, only 1 to 3 objects are added, so there is only a small number of objects added. 3. ℓ_2-norm between a real-world object and an adversarial object is used to generate shapes similar to the real-world ones. The first attack is based on shifting points, and three other attacks are based on adding points. Since directly adding points to the unbounded 3D space is not possible due to the vast search space, the last three attacks use the position of critical points as the initial positions of adversarial points (or clusters or objects). Critical points are like key points that are effective in classification results. An example of critical points in PointNet would be calculating the remaining points after max pooling. Tsai  <cit.> developed a shifting point attack called K-Nearest Neighbor (KNN) attack that limits distances between adjacent points by adding an extra distance loss to  <ref>, which calculates K-Nearest Neighbor distance for each point. By doing so, adversarial point clouds are restricted to becoming physical objects. They use Chamfer distance to measure the distance of two point clouds. Wen  <cit.> considered a new distance measure named consistency of local curvatures to guide perturbed points lean towards object surfaces. Adopting the C&W attack framework, the authors use the combination of Chamfer distance, Hausdorff distance, and local curvature consistency distance as the distance measure to create a geometry-aware adversarial attack (GeoA^3). The generated GeoA^3 attack has smoothness and fairness surface properties, so the difference between it and the original point cloud is imperceptible to the human eye. §.§ 3D Projected Gradient Decent method (3D PGD) One of the most potent attacks in the 2D literature is the Projected Gradient Descent (PGD), which has its roots in the pioneering paper of Madry  <cit.>. The iterative FGSM is considered a PGD method. 
Taking the iterative FGSM method, we can generate the adversarial point cloud as 𝒫^adv_0 = 𝒫 , 𝒫^adv_t+1 = Clip_𝒫,ϵ[𝒫^adv_t+α sign(∇_𝒫J(f(𝒫^adv_t:θ),Y))] where Clip_𝒫,ϵ limits the change of the generated adversarial input in each iteration and t refers to the iteration index. The PGD attack tries to increase the cost of the correct class Y, without specifying which of the incorrect classes the model should select. The PGD attack finds the perturbation that maximizes the cost function under the constraint that η lies within the ϵ-ball: max_η J(f(𝒫+η:θ),Y) s.t. D (𝒫 , 𝒫^adv) ≤ϵ The 3D PGD attack is similar to the 2D version, but it usually uses different distance measures to calculate perturbations. In particular, Liu <cit.> proposed a PGD attack named the Distributional attack, which uses the Hausdorff distance between the triangular mesh (the original point cloud surface approximated by a triangular mesh) and the adversarial point cloud as the distance measure to push adversarial points toward the triangular mesh. This method is less sensitive to the density of points in 𝒫 because it uses a mesh instead of a point cloud to measure the perturbation. Figure <ref> demonstrates two examples of adversarial point clouds generated by the Distributional attack. Ma <cit.> proposed the Joint Gradient Based Attack (JGBA). They added an extra term to the optimization function of the PGD attack <ref> to defeat SOR (Statistical Outlier Removal), which removes outlier points. The extra term computes the gradient of the loss function of the model w.r.t. the points in 𝒫 after removing outliers, while the first term (the term in <ref>) computes the gradient of the loss function of the model w.r.t. all points in 𝒫. These two terms are combined to solve the optimization problem. The JGBA attack takes the ℓ_2-norm as the distance measure to constrain the shifting of points. §.§ Shape attack This type of attack attempts to morph the point cloud's shape. The concept of shape attacks can be compared to what is called unrestricted attacks in 2D images <cit.>. When such attacks occur, the input data might change significantly while not changing the semantics. These adversarial attacks fool the classifier without confusing humans. In this regard, Liu <cit.> proposed three shape attacks as follows. Figure <ref> demonstrates these three shape attacks. * Perturbation resampling This attack resamples a certain number of points with the lowest gradients by farthest point sampling to ensure that all points are distributed approximately uniformly. The algorithm is iterated to generate an adversarial point cloud that deceives the model. The distance measure used to maintain the similarity between 𝒫 and 𝒫^adv is the Hausdorff distance. * Adding adversarial sticks During this attack, the algorithm adds four sticks to the point cloud so that one end of each stick is attached to the point cloud and the other end has a very small distance from the first end. The algorithm optimizes the two ends of the sticks so that the label of the point cloud is changed. Finally, it adds a few points between the two ends to make them look like sticks. * Adding adversarial sinks In this case, critical points (the points remaining after max pooling in PointNet) are selected as sink points, and the points of the point cloud are pulled toward them. The goal of this attack is to minimize global changes to points that are not selected by the max pooling operation. The distance measure used to maintain the similarity between 𝒫 and 𝒫^adv is the ℓ_2-norm.
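To make the gradient-based formulations above (3D FGSM and the PGD update) concrete, the following is a minimal PyTorch sketch of an ℓ_∞-constrained iterative perturbation attack on point coordinates. The model interface, the tensor shapes, and the hyperparameters eps, alpha, and steps are illustrative assumptions rather than settings taken from any of the surveyed papers; setting steps = 1 and alpha = eps recovers the single-step 3D FGSM described earlier.

import torch
import torch.nn.functional as F

def pgd_point_cloud(model, pc, label, eps=0.05, alpha=0.01, steps=10):
    # Untargeted l_inf PGD on point coordinates; pc has shape (1, n, 3).
    # Illustrative sketch only, not the reference implementation of any cited attack.
    pc_adv = pc.clone().detach()
    for _ in range(steps):
        pc_adv.requires_grad_(True)
        loss = F.cross_entropy(model(pc_adv), label)          # J(f(P_adv; theta), Y)
        grad = torch.autograd.grad(loss, pc_adv)[0]
        with torch.no_grad():
            pc_adv = pc_adv + alpha * grad.sign()             # ascend the classification loss
            pc_adv = pc + (pc_adv - pc).clamp(-eps, eps)      # Clip_{P,eps}: stay inside the eps-ball around P
    return pc_adv.detach()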
Lee  <cit.> also proposed Shape-aware adversarial attacks called ShapeAdv that are based on injecting an adversarial perturbation η in the latent space z of a point cloud autoencoder. To be precise, the original point cloud is processed using an autoencoder to generate an adversarial point cloud, then the adversarial point cloud is fed to the classifier. Accordingly, Lee  <cit.> generated three attacks with varying distance measures. These measures are used as a term for C&W loss to maintain similarity between the original and the adversarial point clouds. All three attacks calculate gradient C&W loss w.r.t adversarial perturbation in the latent space z. The distance measures are defined as such for three types of attacks: * Shape-aware attack in the latent space. To make a more meaningful attack, the author minimizes the ℓ_2-norm between the latent space z and the adversarial latent space z+η. Using this approach, the generated adversarial point cloud is highly dissimilar from the original counterpart in terms of appearance. * Shape-aware attack in the point space. In this case, an attempt is being made to resolve the previous attack's problem. In order to maintain similarity between the original point cloud and the adversarial one, the distance measure is replaced by minimizing the Chamfer distance between the two. * Shape-aware attack with auxiliary point clouds. The attack minimizes the Chamfer distance between the adversarial point cloud and the average of k nearest neighbor, sampled from the original point cloud category. This attack aims to avoid adversarial perturbation in any direction in the latent space. To guide the direction in the latent space, it employs auxiliary point clouds sampled from the category of the original input. §.§.§ Shape attacks via autoencoders and generative models Hamdi  <cit.> proposed an attack called Advpc by using an autoencoder that could be transferred between networks effectively. This was achieved by introducing a new loss function and pipeline. Minimizing two losses was the goal of the Loss function. The first loss is C&W loss when adversarial point clouds are fed into deep models, and the second loss is C&W loss when adversarial point clouds are fed into deep models after reconstruction with a point cloud autoencoder. Using an autoencoder to generate an adversarial point cloud makes perturbations more meaningful. Consequently, their transferability from one network to another will be more promising. Lee  <cit.> also proposed Shape-aware attacks by injecting adversarial perturbation η in the latent space z of a point cloud autoencoder. In section <ref>, this attack was described in detail. LG-GAN attack <cit.> is proposed to generate an adversarial point cloud based on GAN (Generative Adversarial Network). The GAN is fed with the original point clouds and target labels to learn how to generate adversarial point clouds to fool deep models. In detail, it extracts hierarchical features from original point clouds using one multi-branch adversarial network, then integrates the specified label information into multiple intermediate features using the label encoder. The encoded features will be fed into a reconstruction decoder to generate the adversarial point cloud. This attack is so fast because it only takes one forward pass to generate an adversarial point cloud. Figure <ref> shows an instance of the LG-GAN attack. Dai <cit.> proposed a new type of attack based on GAN, which is created from noise rather than the original point cloud. 
In fact, the noise vector and the target label as the input are fed into a graph convolutional generator. It outputs the generated adversarial point cloud. The generator uses a loss function containing four parts (the objective loss, the discriminative loss, the outlier loss, and the uniform loss) to achieve a realistic adversarial attack that fools the victim network. The objective loss encourages the victim network to assign the target(incorrect) label to the adversarial point cloud while the discriminative loss encourages the auxiliary network to classify the adversarial point cloud correctly. The outlier loss and the uniform loss by removing outliers and generating a more uniform point cloud force the generator to preserve the point cloud shape. Lang <cit.> proposed a new type of adversarial attack that alters the reconstructed geometry of a 3D point cloud rather than just the predicted label, using an autoencoder trained on semantic shape classes. Mariani <cit.> proposed a method for creating adversarial attacks on surfaces embedded in 3D space, under weak smoothness assumptions on the perceptibility of the attack. §.§ Frequency attack (Attack on other domains) Liu  <cit.> have suggested an adversarial attack based on the frequency domain, which aims to enhance the transferability of generated adversarial examples. The author transformed points onto the frequency domain via graph Fourier transform (GFT). Then divide it into low-frequency components and high-frequency components, and apply perturbations to the low-frequency components to create an adversarial point cloud. In a contrasting way, Liu  <cit.> investigated the geometric structure of 3D point clouds by perturbing each of the three frequency components (low, mid, and high-frequency). They found that perturbing low-frequency components of point clouds significantly damaged their rough shape. To preserve the shape of the point cloud, they created an adversarial point cloud with constraints applying perturbations to the low-frequency components and guiding perturbations to the high-frequency components. Huang  <cit.> proposed a new attack based on applying reversible coordinate transformations to points in the original point cloud, which reduces one degree of freedom and limits their movement on the tangent plane. The best direction is calculated based on the gradients of the transformed point clouds. After that, all points are assigned a score to construct the sensitivity map. Finally, top-scoring points are selected to fool deep models. The authors in <cit.> suggest that by analyzing the eigenvalues and eigenvectors of the graph Laplacian matrix of a point cloud, it can be determined which areas of the model are particularly sensitive to perturbations. By focusing on these areas, the attack can be crafted more effectively. §.§ Minimal level of point manipulations for attacking A special type of adversarial attacks exists in the 2D domain that focuses on perturbing a minimum number of pixels in adversarial attacks <cit.>. For instance, the one-pixel attack <cit.>, which is the name given to the attack that can fool deep models by changing only one pixel, is a famous attack of this type. Taking inspiration from 2D attacks, Kim  <cit.> proposed adversarial attacks namely minimal attack that manipulate only a minimal number of points. To find an adversarial point cloud, they have modified the optimization function of the PGD attack <ref> by adding a term. In this term, the number of changed points is kept to a minimum. 
Furthermore, they used two different distance measures, Hausdorff and Chamfer distance, to preserve the similarity between 𝒫 and 𝒫^adv. Figure <ref> illustrates examples of minimal adversarial attack In another attack called Variable Step-size Attack (VSA) <cit.>, a hard boundary constraint on the number of modified points is incorporated into the optimization function of a PGD attack <ref> to preserve the point cloud's appearance. In more concrete terms, certain points with the highest gradient norms (which have the most impact on classification tasks) are initialized as modified points. By controlling the step-size (large step-size (α) at the beginning and smaller at the end), this method escapes local optima and finds the most appropriate locations for the modified (adversarial) points. Kim  <cit.> proposed a class of point cloud perturbation attacks called Nudge attacks that minimize point perturbation to flip 3D DNN results. The researchers generated adversarial point clouds using gradient-based and genetic algorithms with perturbations of up to 150 points in order to deceive DNNs. The attack can fool DNN even with a single point when the point has a large distance from the surface of 3D objects. Yang  <cit.> provided a point-attachment attack by attaching a few points to the point cloud. A Chamfer distance is used to preserve a small distance between the newly added points and the original point cloud. Hard boundary constraints limit the number of points added in the point cloud, making it more difficult to detect. Tan  <cit.> proposed a new type of attack called One point attack in which only a single point in the point cloud needs to be perturbed in order to fool the deep model. The authors also present an explainability method to identify the most important points in the point cloud for the attack Shape Prior Guided Attack <cit.> is a method that uses a shape prior, or prior knowledge of the structure of the object, to guide the generation of the perturbations, or changes made to the point cloud to create the adversarial point cloud. The goal of this method is to create adversarial point clouds that have minimal perturbations while still being able to fool the target object detection model. §.§ Attacks with drop points Attacks described in the previous sections mostly revolved around shifting, adding, or transforming points (transforming points into another space and making changes there). This section reviews attacks that drop some points to generate adversarial point clouds. Depending on how points are dropped, these attacks can be made. The authors have provided various algorithms for removing critical points effectively. As an example, Zheng <cit.> developed a method that by using a saliency map <cit.> finds critical points that are important in model decision-making and drops them. The points dropped by the saliency map are illustrated in red points in Figure <ref>. According to this method, every point is assigned a saliency score that reflects its contribution to the deep model recognition. By shifting high-saliency points towards the point cloud center, these points will not affect the surfaces much and practically operate in the same way as drop points. Consequently, the model can be deceived by shifting high-scoring points in a point cloud, resulting in adversarial point clouds. This method was proposed in two popular dropped attacks, Drop100 and Drop200, which drop 100 and 200 points respectively. 
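To illustrate the saliency-guided dropping idea described above, the sketch below scores each point by the gradient magnitude of the classification loss with respect to its coordinates and removes the k highest-scoring points. This is only an approximation of the general idea; the exact saliency definition of the cited work (which shifts high-saliency points toward the point cloud center) differs, and the model interface and the value of k are assumptions.

import torch
import torch.nn.functional as F

def drop_salient_points(model, pc, label, k=100):
    # Drop the k points with the largest gradient-based saliency (illustrative sketch only).
    pc = pc.clone().detach().requires_grad_(True)             # shape (1, n, 3)
    loss = F.cross_entropy(model(pc), label)
    grad = torch.autograd.grad(loss, pc)[0]                   # (1, n, 3)
    saliency = grad.norm(dim=-1).squeeze(0)                   # one score per point
    keep = torch.argsort(saliency)[:-k]                       # indices of the n - k least salient points
    return pc.detach()[:, keep, :]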
An attack described in <cit.> identifies "adversarial drop points" in a 3D point cloud that, when dropped, significantly reduce a model's accuracy. These points are specified independently of the model by analyzing and combining fourteen point cloud features and determining which features play key roles in the model's decision-making. In <cit.>, the critical points can be randomly determined and checked for dropping one by one. If dropping a point increases the probability of changing the ground-truth label f(𝒫) = Y, it is considered a critical point and will be dropped. Otherwise, it will not be dropped. This procedure continues iteratively until the minimum number of critical points is dropped according to the following optimization problem min_𝒫^adv⊆𝒫 (|𝒫|- |𝒫^adv|) s.t. f(𝒫^adv) ≠ f(𝒫) where |𝒫| and |𝒫^adv| are the numbers of points in the original point cloud and the adversarial one, respectively. The adversarial examples are generated by dropping critical points that optimize formula <ref>. In order to determine the level of effectiveness of a given point in PointNet model decision-making, Yang <cit.> introduced a Point-detachment attack that assigns a class-dependent importance to each point. A greedy strategy is employed to generate an adversarial point cloud, in which the points most important to the true class are dropped iteratively. The class-dependent importance associated with a given point is determined by multiplying two terms. The first term uses the PointNet feature matrix before max-pooling aggregation. (In this matrix, each row represents a point in the point cloud and each column represents a specific feature.) The second term uses the gradient of the feature matrix w.r.t. the true-class output, which is a sparse matrix with non-zero entries only at the critical points. If a given point has the largest value in some columns, the first term sums the difference between the first and second largest values in these columns. A bigger difference means more significance for the largest value. This means that a given point that corresponds to the largest value is more effective in the model decision. The second term sums up all values for a given point at the row level in the sparse matrix. §.§ Miscellaneous attacks Miao <cit.> developed an adversarial point cloud based on rotation by applying an isometry matrix to the original point cloud. To find an appropriate isometry matrix, the author used the Thompson Sampling method, which can quickly find a suitable isometry matrix with a high attack success rate. Liu <cit.> proposed an Imperceptible Transfer Attack (ITA) that enhances the imperceptibility of adversarial point clouds by shifting each point in the direction of its normal vector. Zhang <cit.> proposed a Mesh Attack that directly perturbs the mesh of a 3D object. Tang <cit.> presented a method called NormalAttack for generating imperceptible point cloud attacks. The method deforms objects along their normals by considering the object's curvature to make the modification less noticeable. § DEFENSES AGAINST ADVERSARIAL ATTACKS Adversarial defense methods for 3D point clouds can generally be divided into three categories: input transformation, data optimization, and deep model modification. The following sections discuss defense methods under each of these categories. §.§ Input transformation An input transformation is a preprocessing approach that involves applying some transformations to the input point cloud before it is fed into the deep model.
This transformation could be designed to reduce the sensitivity of the model to adversarial attacks or to make it more difficult for an attacker to craft an adversarial point cloud. Input transformation methods are listed below. §.§.§ Simple Random Sampling (SRS) Simple random sampling <cit.>, commonly known as SRS, is a statistical technique that randomly drops a certain number of points (usually 500) from an input point cloud, with each point having the same probability of being dropped. §.§.§ Statistical Outlier Removal (SOR) Since outliers exist in most adversarial attacks, Zhou <cit.> proposed a statistical outlier removal (SOR) method that trims a point from an adversarial point cloud if the average distance from the point to its k nearest neighbors falls outside (μ + α·σ), where μ is the mean and σ is the standard deviation of the k-nearest-neighbor distances of all points in the point cloud. The value of α depends on the size of the analyzed neighborhood. (In <cit.>, α = 1.1 and k = 2 are considered.) A similar defense method is used in <cit.>. The Euclidean distance between each point and its k-nearest neighbors is used to detect outliers. Points with high mean distances are discarded as outliers. §.§.§ Salient points removal This defense method <cit.> assumes that the adversarial points have fairly large gradient values. Taking this as true, the method calculates the saliency of each point based on the gradient of the model f's output class w.r.t. each point, and points with high saliency are discarded. §.§.§ Denoiser and Upsampler Network (DUP-Net) The DUP-Net defense method consists of two steps. To remove outliers, it uses SOR as a denoiser in the first step. In the second step, the output of the first step is given to an upsampler network <cit.> to produce a denser point cloud. It is generally found that adversarial perturbations cause critical points to be missing from the original point clouds, so this defense uses a denser point cloud, tracking the underlying surface of the point cloud with a uniform distribution, to recover these critical points. §.§.§ IF-Defense IF-Defense <cit.> is a preprocessing technique applied to the input point cloud. It first employs SOR to remove outliers from the input point cloud. In the next step, two losses are used to optimize the input points' coordinates under geometry- and distribution-aware constraints. The geometry-aware loss tries to push points towards the surface in order to minimize outliers. To estimate the surfaces of objects, the authors train an implicit function network <cit.> on original point clouds. Because the output of implicit functions is continuous, the predicted surface is locally smooth, which reduces outlier effects. The distribution-aware loss encourages points to have a uniform distribution by maximizing the distance between each point and its k-nearest neighbors. Accordingly, the input point clouds are restored to a clean shape using IF-Defense. Figure <ref> shows the results of three different defense methods against a Drop100 attack, including SOR, DUP-Net, and IF-Defense. §.§.§ Miscellaneous Defenses Dong <cit.> proposed the Gather-Vector Guidance (GvG) method, which is sensitive to changes in local features. In case the adversarial perturbation changes the local features, the gather-vector will also change. This method learns to ignore noisy local features.
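Because SOR also serves as the first stage of DUP-Net and IF-Defense above, a minimal NumPy sketch of it is given here. The neighborhood size k and multiplier α mirror the values mentioned in the text, but the implementation is an illustrative reconstruction under those assumptions, not the cited authors' released code.

import numpy as np

def sor_filter(points, k=2, alpha=1.1):
    # Statistical outlier removal: keep points whose mean kNN distance is below mu + alpha * sigma.
    # points: (n, 3) array of coordinates; illustrative sketch only.
    diff = points[:, None, :] - points[None, :, :]        # (n, n, 3) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)                  # (n, n) Euclidean distances
    knn = np.sort(dist, axis=1)[:, 1:k + 1]               # distances to the k nearest neighbors (skip self)
    mean_knn = knn.mean(axis=1)
    mu, sigma = mean_knn.mean(), mean_knn.std()
    return points[mean_knn < mu + alpha * sigma]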
Liu  <cit.> developed PointGuard, a method that creates a number of random subsets of points in the original point cloud, then predicts the label of the original point cloud based on the majority vote among the labels of these random subsets. Sun <cit.> proposed a framework for evaluating the robustness of 3D point cloud classification models to adaptive attack. Ada3Diff <cit.> is a method for defending against adversarial attacks on 3D point cloud models. It uses an adaptive diffusion process to smooth out perturbations in the point cloud, effectively reducing the impact of the adversarial attack. §.§ Data optimization Another category is data optimization for training, which involves optimizing the training data to improve the robustness of the deep model to adversarial attacks. This could involve techniques such as data augmentation, which involves generating additional training examples by applying transformations to the existing training data, or adversarial training, which involves intentionally introducing adversarial examples into the training data in order to improve the model's robustness to such attacks. The following methods can be used to optimize data. §.§.§ Adversarial Training In terms of modified training sets, adversarial training <cit.> is an effective defense method, which augments the training set with adversarial examples to increase the model’s robustness against attacks. To be precise, in standard training, the model is trained using only the original point clouds, while adversarial training uses both original and adversarial point clouds. The adversarial training for point clouds is described in <cit.> for the first time. The authors of <cit.> and  <cit.> trained a deep model by augmenting the FGSM and ITA attacks. As a way to find a stronger adversarial training method, the authors in <cit.> used adaptive attacks. Using this new adversarial training, different types of attacks are added to the deep model by embedding a perturbation-injection module. This module is utilized to generate the perturbed features for adversarial training. Sun  <cit.> applied self-supervised learning to adversarial training with 3D point clouds. In different tries, the authors in <cit.> add Gaussian noise to each point by randomly sampling values from a Gaussian distribution. By doing so, the attacked models can escape from the narrow adversarial subspace. Also, they developed a Quantification Method for converting point cloud coordinates into low numerical precision with multiple quantification levels, which mitigates small variations in coordinates. These noisy point clouds are then used to augment training sets. §.§.§ PointCutMix Zhang  <cit.> proposed PointCutMix technique that generated a new training set by swapping points between two optimally aligned original point clouds and training a model with this new training set. §.§.§ Low Pass Frequency-Defense (LPF-Defense) In LPF-Defense <cit.>, deep models are trained with the low-frequency version of the original point cloud. More specifically, with the Spherical Harmonic Transform (SHT) <cit.>, original point clouds were transformed from the spatial to the frequency domain. The low-frequency version of the original point cloud is then retrieved back into the spatial domain by filtering the high-frequency input data components. This method is based on the assumption that 3D deep models are overly dependent on features with unnecessary information in the training sets, making them vulnerable to adversarial point clouds. 
Therefore it discards the unnecessary information from the training data by suppressing the high-frequency contents in the training phase. §.§ Deep model modification Another category is deep model modifications, which refer to modifying the architecture of the deep model itself in order to improve its robustness to adversarial attacks. This could be achieved by making changes to the original deep neural network architecture during training. Examples of this category are given below. §.§.§ Defense-PointNet The authors in <cit.> have provided a defense method by splitting the PointNet deep model into two parts. The first part is the feature extractor, with a discriminator attached to its last layer enabling it to learn more powerful features. The feature extractor feeds a mini-batch of the original point cloud and the adversarial counterpart (generated by the FGSM attack) as input to extract features and also fool the discriminator. The second part is the PointNet classifier which is trained to classify each input correctly. The model parameters are optimized using three different loss functions: a classifier, a discriminator, and a feature extractor. While discriminator loss attempts to distinguish the original point cloud from the adversarial one, feature extractor loss misleads the discriminator to label every original/adversarial vector as the original and classifier loss encourages the classifier to give correct predictions for each input. §.§.§ Context-Consistency dynamic graph Network (CCN) Li <cit.> proposed two methodologies to improve the adversarial robustness of 3D point cloud classification models. The first methodology is the introduction of a novel point cloud architecture called Context-Consistency dynamic graph Network (CCN), which is designed to be more robust to adversarial attacks. The second methodology involves an in-depth analysis of the factors that affect the robustness of point cloud models, and the development of techniques to mitigate these factors. In order to provide a more robust model against adversarial point clouds, the authors integrate the two techniques §.§.§ Lattice Point Classifier (LPC) Li  <cit.> proposed embedding a declarative node into the networks to transform adversarial examples to the clean manifold. The authors proposed an effective instantiation, the Lattice Point Classifier (LPC), which projects each point cloud onto the lattice and generates a 2D image for classification using 2D CNNs. (Structured sparse coding in the permutohedral lattice is defined as the declarative node in LPC.). The declarative nodes defend the adversarial attacks through implicit gradients by leading them to wrong updating directions for inputs. § TAXONOMY OF DATASETS AND VICTIM MODELS A variety of 3D point cloud datasets have been collected to evaluate shape classification on DNNs, including ModelNet <cit.>, ShapeNet <cit.>, ScanObjectNN <cit.>, McGill Benchmark <cit.>, ScanNet <cit.>, Sydney Urban Objects <cit.>. A summary of the characteristics of these datasets is also provided in Table <ref>. Among all, 4 datasets namely ModelNet10 <cit.>, ModelNet40 <cit.>, ShapeNet <cit.> and ScanObjectNN <cit.> have mostly been used to evaluate attack and defense techniques. Also, there is a taxonomy of datasets and victim models used in recent studies in Table <ref>. § CHALLENGES AND DISCUSSIONS This section discusses the current challenges that adversarial point clouds face, as well as the potential solutions that can be found. 
For both adversaries and researchers, adversarial point clouds are an interesting problem, which exploits the vulnerability of deep models and helps defenders avoid adversarial point clouds. Our discussion will focus on the following questions. What factors affect the attack on Point Cloud? §.§ What factors affect the success of adversarial attacks on 3D point clouds? There are some general factors that be more important for adversarial attacks on 3D point clouds including: The complexity and robustness of the model being attacked: When a deep model is less complex and less robust, it may be less immune to adversarial attacks and require a less sophisticated or weaker attack to fool it. The structure of the 3D point cloud: The distribution of points in the point cloud and the presence of outliers can potentially affect the success of most types of adversarial point clouds. §.§ Comparison of Different Defense Methods A 3D point cloud's distribution and outliers can significantly impact the effectiveness of defense methods against adversarial point clouds. For example, input transformation techniques are designed to make it more difficult for an attacker to craft adversarial point clouds. These techniques may rely on modifying the distribution of points in the point cloud or dropping outliers. By doing this, the structure of the original point cloud is disrupted. This makes it harder for the attacker to make successful modifications. On the other hand, other defense methods, such as adversarial training, may not rely as heavily on these factors and may not be as efficient. Adversarial training is one of the most powerful defenses in the 2D defense techniques, but it does not do well in 3D data. The paper <cit.> proves that the adversarial training maximizes the classifier loss by finding a worst-case example inside a constrained search space. This procedure can change decision boundaries so that the model gets more robust to different types of attacks. This proof is based on the regular structure of 2D data. Creating 2D attacks is performed by changing the pixel values. Note that in the 2D case, the data has a regular structure. But, a point cloud consists of a set of 3D data points that are placed irregularly in space. Furthermore, the point clouds used in the literatures are constructed by randomly sampling 1024 points from each 3D object. Therefore, points are not uniformly distributed across object's surface and any two point clouds from the same class (e.g., airplane) do not have the same regular structure, as opposed to the 2D cases. These structural differences result in different defense behaviors in the adversarial training phase. Therefore, training the model with the worst-case example inside a constrained search space can not guarantee robustness against other attacks. In other words, due to the irregular structure of point clouds, it is very challenging to model adversarial points to eliminate their impact on defense. §.§ Comparison of 3D point clouds and image data in terms of attacks and defenses There are several differences between 3D point clouds and images in terms of adversarial attacks and defenses: An adversarial attack on 3D point clouds can be more complex. Typically, an adversarial attack on an image data involves adding small perturbations to the pixel values. In contrast, adversarial attacks on 3D point clouds can involve more complex modifications, such as adding or dropping points, or changing the connectivity of the points in the point cloud. 
Moreover, the structure of 3D point clouds differs from that of images. Images are typically represented as 2D arrays of pixel values, while 3D point clouds are represented as sets of 3D points. This difference in structure can make it more challenging to apply defense methods that were developed for image data to 3D point clouds. On the other hand, 3D point clouds can be more sensitive to perturbations. Because 3D point clouds represent physical objects in the real world, even small perturbations to the point cloud can result in significant changes to the shape or appearance of the represented object. This sensitivity can make it more difficult to develop robust defense methods for 3D point clouds. § CONCLUSION Adversarial attacks on 3D point cloud classification have become a significant concern in recent years. These attacks can successfully manipulate the classification of 3D point clouds, leading to incorrect decisions with potentially harmful consequences. Adversarial attacks on 3D point clouds can be categorized into several types, including drop attacks, add attacks, shift attacks, and transform attacks. To defend against these attacks, researchers have proposed two main categories of approaches: input transformation and adversarial training. Input transformation methods preprocess the input data to make it more robust to adversarial perturbations, while adversarial training augments the training data with adversarial examples to improve the model's robustness. For more robust protection against adversarial attacks, input transformation techniques can be combined with adversarial training. Potential future directions for research on adversarial attacks on 3D point clouds include optimizing attack methods by targeting only a subset of points in the point cloud and focusing on the local rather than the global structure of the point cloud, as well as exploring the robustness of 3D point cloud classifiers to attacks that are specifically designed for 3D data rather than adapted from methods developed for 2D images.
http://arxiv.org/abs/2306.08963v1
20230615085651
1st Solution Places for CVPR 2023 UG$^{\textbf{2}}$+ Challenge Track 2.1-Text Recognition through Atmospheric Turbulence
[ "Shengqi Xu", "Xueyao Xiao", "Shuning Cao", "Yi Chang", "Luxin Yan" ]
cs.CV
[ "cs.CV" ]
1st Solution Places for CVPR 2023 UG^2+ Challenge Track 2.1- Text Recognition through Atmospheric Turbulence Shengqi Xu, Xueyao Xiao, Shuning Cao, Yi Chang[1], Luxin Yan National Key Lab of Multispectral Information Intelligent Processing Technology, Huazhong University of Science and Technology, China {m202273123, xiaoxueyao, sn_cao, yichang, yanluxin}@hust.edu.cn July 31, 2023 ========================================================================================================================================================================================================================================================================== In this technical report, we present the solution developed by our team “VIELab-HUST” for text recognition through atmospheric turbulence in Track 2.1 of the CVPR 2023 UG^2+ challenge. Our solution involves an efficient multi-stage framework that restores a high-quality image from distorted frames. Specifically, a frame selection algorithm based on sharpness is first utilized to select the sharpest set of distorted frames. Next, each frame in the selected frames is aligned to suppress geometric distortion through optical-flow-based image registration. Then, a region-based image fusion method with DT-CWT is utilized to mitigate the blur caused by the turbulence. Finally, a learning-based deartifacts method is applied to remove the artifacts in the fused image, generating a high-quality outuput. Our framework can handle both hot-air text dataset and turbulence text dataset provided in the final testing phase and achieved 1st place in text recognition accuracy. Our code will be available at <https://github.com/xsqhust/Turbulence_Removal>. § INTRODUCTION This technical report presents our proposed solutions to CVPR 2023 UG^2+ Challenge Track 2.1-Text Recognition Through Atmospheric Turbulence. The participant's task is to mitigate the adverse effects caused by the turbulence so that the text recognition system can successfully recognize the text in the restored images. In this track, two types of text datasets are provided during the final testing phase: hot-air text dataset (Fig. <ref>(a)) and turbulence text dataset (Fig. <ref>(b)). The former is generated by simulating physical turbulence on images using a heat chamber, while the latter is obtained from a distance of 300 meters in hot weather <cit.>. The hot-air text dataset comprises 400 sequences, and the turbulence text dataset comprises 100 sequences, with each sequence composed of 100 distorted frames. It is required to reconstruct a high-quality image from these distorted frames. The reconstruction result of the final testing phase is based on the average accuracy of three text recognition systems (CRNN <cit.>, ASTERN <cit.>, DAN <cit.>). We propose a multi-stage framework to mitigate the distortion caused by the turbulence. Firstly, the sharpest frames are selected using frame selection based on sharpness. Next, each frame in the selected frames is aligned to suppress geometric distortion through optical-flow based registration. Then, we utilized an image fusion method with DT-CWT to mitigate the blur caused by the turbulence. Finally, we apply a learning-based deartifacts method to further improve the image quality. Our framework is capable of handling both types of text datasets, and it achieved 1st place on the final leaderboard. The technical report is organized as follows: Section <ref> briefly describes the restoration framework. 
Experimental results are given in Section <ref> to compare the performance of the proposed framework with that of other methods. § RESTORATION FRAMEWORK The proposed restoration framework contains four main steps (see the diagram in Fig. <ref>): A. Frame Selection; B. Image Registration; C. Image Fusion; D. Artifacts Removal. §.§ Frame Selection In high-temperature imaging, atmospheric turbulence affects the frames in a sequence unequally. The degree of distortion varies from frame to frame due to random fluctuations of the refractive index along the optical transmission path. Consequently, some frames have better image quality than others, with less blurriness and more useful image information. This point is illustrated in Fig. <ref>, which shows two sample frames of a signboard scene sequence distorted by atmospheric turbulence. Fig. <ref>(a) is a frame severely distorted by the turbulence, while Fig. <ref>(b) is a sharp frame from the same sequence. A comparison of the two frames reveals that Fig. <ref>(a) contributes negatively to the image restoration. Given an observed sequence {D_n}, each frame distorted by atmospheric turbulence can have varying visual quality. Sharpness is a crucial factor that determines the amount of detail conveyed by an image. Hence, we utilize a sharpness-based frame selection algorithm to select the sharpest set of distorted frames {S_k}, which aids in accurately reconstructing a high-quality image. In this step, we compute the sharpness based on the intensity gradients of the image. §.§ Image Registration In step B, each frame in the selected set {S_k} is aligned with a reference frame using optical flow, generating a registered sequence {R_k} with less geometric distortion; the reference frame is constructed by averaging the selected sequence {S_k}. The purpose of this step is to suppress geometric distortion. §.§ Image Fusion Step C restores a single image F from the registered sequence {R_k}. We utilize a region-based image fusion method with DT-CWT to mitigate the blur caused by turbulence. The fusion framework is shown in Fig. <ref>. The DT-CWT <cit.> is a popular technique for image fusion due to its shift invariance, orientation selectivity, and multiscale properties, which enable the selection and combination of useful information from multiple source images to generate a new image. However, the fusion process may introduce artifacts that affect the image quality. §.§ Artifacts Removal Finally, a learning-based artifact removal method, FBCNN <cit.>, is applied to remove the remaining artifacts in the fused image F. FBCNN is capable of predicting the quality factor of a JPEG image and embedding it into the decoder to guide image restoration. Moreover, the quality factor can be manually adjusted for flexible JPEG restoration according to the user's preference. In this task, we set the quality factor to 20.
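For concreteness, the sharpness-based frame selection of step A can be illustrated with a short sketch. This is a minimal example under simplifying assumptions (grayscale frames stored as NumPy arrays and a plain gradient-magnitude score); the exact sharpness metric and the number of selected frames used in the framework may differ, and the step B-D helpers named in the comments are hypothetical placeholders:

import numpy as np

def sharpness(frame):
    # normalized gradient magnitude as a simple sharpness score
    gy, gx = np.gradient(frame.astype(np.float32))
    return float(np.sqrt(gx ** 2 + gy ** 2).mean() / (frame.mean() + 1e-8))

def select_sharpest(frames, k=30):
    # keep the k frames with the highest sharpness score, preserving temporal order
    scores = np.array([sharpness(f) for f in frames])
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return [frames[i] for i in keep]

# The remaining steps would then operate on the selected frames, e.g.:
# registered = [align_with_optical_flow(f, reference) for f in selected]   # step B (hypothetical helper)
# fused = dtcwt_region_fusion(registered)                                  # step C (hypothetical helper)
# restored = fbcnn_remove_artifacts(fused, quality_factor=20)              # step D (hypothetical helper)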
§ EXPERIMENTS §.§ Datasets and Experimental Setting Datasets. We conduct experiments on both types of text datasets to evaluate the proposed framework. * Hot-air text dataset. The hot-air text dataset is generated by simulating physical turbulence on images using a heat chamber and contains 400 sequences in this track. * Turbulence text dataset. The turbulence text dataset is obtained from a distance of 300 meters in hot weather and contains 100 sequences in this track. Experimental Setting. We compare the proposed framework with several existing atmospheric turbulence mitigation methods, including CLEAR <cit.>, SG <cit.>, and NDL <cit.>. §.§ Experiments On Text Datasets We evaluate the performance of the proposed framework and several existing methods on both types of text datasets. The visual results of the different methods are shown in Fig. <ref>. In the visual comparison, NDL <cit.> and SG <cit.> struggle to handle the turbulence text dataset and fail to correct geometric distortion. Although CLEAR <cit.> can effectively mitigate the blur and geometric distortion caused by turbulence, it introduces severe artifacts into the results. Compared with these methods, the proposed framework simultaneously mitigates the blur and geometric distortion without destroying image details, demonstrating superior performance. §.§ Discussion Analysis of the image quality of different frames. To investigate whether atmospheric turbulence affects each frame differently in a high-temperature environment, we analyze the image quality using the normalized gradient as a sharpness metric for each frame in the distorted sequences. As shown in Fig. <ref>, there is considerable fluctuation in the temporal normalized gradients for both types of text sequences, indicating that the degree of distortion varies from frame to frame. Therefore, some of the frames have better image quality than others, with less blurriness and more useful image information. § CONCLUSION In our submission to Track 2.1 of the CVPR 2023 UG^2+ Challenge, we propose an efficient multi-stage framework to restore a high-quality image from distorted frames. First, a sharpness-based frame selection algorithm is utilized to select the best set of distorted frames. Next, each of the selected frames is aligned to suppress geometric distortion through optical-flow-based registration. Then, a region-based image fusion method with DT-CWT is utilized to mitigate the blur caused by the turbulence. Finally, a learning-based artifact removal method is applied to improve the image quality, generating the final output. Our framework can handle both the hot-air text dataset and the turbulence text dataset provided in the final testing phase, and it ranked 1st in text recognition accuracy. In the future, we will explore more efficient methods to improve this task.
http://arxiv.org/abs/2306.03158v1
20230605181209
Task-Oriented Metaverse Design in the 6G Era
[ "Zhen Meng", "Changyang She", "Guodong Zhao", "Muhammad A. Imran", "Mischa Dohler", "Yonghui Li", "Branka Vucetic" ]
cs.NI
[ "cs.NI", "eess.SP" ]
Task-Oriented Metaverse Design in the 6G Era Zhen Meng, Student Member, IEEE, Changyang She, Senior Member, IEEE, Guodong Zhao, Senior Member, IEEE, Muhammad A. Imran, Fellow, IEEE, Mischa Dohler, Fellow, IEEE, Yonghui Li, Fellow, IEEE, and Branka Vucetic, Life Fellow, IEEE July 31, 2023 =============================================================================================================================================================================================================================================== As an emerging concept, the Metaverse has the potential to revolutionize the social interaction in the post-pandemic era by establishing a digital world for online education, remote healthcare, immersive business, intelligent transportation, and advanced manufacturing. The goal is ambitious, yet the methodologies and technologies to achieve the full vision of the Metaverse remain unclear. In this paper, we first introduce the three infrastructure pillars that lay the foundation of the Metaverse, i.e., human-computer interfaces, sensing and communication systems, and network architectures. Then, we depict the roadmap towards the Metaverse that consists of four stages with different applications. To support diverse applications in the Metaverse, we put forward a novel design methodology: task-oriented design, and further review the challenges and the potential solutions. In the case study, we develop a prototype to illustrate how to synchronize a real-world device and its digital model in the Metaverse by task-oriented design, where a deep reinforcement learning algorithm is adopted to minimize the required communication throughput by optimizing the sampling and prediction systems subject to a synchronization error constraint. Metaverse, 6G, task-oriented design § INTRODUCTION The Metaverse is a digital world that will revolutionize the interactions among humans, machines, and environments by providing a shared, unified, perpetual, and inter-operable realm for participants from all over the world <cit.>. The digital world could be a pure virtual space or a digital mirror of the physical world that has the ability to reprogram the physical world in real time.
It lays the foundation for the evolution of different vertical industries including education, entertainment, healthcare, manufacturing, transportation, and immersive business. This ambitious vision brings significant challenges to the development of next-generation communication networks. It is natural to raise the following questions: Q1: Is the available infrastructure sufficient for the Metaverse? To support an application in the Metaverse, the system needs to execute a sequence of interdependent tasks. A task is an activity that needs to be completed within a period of time or by a deadline, such as the pose and eye tracking, positioning, haptic control and feedback, and semantic segmentation. State-of-the-art infrastructure cannot meet the requirements of diverse emerging applications and tasks in the Metaverse. Specifically, existing input/output systems, like the touch screen, keyboards, and mouses, are inconvenient in supporting new tasks. Thus, new Human-Computer Interface (HCI), including Virtual/Augmented Reality (VR/AR), Tactile Internet, and brain-computer interface, will lay the foundation for the Metaverse. Sensing and communication technologies play critical roles in providing timely feedback and seamless connections in the Metaverse with a real-world counterpart. To reduce the infrastructure cost, one promising approach is to exploit widely deployed mobile networks for both sensing and communications. Furthermore, the Sixth Generation (6G) networks will bridge new HCI and sensing & communication systems. Due to the long propagation delay, executing all tasks on a global server cannot meet the latency requirements of tasks. A new multi-tier network architecture that can coordinate computing, communication, and storage resources at the end-user devices, edge/local servers, and global servers efficiently is essential for supporting interdependent tasks in the Metaverse <cit.>. In summary, HCI, sensing and communication technologies, as well as network architectures will serve as the three pillars of the Metaverse. Even with the above infrastructure, supporting emerging applications in the Metaverse is not straightforward. Q2: How to guarantee the Key Performance Indicators (KPIs) of diverse applications/tasks in the Metaverse? The highly integrated and multifaceted demands of applications in the Metaverse impose stringent requirements on KPIs that are much more diverse than the KPIs defined in the three typical scenarios in the Fifth Generation (5G) mobile communication standard, i.e., Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (uRLLC), and Massive Machine Type Communications (mMTC) <cit.>. Further considering that one application consists of multiple tasks, to meet the specific requirements of the application in the Metaverse, we should analyze the KPIs at task level, referred to as task-oriented KPIs. For example, to generate haptic feedback, the system should meet the Just Noticeable Difference (JND) constraint, which is the minimum difference between two force signals that is noticeable to users. The network functions and communication KPIs in 5G networks are task agnostic and hence cannot guarantee task level KPIs. The existing communication network design approach divides the whole system into multiple sub-modules for separate optimization and cannot break the barriers among the sub-modules. As a result, it is difficult to provide End-to-End (E2E) performance guarantees. 
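As a toy illustration of the gap between communication-level KPIs and a task-oriented KPI such as the JND mentioned above, the check below asks only whether the force rendered to the user stays within a noticeable difference of the transmitted force; the 10% Weber fraction and the sample values are assumed for illustration and are not figures from this article:

def within_jnd(reference_force, rendered_force, weber_fraction=0.1):
    # task-oriented KPI: the deviation is imperceptible if it is below a
    # fraction (the Weber fraction) of the reference force magnitude
    return abs(rendered_force - reference_force) <= weber_fraction * abs(reference_force)

# A link can lose or delay packets (poor communication-level KPIs) while the task
# KPI is still met, e.g., when the receiver holds the last correctly received sample.
samples = [(1.00, 1.00), (1.05, 1.00), (1.30, 1.00)]   # (sent, rendered) force pairs
print([within_jnd(s, r) for s, r in samples])           # -> [True, True, False]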
To support the Metaverse in 6G mobile networks, we should revisit the following questions: Q3: What are the issues with the existing design approaches? Do we need new design methodologies in 6G? To improve E2E performance, cross-system design has been investigated in the existing literature <cit.>. To guarantee the control performance with stochastic wireless channels and limited communication resources in mission-critical applications, a predictive control and communication co-design system was introduced in <cit.>, where scheduling policy and transmission power are jointly optimized. To achieve substantial gains in spectral, energy, hardware and cost efficiency with mMTC, Integrated Sensing and Communication (ISAC) was developed in <cit.> to support sensing and communications simultaneously. Considering that end-user devices have limited computing, communication, and storage resources, a cloud-edge-end computing framework-driven solution was introduced in <cit.>. However, the cross-system design problems are in general non-convex or NP-hard, and novel design methodologies for real-time implementation are in urgent need. In this work, we introduce the task-oriented design for the Metaverse in the 6G era. The major contributions of this paper are summarized as follows: 1) We holistically illustrate the three infrastructure pillars that the Metaverse will be built upon, and depict the roadmap toward the full vision of the Metaverse. 2) We comprehensively review the challenges of task-oriented design in the Metaverse; To tackle these challenges, we put forward potential solutions from a system-level perspective. 3) We build a prototype to demonstrate the task-oriented design. The goal of the task is to synchronize a real-world device and its digital model in the Metaverse. § PILLARS OF THE METAVERSE In this section, we review the Metaverse infrastructure and its connections to the task-oriented design. As shown in Fig. <ref>, it consists of three pillars: the human-computer interface (HCI), sensing & communications, and network architecture. §.§ Human-Computer Interface Different from existing input/output systems that are designed to process video and audio signals, future HCI should be carefully designed to support new tasks in the Metaverse. §.§.§ XR Head-Mounted Devices The development of XR devices has greatly improved the user experience by identifying the mobility of the head-mounted device and rendering the three-dimensional (3D) video accordingly. Existing XR systems mainly focus on downlink video streaming. To further enable eye contact and expression reconstruction in the Metaverse, eye tracking and 3D modeling techniques should be integrated into XR systems. By predicting the moving direction of eyes, the XR system can render and transmit the field-of-view to be requested by users. Thus, we can improve the trade-off between data rate and latency in wireless XR. §.§.§ Tactile Devices Tactile devices are essential for supporting haptic feedback in the Metaverse. With a large number of tactile sensors and actuators, it is possible to recognize users' poses and gestures. Once the user hits a virtual item in the Metaverse, the tactile devices generate feedback to users via vibrations and resistance. Most existing tactile devices cannot provide tactile feedback for the entire human body. 
Several issues remain open in the development of whole-body tactile devices: 1) battery-life time of wearable devices is limited; 2) low-complexity graph signal processing that takes the topology of the sensors/actuators is not available; 3) the actuators should be controlled by engines and algorithms to mimic the tactile experience, which remains an open challenge. §.§.§ Brain-Computer Interface The brain-computer interface can be used for emotion recognition and reconstruction in the Metaverse. Existing brain-computer interfaces suffer from low classification accuracy and long processing delay. Due to these issues, the brain-computer interface may not be able to work as the stand-alone human-computer interface in the near future, but it may assist VR devices or tactile devices to improve the users' experience, as demonstrated in early trials by Meta. §.§.§ Combination of Different Human-Computer Interfaces Different human-computer interfaces have different data structures, generate responses in different time scales, and may support different tasks in one application. Developing a system that manages multiple human-computer interfaces brings unprecedented challenges, and is crucial for improving the users' experience in the Metaverse. To enable the interactions among users with different types of devices, new standards are needed. §.§ Sensing and Communications Sensing and communication technologies enable timely state updates of real-world devices/environments in the Metaverse. Thus, they are critical to the establishment of the digital world. §.§.§ Devices with Communication Modules Smart devices equipped with communication modules can update their states to the Metaverse. For example, a real-world robotic arm measures the angles, speeds, forces, and torques of the joints and sends the states to a server for reconstructing the digital robotic arm. As the number of devices increases, the communication resources become the bottleneck of the Metaverse. Improving the trade-off between the communication resource utilization efficiency and the synchronization accuracy/information freshness is a challenging problem. §.§.§ Environments without Communication Modules Some entities in real-world environments do not have communication modules, such as trees, buildings, pedestrians, etc. To collect their states in the digitally twinned Metaverse, we need a large number of external sensors or cameras. For example, Instant-Nerf is a neural rendering model developed by NVIDIA that can render 2D photos into 3D scenes in a few milliseconds <cit.>. To further understand the environment, semantic segmentation is crucial <cit.>. Nevertheless, most of the existing segmentation algorithms require a considerable amount of computation resources, and the processing time remains the bottleneck of real-time interactions. §.§.§ Joint Sensing-Communication Systems The cost of deploying and operating a large number of sensors and cameras could be extremely high. By integrating communication and sensing functionalities into widely deployed mobile networks, it is possible to reduce the cost. Thus, the Integrated Sensing and Communication (ISAC) system is a practical approach that collects real-world information for the Metaverse <cit.>. Note that there are tradeoffs between KPIs of different tasks and the resource utilization efficiency of ISAC systems, but a universal design framework for different tasks is still missing. 
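One simple way to trade synchronization accuracy and information freshness against communication load, in the spirit of the trade-off discussed in this section, is to let a device transmit a state update only when its digital model has drifted too far or the last update has become too old. The sketch below is schematic; the thresholds are illustrative assumptions rather than values from this article:

import numpy as np

def should_transmit(device_state, twin_state, age, err_thresh=0.05, max_age=0.1):
    # send an update when the twin's copy deviates too much (synchronization accuracy)
    # or when the last update is too old (information freshness)
    drift = np.linalg.norm(np.asarray(device_state) - np.asarray(twin_state))
    return drift > err_thresh or age > max_age

# The device evaluates this rule at every sampling instant and only occupies the
# channel when the rule fires, instead of streaming every raw sample.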
§.§ Network Architecture 6G networks will bridge HCI and sensing & communication devices, and the Metaverse. §.§.§ Multi-tier Computing Architecture Developing the Metaverse on a global server for all the users and devices around the world would be very challenging due to long communication delay and the limited communication throughput. Multi-tier computing is believed to be a promising architecture that can coordinate interdependent tasks in the Metaverse by exploiting distributed computing, storage, and communication resources in central servers, local servers, and end-user devices <cit.>. Meanwhile, the multi-tier computing raises new challenges in 6G core networks and radio access networks (RANs). §.§.§ Core Networks 5G core networks manage resources and quality-of-service at the application level. The session management function will create a new protocol data unit session when there is a new service request. How to coordinate multiple tasks for one application remains unclear. To address this issue, several promising techniques have been investigated in the existing literate: 1) The authors of <cit.> developed a semantic-effectiveness plane task-level information processing. 2) In <cit.>, the authors built a knowledge pool for reasoning-driven AI-native systems that enable online learning and fast inference of different network functions. §.§.§ Radio Access Networks Most new HCI and sensing & communication devices will access to RANs for better user experience and flexible deployment. As a result, 6G RANs should support massive devices with diverse KPI requirements. To improve user experience in real-time interactions, ISAC is a promising technology that exploits a shared multi-antenna system and advanced signal processing algorithms for data transmission and environment sensing <cit.>. In addition, a space-air-ground-sea integrated network is promising for enabling seamless connectivity for global interactions in the Metaverse <cit.>. As the tasks and applications in the Metaverse evolve over time, Open-Radio Access Networks (O-RAN) with programmable network functions can reduce the cost for network deployment and upgrades significantly <cit.>. § ROAD MAP TOWARDS THE METAVERSE In this section, we discuss the road map towards the full vision of the Metaverse as illustrated in Fig. <ref>. §.§ Establish Multi-tier Metaverse in Multi-tier Architecture The first step toward the Metaverse is to build digital worlds. There are three types of digital worlds: 1) an imaginary environment that does not have a real-world counterpart; 2) a digital twin of a real-world environment; and 3) a digital world overlays on the physical world or even has the ability to reprogram the physical world in real time. To provide quick responses from the Metaverse to users, we need to use the computing, storage, and communication resources of a local server or the end-user device. Then, the states of the user or its digital model are updated to the global sever for synchronization. The multi-tier network architecture lays the foundation for building multi-tier digital worlds that support real-time interactions among users from all over the world. §.§ Single-User Activities in the Metaverse For single-user activities, all the virtual objects and environments can be built on the end-user device, e.g., a personal computer. With the help of a variety of HCI, the user can interact with everything in the digital world, where several applications in education, entertainment, designing, and planning become possible. 
For example, by synchronizing users' actions with their digital models in the Metaverse, the users can operate virtual objects and create virtual content (e.g., driving a vehicle or painting). Nevertheless, establishing a digital world on the end-user device is not easy, as the device has limited computing and storage resources. Thus, the processing delay could be the bottleneck for real-time interactions. To address this issue, low-complexity 3D reconstruction and segmentation algorithms are in urgent need. §.§ Local Interactions in the Metaverse In some private networks or local area networks, information is exchanged among devices and users in a small area. In these scenarios, all the devices and human users can interact with each other via a local Metaverse. For example, in a smart factory, sensors monitor manufacturing processes and update their states to the local sever, where a digital twin of the factory is built <cit.>. In the digital factory, it is possible to simulate the outcomes of different actions. If an accident is detected in the simulation, the local server sends commands to actuators to stop the processes in an anticipatory manner. In addition, the data is stored and processed in the local server that is not connected to the Internet. Therefore, this approach can protect users' privacy and avoid security issues. §.§ Global Interactions in the Metaverse The ultimate goal of the Metaverse is to support global interactions for a large number of users using different types of HCI. In this stage, remote healthcare, immersive business, and online education will be possible. Latency remains one of the major issues for global interactions. Specifically, the propagation delay is inevitable in long-distance communications. Stochastic network congestion leads to long queueing delay. In addition, the re-transmission scheme in the existing Transmission Control Protocol/Internet Protocol brings significant latency. Although some interesting ideas have been put forward to achieve real-time interactions <cit.>, the implementation in large-scale networks remains a challenging goal. § TASK-ORIENTED DESIGN FRAMEWORK In this section, we propose a task-oriented design framework. In general, there are three types of tasks in the Metaverse: 1) environment sensing or measurements with HCI for constructing the digital world, 2) data/signal processing for understanding, prediction, inference, and generating feedback, and 3) communications for information exchange among human users, machine-type devices, and servers. Let us take a virtual conference as an example to illustrate the tasks. As body language plays an important role in virtual conference, the HCI executes the task of identifying the human pose and eye tracking. After that, the computer/sever processes the measured/estimated data. In this stage, typical tasks include segmentation, object detection, and rendering of 3D scenes. Finally, based on the interaction between the avatars in the Metaverse (such as shaking hands), the feedback is generated and sent back to users by the Tactile Internet. §.§ Challenges of Task-Oriented Design §.§.§ Data Structures The data structure of a task depends on the HCI or environment sensing technologies. Traditional speech and image signals are represented by time-series data and the red-green-blue (RGB) model, respectively. Nevertheless, spatial correlation is critical for the tactile signals and brainwave signals, and relies on the topology of the sensors. 
The topology information is useful in signal processing, and may facilitate the execution of tasks. For example, the signals generated by a radar system or depth-sensing camera, such as point-cloud data are converted into 3D tensors in the Euclidean space before they can be processed by convolutional neural networks. This procedure causes additional computational overhead and processing delay. To reduce overhead and improve the performance of a task, the authors of <cit.> developed a PointNet to handle a range of tasks in environment sensing, such as 3D shape classification and segmentation. Nevertheless, a widely accepted standard for data storage, processing, or communications is still missing in the Metaverse, and it will lay the foundation for immersive interactions among human users, machine-type devices, and environments. §.§.§ Task-Oriented KPIs Diverse tasks in the Metaverse have stringent requirements on a range of KPIs, which are still difficult to fulfill. For example, in applications that require low-latency feedback, the user-experienced delay should be close to zero, but the propagation delay could be up to dozens of milliseconds when the communication distance is hundreds or thousands kilometers. Furthermore, the KPIs defined in the 5G standard, such as throughput, latency, and reliability, are not the same as the task-oriented KPIs as illustrated in Fig. <ref>. For instance, in haptic communications, it is natural to raise a question: Do we really need to guarantee the 99.999% reliability in communication systems in order to achieve the target JND? The impact of network resource allocation on the task-oriented KPIs remains unclear, and there is no theoretical model or closed-form expression that can quantify their relationships. To overcome this difficulty, we need novel design methodologies. §.§.§ Multi-Task Processing and Coordination With the multi-tier network architecture, tasks of an application may be executed by the end-user device, an edge/local server, or the cloud server. The offloading and coordination of multiple tasks is not trivial since they are interdependent. For some highly interactive applications, the end-user device senses the behavior of the user, and then communicates with the local server, where the feedback is generated. Finally, the local servers synchronize the states of users in a cloud server. Delays or packet losses in any of the tasks will have a serious impact on the overall performance of the application. To provide satisfactory user experience in the Metaverse, we need to break the barriers among sensing, communication and computing systems, and jointly design the whole network. §.§ Potential Solutions §.§.§ Cross-System Design Existing HCI, sensing, communication, and computing systems are developed separately. This design approach leads to sub-optimal solutions, brings extra communication overhead for coordinating multiple tasks, and can hardly meet the task-oreinted KPIs. To address these issues, a cross-system design has been investigated in the existing literature. There are several existing cross-system design approaches. (1) As shown in <cit.>, when dealing with reconstruction tasks including in-text sentences, sounds, images, and point cloud data, by joint source and channel coding, it is possible to achieve a better quality of service at low signal-to-noise ratios. Nevertheless, complicated coding schemes may bring extra processing delay, which remains an issue in ultra-low latency communications. 
(2) Considering the cost of deploying a large number of sensors, integrating sensing into communication systems is a promising approach, as cellular networks have been widely deployed <cit.>. By utilizing communication signals in environmental sensing, cellular networks can support a variety of tasks, such as localization, object detection, and health monitoring. (3) Given the fact that state observations are outdated in some tasks, the Metaverse needs to respond to users' actions in an anticipatory manner. To achieve this goal, prediction and communication co-design is promising, especially for applications in the Tactile Internet that requires ultra-low latency <cit.>. It is worth noting that cross-system problems are in general very complicated and may not have well-established models. As a result, most of the existing analytical tools and optimization algorithms are not applicable. §.§.§ Domain-Knowledge-Assisted Deep Learning To solve the above cross-system design problems, data-driven deep learning methods are promising, as they do not rely on theoretical models or assumptions. However, straightforward applications of deep learning may not generalize well with diverse task-oriented KPIs and data structures <cit.>. To address this issue, one should exploit domain knowledge in feature engineering, sample selection, value function design, etc. When deep learning is adopted in task-oriented design, there are three major issues. (1) Most of the existing deep neural networks work well in small-scale problems. As the scale of the problem increases, the training/inference time increases rapidly. In the Metaverse, there could be millions or billions of users and devices, and thus scalability remains an open issue. (2) To support various tasks in different sensing and communication environments, deep learning algorithms trained on a data set should achieve good performance in different use cases after a few steps of fine-tuning. This generalization ability is critical for using deep learning in the Metaverse. (3) Most deep learning algorithms do not offer a performance guarantee in terms of classification or regression accuracy. But the KPIs required by some mission-critical tasks are sensitive to the outcomes of learning algorithms. Improving the safety of deep/reinforcement learning algorithms by exploiting domain knowledge is a promising and vital approach. §.§.§ Universal Design The Metaverse aims to provide better interactions among users with different cultural backgrounds and health conditions (e.g., careers, nationalities, abilities or disabilities, etc.). Different users may have different preferences, habits, and cognition. Meanwhile, they may use different types of HCI devices with different data structures. The diversity of users brings significant challenges in the design of the Metaverse, and the universal design is essential for the success of the Metaverse by considering the diverse needs and abilities of all the users throughout the design process, standardization, and government regulation. For example, a universal design platform named Omniverse can meet user demands from different backgrounds (e.g., artists, developers, and enterprises), where the Universal Scene Description is promising to be the open and extensible standard language for the 3D Internet to eliminate the barriers among different user communities <cit.>. Nevertheless, a lot of effort is still needed in the universal design. 
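To make the cross-system, domain-knowledge-assisted design concrete, such problems can be cast as constrained decision processes in which a learning agent selects system parameters and a task-level constraint is folded into the reward; the case study in the next section follows this pattern for the synchronization task. The sketch below is schematic, with illustrative action sets and penalty weights rather than the exact formulation used in the prototype:

# candidate actions: (sampling rate in packets/s, prediction horizon in ms)
ACTIONS = [(rate, horizon) for rate in (50, 100, 200, 500) for horizon in (5, 10, 20)]

def reward(action, avg_tracking_error, error_budget=0.007, lam=10.0):
    rate, _ = action
    comm_load = rate / 1000.0                                  # normalized by the 1 kHz source rate
    violation = max(0.0, avg_tracking_error - error_budget)    # task-level constraint (domain knowledge)
    return -comm_load - lam * violation                        # Lagrangian-style penalty

# A DQN or actor-critic agent observes the recent tracking error (state), picks an
# index into ACTIONS, and is trained with this reward, so that the learned policy
# minimizes communication load subject to the average tracking-error budget.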
§ A CASE STUDY Timely and accurate synchronization between the real-world device and its digital model is the foundation of the Metaverse. In this section, we show how to jointly optimize the sampling, communication, and prediction modules for the synchronization task shown in Fig. <ref>. We use domain-knowledge to design a deep reinforcement learning (DRL) algorithm to minimize the communication load subject to an average tracking error constraint. The state is defined as the mean square error (MSE) between the trajectories of the real-world robotic arm and its digital model in the Metaverse. The action includes the prediction horizon and sampling rate. The reward is the communication load. Unlike the latency and reliability constraints in communication system design, our task-oriented design approach aims to guarantee a task-oriented KPI, i.e., the average tracking error. More details about the experiment and the DRL algorithm can be found in <cit.>. §.§ Prototype Setup To validate the algorithm, we built a prototype as shown in Fig. <ref>, where a virtual robotic arm is synchronized with a physical robotic arm in the real world. Specifically, the sensor attached to the robotic arm measures its trajectory (i.e., angle of the first joint) at the frequency of 1kHz. Then, the measured trajectory is sampled, i.e. decimated, and transmitted to the Metaverse, where the server predicts and reconstructs future trajectory to reduce the latency experienced by the user. Then, the digital model of the robotic arm follows the predicted trajectory and feeds back the prediction results to the real-world robotic arm. Finally, the real robotic arm computes the mean square error (MSE) between the measured trajectory and the predicted trajectory, and a deep reinforcement learning algorithm is applied to adjust the sampling rate and the prediction horizon. For data collection, we consider the motion-controlled robotic arm application, where the real robotic arm is controlled by a human operator. Other details of the system setup and the deep reinforcement learning can be found in <cit.>. §.§ Performance Evaluation There are two performance metrics: the average communication load and the average tracking error. In the case without sampling, the packet rate in the communication system is 1,000 packets/s, which is used to normalize the communication load. For example, if the average packet rate is 150 packets/s, then the normalized average communication load is 15 %. The average tracking error is measured by the average MES between the real-world trajectory and the reconstructed trajectory in the Metaverse. The results in Fig. <ref> show the average tracking error and the normalized average communication load in the training stage of the deep reinforcement learning algorithm, where the average tracking error constraint is 0.007^∘. The results show that the task-oriented design approach can meet the average track error constraint and can reduce the average communication load to 13% of the communication load in the system without sampling. In Fig. <ref>, we test the trade-off between the normalized average communication load and the average tracking error, where different packet loss probabilities in the communication system are considered, i.e., p_loss = 0, and 10%. The results show that with a smaller packet loss probability, it is possible to achieve a better trade-off between the normalized average communication load and the average tracking error. 
In a communication system with a packet loss probability of 10 %, our task-oriented design approach can reduce the normalized average communication load to 27 % when the average tracking error constraint is 0.002^∘. This observation indicates that by adjusting the sampling rate (i.e., the communication load in Fig. <ref>), it is possible to meet the requirement of a task in communication systems with high packet loss probabilities, e.g., 10 %. § CONCLUSION AND FUTURE DIRECTIONS In this paper, we introduced the three infrastructure pillars and depicted the road map toward the full vision of the Metaverse. Then, we proposed a task-oriented design approach followed by a prototype in a case study. In future 6G standards, we need new network functions for task-level resource management. As the tasks may evolve according to the road map of the Metaverse, an O-RAN interface could be a promising direction as it allows network operators to update network functions according to new applications and tasks in the Metaverse. Since machine learning has been adopted in 3GPP as a promising tool for developing network functions, improving the scalability and generalization ability of learning-based network functions remains an open issue. Note that the training of learning-based network functions may lead to huge energy consumption, we need to reconsider the energy efficiency of the whole network. Finally, privacy, security, and trust are critical for the Metaverse, where data is shared among different network functions. Zhen Meng ([email protected]) received his B.Eng. degree from the School of Engineering, University of Glasgow, UK, in 2019. He is currently pursuing his Ph.D. degree at the University of Glasgow, UK. His research interests include the ultra-reliable and low-latency communications, communication-robotics co-design, Metaverse, and cyber-physical systems. Changyang She ([email protected]) is a lecturer-level research fellow at the University of Sydney. He is a recipient of Australian Research Council Discovery Early Career Researcher Award 2021. His research interests lie in the areas of ultra-reliable and low-latency communications, wireless artificial intelligence, and Metaverse. Guodong Zhao ([email protected]) is a Senior Lecturer in the James Watt School of Engineering at the University of Glasgow, UK. He is an IEEE Senior Member and the senior academic lead of the Scotland 5G Centre. His research interests are in the areas of artificial intelligence, communications, robotics, and Metaverse. Muhammad Ali Imran ([email protected]) is a full professor in communication systems and the head of the Autonomous System and Connectivity (ASC) Research Division in the James Watt School of Engineering at University of Glasgow, UK. He is the founding member of the Scotland 5G Centre with expertise in 5G technologies for industrial and robotics applications. Mischa Dohler ([email protected]) is now Chief Architect at Ericsson Inc. in Silicon Valley, working on cutting-edge topics of 6G, Metaverse, XR, Quantum and Blockchain. He serves on the Technical Advisory Committee of the FCC and on the Spectrum Advisory Board of Ofcom. He is a Fellow of the IEEE, the Royal Academy of Engineering, the Royal Society of Arts (RSA), and the Institution of Engineering and Technology (IET). He is a Top-1% Cited Innovator across all science fields globally. Yonghui Li ([email protected]) is a Professor and Director of Wireless Engineering Laboratory at the University of Sydney. 
He is the recipient of the Australian Queen Elizabeth II Fellowship in 2008 and the Australian Future Fellowship in 2012. He is a Fellow of IEEE. His research interests are in the areas of millimeter wave communications, machine to machine communications, coding techniques and cooperative communications. Branka Vucetic ([email protected]) is an ARC Laureate Fellow and Director of the Centre of IoT and Telecommunications at the University of Sydney. Her current research work is in wireless networks and IoT. She is a Life Fellow of IEEE, the Australian Academy of Technological Sciences and Engineering and the Australian Academy of Science.
http://arxiv.org/abs/2306.07490v2
20230613014218
Top-Down Viewing for Weakly Supervised Grounded Image Captioning
[ "Chen Cai", "Suchen Wang", "Kim-hui Yap" ]
cs.CV
[ "cs.CV" ]
Top-Down Viewing for Weakly Supervised Grounded Image Captioning Chen Cai, Suchen Wang, Kim-hui Yap Corresponding author: Kim-Hui Yap. Chen Cai, Suchen Wang and Kim-hui Yap are with the School of Electrical and Electronic Engineering, Nanyang Technology University, Singapore (email:[email protected]; [email protected]). ======================================================================================================================================================================================================================================================================= Weakly supervised grounded image captioning (WSGIC) aims to generate the caption and ground (localize) predicted object words in the input image without using bounding box supervision. Recent two-stage solutions mostly apply a bottom-up pipeline: (1) first apply an off-the-shelf object detector to encode the input image into multiple region features; (2) and then leverage a soft-attention mechanism for captioning and grounding. However, object detectors are mainly designed to extract object semantics (i.e., the object category). Besides, they break down the structural images into pieces of individual proposals. As a result, the subsequent grounded captioner is often overfitted to find the correct object words, while overlooking the relation between objects (e.g., what is the person doing?), and selecting incompatible proposal regions for grounding. To address these difficulties, we propose a one-stage weakly supervised grounded captioner that directly takes the RGB image as input to perform captioning and grounding at the top-down image level. In addition, we explicitly inject a relation module into our one-stage framework to encourage the relation understanding through multi-label classification. The relation semantics aid the prediction of relation words in the caption. We observe that the relation words not only assist the grounded captioner in generating a more accurate caption but also improve the grounding performance. We validate the effectiveness of our proposed method on two challenging datasets (Flick30k Entities captioning and MSCOCO captioning). The experimental results demonstrate that our method achieves state-of-the-art grounding performance. Grounded image captioning, Weakly supervised, One-stage method. § INTRODUCTION Image captioning is a fundamental problem in computer vision that recognizes the objects and the relationships in the image and describes them with natural language <cit.>. More recently, the bottom-up attention-based mechanism <cit.> has been widely adopted for image captioning framework and achieves remarkable success <cit.>. Apart from the significant advances achieved in image caption generation, many mainstream works <cit.> explore more grounded image captioners that localize the groundable object words while generating a caption to facilitate the interpretability of image captioning models. These methods develop regularization schemes to match the object words in the generated caption with region features and use the corresponding coordinates as grounding regions. Some previous methods <cit.> utilize bounding boxes annotation of each groundable object word at the training stage and achieve satisfactory grounding performance. However, the cost of annotating bounding boxes for large-scale captioning datasets is extortionate. Besides, this group of methods is inherently unsuitable for handling relation words since the verb words are ambiguous to be annotated by bounding boxes. 
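For readers unfamiliar with this word-region matching step, the attention-based grounding used by such two-stage captioners can be summarized by the following minimal sketch (PyTorch-style with illustrative shapes; it is a generic simplification, not the implementation of any particular method cited here): at each decoding step, the word representation attends over the detector's region features, and the box of the most-attended proposal is reported as the grounding region.

import torch

def ground_word(word_feat, region_feats, region_boxes):
    # word_feat: (d,), region_feats: (R, d), region_boxes: (R, 4) proposal coordinates
    scores = region_feats @ word_feat                # dot-product attention logits over proposals
    attn = torch.softmax(scores, dim=0)              # soft attention weights
    context = attn @ region_feats                    # attended visual context used for captioning
    grounded_box = region_boxes[attn.argmax()]       # most-attended proposal serves as the grounding
    return context, grounded_box

A limitation follows directly from this formulation: the argmax can only return one of the detector's proposals, however fragmented or incompatible those proposals may be.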
Recently, weakly supervised grounded image captioning (WSGIC) methods <cit.> have drawn more attention. WSGIC alleviates the requirement of box annotations and learns to ground based only on the given image-caption pairs during training. Nevertheless, most existing methods use two-stage pipelines, which raises three difficulties: 1) it limits the model's adaptability and efficiency in real-world applications, since the raw RGB images must first be processed with the region proposal operation (i.e., RPN) of an object detector to extract region features; 2) the input region features for caption generation mainly capture object class information, overlooking the benefits of relation semantic information for caption generation and grounding; 3) losing the global understanding of the image leads to aligning incompatible regions for grounding. The grounded captioners tend to select the most discriminative region feature as the grounding region, which results in inferior grounding performance. For instance, in Figure <ref>, the two-stage method tends to select the “head” proposal region as the localized region for the groundable word “woman” in the caption rather than grounding the entire body. Furthermore, the existing WSGIC methods focus more on object word generation using object-focused region features. The benefits of emphasizing indispensable relation semantic information in the grounded caption are largely underexplored. As shown in Figure <ref>, we observe that existing bottom-up methods are more sensitive to recognizing the objects in the image (e.g., woman, shirt, railing, etc.) while often overlooking the actions, behaviors, and activities (e.g., looking through, etc.), which can result in the caption hallucination problem <cit.> and lead to inferior captions. For example, with the less informative relation words “standing next” as context information, the captioner tends to generate “railing” for grounding rather than the more desired word “telescope” in the ground truth. We observe that relation words often serve as context that benefits object word generation in caption modeling. To address these problems, we propose a one-stage grounded captioner that can be trained in a weakly supervised manner. Different from previous approaches using pre-computed region features, we eliminate the off-the-shelf object detector and directly take the raw RGB image as input. This allows the captioner to perform captioning and grounding on the basis of the entire image. Specifically, our top-down vision transformer-based <cit.> encoder encodes the raw image to produce the class token and visual patch token representations. The grounded language decoder then utilizes these visual representations to compute word representations at each time step. Concurrently, the proposed Recurrent Grounding Module (RGM) in the decoder takes the word representations and the visual features to compute the Visual Language Attention Maps (VLAMs) for grounding. The VLAMs indicate the spatial regions and locations of the generated groundable object words in the caption. Moreover, since we model the image-level representation rather than isolated region features, we can explore how to incorporate relation semantic information in one-stage grounded image captioning. In this work, we introduce a [REL] token to capture relation semantic information, which helps the grounded captioner generate the more desired relation and object words in the caption.
Our study shows that incorporating relation semantic features into the formulate can increase the captioning and grounding quality. The main contributions of our paper can be summarized as follows: * We propose a one-stage weakly supervised grounded image captioning method to generate image captions and perform grounding in a top-down manner. * We introduce a relation token that models relation semantic information, which provides rich context information to grounded captioners to significantly benefit the generation of captions. * We proposed a recurrent grounding module that enables the grounded captioner to compute precise Visual Language Attention Maps (VLAMs) for the object words, enhancing the grounding performance. * We achieve state-of-the-art grounding and competitive captioning performance in the Flick30k Entities and MSCOCO captioning datasets. The remaining sections are organized as follows. Section II describes related work on grounded image captioning, visual grounding, and weakly supervised object localization. In Section III, we introduce the proposed top-down image encoder and the grounded language decoder. In Section IV, we show the extensive experimental results of the Flick30k Entities captioning and MSCOCO captioning datasets. Finally, we conclude this work in Section V. § RELATED WORKS §.§ Grounded image captioning Many grounded image captioning models utilize pre-trained object detector to extract region features and adopt attention mechanisms to accomplish distinct advances in grounding and captioning. Zhou et al. <cit.> utilize the aligned bounding box annotations of the noun phases to improve the captioning and grounding quality. Zhang et al. <cit.> further extract two-stage scene graph relation features for the supervised grounded captioning. However, labeling and aligning the bounding boxes with noun words in the captions is expensive for the large-scale dataset. Recent methods <cit.> have shown a promising alternative to improve the grounding accuracy in a weakly supervised manner. Ma et al. <cit.> proposed the cyclical training regimen that first localizes the region of interest (ROI) of the generated words using a language decoder, then reconstructs the sentence based on localized visual regions to regularize the captioning model. Liu et al. <cit.> explored the prophet attention that utilizes future information to compute ideal attention weights to regularize the deviated attention. Chen et al. <cit.> studied an ensemble training strategy to solve incompatible region-word alignment problems. It fuses ROIs produced by multiple grounded captioners as the final grounded region. In addition, recent works <cit.> have developed some pre-trained/pre-built weak supervision techniques to encourage the grounded captioner to achieve a better word-region alignment. Different from pioneer works that are studied upon the pre-computed region features, we aim to explore a one-stage WSGIC solution. We expect that the captioner can learn how to generate captions and locate the groundable object words directly from raw images in a weakly supervised manner. Furthermore, we explore a simple and effective method to incorporate semantic relations into a one-stage grounded captioner. §.§ Visual grounding VG task aims to localize the object region in an image based on a natural language phrase or sentence query. 
Pioneering works <cit.> study fully supervised VG, which aligns the noun phrases and the extracted regional features via a decoder using annotated bounding box supervision. More recently, <cit.> proposed to ground a language query to the corresponding region in the image in an end-to-end manner to increase the adaptability of the VG model. Besides supervised works, some works <cit.> explored weakly supervised VG, which learns from image-sentence pairs without the need for bounding box supervision. Akbari et al. <cit.> explored the non-linear mapping of visual and textual features at multiple levels and guided the model using a multi-level multimodal attention mechanism for visual grounding. Wang et al. <cit.> improved visual grounding performance via contrastive learning <cit.>. Liu et al. <cit.> proposed a knowledge-guided pairwise reconstruction network that models the relationships between image-sentence pairs and performs localization. Unlike the VG task, our task only takes images as input and aims to automatically determine the most important visual concepts to describe. §.§ Weakly supervised object localization WSOL focuses on localizing objects with image-level category labels. Most WSOL methods <cit.> use the class activation map (CAM) to indicate the object region with respect to the predicted class label. Choe et al. <cit.> propose an attention-based dropout layer that exploits the attention mechanism to process the feature maps and improve the localization quality. The work <cit.> introduces erasing integrated learning, which investigates both the less discriminative area and the high-response class-specific area to explore the complete extent of the object region. Choe et al. <cit.> model the long-range dependency over image patches through the Vision Transformer <cit.>, and they proposed a token semantic coupled attention map (TS-CAM) to solve partial activation issues. To enhance localization accuracy, Gupta et al. <cit.> introduced a patch-based attention dropout layer. In contrast to WSOL, our work performs localization by exploring the attention map generated via the dot-product similarity between visual representations and generated caption words. § METHODOLOGY Figure <ref> gives an overview of the proposed WSGIC model. In this work, we adopt an encoder-decoder framework. It is composed of a top-down image encoder that encodes the input RGB image with object and relation semantic information, and a grounded language decoder that recurrently generates the caption and grounds the objects in the image. For each time step, our grounded language decoder outputs the caption word 𝐲 (e.g., 𝐲 = man, jacket, etc.) and computes a Visual Language Attention Map (VLAM) 𝐦∈ℝ^H × W (e.g., 𝐦_man, 𝐦_jacket, etc.) to localize the groundable word with bounding box coordinates 𝐛_t={x_1, y_1, x_2, y_2}. In the following subsections, we elaborate on the details of our top-down image encoder and grounded language decoder. §.§ Top-down Image Encoder The image encoder encodes a fixed-resolution (e.g., 224 × 224, 384 × 384) RGB image 𝐈∈ℝ^H× W× 3 with a pre-trained Vision Transformer (e.g., ViT <cit.>, DeiT <cit.>). Concretely, the image is first divided into P × P patches and encoded as a sequence of patch embeddings 𝐙_patch^0 = [𝐳_1^0; 𝐳_2^0; …; 𝐳_N^0] ∈ℝ^N × D, where 𝐳_i^l ∈ℝ^D represents the i-th patch token with dimension D, N = H/P×W/P is the number of patch tokens, and l ∈{0, 1, ..., L} indexes the Transformer layers, with L their total number.
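For concreteness, the patch tokenization step described above can be sketched as follows. This is a minimal, illustrative PyTorch-style example rather than the actual implementation; the class name PatchTokenizer, the zero-initialized parameters, and the usage line are assumptions made only for this sketch (P=16, D=768, and a 384 × 384 input, matching the settings reported later).

```python
import torch
import torch.nn as nn

class PatchTokenizer(nn.Module):
    """ViT-style tokenizer sketch: split an RGB image into P x P patches,
    project each patch to a D-dimensional embedding, and prepend a [CLS] token."""
    def __init__(self, img_size=384, patch_size=16, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2            # N = (H/P) * (W/P)
        self.proj = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))       # learnable [CLS]
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, dim))

    def forward(self, img):                                         # img: (B, 3, H, W)
        x = self.proj(img)                                          # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)                            # (B, N, D) patch tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)              # (B, 1, D)
        return torch.cat([cls, x], dim=1) + self.pos_embed          # fed to the L transformer layers

tokens = PatchTokenizer()(torch.randn(2, 3, 384, 384))
print(tokens.shape)   # torch.Size([2, 577, 768]) -> 1 [CLS] + 24*24 patch tokens
```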
Usually, a learnable [CLS] token 𝐳_cls^0 ∈ℝ^D is prepended and trained along with the image patches using the L transformer layers to capture useful object information. Therefore, we treat the output 𝐳_cls^L as an object semantic representation and 𝐙_patch^L as the patch representations. Taking advantage of the [CLS] token representation can drive the caption decoder to emphasize and generate the desired object words. §.§.§ Visual relation semantic modeling Besides, letting the captioning model describe the relationships between objects is no less important. Similar to the [CLS] token, we introduce a new learnable [REL] token and expect it to capture image-level relation semantic information. Specifically, let 𝐳_rel^0 ∈ℝ^D be the [REL] token; we concatenate it with the frozen patch representations 𝐙_patch^L and train it with L_r additional learnable transformer encoder layers T_enc, which can be written as: 𝐳_rel^Lr, 𝐙_patch^(L+L_r) = T_enc([𝐳_rel^0, 𝐙_patch^L]) Here [,] denotes concatenation. To ease the presentation, we omit the multi-head attention (MHA) <cit.>, position-wise Feed-Forward Networks (FFN), and Layer Normalization (LN) <cit.> in equation (<ref>). 𝐳_rel^Lr is then fed into a prediction head (e.g., an MLP) and trained to capture the semantic information of the selected relation classes, similar to multi-label image classification. These representations are fused and projected into a d=512 dimensional space with a linear layer, and we use 𝐕 = [𝐳_rel^Lr; 𝐳_cls^L; 𝐙_patch^L ] ∈ℝ^(N+2) × d to represent the final output of our top-down image encoder. The output 𝐕 is passed into the subsequent caption decoder. Our intention is to inject the class and relation semantic representations into the caption decoder to benefit captioning and grounding performance. §.§.§ Selection of relation classes To model the relation semantics with relation labels, we utilize both verb (e.g., playing, jumping, etc.) and preposition (e.g., across, above, etc.) words in the caption as relation labels, similar to the VG dataset <cit.>, with the help of Part-of-Speech (POS) tagging. By combining both types of words, relation modeling can capture a wider range of relationships between objects in the image, making it more accurate and informative. Additionally, not all captions contain verbs, so including prepositions helps ensure that all images are associated with a relation label for classification. For instance: 1) given a ground truth caption of A person in front of a building, the proposed method tends to model the relation semantics of in front of (preposition) to assist caption generation; 2) the proposed method can model relation semantic concepts such as playing with (verb and preposition) when the caption is The girl is playing with her dog. The most frequently used relation words (62 and 72 relation classes for Flickr30k-Entities and MSCOCO, respectively) are selected to ensure every image is associated with at least one relation label for relation modeling (see Figure <ref> and <ref> for the statistics of relation labels). §.§ Grounded Language Decoder The proposed grounded caption decoder consists of a grounding enhanced language module and a recurrent grounding module (RGM). The language module sequentially generates the caption representations 𝐂 = [𝐜_1; 𝐜_2; 𝐜_3; …; 𝐜_L], which are used to predict the caption words Y, where 𝐜_i ∈ℝ ^ d, L is the length of the generated caption, and d=512 is the dimension of the caption representation.
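To make the relation-token modeling of equation (<ref>) concrete, the following is a minimal, illustrative PyTorch-style sketch rather than the exact implementation; the names RelationSemanticHead, rel_head, and proj are assumptions, and the fusion follows 𝐕 = [𝐳_rel^Lr; 𝐳_cls^L; 𝐙_patch^L] with a linear projection to dimension d.

```python
import torch
import torch.nn as nn

class RelationSemanticHead(nn.Module):
    """Sketch of the [REL]-token branch: a learnable [REL] token is concatenated with the
    frozen patch tokens Z_patch^L, refined by L_r extra encoder layers, classified into
    N_c relation classes (multi-label), and fused into V = [z_rel; z_cls; Z_patch^L]."""
    def __init__(self, dim=768, d=512, num_layers=3, num_heads=12, num_rel=72):
        super().__init__()
        self.rel_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.rel_head = nn.Linear(dim, num_rel)   # multi-label relation logits (trained with L_MLC)
        self.proj = nn.Linear(dim, d)             # linear projection to d = 512

    def forward(self, patch_tokens, cls_token):
        # patch_tokens: (B, N, dim) frozen Z_patch^L; cls_token: (B, 1, dim) z_cls^L
        rel = self.rel_token.expand(patch_tokens.size(0), -1, -1)
        z = self.encoder(torch.cat([rel, patch_tokens], dim=1))        # T_enc([z_rel^0, Z_patch^L])
        rel_out = z[:, :1]                                             # z_rel^{L_r}
        rel_logits = self.rel_head(rel_out.squeeze(1))                 # relation class prediction
        V = self.proj(torch.cat([rel_out, cls_token, patch_tokens], dim=1))  # (B, N+2, d)
        return V, rel_logits
```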
The RGM computes the Visual-Language Attention Maps (VLAMs) 𝐌 = [𝐦_1; 𝐦_2; 𝐦_3; …; 𝐦_L] for grounding sequentially with the generation of the caption words, where 𝐦_i ∈ℝ^H × W. §.§.§ Recurrent grounding module For time step t, assume we are generating 𝐦_people∈ℝ^H × W (e.g., in Figure <ref>), which represents the spatial region of the generated object word “people". We first need to generate the word-visual similarity attention matrix 𝐬_people^* ∈ℝ ^ N, which can be computed through the dot product between the word representation 𝐡_people∈ℝ ^ d and the visual representation 𝐕, where 𝐡_people is the output of the LSTM sub-module from the grounding enhanced language module, and N = H/P×W/P (e.g., N=24 × 24). Moreover, similar to sequential modeling, we propose to enhance the similarity attention matrix 𝐬_t^* (e.g., 𝐬_people^*) for the current time step by conditioning on 𝐬_t-1^* (e.g., 𝐬_of^*) from the previous time step. This enables us to compute a more precise 𝐦_people for grounding (shown in Table <ref> and Figure <ref>). We view 𝐬_t-1^* as a simple visual representation containing the information of a pseudo “object” with a pixel size of N. We then extract its features and project it into dimension d using a single dense layer. In particular, we perform the following steps to compute 𝐦_t for the current time step: 𝐦_t = Φ{𝐬_t^*}∈ℝ^H ×W, 𝐬_t^* = δ{𝐬_t}∈ℝ ^ N, 𝐬_t = Softmax(𝐮_t 𝐕^⊤/√(d)) ∈ℝ ^ N+2, 𝐮_t = 𝐡_t + f(𝐬_t-1^*) ∈ℝ ^ d, where f(·) denotes a dense layer and ⊤ represents the transpose operation. 𝐮_t (Query) is the fusion representation that fuses the word representation and the representation of 𝐬^*_t-1. 𝐬_t ∈ℝ^(N+2) has the same token length as 𝐕 (Key) after the dot product and Softmax function. Here, the split operation δ{.} is introduced to split 𝐬_t ∈ℝ^(N+2) into 𝐬_t^* ∈ℝ^N (N = H/P×W/P), indicating how much attention the word representation pays to each visual patch token, where the +2 corresponds to the attention scores of the [CLS] and [REL] tokens. Φ{.} denotes the reshaping and up-sampling process that converts and enlarges the similarity attention matrix 𝐬_t^* into the VLAM 𝐦_t∈ℝ^H × W. The generation of 𝐬_t^* can be extended to various numbers of parallel heads N_h, and performing summation over {𝐬_t^*}_i=1^N_h computes a more accurate VLAM and improves the grounding performance (illustrated in Table <ref>). §.§.§ Object bounding box generation The VLAMs are able to highlight the discriminative region of the object words (demonstrated in Figure <ref>). We borrow the idea of the thresholding approach <cit.> to find the object bounding box based on the generated VLAMs during the testing stage. Given 𝐦 (e.g., 𝐦_people), which has the same size H × W as the input image, we define a binarized mask 𝐊 ∈{0, 1}^H× W, where K_x, y=0 if the pixel at position x,y belongs to the background region and K_x, y=1 if it is part of the object (foreground) region. The produced mask is computed as: 𝐊_x, y = 1 if 𝐦^x, y > ρ (with 0 < ρ < 1), and 𝐊_x, y = 0 otherwise, where the threshold ρ=0.05 is used to differentiate the object region from the background in 𝐦_t. With the shape and location information presented in 𝐊, we are able to determine the bounding box coordinates 𝐛_t={x_1, y_1, x_2, y_2} of the object.
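A minimal single-head sketch of the recurrent grounding computation and the box extraction described above is given below. It is illustrative only: the module and function names are ours, multi-head aggregation and training details are omitted, and a 24 × 24 grid with a 384 × 384 input is assumed.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentGroundingModule(nn.Module):
    """Single-head sketch: fuse the word state h_t with the previous map s_{t-1}^*,
    attend over V, drop the [REL]/[CLS] scores, and up-sample the rest into a VLAM."""
    def __init__(self, d=512, grid=24, img_size=384):
        super().__init__()
        self.f = nn.Linear(grid * grid, d)        # f(.) projecting s_{t-1}^* to dimension d
        self.grid, self.img_size, self.d = grid, img_size, d

    def forward(self, h_t, V, s_prev):
        # h_t: (B, d); V: (B, N+2, d) with [REL], [CLS] first; s_prev: (B, N)
        u_t = h_t + self.f(s_prev)                                      # fusion representation u_t
        s_t = F.softmax(u_t.unsqueeze(1) @ V.transpose(1, 2)
                        / math.sqrt(self.d), dim=-1).squeeze(1)         # (B, N+2)
        s_star = s_t[:, 2:]                                             # split: keep patch scores only
        vlam = F.interpolate(s_star.view(-1, 1, self.grid, self.grid),
                             size=(self.img_size, self.img_size),
                             mode='bilinear', align_corners=False)      # reshape + up-sample
        return vlam.squeeze(1), s_t, s_star                             # m_t, s_t, s_t^*

def box_from_vlam(vlam_2d, rho=0.05):
    """Threshold one H x W VLAM (rho = 0.05 as above) and return the box enclosing
    the foreground mask; any contour-selection heuristic is omitted in this sketch."""
    ys, xs = torch.nonzero(vlam_2d > rho, as_tuple=True)
    if ys.numel() == 0:
        return None
    return (xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item())
```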
§.§.§ Grounding enhanced language module Generating accurate word representations is crucial for computing correct VLAMs for grounding, especially for the groundable object words in the caption. These word representations are used to produce precise word-visual similarity attention matrices (Eq. (<ref>)), which are essential for achieving the correct localization of objects. Hence, in this work, we utilize the relation and object semantic representations as contextual cues to assist the language model in generating the desired word representations, which benefits both captioning and grounding (shown in Table <ref>). In many existing captioning works <cit.>, the word representations are commonly generated with a Transformer-based decoder <cit.> or an RNN-based decoder <cit.>. Based on our experiments, we discovered that the LSTM network is better suited for generating the word representations used to compute VLAMs for our one-stage WSGIC (shown in Table <ref>). Therefore, the LSTM network is chosen to generate the word representations 𝐇 = [𝐡_1; 𝐡_2; 𝐡_3; …; 𝐡_L] ∈ℝ^L × d based on the visual features 𝐕, which contain the visual patch, [CLS], and [REL] semantic representations. Specifically, for each time step: 𝐜_t = LN(FFN(GLU(𝐡_t + 𝐜_t^g))) ∈ℝ^d, 𝐜_t^g = 𝐬_t𝐕 ∈ℝ^d, 𝐡_t = LSTM([𝐯̅ + 𝐜_t-1, 𝐖_eΠ_t], 𝐡_t-1) ∈ℝ^d, where 𝐯̅ denotes the mean-pooled 𝐕, 𝐖_e∈ℝ^E×|∑| is a word embedding matrix for a vocabulary ∑, Π_t is the one-hot encoding of the input word at the current time step t, and [,] denotes concatenation. The 𝐜_t^g captures an attended language representation that aggregates the visual tokens and the object 𝐳_cls and relation 𝐳_rel features through the attention score 𝐬_t from equation (<ref>) in the RGM, which aids caption generation and allows the RGM to be optimized at the training stage. We use a gating mechanism such as GLU <cit.> to enhance the output representations for caption generation. The layer normalization (LN) and feed-forward (FFN) sub-layers are added after the GLU sub-layer. The caption representations 𝐂 are fed into a linear projection layer and a Softmax layer for the prediction of the words 𝐘. §.§ Training and Objectives In this work, we adopt the standard multi-label classification loss for relation prediction: ℒ_MLC = -∑_i=1^N_c z_i log(p_i), p, z ∈ℝ^N_c, z ∈{1, 0}, where p and z denote the prediction and the ground-truth relation label, respectively, and N_c is the number of relation classes. We train the grounded image captioning model by optimizing the cross-entropy (XE) loss ℒ_XE: ℒ_XE = -∑_t=1^Tlog(p_θ(𝐲_t^* | 𝐲_1:t-1^* )) The model is trained to predict the target ground truth caption word 𝐲_t^* given the words 𝐲_1:t-1^*. The overall learning objective is ℒ = ℒ_MLC + ℒ_XE. § EXPERIMENT We evaluate our proposed weakly supervised grounded image captioning method on the popular Flickr30k-Entities <cit.> and MSCOCO captioning datasets. The Flickr30k-Entities dataset consists of 31k images, and each image is annotated with 5 captions. It contains 275k annotated bounding boxes associated with natural language phrases and a total of 480 object classes. We select the 72 most frequently used relation classes in the dataset. Similar to the existing method <cit.>, we use the splits from Karpathy et al. <cit.>, where 29k/1k/1k images are used for training, validation, and testing, respectively. We drop the words that occur fewer than 5 times, which results in a vocabulary of 7000 words. For the MSCOCO caption dataset <cit.>, 113k, 5k, and 5k images are used for training, testing, and validation, respectively. The vocabulary size for the COCO caption dataset is 9487. We select the 62 most frequently used relation classes in the COCO caption dataset for relation semantic modeling. Besides captioning quality, we evaluate the grounding quality on the MSCOCO dataset for 80 object categories.
We merge the MSCOCO captioning testing split of Karpathy et al. <cit.> with the MSCOCO detection dataset, resulting in 3648 image-caption-bounding box pairs for testing the grounding performance. We use the F1_all and F1_loc metrics defined in GVD <cit.> to evaluate the grounding quality. In F1_all, a region prediction is considered correct if the predicted object word is correct and the IoU between the predicted bounding box and the ground truth is greater than 0.5. The F1_loc mainly considers correctly predicted object words. More detailed information about the F1_all and F1_loc metrics can be found in the supplementary material of GVD. We use standard evaluation metrics, including BLEU <cit.>, METEOR <cit.>, CIDEr <cit.> and SPICE <cit.>, to evaluate the caption quality and compare with existing methods. §.§ Implementation Details Our GIC model is built on the DeiT <cit.> vision transformer architecture, which is pre-trained at resolution 224×224 and fine-tuned at resolution 384×384 on ImageNet-1k. The DeiT backbone encoder consists of L = 12 consecutive transformer blocks with 12 heads, and the patch size is 16. Three additional learnable transformer blocks with 12 heads are adopted for relation semantic modeling. The dimension of the visual patch token representations is D=768, and they are projected to a new embedding space with a dimension of d=512 (1024 for MSCOCO). The word embedding dimension and the hidden dimension of the grounded language decoder are set to 512 (1024 for MSCOCO). We optimize the proposed model with the ADAM <cit.> optimizer with a learning rate initialized to 5e-4 (1e-4 for COCO) and annealed by a factor of 0.8 every three epochs. Following existing WSGIC methods <cit.>, beam search is disabled for the convenience of grounding evaluation on the Flickr30k-Entities captioning dataset. §.§ Performance Comparison We compare the proposed method with state-of-the-art weakly supervised grounded image captioning methods on the Flickr30k test set in Table <ref>. The comparison includes GVD <cit.>, Cyclical <cit.>, and Prophet <cit.>, which are trained using the cross-entropy loss, and the weak supervision methods SCAN <cit.> and CVAE <cit.>, where ^† denotes that the model is trained with weak supervision and a KL-divergence loss. The SCAN <cit.> and CVAE_RL <cit.> methods are fine-tuned with their proposed weak supervision and the SCST <cit.> reward using reinforcement learning (RL). Ticks in the table denote that the methods are fine-tuned with RL. We achieve significant improvements in both captioning and grounding (F1_all and F1_loc) accuracy. Specifically, for methods trained with XE, our proposed method achieves 7.88% in F1_all and 19.9% in F1_loc, which are 1.18 and 0.7 points higher than CVAE <cit.>. We also achieve a better F1_all score than GVD <cit.>, which is trained using bounding box supervision (Sup.). Our method fine-tuned using RL outperforms SCAN and CVAE_RL, which are fine-tuned using RL and weak supervision with the KL loss. These improvements demonstrate the effectiveness of our proposed WSGIC method. §.§ Ablation study We conduct ablation studies to verify the effectiveness of various components in the proposed method. In Table <ref>, we compare our model under various settings. The GM setting injects only the visual patch representations 𝐙_patch^L into the LSTM network to generate the word representations 𝐇 and uses Cross-MHA <cit.> as the Grounding Module (GM) for the computation of VLAMs.
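As a reference for the F1_all rule described above, the per-region correctness check can be sketched as follows; this is an illustrative snippet (the function names are ours), and the full metric computation follows the GVD protocol.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def region_prediction_correct(pred_word, pred_box, gt_word, gt_box, thr=0.5):
    """F1_all-style check: the object word must match and the predicted box must
    overlap the annotated box with IoU > thr (a sketch of the rule, not the full metric)."""
    return pred_word == gt_word and iou(pred_box, gt_box) > thr
```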
GM+cls injects the [CLS] token into the LSTM decoder for word representation generation, and GM+tokens feeds both the [CLS] and [REL] tokens into the LSTM network. The RGM model tests the effectiveness of utilizing the fusion features 𝐔 to generate VLAMs; both tokens are injected into the LSTM for word representation generation. For RGM_cls, we include the [CLS] token in the RGM, which helps predict more accurate groundable words in the caption. In RGM_cls+rel, we include both the [CLS] and [REL] tokens, which achieves the best captioning and grounding performance. Our experiments indicate that the RGM module greatly aids one-stage caption grounding. The inclusion of the [CLS] and [REL] tokens helps the grounded captioner generate more accurate caption words (higher B@4 and C scores) that lead to better grounding performance, where RGM_cls+rel improves F1_all and F1_loc by 2.52 and 5.8 points, respectively, compared with the baseline GM model. Since the visual representation plays a critical role in WSGIC, we investigated several visual backbones, such as the Swin Transformer (SwinT-B) <cit.>, ViT-B <cit.>, and DeiT-B <cit.>, to determine their suitability for the WSGIC model (shown in Table <ref> and Figure <ref>). We observed that the generated VLAMs are more accurate for grounding when utilizing DeiT-B as the backbone network. The VLAMs of SwinT tend to focus on the discriminative region of each shifted window <cit.>, which affects the generation of VLAMs for grounding. The generated VLAMs with the ViT backbone are slightly less accurate compared with using the DeiT backbone. Table <ref> shows the effect of the number of parallel heads that are aggregated to compute the VLAMs for grounding. We achieve the best grounding performance when Heads=8. Table <ref> shows the performance with different numbers of additional Transformer layers used to model the relation semantics for better word representation and VLAM generation. We select 3 layers, as this setting performs better in the grounding evaluation. The mean Average Precision (mAP) result for multi-label relation class prediction is 46.55%. In Table <ref>, we evaluate how the interaction between the semantic tokens and the grounding module helps the grounded captioning model suffer less from the caption hallucination problem. A method suffers less from the caption hallucination problem when F1_loc is higher, since F1_loc measures the existence of correctly predicted object words in the generated captions with respect to the groundable object words in the ground truth (GT) caption. TR and LSTM indicate that we utilize the Transformer decoder and the LSTM decoder, respectively, with the DeiT <cit.> backbone for captioning and measure F1_loc to evaluate the existence of the correctly predicted object words in the caption. DeiT+GM indicates that we include the Grounding Module (GM) in the model without using the fusion features 𝐔 and the two tokens proposed in the RGM, while Detector+GM indicates that region features <cit.> are adopted for captioning and matched for grounding. RGM denotes utilizing the proposed Recurrent Grounding Module, and RGM+tokens injects both the [CLS] and [REL] tokens into the model. The model with the Transformer decoder tends to have higher captioning evaluation scores but a lower F1_loc compared with the LSTM decoder, and it does not yield suitable VLAMs for grounding in our experiments. Hence, we choose LSTM as the baseline decoder for captioning and grounding in this paper.
Furthermore, we can see that the captioning performance can be improved by utilizing the RGM module. Moreover, we observed that the captioning and F1_loc scores are significantly improved by utilizing the RGM and the semantic tokens, which also implies that the proposed model is less prone to the caption hallucination problem. §.§ Qualitative Analysis We present some qualitative samples in Figure <ref>. We compare our proposed method (Ours), which uses both the [REL] and [CLS] tokens, with the method (Ours_cls) using the [CLS] token only in (a) and (b). We remark that the method with [REL] performs better in generating relation words, which serve as contextual information to aid the generation of object words in the caption. For instance, the model (Ours) correctly predicts the relation word “eating”, which, as a context word, leads to predicting the correct groundable object word “apple” rather than “banana” in (a). In (b), it correctly predicts “sitting at a table” with respect to the ground truth (GT) caption. Furthermore, we show that our proposed models outperform the existing two-stage method <cit.> in generating correct relation words and object words (reducing object hallucination issues). For instance, in (c) and (d), our model predicts the correct relation words “holding” and “chasing” and the more desired object words “racket” and “ball” in the caption with respect to the GT caption. Furthermore, our proposed method successfully grounds the entire regions of the generated groundable object words rather than just part of the body in (a), (b), and (c) (e.g., locating the entire body of a man). Hence, our proposed method is able to predict both relation and groundable object words in the caption correctly and localize the entire object region in the image. Furthermore, we analyzed the VLAMs generated by the GM and the proposed RGM in Figure <ref>. We observed that the VLAMs generated with the GM tend to contain more noise than those generated with the RGM and result in localizing noisy regions. For instance, comparing 8(a) and 8(b), the VLAMs generated with the RGM in (b) attend to the object itself and yield more precise bounding boxes for object grounding compared with (a). Furthermore, the VLAM in (c) tends to localize (by grounding the largest contour <cit.>) the noisy region on the man's body rather than the ball, whereas (d) is able to eliminate the noise using the RGM and locate the spatial region of the football correctly. As a result, the grounding performance of the one-stage grounded captioner is significantly improved by utilizing the proposed RGM. §.§ Performances on MSCOCO dataset Table <ref> evaluates the grounding performance on the MSCOCO captioning dataset. We evaluate the results under cross-entropy loss training. Beam search is disabled for grounding evaluation. We evaluate the results using DeiT-B <cit.>. To the best of our knowledge, no existing grounding evaluation is available for the MSCOCO captioning dataset. Hence, we compare the grounding performance of the Detector+GM <cit.> baseline with our top-down method. Our proposed method (Ours) achieves 2.52 and 3.37 points higher grounding performance. We also compare with the proposed method (Ours_cls) using the [CLS] token, and Ours achieves higher scores in both F1_all and F1_loc. This further proves the effectiveness of involving the [REL] token in the proposed method, which improves the captioning score and reduces object hallucination issues, as reflected by a higher F1_loc score.
In Table <ref>, we evaluate our captioning performance and compare the results with recent WSGIC methods <cit.> and one-stage CLIP-based <cit.> image captioning methods <cit.><cit.>. These models are trained with the cross-entropy loss and further optimized with RL <cit.>. We compare with the result of CLIPCap <cit.> trained using the CLIP_ViT-B (Base) backbone. Our proposed method outperforms the pioneering WSGIC methods <cit.> in most metrics and approaches the performance of captioning methods based on large one-stage visual-language backbones <cit.>. In this paper, we mainly investigate the effectiveness of utilizing a grounding module and a [REL] token in generating more interpretable one-stage grounded image captions; the captioning performance can be further enhanced with more powerful backbones, as proven in the existing work CLIPCap_ViT-L <cit.>. § CONCLUSION In this work, we propose a one-stage weakly supervised grounded image captioning model that generates captions and localizes the groundable words in a top-down manner. We introduce a new token to capture relation semantic information, which serves as context information and ultimately benefits the captioning and grounding performance. Furthermore, we propose to compute accurate visual language attention maps (VLAMs) recurrently, which yields higher-quality grounding for the groundable object words. Experiments on two datasets show that the proposed method achieves state-of-the-art performance on captioning and grounding.
http://arxiv.org/abs/2306.05616v1
20230609014247
Throughput of Hybrid UAV Networks with Scale-Free Topology
[ "Zhiqing Wei", "Ziyu Wang", "Zeyang Meng", "Ning Zhang", "Huici Wu", "Zhiyong Feng" ]
cs.IT
[ "cs.IT", "math.IT", "94A99", "H.1.1" ]
Throughput of Hybrid UAV Networks with Scale-Free Topology Zhiqing Wei, Ziyu Wang, Zeyang Meng, Ning Zhang, Huici Wu, Zhiyong Feng Zhiqing Wei, Zeyang Meng, Huici Wu, and Zhiyong Feng are with Beijing University of Posts and Telecommunications, Beijing, China 100876 (email: {weizhiqing, mengzeyang, dailywu, fengzy}@bupt.edu.cn). Ziyu Wang is with Amazon (China) Holding Company Limited, Beijing, China 100025 (email: [email protected]). Ning Zhang is with the Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON, N9B 3P4, Canada. (e-mail: [email protected]). Correspondence authors: Ziyu Wang, Huici Wu, and Zhiqing Wei. Compiled by using A&A-latex ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Unmanned Aerial Vehicles (UAVs) hold great potential to support a wide range of applications due to the high maneuverability and flexibility. Compared with single UAV, UAV swarm carries out tasks efficiently in harsh environment, where the network resilience is of vital importance to UAV swarm. The network topology has a fundamental impact on the resilience of UAV network. It is discovered that scale-free network topology, as a topology that exists widely in nature, has the ability to enhance the network resilience. Besides, increasing network throughput can enhance the efficiency of information interaction, improving the network resilience. Facing these facts, this paper studies the throughput of UAV Network with scale-free topology. Introducing the hybrid network structure combining both ad hoc transmission mode and cellular transmission mode into UAV Network, the throughput of UAV Network is improved compared with that of pure ad hoc UAV network. Furthermore, this work also investigates the optimal setting of the hop threshold for the selection of ad hoc or cellular transmission mode. It is discovered that the optimal hop threshold is related with the number of UAVs and the parameters of scale-free topology. This paper may motivate the application of hybrid network structure into UAV Network. Unmanned Aerial Vehicle; Scale-free Network; Throughput; Scaling Law; Network Resilience § INTRODUCTION In recent years, with the development of the technologies such as mobile communications, artificial intelligence, and automatic control, Unmanned Aerial Vehicles (UAVs) have been widely applied in many areas, such as reconnaissance, disaster rescue, wireless communication and logistics, due to the advantages of high maneuverability and flexibility <cit.>. The application of UAVs is showing a blowout trend, and the application areas are continuously expanding. However, due to the extremely limited endurance time, power, size, and the harsh environment where UAVs carry out tasks, it is difficult for a single UAV to complete tasks quickly and efficiently. Thus, the collaboration among UAV swarm is required <cit.>. 
Taking into account the harsh environment in which UAV swarm is applied, the ability of UAV swarm to defend against device faults or other possible attacks and maintain the reliability of services, namely, the network resilience <cit.>, is of vital importance to UAV swarm. Hence, the network resilience of UAV swarm is a key factor restricting the wide application of UAV swarm <cit.> <cit.>. On the one hand, the network topology has a fundamental impact on the resilience of UAV network. Scale-free network topology, as a topology that exists widely in nature, such as Internet and biological swarms, has attracted widespread attention. The power-law distribution of the degree of the nodes is one of the important characteristics of this kind of complex network <cit.>. As a typical complex network, the large-scale UAV network is widely studied <cit.>, where the large-scale UAV network is a scale-free network. For example, <cit.> verified that the large-scale network has scale-free characteristic naturally under rules of Reynolds Boids. On the other hand, the study on scale-free structure of UAV network is of great significance to improve the network performance. Because the scale-free structure has great robustness when facing intentional attacks <cit.>. In an asymptotic sense, all network nodes need to be destroyed in order to destroy the scale-free network, which greatly improves the stability and reliability of UAV network. Tran et al. <cit.> and Fan et al. <cit.> studied the scale-free UAV network and found that randomly removing nodes has little effect on the connectivity of UAV network, so that the network has a higher tolerance for random attacks. To this end, in order to enhance the resilience of UAV swarm, the UAV network topology with scale-free network topology has been studied in depth. In <cit.>, inspired by the scale-free characteristics of bird flocks enhancing environmental response capabilities, Singh et al. studied the scale-free characteristics of UAV swarm to improve the survivability of UAV swarm. On the other hand, the network throughput has a correlation with network resilience. The increase of network throughput will decrease the congestion probability and the communication delay. As a result, the UAV swarm has a short response time to the interruption of the network, and the network resilience is improved. In order to alleviate congestion and reduce network response time, Defense Advanced Research Projects Agency (DARPA) <cit.> released a project called Content-Based Mobile Edge Networking (CBMEN) to effectively improve network throughput and reduce delay. Liu et al. <cit.> modeled the relation between network resilience and network throughput, and maximized the throughput to improve the fault recovery ability of the network. Therefore, it is shown that both network topology and network throughput can comprehensively affect the resilience of UAV network. In terms of the throughput of UAV network, Yuan et al. studied the impact of the mobility of UAVs on the link throughput of UAV Network <cit.>. Li et al. <cit.> studied the throughput of air-to-air links and the multiple access channel (MAC) throughput of UAV network. Chetlur et al. <cit.> studied the outage probability of the link of three-dimensional (3D) UAV network, and analyzed the resilience of the UAV network from the perspective of reliability. Gao et al. <cit.> enhanced the throughput of UAV Network by optimizing the deployment of UAVs in 3D space. 
We studied the throughput of the 3D UAV network in <cit.>, discovering that the UAV network throughput is a function of the path loss factor in 3D space, the number of nodes, the factor of contact concentration, and so on. However, the throughput of the UAV network with scale-free topology was seldom studied. The research on wireless networks with scale-free topology was first reported in <cit.>, where the node degree follows a power-law distribution, which is the defining feature of a scale-free network. <cit.> found the relation between network throughput and node degree. By assigning independent spectrum resources to nodes with high degrees, the throughput of the UAV network is improved. <cit.> and <cit.> are preliminary works of <cit.>, where the distribution of node degrees follows a uniform distribution. We studied the throughput of 3D scale-free networks in <cit.>. Compared with <cit.>, <cit.> studied the impact of 3D network topology on the throughput of scale-free networks. Besides, the optimal threshold of the degree is studied, and separate resources are allocated to nodes with degrees larger than the threshold to enhance the network throughput. To wrap up, the scale-free topology improves the resilience of the UAV network, and the enhancement of the throughput of the UAV network is essential to improve its resilience. In order to improve the throughput of the UAV network, a hybrid UAV network is formed in this paper through the cooperation between the UAV network and the ground cellular network. A hybrid network is a combination of an ad hoc network and a cellular network <cit.>. As shown in Fig. 1, when the source node and the sink node are far apart, the data can be transmitted in cellular mode. On the contrary, when the source node and the sink node are close, the data can be transmitted via ad hoc mode. Kumar et al. proposed this model earlier in <cit.> and proved the improvement of network throughput through the hybrid network model. In this paper, the throughput of the hybrid UAV network with scale-free topology is studied. The contributions of this paper are as follows. * The 3D model is applied in this paper, which is more realistic than the two-dimensional (2D) model. The 3D model has two main differences compared with the 2D model. Firstly, the relative relationship between the UAVs and the base stations (BSs) is more practical. Considering the flight capability and spatial distribution characteristics of the UAVs, the BSs are located on the 2D plane, namely the ground, while the UAVs are located in a cubic 3D space, rather than simply assuming that the UAVs and the BSs are deployed on the same plane. Secondly, the difference in dimension leads to differences in the exponents and in the piecewise segmentation of the theoretical results between the two models. * The three dimensions of the scale-free characteristics are considered in the hybrid network, i.e., the probability of source nodes selecting contact group members follows a power-law distribution with respect to distance, the probability of source nodes communicating with contact group members follows a power-law distribution with respect to distance, and the number of contact group members follows a power-law distribution. Compared with the existing research on the throughput of hybrid scale-free networks, such as <cit.>, the scale-free characteristic of the number of members in the contact group, i.e., the power-law exponent γ, is taken into consideration in this paper. This paper is organized as follows.
In Section 2, the network model of hybrid UAV network with scale-free topology is introduced. In Section 3, the throughput of hybrid UAV Network with scale-free topology is derived. The numerical results of the analytical results are shown in Section 4. Finally, in Section 5, we summarize this paper. § NETWORK MODEL The hybrid network is the combination of ad hoc network and cellular network. The cellular network serves as backbone network. As illustrated in Fig. <ref>, in hybrid UAV network, the BSs are distributed on ground, and the UAVs are uniformly distributed in the unit cube[The square-area assumption and the division method of a 2D plane is described in <cit.>, which is equal to a Voronoi tessellation satisfying Remark 5.6 in <cit.>. The division method guarantees that there is at least one node in each small square when the number of nodes n tends to infinity. Similarly, the division method of 3D unit cube in this paper can be analogized from the result mentioned above, which is also applied in <cit.>.]. When the distance between source and destination is small, the information flow goes through ad hoc mode. However, when the distance between source and destination is large, the information flow goes through cellular mode. §.§ Communication model §.§.§ Interference model In the UAV network, n UAVs are uniformly distributed in the unit cube. The unit cube is divided into small cubes with side length Θ( ( log n/n)^1 3)[In this paper, f( n ) = O(g(n)) means that lim_n →∞f(n)/g(n) < ∞; f( n ) = Ω (g(n)) means that g( n ) = O(f(n)); f( n ) = Θ (g(n)) means that f( n ) = O(g(n)) and g( n ) = O(f(n)), which is also denoted by f(n) ≡ g(n).]. According to Fig. <ref>, each small cube contains at least one node with high probability (w.h.p.) if the transmission range r(n) between the two nodes is as follows <cit.>. r( n ) = Θ( ( log n/n)^1 3). We apply the protocol model <cit.> for interference management. Assuming that the 3D Cartesian coordinates of the nodes i, j, k are X_i, X_j, X_k respectively, two nodes can communicate successfully when | X_i - X_j| < r( n ), and the other nodes that transmit on the same frequency band satisfy the condition | X_k - X_j| > ( 1 + Δ)r( n ), where Δ > 0 is the guard zone factor. §.§.§ Multiple access control In order to avoid multiple access interference, time division multiple access (TDMA) is adopted. Suppose that the side length of each small cube is c_1r( n ), where c_1 is a constant smaller than 1 to ensure that all nodes in the neighboring cubes are within the transmission range. According to the interference model in Section <ref>, only the nodes within the intervals of M cubes are allowed to communicate simultaneously, where M ≥2 + Δ/c_1. Then M^3 cubes become a cluster, and the cubes in the entire cluster are traversed in M^3 time slots in a round-robin scheduling method. The TDMA scheme in this paper is denoted by M^3-TDMA scheme. As shown in Fig. <ref>, the green cubes are located in different clusters, and the nodes in these cubes can transmit data at the same time. Note that the analytical results under such protocol are also applicable to other multiple access control (MAC) protocol. Some studies have proved that MAC protocol will not affect the throughput scaling law. 
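The transmission range, the protocol-model constraint, and the M^3-TDMA schedule described above can be summarized with the following illustrative Python sketch; the constants hidden inside the Θ(·) notation and the function names are assumptions made only for illustration.

```python
import numpy as np

def transmission_range(n):
    """r(n) = Theta((log n / n)^(1/3)); the multiplicative constant is illustrative."""
    return (np.log(n) / n) ** (1.0 / 3.0)

def cube_is_active(cube_index, M, slot):
    """M^3-TDMA sketch: a cube with integer index (i, j, k) is active in the slot whose
    offset equals (i mod M, j mod M, k mod M); cubes sharing an offset are separated by
    multiples of M cubes and may transmit simultaneously."""
    i, j, k = cube_index
    return (i % M) * M * M + (j % M) * M + (k % M) == slot

def protocol_model_ok(x_tx, x_rx, interferers, r_n, delta):
    """Protocol model: the link succeeds if |X_i - X_j| < r(n) and every concurrent
    transmitter k satisfies |X_k - X_j| > (1 + delta) r(n)."""
    if np.linalg.norm(x_tx - x_rx) >= r_n:
        return False
    return all(np.linalg.norm(x_k - x_rx) > (1 + delta) * r_n for x_k in interferers)
```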
For example, <cit.> studied the throughput bound scaling law of ad hoc network under Carrier Sense Multiple Access with Collision Avoid (CSMA/CA) protocol, which proved that the ad hoc network with CSMA/CA protocol has the same throughput scaling law as the ad hoc network with TDMA protocol. §.§.§ Data flows As illustrated in Fig. <ref>, BSs are distributed in a unit square on ground. The unit square is divided into m =Θ( ( log n/n)^-2 3) cells. And there is a BS at the center of each cell. The volume of each cell is 1/m. A UAV is associated with the nearest BS. The BSs are connected via optical fiber with high throughput. Hence, there are no throughput limitations among the BSs. In Fig. <ref>, there are two kinds of information flows, namely, ad hoc flow adopting multi-hop transmission and cellular flow adopting BSs to transmit data. The total bandwidth W bits per second (bps) is divided into two parts, with W_a bps allocated to ad hoc flows and W_c bps allocated to cellular flows. Thus, we have W = W_a + W_c. §.§.§ Routing scheme The L-routing scheme <cit.> is applied in this paper. If the number of hops from source to destination is smaller than L, the ad hoc transmission mode is adopted. Otherwise, the information is transmitted via cellular mode. For the ad hoc information flow denoted by blue line in Fig. <ref>, the straight line routing is adopted, the information is transmitted from source to destination through the cubes passed through by the line connecting source and destination. For cellular flow, the source transmits data to the nearby BS. Then, the data is transmitted to the BS associated to destination and finally forwarded to the destination. §.§ Scale-free network model The network model of scale-free network consists of the distance based contact group model, the communication model of nodes and the distribution of the number of members in contact group. The contact group of node S is a collection of destination nodes that communicate with node S over a period of time. The distance based contact group construction describes the probability model of source node with specific contact group. The communication model describes the communication probability of nodes in contact groups. The number of members in contact group describes the probability model of the number of contact group members. §.§.§ Distance based contact group construction Source node S selects any other nodes as the member of its contact group G with a power-law distribution probability <cit.>. With d_i denoting the distance between S and node o_i, the probability that o_i is selected as a member of contact group follows power-law distribution as follows. P( D = o_i) = d_i^- α, where α is a factor representing the concentration of the network, which is named as concentration factor in this paper. When α is large, the selection probability of contact group members attenuates greatly with distance, and members in the contact group tend to be located near the source node. The selection of the member of contact group is an independent process. Thus, the probability that G consists of nodes o_g_1,o_g_2,...,o_g_q is <cit.> ( G = {o_g_1,...,o_g_q}) = d_g_1^- α...d_g_1^- α/∑_1 ≤i_1 < ... < i_q≤ nd_i_1^- α...d_i_q^- α. The denominator is an elementary symmetric polynomial and can be denoted as <cit.> σ _q( d_n) = ∑_1 ≤i_1 < ... < i_q≤ nd_i_1^- α...d_i_q^- α, where d_n = ( d_1^- α,...,d_n^- α) is an n-dimensional vector. 
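The elementary symmetric polynomial σ_q(d_n) can be evaluated with the standard recurrence, which also yields the probability that a particular node belongs to the contact group. The sketch below is illustrative only (suitable for small n because of the numerical range of d^{-α}), and the function names are ours.

```python
import numpy as np

def elementary_symmetric(weights, q_max):
    """sigma_q of the weight vector (d_1^-alpha, ..., d_n^-alpha) for q = 0..q_max,
    via the DP recurrence e_q <- e_q + w * e_{q-1} applied for each weight in turn."""
    e = np.zeros(q_max + 1)
    e[0] = 1.0
    for w in weights:
        for j in range(q_max, 0, -1):    # update highest order first
            e[j] += w * e[j - 1]
    return e

def membership_probability(dists, alpha, q, k):
    """Pr(o_k in G) = d_k^-alpha * sigma_{q-1}(d_n^k) / sigma_q(d_n);
    `dists` holds the distances from the source to the other nodes."""
    w = dists ** (-float(alpha))
    sigma_full = elementary_symmetric(w, q)[q]
    sigma_wo_k = elementary_symmetric(np.delete(w, k), q - 1)[q - 1]
    return w[k] * sigma_wo_k / sigma_full
```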
Calculating the synthesis of all combinations, the probability of an arbitrary particular node o_k being a member of G is denoted by <cit.> ( o_k∈ G) = d_k^- ασ _q - 1( d_n^k)/σ _q( d_n), where d_n^k is the (n-1)-dimensional vector except of the k-th element d_k^- α. §.§.§ Communication model of nodes With the contact group established, the probability of source node S choosing the destination node inside the contact group G to communicate also follows power-law distribution. The probability of node o_i selected to be the destination is d_i^- β, where the factor β reveals the communication activity level of the contact group, which is named as communication activity factor in this paper. Thus, the probability that o_k is the destination node D in G is ( D = o_k| o_k∈ G.) = d_k^- β/∑_i = 1^q d_g_i^- β = d_k^- β/σ _1( d_q), where d_q = ( d_g_1^- β,...,d_g_q^- β). When β is large, the probability of communication destination selection decreases greatly with distance, and the source node tends to communicate with the node at a close location. §.§.§ Number of members in contact group The number of members in the contact group, which is the degree of a node, is denoted by d. Then, the probability density function (PDF) of d follows a power-law distribution as follows. P( d = q) ∝q^- γ, where q is a positive integer, γ is the power-law exponent, which is named as clustering factor in this paper. A ∝ B means that A is proportional to B. When γ is large, the number of contact group members is small. Assume that each source node S has a contact group G and the number of G's members is a random variable Q. The probability that G has q( q = 1,2,...,n) members is <cit.> ( Q = q) = q^- γ/∑_q = 1^n - 1q^- γ = q^- γ/σ _1( q), where σ _1( q) is an elementary symmetric polynomial, with q = {1^- γ, 2^- γ, ..., (n-1)^-γ}. § THROUGHPUT OF HYBRID UAV NETWORK The per-node throughput of ad hoc mode is denoted by λ _a^n bps. The per-node throughput of cellular mode is denoted by λ _c^n bps. Assuming that the number of ad hoc flows and cellular flows is N_a and N_c respectively, the network throughput is [The per-node throughput is denoted by superscript `n', while the throughput of the network has no superscript. The subscript `a' denotes ad hoc mode and the subscript `c' denotes cellular mode.] λ = λ _a + λ _c = N_aλ _a^n + N_cλ _c^n. §.§ Network throughput of cellular mode The network throughput of cellular mode is related with the number of BSs m and the bandwidth W_c. We have the following theorem. The network throughput of cellular mode λ _c satisfies the following equality. λ _c = Θ( mW_c). According to the bandwidth allocation strategy in (<ref>), the network throughput of each cell has upper bound λ _c^m = O( W_c). Assuming that there are x_cells cells sharing the same bandwidth W_c, the lower bound of the throughput of each cell is λ _c^m = Ω( W_c/ . -x_cells), where x_cells is a constant that is independent with n and m <cit.>. Hence, the network throughput of each cell is λ _c^m = Θ( W_c). Because there are totally m cells, the network throughput contributed by cellular mode is λ _c = mλ _c^m = Θ( mW_c). §.§ Network throughput of ad hoc mode The network throughput of ad hoc mode depends on the average number of hops of ad hoc flows passing through each small cube E[F], where F is the number of ad hoc flows contained in each small cube. The per-node throughput of ad hoc mode satisfies the following equation. λ_a^n≡Θ(W_a/E[F] M^3)=Θ(W_a/E[F]). 
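For intuition, the three power laws above (member selection ∝ d^{-α}, destination selection ∝ d^{-β}, and group size ∝ q^{-γ}) can also be sampled directly. The following Monte-Carlo sketch is illustrative only and is not part of the analytical derivation; the function name and the assumption of uniformly distributed node positions in the unit cube are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_contact_traffic(positions, src, alpha, beta, gamma):
    """Sample one (contact group, destination) pair for source `src` following the
    three power laws; positions is an (n, 3) array of node coordinates."""
    n = len(positions)
    others = np.delete(np.arange(n), src)
    d = np.linalg.norm(positions[others] - positions[src], axis=1)

    # group size: Pr(Q = q) proportional to q^-gamma, q = 1..n-1
    qs = np.arange(1, n)
    pq = qs.astype(float) ** -gamma
    q = int(rng.choice(qs, p=pq / pq.sum()))

    # members: selected with probability proportional to d^-alpha (without replacement)
    p_sel = d ** -float(alpha)
    group = rng.choice(others, size=q, replace=False, p=p_sel / p_sel.sum())

    # destination: chosen inside the group with probability proportional to d^-beta
    d_group = np.linalg.norm(positions[group] - positions[src], axis=1)
    p_dst = d_group ** -float(beta)
    dst = int(rng.choice(group, p=p_dst / p_dst.sum()))
    return group, dst
```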
As mentioned above, the side length of the small cube is c_2 r(n)=Θ((log n / n)^1/3). Supposing that the number of hops from source to destination is X, where X is a random variable, the average number of hops of one ad hoc flow is E[X]. Therefore, the average number of hops of all the ad hoc flows is N_aE[ X ]. Because of the random distribution of nodes, each cube contains E[F]=N_a E[X] V transmission flows, where V = ( c_2r( n ))^3 is the volume of the small cube. According to the Multiple Access Protocol in Section 2, the average bandwidth of each slot is W_a/M^3. Therefore, the per-node throughput λ _a^n of each node is λ _a^n ≡Θ( W_aE[ F ]M^3) = Θ( W_aE[ F ]). Therefore, the network throughput contributed by ad hoc mode is λ _a = N_aλ _a^n. §.§ Node and flow classification The structure of cubes with x hops away from a source node is a octahedron, as the shaded cubes illustrated in Fig. <ref>, which consists of 4x^2 + 2 cubes. ( X = x) represents the probability that the distance from the destination node D to the source node S is x hops. According to (6) in <cit.>, we have ( X = x) = ∑_l = 1^4x^2 + 2∑_o_k∈c_l( D = o_k), where c_l is the set of the nodes in the cube that is x hops away from the source node and o_k is the destination node within it. Because the nodes are randomly distributed, the probability that any node is located in the cube is r^3( n ). Therefore, the number of nodes contained in c_l is nr^3( n ) on average. Thus we have ( X = x) = ∑_l = 1^4x^2 + 2nr^3( n )( D = o_k). Because of the same power-law distribution as <cit.>, we use the same symbol as <cit.>, where α represents the concentration of the network, β reveals the communication activity level, and γ is the clustering factor. According to (<ref>) (<ref>) in this paper and (7) in <cit.>, we have the following probability ( X = x) = ∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = 1^n - 1q^- γd_k^- α - βσ _q - 1( d_n^k)σ _1( q)σ _1( d_q)σ _q( d_n). When the destination node is in the regions that are within L hops to the source node S, the data flow is forwarded with ad hoc mode. The probability that a flow is an ad hoc flow is denoted by Pr^a, then we have Pr^a = ∑_x = 1^L ( X = x) = ∑_x = 1^L ∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = 1^n - 1q^- γd_k^- α - βσ _q - 1( d_n^k)σ _1( q)σ _1( d_q)σ _q( d_n). When the destination node is in the regions that are more than L hops to the source node, the data flow is forwarded with cellular mode. According to <cit.>, the maximum number of hops of each flow is Θ( r^- 1( n )). The probability that a flow is a cellular flow is denoted by Pr^c. Then we have Pr ^c = ∑_x = L + 1^r^- 1( n )∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = 1^n - 1q^- γd_k^- α - βσ _q - 1( d_n^k)σ _1( q)σ _1( d_q)σ _q( d_n). The feature of scale-free topology shows that a few nodes have a large number of associated nodes. Thus, a threshold q_0 of node degree is chosen that classifies all the nodes into two classes. The nodes whose degree q > q_0 are leader nodes, and the nodes whose degree q ≤ q_0 are normal nodes. We define Pr_1^a as the probability for leader nodes transmitting with ad hoc mode and Pr_1^c for leader nodes transmitting with cellular mode, then Pr_1^a = ∑_x = 1^L ∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = q_0 + 1^n - 1q^- γd_k^- α - βσ _q - 1( d_n^k)σ _1( q)σ _1( d_q)σ _q( d_n), Pr_1^c = ∑_x = L + 1^r^- 1( n )∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = q_0 + 1^n - 1q^- γd_k^- α - βσ _q - 1( d_n^k)σ _1( q)σ _1( d_q)σ _q( d_n). 
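In a simulation of the model above, the hop count between two nodes can be taken as the Manhattan distance between the indices of their cubes, which is consistent with the 4x^2 + 2 octahedron shell of cubes at x hops; the sketch below is illustrative (the constant inside Θ(·) and the function names are assumptions), and the L-routing rule then classifies each flow as ad hoc or cellular.

```python
import numpy as np

def hop_count(pos_src, pos_dst, n):
    """Hops between the cubes containing two nodes, measured as the Manhattan
    distance of their cube indices; the cube side follows r(n)."""
    side = (np.log(n) / n) ** (1.0 / 3.0)      # Theta((log n / n)^(1/3)), constant illustrative
    c_src = np.floor(np.asarray(pos_src) / side).astype(int)
    c_dst = np.floor(np.asarray(pos_dst) / side).astype(int)
    return int(np.abs(c_src - c_dst).sum())

def transmission_mode(pos_src, pos_dst, n, L):
    """L-routing rule: ad hoc if the flow needs at most L hops, cellular otherwise."""
    return "ad hoc" if hop_count(pos_src, pos_dst, n) <= L else "cellular"
```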
Similarly, we define Pr_2^a as the probability for normal nodes transmitting with ad hoc mode and Pr_2^c as the probability for normal nodes transmitting with cellular mode, then Pr_2^a = ∑_x = 1^L ∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = 1^q_0q^- γd_k^- α - βσ _q - 1( d_n^k)σ _1( q)σ _1( d_q)σ _q( d_n), Pr_2^c = ∑_x = L + 1^r^- 1( n )∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = 1^q_0q^- γd_k^- α - βσ _q - 1( d_n^k)σ _1( q)σ _1( d_q)σ _q( d_n). §.§ Analysis of average number of hops As for Pr_1^a and Pr_1^c, we have the following theorem. Pr_1^a and Pr_1^c satisfy the following equations. Pr_1^a≡{[ Θ(r^3(n) L^3-β) 0 ≤β<3; Θ(r^3(n) ln L) β=3; Θ(r^3(n)) β>3 ]. Pr_1^c≡{[ Θ(r^β(n)-r^3(n) L^3-β) 0 ≤β<3; Θ(r^3(n) ln(1/Lr(n))) β=3; Θ(r^3(n)) β>3 ]. Please refer to Appendix A. The orders of _2^a and _2^c are listed in Table <ref> and Table <ref>, respectively. Please refer to Appendix B. Using the results of _1^a in (<ref>), _1^c in (<ref>), _2^a in Table <ref>, and _2^c in Table <ref>, the number of ad hoc flows N_a and the number of cellular flows N_c can be derived. When γ > 1, the results of N_a and N_c are shown in (<ref>) and (<ref>). mytempeqncnt Therefore, whether the flows are dominated by ad hoc flows or cellular flows is influenced by L in the routing strategy. It is observed that when α > 3, N_a increases linearly with n. Because when α increases, contact group members of source node gather around the source node, so that the number of hops needed for communication tends to be smaller than L, thus the number of ad hoc flows increases with n. Besides, on account of L = O( r^- 1( n )) and L = Ω( 1 ), we have N_a + N_c≡Θ( n ). The average number of hops of ad hoc flows passing through each cube, namely E[F], is ( n^a) ×( 1 ^aE^'[ X ]) ×r^3( n ), where ^a is the probability for nodes transmitting with ad hoc mode. As mentioned in Section <ref>, E[ F ] = N_aE[ X ]V, where V ≡r^3( n ). The total number of hops of all ad hoc flows is denoted by X_total. The number of hops of flow i is denoted by X_i. Then, we have E[ X_total] = E[ ∑_i = 1^N_aX_i] = ∑_i = 1^N_aE[ X_i]. Suppose that X_i( i ∈{1,2,...N_a}) are independent and identically distributed (i.i.d.), and X has the same distribution with X_i. Then, we have E[ X_total] = N_aE[ X ]. On the condition of unbiased estimation, E[ X_total] = N_aE[ X ]. Therefore, for each cube, E[ F ] = N_aE[ X ]V, where E[ X ] is as (<ref>). E[ X ] = ∑_x = 1^L x( X = x|The flow is ad hoc flow) = ∑_x = 1^L x( X = x)^a = 1 ^a∑_x = 1^L x( X = x). Suppose that E^'[X]=∑_x=1^L x Pr(X=x), we have E[X]=E^'[X] / Pr^a. We divide E^'[X] into two parts according to the threshold q_0, i.e. E^'[X]=E_1^'[X]+E_2^'[X], where E_1^'[X] represents the average number of hops of the flows starting from leader nodes, and E_2^'[X] represents the average number of hops of the flows starting from normal nodes. Substituting (<ref>) and (<ref>) into E_1^'[X] and E_2^'[X], E_1^'[X] is as follows. E_1^'[X]={[ Θ(r^3(n) L^4-β) 0 ≤β<4; Θ(r^3(n) ln L) β=4; Θ(r^3(n)) β>4 ]. When γ > 1, E_2^'[X] is E_2^'[X]={[ Θ(r^3-α(n) L^4-α-β) 0 ≤α<3,0 ≤α+β<4; Θ(r^3-α(n) ln L) 0 ≤α<3, α+β=4; Θ(r^3-α(n)) 0 ≤α<3, α+β>4; Θ(ln ^-1(r^-1(n)) L^1-β) α=3,0 ≤α+β<4; Θ(log _r^-1(n) L) α=3, α+β=4; Θ(ln ^-1(r^-1(n))) α=3, α+β>4; Θ(L^4-α-β) α>3,0 ≤α+β<4; Θ(ln L) α>3, α+β=4; Θ(1) α>3, α+β>4 ]. When 0 ≤γ≤ 1, E_2^'[X] is n^γ-1 times greater than that in (<ref>). (<ref>) and (<ref>) show that if the source nodes are leader nodes, the average number of hops of ad hoc flows is only related to β. 
If the source nodes are normal nodes, α and β will jointly influence the average number of hops of the ad hoc flows and cellular flows of the source nodes. This is due to the fact that the leader nodes connect more members, which counteracts the influence of the concentration factor α when n tends to infinity. Specifically, the distance between source node and destination node will decrease when α or β increases. For leader nodes, when 0≤β≤ 4, the average number of hops of ad hoc flows increases when L increases, and decreases when β increases. When β is large, the destination node tends to be closer to the source node. Therefore, when β>4, the trend of E_1^'[X] has nothing to do with L. For normal nodes, E_2^'[X] are influenced by α and β simultaneously. When 0 ≤α <3 and 0≤α+β <4, the average number of hops of the ad hoc flows increases when L increases, and decreases when α or β increases. When α >3 or α + β>4, L has no relationship with the number of average number of hops. Since γ >2 in the actual network <cit.><cit.>, we can derive the result of E^'[X] as (<ref>). Finally, we have the following result. E[ F ] = N_aE[ X ]V = ( n^a) ×( 1 ^aE^'[ X ]) ×r^3( n ) = log( n )E^'[ X ]. §.§ Network throughput According to (<ref>), the per-node throughput of ad hoc mode λ_a^n is λ_a^n≡Θ(W_a/E[F])=Θ(W_a/log (n) E^'[X]). Note that if E[F]=O(1), λ_a^n equals to Θ(W_a), because the average throughput of the ad hoc flows is smaller than W_a. The relationship between E^'[X] and L under different ranges of α and β is analyzed as follows. Note that since γ > 2 in the actual network <cit.><cit.>, only the results of γ >1 is considered in terms of the number of the ad hoc flows N_a. In order to better understand the piecewise of throughput as follows, recall that the probability that a node is selected as a member of contact group is proportional to d_i^- α, the probability that a contact group member will be communicated in a certain time slot is proportional to d_i^- β, and the number of members in the contact group is proportional to q^- γ. Therefore, the probability that a node will be communicated in a certain time slot is proportional to d_i^- (α + β ), i.e., (α + β ) reveals the communication activity level of the network. §.§.§ 0 ≤α <3, 0≤β<3 and 0≤α+β<3 When L=Ω(r^-1(n)), E^'[X] is dominated by E_1^'[X]. When L=O(r^-1(n)), E^'[X] is dominated by E_2^'[X]. Considering that in the unit cube of the communication model in Section <ref>, there is always L=O(r^-1(n)). Therefore, E^'[X] is always dominated by E_2^'[X] in this case. So the per-node throughput of ad hoc mode is as follows. λ_a^n≡{[ Θ(W_a/log (n) r^3-α(n) L^4-α-β); L=Ω((log ^-1(n) r^α-3(n))^1/4-α-β); Θ(W_a); L=O((log ^-1(n) r^α-3(n))^1/4-α-β) ]. ∙ When L=Ω((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n =Θ(n r^3(n) L^3-β+n r^3-α(n) L^3-α-β) ·Θ(W_a/log (n) r^3-α(n) L^4-α-β) =Θ(n^1-α/3 L^α-1 W_a/log ^1-α/3(n)+n W_a/log (n) L). ∙ When L=O((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is λ_a ≡ N_aλ_a^n =Θ(log (n) L^3-β W_a+n^α/3log ^1-α/3(n) L^3-α-β W_a). According to (<ref>) and (<ref>), when L=Θ((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is dominant, which is λ_a =Θ(log (n) L^3-β W_a+n^α/3log ^1-α/3(n) L^3-α-β W_a). §.§.§ 0 ≤α <3, 0 ≤β <3 and 3 < α + β <4 In this case, E^'[X] is dominated by E_2^'[X], where L=O(r^-1(n)). Therefore, the per-node throughput of ad hoc mode is (<ref>). 
∙ When L=Ω((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n =Θ(n r^3(n) L^3-β+n r^3-α(n)) ·Θ(W_a/log (n) r^3-α(n) L^4-α-β) =Θ(n^1-α/3 L^α-1 W_a/log ^1-α/3(n)+n W_a/log (n) L^4-α-β). ∙ When L=O((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n=Θ(log (n) L^3-β W_a+n^α/3log ^1-α/3(n) W_a). When L=Θ((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is dominant, which is λ_a =Θ(log (n) L^3-β W_a+n^α/3log ^1-α/3(n) W_a). §.§.§ 0 ≤α <3, 0 ≤β <3 and α + β >4 In this case, E^'[X] is dominated by E_1^'[X] when L=Ω(r^α / β-4(n)), and E^'[X] is dominated by E_2^'[X] when L=O(r^α / β-4(n)). ∙ When normal nodes are dominant, E[F]=log (n) E[X]=O(1). Hence, λ_a^n≡Θ(W_a). Note that when α≡Ω(3+log _r(n)log (n)), we have r^α / (β-4)(n) ≡Ω((log ^-1(n) r^-3(n))^1 / 4-β). However, when n →∞, we have α≡O(3+log _r(n)log (n)). In this case, r^α / β-4(n) ≡O((log ^-1(n) r^-3(n))^1 / 4-β). Thus, the per-node throughput of ad hoc mode is λ_a^n≡{[ Θ(W_a/log (n) r^3(n) L^4-β) L=Ω((log ^-1(n) r^-3(n))^1/4-β); Θ(W_a) L=O((log ^-1(n) r^-3(n))^1/4-β) ]. ∙ When L=Ω((log ^-1(n) r^-3(n))^1 / 4-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n =Θ(n r^3(n) L^3-β+n r^3-α(n)) ·Θ(W_a/log (n) r^3(n) L^4-β) =Θ(n W_a/log (n) L+n^1+α/3 W_a/log√(3)^1+α/3(n) L^4-β). ∙ When L=O((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n=Θ(log (n) L^3-β W_a+n^α/3log ^1-α/3(n) W_a). According to (<ref>) and (<ref>), when L=Θ((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is dominant, which is λ_a =Θ(log (n) L^3-β W_a+n^α/3log ^1-α/3(n) W_a). §.§.§ 0≤α <3, β >3 and 3<α +β <4 In this case, E^'[X] is equal to that of subsection 1) and 2), in which L=O(r^-1(n)), and E^'[X] is dominated by E_2^'[X]. ∙ When L=Ω((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n =Θ(n r^3-α(n)) ·Θ(W_a/log (n) r^3-α(n) L^4-α-β) =Θ(n W_a/log (n) L^4-α-β). ∙ When L=Ω((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput is λ_a≡ N_aλ_a^n=Θ(n^α/3log ^1-α/3(n) W_a). Therefore, when L=Θ((log ^-1(n) r^α-3(n))^1 / 4-α-β), the network throughput of ad hoc mode is dominant, which is λ_a=Θ(n^α/3log ^1-α/3(n) W_a), which reveals that when β is large enough, the destination nodes will gather around the source nodes, which make the network throughput independent of L, and the number of ad hoc transmission depends on the value of α in a certain range. §.§.§ α >3, 0 ≤β <4 and 3 < α +β<4 In this case, E^'[X] is dominated by E_1^'[X] when L=Ω(r^-3 / α(n)), and E^'[X] is dominated by E_2^'[X] when L=O(r^-3 / α(n)). When L=O(r^-3 / α(n)), there is always r^-3 / α(n)=Ω((log ^-1(n) r^-3(n))^1 / 4-β). Therefore, the per-node throughput of ad hoc mode is as (<ref>). ∙ When L=Ω((log ^-1(n) r^-3(n))^1 / (4-β)), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n =Θ(n) ·Θ(W_a/log (n) r^3(n) L^4-β) =Θ(n^2 W_a/log ^2(n) L^4-β). ∙ When L=O((log ^-1(n) r^-3(n))^1 / (4-β)), and L=Ω(log ^-1 / (4-α-β)(n)), λ_a≡ N_aλ_a^n=Θ(n W_a/log (n) L^4-α-β). ∙ When L=O(log ^-1 / (4-α-β)(n)), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n=Θ(n W_a). Therefore, when L=Θ(log ^-1 / 4-α-β(n)), the network throughput of ad hoc mode is dominant, which is λ_a=Θ(n W_a). The result also shows that in this range of parameters, when L breaks the boundary of Θ(log ^-1 / 4-α-β(n)), the network throughput will have nothing to do with L, and all the nodes will be in ad hoc mode. 
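Before turning to the remaining parameter regimes, note that the piecewise Θ(·) orders quoted above all trace back to partial sums of the form ∑_x = 1^L(x^2 - β + x^- β) (cf. the reduction of Pr_1^a to ∫_1^L(v^2 - β + v^- β)dv in Appendix A), whose leading behaviour switches from L^3 - β to ln L to a constant at β = 3. The minimal Python sketch below is purely illustrative and not part of the derivation; the normalisation constants are arbitrary.

```python
import math

def partial_sum(L, beta):
    """S(L, beta) = sum_{x=1}^{L} (x**(2 - beta) + x**(-beta)),
    the sum appearing in the reduction of Pr_1^a (Appendix A)."""
    return sum(x ** (2.0 - beta) + x ** (-beta) for x in range(1, L + 1))

def predicted_order(L, beta):
    """Leading-order behaviour quoted in the text:
    L**(3 - beta) for beta < 3, ln L for beta = 3, constant for beta > 3."""
    if beta < 3:
        return L ** (3.0 - beta)
    if beta == 3:
        return math.log(L)
    return 1.0

if __name__ == "__main__":
    for beta in (1.0, 2.0, 3.0, 4.0):
        # The ratio S(L, beta) / predicted_order(L, beta) should settle towards a
        # constant as L grows, which is what the Theta(.) statements assert.
        ratios = [partial_sum(L, beta) / predicted_order(L, beta)
                  for L in (10, 100, 1000, 10000)]
        print(f"beta = {beta}: ratios = {[round(r, 3) for r in ratios]}")
```

For each β the printed ratios approach a constant as L grows, which is exactly the content of the corresponding Θ(·) statements.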
§.§.§ α >3, 0 ≤β <4 and α + β >4 In this case, E^'[X] is dominated by E_1^'[X]. The per-node throughput of ad hoc mode is λ_a^n≡{[ Θ(W_a/log (n) r^3(n) L^4-β) L=Ω((log ^-1(n) r^-3(n))^1/4-β); Θ(W_a) L=O((log ^-1(n) r^-3(n))^1/4-β) ]. ∙ When L=Ω((log ^-1(n) r^-3(n))^1 / 4-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n =Θ(n) ·Θ(W_a/log (n) r^3(n) L^4-β) =Θ(n^2 W_a/log ^2(n) L^4-β). ∙ When L=O((log ^-1(n) r^-3(n))^1 / 4-β), the network throughput of ad hoc mode is λ_a≡ N_aλ_a^n=Θ(n W_a). Therefore, when L=Θ((log ^-1(n) r^-3(n))^1 / 4-β), the network throughput of ad hoc mode is dominant, which is λ_a=Θ(n W_a). The result shows that the increase of α influences the distribution of the destination nodes. When L breaks the boundary of Θ((log ^-1(n) r^-3(n))^1 / 4-β), the network throughput will no longer be related to L, and all the nodes will be in ad hoc mode. §.§.§ α >3 and β>4 In this case, E^'[X]=Θ(r^3(n)), and E[F]=log (n) E^'[X]=O(1) when n →∞. Therefore, the per-node throughput of ad hoc mode is λ_a^n≡Θ(W_a). When α and β are large, the destination nodes are close to the source nodes. As a result, the number of hops are always smaller than L, and all of the nodes transmit in ad hoc mode. In this case, the network throughput has nothing to do with L. The network throughput of ad hoc mode is λ_a≡ N_aλ_a^n=Θ(n) ·Θ(W_a)=Θ(n W_a). In conclusion, the aggregation of destination nodes (i.e. the value of α + β) is the main factor affecting the network throughput. When the distribution of destination nodes is sparse, the network throughput has complex relationships with hop threshold L, the number of nodes n, and the bandwidth W_a, as shown in (<ref>)(<ref>)(<ref>). When the destination nodes gather around the source nodes, the flows in the network are generally ad hoc flows. The hop threshold L has limited impact on the network throughput, and the throughput is only positively related to the number of nodes n and bandwidth W_a, as shown in (<ref>)(<ref>)(<ref>)(<ref>). Furthermore, as shown in (<ref>)(<ref>)(<ref>), when the destination nodes have strong aggregation to the source nodes, the network throughput will be in direct proportion to the product of n and W_a. § NUMERICAL RESULTS AND ANALYSIS In this section, the theoretical results in Section III are verified by numerical results, and the relationship between parameters and network throughput is analyzed through the numerical results. Besides, the network throughput of 100 to 10000 UAVs is simulated by MATLAB to verify the rationality of the theoretical results. According to the theoretical results derived above, Fig. <ref> shows the relation between the threshold L and the throughput with different values of α and β. We consider four typical conditions, where n = 100, W_a = 1, and the optimal L are identified. In Fig. <ref>(a), where 0≤α <3 and 0 ≤α+β <4, the optimal L is relatively small. Since the values of α and β are small, there are more long-distance flows. Besides, because the leader nodes have large contact groups, these long-distance flows are more likely to be sent by leader nodes and their hops are more likely to be larger than L, which means that most of the long-distance flows are cellular flows. In this case, the throughput of the ad hoc network is dominated by flows of normal nodes. In Fig. <ref>(b), where 0 ≤α<3, 0 ≤β <3 and α + β >4, the optimal L is relatively large. At this time, because α and β are small, there are still many long-distance flows from the leader nodes. 
However, since L increases, the number of long-distance flows with ad hoc mode increases, and the throughput tends to be dominated by leader nodes. In Fig. <ref>(c), where α>3, 0≤β<4 and 3<α+β<4, there are two optimal L, which shows that the value of L determines the type of nodes that dominate the throughput. When L=Ω((log ^-1(n) r^-3(n))^1 / (4-β)), the throughput is dominated by leader nodes. When L=O((log ^-1(n) r^-3(n))^1 / (4-β)) and L=Ω(log ^-1 / (4-α-β)(n)), the throughput is dominated by normal nodes. Because α is large, the aggregation of contact groups of source nodes is high. However, β and α + β are still small. Thus, it is still possible for source nodes to communicate with contact group nodes with a long distance. Due to the large number of contact group members of leader nodes, it is more likely that such long-distance flows will be sent by leader nodes. It can be explained that when L is large, more long-distance flows sent by leader nodes are transmitted by ad hoc mode, which dominates the network throughput. When L is small, leader nodes prefer cellular mode, so that the throughput of ad hoc flows is dominated by normal nodes. In Fig. <ref>(d), as α increases, the aggregation of the contact groups is further improved. It is more likely that the number of hops of long-distance flows is less than L, so that the throughput of ad hoc flows is dominated by leader nodes again. The theoretical results above show that there is an optimal value of L to maximize the average throughput of UAV network, which is of great significance to the design of hybrid UAV network. For example, when the number of UAVs and the capabilities of UAVs are determined, the parameters α, β and γ can be determined by analyzing the routing table and the topological relation of UAV network. With such parameters, we can determine the value of routing strategy L in hybrid UAV network to maximize the throughput of UAV network. In order to verify the theoretical results through simulation, firstly, the Bat Algorithm (BA) algorithm is applied to generate a scale-free network which has the same setting as the models in Section II. Taking 100 nodes as an example, the contact groups and communication relationships of each node are shown in Fig. <ref> and Fig. <ref>. Fig. <ref> is the contact group selection when n=100, α = 1, β = 0.5, and γ = 2. The UAVs are randomly distributed in the 3D space. The selection of the contact group members of the source nodes follows the power-law distribution with parameter α. In the simulation of Fig. <ref>, the threshold q_0 is set to be 17.33. The source node of Fig. <ref>(a) is a leader node with 25 social group members. The source node of Fig. <ref>(b) is a normal node with 5 social group members. Fig. <ref> illustrates the contact group selection related to parameter α, the communication selection in the contact group related to parameter β, and the final communication relationship. The three sub-figures in Fig. <ref> are all directed graphs. The evolution process is revealed from the first sub-figure to the last sub-figure. Then, according to the L-routing scheme, whether a transmission adopts ad hoc mode or cellular mode is determined. Taking the number of nodes n as the variable, the results of average hops and throughput under different parameters of α, β and γ are simulated, which is shown in Fig. <ref> and Fig. <ref>. Fig. <ref> illustrates the average number of hops of ad hoc flows under different parameters α and β. Fig. <ref> shows the throughput of the ad hoc flows. 
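Before listing the observations drawn from these figures, the selection-and-routing step just described can be illustrated with a minimal Monte Carlo sketch. It is only a simplified stand-in for the actual simulation: nodes are placed uniformly at random instead of using the Bat Algorithm topology, the leader/normal distinction via q_0 is omitted, the per-hop range r_n = (ln n/n)^1/3 and the sequential weighted sampling are assumptions introduced here, and the hop count is approximated by the straight-line distance divided by r_n.

```python
import math
import random

def simulate_flows(n=100, alpha=1.0, beta=0.5, gamma=2.0, L=4, q_max=25, trials=1000):
    """Illustrative Monte Carlo of the selection-and-routing step:
    contact-group size ~ q**(-gamma), member selection ~ d**(-alpha),
    destination selection ~ d**(-beta), and L-routing on the estimated hop count."""
    rng = random.Random(0)
    nodes = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
    r_n = (math.log(n) / n) ** (1.0 / 3.0)   # assumed per-hop transmission range

    def dist(a, b):
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3))) + 1e-9

    def weighted_choice(candidates, weights, k=1):
        # Sequential weighted sampling without replacement.
        chosen, cand, w = [], list(candidates), list(weights)
        for _ in range(min(k, len(cand))):
            total = sum(w)
            u, acc = rng.random() * total, 0.0
            for i, wi in enumerate(w):
                acc += wi
                if u <= acc:
                    chosen.append(cand.pop(i))
                    w.pop(i)
                    break
        return chosen

    adhoc = hops_sum = 0
    for _ in range(trials):
        src = rng.randrange(n)
        others = [i for i in range(n) if i != src]
        d = [dist(nodes[src], nodes[i]) for i in others]
        # Contact-group size drawn with weight q**(-gamma), members with weight d**(-alpha).
        q = weighted_choice(range(1, q_max + 1),
                            [q_ ** (-gamma) for q_ in range(1, q_max + 1)])[0]
        group = weighted_choice(others, [x ** (-alpha) for x in d], k=q)
        # Destination within the contact group drawn with weight d**(-beta).
        dest = weighted_choice(group,
                               [dist(nodes[src], nodes[g]) ** (-beta) for g in group])[0]
        hops = math.ceil(dist(nodes[src], nodes[dest]) / r_n)
        hops_sum += hops
        adhoc += hops <= L   # L-routing: ad hoc if within L hops, otherwise cellular.
    print(f"ad hoc fraction = {adhoc / trials:.2f}, mean hops = {hops_sum / trials:.2f}")

simulate_flows()
```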
L is selected in the optimal range to maximize the throughput of ad hoc flows, and the bandwidth of ad hoc mode is set to be W_a = 1. There are the following observations. 1) When the values of α and β, or the sum of them increase, the average number of hops of flows decreases correspondingly, and the throughput of UAV network increases. This is due to the fact that α and β affect the location distribution of the destination nodes from the source node. When α or β is large, or the sum of them exceeds a certain range, the destination nodes will be highly clustered around the source node, so that the average number of hops of ad hoc flows are reduced, and the network throughput increases accordingly. 2) Within a certain range of α and β, the size of L will affect the type of the dominant nodes. For example, when 0 ≤α<3, 0 ≤β <3 and α + β >4, the simulation results of the average number of hops and throughput are in good agreement with the theoretical results, which shows that the flows of normal nodes is dominant in the network. 3) The introduction of the cellular transmission mode improves the throughput of the scale-free UAV network, compared with the throughput of pure ad hoc network studied in <cit.>. Fig. <ref> shows that as n increases, the average number of hops of ad hoc flows in hybrid UAV network is smaller than that of pure ad hoc network. Correspondingly, Fig. <ref> shows that as n increases, the throughput of hybrid UAV network is higher than that of pure ad hoc network. This is due to the fact that for the flows with the number of hops larger than L, the nodes will directly connect to BSs and exploit the resources of cellular network for transmission. Therefore, the number of ad hoc flows is reduced, and the resources of ad hoc network are saved, so that the network throughput is improved. § CONCLUSION In this paper, aiming at improving the throughput of UAV network, the hybrid UAV network with scale-free topology is studied. Besides, the impact of various parameters on the network throughput is analyzed. The optimal hop threshold L for the selection of ad hoc or cellular transmission mode is derived, which is a function of the number of nodes and scale-free parameters. This paper will provide guidance for the architecture design and protocol design for the future UAV network. § APPENDIX A According to <cit.> and law of large numbers (LLN), we have d_k^-ασ_q-1(𝐝_𝐧^𝐤)/σ_q(𝐝_𝐧)=q/n. Therefore, (<ref>) and (<ref>) can be simplified as Pr_1^a=∑_x=1^L∑_l=1^4 x^2+2∑_o_k∈ c_l∑_q=q_0+1^n-1q^-γ+1 d_k^-β/n σ_1(𝐪) σ_1(𝐝_𝐪). Pr_1^c=∑_x=L+1^r^-1(n)∑_l=1^4 x^2+2∑_o_k∈ c_l∑_q=q_0+1^n-1q^-γ+1 d_k^-β/n σ_1(𝐪) σ_1(𝐝_𝐪). According to LLN, we have 1/qσ_1(𝐝_𝐪)=E[𝐝_𝐪]. Therefore, ∑_q=q_0^n-1q^-γ+1/σ_1(𝐝_𝐪)=1/E[𝐝_𝐪]∑_q=q_0^n-1 q^-γ. According to (<ref>), we have E[𝐝_𝐪] ≡ E[{d_g_1^-β, d_g_2^-β, …, d_g_q^-β}], where d_g_i (i=1,2,…, q) is the the distance between source and destination, which can be replaced by xr(n), so we have E[𝐝_𝐪] ≡∑_x=1^r^-1(n)Pr(X=x)(x r(n))^-β = r(n)^-β. Therefore, with (<ref>) and (<ref>), Pr_1^a can be simplified as Pr_1^a≡r^β(n)/n∑_x=1^L∑_l=1^4 x^2+2∑_o_k∈ c_l d_k^-β∑_q=q_0+1^n-1q^-γ/σ_1(𝐪). If γ > 1, ∑_q=q_0^n-1 q^-γ and σ_1(𝐪) are all partial sum of the Riemann Zeta function, which has the following relations. ∑_q=q_0^n-1 q^-γ≤σ_1(𝐪) ≤ζ(γ) ≡Θ(1). Therefore, when γ > 1, according to (<ref>), q^-γ/σ_1(𝐪) = 1. Pr_1^a can be derived as follows Pr_1^a≡r^β(n)/n∑_x=1^L∑_l=1^4 x^2+2∑_o_k∈ c_l d_k^-β. For 0 ≤γ≤ 1, it's obvious that ∑_q=q_0+1^n-1 q^-γ = Θ (σ_1(𝐪)) = Θ(n^1-γ /(1-γ)). 
Thus, Pr_1^a is still equivalent to (<ref>) when 0 ≤γ≤ 1, i.e., Pr_1^a has the same form when γ varies. In (<ref>), ∑_l=1^4 x^2+2∑_o_k∈ c_l (·) represents the number of nodes in the small cubes with x hops on average, which has the same order as N n(r(n))^3=(4 x^2+2) n(r(n))^3. Therefore, with (<ref>) and Riemann integral, we have Pr_1^a ≡r^β(n)/n∑_x=1^L∑_l=1^4 x^2+2∑_o_k∈ c_l d_k^-β≡(r(n))^3∑_x=1^L(x^2-β+x^-β) ≡(r(n))^3∫_1^L(v^2-β+v^-β) d v. Thus, the simplified form of Pr_1^a is as follows. Pr_1^a≡{[ Θ(r^3(n) L^3-β) 0 ≤β<3; Θ(r^3(n) ln L) β=3; Θ(r^3(n)) β>3 ]. The simplified form of Pr_1^c can be derived similarly, which is Pr_1^c≡{[ Θ(r^β(n)-r^3(n) L^3-β) 0 ≤β<3; Θ(r^3(n) ln(1/Lr(n))) β=3; Θ(r^3(n)) β>3 ]. § APPENDIX B In (<ref>) and (<ref>), the term d_k^- ασ _q - 1( d_n^k) can be expanded as follows. d_k^- ασ _q - 1( d_n^k) = d_k^- α( σ _q - 1( d_n) - d_k^- ασ _q - 2( d_n^k)) ≤ d_k^- ασ _q - 1( d_n). Hence, the upper bound of d_k^- ασ _q - 1( d_n^k) is d_k^- ασ _q - 1( d_n). According to Lemma 4 in <cit.>, when q ≤q_0, we have σ _q - 1( d_n)/σ _q( d_n)≡1/σ _1( d_n)Θ( nq/n - q + 1), and Θ( nq/n - q + 1) = Θ( q ) = Θ( 1 ). Hence, when q ≤q_0, _2^a is equivalent to Pr _2^a ≡∑_x = 1^L ∑_l = 1^4x^2 + 2∑_o_k∈c_l∑_q = 1^q_0q^- γd_k^- α - βσ _q - 1( d_n)/σ _1( q)σ _1( d_q)σ _q( d_n) ≡∑_x = 1^L ∑_l = 1^4x^2 + 2∑_o_k∈c_ld_k^- α - β/σ _1( q)σ _1( d_n)∑_q = 1^q_0q^- γ/σ _1( d_q). According to (44) in <cit.>, we have ∑_q = 1^q_0q^- γ/ . -σ _1( d_q)≡r^β( n ), which is substituted into (<ref>). Using integral transformation techniques, we have Pr _2^a ≡r^β( n )/σ _1( q)σ _1( d_n)∑_x = 1^L ∑_l = 1^4x^2 + 2∑_o_k∈c_ld_k^- α - β ≡nr^3 - α( n )/σ _1( q)σ _1( d_n)∑_x = 1^L ( x^2 - α - β + x^- α - β) ≡nr^3 - α( n )/σ _1( q)σ _1( d_n)∫_1^L ( υ ^2 - α - β + υ ^- α - β) dυ. According to (16) in <cit.>, we have σ _1( d_n) ≡{[ Θ( n )0 ≤α < 3; Θ( nln( r^- 1( n )))α = 3; Θ( nr^3 - α( n ))α > 3 ]. Besides, there is the following relation. σ _1( q) = ∑_q = 1^n - 1q^- γ ≡{[ Θ( 1 )γ > 1; Θ( n^1 - γ)0 ≤γ≤ 1 ]. Substituting (<ref>) and (<ref>) into (<ref>), the values of _2^a are revealed in Table <ref>. Similarly, using the techniques of integral transformation, the values of _2^c are revealed in Table <ref>. 1 Intro1 S. Hayat, E. Yanmaz and R. Muzaffar, “Survey on Unmanned Aerial Vehicle Networks for Civil Applications: A Communications Viewpoint," in IEEE Communications Surveys & Tutorials, vol. 18, no. 4, pp. 2624–2661, Fourthquarter 2016. Intro2 L. Gupta, R. Jain, and G. Vaszkun, “Survey of Important Issues in UAV Communication Networks,” in IEEE Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1123–1152, Secondquarter 2016. Intro2.5 P. Smith, D. Hutchison, J. P.G. Sterbenz, et al. “Network resilience: a systematic approach," in IEEE Communications Magazine, vol. 49, no. 7, pp. 88–97, July 2011. Intro3 Z. Yuan, J. Jin, L. Sun, et al. “Ultra-Reliable IoT Communications with UAVs: A Swarm Use Case," in IEEE Communications Magazine, vol. 56, no. 12, pp. 90–96, December 2018. Intro4 M. M. Azari, F. Rosas, K. Chen, et al. “Ultra Reliable UAV Communication Using Altitude and Cooperation Diversity,” in IEEE Transactions on Communications, vol. 66, no. 1, pp. 330–344, Jan. 2018. ScaleFree_1 M. Faloutsos, P. Faloutsos, C. Faloutsos. “On power-law relationships of the internet topology,” in ACM SIGCOMM computer communication review, vol. 29, no. 4, pp. 251– 262, 1999. ScaleFree_2 S. Boccaletti, V. Latora, Y. Moreno, et al. “Complex networks: Structure and dynamics,” in Physics reports, vol. 424, no. 4, pp. 
175–308, 2006. ScaleFree_3 J. Fan, D. Li, R. Li, et al. “Analysis on MAV/UAV cooperative combat based on complex network,” in Defence Technology, vol. 16, no. 1, pp. 150–157, 2020. ScaleFree_4 W. Xiaohong, Y. Zhang, W. Lizhi, et al. “Robustness evaluation method for unmanned aerial vehicle swarms based on complex network theory,” in Chinese Journal of Aeronautics, vol. 33, no. 1, pp. 352–364, 2020. Intro9 S. Singh, M. M. Kokar, “Simulation of Scale-Free Correlation in Swarms of UAVs,” International Conference on Complex Systems, Springer, Cham, pp. 91–97, 2018. Intro5 L. K. Gallos, R. Cohen, P. Argyrakis, et al. “Stability and topology of scale-free networks under attack and defense strategies,” in Physical review letters, vol. 94, no. 18, pp. 188701, 2005. Intro6 R. Cohen, K. Erez, D. Ben-Avraham, et al. “Breakdown of the internet under intentional attack,” in Physical review letters, vol. 86, no. 16, pp. 3682, 2001. Intro7 H. T. Tran, “A complex networks approach to designing resilient system-of-systems,” Georgia Institute of Technology, 2015. Intro10 Q. Chen, H. Wang, N. Liu. “Integrating Networking, Storage, and Computing for Resilient Battlefield Networks,” IEEE Communications Magazine, vol. 57, no. 8, pp. 56–63, 2019. Intro10.5 B. Liu, F. Qiu, Y. Cao, et al. “Maximizing Resilient Throughput in Peer-to-Peer Network,” Commun. Netw., vol. 3, no. 3, pp. 168–183, 2011. Intro11 X. Yuan, Z. Feng, W. Xu, et al. “Capacity Analysis of UAV Communications: Cases of Random Trajectories," in IEEE Transactions on Vehicular Technology, vol. 67, no. 8, pp. 7564–7576, Aug. 2018. Intro13 P. Li and J. Xu, “Fundamental Rate Limits of UAV-Enabled Multiple Access Channel With Trajectory Optimization," in IEEE Transactions on Wireless Communications, vol. 19, no. 1, pp. 458–474, Jan. 2020. Intro14 V. V. Chetlur and H. S. Dhillon, “Downlink Coverage Analysis for a Finite 3-D Wireless Network of Unmanned Aerial Vehicles," in IEEE Transactions on Communications, vol. 65, no. 10, pp. 4543–4558, Oct. 2017. Intro15 N. Gao, X. Li, S. Jin, et al. “3-D Deployment of UAV Swarm for Massive MIMO Communications," in IEEE Journal on Selected Areas in Communications, pp. 1–1, Jun. 2021. Intro16 Z. Wei, H. Wu, X. Yuan, et al. “Achievable Capacity Scaling Laws of Three-Dimensional Wireless Social Networks," in IEEE Transactions on Vehicular Technology, vol. 67, no. 3, pp. 2671–2685, March 2018. Intro17 M. Karimzadeh Kiskani, B. Azimdoost and H. R. Sadjadpour, “Effect of contact groups on the Capacity of Wireless Networks," in IEEE Transactions on Wireless Communications, vol. 15, no. 1, pp. 3–13, Jan. 2016. Intro18 B. Azimdoost, H. R. Sadjadpour and J. J. Garcia-Luna-Aceves, “Capacity of Wireless Networks with Social Behavior," in IEEE Transactions on Wireless Communications, vol. 12, no. 1, pp. 60–69, January 2013. Intro18.5 R. Hou, Y. Cheng, J. Li, et al. “Capacity of Hybrid Wireless Networks With Long-Range Social Contacts Behavior,” in IEEE/ACM Transactions on Networking, vol. 25, no. 2, pp. 834-848, April 2017. Intro19 Z. Wang, Z. Wei and Z. Feng, “Capacity of Three-Dimensional Scale Free Wireless Networks," 2018 IEEE/CIC International Conference on Communications in China (ICCC), pp. 288–292, 2018. Intro20 A. A. Khuwaja, Y. Zhu, G. Zheng, et al. “Performance Analysis of Hybrid UAV Networks for Probabilistic Content Caching," in IEEE Systems Journal, pp.1–12, Aug. 2020. Intro21 A. Agarwal, P. R. Kumar, “Capacity bounds for ad hoc and hybrid wireless networks,” in ACM SIGCOMM Computer Communication Review, vol. 34, no. 
3, pp. 71–81, 2004. 19 Z. Wei, H. Wu, X. Yuan, et al. “Achievable Capacity Scaling Laws of Three-Dimensional Wireless Social Networks,” IEEE Transactions on Vehicular Technology, vol. 67, no. 3, pp. 2671–2685, Mar. 2018. FootNote_1 F. Xue, P. R. Kumar. “Scaling laws for ad hoc wireless networks: an information theoretic approach,” in Foundations and Trends in Networking, vol. 1, no. 2, pp. 145–270, 2006. FootNote_2 P. Gupta, P. R. Kumar. “Internets in the sky: The capacity of three-dimensional wireless networks,” in Communications in Information and Systems, vol. 1, no. 1, pp. 33–50, 2001. 02 F. Xue and P. R. Kumar, “Scaling Laws for Ad Hoc Wireless Networks: An Information Theoretic Approach,” Foundations and Trends inNetworking, vol. 1, no. 2, pp.145–270, 2006. MAC_1 C. Chau, M. Chen and S. C. Liew, “Capacity of Large-Scale CSMA Wireless Networks," in IEEE/ACM Transactions on Networking, vol. 19, no. 3, pp. 893–906, June 2011. 9.5 P. Li, C. Zhang, and Y. Fang, “Capacity and delay of hybrid wireless broadband access networks,” in IEEE Journal on Selected Areas in Communications, vol. 27, no. 2, pp. 117–125, February 2009. 10 M. K. Kiskani, H. Sadjadpour and M. Guizani, “Social interaction increases capacity of wireless networks," 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 467–472, 2013. 22 P. Li, C. Zhang and Y. Fang, “Capacity and delay of hybrid wireless broadband access networks," in IEEE Journal on Selected Areas in Communications, vol. 27, no. 2, pp. 117–125, February 2009. 16 B. Azimdoost and H. R. Sadjadpour, “Capacity of scale free wireless networks," 2012 IEEE Global Communications Conference (GLOBECOM), pp. 2379–2384, 2012. wiki1 K. Choromanski, M. Matuszak, J. Miekisz. “Scale-free graph with preferential attachment and evolving internal vertex structure,” Journal of Statistical Physics, vol. 151, no. 6, pp. 1175–1183, 2013. wiki2 J. P. Onnela, J. Saramaki, J. Hyvonen, et al. “Structure and tie strengths in mobile communication networks,” Proceedings of the national academy of sciences,vol. 104, no. 18, pp. 7332–7336, 2007.
http://arxiv.org/abs/2306.06961v1
20230612084715
Kilonovae of binary neutron star mergers leading to short-lived remnant neutron star formation
[ "Kyohei Kawaguchi", "Sho Fujibayashi", "Nanae Domoto", "Kenta Kiuchi", "Masaru Shibata", "Shinya Wanajo" ]
astro-ph.HE
[ "astro-ph.HE", "gr-qc" ]
We study kilonova emission from binary neutron star (BNS) mergers for the case that a remnant massive neutron star (MNS) forms and collapses to a black hole within 20 ms after the onset of the merger (which we refer to as “a short-lived case”) by consistently employing numerical-relativity and nucleosynthesis results. We find that such kilonovae are fainter and last for a shorter time than those for BNSs resulting in the formation of long-lived (≫ 1 s) MNSs, in particular in the optical band. The resulting light curves are too faint and last for too short a duration to explain the kilonova observation for the BNS associated with GW170817, indicating that the merger remnant formed in GW170817 is unlikely to have collapsed to a black hole within a short period of time (∼ 20 ms) after the onset of the merger. Our present result implies that early observation is necessary to detect kilonovae associated with BNSs leading to short-lived MNS formation, in particular for the optical blue band, and that such kilonovae could be hidden by the gamma-ray burst afterglow for nearly face-on observation. We provide a possible approximate scaling law for near-infrared light curves in terms of the reference time and magnitude at which the decline power of the z-band magnitude, d M_ z/d log_10t, reaches 2.5. This scaling law suggests that the HK-band follow-up observation should be at least 1 mag deeper than the z-band reference magnitude and earlier than 4 times the reference time. gravitational waves – stars: neutron – nucleosynthesis – radiative transfer – hydrodynamics § INTRODUCTION Binary neutron star (BNS) mergers are among the most efficient gravitational-wave emitters in the universe and the most important sources of multi-messenger high-energy astrophysical phenomena, such as gamma-ray bursts <cit.>, kilonovae <cit.>, and synchrotron flares <cit.>. 
Furthermore, BNS mergers are considered to be important production sites of elements heavier than iron in the universe <cit.>. All these facts imply that BNS mergers are unmissable research subjects from an astronomical point of view. They are also among the unique systems in the universe in which the most extreme (strongly self-gravitating, high-density, and high-temperature) environments in the universe are realized. Hence, the multi-messenger observation of BNS mergers is also an indispensable tool to extend our knowledge of fundamental physics. Quantitative prediction of the merger dynamics and outcomes is crucial to correctly interpret the observed signals. Since the first simultaneous detection of gravitational waves and electromagnetic (EM) signals from a BNS (GW170817/AT2017gfo; ), remarkable progress has been achieved in the theoretical understanding, particularly, in the studies based on numerical simulations. For example, recent numerical studies revealed the quantitative nature of mass ejection from BNS mergers, for which the processes can be broadly divided into two phases: At the onset of the merger, a fraction of neutron-rich matter is ejected by tidal force and collisional shock heating <cit.>. After the merger, a massive neutron star (MNS) or a black hole (BH) surrounded by a strongly magnetized hot and dense accretion torus is formed <cit.>. The magnetized central objects and accretion tori are considered to launch relativistic jets and outflows by magnetic pressure and tension, viscous heating due to magneto-hydrodynamical turbulence, and neutrino irradiation. Quantitative properties of the ejecta and the nucleosynthetic element abundances for each phase are studied by various groups together with their dependence on binary parameters, such as NS masses and NS equations of state (EoS, ; see  for a review). The light curve modeling of EM counterparts, particularly for kilonovae, are also developed in this decade by employing numerical-simulation-based/motivated ejecta profiles and by performing radiative transfer simulations with realistic heating rates and/or detailed opacity tables <cit.>. However, there are still various open questions remaining. For example, whether the remnant NS has gravitationally collapsed into a BH or not is still being an open question for GW170817 due to the lack of the detection of post-merger gravitational waves in GW170817 <cit.>. Such information is important, because it is connected to the underlying physics of the uncomprehended NS EoS <cit.>. While we expect that the observation of the EM counterparts can provide a great hint to address this issue, it is still unclear from what observational features we can know about the fate of the remnant. Focusing particularly on the kilonova emission, a general consensus has not been yet reached for the property and origin of the ejecta in GW170817 <cit.>. Determination of the ejecta property is crucial for understanding the post-merger evolution of the system and whether BNS mergers could be the major production site of r-process elements in the universe. To address these questions, quantitative understanding of the relation between the initial condition and/or underlying physics, and EM signals is important. For this purpose, conducting a study based on numerical simulations consistently starting from the merger to the phase of EM emission is a useful approach to link the observables that should be related to each other. 
In particular, for the kilonova modeling, it is important to accurately determine the ejecta profile for the rest-mass density and compositions at the time of kilonova emission (>0.1 d). Previous studies showed that the ejecta profile induces significant spatial dependence in radioactive heating as well as strong geometrical effects in radiative transfer, which have a great impact on the resultant light curves <cit.>. However, there are still only a limited number of studies which provide the end-to-end modeling from the merger to observational outputs following the hydrodynamics evolution of all the ejecta components up to the time of kilonova emission (<cit.>; see, however, <cit.> for the studies focusing on the dynamical ejecta components, and <cit.> in the context of BH-NS mergers). Given that a number of BNS mergers will be observed in the coming decades, EM counterpart prediction based on consistent simulations that take the BNS diversity into account is an urgent task for correctly interpreting the observed data. In this paper, we study the kilonova light curves of BNS mergers for the case that a remnant MNS forms and subsequently collapses to a BH within 20 ms after the onset of the merger (which we refer to as “a short-lived case”) consistently employing the numerical-relativity (NR) results of <cit.>. This paper is organized as follows: In Section <ref>, we describe the method employed in this study. In Section <ref>, we describe the BNS models we study in this work. In Section <ref>, we present the property of the ejecta obtained by the long-term hydrodynamics evolution and the kilonova light curves obtained by radiative-transfer simulations. Finally, we discuss the implications of this paper in Section <ref>. Throughout this paper, c denotes the speed of light. § METHOD   Merger ejecta of a BNS are expected to be homologously expanding at the time of kilonova emission (≳ 0.1 d). To obtain the ejecta profile in the homologously expanding phase, we follow the same procedures as in the previous work <cit.>; adopting the outflow data obtained by NR simulations as the inner boundary condition <cit.>, the hydrodynamics evolution of merger ejecta is calculated by employing an axisymmetric relativistic hydrodynamics code developed in <cit.>. In the following, to distinguish between the present simulations and the NR simulations, we refer to the present hydrodynamics simulations as the HD simulations. In the hydrodynamics code, relativistic hydrodynamics equations in the spherical coordinates are solved taking into account the effect of the fixed-background gravity of a non-rotating BH metric in the isotropic coordinates. Radioactive-decay heating of heavy elements is also taken into account by referring to the nucleosynthesis results computed for each ejecta fluid element in the NR simulation (see <cit.> for the details). We employ the ideal-gas EoS with the adiabatic index of Γ=4/3. For the HD simulations, the uniform grid spacing with N_θ grid points is prepared for the polar angle θ, while for the radial direction, the following non-uniform grid structure is employed; the j-th radial grid point is given by ln r_j= ln(r_ out/r_ in) (j-1)/N_r+ ln r_ in, j=1⋯ N_r+1, where r_ in and r_ out denote the inner and outer radii of the computational domain, respectively, and N_r denotes the total number of the radial grid points. In the present work, we employ (N_r,N_θ)=(2048,256), and r_ in and r_ out are initially set to be 8,000 km and 10^3 r_ in, respectively. 
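A minimal sketch of this radial grid construction is given below, using the quoted values N_r=2048, r_ in=8,000 km, and r_ out=10^3 r_ in; it only illustrates the uniform-in-ln r spacing implied by the formula above (with lengths expressed in cm) and is a re-implementation for illustration, not taken from the simulation code itself.

```python
import math

def radial_grid(N_r=2048, r_in_cm=8.0e8, r_out_over_r_in=1.0e3):
    """Log-uniform radial grid: ln r_j = ln(r_out/r_in) * (j - 1)/N_r + ln r_in,
    j = 1, ..., N_r + 1.  The quoted r_in = 8,000 km is written here in cm."""
    return [math.exp(math.log(r_out_over_r_in) * j / N_r + math.log(r_in_cm))
            for j in range(N_r + 1)]  # the loop index plays the role of (j - 1) in the text

grid = radial_grid()
dlnr = [math.log(grid[j + 1] / grid[j]) for j in range(len(grid) - 1)]
print(len(grid), grid[0], grid[-1])    # 2049 points spanning r_in to 10^3 r_in
print(max(dlnr) - min(dlnr) < 1e-12)   # spacing is uniform in ln r
```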
We employ the same time origin for the HD simulations as in the NR simulations for the post-merger evolution. To import the outflow data from the NR simulations of <cit.> to the present HD simulations, the time-sequential hydrodynamics property of the outflow is extracted at r=r_ in in the NR simulations, and is used as the boundary condition at the inner radius, r=r_ in, of the HD simulations. The NR simulation data are run out at t> 5 s, and after then, the HD simulation is continued by setting a very small floor-value, which is negligible for the ejecta dynamics, to the rest-mass density of the inner boundary. To follow the evolution of ejecta even after the high velocity edge of the outflow reaches the outer boundary of our HD simulation, the radial grid points are added to the outside of the original outer boundary, while at the same time the innermost radial grid points are removed so as to keep the total number of the radial grid points. By this prescription, the value of r_ in is increased in the late phase of the HD simulations. The outermost radial grids are added so that the location of the outer radial boundary, r_ out, is always 10^3 r_ in. We note that the total mass lost by removing the inner radial grids is always much smaller (≲ 10^-4 M_⊙) than the post-merger ejecta mass. The light curves of kilonovae are calculated using a wavelength-dependent radiative transfer simulation code <cit.>. In this code, the photon transfer is simulated by a Monte Carlo method for given ejecta profiles composed of the density, velocity, and element abundance under the assumption of the homologous expansion. The time-dependent thermalization efficiency is taken into account following an analytic formula derived by <cit.>. The ionization and excitation states are determined under the assumption of the local thermodynamic equilibrium (LTE) by using the Saha's ionization and Boltzmann excitation equations. The impact of this assumption will be discussed in Appendix <ref>. For the photon-matter interaction, bound-bound, bound-free, and free-free transitions and electron scattering are taken into account for the transfer of optical and infrared photons <cit.>. The formalism of the expansion opacity <cit.> and the new line list derived in <cit.> are employed for the bound-bound transitions. In this line list, the atomic data of VALD <cit.> or Kurucz's database <cit.> are used for Z=20–29, while the results of atomic calculations from <cit.> are used for Z=30–88. For Sr II, Y I, Y II, Zr I, Zr II, Ba II, La III, and Ce III, which are the ions producing strong lines, line data are replaced with those calibrated with the atomic data of VALD and NIST database <cit.>. The radiative transfer simulations are performed from t=0.1 d to 30 d employing the density and internal energy profiles of the HD simulations at t=0.1 d. The spatial distributions of the heating rate and element abundances are determined by the table obtained by the nucleosynthesis calculations referring to the injected time and angle of the fluid elements. Note that the element abundances at t=1 d are used during the entire time evolution in the radiative transfer simulations to reduce the computational cost, but this simplified prescription gives an only minor systematic error on the resultant light curves as illustrated in <cit.>. § MODEL   In this work, we employ the NR outflow profiles obtained in <cit.> as the input for the HD simulations. The key quantities of each model are summarized in Table <ref>. 
The first four models listed in Table <ref> are BNSs with the total gravitational mass (at the infinite separation) of 2.7 M_⊙ but with various mass ratios in the range of 0.8–1.0. We also study an unequal mass BNS with a larger total gravitational mass (2.8 M_⊙), which we refer to as SFHo-125155. The SFHo EoS <cit.> supplemented by the Timmes (Helmholtz) EoS <cit.> for the low density part is employed. For all the models employing the SFHo EoS, a remnant MNS is formed after the merger, but it collapses to a BH within ≈20 ms. We note that these mass ranges of the BNSs with a short-lived remnant broadly cover the range of the mass estimation obtained by the gravitational-wave data analysis of GW170817 <cit.>. The BNS models which result in the formation of an MNS surviving for a long time (>1 s; ) are also shown in Table <ref> for comparison purposes (see also  ). The NR simulations are performed by a general-relativistic viscous neutrino-radiation hydrodynamics code with the dimensionless alpha viscous parameter of α=0.04  <cit.> except for MNS75a in which general-relativistic neutrino-radiation resistive-magnetohydrodynamics code is employed to take the magnetic dynamo effects into account <cit.>. The ejecta mass evaluated in the NR simulations is also listed in Table <ref>. The total ejecta mass increases as the mass ratio of the BNS deviates from unity due to the increase in the torus mass, and hence, the ejecta mass of the post-merger component. Broadly speaking, the mass of the dynamical ejecta tends to decrease as the binary becomes more asymmetric (but not so monotonically). This reflects the fact that, for an asymmetric binary, the tidal-interaction-driven component dominates the dynamical ejecta rather than the collisional shock-driven component, of which the launching mechanism is more efficient in mass ejection than the former. The total ejecta mass of the BNS merger for which the remnant MNS collapses to a BH in a short time is an order of magnitude smaller than that for the BNS which results in the formation of an MNS surviving for a long time (>1 s; ). Note that for the latter case, the total ejecta mass is dominated by the post-merger ejecta. § RESULTS   §.§ Ejecta profiles For all the models, we find that the total internal energy of ejecta is smaller by ≈4 order of magnitudes than the total kinetic energy at t=0.1 d and that the mass-averaged deviation of the velocity field from that in the later homologous expanding phase (v^r=r/t with v^r being the radial velocity) is as small as 10^-3 at t=0.1 d. This shows that the homologous expansion is well achieved for t ≥ 0.1 d. The total mass in the computational domain measured at t=0.1 d, M^ HD_ eje, is listed in Table <ref>. Note that the matter is in the homologously expanding phase at t=0.1 d, and hence, M^ HD_ eje can be regarded as the total ejecta mass. It is found that M^ HD_ eje is slightly smaller than M^ NR_ eje for some of the models. This is a consequence of the fact that a fraction of the matter falls back across the inner boundary as the pressure support from the inner boundary vanishes when the outflow data run out. While a fraction of the matter can actually experience such fall-back due to the deceleration by the pressure from the precedingly ejected matter, our treatment of suddenly vanishing pressure support on the inner boundary at the run-out time of NR data may artificially increase the mass of the fall-back matter. 
Nevertheless, as found in our previous studies <cit.>, the contribution of such marginally unbound matter to the kilonova emission is minor because it has only low velocity and has only a small contribution to the emission due to the long diffusion time scale. First, we focus on the BNS models of which the total mass is 2.7 M_⊙ to see the effect of the binary mass ratio. Fig. <ref> shows the rest-mass density profiles at t=0.1 d obtained by the HD simulations for models SFHo-135135, SFHo-130140, SFHo-125145, and SFHo-120150. The dynamical ejecta component located at x/ct≳ 0.05 or z/ct≳ 0.15 exhibits a broadly spherical morphology in the rest-mass density structure. On the other hand, the post-merger ejecta component, which is present in x/ct≲ 0.05 and z/ct≲ 0.15, exhibits a mildly prolate shape (see Fig. <ref> for a clearer distinction between the dynamical and post-merger ejecta components). These characteristics of the density profile are in broad agreement with the ejecta profile obtained in our previous studies <cit.>, in which BNSs result in long-lived MNSs (with the lifetime of >1 s). Taking a closer look, the dynamical ejecta show a relatively more prolate shape for an equal-mass BNS (SFHo-135135), while relatively more oblate shapes are seen for unequal mass cases (SFHo-125145 and SFHo-120150). This reflects the fact that the tidally driven component which spreads preferentially toward the equatorial direction dominates in the dynamical ejecta for an asymmetric binary over the collisional-shock-driven component which spreads in a more spherical manner. Fig. <ref> shows the electron fraction (Y_e) profiles at t=0.1 d for models SFHo-135135, SFHo-130140, SFHo-125145, and SFHo-120150. Here, the value of Y_e is evaluated when the temperature of the fluid element decreases to T=5 GK (=5×10^9 K). A clear boundary-like feature starting from x/ct≈ 0.05 on the equatorial plane to z/ct≲ 0.15 along the polar axis is seen for all the models. This corresponds to the boundary between the dynamical and post-merger ejecta components. The dynamical ejecta has a clear angular dependence in the Y_e profile. With θ being the angle measured from the polar axis, the value of Y_e of the dynamical ejecta is higher than 0.3 for θ≲ 45^∘–60^∘, while it is lower than 0.3 for θ≳ 45^∘–60^∘. This clearly reflects the difference in the mass ejection mechanism; the former is shock-heating-driven and the latter is tidally driven. The dynamical ejecta for unequal mass BNSs have relatively more extended distribution and lower Y_e values along the equatorial direction than those for the equal-mass case. This also reflects the fact that the tidally driven component dominates the dynamical ejecta and the ejecta experience a relatively small rise in temperature resulting from the shock heating for the unequal mass cases. On the other hand, the post-merger ejecta has only weak angular dependence in the Y_e value, which is always ≳0.3. These profiles of Y_e are also in broad agreement with the previous results of BNS mergers that result in long-lived remnant MNSs <cit.> and the results of BNS mergers in which the remnant survives for a moderately long time (0.1–1 s)  <cit.>. Fig. <ref> shows the rest-mass density and electron fraction profiles for model SFHo-125155. The qualitative features of the rest-mass density and Y_e profiles for this model are the same as those for other models with the total mass of 2.7 M_⊙, but the oblate shape and low-Y_e value region are more pronounced than those for the models shown in Figs. 
<ref> and <ref>. This reflects the fact that SFHo-125155 has the largest dynamical ejecta mass dominated by the tidally driven component as the consequence of the large asymmetry in the NS masses. Figs. <ref>–<ref> illustrate that the profiles of the rest-mass density and electron fraction depend sensitively on the total mass and mass ratio of the binaries. In the following we will show that the light curve and the spectral evolution depend on these differences, although the type of the remnant (either a short-lived or long-lived neutron star is formed as a remnant) has more impact on the brightness of the kilonova light curve. §.§ Kilonova light curves The left panel of Fig. <ref> shows the results of the bolometric light curves obtained by radiative-transfer simulations. The solid and dashed curves denote, respectively, the total and isotropically equivalent bolometric luminosities (the latter measured from the polar direction, 0^∘≤θ≤20^∘). For all the models, the bolometric light curves show approximately flat features with the luminosity of ∼ 10^41 erg/s for 0.3 d≤ t≤3–5 d, and decline rapidly after 3–5 d. As the ejecta mass increases, the epoch at which the bolometric light curve starts rapidly declining is delayed, and the luminosity after the decline becomes larger. This reflects the larger total optical depth and deposition energy for larger ejecta mass models. The right panel of Fig. <ref> shows the ratios of the bolometric fluxes measured from the polar (0^∘≤θ≤20^∘) and equatorial directions (86^∘≤θ≤90^∘) to those of spherical average. The isotropically equivalent luminosities measured from the polar and equatorial directions are brighter and fainter by a factor of ≈ 2, respectively, at t∼ 1 d due to the preferential diffusion of photons in the presence of optically thick dynamical ejecta around the equatorial plane <cit.>. However, such effects become less significant in the late phase (≳ 10 d) as the optical depth of the ejecta decreases due to the expansion. The viewing-angle dependence of the bolometric light curves is sustained for a longer time scale as the binary becomes more asymmetric. This reflects the fact that the tidally driven component of dynamical ejecta has more mass and a lower value of Y_e for more asymmetric binaries, resulting in more opaque ejecta. None of the model light curves in the left panel of Fig. <ref> can explain the observed brightness of the kilonova associated with GW170817 (AT2017gfo). The bolometric light curves are always below the observational data from 0.5 d to 17 d except for the last two data points in the plot. This is the case even if the enhancement of the brightness due to geometrical effects is taken into account (see the dashed curves in the left panel of Fig. <ref>, which denote the light curves measured from the polar direction). This is primarily due to the smallness of the ejecta mass, which leads to insufficient total radioactive deposition energy to explain the observation of AT2017gfo. Our results indicate that a BNS for which a remnant MNS collapses to a BH in a short time (t≲ 20 ms) is unlikely to be the progenitor of GW170817. We note that our light curves are fainter than the results of <cit.>, which considers the cases that a remnant MNS survives for a relatively longer time scale before it collapses to a BH (at t=0.1–1 s after the onset of the merger). This simply reflects the fact that the total ejecta mass is smaller for our present models. Fig. 
<ref> shows the gzK-band light curves for all the models of the short-lived cases listed in Table <ref>. The obtained light curves show the broadly similar properties to those obtained by the observation of AT2017gfo as well as the previous studies for a kilonova with multiple ejecta components <cit.>; the optical emission lasts for a short time scale (∼ 1 d), and the near-infrared (NIR) emission lasts for a longer time scale (∼ 10 d). The emission becomes faint as the viewing angle measured from the axis of symmetry increases. This primarily reflects the spatial dependence of element abundances (see Figs. <ref> and <ref>). The viewing-angle dependence is more pronounced for the emission in the optical wavelength (i.e., in the g-band) due to the so-called lanthanide-curtain effects in the presence of low-Y_e dynamical ejecta around the equatorial plane <cit.>. Interestingly, the peak magnitudes in the NIR wavelengths (i.e., in the K-band) do not significantly differ among the models regardless of the difference in the ejecta mass. However, the time scale for the emission to sustain the brightness close to the peak becomes shorten as the total ejecta mass decreases. The light curves in the optical wavelengths observed from the polar direction also show similar shapes among the models except for the most asymmetric BNSs (SFHo-120150 and SFHo-125155) for which the g-band light curves are fainter by ≥ 1 mag than those for the other models. We find that the strong suppression of the optical emission for the most asymmetric BNS models is due to the fact that the polar regions are more polluted by the lanthanide elements. The difference in the brightness of the optical emission observed from the equatorial direction among the models simply reflects the difference in the dynamical ejecta mass (see Table <ref>). Our result implies that an earlier follow-up observation than in GW170817/AT2017gfo is needed to observe the kilonova emission in the optical band for the short-lived BNS formation. For example, for the hypothetical distance of 200 Mpc, the g-band emission can only be detected by the observation within 0.5–1 d with the sensitivity deeper than 22 mag, which requires telescopes larger than 2 m-classes <cit.>. Also, such a detection can be achieved only for the case that the event is face-on, but we should note that it could be hidden by the GRB afterglow emission. In the z band, the emission lasts for a longer time scale, but yet the observation within 1 d is needed with 2 m and 4 m-class telescopes, respectively, to find kilonovae for the case of θ≤ 45^∘. The NIR follow-up observation by a telescope larger than 4-m classes, such as VISTA <cit.>, can detect the kilonova emission up to 5 d after the onset of the merger with the hypothetical distance of 200 Mpc and 100 Mpc for face-on and edge-on events, respectively. However, since the field of view of an NIR telescope is not as large as that of the optical one <cit.>, the improvement in the source localization by the gravitational-wave observation is crucial. §.§ Comparison with different BNS models Fig. <ref> compares the gzK-band kilonova light curves for the BNS models for which the remnant MNS survives for a short time scale (SFHo-125145) and for a long time scale (DD2-135135,  <cit.>), and for the case that significant magnetic dynamo effects are hypothetically present in a long-surviving remnant MNS (MNS75a,  ). 
The time scale for the emission to rapidly decline is much shorter for the model with a short-lived remnant MNS than those for the models associated with the formation of a long-lived MNS simply because the ejecta mass for the short-lived MSN models is smaller by a factor of 5–10 than that for the latter cases. The brightness at the peak is also high for the case with a long-lived MNS, and the difference is more significant in a shorter wavelength. As already mentioned, none of the merger models that result in a short-lived remnant MNS can explain the peak kilonova brightness of AT2017gfo observed in the gz-band, nor the brightness in the K-band in the late phase (≳ 5 d). This is likely to be the case even if we consider a possible enhancement in the optical-band emission due to the modification in the ionization states by the non-LTE effects (see Appendix <ref>). On the other hand, the kilonova model of a BNS that results in a long-surviving MNS (DD2-135135) reproduces the peak brightness in the optical wavelengths as well as the brightness and declining time scale in the NIR wavelengths, although a deviation from the observation is present in the optical wavelengths in the late phase (t≳ 2 d)[Taking the non-LTE effects on the ionization populations into account may solve the tension; see <cit.> for the discussion.]. This suggests that the formation of a short-lived remnant MNS is unlikely the case for GW170817 and the formation of an MNS which survives for a longer time scale (≳ 0.1 s) is more likely from the viewpoint of kilonova light curves. However, for the case that the significant magnetic dynamo effects are present in the long-surviving remnant MNS (MNS75a), the kilonova emission will be significantly brighter than the observed data (see the light curves of MNS75a in Fig. <ref>). This suggests that the remnant MNS of GW170817 should have not survived for too long time (i.e., over the time scale of the dynamo magnetic-field amplification) if the magnetic dynamo effect played a significant role in the post-merger phase (see also the discussion below for the viewpoint of the nucleosynthesis yields). §.§ Approximate scaling law of kilonova light curves While the peak brightness and the time scale of the emission differ among different BNS models and setups, Fig. <ref> implies that the shapes of the light curves as well as their relative brightness among different wavelengths share similar behaviour among the models. To examine this idea, we compare the gzK-band light curves for various models and viewing angles with the time and magnitude of each light curve being scaled by those at a certain reference time. For this purpose, we chose the reference time for each light curve to be the decline time of the z-band emission, t_ z,dec, defined as the time at which the decline power of the z-band magnitude, dM_ z/d log_10t, reaches 2.5. Fig. <ref> shows the reference time and z-band magnitude as functions of the viewing angle for various kilonova models <cit.>. The reference time and magnitude largely vary among the models and viewing angles. As expected from Fig. <ref>, the reference time and magnitude tend to be earlier and fainter, respectively, for the short-lived cases than the long-lived cases. The viewing-angle dependence is more pronounced for the short-lived cases, which reflects the fact that the dynamical component has a larger fraction in the total ejecta compared to the long-lived cases. Fig. 
<ref> compares the gzK-band light curves for various models and viewing angles, which are scaled with the reference time and z-band magnitude for each case. The g-band light curves show a large diversity among the models even after the scaling, for which we find no clear trend among the models and viewing-angles. On the other hand, although the reference time and magnitude largely vary among the models and viewing angles, the K-band light curves show relatively a less diversity after the scaling. In particular, the value of the K-band magnitude is always within ≈ 1 mag relative to the value of the reference z-band magnitude for 0.6≲ t/t_ z,dec≲ 4. We find that this is also the case for the H band. Hence, this suggests that the HK-band follow-up observation should be at least 1 mag deeper than the value of the z-band reference magnitude and earlier than 4 times the reference time. Once the kilonova candidate is found and the decline time is determined by the z-band observation in a few days after the event, this approximate scaling law can be used as a guideline for the NIR follow-up observation by letting us know how rapid and how deep the observation should be. For example, let us suppose the case for which an EM candidate is found in the z band and dM_ z/d log_10t reaches 2.5 with the z-band magnitude being 20 mag at 1.5 d after the gravitational-wave trigger. Then, our approximate scaling-law suggests that the follow-up observation deeper than 21 mag within 6 d is at least needed not to miss the peak brightness of the HK-band counterparts. Notably, the K-band emission tends to decline within t/t_ z,dec≈ 5–10 for the cases with a long-lived remnant MNS, while the K-band magnitude for the cases with a short-lived remnant MNS tends to keep the value close to the peak until a larger value of t/t_ z,dec. The observational data of AT2017gfo in the gzK-band scaled in the same way tend to follow the trend of the cases with a long-lived remnant MNS, which also supports our hypothesis that the remnant MNS for GW170817 did not collapse to a BH within a short time (<20 ms). § DISCUSSIONS We found that the kilonova light curves of a BNS of which the remnant MNS survives for a short time are too faint and last for a too short duration to explain the brightness of the optical and NIR observation of GW170817/AT2017gfo. This is primarily due to the smallness of ejecta mass. Instead, kilonova models of a BNS which results in a long-surviving MNS (DD2-135135) are more consistent with the observation. This indicates that the remnant MNS of GW170817 might not have collapsed within a short time (≲ 20 ms) but survived for a longer time (≳ 0.1 s). On the other hand, our previous study <cit.> indicated that, if the dynamo effects play a significant role for an efficient amplification of magnetic fields in a long-lived remnant MNS, the kilonova as well as the synchrotron emission stemming from the interaction between the ejecta fast tail and inter-stellar medium becomes too bright to be consistent with the EM observations associated with GW170817 (see also the discussion in <cit.>). Hence, the remnant MNS should have collapsed to a BH within the dynamo time scale of the magnetic-field growth, or the dynamo effect in the post-merger phase was subdominant. We find that the mass distribution of the ejecta in the polar region for the long-lived case is also compatible with the required property of the fast blue component, for which the origin is often discussed to be mysterious <cit.>. Fig. 
<ref> shows the isotropic-equivalent ejecta mass, M^iso_eje(v^r,θ), for various models and latitudinal angles, which is defined by M^iso_eje(v^r,θ) = 4π∫_>v^r ρ(r,θ) r^2 dr, where ρ denotes the rest-mass density. For the case of long-lived MNS formation (DD2-135135), the polar value of M^iso_eje for v^r≳ 0.2 c is larger than 10^-2 M_⊙. This matches the property of the ejecta which is required to explain the luminosity and photospheric velocity of the blue component in AT2017gfo (see also  for similar findings). Such a polar ejecta component originates from the dynamical ejecta component and from the post-merger ejecta component, whose velocity is enhanced by neutrino radiation from the MNS. While a spectral analysis taking the non-LTE effects into account is needed for a more quantitative argument, our finding suggests that the photospheric velocity of the blue component can be naturally explained by the setup obtained from NR simulations. Fig. <ref> suggests that the diversity in the evolution of photospheric velocities reflects the different types of MNS evolution. For the case of short-lived MNS formation (SFHo-125145), the value of M^iso_eje only reaches 10^-2 M_⊙ for v^r<0.05 c, simply reflecting the smallness of the ejecta mass. This suggests that the photospheric velocity for the short-lived case is ≲ 0.05 c for t≳ 1 d. On the other hand, the result of MNS75a shows that an appreciable amount of ejecta is distributed in the very high-velocity components. This is due to the acceleration of the ejecta in the presence of significant magnetic dynamo effects in the long-lived MNS, and a photospheric velocity of >0.8 c is expected to be observed in the early phase of the emission in such a case. As described above, from the viewpoint of kilonova light curves, a BNS that results in a long-lived MNS is a more likely interpretation of GW170817 than a BNS that results in a short-lived remnant MNS. However, the calculated nucleosynthesis yields for such long-lived MNS cases (DD2-135135 and MNS75a in Fig. <ref>) exhibit an overproduction of the nuclei between the first and second r-process abundance peaks (A ∼ 80–130) when compared to the solar r-process abundances (see also  for the details, and  for similar results). This fact suggests that such long-lived MNSs should not be the major outcomes of BNSs that merge within a Hubble time if the dominant sources of r-process elements are BNS mergers. This implies that GW170817 may not be a typical type of BNS merger in the universe. However, we should note that the total nucleosynthesis yields can be sensitive to the setups and physical ingredients of the numerical simulation. A recent work suggests that a more self-consistent magnetohydrodynamics treatment of angular momentum transfer could result in more production of elements heavier than the first r-process peak in the post-merger ejecta <cit.>. Hence, there may still be room for both the observation of GW170817 and the robustness of the solar abundance pattern <cit.> to be explained by some configuration of a BNS, although we should keep in mind that, as discussed above, the presence of an MNS which survives for a long time scale (t>1 s) with significant dynamo effects is unlikely for GW170817. For example, a BNS which results in a remnant MNS with significant dynamo effects but collapses to a BH at O(0.1) s can be a plausible model for interpreting GRB130603B from this point of view. 
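As a concrete illustration of the isotropic-equivalent ejecta mass defined above, the short Python sketch below evaluates M^iso_eje(v^r) on a tabulated radial density profile at a fixed latitudinal angle, assuming homologous expansion (v^r = r/t). The power-law profile, its normalization, and the snapshot time are purely illustrative assumptions and are not outputs of the simulations discussed in this paper.

```python
import numpy as np

M_SUN = 1.989e33          # g
C_LIGHT = 2.998e10        # cm/s

def m_iso_eje(r, rho, v_over_c_threshold, t_snap):
    """Isotropic-equivalent ejecta mass above a velocity threshold,
    M^iso_eje(v^r) = 4*pi * int_{v > v^r} rho(r) r^2 dr,
    assuming homologous expansion v = r / t_snap at the snapshot time."""
    v = r / t_snap / C_LIGHT                      # radial velocity in units of c
    mask = v > v_over_c_threshold                 # region with v > v^r
    if mask.sum() < 2:
        return 0.0
    integrand = 4.0 * np.pi * rho[mask] * r[mask] ** 2
    return np.trapz(integrand, r[mask]) / M_SUN   # in solar masses

# Toy power-law density profile at t_snap = 1 d (illustrative numbers only)
t_snap = 86400.0                                          # s
r = np.geomspace(0.01, 0.9, 512) * C_LIGHT * t_snap       # radii covering v in [0.01c, 0.9c]
rho = 1e-14 * (r / r[0]) ** -4                            # g/cm^3, made-up normalization

for v_thr in (0.05, 0.1, 0.2, 0.4):
    print(f"M_iso(>{v_thr:.2f}c) = {m_iso_eje(r, rho, v_thr, t_snap):.3e} Msun")
```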
For the BNS resulting in a short-lived MNS, the kilonova emission lasts for a time scale appreciably shorter than that of GW170817/AT2017gfo, in particular in the optical bands. This implies that, for detecting kilonovae of this type, we need earlier observations than were made for AT2017gfo. This is particularly the case for a large value of θ. It is also likely that the optical light curves could be more easily hidden by the afterglow light curves of GRBs for small values of θ. Hence, the NIR light curves may be the primary observational target in the case of a simultaneous detection of a GRB. In fact, the comparison of our model light curves with the observation of GRB130603B <cit.>, with which a plausible kilonova candidate is associated, indicates that the r-band emission for the case of short-lived MNS formation (SFHo-125145) is likely hidden by the afterglow emission (see Fig. <ref>). The brightness in the H band for the case of short-lived MNS formation is also at most only comparable to that of the afterglow emission. Hence, the progenitor of GRB130603B was unlikely to be a BNS which results in the formation of a short-lived remnant MNS, assuming that the excess in the H band is due to the kilonova emission. This also indicates that GRB-associated kilonovae from BNSs leading to short-lived MNS formation could be missed by being entirely hidden by the afterglows, which should result in a number of future simultaneous detections of gravitational waves and short GRBs that lack kilonova counterparts. Indeed, a statistical study shows that a substantial fraction of previous short GRBs are not associated with kilonovae <cit.>. As the brightness of AT2017gfo is known to be broadly comparable with that of the optical and NIR counterparts of GRB130603B <cit.>, the kilonova model light curves for the cases of long-lived MNS formation (DD2-135135 and MNS75a) are also consistent with the observation of GRB130603B; while the r-band emission is hidden by the afterglow emission, the H-band emission for the long-lived cases is brighter than the afterglow emission and is consistent with the observed excess. This suggests that the progenitor of GRB130603B is likely to be a BNS which results in the formation of an MNS that survives for more than ∼ 10 ms. In <cit.>, spectral features observed in the data of AT2017gfo are interpreted as P-Cygni profiles of Sr (note, however, that <cit.> suggested that the spectral features could also be well interpreted as absorption lines of He if non-LTE effects are considered). Recently, <cit.> performed a more detailed analysis of those spectral features and showed that the Sr distribution of the ejecta should have a nearly spherical morphology. Fig. <ref> shows the Sr mass density profiles at t=1 d for SFHo-135135 and DD2-135135 <cit.>. The Sr distribution with velocities larger than 0.15 c is approximately spherical for SFHo-135135. On the other hand, the Sr distributions for DD2-135135 as well as the low-velocity part (<0.15 c) for SFHo-135135 show mildly prolate shapes. These aspherical features, which are in broad agreement with the results of <cit.>, are inconsistent with the implication of <cit.>. A detailed quantitative spectral analysis taking into account various uncertainties is nevertheless needed to clarify how severe the tension with this observational implication is, which we leave for future work. § ACKNOWLEDGEMENTS KK thanks Masaomi Tanaka and Eli Waxman for valuable discussions. 
We also thank Kenta Hotokezaka for helpful discussions. Numerical computation was performed on Yukawa21 at Yukawa Institute for Theoretical Physics, Kyoto University and the Sakura, Cobra, Raven clusters at Max Planck Computing and Data Facility. The simulations were performed on Fugaku provided by RIKEN through the HPCI System Research Project (Project ID: hp220174, hp230084), and the Cray XC50 at CfCA of the National Astronomical Observatory of Japan. ND acknowledges support from Graduate Program on Physics for the Universe (GP-PU) at Tohoku University. This work was supported by Grant-in-Aid for Scientific Research (JP20H00158, JP21K13912, JP23H04900, 22KJ0317, 23H01772) of JSPS/MEXT. mnras @urlcharsothermakeother $&#_% @doi@urlcharsother ifnextchar [ @doi@ @doi@[] @doi@[#1]#2tempa#1tempaempty http://dx.doi.org/#2 doi:#2http://dx.doi.org/#2 #1 @eprint#1#2@eprint@#1:#2::nil @eprint@arXiv#1http://arxiv.org/abs/#1 arXiv:#1 @eprint@dblp#1http://dblp.uni-trier.de/rec/bibtex/#1.xml dblp:#1 @eprint@#1:#2:#3:#4niltempa #1tempb #2tempc #3tempc empty tempc tempb tempb tempa tempb empty tempb arXivifundefined mn@eprint@tempbtempb:tempcmn@eprint@tempbtempc [Abbott et al.Abbott et al.2017a]TheLIGOScientific:2017qsa Abbott B., et al., 2017a, @doi [Phys. Rev. Lett.] 10.1103/PhysRevLett.119.161101, 119, 161101 [Abbott et al.Abbott et al.2017b]LIGOScientific:2017vwq Abbott B. P., et al., 2017b, @doi [Phys. Rev. Lett.] 10.1103/PhysRevLett.119.161101, 119, 161101 [Abbott et al.Abbott et al.2017c]LIGOScientific:2017zic Abbott B. P., et al., 2017c, @doi [Astrophys. J. Lett.] 10.3847/2041-8213/aa920c, 848, L13 [Abbott et al.Abbott et al.2017d]LIGOScientific:2017fdd Abbott B. P., et al., 2017d, @doi [Astrophys. J. Lett.] 10.3847/2041-8213/aa9a35, 851, L16 [Abbott et al.Abbott et al.2019]LIGOScientific:2018hze Abbott B. P., et al., 2019, @doi [Phys. Rev. X] 10.1103/PhysRevX.9.011001, 9, 011001 [Ackley et al.Ackley et al.2020]Ackley:2020qkz Ackley K., et al., 2020, @doi [Astron. Astrophys.] 10.1051/0004-6361/202037669, 643, A113 [Almualla, Ning, Salehi, Bulla, Dietrich, Coughlin & GuessoumAlmualla et al.2021]Almualla:2021znj Almualla M., Ning Y., Salehi P., Bulla M., Dietrich T., Coughlin M. W., Guessoum N., 2021, arXiv [Barnes, Kasen, Wu & Martínez-PinedoBarnes et al.2016]Barnes:2016umi Barnes J., Kasen D., Wu M.-R., Martínez-Pinedo G., 2016, @doi [Astrophys. J.] 10.3847/0004-637X/829/2/110, 829, 110 [Barnes, Zhu, Lund, Sprouse, Vassh, McLaughlin, Mumpower & SurmanBarnes et al.2021]Barnes:2020nfi Barnes J., Zhu Y. L., Lund K. A., Sprouse T. M., Vassh N., McLaughlin G. C., Mumpower M. R., Surman R., 2021, @doi [Astrophys. J.] 10.3847/1538-4357/ac0aec, 918, 44 [Bauswein, Goriely & JankaBauswein et al.2013]Bauswein:2013yna Bauswein A., Goriely S., Janka H. T., 2013, @doi [Astrophys. J.] 10.1088/0004-637X/773/1/78, 773, 78 [BergerBerger2014]Berger:2013jza Berger E., 2014, @doi [Ann. Rev. Astron. Astrophys.] 10.1146/annurev-astro-081913-035926, 52, 43 [Berger, Fong & ChornockBerger et al.2013]Berger:2013wna Berger E., Fong W., Chornock R., 2013, @doi [Astrophys. J.] 10.1088/2041-8205/774/2/L23, 774, L23 [Bernuzzi et al.Bernuzzi et al.2020]Bernuzzi:2020txg Bernuzzi S., et al., 2020, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/staa1860, 497, 1488 [Bovard, Martin, Guercilena, Arcones, Rezzolla & KorobkinBovard et al.2017]Bovard:2017mvn Bovard L., Martin D., Guercilena F., Arcones A., Rezzolla L., Korobkin O., 2017, @doi [Phys. Rev.] 
10.1103/PhysRevD.96.124005, D96, 124005 [BullaBulla2019]Bulla:2019muo Bulla M., 2019, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stz2495, 489, 5037 [BullaBulla2023]Bulla:2022mwo Bulla M., 2023, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stad232, 520, 2558 [Bulla et al.,Bulla et al.2021]Bulla:2020jjr Bulla M., et al., 2021, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/staa3796, 501, 1891 [Christie, Lalakos, Tchekhovskoy, Fernández, Foucart, Quataert & KasenChristie et al.2019]Christie:2019lim Christie I. M., Lalakos A., Tchekhovskoy A., Fernández R., Foucart F., Quataert E., Kasen D., 2019, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stz2552, 490, 4811 [Ciolfi & KalinaniCiolfi & Kalinani2020]Ciolfi:2020wfx Ciolfi R., Kalinani J. V., 2020, @doi [Astrophys. J.] 10.3847/2041-8213/abb240, 900, L35 [Collins, Bauswein, Sim, Vijayan, Martínez-Pinedo, Just, Shingles & KromerCollins et al.2023]Collins:2022ocl Collins C. E., Bauswein A., Sim S. A., Vijayan V., Martínez-Pinedo G., Just O., Shingles L. J., Kromer M., 2023, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stad606, 521, 1858 [Cowan, Sneden, Lawler, Aprahamian, Wiescher, Langanke, Martínez-Pinedo & ThielemannCowan et al.2021]Cowan:2019pkx Cowan J. J., Sneden C., Lawler J. E., Aprahamian A., Wiescher M., Langanke K., Martínez-Pinedo G., Thielemann F.-K., 2021, @doi [Rev. Mod. Phys.] 10.1103/RevModPhys.93.015002, 93, 15002 [Cowperthwaite et al.Cowperthwaite et al.2017]Cowperthwaite:2017dyu Cowperthwaite P. S., et al., 2017, @doi [Astrophys. J.] 10.3847/2041-8213/aa8fc7, 848, L17 [Cucchiara, Prochaska, Perley, Cenko, Werk, Cao, Bloom & CobbCucchiara et al.2013]Cucchiara:2013vda Cucchiara A., Prochaska J. X., Perley D. A., Cenko S. B., Werk J., Cao Y., Bloom J. S., Cobb B. E., 2013, @doi [Astrophys. J.] 10.1088/0004-637X/777/2/94, 777, 94 [Curtis, Mösta, Wu, Radice, Roberts, Ricigliano & PeregoCurtis et al.2022]Curtis:2021guz Curtis S., Mösta P., Wu Z., Radice D., Roberts L., Ricigliano G., Perego A., 2022, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stac3128, 518, 5313 [Curtis, Bosch, Mösta, Radice, Bernuzzi, Perego, Haas & SchnetterCurtis et al.2023]Curtis:2023zfo Curtis S., Bosch P., Mösta P., Radice D., Bernuzzi S., Perego A., Haas R., Schnetter E., 2023, arXiv [Darbha & KasenDarbha & Kasen2020]Darbha:2020lhz Darbha S., Kasen D., 2020, @doi [arXiv] 10.3847/1538-4357/ab9a34 [Dessart, Ott, Burrows, Rosswog & LivneDessart et al.2009]Dessart:2008zd Dessart L., Ott C., Burrows A., Rosswog S., Livne E., 2009, @doi [Astrophys. J.] 10.1088/0004-637X/690/2/1681, 690, 1681 [Dietrich, Ujevic, Tichy, Bernuzzi & BruegmannDietrich et al.2017]Dietrich:2016hky Dietrich T., Ujevic M., Tichy W., Bernuzzi S., Bruegmann B., 2017, @doi [Phys. Rev.] 10.1103/PhysRevD.95.024029, D95, 024029 [Domoto, Tanaka, Wanajo & KawaguchiDomoto et al.2021]Domoto:2021xfq Domoto N., Tanaka M., Wanajo S., Kawaguchi K., 2021, @doi [Astrophys. J.] 10.3847/1538-4357/abf358, 913, 26 [Domoto, Tanaka, Kato, Kawaguchi, Hotokezaka & WanajoDomoto et al.2022]Domoto:2022cqp Domoto N., Tanaka M., Kato D., Kawaguchi K., Hotokezaka K., Wanajo S., 2022, @doi [Astrophys. J.] 10.3847/1538-4357/ac8c36, 939, 8 [Eastman & PintoEastman & Pinto1993]1993ApJ...412..731E Eastman R. G., Pinto P. A., 1993, @doi [] 10.1086/172957, http://adsabs.harvard.edu/abs/1993ApJ...412..731E 412, 731 [Eichler, Livio, Piran & SchrammEichler et al.1989]Eichler:1989ve Eichler D., Livio M., Piran T., Schramm D. 
N., 1989, @doi [Nature] 10.1038/340126a0, 340, 126 [Fernández, Quataert, Schwab, Kasen & RosswogFernández et al.2015]Fernandez:2014bra Fernández R., Quataert E., Schwab J., Kasen D., Rosswog S., 2015, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stv238, 449, 390 [Fernández, Foucart, Kasen, Lippuner, Desai & RobertsFernández et al.2017]Fernandez:2016sbf Fernández R., Foucart F., Kasen D., Lippuner J., Desai D., Roberts L. F., 2017, @doi [Class. Quant. Grav.] 10.1088/1361-6382/aa7a77, 34, 154001 [Fernández, Foucart & LippunerFernández et al.2020]Fernandez:2020oow Fernández R., Foucart F., Lippuner J., 2020, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/staa2209, 497, 3221 [Fernández, Tchekhovskoy, Quataert, Foucart & KasenFernández et al.2019]Fernandez:2018kax Fernández R., Tchekhovskoy A., Quataert E., Foucart F., Kasen D., 2019, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/sty2932, 482, 3373 [Foucart et al.,Foucart et al.2016]Foucart:2015gaa Foucart F., et al., 2016, @doi [Phys. Rev.] 10.1103/PhysRevD.93.044019, D93, 044019 [Foucart, Duez, Hebert, Kidder, Pfeiffer & ScheelFoucart et al.2020]Foucart:2020qjb Foucart F., Duez M. D., Hebert F., Kidder L. E., Pfeiffer H. P., Scheel M. A., 2020, @doi [Astrophys. J. Lett.] 10.3847/2041-8213/abbb87, 902, L27 [Foucart, Moesta, Ramirez, Wright, Darbha & KasenFoucart et al.2021]Foucart:2021ikp Foucart F., Moesta P., Ramirez T., Wright A. J., Darbha S., Kasen D., 2021, arXiv [Foucart, Duez, Haas, Kidder, Pfeiffer, Scheel & Spira-SavettFoucart et al.2022]Foucart:2022kon Foucart F., Duez M. D., Haas R., Kidder L. E., Pfeiffer H. P., Scheel M. A., Spira-Savett E., 2022, arXiv [Freiburghaus, Rosswog & ThielemannFreiburghaus et al.1999]Freiburghaus1999a Freiburghaus C., Rosswog S., Thielemann F. K., 1999, @doi [] 10.1086/312343, https://ui.adsabs.harvard.edu/abs/1999ApJ...525L.121F 525, L121 [Friend & CastorFriend & Castor1983]1983ApJ...272..259F Friend D. B., Castor J. I., 1983, @doi [] 10.1086/161289, https://ui.adsabs.harvard.edu/abs/1983ApJ...272..259F 272, 259 [Fujibayashi, Kiuchi, Nishimura, Sekiguchi & ShibataFujibayashi et al.2018]Fujibayashi:2017puw Fujibayashi S., Kiuchi K., Nishimura N., Sekiguchi Y., Shibata M., 2018, @doi [Astrophys. J.] 10.3847/1538-4357/aabafd, 860, 64 [Fujibayashi, Shibata, Wanajo, Kiuchi, Kyutoku & SekiguchiFujibayashi et al.2020a]Fujibayashi:2020qda Fujibayashi S., Shibata M., Wanajo S., Kiuchi K., Kyutoku K., Sekiguchi Y., 2020a, @doi [Phys. Rev. D] 10.1103/PhysRevD.101.083029, 101, 083029 [Fujibayashi, Shibata, Wanajo, Kiuchi, Kyutoku & SekiguchiFujibayashi et al.2020b]Fujibayashi:2020jfr Fujibayashi S., Shibata M., Wanajo S., Kiuchi K., Kyutoku K., Sekiguchi Y., 2020b, @doi [Phys. Rev. D] 10.1103/PhysRevD.102.123014, 102, 123014 [Fujibayashi, Wanajo, Kiuchi, Kyutoku, Sekiguchi & ShibataFujibayashi et al.2020c]Fujibayashi:2020dvr Fujibayashi S., Wanajo S., Kiuchi K., Kyutoku K., Sekiguchi Y., Shibata M., 2020c, @doi [Astrophys. J.] 10.3847/1538-4357/abafc2, 901, 122 [Fujibayashi, Kiuchi, Wanajo, Kyutoku, Sekiguchi & ShibataFujibayashi et al.2023]Fujibayashi:2022ftg Fujibayashi S., Kiuchi K., Wanajo S., Kyutoku K., Sekiguchi Y., Shibata M., 2023, @doi [Astrophys. J.] 10.3847/1538-4357/ac9ce0, 942, 39 [Gillanders, Smartt, Sim, Bauswein & GorielyGillanders et al.2022]Gillanders:2022opm Gillanders J. H., Smartt S. J., Sim S. A., Bauswein A., Goriely S., 2022, @doi [Mon. Not. Roy. Astron. Soc.] 
10.1093/mnras/stac1258, 515, 631 [Grossman, Korobkin, Rosswog & PiranGrossman et al.2014]Grossman:2013lqa Grossman D., Korobkin O., Rosswog S., Piran T., 2014, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stt2503, 439, 757 [Hotokezaka & NakarHotokezaka & Nakar2020]Hotokezaka:2019uwo Hotokezaka K., Nakar E., 2020, @doi [] 10.3847/1538-4357/ab6a98, https://ui.adsabs.harvard.edu/abs/2020ApJ...891..152H 891, 152 [Hotokezaka & PiranHotokezaka & Piran2015]Hotokezaka:2015eja Hotokezaka K., Piran T., 2015, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stv620, 450, 1430 [Hotokezaka, Kiuchi, Kyutoku, Okawa, Sekiguchi, Shibata & TaniguchiHotokezaka et al.2013]Hotokezaka:2012ze Hotokezaka K., Kiuchi K., Kyutoku K., Okawa H., Sekiguchi Y.-i., Shibata M., Taniguchi K., 2013, @doi [Phys. Rev.] 10.1103/PhysRevD.87.024001, D87, 024001 [Hotokezaka, Kiuchi, Shibata, Nakar & PiranHotokezaka et al.2018]Hotokezaka:2018gmo Hotokezaka K., Kiuchi K., Shibata M., Nakar E., Piran T., 2018, @doi [Astrophys. J.] 10.3847/1538-4357/aadf92, 867, 95 [Hotokezaka, Tanaka, Kato & GaigalasHotokezaka et al.2021]Hotokezaka:2021ofe Hotokezaka K., Tanaka M., Kato D., Gaigalas G., 2021, @doi [] 10.1093/mnras/stab1975, https://ui.adsabs.harvard.edu/abs/2021MNRAS.506.5863H 506, 5863 [Just, Bauswein, Pulpillo, Goriely & JankaJust et al.2015]Just:2014fka Just O., Bauswein A., Pulpillo R. A., Goriely S., Janka H. T., 2015, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stv009, 448, 541 [Just, Kullmann, Goriely, Bauswein, Janka & CollinsJust et al.2022]Just:2021vzy Just O., Kullmann I., Goriely S., Bauswein A., Janka H. T., Collins C. E., 2022, @doi [] 10.1093/mnras/stab3327, https://ui.adsabs.harvard.edu/abs/2022MNRAS.510.2820J 510, 2820 [Just, Vijayan, Xiong, Bauswein, Goriely, Guilet, Janka & Martínez-PinedoJust et al.2023]Just:2023wtj Just O., Vijayan V., Xiong Z., Bauswein A., Goriely S., Guilet J., Janka H.-T., Martínez-Pinedo G., 2023, arXiv [Kasen, Thomas & NugentKasen et al.2006]Kasen:2006ce Kasen D., Thomas R. C., Nugent P., 2006, @doi [Astrophys. J.] 10.1086/506190, 651, 366 [Kasen, Badnell & BarnesKasen et al.2013]Kasen:2013xka Kasen D., Badnell N. R., Barnes J., 2013, @doi [Astrophys. J.] 10.1088/0004-637X/774/1/25, 774, 25 [Kasen, Fernandez & MetzgerKasen et al.2015]Kasen:2014toa Kasen D., Fernandez R., Metzger B., 2015, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stv721, 450, 1777 [Kasen, Metzger, Barnes, Quataert & Ramirez-RuizKasen et al.2017]Kasen:2017sxr Kasen D., Metzger B., Barnes J., Quataert E., Ramirez-Ruiz E., 2017, @doi [Nature] 10.1038/nature24453 [Kasliwal et al.Kasliwal et al.2017]Kasliwal:2017ngb Kasliwal M. M., et al., 2017, @doi [Science] 10.1126/science.aap9455, 358, 1559 [Kawaguchi, Shibata & TanakaKawaguchi et al.2018]Kawaguchi:2018ptg Kawaguchi K., Shibata M., Tanaka M., 2018, @doi [Astrophys. J.] 10.3847/2041-8213/aade02, 865, L21 [Kawaguchi, Shibata & TanakaKawaguchi et al.2020]Kawaguchi:2019nju Kawaguchi K., Shibata M., Tanaka M., 2020, @doi [] 10.3847/1538-4357/ab61f6, https://ui.adsabs.harvard.edu/abs/2020ApJ...889..171K 889, 171 [Kawaguchi, Fujibayashi, Shibata, Tanaka & WanajoKawaguchi et al.2021]Kawaguchi:2020vbf Kawaguchi K., Fujibayashi S., Shibata M., Tanaka M., Wanajo S., 2021, @doi [Astrophys. J.] 10.3847/1538-4357/abf3bc, 913, 100 [Kawaguchi, Fujibayashi, Hotokezaka, Shibata & WanajoKawaguchi et al.2022]Kawaguchi:2022bub Kawaguchi K., Fujibayashi S., Hotokezaka K., Shibata M., Wanajo S., 2022, @doi [Astrophys. J.] 
10.3847/1538-4357/ac6ef7, 933, 22 [Kedia et al.,Kedia et al.2023]Kedia:2022onl Kedia A., et al., 2023, @doi [Phys. Rev. Res.] 10.1103/PhysRevResearch.5.013168, 5, 013168 [Kiuchi, Kyutoku, Sekiguchi & ShibataKiuchi et al.2018]Kiuchi:2017zzg Kiuchi K., Kyutoku K., Sekiguchi Y., Shibata M., 2018, @doi [Phys. Rev. D] 10.1103/PhysRevD.97.124039, 97, 124039 [Kiuchi, Fujibayashi, Hayashi, Kyutoku, Sekiguchi & ShibataKiuchi et al.2022a]Kiuchi:2022nin Kiuchi K., Fujibayashi S., Hayashi K., Kyutoku K., Sekiguchi Y., Shibata M., 2022a, arXiv [Kiuchi, Held, Sekiguchi & ShibataKiuchi et al.2022b]Kiuchi:2022ubj Kiuchi K., Held L. E., Sekiguchi Y., Shibata M., 2022b, @doi [Phys. Rev. D] 10.1103/PhysRevD.106.124041, 106, 124041 [Korobkin et al.,Korobkin et al.2021]Korobkin:2020spe Korobkin O., et al., 2021, @doi [] 10.3847/1538-4357/abe1b5, https://ui.adsabs.harvard.edu/abs/2021ApJ...910..116K 910, 116 [Kramida, Yu. Ralchenko, Reader & and NIST ASD TeamKramida et al.2021]NIST Kramida A., Yu. Ralchenko Reader J., and NIST ASD Team 2021, NIST Atomic Spectra Database (ver. 5.9). Available: https://physics.nist.gov/asd. National Institute of Standards and Technology, Gaithersburg, MD. [KulkarniKulkarni2005]Kulkarni:2005jw Kulkarni S. R., 2005, arXiv [Kupka, Piskunov, Ryabchikova, Stempels & WeissKupka et al.1999]1999A AS..138..119K Kupka F., Piskunov N., Ryabchikova T. A., Stempels H. C., Weiss W. W., 1999, @doi [] 10.1051/aas:1999267, https://ui.adsabs.harvard.edu/abs/1999A AS..138..119K 138, 119 [Kurucz & BellKurucz & Bell1995]1995all..book.....K Kurucz R. L., Bell B., 1995, Atomic line list [Lattimer & SchrammLattimer & Schramm1974]Lattimer:1974slx Lattimer J. M., Schramm D. N., 1974, @doi [Astrophys. J.] 10.1086/181612, 192, L145 [Li & PaczynskiLi & Paczynski1998]Li:1998bw Li L.-X., Paczynski B., 1998, @doi [Astrophys. J.] 10.1086/311680, 507, L59 [Lippuner, Fernández, Roberts, Foucart, Kasen, Metzger & OttLippuner et al.2017]Lippuner:2017bfm Lippuner J., Fernández R., Roberts L. F., Foucart F., Kasen D., Metzger B. D., Ott C. D., 2017, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stx1987, 472, 904 [Lodders, Palme & GailLodders et al.2009]Lodders2009 Lodders K., Palme H., Gail H. P., 2009, @doi [Landolt B&ouml;rnstein] 10.1007/978-3-540-88055-4_34, https://ui.adsabs.harvard.edu/abs/2009LanB...4B..712L 4B, 712 [Margalit & MetzgerMargalit & Metzger2017]Margalit:2017dij Margalit B., Metzger B. D., 2017, @doi [Astrophys. J.] 10.3847/2041-8213/aa991c, 850, L19 [Margalit & PiranMargalit & Piran2020]Margalit2020MNRAS Margalit B., Piran T., 2020, @doi [] 10.1093/mnras/staa1486, https://ui.adsabs.harvard.edu/abs/2020MNRAS.495.4981M 495, 4981 [Metzger & FernándezMetzger & Fernández2014]Metzger:2014ila Metzger B. D., Fernández R., 2014, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stu802, 441, 3444 [Metzger et al.,Metzger et al.2010]Metzger:2010sy Metzger B. D., et al., 2010, @doi [Mon. Not. Roy. Astron. Soc.] 10.1111/j.1365-2966.2010.16864.x, 406, 2650 [Miller et al.,Miller et al.2019]Miller:2019dpt Miller J. M., et al., 2019, @doi [Phys. Rev. D] 10.1103/PhysRevD.100.023008, 100, 023008 [Mösta, Radice, Haas, Schnetter & BernuzziMösta et al.2020]Mosta:2020hlh Mösta P., Radice D., Haas R., Schnetter E., Bernuzzi S., 2020, @doi [Astrophys. J. Lett.] 10.3847/2041-8213/abb6ef, 901, L37 [NakarNakar2007]Nakar:2007yr Nakar E., 2007, @doi [Phys. Rept.] 
10.1016/j.physrep.2007.02.005, 442, 166 [Nakar & PiranNakar & Piran2011]Nakar2011Natur Nakar E., Piran T., 2011, @doi [] 10.1038/nature10365, https://ui.adsabs.harvard.edu/abs/2011Natur.478...82N 478, 82 [Nativi, Bulla, Rosswog, Lundman, Kowal, Gizzi, Lamb & PeregoNativi et al.2020]Nativi:2020moj Nativi L., Bulla M., Rosswog S., Lundman C., Kowal G., Gizzi D., Lamb G. P., Perego A., 2020, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/staa3337, 500, 1772 [Nedora et al.,Nedora et al.2021]Vsevolod:2020pak Nedora V., et al., 2021, @doi [Astrophys. J.] 10.3847/1538-4357/abc9be, 906, 98 [Neuweiler, Dietrich, Bulla, Chaurasia, Rosswog & UjevicNeuweiler et al.2023]Neuweiler:2022eum Neuweiler A., Dietrich T., Bulla M., Chaurasia S. V., Rosswog S., Ujevic M., 2023, @doi [Phys. Rev. D] 10.1103/PhysRevD.107.023016, 107, 023016 [Nissanke, Kasliwal & GeorgievaNissanke et al.2013]Nissanke:2012dj Nissanke S., Kasliwal M., Georgieva A., 2013, @doi [Astrophys. J.] 10.1088/0004-637X/767/2/124, 767, 124 [PaczynskiPaczynski1991]1991AcA....41..257P Paczynski B., 1991, , https://ui.adsabs.harvard.edu/abs/1991AcA....41..257P 41, 257 [Perego, Rosswog, Cabezón, Korobkin, Kaeppeli, Arcones & LiebendorferPerego et al.2014]Perego:2014fma Perego A., Rosswog S., Cabezón R. M., Korobkin O., Kaeppeli R., Arcones A., Liebendorfer M., 2014, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stu1352, 443, 3134 [Perego, Bernuzzi & RadicePerego et al.2019]Perego:2019adq Perego A., Bernuzzi S., Radice D., 2019, @doi [Eur. Phys. J. A] 10.1140/epja/i2019-12810-7, 55, 124 [Perego et al.,Perego et al.2022]Perego:2020evn Perego A., et al., 2022, @doi [] 10.3847/1538-4357/ac3751, https://ui.adsabs.harvard.edu/abs/2022ApJ...925...22P 925, 22 [Piskunov, Kupka, Ryabchikova, Weiss & JefferyPiskunov et al.1995]1995A AS..112..525P Piskunov N. E., Kupka F., Ryabchikova T. A., Weiss W. W., Jeffery C. S., 1995, , https://ui.adsabs.harvard.edu/abs/1995A AS..112..525P 112, 525 [Pognan, Jerkstrand & GrumerPognan et al.2022]Pognan2022MNRAS Pognan Q., Jerkstrand A., Grumer J., 2022, @doi [] 10.1093/mnras/stab3674, https://ui.adsabs.harvard.edu/abs/2022MNRAS.510.3806P 510, 3806 [Price & RosswogPrice & Rosswog2006]Price:2006fi Price D., Rosswog S., 2006, @doi [Science] 10.1126/science.1125201, 312, 719 [Radice, Galeazzi, Lippuner, Roberts, Ott & RezzollaRadice et al.2016]Radice:2016dwd Radice D., Galeazzi F., Lippuner J., Roberts L. F., Ott C. D., Rezzolla L., 2016, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stw1227, 460, 3255 [Rezzolla, Most & WeihRezzolla et al.2018]Rezzolla:2017aly Rezzolla L., Most E. R., Weih L. R., 2018, @doi [Astrophys. J. Lett.] 10.3847/2041-8213/aaa401, 852, L25 [Rossi et al.Rossi et al.2020]Rossi:2019fnm Rossi A., et al., 2020, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/staa479, 493, 3379 [Rosswog, Liebendoerfer, Thielemann, Davies, Benz & PiranRosswog et al.1999]Rosswog:1998hy Rosswog S., Liebendoerfer M., Thielemann F. K., Davies M. B., Benz W., Piran T., 1999, Astron. Astrophys., 341, 499 [Rosswog, Korobkin, Arcones, Thielemann & PiranRosswog et al.2014]Rosswog:2013kqa Rosswog S., Korobkin O., Arcones A., Thielemann F., Piran T., 2014, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stt2502, 439, 744 [Ruffert, Ruffert & JankaRuffert et al.2001]Ruffert:2001gf Ruffert M., Ruffert H. T., Janka H. T., 2001, @doi [Astron. Astrophys.] 10.1051/0004-6361:20011453, 380, 544 [Ruiz, Shapiro & TsokarosRuiz et al.2018]Ruiz:2018wah Ruiz M., Shapiro S. L., Tsokaros A., 2018, @doi [Phys. Rev.] 
10.1103/PhysRevD.98.123017, D98, 123017 [Ryabchikova, Piskunov, Kurucz, Stempels, Heiter, Pakhomov & BarklemRyabchikova et al.2015]2015PhyS...90e4005R Ryabchikova T., Piskunov N., Kurucz R. L., Stempels H. C., Heiter U., Pakhomov Y., Barklem P. S., 2015, @doi [] 10.1088/0031-8949/90/5/054005, https://ui.adsabs.harvard.edu/abs/2015PhyS...90e4005R 90, 054005 [Sarin, Omand, Margalit & JonesSarin et al.2022]Sarin:2022wby Sarin N., Omand C. M. B., Margalit B., Jones D. I., 2022, @doi [] 10.1093/mnras/stac2609, https://ui.adsabs.harvard.edu/abs/2022MNRAS.516.4949S 516, 4949 [Sekiguchi, Kiuchi, Kyutoku & ShibataSekiguchi et al.2015]Sekiguchi:2015dma Sekiguchi Y., Kiuchi K., Kyutoku K., Shibata M., 2015, @doi [Phys. Rev.] 10.1103/PhysRevD.91.064059, D91, 064059 [Sekiguchi, Kiuchi, Kyutoku, Shibata & TaniguchiSekiguchi et al.2016]Sekiguchi:2016bjd Sekiguchi Y., Kiuchi K., Kyutoku K., Shibata M., Taniguchi K., 2016, @doi [Phys. Rev.] 10.1103/PhysRevD.93.124046, D93, 124046 [Shibata & HotokezakaShibata & Hotokezaka2019]Shibata:2019wef Shibata M., Hotokezaka K., 2019, @doi [Ann. Rev. Nucl. Part. Sci.] 10.1146/annurev-nucl-101918-023625, 69, 41 [Shibata, Fujibayashi, Hotokezaka, Kiuchi, Kyutoku, Sekiguchi & TanakaShibata et al.2017]Shibata:2017xdx Shibata M., Fujibayashi S., Hotokezaka K., Kiuchi K., Kyutoku K., Sekiguchi Y., Tanaka M., 2017, @doi [Phys. Rev.] 10.1103/PhysRevD.96.123012, D96, 123012 [Shibata, Zhou, Kiuchi & FujibayashiShibata et al.2019]Shibata:2019ctb Shibata M., Zhou E., Kiuchi K., Fujibayashi S., 2019, @doi [Phys. Rev. D] 10.1103/PhysRevD.100.023015, 100, 023015 [Shibata, Fujibayashi & SekiguchiShibata et al.2021a]Shibata:2021bbj Shibata M., Fujibayashi S., Sekiguchi Y., 2021a, @doi [Phys. Rev. D] 10.1103/PhysRevD.103.043022, 103, 043022 [Shibata, Fujibayashi & SekiguchiShibata et al.2021b]Shibata:2021xmo Shibata M., Fujibayashi S., Sekiguchi Y., 2021b, @doi [Phys. Rev. D] 10.1103/PhysRevD.104.063026, 104, 063026 [Siegel & MetzgerSiegel & Metzger2017]Siegel:2017nub Siegel D. M., Metzger B. D., 2017, @doi [Phys. Rev. Lett.] 10.1103/PhysRevLett.119.231102, 119, 231102 [Siegel & MetzgerSiegel & Metzger2018]Siegel:2017jug Siegel D. M., Metzger B. D., 2018, @doi [Astrophys. J.] 10.3847/1538-4357/aabaec, 858, 52 [Sneppen, Watson, Bauswein, Just, Kotak, Nakar, Poznanski & SimSneppen et al.2023]Sneppen:2023vkk Sneppen A., Watson D., Bauswein A., Just O., Kotak R., Nakar E., Poznanski D., Sim S., 2023, @doi [Nature] 10.1038/s41586-022-05616-x, 614, 436 [Steiner, Hempel & FischerSteiner et al.2013]Steiner:2012rk Steiner A. W., Hempel M., Fischer T., 2013, @doi [Astrophys. J.] 10.1088/0004-637X/774/1/17, 774, 17 [Sutherland et al.,Sutherland et al.2015]2015A A...575A..25S Sutherland W., et al., 2015, @doi [] 10.1051/0004-6361/201424973, https://ui.adsabs.harvard.edu/abs/2015A A...575A..25S 575, A25 [Tanaka & HotokezakaTanaka & Hotokezaka2013]Tanaka:2013ana Tanaka M., Hotokezaka K., 2013, @doi [Astrophys. J.] 10.1088/0004-637X/775/2/113, 775, 113 [Tanaka et al.Tanaka et al.2017]Tanaka:2017qxj Tanaka M., et al., 2017, @doi [Publ. Astron. Soc. Jap.] 10.1093/pasj/psx121, 69, Publications of the Astronomical Society of Japan, Volume 69, Issue 6, 1 December 2017, 102, https://doi.org/10.1093/pasj/psx121 [Tanaka et al.Tanaka et al.2018]Tanaka:2017lxb Tanaka M., et al., 2018, @doi [Astrophys. J.] 10.3847/1538-4357/aaa0cb, 852, 109 [Tanaka, Kato, Gaigalas & KawaguchiTanaka et al.2020]Tanaka:2019iqp Tanaka M., Kato D., Gaigalas G., Kawaguchi K., 2020, @doi [Mon. Not. Roy. Astron. Soc.] 
10.1093/mnras/staa1576, 496, 1369 [Tanvir, Levan, Fruchter, Hjorth, Wiersema, Tunnicliffe & de Ugarte PostigoTanvir et al.2013]Tanvir:2013pia Tanvir N. R., Levan A. J., Fruchter A. S., Hjorth J., Wiersema K., Tunnicliffe R., de Ugarte Postigo A., 2013, @doi [Nature] 10.1038/nature12505, 500, 547 [Tarumi, Hotokezaka, Domoto & TanakaTarumi et al.2023]Tarumi:2023apl Tarumi Y., Hotokezaka K., Domoto N., Tanaka M., 2023, arXiv [Thone, de Ugarte Postigo, Gorosabel, Tanvir & FynboThone et al.2013]2013GCN.14744....1T Thone C. C., de Ugarte Postigo A., Gorosabel J., Tanvir N., Fynbo J. P. U., 2013, GRB Coordinates Network, https://ui.adsabs.harvard.edu/abs/2013GCN.14744....1T 14744, 1 [Timmes & SwestyTimmes & Swesty2000]2000ApJS..126..501T Timmes F. X., Swesty F. D., 2000, @doi [] 10.1086/313304, https://ui.adsabs.harvard.edu/abs/2000ApJS..126..501T 126, 501 [TrojaTroja2023]Troja:2023cev Troja E., 2023, @doi [Universe] 10.3390/universe9060245, 9, 245 [Villar et al.Villar et al.2017]Villar:2017wcc Villar V. A., et al., 2017, @doi [Astrophys. J.] 10.3847/2041-8213/aa9c84, 851, L21 [Wanajo, Sekiguchi, Nishimura, Kiuchi, Kyutoku & ShibataWanajo et al.2014]Wanajo:2014wha Wanajo S., Sekiguchi Y., Nishimura N., Kiuchi K., Kyutoku K., Shibata M., 2014, @doi [Astrophys. J.] 10.1088/2041-8205/789/2/L39, 789, L39 [Watson et al.,Watson et al.2019]2019Natur.574..497W Watson D., et al., 2019, @doi [] 10.1038/s41586-019-1676-3, https://ui.adsabs.harvard.edu/abs/2019Natur.574..497W 574, 497 [Waxman, Ofek, Kushnir & Gal-YamWaxman et al.2018]Waxman:2017sqv Waxman E., Ofek E. O., Kushnir D., Gal-Yam A., 2018, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/sty2441, 481, 3423 [Wollaeger et al.,Wollaeger et al.2018]Wollaeger:2017ahm Wollaeger R. T., et al., 2018, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/sty1018, 478, 3298 [Wu, Fernández, Martínez-Pinedo & MetzgerWu et al.2016]Wu:2016pnw Wu M.-R., Fernández R., Martínez-Pinedo G., Metzger B. D., 2016, @doi [Mon. Not. Roy. Astron. Soc.] 10.1093/mnras/stw2156, 463, 2323 [Wu, Barnes, Martinez-Pinedo & MetzgerWu et al.2019]Wu:2018mvg Wu M.-R., Barnes J., Martinez-Pinedo G., Metzger B. D., 2019, @doi [Phys. Rev. Lett.] 10.1103/PhysRevLett.122.062701, 122, 062701 [Wu, Ricigliano, Kashyap, Perego & RadiceWu et al.2022]Wu:2021ibi Wu Z., Ricigliano G., Kashyap R., Perego A., Radice D., 2022, @doi [] 10.1093/mnras/stac399, https://ui.adsabs.harvard.edu/abs/2022MNRAS.512..328W 512, 328 [Zhu, Yang, Liu, Huang, Zhang, Li, Yu & GaoZhu et al.2020]Zhu:2020inc Zhu J.-P., Yang Y.-P., Liu L.-D., Huang Y., Zhang B., Li Z., Yu Y.-W., Gao H., 2020, @doi [Astrophys. J.] 10.3847/1538-4357/ab93bf, 897, 20 [Zhu, Lund, Barnes, Sprouse, Vassh, McLaughlin, Mumpower & SurmanZhu et al.2021]Zhu:2020eyk Zhu Y. L., Lund K., Barnes J., Sprouse T. M., Vassh N., McLaughlin G. C., Mumpower M. R., Surman R., 2021, @doi [Astrophys. J.] 10.3847/1538-4357/abc69e, 906, 94 § ESTIMATE OF UNCERTAINTIES DUE TO NON-LTE EFFECTS In our present radiative-transfer simulations, the LTE condition is assumed to determine the ionization/excitation populations of atoms. This assumption can be invalid for the later phase of kilonova emission, at which the ionization of the atoms caused by radioactive decay becomes more significant than the recombination of ions <cit.>. <cit.> indicate that such non-LTE effects may suppress the neutral and first ionized ions in the outer part of the ejecta even in the earlier phases. 
Such modifications in the ionization/excitation populations of atoms can have a great impact on the opacity and the resultant light curves. Because computing the ionization/excitation populations is challenging due to its computational complexity and the lack of atomic data for r-process elements (see ), we here provide qualitative estimates of the impact of the non-LTE effects on the kilonova light curves, following the same prescription that we applied in our previous studies <cit.>; we perform the radiative transfer simulations with a hypothetical setup in which both neutral and singly ionized atoms are artificially forced to be ionized to the second ionization state. Note that, for simplicity, this prescription is applied to the whole ejecta, including high-density regions. Fig. <ref> shows the g- and K-band light curves for models SFHo-135135 and SFHo-125145 obtained with these hypothetical setups. As is also found in <cit.>, the emission in the optical wavelengths is enhanced by artificially increasing the ionization degrees. Yet, the brightness of the g-band emission is not high enough to explain the brightness of AT2017gfo. The emission in the NIR wavelengths in the late phase becomes even fainter and more inconsistent with the observation. These results indicate that a BNS that results in a short-lived remnant MNS is, at least within our studied range of binary configurations, likely to be different from the BNS of GW170817.
http://arxiv.org/abs/2306.08206v1
20230614021959
Ball Trajectory Inference from Multi-Agent Sports Contexts Using Set Transformer and Hierarchical Bi-LSTM
[ "Hyunsung Kim", "Han-Jun Choi", "Chang Jo Kim", "Jinsung Yoon", "Sang-Ki Ko" ]
cs.MA
[ "cs.MA", "cs.AI", "68T20 (Primary) 68U35, 68T30 (Secondary)" ]
0000-0002-6286-5160 Fitogether Inc. Seoul South Korea [email protected] Kangwon National University Chuncheon South Korea [email protected] Fitogether Inc. Seoul South Korea [email protected] Fitogether Inc. Seoul South Korea [email protected] 0000-0002-5406-5104 Kangwon National University Chuncheon South Korea Fitogether Inc. Seoul South Korea [email protected] As artificial intelligence spreads out to numerous fields, the application of AI to sports analytics is also in the spotlight. However, one of the major challenges is the difficulty of automated acquisition of continuous movement data during sports matches. In particular, it is a conundrum to reliably track a tiny ball on a wide soccer pitch with obstacles such as occlusion and imitations. Tackling the problem, this paper proposes an inference framework of ball trajectory from player trajectories as a cost-efficient alternative to ball tracking. We combine Set Transformers to get permutation-invariant and equivariant representations of the multi-agent contexts with a hierarchical architecture that intermediately predicts the player ball possession to support the final trajectory inference. Also, we introduce the reality loss term and postprocessing to secure the estimated trajectories to be physically realistic. The experimental results show that our model provides natural and accurate trajectories as well as admissible player ball possession at the same time. Lastly, we suggest several practical applications of our framework including missing trajectory imputation, semi-automated pass annotation, automated zoom-in for match broadcasting, and calculating possession-wise running performance metrics. <ccs2012> <concept> <concept_id>10010520.10010553.10010562</concept_id> <concept_desc>Computer systems organization Embedded systems</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010520.10010575.10010755</concept_id> <concept_desc>Computer systems organization Redundancy</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10010520.10010553.10010554</concept_id> <concept_desc>Computer systems organization Robotics</concept_desc> <concept_significance>100</concept_significance> </concept> <concept> <concept_id>10003033.10003083.10003095</concept_id> <concept_desc>Networks Network reliability</concept_desc> <concept_significance>100</concept_significance> </concept> </ccs2012> [500]Information systems Spatial-temporal systems [500]Computing methodologies Neural networks [300]Computing methodologies Spatial and physical reasoning [300]Computing methodologies Learning latent representations Ball Trajectory Inference from Multi-Agent Sports Contexts Using Set Transformer and Hierarchical Bi-LSTM Sang-Ki Ko July 31, 2023 ========================================================================================================= § INTRODUCTION With the rapid progress in artificial intelligence (AI) and machine learning, there also is a growing interest in automated sports analytics for providing a competitive advantage to teams or individual players <cit.>. However, one of the major challenges is still the data collection process as the huge amount of high-quality data is necessary to apply advanced machine learning techniques for sophisticated analyses. 
In competitive team sports such as soccer, basketball, and ice hockey, there is an enormous amount of accessible video data including broadcast videos, but it is very difficult to extract essential information such as the trajectories of the players and the ball from these videos. In particular, ball tracking is a critical problem for video-based analysis <cit.> in team sports. However, it is well known to be very difficult to reliably track the ball from videos due to the small size of the ball and the occlusion problem. Wang et al. <cit.> proposed a novel approach by formulating a tracking algorithm in terms of deciding who has the ball at a given time. Maksai et al. <cit.> introduced a more principled approach by modeling the interaction between the ball and the players and even the physical constraints of ball trajectories for better ball tracking performance on soccer videos. On the industry side, FIFA recently introduced the semi-automated offside technology <cit.> at the World Cup Qatar 2022 with automatic ball tracking by placing an inertial measurement unit (IMU) sensor inside the ball, but it is not broadly applicable due to its very high cost <cit.>. Compared to ball tracking, it is relatively easier to acquire player trajectories since players are much bigger than the ball in videos. Moreover, there are different types of systems for tracking players in team sports other than video-based systems, such as global positioning systems (GPS) or local positioning systems (LPS). Currently, there are various providers of player tracking data, using either video-based tracking <cit.>, GPS-based tracking <cit.>, or LPS-based tracking <cit.>. Some video-based data providers acquire ball tracking data by first collecting match events and combining them with player tracking data <cit.>, but this still requires a lot of manual work for event annotation. In this paper, we propose a framework for inferring ball trajectories from player trajectories instead of directly tracking the ball in sports videos. We implement permutation-invariant and equivariant encoders using Set Transformers <cit.> to represent the multi-agent contexts of sports games. Inspired by Zhan et al. <cit.>, we combine these context encoders with a hierarchical recurrence structure that intermediately predicts domain-specific semantics, such as player-level ball possession, to enhance the final prediction performance. Additionally, we introduce the reality loss term and a postprocessing step to ensure that the estimated trajectories are physically realistic in that they only change direction when the ball is carried or kicked by players. The experimental results show that our method predicts accurate ball trajectories with a mean position error smaller than 3.7 m while predicting player ball possession with an accuracy of 64.7% as an intermediate product. The main contribution of our study is that we give shape to a new way of collecting ball data in sports, relying neither on heavy camera infrastructure nor on hard manual work but on machine learning techniques and player tracking data that are relatively easy to obtain. Moreover, it enables semi-automated event annotation by detecting ball-related events so that humans only need to correct errors. We expect that the proposed method will contribute to the sports industry by lowering the cost of data acquisition and eventually the entry barrier for the sports analytics ecosystem. 
§ RELATED WORK §.§ Ball Trajectory Inference from Player Trajectories in Team Sports Though ball tracking from sports videos is a topic of interest in computer vision, only a few studies <cit.> tried to estimate ball trajectories not relying on optical tracking but only using players' movement data. Amirli et al. <cit.> aggregated players' locations and speeds to make handcrafted input features and constructed a neural network regressor to estimate ball locations. However, they did not employ sophisticated architectures to encode the sequential nature or permutation-invariance of multi-agent trajectories. As a result, their framework shows too large position errors (7.56m along the x-axis and 5.01m along the y-axis) to be utilized in practice. On the other hand, Gongora <cit.> adopted Set Transformer <cit.> to encode permutation-invariant game contexts and used sequence models such as Transformer <cit.> or InceptionTime <cit.> to predict ball trajectories. The difference of ours from this approach is that we build a hierarchical architecture and employ another type of sequence model (i.e., Bi-LSTM <cit.>). In Section <ref>, we demonstrate that the use of a hierarchical architecture and Bi-LSTMs both promote more accurate ball trajectory prediction and explain why RNN models outperform Transformer in this problem. §.§ Multi-Agent Trajectory Prediction Predicting the future trajectories of objects is a crucial task, especially for autonomous platforms like self-driving cars or social robots. Zhang et al. <cit.> proposed the end-to-end Spatio-Temporal-Interactive Network (STINet) to model pedestrians. Felsen et al. <cit.> constructed a Conditional Variational Autoencoder (CVAE) for predicting adversarial multi-agent motion. Yeh et al. <cit.> proposed the Graph Variational RNN (GVRNN) using graph structure for the permutation-equivariant representation of multi-agent trajectories in sports games. Zhan et al. <cit.> introduced a hierarchical architecture that first predicts the long-term intents of agents and then generates future trajectories conditioned on the intents. Meanwhile, the problem of missing trajectory imputation is also a significant topic in spatiotemporal data mining. BRITS <cit.> (Bidirectional Recurrent Imputation for Time Series) is a method based on bidirectional RNNs for missing value imputation in time series data. However, BRITS cannot resolve the error propagation problem when imputing long-range sequences due to its autoregressive nature. In order to better handle the problem, NAOMI <cit.> (Non-Autoregressive Multiresolution Imputation) exploits the multiresolution structure that decodes recursively from coarse to fine-grained resolutions using a divide-and-conquer strategy. Also, Qi et al. <cit.> proposed an imitative non-autoregressive modeling method to simultaneously handle the trajectory prediction task and the missing value imputation task. Omidshafiei et al. <cit.> introduced Graph Imputer for predicting players' off-screen behavior in soccer, where the model is similar to GVRNN in that it combines graph networks and the Variational RNN <cit.>, except for using a bidirectional structure since it can observe partial future trajectories. Our ball trajectory inference task is closely related to the aforementioned problems, but there is a clear difference in the detailed setup. 
The goal of the original trajectory prediction or imputation problem is to predict the behavior of intelligent agents in unknown frames, given the information including the target agent's partial trajectories. In contrast, we have to estimate the entire trajectory of the ball that does not have an intent, given the trajectories of other agents that affect the target trajectories. Therefore, believing that our problem is deterministic rather than stochastic, we construct an LSTM-based regression model instead of generative models that many other trajectory prediction frameworks are based on. § LEARNING APPROACH In Section <ref>, we formally define the considered problem. Section <ref> and Section <ref> explain the two notable points of the neural network part of our framework, where we elaborate on the details in Section <ref>. Section <ref> describes the loss function for training the network, and Section <ref> introduces the rule-based postprocessing algorithm to enforce the output trajectories to be realistic. §.§ Problem Formulation Our ultimate goal is to find the ball conditioned on the player locations. However, since we construct a hierarchical framework that also predicts player-level ball possession, our approach actually solves the following two problems at once. * Ball possessor prediction: Given a set X_1:T^P = {𝐱_1:T^p }_p ∈ P of player trajectories, find the player q_t that possesses the ball at each time t where 1 ≤ t ≤ T. (We describe the definition of player possession in Section <ref>.) * Ball trajectory prediction: Given a set X_1:T^P = {𝐱_1:T^p }_p ∈ P of player trajectories, find the ball trajectory 𝐲_1:T. §.§ Player-Level Ball Possession as an Intermediate Target Variable Zhan et al. <cit.> proposed a hierarchical VRNN framework for future trajectory generation of multiple agents, using intermediate weak labels that capture macroscopic behavioral semantics. Inspired by the study, we also design a hierarchical Bi-LSTM <cit.> structure with intermediate labels to enhance the final prediction performance. In our study, we utilize player-level ball possession as an intermediate target variable. Namely, the model first produces 𝐠̂_t = (ĝ_t^p_1, … , ĝ_t^p_N) where ĝ_t^p_i is the probability that the player p_i possesses the ball at t, and predicts 𝐲̂_t conditioned on 𝐠̂_t. Since the ball is always either controlled by a player, in transition from one player to another, or out of play, this information about player ball possession can be a strong hint for final trajectory inference. Here we define that a player “possesses” the ball if the player is controlling the ball or he or she is the next controller of the ball. That is, we label that the ball possession changes from player A to B right after A passes the ball to B. The reason why we assign a specific player even when the ball is in transition is that it causes a significant class imbalance if we label the possession of those timesteps as “void” across the board. §.§ Permutation-Invariant and Equivariant Representations of Sports Contexts An overall game context of competitive team sports is partially permutation-invariant. That is, the order of players in each team is permutable when representing a game context, while players in a team are not exchangeable with those in the other team. On the other hand, player-level tasks such as player trajectory prediction or player ball possession inference need to be permutation-equivariant in that a permutation of the input players leads to the same permutation at the output. 
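To make these two properties concrete, the toy NumPy sketch below contrasts a permutation-equivariant per-player encoding with a permutation-invariant pooled encoding. It is only an illustration of the definitions, not the Set Transformer used in our framework; the random shared weight matrix and mean pooling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, feat_dim, emb_dim = 11, 6, 8
W = rng.normal(size=(feat_dim, emb_dim))       # shared per-player weight (toy stand-in for an encoder)

def equivariant_encode(X):
    """Per-player embeddings conditioned on a set summary: permuting rows of X permutes rows of the output."""
    summary = X.mean(axis=0, keepdims=True)     # order-free set summary
    return np.tanh((X + summary) @ W)           # shape (n_players, emb_dim)

def invariant_encode(X):
    """Single context vector: permuting rows of X leaves the output unchanged."""
    return equivariant_encode(X).mean(axis=0)   # shape (emb_dim,)

X = rng.normal(size=(n_players, feat_dim))      # one team's player features at a single frame
perm = rng.permutation(n_players)

assert np.allclose(equivariant_encode(X)[perm], equivariant_encode(X[perm]))  # equivariance
assert np.allclose(invariant_encode(X), invariant_encode(X[perm]))            # invariance
```

Simple mean pooling satisfies both properties but has limited capacity for modeling interactions among players, which motivates the attention-based set encoder adopted below.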
To this end, we adopt the Set Transformer <cit.>, a state-of-the-art methodology for permutation-invariant or equivariant embedding. It consists of an encoder block and a decoder block, where the encoder produces a permutation-equivariant embedding of the set input and the decoder returns a permutation-invariant embedding of a fixed dimension. Hence, we use the whole structure of a Set Transformer for the permutation-invariant representation of game contexts, while taking only the encoder part of a Set Transformer for the permutation-equivariant representation. In our study, the ball trajectory prediction is a partially permutation-invariant (PPI) task and the ball possessor prediction is a partially permutation-equivariant (PPE) one. For the former, we construct a PPI encoder consisting of a Set Transformer per team. For the latter, on the other hand, we additionally construct fully permutation-equivariant (FPE) and fully permutation-invariant (FPI) encoders as well as a PPE encoder and combine them to improve the performance. We explain the detailed architectures of these encoders in Section <ref> and the reasons for adopting each part in Appendix <ref>. §.§ Detailed Hierarchical Framework In this section, we elaborate on our hierarchical inference framework. The model consists of the Player Possession Classifier (PPC), which estimates the players' ball possession probabilities 𝐠_1:T, and the Ball Trajectory Regressor (BTR), which finds the ball trajectory 𝐲_1:T using the information from the PPC. Figure <ref> depicts the model architectures. §.§.§ Player Possession Classifier First, we convert the input features into context-aware player representations in different ways by partially permutation-equivariant encoding (PPE), fully permutation-equivariant encoding (FPE), and fully permutation-invariant encoding (FPI). In Appendix <ref>, we elucidate why these three variants of the game context encoder are needed by presenting intuitive reasons and carrying out an ablation study. For PPE encoding, we employ a Set Transformer encoder (denoted as ST-Encoder below) for the input player features {𝐱_t^p }_p ∈ P_k of each team P_k (k = 1, 2) at time t to get teammate-aware player embeddings {𝐳_g,t^p }_p ∈ P_k as follows: (𝐳_g,t^p_1, …, 𝐳_g,t^p_n) = ST-Encoder (𝐱_t^p_1, …, 𝐱_t^p_n), (𝐳_g,t^p_n+1, …, 𝐳_g,t^p_2n) = ST-Encoder (𝐱_t^p_n+1, …, 𝐱_t^p_2n), 𝐳_g,t^p_2n+i = FC (𝐱_t^p_2n+i), i = 1, …, 4, where { p_1, …, p_n } and { p_n+1, …, p_2n} are the players in teams P_1 and P_2, respectively, and P_0 = { p_2n+i}_i=1^4 denotes the four ball-out states corresponding to the ball being out of play over each of the four pitch lines. We set the input coordinates 𝐱_t^p_2n+i for these ball-out states to the middle points of the pitch lines. 
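The snippet below sketches this per-team PPE encoding step in PyTorch. The TinySetEncoder is a stand-in for the ST-Encoder (a single self-attention block rather than the actual Set Transformer), and sharing one encoder across the two teams, the module sizes, and the use of nn.MultiheadAttention are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class TinySetEncoder(nn.Module):
    """Stand-in for the ST-Encoder: one self-attention block, permutation-equivariant over players."""
    def __init__(self, in_dim, emb_dim, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(in_dim, emb_dim)
        self.attn = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, x):                        # x: (batch, n_players, in_dim)
        h = self.proj(x)
        a, _ = self.attn(h, h, h)                # players attend to their teammates
        return self.ff(h + a)                    # (batch, n_players, emb_dim)

in_dim, emb_dim, n = 6, 16, 11                   # six per-player features, 16-dim PPE embedding
team_encoder = TinySetEncoder(in_dim, emb_dim)   # applied to each team separately in this sketch
ball_out_fc = nn.Linear(in_dim, emb_dim)         # FC layer for the four ball-out states

x_team1 = torch.randn(1, n, in_dim)              # team P_1 features at time t
x_team2 = torch.randn(1, n, in_dim)              # team P_2 features at time t
x_out = torch.randn(1, 4, in_dim)                # four ball-out states (pitch-line midpoints + dynamics)

z_team1 = team_encoder(x_team1)                  # teammate-aware embeddings for P_1
z_team2 = team_encoder(x_team2)                  # teammate-aware embeddings for P_2
z_out = ball_out_fc(x_out)                       # embeddings for the ball-out states
z_ppe = torch.cat([z_team1, z_team2, z_out], dim=1)   # (1, 2n + 4, emb_dim)
```

Note that these per-team embeddings capture teammates' movements but not the behaviors of the opposing team.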
Hence, we apply an ST-Encoder to the entire set of player features to get FPE embeddings that take the overall game context into account, i.e., (𝐳̃_g,t^p_1, …, 𝐳̃_g,t^p_2n+4) = ST-Encoder (𝐱_t^p_1, …, 𝐱_t^p_2n+4). Moreover, we input all the player features to a Set Transformer including both the encoder and decoder parts to get the FPI embedding 𝐳̃_g,t of the game context, i.e., 𝐳̃_g,t = SetTransformer (𝐱_t^p_1, …, 𝐱_t^p_2n+4). Then, player-wise Bi-LSTMs with shared weights update the joint hidden states 𝐡_g,t^p = (𝐡_g,t^p,f, 𝐡_g,t^p,b) for each component p ∈P̃ = P_1 ∪ P_2 ∪ P_0 using the input features {𝐱_t^p }, the PPE embeddings {𝐳_g,t^p }, the FPE embeddings {𝐳̃_g,t^p }, and the FPI embedding 𝐳̃_g,t to predict the player possession probabilities ĝ_t^p as follows: 𝐡_g,t^p,f = LSTM^f (𝐱_t^p, 𝐳_g,t^p, 𝐳̃_g,t^p, 𝐳̃_g,t; 𝐡_g,t-1^p,f), 𝐡_g,t^p,b = LSTM^b (𝐱_t^p, 𝐳_g,t^p, 𝐳̃_g,t^p, 𝐳̃_g,t; 𝐡_g,t+1^p,b), ĝ_t^p = FC (𝐡_g,t^p). §.§.§ Ball Trajectory Regressor For the final trajectory prediction, the model produces a partially permutation-invariant (PPI) embedding by deploying a Set Transformer for each team. Here, the hidden states 𝐡_g,t^p and the possession probabilities ĝ_t^p resulting from the PPC are concatenated with each player's input features 𝐱_t^p before passing through the layer. That is, 𝐳_t^P_1 = SetTransformer (𝐱̃_t^p_1, …, 𝐱̃_t^p_n), 𝐳_t^P_2 = SetTransformer (𝐱̃_t^p_n+1, …, 𝐱̃_t^p_2n), 𝐳_t^P_0 = FC (𝐱̃_t^p_2n+1, …, 𝐱̃_t^p_2n+4), 𝐳̃_t = FC (𝐳_t^P_1, 𝐳_t^P_2, 𝐳_t^P_0), where 𝐱̃_t^p = (𝐱_t^p, 𝐡_g,t^p, ĝ_t^p) for p ∈P̃. Finally, the ball-possession-aware context embedding 𝐳̃_t goes into a Bi-LSTM that returns a hidden state 𝐡_t = (𝐡_t^f, 𝐡_t^b), which is eventually converted into the predicted ball location 𝐲̂_t by a fully-connected layer, i.e., 𝐡_t^f = LSTM^f (𝐳̃_t; 𝐡_t-1^f), 𝐡_t^b = LSTM^b (𝐳̃_t; 𝐡_t+1^b), 𝐲̂_t = FC (𝐡_t). §.§ Loss Function Our hierarchical models are trained using a loss function consisting of three parts. The first term is the mean squared error (MSE) loss commonly used in regression tasks: ℒ^MSE(𝐲̂_1:T, 𝐲_1:T) = 1/T ∑_t=1^T ‖𝐲̂_t - 𝐲_t‖_2^2. While the MSE forces the model to predict trajectories close to the true ones, it does not guarantee that the predicted trajectories are physically realistic. Accordingly, a model trained only with the MSE tends to produce unrealistic trajectories that change direction even when there is no player to control the ball, as the green curve in Fig. <ref> shows. Therefore, we design the reality loss as the second term under the assumption that the ball does not drastically change direction if there is no player close enough to it. To elaborate, we first calculate the course angle of the predicted ball trajectory 𝐲̂_1:T by θ_t = arccos( 𝐯_t ·𝐯_t+1 / (‖𝐯_t‖ ‖𝐯_t+1‖) ), where 𝐯_t = 𝐲̂_t - 𝐲̂_t-1 is the velocity of the ball. Also, we denote the distance between the ball and the nearest player by d_t = min_p ∈ P ‖𝐲̂_t - 𝐱_t^p‖. Using θ_t and d_t, the reality loss is defined as ℒ^Real(𝐲̂_1:T; X_1:T^P) = 1/(T-2) ∑_t=2^T-1 tanh(θ_t) · d_t. Intuitively, the reality loss increases in situations where the ball changes its heading direction while there is no player close to it. One can thus train the model to return more realistic trajectories (i.e., tending to change the heading direction only near a player) by adding the reality loss term rather than by using the MSE loss alone; a short sketch of this term is given below. 
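The following NumPy sketch computes the reality loss exactly as defined above (course angle from consecutive velocity vectors, nearest-player distance, and the mean of tanh(θ_t)·d_t over interior time steps); the small ε inside arccos and the toy data at the bottom are illustrative assumptions.

```python
import numpy as np

def reality_loss(y_hat, players, eps=1e-8):
    """Reality loss: mean of tanh(theta_t) * d_t over interior time steps.

    y_hat   : (T, 2) predicted ball locations
    players : (T, N, 2) player locations
    """
    v = np.diff(y_hat, axis=0)                          # v_t = y_t - y_{t-1}, shape (T-1, 2)
    dot = (v[:-1] * v[1:]).sum(axis=1)                  # v_t . v_{t+1}
    norm = np.linalg.norm(v[:-1], axis=1) * np.linalg.norm(v[1:], axis=1)
    theta = np.arccos(np.clip(dot / (norm + eps), -1.0, 1.0))   # course angles at interior steps
    d = np.linalg.norm(players[1:-1] - y_hat[1:-1, None, :], axis=2).min(axis=1)  # nearest-player distance
    return np.mean(np.tanh(theta) * d)

# toy check: a jittery (unrealistic) trajectory far from most players yields a sizeable loss
T, N = 100, 22
rng = np.random.default_rng(1)
ball = np.cumsum(rng.normal(scale=0.5, size=(T, 2)), axis=0)    # jittery random-walk trajectory
players = rng.uniform(0, 105, size=(T, N, 2))                   # random player positions on a 105 m-scale pitch
print(f"reality loss of a jittery trajectory: {reality_loss(ball, players):.2f}")
```

An accurate intermediate prediction of player ball possession also leads to an accurate prediction of the final ball trajectory.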
Thus, we add the cross-entropy loss ℒ^CE(𝐠̂_1:T, 𝐠_1:T) = -1/T ∑_t=1^T ∑_k=1^K g_t^k logĝ_t^k for predicting ball possession as the last term of the loss function. In summary, the total loss function is defined as ℒ = ℒ^MSE + λ^Realℒ^Real + λ^CEℒ^CE, where λ^Real, λ^CE≥ 0 are weights controlling the impact of the corresponding loss terms on the total loss. §.§ Rule-Based Postprocessing Although the trajectories estimated by our model are roughly similar to the true trajectories in terms of the position error at each time, they may not be realistic since the neural network does not strictly enforce the output to follow physical constraints. Note that the reality loss term does help the output avoid absurd movements, but it often fails to keep the trajectory realistic as a whole. In addition, by the definition described in Section <ref>, the resulting ball possession and trajectory predictions do not indicate whether the ball is controlled by a player or moving from one player to another at a given time. Hence, we execute a rule-based postprocessing algorithm to decide whether a player is carrying the ball or whether it is in transition from one player to another, and fine-tune the predicted trajectory based on this information. To be specific, we obtain the possession score ŝ_t^p for each player p at time t by dividing the possession probability ĝ_t^p by the distance ‖𝐲̂_t - 𝐱_t^p‖ from the predicted ball location. This score lets us deem that the ball has not yet arrived at the possessor when the possession probability is high but the predicted ball is far from the player. We then apply the following rules: * If ŝ_τ^q = max_p ∈P̃ŝ_τ^p > 0.5, the player q touches the ball at τ. * If 0.2 < ŝ_τ^q = max_p ∈P̃ŝ_τ^p ≤ 0.5 and ŝ_τ^q is a local maximum of the function max_p ∈P̃ŝ_t^p of t, the player q touches the ball at τ (observing that there is not enough time for the possession probability to increase in one-touch pass situations). * Otherwise, the ball is moving from one player to another. Figure <ref> shows the possession probability and score plots in a sample time interval. The possession scores in <ref> are sharper than the probabilities in <ref>, so we can distinguish transition intervals from touched intervals based on the score values. After partially assigning ball-touching players by the above rules, we set the ball locations for the “assigned” time steps to the locations of the ball-touching players. Lastly, we reconstruct the entire trajectory by linear interpolation over the unassigned time intervals. Figure <ref> shows two examples of raw predictions from models trained with (orange) and without (green) the reality loss mentioned in Section <ref>, together with the postprocessed trajectory (pink) of the model trained with the reality loss. One can observe that both the reality loss term and the postprocessing step help the framework return more natural trajectories, in that the ball changes its direction only when it is carried or kicked by a player. § EXPERIMENTS In this section, we implement several baseline frameworks and compare their performance with that of our framework to justify our design choices. 
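For reference, the rule-based postprocessing described in the previous subsection, whose effect is evaluated below via the `-PP' variants, can be summarized by the following simplified sketch; the handling of degenerate cases (e.g., windows without any touched frame) and the uniform treatment of the ball-out states are assumptions of this sketch rather than details of our implementation.

```python
import numpy as np

def postprocess(y_hat, players, g_hat, hi=0.5, lo=0.2, eps=1e-8):
    """Simplified rule-based postprocessing of a predicted ball trajectory.

    y_hat   : (T, 2) raw predicted ball locations
    players : (T, N, 2) locations of the players (and ball-out states)
    g_hat   : (T, N) predicted possession probabilities
    """
    dist = np.linalg.norm(players - y_hat[:, None, :], axis=2)       # (T, N)
    score = g_hat / (dist + eps)                                     # possession scores s_t^p
    best, owner = score.max(axis=1), score.argmax(axis=1)
    # rule 1: clear touch; rule 2: moderate score that is a local maximum in time (one-touch passes)
    local_max = np.concatenate(([False], (best[1:-1] > best[:-2]) & (best[1:-1] > best[2:]), [False]))
    touched = (best > hi) | ((best > lo) & (best <= hi) & local_max)
    if not touched.any():                                            # degenerate case: keep the raw prediction
        return y_hat
    snapped = players[touched, owner[touched]]                       # ball snapped to the touching player
    idx = np.arange(len(y_hat))
    y_out = np.empty_like(y_hat)
    for k in range(2):                                               # linear interpolation between touches
        y_out[:, k] = np.interp(idx, idx[touched], snapped[:, k])
    return y_out
```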
§.§ Data Preparation To show that our model is applicable to data from various sources, we use the mixed dataset including GPS tracking data measured by Fitogether <cit.> from 15 matches in K League 2020, the Korean professional soccer league, and publicly accessible optical tracking and event data[<https://github.com/metrica-sports/sample-data>] acquired from 3 sample matches provided by Metrica Sports <cit.>. We perform the train-test split as follows: * Training data: 10 matches of Fitogether's GPS data and Metrica Sample Game 1 and 2 * Validation data: 2 matches of Fitogether's GPS data and the first half of Metrica Sample Game 3 * Test data: 3 matches of Fitogether's GPS data and the second half of Metrica Sample Game 3 For Metrica Sample Game 3, there are mismatch errors between player IDs in tracking data and event data. Thus, we have reassigned the player IDs in the event records based on the distances between the ball and the players and uploaded the corrected version to our GitHub repository[<https://github.com/pientist/ballradar.git>]. One challenge is that goalkeepers (GK) are not measured by GPS trackers very often. Therefore, to predict the ball trajectory when only outfield players’ trajectories are given in real-world situations, we also train the GK trajectory prediction model using the Metrica data and use the inferred GK trajectories by applying the model to the Fitogether data. (More details about the GK trajectory prediction model are described in Appendix <ref>.) After generating the GK trajectories, we combine the manually annotated event data with GPS-based player trajectories to reconstruct the ball trajectories to be used as the ground truth. The way of rule-based reconstruction is the same as that in Section <ref>. Each example is a 10-second window of 10 trajectories (i.e., 100 time steps) collected in a soccer match. The original Metrica data is 25, but we downsampled them to 10 to adjust the frequency to the Fitogether data. Since there is no need to find the trajectories of balls out of play, we only take in-play situations into account and call a time interval from resuming the game to a pause an episode. We construct the dataset by sliding a window in each episode by 0.1 and sampling it at a time so that 99 time steps of a window are overlapped with the adjoining one. To avoid overfitting, we randomly flip pitches either horizontally, vertically, or in both directions. Also, we calculate the velocity, speed, and acceleration of each player and attach them to the location as input features to help the model understand the player dynamics. In summary, a training sample is a window of length 100 with 22 players having six features, the (x,y) location, the (x,y) velocity, the speed, and the acceleration. §.§ Models and Hyperparameters The following lists are the baselines used in the experiment. They are distinguished by whether having a hierarchical architecture and the type of sequence model deployed to make predictions. * VRNN: A generative baseline using VRNN <cit.> that generates ball trajectories conditioned on player trajectories. The detailed architecture is described in Appendix <ref>. * Transformer: A non-hierarchical framework proposed by Gongora <cit.> using Transformer <cit.> for prediction. * LSTM: A non-hierarchical framework using a Bi-LSTM. * H-Transformer: A hierarchical framework using Transformers in both submodels. * H-LSTM: Our hierarchical Bi-LSTM framework. 
All baselines missing the prefix `H-' directly predict ball trajectories without a hierarchical architecture and have the same PPI encoder as in Section <ref> to represent the game contexts. On the contrary, those with the prefix `H-' are hierarchical models that first estimate the player-level ball possession probabilities and predict the final ball trajectories conditioned on them. They employ the same context-encoding structures as in Section <ref>, but use different sequence models (i.e., Transformer or Bi-LSTM). To figure out the influence of reality loss, we train each hierarchical model with and without the reality loss term, respectively. The suffix `-RL' in Table <ref> indicates that the model is trained with λ^Real = 1, and the model without `-RL' takes λ^Real = 0. Also, we compare the prediction performance of the model before and after the rule-based postprocessing, differentiating them by attaching the suffix `-PP' to the latter. We train each model using the Adam optimizer <cit.> with an initial learning rate of 0.0005. For hierarchical models, the context embeddings {𝐳_g,t^p}_p ∈P̃, {𝐳̃_g,t^p}_p ∈P̃, and 𝐳̃_g,t in PPC have the dimension 16, while the dimension of {𝐳_t^P_k}_k=0^2 and 𝐳̃_t in BTR is 128. We take λ^CE = 20 to match the scales of the CE and MSE losses. For LSTM frameworks, every Bi-LSTM employs two layers of 256-dimensional hidden states ({𝐡_g,t^p }_p ∈P̃ or 𝐡_t) with dropout probability 0.2. For Transformer models, each Transformer has 4 heads and takes 256-dimensional inputs. §.§ Evaluation Metrics In this section, we introduce the evaluation metrics for the model performance. To evaluate the prediction performance of ball trajectory, we adopt the following two metrics: * Position error (PE): Mean distance in meters between the predicted and true ball locations, i.e., 1/T∑_t=1^T 𝐲̂_t - 𝐲_t _2. * Reality loss (RL): Same as the reality loss introduced in Eq. <ref> to evaluate the influence of the reality loss term and the rule-based postprocessing. In addition, we separately assess the prediction performance of ball possession since it affects many other tasks such as postprocessing and pass annotation. * Player-level possession accuracy (PPA): Prediction accuracy of ball-possessing players from the player possession probabilities that the model produces, i.e., the proportion of t such that max_p ∈P̃ĝ_t^p = q_t where q_t ∈P̃ denotes the true player having the ball at t. * Team-level possession accuracy (TPA): Prediction accuracy of attacking teams from the team possession probabilities obtained by summing up the player possession probabilities per team, i.e., the proportion of t such that max_p ∈P̃ĝ_t^p ∈ Q_t where Q_t ∈{ P_1, P_2, P_0 } denotes the team that the true ball-possessing player q_t ∈P̃ at t belongs to. §.§ Results and Discussion Table <ref> shows the main results of our experiments, and we have found several observations from them as follows: * Performance of the generative baseline (VRNN): Note that when the latent vector 𝐳_t of VRNN is sampled from the encoder q_ϕ, the mean position error is less than 1m. Nevertheless, the prediction using 𝐳_t sampled from the prior p_θ is far worse than LSTM or Transformer baselines. The reason we think is that unlike the previous studies for trajectory prediction or imputation in team sports <cit.>, the prior in our problem cannot leverage any fragmentary trajectory of the target. This hinders the model from reducing the KL divergence between p_θ and q_ϕ. 
* Performance of the Transformer baselines: The Transformer baselines show lower performance than those of their LSTM counterparts. We think the use of attentions instead of recurrence backfires in our problem. Transformers learn which time steps to focus on by multi-head attentions, showing great effects in many problems by resolving the information bottleneck imposed on the last hidden state of the encoder. However, paradoxically, the latest player locations are the most important context to predict the ball location, so the inclination of RNN models to focus on the latest time step is rather helpful in our case. Also, the recurrence seems to reduce the reality loss since it more strongly connects the adjacent time steps than the attention does. * Effects of adopting the hierarchical architecture: Hierarchical models (H-Transformer and H-LSTM) exhibit better performance than their non-hierarchical counterparts (Transformer and LSTM). This implies that the introduction of the intermediate target variable (player-level ball possession) promotes more accurate ball trajectory prediction. * Effects of the reality loss: Training a model with the reality loss term is shown to have generalization power in that it reduces the RL of predicted ball trajectories of the test data. In addition, it slightly improves essential performance in terms of PE and PPA. * Effects of rule-based postprocessing: The postprocessing step seems to sacrifice position accuracy to retain the naturalness of output trajectories. One might think that it is better not to perform postprocessing because it increases PE. However, it enables event detection as described in Section <ref> by separating a player's ball-possessing interval into a “transition” period and a “controlled” period. Thus, ultimately it is a necessary step since analyzing ball-related events is one of the main purposes of ball data acquisition. § PRACTICAL APPLICATIONS In this section, we provide a list of potential use cases of our framework, including missing trajectory imputation in the situation of video-based ball detection (Section <ref>), semi-automated pass annotation based on the estimated ball possession (Section <ref>), automated zoom-in for match broadcasting based on the estimated ball locations (Section <ref>), and separating running performance metrics into attacking and defending phases using the aggregated team possession probabilities (Section <ref>). For Section <ref>, <ref>, and <ref>, we use three test matches of Fitogether data as described in Section <ref>. On the other hand, we only use one match among the three in Section <ref> since it is the only match in the test dataset where we have GPS data, event data, and a full-pitch video recorded by fixed cameras. §.§ Ball Trajectory Imputation A major challenge of computer vision-based ball tracking is that the estimated ball trajectory is often inaccurate, especially when the ball moves very fast or is occluded by other objects such as players. A plausible approach to this problem is to find “reliable” fragments of ball trajectories obtained by object detection and perform imputation leveraging our framework. Namely, we first choose frames where the ball is clearly observable without any occlusion and in a relatively stationary situation. Then, we interpolate the ball locations for the remaining “unreliable” frames conditioned on the players’ trajectories and the fragmentary ball trajectories. 
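One simple way to construct such partially observed inputs, both for training with random masks and for evaluation with detector-selected reliable frames, is sketched below; the extra indicator channel and the zero-filling of unobserved frames are assumptions about the input format, not a specification of it.

```python
import numpy as np

def mask_ball_trajectory(ball, mask_prob=0.8, rng=None):
    """Return a partially observed ball input: unobserved frames are zeroed and
    an indicator channel marks which frames are treated as reliable.

    ball: (T, 2) true or detected ball locations.
    """
    rng = np.random.default_rng() if rng is None else rng
    observed = rng.random(len(ball)) >= mask_prob     # keep roughly (1 - mask_prob) of frames
    masked = np.where(observed[:, None], ball, 0.0)
    return np.concatenate([masked, observed[:, None].astype(float)], axis=1)   # (T, 3)
```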
More specifically, to adapt our framework to this scenario, we randomly mask the true ball trajectory by a certain probability and train our H-LSTM-RL to estimate the ball locations for the missing frames with the masked trajectories. During training, the model is provided with partial trajectories with 80% of the values masked for half of the batches, while it does not refer to any target trajectories for the other half. Then, we measure the prediction performance of the trained model for the test data given partial ball trajectories with varying masking probabilities (100%, 95%, 90%, and 80%). According to the result in Table <ref>, the prediction performance seems to dramatically improve when only 10% of ground-truth is given. When we provide 20% of targets to the model, PE reduces to less than 1.5m and PPA rises to more than 90%. This improvement implies that our method can be successfully employed to impute or revise the missing or unreliable results of other ball detection or tracking frameworks by leveraging multi-agent game contexts. §.§ Semi-automated Pass Annotation One of the ultimate goals of ball tracking in soccer, a representative team sports, is to detect and analyze event data that occurred during the match. Soccer event data is a record of on-the-ball actions such as passes, interceptions, dribbles, and shots that occurred during matches, originally collected by human annotators <cit.>. It is actively used from aggregating match statistics <cit.> to various tasks including performance evaluation <cit.>, playing style representation <cit.>, tactical analysis <cit.>, and so on. However, since 2,000 events occur in a match, data collection requires a lot of manual work. Several studies have tried to automatically detect events in soccer matches. Codina et al. <cit.> introduced a rule-based framework for event detection from player and ball tracking data. Sorano et al. <cit.> studied the problem of detecting passes from soccer videos by utilizing a CNN-based object detection engine (YOLOv3) and Bi-LSTM. Fassmeyer et al. <cit.> proposed a method that detects a wider range of events such as corner kicks, crosses, and counterattacks using a Variational Autoencoder (VAE) and Support Vector Machine (SVM). Note that all of these approaches rely on the ball tracking information from video data <cit.> or manual annotation <cit.>. Leveraging our framework, we also perform the event detection task using the predicted ball trajectory instead of the true ball trajectory. For simplicity, we only detect passes, the most frequent and fundamental event type in soccer matches. From the ball touch information predicted in Section <ref>, we define passes and their successes by a naive rule as follows: If a player p touches the ball at t_0, another player q touches the ball at t_1 > t_0, and the ball is in transition in (t_0, t_1), then we define there is a pass from p to q in (t_0, t_1). We calculate pass detection accuracy regarding that a true pass from p to q in (t_0, t_1) is correctly detected if there is a detected pass from p to q starting after t_0 - 2 and ending before t_1 + 2. Also, we evaluate the passer and receiver detection accuracies by counting the passes that at least the passer or receiver is correct. Since the number of passes is a common statistic in soccer match analysis, we aggregate the numbers of passes and receives per player and compare them to the true values, too. 
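Given a frame-level touch assignment such as the one sketched earlier, the naive pass rule above reduces to a short scan over the sequence; the tuple representation of a pass is an illustrative choice.

```python
def detect_passes(touch):
    """Extract passes from a frame-level touch assignment.

    touch: sequence of player indices per frame, with -1 while the ball is in transition.
    Returns a list of (passer, receiver, t_start, t_end) tuples.
    """
    passes = []
    last_player, last_t = None, None
    for t, p in enumerate(touch):
        if p == -1:
            continue
        if last_player is not None and p != last_player and t > last_t + 1:
            # a different player touches the ball after a transition interval
            passes.append((last_player, p, last_t, t))
        last_player, last_t = p, t
    return passes
```

The detected passes can then be matched against the annotated ones with the two-second tolerance described above to compute the detection accuracies.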
As a result, Table <ref> demonstrates that the pass detection accuracy is around 87% in terms of F1 score when only 20% of ground truth is given to the model. Also, the accuracy of the statistics such as the numbers of passes or receives is quite high even when the model can leverage 5% of true trajectories. This implies the potential use of our framework for building a semi-automated event annotation system that the model first suggests “candidate” passes and human annotators only correct partial prediction errors. Considering that collecting event and ball-tracking data requires a substantial cost, this AI-assisted annotation system would also contribute a lot to data completion in the sports analytics industry. §.§ Automated Zoom-in for Match Broadcasting Automatic sports broadcasting inevitably involves tracking the ball or players to zoom in on the region of interest from the whole pitch <cit.>. Figure <ref> demonstrates two snapshots obtained by setting the estimated ball position by our model to the center of the snapshots. Blue circles indicate the centers of snapshots while red circles mean the true ball locations in the video. As we can see from the snapshots, our model locates the ball quite well without using any result of ball detection. Table <ref> shows the accuracy of the estimated ball locations in terms of region-of-interest (ROI) bounding box obtained by setting the center of a bounding box to be the estimated ball location at that time. We calculate the “ROI accuracy” as the proportion of frames where the estimated ROI contains the true ball. We can observe that the ROI accuracy reaches about 80% when the size of the bounding box is 300 × 300, which is the standard resolution of input images in recent object detection networks. Also, when we increase the bounding box size to 600 × 600, the ROI accuracy reaches 96%. This implies that we may automatically generate soccer match broadcasting videos following the estimated ball trajectory instead of the real ball trajectory obtained by relying on sophisticated ball tracking algorithms, as the resolution of the broadcasting videos is usually much higher than 600 × 600. §.§ Approximating Possession-wise Running Performance Metrics The major use of GPS tracking data is to monitor players’ physiological demands. Running performance (RP) metrics such as total distance covered, distance covered by speed zone, and the number of sprints find general acceptance in the sports science domain as indicators for players’ workload <cit.>. Moreover, several studies <cit.> observed that RP with ball possession has a greater influence on the team’s success than that without ball possession. Accordingly, there is a growing interest in separately calculating RP metrics in offensive and defensive phases <cit.>. In this section, we demonstrate that accurately predicted team possession can also be a useful by-product of our framework by approximating the RP metrics in attacking and defending situations. Here we estimate that the team of the predicted ball possessor at each time is attacking at that time. We calculate the total distance and high-intensity running (running with speed > 20) distance covered per player for situations that our model predicts as offensive or defensive, respectively. Also, we randomly assign a team possession label per time as a baseline. Then, we compare the offensive and defensive RP metrics resulting from each method with the ground truth by calculating the maximum absolute percentage errors (MAPE). 
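The possession-wise aggregation itself is straightforward once a team possession label is predicted per frame; a sketch for a single player is given below, assuming the 10 Hz sampling used earlier and a high-intensity threshold of 20 km/h, since the unit of the speed threshold is not spelled out above.

```python
import numpy as np

def possession_split_distances(xy, team_possession, my_team, dt=0.1, hsr_kmh=20.0):
    """Split one player's covered distance into attacking and defending phases.

    xy:              (T, 2) player positions in metres.
    team_possession: (T,) predicted possessing-team id per frame.
    my_team:         team id of this player.
    """
    step = np.linalg.norm(np.diff(xy, axis=0), axis=-1)   # metres covered per frame
    speed_kmh = step / dt * 3.6
    attacking = team_possession[1:] == my_team
    out = {}
    for phase, sel in (("attack", attacking), ("defend", ~attacking)):
        out[f"{phase}_total_m"] = step[sel].sum()
        out[f"{phase}_hsr_m"] = step[sel & (speed_kmh > hsr_kmh)].sum()
    return out
```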
(See Figure <ref> visualizing the estimated and true HSR distance values for each of the attacking and defending phases in a test match.) Table <ref> shows that our model provides highly accurate total distances with MAPE of only about 0.035 and HSR distances with MAPE much less than the random baseline. A notable point is that the prediction accuracy for possession-wise RP metrics except for the HSR distance in attacking situations is even higher than that of raw team possession. This is because false offenses and false defenses supplement each other when aggregating the metrics and cancel out the errors. § CONCLUSION We address the problem of ball trajectory inference using the player context in sports matches. We combine the Set Transformer to represent the permutation-invariant and equivariant nature of team sports situations and a hierarchical recurrence structure to intermediately predict the game semantics such as ball possession. Other than the previous studies for trajectory prediction in multi-agent contexts, our framework estimates the accurate trajectory even when neither partial nor past trajectories of the target are given. Moreover, we suggest practical use cases mainly related to enhancing data acquisition and automating manual works including missing trajectory imputation, event annotation, and zoom-in on broadcasting videos. We expect that our method contributes to accumulating fundamental data for sports analytics but with a much lower level of difficulty than before. § ACKNOWLEDGEMENTS This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. RS-2023-00208094). abbrv § DESCRIPTION OF SUBSIDIARY MODELS In this section, we elaborate on the subsidiary models mentioned in the paper. These include the goalkeeper (GK) trajectory prediction model introduced in Section <ref> and the generative baseline using VRNN in Section <ref>. §.§ GK Trajectory Prediction Model First, we outline the architecture of the GK trajectory prediction model. Considering that goalkeepers' behavior depends on whether the team is attacking or defending, the model has a hierarchical architecture leveraging team-level ball possession as an intermediate target. That is, it consists of two submodels, the team possession classifier (TPC) and the GK trajectory regressor (GTR). The team-level ball possession differs from player-level ball possession in that it is partially permutation-invariant with respect to input players rather than permutation-equivariant. Thus, we deploy a single Bi-LSTM for all the players instead of constructing player-wise Bi-LSTMs. To be specific, we apply a Set Transformer for input player features in each team P_k, and merge them by a fully-connected layer to make a representation 𝐳̃_t of the game state. rCl 𝐳_t^P_1 = SetTransformer (𝐱_t^p_1, …, 𝐱_t^p_n) 𝐳_t^P_2 = SetTransformer (𝐱_t^p_n+1, …, 𝐱_t^p_2n) 𝐳̃_t = FC (𝐳_t^P_1, 𝐳_t^P_2) where { p_1, …, p_n } and { p_n+1, …, p_2n} are the players in team P_1 and P_2, respectively. Then, a single Bi-LSTM updates the hidden state 𝐡_g,t = (𝐡_g,t^f, 𝐡_g,t^b) using the partially permutation-invariant context embedding 𝐳̃_t to predict the team possession probabilities 𝐠̂_t = (ĝ_t^P_1, ĝ_t^P_2) as follows: rCl 𝐡_g,t^f = LSTM^f (𝐳̃_t; 𝐡_g,t-1^f) 𝐡_g,t^b = LSTM^b (𝐳̃_t; 𝐡_g,t+1^b) 𝐠̂_t = FC (𝐡_g,t) The GTR part is quite simple. It reuses the context embedding 𝐳̃_t as an input to the second Bi-LSTM along with the team probabilities 𝐠̂_t from the previous block. 
The final output 𝐲̂_t = (𝐲̂_t^P_1, 𝐲̂_t^P_2) where 𝐲̂_t^P_k (k = 1,2) denotes the estimated location of of team P_k's goalkeeper at time t is obtained by passing the resulting hidden state 𝐡_t = (𝐡_t^f, 𝐡_t^b) to a fully connected layer. rCl 𝐡_t^f = LSTM^f (𝐳̃_t, ĝ_t; 𝐡_t-1^f) 𝐡_t^b = LSTM^b (𝐳̃_t, ĝ_t; 𝐡_t+1^b) 𝐲̂_t = FC (𝐡_t) The GK trajectory prediction model trained on the Metrica training data described in Section <ref> shows the average position error of 5.4m for the Metrica test data. We apply this model to Fitogether data and use the predicted GK trajectories, together with the original outfield players' trajectories, for the ball trajectory prediction task. See Figure <ref> as an example of predicted GK trajectories. §.§ Context-Aware VRNN as a Baseline In Section <ref>, we implement a generative baseline and compare the prediction performance with our regression model to explain our design choices. Since major studies <cit.> for player trajectory prediction or imputation have built their framework on top of Variational Recurrent Neural Network (VRNN) <cit.>, we also construct a generative model based on the VRNN. §.§.§ Difference in problem settings from the previous studies The previous studies deal with the problem of observing fragmentary trajectories of given agents and predicting the remaining parts of the same agents’ trajectories. On the other hand, the target agent to predict the trajectory (i.e., the ball) differs from the agents that the model refers to (i.e., players) in our problem setting. That is, our ball trajectory prediction task is clearly different from the aforementioned studies in that (1) the model can observe non-target trajectories throughout the entire time interval and (2) it cannot leverage any fragmentary trajectory of the target. §.§.§ Model architecture We basically take the autoregressive architecture of the original VRNN (as adopted for future trajectory prediction tasks <cit.>), but make slight changes to reflect these differences. To put it concretely, the model consists of the three deep neural networks rCl f^pri(𝐡_t-1;θ) = [μ^pri_t,σ^pri_t] f^enc(𝐱_t,𝐡_t-1;ϕ) = [μ^enc_t,σ^enc_t] f^dec(𝐳_t,𝐡_t-1;ψ) = [μ^dec_t,σ^dec_t] where 𝐱_t is the ball location, 𝐳_t is the latent state of VAE, 𝐡_t = (𝐡_t^f, 𝐡_t^b) is the asymmetric joint hidden state of the Bi-LSTM with rCl 𝐡_t^f = LSTM^f (𝐨̃_t, 𝐱_t, 𝐳_t; 𝐡_t-1^f) 𝐡_t^b = LSTM^b (𝐨̃_t; 𝐡_t+1^b) and θ, ϕ, ψ are trainable parameters. Note that 𝐨̃_t denotes the context embedding obtained from non-target (i.e., players) trajectories (𝐱_t^p_1, …, 𝐱_t^p_2n), where the procedure of constructing it is the same as that of 𝐳̃_t in Appendix <ref> (See Eq. <ref>–<ref>). Here we use a different notation 𝐨̃_t instead of 𝐳̃_t to avoid confusion with the latent vector 𝐳_t of VRNN. Also, while the recurrence in the original VRNN operates through a unidirectional RNN (i.e., 𝐡_t = RNN(𝐱_t, 𝐳_t; 𝐡_t-1), we add a backward RNN since the model can leverage the information of the future context 𝐨̃_>t. §.§.§ Ball trajectory generation The neural network outputs in Eq. <ref>–<ref> act as parameters of the following normal distributions for the latent state 𝐳_t and the target 𝐱_t, respectively: rCl p_θ(𝐳_t | 𝐨̃_1:T, 𝐱_<t, 𝐳_<t) = 𝒩 ( 𝐳_t | μ^pri_t, diag(σ^pri_t)^2 ) q_ϕ(𝐳_t | 𝐨̃_1:T, 𝐱_≤t, 𝐳_<t) = 𝒩 ( 𝐳_t | μ^enc_t, diag(σ^enc_t)^2 ) p_ψ(𝐳_t | 𝐨̃_1:T, 𝐱_<t, 𝐳_≤t) = 𝒩 ( 𝐱_t | μ^dec_t, diag(σ^dec_t)^2 ) Given the context embedding 𝐨̃_t, a latent random variable 𝐳_t is sampled from the prior distribution p_θ(𝐳_t | 𝐨̃_1:T, 𝐱_<t, 𝐳_<t). 
Then, the decoder distribution p_ψ(𝐳_t | 𝐨̃_1:T, 𝐱_<t, 𝐳_≤ t) generates the ball location 𝐱_t conditioned on 𝐳_t. § ABLATION STUDY As well as the main experiments described in Section <ref>, we carried out several ablation studies to examine the contributing factors of the proposed framework. §.§ Effects of Using Derivatives of Coordinates First, we have conducted an ablation study about input features to demonstrate that the use of the first-order and second-order derivatives of the players' raw (x,y)-coordinates successfully improves our model. Table <ref> exhibits that our full model using six physical features (2D location, 2D velocities, speed, and acceleration) per player achieves much better performance compared to the degraded models taking four (2D location and 2D velocities) and two features (2D location only), respectively. §.§ Effects of Different Context Embeddings In Section <ref>, we utilize three types (PPE, FPE, and FPI) of game context embedding for ball possession prediction. The motivation for including each structure in the model is as follows: * PPE: The necessity of a PPE embedding is straightforward in that the PPC has to be partially permutation-equivariant as explained in Section <ref>. * FPE: The drawback of only using the PPE embedding is that each player's latent vector 𝐳_g,t^p does not use the information of opponents. Namely, in Eq. <ref>–<ref>, the model does not consult (𝐱_t^p_n+1, …, 𝐱_t^p_2n) when making 𝐳_g,t^p_1, …, 𝐳_g,t^p_n and vice versa. Thus, we attach the FPE embedding that covers the information of the entire agents as in Eq. <ref>. * FPI: While the above permutation-equivariant embeddings only employ ST-Encoders, a permutation-invariant embedding uses a full Set Transformer including the encoder and the decoder parts. As such, we have intended that the FPI embedding helps to extract additional information about game contexts thanks to the multi-head attention pooling in the decoder part. In addition, we conducted an ablation study to empirically demonstrate that all the variants help improve performance. For each trial, we deployed a subset of these three variants to PPC and measured the performance metrics. The naive ordering model does not use any permutation-equivariant or invariant embedding in PPC. Instead, a single Bi-LSTM directly takes the concatenated feature vectors of roughly ordered (i.e., by uniform number) 22 players as input. Note that the option of only using FPI is impossible for permutation-equivariant encoding since FPI produces a unified output for all the input players. According to the results shown in Table <ref>, the model with all the variants (PPE + FPE + FPI) performs the best. In addition, we can obtain some insights about the role of each embedding as follows: * When not using PPE, PPA, and TPA are significantly degraded. This justifies the use of the PPE embedding that distinguishes the team each player belongs to. * While having strength in predicting ball possession, PPE shows lower performance in ball trajectory prediction than FPE. This is because only using PPE makes the input 𝐱̃_t^p of the BTR not consult the information of opponents, and this insufficient reference leads to larger position errors for ball trajectory prediction. * As PPE and FPE have their own merits and faults when compared to each other, they create a synergy effect when employed together and show a better performance than when either one is used alone. 
* FPE and FPI seem to have overlapping roles in that they improve the model performance by a similar amount. Nevertheless, using both of them together with PPE (i.e., PPE + FPE + FPI) reduces the position error slightly compared to using only one of them (i.e., PPE + FPE or PPE + FPI).
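For the full PPE + FPE + FPI configuration, the per-player input to the possession classifier is simply the concatenation of the three context embeddings with the raw features, as in the equations of the main text; a sketch with assumed tensor shapes:

```python
import torch

def player_context(x, ppe, fpe, fpi):
    """Concatenate the context embeddings with each player's features.

    x:   (T, P, F) raw player features.
    ppe: (T, P, E) partially permutation-equivariant (team-wise) embeddings.
    fpe: (T, P, E) fully permutation-equivariant embeddings.
    fpi: (T, E)    fully permutation-invariant game embedding, shared by all players.
    """
    fpi_rep = fpi[:, None, :].expand(-1, x.shape[1], -1)
    return torch.cat([x, ppe, fpe, fpi_rep], dim=-1)   # input to the player-wise Bi-LSTMs
```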
http://arxiv.org/abs/2306.02958v1
20230605152516
Constraining quantum fluctuations of spacetime foam from BBN
[ "Saurya Das", "Gaetano Lambiase", "Elias C. Vagenas" ]
gr-qc
[ "gr-qc", "hep-th" ]
[email: ][email protected] [email: ][email protected] [email: ][email protected] ^1Theoretical Physics Group and Quantum Alberta, Department of Physics and Astronomy, University of Lethbridge, 4401 University Drive, Lethbridge, Alberta, T1K 3M4, Canada. ^2Dipartimento di Fisica ”E.R. Caianiello” Università di Salerno, I-84084 Fisciano (Sa), Italy, ^3INFN - Gruppo Collegato di Salerno, Italy. ^4Theoretical Physics Group, Department of Physics, Kuwait University, P.O. Box 5969, Safat 13060, Kuwait α A possibility to describe quantum gravitational fluctuations of the spacetime background is provided by virtual D-branes. These effects may induce a tiny violation of the Lorentz invariance (as well as a possible violation of the equivalence principle). In this framework, we study the formation of light elements in the early Universe (Big Bang Nucleosynthesis). By using the Big Bang Nucleosynthesis observations, We infer an upper bound on the topological fluctuations in the spacetime foam vacuum σ^2, given by σ^2 ≲ 10^-22. 04.50.-h, 04.60.Bc Constraining quantum fluctuations of spacetime foam from BBN E.C. Vagenas^4 July 31, 2023 ============================================================ § INTRODUCTION Formulating a quantum theory of gravity is one of the most important challenges of the modern approaches aimed to unify all fundamental interactions. These studies have clearly shown that spacetime must have a non-trivial topology near the Planck scale. After the Wheeler <cit.> suggestion that spacetime may have a foam-like structure, the study of quantum fluctuations of the spacetime background have received a lot of interest <cit.>. The Planck-size topological fluctuations imply that the (quantum gravitational) vacuum behaves as a non-trivial medium. This occurs, for example, in the framework of string theory <cit.> and of the canonical approach to quantum gravity <cit.>. The underlying idea of Ref. <cit.> is that the quantum gravitational fluctuations in the vacuum get modified by the passage of an energetic particle, inducing recoil effects described by back reaction effects on the propagating particle <cit.>. Although present technologies preclude any possibility to probe quantum gravity effects, it has been suggested in Ref. <cit.> that Gamma-Ray Bursts (GRBs) might offer the possibility to test the theories at Planck energies. The idea is that the origin of GRBs at a cosmological distance and their high energies may make them sensitive to the dispersion scales that are comparable with the Planck scales <cit.>. In addition, the quantum fluctuations of spacetime may have had relevant consequences during the early Universe. In fact, the CPT-violating aspects of brane Universe models may induce an asymmetry between particles and antiparticles, allowing to explain the observed Baryon Asymmetry <cit.>. In this contribution, we investigate the foamy structure of the gravitational background, referring in particular to the Ellis-Mavromatos-Nanopoulos-Volkov (EMNV) model <cit.>, and its role on the formation of light elements during the primordial phase of the Universe evolution (Big Bang Nucleosynthesis). Big-Bang Nucleosynthesis (BBN) represents an important epoch during the evolution of the Universe. During this period, the primordial light elements formed leaving imprints on their abundance today. Thanks to the advancements in measurements and theoretical predictions of the abundances of light elements, BBN has become a powerful cosmological probe for testing the early Universe. 
BBN has hence no trivial consequences on any physics scenario beyond the Standard Models of particle physics and cosmology <cit.>. The latter may alter the course of the events at that era with respect to the standard theories, and therefore such a probe provides strong constraints. The rest of the paper is structured as follows. In section 2, we present the EMNV model and provide a formula that connects the baryon density parameter with the quantum fluctuations mass scale. In section 3, we provide bounds on the quantum fluctuations mass scale using today's primordial abundances of light elements which were produced in the BBN era. In section 4, we conclude and briefly present our results. § THE EMNV MODEL The basic idea of the EMNV model is that the recoil effect of a D-brane struck by particles (bosons <cit.> or fermions <cit.>), induces an energy dependence of the off-diagonal terms of the background metric, G_0i∼ u_i, where u_i∼ E/M_s≪ 1 (u_i is the average recoil velocity of the generic D-brane <cit.> and E is the energy of the particle scattering off the D-brane, and M_s characterizes the quantum fluctuations scale). The consequence of the off-diagonal term in the metric tensor implies the breaking of the Lorentz invariance <cit.>. For a D-dimensional spacetime, one has G_ij=δ_ij, G_00=-1, and G_0i∼ u_i ∥, i, j=1, … , D-1 (here u_i ∥ is the recoil (parallel) velocity of the D-particle). Moreover, the metric induces a variation of the light velocity δ c/c∼ -E/M_s. The capture and splitting of the open string and its interaction with the D-particle, and the recoil of the latter, gives rise to a local effective spacetime metric distortion <cit.> ds^2=g_μνdx^μdx^ν=(η_μν+G_μν)dx^μdx^ν . The dispersion relation of a particle (neutrino) propagating on the deformed isotropic spacetime reads g_μν p^μp^ν= (η_μν+G_μν) p^μp^ν=-m^2 ⇒ E^2-2Ep⃗· u_∥-p⃗^2-m^2=0, where m is the mass of the particle. Taking into consideration this on-shell condition and taking the average ≪…≫ over D-particle populations with the stochastic processes (≪ u_i ∥≫=0, ≪ u_i ∥u_j ∥≫=σ^2δ_ij), one gets the average neutrino and anti-neutrino energies in the D-foam background ≪ E_ν, ν≫ = √(p^2+m_ν^2)(1+1/2σ^2) ∓1/2M_s/g_s σ^2 . Here it is assumed that the recoil-velocities fluctuation strengths are the same for particle and antiparticle sectors (the asymmetric scenario has been studied in <cit.>). As we can see, the local violation of Lorentz symmetry (LV) induced by the recoil velocities of the D-particles, induced a CPT violation too, since the dispersion relations between particles and antiparticles are different, generating a matter-antimatter lepton asymmetry ≪ n-n≫=g_d.o.f.∫d^3p/(2π)^3≪[f(E)-f(E)]≫ where f(E,μ)=1/ exp[(E-μ)/T] ± 1, E^2= p^2+m^2, and g_d.o.f. denotes the number of degrees of freedom of relativistic neutrinos. Assuming that σ^2 is constant (independent of space and of the (anti)neutrino energy), one can get Δ n_ν≃g_d.o.f./π^2 T^3(M_sσ^2/g_s T) > 0. Notice that the CPT term, i.e., -1/2M_s/g_sσ^2, in the dispersion relation of the neutrino comes with the right sign (“loss") guaranteeing the excess of particles over antiparticles. The resulting lepton asymmetry reads η=Δ n_ν/n_γ∼315/2π^4GeV/T(M_s/GeV σ^2/g_s ) . Observations of the CMB radiation <cit.>, predictions of BBN <cit.> (and the absence of intense radiation from matter–antimatter annihilation <cit.>), implies that the observed baryon number asymmetry today is η = (6.04 ± 0.08)× 10^-10 . Such a value remains constant from early times till today. 
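As a quick numerical check of the averaged dispersion relation quoted above, ≪E≫ ≃ √(p²+m²)(1+σ²/2) for p ≫ m, one can draw recoil velocities with the stated first and second moments and average E = u·p + √((u·p)² + p² + m²). A Gaussian distribution is used purely for illustration, since the stochastic hypothesis only fixes ≪u_i≫ and ≪u_i u_j≫; units and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, m = 1e-2, 1e-3            # recoil-velocity variance and particle mass (p >> m)
p = np.array([0.6, 0.0, 0.8])     # |p| = 1 in arbitrary units

u = rng.normal(0.0, np.sqrt(sigma2), size=(1_000_000, 3))   # <u_i> = 0, <u_i u_j> = sigma2 delta_ij
up = u @ p
E = up + np.sqrt(up**2 + p @ p + m**2)

print(E.mean())                                   # Monte Carlo average energy
print(np.sqrt(p @ p + m**2) * (1 + sigma2 / 2))   # sqrt(p^2 + m^2)(1 + sigma^2/2)
```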
For later reasons, one introduces the baryon density parameter η_10 defined as <cit.> η_10≡ 10^10η≡ 10^10Δ n_ν/n_γ with η_10 to be determined. From (<ref>) and (<ref>) one gets M_s/GeVσ^2/g_s= 10^-132π^4/315T_BBN/MeV η_10 where T_BBN∼ 1MeV is the temperature at which the BBN processes are effective. § PRIMORDIAL LIGHT ELEMENT {^4 HE, D, LI} We will now derive the bound on the scale M_s by analyzing the effects of the primordial abundances of light elements, i.e., Deuterium ^2H, Helium ^4He, and Lithium ^7Li, using the asymmetry given by (<ref>). In this analysis, the baryon-antibaryon asymmetry, here indicated with η_10, plays a crucial role <cit.>. Since we are interested in deviations from the standard cosmological model, hereafter we shall assume three generations of neutrinos so that we set N_ν =3, which means Z = 1 (corresponding to the standard cosmological model) in the equations below. We follow the Refs. <cit.>. The relevant processes are here recalled: * ^4He abundance - The production of Helium ^4He is generated by the production of ^2H through a neutron and a proton. Consequently, the formed Deuterium converts into ^3He and Tritium. The best fit of the primordial ^4He abundance is <cit.> Y_p = 0.2485 ± 0.0006 + 0.0016 [( η_ 10 - 6) +100( Z-1) ] The standard result of BBN for the ^4He fraction is recovered for Z=1 and η_10 = 6, so in General Relativity (GR) one gets (Y_p)|_GR = 0.2485 ± 0.0006. However, observations of the Helium ^4He give the abundance 0.2449 ± 0.0040<cit.>. So employing the observational constraint and the Helium abundance given in (<ref>) for Z=1, we obtain 0.2449 ± 0.0040 = 0.2485 ± 0.0006 + 0.0016 ( η_ 10 - 6) . From the above equations, one infers the constraint 5.65 ≲η_10≲ 5.9 . * ^2H abundance - Deuterium ^2H is produced by the reaction n+p →^2H +γ. The best fit gives the Deuterium abundance <cit.> y_D p = 2.6(1 ± 0.06) ( 6/η_ 10 - 6 (Z-1))^1.6 . The values Z=1 and η_10 = 6 yield the result in GR, thus the Deuterium abundance will be y_D p |_GR = 2.6 ± 0.16. Equation (<ref>) and the observational constraint on deuterium abundance y_D p = 2.55 ± 0.03<cit.> give (for Z=1) 2.88 ± 0.22 = 2.6 (1 ± 0.06) ( 6/η_ 10)^1.6 . One then gets the constraint 5.88626 ≲η_10≲ 6.25264 . It is noteworthy that such a constraint partially overlaps the helium abundance (<ref>). * ^7Li abundance - The parameter η_10, defined in (<ref>), although successfully fits the abundances of D and ^4He, it does not fit the observations of ^7Li. This is referred in literature as the Lithium problem<cit.>). The ratio of the expected value of ^7Li abundance in GR and the observed one is in the range <cit.> Li|_GR/Li|_obs∈ [2.4-4.3] . The numerical best fit for ^7Li abundance is (for Z=1) <cit.> y_Li = 4.82 (1 ± 0.1)[η_ 10 - 3 (Z-1)/6]^2 = 4.82 (1 ± 0.1)[η_ 10/6]^2 . Employing the observational constraint on Lithium abundance, i.e., y_Li = 1.6 ± 0.3<cit.>, one gets the constraint 3.28457 ≲η_10≲ 3.59177 . It is evident that such range of values does not overlap with the constraints on ^2 H abundance, i.e., Eq. (<ref>), and on ^4He abundance, i.e., Eq. (<ref>). The constraints derived for the three abundances are reported in Fig. <ref>. It is obvious that the overlapping ranges of ^2 H and ^4 He correspond to the value η_10∼ 5.9 (orange region). This value does not overlap with the ^7 Li range, which means that the Lithium problem cannot be solved in the framework of the spacetime foam. 
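The best-fit abundance relations above (with Z = 1) are simple enough to encode directly. The sketch below reproduces the standard-BBN reference values at η_10 = 6 and the lithium predicted-to-observed ratio; it is an illustration of the formulas only, not a re-derivation of the quoted η_10 ranges, which also fold in the fit and observational uncertainties.

```python
def Yp(eta10):    # 4He mass fraction, best fit with Z = 1
    return 0.2485 + 0.0016 * (eta10 - 6.0)

def yDP(eta10):   # deuterium abundance, best fit with Z = 1
    return 2.6 * (6.0 / eta10) ** 1.6

def yLi(eta10):   # 7Li abundance, best fit with Z = 1
    return 4.82 * (eta10 / 6.0) ** 2

# Standard-BBN reference point eta_10 = 6 recovers the GR central values
print(Yp(6.0), yDP(6.0), yLi(6.0))   # 0.2485, 2.6, 4.82
# Lithium problem: predicted-to-observed ratio at the reference point
print(yLi(6.0) / 1.6)                # ~3.0, inside the 2.4-4.3 range quoted above
```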
However, this is not a conclusive result since the eventual modifications of Einstein's equations have not been considered in the present paper, as well as the possibility that the quantum fluctuation parameter σ^2 is not universal. Inserting the overlapping value of η_10 into (<ref>) for T_BBN≲ 1MeV and using spacetime foam parameter M_s/g_s∼ M_P (M_P∼ 10^19GeV is the Planck mass) <cit.>, one obtains M_s/g_sσ^2 ∼ 3.6 × 10^-13GeV → σ^2 ≲ 10^-22 . Therefore, we have inferred the upper bound on the dimensionless stochastic variable σ^2, which expresses the fluctuations of the recoil velocity of the D-branes. § CONCLUSIONS The BBN era, which occurred during the hot and expanding early Universe, has left an observable imprint in the abundance of primordial light elements. Precision observations and high-accuracy predictions of these elements provide an important test of the standard cosmological model (based on General Relativity) and allow probing of non-standard cosmological and particle physics scenarios. In this framework, we have used the BBN sensitivity to obtain a bound on the dimensionless stochastic variable σ^2 expressing the fluctuations of D-branes recoil velocity. Results give σ^2 ≲ 10^-22 for M_s/g_s ∼ M_P. A follow-up of the present work is to investigate physical scenarios related to spacetime foam where the BBN constraints are properly taken into account, or consider the general case in which the quantum fluctuation parameter σ^2 is not universal <cit.>. The authors would like to thank N. Mavromatos for useful correspondences and fruitful comments. This work was supported by the Natural Sciences and Engineering Research Council of Canada. wheeler J. A. Wheeler, Relativity, Groups and Topology, Eds. B. S. DeWitt and C. M.  DeWitt (Gordon and Breach, New York, 1964). hawking80 S. W. Hawking, D. N. Page and C. N. Pope, Nucl. Phys. B 170, 283-306 (1980). hawking82 S. W. Hawking, Commun. Math. Phys. 87, 395-415 (1982). ellis84 J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos and M. Srednicki, Nucl. Phys. B 241, 381 (1984). ellis92 J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Phys. Lett. B 293, 37-48 (1992) [arXiv:hep-th/9207103 [hep-th]]. garay1 L. J. Garay, Phys. Rev. D 58, 124015 (1998) [arXiv:gr-qc/9806047 [gr-qc]]. garay2 L. J. Garay, Phys. Rev. Lett. 80, 2508-2511 (1998) [arXiv:gr-qc/9801024 [gr-qc]]. ellis99 J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Gen. Rel. Grav. 32, 127-144 (2000) [arXiv:gr-qc/9904068 [gr-qc]]. ellis99_1 J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Gen. Rel. Grav. 31, 1257-1262 (1999) [arXiv:gr-qc/9905048 [gr-qc]]. amelino97 G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Int. J. Mod. Phys. A 12, 607-624 (1997) [arXiv:hep-th/9605211 [hep-th]]. pullin R. Gambini and J. Pullin, Phys. Rev. D 59, 124021 (1999) [arXiv:gr-qc/9809038 [gr-qc]]. yu H. W. Yu and L. H. Ford, Phys. Rev. D 60, 084023 (1999) [arXiv:gr-qc/9904082 [gr-qc]]. ellis00 J. R. Ellis, K. Farakos, N. E. Mavromatos, V. A. Mitsou and D. V. Nanopoulos, Astrophys. J. 535, 139-151 (2000) [arXiv:astro-ph/9907340 [astro-ph]]. amelino98 G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and S. Sarkar, Nature 393, 763-765 (1998) [arXiv:astro-ph/9712103 [astro-ph]]. nickEPJC N. E. Mavromatos, EPJ Web Conf. 70, 00083 (2014) [arXiv:1210.0211 [hep-ph]]. ellis00a J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and G. Volkov, Gen. Rel. Grav. 32, 1777-1798 (2000) [arXiv:gr-qc/9911055 [gr-qc]]. Capozziello:2017bxm S. Capozziello, G. Lambiase and E. N. 
Saridakis, Eur. Phys. J. C 77, no.9, 576 (2017) [arXiv:1702.07952 [astro-ph.CO]]. Barrow:2020kug J. D. Barrow, S. Basilakos and E. N. Saridakis, Phys. Lett. B 815, 136134 (2021) [arXiv:2010.00986 [gr-qc]]. Asimakis:2021yct P. Asimakis, S. Basilakos, N. E. Mavromatos and E. N. Saridakis, Phys. Rev. D 105, no.8, 084010 (2022) [arXiv:2112.10863 [gr-qc]]. Bernabeu:2006av J. Bernabeu, N. E. Mavromatos and S. Sarkar, Phys. Rev. D 74, 045014 (2006) [arXiv:hep-th/0606137 [hep-th]]. [1] P. A. R. Ade et al. [Planck], Astron. Astrophys. 594, A13 (2016) [arXiv:1502.01589 [astro-ph.CO]]. [2] N. Aghanim et al. [Planck], Astron. Astrophys. 641, A6 (2020) [erratum: Astron. Astrophys. 652, C4 (2021)] [arXiv:1807.06209 [astro-ph.CO]]. [3] A. G. Cohen, A. De Rujula and S. L. Glashow, Astrophys. J. 495, 539-549 (1998) [arXiv:astro-ph/9707087 [astro-ph]]. epjp52 G. Steigman, Adv. High Energy Phys. 2012, 268321 (2012) [arXiv:1208.0032 [hep-ph]]. epjp53 V. Simha and G. Steigman, JCAP 06, 016 (2008) [arXiv:0803.3465 [astro-ph]]. batt S. Bhattacharjee and P. K. Sahoo, Eur. Phys. J. Plus 135, no.4, 350 (2020) [arXiv:2004.04684 [physics.gen-ph]]. kavuk N. Katırcı and M. Kavuk, Eur. Phys. J. Plus 129, 163 (2014) [arXiv:1302.4300 [gr-qc]]. epjp57 J. P. Kneller and G. Steigman, New J. Phys. 6, 117 (2004) [arXiv:astro-ph/0406320 [astro-ph]]. epjp58 G. Steigman, Ann. Rev. Nucl. Part. Sci. 57, 463-491 (2007) [arXiv:0712.1100 [astro-ph]]. jcap2020 B. D. Fields, K. A. Olive, T. H. Yeh and C. Young, JCAP 03, 010 (2020) [erratum: JCAP 11, E02 (2020)] [arXiv:1912.01132 [astro-ph.CO]]. theory S. Boran and E. O. Kahya, Adv. High Energy Phys. 2014, 282675 (2014) [arXiv:1310.6145 [astro-ph.CO]]. theory42 B. D. Fields, Ann. Rev. Nucl. Part. Sci. 61, 47-68 (2011) [arXiv:1203.3551 [astro-ph.CO]]. Nick2 N. E. Mavromatos, J. Phys. Conf. Ser. 283, 012022 (2011) [arXiv:1010.5399 [gr-qc]]. alfaro J. Alfaro, H.A. Morales-Tectol, L.F. Urrutia, Phys. Rev. Lett. 84, 2318 (2000). rovelli C. Rovelli, Livings Reviews in Relativity, Vol 1 [hppt://www.livingreviews.org/Article]. higgs G. Aad et al. [ATLAS Collaboration], Phys. Lett. B 716, (2012) 1 [arXiv:1207.7214 [hep-ex]]; S. Chatrchyan et al. [CMS Collaboration], Phys. Lett. B 716, (2012) 30 [arXiv:1207.7235 [hep-ex]]. cpbau V. A. Kuzmin, V. A. Rubakov and M. E. Shaposhnikov, Phys. Lett. B 155, (1985) 36. susy See, for instance: M. E. Peskin, “Supersymmetry in Elementary Particle Physics,” arXiv:0801.1928 [hep-ph] and references therein; H. P. Nilles, Phys. Rept. 110, (1984) 1. sterile For an up-to-date review see: K. N. Abazajian et al., “Light Sterile Neutrinos: A White Paper,” arXiv:1204.5379 [hep-ph] and references therein. susysearchlhc P. de Jong [ATLAS Collaboration], EPJ Web Conf. 28, (2012) 09007 [arXiv:1201.4548 [hep-ex]]; R. Schofbeck [CMS Collaboration], J. Phys. Conf. Ser. 347, (2012) 012011, and references therein. posthiggssusy C. Beskidt, W. de Boer, D. I. Kazakov and F. Ratnikov, arXiv:1207.3185 [hep-ph]; H. Baer, V. Barger and A. Mustafayev, Phys. Rev. D 85, (2012) 075010 [arXiv:1112.3017 [hep-ph]]. wmapsusy U. Chattopadhyay, A. Corsetti and P. Nath, Phys. Rev. D 68, (2003) 035005 [hep-ph/0303201]. For reviews see: A. B. Lahanas, N. E. Mavromatos and D. V. Nanopoulos, Int. J. Mod. Phys. D 12, (2003) 1529 [hep-ph/0308251]; C. Munoz, Int. J. Mod. Phys. A 19, (2004) 3093 [hep-ph/0309346]. rpv V. A. Mitsou [ATLAS Collaboration], arXiv:1210.1679 [hep-ex], these proceedings. heterotic See, e.g.: P. Binetruy, A. Birkedal-Hansen, Y. Mambrini and B. D. Nelson, Eur. Phys. J. 
C 47, (2006) 481 [hep-ph/0308047]; P. Binetruy, M. K. Gaillard and B. D. Nelson, Nucl. Phys. B 604, (2001) 32 [hep-ph/0011081]. For a review see: N. E. Mavromatos, 22nd Lake Louise Winter Institute 2007: Fundamental Interactions (World Scientific, Singapore), 80-127 [arXiv:0708.0134 [hep-ph] ]. strings See, e.g.: J. Polchinski, “What is string theory?,” hep-th/9411028. landscape L. Susskind, “The Anthropic landscape of string theory,” In *Carr, Bernard (ed.): Universe or multiverse?* (2003), 247-266 [hep-th/0302219]. acharya See, e.g.: B. S. Acharya and M. Torabian, Phys. Rev. D 83, (2011) 126001 [arXiv:1101.0108 [hep-th]]; B. S. Acharya, G. Kane and E. Kuflik, arXiv:1006.3272 [hep-ph]; B. S. Acharya, P. Kumar, K. Bobkov, G. Kane, J. Shao and S. Watson, JHEP 0806, (2008) 064 [arXiv:0804.0863 [hep-ph]]. gasperini M. Gasperini, Elements of string cosmology (Cambridge Univ. Press, Cambridge, UK, 2007), 552 p.; Lect. Notes Phys. 737, (2008) 787 [hep-th/0702166 [HEP-TH]] and references therein. lahanas A. B. Lahanas, N. E. Mavromatos and D. V. Nanopoulos, PMC Phys. A 1, (2007) 2 [hep-ph/0608153]; A. B. Lahanas, Phys. Rev. D 83, (2011) 103523 [arXiv:1102.4277 [hep-ph]]. dutta A. B. Lahanas, N. E. Mavromatos and D. V. Nanopoulos, Phys. Lett. B 649, (2007) 83 [hep-ph/0612152]; B. Dutta, A. Gurrola, T. Kamon, A. Krislock, A. B. Lahanas, N. E. Mavromatos and D. V. Nanopoulos, Phys. Rev. D 79, (2009) 055002 [arXiv:0808.1372 [hep-ph]]. spanos A. B. Lahanas and V. C. Spanos, JHEP 1206, (2012) 089 [arXiv:1201.2601 [hep-ph]]. pBB M. Gasperini and G. Veneziano, Phys. Rept. 373, (2003) 1 [hep-th/0207130]. polchinski J. Polchinski, “Tasi lectures on D-branes,” hep-th/9611050 and references therein. westmuckett J. R. Ellis, N. E. Mavromatos and M. Westmuckett, Phys. Rev. D 70, (2004) 044036 [gr-qc/0405066]; ibid. D 71, (2005) 106006 [gr-qc/0501060]. rizos N. E. Mavromatos and J. Rizos, Phys. Rev. D 62, (2000) 124004 [hep-th/0008074]; Int. J. Mod. Phys. A 18, (2003) 57 [hep-th/0205299]. lahanasdetails G. A. Diamandis, B. C. Georgalas, A. B. Lahanas, N. E. Mavromatos and D. V. Nanopoulos, Phys. Lett. B 642, (2006) 179 [hep-th/0605181]. astro S. Basilakos, N. E. Mavromatos, V. A. Mitsou and M. Plionis, Astropart. Phys. 36, (2012) 7 [arXiv:1107.3532 [astro-ph.CO]] and references therein. Gamow G. Gamow, Phys. Rev. 70, (1946) 572. Mavromatos:2010nk N. E. Mavromatos, V. A. Mitsou, Sarben Sarkar and A. Vergou, Eur. Phys. J. C 72, (2012) 1956 [arXiv:1012.4094 [hep-ph]]. Interaction with the D-particles implies that at least one of the ends of the open string representing the neutrino field is attached to the D-particle defect, with the simultaneous creation of virtual strings stretched between the defect and the brane, that describe the recoil of the D-particle. During the interaction time, the D-particle undergoes motion characterized by non trivial velocities, u_∥=g_s/M_sΔ k_i=g_s/M_sr_i k_i along the brane longitudinal dimensions, where r_i denotes the proportion of the incident neutrino momentum that corresponds to the momentum transfer Δ k_i during the scattering. We have also assumed that the fraction of the neutrino momentum transfer in the direction perpendicular to the brane world is negligible. Meanwhile, other bulk D-particles (cf. fig. <ref>) exert forces on the vacuum energy of the brane world of mixed sign, depending on their relative distance. 
Thus, during the scattering process of a neutrino field with a D-particle, the vacuum energy of the brane fluctuates by an amount Δ𝒱 which depending on the process can be of either sign. From energy-momentum conservation, at each individual scattering event between a neutrino field and a recoiling D-particle, one could thus write: p⃗_⃗ ⃗b⃗e⃗f⃗o⃗r⃗e⃗+p⃗_ after+M_s/g_s u⃗_∥=0 , E_ before=E_ after+1/2 M_s/g_s u⃗_∥^2+Δ𝒱 where (p⃗, E)_ before (after) denote the incident (outgoing) neutrino momenta, energies repectively and we used the fact that the recoiling heavy D-particle of mass M_s/g_s (with M_s the string scale and g_s<1 the string coupling, assumed weak, so that string perturbation theory applies) has a non-relativistic kinetic energy 1/2 M_s/g_s u⃗_∥^2. Upon averaging ⟨⟨…⟩⟩ over a statistically significant number of events, due to multiple scatterings in a D-particle-foam background, we may use the following stochastic hypothesis <cit.> ≪ u_i ∥≫=0 , ≪ u_i ∥u_j ∥≫=σ^2δ_ij , ≪Δ𝒱≫ = 0 , implying that Lorentz invariance holds only as an average symmetry over large populations of D-particles in the foam. It is this violation of LI in stochastic fluctuations (or, equivalently, locally at individual scatterings of stringy matter off D-particles, due to the recoil of the latter) that is associated with the induced CPT Violation, in the sense of different dispersion relations between particles and antiparticles induced by the recoiling D-particle medium. On averaging over populations of D-particles, we need to find an expression for ≪ E_ before≫ appearing in (<ref>). To this end, we mention that the effects of the D-foam go beyond the above-mentioned kinematical ones. As discussed in  <cit.> the non-trivial capture and splitting of the open string during its interaction with the D-particle, and the recoil of the latter, resulting in a local effective space-time metric distortion, in the neighbourhood of the recoiling D-particle, of the form: ds^2=g_μνdx^μdx^ν=(η_μν+h_μν)dx^μdx^ν , h_0i=(u_i ∥^aσ_a) , where u_i ∥ is the recoil velocity of the D-particle on the D-brane world, with i=1,2,3 a spatial space-time index, σ_a are the 2×2 Pauli flavour matrices with a=1,2,3 (assuming two-neutrino-flavour oscillations for simplicity). On average over a population of stochastically fluctuating D-particles including neutrino-flavour changes, one may have the conditions (<ref>), the second of which, in the case of flavour oscillations, can be generalised to ≪ u_a,i^∥u_b,j^∥≫=σ^2δ_ijδ_ab. (we still assume that ≪ u_a,i^∥≫=0 . ) As a result of this, on average, the flavour change during the interactions of neutrinos with the D-foam can be ignored. In such a case, any flavour structure in the metric (<ref>) is ignored. On assuming isotropic momentum transfer, r_i=r for all i=1,2,3. The dispersion relation of a neutrino of mass m propagating on such a deformed isotropic space-time, then, reads: p^μp^νg_μν=p^μp^ν(η_μν+h_μν)=-m^2 ⇒ E^2-2Ep⃗· u_∥-p⃗^2-m^2=0. This on-shell condition implies that E=u⃗_⃗∥⃗·p⃗±√((u⃗·p⃗)^2+p⃗^2+m^2) . We take the average ≪…≫ over D-particle populations with the stochastic processes (<ref>). Hence we arrive at the following expression for an average neutrino energy in the D-foam background: ≪ E≫ = ≪p⃗·u⃗≫±≪√(p^2+m^2+(p⃗·u⃗)^2)≫≃±√(p^2+m^2)(1+1/2σ^2), p≫ m , for the active light neutrino species. The last relation in eq. (<ref>) expresses the corrections due to the space-time distortion of the stochastic foam to the free neutrino propagation. 
It is this expression for the neutrino energies that should be used in the averaged energy-momentum conservation equation (<ref>) that characterises a scattering event between a neutrino and a D-particle. From (<ref>), then, we obtain that the total combined effect on the energy-momentum dispersion relations, from both capture/splitting and metric distortion, can be represented as: ≪ E_ after≫=±√(p^2+m^2)(1+1/2σ^2)-1/2M_s/g_s σ^2 . Since antiparticles of spin 1/2 fermions can be viewed as “holes” with negative energies, we obtain from (<ref>) and (<ref>) the following dispersion relations between particles and antiparticles in this geometry (for Majorana neutrinos, the rôles of particles /antiparticles are replaced by left/right handed fermions): ≪ E_ν≫ = √(p^2+m_ν^2)(1+1/2σ^2)-1/2M_s/g_s σ^2 , ≪ E_ν≫ = √(p^2+m_ν^2)(1+1/2σ^2)+1/2M_s/g_s σ^2 where E_ν>0 represents the positive energy of a physical antiparticle. In our analysis above we have made the symmetric assumption that the recoil-velocities fluctuation strengths are the same between particle and antiparticle sectors (scenarios for which this symmetry was not assumed have also been considered in an early work <cit.>). There can thus be local CPTV in the sense that the effective dispersion relation between neutrinos and antineutrinos are different. This is a consequence of the local violation of Lorentz symmetry (LV), as a result of the non-trivial recoil velocities of the D-particle, leading to the LV space-time distortions (<ref>). The difference (<ref>) in the dispersion relations between particles and antiparticles will imply differences in the relevant populations of neutrinos (n) and antineutrinos (n). This difference between neutrino and antineutrino phase-space distribution functions in D-foam backgrounds generates a matter-antimatter lepton asymmetry in the relevant densities ≪ n-n≫=g_d.o.f.∫d^3p/(2π)^3≪[f(E)-f(E)]≫ , f(E,μ)=1/ exp(E-μ)/T)±1 , E^2=p⃗^2+m^2 where g_d.o.f. denotes the number of degrees of freedom of relativistic neutrinos, and ≪…≫ denotes an average over suitable populations of stochastically fluctuating D-particles (<ref>). For the purposes of this talk, we shall make the plausible simplifying assumption that σ^2 is constant i.e. independent of space and of the (anti)neutrino energy. It is a parameter which can only be positive. This is for estimation purposes only. A more detailed and complete analysis will be given elsewhere <cit.>. Ignoring neutrino mass terms and (1+σ^2/2) square-root prefactors in (<ref>), setting the (anti)neutrino chemical potential to zero (which is a sufficient approximation for relativistic light neutrino matter) and performing a change of variables |p⃗|/T→ũ we obtain from (<ref>) the result (to leading order in σ^2) Δ n_ν = g_d.o.f./2π^2 T^3∫_0^∞dũ ũ^2 [1/1+e^ũ-M_sσ^2/2g_s T-1/1+e^ũ+M_sσ^2/2 g_s T]= ≃ g_d.o.f./π^2 T^3(M_sσ^2/g_s T) > 0, We thus observe that the CPTV term -1/2M_s/g_sσ^2 in the dispersion relation (<ref>) for the neutrino, which corresponds to the energy `loss' due to the D-particle recoil kinetic energies, comes with the right sign (`loss') so as to guarantee an excess of particles over antiparticles. This is a nice feature of our string model, which is not met in other CPTV cases, induced by the coupling of the fermion (neutrino) spin to local curvature in (axisymmetric) background space-times, that could characterise the early Universe <cit.>, where the positive sign of Δ n has to be fixed by hand. 
The resulting lepton asymmetry then freezes out to a value: η = Δn_ν/n_γ ∼ (315/2π^4) (GeV/T) (M_s/GeV) (σ^2/g_s). For later reasons, one introduces the baryon density parameter η_10 defined as <cit.> η_10 ≡ 10^10 η ≡ 10^10 Δn_ν/n_γ, with η_10 to be determined. From (<ref>) and (<ref>) one gets (M_s/GeV) (σ^2/g_s) = 10^-13 (2π^4/315) (T_BBN/MeV) η_10, where T_BBN ∼ 1 MeV is the temperature at which the big bang nucleosynthesis processes are effective.
http://arxiv.org/abs/2306.10177v1
20230616210044
Magnificent Minified Models
[ "Rich Harang", "Hillary Sanders" ]
cs.LG
[ "cs.LG" ]
This paper concerns itself with the task of taking a large trained neural network and `compressing' it to be smaller by deleting parameters or entire neurons, with minimal decreases in the resulting model accuracy. We compare various methods of parameter and neuron selection: dropout-based neuron damage estimation, neuron merging, absolute-value based selection, random selection, OBD (Optimal Brain Damage). We also evaluate a variation on the classic OBD method, which we call OBD-SD, that slightly outperformed all other parameter and neuron selection methods in our tests with substantial pruning. We compare these methods against quantization of parameters. We also compare these techniques (all applied to a trained neural network) with neural networks trained from scratch (random weight initialization) on various pruned architectures. Our results are only barely consistent with the Lottery Ticket Hypothesis <cit.>, in that fine-tuning a parameter-pruned model does slightly better than retraining a similarly pruned model from scratch with randomly initialized weights. For neuron-level pruning, retraining from scratch did much better in our experiments. § INTRODUCTION There are many ways to make a deep neural network smaller. In this paper, we focus on three categories of model size reduction: pruning, quantization, and training smaller models from scratch. Quantization means changing model parameters to lower-precision formats, like changing all 32-bit floating point parameters to 16-bit, which results in a file size about half as large. Pruning deals with deleting parameters or groups of parameters (like entire neurons) from a trained model to make it smaller (often followed by a fine-tuning round of training, as done in our experiments). Parameter-level pruning (also called unstructured pruning) prunes individual parameters at a time, whereas neuron-level pruning (also called structured pruning) prunes all parameters associated with a given neuron at once. To simplify terminology across multiple methods, we use the term 'damage' to refer broadly to the undesired impact on network performance of removing a node or zeroing a weight. Different compression methods use different approaches to either estimate damage directly, or rank neurons or weights in order of increasing assumed damage according to some other metric that does not directly evaluate the impact on loss or performance. This `damage' term, when used in the context of directly estimating the damage to loss caused by pruning a parameter, is sometimes referred to as `saliency' in other papers. If a single neuron that is fully connected to its previous and subsequent layer is removed (pruned), the two matrices representing both sets of connections each lose a column or row, resulting in a smaller overall model (in memory and in file size). However, removing a single parameter in a fully connected layer generally doesn't reduce a model's memory requirements or file size, because the weight matrices keep the same dimensions. Currently, there is a lack of general support for sparse operations in neural network deployment libraries, but if these become more widespread, parameter-level pruning will more easily lead to significant memory and file-size savings. Without sparse operations support, parameter-level pruning only shines when trying to minimize compressed (e.g. zipped) file size.
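To make the structured/unstructured distinction above concrete, the following is a minimal NumPy sketch of our own (not code from the paper); the layer widths 1024 and 768 simply mirror the architecture described later, and biases are ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1024, 768))   # layer 1 -> layer 2 weights
W2 = rng.normal(size=(768, 512))    # layer 2 -> layer 3 weights

# Unstructured (parameter-level) pruning: zero the 10% smallest-magnitude weights
# of W1. Shapes are unchanged, so uncompressed size and memory stay the same,
# but the zeros compress well (and would also help if sparse kernels were available).
threshold = np.quantile(np.abs(W1), 0.10)
W1_pruned = np.where(np.abs(W1) < threshold, 0.0, W1)
print(W1_pruned.shape, (W1_pruned == 0).mean())   # (1024, 768), about 0.10

# Structured (neuron-level) pruning: drop 10% of layer-2 neurons. This removes
# columns of W1 and the matching rows of W2, so the dense matrices really shrink.
keep = np.sort(rng.choice(768, size=int(768 * 0.9), replace=False))
W1_small, W2_small = W1[:, keep], W2[keep, :]
print(W1_small.shape, W2_small.shape)             # (1024, 691) (691, 512)
```

Zeroed weights only pay off once the file is compressed, whereas the column/row removal shrinks the stored arrays themselves.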
We found that if zipped file size reduction is the goal, parameter removal tends to outperform neuron removal (lower reductions in accuracy given a reduced zipped file size), whereas if uncompressed file size or memory is what you want to reduce, parameter removal has little benefit. To make results comparable, our inline plots show reductions in accuracy with respect to percent reductions in zipped model file size. [Although the methods discussed in this paper would work fine on non-standard neural network layers (like convolutions), we only implemented these methods on fully connected layers, mostly due to ease of implementation and the ubiquity of fully connected layers in modern neural networks.] Finally, we also train various reduced-size model architectures (in terms of number of neurons) from scratch: one architecture suggested by our OBD-SD neuron pruning method, as well as various architectures resulting from fixed reductions in fully connected layer sizes. We also train various reduced-size models (in terms of number of parameters) from scratch, resulting from fixed reductions in the number of connections between (originally fully connected) layers. § BACKGROUND & RELATED WORK While quantization is a fairly straightforward way of reducing model size, model pruning is more complicated. It's not obvious which parts of your model should be removed to cause the least amount of damage. Optimal Brain Damage<cit.> attempts to estimate the change in loss that will be caused by setting an individual parameter to 0. While computationally difficult to compute directly, this can be estimated with the help of Taylor Series and a few assumptions. These damage estimates can be aggregated to the neuron level in order to choose which neurons to remove. Various other papers have experimented with OBD and variants of the OBD method<cit.><cit.><cit.>. All such papers we could find use empirical averages to estimate the Hessian for their OBD-related approaches; we introduce an approach (OBD-SD) that focuses on the empirical variance (see the Methods section for details). Dan et al.<cit.> experiment with parameter (neuron connection) pruning based on the absolute value of parameters; we replicate this approach in our experiments (and extend it to the neuron level). Zhong et al.<cit.> propose merging similar neurons in the same layers together. This can be extended by prioritizing neuron pairs that are both similar in terms of parameters and score low in terms of damage estimates. Salehinejad et al.<cit.> propose using dropout to find subnetworks that don't result in high loss values. We implement a similar approach, where dropout is used as a means of generating data to estimate neuron importance via a linear regression. Liu et al.<cit.> interestingly conclude that a large model pruned to a smaller size does no better (or does worse) than training a smaller model from scratch, contradicting implications of the Lottery Ticket Hypothesis<cit.>. This suggests that pruning may be useful for architecture search, but does not bring the improvements that pruning has generally been thought to achieve. Because of these results, we also compare our pruned models to models of the same resulting architecture, trained from scratch with random weight initialization (with both parameter-level and neuron-level pruning). Our results are consistent with the Lottery Ticket Hypothesis<cit.>, though by a very slim margin, and only for parameter-level pruning.
Specifically, we found than fine-tuning a parameter-pruned model with our best methods does slightly better than retraining a similarly sized model from scratch with random initializations. However, we found that fine-tuning a neuron-pruned model does consistently worse than retraining a similarly pruned model from scratch with random initializations. § METHODS We used the same `base' model in all of our experiments: our portable executable malware detection deep neural network, which is entirely comprised of dense, feed-forward layers (in addition to dropout and batch-norm `layers'). The model is described in detail in Harang et. al <cit.>'s paper, except that ours excludes auxiliary tag outputs. The original base model was trained on a dataset of approximately 20 million samples for 20 epochs. Trained-from-scratch models used the same dataset. Pruning methods with fine-tuning also used this same dataset. Evaluation was performed on the same time-split test set of size 3 million never-before-seen (in our dataset) samples for all results. The main output of the model is the binary `is_malware' prediction, which had a 75.3% positive class balance in the training dataset, and a 79.9% positive class balance in the test dataset. This is the output our accuracy results refer to. In our pruning experiments, we iteratively pruned p=10 percent neurons or parameters (neuron connections) from each of the first five fully connected layers in the network (output layers were not pruned) and then fine-tuned the smaller model on a random selection of 3 million samples from the original training dataset (resampled during each fine-tuning pass). §.§ Model Compression Methods §.§.§ Pruning Methods * Parameter-Level Damage Estimation Techniques These techniques were all also extended to the neuron-level evaluation by summing damage estimates over each neuron's parameters. * Random Parameters in each layer are deleted at random (fixed proportion per layer per pruning round). * Magnitude-based Parameters are selected to be deleted by their absolute value, prioritizing the deletion of parameters close to 0. The idea is that it doesn't cause much damage to set parameters close to 0, to 0. The `damage' ranking (where lowest damage parameters are pruned first) d_i for parameter θ_i is defined as: d̂_i = |θ_i | * Optimal Brain Damage (OBD) Parameters are selected to be deleted by selecting parameters with the smallest OBD<cit.> damage estimates. OBD attempts to estimate the change in loss L that would be caused by setting a parameter θ_i to θ_i'=0, that is: L(θ_i') - L(θ_i). OBD approximates L(θ') by using Taylor Series (as recalculating loss individually for each parameter in a neural network across a sufficient number of test samples would be very computationally expensive): L(θ_i') = L(θ_i) + ∂ L(θ_i)/∂θ_iθ_i + 1/2∂^2 L(θ_i)/∂θ_i^2θ_i^2 + ... OBD assumes away the higher order terms (...) in the equation above, as well as assumes that ∂ L/∂θ_i is 0, because loss of a trained model is at a local minimum (and OBD is applied to trained models). So the damage estimate comes out to: d̂_i = 1/2∂^2 L/∂θ_i^2θ_i^2 In our experiments, we used a sample from the training dataset to estimate loss. * Standard Deviation or Variance Based Optimal Brain Damage for Large Models (OBD-SD) While the theory behind the OBD technique is elegant, in practice we found that there are some stumbling blocks, particularly if the loss function you're approximating with Taylor Series is complex (i.e. a large model with many layers). 
Taylor Series approximations become less and less accurate the larger the change in a parameter θ_i is (and since we're setting parameters to 0, this is often quite large), and in general the more complex the function is. Additionally, OBD assumes away non-diagonal Hessian terms. If we trust that a model has been trained to a good local minimum, it would be surprising if setting parameters to 0 would result in a loss reduction. So we expected our damage estimates to be almost all non-negative. However, in practice, we found only about two-thirds of damage estimates to be non-negative. On further inspection, we found that (even with a large sample size of 100,000) there was almost zero correlation between mean(∂^2 L/∂θ_i^2 θ_i^2) and SD(∂^2 L/∂θ_i^2 θ_i^2), but a very strong positive correlation between the absolute values of these two variables, around 0.8-0.9 (see Figure 1). These results indicated that the OBD damage score is not altogether trustworthy (indeed, it performed worse than random in our results).[In the original OBD paper (published in 1989) the authors lacked a procedure to directly calculate the Hessian diagonal second derivatives ∂^2 L/∂θ_i^2. They mention the Levenberg-Marquardt approximation, which appears to approximate this value with 2(∂ L/∂θ_i)^2, which *does* guarantee a positive damage estimate (and, note, is the same as 2SD(∂ L/∂θ_i) when loss is at a local minimum). In our experiments though, this approximation had very little correlation with the thing it was attempting to approximate, so we chose to use the actual ∂^2 L/∂θ_i^2 form. It is interesting to note, though, that the approximation form performed much better than the non-approximated form, though still worse than our OBD-SD approach.] However, the strong correlation between the absolute value of the mean OBD damage scores and their variance indicated that, while the direction of each OBD damage score might be untrustworthy, the overall strength of the scores might be a more reliable signal. Because of this, we decided to explore the more robust signal coming from the variance of the individual sample-level OBD signals. For this reason, we explored ranking parameters by the damage estimate provided by the variance of the OBD damage estimates (change in loss estimates): Var(1/2 ∂^2 L/∂θ_i^2 θ_i^2) = (θ_i^4/4) Var(∂^2 L/∂θ_i^2), which generates the same ordering as the slightly simpler: d̂_i = θ_i^2 SD(∂^2 L/∂θ_i^2), which we refer to as OBD-SD (Optimal Brain Damage - Standard Deviation) in the rest of the paper (a toy code sketch of this score is given at the end of the paper). * Neuron-Level Damage Estimation Techniques While all of the parameter-level damage estimation techniques can be (and were, in our experiments) extended to the neuron level by simply summing damages across all parameters of a given neuron, the following methods were only applied to neurons in our experiments. * Dropout-based Pruning OBD and related techniques estimate how loss will be changed when you delete one or more parameters via Taylor Series approximations. Another approach is to calculate the change in loss more directly. Evaluating loss accurately requires running many samples through a neural network, so directly re-evaluating loss for every permutation of your model's architecture (i.e. the current architecture sans one parameter, or sans one neuron) is extremely costly, computationally.
However, we hypothesized that you might be able to estimate it by combining the loss results from many dropout rounds (where a random sub-sampling of neurons are deleted) in order to estimate the importance of each neuron. To do this, we calculated loss L from a fixed batch sample over d dropout rounds. We fit a linear regression to these d samples, modeling L as the dependent variable, and boolean values for each neuron (representing whether the neuron was masked or not during dropout) as the independent variables. This provided a computationally cheaper way to estimate the damage caused by deleting any given neuron. * Neuron Merging Computationally, it's quick to calculate the pairwise euclidean distances between each neuron in a network layer, in terms of their parameters. We chose the calculate the distance matrix amongst all neurons in a layer, and then iteratively choose the closest two remaining neurons to merge. We merged neurons by averaging their parameters (weights and biases) and then deleting one of the two neurons. Note that depending on the activation function being used, this technique can have variable effects (as in, the output of two neurons merged into one may not be the same as the average output from the original two neurons). Our neural network used ELU activations in its fully connected layers[We did run similar tests with a model using RELU activations and noticed no significant differences in the results] [We also tried combining this approach with damage estimates from other approaches by adding damage estimates into the pairwise distance matrix for each parameter (with a scaling weight w as a hyperparameter), which generated similarly successful results (not included in the results section, for brevity).] §.§.§ Quantization No neurons are deleted, instead all parameters in the model are saved as 16-bit floats instead of our default 32-bit floats. §.§.§ Trained From Scratch We chose various architectures to train fully from scratch (with random parameter initializations) on the original dataset. First, we trained from scratch (with random weight initialization) the architecture implied by our best (OBD-SD) neuron pruning method applied globally (with a small penalty added to ensure layers don't disappear entirely), which resulted in strongly decreasing layer sizes: 587, 146, 46, 57, 7 that reduced model file size by approximately 75% (original layer sizes were 1024, 768, 512, 512, 512). We also trained from scratch the various architectures yielded from removing a fixed proportion of neurons from each main fully connected layer, including one that also reduced model file size by about 75% for comparability. These tended to produce better final results than pruning methods yielded (see Figures 4 and 6). Finally, we also trained from scratch various architectures yielding from removing a fixed proportion of neuron connections (parameter-level pruning) in the first five fully connected layers of our model (analogous to the `random' parameter-level pruning approach). These tended to produce similar final results than our best pruning methods yielded (see Figures 3 and 5). § RESULTS Interestingly, the OBD approach did consistently worse than random[You might be wondering if a negative was dropped somewhere in our OBD equations (we did, initially!) 
- but nope: negated (-OBD) damage estimates did even worse than the OBD formula, crucially because (at least in our model) the parameters that have little effect on the model tend to have OBD damage estimates close to 0 (with low sample variance), whereas the tails of the OBD damage estimates are more strongly negative and positive - further away from zero. This is due to calculating the second partial derivative directly in our OBD implementation, instead of using an imprecise approximation - discussed more in section 3.1.]. Investigating why led us to develop our modified OBD-SD approach (see section 3.1), which appeared to perform best out of all our pruning comparisons, particularly when heavily pruning models (absolute-value based parameter pruning did best for small amounts of pruning). The pruning results are less clear when removing entire neurons at once (although OBD-SD still seemed to come out a bit ahead, especially with large amounts of pruning). This makes sense: a neuron in our neural network is associated with hundreds to thousands of parameters. It's far more difficult to identify a neuron that is unlikely to cause much damage, purely because it's unlikely that all of its parameters are unimportant. So while some methods appear to do slightly better than the others, the benefit is nowhere near as stark as when methods are applied on a parameter level. When choosing which method to apply, it's important to consider in what way you want your model to be 'smaller'. If uncompressed (unzipped) file size and memory are the things you wish to minimize, parameter-level pruning isn't particularly useful (at least, not without sparse matrix operation code). However, if zipped file size is the thing you wish to minimize (often the case when deploying a model to many endpoints), then parameter-level pruning performs significantly better than neuron-level pruning. However, even the winning pruning methods performed only slightly better than training a smaller model from scratch. Liu et al.<cit.> assert that fine-tuning a pruned model only gives comparable or worse performance than training the same model with randomly initialized weights, whereas the Lottery Ticket Hypothesis<cit.> indicates that pruning and fine-tuning a model will produce better results. Our results side with the Lottery Ticket Hypothesis, but the margin is slim (possibly `comparable') - and only true for parameter-level pruning. In our results, the most successful parameter-level pruning methods did slightly better than training from scratch, whereas neuron-level pruning did significantly worse than retraining from scratch (likely because parameter-level pruning is able to target irrelevant parameters more precisely). Float-16 quantization also performed far better than pruning methods (approximately halving model size with barely an effect on accuracy), though it is somewhat limited in terms of flexibility. § CONCLUSION While parameter-level pruning is effective at reducing zipped model size, and neuron-level pruning is effective at reducing overall (in-memory, uncompressed, and zipped) model size, our results suggest that simply training a smaller model from scratch yields the same or better results. Quantization is also a simple and good approach.
While our new parameter damage estimation technique, OBD-SD, seems to do about the same as or better than all other pruning methods we tested, we conclude that it's not particularly useful to prune with such methods unless you want to maintain the ability to easily trade off between accuracy and model size (because pruning generates many models of varying sizes, without having to train each different model from scratch). § ACKNOWLEDGEMENTS Thanks to Sophos for supporting this research.
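As the supplementary sketch referenced in the Methods section, here is a toy PyTorch illustration of our own (not the authors' code) of the three parameter-level damage scores compared above: magnitude, OBD, and OBD-SD. It assumes a per-sample squared-error loss on a flat parameter vector, neither of which matches the paper's actual malware model, and it computes the per-sample Hessian diagonal by exact double backprop, which only scales to toy sizes.

```python
import torch

torch.manual_seed(0)

# Tiny stand-in problem: a flat parameter vector and a squared-error loss.
theta = torch.randn(12, requires_grad=True)
X, y = torch.randn(64, 12), torch.randn(64)

def per_sample_hessian_diag(theta, x, t):
    """Exact diagonal of d^2 loss / d theta^2 for one sample, via double backprop."""
    loss = (x @ theta - t) ** 2
    (g,) = torch.autograd.grad(loss, theta, create_graph=True)
    diag = torch.stack([torch.autograd.grad(g[i], theta, retain_graph=True)[0][i]
                        for i in range(theta.numel())])
    return diag.detach()

H = torch.stack([per_sample_hessian_diag(theta, X[i], y[i]) for i in range(len(X))])

with torch.no_grad():
    magnitude = theta.abs()                         # |theta_i|
    obd       = 0.5 * H.mean(dim=0) * theta ** 2    # 1/2 * mean(d2L/dtheta2) * theta^2
    obd_sd    = H.std(dim=0) * theta ** 2           # SD(d2L/dtheta2) * theta^2

# Prune-first ordering: smallest estimated damage first.
print(torch.argsort(obd_sd))
```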
http://arxiv.org/abs/2306.11958v1
20230621010210
PDS-MAR: a fine-grained Projection-Domain Segmentation-based Metal Artifact Reduction method for intraoperative CBCT images with guidewires
[ "Tianling Lyu", "Zhan Wu", "Gege Ma", "Chen Jiang", "Xinyun Zhong", "Yan Xi", "Yang Chen", "Wentao Zhu" ]
physics.med-ph
[ "physics.med-ph", "eess.IV" ]
PDS-MAR: a fine-grained Projection-Domain Segmentation-based Metal Artifact Reduction method for intraoperative CBCT images with guidewires ^1 Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, China ^2 Laboratory of Imaging Science and Technology, Southeast University, Nanjing, China ^3 First-Imaging Tech., Shanghai, China ^4 Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, China. ^* Authors to whom any correspondence should be addressed. <[email protected]> and <[email protected]> Since the invention of modern CT systems, metal artifacts have been a persistent problem. Due to increased scattering, amplified noise, and insufficient data collection, it is more difficult to suppress metal artifacts in cone-beam CT, limiting its use in human- and robot-assisted spine surgeries where metallic guidewires and screws are commonly used. In this paper, we demonstrate that conventional image-domain segmentation-based MAR methods are unable to eliminate metal artifacts for intraoperative CBCT images with guidewires. To solve this problem, we present a fine-grained projection-domain segmentation-based MAR method termed PDS-MAR, in which metal traces are augmented and segmented in the projection domain before being inpainted using triangulation-based interpolation. In addition, a metal reconstruction phase is proposed to restore metal areas in the image domain. The digital phantom study and real CBCT data study demonstrate that the proposed algorithm achieves significantly better artifact suppression than competing methods and has the potential to advance the use of intraoperative CBCT imaging in clinical spine surgeries. Keywords: cone-beam CT, metal artifact reduction, projection-domain segmentation, CT image reconstruction § INTRODUCTION Minimally Invasive Spine Surgery (MISS) is a surgical procedure designed to stabilize the vertebral bones and spinal joints or relieve pressure on the spinal nerves caused by conditions such as herniated discs, lumbar spinal stenosis, spinal tumors, etc. Compared to conventional open spine surgery, MISS results in a significantly shorter surgical time, a quicker patient recovery, and a reduced risk of infection and postoperative discomfort <cit.>. In recent decades, it has gained popularity as a treatment for spinal disorders <cit.>. Depending on the condition of the patient, instrumentation like rods and screws may be necessary to stabilize the spine. To minimize skin incisions, guidewires are inserted through the skin and into the spinal vertebrae along the instrumentation's desired trajectories. Typically, this phase is navigated using fluoroscope images from 2D C-arm systems <cit.>. However, these systems must be manually adjusted to a specific angle to get a better view, which makes them incompatible with modern robotic-assisted surgery systems. Recent studies have shown that intraoperative 3-D imaging systems are more dependable than fluoroscope systems for guidewire and screw navigation, and may reduce surgical complications <cit.>. 3-D imaging systems can also provide better image navigation for robotic-assisted surgery systems <cit.>. Intraoperative 3-D imaging systems promise to become increasingly important in MISS as robotic-assisted surgery systems continue to mature.
Cone-beam computed tomography (CBCT) is one of the most widely used intraoperative 3-D imaging modalities that has already been adopted in orthopedic surgery <cit.>. Compared to the heavy-weighted multi-slice CT (MSCT) commonly applied for diagnosis, CBCTs are usually light-weighted mobile systems producing much-lowered irradiation doses, making them the optimal imaging modality for intraoperative applications. However, metal artifacts introduced by metallic guidewires can significantly degrade images and impair the physician's ability to determine wire location <cit.>. Metal artifact has long been the cause of image quality degradation in all CT systems, and researchers have been addressing this issue for more than four decades <cit.>. These artifacts are more difficult to suppress in CBCT due to increased scattering, magnified noise, and insufficient data collection. The majority of current metal artifact reduction (MAR) methods involve two steps: segmentation and interpolation. The segmentation step distinguishes the metal-affected region from the unaffected region in the projection domain while the interpolation step inpaints the metal-affected region (metal trace) to generate projection data without metal. Recent related literature focuses primarily on the interpolation step. The metal-trace has been inpainted with linear interpolation (LI) <cit.>, prior-image-based interpolation <cit.>, and convolutional neural networks <cit.>. These methods assume that metal can be readily segmented in the image domain, and that metal traces can be extracted via forward-projecting metal masks. However, this strategy is not always effective with CBCT. Referring to Fig. <ref> as an example, Some metallic objects are successfully segmented (red regions in Fig. <ref>(a) and Fig. <ref>(b)) whereas the others are hard to segment completely with a threshold (yellow rectangles) due to beam-hardening, cone-beam artifact, limited angle artifact and other image degradations. These metallic guidewires also introduce metal artifacts to the images (green arrows). Fig. <ref>(c) depicts a projection view corresponding to the failed segmentation in the image domain, which shows the absence of metal traces (orange arrows). To overcome the segmentation inaccuracy problem, researchers have also studied projection domain metal segmentation methods <cit.>. However, data in the projection domain does not exhibit uniformity in fragments, making segmentation even more difficult. Even if one can obtain accurate metal traces, there is still a problem to retain image-domain metal locations from projection-domain masks. To improve the quality of intraoperative imaging for MISS and robotic-assisted MISS, a fine-grained Projection-Domain Segmentation-based MAR algorithm (PDS-MAR) targeting metallic guidewires in intraoperative CBCT images is proposed. In the proposed algorithm, there are three distinct stages. First, guidewires are segmented using tubular enhancement filtering in the projection domain. The metal-free images are then reconstructed utilizing triangulation-based interpolation and the FDK algorithm. Finally, backprojection of multiplicative forms is used to reconstruct image-domain metal masks. The contributions of this work are summarized as follows. * We demonstrate that it is impossible for image-domain segmentation-based MAR methods to eliminate CBCT metal artifacts in certain MISS situations, highlighting the significance of projection-domain metal segmentation. 
* We present a novel metal artifact reduction algorithm termed PDS-MAR for intraoperative CBCT with guidewires, which incorporates tubular enhancement filtering for metal segmentation and Delaunay triangulation for metal trace inpainting. * We pose the problem of reconstructing image-domain metal masks from projection-domain metal traces and propose a method based on multiplicative-form backprojection as a phase in PDS-MAR to partially solve the problem. The rest of this article is organized as follows. Section 2 illustrates the details of our PDS-MAR algorithm. Section 3 shows the experimental settings and results. Sections 4 and 5 present the discussion and conclusion of this study, respectively. § METHOD Fig. <ref> is a flowchart of the phases in the proposed algorithm. An uncorrected image is first reconstructed directly from the original raw data. By thresholding the uncorrected image, the mask of metal is obtained. Simultaneously, tubular enhancement filtering is applied to enhance guidewires within projection images. Metal traces are segmented with morphological operations on enhanced projection views and projected metal masks. To inpaint metal traces in each projection view, a triangulation-based interpolation is then applied. Inpainted projection data and metal traces are used to reconstruct metal-free images and refined metal masks, respectively. The specific procedures are described in the sections that follow. §.§ Metal trace segmentation This study focuses primarily on the segmentation of metallic guidewires. Guidewires are tubular objects in the projection domain, so techniques for enhancing vascular objects can be applied. Here we refer to the neurite enhancement filtering method proposed in <cit.> to enhance guidewires. This approach relies on a modified Hessian matrix H'(x) defined with the following equation H'_P*G_σ(x) = H_P*G_σ(x) + α R^T_π/2 H_P*G_σ(x) R_π/2 , where P*G_σ denotes the input projection view P smoothed with a Gaussian of variance σ, H_P*G_σ represents the Hessian matrix of the smoothed image, and R_π/2 stands for the rotation matrix with angle π/2. Let λ^σ_1(x) and λ^σ_2(x) be the two eigenvalues of the modified Hessian matrix H'_P*G_σ(x), ordered such that |λ^σ_1(x)| > |λ^σ_2(x)|; the output vesselness is then defined piecewise as V_σ(x) = λ^σ_1(x)/λ^σ_max if λ^σ_max > 0, and V_σ(x) = 0 if λ^σ_max ≤ 0, where λ^σ_max refers to the largest λ^σ_1(x) among all pixels in the image. Eq. <ref> normalizes the vesselness image to the range [0,1]. To better model guidewires with different cross-sectional radii, vesselnesses under a set of different Gaussian variances Σ = {σ_i}, i=1,2,...,N, are calculated, and the enhancement result is set as the maximal vesselness value at each pixel: T(x) = max_σ∈Σ V_σ(x). Even though the tubular enhancement filtering step substantially enhances guidewire-affected regions, it also enhances other image boundaries, such as bone and body boundaries. A simple threshold Th_enh would therefore include many undesired regions. Fortunately, the metal traces from the image-domain segmentation are still available for guidance. The image-domain metals are segmented with a threshold Th_metal and then forward-projected into the projection domain. Points with values greater than 0 are considered seed points for region-growing, and the resultant binary masks are considered metal-affected region masks. §.§ Metal-free image reconstruction Since guidewires are typically inserted horizontally, conventional 1-D linear interpolation in rows is likely to lose a great deal of tissue information.
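Before turning to the interpolation itself, the tubular-enhancement filter of the previous subsection can be prototyped compactly. The following NumPy/SciPy sketch is our own illustration (not the authors' implementation); the defaults α=1/3 and Σ={1,3,5,7,9} are the values quoted later in the experimental settings, σ is used directly as the Gaussian standard deviation, and the piecewise normalisation follows the formula as printed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubular_enhance(proj, sigmas=(1, 3, 5, 7, 9), alpha=1/3):
    """Hedged sketch of the multi-scale tubular-enhancement filter described above.
    proj: one 2-D projection view (line-integral image)."""
    out = np.zeros_like(proj, dtype=float)
    for s in sigmas:
        # Gaussian-smoothed second derivatives -> Hessian entries per pixel.
        hxx = gaussian_filter(proj, s, order=(0, 2))
        hyy = gaussian_filter(proj, s, order=(2, 0))
        hxy = gaussian_filter(proj, s, order=(1, 1))
        # Modified Hessian H' = H + alpha * R^T H R, with R a 90-degree rotation,
        # which maps (hxx, hyy, hxy) -> (hyy, hxx, -hxy).
        axx, ayy, axy = hxx + alpha * hyy, hyy + alpha * hxx, hxy - alpha * hxy
        # Eigenvalues of the symmetric 2x2 matrix [[axx, axy], [axy, ayy]].
        tr, det = axx + ayy, axx * ayy - axy ** 2
        disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))
        l1, l2 = tr / 2 + disc, tr / 2 - disc
        lam1 = np.where(np.abs(l1) > np.abs(l2), l1, l2)  # larger-magnitude eigenvalue
        lam_max = lam1.max()
        v = lam1 / lam_max if lam_max > 0 else np.zeros_like(lam1)
        out = np.maximum(out, v)                           # multi-scale maximum T(x)
    return out
```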
To enhance inpainting precision, we use a Delaunay triangulation-based 2-D interpolation method. To reduce computation costs, the input binary mask is first subdivided into connected components. The outer boundary of each connected component is extracted by subtracting itself from the morphologically dilated mask. The area within the outer boundary is then triangulated using the Delauney method <cit.>. For each point x inside a triangle facet Δ_x_1x_2x_3, the interpolated value P(x) is defined as a linear weighting of value on the vertices. P(x)=w_x_1P(x_1)+w_x_2P(x_2)+w_x_3P(x_3) where the weights are calculated with the following equations. w_x_1=x_3x×x_3x_2/x_3x_1×x_3x_2 w_x_2=x_3x×x_3x_1/x_3x_2×x_3x_1 w_x_3=1-w_x_1-w_x_2 With metal traces interpolated in the projection domain, metal-free images can be reconstructed with the widely-accepted FDK algorithm <cit.>. §.§ Metal mask reconstruction Despite that we have already reconstructed a set of metal-free images, all information regarding the locations of metallic guidewires has been lost. Here, we present a method to restore image-domain metal masks I_M(x) from projection-domain metal traces P_M(u,v,θ) based on multiplicative-form backprojection. Assuming that we have the perfect metal traces in the projection domain, forward-projecting an image-domain metal voxel will always find a projection-domain point inside the metal traces, whereas projecting a voxel outside the metallic convex hull will find points outside the metal traces in at least some views. The metal mask can therefore be reconstructed using the following equation: I_M(x)=∏_θ=0^2πP_M(u(x,θ),v(x,θ),θ) where u(x,θ) and v(x,θ) are the detector coordinate at view angle θ corresponding to image point x. Considering false negative points in metal trace segmentation which may introduce a large number of false negatives in I_M, we replace P_M in Eq. <ref> with a soft mask P^soft_M defined as follows P^soft_M=(1-γ)P_M+γ(1-P_M) where γ is a weighting parameter in (0,1). We usually choose γ values close to 1 (like 0.9). After soft-masking, metal points get large I_M values even if affected by false negatives on limited views while non-metal points get values close to 0. Metal masks are therefore segmented with a threshold Th_mask. § EXPERIMENTS AND RESULTS §.§ Experiment settings All algorithms are implemented and tested in MATLAB R2021 with some operators implemented in C++/CUDA and compiled with mex/mexcuda in MATLAB. As for parameter settings, we set α=1/3, Σ={1,3,5,7,9}, Th_enh=0, γ=0.9 and Th_mask=0.5 for all experiments, as the results are not sensitive to these parameters. Parameter Th_metal was set to 3000HU by default, and its settings will be discussed in <ref>. §.§ Digital phantom study To quantitatively evaluate the performance of PDS-MAR, we simulated CBCT scans under a clinical intraoperative C-arm CBCT geometry with metallic guidewires inserted. XCIST <cit.>, an open-source X-ray CT simulation toolkit, is used to simulate the projection data using a cone beam geometry. The detailed geometrical parameters are presented in Table <ref>, where DSD represents the distance between source and detector, DSO stands for distance between source and rotation axis. We used the default 110kV spectrum provided in XCIST without any pre-filtration for simulations. For the digital phantom, the voxelized female chest phantom provided along with XCIST (<https://github.com/xcist/phantoms-voxelized>) is adopted for simulation, which is also a part of the XCAT phantom dataset by Segars et al. 
<cit.>. To simulate CBCT metal artifact, an additional material mask is manually outlined from a set of intraoperative clinical CBCT images using guidewires and applied to the digital phantom as the distribution of material 'iron'. Additionally, we have modified the phantom position offset to simulate CBCT data truncation during spine procedures. As the performance of MAR algorithms mainly depends on the projection-domain interpolation, here we first evaluate the projection-domain accuracy. Classical MAR methods (MAR-LI <cit.> and NMAR <cit.>) are also compared here to show the performance of the proposed algorithm. To evaluate the effectiveness of each component, ablation analysis with two different configurations is also included here: 1) MAR-tri, MAR with image-domain metal trace segmentation and triangulation-based interpolation; 2) PDS-MAR-LI, MAR with projection-domain metal trace segmentation and linear interpolation. Fig. <ref> provides a visual comparison of different algorithms, and results on 3 different views are depicted. The reference projection views are simulated using the digital phantom without metal implants under the same geometry. Compared to other algorithms, the proposed algorithm generate corrected projection views closest to the reference views. To better view the details, Fig. <ref> also shows the enlarged projection-domain blocks corresponding to the red rectangles. The errors of MAR algorithms mainly come from two aspects, false segmentation and non-ideal interpolation. MAR-LI, MAR-tri and NMAR adopt image-domain segmentation, which introduces obvious false segmentations (white arrows in Fig. <ref>). Despite the fact that metals can be easily segmented with a threshold on reconstructed phantom images, there is no way for image-domain methods to get metal traces outside the reconstruction FoV. Besides, 1-D linear interpolation produces over-smoothed rows (green arrows in Fig. <ref>). This characteristic is particularly harmful when metal implants are long in the horizontal direction (View 0 in Fig. <ref>), which is, however, common for guidewires. Triangulation-based interpolation, on the other hand, inpaints the void regions according to 2-D information and results in interpolated views much closer to references. There are still some defects shown in the results of the proposed algorithm (orange arrows in Fig. <ref>). Still, generally, the proposed algorithm produces the best-interpolated projection data among all algorithms. To quantitatively evaluate the performance of the proposed algorithm, several metrics are calculated between the outputs and the references regarding both segmentation accuracy and final interpolation quality. For segmentation, we compute Precision, Recall, and Dice index between the generated metal trace and the simulated projection data with the metal object only. Commonly used metrics including Root Mean Square Error (RMSE), Peak-Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Metric (SSIM) are also calculated between interpolated projection data and reference data to compare interpolation similarity. The results are displayed in Table <ref>. The proposed projection-domain segmentation method achieves a dice index of 0.9188, whereas conventional image-domain segmentation only gets 0.8141. The projection-domain method achieves a ∼4% improvement in segmentation precision and a ∼14% improvement in recall over the image-domain method. 
The significant increase in recall is primarily attributable to the successful segmentation of metal traces outside the reconstruction field of view. The proposed method also obtains the highest metric values for final interpolation similarity compared to all other comparing methods. From the results, we get the following observations. (1) PDS-MAR-LI/PDS-MAR achieves better interpolation than MAR-LI/MAR-tri. This proves that projection-domain segmentation is more appropriate than image-domain segmentation. (2) Triangulation-based interpolation methods (MAR-tri/PDS-MAR) achieve better results than linear interpolation methods (MAR-LI/PDS-MAR-LI). (3) NMAR gets the worst results on all metrics. This may result from poor tissue/bone segmentation in the image domain. Fig. <ref> gives a visual comparison between the reconstruction results of different algorithms. Generally, image domain segmentation-based methods (MAR-LI, MAR-tri and NMAR) suffer from severe streak artifacts resulting from metal objects out of scanning FoV. Linear interpolation-based algorithms (MAR-LI, PDS-MAR-LI) suffer from over-smoothing near metal objects, and tissue contrast details get lost. The proposed algorithm shows much-reduced streak artifacts and sharper tissue boundaries around metal implants than other algorithms. However, the proposed algorithm does not eliminate streak artifacts totally, since there are still interpolation failures in the projection domain (orange arrows in Fig. <ref>). Quantitative evaluation results on the image domain are provided in Table <ref>. We first compared the metal mask accuracy between image domain thresholding and the proposed metal mask reconstruction method. The image domain thresholding method achieves a pretty good dice score in the phantom study. However, the success in thresholding mainly results from low image noise, low bone value and much-suppressed image artifacts in our simulated data, and cannot be reproduced on authentic CBCT images (see Fig. <ref>). The proposed algorithm achieves a dice score ∼1% higher than image domain thresholding does. As for artifact reduction, we compute RMSE, PSNR and SSIM metrics between MAR results and reference images from metal-free data. The proposed algorithm shows significant improvements in all metrics compared to other algorithms. §.§ Ideal image-domain method study According to the results of MAR-tri at view 0 in Fig. <ref>, metal traces are nearly eliminated at this view. The near-perfect interpolation results from a near-perfect segmentation at this view, indicating that metal objects within the scanning FoV in the image domain are nearly accurately segmented. However, the results of the reconstruction indicate that metal artifacts cannot be eliminated under these conditions. There are metal objects outside the scanning FoV that can be observed from other perspectives, and the artifacts introduced by these metal objects cannot be corrected using image-domain segmentation. To prove this conjecture, we conducted an experiment on an ideal image-domain segmentation-based MAR method (ID-ideal). In ID-ideal, the image-domain metal mask is generated by masking the ground truth with scanning FoV. The metal mask is then forward-projected into the projection to produce metal traces. For interpolation, we directly replace the values within the metal traces with corresponding values in the reference projection data (simulated without metal), which should be considered perfect interpolation. 
The resulting projection data at a view and reconstruction outputs on 3 slices are presented in <ref>. From the results, we see that metal traces are not completely corrected in the projection domain, which further introduces severe streak artifacts to the image domain. There are also streak artifacts introduced by boundary precision issues (orange arrows in Fig. <ref> (b,d)). Quantitative evaluations of the image domain are also provided in Table <ref>. The metrics attained by PDS-MAR are superior to those attained by ID-ideal. §.§ Animal body study The proposed algorithm is evaluated on a set of animal body data. The back of a deceased sheep was scanned with Kirschner wires (K-wires) inserted by an expert. These projection data were acquired using a commercial intraoperative CBCT system (B51s) from First-Imaging Medical Equipment Co., Ltd. All original data were initially pre-processed with commercial data correction software from First-Imaging (including field correction, denoising, and water beam hardening correction). Fig. <ref> displays the metal-interpolated projection data, while Fig. <ref> provides the reconstructed results. Unlike in digital phantom data, results on animal body data suffer from severe segmentation failures in the image domain with a threshold. Some K-wires are partially segmented (the left one in Fig. <ref> (a1-a4)), while some are not found at all (the left one in Fig. <ref> (b1-b4), the one in Fig. <ref> (c1-c4)). The missegmentation in the image domain causes significant missing metal traces in the projection domain (red rectangles in Fig. <ref>), which further leads to uncorrected metal artifacts in the reconstructed images. The proposed projection domain segmentation method also fails in some projection views (green rectangles in Fig. <ref> (b5-b6)), but segmentation failures on a few views have little impact on the reconstruction results. Also, the projection domain segmentation method cannot segment isolated metal objects totally outside the reconstruction FoV (green rectangles in Fig. <ref> (c5-c6)) because it still relies on image domain segmentation results as seed points. Generally, the proposed method reconstructs image slices with substantially reduced metal artifacts and well-preserved tissue details. In addition, the proposed metal mask reconstruction method is able to preserve the correct image domain metal mask even when the thresholding strategy fails. Some K-wires are nearly correctly segmented in the image domain (the right one in Fig. <ref> (b1-b4)), but still show slight deviations from the real metal traces in the projection domain after the forward projection.
These deviations sometimes come from the geometrical randomness of the entire CBCT system. Triangulation-based interpolation is ineffective in this situation and results in saw-toothed inpainting results (red arrows in Fig. <ref> (a)). The saw-toothed projection data further introduces severe streak artifacts to the reconstructed images as a result of ramp filtering (Fig. <ref> (b3) as an example). As segmentation in the projection domain is more precise than in the image domain, the proposed method generates interpolation output that is substantially smoother. §.§ Human body phantom study Two human-body phantoms (an abdominal phantom and a head phantom) are scanned with guidewires inserted for educational purposes. The projection data used in this section were also collected with a commercial intraoperative CBCT system from First-Imaging Medical Equipment Co., Ltd. All original data were pre-processed as described in <ref>. The reconstruction results for the abdominal phantom are depicted in Fig. <ref>. Similar to the animal body results, conventional image domain segmentation suffers from segmentation faults and results in incomplete metal artifact suppression (see Fig. <ref> (c2-c3)) and secondary artifacts (see Fig. <ref> (a3)). The proposed algorithm shows superior ability in metal artifact reduction, and the metal masks are well-reconstructed with our multiplicative-form backprojection method. Furthermore, streak artifacts introduced by metal objects outside the FoV are still observed even on slices without metal in the FoV for image domain segmentation methods (bottom of Fig. <ref> (b2-b3)), while being totally eliminated by the proposed method (Fig. <ref> (b5)). The reconstruction results for the head phantom are depicted in Fig. <ref>. These results reveal another shortcoming of image domain segmentation, namely the difficulty of threshold choice. Referring to Fig. <ref> as an example, we compared results under Th_metal=3000HU (b-e) and Th_metal=6000HU (f-i) for different methods. Under the lower threshold, image domain methods can segment metal better. However, tooth regions are identified as metal objects due to their high attenuation properties, which introduces secondary artifacts (see Fig. <ref> (b,c)). Under the higher threshold, tooth regions are not included; however, guidewires are not recognized and metal artifacts are not corrected. The proposed method also suffers from segmentation faults on teeth under the threshold of 3000HU but performs well under the threshold of 6000HU. A slice in sagittal view and a slice in coronal view are displayed in Fig. <ref>. Besides guidewires, there are also metal screws inserted in the head phantom, and the proposed algorithm can effectively deal with metal artifacts introduced by these screws. § DISCUSSION Metal artifacts are a long-standing problem for intraoperative CBCT imaging in MISS. In section <ref>, we experimentally proved that image-domain segmentation-based MAR methods can never eliminate metal artifacts when there are metal objects outside the scanning FoV. This situation, however, is prevalent in CBCT-based intraoperative imaging. Limited by the size of the flat panel detector, CBCTs are incapable of achieving a large scanning FoV like most MSCTs. Also, objects like guidewires can easily get outside the FoV due to their lengths. Therefore, projection-domain segmentation of metallic objects is urgently required. CBCT's inferior image quality is another cause for the failure of conventional MAR techniques.
Scattering, noise, limited scan angle, and other factors introduce artifacts. On our CBCT dataset, the widely accepted NMAR algorithm for metal artifact reduction in CT images performs even worse than MAR-LI. This phenomenon is a result of NMAR's reliance on image domain segmentation outcomes. Using Fig. <ref> as an illustration, NMAR relies on image domain segmentation to get a prior image. Due to severe image domain artifacts (red arrows in Fig. <ref> (b)), the prior image is improperly generated, resulting in severe interpolation errors in the projection domain. Segmenting metal traces directly on the projection domain ought to be more accurate than the image-domain-segmentation + forward-projection way. However, segmentation in the projection domain is never easy due to the variation in both metal shapes and values. This paper focuses on a specific mission to segment metallic guidewires in MISS, which converts the metal into a tubular shape. In this endeavor, tubular enhancement filtering demonstrates exceptional performance, and metal traces are successfully extracted by combining information from the image domain. Experiments indicate that the proposed method is also effective with metallic objects such as small iron balls and stainless steel screws. Nonetheless, additional experiments demonstrate that the proposed procedure is incapable of handling Ti screws with handles. Though the proposed method generates better metal traces compared to image domain methods (Fig. <ref> (b,c)), there are still missing portions at the screw head and the handles (red arrows in Fig. <ref> (c)), which results in artifacts in the image domain (Fig. <ref> (d)). How to cope with metal artifacts introduced by Ti screws is still an open issue, and we will be focusing our future research on this issue. Triangulation-based interpolation has been proven effective in multislice helical CT <cit.>, but has hardly been tested on CBCT datasets. In this work, we show that the Delaunay triangulation-based interpolation is a powerful tool in CBCT metal artifact reduction but it depends heavily on the metal trace accuracy. This method may introduce severe secondary artifacts with biased metal traces and result in even worse image quality compared to 1-D linear interpolation. The proposed projection domain segmentation method is an excellent complement to triangulation-based interpolation for providing more precise metal traces. The retention of precise metal masks in the image domain is another problem for MAR algorithms based on projection domain segmentation. In this paper, we propose a multiplicative-form backprojection-based metal mask reconstruction method that performs well on CBCT images with guidewires. Nonetheless, it does not completely resolve the issue. The projection domain masks lose information on the thicknesses of metal materials, making it impossible to retain hollow or concave metal masks. Two 2-D examples are presented in Fig. <ref>. In the projection domain, the solid circle and hollow ring have distinct characteristics, but when converted to binary masks, they yield the exact same result, as do the concave object and its minimal convex hull. In CBCT, the information in the z-direction weakens this problem (screws in Fig. <ref>), but it still presents on some slices (Fig. <ref> (a5)). We are going to work in the future to incorporate metal thickness information into projection domain masks, which is the only way to solve this issue. 
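To close the discussion with something concrete, here is a hedged 2-D parallel-beam sketch of our own of the multiplicative-form metal-mask backprojection of Section 2.3. The real system is cone-beam and additionally uses the soft mask with γ=0.9 and the threshold Th_mask=0.5 described above; those refinements are only noted in a comment, and the geometry mapping below is a deliberate simplification.

```python
import numpy as np

def reconstruct_metal_mask(traces, angles, size, spacing=1.0):
    """Toy multiplicative-form backprojection: a pixel is kept as metal only if it
    projects inside the metal trace in every view. traces[k] is the binary metal
    trace of view k, indexed by detector bin (parallel-beam simplification)."""
    n_det = traces.shape[1]
    ys, xs = np.mgrid[:size, :size].astype(float)
    xs -= size / 2.0
    ys -= size / 2.0
    mask = np.ones((size, size))
    for theta, trace in zip(angles, traces):
        u = xs * np.cos(theta) + ys * np.sin(theta)            # detector coordinate
        bins = np.clip(np.round(u / spacing + n_det / 2).astype(int), 0, n_det - 1)
        mask *= trace[bins]                                    # product over views
        # The paper instead multiplies soft masks (1-gamma)*P_M + gamma*(1-P_M)
        # and thresholds the product with Th_mask, to tolerate a few missed views.
    return mask > 0.5

# Toy usage: a centred disc of radius 5, seen in 180 views over 180 degrees.
angles = np.deg2rad(np.arange(180))
n_det, size = 128, 128
traces = np.zeros((len(angles), n_det))
for k in range(len(angles)):
    u = np.arange(n_det) - n_det / 2
    traces[k, np.abs(u) <= 5] = 1.0                            # disc projects to |u| <= 5
recon = reconstruct_metal_mask(traces, angles, size)
print(recon.sum())                                             # roughly the disc area
```

Because only the binary support of each trace is used, this sketch also exhibits the hollow/concave ambiguity discussed above: a solid disc and a ring with the same outer radius yield the same reconstructed mask.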
§ CONCLUSION In this work, we prove that conventional image-domain segmentation-based MAR algorithms cannot eliminate metal artifacts for intraoperative CBCT images with guidewires, and present PDS-MAR to solve this problem. We employ tubular enhancement filtering-based metal trace segmentation and Delaunay triangulation-based interpolation in PDS-MAR. Results on both simulated and real CBCT datasets show the extraordinary artifact suppression performance of our algorithm. Moreover, a novel multiplicative-form backprojection-based method is also proposed in PDS-MAR to retain image-domain metal masks from metal traces. The concept of projection-domain metal segmentation would advance MAR techniques in CBCT and has the potential to push forward the use of intraoperative CBCT in human-operated and robotic-assisted MISS. § ACKNOWLEDGEMENTS This work was supported in part by the National Natural Science Foundation of China under Grant T2225025, in part by the National Key Research and Development Program of China under Grant 2022YFC2408500, in part by the Key Research and Development Programs in Jiangsu Province of China under Grants BE2021703 and BE2022768, and also in part by the Key Research and Development Program of Zhejiang Province under Grant 2021C03029.
http://arxiv.org/abs/2306.10684v1
20230619031057
Visually-Guided Sound Source Separation with Audio-Visual Predictive Coding
[ "Zengjie Song", "Zhaoxiang Zhang" ]
cs.SD
[ "cs.SD", "cs.CV", "cs.MM", "eess.AS" ]
Visually-Guided Sound Source Separation with Audio-Visual Predictive Coding Zengjie Song and Zhaoxiang Zhang, Senior Member, IEEE Manuscript first version received May 10, 2021; second version received February 22, 2022; third version received November 28, 2022; revised April 12, 2023; accepted June 12, 2023. This work was supported in part by the Major Project for New Generation of AI under Grant 2018AAA0100400; in part by the National Natural Science Foundation of China under Grant 61836014, Grant U21B2042, Grant 62072457, Grant 62006231, and Grant 61976174; and in part by the Project funded by China Postdoctoral Science Foundation under Grant 2021M703489. (Corresponding author: Zhaoxiang Zhang.) Zengjie Song is with the School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China (e-mail: [email protected]). Zhaoxiang Zhang is with the Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the Center for Artificial Intelligence and Robotics, Hong Kong Institute of Science & Innovation, Chinese Academy of Sciences, Hong Kong, China (e-mail: [email protected]). The framework of visually-guided sound source separation generally consists of three parts: visual feature extraction, multimodal feature fusion, and sound signal processing. An ongoing trend in this field has been to tailor the visual feature extractor for informative visual guidance and to separately devise a module for feature fusion, while utilizing U-Net by default for sound analysis. However, such a divide-and-conquer paradigm is parameter-inefficient and, at the same time, may obtain suboptimal performance, as jointly optimizing and harmonizing the various model components is challenging. By contrast, this paper presents a novel approach, dubbed audio-visual predictive coding (AVPC), to tackle this task in a parameter-efficient and more effective manner.
The network of AVPC features a simple ResNet-based video analysis network for deriving semantic visual features, and a predictive coding-based sound separation network that can extract audio features, fuse multimodal information, and predict sound separation masks in the same architecture. By iteratively minimizing the prediction error between features, AVPC integrates audio and visual information recursively, leading to progressively improved performance. In addition, we develop a valid self-supervised learning strategy for AVPC via co-predicting two audio-visual representations of the same sound source. Extensive evaluations demonstrate that AVPC outperforms several baselines in separating musical instrument sounds, while reducing the model size significantly. Code is available at: https://github.com/zjsong/Audio-Visual-Predictive-Coding. Sound source separation, predictive coding (PC), feature fusion, multimodal learning, self-supervised learning. § INTRODUCTION Surrounded by diverse objects and receiving multisensory stimuli, the human brain conducts multimodal perception to understand the physical world. In fact, the inherent correlations rooted in co-occurring sensory modalities have the potential to facilitate decision making on individual tasks <cit.>. One practical example emerges from the interplay between auditory and visual senses: in addition to hearing the sound of a concert, audiences can be put in the mood for melodious music much more easily by watching the musicians' body and hand movements. With such inspiration in mind, researchers have been making great efforts to explore the interaction of vision and audio information in multimodal learning. Representative studies include sound recognition <cit.>, cross-modal retrieval <cit.> and generation <cit.>, sound localization <cit.>, sound source separation <cit.>, etc. For concreteness, this work focuses on visually-guided sound source separation (also referred to as audio-visual sound separation or visual sound separation), which aims to recover sound components from a mixture audio with the aid of visual cues. As one of the fundamental sound processing tasks, visual sound separation promotes a wide range of downstream applications, such as audio denoising, dialog following, audio event remixing, audio-visual video indexing, video sound editing, instrument equalization, and embodied navigation <cit.>. Today's dominant paradigm for visual sound separation is constituted mainly by three components, i.e., visual feature extraction, multimodal feature fusion, and sound signal processing, as shown in Fig. *fig:compare_framework_previous. The visual feature extraction part, which has drawn a lot of research attention and for which various model extensions exist, acts to derive discriminative visual features of the sounding object from video frames. Such discriminative features could be, for example, visual appearance cues extracted by an object recognition model <cit.>, pixel-wise trajectories computed based on dense optical flow <cit.>, body and hand dynamics formulated with keypoint estimation <cit.>, object proposals obtained by a pre-trained object detector <cit.>, etc. As for multimodal feature fusion, a simple and widely-used strategy is concatenating audio and visual features along the channel dimension <cit.>. Several works also exploit other fusion forms, such as the feature-wise affine transformation (FiLM) <cit.> and the self-attention based cross-modal fusion <cit.>.
In terms of sound signal processing, the U-Net <cit.> style encoder-decoder network is usually treated as a default model in the majority of existing works <cit.>. Despite the success of the divide-and-conquer paradigm mentioned above, there are still some concerns left unaddressed. (i) The complete pipeline is parameter-inefficient. While sophisticated visual guidance usually provides discriminative cues, it comes at the price of relying heavily on various well-trained vision models <cit.> or third-party toolboxes <cit.>. As a result, the combination of those (deep neural network) modules makes the entire model cumbersome. For instance, to separate sounds with motion cues, Gan et al. <cit.> design a pipeline consisting of seven modules: ResNet-50 for global semantic features, the AlphaPose toolbox for human body keypoints, a hand detector and OpenPose for hand keypoints, a graph CNN to fuse semantic context and body dynamics, a self-attention module for audio-visual fusion, and a U-Net style architecture for sound processing. (ii) It is challenging to optimize different model parts jointly and effectively. Since different pre-trained networks work with distinct dynamics and settings (e.g., optimizer, learning rate, weight decay, etc.), fine-tuning their combination becomes fragile, which may produce suboptimal sound separation results. (iii) It is open to question whether U-Net, though commonly viewed as a plausibly versatile sound analysis model, is well suited to the visual sound separation task. The fused audio-visual features only serve as input to the U-Net decoder, and thus the interaction between fused features and low-level audio features (from the U-Net encoder) occurs only implicitly through the skip connections and transposed convolutions. It is worth investigating whether there are other more effective ways to engage audio features with audio-visual information at different levels of abstraction. We address the above concerns by proposing a novel visually-guided sound source separation approach, named audio-visual predictive coding (AVPC). The AVPC model is constructed based on two networks, a video analysis network and a sound separation network, with no need to separately design a feature fusion module. Specifically, as shown in Fig. *fig:compare_framework_ours, object appearance features are first extracted from video frames by the simple video analysis network. Then, the sound separation network acts to estimate sound separation masks with the visual cues. The critical insight underlying our sound separation network is that different modalities' features of the same sounding object should share the same semantics in deep latent space, and hence should be predictable from each other. To this end, inspired by predictive coding (PC) in neuroscience <cit.>, the sound separation network predicts visual features with audio features extracted from the mixture sound spectrogram, and improves the separation accuracy by iteratively minimizing the prediction error. More importantly, the PC-based sound separation network implements audio feature extraction, multimodal feature fusion, and sound separation mask prediction with the same architecture, which is more parameter-efficient than the previous paradigm. After that, the progressively refined separation masks are utilized to recover the sound components of interest. We also devise a self-supervised audio-visual representation learning strategy for AVPC, called representation co-prediction (RCoP).
RCoP is premised on the observation that if two sound mixtures share the same single sound component (e.g., a guitar clip mixed with a trumpet clip and with a saxophone clip, respectively), the two groups of audio-visual representations corresponding to that sound component (e.g., guitar) should be similar. As a result, reducing the distance between the two groups of representations makes the sound separation task easier. In summary, our contributions are threefold: * We propose a novel approach called AVPC for visually-guided sound source separation. Unlike previous works investigating sophisticated visual feature extraction and multimodal feature fusion, AVPC focuses on sound signal processing with a bi-directional network architecture and an iterative inference mechanism. This shift in perspective provides a parameter-efficient and more effective way to handle the visual sound separation task. * We extend a self-supervised visual representation learning method to the audio-visual setting, specifically to visual sound separation, which can be adopted as a pretext task to further boost sound separation performance. * We systematically show that by adopting solely the plain visual appearance features, AVPC outperforms competitors that also use only appearance features as visual guidance, while achieving performance comparable to detection-based and motion-based methods. § RELATED WORK §.§ Sound Source Separation Sound source separation (from pure audio input), known as the “cocktail party problem” <cit.> in the sound signal processing field, has been investigated extensively in the past few years. By assuming that the sound spectrogram has a low-rank structure, methods based on non-negative matrix factorization (NMF) <cit.> have historically been the most prominent on this task, as seen in <cit.>. However, as shallow models, these methods cannot deal with the high nonlinearity of sounds. By contrast, deep learning methods proposed recently give promising solutions to this problem <cit.>. Representative works for monaural speech separation include the model built on deep recurrent neural networks <cit.>, and the specifically-designed convolutional neural networks with vertical and horizontal convolution operations <cit.>. To mitigate the long-standing label ambiguity problem, the deep clustering method <cit.> and the permutation invariant training criterion <cit.> were designed, respectively, both of which generalize well over unknown speakers and languages. A detailed discussion on this research direction can be found in <cit.>. Note that, different from all of the above, we use visual features extracted from available video frames to guide sound source separation. §.§ Visually-Guided Sound Source Separation With the aid of visual cues, additional discriminative features of the target sound makers can be provided to the sound separation system, which proves beneficial for yielding more accurate sound predictions <cit.>. One line of work explored audio-visual speech separation, where the information of face recognition embeddings <cit.>, facial movements <cit.>, lip motions <cit.>, and face landmarks’ movements <cit.> was employed as visual guidance, respectively. More recently, another line of work focused on visually-guided sound source separation for non-speech signals. Two contemporaneous and pioneering works for this task come from Gao et al. <cit.> and Zhao et al. <cit.>, who presented the multi-instance multi-label (MIML) and the sound-of-pixels (SoP) systems, respectively.
The MIML <cit.> offered a learning framework to match audio bases with the object category predictions, where the learned audio bases proceed to supervise the NMF-based separation process. In SoP <cit.>, an audio-visual two-stream network was proposed and trained by the mix-and-separate self-supervised learning procedure. The impressive performance of SoP intrigued the research community to make further efforts to address more challenging problems in this area. First, to tackle the homo-musical separation problem, where different musical instruments belonging to the same category emit sound simultaneously, motion-related visual cues were usually used to capture the dynamic difference in vision. These motion cues could be the trajectory of pixels <cit.>, keypoint-based body dynamics <cit.>, optical flow <cit.>, their combinations <cit.>, etc. Second, to distinguish sound components with arbitrary numbers and types, Xu et al. <cit.> proposed a novel two-stage system named minusplus network (MP-Net), where the minus stage recursively separates and removes the sound with highest energy, and the plus stage refines each separated sound accordingly. Third, to alleviate the unreality issue raised by training with artificially mixed video clips, the co-separation approach <cit.> was devised, which learns an association between consistent sounds and similar-looking objects across pairs of training videos. Compared with aforementioned methods, our AVPC implements audio feature extraction, multimodal feature fusion, and sound separation mask prediction with the same PC-based architecture, which turns out to be more effective and parameter efficient over these U-Net-based ones <cit.>. Additionally, the proposed RCoP, as a new task-oriented audio-visual representation learning strategy, extends the widely applied mix-and-separate training paradigm <cit.>, leading to improved sound separation quality. §.§ Audio-Visual Learning There is increased interest in leveraging audio-visual correspondence to learn multimodal representations in a self-supervised manner. The widely-used correspondence is the semantic consistency across modalities <cit.>, i.e., audio and visual features extracted from the same video clip should have the same semantic category. In this regard, the feature of one modality with certain category could be treated as supervisory signal to guide representation learning on another modality <cit.>. Besides, the temporal synchronization between audio and visual streams is also useful <cit.>. As show in <cit.>, audio and visual samples taken from different slices of the same video were viewed as hard negatives, and performing contrastive learning with such hard negatives resulted in powerful multi-sensory representations. What's more, some other works also explored the audio-visual correspondence from a variety of interesting perspectives, including feature clustering <cit.>, semantic comparisons among triple modalities in vector embedding space <cit.>, and audio-visual object embedding learning <cit.>. A comprehensive survey on this topic can be found in <cit.>. The representations learned by these methods are task-agnostic, which provide an effective initialization to improve performance of various downstream tasks, such as action/scene recognition <cit.>, temporal action localization <cit.>, audio classification <cit.>, sound source localization <cit.> and separation <cit.>, etc. 
By contrast, our proposed RCoP, as a task-oriented self-supervised learning strategy, serves to learn representations customized for sound source separation, and thus is more effective. In addition, by taking inspiration from the self-supervised visual representation learning paradigms, BYOL <cit.> and SimSiam <cit.>, the whole network can be trained by RCoP without using negative samples, large batches, and momentum encoders. Therefore, RCoP offers an economical way to learn audio-visual representations from unlabeled videos. § PRELIMINARIES In this section, we give a concise introduction to the predictive coding network (PCNet) <cit.> that inspires us to construct sound separation network. We embody the information processing mechanism in PCNet by formulating its optimization objective and representation updating rules, respectively. §.§ Core Idea and Optimization Objective of PCNet In fact, at the heart of PCNet is the predictive coding (PC) model in neuroscience <cit.>. As a specific neural network model (see Fig. <ref>), PC uses feedback connections from a higher area to a lower area (e.g., V2 to V1 in visual cortex) to convey predictions of lower-level neural activities; while employs feedforward connections to carry prediction errors between the predictions and the actual lower-level activities; and the feedback and feedforward computing processes are executed alternatively such that prediction errors are dynamically reduced and all layers' representations are progressively refined. The basic PC in <cit.> performs alternative computing in local level, i.e., the representation of current layer starts to be updated only when the previous layer finishes the two computing processes. As one of the instantiations of PC, PCNet disentangles the two processes in global level, meaning that representations of all layers are first modulated in feedback (or feedforward) process, and then updated again in feedforward (or feedback) counterpart. By doing so, PCNet achieves competitive and more definitive object recognition results with less parameters compared with convolutional feedforward-only neural networks <cit.>. Formally, for the l-th hidden layer, we denote the representation as 𝐫_l, and the feedback connection weights from layer l to layer l-1 as 𝐖_l,l-1 (similarly 𝐖_l-1,l). At layer l, the optimization objective is to minimize the compound loss: ℒ^l = α_l/2𝐫_l-1-g((𝐖_l,l-1)^T𝐫_l)^2_2_ℒ_1^l + β_l/2𝐫_l - 𝐩_l^2_2_ℒ_2^l, where the function g stands for a generative transformation, α_l and β_l are hyperparameters controlling the relative importance of the two loss terms ℒ_1^l and ℒ_2^l, and 𝐩_l=g((𝐖_l+1,l)^T𝐫_l+1) is the prediction of 𝐫_l. Here, the prediction 𝐩_l is derived through a nonlinear transformation on the higher layer representation 𝐫_l+1, and can be viewed as prior knowledge based on past learning experiences <cit.>. From (<ref>) we see that the optimal representation of layer l, 𝐫_l, can not only reconstruct the previous layer representation 𝐫_l-1 (by minimizing ℒ_1^l), but also be similar with the prediction 𝐩_l from higher layer (by minimizing ℒ_2^l). Consequently, each layer representation is mediated so as to adaptively fuse input information (from lower-level 𝐫_l-1) and prior knowledge (from higher-level 𝐩_l). §.§ Basic Rules of Representation Updating in PCNet As for representation updating in feedback process, PCNet uses the prediction signal generated from higher layer to adjust the representation of current layer. 
Similar to <cit.>, we set g as a linear generative transformation to simplify the derivation, i.e., g((𝐖_l,l-1)^T𝐫_l)=(𝐖_l,l-1)^T𝐫_l. Then the prediction signal of 𝐫_l at time step t is computed as: 𝐩_l(t)=(𝐖_l+1,l)^T𝐫_l+1(t). After that, we substitute (<ref>) into the loss term ℒ_2^l in (<ref>), and minimize ℒ_2^l w.r.t. 𝐫_l by gradient descent, leading to the following update rules: ∂ℒ_2^l/∂𝐫_l(t) = 2(𝐫_l(t) - 𝐩_l(t)), 𝐫_l(t+1) = 𝐫_l(t) - η_lβ_l/2∂ℒ_2^l/∂𝐫_l(t) = (1 - η_lβ_l)𝐫_l(t) + η_lβ_l𝐩_l(t), where the non-negative scalar η_l governs feedback representation learning. For simplicity, we set b_l=η_lβ_l and rewrite (<ref>) as follows: 𝐫_l(t+1) = (1 - b_l) 𝐫_l(t) + b_l𝐩_l(t). On the other hand, in the feedforward process PCNet updates the representation again based on the prediction error conveyed from the previous layer. The prediction error of layer l-1 is defined as: 𝐞_l-1(t) = 𝐫_l-1(t) - 𝐩_l-1(t), which measures the difference between the actual representation and its prediction. By using gradient descent to minimize ℒ_1^l w.r.t. 𝐫_l, we derive the update equation: ∂ℒ_1^l/∂𝐫_l(t) = -2𝐖_l,l-1𝐞_l-1(t), 𝐫_l(t+1) = 𝐫_l(t) - κ_lα_l/2∂ℒ_1^l/∂𝐫_l(t) = 𝐫_l(t) + κ_lα_l𝐖_l,l-1𝐞_l-1(t), where κ_l is a scalar like η_l. We also set a_l=κ_lα_l to simplify notation. Following <cit.>, we introduce more degrees of freedom to representation learning by assuming that the feedback connection weights are the transposed feedforward connection weights, i.e., 𝐖_l,l-1=(𝐖_l-1,l)^T. As a result, the update equation in (<ref>) is equivalently converted to the following form: 𝐫_l(t+1) = 𝐫_l(t) + a_l (𝐖_l-1,l)^T𝐞_l-1(t). To add nonlinearity to the above two processes, we can apply a nonlinear activation function σ(·) (e.g., ReLU <cit.>, as used in <cit.>) to the output of each convolutional layer. Therefore, the final nonlinear feedback process is described as: 𝐫_l(t+1) = σ((1 - b_l) 𝐫_l(t) + b_l𝐩_l(t)), and the nonlinear feedforward process is given as: 𝐫_l(t+1) = σ(𝐫_l(t) + a_l (𝐖_l-1,l)^T𝐞_l-1(t)). PCNet performs these two processes alternately several times, and consequently all layers' representations are refined in a recursive manner. In Section <ref>, we will extend these basic representation updating rules to drive learning in the predictive coding-based sound separation network. § PROPOSED METHOD §.§ Overview The key insights underlying our method are twofold. On the one hand, by leveraging audio features to predict semantic visual features, AVPC implements audio feature extraction, multimodal feature fusion, and sound separation mask prediction in one and the same network architecture (i.e., the sound separation network). The fusion results can be increasingly refined with the inherent representation updating rules, i.e., by iteratively minimizing the prediction error in bi-directional processes. Equipped with such a network architecture and updating rules, we mitigate the need for laborious design of, and fragile optimization over, various model components. On the other hand, the self-supervised learning strategy RCoP works to exploit the correlations between two sound mixtures that include the same sound source. To this end, RCoP makes the two learned audio-visual representations corresponding to the shared sound source as similar as possible. From the viewpoint of model structure, AVPC consists of two network branches, i.e., a video analysis network and a sound separation network, as shown in Fig. <ref>.
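As a concrete illustration of the representation-updating rules recalled above, the following is a minimal NumPy sketch of one nonlinear feedback step and one nonlinear feedforward step for a single layer. It is only a sketch under simplifying assumptions: dense weight matrices stand in for the convolutional layers actually used in PCNet, ReLU is assumed as the activation, and the variable names (W_lp1_l, W_lm1_l, a_l, b_l) merely mirror the notation above; nothing here is taken from the released code.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def feedback_step(r_l, r_lp1, W_lp1_l, b_l, act=relu):
    """Feedback update: r_l <- act((1-b_l) r_l + b_l p_l),
    with the prediction p_l = (W_{l+1,l})^T r_{l+1} generated from layer l+1."""
    p_l = W_lp1_l.T @ r_lp1
    return act((1.0 - b_l) * r_l + b_l * p_l), p_l

def feedforward_step(r_l, r_lm1, p_lm1, W_lm1_l, a_l, act=relu):
    """Feedforward update: r_l <- act(r_l + a_l (W_{l-1,l})^T e_{l-1}),
    with the prediction error e_{l-1} = r_{l-1} - p_{l-1} of the lower layer."""
    e_lm1 = r_lm1 - p_lm1
    return act(r_l + a_l * (W_lm1_l.T @ e_lm1))

# toy dimensions: layer l-1 has 8 units, layer l has 6, layer l+1 has 4
rng = np.random.default_rng(0)
W_lm1_l = 0.1 * rng.standard_normal((8, 6))    # W_{l-1,l} = (W_{l,l-1})^T
W_lp1_l = 0.1 * rng.standard_normal((4, 6))    # W_{l+1,l}
r_lm1, r_l, r_lp1 = rng.random(8), rng.random(6), rng.random(4)

r_l, _ = feedback_step(r_l, r_lp1, W_lp1_l, b_l=0.5)
p_lm1 = relu(W_lm1_l @ r_l)                    # prediction of layer l-1 from layer l
r_l = feedforward_step(r_l, r_lm1, p_lm1, W_lm1_l, a_l=0.1)
```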
First, the video analysis network takes as input video frames 𝐕_n and acts to compute semantic feature map as visually-guided signal 𝐟_n[Hereafter we use n∈{1,2,…,N} to index different video clips.]. Then, the sound separation network extracts audio features from the mixture sound spectrogram 𝐒_mix, and uses these extracted features to predict the visually-guided feature map recursively, resulting in the progressively refined audio-visual representation 𝐫_L(T). Next, a transposed convolutional layer followed by a sigmoid function (omitted in Fig. <ref> for simplicity) is operated on the audio-visual representation, obtaining a prediction of the sound separation mask 𝐌̂_n. Finally, performing the pixel-wise multiplication between the predicted mask and the mixture spectrogram outputs a specific sound spectrogram 𝐒̂_n, which is further converted to sound signal by applying an inverse Short-Time Fourier Transform (iSTFT). In the following, we detail the two network branches, training methods, and representation co-prediction strategy of AVPC, respectively. §.§ Video Analysis Network The deep features of video frames are extracted by the video analysis network which, similar to a series of existing works <cit.>, takes the plain ResNet-18 <cit.> as a backbone. Given a video clip of size F×H×W×3, where F, H, and W denote number, height, and width of video frames, respectively, the ResNet extracts high-level semantic features from each frame before the spatial average pooling layer, obtaining a feature tensor of size F×(H/32)×(W/32)×512. Subsequently, by conducting an additional convolution operation spatially and then temporal pooling along the first time dimension F, the feature tensor will be transformed into a smaller version of size Ĥ×Ŵ×K, where Ĥ, Ŵ, and K stand for height, width, and number of channels of the final visually-guided feature map 𝐟_n, respectively. In our all implementations, we set Ĥ=2, Ŵ=2, and K=16. We empirically show that although AVPC is conditioned on the visually-guided feature map that only captures sound makers' appearance information, it can still perform on par with detection-based <cit.> and motion-based <cit.> visual sound separation methods (see quantitative results in Section <ref>). §.§ Predictive Coding-Based Sound Separation Network §.§.§ Formulation We extend PCNet introduced in Section <ref> to construct the sound separation network, which is mainly characterized by a cross-modal feature prediction mechanism. The assumption supporting this prediction mechanism is that audio and visual information extracted from the same sounding object should be consistent in high-level semantic feature space, and thus could be predictable with each other. In this regard, we develop the iterative minimization process of PCNet to gradually decrease the prediction error between audio and visual features. Formally, sound separation network receives the visually-guided feature map 𝐟_n and the mixture spectrogram 𝐒_mix as input, where 𝐟_n is viewed as the target signal to be predicted, and 𝐒_mix is derived from the mixture waveform by applying a Short-Time Fourier Transform (STFT). Let L denote the number of layers, and T for the recursive cycles. The sound separation network uses the following steps to recursively update representations (i.e., fuse audio and visual features). 
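A rough PyTorch sketch of this video analysis branch is given below, assuming 224×224 input frames. The exact truncation of ResNet-18, the kernel and stride of the extra convolution (chosen here only so that a 7×7×512 backbone output becomes the 2×2×16 map described above), and the use of max pooling as the temporal pooling are assumptions made for illustration; the released implementation may differ in these details.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class VideoAnalysisNet(nn.Module):
    """Sketch: per-frame ResNet-18 features -> extra conv -> temporal pooling."""
    def __init__(self, K=16):
        super().__init__()
        resnet = models.resnet18()   # ImageNet weights would normally be loaded here
        # keep everything before the spatial average pooling and the classifier
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # assumed kernel/stride: maps a 7x7x512 map (224x224 input) to 2x2xK
        self.extra_conv = nn.Conv2d(512, K, kernel_size=3, stride=3)

    def forward(self, frames):                       # frames: (B, F, 3, H, W)
        B, F = frames.shape[:2]
        x = frames.reshape(B * F, *frames.shape[2:])
        x = self.backbone(x)                         # (B*F, 512, H/32, W/32)
        x = self.extra_conv(x)                       # (B*F, K, H_hat, W_hat)
        x = x.reshape(B, F, *x.shape[1:])
        return x.max(dim=1).values                   # pool over the F frames

f_n = VideoAnalysisNet()(torch.randn(2, 3, 3, 224, 224))
print(f_n.shape)                                     # torch.Size([2, 16, 2, 2])
```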
First, it initializes representations of all layers, {𝐫_l^init|l=1,2,⋯,L}, in a top-down manner (from layer L to layer 1): 𝐫_l^init = g((𝐖_l+1,l)^T𝐫_l+1^init), where we set 𝐫_L+1^init≜𝐒_mix and instantiate the generative transformation g with nonlinear function <cit.>. Starting at the mixture spectrogram, this top-down initialization actually extracts audio features layer-by-layer, and thus injects sound-related information into each layer's representation, from a local spatial scale (i.e., 𝐫_L^init) to a global spatial scale (i.e., 𝐫_1^init). Then, we are able to update all representations based on {𝐫_l^init|l=1,2,⋯,L} and 𝐟_n for time t=0. To this end, a bottom-up error propagation procedure, similar to the nonlinear feedforward process of PCNet in (<ref>), is triggered and formulated as follows (from layer 1 to layer L): 𝐞_l-1(0) = 𝐫_l-1(0) - 𝐫_l-1^init, 𝐫_l(0) = f(𝐫_l^init + a_l (𝐖_l-1,l)^T𝐞_l-1(0)), where 𝐫_0(t)≡𝐟_n, 𝐫_0^init=g((𝐖_1,0)^T𝐫_1^init) is the coarse prediction of 𝐟_n, and f is a nonlinear activation function (e.g., ) in the feedforward process. Up to now, sound separation network finishes one recursive cycle to update all representations, where the top-down feedback process only works for representation initialization (i.e., (<ref>)). To proceed the update rules in both top-down feedback and bottom-up feedforward processes alternatively, we perform the following calculations. Nonlinear feedback process (l=L, L-1, ⋯, 1): 𝐩_l(t) =(𝐖_l+1,l)^T𝐫_l+1(t), 𝐫_l(t+1) = g((1 - b_l) 𝐫_l(t) + b_l𝐩_l(t)), where 𝐫_L+1(t)≡𝐒_mix. Nonlinear feedforward process (l=1, 2, ⋯, L): 𝐞_l-1(t) = 𝐫_l-1(t) - 𝐩_l-1(t), 𝐫_l(t+1) = f(𝐫_l(t) + a_l (𝐖_l-1,l)^T𝐞_l-1(t)), where 𝐩_0(t)=g((𝐖_1,0)^T𝐫_1(t)). The top-layer feature, 𝐫_L(t), can act as the audio-visual representation to predict sound separation mask. Due to the nature of iterative inference method adopted here, 𝐫_L(t) will evolve into a sequence of progressively refined audio-visual representations, i.e., {𝐫_L(t)|t=0,1,⋯,T}. The output at last time step, 𝐫_L(T), is used to compute mask prediction 𝐌̂_n with best performance. We summarize the whole process of sound separation mask prediction in Algorithm <ref>. Finally, by multiplying the mixture spectrogram 𝐒_mix with the predicted mask 𝐌̂_n, we obtain the n-th sound component spectrogram 𝐒̂_n: 𝐒̂_n = 𝐒_mix⊗𝐌̂_n, where ⊗ denotes pixel-wise multiplication. The n-th sound component is then recovered through iSTFT applied on 𝐒̂_n. §.§.§ Discussion The multimodal feature fusion in PCNet is reached by the iterative inference procedure. In fact, as formulated in Section <ref>, updating representations based on (<ref>) and (<ref>) (or (<ref>) and (<ref>)) is equivalent to minimizing prediction error of each layer. Particularly, reducing the bottom-layer prediction error, 𝐞_0(t), makes the prediction signal extracted from audio source, 𝐩_0(t), similar to the specific visual feature, 𝐟_n, in the sense of L_2 distance. Because the visual feature carries discriminative information of sounding object, the prediction signal would ideally tend to be discriminative as well. In this regard, we think that the prediction from audio source is fused with visual information. Note that PCNet generates the prediction and propagates the prediction error layer-by-layer, therefore the semantic discrimination flows across hierarchy and the top-layer audio-visual representation, 𝐫_L(t), would become more discriminative as time goes on. 
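To make the procedure above concrete, here is a heavily simplified, runnable NumPy sketch of the full iterative mask-prediction loop, using dense matrices A[l] in place of the (transposed) convolutional generative transformations, a shared ReLU for both f and g, and a fixed linear-plus-sigmoid readout standing in for the learned transposed convolution that produces the mask. These simplifications, as well as the scalar choices of a_l, b_l and T, are assumptions made only for illustration and do not reproduce the actual network.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def predict_mask(S_mix, f_n, A, a=0.1, b=0.5, T=5, g=relu, f=relu):
    """A[l] plays the role of (W_{l+1,l})^T: it generates the layer-l prediction
    from layer l+1; A[L] maps the flattened mixture spectrogram to layer L and
    A[0] maps layer 1 to the visual feature space (layer 0, clamped to f_n)."""
    L = len(A) - 1
    r = [None] * (L + 1)
    # top-down initialisation, starting from the mixture spectrogram
    r[L] = g(A[L] @ S_mix)
    for l in range(L - 1, 0, -1):
        r[l] = g(A[l] @ r[l + 1])
    # feedforward error propagation at t = 0
    e = f_n - g(A[0] @ r[1])
    for l in range(1, L + 1):
        r_new = f(r[l] + a * (A[l - 1].T @ e))
        e, r[l] = r_new - r[l], r_new
    # T alternating feedback / feedforward cycles
    for _ in range(T):
        p = [None] * (L + 1)
        for l in range(L, 0, -1):                    # feedback (top-down)
            p[l] = A[l] @ (S_mix if l == L else r[l + 1])
            r[l] = g((1.0 - b) * r[l] + b * p[l])
        e = f_n - g(A[0] @ r[1])                     # feedforward (bottom-up)
        for l in range(1, L + 1):
            r[l] = f(r[l] + a * (A[l - 1].T @ e))
            if l < L:
                e = r[l] - p[l]                      # error passed to the next layer
    # stand-in readout: sigmoid of a linear map of r[L] acts as the mask
    mask = 1.0 / (1.0 + np.exp(-(A[L].T @ r[L])))
    return mask * S_mix                              # masked (flattened) spectrogram

rng = np.random.default_rng(0)
dims = [16, 32, 64, 128]                             # d_0 (visual), d_1, d_2, |S_mix|
A = [0.1 * rng.standard_normal((dims[l], dims[l + 1])) for l in range(3)]
S_hat = predict_mask(rng.random(dims[-1]), rng.random(dims[0]), A)
```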
In Section <ref>, we will empirically verify the effectiveness of this feature fusion mechanism by visualizing the semantic discrimination of learned embeddings. By comparing the basic modules constructing PCNet and U-Net, we find that the feedback and feedforward processes of PCNet in Fig. *fig:comp_u-net_pcnet_part2 are analogous to the contracting and expansive paths of U-Net in Fig. *fig:comp_u-net_pcnet_part1, respectively. Equipped with this symmetric structure, both U-Net and PCNet provide ways to integrate different levels of features. For U-Net in Fig. *fig:comp_u-net_pcnet_part1, the lower-level feature map 𝐫_p are concatenated with the higher-level one 𝐫_q-1 over channel, and then converted to new feature 𝐫_q by convolution operation. By doing so, the context information from contracting path are combined with the upsampled output from expansive path, yielding precise pixel-wise prediction. However, the channel expansion also induces more weight connections to deal with the appended feature map. By contrast, PCNet in Fig. *fig:comp_u-net_pcnet_part2 fuses features from two processes in a recurrent way, the realization of which needs no parameterized connection between symmetric layers (instead resorting to the recurrent connection), and turns out to be more parameter efficient. Note that the extended PCNet differs from the raw version <cit.> (denoted as PCN here) in the following respects. (i) We customize the input configuration of the network such that bottom layer takes as input visual feature map, while top layer deals with sound spectrogram as a kind of prior knowledge. By contrast, PCN only receives image as input at bottom layer. (ii) The distinction between input configurations induces that PCNet is inherently able to handle multimodal learning task while PCN is fit for unimodal image classification task. (iii) We technically introduce and adjust the batch normalization variant originally designed for recurrent neural networks <cit.>, which proves to be conducive to stabilizing the multimodal learning process in PCNet. (iv) We empirically show that through cross-modal feature prediction in PCNet, the semantic discrimination can be conveyed from visual feature to audio-visual representation (cf., Fig. <ref>). This intriguing property is not explored before in PCN and other related works. §.§ Training Method We employ the Mix-and-Separate (MaS) framework proposed in <cit.> to train models. Because of the additivity of sound signals <cit.>, mixture audio ground truth can be artificially created by linearly mixing sound signals from different video clips. The goal of our model is then to recover target sound component from mixture audio conditioned on the corresponding visual cues. Formally, we randomly sample N video clips {𝐕_n, 𝐒_n}_n=1^N from training dataset, where 𝐕_n and 𝐒_n denote video frames and sound signal of the n-th video clip, respectively. Subsequently we linearly combine these sound signals to synthesize sound mixture as 𝐒_mix=1/N∑_n=1^N𝐒_n. The video analysis network takes as input video frames 𝐕_n and is responsible to compute visual feature map 𝐟_n, which proceeds to help the sound separation network estimate sound component 𝐒̂_n from 𝐒_mix. 
In practice, the direct output of the sound separation network is a binary mask 𝐌̂_n, while the ground truth mask is determined based on whether the n-th sound component is the dominant component in the input mixed sound, i.e., 𝐌_n(u,v) = [[𝐒_n(u,v) ≥𝐒_mix(u,v)]], where (u,v) indicates the time-frequency coordinates in the sound spectrogram, and [[·]] denotes an indicator function whose value is 1 when the condition inside is satisfied, and 0 otherwise. Finally, the model is trained by minimizing the per-pixel binary cross-entropy (BCE) between the predicted masks and the ground truth masks: ℒ_MaS = 1/N∑_n=1^N BCE(𝐌̂_n, 𝐌_n). §.§ Boosting Performance by Representation Co-Prediction Inspired by the self-supervised visual representation learning methods BYOL <cit.> and SimSiam <cit.>, we propose a new audio-visual representation learning strategy, named representation co-prediction (RCoP), to further boost the visual sound separation performance, as shown in Fig. <ref>. Different from SimSiam, which focuses on discovering semantic similarity between augmented views of the same image, the core idea of RCoP is to make two audio-visual representations of the given video clip (e.g., guitar playing here) as similar as possible. By doing that, RCoP forces the audio-visual model to explore the intrinsic correlation between two mixed audios sharing the same sound component, and as a result provides a better parameter initialization for separating the sound of interest. Formally, given a video clip (𝐕, 𝐒), we randomly select two audio clips from another two different videos, 𝐒^1 and 𝐒^2, and synthesize mixed audios 𝐒_mix^1=(𝐒 + 𝐒^1) / 2 and 𝐒_mix^2=(𝐒 + 𝐒^2) / 2, respectively. The new video pairs (𝐕, 𝐒_mix^1) and (𝐕, 𝐒_mix^2) are then fed into an audio-visual model such as AVPC, obtaining two audio-visual representations 𝐫^1 and 𝐫^2, respectively. A projector further transforms the two representations into a new latent space, producing projections 𝐳^1 and 𝐳^2, respectively. A predictor head h then takes as input 𝐳^1 to approximate 𝐳^2 by minimizing the following negative cosine similarity (NCS): ℒ_NCS(𝐳^1, 𝐳^2) = -h(𝐳^1)/‖h(𝐳^1)‖_2·𝐳^2/‖𝐳^2‖_2, where ‖·‖_2 is the ℓ_2-norm. Following <cit.>, we define a symmetric loss as: ℒ̂_RCoP = 1/2ℒ_NCS(𝐳^1, 𝐳^2) + 1/2ℒ_NCS(𝐳^2, 𝐳^1). It has been empirically verified that the stop-gradient operation sg(·) plays a crucial role in preventing SimSiam from representation collapse <cit.>, i.e., using the modified loss ℒ_NCS(𝐳^1, sg(𝐳^2)) in (<ref>). This implies that 𝐳^2 is viewed as a constant therein, and would not receive gradient information from the NCS loss during training. A similar setting holds for ℒ_NCS(𝐳^2, sg(𝐳^1)). To this end the loss function of RCoP in (<ref>) is reformulated as: ℒ_RCoP = 1/2ℒ_NCS(𝐳^1, sg(𝐳^2)) + 1/2ℒ_NCS(𝐳^2, sg(𝐳^1)). Note that a full discussion of the relationship between stop-gradient and optimization dynamics is beyond the scope of this paper. Readers may refer to <cit.> and <cit.> for more details. Similar to SimSiam, RCoP can learn meaningful audio-visual representations without using negative samples, large batches, or momentum encoders. This simplicity of implementation reduces RCoP's demand for large memory and high-performance computing, and hence makes it suitable for processing video data.
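A hedged PyTorch sketch of the two training losses discussed above is given below: the Mix-and-Separate per-pixel BCE against the ideal binary masks, and the symmetric negative-cosine-similarity RCoP loss, in which the stop-gradient is realized by detaching the target projection. Tensor shapes and function names are illustrative assumptions and do not reproduce the released training code.

```python
import torch
import torch.nn.functional as F

def mas_loss(pred_masks, S_components, S_mix):
    """Mix-and-Separate loss. pred_masks: (B, N, U, V) sigmoid outputs;
    S_components: (B, N, U, V) single-source spectrograms; S_mix: (B, U, V).
    The ideal binary mask is M_n(u, v) = [S_n(u, v) >= S_mix(u, v)]."""
    gt_masks = (S_components >= S_mix.unsqueeze(1)).float()
    return F.binary_cross_entropy(pred_masks, gt_masks)

def rcop_loss(p1, z1, p2, z2):
    """Symmetric negative cosine similarity with stop-gradient: p1, p2 are the
    predictor outputs for the two mixtures, z1, z2 the corresponding projections;
    detaching z plays the role of sg(.)."""
    def ncs(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * ncs(p1, z2) + 0.5 * ncs(p2, z1)
```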
There are 11 instrument categories considered therein, namely accordion, acoustic guitar, cello, clarinet, erhu, flute, saxophone, trumpet, tuba, violin, and xylophone. The original dataset has 565 videos of solos and 149 videos of duets, but about 11% of them had been removed by the YouTube users at the time we conducted the experiments. For a fair comparison, we replaced the unavailable entries with similar YouTube videos, finally yielding 516 solo and 143 duet videos. Following <cit.>, we select the first and second videos in each instrument category to construct the validation and test datasets, and use the rest as training data. All videos are split into 20s clips. The MUSIC-21 dataset is an extended version of MUSIC-11 and is more challenging for the task of visual sound separation. In addition to the above 11 instrument categories, it also includes another 10 categories, i.e., bagpipe, banjo, bassoon, congas, drum, electric bass, guzheng, piano, pipa, and ukulele. The original MUSIC-21 dataset has 1365 untrimmed videos of musical solos and duets; however, the number of available videos used in our experiments is 1226. The data split method for this dataset is the same as for MUSIC-11. The URMP dataset comprises 44 multi-instrument musical pieces recorded in a studio. We use duet videos from this dataset as part of the test samples in the qualitative comparison experiments. §.§.§ Implementation Details For the visual data pre-processing, we extract video frames at 8 FPS and perform data augmentation on each frame with random scaling, cropping, and horizontal flipping at the training stage, as in <cit.>. Unless otherwise indicated, we only use 3 frames per 6-second video clip to compute the visually-guided feature map, the same as in <cit.>. We also leverage the method described in <cit.> to pre-process the audio data. Specifically, we first sub-sample each sound signal at 11kHz and then randomly crop a 6-second audio clip for training and test. By using an STFT with a Hanning window size of 1022 and a hop length of 256, we transform the input audio into a time-frequency (T-F) spectrogram of size 512×256. The spectrogram is further re-sampled on a log-frequency scale to produce a T-F representation of size 256×256. We adopt the AdamW <cit.> optimizer to train our AVPC model, where the weight decay coefficient is 1e-2, and the step size hyperparameters are β_1 = 0.9 and β_2 = 0.999 in all cases. The sound separation network uses a learning rate of 1e-3; all parameters of the video analysis network are frozen, except for the additional convolutional layer appended to the backbone, which uses a learning rate of 1e-4. When learning audio-visual representations with RCoP (i.e., AVPC-RCoP), we employ the SGD optimizer at the first training stage (i.e., training with RCoP), where the momentum is 0.9, the weight decay is 1e-4, and the predictor uses a learning rate of 1e-3; the remaining settings at the second training stage (i.e., training with MaS) are the same as for AVPC. To stabilize the learning process, we utilize the batch normalization variant <cit.> at each layer and at each time step in PCNet, except for the visual feature prediction at the bottom layer. §.§.§ Evaluation Metrics Because in the MUSIC dataset the ground truth sound components of real videos containing multiple sounds are unknown, we use synthetic mixture audios for quantitative evaluation, similar to <cit.>.
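As an illustration of how such a synthetic test mixture and its spectrogram could be produced with the pre-processing parameters listed above, the sketch below uses librosa; the file names are placeholders, the 11 kHz sampling rate is taken literally (the actual code may use 11025 Hz), and the final crop/log-frequency re-sampling to 256×256 is omitted.

```python
import numpy as np
import librosa

SR, N_FFT, HOP = 11000, 1022, 256

def load_clip(path, seconds=6, rng=np.random.default_rng()):
    y, _ = librosa.load(path, sr=SR)                 # sub-sample at ~11 kHz
    n = seconds * SR
    start = rng.integers(0, max(1, len(y) - n))
    return y[start:start + n]                        # random 6-second crop

s1 = load_clip("accordion_solo.wav")                 # placeholder file names
s2 = load_clip("violin_solo.wav")
s_mix = 0.5 * (s1 + s2)                              # linear 2-Mix

# Hann window of size 1022 and hop length 256 -> 512 frequency bins
S_mix = np.abs(librosa.stft(s_mix, n_fft=N_FFT, hop_length=HOP, window="hann"))
print(S_mix.shape)   # roughly (512, 258); the paper re-samples this to 256x256
```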
Three metrics are adopted to quantify performance: Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR), and Signal-to-Artifact Ratio (SAR). SIR indicates the suppression of interference; SAR reflects the artifacts introduced by the separation process; and SDR measures the overall performance <cit.>. Their units are in dB, and higher is better for all three metrics. §.§ Quantitative Results We first evaluate the method performance in the task of separating sounds from two different kinds of instruments. On MUSIC-11 we train our model with two different data sources, i.e., 2-Mix and 3-Mix, which contain 2 and 3 sound components in the mixture, respectively. We also use the publicly released code to retrain two related models, Sound-of-Pixels[https://github.com/hangzhaomit/Sound-of-Pixelshttps://github.com/hangzhaomit/Sound-of-Pixels] <cit.> and MP-Net[https://github.com/SheldonTsui/Minus-Plus-Networkhttps://github.com/SheldonTsui/Minus-Plus-Network] <cit.>. As shown in Table <ref>, performance of pure sound separation approach NMF-MFCC <cit.> is poorer than visually-guided methods, indicating the benefit of leveraging visual features to separate sounds. By employing an object detection network pre-trained on other large-scale dataset, Co-Separation <cit.> reports superior performance over other methods that use only instrument appearance information. However, its superiority comes at the cost of increased model size (cf., the biggest #Param, 107.88M, among all compared methods). By comparison, AVPC enables separation quality on par with or better than the state-of-the-art on MUSIC-11, while reducing a significant number of network parameters (cf., the smallest #Param, 16.16M, among all compared methods). The results indicate that the PC-based sound separation network is essential. Besides, AVPC-RCoP further improves AVPC's performance especially on SDR and SIR metrics, demonstrating that the learning strategy RCoP can provide reasonable parameter initialization for the audio-visual model. What's more, both Co-Separation and ours consistently outperform all other competitors across two training configurations (i.e., 2-Mix and 3-Mix), and thus show better generalization ability on this dataset. Table <ref> shows the comparison results of separating sounds from two different instruments on MUSIC-21. It can be seen that in such a scenario with more diverse instruments, Sound-of-Motions <cit.> performs better than other previous works. Its success roots in taking optical flow and trajectory based motion cues into account, whereas resorting to several existing network models pre-trained on other large-scale datasets. Although the proposed AVPC and AVPC-RCoP, similar to MIML <cit.>, Sound-of-Pixels, and MP-Net, solely leverage instrument appearance feature to guide sound separation, they obtain top two separation accuracies in terms of SDR, and competitive scores on SIR and SAR, with least parameters among visually-guided methods. This appropriate trade-off between performance and model size can be attributed to the iterative representation inference process in PCNet, which enables AVPC to build increasingly accurate correspondence between audio and visual modalities. We conduct detailed ablation studies to illustrate this rationality in Section <ref>. Additionally, the learning strategy RCoP again improves separation qualities of AVPC across all metrics, showing effectiveness in learning task-oriented audio-visual representations. 
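For reference, separation scores of the kind reported in these tables are commonly computed with a bss_eval implementation; a minimal sketch using the mir_eval package is shown below (the paper does not state which implementation was used, and the array names and shapes are illustrative).

```python
import numpy as np
from mir_eval.separation import bss_eval_sources

def evaluate_separation(reference, estimated):
    """reference, estimated: float arrays of shape (n_sources, n_samples),
    e.g. the ground-truth and recovered waveforms of a 2-Mix test clip.
    Returns per-source SDR, SIR and SAR in dB (higher is better)."""
    sdr, sir, sar, _perm = bss_eval_sources(np.asarray(reference),
                                            np.asarray(estimated))
    return sdr, sir, sar
```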
Moreover, to explore the behavior of AVPC on more complex sound separation tasks, we perform additional experiments where the mixture audio consists of 3 or 4 sound components. Here we use the parameters of AVPC that were well trained on MUSIC-21 as model initialization, and retrain the network in a simple curriculum learning manner (i.e., training the model first on the 3-Mix data source and then on the 4-Mix data source). The two baselines, Sound-of-Pixels and MP-Net, are retrained with the same data and the same training pipeline accordingly. Results are reported in Table <ref>. We find that in these highly mixed cases our AVPCs are still superior to the baseline methods. In particular, at N=4 the SDR of AVPC outperforms that of Sound-of-Pixels by 0.6dB, and that of MP-Net by 1.07dB. The results show that AVPC generalizes better than the two baselines. However, the performance of all approaches degrades as the number of sound components in the test mixture audios increases, indicating that separating sound mixtures with multiple sources remains a very challenging task. We leave this to future work. §.§ Qualitative Results Fig. <ref> shows qualitative samples with sound separation masks and spectrograms predicted by Sound-of-Pixels, MP-Net, AVPC, and AVPC-RCoP, respectively. As depicted in mixture pair 1 from MUSIC-11 (i.e., the two columns on the left in Fig. <ref>), Sound-of-Pixels generates masks that allow more pixel information of the mixture spectrogram to leak out, leading to predicted spectrograms carrying a few tangled frequency features. By contrast, MP-Net excels at filtering out noisy components from the predicted spectrograms, but simultaneously removes meaningful details. Compared with these methods, AVPC and AVPC-RCoP produce more accurate sound separation masks and thus spectrograms. Similar phenomena can be observed in the mixture pair samples from the MUSIC-21 and URMP datasets. Again, these results indicate the superiority of AVPCs in separating musical instrument sounds. More comprehensive results can be found in the supplementary video, where we also show examples of separating realistic duet sounds with our AVPCs. To illustrate the performance stability of AVPC, we conduct experiments to separate sounds from mixture audios that share the same sound component. Specifically, for each instrument category in MUSIC-11 we select a 6-second audio clip as the first sound component (denoted as Instr. 1), and then synthesize 10 mixture audios by summing the first sound component with a second sound component (denoted as Instr. 2) from each of the other 10 instrument classes, respectively. The goal is to predict masks that will be used to separate the sounds of Instr. 2. Fig. <ref> visualizes four groups of mask prediction examples. We find that Sound-of-Pixels and MP-Net are prone to produce masks that underestimate or overestimate the occlusion pixels in the ground truths (e.g., see the marked red regions in the top-right and bottom-right panels of Fig. <ref>, respectively). Compared with the two baseline methods, AVPC and AVPC-RCoP are better at recovering local textures in the predicted masks. In fact, we also employ two image quality assessment indexes, peak signal-to-noise ratio (PSNR) and multi-scale structural similarity (MS-SSIM) <cit.>, to evaluate the quality of the predicted mask images. From Fig. <ref>, we observe that both AVPC and AVPC-RCoP produce higher PSNR and MS-SSIM values across all 11 instrument categories than the baseline approaches.
These results show that for various kinds of instrument mixture audios, AVPC performs consistently well in predicting sound separation masks, resulting in a sound separation ability superior to Sound-of-Pixels and MP-Net. §.§ Ablation Study §.§.§ Effect of Each Part To demonstrate how different parts of AVPC influence sound separation performance, we present extensive ablations and analysis in this section. All the results, as summarized in Table <ref>, are based on the held-out MUSIC-11 test set. Here, the baseline (model A) is trained in the most basic configuration and provides separation scores for reference. By increasing the recursive cycles at the training stage (T=5) but not at the test stage (T=1), we obtain the lowest values of the three performance metrics (model B). However, when we also set T=5 at the test stage (model C), the separation scores are significantly improved (compared with model A, model C improves SDR by 1.44dB, SIR by 0.9dB, and SAR by 1.24dB, respectively). This verifies the effectiveness of the iterative inference procedure adopted in AVPC, which only works when the recursive cycles are consistent across training and test phases. Then, transforming the visual feature through an additional convolutional layer improves performance further (model D). We speculate that the original features derived from off-the-shelf visual backbones (here ResNet-18) include task-irrelevant noise, and the additional convolutional layer can serve to suppress such noisy information. Moreover, a great improvement is gained after using RCoP during training (model E), showing that RCoP is able to provide a better parameter initialization for the MaS training framework. Ideally, AVPC should benefit from sufficient video frames. This is validated by model F, which utilizes all 48 frames (corresponding to a 6-second video clip) to extract visual features at the test stage. §.§.§ Effect of Iterative Representation Inference The most remarkable property of AVPC, which distinguishes it from the other audio-visual methods, is that it infers representations in an iterative manner. To analyze this mechanism in depth, we first investigate the effect of different recursive cycles (T) at the training stage. As shown in Table <ref>, AVPC can benefit from training with more recursive cycles; however, the performance tends to saturate when T>5. Considering that long iterative inference also brings increased computational complexity, we set T=5 during training in all experiments. Second, we inspect the sound separation results of AVPCs at different computation time steps (t) during test. As illustrated in Fig. <ref>, all separation scores tend to increase given more cycles of computation, meaning that the audio-visual representation is progressively refined by the iterative procedure. In terms of the overall performance metric SDR, AVPC acts slightly better than AVPC-RCoP at the initial time step (t=1); however, AVPC-RCoP turns out to be superior to AVPC as time goes on, especially at the end of the iterative procedure (t=5). This phenomenon reveals that the learning strategy RCoP gains more from increasing the number of iterative computations. Third, to visualize that the iterative inference in AVPC indeed makes the audio-visual representation achieve better feature fusion, Fig. <ref> displays the t-SNE <cit.> embeddings of the visual features and audio-visual representations, respectively.
With the guidance from discriminative visual cues, the audio-visual representations of different sounding objects become more and more distinguishable as increasing time steps (e.g., the embedding cluster of “Erhu” denoted by green evolves to be compact). This indicates that the iterative inference enables audio-visual representation to extract semantic discrimination from visual feature, and thus can reduce ambiguity in predicted sound separation masks. §.§.§ Effect of Visual Feature Extractor To analyze the influence of visual feature extractor used in AVPC, we report Performance vs. #Params trade-offs on MUSIC-11 test set in Fig. <ref>. To this end, all AVPCs take PCNet with the same configuration to construct the sound separation network, and the difference among them is only in the used visual backbone networks. As shown in Fig. <ref>, AVPCs equipped with different backbones consistently perform better and meanwhile have fewer parameters than two baseline methods, Sound-of-Pixels and MP-Net. Because in this comparison experiment the number of parameters of PCNet (4.69 M) is much smaller than that of U-Net (30.26 M), the AVPC taking ResNet-34 as visual backbone is still a parameter efficient method (26.27 M) compared with two baselines. These observations show the potential of AVPC to reduce memory cost on maintaining models. Additionally, with the increase of model size, the performance of AVPC is generally improved across three metrics. In particular, when employing ResNet-34 as the visual backbone network, AVPC achieves best sound separation quality on MUSIC-11 (SDR=9.93, SIR=14.47, SAR=13.10). All these results demonstrate that AVPC can gain from visual features that carry more discriminative cues. §.§.§ Effect of Visual Feature Map's Shape We conduct experiments to provide a sensitivity analysis about the shape of the visual feature map 𝐟_n (i.e., ablation on Ĥ×Ŵ while setting K=16). In Table <ref>, we find that compared with the smallest feature map (1×1, as also used in Sound-of-Pixels and MP-Net) and the largest one (7×7), the feature map with the shape of 2×2 benefits AVPC's sound separation performance most. This is presumably because large feature map not only conveys semantic discrimination related to sounding object, but also contains noise of background distractors; while small map leaves out too much information to serve as valid visual guidance. Therefore, we adopt the 2×2 visual feature map to guide sound separation in all cases. §.§.§ Effect of Feature Fusion Method In AVPC, multimodal feature fusion is implicitly implemented through the alternative representation updating in PCNet. Here we compare the PCNet way with other four feature fusion methods: addition (Add), multiplication (Mul), concatenation (Cat) as widely used in <cit.>, and attention-based modules (Att) <cit.>. For fair comparisons, we leverage ResNet-18 as the default visual feature extractor in all compared methods, while use the 14-layer U-Net (the same as <cit.>) to process sound signal when the above four fusion methods are adopted. Besides, we set the attention module according to the configuration described in <cit.>. As we can see from Table <ref>, when fusing audio and visual features without employing parameterized modules, the three naïve methods (Add, Mul, and Cat) only achieve limited sound separation performance. While Att harvests results comparable to ours, it needs a heavily parameterized module for feature interaction. 
By contrast, our fusion with PCNet reaches better separation accuracy with a smaller model size. §.§ Failure Cases Fig. <ref> illustrates failure cases of AVPC for homo-musical separation, which aims to separate sounds emitted by the same kind of musical instrument. The predicted spectrograms usually contain frequencies of the other sound component, as marked by the red rectangles in Fig. <ref>. This is mainly because AVPC only utilizes visual appearance cues of the sounding object to guide separation; however, such visual cues cannot distinguish between instruments of the same category. To handle this challenging task, object motion-related cues, such as pixel-wise trajectories <cit.> and human body keypoints <cit.>, have to be incorporated into visual feature extraction and integration. § CONCLUSION AND FUTURE WORK We have presented a novel approach—audio-visual predictive coding (AVPC)—for visually-guided sound source separation. AVPC uses a simple video analysis network to derive semantic visual features, which then serve to guide the sound separation network to extract audio features, fuse multimodal information, and predict sound separation masks. Thanks to the parameter-efficient network architecture and the iterative representation inference mechanism of predictive coding, AVPC not only achieves superior performance compared with conventional U-Net-based methods, but also significantly reduces the number of model parameters. Furthermore, we have introduced a self-supervised learning strategy, RCoP, that aims at learning two similar audio-visual representations of the target sound component, showing an effective way to further boost sound separation performance. Extensive experiments demonstrate the effectiveness of our approach in separating musical instrument sounds. We anticipate that this work will provide new insight into multimodal representation learning. By viewing one modality's feature with semantic discrimination as input and another modality's feature with uncertainty as an adjustable prior, one could leverage the cross-modal feature prediction in predictive coding to reduce such uncertainty. This mechanism has proven to be helpful for visual sound localization <cit.>, and also has the potential to benefit vision-language tasks (e.g., referring expression comprehension <cit.>). For visual sound separation, there exist open yet challenging problems that are not investigated in this work, such as separating sounds from sound mixtures with arbitrary numbers and types of sound components <cit.>, removing off-screen sound noise or separating intermittent sounds <cit.>, the homo-musical separation task <cit.>, etc. In addition to developing task-oriented methods to mitigate these problems (e.g., from the perspective of disentangled representation learning <cit.>), it is also necessary to collect high-quality video data that can cover diverse audio-visual scenarios <cit.>. This may be a direction of future investigation.
http://arxiv.org/abs/2306.09199v1
20230615152854
Kinetic based optimization enhanced by genetic dynamics
[ "Giacomo Albi", "Federica Ferrarese", "Claudia Totzeck" ]
math.OC
[ "math.OC", "cs.NA", "math.NA" ]
Kinetic based optimization enhanced by genetic dynamics Giacomo Albi[ Dipartimento di Informatica, Università di Verona, Verona, Italy, e-mail: [email protected]], Federica Ferrarese[ Dipartimento di Matematica, Università di Trento, e-mail: [email protected]], and Claudia Totzeck[Department of Mathematics and Informatics, University of Wuppertal, e-mail: [email protected]] We propose and analyse a variant of the recently introduced kinetic based optimization method that incorporates ideas like survival-of-the-fittest and mutation strategies well-known from genetic algorithms. Thus, we provide a first attempt to reach out from the class of consensus/kinetic-based algorithms towards genetic metaheuristics. Different generations of genetic algorithms are represented via two species identified with different labels, binary interactions are prescribed on the particle level, and then we derive a mean-field approximation in order to analyse the method in terms of convergence. Numerical results underline the feasibility of the approach and show in particular that the genetic dynamics allows us to improve the efficiency of this class of global optimization methods in terms of computational cost. Keywords: global optimization, mean-field limit, Boltzmann equations, particle-based methods, consensus-based optimization. Mathematics Subject Classification: 90C26, 90C56, 35Q93, 35Q20 § INTRODUCTION In recent years a new perspective on gradient-free methods for global optimization of non-convex high-dimensional functions was established. It arises from a new class of models which exploit the collective dynamics of swarms and define communication strategies among the swarm members to find global optimizers, thereby allowing for rigorous convergence analysis using mesoscopic approximations. In particular, the characterization of the long-time behaviour allows to prove that the swarms concentrate arbitrarily close to the unique global minimizer of the objective function; we refer to <cit.> and the overview in <cit.>. Similar to gradient-based methods <cit.>, there are first-order and second-order models available, some of them including memory <cit.> or momentum effects <cit.>. However, in contrast to gradient-based methods such as stochastic gradient descent, the landscape of the objective function is explored via function evaluations only. In more detail, the objective value at the current position and the current position of the agents are exchanged, for example, with the help of a weighted mean value which is constructed such that the Laplace principle <cit.> applies. The dynamics is tailored such that the agents are driven towards the weighted mean on the one hand, and randomly explore the landscape on the other hand. This already indicates that the two components of the dynamics need to be well-balanced to obtain desirable results.
Consensus-based optimization (CBO) was a first step towards the mathematical understanding of metaheuristics for global optimization, such as particle swarm optimization (PSO) methods, where second order dynamics is used for the evolution of the particles <cit.>. Recently, the gap between the first-order method CBO and PSO was bridged in <cit.>, further extensions were provided for constrained optimization <cit.>, and multi-objective problems <cit.>. More recently, kinetic-based optimization (KBO) methods have been proposed in<cit.>, where each agent with position x moves subject to the following interaction rules x' = x + ν_F (x̂(t)-x)+σ_F D(x) ξ, where x' denotes the post iteration position, σ_F, ν_F are positive parameters which allow to balance the exploitation and exploration of the swarm, ξ is a random perturbation term, D(x) is a diffusion matrix and x̂(t) denotes the global estimate of the position of the global minimizer at time t. In addition to this dynamics a drift towards the local best and a local diffusion term is proposed in <cit.>. The corresponding dynamics is described by a multidimensional Boltzmann equation and can be simulated with the help of Monte Carlo algorithms <cit.>. With this work, we aim at extending KBO algorithm reaching out towards genetic algorithms (GA) <cit.>, a very popular class of metaheuristics that is widely used in engineering. The GA models a natural selection process based on biological evolution <cit.>. To this end, individuals (parents) from the current population are selected and their objective values (gene information) is combined to generate the next generation (children). The selection process is usually driven by a survival-of-the-fittest idea, hence over successive generations, the system is assumed to evolve towards an optimal solution. Agents in promising positions, i.e., with small objective values, are labeled as parents and the others are labeled as children. Leaders do not modify their position, hence they survive the iteration like in a survival-of-the-fittest strategy. In contrast, a child in position x interacts with a randomly chosen parent in position x_* and updates the position according to the rules x' = x_* with rate ν_F, x' = x with rate 1-ν_F. Here x' denotes the post interaction position and ν_F >0 is the jump rate. Furthermore, mutations can occur, that means a child in position x encounters a random perturbation of the form x' = x + σ_F ξ, where σ_F is a positive parameter and ξ random vector drawn from a normal distribution. Slight modifications of the mutation process can be considered, assuming that all children encounter a random perturbation of the type x' = x +σ_F D(x) ξ, with diffusion matrix D(x). Thus, to establish a relation between KBO and GA, we divide the swarm into two species called followers (children) and leaders (parents) via a labeling strategy, evolving according to different transition processes <cit.>. More details are presented in the following section which are organized as follows: in Section <ref> we introduce the Genetic Kinetic based optimization methods focusing on the description of the binary interactions rules which describes the dynamics. In Section <ref>, we derive the mean field, in particular the evolution equations of the density functions of the two species. In Section <ref>, we discuss different strategies of how to assign labels. 
In Section <ref>, we provide a theoretical analysis including the exponential decay of the variance and the convergence of the method to the global minimum. In Section <ref>, we describe Nanbu's algorithm which is used to obtain the numerical results presented in Section <ref>. Here, we show different numerical experiments, testing the efficiency in terms of success rate and number of iterations and compare the results of the GKBO algorithm, to KBO and genetic algorithms. § GENETIC KINETIC BASED OPTIMIZATION (GKBO) The GKBO method we propose in the following enhances kinetic based optimization, which belongs to the class of consensus based algorithms, with ideas from genetic algorithms. To this end, we assume to have a population divided into two groups, similar to the parent and children populations in genetic algorithms. The two groups are specified with the help of labels leading to a modified KBO dynamic with followers and leaders. The dynamics is tailored in such a way that the the population clusters at the unique global minimum of the possibly non-convex objective function ℰ^d →. Hence, in the long time limit the dynamics solves the global optimization problem given by min_x∈^dℰ(x), where ℰ is assumed to have a unique global minimizer. In more detail, each agent is described by its position x ∈ℝ^d varying continuously and a binary variable for the leadership-level λ∈{0,1}. In the following we identify leaders with λ = 1 and followers with λ = 0. We are interested in the evolution of the density function f=f(x,λ,t), f: ^d×{0,1}×_+→_+ where t∈^+ denotes as usual the time variable. In the rest of the paper we denote f(x,λ,t) as f_λ(x,t) and define g(x,t) = ∑_λ∈{0,1} f_λ(x,t), to be the density of the whole population at time t. We assume that g(x,t) is normalized, hence a probability measure and introduce the fractions ρ_λ∈[0,1] with λ∈{0,1} s.t. ρ_0 + ρ_1 = 1 and f_λ(x,t)/ρ_λ are probability measures as well. §.§ Binary interaction between agents A binary interaction of agents with state (x,λ) and (x_*,λ_*) is described by their post-interaction positions given by x' = x + ( ν_F (x_*-x) + σ_F D(x)ξ)λ_* (1-λ)+ ν_L(x̂(t)-x)λ , x_*' = x_*, where σ_F,ν_F, ν_L, are positive parameters, ξ is a normally distributed random number and D(x) is the diffusion matrix, defined to be either D(x) = |x̂(t)-x| Id_d, in the case of isotropic diffusion<cit.>, or D(x) = diag{(x̂(t)-x)_1,… (x̂(t)-x)_d} , in the case of anisotropic diffusion, <cit.>. In equation (<ref>)-(<ref>) the term x̂(t) is the global estimate of the best position of the minimizer. The term x̂(t) is computed as a convex combination of particle locations weighted by the cost function according to Laplace principle (<cit.>). In case we consider the whole population, we have x̂(t) = ∫_ℝ^dx e^-αℰ(x)g(x,t)dx/∫_ℝ^d e^-αℰ(x)g(x,t)dx, and Laplace principle yields lim_α→∞( -1/α∫_ℝ^d e^-αℰ(x) g(x,t) dx ) = inf_x∈supp g(x,t)ℰ(x). In the section on numerical results we will also consider variants, where the weighted mean is computed with information of leaders or followers only. Note that (<ref>) implies that no follower-follower interactions are considers, since if both λ and λ_* are equal to zero the agents keep their positions. 
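For readers who prefer code to formulas, the weighted mean x̂ and the binary interaction rule above can be condensed into a few lines of Python. The sketch below is illustrative only: the function names are ours, the shift by the minimum inside `weighted_mean` is the usual numerical stabilization for large values of α (not part of the definition), and both the isotropic and the anisotropic choice of D(x) are included.

```python
import numpy as np

def weighted_mean(X, E, alpha):
    """Laplace-principle estimate x_hat of the global minimizer.
    X: (N, d) array of particle positions, E: vectorized objective returning
    one value per row, alpha > 0.  The shift by vals.min() avoids underflow
    when alpha is very large."""
    vals = E(X)
    w = np.exp(-alpha * (vals - vals.min()))
    return (w[:, None] * X).sum(axis=0) / w.sum()

def binary_interaction(x, lam, x_star, lam_star, x_hat, nu_F, nu_L, sigma_F,
                       anisotropic=True, rng=np.random.default_rng()):
    """Post-interaction position of the agent (x, lam) after meeting
    (x_star, lam_star); the partner keeps its position."""
    xi = rng.standard_normal(x.shape)
    if anisotropic:
        noise = (x_hat - x) * xi                # D(x) = diag(x_hat - x)
    else:
        noise = np.linalg.norm(x_hat - x) * xi  # D(x) = |x_hat - x| Id
    follower_step = (nu_F * (x_star - x) + sigma_F * noise) * lam_star * (1 - lam)
    leader_step = nu_L * (x_hat - x) * lam
    return x + follower_step + leader_step
```

In particular, if both agents are followers (λ = λ_* = 0) the returned position equals x, reflecting the absence of follower-follower interactions noted above.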
§.§ Emergence of leaders and followers The emergence of leaders and followers is realized with the help of a transition operator which acts as follows 𝒯[f_0](x,t) = π_L→ F(x,λ;f)f_1(x,t)- π_F→ L(x,λ;f) f_0(x,t), 𝒯[f_1](x,t) = π_F→ L(x,λ;f)f_0(x,t)- π_L→ F(x,λ;f)f_1(x,t), where π_F→ L(·) and π_L→ F(·) are certain transition rates, possibly depending on the current states. In the simplest case, if we assume that leaders emerge with fixed rate π_FL>0 and return to the followers status with fixed rate π_LF>0 then the transition rates reduce to π_L→ F =π_LF, π_F→ L =π_FL. However, we also cover more general cases, as for example proposed in <cit.>, where each agent in position x is associated with a weight ω(x,t) =1/N #{ y∈𝒜(t): |ℰ(x_min) - ℰ(y)| < |ℰ(x_min) - ℰ(x)|} = 1/N∑_λ∈{0,1}∫_ℝ^dχ_[0,1)(|ℰ(x_min(t)) - ℰ(y) |/|ℰ(x_min(t)) - ℰ(x) |) f(y,λ,t) dy, with χ_[0,1)(·) denoting the characteristic function of the interval [0,1) and x_min(t) = min_x ∈𝒜(t)ℰ(x), where 𝒜(t) is the set of agents at time t. Assuming that agents with weight smaller than a certain threshold ω̅, which depends on the amount of leaders that we would like to generate, are in the leaders status while the others are in the followers status, then we can write the transition rates as follows π_L→ F = 1, if ω(x,t)>ω̅, 0, if ω(x,t)≤ω̅, π_F→ L = 0, if ω(x,t) ≥ω̅, 1, if ω(x,t)<ω̅. The evolution of the emergence and decay of leaders can be described by the master equation ddtρ_λ(t) + ∫_^2d[f](x,λ,t) dx=0, for λ∈{0,1}, with ρ_λ(t) = ∫_^d f_λ(x,t) dx. From the above definition of the transition operator [·], it follows that ddt∑_λ∈{0,1}ρ_λ(t)=0. In case of constant transition rates π_L→ F(·) = π_LF, π_F→ L(·) = π_FL, we can rewrite equation (<ref>) as ∂_tρ_1(t) =π_FLρ_0(t) - π_LFρ_1(t), ∂_tρ_0(t) =π_LFρ_1(t)-π_FLρ_0(t). which allows us to calculate its stationary solution explicitly as ρ_1^∞ = π_FL/π_LF + π_FL, ρ_0^∞ = π_LF/π_LF + π_FL. The weighted strategy is inspired from the selection criterion of GA, where parents are chosen to be the agents in best position w.r.t. the cost function. In the numerical experiments we will also consider a mixed strategy, assuming that a certain percentage p̅ of the total amount of leaders change their label according to the weighted strategy and the remaining ones changes their label randomly. § DERIVATION OF THE MEAN-FIELD EQUATION Combining the interaction and transition dynamic described in the previous section, we obtain the evolution of the density function f_λ(x,t) which is described by the integro-differential equation of Boltzmann-type ∂_t f_λ(x,t) -[f_λ](x,t)= ∑_λ_*∈{0,1} Q(f_λ,f_λ_*)(x,t), where [·] is the transition operator and Q(·,·) is the binary interaction operator defined as follows Q(f_λ,f_λ_*) =η∫_ℝ^2d(1Jf_λ(x',t)f_λ_*(x_*',t)-f_λ(x,t)f_λ_*(x_*,t))dx dx_*, where (x',x_*') are the pre-interaction positions generated by the couple (x,x_*) after the interaction (<ref>). The term J denotes the Jacobian of the transformation (x,x_*)→ (x',x_*') and η>0 is a constant relaxation rate representing the interaction frequency. To obtain a weak-formulation, we consider a test function ϕ(x) and rewrite the collision operator ∫_^d Q(f_λ,f_λ_*)(x,t)ϕ(x)dx =η∫_^2d (ϕ(x')-ϕ(x)) f_λ_*(x_*,t) f_λ(x,t) dx dx_* . Hence, the weak form of (<ref>) reads ∂/∂ t∫_ℝ^d f_λ(x,t) ϕ(x) dx- ∫_ℝ^d𝒯[f_λ](x,t) ϕ(x) dx = η∑_λ_*∈{0,1}⟨∫_ℝ^2d[ϕ(x')-ϕ(x)] f_λ(x,t) f_λ_*(x_*,t) dx dx_* ⟩. To simplify the computations, we assume to have constant transition rates (<ref>) and to be in the quasi-stationary state ρ_λ^∞ i.e. 
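The two labelling mechanisms just described, fixed transition rates on the one hand and the weighted, survival-of-the-fittest-type selection on the other, translate directly into code. The following Python sketch is again only illustrative: using a quantile of ω as the threshold ω̄ and scaling the random rates by ε (as in the time-discrete scheme of the numerical section) are our implementation choices, not prescriptions of the model.

```python
import numpy as np

def transition_weights(vals):
    """omega(x): fraction of agents whose objective value is strictly closer
    to the current best value than that of x."""
    gap = np.abs(vals - vals.min())              # |E(x_min) - E(x)|
    return (gap[:, None] > gap[None, :]).sum(axis=1) / len(vals)

def update_labels(lam, vals, strategy="weighted", rho1=0.5,
                  pi_FL=0.2, pi_LF=0.2, eps=0.1, rng=np.random.default_rng()):
    """One label update.  'weighted' promotes (roughly) the best rho1-fraction
    of agents to leaders; 'random' switches labels with probabilities
    eps*pi_FL and eps*pi_LF, whose equilibrium leader mass is
    rho_1^inf = pi_FL / (pi_LF + pi_FL)."""
    lam = lam.copy()
    if strategy == "weighted":
        omega = transition_weights(vals)
        omega_bar = np.quantile(omega, rho1)     # threshold for the target mass
        lam[:] = np.where(omega <= omega_bar, 1, 0)
    else:
        u = rng.random(len(lam))
        to_leader = (lam == 0) & (u < eps * pi_FL)
        to_follower = (lam == 1) & (u < eps * pi_LF)
        lam[to_leader], lam[to_follower] = 1, 0
    return lam
```

A mixed strategy with parameter p̄, as used later in the experiments, can be obtained by applying the weighted rule to a fraction p̄ of the leader budget and the random rule to the remaining part.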
ρ_λ≈ρ_λ^∞ for any λ∈{0,1} as in (<ref>). Moreover, we introduce the scaling parameter ε > 0 and consider ν_F →ν_F/ρ_1ε, ν_L →ν_L/ρ_1ε, σ_F →σ_F/√(ρ_1)√(ε), η→1/ε. This scaling corresponds to the case where the interaction kernel concentrates on binary interactions producing very small changes in the agents position but at the same time the number of interactions becomes very large. To obtain the mean-field equation, we consider the Taylor expansion of the test function ϕ(x') centred in x given by ϕ(x')-ϕ(x) =∇_xϕ(x)· (x'-x) + 1/2Δ_x ϕ(x) (x'-x)^2 + 𝒪(ε^2), and use it to rewrite (<ref>) as follows ∂/∂ t∫_ℝ^d f_λ(x,t) ϕ(x) dx -∫_ℝ^d𝒯[f_λ](x,t) ϕ(x) dx = ∑_λ_*∈{0,1}{∫_^2d( ν_F/ρ_1 (x_*-x) λ_* (1-λ) +ν_L/ρ_1 (x̂(t)-x) λ)·∇_x ϕ(x) df_λ df_λ_*. +.σ^2_F/2∫_^2d D^2(x)(1-λ)^2λ_*^2 Δ_x ϕ(x) df_λ df_λ_*} + 𝒪(ε), where for simplicity we write df_λ = f_λ(x,t) dx and df_λ_* = f_λ_*(x_*,t)dx_*. Now, taking the limit ε→ 0, integrating by parts and rewriting the equation in strong form yields ∂/∂ t f_0(x,t) - 𝒯[f_0](x,t) = σ_F^2/2Δ_x[D^2(x) f_0(x,t)] - ν_F ∇_x ·[(m_1(t)/ρ_1 -x ) f_0(x,t)], ∂/∂ t f_1(x,t) - 𝒯[f_1 ](x,t) = -ν_L/ρ_1∇_x ·[(x̂(t) -x ) f_1(x,t)], where D(x) is the diffusion matrix defined in (<ref>)-(<ref>), x̂(t) is the global estimate of the global minimizer at time t defined in equation (<ref>) and m_1(t) = ∫_^2d x f_1(x,t) dx denotes the centre of mass of the leaders at time t. Multiplying both side of the second equation in (<ref>) by x/ν_L integrating and taking the formal limit ν_L→ + ∞, we get m_1(t)/ρ_1 = x̂(t). Plugging it into the first equation in (<ref>), assuming 𝒯[f_0](x,t)=0, we recover the equation that governs the dynamics in absence of leaders that is ∂/∂ t f_0(x,t) = σ_F^2/2Δ_x [D(x)^2 f_0(x,t)] + ν_F ∇_x ·[(x̂(t) -x ) f_0(x,t)]. To summarize, the diagram in Figure <ref> describes the relation between the three algorithms at the particle and mean field level. § MOMENTS ESTIMATES AND CONVERGENCE TO THE GLOBAL MINIMUM Following the idea introduced in <cit.> we provide moments estimates, showing that the variance decreases exponentially to zero, and we prove the convergence of the method toward the position of the global minimum. In this section we study the behaviour of the two population dynamic, we therefore assume throughout this section ρ_0, ρ_1 >0. §.§ Evolution of the moment estimates We define the first two moments of the total population by m(t) = m_0(t) + m_1(t), e(t) = e_0(t) + e_1(t), respectively, where m_0(t) = ∫_ℝ^dx   f_0(x,t) dx, e_0(t) = ∫_ℝ^d| x |^2 f_0(x,t) dx, m_1(t) =∫_ℝ^dx   f_1(x,t) dx, e_1(t) = ∫_ℝ^d| x |^2 f_1(x,t) dx, are the first two moments of the subpopulations f_λ(x,t) for λ∈{0,1} and V(t) = v_0(t) + v_1(t), the sum of the variances of the subpopulations given by v_0(t) = ∫_^d| x - m_0/ρ_0|^2 f_0(x,t) dx, v_1(t) =∫_^d| x -m_1/ρ_1|^2 f_1(x,t) dx. For the following computations it is helpful to have in mind that m(t) = ∫_^d x (f_0(x,t) + f_1(x,t)) dx, e(t) = ∫_^d |x|^2 (f_0(x,t) + f_1(x,t)) dx, but due to the nonlinearity V(t) ∫_^d |x - m(t)|^2 (f_0(x,t) + f_1(x,t)) dx. Let us assume the transitions have equilibrated, that is, ρ_0 ≡ρ_0^∞ and ρ_1 ≡ρ_1^∞. Furthermore let ℰ(x) positive and bounded for all x∈ℝ^d, in particular, there exist constants ℰ,ℰ >0 such that ℰ:=inf_x ℰ(x) ≤ℰ(x) ≤sup_x ℰ(x) :=ℰ, and define σ̃ = k σ_F^2 b_ℰ, with b_ℰ = exp(α(ℰ̅-ℰ)), where k=d in the case of isotropic diffusion and k=1 in the case of anisotropic diffusion. 
If ν_F = ν_L, ν_F > max{σ̃/2, ρ_1/2}, it holds d/dtm(t) = ν_F(x̂-m)(t), d/dtV(t) ≤ (-2ν_F+σ̃) V(t) + (σ̃ρ_0ρ_1-π_LFρ_1 -π_FLρ_0 ) ( m_0(t)/ρ_0 - m_1(t)/ρ_1) ^2. Let us define for simplicity M_λ(t) = 1/ρ_λ∫_ℝ^dx   f_λ(x,t) dx, E_λ(t) =1/ρ_λ∫_ℝ^d| x |^2 f_λ(x,t) dx, V_λ(t) = 1/ρ_λ∫_^d| x -M_λ|^2 f_λ(x,t) dx, for any λ∈{0,1} such that m(t) = ρ_0 M_0(t) + ρ_1 M_1(t), e(t) = ρ_0 E_0(t) + ρ_1 E_1(t), V(t) = ρ_0 V_0(t) + ρ_1 V_1(t). We begin by computing the evolution of the first moments d/dtm(t) = ρ_0 d/dtM_0(t) + ρ_1 d/dt M_1(t). For the first term of (<ref>) we obtain ρ_0 d/dt M_0(t) = ∫_^d x ∂_t f_0 = =∫_^d x [ -π_LF f_1 + π_FL f_0 + . -.∇_x ·( ν_F (M_1-x) f_0 )+ σ_F^2/2Δ_x( D^2(x) f_0) ] dx = = -π_LFρ_1 M_1(t) + π_FLρ_0 M_0(t) + ρ_0 ν_F (M_1-M_0)(t). and for the second term in (<ref>) it holds ρ_1 d/dt M_1(t) = ∫_^d x ∂_t f_1 = =∫_^d x [ π_LF f_1 - π_FL f_0 -∇_x ·( ν_L/ρ_1 (x̂-x) f_1 )] dx = = π_LFρ_1 M_1(t) - π_FLρ_0 M_0(t) + ν_L (x̂-M_1)(t). Together this yields d/dtm(t) = ν_F ρ_0 (M_1-M_0)(t) +ν_L (x̂-M_1)(t), and recalling the definition of M_0(t) and M_1(t) in (<ref>) we get d/dt m(t) = ν_L x̂(t) - ν_F m(t) + ( ν_F ρ_0/ρ_1 + ν_F -ν_L/ρ_1) m_1(t). By the first assumption in (<ref>) we can recover the first equation of the statement. For V(t) we have d/dt V (t) = ρ_0 d/dtV_0 (t)+ ρ_1 d/dtV_1 (t). We investigate the terms separately. First, we obtain d/dtV_0(t) = 1/ρ_0d/dt∫_^d| x - M_0(t)|^2 df_0 = 2/ρ_0∫_^d( x-M_0(t), -d/dtM_0(t)) df_0_=: I_0 + 1/ρ_0∫_^d| x - M_0(t)|^2 ∂_t f_0_=: I_1. We note that I_0 vanishes, since 2ρ_0^-1∫_^d x - M_0(t) df_0 =0. We divide I_1 into its drift, diffusion and transition parts to obtain I_1 =: I_1^0 + I_1^1 + I_1^2, with I_1^0 = 1/ρ_0∫_^d| x-M_0(t)|^2 (-ν_F ∇_x ·( ( M_1(t)-x) f_0)) dx = =2 ν_F/ρ_0∫_^d (x-M_0(t)) (M_1(t)-x) df_0 = = 2ν_F ( M_0(t) M_1(t) - E_0(t) - M_0(t)M_1(t) +M_0^2(t)) = -2ν_F V_0, and, by an application of Jensen inequality we get I_1^1 = 1/ρ_0∫_^d| x-M_0(t)|^2 Δ_x( σ_F^2/2 D^2(x) f_0) dx = = σ_F^2/2ρ_0∫_^d k |x̂(t)-x|^2 df_0 = =σ_F^2/2ρ_0∫_^d k ∫_^d|∫_^d(y-x) e^-αℰ(y) g(y) dy/∫_^d e^-αℰ(y) g(y) dy|^2 df_0 = ≤σ̃/ρ_0∫_^2d| y-x |^2 g(y) f_0(x) dx dy = = σ̃/ρ_0( ρ_0 E_0(t)+ρ_1 E_1(t) - (ρ_0 M_0(t) + ρ_1 M_1(t))^2) = = σ̃/ρ_0( V(t) + ρ_0ρ_1 ( M_0(t)-M_1(t)) ^2), and finally, I_1^2 = 1/ρ_0∫_^d| x-M_0(t)|^2 ( -π_LFf_1 + π_FL f_0) dx = -π_LF/ρ_0∫_^d( x^2+M_0^2(t)-2xM_0(t)) df_1 + π_FL V_0(t) = -π_LF/ρ_0( ρ_1 V_1(t) + ρ_1 (M_0-M_1)^2(t)) +π_FLV_0(t), where we add and subtract ρ_1 M_1^2(t) in the last term of I_1^2. For V_1(t) we have d/dtV_1(t) = 1/ρ_1d/dt∫_^d| x - M_1(t)|^2 df_1 = 2/ρ_1∫_^d( x-M_1(t), -d/dtM_1(t)) df_0_=: I_2 + 1/ρ_1∫_^d| x - M_1(t)|^2 ∂_t f_1_=: I_3. Similarly to the case I_0 one can easily conclude that I_2 vanishes. We divide I_3 into the drift and transition part to obtain I_3 = I_3^0 + I_3^1, with I_3^0 = 1/ρ_1∫_^d| x-M_1(t)|^2 (- ν_L/ρ_1∇_x ·( ( x̂(t)-x) f_1)) dx = =2 ν_L/ρ^2_1∫_^d (x-M_1(t)) (x̂(t)-x) df_1 = = 2 ν_L/ρ_1( M_1(t) x̂(t) - E_1(t) - M_1(t)x̂(t) +M_1^2(t)) = -2ν_L/ρ_1 V_1, and I_3^2 = 1/ρ_1∫_^d| x-M_1(t)|^2 ( π_LFf_1 - π_FL f_0) dx = = -π_FL/ρ_1∫_^d( x^2+M_1^2(t)-2xM_1(t)) df_0 + π_LF V_1(t) = = -π_FL/ρ_1( ρ_0 V_0(t) + ρ_0 (M_0-M_1)^2(t)) +π_LFV_1(t), where we add and subtract ρ_0 M_0^2(t) in the last term of I_3^2. Altogether, we get d/dtV(t)≤ ρ_1 ( 2ν_F +ρ_0-2ν_F/ρ_1) V_0(t) +( -2ν_F +σ̃) V(t) + + ( σ̃ρ_0 ρ_1 -π_LFρ_1 -π_FLρ_0) ( M_0-M_1)^2(t). Using the assumptions, we recover the second inequality in (<ref>). 
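The quantities controlled by this proposition are easy to monitor along a particle simulation. The helper below (ours, not the paper's) computes the empirical counterparts of ρ_λ, M_λ, m and V for a labelled ensemble; plotting V over the iterations of the scheme from the numerical section should exhibit the exponential decay predicted here whenever 2ν_F > σ̃.

```python
import numpy as np

def moment_diagnostics(X, lam):
    """Empirical masses rho_lambda, normalized means M_lambda = m_lambda/rho_lambda,
    total mean m = rho_0*M_0 + rho_1*M_1 and variance V = rho_0*V_0 + rho_1*V_1
    for positions X of shape (N, d) and labels lam in {0, 1}."""
    d = X.shape[1]
    out, V, m = {}, 0.0, np.zeros(d)
    for l in (0, 1):
        idx = (lam == l)
        rho = idx.mean()
        M = X[idx].mean(axis=0) if idx.any() else np.zeros(d)
        V_l = ((X[idx] - M) ** 2).sum(axis=1).mean() if idx.any() else 0.0
        out[f"rho_{l}"], out[f"M_{l}"] = rho, M
        V += rho * V_l
        m += rho * M
    out["m"], out["V"] = m, V
    return out
```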
Let the assumptions of Proposition <ref> hold, and in addition suppose that ν_F > max{π_LFρ_1/ρ_0 (1-b_ℰ̅ρ_1), π_FL/b_ℰ̅ρ_0}, with b_ℰ̅ = e^α(ℰ-ℰ̅). Then it holds |m_0(t)/ρ_0-m_1(t)/ρ_1|^2 → 0, V(t)→ 0, as t →∞. Let us first study the behavior of | M_0-M_1|^2(t). We have d/dt| M_0-M_1|^2(t) = 2 ( M_0-M_1) (t) d/dt( M_0-M_1) (t) ≤ -2ν_F | M_0-M_1|^2(t) +2 (M_0-M_1)(t) ( 𝒞_1 M_1(t) - 𝒞_0 M_0(t)) =-2ν_F | M_0-M_1|^2(t) -2 [ M_0(t); M_1(t) ]^T [ 𝒞_0 -𝒞_0; -𝒞_1 𝒞_1 ][ M_0(t); M_1(t) ], with 𝒞_0 = -π_FL-ν_F ρ_0 b_ℰ̅/ρ_1, 𝒞_1 = -π_LFρ_1+ν_Fρ_0(1-ρ_1 b_ℰ̅)/ρ_0ρ_1, and we used equations (<ref>)-(<ref>) and the estimate x̂(t) = ∫_ℝ^dx e^-αℰ(x)g(x,t)dx/∫_ℝ^d e^-αℰ(x)g(x,t)dx≥e^αℰ/e^αℰ̅∫_ℝ^d x  g(x,t) dx := b_ℰ̅  m(t). Note that, if condition (<ref>) holds then 𝒞_0, 𝒞_1>0 and so -2 [ M_0(t); M_1(t) ]^T [ 𝒞_0 -𝒞_0; -𝒞_1 𝒞_1 ][ M_0(t); M_1(t) ]≤ 0, since the above matrix is weakly diagonal dominant and hence positive semidefinite. Altogether, we obtain the estimate d/dt| M_0-M_1|^2(t) ≤ -2ν_F| M_0-M_1|^2(t). and an application of Grönwall lemma yields | M_0-M_1 |^2(t) ≤| M_0-M_1 |^2(0) e^-2ν_F t, which allow us to conclude | M_0-M_1 |^2(t) → 0 as t→∞. In particular, this implies | M_0-M_1 |^2(t) ≤| M_0-M_1 |^2(0), which helps us to show the second statment. Indeed, we rewrite the second inequality in (<ref>) in integral form as V(t) ≤ V(0) + 𝒞^0_v| M_0 -M_1|^2 (0) ∫_0^t ds- 𝒞_v ∫_0^tV(s)ds, with 𝒞^0_v = σ̃ρ_0 ρ_1 -π_LFρ_1 -π_FLρ_0 and 𝒞_v = 2ν_F-σ̃. Moreover, we note that t→ V(0)+𝒞^0_v | M_0 -M_1|^2 (0)t, is a non-decreasing function. Hence, again using Grönwall lemma, we get V(t) ≤[V(0)+𝒞^0_v | M_0 -M_1|^2 (0)t] e^-𝒞_vt, which implies V(t)→ 0 as t→∞. The fact that V(t) vanishes in the limit t→∞ allows us to conclude that the crowd concentrates. However, the position of the concentration point is unknown. This position is quantified in the following section. §.§ Convergence to the global minimum In this section, we determine the conditions under which the mean value of the population is a reasonable approximation of the global minimizer. Suppose the assumptions of Proposition <ref> hold. Further, we assume that ℰ∈ C^2(ℝ^d) and that there exist constants c_1,c_2 >0 such that sup_y∈ℝ^2|∇ℰ(y)|≤ c_1, sup_y∈ℝ^2|Δℰ(y)|≤ c_2, and that the initial condition is well-prepared in the sense that the minimizer of ℰ is in the support of the initial population and μ/M_α^2(0)≤3/4, is satisfied with M_α(t) = ∫_^d e^-αℰ(x) g(x) dx, and μ = 2 α e^-αℰ [ c_1 √(2)( ν_F + ν_F/ρ_1) + c_2 σ_F^2 k ] · ·[ max{1/𝒞_v V(0)+ γ_m ( 𝒞_v^0/𝒞_v^2 + ρ_0 ρ_1/2ν_F), 2/𝒞_v V(0) + 4 𝒞^*/𝒞_v + √(ρ_0ρ_1γ_m)/ν_F}], with 𝒞^0_v = σ̃ρ_0 ρ_1 -π_LFρ_1 -π_FLρ_0, 𝒞_v = 2ν_F-σ̃, γ_m = ( m_0(0)/ρ_0-m_1(0)/ρ_1) ^2, and 𝒞^* is the maximal value of t→ e^-𝒞_v/4t√(𝒞_v^0 γ_m t). Then there exists x̃∈^d such that m(t) →x̃ as t →∞ and ℰ(x̃) = ℰ + r(α) + log2/α, where r(α) = -1/αlog (M_α(0)) - ℰ→ 0, as α→∞. First, we show |d/dtm(t) |→ 0, as t→∞. To this end, we rewrite |d/dtm(t) | = |ν_F ∫_^d( e^-αℰ(x)/M_α(t) -1 ) x   g(x) dx |, where we use the first estimate in (<ref>) and the definition of x̂(t). 
Applying Jensen inequality and using the estimate x̂(t)≤ e^-α(ℰ-ℰ̅)∫_^d x g(x,t) dx := b_ℰ m(t), we get |d/dtm(t) | = ν_F/M_α(t)|∫_^2d x e^-αℰ(x) g(x) g(x_*) dx dx_* - ∫_^2d x_* e^-αℰ(x) g(x) g(x_*) dx dx_* | = ν_F/M_α(t)|∫_^2d (x-x_*) e^-αℰ(x) g(x) g(x_*) dx dx_* | ≤ν_F/M_α(t)∫_^2d| x-x_* | e^-αℰ(x) g(x) g(x_*) dx dx_* ≤ b_ℰν_F ( ∫_^2d| x-x_*|^2 g(x) g(x_*) dx dx_*) ^1/2 = b_ℰν_F √(2)( ρ_0 E_0(t)+ρ_1 E_1(t) - (ρ_0 M_0(t) + ρ_1 M_1(t))^2 )^1/2 = b_ℰν_F √(2)( V(t) + ρ_0 ρ_1 (M_0-M_1)^2(t) )^1/2→ 0, as t →∞, since both, V(t) and (M_0-M_1)^2(t), go to zero as t→∞. Thus, there exists x̃∈^d such that x̃ = m(0) + ∫_0^td/dsm(s)ds = lim_t→∞ m(t). Let us now focus on the term M_α(t) d/dt M^2_α (t) = 2 M_α(t) d/dt M_α(t) = 2 M_α(t) ∫_^d e^-αℰ(x)∂_t g(x,t) dx, with ∂_t g(x,t) = ∂_t f_0(x,t) + ∂_t f_1(x,t) = -ν_F ∇_x ·[ (M_1 - x) f_0(x,t) ] + σ_F^2/2Δ_x [D^2(x) f_0(x,t)] - ν_F/ρ_1∇_x ·[ (x̂(t)-x)f_1(x,t)], where we recall that we assume ν_L= ν_F and ∑_λ_∈{0,1}𝒯[f_λ](x,t) = 0. We consider the terms separately to obtain I_1 = -ν_F ∫_^d e^-αℰ(x)∇_x ·[(M_1-x) f_0 ] dx = = -ν_F α∫_^2d e^-αℰ(x)∇ℰ(x) (x_*-x) df_0 df_1≥ ≥ -ν_F α e^-αℰ c_1 M_α(t)/M_α(t)∫_^2d| x_*-x | dg dg_* ≥ ≥ -ν_F αe^-2αℰ/M_α(t) c_1 ( ∫_^2d| x_*-x |^2 dg dg_* )^1/2≥ ≥ -ν_F αe^-2αℰ/M_α(t) c_1 √(2)[V(t) + ρ_0 ρ_1 (M_0-M_1)^2(t)]^1/2, I_2 = σ_F^2/2∫_^d e^-αℰ(x)Δ_x [ D^2(x) f_0]dx = =- σ_F^2/2α∫_^d e^-αℰ(x)Δℰ(x) k |x̂(t) -x |^2 df_0 + +σ_F^2/2α^2∫_^2d e^-αℰ(x)∇_x ℰ(x) ⊗∇_x ℰ(x) k |x̂(t) -x |^2 df_0 ≥ ≥ -ασ_F^2/2 k c_2 e^-αℰ∫_^d|x̂(t)-x | ^2 dg ≥ ≥ -ασ_F^2/2 k c_2 e^-αℰ∫_^2d∫| x_*-x|^2 e^-αℰ(x_*)/M_α(t) dg dg_*≥ ≥ -ασ_F^2/2 k c_2 e^-2αℰ/M_α(t)∫_^2d| x_*-x |^2 dg dg_* ≥ -ασ_F^2/2 k c_2 e^-2αℰ/M_α(t)[ V(t) + ρ_0 ρ_1 (M_0-M_1)^2(t)], and I_3 = -ν_F/ρ_1∫_^d e^-αℰ(x)∇_x ·[(x̂(t)-x) f_1 ] dx ≥ ≥ -αν_F/ρ_1 c_1 e^-αℰ∫|x̂(t)-x | dg ≥ ≥ -αν_F/ρ_1 c_1 e^-2αℰ/M_α(t)( ∫_^2d| x_* - x | ^2 dg dg_*)^1/2≥ ≥ -αν_F/ρ_1 c_1 e^-2αℰ/M_α(t)[ V(t) + ρ_0 ρ_1 (M_0-M_1)^2(t)]^1/2, where we use assumption (<ref>), we integrate by parts, use Jensen inequality and the previous estimates. Altogether, we estimate (<ref>) as follows d M_α(t)/dt ≥ -2α e^-2αℰ[ c_1 √(2)ν_F (1+ 1/ρ_1) (V(t) + ρ_0 ρ_1 (M_0-M_1)^2(t) )^1/2 + . .c_2 σ^2_F k (V(t) + ρ_0 ρ_1 (M_0-M_1)^2(t) )]. Using the estimates for the mean and variance in (<ref>)-(<ref>) and integrating equation (<ref>) we get M^2_α(t)≥ M^2_α(0)-2α e^-αℰ[ c_1 √(2)ν_F (1+ 1/ρ_1) + c_2 σ_F^2k ]· ·max {∫_0^t [ V(0) + 𝒞_v^0γ_m s ] e^-𝒞_vs + ρ_0ρ_1 γ_m e^-2ν_Fsds,. .∫_0^t√([ V(0) + 𝒞_v^0γ_m s ] e^-𝒞_vs + ρ_0ρ_1 γ_m e^-2ν_Fs)ds}. We integrate the first integral in (<ref>) by parts to get ∫_0^t [ V(0) + 𝒞_v^0γ_m s ] e^-𝒞_vs + ρ_0ρ_1 γ_m e^-2ν_Fsds≤V(0)/𝒞_v + γ_m ( 𝒞_v^0/𝒞_v^2 + ρ_0ρ_1/2ν_F). Moreover, applying Hölder inequality to the second integral in (<ref>) yields ∫_0^t√([ V(0) + 𝒞_v^0γ_m s ] e^-𝒞_vs + ρ_0ρ_1 γ_m e^-2ν_Fs)ds≤ ≤2√(V(0))/𝒞_v + ‖√(𝒞_v^0γ_m s) e^-𝒞_v s/4‖_∞∫_0^t e^-𝒞_v s/4 ds + √(ρ_0 ρ_1 γ_m)/ν_F≤ ≤2√(V(0))/𝒞_v + 4𝒞^*/𝒞_v + √(ρ_0 ρ_1 γ_m)/ν_F, where 𝒞^* := max_s∈√(𝒞_v^0γ_m s) e^-𝒞_v s/4 and we use the fact that √(a+b)≤√(a) + √(b), for and a,b≥ 0. Altogether, using assumption (<ref>) we obtain M^2_α(t)≥ M^2_α(0)- μ≥1/4 M^2_α(0), with μ defined as in equation (<ref>). Thus M_α(t)≥1/2 M_α(0). In addition, since m(t) →x̃ and V(t)→ 0 as t→∞ it holds, M_α(t) = ∫_^de^-αℰ(x)g(x) dx → e^-αℰ(x̃), as t→∞ as a consequence of Chebishev inequality (see <cit.>). Thus 0≥ e^-αℰ(x̃)≥1/2M_α(0) ⟺ 0 ≥ -αℰ(x̃) ≥log( M_α(0)/2), that is 0≤ℰ(x̃) ≤ -1/αlog (M_α(0)) + log(2)/α. 
Finally, 0 ≤ℰ(x̃) ≤ℰ as α→∞, since the first term tends to ℰ thanks to Laplace principle and log(2)/α vanishes in the limit. We emphasize the following observations: * In order to satisfy condition (<ref>), V(0) and m(0) need to be small. * Note that if we assume to have anisotropic diffusion the convergence is guaranteed independently of the parameters choice and, in particular, of the dimension d. For this reason, all numerical examples of the next section consider the anisotropic noise. § NUMERICAL METHODS In order to approximate the time evolution of the density f_λ(x,t) we sample N_s particles (x_i^0,λ_i^0), i=1,…,N_s from the initial distribution. We consider a time interval [0, T] discretized in N_t intervals of length h. The interaction step is solved by means of binary interaction algorithms, see <cit.> for details. We denote the approximation of f_λ(x,nh) at time t^n by f_λ^n(x). For any λ∈{0,1} fixed, the next iterate is given by f_λ^n+1(x) =( 1-h/ε) f_λ^n(x) + h/ε∑_λ_*∈{0,1} Q_α^+(f_λ^n,f_λ_*^n )(x), where ε>0 is a frequency parameter and Q^+(f_λ^n,f_λ_*^n) is the gain part of the collision operator defined in (<ref>). Equation (<ref>) can be interpreted as follows: with probability 1-h /ε an individual in position x does not interact with other individuals and with probability h /ε it interacts with another randomly selected individual. In the following we will assume h = ε. In order to simulate changes of the label λ, we discretize equation (<ref>). For any fixed x∈^d, we obtain f_0^n+1(x) = (1-ε π_F→ L)  f_0^n(x) + ε π_L→ F f_1^n(x), f_1^n+1(x) = (1-ε π_L→ F)  f_1^n(x) + ε π_F→ L  f_0^n(x), where π_F→ L(·) and π_L→ F(·) are the transition rates as defined in (<ref>)-(<ref>). The details of the numerical scheme are summarized in Algorithm <ref>. Here, the parameters δ_stall and j_stall are used to check if consensus has been reached in the last j_stall iterations within a tolerance δ_stall. In more detail, we stop the iteration if the distance of the current and previous mean x̂ is smaller then the tolerance δ_stall for at least j_stall iterations. In this case, the evolution is stopped before the total number of iterations has been reached.  [GKBO] . Draw (x_i^0,λ_i^0)_i=1,…,N_s from the initial distribution f^0_λ(x) and set n=0, j=0. . Compute x̂^0 as in equation (<ref>). .n<N_tj<j_stall * i=1N * Select randomly a leader with position y^n_k, k≠ i. * Compute the new positions y^n+1_k = y^n_k + ν_L ε (x̂^n-y^n_k) x_i^n+1 = x_i^n +ν_F ε( y_k^n+1-x_i^n) + σ_F √(ε) D ξ( 1-λ_i^n) + εν_L (x̂^n-x_i^n)λ_i^n. * Compute the following probabilities rates p_L =ε π_F→ L(x_i^n+1,λ_i^n), p_F=ε π_L→ F(x_i^n+1,λ_i^n). * λ_i^n = 0, with probability p_L agents i becomes a leader: λ_i^n+1 = 1. * λ_i^n = 1, with probability p_F agents i becomes a follower: λ_i^n+1 = 0. * Compute x̂^n+1 as in equation (<ref>). * ‖x̂^n+1-x̂^n‖_∞≤δ_stall j← j+1 n← n+1 The above algorithm is inspired from Nanbu's method<cit.>, for larger class of direct simulation Monte-Carlo algorithm for interacting particle dynamics we refer to<cit.>. § VALIDATION TESTS In this section we test the performance of the GKBO algorithm in terms of success rate and number of needed iterations. We consider the translated Rastrigin function with global minimum in x̅ = 1 for the vast majority of the tests. In the last experiment we compare the results for different benchmark functions (see <cit.> for a complete list). 
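Before fixing the parameters of the individual tests, it may help to see Algorithm 1 condensed into code. The Python sketch below is a deliberately simplified version, not a reference implementation: it reuses the `weighted_mean` and `update_labels` helpers sketched earlier, uses anisotropic diffusion, applies the follower/leader masks exactly as in the binary interaction rule, and omits the stall test.

```python
import numpy as np

def gkbo(E, d, N=200, N_t=10000, eps=0.1, nu_F=1.0, nu_L=10.0, sigma_F=4.0,
         alpha=1e5, rho1=0.5, strategy="weighted", box=(-4.12, 0.0), seed=0):
    """Simplified sketch of the GKBO iteration (one randomly chosen leader per
    agent and per step, Nanbu-like, with time step h = eps)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(box[0], box[1], size=(N, d))
    lam = np.zeros(N, dtype=int)                     # all followers at t = 0
    x_hat = weighted_mean(X, E, alpha)
    for _ in range(N_t):
        lam = update_labels(lam, E(X), strategy=strategy, rho1=rho1,
                            eps=eps, rng=rng)
        leaders = np.flatnonzero(lam == 1)
        if leaders.size == 0:                        # no leader yet: skip the move
            continue
        k = rng.choice(leaders, size=N)              # a random leader per agent
        Y = X[k] + nu_L * eps * (x_hat - X[k])       # leader's updated position
        xi = rng.standard_normal((N, d))
        D = (x_hat - X) * xi                         # anisotropic diffusion
        foll = (1 - lam)[:, None]
        lead = lam[:, None]
        X = X + foll * (nu_F * eps * (Y - X) + sigma_F * np.sqrt(eps) * D) \
              + lead * (nu_L * eps * (x_hat - X))
        x_hat = weighted_mean(X, E, alpha)
    return x_hat
```

For instance, with a vectorized objective E mapping an (N, d) array to N values, `gkbo(E, d=20)` returns the current estimate x̂ of the minimizer after N_t iterations.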
If not explicitly specified, we run M=20 simulations and, according to <cit.>, we consider a simulation successful if ‖x̂(t) - x̅‖_∞≤ 0.25. We set α = 5· 10^6 and we adopt the numerical trick described in <cit.> to allow for arbitrary large values of α. We assume N=200 and that agents are initially uniformly distributed in the hypercube [-4.12,0]^d, which does not contain the global minimum. At time t=0 we suppose all agents are in the followers status and they change their label according to equation (<ref>). For the GKBO algorithm we set the total percentage of leaders is ρ_1^∞=0.5, if not specified explicitly. Hence, the transition rates are defined as in equation (<ref>), with π_LF = π_FL = 0.2, if the emergence of leaders is random or as defined in equation (<ref>) if the labels change according to the weighted criterion defined in Section <ref>. We will consider also a mixed strategy with p̅ =0.5, that is, among the total amount of generated leaders, 50% change their labels according to the weighted strategy and the remaining ones change their labels randomly. We let the dynamics in (<ref>) to evolve for N_t=10000 iterations with ε=0.1, where differently specified. We set j_stall = 1000, δ_stall= 10^-4. We assume ν_F=1, ν_L= 10 while the diffusion parameter and the dimension change in the different tests and will be specified later. §.§ Test 1: Comparison of different followers / leaders ratios Suppose σ_F=4, d=20. Table <ref> reports the mean of the number of iterations and success rate (in parenthesis) for the GKBO algorithm tested on the translated Rastrigin function as the leaders mass at the equilibrium ρ_1^∞, defined as in equation (<ref>), varies. The success rate and number of iterations for the KBO algorithm are 1 and 10000 respectively. GKBO outperforms KBO in terms of the number of interations. However, the success rate of GKBO with random leader emergence deterioates for ρ_1^∞ = 0.75. §.§ Test 2: GKBO for different choices of x̂. We compare the results of the algorithm with x̂ as in (<ref>) and slight modifications given by x̂_F = ∫_ℝ^dx e^-αℰ(x)f_0(x,t)dx/∫_ℝ^d e^-αℰ(x)f_0(x,t)dx, or x̂_L = ∫_ℝ^dx e^-αℰ(x)f_1(x,t)dx/∫_ℝ^d e^-αℰ(x)f_1(x,t)dx, which corresponds to the cases where the weighted mean depends only on the followers or only on the leaders, respectively. In Figure <ref> the success rate and number of iterations as σ_F and d varies for x̂ (left), x̂_f (middle) and x̂_L(right). In the first row, results for the case with random leaders generation are shown, in the second row the mixed leaders generation with p̅=0.5 and in the third row the case with weighted leaders generation. Note that the performance of the random strategy, especially for large values of the dimension d is higher if x̂_F(t) is used for the estimate of the global minimizer. This can be explained by a better exploration phase of the particles during the evolution, whereas the leaders position x̂ _L may result in a less accurate estimate, since labels change randomly. The weighted strategy with x̂_L(t) has computational advantages since leaders are chosen to be the agents with optimal position and the computation of the x̂_L(t) requires a lower number of evaluations of the cost function. This may be advantageous in particular if the evaluation of the cost function is numerically expensive. 
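The benchmark and the success criterion used throughout these tests are straightforward to reproduce. The snippet below sketches the translated Rastrigin objective (global minimum at x̄ = 1) and a success-rate estimator over M independent runs; the helper names and the way individual runs are seeded are our own choices.

```python
import numpy as np

def rastrigin_translated(X, shift=1.0):
    """Translated Rastrigin objective, vectorized over the rows of X;
    the global minimum value 0 is attained at x = shift."""
    Z = X - shift
    return 10.0 * Z.shape[1] + np.sum(Z ** 2 - 10.0 * np.cos(2.0 * np.pi * Z), axis=1)

def success_rate(run, x_true, M=20, tol=0.25):
    """Fraction of M runs whose estimate satisfies ||x_hat - x_true||_inf <= tol."""
    hits = sum(np.linalg.norm(run(seed=m) - x_true, ord=np.inf) <= tol
               for m in range(M))
    return hits / M

# e.g.: success_rate(lambda seed: gkbo(rastrigin_translated, d=20, seed=seed),
#                    x_true=np.ones(20))
```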
§.§ Test 3: Comparison in d=20 dimensions for varying σ_F We fix d=20 and let σ_F vary from σ_F=0.1 to σ_F = 10 to compare the performance of GKBO (equation (<ref>)), standard GA (equation (<ref>)-(<ref>)), the modified GA (equation (<ref>)-(<ref>)) and the KBO (equation (<ref>)). In Figure <ref> the success rates and means of the number of iterations obtained with the different algorithms in the case of the translated Rastrigin function is shown. Here, test GKBO with x̂, x̂_F and x̂_L as defined above and study random leader emergence (left), mixed leader emergence with p̅=0.5 (middle), and weighted leader emergence (right). Altough the success rates of KBO and the variants of GKBO behave similar, the GKBO versions required less iterations. Moreover, we remark that the behavior of the GKBO with weighted leaders generation and with x̂ as in equation (<ref>) and of the KBO is similar, as expected from our analysis. §.§ Test 4: Comparison of different leader emergence strategies. Let us fix d = 20 and consider the mixed leader emergence strategies as discussed in Remark <ref>. In Figure <ref> on the left we see the success rates for different values of σ_F and p̅, on the right the number of iterations for different values of p̅ and for σ_F= 4,5. In Figure <ref> the success rate and minimum, maximum and mean iterations number for the GKBO method with x̂ is shown for d=20 as p̅ and σ_F vary. §.§ Test 5: Comparison of different methods for varying d We fix σ_F=4 and vary the dimension d from 1 to 20. Figure <ref> shows the success rates and means of the number of iterations of the different methods in the case of the translated Rastrigin function. GKBO uses x̂ as in (<ref>). §.§ Test 6: Comparison of the accuracy for varying frequency ε Here we study the influence of the frequency parameter ε by comparing the accuracy of the KBO and GKBO with weighted and random leader emergence. We run the test for M=100 simulations assuming the initial data to be normally distributed in the hypercube [-4.12,0]^d, d = 20. The accuracy is computed as ‖x̂(t)-x̅‖_∞, where x̅ is the actual value of the minimum. In Figure <ref> the accuracy of the KBO algorithm (left) for GKBO algorithm with random leader emergence (middle) and weighted leader emergence (right) with ε=0.01 (first row) and ε=0.1 (second row). Note that in both cases, the values of σ_F for which the method converges with the weighted GKBO and the KBO algorithm is almost the same. If ε=0.01 the accuracy of the weighted GKBO is higher than the one of the KBO. If ε=0.1 the random strategy performs better than the other methods since the algorithm converges for almost all the values of σ_F considered. Furthermore, if we look at the case σ_F=4, all the methods converge but the random strategy reaches higher levels of accuracy. In Figure <ref> the results in terms of success rate and number of iterations needed for different values of ε are shown. If ε = 0.01 the success rate are of the GKBO methods is smaller than the one of the KBO but the number of needed iterations is reduced. If ε =0.1, the success rate area is enlarged for the strategy with random leader emergence. With this test we confirm the results obtained in Figure <ref>. Moreover, the number of iterations is reduced with respect to the KBO and the GKBO method with weighted leader emergence. §.§ Test 7: Comparison of different benchmark functions In the previous subsection we tested the different algorithms and different parameter sets with the translated Rastrigin function. 
Now, we choose σ_F such that both the KBO and the GKBO algorithms have success rate equal to one in the previous studies and test different benchmark functions in 20 dimensions. In Figure <ref> the comparison of KBO and GKBO in terms of success rate and mean number of iterations are shown. GKBO with both variants of leader emergence outperforms KBO in terms of the number of iterations. § CONCLUSION We propose a variant of the KBO method for global optimization which is enhanced by a transition process, inspired by genetic dynamics. These lead to a population divided into two species which we call followers and leaders. We adapt the convergence analysis to the new method and show in particular that the population concentrates in the long-time limit arbitrarily close to the global minimizer of the cost function. Numerical results show the feasibility of the approach and the improvement of the proposed generalization in terms of numerical effort. § ACKNOWLEDGMENTS GA and FF were partially supported by the MIUR-PRIN Project 2022, No. 2022N9BM3N “ Efficient numerical schemes and optimal control methods for time-dependent PDEs”. spmpsci
http://arxiv.org/abs/2306.04164v1
20230607053421
A Survey on Multi-AP Coordination Approaches over Emerging WLANs: Future Directions and Open Challenges
[ "Shikhar Verma", "Tiago Koketsu Rodrigues", "Yuichi Kawamoto", "Nei Kato" ]
cs.NI
[ "cs.NI" ]
A Survey on Multi-AP Coordination Approaches over Emerging WLANs: Future Directions and Open Challenges Shikhar Verma, Member, IEEE, Tiago Koketsu Rodrigues,  Member, IEEE, Yuichi Kawamoto, Member, IEEE, and Nei Kato, Fellow, IEEE S. Verma, , T.K. Rodrigues, Y. Kawamoto, and N. Kato are with the Graduate School of Information Sciences, Tohoku University, Sendai, Japan. Emails: {shikhar.verma, tiago.gama.rodrigues, youpsan, and kato}@it.is.tohoku.ac.jp ============================================================================================================================================================================================================================================================================================================================================================================ The 802.11 IEEE standard aims to update current Wireless Local Area Network (WLAN) standards to meet the high demands of future applications, such as 8K videos, augmented/virtual reality (AR/VR), the Internet of Things, telesurgery, and more. Two of the latest developments in WLAN technologies are IEEE 802.11be and 802.11ay, also known as Wi-Fi 7 and WiGig, respectively. These standards aim to provide Extremely High Throughput (EHT) and lower latencies. IEEE 802.11be includes new features such as 320 MHz bandwidth, multi-link operation, Multi-user Multi-Input Multi-Output (MIMO), orthogonal frequency-division multiple access, and Multiple-Access Point (multi-AP) cooperation (MAP-Co) to achieve EHT. With the increase in the number of overlapping Access Points (APs) and inter-AP interference, researchers have focused on studying MAP-Co approaches for coordinated transmission in IEEE 802.11be, making MAP-Co a key feature of future WLANs. Additionally, the high overlapping AP densities in EHF bands, due to their smaller coverage, must be addressed in future standards beyond IEEE 802.11ay, specifically with respect to the challenges of implementing MAP-Co over 60GHz bands. In this article, we provide a comprehensive review of the state-of-the-art in MAP-Co features and their drawbacks concerning emerging WLAN. Finally, we discuss several novel future directions and open challenges for MAP-Co. Multi-AP Coordination, Wireless local area network, IEEE 802.11ay, IEEE 802.11be, millimeter Wave. § INTRODUCTION Over the last two decades, wireless local area networks (WLAN) have treaded the path of continuous evolution to provide high data rates at lower costs <cit.>. In this regard, Wi-Fi Alliance with IEEE has proposed IEEE 802.11ax (Wi-Fi 6), and IEEE 802.11ad to provide extremely high throughput (EHT) <cit.>. Especially, post-pandemic, business models are dramatically switching towards digitalization of everything, drastically increasing the number of connected devices and traffic over WLAN. Moreover, users will soon experience a quantum leap in new applications such as 8K videos, virtual/augmented reality (VR/AR), and large-scale connected sensors, leading to new advanced network requirements <cit.>. For instance, VR/AR will require EHT (20 Gbps) with low latency (lower than 5ms) and stringent reliabilities. Meeting such extremely high requirements is beyond the capacity of 802.11ax and 802.11ad. In this regard, IEEE has amended 802.11ax and 802.11ad to propose new standards 802.11be (namely Wi-Fi 7) and 802.11ay (namely WiGig), respectively <cit.>. In this article, we use 802.11be and Wi-Fi 7 interchangeably as we use 802.11ay and WiGig interchangeably. 
In 2018, IEEE 802.11 approved the creation of a new task group for next-generation IEEE 802.11be (TGbe) to define EHT physical (PHY) and medium access control (MAC) protocols for enabling at least 30gigabits per second (Gbps) throughput while ensuring backward compatibility <cit.>. The task group also focuses on reducing worst-case latency and jitters to support time-sensitive applications such as AR/VR. In PHY layer amendments, 802.11be added new bandwidths such as continuous 240 MHz, continuous 320 MHz, and noncontinuous 160+160 MHz, multi-user (MU) resource unit (RU) assignment, 4096-quadrature amplitude modulation, permeable formats, and puncturing techniques <cit.>. In the MAC layer, multi-link operation, 16 spatial streams MU MIMO, Hybrid automatic repeat request, and MAP-Co are the main discussed features. MAP-Co is well a studied method for 802.11be protocols <cit.>. However, the group decided not to include MAP-Co in the release of 802.11be in 2024 <cit.>. The beyond-IEEE 802.11be/Wi-Fi 7 shall include MAP-Co features. Similarly, from 2015 to 2021, IEEE 802.11 worked on amendments of 802.11ad to provide at least 20Gbps throughput and latency lower than 5ms over 60GHz while maintaining power efficiency <cit.>. 802.11ay proposed channel bonding and aggregation of 2.16 GHz channels, MU-MIMO beamforming, and MIMO multi-channel access feature to provide EHT <cit.>. However, IEEE 802.11ay-based WLAN has small coverage and suffers from high attenuation losses such as reflection, diffraction, and scattering with inter-cell interference (ICI) issues <cit.>. Therefore, multi-AP coordination (MAP-Co) features can improve resource utilization, fast AP handovers, and to avoid ICI by coordinated MIMO and beamforming, coordinated OFDM, and joint transmission <cit.>. Despite its importance, 802.11ay have not included such essential features for future WLAN, as mentioned in Fig. <ref>. It is expected that beyond IEEE 802.11ay/WiGig should introduce MAP-Co features <cit.>. As there are many existing surveys on 802.11be and 802.11ay <cit.>, yet most of them have reviewed overall PHY and MAC layer features of a single AP environment. Yet, there are no comprehensive surveys on MAP-Co for future WLAN and discussion on open issues for MAP-Co over next-generation WLAN. Fig. <ref> depicts the research gaps in the study of emerging WLANs. Hence, our focus is to firstly review the state-of-the-art on MAP-Co approaches that will be helpful for standards during the development of MAP-Co for future WLAN. In this article, we also present the future direction of MAP-Co in the emerging WLANs and discuss open issues to enable future MAP-Co. The contributions of our work in this paper are as follows: * Through an extensive study of the existing surveys on WLANs, we are the first to point out the need for a comprehensive review of existing WLAN features concerning MAP-Co over future WLANs. * This is also the first paper to provide a taxonomy and a comprehensive survey of MAP-Co architecture and existing MAP-Co features. In addition, we explain the shortcomings of each architecture and feature as a lesson learned. * As the final contribution, we identify several future directions and several challenges for implementing future MAP-Co over the emerging WLAN. The rest of this paper is organized as follows and also depicted in Fig. <ref>: Sec. <ref> provides the relevant surveys on emerging WLANs. Sec. 
<ref> presents a comprehensive survey on MAP-Co architectures, and taxonomy with explanations on each feature and lessons learned. In Sec. <ref>, we discuss several MAP-Co features and summarize each feature with a lesson learned at the end of the discussion. Sec. <ref> discusses several future directions and open research issues from the perspective of implementing of MAP-Co over future WLAN. Finally, Sec. <ref> concludes the paper. The used acronyms in this paper are summarized alphabetically in Table. <ref> in the Appendix. § RELATED SURVEYS This section provides related surveys on developed and emerging IEEE 802.11 WLAN standards. The subsequent subsection presents related surveys of 802.11n/ac/ax/be that operates over sub-6GHz bands. The next subsection explains related surveys on mmWave WLAN standards that are 802.11ad/ay. At the end of both subsections, we present the gaps in existing literature and provide direction for further sections of this article. §.§ Related Surveys of IEEE 802.11n/ac/ax/be IEEE Standards and Wi-Fi Alliance are working on development of several WLAN technologies. The most used technology is the IEEE 802.11 family. IEEE 802.11n (also labeled as Wi-Fi 4) was one of the major leaps in WLAN technologies, which operated both on 2.4 GHz and 5 GHz. Several other WLAN standards followed 802.11n, operating on the same spectrum with new features. For instance, 802.11ac (Wi-Fi 5), and 802.11ax (Wi-Fi 6) are successors of 802.11n and are commercially available. There are already several surveys on 802.11n, 802.11ac, and 802.11ax. IEEE is also amending Wi-Fi 6 for the design of IEEE 802.11be (aka Wi-Fi 7). Hence, we present the related surveys on IEEE 802.11n/ac/ax/be in this section. Thomas et al. <cit.> have presented a detailed survey on features of 802.11n, insights, and challenges of the PHY layer such as channel estimation, space-time block coding, and so on. Similarly, the authors in <cit.> have provided a comprehensive review of MAC layer enhancements for 802.11n. Further, Karamakar et al. <cit.> have presented a survey on enhanced PHY/MAC features of 802.11n/ac and studied the impact of new PHY features of 802.11n/ac on the upper layers of the network. There are also several surveys on PHY and MAC layer amendments of 802.11ac to design 802.11ax (Wi-Fi 6)  <cit.>. For instance, the authors in <cit.> have reviewed 802.11ax standardization activities of MAC protocols to better support QoS and explained the challenges of collaboration between cellular and 802.11ax. Bo et al. <cit.> and Saloua et al. <cit.> surveyed different OFDMA MAC protocols for allowing multi-user access and propose a new taxonomy to classify those methods. Evgeny et al. <cit.> have presented a comprehensive tutorial on IEEE 802.11ax features such as OFDMA-based random access, spatial frequency reuse, MU-MIMO, power saving, and so on. Most of these surveys focused on features under a single-AP transmission environment. Now, we discuss related surveys on the latest 802.11be protocol. The first part of Table. <ref> presents the related surveys of IEEE 802.11be/Wi-Fi 7. The authors in <cit.> have provided a detailed survey on the application of machine learning for PHY and MAC features improvements such as channel access, link configuration, signal quality estimation, and so on. They also identified available tools and datasets while concluding with open research challenges of ML in Wi-Fi. However, the article lacks a thorough survey on new features of Wi-Fi 7. 
<cit.> is the first paper that summarized the objective, timelines, and listed candidate features of 802.11be. <cit.> is another tutorial article on the newly added PHY and MAC layer features of 802.11be. In this article, the authors listed multi-link, channel sounding, PHY and MAC format, enhanced OFDMA, and Multi-AP cooperation. Regardless, the article failed to provide insights into the study of each feature and new open issues. In this regard, Cailian Deng et al. <cit.> are the first to provide a more detailed review of MAC and PHY layer techniques mentioned by the task group. The discussed features are channelization and tone plan, multiple resource units (multi-RU) support, 44096-QAM, preamble designs, multiple link operations, MIMO enhancement, multi-AP coordination, enhanced link adaptation, and re-transmission protocols (e.g., hybrid automatic repeat request). The authors also provide a few insights into each feature with future directions and opportunities considering all features of 802.11be. However, such reviews considered all features which restricted them to provide a detailed review of all methods in each feature. After such wide reviews of all features, the authors are focusing on a thorough review of each feature. For instance, the authors of <cit.> gave a brief overview of features such as multi-band and multi-coordination concerning guaranteeing reliability, latency, and jitter. This paper focused on discussing a certain number of features with a particular objective. Hence, this article cannot be considered a wide review of a particular feature of 802.11be. <cit.> is one of the first articles to examine a specific feature of 802.11be. The authors of <cit.> briefly explained different multi-link operation modes and provided the performance evaluation. Likewise, there is a detailed survey on random access-based MAC protocols for MU-MIMO over 802.11ax by Liao et al. <cit.>. They identified key requirements and research challenges of designing MU-MIMO. Such a detailed survey of a single feature will provide insightful contributions during the standardization of MU-MIMO. MAP-Co is a new concept that was never discussed in previous standards like MIMO, multi-RU, etc. Moreover, no detailed surveys are available on MAP-Co. Thus, the TGbe task group has decided not to include MAP-Co in the upcoming 802.11be standard which would be included MAP-Co as a part of the beyond 802.11be protocols with enough research. Hence, in this article, we provide an extensive review of every MAP-Co feature. Our comprehensive survey can provide a guideline for the future development of beyond 802.11be protocol. §.§ Related Surveys of IEEE 802.11ad/ay In 2012, IEEE established another task group to focus on designing 802.11 WLAN standards for accessing unlicensed mmWave such as 60GHz spectrum to guarantee Gbps throughput. In this vein, IEEE introduces the 802.11ad standard intending to design the operation of Wi-Fi that can address the several challenges of accessing mmWave bands. However, there are a few good surveys that provide an extensive survey of IEEE 802.11ad features. For instance, <cit.> is the first paper that provided a succinct study of the PHY and MAC features of 802.11ad along with 802.11ac. They explained PHY and MAC layer format, beamforming training, different aggregation approaches, channel access mechanisms, and QoS mechanisms. However, the article did not provide any future direction and open challenges for future 802.11ad protocols. 
The survey in <cit.> provided a detailed tutorial on 802.11ad wherein the authors enumerated some design challenges for 802.11ad such as directional transmission using beamforming, new PHY and MAC layer format, and channel access methods. Verma et al. <cit.> provided an overview of 802.11ac/ad features and hardware challenges such as hardware complexity, and semiconductor cost. Similarly, <cit.> has surveyed use cases, PHY, MAC, and Network layers of 802.11ad protocol. In PHY layer, the authors have focused on MIMO, channel model, precoding and so on. In the MAC layer, the paper reviewed MAC layer protocol in different types of networks such as ad-hoc networks, mesh networks, etc. However, these articles failed to provide future issues and directions to address the challenges of designed 802.11ad protocols. 802.11 standards have proposed amendments on 802.11ac to enhance the MAC and PHY layer protocols for supporting next-generation applications such as AR/VR, data center connectivity, vehicle-to-vehicle connection, etc. The new mmWave WLAN standard is named 802.11ay. In this vein, <cit.> is the first paper to list newly incorporated features on 802.11ad to design 802.11ay. This review article focused on beamforming enhancements to support new applications, and different resource scheduling approaches. However, the authors did not provide any open issues that are not mentioned by the standard. Finally, Zhou et al. <cit.> conducted a broad review on the MAC-related issues of 802.11ay, cross-layer issues between MAC and PHY while reviewing the challenges of 802.11ad MAC protocols that lead to the design of 802.11ay MAC. The authors explained channel bonding/aggregation, channel access and allocation using MIMO over multiple channels, spatial sharing, interference mitigation approach, beamforming training and tracking, and single and MU-MIMO. Additionally, they pointed out open research issues and future work. They have mentioned that inter-AP coordination/MAP-Co will be one of the significant future directions for IEEE 802.11ay owing to several advantages and the ability to solve the existing issues. The MAP-Co can reduce the handover delay, improve failed link recovery, much higher throughput using joint transmission, ICI avoidance, and so on. However, the authors did not explain any technical details and challenges of implementing MAP-Co in 802.11ay. To the best knowledge of authors and summary in the third column of Table. <ref>, there is no such review article that extensive study the future challenges and direction of implementing MAP-Co for 802.11ay. Hence, after reviewing each method of MAP-Co proposed in 802.11be, we will present novel future directions and open challenges of implementing MAP-Co for beyond 802.11ay protocols. To view the summary of the existing surveys on emerging 802.11 standards, Table <ref> may be referred to. Thus, on the one hand, it is evident that there is no extensive survey on MAP-Co between emerging WLANs. Hence, the next section reviews the existing architecture of MAP-Co. In Section <ref>, we provide an extensive survey on MAP-Co features proposed by standards and researchers for existing WLAN. We also present lesson learned following discussions of every MAP-Co feature. Further, in the next section, we present future directions and challenges of MAP-Co over emerging WLANs. 
§ MAP-CO NETWORK ARCHITECTURE MAP-Co is one of the key features discussed by 802.11be to enable coordination between neighboring APs for exchanging channel state information (CSI), resource allocation, and so on <cit.>. Therefore, in this section, we first review the MAP-Co system and architecture that can establish the base for extensively studying the different features of MAP-Co in the next section. Additionally, we articulate the lessons learned at the end of the review on MAP-Co architectures, which can help readers to choose suitable methods according to their research requirements. A typical multi-AP network scenario consists of several neighboring overlapping APs and distributed STAs under each AP. For instance, a large factory or shopping mall has several APs to manage connections for static and mobile users. The coordination between multi-APs can improve reliability, reduce latency, increase manageability, increase throughput at different signal-to-noise-ratio (SNR), and reduce power consumption <cit.>. For instance, multi-AP coordinated transmission and reception increase throughput, especially in the case of cell edge users having high path loss and interference from neighboring APs. However, the traditional neighboring APs can be developed by the same or different original equipment manufacturers . However, the diverse APs can have interoperability issues and previous standards had no general protocol for leveraging coordination between APs. Therefore, 802.11be introduces new components, methods, MAC, and PHY layer protocols to enable MAP-Co. In MAP-Co, there is a master controller as a multi-AP coordinator and neighboring APs that can called as slave APs <cit.>. Slave APs are connected with a master controller through Ethernet or to an master AP within its range. STAs are connected to slave APs and the master controller/master AP cannot hear STAs directly unless STAs are directly connected to it. However, a master controller/master AP can have information of STAs through their connected slave APs. Hence, a master controller/master AP, slave APs, and their associated STAs are called multi-AP candidate sets, and their information is maintained by the master controller/master AP. The master controller/master AP can also be singly called as coordinator. Thus, there can be two ways of interconnecting the coordinator, slave APs and STAs. These two approaches define the architecture of MAP-Co and are named as master controller-based (MC-based) and master AP (MA)-based MAP-Co<cit.>. MC-based is a centralized whereas MA-based is a semi-distributed architecture. Both architectures are depicted in the Fig. <ref>. In MC-based, the master controller can be a local server, where software-defined networking functions are defined and slave APs are connected to the master AP through high capacity, low latency fiber or wireless backhaul links, as shown in Fig. <ref>. The master controller has a full view of all its connected slave APs and their associated STAs. The backhaul links are used to shared information by slave APs such as resources utilized, managing carrier sense multiple access/collision avoidance (CSMA/CA) states, CSI and so on. The master controller sends control information or request for CSI updates to slave APs. Slave APs can report CSI and schedule transmission based on received control signals from MC. In the MA-based approach, an AP is chosen as a coordinator which can be connected to multiple slave APs and called as a master AP. 
The architecture of MA-based MAP-Co is shown in Fig. <ref>. As depicted in Fig. <ref>, there are three connected basic service sets (BSS), where AP1, AP2 and AP3 are access points in each BSS. STA1 and STA2 are associated with AP1, STA2 can be also associated with AP2, and STA3 is associated with AP3. AP3 is the MA whereas AP2 and AP3 are slave APs. STA 2 is at the edge of a cell and under the range of AP1 and AP2. STA2 can get the benefits of multi-AP transmission whereas other STAs will have single AP transmission, as presented in Fig. <ref>. Hence, master AP have tasks for data transmission to their own STAs and also to slave APs. Hence, the MA should perform extra MAC and PHY layer functions of coordination during each data transmission. Slave APs can also work as traditional single AP transmission. Both of the architecture has advantages and drawbacks. Hence, in the next subsection, we summarize the architectures with advantages and drawbacks as lesson learned from study of each architecture. §.§ Summary and Lesson learned The architecture discussion highlights that MC-based approach is centralized, while MA-based approach is semi-distributed. MC offers greater computational power and higher bandwidth compared to MA, as it operates on local servers and wired backbone networks, whereas MA uses an AP as its controller and wireless connection. However, implementing MC requires additional infrastructure, making it expensive, while MA-based approach utilizes slave APs for coordination, making it cost-effective. MC-based architecture provides better control of resources and synchronization between multiple slave APs. In contrast, MA-based systems can be more complex to manage and synchronize due to the distributed nature of resources. Additionally, MA-based architecture is more scalable than MC systems, as the load can be distributed among multiple slave APs rather than being congested at a centralized server. However, MC-based systems offer better security as all slave APs can be controlled more easily, while MA-based systems may have less security due to their distributed nature. In conclusion, choosing the architecture for MAP-Co should be based on the specific requirements of the system, such as scalability, cost, and complexity. By utilizing both architectures, network performance can be improved at various stages. In the next section, we will explore different MAP-Co features that can enhance data communication. § MAP-CO FEATURES MAP-Co can improve the performance of overall APs in an area through collaboration between adjacent APs at different stages of data transmission such as channel sounding and data transmission<cit.>. To optimize multi-AP transmission, it is necessary to conduct multi-AP channel sounding before scheduling and designing policies for transmission. Hence, MAP-Co channel sounding is considered as a feature of MAP-Co in which multiple STAs from multiple BSSs send CSI feedback to their associated slave APs and slave APs report the feedback to the master controller <cit.>. There can be two types of channel sounding that are explicit and implicit channel sounding <cit.>. Similarly, channel access, resources management, concurrent transmission, beamforming, etc. can be coordinated. For instance, OFDMA channel access at different APs can be coordinated to avoid collision and interference <cit.>. Hence, Coordinated OFDMA (C-OFDMA) is another feature that is explored. 
Coordinated spatial reuse (CSR) is another MAP-Co feature studied in 802.11be to provide interference-free parallel transmission in overlapping BSSs (OBSSs) <cit.>. Coordinated beamforming (CBF) is also proposed as a MAP-Co feature to nullify, through cooperation, the interference caused at neighboring APs and their STAs during a transmission by an AP. MAP-Co can also allow multiple APs to transmit data to an STA jointly <cit.>. Hence, joint transmission (JTX) is included as another feature of MAP-Co <cit.>. In this section, we present a comprehensive state-of-the-art of all MAP-Co features with a new taxonomy, along with their advantages and drawbacks. We also provide a summary and lessons learned after the discussion of each MAP-Co feature. §.§ Multi-AP Channel Sounding Channel sounding is a well-known process of surveying radio frequency channel characteristics and performing beam training over WLAN <cit.>. The channel quality can be significantly affected by obstacles, interference, propagation paths, diffraction loss, reflection loss, and so on. Channel sounding helps to adapt the data transmission to the current propagation environment. For instance, a signal level falling below a threshold can indicate the need to increase the transmission power or to change the modulation and coding values. Hence, channel sounding is a key process for efficient data transmission through accurate CSI estimation. CSI feedback acquisition is the core of the channel sounding process. The collected feedback can also be used to determine the weight and steering matrices for beamforming optimization. In a single-AP WLAN, the sounding procedure is carried out within a BSS <cit.>, that is, between an AP and its STAs. The first step is for the AP to send a request for CSI feedback to the STAs, which respond with a null signal to the AP. The AP then evaluates the channel quality based on the received signal. An alternative approach is for the STAs to estimate the channel states based on the received request signal and feed back the CSI to the AP as a response. Hence, there are two existing CSI gathering schemes in the literature: explicit and implicit channel sounding. The channel sounding procedure is also known as CSI feedback acquisition in other WLANs such as 802.11ay. We refer to CSI estimation, channel quality estimation, and related procedures as channel sounding in the rest of the article. However, multi-AP transmission approaches such as CSR, CBF, and joint transmission require CSI between STAs and different APs beforehand. The process of acquiring CSI between different STAs and multiple APs is called MAP-Co channel sounding. The existing channel sounding process developed for a single-AP environment is not suitable for MAP-Co because it only involves signaling between an AP and its associated STAs. Moreover, the architecture of MAP-Co is different, consisting of a master AP/controller and slave APs, whereas there is no coordination in a single-AP deployment. Therefore, 802.11be WLAN discusses new processes for MAP-Co channel sounding. Similar to single-AP channel sounding, MAP-Co channel sounding can be of two types: explicit MAP-Co channel sounding and implicit MAP-Co channel sounding. The next subsections present a comprehensive survey of these two types of channel sounding for MAP-Co, showing how the single-AP sounding method is extended to enable MAP-Co channel sounding. We also summarize and provide lessons learned for each discussed multi-AP channel-sounding approach.
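As a toy illustration of what a sounding exchange ultimately produces, the snippet below performs per-subcarrier least-squares channel estimation from known pilot symbols; the pilot values, channel realization, and noise level are synthetic and do not correspond to any actual 802.11 training sequence.

import numpy as np

rng = np.random.default_rng(0)
num_subcarriers = 8

# Known sounding pilots (e.g., carried in an NDP-like frame), one per subcarrier.
pilots = np.exp(1j * 2 * np.pi * rng.random(num_subcarriers))   # unit-modulus pilots

# Unknown frequency-selective channel and additive noise (both simulated).
h_true = (rng.normal(size=num_subcarriers) + 1j * rng.normal(size=num_subcarriers)) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=num_subcarriers) + 1j * rng.normal(size=num_subcarriers))

# Received sounding signal: y = h * x + n on each subcarrier.
y = h_true * pilots + noise

# Least-squares CSI estimate: divide out the known pilot per subcarrier.
h_est = y / pilots

print("per-subcarrier estimation error:", np.round(np.abs(h_est - h_true), 3))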
§.§.§ Explicit channel sounding Explicit channel sounding requires the receiver to estimate CSI and periodically feedback to a transmitter, as mentioned before when STAs estimate CSI to report back to AP for downlink (DL) transmission <cit.>. In the 802.11be single-AP system, the process of explicit channel sounding begins when the AP sends a null data packet announcement (NDPA) to one or more STAs. This is followed by a short interframe space (SIFS) and then the transmission of a null data packet (NDP) frame, which only contains the physical header. The STAs that received the NDPA frame wait for the NDP frame, and the first STA mentioned in the NDPA frame sends compressed beamforming (CB) frame after another SIFS. The CB frame includes a feedback matrix that informs the AP about beam steering for future data transmission. The other STAs wait for a beamforming report poll (BRP) frame from the AP to send their CB feedback. This process is repeated for each STA mentioned in the NDPA frame. Once the AP collects CB from all the STAs, it can transmit data to multiple users or a single user using the received CB. In the case of UL MU-MIMO or UL OFDMA, the CB from multiple users can be transmitted together, reducing the CSI feedback overhead in a single-AP system. However, in the case of MAP-Co, all participating APs must be aware of the CSI at each STA in their range to avoid issues like ICI and resource utilization <cit.>. Therefore, the 802.11be introduces a multi-AP explicit channel sounding MAC process. There are two architectures in MAP-Co transmission: MC-based and MA-based. To classify the process of explicit MAP-Co channel sounding, we differentiate between these two architectures. The channel sounding process differs slightly in each MAP-Co architecture. In the MC-based architecture, the master controller (MC) sends a trigger frame or NDPA frame (called MC-NDPA in this article) to each AP in its control (Slave APs) to initiate sounding, as shown in Fig. <ref>. The MC-based architecture involves slave APs transmitting NDP frames either sequentially or jointly after receiving MC-NDPA from the MC. This results in two types of multi-AP explicit sounding: sequential and joint MC-based. In sequential multi-AP explicit channel sounding, a slave AP broadcasts an NDPA frame to STAs, followed by an NDP frame to all STAs after SIFS. Once the transmission of NDP frames is complete, MC sends the MC poll frame to the next slave AP in the sequence mentioned in the MC-NDPA transmit. After receiving the MAP poll frame, the slave AP broadcasts NDPA frames to STAs and transmits NDP frames after SIFS. This process is repeated by other slave APs in sequential multi-AP explicit channel sounding. The process of joint MC-based explicit channel sounding involves slave APs broadcasting NDPA and NDP frames together. This phase, in both types of sequential and joint transmission, is called the multi-AP NDP sounding phase, and it is depicted in Fig. <ref>. Following the initial phase, the multi-AP CSI feedback phase takes place. In MC-based systems, MC sends out the BRP trigger frame to slave APs after the NDP sounding phase. With sequential multi-AP explicit channel sounding, the first slave AP on the list of MC-NDPA frames forwards a BRP frame to its STAs. The STAs respond to the BRP frames with CB frames, each with information about the channel quality and beamforming parameters, sequentially with a SIFS gap. 
Once the CB frames from the first slave AP are complete, the next slave AP performs the same process in sequence, followed by the other slave APs in a similar fashion. In joint multi-AP explicit channel sounding, the slave APs send the BRP frame to all STAs simultaneously. After the SIFS interval of receiving BRP frames, the STAs transmit CB frames together to all slave APs. Thereafter, MC sends a trigger frame to collect the CSI information of all STAs from each slave AP. In sequential, each slave AP transmits the CSI feedbacks individually whereas slave APs report CSI information to MC simultaneously in joint multi-AP explicit channel sounding. In the MA-based architecture, the MA sends an NDPA trigger frame to slave APs and associated STAs for channel sounding. The sequential MA-based channel sounding involves the MA and slave APs performing the NDP sounding phase after the NDPA. First, the MA transmits NDPA and NDP frames to slave APs and STAs with a SIFS interval. Then, slave APs send NDPA and NDP frames sequentially. After the NDP sounding phase, STAs of MA report CB sequentially with a SIFS interval, using the BRP frame to instruct them. Each slave AP transmits a BRP frame to each STA after receiving it from MA. This frame indicates slave APs to request CB to their associated STAs sequentially, with each STA reporting CB information in the CSI feedback phase. After receiving CB frames from STAs, MA sends a trigger frame to slave APs, and slave APs report CSI to MA sequentially. In joint explicit MA-based channel sounding, NDPA, NDP, and CB frames are transmitted by MA, slave APs, and STAs together during their turn. MA requests slave APs through a trigger frame to report collected CB from different STAs, and all slave APs transmit CB reports to MA jointly with a SIFS interval. Summary and Lesson Learned: In the beginning, we categorize the explicit channel sounding for MAP-Co into two types: MC and MA-based. We then proceed to describe the process for each type of explicit channel sounding in MC and MA-based MAP-Co. In both architectures, we talked about two types of channel sounding: sequential and joint explicit channel sounding. Sequential involves each AP and STA taking turns transmitting channel sounding frames, while joint sounding sends all frames from slave APs and STAs at once. Based on the discussion, we can conclude that using a sequential sounding approach for channel sounding requires more frame exchanges. As a result, this method can significantly increase communication and computation overhead for MC and MA. MC-based architecture can handle this overhead better because it only communicates with slave APs and has higher processing power. However, MA-based architecture may struggle with high overhead because it has to process all slave APs and STAs. Sequential explicit channel sounding has the benefit of not congesting network resources, making it the best option for MC-based architecture. However, it has a longer waiting delay due to waiting for other slave APs and STAs to finish their sounding process, resulting in higher latency. This method is not suitable for real-time or ultra-low latency services and does not meet the requirements of higher bandwidth for future applications like augmented reality and massive IoT. On the other hand, joint explicit channel sounding allows for joint exchange of frames by MC, MA, slave APs, and STAs, making it a better approach for proper resource utilization of OFDMA-based resource allocation. 
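As a back-of-the-envelope comparison of the two exchange patterns, the sketch below tallies illustrative airtimes for sequential versus joint MC-based explicit sounding; all frame durations are placeholder values rather than figures from the standard, and the joint case is idealized as collision-free.

# Rough airtime comparison of sequential vs. joint multi-AP explicit sounding.
# All durations are placeholder values in microseconds, chosen only to show how
# the exchange count scales with the network size.

SIFS, NDPA, NDP, BRP, CB, REPORT = 16, 50, 40, 40, 120, 150

def sequential_explicit(num_aps: int, stas_per_ap: int) -> float:
    """Each slave AP sounds in turn; CB frames come back one STA at a time."""
    per_ap_sounding = NDPA + SIFS + NDP
    per_ap_feedback = BRP + stas_per_ap * (SIFS + CB)
    per_ap_report = SIFS + REPORT            # CSI report from slave AP to coordinator
    return num_aps * (per_ap_sounding + SIFS + per_ap_feedback + per_ap_report)

def joint_explicit(num_aps: int, stas_per_ap: int) -> float:
    """Slave APs transmit NDPA/NDP together and STAs reply together (no collisions assumed)."""
    sounding = NDPA + SIFS + NDP
    feedback = BRP + SIFS + CB               # all CB frames sent simultaneously
    report = SIFS + REPORT                   # all slave APs report together
    return sounding + SIFS + feedback + report

for aps, stas in [(2, 4), (4, 8), (8, 16)]:
    print(f"{aps} APs, {stas} STAs/AP: "
          f"sequential {sequential_explicit(aps, stas):.0f} us, "
          f"joint {joint_explicit(aps, stas):.0f} us")

Even with these crude numbers, the sequential cost grows with both the number of slave APs and the STAs per AP, which is consistent with the overhead concerns discussed next.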
However, it can highly congest the WLAN in the case of massive IoT applications per AP, leading to unsuccessful transmission and re-transmission, which increases communication overheads, energy consumption, and delays actual data transmission. Additionally, joint transmission is not applicable for energy and delay constrained STAs, and computation and communication overhead in explicit channel sounding for MAP-Co is an issue at a larger scale compared to a single AP explicit channel sounding. In Table <ref>, the benefits and limitations of each explicit channel sounding type in MAP-Co are outlined. It is clear from the analysis that the explicit channel sounding method incurs a significant overhead due to the increased number of frame exchanges required. Different methods have been proposed to decrease signal overhead in a single AP environment in 802.11ax/be. Table <ref> summarizes the state-of-the-art overhead reduction methods for explicit channel sounding designed for 802.11ax/be and determines the best solution for MAP-Co WLAN environment. These methods have been well-researched in the literature <cit.>, so we won't go into detail about each one. Based on the information in Table <ref>, we'll explain what we learned from each solution regarding its suitability for MAP-Co channel sounding. The time parameter-based solution proposed in <cit.> is suitable for MAP-Co because it reduces the amount of information needed to encapsulate channel information such as multipath locations or amplitude by transforming frequency domain channel observation into the time domain. However, the approach used in <cit.> is not well-suited for MAP-Co channel sounding because it's designed for single data streams, which won't be the case in future WLANs. The solution presented in <cit.> can't solve issues in MAP-Co due to the need for additional processing and signaling, which could lead to errors in estimation. Similarly, the solution in <cit.> requires additional processing and signaling, which may not be good for MA-based MAP-Co channel sounding. However, it can be applied for MC-based explicit channel sounding since higher resources are available. The proposed method in <cit.> can be used for MC and sequential MC-based MAP-Co systems since it dynamically sets the size of CSI feedback. However, reducing feedback size alone won't be enough for joint and MA-based explicit channel sounding. Another idea to decrease overhead based on feedback is using codebook <cit.>. The feedback overhead can be decreased by optimizing the size of the codebook, which sounds promising for MAP-Co. However, it requires additional processing power and a new design for the MAP-Co system. Recently, machine learning-based approaches have been proposed for CSI estimation in different systems, which significantly reduce signaling overhead <cit.>. For example, <cit.> proposed a fully connected deep neural network for CSI estimation in cellular networks, but such an approach requires a lot of processing power that may not apply to MA-based architecture. It is important to note that EHF bands, like 802.11ay, do not have a specific MAP-Co channel sounding method outlined in their protocols. Instead, 802.11ay's MAC protocol divides the beacon interval into two parts: the beam header interval and the data transmission interval. The protocol uses beam training and sector sweeping to optimize beamforming and understand the channel quality of STAs of an AP. 
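To make the time-parameter-based feedback reduction idea mentioned above concrete, the snippet below compresses a synthetic frequency-domain CSI vector by keeping only its strongest time-domain taps; the channel model and the number of retained taps are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
N = 64                                        # number of subcarriers (synthetic example)

# Synthetic sparse multipath channel: a few taps in the time domain.
h_time = np.zeros(N, dtype=complex)
for tap in (0, 3, 7):                         # arbitrary tap positions for illustration
    h_time[tap] = rng.normal() + 1j * rng.normal()
h_freq = np.fft.fft(h_time)                   # what a STA observes per subcarrier

# "Time parameter" style compression: move to the time domain, keep the K strongest taps.
K = 3
imp = np.fft.ifft(h_freq)
keep = np.argsort(np.abs(imp))[-K:]
compressed = {int(i): imp[i] for i in keep}   # K (index, value) pairs instead of N values

# Reconstruction at the AP from the compact feedback.
imp_rec = np.zeros(N, dtype=complex)
for i, v in compressed.items():
    imp_rec[i] = v
h_rec = np.fft.fft(imp_rec)

err = np.linalg.norm(h_rec - h_freq) / np.linalg.norm(h_freq)
print(f"feedback size: {K} taps instead of {N} subcarriers, relative error {err:.2e}")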
CSI feedback is reported during beamforming training and beam tracking through beacon transmission and sector-level sweeping (SLS). The beacon interval structure and beam training frame MAC structure for a single AP 802.11ay protocol are explained <cit.>. Other WLAN protocols should consider enabling MAP-Co explicit channel sounding in their MAC protocols. §.§.§ Implicit Channel Sounding To provide feedback in a network with a large number of STAs and antennas at AP, the explicit channel sounding process can increase network overhead and introduce delays before data transmission. To address these issues, researchers have proposed the implicit channel sounding method. This method relies on channel reciprocity and uses CSI at the transmitter to calculate CSI at the receiver. It estimates CSI from a null signal transmitted by STAs and estimates downlink (DL) CSI based on received uplink (UL) channel sounding. The implicit sounding assumes that the impulse response of the uplink (UL) and downlink (DL) channels are identical within the same coherence interval. Several studies, such as <cit.>, have examined this method and its advantages over explicit sounding. In reality, the UL and DL baseband channels are not the same due to non-reciprocal impairments that differ in the baseband-to-RF and RF-to-baseband chains. To adjust for this difference, a calibration approach is used. Implicit channel sounding is a method that provides lower overhead and latency as it doesn't require beamforming feedback information or quantization. In this method, an AP sends NDP trigger packets to STAs and each STA responds with NDP packets to the AP with a gap of SIFS frames between each response. The AP estimates the CSI based on the received NDP packets from STAs. The UL channel sounding can be used as DL channel sounding for UP since both have the same impulse response in the same interval. However, a modified single AP implicit channel sounding is used for the MAP-Co environment. When dealing with MAP-Co over 802.11be, the implicit channel sounding approach needs to be adjusted based on MC and MA-based architecture. The IEEE standard, as described in <cit.>, discusses this approach for MAP-Co. To initiate implicit channel sounding for MAP-Co, MC or MA triggers slave APs and STAs to transmit NDPA/NDP frames. Additionally, there are two types of multi-AP implicit channel sounding approaches: MC-based and MA-based. These two types can also be further divided based on the NDP transmission pattern by STAs, which are sequential and joint. To illustrate, Fig. <ref> shows the MC-based implicit sequential channel sounding approach, while Fig. <ref> presents the MA-based sequential implicit channel sounding. In Figure  <ref>, the MC broadcasts NDPA trigger frames to all slave APs, which includes the ID of all slave APs and user information for its STAs and other STAs in OBSS. The user information consists of resource units on which NDP frames would be transmitted, multiplexing type for multi-user NDP, number of antennas to transmit NDP frames and more. After receiving the trigger frame, slave APs transmit NDP trigger frames to all STAs. In sequential implicit channel sounding, STAs transmit NDP frames to their APs in the order mentioned in the NDP frames, after a SIFS interval since receiving the NDP trigger frames. There must be a gap of SIFS between NDP frames transmission by each STA. In joint implicit channel sounding, STAs transmit NDP frames together in different resource units. 
After receiving the NDP frames, slave APs perform calibration to estimate the channel states. Thereafter, the MC sends MA trigger frames to slave APs to report the measured channel states, and slave APs report their estimated channel reports together. In the MA-based implicit channel sounding method, the MA sends NDP trigger frames to its STAs and slave APs as shown in Fig. <ref>. The slave APs then send the same frames to their own STAs. The STAs respond to the trigger frames with NDP frames either sequentially or together. In sequential MA-based implicit channel sounding, the STAs of the MA respond with NDP frames one after the other with a SIFS interval. The STAs of the slave APs then transmit NDP frames to their own APs sequentially and with a SIFS interval. In joint MA-based implicit channel sounding, all STAs of the MA and slave APs respond with NDP frames at the same time. This article provides a taxonomy of implicit channel sounding based on MAP-Co architecture and NDP frame transmission patterns. In the next subsection, we will discuss the summary and lessons learned from the discussion of MAP-Co implicit channel sounding. Summary and Lesson Learned:Table <ref> summarizes the various types of implicit channel sounding used for MAP-Co, along with their advantages and drawbacks. By studying these methods, we have learned valuable lessons. Among the different methods, the MC-based sequential implicit channel sounding method requires less calibration processing and does not congest the UL network. This approach has sufficient computation resources, thanks to high-end servers, which reduces the processing delay during calibration estimation. However, STAs may have to wait longer to transmit the NDP frames to their slave APs with this method, and it does not properly utilize OFDMA-based network resources for transmission by multiple STAs simultaneously. The MC-based joint implicit channel sounding method, on the other hand, efficiently utilizes the UL network resources for transmission by multiple STAs during NDP frames, and it can perform multi-channel sounding for an STA. In this method, the calibration estimation processing for multiple NDPs at the same time is not an overhead due to high-end servers. However, this method can increase the network overhead if there are numerous STAs per AP. Furthermore, machine learning approaches for channel estimation can also be used instead of implicit channel sounding. When using MA-based sequential implicit channel sounding, the amount of communication required is lower compared to sequential MC-based, as MA can send trigger frames to multiple STAs and slave APs simultaneously, reducing the number of exchanges needed. This method also experiences lower NDP transmission delay and has no UL network congestion. However, MA can experience high calibration calculation overhead and longer processing delays due to limited processing power, so it may not be suitable for a large number of devices per AP. Additionally, like other sequential methods, it may not efficiently utilize OFDMA-based network resources. The joint MA-based implicit channel sounding can use network resources efficiently with lower waiting time for transmission of NDP frames, but it may significantly increase the overhead at MA due to limited processing resources, leading to inaccurate channel estimation and high latency. This method can also congest UL channels during reporting of NDP frames, so it may not be suitable for MAP-co in high-device scenarios. 
Therefore, there is a need for an approach that can optimize communication and computation overhead of joint MA-based implicit channels sounding together. The quality of implicit sounding in IEEE 802.11be depends on the reciprocity nature of both DL and UL channels. However, this may not be suitable for the 802.11ay spectrum due to non-reciprocal interference from other APs. 802.11ay APs have a small coverage area, which can cause signal interference with neighboring STAs and APs. Additionally, the implicit approach is not effective on OFDM-based channels because of their non-reciprocal nature. To overcome these issues, researchers have studied implicit channel sounding, which is commonly used in designing hybrid analog-digital beamforming <cit.>. Also, various calibration methods have been proposed for mmWave WLAN, but they are mainly for single-AP 802.11ay WLAN <cit.>. In the future, these issues should be studied for MAP-Co in 802.11ay WLAN. It is important to note that implicit channel sounding in 802.11ay WLAN may experience similar overhead as explicit due to the large number of slave APs in an area compared to 802.11be WLAN. Therefore, an overhead analysis needs to be conducted for implicit channel sounding in 802.11ay WLAN. §.§ Coordinated OFDMA OFDMA is a significant feature of the 802.11ax/be WLAN, which allows multiple STAs to transmit/receive data simultaneously by sharing bandwidth, thereby improving throughput and reducing latency. In OFDMA, the bands are represented in terms of sub-carriers, which are grouped to form resource units (RUs). In a single AP environment, an AP cannot access the same frequency as another neighboring AP in the same transmission opportunity (TXOP), limiting the system's ability to achieve higher throughput. To address this limitation, C-OFDMA has been proposed to enable sharing of OFDMA resources among APs and STAs at the same TXOP while avoiding RU conflicts and interference. For instance, two STAs close to their BSSs can access the same frequency or RUs if they are coordinated, as they cannot interfere with each other. Moreover, edge STAs cannot transmit on the same RUs and frequency as they can interfere with each other. C-OFDMA allows for the assignment of different RUs in the same frequency or different frequencies for transmission to edge STAs at the same TXOP. However, in the upcoming 802.11be, a group has been formed to discuss C-OFDMA operation in 802.11be-enabled APs <cit.>. Therefore, in this article, we present the discussed C-OFDMA operations for 802.11be protocols. §.§.§ C-OFDMA Operation The MC/MA is responsible for directing the slave APs on how to access frequency resources in C-OFDMA, which consists of three steps: C-OFDMA setup, transmission scheduling, and transmission. During the C-OFDMA setup, the MA/MC shares available resources with all slave APs and receives requests from them indicating their desire to participate by specifying the required resources (frequency or time). These requests can be included in the physical layer protocol data unit for single-user or high-efficiency trigger-based PPDU for UL MU transmission if supported by the slave APs. The MA/MC then schedules the C-OFDMA transmission by allocating bandwidth, length, and other transmission parameters (TXVECTOR) for both DL and UL transmission. The TXVECTOR defines the maximum length of MPDU, data rate, and other relevant details. The MA/MC shares the allocation information with the slave APs. 
As shown in Fig. <ref>, it is possible to transmit two COA frames over two separate channels. One COA frame is sent to AP1 via channel CH1, while the other is transmitted to AP2 via channel CH2. STA1 is associated with AP1, and STA2 is associated with AP2. In this setup, the MA/MC does not allocate RUs to STAs. Instead, the available RUs of the allocated channels are assigned by the slave APs to their respective associated STAs. For example, AP1 can assign RUs of CH1 to STA1, and AP2 can allocate RUs of CH2 to STA2. Once the RUs are assigned, STA1 transmits UL data on the assigned RUs of CH1, and STA2 transmits UL data on the RUs of CH2. In the case of DL transmission, the slave APs transmit DL data to their associated STAs on the channels on which they received the COA frames, as depicted on the right side of Fig. <ref>. Hence, C-OFDMA can prevent collisions between neighboring APs transmitting over the same RUs by coordinating their transmissions. However, if slave APs choose the same channels before C-OFDMA, they may receive COAs in the same primary channels, which can cause collisions. To avoid this, the coordinator can instruct a slave AP to switch to another primary channel within the allocated bandwidth or assign different RUs to slave APs within the same channel. With C-OFDMA, different slave APs can have different primary channels within the allocated bandwidth. However, using the same primary channel can lead to interference issues. Therefore, coordinated RU and channel allocation among different APs is an important research area for C-OFDMA to prevent collisions and interference. In the next subsection, we provide a comprehensive overview of the research areas of C-OFDMA.
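The following toy sketch mirrors the allocation chain just described: the coordinator hands each slave AP a distinct channel (so that no two APs share a primary channel), and each slave AP then maps the RUs of its channel to its own STAs. The round-robin RU mapping and all identifiers are illustrative assumptions, not part of the 802.11be specification.

from itertools import cycle

def coordinate_ofdma(slave_aps, channels, rus_per_channel):
    """Toy C-OFDMA allocation: one distinct channel per slave AP (COA-like assignment),
    then each slave AP assigns the RUs of its channel to its associated STAs."""
    if len(channels) < len(slave_aps):
        raise ValueError("not enough channels for collision-free coordination in this toy model")
    allocation = {}
    for ap, ch in zip(slave_aps, channels):
        ru_iter = cycle(range(rus_per_channel))                 # round-robin RU mapping
        allocation[ap["ap_id"]] = {
            "channel": ch,
            "ru_map": {sta: next(ru_iter) for sta in ap["stas"]},
        }
    return allocation

aps = [{"ap_id": "AP1", "stas": ["STA1", "STA3"]},
       {"ap_id": "AP2", "stas": ["STA2"]}]
print(coordinate_ofdma(aps, channels=["CH1", "CH2"], rus_per_channel=4))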
In another study by different authors, a graph-based sub-channel assignment scheme was proposed for downlink (DL) OFDMA networks <cit.>. The scheme consisted of two phases: location-aware interference management and channel-aware sub-channel assignment. The authors utilized graph coloring techniques, where STAs were represented as nodes and inter-channel interference (ICI) as edges. Taking into account the location and movement of STAs, the MAX-k-cut algorithm was employed to identify disjoint clusters with reduced interference. Sub-channels were then assigned to these clusters based on the instantaneous channel quality. However, the study lacked detailed information on obtaining instantaneous channel quality and scalability challenges could arise with a larger number of mobile devices. Additionally, the model did not consider different mobility patterns. The article <cit.> presents a method for efficiently assigning sub-channels to DL C-OFDMA networks using a centralized iterative water-filling algorithm. The authors also suggest coordinated power control to maximize the overall rate. This involves adjusting the power levels for each base station and calculating the sum rate at each iteration until it is optimized. However, it is important to note that this approach assumes that there is perfect synchronization between base stations and that channel gain information is available for each user and base station. This may not be practical for emerging WLANs where the network environment and cells cause a constantly changing channel quality, requiring continuous synchronization and channel gain information. Zhang et al. <cit.> presented a resource allocation plan for 3D antenna array systems that optimizes physical layer parameters. They utilized vertical dynamic beamforming to allocate downtilts for cell-center and cell-edge users. Their aim was to enhance throughput for both types of users by coordinating transmission power and downtilt adjustments based on assigned sub-channels. To prevent interference during transmission by cell-center users, their approach assigns RUs and adjusts physical layer parameters for cell-edge users. However, there is limited research on inter-cell resource allocation over WLAN using C-OFDMA. In this vein, the authors introduced a MAC layer framework for C-OFDMA over multi-band WLAN <cit.>. To avoid interference, they recommended assigning different frequency bands to each access point (AP) through coordination. However, their approach reserves entire bands for all stations (STAs) and their APs, even when there is no overlap or interference. As a result, this approach may not be efficient. Another approach is C-OFDMA-based intra-channel allocation, which is presented below. ii) C-OFDMA based Intra-channel Allocation: In Intra-channel assignment, neighboring BSSs allocate the same channel to different users. The main challenge is to prevent co-channel interference while maximizing network throughput and other performance metrics. There are three methods to efficiently assign RUs within the same sub-channels in case of OBSSs while avoiding interference. The first two methods optimize physical layer parameters, including precoding and transmission power control (TPC). The third approach involves improving the WLAN's MAC layer by using a grouping and graph-based method. The work by Wang et al. <cit.> proposes the use of coordinated linear precoding in DL MIMO OFDMA networks to reduce co-channel interference among multiple BSSs. 
The authors focus on two precoding design problems: weighted sum rate maximization and maximizing the weighted sum of the minimal user rates (MWSMR) of the coordinated BSSs while considering power constraints per cell. They propose mathematical models to determine the precoding matrix for a certain number of users of multiple BS accessing a sub-channel, such that interference can be avoided. Linear precoding is used because signals of other users can be treated as noise at the intended user. In a similar vein, another approach to avoid co-channel interference in a coordinated way was proposed in <cit.>. This approach is based on precoding and is called dirty paper coding. With this method, the encoder for the current user requires knowledge of the encoding of previous users and associated channel state information (CSI) and propagation delays to cancel out any interference caused by those users. Kasaeyan et al. <cit.> proposed a coordinated precoder and postcoder design approach to reduce interference in DL C-OFDMA networks. The idea is to fix one coder and optimize the other, with the process being repeated until the error is minimized. Their formulation is specifically for DL and their proposed approach improves throughput significantly compared to single-cell optimization. Additionally, other works have also utilized TPC for mitigating co-channel interference in C-OFDMA. The authors of <cit.> proposed an energy-aware approach for downlink resource allocation in C-OFDMA, which optimizes user scheduling and power allocation across a group of coordinated base stations. The approach aims to balance the trade-off between energy efficiency and spectral efficiency while ensuring a constraint on the transmit power per allocated resource units (RUs) in the same channel. The optimal number of users from multiple cells assigned over sub-carriers and the transmit power level were determined to achieve maximum efficiency. However, the approach assumed perfect synchronization and accurate channel state information (CSI), which may not be feasible in practical WLAN scenarios as previously discussed. An additional study has also addressed the challenge of co-channel interference in C-OFDMA networks by selecting users for co-channel access in each tone and determining the power allocation across tones to optimize the weighted system sum rate for DL transmission <cit.>. While most of the existing literature has focused on the DL C-OFDMA, there are some studies on the UL C-OFDMA. For example, in <cit.>, the authors developed mathematical models to investigate the trade-off between spectral and energy efficiency in UL C-OFDMA. They demonstrated that their proposed energy models for C-OFDMA can result in significant energy savings. Therefore, coordinated TPC over each RU or sub-carriers can be an effective way to avoid interference and enhance both energy and spectral efficiency. The authors of <cit.> proposed an energy-efficient sub-channel allocation scheme with target wake-time scheduling for multiple BSSs using a MAC-based C-OFDMA intra-channel allocation approach. Their method involved dividing STAs under coordinated APs into groups and creating an undirected graph of OBSS where vertices represented STAs and edges represented overlapping STAs. They assigned different colors to vertices that were not adjacent to each other and then used the Welch-Powell method to find the minimum number of colors required to maximize the number of parallel transmissions while avoiding interference. 
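A minimal sketch of this Welch-Powell-style grouping step is shown below; the interference graph and station names are invented for illustration, and ties in the degree ordering are broken arbitrarily.

def welch_powell_coloring(adj):
    """Greedy Welch-Powell-style coloring: order STAs by decreasing degree and give each
    the smallest color not used by an already-colored neighbor. STAs sharing a color do
    not interfere with each other and can transmit concurrently."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Toy OBSS interference graph: an edge means two STAs overlap and must not share RUs.
interference = {
    "STA1": {"STA2", "STA3"},
    "STA2": {"STA1"},
    "STA3": {"STA1", "STA4"},
    "STA4": {"STA3"},
    "STA5": set(),
}
groups = welch_powell_coloring(interference)
print(groups)   # e.g. {'STA1': 0, 'STA3': 1, 'STA2': 1, 'STA4': 0, 'STA5': 0}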
They proposed a scheduling approach to determine the transmission and sleep times for each group, with each group consisting of vertices of the same color to enable concurrent transmission without interference. This is considered one of the best MAC-layer-based approaches for implementing C-OFDMA-based intra-channel allocation. §.§.§ Summary and Lesson Learned C-OFDMA resource allocation can be divided into two types: inter-channel and intra-channel. Inter-channel allocation coordinates multiple channels among different APs to minimize inter-cell interference, while intra-channel allocation assigns different RUs of the same channel to STAs of different APs through coordination. Table <ref> provides an overview of the state-of-the-art for both types of C-OFDMA. Inter-channel allocation typically uses combinatorial optimization to determine the best sub-channel sets for each BSS, but this method can be time-consuming. Alternatively, graph coloring algorithms can be used to optimize sub-channel sets. Most approaches aim to reduce inter-cell interference, but they require perfect synchronization and knowledge of CSI, which may not be practical in mmWave WLAN due to environmental dependencies. To allocate resources within a channel, there are different approaches such as coordinated precoding, coordinated TPC, and coordinated MAC-layer based methods. However, coordinated precoding has a longer preparation delay and higher signal overhead, while coordinated TPC requires perfect synchronization, which can be difficult for a large number of STAs. Additionally, some approaches involve determining combinations of STAs and RUs to avoid interference, but this can be time-consuming and require complex algorithms to find the best sets. Therefore, there is a need for a more efficient and proactive C-OFDMA resource allocation approach. §.§ Coordinated Spatial Reuse Spatial reuse (SR) is a method introduced by the 802.11ax standard to increase parallel transmission and spectral efficiency in dense WLAN environments through transmission power control and modulation and coding schemes (MCS). In this method, the Overlapping Basic Service Set (OBSS) Packet Detect (PD) technique adjusts the clear channel assignment/carrier sense (CCA/CS) threshold for the detected OBSS to the OBSS-PD threshold <cit.>. One AP is chosen to enable OBSS-PD threshold while other APs can use the CCA/CS threshold. This ensures that transmissions at different thresholds do not overlap, resulting in improved TXOP and spectral efficiency. However, some STAs may experience low SINR when an AP decreases its transmission power, and coordination is necessary for dense AP environments to avoid interference. To address this issue, the 802.11be standard introduced Coordinated Spatial Reuse (CSR) to perform SR in a coordinated manner. CSR enables APs to perform coordinated TPC to maintain adequate SINR at each STA, as shown in Fig. <ref>. Different aspects of existing CSR operations are described, including architecture, preparation stage, data transmission procedure, and control information exchange between APs. §.§.§ CSR operations The CSR architecture is based on the MAP-Co architecture. If using an MC-based MAP-Co, only slave APs are involved in CSR. If using an MA-based MAP-Co, there are two options for CSR. Option 1 is to have a fixed AP as the coordinator and all other APs as slaves. Option 2 is a dynamic approach where the AP that wins TXOP first becomes the coordinator and other APs willing to be coordinated become slaves. 
In CSR, the transmission power of each AP is determined by the coordinator (MC/MA) based on information obtained from channel measurement reports and beacon frames. The coordinator estimates the received signal strength indicator (RSSI) of the STAs on each channel and the mutually acceptable receiver interference level between neighboring APs using the measurement reports. The coordinator then uses this information to calculate the transmission power of each AP for its OBSS-PD threshold. The transmission power for the OBSS-PD threshold of a slave AP can be estimated using an equation that takes into account the difference between the desired OBSS-PD threshold and the CCA/CS threshold, as well as the difference between the RSSI and the noise floor, as given in equation <ref> <cit.>. This calculation ensures that the SINR of the STAs is maintained at an adequate level, even when the APs are transmitting at lower power levels. TX_PW_AP1 ≤ TX_PW_NAP - RSSI_AP1 + ARIL_N Let us focus on AP1 as the slave AP that requires adjustment of the CCA/CS threshold and estimation of its maximum TP. In the equation above, TX_PW_AP1 represents the average transmission power of AP1, which must be adjusted for its OBSS-PD threshold. TX_PW_NAP represents the transmission power of a neighboring slave AP (NAP), such as AP2. RSSI_AP1 represents the RSSI measured at the STA of the neighboring slave AP for a signal from AP1, and ARIL_N represents the acceptable receiver interference level (ARIL) at the STAs of the NAP. After estimating the new TP of an AP, the ARIL values of the STAs of AP1 (ARIL_AP1) should be updated. AP1 can only select an STA within its new range when the updated ARIL of the STAs is greater than the RSSI from the NAP to those STAs, which demonstrates interference control through CSR. The updated ARIL_AP1 is equal to (TX_PW_AP1 - pathloss - minimum SNR - safety margin). To provide an example of the estimation of TP in CSR, let us consider two slave APs (AP1 and AP2) and an MC, as shown in Fig. <ref>. STA1 is associated with AP1, and STA2 is associated with AP2. The dotted circles in Fig. <ref> present the original logical coverage of AP1 and AP2, as well as the distance between the STAs and each slave AP. When AP2 transmits a packet to STA2 in the same TXOP, STA1 can experience interference and may find that the channel is not idle. However, STA1 is far from AP2, so AP2's transmission to STA2 cannot impact STA1. To perform CSR, the maximum TP of AP2 should be decreased to reduce its coverage during that TXOP. The updated ARIL of STA1 and STA2 should then be estimated, and the updated ARIL of STA1 will be greater than the RSSI from AP2 because of AP2's reduced coverage and lower TP. 802.11be proposes two options for CSR based on the timing of decision-making. In option 1, the TPC is performed periodically by each slave AP, and the coordinator decides the TPC and other settings periodically. The coordinator informs slave APs through the periodic transmission of CSR trigger frames, which carry control information such as the identity of the slave APs, the PPDU length of the CSR data transmission, the length of the basic time frame, the maximum TP of each slave AP, the TX power of the coordinator for transmission by slave APs, and so on. This approach does not increase the signaling overhead during the CSR process. The other option is to control the TP at every transmission. The coordinator exchanges CSR trigger frames before each transmission, and the slave APs adjust their CCA/CS threshold accordingly through TPC.
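As a numerical illustration of the power calculation in equation <ref> and the ARIL update described above, consider the following sketch; all dBm values are invented for the example.

def csr_transmit_power(tx_pw_nap_dbm, rssi_ap1_dbm, aril_n_dbm):
    """Maximum transmit power (dBm) AP1 may use for OBSS-PD based spatial reuse,
    following TX_PW_AP1 <= TX_PW_NAP - RSSI_AP1 + ARIL_N from the text."""
    return tx_pw_nap_dbm - rssi_ap1_dbm + aril_n_dbm

def updated_aril(tx_pw_ap1_dbm, pathloss_db, min_snr_db, safety_margin_db):
    """Updated acceptable receiver interference level at AP1's own STAs after power reduction."""
    return tx_pw_ap1_dbm - pathloss_db - min_snr_db - safety_margin_db

# Illustrative numbers only.
tx_pw_ap1 = csr_transmit_power(tx_pw_nap_dbm=20.0, rssi_ap1_dbm=-62.0, aril_n_dbm=-82.0)
aril_ap1 = updated_aril(tx_pw_ap1, pathloss_db=70.0, min_snr_db=15.0, safety_margin_db=3.0)
rssi_from_nap = -90.0     # interference an AP1 STA would see from the neighboring AP

print(f"max TX power for AP1 during CSR: {tx_pw_ap1:.1f} dBm")
print(f"updated ARIL of AP1's STAs:      {aril_ap1:.1f} dBm")
print("STA selectable for CSR:", aril_ap1 > rssi_from_nap)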
Option 2 increases system overhead with more CSR trigger frames assignment, but it is dynamic and improves the system throughput at each transmission. Option 2 is highly suitable for mobile STAs. In the next section, we will summarize the available literature addressing the issue of CSR. §.§.§ State-of-the-art of CSR In this section, we will provide a breakdown of the different approaches to CSR and discuss the current issues in each category. There are three main categories of CSR approaches: OBSS PD-based CSR (OPCSR), Parameterized CSR (PCSR), and MAC-layer-based CSR (MCSR). OBSS PD-based CSR: The OPCSR approach regulates the transmission power of an OBSS to ensure that the RSS of any ongoing transmissions from a neighboring cell is lower than the OBSS-PD threshold of the OBSS <cit.>. This allows for controlled OBSS to begin transmitting without interference from neighboring cells. Lee et al. <cit.> researched situations of mutual interference, even when RSS values are below the OBSS threshold. They found that failed transmission is possible if the ongoing frame receiver is close to SR transmission on the same link, even after OPCSR. Therefore, they suggested determining coordinated links for ongoing transmission and SR transmission with OPCSR, so that available links can be used efficiently. They also proposed determining coordinated MCS values for SR and ongoing links. This distributed approach has a lower overhead but may have synchronization and higher communication delays in the case of a larger number of data streams, leading to longer waiting times for available links. Thus, this approach is suitable for a few CSR-based DL data streams but not ideal for CSR-based UL transmission. It is worth noting that the proposed approach did not investigate the limitations of link-aware OPCSR for more than two slave APs. Similarly, in <cit.>, simulations were conducted to assess the performance of OPCSR in various scenarios involving two slave APs. The authors analyzed path loss between APs and STAs and altered TPC and MCS to determine throughput, packet error rate, and optimal MCS values. This analysis can assist researchers and industries in determining the best MCS values for CSR in different circumstances. However, the study was conducted on a small scale, which may not be entirely realistic, and scalability could be a concern when implemented in a real network. Also, a new method has been suggested to enhance the quality of links when using OPCSR with uncorrelated antennas, as discussed in <cit.>. The authors proposed a mathematical model that identifies channels that are free from interference and assigns them accordingly to prevent mutual interference. However, this statistical model is only suitable for DL loads and cannot be used with UL CSR or real WLAN. Most of the existing research on OPCSR has focused on DL transmission. Another way to manage interference with OPCSR is to integrate it with C-OFDMA, as explained in <cit.>. In this integrated system, coordinated APs share assigned RUs with their cell-edge STAs, and neighboring APs determine transmission power for transmission at RUs accordingly. Parameterized CSR: The PCSR approach involves the coordinator informing multiple STAs of different slave APs about spatial reuse TXOP through a trigger frame. This frame contains information about transmission scheduling, allowable interference levels, and transmit power of other slave APs to estimate interference. 
PCSR has several benefits, including reducing contention time during CSR, allowing more concurrent TX through higher resource utilization compared to OPCSR, and reducing latency for time-sensitive traffic through priority scheduling information <cit.>. PCSR, however, has been thoroughly researched in cellular networks. For instance, a mathematical model for joint and fair transmission scheduling together with a power spectrum adaptation algorithm for CSR was proposed in <cit.>. The study confirmed that PCSR approaches can significantly improve network throughput. Another study, <cit.>, presented a performance analysis of PCSR and coordinated scheduling to exploit spatial reuse over cellular networks. Additionally, several works have discussed coordinated scheduling methods for coordinated spatial reuse in cellular networks, including <cit.>. While PCSR has been extensively studied in cellular networks, there is a lack of research on its implementation in WLAN. However, there are a few articles, such as <cit.>, that offer guidance on implementing PCSR using a Q-learning-based approach to share information about scheduled APs. The proposed approach aims to avoid sharing information about interference-free APs by comparing Q values. Additionally, <cit.> proposes an adversarial reinforcement learning-based CSR method to reduce frame signaling overhead with partial receiver awareness, which could be beneficial for highly dense WLANs. MAC-based CSR: The MCSR approaches address various concerns, including determining AP-STA pairs for CSR, grouping STAs that are free from interference, and implementing CSR based on graph coloring. In an example described in <cit.>, multiple APs within range are connected over the air to share control information. The AP that wins the TXOP first is selected as the MA, while other nearby APs act as slave/coordinated APs. The proposed approach forms groups with similar interference levels and determines TPC parameters for each group type. A reasonably distant AP is chosen to form a CSR group, restricting the use of immediate neighboring APs and of STAs at the edge of neighboring APs. However, this approach only considers static STAs and WLAN environments, which may not be effective in dynamic WLAN environments with new AP deployments and moving STAs. In <cit.>, the authors proposed TXOP sharing methods for CSR over 802.11be WLAN, using mathematical models to determine mutual interference and choosing the combinations that provide minimal interference and higher throughput. However, this article lacks an analysis of the interval at which the combinations must be recomputed, and the solutions have high complexity in dense AP environments like mmWave WLAN. As finding the best combinations by brute force is an NP-hard problem, the use of evolutionary algorithms may be useful. §.§.§ Summary and Lesson Learned This section covers the difference between CSR and SR, as well as the architecture, preparation stage, and data transmission process of CSR. We observed that, in the CSR architecture, it is advisable not to fix the coordinator in the MA-based architecture, to allow for flexibility and avoid burdening a single AP. We also provided formulae to calculate the transmission power for an AP that will perform SR in CSR, based on channel sounding information from multiple APs. According to equation <ref>, the allowed interference level should be greater than the RSSI values from neighboring APs.
These formulae can help to mathematically model complex CSR situations for TPC. We have categorized CSR methods based on when they are performed: periodic and every-transmission. After discussing these two types, we found that periodic transmission has lower overhead and energy consumption compared to every-transmission CSR, which requires more frames to be exchanged. However, periodic transmission may not be suitable for dynamic WLAN environments like future extremely high frequency bands (emerging 802.11ay and Terahertz (THz)), which experience frequent changes. On the other hand, every-transmission-based CSR may not be suitable for the emerging 802.11be WLAN due to its significant overhead. As a result, a new CSR transmission method needs to be developed in the future. Additionally, we have showcased the current state-of-the-art in CSR methodologies, which is summarized in Table <ref>. After conducting thorough research on the existing CSR literature, we have introduced a classification system that divides CSR approaches into three categories based on their optimization techniques: OPCSR, PCSR, and MCSR. OPCSR approaches employ various methods to optimize the transmission power of OBSSs in a coordinated manner, thereby minimizing mutual interference during concurrent transmission. In Table <ref>, we can observe that the majority of OPCSR techniques are designed for DL transmission, which goes from the AP to the STAs. It is difficult to implement OPCSR for UL transmission because synchronization is time-consuming and costly. Additionally, we can infer that OPCSR can be either periodic or applied to every transmission. In Table <ref>, we have listed the downsides of each OPCSR method discussed. The PCSR approach, in contrast, is versatile and can be used for both UL and DL transmission, as shown in Table <ref>. Additionally, PCSR has several advantages over OPCSR, including reducing contention time during CSR, allowing more concurrent TX through high resource utilization, and reducing latency for time-sensitive traffic through priority scheduling. Therefore, PCSR is recommended for both UL and DL transmission in emerging WLANs. Nevertheless, implementing PCSR may present some challenges. For UL transmission in WLAN, STAs do not know whether there is any interference at the APs because there is no feedback mechanism in place. Additionally, some STAs have to wait for other STAs to finish transmitting in CSR before they can start. Unfortunately, there is no mathematical model to determine the average delay STAs experience during UL transmission using PCSR. This is where MCSR comes in, which helps identify interference-free AP-STA pairs and groups them for simultaneous transmission. By using the grouping approach and identifying interference-free AP-STA pairs, the overhead can be reduced. It is important to redesign the MAC layer of emerging WLANs to include a CSR-based grouping of STAs. Most CSR approaches rely on predetermined frequency-time allocations by coordinators. This limits their ability to quickly adapt to changes in user distribution, QoS requirements, and channel conditions. Additionally, these approaches are reactive rather than proactive, meaning that CSR transmission and decision-making occur after several rounds of channel information gathering and analysis. This reactive approach can cause delays that are not suitable for URLLC applications, especially in highly dense WLANs like mmWave WLAN.
Therefore, there is a need for a proactive CSR approach that can predict future CSR transmission and determine scheduling details in advance. Currently, CSR approaches have been well studied for 802.11be WLAN but have not explored the challenges specific to mmWave WLAN (802.11ay). When there are many active users simultaneously, maintaining orthogonality among users becomes difficult, leading to interference from various sources. Therefore, it is essential to identify scenarios where mutual cell interference can occur and propose new management policies for CSR. §.§ Coordinated Beamforming The CBF concept aims to improve spatial reuse using same transmission power by enabling cooperative APs to cancel incoming interference at the spatial level. For instance, if there are two coordinated APs, each with an STA, both STAs can transmit data to their respective APs using spatial reuse TXOP. In CBF, the APs work together to share information on the interference their STAs receive, which can be used for interference nulling or alignment. Non-serving STAs can provide Channel State Information (CSI) by using MAP-Co channel sounding methods to help APs understand the interference they receive. CBF is a popular topic in cellular networks and has recently been discussed in unlicensed WLANs, such as 802.11be <cit.>. Recent results show that MIMO antennas can suppress interference, reducing neighboring link interference by up to 10 dB <cit.>. This section provides a detailed explanation of CBF operations and the existing state-of-the-art. §.§.§ CBF Operations The process of CBF operations is similar to the previous MAP-Co features and consists of three phases: MAP-Co request, channel sounding, and MAP-Co data transmission. The MA/MC sends MAP-Co requests to its STAs and slave APs, which then forward the request to their respective STAs. Afterward, channel sounding is performed, which is explained in detail in previous sections. In this section, we will focus on the MAP-Co CBF data transmission steps. We consider a WLAN network with two OBSS that have two APs (AP1 and AP2), each having an STA associated with it (STA1 to AP1 and STA2 to AP2). AP1 and AP2 are slave APs, and AP1 needs to transmit DL to STA1 while STA2 needs to send UL packets to AP2. However, AP1 DL transmission can cause interference at STA2, while UL transmission by STA2 can cause interference at AP1. To avoid interference, the MA/MC sends a trigger frame to slave APs that conveys synchronization, scheduling, and interference nulling information. AP1 and STA2 synchronize their data transmissions within the same TXOP of AP2 and STA2 and utilize coordinated sounding information to optimize the antenna weights of their respective antenna arrays. The goal is to suppress and nullify interference at STA1 by adjusting the antenna weights. In the next section, we will present the state-of-the-art of interference management through CBF. §.§.§ State-of-the-art of CBF In the context of CBF, the primary objective is to effectively handle interference in a multi-AP environment. This implies that when an AP transmits data to an STA, it should not disrupt the reception of another STA in the adjacent OBSS. Interference management in CBF can be classified into two main categories: PHY layer-based and MAC layer-based approaches. The MAC layer-based approach can be further subdivided into centralized and semi-distributed schemes. Similarly, the PHY layer-based approach can be categorized into two types: Interference Nulling and Interference Alignment. 
In centralized schemes, an MC-based architecture can be employed to coordinate interference management through beamforming among multiple slave APs. On the other hand, semi-distributed approaches utilize MA-based architectures to manage interference. Interference nulling is a technique used by a transmitting AP to completely cancel its signal at a specific receiver by precoding the signal to counteract the interference caused by the signal transmitted to another receiver. In contrast, interference alignment aligns all interfering signals in the same directions at the receiver, allowing the signal from a particular transmitting AP to be free from interference and enabling successful decoding. For more in-depth information regarding interference nulling, refer to <cit.>, while details on interference alignment can be found in <cit.>. Now, we will delve into the existing literature pertaining to each category of interference management in CBF. One of the previous studies, identified as <cit.>, concentrated on managing interference in a centralized manner for a multi-AP setting that is coordinated using a backbone network. In this method, all the slave APs shared data packets through a wired backbone network, and the precoding process was similar to MU-MIMO transmission. The method employed interference nulling in CBF and required explicit channel sounding prior to CBF. However, the drawback of this approach is the overhead introduced by explicit channel sounding, which increases the overall overhead and renders it unsuitable for densely populated real networks. Therefore, the approach proposed by the authors in <cit.> uses implicit channel sounding and offers higher accuracy. These papers are excellent examples of comparing the CBF approach using different channel sounding methods in a centralized way. Another method, discussed in <cit.>, focuses on a centralized interference alignment approach for CBF. In this approach, the centralized server performs joint precoding of transmitted signals to APs and transfers the frequency domain symbols, ensuring that all slave APs are phase synchronized. The method utilizes low complexity zero-forcing beamforming and Tomlinson-Harashima precoding. In their work <cit.>, the authors put forth a centralized interference alignment technique known as POLYPHONY. Essentially, their method involves directing the signal from a station towards all access points (APs) in the same direction to maximize the signal-to-noise ratio (SNR) for packet decoding. This process is repeated for every packet from each station. While this approach is reliable, it can create significant overhead by sending the same packet to each AP, and it only applies to the uplink (UL). In their study, <cit.> suggested a centralized method for interference alignment and nulling in both the uplink and downlink using MIMO technology. Their approach involved using an antenna to nullify a signal received by an STA, and then transmitting another signal to another STA through its antenna to align with the received signal. A proposed approach by <cit.> also uses a centralized method that combines signal processing to create a joint precoding matrix in the downlink and joint decoding in the uplink transmission for interference alignment. This approach also uses explicit channel sounding to gather CSI information. An experimental study confirmed that this method offers optimal multiplexing gain. Other works, such as <cit.>, also focus on joint precoding and decoding for interference alignment. 
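To make the idea of interference nulling concrete, the following minimal sketch computes a zero-forcing-style precoder that steers toward the intended STA within the null space of the victim STA's channel; the antenna count and channel realizations are synthetic, and perfect CSI is assumed.

import numpy as np

rng = np.random.default_rng(2)
n_tx = 4    # transmit antennas at AP1 (synthetic setup)

# Synthetic flat-fading channel row vectors from AP1's array to its own user STA1
# and to STA2, the victim served by a neighboring AP.
h_sta1 = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
h_sta2 = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)

# Projector onto the subspace orthogonal to h_sta2: any precoder taken from this
# subspace arrives at STA2 with zero amplitude (perfect nulling under ideal CSI).
P_null = np.eye(n_tx) - np.outer(h_sta2.conj(), h_sta2) / np.vdot(h_sta2, h_sta2)

# Zero-forcing style precoder: steer toward STA1 inside that null space.
w = P_null @ h_sta1.conj()
w /= np.linalg.norm(w)

print("signal gain at STA1 :", abs(h_sta1 @ w))
print("leakage toward STA2 :", abs(h_sta2 @ w))   # numerically ~0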
Some approaches prioritize CBF over MA-based architecture to minimize the burden on a central server. An example is the CoaCo method proposed by the authors of <cit.>, which uses semi-distributed CBF to address interference in WLAN. Their method optimizes beamforming weights to reduce inter-AP interference through grouping, ensuring that interference between groups is minimized. Coordination is only required between the heads of each group, similar to MA-based architecture. Another method called OpenRF has been proposed for MA-based architecture to nullify interference. This approach uses software-defined networking (SDN) to target the beam to a specific STA and automatically nullify any interference at other STAs. The implementation of SDN-based interference alignment has also allowed for an increase in concurrent data streams. In <cit.>, the authors suggest a cluster-based CBF for MA-based architecture. This method utilizes decentralized CSMA to prevent interference between clusters through a precoding method within the cluster. Other research has also addressed inter-cell interference during beamforming through clustering methods, such as <cit.>. §.§.§ Summary and Lesson Learned In this section, we explored different methods for improving concurrent transmission using CBF. Centralized and semi-distributed interference nulling and alignment approaches were discussed. Centralized schemes rely on the network's backbone capacity for efficient information sharing, making them suitable for enterprise WLAN but not ideal for home-based WLANs that require low latency and high bandwidth. Interference nulling and alignment have been well studied for SU-MIMO and MU-MIMO, but not enough attention has been given to dense multiple AP scenarios. Precise CSI information is needed for precoding, and existing reactive CSI gathering schemes may not meet future extreme requirements. We also looked at a clustering-based approach for interference management in beamforming. STA selection is crucial within the cluster for beamforming scheduling. However, existing methods use a greedy approach for user selection. In the future, it would be noteworthy to study the design of an evolutionary algorithm for user selection in large-scale networks. Semi-distributed approaches are promising for addressing scalability issues related to CBF. §.§ Joint Transmission We previously discussed the efficient management of data transmission by multiple STAs of OBSSs using techniques like C-OFDMA, CSR, and CBF. In addition to these features, MAP-Co also supports efficient distributed MIMO (D-MIMO), which enables transferring packets between multiple APs and an STA. This concept has been around for a decade and has been implemented over cellular networks. However, researchers are now considering D-MIMO over emerging WLAN technologies like 802.11be <cit.>. D-MIMO is a significant feature of MAP-Co and is also known as joint transmission (JTX). With JTX, multiple slave APs can transmit data to an STA by receiving JTX scheduling and control information from MC/MA. JTX offers numerous advantages, including seamless association for mobile stations (STAs), increased throughput, improved reliability, and reduced transmission delays. One example of JTX operations in the context of WLAN is its ability to avoid inter-channel interference (ICI) by enabling multiple overlapping basic service sets (OBSSs) to transmit on the same channel simultaneously. 
In this section, we discuss an example of JTX operations specifically in the context of 802.11be and provide an overview of the current state-of-the-art in JTX. Finally, we summarize the key findings and lessons learned from the discussion. §.§.§ JTX Operations In this section, we discuss the process of performing JTX from the perspective of both MAP-Co architectures. Prior to initiating JTX operations, it is important to conduct multi-AP channel sounding, which involves characterizing the wireless channel environment at multiple access points (APs). In section <ref>, we explained various multi-AP channel sounding methods. However, all of those methods were focused on single AP-STA transmission. Therefore, a new method called JTX channel sounding was proposed by IEEE for implementing the same process in JTX. During JTX channel sounding, the coordinator (MA/MC) initiates the process by sending an NDPA frame to all slave APs involved in JTX. This frame contains information about the intended STA for JTX and instructs the directly associated AP to send an NDP trigger frame to the intended STAs, which in turn prompts each STA to broadcast NDP packets. The other slave APs are instructed to be ready to receive the NDP frames from the STA. Once all slave APs receive the NDP frames, they estimate the CSI and report it back to the coordinator. Only the slave APs involved in JTX participate in this process. Next, the coordinator analyzes the reported CSI and sends a BRP trigger frame to the slave APs selected for JTX. However, if a slave AP is busy and unable to participate in JTX, the BRP frame is not transmitted to it. The BRP frame contains scheduling and control information for JTX, including power levels, phase shifts, and data frames. These control parameters are used to align the transmitters so that the transmission to the intended STAs is synchronized and their signals combine constructively. Upon receiving the BRP trigger frame, the slave APs initiate joint data transmission to the STA. The STA sends ACK packets to the slave APs; the coordinator then requests these ACK acknowledgments from the slave APs, which forward them upon receiving the request. JTX requires tight synchronization among the slave APs to ensure that data transmission starts and ends simultaneously. In IEEE 802.11, multiple access by STAs in JTX can be achieved using C-OFDMA and CBF. Slave APs can utilize different primary or secondary channels through C-OFDMA for DL JTX to an STA or employ CBF to transmit in different sectors to an STA. During joint transmission, the slave APs configure their transmission parameters to ensure constructive combining and joint decoding of the signal at the STA, resulting in a higher SINR. Similarly, CBF and C-OFDMA can be applied for UL JTX by multiple STAs to an AP, and joint processing, such as joint decoding, is performed at the AP. In the DL case, the STA needs to perform joint processing of the received signals for effective decoding.
This process typically involves three steps: selecting the transmission points, pairing them with the corresponding STAs or users, and determining resource allocation parameters such as power levels and MCS. Researchers have explored various joint scheduling approaches, and the topic is also well researched in cellular networks. To begin our discussion, let us examine joint scheduling in cellular networks and its limitations in WLAN. The authors of <cit.> proposed a centralized grouping-based MAC scheduling approach for multi-point JTX. In this approach, the base stations are grouped into different clusters with different strategies while considering fixed transmission points and groups. The approach is a good study of different strategies, but it has a few limitations. Firstly, it is not applicable to more than three base stations per group, and secondly, it is not suitable for mobile STAs. Furthermore, the delay can become very high when the number of access points is large, as in the case of WLAN. Another disadvantage of the static cluster is the assumption of fixed interference, which is not practical in the case of WLAN. Therefore, such grouping approaches cannot be applied to JTX over WLAN. In <cit.>, the authors suggest a clustering-based multi-point JTX approach and allocate resources to users within a cluster through space-division multiple access. However, the authors limit the number of users who can perform JTX to the number of cells in the cluster. This restriction can cause significant delays in WLAN due to the CSMA/CA process and the higher number of APs. Additionally, the approach assumes static devices and fixed groups, which may not be practical under dynamic changes in the WLAN environment, such as interference and mobile devices. Although this method accounts for interference and power levels and is in principle applicable to WLANs, it is too basic and does not consider WLAN channel access models. Therefore, determining JTX and non-JTX users using this straightforward approach may not be suitable for highly dense WLANs, leading to longer delays due to the CSMA/CA process. Other works have considered dynamic clustering methods for JTX. For example, a user-centric, location-based approach was proposed in <cit.> for creating small virtual cells for JTX to a user from multiple access points. The approach considers power constraints for each user and uses mathematical models to estimate average user throughput. This method is suitable for 802.11be WLAN due to its wider coverage. However, clustering based on user location may result in resource wastage if the available bandwidth does not meet the required QoS. Therefore, user location alone may not be the most efficient basis for JTX. Another method for clustering APs in JTX involves selecting a certain number of users and scheduling resources based on interference and the optimal power level difference between the base station and the users <cit.>. In addition to clustering, another method for determining the number of APs needed for JTX is the graph-based approach outlined in <cit.>. The authors proposed an OFDMA-based approach for JTX in cellular networks, where they created graphs of the base stations involved in JTX, with each node assigned its queue size and traffic arrival rate. By using graph coloring and the knapsack problem, they were able to solve the JTX resource scheduling issue and improve the throughput for inter-cell users.
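As a toy illustration of the graph-coloring step described above (a simplified sketch of the general idea, not the cited algorithm, using a hypothetical four-AP topology), the following Python snippet builds a conflict graph whose vertices are APs and whose edges connect APs that would interfere on the same resource, then greedily assigns each AP the smallest resource slot not used by its neighbors:

# Conflict graph of a hypothetical OBSS topology: vertices are APs and edges
# connect APs that would interfere if they transmitted on the same resource.
conflict_edges = {
    "AP1": {"AP2", "AP3"},
    "AP2": {"AP1", "AP3"},
    "AP3": {"AP1", "AP2", "AP4"},
    "AP4": {"AP3"},
}

def greedy_coloring(graph):
    # Visit the most-constrained (highest-degree) APs first to keep the slot count low,
    # and give each AP the smallest slot index not used by any of its neighbours.
    order = sorted(graph, key=lambda ap: len(graph[ap]), reverse=True)
    slot = {}
    for ap in order:
        taken = {slot[n] for n in graph[ap] if n in slot}
        slot[ap] = next(c for c in range(len(graph)) if c not in taken)
    return slot

print(greedy_coloring(conflict_edges))
# -> {'AP3': 0, 'AP1': 1, 'AP2': 2, 'AP4': 1}: conflicting APs never share a slot

In the cited work, this coloring step is further combined with a knapsack formulation that packs queued traffic into the slots, whereas the sketch stops at the conflict-free slot assignment.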
In <cit.>, the authors proposed a novel method for choosing user-AP pairs in JTX that relies on neither clustering nor graphs. Instead, they suggest determining the optimal power level difference required for a certain number of users to operate in JTX mode under an AP. They also recommend radio resource management for JTX users, which can be beneficial for smaller networks that prioritize individual users over APs. However, this method may not be practical for denser networks due to synchronization requirements and network delay. Additionally, these cellular scheduling approaches for JTX cannot be directly applied to WLANs because of differences in multiple access methods, path loss, blockage, interference, and QoS requirements. In addition, WLANs typically lack X2-like interfaces, which are needed for synchronization and information sharing. As a result, there have been very few studies on JTX conducted over WLANs. We will now showcase the most recent developments regarding JTX over WLAN. In 2018, a group of researchers proposed a JTX method specifically for 60 GHz WLANs <cit.>. Their method accounted for link outages caused by human body blockages and introduced a blockage mitigation technique using JTX. This technique ensured reliable communication even when blockages were present. The researchers carefully selected STAs for JTX to improve reliability and capacity. They also addressed interference issues by determining the optimal JTX path, taking into consideration the presence of potential sources of interference. However, the researchers had limited control over the available paths, which could result in interference problems, especially when dealing with a higher number of mmWave APs. In order to enable JTX in crowded places where interference-free paths are not available, the authors of <cit.> suggested using IRS-based JTX to create new paths. Additionally, a channel reservation based MAC protocol for JTX in WLAN, introduced in <cit.>, can enhance the throughput and reliability of an STA. A method for analyzing the performance of JTX over 802.11be, together with a distributed scheme for scheduling among JTX, single-AP transmission, and single-AP transmission with interference, was proposed in <cit.>. The goal of their work is to enable JTX for an STA that lies within the coverage of more than one AP. However, this approach may activate JTX for multiple STAs that do not require it, leading to reduced resource utilization. As a result, more research is needed to address WLAN challenges and advance JTX scheduling. Joint Synchronization and Processing: To ensure smooth operation of JTX, it is important to synchronize the joint transmission from multiple APs. This prevents timing offsets, allows coordinated phase shifts, and keeps the digital clocks at the STAs aligned. Failure to synchronize can result in inaccurate channel estimates, poor interference nulling, and degraded SINR at the STAs, as noted in <cit.>. Moreover, in JTX scenarios, accurately estimating the CSI for each STA becomes increasingly difficult over time. This creates a challenge for synchronization. To address this challenge, AirSync <cit.> proposes a method for timing and phase synchronization. This method detects slot boundaries in OFDM, synchronizes APs using the cyclic prefix, and predicts the carrier phase correction for the transmitters. While this method is suitable for 802.11ay WLAN, it cannot be applied to 802.11be WLAN due to differences in channel modulation techniques.
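As a back-of-the-envelope illustration of why this synchronization matters (our own numerical sketch with arbitrary Gaussian phase-error statistics, not measurements or a cited model), the snippet below averages the power received by an STA when two slave APs jointly transmit the same unit-power symbol under different residual carrier-phase errors:

import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

def avg_combined_power(phase_error_std_rad):
    # Two slave APs jointly transmit the same unit-power symbol to one STA.
    # AP1 is the phase reference; AP2 carries a residual carrier-phase error.
    err = rng.normal(0.0, phase_error_std_rad, size=n_trials)
    field = 1.0 + np.exp(1j * err)
    return np.mean(np.abs(field) ** 2)   # single-AP power would be 1, ideal JTX is 4

for std_deg in (0, 15, 45, 90):
    p = avg_combined_power(np.deg2rad(std_deg))
    print(f"phase-error std {std_deg:3d} deg -> average received power {p:.2f} (ideal 4.00)")

With perfect alignment the two fields add coherently to four times the single-AP power, while a 90-degree phase-error spread already erodes a large part of that JTX gain, which is exactly the degradation that AirSync-style carrier-phase prediction tries to avoid.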
Joint processing between AP and STAs is also crucial, encompassing tasks such as estimating joint CSI, performing joint precoding, and handling backhaul processing. This allows for enhanced desired signals, overpowering any interference or noise, by combining signals from multiple APs at an STA. To achieve this, all signals need to undergo joint processing. The maximum ratio method is one way to merge received signals, optimizing the ratio between the power of the desired signal and the squared norm of the combining vector <cit.>. However, this technique is not optimal for dealing with interfering signals. Another approach involves a tradeoff between interference rejection and maximizing signal gain <cit.>. In <cit.>, various other methods for joint processing of reference signals are surveyed. Furthermore, joint processing of transmit precoding is essential for decoding joint reception at receptors, and one approach involves precoding through uplink-downlink duality using linear combination schemes for joint precoding vectors <cit.>. §.§.§ Summary and Lesson Learned In our previous discussion, we talked about the JTX process, where an STA receives data from multiple APs, also known as transmission points. In addition, we have discussed the JTX operations recommended by the 802.11be standards for JTX over WLAN. We also provided an overview of the current state-of-the-art JTX scheduling methods. Many of these methods use a grouping and clustering approach to organize the APs involved in JTX for an STA and allocate resources based on their needs. However, this approach has lower scalability and is best suited for static WLAN environments with a fixed number of STAs and APs. In case of dynamic and dense mmWave WLAN environments, this approach may cause delays and reduce reliability. A dynamic clustering approach, such as the one presented in <cit.>, could be a suitable option for future WLANs. However, it's important to note that the efficiency of JTX may decrease in the presence of obstacles in mmWave WLANs. To address this issue, blockage-aware solutions, as discussed in <cit.>, should be studied. Another challenge with JTX is joint synchronization and processing. Improper synchronization can result in a different MIMO channel between APs and STA, causing poor JTX. JTX scheduling should also consider channel aggregation/bonding, which hasn't been fully explored in the literature. Additionally, joint precoding and decoding of massive JTX in future dense mmWave WLANs can create overhead and longer delays. Therefore, an intelligent and proactive JTX solution is necessary to adapt to the ever-changing WLAN environment, with a focus on designing distributed precoding and decoding processing of JTX. § FUTURE MAP-CO AND CHALLENGES In this section, we present several promising future research areas for MAP-Co and open technical challenges for the realization of future MAP-Co. The explained future areas and challenges are going to promote the development of MAP-Co in the emerging WLANs such as beyond 802.11ay, beyond 802.11be, and THz. The details of each future area of MAP-Co are explained in the next subsections. §.§ Blockage-Aware MAP-Co Future MAP-Co will be implemented over EHF bands such as mmWave or THz. However, the EHF bands are highly vulnerable to WLAN environments such as obstacles or humans, which can degrade signal quality and strength. Unfortunately, existing MAP-Co features such as channel sounding, CBF, CSR, and JTX do not consider the impact of blockage on their performance. 
For instance, the existing CBF does not take blockage-aware coordination into account. As a result, beams from a blocked AP cannot perform transmission at the same TXOP with other APs, and cannot take advantage of CBF. Similarly, if power control in CSR is oblivious to blockages, then the estimated power may not be enough for a high-quality signal. Therefore, transmission power control in CSR should consider the blockage between STA and AP. Blockage in MAP-Co over mmWave WLAN can also impact the performance of JTX. For example, joint transmission by multiple APs to STAs can be affected if there is a blockage between the STA and a particular AP, as presented in Fig. <ref>. In such cases, the transmission by that AP can reach the STA with very low quality <cit.>, reducing the overall performance of JTX. Moreover, existing models for calibration in implicit channel sounding do not consider the impact of blockage, leading to inaccurate CSI estimates. Therefore, blockage-aware MAP-Co design is another open issue. One solution to these issues is to use an intelligent reflecting surface (IRS) to improve the propagation environment by converting non-line-of-sight to line-of-sight without signal loss <cit.>. An IRS can also expand the coverage of JTX and other MAP-Co features, as depicted in Fig. <ref>. However, using an IRS for MAP-Co performance poses several challenges that need to be addressed. Researchers need to propose new mathematical models to understand multiple channels in IRS-enabled MAP-Co, and take into account the extra signal processing required at the master AP. Moreover, the placement of the IRS in the context of MAP-Co is a key investigation to avoid interference at STAs or other APs from the reflected signal. Another direction could be the design of new calibration models for implicit sounding that consider the signal losses due to blockage. Therefore, in the future, MAP-Co shall be IRS-assisted. §.§ Proactive MAP-Co Channel Sounding After conducting research on various MAP-Co channel sounding methods, we have identified two prevalent problems. Firstly, there is considerable signal overhead when using explicit channel sounding. Secondly, implicit channel sounding can result in calibration errors, as indicated in Fig. <ref>. These issues arise due to the reactive nature of multi-AP channel sounding, which involves sending reference signals or null packets. Hence, an intelligent channel sounding is a promising approach to address the issues of signal overhead and calibration errors in multi-AP channel sounding, as depicted in Fig. <ref>. Various methods have been developed to predict CSI by examining a user's past mobility patterns or handover patterns to forecast future locations and estimate the CSI from that location <cit.>. Other approaches analyze an STA's past CSI information within an AP to predict the CSI. Many of these methods employ deep convolutional neural networks (CNN) to forecast the future CSI that an STA will experience at an AP. However, existing deep learning-based approaches for single-AP CSI prediction cannot be directly applied to multi-AP environments due to the increased complexity of input and output parameters. Therefore, developing new approaches that can reduce the scale of input and output parameters is an open area of investigation. In addition, implementing an intelligent channel sounding method is simple with MC-based MAP-Co. 
However, a centralized multi-AP deep learning approach is not feasible for MA-based MAP-Co due to its higher computational demands. Federated learning can be a promising approach for implementing intelligent channel sounding in MA-based MAP-Co. Federated learning is an approach that improves network performance by using decentralized learning at multiple local servers <cit.>. In this approach, a local server at each slave AP can predict the channel quality of its STAs locally, and the predicted results can be sent to the master AP for multi-AP prediction. However, due to the highly dynamic nature of the mmWave WLAN environment, the prediction accuracy can decrease with even small changes in the environment. Therefore, the multi-AP deep CNN models for 802.11ay should be designed to ensure that changes in the WLAN environment do not reduce the accuracy rate for an extended period of time. This can be achieved through continuous monitoring and re-training of the deep learning models, which would help maintain a high accuracy rate for multi-AP channel state prediction in dynamic WLANs. §.§ Multi-band MAP-Co Wireless Local Area Networks (WLANs) operating on sub-6GHz bands may not be sufficient for advanced applications like virtual reality (VR) and may not provide seamless connectivity. To address this issue, multiple bands such as 2.4GHz/sub-6GHz and mmWave are being used in WLAN development to ensure seamless connectivity and meet extreme requirements <cit.>. The 802.11 protocols offer a way to quickly transfer sessions between different bands and enable multi-band operations through on-channel tunneling. Transmission over multiple bands can be either concurrent or non-concurrent. Traffic can be distributed over a single band or multiple bands, and the MAC architecture for multi-band operations can be independent, distributed, or unified according to the IEEE <cit.>. The MAC protocol usage in different bands varies based on the type of architecture employed. The independent architecture permits the use of different MAC addresses and protocols for each band, while information from upper layers is utilized to distribute traffic. The distributed architecture, however, assigns only one MAC address for each band and shares information at the local level, without the upper layer being aware of the transfer of the same or different traffic sessions between bands. In the unified MAC architecture, a new MAC layer is responsible for the dynamic transfer of traffic among different bands for concurrent or non-concurrent transmission. The current multi-band MAC protocol enables the transfer of traffic between different links, with this switching information being shared within the access point (AP). This process of transferring the same or different traffic to multiple bands is referred to as "multi-band switching". When multiple access points (APs) are used, switching traffic or transmitting multiple traffic flows concurrently or non-concurrently can cause issues for MAP-Co in the future. For example, if traffic is switched between multiple bands without using multi-band C-OFDMA, inter-channel interference can negatively impact the signal quality of the switched traffic. This occurs when signals from different communication channels interfere with the selected channel, resulting in signal degradation. To handle switched traffic optimally, it is recommended to use a coordinated multi-band OFDMA method to select channels effectively and avoid such issues.
Similarly, in CSR, switching traffic to another band can create interference at neighboring devices operating in the same band if the transmission power is not controlled in coordination with the multi-band switching at the other AP. In CBF, switching traffic to another band and using a beam for transmission can create interference if nulling toward the neighboring AP in the same band is not performed, owing to the lack of coordination of multi-band switching. JTX via multi-band switching without MAP-Co can lead to interference and collisions if the same channel/resource units are used, which can lower the throughput. When using single-band APs, the APs share channel information and determine the parameters for using MAP-Co features. However, with multi-band APs, band selection occurs at the packet level, resulting in rapid switching of packets between multiple bands. Sharing information among multi-band APs within the limited time frame imposed by this fast switching, and at the same packet arrival rate, therefore presents a significant challenge. One solution to this challenge is to use AI and machine learning to predict when multi-band switching will happen and notify the other APs ahead of time. By doing this, APs can make better decisions in real time and improve the efficiency of multi-band MAP-Co. §.§ Future MAP-Co Architecture This paper explores two types of MAP-Co architecture: MA-based and MC-based. The MC-based approach is centralized and uses a local server to connect all APs through a wired backhaul network. In the future, we envision MAP-Co systems that use mmWave and THz WLANs, which will connect to the local server together with all 2.4GHz APs, as shown in Fig. <ref>. However, the local server has limited computation power and communication capacity, so a semi-centralized architecture is necessary to balance the load at each MC and prevent congestion. To accomplish this, an intelligent policy for the semi-centralized MAP-Co architecture must be developed to determine the connections between multiple APs and MCs, as depicted in Fig. <ref>. The challenge for such a policy will be to account for the delay and the dynamic nature of the WLAN environment, such as mobility and traffic variation. Therefore, we propose a deep-learning-based approach to determine the MAP-Co control at each local server and how to distribute it among different local servers. During our previous discussion, we covered the MA-based architecture, which is a type of MAP-Co architecture that uses a semi-distributed approach. In this approach, one AP serves as the master, with the other APs acting as slaves. Although this approach holds promise, it poses several challenges and practical issues for future MAP-Co systems that include heterogeneous APs such as 2.4 GHz/60GHz/THz. The current MAP-Co architecture only considers clusters of the same type of WLAN network, but the future MA-based cluster will include other WLANs like mmWave and THz. This raises new questions, such as how to form a heterogeneous cluster that is dynamic and flexible based on the current network environment and connections, and what the optimal cluster size of the heterogeneous MAP-Co should be. Additionally, the existing MA-based architecture selects a fixed AP as the master, as shown in Fig. <ref>. This prompts the question of whether it is fair to always rely on one access point (AP) for all responsibilities, leading to potential overload. Alternatively, could a scalable MA-based architecture with a higher degree of distribution be a better solution?
To address this issue, we propose a dynamic cluster head approach that selects the master AP based on the overhead conditions and MAP-Co features of each AP. The master AP can be chosen from the same WLAN (intra-head) or a different WLAN (inter-head). However, exhaustively checking every intra- and inter-cluster AP as a candidate master AP with a brute-force algorithm is an NP-hard problem that cannot be solved in real time. Therefore, researchers need to propose an intelligent method to choose the intra- or inter-cluster head (master AP) based on the traffic at each AP and the WLAN conditions, maximizing network performance. § CONCLUSION The MAP-Co protocol is being considered for use in future WLANs such as beyond-802.11be, beyond-802.11ay, and THz due to its ability to solve issues related to ICI and resource utilization. However, most surveys and WLAN standards have focused on single-AP communication features, such as MU-MIMO, OFDMA, and multi-link, without providing a comprehensive and organized state-of-the-art of the MAP-Co architecture and its features. In this article, we provide an organized state-of-the-art of the MAP-Co architecture and its features; the architecture can be of two types: MC-based and MA-based coordination. We discuss the advantages and drawbacks of each architecture. We also present a review of MAP-Co features such as coordinated channel sounding, C-OFDMA, CSR to avoid ICI, CBF to mitigate interference, and JTX to increase throughput. We conducted a thorough analysis of several techniques for multi-AP channel sounding. Our review revealed that both explicit and implicit multi-AP channel sounding methods face significant overhead challenges in future WLANs. To address this issue, we examined various solutions and found that certain methods, such as multiple component and codebook-based feedback or deep-learning-based channel sounding, effectively mitigate the overhead problems associated with multi-AP explicit and implicit channel sounding. Our research on C-OFDMA methods highlights the necessity for a proactive and less time-consuming resource allocation approach. Additionally, we provided an overview of existing OBSS-PD, PCSR, and MAC-based CSR methods. We determined that many of these methods are reactive and cause significant delays before performing CSR, which may not meet the demands of upcoming applications. We also conducted a detailed analysis of the latest developments in CBF and JTX and summarized the lessons learned. Finally, we presented potential advancements for MAP-Co in the realm of emerging WLANs, including blockage-aware MAP-Co, proactive multi-AP channel sounding, multi-band MAP-Co, and future MAP-Co architectures. For each of these areas, we identified open issues and provided suggestions on how to address them. § ACKNOWLEDGMENT This research was carried out as a part of "Research and Development of Ultra-Large Capacity Wireless LAN using Terahertz Waves", supported by the Ministry of Internal Affairs and Communications (MIC), Japan. This work is also part of "JUNO: R&D for Programmable Networking for next-generation Core and Beyond 5G/6G networks" (No. 22403), which is supported by the National Institute of Information and Communications Technology (NICT), Japan. A list of acronyms is provided in Table <ref>. shikharS. Verma, Y. Kawamoto and N. Kato, "A Smart Internet-Wide Port Scan Approach for Improving IoT Security Under Dynamic WLAN Environments," in IEEE Internet of Things Journal, vol. 9, no. 14, pp. 11951-11961, Jul. 2022.
ieee1 “IEEE Standard for Information Technology–Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks–Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 1: Enhancements for High-Efficiency WLAN," in IEEE Std 802.11ax-2021 (Amendment to IEEE Std 802.11-2020) , vol., no., pp.1-767, May 2021. vr D. Shi, F. Liu, Q. Yutian and Y. Ji, “A WLAN-based positioning system for indoor augmented reality services," in International Conference on Information Science, Electronics and Electrical Engineering, Sapporo, Japan, Nov. 2014, pp. 420-424. kho E. Khorov, I. Levitsky and I. F. Akyildiz, “Current Status and Directions of IEEE 802.11be, the Future Wi-Fi 7," in IEEE Access, vol. 8, pp. 88664-88688, May 2020. ieee2 “IEEE Standard for Information Technology–Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks–Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 2: Enhanced Throughput for Operation in License-exempt Bands above 45 GHz," in IEEE Std 802.11ay-2021 (Amendment to IEEE Std 802.11-2020 as amendment by IEEE Std 802.11ax-2021), vol., no., pp.1-768, Jul. 2021. lopez D. Lopez-Perez, A. Garcia-Rodriguez, L. Galati-Giordano, M. Kasslin and K. Doppler, “IEEE 802.11be Extremely High Throughput: The Next Generation of Wi-Fi Technology Beyond 802.11ax," in IEEE Communications Magazine, vol. 57, no. 9, pp. 113-119, Sep. 2019. ieee3 “IEEE Standard for Information Technology–Telecommunications and Information Exchange between Systems - Local and Metropolitan Area Networks–Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications - Corrigendum 1 – Correct IEEE 802.11ay Assignment of Protected Announce Support bit," in IEEE Std 802.11-2020/Cor 1-2022 (Corrigendum to IEEE Std 802.11-2020 as amended by IEEE Std 802.11ax-2021, IEEE Std 802.11ay-2021, and IEEE Std 802.11ba-2021), vol., no., pp.1-18, Dec. 2022, kobayashi M. Kobayashi, H. Motozuka, T. Urushihara, N. Shirakata and K. Takinami, “IEEE 802.11ad/WiGig based millimeter-wave small cell systems with adjacent channel interference suppression," in IEEE Conference on Standards for Communications and Networking (CSCN), Berlin, Germany, Dec. 2016, pp. 1-5. niyato2H. Du et al., “Performance and Optimization of Reconfigurable Intelligent Surface Aided THz Communications," in IEEE Transactions on Communications, vol. 70, no. 5, pp. 3575-3593, May 2022. shen L. -H. Shen, K. -T. Feng and L. Hanzo, "Coordinated Multiple Access Point Multiuser Beamforming Training Protocol for Millimeter Wave WLANs," in IEEE Transactions on Vehicular Technology, vol. 69, no. 11, pp. 13875-13889, Nov. 2020. aysurvey P. Zhou et al., “IEEE 802.11ay-Based mmWave WLANs: Design Challenges and Solutions," in IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 1654-1681, Mar. 2018, deng C. Deng et al., “IEEE 802.11be Wi-Fi 7: New Challenges and Opportunities," in IEEE Communications Surveys & Tutorials, vol. 22, no. 4, pp. 2136-2166, Jul. 2020. thomas T. K. Paul and T. Ogunfunmi, “Evolution, insights and challenges of the PHY layer for the emerging ieee 802.11n amendment," in IEEE Communications Surveys & Tutorials, vol. 11, no. 4, pp. 131-150, Dec. 2009. xiao Y. Xiao, “IEEE 802.11n: enhancements for higher throughput in wireless LANs," in IEEE Wireless Communications, vol. 12, no. 6, pp. 82-91, Dec. 2005. 
karam R. Karmakar, S. Chattopadhyay and S. Chakraborty, “Impact of IEEE 802.11n/ac PHY/MAC High Throughput Enhancements on Transport and Application Protocols—A Survey," in IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2050-2091, Aug. 2017. R5D. -J. Deng, S. -Y. Lien, J. Lee and K. -C. Chen, “On Quality-of-Service Provisioning in IEEE 802.11ax WLANs," in IEEE Access, vol. 4, pp. 6086-6104, Aug. 2016. R6B. Li, Q. Qu, Z. Yan and M. Yang, “Survey on OFDMA based MAC protocols for the next generation WLAN," in IEEE Wireless Communications and Networking Conference Workshops (WCNCW), New Orleans, LA, USA, Jun. 2015, pp. 131-135. R7 S. Brahmi, M. Yazid and M. Omar, “Multiuser Access via OFDMA Technology in High Density IEEE 802.11ax WLANs: A Survey," in Second International Conference on Embedded & Distributed Systems (EDiS), Oran, Algeria, Dec. 2020, pp. 105-110. R8E. Khorov, A. Kiryanov, A. Lyakhov and G. Bianchi, "A Tutorial on IEEE 802.11ax High Efficiency WLANs," in IEEE Communications Surveys & Tutorials, vol. 21, no. 1, pp. 197-216, Sept. 2019. R9 S. Szott et al., “Wi-Fi Meets ML: A Survey on Improving IEEE 802.11 Performance With Machine Learning," in IEEE Communications Surveys & Tutorials, vol. 24, no. 3, pp. 1843-1893, Jun. 2022. R13Yang, M., Li, B. “Survey and Perspective on Extremely High Throughput (EHT) WLAN-IEEE 802.11be," in Springer Mobile Networks and Applications, Vol. 25, pp. 1765–1780, Oct. 2020. R14C. Chen, X. Chen, D. Das, D. Akhmetov and C. Cordeiro, “Overview and Performance Evaluation of Wi-Fi 7," in IEEE Communications Standards Magazine, vol. 6, no. 2, pp. 12-18, Jun. 2022, R15R. Liao, B. Bellalta, M. Oliver and Z. Niu, “MU-MIMO MAC Protocols for Wireless Local Area Networks: A Survey," in IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 162-183, Dec. 2014. R4 E. Charfi, L. Chaari and L. Kamoun, “PHY/MAC Enhancements and QoS Mechanisms for Very High Throughput WLANs: A Survey," in IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1714-1735, Feb. 2013. R16T. Nitsche, C. Cordeiro, A. B. Flores, E. W. Knightly, E. Perahia, and J. C. Widmer, “IEEE 802.11ad: directional 60 GHz communication for multi-Gigabit-per-second Wi-Fi [Invited Paper]," in IEEE Communications Magazine, vol. 52, no. 12, pp. 132-141, December 2014, R17L. Verma, M. Fakharzadeh and S. Choi, “Wifi on steroids: 802.11AC and 802.11AD," in IEEE Wireless Communications, vol. 20, no. 6, pp. 30-35, Dec. 2013. R18X. Wang et al., “Millimeter Wave Communication: A Comprehensive Survey," in IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 1616-1653, Jun. 2018. R19C. Chen, O. Kedem, C. R. C. M. d. Silva and C. Cordeiro, “Millimeter-Wave Fixed Wireless Access Using IEEE 802.11ay," in IEEE Communications Magazine, vol. 57, no. 12, pp. 98-104, Dec. 2019. cari L. Cariou, 802.11 EHT Proposed PAR, Mar. 2019, [online] Available: https://mentor.ieee.org/802.11/dcn/18/11-18-1231-04-0eht-eht-draft-proposed-par.docx ryu K. Ryu, “Consideration on Multi-AP Coordination for EHT", Piscataway, NJ, USA, Jan. 2019, [online] Available: https://mentor.ieee.org/802.11/documents?is_dcn=1982&is_group=0eht. perezD.L. Perez, “Distributed MU-MIMO Architecture Design Considerations," Piscataway, NJ, USA, Jan. 2019, [online] Available: https://mentor.ieee.org/802.11/documents?is_dcn=89&is_group=0eht. 
ieee4 IEEE, 802.11-2016, “IEEE Standard for Information Technology—Telecommunications and Information Exchange Between Systems Local and Metropolitan Area Networks-Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications", Dec. 2016. sch S. Schelstraete, I. Latif, D. Dash and H. Wang, “Implicit Sounding Overhead Analysis", Jul. 2019, [online] Available: https://mentor.ieee.org/802.11/dcn/19/11-19-1268-00-00be-implicit-sounding-overhead-analysis.pptx. ieee6IEEE, 802.11ac(TM)-2013, “IEEE Standard for Information Technology—Telecommunications and Information Exchange Between Systems local And Metropolitan Area Networks—Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications—Amendment 4: Enhancements for Very High Throughput for Operation in Bands Below 6 GHz", Dec. 2013. ieee7IEEE, 802.11ah-2016, “IEEE Standard for Information Technology—Telecommunications and Information Exchange Between Systems—Local and Metropolitan Area Networks–Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 2: Sub 1 GHz License Exempt Operation", May 2017. ieee8IEEE, P802.11ay/D4.0, “IEEE Draft Standard for Information Technology—Telecommunications and Information Exchange Between Systems Local and Metropolitan Area Networks—Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications—Amendment: Enhanced Throughput for Operation in License-Exempt Bands Above 45 GHz", Jul. 2019. parkS. Park, “Multi-Link-BA-Bitmap-Parsing-Rule", Piscataway, NJ, USA, Mar. 2019, [online] Available: https://mentor.ieee.org/802.11/documents?is_dcn=0448&is_group=00be. jyuR. J. Yu, “Sounding Procedure in AP Collaboration", Piscataway, NJ, USA, Jul. 2019, [online] Available: https://mentor.ieee.org/802.11/documents?is_dcn=1097&is_group=00be. ayfeature Y. Ghasempour, C. R. C. M. da Silva, C. Cordeiro and E. W. Knightly, “IEEE 802.11ay: Next-Generation 60 GHz Communication for 100 Gb/s Wi-Fi," in IEEE Communications Magazine, vol. 55, no. 12, pp. 186-192, Dec. 2017. E1X. -a. Wang and S. B. Wicker, “Channel Estimation and Feedback in OFDM Systems," in IEEE 77th Vehicular Technology Conference (VTC Spring), Dresden, Germany, Jan. 2013, pp. 1-5. E2“IEEE Draft Standard for Information Technology—Telecommunications and Information Exchange Between Systems Local and Metropolitan Area Networks—Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications— Amendment: Enhanced Throughput for Operation in License-Exempt Bands Above 45 GHz", IEEE Standard P802.11ay/D4.0, Jul. 2019. E3R. Porat, E. Ojard, N. Jindal, M. Fischer and V. Erceg, “Improved MU-MIMO performance for future 802.11 systems using differential feedback," in IEEE Information Theory and Applications Workshop (ITA), San Diego, CA, USA, May 2013, pp. 1-5. E4 K. Oteri, “Feedback Overhead Reduction in 802.11be," Piscataway, NJ, USA, Mar. 2019, [online] Available: https://mentor.ieee.org/802.11/documents?is_dcn=0391&is_group=00be. E5Chau Yuen, Sumei Sun and Mel Meau Shin Ho, “Beamforming matrix quantization with variable feedback rate," in IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications, Cannes, France, Dec. 2008, pp. 1-5. E6D. J. Love, R. W. Heath, V. K. N. Lau, D. Gesbert, B. D. Rao and M. 
Andrews, “An overview of limited feedback in wireless communication systems," in IEEE Journal on Selected Areas in Communications, vol. 26, no. 8, pp. 1341-1365, Oct. 2008. E7Z. Liu, L. Zhang and Z. Ding, “Overcoming the Channel Estimation Barrier in Massive MIMO Communication via Deep Learning," in IEEE Wireless Communications, vol. 27, no. 5, pp. 104-111, Oct. 2020. E8M. Belgiovine, K. Sankhe, C. Bocanegra, D. Roy and K. R. Chowdhury, “Deep Learning at the Edge for Channel Estimation in Beyond-5G Massive MIMO," in IEEE Wireless Communications, vol. 28, no. 2, pp. 19-25, Apr. 2021. gastM. Gast, “802.11n: A Survival Guide," Newton, MA, USA:O’Reilly Media, 2012. dooR. Doostnejad, Z. Avital, L. Cariou, X. Chen, F. Jiang and Q. Li, “Implicit Channel Sounding in IEEE 802.11 (Feasibility Study)," Jun. 2019, [online] Available: https://mentor.ieee.org/802.11/dcn/19/11-19-0767-01-00be-implicit-channel-sounding-in-ieee-802-11-feasibility-study.pptx. rogaR. Rogalin, O. Y. Bursalioglu, H. C. Papadopoulos, G. Caire and A. F. Molisch, “Hardware-impairment compensation for enabling distributed large-scale MIMO", in IEEE Information Theory and Applications Workshop (ITA), San Diego, CA, USA, May. 2013, pp. 1-10. shepardC. Shepard, H. Yu, N. Anand, L. E. Li, T. L. Marzetta, R. Yang, et al., “Argos: Practical many-antenna base stations", in ACM International Conference on Mobile Computing and Networking (MobiCom), NY, USA, Aug. 2012, pp. 53-64. cs1“IEEE Standard for Information technology—Local and Metropolitan Area Networks—Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC)and Physical Layer (PHY) Specifications Amendment 5: Enhancements for Higher Throughput," IEEE Standard 802.11n-2009, Oct. 2009 xiwenX. Jiang et al., “A Framework for Over-the-Air Reciprocity Calibration for TDD Massive MIMO Systems," in IEEE Transactions on Wireless Communications, vol. 17, no. 9, pp. 5975-5990, Sept. 2018 cal1T. Moon, J. Gaun and H. Hassanieh, “Online Millimeter Wave Phased Array Calibration Based on Channel Estimation," 2019 IEEE 37th VLSI Test Symposium (VTS), Monterey, CA, USA, 2019, pp. 1-6, doi: 10.1109/VTS.2019.8758627. cal2M. Park, C. Cordeiro, E. Perahia and L. L. Yang, “Millimeter-wave multi-Gigabit WLAN: Challenges and feasibility," IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications, Cannes, France, Dec. 2008, pp. 1-5. cofdma6 P. Imputato, S. Avallone and D. Magrin, “Multi-AP coordination in Wi-Fi 7 exploiting time resources sharing," in IEEE International Mediterranean Conference on Communications and Networking (MeditCom), Athens, Greece, Sept. 2022, pp. 166-171. cofdmaG. Haile and J. Lim, “C-OFDMA: Improved Throughput for Next Generation WLAN Systems Based on OFDMA and CSMA/CA," in 4th International Conference on Intelligent Systems, Modelling and Simulation, Bangkok, Thailand, Apr. 2013, pp. 497-502. cofdma1 K. T. Kim and S. K. Oh, “Multi-Cell Coordinated Radio Resource Management Scheme Using a Cell-Specific Sequence in OFDMA Cellular Systems," in IEEE Annual Wireless and Microwave Technology Conference, Clearwater Beach, FL, USA, Dec. 2006, pp. 1-5. cofdma2 R. Y. Chang, Z. Tao, J. Zhang and C. . -C. J. Kuo, “Multicell OFDMA Downlink Resource Allocation Using a Graphic Framework," in IEEE Transactions on Vehicular Technology, vol. 58, no. 7, pp. 3494-3507, Sept. 2009. cofdma3 Y. J. Chang, Z. Tao, J. Zhang and C. . -C. J. 
Kuo, “A Graph-Based Approach to Multi-Cell OFDMA Downlink Resource Allocation," IEEE GLOBECOM 2008 - 2008 IEEE Global Telecommunications Conference, New Orleans, LA, USA, 2008, pp. 1-6, cofdma4 L. Hanhui, C. Xinyue, W. Weidong, Z. Yinghai and C. Gaofeng, “A centralized iterative water filling algorithm in coordinated multipoint OFDMA network," in 4th IEEE International Conference on Network Infrastructure and Digital Content, Beijing, China, Sept. 2014, pp. 174-179. cofdma5 W. Zhang, Y. Wang, F. Peng and Y. Yuan, “Interference Coordination with Vertical Beamforming in 3D MIMO-OFDMA Networks," in IEEE Communications Letters, vol. 18, no. 1, pp. 34-37, Jan. 2014. cofdma7 K. Wang, X. Wang, W. Xu and X. Zhang, “Coordinated Linear Precoding in Downlink Multicell MIMO-OFDMA Networks," in IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4264-4277, Aug. 2012. cofdma8 Zhang, H., Dai, H, “Cochannel Interference Mitigation and Cooperative Processing in Downlink Multicell Multiuser MIMO Networks," EURASIP Journal on Wireless Communications and Networking, SP. 202654 Dec. 2004. cofdma9 A. Kasaeyan, K. Mohamed-pour and S. M. H. Andargoli, “Double-layered coordinated resource allocation in multi-cell MIMO-OFDMA systems," in 21st Iranian Conference on Electrical Engineering (ICEE), Mashhad, Iran, Sept. 2013, pp. 1-5. cofdma10 L. Venturino, A. Zappone, C. Risi and S. Buzzi, “Energy-Efficient Scheduling and Power Allocation in Downlink OFDMA Networks With Base Station Coordination," in IEEE Transactions on Wireless Communications, vol. 14, no. 1, pp. 1-14, Jan. 2015. cofdma11 L. Venturino, N. Prasad and X. Wang, “Coordinated Scheduling and Power Allocation in Downlink Multicell OFDMA Networks," in IEEE Transactions on Vehicular Technology, vol. 58, no. 6, pp. 2835-2848, Jul. 2009. cofdma12 O. Onireti, F. Heliot and M. A. Imran, “On the Energy Efficiency-Spectral Efficiency Trade-Off in the Uplink of CoMP System," in IEEE Transactions on Wireless Communications, vol. 11, no. 2, pp. 556-561, February 2012. cofdma13 Chen, “An Energy-Efficient Channel Access With Target Wake Time Scheduling for Overlapping 802.11ax Basic Service Sets," in IEEE Internet of Things Journal, vol. 9, no. 19, pp. 18973-18986, Oct. 2022. ieeecsrK. Aio, “Coordinated Spatial Reuse Performance Analysis", Piscataway, NJ, USA, Sep. 2019, [online] Available: https://mentor.ieee.org/802.11/documents?is_dcn=1534&is_group=0eht. csrproc S. Park, “Coordinated Spatial Reuse Procedure, Piscataway," NJ, USA, Mar. 2020, [online] Available: https://mentor.ieee.org/802.11/dcn/20/11-20-0410-04-00be-coordinated-spatial-reuse-procedure.pptx. CSR K. Kawamura, A. Inoki, S. Nakayama, K. Wakao and Y. Takatori, “Cooperative control of 802.11ax access parameters in high density wireless LAN systems," in IEEE Wireless Communications and Networking Conference (WCNC), Marrakesh, Morocco, Apr. 2019, pp. 1-6. CSR1 H. Lee, H. -S. Kim and S. Bahk, “LSR: Link-aware Spatial Reuse in IEEE 802.11ax WLANs," in IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, May. 2021, pp. 1-6. CSR2 M. Knitter and R. Kays, “Spatial Reuse Insights for IEEE 802.11ax and IEEE 802.11be Wireless LANs and Beyond," in IEEE 33rd Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Kyoto, Japan, Sept. 2022, pp. 919-925. CSR3 R. M. Radaydeh, A. Zafar, F. S. Al-Qahtani and M. -S. 
Alouini, “Improved Interference-Free Channel Allocation in Coordinated Multiuser Multiantenna Open-Access Small Cells," in IEEE Transactions on Vehicular Technology, vol. 65, no. 12, pp. 9994-10010, Dec. 2016. CSR4 D. López-Pérez, X. Chu, A. V. Vasilakos and H. Claussen, “On Distributed and Coordinated Resource Allocation for Interference Mitigation in Self-Organizing LTE Networks," in IEEE/ACM Transactions on Networking, vol. 21, no. 4, pp. 1145-1158, Aug. 2013. CSR5 A. Garcia-Rodriguez, D. López-Pérez, L. Galati-Giordano and G. Geraci," IEEE 802.11be: Wi-Fi 7 Strikes Back," in IEEE Communications Magazine, vol. 59, no. 4, pp. 102-108, Apr. 2021. CSR6W. Yu, T. Kwon and C. Shin, “Multicell Coordination via Joint Scheduling, Beamforming, and Power Spectrum Adaptation," in IEEE Transactions on Wireless Communications, vol. 12, no. 7, pp. 1-14, Jul. 2013. CSR7 D. Lee et al., “Coordinated multipoint transmission and reception in LTE-advanced: deployment scenarios and operational challenges," in IEEE Communications Magazine, vol. 50, no. 2, pp. 148-155, Feb. 2012. CSRN1C. -Y. Chen, Y. -Y. Chen and H. -Y. Wei, “Multi-cell interference coordinated scheduling in mmWave 5G cellular systems," in IEEE Eighth International Conference on Ubiquitous and Future Networks (ICUFN), Vienna, Austria, Aug. 2016, pp. 912-917. CSRN2R. K. Saha, S. Nanba and K. Nishimura, “A Technique for Cloud Based Clustering and Spatial Resource Reuse and Scheduling of 3D In-Building Small Cells Using CoMP for High Capacity CRAN," in IEEE Access, vol. 6, pp. 71602-71621, Nov. 2018. CSRN3 J. Chang, J. Heo and W. Sung, “Cooperative interference mitigation using fractional frequency reuse and intercell spatial demultiplexing," in Journal of Communications and Networks, vol. 10, no. 2, pp. 127-136, Jun. 2008. CSRN4C. Fan, B. Li, C. Zhao, W. Guo and Y. -C. Liang, “Learning-Based Spectrum Sharing and Spatial Reuse in mm-Wave Ultradense Networks," in IEEE Transactions on Vehicular Technology, vol. 67, no. 6, pp. 4954-4968, Jun. 2018, pscr-wlanYuto Kihira, Koji Yamamoto, Akihito Taya, Takayuki Nishio, Yusuke Koda, Kazuto Yano, “Interference-free AP identification and shared information reduction for tabular Q-learning-based WLAN coordinated spatial reuse", IEICE Communications Express, Vol. 11, no. 7, pp. 392-397, Jul. 2022. mlcsr Kihira, Y., Koda, Y., Yamamoto, K., and Nishio, T. (2023). “Adversarial Reinforcement Learning-Based Coordinated Robust Spatial Reuse in Broadcast-Overlaid WLANs," IEICE Transactions on Communications, Vol. E106B, no. 2, pp. 203-212, Fe. 2023. CSR8Wang, James June-Ming, et al. “Multi-Access Point Coordinated Spatial Reuse Protocol And Algorithm." U.S. Patent Application No. 17/066,103. CSR9 D. Nunez, F. Wilhelmi, S. Avallone, M. Smith and B. Bellalta, “TXOP sharing with Coordinated Spatial Reuse in Multi-AP Cooperative IEEE 802.11be WLANs," in IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, Jan. 2022, pp. 864-870. cbfwlan A. Garcia-Rodriguez, D. Lopez-Perez, M. Kasslin, O. Alanen, and L. Galati, “Coordinated Null Steering for EHT", document IEEE 802.11- 19/0811r1, May 2019. cbflte1D. Lee et al., “Coordinated Multipoint Transmission and Reception in LTE-Advanced: Deployment Scenarios and Operational Challenges,” IEEE Communication Magazine, vol. 50, no. 2, pp. 148–55, Feb. 2012. cbflte2 L. 
Bertizzolo et al., “CoBeam: Beamforming-Based Spectrum Sharing With Zero Cross-Technology Signaling for 5G Wireless Networks,” IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, Toronto, ON, Canada, Jul. 2020, pp. 1429-1438. null1Kun Tan, He Liu, Ji Fang, Wei Wang, Jiansong Zhang, Mi Chen, and Geoffrey M. Voelker, “SAM: enabling practical spatial multiple access in wireless LAN” in 15th annual international conference on Mobile computing and networking (MobiCom '09), Association for Computing Machinery, New York, NY, USA, Sept. 2009, pp. 49–60. null2Shyamnath Gollakota, Samuel David Perli, and Dina Katabi. “Interference alignment and cancellation," in ACM SIGCOMM 2009 conference on Data communication (SIGCOMM '09), Association for Computing Machinery, New York, NY, USA, Oct. 2009, pp. 159–170. cbf1Hariharan Shankar Rahul, Swarun Kumar, and Dina Katabi. “JMB: scaling wireless capacity with user demands," in Communications of the ACM, Vol. 57, no. 7, pp 97–106, Jul. 2014. cbf2 Ezzeldin Hamed, Hariharan Rahul, Mohammed A. Abdelghany, and Dina Katabi. “Real-time Distributed MIMO Systems," in ACM SIGCOMM Conference (SIGCOMM '16). Association for Computing Machinery, New York, NY, USA, Aug. 2016, pp. 412–425. cbf3 Horia Vlad Balan, Ryan Rogalin, Antonios Michaloliakos, Konstantinos Psounis, and Giuseppe Caire. “Achieving high data rates in a distributed MIMO system," in 18th annual international conference on Mobile computing and networking (Mobicom '12), Association for Computing Machinery, New York, NY, USA, Aug. 2012, pp. 41–52. cbf4 P. Yang, Y. Yan, X. Li and Y. Zhang, “POLYPHONY: Scheduling-free cooperative signal recovery in enterprise wireless networks", IEEE Transactions on Mobile Computing, vol. 16, no. 9, pp. 2599-2610, Sep. 2017. cbf5 B. Chen, V. Yenamandra, and K. Srinivasan, “FlexRadio: Fully flexible radios and networks,” in 12th USENIX Conference on Networked Systems Design and Implementation, Berkeley, CA, United States, May 2015, pp. 205–218. cbf6 F. Adib, S. Kumar, O. Aryan, S. Gollakota, and D. Katabi, “AirShare: Distributed coherent transmission made seamless,” in IEEE Conference on Computer Communications (INFOCOM), Hong Kong, China, Aug. 2015, pp. 1742-1750. cbf7 F. Adib, S. Kumar, O. Aryan, S. Gollakota, and D. Katabi, “Poster: Clock synchronization for distributed wireless protocols at the physical layer,” in MobiCom '14: Proceedings of the 20th annual international conference on Mobile computing and networking, Maui Hawaii, USA, Sept. 2014, pp. 337–340. cbf8 H. Yu, O. Bejarano, and L. Zhong, “Combating inter-cell interference in 802.11ac-based multi-user MIMO networks,” in MobiCom '14: Proceedings of the 20th annual international conference on Mobile computing and networking, Maui Hawaii, USA, Sept. 2014, pp. 141–152. cbf9 S. Kumar, D. Cifuentes, S. Gollakota, and D. Katabi, “Bringing cross- layer MIMO to today’s wireless LANs,” in SIGCOMM '13: Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM, NY, United States, Aug. 2013, pp. 387–398. cbf10X. Zhang, K. Sundaresan, and K. G. Shin, “NEMOx: Scalable network MIMO for wireless networks,” in MobiCom '13: Proceedings of the 19th annual international conference on Mobile computing & networking, Miami, Florida, USA, Sept. 2013, pp. 453–464. cbf11Y. Du, E. Aryafar, J. Camp, and M. Chiang, “iBeam: Intelligent client- side multi-user beamforming in wireless networks,” in IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, Toronto, ON, Canada, Jul. 2014, pp. 817-825. cbf12K. C.-J. Lin, W.-L. 
Shen, M.-S. Chen, and K. Tan, “User-centric network MIMO with dynamic clustering,” IEEE/ACM Transactions on Networking, vol. 25, no. 3, pp. 1910-1923, Jun. 2017. cbf13W.-L. Shen, K. C.-J. Lin, M.-S. Chen, and K. Tan, “Client as a first- class citizen: Practical user-centric network MIMO clustering,” in IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, San Francisco, CA, USA, Jul. 2016, pp. 1-9. JT1S. Brueck, L. Zhao, J. Giese and M. A. Amin, “Centralized scheduling for joint transmission coordinated multi-point in LTE-Advanced," in IEEE International ITG Workshop on Smart Antennas (WSA), Bremen, Germany, Apr. 2010, pp. 177-184. JT2Y. -P. Zhang, L. Xia, P. Zhang, S. Feng, J. Sun and X. Ren, “Joint Transmission for LTE-Advanced Systems with Non-Full Buffer Traffic," in IEEE 75th Vehicular Technology Conference (VTC Spring), Yokohama, Japan, Jul. 2012, pp. 1-6. JT5 Y. Zhang, S. Bi and Y. -J. A. Zhang, “User-Centric Joint Transmission in Virtual-Cell-Based Ultra-Dense Networks," in IEEE Transactions on Vehicular Technology, vol. 67, no. 5, pp. 4640-4644, May 2018. JT4 T. M. Shami, D. Grace, A. Burr and M. D. Zakaria, “User-centric JT-CoMP clustering in a 5G cell-less architecture," in IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Bologna, Italy, Dec. 2018, pp. 177-181. JT3Guy Grebla, Berk Birand, Peter van de Ven, Gil Zussman, “Joint transmission in cellular networks with CoMP—Stability and scheduling algorithms," Performance Evaluation, Volume 91, Pages 38-55, Jun. 2015. JT6 Ding Zhang, Mihir Garude, and Parth H. Pathak. “MmChoir: Exploiting Joint Transmissions for Reliable 60GHz mmWave WLANs," In Eighteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing (Mobihoc '18), Association for Computing Machinery, New York, NY, USA, Jun. 2018, pp. 251–260. JT7 T. Nakazato, Y. Kawamoto and N. Kato, “Radio Access Control of Access Points and Intelligent Reflecting Surfaces for Data Rate Improvement in Joint Transmission," IEEE 95th Vehicular Technology Conference: (VTC2022-Spring), Helsinki, Finland, Jun. 2022, pp. 1-5. JT8 Tan, P., Wang, D., Yang, M., Yan, Z., Li, B. “CRJT: Channel Reservation Based Joint Transmission MAC Protocol for the Next Generation WLAN," In: Chen, JL., Pang, AC., Deng, DJ., Lin, CC. (eds) Wireless Internet. WICON 2018. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 264. Springer, Jan. 2019. JT9 A. Titus, R. Bansal, T. V. Sreejith, A. A. Kherani and N. Akhtar, “Decision Problems for Joint Transmission in Multi-AP Coordination Framework of IEEE 802.11be," in IEEE International Conference on Communication Systems & NETworkS (COMSNETS), Bangalore, India, Jan. 2021, pp. 326-333. JT10 ] S. P. Sundaravaradhan, R. Porat and K. N. Toussi, “Increasing Spatial Multiplexing Gain in Future Multi-AP WiFi Systems via Joint Transmission," in IEEE Communications Standards Magazine, vol. 6, no. 2, pp. 20-26, Jun. 2022. JT11 H. V. Balan, R. Rogalin, A. Michaloliakos, K. Psounis and G. Caire, “AirSync: Enabling Distributed Multiuser MIMO With Full Spatial Multiplexing," in IEEE/ACM Transactions on Networking, vol. 21, no. 6, pp. 1681-1695, Dec. 2013, JT12 Ashikhmin, Alexei, et al. “Pilot assignment in cell free massive MIMO wireless systems." U.S. Patent No. 9,615,384. 4 Apr. 2017 JT13 Björnson, Emil, and Luca Sanguinetti. 
Shikhar Verma (Member, IEEE) received the M.Sc. and Ph.D. degrees from the Graduate School of Information Sciences (GSIS), Tohoku University, Sendai, Japan, in 2018 and 2021, respectively. Since 2021, he has been a Research Assistant Professor with GSIS, Tohoku University. Dr. Verma was also a recipient of the prestigious MEXT Scholarship and JSPS Fellowship. He also received the Dean's Award from Tohoku University in 2021 and the Best Paper Award at IEEE ICC in 2018.

Tiago Koketsu Rodrigues (Member, IEEE), previously Tiago Gama Rodrigues, has been an assistant professor at Tohoku University since April 2020. He received his Bachelor's Degree in Computer Science from the Federal University of Piaui, Brazil, in 2014. He worked as a researcher in 2013 at Bucknell University, U.S.A., and in 2014 at Tohoku University, Japan. He received his M.Sc. degree from Tohoku University in 2017 and his Ph.D. from the same institution in 2020.
He has previously been awarded a scholarship by the Coordination for the Improvement of Higher Education Personnel of Brazil to study for one year at Bucknell University in 2013, and a scholarship from the Japanese Ministry of Education, Culture, Sports, Science and Technology to pursue his postgraduate studies in Japan. He was the recipient of the 2017 and 2020 Tohoku University Graduate School of Information Sciences Dean Awards, the 2018 Best Paper Award from IEEE Transactions on Computers, the 2020 Tohoku University President Award, and the IEEE Communications Society Asia-Pacific Region 2022 Outstanding Young Researcher Award, among others. From 2017 to 2020, he was the System Administrator of IEEE Transactions on Vehicular Technology. Since 2020, he has served as an editor of IEEE Network and IEEE Transactions on Vehicular Technology. Since 2023, he has been the Lead System Administrator of IEEE Internet of Things Journal.

Yuichi Kawamoto (Member, IEEE) is serving as an assistant professor at GSIS. He received his B.E. degree in Information Engineering and his M.S. and Ph.D. degrees from the Graduate School of Information Sciences (GSIS) at Tohoku University, Japan, in 2011, 2013, and 2016, respectively. He was a recipient of the prestigious Dean's and President's Awards from Tohoku University in March 2016. He has also received several best paper awards at conferences including IWCMC'13, GLOBECOM'13, and WCNC 2014.

Nei Kato (Fellow, IEEE) is a full professor and the Dean of the Graduate School of Information Sciences, Tohoku University, Japan. His research interests include computer networking, wireless mobile communications, satellite communications, ad hoc, sensor, and mesh networks, UAV networks, AI, IoT, big data, and pattern recognition. He is the Editor-in-Chief of IEEE Internet of Things Journal. He served as the Vice-President (Member & Global Activities) of the IEEE Communications Society (2018-2021) and the Editor-in-Chief of IEEE Transactions on Vehicular Technology (2017-2021). He is a Fellow of the Engineering Academy of Japan, a Fellow of IEEE, and a Fellow of IEICE.
http://arxiv.org/abs/2306.05248v2
20230608144550
A new framework for the analysis of finite element methods for fluid-structure interaction problems
[ "Buyang Li", "Weiwei Sun", "Yupei Xie", "Wenshan Yu" ]
math.NA
[ "math.NA", "cs.NA" ]
A new framework for the analysis of finite element methods for fluid-structure interaction problems Buyang Li [ Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong. E-mail address: [email protected] and [email protected]. ] Weiwei Sun [Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai, 519087, P.R. China. E-mail address: [email protected]. ] [ Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College, Zhuhai, 519087, P.R.China; Hong Kong Baptist University, Kowloon Tong, Hong Kong. E-mail address: [email protected] ] Yupei Xie [1] and Wenshan Yu [3] =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Finite element methods and kinematically coupled schemes that decouple the fluid velocity and structure's displacement have been extensively studied for incompressible fluid-structure interaction (FSI) over the past decade. While these methods are known to be stable and easy to implement, optimal error analysis has remained challenging. Previous work has primarily relied on the classical elliptic projection technique, which is only suitable for parabolic problems and does not lead to optimal convergence of numerical solutions to the FSI problems in the standard L^2 norm. In this article, we propose a new kinematically coupled scheme for incompressible FSI thin-structure model and establish a new framework for the numerical analysis of FSI problems in terms of a newly introduced coupled non-stationary Ritz projection, which allows us to prove the optimal-order convergence of the proposed method in the L^2 norm. The methodology presented in this article is also applicable to numerous other FSI models and serves as a fundamental tool for advancing research in this field. Keywords: Fluid-structure interaction, finite element method, kinematically coupled schemes, energy stability, error estimates, coupled non-stationary Ritz projection § INTRODUCTION There has been increasing interest in studying fluid-structure interaction due to its diverse applications in many areas <cit.>. This article focuses on the blood flow in the human cardiovascular system, which can be modeled as incompressible flow passing through an elastic tube. Numerical simulations are crucial in this field, and over the past two decades, numerous efforts have been devoted to developing efficient numerical algorithms and analysis methods. Under certain assumptions of thin structure, a simple fluid-structure interaction model can be described by the following equations [left=]align ρ_f ∂_t -̆ (,̆ p) = 0, (0,T) ×Ω, = 0 , (0,T) ×Ω, (̆0, ·) = _̆0(x) , Ω, [left=]align ρ_s ϵ_s ∂_tt - L_s = - (,̆p), (0,T) ×Σ, (0, x) = _0(x), Σ, ∂_t (0, x) = _̆0(x), Σ with the kinematic interface condition ∂_t = (0,T) ×Σ and certain inflow and outflow conditions at Σ_l and Σ_r; see Figure <ref>. 
The unknown solutions in (<ref>)–(<ref>) are fluid velocity $̆, fluid pressurepand structure displacement, and the following notations are used: p4.5cmp9cm ϵ_s: The thickness of the structure. μ: The fluid viscosity. ρ_f: The fluid density. ρ_s: The structure density. : The outward normal vector on ∂Ω. (,̆ p) = -pI + 2 μ()̆: The fluid stress tensor. ()̆ = 1/2 ( ∇+̆ (∇)̆^T): The strain-rate tensor. L_s: An elliptic differential operator on Σ, such as L_s= - I + Δ_s, where Δ_s is the Laplace-Beltrami operator on Σ. In general, two strategies can be employed to construct numerical schemes for solving fluid-structure interaction problems. Monolithic algorithms solve a fully coupled system, which can be expensive for complex fluid-structure problems. Various studies have focused on the numerical simulation and analysis of monolithic algorithms, as can be found in <cit.>. Alternatively, partitioned algorithms based on certain weak coupling or decoupling of the fluid and solid problems have also been developed <cit.>, some of which suffer from instability in some physical applications <cit.>. Amongst these partitioned algorithms, the kinematically coupled scheme is the most popular one due to its modularity, stability, and ease of implementation. The scheme was first studied in <cit.> for the fluid-structure interaction problems and subsequently by numerous researchers <cit.>. However, the analysis of kinematically coupled schemes has been challenging due to the specific coupling of two distinct physical phenomena. In <cit.>, Fernandez proposed an incremental displacement-correction scheme, which proved to be stable, and the following energy-norm error estimate was established using piecewise polynomials of degreekfor both_̆h^nand_h^nin error-0, i.e., ^̆n - _̆h^n _L^2(Ω) + ( ∑_m=1^n τ^̆n - _̆h^n _f^2 )^1/2 + ^̆n - _̆h^n _L^2(Σ) + ^n - _h^n _s ≤ C(τ + h^k ) . Several different schemes were investigated, and similar error estimates, such as those given in <cit.>, were provided. The kinematic coupling has been extended to other applications, such as composite structures and non-Newtonian flow <cit.>, by many researchers. Recently, a fully discrete loosely coupled Robin-Robin scheme for thick structures was proposed in <cit.>, where they showed that the error estimate in the same energy norm as in error-0 is in the order ofO(√(τ) + h)fork=1. Additionally, a splitting scheme was proposed in <cit.> for the fluid-structure interaction problem with immersed thin-walled structures. The scheme was proved to have unconditional stability, and a suboptimalL^2-norm error estimate was presented. OptimalL^2-norm error estimates play a crucial role in both theoretical analysis of algorithms and development of novel algorithms for practical applications. However, to our knowledge, such results have not been established due to the lack of properly defined Ritz projections for fluid-structure interaction problems. This is in contrast to the error analysis of finite element methods for parabolic equations, where the Ritz projections have been well-known since the early work of Wheeler <cit.>. For instance, for the heat equation∂_tu-Δu=f, the Ritz projection is a finite element functionR_huthat satisfies the weak formulation: ∫_Ω∇ (u - R_h u)·∇ v_h d x = 0 . 
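As a concrete, purely illustrative companion to this classical construction, the short Python sketch below computes a one-dimensional Ritz-type projection onto piecewise linear finite elements and reports the observed L^2 error under uniform mesh refinement. To avoid boundary-condition bookkeeping it uses the coercive form a(w,v) = ∫ w'v' + wv dx, which is of the same type as the forms a_s(·,·)+(·,·)_Σ used for the structure projections later in this paper; the function names, the model function sin(πx) and the quadrature rule are our own choices and are not taken from the analysis below.

# Illustrative sketch (not part of the paper's method): an H^1-type Ritz projection
# of a smooth function onto P1 finite elements on [0,1], with
#   a(w, v) = int_0^1 w' v' + w v dx,
# and the observed L^2 convergence rate under uniform mesh refinement.
import numpy as np

GP = np.array([-1.0, 1.0]) / np.sqrt(3.0)          # 2-point Gauss points on [-1, 1]

def ritz_projection_p1(u, du, n):
    """Solve a(R_h u, v_h) = a(u, v_h) for every P1 hat function v_h."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n + 1, n + 1))                   # stiffness + mass (dense for brevity)
    b = np.zeros(n + 1)
    for e in range(n):                             # loop over elements [x_e, x_{e+1}]
        A[e:e + 2, e:e + 2] += (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]]) \
                             + (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        xm, hl = 0.5 * (x[e] + x[e + 1]), 0.5 * h
        for g in GP:                               # b_i = int u' phi_i' + u phi_i dx
            phi = np.array([0.5 * (1.0 - g), 0.5 * (1.0 + g)])
            dphi = np.array([-1.0, 1.0]) / h
            b[e:e + 2] += hl * (du(xm + hl * g) * dphi + u(xm + hl * g) * phi)
    return x, np.linalg.solve(A, b)

def l2_error(x, Rh, u, n):
    """Elementwise Gauss quadrature of |R_h u - u|^2."""
    h, err2 = 1.0 / n, 0.0
    for e in range(n):
        xm, hl = 0.5 * (x[e] + x[e + 1]), 0.5 * h
        for g in GP:
            uh = 0.5 * (1.0 - g) * Rh[e] + 0.5 * (1.0 + g) * Rh[e + 1]
            err2 += hl * (uh - u(xm + hl * g)) ** 2
    return np.sqrt(err2)

u = lambda s: np.sin(np.pi * s)
du = lambda s: np.pi * np.cos(np.pi * s)
prev = None
for n in (8, 16, 32, 64, 128):
    x, Rh = ritz_projection_p1(u, du, n)
    err = l2_error(x, Rh, u, n)
    rate = "" if prev is None else f"   rate ~ {np.log2(prev / err):.2f}"
    print(f"n = {n:4d}   L2 error = {err:.3e}{rate}")   # rates approach 2 = r+1 for r=1
    prev = err

The printed rates approach 2, the optimal order r+1 for r=1; reproducing this optimal L^2 behavior with a projection adapted to the coupled problem is precisely the purpose of the non-stationary Ritz projection introduced below.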
With this projectionR_h, the error of the finite element solution can be decomposed into two parts:u - u_h = (u -R_hu) + (R_hu - u_h).In the analysis of the second part, the pollution from the approximation of the diffusion term is not involved, thus enabling the establishment of an optimal-order error estimate for R_h u - u_h _L^2(Ω). The optimal estimate foru - u_h _L^2(Ω)can be derived from the fact that the projection erroru -R_hu _L^2(Ω)is also of optimal order. However, formulating and determining optimalL^2-norm error estimates for a suitably defined Ritz projection in fluid-structure interaction systems remains a challenge. The standard elliptic Ritz projection for the Stokes equations, while widely employed for obtaining error estimates in the energy norm, no longer produces optimalL^2-norm error estimates for such fluid-structure interaction systems; see <cit.>. In this article, we propose a new kinematically coupled scheme which decouples(,̆p)andfor solving the thin-structure interaction problem, and demonstrate its unconditionally stability for long-time computation. More importantly, we establish an optimalL^2-norm error estimate for the proposed method, i.e., ^̆n - _̆h^n _L^2(Ω) + ^̆n - _̆h^n _L^2(Σ) + ^n - _h^n _L^2(Σ)≤ C( τ + h^k+1 ) , by developing a new framework for the numerical analysis of fluid-structure interaction problems in terms of a newly introduced coupled non-stationary Ritz projection, which is defined as a triple of finite element functions(R_h,̆ R_h p, R_h) satisfying a weak formulation plus a constraint condition(R_h )̆|_Σ= ∂_t R_h onΣ×[0,T]. This is equivalent to solving an evolution equation ofR_h under some initial conditionR_h (0). Moreover, the dual problem of the non-stationary Ritz projection, required in the optimalL^2-norm error estimates for the fluid-structure interaction problem, is a backward initial-boundary value problem - ℒ_s ϕ + ϕ = ∂_t (ϕ, q) + on Σ× [0,T) -∇·σ (ϕ, q) + ϕ = 0 in Ω× [0,T) ∇·ϕ = 0 in Ω× [0,T) (ϕ, q) =0 t=T . which turns out to be equivalent to a backward evolution equation ofξ=(ϕ, q) , i.e., - ℒ_s 𝒩ξ + 𝒩ξ - ∂_t ξ = on Σ×[0,T), ξ (T) = 0 , where𝒩: H^-1/2(Σ)^d→H^1/2(Σ)^dis the Neumann-to-Dirichlet map associated to the Stokes equations. By choosing a well-designed initial valueR_h(0)and utilizing the regularity properties of the dual problem (<ref>), which are shown by analyzing the equivalent formulation in (<ref>), we are able to establish optimalL^2error estimates for the non-stationary Ritz projection and, subsequently, optimalL^2-norm error estimates for the finite element solutions of the thin-structure interaction problem. The rest of this article is organized as follows. In Section 2, we introduce a kinematically coupled scheme and present our main theoretical results on the unconditional stability and optimalL^2-norm error estimates of the scheme. We focus on a first-order kinematically coupled time-stepping method and the class ofH^1-conforming inf-sup stable finite element spaces, including the classical Taylor–Hood and MINI elements. In Section 3, we introduce a new non-stationary coupled Ritz projection and present the corresponding projection error estimates (with its proof deferred to Section 4). Then we establish unconditionally stability and optimalL^2-norm error estimates for the fully discrete finite element solutions by utilizing the error estimates for the non-stationary coupled Ritz projection. Section 4 is devoted to the proof of the error estimates of the non-stationary coupled Ritz projection. 
We present a well-designed initial value of the projection and the corresponding error estimates based on duality arguments on the thin solid structure. In Section 5, we provide two numerical examples to support the theoretical analysis presented in this article. The first example illustrates the optimalL^2-norm convergence of the proposed kinematically coupled scheme. The second example demonstrates the simulation of certain physical features, which are consistent with previous works. § NOTATIONS, ASSUMPTIONS AND MAIN RESULTS In this section, we propose a fully-discrete stabilized kinematically coupled FEM for the fluid-structure problem f-e–bc, as well as the main theoretical results of unconditional stability and optimal-order convergence in theL^2norm. §.§ Notation and weak formulation Some standard notations and operators are defined below. For any two functionu,v ∈L^2(Ω), we denote the inner products and norms ofL^2(Ω)andL^2(Σ)by (u,v) = ∫_Ω u() v() d, u^2:= (u,u) , (w, ξ)_Σ = ∫_Σ w() ξ() d, w_Σ^2 := (w, w)_Σ. We assume thatΩ⊂ℝ^d(d=2,3) is a bounded domain with∂Ω= Σ_l ∩Σ_r ∩Σ, whereΣdenotes the fluid-structure interface,Σ_landΣ_rare two disks (or lines in 2-dimensional case) denoting the inflow and outflow boundary andΣ_r = { (x,y,z+ L) : (x,y,z) ∈Σ_l L>0}. For the simplicity of analysis, we consider the problem with the periodic boundary condition onΣ_landΣ_r, and assume that the extended domainsΩ_∞andΣ_∞are smooth, where Ω_∞ :={(x,y,z):∃ k∈ℤ such that (x,y,z+Lk)∈Ω∪Σ_l } , Σ_∞ :={(x,y,z):∃ k∈ℤ such that (x,y,z+Lk)∈Σ} . The structure is assumed to be a linear thin-solid (e.g., string in two-dimensional model and membrane for three-dimensional model). Nonetheless, the algorithm presented in this paper is also applicable to problems with more general boundary conditions and domains. We say a functionfdefined inΩ_∞is periodic if f(x,y,z)=f(x,y,z+kL) ∀ (x,y,z)∈Ω∪Σ_l ∀ k∈ℤ . The space of periodic smooth functions onΩ_∞is denoted asC^∞(Ω_∞). The periodic Sobolev spacesH^s(Ω)andH^s(Σ), withs≥0, are defined as H^s(Ω) :=The closure of C^∞(Ω_∞) under the conventional norm of H^s(Ω) , H^s(Σ) :=The closure of C^∞(Σ_∞) under the conventional norm of H^s(Σ) . which are equivalent to the Sobolev spaces by consideringΩandΣas tori in thezdirection. The dual spaces ofH^s(Ω)andH^s(Σ)are denoted byH^-s(Ω)andH^-s(Σ), respectively. We define the following function spaces associated to velocity, pressure and thin structure, respectively: X(Ω): = H^1(Ω)^d , Q(Ω): = L^2(Ω) , (Σ): = H^1(Σ)^d . Correspondingly, we define the following bilinear forms: a_f(,̆): = 2 μ ( ()̆, ()) ,̆∈ X(Ω), b(p, ): = (p, ∇·) ∈ X(Ω) p∈ Q(Ω) , a_s(, ): = ( - L_s , )_Σ ,∈(Σ) . We assume thatL_sis a second-order differential operator onΣsatisfying the following conditions: ℒ_s_H^k(Σ)≤ C_H^k+2(Σ) ∀ ∈ H^k(Σ)^d, ∀ k≥ -1, k∈ℝ, a_s(,)=a_s(,) a_s(,)≥ 0 ∀∈ H^1(Σ)^d, _s+_Σ∼_H^1(Σ) _s: = √(a_s(, )) . In addition, we denote _f := √(( ()̆, ()̆))and mention that the following norm equivalence holds (according to Korn's inequality): _f+∼_H^1(Ω) . For the simplicity of notations, we denote by_L^p Xthe Bochner norm (or semi-norm) defined by _L^p X:= (∫_t=0^t=T(t,·)^p_X dt)^1/p 1≤ p<∞ sup_t∈ [0,T](t,·)_X p=∞ , where·_Xis any norm or semi-norm in space, such as·_f,·_sor·_L^2(Σ). The following conventional notations will be used: ·_X: = ·_X(Ω), ·: = ·_L^2(Ω), ·_Σ: = ·_L^2(Σ)and ·_H_f:=·_f, ·_H_s:=·_s. 
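With these forms and norms in place, it is worth recording the formal energy balance that the discrete scheme below is designed to mimic. Testing the fluid equation with the velocity u, the structure equation with the structure velocity ∂_t η, and using the kinematic condition ∂_t η = u on Σ (so that the two interface traction terms cancel), one obtains, formally, for smooth solutions and under the periodic inflow/outflow conditions assumed above,

\frac{d}{dt}\Big( \frac{\rho_f}{2}\,\|\mathbf{u}\|^2
  + \frac{\rho_s\epsilon_s}{2}\,\|\partial_t\boldsymbol{\eta}\|_{\Sigma}^2
  + \frac{1}{2}\,a_s(\boldsymbol{\eta},\boldsymbol{\eta}) \Big)
  + 2\mu\,\|\mathbf{u}\|_{f}^{2} = 0 ,

that is, the sum of the fluid kinetic energy, the structure kinetic energy and the elastic energy decays exactly at the rate of the viscous dissipation 2μ‖ε(u)‖^2. This is only a formal computation recorded here for orientation; its discrete counterpart, with the energies E_0 and E_1, is the content of the stability theorem stated at the end of this section.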
For smooth solutions of (<ref>)–(<ref>), one can verify that (via integration by parts) the following equations hold for all test functions(, q, ) ∈X ×Q ×with|_Σ= : ρ_f (∂_t ,̆) + a_f(,̆) - b(p, ) + b(q, )̆ + ρ_s ϵ_s ( ∂_tt, )_Σ + a_s(, ) = 0 . §.§ Regularity assumptions To establish the optimal error estimates for the finite element solutions to the thin-structure interaction problem, we need to use the following regularity results. * We assume that the domain Ω is smooth so that the the solution (,̆ p, ) of the fluid-structure interaction problem f-e–bc is sufficiently smooth. * The weak solution (ω,λ)∈ H^1(Ω)^d× L^2(Ω) of the Stokes equations -∇·(ω,λ)+ω = f ∇·ω =0 has the following regularity estimates: ω_H^k+3/2+λ_H^k+1/2≤ C f_H^k-1/2+(ω,λ)·_H^k(Σ) k≥ -1/2, k∈ℝ , ω_H^k+1/2+λ-λ̅_H^k-1/2≤ C f_H^k-3/2+ω_H^k(Σ) k≥ 1/2, k∈ℝ , where λ̅:=1/|Ω|∫_Ωλ is the mean value of λ over Ω. The estimates in (<ref>) and (<ref>) correspond to the Neumann and Dirichlet boundary conditions, respectively; see <cit.> for a proof of (<ref>) in smooth domains, with a similar approach as in <cit.> one can prove (<ref>). We also refer to <cit.> for a proof of (<ref>) in the case of polygonal domain. * We assume that operator ℒ_s possesses the following elliptic regularity: The weak solution ξ∈ H^1(Σ)^d of the equation (in the weak formulation) a_s(ξ,)+(ξ,)_Σ=( g, )_Σ ∀∈ H^1(Σ)^d , has the following regularity estimate: ξ_H^2+k(Σ)≤ C g_H^k(Σ) k≥ -1, k∈ℝ. §.§ Assumptions on the finite element spaces Let𝒯 _hdenote a quasi-uniform partition onΩwithΩ = ⋃_K ∈𝒯_h K. EachKis a curvilinear polyhedron/polygon withdiam(K)≤h. All of the boundary faces onΣconsist of a partition𝒯_h (Σ),Σ= ⋃_D ∈𝒯_h (Σ) D, all of the boundary faces onΣ_l orΣ_rconsist of a partition ofΣ_lorΣ_r, respectively, and these two partitions coincide after shiftingLinz-direction. To approximate the weak form (<ref>) by finite element method, we assume that there are finite element spaces(^r_h, ^r_h, Q^r-1_h)on𝒯 _h(wherer≥1) with the following properties. 0.1in * (A1)^r_h ⊆, ^r_h ⊆ and ℝ⊆ Q^r-1_h ⊆ Q, with ^r_h = {_h |_Σ : _h ∈^r_h }. * (A2) For _h^r and Q_h^r-1, the following local inverse estimate holds on each K∈𝒯_h for 0≤ l≤ k,1≤ p,q≤∞: _h _W^k, p (K)≤ C h^- (k - l) + (d/ p - d/ q)_h _W^l, q (K) ∀_h ∈^r_h Q^r-1_h, For ^r_h, the following global inverse estimate holds: _h _H^s(Σ)≤ C h^k-s_h _H^k(Σ) ∀_h ∈^r_h; ∀ k,s∈ℝ with 0≤ k≤ s≤ 1 . * (A3) There are interpolation/projection operators I^X_h : →^r_h and I_h^Q : Q → Q^r-1_h which have the following local L^p approximation properties on each K∈𝒯_h, for all 1≤ p≤∞: I_h^X -̆_L^p(K)+h I_h^X -̆_W^1,p(K)≤ C h^k + 1_W^k + 1,p(Δ_K) ∀ 0≤ k ≤ r , I_h^Q p - p _L^p(K)≤ C h^k + 1 p _W^k +1,p (Δ_K) ∀ 0 ≤ k ≤ r - 1, where Δ_K is the macro element including all the elements which have a common vertex with K. And there is an interpolation/projection operator I_h^S : →^r_h satisfying I_h^X |̆_Σ = I_h^S (|̆_Σ) for all ∈̆ with |̆_Σ∈. Moreover, we require the following optimal order error estimate I_h^S - _Σ + h I_h^S - _H^1(Σ)≤ C h^k + 1_H^k + 1_h (Σ) ∀ 0 ≤ k ≤ r, where ·_H^k+1_h(Σ) is the piecewise H^k+1-norm associated with partition 𝒯_h(Σ). We will abuse notation and use I_h to denote one of the operators I_h^X, I_h^S and I_h^Q when there is no confusion. * (A4) Let ^r_h := {_h ∈^r_h : _h |_Σ = 0 } and Q_h,0^r-1:={q_h∈ Q_h^r-1:q_h∈ L^2_0(Ω)}. 
The following inf-sup condition holds: q_h ≤ C sup_0 ≠_h ∈^r_h( div _h, q_h)/_h _H^1 ∀ q_h ∈ Q^r-1_h,0 Examples of finite element spaces which satisfy Assumptions (A1)–(A4) include the Taylor–Hood finite element space with I_h^X, I_h^Q and I_h^S being the Scott–Zhang interpolation operators onto ^r_h, Q_h^r-1 and ^r_h respectively. We refer to <cit.> and the references therein for the details on construction and properties of Scott-Zhang interpolation, and refer to <cit.> for a proof of (<ref>) for the Taylor-Hood finite element spaces. The following properties are consequences of the assumptions on the finite element spaces in (A1)–(A4). * From (A2) and (A3) we can derive the following estimate for _h∈^r_h: (_h)_Σ =(∑_D∈𝒯_h(Σ)(_h)^2_L^2(D))^1/2 ≤ C(∑_D∈𝒯_h(Σ)h^d-1_h^2_W^1,∞(K))^1/2 ≤ C(∑_D∈𝒯_h(Σ)h^-1_h^2_H^1(K))^1/2≤ Ch^-1/2_h_H^1 . Therefore, we can obtain the following inverse estimate for the boundary term (_h,q_h): (_h,q_h)_Σ≤ Ch^-1/2(_h_H^1+q_h) . * From (A3) and (A4) we can see that when r≥ 2, the mixed finite element space (_h^r,Q_h^r-1) can be realized by the (r,r-1) Taylor-Hood finite element space. When r=1, (_h^1,Q_h^0) can be realized by the MINI element space. * From inf-sup condition (<ref>), we can deduce the following alternative version of inf-sup condition (involving H^1(Σ)-norm in the denominator) q_h ≤ C sup_0 ≠_h ∈^r_h( div_h, q_h)/_h _H^1 +_H^1(Σ) ∀ q_h ∈ Q^r-1_h. An inf-sup condition similar to (<ref>) was proved in <cit.>, though thick structure problem is considered there. For the reader's convenience, we present a proof of (<ref>) in Appendix C. * For each _h∈^r_h, we denote by E_h_h∈_h^r an extension such that E_h_h: = I_h^X, where ∈ H^1(Ω)^d is the extension of _h by trace theorem, satisfying _H^1≤ C_h_H^1/2(Σ) and |_Σ = _h. Combining (<ref>) with (<ref>) we see that E_h_h_H^1≤ Ch^-1/2_h_Σ. * Combining (<ref>) with (<ref>) we have for any _̆h∈_h^r, p_h∈ Q_h^r-1 (-̆_̆h,p-p_h)_Σ ≤(-̆I_h,̆p-I_hp)_Σ+(I_h-̆_̆h,I_hp-p_h)_Σ ≤ C(-̆I_h_W^1,∞+p-I_hp_L^∞)+(I_h-̆_̆h,I_hp-p_h)_Σ ≤ Ch^r+Ch^-1/2(I_h-̆_̆h_H^1+I_hp-p_h) ≤ Ch^r-1/2+Ch^-1/2(-̆_̆h_H^1+p-p_h) , where we have used (<ref>) with p=∞ and (<ref>) in the second to last inequality. §.§ A new kinematically coupled scheme and main theoretical results Let{ t_n }_n=0^Nbe a uniform partition of the time interval[0,T]with stepsizeτ= T/N. For a sequence of functions{^̆n}_n=0^Nwe denote D_τ^̆n = ^̆n-^̆n-1/τ, for n=1, 2, …, N. With the above notations, we present a fully discrete kinematically coupled algorithm below. 0.1in Step 1: For given_̆h^n-1, p_h^n-1, _h^n-1, find_h^n ∈_h^rsuch that ρ_s ϵ_s ( _h^n-_̆h^n-1/τ, _h )_Σ + a_s(_h^n, _h) = - ( ^n-1_h · n, _h )_Σ, ∀_h ∈_h^r _h^n = _h^n-1 + τ_h^n . Step 2: Then find(_̆h^n, p_h^n) ∈_h^r×Q_h^r-1satisfying ρ_f (D_τ_̆h^n, _h) + a_f(_̆h^n, _h) - b(p_h^n, _h) + b(q_h, _̆h^n) - ( ^n_h · n, _h )_Σ + ρ_s ϵ_s ( _̆h^n - _h^n/τ, _h + τ/ρ_s ϵ_s(_h, q_h)· )_Σ + ( (^n_h-^n-1_h) · n, _h + τ(1+β)/ρ_s ϵ_s(_h, q_h)· )_Σ = 0 for all(_h, q_h) ∈_h^r×Q_h^r-1, where_h^n = (_̆h^n, p_h^n)andβ>0denotes a penalty constant. Initial values: Since^n-1_hdepends on both_̆h^n-1andp_h^n-1, the numerical scheme in (<ref>)–(<ref>) requires the initial value(_̆h^0,p_h^0,_h^0)to be given. 
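For implementation purposes, it helps to spell out what Step 1 amounts to in matrix form. Inserting the update η_h^n = η_h^{n-1} + τ ξ_h^n into the first equation of Step 1 shows that each structure sub-step is a single symmetric positive definite linear solve posed on the interface only,

( (ρ_s ϵ_s / τ) M_Σ + τ A_s ) ξ^n = (ρ_s ϵ_s / τ) M_Σ u^{n-1} - A_s η^{n-1} - g^{n-1} ,

where M_Σ and A_s denote the interface mass and a_s-stiffness matrices, ξ^n, u^{n-1}, η^{n-1} are coefficient vectors, and g^{n-1} is the load vector assembled from the traction term (σ_h^{n-1} n, ζ_h)_Σ. The Python sketch below implements only this sub-step, for a single displacement component on a one-dimensional periodic interface with L_s = C_0 ∂_xx - C_1 (the operator used in the numerical examples of Section 5). It is a minimal sketch under our own assumptions: the matrix names, the parameter values and the placeholder traction vector g_prev are illustrative, and the fluid sub-step of Step 2, with its pressure and stabilization terms, is deliberately omitted.

# Minimal sketch of Step 1 of the kinematically coupled scheme (structure sub-step
# only), for a scalar displacement component on a periodic 1D interface with
# L_s = C0*d_xx - C1, so that a_s(eta, zeta) = C0*(eta', zeta') + C1*(eta, zeta).
# All names and parameter values are illustrative; the fluid sub-step is omitted.
import numpy as np

def periodic_p1_matrices(n, length=1.0):
    """P1 mass and stiffness matrices on a uniform periodic mesh of the interface."""
    h = length / n
    M, K = np.zeros((n, n)), np.zeros((n, n))
    for e in range(n):
        idx = np.ix_([e, (e + 1) % n], [e, (e + 1) % n])       # periodic wrap-around
        M[idx] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        K[idx] += (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return M, K

def step1_structure(u_prev, eta_prev, g_prev, M, A_s, tau, rho_s, eps_s):
    """One structure sub-step: returns (xi^n, eta^n = eta^{n-1} + tau*xi^n)."""
    lhs = (rho_s * eps_s / tau) * M + tau * A_s                 # symmetric positive definite
    rhs = (rho_s * eps_s / tau) * (M @ u_prev) - A_s @ eta_prev - g_prev
    xi = np.linalg.solve(lhs, rhs)
    return xi, eta_prev + tau * xi

n, tau = 64, 1.0e-3
rho_s, eps_s, C0, C1 = 1.1, 0.1, 1.0, 1.0                       # placeholder parameters
M, K = periodic_p1_matrices(n)
A_s = C0 * K + C1 * M                                           # matrix of a_s(.,.)
x = np.linspace(0.0, 1.0, n, endpoint=False)
u_prev = np.sin(2.0 * np.pi * x)     # trace of the previous fluid velocity on Sigma
eta_prev = np.zeros(n)               # previous displacement
g_prev = np.zeros(n)                 # placeholder for the traction load (sigma^{n-1} n, zeta)_Sigma
xi_n, eta_n = step1_structure(u_prev, eta_prev, g_prev, M, A_s, tau, rho_s, eps_s)
print("max |xi^n| =", float(np.abs(xi_n).max()), "  max |eta^n| =", float(np.abs(eta_n).max()))

In a full implementation, g_prev is assembled from the discrete stress of the previous fluid solve, and Step 2 then determines (u_h^n, p_h^n) from the saddle-point system displayed above, after which the loop repeats.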
We simply assume that the initial value(_̆h^0,p_h^0,_h^0)are given sufficiently accurately, satisfying the following conditions: _̆h^0 - R_h^̆0 _H^1(Σ) + p_h^0 - R_hp^0 _L^2(Σ) ≤ C , _̆h^0 - R_h ^̆0 + _̆h^0 - R_h ^̆0 _Σ + _h^0 - R_h ^0 _H^1(Σ) ≤ Ch^r+1 , where(R_h^̆0, R_h p^0, R_h^0)is a couple non-stationary Ritz projection defined in Section <ref>. Since in practical computation, only the initial _̆0 and _0 are known, to start our algorithm, we need to first compute an approximation p'_0 of the initial pressure p_0. The error order of our algorithm is kept whenever p_0'-p_0≤ C h^r+1. We can obtain such p_0' for example by making a single step computation using the coupled Euler scheme: Find (_̆h'^1,p_h'^1)∈_h'^r'× Q_h'^r'-1 such that ρ_s ϵ_s ( _̆h'^1-_̆h'^0/τ', _h' )_Σ + a_s(_h'^1, _h')+ ρ_f (D_τ'_̆h'^1, _h') + a_f(_̆h'^1, _h') - b(p_h'^1, _h') + b(q_h', _̆h'^1)=0 ∀ (_h',q_h')∈_h'^r'× Q_h'^r'-1 _h'^1 = _h'^0 + τ' _̆h'^1 . where _h'^0:=R_h'^S_0 with R_h'^S defined as in (<ref>) and ^̆0_h' is defined as follows: find ^̆0_h'∈_h'^r' such that there exists λ_h'∈ Q_h',0^r'-1 satisfying a_f(_̆0-^̆0_h',_h')+(_̆0-^̆0_h',_h')-b(λ_h',_h')=0 ∀_h'∈_h'^r' b(^̆0_h',q_h)=0 ∀ q_h∈ Q_h',0^r'-1; _̆h'^0|_Σ=I_h_̆0|_Σ on Σ. Then we can choose the space-time discrete parameters h',τ' sufficiently small or make the r' higher such that if we take the initial approximation of pressure to be p_0'=p_h'^1 then p_0'-p_0≤ C h^r+1. (For example, here we can choose h'=h,r'=r+2,τ'≃ h^r+2.) Kinematically coupled schemes were firstly proposed in <cit.> with the following time discretization: Find (^n, ^n) such that ρ_s ϵ_s ^n-^̆n-1/τ + L_s (^n) = - ^n-1· n Σ ^n = ^n-1 + τ^n Σ and then find (^̆n, p^n) satisfying ρ_f D_τ^̆n + ∇·^n = 0 ∇·^̆n =0 Ω, ρ_s ϵ_s ^̆n - ^n/τ + (^n-^n-1)· n = 0 Σ . The extension to full discretizations was considered by many authors <cit.>, while the convergence analysis for full discretizations is incomplete (sub-optimal in the L^2 norm). In the design of our fully discrete scheme, we have the two equations in (<ref>) with two different test function _h and _h + τ/ρ_s ϵ_s(_h, q_h)·, respectively, and have added an additional stabilization term τβ/ρ_s ϵ_s ( (^n_h-^n-1_h) · n, (_h, q_h)· )_Σ . These different treatments allow us to prove unconditional stability of the proposed method for any T>0, as well as optimal-order convergence in the L^2 norm for the full discretizations. For the Taylor–Hood finite element spaces, the conditions in (<ref>) on the initial values can be satisfied if one chooses _̆h^0 and p_h^0 to be the Lagrange interpolations of ^̆0 and p^0, respectively, and chooses _h^0=R_sh(0), where R_sh(0) is defined in Section <ref>; see Definition <ref> and estimate (<ref>). The main theoretical results of this article are the following two theorems. Under the assumptions in Section <ref> (on the finite element spaces), the finite element system in s1–s2 is uniquely solvable, and the following inequality holds: E_0(_̆h^n, p_h^n, _h^n) + ∑_m=1^n τ E_1(_̆h^m, p_h^m, _h^m) ≤ E_0(_̆h^0, p_h^0, _h^0), n=1,2,..., N , where E_0(_̆h^n, p_h^n, _h^n) = ρ_f/2_̆h^n ^2 + 1/2_h^n _s^2 + τ^2(1+β)/2ρ_s ϵ_s^n_h ·_Σ^2+ρ_sϵ_s/2_̆h^n_Σ^2 , E_1(_̆h^n, _h^n, _h^n) = 2 μ_̆h^n_f^2 + ρ_f /2τ_̆h^n - _̆h^n-1^2 + ρ_s ϵ_s/2τ_h^n - _̆h^n-1_Σ^2 + ρ_s ϵ_sβ_0/2τ^n_h-_̆h^n_Σ^2 + τβ_0/2ρ_s ϵ_s (_h^n - _h^n-1) ·_Σ^2 + τ/2 D_τ^n_h _s^2 , with β_0 = 1-(√(4+β^2)-β)/2. 
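Two elementary observations clarify the role of the penalty parameter β in this statement; they are straightforward algebra and are recorded here only for the reader's convenience. First, β_0 = 1 - (√(4+β^2) - β)/2 is exactly the constant for which (1-β_0)(1+β-β_0) = 1: indeed 1-β_0 = (√(4+β^2)-β)/2 and 1+β-β_0 = (√(4+β^2)+β)/2, whose product equals (4+β^2-β^2)/4 = 1, and this identity is what closes the Young-inequality step in the proof given in the next section. Second, √(4+β^2) < 2+β is equivalent to 0 < 4β, so β_0 > 0 if and only if β > 0 (and β_0 = 0 when β = 0); a strictly positive stabilization constant is therefore needed for E_1 to provide genuine control of the velocity mismatch between the structure and fluid sub-steps and of the interface stress increments.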
For finite elements of degree r≥ 2, under the assumptions in Sections <ref>–<ref> (on the regularity of solutions and finite element spaces), there exist positive constants τ_0 and h_0 such that, for sufficiently small stepsize and mesh size τ≤τ_0 and h≤ h_0, the finite element solutions given by s1–s2 with initial values satisfying (<ref>) has the following error bound: max_1≤ n≤ N( (̆t_n, ·) - _̆h^n + (t_n, ·) - _h^n _Σ + (̆t_n,·)-_̆h^n_Σ) ≤ C ( τ + h^r+1), where C is some positive constant independent of n, h and τ. The proofs of Theorem <ref> and Theorem <ref> are presented in the next section. § ANALYSIS OF THE PROPOSED ALGORITHM This section is devoted to the proof of Theorems <ref> and <ref>. For the simplicity of notation, we denote byCa generic positive constant, which is independent ofn,handτbut may depend on the physical parametersρ_s,ϵ, μ, ρ_fand the exact solution(,̆p,). In addition, we denote bya≲bthe statement “|a|≤b”. §.§ Proof of Theorem <ref> We rewrite s2 into ρ_f (D_τ_̆h^n, _h) + a_f(_̆h^n, _h) -b(p_h^n, _h^n) + b(q_h, _̆h^n) + ρ_s ϵ_s ( _̆h^n - _h^n/τ, _h )_Σ = ( ^n-1_h · n, _h )_Σ - ( _̆h^n - _h^n, (_h, q_h)· )_Σ - τ(1+β)/ρ_s ϵ_s ( (^n_h-^n-1_h) · n, (_h, q_h)· )_Σ . Taking_h = _̆h^n, q_h = p_h^nin (<ref>) and_h=_h^n =D_τ_h^nin (<ref>), respectively, gives the following relations: ρ_f/2τ ( _̆h^n _L^2^2 - _̆h^n-1_L^2^2 + _̆h^n - _̆h^n-1_L^2^2 ) +2 μ_̆h^n _f^2 + ρ_s ϵ_s ( _̆h^n - _h^n/τ, _̆h^n )_Σ = ( ^n-1_h · n, ^̆n_h )_Σ - ( _̆h^n - _h^n, _h^n· )_Σ - τ(1+β)/ρ_s ϵ_s ( (^n_h-^n-1_h) · n, _h^n · )_Σ and 1/2τ ( a_s( _h^n, _h^n) - a_s( _h^n-1, _h^n-1) + τ^2 a_s( _h^n, _h^n) ) + ρ_s ϵ_s ( _h^n-_̆h^n-1/τ, _h^n )_Σ = - ( ^n-1_h · n, _h^n )_Σ . By summing up the last two equations, we have ρ_f/2 ( _̆h^n _L^2^2 - _̆h^n-1_L^2^2 + _̆h^n - _̆h^n-1_L^2^2 ) +2 μτ_̆h^n _f^2 + ρ_s ϵ_s/2 ( ^n_h-_̆h^n-1_Σ^2 + _̆h^n - _h^n _Σ^2 ) + 1/2 ( a_s(_h^n, _h^n) - a_s(_h^n-1, _h^n-1) + τ^2 a_s( _h^n, _h^n ) )+ρ_sϵ_s/2(_̆h^n^2_Σ-_̆h^n-1^2_Σ) = τ ((_h^n-1-_h^n) ·, _̆h^n - _h^n)_Σ - τ^2(1+β)/ρ_s ϵ_s ((^n_h-^n-1_h) · n, σ_h^n· )_Σ ≤τ^2(1+β-β_0)/2ρ_s ϵ_s (_h^n - _h^n-1) ·_Σ^2 + ρ_s ϵ_s/2(1+β-β_0)_̆h^n - _h^n _Σ^2 - τ^2(1+β)/2ρ_s ϵ_s ( ^n_h ·_Σ^2 - _h^n-1·_Σ^2 + (_h^n - _h^n-1)·_Σ^2 ) ≤ρ_s ϵ_s(1-β_0)/2_̆h^n - _h^n _Σ^2 - τ^2(1+β)/2ρ_s ϵ_s ( ^n_h ·_Σ^2 - _h^n-1·_Σ^2 ) - τ^2β_0/2ρ_s ϵ_s (^n_h - _h^n-1) ·_Σ^2 , which leads to the following energy inequality: E_0(_̆h^n, p_h^n, _h^n) - E_0(_̆h^n-1, p_h^n-1, _h^n-1) + E_1(_̆h^n, p_h^n, _h^n) τ≤ 0 . This implies stability and completes the proof of Theorem <ref>. 0.1in §.§ A coupled non-stationary Ritz projection To establishL^2-norm optimal error estimate as given in Theorem <ref>, we need to introduce a new coupled Ritz projection. Since the thin fluid-structure model is governed by the Stokes type equation for fluid coupled with a hyperbolic type equation for solid, the coupled projection, which is non-stationary and much more complicated than the standard Ritz projections, plays a key role in proving the optimal-order convergence of finite element solutions to the fluid-structure models. Let (,̆ p, ) ∈ X× Q × S be a triple of functions smoothly depending on t∈[0,T] and satisfying the condition |̆_Σ = ∂_t. 
For a given initial value R_h (0), the coupled Stokes–Ritz projection R_h(,̆ p, ): =(R_h,̆ R_h p, R_h) ∈_h^r× Q_h^r-1×_h^r is defined as a triple of functions satisfying (R_h )̆|_Σ = ∂_t R_h and the following weak formulation for every t∈[0,T]: a_f(-̆ R_h ,̆_h) - b( p-R_hp, _h) + b(q_h, -̆ R_h )̆+(-̆R_h,̆_h) + a_s( - R_h , _h) + ( - R_h , _h )_Σ = 0, ∀ (_h, q_h) ∈_h^r× Q_h^r-1 . In Section <ref> we shall see that, due to the condition(R_h)̆|_Σ=∂_t R_h , equation (<ref>) can be equivalently reformulated into an evolution equation ofR_h. OnceR_his determined by the evolution equation, (<ref>) determines the valueR_h$̆ and R_hp. In order to guarantee that the coupled non-stationary Ritz projection R_h possesses optimal-order approximation properties, we need to define R_h(0) in a rather technical way. Therefore, we present error estimates for this coupled non-stationary Ritz projection in Theorem <ref> and postpone the definition of R_h(0) and the proof of Theorem <ref> to Section <ref>. For sufficiently smooth functions (,̆ p, ) satisfying |̆_Σ=∂_t, there exists _h ∈_h^r such that when R_h(0) = _h, the following estimates hold uniformly for t∈[0,T]: max_t∈[0,T]( - R_h _Σ + -̆ R_h +-̆ R_h _Σ + hp-R_hp) ≤ C h^r+1 max_t∈[0,T]( ∂_t (-̆ R_h )̆_H^1+∂_t (-̆ R_h )̆_H^1+∂_t(p-R_hp)) ≤ C h^r ∂_t (-̆ R_h )̆_L^2L^2(Σ)+∂_t(-̆R_h)̆_L^2L^2 ≤ C h^r+1 . §.§ Proof of Theorem <ref> For the solution (,̆ p, ) of problem (<ref>)–(<ref>), we define the following notations: ^̆n = (̆t_n, ·) , ^n = (t_n, ·) , p^n = p(t_n, ·). For the analysis of the kinematically coupled scheme, we introduce ^n∈ H^1(Σ) and R_h^n∈_h^r by ^n = ∂_t (t_n, ·) = (̆t_n, ·) R_h^n:=(R_h)̆(t_n)=∂_tR_h(t_n) Σ, which satisfy the following estimates according to the estimates in Theorem <ref>: ^n - R_h ^n _Σ≤ C h^r+1 . By Taylor's expansion, we have ^n = ^n-1 + τ^n+𝒯_0^n, with a truncation error 𝒯_0^n which has the following bound: 𝒯_0^n_H^1(Σ)≤ Cτ^2 ∀ n≥ 1 . By (<ref>)–(<ref>), we can see that the sequence ( ^̆n, p^n, ^n, ^n) satisfies the following weak formulations ρ_s ϵ_s ( ^n- ^̆n-1/τ, _h )_Σ + a_s(^n, _h) + ( ^n-1· n, _h )_Σ = E_s^n(_h), ∀_h ∈_h^r and ρ_f (D_τ^̆n, _h) + a_f(^̆n, _h) - b(p^n, _h) + b(q_h, ^̆n) + ρ_s ϵ_s ( ^̆n - ^n/τ, _h )_Σ = ( ^n-1· n, _h )_Σ - ( ^̆n - ^n, (_h, q_h)· )_Σ - τ(1+β)/ρ_s ϵ_s ( ( ^n- ^n-1) · n, (_h, q_h)· )_Σ + E_f^n(_h, q_h) , 1in ∀ (_h, q_h)∈_h^r× Q_h^r-1 where ^n = (^̆n, p^n) and the truncation error functions satisfy the following estimates: [ | E_s^n(_h)| ≤ Cτ_h _Σ ,; [5pt] | E_f^n(_h, _h) | ≤ C τ(_h _Σ +_h) + Cτ^2 (_h, _h) ·_Σ . ] For given ( ^̆n, p^n, ^n, ^n), we denote by (R_h^̆n,R_hp^n,R_h^n,R_h^n) the corresponding coupled Ritz projection and define R_h𝒯_0^n to be the defect satisfying R_h^n = R_h^n-1 + τ R_h^n+ R_h 𝒯_0^n ∀ n≥ 1 . Then we introduce the following error decomposition: e_u^n: = ^̆n - _̆h^n = ^̆n - R_h ^̆n + R_h ^̆n - _̆h^n: = θ_u^n + δ_u^n, 1in Ω e_p^n: = p^n - p_h^n = p^n - R_h p^n + R_h p^n - p_h^n: = θ_p^n + δ_p^n, 1.1in Ω e_σ^n: = (^̆n, p^n) - (_̆h^n, p_h^n) = (θ_u^n, θ_p^n) + (δ_u^n, δ_p^n) : = θ_σ^n + δ_σ^n Ω e_s^n: = ^n - _h^n = ^n - R_h ^n + R_h ^n - _h^n: = θ_s^n + δ_s^n, 1.1in Σ . e_η^n: = ^n - _h^n = ^n - R_h ^n + R_h ^n - _h^n: = θ_η^n + δ_η^n, 1in Σ . Since ^̆n|_Σ = ^n, it follows that θ_u^n|_Σ=θ_s^n. Moreover, the following relations hold: (^̆n - ^̆n-1)-(^n_h - ^̆n-1_h) = θ_u^n + δ_s^n - θ_u^n-1 - δ_u^n-1 , (^̆n - ^̆n)-(^̆n_h - ^n_h) = θ_u^n + δ_u^n - θ_u^n - δ_s^n = δ_u^n - δ_s^n . 
By using s1–s2 and s1-3–s2-3, we can write down the following error equations: ρ_s ϵ_s ( δ_s^n - δ_u^n-1/τ, _h )_Σ + a_s( δ_η^n, _h) + ( δ_σ^n-1· n, _h )_Σ = E_s^n(_h) - F^n_s(_h), ∀_h ∈_h^r δ_η^n = δ_η^n-1 + τδ_s^n+ R_h 𝒯_0^n, Σ ρ_f ( δ_u^n-δ_s^n/τ, _h ) + a_f(δ_u^n, _h) - b(δ_p^n, _h^n) + b(q_h, δ_u^n) + ρ_s ϵ_s ( δ_u^n - δ_s^n/τ, _h )_Σ = ( δ_σ^n-1·, _h )_Σ - ( δ_u^n - δ_s^n, (_h, q_h) )_Σ - τ(1+β)/ρ_s ϵ_s ( (δ_σ^n - δ_σ^n-1)·, (_h, q_h) · )_Σ + E_f^n(_h, q_h) - F_f^n(_h, q_h), 1in ∀ (_h, q_h)∈_h^r× Q_h^r-1 where F^n_s(_h) = ρ_s ϵ_s ( D_τθ_u^n, _h )_Σ + a_s(θ_η^n, _h) + ( θ_σ^n-1· n, _h )_Σ F_f^n(_h, q_h) = ρ_f(D_τθ_u^n, _h)+a_f(θ_u^n, _h) - b(θ_p^n, _h) - ( θ_σ^n-1·, _h )_Σ +τ(1+β)/ρ_s ϵ_s ( (θ_σ^n - θ_σ^n-1)·, (_h, q_h) · )_Σ Moreover, the third relation in (<ref>) implies the following result: θ_η^n=θ_η^n-1+τθ_s^n+(𝒯_0^n-R_h 𝒯_0^n), where the last term can be estimated by using (<ref>), i.e., 𝒯_0^n- R_h 𝒯_0^n_H^1(Σ)≤ Cτ^2∂_t(R_h-̆)̆_L^∞ H^1(Σ)≤ Cτ^2h^r . Therefore, by the triangle inequality with estimates (<ref>) and (<ref>), we have R_h 𝒯_0^n_H^1(Σ)≤𝒯_0^n _H^1(Σ) +𝒯_0^n- R_h 𝒯_0^n_H^1(Σ)≤ Cτ^2 ∀ n≥ 1 We take (_h, q_h) = (δ_u^n, δ_p^n) ∈_h^r× Q_h^r-1 in err-e3 and _h =δ_s^n∈_h^r in err-e1, respectively, and then sum up the two results. Using the stability analysis in stability-2 and the relation δ_s^n=D_τδ_η^n-τ^-1 R_h 𝒯_0^n , we obtain D_τ E_0( δ_u^n, δ_p^n, δ_η^n) + E_1( δ_u^n, δ_s^n, δ_η^n) ≤ E_s^n(δ_s^n ) - F_s^n(δ_s^n) + E_f^n(δ_u^n, δ_p^n) - F_f^n(δ_u^n,δ_p^n)+τ^-1a_s(δ_η^n, R_h 𝒯_0^n) . To establish the error estimate, we need to estimate each term on the right-hand side of (<ref>). From EE and (<ref>) we can see that [ | E^n_s(δ_s^n)| ≤ Cτδ_s^n _Σ; [5pt] | E^n_f(δ_u^n, δ_p^n) | ≤ C τ (δ_u^n _Σ+δ_u^n) + τ^2 δ_σ^n ·_Σ; [5pt] |τ^-1a_s(δ_η^n, R_h𝒯_0^n)|≤ Cτδ_η^n_s ] It remains to estimate F_s^n(δ_s) + F_f^n(δ_u,δ_p) from the right-hand side of (<ref>). * The second term in (<ref>) plus the second and third terms in (<ref>) can be estimated as follows. Let ξ_h^n: = δ_u^n - E_h(δ_u^n - δ_s^n), where E_h(δ_u^n-δ_s^n) is an extension of δ_u^n-δ_s^n to Ω satisfying estimate (<ref>) and ξ_h^n|_Σ = δ_s^n. By choosing v_h=ξ_h^n and q_h=0 in (<ref>) (definition of the coupled Ritz projection), we obtain the following relation: a_f(θ_u^n, δ_u^n) - b(θ_p^n,δ_u^n) + a_s(θ_η^n,δ_s^n) = a_f(θ_u^n, E_h(δ_u^n-δ_s^n)) - b(θ_p^n,E_h(δ_u^n-δ_s^n)) -(θ_u^n, ξ_h^n)-(θ_s^n,δ_s^n)_Σ ≲ Ch^rE_h(δ_u^n - δ_s^n)_f + Ch^r+1(ξ_h^n+δ_s^n _Σ) ≤ Ch^r-1/2δ_u^n - δ_s^n_Σ + Ch^r+1(δ_u^n+δ_s^n_Σ), where we have used estimate (<ref>)–(<ref>). * The third term in (<ref>) plus the fourth term in (<ref>) can be estimated as follows: (θ_σ^n-1·,δ_s^n)_Σ-(θ_σ^n-1·, δ_u^n )_Σ ≲ θ_σ^n-1·_Σδ_s^n - δ_u^n _Σ ≤ C(h^r-1/2+h^-1/2(θ_u^n-1_H^1 +θ^n-1_p))δ_s^n - δ_u^n _Σ ≤ Ch^r-1/2δ_s^n - δ_u^n _Σ, where we used (<ref>) in the second inequality and (<ref>) in the last inequality. * For the first term in (<ref>) and (<ref>), respectively, we have ρ_s ϵ_s ( D_τθ_u^n, δ_s^n )_Σ≤C/τδ_s^n_Σ∫_t_n-1^t_n∂_t θ_u(t)_Σ dt ρ_f(D_τθ_u^n, δ_u^n )≤C/τδ_u^n ∫_t_n-1^t_n∂_t θ_u(t) dt * The last term in (<ref>) can be estimated by using (<ref>) and (<ref>), i.e., τ/ρ_s ϵ ( (θ_σ^n - θ_σ^n-1)·, (δ_u^n, δ_p^n) · )_Σ ≤ Cτ(∫_t_n-1^t_n(∂_t θ_u, ∂_t θ_p)(t)_Σ dt)(δ_u^n, δ_p^n)·_Σ ≤ Cτ^2h^r-1/2(δ_u^n, δ_p^n) ·_Σ Now we can substitute estimates (<ref>)–(<ref>) into the energy inequality in (<ref>). 
This yields the following result: D_τ E_0( δ_u^n, δ_p^n, δ_η^n) + E_1( δ_u^n, δ_s^n, δ_η^n) ≤ Cτ (δ_s^n _Σ+δ_u^n _Σ+δ_u^n+δ_η^n_s)+Ch^r-1/2δ_u^n-δ_s^n_Σ+Ch^r+1(δ_u^n+δ_s^n_Σ) +C/τδ_s^n_Σ∫_t_n-1^t_n∂_t θ_u(t) _Σ dt +C/τδ_u^n∫_t_n-1^t_n∂_t θ_u(t) dt +Cτ^2δ_σ^n ·_Σ . Since δ_s^n _Σ≤δ_s^n - δ_u^n_Σ + δ_u^n _Σ, by using Young's inequality, we can re-arrange the right hand side of (<ref>) to obtain D_τ E_0( δ_u^n, δ_p^n, δ_η^n) + E_1( δ_u^n, δ_s^n, δ_η^n) ≤ Cε^-1(τ^2+Ch^2(r+1)+τ h^2r-1)+C ε (δ_u^n ^2_Σ+δ_u^n^2+δ_η^n^2_s)+Cε/τδ_u^n-δ_s^n^2_Σ +Cε^-1/τ ( ∫_t_n-1^t_n∂_t θ_u (t)^2_Σ dt + ∫_t_n-1^t_n∂_t θ_u(t)^2 dt ) +Cτ^2δ_σ^n ·^2_Σ where 0<ε<1 is an arbitrary constant. We can choose a sufficiently small ε so that the term Cε/τδ_u^n-δ_s^n^2_Σ can be absorbed by E_1( δ_u^n, δ_s^n, δ_η^n on the left-hand side. Then, using the discrete Gronwall's inequality and the estimates of θ_u in SR-error-3, as well as the definition of E_0 and E_1 in E0–E1, we obtain E_0( δ_u^n, δ_p^n, δ_η^n) +∑_m=1^n τ E_1( δ_u^m, δ_s^m, δ_η^m)≤ C E_0( δ_u^0, δ_p^0, δ_η^0)+ C(τ^2+Ch^2(r+1)+τ h^2r-1), Since the initial values satisfy the estimates in (<ref>), the term E_0( δ_u^0, δ_p^0, δ_η^0) can be estimated to the optimal order. Thus inequality (<ref>) reduces to δ_u^n+δ_u^n_Σ+δ_η^n_s+δ_u^n-δ_s^n_Σ≤ C (h^r - 1 / 2τ^1 / 2 + τ + h^r + 1). It follows from the relation δ_η^n = δ_η^n-1 + τδ_s^n+R_h𝒯_0^n, n≥ 1, that δ_η^n_Σ≤ δ_η^0_Σ+∑_m=1^nτδ_s^m_Σ+∑_m=1^nR_h𝒯_0^n_Σ≤ C (h^r - 1 / 2τ^1 / 2 + τ + h^r + 1), where we have used (<ref>) and (<ref>). Then, combining the two estimates above with the following estimate for the projection error: θ_u^n+θ_u^n_Σ+θ_η^n_Σ≤ Ch^r+1 ∀ n≥ 0, we obtain the following error bound: e_u^n+e_u^n_Σ+e_η^n_Σ≤ C (h^r - 1 / 2τ^1/2 + τ + h^r + 1) ≤ C(τ + h^r+1) , where the last inequality uses h^r - 1 / 2τ^1/2≤τ + h^2r-1 and r≥ 2. This completes the proof of Theorem <ref>. § THE PROOF OF THEOREM <REF> The error estimates for the coupled Ritz projection introduced in (<ref>) requires choosing the initial value R_h(0) properly and estimating. Therefore, we divide the proof of Theorem <ref> into three parts and present them in the next three subsections, respectively. §.§ The initial value of coupled Ritz projection We present two auxiliary Ritz projections associated to the structure model and fluid model, respectively. Based on these auxiliary Ritz projections, we define the initial value R_h (0) of the coupled Ritz projection. We define an auxiliary Ritz projection R_h^S:→_h^r for the elastic structure problem by a_s(R_h^S-,_h)+(R_h^S-,_h)_Σ=0 ∀_h∈_h^r. This is the standard Ritz projection on Σ, which satisfies the estimate R_h^S-_Σ≤ Ch^r+1 when is sufficiently smooth. Moreover when r≥ 2, there holds negative norm estimate R_h^S-_H^-1(Σ)≤ Ch^r+2. Let ^r_h := {_h ∈^r_h : _h |_Σ = 0 } and Q_h,0^r-1:={q_h∈ Q_h^r-1:q_h∈ L^2_0(Ω)}. We denote _h^r:={_h∈_h^r:(_h,)_Σ=0} and by P the L^2(Σ)-orthogonal projection from _h^r to _h^r. Let :={∈̆:|̆_Σ∈}. We define an auxiliary Dirichlet Stokes–Ritz projection R^D_h : × Q →^r_h × Q^r-1_h by a_f( -̆ R^D_h,̆_h) - b( p - R^D_hp, _h) + (-̆ R_h^D,̆_h) = 0 ∀_h ∈_h^r , b(q_h, -̆ R_h^D)̆ = 0 ∀ q_h ∈ Q^r-1_h,0; R_h^D =̆P R_h^S(|̆_Σ) on Σ, In addition, we choose R_h^D p to satisfy R_h^D p - p ∈ L^2_0 (Ω). This uniquely determines a solution (R_h^Du,R_h^Dp)∈^r_h × Q^r-1_h, as explained in the following remark. Let _h∈_h^r be an extension of PR_h^S$̆ to the bulk domainΩand letp̂_hbe theL^2(Ω)-orthogonal projection ofpontoQ_h^r-1. 
Then_h-R_h^D∈̆_h^randp̂_h-R_h^Dp∈Q^r-1_h,0, after reformulating (<ref>)-(<ref>) into an equation system on_h-R_h^D$̆ and p̂_h-R_h^Dp, we obtain a standard Stokes FE system with homogeneous Dirichlet boundary condition, of which the well-posedness directly follows from inf-sup condition (<ref>). Since (PR_h^S,̆)_Σ=0, for $̆ with∇·=̆0we have b(1,-̆R_h^D)̆=(-̆R_h^D,̆)_Σ=0, which implies that b(q_h, -̆ R_h^D)̆ = 0 ∀ q_h ∈ Q^r-1_h. Moreover, we denote by_h ∈^r_htheL^2 (Σ)-orthogonal projection of unit normal vector fieldofΣto^r_h, i.e., (, _h)_Σ = (_h, _h)_Σ ∀_h ∈^r_h . Then for any_h ∈^r_h, we have P_h = _h - λ (_h) _h ∈^r_h with λ (_h):= (_h, )_Σ/_h ^2_Σ. From-_h_Σ≤-I_h_Σ≤Ch^r+1(sinceis smooth onΣ), especially we have_h_Σ∼Cand λ(R_h^S)̆=(R_h^S-̆,̆)_Σ/_h^2_Σ≲ Ch^r+1 and PR_h^S-̆R_h^S≤ Ch^r+1. Therefore we obtain the estimateR_h^D-̆_Σ≤Ch^r+1. The following lemma on the error estimates of the Dirichlet Stokes–Ritz projection is standard. We refer to <cit.> for the proof of (<ref>). The negative norm estimate of pressure in (<ref>) requires a further duality argument, which is presented in the proof of (<ref>) in Appendix B. We omit the details here. Under the regularity assumptions in Section <ref>, the Dirichlet Stokes–Ritz projection R_h^D defined in (<ref>) satisfies the following estimates: -̆R_h^D_Σ + -̆ R_h^D + h ( -̆ R_h^D _H^1 + p - R_h^D p ) ≤ C h^r+1 , R_h^D p - p _H^- 1≤ C h^r + 1 . For any ϕ∈ H^1 (Ω), since R_h^D p - p ∈ L^2_0 (Ω) (R_h^D p - p, ϕ) = (R_h^D p - p, ϕ - ϕ) thus it suffices to assume ∫_Ωϕ = 0. Then, using Bogovoski's map we have result (see details in <cit.>) that there exists ∈ H^2_p (Ω)^d such that ∇· = ϕ, _H^2 (Ω)≤ C ϕ_H^1 (Ω), |_Σ = 0 . Testing (<ref>) with _h=I_h∈_h^r we have (R_h^D p - p, ϕ) = b (R_h p - p, ) = b (R_h p - p, - I_h ) + a_f (R_h^D -̆,̆ I_h - ) + (R_h^D -̆,̆ I_h - ) + a_f (R_h^D -̆,̆) + (R_h^D -̆,̆). Thus | (R_h^D p - p, ϕ) | ≤ C h^r + 1ϕ_H^1 (Ω) + | a_f (R_h^D -̆,̆) | Note that integration by part gives a_f (R_h^D -̆,̆) = 2μ ( (R_h^D-̆)̆, ) = - 2 (R_h^D -̆,̆∇·) + 2 (R_h^D -̆,̆·)_Σ≲ C h^r+ 1ϕ_H^1 Therefore, we proved | (R_h^D p - p, ϕ) | ≤ C h^r + 1ϕ_H^1 (Ω). Thus this lemma holds. We define an initial valueR_h(0)as follows in terms of the Dirichlet Ritz projectionR_h^D. Firstly, assuming that the function (R_h^D∂_t (̆0),R_h^D∂_t p(0)) is known with operator R_h^D defined by (<ref>), we define R_sh(̆0) ∈_h^r to be the solution of the following weak formulation: a_s (_ ( -̆ R_sh)̆ (0), _h) + (( -̆ R_sh)̆ (0), _h)_Σ + a_f (( ∂_t -̆ R_h^D∂_t )̆ (0), E_h_h) - b (( ∂_t p - R_h^D∂_t p) (0), E_h_h) + (( ∂_t -̆ R^D_h∂_t )̆ (0), E_h_h) = 0 ∀_h ∈ S^r_h, where E_h_h denotes an extension of _h to the bulk domain Ω. From the definition of R_h^D in (<ref>) we can conclude that this definition is independent of the specific extension. Therefore, (<ref>) actually holds for all _h ∈ X^r_h. Secondly, we denote by (R_h (̆0),R_hp(0)) ∈ X^r_h× Q_h^r-1 a Dirichlet-type Stokes–Ritz projection satisfying a_f((̆0) - R_h (̆0), _h) - b( p(0)-R_hp(0), _h) +((̆0)-R_h(̆0),_h) ∀_h ∈^r_h , b (q_h, (̆0) - R_h (̆0)) = 0 ∀ q_h ∈ Q^r-1_h,0; R_h (̆0) = P R_s h(̆0) on Σ , where we require p (0) - R_h p (0) ∈ L^2_0 (Ω). Finally, with the R_h(̆0) and R_hp(0) defined above, we define R_h (0) ∈_h^r to be the solution of the following weak formulation on Σ: a_f((̆0) - R_h (̆0), E_h_h) - b( p(0)-R_hp(0), E_h_h) +((̆0)-R_h(̆0), E_h_h) + a_s((0) - R_h (0), _h) + ( (0) - R_h (0), _h )_Σ = 0 ∀_h ∈^r_h . 
Again, E_h_h denotes an extension of _h to the bulk domain Ω, and this definition is independent of the specific extension. Therefore, (<ref>) actually holds for all _h ∈ X^r_h. By adding d3-2b to (<ref>), we see that (R_h (̆0),R_hp(0),R_h (0)) defined in d3-2a2b–ritz-initial2 satisfies (<ref>) at t=0. By differentiating (<ref>) with respect to time, we have the following evolution equations: a_s ( -̆ R_h,̆_h) + ( -̆ R_h ,̆_h)_Σ + a_f (∂_t ( -̆ R_h)̆, _h) - b (∂_t ( p - R_hp), _h) + (∂_t ( -̆ R_h)̆, _h) = 0 ∀_h ∈^r_h , b (q_h, ∂_t ( -̆ R_h)̆) = 0 ∀ q_h ∈ Q^r-1_h . The functionR_h(0)defined above is only used in the error analysis (for proving Theorem <ref>). It requires knowledge of∂_t(̆0)and∂_tp(0)in definingR_sh(̆0)by (<ref>), and therefore is not convenient for practical computation. For the computation with the numerical scheme (<ref>)–(<ref>), we can define the initial value_h^0 =R_sh(0)∈^r_hin an alternative way below. We define _h^0 =R_sh(0)∈^r_h as the solution of the following weak formulation: a_s ((R_sh - ) (0), _h) + ((R_sh - ) (0),_h)_Σ ∀_h ∈^r_h = - a_f ((R_h^D -̆)̆ (0), E_h _h) + b ((R_h^D p - p) (0), E_h_h) - ((R_h^D-̆)̆(0), E_h _h) , which does not require knowledge of ∂_t(̆0) or ∂_tp(0). Again, E_h_h denotes an extension of _h to the bulk domain Ω, and this definition is independent of the specific extension. Therefore, (<ref>) actually holds for all _h ∈ X^r_h. For r≥ 2, the following result can be proved (see Lemma <ref> of Appendix B): R_sh(0)- R_h (0)_H^1(Σ)≤ Ch^r+1. §.§ Error estimates for the coupled Ritz projection at t=0 Firstly, we consider the estimation ofR_sh(̆0)which occurs as an auxiliary function in the definition ofR_h(̆0). Under the assumptions in Sections <ref> and <ref>, the following error estimate holds for the R_sh(̆0) defined in (<ref>): R_sh(̆0) - (̆0) _Σ + h R_sh(̆0) - (̆0) _s ≤ C h^r + 1 . Since we can choose an extension E_hξ_h to satisfy that E_h ξ_h _H^1 (Ω)≤ C ξ_h _H^1 (Σ), equation (<ref>) implies that a_s ((̆0) - R_sh(̆0), ξ_h) + ((̆0) - R_sh(̆0), ξ_h)_Σ≤ Ch^r ξ_h _H^1 (Σ) . This leads to the following standard H^1-norm estimate: (̆0) - R_sh(̆0) _s + (̆0) - R_sh(̆0) _Σ≤ C h^r . In order to obtain an optimal-order L^2-norm estimate for (̆0) - R_sh(̆0), we introduce the following dual problem: - ℒ_s ψ + ψ = R_sh(̆0) - (̆0) , ψ has periodic boundary condition on Σ. The regularity assumption in (<ref>) implies that a_s (ψ, ξ) + (ψ, ξ)_Σ = ( (̆0) - R_sh(̆0), ξ)_Σ ∀ξ∈ and ψ_H^2(Σ)≤(̆0) - R_sh(̆0) _Σ . We can extend ψ to be a function on Ω, still denoted by ψ, satisfying the periodic boundary condition and ψ_H^2 (Ω)≤ C ψ_H^2 (Σ). Therefore, choosing ξ=(̆0) - R_sh(̆0) in the equation above leads to (̆0) - R_sh(̆0) _Σ^2 = a_s ((̆0) - R_sh(̆0),ψ) + ((̆0) - R_sh(̆0),ψ) _Σ = a_s ((̆0) - R_sh(̆0),ψ - I_h ψ) + ((̆0) - R_sh(̆0),ψ - I_h ψ)_Σ - a_f (∂_t(̆0) - R_h^D∂_t(̆0), I_h ψ) + b(∂_tp(0) - R_h^D∂_tp (0), I_h ψ) - (∂_t(̆0) - R_h^D∂_t(̆0), I_h ψ) ≤ C h^r + 1ψ_H^2 (Σ) + | a_f (∂_t(̆0) - R_h^D∂_t(̆0), ψ)| + |b(∂_tp(0) - R_h^D∂_tp (0), ψ)| + |(∂_t(̆0) - R_h^D∂_t(̆0), ψ) | . Since ( ( ∂_t (̆0) - R_h^D∂_t (̆0)), ψ) = - ( ∂_t (̆0) - R_h^D∂_t (̆0), ∇·ψ) + (∂_t (̆0) - R_h^D ∂_t (̆0), ψ·)_Σ ≲ C h^r + 1ψ_H^2(Σ) , where the last inequality uses the estimate ψ_H^2 (Ω)≤ C ψ_H^2 (Σ) as well as the estimates of ∂_t (̆0) - R_h^D∂_t (̆0) and ∂_t (̆0) - R_h^D∂_t (̆0)_Σ in (<ref>) (with (̆0) replaced by ∂_t(̆0) therein). Furthermore, using the H^-1 estimate in (<ref>), we have b( ∂_t p (0) - R_h^D∂_t p (0), ψ) ≲ C ∂_t p (0) - R_h^D∂_t p (0) _H^- 1_pψ_H^2≤ C h^r+1ψ_H^2(Σ) . 
Then, summing up the estimates above, we obtain (̆0) - R_sh(̆0) _Σ≤ C h^r + 1 . The proof of Lemma <ref> is complete. 0.1in Secondly, we present estimates for(̆0)-R_h(̆0),(0)-R_h(0)and p(0) - R_h p(0). Under the assumptions in Sections <ref> and <ref>, the following error estimates hold (for the coupled Ritz projection in Definition <ref>): (0) - R_h (0)_Σ + h(0) - R_h (0) _s + (̆0) - R_h(̆0) _Σ≤ C h^r + 1 , (̆0) - R_h(̆0) + + h p(0) - R_hp(0) ≤ C h^r + 1 . From (<ref>) we know that R_h(̆0)=PR_sh(̆0)=R_sh(̆0)-λ(R_sh(̆0))_h on Σ, with |λ (R_sh(̆0))|= |(R_sh(̆0), )_Σ|/_h ^2_Σ = |(R_sh(̆0) - (̆0), )_Σ|/_h _Σ^2≤ C R_sh(̆0) - (̆0)_Σ≤ C h^r + 1. Therefore, using the triangle inequality, we have (̆0) - R_h(̆0) _Σ≤(̆0) - R_sh(̆0)_Σ + |λ(R_sh(̆0)) | _h _Σ≤ Ch^r+1 , where the last inequality uses the estimate of |λ(R_sh(̆0)) | above and the estimate in (<ref>). Since (R_h(̆0),R_hp(0)) is essentially a Dirichlet Ritz projection with a different boundary value, i.e., P R_s h(̆0), the error estimates for (̆0) - R_h(̆0) and p(0) - R_hp(0) are the same as those in Lemma <ref>. With the optimal-order estimates of (̆0) - R_h(̆0) _Σ, (̆0) - R_h(̆0) and p(0) - R_hp(0), the estimation of (0) - R_h (0)_Σ and (0) - R_h (0) _s would be the same as the proof of Lemma <ref>. 0.1in We need to define the auxiliary transformation map:Θ_h : 𝔛 _h →S_hdefined byΘ_h (u_h, p_h) = ξ_ha_s (ξ_h, v_h) + (ξ_h, v_h)_Σ + a_f (u_h, v_h) - b (p_h, v_h) + (u_h, v_h) = 0 ∀ v_h ∈ X_h where𝔛̃_h ⊆X_h ×Q_hconsists of(u_h, p_h)satisfying a_f (u_h, v_h) - b (p_h, v_h) + (u_h, v_h) = 0 ∀ v_h ∈ X^∘_h (= V_h) b (q_h, u_h) = 0 ∀ q_h ∈ Q_h This mapΘ_his well-defined because it is equavilent to solve a_s (ξ_h, s_h) + (ξ_h, s_h)_Σ = l (s_h) ∀ s_h ∈ S_h where the functionall : S_h →ℝ is defined by l (s_h) 0 - a_f (u_h, v_h) + b (p_h, v_h) - (u_h, v_h) wherev_h ∈X_his any finite element function withv_h |_Σ 0 = s_h. (From (<ref>) we seel (s_h)does not dependent on the choice ofv_h). The linear map Θ_h : 𝔛_h → S_h is bijective. Let the inverse of Θ_h be denoted by Θ_h^- 1 (ξ_h) = (Π_h^v ξ_h, Π_h^p ξ_h) ∈ X_h × Q_h. We have estimate Π^v_h ξ_h _f + Π_h^v ξ_h + Π_h^p ξ_h ≤ C h^- 1 / 2ξ_h _H^1 (Σ) Denote (u_h, p_h) = (Π_h^v ξ_h, Π_h^p ξ_h), then a_f (u_h, v_h) - b (p_h, v_h) + (u_h, v_h) ≲ C ξ_h _H^1 (Σ) v_h _H^1 (Σ) By inverse estimate we have v_h _H^1 (Σ)≤Ch^- 1 / 2 v_h _H^1. Testing with v_h = u_h and use the fact that b (p_h, u_h) = 0, we get u_h _f^2 + u_h ^2 ≤Ch^- 1 / 2ξ_h _H^1 (Σ) u_h _H^1 (Ω) By Korn's inequality we see u_h _H^1≤ C ( u_h _f + u_h ) It follows that u_h _f + u_h ≤ C h^- 1 / 2ξ_h _H^1 (Σ) and by inf-sup condition we have p_h ≤ C h^- 1 / 2ξ_h _H^1 (Σ) Finally, we present estimates for the time derivatives∂_t (-̆ R_h )̆ (0)and∂_t ( p - R_hp) (0). To this end, we use the following relation: ( -̆ R_h)̆ (0) = ( -̆ R_sh)̆ (0) + λ (R_sh(̆0)) _h Σ . Replacing( -̆ R_sh)̆ (0) by( -̆ R_h)̆ (0) - λ(R_sh (̆0)) _hin (<ref>), we have a_s (( -̆ R_h)̆ (0), _h) + (( -̆ R_h)̆ (0), _h)_Σ +a_f (( ∂_t -̆ R_h^D∂_t )̆ (0), _h) - b (( ∂_t p - R_h^D∂_t p) (0), _h) + (( ∂_t -̆ R^D_h ∂_t )̆ (0), _h) = λ (R_sh(̆0)) (a_s (_h, _h) + (_h, _h)_Σ) ∀_h ∈^r_h . Let(^̆#, p^#)∈×Qbe the weak solution of a_f (^̆#, ) - b (p^#, ) +(^̆#, ) = a_s ( , ) + (, )_Σ ∀∈ b (q , ^̆# ) = 0 ∀ q ∈ Q and(_̆h^#, p_h^#) ∈(^r_h, Q^r-1_h)denote the corresponding FE solution satisfying a_f (^̆#_h, _h) - b (p^#_h, _h) + (^̆#_h, _h) = a_s (_h, _h) + (_h, _h)_Σ ∀_h ∈^r_h b (q_h, ^̆#_h) = 0 ∀ q_h ∈ Q^r-1_h, where_his defined in (<ref>). 
Note that (<ref>) is equivalent to-∇·(^̆#,p^#) + ^̆# = 0 Ω (^̆#,p^#) n = -ℒ_s n + n Σ .Therefore, from the regularity estimate in (<ref>) (withk=r-1/2therein) and assumption (<ref>) onℒ_s, we obtain the following regularity estimate for the solutions of (<ref>): ^̆#_H^r+1 + p^#_H^r≤ C_H^r+3/2(Σ)≤ C . By considering the difference between (<ref>) and (<ref>), the following estimates ofe^#_h:=I_h ^̆# - _̆h^#andm^#_h:=I_h p^# - p_h^#can be derived for all_h ∈^r_handq_h ∈Q^r-1_h: a_f ( e^#_h, _h) - b (m^#_h, _h) + ( e^#_h, _h) ≲ C h^r_h _H^1 (Σ) + C h^r_h _H^1≤ C h^r-1/2_h _H^1 b (q_h, e^#_h) ≲ C h^r q_h , where we have used the inverse estimate in (<ref>) and the following trace inequality: _h_H^1(Σ)≤ Ch^-1/2_h_H^1/2(Σ)≤ C h^-1/2_h _H^1. From Korn's inequality and inf-sup condition (<ref>), choosing_h=e_h^#yields the following result: e_h^#_H^1 + m_h^#≤ Ch^r-1/2 , which also implies the following boundedness through the application of the triangle inequality: _̆h^#_H^1 + p_h^#≤ C . By using the estimates of e_h^# and m_h^# , we can estimate∂_t (-̆ R_h )̆ (0)and∂_t ( p - R_hp) (0)as follows. Under the assumptions in Sections <ref> and <ref>, the following error estimates hold (for the time derivative of the coupled Ritz projection in Definition <ref>): ∂_t ( -̆ R_h)̆ (0) + ∂_t ( -̆ R_h)̆ (0) _Σ + h ∂_t ( p - R_h p) (0) ≤ C h^r +1 . Combining (<ref>)-(<ref>) with (<ref>)-(<ref>) we observe a_s (( -̆ R_h)̆ (0), _h) + ((-̆ R_h)̆ (0), _h)_Σ +a_f(( ∂_t -̆ R_h^D∂_t )̆ (0) - λ (R_sh(̆0))_̆h^#, _h)- b (( ∂_t p - R_h^D∂_t p) (0)-λ(R_sh(̆0)) p_h^#, _h) +(( ∂_t -̆ R^D_h∂_t )̆(0) - λ (R_sh(̆0)) _̆h^#, _h) = 0 ∀_h ∈^r_h b (q_h,(∂_t -̆ R^D_h ∂_t )̆(0) - λ (R_sh u (0)) _̆h^#) = 0 ∀ q_h ∈ Q^r-1_h By comparing (<ref>)-(<ref>) with (<ref>)-(<ref>) we find the following relations: ∂_t ( -̆ R_h)̆ (0) = ( ∂_t -̆ R^D_h∂_t ) (0) - λ (R_sh(̆0)) _̆h^# , ∂_t ( p - R_h p) (0) = ( ∂_t p - R_h^D∂_t p) (0) - λ (R_sh(̆0)) p_h^# . Since |λ (R_sh(̆0))|≤ Ch^r+1, the result of this lemma follows from the estimates of the Dirichlet Stokes–Ritz projection in Lemma <ref> (with $̆ andpreplaced by∂_t$̆ and ∂_tp therein). §.§ Error estimates of the coupled Ritz projection for t>0 We first presentH^1-norm error estimates for the the coupled Ritz projection by employing the auxiliary Ritz projectionsR_h^SandR_h^Ddefined in (<ref>) and (<ref>), respectively. From (<ref>) we see thatR_h^D-̆R_h^S=̆P R_h^S-̆ R_h^S=̆ - λ (R_h^S)̆_h λ (R_h^S)̆∈ℝ,where the last equality follows from relation (<ref>). Therefore, with the relation above we have a_s(-̆R_h^D,̆_h)+(-̆R_h^D,̆_h)_Σ = a_s(-̆R_h^S,̆_h)+(-̆R_h^S,̆_h)_Σ + λ(R_h^S)̆(a_s(_h,_h)+(_h,_h)_Σ) ≲ Ch^r+1_h_H^1(Σ)≤ Ch^r+1/2_h_H^1/2(Σ)≤ Ch^r+1/2_h_H^1 ∀_h∈_h^r, where we have used the inverse inequality in (<ref>) and the trace inequality in the derivation of the last two inequalities. Moreover, since the auxiliary Ritz projectionR_h^Ddefined in (<ref>) is time-independent, it follows that(∂_t R_h^D u,∂_t R_h^D p) = (R_h^D ∂_t u, R_h^D∂_t p). Therefore, in view of estimate (<ref>) for the Dirichlet Stokes–Ritz projection, the following estimate can be found: a_s ( -̆ R_h^D,̆_h) + ( -̆ R_h^D,̆_h)_Σ + a_f (∂_t ( -̆ R_h^D)̆, _h) - b (∂_t ( p - R_h^Dp), _h) + (∂_t ( -̆ R_h^D)̆, _h) ≲ C h^r _h _H^1 ∀_h∈_h^r. By considering the difference between (<ref>) and (<ref>), we can derive the following inequality: a_s (R_h -̆ R_h^D ,̆_h) + (R_h -̆ R_h^D ,̆_h)_Σ + a_f (∂_t (R_h -̆ R_h^D )̆, _h) - b (∂_t (R_h p - R_h^D p), _h) + (∂_t (R_h -̆ R_h^D )̆, _h) ≲ C h^r _h _H^1 ∀_h∈_h^r. 
Then, choosing_h = ∂_t (R_h -̆ R_h^D )̆in (<ref>) and using relationb(∂_t (R_h p - R_h^D p), ∂_t (R_h -̆ R_h^D )̆)=0 (which follows from (<ref>) and (<ref>)), using Young's inequalityCh^r ∂_t(R_h-̆R_h^D)̆_H^1≤ Cε^-1 h^2r + ε∂_t(R_h-̆R_h^D)̆_H^1^2with a small constantεso thatε∂_t(R_h-̆R_h^D)̆ _H^1^2can be absorbed by the left hand side of (<ref>), we obtain R_h -̆ R_h^D _L^∞ H^1(Σ) + ∂_t ( R_h -̆ R_h^D )̆ _L^2 H^1 ≤ C h^r + C (R_h -̆ R_h^D )̆ (0) _s +C (R_h -̆ R_h^D )̆ (0) _Σ≤ C h^r , where the last inequality uses the estimates in Lemma <ref> and Lemma <ref>. Then, by applying the inf-sup condition in (<ref>) (which involves _h _H^1(Σ)in the denominator), we can obtain the following estimate from (<ref>): ∂_t (R_h p - R_h^D p) ≤ C R_h -̆ R_h^D _H^1(Σ)+C∂_t( R_h -̆ R_h^D )̆_H^1+Ch^r, which combined with the estimate in (<ref>), leads to the following estimate: ∂_t (R_h p - R_h^D p) _L^2 L^2≤ C h^r . Therefore, using an additional triangle inequality, the estimates in (<ref>)–(<ref>) can be written as follows: ∂_t(R_h -̆)̆_L^2 H^1 + R_h -̆_L^∞ H^1 (Σ) + ∂_t( R_h p - p) _L^2 L^2≤ C h^r . With the initial estimates in Lemma <ref>, the estimate of ∂_t ( R_h -̆ )̆ _L^2 H^1above further implies that R_h -̆_L^∞ H^1≤ (R_h -̆)̆ (0) _H^1 + C∂_t ( R_h -̆)̆ _L^2 H^1≤ C h^r . Since ∂_t (R_h - ) = R_h -̆ $̆ on the boundary Σ, by using the Newton–Leibniz formula with respect to t∈[0,T], the estimate in (<ref>) and initial estimates in Lemma <ref>, we have R_h - _L^∞ H^1 (Σ) ≤ ( R_h - ) (0) _H^1(Σ) + C ∂_t (R_h - ) _L^2 H^1 (Σ) ≤ ( R_h - ) (0) _H^1(Σ) + C R_h -̆_L^2 H^1 (Σ)≤ C h^r . In the same way, from (<ref>) and initial estimates in Lemma <ref> we have R_h p - p _L^∞ L^2 ≤ C (R_h p - p)(0) + CR_h p - _L^2 L^2≤ Ch^r. Moreover, by differentiating (<ref>) with respect to time, we have a_s (∂_t (R_h -̆)̆, _h) + (∂_t (R_h -̆)̆, _h)_Σ + a_f (∂^2_t (R_h -̆)̆, _h) - b (∂_t^2 (R_h p - p), _h) + (∂_t^2 (R_h -̆)̆, _h) = 0 ∀_h ∈_h^r , b (q_h,∂_t^2(R_h-̆)̆) =0 ∀ q_h∈ Q_h^r-1 . Similarly, by choosing _h = ∂_t^2 (R_h -̆ R_h^D )̆ in (<ref>) and using the same approach as above with the initial value estimates in (<ref>), we can obtain the following estimate (the details are omitted): ∂_t (R_h -̆)̆_L^∞ H^1 + ∂_t (R_h -̆)̆_L^∞ H^1(Σ) + ∂_t (R_h p - p) _L^∞ L^2 + ∂_t^2 ( R_h -̆)̆_L^2 H^1+∂_t^2(R_hp-p)_L^2L^2 ≤ C h^r . This establishes the H^1-norm error estimates for the non-stationary Ritz projection defined in (<ref>). Testing (<ref>) with v_h = ∂_t^2 (R_h u - I_h u), we can deduce similarly that ∂_t (R_h -̆ I_h )̆_L^∞ H_s + ∂_t ( R_h -̆ I_h )̆_L^∞ L^2 (Σ) + ∂_t^2 ( R_h -̆ I_h )̆_L^2 H _f + ∂_t^2 ( R_h -̆ I_h )̆_L^2 L^2 ≤ C h^r + C∂_t (R_h -̆ I_h )̆ (0) _Σ + C∂_t (R_h -̆ I_h )̆ (0) _H_s≤ C h^r From Korn's inequality we obtain ∂_t ( R_h -̆ I_h )̆_L^∞ H^1 (Σ) + ∂_t^2 ( R_h -̆ I_h )̆_L^2 H^1 (Ω)≤ C h^r And it follows from ∂_t^2 ( R_h -̆ I_h )̆_L^2 H^1 (Ω)≤ C h^r that ∂_t ( R_h -̆ I_h )̆_L^∞H^1 (Ω)≤∂_t ( R_h -̆ I_h )̆ (0) _H^1 (Ω) + C∂_t^2 ( R_h -̆ I_h )̆_L^2 H^1 (Ω)≤ C h^r Finally, from inf-sup condition we have ∂_t (R_h p - I_h p) ≤ C R_h -̆ I_h _H^1 (Σ) + C ∂_t (R_h -̆ I_h )̆_H^1 (Ω) + C h^r Therefore, ∂_t (R_h p - I_h p) _L^∞ L^2≤ C R_h -̆ I_h _L^∞ H^1 (Σ) + C ∂_t (R_h -̆ I_h )̆_L^∞ H^1 (Ω) + C h^r ≤ C h^r Thus (<ref>) and (<ref>) are proved. And from inf-sup condition ∂_t^2 (R_h p - I_h p) ≤∂_t ( R_h -̆ I_h )̆_H^1 (Σ) + ∂_t^2 (R_h -̆ I_h )̆_H^1 (Ω) + C h^r Thus we have ∂_t^2 (R_h p - I_h p) _L^2 L^2≤ C h^r We then present L^2-norm error estimates for the the coupled Ritz projection. 
To this end, we introduce the following dual problem: - ℒ_s ϕ + ϕ = ∂_t (ϕ, q) + f in Σ -∇·σ (ϕ, q) + ϕ = 0 in Ω ∇·ϕ = 0 in Ω , with the initial condition (ϕ,q) =0 at t=T. Problem (<ref>) can be equivalently written as a backward evolution equation of ξ= (ϕ, q), i.e., - ℒ_s 𝒩ξ + 𝒩ξ - ∂_t ξ = on Σ×[0,T), ξ (T) = 0 , where 𝒩: H^-1/2(Σ)^d→ H^1/2(Σ)^d is the Neumann-to-Dirichlet map associated to the Stokes equations. The existence, uniqueness and regularity of solutions to (<ref>) are presented in the following lemma, for which the proof is given in Appendix A by utilizing and analyzing (<ref>). Problem (<ref>) has a unique solution which satisfies the following estimate: ϕ_L^2 H^2 + ϕ_L^2 H^2 (Σ) + q _L^2 H^1 + (ϕ, q) (0) _Σ≤ C f _L^2 L^2 (Σ). By choosing f = R_h - and, testing equations (<ref>) and (<ref>) with R_h - and R_h-̆$̆, respectively, and using relation∂_t(R_h - )=R_h-̆$̆ on Σ, we have a_s (ϕ, R_h - ) + (ϕ, R_h - )_Σ + a_f (ϕ, R_h -̆)̆ - b (q, R_h -̆)̆ + (ϕ, R_h -̆)̆ = d/d t ( (ϕ, q)·, R_h - )_Σ + R_h - _Σ^2. In view of the definition of the non-stationary Ritz projection in (<ref>), we can subtract I_hϕ from ϕ in the inequality above by generating an additional remainder b(R_hp-p,ϕ-I_hϕ). This leads to the following result in view of the estimate in (<ref>): d/d t ( (ϕ, q) , R_h - )_Σ + R_h - _Σ^2 = a_s ( ϕ - I_h ϕ , R_h - ) + (ϕ - I_h ϕ, R_h - )_Σ + a_f (ϕ-I_hϕ, R_h -̆)̆ - b (q - I_h q, R_h -̆)̆ + (ϕ-I_hϕ, R_h -̆)̆-b(R_hp-p,ϕ-I_hϕ) ≤ C h^r + 1 (ϕ_H^2 + ϕ_H^2 (Σ) + q _H^1) . Since ( R_h - ) (0) _Σ≤ C h^r+1 (see Lemma <ref>), the inequality above leads to the following result: R_h - _L^2 L^2 (Σ)^2 ≤ Ch^2r + 2 + Ch^r + 1 R_h - _L^2 L^2 (Σ) + R_h (0) - (0) _L^2 (Σ) ( (ϕ, q) ) (0) _L^2(Σ) ≤ Ch^2r + 2 + Ch^r + 1 R_h - _L^2 L^2 (Σ) + C h^r+1 R_h - _L^2 L^2 (Σ) , and therefore R_h - _L^2 L^2 (Σ)≤ Ch^r + 1 . By using the same approach, choosing f = R_h -̆$̆ andf = ∂_t(R_h -̆ )̆in (<ref>), respectively, the following result can be shown (the details are omitted): R_h -̆_L^2 L^2 (Σ) + ∂_t ( R_h -̆)̆_L^2 L^2 (Σ)≤ C h^r +1 . This also implies, via the Newton–Leibniz formula in time, R_h -̆_L^∞ L^2 (Σ)≤ C h^r +1 . Furthermore, we consider a dual problem defined by { - ∇·σ (ϕ, q) + ϕ = R_h -̆ in Ω ∇·ϕ = 0 in Ω ϕ|_Σ = 0, q∈ L^2_0(Ω) , . which satisfies the following standardH^2regularity estimate ϕ_H^2 + q _H^1 + σ(ϕ ,q) _L^2(Σ)≤ C R_h -̆ , where the term σ(ϕ ,q) _L^2(Σ) is included on the left-hand side because it is actually bounded by ϕ _H^2 + q _H^1. Then, testing (<ref>) withR_h -̆ $̆, we have R_h -̆^2 = a_f (ϕ, R_h -̆)̆ - b (q, R_h -̆)̆ + (ϕ,R_h -̆)̆ - ( (ϕ, q) , R_h -̆)̆_Σ = a_f (ϕ - I_h ϕ, R_h -̆)̆ - b (q - I_h q, R_h -̆)̆ - ( (ϕ, q) , R_h -̆)̆_Σ +(ϕ-I_hϕ,R_h -̆)̆ - b (R_h p - p,ϕ- I_h ϕ) ≤ C h (ϕ_H^2 +q_H^1) ( R_h -̆_H^1 +R_h p - p) + (ϕ, q)·_Σ R_h -̆_Σ ≤ C h^r + 1 R_h -̆ + C R_h -̆ R_h -̆_Σ . The last inequality implies, in combination with L-infty-2, the following result: R_h -̆≤ Ch^r + 1 . By using the same approach, replacing R_h-̆$̆ by∂_t (R_h-̆)̆in (<ref>), the following estimate can be shown (the details are omitted): ∂_t(R_h -̆)̆_L^2L^2≤ Ch^r + 1 . The proof of Theorem <ref> is complete. § NUMERICAL EXAMPLES In this section, we present numerical tests to support the theoretical analysis in this article and to show the effectiveness of the proposed algorithm. The operatorL_s= C_0 ∂_xx - C_1 on the interfaceΣis considered. All computations are performed by the finite element package NGSolve; see <cit.>. 
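The convergence study below follows the standard manufactured-solution methodology: the scheme is run on a sequence of uniformly refined meshes and the observed order is read off from the ratio of successive errors. As a small illustration of this post-processing step only (plain Python/NumPy, not the authors' NGSolve code, and with placeholder error values rather than the numbers from the tables below):

```python
import numpy as np

# Hypothetical L2-norm errors on meshes h = 1/8, 1/16, 1/32
# (illustrative placeholder values only, not the values from the tables).
h = np.array([1/8, 1/16, 1/32])
err = np.array([2.1e-3, 2.7e-4, 3.4e-5])

# Observed order between consecutive refinements: log(e_H/e_h) / log(H/h)
orders = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(orders)  # for errors behaving like C*h^(r+1) this approaches r+1,
               # e.g. 3 for the lowest-order Taylor-Hood pair (r = 2)
```

With errors decaying like C h^(r+1), the printed orders approach r+1, which is how the third-order accuracy quoted below for velocity and displacement should be read.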
0.1in To test the convergence rate of the algorithm, we consider an artificial example of a two-dimensional thin structure model given in f-e–s-e with extra sources such that the exact solution is given by u_1 = 4sin(2π x)sin(2 π y)sin(t), u_2 = 4(cos(2π x)cos(2 π y))sin(t), p = 8(cos(4π x) - cos(4π y))sin(t). First, we examine a problem involving left/right-side periodic boundary conditions and top/bottom interfaces on the domain Ω = [0,2] × [0,1]. A uniform triangular partition is employed, featuring M+1 vertices in the y-direction and 2M+1 vertices in the x-direction, where h = 1/M. The classical lowest-order Taylor–Hood element is utilized for spatial discretization. To verify the L^2-norm error estimates, we set all involved parameters to 1. Our algorithm is applied to solve the system with M = 8, 16, 32, τ = h^3, and the terminal time T = 0.1. The numerical results are presented in Table <ref>, which shows that the algorithm has third-order accuracy for velocity and displacement in the L^2-norm, as well as second-order accuracy for pressure in the L^2-norm and displacement in the energy-norm. These numerical results align with our theoretical analysis. Next, we test our algorithm for a model with Dirichlet boundary condition on the left and right boundaries, using the same configuration as previously described. Both the lowest-order Taylor–Hood element and the MINI element are employed for spatial discretization. For the Taylor–Hood element and the MINI element, we adopt τ = h^3 and τ = h^2 in the computation, respectively. The numerical results are displayed in Table <ref>. As observed in Table <ref>, the algorithm, when paired with both the Taylor–Hood element and the MINI element, yields numerical results exhibiting optimal convergence orders for $̆ and. We consider a benchmark model which was studied by many people <cit.>. All the quantities will be given in the CGS system of units <cit.>. The model is described by f-e–s-e in Ω = (0,5)× (0, 0.5) with the physical parameters: fluid density ρ_f = 1, fluid viscosity μ = 0.035, solid density ρ_s = 1.1, the thickness of wall ϵ_s = 0.1, Young's modulus E = 0.75× 10^6, Poisson's ratio σ = 0.5 and C_0 = Eϵ_s/2(1+σ), C_1 = Eϵ_s/R^2(1-σ^2) , where R = 0.5 is the width of the domain Ω. The boundary conditions on the in/out-flow sides (x=0, x=5) are defined by σ(,̆p) = -p_ in/out where p_in (t)={[ p_max2[1-cos(2 π tt_max)] if t ≤ t_max; 0 if t>t_max ] , p_out (t)=0 ∀ t ∈(0, T). . with p_ max = 1.3333× 10^4 and t_ max = 0.003. The top and bottom sides of Ω are thin structures, and the fluid is initially at rest. We take a uniform triangular partition with M+1 vertices in y-direction and 10M+1 vertices in x direction (h=1/M), and solve the system by our algorithm where the lowest-order Taylor–Hood finite element approximation is used with the spatial mesh size h=1/64 (M=64), the temporal step size τ =h^3 and the penalty parameter β = 0.5. We present the contour of pressure p in Figure <ref> at t=0.003, 0.009, 0.016, 0.026 (from top to bottom). We can see a forward moving pressure wave, which reaches the right-end of the domain and gets reflected. The reflected wave is characterized by negative values of the pressure, which was also observed in <cit.>. 0.1in § CONCLUSION We have proposed a new kinematically coupled scheme which decouples fluid velocity from the structure displacement for solving a thin-structure interaction problem described by (<ref>)–(<ref>). 
We have proved unconditionally energy stability and optimal-order convergence in theL^2norm for the proposed method. The latter is established by introducing a newly defined coupled non-stationary Ritz projection for the fluid-structure interaction problem and establishing optimal-order approximation properties for the coupled non-stationary Ritz projection through analyzing its dual problem, which turns out to be equivalent to a backward evolution equation on the boundaryΣ, i.e., - ℒ_s 𝒩ξ + 𝒩ξ - ∂_t ξ = on Σ×[0,T), ξ (T) = 0 , in terms of the Neumann-to-Dirichlet map𝒩: H^-1/2(Σ)^d→ H^1/2(Σ)^dassociated to the Stokes equations. Although we have focused on the analysis for the specific kinematically coupled scheme proposed in this article for a thin-structure interaction problem, the new framework developed in this article, including the non-stationary Ritz projection and its approximation properties, may be extended to other schemes and different fluid-structure interaction models. § ACKNOWLEDGEMENT This work is supported in part by the NSFC key program (project no. 12231003), NSFC general program (project no. 12071020), Guangdong Provincial Key Laboratory IRADS (2022B1212010006, UIC-R0400001-22) and Guangdong Higher Education Upgrading Plan (UIC-R0400024-21), the Hong Kong Research Grants Council (GRF project no. PolyU15301321), and an internal grant of The Hong Kong Polytechnic University (Work Programme: ZVX7). § APPENDIX A: PROOF OF LEMMA <REF> In this appendix, we prove Lemma <ref> via the following proposition, where equation (<ref>) differs from (<ref>) via a change of variablet→ T-tin time. The initial-boundary value problem - ℒ_s ϕ + ϕ = -∂_t (ϕ, q) + on Σ× (0,T] -∇·σ (ϕ, q) + ϕ = 0 in Ω× (0,T] ∇·ϕ = 0 in Ω× (0,T] (ϕ, q) =0 t=0 , has a unique solution (ϕ,q) which satisfies the following regularity estimate: ϕ_L^2 H^2 + ϕ_L^2 H^2 (Σ) + q _L^2 H^1 + (ϕ, q) _L^∞ L^2 (Σ)≤ C _L^2 L^2 (Σ) We divide the proof into three parts. In the first part, we introduce the Neumann-to-Dirichlet operator and reformulate (<ref>) into an evolution equation (<ref>) on the boundary Σ with the aid of Neumann-to-Dirichlet operator, and then establish some mapping properties of the Neumann-to-Dirichlet operator to be used in the proof of Proposition <ref>. In the second part, we establish the existence, uniqueness and regularity of solutions to an equivalent formulation of (<ref>), i.e., equation (<ref>) below. Finally, in the third part, we establish regularity estimates for the solutions to (<ref>). Part 1. We can define the Neumann-to-Dirichlet operator 𝒩 : H^- 1 / 2(Σ)^d → H^1 / 2 (Σ)^d as ζ↦ (𝒩^vζ) |_Σ, with (𝒩^v ζ, 𝒩 ^p ζ) being the solution of the following Stokes equation: a_f (𝒩 ^v ζ , ) - b (𝒩^p ζ, ) + (𝒩^v ζ, ) = (ζ , )_Σ ∀∈ H^1(Ω)^d b (q, 𝒩^v ζ) = 0 ∀ q ∈ L^2(Ω) . Therefore, - ∇·σ(𝒩 ^v ζ,𝒩 ^p ζ) + 𝒩 ^v ζ = 0 Ω σ(𝒩 ^v ζ,𝒩 ^p ζ) = ζ Σ . Let ξ = (ϕ, q). Then it is easy to see that problem (<ref>) can be equivalently formulated as follows: Find ξ (t) ∈ H^1(Σ)^d for t∈ [0,T] satisfying the following evolution equation: - ℒ_s 𝒩ξ + 𝒩ξ + ∂_t ξ = on Σ×(0,T], ξ (0) = 0. By choosing =𝒩 ^v in (<ref>) and using relation b (𝒩^p ζ, 𝒩 ^v )=0 (due to the definition of 𝒩 ^v), we obtain (ζ, 𝒩)_Σ = a_f (𝒩^v ζ, 𝒩^v ) + (𝒩^v ζ, 𝒩^v ) ∀,∈ H^- 1 / 2 (Σ)^d . Especially, this implies that (ζ, 𝒩ζ)_Σ =2μ𝒩^v ζ^2_f+ 𝒩 ^v ζ^2 ∼𝒩^v ζ_H^1 ^2∼𝒩ζ^2_H^1/2(Σ) ∀ζ∈ H^- 1 / 2 (Σ)^d . 
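A small remark, recorded here only for convenience and introducing nothing beyond the identities just displayed: since a_f(·,·) and the L^2(Ω) inner product are symmetric bilinear forms, the two-argument identity above also gives (ζ, 𝒩η)_Σ = a_f(𝒩^v ζ, 𝒩^v η) + (𝒩^v ζ, 𝒩^v η) = (𝒩ζ, η)_Σ for all ζ, η ∈ H^-1/2(Σ)^d, so that 𝒩 is in particular symmetric as an operator on L^2(Σ)^d, while the second identity shows (ζ, 𝒩ζ)_Σ ≥ 0. These are the two elementary facts invoked in Part 2 below when 𝒩 is identified as a compact self-adjoint positive-definite operator.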
By choosing k=s in the regularity result in (<ref>) with s≥ -1/2,s∈ℝ and noting the trace inequality, we can establish the following mapping property of the Neumann-to-Dirichlet operator: 𝒩ζ_H^s+1 (Σ)≤ C𝒩^vζ_H^s+3/2(Ω)≤ C ζ_H^s (Σ) ∀ s≥ -1/2,s∈ℝ, and from regularity estimate (<ref>) of Stokes equation there holds 𝒩ζ_H^3 / 2 (Σ)≤ C 𝒩^v ζ_H^2 (Ω)≤ C ζ_H^1 / 2 (Σ). Complex interpolation of (<ref>) and (<ref>) gives 𝒩ζ_H^1 (Σ)≤ C ζ_L^2 (Σ) Note that 𝒩ζ = 0 if and only if ζ = λ for some scalar constant λ∈ℝ. This motivates us to define the following subspace of H^s(Σ)^d for s∈ℝ: H^s(Σ)^d:={ζ∈ H^s(Σ)^d: (ζ,)_Σ=0 } . Then we define the Dirichlet-to-Neumann operator 𝒟:H^1/2(Σ)^d→H^-1/2(Σ)^d as follows: For ζ∈H^1/2(Σ)^d, let (𝒟^vζ,𝒟^pζ) be the weak solution of -∇·(𝒟^vζ,𝒟^pζ)+𝒟^vζ =0 in Ω ∇·𝒟^vζ =0 in Ω (𝒟^vζ)|_Σ =ζ on Σ , and then define 𝒟ζ∈H^-1/2(Σ)^d by the following equation a_f (𝒟^v ζ , ) - b (𝒟^p ζ, ) + (𝒟^v ζ, ) = (𝒟ζ , )_Σ ∀∈ H^1(Ω)^d . Since the function 𝒟^pζ in equation (<ref>) is only determined up to a constant, we can choose this constant in such a way that the function 𝒟ζ defined by (<ref>) lies in H^-1/2(Σ)^d. Using trace theorem and Bogovoski's map (cf. <cit.>) there exists ∈ H^1(Ω)^d such that |_Σ=, ∇·=_Σ^2/|Ω| with _H^1≤ C, testing (<ref>) with such , noting the assumption that (𝒟ζ,)_Σ=0, we obtain |𝒟^pζ|≤ C𝒟^vζ_H^1, where 𝒟^pζ is the mean value of 𝒟^pζ over Ω. Therefore, choosing k=s with s≥ 1/2,s∈ℝ in (<ref>) and combining (<ref>) leads to the following estimates 𝒟^vζ_H^s+1/2+𝒟^pζ_H^s-1/2≤ Cζ_H^s(Σ) ∀ s≥ 1/2,s∈ℝ. From the weak form (<ref>), it follows that 𝒟ζ_H^-1/2(Σ)≤ C(𝒟^vζ_H^1+𝒟^pζ). Meanwhile when s≥ 3/2, by trace inequality we have 𝒟ζ_H^s-1(Σ)≤ C(𝒟^vζ_H^s+1/2+𝒟^pζ_s-1/2) ∀ s≥ 3/2,s∈ℝ. Combining (<ref>), (<ref>) and (<ref>) leads to the following estimates of the Neumann value 𝒟ζ in terms of the Dirichlet value ζ: 𝒟ζ_H^-1/2(Σ) ≤ Cζ_H^1/2(Σ) ∀ζ∈H^1/2(Σ)^d , 𝒟ζ_H^s-1(Σ) ≤ Cζ_H^s(Σ) ∀ζ∈H^s(Σ)^d. The following complex interpolation of Sobolev spaces hold: [H^k(Σ)^d,H^s(Σ)^d]_θ=H^θ s+(1-θ)k(Σ)^d ∀ k,s∈ℝ,θ∈ [0,1]; [H^k(Σ)^d,H^s(Σ)^d]_θ=H^θ s+(1-θ)k(Σ)^d ∀ k,s∈ℝ,θ∈ [0,1]; where (<ref>) follows from <cit.> and (<ref>) follows from (<ref>) because H^s(Σ)^d is a retract of H^s(Σ)^d for s∈ℝ via projection π:H^s(Σ)^d→H^s(Σ)^d, with π(ζ):=ζ-(ζ,)_Σ/^2_Σ. Therefore, the following result follows from the complex interpolation between the two estimates in (<ref>): 𝒟ζ_H^s-1(Σ)≤ Cζ_H^s(Σ) ∀ζ∈H^s(Σ)^d ∀ s≥ 1/2,s∈ℝ. If we restrict the domain of 𝒩 to H^-1/2(Σ)^d, then 𝒩:H^-1/2(Σ)^d→H^1/2(Σ)^d and 𝒟:H^1/2(Σ)^d→H^-1/2(Σ)^d are inverse maps of each other. This leads to the following norm equivalence: ζ_H^-1/2(Σ)∼𝒩ζ_H^1/2(Σ) ∀ζ∈H^-1/2(Σ)^d. Similarly, from identity 𝒟𝒩ζ=𝒩𝒟ζ=ζ for ζ∈H^1/2(Σ)^d and the mapping property in (<ref>) and (<ref>), we conclude that the maps 𝒩:H^s(Σ)^d→H^s+1(Σ)^d and 𝒟:H^s+1(Σ)^d→H^s(Σ)^d are also inverse to each other for all s≥ -1/2,s∈ℝ. This implies the following norm equivalence for s≥ -1/2,s∈ℝ: ζ_H^s(Σ)∼𝒩ζ_H^s+1(Σ); ζ_H^s+1(Σ)∼𝒟ζ_H^s(Σ) ∀∈H^-1/2(Σ)^d To facilitate further use, we summarize the properties of the NtD (Neumann to Dirichlet) operator and DtN (Dirichlet to Neumann) operator in the following lemma:   * For s≥ -1/2,s∈ℝ, the NtD operator 𝒩:H^s(Σ)^d→H^s+1(Σ)^d and DtN operator 𝒟:H^s+1(Σ)^d→H^s(Σ)^d are bounded and inverse to each other. * With domain dom(𝒟):=H^1(Σ)^d⊆L^2(Σ)^d, the DtN operator 𝒟 is a self-adjoint positive-definite operator on L^2(Σ)^d. The NtD operator 𝒩:L^2(Σ)^d→L^2(Σ)^d is a compact self-adjoint positive-definite operator on L^2(Σ)^d. 
* The square root operators 𝒟^1/2 and 𝒩^1/2 are well defined. Moreover, for s≥ -1/2,s∈ℝ, operators 𝒩^1/2:H^s(Σ)^d→H^s+1/2(Σ)^d and 𝒟^1/2:H^s+1/2(Σ)^d→H^s(Σ)^d are bounded and inverse to each other. The three statements are proved as follows. * The first statement has been proved in (<ref>). * From (<ref>) and (<ref>) it follows that 𝒩 is self-adjoint positive-definite operator on L^2(Σ)^d. Since H^1(Σ)^d→L^2(Σ)^d is a compact embedding by Rellich-Kondrachov theorem (cf. <cit.>), from mapping property (<ref>) of 𝒩 it follows that 𝒩 is a compact operator. To verify 𝒟: dom(𝒟)→L^2(Σ)^d is self-adjoint, it suffices to show that if ζ satisfies |(ζ,𝒟)_Σ|≤ C_Σ ∀∈H^1(Σ)^d, then ζ∈H^1(Σ)^d. From (<ref>), by Riesz representation theorem there exists ∈L^2(Σ)^d such that (ζ,𝒟)_Σ=(,)_Σ ∀∈H^1(Σ)^d. Especially, taking =𝒩, it follows that (ζ,)_Σ=(,𝒩)_Σ=(𝒩,)_Σ ∀∈L^2(Σ)^d. Therefore ζ=𝒩∈H^1(Σ)^d, proof of the second statement is complete. * By the spectrum theory of compact self-adjoint operator (cf. <cit.>), L^2(Σ)^d admits an orthornormal basis of eigenvectors {_i}_i∈ℕ of 𝒩 and 𝒩 has the following expression 𝒩=∑_i=1^∞ (𝒩,_i)_Σ_i=∑_i=1^∞λ_i(,_i)_Σ_i ∀∈H^-1/2(Σ)^d, where λ_i>0 is the eigenvalue associated with _i. From norm equivalence (<ref>), we can deduce that for s∈ℕ there holds _H^s(Σ)∼𝒟^s_Σ= (∑_i=1^∞λ_i^-2s|(,_i)_Σ|^2 )^1/2 ∀ s∈ℕ. In view of complex interpolation result of weighted ℓ^2-sequence spaces (cf. <cit.>), in fact (<ref>) is valid for all s≥ 0,s∈ℝ by complex interpolation method. Moreover for -1/2≤ s<0, using norm equivalence (<ref>) and (<ref>) (which is valid for s≥ 0,s∈ℝ) we have _H^s(Σ)∼𝒩_H^s+1(Σ)∼(∑_i=1^∞λ_i^-2s|(,_i)_Σ|^2 )^1/2 ∀ s∈ℝ,-1/2≤ s<0. Combining (<ref>) and (<ref>), we arrive at _H^s(Σ)∼(∑_i=1^∞λ_i^-2s|(,_i)_Σ|^2 )^1/2 ∀ s≥ -1/2,s∈ℝ. We can define square root operators 𝒩^1/2 and 𝒟^1/2 by formula 𝒩^1/2: =∑_i=1^∞λ_i^1/2(,_i)_Σ_i ∀∈H^-1/2(Σ)^d 𝒟^1/2: =∑_i=1^∞λ_i^-1/2(,_i)_Σ_i ∀∈L^2(Σ)^d , from the norm equivalence in (<ref>), it is direct to verify that operators 𝒩^1/2 and 𝒟^1/2 are inverse to each other and satisfy the following mapping property for s≥ -1/2, s∈ℝ ζ_H^s(Σ)∼𝒩^1/2ζ_H^s+1/2(Σ); ζ_H^s+1/2(Σ)∼𝒟^1/2ζ_H^s(Σ) ∀∈H^-1/2(Σ)^d. The proof of third statement is complete. Part 2. Taking into account of the fact that 𝒩 is not injective on L^2(Σ)^d, for convenience of our further construction we first take the L^2-orthogonal projection π:L^2(Σ)^d→L^2(Σ)^d defined as in (<ref>) on the both side of (<ref>), and obtain the following equation with solution space contained in L^2(Σ)^d: seek ∈ L^2H^1(Σ)^d with ∂_t∈ L^2L^2(Σ)^d satisfying ∂_t+𝒜=; (0)=0, where 𝒜=π(I-ℒ_s)𝒩 and =π. One difficulty in proving existence and regularity of solution to (<ref>) is that the operator 𝒜:H^1(Σ)^d→L^2(Σ)^d is not a self-adjoint operator in L^2(Σ)^d. To overcome this difficulty, we consider the following change of variable =𝒩^1/2, and reformulate (<ref>) into an abstract Cauchy problem on : seek ∈ L^2H^1(Σ)^d with ∂_tω∈ L^2L^2(Σ)^d satisfying ∂_t +ℬ=; (0)=0, where ℬ=𝒩^1/2π(I-ℒ_s)𝒩^1/2. We summarize some useful properties on the operators 𝒜 and ℬ in the following lemma:   * There holds norm equivalence for 0≤ s≤ 1,s∈ℝ 𝒜_H^s(Σ)∼_H^s+1(Σ); ℬ_H^s(Σ)∼_H^s+1(Σ) ∀∈H^-1/2(Σ)^d. * ℬ is a self-adjoint positive-definite operator on L^2(Σ)^d with domain dom(ℬ):=H^1(Σ)^d. The two statements are proved as follows. * In view of the norm equivalence relations in (<ref>) and (<ref>), it suffices to show the following norm equivalence for -1≤ s≤ 1,s∈ℝ π (I-ℒ_s)_H^s(Σ)∼_H^s+2(Σ) ∀∈H^-1/2(Σ)^d. 
Note that one direction of the norm equivalence in (<ref>) is given by assumption (<ref>). To prove the opposite direction, observe first that π(I-ℒ_s)_H^-1(Σ)_H^1(Σ)≥ (π(I-ℒ_s),)_Σ=((I-ℒ_s),)_Σ≥ C^2_H^1(Σ). It follows that (<ref>) is valid for s=-1. Next we note that, by definition (<ref>) of projection π:H^s(Σ)^d→H^s(Σ)^d, π(I-ℒ_s)-(I-ℒ_s)_H^s(Σ)≤ C|(,(I-ℒ_s))_Σ|≤ C_H^1(Σ) For -1≤ s≤ 1,s∈ℝ, in view of regularity assumption (<ref>) and the estimate (<ref>) above, we have _H^s+2(Σ) ≤ C(I-ℒ_s)_H^s(Σ) ≤ Cπ(I-ℒ_s)_H^s(Σ)+C_H^1(Σ) ≤ Cπ(I-ℒ_s)_H^s(Σ) . Thus (<ref>) is proved and the first statement follows directly. * Since ℬ is obviously symmetric and positive definite on its domain dom(B)=H^1(Σ)^d. To prove that ℬ is self-adjoint, it remains to show that the domain of the dual operator ℬ' defined by dom(ℬ') = {∈L^2(Σ)^d: ∃ ∈L^2(Σ)^d (,ℬ)_Σ=(,)_Σ ∀∈H^1(Σ)^d } , coincides with the domain of ℬ. Therefore, we need to prove that if ∈L^2(Σ)^d satisfies (,ℬ)_Σ=(,)_Σ ∀∈H^1(Σ)^d, for some ∈L^2(Σ)^d, then ∈H^1(Σ)^d. To this end, we define ∈H^1(Σ)^d to be the weak solution of equation a_s(,)+(,)_Σ=(𝒟^1/2,)_Σ ∀∈H^1(Σ)^d, where the existence and uniqueness of solution to (<ref>) is due to coercive property: ^2_s+^2_Σ∼_H^1(Σ)^2,∀∈H^1(Σ)^d. Equation (<ref>) means π(I-ℒ_s)=𝒟^1/2∈H^-1/2(Σ)^d, thus by norm equivalence (<ref>) we have ∈H^3/2(Σ)^d. Now we observe (𝒟^1/2,ℬ)_Σ=(π(I-ℒ_s),𝒩^1/2)_Σ=(𝒟^1/2,𝒩^1/2)_Σ=(,)_Σ ∀∈H^1(Σ)^d By comparing (<ref>) with (<ref>) we obtain =𝒟^1/2∈H^1(Σ)^d. This completes the proof. Especially, since ℬ is a self-adjoint positive-definite operator on L^2(Σ)^d with domain dom(ℬ):=H^1(Σ)^d, -ℬ generates an analytic semigroup E(t):L^2(Σ)^d→L^2(Σ)^d for t≥ 0 (cf. <cit.>), and the unique solution to (<ref>) is given by (t)=∫_0^tE(t-s)(s)ds. Moreover, for self-adjoint semigroup on a Hilbert space, the following L^2-maximal regularity estimate holds (cf. <cit.>): ∂_t _L^2L^2(Σ)+ℬ_L^2L^2(Σ)≤ C_L^2L^2(Σ) , which can be obtained by testing (<ref>) with ∂_t. If the source term in (<ref>) possesses higher spacial regularity, the solution also inherits higher spacial regularity. To see this, assume ∈ L^2H^1(Σ)^d, then since ℬ(t)=∫_0^t E(t-s)ℬ(s)ds is the solution to (<ref>) with the source term replaced by ℬ. Thus again by maximal L^2-regularity estimate, we have ℬ∂_t _L^2L^2(Σ)+ℬ^2_L^2L^2(Σ)≤ Cℬ_L^2L^2(Σ). By norm equivalence in (<ref>), it follows that ∂_t _L^2H^1(Σ)+ℬ_L^2H^1(Σ)≤ C_L^2H^1(Σ). Complex interpolation of (<ref>) and <ref> gives ∂_t _L^2H^1/2(Σ)+ℬ_L^2H^1/2(Σ)≤ C_L^2H^1/2(Σ). Now we take =𝒩^1/2, then it is direct to verify that :=𝒟^1/2 is the solution to (<ref>) and satisfies estimate ∂_t_L^2L^2(Σ)+_L^2H^1(Σ)≤ C_L^2L^2(Σ), where we have used norm equivalences in (<ref>) and (<ref>). Having obtained the solution to equation (<ref>), if we write (t)=(t)+c(t) then it is direct to verify that (t)=(t)+k(t) is the solution to (<ref>), where k(t) is given by ∂_t k =c-r(); k(0)=0 r() :=((I-ℒ_s)𝒩,)_Σ/^2_Σ, it follows that ∂_t k_L^2(0,T)≤ C(_L^2L^2(Σ)+_L^2H^1(Σ))≤ C_L^2L^2(Σ). Therefore, combining (<ref>) and (<ref>) we obtain ∂_t _L^2L^2(Σ)+_L^2H^1(Σ)≤ C_L^2L^2(Σ) Part 3. Given the solution ξ to equation (<ref>), we define (ϕ, q) = (𝒩^v ξ, 𝒩^p ξ). Then ξ = (ϕ, q) and 𝒩ξ = ϕ |_Σ. Therefore, equation (<ref>) can be written as - ℒ_s ϕ + ϕ = - ∂_t (ϕ, q) + Σ× (0, T], (ϕ, q) (0) = (0, 0) . Thus (ϕ, q) is a solution of equation (<ref>). Since (ϕ,q)(0)=0, it follows that (ϕ, q) _L^∞ L^2 (Σ)≤ C∂_t (ϕ, q) _L^2L^2(Σ). 
Therefore, (ϕ, q) satsifies the following estimate according to (<ref>) and the inequality above: (ϕ, q) _L^∞ L^2 (Σ) + ϕ_L^2 H^2 (Σ)≤ C _L^2 L^2(Σ). Moreover, since (ϕ,q) is the solution of the homogeneous Stokes equation (with boundary value ϕ|_Σ=𝒩ξ), the following two estimates follow from the regularity results of the Stokes equation in (<ref>)–(<ref>): ϕ_H^2 + ∇ q ≤ C ϕ |_Σ_H^2 (Σ) ; ϕ_H^1 + q ≤ C (ϕ, q)_H^- 1 / 2 (Σ) Combining the estimates in (<ref>) and (<ref>), we obtain the result of Proposition <ref>. Let ϕ=ϕ(T-t) and q=q(T-t), then ϕ and q is a solution of (<ref>) in Proposition <ref>, with source term (t)=(T-t), thus we have ϕ_L^2 H^2 (Ω) + ϕ_L^2 H^2 (Σ) + q_L^2 H^1 (Ω) + (ϕ, q) _L^∞ L^2 (Σ)≤ C_L^2L^2(Σ). The proof of Lemma <ref> is complete. § APPENDIX B: PROOF OF (<REF>) In this subsection, we assume thatr≥ 2. Under this assumption, we establish a negative-norm estimate for the Dirichlet Stokes–Ritz projectionR_h^Din the following lemma. For the Dirichlet Stokes–Ritz projection R_h^D defined in (<ref>), the following error estimate holds: -̆ R_h^D _H^-1 + -̆ R_h^D _H^-1(Σ)+ p - R_h^D p _H^-2≤ C h^r+2 . From the definition of R_h^D$̆ in (<ref>) we can see that the following relation holds on the boundaryΣ: R_h^S-̆R_h^D=̆(R_h^S,̆)_Σ/_h^2_h=(R_h^S-̆,̆)_Σ/_h^2_h . Since (R_h^S-̆,̆)_Σ/_h^2≲ CR_h^S-̆_H^-1(Σ)≤ Ch^r+2, it follows that-̆R_h^D_H^-1(Σ)≤ Ch^r+2. Then (<ref>) follows from the same routine of duality argument for the Dirichlet Stokes–Ritz projection. Next, we note that(R_h-̆)̆(0)also satisfies negative norm estimate below. For the projection operator R_sh defined in (<ref>), the following negative-norm estimate of R_sh(̆0) holds: (R_sh-̆)̆ (0) _H^-1(Σ)≤ C h^r + 2 . We introduce a dual equation - ℒ_s ψ + ψ = φ ψ has periodic boundary condition on Σ. The regularity assumption in (<ref>) implies that ψ_H^3(Σ)≤φ_H^1(Σ). We can extend ψ to be a function (still denoted by ψ) which is defined in Ω with periodic boundary condition and satisfies ψ_H^3≤ C ψ_H^3 (Σ). Then the following relation can be derived: ((R_sh-̆)̆ (0),φ)_Σ = a_s (_ (R_sh-̆)̆ (0), ψ) + ((R_sh-̆)̆ (0), ψ)_Σ = a_s ((R_sh-̆)̆ (0), ψ - I_h ψ)_Σ + ((R_sh-̆)̆ (0), (ψ - I_h ψ))_Σ - a_f ((R_h^D ∂_t -̆∂_t )̆ (0), I_h ψ) + b((R_h^D ∂_t p - ∂_t p) (0), I_h ψ) - ((R^D_h∂_t -̆∂_t )̆ (0), I_h ψ) ≤ C h^r + 2ψ_H^3 (Σ) + | a_f ((R_h^D ∂_t -̆∂_t )̆ (0), ψ)|+ |b((R_h^D ∂_t p ∂_t - p) (0), ψ)| + |((R^D_h ∂_t -̆∂_t )̆ (0), ψ) | . Since ( (R_h^D ∂_t -̆∂_t )̆ (0), ψ) = - ((R_h^D ∂_t -̆∂_t )̆ (0), ∇·ψ) + ((R_h^D ∂_t -̆∂_t )̆ (0), ·ψ)_Σ ≲((R_h^D ∂_t -̆∂_t )̆ (0)_H^-1(Ω)+(R_h^D ∂_t -̆∂_t )̆ (0)_H^-1(Σ))ψ_H^3(Ω) ≤ Ch^r+2φ_H^1(Σ) and b( (R_h^D ∂_t p - ∂_t p) (0), ψ) ≲ C (R_h^D ∂_t p - ∂_t p) (0) _H^- 2ψ_H^3≤ Ch^r+2φ_H^1(Σ) ((R^D_h ∂_t -̆∂_t )̆ (0), ψ)≲(R^D_h ∂_t -̆∂_t )̆ (0)_H^-1ψ_H^1≤ Ch^r+2φ_H^1(Σ) , summing up the estimates above yields the result in (<ref>). The proof is complete. On the boundaryΣ, relations (<ref>) and (<ref>) imply that(R_h -̆)̆ (0) = (R_sh-̆)̆ (0) - λ (R_sh(̆0)) _h. Since λ (R_sh(̆0))= (R_sh(̆0), )_Σ/_h ^2_Σ = (R_sh(̆0) - (̆0), )_Σ/_h _Σ^2≲ C R_sh(̆0) - (̆0)_H^-1(Σ)≤ C h^r + 2, it follows from (<ref>) that(R_h-̆)̆(0)_H^-1(Σ)≤ Ch^r+2. Since (<ref>) implies inequality(R_h^D-̆)̆(0)_H^-1(Σ)≤ Ch^r+2, it follows that (R_h -̆ R_h^D )̆ (0) _H^- 1 (Σ)≤ C h^r + 2 . Let e^u_h := (R_h -̆ R_h^D )̆ (0) and e_h^p:= (R_h p - R_h^D p)(0). Then the following estimates hold: e_h^u _H^1 + e_h^p ≤Ch^r + 1 / 2 , e_h^u _H^- 1 (Σ)+ e_h^u _H^- 1 / 2 +e_h^p _H^- 3 / 2≤ C h^r + 2 . 
To prove the first inequality in Lemma <ref>, we note that a_f (e^u_h, _h) + (e_h^u, _h) - b (e_h^p, _h) = 0 ∀_h ∈^r_h. Let _̆h = E_h (e_h^u |_Σ), where E_h is an extension operator as in (<ref>). Then e_h^u - _̆h ∈^r_h and _̆h _H^1≤ Ch^- 1 / 2 e_h^u _Σ≤ C h^r + 1 / 2. This estimate of _̆h _H^1 and relation (<ref>) imply that a_f (e^u_h - _̆h, _h) + (e_h^u - _̆h, _h) - b (e_h^p, _h) ≲ C h^r + 1 / 2_h _H^1 ∀_h ∈^r_h . Now, choosing _h = e_h^u - _̆h in the inequality above, we obtain e_h^u _H^1 (Ω) + e_h ^p ≤ C h^r + 1 / 2 . Next, we consider a dual problem: For given ∈ H^1 / 2(Ω)^d, we construct (ϕ, q) to be the solution of - ∇·σ (ϕ, q) + ϕ = ; ∇·ϕ = 0 ; ϕ |_Σ = 0 ; q ∈ L^2_0 (Ω) . By the regularity assumptions in (<ref>), the following estimate of ϕ and q can be written down: ϕ_H^5 / 2 + p _H^3 / 2≤ C _H^1 / 2 . From equation (<ref>) one can see that (e_h^u, ) = a_f (e_h^u, ϕ - I_h ϕ) + (e_h^u, ϕ - I_h ϕ) -((ϕ, q) , e_h^u)_Σ + b (e_h^p, I_h ϕ- ϕ) ≲ C h^r + 2_H^1 / 2 + e_h^u _H^- 1_p (Σ)_H^1/2≤ Ch^r + 2_H^1 / 2. Therefore, the following result is proved: | (e_h^u, ) | ≤ C h^r + 2_H^1 / 2 ; e_h^u _H^- 1 / 2≤ C h^r + 2. We move on to consider the dual problem for pressure: For given f ∈ H^3 / 2 (Ω), since e_h^p ∈ L^2_0 (Ω) it follows that (e_h^p, f) = (e_h^p, f - f) . Thus it suffices to assume that ∫_Ω f = 0. Then, using Bogovoski's map (see details in <cit.> and <cit.>), there exists ∈ H^5 / 2 (Ω) such that ∇· = f, _H^5 / 2≤ C f _H^3 / 2, |_Σ = 0 . From equation (<ref>), we can find that (e_h^p, f) = b (e_h^p, ) = b (e_h^p, - I_h ) + a_f (e_h^u, I_h - ) + (e_h^u, I_h - ) + a_f (e_h^u, ) + (e_h ^u, ) . Thus, combining the known estimate e_h^u_H^-1/2≤ Ch^r+2 and e_h^u_H^1+e_h^p≤ Ch^r, we have | (e_h^p, f) | ≤ C h^r + 2f_H^3 / 2 + | a_f (e_h^u, ) | . Using integration by parts, we derive that a_f (e_h^u, ) = 2μ( e_h^u, ) = - 2μ (e_h^u, ∇·) + 2μ(e_h^u, ·)_Σ≲ C h^r + 2 f _H^3/2 . Therefore, we have proved the following result: | (e_h^p, f) | ≤ C h^r + 2 f _H^3 / 2 ; e_h^p _H^- 3 / 2≤ C h^r + 2 . This completes the proof of Lemma <ref>. For the projection operators R_h and R_sh defined in (<ref>) and (<ref>), respectively, the following estimate holds: R_sh (0) - R_h (0) _H^1 (Σ)≤ C h^r + 1 . By denoting δ_h : = R_h (0)-R_sh(0), e^u_h := (R_h -̆ R_h^D )̆ (0) and e_h^p := (R_h p - R_h^D p) (0), we can write down the following equation according to the definitions of the two projection operators: a_s (_δ_h, _h) + (δ_h, _h)_Σ + a_f (e_h^u, _h) - b (e_h^p, _h) + (e_h^u, _h) = 0 ∀_h ∈^r_h . Then, choosing _h = E_h δ_h ∈^r_h in the relation above and note that _h _H^1≤ C h^- 1 / 2δ_h _Σ, we derive that δ_h ^2_H^1 (Σ)≤ C h^- 1 / 2δ_h _Σ ( e_h^u _H^1 + e_h^p ) ≤ C h^r δ_h _Σ . Next, we consider the following dual problem: Let ψ be the solution of - ℒ_s ψ + ψ = δ_h ψ has periodic boundary condition on Σ . Then a_s (ψ, ξ) + (ψ, ξ)_Σ = (δ_h, ξ)_Σ ∀ξ∈ and ψ_H^2 (Σ)≤δ_h _Σ . We can extend ψ to a function (still denoted by ψ) defined on Ω with periodic boundary condition and ψ_H^5 / 2≤ C ψ_H^2. Therefore δ_h ^2_Σ = a_s (δ_h, ψ - I_h ψ)_Σ + (δ_h, (ψ - I_h ψ))_Σ - a_f (e_h^u, I_h ψ) + b (e_h^p, I_h ψ) - (e_h^u, I_h ψ) ≤ C h δ_h _H^1(Σ)ψ_H^2(Σ) + C h^3 / 2 ( e_h^u _H^1 + e_h^p )ψ_H^5/2 + | a_f (e_h^u, ψ) - b (e_h^p, ψ) + (e_h^u, ψ) | . Using integrating by parts and negative-norm estimates, the following estimates can obtain: b (e_h^p, ψ) ≲ C ψ_H^5 / 2e_h^p _H^- 3 / 2≤ C h^r + 2δ_h _Σ , a_f (e_h^u, ψ) + (e_h^u, ψ) ≲ C e_h^u _H^- 1 / 2ψ_H^5 / 2 + C e_h^u _H^- 1 (Σ)ψ_H^5 / 2≤ C h^r + 2δ_h _Σ . 
Therefore, combining the estimates in (<ref>)–(<ref>), we obtain the following error estimate for δ_h: δ_h _Σ≤ C h^r / 2 + 1δ_h _Σ^1 / 2 + C h^r + 2⇒δ_h _Σ≤ C h^r + 2 . The inverse inequality implies δ_h _H^1 (Σ)≤ C h^r + 1. This completes the proof of Lemma <ref>. § APPENDIX C: PROOF OF (<REF>) In this appendix, we prove (<ref>) in the following lemma. Under assumptions (A1)– (A4) on the finite element spaces, the following type of inf-sup condition holds (the H^1(Σ)-norm is involved on the right-hand side of the inequality): p_h ≤ C sup_0 ≠_h ∈^r_h( div_h, p_h)/_h _H^1 +_H^1(Σ) ∀ p_h ∈ Q^r-1_h, where C>0 is a constant independent of p_h and the mesh size h. Each p_h∈ Q_h^r-1 can be decomposed into p_h=p_h+p̅_h, with p_h∈ Q^r-1_h,0 and p̅_h=1/|Ω|∫_Ω p_h d x. Since we have assumed that inf-sup condition (<ref>) holds, there exists _h∈_h^r such that _h_H^1≤p_h and b(p_h,_h)≥ C_1p_h^2 . For the constant p̅_h∈ℝ, we note that b(p̅_h,_h)=p̅_h b(1,_h)=p̅_h(_h,)_Σ. Let _h^*∈_h^r be defined as _h^*=E_h(_h), where _h is defined in (<ref>) and E_h is the extension operator defined in item 4 of Remark <ref>, i.e., _h^*=I_h^X ∈^r_h with ∈ H^1(Ω)^d being an extension of _h such that |_Σ=_h. By the definition of _h^*, we have _h^*_H^1 = I_h^X _H^1≤ C_H^1≤ C_h_H^1/2(Σ)≤ C _h^*_H^1(Σ) =_h_H^1(Σ)≤ C. Moreover, the following relation holds: b(1,_h^*)=(_h^*,)_Σ=(_h,)_Σ=_h_Σ^2≥ C>0. Therefore, the function _h^1:=p̅_h_h^* has the following property: _h^1_H^1+_h^1_H^1(Σ)≤ C|p̅_h|≤ C_0p̅_h . We can re-scale _h^1 to _h^e=1/C_0_h^1 so that the following inequalities hold for some constant C_2>0: _h^e_H^1+_h^e_H^1(Σ)≤p̅_h and b(p̅_h,_h^e)= |p̅_h|^2/C_0b(1,_h^*)≥ C_2p̅_h^2 . By considering _h=_h+ϵ_h^e, with a parameter ϵ>0 to be determined later, and using the relation b(p̅_h,_h)=p̅_h( _h ,_h)_Σ = 0, we have b(p_h,_h) =b(p_h+p̅_h,_h+ϵ_h^e) =b(p_h,_h)+ϵ b(p_h,_h^e)+ϵ b(p̅_h,_h^e) ≥ C_1p_h^2+ϵ b(p_h,_h^e) +ϵ C_2p̅_h^2 ≥ C_1p_h^2-Cϵp_hp̅_h +ϵ C_2p̅_h^2 . By using Young's inequality, we can reduce the last inequality to the following one: b(p_h,_h)≥ C_1p_h^2+ϵ C_2p̅_h^2-(C_1/2p_h^2+C^2ϵ^2/2C_1p̅_h^2) . Then, choosing ϵ=C_1C_2/C^2, we derive that b(p_h,_h)≥C_1/2p_h^2+C_1C_2^2/2C^2p̅_h^2≥ C_3p_h^2 . Since _h=_h+ϵ_h^e with _h=0 on Σ, it follows from the triangle inequality and (<ref>)–(<ref>) that _h_H^1+_h_H^1(Σ) ≤_h_H^1+ϵ(_h^e_H^1+_h^e_H^1(Σ)) ≤p_h+C_1C_2/C^2p̅_h≤(1+C_1C_2/C^2)p_h. Therefore, (<ref>) and (<ref>) imply that b(p_h,_h)/_h_H^1+_h_H^1(Σ)≥C_3p_h^2/(1+C_1C_2/C^2)p_h =C_3p_h/(1+C_1C_2/C^2) . This proves that p_h≤1/C_3(1+C_1C_2/C^2)b(p_h,_h)/_h_H^1+_h_H^1(Σ) . and therefore completes the proof of (<ref>). 00girault-94 C. Amrouche and V. Girault. On the existence and regularity of the solution of Stokes problem in arbitrary dimension. Journal of the Mathematical Society of Japan 46, no. 4 (1994): 607-643. AFL-2023 M. Annese, M. A. Fernández, and L. Gastaldi. Splitting schemes for a Lagrange multiplier formulation of FSI with immersed thin-walled structure: stability and convergence analysis. IMA Journal of Numerical Analysis 43, no. 2 (2023): 881-919. fortin84 D. N. Arnold, F. Brezzi, and M. Fortin. A stable finite element for the stokes equations. Calcolo 21, no. 4 (1984): 337-344. Badia-Nobile-2008 S. Badia, F. Nobile, C. Vergara, Fluid–structure partitioned procedures based on Robin transmission conditions. Journal of Computational Physics 227 (2008): 7027–7051. bergh J. Bergh and J. Löfström. Interpolation spaces: an introduction. Vol. 223. Springer Science & Business Media, 2012. boffi-13 D. Boffi, F. 
Brezzi, and M. Fortin. Mixed finite element methods and applications. Vol. 44. Heidelberg: Springer, 2013. salamon T. Bühler and D. A. Salamon. Functional analysis. Vol. 191. American Mathematical Soc., 2018. brenner08 S. C. Brenner and L. R. Scott. The Mathematical Theory Of Finite Element Methods. Vol. 3. New York: Springer, 2008. BCGTQ M. Bukač, S. Čanić, R. Glowinski, J. Tambača, and A. Quaini. Fluid–structure interaction in blood flow capturing non-zero longitudinal structure displacement. Journal of Computational Physics 235 (2013): 515-541. BCM-composite-JCP-2015 M. Bukač, S. Čanić, and B. Muha. A partitioned scheme for fluid-composite structure interaction problems. Journal of Computational Physics 281 (2015): 493-517. BM-2016SINUM M. Bukač and B. Muha. Stability and convergence analysis of the extensions of the kinematically coupled scheme for the fluid-structure interaction. SIAM Journal on Numerical Analysis 54, no. 5 (2016): 3032-3061. BukacT-2022-thin M. Bukač and C. Trenchea. Adaptive, second-order, unconditionally stable partitioned method for fluid-structure interaction. Computer Methods in Applied Mechanics and Engineering 393 (2022): 114847. BDFG-2022 E. Burman, R. Durst, M. A. Fernández, and J. Guzmán. Fully discrete loosely coupled Robin-Robin scheme for incompressible fluid–structure interaction: Stability and error analysis. Numerische Mathematik 151, no. 4 (2022): 807-840. AddedMass-NumericalInstability P. Causin, J. F. Gerbeau, and F. Nobile. Added-mass effect in the design of partitioned algorithms for fluid-structure problems. Computer Methods in Applied Mechanics and Engineering 194, no. 42-44 (2005): 4506-4527. Dowell-FSI-reviewmodel E. H. Dowell and K. C. Hall. Modeling of fluid-structure interaction. Annual Review of Fluid Mechanics 33, no. 1 (2001): 445-490. Xie2010mathcomp H. Eichel, L. Tobiska, and H. Xie. Supercloseness and superconvergence of stabilized low-order finite element discretizations of the Stokes problem. Mathematics of Computation 80, no. 274 (2011): 697-722. farwig R. Farwig and S. Hermann. Generalized resolvent estimates for the Stokes system in bounded and unbounded domains. Journal of the Mathematical Society of Japan 46, no. 4 (1994): 607-643. Fer-Ger-G-2007-ProjectionSemi-implicitScheme M. A. Fernández, J. F. Gerbeau, C. Grandmont, A projection semi-implicit scheme for the coupling of an elastic structure with an incompressible fluid. Int. J. Numer. Methods Engineering 69, no. 4 (2007): 794–821. Fer-B-2009-Stabilization-Nitsche M. A. Fernández, E. Burman, Stabilization of explicit coupling in fluid-structure interaction involving fluid incompressibility. Computer Methods in Applied Mechanics and Engineering 198, no. 5-8, (2009): 766–784. Fer-2013NumMath M. A. Fernández. Incremental displacement-correction schemes for incompressible fluid-structure interaction. Numerische Mathematik 123, no. 1 (2013): 21-65. Formaggia-CMAME-2001 L. Formaggia, J. F. Gerbeau, F. Nobile, and A. Quarteroni. On the coupling of 3D and 1D Navier-Stokes equations for flow problems in compliant vessels. Computer Methods in Applied Mechanics and Engineering 191, no. 6-7 (2001): 561-582. Formaggia-Cardio-book-2009 L. Formaggia, A. Quarteroni, and A. Veneziani, (eds.) Cardiovascular Mathematics. Modeling and simulation of the circulatory system. Vol. 1. Springer Science & Business Media, 2010. Galdi G. Galdi. An Introduction To The Mathematical Theory of the Navier-Stokes Equations: Steady-State Problems. Springer Science & Business Media, 2011. GGCC-2009 G. 
Guidoboni, R. Glowinski, N. Cavallini, and S. Canic. Stable loosely-coupled-type algorithm for fluid-structure interaction in blood flow. Journal of Computational Physics 228, no. 18 (2009): 6916-6937. Gunzburger M. D. Gunzburger and S. L. Hou. Treating inhomogeneous essential boundary conditions in finite element methods and the calculation of boundary stresses. SIAM journal on numerical analysis 29, no. 2 (1992): 390-424. Schwab B. Guo and C. Schwab. Analytic regularity of Stokes flow on polygonal domains in countably weighted Sobolev spaces. Journal of Computational and Applied Mathematics 190 (2006), pp. 487-519. Sun-Xu-2021-monolithic W. Hao, P. Sun, J. Xu, and L. Zhang. Multiscale and monolithic arbitrary Lagrangian-Eulerian finite element method for a hemodynamic fluid-structure interaction problem involving aneurysms. Journal of Computational Physics, 433 (2021), 110181. Hou-Wang-Layton-reviewcomp G. Hou, J. Wang, and A. Layton. Numerical methods for fluid-structure interaction-a review. Communications in Computational Physics 12, no. 2 (2012), pp. 337-377. JS-monolithic-lecturenotes J. Hron and S. Turek. A monolithic FEM/multigrid solver for an ALE formulation of fluid-structure interaction with applications in biomechanics. Springer Berlin Heidelberg, 2006. GKW-monolithic M. W. Gee, U. Küttler, and W. Wall. Truly monolithic algebraic multigrid for fluid-structure interaction. Int. J. Numer. Methods Engineering 85, no. 8 (2011), pp. 987-1016. PSun-2020_monolithic-simplified R. Lan and P. Sun. A monolithic arbitrary Lagrangian-Eulerian finite element analysis for a Stokes/parabolic moving interface problem. J. Sci. Comput. 82, no. 3 (2020): 59. JuLIU-2022-CMAME I. S. Lan, J. Liu, W. Yang, and A. L. Marsden. A reduced unified continuum formulation for vascular fluid-structure interaction. Computer Methods in Applied Mechanics and Engineering 394 (2022): 114852. Lee-Xu-2016 H. Lee and S. Xu. Fully discrete error estimation for a quasi-Newtonian fluid-structure interaction problem. Computers & Mathematics with Applications 71, no. 11 (2016): 2373-2388. JuLIU-2021-Notices-AMS J. Liu, I. S. Lan, and A. L. Marsden. Mathematical modeling of the vascular system. arXiv preprint arXiv:2102.11064 (2021). LiuJ-JCP-2014 J. Liu, R. K. Jaiman, and P. S. Gurugubelli. A stable second-order scheme for fluid-structure interaction with strong added-mass effects. Journal of Computational Physics 270 (2014): 687-710. LRH-nonNewtonian M. Lukáčová-Medvid’ová, G. Rusnáková, and A. Hundertmark-Zaušková. Kinematic splitting algorithm for fluid-structure interaction in hemodynamics. Computer Methods in Applied Mechanics and Engineering 265 (2013): 83-106. FNobile-PhDThesis F. Nobile. Numerical Approximation of Fluid–Structure Interaction Problems with Application to Haemodynamics. PhD Thesis, EPFL, 2001. DOI: 10.5075/EPFL-THESIS-2458 Bukac2018SINUM O. Oyekole, C. Trenchea, and M. Bukac. A second-order in time approximation of fluid-structure interaction problem. SIAM Journal on Numerical Analysis 56, no. 1 (2018): 590-613. Ngs J. Schöberl. C++11 implementation of Finite Elements in NGSolve. Institute for analysis and scientific computing, Vienna University of Technology 30 (2014). Taylor M. E. Taylor. Partial differential equations. 1, Basic theory. Springer, 1996. Whe M. F. Wheeler. A priori L^2 error estimates for Galerkin approximations to parabolic partial differential equations, SIAM Journal on Numerical Analysis 10, no. 4 (1973): 723-759. xujinchao-15 J. Xu and K. Yang. 
Well-posedness and robust preconditioners for discretized fluid–structure interaction systems. Computer Methods in Applied Mechanics and Engineering 292 (2015): 69-91.
http://arxiv.org/abs/2306.03794v1
20230606154023
Azimuthal ion movement in HiPIMS plasmas -- Part I: velocity distribution function
[ "S. Thiemann-Monjé", "J. Held", "S. Schüttler", "A. von Keudell", "V. Schulz-von der Gathen" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Experimental Physics II, Ruhr University Bochum, [email protected] Magnetron sputtering discharges feature complex magnetic field configurations to confine the electrons close to the cathode surface. This magnetic field configuration gives rise to a strong electron drift in azimuthal direction, with typical drift velocities on the order of 100 km/s. In high power impulse magnetron sputtering (HiPIMS) plasmas, the ions have also been observed to follow the movement of the electrons with velocities of a few km/s, despite being unmagnetized. In this work, we report on measurements of the azimuthal ion velocity using spatially resolved optical emission spectroscopy, allowing for a more direct measurement compared to experiments performed using mass spectrometry. The azimuthal ion velocities increase with target distance, peaking at about 1.55 km/s for argon ions and 1.25 km/s for titanium ions. Titanium neutrals are also found to follow the azimuthal ion movement, which is explained by resonant charge exchange collisions. The experiments are then compared to a simple test-particle simulation of the titanium ion movement, yielding good agreement with the experiments when considering the momentum transfer from electrons to ions via Coulomb collisions as the only source of acceleration in azimuthal direction. Based on these results, we propose this momentum transfer as the primary source of ion acceleration in azimuthal direction. Azimuthal ion movement in HiPIMS plasmas - Part I: velocity distribution function S Thiemann-Monjé, J Held[current affiliation: University of Minnesota, Minneapolis, USA], S Schüttler[current affiliation: Plasma Interface Physics, Ruhr University Bochum, Bochum, Germany], A von Keudell, V Schulz-von der Gathen July 31, 2023 ========================================================================= § INTRODUCTION Magnetron sputtering processes are widely used in industry for thin film deposition <cit.>. Traditionally, magnetron sputtering discharges are driven with continuous voltage (DCMS). However, in recent years, high power impulse magnetron sputtering (HiPIMS) has become more and more relevant. HiPIMS plasmas are excited with short high-voltage pulses, leading to high current densities and peak pulse powers. At typical duty cycles of a few percent at most, the time-averaged power is kept low to prevent target melting. The high pulse power in HiPIMS discharges results in plasma densities ranging from 10^19 m^-3 to 10^20 m^-3 <cit.> and ionization degrees of the sputtered particles of up to 90% <cit.>, leading to superior coating qualities <cit.>. The main drawback of HiPIMS discharges is the often observed lower deposition rate compared to DCMS discharges operated at similar average powers <cit.>. The geometry of magnetron sputtering discharges is often cylindrically symmetric with a circular cathode, the so-called target. Two concentric ring magnets placed behind this target form arch-shaped magnetic field lines in radial direction, trapping the electrons in the region close to the target. This magnetic trap configuration then leads to a torus-shaped plasma.
Consequently, sputtering is mostly taking place in the ring-shaped area below the plasma torus forming an equally shaped erosion area, the so-called racetrack. Above the racetrack area, the magnetic field is parallel to the target surface while the electric field vector points towards the target <cit.>. On one hand, this electric field pulls ionized sputtered particles back towards the target, hindering them from reaching the substrate and lowering the deposition rate of HiPIMS discharges <cit.>. On the other hand, the crossed electric and magnetic field configuration induces a significant electron drift. Additionally, curvature and diamagnetic drifts are also present, adding up to azimuthal electron drift velocities in the order of 100 in the case of HiPIMS <cit.>. The ion movement in axial direction has been studied by several authors <cit.> as being dictated by the electric field <cit.>, collisions <cit.> and the sputtering process. The ion movement in azimuthal direction has been studied by Lundin et al. <cit.>, who placed a mass spectrometer at positions tangential to the racetrack of an HiPIMS-discharge with a titanium target to capture ions leaving the target region tangentially either in the direction of the movement or against it. They found the energy of fast titanium ions to be larger by about 10 (or about 2.5) in the direction of the movement. From these measurements, performed outside the magnetic trap, they concluded that the ions inside the magnetized region must be moving along the plasma torus, in the azimuthal direction of the discharge. Since the ions in magnetron sputtering discharges are unmagnetized <cit.>, an E⃗×B⃗ drift of ions can be excluded as the explanation of the observed movement. Lundin proposed momentum transfer from the drifting electrons onto the ions as the reason for the observed phenomenon <cit.>. They speculated that a modified two-stream instability is excited by the difference in drift velocity between electrons and ions. The resulting azimuthal electric field can then accelerate the ions, slowly dragging them along with the electron drift. Simple estimations showed that such a force from electrons on ions mediated by an instability might indeed explain the observed behavior. Later, Poolcharuansin repeated the experiment, using a retarding field analyzer instead of a mass spectrometer <cit.>. They also found a difference of roughly 10 or 2.5 for ions leaving the magnetic trap region tangentially in or against the direction. The authors combined their experiments with a fairly complex model, describing both the acceleration of ions in the azimuthal direction, as well as collisions and the conditions under which ions can even reach the detector, without being pulled back into the magnetic trap region by the electric field. From their model, the authors found support for the modified two-stream instability hypothesis proposed by Lundin , explaining that ion-electron collisions alone would be insufficient to provide enough acceleration for the ions. A different explanation for the same phenomenon was proposed by panjan_asymmetric_2014 after performing similar experiments with an ion or electron collecting flat probe and a mass spectrometer, both again positioned tangentially to the target and outside the magnetic trap region. The authors found a correlation between the azimuthal ion movement and the appearance of spokes, another wave phenomenon present in magnetron sputtering discharges <cit.>. 
Spokes are known to cause plasma potential fluctuations and, thus, induce an asymmetric electric field <cit.>, which is expected to influence the ion movement, both in axial as well as in azimuthal direction <cit.>. All these prior measurements have in common that they observed only those ions that have left the magnetic trap region. Since most ions are expected to eventually return to the target surface, this group of ions leaving the magnetic trap region is not representative for the overall ion population inside the magnetic trap. Thus, gaining information about physical processes inside the magnetic trap from such measurements is very challenging and prone to error. The azimuthal movement of ions in magnetron plasmas is addressed in a two part series with part I addressing the velocity distribution functions of the ions inside the plasma and part II addressing the lateral deposition of species leaving the magnetic trap region<cit.>. This paper constitutes part I, where we investigate the azimuthal ion movement using high-resolution optical emission spectroscopy. From the broadening and shifting of optical emission lines, we can directly determine the velocity distribution function of ions inside the magnetic trap region, temporally and spatially resolved. The measurements are compared to a simple model, only considering the momentum transfer from electrons to ions via Coulomb collisions. We show that already such a momentum transfer via collisions alone can explain the observed ion velocities, without the need to consider wave phenomena, which do not seem to play a dominant role. § EXPERIMENTAL SETUP §.§ Chamber and discharge A cylindrical vacuum chamber with a diameter of 25 cm and a height of 40 cm was used for the experiment. It was pumped to a base pressure of 4 × 10^-6 Pa. Argon was used as working gas at a pressure of 0.5 Pa. A planar 2" magnetron (Thin Film Consulting IX2U) in combination with a TRUMPF Hüttinger power supply (TruPlasma Highpulse 4002) was used to drive the plasma discharges. The discharge was monitored by current and voltage measurements with commercial probes (Tektronix TCP A400, Tektronix P6015A) attached to the connection cable between the power supply and the magnetron assembly. Discharge conditions were selected to be the same as in earlier publications <cit.>. The applied voltage was -590 V with a repetition frequency of 40 Hz and a pulse length of 100 μs. Using titanium targets, these values result in peak currents of 50 A, peak target-area-normalized current densities of 2.5 Acm^-2 and peak power densities of 1.1 kWcm^-2. The corresponding voltage and current waveforms can be found in a previous publication <cit.>. §.§ High-resolution optical emission spectroscopy The setup for the high-resolution optical emission spectroscopy was adapted from <cit.> and is shown in figure <ref>a. The plasma is observed parallel to the target surface. A convex lens (f = 150 mm) is used to collect the emitted light and couple it into an optical fibre (Ø = 800 μm). The distance between lens and fiber is adjusted to limit the field of view of the system to a narrow cone (figure <ref>b). The focal spot has a diameter of approximately 2 mm with the focal plane adjusted to the center of the target. The whole lens system is mounted on a movable stage and can be moved along the magnetron axis or parallel to the target surface (z and x direction). 
As illustrated in figure <ref>b, we define the z-axis of our coordinate system in target normal or axial direction and the x and y-axis parallel to the target surface with the origin in the center of the target. Additionally, the coordinate φ is used to describe the azimuthal ion movement. It points in the same direction as the drift. Measurements of the emission lines were performed with an intensified CCD-Camera (Andor iStar DH320T-25U-A3) attached to a 2 m plane grating spectrograph (Zeiss Jena PGS 2, 1300 lines/mm grating). All measurements were performed at the end of the discharge pulse by triggering the camera with a delay of Δ t = 90 μs to the plasma ignition. The gate width was set to 10 and the data were accumulated over 2000 plasma pulses, with the exception of the x-scan presented in figure <ref>b, where the last 15 of the pulse were measured and 500 accumulations were used, instead. By operating the spectrograph in the third diffraction order, an spectral resolution of 1.5 pixel-to-pixel at the camera chip was achieved. To enable calibration of the wavelength axis for the measured spectra the emission from a hollow cathode lamp (HCL, Cathodeon 3UNX Ti) was measured simultaneously with the plasma emission, as indicated in figure <ref>a. Details about the used emission lines including the involved energy levels are displayed in table <ref>. The determination of the velocity distribution function (VDF) from the emission lines was performed as described in <cit.>. The method is based on the analysis of the two dominant line broadening mechanisms, Doppler broadening and instrumental broadening. As a first step, a Wiener deconvolution is used to remove the contribution of instrumental broadening to obtain an emission line profile only affected by Doppler broadening. Afterwards, the wavelength axis of the spectra is transformed into a velocity axis using the relation v = c(λ/λ_0 - 1), where λ_0 is the wavelength of the emission line in an unshifted state measured from the emission by the hollow cathode lamp. §.§ Probe measurements Probe measurements were performed above the racetrack position, in target distances of 6.3, 8.0 and 9.7 mm. The probe setup <cit.> and results <cit.> are discussed in great detail in recent publications. Here, we only use the electron density and plasma potential obtained from those measurements to estimate the physical background and the corresponding forces acting on the ions. § RESULTS AND DISCUSSION Figure <ref> a) shows an example of two obtained VDFs for titanium ions (Ti II). The optical system for both measurements was aligned to point in y-direction (compare figure <ref>b) at a fixed distance of z = 3. Intending a measurement above the racetrack on each side of the target, the measurement position in x-direction was selected to be x = ±13.5, as shown in figure <ref>b. As figure <ref> demonstrates, a clear shift between the VDFs is observed, while their shapes remain the same. The VDF recorded at x = 13.5 is shifted to positive values by about 0.5, indicating a mean particle movement away from the optical system (in positive y-direction). Since the VDF is symmetrical, except for its shift, the mean velocity is also calculated to be 0.5. On the opposite side, at x = -13.5, the VDF is shifted to negative values by the same amount, hence indicating a mean particle movement towards the optical system (in negative y-direction). 
In both cases, the movement follows the direction of the E⃗×B⃗ drift, demonstrating that ions move along the racetrack in azimuthal direction together with the electrons - but much slower. Figure <ref> b) shows the mean titanium ion velocity in y-direction, calculated from the VDFs, for different x positions, again at a fixed target distance of z = 3 mm. Error bars indicate the standard deviation of three consecutive measurements. The displayed data show the expected change of v_y due to the changing angle between the optical axis and the azimuthal particle movement. At x = 0, the azimuthal direction φ is entirely perpendicular to the measurement direction y, so that the mean velocity in y direction is v_y = 0. As such, correctly deducing the azimuthal velocity at each radial position would require Abel inversion of the line-of-sight integrated measurement data. Unfortunately, this would increase the noise of the measured data, leading to problems with the deconvolution used to obtain the VDF from the measured emission line profiles. However, figure <ref> shows constant values of v_y = -0.5 km/s and v_y = 0.5 km/s across the whole width of the racetrack region, -16 mm ≤ x ≤ -11 mm and 11 mm ≤ x ≤ 16 mm. This indicates that the particularly bright emission above the racetrack dominates over the contributions from all other radial positions, rendering v_y ≈ v_φ and allowing us to use values measured at these positions as the azimuthal ion velocity. Consequently, all further measurements reported here were performed in the middle of the racetrack at x = ± 13.5 mm, where the absolute value of the measured velocity represents the mean azimuthal velocity. All measurements were performed on both sides of the racetrack (x = 13.5 mm and x = -13.5 mm) and examined to ensure that the results are perfectly mirrored, i.e. positive velocities on one side exhibit the same magnitude of negative velocity on the other side. In this way, it was ensured that the measured velocities really represent the azimuthal movement of particles and are not distorted by any influence of other emission lines or a possible misalignment of the optical system. From here on, positive values of this azimuthal velocity indicate movement in E⃗×B⃗ direction.
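For reference, the evaluation procedure described in the experimental section (Wiener deconvolution of the instrumental profile, transformation of the wavelength axis into a velocity axis via v = c(λ/λ_0 - 1), and computation of the mean velocity as the first moment of the resulting VDF) can be summarized in a few lines. The sketch below is a minimal illustration using synthetic placeholder data; line position, widths and shift are invented for demonstration purposes and do not correspond to the emission lines listed in the table above, and the snippet is not the evaluation code used for the measurements.

```python
import numpy as np

c = 2.998e8  # speed of light [m/s]

def wiener_deconvolve(measured, instrument, k=1e-3):
    """Remove the instrumental broadening in Fourier space (Wiener filter).
    k is a small regularization constant that limits noise amplification."""
    G = np.fft.fft(measured)
    H = np.fft.fft(np.fft.ifftshift(instrument / instrument.sum()))
    return np.real(np.fft.ifft(G * np.conj(H) / (np.abs(H) ** 2 + k)))

# Synthetic example: wavelength axis [m] around a placeholder line at lambda0,
# with the unshifted position lambda0 taken from the HCL reference spectrum.
lambda0 = 450e-9
wavelength = lambda0 + np.linspace(-100e-12, 100e-12, 1025)
measured = np.exp(-((wavelength - lambda0 - 0.8e-12) / 6e-12) ** 2)  # Doppler-shifted line
instrument = np.exp(-((wavelength - lambda0) / 3e-12) ** 2)          # instrumental profile

vdf = wiener_deconvolve(measured, instrument)
velocity = c * (wavelength / lambda0 - 1.0)
v_mean = np.trapz(velocity * vdf, velocity) / np.trapz(vdf, velocity)
print(v_mean)  # ~5e2 m/s: a shift of 0.8 pm at 450 nm corresponds to about 0.5 km/s
```

A Wiener filter is used instead of a naive inverse filter because division by the small high-frequency components of the instrumental response would otherwise amplify detector noise.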
Thus, v_φ increases with target distance for Ti II as particles travel through the plasma and are continuously accelerated in the azimuthal direction. The v_φ decrease at larger target distances z > 10 can be explained by the lack of any additional acceleration force at these positions, independent of which physical process is actually causing the acceleration: waves are expected to be much weaker at this position <cit.> and the electron density much lower, leading to less momentum transfer to the ions. Instead, the ions are only slowed down by collisions with the background gas, leading to smaller azimuthal velocities. On top of that, only the fastest ions can overcome the electric field and reach positions with z > 10. Because of their large velocity, such ions have crossed the dense plasma region close to the target very quickly, leaving them not much time to be accelerated in azimuthal direction. This effect will be explored in more detail in section <ref>. The larger maximum azimuthal velocity for argon ions compared to titanium ions can simply be explained by their smaller mass: the effective force acting on the ions is independent of the ion mass according to the reported explanations for the azimuthal ion movement found in the literature. As such, the ion acceleration is expected to scale with the ion mass m as m^-1. This would lead to a difference in the maximum velocity of m_Ti/m_Ar = 1.2, which accounts for almost all of the observed differences in azimuthal velocity. On top of the difference in maximum azimuthal velocity, figure <ref>a also shows that the position of peak velocity is different for the two ion species: for titanium ions, the velocity peaks at z = 10, whereas the peak position is at about z = 7 for argon ions. This shift can likely be explained by the difference in location where ionization occurs for the different ion species. Assuming strong working gas rarefaction, the direct vicinity of the target will be void of argon neutrals, and ions are rather created at some distance to the target surface. They are then accelerated towards the target by the electric field. Their average flow velocity towards the target leads to the observed shift in the position of maximum azimuthal velocity towards smaller z values compared to titanium ions, which instead maintain a positive average flow velocity (away from the target) since they are created close to the target surface. The smaller argon ion velocity closer to the target surface, z < 7 might partly be caused by ion-ion collisions with the slower titanium ions and partly by mixing with argon ions created from neutrals that have outgassed from the target surface due to the working gas recycling <cit.>. For titanium neutrals, figure <ref>a reveals a maximum azimuthal velocity of 0.45, located somewhere around z = 10. Data for z > 13 could not be obtained, since the emission line used for the measurement was disturbed by titanium ion emission, which increases in relative intensity with the target distance. The observation that even neutral species have a considerable velocity in azimuthal direction is at first surprising, since neither of the two explanations proposed in the literature for the azimuthal acceleration - momentum transfer from the electrons or the electric field fluctuations caused by the spokes - applies to neutrals. We propose that the movement of neutrals is due to resonant charge exchange collisions with the titanium ions. 
Since titanium ions are expected to have a large density and the cross section for resonant charge exchange is very large for titanium (σ_cx ≈ 2 × 10^-18 m^2 <cit.>), the titanium neutrals are dragged along with the azimuthal movement of the ions. Assuming a titanium ion density of n_Ti^+ = 5 × 10^19 m^-3, the mean free path for resonant charge exchange is only λ = (n_Ti^+ σ_cx)^-1 = 10 mm, demonstrating that our hypothesis is reasonable. This explanation is also in good agreement with the previously observed close coupling of the titanium neutral and ion VDFs in target normal direction <cit.>. For argon neutrals, no azimuthal drift could be observed with our setup. However, previous work by Kanitz did reveal a mean azimuthal velocity of about 30 m/s for argon metastable atoms <cit.>. This much lower azimuthal velocity can be explained by the smaller cross section for resonant charge exchange for argon and the lower argon ion density in the target vicinity, leading to much less efficient momentum transfer from ions to neutrals. Figure <ref>b shows the width (full width at half maximum, FWHM) of the VDF as a measure of the average energy or effective temperature of the species. For titanium ions, the width of the VDF was already discussed in a recent publication <cit.>. There, we explained that titanium ions start their life as highly energetic sputtered particles following a Thompson energy distribution. The Thompson distribution is rather narrow (in terms of FWHM), but features a strongly populated high-energy tail. As particles move through the plasma, they undergo Coulomb collisions with each other, leading to the relaxation of the VDF towards a Maxwell distribution. At the same average energy, the Maxwell distribution has a much larger FWHM, which is why the FWHM in figure <ref>b increases with target distance until about z = 8 mm, after which cooling by collisions with the background gas causes the VDF to become narrower again. The maximum width observed here for Ti II is around 11 km/s, which would correspond to a temperature of about 10 eV in the case of a fully relaxed distribution. For argon ions, we generally find a much narrower VDF, with FWHMs between 5 and 8 km/s. The reason for this smaller VDF width, which corresponds to a lower average energy, is that argon ions are created from argon neutrals, which are known to remain comparatively cold during the discharge pulse <cit.>. As such, the newly ionized argon particles start out cold and are then heated up by Ohmic heating and by collisions with the energetic titanium ions, which leads them to acquire an effective temperature somewhere between that of the cold argon neutrals and that of the highly energetic titanium ions. The maximum VDF width observed for the argon ions (8 km/s) corresponds to a temperature of 4.8 eV. For titanium neutrals, the VDF has an initial width of about 8.5 km/s, corresponding to an unaltered Thompson distribution. For larger target distances, the VDF becomes slightly narrower, presumably due to collisions with the working gas. §.§ Model for the ion movement Following the qualitative description of the observed azimuthal velocities, we will now attempt to find a quantitative description. To this end, forces in both the axial (z) and azimuthal (φ) directions need to be considered. The forces in z direction determine the residence time of the particles within each volume element, which determines how much acceleration in φ direction the passing species can accumulate.
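The order-of-magnitude numbers quoted above can be checked with a few lines of Python before setting up the model; the ion masses are standard values, and the FWHM-to-temperature conversion assumes a fully relaxed Maxwellian profile, as in the text.

import numpy as np

AMU = 1.661e-27      # atomic mass unit (kg)
E_CH = 1.602e-19     # elementary charge (C)

# mean free path for resonant charge exchange, lambda = 1 / (n * sigma)
n_ti_ion = 5e19      # titanium ion density (m^-3)
sigma_cx = 2e-18     # resonant charge-exchange cross section (m^2)
print(1.0 / (n_ti_ion * sigma_cx) * 1e3, "mm")     # -> 10 mm

def fwhm_to_temperature_eV(fwhm_km_s, mass_amu):
    # FWHM of a Maxwellian VDF: FWHM = 2 sqrt(2 ln2 kT / m), so kT = m FWHM^2 / (8 ln2)
    v = fwhm_km_s * 1e3
    return mass_amu * AMU * v ** 2 / (8.0 * np.log(2.0)) / E_CH

print(fwhm_to_temperature_eV(11.0, 47.9))          # Ti II: ~10 eV
print(fwhm_to_temperature_eV(8.0, 39.9))           # Ar II: ~4.8 eV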
§.§.§ Force in z direction and electric field The movement of ions in z-direction is mainly determined by the electric field in the magnetic trap region of the plasma, caused by the limited mobility of electrons across the magnetic field lines <cit.>. As such, we need to find an estimation of the electric field configuration to describe the ion movement in this direction. The electric field E⃗ is derived from the topology of the magnetic field B⃗ following a simple physical argument: due to the high mobility of electrons parallel to the magnetic field lines, any potential differences inside the magnetic trap region can only occur perpendicular to those. Therefore, the magnetic flux coordinates Ψ introduced by brinkmann_axisymmetric_2020 can be used to construct the topology of the electric potential Φ inside the magnetic trap region. Since this approach can only produce the topology of the plasma potential but not its absolute values, a specific scaling has to be assumed. This is achieved, by adjusting the electric potential to be consistent with probe measurements, which were performed at distances of 6.3, 8.0 and 9.7 mm. The magnetic field configuration, obtained from Hall-probe measurements following the method of kruger_reconstruction_2018, can be found in a previous publication <cit.>. Figure <ref> a) shows the reconstructed plasma potential assuming Φ∝Ψ. The arrows in the figure indicate the direction of the electric field, perpendicular to the magnetic field lines. This plasma potential topology is in good agreement with measurements of Rauch and Mishra <cit.>. Figure <ref>b shows the potential above the racetrack position (r = 13.5) as well as the electric field in axial direction E_z calculated from the potential. This electric field can now be used to model the particle movement in z direction as: d v_z/d t = e E_z/m §.§.§ Forces in φ direction and electron density During their movement in z-direction, the particles are accelerated in azimuthal direction. In contrast to the prior work from the literature <cit.>, we do not consider a two stream instability or additional azimuthal electric field, but only the momentum transfer from electrons to ions via Coulomb collisions. The drag force acting on the ions F_drag can be calculated as F_drag = η_⊥ e^2 n_e v_e with the electron density n_e, the electron drift velocity in azimuthal direction v_e and the cross B resistivity η_⊥. For a highly ionized plasma, η_⊥ can be calculated as <cit.>: η_⊥ = 2 π e^2 √(m_e)/(4 πϵ_0)^2 (k_B T_e)^3/2lnΛ with the electron mass m_e, the electron temperature T_e = 4.5 and Λ = 12 π n_e λ_D^3 depending on the Debye length λ_D. The electron drift velocity v_e consists of the E⃗×B⃗ drift, the curvature drift and the diamagnetic drift <cit.>: v_E× B = E⃗×B⃗/B^2 v_c = -v_||^2/ω_cb⃗× (b⃗·∇)b⃗ v_dia =T_e ∇ p ×B⃗/e n_e B^2 with the electron cyclotron frequency ω_c =eB/m_e, the unit vector b⃗ = B⃗/B, and the electron velocities v_|| parallel and v_⊥ perpendicular to the magnetic field. These drift velocities, together with equations <ref> and <ref> can be used to calculate the drag force acting on the ions as they travel through the plasma, leading to acceleration in the azimuthal direction. However, an estimation for the electron density is required for the diamagnetic drift as well as the momentum transfer from electrons to ions. Electron density: The spatial dependence of the electron density was estimated from the discharge current, from Langmuir probe measurements as well as from optical measurements. 
The maximum of the electron density n_e is expected to be located at the pre-sheath edge close to the target surface. This density can be derived from the measured discharge current I and the Bohm velocity v_B = √(k_B T_e/M) as n_e ≈ 2 I/(0.61 e v_B A), with the target surface area A and the factor of two accounting for the difference between the average current density across the target surface and the larger local current density above the racetrack <cit.>. Assuming a mix of titanium and argon ions with an average mass of M = 44 u and neglecting multiply charged ions as well as secondary electrons, we find a maximum density of n_e ≈ 1.8 × 10^20 m^-3. The density was also measured using a Langmuir probe at distances of 6.3, 8.0 and 9.7 mm. The spatial distribution of the electron density in z direction was further assumed to roughly follow the square root of the titanium ion emission, since this emission depends on the product of the electron density n_e and the titanium ion density n_Ti^+. This relationship requires singly charged titanium ions to be the dominant ion species and a constant electron temperature, which is not the case. But the relationship can still be useful as an indication of the expected shape of the electron density distribution. The titanium ion emission was obtained using Abel-inverted optical imaging, with a recently described setup <cit.>. Based on these three pieces of information, we approximate the spatial electron density profile as: n_e(z) = 2.47 × 10^20 m^-3/(exp(L_1/z)+1) · ( exp(-z/L_2) + exp(-z/L_3) ) + 2.5 × 10^19 m^-3, with L_1 = 0.1 mm, L_2 = 0.8 mm and L_3 = 4.5 mm. The resulting electron density is shown in figure <ref>, together with the probe measurements and the Abel-inverted titanium ion emission for comparison. Based on the proposed electric field E⃗(z), magnetic field B⃗(z), and electron density n_e(z), the different electron drift velocities as well as their sum can be calculated, as shown in figure <ref>a. The diamagnetic and E⃗×B⃗ drifts are almost constant along z, which is a consequence of assuming the gradients of the electron density and the electric field to be similar to the gradients in the magnetic field configuration. For the diamagnetic drift and the E⃗×B⃗ drift, we find drift velocities of about 15 km/s and 26 km/s, respectively. In contrast, the curvature drift velocity increases from about 20 km/s close to the target to about 68 km/s at z = 10 mm and then begins to decrease. All drift velocities are set to zero for z > 12.3 mm, because the gyroradius r_L of the electrons becomes larger than half the gradient length scale of the magnetic field B/∇ B, which we use as the criterion for electron magnetization <cit.>. One can state that all drift mechanisms contribute similarly to the overall azimuthal electron drift velocity, yielding values on the order of 100 km/s, as expected <cit.>. From the calculated drift velocities, we can now determine the azimuthal force acting on the ions, using equation <ref>. Figure <ref>b shows this drag force and the individual contributions from the different drift velocities as a function of target distance. The drag force peaks close to the target surface due to the large electron density, but then decreases at larger target distances. §.§ Test-particle simulation The model described above is solved using a test-particle Monte Carlo simulation (TPMC) with one spatial (z) and two velocity dimensions (v_z and v_φ). This method was first used by Davis <cit.> and is comparable to a PIC simulation in which the fields are specified a priori <cit.>.
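The density model and the drag-force expression above can be assembled numerically as in the sketch below; the electron drift speed v_e is passed in as an assumed value of the order of the calculated total drift (about 10^5 m/s), since the full field geometry is not tabulated here.

import numpy as np

E_CH = 1.602e-19    # elementary charge (C)
M_E = 9.109e-31     # electron mass (kg)
EPS0 = 8.854e-12    # vacuum permittivity (F/m)

def n_e(z_mm, L1=0.1, L2=0.8, L3=4.5):
    # Electron density profile from the expression above; z in mm, result in m^-3.
    z = np.asarray(z_mm, dtype=float)
    return (2.47e20 / (np.exp(L1 / z) + 1.0)
            * (np.exp(-z / L2) + np.exp(-z / L3)) + 2.5e19)

def eta_perp(ne, T_e_eV=4.5):
    # Cross-B resistivity from the expression above (SI units).
    kT = T_e_eV * E_CH
    lambda_D = np.sqrt(EPS0 * kT / (ne * E_CH ** 2))
    ln_lambda = np.log(12.0 * np.pi * ne * lambda_D ** 3)
    return (2.0 * np.pi * E_CH ** 2 * np.sqrt(M_E)
            / ((4.0 * np.pi * EPS0) ** 2 * kT ** 1.5) * ln_lambda)

def drag_force(z_mm, v_e=1.0e5):
    # Azimuthal drag force per ion, F_drag = eta_perp e^2 n_e v_e, with an
    # assumed total electron drift speed v_e (m/s).
    ne = n_e(z_mm)
    return eta_perp(ne) * E_CH ** 2 * ne * v_e

print(drag_force(3.0))   # drag force (N) 3 mm above the racetrack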
The simulation considers an ensemble of 10^7 particles that propagate in space according to their velocity and are accelerated according to the forces within small time steps Δ t = 75. Titanium ions are considered as the test particles and are introduced at z = 0 mm, corresponding to ionization very close to the target surface <cit.>. Particles are initialized with a Thompson distribution and are then accelerated by the electric field in z direction and by the electron drag force in φ direction. Particles are removed from the simulation if they either leave the simulation volume by moving beyond z = 40 mm, or if they return to the target. In both cases, a new test particle is created at z = 0 mm to keep the total number of particles constant. The simulation is performed over a time of 60, ensuring full convergence to a steady state. Collisions are neglected in the simulation since the densities of the main collision partners are unknown. Further details on the simulation can be found in the appendix. The mean azimuthal velocity is extracted from the converged simulation results for z-positions from z = 0 mm up to z = 25 mm in steps of Δ z = 0.25 mm by integrating the VDF of all particles within the interval [z, z+Δ z]. A comparison of the simulation with the measurements is shown in figure <ref>. The simulation yields an increase of the azimuthal velocity in the vicinity of the target and a decrease at large distances from the target, with a maximum at a distance of about 10 mm, in good agreement with the experiment. However, the simulation predicts a slightly smaller maximum velocity of only 0.95 km/s, compared to the 1.25 km/s found in the experiment. Furthermore, the simulation shows a much less steep decrease in azimuthal velocity for z > 12 mm than the experiment. For the difference in maximum velocity, we propose the influence of spokes as a possible reason. Since spokes possess strong azimuthal electric fields, they should be expected to additionally affect the azimuthal velocities. However, due to the complexity of spokes, an implementation of this influence within the simulation was not possible, since the wave phenomenon propagating in azimuthal direction breaks the symmetry of the 1D simulation. Based on the good agreement between simulation and experiment, the influence of spokes appears to be smaller than that of the drag force between electrons and ions caused by Coulomb collisions. However, it should be noted that this conclusion is not necessarily valid for all discharge conditions, since spokes under the present conditions have been observed to be not very strong <cit.>. The difference between simulation and experiment for z > 12 mm is likely caused by collisions. In the experiment, titanium ions at these target distances experience collisions with the background gas, which slow them down. Such collisions, however, are not included in the simulation, yet a slight decrease in azimuthal velocity for z > 12 mm is nevertheless reproduced. In the absence of any forces, no change of azimuthal velocity should take place in this region. Instead, the velocity decrease is attributed to the filtering of particles by the electric field in z-direction: only particles with a certain minimum starting velocity can reach a given z-distance. The higher this minimum starting velocity, the smaller the transit time within the region of high azimuthal force. This consequently leads to a smaller degree of accumulation of azimuthal velocity for the particles that are able to reach larger distances.
Despite these differences, the agreement between the simple simulation and the experiment is surprisingly good. Based on this agreement, we propose that at least a considerable part of the ion acceleration in azimuthal direction, observed in HiPIMS plasmas, is caused by the drag force from the drifting electrons on the ions via Coulomb collisions. This explanation differs from those found in the literature, which proposed different wave phenomena (the modified two stream instability or spokes) as the reason for the ion acceleration. It is likely, that these wave phenomena will also play a role in the azimuthal acceleration of ions. However, electron-ion collisions can clearly not be neglected. § CONCLUSION The azimuthal velocity of titanium and argon ions was measured inside the magnetic trap region of a HiPIMS discharge. The velocity distribution function (VDF) of the ions was obtained using optical emission spectroscopy, thus allowing access to the space-resolved VDF inside the discharge, instead of only sampling ions that leave the plasma. Results showed the azimuthal ion velocity to increase with target distance, peaking at about 1.55 for argon ions and 1.25 for titanium ions. The difference between the maximum velocities was explained as partly caused by the difference in ion mass and partly by the different locations, where ionization occurs for the two species. Titanium neutrals were also found to follow the azimuthal ion movement of the ions, likely due to frequent charge exchange collisions between neutrals and ions. A model for the discharge was proposed, estimating the electric field and electron density inside the magnetic trap region from probe measurements and simple physical arguments. Based on this, electron drift velocities were calculated and the corresponding drag force, caused by Coulomb collisions between electrons and ions was obtained. A simple test particle simulation was performed to determine the azimuthal ion velocity under these conditions, only considering the drag force of electrons on the ions caused by collisions, and neglecting other aspects, such as spokes and ion-neutral collisions. The simulation showed surprisingly good agreement to the experiment, indicating that Coulomb collisions between the drifting electrons and the much slower ions might be the primary reason for the azimuthal ion movement in HiPIMS plasmas. The second part of this series of two papers will investigate how ions leaving the magnetic trap region are affected by the azimuthal drag force. § ACKNOWLEDGMENTS This work has been funded by the DFG within the framework of the collaborative research centre SFB-TR 87. § DATA AVAILABILITY The data that support the findings of this study are openly available at the following DOI: 10.5281/zenodo.7904947 § TEST PARTICLE MONTE CARLO SIMULATION After initializing the particle ensemble using defined starting conditions, a leapfrog algorithm is used to perform the particle movement. 
Accordingly, the calculations within each time step are as follows: * Calculating the electric field and azimuthal drag force for the current z-position * Accelerating for Δ t/2 according to the calculated electric field and azimuthal drag force (v'_z = v_z + q · E(z)/m_Ti·Δ t/2 and v'_φ = v_φ + F_drag(z)/m_Ti·Δ t/2) * Moving in z-direction according to the calculated v'_z-values (z” = z + v'_z ·Δ t) * Calculating the electric field and azimuthal drag force for the updated z-position z” * Accelerating for Δ t/2 according to the calculated electric field and azimuthal drag force (v”_z = v'_z + q · E(z”)/m_Ti·Δ t/2 and v”_φ = v'_φ + F_drag(z”)/m_Ti·Δ t/2) * Replacing certain particles of the ensemble according to the boundary conditions As the initial condition, the z-position of all particles is set to zero, while v_z and v_φ are selected randomly from a Thompson distribution. Within the selection process, a 3-dimensional velocity distribution function is calculated according to a cosine angular distribution and a Thompson energy distribution. Randomly selected values from the projections of this distribution onto the z- and φ-directions are used for v_z and v_φ. This initial condition is motivated by the expected angular and energy distribution of sputtered titanium neutrals, which are ionized in the vicinity of the target surface. The boundary conditions are as follows: particles reaching z ≤ 0 mm or z ≥ 40 mm are removed from the ensemble, which mimics the loss of particles either to the target surface or to the substrate. To keep the total number of particles in the simulation constant, all removed particles are replaced with new particles at z = 0 mm, with v_z and v_φ selected from the 3-dimensional Thompson distribution of sputtered titanium, with its angular and energy dependencies projected onto the z- and φ-directions. These initial conditions correspond to an ionization of all sputtered particles directly after ejection from the target surface. Convergence of the simulation is reached when the number of particles entering and leaving each volume element is equal and the distributions of particle densities and velocities reach a steady state.
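A condensed sketch of this update loop is given below; E_of_z and F_of_z stand for the tabulated electric-field and drag-force profiles described in the main text, and sample_thompson for a user-supplied sampler of the projected Thompson distribution, so all of these names are placeholders rather than parts of the actual simulation code.

import numpy as np

M_TI = 47.867 * 1.661e-27   # titanium ion mass (kg)
Q = 1.602e-19               # ion charge (C)

def tpmc_step(z, vz, vphi, dt, E_of_z, F_of_z, sample_thompson):
    # One leapfrog step of the test-particle Monte Carlo scheme; positions in metres.
    vz = vz + Q * E_of_z(z) / M_TI * dt / 2.0        # half kick (axial)
    vphi = vphi + F_of_z(z) / M_TI * dt / 2.0        # half kick (azimuthal)
    z = z + vz * dt                                  # drift
    vz = vz + Q * E_of_z(z) / M_TI * dt / 2.0        # second half kick at new position
    vphi = vphi + F_of_z(z) / M_TI * dt / 2.0
    # boundary conditions: particles lost to target or substrate are re-injected at z = 0
    lost = (z <= 0.0) | (z >= 0.040)
    n_lost = int(np.count_nonzero(lost))
    if n_lost:
        z[lost] = 0.0
        vz[lost], vphi[lost] = sample_thompson(n_lost)   # returns (v_z, v_phi) arrays
    return z, vz, vphi

def mean_vphi_profile(z, vphi, dz=0.25e-3, z_max=0.025):
    # Bin-averaged azimuthal velocity, as extracted from the converged run.
    edges = np.arange(0.0, z_max + dz, dz)
    idx = np.digitize(z, edges) - 1
    return np.array([vphi[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(len(edges) - 1)])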
http://arxiv.org/abs/2306.03469v1
20230606074239
Joint Event Extraction via Structural Semantic Matching
[ "Haochen Li", "Tianhao Gao", "Jingkun Wang", "Weiping Li" ]
cs.CL
[ "cs.CL" ]
Event Extraction (EE) is one of the essential tasks in information extraction, which aims to detect event mentions from text and find the corresponding argument roles. The EE task can be abstracted as a process of matching the semantic definitions and argument structures of event types with the target text. This paper encodes the semantic features of event types and performs structural matching with the target text. Specifically, Semantic Type Embedding (STE) and Dynamic Structure Encoder (DSE) modules are proposed. Also, the Joint Structural Semantic Matching (JSSM) model is built to jointly perform event detection and argument extraction tasks through a bidirectional attention layer. The experimental results on the ACE2005 dataset indicate that our model achieves a significant performance improvement. § INTRODUCTION An event is a specific occurrence involving participants, which can frequently be described as a change of state[The ACE English event guidelines]. Event Extraction (EE) is one of the essential tasks in information extraction, and it provides structured information for downstream tasks such as knowledge graph construction, automatic abstracting, and machine Q&A. This paper focuses on the classic Event Extraction task, which consists of four subtasks: 1) detecting event mentions in natural language texts; 2) determining the specific types of the events; 3) finding the arguments of each event; 4) classifying the arguments into their roles corresponding to the event. The first two subtasks are also referred to as the Event Detection (ED) task, while the latter two constitute the Argument Extraction (AE) task. In the task of Event Extraction, a set of event types is specified in advance. For example, the ACE2005 event extraction corpus is annotated with 8 types and 33 subtypes of events. Also, for each predefined event type, there is a semantic definition. For example, an Attack event is defined as a violent physical act causing harm or damage. In addition, the argument slots contained in the event type are also specified, which constitute the argument structure of the event. For example, the Attack event contains five argument slots: Attacker, Target, Instrument, Time, and Place. Semantic type definitions and argument structures together distinguish different event types, and both are crucial when humans perform event extraction. From this perspective, the EE task can be abstracted as a process of matching the semantic definitions and argument structures of event types with the target text. Specifically, event mentions are detected from the text that matches the type definitions, and argument roles are extracted to fill the corresponding argument slots. Previous methods mostly focused on feature encoding of the target text and only introduced the event types in the final classification layer; we call these single encoding classification models (Fig <ref>a). These models ignore the semantic and structural features of event types, so the differences between event types are reduced to category numbers. As a result, such models are only sensitive to the training text and fail to correctly capture the semantic and structural connections between event types and text.
This paper proposes a dual encoding matching model (Fig <ref>b) to simultaneously encode event type definitions and semantic features of the target text, and structural matching is then performed to get the output. In detail, Semantic Type Embedding (STE) is first proposed to encode the semantic features of event types, entity types, and argument slot types. Then, Dynamic Structure Encoder (DSE) is used to simulate the structural matching between event types and target text. Besides, Joint Structural Semantic Matching (JSSM) model (<ref>), a joint event extraction model based on a bidirectional attention layer is also built. JSSM achieves a significant performance improvement on the ACE2005 dataset, verifying the importance of introducing structural semantic matching. In summary, the main contributions of this paper are as follows: * Semantic Type Embedding (STE) module is proposed to encode the semantic features of event types, entity types, and argument slot types. Utilizing these features, the Dynamic Structure Encoder (DSE) module is used to simulate the structural matching between event types and target text. * Based on a bidirectional attention layer, a Joint Structural Semantic Matching (JSSM) model is built to perform event detection and argument extraction tasks jointly. * Experiments on the ACE2005 dataset show that our method achieves significant performance improvement. Further analysis of the experimental results indicates the effectiveness of each module. § RELATED WORK The Event Extraction (EE) task can be divided into two parts: Event Detection (ED) and Argument Extraction (AE). These two parts can be solved separately or jointly. To solve the ED task, classic methods made a hypothesis that event triggers can represent event mentions and types. regards ED as a token-level sequence labeling problem, and uses a CNN-based classification model. Then, new models are introduced, such as Hybrid Neural Networks <cit.>, Graph convolution networks <cit.>, Hierarchical Multi-Aspect Attention <cit.> and so on. Since the data in ACE2005 dataset is small in scale, researchers have introduced knowledge enhancement methods, such as FrameNet <cit.>, multilingual attention <cit.>, and open-domain Knowledge <cit.>. To solve the AE task, some models regarded it as a downstream task of ED, and these models are called the Pipeline models <cit.>. In contrast, the Joint models <cit.> take the AE and ED as interactional tasks. Pipeline models exhibit better interpretability and higher precision, but the recall rate is often lower, and the overall effect is slightly worse due to error propagation. Meanwhile, joint models often achieve better final performance owing to the information interaction between tasks. There is a bottleneck in methods that detect event mentions and types using only event trigger words. exploits argument information to improve ED task, while document level information <cit.>, word sense <cit.>, pretrained language models <cit.>, enhanced local features <cit.>, or other features are used in the AE task. also investigates the possibility of event detection without using triggers. and treat EE as a machine reading comprehension (MRC) problem and use prior knowledge of reading comprehension to improve the model performance. However, previous approaches ignore the semantic and structural features of event types. This paper is the first work that explores the semantic features of event types and structurally matches them with target text to the best of our knowledge. 
§ METHODS §.§ Semantic Type Embedding (STE) The event extraction task aims to find the predefined event types from the text. However, if the event types are only introduced as category labels or randomly initialized vectors, event definitions' semantic features are completely lost. Inspired by the MRC-based EE approaches <cit.>, the questions are replaced with event type definitions. For example, the event type Be-Born can be defined as "Be-Born Event occurs whenever a PERSON Entity is given birth to.". Such a question includes semantic information that leads us to pay more attention to a person or some word standing for time. Formally, Semantic Type Embedding (STE) module is proposed to contain two parts: static STE and dynamic STE. Given the input sentence S=[s_1, s_2, ..., s_n] and the question Q=[q_1, q_2, ..., q_m], a pre-trained language model, i.e., BERT () is used as the encoder. §.§.§ Static STE First, the definition of each event type is tokenlized as the static question: Q_static=[[CLS], q_1, q_2, ... q_m, [SEP]]. After feeding these tokens into a BERT encoder, the [CLS] token's embedding is used as the static type embedding. Repeating this operation, the ste_static for each event type can be obtained: ste_static=BERT(Q_static)_0 A lookup table is built, where each type corresponds to a static embedding. It is worth noting that the static STE can be generated not only for event types but also for entity types and argument slot types, because semantic definitions also exist for their types. Static STE presents a fixed feature representation for each type. Furthermore, it is desired that the type features can be fine-tuned as the target text changes. In this case, a dynamic STE is proposed. §.§.§ Dynamic STE When definitions and target sentences are encoded together, the semantic connections between type definitions and words can be also encoded. The dynamic STE concatenates definitions of event types and target sentences, before it gets through the encoder. For example, considering a event type definition and a sentence, the input is: Q_dynamic=[ [CLS], q_1, q_2 ... q_m, [SEP], w_1, w_2 ... w_n, [SEP]]. Then, the same encoder is used: ste_dynamic=BERT(Q_dynamic)_0 Same as the static method, each sentence is combined with definitions of every event type, and the [CLS] token's embedding is used as the dynamic STE for each type. Finally, it is mixed with the static method with a mixing ratio α: ste=α*ste_static⊕ (1-α)*ste_dynamic In this paper, the static STE is used for entity type (ste^entity) and argument slot type (ste^slot), while the dynamic STE is used for event type (ste^event). §.§ Dynamic Structure Encoder (DSE) When humans discover events and find event roles from the text, they often use the target text to match the structure of known event types. Argument slots can be used to represent the structure of event types, because different event types have different numbers and types of argument slots. Therefore, the matching process can be abstracted as filling the argument slots of corresponding event types with words in the text. According to the hierarchical structure of events, this paper proposes a Dynamic Structure Encoder (DSE), which has three levels of features: sentence-level feature, slot-level feature, and event-level feature. Every two levels are dynamically connected through an attention-based adding (shown in Fig <ref>). On the Sentence-Slot connection, sentence is used to match argument slots. 
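As a concrete illustration of the STE construction above, the sketch below uses the HuggingFace transformers interface; the checkpoint name is an assumption (the paper only states that BERT is used), and the ⊕ in the mixing equation is read here as an element-wise sum (if it denotes concatenation, the last line would use torch.cat instead).

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")   # assumed checkpoint
bert = BertModel.from_pretrained("bert-base-uncased")

def static_ste(definition):
    # [CLS] embedding of the type definition alone (ste_static).
    inputs = tokenizer(definition, return_tensors="pt")
    return bert(**inputs).last_hidden_state[:, 0]                # shape (1, 768)

def dynamic_ste(definition, sentence):
    # [CLS] embedding of "[CLS] definition [SEP] sentence [SEP]" (ste_dynamic).
    inputs = tokenizer(definition, sentence, return_tensors="pt")
    return bert(**inputs).last_hidden_state[:, 0]

def ste(definition, sentence, alpha=0.5):
    # Mix of the static and dynamic embeddings with ratio alpha.
    return alpha * static_ste(definition) + (1.0 - alpha) * dynamic_ste(definition, sentence)

# example with the Be-Born definition quoted above and a hypothetical target sentence
emb = ste("Be-Born Event occurs whenever a PERSON Entity is given birth to.",
          "In Baghdad, a cameraman died when an American tank fired on the hotel.")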
Taking Fig <ref> as an example, after training, the slot Target tends to extract the features of the word "cameraman", and the slot Instrument tends to extract the features of the words "American" and "tank". Then, the argument slots are used to match event types based on the Slot-Event connection, and corresponding slots of each event type are only used. For example, Attacker, Target, Instrument, etc are used to form the event-level feature of the Attack event, and Victim, Place, Instrument, etc are used to form the event-level feature of the Die event. Specifically, the input sentence can be formed as : Sent = [w_0, w_1, ..., w_n], where n represents the sequence length. After feature encoding, The sequence feature is Seq = [x_1, x_2, ..., x_n]. For slot-level feature, an attention-based adding is used to make the slot aggregate the sequence features: Slot_i = ∑_j=0^nattention(ste^slot_i, x_j)× x_j Where i∈[0, S], because there are S different slot types and an None type. The same method in <ref> is employed to obtain the static STE output ste^slot_i for each slot, and a bidirectional attention layer is used to obtain attention scores (see <ref> for details). For event-level feature, another attention-based adding is performed to encode the event features: Event_k = ∑_i=0^Sattention(ste^event_k, Slot_i)× Slot_i Where k∈[0, E], because there are E different event types and an None type. A simple Cosine similarity is used to obtain the attention values. After encoded by the DSE, each input sentence produces E different event-level features for every event types, containing lexical, semantic, and structural information. In section <ref>, these event-level features and event type embeddings are used to perform event detection. § MODEL A joint model called Joint Structural Semantic Matching (JSSM) is proposed in this paper for EE and AE tasks. The overall architecture of JSSM is shown in Fig <ref>. JSSM first uses an STE Preparation Module to prepare STE for event, entity, and slot types. Then, the input sentences are encoded by a Sequence Encoding Module. After sequence feature and Ste^slot are sent to the bidirectional attention layer, sequence-aware slot features (red boxes in Fig <ref>) and sot-aware sequence features (grey boxes in Fig <ref>) can be obtained. The slot features and sequence features are passed to the Event Detection Module and the Argument Extraction Module to obtain final outputs, respectively. §.§ STE Preparation Module Three STE modules are needed in this model, namely Ste^entity, Ste^slot, and Ste^event. The first two are static STEs, while Ste^event is a dynamic STE. Taking Ste^slot as an example: ste^slot_i = BERT(question_i^slot)_0, i∈[0,S] There are S types of slots, and question_i^slot stands for the definition sentence for the ith type of slot. ste^slot_i∈ R^d, where d follows the dimension of BERT encoder's output. Slot = [ste^slot_i] is one of the two inputs of the bidirectional attention layer. §.§ Sequence Encoding Module Given an input sentence Sent = [s_i] with n tokens, the same BERT encoder is used in STE preparation. After BERT encoding, the sequence embeddings E = [e_i] can be obtained. E = BERT(Sent), E∈ R^n× d At the same time, each token s_i corresponds to a certain entity type, and ste^entity_i is used to represent the STE of the entity. 
Because the same BERT encoder is used, ste^entity_i∈ R^d, and they can be added together: seq_i = e_i + ste^entity_i, i∈ [1, n] Then, the sequence encoding is obtained: Seq = [seq_i], Seq∈ R^n× d, which is the second input of the bidirectional attention layer. §.§ Bidirectional Attention Layer More attention should be paid to the structural features in performing the event detection task, and slot features can be used (section <ref>). At the same time, in performing the argument extraction task, the sequence features are more important. To jointly solve these two tasks, a bidirectional attention layer is proposed, and the two directions are called seq2slot and slot2seq. Unlike the previous joint model that directly adds the losses of different tasks, our model treats event classification and argument extraction as two mirror tasks, and the tasks enhance each other's results through the attention layer. Assuming the STE for argument slots is Slot = [ste_i^slot], i∈[0, S], and the sequence encoding Seq = [seq_j], j∈[1, n]. A method similar to the self-attention mechanism () is used. Attention(Q,K,V)=softmax(QK^T/√(d_k))V Three weight matrices are set for each way. So, there are six matrices in total, namely, W_V^slot, W_Q^slot, W_K^slot and W_V^seq, W_Q^seq, W_K^seq. seq2slot: In the first direction, it is desired to fill the argument slots with sequence features. So, Slot is used to generate the queries, and Seq is used to generate the keys and values (grey part in Fig <ref>): Q^slot = Slot· W_Q^slot K^seq = Seq· W_K^seq V^seq= Seq· W_V^seq Then, the sequence-aware slot features Slot ( slot-level features in DSE) can be obtained, which are red boxes in Fig <ref>: Slot = Attention( Q^slot, K^seq, V^seq) seq2slot: In the second direction, argument slot features are used to enrich the sequence features, which is beneficial to the Argument Detection task. So, Q^seq, K^slot, and V^slot are calculated (lake blue part in Fig <ref>). Then, the slot-aware sequence features are obtained: Seq = Attention( Q^seq, K^slot, V^slot) Slot and Seq are sent to the Event Classification Module and the Argument Extraction Module, respectively. §.§ Event Detection Module For each sentence input, DSE (section <ref>) is used to obtain the event-level features. The input then goes through an event type matching layer to get the results. The prepared Ste^event is used as the type features for every event type, and the event definitions in the ACE English event guidelines are used. For each event type e_k, its type embedding ste^event_k is used to match the input Slot: Firstly, since the event type and corresponding slot are already predefined, an event to slot Mask with a hard attention map is used to mask the unrelated slot. Then, a Cosine attention adding is used to get event-level features Event_k: Event_k = [ste^event_k * (Mask * Slot)]·Slot Finally, a two-layer affine network is used to perform the matching process: o^tmp_k = σ[(Event_k+ ste^event_k) * W^evt_1 + b^evt_1] o^evt_k = σ(o^tmp_k * W^evt_2 + b^evt_2) Where W_evt^1, W_evt^2 and b^evt_1, b^evt_2 are parameters, σ is an GELU activation function (<cit.>). Because each sentence may contain more than one types of event, E affine networks are defined for every event types. Assuming a sentence contains event types A and C, the output should be [1, 0, 1]. 
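A simplified sketch of the bidirectional attention layer and of the per-type event matching head described above is given below; the dimensions follow the paper (d = 768), but the code is an illustration under these assumptions rather than the authors' released implementation, and the GELU-activated score is regressed against 0/1 labels with the MSE loss introduced next.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalAttention(nn.Module):
    # seq2slot and slot2seq attention with the six projection matrices.
    def __init__(self, d=768):
        super().__init__()
        self.wq_slot = nn.Linear(d, d, bias=False)
        self.wk_slot = nn.Linear(d, d, bias=False)
        self.wv_slot = nn.Linear(d, d, bias=False)
        self.wq_seq = nn.Linear(d, d, bias=False)
        self.wk_seq = nn.Linear(d, d, bias=False)
        self.wv_seq = nn.Linear(d, d, bias=False)
        self.scale = d ** 0.5

    def forward(self, slot, seq):                     # slot: (S+1, d), seq: (n, d)
        a = torch.softmax(self.wq_slot(slot) @ self.wk_seq(seq).T / self.scale, dim=-1)
        slot_tilde = a @ self.wv_seq(seq)             # sequence-aware slot features
        b = torch.softmax(self.wq_seq(seq) @ self.wk_slot(slot).T / self.scale, dim=-1)
        seq_tilde = b @ self.wv_slot(slot)            # slot-aware sequence features
        return slot_tilde, seq_tilde

class EventMatchingHead(nn.Module):
    # Masked cosine aggregation of slot features plus a two-layer affine scorer.
    def __init__(self, d=768):
        super().__init__()
        self.fc1 = nn.Linear(d, d)
        self.fc2 = nn.Linear(d, 1)

    def forward(self, ste_event_k, slot_tilde, slot_mask_k):
        masked = slot_tilde * slot_mask_k.unsqueeze(-1)          # keep this event's slots
        w = F.cosine_similarity(ste_event_k.unsqueeze(0), masked, dim=-1)
        event_k = w @ slot_tilde                                 # event-level feature
        h = F.gelu(self.fc1(event_k + ste_event_k))
        return F.gelu(self.fc2(h))                               # o^evt_k, trained with MSE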
The output O^evt = [o^evt_i], i∈[1,E] and the golden labels Y^evt = [y^evt_i], i∈[1,E] are used to calculate the MSE loss: L_evt = 1/E∑_i=1^E(y^evt_i - o^evt_i)^2 §.§ Argument Extraction Module Seq is used as the main feature for argument extraction, since it contains the features of the argument slots through the slot2seq attention. However, Seq does not contain the lexical features of each word, so the original sequence features are also added (purple boxes in Fig <ref>). Arg_i = Seq_i + seq_i Since each word may have multiple argument labels, affine networks similar to those in section <ref> are used. o^tmp_k = σ[(Arg_k + ste^slot_k) * W^arg_1 + b^arg_1] o^arg_k = σ(o^tmp_k * W^arg_2 + b^arg_2) Assuming the golden labels are Y^arg = [y^arg_i], i∈[1,S], the loss for argument extraction can be obtained: L_arg = 1/S∑_i=1^S(y^arg_i - o^arg_i)^2 Combining L_evt and L_arg, the overall loss function can be obtained: L = λ L_evt + (1-λ) L_arg § EXPERIMENTS §.§ Dataset and Settings Dataset: The proposed method is evaluated on the ACE2005 <cit.> English event extraction benchmark. Annotated from 599 documents, the benchmark contains 33 types of events. Following the previous splitting and processing settings <cit.>, 529 documents are used for training, 40 documents for testing and 30 for development. Environment and Hyperparameters: Experiments are performed on a single Nvidia RTX 2080 Ti GPU. Using BERT-base as the pretrained language model, all embedding and feature dimensions are unified to 768. The batch size is 16. The learning rate is set to 1.5e-5, and the dropout rate is 0.6. The mixing ratio α in the dynamic STE (Eq. <ref>) and λ in the loss function (Eq. <ref>) are set to 0.5 and 0.3, respectively. Each epoch takes about 20 minutes. Evaluation Metrics: For the event detection task, instead of locating the trigger, we directly judge whether the event types of a sentence match the golden labels. For the argument identification task, an argument is correctly identified if its span matches a golden annotated argument. For the argument classification task, an argument is correctly classified if its span and role both match the gold annotation. The Precision, Recall and macro F1 scores are calculated for each task. §.§ Baselines The following models are used as baselines. Two pipeline models: 1) DMCNN <cit.>, which extracts sentence-level features by adopting a dynamic multi-pooling CNN model; 2) RBPB <cit.>, which proposes a regularization-based method to utilize the pattern of arguments. Three joint models: 3) JRNN <cit.>, which uses an RNN-based joint model; 4) Joint3EE <cit.>, which jointly models entities and events based on a shared Bi-GRU hidden representation; 5) DyGIE++ <cit.>, which jointly extracts entities, relations, and events. Two MRC-based models: 6) BERT-QA <cit.>, which formulates event extraction as a question answering problem; 7) MQAEE <cit.>, which performs event extraction based on a multi-turn QA framework. §.§ Overall Performance The overall performance is listed in table <ref>. The Event ID & CLS column compares the precision, recall and F1 scores of the event detection task. The Argument ID and Argument CLS columns compare the performance of the argument extraction task. It can be observed that: 1) By using structural semantic matching, the JSSM model's overall performance outperforms prior works on all tasks. 2) The recall scores of JSSM significantly exceed all baselines, while the precision score of the argument classification task is relatively weak.
3) The drop of F1 from argument identification to classification is 12.1 %, which is much larger than prior models. It is speculated that our joint model loses certain prior information while avoiding error propagation. §.§ Ablation Study To clarify each modules' performance contribution to the model, some ablation studies are conducted, and the results are listed in table <ref>. In these studies, parts of our model are deactivated separately. In table <ref>, "-" means that the module is abandoned, while "random" means that randomly initialized embedding is used to replace the STE module. After the DSE module is removed, the F1 score of Event ID & CLS drops by 1.4%. Furthermore, the F1 score of Argument ID and Argument CLS drops by 4.5% and 5.1%, respectively. Though the DSE module does not directly act on the Argument ID & CLS tasks, these tasks' performance exhibits a significant decrease, which reflects our model connects the structural features of event types and argument slots through a joint training process. When the Ste_dynamic is removed, the F1 scores drop slightly on all tasks. It is worth mentioning that only using static STE reduces noise disturbance and has higher precision scores in argument extraction tasks. The experiment "-Ste^entity" shows that after entity labels are abandoned, the model suffers from a sharp performance decrease, 7.4% in both argument identification and classification. These results indicate that entity features are one of the most useful information in sequence classification tasks. In the three random experiments, the semantic features of event, entity, and argument slot are completely abandoned. The performance on each task decreases apparently, indicating the indispensability of semantic features. Furthermore, when the Ste^event is randomly initialized, the structural semantic matching between sentences and event types also fails, causing a devastating performance drop in Event ID & CLS tasks. §.§ Semantic Material Selection The semantic material used to generate questions for the STE module determines the semantic information of type embeddings. To explore the influence of different question strategies on the results, three templates focusing on the event names, triggers, and definitions are designed respectively: Single event name: A single event name is used as the question, and its BERT embedding is used as the STE for event. Top trigger words: For each event type, the top five most frequently occurring trigger words are chosen, and they are concatenated as the question. Guideline definition (used in JSSM): Each event type has a definition in the ACE English event guidelines. The definition is slightly modified to contain more structural information. Then, they are taken as the questions. The results are shown in Table <ref>. It can be seen that: 1) The Guideline definition material exhibits the best effect. Because it describes the event from all aspects, contributing to sufficient semantic information. 2) Single event name strategy achieves better performances in the Argument ID task. Owing to BERT's large-scale training corpus, the model can also generate event-related semantic information from event names. 3) Trigger words are mostly verbs, but these verbs have different meanings depending on the context, and they do not merely represent a specific event, leading to the worst results. §.§ Event Imbalance Analysis The ACE2005 dataset has a serious data imbalance problem (<cit.>). 
The Attack, Transport and Die events alone account for 51.5% of all events in the training set. Therefore, models that only use text features perform poorly on rare event types. Our model makes full use of the semantic features of the event types, making it robust to data imbalance. It can be seen from Fig <ref> that there is no significant difference in performance between frequent and rare event types. § CONCLUSION In this paper, we regard the event extraction task as a structural semantic matching process between the event types and the target text. The STE module and the DSE module are proposed, and a joint extraction model named JSSM is built based on a bidirectional attention layer, which performs well on the ACE2005 benchmark. Each module's effectiveness is verified through the ablation study, and the performance of different question strategies for STE is compared. Also, our model is shown to be extremely robust on imbalanced data.
http://arxiv.org/abs/2306.07267v2
20230612175240
Spectrally multimode squeezed states generation at telecom wavelengths
[ "Victor Roman-Rodriguez", "David Fainsin", "Guilherme L. Zanin", "Nicolas Treps", "Eleni Diamanti", "Valentina Parigi" ]
quant-ph
[ "quant-ph" ]
APS/123-QED [email protected][Currently with ]ICFO - Instituto de Ciencias Fotonicas, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Barcelona, Spain ^1Laboratoire Kastler Brossel, Sorbonne Université, ENS-Université PSL, CNRS, Collège de France, 4 place Jussieu, Paris F-75252, France ^2Sorbonne Université, LIP6, CNRS, 4 place Jussieu, 75005 Paris, France We report on the experimental demonstration of a source that generates spectrally multimode squeezed states of light over the infrared C-Band. This is achieved using a single-pass Spontaneous Parametric Down Conversion (SPDC) process in a periodically-poled KTP waveguide that is pumped with the second harmonic of a femtosecond laser. Our measurements show significant squeezing in more than 21 frequency modes, with a maximum squeezing value over 2.5 dB. Moreover, we demonstrate multiparty entanglement across 8 individual frequency bands by measuring the covariance matrix of their quadratures. Finally, we use reconfigurable mode-selective homodyne detection to mold the output into cluster states of various shapes. This result paves the way for the implementation of continuous variable quantum information protocols at telecommunication wavelengths, with applications in multiparty, entanglement-based quantum communication and computation. Spectrally multimode squeezed states generation at telecom wavelengths Guilherme L. Zanin^1, Nicolas Treps^1, Eleni Diamanti^2, and Valentina Parigi^1 July 31, 2023 =================================================================================== § INTRODUCTION Continuous variable (CV) encoding of quantum information requires the generation of multimode quantum states of light with tailored spectral, spatial and temporal mode properties <cit.>. In particular, measurement-based protocols are based on the possibility of deterministically generating large multimode entangled states, where entanglement is established between amplitude and phase quadratures of different light modes <cit.>. This in turn requires the generation of a large number of squeezed modes. Second-order nonlinear waveguides are currently largely explored to generate tailored squeezed modes: single-mode over a large bandwidth <cit.>, temporally multiplexed <cit.>, and both spectrally and temporally multiplexed <cit.>. Particular effort has been devoted to generate squeezed sources at telecommunication wavelengths. Single-mode quantum states with significant squeezing <cit.>, on-chip few-mode squeezed states <cit.> and micro-comb structures have been shown <cit.>. Recent results on single-mode squeezing generation in nonlinear waveguides have been reported in a regime where optical communication technologies can be exploited <cit.>, where it is possible to have quantum state transmission through fibers over long distances <cit.> and sensing <cit.>. Here we demonstrate the generation of a vacuum squeezed field in the near infrared C-Band that is intrinsically spectrally multimode, i.e., a system that cannot be reduced to a single squeezer acting on a specific spectral mode, but that involves many squeezers acting on different (orthogonal) spectral modes. This implies that the many spectral modes can be directly shaped, via linear optics transformations, into entangled networks of the same number of nodes without mixing them with extra vacuum field states, thus not degrading the squeezing and/or entanglement correlations. 
The networks built in this way can be easily tailored in our setup via mode-selective homodyne detection <cit.>, and their quality can be characterized via the squeezing level of their so-called nullifier operators. We can also independently check entanglement correlations between different individual frequency bands, by measuring the full quadrature covariance matrix for 8 different bands. A Positive-Partial-Transpose (PPT) criterion can be used as an indication of entanglement over all the possible band bipartitions. Both intra-band entanglement and cluster structures can be used for frequency-multiplexed QKD <cit.> and entangled-based multiparty quantum communication protocols <cit.>. Moreover, the demonstrated source, being pumped with a pulsed laser in a single-pass configuration, is compatible with simultaneous spectral and temporal (pulse-based) multiplexing <cit.> as well as mode-selective non-Gaussian operations <cit.>. This fact makes the source appealing for the generation of scalable 3-dimensional entangled structures to be used in measurement-based quantum computing <cit.>. The paper is structured as follows: in Section II we briefly summarize the theoretical description of the generation of the multimode quantum states via SPDC in type 0 nonlinear waveguides <cit.>. We also describe our scheme for the experimental setup used in this work. In Section III, we show the experimental results, covering multimode squeezing, covariance matrix in the basis of equidistant frequency bands (that we call frexel basis, <cit.>), and squeezing in the nullifiers of certain cluster states. We conclude this work with a summary in Section IV. § EXPERIMENT DESCRIPTION §.§ Multimode SPDC We proposed the generation and engineering of CV multipartite entangled states of light in the frequency domain via SPDC, using nonlinear waveguides, in <cit.>. Details about the theoretical considerations can be found there and in the references within. In summary, we investigated the properties of the joint spectral amplitude (JSA): J(ω_s,ω_i) = ∑_kλ_k h_k(ω_s)g_k(ω_i), where ω_s/i is the signal/idler frequency, λ_k are the Schmidt coefficients and h_k(ω_s) (g_k(ω_i)) are the signal (idler) frequency modes composing the signal (idler) field after the interaction. Given that the JSA contains all the frequency information of our states, the output signal and idler fields can be described with the modes: Â_k^† = ∫dω_sh_k(ω_s)â^†(ω_s) B̂_k^† = ∫dω_ig_k(ω_i)b̂^†(ω_i), which define the sometimes called supermode basis <cit.>. The Hamiltonian describing the interaction in this basis takes the form: Ĥ = ∑_k^Nλ_kÂ_k^†B̂_k^† + h.c. Evolution under this Hamiltonian is known to produce N independent pairs of completely entangled modes (EPR pairs). In <cit.>, we focused on the EPR pair multimode field as a starting point for the generation of the more general cluster states in the context of type II SPDC. In the experiment described here, for practical reasons, we work instead with the type 0 process in the degenerate case, i.e., when the signal and idler modes are identical, Â_k = B̂_k, and hence there is a unique output field. In this case the Hamiltonian of Eq. (<ref>) reduces to: Ĥ = ∑_k^Nλ_k(Â_k^†)^2 + h.c. Evolution under this Hamiltonian produces a multimode field with N independent squeezed states. Therefore, we can take as the experimental signature of the generation of these states the squeezing levels of our multimode output field. 
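The Schmidt decomposition in the equations above is, numerically, just a singular value decomposition of the discretized JSA; the sketch below uses a double-Gaussian toy JSA as a stand-in for the simulated phase-matching function of the actual waveguide.

import numpy as np

# toy joint spectral amplitude on a discretized frequency grid (arbitrary units):
# a narrow pump envelope along (w_s + w_i) times a broad phase-matching stand-in
w = np.linspace(-1.0, 1.0, 256)
ws, wi = np.meshgrid(w, w, indexing="ij")
jsa = np.exp(-((ws + wi) ** 2) / 0.05) * np.exp(-((ws - wi) ** 2) / 1.0)

# Schmidt decomposition = SVD of the discretized JSA
u, s, vh = np.linalg.svd(jsa)
lam = s / np.linalg.norm(s)                  # normalized Schmidt coefficients
schmidt_number = 1.0 / np.sum(lam ** 4)      # effective number of modes
print(f"effective mode number K = {schmidt_number:.1f}")
# columns of u (rows of vh) are the discretized signal (idler) supermodes h_k, g_k;
# in the degenerate type 0 case they coincide and each carries an independent squeezer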
Each individual squeezed mode can be addressed experimentally via coherent (homodyne) detection. Indeed, since homodyne detection is a projective measurement, interference with a properly spectrally shaped Local Oscilator (LO) gives access to the noise properties of the individual modes under study <cit.>. Once the squeezing levels are characterized, the generation of a particular cluster state with up to N nodes can be obtained by performing an adequate unitary transformation <cit.>. This can be done by adding to the experimental setup a passive optical circuit performing the unitary operation, or equivalently, changing appropriately the LO shape to access directly the modes composing the cluster state with homodyne detection <cit.>. §.§ Experimental Setup The experimental setup for generating the multimode squeezed state is depicted schematically in Fig. <ref>. A broadband fiber femtosecond laser (characteristics: bandwidth ∼ 55 nm, pulse width ∼ 57 fs, repetition rate 100 MHz, power ∼ 500 mW, centered at 1560 nm), is partly directed to a periodically poled Lithium Niobate (ppLN) crystal, engineered to produce the second harmonic frequency. After the ppLN we obtain light with a bandwidth of ∼ 2 nm, centered at 780 nm. This field is then coupled to a single(spatial)-mode, rectangular, nonlinear, periodically poled Potassium Titanyl Phosphate (ppKTP) waveguide[The waveguides were purchased from the company AdvR.] that down converts the second harmonic field to the C-band telecom wavelengths and generates the multimode squeezed states. On the other side, a big fraction of the original power from the laser is sent to a pulse shaper in order to generate a spectrally configurable LO. The signal field from the waveguide and the LO are mixed and directed to separate photodiodes, whose electrical outputs are subtracted to obtain the homodyne signal. We constructed the pulse shaper by diffracting the wavelength components of our input field using a grating, and directing them to a Spatial Light Modulator (SLM), where the light is reflected back and recombined in a similar grating (so called 4f configuration)<cit.>. In the SLM screen, each pixel, and hence, each frequency component of the field, can be addressed individually, resulting in a spectrally shaped pulse at the output. We used a home-made interface in Python to control the different masks applied to the pulse shaper. The LO is then passed by another ppKTP waveguide (identical in dimensions) before mixing it with the quantum signal. This is done to spatially match the LO and the signal, effectively decreasing losses in the detection. We perfomed numerical simulations in advance in order to predict the properties of the independent squeezed modes following <cit.>. Details about these numerical simulations and about the characterization of our waveguides can be found in Appendix A. § EXPERIMENTAL RESULTS §.§ Multimode Squeezing Fig. <ref> summarizes the squeezing values obtained in each of the measured modes, where the shot noise stands as reference (0 dB). The quadrature noise was measured as a function of the relative phase between the signal and the LO, for different LO spectral shapes, via a spectrum analyzer. The relative phase is modulated in time thanks to a piezoelectric mirror in the LO optical path. We first performed the experiment by projecting the states into the family of Hermite-Gauss (HG) modes. 
The particular family we used, derived from our numerical simulation, is defined by a fundamental HG_0 mode with 45 nm of FWHM (in amplitude). The Sqz (squeezing) and ASqz (antisqueezing) values of Fig. <ref> are the minimal and maximal values of the quadrature noise, measured by averaging over scans of several phase periods on the spectrum analyzer. The asymmetry between squeezing and antisqueezing levels (typical values are -1.0 dB for squeezing and 1.4 dB for antisqueezing) is attributed to experimental optical losses. The main loss source is the non-optimal spatial mode-matching between the signal and LO optical modes. We characterize it by measuring the visibility of fringes between the LO and a small fraction of telecom light coupled into the waveguides. The associated quantum efficiency scales quadratically with the measured visibility (other contributions to the global quantum efficiency in the homodyne setup are defined in Appendix B). In order to match the spatial modes of LO and signal, we inserted, in the LO path, a waveguide identical to the one used for the generation of the signal. The overall visibility in such a configuration, which is the one used for the measurements in Fig. <ref>, reaches the value of 77%. We explain the non-ideal visibility by residual differences between the spatial modes of the signal and the LO. This can be due to inhomogeneities in the two waveguide structures that are known to appear when the waveguide is long (∼ cm scale). Fig. <ref> also shows the spectral shapes set in the LO pulse shaper to measure the corresponding squeezing values. A detrimental effect in our setup is what we call optical clipping, which arises due to the limited size of the cylindrical mirror responsible for focusing the light beam onto the SLM for pulse shaping. Since the beam comes from a dispersive element (a grating), some wavelengths at the extremes of the spectrum are cut off. Due to this effect, the LO spectral extremes cannot be used in homodyne detection (the corresponding clipped regions for the different modes are shown in red in Fig. <ref>). We would then expect larger values for the measured squeezing levels if the full spectrum were experimentally available. It is also worth mentioning that, due to the optical clipping, the cut HG modes onto which we project our state are technically not orthogonal anymore. The dimension of the subspace spanned by the modes shown in Fig. <ref> is nevertheless close to 21 HG modes (details can be found in Appendix D, where we calculate it to be around 18). Furthermore, we projected the states onto a basis of orthogonal, flattened HG modes that we call flat modes. For such modes we observed larger squeezing values - with more than 2 dB of squeezing up to the 4th mode - than for the HG modes. The flat modes and their squeezing levels are shown on the right side of Fig. <ref>, as well as in Appendix E. This result implies that the HG basis shown on the left side of Fig. <ref> is not the family of supermodes, since those should be the most squeezed modes in the system. Further evidence in this direction is given by the covariance matrix measurements in the next section, showing that the spectral widths of the theoretically predicted HG modes are probably underestimated. We therefore expect that the measured squeezing values in Fig. <ref> constitute a lower bound for the potential squeezing that can be achieved with larger mode-matching visibilities and without optical clipping.
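Because the mode-matching efficiency enters as the square of the fringe visibility, the impact of the 77% visibility on a given amount of source squeezing can be estimated with the standard loss model sketched below; the source squeezing values and the residual efficiency of 0.85 are assumed numbers, not measurements.

```python
import numpy as np

def detected_squeezing_dB(source_dB, visibility, eta_other=1.0):
    """Squeezing surviving a homodyne detection of efficiency
    eta = eta_other * visibility**2 (losses admix vacuum noise)."""
    eta = eta_other * visibility ** 2
    v_in = 10.0 ** (-source_dB / 10.0)        # variance in shot-noise units
    v_out = eta * v_in + (1.0 - eta)
    return -10.0 * np.log10(v_out)

# Illustration with the 77% visibility quoted above; 0.85 stands in for the
# remaining detection efficiencies and is an assumption.
for s in (2.0, 4.0, 6.0):
    print(f"{s:.0f} dB at the source -> "
          f"{detected_squeezing_dB(s, 0.77, 0.85):.2f} dB detected")
```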
This is also witnessed by the significant asymmetry in the squeezing and antisqueezing values measured in the flat mode basis. To summarize the above discussion, Fig. <ref> demonstrates the experimental realization of an optical multimode squeezed state composed of at least 21 frequency modes, with reasonable margin for improvement, and that can be used for the quantum information protocols. §.§ Covariance Matrix measurement In order to check the presence of entanglement, i.e., quantum correlations, in our multimode state, we measured the state in different basis from the one of the supermodes <cit.>. In particular, we used the so-called frexel basis, which is composed of a number of equally spaced frequency bands covering the total spectrum of the LO. This is a suitable basis, not only because it is easily accessible via the shaping capability of the LO, but also because the frexel modes can be easily spatially separated via dispersive elements and sent to different locations, which is necessary for multiparty quantum protocols. We therefore measured the covariance matrix that characterizes our gaussian state, in the frexel basis, using 8 equally spaced frequency bands. The covariance matrix is shown in Fig. <ref>. Under the reasonable assumption of a non-chirped pump, we expect no correlations between the different position, q_i, and momentum, p_j, quadrature components and hence zero value for the symmetrized expectation values of the form q_ip_j <cit.>. We then measure only the subgroups of q_iq_j or p_ip_j quadratures. The results are shown in the left of Fig. <ref>. The non-zero off-diagonal elements in the data show the presence of correlations between the frequency bands defining the 8 frexels in our state. This can be associated to the entanglement between the frequency bands that can be tested via the positive partial transpose (PPT) criterion <cit.> (see Appendix G for more details). We obtained violation of the PPT criterion for all of the possible bipartitions of our system. On the right of Fig. <ref>, we perform a numerical diagonalization of the measured covariance matrix to recover the eigenmode basis, where no entanglement is present. Thus we expect these eigenvectors to resemble discretized versions of the supermodes, with eigenvalues related to a squeezing level value over the frequency band composing the frexel. Overall, the numerical eigenvectors obtained in the diagonalization are in good agreement with the theoretical prediction of approximate Hermite-Gauss modes, except that their spectral widths are larger than the theoretically predicted modes. This is consistent with the measurement of larger squeezing values in the flat modes basis rather than in the theoretically derived HG mode basis, as shown in the previous section. The eigenmodes and eigenvalues from the diagonalization of the covariance matrix can be found in Appendix F. §.§ Cluster State Generation Finally, we used the experimental setup for the deterministic generation of some few-node cluster states, as a proof of principle on the versatility of the source. We probe the generation of the cluster states by measuring squeezing in the nullifiers that characterize a specific adjacency matrix, i.e., a particular topology defining the graph. Changing from one topology to another can be achieved by appropriately changing the mask on the pulse shaper <cit.>. 
The nullifiers, {δ̂_i}, of a particular graph, with quadrature operators for the nodes denoted q̂_i and p̂_i, are written as: δ̂_i = p̂_i - ∑_j V_ijq̂_j, where V_ij are the elements of the adjacency matrix defining the cluster state, which is known in advance <cit.>. Furthermore, the quality of the probed cluster state can be qualitatively assessed via the amount of squeezing measured in the nullifiers. We measured 4-node cluster states with different topologies, whose nullifiers' squeezing levels are summarized in Fig. <ref> via statistical box plots for each cluster topology. Additionally, we show that all nullifiers are squeezed below the shot noise up to an 8-mode linear cluster. This proves the generation of the cluster states. We leave further source optimization for future studies and experiments. § CONCLUSION In this work we report on a deterministic source of multimode quantum states of light generated via type 0 SPDC in a nonlinear ppKTP rectangular waveguide. We measured up to 21 squeezed optical spectral modes in the HG basis and over 2 dB of squeezing for four spectral flat modes. We measured and diagonalized the covariance matrix in the frexel basis and demonstrated the realization of different cluster state topologies, characterizing them with the squeezing in their nullifier operators. The quality of the resource can be enhanced by improving the signal-LO mode-matching and the optical configuration of the pulse shaper, both within reach of current technology. The demonstrated source can be used to implement entanglement-based quantum communication protocols and to build scalable resources for quantum computing at telecom wavelengths. This work was supported by the European Research Council under the Consolidator Grant COQCOoN (Grant No. 820079). § NUMERICAL SIMULATION OF THE MULTIMODE STATE IN OUR EXPERIMENTAL CONFIGURATION Numerical simulations were performed in order to predict the properties of the independent squeezed modes at the output of the waveguide. Given the experimental values measured from the second harmonic field (hence the pump to the ppKTP waveguide), and the chosen waveguide dimensions (3 by 3 μm and 15 mm in length), the results are shown in Fig. <ref>. A number of about 34 modes is expected with the Schmidt distribution, {λ_k}, shown in the figure. The first three frequency modes, similar to Hermite-Gauss modes, are also shown. For more details about the numerical simulation of the nonlinear waveguides and how the experiment was designed, see <cit.>. § WAVEGUIDE CHARACTERIZATION The characterization of our waveguides was performed by coupling them to the C-band wavelength (1560 nm). A measurement of the spectrum of the second harmonic field produced by the C-band input gives an estimate of the homogeneity of the waveguide along the propagation direction (a sinc-like function is expected, as it is the Fourier-transform response of the square nonlinear profile of a homogeneous structure). The amount of second harmonic produced from the telecom input gives an estimate of the nonlinear coefficient of the waveguide, which is fitted from the data in the next section. Finally, a measurement of the spatial profile of the output telecom light can be compared with a numerical simulation (in our case with a finite element method, <cit.>) to check the single-spatial-mode character of the waveguides, and the deviation from the expected fundamental mode propagating in the actual structure.
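As a reminder of why a sinc-like second-harmonic spectrum signals a homogeneous waveguide, the short sketch below Fourier-transforms a square nonlinearity profile of length L and compares the result with the ideal sinc² response; the phase-mismatch range is an assumed value and the 15 mm length is taken from the waveguide dimensions quoted above.

```python
import numpy as np

L = 15e-3                                        # waveguide length (m)
z = np.linspace(-L, L, 4096)
dz = z[1] - z[0]
profile = np.where(np.abs(z) <= L / 2, 1.0, 0.0)  # square nonlinear profile

# Phase-matching spectrum = |integral of profile * exp(i*dk*z) dz|^2.
dk = np.linspace(-3e3, 3e3, 601)                 # phase mismatch (1/m), assumed range
amplitude = (profile[None, :] * np.exp(1j * np.outer(dk, z))).sum(axis=1) * dz
spectrum = np.abs(amplitude) ** 2
spectrum /= spectrum.max()

ideal = np.sinc(dk * L / (2 * np.pi)) ** 2        # np.sinc(x) = sin(pi x)/(pi x)
print("max deviation from the ideal sinc^2 response:",
      float(np.max(np.abs(spectrum - ideal))))
```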
§ HOMODYNE DETECTION The homodyne measurement was performed with a home-made detector, including the two photodetectors and the transimpedance circuit outputting the homodyne electrical signal. The total homodyne efficiency, η_h, can be decomposed into the following terms: η_h = η_PD·η_el·η_opt·η_mod, where η_PD is the photodetector's efficiency (around 85% in our case), η_opt is related to the optical losses in the homodyne circuit (near unity in our case), η_mod is the mode-matching efficiency, i.e., how close the LO and the signal are in terms of polarization, spatial and temporal profile when interfering, and η_el is the electronic efficiency. The electronic efficiency can be written as η_el=1-1/SNR <cit.>, with SNR the signal-to-noise ratio, i.e., the ratio between the shot noise at a certain input intensity and the value in the absence of any input signal (electronic noise). We measured the best signal-to-noise ratio (also called clearance, if one measures it in dB) at a demodulation frequency of 2 MHz, where the clearance was about 20 dB at 2 mW of input power. The mode-matching efficiency was the limiting factor for the amount of available squeezing in our experiment and is discussed in the main text. § PHASE SENSITIVE AMPLIFICATION RESULTS Before measuring the multimode squeezing curves shown in the main text, we built a degenerate Optical Parametric Amplifier, or OPA, by seeding the waveguide with a relatively intense field at telecom wavelengths (taken directly from the ultrafast laser) and pumping it with a field from our second-harmonic ppLN crystal at 780 nm. The phenomenon of parametric amplification can be observed as a modulation of the seed amplitude at the output of the waveguide, depending on the relative phase between the seed and the pump fields. This measurement allows us to show the existence of parametric gain in our waveguides, which is a precondition of squeezing generation, even though the levels of multimode squeezing cannot be predicted in this way. The extrema of the parametric gain, G_±, can be approximated, for a single-mode OPA, as <cit.>: G_±∼exp(±2√(η_PSA P)), where P is the pump power and η_PSA is the parametric efficiency. Ideally, the deamplification G_- should be symmetric with respect to the amplification G_+, although a disparity between the two has been reported when using pulsed lasers, and attributed to a distortion of the spatial or temporal profile inside the nonlinear material. The disparity appears at sufficiently high power density in the material. Fig. <ref> shows the parametric gain measured as a function of the pump power. The data fit Eq. (<ref>) well and give two values for the parametric efficiency, due to their asymmetry. However, a measurement of the second harmonic efficiency in the same nonlinear waveguide gives an efficiency of η_SHG=0.33 W^-1, which is in good agreement with the extracted parametric efficiency for the deamplification. The conclusion of the phase sensitive experiment is that, at pump powers of a few mW, we can expect a few dB of total squeezing in our multimode states, consistent with the measured values shown in the main text. § INDEPENDENT HERMITE-GAUSS MODES As discussed in the main text, due to optical clipping, the 21 clipped Hermite-Gauss modes implemented in the experiment were not exactly orthogonal, and therefore the actual number of measured supermodes is expected to be lower than 21. This number depends on the dimensionality of the vector space spanned by the non-orthogonal modes.
In order to account for that, we evaluate the rank of the matrix composed of these modes by performing a singular value decomposition on the set of 21 vectors. The non-zero singular values count the number of linearly independent modes and hence give the dimension of the space we are looking for. The singular values resulting from the decomposition are summarized in Table <ref>. Since this is a numerical computation, we need a somewhat arbitrary criterion to give a whole number indicating the dimension of our vector space from this result. In our case, we obtain the dimension of the vector space by counting the singular values that are at least 10% of the largest one. This criterion gives us 18 linearly independent modes. This value being close to 21 indicates that the optical clipping did not have a large impact on the dimensionality of the modes measured in the experiment. § FLAT MODE BASIS The spectral flat mode basis discussed in the main text is constructed to be orthogonal, taking advantage of even and odd symmetries for the successive mode functions. It was specifically designed to roughly resemble the first 4 Hermite-Gauss modes: the spectral width of each flat mode was obtained by minimizing the l^2-norm distance to its Hermite-Gauss counterpart. Fig. <ref> shows the 4 flat modes constructed and implemented in the experiment, together with the first 4 Hermite-Gauss modes from the supermode basis. § EIGENMODES FROM THE DIAGONALIZATION OF THE COVARIANCE MATRIX The numerical diagonalization of the covariance matrix performed in the main text gives back the eigenmode basis in which there are no quantum correlations, i.e., the supermode basis discussed in the text. For additional information concerning the reconstruction of the covariance matrix see <cit.>. We therefore expect the numerical eigenmodes to resemble discretized versions of the quasi-Hermite-Gauss supermodes found in our numerical simulations from theory. Fig. <ref> shows the numerical eigenmodes obtained after the diagonalization. In general, the supermode shapes are in very good agreement with the expected theoretical Hermite-Gauss modes. The bandwidth of every Hermite-Gauss mode is systematically larger than the theoretical value, which, together with the optical clipping in the pulse shaper and the finite LO bandwidth, could be causing the degradation of the measured squeezing values described in the main text. On a technical note, the 55 nm bandwidth of our LO was slightly smaller than the bandwidth of our expected quantum signal. This limits the experimentally accessible modes if wavelengths outside the LO bandwidth are involved in the spectral features of the supermode. This effect is noticeable for high-order modes, which are the most broadband. In Fig. <ref>, the red area highlights the wavelength range that was not accessible with our LO bandwidth. Please also note that the eigenvalues of the covariance matrix are directly related to the expected squeezing levels of each eigenmode. Fig. <ref> shows the eigenvalues obtained in the diagonalization, which are consistent with the values directly measured in the supermode basis in the main text. § PERES–HORODECKI (PPT) CRITERION As explained in the main text, the Positive Partial Transposition (PPT) criterion can be applied to determine whether quantum correlations are present in our covariance matrix data.
The PPT criterion is based on the fact that, given a density matrix defining a quantum state, the partial transpose of the density matrix with respect to a bipartition has to be positive semidefinite for that bipartition to be separable. The criterion applied to continuous variable systems can be found, for example, in <cit.>. This theorem applies to the covariance matrix since it completely defines any Gaussian state. In our case, given that we have an 8-mode covariance matrix in our frexel basis, we have a total of 127 possible bipartitions. For each bipartition, we define the minimum eigenvalue of the partially transposed matrix as the PPT value; a negative value indicates quantum correlations. Fig. <ref> shows the PPT value for the different bipartitions of our covariance matrix, showing the violation of the PPT criterion, i.e., negativity of the partial transpose, in all of them. More details about the PPT criterion applied to the covariance matrix can be found in <cit.>.
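A minimal numerical version of this test is sketched below: for an 8-mode covariance matrix ordered as (q_1,…,q_8,p_1,…,p_8) in shot-noise units, partial transposition flips the momenta of one side of the bipartition, and the PPT value is taken as the smallest eigenvalue of the transposed matrix plus i times the symplectic form. The vacuum matrix used here is only a placeholder for the measured data and, as expected, produces no negative values.

```python
import numpy as np

def ppt_values(cov, n_modes):
    """Minimum eigenvalue of PT(cov) + i*Omega for every bipartition of an
    n_modes-mode Gaussian state; cov is ordered (q_1..q_N, p_1..p_N) in
    shot-noise units.  A negative value signals entanglement."""
    zero, eye = np.zeros((n_modes, n_modes)), np.eye(n_modes)
    omega = np.block([[zero, eye], [-eye, zero]])
    values = {}
    # Every bipartition corresponds to a proper subset containing mode 0,
    # giving 2**(n_modes-1) - 1 = 127 bipartitions for 8 modes.
    for mask in range(2 ** (n_modes - 1) - 1):
        subset = (0,) + tuple(j + 1 for j in range(n_modes - 1) if mask >> j & 1)
        flip = np.ones(2 * n_modes)
        flip[[n_modes + j for j in subset]] = -1.0   # transposition: p_j -> -p_j
        cov_pt = np.diag(flip) @ cov @ np.diag(flip)
        values[subset] = np.linalg.eigvalsh(cov_pt + 1j * omega).min()
    return values

vals = ppt_values(np.eye(16), 8)                     # vacuum placeholder data
print(len(vals), "bipartitions; most negative PPT value:",
      round(float(min(vals.values())), 6))
```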
http://arxiv.org/abs/2306.05979v2
20230609154635
Optimal distance query reconstruction for graphs without long induced cycles
[ "Paul Bastide", "Carla Groenland" ]
cs.DS
[ "cs.DS", "cs.DM", "math.CO", "G.2.2" ]
Optimal distance query reconstruction for graphs without long induced cycles
Paul Bastide, Carla Groenland
=======================================================================
Let G=(V,E) be an n-vertex connected graph of maximum degree Δ. Given access to V and an oracle that given two vertices u,v∈ V, returns the shortest path distance between u and v, how many queries are needed to reconstruct E? We give a simple deterministic algorithm to reconstruct trees using Δ nlog_Δ n+(Δ+2)n distance queries and show that even randomised algorithms need to use at least 1/100Δ nlog_Δ n queries in expectation. The best previous lower bound was an information-theoretic lower bound of Ω(nlog n/loglog n). Our lower bound also extends to related query models including distance queries for phylogenetic trees, membership queries for learning partitions and path queries in directed trees. We extend our deterministic algorithm to reconstruct graphs without induced cycles of length at least k using O_Δ,k(nlog n) queries, which includes various graph classes of interest such as chordal graphs, permutation graphs and AT-free graphs. Since the previously best known randomised algorithm for chordal graphs uses O_Δ(nlog^2 n) queries in expectation, we both get rid of the randomness and get the optimal dependency on n for chordal graphs and various other graph classes. Finally, we build on an algorithm of Kannan, Mathieu, and Zhou [ICALP, 2015] to give a randomised algorithm for reconstructing graphs of treelength k using O_Δ,k(nlog^2n) queries in expectation. § INTRODUCTION How can we determine the network structure of decentralized networks (such as the Internet or sensor networks) with minimal overhead? Such reconstruction problems have been extensively studied (e.g. <cit.>). The vertices of the network are distinct networks (autonomous systems) and the edges represent peering relations (voluntary interconnection).
Standard route-tracing tools are used to record the route through the internet from one network to another. Due to privacy and security concerns, the full path information may not always be available and rather only delay information may be given. A ping-pong protocol is one of the most basic tools at one's disposal in a peer-to-peer or internet network. It is a two-node protocol where one node sends a dummy message to the second one. Once the message is received, the second node immediately responds with a dummy message to the first node. The goal of this process is to compute the time between the departure of the first message and the arrival of the second one. From this, the first node can deduce an estimate of the distance between itself and the second node in the network. In this paper, we are interested in the following question: how fast can you reconstruct a hidden network using only a ping-pong protocol? The distance query model In order to model this kind of protocol, the distance query model has been introduced <cit.>. In this model, only the vertex set V of a hidden graph G=(V,E) is known and the aim is to reconstruct the edge set E via distance queries to an oracle. For a pair of vertices (u,v) ∈ V^2, the oracle answers the shortest path distance between u and v in G and the algorithm can select the next query based on the responses of earlier queries. If there is a unique graph consistent with the query responses, the graph has been reconstructed. For a graph class 𝒢 of connected graphs, we say an algorithm reconstructs the graphs in the class if for every graph G∈𝒢 the distance profile obtained from the queries is unique to G within 𝒢. The query complexity is the maximum number of queries that the algorithm takes on an input graph from 𝒢. For a randomised algorithm, the query complexity is given by the expected number of queries (with respect to the randomness in the algorithm). Of course, by asking the oracle the distance between every pair (u,v) of vertices in G, we can completely reconstruct the edge set as E = {{u,v}| d(u,v) = 1 }. This implies a trivial upper bound of O(|V|^2) on the query complexity. Unfortunately, this upper bound is tight in general. For example, the clique K_n is indistinguishable from K_n minus an edge e={u,v}: every query answer is the same for both graphs except for the pair (u,v). Thus, any algorithm would need Ω(|V|^2) queries to reconstruct these graphs. This trivial upper bound happens to be tight even on sparse graphs such as trees (see <ref>). The core of the problem lies in fact in high-degree vertices and therefore we will restrict our attention to connected n-vertex graphs of maximum degree Δ, as has also been done in earlier work. Kannan, Mathieu, and Zhou <cit.> designed a randomised algorithm with query complexity Õ_Δ(n^3/2), where the subscript denotes that the constant may depend on Δ and Õ(f(n)) is a short-cut for O(f(n) polylog(n)). In the same article, they give randomised algorithms for chordal graphs and outerplanar graphs with a quasi-linear query complexity O_Δ(nlog^3 n). Rong, Li, Yang and Wang <cit.> improved the randomised query complexity for chordal graphs to O_Δ(nlog^2 n). Their algorithm only requires a weaker type of oracle and applies to graphs without induced cycles of length at least 5. The best known lower bound (for bounded degree graphs) is from <cit.>: by an information-theoretic argument, Ω(n log n/loglog n) queries are needed to reconstruct n-vertex trees of maximum degree 3.
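For concreteness, here is a minimal sketch of the query model: a hidden graph exposed only through a distance oracle, together with the trivial O(|V|^2) reconstruction that tests every pair for being at distance 1. The small example graph and the BFS-based oracle are of course stand-ins for the hidden network.

```python
from collections import deque
from itertools import combinations

class DistanceOracle:
    """Hidden graph (adjacency dict) exposed only through distance queries."""
    def __init__(self, adj):
        self.adj, self.queries = adj, 0

    def dist(self, u, v):
        self.queries += 1
        d, frontier = {u: 0}, deque([u])
        while frontier:                       # plain BFS on the hidden graph
            x = frontier.popleft()
            if x == v:
                return d[x]
            for y in self.adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    frontier.append(y)
        raise ValueError("vertices lie in different components")

def naive_reconstruction(vertices, oracle):
    """The trivial upper bound: an edge is exactly a pair at distance 1."""
    return {frozenset(p) for p in combinations(vertices, 2)
            if oracle.dist(*p) == 1}

hidden = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}   # a small tree
oracle = DistanceOracle(hidden)
edges = naive_reconstruction(list(hidden), oracle)
print(sorted(tuple(sorted(e)) for e in edges), "using", oracle.queries, "queries")
```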
Reconstruction of phylogenetic trees A well-studied variation on our distance reconstruction model comes from biology. Reconstructing a phylogenetic tree has been modelled via “distance” queries (similarity of DNA) between leaves of the input tree <cit.>. There are a few important differences compared to our setting: the set of leaves is already known and the distance queries are only possible between leaves. Moreover, we consider a phylogenetic tree to be reconstructed once we know all the pairwise distances between the leaves, and so there is no direct relation to the distance query model. Improving on various previous works <cit.>, King, Zhang and Zhou <cit.> obtained the following deterministic lower bound. Any deterministic algorithm reconstructing a phylogenetic tree of maximum degree Δ with N leaves needs at least Ω(Δ N log_Δ N) distance queries between leaves. Upper bounds have also been investigated <cit.> and there are deterministic algorithms using at most Δ N log_Δ N+O(Δ N) queries. A natural question is whether a randomised algorithm can do even better, but to the best of our knowledge only weaker information-theoretic lower bounds were known for randomised algorithms in this setting. Our contribution In this paper, we provide the first tight bounds on the number of distance queries required to reconstruct several natural graph classes. * We prove that any randomised algorithm requires 1/100Δ nlog_Δ n queries to reconstruct n-vertex trees of maximum degree Δ from distance queries. This lower bound applies to any graph class containing trees. We extend our technique to improve the state-of-the-art of various other query models, most notably obtaining a randomised variant of <ref> for phylogenetic trees. * We provide a new algorithm with a query complexity of O_k,Δ(nlog n) for the class of k-chordal graphs, which are graphs that do not contain an induced cycle of length at least k+1. This is optimal, up to the constant factor, for various well-studied graph classes including trees, chordal graphs, permutation graphs and AT-free graphs. We obtain an optimal dependency on Δ for trees and a near-optimal dependency for chordal graphs. Besides improving on the query complexity, our algorithm also applies to more graphs and achieves the first quasi-linear query complexity for k-chordal graphs (for k≥ 5). Thereby, we take a step towards efficient reconstruction of Internet-like networks, which tend to avoid very long induced cycles. Low chordality has already been used to design efficient algorithms in such networks (e.g. routing schemes <cit.>). Our result is still restricted to bounded degree graphs, which is a common assumption in this setting. * Although earlier (randomised) algorithms also exploit separators, our approach is simple and deterministic. In particular, we determine tight bounds for both deterministic and randomised algorithms in our setting. * We also provide a structural lemma that allows us to extend the techniques developed by <cit.> to an even larger graph class than bounded chordality, resulting in a randomised algorithm with O(nlog^2 n) query complexity for bounded treelength graphs. We provide the formal statement of our results with various key ideas below. Improved randomised lower bounds We provide the following randomised lower bound. theoremlowerboundnlogn Let Δ≥ 2 and n=2c Δ^k be integers, where c∈ [1,Δ) and k≥ 50(cln c + 3) is an integer. Any randomised algorithm requires at least 1/50Δ nlog_Δ n queries to reconstruct n-vertex trees of max degree Δ+1.
Besides removing the (1/loglog n)-factor compared to the information-theoretic lower bound, we are also able to achieve the correct dependency on Δ (namely Δ/logΔ) for the term in front of nlog n. We made no attempt to optimize the constants. Note that we allow Δ to grow with n and that Δ is allowed to be a small polynomial in n if c is constant, e.g. when n=Δ^k. To prove this lower bound, we first reduce the problem to reconstructing a function f:[n]→ [Δ]^k that is promised to be `balanced' (|f^-1(b)|=c for all b∈ [Δ]^k) from two types of coordinate queries: (1) `is f(a)_i=f(a')_i?' for a,a'∈ [n] and i∈ [k], and (2) `is f(a)_i=j?' for a∈ [n], j∈ [Δ] and i∈ [k]. (We short-cut [m]={1,…,m}.) Interestingly, instead of the usual number of queries, we can link the complexity of our original problem to the number of NO answers given by the coordinate oracle. The reduction goes via another function reconstruction problem with a more involved type of queries (see Section <ref>). To establish the lower bound for function reconstruction from a coordinate oracle, we apply Yao's minimax principle to reduce to lower bounding the average complexity of the best deterministic algorithm. We use our assumption that k≥ 50(cln c+3) to remove the constraint that the input function f has to be balanced, which removes dependencies between the function values. When the function f is sampled uniformly at random, we stochastically dominate the number of NO answers to a special subset of the queries by a sum of independent Bernoulli random variables so that we can deduce concentration bounds. The lower bounds for both function reconstruction problems may be of independent interest. Using our intermediate results, we also deduce improved randomised lower bounds for various related reconstruction problems (see <ref> for formal statements). First, we deduce a similar randomised lower bound for the phylogenetic setting. theoremphylogenetic Let Δ≥ 2, c≤Δ-1 and k≥ 50(cln c+2) be positive integers. Let N=cΔ^k. Any randomised algorithm reconstructing phylogenetic trees of maximum degree Δ+1 with N leaves needs at least 1/20Δ N log_Δ N distance queries between leaves in expectation. We also deduce improved randomised lower bounds for other query models: * path queries in directed graphs <cit.>, * betweenness queries in graphs <cit.> (also called separator queries <cit.>), * membership queries for learning a partition <cit.>, * comparison queries in tree posets <cit.>. Previous work in these settings found deterministic lower bounds or used information-theoretic arguments to obtain weaker randomised lower bounds. Deterministic algorithms matching the lower bound Most known algorithms with quasi-linear query complexity in the distance oracle setting are randomised, with a recent work giving a linear deterministic algorithm for interval graphs from Rong, Li, Yang and Wang <cit.> as a notable exception. We present a new approach to exploit separators in a deterministic fashion, which turns out to use fewer queries than the earlier approaches, resulting in a query complexity that matches our randomised lower bounds. theoremkchordal There exists a deterministic algorithm to reconstruct k-chordal graphs of maximum degree at most Δ on n vertices using O_Δ,k(n log n) queries. No algorithm achieving quasi-linear query complexity for k-chordal graphs was previously known, even for k=5 and even when allowing randomisation.
Since permutation graphs and AT-free graphs are known to be 5-chordal and 6-chordal respectively (see <cit.>), the result implies a deterministic algorithm with query complexity O_Δ(nlog n) for those graph classes. For trees and chordal graphs, we use the approach of Theorem <ref> to give algorithms with an improved dependency on the maximum degree Δ. We give the algorithm for trees first because this has a very simple analysis from the structural graph theory side and with additional analysis we show it is even tight in terms of the dependency on Δ. theoremtreerec There exists a deterministic algorithm to reconstruct trees of maximum degree Δ on n vertices using Δ nlog_Δ n +(Δ +2)n queries. In the appendix, we adjust the algorithm to chordal graphs to obtain a dependency on Δ which is optimal up to a (logΔ)-factor. Our algorithm is surprisingly simple: we compute a BFS tree starting from a vertex v_0 and then inductively reconstruct the tree up to layer i. For each vertex in layer i+1, we use balanced separators to `binary search' its parent in layer i. Parameterized approach: treelength The parameter treelength introduced in <cit.> roughly restrains vertices in the same bag of a tree decomposition to be “close” in the graph. (We give a formal definition in <ref>.) Building on methods used by Kannan, Mathieu, and Zhou <cit.> to reconstruct chordal graphs, we prove the following result. theoremrandtl There is a randomised algorithm that reconstructs an n-vertex graph of maximum degree at most Δ and treelength at most k using O_Δ,k(n log^2 n) distance queries in expectation. It was proved in <cit.> that k-chordal graphs have treelength at most k. In particular, Theorem <ref> covers a wider class of graphs than Theorem <ref>. Graphs of bounded treelength avoid long geodesic cycles (i.e. cycles C for which d_C(x,y)=d_G(x,y) for all x,y∈ C) and bounded treelength is equivalent to avoiding long `loaded geodesic cycles' or being `boundedly quasi-isometric to a tree' (see <cit.> for formal statements). Treelength has been extensively studied from an algorithmic standpoint, particularly for problems related to shortest path distances. For example, there exist efficient routing schemes for graphs with bounded treelength <cit.> and an FPT algorithm for computing the metric dimension of a graph parameterised by its treelength <cit.>. Low treelength can also be exploited when approximating the Travelling Salesman Problem <cit.>. Although deciding the treelength of a given graph is NP-complete, it can still be approximated efficiently <cit.>. We give a short description of the algorithm used in Theorem <ref>. The general approach is the same as <cit.>: we first find a vertex v that lies on many shortest paths (with high probability). For this we use a slightly different method as the one used in <cit.> getting rid of a (log n)-factor on the way. We then show that for such a vertex v, the set S=N^≤ 3k/2[v] of vertices at distance at most 3k/2 is a good separator, for k the treelength of the input graph. We compute the components of G∖ S to check that indeed we found a good separator and then recursively reconstruct the components until we reach a sufficiently small vertex set on which a brute-force approach can be applied. It is key to our recursive approach that we can add a small boundary set and still preserve all the relevant distances for a component. For this, we prove the following lemma which may be of independent interest. lemmapathstayclose Let G be a graph of treelength at most k≥ 1. 
If G[A] is connected for some vertex set A, then every shortest path in G between two vertices a,b ∈ A is contained in N^≤ 3k/2[A]. Roadmap In <ref>, we set up our notation and give the relevant definitions. In <ref>, we present our deterministic algorithm for trees as a `warm-up'. In <ref>, we give the matching randomised lower bound and in <ref> we extend our deterministic algorithm to k-chordal graphs. In <ref>, we present our randomised algorithm for graphs of bounded treelength and in <ref> we conclude with some open problems. § PRELIMINARIES In this paper, all graphs are simple, undirected and connected except when stated otherwise. All logarithms in this paper are base 2, unless mentioned otherwise, where ln=log_e. For a ≤ b two integers, let [a,b] denote the set of all integers x satisfying a ≤ x ≤ b. We short-cut [a]=[1,a]. For a graph G and two vertices a,b ∈ V(G), we denote by d_G(a,b) the length of a shortest path between a and b. For G = (V,E), A ⊆ V and i ∈, we denote by N^≤ i_G[A] = {v ∈ V |∃ a ∈ A, d_G(v,a) ≤ i}. We may omit the superscript when i=1. We write N_G(A)=N_G[A]∖ A and use the shortcuts N_G[u],N_G(u) for N_G[{u}],N_G({u}) when u is a single vertex. We may omit the subscript when the graph is clear from the context. Distance queries We denote by _G(u,v) the call to an oracle that answers d_G(u,v), the distance between u and v in a graph G. For A,B two sets of vertices, we denote by _G(A,B) the |A|·|B| calls to an oracle, answering the list of distances d_G(a,b) for all a ∈ A and all b ∈ B. We may abuse notation and write _G(u,A) for _G({u},A) and may omit G when the graph is clear from the context. For a graph class 𝒢 of connected graphs, we say an algorithm reconstructs the graphs in the class if for every graph G∈𝒢 the distance profile obtained from the queries is not compatible with any other graph from 𝒢. The query complexity is the maximum number of queries that the algorithm takes on an input graph from 𝒢, where the queries are adaptive. For a randomised algorithm, the query complexity is given by the expected number of queries (with respect to the randomness in the algorithm). Tree decomposition and treelength A tree decomposition of a graph G is a tuple (T,(B_t)_t ∈ V(T)) where T is a tree and B_t is a subset of V(G) for every t ∈ V(T), for which the following conditions hold. * For every v ∈ V(G), the set of t ∈ V(T) such that v ∈ B_t, is non-empty and induces a subtree of T. * For every uv ∈ E(G), there exists a t ∈ V(T) such that {u,v}⊆ B_t. This notion was introduced by <cit.>. The treelength of a graph G (denoted (G)) is the minimal integer k for which there exists a tree decomposition (T,(B_t)_t∈ V(T)) of G such that d(u,v) ≤ k for every pair of vertices u,v that share a bag (i.e. u,v∈ B_t for some t∈ V(T)). We refer the reader to <cit.> for a detailed overview of the class of bounded treelength graphs. Balanced separators For β∈ (0,1), a β-balanced separator of a graph G = (V,E) for a vertex set A⊆ V is a set S of vertices such that the connected components of G[A ∖ S] are of size at most β |A|. One nice property of tree decompositions is that they yield 1/2-balanced separators. Let G be a graph, A⊆ V(G) and (T,(B_t)_t ∈ V(T)) a tree decomposition of G. Then there exists t ∈ V(T) such that B_t is a 1/2-balanced separator of A in G. § WARM-UP: OUR ALGORITHM FOR TREES This section presents our deterministic algorithm for the class of trees, as it encapsulates most of the algorithmic ideas, while being fairly simple. 
In <ref> we generalise the techniques used here to a much larger class of graphs. We restate the theorem below. * Let T be a tree on n vertices, and let Δ be the maximum degree of T. Our algorithm starts as follows. We pick an arbitrary vertex v_0∈ V(T) and will consider (for the analysis) the input tree T as rooted in v_0. We call (v_0,V(T)). We define the ith layer of T as L_i = {v ∈ V(T) | d(v_0,v) = i }. We proceed to reconstruct the graph induced by the first i layers by induction on i. Note that T[L_0] = ({v_0},∅) is immediately reconstructed. We fix an integer i ≥ 1 and assume that the first i-1 layers are fully reconstructed (i.e we discovered all the edges and non-edges of T[L_0 ∪⋯∪ L_i-1]). Let T' = T[L_0 ∪⋯∪ L_i-1] be the already reconstructed subtree. We show how to reconstruct the edges between the (i-1)th layer and the ith layer. Note that this suffices to reconstructs all the edges (since in a tree, edges can only be between consecutive layers). Choose an arbitrary vertex v ∈ L_i. We first show that we can find the parent of v in L_i-1 using O(Δlog n) queries and then describe how to shave off an additional (logΔ)-factor. The procedure goes as follows. As T' is a tree, it admits a 1/2-balanced separator of size 1. Let s_1 be a vertex for which {s_1} forms such a separator. We ask first (v,N[s_1]), where the neighbourhood is taken in T'. As T is a tree, there is a unique path between any two vertices. So for w ∈ N(s_1), the distance d(v,w)= d(v,s_1)-1 if w lies on the shortest path from v to s_1 and d(v,s_1)+1 otherwise. From this, we can infer the neighbour x of s_1 that is the closest to v as the one for which the answer is smallest (or find that s_1 is adjacent to v and finish). Moreover, the unique path from s_1 to v lives in the connected component T_1 of T' ∖{s_1} that contains x. In particular, T_1 contains the parent of v (see <ref>). We can repeat this process and construct two sequences (T_j)_j∈ℕ and (s_j)_j∈ℕ, where T_j+1 is the connected component of T_j ∖{s_j} containing the parent of v and s_j∈ V(T_j) is chosen so that {s_j} is a 1/2-balanced separator of T_j. Once T_ℓ contains less than Δ +1 vertices for some ℓ or the vertex s_ℓ is identified as the parent of v, we finish the process[If desired, we may define T_j=T_ℓ and s_j=s_ℓ for all j≥ℓ.]. By definition of 1/2-balanced separator, ∀ j ∈ [ℓ-1], |T_j+1| ≤ |T_j|/2 and thus ℓ≤log n. If the process finished because T_ℓ has at most Δ vertices, we use at most Δ additional queries via (T_ℓ,v). We infer the parent of v from the result. For each j≤ℓ, we use at most Δ+1 queries to reconstruct T_j+1 from T_j. Hence we use O(Δlog n) queries in total. Taking a closer look at the process, at any step j, we can choose an arbitrary order on the queries (v,w) for w∈ N(s_j). Since T_j has at least Δ+1 vertices, s_j has at least two neighbours. We order the connected component of T_j ∖{s_j} by decreasing size, and ask the queries in the same order: we start with (v,w_1) for w_1 the neighbour of s_j which is in the largest component and terminate when we find two different distances (or queried all the neighbours). In particular, we never query (v,s_j). * If s_j is the parent of v, then the distances d(v,w) for w∈ N(s_j) are all the same and we terminate after at most Δ: we recognised s_j as the neighbour of v. * If s_j is not the parent of v, then there is a unique w∈ N(s_j) closest to v and d(v,w)=d(v,w')-2 for all w'∈ N(s_j). We recognise w as the neighbour of s_j with a different (and smaller) distance to v. 
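The parent-search procedure described above can be sketched as follows; the separator is found by brute force, the global query counter stands in for oracle calls, and the small hidden tree is purely illustrative.

```python
from collections import deque

hidden = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2, 6], 6: [5]}
QUERIES = 0

def dist(u, v):                      # one distance-oracle call on the hidden tree
    global QUERIES; QUERIES += 1
    d, q = {u: 0}, deque([u])
    while q:
        x = q.popleft()
        for y in hidden[x]:
            if y not in d:
                d[y] = d[x] + 1; q.append(y)
    return d[v]

def components(adj, nodes, removed):
    """Connected components of adj restricted to nodes, with `removed` deleted."""
    left, comps = set(nodes) - {removed}, []
    while left:
        comp = {left.pop()}
        q = deque(comp)
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y in left:
                    left.remove(y); comp.add(y); q.append(y)
        comps.append(comp)
    return comps

def find_parent(v, t_adj, candidates):
    """Locate the neighbour of v inside the already reconstructed subtree by
    repeatedly shrinking the candidate set at a balanced separator vertex."""
    candidates = set(candidates)
    while len(candidates) > 1:
        # naive separator choice: the vertex minimising the largest component
        s = min(candidates,
                key=lambda x: max(map(len, components(t_adj, candidates, x))))
        d_vs, on_path = dist(v, s), None
        for w in t_adj[s]:
            if w in candidates and dist(v, w) == d_vs - 1:
                on_path = w                  # w lies on the shortest v-s path
                break
        if on_path is None:
            return s                         # s itself is the parent of v
        candidates = next(c for c in components(t_adj, candidates, s)
                          if on_path in c)
    return candidates.pop()

# Layers 0 and 1 (vertices 0, 1, 2) already reconstructed; find the parent of 5.
t_adj = {0: [1, 2], 1: [0], 2: [0]}
print("parent of 5:", find_parent(5, t_adj, [0, 1, 2]), "| queries used:", QUERIES)
```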
If we query 2 neighbours of s_j before detecting the component containing the parent of v, our next subtree T_j+1 satisfies |T_j+1| ≤ |T_j|/2 since s_j is a balanced separator. If we query m≥ 3 neighbours of s_j before detecting the component containing the parent of v, our next subtree T_j+1 satisfies |T_j+1| ≤ |T_j|/m since there are m-1 components of T_j∖{s_j} that are at least as large. Either way, we decrease the size of the tree by a factor at least x if we perform x queries, where x∈{2,…,Δ}. Let S be an s-vertex subtree of T' containing the parent of v. We show the procedure uses at most f(s) = Δlog_Δ s+Δ+1 queries. The claim is true when s≤Δ+1. By induction on s, it suffices to show f satisfies f(s)≥ f(s/x)+x for all x∈{2,…,Δ}. Since f(s)-f(s/x)=Δlog_Δ x, we need to show that Δlog_Δ x≥ x for all x∈{2,…,Δ}. By analysing the derivative of Δlog_Δ x -x in the interval x ∈ [2,Δ], we find that the minimum is achieved at x=Δ so indeed the inequality holds on the interval [2,Δ]. We conclude that we can reconstruct the edge from L_j-1 to v in Δlog_Δ n+Δ+1 queries. Repeating the same strategy to reconstruct the parent of every vertex, we obtain the edge set of T in at most (n-1)+(n-1)(Δlog_Δ n +(Δ +1))≤Δ nlog_Δ n +(Δ +2)n queries. We remark that even though we show in the next section that we cannot achieve a better dependency in (n,Δ) using randomisation, we can improve the average query complexity by almost a factor of 2. When we are in step j of learning the parent of some v∈ L_i (i.e. wish to learn the vertex in N[s_j] closest to v) then we perform (v,w) for w∈ N[s_j] in an order that is chosen at random. Suppose that |T_j|=s. We show in Appendix <ref> that we can find such an order, such that the expected position of each component of size a is at most 1/2 (s/a)+1. This means that for a size reduction of x=s/a, we perform approximately 1/2x queries in expectation, compared to x in our deterministic algorithm. Using linearity of expectation, a similar calculation as was done in the proof above gives an expected query complexity of 1/2Δ nlog_Δ n+o(Δ nlog_Δ n). § LOWER BOUNDS FOR RANDOMISED TREE RECONSTRUCTION In this section we show that the algorithm presented in the previous section is optimal in terms of the dependency on n and Δ, even when randomization is allowed. In the next section, we will match the lower bound for a much larger class of graphs, namely k-chordal graphs. * Note that, for constant c, Δ could even be a small polynomial in n. Any `algorithm' is allowed to be randomised unless specified to be deterministic. §.§ Reconstructing functions from the coordinate oracle In order to prove the lower bound we reduce to a natural function reconstruction problem that could be of interest in itself. Let Δ≥ 3, k≥ 1 and n=cΔ^k be integers, where c∈ [1,Δ). Let A = [n] and B = [Δ]^k. Suppose that f: A→ B is an unknown function that we want to reconstruct. For b∈ B and 1≤ i ≤ k, we write b_i for the value of the ith coordinate of b. The coordinate oracle can answer the following two types of queries: * Type 1. _1^c(a,b,i) For a ∈ A, b ∈ [Δ] and i∈ [k] answers YES if f(a)_i = b and NO otherwise. * Type 2. _2^c(a,a',i) For a,a' ∈ A answers YES if f(a)_i = f(a')_i and NO otherwise. In the case of the coordinate oracle, we will count the number of queries for which the answer is NO instead of the number of queries. We say that f : A → B is a balanced function if for every b∈ B, |f^-1(b)| = c for some integer c≥ 1. 
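The coordinate oracle and its NO-answer accounting can be made concrete with the toy implementation below; the alphabet is indexed from 0 for convenience, and the small parameters and the reconstruction loop at the end are only an illustration.

```python
import random
from itertools import product

class CoordinateOracle:
    """Hidden f: [n] -> [Delta]^k; only the NO answers are charged."""
    def __init__(self, f):
        self.f, self.no_answers = f, 0

    def _record(self, answer):
        if not answer:
            self.no_answers += 1
        return answer

    def type1(self, a, b, i):                 # is f(a)_i == b ?
        return self._record(self.f[a][i] == b)

    def type2(self, a, a2, i):                # is f(a)_i == f(a2)_i ?
        return self._record(self.f[a][i] == self.f[a2][i])

def random_balanced_function(c, delta, k, seed=0):
    """Every value of [delta]^k is taken by exactly c elements of the domain."""
    values = [v for v in product(range(delta), repeat=k) for _ in range(c)]
    random.Random(seed).shuffle(values)
    return dict(enumerate(values))

c, delta, k = 2, 3, 2                         # n = c * delta**k = 18
f = random_balanced_function(c, delta, k)
oracle = CoordinateOracle(f)

# Recover f(0) coordinate by coordinate with Type 1 queries.
recovered = []
for i in range(k):
    for b in range(delta):
        if oracle.type1(0, b, i):
            recovered.append(b)
            break
print("f(0) =", tuple(recovered), "| NO answers so far:", oracle.no_answers)
```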
Our main result on function reconstruction from a coordinate oracle is the following. Let Δ≥ 3, c≤Δ-1 and k ≥ 50(cln c +2) be positive integers and let n=cΔ^k. Any algorithm reconstructing f:[n] → [Δ]^k using the coordinate oracle, in the special case where f is known to be a balanced function, has at least 1/11Δ n k queries answered NO in expectation. In order to prove <ref>, we first study the query complexity in the general case, when no restriction is put on f. Using Yao's minimax principle <cit.>, studying the expected complexity of a randomised algorithm can be reduced to studying the query complexity of a deterministic algorithm on a randomised input. For any distribution D on the inputs, for any randomised algorithm M, the expected query complexity of M is at least the average query complexity of the best deterministic algorithm for input distribution D. We will apply Yao's principle for D the uniform distribution and the query complexity measuring the number of queries answered NO. We combine this with the following lemma. For any deterministic algorithm R using the coordinate oracle and f: [n] → [Δ]^k sampled u.a.r, the probability that R reconstructs f in at most 1/10Δ n k queries answered NO is at most e^-1/50 n k. We first deduce our main theorem on function reconstruction from the two lemmas above. Let M be a deterministic algorithm that reconstructs balanced functions using the coordinate oracle. We first extend M to an algorithm M that reconstructs all functions (among all functions) while the number of NO answers remains the same if the input is balanced. The algorithm M first performs the same queries as M does, until it either has no balanced candidates or a single balanced candidate f compatible with the answers so far. In the former case, it reconstructs the function by brute-force. In the second case, it performs _1^c(a,f(a)_i,i) for all a∈ A and i∈ [k] to verify that indeed the input is f. If the input is indeed f, we have now distinguished f among all functions (rather than all balanced functions) without additional NO answers. If any of the queries answers NO, we again have no balanced candidates left and may perform the brute-force approach again. We will show that, when restricted to balanced functions, M has an average query complexity (in terms of the number of NO answers) greater than 1/11Δ n k. Since M has the same number of NO answers as M on balanced inputs, it has the same average query complexity as M. Using Yao's principle (<ref>), it then follows that any randomised algorithm that reconstructs balanced functions has at least 1/11nk queries answered NO in expectation. By <ref>, there are at most |B|^n e^-1/50n k functions f:A→ B for which M reconstructs f in less than 1/10Δ n k queries. On the other hand the number of balanced function from A to B is the following multinomial coefficient n c,…,c = n!/(c!)^n. In particular, there are at least n c,…,c - (n/c)^n e^-1/50nk balanced function for which M requires at least 1/10Δ nk queries. This means that the average query complexity of M is at least n c,…,c -(n/c)^n e^-1/50nk/n c,…,c1/10Δ n k = n! - (c!)^n (n/c)^n e^-1/50nk/n!1/10Δ n k ≥1/11Δ n k since, (c!)^n (n/c)^ne^-1/50nk = (n/e)^n ( ec!/c e^-1/50k)^n ≤ n^n e^-51/50n≤1/100n! using for the first inequality that k ≥ 50(c ln c + 2) and for the second that n≥ 2^51. Let R be a deterministic algorithm that uses the coordinate oracle to reconstruct functions. 
Let F_t denote the set of possible functions f:A→ B that are consistent with the first t queries done by R. (This depends on the input function g:A→ B, but we leave this implicit.) For a∈ A and i∈ [k], let J_a,i^t = {j∈{1,…,Δ}| f(a)_i=j for some f∈ F_t}. Note that all values j_1,j_2∈ J_a,i^t are equally likely in the sense that there is an equal number of f∈ F_t with f(a)_i=j_1 as with f(a)_i=j_2. The algorithm R will perform the same t queries for all f∈ F_t. In particular, if g:A→ B was chosen uniformly at random, then after the first t queries all f∈ F_t are equally likely (as input function) and in particular g(a)_i is uniformly distributed over J_a,i^t, independently of the sets J_a',i'^t for (a',i')≠ (a,i). This is the part for which we crucially depend on the fact that we allow all functions f:A→ B and not just bijections (where there may be dependencies between the probability distributions of g(a) and g(a') for distinct a,a'∈ A). We say that the t^th query of the algorithm is special if * it is a Type 1 query _1^c(a,b,i) and |J_a,i^t| ≥Δ/2, or * it is a Type 2 query _2^c(a,a',i) and either |J_a,i^t| or |J_a',i^t| is at least Δ/2. Let T denote the number of NO answers to special queries that R does to the coordinate oracle until it has reconstructed the input function. We let Y_i=1 if the answer of the i^th special query is YES and 0 otherwise. So ∑_i=1^T Y_i denotes the number of special queries with answer YES. At the start of the algorithm J_a,i^0 = [Δ] for all a∈ A and i ∈ [k]. Thus, to reconstruct the function, the pair (a,i) is either (1) involved in a special query with answer is YES or (2) involved in Δ/2 special queries for which the answer is NO. Since any query involves at most two elements of A, we deduce that |A|k/2 = nk/2 ≤(T - ∑_i=1^T Y_i ) 2/Δ + ∑_i=1^T Y_i. We aim to prove that if g:A→ B is sampled uniformly at random, then with high probability T=T(g) ≥1/10Δ nk. In order to do so, we consider a simplified process and a random variable τ which is stochastically dominated by T (i.e. for any x ∈ℝ^+, (T ≤ x) ≤(τ≤ x)). Let us consider an infinite sequence of i.i.d. random variables X_1,X_2,X_3,…∼Bernouilli(2/Δ). Note that H(t)=( t - ∑_i=1^t X_i ) 2/Δ + ∑_i=1^t X_i = (1- 2/Δ) ∑_i=1^t X_i + 2t/Δ is increasing in t. Let τ be the first integer t for which H(t)≥1/2n log_Δ n. If g is sampled uniformly at random then the jth special query (say involving a∈ A and i∈ [k] with |J^t_a,i|≥Δ/2) has answer YES with probability (Y_i = 1) ≤2/Δ= (X_i = 1). This is because all values of J^t_a,i are equally likely for g(a)_i (and independent of the value of g(a')_i or b_i for b∈ B and a'∈ A). This inequality holds independently of the values of (Y_1,…,Y_i-1). This implies that, for any t ∈ℕ^+ and any x ∈ℝ^+, (∑_i=1^t X_i ≤ x) ≤(∑_i=1^t Y_i ≤ x). Therefore, ((1- 2/Δ) ∑_i=1^t X_i + 2t/Δ≤ x) ≤((1- 2/Δ) ∑_i=1^t Y_i + 2t/Δ≤ x). From this we can conclude that (T ≤ x) ≤(τ≤ x), thus T stochastically dominates τ. If τ≤1/10Δ nk, then using the definition of τ we find that ( 1/10Δ nk - ∑_i=1^τ X_i ) 2/Δ + ∑_i=1^τ X_i ≥1/2 nk which implies∑_i=1^τ X_i≥(1 - 2/Δ) ∑_i=1^τ X_i ≥3/10n k. Let x = 1/10Δ nk. We compute 𝔼[∑_i=1^x X_i] = 2/Δx = 1/5 nk. Using Chernoff's inequality (see e.g. <cit.>) we find (τ≤ x) ≤(∑_i=0^x X_i ≥(1+1/2) 1/5 nk) ≤exp(-(1/2)^21/5 n k /(2+1/2)). Since 1/21/21/52/5=1/50, this proves (T ≤ x) ≤(τ≤ x)≤ e^-1/50n k. In particular, the probability that at most 1/10Δ k queries are used is at most e^-1/50n k, as desired. 
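The Chernoff step above can be sanity-checked numerically: with x = (1/10)Δnk independent Bernoulli(2/Δ) trials, the probability of seeing at least (3/10)nk successes should fall well below the e^{-nk/50} bound. The parameters below are tiny illustrative values, far smaller than the theorem requires.

```python
import numpy as np

delta, n, k = 4, 20, 3                      # small illustrative parameters
x = delta * n * k // 10                     # number of special queries considered
threshold = 3 * n * k // 10                 # successes needed for tau <= x
trials = 200_000

rng = np.random.default_rng(0)
successes = rng.binomial(x, 2.0 / delta, size=trials)
empirical = np.mean(successes >= threshold)
bound = np.exp(-n * k / 50.0)

print(f"empirical P(sum X_i >= {threshold}) ~ {empirical:.4f}  <=  bound {bound:.4f}")
```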
§.§ Reconstructing functions from the word oracle Let once again A =[n] and B = [Δ]^k. We next turn our attention to reconstructing functions f:A→ B from a more complicated oracle that we use as a stepping stone to get to distance queries in trees. For b∈ B, we write b_[i,j]=(b_i,b_i+1,…,b_j). It will also be convenient to define b_∅ as the empty string. The word oracle can answer the following two types of questions. * Type 1. _1^w(a,b) for a ∈ A and b ∈ B, answers the largest i∈ [0,k] with f(a)_[1,i]= b_[1,i]. * Type 2. _2^w(a,a') for a,a' ∈ A, answers the largest i∈ [0,k] with f(a)_[1,i]= f(a')_[1,i]. By studying the number of queries for the word oracle and the number of NO answers for the component oracle, we can link the two reconstruction problems as follows. For all positive integers Δ,k and n, for any algorithm M using the word oracle that reconstructs functions f:A → B in at most q(f) queries in expectation, there exists an algorithm M' using the coordinate oracle that reconstructs functions f:[n] → [Δ]^k such that at most q(f) queries are answered NO in expectation. Given an algorithm M using the word oracle, we build a new algorithm M' using the coordinate oracle. We do so query-by-query. If M asks _1^w(a,b), then M' performs a sequence of queries _1^c(a,b_1,1),_1^c(a,b_2,2),…,_1^c(a,b_i+1,i+1), where i∈ [0,k-1] is the largest for which f(a)_[1,i] = b_[1,i]. Note that the sequence indeed simulates a query of the word oracle yet the coordinate oracle answers NO at most once (on the (i+1)th query). Queries of Type 2 can be converted analogously. This way, for every input f, the natural `coupling' of the randomness in M and M' ensures that the number of NO answers to M' is stochastically dominated by the number of queries to M. In particular, the expected number of NO answers given by M' is upper bounded by q(f), the expected number of queries to M. <ref> now give the following result. Let Δ≥ 3, c≤Δ-1 and k ≥ 50(cln c +2) be positive integers. Let n = cΔ^k. Any algorithm reconstructing f:[n] → [Δ]^k using the word oracle, in the special case where f is known to be balanced, needs at least 1/11Δ n k queries in expectation. §.§ Reducing tree reconstruction to function reconstruction In order to prove <ref> we consider a specific tree T_c,Δ,k (with c ≤Δ) the tree of depth k+1 where each node on at depth at most k-1 has exactly Δ children and the node at depth k has exactly c children (see <ref>). <ref> is an almost direct consequence of the following lemma. Let Δ≥ 2, c≤Δ-1 and k≥ 50(c ln c +2) be positive integers. Consider a labelling of the tree T_c,Δ,k with =cΔ^k leaves, where only the labels of the leaves are unknown. Any randomised algorithm requires at least 1/11Δ k queries in expectation to reconstruct the labelling of the leaves. We consider T = T_c,Δ,k and let L be the set of leaves of T and let P be the set parents of the leaves. The tree T has n=∑_i=0^kΔ^i + cΔ^k nodes and =cΔ^k leaves. Let p:L→ P be the bijection that sends each leaf to its direct parent. We label internal nodes as follows. The root is labelled ∅ (the empty string) and if a node v has label ℓ and has Δ children, then we order the children 1,…,Δ and we label the child i with label obtained from concatenation ℓ+(i). We put such labels on all internal nodes. Let I denote the set of internal nodes and let ℓ(v) denote the label of v∈ I. Let f:L→ [Δ]^k be the bijection that sends a leaf u∈ L to the label ℓ(p(u))∈ [Δ]^k of its direct parent. 
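The tree T_{c,Δ,k} and its labelling can be generated explicitly; the sketch below builds it for small parameters (using the alphabet {0,…,Δ-1} instead of [Δ] for convenience), checks the vertex and leaf counts stated above, and verifies one instance of the leaf-to-internal-node distance formula used in the reduction that follows.

```python
from collections import deque
from itertools import product

def build_tree(c, delta, k):
    """T_{c,delta,k}: internal nodes are labelled by words of length <= k over
    {0..delta-1}; every depth-k node receives c extra leaf children."""
    adj = {(): []}
    for depth in range(k):
        for label in product(range(delta), repeat=depth):
            for j in range(delta):
                child = label + (j,)
                adj[label].append(child)
                adj[child] = [label]
    leaves = []
    for label in product(range(delta), repeat=k):
        for j in range(c):
            leaf = ("leaf", label, j)
            adj[label].append(leaf)
            adj[leaf] = [label]
            leaves.append(leaf)
    return adj, leaves

c, delta, k = 2, 3, 3
adj, leaves = build_tree(c, delta, k)
f = {leaf: leaf[1] for leaf in leaves}            # leaf -> label of its parent

assert len(leaves) == c * delta ** k              # N = c * Delta^k leaves
assert len(adj) == sum(delta ** i for i in range(k + 1)) + c * delta ** k

def bfs_dist(start):
    d, q = {start: 0}, deque([start])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in d:
                d[y] = d[x] + 1; q.append(y)
    return d

# Distance from a leaf to an internal node b equals 1 + (k - i) + (depth(b) - i),
# where i is the common-prefix length of f(leaf) and b (the word-oracle answer).
leaf, b = leaves[0], (1, 0)
i = next((s for s in range(len(b)) if f[leaf][s] != b[s]), len(b))
assert bfs_dist(leaf)[b] == 1 + (k - i) + (len(b) - i)
print("counts and distance formula check out for", len(leaves), "leaves")
```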
We consider the trees which have a fixed labelling (as described above) for the nodes in I, and every possible permutation of the labelling of the leaves. All possible balanced functions f:L→ [Δ]^k appear among the trees that we are considering. To reconstruct the tree, we in particular recover the corresponding function f. Distance queries between internal vertices always give the same response and can be ignored. We show the other queries are Type 1 and Type 2 queries in disguise.
* For a ∈ L and b∈ I the distance between a and b is given as follows. Let z∈ I be the nearest common ancestor of a and b and say z has depth i and b has depth j. The distance between a and b is 1+(k-i)+(j-i). The values of k and j do not depend on f but the value of i is exactly given by max{s:f(a)_[1,s]= b_[1,s]}, the answer to the corresponding Type 1 query of (a,b) to the word oracle. To be precise, since b may have a length shorter than k, the query _1^w(a,b') where b'_s=b_s for all s∈ [1,|b|] and b'_s=0 otherwise, gives the desired information.
* For a,a' ∈ L, the distance between a and a' is given by 2(1+(k-i)) for i the answer of a Type 2 query _2^w(a,a') to the word oracle.
This shows that an algorithm reconstructing the labelling of the leaves from q distance queries can be converted into an algorithm that reconstructs functions f: L → [Δ]^k from q queries to the word oracle. By <ref>, since k ≥ 50(c ln c +2), we need at least 1/11Δ N k queries. We are now ready to deduce the main result of this section.
*
Let Δ≥ 2 and n be integers and we write n=2cΔ^k for c∈ [1,Δ) and k an integer. Suppose that k≥ 50(cln c+3). In particular, Δ≥ 2 implies k ≥⌊log_Δ n - 1⌋. The tree T=T_⌊ c ⌋,Δ,k considered in <ref> has maximum degree Δ+1, N=⌊ c ⌋Δ^k leaves and n' = ∑_i=0^kΔ^i +⌊ c ⌋Δ^k vertices, where n/4≤ N≤ n' ≤ 2 cΔ^k = n. For Δ≥ 2 and c ≥ 1, if n ≥ 2Δ^50(c ln c+3) then N ≥ n/4≥Δ^50(c ln c +2). So we may apply <ref> and find that at least 1/11Δ N⌊log_Δ n -1 ⌋≥1/44Δ n ( log_Δ n-4) queries are required. As log_Δ n≥ 150, we find that this is at least 1/50Δ n log_Δ n as claimed. §.§ Randomised lower bounds for related models We first deduce our randomised lower bound for the phylogenetic setting, in which we are only required to reconstruct the distances between the leaves but can also only query distances between leaves.
*
Let T=T_c,Δ,k be the tree considered in <ref> with N leaves. Suppose towards a contradiction that we could obtain the pairwise distances between the leaves of this tree in 1/20Δ N log_Δ N distance queries between the leaves in expectation. We show that, from this, we can recover the labels of the leaves of T using only |V(T)|Δ^2 ≤1/30Δ N log_Δ N additional distance queries, contradicting <ref> since 1/20+1/50≤1/11 and log_Δ N≥ k ≥ 50. We proceed by induction on k. When k=0, T_c,Δ,0 is a star with c leaves. There is nothing to prove, as the parent of each leaf is known to be the root. Suppose k ≥ 1 and that the claim has already been shown for smaller values of k. We define an equivalence relation on the set of leaves: for u_1,u_2 ∈ L, u_1 ∼ u_2 if and only if d(u_1,u_2) ≤ 2k. This is an equivalence relation with Δ equivalence classes, as d(u_1,u_2) ≤ 2k if and only if u_1 and u_2 have a child of the root as common ancestor. Let u_1,u_2,…,u_Δ be arbitrary representatives of each of the Δ equivalence classes. (Note that we can select these since we already know the distances between the leaves.) Let r denote the root of T. We ask (u_i,N(r)) for all i ∈ [Δ]. 
From this we can deduce, for each of the classes, its common ancestor among the children of the root. It is the unique neighbour of r lying on a shortest path from u_i to r. Let V_i denote the set of all the leaves that have the i^th neighbour of r as common ancestor. We also define T_i to be the subtree rooted at the i^th neighbour of r. We remark that it is now sufficient to solve Δ subproblems of reconstructing T_i knowing the leaf distance matrix of each V_i. By the induction hypothesis, each subproblem is solvable in |V(T_i)| Δ^2 = ((|V(T)|-1)/Δ)Δ^2 = (|V(T)|-1)Δ queries. Therefore, in total this algorithm uses (|V(T)|-1)Δ^2 + Δ^2 = |V(T)|Δ^2 queries. We can get a lower bound for all values of N in a similar way to the proof of <ref>. We next show that our result implies various other new randomised lower bounds. Although we state these results with a weaker assumption on n for readability reasons, we remark that our more precise set-up (allowing Δ to be a small polynomial in n for specific values of n) also applies here. A betweenness query answers for three vertices (u,v,w) whether v lies on a shortest path between u and w. Using three distance queries to (u,w), (u,v) and (v,w), one can determine whether v lies on a shortest path between u and w, so the betweenness oracle is weaker (up to multiplicative constants). It has been shown in <cit.> that randomised algorithms can obtain a similar query complexity for betweenness queries as was obtained for distance queries by <cit.>. Moreover, a randomised algorithm for 4-chordal graphs has been given that uses a quasi-linear number of queries to a betweenness oracle <cit.>. A deterministic algorithm using Õ(Δ n^3/2) betweenness queries has been given for trees, as well as an Ω(Δ n) lower bound <cit.>. Our randomised lower bound from Theorem <ref> immediately extends to this setting. Any randomised algorithm requires at least 1/150Δ nlog_Δ n betweenness queries to reconstruct n-vertex trees of maximum degree Δ+1≥ 3 if n≥ 2Δ^50 (ΔlnΔ+3). Given two nodes i,j in a directed tree, a path query answers whether there exists a directed path from i to j. Improving on work from <cit.>, it was shown in <cit.> that any algorithm needs Ω(nlog n+ nΔ) queries to reconstruct a directed tree on n nodes of maximum degree Δ. When we consider a directed rooted tree in which all edges are directed from parent to child, then path queries are the same as ancestor queries: given u,v in a rooted tree, is u an ancestor of v? We apply this to the tree T_c,Δ,k from <ref> for which the labels of all internal vertices are fixed but the labels of the leaves are unknown. Path queries (u,v) only give new information if v is a leaf and u is an internal vertex. But this is weaker than distance queries, since we can obtain the same information by asking the distance between u and v. This means that we can redo the calculation from the proof of <ref> (applying <ref>) to lift the lower bound to path queries. Any randomised algorithm requires at least 1/50Δ nlog_Δ n path queries to reconstruct n-vertex directed trees of maximum degree Δ+1≥ 3 if n≥ 2Δ^50(ΔlnΔ+3). A randomised algorithm using O(nlog n) path queries on bounded-degree n-vertex trees has been given in <cit.> but their dependency on Δ does not seem to match our lower bound. We remark that besides query complexity, works on path queries such as <cit.> also studied the round complexity (i.e. the number of rounds needed when queries are performed in parallel). 
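The reduction from betweenness queries to distance queries mentioned above amounts to the following one-line test (a sketch; `dist` stands for an assumed distance-query oracle).

def betweenness(dist, u, v, w):
    """Decide with three distance queries whether v lies on some shortest u-w path."""
    return dist(u, w) == dist(u, v) + dist(v, w)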
The same ideas applies to lift our lower bound to one for reconstructing tree posets (T,>) from comparison queries, which answer for given vertices u,v of the tree whether u<v,v<u or u||v. Any randomised algorithm requires at least 1/50Δ nlog_Δ n comparison queries to reconstruct n-vertex tree posets of maximum degree Δ+1≥ 3 if n≥ 2Δ^50 (ΔlnΔ+3). This improves on the lower bound of Ω(Δ n + n log n) from <cit.> and matches (up to a constant factor) the query complexity of their randomized algorithm. In the proof of <ref>, the adversary method is used by <cit.> to prove Ω(nk) queries are needed by any deterministic algorithm for an auxiliary problem called the (n,k)-partition problem. Given n elements which are partitioned into k equal-sized classes, the partition needs to be determined via queries of the form “Are elements a and b in the same class?”. This problems has recently resurfaced under two new names. Liu and Mukherjee<cit.> studied this problem phrased as learning the components of a graph via membership queries (which answer whether given vertices lie in the same component or not) and provide an exact deterministic lower bound of (k-1)n-k2 for deterministic algorithms. Randomised algorithms have been studied in a similar setting by Lutz, De Panafieu, Stein and Scott <cit.> under the name active clustering. They provide the optimal average query complexity when the partition is uniformly at random (among all partitions, so without a specified number of parts) and made the following conjecture. Let p_1≥…≥ p_k be a probability distribution over k parts. If n elements are partitioned at random in k parts by putting an element in part i with probability p_i, independently of the other elements, then (1+o(1))∑_i=1^k i p_i n membership queries are required on average. In particular, if all p_i are equal then ∑_i=1^k ip_in=∑_i=1^k i/k n =k(k+1)/2kn=k+1/2 n. This seems reasonable under the assumption that the set of answers the algorithm receives, needs to distinguish the partition from any other partition (including those with a very large number of parts), since if the number of parts k is `known', it clearly suffices to perform n-1 queries for k=2 (e.g. take the queries that correspond to the edges of any tree on n vertices). For general (fixed) k, we can similarly[We can learn one vertex per part in a negligible number of expected queries and then take a random permutation on the k parts for each element, specifying the order in which we will query for our vertex. We can stop when we hear a YES or k-1 NO's and hence improve the final k/k to k-1/k.] reduce the number of queries by 1/kn by exploiting that we know the part if we discovered k-1 parts that an element is not part of. Either way, it seems natural that randomised algorithms need α kn queries for some constant α, but the best lower bound for randomised algorithm seems to be the information-theoretic lower bound of Ω(nlog k). Our result remedies this gap in the literature. Let ε>0. Any randomised algorithm requires at least 1/11 nk membership queries to solve the (n,k)-partition problem if n≥ k^1+ε is sufficiently large. Since we plan to apply <ref>, we will write Δ=k. First note that we can see the (n,Δ)-partition problem using membership queries as reconstructing a balanced function using only Type 2 queries to the coordinate oracle. 
Formally, if f : [n] → [Δ] is the function which associates to an element a∈ [n] the index i∈ [Δ] of the part that contains a (out of the Δ parts in the partition), then a membership query between a,a'∈ [n] is exactly equivalent to the coordinate query _2^c(a,a') applied to the function f. Once we have reconstructed the partition, we can retrieve the index of each part using Δ^2 = o(nΔ) queries of Type 1 to the coordinate oracle. Therefore it suffices to show that at least 1/11Δ n queries are needed in expectation to reconstruct a balanced function using the coordinate oracle. Applying <ref> with k=1, we find that when f is sampled uniformly at random (among all functions g:[n]→ [Δ]), the probability that a given randomised algorithm uses less than 1/10nΔ queries is at most e^-1/50 n. In particular, the number of balanced functions reconstructed in less than 1/10nΔ queries is (in expectation over the randomness of the algorithm) upper bounded by Δ^n e^-1/50 n. We compare this number to the total number of balanced functions, which is the multinomial coefficient n!/((n/Δ)!)^Δ: Δ^n e^{-n/50} ((n/Δ)!)^Δ / n! ≤Δ^n e^{-n/50} (2π n/Δ)^{Δ/2} (n/(eΔ))^n e^{Δ^2/(12n)} / (√(2π n) (n/e)^n) = (1/√(2π n)) (2π n/Δ)^{Δ/2}exp(Δ^2/(12n)-n/50). Here we used that for all n≥ 1, √(2π n) (n/e)^n <n!< √(2π n) (n/e)^n e^1/(12n). Since Δ≤ n^1/(1+ε) for some ε>0, the fraction tends to 0, so in particular it becomes smaller than 1/100 when n is sufficiently large (depending on ε). This implies that the expected number of queries to reconstruct a Δ-balanced function is at least (99/100)·1/10Δ n ≥1/11 n Δ. By the discussion at the start, we find the same lower bound for the (n,Δ)-partition problem. The same lower bound holds when queries of the form “Is element a in class i?” are also allowed. We expect that our methods can be adapted to handle parts of different sizes and that our constant 1/11 can be easily improved. § OPTIMAL DETERMINISTIC ALGORITHM FOR K-CHORDAL GRAPHS Using additional structural analysis, we extend our algorithm from trees to k-chordal graphs: graphs without induced cycles of length at least k+1. In the simpler case of (3-)chordal graphs, (randomised) reconstruction from a quasi-linear number of queries was already known to be possible. Besides extending the class of graphs, our algorithm shaves off a (log n)-factor and is now optimal in n (the number of vertices of the input graph). We put in additional effort to optimize the dependency on Δ for chordal graphs up to a factor O(logΔ ) in the appendix. The core of the proof uses the same principles as for trees in <ref>: we reconstruct the edges of a vertex u to the previous layer, layer-by-layer and vertex-by-vertex. The two important ingredients are (1) a structural result on the neighbourhood of a vertex (see <ref> and <ref>) and (2) the existence of `nice' balanced separators on the already reconstructed subgraph (see <ref> and <ref>). After removing the separator, we need to show that we can correctly determine the component that contains the neighbourhood of the vertex u that we are currently considering. We also need to reconstruct the edges within the layer, but this turns out to be relatively easy.
*
We start by fixing a vertex v_0 and asking (V(G),v_0). From that, we reconstruct L_i = {v ∈ V(G) | d(v,v_0) = i}. We write L_⋈ i = ∪_j ⋈ i L_j for any relation ⋈ ∈{≤,<,>,≥}. The algorithm proceeds by iteratively reconstructing G[L_≤ i] for increasing values of i. Note that we can reconstruct L_≤ 2Δ k, the vertices at distance at most 2Δ k from v_0, using O_k,Δ(n) queries. 
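This first layering step needs only one distance query per vertex. A minimal sketch is given below, assuming a batch oracle `dist(v_0, S)` that returns the distances from v_0 to every vertex of S (the interface is an assumption of the sketch, not part of the algorithm's specification).

def layers_around(vertices, dist, v0):
    """Compute L_i = {v : d(v, v0) = i} from the answers of the query (V(G), v0)."""
    d = dist(v0, set(vertices))          # one batch of |V(G)| distance queries
    layers = {}
    for v in vertices:
        layers.setdefault(d[v], set()).add(v)
    return layers                        # layers[i] is L_i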
Suppose that we reconstructed G_1 := G[L_≤ i-1] for some i≥ 2Δ k and we again want to reconstruct the two edge sets E_i-1,i = {uv ∈ E | u ∈ L_i-1, v ∈ L_i} and E_i,i = {uv ∈ E | u , v ∈ L_i}. We call H_1=G[L_≤ i-1-k] the core of G_1. We need a lemma that implies that neighbourhoods are not spread out too much in G_1. For all u ∈ L_i and v,w ∈ N(u) ∩ L_i-1, d_G_1(v,w) ≤Δ k. Let v,w ∈ N(u) ∩ L_i-1 and let P be a shortest vw-path in G_1. If V(P) ∩ N(u) = {v,w}, then the vertex set V(P) ∪{u} induces a cycle in G, and so |V(P)| ≤ k (else the k-chordality would be contradicted). For the same reason, P can have at most k-1 consecutive vertices outside of N(u). Since u has at most Δ neighbours, it follows that d_G_1(v,w)≤Δ k. Since G has treelength at most k, it has a tree decomposition (T',ℬ') such that all bags B'∈ℬ' satisfy d_G(u,v)≤ k for all u,v∈ B'. In particular the bags have size at most Δ^k+1. We have already reconstructed G_1, so in particular we know N^≤ k[v] for all v∈ H_1. Therefore, we can construct a tree decomposition of (T,ℬ) of G_1 such that each B ∈ℬ has size at most Δ^k+1 and for any bag B ∈ℬ that contains at least one vertex of the core H_1, we have d_G_1(u,v)≤ k for all u,v∈ B. Fix u∈ L_i. We describe an algorithm to reconstruct N(u) ∩ L_i-1. The algorithm recursively constructs a sequence of connected graphs (G_j)_j=0^ℓ and a sequence of separators (S_j)^ℓ_j=1 for some ℓ≤⌈log(n) ⌉, such that S_j is a 1/2-balanced separator of G_j, S_j⊆ L_≤ i-Δ k - 1 and N(u) ∩ L_i-1⊆ V(G_j). We first prove the following claim that we use to find our sequence of separators (S_j). For n large enough compared to Δ and k and any set of vertices A ⊆ V(G_1) with |A| ≥log n, there exists a bag B of T such that B is a 1/2-balanced separator of A and B is contained in L_≤ i-Δ k - 1. Let T be rooted in a bag that contains v_0. By <ref>, there is a bag B of T that forms a 1/2-balanced separator for A (i.e. all connected components of G_1[A∖ B] are of size at most |A|/2). We choose such a bag B of minimum depth (in T). We need to show B is contained in L_≤ i-Δ k - 1. If B contains v_0, then B ⊆ G[L_≤ k]. Since i≥ 2Δ k, we are done in this case. Suppose now that v_0 ∉ B and let B' be the parent of B. By definition, B' is not a 1/2-balanced separator of A. If B contains a vertex of L_≤ i-1-k=V(H_1), then its diameter in G_1 is at most k. So either B⊆ L_≤ i-Δ k-1 or B⊆ L_>i-(Δ+1)k-1. We are done in the first case, so assume the latter. Since G_1 is connected, B∩ B'≠∅. The same diameter argument gives that B'⊆ L_>i-(Δ+2)k-1. If C is a component of G_1∖ B' that does not contain v_0, then the shortest path (of length at most i) from any v∈ C must go through B' (at distance at least i-(Δ+2)^k-1 from v_0). In particular, all such components are contained in N^((Δ+2)^k+1)(B') and so the total size is at most Δ^(Δ+2)k + 1 |B'| =O_k,Δ(1). For n sufficiently large, this is at most 1/2log n. On the other hand, as B' is not a 1/2-balanced separator, there exist a component A' of G_1[A∖ B'] with |A'| > 1/2log n. We found above that A' must contain v_0. Since B does not contain v_0, A' is contained in the component of G_1[A ∖ B] containing v_0. This yields a contradiction with the fact that B is a 1/2-balanced separator of A. Suppose that we have defined G_j for some j ≥ 1 and let us describe how to define G_j+1. If |G_j| ≤log n, we ask (u,V(G_j)) and output N(u)∩ V(G_j). Otherwise, we let S_j be the bag found in <ref> when applied to A=V(G_j). Then S_j ⊆ L_≤ i -Δ k - 1 and it is a 1/2-balanced separator of G_j. 
Since it is a bag of T and contained in H_1, we find the size of the bag is at most Δ^k+1 and d_G_1(u,v)≤ k for all u,v∈ S_j. We ask (N[S_j],u) and let G_j+1 be a component of G_j∖ S_j that contains a vertex from min_x∈ N[S_j] d_G(x,u). Then, we increase j by one and repeat the same procedure. We now prove the correctness of the algorithm presented above. We first argue that N(u)∩ L_i-1 is contained in a unique connected component of G_j ∖ S_j. Since every separator is included in L_≤ i-Δ k-1, we find d_G_1(u,S_ℓ) ≥Δ k + 1 for all ℓ≤ j. By <ref>, all vertices in N(u)∩ L_i-1 are connected via paths in G_1 of length at most Δ k and by the observation above, these paths avoid all separators so will be present in a single connected component of G_j∖ S_j. The only thing left to prove is the following claim. G_j+1 is the component that contains N(u)∩ L_i-1. Let x ∈ N[S_j] such that d_G(x,u) is minimised and let H_x be the connected component of x. We will prove that H_x contains N(u)∩ L_i-1. In particular, this implies that G_j+1=H_x (a priori, H_x could be a different component for different minimisers x), so that G_j+1 contains N(u)∩ L_i-1, as desired. Suppose towards a contradiction that N(u) ∩ L_i-1 is instead contained in a different connected component H. We will find an induced cycle of length at least k+1. Let P be a shortest path in G from x to u. Let P' be a path from u to S_j with all internal vertices in H. Such a path exists since G_j is connected. Let P” be shortest path in G between a neighbour of x in S_j to the endpoint of P' in S_j. As S_j is a bag contained in H_1, any two vertices in S_j are within distance k in G_1. So P”⊆ L_≤ i-2k-2 (we may assume Δ≥ 4). Let y be the first vertex on the path P (from x to u) that lies in L_i (such a vertex must exist since the path does not have internal vertices in S_j by choice of x and since H_x contains no neighbours of u). Let x_1,…,x_k be the k vertices before y in P. Note that none of the x_i can be adjacent to or part of P'∪ P” (since they are in H_x∩ L_≥ i-k). Let G' be the graph obtained from G[P∪ P”∪ P'] by contracting P”∪ P'∪ (P∖{x_1,…,x_k}) to a single vertex p. Note that the selected vertex set is indeed connected and that the resulting graph has vertex set {x_1,…,x_k,p}. Since P was a shortest path in G, the vertex set {x_1,…,x_k} still induces a path and it suffices to argue about the adjacencies of p. Via edges of P, the vertex p is adjacent to x_1 and x_k. If p was adjacent to x_i for some i∈ [2,k-1], then there must be a vertex y∈ P”∪ P'∪ (P∖{x_1,…,x_k}) adjacent to x_i. But a case analysis shows this is not possible. (The only vertices adjacent to x_i in P are x_i+1 and x_i-1 since P is a shortest path; we already argued that y∉P'∪ P”.) We hence found an induced cycle of length k+1, a contradiction. We now show how to reconstruct all edges in E_i,i incident to u. If x ∈ N(u)∩ L_i-1 and y∈ N(v)∩ L_i-1 for some uv ∈ E_i,i, then d_G[L_≤ i-1](x,y) ≤ 2Δ k. Since |N[{u,v}]| ≤ 2Δ, it suffices to prove that d_G_1(x,y)≤ k when the shortest path P in G_1 between x and y avoids other vertices from N[{u,v}]. As we argued in <ref> this is true when x or y is a neighbour of both u and v (else we create an induced cycle of length at least k+1). So we may assume that x∈ N(u)∖ N(v) and y∈ N(v)∖ N(u). But now P ∪{u,v} is an induced cycle of length at least k+1. Let G_1' be the graph obtained from G_1 by adding the vertices in L_i and the edges in E_i,i-1. Our algorithm already reconstructed G_1'. 
If uv∈ E_i,i, then applying <ref> to vertices x,y∈ L_i-1 on the shortest paths from u,v to the root v_0 respectively, we find that d_G_1'(u,v)≤ 2Δ k+2. For each u∈ L_i, we ask (u,N_G'_1^≤ 2Δ k +2(u) ∩ L_i) and we record the vertices v for which the response is 1. Those are exactly the vertices adjacent to u. Per vertex u∈ L_i, this takes at most Δ^2Δ k +3 queries. The query complexity of reconstructing E_i-1,i is O_k,Δ(log n|L_i|) as there are at most log n iterations (using the fact that the (S_j)_j are 1/2-balanced separators) and in each iteration we do O_k,Δ(log n) queries per vertex u∈ L_i. In order to reconstruct E_i,i, we use O_k,Δ(1) queries per vertex of L_i. Therefore, the total query complexity of the algorithm is ∑_i O_k,Δ(|L_i|log n) = O_k,Δ(nlog n). We remark that the dependency of the constant on Δ and k can be improved in a similar fashion to what we explain for chordal graphs in Appendix <ref>. § RANDOMISED ALGORITHMS FOR BOUNDED TREELENGTH In this section, we prove <ref>. * Given a tree decomposition (T, (B_t)_t ∈ V(T)) of a graph G and a set X of vertices of G, we denote by T_X the subtree of T induced by the set of vertices t ∈ V(T) such that B_t contains at least one vertex of X. Given v∈ V(G), we may abuse notation and use T_v as the subtree T_{v}. We first prove the following useful property of graphs of bounded treelength. * Consider a tree decomposition (T,(B_t)_t∈ V(T)) of G such that any two vertices u,v in the same bag satisfy d(u,v) ≤ k. If two vertices a,b∈ V(G) share a bag, then d(a,b) ≤ k and the claim holds. Otherwise, T_a and T_b are disjoint subtrees of T and we can consider the unique path P in T between T_a and T_b, with internal nodes taken from V(T)∖ V(T_a)∪ V(T_b). We also consider a shortest path Q:={q_1,q_2,…, ,q_m} between a and b in G with q_1 = a, q_m = b and q_i q_i+1∈ E(G) for all i < m. We remark that by connectivity, P is a subpath of T_A. Suppose now, towards a contradiction, that there is some vertex z ∈ Q such that z ∉ N^≤ 3k/2[A]. Note that T_z can not have common vertices with P because we assumed d(z,A) > k using the previous remark and the fact that vertices that share a bag are at distance at most k. We can then consider the vertex t∈ P such that {t} separates P ∖{t} from T_z in T. The shortest path Q must go through B_t twice: once to go from a to z and once to go from z to b. Let i<ℓ<j be given such that q_i,q_j ∈ B_t and q_ℓ = z. Since Q is a shortest path in G, d(q_i,z) + d(z,q_j)=d(q_i,q_j). Moreover, d(q_i,q_j) ≤ k because q_i and q_j share a bag. By the pigeonhole principle, we deduce that either d(p_i,z) ≤ k/2 or d(p_j,z) ≤ k/2. Suppose that d(p_i,z) ≤ k/2. Remember that t ∈ P thus B_t contains an element of A as G[A] is connected. It follows that d(p_i,A) ≤ k thus d(z,A) ≤ d(z,p_i) + d(p_i,A) ≤ 3k/2, which is a contradiction. The other case follows by a similar argument. We now sketch the proof of <ref>. The skeleton of the proof is inspired by <cit.>: we find a balanced separator S, compute the partition of G ∖ S into connected components, and reconstruct each component recursively. In order to find this separator, we use a notion of betweenness that roughly models the number of shortest paths a vertex is on. We prove four claims. The first one ensures that in graphs of bounded treelength, the betweenness is always at least a constant. Then, the next three claims are building on each other to form an algorithm that computes the partition of G ∖ S into connected components of roughly the same size. 
* <ref> is a randomised procedure for finding a vertex z with high betweenness (using few queries and with constant success probability). * <ref> shows S=N^≤ 3k/2[z] is a good balanced separator if z has high betweenness. * <ref> computes the partition of G ∖ S into connected components. Note that, once you computed the partition, you can check if the preceding algorithms have been successful. If not, we can call again <ref> until we are successful, yielding a correct output with a small number of queries in expectation. Let G be a connected n-vertex graph of maximum degree at most Δ and let (T,(B_t)_t∈ V(T)) be a tree decomposition of G such that d(u,v)≤ k for all u,v∈ V(G) that share a bag in T. We initialize A=V(G), n_A = |A| and R^i=∅ for i ∈ [1,3k]. For any j ∈ℝ^+ we abbreviate R^≤ j=∪_i≤ j R^i. Lastly, let r = |R^≤ 3k|. We will maintain throughout the following properties: * G[A] is a connected induced subgraph of G. * R^i consists of the vertices in G that are at distance exactly i from A. * Both A and R^i for all i are known by the algorithm. In particular, we know which vertices are in sets such as R^≤ 3k/2 = N^≤ 3k/2[A] and by Lemma <ref> we also obtain the following crucial property. 4. For a,b∈ A, any shortest path between a and b only uses vertices from A ∪ R^≤ 3k/2. The main idea of the algorithm is to find a balanced separator S and compute the partition of G[A∖ S] into connected components, then call the algorithm recursively on each components. As soon as n_A has become sufficiently small, we will reconstruct G[A] by `brute-force queries'. In order to find the separator S, we use the following notion. For G a graph, a subset A ⊆ V(G) and a vertex v∈ V(G), the betweenness p_v^G(A) is the fraction of pairs of vertices {a,b}⊆ A such that v is on some shortest path in G between a and b. We first prove that there is always some vertex v∈ A∪ R^≤ k (a set known to our algorithm) for which p_v(A) is large. We have p:=max_v∈ A ∪ R^≤ kp_v^G(A)≥1/2(Δ^k+1). Our original tree decomposition also restricts to a tree decomposition for G[A], so Lemma <ref> shows that there exists a bag B of T such that B is a 1/2-balanced separator of G[A]. Note that G[A] is connected, so there exists some a∈ A∩ B. As T is a witness of G being of bounded treelength, the distance between any two vertices of B is at most k. In particular, B⊆ N^≤ k[a] ⊆ A ∪ R^≤ k, and |B|≤Δ^k+1 since G has maximum degree Δ. Moreover, since B is a 1/2-balanced separator of G[A], for at least half of the pairs {u,v}⊆ A, the shortest path between u and v goes through B. Using the pigeonhole principle, there exists a v ∈ B such that p^G_v(A) ≥1/2(Δ^k+1). The next three claims are building on each other to find a balanced separator S. In the first one, we argue that we can find, using few queries, a vertex with high betweenness. There is a randomised algorithm that finds z∈ N^≤ 3k/2[A] with p_z^G(A)≥ p/2 with probability at least 2/3 using O(p^-1(n_A+r)log(n_A+r)) distance queries in G. To simplify notation, we omit G and A from p^G_v(A) and only write p_v. We first sample uniformly and independently (with replacement) pairs of vertices {u_i,v_i}⊆ A for i ∈ [C log (n_A+r)] where C≤1/2p+1 is defined later. Then, we ask (u_i,N^≤ 3k/2[A]) and (v_i,N^≤ 3k/2[A]). We write P_i = {x ∈ N^≤ 3k/2[A] | d(u_i,x) + d(x,v_i) = d(u_i,v_i)} for the set of vertices that are on a shortest path between u_i and v_i. Note that <ref> implies that P_i contains all vertices of V(G) on a shortest path from u_i to v_i. 
From the queries done above we can compute P_i for all i∈ [Clog (n_A+r)]. For each vertex v ∈ N^≤ 3k/2[A], we denote by p_v an estimate of p_v defined by p_v = |{i∈ [Clog(n_A+r)] : v ∈ P_i}| /(C log (n_A+r)). The algorithm outputs z such that z = max_v ∈ N^≤ 3k/2[A]p_v. The query complexity of this algorithm is 2Clog(n_A+r)|N^≤ 3k/2[A]|=O_k,Δ(n_Alog(n_A+r)) We now justify the correctness of this algorithm and give C. Let y = max_w ∈ N^≤ 3k/2[A] p_w. We need to show that p_z≤p_y/2 has probability at most 1/3. Let u be a vertex chosen uniformly at random among the set of vertices w∈ N^≤ 3k/2[A] with p_w≤ p_y/2. A simple union bound implies that it is sufficient to show that ℙ [ p_y≤p_u] < 1/(3n_A+3r). Indeed, this implies that the probability that a vertex w with p_w≤ p_y/2 is a better candidate for z than y, is at most 1/3. Note that the elements of {p_w | w ∈ N^≤ 3k/2[A]} (and thereby z) are random variables depending on the pairs of vertices sampled at the start, and that the elements of {p_w | w∈ N^≤ 3k/2[A]} are fixed. We denote by A_i the event {u ∈ P_i} and by B_i the event {y ∈ P_i}. The events (A_i)_i are independent, since each pair {u_i,v_i} has been sampled uniformly at random and independently. By definition, [A_i] = p_u ≤ p_y/2 and [B_i] = p_y. Thus, the random variable X_i defined by X_i = 1_A_i - 1_B_i has expectation [X_i] ≤ -p_y/2. Therefore, applying Hoeffding's inequality <cit.>, we obtain [∑_i=1^Clog (n_A+r) X_i ≥ 0 ] ≤ 2 exp(-2(Clog(n_A+r)p_y/2)^2/4log(n_A+r)). By choosing 1/2p+1≥ C ≥1/2p_y = 1/2p such that Clog(n_A+r) is an integer, we conclude that [ p_y≤p_u]= [∑_i=1^Clog(n_A+r) X_i ≥ 0 ] ≤ 2 exp(-2log(n_A+r)) ≤ 1/(3n_A+3r) for n_A≥ 6. This completes the proof. Let z be a vertex with high betweenness as in the claim above. We now argue that N^3k/2[z] is an α-balanced separator for some constant α depending only on Δ and k. Let α = √(1/4(Δ^k +1)). If z ∈ N^≤ 3k/2[A] satisfies p_z^G(A)≥ p/2, then S:= N^≤ 3k/2[z] is an α-balanced separator for A. Suppose towards contradiction that S is not an α-balanced separator. Thus there is a connected component C of G[ V(G) ∖ S] with |C∩ A|> α n_A. By definition of S, d(z,C) > 3k/2 which implies by <ref> that for any pair of vertices in C, no shortest path between these two vertices goes through z. In particular, this holds for pairs of vertices in C ∩ A. Therefore, p_z^G(A)≤(n_A^2-|C∩ A|^2)/n_A^2 < 1 - α^2 = 1 - (1 - 1/4(Δ^k +1)) = 1/4(Δ^k +1)≤ p/2 using Claim <ref> for the last step, contradicting our assumption that p_z^G(A)≥ p/2. We apply <ref> to find z∈ N^≤ 3k/2, where p_z^G(A)≥ p/2 with probability at least 2/3 (using also <ref>). We compute S =N^≤ 3k/2[z] using O_k,Δ(n_A + r) distance queries; this can be done since S ⊆ A ∪ R^≤ 3k so the algorithm only needs to consider n_A+r vertices when searching for neighbours. The set S is an α-balanced separator with probability at least 2/3 by <ref>. In particular, the algorithm does not know yet at this point if it is indeed a good separator or not. It will be able to determine this after computing the partition of G[A∖ S] into connected components. The following claim uses mostly the same algorithm as <cit.>, and the proof is analogous. As we are using this algorithm in a slightly different setting, we still give a complete proof of the lemma. There is a deterministic algorithm that given a set S ⊆ A, computes the partition of A∖ S into connected components of G[A ∖ S] using at most n_A ·Δ (r + |S|) distance queries. 
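Postponing the proof of this claim for a moment, the sampling estimator analysed in the preceding claim can be summarised as follows. This is only a sketch: the batch oracle `dist(w, S)` (returning {x: d(w,x) for x in S}) and the parameter `trials`, which plays the role of C log(n_A+r), are assumptions of the sketch.

import random
from collections import Counter

def high_betweenness_vertex(A, closed_A, dist, trials):
    """Return a candidate z maximising the empirical betweenness p-hat over
    closed_A = N^{<=3k/2}[A], by counting shortest-path memberships over random pairs."""
    counts = Counter()
    A = list(A)
    for _ in range(trials):
        u, v = random.sample(A, 2)                      # a uniform pair {u, v} from A
        du, dv = dist(u, closed_A), dist(v, closed_A)   # two batch distance queries
        for x in closed_A:
            if du[x] + dv[x] == du[v]:                  # x lies on a shortest u-v path
                counts[x] += 1
    return max(closed_A, key=lambda x: counts[x])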
By assumption, R^1 is the set of vertices at distance exactly 1 from A in G. Since A is connected, it is a connected component of G[V(G)∖ R^1]. Therefore, the connected components of G[A∖ S] are exactly the connected components of G[V(G) ∖ (R^1∪ S)] containing an element of A. We denote by B the open neighbourhood of S ∪ R^1 in A, that is, B = (N[S∪ R^1] ∩ A) ∖ (S ∪ R^1). We use the following algorithm. * We ask (A, S ∪ R^1) in order to deduce N[S ∪ R^1] ∩ A, and then we ask (A,N[S ∪ R^1] ∩ A). * We compute D_b = { v ∈ A ∩ S | d(v,b) ≤ d(v, S ∪ R^1)} for b ∈ B, the set of vertices in A∩ S which have a shortest path to b that does not visit a vertex of S∪ R^1. * Let 𝒟 = { D_s | s ∈ B }. While there are two distinct elements D_1,D_2 ∈𝒟 such that D_1 ∩ D_2 ≠∅, merge them in 𝒟, that is, update 𝒟← (𝒟∖{D_1,D_2}) ∪{ D_1∪ D_2 }. We output 𝒟. Note that any vertex a∈ A∖ S, is not in S∪ R_1, so will be in D_s for at least one s∈ B (possibly s=a) before we do the last step of the algorithm. The last step ensures that the output is indeed a partition of A. We first argue that 𝒟, as outputted by the algorithm above, is an over-approximation of the connected component partition of G[A∖ S] (that is, for any connected component C of G[A∖ S], there exists D ∈𝒟 such that C ⊆ D). It suffices to prove that for any edge ab ∈ E(G[A∖ S]) there exists D ∈𝒟 such that {a,b}⊆ D. Suppose without loss of generality that d(a,S ∪ R^1) ≤ d(b,S ∪ R^1). Moreover let s ∈ B such that d(a,s) = d(a,S∪ R^1) - 1 and thus a ∈ D_s. Now d(b,s) ≤ d(a,s) + 1 ≤ d(b,S ∪ R^1) thus b ∈ D_s. We now argue that 𝒟 is an under-approximation too, by showing that G[D∖ S] is connected for all D ∈𝒟. We first show this for the initial sets D_s with s∈ N[S∪ R^1]∩ A. Let s ∈ B. For any v ∈ D_s, by definition, d(v,s) ≤ d(v,S ∪ R^1), thus there is a shortest path P between v and s not using vertices of S ∪ R^1. Moreover s ∈ A and A is separated from V(G) ∖ A by R^1, therefore P is contained in A ∖ S. This shows that v is in the same connected component of G[A∖ S] as s. To see that G[D] remains connected for all D∈𝒟 throughout the algorithm, note that when the algorithm merges two sets D_1,D_2 ∈𝒟, they need to share a vertices, thus if both G[D_1] and G[D_2] are connected then G[D_1 ∪ D_2] is also connected. Remember that |S| ≤Δ^3k/2 + 1 = O_k,Δ(1) and that the bounded degree condition implies |N[S ∪ R^1]| ≤Δ· |S ∪ R^1|. This allow us to conclude that the query complexity is bounded by |A|·|N[S∪ R^1]| ≤ n_A·Δ |S ∪ R^1| ≤ n_A ·Δ (r + |S|). We apply the algorithm from <ref> with the separator S computed by <ref>. Knowing the partition, the algorithm can check if S is indeed α-balanced. If not, the algorithm repeats <ref> and computes a new potential separator. An single iteration succeeds with probability at least 2/3 and each iteration is independent from the others, so the expected number of repetitions is 3/2. We ask (S ∪ R^≤ 3k, A). For each connected component A of G[A ∖ S], we will reconstruct G[A] and then we will describe how to reconstruct G[A]. If |A|≤log(n), then we ask (A,A) to reconstruct G[A]. Otherwise, we will place a recursive call on A, after guaranteeing that our desired properties mentioned at the start are again satisfied. By definition, G[A] is connected. So we know property 1 holds when A is replaced by A. To ensure properties 2 and 3 are also satisfied for the recursive call, we reconstruct R^i, the set of vertices at distance exactly i from A. 
As S∪ R^1 separates A from other component of G[A∖ S], for any other connected component D of G[A∖ S] and for any v∈ D, we have: d(A,v) = min_s ∈ S ∪ R^1 d(A,s) + d(s, v). Therefore we can compute d(A,v) from the query results of (S ∪ R^≤ 3k, A) for all v ∈ A ∪ R^≤ 3k. This is enough to deduce R^i for any i ≤ 3k because A⊆ A and thus R^i⊆ A ∪ R^i. After we have (recursively) reconstructed G[A] for each connected component A of G[A∖ S], we reconstruct G[A] by using that we already queried the distance between all pairs (a,s) with a∈ A and s∈ S. In particular, we know G[S∩ A] and also how to `glue' the components to this (namely, by adding edges between vertices at distance 1). By <ref>, each recursive call reduces the size of the set A under consideration by a multiplicative factor of α. Therefore, the recursion depth is bounded by O_Δ,k(log n) and the algorithm will terminate. We argued above that the algorithm correctly reconstructs the graph. It remains to analyse the query complexity. We analyse the query complexity via the recursion tree, where we generate a child for a vertex when it places a recursive call. We can associate to each vertex v of the recursion tree T_R, a subset A_v⊆ V(G) for which the algorithm is trying to reconstruct G[A_v]. The subsets associated to the children of a node v are disjoint, since each corresponds to a connected component of A_v∖ S_v for some subset S_v⊆ V(G) that is an α-balanced separator. In particular, the subsets associated to the leafs are disjoint. In a leaf node v, the algorithm performs |A_v|^2 queries to reconstruct G[A_v], where |A_v|≤log(n). If we enumerate the sizes A_v for the leafs v of the recursion tree as a_1,…,a_ℓ, then ∑_i=1^ℓ a_i^2≤ℓlog(n)^2≤ n log(n)^2, where we use that we have at most n leafs since the corresponding subsets are disjoint. Since there are at most n leafs, and the recursion depth is O_k,Δ(log n), there are at most O_k,Δ(nlog n) internal nodes. Let v be an internal node and let n_A and r denote the sizes of the corresponding subsets A=A_v and R^≤ 3k. The algorithm makes the following queries: * Finding z takes O_k,Δ(n_Alog(n_A+r)) queries in Claim <ref>. * O_k,Δ(n_Ar) queries to compute S from z and to find the connected components of A∖ S in Claim <ref>. This step and the previous step are repeated a constant number of times (in expectation). * O_k,Δ(n_Ar) queries to set up the recursive calls to the children of v. Since each recursive call increases the size of R^≤ 3k by at most an additive constant smaller than (Δ+1)^9k/2 (recall that R^≤ 3k⊆ R^≤ 3k∪ N^≤ 9k/2[z]), and the recursion depth is O_k,Δ(log n), it follows from an inductive argument that r= O_k,Δ(log n). So the number of queries listed above is O_k,Δ(n_Alog n). To compute the total query complexity of internal nodes, we use the fact that for two nodes v and v' at the same recursion depth we have that A_v ∩ A_v' = ∅. Therefore, by adding contribution layer by layer in the recursion tree we get a query complexity of O_k,Δ(nlog n) for any fixed layer, and the total number of queries performed sum up to: nlog(n)^2 +O_k,Δ(log n) O_k,Δ(nlog n)=O_k,Δ(nlog^2n). § OPEN PROBLEMS We presented new algorithms for reconstructing classes of graphs with bounded maximum degree from a quasi-linear number of distance queries and gave a new randomised lower bound of Δ n log_Δ n for n-vertex trees of maximum degree Δ. 
This lower bound is now also the best lower bound for the class of bounded degree graphs, while the best-known randomised algorithm uses O_Δ(n^3/2) queries <cit.>. The main open question is to close the gap between our lower bound and this upper bound. In particular, it would be interesting to see if the quasi-linear query complexity achieved for various classes of graphs can be extended to all graphs of bounded maximum degree. Does there exist a randomised algorithm that reconstructs a n-vertex graph of maximum degree Δ using O_Δ(n) queries in expectation? One way to make progress towards such a result is to continue in the line of <ref> and try to understand distance reconstruction problems from a graph parameters point of view. It seems natural that having small balanced separators helps with obtaining a quasi-linear query complexity. For now, some additional structure on the separator is needed in <ref> (i.e. vertices being `close'). Thus, an interesting next step would be to study the query complexity in the case of bounded treewidth graphs, for which we expect new techniques are required. Does there exist a randomised algorithm that reconstructs an n-vertex graph of maximum degree Δ and treewidth k using O_Δ,k(n) queries in expectation? From a more practical viewpoint, studying the query complexity of reconstructing scale-free networks is of high importance as it is the class of graphs that best describes real-world networks like the Internet. Graphs in this class have vertices of large degree, therefore recent theoretical works (including this one), do not directly apply. In particular, not even an o(n^2) algorithm is known in this setting. How many distance queries are needed for reconstructing scale-free networks? We showed in this paper that deterministic algorithms with good query complexity exist for specific classes of graphs. One of the classes of graphs that does not fit in the scope of this paper but is known to have an efficient randomised algorithm is the class of bounded degree outerplanar graphs <cit.>. Note that outerplanar graphs also have a nice separator structure. For example, they always contain an induced cycle which vertex set is a 1/2-balanced separator. Does there exist a deterministic algorithm that reconstructs an n-vertex outerplanar graphs of maximum degree Δ in O_Δ(n) distance queries? We provided various randomised lower bounds in Section <ref> that are tight up to a multiplicative constant. We expect that determining the exact constant for the optimal expected number of queries for reconstructing n-vertex trees of maximum degree Δ for the entire range of (n,Δ) may be complicated. In fact, the constant may depend on the relation between n and Δ (e.g. for Δ=n-1, the reconstruction problem is trivial since the input graph is always a tree). We believe that for deterministic algorithms, when n is large compared to Δ and when n takes the form Δ^k, determining the correct constant could be achievable (yet require new ideas). Let Δ≥ 3. When restricted to sufficiently large n of the form n=Δ^k, is there a deterministic algorithm that reconstructs n-vertex trees of maximum degree Δ in cΔ nlog_Δ n queries for some constant c<1? A lower bound for trees immediately implies a lower bound for any class containing trees (such as k-chordal graphs), but we expect that a better lower bound can be proved for bounded degree graphs in general. 
In our algorithm for k-chordal we did not try to optimise the dependency on the maximum degree Δ and k, but we expect that this can be vastly improved upon, perhaps to a query complexity of C Δ k n log_Δ n for some constant C. If one believes the answer to <ref> to be positive, then the dependency on k should be poly-logarithmic. It would be interesting to see if lower bounds can also exploit the fact that larger induced cycles are allowed. In order to ensure the lower bound depends on input graphs that are not trees (for which we gave an algorithm Δ nlog_Δ n+O(Δ n) using queries), we pose the following problem. Is it true that for some fixed values of k and Δ and for all n sufficiently large, any algorithm reconstructing k-chordal graphs on n vertices of maximum degree Δ requires at least 10Δ nlog_Δ n queries? Finally, we believe the problems of reconstructing functions from the coordinate or from the word oracle are of independent interest and there are various variations on our setting that can be considered that our methods would partially apply to as well. For example, it is also natural to consider functions f:A→ B where the sets A and B are products of sets of different sizes (e.g. A=B=[a_1]× [a_2]×…× [a_k]). Acknowledgements We would like to thank Claire Mathieu for helpful suggestions and Guillaume Chapuy for helpful discussions regarding randomised lower bounds for the (n,k)-partition problem which inspired our randomised lower bound proofs. We are also grateful to Jatin Yadav for pointing us to an issue we did not address in an earlier version. alpha § OMITTED DETAILS: FINDING A GOOD RANDOM ORDER Let T be a tree with a vertex t∈ T. Let a_1,…,a_k be the sizes of the components of T-t and let v_1,…,v_k denote the neighbours of t in these components. We show that there is a random order on v_1,…,v_k such that the expected number of vertices placed before v_i is at most 1/2 (s /a_i) for all i, where s=∑_j=1^ka_j. We generate the order by independently sampling X_i∼ U[0,a_i] uniformly at random for all i∈ [k], where [0,a_i] denotes the set of real numbers between 0 and a_i. Almost surely, X_π(1)>…>X_π(k) for some π∈ S_k and this gives us our desired random order. We prove that the expected number of vertices placed before v_1 is at most 1/2a_2+…+a_k/a_1 and then the remaining cases will follow by symmetry. Let I(x_1) denote the number of vertices placed before v_1 given that X_1=x_1, i.e. the number of i∈{2,…,k} such that X_i>x_1: I(x_1)=∑_i=2^k Bern(max(a_i-x_1/a_i,0)). A Bernouilli random variable with probability p has expectation p. The expected position for v_1 is hence one plus 1/a_1∫_0^a_1[I(x_1)]dx_1=∑_i=2^k 1/a_1∫_0^min(a_1,a_i)1-x_1/a_idx_1. We now show for all i∈{2,…,ℓ} that the ith summand is at most 1/2a_i/a_1, which implies the number of vertices placed before v_1 is indeed at most 1/2a_2+…+a_k/a_1. We compute 1/a_1∫_0^min(a_1,a_i)1-x_1/a_idx_1= min(a_1,a_i)/a_1(1 - min(a_1,a_i)/2a_i). When a_i≤ a_1, the expression simplifies to a_i/a_11/2 as desired. When a_i≥ a_1, the expression simplifies to 1-1/2a_1/a_i which is at most 1/2a_i/a_1 since a_1a_i≤1/2 a_i^2+1/2 a_1^2. § NEARLY TIGHT DELTA-DEPENDENCY FOR CHORDAL GRAPHS There are multiple equivalent definitions for when a graph G is chordal, two of which are given below. * G has no induced cycle of length at least 4. * G has a tree decomposition (T,(B_t)_t∈ V(T)) such that B_t induces a clique for every t ∈ V(T). 
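The first of these two definitions can be checked directly on very small examples. The brute-force sketch below (exponential time, so purely illustrative; the graph format over vertices 0..n-1 is an assumption of the sketch) tests whether some vertex subset of size at least 4 induces a cycle.

from itertools import combinations

def is_chordal_bruteforce(n, edges):
    """Chordal iff no subset of >= 4 vertices induces a cycle (connected and 2-regular)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(4, n + 1):
        for subset in combinations(range(n), size):
            degs = [sum(1 for w in subset if w in adj[v]) for v in subset]
            if all(d == 2 for d in degs) and is_connected(subset, adj):
                return False          # found an induced cycle of length >= 4
    return True

def is_connected(subset, adj):
    """Check connectivity of the induced subgraph on `subset` by a plain BFS."""
    subset = set(subset)
    start = next(iter(subset))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v] & subset:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == subset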
Throughout, we often make use of the folklore fact that the class of chordal graphs is closed under taking minors. The class of chordal graphs is closed under edge contraction and taking induced subgraphs. A pseudocode version for our algorithm is written down in <ref> and the description, correctness and query complexity analysis is given in the proof. In view of our lower bound, the algorithm has a tight dependency on Δ up to a (logΔ)-factor. There exists a deterministic algorithm to reconstruct n-vertex chordal graphs of maximum degree Δ≤√(log n) using O(Δ n log n) distance queries. Let G = (V,E) be a chordal graph. The algorithm follows a more involved version of the tree reconstruction algorithm described for <ref>. We choose an arbitrary root v_0 and ask (v_0,V). We then infer a partition of the vertices in “layers” with respect to their distance to v_0. We denote the ith layer by L_i = {u∈ V | d(u,v_0) = i} and for convenience we define L_≤ i = {u∈ V | d(u,v_0) ≤ i} and L_≥ i similarly. Our goal is to reconstruct G[L_≤ i] iteratively, recovering edges incident to L_i via a sequence of 1/2-balanced clique separators. Suppose that we reconstructed G[L_≤ i-1] and we want to reconstruct the two edge sets E_i-1,i = {uv ∈ E | u ∈ L_i-1, v ∈ L_i} and E_i,i = {uv ∈ E | u ∈ L_i, v ∈ L_i}. Note that, by combining these two edge sets with G[L_≤ i-1], we would obtain the complete edge set of G[L_≤ i]. During the analysis we will need the following structural claim. For any u ∈ L_i, N(u) ∩ L_i-1 is a clique. Suppose, towards a contradiction, that there exist two vertices v,w ∈ N(u) ∩ L_i-1 such that {v,w}∉ E. We build the graph G' by contracting the connected subgraph L_0 ∪…∪ L_i-2 in G into a single vertex v'_0. By doing so, the neighbourhood of v'_0 is exactly L_i-1. Note that v,v'_0,w,u form an induced C_4 in G'. Since G' is not chordal, <ref> implies that G is also not chordal, which is a contradiction. We first describe an algorithm that reconstructs, for a fixed u ∈ L_i, the set E_i-1,i^u = {{v,u}| v ∈ N(u) ∩ L_i-1}. Repeating this for every u ∈ L_i gives E_i-1,i=∪_u∈ L_iE_i-1,i^u. We fix a tree decomposition T of G_1 = G[L_0 ∪…∪ L_i-1] in which each bag is a maximal clique, which is possible because G_1 is chordal. Note that the bag size is bounded by Δ+1. We will define a sequence (K_j)_j=1^ℓ of clique separators (chosen to be bags of T) and connected subgraphs (G_j)_j=1^ℓ of G_1, for some ℓ=O(log n), similar to the proof of Theorem <ref>. We maintain throughout that G_j contains N(u) ∩ L_i-1 for all j. Suppose we have defined G_j for some j≥ 1 and let us describe how to build G_j+1. (a) If |V(G_j)| ≤ 4 Δlog(n), we simply ask (V(G_j),u) and directly deduce N(u) ∩ L_i-1. We set ℓ=j and finish this subroutine. Otherwise, |V(G_j)| > 4Δlog(n). We define K_j to be a bag of T that forms a 1/2-balanced separator of G_j (This is possible by <ref>.). Moreover, we consider K_j ⊆ L_≤ i-1, we prove that such a separator exists in <ref>. We ask (u,K_j). (b) If d(u,K_j) = 1, let k∈ K_j be a neighbour of u. Then k∈ L_i-1 since K_j⊆ L_≤ i-1 and u∈ L_i. By <ref>, N(u)∩ L_i-1 is a clique and so each neighbour of u in L_i-1 is adjacent or equal to k. We know N[K_j]∩ L_i-1 since G[L_≤ i-1 ] has already been reconstructed. We ask (u,N[K_j]∩ L_i-1), record the edges from u to N(u) ∩ L_i-1, set ℓ=j and terminate this subroutine. Otherwise, d(u,K_j) > 1. This implies that we know the vertex set N[K_j]. (c) We ask (u,N[K_j]∖ K_j) and select an arbitrary x∈_y ∈ N[K_j]∖ K_jd(u,y). 
We define G_j+1 to be the component of G_j∖ K_j containing x. We increase j by one and repeat the same procedure. We now prove the correctness of the algorithm described above. The correctness of (a) and (b) are direct. For (c), we still need to show that N(u)∩ L_i-1 is indeed included in G_j+1 as well as the claim below. In case (c) of the algorithm, N[K_j]⊆ L_≤ i-1. Let T be rooted in a vertex corresponding to a bag containing v_0. By <ref> there is a bag K_j of T that forms a 1/2-balanced separator of V(G_j) in G_1. If v_0 ∈ K_j, then i ≤ 2. This means |G_j| ≤ 1 + Δ + Δ^2 ≤ 4Δlog(n), so we would have applied (a) instead. Otherwise v_0 ∉K_j. Suppose towards a contradiction that K_j has a neighbour in L_≤ i then K_j ⊆ L_≥ i-2 as K_j induces a clique in G_1. Consider K' the unique bag parent of K_j in the tree T. By definition of K_j, K' is not a 1/2-balanced separator of G_j. Therefore there must exist a large component C of G_j in G[V(G_1) ∖ K']. Note that G_1 is connected thus K_j and K' need to share a vertex, which implies that K' ⊆ L_≥ i-3. In particular, the component of G[V(G_1) ∖ K'] that does not contain v_0 sum up to at most Δ^3 +Δ^2 + Δ≤ 2 Δ^3 ≤ 2 Δlog(n) vertices. Remember that we are not in case (a) therefore |V(G_j)|≥Δ 4log(n), thus C is contained in the component containing v_0 of G[V(G_1)∖ K']. But if this is the case C is also included in the component containing v_0 of G[V(G_1) ∖ K_j], which yields a contradiction with the fact that K_j is a 1/2-balanced separator of G_j in G_1. It remains to show that G_j+1 contains N(u) ∩ L_i-1. By the induction hypothesis and since d(u,K_j)>1, N(u)∩ L_i-1 is contained in V(G_j)∖ V(K_j). By Claim <ref>, it is contained in a single component H of G_j∖ K_j. Suppose towards contradiction that H≠ G_j+1. We give an example in <ref>. Our aim will be to identify an induced cycle of length 4 in a contraction of G. Let P=p_1p_2… p_ℓ be a shortest path in G from be a shortest path in G from x=p_1 to u=p_ℓ. Let s ∈ [1,ℓ-1] be the smallest index with p_s+1∈ H and let P'=p_1… p_s be a subpath of P that avoids H but is adjacent to it. Since d(u,K_j)>1, and x is a vertex in N[K_j]∖ K_j closest to u, we find that the path obtained from P by adding a neighbour in K_j of x to P, is a shortest path between u and K_j. This implies that d(P' ∖{x},K_j) ≥ d(P' ∖{x},x)+1≥ 2. Consider the graph G' obtained from G by contracting K_j to a single vertex κ, H to a single vertex h and contracting the sub-path P' ∖{x} into a single vertex p. Note that these contractions are legal as all contracted sets are connected and disjoint from each other. We claim that {x,h,p,κ} induces a 4-cycle in G'. Since H and K_j are adjacent, and x∈ N[K_j], we find h,κ and x,κ will be adjacent in G' as well. The fact that d(P ∖{x}, K_j) ≥ 2 implies that p and κ will not be adjacent. By construction, p is connected to x and h (note that u has a neighbour in H). Finally, x is not adjacent to h, since otherwise x would be part of the connected component H. Thus {x,h,p,κ} induces a 4-cycle in a contraction of G, a contradiction with G being chordal (see Lemma <ref>). Therefore, the algorithm described earlier is correct and we can reconstruct E_i,i-1. We now argue that we can reconstruct E_i,i. First, we need a structural claim in the same flavour as <ref>. For any edge uv ∈ E_i,i, either N(u) ∩ L_i-1⊆ N[v] ∩ L_i-1 or N[v] ∩ L_i-1⊆ N(u) ∩ L_i-1. Suppose towards contradiction that u (resp. v) has a private neighbour u' (resp. v'). 
Note that u'v' is not edge: otherwise, u,u',v',v would induce a 4-cycle. Once again, we build the graph G' by contracting the connected subgraph L_≤ i-2 into a single vertex v'_0. Note that u'v'_0,v'v_0'∈ E(G') since u',v'∈ L_i-1. On the other hand, both u and v are in L_i and thus have no edge towards L_i-2, and therefore no edge towards v'_0. We reach a contradiction using <ref> as now u',v'_0,v',v,u forms an induced cycle in G'. In order to reconstruct E_i,i, we simply use a brute-force algorithm that queries any pair uv⊆ L_i for which N(u)∩ L_i-1 and N[v]∩ L_i-1 are comparable. This correctly identifies all edges by <ref>. Combining the algorithms to reconstruct E_i-1,i and E_i,i described above, we reconstruct G[L_≤ i ] from G[L_≤ i-1] for all i until we obtain G. It remains to study the query complexity. We first show the algorithm uses O(Δlog n|L_i|) queries to reconstruct E_i-1,i. Since we repeatedly find 1/2-balanced separators, |G_j| ≤ |G_j-1|/2 for all j so the maximal recursion depth is ⌈log n ⌉. In the cases (a) and (b) we terminate and ask O(Δ^2) = O(log(n)) queries. We show below how to adjust the queries in a recursion step to reduce the number of queries at each non-terminal step to at most 2Δ. During the j^th step when we ask (u,N[K_j]), we can order each of the query arbitrarily. We will explain how to do so slightly more efficiently than querying all vertices in N[K_j]. We use the fact N[K_j] is a graph of diameter 3, thus there are at most four possible values for d(u,w) for w ∈ N[K_j]. Let us denote these values by ℓ,ℓ+1,ℓ+2,ℓ+3. The goal of the process is to find the connected component of G∖ K_j which contains all vertices x such that d(u,x) = ℓ. The unicity of this component implies that any w ∈ K_j such that d(u,w) = ℓ+1 is a neighbour of the component. Therefore, we first ask (u,K_j) and consider a vertex w∈ K_j such that d(u,w) = ℓ+1. After that, we only need to query d(u,N[w]) in order to retrieve some vertex x ∈ N[w] such that d(u,x) = ℓ, and learn about the targeted component. Note that, doing so, we queried at most 2Δ vertices inside N[K_j]. We iterate the algorithm on all vertices in L_i therefore we obtain a query complexity of O(Δlog(n)|L_i|). The query complexity for computing E_i,i is also small. Given a vertex u ∈ L_i, |N(u) ∩ L_i-1| ≤Δ. For fixed u∈ L_i, the number of v∈ L_i for which N(u) ∩ L_i-1 and N[v] ∩ L_i-1 are comparable is at most |N[N(u) ∩ L_i-1]|≤Δ^2. Therefore, we reconstruct E_i,i using at most Δ^2 |L_i| = O(log(n) |L_i|) distance queries. Finally, (L_i)_i is a partition of the vertex set, so ∑_i |L_i|=n. Therefore, the entire algorithm uses at most O(Δ n log n) distance queries. § PSEUDOCODE §.§ Algorithm for trees distance reconstruction §.§ Algorithm for chordal graphs distance reconstruction
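As an informal complement to the pseudocode referenced above, the driver loop of the chordal-graph reconstruction can be sketched as follows. This is a Python-style sketch rather than the paper's pseudocode: the per-vertex clique-separator search is abstracted into an assumed helper `find_prev_neighbours`, and `dist` is an assumed batch distance oracle.

def reconstruct_chordal_edges(layers, dist, find_prev_neighbours):
    """Given the layers L_i around v_0 (from the query (V(G), v_0)), recover E_{i-1,i}
    vertex by vertex and E_{i,i} by querying only pairs of L_i whose neighbourhoods in
    L_{i-1} are comparable, as justified by the comparability claim above."""
    edges = set()
    prev = {}                                      # prev[u] = N(u) in the previous layer
    for i in sorted(layers):
        if i == 0:
            continue
        for u in layers[i]:
            prev[u] = find_prev_neighbours(u, i)   # edges towards the previous layer
            edges |= {frozenset((u, w)) for w in prev[u]}
        for u in layers[i]:                        # edges inside the layer
            for v in layers[i]:
                if u != v and (prev[u] <= prev[v] or prev[v] <= prev[u]):
                    if dist(u, {v})[v] == 1:       # each unordered pair is checked twice here
                        edges.add(frozenset((u, v)))
    return edges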
http://arxiv.org/abs/2306.08271v1
20230614061416
Multiclass Confidence and Localization Calibration for Object Detection
[ "Bimsara Pathiraja", "Malitha Gunawardhana", "Muhammad Haris Khan" ]
cs.CV
[ "cs.CV" ]
Multiclass Confidence and Localization Calibration for Object Detection Bimsara Pathiraja     Malitha Gunawardhana     Muhammad Haris Khan     Mohamed bin Zayed University of Artificial Intelligence, UAE    {bimsara.pathiraja,malitha.gunawardhana,muhammad.haris}@mbzuai.ac.ae July 31, 2023 ========================================================================================================================================================================================================================= Albeit achieving high predictive accuracy across many challenging computer vision problems, recent studies suggest that deep neural networks (DNNs) tend to make overconfident predictions, rendering them poorly calibrated. Most of the existing attempts for improving DNN calibration are limited to classification tasks and restricted to calibrating in-domain predictions. Surprisingly, very little to no attempts have been made in studying the calibration of object detection methods, which occupy a pivotal space in vision-based security-sensitive, and safety-critical applications. In this paper, we propose a new train-time technique for calibrating modern object detection methods. It is capable of jointly calibrating multiclass confidence and box localization by leveraging their predictive uncertainties. We perform extensive experiments on several in-domain and out-of-domain detection benchmarks. Results demonstrate that our proposed train-time calibration method consistently outperforms several baselines in reducing calibration error for both in-domain and out-of-domain predictions. Our code and models are available at <https://github.com/bimsarapathiraja/MCCL> § INTRODUCTION Deep neural networks (DNNs) are the backbone of many top-performing systems due to their high predictive performance across several challenging domains, including computer vision <cit.> and natural language processing <cit.>. However, some recent works <cit.> report that DNNs are susceptible to making overconfident predictions, which leaves them miscalibrated. This not only spurs a mistrust in their predictions, but more importantly, could lead to disastrous consequences in several safety-critical applications, such as healthcare diagnosis <cit.>, self-driving cars <cit.>, and legal research tools <cit.>. For instance, in self-driving cars, if the perception component wrongly detects a stop sign as a speed limit sign with high confidence, it can potentially lead to disastrous outcomes. Several strategies have been proposed in the recent past for improving model calibration. A simple calibration technique is a post-processing step that re-scales the outputs of a trained model using parameters which are learnt on a hold-out portion of the training set <cit.>. Despite being easy to implement, these post-processing approaches are restrictive. They assume the availability of a hold-out set, which is not always possible in many real-world settings. Another route to reducing calibration error is train-time calibration techniques, which intervene at the training time by involving all model parameters. Typically train-time calibration methods feature an auxiliary loss term that is added to the application-specific loss function to regularize predictions <cit.>. We note that almost all prior efforts towards improving model calibration target the task of visual image classification. Surprisingly, little to no noticeable attempts have been made in studying the calibration of visual object detection models. 
Visual object detection methods account for a major and critical part of many vision-based decision-making systems. Moreover, most current calibration techniques only aim at reducing calibration error for in-domain predictions. However, in many realistic settings, it is likely that, after model deployment, the incoming data distribution will continuously drift away from the training data distribution. In essence, the model should be well-calibrated for both in-domain and out-of-domain predictions. To this end, in this paper, we aim to study the calibration of (modern) deep learning-based object detection methods. In this pursuit, we observe that (a) object detection methods are intrinsically miscalibrated, (b) besides displaying noticeable calibration errors for in-domain predictions, they are also poorly calibrated for out-of-domain predictions, and (c) the current calibration techniques for classification are sub-optimal for object detection (<Ref>). Towards improving the calibration performance of object detection methods, and inspired by the train-time calibration route, we propose a new train-time calibration approach that aims at jointly calibrating the predictive multiclass confidence and bounding box localization. Contributions: (1) We study the relatively unexplored direction of calibrating modern object detectors and observe that they are intrinsically miscalibrated in both in-domain and out-of-domain predictions. Also, the existing calibration techniques for classification are sub-optimal for calibrating object detectors. (2) We propose a new train-time calibration method for detection, at the core of which is an auxiliary loss term that attempts to jointly calibrate multiclass confidences and bounding box localization. We leverage predictive uncertainty in both the multiclass confidences and the bounding box localization. (3) Our auxiliary loss term is differentiable, operates on minibatches, and can be utilized with other task-specific loss functions. (4) We perform extensive experiments on challenging datasets, featuring several in-domain and out-of-domain scenarios. Our train-time calibration method consistently reduces the calibration error across DNN-based object detection paradigms, including FCOS <cit.> and Deformable DETR <cit.>, for both in-domain and out-of-domain predictions. § RELATED WORKS Post-processing calibration methods: A simple approach to calibration is a post-processing step, which re-scales the outputs of a trained model using some parameters that are learned on a hold-out portion of the training set. Temperature scaling (TS), which is an adaptation of Platt scaling <cit.>, is a prominent example. It divides the logits (pre-softmax activations) of a trained network by a fixed temperature parameter (T>0) that is learned using a hold-out validation set. An obvious limitation of TS is that it decreases the confidence of the whole (confidence) vector, including the confidence of the correct class. Beyond using a single temperature parameter (T), some works use a matrix (M) to transform the logits; the matrix (M) is also learnt using a hold-out validation set. Dirichlet calibration (DC) employed Dirichlet distributions to generalize the Beta-calibration <cit.> method, originally proposed for binary classification, to a multi-class setting. DC is realized as an extra layer in a neural network whose input is the log-transformed class probabilities.
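As a concrete illustration of this post-hoc family, a minimal sketch of temperature scaling is given below. It assumes hold-out logits and labels are already available as PyTorch tensors; the function name and optimisation settings are illustrative and not taken from any of the cited implementations.

```python
import torch
import torch.nn.functional as F

def fit_temperature(holdout_logits, holdout_labels, lr=0.01, steps=200):
    """Learn a single temperature T > 0 on a hold-out set by minimising the NLL."""
    log_t = torch.zeros(1, requires_grad=True)      # T = exp(log_t) keeps T positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(holdout_logits / log_t.exp(), holdout_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# Usage sketch: T = fit_temperature(val_logits, val_labels)
# calibrated_probs = torch.softmax(test_logits / T, dim=1)
```

Note that, as stated above, the learned T rescales the entire confidence vector uniformly, which is exactly the limitation that motivates richer (e.g. matrix or Dirichlet) post-hoc maps.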
The work of <cit.> proposed a differentiable approximation of the expected calibration error (ECE) and utilized it in a meta-learning framework to obtain well-calibrated models. Islam et al. <cit.> achieved class-distribution-aware calibration using temperature scaling (TS) and label smoothing (LS) <cit.> for long-tailed visual recognition. The majority of the aforementioned works address in-domain calibration. Recently, <cit.> proposed to gradually perturb the hold-out validation set to simulate out-of-domain data prior to learning the temperature parameter (T). Despite being easy to implement and effective, TS methods require a hold-out validation set, which is not readily available in many realistic scenarios. Train-time calibration techniques: Another approach to improving model calibration is train-time calibration. The Brier score is considered one of the earliest attempts at calibrating binary probabilistic forecasts <cit.>. Some recent works report that models trained with the negative log-likelihood (NLL) are prone to making overconfident predictions. A dominant class of train-time methods proposes an auxiliary loss term that is used in conjunction with the NLL. For instance, <cit.> utilized the Shannon entropy to penalize overconfident predictions. Similarly, Muller et al. <cit.> showed that label smoothing <cit.> also improves calibration. Recently, <cit.> introduced a margin into the label smoothing technique to obtain well-calibrated models. While re-visiting the focal loss (FL) <cit.>, <cit.> demonstrated that it is capable of implicitly calibrating DNNs. Liang et al. <cit.> incorporated the difference between confidence and accuracy (DCA) as an auxiliary loss term alongside the cross-entropy loss to achieve model calibration. Likewise, <cit.> developed the MMCE loss for calibrating DNNs, which is formulated using a reproducing kernel Hilbert space <cit.>. Most of these methods only calibrate the confidence of the predicted label, ignoring the confidences of the non-predicted classes. Recently, <cit.> proposed an auxiliary loss term for calibrating the whole confidence vector. Probabilistic and non-probabilistic methods: Many probabilistic approaches stem from the Bayesian formalism <cit.>, which assumes a prior distribution over the neural network (NN) parameters; the training data are then leveraged to obtain the posterior distribution over the NN parameters, which is in turn used to estimate the predictive uncertainty. Exact Bayesian inference is computationally intractable, so approximate inference methods are used instead, including variational inference <cit.> and stochastic expectation propagation <cit.>. A non-probabilistic alternative is ensemble learning, which quantifies uncertainty through the empirical variance of the network predictions. Ensembles can be created by varying model hyperparameters <cit.>, by random initialization of weights and random shuffling of training data <cit.>, through dataset shift <cit.>, or with Monte Carlo (MC) dropout <cit.>. In this work, we propose to use MC dropout <cit.> to quantify predictive uncertainty in both the class confidences and the bounding box localization. It allows creating a distribution over both outputs of a typical DNN-based object detector. A naive implementation of MC dropout can incur a high computational cost for large datasets and network architectures during model training, so we resort to an efficient implementation of MC dropout that greatly reduces this computational overhead.
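A minimal sketch of the MC-dropout recipe referred to above is shown below: dropout is kept active at prediction time and N stochastic passes are run through a lightweight head on shared backbone features, which is one simple way to keep the overhead small. The head architecture and tensor shapes are placeholders for illustration, not the layout of any particular detector.

```python
import torch
import torch.nn as nn

class DropoutHead(nn.Module):
    """Hypothetical prediction head with a dropout layer before the final linear layer."""
    def __init__(self, in_dim, out_dim, p=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p)
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, feats):
        return self.fc(self.dropout(feats))

def mc_dropout_samples(head, feats, n_passes=5):
    """Run N stochastic forward passes with dropout active.

    Returns the empirical mean and variance over the passes, which serve as the
    point estimate and the predictive uncertainty. The shared backbone features
    `feats` are computed only once; only the lightweight head is run repeatedly.
    """
    head.train()  # keep dropout active during the repeated passes
    samples = torch.stack([head(feats) for _ in range(n_passes)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0)
```

At train time these samples remain differentiable, so their mean and variance can enter an auxiliary loss, as is done in the method described below.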
We note that almost all prior work on calibration targets the classification task <cit.>, and no noticeable study has been published that strives to improve the calibration of object detection methods, especially for out-of-domain predictions. In this paper, we explore the problem of calibrating object detectors and observe that they are inherently miscalibrated for both in-domain and out-of-domain predictions. To this end, we propose a train-time calibration method aimed at jointly calibrating multiclass confidence and bounding box localization. § METHOD §.§ Defining and Measuring Calibration Calibration for classification: A perfectly calibrated model for (image) classification outputs class confidences that match the predictive accuracy. If the accuracy is less than the confidence, the model is overconfident; if the accuracy is higher than the confidence, the model is underconfident. Let 𝒟 = {(𝐱_i,y_i^*)}_i=1^N denote a dataset consisting of N examples drawn from a joint distribution 𝒟(𝒳, 𝒴), where 𝒳 is the input space and 𝒴 is the label space. For each sample 𝐱_i ∈ 𝒳, y_i^* ∈ 𝒴 = {1,2,...,K} is the corresponding ground-truth class label. Let 𝐬 ∈ ℝ^K be the vector containing the predicted confidences of all K classes, and 𝐬_i[y] be the confidence predicted for a class y on a given input example 𝐱_i. The model is said to be perfectly calibrated when, for each sample (𝐱,y) ∈ 𝒟: ℙ(y=y^* | 𝐬[y] = s) = s where ℙ(y=y^* | 𝐬[y] = s) is the accuracy at a given confidence score s in 𝐬. Calibration for object detection: Contrary to classification, in object detection the dataset contains the ground-truth annotations for each object in an image, specifically the object localization information and the associated object categories. Let 𝐛^*∈ℬ = [0,1]^4 be the bounding box annotation of the object and y^* be the corresponding class label. The prediction from an object detection model consists of a class label ŷ, with a confidence score ŝ and a bounding box 𝐛̂. Unlike classification, for object detection, precision is used instead of accuracy for calibration. Therefore, an object detector is perfectly calibrated when <cit.>: ℙ(m=1 | ŝ =s, ŷ = y, 𝐛̂= 𝐛) = s ∀ s ∈ [0,1], y ∈𝒴, 𝐛∈ [0,1]^4 where m=1 denotes a correctly classified prediction, i.e., one whose ŷ matches y^* and whose Intersection-over-Union (IoU) between 𝐛̂ and 𝐛^* is greater than a certain threshold γ. Thus, ℙ(m=1) amounts to approximating ℙ (ŷ=y^*, 𝐛̂ = 𝐛^*) with a certain IoU threshold γ. Measuring miscalibration for classification and object detection: For classification, the expected calibration error (ECE) is used to measure the miscalibration of a model. The ECE measures the expected deviation of the predictive accuracy from the estimated confidence <cit.>: 𝔼_ŝ [ | ℙ(ŷ=y| ŝ=s)-s| ] As ŝ is a continuous random variable, the ECE is approximated by binning the confidence space of ŝ into N equally spaced bins. Therefore, the ECE is approximated by <cit.>: ECE = ∑_n=1^N|I(n)|/|𝒟| · | acc(n) -conf(n)| where |I(n)| is the number of examples in the n^th bin, and |𝒟| is the total number of examples. acc(n) and conf(n) denote the average accuracy and average confidence in the n^th bin, respectively.
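The binned approximation in the last equation can be written down in a few lines of numpy; the sketch below assumes flat arrays of predicted confidences, predicted labels and ground-truth labels, with equally spaced confidence bins as described.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Binned ECE: sum_n |I(n)|/|D| * |acc(n) - conf(n)| over equally spaced bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, ece = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()        # average accuracy in the bin
            conf = confidences[in_bin].mean()   # average confidence in the bin
            ece += (in_bin.sum() / total) * abs(acc - conf)
    return ece
```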
Although the ECE measure can be used for measuring miscalibration of object detectors, it fails to reflect the calibration improvement when additional box coordinates are used for calibration since the ECE considers confidence of each example independent of the box properties to apply binning and to calculate an average precision. In this work, we use location-dependent calibration, termed as detection ECE (D-ECE). It is defined as the expected deviation of the observed precision with respect to the given box properties. 𝐄_ŝ𝐛̂[ | ℙ(m = 1| ŝ = s, ŷ = y , 𝐛̂= 𝐛) -s | ] Similar to ECE, the multidimensional D-ECE is calculated by partitioning both the confidence and box property spaces in each dimension k into N_k equally spaced bins. Thus, D-ECE is given by <cit.>: D-ECE_k = ∑_n=1^N_total|I(n)|/|𝒟| . |prec(n) - conf(n)| where N_total is the total number of bins. prec(n) and conf(n) denote the average precision and confidence in each bin, respectively. §.§ Proposed train-time calibration: MCCL This section describes our new train-time calibration method at the core of which is an auxiliary loss function. This auxiliary loss formulation aims at jointly calibrating the multiclass confidence and bounding box localization. It is based on the fact that, the modern object detectors (based on DNNs) predict a confidence vector along with the bounding box parameters. The two key quantities to our loss function are (1) the predictive certainty in class logits and the bounding box localization and, (2) the class-wise confidence after computing class-wise logits mean (termed mean logits based class-wise confidence hereafter) and mean bounding box localization. The predictive certainty in class-wise logits is used in-tandem with the mean logits based class-wise confidence to calibrate the multi-class confidence scores. While, the predictive certainty in the bounding box prediction is used to calibrate the bounding box localization. Instead of inputting the class-wise logits and predicted bounding box parameters to the classification loss and regression loss in task-specific detection losses, we input the class-wise mean logits and mean bounding box parameters, respectively. We first describe how to compute the mean logits based class-wise confidence, mean bounding box parameters, and the certainty in both class logits and bounding box localization. Quantifying means and certainties: For the n^th positive location, we aim to quantify the mean logits based class-wise confidence 𝐬̅_n∈ℝ^K and class-wise certainty in logits 𝐜_n∈ℝ^K as well as the mean bounding box parameters 𝐛̅_n∈ [0,1]^J and certainty in bounding box localization g_n. Where J is the number of bounding box parameters. Given an input sample (image), we perform N stochastic forward passes by applying the Monte-Carlo (MC) dropout <cit.>. It generates a distribution over class logits and bounding box localization. Assuming one-stage object detector (e.g.,<cit.>), we insert a dropout layer before the classification layer and the regression layer. Let 𝐳_n∈ℝ^N × K and 𝐫_n∈ℝ^N × J encode the distributions over class-wise logit scores and bounding box parameters, respectively, corresponding to n^th positive location obtained after performing N, MC forward passes. We obtain the mean logits based class-wise confidence 𝐬̅_n∈ℝ^K by first taking the mean along the first dimension of 𝐳_n to get class-wise mean logits and then applying the softmax. 
To obtain class-wise certainty 𝐜_n, we first estimate the uncertainty 𝐝_n∈ℝ^K by computing the variance along the first dimension of 𝐳_n. Then, we apply tanh over 𝐝_n and subtract it from 1 as: 𝐜_n = 1 - tanh(𝐝_n), where tanh is used to scale the uncertainty 𝐝_n∈ [0,inf) between 0 and 1. Similarly, we estimate the mean bounding box parameters 𝐛̅_n and the certainty g_n in the bounding box parameters for the n^th positive location. Let {σ^2_n}_j=1^J and {μ_n}_j=1^J be the vectors (J is the number of bbox parameters) comprised of variances and means of predicted bounding box parameters distribution 𝐫_n. These variances and the means are computed along the first dimension of 𝐫_n. We term {μ_n}_j=1^J as the mean bounding box parameters 𝐛̅_n. Also, let μ_n,com denote the combined mean, computed as μ_n,com = 1/J∑_j=1^Jμ_n,j. Then, we estimate the (joint) uncertainty u_n as: u_n = 1/J∑_j=1^J [σ^2_n,j + (μ_n,j - μ_n,com)^2 ]. The certainty g_n in the n^th positive bounding box localization is then computed as: g_n = 1 - tanh(u_n). We leverage these estimated mean logits based class-wise confidence, class-wise certainty and the certainty in bounding box localization to formulate the two components of our auxiliary loss: multi-class confidence calibration (MCC), and localization calibration (LC). For MCC, we compute the difference between the fused mean confidence and certainty with the accuracy. For LC, we calculate the deviation between the predicted bounding box overlap and the predictive certainty of the bounding box. Both quantities are computed over the mini-batch during training. Multi-class confidence calibration (MCC): To achieve multi-class confidence calibration, we leverage the mean logits based class-wise confidence and class-wise certainty and fuse them by computing class-wise mean. The resulting vector is termed as the multiclass fusion of mean confidence and certainty. Then, we calculate the absolute difference between the fused vector and the accuracy as: ℒ_MCC = 1/K∑_k=1^K | 1/M∑_l=1^N_b∑_n=1^N_pos𝐯_l,n[k] - 1/M∑_l=1^N_b∑_n=1^N_pos𝐪_l,n[k] | where M=N_b × N_pos. N_b is the number of samples in the minibatch and N_pos represents the number of positive locations. 𝐪_l,n[k] = 1 if k is the ground truth class of the bounding box predicted for the n^th location in the l^th sample. 𝐯_l,n[k] = (𝐬̅_l,n[k] + 𝐜_l,n[k])/2, where 𝐬̅_l, n[k] and 𝐜_l, n[k] are the mean confidence and the certainty, respectively, for the class k of the n^th positive location in the l^th sample. The ℒ_MCC is capable of calibrating the confidence of both the predicted label and non-predicted labels. It penalizes the model if, for a given class k, the fusion (of mean logits based class-wise confidence and certainty in class-wise logits) across minibatch deviates from the average occurrence of this class across minibatch. Localization calibration (LC): We calibrate the localization component by leveraging the certainty in bounding box prediction. Next, we compute the absolute difference between the mean bounding box overlap (with the ground truth) and the certainty in the bounding box prediction: ℒ_LC = 1/N_b∑_l=1^N_b1/N_pos^l∑_n=1^N_pos^l | [IoU(𝐛̅_n,l, 𝐛_n,l^*) - g_n,l] | where N_pos^l denotes the number of positive bounding box regions in the l^th sample. 𝐛̅_n,l denotes the mean bounding box parameters and g_n,l is the certainty for the n^th positive bounding box prediction from l^th sample. 
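A condensed PyTorch sketch of the quantities and the two auxiliary terms defined above is given below, written for the positive locations of one minibatch pooled together; the tensor shapes follow the text, whereas the variable names, the IoU helper and the exact batching are illustrative rather than the authors' implementation. The two terms are combined with a weight β, as defined immediately after this sketch.

```python
import torch

def mccl_aux_loss(mc_logits, mc_boxes, gt_onehot, gt_boxes, iou_fn, beta=1.0):
    """Sketch of L_MCC + beta * L_LC for the positive locations of one minibatch.

    mc_logits: (N, M, K) class logits from N MC-dropout passes for M positive locations.
    mc_boxes:  (N, M, J) predicted box parameters from the same passes.
    gt_onehot: (M, K) one-hot ground-truth classes; gt_boxes: (M, J) ground-truth boxes.
    iou_fn: assumed callable returning the IoU between two (M, J) box tensors.
    """
    # Mean-logit confidence s_bar and class-wise logit certainty c = 1 - tanh(var)
    mean_logits = mc_logits.mean(dim=0)                 # (M, K)
    s_bar = torch.softmax(mean_logits, dim=-1)          # (M, K)
    c = 1.0 - torch.tanh(mc_logits.var(dim=0))          # (M, K)

    # Multiclass confidence calibration: fuse confidence and certainty, compare to accuracy
    v = 0.5 * (s_bar + c)                               # (M, K)
    l_mcc = (v.mean(dim=0) - gt_onehot.float().mean(dim=0)).abs().mean()

    # Box means and joint certainty g = 1 - tanh(u), with u combining variance and spread
    mu = mc_boxes.mean(dim=0)                           # (M, J) mean box parameters
    var = mc_boxes.var(dim=0)                           # (M, J)
    mu_com = mu.mean(dim=-1, keepdim=True)              # (M, 1) combined mean
    u = (var + (mu - mu_com) ** 2).mean(dim=-1)         # (M,)
    g = 1.0 - torch.tanh(u)

    # Localization calibration: |IoU(mean box, gt box) - certainty|
    l_lc = (iou_fn(mu, gt_boxes) - g).abs().mean()

    return l_mcc + beta * l_lc
```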
Both ℒ_MCC and ℒ_LC operate over the mini-batches, and we combine them to get our new auxiliary loss term ℒ_MCCL-aux: ℒ_MCCL-aux = ℒ_MCC + βℒ_LC where β is a hyperparameter to control the relative contribution of ℒ_LC to the overall loss ℒ_MCCL-aux. § EXPERIMENTS Datasets: To evaluate the in-domain calibration performance, we use the following five datasets: Sim10K  <cit.>, KITTI <cit.>, Cityscapes (CS) <cit.>, COCO <cit.>, and PASCAL VOC(2012) <cit.>. Sim10K <cit.> contains synthetic images of the car category, and offers 10K images which are split into 8K for training, 1K for validation and 1K for testing. Cityscapes <cit.> is an urban driving scene dataset and consists of 8 object categories. It has 2975 training images and 500 validation images, which are used for evaluation. KITTI <cit.> is similar to Cityscapes as it contains images of road scenes with a wide view of the area, except that KITTI images were captured with a different camera setup. Following prior works, we consider car class for experiments. We use train2017 version of MS-COCO <cit.> and it offers 118K training images, 5K validation images, and 41K test images. PASCAL VOC 2012 <cit.> consists of 5,717 training and 5,823 validation images, and provides bounding box annotations for 20 classes. For evaluating out-of-domain calibration performance, we use Sim10K to CS, KITTI to CS, CS to Foggy-CS, COCO to Cor-COCO, CS to BDD100K<cit.>, VOC to Clipart1k<cit.>, VOC to Watercolor2k<cit.>, and VOC to Comic2k<cit.>. Foggy Cityscapes (CS-F)<cit.> dataset is developed using Cityscapes dataset <cit.> by simulating foggy weather leveraging the depth maps in Cityscapes with three levels of foggy weather. Cor-COCO is a corrupted version of MS-COCO val2017 dataset for out-of-domain evaluation, and is constructed by introducing random corruptions with severity levels defined in <cit.>. Clipart1k <cit.> contains 1K images, which are split into 800 for training and 200 for validation, and shares 20 object categories with PASCAL VOC. Both Comic2k<cit.> and Watercolor2k<cit.> are comprised of 1K training images and 1K test images, and share 6 categories with Pascal VOC. BDD100k <cit.> offers 70K training images, 20K test images and 10K validation images. We use validation set for out-of-domain evaluation. Implementation Details: For all experiments, we use Tesla V100 GPUs. For COCO experiments, we use 8 GPUs and follow training configurations reported in <cit.>. For experiments on all other datasets, we utilize 4 GPUs and follow training configurations listed in <cit.>. We chose β in <Ref> from {0.01, 1}. For further training details, we refer to the supplementary material. Evaluation metrics: We use D-ECE metric defined in <Ref> at IoU of 0.5 to measure calibration performance. Note that, in addition to classification scores, it takes into account the calibration of center-x, center-y, width, and height of the predicted box. For reporting detection performance, we use mAP and [email protected] metrics. Baselines: We evaluate our train-time calibration method against models trained with task-specific losses of a CNN-based object detector, namely FCOS <cit.>, and ViT-based object detector, namely Deformable DETR<cit.>. We then compare with the temperature scaling post-hoc method and further with the recently proposed auxiliary loss functions for classification, including MDCA<cit.> and AvUC<cit.>. 
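Before turning to the results, the D-ECE evaluation used throughout can be sketched as below: detections are binned jointly over confidence and the box properties (e.g. c_x, c_y, w, h scaled to [0,1]), and the occupancy-weighted gap between per-bin precision and per-bin confidence is accumulated. The bin counts and the notion of a correct detection (class match with IoU ≥ 0.5) follow the text; the implementation itself is a simplified illustration rather than the reference D-ECE code.

```python
import numpy as np

def detection_ece(confidences, box_props, correct, bins_per_dim=5, conf_bins=10):
    """Multidimensional D-ECE over confidence and box properties (e.g. cx, cy, w, h).

    confidences: (D,) predicted scores; box_props: (D, P) properties scaled to [0, 1];
    correct: (D,) 1 if the detection matches the ground truth (class and IoU), else 0.
    """
    confidences = np.asarray(confidences, float)
    box_props = np.asarray(box_props, float)
    correct = np.asarray(correct, float)

    # Assign every detection to a joint (confidence, box-property) bin
    conf_idx = np.clip((confidences * conf_bins).astype(int), 0, conf_bins - 1)
    prop_idx = np.clip((box_props * bins_per_dim).astype(int), 0, bins_per_dim - 1)
    keys = [tuple([c, *p]) for c, p in zip(conf_idx, prop_idx)]

    total, dece = len(confidences), 0.0
    for key in set(keys):
        mask = np.array([k == key for k in keys])
        prec = correct[mask].mean()        # average precision in the bin
        conf = confidences[mask].mean()    # average confidence in the bin
        dece += (mask.sum() / total) * abs(prec - conf)
    return dece
```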
§.§ Results In-domain experiments: We compare the in-domain performance on five challenging datasets with the models trained with task-specific loss of FCOS in <Ref>. The results reveal that our train-time calibration method (MCCL) consistently improves the calibration performance of the task-specific losses. Notably, when added to the task-specific loss of FCOS, our MCCL reduces the D-ECE by 5.86% and 1.76% in VOC and CS datasets, respectively. Out-of-domain experiments: <Ref> and <Ref> report out-of-domain performance on eight challenging shifts. We see that our MCCL is capable of consistently improving the calibration performance in all shift scenarios. We notice a major decrease in D-ECE of 2.91% in Sim10K to CS shift. Similarly, we observe a reduction in D-ECE by a visible margin of 2.47% for CS to CS-foggy (CS-F). Comparison with post-hoc method: We choose temperature scaling (TS) as post-hoc calibration for comparison. The temperature parameter T is optimized using a hold-out validation set to re-scale the logits of the trained model (FCOS). <Ref> compares the performance of TS with our method (MCCL) on COCO, Sim10K, CS and COCO corrupted datasets. We note that TS performs inferior to our method and to baseline. This could be because when there are multiple dense prediction maps, as in FCOS, it is likely that a single temperature parameter T will not be optimal for the corresponding logit vectors. Test accuracy/precision: We note that in addition to consistently reducing D-ECE, our MCCL also preserves the mAP or [email protected] in almost all cases. In the in-domain experiments (<Ref>), the maximum reduction in [email protected] is only 0.98% in the Sim10K dataset. In the out-of-domain experiments (<Ref> & <Ref>), it mostly remains same in KITTI to CS, CS to BDD100K, VOC to watercolor, and VOC to comic shifts. Overcoming under/overconfidence: We plot confidence histogram (<Ref>) and reliability diagrams (<Ref>) to illustrate the effectiveness of our method in mitigating overconfidence or underconfidence. In confidence histograms (<Ref>) from Sim10K in-domain and CS to CS-F out-of-domain datasets, the average confidence is greater than the average precision which indicates the overconfident model. Our method reduces this gap in both scenarios compared to the baseline (FCOS) method and alleviates the overconfidence of the baseline. Similarly, the reliability diagrams (<Ref>) for VOC in-domain and Sim10K to CS domain shifts reveal that our method can mitigate both underconfident and overconfident predictions by a visible margin. Confidence values of incorrect detections: We analyse the confidence of our method in case of incorrect predictions (<Ref>). Compared to baseline, our method is capable of reducing the confidence of incorrect predictions over the whole spectrum of confidence range. With another baseline: <Ref> reports results with ViT-based object detector, namely Deformable DETR <cit.>. Compared to FCOS, the Deformable DETR, is already a relatively strong baseline in calibration error. We observe that our MCCL reduces the calibration error (D-ECE) for both in-domain and out-of-domain predictions. The major improvement (2.44% reduction in D-ECE) in calibration performance is observable for KITTI in-domain predictions. §.§ Ablation study Impact of each component in MCCL: We report the result of ablation experiments for validating the performance contribution of different components in our method (MCCL) (<Ref>). 
Moreover, we report the calibration performance of two train-time calibration losses for image classification: MDCA <cit.> and AvUC <cit.>. We can observe the following trends from <Ref>. The calibration performance of our MCCL is not due to merely providing the class-wise mean logits and mean bounding box parameters to the classification and regression losses of the detection-specific loss, respectively (ours w/o ℒ_LC & ℒ_MCC). Both ℒ_MCC and ℒ_LC are integral components of our method (MCCL). They are complementary to each other, and their proposed combination is vital to delivering the best calibration performance. For instance, in the Sim10K to CS shift, the proposed combination of ℒ_MCC and ℒ_LC achieves a significant reduction in D-ECE compared to MCC and LC alone. Further, the classification-based calibration losses are sub-optimal for calibrating object detection methods. D-ECE convergence: <Ref> compares the convergence of D-ECE for the baseline, the two components (classification and localization) of our method (MCCL), and MCCL. Although our MCCL and its two constituents do not directly optimize the D-ECE metric, they provide improved D-ECE convergence compared to the baseline. Impact on location-dependent calibration: <Ref> and <Ref> show that the miscalibration error (D-ECE) depends strongly on the relative object location (c_x, c_y) and/or its relative width and height (w, h). Moreover, it tends to increase as we approach the image boundaries. <Ref> plots the precision, confidence and D-ECE over an individual parameter, i.e., c_x. <Ref> plots 2D calibration heatmaps over object location and width/height, where each location in a heatmap represents the D-ECE. Both figures show that, compared to the baseline, our MCCL can decrease the D-ECE at the image boundaries, in addition to other locations. <Ref> also shows that, compared to the baseline, our MCCL adapts the confidence scores differently at different image locations by adjusting the shape of the confidence curve accordingly. MCDO overhead and trade-off analysis: Table <ref> reveals that, in our implementation, upon increasing the number of Monte-Carlo dropout (MCDO) passes to N={3,5,10,15}, there is only a small overhead in time cost over N=1. Table <ref> shows the impact of varying the number of MC dropout passes (N) on calibration performance. Upon increasing N, we see improved calibration, especially in the OOD scenario. § CONCLUSION Very few attempts have been made to study the calibration of object detectors. In this paper, we explored this direction and presented a new train-time technique for calibrating DNN-based object detection methods. At the core of our method is an auxiliary loss which aims at jointly calibrating multiclass confidence and box localization by leveraging their predictive uncertainties. Extensive experiments reveal that our method can consistently reduce the calibration error of object detectors from two different DNN-based object detection paradigms, for both in-domain and out-of-domain detections.
http://arxiv.org/abs/2306.02472v1
20230604204428
Inside-out growth in the early Universe: a core in a vigorously star-forming disc
[ "William M. Baker", "Sandro Tacchella", "Benjamin D. Johnson", "Erica Nelson", "Katherine A. Suess", "Francesco D'Eugenio", "Mirko Curti", "Anna de Graaff", "Zhiyuan Ji", "Roberto Maiolino", "Brant Robertson", "Jan Scholtz", "Stacey Alberts", "Santiago Arribas", "Kristan Boyett", "Andrew J. Bunker", "Stefano Carniani", "Stephane Charlot", "Zuyi Chen", "Jacopo Chevallard", "Emma Curtis-Lake", "A. Lola Danhaive", "Christa DeCoursey", "Eiichi Egami", "Daniel J. Eisenstein", "Ryan Endsley", "Ryan Hausen", "Jakob M. Helton", "Nimisha Kumari", "Tobias J. Looser", "Michael V. Maseda", "Dávid Puskás", "Marcia Rieke", "Lester Sandles", "Fengwu Sun", "Hannah Übler", "Christina C. Williams", "Christopher N. A. Willmer", "Joris Witstok" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
Inside-out growth in the early Universe]Inside-out growth in the early Universe: a core in a vigorously star-forming disc [1,2]William M. [email protected] [1,2]Sandro [email protected] 3]Benjamin D. Johnson 4]Erica Nelson ]Katherine A. Suess^5,6 1,2]Francesco D'Eugenio 1,2,7]Mirko Curti 8]Anna de Graaff 9]Zhiyuan Ji 1,2,10]Roberto Maiolino 5]Brant Robertson 1,2]Jan Scholtz 9]Stacey Alberts 11]Santiago Arribas^11 12,13]Kristan Boyett 14]Andrew J. Bunker 15]Stefano Carniani 16]Stephane Charlot 9]Zuyi Chen 14]Jacopo Chevallard 17]Emma Curtis-Lake 1,2]A. Lola Danhaive 9]Christa DeCoursey 9]Eiichi Egami 3]Daniel J. Eisenstein 18]Ryan Endsley 19]Ryan Hausen 9]Jakob M. Helton 20]Nimisha Kumari 1,2]Tobias J. Looser 21]Michael V. Maseda 1,2]Dávid Puskás 9]Marcia Rieke 1,2]Lester Sandles 9]Fengwu Sun 1,2]Hannah Übler 22]Christina C. Williams^22 9]Christopher N. A. Willmer 1,2]Joris Witstok [1]Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 OHA, UK [2]Cavendish Laboratory - Astrophysics Group, University of Cambridge, 19 JJ Thomson Avenue, Cambridge, CB3 OHE, UK [3]Center for Astrophysics Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA [4]Department for Astrophysical and Planetary Science, University of Colorado, Boulder, CO 80309, USA [5]Department of Astronomy and Astrophysics University of California, Santa Cruz, 1156 High Street, Santa Cruz CA 96054, USA [6]Kavli Institute for Particle Astrophysics and Cosmology and Department of Physics, Stanford University, Stanford, CA 94305, USA [7]European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei Muenchen, Germany [8]Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg, Germany [9]Steward Observatory University of Arizona 933 N. Cherry Avenue ,Tucson, AZ 85721, USA [10]Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK [11]Centro de Astrobiología (CAB), CSIC–INTA, Cra. de Ajalvir Km. 4, 28850- Torrejón de Ardoz, Madrid, Spain [12]School of Physics, University of Melbourne, Parkville 3010, VIC, Australia [13]ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia [14]Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK [15]Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy [16]Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France [17]Centre for Astrophysics Research, Department of Physics, Astronomy and Mathematics, University of Hertfordshire, Hatfield AL10 9AB, UK [18]Department of Astronomy, University of Texas, Austin, TX 78712, USA [19]Department of Physics and Astronomy, The Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA [20]AURA for European Space Agency, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA [21]Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter St., Madison, WI 53706, USA [22]NSF's National Optical-Infrared Astronomy Research Laboratory, 950 North Cherry Avenue, Tucson, AZ 85719, USA The physical processes that establish the morphological evolution and the structural diversity of galaxies are key unknowns in extragalactic astrophysics. Here we report the finding of the morphologically-mature galaxy JADES-GS+53.18343-27.79097, which existed within the first 700 million years of the Universe's history. 
This star-forming galaxy with a stellar mass of 10^8.6 solar masses consists of three components, a highly-compact core with a half-light radius of 144 pc, a strongly star-forming disc with a radius of 468 pc, and a star-forming clump, which all show distinctive star-formation histories. The central stellar mass density of this galaxy is within a factor of two of the most massive present-day ellipticals, while being globally 1000 times less massive. The radial profile of the specific star-formation rate is strongly rising toward the outskirts. This evidence strongly suggests the first detection of inside-out growth of a galaxy as a proto-bulge and a star-forming disc in the Epoch of Reionization. [ [ July 31, 2023 ================= § In the hierarchical ΛCDM cosmological model, galaxies sustain their star formation for extended periods of time in a quasi-steady state of gas inflow, gas outflow, and gas consumption <cit.>. To first order, the gas that cools at later cosmic epochs possesses higher angular momentum, therefore it settles in a more extended star-forming disc, implying that galaxies grow from the inside out <cit.>. However, the actual formation of galaxies in the cosmological context is more complex since a wide range of processes regulate star formation and the orbital distribution of stars, ranging from stellar feedback (from supernovae and stellar winds), black hole feedback, and cosmic rays, to galaxy-galaxy interactions and mergers <cit.>. Therefore, the morphological structure and spatially resolved growth rates of galaxies are a sensitive – but also complicated – probe of galaxy formation physics <cit.>. Galaxies in the local Universe display a range of morphologies, from younger disc-dominated spiral galaxies to older bulge-dominated ellipticals <cit.>, and are typically classified by the Hubble sequence <cit.>. The growth of local, star-forming galaxies has been observed on spatially resolved scales, confirming that galaxies grow inside out <cit.>. Most of the mass of local galaxies is found to have formed during the redshift range 1≤ z ≤3, around the period of “cosmic noon”, the peak of the cosmic star formation rate density in the Universe <cit.>. Observations at these redshifts have revealed many galaxies with massive bulges and rotating discs <cit.>. However, in order to probe the build-up of these 1≤ z ≤3 bulges we need to investigate even earlier cosmic times, characterising galaxies during the Epoch of Reionization (z≳6, <cit.>). The theoretical expectation is that the galaxy merger rate increases toward higher redshifts <cit.>, which could lead to central starbursts and hence reduced inside-out growth <cit.>. Direct observations will address questions about how galaxies grow their stellar mass and size in the early Universe, as well as shed light on whether this growth is predominantly inside-out and whether it is affected by galaxy mergers. JWST opens a new window to study the formation of the Hubble sequence and bulge-disc formation in the early Universe <cit.>. As part of the JWST Advanced Deep Extragalactic Survey (JADES, <cit.>), we report here the discovery of a core-disc galaxy with an off-centre clump () at a spectroscopic redshift of 7.430 (see Methods), when the Universe was only 700 million years old. appears to be growing inside-out, having built up a massive compact core at its centre before forming a surrounding star-forming disc. 
This is the first time we are able to characterise a core-disc system during the Epoch of Reionization and find the signature of early bulge formation. §.§ Core-disc-clump decomposition We use NIRCam <cit.> imaging in nine filters (F090W, F115W, F150W, F200W, F277W, F335M, F356W, F410M and F444W) and NIRSpec/MSA <cit.> spectroscopy from JADES in the GOODS-S region <cit.>. This gives us extended coverage of the rest-frame ultra-violet and optical wavelengths, which helps constrain the stellar populations on spatially resolved scales. Furthermore, the medium bands (F335M and F410M) constrain the strengths of emission lines. The upper left panel of Fig. <ref> shows a colour-composite red-green-blue (RGB, corresponding to F444W-F410M-F277W) image of . The image shows a bright central component (core) surrounded by an extended, disc-like component. We employ the tool ForcePho (B. Johnson, in prep.) to perform a detailed morphological and photometric analysis of , forward modelling all individual exposures across all bands simultaneously and accounting for the point-spread functions (see Methods, Section <ref>). We fit a three-component model, consisting of a disc Sérsic profile (fixing the Sérsic index to n=1 by limiting the bounds to ± 0.01), a compact component (n=2-5) consistent with a bulge or pseudobulge, and an off-centred clump (modelled as a quasi-point source). We obtain a central core with a Sérsic index of 2.5 and a half-light radius of 28 mas (144 pc), while the disc has a half-light radius of 91 mas (468 pc). For more details on the structural parameters, see Table <ref>. Fig. <ref>, upper right panel, shows the surface brightness profile of the galaxy in the F356W band, the best-fit model convolved with the PSF (black), the unconvolved Sérsic model components (core in red and disc in blue), and the WebbPSF and empirical PSF (ePSF) of the mosaic (grey dotted and dashed lines). This shows that the PSF-convolved three-component model fits the observed surface brightness profile well (see also Methods, Section <ref> for a detailed plot of the model and residual). We also fit a single-component model with and without a clump, both of which we test against our fiducial three-component model. We find that both of these model variants fail to account for the additional flux in the centre (for more details see Methods, Section <ref>). To check against PSF-approximation issues with ForcePho, we re-simulate the core, disc and clump fits convolved with the WebbPSF, and then refit them. Using the WebbPSF model, we find the results are consistent with the original fit within the errors, confirming that the ForcePho PSF approximations are appropriate (see Methods, Section <ref>). Fig. <ref>, bottom panel, shows the 2D and 1D NIRSpec R100 prism spectra of , including the positions of notable detected emission lines. This spectrum probes both the core and disc, as indicated by the slit position in the upper left panel of Fig. <ref>. Using these data we estimated a spectroscopic redshift of z=7.430, consistent with the photometric redshift. The measured emission line fluxes from the NIRSpec spectrum indicate that this galaxy hosts no prominent Active Galactic Nucleus (AGN) and that the emission in the core and disc is consistent with a stellar origin (see Methods, Section <ref>). §.§ Stellar population properties We show in Fig.
<ref> the ForcePho-based spectral energy distributions (SEDs) of the core, disc and clump components, which are diverse and indicate different stellar populations for the three components. To explore the stellar population properties of the three components, we fit the individual SEDs using the Bayesian SED-fitting tool Prospector <cit.>. We input the flux values and errors obtained for each band from ForcePho, and independently fit the SEDs with a flexible star-formation history (SFH) with the standard continuity prior <cit.>, a variable dust attenuation law with a free dust attenuation law index and normalisation, and a nebular emission model. We also test using a bursty continuity prior for the SFH, finding that we obtain stellar masses consistent with the standard continuity prior (see Methods, Section <ref>). In Fig. <ref> we plot the best fit for the SEDs of the three components, indicating that the SEDs are well reproduced by stellar emission in conjunction with dust attenuation and nebular emission. Table <ref> gives the key stellar population properties of the core, disc, and clump components. The core is the most massive of the components with log(M_*/M_⊙)=8.4_-0.2^+0.3, while the disc has log(M_*/M_⊙)=8.3_-0.2^+0.4, despite the core having a radius (144 pc) about a third of that of the disc (468 pc). This suggests that the core is dense (stellar mass surface density of Σ_ eff=M_*/(2π r^2)≈ 2×10^9 M_⊙ kpc^-2). The clump is an order of magnitude less massive (log(M_*/M_⊙)=7.2_-0.3^+0.4) than either the disc or the core, but highly star forming with a specific SFR (sSFR) of log(sSFR/yr^-1)=-7.6^+0.3_-0.4 averaged over 10 Myr. The bottom right panel of Figure <ref> shows the star-formation rate (SFR) as a function of lookback time for the core (red), the disc (blue) and the clump (purple). The core, disc and clump have different SFHs, with the core having undergone an earlier period of star formation followed by a recent decline, while the disc is currently undergoing a burst of star formation. Consistently, the stellar age (the lookback time when half of the stellar mass formed) for the core is rather old with t_ half=51^+113_-32 Myr, while the disc is younger (t_ half=23^+158_-19 Myr). The clump's age appears to be relatively unconstrained (t_ half=97^+104_-87 Myr), meaning that it could be young and have formed through a disc instability, or – alternatively – it could be older and be an accreted satellite galaxy. In total, we find that the combined core+disc galaxy has a stellar mass of log(M_*/M_⊙)=8.6^+0.3_-0.3, and SFR_ 10Myr=11.2^+5.6_-4.2 M_⊙/yr, giving it a specific SFR (sSFR) of log(sSFR/yr^-1)=-7.6_-0.5^+0.5 (typical for 7 ≤ z ≤ 8 galaxies <cit.>). For comparison, we derive from the NIRSpec spectrum a dust-corrected Hβ SFR of SFR_ Hβ=9_-7^+30 M_⊙/yr, which is consistent with our SED-derived SFR over 10 Myr. We find that the disc appears to be dusty, with an extinction in the V band of A_V=0.31_-0.13^+0.20 mag, while the core and clump components appear to be almost dust free (A_V=0.01_-0.01^+0.01 mag and A_V=0.02_-0.02^+0.04 mag, respectively). This is consistent with the NIRSpec-based Balmer decrement (Hγ/Hβ) estimate of A_ V, gas=0.8_-0.8^+1.2 mag, which traces the gas phase and hence the youngest stellar populations (i.e. the disc). The metallicities we infer from the photometry are very uncertain but consistent with the gas-phase metallicities of the separate components inferred from the spectroscopy (see Methods, Section <ref>).
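As an aside, the half-mass age quoted above (the lookback time at which half of the stellar mass had formed) can be computed from any posterior draw of the SFH with a few lines of numpy. The sketch below assumes a tabulated SFH on a grid of lookback times and neglects mass returned by stellar evolution; the grid and the example SFH are hypothetical.

```python
import numpy as np

def half_mass_age(lookback_gyr, sfr_msun_per_yr):
    """Lookback time at which half of the total formed stellar mass was in place.

    lookback_gyr: increasing grid of lookback times (0 = today), in Gyr.
    sfr_msun_per_yr: SFR on that grid. Mass loss through stellar evolution is ignored.
    """
    t = np.asarray(lookback_gyr, float)
    sfr = np.asarray(sfr_msun_per_yr, float)
    order = np.argsort(t)[::-1]                      # from earliest to latest epoch
    dt_yr = np.abs(np.gradient(t[order])) * 1e9      # time step in years
    mass_before = np.cumsum(sfr[order] * dt_yr)      # cumulative formed mass, oldest first
    i_half = np.searchsorted(mass_before, 0.5 * mass_before[-1])
    return t[order][i_half]                          # lookback time in Gyr

# Hypothetical example: a constant 10 Msun/yr burst over the last 100 Myr
# t = np.linspace(0.0, 0.7, 200); sfr = np.where(t < 0.1, 10.0, 0.0)
# half_mass_age(t, sfr) -> ~0.05 Gyr
```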
So in summary, we find that the core is brighter in the rest-UV than the disc, and yet the disc dominates the most recent star formation and is heavily dust-attenuated. This is surprising in the z≈2 picture of a red, dusty bulge embedded in a blue disc with lower dust attenuation, but consistent with recent JWST observations showing populations of red dusty discs missed in previous rest-optically selected samples <cit.>. The observational feature that drives this result is the higher F410M excess in the disc (see Methods, Figure <ref>), which implies a strong nebular line contribution. This is a good showcase of the power of medium-band observations to constrain stellar populations <cit.>. §.§ Radial profiles of stellar mass and star formation Based on the multi-wavelength morphological decomposition and our stellar population modelling, we can derive the radial profiles of the stellar mass and SFR surface density. We use the unconvolved best-fit Sérsic profiles, normalised to the best-fit stellar mass and SFR of each component, as given for example for the stellar mass surface density profile by Σ_*(r)=M_* I(r)/I_tot, where I(r) is the intensity inferred from the Sérsic profile at radius r and I_tot=∫ 2π I(r) r dr. The fundamental assumption is that each component has a negligible radial gradient in its stellar populations. Fig. <ref> shows the stellar mass surface density (Σ_*, left panel), SFR surface density (Σ_ SFR, middle panel) and sSFR (right panel) against radius for the core and disc components and the combined profile. The Σ_ SFR profile shows how the disc completely dominates the profile compared to the core, while the Σ_* profile shows that the core's stellar mass surface density is prominent in the inner regions. We indicate on each diagram, as a vertical dot-dashed grey line, the half-mass radius (R_⋆=260^+74_-55 pc), which is the radius of the galaxy within which half the (core+disc) stellar mass is contained. By construction, the sSFR profiles are radially constant for the individual disc and core components, while their combined sSFR profile rises with radius since sSFR(r) = (Σ_ SFR, core+Σ_ SFR, disc)/(Σ_ *, core+Σ_ *, disc). The sSFR profile shows where the galaxy grows, as the stellar mass doubling timescale is approximately equal to 1/sSFR. We see that the sSFR is steeply rising at the half-mass radius (by about 1 dex within the central 1 kpc): the central 100 pc has a stellar mass doubling time of ≈100 Myr, while the outskirts have a mass doubling timescale of ≈10 Myr. This implies that this galaxy grows from the inside out. §.§ Discussion We now compare our z=7.43 galaxy with galaxies and stellar systems at lower redshift, which gives us a view on the spatially resolved growth complementary to that from the stellar mass and SFR distributions. Fig. <ref> shows the effective stellar mass density (Σ_ eff) and half-mass radius as a function of the stellar mass for the core, disc, and the combination of both components. We compare our measurements to those from <cit.> of star-forming galaxies (SFGs) and quiescent galaxies (QGs) at z=2.0-2.5 and from <cit.>, which contains data on local globular clusters (GCs), Ultra Compact Dwarfs (UCD) and compact Ellipticals (cE). In the right panel, we also plot the extrapolated growth of the half-mass radius from redshift 7.43 to redshift 2, assuming the SFR profile from Fig. <ref>. Although our z=7.43 galaxy has a lower stellar mass than the plotted SFGs and QGs at z≈2, Σ_ eff lies roughly in between the SFGs and QGs.
Specifically, Σ_ eff lies in the upper envelope of z≈2 SFGs and in the range of QGs. Looking at the half-mass radius, we find that the extrapolated evolutionary track intersects with the size-mass relation of QGs at an epoch of roughly z≈3. This means that our z=7.43 galaxy is a natural progenitor of the quiescent galaxy population at z≈2. How does our z=7.43 galaxy compare to local, z=0 stellar systems? From the left panel of Fig. <ref>, we can see that this galaxy lies above the Σ_ eff of local GCs and UCD, but is comparable to cE. This can also be seen from Fig. <ref>, which shows the stellar mass surface density (Σ_*) against radius. For comparison, we also plot the profiles of local analogues, UCDs, cE, ellipticals with a cusp and ellipticals with a core from the compilation gathered in <cit.>. The profile for is very similar in shape to the one of cEs, just a factor of 2 lower in normalisation. Interestingly, the central density of our z=7.43 galaxy is only a factor of 2 lower than those of massive elliptical galaxies seen in the Universe today, although we note that it contains just 0.1% of the total stellar mass of these galaxies. If this galaxy evolves into such a massive elliptical by z=0, we conclude that inside-out growth takes place in two phases: firstly as a star-forming galaxy, which we directly observe here, and then, secondly, as a quiescent galaxy from z≈2-3 to z=0 via mergers that build up a stellar envelope <cit.>. How can a z=7.43 galaxy build such a high central stellar mass density that is comparable to local ellipticals? Our analysis shows that the star-formation activity is dominated by the disc component. However, it is not clear whether there was an episode of disc formation prior to the peak of the SFH of the core (i.e. over 100 Myr ago): earlier disc formation is still a possibility based on the posterior distribution of the SFH (see Methods Figure <ref>). Therefore, we speculate that two scenarios are possible for building up this core. The first is continuous inside-out growth, where early disc formation took place in a very compact disc, forming the currently observed core <cit.>. Such compact, disc-like objects have indeed been observed at redshifts above 10 <cit.>. The clump could then be a star-forming instability in the disc. The alternative is that the disc formed first and suffered an infall of gas into the centre due to compaction (possibly caused by an instability triggered by the clump, <cit.>), which then formed the core. The disc would then re-form via new accretion of gas. Either way, it would appear that the clump is either a disc instability or a recently accreted satellite galaxy, although we are not able to constrain this further. Importantly, all stellar systems, including our galaxy at z=7.43, are well below the maximum stellar surface density of Σ_ max=10^11.5 M_⊙/kpc^2. This universal maximum stellar surface density of dense stellar systems is a natural consequence of feedback-regulated star-formation physics <cit.>: the strength of gravitational collapse relative to feedback (from the combination of stellar winds, radiation pressure and supernovae) increases in direct proportion to the baryonic surface density Σ, which includes stars and gas. Essentially, star formation becomes more and more efficient until the gas depletion time-scale becomes comparable to the free-fall time, and the gas is exhausted before it can collapse to yet higher densities.
Star formation in a pre-existing dense stellar system does not generally drive Σ_* beyond Σ_ max, because Σ includes both stars and gas and, hence, both gas and stars contribute to the binding force of gravity <cit.>. This implies that multiple star-formation episodes build up the stellar mass by increasing the half-mass radius, but not the central stellar mass density. This is consistent with our observations and the inferred evolutionary trends to later epochs. In summary, our finding of , a core-disc galaxy with a star-forming clump during the Epoch of Reionization, provides evidence for inside-out growth during the first 700 Myr of the Universe. This galaxy appears to be a potential candidate for a progenitor of a typical quiescent galaxy at redshift 2 and of a present-day elliptical galaxy. This suggests that bulge formation can start at very early epochs, and demonstrates the importance of understanding the nature of these earliest systems on spatially resolved scales. § METHODS §.§ NIRCam imaging data We use photometric and spectroscopic data obtained by JWST as part of the JADES <cit.> collaboration. JADES consists of both the NIRCam <cit.> and NIRSpec <cit.> Guaranteed Time Observations (GTO) instrument teams, and was set up to use a combination of imaging and spectroscopy in order to utilise the full capabilities of both instruments. We use the JADES NIRCam imaging of the Great Observatories Origins Deep Survey - South (GOODS-S) field <cit.>. This consists of imaging in the F090W, F115W, F150W, and F200W short wavelength (SW) bands, and the F277W, F335M, F356W, F410M, and F444W long wavelength (LW) bands. The details of the data reduction of the NIRCam data will be presented as part of the JADES programme in Tacchella et al. (in prep.), and have already been described in some detail in <cit.>. Briefly, we use the JWST Calibration Pipeline v1.9.2 with the CRDS pipeline mapping (pmap) context 1039. We run Stage 1 and Stage 2 of the pipeline with default parameters, but provide our own sky-flat for the flat-fielding. Following Stage 2, we perform several custom corrections in order to account for a number of features in the NIRCam images <cit.>, including the 1/f noise <cit.>, scattered-light effects (“wisps”) and the large-scale background. Since all of those effects are additive, we fit and subtract them. Before constructing the final mosaic, we perform an astrometric alignment using a custom version of JWST TweakReg. We calculate both the relative and absolute astrometric corrections for images grouped by visit and band by matching sources to a reference catalogue constructed from HST F814W and F160W mosaics in the GOODS-S field with astrometry tied to Gaia-EDR3 (<cit.>; G. Brammer priv. comm.). We achieve an overall good alignment, with relative offsets between bands of less than 0.1 short-wavelength pixel (<3 mas). We then run Stage 3 of the JWST pipeline, combining all exposures of a given filter and a given visit.
The exposure times were 11,292 s (R100) and 8,009 s (R2700). For a detailed description of the data reduction we refer to <cit.>. Here we note that we applied wavelength-dependent path-loss corrections based on modelling as a point source, and extracted the spectrum from a 0.5-arcsec box. The final reduced R100 1D and 2D spectra are shown in Fig. <ref>. The redshift estimate is based on the [OIII]λ,λ4959,5007 detection in the R2700 spectrum. We obtain z_spec = 7.4303 ± 0.0002 (random)± 0.0005 (systematic). To measure the emission-line fluxes we used the R100 data and ppxf <cit.>, which models simultaneously the underlying continuum, as described in <cit.>. §.§ Galaxy selection In order to model the spatially resolved stellar populations, we selected from the JADES GOODS-S imaging region with F335M and F410M medium-band coverage and a spectroscopic redshift. We used spectroscopic redshifts from both JADES and FRESCO <cit.>, focusing on the redshift interval z=7.0-7.8, as we wanted to both probe the very earliest galaxies while also having the Balmer break falling within our filters. Out of these (≈20) galaxies, appeared to be the most intriguing one with evidence of a core-disc structure and colour gradient (see Fig. <ref>). Due to this selection procedure, we make no claims about population statistics for this type of galaxy at these redshifts, only that it is a fascinating object in its own right. In a future work we will explore the population statistics for a mass-complete sample of similar (bulge-disc) galaxies. Importantly, this galaxy seems not to be peculiar given its stellar mass and redshift: it is only slightly above the star-forming main sequence (<cit.>; see Fig. <ref>) and shows the typical emission line properties for galaxies at this redshift (refer to Sections <ref> and <ref>). has previously been identified in HST imaging for the GOODS-S field as a Lyman break galaxy at z∼7-8 (it is source UDF-3244-4727 in <cit.> based on HST-NICMOS and ACS imaging, and was independently selected in subsequent HST/WFC3 imaging as GS.D-YD4 in <cit.>). Fig. <ref> shows images of the galaxy in the F277W band (upper left panel), the F356W band (upper middle panel), the F410M band (upper right panel) and the PSF-matched radial profiles of the F356W-F410M and F277W-F356W colour (bottom panel). The radial colour profile is computed from the PSF-matched images (PSF-matched to F444W) in order to remove gradient effects resulting from the wavelength-dependent PSF. Interestingly, we find opposite trends for the two colours: the F356W-F410M colour gets redder towards the outskirts, while the F277W-F356W colour is getting bluer. Since the F356W-F410M colour traces mainly the emission line excess ([OIII] and Hβ lines), we find that those emission lines get more prominent towards the outskirts. On the other hand, the F277W-F356W colour traces the rest-frame 4000Å spectral region, implying that the centre shows a Balmer/4000Å-break, while the outskirts shows a Balmer jump. This indicates that the outskirts – dominated by the disc as shown below – has younger stellar population than the central region. §.§ Morphology and photometry with ForcePho It is challenging to assess the morphology and spatially resolved stellar populations of galaxies at z>3 because (i) they are compact with typical sizes of 0.1-0.4 arcsec, i.e., of the order of size of the NIRCam/F200W PSF, and (ii) the NIRCam PSF FWHM varies by a factor 4 from the bluest (F115W) to the reddest (F444W) band. 
Given this variation in the PSF FWHM, there are two routes to analysing the data: (1) performing pixel-by-pixel SED fitting on images convolved to the F444W resolution, or (2) modelling the galaxy's deconvolved light distribution in each band independently. Convolving to the F444W resolution would lose the high spatial resolution available to us in the blue bands, which provides crucial insights into the morphology of the source; therefore, we choose to forward model the galaxy's light distribution. A challenge with this approach is that we have to choose a parametric model that describes the galaxy well. As outlined below (Section <ref>), we experimented with different models, finding that two central components with a clump in the outskirts best fit the light distribution of in all bands. Our main aim here is to describe the stellar populations on spatially resolved scales (see Fig. <ref>), while the physical interpretation of the different components is second order, but still of interest given the insights into bulge-disc systems at lower redshifts <cit.>. We choose here to forward model the light distribution in all 9 NIRCam images using ForcePho (Johnson et al., in prep.). ForcePho fits multiple PSF-convolved Sérsic profiles simultaneously to all individual exposures and filters by sampling the joint posterior distribution via Markov Chain Monte Carlo (MCMC). This allows us to take into account and measure the covariances between all the parameters. We run ForcePho on the individual NIRCam exposures, which is a key advantage over other codes that run on the mosaics, such as <cit.>, <cit.> or <cit.>. Firstly, when the individual cal frame images (stage 2 products) are co-added to build the final mosaic, information is lost by construction. Working on the mosaic therefore means working on data with less information than the full set of individual exposures. This is particularly important for compact objects, such as . The individual exposures capture these compact objects with several different pixelizations (thanks to different dither positions), while the mosaics are a single pixel representation. Information is also lost about the correlation between pixel fluxes in the mosaics. Secondly, alternative methods work with empirical PSFs (ePSFs), which are based on a few stars that are not saturated, leading to significant uncertainty in the outskirts of the ePSFs due to the noisy outskirts of individual stars. Furthermore, the ePSFs are only marginally oversampled, which leads to uncertainties in the convolution. The PSFs of the cal frame images can be described with WebbPSF (<cit.>; see also Section <ref>). Therefore, tools such as ForcePho that are able to work on individual exposures have a significant advantage over tools that work on mosaics with ePSFs. Furthermore, ForcePho has been successfully applied to modelling multiple components in <cit.>. We run ForcePho assuming a three-component model: two central components and one off-centred clump. Our data clearly prefer this three-component model over a simpler model (one- or two-component model), as shown in Section <ref>. We assume that the structural parameters are constrained by a combination of the bands, while the flux is fit individually in each band. For the two central components, we fit for the centre, the axis ratio, the position angle and the size. The prior on the size is uniform from 0.001” to 1.0”.
Importantly, we constrain the Sérsic index n of one of the central components to 1.0 (sampling it from a range of 0.99 to 1.01), while the other central component is allowed to vary between 2<n<5. The motivation for this comes from lower-redshift observations <cit.>, where so-called bulge-disc decomposition fits have been shown to describe well the light and stellar population distributions. Therefore, we call the n=1 component a “disc”, while the second, n=2-5 component is referred to as a “core”. We fit the off-centred clump as a quasi-point source whose radius is fixed to a maximum of 0.01 arcsec (51 pc) and whose Sérsic index is fixed to 1 in order to suppress prominent wings. In total, we fit a model with 43 free parameters, with two fixed parameters (the Sérsic indices of the disc and clump). We check the success of our fits by exploring the overall data, residual and model images (see Fig. <ref>, upper plot). Our fit's residual is consistent with the background. We find that the best-fit centres of the core and disc align very well, with an offset of 0.019”, which is less than 1 SW pixel and less than the size of either central component. The core has a small effective size of 28±3 mas and a Sérsic index of 2.5±0.4, while the disc component has a larger size of 91±6 mas. The left panel of Fig. <ref> shows the posterior distributions of key parameters from the ForcePho fit (the flux in the F444W and F277W bands and the half-light radius). As can be seen from the corner plot in Fig. <ref>, ForcePho obtained informative posterior distributions for both the central core and disc components. The MCMC approach of ForcePho also allows us to assess the degeneracies in the fitting, as apparent from the covariance in the core and disc fluxes of F444W and F277W. In addition, as can be seen in the right panel of the Methods section Fig. <ref>, the ForcePho spectral energy distributions (SEDs) of the core, disc and clump components are diverse, indicating different stellar populations for the three different components. As shown in Section <ref>, two central components are warranted given the observations. But what is the evidence for calling the two central components “disc” and “core”? Firstly, focusing on the structure, we find that the effective size of the disc is over 3 times larger than that of the core, which indicates that the core is a more compact component than the disc. The Sérsic index of the core is consistent with a “pseudo-bulge” component (2.5±0.4), i.e. we do not find any evidence for a classical bulge-like component. Importantly, we stress that our disc and core are photometric components and we cannot say anything about the kinematics of those components. Secondly, the SEDs of the core and disc are clearly distinct, as shown in the right panel of Fig. <ref>. These SEDs lead to different stellar populations for the core and the disc (see Section <ref>), consistent with the idea of a slightly older core and a younger disc component. In summary, based on the structure and the inferred SEDs and stellar populations, we find support for interpreting the two central components as a disc and a core. We find a consistent interpretation from the direct colour analysis presented in Section <ref> and Fig. <ref>. The F277W-F356W and the F356W-F410M colour profiles indicate outskirts that are dominated by younger stellar populations than the central region.
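For illustration only, the sketch below builds a single-band, PSF-convolved core + disc + clump scene, loosely following the component sizes and Sérsic indices quoted above. It is not ForcePho, which fits all exposures and filters jointly with the proper WebbPSF models; the amplitudes, axis ratios, clump position, pixel scale and the Gaussian stand-in PSF are all assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from astropy.modeling.models import Sersic2D, Gaussian2D

# Illustrative values; sizes/indices loosely follow the quoted best fit
# (core r_e ~ 28 mas, n ~ 2.5; disc r_e ~ 91 mas, n = 1; tiny n=1 clump).
pixscale = 0.015                       # arcsec/pixel, assumed oversampled grid
ny = nx = 255
y, x = np.mgrid[0:ny, 0:nx]
x0 = y0 = nx // 2

core = Sersic2D(amplitude=1.0, r_eff=0.028 / pixscale, n=2.5,
                x_0=x0, y_0=y0, ellip=0.1, theta=0.3)
disc = Sersic2D(amplitude=0.15, r_eff=0.091 / pixscale, n=1.0,
                x_0=x0, y_0=y0, ellip=0.3, theta=0.3)
clump = Sersic2D(amplitude=0.5, r_eff=0.010 / pixscale, n=1.0,
                 x_0=x0 + 20, y_0=y0 - 12, ellip=0.0, theta=0.0)

scene = core(x, y) + disc(x, y) + clump(x, y)

# Stand-in PSF: a circular Gaussian with an F200W-like FWHM. ForcePho instead
# uses Gaussian-mixture approximations of WebbPSF models for every exposure.
sigma = (0.066 / pixscale) / 2.355
psf = Gaussian2D(amplitude=1.0, x_mean=x0, y_mean=y0,
                 x_stddev=sigma, y_stddev=sigma)(x, y)
psf /= psf.sum()

model_image = fftconvolve(scene, psf, mode="same")
print(model_image.shape, float(model_image.sum()))
```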
A direct comparison with the core and disc colours obtained from ForcePho should be taken with a grain of salt, because our decomposition allows for mixing of the different components at fixed radius. We perform aperture photometry on the PSF-matched mosaics using a central aperture of 0.2” and an outskirts aperture of 0.4”. We find that the colours for the centre are similar to those of the core (F277W-F356W=0.14_-0.10^+0.10 mag, F356W-F410M=0.43_-0.20^+0.38 mag for the core and F277W-F356W=-0.06±0.06 mag, F356W-F410M=1.03±0.05 mag for the centre), whilst the colours for the outskirts are similar to those of the disc (F277W-F356W=-0.26_-0.17^+0.13 mag, F356W-F410M=1.53_-0.17^+0.36 mag for the disc and F277W-F356W=-0.19±0.14 mag, F356W-F410M=1.07±0.14 mag for the outskirts). This adds additional evidence in favour of our ForcePho-based decomposition approach. §.§ Motivation for the multiple component fit It is crucial to check whether a multi-component fit is warranted by the NIRCam imaging data. Fig. <ref> (upper) shows the data, the residual, and the best-fit model for the three-component fit in all the JADES filters. We see that we obtain a good fit in each filter for this three-component model. We then compare this to two simpler models. We run a single-component fit as a comparison to the three-component model. This means we treat the whole galaxy as a single component and allow its Sérsic index to vary freely from 0.8-6, enabling it to be modelled as a disc or bulge-like component. As can be seen in Fig. <ref> (bottom), we find that the 1-component fit (middle panel) under-subtracts the central region (i.e. the core) compared to the three-component fit (top panel); it also significantly fails to fit the clump. Our next test is to run a single-component fit for the main galaxy plus the clump (to test whether the core-disc fit is warranted). Once again the model fails to account for the flux in the centre, as seen in the middle panel of Fig. <ref> compared to the three-component fit in the top panel. We can also test the effect of ignoring the clump by fitting for just the core and disc components. The most significant change is that this extends the half-light radius of the disc component from 91 mas to 115 mas, whilst leaving the core radius approximately the same. The Sérsic index of the core becomes 2.24 (still within the errors of the three-component fit). Importantly, we find consistent fluxes for the core and disc components compared to the three-component analysis: changes are well within the uncertainties. Specifically, the core fluxes change by less than 10% in all bands except F410M and F444W, for which the fluxes increase by 26% and 25%, respectively. Since the disc fluxes for those bands remain unchanged (changes of less than 3%), this indicates that the core picks up the clump's long-wavelength light. In summary, this test shows that our main results regarding the stellar population differences between disc and core still hold when ignoring the off-centred clump. Finally, as discussed above, in addition to this statistical analysis of whether multiple components are warranted, we also have a clear indication for multiple components from a physical perspective. Specifically, the SEDs of the three components are clearly distinct (see Fig. <ref>), which motivates treating them separately in the SED analysis. §.§ PSF approximations in ForcePho ForcePho approximates the JWST PSFs with a set of Gaussians, i.e., a Gaussian mixture model.
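As an illustration of such a Gaussian-mixture approximation, the sketch below fits four concentric circular Gaussians to a WebbPSF-generated NIRCam PSF with a simple least-squares routine. This is not ForcePho's internal machinery; the filter, field of view, oversampling and starting guesses are assumptions, and running it requires the WebbPSF reference data to be installed.

```python
import numpy as np
import webbpsf
from scipy.optimize import least_squares

# Compute a NIRCam PSF with WebbPSF (filter and field of view are assumptions).
nrc = webbpsf.NIRCam()
nrc.filter = "F200W"
psf = nrc.calc_psf(fov_arcsec=2.0, oversample=2)[0].data
psf /= psf.sum()

ny, nx = psf.shape
y, x = np.mgrid[0:ny, 0:nx]
r2 = (x - nx // 2) ** 2 + (y - ny // 2) ** 2

def mixture(params):
    """Sum of 4 concentric circular Gaussians; params = [amp1, sig1, ..., amp4, sig4]."""
    model = np.zeros_like(psf)
    for amp, sig in params.reshape(4, 2):
        model += amp * np.exp(-0.5 * r2 / sig**2)
    return model

def residuals(params):
    return (mixture(params) - psf).ravel()

# Crude starting guesses: a narrow core plus progressively broader wings
# (widths in oversampled pixels).
p0 = np.array([[psf.max(), 1.5], [0.1 * psf.max(), 4.0],
               [0.01 * psf.max(), 10.0], [1e-3 * psf.max(), 30.0]]).ravel()
fit = least_squares(residuals, p0, method="lm")

approx = mixture(fit.x)
print("fractional flux error of the 4-Gaussian model:",
      np.abs(approx - psf).sum() / psf.sum())
```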
We find that 4 Gaussians are able to describe the key components of the JWST/NIRCam PSFs as provided by WebbPSF. In this section we explore the validity of these PSF approximations specifically for the data presented in this work. We simulate the best-fit 3-component model (Section <ref>) with Galsim <cit.>. Specifically, we produce the full set of Stage 2 products as given by our observations, assuming the best-fit 3-component model for our galaxy and PSFs directly obtained from WebbPSF. We then refit the simulated data with a 3-component model in ForcePho, assuming the same setup as described above. Figure <ref> shows the recovery of the bulge-to-total (B/T) ratio as a function of filter wavelength (left panel) and the half-light radius versus F277W flux for the three components (right panel), with the contours corresponding to the posteriors and the black points marking the input values. We are able to recover both the B/T ratio and the best-fit parameters well and within the uncertainties. This shows that ForcePho's Gaussian approximations to the WebbPSF-based JWST PSFs work well. This test leaves open whether WebbPSF provides accurate PSFs. We have tested this in detail in Ji et al. (in prep.) and Tacchella et al. (in prep.) by constructing and comparing empirical PSFs both from true observed stars (called empirical PSFs [ePSFs]) and from WebbPSF point sources injected into the Stage 2 level images (called model PSFs [mPSFs]). Specifically, we construct ePSFs using the empirical method proposed by <cit.>. This method solves for the centroids and fluxes of a list of input point sources, and then stacks all the point sources together to get the ePSF. For the list of point sources, we visually identified a sample of 15 isolated point sources from JADES, which are bright but unsaturated. We obtain mPSFs by injecting WebbPSF-based PSFs into the Stage 2 images and then mosaicking them in the same way as our normal mosaics, producing “star” images. We have then constructed mPSFs from those star images in the same way as the ePSFs. Importantly, we find excellent agreement between ePSF and mPSF, with a typical difference ≲ 1% in the radial profile of enclosed energy, from the central pixel out to 3 arcsec. This implies that the prediction of WebbPSF is accurate. §.§ SED fitting with Prospector Prospector <cit.> is an SED fitting code which takes in photometric fluxes and flux errors and fits model SEDs to them. It uses the Dynamic Nested Sampling package Dynesty <cit.> and models the stellar populations via Flexible Stellar Population Synthesis (FSPS <cit.>), where we use MIST isochrones <cit.> and a Chabrier <cit.> initial mass function. We assume a stellar population model similar to that in <cit.>. Briefly, we assume a flexible SFH with 6 different time bins, where the most recent bin covers the last 5 Myr, with the other bins being split between 5 Myr and 520 Myr (z=20) in log steps. We use the standard continuity prior <cit.>, which weights against a bursty SFH. We use a top-hat prior on the log stellar mass where it varies from 6 to 12. To model the effect of dust attenuation we use a flexible two-component dust model <cit.>, which models a separate birth cloud component (attenuating emission from gas and stars formed in the last 10 Myr) and a diffuse component (attenuating all emission from the galaxy). We use a joint prior on the ratio between the two dust components, where the prior is a clipped normal between 0 and 2, with a mean of 1.0 and a standard deviation of 0.3.
The prior on τ_V, the optical depth of the diffuse component in the V band, is a clipped normal ranging from 0 to 4 with a mean of 0.3 and a standard deviation of 1. The slope of the dust attenuation law of the diffuse component is a free parameter and is modelled as a power-law multiplication of the standard <cit.> law (with a top-hat prior from -1 to 0.4). We also use a top-hat prior for the log stellar metallicity with a minimum of -2.0 and a maximum of 0.19. For the nebular component, managed by FSPS, we have a freely varying ionisation parameter and gas-phase metallicity <cit.>. Figures <ref>, <ref> and <ref> show the corner plots for each component for the stellar mass, sSFR, optical depth of the diffuse component, stellar age (lookback time at which 50% of the stellar mass was formed), and stellar metallicity. The SFHs are shown on the top right. Despite not fully breaking the dust-age-metallicity degeneracy, we are able to constrain the stellar mass and overall SFH well. In order to explore the dependence of our results on the SFH prior, we also tried the bursty-continuity prior (<cit.>), which allows the SFH to change more rapidly, enabling a more variable (i.e. bursty) star-formation history. In the bursty-continuity prior case we obtain stellar masses of log(M_*/M_⊙)=8.42^+0.22_-0.21, log(M_*/M_⊙)=7.99^+0.27_-0.27 and log(M_*/M_⊙)=6.72^+0.54_-0.15 for the core, disc and clump components respectively, compared to log(M_*/M_⊙)=8.39^+0.32_-0.22, log(M_*/M_⊙)=8.27^+0.39_-0.21 and log(M_*/M_⊙)=7.17^+0.35_-0.25 in the standard continuity prior case. Therefore, the stellar masses obtained in both cases are consistent within the errors, suggesting that the stellar masses obtained based on the standard continuity prior are not biased by particularly bursty SFHs. §.§ SED fitting of the combined photometry In order to assess how this galaxy relates to other galaxies, we need to infer the global stellar population parameters from the combined photometry, i.e. treating the core, disc and clump as a single SED. The combined SED can be seen in the top panel of Fig. <ref>, while the bottom panel shows the corner plot with the SFH inset. We obtain a stellar mass of log(M_*/M_⊙)=8.59^+0.31_-0.32 and SFR_10Myr=6.7^+1.6_-2.5 M_⊙ yr^-1 and an sSFR of log(sSFR/yr^-1)=-7.68^+0.38_-0.33. For comparison, the combined stellar mass of the individual components is log(M_*/M_⊙)=8.65^+0.25_-0.30, while the SFR amounts to SFR_ 10Myr=11.5^+5.7_-4.3 M_⊙ yr^-1. This means the results are consistent within the quoted uncertainties. We show the results of fitting this combined photometry in Fig. <ref> as the orange marker. We see that it is consistent with the black marker (the results of adding the stellar masses and SFRs of the core and disc components). In summary, we find that this galaxy has a rising SFH, as expected for galaxies at this epoch. §.§ Emission line properties We can infer several interesting quantities from the emission lines obtained from the NIRSpec R100 prism spectra (see the bottom panel of Fig. <ref>). First, we do the fitting and measure the fluxes (Section <ref>), then we correct the fitted emission lines for extinction by using the ratio between the Hγ and Hβ Balmer lines (as in <cit.>). The intrinsic value of the ratio (assuming Case B recombination, electron temperature T_e=1.5× 10^4 K and an electron density of N_e=300 cm^-3) is 0.468. We measure Hγ/Hβ=0.409, meaning that we are seeing the effects of dust. We obtain a dust extinction of A_ V, gas=0.8_-0.8^+1.2 mag in the V band.
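A minimal sketch of this Balmer-decrement correction (and of the Hβ-based SFR estimate used later in this section) is given below. The attenuation-curve coefficients, the Kennicutt-type SFR calibration and the Hβ line flux are assumptions of the sketch rather than values taken from the analysis, so the output numbers only roughly track the quoted measurements.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

# Ratios from the text; the k values are generic CCM-like coefficients and are
# an assumption of this sketch, not the curve actually used in the paper.
ratio_int, ratio_obs = 0.468, 0.409
k_Hgamma, k_Hbeta, R_V = 4.17, 3.61, 3.1

# Balmer-decrement reddening: E(B-V) = 2.5 / (k_Hg - k_Hb) * log10(R_int / R_obs)
ebv = 2.5 / (k_Hgamma - k_Hbeta) * np.log10(ratio_int / ratio_obs)
A_V = R_V * ebv
print(f"E(B-V) = {ebv:.2f}, A_V,gas ~ {A_V:.2f} mag")

# SFR from Hbeta: de-redden, scale to Halpha with the Case B ratio 2.86, then
# apply a Kennicutt (1998)-type calibration (Salpeter IMF; also an assumption).
f_Hbeta_obs = 2.5e-19 * u.erg / u.s / u.cm**2      # hypothetical line flux
f_Hbeta_corr = f_Hbeta_obs * 10 ** (0.4 * ebv * k_Hbeta)
f_Halpha = 2.86 * f_Hbeta_corr
d_L = Planck18.luminosity_distance(7.4303)
L_Halpha = (4 * np.pi * d_L**2 * f_Halpha).to(u.erg / u.s)
SFR = 7.9e-42 * L_Halpha.value                     # Msun / yr
print(f"L(Halpha) = {L_Halpha:.2e}, SFR ~ {SFR:.1f} Msun/yr")
```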
We also note that this dust extinction is also consistent with that obtained from our SED fitting (of just the photometry) where we obtain A_ V=0.31^+0.20_-0.13 mag for the disc. We calculate the gas-phase metallicity of using the strong line method (e.g. <cit.>) which uses the ratios of strong emission lines, in this case [OIII]λ5007, [OII]λ3727, Hβ, and [NeIII]λ3969. We use the strong line metallicity diagnostics from <cit.>. We obtain a gas-phase metallicity of 12+log(O/H)=7.86^+0.09_-0.09, broadly consistent with that of other galaxies at these redshifts <cit.>. This is equivalent to a value of log(Z_ gas/Z_⊙)=-0.83, so larger than the stellar metallicities inferred for the core and disc from Prospector, but broadly consistent with the average of the gas-phase metallicity inferred for the two. We can calculate an estimate of the SFR from the dust corrected Hβ emission line by assuming a Balmer decrement flux ratio of F_Hα/F_Hβ=2.86, corresponding to case B recombination at a temperature of T∼ 10^4K <cit.>. This enables us to estimate Hα. We then convert this Hα flux into a Hα luminosity, which we convert into a SFR using the conversion detailed in <cit.>. This gives us a SFR of SFR=8.5^+34.7_-7.7 M_⊙ yr^-1. This is consistent with the combined SFR of the core and disc obtained via SED fitting with Prospector. §.§ Star formation vs AGN Is the central component a stellar core or an AGN? We know that the central component of this galaxy is compact, with a deconvolved half-light radius of 144pc and inferred Sérsic index of 2.5. We use dust corrected emission line diagnostics from the NIRSpec spectrum to investigate a possible AGN contribution. We use the ratio [OIII]λ5007/Hβ from the classical BPT <cit.> diagram and find that this gives us a value of ∼4.5 (see Fig. <ref>), which, while large for the local Universe, is consistent with star-formation at high-redshifts <cit.>. Fig. <ref> left panel shows the BPT diagram of [OIII]λ5007/Hβ against [NII]λ6584/Hα. As has a redshift of 7.43, the [NII]λ6584 and Hα emission lines are shifted out of the NIRSpec wavelength coverage, hence we plot the value of [OIII]λ5007/Hβ as a straight red line. We also show the combined data stacks from <cit.> and <cit.>. We overplot contours from local SDSS galaxies with data from <cit.>. We see that the line ratio for appears consistent with those of the stacks for galaxies at similar redshifts. Fig. <ref> right panel shows R3=[OIII]λ5007/Hβ against R2=[OII]λ3727,29/Hβ for . We again overplot contours for local galaxies from SDSS. Also overplotted are the values from <cit.>, and the stacks from <cit.>. Again, is consistent with those of the stacks at similar redshifts and the three galaxies from <cit.>. This shows that the emission of is consistent with stellar emission (processed by gas and dust) and not AGN-based emission. § DATA AVAILABILITY The data that support the findings of this study will be available from the corresponding author upon reasonable request. § CODE AVAILABILITY AstroPy <cit.>, Prospector <cit.>, Dynesty <cit.>, FSPS <cit.>, Galsim <cit.>, WebbPSF and Photutils <cit.>, are all publicly available, while ForcePho (Johnson et al. in prep) is publicly available via GitHub at <https://github.com/bd-j/forcepho>. § ACKNOWLEDGEMENTS WB, TJL, FDE, RM, JW, LS and JS acknowledge support by the Science and Technology Facilities Council (STFC) and by the ERC through Advanced Grant 695671 “QUENCH”. RM also acknowledges funding from a research professorship from the Royal Society. 
JW further acknowledges support from the Fondation MERAC. This study made use of the Prospero high performance computing facility at Liverpool John Moores University. BDJ, EE, MR, BER and CNAW acknowledge support from the JWST/NIRCam Science Team contract to the University of Arizona, NAS5-02015. ECL acknowledges support of an STFC Webb Fellowship (ST/W001438/1). SC acknowledges support by European Union’s HE ERC Starting Grant No. 101040227 - WINGS. AJB and JC acknowledge funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 789056). SA acknowledges support from the research project PID2021-127718NB-I00 of the Spanish Ministry of Science and Innovation/State Agency of Research (MICIN/AEI). HÜ gratefully acknowledges support by the Isaac Newton Trust and by the Kavli Foundation through a Newton-Kavli Junior Fellowship. DJE is supported as a Simons Investigator and by the JWST/NIRCam contract to the University of Arizona, NAS5-02015. D.P. acknowledges support by the Huo Family Foundation through a P.C. Ho PhD Studentship. A.L.D. thanks the University of Cambridge Harding Distinguished Postgraduate Scholars Programme and the Science and Technology Facilities Council (STFC) Centre for Doctoral Training (CDT) in Data Intensive Science at the University of Cambridge (STFC grant number 2742605) for a PhD studentship. The research of CCW is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. The authors acknowledge use of the lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315. Funding for this research was provided by the Johns Hopkins University, Institute for Data Intensive Engineering and Science (IDIES). This research is supported in part by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
http://arxiv.org/abs/2306.02917v1
20230605142809
Knowledge-Driven Semantic Communication Enabled by the Geometry of Meaning
[ "Dylan Wheeler", "Balasubramaniam Natarajan" ]
eess.SP
[ "eess.SP" ]
As our world grows increasingly connected and new technologies arise, global demands for data traffic continue to rise exponentially. Limited by the fundamental results of information theory, to meet these demands we are forced to either increase power or bandwidth usage. But what if there was a way to use these resources more efficiently? This question is the main driver behind the recent surge of interest in semantic communication, which seeks to leverage increased intelligence to move beyond the Shannon limit of technical communication. In this paper we expound a method of achieving semantic communication which utilizes the conceptual space model of knowledge representation. In contrast to other popular methods of semantic communication, our approach is intuitive, interpretable and efficient. We derive some preliminary results bounding the probability of semantic error under our framework, and show how our approach can serve as the underlying knowledge-driven foundation to higher-level intelligent systems. Taking inspiration from a metaverse application, we perform simulations to draw important insights about the proposed method and demonstrate how it can be used to achieve semantic communication with a 99.9% reduction in rate as compared to a more traditional setup. semantic communication, conceptual spaces, cognitive communications, metaverse, 6G § INTRODUCTION In the modern age of connectivity, wireless networks are continuing to grow in both importance and ubiquity. As the number and size of these networks increases, so too does the overall demand for data traffic. A recent report from Ericsson <cit.> projects that global data traffic will rise exponentially to more than 400 EB/month by the end of 2028, from just over 100 EB/month in 2022. Current wireless communication paradigms are ill-suited to handle these growing demands due to the information-theoretic capacity of a wireless channel, also known as the Shannon rate <cit.>. This fundamental limit provides two choices to increase the capacity of a wireless channel. One is to increase the power of the transmitted signal, which is not a sustainable option to meet exponentially rising demand. The other is to increase bandwidth, which has been the conventional approach with the push toward millimeter wave (mmWave) technology in fifth generation (5G) mobile networks <cit.> and recent research into terahertz (THz) communications <cit.>. However, operating at these extreme frequencies presents significant design challenges, such as extreme path loss <cit.>.
Due to these challenging circumstances, there has been a recent surge of interest in pushing beyond the Shannon rate. Consider the three levels of communication problem <cit.>: A. How accurately can the symbols of communication be transmitted? (The technical problem.) B. How precisely do the transmitted symbols convey the desired meaning? (The semantic problem.) C. How effectively does the received meaning affect conduct in the desired way? (The effectiveness problem.) The Shannon rate provides a limit for the technical problem. On the other hand, the true objective of communication is rarely to simply transmit symbols without error. More often, one is trying to convey some meaning (level B) or affect some action (level C). Focusing on these goals, rather than solely on the technical goal, changes the fundamental meaning of channel capacity, because it changes what is considered to be an error. For example, consider the statement “This communication is semantic" is transmitted, but the sentence “This comnunication is semantic” is received. A symbol error has occurred; however, the semantics of the statement will almost certainly be preserved, i.e., a semantic error does not occur. From this simple example, we arrive at the intuition that not all syntactic errors will induce a semantic error, given sufficient background knowledge and reasoning capabilities. Thus, there is a potential to increase the effective capacity of wireless channels by operating at these higher levels of communication. This intuition, along with recent advances in artificial intelligence (AI), has led to the investigation of what is commonly referred to as semantic communication. Processing information at the semantic and effective levels comes with its own challenges, however. First and foremost, it forces us to reconsider fundamental notions, such as how we define and quantify information. In this article, we focus on the semantic level and draw upon insights from our previous work <cit.> where we first introduced the use of Peter Gärdenfors' theory of conceptual spaces <cit.> as a framework for modeling semantics. We introduce for the first time both theoretical and practical aspects of our approach, and demonstrate the potential of conceptual spaces as a knowledge representation technique which can serve as the foundation for reasoning-driven semantic communication systems. §.§ Related Work The field of semantic communication has experienced a resurgence in the past few years. At the core of semantic communication is knowledge, and thus different approaches to semantic communication can naturally be partitioned by the mechanism used to model knowledge, or, represent meaning. We briefly overview some of these approaches here, and more details can be found in some recent surveys of the field <cit.>. The first attempt to develop a theory of semantic communication was made by Carnap and Bar-Hillel <cit.>. The key to their approach was to extend what we now know as information theory to include semantics by focusing on logical probabilities, rather than statistical probabilities. Some more recent works have attempted to build on this theory <cit.>. This approach comes with known difficulties, such as the Bar-Hillel Carnap Paradox <cit.> and the challenge of building an expressive knowledge base using binary logic. Others circumvent the challenge of representing meaning by focusing more on the effective level rather than the semantic level. 
More specifically, they consider the significance of information <cit.>, leading to goal-oriented communication. “Significance” is with respect to some goal or task, and can be thought as “provisioning of the right and significant piece of information to the right point of computation (or actuation) at the right point in time” <cit.>. A popular metric exemplifying this approach is the age of information (AoI) <cit.>, which quantifies the “freshness” of data. Another is the so-called value of information <cit.>. One popular way of representing meaning is through the use of knowledge graphs. This is the underlying modeling technique of the semantic web <cit.>. In a knowledge graph, meaning is expressed through relations (edges) that connect entities (nodes) in the graph. The authors of <cit.> propose an edge-sharing knowledge graph to enable semantic communication among intelligent devices in a network. In <cit.>, the authors develop a communication system powered by a knowledge graph with subject-relation-object triples for transmission of images among unmanned aerial vehicles (UAVs). The most popular approach to semantic communication has been through the use of machine learning (ML). ML-based approaches mostly handle semantics in an implicit manner, where meaning is learned by observation of massive data. Often, what these meanings are or what they represent is not apparent, due to the black-box nature of deep networks. Many of these approaches were inspired by the initial success of DeepSC <cit.> and its variants <cit.>, which implement transformer-based architectures and achieve promising results <cit.>. However, many works taking this approach (including DeepSC) train the system with the goal of exactly reproducing the initial data <cit.>, i.e., they are still operating at the technical level of communication. The approaches mentioned above all have significant limitations when modeling semantic information. The symbolic approaches using logical probabilities and knowledge graphs face combinatorial challenges when used to model large knowledge bases. Significance-based approaches do not directly address the semantic level of communication, and ML-based approaches suffer from a lack of interpretability and explainability. Therefore, there is currently no consensus on how semantic information should be modeled and quantified. In our previous work <cit.>, we introduced a novel approach based on the theory of conceptual spaces <cit.>. The initial treatment did not include any theoretical analysis and was limited to trivial examples. In this paper, we further develop the theoretical foundations of this approach and apply it to a practical metaverse-inspired problem to highlight its potential for future semantic communications. §.§ Contributions In this work, we aim to formalize key theoretical aspects of a semantic communication framework utilizing conceptual spaces. In addition, we make a connection between our approach and a recently proposed framework for reasoning-driven semantic communication <cit.>, and show how conceptual spaces can provide the knowledge-driven foundation for such communication. We provide thorough simulation results from a metaverse-inspired problem setup to confirm the theoretical developments and further demonstrate the potential of our approach for highly efficient communication. The main contributions of this work can be summarized as: * We uncover key theoretical aspects of the conceptual space-based framework for semantic communication. 
Specifically, we provide formal definitions of the various elements (domain, property, etc.) and examine the central concept of semantic distortion and semantic error under these definitions. * We further analyze this notion of semantic distortion, and derive bounds on the probability of semantic error under our proposed framework. The derived bounds are simple to obtain under reasonable assumptions and are based on the level of semantic distortion introduced at the semantic encoder and the channel, and can thus be used to inform robust system design. * We show how our approach can lay the knowledge-driven groundwork for a reasoning-driven semantic communication system, such as that proposed in <cit.>. Specifically, we demonstrate how the notion of a semantic language <cit.> naturally arises from a conceptual space and how ideas like context can be easily incorporated into the conceptual space model. * We provide extensive simulation results of the proposed approach on a metaverse-inspired problem. We examine the problem of virtual reality exposure therapy (VRET) <cit.> and simulate a semantic communication system designed around this problem. * We first simulate a system with a theoretical semantic encoder able to attain an arbitrary level of performance. We show that the end-end system performance is inevitably limited by the quality of the semantic encoder, and that the proposed system can be semantically robust under a fading channel model. Specifically, comparing performance under Rician fading to that of the AWGN channel, we find that in many cases an increase of only 2-3dB in signal-to-noise ratio (SNR) is needed under the fading channel model to achieve similar semantic performance to that of the AWGN model. * A practical semantic communication system is simulated, using a trained convolutional neural network (CNN) as the semantic encoder. We show that the proposed approach can reduce the communication rate by over 99.9% as compared to a more traditional system, while achieving similar semantic performance at extreme SNR values. Moreover, performance gains on the order of 30% are demonstrated in the intermediate SNR region. Finally, we show that even greater gains can be achieved by fine-tuning the semantic encoder to the specific task at hand. The rest of this paper is organized as follows. In section <ref>, we present our view on semantic communication. We then define the key elements of conceptual spaces, and introduce our model of a semantic communication system. We focus on the notion of semantic distortion in section <ref> and provide a formal definition based on our model. With this definition in hand, we define the notion of a semantic error, and derive two upper bounds on the probability of a semantic error. In section <ref>, we examine some extensions of our approach. First, we show how our approach can lay the knowledge-driven framework for higher-level reasoning-driven systems, and then we illustrate how the notion of context can be incorporated into the conceptual space model. We begin section <ref> by first describing the methods used to test our proposed approach, and follow with our experimental results and discussion. Section <ref> concludes the paper. § FOUNDATIONS: SEMANTIC COMMUNICATION WITH CONCEPTUAL SPACES In this section, we first present our view on the meaning of “semantic communication.” We then provide the necessary background on the theory of conceptual spaces, formally defining the core elements and laying the groundwork for later analysis. 
Finally, we provide our framework of semantic communication built on this groundwork. §.§ What is Semantic Communication? r0.4 < g r a p h i c s > Hierarchy of communication goals Lately, semantic communication has been a popular term used to describe a wide variety of approaches, and thus we will provide some clarification of what we mean when using this term here. First consider the hierarchy of communication goals shown in Figure <ref>. Note that all communication must be technical; some kind of signal must be sent for communication to take place. A subset of these communication goals will also be semantic, where the sender wants to convey some meaning to the listener. An even smaller subset will also be effective, where the sender has a desired action they would like the listener to take. For example, suppose the sender transmits the text “The oven is in the kitchen” to the listener. If the goal is purely technical, then accurate transmission of each character is the sole objective. However, perhaps the sender is attempting to convey the layout of their home to the listener; then the meaning of the words is the focus, e.g., the listener should understand that the oven is in the kitchen and not the bedroom. But perhaps the sender wants the listener to turn the oven on; now the goal is effective as well. When we refer to semantic communication, we mean communication problems where the goal falls into the semantic level of this hierarchy. Therefore, we do not mean using semantics to exactly reproduce the original data, as this falls into the technical level. Likewise, we do not mean goal-oriented or task-oriented communication, which sits at the effective level. In this work, we are concerned with problems where the primary goal is to convey some meaning. Thus, how meaning is defined and quantified is of prime importance. §.§ A Geometric View of Semantics In <cit.> Peter Gärdenfors describes his theory of conceptual spaces, which is proposed as a cognitive model for how ideas and thoughts take shape in the human mind. There, Gärdenfors defines three levels of cognitive representations. The associationism level models cognitive representations as associations between different elements. He names connectionism as a special case, where these associations are modeled by artificial neuron networks. In contrast, the symbolic level uses discrete symbols as the primary cognitive representation and treats cognition as symbol manipulation. These two representations can be seen in the approaches to semantic communication described in subsection <ref>. r0.5 < g r a p h i c s > Visualization of the different levels of cognitive representation: associationism (top), conceptual (middle), and symbolic (bottom) Gärdenfors proposes a third level of representation, termed the conceptual level, which acts as a bridge between these two. This kind of representation is “based on geometrical structures, rather than symbols or connections among neurons” <cit.>. Rather than oppose each other, these different levels come together to form human thought. This idea is aligned with the recent interest in neurosymbolic AI <cit.>, and is shown in Figure <ref>. The elements of the conceptual level of representation are originally described somewhat qualitatively <cit.>, so we will now provide formal definitions for each of them. The basic building block of a conceptual space is a quality dimension, and is used to quantify some “quality” of an object or idea in some domain. We formalize this notion as a set. 
A quality dimension is a set of scalar values quantifying some quality of an idea or object. We denote a quality dimension by 𝒬, where specific values along a quality dimension are denoted by q, such that q ∈𝒬. Quality dimensions are organized into domains. Quality dimensions that share a domain are said to be integral; Gärdenfors describes integral quality dimensions as being those such that “one cannot assign an object a value on one dimension without giving it a value on the other” <cit.>. Thus, we can use quality dimensions to build a set representing the domain. A domain 𝒟 is a set that is constructed from the Cartesian product of integral quality dimensions, i.e., 𝒟 = ⨉_k=1^K 𝒬_k. A point within a domain is specified by a column vector of quality values d = [ q_1 q_2 ⋯ q_K ]'. We use ' to denote the transpose operator. With a slight abuse of terminology, we refer to K as the dimensionality of domain 𝒟. Furthermore, we can think of ideas and objects as having properties. These properties are with respect to a single domain. A property 𝒫 of a single domain 𝒟 is defined as a convex subset of that domain, i.e., 𝒫⊂𝒟 where, for λ∈ [0,1] and any p_1, p_2 ∈𝒫, λp_1 + (1-λ) p_2 ∈𝒫. The intuition underlying the convexity requirement is straightforward; if two objects possess a property, objects that are conceptually “in between” these two objects should also possess the same property. Allowing for multiple domains gives rise to a conceptual space. A conceptual space 𝒵 is a set which is formed from the Cartesian product of M ≥ 1 domains, i.e. 𝒵 = ⨉_m=1^M 𝒟_m. An idea or object is uniquely specified within the conceptual space by providing complete coordinates within every domain, i.e., z = [ d_1' d_2' ⋯ d_M' ]'. Finally, we need to define a structure corresponding to the notion of a concept. Gärdenfors describes a concept as a collection of regions across domains throughout the conceptual space, as well as “correlations between the regions from different domains” <cit.>. We will drop the idea of correlation for now. A concept 𝒞 is a region within the conceptual space spanning one or more domains. If contained within a single domain, we have 𝒞⊂𝒟, and if the concept spans across N domains, we have 𝒞⊂⨉_n=1^N 𝒟_n. A particular example of a concept is the Cartesian product of N properties, 𝒞 = ⨉_n=1^N 𝒫_n. We now present a simple example to clarify these definitions. r0.5 < g r a p h i c s > Geometry of the domain from Example <ref> [Colors and shapes] Consider the domain of colors. Studies indicate that humans perceive three dimensions regarding color: hue (𝒬_1), saturation (𝒬_2), and brightness (𝒬_3) <cit.>. These are integral quality dimensions for colors; a color cannot have a hue without also assigning it a saturation value. Thus, these dimensions form a domain, which is shown in Figure <ref>. Note that the hue dimension is circular; this will have certain implications when defining semantic distortion. Also, the dependence between the brightness and saturation dimensions that determines the spindle-like shape requires an additional condition beyond the simple Cartesian product, which would otherwise result in a cylinder. Such extra conditions can easily be incorporated into the prior definitions to extend the theory. Given the domain shown in Figure <ref>, an example property might be light red. This property 𝒫_1 is the convex region in the top-half of the spindle with hue values in the range corresponding to red.
Now consider the domain of regular polygons, having one discrete dimension quantifying the number of sides of the polygon, i.e., 𝒟 = 𝒬_4, where q_4 = 3 corresponds to triangles, q_4 = 4 corresponds to squares, and so on. Furthermore, define the singleton set {4} as the square property 𝒫_2. Then we can consider the Cartesian product of these two domains as the color-regular polygon conceptual space, and the Cartesian product of 𝒫_1 and 𝒫_2 as the concept of a light red square. Example <ref> shows how a simple conceptual space model can be designed to represent meaning. In the next subsection, we provide our general framework for semantic communication utilizing conceptual spaces for knowledge representation. §.§ Toward Semantic Communication Figure <ref> provides a general block diagram of a framework for knowledge-driven semantic communication. r0.5 < g r a p h i c s > Block diagram of semantic communication The proposed framework builds on existing communication paradigms with the addition of a semantic encoder and a semantic decoder. The primary task of the semantic encoder is to take some source data as an input and, according to its knowledge base, output a semantic representation that captures the meaning of the data. At the receiver, the semantic decoder is tasked with taking in a recovered (possibly corrupted) semantic representation and, according to its knowledge base, output some data. Formally, the semantic encoder and decoder can be represented by functions. Let the input to the system be a vector denoted by x. Letting the semantic encoder with knowledge base 𝒦 be denoted by the function e_𝒦, we have e_𝒦(x) = z, where z is the semantic representation for data x. Let the traditional encoder, channel, and traditional decoder be represented by functions f, h, and g, respectively. We can represent the traditional portion of the system as a composition of these functions ẑ = g(h(f(z))). Finally, we represent the semantic decoder with knowledge base ℒ as a function d_ℒ, and we have d_ℒ(ẑ) = u, where u is the final output of the system, which can be some goal-driven transformation of ẑ. We take the knowledge bases at the transmitter and the receiver to be conceptual space models. Moreover, for simplicity we assume that the transmitter and receiver share the same conceptual space 𝒵[Semantic communication with mismatched conceptual spaces at the transmitter and receiver is an interesting problem that is out of the scope of this paper, and will be considered in future work.]. Thus, we rewrite (<ref>) and (<ref>) as e_𝒵(x) = z, and d_𝒵(ẑ) = u. The semantic decoder function (<ref>) is a general function that represents a mapping to some goal-based output u. As our focus is on semantic communication, we consider a particular semantic decoder. Let the primary goal of communication be to successfully convey a concept to the receiver, i.e., to convey meaning. Assuming that a finite number of concepts are defined over the conceptual space 𝒵, define a set 𝒥 as the index set of these concepts: 𝒥 = {1, 2, …, J} where J is the total number of concepts defined over 𝒵, and ∈𝒥 corresponds to concept 𝒞_. Then we can consider the semantic decoder as a function d_𝒵: 𝒵→𝒥 defined as d_𝒵(ẑ) = , ∈𝒥. In the following section, we use these terms to define the notion of a semantic error. § SEMANTIC DISTORTION AND ERROR ANALYSIS With the conceptual space-based framework in place, we now turn to the problem of quantifying semantic distortion and defining semantic errors. 
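As a running computational companion for this section, the sketch below sets up the objects defined so far in code: domains with per-domain distances, concept prototypes, a distortion given by a sum of per-domain distances (formalized in the next subsection), and a minimum-distance concept decoder (anticipating assumption A3 below). The two domains and all numerical values are illustrative assumptions, and the plain mean-squared-error distances are placeholders.

```python
import numpy as np

class Domain:
    """A named domain with a dimensionality and a per-domain distance function."""
    def __init__(self, name, dim, distance):
        self.name, self.dim, self.distance = name, dim, distance

def mse(d1, d2):
    return float(np.mean((np.asarray(d1) - np.asarray(d2)) ** 2))

# Color domain (hue, saturation, brightness) and a 1-D regular-polygon domain.
# The circular hue dimension is treated properly in the next subsection.
domains = [Domain("color", 3, mse), Domain("polygon", 1, mse)]

def split(z):
    """Split a conceptual-space point into its per-domain coordinate blocks."""
    blocks, i = [], 0
    for dom in domains:
        blocks.append(z[i:i + dom.dim])
        i += dom.dim
    return blocks

def distortion(z1, z2):
    """Semantic distortion as a sum of per-domain distances."""
    return sum(dom.distance(a, b)
               for dom, a, b in zip(domains, split(z1), split(z2)))

# Concept prototypes: e.g. "light red square" = red-ish hue, bright, 4 sides.
prototypes = {"light red square": np.array([0.02, 0.70, 0.80, 4.0]),
              "dark blue triangle": np.array([0.65, 0.60, 0.25, 3.0])}

def decode(z_hat):
    """Minimum-distance semantic decoder over the concept prototypes."""
    return min(prototypes, key=lambda c: distortion(prototypes[c], z_hat))

z_noisy = np.array([0.05, 0.62, 0.74, 4.0])   # slightly corrupted representation
print(decode(z_noisy))                         # -> 'light red square'
```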
In this section, we will show that using conceptual spaces to model knowledge leads to a notion of semantic distortion that is natural, intuitive, and interpretable. Furthermore, we use this notion to derive bounds on the probability of semantic error. We close this section by discussing some of the practical implications of this theory. §.§ Semantic Distortion in a Conceptual Space As we have defined our model of knowledge representation in terms of geometric structures, it is natural to think of semantic distortion as a distance measure. Doing so gives the ability to compare the meanings of different objects and ideas. First, consider the distance between points within a single domain. For the mth domain 𝒟_m, we define this distance as a function mapping two points in the domain to the non-negative real line, i.e. δ_m: 𝒟_m ×𝒟_m →ℝ_+. While any general mapping can be considered, we are interested in functions that satisfy the three conditions of a metric, namely non-negativity, symmetry, and the triangle inequality. Respectively, δ_m(d_1, d_2) ≥ 0, δ_m(d_1, d_2) = δ_m(d_2, d_1), δ_m(d_1, d_3) ≤δ_m(d_1, d_2) + δ_m(d_2, d_3). Given δ_m for each of the M domains, we can now define semantic distortion. Semantic distortion is a function mapping two points in the conceptual space 𝒵 to the non-negative real line δ: 𝒵×𝒵→ℝ_+ given by δ(z_1, z_2) = ∑_m=1^M δ_m(z_1, z_2) where δ_m(z_1, z_2) = δ_m(d_1, d_2) is computed using the coordinates in z_1, z_2 corresponding to domain 𝒟_m. [Distance in the color domain] Recall the color domain of Example 1. To determine a distance function for this domain, we must take into account its specific geometry. If we let the hue dimension take values in the interval [0,1], we cannot use absolute or squared difference to represent distance, since we need to capture the circularity. One way to represent distance on this dimension is with the expression min(| h-ĥ|, 1 - | h-ĥ|), where h represents the hue value. To avoid taking a minimum, (<ref>) can be approximated as γ(h,ĥ) = -1/ρln( (1/2)e^-ρ| h - ĥ| + (1/2) e^-ρ(1-| h-ĥ|)), with parameter ρ > 0. Since the brightness and saturation dimensions are linear, we can let them take values in the interval [0,1] and use the squared difference to represent distance. Then we define distance within the domain as a mean squared error-like function given by δ_color(d, d̂) = 1/3( (s - ŝ)^2 + (b - b̂)^2 + γ(h,ĥ)^2 ), where s and b are the saturation and brightness values, respectively, and d = [ h s b ]'. Finally, we note that if each of the domain-specific distance functions satisfies the metric conditions, then the semantic distortion function given by (<ref>) will also be a metric. Next, we'll use this observation to derive bounds on the probability of semantic error under some assumptions. §.§ Bounding the Probability of a Semantic Error In traditional information theory, an error is characterized by the reception of an incorrect symbol. As discussed toward the end of section <ref>, we are interested in communication when the primary goal is to convey a concept. Thus, using the notations of (<ref>) and (<ref>), we can define the notion of a semantic error. A semantic error occurs when the concept decoded by the receiver does not match the concept that is to be conveyed, i.e. d_𝒵(ẑ) = ≠^*, where ^* is the index of the true concept 𝒞_^* to be communicated. This definition of a semantic error is rather intuitive.
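Returning to the color-domain distortion defined in the example above, the short sketch below implements the circular hue distance, its smooth approximation γ, and the resulting δ_color. The value of ρ is an arbitrary choice; larger values make γ a tighter approximation of the exact minimum.

```python
import numpy as np

def hue_distance(h, h_hat):
    """Circular distance on the hue dimension for values in [0, 1]."""
    d = abs(h - h_hat)
    return min(d, 1.0 - d)

def gamma(h, h_hat, rho=100.0):
    """Smooth, differentiable approximation of the circular hue distance (rho > 0)."""
    d = abs(h - h_hat)
    return -np.log(0.5 * np.exp(-rho * d) + 0.5 * np.exp(-rho * (1.0 - d))) / rho

def delta_color(d, d_hat, rho=100.0):
    """MSE-like distance in the color domain; points are ordered as (h, s, b)."""
    (h, s, b), (hh, sh, bh) = d, d_hat
    return ((s - sh) ** 2 + (b - bh) ** 2 + gamma(h, hh, rho) ** 2) / 3.0

# Two hues that are close across the 0/1 wrap-around point:
print(hue_distance(0.02, 0.97))     # 0.05, not 0.95
print(gamma(0.02, 0.97))            # ~0.057; tends to 0.05 as rho grows
print(delta_color((0.02, 0.60, 0.80), (0.97, 0.55, 0.75)))
```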
If we take the meaning of an idea or object to be represented by its corresponding concept in a conceptual space, the successful conveyance of meaning boils down to the successful communication of these concepts. This also matches the intuition that a syntactic error does not necessarily induce a semantic error. Perhaps the technical communication of z contains errors, such that ẑ≠z, i.e., the conceptual space coordinates obtained by the receiver have shifted to a new location within the space. Since concepts are represented as regions, it is possible that ẑ will be decoded as the correct concept. To derive formal bounds on the probability of semantic error, we will make a few assumptions: A1. Concepts are formed from the Cartesian products of properties. Thus, concepts are convex regions within the space, and each concept 𝒞_ has a concept prototype z_. A2. The semantic distortion δ: 𝒵×𝒵→ℝ_+ is a metric. A3. The semantic decoder is a minimum-distance decoder: d_𝒵(ẑ) = _∈𝒥δ( z_, ẑ). r0.4 < g r a p h i c s > Example Voronoi tessellation with four concept prototypes on ℝ^2 Assumption A1 is intuitive, as a concept can often be expressed as a combination of properties, e.g., the concept of a “red square” is a combination of the properties “red” and “square.” Since properties are convex regions by definition, we can meaningfully compute a prototype point z_ as the centroid of these convex regions. Assumption A2 ensures that the chosen distortion function behaves like a true measure of distance. Given these first two assumptions, we can obtain a Voronoi tessellation of the conceptual space <cit.>. A Voronoi tessellation partitions a geometric space into convex regions, where the region of the tessellation corresponding to a prototype point z_ is defined as all points z such that δ(z, z_) ≤δ(z, z_j) for all j ∈𝒥 where j ≠. Intuitively, it is the region of all points closer to the prototype of concept 𝒞_ than any other prototype. An example is provided in Figure <ref>. These first two assumptions lead naturally to a minimum-distance decoding scheme, expressed as the final assumption A3. Under this scheme, a concept will be decoded correctly if the received semantic representation ẑ lies within the Voronoi region of the prototype point z_^*. Conversely, a semantic error will occur when ẑ lies outside this region. Let the Voronoi region corresponding to concept 𝒞_ (equivalently, prototype z_) be denoted by 𝒱_. Then we can express the probability of semantic error as in Definition <ref> as P( ≠^* ) = P( ẑ∉𝒱_^* ). To obtain a bound on this probability, first consider the value τ_ which we define as the solution to max_z∈𝒵 δ( z, z_ ) s.t. δ(z, z_) ≤δ(z, z_j): j ∈𝒥, j ≠. Intuitively, τ_ is the radius of the largest sphere centered at z_ and inscribed in the Voronoi region 𝒱_; this notion is illustrated in Figure <ref>. Next, consider the two potential sources of semantic distortion. Recall, the semantic encoder maps data x to a semantic representation z. An imperfect encoder might introduce some level of semantic distortion δ(z_^*, z). Moreover, syntactic errors may introduce some distortion δ(z, ẑ). In general, the end-to-end distortion will be a function of these two values. Under assumption A2 that δ is a metric, the triangle inequality holds and we have δ( z_^*, ẑ) ≤δ(z_^*, z) + δ(z, ẑ) We can now derive an upper bound on the probability of semantic error. Let α_ denote the prior probability that concept 𝒞_ is chosen as the true concept 𝒞_^*. 
Then the probability of semantic error ≠^* has the upper bound P(≠^*) ≤∑_∈𝒥α_ P(δ(z_, z) + δ(z, ẑ) > τ_). Recall that the Voronoi region corresponding to concept 𝒞_ is denoted by 𝒱_. Then P(semantic error) = P( ≠^* ) = ∑_∈𝒥 P( ≠^* |^* = ) P( ^* = ) = ∑_∈𝒥α_ P( ∉𝒱_ ) ≤∑_∈𝒥α_ P( δ(ẑ,z_) > τ_ ) ≤∑_∈𝒥α_ P( δ(z_, z) + δ(z,ẑ) > τ_ ) Note that to compute (<ref>), the probability distribution over the set of concepts is required. If this distribution is unknown, with one additional assumption a looser bound can be derived. Define τ = min_∈𝒥τ_. If the distribution of the distortion introduced by the semantic encoder is independent of the concept 𝒞_ being encoded, then the probability of semantic error has the upper bound P(≠^*) ≤ P(δ(z_, z) + δ(z, ẑ) > τ). Beginning in the same manner as in the proof of Lemma <ref>, we arrive at the same result. Moreover, since τ≤τ_ for all ∈𝒥, we have P(semantic error) ≤∑_∈𝒥 P( δ(z_, z) + δ(z,ẑ) > τ_ ) P(^* = ) ≤∑_∈𝒥 P( δ(z_, z) + δ(z,ẑ) > τ ) P(^* = ) Looking at the first probability term inside the summation, we see that the only term dependent on is δ(z_, z). By assumption, this random distortion is independent of the concept 𝒞_ being encoded, and thus the entire probability term does not depend on . Therefore, we can write P(semantic error) ≤ P( δ(z_, z) + δ(z,ẑ) > τ ) ∑_∈𝒥 P(^* = ) = P( δ(z_, z) + δ(z,ẑ) > τ ) §.§ Practical Implications The bounds derived in Lemmas <ref> and <ref> can guide the design of robust semantic communication systems. For example, in (<ref>) we see that the bound is determined by four values, namely, the prior distribution of concepts α_, the quality of semantic encoding δ(z_, z), the quality of syntactic communication δ(z, ẑ), and structure of the conceptual space τ_. Taking the α_'s to be fixed, this leaves three variables that can be tuned to achieve the desired performance. First, the conceptual space should be designed such that common concepts (i.e., large α_) have a large τ_j, decreasing the probability of a misunderstanding when communicating that concept. After this, the traditional portion of the communication system can be determined, fixing the distribution of δ(z, ẑ). Finally, the semantic encoder can be designed to meet performance requirements, for instance by training a deep neural network to meet the maximum distortion with high probability. Conversely, one can fix the semantic encoder and relax the requirements of the syntactic communication to achieve maximum technical efficiency while maintaining semantic performance. We end this section with an illustrative example of this process. [Semantic system design] Suppose we want to design a semantic communication system to communicate three concepts, denoted 𝒞_1, 𝒞_2, and 𝒞_3. The probabilities of these concepts are known and fixed[Note that all values and distributions in this example are arbitrary, and are only meant to illustrate the process of semantic communication system design.] at α_1 = 0.5, α_2 = 0.25, and α_3 = 0.25. After the conceptual space model is designed, it is found that τ_1 = 3, τ_2 = 2, and τ_3 = 1. For simplicity, suppose that the components of semantic distortion introduced by both the encoder and the syntactic communication are independent of the concepts, and that they can be modelled as exponential random variables, i.e., δ_enc∼exp(λ_1) and δ_trad∼exp(λ_2). Parameters λ_1 and λ_2 reflect the capabilities of the system components, e.g., a greater λ value indicates that large distortion values are less likely. 
After designing the semantic encoder, it is found that λ_1 = 2. We would like to design the syntactic portion of the system (channel coding, modulation, etc.) such that the probability that a semantic error occurs is less than 0.05. First, note that the sum of distortions follows a hypoexponential distribution. Letting δ = δ_enc + δ_trad, the PDF of this random variable is given by the expression p_δ(x) = 2λ_2/2-λ_2( e^-λ_2 x- e^-2x), x ≥ 0. Integrating this PDF, we compute the probability of this sum exceeding some value τ_ as P(δ > τ_) = ∫_τ_^∞p_δ(x)dx = 2e^-λ_2τ_ + λ_2 e^-2τ_/2 - λ_2. Substituting this into the right hand side of (<ref>), setting equal to 0.05, and solving we find that λ_2 ≈ 1.5. Designing the syntactic portion of the system to meet this specification will ensure that the desired performance is achieved without wasted resources. For λ_2 < 1.5, the distortion will be too great to guarantee the desired performance. However, if λ_2 > 1.5, resources will be wasted on unnecessary technical accuracy beyond what is required for the semantic goal. § KNOWLEDGE-DRIVEN SEMANTIC COMMUNICATION In <cit.>, the authors outline a vision and general framework for reasoning-driven semantic communication. In this section, we show how our framework based on conceptual spaces naturally lends itself as the knowledge-driven foundation to this kind of higher-level framework based on reasoning, using the framework of <cit.> as an example. First we will briefly outline some of the main points of this reasoning-driven framework. Then we show how the notion of a “semantic language” as in <cit.> is elegantly realized by the conceptual space model. We conclude by demonstrating how semantic distortion can be generalized to handle the notion of context. §.§ Data-Driven to Reasoning-Driven Intelligence The authors of <cit.> characterize the evolution of the wireless network into four stages, namely data-driven, information-driven, knowledge-driven, and reasoning-driven. Their primary contribution is a proposed framework for the final stage of wireless network, where reasoning capabilities are built-in. One central aspect of this framework is the designation of transmitter and receiver nodes as “teacher” and “apprentice” nodes. The two agents communicate using a semantic language. Another key feature is the notion of context, which plays a large role in their causal reasoning framework consisting of interventions and counterfactuals. Overall, this framework is proposed to meet the needs of the final stage outlined above, i.e., it is reasoning-driven. However, such reasoning-driven communication will be largely dependent on knowledge representation, as an agent must have some knowledge to reason over in the first place. In other words, knowledge-driven communication is a vital prerequisite to reasoning-driven communication, as is shown in the stages of evolution given above. In the previous sections, we have developed different aspects of such a knowledge-driven approach. Now, we connect our approach to this higher-level reasoning-driven framework. §.§ Building a Semantic Language First, consider the following definition for a semantic language. (Semantic Language <cit.>) A semantic language ℒ = (X_l_i, Z_i), is a dictionary (from a data structure perspective) that maps the learnable data points X_l,i to their corresponding semantic representation Z_i, based on the identified semantic content elements Y_i. 
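As a toy illustration of such a dictionary (anticipating the conceptual-space realization developed next), the snippet below pairs a few hypothetical learnable data points with the prototype coordinates of the concepts they express; all values are made up.

```python
import numpy as np

# Concept prototypes (semantic representations) in a toy conceptual space.
prototypes = {"light red square": np.array([0.02, 0.70, 0.80, 4.0]),
              "dark blue triangle": np.array([0.65, 0.60, 0.25, 3.0])}

# Hypothetical learnable data points, each labelled with its concept.
examples = [(np.array([0.03, 0.68, 0.82, 4.0]), "light red square"),
            (np.array([0.66, 0.58, 0.22, 3.0]), "dark blue triangle")]

# The semantic language: each learnable point paired with the prototype
# coordinates of the concept it belongs to.
semantic_language = [(x, prototypes[label]) for x, label in examples]
for x, z in semantic_language:
    print(x, "->", z)
```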
We take the idea of a concept (as in Definition <ref>) to be synonymous with the idea of a “semantic content element” in Definition <ref>. Moreover, the geometric underpinning of the conceptual space model inherently provides the semantic representations of such content elements, which we can take to be the coordinates of the prototype point z_ for concept 𝒞_. Then we can define a conceptual space-based semantic language. ℒ = (x_, z_) is a dictionary that maps the learnable data points x_ of concept 𝒞_ to the corresponding concept prototype z_. Therefore, our proposed model of knowledge representation is fully compatible with this higher-level framework for reasoning-driven communication. More importantly, our approach can provide a formal foundation for the qualitatively-defined components of this framework, such as semantic content elements and semantic representations. In <cit.>, it is stated that a semantic language should exhibit three qualities. We close this subsection by highlighting how the semantic language of Definition <ref> meets these three criteria: * Minimalism: By representing concepts within the raw data as coordinates within a conceptual space, the amount of data transmitted to convey the semantics can be immensely reduced. For example, in our previous work we showed that effectively transmitting image semantics with conceptual spaces can reduce the required data rate by over 99% <cit.>. * Generalizability: As the conceptual space representation truly sits at the semantic level, domains can be reused for multiple tasks and in multiple settings. For example, the color domain can be used to represent concepts involving color in any given task. * Efficiency: The geometric representation of semantics provided by conceptual spaces allows for efficient computation of otherwise difficult operations. Take the example of semantic similarity, which has been a particularly challenging quantity to assess. One recent approach is to use a pre-trained DNN model to assess the similarity of sentences, e.g., using BERT <cit.>. Even without training, using a network of this size can still require billions of operations per computation. With conceptual spaces, semantic similarity is captured as a simple distance function, from which meaning can be easily compared. §.§ Communicating with Context Context is an important notion when discussing semantics, as the meaning of a word or representation can clearly change based on the context in which it is presented. Here, we will show how this idea of context can be formally expressed within the conceptual space model. When first describing conceptual spaces, Gärdenfors also recognized the importance of context. He primarily describes two methods of expressing context in the model, which are referred to as salience and sensitivity. Salience refers to the fact that in a given situation, some domains of a conceptual space may be more important or prominent than others. To this point, Gärdenfors states “the relative weight of the domains depends on the context in which the concept is used” <cit.>. On sensitivity, he states that “subjects can be trained to become sensitized to certain areas of a dimension so that the perceived length of the area is increased” <cit.>. These two aspects of context can be mathematically realized by altering the semantic distortion function in (<ref>). This effectively alters the way that meaning is expressed within the space. First, note that these two notions will alter the semantic distortion function in different ways. 
Salience seems to work across domains, while sensitivity alters the space within a single domain. Starting with salience, as initially proposed by Gärdenfors we can capture this idea with a set of weights corresponding to each of the M domains, 𝒲 = {w_1, w_2, …, w_M}, ∑_m=1^M w_m = 1, which alters the semantic distortion function in the follow way: δ(z_1, z_2 |𝒲) = ∑_m=1^M w_m δ_m(z_1, z_2) Given the communication context, greater distortion in highly-weighted domains will have a greater effect of the overall distortion. For example, when communicating the semantics of traffic signs, the color domain has a relatively large salience, as different colored signs typically have distinct meanings. In contrast, in a face detection application, the color domain should have a small salience to reduce racial bias. In addition, it is easy to show that if the semantic distortion δ(·) is a metric function, the salience-weighted semantic distortion δ(·|𝒲) is also a metric, and the results in the previous section still apply. Furthermore, we can think of sensitivity as a transformation of a domain, i.e., some dimensions may be “streched” and some may be “shrunk” to reflect the sensitivity of the agent in the given context. We define a set of transformations corresponding to each of the domains in the space, 𝒯 = {T_1, T_2, …, T_M}, such that 𝒟̃_m = T_m(𝒟_m), δ̃_m = δ_m(𝒟̃_m). Transformation T_m will alter the geometry of the mth domain to match the sensitivity of the agent in that given context, and thus defines a new function δ̃_m which maps the new domain to the non-negative real line. Substituting these new distortion functions into (<ref>) we have δ(z_1, z_2 |𝒲, 𝒯) = ∑_m=1^M w_m δ̃_m(z_1, z_2), which is the new context-dependent distortion function. By incorporating these different aspects, we can obtain a more realistic representation of meaning, and begin to capture some of the more nuanced aspects of meaning that can enable more intelligent semantic communication. § EXPERIMENTAL RESULTS: A METAVERSE APPLICATION To demonstrate our proposed approach, we simulate a metaverse-inspired communication problem. Lately, the idea of the metaverse has received great attention due to its potential to enhance digital experiences. Metaverse applications aim to provide immersive online experiences by utilizing technologies such as virtual reality (VR) and augmented reality (AR) <cit.>. To achieve true immersion, these applications potentially require massive amounts of data transmission. These intensive communication requirements, coupled with the expected growth of the metaverse <cit.>, make metaverse-inspired communication a prime candidate for the benefits of semantic communication. §.§ Problem Definition and System Description In our experiments, we look at the problem of virtual reality exposure therapy (VRET), which is just one of the many examples in which the metaverse can be utilized in the context of healthcare <cit.>. VRET can be used to treat phobias by exposing the patient to fear-inducing stimuli in the virtual environment while being guided by a therapist, to gradually reduce fear of the specific stimulus <cit.>. One of the most common phobias is the fear of heights (acrophobia) <cit.>. Thus, we focus on this scenario and aim to use the tools developed in the previous sections to design and analyze a semantic communication system for a VRET application within the metaverse. Specifically, we look at the end-end patient-to-therapist communication link. 
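Before constructing the conceptual space for this application, the context-dependent distortion introduced in the previous subsection can be made concrete with a short sketch. The per-domain distances are taken to be Euclidean, and the domains, weights, transformations, and points below are illustrative assumptions rather than quantities from this work.

    import numpy as np

    def context_distortion(z1, z2, weights, transforms=None):
        """Salience-weighted, sensitivity-transformed semantic distortion.

        z1, z2     : lists of per-domain coordinate arrays, one entry per domain
        weights    : salience weights w_m (assumed to sum to 1)
        transforms : optional per-domain maps T_m modelling sensitivity
        """
        total = 0.0
        for m, (a, b) in enumerate(zip(z1, z2)):
            a, b = np.asarray(a, float), np.asarray(b, float)
            if transforms is not None and transforms[m] is not None:
                a, b = transforms[m](a), transforms[m](b)   # stretch/shrink the domain
            total += weights[m] * np.linalg.norm(a - b)     # Euclidean distance per domain
        return total

    # Illustrative two-domain example (e.g., a colour domain and a shape domain).
    z1 = [np.array([0.2, 0.4]), np.array([0.7])]
    z2 = [np.array([0.3, 0.1]), np.array([0.5])]

    # Context A: the first domain is highly salient; Context B: the second dominates.
    print(context_distortion(z1, z2, weights=[0.9, 0.1]))
    print(context_distortion(z1, z2, weights=[0.1, 0.9]))

    # Sensitivity: an agent trained to discriminate finely in the second domain
    # can be modelled by stretching that domain before measuring distance.
    stretch = lambda v: 2.0 * v
    print(context_distortion(z1, z2, weights=[0.5, 0.5], transforms=[None, stretch]))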
r0.5 < g r a p h i c s > Conceptual space for using VRET to treat acrophobia. Properties, their corresponding prototype points and the Voronoi tesselation are also included. The first step in our design is to build a conceptual space model to capture the relevant semantics. These relevant semantics are often tied to the overall goal of communication, and in the patient-to-therapist scenario, the goal is to provide a clear picture of the patients current emotional state to the therapist, as well as the intensity of the current stimulus in the VR environment. Thus, we define two domains, shown in Figure <ref>; namely, the emotion and stimulus domains. We utilize valence and arousal dimensions <cit.> to construct the emotion domain. Valence refers to the degree to which a given emotion is positive or negative, while arousal refers to the intensity or level of activation of an emotion. Fear represents a low-valence and high-arousal emotion. We also adopt the different ranges used in <cit.> for fear-level classification to define properties within this domain, also shown in Figure <ref>. For the stimulus domain, with acrophobia in mind, we identify the dimensions of height and stability to characterize the intensity of a stimulus. For example, a high-intensity stimulus would be climbing a tall ladder, while a low-intensity stimulus would be walking on a beach. For simplicity, properties are defined as degrees of intensity, analogous to the fear properties of the emotion domain. As our conceptual space has two domains with linear dimensions, we can define the semantic distortion as the sum of the Euclidean distances in both domains, i.e., for z = (v a h s), δ(z, ẑ) = ‖[ v - v̂; a - â ]‖_2 + ‖[ h - ĥ; s - ŝ ]‖_2 Concepts are formed by taking the Cartesian product of convex regions within the two domains, and these concepts represent phobia levels. For example, “mild phobia” is a concept obtained by combining “mild fear” with “extreme intensity”. Similarly, a concept of “extreme phobia” can be realized through a combination of “high fear” and “no intensity”. We define three concepts and their prototype points, which are given in Table <ref>. We limit ourselves to concepts involving medium and high fear properties, due to these being by far the most common of our defined emotional properties present in the Aff-Wild2 dataset used for training <cit.>. r0.6 Phobia Level Valence Arousal Height Stability Mild 0.375 0.625 0.875 0.125 Moderate 0.250 0.750 0.500 0.500 Extreme 0.125 0.875 0.125 0.875 Concepts and Prototype Points (z_) As described in section <ref>, our semantic communication goal is the accurate communication of these concepts. In our experiments, we simulate three systems to achieve this goal. §.§.§ Semantic System with Theoretical Encoder First, we simulate semantic communication with a theoretical semantic encoder to examine the system performance with arbitrary encoder distortion. Therefore, we model the dimensional error introduced by the semantic encoder as a multivariate normal (MVN) random vector. For this system, we have z = z_^* + n, where n∼𝒩(0, σ_e^2 I)) and I denotes the identity matrix. In our experiments, we tune the parameter σ_e to simulate varying levels of quality of the semantic encoder. The semantic representation z is then quantized, modulated, and transmitted over the channel. At the receiver, demodulation and channel-decoding is performed, and a minimum-distance semantic decoder is employed to make a decision on the transmitted concept. 
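A compact sketch of this pipeline is given below: the prototype points from Table <ref>, the two-domain distortion defined above, a minimum-distance semantic decoder, and the idealized encoder that perturbs the true prototype with Gaussian noise. The noise level and number of trials are illustrative choices, and quantization, modulation, and the channel are omitted from the sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    # Prototype points z_j from the table: (valence, arousal, height, stability).
    prototypes = {
        "mild":     np.array([0.375, 0.625, 0.875, 0.125]),
        "moderate": np.array([0.250, 0.750, 0.500, 0.500]),
        "extreme":  np.array([0.125, 0.875, 0.125, 0.875]),
    }

    def distortion(z, z_hat):
        """Sum of Euclidean distances in the emotion and stimulus domains."""
        return (np.linalg.norm(z[:2] - z_hat[:2]) +
                np.linalg.norm(z[2:] - z_hat[2:]))

    def decode(z_hat):
        """Minimum-distance semantic decoder: pick the closest prototype."""
        return min(prototypes, key=lambda j: distortion(prototypes[j], z_hat))

    # Theoretical semantic encoder: z = z_{j*} + n, with n ~ N(0, sigma_e^2 I).
    def run_trials(sigma_e=0.05, n_trials=10000):      # sigma_e value is illustrative
        errors = 0
        for _ in range(n_trials):
            true_concept = rng.choice(list(prototypes))
            z = prototypes[true_concept] + rng.normal(0.0, sigma_e, size=4)
            # (Channel effects between encoder and decoder are skipped here.)
            if decode(z) != true_concept:
                errors += 1
        return errors / n_trials

    print("estimated P(semantic error):", run_trials())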
§.§.§ Semantic System with CNN Encoder The second system is identical to the first, except that we train a convolutional neural network (CNN) to perform the function of the semantic encoder. This system is illustrated in Figure <ref>. To model a metaverse application, we consider video data as the input to our system. Note that improvements could likely be obtained by considering additional data inputs commonly available in a VR setting, such as audio and spatial information. Our CNN architecture is slightly modified from that proposed in <cit.>, which is originally based on the YOLO architecture <cit.>. The architecture of the CNN is described in Table <ref>. After each Conv2D layer, LeakyReLU activation and batch normalization are applied. The final layer in the encoder network is a 4-dimensional dense layer employing sigmoid activation to obtain the semantic representation z. r0.4 Layer N Size Stride Conv2D 16 3x3 1 MaxPool - 2x2 2 Conv2D 32 3x3 1 MaxPool - 2x2 2 Conv2D 64 3x3 1 MaxPool - 2x2 2 Conv2D 128 3x3 1 MaxPool - 2x2 2 Conv2D 256 3x3 1 MaxPool - 2x2 2 Conv2D 512 3x3 1 MaxPool - 2x2 2 Conv2D 1024 3x3 1 MaxPool - 2x2 2 Conv2D 256 1x1 1 Conv2D 512 3x3 1 Conv2D 48 1x1 1 CNN Architecture To train the network, we utilized two datasets, each corresponding to one of the two domains of the conceptual space. The first is the Aff-Wild2 dataset <cit.>, which consists of 564 videos of around 2.8M frames with 554 subjects (326 of which are male and 228 female), all annotated with valence/arousal labels. For the stimulus domain, we created a dataset of 43 videos of around 500k frames of various stimuli corresponding to different levels of intensity and assigned each frame a height/stability label. Data samples were constructed by sampling a frame from each dataset, resizing each to a 112×112×3 RGB image, and combining the resulting images to form a 112×112×6 array which serves as the input to the CNN. Likewise, the corresponding valence/arousal and height/stability labels are combined into a semantic representation to serve as the 4×1 label for the input. The described model was created and trained using the keras API within the tensorflow Python package. The Adam optimizer was used with a custom learning rate that increases linearly for a set number of epochs, and then decreases exponentially thereafter. The training dataset consists of 1M input-label pairs, while the validation set consists of 100k such pairs. The model was trained over batches of size 256, where 5 batches were seen each epoch and the model was trained for a total of 5k epochs. §.§.§ Non-Semantic System For comparison, we simulate a system that does not use semantic communication to achieve the semantic goal. A block diagram of this system is shown in Figure <ref>. Here, a CNN is employed to directly classify the received images into the corresponding concept. The architecture described in Table <ref> is also used for this CNN, with the only difference being the final layer of the model, which is a 3-dimensional dense layer with softmax activation to directly obtain the concept classification. The training process and hyperparameters are also nearly identical to those described in the previous subsection, where the labels are now one-hot encoded vectors indicating the true concepts, rather than semantic representations. §.§ Traditional Communication Components For each of the systems described in the previous subsection, we simulate various aspects of the traditional portion of the system. 
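A sketch of the backbone in Table <ref> is given below in tf.keras. Padding choices and the placement of flattening are assumptions, since they are not stated explicitly; per the text, each convolution is followed by LeakyReLU activation and batch normalization, and only the head differs between the semantic encoder (4-dimensional sigmoid output) and the non-semantic classifier (3-dimensional softmax output).

    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_block(x, filters, kernel):
        """Conv2D followed by LeakyReLU and batch normalization, as described in the text."""
        x = layers.Conv2D(filters, kernel, strides=1, padding="same")(x)  # 'same' padding assumed
        x = layers.LeakyReLU()(x)
        x = layers.BatchNormalization()(x)
        return x

    def build_model(head="semantic"):
        inputs = layers.Input(shape=(112, 112, 6))   # two stacked 112x112x3 frames
        x = inputs
        for filters in [16, 32, 64, 128, 256, 512, 1024]:
            x = conv_block(x, filters, 3)
            x = layers.MaxPooling2D(2, strides=2, padding="same")(x)
        x = conv_block(x, 256, 1)
        x = conv_block(x, 512, 3)
        x = conv_block(x, 48, 1)
        x = layers.Flatten()(x)
        if head == "semantic":
            outputs = layers.Dense(4, activation="sigmoid")(x)   # semantic representation z
        else:
            outputs = layers.Dense(3, activation="softmax")(x)   # direct concept classification
        return tf.keras.Model(inputs, outputs)

    model = build_model("semantic")
    model.summary()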
Many future metaverse applications will likely utilize WiFi connectivity, and thus in our experiments we simulate different aspects of the IEEE 802.11 standard <cit.>. We study the various systems under BPSK, 16-QAM, and 256-QAM modulation. Moreover, we perform experiments for both AWGN and Rician fading channel models. §.§ Results and Discussion Monte Carlo simulations are carried out to demonstrate the performance of the systems described above. In this subsection, we describe these experiments and the following results, and provide some discussion as to their implications. §.§.§ Theoretical Encoder First, we perform experiments using the semantic system with a theoretical encoder, and these results are shown in Figures <ref>. In the top-left plot of Figure <ref>, we plot the end-end semantic distortion δ(z_^*, ẑ) and the encoder distortion δ( z_^*, z) as functions of the signal-to-noise ratio (SNR) for an encoder standard deviation σ_e = 0.01 under the AWGN channel. We see that in each case, the end-end semantic distortion becomes limited by the performance of the semantic encoder as the channel quality improves. Therefore, the quality of the semantic encoder is the key factor in determining the best-case performance of the overall semantic communication system. Improving the performance of the semantic encoder will “lower” the distortion floor of the overall system. The bottom-left plot of Figure <ref> confirms this; in these plots, the probability of semantic error P(≠^*) is shown as a function of SNR, as well as the probability of packet error, which is the probability of the transmitted packet containing a bit error. In our simulations, a packet is composed of 160 bytes, which corresponds to 20 semantic representations z each consisting of a vector of 4 16-bit floating point values. In the bottom-left plot of Figure <ref>, we observe that the probability of semantic error reaches a hard limit at precisely the SNR where the end-end distortion in the top-left plot of Figure <ref> reaches its floor. As such, this error floor can be reduced by improving the quality of the semantic encoder. This plot also illustrates the intuition that a syntactic error does not necessarily induce a semantic error. For example at E_b/N_0 = 6dB we see that the probability of packet error is nearly 1 for all modulation types, but for BPSK modulation the probability of semantic error is close to 10^-2. Thus, we are able to achieve great semantic performance despite poor technical performance, illustrating the potential robustness of semantic communication systems to traditional bit errors. The right-hand plots in Figure <ref> provide similar results for the Rician channel model with fading parameter K = 6dB. We see that the semantic distortion curves are rather similar to those for the AWGN channel. Moreover, the probability of semantic error curves are also relatively similar to those for the AWGN channel, despite the dramatically increased probability of packet error. These results indicate that the proposed approach to semantic communication can provide even greater benefits with respect to robustness in the presence of channel fading. §.§.§ Semantic and Non-Semantic Systems In this subsection, we examine the performance of the semantic and non-semantic systems, and the results are shown in Figure <ref>. All of the results in this subsection were obtained for the AWGN channel. r0.5 < g r a p h i c s > Probability of semantic error and the corresponding upper bound vs. 
average SNR for a general and fine-tuned semantic system, as well as the non-semantic system, for 16-QAM modulation under the AWGN channel. The semantic system achieves a rate reduction of over 99.9% as compared to the non-semantic system. The curves marked by squares in Figure <ref> show the probability of semantic error for the CNN-based semantic system under the AWGN channel and the corresponding bound, as a function of SNR. We observe similar behavior to that seen in the results of the system with a theoretical semantic encoder; namely, the performance of the system improves with the channel up to a certain point, where the overall system becomes limited by imperfections of the semantic encoder. We also observe that the upper bound provided by Lemma <ref> indeed holds as an upper bound for the semantic performance, though it does not appear to be a very tight bound. One possible reason for this loose bound lies in the fact that we are measuring semantic distortion with respect to concept prototypes. Therefore, even if the semantic encoder is perfect in mapping a given input to its true representation z, this point may not be close to any prototypes; in other words, some data are conceptually ambiguous. Intuitively, we should observe improved performance and a tighter bound by eliminating this ambiguity. To test this hypothesis, we created a new dataset by randomly sampling points from the initial dataset that lie within the spheres centered at the prototype points with radius given by (<ref>). This new dataset was then used to fine-tune the general CNN semantic encoder and to test the performance of the system with this fine-tuned encoder. The results are given in Figure <ref>, and are denoted by the curves marked with unfilled circles. These results confirm our intuition, as the best-case probability of error drops from around 0.10 in the general case to around 0.05. Furthermore, we observe that the upper bounds on the semantic performance are indeed much tighter. The curve marked by triangles in Figure <ref> displays the results for the non-semantic system illustrated in Figure <ref>. In the extreme cases, the performance is similar to that of the semantic system with general CNN encoder. This is reasonable, as both systems are essentially making random guesses for extremely poor SNR. In the case of high SNR, both systems are limited by the performance of their respective CNNs, and we would expect this performance to be similar using identical architectures. For intermediate SNR values however, the semantic system appears to outperform the non-semantic system. Specifically, the performance begins to improve at around -20dB for the semantic system, while this does not occur until around 0dB for the non-semantic system. These findings are similar to those in our earlier work <cit.>, and suggest that our approach can provide robustness in low-SNR scenarios. Finally, it is important to note the efficiency gains of the semantic system to the non-semantic system. Each semantic representation consists of 4 16-bit values, for a total of 64 bits. In the non-semantic system, each inference is carried out over 2 112×112×3, for a total of 75,264 pixels. Each pixel is encoded to 8 bits, resulting in 602,112 bits transmitted per inference. Therefore, the semantic system is able to reduce the overall rate of communication by over 99.9%. 
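Two small pieces of this evaluation are easy to make explicit. The first sketch below filters a labelled dataset down to samples whose representations fall inside the prototype-centred balls used for fine-tuning; since the radius expression from (<ref>) is not reproduced here, the radii are passed in as parameters. The second reproduces the payload arithmetic behind the quoted rate reduction.

    import numpy as np

    def finetune_subset(inputs, labels, prototypes, radii, distortion):
        """Keep (input, label) pairs whose representation lies inside some prototype ball."""
        keep = []
        for i, z in enumerate(labels):
            if any(distortion(np.asarray(z, float), p) <= r
                   for p, r in zip(prototypes, radii)):
                keep.append(i)
        return [inputs[i] for i in keep], [labels[i] for i in keep]

    # Payload arithmetic behind the quoted efficiency comparison.
    semantic_bits = 4 * 16                   # 4 values, 16-bit floats per representation
    raw_bits = 2 * 112 * 112 * 3 * 8         # two RGB frames, 8 bits per pixel
    print(raw_bits)                          # 602112
    print(1 - semantic_bits / raw_bits)      # ~0.99989, i.e. over 99.9% reduction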
Even if one were to employ modern image compression techniques, these results demonstrate that, for data-rich modalities, the proposed system can achieve semantic communication with a massive reduction in rate. § CONCLUSION In this paper, we have greatly expanded both theoretical and practical aspects of a semantic communication system with knowledge representation based on the theory of conceptual spaces. We have provided formal definitions of key aspects, defined the notions of semantic distortion and semantic error, and derived error bounds that follow from these definitions. We have shown how this theory can serve as the underlying foundation for a truly intelligent reasoning-driven system, and how important notions such as context can be easily incorporated into the framework. To illustrate some of the key benefits of our approach, we simulated a VRET application utilizing semantic communication, and demonstrated robust semantic performance with more than 99.9% percent reduction in rate as compared to a more traditional system. The results clarify some important insights as well, such as the importance of the semantic encoder and the conceptual space design in the end-end performance of the semantic communication system. There are many interesting future directions that can be taken to extend this work. As was noted in section <ref>, the study of semantic communication when the transmitter and receiver do not share a common conceptual space will be important to future development of these systems. Similarly, the process of autonomously learning the underlying conceptual space in an efficient and optimal way is another important area for future work. We plan to study this learning process, as well as how efficient and intelligent reasoning systems can be developed on top of a conceptual space-based knowledge base. Overall, we anticipate that the continued development and implementation of conceptual space-based semantic communication systems will unlock truly innovative and intelligent systems for the next generation of wireless communications.
http://arxiv.org/abs/2306.06272v1
20230609215413
A Domain-Independent Agent Architecture for Adaptive Operation in Evolving Open Worlds
[ "Shiwali Mohan", "Wiktor Piotrowski", "Roni Stern", "Sachin Grover", "Sookyung Kim", "Jacob Le", "Johan De Kleer" ]
cs.AI
[ "cs.AI", "I.2.4; I.2.6" ]
PARC]Shiwali Mohan PARC]Wiktor Piotrowski PARC,BGU]Roni Stern PARC]Sachin Grover PARC]Sookyung Kim PARC]Jacob Le PARC]Yoni Sher PARC]Johan de Kleer [PARC]organization=Palo Alto Research Center, country=USA [BGU]organization=Ben Gurion University, country=Israel Model-based reasoning agents are ill-equipped to act in novel situations in which their model of the environment no longer sufficiently represents the world. We propose HYDRA - a framework for designing model-based agents operating in mixed discrete-continuous worlds, that can autonomously detect when the environment has evolved from its canonical setup, understand how it has evolved, and adapt the agents' models to perform effectively. HYDRA is based upon PDDL+, a rich modeling language for planning in mixed, discrete-continuous environments. It augments the planning module with visual reasoning, task selection, and action execution modules for closed-loop interaction with complex environments. HYDRA implements a novel meta-reasoning process that enables the agent to monitor its own behavior from a variety of aspects. The process employs a diverse set of computational methods to maintain expectations about the agent's own behavior in an environment. Divergences from those expectations are useful in detecting when the environment has evolved and identifying opportunities to adapt the underlying models. HYDRA builds upon ideas from diagnosis and repair and uses a heuristics-guided search over model changes such that they become competent in novel conditions. The HYDRA framework has been used to implement novelty-aware agents for three diverse domains - CartPole++ (a higher dimension variant of a classic control problem), Science Birds (an IJCAI competition problem[http://aibirds.org/Angry Birds Competition]), and PogoStick (a specific problem domain in Minecraft[https://www.minecraft.net/en-usMinecraft]). We report empirical observations from these domains to demonstrate the efficacy of various components in the novelty meta-reasoning process. open world learning integrated intelligent systems model-based reasoning planning agents agent architectures novelty reasoning § INTRODUCTION Artificial Intelligence (AI) and Machine Learning (ML) research on sequential decision making usually relies on the closed world assumption. That is, all relevant characteristics of the environment are known ahead of deployment, during agent design time. For model-based reasoning agents (e.g., based on planning), knowledge about environmental characteristics is encoded explicitly as a domain model (description of actions, events, processes) that govern the agent's beliefs about environment's dynamics. In model-free learning agents (e.g., based on deep reinforcement learning), assumptions are made about the structure of the reward function and are encoded as a simulation from which the action selection policy is learned. The closed world assumption poses a significant challenge in deploying intelligent agents. Model-based agents may have been encoded with knowledge that is incomplete or incorrect, causing the agent to fail catastrophically during deployment. Model-free learning agents will need numerous interactions with the environment to learn a new policy, rendering them ineffective when the environment evolves from what they were trained on. The assumptions about a closed world during agent design is a significant challenge in robust deployment of intelligent technology in the real world. 
Our research studies how intelligent agents can be designed such that they can robustly operate in an open world, an environment whose characteristics change while the agent is operational. We term the shift in environmental characteristics as a novelty. An effective open world agent should be able to autonomously detect when a novelty has been introduced in the environment, characterize it, as it pertains to what it knows about the environment, and then accommodate it by changing its decision-making strategies. Ideally, it transfers relevant operational knowledge from before novelty is introduced to after, i.e., it learns without fully retraining and in orders of magnitude less time. This challenge of designing such an open world agent, where novelties can appear at an unspecified time, has been gaining significant interest in the AI literature <cit.>. This paper advances the `AI for open world' research agenda by studying how model-based reasoning agents can reason about novelties appearing in open worlds and adapt themselves in response. A key characteristic of model-based reasoning agents is that their models of the environment are explicit and compositional. Each element of the model represents a meaningful aspect of environmental dynamics. Consider a planning agent written in PDDL+ <cit.> to aim a ball at a target. It will include formal specifications of processes such as the movement of a ball under the effect of gravity. This process is encoded separately from other aspects of the environment such as the ball bouncing off of a hard surface. Together, all elements in the PDDL+ model determine the agent's beliefs about environment dynamics. When considering open world agent design, compositional models have significant advantages when compared to black-box, end-to-end, integrative models such as deep neural networks. They generate explicit expectations about future outcomes that are expressed in meaningful terms (e.g., gravity). This enables a focused analysis of what might have changed in the environment. Often, the introduced novelty only impacts a small subset of model elements and only those need to be updated. Consequently, model-based learning agent can adapt to novelties with fewer observations than model-free agents. The paper introduces HYDRA - a framework for designing model-based reasoning (more specifically planning-based) agents that can operate in complex open worlds. Action reasoning in HYDRA is built upon PDDL+ - a rich modeling language for describing dynamics of mixed discrete-continuous environments. Building upon PDDL+ enables us to study complex, real-world-like domains in which transitions are governed by agent actions as well as events and processes extraneous to agent behavior. HYDRA implements visual reasoning, task monitoring, and domain-independent planning with PDDL+ in a singular framework to develop agents that operate in a closed loop with the environment. A key contribution of our work is a meta-reasoning process for novelty that is integrated with the basic agent perceive-decide-act loop. The meta-reasoning process monitors various aspects of agent behavior in the environment; including analyzing of the observation space, tracking state changes in the environment, and tracking quality of performance. When the environment evolves from the canonical setup the agent is designed for, these monitors generate signals that trigger a model adaptation cycle. 
HYDRA employs a heuristics-guided search to identify which element(s) of the models needs to be revised and adapted. HYDRA is a domain-independent framework and has been used to design agents for three research domains - CartPole++ (a higher dimension variant of a classic control problem CartPole), ScienceBirds (an AI competition domain), and PogoStick (a Minecraft domain). Our results show that for certain types of novelties, HYDRA agents can adapt quickly with few interactions with the environment. Additionally, the adaptations produced HYDRA are interpretable by design - they are represented in terms of changes to the elements of this PDDL+ model, enabling and inspection of proposed changes. This property of a HYDRA agent is a considerable advantage when developing adaptive systems that can be trusted. § RELATED WORK The problem we consider in this work is raised when a model-based reasoning agent fails to act in the environment because its underlying models are deficient or incorrect. The approach we propose to address this problem is to adapt the world model our agent is using based on the observations is collects. Therefore, our work is related to prior work on (1) handling execution failures in planning-based agents, (2) repairing world models for agents, (3) automated diagnosis and repair, and (4) learning planning models from observations. Adapting to Execution Failure Planning models are often deemed correct by design, and execution failures and other discrepancies observed during execution of the generated plans are usually attributed to partial-observability or non-determinism of the target environment. Replanning and Plan Repair are the common approaches to such failures. Replanning <cit.> methods attempt to generate a new solution to the problem, either from the very beginning or from the point of failure of the plan, by using updated information from the environment. Plan repair <cit.> methods adapt the plan according to additional data so that it will then be able to achieve the desired goal. Replanning and plan repair algorithms usually assume that information on the parts of the environment that have unexpectedly changed and, thus, caused the plan execution failure are freely available and can be queried at any time. Our problem is different: we do not know how the novelty has affected the environmental dynamics, and must infer it from observations. Repairing World Models for Planning-Based Agents The Action Description Update (ADU) problem <cit.> is the problem of updating an action model given a set of new statements about the world dynamics, e.g., adding new axioms or constraints. This different from learning how to update a model from observations. <cit.> used abductive reasoning about unexpected events to expand the knowledge base about the hidden part of the environment and improve their replanning process. This is more similar to handling partial observability than to repairing planning models to handle novelties in the environment. In the Model Reconciliation Problem <cit.> (MRP), plans generated by one agent (the planner) and the objective is to explain that plan to to another agent (the observer). MRP has been studied in settings where each agent assumes a different world model, and the desired explanations are changes to the world model assumed by the observer. Generating such explanation can be viewed as a form of model repair. 
However, they require knowledge of both models, while in our case we do not assume access to the world model after novelty has been introduced. To the best of our knowledge, all prior work on the ADU problem and MPR have dealt only with discrete domains. In the Model Maintenance Problem (MMP) <cit.>, a world model changes (drifts) over time, and the task is to adapt the model based on observations and active queries so as minimize the difference between it and the real world. MMP has been studied in the context of planning agents, where the drifted model is a symbolic planning model. <cit.> allowed queries about aspects of the planning model, e.g., “is fluent f a precondition of action a”. <cit.> allowed queries about the possible execution of plans, e.g., “Can you perform the plan a_1, a_2, a_3”, where responses state which prefix of the given plan could be executed and the last state reached. Our problem can be viewed as a special case of MMP with a single drift (i.e., a single novelty) and our approach may be applicable to solve MMP. However, unlike prior work on MMP, we go beyond purely symbolic planning and support mixed discrete-continuous domains.  <cit.> discussed the challenge of adapting a planning model to novelties, but did not propose a concrete approach to do so.  <cit.> proposed an algorithm for repairing a planning domain from observations, using a MAX-SAT solver.  <cit.> used a different approach to repair planning models where effects of some actions are incomplete. Their approach compiles, for each unsolvable task, a new extended task where actions are allowed to insert the missing effects. However, both works are limited to classical planning and cannot handle mixed discrete-continuous domains. Recent work has looked at how novelty can be accommodated in various discrete environments such as Polycraft <cit.>, which is a mostly deterministic domain, and Monopoly <cit.>, which is a stochastic, strategic domain. Our research extends these lines of research and demonstrates that model revision techniques can support novelty accommodation in dynamic, physics-based domains such as CartPole and Science Birds. Some prior work explored how to repair an obsolete Markov Decision Problem (MDP) models based on observations <cit.>. They proposed an approach based on using a model-checker to ensure that the repaired MDP satisfies some require constraints. Along similar lines, <cit.> study the problem of repairing a discrete time Markov Chain. Both approaches are not directly applicable for repairing rich planning models. Automated Diagnosis and Repair Automated diagnosis (DX) and repair of faulty systems is a core AI problem that deals with finding the root cause of an observed behavior and suggesting diagnostic and repair actions to return the system to a nominal state <cit.>. Many approaches and systems have been proposed in the past for DX and repair in both discrete <cit.> and mixed continuous-discrete settings <cit.>. For example, <cit.> and  <cit.> proposed general frameworks for repairing faulty systems by planning repair and diagnostic actions. <cit.> proposed the well-known General Diagnosis Engine which has been extended and improved by many <cit.> including to hybrid systems <cit.>, and software <cit.>. While it might be possible to reduce our problem to a DX problem, where the system to repair is the agent's model, it is not clear whether existing DX and repair methods would work, and such reduction is not trivial. 
Learning Planning Models from Observations There is a growing literature on learning planning models from observations <cit.>, including algorithms such as ARMS <cit.>, LOCM <cit.>, LOCM2 <cit.>, AMAN <cit.>, FAMA <cit.>, and SAM <cit.>. <cit.> arranged these algorithms in a comprehensive framework. But, none of these algorithms is designed to learn complex mixed discrete-continuous planning models such as the domains we consider in this work. Additionally, these algorithms are designed to learn a new planning model from scratch, as opposed to repairing an existing planning model to novelties.  <cit.> proposed an algorithm for learning systems that can be captured as a hybrid timed autonomata. While a PDDL+ domain and problem can be compiled into a hybrid timed automata, it is not clear how to transfer their approach to the problem of learning a complete PDDL+ domain, and whether such an approach can scale. § PROBLEM SETUP AND RESEARCH DOMAINS In a novelty problem, the agent interacts with the environment repeatedly for N episodes in a trial. Each episode begins in some initial state of the environment. During the episode, the agent iteratively executes actions and observes their outcomes until a terminal state is reached, upon which the episode ends and a new one begins. After an unknown number k of episodes, a novelty is introduced that changes the environment and potentially impacts how the agent should behave. The agent is not informed about when the novelty is introduced or how it has changed the environment. The novelty persists for the rest trial, i.e, for N-k episodes. Novelties impact in the environment in different ways <cit.>. Some can change the structure of the environment by introducing or removing object categories, introduce new object attributes, or shift the distribution of attribute values. Others impact the dynamics of by altering the events and processes that determine transitions in the environmental state space. Yet others may change the agent's goals. In an open world, any number of novelties can present themselves at any timepoint. In this paper, we limit our scope by making the following assumptions: (1) only specific classes of novelties are studied; (2) at most one novelty is introduced in a trial; (3) the novelty under study is introduced only between episodes; (4) once the novelty is introduced, it persists for the rest of the trial. We refer to this setup as the single persistent novelty setup. These assumptions were made to support iterative development of agents and allow clear experimental measurement. A novelty-aware agent can (1) detect quickly when the novelty has been introduced in a trial and (2) adapt its decision making efficiently so that it can perform effectively in novel settings. The former is referred to as the novelty detection problem and the latter as novelty accommodation. In model-based reasoning agents, accommodation requires novelty characterization as well - i.e, generating explicit hypotheses about how introduction of novelty has changed the structure and dynamics of the environment. §.§ Research Domains Implementation and experiments described in this paper are motivated by the following three domains. Each domain has been specifically created and maintained for evaluating open world learning by our partner teams. CartPole++ This physics-based discrete-continuous domain is a higher dimensional version of the standard Reinforcement Learning benchmark problem Cartpole. 
The agent can push the cart in any of the cardinal directions and the objective is to leave the pole upward for 200 steps. Figure <ref> illustrates this domain. A variation of the domain additionally includes a set of spheres flying in the space. ScienceBirds. This domain is a version of the popular video game Angry Birds, which has garnered widespread recognition over a substantial number of years. The objective in SB involves eliminating all green pigs in a level while maximizing destruction to the surrounding structures. The player is equipped with a collection of birds, which may vary, to be launched from a slingshot. The pigs are typically concealed within complex platform structures built with a variety of blocks, requiring the player to identify and eliminate the weak points of the structures, such as supports or dynamite. Figure <ref> shows a screenshot from this domain. The development of Science Birds utilized the Box2D open-source physics library, ensuring that all objects within the game's environment comply with the principles of Newtonian physics in a two-dimensional plane. PogoStick. PolyCraft is version of the popular video game MineCraft[<https://www.minecraft.net/>]. The agent controls a character (Steve) that interacts with the environment, e.g., gathering resources such as logs by cutting trees, mining ores of diamond, interacting with other entities such as traders to trade for several other resources, and following a recipe to craft a new item. Our third research domain is an environment in Polycraft in which the agent is tasked to craft a PogoStick. Crafting a PogoStick requires sequential decision-making including both short and long-term planning. For example, Steve has to decide to collect resources needed to craft intermediate products, such as, a tree tap, and then use it to collect rubber from trees in the environment. Figure <ref> shows the top- and first-person view of the environment that consists of 30 × 30 grass field, and is connected to a smaller rooms. The domain consists of rival Pogoists that may adopt supportive or competitive intentions. Our research domains are mixed discrete-continuous that require complex goal-oriented, spatio-temporal reasoning, and learning. CartPole++ and ScienceBirds are fully observable, deterministic, and physics-based - the environment dynamics are governed by physical laws such as flight under gravity. PogoStick is on the other hand is discrete but partially observable and has some non-determinism. Additionally, PogoStick has other agent entities that have supportive or competitive intentions. Impact of an agent's action is immediately observable in CartPole++ and PogoStick. However, in ScienceBirds, actions have delayed consequence. The trajectory of the bird is determined very early in the process by setting angle and velocity, however, the consequences emerge much later after hitting structures. The differing characteristics of these domains pose a significant challenge in designing a common novelty-aware agent framework. Continuous space, physics-based dynamics of CartPole++ and ScienceBirds motivate accurate physics modeling in the agent framework. Partial observability in PogoStick motivates balancing information-gathering and execution needs. All domains require continual task monitoring and replanning for effective performance. Detecting and accomodating novelties is further complicated due to complex, temporal interactions between objects and entities in the domains. 
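As a reference point for the single persistent novelty setup described at the start of this section, the following harness sketches one trial in which a novelty is injected between episodes and persists thereafter; the environment and agent interfaces are placeholders, not the actual evaluation code used with these domains.

    def run_trial(agent, env, n_episodes, novelty, k):
        """Single persistent-novelty trial: the novelty appears after episode k and persists.

        The agent is never told when (or whether) the novelty was introduced.
        `env.apply`, `env.reset`, `env.step`, `agent.act`, `agent.observe` are placeholders.
        """
        results = []
        for episode in range(n_episodes):
            if episode == k:
                env.apply(novelty)               # introduced between episodes only
            obs = env.reset()
            done, score = False, 0.0
            while not done:                      # episode ends at a terminal state
                action = agent.act(obs)
                obs, reward, done = env.step(action)
                score += reward
                agent.observe(obs, reward, done)  # agent may detect/adapt on its own
            results.append(score)
        return results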
§.§ Space of Novelties Table <ref> summarizes a subset of novelties that can be introduced in the environments and examples of their instantiations in our research domains. An ideal novelty-aware agent can detect, characterize, and accommodate a wide range of novelties that impact the structure, dynamics, and constraints in the environment. While novelty IDs 1 and 2 impact the structure of the environment and may change in the observation space. Novelty IDs 2, 3, 4 impact specific transitions in the environment. 6, 7, and 8 impact environmental transitions in general by changing the laws. Some novelty IDs (2, 7) have been not instantiated in our research domains and consequently, are not being investigated currently. Our aim is to design a common novelty-aware agent framework that can reason about the full space of novelty. The sections below introduce our approach and summarize the progress we have made towards this goal. § HYDRA The main contribution of this work is the design of HYDRA, a domain-independent architecture for implementing a novelty-aware agent in complex, mixed discrete and continuous domains. The HYDRA architecture includes a base agent and novelty meta-reasoning components designed to detect novelties and adapt the base agent's behavior to them. A notional architecture is shown in Figure <ref>. The base agent (Figure <ref> - left, Section <ref>) implements a perceive-decide-act cycle in the environment, inferring the current state of the world using encoded background knowledge and perceived observations, deciding which actions to perform using a PDDL+ planner, and acting accordingly. The novelty meta-reasoning components in HYDRA (Figure <ref> - right, Section <ref>) continuously monitor various aspects of agent behavior to detect when it diverges from expectations to find opportunities for changing the models that drive the agent's action selection. §.§ Base Agent The based agent implements elements in the well-known Belief-Desire-Intention (BDI) theory <cit.> that are similar to processing cycle implemented in prominent agent architectures: Soar <cit.> and ICARUS <cit.>. The main components of the HYDRA base agent are (1) a state inference component, which maintains and updates a belief about the current state of the environment; (2) a task selection component, which selects the intermediate tasks the agent intends to perform; and (3) a planning and acting component, which relies on a planner to determine the actions to execute in order to perform the selected task. §.§.§ State Inference The role of this HYDRA component is to infer the current state of the environment in sufficient detail to enable task selection and planning. It accepts information about the current state obtained either from the visual reasoning component or directly from the environment, and integrates it with a-priori knowledge or background assumptions about the domain. Different environments provide a variety of input signals including visual information (e.g., images, object detection and colormap in ScienceBirds), continuous-valued sensor information (e.g., position and velocity of the cart and the pole in CartPole++), as well as discrete state information (e.g., locations and positions in PogoStick). HYDRA includes a visual reasoning component that detects, localizes, and categorizes known objects and their properties in the input. 
As each domain may structure its input differently, the visual reasoning and state inference components require domain-specific processing, e.g., a dedicated recognition model. The state inference component outputs a set of PDDL+ facts that encode the current state as provided by the environment, state elaborations that are necessary to track task progress, as well as facts representing background knowledge. In the ScienceBirds domain, the HYDRA visual reasoning component accepts a set of visible objects on the scene including their locations and a vector representing their visual structures. The vector consists of the number of vertices in the bounding polygon, the area of the bounding polygon, as well as the a list of compressed 8-bit (RRRGGGBB) color and their percentage in the object. The HYDRA visual reasoning component for Science Birds accepts such vectors as input and categorizes the object as one of the known object classes using a recognition model. The recognition model is built with a standard implementation of multinomial logistic regression for multi-class classification and recognizes all known object types. The state inference component elaborates upon this symbolic information and adds current assumptions about the number of health points each pig and block have, gravity, starting velocity of the birds, how much damage a bird can incur about various objects and entities etc. A subset of PDDL+ facts generated for a specific initial state in ScienceBirds are below. §.§.§ Task Selection The main task of the HYDRA agent is based on the domain, such as, to craft a pogostick in PogoStick or kill all the pigs in ScienceBirds domain. Planning directly towards performing this task is possible in some domains. In other domains, it is necessary or more efficient to progress towards this main task by setting intermediate tasks (also known as subtasks or subgoals) for the agent to perform, and planning to achieve them. The HYDRA task selection component implements this subtask mechanism, as follows. Implementing a HYDRA agent for a given domain requires defining one or more HYDRA tasks. A HYDRA task is a tuple T=_T, D_T, G_T representing the preconditions, domain, and goal of the task. The preconditions of a task (_T) define when performing this task may be useful, and the goal of a task (G_T) defines what this task aims to achieve. The domain of a task (D_T) is a subset of the HYDRA agent domain D specifies the parts of the domain relevant for performing the task. This is useful for efficiency reasons: performing some tasks do not require reasoning about all aspects of the domain. A HYDRA task is relevant if its preconditions (_T) are met and its goal (G_T) has not been achieved yet. The HYDRA agent maintains a set of relevant HYDRA tasks and designates one of these tasks as the active task. This is the task the HYDRA agent is currently aiming to achieve. HYDRA calls the task selection component to consider changing the active task if the current task is no longer relevant or a performed action had unexpected outcomes. Then, it uses domain-specific decision rules to decide if the active task should be replaced and if so, which of the relevant tasks should be set as the new active task. Note that some of the relevant tasks may be exploration tasks, whose preconditions relate to detected novelties. 
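The task abstraction just described can be summarized in a few lines; the sketch below is a schematic rendering with placeholder predicate checks, not the HYDRA implementation itself.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Task:
        """A HYDRA task T = (preconditions, domain, goal)."""
        name: str
        preconditions: Callable[[dict], bool]   # when performing the task may be useful
        domain: object                          # subset of the PDDL+ domain relevant to the task
        goal: Callable[[dict], bool]            # what the task aims to achieve

    def relevant_tasks(tasks: List[Task], state: dict) -> List[Task]:
        """A task is relevant if its preconditions hold and its goal is not yet achieved."""
        return [t for t in tasks if t.preconditions(state) and not t.goal(state)]

    def select_active_task(tasks: List[Task], state: dict, decide) -> Task:
        """Pick the active task among the relevant ones using domain-specific decision rules."""
        candidates = relevant_tasks(tasks, state)
        return decide(candidates, state)   # e.g., a fixed priority order, as in PogoStick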
Task selection in non-novel environments for Science Birds and Cartpole++ is trivial, including a single task to achieve the corresponding domain's goal (kill all birds or maintain the pole's upright angles). A richer task selection component was implemented for PogoStick, since the environment has additional agents that may interfere with the base agent's plans, and includes objects such as a safe or a chest, whose content is unknown and may have useful resources. Thus, the HYDRA agent for PogoStick includes exploratory tasks designed to gather information that might help craft the pogo stick more efficiently. Specifically, the tasks in our implementation of HYDRA for this domain include: (T_1) Craft a pogo stick (T_2) Interact with other agents. (T_3) Explore other rooms. (T_4) Open a safe or a chest. (T_5) Attempt to mine novel objects. T_1 is the main task of the game. T_2 includes interacting with trader agents to learn the trade recipes they support. A trade recipe can be, for example, trading 9 diamonds for 1 titanium. This task is necessary because some items required for crafting the pogo stick can only be obtained by trading with these trader agents. The domain of task T_2 is very limited, including only the location of the other agents and obstacles that may block getting to them, while the domain of task T_1 additional information such as crafting recipes. Tasks T_1 and T_2 are usually sufficient to solve the problem in non-novelty environments. Thus, the task selection decision rules used for this domain selects these tasks first, and only select the other tasks (T_3, T_4, T_5) after T_1 and T_2 was selected and in case where we failed to craft the pogo stick. Between these tasks, we selected the tasks in the sequential order they are presented, i.e., try T_3 first, then T_4, and finally T_5. §.§.§ Planning and Execution HYDRA leverages a PDDL+ planner to determine the sequence of actions to execute in order to perform the active task. To support rich environments with complex dynamics and discrete and numeric state variables, we assume a planner that uses domains specified in PDDL+ <cit.>. A planning domain in PDDL+ defines how the state changes under various happenings that include actions the agent may perform, exogenous events that may be triggered, and durative processes that may be active. We consider an agent acting in a complex discrete-continuous environment that is modeled as a transition system E=F, X, A, T where F is a set of discrete state variables; X is a set of numeric state variables; A is a set of actions the agent may perform; and T is a transition function that governs the dynamics of the environment, where T: S × A → S is a transition function that governs the dynamics of the environment (i.e., T(s, a) → s'). A state of an environment is a complete assignment of values over all its state variables. We denote by S the set of all possible states in the environment. To plan, domain D is the agent's internal approximation of environment E, defining how the state changes under various happenings that include actions, events that were triggered, and durative processes that are may be active[The effects of processes and events are implicitly applied via a time-passing action that the agent can apply alongside the actions defined in the PDDL+ domain.]. Each happening is represented as a pair of pre-conditions and effects expressed in terms of assignments and mathematical expressions. An example of a process from the CartPole++ domain is shown in Figure <ref>. 
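While the actual encoding is in PDDL+ (Figure <ref>), the role of a happening can be illustrated schematically: each happening pairs a precondition with an effect, and process effects are applied whenever time passes and the precondition holds. The sketch below illustrates that mechanism only; the pole-motion update shown is a simplified stand-in, not the paper's CartPole++ model.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Happening:
        """An action, event, or process: a precondition paired with an effect."""
        name: str
        precondition: Callable[[dict], bool]
        effect: Callable[[dict, float], None]   # mutates the state; dt matters for processes

    def advance_time(state: dict, processes, events, dt: float):
        """Time-passing step: active processes apply continuous effects, then events fire."""
        for p in processes:
            if p.precondition(state):
                p.effect(state, dt)
        for e in events:
            if e.precondition(state):
                e.effect(state, 0.0)

    # Simplified stand-in for a CartPole++-style process: the pole angle drifts
    # according to its angular velocity while the episode is running.
    pole_motion = Happening(
        "pole_motion",
        precondition=lambda s: s["running"],
        effect=lambda s, dt: s.update(theta=s["theta"] + s["theta_dot"] * dt),
    )

    state = {"running": True, "theta": 0.02, "theta_dot": 0.1}
    advance_time(state, processes=[pole_motion], events=[], dt=0.02)
    print(state["theta"])   # 0.022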
A planning problem 𝒫 in the domain D is a pair 𝒫=s_0,G where s_0∈ S is the initial state., and G ⊆ S is a set of possible goal states. A plan is a sequence of actions π = (a_0, a_1, ..., a_n). A solution to a problem is a plan after execution reaches a goal state, i.e. s_n ∈ G, where s_n is the state after executing action a_n of plan π. Executing a plan π in domain D yields a trajectory τ = (s_0, a_0, s_1, s_1, a_1, s_2, ... , s_n, a_n, s_G), which comprises a sequence of tuples of the form s_i, a_i, s_i+1, representing that the agent performed action a_i in state s_i, and reached state s_i+1. PDDL+ planners accept a domain D and a planning problem 𝒫. Thus, the first step in the HYDRA planning component is to encode the inferred current state and the goal of the active task as a PDDL+ problem, and encoding the domain of the active task as a PDDL+ domain. Due to the richness of the PDDL+ language, this encoding is relatively straightforward. The current state and task goal are represented as sets of grounded predicates and functions (discrete and numeric state variables). The domain encodes the definition of actions, events, and durative processes, each represented as a pair of pre-conditions and effects expressed in terms of logical and numeric conditions and assignments. The output of a PDDL+ planner is either a declared failure, or a solution to the given planning problem. If the planner failed to find a plan, HYDRA calls its task selection component to choose a different task. Otherwise, HYDRA maintains the found plan and attempts to execute it step by step. If an action fails, we halt the execution and call the task selection component to consider changing the active task, or replanning for the current active task. Identifying action failure is done by analyzing feedback from the environment, and comparing with the expected outcome of each action as defined by the domain. In CartPole++, the output of the state inference component returns the velocity of the cart, the angle of the pole, etc. Each of these state variables is directly encoded in PDDL+ as numeric state variables (referred to as functions in PDDL+). The expected cartpole dynamics are encoded in the domain, which requires defining a process to specify the movement of the pole over time. An example of such a process is shown in Figure <ref>. §.§ Novelty Meta Reasoner To deal with novelties during performance, HYDRA implements a meta-reasoning process that reasons about the agent's behavior in its environment. The process maintains expectations about the agent's input, transitions in the state space due to agent actions and extraneous dynamics, as well as its performance. A violation of these expectations indicate that a novelty has been introduced in the environment which must be further inspected, characterized, and accommodated for. Introducing such a process frames learning as a volitional activity undertaken by the agent to which resources are devoted only when an opportunity presents itself (as indicated by violation of expectations). This is in stark contrast with the classical machine learning setup in which agent learning is controlled externally. 
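The execution policy described above (plan for the active task, execute step by step, and fall back to task selection on planner failure or unexpected action outcomes) can be sketched as a simple control loop; all interfaces below are placeholders rather than HYDRA's actual API.

    def plan_and_execute(planner, env, task, select_new_task, expected_outcome_ok):
        """Plan for the active task and execute the plan step by step.

        If the planner declares failure, or an executed action does not have its
        expected outcome under the domain, control returns to task selection.
        """
        plan = planner.solve(task.domain, env.current_state(), task.goal)
        if plan is None:                         # declared failure: choose a different task
            return select_new_task()

        for action in plan:
            observed = env.execute(action)
            if not expected_outcome_ok(action, observed):
                # Unexpected outcome: halt execution and reconsider the active task
                return select_new_task()
        return "task_completed"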
Specifically, HYDRA implements (1) a set of novelty monitors that maintain a variety of explicit expectations about the evolution of the environment and monitor divergence from them; (2) a novelty determination component that aggregates the information from the monitors and determines whether a novelty has been introduced and requires adaptation; and (3) a heuristic search-based model repair component that manipulates the base agent's PDDL+ model to make it consistent with the detected novelty.

§.§.§ Novelty Monitors

Unknown Objects and Entities. This monitor encodes the expectation that all observed entities processed by the state inference module are known, i.e., they are of a type that is encoded in the agent's planning model. If an object appears in the environment whose type cannot be recognized or is not in the agent's planning model, the monitor flags the existence of novelty. In some domains this monitor is binary: either a new type has been observed or not. This is the case in the PogoStick domain, where the Polycraft environment provides the type of each object, which can be matched against a type inventory. In domains with visual input, such as ScienceBirds, the recognition model is leveraged. The recognition model (<ref>) has been trained for near-perfect performance in canonical cases, i.e., it can categorize objects with high confidence (prediction probability ≥ threshold_c). If the recognition model produces detections with low confidence (prediction probability < threshold_c), the monitor flags the likelihood of a novelty.

Plan Inconsistency. Let D be the planning domain used by the HYDRA planning component. The second novelty monitor, referred to as the planning domain inconsistency monitor, measures how accurately D describes the environment dynamics, i.e., the behavior of the different happenings (actions, events, and processes). This novelty monitor is thus geared towards detecting novelties that change these environment dynamics. Specifically, this monitor relies on computing an inconsistency score, denoted C(s, π, D, τ), where s is a state, π is a plan, D is a planning domain, and τ is the trajectory observed when performing π starting from state s. The inconsistency score C(s, π, D, τ) quantifies the difference between the observed trajectory τ and the trajectory we expect to observe when performing π in s according to D. This expected trajectory can be obtained by simulating the plan π according to D. This is possible since D specifies the expected environment dynamics, including the actions' effects. Existing tools such as VAL <cit.> support such simulations, although we wrote our own domain-independent PDDL+ simulator. We compute the inconsistency score C as the "distance" between pairs of corresponding states in the observed and simulated trajectories, discounted proportionally by time to give more weight to changes that are observed earlier in the trajectories. There are many ways to measure this "distance", and the exact function can be domain-dependent. An example of this distance measure is the Euclidean distance between the pairs of corresponding states in the observed and expected trajectories. In the most general case, the distance would be calculated over all state variables F ∪ X (where propositional values are cast as 1 or 0). However, for increased accuracy, it can be restricted to a subset of relevant state variables.
Formally, let S(τ) be the sequence of states in the observed trajectory and S(π, D) be the expected sequence of states obtained by simulating the generated plan π with respect to the domain D. Let S(x)[i] denote the i^th state in the state sequence S(x). The inconsistency score of domain D is computed as:

C(π, D, τ) = 1/|τ|∑_iγ^i· ||S(τ)[i] - S(π,D)[i]||

where 0<γ<1 is the discount factor. Errors in C due to sensing noise, rounding errors, and other issues can accumulate over time. Consequently, the Euclidean distance between corresponding states is likely to be higher later in the trajectories. The discount factor γ prevents such errors from dominating the inconsistency score. In a non-novel environment E with an ideal domain D, the expected evolution of the system predicted by the planner should perfectly match the observed behavior, i.e., the two resulting trajectories align by default and the inconsistency score is C=0. In the real world, however, this is usually impossible to achieve due to rounding errors, perception inaccuracies, and similar common issues. To account for such noise, the planning domain inconsistency monitor relies on a domain-specific consistency threshold C_th, where inconsistency scores below this threshold are ignored. Setting C_th requires striking a fine balance between accurately estimating the inconsistency score and suppressing noise stemming from the execution environment. A graphical representation of the state trajectory-based inconsistency score computation is shown in Figure <ref>, in which the expected state trajectory under the agent's internal model D (pink nodes) is compared against the observed trajectory (green nodes) in the environment E. The left y-axis denotes the Euclidean distance between states and the right y-axis the inconsistency score. Due to the discount factor γ, distances between states later in the trajectory contribute less to the inconsistency score than distances between states earlier in the trajectory. The planning domain inconsistency monitor returns a non-zero inconsistency score if the inconsistency threshold is exceeded. The CartPole++ domain is simple, with a few relevant elements (pole, cart, etc.), and our PDDL+ model is fairly accurate. Consequently, we use the Euclidean distance between states as our inconsistency score and set the threshold to a very low value of 0.009. In contrast, the ScienceBirds domain is extremely complex, with many objects and entities that are modeled in PDDL+ with varying levels of accuracy. The inconsistency score for ScienceBirds is therefore engineered to focus on information that is relevant for good gameplay. Specifically, we record whether there is a mismatch (m_i) in pigs between plan simulations and observations, i.e., they exist in the plan but are dead in the observations or vice versa. Next, we measure the difference (Δ h) between the maximum height achieved by a bird in the plan simulation and in the observations. The inconsistency score is computed as ∑_i 50 × m_i + Δ h_i, where i is a shot. C_th is set high, at 10, to alleviate inaccuracies in the PDDL+ modeling. In PogoStick, inconsistency is measured as the difference in the number of objects between the simulated and observed traces. Differences in other properties (such as location) are weighted and added to the sum to give the final inconsistency score. The threshold C_th is set to 2, i.e., novelty is detected and repair is called if the object-count difference is greater than 2.
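A minimal Python sketch of this inconsistency computation is given below; the state representation (dicts of fluent values) and the ScienceBirds helper are illustrative assumptions, while the discounted-distance formula follows the equation above.

```python
import numpy as np

def inconsistency_score(observed, expected, gamma=0.9, relevant_keys=None):
    """Discounted Euclidean distance between corresponding observed/expected states.
    observed, expected: equal-length lists of dicts mapping fluent name -> value."""
    keys = relevant_keys or sorted(observed[0].keys())
    n = min(len(observed), len(expected))
    total = 0.0
    for i in range(n):
        s_obs = np.array([observed[i][k] for k in keys], dtype=float)
        s_exp = np.array([expected[i][k] for k in keys], dtype=float)
        total += (gamma ** i) * np.linalg.norm(s_obs - s_exp)  # earlier states weigh more
    return total / n


def sciencebirds_inconsistency(pig_mismatches, height_diffs):
    """Domain-specific variant: sum over shots of 50 * pig mismatch + height difference."""
    return sum(50 * m + dh for m, dh in zip(pig_mismatches, height_diffs))
```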
Reward Divergence. Our third novelty monitor maintains expectations about the quality of the agent's own performance. Any change in performance quality can indicate that a novelty has been introduced in the environment. We leverage the reward signal generated by environments such as ScienceBirds to gauge changes in performance quality. Let r(s,a) be the reward collected by the agent when performing action a in state s in the non-novel environment and let r_Φ(s,a) be the reward collected when performing a in s after novelty Φ has been introduced. We define the reward divergence of state s and action a as the difference between r(s,a) and the reward observed when performing a in s. These values may differ when a novelty Φ has been introduced for which r(s,a) and r_Φ(s,a) differ significantly in the states and actions encountered by the agent. We developed a neural-network-based reward estimator that serves as a surrogate, denoted g(s,a), and returns an estimate of r(s,a). We train this estimator by performing actions in the environment before novelty is introduced and training the neural network to minimize the root mean squared loss (ℒ) between g(s,a) and r(s,a) through the following optimization problem:

ℒ= min_Θ√(1/N∑_i=1^N‖ g(s_i,a_i)-r(s_i,a_i)‖_2^2)

where N is the number of training data points and Θ is the set of parameters of our estimator g. Given the estimator g pre-trained with non-novel data, we implemented the reward divergence novelty monitor by computing the absolute error between the predicted reward from g and the ground-truth reward r, computed as follows: R_div(s_i, a_i, r_i) = |g(s_i, a_i) - r_i|, where s_i, a_i, and r_i are the observed state, executed action, and reward collected in the possibly novel environment. The magnitude of this estimated reward divergence score, R_div, serves as an indicator of how much the reward deviates in an environment with a novel feature as compared to a non-novel environment, given identical actions taken in the same state. Thus, a larger value of R_div implies a more pronounced deviation in reward, suggesting that novelty has been introduced. We implemented the reward divergence novelty monitor for the ScienceBirds domain as follows. Since a state in ScienceBirds can be represented as a visual scene composed of multiple channels of different objects, we utilize a Convolutional Neural Network (CNN) for our reward estimator g, as CNNs have proven successful in learning visual representations. The architecture of our reward estimator for ScienceBirds is illustrated in Figure <ref>. The reward estimator receives a pairing of observational state and action as input and predicts the reward as output. The architecture comprises four convolutional layers followed by four fully connected layers, incorporating the Rectified Linear Unit (ReLU) activation function. The convolutional layers extract features from the observation, which are then transformed into a flattened representation and concatenated with the action. The fully connected layers subsequently predict a scalar reward given this concatenated representation of action and observation features.
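The following PyTorch sketch illustrates a reward estimator of this shape and the divergence computation; the channel count, image resolution, action encoding, and layer widths are assumptions made for illustration (the text specifies only four convolutional and four fully connected layers with ReLU).

```python
import torch
import torch.nn as nn


class RewardEstimator(nn.Module):
    """g(s, a): CNN over the multi-channel scene, concatenated with the action,
    followed by fully connected layers that predict a scalar reward."""
    def __init__(self, in_channels=12, action_dim=3, img_size=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        feat_dim = 64 * (img_size // 16) ** 2        # four stride-2 convs: /16 spatially
        self.fc = nn.Sequential(
            nn.Linear(feat_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        z = self.conv(state).flatten(start_dim=1)    # flatten conv features
        return self.fc(torch.cat([z, action], dim=1)).squeeze(-1)


def rmse_loss(pred, target):
    """Root mean squared training loss between g(s, a) and r(s, a)."""
    return torch.sqrt(nn.functional.mse_loss(pred, target))


def reward_divergence(estimator, state, action, observed_reward):
    """R_div = |g(s, a) - r|; larger values suggest a novelty."""
    with torch.no_grad():
        return (estimator(state, action) - observed_reward).abs()
```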
§.§.§ Novelty Determination

Each novelty monitor generates information about the existence of novelty based on various aspects of agent behavior (input space, state transitions, performance quality). The novelty determination component collects information from all the novelty monitors and determines whether a novelty has been introduced and whether the agent's domain should be adapted. We have explored several approaches to implement this component, but eventually implemented domain-specific decision rules. Designing more sophisticated novelty detection rules and a domain-independent approach to create them is a topic for future work. For the ScienceBirds environment, we declare that novelty has been detected if the unknown object novelty monitor detected novelty. Otherwise, we declare that novelty has been detected only if at least one of the other two novelty monitors exceeds its threshold for more than 3 consecutive episodes. For the Cartpole++ environment, only the planning domain inconsistency novelty detector was used, as it was accurate enough. For the PogoStick environment, we infer the existence of novelty based on whether there are unknown objects in the scene or the inconsistency score is higher than a threshold.

§.§ Accommodation through Heuristic Search-Based Repair

The proposed search-based model repair algorithm works by searching for a domain repair, which is a sequence of model modifications that, when applied to the agent's internal domain D, returns a domain D' that is consistent with the observed trajectories. To find such a domain repair, our algorithm accepts as input a set of possible basic Model Manipulation Operators (MMOs), denoted {φ} = {φ_0, φ_1, ... , φ_n}. Each MMO φ_i ∈{φ} represents a possible change to the domain. Thus, a domain repair is a sequence of one or more basic MMOs φ_i ∈{φ}. An example of an MMO is to add a fixed amount Δ∈ℝ to one of the numeric domain fluents. In general, one can define such an MMO for every domain fluent. In practice, however, not all domain fluents are equal in their importance; thus, the repair can be focused on a subset of state variables that the domain designer deems relevant. Algorithm <ref> lists the pseudo-code for our search-based model repair algorithm. Initially, the open list (OPEN) includes a single node representing the empty repair, and the best repair seen so far is initialized to the empty sequence. This corresponds to not repairing the agent's internal domain at all. Then, in every iteration the best repair in OPEN is popped, and we compose new repairs by adding a single MMO to this repair and add them to OPEN. For every such new repair we compute an inconsistency score. This is done by modifying the agent's internal domain D with the repair, simulating the actions in plan π, and measuring the difference between the simulated outcome of these actions and the observed trajectory τ. The inconsistency score serves two purposes. First, we keep track of the best repair generated so far, and return it when the search halts. Second, we consider a repair's inconsistency score when choosing which repair to pop from OPEN in each iteration. This is embodied in the function f in line <ref> of Algorithm <ref>. In our implementation, f is a linear combination of the inconsistency score and the size of the repair, i.e., the number of MMOs it is composed of. The latter consideration biases the search towards simpler repairs. Figure <ref> visualizes an example search tree of the MMO-based model repair algorithm, where MMOs are treated as actions and the state is composed of changes to the default model (0 indicates no change to the given fluent). The inconsistency score is estimated for each generated repair, and the search terminates once a repair is found such that the updated domain D' is consistent with the true transition function T^*.
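A condensed Python sketch of the search in Algorithm <ref> follows; the weighting of repair size, the expansion budget, and the helper callbacks (apply_mmo, score_fn) are illustrative assumptions. The focused variant described next is obtained by restricting each candidate to repetitions of a single MMO type.

```python
import heapq
import itertools


def repair_search(domain, plan, observed_traj, mmos, apply_mmo, score_fn,
                  c_threshold=0.01, w_size=0.1, max_expansions=500):
    """Best-first search over sequences of model-manipulation operators (MMOs).
    f(repair) combines the inconsistency of the repaired domain with the repair size."""
    counter = itertools.count()                              # tie-breaker for the heap
    best_repair, best_score = [], score_fn(domain, plan, observed_traj)
    open_list = [(best_score, next(counter), [])]            # start from the empty repair
    for _ in range(max_expansions):
        if not open_list:
            break
        _, _, repair = heapq.heappop(open_list)              # pop the best repair so far
        for mmo in mmos:
            candidate = repair + [mmo]                       # extend by a single MMO
            repaired = apply_mmo(domain, candidate)          # D modified by the repair
            c = score_fn(repaired, plan, observed_traj)      # inconsistency of D'
            if c < best_score:
                best_repair, best_score = candidate, c
            if c <= c_threshold:
                return candidate, c                          # consistent enough: stop
            f = c + w_size * len(candidate)                  # bias toward simpler repairs
            heapq.heappush(open_list, (f, next(counter), candidate))
    return best_repair, best_score
```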
MMO repair is performed on a set of repairable fluents {x_1, x_2, ..., x_n}. Each MMO adjusts the value of a given fluent by a fixed amount Δ (+Δ or -Δ), defined a priori per fluent (denoted as delta in the top left of Figure <ref>). In this example scenario, the best repair found, {φ_3, φ_3, φ_3}, is a sequence of three MMOs (each adjusting x_2 by +0.2), such that the repair changes a single state variable x_2 ∈ X by adding 0.6 to its value.

§.§.§ Focused Model Repair

In many cases, allowing the repair to adjust multiple variables simultaneously can result in search space explosion. The branching factor of the general model repair algorithm is 2^n. An impactful novelty usually requires a significant change to the domain, and the high branching factor and vast search space might prevent such a repair from being found within a reasonable time. To make repair feasible in complex domains, we introduce focused model repair, a restricted variant of the general algorithm. The two mechanisms differ only in how they expand new repair candidates. In the focused case, an additional constraint is imposed such that any repair candidate can only contain MMOs of the same type, i.e., it has the form {φ_i, φ_i, ... , φ_i} where φ_i ∈{φ}. To implement focused repair, the aforementioned constraint is added after line 4 in Algorithm <ref>. The search graph of the focused model repair is shown in Figure <ref>.

§ EVALUATION

Here we evaluate the efficacy of the implemented novelty meta-reasoning methods in supporting online detection, characterization, and accommodation of novelties. The results presented in this section have been generated using the novelties that are bundled with our research domains (CartPole++, ScienceBirds, and PogoStick) and are publicly available.

§.§ Novelty Detection

First, we study the sensitivity of the three implemented monitors – unknown objects and entities, plan inconsistency, and reward divergence – to the space of novelties available with our research domains. The three domains provide different types of input to the agent and, consequently, a subset of the monitors was implemented for each domain. The plan inconsistency monitor is tied closely to the heuristic search-based model repair method for accommodation. Consequently, it was evaluated together with the efficacy of repair, and the results are summarized in Section <ref>.

§.§.§ Unknown Objects and Entities

This monitor was built only for PogoStick and ScienceBirds because none of the novelties in the CartPole++ domain introduce new objects into the domain. The monitor was trivial to develop for PogoStick because the input to the agent is symbolic and each object is accompanied by its type. The agent implements a typed inventory and can match the label against known types to detect novelties. ScienceBirds provides a colormap for each object in the scene, which requires visual reasoning to infer object types. Training. The recognition model for ScienceBirds was built using a standard implementation of multiclass (one-versus-rest) logistic regression[https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html]. It was trained on a dataset of 13,198 datapoints obtained by sampling ScienceBirds non-novel levels, using a standard 80%/20% train-test split with L2 regularization. The recognition model could recognize the 13 known object types with 100% accuracy at a confidence threshold of 0.65.
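A sketch of this recognition model and the resulting unknown-object check, using scikit-learn as in the footnote (feature extraction from the colormap is assumed and not shown):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def train_recognition_model(features, labels):
    """Multiclass (one-vs-rest) logistic regression with L2 regularization,
    trained on features of the known ScienceBirds object types."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2)
    clf = LogisticRegression(multi_class="ovr", penalty="l2", max_iter=1000)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf


def unknown_object_monitor(clf, object_features, threshold_c=0.65):
    """Flag novelty when the most confident class probability falls below threshold_c."""
    probs = clf.predict_proba(object_features)
    return np.max(probs, axis=1) < threshold_c   # True -> likely unknown object type
```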
Analysis. The monitor was tested on 6 types of ScienceBirds levels (covering novelty ID 2 in Table <ref>) that introduce new objects or entities. Each level type was sampled 10 times to create a novelty recognition test set, and the ScienceBirds agent played these levels. The monitor could detect novelty in the 6 level types with 100% accuracy. One level introduced a new bird type that is orange in color; this bird was detected as a red bird, a known object type. While that novelty was not detected, the agent could still play the game. Arguably, the recognition module implemented here is simple. A more complex visual input (pixels) would motivate a more complex recognition module built with convolutional neural networks. All types of classifiers estimate class probabilities during classification, which can be used with an appropriate threshold to trigger novelty detection.

§.§.§ Reward Divergence

In order to demonstrate the utility of the reward divergence monitor introduced in Section <ref>, we evaluate the sensitivity of this score to the existence of novelties in ScienceBirds. Training. We first trained the reward estimation model (g^*) on approximately 15,000 datapoints collected from non-novel ScienceBirds levels. We used 80% for training and 20% for testing. To train the reward estimator, we normalized the scale of the input (state and action) and output (reward) to fall within the range (0,1). A scatter plot of estimated and actual rewards on the test set is shown in Figure <ref>. The points aggregate close to the 45-degree line. The root mean square error (RMSE) of the fully-trained reward estimator (g^*) tested in the non-novel environment was 0.008, indicating that the reward estimator can accurately estimate rewards for actions taken in non-novel levels. Analysis. To evaluate the sensitivity of reward divergence, we created novelty test datasets for all published novelties in the ScienceBirds domain. Each dataset was sampled from levels instantiating a specific novelty ID and from non-novel levels, and consisted of ⟨state, action, reward⟩ datapoints. We used the reward estimator trained on non-novel levels (g^*) to estimate the reward for each datapoint and computed the absolute reward error. We then analyzed whether the absolute error in reward estimation is sensitive to the existence of novelties and can be used to classify a datapoint as novel. Table <ref> shows the area under the receiver operating characteristic (ROC) curve (AUC) for various novelty IDs in the ScienceBirds domain. The results suggest that, indeed, the absolute error in reward estimation is sensitive to the existence of some types of novelties but not all. Specifically, it can detect novelties belonging to IDs 4 (interaction) and 6 (global constraints). While new types of relational interaction between objects and entities can impact the cumulative score the agent gets, changes in global constraints can lead to significant plan execution failures. The ROC curves in Figure <ref> suggest that these novelty IDs can be detected with high accuracy using a threshold value of 2.0.

§.§ Novelty Accommodation

Novelty accommodation in HYDRA was evaluated in experiments designed using the problem setup introduced in the earlier sections. For each domain, we selected a few novelty instantiations that we expect the repair method to accommodate. We ran individual experiments for each selected novelty in every domain. The experiments were set up as follows.
Each trial starts with non-novel episodes: novelty is introduced after the first k episodes and persists until episode N, at which point the trial ends. After a trial ends, the agent and the environment are reset to their defaults and the next trial commences. Each experiment was run for T trials, and the results below report average performance.

§.§.§ CartPole++

First, we study the efficacy of the proposed repair method for accommodation. The novelty we study is increasing the mass of the cart by a factor of 10 (novelty ID 1 in Table <ref>). This novelty was selected because it significantly affects the dynamics of the CartPole++ system, making it less controllable without understanding the impact of the novelty. We compare the performance of four different agents: planning-static, planning-adaptive (HYDRA), DQN-static, and DQN-adaptive. We describe these agents below. For each agent, we ran T=10 trials with N=200 episodes, and novelty was introduced after k=7 episodes. The planning-static agent selects which action to perform by using a PDDL+ domain that is consistent with the environment before novelty has been introduced. The planning-adaptive agent initially selects which action to perform just like the planning-static agent. However, it implements the HYDRA framework and monitors plan execution, automatically repairing the PDDL+ model when inconsistency is detected, using focused model repair as proposed in Section <ref>. The repairable fluents are all parameters defining the CartPole dynamics, summarized in Table <ref>. To estimate the inconsistency score C, the planning-adaptive agent uses the Euclidean distance over only the pose of the CartPole (i.e., cart_x, cart_y, theta_x, theta_y). In the nominal, non-novel case C=0, while the threshold was set to C_th=0.009. The DQN-static and DQN-adaptive agents are RL agents, employing a standard deep Q-network (DQN) implementation with experience replay memory <cit.>. The Q-network is built with a dense input layer (10 × 512), two hidden layers (512 × 512), and a dense output layer (512 × 5), and uses the Rectified Linear Unit (ReLU) activation function. The Q-network was trained to achieve perfect performance in the canonical setup. The DQN-static agent applies the policy learned in the canonical setup to the novelty setup. This baseline was implemented to ascertain that the introduced novelty indeed impacts the performance of the agent and motivates adaptation. The DQN-adaptive agent is initialized as the DQN-static agent, trained on the non-novel episodes, but it continues to update its weights after novelty is introduced, allowing it to potentially adapt to novelties.
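For reference, a minimal PyTorch sketch of the Q-network used by the two DQN baselines (layer sizes as stated above; the replay memory, target network, and training loop are omitted and any further details are assumptions):

```python
import torch.nn as nn


class QNetwork(nn.Module):
    """Q-network for the DQN baselines: dense 10->512 input layer,
    two 512->512 hidden layers, and a 512->5 output layer, all with ReLU."""
    def __init__(self, state_dim=10, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, state):
        return self.net(state)   # Q-values, one per CartPole++ action
```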
The performance of the planning and DQN agents is summarized in Figure <ref>. In each graph, the x-axis captures the episodes in a trial and the y-axis shows the total reward collected by the agent per episode (normalized to [0.0, 1.0]). The red line indicates the episode where the novelty (shown in the graph title) was introduced. The shaded area represents the 95% confidence interval computed over all 10 trials. As shown in Figure <ref>, all agents demonstrate perfect or near-perfect performance at the beginning of the trial and then experience a significant drop in performance when novelty is introduced (episode 8). This drop demonstrates that the changes in the environment dynamics impact the performance of all agents. There is variability in how each agent responds to the changes in the environment. We make the following observations:

Resilience of planning agents: The novelty-induced performance drop is more significant in the learning agents. After the introduction of novelty, the performance of the DQN agents drops to approximately 20% for both DQN-adaptive and DQN-static. In contrast, the performance of the planning agents only drops to ≈40%. This difference can be explained by the agents' design. The planning agents' PDDL+ model defines the system dynamics in a general manner. Thus, it can still be sufficiently accurate in some conditions, even if some part of it is inaccurate. On the other hand, the DQN agents' learned policy is not general and is only applicable to a much reduced subset of cases after novelty.

Quick adaptation via model-space search: As expected, after novelty is introduced, the static versions of the DQN and planning agents continue performing poorly, while the adaptive agents improve their performance over time. However, the time taken to improve differs greatly between the DQN-adaptive and planning-adaptive agents. Learning in DQN-adaptive is slow, requiring multiple interactions with the environment. In fact, post-novelty, DQN-adaptive took 300 episodes to improve by ≈10% and reach ≈40% of optimal performance. In contrast, the planning-adaptive agent recovers very quickly, to ≈100% in as few as 6 episodes. This observation supports our central thesis: model-space search enables quick adaptation in dynamic environments because it can localize the learning to specific parts of the explicit model. Other parts of the explicit model are directly transferred to the novel setup. Knowledge transfer is challenging to implement in model-free methods (e.g., DQNs), in which action-selection knowledge is distributed throughout the network.

Interpretable by design: The model repair mechanism proposes specific and localized changes to the agent's explicit PDDL+ model. Thus, adaptation in the planning-adaptive agent is interpretable. A model designer can inspect the proposed repair to understand why and how the novelty affected the agent's behavior. Here is a repair found by the method during evaluation: repair: [m_cart: 9.0, l_pole: 0, m_pole: 0, force_mag: 0, gravity: 0, ...]; resulting consistency: 0.0067561. In contrast, learning in model-free systems such as DQN-adaptive cannot be interpreted directly. The planning-adaptive agent uses only its observations over a single episode in the environment to guide its search for the correct model. The observations by themselves may not provide sufficient information to determine the parameter values exactly. The model repair mechanism may therefore be repeated after different episodes to further update the model and increase its accuracy given new trajectories. This is occasionally seen in this experimental evaluation, where the PDDL+ model of the planning-adaptive agent is updated multiple times, each update bringing it closer to the "true" repair (mass of cart ×10).

The next set of experiments was run to evaluate the generality of the repair-based accommodation mechanism. Figure <ref> summarizes the planning-adaptive agent's behavior when the introduced novelty changes the length of the pole (novelty ID 1 in Table <ref>) and gravity (novelty ID 6 in Table <ref>). The results show that changing the length of the pole is impactful and degrades the agent's performance (as evidenced by the performance dip immediately after the novelty is introduced), but the repair mechanism enables the agent to recover its performance to a significant extent.
The impact of changing gravity on the agent's performance is not as clear. When the gravity increases to 20, the agent's performance does not change, suggesting that the planning model is resilient to changes in gravity. In an extreme case, when the gravity is reduced to -40, the performance drops and never recovers. It is likely that the cart becomes uncontrollable in very low gravity and, consequently, no repair is ever successful in improving the performance.

§.§.§ ScienceBirds

In ScienceBirds, we studied the novelty that increases the gravity in the environment (novelty ID 6), which causes the bird to fall short of its target. A planning-adaptive agent was designed with the repair parameters in Table <ref>. The inconsistency score was computed as described in Example <ref> and the threshold C_th was set to 10. In the experiment, we ran T=5 trials with N=30 episodes, and novelty was introduced after k=7 episodes. Results from this experiment are shown in Figure <ref>, left. There are a few key observations to make. First, the performance of the agent in non-novel levels (i.e., before the novelty was introduced in episode 7) is not perfect, and it misses passing the level in some instances. ScienceBirds is a complex, continuous domain, and even small inaccuracies in modeling or errors in observations can lead to failures during plan execution. After the novelty is introduced, the performance of the agent drops significantly, demonstrating that the novelty was impactful. However, we see that the repair mechanism is triggered as the inconsistency score increases beyond the set threshold. The repair is able to change the gravity parameter to improve the agent's performance. There is significant variability in how soon the relevant repair is found in different trials. However, by the 28th episode, the agent has recovered its performance. Correspondingly, we see (Figure <ref>, right) that the computed inconsistency is low in the beginning episodes. As soon as novelty is introduced, we see an immediate rise in the inconsistency score, which is reduced as the agent's model is autonomously repaired. This result demonstrates that the inconsistency score is sensitive to a class of novelties and can be used to detect their existence.

§.§.§ PogoStick

In PogoStick, we studied the novelty that increases the number of logs produced (5 times the original canonical value). This novelty is an instantiation of ID 1 (new attribute), as defined in Table <ref>. Table <ref> describes the MMOs and the deltas that were implemented. We set C_th to 2. In PogoStick, a fixed cost on the order of thousands of points is associated with every action; for example, cutting a tree to create logs costs roughly 4000 points. Once the agent achieves the goal, i.e., it constructs a pogo stick, the agent is rewarded 128,000 points. The score at the end of an episode is obtained by subtracting the action costs from the reward. The introduction of novelty creates a possibility of completing the task with a shorter plan, paying a smaller action cost. In contrast to the novelties in CartPole++ and ScienceBirds discussed above that degrade the agent's performance, this novelty in PogoStick does not degrade performance but presents an opportunity to earn a higher score. Figure <ref> shows the results for two different 30-episode trials where novelty was introduced in episode 0. The results show that before the novelty is introduced the agent can complete the task and earn an average of 10K in reward.
After the novelty is introduced, the agent earns a higher reward (an average of 15K) because it now needs fewer actions to complete the task. In episode 1, the agent accommodates the novelty by updating its planning model, increasing the number of logs obtained to a value of 3 (an increment of 1). Note that the accommodation is nearly instantaneous (in game 1), as the ⟨ s_i-1, a_i, s_i⟩ trajectories are easily distinguishable, and the causal and symbolic representation of the environment supports easier detection and incorporation of the novelty. This shows the merit of our domain-independent approach, as the search can quickly update the relevant parameter when the right information is available. These are some of the initial results for this domain, and we are continuously working to incorporate several different novelties for it.

§.§ Summary of Results

Table <ref> provides an overview of our experimental results with respect to the types of novelties considered in this work (Table <ref>). Rows in Table <ref> represent novelty types and columns represent the detection or adaptation task ("D" or "A") for each domain (Cartpole++, ScienceBirds, and PogoStick). In each cell, we mark whether the current implementation of HYDRA shows positive results for detecting or adapting to novelties of the corresponding type. The value in each cell is "", "*", or ".", representing positive results, no positive results, and settings not evaluated so far, respectively. The resulting overview shows a promising outlook: the proposed domain-independent framework can detect and accommodate novelty IDs 1 and 6, and additionally detect IDs 2 and 4, in at least one domain. As we extend HYDRA to incorporate other model repair and learning mechanisms, we expect it to cover a larger space of novelties.

§ CONCLUSION AND FUTURE WORK

Despite employing a range of computational methods - from hand-designed model-based reasoning systems to model-free learning systems - autonomous agents depend upon the availability of accurate models of the environment at design time <cit.>. While model-based reasoning agents (e.g., planning) encode the models explicitly, model-free learning agents (e.g., reinforcement learning) learn action selection based on these models (available via simulations). This assumption of a closed world is unlikely to hold when the agents are deployed in real-world environments, either because the real world was not correctly understood when the models were developed or because the real world evolves. Designing agents that can robustly handle open worlds - environments that may change while the agent is operational - has recently been proposed as a challenge for intelligent system design <cit.>. This paper introduces HYDRA, a domain-independent framework for implementing agents that are novelty-aware, i.e., they can detect, characterize, and accommodate novelties during performance. HYDRA is built upon model-based reasoning and exploits the explicit and compositional nature of the knowledge implemented in such systems. HYDRA frames learning as a volitional meta-reasoning activity that monitors the agent's own behavior, identifies an opportunity when it diverges from what is expected, and adapts the models that underlie action reasoning. The framework leverages a wide gamut of computational methods, including continuous-domain planning, classical and deep learning, heuristic search, diagnosis and repair, etc.
HYDRA contributes to the growing set of integrated intelligent systems that employ a variety of intelligent algorithms in a single, end-to-end architecture that demonstrates complex behavior. We used HYDRA to design novelty-aware agents for three complex, mixed discrete-continuous domains: CartPole++, ScienceBirds, and PogoStick. We found that the method can be implemented in a domain-independent fashion, with few domain-specific elements that guide search and establish a success criterion. Through empirical analyses, we show that learning by model repair can learn quickly, requiring only a few interactions with the environment. Further, revisions are expressed in the language the model is written in and are therefore interpretable by design: a model designer can inspect what the agent learns. Our analyses show that the proposed method retains the strengths of model-based reasoning methods (structure and explicability) while making them adaptable to changing environment dynamics. We also demonstrate that learning by model repair can alleviate some challenges inherent in modeling a complex domain: if the domain designer writes an inaccurate model, the proposed method can refine it based on observations from the environment. There are several avenues for future work. We are exploring how information gathered during inconsistency estimation can be leveraged to search the space of model repairs more efficiently. For instance, if the process 'flying' in ScienceBirds contributes most to the computed inconsistency, it is worthwhile to search repairs to the 'flying' process before other aspects of the model. We are also exploring how the repair framework can be extended to include modifications to the structure of the PDDL+ domain by adding, removing, and modifying preconditions and effects, and finally adding and removing entire happenings. This line of research will bring insights from model-learning research into a larger, integrated theory of planning model repair. It will greatly expand the classes of novelties a planning system can accommodate, supporting the development of robust open-world learning agents.

§ ACKNOWLEDGEMENTS

The work presented in this paper was supported in part by the DARPA SAIL-ON program under award number HR001120C0040. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. The authors thank Matt Klenk for his contributions to problem and approach formulation.
http://arxiv.org/abs/2306.10730v1
20230619070345
UniG3D: A Unified 3D Object Generation Dataset
[ "Qinghong Sun", "Yangguang Li", "ZeXiang Liu", "Xiaoshui Huang", "Fenggang Liu", "Xihui Liu", "Wanli Ouyang", "Jing Shao" ]
cs.CV
[ "cs.CV" ]
UniG3D: A Unified 3D Object Generation Dataset
==============================================

The field of generative AI has a transformative impact on various areas, including virtual reality, autonomous driving, the metaverse, gaming, and robotics. Among these applications, 3D object generation techniques are of utmost importance. This technique has unlocked fresh avenues in the realm of creating, customizing, and exploring 3D objects. However, the quality and diversity of existing 3D object generation methods are constrained by the inadequacies of existing 3D object datasets, including issues related to text quality, the incompleteness of multi-modal data representation encompassing 2D rendered images and 3D assets, as well as the size of the dataset. In order to resolve these issues, we present UniG3D, a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on the Objaverse and ShapeNet datasets. This pipeline converts each raw 3D model into a comprehensive multi-modal data representation <text, image, point cloud, mesh> by employing rendering engines and multi-modal models. These modules ensure the richness of textual information and the comprehensiveness of data representation. Remarkably, the universality of our pipeline refers to its ability to be applied to any 3D dataset, as it only requires raw 3D data. The selection of data sources for our dataset is based on their scale and quality. Subsequently, we assess the effectiveness of our dataset by employing Point-E and SDFusion, two widely recognized methods for object generation, tailored to the prevalent 3D representations of point clouds and signed distance functions. Our dataset is available at: https://unig3d.github.io.

§ INTRODUCTION

Generative AI has revolutionized the way humans work and improved their efficiency, as this technology can understand human intentions and automatically generate the required content. In particular, there have been numerous generative works in virtual reality <cit.>, autonomous driving <cit.>, the metaverse <cit.>, gaming <cit.>, and robotics <cit.>. In the aforementioned applications, a crucial and significant technique is 3D object generation <cit.>, which involves creating realistic or novel 3D representations aiming to simulate and replicate real-world or imagined 3D objects. 3D object generation technology opens up new possibilities for creating, customizing, and exploring 3D objects, making it a valuable tool in various fields where 3D models play a crucial role. Several recent works <cit.> have tackled the problem of 3D generation by optimizing 3D representations against a text-to-image model and do not leverage 3D data. Although these methods have demonstrated promising results, state-of-the-art approaches typically require about six GPU hours to produce a single sample. It is challenging to scale up the generation of 3D data by utilizing these methods. There are alternative methods for 3D object generation that make use of 3D data. Some methods incorporate text as a condition during the 3D generation process. Despite promising early results, many of these works are limited to simple prompts or a narrow set of object categories due to the scarcity of 3D training data <cit.>. Alternatively, some methods use a pre-trained text-to-image model to condition their 3D generation procedure <cit.>.
However, since most datasets lack image data, researchers are left with the task of rendering each dataset individually, which is a time-consuming and resource-intensive process. In addition to text-conditioning, existing methods also employ image-conditioning as an alternative approach. However, they face similar challenges to those mentioned above. As a result, the quality of text information, the availability of 2D-rendered data, and the scalability of the dataset are crucial. To resolve these issues, we construct a unified 3D object generation dataset, UniG3D, by utilizing ShapeNet <cit.> and Objaverse <cit.> as raw data sources. We develop a unified pipeline that can convert raw 3D models into comprehensive multi-modal data, which allows researchers focusing on different target 3D representations or different input conditions to use it conveniently. Specifically, as shown in Fig. <ref>, we convert each raw 3D model into a <text, image, point cloud, mesh> quadruple by employing a 3D model rendering engine <cit.> and the BLIP <cit.> and CLIP <cit.> models. The former is used for rendering 2D and 3D representations, while the latter two are used for generating high-quality textual information. Our proposed unified multi-modal data transformation pipeline requires only 3D data, which eliminates the need for any manual annotation effort and demonstrates its scalability. To illustrate its effectiveness, we construct our dataset by using ShapeNet <cit.> and Objaverse <cit.> as the data sources, given their scale and quality. Then, we validate the efficacy of our dataset by utilizing Point-E <cit.> and SDFusion <cit.>, two typical object generation methods that are widely used for the prevalent 3D representations of point clouds and signed distance functions. UniG3D offers three contributions:

* We construct a large-scale unified 3D object generation dataset with rich textual information and comprehensive multi-modal data.
* We propose a universal data transformation pipeline that can convert any 3D data into representations suitable for most 3D object generation methods.
* To validate the efficacy of our dataset, we conduct experiments under various input conditions and target 3D representations. Based on our empirical investigations, we present several valuable insights into the impact of various conditions, the efficacy of data expansion, and the significance of text quality.

§ RELATED WORK

§.§ 3D Generative Methods

Several recent works have explored the challenge of generating 3D models with conditioned inputs by optimizing the 3D representations based on a text-image matching objective. <cit.> introduce DreamFields, a method that leverages CLIP to optimize the parameters of a NeRF model without the need for 3D training data. More recently, <cit.> extends DreamFields by incorporating a pre-trained text-to-image diffusion model in place of CLIP, producing more coherent and complex objects. <cit.> builds upon this technique by converting the NeRF representation into a mesh and further refining the mesh representation through a secondary optimization stage. Although these approaches are capable of generating diverse and intricate objects or scenes, the optimization procedures often demand significant GPU computational time to converge, posing challenges for practical applications. While the above primarily rely on optimizing against a 2D text-image model and do not utilize 3D data, alternative methods for conditional 3D object generation incorporate 3D data, sometimes in conjunction with text labels.
<cit.> leverages paired text-3D data to generate models in a joint representation space. <cit.> employs a flow-based model to generate 3D latent representations, and find some text-to-3D capabilities when conditioning their model on CLIP embeddings. <cit.> and <cit.> employ a VQ-VAE with an autoregressive prior to sample 3D shapes conditioned on text labels. SDFusion <cit.> utilizes an encoder-decoder structure to compress 3D shapes into a compact latent representation, which is then used to train a diffusion model for text-to-3D generation. While many of these works demonstrate promising early results, they tend to be limited to simple prompts or a narrow set of object categories due to the limited availability of 3D training data. Point-E <cit.> solves this problem by building a large-scale text 3D dataset. However, the datasets are not open source. Alternatively, some methods  <cit.>use a pre-trained text-to-image model to condition their 3D object generation procedure with images. However, since most datasets lack image data, researchers are left with the task of rendering each dataset individually, which is a time-consuming and resource-intensive process. In addition to text-conditioning, existing methods also employ image-conditioning as an alternative approach. However, they face similar challenges as mentioned above. §.§ 3D Object Datasets Many widely-used 3D datasets prefer to collect synthetic CAD models from online repositories <cit.>. Shapenet<cit.> stands out as the prevalent dataset. It covers 55 common object categories with about 51,300 unique 3D models. Every object in this dataset is precisely rendered as a 3D mesh, thereby imparting meticulous geometric information. Moreover, a "name" field supplements each 3D model, carrying rich metadata associated with it. Objaverse <cit.> is an extensive dataset comprising over 800K 3D models accompanied by descriptive captions, tags, and animations. It surpasses existing 3D repositories in terms of its scale, the number of categories, and the visual variety of instances within each category. However, the majority of objects lack appropriate text information. Each object in the dataset is represented in GLB format, posing a challenge for users in directly leveraging the 3D data. Another line of works <cit.> advocate real 3D objects in limited scale. MVImgNet <cit.> is a recent medium-sized dataset of multi-view images, which is highly convenient to gain by shooting videos of real-world objects in human daily life. It contains 6.5 million frames from 219,188 videos crossing objects from 238 classes, with rich annotations of object masks, camera parameters, and point clouds. As previously discussed, the current state of 3D object generation techniques is hindered by various factors that limit their quality. These factors include deficiencies in text quality, limited availability of 2D-rendered data, and dataset scale. Apart from the dataset scale, these limitations manifest in two ways. Firstly, available text information is often imprecise and lacking in detail, providing only broad categories or incomplete descriptions that have limited correlation with actual models. Additionally, existing datasets typically lack the essential data formats required for 3D object generation tasks, such as multi-view 2D rendered images, 3D point clouds, and multi-view meshes. For example, Shapenet <cit.> only provides raw meshes, while Objaverse <cit.> exclusively offers raw 3D models in GLB format. 
§ THE UniG3D DATASET

Table: Statistics of the four representations in the two UniG3D datasets.
Dataset | Mesh | Point cloud | Image | Text
UniG3D-ShapeNet | 500K | 50K | 1 million | 50K
UniG3D-Objaverse | 5 million | 500K | 10 million | 500K

In this section, we describe the data transformation process of UniG3D. The raw 3D data and related text information are gathered from two 3D datasets, specifically chosen among various datasets based on their scale and significance. The first dataset is ShapeNet <cit.>, a classic dataset that has more than 50K 3D objects with 55 annotated categories. The second is Objaverse <cit.>, a large-scale 3D dataset. It has approximately 800K 3D objects, but the category of most objects is unknown. We describe the transformation pipeline of the different representations in more detail in the following. As shown in Fig. <ref>, we convert each 3D model to four representations: mesh, image, point cloud, and rich text information.

§.§ The Construction of Quadruples

As illustrated in Fig. <ref>, we present the unified data transformation pipeline employed in our dataset. This pipeline encompasses the transformation of raw 3D data and raw text information into a unified representation, the quadruple. The pipeline begins by leveraging a powerful 3D model rendering engine, which enables the generation of multiple 2D and 3D representations from the raw data. Furthermore, our pipeline incorporates the utilization of state-of-the-art multi-modal models to generate high-quality textual information. By employing these models, we extract meaningful and descriptive text features from the 3D representations. This process ensures that our dataset includes comprehensive and accurate textual information that complements the visual aspects of the objects. Mesh. For each 3D model, we employ Blender <cit.>, a versatile software tool that supports various 3D formats and incorporates an optimized rendering engine, to generate ten multi-view meshes in the OBJ format. These meshes are rendered using a z-circular camera pose, ensuring comprehensive coverage of the object from different viewpoints. Specifically, the views are evenly spaced at intervals of 36 degrees to capture a wide range of perspectives. Additionally, to facilitate model training <cit.>, we also provide the models in signed distance function (SDF) format, which offers a convenient representation for 3D object generation tasks. Image. To obtain multi-view 2D images for each 3D model, we implement a customized rendering process. Leveraging the capabilities of Blender <cit.>, we develop a script that first normalizes the 3D models to fit within a bounding cube, ensuring consistent scale across all objects. Additionally, we set up a standardized lighting arrangement to ensure uniform illumination across the rendered images. Subsequently, we employ Blender's built-in real-time rendering engine to export the 2D images. The rendering process involves capturing ten images using the z-circular camera pose, which provides a well-distributed set of views around the object. These views are captured at equal intervals of 36 degrees, enabling comprehensive coverage of the object from different angles. The deflection angles used for these poses are the same as those of the above-mentioned multi-view meshes. In addition to the z-circular camera pose, we also capture another set of ten images using random camera poses. These random poses introduce variations in the viewing angles and orientations of the object, allowing for the generation of dense point clouds. By combining the multi-view images captured using the z-circular camera pose and the random poses, we obtain a total of 20 multi-view 2D images for each 3D model. This diverse set of images provides comprehensive visual information for subsequent experiments and processing tasks.
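The z-circular camera placement described above can be sketched as follows; the camera distance, elevation, and the Blender-side assembly of these poses are assumptions, while the 36-degree spacing of the ten views comes from the text.

```python
import math

def z_circular_camera_positions(n_views=10, radius=2.0, height=0.6):
    """Camera positions evenly spaced by 360/n_views = 36 degrees around the
    z-axis, all looking toward the origin where the normalized object sits."""
    poses = []
    for i in range(n_views):
        theta = math.radians(i * 360.0 / n_views)      # 0, 36, 72, ... degrees
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        poses.append({"location": (x, y, height), "look_at": (0.0, 0.0, 0.0)})
    return poses
```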
Point Cloud. To convert the 3D models into colored point clouds, we utilize the RGBAD images rendered from them. Initially, we generate dense point clouds by associating points with each pixel in the rendered RGBAD images. However, these point clouds often exhibit uneven distribution and contain a large number of points. To address this issue, we employ voxel point sampling techniques to create uniform point clouds consisting of 4K points. By directly constructing point clouds from the rendered images, we circumvent potential challenges that may arise when sampling points from 3D meshes. These challenges include dealing with points located inside the model or handling 3D models stored in non-standard file formats <cit.>. Our approach ensures a consistent and reliable representation of the objects in the form of point clouds. To further enhance the quality of our dataset, we implement heuristics to exclude low-quality models. Specifically, we employ a criterion based on the singular value decomposition (SVD) of each point cloud <cit.>. We compute the SVD for each point cloud and retain only those models where the smallest singular value exceeds a certain threshold. This process effectively filters out flat objects or models with poor geometric structure, ensuring that our dataset comprises high-quality and meaningful 3D representations. By employing these techniques, we create a comprehensive dataset of colored point clouds that accurately represent the underlying 3D models, while also ensuring the inclusion of high-quality and diverse objects for further analysis and research. Text. The text information of the 3D models has two main sources. One is the raw text information associated with each 3D model. However, we observe that a significant portion of this text information does not accurately correspond to the 3D models themselves, including terms like "model", "blender", and "Low poly". Therefore, we employ a CLIP-ViT model <cit.> to clean it by calculating image-text similarity. We only retain captions whose similarity value is above a certain threshold. By employing this method, over 80% of low-quality texts can be filtered out, while the false recognition rate does not exceed 30%. The other source is a multi-modal model that generates descriptions with the model's 2D image as input. For each 3D model, we employ BLIP <cit.> to generate rich and detailed descriptions based on its thumbnail or 2D rendered image. Then, we also evaluate their accuracy by the similarity score using the CLIP-ViT model <cit.>.

[Figure: A histogram of fine-grained UniG3D categories with representative members from several bins highlighted.]

Based on our data observation, we find that the data with low similarity is primarily attributable to the abstraction of 3D models, which have weak visual features. Hence, we implement a filtering process where we retain only those 3D models whose similarity score surpasses a predetermined threshold. By employing this method, the models we filter out account for approximately 20% of the total. Furthermore, for the 3D models with known categories, we enhance their description information by aligning it with the corresponding category.
To be specific, if the category sentence is absent from the original description, we extract descriptive phrases from the existing description and merge them with the known category sentence to create an improved description. Conversely, if the generated description already exhibits high quality and coherence, we retain it as is.

§.§ Statistics of our dataset

The statistics for each representation of UniG3D are presented in Table <ref>, providing an overview. Our dataset pipeline generates multiple representations for each raw 3D model, including ten meshes, one colored point cloud, and 20 images. These visual representations are accompanied by corresponding descriptive text information, enhancing the richness and comprehensiveness of the dataset. To provide a visual overview of the dataset, Fig. <ref> showcases the varying degrees of object counts across different categories within UniG3D, highlighting both the long-tail and head categories and their respective object counts. The distribution of object counts per category ranges from 0 to 200 in our dataset. More than half of the categories have fewer than 20 objects, representing the long-tail categories of the dataset. In contrast, there are approximately 50 head categories that contain around 200 objects each.

§ EXPERIMENTS

§.§ 3D Object Generation Method

[Figure: Meshes generated by SDFusion conditioned on different input modalities of ShapeNet.]

In our experimental study, we leverage two commonly used 3D object generation methods, namely Point-E and SDFusion, which are specifically tailored for two prevalent 3D representations: the point cloud and the signed distance function. These methods are grounded in the diffusion process, as proposed by Sohl-Dickstein et al. <cit.>. To enhance the clarity of the training and generation processes, we depict the forward and backward processes in Fig. <ref>, which provides a comprehensive overview of the steps involved in both the forward and backward pass of the diffusion model. Please refer to the supplementary material for detailed hyperparameters of the training process. In particular, we utilize two model structures in SDFusion: a VQ-VAE and the 3D latent diffusion model, whose parameter count exceeds 400 million. To enable interaction, learning the conditional distribution is important. We incorporate multiple conditional input modalities with task-specific encoders and cross-attention modules in the latent diffusion model, such as text input, image input, and text-image multi-modality input. For text input, we embed the text caption using BERT <cit.>, while for image input, we embed the image using CLIP <cit.>. For the generation of point clouds, we employ two small model structures in Point-E, namely 40M-text and 40M-image. Specifically, 40M-text is a small model that conditions only on text captions, not rendered images. The text caption is embedded with CLIP, and the CLIP embedding is appended as a single extra token of context. This model depends on the text captions present in our 3D dataset and does not leverage the fine-tuned GLIDE model. 40M-image is a small model with full image conditioning through a grid of CLIP latents. In the future, we will expand the scale of the training dataset and model structure.
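As a generic illustration of the diffusion machinery that both Point-E and SDFusion build on, the sketch below shows a textbook epsilon-prediction training step; it is not the exact training code of either method, and the noise-schedule values and conditioning interface are assumptions.

```python
import torch

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and the cumulative alpha-bar terms of the forward process."""
    betas = torch.linspace(beta_start, beta_end, T)
    return torch.cumprod(1.0 - betas, dim=0)

def diffusion_training_step(model, x0, cond, alphas_bar):
    """One training step: noise clean samples x0 to a random timestep t and
    train the conditional model to predict that noise (epsilon-prediction)."""
    T = alphas_bar.shape[0]
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    a_bar = alphas_bar.to(x0.device)[t].view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward (noising) process
    pred = model(x_t, t, cond)                             # backward model, conditioned on text/image
    return torch.nn.functional.mse_loss(pred, noise)
```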
§.§ Experimental Results §.§.§ The effects of different conditions To explore the distinct roles of different input modalities in the 3D object generation process, we initially compare the effects of images and text. The experiment is conducted on Shapenet. As shown in Fig.<ref>, images can provide a direct visual reference, allowing for the generation of 3D objects that closely resemble the appearance of the referenced images. However, images may lack contextual information or high-level semantics that can be conveyed through textual descriptions, as shown in the second column of Fig. <ref>. Textual descriptions allow for precise and specific control over the generated 3D objects by providing detailed instructions or constraints. But, textual descriptions may sometimes be ambiguous or subjective, leading to different interpretations and potential variations in the generated 3D objects. Additionally, text lacks the ability to convey rich visual details, such as color gradients, textures, or fine-grained shape features. r0.5 < g r a p h i c s > Comparison of the generative results by SDFusion when using images as the sole condition versus when incorporating additional text information. Consequently, using text and images as conditioning inputs have its own advantages and disadvantages. Moreover, we aim to explore the effect when utilizing both of the aforementioned modalities as conditions. As shown in Fig.<ref>, when using images as the sole condition, the model may have limited semantic understanding due to issues such as the angle of the images. For example, in the first row, the paper airplane, as well as the chairs in the second and third rows, can be challenging for the model to accurately determine the specific category or infer the occluded parts solely based on the image information. By incorporating text as an additional modality, the model gains a better grasp of the desired object characteristics, resulting in improved generation quality and a more comprehensive understanding of the semantic information associated with the object. Overall, using both text and images as conditioning inputs in 3D generation methods offers complementary benefits, with text providing semantic control and language understanding, while images contribute visual realism and rich visual details. The choice between these modalities depends on the specific requirements and objectives of the 3D generation task. r0.5 < g r a p h i c s > Effects of increased data sources. Data(s) represents the use of only the ShapeNet, while Data(s+o) represents the additional inclusion of the -Objaverse_coco dataset. (a) and (b) represent the experiments conducted on different 3D representations. §.§.§ The effectiveness of data expansion Due to limited computing resources, we are currently unable to conduct experiments with the entire set of -Objaverse datasets. Therefore, we create a subset of the dataset by selecting data from the coco category, which we refer to as -Objaverse_coco. This subset contains around 50K 3D objects with 66 categories. -Shapenet has 50K+ 3D objects with 55 categories. In order to explore the changes in data generation quality, such as point cloud integrity and diversity, after incrementally adding data from different categories, we conduct experiments on two 3D representations. Firstly, due to the high training cost associated with incorporating images as conditions in the Point-E method, we conduct experiments initially using text as the conditioning modality. As shown in Fig. 
<ref> (a), increasing the sources of data enhances the diversity of generated 3D models. This implies the necessity for large-scale and scalable datasets. Due to the model's sensitivity to language ambiguity and variation, it is important to note that when integrating different data sources, language ambiguities need to be addressed. For example, in different datasets, the term "mouse" could refer to either a small rodent or a computer input device. Then, we explore the impact of data expansion under different representations and conditions based on SDFusion method. The first column in Fig. <ref> (b) represents the test data from -Objaverse_coco. We observe that the model trained solely on Shapenet exhibits generalization ability. However, the addition of supplementary data allows the model to perform better on the new domain. §.§.§ The impact of multi-view data r0.5 < g r a p h i c s > Effects of increased multi-view data. Training(S) represents the model trained with single-view data, while Training(M) represents the model utilizing multi-view data. In order to address the potential limitations of images captured from a single viewpoint, our dataset offers users multi-view data to facilitate data augmentation. As we describe in Section <ref>, our dataset includes ten multi-view rendered images along with their corresponding meshes. Specifically, the views are evenly spaced at intervals of 36 degrees to capture a wide range of perspectives. To investigate the influence of multi-view data, we compare the performance of generative models trained on single-view and multi-view data using the SDFusion method. Training data is the combination of ShapeNet and -Objaverse_coco dataset. The benefits of utilizing multi-view training data are demonstrated in Fig. <ref>, where it is evident that the model trained with such data outperforms in cases where the input image exhibits relatively uncommon viewpoints. Consequently, augmenting the dataset with multi-view perspectives proves to be an effective strategy for enhancing the model's robustness when dealing with less frequent angles. r0.5 < g r a p h i c s > (a) corresponds to using only the category as text information during training, while (b) corresponds to utilizing the descriptive text generated by our pipeline. §.§.§ The importance of text quality Due to the relatively lower training cost of the generation experiments based on signed distance function representations, we employ Point-E to investigate the impact of text quality on the task of 3D object generation. To ensure data diversity, we utilize both the Shapenet and -Objaverse_coco datasets as our training data. To provide a clear illustration of the quality and diversity of the generated point clouds, we present three different results for each text condition in Fig.<ref>, using distinct seeds. It shows that when using category as the text condition in the inference, there is not much difference in the completeness and diversity of the generated point clouds. The model using category as its text information can only generate a chair model by inputting the word "chair". When the model is given descriptive text as input, it fails to generate results that possess matching visual features. However, the model which has better text quality is capable of accepting more detailed descriptive text. It can generate more controllable models by specifying detailed textual information, such as specifying the airplane model's type, color, or material. 
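For completeness, the evenly spaced z-circular camera poses behind the multi-view comparison above (ten views, 36 degrees apart) can be written down as camera-to-world matrices. In the sketch below the orbit radius, elevation, and look-at convention are illustrative assumptions, since the exact rendering parameters are not restated here.

# Sketch: camera-to-world poses on a circle around the z-axis, 36 degrees apart.
# Only the 36-degree spacing is taken from the text; radius and elevation are
# illustrative placeholders.
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Return a 4x4 camera-to-world matrix looking from `eye` toward `target`."""
    eye, target, up = map(np.asarray, (eye, target, up))
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = right, true_up, -forward  # camera looks down -z
    pose[:3, 3] = eye
    return pose

def circular_poses(num_views=10, radius=2.0, elevation=0.5):
    """num_views poses spaced 360/num_views degrees apart (36 degrees for ten views)."""
    angles = 2.0 * np.pi * np.arange(num_views) / num_views
    eyes = np.stack([radius * np.cos(angles), radius * np.sin(angles),
                     np.full(num_views, elevation)], axis=1)
    return [look_at(eye, target=np.zeros(3)) for eye in eyes]

poses = circular_poses()
assert len(poses) == 10

Each pose is then handed to the rendering engine to produce one of the ten evenly spaced views; the additional random poses described earlier can be drawn analogously with randomized angles.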
Overall, our  data transformation pipeline enables the text-conditioned models to receive more detailed textual information and provide finer control over generation. § LIMITATION AND SOCIAL IMPACT Due to current limitations in computing resources, we are unable to conduct experiments using the complete -Objaverse datasets. In future work, we plan to present experimental results based on the entire dataset to validate the consistency of our conclusions at a larger scale. Moreover, considering recent methods that demonstrate improved speed and quality, we intend to explore a broader range of 3D generation methods in future experiments. We also aim to incorporate further tasks related to 3D understanding, such as novel view synthesis, neural surface reconstruction, and 3D point cloud classification, to expand the scope and applicability of our dataset. This work aims to provide a unified dataset for the 3D generation task, eliminating the need for extensive human annotation. While this offers positive impacts such as reducing human labor, it is important to acknowledge that this reduction may also have negative consequences, including job loss or displacement, especially for lower-skilled individuals who rely on such employment opportunities. § CONCLUSION In this study, we provide a unified 3D object generation dataset called . Our dataset is constructed by applying a universal data transformation pipeline to the Objaverse and ShapeNet datasets. The pipeline converts each raw 3D model into a comprehensive multi-modal representation encompassing text, images, point clouds, and meshes, using rendering engines and multi-modal models to capture textual information. As a result, the dataset guarantees both rich textual information and comprehensive data representations. The pipeline's universality stems from the fact that it relies solely on raw 3D data and can therefore be applied to any 3D dataset, which enhances its practicality and flexibility. During the construction of our dataset, we carefully selected data sources with respect to their scale and quality, guaranteeing the incorporation of diverse and reliable information. Finally, through empirical investigations of various factors, we present several key insights.
http://arxiv.org/abs/2306.08885v2
20230615063923
Shadow-based quantum subspace algorithm for the nuclear shell model
[ "Ruyu Yang", "Tianren Wang", "Bing-Nan Lu", "Ying Li", "Xiaosi Xu" ]
quant-ph
[ "quant-ph", "nucl-th" ]
Graduate School of China Academy of Engineering Physics, Beijing 100193, China Graduate School of China Academy of Engineering Physics, Beijing 100193, China Graduate School of China Academy of Engineering Physics, Beijing 100193, China Graduate School of China Academy of Engineering Physics, Beijing 100193, China [email protected] Graduate School of China Academy of Engineering Physics, Beijing 100193, China In recent years, researchers have been exploring the applications of noisy intermediate-scale quantum (NISQ) computation in various fields. One important area in which quantum computation can outperform classical computers is the ground state problem of a many-body system, e.g., the nucleus. However, using a quantum computer in the NISQ era to solve a meaningful-scale system remains a challenge. To calculate the ground energy of nuclear systems, we propose a new algorithm that combines classical shadow and subspace diagonalization techniques. Our subspace is composed of matrices, with the basis of the subspace being the classical shadow of the quantum state. We test our algorithm on nuclei described by Cohen-Kurath shell model and USD shell model. We find that the accuracy of the results improves as the number of shots increases, following the Heisenberg scaling. Shadow-based quantum subspace algorithm for the nuclear shell model Xiaosi Xu July 31, 2023 =================================================================== § INTRODUCTION Nuclear ab initio calculation, e.g., solving the nuclear ground state from the bare nucleon-nucleon interactions, is notoriously hard on classical computers due to their exponential complexity. On the other hand, quantum computing has emerged as a promising new paradigm that is expected to provide more effective solutions for these problems <cit.>. Nonetheless, noise is a significant issue in quantum computing due to hardware limitations <cit.>. To make better use of existing Noisy Intermediate-Scale Quantum (NISQ) devices, several classical-quantum hybrid algorithms have been proposed <cit.>. Among those, the variational quantum eigensolver (VQE) is a major class of algorithms <cit.>. Recently, VQE has been applied to solve static problems in nuclear systems <cit.>. The ground state of the deuteron was first solved on a cloud quantum computing platform using VQE <cit.>. Since then, researchers have applied this algorithm to the Lipkin model and shell model (SM) as well <cit.>. Along with the VQE algorithm, the imaginary time evolution was also employed in quantum computing to calculate the nuclear ground state energy <cit.>. This method involves projecting an initial trial state onto the ground state using an imaginary time evolution operator. Another approach for solving the ground state problem is based on quantum subspace diagonalization (QSD) <cit.>. This method involves selecting a subspace of the Hilbert space that consists of wavefunctions, and then diagonalizing the Hamiltonian within that subspace. The choice of wavefunctions varies, and there are different ways to generate them. One approach is to start from an initial state that has a finite overlap with the ground state and then evolve the real-time Hamiltonian to generate additional wavefunctions <cit.>. By selecting different evolution times, a set of wavefunctions can be constructed to form a subspace. Another approach is to use imaginary time evolution to generate a subspace. 
In <cit.>, the Lanczos algorithm was utilized to compute the ground state energy of the deuteron by employing such a subspace method. The definition of the subspace is not limited to wavefunctions, as shown in  <cit.>, where the subspace is defined as a polynomial of the density matrix, enabling error mitigation. Previous algorithms have encountered difficulties in achieving highly accurate results, as the optimization of VQE is a known NP-hard problem <cit.>, and can become trapped in local minima. Furthermore, the problem of vanishing gradients can arise when the number of qubits is large <cit.>. In addition, accurate imaginary-time evolution requires a deeper and more complex circuit than that required for real-time evolution. When using the QSD algorithm, it is necessary to truncate the overlap matrix S of the subspace to remove singular values below a certain threshold <cit.>, and failure to do so will result in a high number of shots being required to reduce the variance of the results; however, truncation itself introduces an error. In general, imaginary-time evolution and QSD also require the use of Hadamard tests to measure the cross-terms ⟨ψ_i|H| ψ_j⟩ and the overlap ⟨ψ_i|ψ_j⟩, further complicating the circuit. Note that in cases where the particle number of the system is conserved, it may not be necessary to use the Hadamard test. However, even in such cases, it still requires a deeper circuit than simply preparing |ψ_i⟩ [When measuring the overlap ⟨ψ_i|ψ_j⟩, where |ψ_i⟩ and |ψ_j⟩ can be prepared from the initial state |0⟩ and unitary circuits U_i and U_j, both U_i^† and U_j must be constructed in the circuit <cit.>.]. To overcome these challenges, we introduce a novel class of subspaces. Our approach involves using real-time evolution to generate a set of quantum states, and then constructing their classical shadows. Using these classical shadows, we can construct a subspace composed of matrices, and diagonalize the Hamiltonian within this subspace. Unlike VQE, our algorithm does not require an optimization process but only needs to prepare a state with a finite overlap with the ground state. Furthermore, our method is less susceptible to statistical fluctuations compared to QSD, which eliminates the need to truncate S. Additionally, we observe that the measurement accuracy increases with shots and approaches the Heisenberg limit. To evaluate the effectiveness of our algorithm, we have calculated the ground states of various nuclear systems, including ^6He,^6Li,^7Bo,^8Li,^18O, and ^18F using SM. We have investigated the impact of the subspace size and number of shots on the accuracy of our method through numerical analysis. Our paper is structured as follows: In Section <ref>, we provide an overview of related work related to nuclear systems. In Section <ref>, we introduce the nuclear SM, which may be unfamiliar to quantum computing researchers. In Section <ref>, we review the classical shadow. In Section <ref>, we present the modified QSD algorithm and discuss its working principles. In Section <ref>, we present a comprehensive algorithmic framework for our approach. In Section <ref>, we report the results of our numerical experiments. Finally, we conclude our work in Section <ref>. § RELATED WORKS VQE is one type of variational algorithm that is used to find the ground state of a given Hamiltonian. In VQE, a parameterized quantum circuit is constructed. During the training process, these variables are iteratively updated to minimize the expectation value of the Hamiltonian. 
⟨ H(θ)⟩ = ⟨ 0 |U(θ) HU^†(θ)| 0⟩. VQE was the first NISQ algorithm used for calculating the binding energy of nuclear systems <cit.>. This method was initially used for the simplest nuclear model and has been extended to more sophisticated models such as the Lipkin and nuclear SM <cit.>. In particular, the authors of Ref. <cit.> utilized an adaptive strategy to generate a suitable ansatz circuit. One potential drawback of VQE is the possibility of getting stuck in a local minimum during parameter updates. In addition to variational algorithms, the ground state energy of nuclear systems was also solved by imaginary time evolution algorithms <cit.>. This algorithm can not only find the ground state but also prepare the thermal state. Another related work uses the method called full quantum eigensolver <cit.>, which calculates the ground states of various nuclei using the harmonic oscillator basis. The full quantum eigensolver can be considered as a first-order approximation of imaginary time evolution. A state that has undergone imaginary time evolution becomes |Ψ(β)⟩=e^-β H|Ψ(0)⟩/e^-β H|Ψ(0)⟩, where e^-β H|Ψ(0)⟩ denotes the normalization factor. If there is a finite overlap between |Ψ(0)⟩ and the ground state, Eq. <ref> will exponentially approach the ground state as β increases. Unfortunately, it's usually difficult to implement the non-unitary operator e^-β H with quantum circuits. One approach is to achieve such evolution through variational algorithms <cit.>. The authors variationally prepared the states after a short-time imaginary time evolution. Apart from this, there are two main approaches to implementing this operator. One method is to use a unitary evolution to approximate this non-unitary evolution <cit.>. While the approach used state tomography to find a suitable unitary evolution, the computational cost grows exponentially with the system's correlation length. Another approach is to use a linear combination of unitary operators to approximate the non-unitary operator <cit.>. § THE NUCLEAR SHELL MODEL Nuclear force can be derived from the fundamental theory of QCD. However, in practical calculations, these forces are usually too complex and inaccurate. As alternatives, one can work in a limited Hilbert space and write down a phenomenological interaction intended to reproduce the experimental data. One example is the nuclear SM. The SM usually involves an inert core consisting of closed shells, while the remaining valence nucleons fill the orbitals in the open shells. In SM a single-particle state is characterized by four good quantum numbers: angular momentum J and its 3rd component J_z, isospin T and its 3rd component T_z. The Hamiltonian in the SM can be represented in a second quantization form that includes kinetic energy and two-body interactions. We use lowercase letters a, b, c, and d to denote spherical orbitals. Taking into account symmetries in nuclear systems, the Hamiltonian can be written as H^f=∑_a ϵ_a n̂_a+∑_a ⩽ b, c ⩽ d∑_J T V_J T(a b ; c d) T̂_J T(a b ; c d), where n̂_a denotes the occupation number for the orbital a with quantum numbers n_a,l_a,m_a. The coefficients in this Hamiltonian are parametrized by fitting to the experimental data. The first term is a one-body operator and the second term is a two-body operator given by T̂_J T(a b ; c d)=∑_M T_z A_J M T T_z^†(a b) A_J M T T_z(c d), where (JM) and (TT_z) denote the coupled spin and isospin quantum numbers, respectively. 
And A_J M T T_z^†(a b) is given by Â_J M, T M_T^†(a b) ≡[c_a^†× c_b^†]_J M, T M_T = ∑_m_a, m_b(j_a m_a, j_b m_b | J M) ∑_μ_a, μ_b(1/2μ_a, 1/2μ_b | T M_T) ĉ_j_a m_a, 1/2μ_a^†ĉ_j_b m_b, 1/2μ_b^†. The antisymmetric and normalized two-body state is defined by |a b ; J M⟩≡1/√(1+δ_a b)Â_J M^†(a b)|v⟩. where |v⟩ denotes the vacuum state. There are various SM Hamiltonians that are applicable in different regions of the nuclide chart. In this paper, we focus on two SM parametrizations to demonstrate our method. The first one is the Cohen-Kurath SM <cit.>, which considers ^4 He as an inert core with the valence nucleons filling the p-shell. The p-shell has the capacity of 12 nucleons, including 6 neutrons and 6 protons. Therefore, this model applies to nuclei from ^4He to ^16O. The second SM we consider is the Wildental SM, also known as the USD SM <cit.>, which treats the magic number nucleus ^16O as the frozen core. In this model, the valence nucleons can be filled into the s-d shell with the capacity of 24 nucleons. Thus this SM can describe the nuclei from ^16O to ^40Ca. The applications of our method to other SMs are similar. § THE CLASSICAL SHADOW Quantum state tomography (QST) is widely used to characterize unknown quantum states, but it has two major drawbacks. Firstly, classical computers are inefficient at processing quantum states obtained through this method. Secondly, the resources required for QST increase linearly with the dimension of the Hilbert space <cit.>. In many cases, we don't need to fully characterize a quantum state. Instead, we only need to know certain properties of the state. Performing randomized measurements can be efficient in such situations. One method is by using classical shadows. The concept of shadow tomography was first introduced in Ref. <cit.>. The authors showed that using logarithmically scaled measurements (log^4(N)) is sufficient to derive N linear functions of an unknown state. Ref.  <cit.> provides a straightforward approach to constructing a classical description of an unknown state, known as the classical shadow. This method involves randomly measuring stabilizer observations of quantum states and then processing the measurements on a classical computer. The steps for constructing a classical shadow are as follows. * Randomly insert global Clifford gates C before measurement. * Measure the final state in computational basis Z and derive the binary vector | z⟩. This and the step above runs on a quantum computer. * Calculate the density matrix C^† |z⟩⟨z|C. This step and all subsequent steps are done on a classical computer. * Repeat the steps above N times and calculate the expectation ρ = 𝔼_ CC^†|z⟩⟨z|C on the classical computer. * Transform the matrix with an operator M(ρ) = (2^n + 1)ρ - I. § MODIFIED SUBSPACE DIAGONALIZATION METHOD QSD is a powerful tool for solving eigenvalue problems on a quantum computer. The main advantage of quantum methods over classical ones is that quantum computers can generate more complex states. There are three common ways to generate suitable quantum states: real-time evolution, imaginary-time evolution, and quantum power method. In all these methods, the general eigenvalue problem that needs to be solved is: H^sx = ESx. Here, H^s_ij = ⟨ψ_i|H|ψ_j⟩ and S_ij = ⟨ψ_i|ψ_j⟩. {|ψ_i⟩},i=1,…,m are the basis of the subspace with dimension m. H represents the Hamiltonian of the system that we want to solve, while H^s is the effective Hamiltonian in the subspace. 
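Before specializing the subspace, it is helpful to make the classical-shadow recipe recalled above concrete. The following state-vector simulation assumes that the sampled global Clifford gates are supplied as unitary matrices (for instance via qiskit.quantum_info.random_clifford, which is a tooling assumption rather than part of the algorithm) and applies the transformation M(ρ) = (2^n + 1)ρ - I from the last step to each snapshot.

# Sketch: classical-shadow estimate of an n-qubit pure state from randomized
# global Clifford measurements. The Clifford unitaries are assumed to be given
# as matrices; only the measurement and inversion logic of the recipe is shown.
import numpy as np

def classical_shadow(psi, cliffords, rng):
    """Average the snapshots (2^n + 1) C^dag |z><z| C - I over the given Cliffords."""
    dim = psi.shape[0]
    n_qubits = int(np.log2(dim))
    shadow = np.zeros((dim, dim), dtype=complex)
    for C in cliffords:
        phi = C @ psi                                  # rotate the state by the Clifford
        probs = np.abs(phi) ** 2                       # Born probabilities in the Z basis
        z = rng.choice(dim, p=probs / probs.sum())     # sampled computational basis index
        ket_z = np.zeros(dim, dtype=complex)
        ket_z[z] = 1.0
        back = C.conj().T @ ket_z                      # C^dag |z>
        shadow += (2 ** n_qubits + 1) * np.outer(back, back.conj()) - np.eye(dim)
    return shadow / len(cliffords)

Linear observables are then estimated as traces against the returned matrix, and products of two such shadows supply the matrix basis used below.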
Our algorithm focuses on the first method, real-time evolution, where the state is prepared as |ψ_j⟩ = e^-iHt_j|0⟩, with |0⟩ being the initial state having a finite overlap with the actual ground state. However, this approach often encounters a problem where some singular values of the matrix S are very small. As a result, statistical fluctuations of the elements in the matrices H^s and S can have a significant impact on the results. This is mainly due to the uncorrelated shot noise of the measurements. To overcome this shortcoming, one natural solution is to use state tomography to measure these final states and then calculate S and H^s. However, this method requires a considerable amount of resources when the number of qubits is large, making it impractical. With a density matrix, it is also difficult to calculate matrix multiplication due to the large dimension. In this work, we represent these final states {|ψ_i⟩} using classical shadows {ρ_i}. Consequently, the original quantum state-based subspace algorithm cannot be employed. To handle the density matrix more conveniently, for an arbitrary matrix X = X_i,j|ψ_i⟩⟨ψ_j|, X_i,j∈ℂ, we vectorize it as: X = Σ_i,j X_i,j|ψ_i⟩⟨ψ_j|⟶ |X⟩⟩ = Σ_i,jX_i,j|ψ_i⟩⊗ |ψ_j⟩. Note that this is not the only way to vectorize. Another method commonly used in quantum information theory is the Pauli transfer matrix representation. In our work, it is more convenient to use the method of Eq. <ref>. A general map applied to X can then be expressed as a matrix, AXB ⟶ A⊗ B^T |X⟩⟩. The inner product of two arbitrary vectors |X_1⟩⟩, |X_2⟩⟩ is defined by ⟨⟨ X_1|X_2⟩⟩ = Tr(X_1^†X_2). Instead of solving the stationary Schrödinger equation, we choose to solve a different eigenvalue problem: H⊗ I |ρ⟩⟩ = E|ρ⟩⟩. where we use I to denote the identity matrix with the same dimension as the density matrix. |ρ⟩⟩ and E are eigenvector and eigenvalue, respectively. The eigenstates of H⊗ I are all highly degenerate, and the lowest eigenvalue is exactly E_0. The subspace we use to find the lowest eigenvalues is 𝕍 = {Σ_i,jc_i,j|ρ_i ρ_j⟩⟩ | c_i,j∈ℂ}, using these classical shadows {ρ_i}. The specific steps are as follows * Prepare an initial state |0⟩ which has an overlap with the exact ground state. * Construct a set consisting of m different times t_j for j = 1,2,..,m. * Evolve the initial state under the system's Hamiltonian and derive the final state |ψ_j⟩ = e^-iHt_j|0⟩. * For each final state |ψ_j⟩ (j = 1,2,...,m), construct its classical shadow {ρ_j} using the algorithm presented in last section. * Construct the set {ρ_iρ_j} for i,j = 1,2,...,m consisting of the basis of matrix space and relabel the term of the set by σ_i for i=1,2,...,m^2. * Calculate the matrices S̃ and H̃^s given by S̃_i,j = ⟨⟨σ_i|σ_j⟩⟩, H̃^s_i,j = ⟨⟨σ_i|H⊗ I|σ_j⟩⟩. * Solve the eigenvalue problem H̃^sc⃗ = ES̃c⃗. Here H̃^s is the effective Hamiltonian defined in the subspace 𝕍. We show in the following theorems that the lowest possible eigenvalue solved with Eq. <ref> is the ground state energy of H. If an infinite number of shots are used in constructing the classical shadow, the eigenvalues of Eq. <ref> and Eq. <ref> are the same. We first consider using ρ_iρ_j=1 for i=1,2,3,…,m. As the shots tend to infinity, the classical shadow becomes an exact density matrix, which we denote as ρ_i = |ψ_i⟩⟨ψ_i |. At the same time, the product of the two classical shadows becomes ρ_i ρ_1 = |ψ_i⟩⟨ψ_i|ψ_1⟩⟨ψ_1 |. Its vectorized form is |ρ_iρ_1⟩⟩ = ⟨ψ_i|ψ_1⟩ |ψ_i⟩⊗ |ψ_1⟩. Suppose x⃗ is one solution of Eq. 
<ref> with the energy E_a, i.e., Σ_j ⟨ψ_i|H|ψ_j⟩ x_j = Σ_j E_a ⟨ψ_i|ψ_j⟩ x_j . We can construct the general eigenvector from the solution: Σ_j⟨⟨ρ_iρ_1|H⊗ I|ρ_jρ_1⟩⟩ x_j/⟨ψ_j|ψ_1⟩ = Σ_j ⟨ψ_1|ψ_i⟩⟨ψ_j|ψ_1⟩ (⟨ψ_i| ⟨ψ_1|)|H⊗ I(|ψ_j⟩|ψ_1⟩) x_j/⟨ψ_j|ψ_1⟩ = Σ_j ⟨ψ_1|ψ_i⟩⟨ψ_i|H|ψ_j⟩ x_j = Σ_j ⟨ψ_1|ψ_i⟩ E_a ⟨ψ_i|ψ_j⟩ x_j = Σ_j ⟨ψ_1|ψ_i⟩⟨ψ_j|ψ_1⟩ E_a ⟨ψ_i|ψ_j⟩ x_j/⟨ψ_j|ψ_1⟩ = E_aΣ_j ⟨⟨ρ_iρ_1|ρ_jρ_1⟩⟩ x_j/⟨ψ_j|ψ_1⟩. Thus x⃗^' where x^'_i = x_i/⟨ψ_j|ψ_1⟩ is the solution of Eq. <ref> and the corresponding eigenvalue is E_a. There are m degenerate eigenvectors, each has the form Σ_i x^'_i|ρ_iρ_j⟩⟩, for j=1,2,…,m. Since Eq. <ref> has m different eigenvectors, all m^2 eigenvectors of Eq. <ref> can be constructed and the spectrum of two equations are exactly the same. Suppose E_0 is the exact ground energy. The eigenvalues E_s obtained through the above process always satisfy E_s ≥ E_0. When the number of shots tends to infinity and 𝕍 exactly covers the density matrix of the ground state, the equal sign can be obtained. The proof is straightforward. For any solution c of Eq. <ref>, the following formulas are established for the corresponding vector |ρ(c)⟩⟩ in 𝕍: ⟨⟨ρ(c⃗)|H⊗ I|ρ(c)⟩⟩/⟨⟨ρ(c)|ρ(c⃗)⟩⟩≥ E_0. The key to this fact is that we use a quantum computer to generate a set of basis vectors |σ_i⟩⟩, instead of directly using a quantum computer to generate the S and H^s matrices. Since the basis vectors are stored in the classical computer, the construction of the S̃ and H̃^s matrices is accurate. Unlike the original QSD, the energy calculated by our algorithm will never be smaller than the ground state energy. Besides, we show later by numerical simulations that this method also retains the advantages of the original method, that is, the accuracy of the calculation increases exponentially as the subspace becomes larger. § THE FULL ALGORITHM In this section, we describe the full algorithm. In the NISQ era, we want to reduce the number of qubits as much as possible. Therefore, our first step is to simplify the original Hamiltonian H^f of nuclear SM. When calculating the ground state of a given nucleus, we don't need to use the entire Hilbert space but only need to solve the problem in the subspace with a corresponding number of particles. Considering only subspaces with a given number of particles can help reduce the number of qubits required as we can use the qubit-efficient encoding method to encode the fermionic states into qubit states <cit.>. Suppose H is a matrix representation of the Hamiltonian H^f in the subspace. Using the algebraic relation of fermionic operators, the matrix elements of H can be easily calculated as follows: * Assign a number to represent each occupiable orbital in the SM. * Generate a set of occupied states {a_k_1^†a_k_2^†a_k_3^†… a_k_p^†|v⟩}, where p is the number of valence nucleons of the nucleus to be studied. |v⟩ represents the vacuum state. To ensure that these states are linearly independent, we specify k_1> k_2 > k_3 >… > k_p. Without loss of generality, we specify that a_k_1^†a_k_2^†a_k_3^†… a_k_p^†|v⟩ is the kth states in the set. * Calculate the matrix elements H_k,k^' = ⟨ v|a_k_p^'… a_k_1^' H^f a_k_1^†… a_k_p^†|v⟩. A classical computer can efficiently compute this quantity by exploiting the algebraic relationship of a and a^†. We can verify that such a Hamiltonian is sparse. We can see from Eq. 
<ref> that the Hamiltonian can be written as H^f = Σ_i g_i H_i, where H_i can be expressed by H_i = a_i_1^†a_i_2^†a_i_3a_i_4 := a^†_i⃗_12a_i⃗_34, where i⃗_12 = (i_1,i_2) and i⃗_34 = (i_3,i_4). With an n-dimension vector x, we use a^†_x⃗ to denote the operator a^†_x_1a^†_x_2… a^†_x_n. We only need to verify that the matrix H_i has one non-zero term in each column. Owing to the algebraic relationship of the fermionic operator: H_i a_k⃗^† |v⟩ = h(i⃗_12,i⃗_34,k⃗)a_k⃗^'^† |v⟩, where k^' is derived by replacing i_3 and i_4 in k⃗ with i_1 and i_2. The function h equals 0 or ± 1. When i_3 and i_4 are contained in k⃗, h = ± 1; the sign depends on the relationship of the elements in i⃗_12 and i⃗_34. Otherwise h = 0. The only one annihilation operator series a_k⃗^” that satisfies ⟨ v | a_k⃗^” a_k⃗^'^† | v⟩≠ 0 is the case that k⃗^” and k⃗^' have the same elements. Thus each column of the H_i matrix has at most one non-zero element. As a result, several existing methods can be employed to achieve the evolution of this Hamiltonian <cit.>. If H is a d× d matrix, then only n = ⌈log_2(d)⌉ qubits are needed. It is worth noting that, in addition to this encoding method, other encoding methods are available, such as the Jordan-Wigner transformation and the gray code. The former is relatively straightforward and widely used since it is easy to realize the evolution of the Hamiltonian. The latter requires fewer qubits than the former and has gained attention in recent years <cit.>. On a quantum computer, the first step is to initialize the qubits and prepare a state with a finite overlap with the exact ground state ρ_0. In our numerical experiments, we always set the initial state of n qubits to be |0⟩^⊗ n. When the nucleus becomes quite large, it may be necessary to design the initial state more carefully. Next, a set of discrete times t_i,i=1,2,…,m is chosen. It's worth noting that the time difference between t need not be very large to overcome the error caused by shot noise. Then, we evolve the initial state ρ_ini = |0⟩⟨ 0|^⊗ n under the system's Hamiltonian, ρ̃_i = U_i ρ_ini U_i^†, where U_i = e^-iHt_i. The Clifford randomized measurements are chosen to construct the classical shadow ρ_i for different i. Given the classical shadows, we define the shadow space 𝕍, which is spanned by |ρ_iρ_j⟩⟩. Finally, we calculate the matrices H̃^s and S̃ and solve the general eigenvalue problem. Recording the information of the classical shadows and the construction of H̃^s and S can be effectively done by classical computers. We summarize the steps as follows: * Construct the reduced Hamiltonian H from the full Hamiltonian. * Choose a set of discrete time {t_i} for i=1,...,n. * For each time t_i, evolve the initial state |0⟩^⊗ m by e^-iH t_i and get the final state |ψ_i⟩. * For each final state, randomly choose a global Clifford gate C_j and apply C_j to the state. * Apply Z measurement on the state C_j|ψ_i⟩⟨ψ_i|C_j^†, obtain a binary vector |b⟩. * Repeat step 4 and step 5 N times, and construct the classical shadow ρ_i for the final state |ψ_i⟩. * Construct the subspace comprising the vectors |ρ_i ρ_j⟩⟩, and then calculate the effective Hamiltonian H̃^s and overlap matrices S̃ in this subspace. * Solve the general eigenvalue problem H̃^sc⃗ = ES̃c⃗. § NUMERICAL SIMULATIONS To test the effectiveness of this algorithm, we conducted numerical experiments to calculate the ground state of six nuclei: ^6He,^6Li,^7Bo,^8Li,^18O, and ^18F. 
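The steps above translate almost line by line into a small prototype. In the sketch below, a toy dense Hermitian matrix stands in for the reduced Hamiltonian, exact density matrices of the evolved states replace their classical shadows for brevity, and the dimensions and evolution times are illustrative placeholders; only the construction of the subspace matrices and the generalized eigenvalue problem follows the algorithm as stated.

# Sketch of the full pipeline on a toy Hamiltonian. Exact density matrices of
# the evolved states stand in for their classical shadows; dimensions, times
# and the Hamiltonian itself are illustrative placeholders.
import numpy as np
from scipy.linalg import expm, eig

rng = np.random.default_rng(0)
dim, m = 8, 3                                      # Hilbert-space dimension, evolved states
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                           # toy Hermitian Hamiltonian

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                                      # initial state |0...0>
times = 0.4 * np.arange(1, m + 1)                  # illustrative evolution times
states = [expm(-1j * H * t) @ psi0 for t in times]
rhos = [np.outer(s, s.conj()) for s in states]     # stand-ins for the classical shadows

# Basis sigma = rho_i rho_j of the matrix subspace; in the vectorization used
# above, (H tensor I)|sigma>> corresponds to the matrix product H @ sigma.
sigmas = [ri @ rj for ri in rhos for rj in rhos]
S = np.array([[np.trace(a.conj().T @ b) for b in sigmas] for a in sigmas])
Hs = np.array([[np.trace(a.conj().T @ H @ b) for b in sigmas] for a in sigmas])

# Generalized eigenvalue problem Hs c = E S c; the smallest finite eigenvalue
# upper-bounds the exact ground-state energy and improves as m grows.
evals = eig(Hs, S)[0]
print(min(e.real for e in evals if np.isfinite(e)), np.linalg.eigvalsh(H).min())

Consistent with the theorem above, the reported subspace eigenvalue always lies above the exact ground-state energy and reaches it once the evolved states span the ground state.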
The Cohen-Kurath SM was used to describe the first four nuclei, while the USD SM was used for the last two. Table <ref> presents some computational data for these six nuclei. First, MNES = M denotes the minimum number of evolved states needed such that the ground state is a linear combination of the evolved states e^-iHx|0⟩^⊗ n, where x = 1,2,…,M. Generally, the accuracy of the calculated result increases with the number of evolved states in the subspace. Here we use M number of states in order for better comparison between different nuclei. In addition, we determine the required number of qubits for each nucleus. For instance, ^6Li has one neutron and one proton distributed in 6 neutron orbitals and 6 proton orbitals, respectively. Therefore, its wave function must be represented by 6× 6 = 36 basis states. Thus, ⌈log_2(36)⌉ = 6 qubits are sufficient. Finally, the discrepancy between the calculated and experimental ground state energy is presented using the fixed shots and M evolved states. To explore the accuracy of ground state energy calculations for each nucleus as a function of the number of evolved states, we fix the number of shots used to construct each classical shadow and then increase the number of evolved states, starting from the minimum number. We choose to use ^8Li and ^18F as examples because the dimensions of their corresponding Hilbert spaces are much larger than the minimum number of evolved states required. This provides more flexibility to increase the subspace. The results are shown in Fig. <ref> and Fig. <ref>. We define the accuracy as the inverse of error ϵ (in MeV), which is the absolute value of the difference between the calculated result and the ideal value derived by directly diagonalizing the Hamiltonian of SM. To aid in observation, here we use logarithmic coordinates for the vertical axis. The green dashed line marks the MNES for the two nuclei. As can be seen from the figure, the calculated results appear to roughly form a straight line after the number of evolved states is beyond the MNES; the data are fit with a red line using the least squares method. This result indicates that, once the minimum number is reached and the number of evolved states continues to increase, the error caused by the finite number of shots decreases exponentially. However, simply increasing the number of evolved states is not enough to achieve infinite accuracy with a fixed number of shots. To further increase the accuracy, we need to increase the number of shots. We show the effect of the number of shots in Figure <ref> to <ref>, where the minimum number of evolved states is used for each nucleus. In contrast to Fig. <ref> and Fig. <ref>, the vertical axis in this case represents the inverse of the error, while the horizontal axis represents the number of shots. The red line is the fit of the data using the least squares method. By analyzing the measurement data represented by the blue dots, we can observe that: 1/|ϵ| ∝ N, |ϵ| ∝1/N. This indicates that the accuracy 1/|ϵ| of the result using our method is dependent on the number of shots N and approaches the Heisenberg limit, which was not observed in the original subspace diagonalization method. To further confirm this phenomenon, we performed calculations for ^7Bo and calculated its error bar, defined as the standard deviation of 1/|ϵ|. We also examined how the lower bound of the error bar changed with the number of shots. As shown in Figure <ref>, the green line was fitted to the lower bound of the error bar. 
We see that the dependence remains close to a straight line, the only difference being a slightly smaller slope. We therefore conclude that the Heisenberg-limit scaling still holds. § CONCLUSION AND OUTLOOK In this paper, we present a novel quantum algorithm for computing the ground state energy of nuclear systems. Our approach combines classical shadow techniques with a modified QSD method, which eliminates the need for Hadamard tests and requires fewer two-qubit gates than the original method, making it more suitable for NISQ devices. A distinctive characteristic of the method is that the relationship between accuracy and the number of shots approaches the Heisenberg limit, which can greatly reduce the number of shots needed in experiments. Additionally, the algorithm resolves the ill-conditioned overlap matrix problem that is often encountered in the original subspace algorithms. It is worth noting that the method is not limited to nuclear systems and can be applied to a range of problems involving ground-state calculations. We thank Jinniu Hu for fruitful discussions. This work is supported by the National Natural Science Foundation of China (Grant No. 12225507, 12088101) and NSAF (Grant No. U1930403).
http://arxiv.org/abs/2306.02226v1
20230604012740
Variational convergence of the Scharfetter-Gummel scheme to the aggregation-diffusion equation and vanishing diffusion limit
[ "Anastasiia Hraivoronska", "André Schlichting", "Oliver Tse" ]
math.NA
[ "math.NA", "cs.NA", "math.AP" ]
Variational convergence of the Scharfetter–Gummel scheme to the aggregation-diffusion equation and vanishing diffusion limit Anastasiia Hraivoronska, André Schlichting, Oliver Tse ==================================================================== In this paper, we explore the convergence of the Scharfetter–Gummel scheme for the aggregation-diffusion equation using a variational approach. Our investigation involves obtaining a novel gradient structure for the finite volume scheme that works consistently for any nonnegative diffusion constant, which allows us to study the discrete-to-continuum and zero-diffusion limits simultaneously. The zero-diffusion limit for the Scharfetter–Gummel scheme corresponds to the upwind finite volume scheme for the aggregation equation. In both cases, we establish a convergence result in terms of gradient structures, recovering the Otto gradient flow structure for the aggregation-diffusion equation based on the 2-Wasserstein distance. § INTRODUCTION In this paper, we study the convergence of the Scharfetter–Gummel numerical approximation for the aggregation-diffusion equation ADE∂_t ρ_t = div( ϵ∇ρ_t + ρ_t ∇ V + ρ_t ∇ (W * ρ_t) ) in (0, T)×Ω, which describes the evolution of a curve of Borel probability measures t↦ρ_t∈(Ω) on a bounded convex domain Ω⊂^d, where ϵ> 0 is a diffusion coefficient, V:^d→ is an external potential, and W:^d→ is an interaction potential. We impose the no-flux boundary condition ϵ∂_νρ_t + ρ_t ∂_ν (V + W * ρ_t) = 0 on ∂Ω, where ν denotes the outer normal vector on ∂Ω. Our strategy employs a variational approach that not only provides the convergence of the Scharfetter–Gummel scheme but also a generalized gradient structure for the cases ϵ>0 and ϵ=0. In particular, the method allows us to prove the convergence of the Scharfetter–Gummel (ϵ>0) and upwind (ϵ=0) approximations to the Otto gradient flow solutions of (<ref>), which we outline in detail below. The Scharfetter–Gummel flux approximation originates from <cit.>, where the authors construct a numerical scheme for a system modelling semiconductor devices. Their objective was to develop a scheme that is robust for systems of equations with discontinuities or rapid variations in the potential. Independently, the same type of flux was introduced in <cit.> for finite-difference schemes. Thereafter, the Scharfetter–Gummel scheme became the preferred finite-volume scheme for drift-diffusion and convection-diffusion equations. While the original scheme deals with the one-dimensional problem, it has been generalized to higher-dimensional problems <cit.>, and the flux discretization approach became the basis for numerous other generalizations, e.g. for equations with nonlinear diffusion <cit.> and for systems with source terms <cit.>. To introduce the Scharfetter–Gummel scheme, we first introduce some common notations for finite-volume methods. Let {(^h,Σ^h)}_h>0 be a family of finite (admissible) tessellations of a bounded and convex set Ω⊂^d, where ^h is the family of cells and Σ^h⊂^h×^h contains pairs (K, L) that share a face, i.e. when K,L∈^h share a part of their boundary with positive (d-1)-dimensional Hausdorff measure, which we denote by (K|L). We further define ^h_K to be the set of cells adjacent to K. With a slight abuse of notation, we adopt the notation K|L to denote pairs (K,L)∈Σ^h to distinguish them from general pairs (K,L)∈^h×^h. The parameter h>0 is the maximal diameter of the cells. We make the definitions precise in Section <ref>. For now, one can keep a Voronoi tessellation in mind as an example of an admissible tessellation.
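As a concrete instance, the sketch below builds a two-dimensional Voronoi tessellation with SciPy and extracts the pairs in Σ^h together with the face measures |(K|L)| and the ratios |(K|L)|/|x_L - x_K| that reappear below as transmission coefficients. The Voronoi generators are used as cell centers, unbounded boundary cells are skipped, and no clipping to Ω is performed, so it illustrates the data structures rather than a complete mesh generator.

# Sketch: neighbouring cell pairs, face measures |(K|L)| and the ratios
# |(K|L)| / |x_L - x_K| for a 2D Voronoi tessellation of random generators.
# Boundary (unbounded) ridges are skipped and no clipping to Omega is done.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
seeds = rng.uniform(0.0, 1.0, size=(50, 2))        # generators in the unit square
vor = Voronoi(seeds)

faces = {}                                         # (K, L) -> (face length, ratio)
for (K, L), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in ridge:                                # ridge extends to infinity
        continue
    v0, v1 = vor.vertices[ridge[0]], vor.vertices[ridge[1]]
    face_length = np.linalg.norm(v1 - v0)          # |(K|L)| for d = 2
    dist = np.linalg.norm(seeds[L] - seeds[K])     # |x_L - x_K| (generators as centers)
    faces[(int(K), int(L))] = (face_length, face_length / dist)

print(len(faces), "interior faces, e.g.", next(iter(faces.items())))

A convenient feature of Voronoi tessellations is that each shared face lies on the perpendicular bisector of the two generators, so it is orthogonal to the segment joining them.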
We illustrate how the Scharfetter–Gummel flux appears in the finite-volume discretization of (<ref>). First, consider the case without interaction potential, i.e. W≡ 0. Rewriting (<ref>) as ∂_tρ_t + div j_t = 0, j_t = -ϵ∇ρ_t - ρ_t ∇ V, integrating the first equation over a control volume K∈^h, and then applying the divergence theorem yields the discrete continuity equation CE_h∂_t ρ^h_K + ^h,ρ_K = 0, with ^h,ρ_K ∑_L∈_K^h^h,ρ_K|L, where the numerical approximation for the flux ^h,ρ_K|L should be well chosen to approximate the continuous flux j. The idea of the Scharfetter–Gummel flux discretization is to solve a cell problem for two adjacent cells K and L with barycenters x_K = _K x x and x_L = _L x x. Then, the cell problem is the one-dimensional boundary value problem: Find u∈ C^2([x_k,x_L]) satisfying -∂_x (ϵ∂_x u + u q_K|L^h ) = 0 on [x_K, x_L] u(x_K) = ρ^h_K /|K|, u(x_L) = ρ^h_L/|L| for all (K,L)∈Σ^h, where q_K|L^h is an approximation for the gradient of the potential term ∇ V in (<ref>) along a segment connecting x_K and x_L. The solution of (<ref>), which can be explicitly computed, is then used to define the Scharfetter–Gummel flux <cit.>, defined for all (K|L)∈Σ^h as _K|L^h,ρϵτ_K|L^h ( (q_K|L^h / ϵ) u^h_K - (- q_K|L^h / ϵ) u^h_L ), u^h_K ρ^h_K/|K|, where τ_K|L^h |(K|L)| / |x_L - x_K| is called the transmission coefficient and (s) s / (e^s - 1) is the Bernoulli function. The Scharfetter–Gummel scheme then reads SGE_h∂_t ρ^h_K + ∑_L∈_K^h^h,ρ_K|L = 0, ^h,ρ_K|L=ϵτ_K|L^h ( (q_K|L^h / ϵ) u^h_K-(- q_K|L^h / ϵ) u^h_L ). We are interested in a generalization of the Scharfetter–Gummel scheme (<ref>) for (<ref>) that includes the interaction term W, which was considered in <cit.>. In this case, the form of the flux is the same as in (<ref>), but we include a discrete approximation of ∇ (W * ρ) = ∫_Ω∇ W (· - y) ρ ( y) of the form q_K|L^h V^h_L - V^h_K + ∑_M∈^hρ^h_M (W^h_ML - W^h_MK), (K,L)∈Σ^h, where W^h_MK W(x_K - x_M) for any K, M ∈^h×^h such that K≠ M. The important property of the numerical flux (<ref>) is that the Bernoulli function interpolates between appropriate discretizations of the pure diffusion and pure drift problems. In the absence of the potential, i.e., q_K|L^h = 0, the flux becomes ϵτ_K|L^h ( u_K - u_L ). More interestingly, in the vanishing diffusion limit ϵ→ 0, the Scharfetter–Gummel scheme converges to Up_h∂_t ρ^h_K + ∑_L∈_K^h_K|L^h,ρ,Up =0, _K|L^h,ρ,Up= τ_K|L^h ( q_K|L^h,+u^h_K - q_K|L^h,- u^h_L ), which is the upwind flux discretization for the aggregation equation AE∂_t ρ = div (ρ∇ (V + W * ρ)) in (0, T) ×Ω. The convergence of the discrete approximation to the weak solutions of (<ref>) in the absence of an external potential is proven in <cit.>. Moreover, it was shown there that the discrete solutions satisfy an energy-dissipation inequality along the evolution, which is an important structure-preserving property. We aim to go one step further and prove the convergence of a variational structure for (<ref>) to the Otto gradient-flow structure for (<ref>). §.§.§ Strategy and outline The goal of this paper is to complete the commutative diagram in Figure <ref> below, where the convergence results correspond to the convergence of gradient-flow structures. To make the goal clear, we briefly explain the gradient structures involved and the type of convergences we are interested in. The right-hand side of Figure <ref> corresponds to the continuous setting that is rather well understood. 
The Otto-Wassertein gradient-flow theory <cit.> provides a gradient-flow formulation for the aggregation-diffusion equation (<ref>) with respect to the L^2-Wasserstein metric and the driving energy (Ω)∋ρ↦_ϵ(ρ) = ϵ∫_Ωϕ( ρ/^d) ^d + ∫_Ω V ρ + 1/2∫_Ω ( W * ρ ) ρ if ρ≪^d, +∞ otherwise, where ϕ(s)=s log s -s +1 for s∈_+ and ^d denotes the Lebesgue measure on ^d. Here, we consider gradient flow solutions to (<ref>) in terms of the Energy-Dissipation Balance (EDB), which we now describe. We begin by recalling that (<ref>) can be expressed as ∂_tρ_t + div j_t = 0 in (0, T) ×Ω, CE j_t = -ρ_t ∇_ϵ'(ρ_t), KR where (<ref>) suggests that the density-flux pair (ρ,j) satisfies the continuity equation, while (<ref>) describes the relationship between the force -∇_ϵ'(ρ_t) and the flux j_t, which we call the kinetic relation. By introducing a dual dissipation potential ^* : (Ω) × C_b(Ω;^d) →_+, ^*(ρ, ξ) = 1/2∫_Ω |ξ|^2 ρ, the kinetic relation (<ref>) may be further expressed as j_t = D_2^*(ρ_t, -∇_ϵ'(ρ_t)). Via Legendre-Fenchel duality, we obtain a variational characterization of the kinetic relation: (ρ_t, j_t) + ^*(ρ_t, -∇_ϵ'(ρ_t)) = ⟨ j_t,-∇_ϵ'(ρ_t)⟩, where the dissipation potential is the Legendre dual of ^* w.r.t. its second argument, i.e., (ρ,j)∈(Ω)×(Ω;^d)↦(ρ,j) = 1/2∫_Ω| j/ρ|^2 ρ, where (Ω;^d) is the space of finite ^d-valued Radon measures. Under the chain rule CR -/ t_ϵ(ρ_t) = ⟨ j_t,-∇_ϵ'(ρ_t)⟩, along density-flux pairs (ρ,j) satisfying the continuity equation (<ref>), one arrives at a variational expression for the solution of (<ref>). Indeed, integrating (<ref>) over arbitrary intervals [s,t]∈[0,T] and employing the chain rule (<ref>), one obtains the Energy-Dissipation Balance: EDB_ϵ^[s,t] (ρ, j) ∫_s^t (ρ_r,j_r) + ^*(ρ_r, - ∇'_ϵ(ρ_r)) r + _ϵ(ρ_t) - _ϵ(ρ_s) = 0. Morally, any pair (ρ,j) satisfying the continuity equation (<ref>) and (<ref>) is said to be an (, , ^*)-gradient flow solution of (<ref>) if it satisfies, additionally, the chain rule (<ref>). Although there are other ways of defining gradient flow solutions to (<ref>). We choose to use the definition based on EDB since this works well in the generalized gradient flow setting <cit.> as seen below. For λ-convex functionals _ϵ w.r.t. the Wasserstein distance W_2, it is a standard result of evolutionary -convergence for gradient flows <cit.> that, as ϵ→ 0, the gradient flow solutions of (<ref>) converge to the gradient flow solutions of the corresponding aggregation equation (<ref>). The left-hand side of Figure <ref> corresponds to the discrete setting for which the gradient structure is not well understood. For this reason, our first objective is to present a generalized gradient-flow (GGF) formulation for the Scharfetter–Gummel scheme (<ref>). In particular, we show in Section <ref> that the scheme fits into the (by now, common) `cosh' gradient-structure framework with the discrete driving energy _ϵ,h: (^h) →_+, _ϵ,h(ρ^h) = ϵ∑_K∈^hϕ(u^h_K)|K| + ∑_K∈^h V^h_K ρ^h_K + 1/2∑_(K, L)∈^h×^h W^h_KLρ^h_K ρ^h_L, u^h_K ρ^h_K/|K|, and discrete dual dissipation potential _ϵ,h^*: (^h) ×(Σ^h) →_+ defined in (<ref>), where (A) denotes the set of bounded functions on A. That being said, the `cosh' gradient structure turns out to be ill-suited for proving the desired convergence due to the inclusion of the interaction potential W, which gives rise to a dissipation potential that depends on W and ρ^h. 
Such phenomenon is known as tilt-dependence of gradient systems and was recently discussed in detail in <cit.>, where it was established that tilt-independent gradient structures give rise to better convergence properties. Using the de-tilting technique <cit.>, we introduce a new tilt-independent gradient structure for the Scharfetter–Gummel scheme in the presence of both external and interaction potentials (cf. Section <ref>) and allows us to pass to the h→ 0 and ϵ→ 0 limits. We show in Section <ref> that the Scharfetter–Gummel scheme (<ref>) possesses a gradient structure with driving energy _ϵ,h (cf. (<ref>)) and the tilt-independent dual dissipation potential _ϵ,h^* given by _ϵ,h^*(ρ^h, ξ^h) 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u^h_K, u^h_L, ξ^h_K|L/2, u^h_K ρ^h_K/K, where α_ϵ^*:_+×_+×→_+ is defined (see Lemma <ref> for more details) for any ϵ>0 by α_ϵ^*(a, b, ξ) ϵ∫_0^ξsinh⟨[|]x/ϵΛ_H⟨*|a e^-x/ϵ, b e^x/ϵ x= ϵ^2 α_1^* ⟨[|]a, b, ξ/ϵ. Hereby the harmonic-logarithmic mean Λ_H : _+ ×_+ →_+ (see also Lemma <ref>) is given as Λ_H (s, t) 1/Λ( 1/s, 1/t ) with Λ(s, t) = s - t/log s - log t for s t. Based on these definitions, the two equations in (<ref>) become a discrete continuity equation for the density-flux pair (ρ^h,j^h) and a kinetic relation providing a force-flux relation: ∂_t ρ^h_t + div j^h_t = 0 in (0,T) ×^h, CE_h j^h_t = D_2 _ϵ,h^* (ρ^h_t, -_ϵ,h'(ρ^h_t)), KR_h where φ(K,L) = φ(L)-φ(K) is the discrete gradient. Together with the discrete chain rule CR_h -/ t_ϵ,h(ρ_t^h) = ⟨ j_t^h,-_ϵ,h'(ρ_t^h)⟩, the pair (ρ^h,j^h) is shown to satisfy the discrete Energy-Dissipation Balance: EDB_h_ϵ,h^[s,t] (ρ^h, j^h) ∫_s^t _ϵ,h(ρ^h_r, j^h_r) + _ϵ,h^*(ρ^h_r, -_ϵ,h'(ρ_t^h)) r + _ϵ,h(ρ^h_t) - _ϵ,h(ρ^h_s) = 0, for any interval [s,t]⊂[0,T]. Our main interest lies in establishing discrete-to-continuum convergence results that connect the left-hand and the right-hand sides of Figure <ref>. For the convergence of (<ref>) to (<ref>) (top horizontal arrow), we define the GGF solutions to (<ref>) as the minimizers of the energy-dissipation functional _ϵ,h corresponding to the tilt-independent structure defined through (<ref>) (cf. Section <ref>). We then follow a similar strategy as in <cit.>, which studies the diffusive limit of random walks on tessellations using variational techniques. However, every step of the strategy requires an adaptation to the new gradient structure. The main challenge here is to prove a -convergence result for the Fisher information, which takes the form _ϵ,h(ρ^h) _ϵ,h^*(ρ^h, -_ϵ,h'(ρ^h)) =∑_(K,L)∈Σ^hβ_ϵ (u^h_K, u^h_L) τ_K|L^h + _ϵ,h^1 (ρ^h) + _ϵ,h^2 (ρ^h), where β_ϵ(a,b)α_ϵ^* (a, b, -ϵlog√(b/a)) with α_ϵ^* from (<ref>), and _ϵ,h^1, _ϵ,h^2 are defined in Section <ref>. The splitting mimics the expanded form of the continuous Fisher information: _ϵ(ρ) ^*(ρ, - ∇'_ϵ(ρ)) = 2ϵ^2 ∫*∇√(u)^2 x + ϵ∫∇ u ·∇𝖰(ρ) x + 1/2∫*∇𝖰(ρ) ^2 u x, where 𝖰(ρ) = V + W∗ρ. The function β_ϵ depending on α_ϵ^* in (<ref>) is only defined by an integral, which makes it more difficult to work with as compared to the Fisher information for the `cosh' structure studied in <cit.>. Nevertheless, it satisfies (see Lemma <ref>) the bounds ϵ^2/4(a - b)^2/a + b≤β_ϵ (a, b) ≤ϵ^2/2⟨*|√(b) - √(a)^2, a, b ≥ 0, thereby allowing us to prove a -convergence result for β_ϵ (cf. Section <ref>), albeit under more stringent assumptions on the tessellations compared to <cit.>. Additionally, we will need to establish new convergence results for the other parts of ^h that depend on the interaction term q_K|L^h. 
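Since α_ε^* is defined only through an integral, a direct numerical check of the two-sided bound on β_ε is a useful sanity test. The sketch below implements the logarithmic and harmonic-logarithmic means verbatim, evaluates β_ε(a, b) = α_ε^*(a, b, -ε log√(b/a)) by quadrature, and verifies the bound together with the scaling relation α_ε^*(a, b, ξ) = ε² α_1^*(a, b, ξ/ε) on random pairs; the value of ε and the sampling range are arbitrary choices made for illustration.

# Numerical sanity check of the bounds
#   eps^2/4 (a-b)^2/(a+b) <= beta_eps(a,b) <= eps^2/2 (sqrt(b)-sqrt(a))^2
# and of the scaling alpha_eps(a,b,xi) = eps^2 alpha_1(a,b,xi/eps). Illustrative only.
import numpy as np
from scipy.integrate import quad

def log_mean(s, t):
    """Lambda(s, t) = (s - t) / (log s - log t), with Lambda(s, s) = s."""
    return s if np.isclose(s, t) else (s - t) / (np.log(s) - np.log(t))

def harm_log_mean(s, t):
    """Lambda_H(s, t) = 1 / Lambda(1/s, 1/t)."""
    return 1.0 / log_mean(1.0 / s, 1.0 / t)

def alpha_star(a, b, xi, eps):
    integrand = lambda x: np.sinh(x / eps) * harm_log_mean(a * np.exp(-x / eps),
                                                           b * np.exp(x / eps))
    return eps * quad(integrand, 0.0, xi)[0]

def beta(a, b, eps):
    return alpha_star(a, b, -eps * np.log(np.sqrt(b / a)), eps)

rng, eps = np.random.default_rng(2), 0.7
for _ in range(100):
    a, b = rng.uniform(0.1, 5.0, size=2)
    lower = eps**2 / 4 * (a - b) ** 2 / (a + b)
    upper = eps**2 / 2 * (np.sqrt(b) - np.sqrt(a)) ** 2
    assert lower - 1e-8 <= beta(a, b, eps) <= upper + 1e-8
    xi = rng.uniform(-1.0, 1.0)
    assert np.isclose(alpha_star(a, b, xi, eps), eps**2 * alpha_star(a, b, xi / eps, 1.0))
print("bounds and scaling verified on 100 random pairs")

This does not replace the lemma quoted above, but it guards against misreading the sign and mean conventions.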
The arrow with ϵ→ 0 on the left side of Figure <ref> refers to the convergence of the Scharfetter–Gummel scheme (<ref>) to the upwind approximation (<ref>) as ϵ→ 0 in terms of the generalized gradient structure. Since the state space is a fixed finite tessellation, this result is not difficult to obtain. On the contrary, the convergence of the upwind scheme (<ref>) to the aggregation equation (<ref>) appears to be very challenging. The difficulty is described in the literature but is still not well studied. The intuitive idea is that the structure of the tessellation can lead to strong oscillations in the solutions of the discrete continuity equation. More specifically, unlike in the 1-dimensional case, one can not expect propagation of the BV-bound, assuming that the initial data is in BV. Indeed, there is a simple example of a 2-dimensional tessellation consisting of lines of squares with size h alternating with lines of squares with size h/2, for which the total variation of the discrete solutions blows up as h^-1/2 even for a constant velocity field (see details in <cit.>). On the other hand, the convergence results in the strong topology are available on general tessellations for Lipschitz velocity fields <cit.>. When one treats general tessellations and rough velocity fields simultaneously, the convergence is proven in the weak topology <cit.> for time-explicit upwind schemes on Cartesian grids and time-implicit upwind schemes on regular general meshes. A first variational method for Fokker-Planck equations based on upwind dissipation functionals is contained in <cit.>. See also <cit.> for a study on general graphs and their continuum limits. A new method for proving regularity estimates for solutions of the discrete continuity equations with non-Lipschitz velocity field and non-Cartesian but periodic tessellations is found in <cit.>, which is significant for future research in this area. Given the state-of-art, at the moment, we cannot expect to prove the discrete-to-continuum convergence of the gradient structure for (<ref>) for general tessellations. Nevertheless, we obtain a convergence result for the Cartesian grid. We believe that this result is already worthwhile since it does not require any assumptions on the integrability of the initial data, allowing us to include atomic measures as initial data. To summarize, the rest of the paper is organized as follows. In Section <ref>, we specify the assumptions on tessellations and potentials and present the main results. We introduce the gradient structure for (<ref>) and two generalized gradient structures for finite volume schemes in Section <ref>. The subsequent sections contain the proofs of the convergence results. Section <ref> is dedicated to the discrete-to-continuum convergence of (<ref>) to (<ref>). The vanishing diffusion limit ϵ→ 0 from (<ref>) to (<ref>) is presented in Section <ref>. We deal with the convergence of (<ref>) to (<ref>) in Section <ref>. §.§ Acknowledgments A.H. and O.T. acknowledge support from NWO Vidi grant 016.Vidi.189.102 on "Dynamical-Variational Transport Costs and Application to Variational Evolution". A.S. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 – 390685587, Mathematics Münster: Dynamics–Geometry–Structure. § ASSUMPTIONS AND MAIN RESULTS We specify our assumptions on the family of tessellations in Section <ref> and the external and interaction potentials in Section <ref>. 
The main results of this paper are summarized in Section <ref>. §.§ Assumptions on tessellations Let Ω⊂^d be an open bounded convex set. A tessellation (^h,Σ^h) covering Ω consists of a family ^h of mutually disjoint cells (usually denoted by K or L) that are open convex sets and Ω⊂⋃_K∈^h K, and a family Σ^h ={ (K, L)∈^h×^h : ℋ^d-1 (K∩L) > 0 } of pairs of cells with a common face. Here, ℋ^d-1 denotes the (d-1)-dimensional Hausdorff measure. The common face of a pair (K,L)∈Σ^h is denoted by (K|L). The characterizing size of a tessellation is its maximum diameter: h max{diam(K), K∈^h}. The maximum diameter h>0 gives an upper bound on the volumes of the cells |K|≤ C_d h^d and faces |(K|L)| ≤ C_d-1 h^d-1, where C_d, C_d-1>0 are universal constants depending only on the spatial dimension d≥ 1. In our work, it is also necessary to assume lower bounds on the volumes of the cells to prevent the degeneration of cells, which is guaranteed by the following non-degeneracy assumption. 0.92 Non-degeneracy. There exists ζ∈ (0, 1) such that * For each K∈^h, there is an inner ball B(x_K, ζ h) ⊂ K with x_K = _K x x; * For every (K,L)∈Σ^h it holds that |(K|L)| ≥ζ h^d-1. We now summarize the assumptions on the tessellations used within this paper. 0.92 Admissible tesselations. The family of tessellations {(^h, Σ^h)}_h>0 satisfy Ass{ for any h>0, all cells K∈^h are open, convex, and mutually disjoint; {(^h, Σ^h)}_h>0 is non-degenerate with some ζ∈(0, 1) independent of h.. A standard assumption, often embedded in the definition of admissible tessellations in the finite-volume setup, is the following orthogonality assumption. 0.92 Orthogonality. For all (K, L)∈Σ^h, the face (K|L) is orthogonal to the vector x_L - x_K, i.e. Ort (K|L) ⊥ (x_L - x_K), where x_K = _K x x and x_L = _L x x. We assume (<ref>) throughout this paper, and we indicate explicitly in the corresponding statements when we require the orthogonality assumption (<ref>). §.§ Assumptions on potentials We assume the following properties for the potentials. 0.92 Assumptions on V. The external potential V∈Lip(^d)∩ C^1(^d) is bounded from below. Assumptions on W. The interaction potential W^d → nonnegative, i.e. W(x) ≥ 0 for all x∈^d and is symmetric, i.e. W(x) = W(-x). In addition, we assume the interaction potential to be either a pointy potential Pointy W ∈Lip(^d) ∩ C^1(^d\{0}), or a continuously differential potential C^1 W ∈Lip(^d)∩ C^1(^d). A typical example of interaction potentials appearing in mathematical models of the collective behaviour of individuals is the Morse potential W(x) = C_r e^-|x|/ℓ_r - C_a e^-|x|/ℓ_a, where ℓ_a and ℓ_r represent the attractive and repulsive potential ranges and C_a and C_r represent their respective amplitudes. With the choice C_r ≥ C_a > 0 and ℓ_a > ℓ_r, it holds that W(x) ≥ 0 for all x∈^d and W satisfies (<ref>). As mentioned above we define the discrete potentials accordingly as V^h_K V(x_K) for K∈^h, and W^h_KL W(x_L - x_K) for (K,L)∈^h ×^h. We claim in Lemma <ref> that the assumptions on V and W indicated above imply that q_K|L^h = ∇ (V + W * ρ̂^h )(x_K) · (x_L - x_K) + o(h)|_h→ 0, This equality will play an important role in several statements of this paper. Due to the assumptions on the potentials V and W, we further deduce that |q_K|L^h| ≤ c_pot h for all (K,L)∈Σ^h, with c_potLip(V) + Lip(W). We could have also defined V^h_K _K V(x) x for K∈^h and W^h_KL_K _L W(x - y) x y for (K,L) ∈^h×^h. One can verify that (<ref>) remains true and all the results of this paper hold also with these definitions. 
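To make the interplay between the discrete drift q_K|L^h and the Scharfetter–Gummel flux concrete, the sketch below assembles both on a uniform one-dimensional grid, with a quadratic external potential and the Morse potential above as interaction kernel. All parameter values are illustrative; only the drift and flux formulas are taken from the text, and the Bernoulli function is evaluated in the numerically stable form s/expm1(s) with the removable singularity at s = 0 handled explicitly.

# Sketch: Scharfetter-Gummel fluxes on a uniform 1D grid with cells of width h,
# an external potential V and the Morse interaction W. Parameter values are
# illustrative; only the drift q_{K|L} and the flux formula follow the text.
import numpy as np

def bernoulli(s):
    """B(s) = s / (e^s - 1), with the removable singularity at s = 0 patched."""
    s = np.asarray(s, dtype=float)
    out = np.ones_like(s)
    nz = np.abs(s) > 1e-12
    out[nz] = s[nz] / np.expm1(s[nz])
    return out

n, eps = 64, 0.05
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                       # cell barycenters x_K
V = 0.5 * (x - 0.3) ** 2                           # external potential values V_K
Cr, Ca, lr, la = 2.0, 1.0, 0.1, 0.3                # Morse parameters with C_r >= C_a, l_a > l_r
W = lambda r: Cr * np.exp(-np.abs(r) / lr) - Ca * np.exp(-np.abs(r) / la)

rho = np.full(n, 1.0 / n)                          # cell masses rho_K with total mass 1
u = rho / h                                        # densities u_K = rho_K / |K|

# Discrete drift q_{K|L} = V_L - V_K + sum_M rho_M (W_{ML} - W_{MK}) for L = K + 1
conv = W(x[None, :] - x[:, None]) @ rho            # (W * rho) at the barycenters
q = (V[1:] - V[:-1]) + (conv[1:] - conv[:-1])

# Scharfetter-Gummel flux over each interior face; tau_{K|L} = |(K|L)|/|x_L - x_K| = 1/h
tau = 1.0 / h
flux = eps * tau * (bernoulli(q / eps) * u[:-1] - bernoulli(-q / eps) * u[1:])

# One explicit Euler step of the finite-volume update with no-flux boundaries
net_outflow = np.diff(np.concatenate(([0.0], flux, [0.0])))
rho = rho - 1e-4 * net_outflow
print("mass preserved:", np.isclose(rho.sum(), 1.0))

Because each interior flux enters the two adjacent cells with opposite signs and the boundary fluxes vanish, the update preserves the total mass exactly, which mirrors the conservativity of the scheme.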
§.§ Main results To see the scope of the main results, we indicate the corresponding statements on the arrows in Figure <ref>. Our first statement is that the Scharfetter–Gummel scheme (<ref>) has the generalized gradient structure. This allows us to define the GGF solution to (<ref>) as a pair (ρ^h, j^h) satisfying the continuity equation (<ref>), which is a minimizer for the energy-dissipation functional (<ref>). All components of the energy-dissipation functional are made precise in Section <ref> and Lemma <ref> proving that the structure is indeed correct. Section <ref> is devoted to the discrete-to-continuum convergence of the Scharfetter–Gummel scheme as h→ 0 for a fixed diffusion coefficient ϵ > 0. To relate the discrete objects with the continuum, we employ the following reconstruction procedure for a density-flux pair (ρ^h, j^h) satisfying (<ref>) ρ̂^h/^d∑_K∈^hρ^h(K)/|K|_K, ^h ∑_(K,L) ∈Σ^h j^h_K|L σ_K|L^h, where σ_K|L^h∈(Ω; ^d) are chosen in a way such that for any (ρ^h, j^h) satisfying the discrete continuity equation (<ref>) the lifted pair (ρ̂^h, ^h) satisfies the continuous continuity equation (<ref>). The existence of such measures σ_K|L^h∈(Ω; ^d) was shown in <cit.>. The main theorems are the following. Let {(_h,Σ_h)}_h>0 be a family of tessellations satisfying (<ref>) and (<ref>), and assume (<ref>) to hold for the interaction potential W. Further, let {(ρ^h,j^h)}_h>0 be a family of GGF-solutions (<ref>) with initial data {ρ_in^h}_h>0 having sup_h>0_h(ρ_in^h) < ∞, such that there exists ρ_in∈dom with ρ̂_in^h/^d→ρ_in/^d in L^1(Ω) and lim_h→ 0_h(ρ_in^h) = (ρ_in). Then there exists a (not relabelled) subsequence of admissible continuous reconstructions {(ρ̂^h, ^h)}_h>0 and a limit pair (ρ,j) such that * (ρ,j) satisfies (<ref>) with the density u ρ/^d ∈ L^1((0, T)×Ω) and * ρ̂^h_t/^d → u_t in L^1(Ω) for every t∈ [0, T]; * ∫_·^h_t t ⇀^* ∫_· j_t t weakly-* in ((0, T)×Ω). * the following liminf estimate holds: For any [s,t]⊂[0,T], _ϵ^[s,t](ρ, j) ≤lim inf_h→ 0_ϵ,h^[s,t](ρ^h, j^h), where the energy-dissipation functional _ϵ is given by ^[s,t]_ϵ (ρ, j) = ∫_s^t {(ρ_r,j_r) + _ϵ(ρ_r)} r + _ϵ(ρ_t) - _ϵ(ρ_s), with the dissipation potential given in (<ref>) and Fisher information _ϵ:(Ω)→[0,+∞], _ϵ(ρ) = 2ϵ^2∫_Ω| ∇√(u)|^2 x + ϵ∫_Ω∇ u ·∇𝖰(ρ) x + 1/2∫_Ω|∇𝖰(ρ)|^2 ρ if ρ≪^d with u=ρ/^d and +∞ otherwise. Recall that 𝖰(ρ) = V + W∗ρ. * (ρ,j) is the gradient flow solution of (<ref>) with the energy-dissipation functional . In Section <ref>, we fix a tessellation (^h, Σ^h) with some h>0 and consider the dependence of the discrete energy-dissipation functional _ϵ,h^[s,t] (ρ^h, j^h) = ∫_s^t _ϵ,h(ρ^h_r, j^h_r) + _ϵ,h(ρ^h_r) r + _ϵ,h(ρ^h_t) - _ϵ,h(ρ^h_s), on the diffusion coefficient ϵ > 0. We have the following convergence statement. Let (^h, Σ^h) be a non-degenerate tessellation with a fixed h>0. Let { (ρ^ϵ,h, j^ϵ,h) }_ϵ>0 be a family of GGF-solutions to (<ref>) with initial data {ρ_in^ϵ,h}_ϵ>0 having sup_ϵ>0_ϵ,h(ρ_in^ϵ,h) < ∞, such that there exists ρ_in^h∈dom _up,h with ρ_in^ϵ,h(K) →ρ_in^h(K) for every K∈^h and lim_ϵ→ 0_ϵ,h(ρ_in^ϵ,h) = _up,h(ρ_in^h), where _up,h:(^h)→ is given by _up,h(ρ) = ∑_K∈^h V^h_K ρ_K + 1/2∑_(K,L)∈^h×^h W^h_KLρ_K ρ_L. Then there exists a (not relabelled) subsequence of measure-flux pairs { (ρ^ϵ,h, j^ϵ,h) }_ϵ>0 and the limit pair (ρ^up,h, j^up,h) such that * (ρ^up,h, j^up,h) satisfies (<ref>) and * ρ^ϵ,h_t ⇀ρ^up,h_t weakly in (^h) for all t∈ [0,T]; * ∫_· j^ϵ,h_t t ⇀^* ∫_· j^up,h_t t weakly-* in ((0, T) ×Σ^h). 
* the following liminf estimate holds: For any [s,t]⊂[0,T], _up,h^[s,t](ρ^up,h, j^up,h) ≤lim inf_ϵ→ 0_ϵ,h^[s,t](ρ^ϵ,h, j^ϵ,h), where the energy-dissipation functional _up,h^[s,t] is given by _up,h^[s,t](ρ^h, j^h) ∫_s^t {_up,h (ρ^h_r, j^h_r) + _h, up (ρ^h_r)} r + _h, up (ρ^h_t) - _h, up (ρ^h_s), with driving energy _up,h, dissipation potential _up,h (ρ^h, j^h) = ∑_(K,L)∈Σ^hτ_K|L^h( u^h_K | j^h,+_K|L/τ_K|L^hu^h_K |^2 + u^h_L | j^h,-_K|L/τ_K|L^hu^h_L |^2 ) , and Fisher information _up,h (ρ^h) = ∑_(K,L)∈Σ^hτ_K|L^h( u^h_K | q_K|L^h,+/2|^2 + u^h_L | q_K|L^h,-/2|^2 ). * (ρ^up,h, j^up,h) is the GGF-solution to the upwind scheme (<ref>). In Section <ref>, we make a first step towards a convergence result from the upwind scheme (<ref>) to the aggregation equation (<ref>). Let {(^h,Σ^h)}_h>0 be a family of Cartesian tessellations with edges of length h>0. Let the interaction potential W satisfy (<ref>). Further, let {(ρ^h,j^h)}_h>0 be a family of GGF-solutions to the upwind scheme (<ref>) with initial data {ρ_in^h}_h>0 having sup_h>0_up,h(ρ_in^h) < ∞, such that there exists ρ_in∈dom _agg with ρ̂^h_in⇀ ^*ρ_in weakly-* in (Ω) and lim_h→ 0_up,h(ρ_in^h) = _agg(ρ_in), where _agg:(Ω)→ is given by _agg(ρ) = ∫_Ω V ρ + 1/2∫_Ω (W*ρ) ρ. Then there exists a (not relabelled) subsequence of admissible continuous reconstructions {(ρ̂^h, ^h)}_h>0 and a limit pair (ρ,j) such that * (ρ, j) satisfies (<ref>) and * ρ̂^h_t ⇀^* ρ_t weakly-* in (Ω) for any t∈ [0, T]; * ∫_·^h_t t ⇀^* ∫_· j_t t weakly-* in ((0, T)×Ω). * the following liminf estimate holds for any [s,t]⊂[0,T], _agg^[s,t](ρ, j) ≤lim inf_h→ 0_h,up^[s,t](ρ^h, j^h), where the energy-dissipation functional is given by _agg^[s,t](ρ, j) = ∫_s^t {(ρ_r,j_r) + _agg(ρ_r) } r + _agg(ρ_t) - _agg(ρ_s), with driving energy _agg, dissipation potential given in (<ref>) and Fisher information _agg(ρ) 1/2∫_Ω | ∇𝖰(ρ) |^2 ρ, 𝖰(ρ) = V + W∗ρ. * (ρ,j) is the gradient flow solution to the aggregation equation (<ref>). Finally, and to close the commutative diagram in Figure <ref>, we present the vanishing diffusion limit on the continuous level. Let the interaction potential W satisfy (<ref>). Let {(ρ^ϵ, j^ϵ)}_ϵ>0 be a family of gradient flow solutions to the aggregation-diffusion equation (<ref>) with diffusion coefficient ϵ>0 and initial data {ρ_in^ϵ}_ϵ>0 having sup_ϵ>0_ϵ(ρ_in^ϵ) < ∞, such that there exists ρ_in∈dom _agg with ρ^ϵ_in⇀ ^*ρ_in weakly-* in (Ω) and lim_ϵ→ 0_ϵ(ρ_in^ϵ) = _agg(ρ_in). Then there exists a limit pair (ρ, j) and a (not relabelled) subsequence such that * (ρ, j) satisfies (<ref>) and * ρ^ϵ_t ⇀^* ρ_t weakly-* in (Ω) for any t∈ [0,T]; * ∫_· j^ϵ_t t ⇀^* ∫_· j_t t weakly-* in ((0, T)×Ω). * the following liminf estimate holds for any [s,t]⊂[0,T], _agg^[s,t] (ρ, j) ≤lim inf_ϵ→ 0_ϵ^[s,t] (ρ^ϵ, j^ϵ), with _agg^[s,t] defined in (<ref>). * (ρ,j) is the gradient flow solution to the aggregation equation (<ref>). § GRADIENT STRUCTURES: DISCRETE AND CONTINUOUS This section is devoted to defining our notion of (generalized) gradient flow solution to each equation of interest. We begin with the continuous case in Section <ref>, which is the well-known Otto-Wasserstein gradient structure (see <cit.> for a more extensive study on this). We then introduce, in a similar fashion to the continuous case, generalized gradient structures for general finite volume schemes in Section <ref>, and proceed with providing two such structures for the Scharfetter–Gummel scheme in Section <ref>. We end this section with a summary of the discrete structure we consider in the rest of the article.
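Before turning to the gradient structures, the following minimal sketch (Python, illustrative only) shows the flux-level mechanism behind the vanishing-diffusion arrow of the diagram: on a single edge, the Scharfetter–Gummel flux, written here in the classical Bernoulli-function form and with the orientation of q fixed so that its ϵ→ 0 limit is the upwind flux appearing in the theorems above, converges to τ( q^+ u_K - q^- u_L ). The numerical values of u_K, u_L and q are assumptions made for illustration; the snippet is a sketch, not a verbatim transcription of the scheme's defining formulas.

```python
import numpy as np

def bern(s):
    # Bernoulli function b(s) = s / (exp(s) - 1), with the removable singularity b(0) = 1.
    s = np.asarray(s, dtype=float)
    near0 = np.abs(s) < 1e-12
    den = np.where(near0, 1.0, np.expm1(s))
    return np.where(near0, 1.0, s / den)

def flux_sg(uK, uL, q, eps, tau=1.0):
    # Scharfetter-Gummel edge flux in Bernoulli form: eps * tau * (b(-q/eps) u_K - b(q/eps) u_L).
    # For q = 0 this reduces to the purely diffusive flux eps * tau * (u_K - u_L).
    return eps * tau * (bern(-q / eps) * uK - bern(q / eps) * uL)

def flux_upwind(uK, uL, q, tau=1.0):
    # Upwind edge flux: tau * (q^+ u_K - q^- u_L).
    return tau * (max(q, 0.0) * uK - max(-q, 0.0) * uL)

uK, uL, q = 0.7, 0.2, 0.35            # illustrative cell densities and driving term
j_up = flux_upwind(uK, uL, q)
for eps in (1.0, 0.1, 0.01, 0.001):
    j_sg = float(flux_sg(uK, uL, q, eps))
    print(f"eps = {eps:7.3f}   j_SG = {j_sg: .6f}   |j_SG - j_up| = {abs(j_sg - j_up):.2e}")
```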
§.§ Otto-Wasserstein gradient structure for diffusion-type equations A pair (ρ, j) is said to be in 𝒞ℰ(0, T) if * ρ∈([0,T];(Ω)) is a curve of nonnegative finite Radon measures defined on Ω, and * j=(j_t)_t∈[0,T]⊂(Ω;^d) is a measurable family of fluxes with finite action ∫_0^T ∫_Ω| j_t /ρ_t|^2 ρ_t t < ∞, satisfy the continuity equation (<ref>) in the following sense: For any [s,t]⊂ [0,T], ⟨φ,ρ_t⟩ - ⟨φ,ρ_s⟩ = ∫_s^t ⟨∇φ, j_r⟩ r for all φ∈_c^1(^d). It is known that if ρ solves (<ref>) with finite action, then ρ is an absolutely continuous curve in (Ω) w.r.t. the 2-Wasserstein distance <cit.>. A curve ρ∈([0,T];(Ω)) is said to be an (, , ^*)-gradient flow solution of (<ref>) or (<ref>) with initial data ρ_in∈(Ω)∩dom() if * ρ_0=ρ_in in (Ω); * there is a measurable family j=(j_t)_t∈[0, T]⊂(Ω; ^d) such that (ρ, j) ∈𝒞ℰ(0, T) with ∫_s^t ∫_Ω(ρ_r, j_r) + (ρ_r) r + (ρ_t) = (ρ_s) for all [s,t]⊂ [0,T], where (ρ) inf{lim inf_n→∞^*(ρ_n,-∇'(ρ_n)) : ρ_n⇀ρ weakly in (Ω), sup_n≥ 0(ρ_n) <∞}, i.e. is a lower-semicontinuous envelope of ρ↦^*(ρ,-∇'(ρ)); * the following chain rule inequality holds: -/ t(ρ_t) ≤(ρ_t, j_t) + (ρ_t) for almost every t∈(0,T). §.§ Generalized gradient structure for finite volume schemes We take the point of view that finite volume schemes can be seen as random walks on the graph induced by tessellations. Hence, we consider a random walk on a graph that corresponds to a tessellation (^h, Σ^h). Given an initial law ρ_0^h = ρ_in^h∈(^h), the time marginal law of a random walk satisfies the forward Kolmogorov equation FKE_h∂_t ρ_t^h = Q^*_h ρ_t^h, where Q^*_h is the dual of the generator Q_h defined for all bounded functions φ∈(^h) as (Q_h φ) (K) = ∑_(K,L)∈Σ^h (φ) (K,L) κ^h_K|L, K ∈^h, where κ : Σ^h →_+ is a bounded jump kernel. We restrict ourselves to random walks satisfying detailed balance, i.e. random walks admitting a stationary measure π^h∈(^h) such that π^h_K κ^h_K|L = π^h_L κ^h_L|K for all (K, L) ∈Σ^h. We note that the detailed balance implies, by the ergodic theorem for continuous-time Markov chains, the uniqueness of the stationary measure π^h (see, for instance, <cit.>). A pair (ρ^h, j^j) is said to be in 𝒞ℰ_h(0, T) if * ρ^h∈([0,T];(^h)) is a curve of finite measures defined on the graph ^h, and * j^h = (j_t^h)_t∈[0,T]⊂(Σ^h) is a measurable family of discrete fluxes with finite action ∫_0^T | j^h_t | ( Σ^h) t < ∞, satisfy the discrete continuity equation (<ref>) in the following sense: For any [s, t] ⊂ [0, T], ∑_K∈^hφ^h_K ρ^h_K(t) - ∑_K∈^hφ^h_K ρ^h_K(s) = ∫_s^t∑_(K,L)∈Σ^h (φ^h)(K, L) j^h_K|L(r) r for all φ^h∈(^h). A curve ρ^h∈([0,T]; (^h)) is an (_h, _h, _h^*)-generalized gradient flow solution of (<ref>) with initial data ρ_in^h∈(^h)∩dom(_h) if * ρ^h_0= ρ̅^h in (^h); * there is a measurable family j^h=(j^h_t)_t∈[0, T]⊂(Σ^h) such that (ρ^h, j^h) ∈𝒞ℰ_h(0, T) with ∫_s^t _h(ρ^h_r, j^h_r) + _h(ρ^h_r) r + _h(ρ^h_t) = _h(ρ^h_s) for all [s,t]⊂ [0,T]; where _h(ρ^h) inf{lim inf_n→∞_h^*(ρ^h_n,-'_h(ρ^h_n)) : ρ^h_n⇀ρ^h weakly in (^h), sup_n≥ 0_h(ρ^h_n) <∞}, i.e. _h is a lower-semicontinuous envelope of ρ^h↦_h^*(ρ^h,-'_h(ρ^h)). * the chain rule inequality holds, i.e. -/ t_h(ρ^h_t) ≤_h(ρ_t^h,j_t^h) + _h(ρ_t^h) for almost every t∈ (0,T). §.§ Two gradient structures for the Scharfetter–Gummel scheme Since the Scharfetter–Gummel scheme is a finite volume scheme, it defines a random walk on the state space ^h. 
Moreover, (<ref>) possesses a generalized gradient flow structure if the Scharfetter–Gummel flux (<ref>) can be recast as the force-flux relation (<ref>) induced by a dual dissipation potential, i.e. if we can express the discrete flux for all K∈^h and (K, L)∈Σ^h as ^h,ρ_K|L = D_2_h^* (ρ^h,-'_ϵ,h(ρ^h) ) (K,L) with an appropriate dual dissipation potential _ϵ,h^* and the driving energy _ϵ,h defined in (<ref>). We will see in Section <ref> that in the `cosh' case, the edge activity ϑ^h,ρ depends on the potentials V^h, W^h and ρ^h. This dependence of the dissipation potential on the driving energy can be considered a drawback from the modelling point of view and can cause complications in proving EDP convergence. An in-depth discussion of tilt-dependent gradient systems, where changes in the driving energy can lead to changes in the dissipation potential, is carried out in <cit.>. Fortunately for the Scharfetter–Gummel scheme, it is possible to derive a tilt-independent gradient structure, which is better suited for proving EDP convergence. We present the tilt-independent dissipation potential in Section <ref>. §.§.§ The cosh gradient structure and its tilt-dependence Here, we show that the random walk defined by the Scharfetter–Gummel scheme (<ref>) possesses a `cosh' gradient structure. We follow the strategy introduced in <cit.> and introduce a local equilibrium to arrive at a suitable gradient flow formulation incorporating the aggregation term, such that the scheme would indeed fit into the frame developed in <cit.>. From the discrete energy _h given in (<ref>), we identify its variational derivative as '_ϵ,h(ρ^h)_K = ϵ(logρ^h_K - logπ^ϵ,h,ρ_K ), with π^ϵ,h,ρ_K = K e^- 𝖰_K^h,ρ/ϵ/Z^ϵ,h,ρ, 𝖰_K^h,ρ = V_K^h + ∑_M∈^h W_KM^h ρ_M^h, and Z^ϵ,h,ρ = ∑_K∈^hK e^-𝖰_K^h,ρ/ϵ is the normalization such that π^ϵ,h,ρ∈(^h). The `cosh' dual dissipation potential is given for all ρ^h ∈(^h) and ξ^h ∈(Σ^h) by _ϵ,h^*(ρ^h, ξ^h) = 1/2∑_(K,L)∈Σ^hΨ^*_ϵ (ξ^h_KL) √(u̅^h_K u̅^h_L) κ^ϵ,h,ρ_K|Lπ^ϵ,h,ρ_K, u̅_K^h = ρ_K^h/π^ϵ,h,ρ_K , where Ψ_ϵ^*(s) = 4 ϵ^2 (cosh(s/2 ϵ) - 1). The idea is then to choose a jump kernel κ^ϵ,h,ρ : Σ^h → [0, ∞) in such a way that it satisfies the local detailed balance condition κ^ϵ,h,ρ_K|Lπ^ϵ,h,ρ_K = κ^ϵ,h,ρ_L|Kπ^ϵ,h,ρ_L for all (K,L)∈Σ^h and all ρ^h ∈(^h) . and allows representing the flux in the gradient form (<ref>). One possibility is to define the jump kernel as κ^ϵ,h,ρ_K|L1/|K|τ_K|L^h/exp⟨[|]-𝖰_K^h,ρ/ϵ2 q_K|L^h / ϵ/exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ), (K,L)∈Σ^h, where we recall that τ_K|L^h= |(K|L)| / |x_L - x_K| is the transmission coefficient and q_K|L^h V^h_L - V^h_K + ∑_M∈^hρ^h_M (W^h_ML - W^h_MK) = 𝖰_L^h,ρ-𝖰_K^h,ρ, (K,L)∈Σ^h. Notice that the pair (κ^ϵ,h,ρ, π^ϵ,h,ρ) satisfies the local detailed balance condition (<ref>), since τ_K|L^h = τ_L|K and q_K|L^h = - q_L|K. The edge conductivity is then given by ϑ^ϵ,h,ρ_K|Lτ_K|L^h/Z^ϵ,h,ρ2 q_K|L^h / ϵ/exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ). The kernel defined in (<ref>) satisfies the bound sup_h>0sup_K∈^h h^2 ∑_L∈^h_Kκ^ϵ,h,ρ_K|L≤ c_κ < ∞, where ^h_K{L∈^h: (K,L)∈Σ^h}, provided {(^h, Σ^h)}_h>0 satisfy (<ref>). Indeed, for any (K,L)∈Σ^h, it holds that κ^ϵ,h,ρ_K|L = |(K|L)|/|K||x_K-x_L|2 q_K|L^h/ϵ/exp(q_K|L^h/ϵ) - 1 ≤C_d-1 h^d-1/C_d ζ^d+1 h^d+1(1 - q_K|L^h/2 + o(h) ) = O(h^-2). It is not difficult to see that the non-degeneracy assumption (<ref>) implies that <cit.> sup_h>0sup_K∈^h#^h_K < ∞, and thus also the asserted bound (<ref>). 
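As a sanity check of the preceding construction, the following sketch (Python, illustrative only; the one-dimensional uniform grid, the potentials and the density are assumptions, not data from the paper) evaluates the jump kernel κ^ϵ,h,ρ in the simplified form derived above, verifies the local detailed balance condition κ^ϵ,h,ρ_K|L π^ϵ,h,ρ_K = κ^ϵ,h,ρ_L|K π^ϵ,h,ρ_L up to round-off, and confirms that h^2 sup_K ∑_L∈^h_K κ^ϵ,h,ρ_K|L remains of order one.

```python
import numpy as np

# One-dimensional uniform grid on (0,1): |K| = h, |(K|L)| = 1, tau_{K|L} = 1/h for neighbours.
eps, N = 0.1, 50
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
tau, volK = 1.0 / h, h

# Illustrative potentials and discrete probability vector (assumptions, not from the paper).
V = 0.5 * x**2
W = lambda z: np.exp(-np.abs(z))
rho = np.exp(-5.0 * (x - 0.6)**2)
rho /= rho.sum()

Q = V + (W(x[:, None] - x[None, :]) * rho[None, :]).sum(axis=1)   # Q_K = V_K + sum_M W_KM rho_M
Z = (volK * np.exp(-Q / eps)).sum()
pi = volK * np.exp(-Q / eps) / Z                                  # local equilibrium pi^{eps,h,rho}

def kappa(qKL):
    # kappa_{K|L} = (tau / |K|) * (2 q / eps) / (exp(q / eps) - 1), continuously extended at q = 0.
    s = qKL / eps
    near0 = np.abs(s) < 1e-12
    den = np.where(near0, 1.0, np.expm1(s))
    return (tau / volK) * np.where(near0, 2.0, 2.0 * s / den)

q_fwd = Q[1:] - Q[:-1]                 # q_{K|L} = Q_L - Q_K on the edge (K, K+1)

# Local detailed balance: kappa_{K|L} pi_K = kappa_{L|K} pi_L on every edge.
residual = kappa(q_fwd) * pi[:-1] - kappa(-q_fwd) * pi[1:]
print("detailed-balance residual   :", np.abs(residual).max())

# Non-degeneracy bound: h^2 * sup_K sum_{L in N_K} kappa_{K|L} stays of order one as h -> 0.
out_rate = np.zeros(N)
out_rate[:-1] += kappa(q_fwd)          # jump rates K -> K+1
out_rate[1:]  += kappa(-q_fwd)         # jump rates K -> K-1
print("h^2 * sup_K sum_L kappa_K|L :", (h**2 * out_rate).max())
```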
To apply the strategy from <cit.> directly, it is left to show that the choice of κ^ϵ,h,ρ in (<ref>) indeed gives rise to the Scharfetter–Gummel flux (<ref>). For any ρ^h∈(^h), K∈^h, and (K,L)∈Σ^h, we have the identity (<ref>), where ^h,ρ is the Scharfetter–Gummel flux given in (<ref>) and _ϵ,h^* is the `cosh' dual dissipation potential with edge conductivity ϑ^ϵ,h,ρ defined in (<ref>). In particular, the Scharfetter–Gummel scheme (<ref>) possesses the `cosh' gradient flow structure with (<ref>) as the driving energy. We begin by rewriting the Scharfetter–Gummel flux in (<ref>) using the density u̅^h = ρ^h / π^ϵ,h,ρ with the reference measure π^h,ρ depending on 𝖰^h,ρ: _K|L^h,ρ = ϵτ_K|L^h/Z^h,ρ( ( q_K|L^h / ϵ) u̅^h_K e^-𝖰^h,ρ_K/ ϵ - (- q_K|L^h / ϵ ) u̅^h_L e^-𝖰^h,ρ_L/ ϵ). The expression (<ref>) can be simplified, since (q_K|L^h / ϵ)exp(-𝖰^h,ρ_K / ϵ) = q_K|L^h exp(-𝖰^h,ρ_K / ϵ)/ϵ( exp(q_K|L^h / ϵ) - 1 ) = q_K|L^h/ϵ⟨[|]exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ) and, similarly, (-q_K|L^h / ϵ) exp(-𝖰^h,ρ_L / ϵ) = q_K|L^h/ϵ⟨[|]exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ) , therefore _K|L^h,ρ = τ_K|L^h/Z^h,ρ q_K|L^h/exp(𝖰^h,ρ_L / ϵ) - exp(𝖰^h,ρ_K / ϵ)⟨*|u̅^h_K - u̅^h_L = ϵ/2⟨*|u̅^h_K - u̅^h_L ϑ_K|L^ϵ,h,ρ . On the other hand, we note that for every (K,L)∈Σ^h and ξ^h∈(Σ^h): D_2^*_ϵ,h(ρ^h,ξ^h)(K,L) = ϵsinh⟨*|ξ^h_K|L/2ϵ√(u̅^h_K u̅^h_L) ϑ^ϵ,h,ρ_K|L. Recall from (<ref>) and (<ref>) that '_ϵ,h(ρ^h)(K) = ϵlog(u̅^h_K). Inserting ξ^h = - '_ϵ,h(ρ^h), we obtain D_2 _ϵ,h^* (ρ^h, - '_ϵ,h(ρ^h)) (K, L) = ϵsinh⟨[|]1/2logu̅^h_K/u̅^h_L√(u̅^h_K u̅^h_L) ϑ^ϵ,h,ρ_KL = _K|L^h,ρ, i.e. identity (<ref>) holds as asserted. Since the classical Scharfetter–Gummel scheme has the `cosh' gradient-flow formulation, one can ask if it is possible to use the framework of <cit.> to prove the convergence. The necessary assumptions on the invariant measure π^ϵ,h,ρ and the jump intensities κ^ϵ,h,ρ hold true based on the notion of local detailed balance as defined in (<ref>). However, the zero-local-average assumption ∑_L∈^h_Kϑ^ϵ,h,ρ_K|L (x_K - x_L) = 0 for all K∈^h with K∩∂Ω = ∅ does not hold. In addition, the nonlinear dependency of ϑ^ϵ,h,ρ on ρ seems to make satisfying (<ref>), even only asymptotically, very hard and may require strong assumptions on the tessellations to work around. As a last remark, we emphasize that the edge conductivity ϑ^ϵ,h,ρ defined in (<ref>) depends non-uniformly on the diffusion parameter ϵ>0, which makes it difficult to pass to the limit ϵ→ 0. The disadvantages of the `cosh' gradient structure mentioned in this section can be seen as due to tilt-dependence as defined in <cit.>. To clarify this further, we decompose the free energy into entropy and potential energies by writing _ϵ,h(ρ^h) = ϵ_h(ρ^h) + _h^V(ρ^h) + _h^W(ρ^h), where V^h:_h → and W^h:_h×_h→ symmetric are given and we set _h(ρ_h) ∑_K∈^hϕ(u^h_K )|K| , where u^h_K ρ^h_K/|K| ; _h^V(ρ^h) ∑_K∈^h V^h_K ρ^h_K and _h^W(ρ^h) 1/2∑_K, L∈^h×^h W^h_KLρ^h_K ρ^h_L . Then, we can provide a gradient structure for the Scharfetter–Gummel scheme for all possible potential energies V^h and interaction energies W^h altogether by introducing the set of tilts _h *_h^V + _h^W | V^h : ^h → , W^h: ^h×^h → symmetric . We can then recast Lemma <ref> as a derivation of a gradient structure with tilting <cit.> of the type (^h,Σ^h,,_h,_ϵ,h,_h). 
By recalling that for _h^V + _h^W ∈_h, we find 𝖰^h,ρ = (_h^V)'(ρ^h) + (_h^W)'(ρ^h) as defined in (<ref>) and obtain from (<ref>) the dissipation potential _ϵ,h(ρ^h,j^h;_h^V + _h^W) 1/2∑_(K,L)∈Σ^hΨ_ϵ ⟨*|j_KL^h/√(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ√(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ, u̅_K^h = ρ_K^h/π^ϵ,h,ρ. In particular, it depends on the potential energies V^h,W^h through ϑ^ϵ,h,ρ defined in (<ref>) and hence is tilt-dependent. Its undesirable properties explained in Remark <ref> are a direct consequence of the dependency of the gradient structure on the potentials and in particular on the diffusivity ϵ>0. §.§.§ Tilt-independent gradient structure In this section, we introduce the tilt-independent gradient structure, which we will study in this manuscript and is one of the main contributions of this article. The gist of this structure is that the dual dissipation potential does not depend on potentials V^h and W^h and more importantly also does not degenerate for small diffusivity ϵ≪ 1. Based on the cell formula (<ref>), the Scharfetter–Gummel flux in (<ref>) was recast as a kinetic relation for a general force ξ^h∈(Σ^h) in <cit.>, for which we can derive a suitable dual dissipation potential _ϵ,h^*. For doing so, we notice that along a solution of the scheme, we have the force ξ^h_K|L = - '_ϵ,h(ρ^h)(K,L) = - ⟨[|]ϵlogu^h_L/u^h_K + q_K|L^h , (K,L)∈Σ^h, and therefore, we find the relation q_K|L^h = ϵlogu^h_K/u^h_L - ξ^h_K|L = ϵ( log⟨[|]u^h_K e^-ξ^h_K|L / 2ϵ - log⟨[|]u^h_L e^ξ^h_K|L / 2ϵ). By substituting this relation into (<ref>), we arrive, after some simplifications, at the identity ^h,ρ_K|L = ϵsinh⟨*|ξ^h_K|L/2ϵΛ_H⟨*|u_K^h e^-ξ^h_K|L/2ϵ,u_L^h e^ξ^h_K|L/2ϵ |K| != D_2 _ϵ,h^* (ρ^h, ξ^h) (K,L) , where the last equality is a requirement for the new dual dissipation potential and Λ_H denotes the harmonic-logarithmic mean defined in (<ref>). From the kinetic relation (<ref>) relating the force ξ^h with the flux, one obtains the dissipation potential _h^* as given in (<ref>) with the function α_ϵ^* in (<ref>), by simply integrating over the force. Although α_ϵ^* is only defined as an integral, it has many beneficial properties, which are essential for the analysis that we collect Lemma <ref> in Appendix <ref>. Altogether, we obtained yet another gradient structure for the Scharfetter–Gummel scheme. Since the derivation of the kinetic relation (<ref>) might seem to look ad-hoc, we provide a different derivation of the dissipation potential _ϵ,h^* from the `cosh' dissipation potential _ϵ,h^* defined in (<ref>). To do so, we perform a `de-tilting' technique as explained in <cit.>. In this way, we can show that we arrived at a tilt-independent gradient structure for the Scharfetter–Gummel scheme. The Scharfetter–Gummel with flux-force relation (<ref>) is induced by a gradient structure with tilting (^h,Σ^h,, _h,_ϵ,h,_h) with tilt set _h given in (<ref>). Moreover, the dissipation potential _ϵ,h is tilt-independent and given by _ϵ,h(ρ^h, j^h) = 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ⟨*| u^h_K ,u^h_L, j^h_K|L/τ_K|L^h , u_K^hρ_K^h/|K|, where α_ϵ is the Legendre dual of α_ϵ^* given in (<ref>) with respect to the third variable. We follow the construction explained in <cit.>. To do so, we need to make the tilt-dependence of the dual dissipation potential _h^* explicit, for which use the primal dissipation potential defined in (<ref>) and can rewrite (<ref>) as ^*_ϵ,h(ρ^h,ξ^h; _h^V+_h^W) = 1/2∑_(K,L)∈Σ^hΨ^*_ϵ(ξ^h_K|L) √(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ, u̅_K^h = ρ_K^h/π_K^ϵ,h,ρ. 
Note, that the tilt-dependence comes through ϑ^ϵ,h,ρ in terms of 𝖰^h,ρ. By inspecting <cit.>, we have to verify the identity D_2 _ϵ,h^*(ρ^h,ξ^h)(K,L) != D_2_ϵ,h^*⟨*|ρ^h,ξ^h; -ξ^h-ϵ_h(ρ)(K,L) . To do so, we fix (K,L)∈Σ^h and identify q_K|L^h = 𝖰^h,ρ in ϑ^ϵ,h,ρ_KL to obtain √(u̅^h_K u̅^h_L)ϑ_K|L^ϵ,h,ρ = τ_K|L^h √(u_K^h u_L^h)𝖰^h,ρ_KL/exp⟨[|]𝖰^h,ρ_KL/(2ϵ)-exp⟨[|]-𝖰^h,ρ_KL/(2ϵ). By substituting 𝖰^h,ρ_KL = q_K|L^h = -ξ_K|L^h- ϵlogρ^h(K,L), which amounts in using the identity (<ref>), we observe that D_2_ϵ,h^*⟨*|ρ^h,ξ^h; - ξ^h -_h(ρ)(K,L) = ϵτ_K|L^h sinh⟨[|]ξ^h_K|L/2ϵ√(u_K^h u_L^h)log⟨[|]u^h_K e^-ξ^h_K|L / 2ϵ - log⟨[|]u^h_L e^-ξ^h_K|L / 2ϵ/e^-ξ^h_K|L/2ϵ -log√(u^h)(K,L) -e^ξ_K|L/2ϵ+log√(u^h)(K,L) = α_ϵ⟨*|u^h_K,u^h_L, ξ^h_K|L/2 = D_2_ϵ,h^*(ρ^h,ξ^h)(K,L), which verifies the claimed identity (<ref>) and the remaining statements from Lemma <ref> follow as argued in <cit.>. § VARIATIONAL CONVERGENCE FOR THE TILT-INDEPENDENT STRUCTURE The strategy of proving the discrete-to-continuum EDP convergence comprises two main steps: * Prove compactness for the family of the GGF solutions (ρ^h, j^h) of (<ref>) defined in Defintion <ref>. This allows us to extract a subsequence converging to a limiting pair (ρ, j). * Prove liminf inequalities for all the functionals in the energy-dissipation functional _h and recover a limiting energy-dissipation functional : (ρ, j)≤lim inf_h→ 0_h(ρ^h, j^h). In Section <ref>, we prove the compactness results required by (1). To establish the liminf inequality for _h from (2), the main effort relates to the Fisher information. Thus, Section <ref> is dedicated to the -convergence of the Fisher information. We conclude with the proof of Theorem <ref> in Section <ref>. §.§ Compactness We consider a family {(ρ^h,j^h)}_h>0 of (_h, _h, _h^*)-generalized gradient flow solutions to (<ref>), where the corresponding functionals are defined in (<ref>), (<ref>), and (<ref>) respectively. We also assume the initial data {ρ^h_in}_h>0 to be well-prepared. We set J^h∫_·_t^h t. The family { J^h }_h>0 is weakly-* compact in ([0,T]×Ω; ^d) and the family { t ↦ | ^h_t | (Ω) }_h>0 is equi-integrable. In particular, there exists a Borel family (j_t)_t∈[0,T]⊂(Ω;^d) such that J^h=∫_· _t^h t ⇀^* ∫_· j_t t weakly-* in ([0,T]×Ω; ^d) for a (not relabelled) subsequence. The proof is similar to the proof of the related compactness statement for the `cosh' gradient structure <cit.>. For completeness, we present the full proof here. For almost every t∈(0,T), the reconstruction of the flux is defined as _t^h = ∑_(K,L) ∈Σ^h j^h_K|L(t) σ_K|L^h, with σ_K|L^h∈(Ω; ^d) such that |σ_K|L^h|(Ω) ≤ 2dh. The existence of the required σ_K|L^h is proven in <cit.>. We begin by noticing that for almost every t∈(0,T) and any β∈, _ϵ,h(ρ_t^h, j_t^h) = sup_ξ^h∈(Σ^h){∑_(K,L)∈Σ^hξ_K|L^h j_K|L^h(t) - 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u^h_K(t), u^h_L(t), ξ^h_K|L/2} ≥β |_t^h|(Ω) - 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u^h_K(t), u^h_L(t), β sign(j_J|L^h)|σ_K|L^h|(Ω)/2, where we simply take ξ_K|L^h=β sign(j_K|L^h)|σ_K|L^h|(Ω). Due to Lemma <ref><ref>, we obtain α_ϵ^* ⟨*| u^h_K(t), u^h_L(t), β sign(j_K|L^h)|σ_K|L^h|(Ω)/2≤1/4√(u_K^h(t) u_L^h(t)) Ψ_ϵ^*⟨*|β|σ_K|L^h|(Ω), and consequently, _ϵ,h(ρ_t^h, j_t^h) ≥β |_t^h|(Ω) - c_κ/2h^2Ψ_ϵ^*(2β dh), with the constant c_κ>0 as defined in (<ref>). 
Using the fact that Ψ_ϵ^*(s r) ≤ r^2 Ψ_ϵ^*(s) for s,r∈ with |r|≤ 1, where Ψ_ϵ^* is a convex function having superlinear growth and minimizing the previous inequality over β∈, we obtain _ϵ,h(ρ_t^h, j_t^h) ≥c_κ/2sup_β∈{β |_t^h|(Ω)/d c_κ - Ψ_ϵ^*(β )} = c_κ/4Ψ̃_ϵ⟨*||_t^h|(Ω)/d c_κ, where Ψ̃_ϵ is the Legendre dual of Ψ̃_ϵ^* which, again, is a convex function having superlinear growth. Since (j_t^h)_t∈[0,T] has uniform-in-h finite action, we then obtain sup_h>0∫_0^T Ψ̃_ϵ⟨*||_t^h|(Ω)/d c_κ t ≤2/c_κsup_h>0∫_0^T_ϵ,h(ρ_t^h, j_t^h) t ≤2/c_κsup_h>0_ϵ,h(ρ^h_in)<∞, therewith deducing the equi-integrability of the family { t ↦ | ^h_t | (Ω) }_h>0. One also easily deduces from the previous inequality that sup_h> 0|J^h|([0,T]×Ω) ≤ 2 d ⟨*|sup_h> 0∫_0^T _ϵ,h(ρ_t^h, j_t^h) t + c_κ T/2Ψ_ϵ^*(1) <∞, which implies the existence of some J∈((0, T)×Ω) and some subsequence for which J^h ⇀^* J weakly-* in ((0, T)×Ω). Finally, Due to the equi-integrablity of { t ↦ | ^h_t | (Ω) }_h>0, we deduce that J has the representation J = ∫_· j_t t for a Borel family (j_t)⊂(Ω; ^d). Let ρ^h ∈(^h) with ^0_h(ρ^h) < ∞, where _ϵ,h^0 (ρ^h) 2∑_(K,L)∈Σ^hβ_ϵ ( u^h_K, u^h_L ) τ_K|L^h, u_K^h = ρ_K^h/|K|. Then the reconstructed density û^h satisfies | D û^h | (Ω) ≤ C √(^0_ϵ,h(ρ^h)), for some constant C>0 independent of h>0. Since û^h is a piece-wise constant function on the cells ^h, one can show that Dû^h = ∑_(K, L)∈Σ^h u^h_K n_KL^d-1|_(K|L) = 1/2∑_(K, L)∈Σ^h (u^h_K - u^h_L) n_KL^d-1|_(K|L). Therefore, using the Cauchy-Schwarz inequality yields |Dû^h| (Ω) ≤1/2∑_(K, L)∈Σ^h |u^h_K - u^h_L| |(K|L)| ≤1/2∑_(K, L)∈Σ^h |u^h_K - u^h_L| h τ_K|L^h ≤( ∑_(K,L)∈Σ^h|u^h_L - u^h_K|^2/u^h_L + u^h_Kτ_K|L^h )^1/2⟨*|∑_(K,L)∈Σ^h (u^h_K + u^h_L) h^2 τ_K|L^h ^1/2≤ C √(_ϵ,h^0(ρ^h)), for some constant C>0 independent of h>0 and Lemma <ref><ref> was used in the last inequality. With Lemma <ref> and Lemma <ref> at hand, we can prove the strong compactness result. Let the family of curves {ρ^h}_h>0 be the GGF-solutions of (<ref>) with (_h, _h, _h^*) defined in (<ref>), (<ref>), and (<ref>) respectively. Let sup_h>0_h(ρ^h_in) < ∞. Then there exists u ∈ L^1( (0, T); L^1(Ω)) and a (not relabelled) subsequence such that û^h_t → u_t in L^1(Ω) for almost every t∈(0,T). The proof of the proposition can be found in <cit.>. §.§ Γ-convergence of the Fisher information The aim of this section is to prove a -convergence result for the discrete Fisher information ρ^h ↦_ϵ,h(ρ^h)^*_ϵ,h(ρ^h,-'_ϵ,h(ρ^h)), where -'_ϵ,h(ρ^h)(K,L) = 2 ϵlog√(u^h_K / u^h_L) - q_K|L^h. It will be crucial, that we have the decomposition of α_ϵ^* from Lemma <ref><ref> to get the representation of _h as the sum of three terms _ϵ,h(ρ^h) = _ϵ,h^0(ρ^h) + _ϵ,h^1 (ρ^h) + _ϵ,h^2 (ρ^h), where _ϵ,h^0 is given in (<ref>) and_ϵ,h^1 (ρ^h) ϵ/2∑_(K, L)∈Σ^h (u^h_L - u^h_K) q_K|L^h τ_K|L^h, _ϵ,h^2 (ρ^h) 1/2∑_(K, L)∈Σ^h |q_K|L^h|^2 𝕙_ϵ (u^h_K, u^h_L, q_K|L^h) τ_K|L^h. This representation resembles the expansion of the continuous counterpart. Indeed, we expect the limit functional to be _ϵ(ρ) = ^*⟨[|]ρ, -∇ (ϵlog u + 𝖰(ρ) ) = ϵ^2/2∫*∇log⟨*|u e^𝖰(ρ)/ϵ^2 ρ = 2ϵ^2 ∫*∇√(u)^2 x + ϵ∫∇ u ·∇𝖰(ρ) x + 1/2∫*∇𝖰(ρ) ^2 u x _ϵ^0(ρ) + _ϵ^1(ρ) + ^2(ρ), where we use the notation 𝖰(ρ) = V + W*ρ as in the introduction. The main result of this section is the following theorem. Assume that a family of tessellations {(^h, Σ^h)}_h>0 satisfies the orthogonality (<ref>). Up to passing to a subsequence, the family of functionals {_ϵ,h}_h>0 has a -limit _ϵ w.r.t. 
the L^2-topology taking the form _ϵ(ρ) = 2ϵ^2 ∫_Ω| ∇√(u)|^2 x + ϵ∫_Ω∇ u ·∇𝖰(ρ) x + 1/2∫_Ω| ∇𝖰(ρ) |^2 ρ if √(u)∈ H^1(Ω), +∞ otherwise. The proof of Theorem <ref> consists of the -convergence result for _ϵ,h^0 and continuous convergence results for _ϵ,h^1 and _ϵ,h^2. Although we use the orthogonality assumption (<ref>) to get the complete result, the convergence of _ϵ,h^0 and _ϵ,h^2 can be established without (<ref>) at the cost of the tensor appearing in the limit. Unfortunately, it is not clear how to identify the limit of _ϵ,h^1 without (<ref>). We begin with _ϵ,h^0. According to Lemma <ref><ref> the function β satisfies the following bounds 1/8(a - b)^2/a + b2≤β_ϵ(a, b) ≤1/2 (√(a) - √(b))^2 for a,b>0. The appearance of such bounds is possible to understand intuitively by noting that in the continuous setting, thanks to the chain rule the following two formulations are equivalent 1/8|∇ u |^2 /u = 1/2| ∇√(u)|^2 for √(u)∈ H^1(Ω). We now recognize the lower bound for β_ϵ as a discretization for the second formulation. We can also expect that (<ref>) has the same -limit as the quadratic functional. The proof of -convergence for _ϵ,h^0 follows the localization method. The corresponding theory is covered in <cit.>, and for the application of the localization method in the setting close to ours, see <cit.>. The method is based on considering the localized version of the functional _ϵ,h^0 restricted to an open set A⊂Ω _ϵ,h (v^h, A) ∑_(K, L)∈Σ^h|_Aβ_ϵ( (v^h_K)^2, (v^h_L)^2 ) τ_K|L^h, where Σ^h|_A { (K, L) ∈Σ^h: K,L∈^h|_A } and ^h|_A { K ∈^h : K∩ A ≠∅}. We define for any open set A⊂Ω _ϵ,sup(v, A) -lim sup_h→ 0_ϵ,h (v, A) = inf{lim sup_h→ 0_ϵ,h (v_h, A)  :   v_h → v }. In the next lemma, we summarize the properties of _ϵ,sup, which is necessary to apply the representation theorem from <cit.>. Specifically, we prove that _ϵ,sup is an inner regular, subadditive, and local functional satisfying the lower and upper Sobolev bounds. The proof follows very closely the strategy from <cit.> and leverages the quadratic comparison of the function β_ϵ noted above in (<ref>). The functional _ϵ,sup defined in (<ref>) has the following properties * Inner regularity: For any v∈ H^1(Ω, μ) and for any A∈ it holds that sup_A' A^μ_ϵ,sup(v, A') = ^μ_ϵ,sup(v, A); * Subadditivity: For any v∈ H^1(Ω, μ) and for any A, A', B, B' ∈ such that A' A and B' B it holds that: ^μ_ϵ,sup(v, A'∪ B') ≤^μ_ϵ,sup(v, A) + ^μ_ϵ,sup(v, B); * Locality: For any A∈ and any v, ψ∈ H^1(Ω, μ) such that v=ψ μ-a.e. on A there holds ^μ_ϵ,sup(v, A) = ^μ_ϵ,sup(v, A). * Sobolev bounds: For any v∈ H^1(Ω) and an open set A⊂Ω c∫_A | ∇ v |^2 x ≤_ϵ,sup(v, A) ≤ C∫_A | ∇ v |^2 x, for some c, C>0 independent of v and A. In the following, we drop the subscript ϵ. Upper bound. By the upper bound shown in Lemma <ref>(f), it holds that _sup(v, A) ≤ϵ^2/2∑_(K,L)∈Σ^h|_A( v^h_L - v^h_K )^2 τ_K|L^h. Then the required upper bound follows from <cit.>. Properties of _sup as a set functional. The proof of inner regularity, subadditivity, and locality for _sup follows very closely the corresponding proofs in <cit.>. Lower bound. Let {v_h}_h>0∈ L^2(Ω) be a sequence with v^h → v in L^2(Ω) such that _sup(v, A) = lim sup_h→ 0_h(v_h, A). We fix an arbitrary r>0 and denote A_r { x∈ A : dist(x, ∂ A) > r }. Let η∈^d be such that |η| < r, then by the argument as in <cit.>. ∫_A_r | v_h(x+η) - v_h(x) |^2 x ≤ C |η|^2 ∑_(K, L)∈Σ^h|_A_r| v_h(L) - v_h(K) |^2 τ_K|L^h. 
Using the lower bound for β_ϵ from Lemma <ref><ref> (v_h,A_r) =∑_(K, L)∈Σ^h|_A_rβ_ϵ((v_h(K))^2, (v_h(L))^2) τ_K|L^h ≥ϵ^2/4∑_(K, L)∈Σ^h|_A_r | v_h(L) - v_h(K) |^2 τ_K|L^h and passing to the limit superior as h→0 then yields _sup(v, A_r) ≥ c v(·+η) - v ^2_L^2(A_r)/|η|^2≥ c∫_A_r| ∇ v |^2 x for v∈ H^1(Ω). Due to the inner regularity property, we conclude _sup(v, A) ≥ c∫_A | ∇ v |^2 x for v∈ H^1(Ω). We aim to find an integral representation for _sup in the form _ϵ,sup(v, A) = ∫_A f_ϵ(x, v, ∇ v) x, v∈ H^1(A). We will prove that the functions ϕ^h_x,w,ξ (K) = w + ⟨ξ, x_K - x ⟩ with some fixed x∈Ω, w∈, ξ∈^d and x_K = _K x x are almost minimizers for _ϵ,h. The family of functions {ϕ^h_x,w,ξ}_h>0 with x∈Ω, w∈, ξ∈^d are almost minimizers for _ϵ,h, i.e. lim_h→ 0( _ϵ,h (ϕ^h_x,w,ξ, Q_r(x)) - M_ϵ,h (ϕ^h_x,w,ξ, Q_r(x)) ) = 0, for a cube Q_r(x) with the edge length r>0 and the center in x∈Ω and where M_ϵ,h (v^h, A) inf{_ϵ,h (w^h, A)   :   w^h on ^h|_A with w^h = φ^h on ^h|_A^c}. Let ψ^h be the minimizer for M_ϵ,h (ϕ^h_x,w,ξ, Q_r(x)). The convexity of _ϵ,h yields 0 ≤_ϵ,h (ϕ^h_x,w,ξ, Q_r(x)) - _ϵ,h (ψ^h, Q_r(x)) ≤ D_ϵ,h(ϕ^h_x,w,ξ, Q_r(x)) [ϕ^h_x,w,ξ - ψ^h]. We now calculate the variation of _ϵ,h(·, A) at some v^h∈^^h, fixed open set A⊂Ω in the directions w^h∈^^h such that w^h_K = 0 for K∈^h|_A^c. Here, we use Lemma <ref><ref> that states the existence of the directional derivatives for _+ ×_+ ∋ (a, b) ↦β_ϵ (a^2, b^2): D_ϵ,h (v^h, A)[w^h] = 2∑_(K,L)∈Σ^h|_A[ ∂_1 β_ϵ((v^h_K)^2, (v^h_L)^2) v^h_K w^h_K + ∂_2 β_ϵ((v^h_K)^2, (v^h_L)^2) v^h_L w^h_L ] τ_K|L^h = 4 ∑_(K,L)∈Σ^h|_A w^h_K v^h_K ∂_1 β_ϵ((v^h_K)^2, (v^h_L)^2) τ_K|L^h. where we used the fact that ∂_1 β_ϵ (a, b) = ϵ^2/4∫_b^a b/z Λ(z, b) z = ∂_2 β_ϵ (b, a). We denote for the moment γ(a, b) a ∂_1 β_ϵ (a^2, b^2) = ϵ^2/4∫_b^2^a^2a b^2/z Λ(z, b^2) z and perform Taylor expansion in the first variable γ(a, b) = γ(b, b) + ∂_1 γ(a, b)|_a=b (b - a) + ∂_1^2 γ(a, b)|_a=b (b - a)^2 + o( (a-b)^2). Direct calculations provide ∂_1 γ(a, b) = ϵ^2/4(∫_b^2^a^2b^2/z Λ(z, b^2) z - a b^2/a^2 Λ(a^2, b^2) 2a ) = ϵ^2/4(∫_b^2^a^2b^2/z Λ(z, b^2) z - 2 b^2/Λ(a^2, b^2)), and thus, ∂_1 γ(a, b)|_a=b = - ϵ^2 / 2. Calculating the second derivative, we obtain ∂_1^2 γ(a, b) = ϵ^2/4( b^2/a^2 Λ(a^2, b^2) 2a + 2 b^2/Λ^2(a^2, b^2)∂_1 Λ(a^2, b^2) 2a ) = ϵ^2/42 b^2/a Λ(a^2, b^2)( 1 - 2 a^2 - Λ(a^2, b^2)/a^2 - b^2) a → b⟶ 0. Therefore, γ(a, b) = -ϵ^2/2 (b - a) + o( (a-b)^2 ). Inserting this expansion into the variation of _ϵ,h yields D_ϵ,h(ϕ^h_x,w,ξ, Q_r(x))[w^h] = 4 ∑_(K,L)∈Σ^h|_Q_r(x) w^h_K ( -ϵ^2/2⟨ξ, x_L - x_K ⟩ + o( h^2 ) ) τ_K|L^h. Since for any admissible tessellation, ∑_L∈^h_K (x_L - x_K) τ_K|L^h = 0 for K∈^h|_A \^h|_A^c, we obtain | D_ϵ,h(ϕ^h_x,w,ξ, Q_r(x))[w^h] | ≤ o(1)_h→ 0 C ∑_K∈^h|_Q_r(x) |w^h_K | |K|, which proves the assertion. We now split the functional _ϵ,h into the quadratic part and the error term, i.e. _ϵ,h(v^h) = ϵ^2/2∑_(K,L)∈Σ^h( v^h_L - v^h_K )^2 τ_K|L^h - ∑_(K,L)∈Σ^h e_ϵ (v^h_K, v^h_L) τ_K|L^h, where we denote e_ϵ(a,b)=ϵ^2/2( a - b )^2 - β_ϵ(a^2, b^2). The first observation to make is that the error term vanishes in the -limit. Let x∈Ω, w∈, ξ∈^d be fixed. For the discrete functions ϕ^h_x,w,ξ (K) = w + ⟨ξ, x_K - x ⟩ for all K∈^h, the following convergence holds lim_h→ 0∑_(K,L)∈Σ^h|_Q_r(x) e ( ϕ^h_x,w,ξ(K), ϕ^h_x,w,ξ(L) ) τ_K|L^h = 0. We recall that e_ϵ(a, b) = ϵ^2/2 (a - b)^2 - β_ϵ (a^2, b^2). Lemma <ref><ref> yield the following bound e_ϵ(a, b) ≤ϵ^2/2 (a - b)^2 - ϵ^2/4(a^2 - b^2)^2/a^2 + b^2 = ϵ^2/4 (a - b)^2 2(a^2 + b^2) - (a + b)^2/a^2 + b^2 = ϵ^2/4(a - b)^4/a^2 + b^2. 
Without loss of generality, we assume that w=0. If ϕ^h_x,ξ(K) = ϕ^h_x,ξ(L) = 0, then we clearly have that e( ϕ^h_x,ξ(K), ϕ^h_x,ξ(L))=0 and we do not need to take these terms into account. Thus, we only need to consider the edges Σ^h|_Q_r(x) for which ϕ^h_x,ξ(K) ≥ 0,  ϕ^h_x,ξ(L) > 0 or ϕ^h_x,ξ(K) > 0,  ϕ^h_x,ξ(L) ≥ 0. Let δ > 0 be arbitrary and define Σ^h_δ{ (K, L)∈Σ^h|_Q_r(x): min⟨*| | ϕ^h_x,ξ(K) |, | ϕ^h_x,ξ(L) | > δ |ξ| }. Using the non-degeneracy of the tessellation, we get ∑_(K,L)∈Σ^h_δ e ⟨*|ϕ^h_x,ξ(K), ϕ^h_x,ξ(L) τ_K|L^h ≤ C ϵ^2/4∑_(K,L)∈Σ^h_δ|ξ|^4 h^4/|ξ|^2 δ^2 h^d-2≤ C ϵ^2 |ξ|^2 h^2/δ^2 |Ω|. The remainder of the sum can be bounded with the inequality e_ϵ(a, b) ≤ϵ^2/2 (a - b)^2 to obtain ∑_(K,L)∈Σ^h \Σ^h_δ e ⟨*|ϕ^h_x,ξ(K), ϕ^h_x,ξ(L) τ_K|L^h ≤ϵ^2/2∑_(K,L)∈Σ^h \Σ^h_δ| ⟨ξ, x_L - x_K ⟩|^2 τ_K|L^h ≤ C ϵ^2/2 |ξ|^2 h^d | Σ^h \Σ^h_δ|. If (K,L)∈Σ^h \Σ^h_δ, then either |⟨ξ, x_K - x ⟩| ≤ |ξ| δ or |⟨ξ, x_L - x ⟩| ≤ |ξ| δ, and therefore, | Σ^h \Σ^h_δ | ≤ C_| { K ∈^h|_Q_r(x) : |⟨ξ, x_K - x ⟩ | ≤ |ξ| δ}| C_ |^h_δ|. The inequality |⟨ξ, x_K - x ⟩| ≤ |ξ| δ means that the point x_K lies within distance δ from the line passing through x and having the direction vector ξ. Employing the non-degeneracy assumption again, we get | Σ^h \Σ^h_δ | ≤ C_ |^h_δ| ≤ C_C_d-1δ 2√(d) r^d-1/C_d (ζ h)^d = C δ r^d-1/h^d. Hence, the sum over all (K, L)∈Σ^h has the following bound ∑_(K,L)∈Σ^h e ⟨*|ϕ^h_x,ξ(K), ϕ^h_x,ξ(L) τ_K|L^h ≤ C ϵ^2 |ξ|^2 ( h^2/δ^2 + δ r^d-1). For d ≥ 2 we choose δ(h) = √(h) for all h > 0 to obtain the asserted limit. Inserting the functions ϕ^h_x,w,ξ into the quadratic part of _ϵ,h yields _ϵ,h(ϕ^h_x,w,ξ) = ϵ^2/2∑_K∈^h⟨ξ, ∑_L∈^h_Kτ_K|L^h/|K| (x_L - x_K) ⊗ (x_L - x_K) ξ⟩ |K| = ϵ^2/2∫_^d⟨ξ, ^h(x) ξ⟩ x with the tensor ^h(x) ∑_K∈^h_K(x) ∑_L∈^h_Kτ_K|L^h/|K| (x_L - x_K) ⊗ (x_L - x_K). The properties of ^h are summarized in the following proposition. The diffusion tensor (<ref>) has the following properties: * ^h(x) is symmetric and positive-definite for any x∈Ω; * {^h}_h>0 is bounded in L^∞ (Ω; ^d× d): for all the components ^h_ij it holds that sup_h>0^h_ij_L^∞(Ω) < ∞; * {^h}_h>0 has a weakly-* limit in the σ(L^∞, L^1) topology, i.e. there exist a subsequence and a tensor ∈ L^∞ (Ω; ^d× d) such that lim_h→ 0∫_Ω^h_ij f x = ∫_Ω_ij f x for all f∈ L^1(Ω). Proposition <ref> guarantees that there exists a limiting tensor , but, for an arbitrary tessellation, is not necessarily the identity. In the next proposition, we show that (<ref>) is a sufficient condition to ensure that a family of tessellations converges to the identity matrix. Let a family of tessellations { (^h, Σ^h )}_h>0 satisfy the orthogonality assumption (<ref>), then the family of tensors {^h}_h>0 defined in (<ref>) is such that ^h_ij⇀^* 2 δ_ij, weakly-* in σ(L^∞, L^1) up to a subsequence. Thus, =2. Consider a function ϕ^i(x) = x^i for x∈Ω, i=1,…,d. The projection of ϕ^i on ^h is given by ϕ^i,h_K = x^i_K for K∈^h and corresponding piece-wise constant reconstruction is ϕ̂^i,h (x) = ∑_K∈^h x^i_K _K(x). It is not difficult to show that the family {ϕ̂^i,h}_h>0 is bounded uniformly in BV(Ω). Firstly, ϕ̂^i,h_L^1(Ω) = ∑_K∈^h |x^i_K| |K| ≤sup_x∈Ω |x^i| |Ω|. Secondly, as in the proof of Lemma <ref>, we have the uniform bound on translations ∫_Ωψ(x)( ϕ̂^i,h(x - η) - ϕ̂^i,h(x) ) x ≤∑_(K,L)∈Σ^h|ϕ^i,h_L - ϕ^i,h_K| |(K|L)| |η| ≤ C |η| ∑_K∈^h |K| = C |η| |Ω|, for an arbitrary ψ∈ C^1_c(Ω). Therefore, we can conclude that |D ϕ̂^i,h|(Ω) ≤ C |Ω| for all h>0, for some constant C>0 independent of h>0. 
This BV bound implies that (up to a subsequence) there exists ϕ^i∈ BV(Ω) such that ϕ̂^i,h→ϕ^i in L^1(Ω) and Dϕ̂^i,h⇀^* Dϕ^i weakly-* in (Ω; ^d). On the other hand, we know that ϕ̂^i,h→ x^i in L^1(Ω). Therefore, ∫_Ωφ (D_jϕ̂^i,h)( x) = ∫_Ω∂_jφ ϕ̂^i,h x ⟶∫_Ω∂_jφ x^i x = -∫_Ωφ δ_ij x for all φ∈ C_c^1(Ω), which consequently yields D_j ϕ̃^i= δ_ij. On the other hand, using the piecewise constant structure of ϕ̂^i,h, we can write its distributional derivative explicitly as D ϕ̂^i,h = 1/2∑_(K, L)∈Σ^h (x^i_L - x^i_K) ν_KL^d-1|_(K|L), where ν_KL denotes the outer normal of the face (K|L). Due to the orthogonality assumption, we have that ν_KL = (x_L - x_K)/|x_L - x_K|, and hence D ϕ̂^i,h = 1/2∑_(K, L)∈Σ^hτ_K|L^h (x^i_L - x^i_K) (x_L - x_K) ^d-1|_(K|L)/|(K|L)|. Notice that D ϕ̂^i,h is related to the tensor ^h in the following way: For any φ∈ C_c^1(Ω), ∫_Ωφ(x) D_jϕ̂^i,h( x) = 1/2∑_(K,L)∈Σ^hτ_K|L^h (x^i_L - x^i_K) (x_L^j - x_K^j) _(K|L)φ(y) ^d-1( y) = 1/2∑_(K,L)∈Σ^hτ_K|L^h (x^i_L - x^i_K) (x_L^j - x_K^j) φ(x_K) + o(1) = 1/2∑_K ∫_K ∑_L∈^h_Kτ_K|L^h/|K| (x^i_L - x^i_K) (x_L^j - x_K^j) φ(x) x + o(1) = 1/2∫_Ω^h_ij(x) φ(x) x + o(1). Therefore, passing to the limit then yields _ij = 2 δ_ij. In particular, = 2. In the remainder of this section, we will assume that the family of tessellations satisfy (<ref>). We are now in the position to summarize the convergence statement for _ϵ,h^0. Up to a subsequence, the family of functionals {_ϵ,h^0}_h>0 has a -limit _ϵ with respect to the L^2-topology taking the form _ϵ^0(ρ) = 2ϵ^2∫_Ω| ∇√(u)|^2 x if √(ρ/ x)√(u)∈ H^1(Ω), +∞ otherwise. To complete the proof of Theorem <ref>, we present the continuous convergence results for _ϵ,h^1 and _ϵ,h^2. As preparation, we establish the relation between q^h and the continuous potentials V and W. Let W satisfy (<ref>) and the family {ρ^h∈(^h)}_h>0 be such that ρ̂^h/^d→ρ/^d in L^1(Ω), with sup_h>0∫_Ωϕ⟨*|ρ̂^h/^d^d < ∞. where ϕ(s) = s log s - s + 1 is the entropy density. Then the following relation holds: q_K|L^h = ∇𝖰 (ρ̂^h) (x_KL) · (x_L - x_K) + o(h), for any x_KL∈ K∪ L, where 𝖰(ρ) = V + W∗ρ. Moreover, q_K|L^h has the following two integral approximations q_K|L^h = _K ∇𝖰 (ρ̂^h) (x) x · (x_L - x_K) + o(h) and q_K|L^h = _(K|L)∇𝖰 (ρ̂^h) (x) ^d-1( x) · (x_L - x_K) + o(h). Since ∇ V is uniformly continuous on Ω, we obtain that V(x_L) - V(x_K) = ∇ V(x_KL) · (x_L - x_K) + o(h), where x_KL is some point in K∪ L. The part of q_K|L^h related to the interaction potential is ∑_M∈^hρ^h_M ( W(x_L - x_M) - W(x_K - x_M) ) = ∑_M∈^h M≠ K,Lρ^h_M ( W(x_L - x_M) - W(x_K - x_M) ) + (W(x_L - x_K) - W(0)) ρ^h_K + (W(0) - W(x_K - x_L)) ρ^h_L. The later terms are bounded as |W(x_L - x_K) - W(0)| ρ^h_K + |W(0) - W(x_K - x_L)| ρ^h_L ≤ 2 h Lip(W) sup_x∈Ωρ̂^h(B_h(x)). We intend to show that sup_x∈Ωρ̂^h (B_h(x)) → 0. Using the Legendre-duality, we obtain ∫_Ωϕ(û^h(z)) z ≥βρ̂^h(B_h(x)) - ϕ^*(β) ^d(B_h(x)) for any β>0, where ϕ(s) = s log s - s + 1 is the entropy density. In particular, we obtain sup_x∈Ωρ̂^h(B_h(x)) ≤1/β{sup_h>0∫_Ωϕ(û^h(z)) z + ϕ^*(β) C_d (3h)^d} for any β>0. Therefore, the limsup as h→ 0 yields 0≤lim sup_h→ 0sup_x∈Ωρ̂^h(B_h(x)) ≤1/βsup_h>0∫_Ωϕ(û^h(z)) z. Since β>0 was arbitrary, we can send β→∞ to obtain the required limit, and thus (W(x_L - x_K) - W(0)) ρ^h_K + (W(0) - W(x_K - x_L)) ρ^h_L = o(h). For M≠ K,L, we choose an arbitrary x_KL∈ K ∪ L to obtain W(x_L - x_M) - W(x_K - x_M) = ∫_0^1 ∇ W ((1-λ) x_K + λ x_K - x_M) λ· (x_L - x_K) = ∇ W (x_KL - x_M) · (x_L - x_K) + o(h). 
We now return to the whole expression for q_K|L^h and write q_K|L^h = ∇ V(x_KL) · (x_L - x_K) + ∑_M∈^h, M≠ K,Lρ^h_M _M ∇ W (x_KL - x)   x · (x_L - x_K) + o(h) = ∇ V(x_KL) · (x_L - x_K) + ∫_Ω\K∪ L∇ W (x_KL - x) ρ̂^h ( x) · (x_L - x_K) + o(h) = ∇𝖰 (ρ̂^h) (x_KL) · (x_L - x_K) - ∫_K∪ L∇ W (x_KL - x) ρ̂^h ( x) · (x_L - x_K) + o(h). In a similar way as above, we obtain | ∫_K∪ L∇ W (x_KL - x) ρ̂^h ( x) | ≤Lip (W) sup_x∈Ωρ̂^h(B_3h (x)) 0, therefore, q_K|L^h = ∇𝖰 (ρ̂^h) (x_KL) · (x_L - x_K) + o(h). To show the integral representations (<ref>) and (<ref>), we note that ∇𝖰 (ρ̂^h) converges uniformly to ∇𝖰 (ρ). Indeed, | ∇𝖰 (ρ̂^h)(x) - ∇𝖰 (ρ)(x) | ≤| ∫_Ω∇ W (x - y) (ρ̂^h - ρ) ( y) | ≤Lip (W) û - u _L^1(Ω). The uniform convergence implies that the family {∇𝖰 (ρ̂^h) }_h>0 is uniformly equicontinuous. Hence, | ∇𝖰(ρ̂^h)(x_KL) - _K ∇𝖰(ρ̂^h)(x) x | ≤_K | ∇𝖰(ρ̂^h)(x_KL) - ∇𝖰(ρ̂^h)(x) | x = o(1) and (<ref>) follows. The same argument works for (<ref>). Let the family {ρ^h ∈(^h) }_h>0 be such that sup_h>0_ϵ,h^0(ρ^h) < ∞. Moreover, suppose that there exists u∈ W^1,1(Ω) such that ρ̂^h/^d→ uρ/^d in L^1(Ω), and Dû^h ⇀^* ∇ u weakly-* in (Ω; ^d). Then lim_h→ 0_ϵ,h^1(ρ^h) = ϵ∫_Ω∇ u ·∇𝖰(ρ) x. First, we show that _ϵ,h^1 is uniformly bounded. Using the Cauchy-Schwartz inequality yields _ϵ,h^1(ρ^h) = ϵ/2∑_(K,L)∈Σ^h (u^h_L - u^h_K) q_K|L^h τ_K|L^h ≤ c_pot√(_ϵ,h^0)( ∑_(K,L)∈Σ^h (u^h_L + u^h_K) h^2 τ_K|L^h )^1/2 where we used the estimate (<ref>). Since ∑_L∈^h_K h^2 τ_K|L^h ≤ C_τ |K|, we then obtain the uniform bound. Similarly, one can show that sup_h>0∑_(K,L)∈Σ^h |u^h_L - u^h_K| |(K|L)| < ∞. We aim to rewrite _ϵ,h^1 in an integral form, which will be convenient for passing to the limit h→ 0. We begin by observing that τ_K|L^h can be rewritten as τ_K|L^h = |(K|L)|/|x_L - x_K| = 1/|x_L - x_K|^d-1 ((K|L)). Inserting this expression for τ_K|L^h into _ϵ,h^1 yields _ϵ,h^1(ρ^h) = ϵ/2∑_(K,L)∈Σ^h (u^h_L - u^h_K) q_K|L^h/|x_L - x_K|∫_(K|L)^d-1 ( x). The representation (<ref>) for q_K|L^h derived in Lemma <ref> yields q_K|L^h/|x_L - x_K|∫_(K|L)^d-1 = ∫_(K|L) ∇𝖰 (ρ̂^h) (x) ^d-1( x) ·ν_KL + |(K|L)| o(1)|_h→ 0, where ν_KL=(x_L - x_K)/|x_L - x_K| is the outer normal of the face (K|L). Inserting the obtained expression into _ϵ,h^1, we have _ϵ,h^1(ρ^h) = ϵ/2∑_(K,L)∈Σ^h (u^h_L - u^h_K) ∫_(K|L)∇𝖰 (ρ̂^h) ^d-1·ν_KL + o(1)|_h→ 0∑_(K,L)∈Σ^h (u^h_L - u^h_K) |(K|L)|, where he last sum is bounded uniformly in h>0 by (<ref>). Altogether, we arrive at _ϵ,h^1(ρ^h) = ϵ/2∫_Ω∇𝖰 (ρ̂^h)(x) ·∑_(K,L)∈Σ^h (u^h_L - u^h_K) ν_KL ^d-1|_(K|L) ( x) + o(1)|_h→ 0. In this expression, one may already recognize the distributional derivative of the density û^h. Indeed, from the definition of û^h, we get Dû^h = ∑_K∈^h u^h_K D_K = ∑_K∈^h u^h_K n_K ^d-1|_∂ K, where n_K is the inner normal for the cell K∈^h. It holds that n_K ^d-1|_∂ K = ∑_L∈^h_K n_KL^d-1|_(K|L) for K∈^h, where n_KL is an inner normal to the face (K|L). Using symmetry, we find Dû^h = ∑_(K, L)∈Σ^h u^h_K n_KL^d-1|_(K|L) = 1/2∑_(K, L)∈Σ^h (u^h_K - u^h_L) n_KL^d-1|_(K|L). If (^h, Σ^h) possesses the orthogonality property, i.e. n_KL = x_K - x_L/|x_K - x_L| = - ν_KL, we can write _ϵ,h^1(ρ^h) = ϵ∫_Ω∇𝖰 (ρ̂^h)(x) · Dû^h ( x) + o(1)|_h→ 0. Moreover, since ∇𝖰 (ρ̂^h) converges to ∇𝖰 (ρ) uniformly as h→ 0, we further obtain _ϵ,h^1(ρ^h) = ϵ∫_Ω∇𝖰 (ρ)(x) · Dû^h ( x) + o(1)|_h→ 0. Passing h→ 0 and using the convergence Dû^h ⇀^* ∇ u in (Ω; ^d) then yields the assertion. Let the family {ρ^h∈(^h) }_h>0 be such that ρ̂^h/^d→ uρ/^d in L^1(Ω), with u∈(Ω; ^d). Then lim_h→ 0_ϵ,h^2(ρ^h) = 1/2∫_Ω| ∇𝖰(ρ) |^2 ρ. 
Using the symmetry, we rewrite _ϵ,h^2(ρ^h) as _h^2(ρ^h) = ∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 u^h_K ∫_0^1 𝔥(-λ q_K|L^h/ϵ) (1-λ)λ. The function 𝔥 has the following Taylor expansion for s≪ 1 𝔥 (s) = 1/2 + s/6 + o(s^2). Taking into account that |q_K|L^h| ≤ c_pot h (cf. estimate (<ref>)), we have that ∫_0^1 𝔥(-λ q_K|L^h/ϵ) (1-λ)λ = 1/4 + O(h/ϵ)|_h→ 0 . Substituting the last expression into _ϵ,h^2 yields _ϵ,h^2(ρ^h) = 1/4∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 u^h_K + o(1)_h→ 0. Now, notice that | ( ∇𝖰⟨ρ̂^h|(x_K) · (x_L - x_K) )^2 - _K ( ∇𝖰⟨ρ̂^h|(x) · (x_L - x_K) )^2 x | ≤ C h^2 sup_x∈ K| ∇𝖰⟨ρ̂^h |(x_K) - ∇𝖰⟨ρ̂^h |(x) | = o ⟨ h^2 |. Using the representation (cf. (<ref>)) q_K|L^h = _K ∇𝖰⟨ρ̂^h |(x) x · (x_L - x_K) + o(h), we can then rewrite _ϵ,h^2 as _ϵ,h^2(ρ^h) = 1/4∑_(K, L)∈Σ^h u^h_K τ_K|L^h _K ( ∇𝖰⟨ρ̂^h |(x) · (x_L - x_K) )^2 x + o(1)_h→ 0 = 1/4∫_Ωû^h(x) ∑_(K, L)∈Σ^hτ_K|L^h/|K|_K(x) ( ∇𝖰⟨ρ̂^h |(x) · (x_L - x_K) )^2 x + o(1)_h→ 0 = 1/4∫_Ωû^h(x) ⟨∇𝖰⟨ρ̂^h |(x), ^h(x) ∇𝖰⟨ρ̂^h |(x) ⟩ x + o(1)_h→ 0, where we recall the tensor ^h(x) = ∑_K∈^h_K (x) ∑_L∈^h_Kτ_K|L^h/|K| (x_L - x_K) ⊗ (x_L - x_K). The product ⟨∇𝖰⟨ρ̂^h |(x), ^h(x) ∇𝖰⟨ρ̂^h |(x) ⟩ has an L^∞ bound uniformly in h>0, since for any x∈Ω, there is some K for which x∈ K and | ⟨∇𝖰⟨ρ̂^h|(x), ^h(x) ∇𝖰⟨ρ̂^h |(x) ⟩| ≤∑_L∈_K^hτ_K|L^h/|K|( ∇𝖰⟨ρ̂^h |(x) · (x_L - x_K) )^2 ≤ c_pot^2 ∑_L∈_K^h|(K|L)| |x_K-x_L|/|K|≤ c_pot^2C_d-1/C_dζ^d+1sup_h>0sup_K∈^h#_K^h <∞. It is left to how that, for any f∈ L^1(Ω), we have the convergence lim_h→ 0∫_Ω f ⟨∇𝖰 ( ρ̂^h ), ^h ∇𝖰 ( ρ̂^h ) ⟩ x = ∫_Ω f | ∇𝖰 ( ρ ) |^2 x. We consider the limit component-wise lim_h→ 0∫_Ω f ∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) ^h_ij x = lim_h→ 0∫_Ω f ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ ) ^h_ij x + lim_h→ 0∫_Ω f [ ∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) - ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ) ] ^h_ij x, where f ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ ) ∈ L^1(Ω) and, since ^h_ij⇀^* 2 δ_ij in σ(L^∞, L^1) by Proposition <ref>, the first term converges to the expected limit. For the error term, we notice that | ∫_Ω f [ ∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) - ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ) ] ^h_ij x| ≤∂_i 𝖰 ( ρ̂^h ) ∂_j 𝖰 ( ρ̂^h ) - ∂_i 𝖰 ( ρ ) ∂_j 𝖰 ( ρ) _supf_L^1^h_ij_L^∞→ 0 as h→ 0, due to the uniform convergence of ∇𝖰 ( ρ̂^h ) to ∇𝖰 ( ρ ). §.§ EDP convergence A pair (ρ^h, j^h)∈𝒞ℰ_h(0,T) is said to converge to a pair (ρ, j)∈𝒞ℰ(0,T) if the pair of reconstructions (ρ̂^h, ^h)∈𝒞ℰ(0,T) defined as in (<ref>) converges in the following sense: * ρ̂^h_t/^d →ρ_t/^d in L^1(Ω) for almost every t∈[0, T], * ∫_·_t^h t ⇀^* ∫_· j_t t in ((0, T)×Ω). We begin by summarizing the liminf inequalities for the tilt-independent gradient structure. Let (ρ^h, j^h)∈𝒞ℰ_h(0,T) converge to (ρ, j)∈𝒞ℰ(0,T) in the sense of Definition <ref>. Then the following liminf inequalities hold for * the dissipation potential: ∫_0^T 1/2∫_Ω| j_t/ρ|^2 ρ t≤lim inf_h→ 0∫_0^T _ϵ,h(ρ^h_t, j^h_t) t ; * the Fisher information: ∫_0^T _ϵ(ρ_t) t≤lim inf_h→ 0∫_0^T _ϵ,h( ρ^h_t) t ; * the energy functional: _ϵ(ρ_t)≤lim inf_h→ 0_ϵ,h(ρ^h_t) for all t∈[0,T]. (i) We need to show that the following limsup inequality holds for any φ∈_b^2(Ω): lim sup_h→ 0^*_ϵ,h(ρ^h, φ^h) ≤1/2∫_Ω |∇φ|^2 ρ, where {φ^h}_h>0 is defined by φ^h(K) φ(x_K) for K∈^h. Then the desired liminf inequality follows by the duality argument from <cit.>. From Lemma <ref><ref>, it follows that _ϵ,h^*(ρ^h, φ^h) = 1/2∑_(K,L)∈Σ^h | φ^h_KL |^2 Λ_H(u^h_K, u^h_L) τ_K|L^h + 1/ϵ∑_(K,L)∈Σ^h O (| φ^h_KL |^3 ) τ_K|L^h. We note that O (| φ^h_KL |^3 ) = O(h^3) and, therefore, 1/ϵ∑_(K,L)∈Σ^h O (| φ^h_KL|^3 ) τ_K|L^h = 1/ϵO(h). 
Using the inequality Λ_H(a, b) ≤ (a + b) /2, we arrive at _ϵ,h^*(ρ^h, φ^h) ≤1/2∑_(K,L)∈Σ^h| φ^h_KL|^2 τ_K|L^h/|K|ρ^h_K + O (h ). With this bound at hand, it is enough to make minor modifications of the proof of <cit.> for the tilt-independent dissipation potential with κ^h_KL = τ_K|L^h / |K| to obtain (<ref>). (ii) The asserted liminf inequality follows from Theorem <ref> and Fatou's lemma. (iii) As the following calculations hold for any t∈ [0,T], we drop the subscript t. The relation between the continuous and discrete potentials yields the representation of _ϵ,h in the integral form _ϵ,h(ρ^h) = _ε(ρ̂^h) + O(h). Since _ϵ is lower semicontinuous w.r.t. the narrow convergence, we then easily conclude that _ϵ(ρ^h)≤lim inf_h→ 0_ϵ,h(ρ^h), which completes the proof. Consider a family {(ρ^h , j^h)}_h>0 of GGF-solutions to Scharfetter–Gummel scheme (<ref>), for a fixed ϵ>0, according to Definition <ref> and the tilt-independent structure introduced in Section <ref>. Further, let {(ρ̂^h, ^h)}_h>0 be the family of reconstructed pairs as defined in (<ref>). Then, the existence of a subsequential limit pair (ρ, j) ∈𝒞ℰ(0,T) and the convergence specified in Theorem <ref>(1) follows from the compactness arguments of Section <ref>. The liminf inequality from assertion (2) is proven in Theorem <ref>, which immediately implies that _ϵ^[s,t](ρ, j) ≤lim inf_h→ 0_ϵ,h^[s,t](ρ^h, j^h)= 0 for every [s,t]⊂[0,T]. On the other hand, the chain rule <cit.> yields _ϵ^[s,t](ρ, j) ≥ 0 for every [s,t]⊂[0,T]. Therefore, the limit pair (ρ, j) is the (, , ^*)-gradient flow solution of (<ref>) in the sense of Definition <ref>. § VANISHING DIFFUSION LIMIT This section deals with the vanishing diffusion limit for both the discrete and continuous cases, i.e. Theorems <ref> and <ref>. Although the result for the continuous case seems to be obvious, we did not find a reference containing a proof of the statement. For this reason, and for the sake of completeness, we include a proof of the statement in Section <ref>. We begin with the discrete case. §.§ Discrete Case We fix a tessellation (^h, Σ^h) with some h>0 and consider the vanishing diffusion limit ϵ→ 0. To simplify notation, we drop the superscript h. As mentioned in the introduction, we expect that the Scharfetter–Gummel flux (<ref>) converges to the upwind flux lim_ϵ→ 0_K|L^ρ = _K|L^ρ,upτ_K|L^h ( q_K|L^h,+u_K - q_K|L^h,- u_L ), (K,L)∈Σ^h. The result of this section concerns the convergence of the Scharfetter–Gummel scheme (<ref>) to the upwind scheme (<ref>) in the sense of the EDP convergence. Recall that if a pair (ρ^ϵ,h, j^ϵ,h)∈𝒞ℰ_h(0, T) is a GGF-solutions of (<ref>), then (ρ^ϵ,h, j^ϵ,h) is the minimizer for the energy-dissipation functional _ϵ,h^[s,t](ρ^ϵ,h, j^ϵ,h) = ∫_s^t {_ϵ,h (ρ^ϵ,h_r, j^h,ϵ_r) + _ϵ,h (ρ^ϵ,h_r) } r + _ϵ,h (ρ^ϵ,h_t) - _ϵ,h (ρ^ϵ,h_s) with _ϵ,h, _ϵ,h, and _ϵ,h defined in (<ref>), (<ref>), and (<ref>) respectively. The objective of this section is to get a compactness statement for {(ρ^ϵ,h, j^ϵ,h)}_ϵ>0 and to find the counterparts to _ϵ,h, _ϵ,h, and _ϵ,h for ϵ = 0. Then we complete the proof of Theorem <ref>. Note that since (^h, Σ^h) is fixed and non-degenerate, we have the following useful bounds sup_K∈^h∑_L∈_K^hτ_K|L^h/|K| c_ < ∞. We begin with the compactness result. Consider a measure J^ϵ∈([0,T]×Σ^h) defined on product measurable sets A× B⊂ [0, T]×Σ^h as J^ϵ (A× B) ∫_A j_t^ϵ(B) t=∫_A∑_(K,L)∈ B j^ϵ_K|L(t) t. Let a family of pairs {(ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ_h (0, T) satisfy c_0sup_ϵ>0∫_0^T _ϵ,h(ρ_t^ϵ, j_t^ϵ) t<∞. 
Then the family { J^ϵ}_ϵ>0 is bounded in total variation. Moreover, |J^ϵ|(A×Σ^h) ≤√(c_0c_^1(A)) for any measurable set A⊂[0,T]. Following the initial arguments of the proof of Lemma <ref>, we obtain for any β∈, _ϵ,h(ρ_t^ϵ, j_t^ϵ) ≥β∑_(K,L)∈Σ^h |j_K|L^ϵ|(t) - 2∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u_K(t), u_L(t), βsign(j_K|L^ϵ)/2. If either a=0 or b=0, then α^*(a,b,x)=0 for any x∈. If a=b, then α_ϵ^*(a,a,ξ) = a ∫_0^ξ x x = aξ^2/2 = α_0^*(a,a,ξ) for all ξ∈, We will now reduce the other cases to this case. Indeed, using the 1-homogeneity and concavity of Λ_H, we have for any ξ∈ that ∑_(K,L)∈Σ^hτ_K|L^h α_ϵ^* ⟨*| u_K, u_L, ξ = ∑_(K,L)∈Σ^hα_ϵ^* ⟨*|τ_K|L^h u_K, τ_K|L^h u_L, ξ ≤α_ϵ^* ⟨*|∑_(K,L)∈Σ^hτ_K|L^h u_K, ∑_(K,L)∈Σ^hτ_K|L^h u_L, ξ = α_ϵ^*⟨*|1,1,ξ∑_(K,L)∈Σ^hτ_K|L^h u_K ≤ c_ξ^2/2. Consequently, and after integration over any measurable set A⊂[0,T], we obtain the estimate ∫_0^T _ϵ,h(ρ_t^ϵ, j_t^ϵ) t ≥β |J^ϵ|(A×Σ^h) - c_/2β^2^1(A). Taking the supremum over β∈, we arrive at the asserted estimate. Let a family of pairs {(ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ_h (0, T) satisfy c_0sup_ϵ>0∫_0^T _ϵ,h(ρ_t^ϵ, j_t^ϵ) t<∞. Then there exist a limit pair (ρ, j) ∈𝒞ℰ_h(0, T) and a (not relabelled) subsequence such that ρ^ϵ_t ⇀ρ_t in (^h) for all t∈ [0,T], J^ϵ⇀^* J=∫_· j_t t weakly-* in ([0,T]×Σ^h). The convergence for J^ϵ follows the same lines as in the proof of Lemma <ref>. We now prove the convergence for {ρ^ϵ}_ϵ>0. Since (ρ^ϵ, j^ϵ)∈𝒞ℰ_h(0, T), then | ∑_K∈^hφ_K (ρ^ϵ_K(t) - ρ^ϵ_K(s) ) | = | ∫_s^t ∑_(K,L)∈Σ^hφ (K,L) j^ϵ_KL(r) r | ≤ 2φ_∞ |J^ϵ|q([s,t]×Σ^h) for any [s, t] ⊂ [0, T]. Taking supremum over all φ∈(^h) with φ_∞≤ 1, we make use of Lemma <ref> to obtain ρ_t^ϵ - ρ_t^ϵ_TV≤ C √(|t - s|). By the Ascoli-Arzelá theorem, there exists a (not relabelled) subsequence of {ρ^ϵ}_ϵ>0 and a limit curve ρ∈([0,T];(^h)), such that the asserted convergence holds. Since ^h and Σ^h are finite discrete spaces, the weak and strong topologies coincide. In particular, the narrow convergence stated in Lemma <ref> implies the pointwise convergence. We will use this property in the proofs of the following results. In the next lemma, we establish the convergence of the Fisher information. Let the family of measures {ρ^ϵ}_ϵ> 0 be such that ρ^ϵ⇀ρ in (^h) as ϵ→ 0, then lim_ϵ→ 0_ϵ,h (ρ^ϵ) = _h, up (ρ) = 2∑_(K, L)∈Σ^hτ_K|L^hα_0^* ⟨*|u_K, u_L, q_K|L^h/2, where α_0^*(a, b, q) = 1/2( a|q^+|^2 + b|q^-|^2 ). The limit Fisher information contains only the limit of _ϵ,h^2, since lim_ϵ→ 0(_ϵ,h^0 + _ϵ,h^1 )= 0. Recall that _ϵ,h^2 (ρ^ϵ) = 1/2∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 𝕙_ϵ⟨*| u^ϵ_K, u^ϵ_L, q_K|L^h, with 𝕙_ϵ being 𝕙_ϵ (a, b, q) = ∫_0^1 [a 𝔥(λ q/ϵ) + b 𝔥(-λ q/ϵ)](1-λ)λ, 𝔥(s) = 1/4e^s-1-s/sinh^2(s/2). It is uniformly bounded by the following argument. Since 0 ≤𝔥(s) ≤ 1, s∈, we have that _h^2 (ρ^ϵ) ≤1/4∑_(K, L)∈Σ^hτ_K|L^h |q_K|L^h|^2 (u^ϵ_K + u^ϵ_L) ≤1/2 c_pot^2 c_ . Moreover, we notice that lim_ϵ→ 0𝔥(s/ϵ) = _(0,∞)(s) + 1/2_{0} (s), and, hence, lim_ϵ→ 0∫_0^1 𝔥(λ q/ϵ) (1-λ)λ = 1/2( _(0,∞)(q) + 1/2_{0} (q) ) 𝔥_0(q). Now we define u^ϵ_KL∫_0^1 [u^ϵ_K 𝔥(λ q_K|L^h/ϵ) + u^ϵ_L 𝔥(-λ q_K|L^h/ϵ)](1-λ)λ. Since u^ϵ→ u pointwise on ^h, we get lim_ϵ→ 0u^ϵ_KL = u_K 𝔥_0(q_K|L^h) + u_L 𝔥_0(-q_K|L^h), which concludes the proof. Finally, we prove the convergence of the dissipation potential. Let the family of measure-flux pairs { (ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ_h (0,T) satisfying * ρ^ϵ_t ⇀ρ_t in (^h) for all t∈ [0, T], * ∫_· j_t^ϵ t ⇀^* ∫_· j_t t weakly-* in ((0, T)×Σ^h). 
Then, ∫_s^t _up,h(ρ_r,j_r) r ≤lim inf_ϵ→ 0∫_s^t _ϵ,h (ρ^ϵ_r, j^ϵ_r) r for any [s,t]⊂[0,T], where _up,h(ρ,j) = ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K | j^+_K|L/τ_K|L^hu_K|^2 + u_L | j^-_K|L/τ_K|L^hu_L|^2 , u_K=ρ_K/|K|. We begin by proving the convergence lim_ϵ→ 0_ϵ,h^* (ρ^ϵ, ξ) = _up,h^*(ρ,ξ) = 1/4∑_(K,L)∈Σ^hτ_K|L^h ⟨*|u_K |ξ_K|L^+|^2 + u_L |ξ_K|L^-|^2 for any ξ∈(Σ^h). Since ρ^ϵ converges pointwise to ρ (cf. Remark <ref>) and estimate (<ref>) provides ∑_(K,L)∈Σ^hτ_K|L^hα_ϵ^* ⟨*|u^ϵ_K, u^ϵ_L, ξ_KL≤ 2∑_(K,L)∈Σ^hτ_K|L^hα_ϵ^* ⟨*|u^ϵ_K, u^ϵ_L, ξ_∞≤ξ_∞^2 c_ , we obtain the asserted convergence by means of Lemma <ref><ref> and the dominated convergence. We now use the Legendre duality to infer the asserted liminf inequality for the dissipation potential. From the convergence result established in the first part of the proof, it follows that ∫_s^t ∑_(K,L)∈Σ^hχ_r ξ_KL j_KL(r) r - ∫_0^T 2∑_(K,L)∈Σ^hτ_K|L^hα_0^* ⟨*| u_K(r), u_L(r), χ_r ξ_KL/2 r ≤lim_ϵ→ 0∫_s^t ∑_(K,L)∈Σ^hχ_r ξ_KL j^ϵ_KL(r) r - lim sup_ϵ→ 0∫_s^t _ϵ,h^* (ρ^ϵ_r, χ_r ξ) r ≤lim inf_ϵ→ 0∫_s^t {∑_(K,L)∈Σ^hχ_r ξ_KL j^ϵ_KL(r) - _ϵ,h^* (ρ^ϵ_r, χ_r ξ) } r ≤lim inf_ϵ→ 0∫_s^t _ϵ,h (ρ^ϵ_r, j^ϵ_r ) r for any χ∈([0,T]), ξ∈(Σ^h). Now let η∈([0,T]×Σ^h). We introduce the measures Θ_ρ^±, Θ∈([0,T]×Σ^h) in the way that for any measurable A ⊂ [0,T] and B ⊂Σ^h it holds that Θ(A× B) = ∫_A ∑_(K,L)∈ Bτ_K|L^h t, Θ_ρ^+ (A× B) = ∫_A ∑_(K,L)∈ Bτ_K|L^h u_K(t) t, Θ_ρ^- (A× B) = ∫_A ∑_(K,L)∈ Bτ_K|L^h u_L(t) t. Then, we rewrite ∫_s^t ∑_(K,L)∈Σ^hη_KL(r) J_KL(r) r - ∫_s^t 2∑_(K,L)∈Σ^hτ_K|L^hα_0^* ⟨*| u_K(r), u_L(r), η_KL(r)/2 r = ∬_[s,t]×Σ^hη J - ∬_[s,t]×Σ^h 2 α_0^* ( Θ^+_ρ/Θ, Θ^-_ρ/Θ,η/2) Θ I_0^[s,t] (η). It is left to determine sup_η∈([s,t]×Σ^h) I_0^[s,t] (η). We note that ∬_[s,t]×Σ^hη J = ∬_[s,t]×Σ^hη^+ ( [ J/Θ_ρ^+]^+ - [ J/Θ_ρ^+]^-) Θ_ρ^+ + η^- ( [ J/Θ_ρ^-]^- - [ J/Θ_ρ^-]^+ ) Θ_ρ^-. The two negative terms can only decrease the total value, therefore the supremum over ([0,T]×Σ^h) is equivalent to taking supremum over η∈([0,T]×Σ^h) satisfying η^±≡ 0 on (J^0)^∓. Because of the structure of α_0^* with one part depending on η^+ and the other part depending on η^-, the expression under the supremum splits into two independent parts with the supremum over η^+ and the supremum over η^-. The first part is sup_η∈([s,t]×Σ^h){∬_[s,t]×Σ^hη^+ [ J/Θ_ρ^+]^+ Θ_ρ^+ - η^+/2^2_L^2([s,t]×Σ^h, Θ_ρ^+)} = [ J/Θ_ρ^+]^+ ^2_L^2([s,t]×Σ^h, Θ_ρ^+). and the second part is sup_η∈([s,t]×Σ^h){∬_[s,t]×Σ^hη^- [ J/Θ_ρ^-]^- Θ_ρ^- - η^-/2^2_L^2([s,t]×Σ^h, Θ_ρ^-)} = [ J/Θ_ρ^-]^- ^2_L^2([s,t]×Σ^h, Θ_ρ^-). In both parts, we imply that if the supremum is finite then it equals the L^2-norm of the corresponding flux densities. Combining the two, we obtain sup_η∈([s,t]×Σ^h) I^[s,t](η) = [ J/Θ_ρ^+]^+ ^2_L^2([s,t]×Σ^h, Θ_ρ^+) + [ J/Θ_ρ^-]^- ^2_L^2([s,t]×Σ^h, Θ_ρ^-) = ∫_s^t ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K(r) | j^+_K|L(r)/τ_K|L^h u_K(r)|^2 + u_L(r) | j^-_K|L(r)/τ_K|L^h u_L(r)|^2 r = ∫_s^t _up,h (ρ_r, j_r) r, therewith concluding the proof. To summarize, the energy-dissipation functional corresponding to the upwind scheme comprises the driving energy (^h)∋ρ↦_up,h (ρ) = ∑_K∈^h V^h_K ρ_K + 1/2∑_K,L∈^h×^h W^h_KLρ_K ρ_L, the dissipation potential _up,h: (^h) ×(Σ^h) →_+ ∪{+∞} (^h) ×(Σ^h)∋ (ρ,j)↦_up,h(ρ, j) = ∑_(K,L)∈Σ^h⟨*||j^+_K|L|^2/τ_K|L^hu_K + |j^-_K|L|^2/τ_K|L^hu_L , and the Fisher information (^h)∋ρ↦_up,h (ρ) = ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K| q^+_K|L/2|^2 + u_L| q^-_K|L/2|^2 . 
For completeness, we point out that the dual dissipation potential in this case is (^h) ×(Σ^h)∋ (ρ,ξ)↦_up,h^* (ρ, ξ) = ∑_(K,L)∈Σ^hτ_K|L^h⟨*| u_K|ξ_K|L^+/2|^2 + u_L| ξ_K|L^-/2|^2 . Consider a family {(ρ^ϵ,h, j^ϵ,h)}_ϵ>0 of GGF-solutions to (<ref>) according to Definition <ref> and the tilt-independent structure introduced in Section <ref>. Lemma <ref> and Lemma <ref> provide the existence of a subsequential limit pair (ρ^up,h, j^up,h) ∈𝒞ℰ_h (0, T) and the convergence specified in Theorem <ref>(1). The liminf inequality for the energy-dissipation functionals from assertion (2) is proven in Lemma <ref> and Lemma <ref>. With a simple chain rule, we easily deduce _up,h^[s,t](ρ^up,h, j^up,h)≥ 0 for every [s,t]⊂[0,T], and hence, the limit pair (ρ^h,ϵ, j^h,ϵ) is the GGF solution of the upwind scheme (<ref>). §.§ Continuous case Recall that for each ϵ>0, a gradient flow solution (ρ^ϵ,j^ϵ) of (<ref>) satisfies ^[s,t]_ϵ (ρ^ϵ, j^ϵ)=∫_s^t {(ρ_r^ϵ,j_r^ϵ) + _ϵ(ρ_r^ϵ)} r + _ϵ(ρ_t^ϵ) - _ϵ(ρ_s^ϵ) = 0 for all [s,t]⊂[0,T], with Fisher information _ϵ(ρ) = 2ϵ^2∫_Ω| ∇√(u)|^2 x + ϵ∫_Ω∇ u ·∇𝖰(ρ) x + 1/2∫_Ω|∇𝖰(ρ)|^2 ρ, u = ρ/^d. In particular, √(u^ϵ)∈ H^1(Ω) for every ϵ>0. As in the previous results, we will pass to the liminf in each of the terms in the energy-dissipation functional _ϵ. Due to the joint lower semicontinuity of the dissipation potential w.r.t. weak-* convergence and the fact that _agg≤_ϵ, the only difficulty here is in proving the liminf inequality for the Fisher information _ϵ, as it is unclear that the first two terms vanish in the limit. However, since the chain rule ∇ v^2 = 2 v ∇ v∈ L^1(Ω) for v∈ H^1(Ω), _ϵ takes the alternative form _ϵ(ρ) = 1/2∫_Ω|2ϵ∇√(u) + √(u) ∇𝖰(ρ)|^2 x, u = ρ/^d, √(u)∈ H^1(Ω). Moreover, by defining the ^d-valued measure g_t^ϵ := √(u_t^ϵ)(2ϵ∇√(u_t^ϵ) + √(u_t^ϵ)∇𝖰(ρ_t^ϵ))^d = (ϵ∇ u_t^ϵ + u_t^ϵ∇𝖰(ρ_t^ϵ) )^d∈(Ω;^d), for every t∈[0,T], we can further express _ϵ(ρ^ϵ) as _ϵ(ρ_t^ϵ) = 1/2∫_Ω| g_t^ϵ/ρ_t^ϵ|^2 ρ_t^ϵ = (ρ_t^ϵ,g_t^ϵ). Therefore, if ρ_t^ϵ⇀^* ρ_t weakly-* in (Ω) and g_t^ϵ⇀^* g_t weakly-* in (Ω;^d) for every t∈[0,T], then the weak-* lower semicontinuity of yields (ρ_t,g_t) ≤lim inf_ϵ→ 0(ρ_t^ϵ,g_t^ϵ) = lim inf_ϵ→ 0_ϵ(ρ_t^ϵ). Hence, it suffices to show that g_t^ϵ⇀^* ρ_t ∇𝖰(ρ_t) weakly-* in (Ω;^d) for every t∈[0,T]. Let {ρ^ϵ}_ϵ>0⊂([0,T];(Ω)), ρ∈([0,T];(Ω)) be such that ρ_t^ϵ⇀^* ρ_t weakly-* in (Ω) for every t∈[0,T] and the interaction potential W satisfy (<ref>). Then for every t∈[0,T], the sequence {g_t^ϵ}_ϵ>0⊂(Ω;^d) defined in (<ref>) satisfies g_t^ϵ⇀^* g_t:=ρ_t ∇𝖰(ρ_t) weakly-* in (Ω;^d). In particular, we have ∫_s^t _agg(ρ_r) r≤lim inf_ϵ→ 0∫_s^t _ϵ(ρ_r^ϵ) r for every [s,t]⊂[0,T]. Let φ∈_c^1(Ω;^d) be arbitrary and t∈[0,T]. Then ⟨φ,g_t^ϵ⟩ = ∫_Ωφ·(ϵ∇ u_t^ϵ + u_t^ϵ∇𝖰(ρ_t^ϵ) ) x = - ϵ∫_Ωdivφ ρ_t^ϵ + ∫_Ωφ·∇𝖰(ρ_t^ϵ)ρ_t^ϵ, and therefore |⟨φ,g_t^ϵ - g_t⟩ | ≤ϵdivφ_sup + φ_sup∇𝖰(ρ_t^ϵ)-∇𝖰(ρ_t)_sup + |⟨φ·∇𝖰(ρ_t),ρ_t^ϵ-ρ_t⟩| From the assumptions placed on the potentials V and W, one easily deduces the uniform convergence ∇𝖰(ρ_t^ϵ)-∇𝖰(ρ_t)_sup→ 0 as ϵ→ 0. Clearly, the other terms also converge to zero. Using the weak-* lower semicontinuity of , we then obtain (ρ_t,g_t) ≤lim inf_ϵ→ 0_ϵ(ρ_t^ϵ) for every t∈[0,T]. Since _ϵ(ρ_t^ϵ)≥ 0 for t∈[0,T], an application of Fatou's lemma then yields the result. Following the same strategy as in the previous sections, we obtain a compactness result for Let a family of pairs {(ρ^ϵ, j^ϵ)}_ϵ>0⊂𝒞ℰ (0, T) satisfying c_0sup_ϵ>0∫_0^T _ϵ(ρ_t^ϵ, j_t^ϵ) t<∞. 
Then there exist a limit pair (ρ, j) ∈𝒞ℰ(0, T) and a (not relabelled) subsequence such that ρ^ϵ_t ⇀^* ρ_t weakly-* in (Ω) for all t∈ [0,T], ∫_· j_t^ϵ t J^ϵ⇀^* J=∫_· j_t t weakly-* in ([0,T]×Ω;^d). An application of Jensen't inequality immediately yields sup_ϵ>0|j_·^ϵ|(Ω)_L^2((0,T))^2 ≤ 2sup_ϵ>0∫_0^T(ρ_t^ϵ,j_t^ϵ) t = 2 c_0. In particular, the sequence {t↦ |j_t^ϵ|(Ω)}_ϵ>0 is equi-integrable, and the weak-* compactness of {J^ϵ}_ϵ>0 can be proven as in Lemma <ref>. We now prove the asserted weak-* convergence for the sequence {ρ^ϵ}_ϵ>0⊂([0,T];(^d)). Since (ρ^ϵ,j^ϵ) satisfies the continuity equation (<ref>), for any φ∈_c^1(^d) with ∇φ_L^∞≤ 1: |⟨φ,ρ_t^ϵ - ρ_s^ϵ⟩| = |∫_s^t ⟨∇φ, j_r^ϵ⟩ r| ≤∫_s^t |j_r^ϵ|(Ω) r ≤√(|t-s|)|j_·^ϵ|(Ω)_L^2((0,T)). Taking the supremum over Lipschitz functions φ satisfying ∇φ_L^∞≤ 1 then gives W_1(ρ_t^ϵ,ρ_s^ϵ) ≤ c_0√(|t-s|) for all ϵ>0 and [s,t]⊂[0,T], where W_1 is the 1-Wasserstein distance. The Ascoli-Arzelá theorem then provides the existence of a limit curve ρ∈([0,T];(Ω)) and a subsequence such that the convergence holds. We now conclude with the proof of Theorem <ref>. Consider a family {(ρ^ϵ, j^ϵ)}_ϵ>0 of gradient flow solutions to (<ref>) according to Definition <ref>. Lemma <ref> provides the existence of a subsequential limit pair (ρ, j) ∈𝒞ℰ (0, T) and the convergence specified in Theorem <ref>(1). To show the liminf inequality for the energy-dissipation functionals from assertion (2), we begin by noticing that ∫_s^t (ρ_r^ϵ,j_r^ϵ) r = 1/2∬_[s,t]×Ω| J^ϵ/ R^ϵ|^2 R^ϵ, R^ϵ = ∫_·ρ_t^ϵ t, where the right-hand side is jointly weakly-* lower semicontinuous as a functional on ([s,t]×Ω)×([s,t]×Ω;^d). Since (R^ϵ,J^ϵ)⇀^* (R,J) weakly-* in ([s,t]×Ω)×([s,t]×Ω;^d) with R = ∫_·ρ_t t and J=∫_· j_t t, we then conclude that lim inf_ϵ→ 0∫_s^t (ρ_r^ϵ,j_r^ϵ) r ≥1/2∬_[s,t]×Ω| J/ R|^2 R = ∫_s^t (ρ_r,j_r) r. Together with Lemma <ref> and the fact that _agg≤_ϵ, we easily deduce the asserted liminf inequality _agg^[s,t](ρ,j) ≤lim inf_ϵ→ 0_ϵ^[s,t](ρ^ϵ, j^ϵ)=0 for every [s,t]⊂[0,T]. Finally, the chain rule <cit.> yields _agg^[s,t](ρ,j) ≥ 0 for every [s,t]⊂[0,T]. Therefore, the limit pair (ρ, j) is an (_agg, , ^*)-gradient flow solution of (<ref>) in the sense of Definition <ref> § FROM THE UPWIND SCHEME TO THE AGGREGATION EQUATION In this section, we complete the commutative diagram in Figure <ref> by studying the variational convergence of the upwind scheme (<ref>) to the aggregation equation (<ref>). We mentioned earlier that we could not consider general tessellations in this section, thus, we restrict to Cartesian grids. Moreover, we assume (<ref>) for the interaction potential W. On the other hand, we can handle any initial data ρ_in^h∈(^h) satisfying ρ̂^h_in⇀^* ρ_in weakly-* in (Ω) without any additional assumptions. We work with (_up,h, _up,h, _up,h^*)-generalized gradient flow solutions of the upwind scheme (<ref>), where _up,h, _up,h, and _up,h^* are defined in (<ref>), (<ref>), and (<ref>), respectively. The strategy should be familiar to the reader by now. We begin with the necessary compactness result in Lemma <ref>. The convergence of the dual dissipation potential _up,h^* and, consequently, the Fisher information _up,h given in (<ref>) is established in Theorem <ref>. We conclude this section with the proof of Theorem <ref>. We begin this section with a compactness result. The family {J^h}_h>0 is weakly-* compact in ((0, T)×Ω; ^d) and the family {t↦ |^h_t|}_h>0 is equi-integrable. 
In particular, there exists a (not relabelled) subsequence of { (ρ̂^h, ^h) }_h>0 and a pair (ρ, j)∈𝒞ℰ(0,T) such that ρ̂^h_t →ρ_t weakly-* in (Ω) for all t ∈ [0, T], ∫_·^h_t t J^h ⇀^* J=∫_· j_t t weakly-* in ((0, T)×Ω; ^d). The weak-* compactness of {J^h}_h>0 and equi-integrability of the family {t↦ |^h_t|}_h>0 can be proven as in Lemma <ref>. Indeed, using the dissipation potential _up,h (cf. (<ref>)) instead, we obtain sup_h>0|_·^h|(Ω)_L^2((0,T))^2≤ 2c_κ d^2 sup_h>0∫_0^T _up,h(ρ_t^h,j_t^h) t =: c_0 <∞. For the pointwise weak-* convergence of {ρ̂_t^h}_h>0, we simply mimic the proof of Lemma <ref>. Let { (^h, Σ^h)}_h>0 be a family of Cartesian tessellations with edge-length h>0. Let the family {ρ^h∈(^h) }_h>0 satisfy ρ̂^h ⇀^* ρ weakly-* in (Ω). If the family of discrete functions {φ^h∈(^h)}_h>0 is such that for some φ∈ C_b^1(Ω): φ^h(K,L) = ∇φ(x_K) · (x_L - x_K) + o(h), then lim_h→ 0_up,h^*(ρ^h, φ^h) = 1/2∫_Ω |∇φ(x)|^2 ρ( x). Consequently, if the interaction potential W satisfies assumption (<ref>), then lim_h→ 0_up,h(ρ^h) = 1/2∫_Ω| ∇𝖰 (ρ) |^2 ρ, with 𝖰(ρ)=∇ V + ∇ W ∗ρ. Using symmetry, we rewrite the functional as _up,h^*(ρ^h, φ^h) = 1/4∑_(K,L)∈Σ^h( u^h_K |(φ^h(K,L))^+ |^2 + u^h_L | (φ^h(K,L))^- |^2 )τ_K|L^h = 1/2∑_(K,L)∈Σ^h| (φ^h(K,L))^+ |^2 u^h_K τ_K|L^h. Since the mapping ∋ q ↦ q^+ is Lipschitz, we have that (φ^h(K,L))^+ = (∇φ(x_K) · (x_L - x_K))^+ + o(h). Inserting this expression into the functional yields _up,h^*(ρ^h, φ^h) = 1/2∑_(K,L)∈Σ^hτ_K|L^h u^h_K| ( ∇φ (x_K) · (x_L - x_K) )^+ |^2 + o(h^2) ∑_(K,L)∈Σ^hτ_K|L^h/|K|ρ^h_K = 1/2∑_K∈^h⟨∇φ (x_K), ∑_L∈^h_Kτ_K|L^h (x_L - x_K) ⊗ (x_L - x_K) ^φ_K(L) ∇φ (x_K) ⟩ u^h_K + o(1), where we set ^φ_K _{ M∈^h: ∇φ (x_K) · (x_M - x_K) > 0 } + 1/2_{ M∈^h:∇φ (x_K) · (x_M - x_K) = 0 } The indicator ^φ_K means that for any cell K∈^h the sum goes only over the faces (K|L) for which ∇φ (x_K) · (x_L - x_K) > 0. For the Cartesian grid, all the neighboring cells ^h_K can be grouped in pairs M, L ∈^h_K such that x_L - x_K = - (x_M - x_K) and x_L - x_K = ± h e_i for some basis vector e_i, i ∈{ 1, …, d }. We illustrate this idea in Figure <ref> below. This means that for any ∇φ(x_K) that is not parallel to any basis vector {e_i}_i=1^d, the indicator ^φ_K "chooses" all the basis vectors with either plus or minus sign. Hence, the tensor takes the form ∑_L∈^h_Kτ_K|L^h (x_L - x_K) ⊗ (x_L - x_K) _K^φ(L) = h^d ∑_i=1^d e_i ⊗ e_i = |K| . If ∇φ(x_K) is parallel to some e_i for some i∈{1,…,d}, then ^φ_K includes both he_i and -he_i with the coefficient 1/2, which does not change the form of the tensor. The expression above then simplifies to _up,h^*(ρ^h, φ^h) = 1/2∑_K∈^h| ∇φ (x_K) |^2 |K| u^h_K + o(1). Since ∇φ is uniformly continuous on Ω, it holds that | ∇φ (x_K) |^2 = _K | ∇φ (x) |^2 x + o(1). Therefore, the functional admits an integral form _h(ρ^h, φ^h) = 1/2∫_Ω |∇φ(x)|^2 ρ̂^h( x) + o(1) 1/2∫_Ω |∇φ(x)|^2 ρ( x). As for the convergence of the Fisher information, we notice that the assumptions on V and W give |∇𝖰 (ρ̂^h) (x_K)|^2 = _K |∇𝖰 (ρ̂^h) (x)|^2 x + o(1), and therefore, _up,h(ρ^h) = 1/2∫_Ω| ∇𝖰 (ρ̂^h) (x) |^2 ρ̂^h ( x) + o(1). The assertion then follows from the weak-* convergence ρ̂^h ⇀^* ρ in (Ω) and the uniform convergence ∇𝖰 (ρ̂^h) →∇𝖰(ρ) in (Ω). Consider a family {(ρ^h, j^h)}_h>0 of GGF-solutions to the upwind scheme (<ref>) according to Definition <ref> and the generalized gradient structure obtained as the EDP limit in Section <ref>. Let {(ρ̂^h, ^h)}_h>0 be defined as in (<ref>). 
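The preceding convergence can also be checked numerically in a simple setting. The following Python sketch is purely illustrative and not part of the argument: it evaluates the discrete dual dissipation potential at (ρ^h, φ^h) on a uniform periodic one-dimensional grid, assuming the transmission coefficient τ_K|L^h = 1/h and φ^h(K,L) = ∇φ(x_K)·(x_L − x_K) as above, with a uniform reference density, and compares the result with the limit value ½∫_Ω |∇φ|^2 dρ.

import numpy as np

def dual_potential_upwind_1d(n_cells, dphi, density):
    # Discrete dual dissipation potential on a uniform periodic grid of [0, 1),
    # assuming tau_{K|L}^h = 1/h and phi^h(K, L) = dphi(x_K) * (x_L - x_K).
    h = 1.0 / n_cells
    x = (np.arange(n_cells) + 0.5) * h          # cell centres x_K
    rho = density(x) * h                        # cell masses rho_K (midpoint rule)
    u = rho / h                                 # u_K = rho_K / |K|
    total = 0.0
    for step in (+h, -h):                       # the two directed edges leaving each cell
        jump_plus = np.maximum(dphi(x) * step, 0.0)
        total += 0.5 * np.sum(u * (1.0 / h) * jump_plus ** 2)
    return total

dphi = lambda x: 2 * np.pi * np.cos(2 * np.pi * x)   # phi(x) = sin(2 pi x)
density = lambda x: np.ones_like(x)                  # uniform reference density
limit = np.pi ** 2                                   # 0.5 * int_0^1 |phi'|^2 dx
for n in (16, 64, 256, 1024):
    print(n, dual_potential_upwind_1d(n, dphi, density), limit)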
Then, the existence of a subsequential limit pair (ρ, j) ∈𝒞ℰ(0,T) and the convergence specified in Theorem <ref>(1) follow from Lemma <ref>. The convergence of the Fisher information is proven in Theorem <ref>. The liminf inequality for the dissipation potential follows from the limit of the dual dissipation potential shown in Theorem <ref> and a duality argument from <cit.>. In this way, the assertion (2) is proven and it immediately follows that _agg^[s,t](ρ, j) ≤lim inf_h→ 0_up,h^[s,t](ρ^h, j^h)= 0 for every [s,t]⊂[0,T]. On the other hand, the chain rule <cit.> yields _agg^[s,t](ρ,j) ≥ 0 for every [s,t]⊂[0,T]. Therefore, the limit pair (ρ, j) is an (_agg, , ^*)-gradient flow solution of (<ref>) in the sense of Definition <ref>. § PROPERTIES OF THE TILTED DUAL DISSIPATION POTENTIAL The following lemma contains some properties and an integral representation of the harmonic-logarithm mean Λ_H introduced in (<ref>). A function M:_+ ×_+ →_+ is a mean if it is * positively one-homogeneous: M(λ s,λ t) = λ M(s,t) for all s,t∈_+ and λ >0; * bounded by min*s,t≤ M(s,t)≤max*s,t for all s,t∈_+; * jointly concave. The logarithmic mean Λ: _+ ×_+ →_+, Λ(s,t) = ∫_0^1 s^τ t^1-ττ = s-t/log s - log t , s t ; s , s=t . is a mean between the geometric and arithmetic mean √(st )≤Λ(s,t) ≤s+t/2 , with derivatives bounded ∂_1 Λ(s,t) = ∂_2Λ(t,s) and ∂_1 Λ(s,t) = Λ(s,t)(s-Λ(s,t))/s(s-t) . The harmonic-logarithmic mean Λ_H : _+ ×_+ →_+ defined by Λ_H(s,t) = 1/Λ⟨*|1/s, 1/t = st/Λ(s, t) is a mean between the harmonic and geometric mean 2/1/s+ 1/t≤Λ_H(s,t) ≤√(st) with the integral representations Λ_H(a,b) = ∫_0^1 τ/τ/s+ (1-τ)/t = ∫_0^∞s tτ/(τ +s) (τ+t) and derivatives ∂_1 Λ_H(s,t)=∂_2 Λ_H(t,s) = t( Λ(s, t) - t )/Λ(s, t). See, for instance <cit.> for many properties of the logarithmic mean, from which the analogous ones of the harmonic-logarithmic mean follow. The tilt-independent dual dissipation potential _ϵ,h^* in (<ref>) is given in terms of the function α^*_ϵ defined in (<ref>), which we recall here for convenience α_ϵ^*(a, b, ξ) = ϵ∫_0^ξsinh( x/ϵ) Λ_H (a e^-x/ϵ, b e^x/ϵ) x= ϵ^2 α_1 (a, b, ξ/ϵ). Below we prove useful properties of α_ϵ^*. The function α_ϵ^*:_+×_+×→_+ in (<ref>) has the following useful properties: * α_ϵ^* (a, b, ξ) is convex in ξ for fixed a,b>0, with min*a,b≤∂_ξ^2 α_ϵ^* (a, b, ξ) ≤max*a,b; * α_ϵ^* (a, b, ξ) is positively one-homogeneous and jointly concave in (a,b) for fixed ξ; * α_ϵ^* satisfies the following bound: α_ϵ^* (a, b, ξ) ≤ϵ^2 √(ab)( cosh( | ξ/ϵ| ) - 1 )= 1/4√(ab) Ψ^*(2ξ). Moreover, the expansion for ξ≪ 1 is given by α_ϵ^*(a,b,ξ) = Λ_H(a,b) ξ^2/2 + O⟨*|ξ^3/ϵ; * It holds that α_ϵ^*(a,b,ξ) →1/2( a (ξ^+)^2 + b (ξ^-)^2 ) α_0^*(a,b,ξ) as ϵ→ 0 , where ξ^± is the positive and negative part of ξ, respectively. Moreover, | α_ϵ^*(a,b,ξ) - α_0^* (a,b,ξ) | = O(C_a, b, ξ ϵ), where the constant C_a, b, ξ < ∞ depends on a, b, ξ. * The function β_ϵ: _+×_+→_+ defined for the argument ξ = - ϵlog√(b/a) in α_ϵ^* has the representation β_ϵ(a, b) α_ϵ^* (a, b, -ϵlog√(b/a)) = ϵ^2/4∫_a^b ab/z[ 1/Λ(z,a) - 1/Λ(z,b)] z; * The function β_ϵ:_+×_+→_+ defined in (e) is jointly convex, continuous with β_ϵ (a, 0) ϵ^2/4π^2/6 a and, symmetrically, β_ϵ (0, b) ϵ^2/4π^2/6 b, and satisfies the following bounds: ϵ^2/4 (√(a)-√(b))^2 ≤ϵ^2/4⟨*|a-b^2/a+b≤β_ϵ(a, b) ≤ϵ^2/2 (√(a)-√(b))^2; Moreover, the function _+ ×_+ ∋ (a, b) ↦β_ϵ (a^2, b^2) is differentiable. 
* The function α_ϵ^*(a,b,-ϵlog√(b/a) + q / 2 ) has the expansion α_ϵ^*(a,b,-ϵlog√(b/a) + q/2 ) = β_ϵ(a, b) + ϵ/4(a-b) q + q^2/4𝕙_ϵ (a, b, q) with 𝕙_ϵ (a, b, q) ∫_0^1 [a 𝔥(λ q/ϵ) + b 𝔥(-λ q/ϵ)](1-λ)λ, 𝔥(s) = 1/4e^s-1-s/sinh^2(s/2). (a) From the representation of α^*_1 in terms of the harmonic-logarithmic mean, it follows that ∂_ξα^*_1(a, b, ξ) = sinh(ξ) Λ_H (ae^-ξ, b e^ξ) = sinh(ξ) ab/Λ (ae^-ξ, b e^ξ). It also holds ∂_ξ^2 α^*_1(a,b,ξ) = a b/⟨*|a e^-ξ - b e^ξ^2⟨*| a (e^-2ξ-1)+ b (e^2ξ -1) + (a-b)⟨*|loge^-ξ/b - loge^ξ/a , which can be rewritten with the help of the function g(x) = x log x - x +1/(x-1)^2 as ∂_ξ^2 α^*_1(a,b,ξ) = a g⟨*|a/be^-2ξ + b g⟨*|b/a e^2ξ . The convexity follows now by observing that ∀ x∈ [0,1] : 0≤ g(x) ≤ 1 and g(x) + g(x^-1) = 1 and hence the bound min{a,b}≤∂_ξ^2 α^*_1(a,b,ξ) ≤max{ a ,b } , implying the convexity in ξ for fixed a,b>0. (b) The positively one-homogeneity and joint concavity follows from the properties of Λ_H. (c) Let ξ>0. Using the inequality between the harmonic-logarithmic and geometric mean, we obtain α_1 (a, b, ξ) = ∫_0^ξsinh(x) Λ_H (a e^-x, b e^x) x ≤∫_0^ξsinh(x) √(ab) x = √(ab)( cosh(ξ) - 1 ). If ξ < 0, then α_1 (a, b, ξ) = ∫_0^|ξ|sinh(x) Λ_H(ae^x, be^-x) x ≤√(ab)( cosh(|ξ|) - 1 ). Combining the two cases and considering α_ϵ, we get α_ϵ (a, b, ξ) ≤ϵ^2 √(ab)( cosh( | ξ/ϵ| ) - 1 ). As for the asymptotic expansion, we obtain, by definition of α^*_1, α^*_1(a, b, ξ) = ∂_ξ^2 α^*_1(a, b, ξ) |_ξ=0ξ^2/2 + O( |ξ|^3 ) = Λ_H (a, b) ξ^2/2 + O( |ξ|^3 ). Then it follows directly that α^*_ϵ(a, b, ξ) = ϵ^2 α^*_1 (a, b, ξ/ϵ) = Λ_H (a, b) ξ^2/2 + O( |ξ|^3/ϵ). (d) We rewrite α^*_ϵ as α^*_ϵ(a, b, ξ) =ϵ^2 ∫_0^ξ/ϵsinh(x) Λ_H (ae^-x, be^x) x = ϵ^2/2∫_0^ξ/ϵ( Λ_H (a, be^2x) - Λ_H (ae^-2x, b) ) x = ϵ/2∫_0^ξ( Λ_H (a, be^2x/ϵ) - Λ_H (ae^-2x/ϵ, b) ) x. For x > 0, it holds that ϵ/2Λ_H (a, be^2x/ϵ) = ϵ/2ab e^2x/ϵ/a - be^2x/ϵ( loga/b - 2x/ϵ) = ab/a e^-2x/ϵ- b( ϵ/2loga/b - x) ax, and - ϵ/2Λ_H (ae^-2x/ϵ, b) = - ab/a - b e^2x/ϵ( ϵ/2loga/b - x) 0. For x < 0, similarly, we obtain ϵ/2( Λ_H (a, be^2x/ϵ) - Λ_H (ae^-2x/ϵ, b) ) bx. Combining the two cases yields lim_ϵ→ 0α^*_ϵ(a, b, ξ) = _ξ > 0∫_0^ξ ax x + _ξ < 0∫_0^ξ bx x = 1/2( a (ξ^+)^2 + b (ξ^-)^2 ). (e) Direct calculation shows β_ϵ(a, b) = α^*_ϵ(a, b, -ϵlog√(b/a)) = ϵ^2 α^*_1 (a, b, log√(a/b)) = ϵ^2 ∫_0^log√(a/b)sinh(x) Λ_H(ae^-x, be^x) x = ϵ^2/4∫_1^a/b(√(y) - 1/√(y)) ab/Λ( a/√(y), b√(y))1/y y = ϵ^2/4∫_1^a/bab/y[ 1/Λ(a/y, b ) - 1/Λ(a, b y )] y = ϵ^2/4∫_a^b ab/z[ 1/Λ(z, a) - 1/Λ(z, b)] y. (f) The joint convexity of β_ϵ follows from (a) and (b). It is clear that β_ϵ is continuously differentiable in _+ ×_+ since it is defined as an integral of a bounded continuous function. However, on the boundary {0}× [0, +∞) ∪ [0, +∞) ×{0} some partial derivatives become -∞. In the case of (a, b) ↦β_ϵ(a^2, b^2), the directional derivatives are continuous and bounded: 0 ≥∂_1 β_1 (a^2, 1) = - 2a ∫_a^2^1 log z/z (z - 1) z ≥ - 2a ∫_a^2^1 1/z √(z) z = 4a 1/√(z)|_a^2^1 = 4a ( 1 - 1/a) = 4 (a - 1) > -∞. As for the bounds, we begin with the Upper bound. Using the inequality that the harmonic-log­arith­mic mean is less or equal to the geometric mean yields β_ϵ (a, b) ≤ϵ^2 √(ab)∫_0^-log√(b/a)sinh(x) x = ϵ^2 √(ab)( cosh( -log√(b/a)) - 1 ) = ϵ^2/2( √(a) - √(b))^2. Tight lower bound. Since β_1 is positively one-homogeneous it is enough to prove that β_1(a, 1) ≥γ(a) 1/4(a-1)^2/a+1 ∀ a≥ 0. For a = 0 the inequality holds, since β_1(0, 1) = 1/4π^2/6≥1/4 = γ(0). It is left to consider a > 0. We notice that β_1(1, 1) = 0 = γ(1). 
Now we aim to compare the derivatives ∂_a β_1(a,1) and ∂_a γ(a) for a∈ (0,1) and a∈(1,∞). The derivative of γ is ∂_a γ(a) = 1/4(a-1)(a+3)/(a+1)^2 = ∫_1^a 2/(z+1)^3 z We use the representation of β_1 from (e) and apply the change of variables y = z/a in the first part of the integral ∂_a β_1(a, 1) = 1/4∂_a [ ∫_1^1/a1/y Λ(y,1) y - a ∫_a^1 1/z Λ(z,1) z ] = a/Λ(1/a, 1)( -1/a^2) - ∫_a^1 1/z Λ(z,1) z + 1/Λ(a, 1) = ∫_1^a 1/z Λ(z,1) z = ∫_1^a log z/z (z - 1) z . Therefore, ∂_a (β_1(a,1) - γ(a) ) = ∫_1^a [ log z/z (z - 1) - 2/(z+1)^3] z. We are left to show that the integrand is positive, and then the bound follows. For z>1, the integrand is positive, if and only if log z ≥8z(z-1)/(z+1)^3, which can be shown again by comparing the derivatives 1/z - 8-z^2 + 4z -1/(z+1)^4 = (z-1)^2 (z^2 + 14z + 1)/z (z+1)^4 > 0 ∀ z >1. Rough lower bound. This lower bound follows from the inequality between the geometric and arithmetic means (a-b)^2/a+b = ( √(a) - √(b))^2 ( 1 + 2√(ab)/a +b) ≥ 2 ( √(a) - √(b))^2. (g) We apply the second-order Taylor expansion for a function f: f(y) = f(x) + f'(x)(y-x) + (y-x)^2∫_0^1 f”((1-λ)x + λ y)(1-λ)λ to expand the function α^*_ϵ α^*_ϵ(a,b,-ϵlog√(b/a) + q/2) = α^*_ϵ(a,b,-ϵlog√(b/a)) + q/2 ∂_ξ(α^*_ϵ) (a,b,-ϵlog√(b/a)) + q^2/4∫_0^1 (∂_ξ^2α^*_ϵ) (a,b,-ϵlog√(b/a) + λq/2)(1-λ)λ. After some manipulation, we find that (∂_ξα^*_ϵ)(a,b,-ϵlog√(b/a)) = ϵ/2(a-b), (∂_ξ^2α^*_ϵ) (a,b,-ϵlog√(b/a) + q/2) = a 𝔥(q/ϵ) + b 𝔥(-q/ϵ), with 𝔥(s) = 1/4e^s-1-s/sinh^2(s/2). Hence, α^*_ϵ(a,b,-ϵlog√(b/a) + q/2) = β_ϵ (a,b) + ϵ/4(a-b) q + q^2/4∫_0^1 [a 𝔥(λ q/ϵ) + b 𝔥(-λ q/ϵ)](1-λ)λ, therewith concluding the proof. abbrv
http://arxiv.org/abs/2306.03079v1
20230605175341
Machine Learning and Statistical Approaches to Measuring Similarity of Political Parties
[ "Daria Boratyn", "Damian Brzyski", "Beata Kosowska-Gąstoł", "Jan Rybicki", "Wojciech Słomczyński", "Dariusz Stolicki" ]
cs.CL
[ "cs.CL", "91F10 (Primary) 68T50 (Secondary)", "J.4; I.2.7" ]
A]Daria Boratyn 0000-0003-3299-7071 A]Damian Brzyski 0000-0002-6867-1877 A,B]Beata Kosowska-Gąstoł 0000-0003-3555-2828 C,D]Jan Rybicki 0000-0003-2504-9372 A,E]Wojciech Słomczyński 0000-0003-2388-8930 A,B]Dariusz Stolicki 0000-0002-8295-0848 This research has been funded under the Jagiellonian University Excellence Initiative, DigiWorld Priority Research Area, minigrant no. U1U/P06/NO/02.21 and QuantPol Center flagship project. Corresponding author: [email protected] [A]Jagiellonian Center for Quantitative Political Science, Jagiellonian University, Kraków, Poland [B]Faculty of International and Political Studies, Jagiellonian University [C]Center for Digital Humanities, Jagiellonian University [D]Faculty of Philology, Jagiellonian University [E]Faculty of Mathematics and Computer Science, Jagiellonian University Mapping political party systems to metric policy spaces is one of the major methodological problems in political science. At present, in most political science projects this task is performed by domain experts relying on purely qualitative assessments, with all the attendant problems of subjectivity and labor intensiveness. We consider how advances in natural language processing, including large transformer-based language models, can be applied to solve that issue. We apply a number of texts similarity measures to party political programs, analyze how they correlate with each other, and – in the absence of a satisfactory benchmark – evaluate them against other measures, including those based on expert surveys, voting records, electoral patterns, and candidate networks. Finally, we consider the prospects of relying on those methods to correct, supplement, and eventually replace expert judgments. § PRELIMINARIES Spatial models of politics, positing the existence of some metric (usually Euclidean) policy space that bijectively maps to the universe of possible sets of political views and preferences, are central to many theoretical and empirical models of political behavior of voters, legislators, parties, and other political actors. For instance, such models can be used to explain election outcomes: we can assume that each candidate and each voter is positioned somewhere in that policy space, and that voters prefer candidates closer to their own ideal positions <cit.>. Similarly, spatial models can be used to model party competition (parties seek positions that would attract most voters) <cit.>, legislative decisionmaking (legislators vote for alternatives that are closer to their ideal points than the status quo) <cit.>, or coalition formation (coalitions partners seek to minimize the distance of the expected coalition position and their own ideal point) <cit.>. A particularly simple example of a policy space is one-dimensional ordered metric space <cit.> which, in political science, is usually associated with the traditional left-right spectrum. Clearly, to evaluate and apply spatial models it is essential to estimate the positions of the actors involved, and political parties are among the most important here <cit.>. The prevalent approach to this problem is still based on more or less structured, but ultimately qualitative human assessment, whether in the form of expert surveys, such as the Chapel Hill Expert Survey <cit.>, V-DEM expert survey <cit.>, Global Party Survey <cit.>, or others <cit.>, or of human coding of party political programs and electoral manifestos (see, e.g., ). 
The former approach is subject to coder bias and subjectivity, with the resulting problems of reproducibility and reliability <cit.>. There are also data availability issues: especially past party positions cannot be reliably coded <cit.>. The document-based approach fares somewhat better in those respects (although it still involves subjective judgments) <cit.>, but is more time-consuming <cit.>. A natural solution would be to replace human coding with computerized content analysis <cit.>, and such attempts are already quite numerous. See, e.g., <cit.>. Most of them follow the political science focus on programs and manifestos, being based on textual comparisons of such party documents. However, they have developed in relative isolation from rapid advances made in the field of natural language processing within the last 10 years. Thus, there have been strikingly few attempts to use methods like word embeddings or large language models for party positioning or similarity measurement, and no systematic comparisons or evaluations of their performance in this field. This is the gap we aim to fill. Our main objective is threefold: we seek to review measures of party program similarity (both those already applied in this field and those used to evaluate textual similarity in other fields), analyze their correlation patterns, and evaluate their performance on real-life data from Poland (2001-2019). Because there is no single benchmark to use in that evaluation – party positions and party similarity are not merely latent variables, but quite imprecisely defined and grasped by researchers – we evaluate them against expert surveys, as well as a number of non-programmatic party similarity measures (voter behavior, candidate networks, and voting and coalition patterns). §.§ Contribution The principal contribution of this paper lies in systematically testing, comparing, and benchmarking textual similarity measures and algorithms developed in natural language processing and stylometry as applied to the similarity analysis of political party programs. As far as we are aware, for most of the said methods this is a pioneering application in this field. We also experiment with different hyperparameter choices and document length normalization methods designed to correct for differences in input lengths. Finally, in light of conceptual difficulties in defining party similarity, we introduce and develop several benchmark measures (coalition, genealogical, and electoral similarity indices are first introduced here). §.§ Applications compare political parties and party systems both cross-nationally and over time (Mair 2001) compare party systems in terms of such important indicators as the degree of polarisation, the direction of competition etc. 
the ability to locate parties in common space, usually in left–right terms, has been a central element in some typologies of party systems (especially Sartori 1976) allows us to assess why certain coalitions of parties are more likely to form rather than others, and to test for the extent to which policy or ideological affinity across parties is a factor in explaining coalition formation allows us to compare party systems cross-nationally and over time with respect to the role played by policy or ideology in promoting alliances between different parties, as well as in promoting or restraining the fractionalisation of party systems the capacity to locate political parties within a common space helps us to understand the working and effectiveness of representative government. For example, by locating parties in this way, and by comparing their positions to the preferences expressed by voters, we can gain a real and measurable sense of the extent to which these two core components of representative government are mutually congruent compare party systems, and political systems more generally, in terms of their capacity to match electoral preferences and party policies (see, for example, Klingemann 1995; Schmitt and Thomassen 2001) compare the positions which parties and governments advocate with the policies which they produce, thus revealing the extent to which the democratic mandate in general proves meaningful §.§ Prior Work Early research on the use of natural language processing for party positioning has been dominated by a single-minded focus on topic modeling <cit.>, at least initially mostly dictionary-based <cit.>. Later, the standard toolbox of computerized party program analysis has been augmented with two ideological scaling algorithms – WordScore and WordFish. WordScore is a supervised scaling / classification method developed in 2003 by Benoit, Laver, and Lowe <cit.>, and resembling a naive Bayes classifier. It calls for estimating, for every (non-stop) word in the corpus, a score vector, i.e., a vector of probabilities that a given word appears in connection with a given label in the training set. For prediction, we average word score vectors over an input text. The result can be used either for scaling (with each coordinate corresponding to one dimension of the scaling space) or for classification (with the label corresponding to the largest coordinate being the predicted one). WordFish, developed by Slapin and Prokosch in 2008 <cit.>, is a term frequency-based method for unsupervised single-dimensional scaling. It is based on an assumption that word frequencies follow a Poisson distribution with the rate parameter depending on (latent) party position. Both the latent variables and coefficients are estimated using the expectation maximization algorithm. This method has been used in <cit.> for ideological scaling of legislative speeches as well. There exists voluminous literature on agreement between expert surveys and document coding (usually combined with some scaling method) <cit.>, as well as on comparing the two with other sources of data, usually voter and party elite surveys, party self-placements, or – more recently – voting advice application data <cit.>. Relatively few such studies incorporate behavioral data such as roll call voting records <cit.> or coalition formation patterns <cit.>. Finally, some of the most recent works evaluate computerized content analysis methods, but none of them go beyond WordScores and WordFish <cit.>. 
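For illustration, the WordScores step just described can be condensed into a short script. The sketch below is a schematic re-implementation with toy, hypothetical reference texts and positions (it is not the reference implementation by Benoit, Laver, and Lowe): word scores are estimated from reference documents with known positions, and a new document is then scored by a frequency-weighted average of its word scores.

from collections import Counter

def word_scores(reference_texts, reference_positions):
    # score of word w: sum_r P(r | w) * A_r, with P(r | w) proportional to the
    # relative frequency of w in reference text r
    counts = [Counter(doc.lower().split()) for doc in reference_texts]
    totals = [sum(c.values()) for c in counts]
    scores = {}
    for w in set().union(*counts):
        rel = [c[w] / t for c, t in zip(counts, totals)]
        norm = sum(rel)
        scores[w] = sum(p / norm * a for p, a in zip(rel, reference_positions))
    return scores

def score_text(text, scores):
    # frequency-weighted mean of the scores of known words in a 'virgin' text
    counts = Counter(w for w in text.lower().split() if w in scores)
    n = sum(counts.values())
    return sum(f * scores[w] for w, f in counts.items()) / n if n else float("nan")

# toy, hypothetical reference texts anchored at -1 (left) and +1 (right)
refs = ["state welfare solidarity workers", "market freedom enterprise lower taxes"]
S = word_scores(refs, [-1.0, +1.0])
print(score_text("welfare for workers and lower taxes", S))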
Researchers tend to find high levels of agreement between expert surveys and other data sources, except that manifesto data diverge (although that may be the result of imperfect scaling rather than inherent problems) <cit.>. § TEXTUAL SIMILARITY MEASURES (TEXT / STYL) The canonical view of the spatial theory of voting is that dimensions of the policy space correspond to ideological cleavages or, at least, to major policy issues <cit.>. Accordingly, party similarity under that view should be understood in terms of ideological or programmatic alignment. Such an alignment, in turn, can be discovered by careful analysis of party political programs, electoral manifestoes, and other ideological declarations. Accordingly, it is to this category of sources that we shall first turn. The chief advantage of this approach is data availability: virtually every party has some kind of program or manifesto document which can be used for the purpose of comparing it with other parties. On the other hand, there are two fundamental difficulty, one conceptual and one methodological. The conceptual difficulty arises from the possibility of a disconnect between declarations and actual political practice. The methodological issue lies in the fact that qualitative analysis of party programs is both labor-intensive and inherently subjective. Like several authors before us <cit.>, we seek to avoid the latter problem by applying recent advances in the field of natural language processing, especially in transformer-based large language models. This allows us both to automate the process of analyzing party programs, and to eliminate subjectivity in assessments. Obviously, there is an inherent cost to this approach: since natural language processing models cannot `understand' programmatic documents and the views described therein, our analysis of such views using those methods will be necessarily indirect. §.§ Word Frequency Distributions The earliest algorithmic approach to textual similarity is to represent documents as (unordered) collections of words, informally referred to as bags-of-words <cit.>. Obviously, such a representation involves loss of contextual information carried by the segmentation of text into paragraphs and sentences and, more importantly, by their grammatical structure arising from word orderings. However, it is frequently employed for the sake of simplicity, and research suggests it exhibits relatively good performance, see, e.g., <cit.>. Without further loss of information we can map any bag-of-words to a probability distribution over individual words, thereby reducing the problem of measuring their similarity to a well-known problem of measuring similarity of discrete probability distributions <cit.>. Thus, the final representation of our corpus is an n × |V| matrix, where n is the number of documents (party programs) and V is the set of all words appearing in the corpus. While researchers employing bag-of-words methods and word frequency distributions differ in what text preprocessing techniques they apply before mapping the input text to a bag-of-words <cit.>, we opt for more extensive preprocessing in the form of case-normalization (i.e., lowercasing), lemmatization, and stop words removal. This is because party programs are usually available only in their native form, i.e., in original languages, many of which are inflected languages, and because they are relatively short. 
Hence, in the absence of lemmatization, entropy of the word frequency would be artificially inflated, potentially distorting the results. We experiment with several variants of word frequency-based methods: two word weighting methods and three metrics. The idea underlying word weighting is that variance in word frequency across documents increases with the expectation, wherefore more prevalent words have a much greater effect on the results than those less prevalent but more distinctive to the corpus or to individual documents <cit.>. The standard correction for this, originating in the field of information retrieval but commonly used across all NLP fields, is the term frequency–inverse document frequency (TF-IDF) measure, where each word is assigned a weight decreasing with probability that it occurs at least once within a random document in the corpus, and the weighted word vectors are normalized in such manner that their L_2 norms equal 1 <cit.>. For our experiments, we test both unweighted word frequencies (TF) and TF-IDF. With respect to the choice of a similarity measure, we experiment with a number of standard functions. We denote the frequency matrix by 𝐖, and its i-th column by 𝐖_i. L_1 (Manhattan) metric d_L_1(i, j) := ‖𝐖_i - 𝐖_j ‖_1, which for stochastic vectors is identical up to a multiplicative constant to the total variation distance d_TV between corresponding probability distributions; L_2 (Euclidean) metric d_L_2(i, j) := ‖𝐖_i - 𝐖_j ‖_2; cosine similarity s_cos(i, j) := (𝐖_i ·𝐖_j) / (‖𝐖_i‖_2‖𝐖_j‖_2). §.§ Stylometry Stylometry is usually regarded as use of statistical analysis of a text aimed at identifying its authorship by discerning author-specific style <cit.>. However, the body of scholarship on stylometric analysis of literary texts convincingly demonstrates that stylometry can also be applied to identify variables going beyond authorship, such as genre <cit.>, chronology <cit.>, or overall sentiment <cit.>. Accordingly, it is interesting to test whether party ideology can also be discerned through stylometric analysis. The usual approach in stylometry is to compare frequency distributions of N words that are most frequently used in the given textual corpus. This emphasis on frequently used words, including parts of speech commonly regarded as stop words in other NLP fields (such as conjunctions and prepositions), is particularly characteristic. Two basic parameters for a stylometric similarity measure are the choice of the number of most frequently used words N and the choice of the metric. With regard to the former, we experiment with N = 50, 100, and 200, noting that 100 is fairly common in stylometric analyses. 
With regard to the latter, we test eight metrics: cosine distance (styl-cos) d_cos(i, j) = 1 - s_cos(i, j), Burrows' delta (styl-delta) d_Δ(i, j) = ‖ z(𝐖)_i - z(𝐖)_j ‖_1, where z: ℝ^n × |V|→ℝ^n × |V| is a row-wise standardization <cit.>, Argamon's rotated quadratic delta (styl-arg), which differs from Burrows' delta in that L_2 rather than L_1 norm is used and the word frequency matrix is rotated using eigenvalue decomposition according to the word frequency covariance matrix calculated from the whole corpus <cit.>; Eder's delta (styl-eder), which differs from Burrows' delta by applying an inverse-frequency-rank weight to words <cit.>; cosine delta (styl-cosd), which differs from Burrows' delta in that cosine rather than L_1 distance is used <cit.>; cross-entropy (styl-entropy) d_H(i, j) = -∑_k=1^|V|𝐖_iklog𝐖_jk <cit.>; minmax distance (styl-minmax) min{𝐖_i, 𝐖_j} / max{𝐖_i, 𝐖_j}, where min and max are taken element-wise <cit.>; Eder's simple distance (styl-simple) d_L_1(i, j) := ‖√(𝐖_i) - √(𝐖_j)‖_1, with the square-root taken element-wise <cit.>. For all stylometric computations, we use package for R by Eder et al. <cit.>. §.§ Static Word Embeddings Methods based purely on word frequency do not account for semantics. In essence, they correspond to the assumption that the semantic metric on the space of words is discrete, i.e., that all distinct words are equidistant. Clearly, this assumption is a substantial oversimplification. Accordingly, we also use methods that account for semantic rather than lexical similarity, starting with methods employing distributional word embeddings. Such embeddings are injective functions mapping each word to an element of some finite-dimensional metric space (ℰ, d) in such manner that distances in (ℰ, d) decrease as the corresponding words become more semantically similar. Semantic similarity, in turn, is operationalized on the basis of co-occurrence statistics, invoking Firth's distributional hypothesis <cit.>, according to which the more semantically similar two words are, the more likely they are to appear interchangeably in the same context. We begin with static embedding methods, characterized by being context-invariant, i.e., always representing identical words in the same manner. We focus on three arguably most common embedding algorithms: the original word2vec algorithm by Mikolov et al. <cit.> (see also <cit.>); FastText algorithm by Bojanowski et al. <cit.>, designed to account for morphological properties of the space of words; and GloVe algorithm by Pennington et al. <cit.>. For all three, the codomain ℰ is a high-dimensional linear space. We experiment with different values of ℰ: 100, 300, 500, and 800 for word2vec and FastText, and 300 and 800 for GloVe. The choice of these values has been dictated by the availability of pretrained models. How word embeddings can be used to compare party programs (or other texts, for that matter)? One standard approach is to map the whole text to vector representation word-by-word, and then aggregate by averaging over all words. Another one is to use word mover's distance (WMD) <cit.>, which is essentially the Wasserstein metric over the embeddding codomain. Estimation of this distance is a common variant of the Kantorovich-Monge transportation problem <cit.>, which we solve using the displacement interpolation algorithm <cit.>. We experiment with both approaches, yielding us a total of twenty methods per corpus. 
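Both aggregation strategies can be computed with standard tooling. The sketch below is illustrative only: the vector file path and the token lists are placeholders, it assumes pretrained vectors stored in the word2vec text format (e.g., Polish FastText vectors), and the word mover's distance call requires gensim's optimal-transport backend. It derives a document vector by averaging word embeddings and, alternatively, compares two token sequences by word mover's distance.

import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("polish_vectors.vec")  # placeholder path

def mean_vector(tokens):
    # average the embeddings of in-vocabulary tokens (zero vector if none found)
    vecs = [kv[t] for t in tokens if t in kv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(kv.vector_size)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

doc_a = ["gospodarka", "podatki", "rodzina"]   # stand-ins for lemmatized programs
doc_b = ["gospodarka", "wolnosc", "rynek"]

print(cosine(mean_vector(doc_a), mean_vector(doc_b)))   # averaged-embedding similarity
print(kv.wmdistance(doc_a, doc_b))                      # word mover's distance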
§.§ Transformer-Based Language Models More advanced language models incorporate contextual information when mapping words to their vector representations. While formerly such models were based on recurrent neural networks or long short-term memory (LSTM) networks <cit.>, since ca. 2018 transformers (self-attention-based networks) <cit.> have been widely regarded as the state-of-the-art solution <cit.>. Most transformer-based models are still word embedding models, although capable of utilizing contextual information from the input text. We experiment with four such models: GPT-2 <cit.> (trained using a left-to-right encoder), RoBERTa <cit.> (trained using a bidirectional encoder), BART <cit.> (trained using a composite noising scheme), and Longformer <cit.> (with a linearly-scaling attention mechanism, which enables it to admit longer token sequences for context). In addition, we consider large language models capable of mathematically representing variable-length chunks of text (usually sentences) rather than individual words. These include Sentence-BERT <cit.>, using two BERT <cit.> encoders and a pooling model; the Universal Sentence Encoder <cit.>, combining a transformer base model with a deep averaging network; and DefSent <cit.>, trained on definition sentences from dictionaries. In the present article we use the SBERT-based Sentence Transformers library <cit.>, experimenting with models trained using MPNet <cit.> and DistilRoBERTa <cit.>. For aggregation of the resulting representations, we use the two methods that are most common in the transformer-based models literature: averaging over words (or sentences, respectively) (mean pooling), and taking an elementwise maximum over the same (max pooling). §.§ Methods for Dealing with Length Differences Significant differences in text length can be a major source of distortion in most if not all of the methods described in the preceding subsections. However, such differences are common among political party programs. For example, in our reference dataset the ratio of the lengths (in characters) of the longest and shortest programs equals approx. 246, and the standard deviation of the natural logarithm of text length is 1.31. While differences at a single point in time tend not to be that extreme (usually a max-min ratio on the order of 10 to 30), they are still sufficiently large to raise doubts about comparability. To assuage those doubts, we experiment with two methods for dealing with text length differences: random sampling and summarization. We use two kinds of random sampling techniques. For stylometry, we sample individual words uniformly with replacement and average the results over 256 samples (leveraging the sampling procedure built into the package). For other methods, we divide the text into sentences using a sentence segmentation library for Python <cit.>, uniformly sample 120 sentences, and average the results over 256 samples. For summarization we use the following algorithm. First, the text is divided into 25 connected chunks in such a manner that the difference in the number of sentences between the longest and shortest chunks is at most 1. Second, from any chunk of more than 4 sentences we choose exactly 4 sentences whose vector representations under the TF-IDF transform are closest to the TF-IDF vector of the whole chunk. From shorter chunks, we simply choose all sentences. The summary is obtained by concatenating all chosen sentences in the order in which they appear in the text.
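A schematic version of this summarization procedure is given below. It is a sketch rather than the exact code used in our experiments: scikit-learn's TfidfVectorizer, fitting it on the sentences of the document at hand, and representing a chunk by the TF-IDF vector of its concatenated sentences are our assumptions, and sentence segmentation is taken as given.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, n_chunks=25, per_chunk=4):
    # keep per_chunk sentences per chunk whose TF-IDF vectors are closest to the
    # TF-IDF vector of the whole chunk; preserve the original sentence order
    chunks = np.array_split(np.arange(len(sentences)), n_chunks)
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(sentences)          # one row per sentence
    keep = []
    for idx in chunks:
        if len(idx) == 0:
            continue
        if len(idx) <= per_chunk:
            keep.extend(idx)
            continue
        chunk_vec = vectorizer.transform([" ".join(sentences[i] for i in idx)])
        sims = cosine_similarity(tfidf[idx], chunk_vec).ravel()
        keep.extend(sorted(idx[np.argsort(-sims)[:per_chunk]]))
    return " ".join(sentences[i] for i in sorted(keep))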
The advantage of this procedure lies in the fact that every sentence in the summary appears in the original text. Generative summarization would threaten to introduce artifacts that could disrupt some of our methods. § BENCHMARK MEASURES The fundamental problem in choosing a benchmark for party similarity measures lies in the fact that the very concept of party similarity – or the dual concept of party position in policy space – is quite fuzzy and only very imprecisely grasped by researchers. Hence, we have no objectively correct similarity measure to benchmark against. Instead, we test program similarity measures against standard methods in the field (various expert surveys) as well as against similarity measures for other areas of party activity (legislative voting, coalition formation, candidate selection, electoral campaigning). By the assumptions of spatial models, all of those should be correlated with proximity in the policy space, and therefore also with program similarity. The basic challenge here is that those assumptions might not be fully (or even at all) satisfied. Domain experts tend to recognize that party similarity is multidimensional. Moreover, it might very well be the case – indeed, many if not most political scientists would agree that it is the case – that party programs are only imperfect representations of party views of policy, the divergence being attributable to strategic considerations: parties may include issues and promises that are not intrinsically important to them, but respond to current concerns of the electorate, and may obscure their positions on other issues if they judge such positions to be liabilities. This is likewise true of all other dimensions of party similarity. As for expert opinions, divergence between expert judgments and textual analysis of political programs may occur because the former incorporate other dimensions as well, or because the former are biased or distorted by misperception of actual party objectives. Accordingly, extreme care is needed in interpretation of the benchmarks. § POLICY SIMILARITY As noted above, relying on political programs for measurements of party similarity is open to an objection that parties might opt to say one thing while doing something very different. According to the view, programs and manifestoes are merely a masquerade for public consumption, and reliance on them fails to yield an accurate picture of party policy. Proponents of this position would prefer to rely on the actual decision record of a party to discover its policy views and positions. In this section, we shall discuss several similarity measures based on such decision records in the form of parliamentary roll call vote results and coalition formation decisions. §.§ Expert Surveys As noted in Sec. 1, expert surveys are the standard source of party positioning data in political science. They are for the most part semi-structured: experts are asked to position the party on some given ordinal or interval scale for a number of issues defined in advance by survey authors (for instance, the Chapel Hill Expert Survey asks experts to position parties according to their views on economic policy, social and cultural issues, European integration, immigration, environmental sustainability, civil liberties, deregulation, etc.). In general, experts are not given any further instructions on how to map specific party positions to points on the survey scale. 
The principal advantage of expert surveys lies in the fact that they are holistic: experts can integrate all kinds of different data sources and have maximum flexibility in aggregating them <cit.>. On the other hand, the major weakness are reliability concerns <cit.>. The very flexibility and lack of precise constraining criteria makes expert assessments less comparable and therefore more difficult to aggregate <cit.>. There are also obvious risks of experts' biases and misperceptions. Another weakness is connected with the arbitrary choice of policy dimensions. Especially in cross-national surveys it may lead to neglect of country-specific cleavages and to overrating of others that are not particularly relevant to a given country (e.g., the status of ethnic minorities is not a politically relevant issue in some countries). This may be remedied, but only to a limited extent, by relying in turn on expert surveys on policy dimensions <cit.>. Finally, there are issues related to the time horizon. On the one hand, political scientists are able to assess merely current location of a given party in a party system, but not its past positions. On the other hand, they are often unable to abstract away from past behavior and ideological rooting of a party <cit.>. §.§.§ Chapel Hill Expert Survey The leading expert survey on party positions is the Chapel Hill Expert Survey <cit.>, dating back to 1999. The first survey was conducted in 1999 and only included 14 West European countries, but subsequent iterations in 2002, 2006, 2010, 2014, and 2019 quickly expanded its scope. The latest survey in 2019 covered all 28 EU member states (including the UK), as well as several non-EU states. Between 1999 and 2019, the number of national parties included in the CHES dataset increased from 143 to 268. The experts assess party positions on general left-right ideological axis, economic left-right axis, and the progressive-conservative axis (GAL-TAN), as well as on more specific issues such as European integration, immigration, or environment. We consider four benchmarks based on CHES: lrgen absolute difference of the values of CHES lrgen variables, defined as `position of the party ... in terms of its overall ideological stance'; lreco absolute difference of the values of CHES lrecon variables, defined as `position of the party ... in terms of its ideological stance on economic issues'; galtan absolute difference of the values of CHES galtan variables, defined as `position of the party ... in terms of their views on social and cultural values'; ch2d L_2 (Euclidean) distance of the points in a two-dimensional space defined by CHES lrecon and galtan variables. §.§.§ V-DEM V-DEM (Varieties of Democracy Project) is a large comparative expert survey of different aspects of the functioning of democracy V-DEM expert survey <cit.>. One of its component parts is V-PARTY, a survey of parties and party systems, containing data on 3467 parties from 178 countries, in some cases dating back as early as 1900. We consider one benchmark based on V-DEM: vdem L_2 (Euclidean) distance between vectors of V-PARTY ideological variables. §.§.§ Global Party Survey Global Party is one of the newer major party surveys, initiated by Norris <cit.>. It includes data about 1043 parties from 163 countries. We do not use Global Party Survey as a benchmark, because data from this source is only available for 2018. 
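The CHES-based benchmarks reduce to simple arithmetic on the survey variables. A minimal sketch follows; the party identifier column and the structure of the input data frame are assumptions on our part, while the variable names lrgen, lrecon, and galtan follow the CHES codebook cited above.

import numpy as np
import pandas as pd

def ches_benchmarks(ches: pd.DataFrame, party_a: str, party_b: str) -> dict:
    # 'party' as the identifier column is an assumption; one row per party-year
    ches = ches.set_index("party")
    a, b = ches.loc[party_a], ches.loc[party_b]
    return {
        "lrgen": abs(a["lrgen"] - b["lrgen"]),
        "lreco": abs(a["lrecon"] - b["lrecon"]),
        "galtan": abs(a["galtan"] - b["galtan"]),
        "ch2d": float(np.hypot(a["lrecon"] - b["lrecon"], a["galtan"] - b["galtan"])),
    }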
§.§ Manifesto Research Project (MARPOR) The leading document-based party positioning effort is the Manifesto Research Project, currently financed by a long-term funding grant from the German Science Foundation (DFG) as Manifesto Research on Political Representation <cit.>. It continues the work of the Manifesto Research Group (MRG 1979-1989) and the Comparative Manifestos Project (CMP 1989-2009). The project has generated a data set based on the content analysis of electoral manifestos of the major political parties in mainly the OECD and CEE countries. It covers over 1000 parties from 1945 until present in over 50 countries on five continents. To create the data set, trained native-language experts are asked to divide the electoral programs into statements (sentences or quasi-sentences, each containing a certain idea or meaning) and to allocate these quasi-sentences into a set of policy categories. This coding scheme comprises 56 categories that are divided into seven domains. The coding outcome is a single topic distribution vector for each manifesto. The theoretical basis of the MARPOR approach lies within the salience theory that understands competition among parties in terms of the distinct emphases the parties place on certain policy areas <cit.>. Quality and reliability of MARPOR data has been assessed by numerous scholars as relatively good, albeit subject to certain caveats <cit.>. Most domain experts either believe that policy spaces are low-dimensional, or at least prefer to work with such spaces for the sake of simplicity and interpretability, and therefore find MARPOR's 56-dimensional topic distribution vectors not fully satisfactory. Accordingly, there exists quite extensive literature on the subject of scaling MARPOR data (or other topic distribution data). The initial Manifesto Research Group approach to this problem was to use factor analysis for scaling <cit.>, but because of sampling adequacy problems caused by the number of variables exceeding the number of observations, as well as interpretability issues, the authors ultimately settled on a much simpler solution – the RILE (right-left) indicator, defined as a linear combination of a subset of coordinates assigned either positive or negative unit coefficients <cit.>. Initial values of coefficients were assigned according to the a priori judgment of domain experts, but factor and correlation analysis was then used to refine those assignments (although in a computer-assisted rather than purely algorithmic manner) <cit.>. While commonly used (see, e.g., <cit.>), RILE has met with extensive criticism <cit.>, and a number of alternatives have been proposed, ranging from nonlinear transforms of the RILE measure <cit.> and different coefficient assignment methods <cit.>, to more sophisticated statistical techniques such as principal factor analysis <cit.>, factor analysis on Q-transformed dataset <cit.>, structural equation modeling <cit.>, and latent variable analysis <cit.>. A significant barrier to adoption of the latter class of methods, however, lies in the fact that they learn dimensions of the policy space from the data rather than permit the researcher to specify them <cit.>. We consider two benchmarks based on MARPOR: marpor cosine similarity of MARPOR topic distribution vectors; rile absolute difference of the values of the rile variable. 
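Similarly, the two MARPOR-based benchmarks can be computed directly from the coded data. The sketch below assumes that the 56 per-category salience shares have already been extracted into numeric vectors and that the rile score is available for both parties.

import numpy as np

def marpor_benchmarks(topics_a, topics_b, rile_a, rile_b):
    # topics_*: 56-dimensional vectors of MARPOR category shares; rile_*: RILE scores
    ta, tb = np.asarray(topics_a, float), np.asarray(topics_b, float)
    cos = float(ta @ tb / (np.linalg.norm(ta) * np.linalg.norm(tb)))
    return {"marpor": cos, "rile": abs(rile_a - rile_b)}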
§.§ Voting Agreement (vote-kappa) Applying a spatial model of politics to legislative decision-making, we can consider a parliamentary vote on a contested issue as equivalent to a bisection of the policy space. It follows that, on average, parties close to each other in that space should vote in agreement more frequently than those distant from each other. Conversely, agreement in voting patterns is likely to imply policy proximity. Accordingly, we can treat voting agreement as a possible benchmark for our program similarity measures. To quantify voting agreement between two parties we make some general assumptions. Firstly, we assume that (roll-call) voting in the parliament is ternary, with the 'abstain' (A) option located exactly halfway between the 'no' (N, nay) and 'yes' (Y, yea) options. This leads to a symmetric three-by-three agreement matrix quantifying the similarity of the votes of two MPs in a particular vote. The matrix has values 1 on the diagonal, i.e., if the votes cast are identical, 1/2 for the pairs (A/N) or (A/Y), and 0 if the votes are opposite (N/Y). Secondly, for each vote we calculate the mean value of this agreement index over a pair of voters drawn at random from the two parties. Thirdly, we average the index thus obtained over all votes, with weights proportional to the products of the turnout (participation) of both parties in a given vote. Finally, we use the technique invented by Cohen <cit.> and later modified by Vanbelle <cit.> to exclude the possibility of agreement occurring just by chance between the votes of both parties, obtaining in this way the so-called modified κ coefficient <cit.>. There remains one question: which votes should we consider in the calculation of the κ coefficient? One option is to look backward from the point in time at which we compare parties, in essence assuming the perspective of a voter at an election, able only to assess the past track record. Another option is to look forward, assessing how the party is carrying out its declarations. Both approaches appear to us equally valid, so we aggregate them into one by averaging. One obvious weakness of using voting records as a benchmark lies in the natural incompleteness of the data: parties that are not represented in the legislature in a given term have no voting record. If only forward or backward data are missing, we omit the averaging and just take the other value. If both forward and backward data are missing, we omit this benchmark for a given party. §.§ NOMINATE See https://cran.r-project.org/web/packages/wnominate/vignettes/wnominate.pdf for a brief introduction. §.§ Coalition Patterns (coal) The organizing principle of interparty interactions in most parliaments is the government-opposition divide: most votes divide parties into those supporting and those opposing the government, and there exists a coalition of parties that consistently vote with the government. However, if we assume parties to be rational actors that maximize the proximity of voting outcomes to their policy positions, it follows that coalitions should form between parties that are close to each other. Accordingly, from coalition formation patterns we should be able to make inferences about party proximity. The simplest conceivable measure of coalition-based similarity is a Boolean one that assumes 1 if two parties are coalition partners and 0 otherwise.
But this measure fails to account for the fact that failure to form a coalition will frequently stem not from interparty distance but rather from the fact that such coalition would not command a majority. Thus, proximity of opposition parties would be consistently underrated. One possible solution is to treat both coalition partners and co-opposition parties in the same manner. However, this might in turn overrate similarity of the opposition parties: two parties may be in the opposition together not because they are close to one another, but because they are both distant from the governing parties. An enemy of one's enemy is not necessarily one's friend. To reflect this, the ternary measure of coalitional similarity assigns 1 to coalition partners, 1/2 to parties that are together on the opposition side, and 0 to parties that are on different sides. Since we compare parties at the time of a general election, we ascertain this value for every day of the preceding and succeeding parliamentary terms, calculate a day-by-day average (assuming that more durable coalitions imply greater similarity) for each term, and average the two values together. Fix some reference day t and bandwidths h_-, h_+∈ℕ. The ternary measure of coalition similarity of parties x and y is given by: C_t, h_-, h_+ (x, y) := ∑_i = t - h_-^t + h_+ w_ (t - i)(C_i(x) C_i(y) + 1/2 O_i(x) O_i(y) ), where C_i, O_i are, respectively, characteristic functions of the coalition and opposition as of date i, w_x := h_x^-1 if h_x≠ 0 and 0 otherwise, and w_0 := w_-. Because we compare parties at the time of a general election, we fix h_- as the length of the preceding parliamentary term and h_+ as the length of the succeeding parliamentary term (or such part thereof for which data are known). §.§ Genealogical Similarity (cand-gen) While political scientists frequently follow the constructivist paradigm, treating parties as at least quasi-unitary actors distinct from the collection of their members, it is rather difficult to imagine party position in the political space to be wholly independent from the positions of its members, and especially from the positions of the party elite. At the same time, while some party systems have been stable for generations, (see, e.g., United States, Australia, Switzerland, or Japan, and to a more limited extent Germany and United Kingdom), others are in flux (France, Italy, Poland). In the latter, many politicians moved through several parties in the course of their careers. If, on the average, politicians from the same party are closer to each other than those from different parties, we would expect parties whose members (or at least elites) come from the same past party to be more similar than those whose members do not have such a shared background. This concept of genealogical similarity is used to define our next benchmark. As a first step, we construct a directed genealogical graph 𝒢. Let a set of elections in a given jurisdiction be indexed by L ∈ℕ, let P(i) for i ∈ L be the set of all parties contesting the i-th election, and let party be identified with the set of its candidates. For the purpose of this definition, we ignore party continuity, so even if a party X contested an election i and then an election j, we treat X as of i and X as of j as distinct entities. 
The set of vertices of 𝒢 equals ⋃_i ∈ L P(i), and an edge exists from p to q if and only if p ∩ q ≠∅ and there exists i ∈ L such that q ∈ P(i) and p ∈ P(i+1), i.e., the two parties have common candidates and contested consecutive elections. A vertex x ∈ V(𝒢) is an ancestor of y ∈ V(𝒢) if and only if there exists a path in 𝒢 from x to y. Each edge p → q in the genealogical graph is assigned a weight:
* for countries using non-party-list electoral systems, the weight is equal to |p ∩ q| / |p|, i.e., the proportion of candidates in p that belonged to q,
* for countries using party-list electoral systems, the weight is equal to w(p ∩ q) / w(p), where w is an additive measure on p such that w({c}) = r_c^-1, where c ∈ p is a candidate and r_c is that candidate's position on the party list. Intuitively, this is equivalent to the non-party-list case, except that candidates are weighted inversely to their position on the party list in the later election.
The weight of a path in 𝒢 equals the product of its edge weights. For any fixed parties p, q we denote by π(p, q) the shortest path from p to q that is maximal in terms of weight. The genealogical similarity measure of parties x and y is given by: G(x, y) = ∑_z ∈ A(x, y)min{ w(π(x, z)), w(π(y, z)) }, where A(x, y) is the set of common ancestors of x and y.
§.§ Electoral Similarity (elec-cor)
As applied to electoral behavior, the spatial model posits that the electorates of two parties that are close to each other should be similar in terms of their positions in the policy space. Accordingly, we would expect the vote shares of two similar parties to be correlated. Hence, our final benchmark is the electoral similarity measure, which for any two parties equals the Pearson correlation coefficient of their municipal-level vote shares as of the most recent national parliamentary election, with the correlation taken over all municipalities.
§ DATA
Finding a good dataset for testing party similarity measures is surprisingly difficult, as much of the needed data is only available in digital form for quite recent elections. While program texts are available from MARPOR <cit.>, benchmark data are incomplete and scattered over multiple sources. Our ideal dataset should cover several electoral cycles and include multiple parties per election. These conditions are satisfied by a Polish electoral and party database for the 2001-2019 period, which includes a collection of digitized program texts (originally from <cit.>), a candidate database with personally unique keys that allow us to track candidates between elections, a database of precinct-level election results, and a dataset of legislative roll-call voting records. The dataset consists of 41 party electoral programs, which gives us 820 distinct pairs of programs to compare. We calculate inter-measure correlations for all such pairs. However, because several of our benchmarks can only be defined for parties existing at the same moment in time (for instance, we cannot compare whether a party from 2001 and a party from 2015 voted in the same manner, because they participated in different roll-call votes), we compare our similarity measures with the benchmarks only for pairs of parties that coexist in time. Pretrained word embeddings and language models for Polish texts have been obtained from <cit.>.
§ RESULTS AND ANALYSIS
§.§ Preliminary Test – Self-Similarities
As a preliminary test, for each method of similarity measurement we have run a self-similarity test.
Every party program in our corpus that was at least 32,768 characters long was divided into two parts, one consisting only of odd sentences and the other only of even sentences, and the methods tested were then used to compare those parts. The distribution of the results was then compared with the distribution of inter-party similarities.
§.§ Intra-Group Correlations
Within each group of text analysis methods (word frequency, stylometry, static word embeddings, transformer word embeddings, transformer sentence embeddings) we compute a correlation matrix, and then use hierarchical agglomerative clustering <cit.>, iteratively merging the clusters that are most correlated. We use Pearson's correlation coefficient for quantifying correlations between two singleton clusters; the multiple correlation coefficient for quantifying correlations between a singleton cluster and a non-singleton cluster <cit.>; and the group correlation coefficient for quantifying correlations between two non-singleton clusters <cit.>. However, we do not merge clusters if the merger would cause the minimal intra-cluster correlation to fall below a .75 threshold. Because most groups of variables are rather numerous, we only report cluster composition and correlations between clusters.
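The merging procedure can be sketched as follows (our own illustration, not the authors' code): for simplicity the linkage between clusters is approximated here by the mean pairwise Pearson correlation of their members, standing in for the multiple- and group-correlation coefficients cited above, and the .75 floor on the minimal intra-cluster correlation is enforced before any merge is accepted.

```python
import numpy as np

def cluster_by_correlation(R, threshold=0.75):
    """Greedy agglomerative clustering on a correlation matrix R.

    Repeatedly merges the pair of clusters with the highest inter-cluster
    linkage (here: mean pairwise correlation of their members), refusing any
    merge that would push the minimal intra-cluster correlation below
    `threshold`.
    """
    n = R.shape[0]
    clusters = [[i] for i in range(n)]
    while True:
        best, best_val = None, -np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                val = np.mean([R[i, j] for i in clusters[a] for j in clusters[b]])
                merged = clusters[a] + clusters[b]
                min_intra = min(R[i, j] for i in merged for j in merged if i != j)
                if val > best_val and min_intra >= threshold:
                    best, best_val = (a, b), val
        if best is None:
            return clusters
        a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]

# toy usage: correlation matrix of five hypothetical similarity measures
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(100)   # one strongly correlated pair
R = np.corrcoef(X, rowvar=False)
print(cluster_by_correlation(R))
```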
§.§.§ Measures Based on Word Frequency Distributions
Cluster assignments (rows: method and metric; columns: length-correction variant):

method  metric  none  sampling  summarization
TF      cos     1     3         3
TF      L_2     1     3         3
TF      L_1     2     4         4
TFIDF   cos     1     3         3
TFIDF   L_2     1     3         3
TFIDF   L_1     2     4         4

Intra-cluster correlations (minimal / median): cluster 1: .917 / .962; cluster 2: .942 / .971; cluster 3: .785 / .958; cluster 4: .911 / .967.

Inter-cluster correlation matrix:
      1      2      3      4
1  1.000  0.950  0.996  0.945
2  0.950  1.000  0.908  0.901
3  0.996  0.908  1.000  0.994
4  0.945  0.901  0.994  1.000

In conclusion, it appears that IDF weighting does not significantly matter for the results, nor does the distinction between the L_2 and cosine metrics. However, both the choice of the L_1 metric and the use of sampling or summarization make a difference.

§.§.§ Stylometry
Cluster assignments (rows: metric and number of top words; columns: length-correction variant):

metric         top words  none  sampling  summarization
cos            50         1     1         7
cos            100        2     1         5
cos            200        1     1         5
delta          any        1     1         4
argamon        any        1     1         4
eder           any        1     1         4
cross-entropy  any        1     1         4
minmax         any        1     1         4
simple         any        1     1         4
cosine delta   any        3     3         6

Intra-cluster correlations (minimal / median): 1: .771 / .959; 2: 1.000 / 1.000; 3: .797 / .907; 4: .793 / .925; 5: .984 / .992; 6: .854 / .944; 7: 1.000 / 1.000.

Inter-cluster correlation matrix:
      1     2     3     4     5     6     7
1  1.00  .779  .427  .799  .516  .501  .664
2  .779  1.00  .340  .602  .672  .296  .535
3  .427  .340  1.00  .385  .165  .649  .365
4  .799  .602  .385  1.00  .687  .572  .779
5  .516  .672  .165  .687  1.00  .309  .662
6  .501  .296  .649  .572  .309  1.00  .556
7  .664  .535  .365  .779  .662  .556  1.00

The major distinctions can be observed between methods working on summarized and non-summarized texts, as well as between cosine-based metrics and norm-based metrics.

§.§.§ Static Word Embeddings
Cluster assignments (rows: model, metric and dimension; columns: length-correction variant):

method    metric  dim       none  sampling  summarization
FastText  cos     any       1     1         1
FastText  wmd     any       2     2         1
GloVe     cos     any       2     2         1
GloVe     wmd     any       2     2         1
word2vec  cos     100, 300  2     1         1
word2vec  cos     500, 800  2     2         1
word2vec  wmd     any       2     2         1

Intra-cluster correlations (minimal / median): 1: .835 / .911; 2: .792 / .883. Inter-cluster correlation between clusters 1 and 2: .781.

For FastText, but not for the other models, we observe a significant difference between the cosine and wmd metrics. Summarization affects the results significantly, but sampling does not.
§.§.§ Transformer Word Embeddings
Cluster assignments (rows: model; columns: pooling and length-correction variant):

model           mean/none  mean/summ.  max/none  max/summ.
BART            3          4           1         2
RoBERTa-medium  4          4           1         2
RoBERTa-large   4          4           1         2
GPT2-medium     4          4           1         2
GPT2-xl         4          4           1         2
LongFormer      4          4           1         2

Intra-cluster correlations (minimal / median): 1: .944 / .971; 2: .843 / .905; 3: 1.000 / 1.000; 4: .786 / .886.

Inter-cluster correlation matrix:
      1      2      3      4
1  1.000  .799   .469   .614
2  .799   1.000  .489   .717
3  .469   .489   1.000  .607
4  .614   .717   .607   1.000

As we can see, there are no significant differences between models, but the choice of pooling method matters. Sampling / summarization affects the results for max pooling, but not for mean pooling.

§.§.§ Sentence Embeddings
Cluster assignments (rows: model; columns: pooling and length-correction variant):

model          mean/none  mean/summ.  max/none  max/summ.
DistilRoBERTa  2          2           1         1
MPNet2         2          2           1         1

Intra-cluster correlations (minimal / median): 1: .813 / .907; 2: .890 / .946. Inter-cluster correlation between clusters 1 and 2: .755.

Again we see correlation between models, but differences between pooling methods.

§.§ Inter-Group Correlations
Sentence embeddings and transformer-based word embeddings are strongly correlated, as is also the case with word frequency methods and stylometry. Word embedding methods are somewhat of an outlier, but closer to the latter. The relatively strong correlation between stylometry and transformer-based methods deserves a note.

Inter-group correlation matrix:
                 word freq.  stylometry  word embed.  transformers  sentence embed.
word freq.       1.000       .855        .684         .581          .522
stylometry       .855        1.000       .522         .855          .793
word embed.      .684        .522        1.000        .533          .438
transformers     .581        .855        .533         1.000         .910
sentence embed.  .522        .793        .438         .910          1.000
§.§ Benchmarks
We note that most textual similarity measures (with the exception of the stylometric ones) perform against expert assessments similarly to document-based coding methods. Almost all methods perform poorly against the behavioral benchmarks (voting, etc.), but this is more of a conceptual problem, as it affects MARPOR and other manifesto-based data sources as well.

Correlations with the external benchmarks (lrgen, lreco, galtan, ch2d, vdem, rile), by method group and cluster:

group         no.  lrgen  lreco  galtan  ch2d  vdem  rile
word freq.    1    .33    .49    .33     .46   .61   .57
word freq.    2    .46    .59    .46     .59   .84   .55
word freq.    3    .48    .48    .48     .56   .74   .52
word freq.    4    .55    .53    .55     .64   .85   .58
stylometry    1    .11    .16    .11     .06   .09   .21
stylometry    2    .08    .21    .08     .11   .06   .28
stylometry    3    .09    .16    .09     .08   .12   .15
stylometry    4    .07    .17    .07     .03   .04   .13
stylometry    5    .08    .15    .08     .04   .10   .36
stylometry    6    .04    .15    .04     .01   -.04  .14
stylometry    7    .08    .14    .08     .04   .06   .31
word embed.   1    .28    .27    .28     .26   .24   .46
word embed.   2    .35    .42    .35     .44   .49   .47
transformer   1    .26    .35    .26     .35   .37   .35
transformer   2    .37    .38    .37     .44   .63   .43
transformer   3    .46    .48    .46     .51   .69   .48
transformer   4    .45    .44    .45     .48   .61   .39
sent. embed.  1    .21    .31    .21     .31   .35   .29
sent. embed.  2    .38    .41    .38     .43   .45   .37

Correlations with the document- and behavior-based benchmarks (marpor, vote, coal, cand, elec), by method group and cluster:

group         no.  marpor  vote  coal  cand  elec
word freq.    1    .25     .00   -.13  .02   .02
word freq.    2    .44     .36   .28   .03   .05
word freq.    3    .10     -.01  -.05  .05   -.04
word freq.    4    .24     .19   .30   .13   .31
stylometry    1    .23     .05   -.01  .14   -.07
stylometry    2    .18     -.08  -.09  .13   -.09
stylometry    3    .09     .21   .14   .12   -.05
stylometry    4    .20     .01   -.02  .14   -.13
stylometry    5    .12     .10   .13   .19   .15
stylometry    6    -.01    .09   .03   .15   -.11
stylometry    7    .10     -.05  -.12  .16   -.15
word embed.   1    -.02    -.02  -.07  .14   -.03
word embed.   2    .07     -.06  -.08  .01   .00
transformer   1    .21     -.10  -.11  .03   .01
transformer   2    .06     .00   -.02  .03   -.02
transformer   3    -.04    .06   .01   .06   .01
transformer   4    .16     .05   .06   .09   -.05
sent. embed.  1    .21     -.09  -.07  -.03  -.01
sent. embed.  2    .31     -.04  -.08  -.01  -.05

§ FUTURE WORK
Future work will focus on testing additional methods, including LDA-based topic models with scaling and methods based on topic matching; exploring the potential of combining textual similarity methods with machine translation algorithms to obtain inter-language comparability; and aggregating textual similarity measures to algorithmically recover party positions in the policy space.
http://arxiv.org/abs/2306.12555v1
20230621204910
Topology optimization for inverse magnetostatics as sparse regression: application to electromagnetic coils for stellarators
[ "Alan A. Kaptanoglu", "Gabriel P. Langlois", "Matt Landreman" ]
physics.plasm-ph
[ "physics.plasm-ph", "physics.comp-ph" ]
Corresponding author ([email protected]). Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, MD, 20742, USA=-1 Courant Institute of Mathematical Sciences, New York University, New York, NY, 10012, USA=-1 Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, MD, 20742, USA =-1 Topology optimization, a technique to determine where material should be placed within a predefined volume in order to minimize a physical objective, is used across a wide range of scientific fields and applications. A general application for topology optimization is inverse magnetostatics; a desired magnetic field is prescribed, and a distribution of steady currents is computed to produce that target field. In the present work, electromagnetic coils are designed by magnetostatic topology optimization, using volume elements (voxels) of electric current, constrained so the current is divergence-free. Compared to standard electromagnet shape optimization, our method has the advantage that the nonlinearity in the Biot-Savart law with respect to position is avoided, enabling convex cost functions and a useful reformulation of topology optimization as sparse regression. To demonstrate, we consider the application of designing electromagnetic coils for a class of plasma experiments known as stellarators. We produce topologically-exotic coils for several new stellarator designs and show that these solutions can be interpolated into a filamentary representation and then further optimized. Keywords: topology optimization, sparse regression, inverse magnetostatics, electromagnets, coil optimization, inverse problems, stellarators, nuclear fusion Topology optimization for inverse magnetostatics as sparse regression: application to electromagnetic coils for stellarators Matt Landreman July 31, 2023 ============================================================================================================================ § INTRODUCTION Topology optimization aims at solving a fundamental engineering problem; where should material be placed in a predefined volume in order to minimize some physical objective function? This general problem spans a wide range of scientific disciplines and a large number of approaches have been developed for carrying out variations of topology optimization <cit.>. General topology optimization can be written min_α f(α), s.t. 𝒞_0(α) = 0, 𝒞_1(α) ≤ 0, α_i = 0 or 1, ∀ i, where the elements of α are the optimizable degrees of freedom with permissible values of only 0 or 1, 𝒞_0 and 𝒞_1 are general constraints on α, and f is the primary objective. This binary, constrained, and general form of the problem is very challenging. However, many problems, including those addressed in this work, have a convex objective and convex constraints. This assumption can make high-dimensional topology optimization much more tractable, although the nonconvexity from the binary nature of the problem remains. Traditional density-based approaches relax the binary problem to a continuous one with 0 ≤α_i ≤ 1, with an additional penalty for values of α_i between 0 and 1 <cit.>. This density approach has been used extensively in structural engineering as well as for designing permanent magnets for a class of plasma experiments called stellarators <cit.>. There are many other approaches to topology optimization and we refer the reader to the review in Sigmund and Maute <cit.>. 
As far as we are aware, until the present work topology optimization has not been performed by solving a continuous version of Eq. (<ref>) with the l_0(α) = α_0 pseudo-norm, an operator that counts the number of nonzero elements in α. Use of the l_0 norm turns the problem into a form of sparse regression. Presumably, this approach has not been favored because it presents a nonconvex, nonsmooth loss term to optimize, preventing the application of traditional gradient or Hessian-based solvers. Nonetheless, this problem can be solved effectively in some important applications, e.g. as we will show in the present work, for designing electromagnetic coils. Before describing this formulation and why it is advantageous, particularly in electromagnetic coil design, we review our motivating application from the field of plasma physics. §.§ Stellarator optimization The design of electromagnetic coils is required in a large number of scientific and engineering domains. In one common situation, considered here, a target magnetic field in some volume is given, and the goal is to find a configuration of magnets outside that volume to produce the desired field. Examples of this problem are producing uniform fields and uniform field gradients for magnetic resonance imaging <cit.>, and producing uniform dipole or quadrupole fields for the beam optics in particle accelerators <cit.>. This problem is an ill-posed inverse problem because many different magnet designs can produce a nearly identical target magnetic field via the Biot-Savart law. This inverse magnetostatics problem is also critical for stellarators, a class of plasma devices commonly considered for future nuclear fusion reactors. Stellarator design relies on sophisticated coil optimization algorithms in order to produce ideal magnetic fields for confining plasma <cit.>. These three-dimensional magnetic fields must be carefully shaped in order to provide high-quality confinement of charged particle trajectories and many other physics objectives. Optimizing stellarators is typically performed in two stages. The first is a configuration optimization using fixed-boundary magnetohydrodynamic equilibrium codes to obtain plasma equilibria with desirable physics properties <cit.>. Important metrics to minimize for nuclear fusion devices include deviations from quasi-symmetry (an unusual symmetry in magnetic fields that enables particle confinement), fast ion losses, and magnetohydrodynamic instability <cit.>. After obtaining the optimal magnetic field in this first stage, magnets must be designed to produce these fields, subject to a number of engineering constraints such as a minimum coil-to-coil distance, maximum forces on the coils <cit.>, maximum curvature on the coils <cit.>, and many other requirements. There has also been recent work combining the two optimization stages into a single overall optimization <cit.>. In any case, the result is that stellarator coils are often complex three-dimensional shapes, raising the cost and difficulty of manufacturing. A primary cost driver of the W-7X and NCSX stellarator programs was the manufacture and assembly of complex coils with tight engineering tolerances <cit.>. §.§ Coil optimization Formulation of coil design in the language of topology optimization, and later, sparse regression, facilitates the effective use of a large literature from across scientific disciplines. 
To motivate our eventual formulation, consider first a general situation in which there is a surface S' (or volume V') of current sources and a surface S (or volume V) where we want to match a target magnetic field B (or its magnitude B, or just one of its components). A simple example is the following inverse magnetostatic optimization problem, min_ J∫_S B_coil - B_target^2d r, where J are a set of coil current densities, B_coil( r) is the magnetic field generated by the coils, B_target( r) is the magnetic field desired on the surface S, and r is a coordinate vector. Eq. (<ref>) represents the general situation in which a set of coils and magnets are desired that match a target magnetic field. Notice that we have assumed a set of available and fixed spatial locations for the coils because only the coil currents are used as optimization variables. Traditionally, coil design for stellarators is performed with either the winding surface or filament method. The winding surface optimization problem <cit.> is a variation of Eq. (<ref>). The goal is to minimize the normal component of B on the surface of a plasma S, and the sources lie on a winding surface S' that is pre-defined by the practitioner. S' is typically prescribed by extending the plasma boundary outward using an overall offset multiplied by the normal vectors on this surface; see Appendix <ref> for additional details. Now the following optimization problem is solved: min_ J∫_S ( B_coil - B_target)·n̂^2d r + κ∫_S' J( r')^2d r', B_coil( r) ≡μ_0/4 π∫_S' J( r') × ( r - r')/ r - r'^3d r'. Throughout this work, quantities associated with the coil surface (and later, volume) are denoted with a prime, and norms without subscripts, ·, indicate vector magnitudes. Notice that the current density J in Equations (<ref>) and  (<ref>) is a surface current density in Amperes per meter because we have assumed for now that the sources lie entirely on a surface S'; the coils are represented by a continuous sheet current on the winding surface. Here, r' is a source position, n̂ is the plasma unit normal vector (n for a non-unit normal vector), μ_0 is the vacuum permeability, and κ is a scalar hyperparameter that determines how strongly to penalize large and potentially unrealistic currents. The B_target·n̂ term can represent normal magnetic field contributions from other sources, including other coils or magnets, or contributions from finite plasma current. The Tikhonov regularization term proportional to κ is traditionally used to deal with the ill-posedness intrinsic to coil optimization; without additional optimization criteria, very different coil sets can produce similar residuals in B ·n̂ on the plasma surface. In winding surface optimization without Tikhonov regularization, large and unrealistic surface currents can be generated. These currents can then overfit to the quadrature points on the plasma surface that were used to discretize the first integral in Eq. (<ref>). A global, smooth, and periodic Fourier basis is used for the currents in winding surface optimization. This representation results in a linear least-squares problem that can be easily solved, and produces surface currents that are a priori continuous and divergence-free. However, in principle we could make different assumptions about the spatial variation and topology of the currents by expanding the currents locally, using local spatial basis functions. 
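Once the Biot-Savart response of each surface-current basis function has been assembled into a matrix, the winding-surface problem above reduces to a Tikhonov-regularized linear least-squares solve. The sketch below is ours, not taken from any specific winding-surface code: the matrices are random placeholders standing in for the true inductance matrix and target normal field, and a plain l_2 penalty on the coefficients stands in for the surface-current regularization integral.

```python
import numpy as np

rng = np.random.default_rng(1)
n_quad, n_modes = 256, 40                      # plasma quadrature points, Fourier modes for J
G = rng.standard_normal((n_quad, n_modes))     # placeholder Biot-Savart response matrix
b = rng.standard_normal(n_quad)                # placeholder target: -(B_target . n) at quadrature points
kappa = 1e-2                                   # regularization weight

# minimize ||G j - b||^2 + kappa ||j||^2  =>  (G^T G + kappa I) j = G^T b
j = np.linalg.solve(G.T @ G + kappa * np.eye(n_modes), G.T @ b)
print("relative normal-field residual:", np.linalg.norm(G @ j - b) / np.linalg.norm(b))
```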
The advantage of the winding surface method over the filament-based algorithms is that the surface is defined before optimization, and subsequently avoids a much more complicated optimization over spatial degrees of freedom. This feature is also a disadvantage, since the predefined and fixed spatial grid is a strong constraint on the space of possible coil shapes. Conversely, the filamentary method allows for complex spatial dependence by representing the coils as zero-thickness curves in three-dimensional space and optimizing the spatial degrees of freedom of the curves. However, this approach leads to a significantly more complicated optimization problem. From an optimization standpoint, it is critical to notice that the Biot-Savart law in Eq. (<ref>), regardless if the coils are represented by one-dimensional curves or three-dimensional volumes, is linear in J but nonlinear in r'. Moreover, most of the additional engineering constraints for coils, such as minimum coil-coil distances, are also nonlinear functions of r'. These nonlinearities guarantee that filament optimization is highly nonconvex, whereas the winding surface method is convex and much simpler to solve. Our new method aims to combine the best features of both: freedom in all three spatial dimensions like filaments, but with the convexity and linearity of the winding surface method. §.§ Contributions of this work In the present work, we provide an algorithm that can be used to solve topology optimization problems of the form of Eq. (<ref>), rewritten min_α {f(α) + λα_0}, s.t. 𝒞_0(α) = 0, 𝒞_1(α) ≤ 0, with the additional assumption that the λ=0 subproblem can be solved with reasonable computational efficiency. This assumption is certainly true for a large class of convex objectives and convex constraints. Note also that Eq. (<ref>) is not quite equivalent to the binary problem, but if necessary, upper bounds can be prescribed in the form of linear constraints on the α_i so that the elements of α take only two possible values. A large volume of literature exists for solving a relaxation of Eq. (<ref>) with the l_0 norm replaced with the l_1 norm <cit.>, since then Eq. (<ref>) is convex and can be easily solved. However, when a parameter is nonzero in the l_1 problem, it can take any value. This is unsuitable for our task since it is important in coil design that the currents in the voxels are either very large or zero, approximating the binary structure of the problem. Next, we use this new sparse regression formulation of topology optimization to generate coil designs without resorting to winding surfaces (only surface currents can exist) or filaments (zero-thickness curves). This is the first demonstration of stellarator coil design using topology optimization. We further illustrate our new method by generating a series of coil sets for three recent high-performance stellarators. Because the currents are local and vary throughout a volume, we refer to our optimization technique as the current voxel method. Unlike the traditional methods, after a coil volume is defined, the coil shape topology is an output of the optimization rather than an input by the user. This is an important step because the optimal coil topology for a particular stellarator is often unclear, so researchers often manually try a number of possible configurations. In this sense, our current voxel optimization can also be seen as providing a principled initial topology and set of coils for further optimization using other coil optimization routines. 
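The convexity argument above rests on the fact that the Biot-Savart law is linear in the current but nonlinear in the coil geometry. A toy numerical check of both statements (our own, using a crude straight-segment discretization of a circular filament):

```python
import numpy as np

def biot_savart_loop(radius, current, eval_pt, n_seg=400):
    """B field of a circular filament in the z = 0 plane, centered at the origin,
    from a straight-segment discretization of the Biot-Savart law."""
    mu0 = 4e-7 * np.pi
    phi = np.linspace(0, 2 * np.pi, n_seg, endpoint=False)
    pts = np.stack([radius * np.cos(phi), radius * np.sin(phi), np.zeros(n_seg)], axis=1)
    dl = np.roll(pts, -1, axis=0) - pts
    mid = 0.5 * (np.roll(pts, -1, axis=0) + pts)
    r = eval_pt - mid
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    dB = mu0 * current / (4 * np.pi) * np.cross(dl, r) / dist**3
    return dB.sum(axis=0)

p = np.array([0.3, 0.1, 0.2])
B1 = biot_savart_loop(1.0, 1e5, p)
# linear in the current: doubling I exactly doubles B
print(np.allclose(biot_savart_loop(1.0, 2e5, p), 2 * B1))   # True
# nonlinear in the geometry: doubling the loop radius does not double B
print(np.allclose(biot_savart_loop(2.0, 1e5, p), 2 * B1))   # False
```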
To demonstrate this use case, we take two helical coil designs generated by our new method and initialize filament optimizations, which perform further solution polishing. Lastly, the methodology described here is implemented in the open-source SIMSOPT code <cit.>, which was used to generate the results in the present paper. § CURRENT VOXEL OPTIMIZATION So far, we have outlined that our new coil optimization should generate coils that vary in three spatial dimensions but avoid the Biot-Savart nonlinearity in r'. We now show that these requirements produce an optimization problem equivalent to the topology optimization in Eq. (<ref>). Consider the following variation of the winding surface objective in Eq. (<ref>), in which the current is now allowed to vary continuously within some volume V' surrounding the plasma surface. Since the coil volume is predefined, there is no shape optimization, a feature in common with the original winding surface method. For simplicity, let us assume that we can reasonably decompose V' (the “winding volume”) into a three-dimensional mesh of grid cells, e.g., rectangular cubes, which we refer to as the current voxels. Then we have D discrete but continuously connected, rectangular grid cells with volumes V_k' such that ∪_k=1^DV_k' ≈ V' and some amount of current in each cell, B_coil( r)·n̂ = -μ_0/4 π∑_k=1^D∫_V_k'n̂× ( r - r_k')/ r - r_k'^3· J_k( r_k') d r_k'. If there are no existing coils or applied fields, i.e. B_target = 0, then the trivial solution B_coil = 0 needs to be avoided in the optimization. There are numerous strategies for preventing the trivial solution. One strategy in filamentary coil optimization is to set a nonzero current in one of the filaments. Another strategy to avoid the trivial solution specifies the toroidal flux in a poloidal cross section <cit.>. Here, we consider instead fixing a target current value, I_target, by computing a line integral μ_0 I_target = ∮_γ( B_coil - B_0) ·dl, around a toroidal loop γ that is on or in the plasma, e.g., the magnetic axis or the plasma boundary at θ = 0. For all of the examples illustrated in this work, we use the latter. B_0 is the magnetic field along the toroidal loop from other sources, e.g. finite plasma current. Equation (<ref>) requires only the computation of a single integral, and it is explicitly shown in Appendix <ref> to be linear in the optimization variables defined in the next section. Typically, the solution need not exactly match the target current value, so we incorporate the squared residual of (<ref>) as another linear least-squares term in the optimization problem that will be described shortly. §.§ A finite element basis for the currents In each cell, J_k in Eq. (<ref>) must necessarily have nontrivial spatial dependence for nontrivial coil designs. The local spatial variation comes from expanding each J_k in a finite element basis, and the coefficients of that basis will later become the variables for optimization, J_k ≡α_k·ϕ_k( r_k') =∑_i=1^Nα_ikϕ_ik. The ϕ_i represent a chosen set of spatial basis functions and we use the divergence-free basis of linear polynomial vectors as in Cockburn <cit.>, so that the J_k are divergence-free in each grid cell. Of course, higher-order polynomial basis functions can be used for improved convergence. However, as of now there can still be deviations from global current conservation, since there can be flux jumps across cell interfaces, and the current is not imposed to be continuous. 
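As an illustration of the discretized Biot-Savart sum above, the sketch below evaluates the contribution of a single voxel basis function to B·n̂ at one plasma-surface point with a tensor-product midpoint rule; in the actual optimization such numbers form one entry of the inductance-like matrix that is precomputed over all voxels, basis functions, and surface quadrature points. Function and argument names here are our own, not SIMSOPT's.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def voxel_normal_field(center, dx, phi, r_plasma, n_hat, nq=6):
    """Contribution of one current-voxel basis function phi to B . n_hat at a
    plasma-surface point r_plasma, i.e. one entry of the discretized Biot-Savart sum."""
    # tensor-product midpoint quadrature over a cubic cell of edge length dx
    g = (np.arange(nq) + 0.5) / nq - 0.5
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    pts = center + dx * np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    w = (dx / nq) ** 3                       # volume weight per quadrature point
    J = phi(pts)                             # (nq^3, 3) basis-function values
    r = r_plasma - pts                       # r - r'
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    integrand = np.cross(n_hat, r) / dist**3   # n x (r - r') / |r - r'|^3
    return -MU0 / (4 * np.pi) * w * np.einsum("qi,qi->", integrand, J)

# example: a uniform x-directed current density in a 5 cm voxel,
# evaluated at a point 1 m above the cell with surface normal along y
val = voxel_normal_field(center=np.zeros(3), dx=0.05,
                         phi=lambda p: np.tile([1.0, 0.0, 0.0], (len(p), 1)),
                         r_plasma=np.array([0.0, 0.0, 1.0]),
                         n_hat=np.array([0.0, 1.0, 0.0]))
print(val)
```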
To solve the former problem, we can impose that surface-averaged flux jumps across cells vanish: ∫_V_k' ∩ V_l' n̂'·[ J_k( r_k') - J_l( r_k')]d^2 r_k' = 0. This constitutes at most six linear constraints per cell on the α_k, for a total of N_c < 6D constraints. On average, we expect N_c ∼ 3D, since adjacent cells need only one constraint for their mutual interface, but the exact number will vary with the geometry of the voxel grid V'. For concreteness, the basis for degree-1, divergence-free polynomial vectors in three-dimensions can be chosen as (taking centers of the cell at (x_k, y_k, z_k)): X_k ≡x - x_k/Δ x_k, Y_k ≡y - y_k/Δ y_k, Z_k ≡z - z_k/Δ z_k, =-3mu =-3mu =-3mu [ 1; 0; 0; ], [ 0; 1; 0; ], [ 0; 0; 1; ], [ Y_k; 0; 0; ], [ Z_k; 0; 0; ], [ 0; 0; X_k; ], [ 0; 0; Y_k; ], [ 0; Z_k; 0; ], [ 0; X_k; 0; ], [ X_k; -Y_k; 0; ], [ X_k; 0; -Z_k; ]. =4mu =3mu =5mu In order to avoid local jumps in ∇· J along cell interfaces (the constraints only guarantee that ∇· J = 0 in an integral sense over the cell interface), we enforce that the J_i component is continuous on the cell interface with normal vector n̂' = x̂_i. It turns out that this can be entirely enforced by simply reducing the number of basis functions to five, [ 1; 0; 0; ], [ 0; 1; 0; ], [ 0; 0; 1; ], [ X_k; -Y_k; 0; ], [ X_k; 0; -Z_k; ]. To summarize, we now have a representation that can produce discontinuous currents and the divergence-free property of J is everywhere satisfied. More sophisticated finite element geometries and basis representations are a clear place for future improvements. For instance, an orthonormal, hierarchical, and high-numerical-precision basis of divergence free polynomials can also be constructed to arbitrary degree and dimension in tetrahedral domains <cit.>. §.§ Finalizing the optimization problem We now have a useful spatial basis to represent the current density in each cell. The Biot-Savart calculation for the normal component of B_coil reduces to a simple matrix-vector product between the optimization variables α∈ℝ^ND and a matrix A ∈ℝ^n_θ n_ζ× ND, which can be computed once before optimization begins. Here n_θ and n_ζ represent the number of poloidal and toroidal quadrature points on the plasma surface, respectively. The optimization to solve so far can be shown to be a linear least-squares problem in α with linear equality constraints, C α = 0. For a degree-1 basis, there are 5D variables in α = [α_1, ..., α_D], which is in principle enough free parameters to satisfy the N_c linear constraints coming from the flux jump constraints. Note that C ∈ℝ^N_c × ND, which can be large (since D can be size ∼ 10^4-10^5) but tractable because it is very sparse. The last ingredient for our new optimization is crucial. As it stands, there will be nonzero current contributions in every cell in the prescribed coil volume, which will clearly not generate isolated, discrete coils of the type desired for stellarators. However, we can alter the optimization problem to contain an additional term, called the non-overlapping group l_0 norm, λα_0^G. The quantity α_0^G is defined to be the number of cells for which α_ik = 0 for all i in the cell indexed by k. Thus, α_0^G is reduced only when cell currents are fully zeroed out in a given voxel. Including this term in the optimization will produce a set of sparse coils. In total, we show in Appendix <ref> how the optimization can be formulated as: min_α {f_B(α) + κ f_K(α) + σ f_I(α) + λα_0^G}, s.t. Cα = 0, f_B(α) ≡1/2 Aα - b_2^2, f_K(α) ≡1/2Dα_2^2, f_I(α) ≡1/2 A_Iα - b_I_2^2. 
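Before the individual terms of this problem are unpacked below, a quick numerical sanity check of the reduced degree-1 basis (our own sketch, not code from the paper): each of the five functions is divergence-free inside the cell, which, with a single scaling per direction as written, relies on the voxel being cubic (Δx_k = Δy_k = Δz_k), the case used for the convergence studies in the appendix.

```python
import numpy as np

def basis(p, center, dx):
    """The five degree-1 divergence-free basis vectors, evaluated at points p
    inside a cubic cell of edge length dx centred at `center`."""
    X = (p[:, 0] - center[0]) / dx
    Y = (p[:, 1] - center[1]) / dx
    Z = (p[:, 2] - center[2]) / dx
    zero, one = np.zeros_like(X), np.ones_like(X)
    return np.stack([
        np.stack([one,  zero, zero], axis=1),
        np.stack([zero, one,  zero], axis=1),
        np.stack([zero, zero, one ], axis=1),
        np.stack([X,   -Y,    zero], axis=1),
        np.stack([X,    zero, -Z  ], axis=1),
    ])                                            # shape (5, npts, 3)

# central-difference check that each basis function has zero divergence
center, dx, h = np.zeros(3), 0.05, 1e-6
p0 = np.array([[0.01, -0.005, 0.012]])            # arbitrary point inside the cell
div = np.zeros(5)
for axis in range(3):
    e = np.zeros(3); e[axis] = h
    div += (basis(p0 + e, center, dx)[:, 0, axis]
            - basis(p0 - e, center, dx)[:, 0, axis]) / (2 * h)
print(np.allclose(div, 0.0))                      # True for a cubic cell
```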
The f_B objective encodes the first term in Eq. (<ref>), f_K is Tikhonov regularization on the optimization variables, and f_I encodes Eq. (<ref>) for avoiding the trivial solution α = 0. The κ, σ, and λ hyperparameters control the relative important of each loss term in the optimization. This is equality-constrained sparse regression - an optimization problem commonly appearing across science and plasma physics <cit.>, for which a number of effective algorithms are available <cit.>. In fact, we have recently formulated stellarator magnet optimization using a large number of permanent magnets in a similar manner <cit.>. However, this is a challenging optimization problem in high-dimensions and with many constraints since the l_0 norm is nonconvex and nonsmooth. For illustration of how the various optimization terms relate to the geometry, we show the full optimization geometry in Fig. <ref> for the stellarator introduced in Sec. <ref>. This includes the volume-averaged current density solution J( r') in the voxels (indicated by the vectors), the unique part of the current voxel grid (the white cubic mesh), the toroidal loop γ (white curve), and the plasma surface (mixed green colors, with B·n̂ errors plotted on the surface). The currents in the solution tend to be very strong on the inboard (small major radius) side where the plasma surface is vertically elongated. Poincaré plots in Fig. <ref> illustrate that this current density solution reproduces the desired plasma equilibrium to high accuracy. §.§ Relax-and-split solution for topology optimization High-dimensional, constrained, and nonconvex problems can be effectively solved with well-known algorithms, e.g., the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm <cit.>, if the problem is smooth. However, the l_0 loss term is nonsmooth and therefore we need specialized algorithms. Relax-and-split methods, also called penalized decomposition methods, solve optimization problems by splitting them into two simpler subproblems. In the context of sparse regression, one set of optimization variables is used to solve the linear least-squares term and a set of convex constraints, and the second set is used to address the nonsmooth and/or nonconvex sparsity-promoting loss term <cit.>. Then a “relaxation” L_2 loss term is introduced to minimize the difference between the two sets of optimization variables. This approach for solving sparse regression problems has also been applied successfully to solve high-dimensional l_0-minimization problems arising in imaging science and compressive sensing; see, e.g., <cit.>. Mathematically, the relax-and-split method reformulates Eq. (<ref>) to become (taking σ = 0 and κ = 0 here for clarity): min_β{min_α{ Aα - b_2^2/2 + α - β_2^2/2ν} + λβ_0^G}, s.t. Cα = 0. Notice there are now two optimization problems, one for α and one for β, and we can control how closely these variables match by tuning the ν hyperparameter. The idea is now to iteratively solve this problem by variable projection - fixing one variable while optimizing over the other - and repeating until convergence is found. Consider initial conditions for the optimization variables, α^(0) and β^(0). The solutions in the k-th iteration we denote as α^(k) and β^(k), so that, α^(k)≡_α { Aα - b_2^2/2 + α - β^(k-1)_2^2/2ν}, s.t. Cα = 0, β^(k)≡_β {1/2να^(k) - β_2^2 + λβ_0^G}. Problem (<ref>) is a linear least-squares with affine constraints. 
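A compact sketch of this alternation on a random toy problem (with σ = κ = 0 as above; all arrays are hypothetical placeholders, not data from the paper). The constrained α-update is done here by factorizing the KKT system of the equality-constrained least-squares problem directly, and the β-update is the group hard-threshold realizing the proximal operator of the group l_0 term. In the paper's setting the dense KKT solve is replaced by the preconditioned iterative solver discussed next.

```python
import numpy as np

def group_hard_threshold(alpha, groups, tau):
    """Proximal operator of tau * (group l_0): zero any group with squared norm below 2*tau."""
    beta = alpha.copy()
    for g in groups:
        if np.sum(alpha[g] ** 2) < 2.0 * tau:
            beta[g] = 0.0
    return beta

def relax_and_split(A, b, C, groups, lam, nu, iters=40):
    n, m = A.shape[1], C.shape[0]
    beta = np.zeros(n)
    # KKT matrix of: min ||A a - b||^2/2 + ||a - beta||^2/(2 nu)  s.t.  C a = 0
    K = np.block([[A.T @ A + np.eye(n) / nu, C.T],
                  [C, np.zeros((m, m))]])
    for _ in range(iters):
        rhs = np.concatenate([A.T @ b + beta / nu, np.zeros(m)])
        alpha = np.linalg.solve(K, rhs)[:n]
        beta = group_hard_threshold(alpha, groups, nu * lam)
    return alpha, beta

# toy problem: 20 "voxels" with 5 degrees of freedom each, one linear constraint
rng = np.random.default_rng(2)
D, N = 20, 5
A = rng.standard_normal((50, D * N))
b = rng.standard_normal(50)
C = rng.standard_normal((1, D * N))
groups = [list(range(k * N, (k + 1) * N)) for k in range(D)]
alpha, beta = relax_and_split(A, b, C, groups, lam=0.5, nu=1e-2)
print("active voxels:", sum(np.any(beta[g] != 0) for g in groups), "of", D)
```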
For most of the high-resolution results in the present work, the problem dimensions get large and A^T A becomes very costly to compute and store in memory. Fortunately, iterative solvers are suitable for efficiently solving high-dimensional linear systems. In practice, we use the MINRES algorithm with an approximate Schur complement preconditioner <cit.>, since it requires only matrix-vector products of the matrices appearing in Eq. (<ref>). Note that the algorithm and preconditioning can take advantage of the fact that C is a sparse matrix, since it encodes flux matching constraints for each cell's boundaries; only the cell and its (at most) six neighboring cells are involved in each of the constraints. As long as the update to α can be made reasonably computationally efficient, this relax-and-split strategy is effective for addressing the l_0-regularized topology optimization problem even if additional nonconvex terms are added. Next, the outer optimization problem (<ref>) has a solution via the proximal operator, β^(k) = prox_νλ(.)^G_0(α^(k)). For the non-overlapping group l_0 norm, the proximal operator is an analytic function akin to the traditional l_0, i.e., hard thresholding the norm of each subgroup. This process is repeated iteratively for k iterations until some convergence criteria for β^(k) or α^(k) is satisfied. Note that this algorithm is efficient for solving Eq. (<ref>) and any iterative algorithm for this problem relies on parallelizable matrix-vector products. Finally, the full optimization in Eq. (<ref>) can be solved many times for increasingly large values of λ, using the previous solution as an initial condition for the next optimization problem with larger λ. This process increasingly produces solutions that look like thin, high-current loops, i.e., realistic coils. Lastly, hyperparameter scans were performed and documented in Appendix <ref> to demonstrate convergence with respect to various geometrical quantities and find useful values of the optimization-related hyperparameters λ, σ, κ, and ν. §.§ Discrete symmetries in stellarators Symmetries play an important role for stellarators and discrete symmetries provide reductions in the required number of variables for performing coil optimization. Subsequently, most stellarators to date have been designed with discrete field-period and stellarator symmetries. Field-period symmetry refers to a periodicity in the magnetic field with respect to the number of periods, n_p in a full toroidal turn. In cylindrical coordinates, taking ζ for the moment as the canonical azimuthal angle, B(R, ζ + 2π/n_p, Z) = B(R, ζ, Z). Since B_target exhibits this property at the plasma surface S, and we desire that B_coil·n̂ matches B_target·n̂ on S, the coils should also exhibit field-period symmetry. In other words, we need only design coils for the unique ζ∈ [0, 2π / n_p) part of the plasma surface and the unique ζ' ∈ [0, 2π / n_p) part of the current voxel grid. However, a simple Cartesian grid of voxels is used in the present work, which can only replicate this symmetry in B_coil if n_p = 2 or 4, since otherwise the cubes cannot be stitched together properly. Therefore other values n_p = 3, 5, 6, etc. must use a voxel grid defined in the entire ζ' ∈ [0, 2π). Fortunately, Appendix <ref> illustrates that our algorithm scales well with the number of voxels and therefore stellarators with these values of n_p can still be readily optimized. 
Future work could address this geometrical issue by working with a cylindrical grid of voxels and associated basis functions, which would exhibit a continuous rotational symmetry. Next, a stellarator symmetric field is one in which, in a (R, ζ, Z) cylindrical coordinate system, B_R is odd with respect to an inversion about the line ζ = 0, Z = 0, while B_Z and B_ζ are even. This constraint on parity amounts to reducing the number of degrees of freedom by a factor of two. We can now design coils for the unique 0 ≤ζ≤π / n_p part of the plasma surface and the unique 0 ≤ζ' ≤π / n_p part of the current voxel grid. For instance, for a stellarator that is two-field-period and stellarator symmetric, only a quarter of the plasma surface and a quarter of the current voxels are required. The contribution to the plasma surface from the remaining voxels is obtained not by optimization but by a set of rotations and parity flips akin to what is done in other coil optimization techniques for stellarators. Subsequently, the α inherit the appropriate symmetries and the flux jump constraints are made consistent across the full voxel grid. § RESULTS To demonstrate that our optimization problem can generate new coil designs, we consider three stellarators: the Landreman-Paul QA and QH configurations <cit.> as well as the recent two-field-period Goodman QI stellarator. <cit.>. All three stellarators are scaled to 1 meter major radius and a plausible, laboratory-scale B≈ 0.1 T, averaged along the major radius. The exact values are not an important choice because these plasmas have no intrinsic length scale and subsequently solutions can always be appropriately rescaled. Since the primary focus of this work was methodology and exploration of coil topology, most of the results in the present work were generated with modest current voxel grid sizes. Subsequently sufficient f_B minimization is often not achieved in these examples, such that the achieved magnetic fields differ some from the target fields. Future work could ameliorate this issue by focusing on a particular stellarator design and performing high resolution runs, with more extensive variations of hyperparameters. We also address the voxel f_B errors by using the current voxel solutions to initialize coil filaments, which are further optimized to low f_B error and shown to reproduce the desired plasma. Before showing the stellarator designs, we test our method for an axisymmetric torus. We consider a case with no plasma current, so the axisymmetric target surface corresponds to a purely azimuthal target field. The torus has continuous rotational symmetry, but the Cartesian voxel grid does not, so for convenience we prescribe that the coil volume is two-field-period and stellarator symmetric. A representative result is illustrated in Fig. <ref>, with two coils per half field period, so there are eight coils in total. As expected for an axisymmetric azimuthal target field, we obtain approximately planar toroidal field coils. The current density is not perfectly planar due to staircasing effects of the rectangular current voxels. This reasonable result for an axisymmetric target field provides initial evidence that our method is working properly and is capable of producing discrete coils. 
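Before turning to the individual configurations, here is a sketch of the symmetrization step described in the previous subsection: currents are optimized only in the unique segment and then copied to the rest of the device by z-axis rotations and the stellarator-symmetry operation. The sign conventions below are our reading of the parities quoted above, namely position (x, y, z) → (x, -y, -z) with current (J_x, J_y, J_z) → (-J_x, J_y, J_z); the array layout is hypothetical.

```python
import numpy as np

def symmetrize(points, currents, nfp):
    """Map voxel centres and current vectors from the unique half field period
    to the full torus, assuming stellarator symmetry."""
    flip_r = np.diag([1.0, -1.0, -1.0])       # (x, y, z) -> (x, -y, -z)
    flip_j = np.diag([-1.0, 1.0, 1.0])        # (Jx, Jy, Jz) -> (-Jx, Jy, Jz)
    base_pts = np.vstack([points, points @ flip_r.T])
    base_cur = np.vstack([currents, currents @ flip_j.T])
    all_pts, all_cur = [], []
    for k in range(nfp):                      # field-period rotations about the z-axis
        phi = 2 * np.pi * k / nfp
        rot = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                        [np.sin(phi),  np.cos(phi), 0.0],
                        [0.0,          0.0,         1.0]])
        all_pts.append(base_pts @ rot.T)
        all_cur.append(base_cur @ rot.T)
    return np.vstack(all_pts), np.vstack(all_cur)

# toy usage: 3 voxels in the unique segment of a two-field-period device
pts = np.array([[1.0, 0.1, 0.05], [1.1, 0.3, 0.0], [0.9, 0.2, -0.1]])
cur = np.array([[0.0, 1.0, 0.0], [0.1, 0.9, 0.2], [0.0, 1.0, 0.1]])
full_pts, full_cur = symmetrize(pts, cur, nfp=2)
print(full_pts.shape)   # (12, 3): 3 voxels x 2 (stellarator symmetry) x 2 field periods
```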
§.§ Coil designs for the Landreman-Paul QA stellarator The Landreman and Paul QA stellarator <cit.> is stellarator-symmetric and two-field-period symmetric, meaning that only one quarter of the plasma surface and one quarter of the total coil optimization variables need to be determined. Figure <ref> illustrates the results for the QA stellarator at varying levels of sparsity-promotion. At each stage, unique topologies are exhibited, although almost all of the solutions have strong currents near the inboard (small major radius) side of the plasma surface where it is vertically elongated, a result commonly observed in coil optimization for stellarators. Both modular and helical coil solutions are obtained. Some of these solutions are too complex for realistic engineering designs, but can be used as an initial condition for filamentary optimization. In particular, one of the most sparse solutions consists of a single figure-eight coil that links through the QA stellarator; an interesting topological choice for a helical coil. With demountable joints, a single optimized coil for the whole device could be an attractive design, so we now use this solution to initialize a filamentary coil optimization. Assuming that the Cartesian coordinates of the coil are unique when written as a function of the toroidal coordinate ζ or poloidal coordinate θ, we can transform the identified curve x(θ) to the Fourier basis used for filament optimization, x(θ) = 1/2x_c, 0 + ∑_m=1^M x_c, mcos(m θ) + x_s, msin(m θ). The coefficients of the expansion are determined as usual by the orthogonality of the basis functions. However, voxel solutions typically have a finite thickness and may have nonzero neighbors. Moreover, the assumption of uniqueness with respect to ζ or θ implies a sufficient level of sparsity in the solution. In practice, if this uniqueness condition holds, we extract a curve from the unique ζ locations and apply a generous moving average to the Cartesian coordinates of the curve. This process results in small deviations from the original curve of voxels identified during optimization, but usefully eliminates the ambiguity in defining the curve and importantly retains the voxel topology. Once the curve, Fourier coefficients, and I_target are specified, the filamentary optimization can be initialized and then performed. We omit the details, e.g. the hyperparameters and objective terms, of this optimization here but the methodology can be found in Zhu et al. <cit.> and the results can be entirely reproduced in an example in the SIMSOPT code <cit.>. The filament optimization results for the figure-eight coil in the third panel of Fig. <ref> are illustrated in Fig. <ref>. This single, 40 meter long, helical coil is able to produce a solution accurate enough to generate good flux surfaces as illustrated in Fig. <ref>, though with a reduced volume compared to the original target configuration. For comparison, the Wechsung et al. <cit.> coil set, with the same 1 meter major radius plasma and somewhat improved solution errors, found sixteen modular coils (four unique coils) with total length of approximately 72 meters. One 40 meter long coil could be challenging to fabricate off-site and transport on-site, but with demountable joints the coil can be fabricated and transported in separate pieces. Moreover, it is challenging to further improve the plasma surfaces here without making the already long coil significantly longer. 
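As an aside on the curve-to-filament conversion used above: once a single-valued, smoothed curve has been extracted from the voxel solution, the Fourier coefficients of the filament representation follow from orthogonality on a uniform θ grid. A minimal sketch with a hypothetical closed curve (not one of the optimized coils):

```python
import numpy as np

def fourier_coeffs(xyz, M):
    """Fit x(theta) = x_c0/2 + sum_m [x_cm cos(m theta) + x_sm sin(m theta)]
    to a closed curve sampled uniformly in theta; xyz has shape (n, 3)."""
    n = xyz.shape[0]
    theta = 2 * np.pi * np.arange(n) / n
    xc, xs = np.zeros((M + 1, 3)), np.zeros((M + 1, 3))
    for m in range(M + 1):
        # orthogonality of cos/sin on the uniform periodic grid
        xc[m] = 2.0 / n * (np.cos(m * theta)[:, None] * xyz).sum(axis=0)
        xs[m] = 2.0 / n * (np.sin(m * theta)[:, None] * xyz).sum(axis=0)
    return xc, xs

def eval_curve(xc, xs, theta):
    m = np.arange(xc.shape[0])
    c, s = np.cos(np.outer(theta, m)), np.sin(np.outer(theta, m))
    return 0.5 * np.outer(c[:, 0], xc[0]) + c[:, 1:] @ xc[1:] + s[:, 1:] @ xs[1:]

# hypothetical closed curve sampled at 256 points
t = 2 * np.pi * np.arange(256) / 256
curve = np.stack([(1 + 0.3 * np.cos(2 * t)) * np.cos(t),
                  (1 + 0.3 * np.cos(2 * t)) * np.sin(t),
                  0.4 * np.sin(2 * t)], axis=1)
xc, xs = fourier_coeffs(curve, M=8)
print(np.max(np.abs(eval_curve(xc, xs, t) - curve)))   # machine precision for band-limited curves
```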
It is common knowledge that saddle coils are needed to assist the helical coil, but further exploration on this point is beyond the scope of this paper. Despite these caveats, there are well-known benefits to using helical coils for stellarators, and in particular a single helical coil is attractive from a diagnostic access and engineering standpoint. Helical coils also minimize toroidal ripple, the small-scale errors that arise from modular coils with small coil-plasma distance. Notably, compared to the modular coil solution, our figure-eight solution has larger coil-plasma separation, larger coil-coil separation (except for the small regions in the center with high curvature), and more room for diagnostic access and neutron-absorbing blankets (required for nuclear fusion reactors). §.§ Coil design for the Landreman-Paul QH stellarator The Landreman and Paul QH stellarator <cit.> is stellarator-symmetric and four-field-period symmetric, so only one-eighth of the plasma and voxel grid is required for optimization. Figure <ref> illustrates some of the exotic configurations found through optimization. As in many stellarator coil solutions, the currents tend to congregate near the inboard side of bends where reducing the normal magnetic field is challenging. Moreover, the sparsest solution looks like a rectangular, four-field-period coil that is quadruple-linked with the stellarator. In fact, all of the solutions exhibit this underlying structure in the currents. Similar helical-coil solutions for other optimized stellarators have recently been investigated independently <cit.>, and it is exciting to find a similar coil topology through a new optimization method. Like the previous example, we use the four-field-period current voxel solution coil in the third panel of Fig. <ref> to initialize a filament optimization. With a single coil, we were unable to sufficiently reduce B ·n̂ enough to accurately produce the desired plasma surfaces. Instead, we initialize two coils with the same topology, with one coil slightly perturbed in space from the voxel solution. The resulting filament optimization with these two helical coils is similar in spirit to the optimizations in Yamaguchi et al. <cit.> and Elder et al. <cit.>, which both utilize multiple helical coils. A two-coil filament solution, with combined length of 53 m, is illustrated in Fig. <ref>. The coils are accurate enough to produce flux surfaces in the Poincaré plots in Fig. <ref>, though with some distortions compared to the original configuration. Despite the coil complexity, these helical coils could be a useful alternative to the modular coils typically used for four-field period stellarators. For instance, the four-field-period and stellarator symmetric HSX device has 48 coils (6 unique coils), with total coil length ∼ 90 m <cit.>, and subsequently a neutron-absorbing blanket is infeasible and diagnostic access is limited. Lastly, as far as we aware, the pair of intertwined helical coils in Fig. <ref> represents the first successful coil set for this QH stellarator in the literature. §.§ Coil design for the Goodman QI stellarator To conclude the results, we use our new method to compute some initial coils for the two-field-period and stellarator symmetric QI stellarator found in Goodman et al. <cit.>. Figure <ref> illustrates some of the optimized coil configurations. 
Interestingly, the currents tend to be important near the straight, “race-track” parts of the plasma surface, and it seems to be challenging to find coils that are spatially distributed around the plasma. The sparsest solution consists of a single rotated window-pane coil (two identical coils after symmetrizing) and figure-eight helical coils and more complex topology also appear. Initializing a figure-eight helical coil or tilted modular coil from this solution could be interesting future work. § CONCLUSION We have formulated topology optimization based on sparse regression and the l_0 norm, and additionally provided an algorithm that can effectively solve a large subclass of topology optimization problems across scientific disciplines. To demonstrate our method, we have designed a new approach for inverse magnetostatics, computing topologically-unconstrained electromagnets. While stellarator coils were considered as a specific application here, we expect the method can be applicable to other areas in which a target magnetic field must be produced, such as magnetic resonance imaging or particle accelerator optics. Additionally, we have provided examples of several exotic topological solutions for three different stellarators of interest to the plasma physics community. This method is new and subsequently there is ample room for improvement and refinement. Future work includes: implementation of higher-order basis functions, tetrahedral meshes, algorithmic speedups through improved iterative solvers and preconditioners or improved sparse regression algorithms, additional loss terms in the optimization for reducing coil forces or coil curvatures, reformulation as stochastic optimization to control for coil errors, and much more. A reformulation may be possible that builds in the current conservation by construction, rather than as constraints in the optimization problem. This method explores a high-dimensional nonconvex optimization space that may exhibit more exotic or more useful solutions than the ones found in this initial work. Although not explored in this work, initial conditions for the optimization can bias the solutions towards producing a particular topological structure or a certain number of identifiable coils. Along with adding additional engineering-related optimization terms, clever initial conditions could facilitate real-world coil designs; some of the coil solutions in this work are presented because they are interesting topologically, but these solutions could present serious engineering challenges. It may require such initial conditions or additional loss terms in order to fully reproduce the types of solutions found using filament optimization. Indeed, this work is perhaps most compelling for providing principled topology choices to initialize more complex filament optimization for stellarators. § ACKNOWLEDGEMENTS Thanks to Georg Stadler for optimization improvements and paper suggestions. We would like to acknowledge assistance from Todd Elder, Elizabeth Paul, Alan Goodman, Stefan Buller, and many others in the Simons Collaboration on Hidden Symmetries and Fusion Energy. This work was supported by the U.S. Department of Energy under award DEFG0293ER54197 and through a grant from the Simons Foundation under award 560651. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. 
DE-AC02-05CH11231. § DEFINING THE CURRENT VOXEL GRID As is often done in other magneto-static optimization problems for stellarators, e.g., permanent magnet optimization, we define the permissible volume for voxels as the space between two toroidal limiting surfaces. A simple transformation can be used to generate this volume. We begin by initializing a uniform Cartesian grid, incorporating the discrete symmetries of the plasma surface if possible, in a large region surrounding the plasma. The plasma boundary surface is extended outward by a constant multiple of the unit normal to generate an inner toroidal boundary. An outer limiting surface is generated similarly, using the normal vectors on the inner toroidal surface. For moderately shaped equilibria, these simple transformations work well to generate a toroidal volume. Any of the original grid cells that are not between the inner and outer surfaces are eliminated with a ray-tracing routine. It is straightforward to extend this method for more complex grids. For instance, diagnostic ports can easily be included by removing any intersecting grid cells, and updating the flux jump constraints accordingly for the remaining grid cells. § MATRIX FORMS OF THE LOSS TERMS In this section, we show how the various optimization objectives appearing in stellarator coil optimization can be formulated as linear terms in the optimization variables α. We take advantage of possible stellarator and field-period symmetries by using n_θ n_ζ quadrature points in poloidal and toroidal angles (θ, ζ) on the plasma surface to write for any scalar surface quantity Q: ∫_S Qd^2r = ∑_i=1^n_p∫_0^2πdθ∫_0^2π/n_pdζ nQ ≈∑_i=1^n_p∑_j=1^n_θ n_ζΔθ_jΔζ_jn_jQ_j. Here n_p is the value of the field-period symmetry, n = dr/ dθ×dr/ dζ are the surface normal vectors, and n = n. Plugging in the basis expansion for the J_k in each cell, the coil contributions at each quadrature point r_j can be summarized as, B_coil( r_j)· n_j = -μ_0/4 π∑_k=1^Dα·∫_V_k'n̂× ( r_j - r_k')/ r_j - r_k'^3·ϕ( r_k') d r_k', = ∑_i=1^Nα_iG_ij = G·α, G ≡ -μ_0/4 π∫_V_k'n̂× ( r_j - r_k')/ r_j - r_k'^3·ϕ( r_k') d r_k'. The total inductance matrix G can be computed only once before optimization begins and the integrals over V_k' are evaluated with a tensor-product quadrature grid. The loss term associated with the normal magnetic field on the plasma boundary becomes f_B(α) ≡1/2 Aα - b^2, b_j ≡√(Δθ_jΔζ_j N_j_2) B_target, j· n_j, A_ji ≡√(Δθ_jΔζ_j N_j_2)G_ji, where Δθ_j and Δζ_j indicate the grid spacing in the two angular directions. Equation (<ref>) is linear least-squares in the α optimization variables, as desired. There is a similar term that comes from the requirement μ_0 I_target = ∮_γ( B_coil - B_0) ·dl, defined earlier in Eq. (<ref>) to avoid the trivial solution. Note first that B_coil( r)·dl = -μ_0/4 π∑_k=1^Dα·∫_V_k'dl× ( r - r_k')/ r - r_k'^3·ϕ( r_k') d r_k', = F ·α, where F is the equivalent to G but with the replacement n̂→dl. Then in total we have ∑_j=1^n_γD_jiα_i - e_j = μ_0I_target, D_ji ≡Δζ_j F_ji, e_j ≡Δζ_j B_0, j· dl_j. Now add a row of zeros to D and append μ_0I_target to the end of e. Then Eq. (<ref>) can be written in the form A_Iα = b_I, A_I ≡∑_j=1^n_γ + 1D_ji, b_I ≡∑_j=1^n_γ + 1 e_j, which we are free to recast as a loss term to be minimized in the optimization, f_I(α) ≡1/2 A_Iα - b_I_2^2. The flux jump condition in Eq. 
(<ref>) also needs to be written in terms of the α optimization variables: ∫_V_k' ∩ V_l' n̂'·[ J_k( r_k') - J_l( r_k')]d^2 r_k' = 0 = ∑_i=1^N∫_V_k' ∩ V_l'n̂'·(α_ikϕ_ik - α_ilϕ_il)d^2 r_k', where the l index denotes the index of the adjacent cell. Many of the cells will have fewer than six constraints because of duplicates from other cells, i.e., two adjacent cells need only a single constraint for their mutual interface. Stacking the constraints from all the cells produces C_kiα_i = 0, C_ki≡∫_V_j' ∩ V_l'n̂'·(ϕ_ij - ϕ_il)d^2 r_j', with an appropriate index mapping between k and (j, l). We have now defined the equality constraints required for the current density to match flux jumps at cell interfaces. Since the current densities are divergence-free within cells, this additional constraint produces globally divergence-free current density. There is a subtlety present in the constraints in Eq. (<ref>). The C matrix is not full rank and this appears to due to the limited expressiveness of the linear finite element basis to represent the current density in each cell. In practice, this is only a potential issue for preconditioning, or computing C^-1 via the pseudoinverse. Alternatively, this problem could be somewhat ameliorated by the use of higher-order polynomial basis functions. Lastly, we can add Tikhonov regularization, f_K(α) ≡1/2Dα_2^2, with a factor of D^-1 introduced to compensate for the dependence of α_2^2 on the number of voxels. Finally, the complete optimization problem is min_α {f_B(α) + κ f_K(α) + σ f_I(α) + λα_0^G}, s.t. Cα = 0. Tikhonov regularization tends to be critical when λ = 0, especially for the MINRES preconditioning, but less important when λ≠ 0 since the group-sparsity term tends to regularize the solution anyways. § HYPERPARAMETER SCANS Here, we investigate the convergence of the algorithm solutions with respect to the geometric hyperparameters, using the Landreman and Paul QA stellarator <cit.>. A description of each of the hyperparameters is shown in Table <ref>. It was found that convergence of the geometric hyperparameters was essentially independent of the particular stellarator configuration. There are four primary algorithm hyperparameters: κ controlling the amount of Tikhonov regularization, σ controlling how closely to match the prescribed total current through a toroidal loop, ν controlling the amount of relaxation between α and β, and λ controlling the amount of group sparsity. In the convex limit, without sparsity promotion, only κ and σ are relevant. In the σ→ 0 or κ→∞ regimes, the optimization correctly arrives at the trivial solution. There are a number of geometric quantities in the optimization: the spatial resolution of V' or equivalently the number of unique grid cells D, the number of points N' used for intra-cell integrations of the Biot Savart law, and the number of quadrature points n_ζ n_θ used for the plasma boundary and n_γ for the toroidal loop. Note that n_ζ n_θ denotes the number quadrature points on the half-field-period surface, so that the total number of quadrature points for this stellarator is 4n_ζ n_θ (and similarly for n_γ and D). It was found that n_γ = 8 is already sufficient for optimization, since the exact shape of the toroidal loop is anyways unimportant for our purposes, and therefore we omit it from the more careful convergence studies described below. Convergence with respect to n_θ n_ζ is illustrated in Fig. <ref>; n_θ = n_ζ = 16 is already well-converged. 
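For readers who want to experiment with the structure of the optimization problem above, the following minimal sketch assembles the quadratic losses f_B, κ f_K and σ f_I in the convex limit (λ = 0) and solves the equality-constrained least-squares problem through its KKT system with dense linear algebra. The matrices A, A_I and C and the vectors b, b_I are assumed to be precomputed from the geometry described above and are replaced by random stand-ins here, and the κ/D scaling follows the D^-1 factor mentioned in the text; a practical implementation would instead use the preconditioned MINRES solver discussed earlier.

```python
import numpy as np

# Placeholder sizes: N basis coefficients alpha spread over D_vox voxels, n_q
# plasma-surface quadrature points, n_c flux-jump constraints.  Real matrices
# come from the geometric setup described above; random stand-ins are used here.
rng = np.random.default_rng(0)
N, n_q, n_c, D_vox = 60, 40, 20, 12
A   = rng.normal(size=(n_q, N))   # weighted Biot-Savart term entering f_B
b   = rng.normal(size=n_q)        # weighted target normal field
A_I = rng.normal(size=(1, N))     # net-current requirement recast as the loss f_I
b_I = np.array([1.0])             # mu_0 * I_target (placeholder value)
C   = rng.normal(size=(n_c, N))   # flux-jump (continuity) constraints, C alpha = 0
kappa, sigma = 1e-3, 1.0

def solve_convex_limit(A, b, A_I, b_I, C, kappa, sigma, D_vox):
    """Minimize f_B + kappa*f_K + sigma*f_I subject to C alpha = 0 (lambda = 0)."""
    # Hessian and gradient of the quadratic objective.
    H = A.T @ A + sigma * (A_I.T @ A_I) + (kappa / D_vox) * np.eye(A.shape[1])
    g = A.T @ b + sigma * (A_I.T @ b_I)
    # KKT system [[H, C^T], [C, 0]] [alpha; mu] = [g; 0]; C may be rank
    # deficient, so a least-squares solve is used instead of a direct inverse.
    n, m = A.shape[1], C.shape[0]
    KKT = np.block([[H, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([g, np.zeros(m)])
    sol, *_ = np.linalg.lstsq(KKT, rhs, rcond=None)
    return sol[:n]

alpha = solve_convex_limit(A, b, A_I, b_I, C, kappa, sigma, D_vox)
print("f_B =", 0.5 * np.linalg.norm(A @ alpha - b) ** 2,
      "max |C alpha| =", np.abs(C @ alpha).max())
```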
For convergence studies we take cubic cells, N' = N_x^3, and determine the minimal N_x value for accurate Biot-Savart calculations from each cell. For our purposes, “convergence” refers to the convergence of the solution found in the convex limit of the optimization problem, λ = 0 and ν→∞. We consider typical optimization hyperparameters σ = 1 and κ = 10^-15 for a reasonably well-posed optimization problem with f_B ∼κ f_K. Then we start with N_x = 1 and increase this value until these increases provide no change in the final solution to the optimization problem. The Biot-Savart calculation is then calculated accurately enough to at least produce the same global minimum. Figure <ref> illustrates that N_x ≈ 6 is sufficient for the Biot-Savart calculations to be accurate enough that the optimization is converged. Similarly, we increase the number of voxels, D, by keeping the overall grid volume constant, while increasing the number of cells. The cells subsequently get smaller and smaller and therefore provide a test for convergence. If κ = 0, this is an ill-posed problem that in general can continue to find better global minima as more coil degrees of freedom are added to the problem. With large enough κ, the problem is well-posed and Fig. <ref> illustrates convergence with respect to D. Notice that the Tikhonov loss term in Eq. (<ref>) is scaled by the number of voxels. Furthermore, we test the scaling between various computational times and the number of voxels in Fig. <ref>. The code is parallelized via Openmp and xsimd <cit.>. All runs used a single AMD EPYC 7763 CPU on the Perlmutter supercomputer, with 64 cores per CPU. Additional specifications for these nodes are available online. The algorithmic MINRES scaling with D is favorable; from D ∼ 10^3 → D ∼ 10^5, the time for a complete preconditioned MINRES solution only increases by an order of magnitude. The computational time for a full relax and split solve is calculated with a fixed λ = 10^5, ν = 10^14. The many calls to MINRES are the bottleneck in the overall optimization. Therefore the time for a relax and split solve scales similarly (here we use 40 iterations of relax-and-split and therefore 40 calls to MINRES). The geometric and preconditioning setup scalings are somewhat less favorable but importantly these quantities need only be computed once before optimization begins. Lastly, note that the right-most points represent a solution using 114,208 (unique) grid cells and therefore 571,040 optimization parameters in α. In this case, the matrix A^T A is dense with ∼ 326 billion nonzero elements.
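As a concrete illustration of the intra-cell Biot-Savart convergence test described above, the short sketch below integrates the Biot-Savart kernel over a single cubic voxel with an N_x^3 midpoint tensor-product rule and prints the normal field component for increasing N_x. The cell geometry, evaluation point and current value are arbitrary placeholders, and a piecewise-constant current is used as a stand-in for the linear polynomial basis of the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def biot_savart_normal(r_eval, n_hat, cell_center, h, J_basis, N_x):
    """Normal component at r_eval of the field from a cubic cell of side h
    carrying a piecewise-constant current J_basis, using a midpoint
    tensor-product rule with N_x points per dimension."""
    pts_1d = (np.arange(N_x) + 0.5) / N_x - 0.5          # midpoints in [-1/2, 1/2)
    X, Y, Z = np.meshgrid(*(pts_1d,) * 3, indexing="ij")
    quad = cell_center + h * np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    dV = (h / N_x) ** 3
    d = r_eval - quad                                    # r - r', shape (N_x^3, 3)
    kernel = np.cross(J_basis, d) / np.linalg.norm(d, axis=1, keepdims=True) ** 3
    B = MU0 / (4 * np.pi) * kernel.sum(axis=0) * dV
    return float(B @ n_hat)

# Evaluation point a few cell widths away, as is typical for the voxels
# closest to the plasma boundary; all numbers are placeholders.
r_eval, n_hat = np.array([0.3, 0.1, 0.25]), np.array([0.0, 0.0, 1.0])
cell, h, J = np.zeros(3), 0.1, np.array([0.0, 1e6, 0.0])
for N_x in (1, 2, 4, 6, 8):
    print(N_x, biot_savart_normal(r_eval, n_hat, cell, h, J, N_x))
```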
http://arxiv.org/abs/2306.06904v1
20230612071813
Differentiable Multi-Fidelity Fusion: Efficient Learning of Physics Simulations with Neural Architecture Search and Transfer Learning
[ "Yuwen Deng", "Wang Kang", "Wei W. Xing" ]
cs.LG
[ "cs.LG", "cs.AI" ]
http://arxiv.org/abs/2306.01494v1
20230602124209
Local Message Passing on Frustrated Systems
[ "Luca Schmid", "Joshua Brenk", "Laurent Schmalen" ]
cs.LG
[ "cs.LG", "cs.IT", "eess.SP", "math.IT" ]
Local Message Passing on Frustrated Systems Luca Schmid, Joshua Brenk, Laurent Schmalen =================================================================================================================================================================================================================== Message passing on factor graphs is a powerful framework for probabilistic inference, which finds important applications in various scientific domains. The most wide-spread message passing scheme is the sum-product algorithm (SPA) which gives exact results on trees but often fails on graphs with many small cycles. We search for an alternative message passing algorithm that works particularly well on such cyclic graphs. Therefore, we challenge the extrinsic principle of the SPA, which loses its objective on graphs with cycles. We further replace the local SPA message update rule at the factor nodes of the underlying graph with a generic mapping, which is optimized in a data-driven fashion. These modifications lead to a considerable improvement in performance while preserving the simplicity of the SPA. We evaluate our method for two classes of cyclic graphs: the 2 × 2 fully connected Ising grid and factor graphs for symbol detection on linear communication channels with inter-symbol interference. To enable the method for large graphs as they occur in practical applications, we develop a novel loss function that is inspired by the Bethe approximation from statistical physics and allows for training in an unsupervised fashion. § INTRODUCTION Message passing on graphical models is a powerful framework to efficiently solve inference and optimization problems. The most prominent message passing algorithm is the SPA, also known as BP <cit.>, which implements exact inference on tree-structured graphs <cit.>. Due to its simplicity, the SPA is often applied to cyclic graphs where it becomes an iterative and approximate algorithm. While this works surprisingly well for various applications, such as decoding of low-density parity-check codes <cit.>, a class of error-correcting codes, the SPA performs poorly on frustrated systems, i.e., on graphs with many cycles and strong coupling between the nodes. The seminal work of <cit.> revealed a connection between the SPA and free energy approximations of statistical physics, in particular, the fixed points of BP correspond to stationary points of the Bethe free energy. Based on this insight, alternative message passing methods were proposed which directly minimize the Bethe free energy <cit.>. These algorithms are guaranteed to converge to an extremum of the Bethe free energy but are computationally more demanding than plain BP. <cit.> proposed tree-reweighted BP as a message passing algorithm on the “convexified” Bethe free energy, which is guaranteed to have a global minimum. While this algorithm has stronger convergence guarantees compared to BP, it involves the selection and optimization of so-called edge appearance probabilities, a graph-specific problem that is often non-trivial for practical applications. <cit.> proposed “generalized BP” as an algorithm that passes messages between regions of nodes instead of single nodes.
Larger regions will generally improve the quality of the approximation, however, they also increase the computational complexity. Recently, model-based deep learning has shown great potential to empower various suboptimal algorithms, such as the SPA on cyclic graphs. Neural BP, proposed by <cit.>, unfolds the iterations of the SPA on its underlying graph and equips the resulting deep network with tunable weights. The GAP algorithm of <cit.> varies the observation model by preprocessing, thereby shaping a graph with more favorable properties with respect to BP performance. <cit.> extend GNN to factor graphs and propose a hybrid model where BP runs conjointly to a GNN which is structurally identical to the original factor graph but has fully parametrized message updates. All these works have in common that they are based on the SPA as a core concept which is vigorously improved using machine learning in order to compensate for its shortcomings on graphs with cycles. In this work, we follow an alternative approach and directly search for alternative message passing algorithms that perform especially well on graphs with cycles, where the SPA tends to fail. To this end, we replace the well-known SPA message update rule with a compact NN, which is optimized to find a superior local message update rule. Furthermore, we discuss the role of the extrinsic information principle which was originally introduced for tree-structured graphs. Based on the close connection of BP to the Bethe approximation, we propose a novel end-to-end loss function that allows unsupervised and application-agnostic training of new message passing schemes. § BACKGROUND We briefly introduce factor graphs and the SPA as a widespread framework for probabilistic inference on graphical models. We refer the reader to <cit.> for an excellent in-depth treatment of the topic. §.§ Factor Graphs Let f(𝒳) be a multivariate function of 𝒳={x_1,…,x_N } which factors into a product of local functions f_j: f(𝒳) = 1/Z∏_j=1^J f_j(𝒳_j), 𝒳_j ⊆𝒳. A factor graph visualizes the factorization in (<ref>) as a bipartite graph. Every variable x_n is represented by a unique vertex, a so-called variable node, which we draw as a circle in the graph. Factor nodes represent the local functions f_j and are visualized by squares. The undirected edges of the graph connect a factor node f_j(𝒳_j) with a variable node x_n if and only if f_j is a function of x_n, i.e., if x_n ∈𝒳_j. From a graphical perspective, 𝒳_j thus corresponds to the set of adjacent variable nodes to the factor node f_j. Similarly, we define 𝒩(x_n) to be the set of adjacent factor nodes to the variable node x_n. In this work, we restrict the variables x_n ∈{+1,-1} to be binary and the local factors f_j to be either functions of a singular variable x_n or functions of pairs (x_n,x_m) such that the factorization becomes f(𝒳) = 1/Z∏_n=1^N ψ_n(x_n) ∏_(n,m)∈ℰψ_n,m(x_n,x_m), where ℰ is the set of edges in the graph. Figure <ref> shows an exemplary factor graph. §.§ Sum-product Algorithm The SPA is a message passing algorithm that operates in a factor graph and attempts to determine the marginals of the multivariate function f(𝒳). Messages are propagated between the nodes of the factor graph along its edges and represent interim results of the marginalization. Let m_f_j → x_n(x_n) denote a message sent from a factor node f_j along an edge to a variable node x_n and let m_x_n → f_j(x_n) denote a message on the same edge, but sent in the opposite direction. 
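Before turning to the message updates, a minimal illustration of the pairwise factorization in (<ref>) may be helpful. The sketch below stores a small fully connected graph of binary variables with random positive factor tables, evaluates the unnormalized product of local factors for a single assignment, and obtains Z by exhaustive enumeration (feasible only because N is tiny); the factor values are arbitrary placeholders.

```python
import itertools
import numpy as np

# A small pairwise factor graph in the form of Eq. (<ref>): binary variables
# x_n in {+1, -1}, singleton factors psi_n(x_n) and pairwise factors
# psi_{n,m}(x_n, x_m) stored as tables indexed by (x == -1 -> 0, x == +1 -> 1).
N = 4
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]   # fully connected
rng = np.random.default_rng(1)
psi_single = {n: rng.uniform(0.5, 2.0, size=2) for n in range(N)}
psi_pair = {e: rng.uniform(0.5, 2.0, size=(2, 2)) for e in edges}

def unnormalized_f(x):
    """Product of all local factors for one assignment x in {+1,-1}^N."""
    idx = [(v + 1) // 2 for v in x]                        # map -1/+1 -> 0/1
    val = np.prod([psi_single[n][idx[n]] for n in range(N)])
    val *= np.prod([psi_pair[(n, m)][idx[n], idx[m]] for (n, m) in edges])
    return val

# Normalization constant Z by exhaustive enumeration over all 2^N assignments.
Z = sum(unnormalized_f(x) for x in itertools.product((-1, +1), repeat=N))
print("Z =", Z)
```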
If the factor graph visualizes a probabilistic model, i.e., if the variable nodes represent random variables, a message m_f_j → x_n(x_n) can be interpreted as a probabilistic statement from node f_j about the random variable x_n to be in one of its possible states <cit.>. The SPA defines the updates of the propagating messages at the nodes of the factor graph according to the simple rules <cit.>: m_x → f_j(x) = ∏_f_i ∈𝒩(x) ∖ f_j m_f_i → x(x) m_f_j → x(x) = ∑_∼{ x }( f_j(𝒳_j) ∏_x' ∈𝒳_j ∖ x m_x' → f_j(x') ). The summary operator ∑_∼{x} denotes the marginalization over all variables in 𝒳_j except for x. One key property of the SPA is the extrinsic information principle which states that the update of an outgoing message m_A→B at node A destined to node B does not depend on the incident message m_B→A which travels on the same edge but in opposite direction. For the special case of degree-2 factor nodes ψ_n,m(x_n,x_m), the SPA update rule (<ref>) thereby simplifies to m_ψ_n,m→ x_n(x_n) = ∑_x_mψ_n,m(x_n,x_m) · m_x_m →ψ_n,m(x_m). Messages at factors nodes ψ_n(x_n) with degree 1 are not updated at all. Initially, all messages are set to some unbiased state before they are iteratively updated according to a certain schedule. For tree-structured graphs, the messages converge after they have once traveled forward and backward through the entire graph. The result of the SPA, i.e., the marginal functions f(x_n), are finally obtained by a combination of all messages incident to the respective variable nodes: f(x_n) = ∏_f_i ∈𝒩(x_n) m_f_i → x_n(x_n). Since the SPA makes no reference to the topology of the factor graph and the message updates are local, the SPA may also be applied to factor graphs with cycles <cit.>. On graphs with cycles, the SPA only yields an approximation of the exact marginals. While this approximation works surprisingly well in many cases, even including particular classes of graphs with many small cycles, there are also cases where the results are quite poor or where the SPA does not converge at all <cit.>. Relation to the Bethe Approximation In their seminal work, <cit.> showed a revealing connection between the SPA and free energy approximations in statistical physics. From a variational perspective, probabilistic inference can be seen as an optimization problem q = min_q ∈𝕄 D_KL(q || p), where we want to find the distribution q from the set 𝕄 of all globally valid probability distributions, known as the marginal polytope <cit.>. Since the KL divergence D_KL(q||p) is always non-negative and zero if and only if q=p, we reach the minimum exactly for q=p. Obviously, optimizing over all possible probability distributions q∈𝕄 is generally intractable. Based on some general assumption, free energy methods simplify the problem in (<ref>) to the minimization of a variational free energy term. We refer the reader to <cit.> for a detailed elaboration on this topic. The Bethe approximation restricts the distribution q(𝒳) to be a product of univariate distributions b_n(x_n) and joint distributions b_n,m(x_n,x_m) between pairs (n,m) ∈ℰ: q_Bethe( 𝒳) := ∏_n=1^N b_n(x_n) ∏_(n,m)∈ℰ b_n,m(x_n,x_m). This simplification leads to the Bethe free energy F_Bethe = ∑_(n,m)∈ℰ ∑_x_n,x_m b_n,m(x_n,x_m) log( b_n,m(x_n,x_m)/ϕ_n,m(x_n,x_m)) -∑_n=1^N (|𝒳_n| - 1) ∑_x_n b_n(x_n) log( b_n(x_n)/ψ_n(x_n)), with ϕ_n,m(x_n,x_m) := ψ_n(x_n) ψ_n,m(x_n,x_m) ψ_m(x_m). 
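The degree-2 update rules quoted above translate directly into a few lines of code. The sketch below runs the SPA with a parallel (flooding) schedule on a small fully connected pairwise graph with random factor tables and returns the approximate beliefs; messages are normalized after every update purely for numerical stability, and all factor values are placeholders.

```python
import numpy as np

# Sum-product message passing on a fully connected pairwise graph with binary
# variables, following the degree-2 factor update rule above.
N = 4
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
rng = np.random.default_rng(2)
psi_single = {n: rng.uniform(0.5, 2.0, size=2) for n in range(N)}
psi_pair = {e: rng.uniform(0.5, 2.0, size=(2, 2)) for e in edges}
neighbors = {n: [m for e in edges for m in e if n in e and m != n] for n in range(N)}

# Messages m_{psi_{n,m} -> x_n}, keyed by (other variable m, target variable n).
msg = {(m, n): np.ones(2) for (a, b) in edges for (m, n) in [(a, b), (b, a)]}

for _ in range(10):                                      # parallel schedule
    new_msg = {}
    for (m, n) in msg:                                   # update psi_{m,n} towards x_n
        table = psi_pair[(m, n)] if (m, n) in psi_pair else psi_pair[(n, m)].T
        # Variable-to-factor message from x_m: local evidence times all incident
        # factor messages except the one coming back from this factor (extrinsic).
        v2f = psi_single[m] * np.prod([msg[(k, m)] for k in neighbors[m] if k != n], axis=0)
        out = table.T @ v2f                              # marginalize over x_m
        new_msg[(m, n)] = out / out.sum()                # normalize for stability
    msg = new_msg

beliefs = {}
for n in range(N):
    b = psi_single[n] * np.prod([msg[(m, n)] for m in neighbors[n]], axis=0)
    beliefs[n] = b / b.sum()
print(beliefs[0])                                        # approximate marginal of x_1
```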
Moreover, the Bethe approximation relaxes the search space in (<ref>) from the marginal polytope 𝕄 to the local polytope 𝕃 = {{ b_n(x_n), b_n,m(x_n,x_m): ∀ n ∈ [1;N], (n,m) ∈ℰ} : ∑_x_n b_n,m(x_n,x_m) = b_m(x_m), ∑_x_n b_n(x_n) = 1 ∑_x_m b_n,m(x_n,x_m) = b_n(x_n)}. This means that the distributions b_n(x_n) and b_n,m(x_n,x_m) only need to locally fulfill consistency in a pairwise sense. In summary, the Bethe approximation converts (<ref>) into the optimization problem q_Bethe = 2_{b_n, b_n,m}∈𝕃 F_Bethe( {b_n, b_n,m}). <cit.> showed that the fixed points of BP applied to a factor graph correspond to the stationary points of the respective Bethe free energy. Seen in this light, BP is a suboptimal algorithm to minimize F_Bethe. The approximative nature in this sense is twofold: First, there may exist multiple fixed points of the SPA for the same factor graph, i.e., the solution of the (converged) BP might correspond to an extremum of F_Bethe other than the global minimum in 𝕃  <cit.>. Second, the beliefs only fulfill the pairwise consistency constraints at the fixed points of BP. This means that the solution only lies within the local polytope 𝕃 after BP has converged. However, BP does not necessarily converge and failure of convergence is a major error mode <cit.>. Various methods to directly solve (<ref>) or variants thereof were proposed (see <cit.> and references therein). <cit.> proposed to decompose the Bethe free energy into concave and convex parts which enables the application of a CCCP. That algorithm consists of a double loop where the outer loop iteratively minimizes F_Bethe and the inner loop ensures that the pairwise consistency constraints are fulfilled. Due to the CCCP, the algorithm provably converges to an extremum of the Bethe free energy. §.§ Examples For the remainder of this section, we introduce two important classes of factor graphs which are the basis for the numerical experiments in Sec. <ref>. Example 1 - Ising Graphs We consider factor graphs with N=M^2 variable nodes, arranged in a square 2D lattice, in which pairs of adjacent variable nodes (x_n,x_m) are symmetrically coupled by the weights J_n,m via factor nodes ψ_n,m(x_n,x_m) = exp( J_n,m x_n x_m ). Additionally, each variable node x_n has local evidence in the form of a degree-1 factor node ψ_n(x_n) = exp( θ_n x_n ). The Ising model originates from statistical physics where the binary variables x_n ∈{+1,-1} represent the orientation of elementary magnets in a lattice <cit.>. Each magnet is exposed to a local field θ_n and is influenced by its neighbors via an assigned pairwise coupling J_n,m. Besides its fundamental significance in statistical physics, the Ising model is a universal mathematical model and finds applications in many other scientific domains such as image processing <cit.> and modeling of social networks <cit.>. Following <cit.>, we study the fully connected 2 × 2 Ising model, i.e., M=2 and N=4, where every pair of variable nodes is connected. A factor graph representation of this model is given in Fig. <ref>. With more cycles than variable nodes and a girth of 3, this graph can be parametrized to a highly frustrated system and is thus able to highlight the weaknesses of the SPA <cit.>. In particular, we consider the Ising spin glass, where the parameters θ_n and J_n,m are iid random variables, sampled from a uniform distribution 𝒰[-S,+S] with S∈ℝ^+. 
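For reference, the 2 × 2 fully connected Ising spin glass of Example 1 and its exact marginals (obtained by enumerating all 2^4 spin configurations) can be generated as in the following sketch; the exact marginals later serve as the ground truth against which the approximate beliefs are scored.

```python
import itertools
import numpy as np

# Fully connected 2x2 Ising spin glass: theta_n, J_{n,m} drawn i.i.d. from
# U[-S, S]; exact marginals by enumerating the 2^4 spin states.
S, N = 2.0, 4
rng = np.random.default_rng(3)
theta = rng.uniform(-S, S, size=N)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
J = {e: rng.uniform(-S, S) for e in edges}

def weight(x):
    """Unnormalized probability exp(sum_n theta_n x_n + sum_{n,m} J_{n,m} x_n x_m)."""
    return np.exp(sum(theta[n] * x[n] for n in range(N))
                  + sum(J[(n, m)] * x[n] * x[m] for (n, m) in edges))

states = list(itertools.product((-1, +1), repeat=N))
Z = sum(weight(x) for x in states)
p_plus = np.array([sum(weight(x) for x in states if x[n] == +1) for n in range(N)]) / Z
print("exact P(x_n = +1):", p_plus)
```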
We are interested in the computation of the marginal functions f(x_n) = ∑_∼{x_n} f(x_1,x_2,x_3,x_4), n=1,2,3,4, which correspond to marginal probability distributions p(x_n)=f(x_n) if the Ising graph represents a probabilistic model. While the direct computation of (<ref>) is still feasible for our example with N=4, the number of summations grows exponentially with N, which calls for alternative methods with lower complexity. Applying the SPA on the factor graph in Fig. <ref> yields the single beliefs b_n(x_n) as an approximation of f(x_n) with a complexity that only grows quadratically with N. Example 2 - Symbol Detection We study the problem of symbol detection in a digital communication system <cit.>. A transmitter sends a sequence of N independent and uniformly distributed symbols c_n ∈{+1,-1} over a linear channel with memory, impaired by AWGN. The receiver observes the sequence y = [ h_0 ; ⋮ h_0 1.30 ; h_L ⋮ ⋱ ; h_L h_0; 1.30 ⋱ ⋮; h_L; ]_=: H[ c_1; c_2; ⋮; ; c_N ]_=: c + [ w_1; w_2; ⋮; ; w_N+L ]_=: w , where h∈ℝ^L+1 describes the impulse response of the channel of length L+1 and w_k ∼𝒞𝒩(0,σ^2) are independent noise samples from a complex circular Gaussian distribution. Applying Bayes' theorem, the posterior distribution p(c|y) can be expressed in terms of the likelihood: p(c|y) = 1/Z p(y|c) = 1/Zexp ( -( y-Hc) ^2/σ^2 ). In the context of symbol detection, we want to infer the transmit symbols c_n based on the channel observation y, i.e., we are interested in the marginal distributions p(c_n|y). Based on an observation model by <cit.> p(y|c) ∝exp( 2Re{c^HH^Hy} -c^HH^HHcσ^2), we can factorize the likelihood p(y|c) = 1/Z∏_n=1^N [ F_n(c_n) ∏_m=1 m < n^N I_n,m(c_n,c_m) ] into the factors F_n(c_n) := exp( 1/σ^2Re{ 2 x_n c_n^⋆ - G_n,n|c_n|^2 }) I_n,m(c_n,c_m) := exp( -2/σ^2Re{ G_n,m c_m c_n^⋆}), where x:=H^Hy and G:=H^HH are the matched filtered versions of the observation and the channel matrix, respectively. Modeling a factor graph based on (<ref>) and applying the SPA yields a low-complexity symbol detection algorithm, originally proposed by <cit.>. § MESSAGE PASSING FOR CYCLIC GRAPHS Despite its drawbacks on cyclic graphs, the amazing success of the SPA lies in its simplicity and generality: it is only defined by a local message update rule which can be applied to any generic factor graph based on a suitable message update schedule. Driven by this elegant concept, we are interested in finding message passing algorithms that perform well on graphs with many cycles where the SPA fails. More specifically, we ask the following questions: * If the SPA fails to converge, does an alternative local message update rule exist that converges (possibly to an extremum of the Bethe free energy) and which provides better results than the SPA? * If the SPA converges to an extremum of the Bethe free energy, is there a local message update rule which yields superior performance, either because the SPA converges to a fixed point which only corresponds to a local instead of global minimum of the Bethe free energy, or because the Bethe approximation itself is a bad approximation in this case? §.§ On Message Update Rules A message update rule defines a mapping from one or multiple incident messages to one outgoing message, which is applied locally at the variable or factor nodes of a factor graph. Besides the initialization of the messages and their update schedule, these mappings fully define a graph-based inference algorithm. 
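A small sketch of the Ungerboeck-based factorization used in Example 2 is given below: it builds the banded channel matrix H, generates a noisy observation, and evaluates the factors F_n and I_{n,m} from the matched-filtered quantities x = H^H y and G = H^H H. The channel taps, block length and noise variance are placeholder values.

```python
import numpy as np

# Factor parameters of the Ungerboeck observation model for y = Hc + w.
N, L, sigma2 = 6, 2, 0.5
h = np.array([0.8, 0.5, 0.3 + 0.2j])                 # impulse response, length L+1 (placeholder)
H = np.zeros((N + L, N), dtype=complex)
for n in range(N):
    H[n:n + L + 1, n] = h                            # banded Toeplitz structure

rng = np.random.default_rng(4)
c = rng.choice([-1.0, 1.0], size=N)                  # BPSK symbols
w = np.sqrt(sigma2 / 2) * (rng.normal(size=N + L) + 1j * rng.normal(size=N + L))
y = H @ c + w

x = H.conj().T @ y                                   # matched-filtered observation
G = H.conj().T @ H                                   # channel Gram matrix

def F(n, c_n):
    """Singleton factor F_n(c_n)."""
    return np.exp((2 * np.real(x[n] * np.conj(c_n)) - G[n, n].real * abs(c_n) ** 2) / sigma2)

def I(n, m, c_n, c_m):
    """Pairwise factor I_{n,m}(c_n, c_m)."""
    return np.exp(-2 * np.real(G[n, m] * c_m * np.conj(c_n)) / sigma2)

print(F(0, +1), F(0, -1), I(0, 1, +1, -1))
```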
The SPA update rule at the variable nodes (<ref>) is simply the product of all extrinsic messages. We adopt this quite intuitive aggregation principle and focus on finding a message update rule for the factor nodes, i.e., an alternative to (<ref>). For factor nodes of degree 2, such as in (<ref>), the update rule simplifies to a mapping from one single incident message to one outgoing message: FN_e(ψ_n,m): m_x_n →ψ_n,m(x_n) ↦ m_ψ_n,m→ x_m(x_m). If the pairwise factors ψ_n,m(x_n,x_m) are symmetric with regard to x_n and x_m, and follow the exponential form ψ_n,m(x_n,x_m) = exp( E_n,m x_n x_m ), x_n,x_m ∈{+1,-1}, we can distill the dependency from the function ψ_n,m to the scalar parameter E_n,m∈ℝ, which quantifies the repulsive (E_n,m<0) or attractive (E_n,m>0) coupling between the nodes x_n and x_m. This directly coincides with the pairwise coupling weights J_n,m = E_n,m of the Ising model in Example 1. The factor nodes I_n,m of Example 2 can be reduced to the coupling parameters E_n,m = -2G_n,m / σ^2. Challenging the Extrinsic Principle Most of the existing message passing algorithms follow the extrinsic information principle. For instance in turbo decoding, it is known to be an important property of good message passing decoders <cit.>. Ensuring that only extrinsic messages are received, it prevents backcoupling of intrinsic information in tree-structured graphs, which would otherwise lead to a self-enhancement of the messages, also known as “double counting”. Thereby, it guarantees that the SPA is exact on trees <cit.>. We argue that this is in general not valid for cyclic graphs where backcoupling of messages is inevitable due to the very nature of the cycles. Therefore, we propose a second message update rule which operates contradictory to the extrinsic principle: instead of ignoring the intrinsic message, the message update should rather actively leverage this additional information, e.g., to ensure that local consistency between neighboring nodes is fulfilled. Without the extrinsic principle, we need to reconsider the messages from degree-1 factor nodes which are then also subject to iterative updates. To avoid an increase in complexity due to additional message updates at the degree-1 factor nodes, we apply a clustering approach similar to <cit.>. We split up the single factors ψ_n(x_n) into |𝒳_n| parts Ψ_n(x_n) := (ψ_n(x_n))^1/|𝒳_n| and merge them into the adjacent pairwise factors ψ_n,m(x_n,x_m), such that the new clustered factors are Ψ_n,m(x_n,x_m) := Ψ_n(x_n) ψ_n,m(x_n,x_m) Ψ_m(x_m). The overall factorization (<ref>) simplifies to f(x_1,…,x_N) = 1/Z∏_(n,m)∈ℰΨ_n,m(x_n,x_m), which leads to the non-extrinsic mapping FN(Ψ_n,m): [ m_x_n →Ψ_n,m(x_n); m_x_m →Ψ_n,m(x_m) ]↦ m_Ψ_n,m→ x_m(x_m). If the single factors are in exponential form Ψ_n(x_n) = exp( E_n x_n ), x_n ∈{ +1,-1}, the clustered factors Ψ_n,m(x_n,x_m) are fully characterized by the three scalars E_n, E_n,m and E_m. §.§.§ Neural Networks as Function Approximators Finding suitable mappings (<ref>) or (<ref>) such that the overall message passing algorithm performs well is generally non-trivial. We employ feed-forward NN, known to be efficient universal function approximators <cit.>, to reduce the search space of all possible mappings to a set of weights and biases 𝒫, which fully parametrize the NN. At a factor node f_j, the network accepts N_in inputs and produces the updated outgoing message m_f_j → x_n. 
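The clustering step can be made concrete as follows: each singleton factor is split into deg(x_n) equal parts and absorbed into the adjacent pairwise factors, so that every clustered factor Ψ_{n,m} is described by the three scalars (E_n, E_{n,m}, E_m) that are fed to the update rule. The sketch uses the Ising parametrization of Example 1 with arbitrary random parameters.

```python
import numpy as np

# Clustering of the singleton factors into the pairwise factors for the fully
# connected 2x2 Ising graph: Psi_n = psi_n^(1/deg(n)) is absorbed into every
# adjacent pairwise factor.
N = 4
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
rng = np.random.default_rng(6)
theta = rng.uniform(-2, 2, size=N)                 # local fields theta_n
J = {e: rng.uniform(-2, 2) for e in edges}         # couplings J_{n,m} = E_{n,m}
deg = {n: sum(n in e for e in edges) for n in range(N)}

def clustered_factor(n, m):
    """2x2 table of Psi_{n,m}(x_n, x_m), indexed by (x == -1 -> 0, x == +1 -> 1).

    Assumes the edge is stored as (n, m) with n < m, as in the list above.
    """
    E_n, E_nm, E_m = theta[n] / deg[n], J[(n, m)], theta[m] / deg[m]
    table = np.empty((2, 2))
    for i, x_n in enumerate((-1, +1)):
        for j, x_m in enumerate((-1, +1)):
            table[i, j] = np.exp(E_n * x_n + E_nm * x_n * x_m + E_m * x_m)
    return (E_n, E_nm, E_m), table

print(clustered_factor(0, 1)[0])                   # the three scalars characterizing Psi_{0,1}
```

Since every ψ_n is split into deg(x_n) identical parts with exponent 1/deg(x_n), the product of all clustered factors over the edges reproduces the original factorization.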
For factor graphs with binary variables x_n, the messages m_f_j → x_n(x_n) can be expressed in scalar LLR L_f_j → x_n := log( m_f_j → x_n(x_n=+1)/m_f_j → x_n(x_n=-1)). A similar definition holds for the LLR L_x_n → f_j based on the messages m_x_n → f_j(x_n). For the extrinsic update (<ref>), there are N_in=2 inputs: the LLR of the incoming extrinsic message and the coupling parameter E_n,m of the local factor node. Without the extrinsic principle, the NN furthermore accepts the LLR of the intrinsic message as well as E_n and E_m, i.e., in total N_in=5 inputs. Since we only approximate a local mapping from a few scalar inputs to a single output, we can choose a very compact NN structure with a single hidden layer and 7 neurons, as summarized in Table <ref>. Having set up the NN structure, we are able to define a convenient message update rule by appropriately tuning the parameterization 𝒫 of the NN. We are interested in a local update rule such that the overall message passing performs well. To this end, we optimize 𝒫 with respect to an objective function that evaluates the end-to-end performance of the inference task. Therefore we apply a fixed number of message passing iterations and back-propagate the gradient of the objective function in order to iteratively optimize 𝒫 using gradient descent based on a representative set of examples. Note that this data-driven approach inevitably leads to a specialization of the learned message update to the data. However, we expect the result to be fairly generic and to have good generalization capabilities since we only optimize very few parameters in an otherwise model-aware system. Moreover, despite the end-to-end optimization, we only use a single message update rule for the entire factor graph, i.e., we employ the same instance of the NN for the message updates at all factor nodes and in each iteration[As a consequence, the training procedure of the NN is not entirely local because the local copies of the NN at each factor node must be globally synchronized during optimization. However, the local nature of the message updates is still retained.]. We note that our approach can be interpreted as a special instance of a GNN as, e.g., described by <cit.>. In comparison, our model passes scalar messages instead of high-dimensional vectors and does not use any hidden states or embeddings at the variable nodes. For this reason, we do not require a second NN with a gated recurrent unit, as used in <cit.> to update the hidden states based on the aggregated messages. Furthermore, we do not require a third NN which implements a trainable readout function to interpret the final node embeddings. §.§ End-to-end Objective Functions In the generic context of marginal inference, we hope to find a good approximation of the true marginals. A convenient objective function is the KL divergence which measures a type of statistical distance between the beliefs b_n(x_n) and the exact marginal distributions p(x_n) = ∑_∼{x_n} p(x_1,…,x_N): ℒ_KL := D_KL(b_n(x_n) p(x_n) ). For large graphs, the computation of p(x_n) might be infeasible, and ℒ_KL becomes impractical. Therefore, we propose alternative loss functions in what follows. The training of a symbol detector as in Example 2 is a typical supervised learning scenario where the labels are given by the transmitted symbols c_n. 
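A possible realization of the compact trainable update in PyTorch is sketched below. The single hidden layer with 7 neurons follows the description in the text; the tanh activation and the input ordering are assumptions, since the architecture table is not reproduced here. One instance of this module would be shared by all factor nodes and all iterations.

```python
import torch
import torch.nn as nn

class FactorNodeUpdate(nn.Module):
    """Compact NN replacing the SPA update at the degree-2 factor nodes.

    A single instance is shared by all factor nodes and all iterations.
    n_in = 5 for the non-extrinsic variant (intrinsic and extrinsic LLRs plus
    E_n, E_{n,m}, E_m); n_in = 2 for the extrinsic variant (incoming LLR and
    E_{n,m}).  The tanh activation is an assumption.
    """

    def __init__(self, n_in=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 7), nn.Tanh(), nn.Linear(7, 1))

    def forward(self, *features):
        """Map the incident LLR(s) and local factor parameters to the outgoing LLR."""
        return self.net(torch.stack(features, dim=-1)).squeeze(-1)

# One message update for a batch of 8 factor nodes (random placeholder inputs).
update = FactorNodeUpdate(n_in=5)
llr_out = update(torch.randn(8), torch.randn(8), torch.randn(8), torch.randn(8), torch.randn(8))
print(llr_out.shape)                               # torch.Size([8])
```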
An appropriate performance measure for symbol detection is the BMI which is an achievable information rate[In our case, where the symbols c_n follow a Rademacher distribution, the BMI is equivalent to the mutual information.] for our scenario <cit.>. By a sample mean estimate over D labeled examples (c,y) from the data batch 𝒟, the BMI can be approximated by BMI≈ 1 - 1/D N∑_n=1^N∑_(c, y)∈𝒟log_2 ( e^-c_n L_n(y) + 1 ), where L_n(y) denotes the LLR from the belief b_n(c_n) <cit.>. Other applications such as the Ising model in Example 1 relate to the class of unsupervised problems if the true marginals are not accessible. For such scenarios, we consider a novel and application-agnostic objective function in the following. Inspired by the Bethe approximation, which is known to yield excellent results for many applications, even in cases where the SPA performs poorly <cit.>, we propose a regularized minimization of the Bethe free energy: ℒ_Bethe := F_Bethe + αℒ_𝕃, α∈ℝ^+. To ensure local consistency, we introduce the Bethe consistency distance ℒ_𝕃 := D_KL( ∑_x_m b_n,m(x_n,x_m) b_n(x_n) ) + D_KL( ∑_x_n b_n,m(x_n,x_m) b_m(x_m) ) as a type of distance measure between the solution of the approximative inference { b_n,b_n,m} and the local polytope 𝕃. The weight α in (<ref>) is a hyperparameter that controls how strictly the local consistency is enforced. With this penalty term ℒ_𝕃, we hope to suppress oscillations in the message passing, as they occur in the SPA for graphs with strong coupling. § EXPERIMENTS We consider the examples of Sec. <ref> for numerical evaluation. To enable a deeper analysis, we fix the number of variable nodes to N=4 such that the computation of the true marginals is feasible. Despite this rather small extent, these models lead to factor graphs with a high density of short cycles and are thus expressive examples to highlight the weaknesses of the SPA. Furthermore, we fix the global settings of the message passing to standard choices: all LLR messages are initialized with zero and we perform 10 iterations of a parallel schedule, i.e., each iteration comprises the parallel update of all messages at the factor nodes followed by message updates at all variable nodes. A common technique to improve the performance of the SPA on graphs with cycles is the use of “momentum”, i.e., replacing a message L^(t) of the SPA in iteration t with the weighted average (1-μ) L^(t) + μ L^(t-1) <cit.>. By choosing 0< μ < 1, the idea is to improve the convergence behavior of the message passing scheme compared to the original SPA (μ=0) while retaining the same fixed points. As in <cit.>, we set μ=0.1 and use this variant of the SPA as an additional baseline in the following experiments, where we refer to it as SPA_μ. Besides the SPA, we similarly apply message passing based on the newly proposed update rule (<ref>). We call the resulting inference algorithm cycBP (BP for cyclic graphs). If we use the extrinsic update rule (<ref>), we denote the algorithm with cycBP_e. We also consider the CCCP for the Bethe free energy as defined in <cit.>, since it gives interesting insights into the quality of the Bethe approximation. For the double loop, we apply 25 outer iterations, each comprising 25 inner iterations. Ising model We study the 2 × 2 fully connected spin glass model of Example 1 for S=2, i.e., all parameters θ_n and J_n,m are independently sampled from a uniform distribution 𝒰[-2,+2]. Table <ref> evaluates the behavior of all discussed inference schemes, averaged over 10^5 different graphs. 
σ̂_ℒ_KL denotes the empirical standard deviation of ℒ_KL of the individual graphs from the empirical mean. We can observe that the SPA does not leverage the full potential of the Bethe approximation, since the average loss ℒ_KL=0.087 of the SPA is twice as large compared to ℒ_KL=0.044 for the CCCP. Although the SPA reaches on average a smaller F_Bethe than the CCCP, the beliefs of the SPA show local inconsistencies with ℒ_𝕃=0.3 due to non-convergent behavior. Using “momentum” in the SPA message updates can help to mitigate this behavior: the SPA_μ shows improved pairwise consistency ℒ_𝕃=0.12 and also yields in average a better approximation of the true marginals (ℒ_KL=0.035). The CCCP has a vanishing Bethe consistency distance ℒ_𝕃, i.e., the results of the CCCP lie within the local polytope 𝕃. We search for alternative message update rules, by optimizing 𝒫 of the NN-based mappings towards minimal ℒ_KL. The training batches are sampled from a spin glass model with S=3 to put more emphasis on graphs with strong coupling, where the SPA is known to be susceptible to convergence errors. The results in Tab. <ref> show that there indeed exist superior message update rules to the SPA for this class of cyclic graphs. Using the extrinsic update rule (<ref>), the cycBP_e algorithm reaches ℒ_KL=0.04 and thereby outperforms the original SPA as well as the CCCP. We visualize the message update rule of the cycBP_e algorithm in Fig. <ref> by plotting the optimized mapping (<ref>) from the incoming LLR message L_x_n →ψ_n,m to the outgoing LLR message L_ψ_n,m→ x_m. Similar to the SPA, the mapping is point-symmetric to the origin. The major difference is the behavior for incident LLR messages with high magnitudes |L_x_n →ψ_n,m|>8, where the outgoing messages are heavily attenuated. Intuitively, this behavior reduces the potential of oscillation in graphs with strong coupling E_n,m. We can further improve the inference performance by disabling the extrinsic principle in the message passing procedure. The resulting algorithm cycBP can be interpreted as a generalization of cycBP_e and outperforms the latter with ℒ_KL=0.014, as reported in Tab. <ref>. It also yields a superior approximation of the true marginals compared to the momentum-based SPA_μ, although the Bethe consistency distance ℒ_𝕃=0.48 is relatively high in this case. Moreover, we consider unsupervised training towards the proposed loss function ℒ_Bethe. For the cycBP_e algorithm, the unsupervised training leads to a smaller loss ℒ_KL=0.03 compared to the supervised training, i.e., the loss function ℒ_Bethe is better suited for the optimization via stochastic gradient descent than the loss function ℒ_KL in this case. In the unsupervised training towards ℒ_Bethe, we observe substantial differences between the two variants cycBP_e and cycBP: while the optimization of cycBP_e converges reliably, the training of cycBP is unstable and the optimization needs to run multiple times with different initializations for 𝒫 until a reasonable result is obtained. This behavior is also reflected in the results in Tab. <ref>, where the cycBP algorithm shows a degraded performance with ℒ_KL=0.027, compared to the supervised training (ℒ_KL=0.014). We conjecture that this is accounted for by the local consistency constraint ℒ_𝕃, which can be directly enforced at the message update at the factor nodes if the intrinsic message also takes part in the update. 
Optimization of the hyperparameter α did not lead to considerable changes in this behavior and we used α=25 for all presented results. To analyze the convergence properties on highly frustrated systems, we consider the Ising model with constant parameters θ and J. Note that we do not specifically optimize the models for this scenario, but rather use the previous parametrization 𝒫 which is optimized for spin glasses with S=3. Figure <ref> plots ℒ_KL over θ and J for different inference algorithms. <cit.> showed that the Bethe free energy has a unique minimum in the complete antiferromagnetic domain (J<0) and in large parts of the ferromagnetic case (J>0), except for a region around θ=0, where F_Bethe has two minima in 𝕃. This coincides with our findings of the approximation error ℒ_KL for the CCCP in Fig. <ref>. The SPA shows failure to converge in large parts of the antiferromagnetic region with J<-1, where it does not converge to the unique fixed point and produces large approximation errors. The extrinsic message passing scheme cycBP_e, optimized towards ℒ_Bethe, shows an improved behavior. However, in the antiferromagnetic case with strong repellings (J<-1.5), there are still considerable approximation errors. The non-extrinsic message passing algorithm cycBP shows good inference capabilities over the complete considered region if it is optimized towards ℒ_KL. The unsupervised training with respect to ℒ_Bethe leads to a similar performance as the CCCP, however, the training procedure is again relatively unstable in this case. Symbol Detection We further consider the factor graphs of Example 2 for approximate symbol detection on linear channels with memory L=2. To generate random channels, we independently sample each tap h_ℓ of the channel impulse response for every example from a Gaussian distribution with zero mean and unit variance and subsequently normalize each channel to unit energy ‖h‖_2 = 1. Figure <ref> evaluates the detection performance of the considered inference algorithms in terms of the BMI over the signal-to-noise ratio E_b/N_0 = 1/σ^2. Both, the SPA and the CCCP run into an error floor for high E_b/N_0, where the graphs tend to have strong coupling via the factor nodes I_n,m(c_n,c_m). The momentum-based message updates of the SPA_μ enhance the original SPA in the entire E_b/N_0 range under consideration and close the performance gap to the CCCP. During the optimization of the new message update rules, the E_b/N_0 in dB was sampled from 𝒰[0,16] for each batch element independently. To help the update rule adapt to different channel realizations, i.e., different E_b/N_0 and different channel taps h, we feed E_b/N_0 and h as additional inputs to the NN. Figure <ref> shows that the optimized algorithms cycBP_e and cycBP clearly outperform the SPA and the CCCP, especially for high E_b/N_0. Consistently with our findings on the Ising graphs, the cycBP algorithm performs better than the extrinsic variant cycBP_e. The latter can also be trained towards ℒ_Bethe without degrading the detection performance. This is particularly surprising since it thereby clearly outperforms the CCCP. The optimization of the cycBP algorithm towards ℒ_Bethe does not converge and is therefore not shown in Fig. <ref>. However, since training towards the BMI is feasible for large N, the supervised training of the cycBP algorithm yields an attractive algorithm with low complexity and superior performance, which can be highly relevant for practical applications. 
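Two small helpers used in this evaluation are sketched here for completeness: generation of random unit-energy channels as described above, and the sample-mean BMI estimate computed from the detector LLRs and the transmitted symbols.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_unit_energy_channel(L=2):
    """Random channel taps from N(0, 1), normalized to unit energy."""
    h = rng.normal(size=L + 1)
    return h / np.linalg.norm(h)

def bmi_estimate(c, llr):
    """Sample-mean BMI estimate from LLRs and the true symbols c in {+1, -1}."""
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-c * llr)))

# Toy check with confident and correct LLRs: the BMI approaches 1 bit per symbol.
c = rng.choice([-1.0, 1.0], size=1000)
print(random_unit_energy_channel(), bmi_estimate(c, 10.0 * c))
```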
§.§ Discussion To investigate the two central questions which we formulated in Sec. <ref> and to show the potential of our method, we investigated compact models with N=4. These models are expressive examples for analysis since they have a high density of short cycles and because the true marginals are available as ground truth data. However, verifying the capability of the proposed cycBP algorithm for practical applications requires extensive numerical evaluation on larger graphs and varying graph structures. This is ongoing work and our preliminary results are promising. § CONCLUSION This work considered message passing for approximate inference and showed the existence of message update rules which perform especially well on cyclic graphs where the SPA fails. We challenged the extrinsic information principle for cyclic graphs and proposed an alternative message update rule which also takes intrinsic information into account. The gain was demonstrated by numerical experiments on two exemplary classes of factor graphs. The learned message update rules generalize well and training is extremely fast since the update rule is defined by a very compact NN that is reused at all factor nodes. We furthermore proposed a novel unsupervised and application-agnostic loss function that follows the idea of the Bethe approximation. This work has received funding in part from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101001899) and in part from the German Federal Ministry of Education and Research (BMBF) within the project Open6GHub (grant agreement 16KISK010). 29 urlstyle [Alvarado et al.(2018)Alvarado, Fehenberger, Chen, and Willems]alvarado_achievable_2018 Alex Alvarado, Tobias Fehenberger, Bin Chen, and Frans M. J. Willems. Achievable information rates for fiber optics: Applications and computations. J. Lightw. Technol., 360 (2):0 424–439, 2018. [Banerjee and El Ghaoui(2008)]banerjee_model_2008 Onureena Banerjee and Laurent El Ghaoui. Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. J. Mach. Learn. Re., 9:0 485–516, 2008. [Besag(1986)]besag_statistical_1986 Julian Besag. On the statistical analysis of dirty pictures. J. R. Stat. Soc. Ser. B. Stat. Soc., 480 (3):0 259–302, 1986. [Colavolpe et al.(2011)Colavolpe, Fertonani, and Piemontese]colavolpe_siso_2011 Giulio Colavolpe, Dario Fertonani, and Amina Piemontese. SISO detection over linear channels with linear complexity in the number of interferers. IEEE J. Sel. Topics Signal Process., 50 (8), 2011. [Gallager(1963)]gallager_ldpc_1963 Robert G. Gallager. Low-density parity check codes. PhD thesis, MIT Press, 1963. [Guillén i Fàbregas et al.(2008)Guillén i Fàbregas, Martinez, and Caire]Fabregas_foundations_2008 Albert Guillén i Fàbregas, Alfonso Martinez, and Giuseppe Caire. Bit-interleaved coded modulation. In Found. Trends Commun. Inf. Theory, volume 5. Now Publishers, Delft, NL, 2008. [Hornik et al.(1989)Hornik, Stinchcombe, and White]hornik_multilayer_1989 Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 20 (5):0 359–366, January 1989. [Knoll et al.(2018)Knoll, Mehta, Chen, and Pernkopf]knoll_fixed_2018 Christian Knoll, Dhagash Mehta, Tianran Chen, and Franz Pernkopf. Fixed points of belief propagation – an analysis via polynomial homotopy continuation. IEEE Trans. Pattern Anal. Mach. Intell., 400 (9):0 2124–2136, 2018. 
[Kschischang et al.(2001)Kschischang, Frey, and Loeliger]kschischang_factor_2001 Frank R. Kschischang, Brendan J. Frey, and Hans-Andrea Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory, 470 (2):0 22, 2001. [Kuck et al.(2020)Kuck, Chakraborty, Tang, Luo, Song, Sabharwal, and Ermon]kuck_belief_2020 Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, and Stefano Ermon. Belief propagation neural networks. Adv. Neural Inf. Process. Syst., 33:0 667–678, 2020. [Mooij and Kappen(2007)]mooij_sufficient_2007 Joris M. Mooij and Hilbert J. Kappen. Sufficient conditions for convergence of the sum-product algorithm. IEEE Trans. Inf. Theory, 530 (12):0 4422–4437, 2007. [Murphy et al.(1999)Murphy, Weiss, and Jordan]murphy_loopy_1999 Kevin Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proc. Innov. Appl. Artif. Intell. Conf., 1999. [Nachmani et al.(2016)Nachmani, Be'ery, and Burshtein]nachmani_learning_2016 Eliya Nachmani, Yair Be'ery, and David Burshtein. Learning to decode linear codes using deep learning. In Proc. Annu. Allerton Conf. Commun., Control, Comput., Monticello, IL, 2016. [Pearl(1988)]pearl_probabilistic_1988 Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988. [Peierls(1936)]peierls_isings_1936 Rudolf Peierls. On Ising's model of ferromagnetism. Math. Proc. Camb. Philos. Soc., 32:0 477–481, 1936. [Proakis and Salehi(2007)]proakis_digital_2007 John Proakis and Massoud Salehi. Digital Communications. McGraw Hill, 5 edition, 2007. [Rapp et al.(2022)Rapp, Schmid, Rode, and Schmalen]rapp_structural_2022 Lukas Rapp, Luca Schmid, Andrej Rode, and Laurent Schmalen. Structural optimization of factor graphs for symbol detection via continuous clustering and machine learning. arXiv:2211.11406, 2022. [Richardson and Urbanke(2001)]richardson_capacity_2001 Thomas J. Richardson and Rüdiger L. Urbanke. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. Inf. Theory, 470 (2):0 599–618, 2001. [Satorras and Welling(2021)]satorras_neural_2021 Victor Garcia Satorras and Max Welling. Neural enhanced belief propagation on factor graphs. In Proc. Int. Conf. on Artificial Intelligence and Statistics (AISTATS), pages 685–693, San Diego, CA, USA, 2021. [Schmid and Schmalen(2022)]schmid_low-complexity_2022 Luca Schmid and Laurent Schmalen. Low-complexity near-optimum symbol detection based on neural enhancement of factor graphs. IEEE Trans. Commun., 700 (11):0 7562–7575, November 2022. [Ungerboeck(1974)]ungerboeck_adaptive_1974 Gottfried Ungerboeck. Adaptive maximum-likelihood receiver for carrier-modulated data-transmission systems. IEEE Trans. Commun., COM-220 (5):0 624–636, 1974. [Wainwright and Jordan(2008)]wainwright_graphical_2008 Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach., 10 (1–2):0 1–305, 2008. Number: 1–2. [Wainwright et al.(2003)Wainwright, Jaakkola, and Willsky]wainwright_tree-reweighted_2003 Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. Proc. Int. Workshop Artificial Intelligence and Statistics, R40 (PMLR R4):0 308–315, 2003. [Welling and Teh(2013)]welling_belief_2013 Max Welling and Yee Whye Teh. 
Belief optimization for binary networks: A stable alternative to loopy belief propagation. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI), pages 554–561, 2013. [Yedidia et al.(2000)Yedidia, Freeman, and Weiss]yedidia_generalized_2000 Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. Advances in Neural Information Processing Systems, 13, 2000. [Yedidia et al.(2003)Yedidia, Freeman, and Weiss]yedidia_understanding_2003 Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Understanding belief propagation and its generalizations. In Exploring Artificial Intelligence in the New Millennium, volume 8, pages 236–239. Morgan Kaufmann Publishers Inc., 2003. [Yedidia et al.(2005)Yedidia, Freeman, and Weiss]yedidia_constructing_2005 Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. Inform. Theory, 510 (7):0 2282–2312, 2005. [Yoon et al.(2019)Yoon, Liao, Xiong, Zhang, Fetaya, Urtasun, Zemel, and Pitkow]yoon_inference_2019 KiJung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard Zemel, and Xaq Pitkow. Inference in probabilistic graphical models by graph neural networks. In Proc. Asilomar Conf. Signals Syst. Comput., 2019. [Yuille(2002)]yuille_cccp_2002 Alan L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 140 (7):0 1691–1722, 2002.
http://arxiv.org/abs/2306.05148v1
20230608121618
A Novel Blind Adaptive Beamformer with Robustness against Mutual Coupling and Miscalibration Effects
[ "M. Yaser Yağan", "Ahmet F. Coşkun", "Ali E. Pusane" ]
eess.SP
[ "eess.SP" ]
A Novel Blind Adaptive Beamformer with Robustness against Mutual Coupling and Miscalibration Effects M. Yaser YAĞAN12, Ahmet F. COŞKUN1, Ali E. PUSANE2 1 HİSAR Lab. @Informatics and Information Security Research Center (BİLGEM), TÜBİTAK, Kocaeli, Turkey 2 Department of Electrical and Electronics Engineering, Boğaziçi University, İstanbul, Türkiye [email protected]/[email protected], [email protected], [email protected] July 31, 2023 ===================================================================================================================================================================================================================================================================================================================================================================================================================== Beamforming techniques utilized either at the transmitter or the receiver terminals have achieved superior quality-of-service performances from both the multi-antenna wireless communications systems, communications intelligence and radar target detection perspectives. Despite the overwhelming advantages in ideal operating conditions, beamforming approaches have been shown to face substantial performance degradations due to unknown mutual coupling effects and miscalibrated array elements. As a promising solution, blind beamformers have been proposed as a class of receiver beamformers that do not require a reference signal to operate. In this paper, a novel gradient-based blind beamformer is introduced with the aim of mitigating the deteriorating effects of unknown mutual coupling or miscalibration effects. The proposed approach is shown to find the optimal weights in different antenna array configurations in the presence of several unknown imperfections (e.g., mutual coupling effects, miscalibration effects due to gain and phase variations, inaccurate antenna positions). By providing numerical results related to the proposed algorithm for different array configurations, and bench-marking with the other existing approaches, the proposed scheme has been shown to achieve superior performance in many aspects. Additionally, a measurement-based analysis has been included with validation purposes. 0.5 antenna array processing, receive beamforming, mutual coupling, array calibration imperfections. § INTRODUCTION Beamforming is a fundamental method that has been considered in wireless communication research communities for decades. The purpose of different beamforming techniques is to exploit antenna arrays to achieve reliable links with low power consumption. Consequently, beamformers can be realized at transmitters, receivers, or both transmitters and receivers in analog, digital, or hybrid forms. At the transmitter side, complex weights are calculated and multiplied by the signal fed to the array such that the propagated electromagnetic waves add up constructively in the desired beam direction and destructively in other (null) directions. On the other hand, a similar formulation can be implemented at the receiver to maximize the receiver array's gain in a specific direction and force it to approach zero in other interference directions. A high-performance beamformer strictly requires perfect channel estimation at the receiver and undelayed feedback to the transmitter such that the complex weights could be jointly optimized. 
Receiver beamforming algorithms might be classified in three forms:According to training necessity, there are data-aided and blind beamformers,for weight calculation, deterministic and iterative algorithms exist,and according to flexibility, there are fixed and adaptive beamformers. While adaptiveness is a certain requirement in mobile communications, iterative and deterministic algorithms can be compared in terms of computational complexity and its trade-off with the convergence of the iterative algorithm. Furthermore, a well-performing blind beamformer is preferable over a data-aided beamformer, since it will significantly reduce the signaling overhead. A data-aided beamformer calculates the weights numerically by optimizing a divergence measurement between the received signal and a reference signal via a mean square error (MSE) evaluation approach. However, blind beamformers search for the weights that maximize specific criteria, such as the signal-to-noise ratio (SNR) or signal-to-interference plus noise ratio (SINR). This is usually achieved by estimating the angle of arrival (AoA) of the desired signal and interference, then calculating the weights that give high gain at the desired direction and form nulls in the other directions. These methods suffer from many drawbacks arising from the fact that a precise AoA estimation is required and this depends essentially on the array factor (AF). Consequently, unknown mutual coupling (MC) or miscalibrated array (MA) limits the performance of such beamformers significantly. Few studies have focused on the problem of blind beamforming in the presence of unknown MC. In <cit.>, a beamforming algorithm has been developed for ULAs with unknown MC depending on the structure of the mutual coupling matrix (MCM). The numerical results have shown that the proposed algorithm achieves a better performance in the presence of unknown MC when compared to other approaches. Within the same context, the recent work in <cit.> has proposed an approach to improve the estimation of the steering vector. The approach was applied to conventional beamformers and provided performance improvement. Aside from these methods, a well-known algorithm for blind beamforming, previously used for blind equalization <cit.>, is the constant modulus algorithm (CMA) <cit.>. The CMA basically aims to numerically maximize the constant modulus property of the desired signals, whereas its solution is limited to only constant-amplitude signals (e.g., phase- and frequency-modulated ones). Additionally, other major deficiencies of this algorithm are shown to be the high computational complexity and slow convergence <cit.>. As a result, in order to address its basic drawbacks, CMA has been studied extensively and that has yielded to several modified implementations with the aim of providing further enhancements. For example, the authors in <cit.> proposed combining the CMA with a data-aided scheme to exploit the advantages of both approaches. In <cit.>, a modified CMA with analog beamforming has been proposed, and considerable enhancements in the overall bit error rate performance of a communications scheme have been achieved. Other works <cit.> focusing on the mathematical structure of the problem and reducing its complexity have shown significant reductions in computational complexity or the required processing power by performing an extensive comparison on several stochastic gradient-based algorithms. 
A recent work <cit.> has proposed another approach that aims to maximize the constant modulus criteria in a deterministic manner. Despite the substantial achievements with respect to other schemes in comparison, the complexity of this approach could easily be prominent as a bottleneck, since it includes eigenvalue decomposition and matrix inversion operations. In order to speed up the convergence phase, the authors in <cit.> have proposed employing a Hanning window on the updated weights. However, using Hanning window is shown to limit the degrees of freedom and results in an degradation offset when compared to the optimum solution. In this paper, a blind iteratively adaptive beamformer for a single source scenario is proposed. The proposed beamformer is gradient-based and not limited to signal subspaces oriented by an inflexible AF. Since the proposed scheme has no dependencies on the array geometry, it is expected to be much more robust against unknown MC and MA impairments. Simulation results have shown that the proposed approach maximizes the received signal's power for different array configurations in the presence of unknown antenna array imperfections (AAI). By providing an extensive study on the performance evaluation and comparison to other approaches together with measurement-based validation process, the proposed algorithm has been shown to achieve the optimal solution with reduced complexity and number of iterations. Besides, thanks to its adaptive structure, the algorithm is able to effectively update the combining weights due the AoA variations while facing no performance degradations. The remainder of this paper is organized as follows: The system model and mathematical formulation are explained in Section II, the proposed algorithm is presented in Section III, numerical results and comparisons are given in Section IV, and Section V concludes the paper. § SYSTEM MODEL This paper focuses on an arbitrary planar antenna array composed of M elements (as exemplified in Fig. <ref> for uniform linear array (ULA) and uniform circular array (UCA) cases) that could be described by the element positions in two-dimensional Cartesian coordinate system as (p_ix,p_iy), i=1,2,...,M. Accordingly, for an incident planar wave, the azimuth AoA ϕ can be defined as the angle between the plane's normal vector and the x axis, as shown in Fig. <ref>. The transmitted radio-frequency (RF) signal will be sensed at each antenna element with systematically varying time lags. As a function of the incidence signal direction ϕ, the time lags of the received signals τ_i(ϕ) could be easily expressed in terms of the elements' positions (p_ix,p_iy) relatively to the phase center of the array[The phase center might be selected arbitrarily with no restrictipn, but for the sake of simplicity, the origin of the Cartesian coordinate system is selected.]. Hence, under noise-free and lossless transmission conditions considered to describe the signal model, the received signal vector consisting of the time-domain complex RF signal replicas received by all elements might be expressed as 𝐫(t;ϕ) = [ s(t-τ_1)e^j2π f_c(t-τ_1); s(t-τ_2)e^j2π f_c(t-τ_2); ⋮; s(t-τ_M)e^j2π f_c(t-τ_M); ], where s(t) is a baseband signal occupying a bandwidth of B, and f_c denotes the RF carrier frequency. As long as the narrowband signal assumption (i.e., B<<f_c) holds true, each signal replica might be approximated as: s(t-τ_M)e^j2π f_c(t-τ_M)≈ s(t)e^j2π f_c(t-τ_M). 
Hence the received signal vector in (1) might be rewritten as 𝐫(t;ϕ) =s(t)e^j2π f_c t[ e^-j2π f_cτ_1; e^-j2π f_cτ_2; ⋮; e^-j2π f_cτ_M; ]≜s(t)e^j2π f_c t_x(t)a(ϕ). In (2), x(t) is the time-domain complex RF signal and a(ϕ) is the array manifold (or the array factor, AF) that could fully characterize the overall response of the antenna array at a specific frequency and towards an incidence signal direction ϕ. Since the RF signal part is common for all array elements, carrying out the investigation as focused on the baseband equivalents would be beneficial for tractability. Accordingly, the digital baseband signal vector received by the array is given as 𝐫[n] = s[n] 𝐚(ϕ) + η[n], where s[n] is the transmitted signal symbol at time instance n and η is an M× 1 vector of uncorrelated zero-mean additive white Gaussian noise samples with a variance of σ^2. In the presence of antenna array impairments such as unknown MC, MA, and inaccurate positioning of antenna elements, the array manifold would face additional phase terms in its each element. This would correspond to an array manifold that considers imperfections and is given as 𝐚_I(ϕ) = [ e^j2π f_c τ_1(ϕ) + jα_1; e^j2π f_c τ_2(ϕ) + jα_2; ⋮; e^j2π f_c τ_M(ϕ) + jα_M; ], where α_i, i∈1,2,…,M, denote the additional deteriorating phase terms caused by the impairments that would be simply equal to zero in ideal conditions (α_i=0, ∀ i), yielding 𝐚_I(ϕ)≡𝐚(ϕ). The beamformer multiplies the received signal vector 𝐫[n] by a vector of complex weights ω^H to give the output y[n] = ω^H𝐫[n]=∑_i=1^Mω_i^* r_i[n], such that the output power is maximized. Hence, the optimum weight vector ŵ_opt is found by solving the optimization problem ω̂_opt≜ max_ωω^H𝐫[n]^2 = max_ωω^H(s[n] 𝐚_I(ϕ) + η[n])^2 With the constraint ω=1, the solution to the optimization problem would easily be obtained as ω̂_opt = 𝐚_I(ϕ). Here, it is clearly seen that, given the array geometry and phase imperfection values α_i, ∀ i, the overall problem reduces to AoA estimation. Furthermore, for a blind beamformer, estimating the AoA with unknown AAI limits the end-to-end performance. Conventional AoA estimation methods, such as MUSIC, search for the solution in the subspace of 𝐚(ϕ), thus they theoretically fail to solve this problem, since 𝐚_I(ϕ) lies outside. However, a numerical method like gradient descent will not be constrained within that subspace and could still search the entire M-dimensional space to reach the optimal solution. In the following section, the proposed gradient-based power maximizing algorithm that aims to find the optimal weights for the blind beamformer is introduced.. The objective function is the direct estimation of the received signal power and the gradient with respect to beamformer weights is calculated at each iteration to update the weights. § GRADIENT-BASED BLIND ADAPTIVE BEAMFORMER §.§ Gradient Derivation The average power of the received signal can be estimated as P̂ = 1/N∑_n=0^N-1|y[n]|^2 = 1/N∑_n=0^N-1 y[n]y^*[n], where N is the number of observed signal samples. By assuming interference-free environment and equal-variance noise at each receiver channel, the weights are only required to shift the phases of the received signals. Hence, they will have the same amplitude and can be written as ω_i = 1/√(M) e^jθ_i. Consequently, the derivative of the average power with respect to each phase would be derived as ∂P̂/∂θ_i = 1/N∑_n=0^N-1(∂ y[n]/∂θ_iy^*[n] +∂y^*[n]/∂θ_iy[n]) . 
Substituting for y[n] from (<ref>) and for ω_i from (<ref>) yields ∂P̂/∂θ_i = 1/N∑_n=0^N-1(jr_i[n]e^jθ_iy^*[n] - jr_i^*[n]e^-jθ_iy[n]). §.§ Weight Update Process Starting from a random initial weight vector ω^0, the algorithm will update the weights at each data frame of size N', and all frame samples can be used for the gradient estimation (N'=N) or N samples can be used such that N < N'. Having the previous weight vector ω^k-1, the weights will then be updated at the k^th frame as ω_i^k = 1/√(M)exp{j(θ_i[k-1]+μ∂ P[k]/∂θ_i[k-1])}, where μ is the step size. Expressing the weights in this form ensures that the weight vector has a unit norm and no normalization is required. The brief description of the gradient-based beamforming algorithm is given in Algorithm <ref>. At the beginning, the weights can be initialized to take any random values. However, in simulations, the initial value for each weight has been selected as ω_i = 1/√(M). This initialization has shown well adaptiveness performance, thus it is preferred over random initialization to avoid possible convergence problems. In the following section, a detailed analysis of the performance of the proposed algorithm, its convergence, and adaptiveness is conducted. § NUMERICAL & EXPERIMENTAL RESULTS This section exhibits the outcomes of the Monte Carlo simulations and the anechoic chamber measurements corresponding to both ULA and UCA configurations. Within our comparative study, the carrier frequency is selected as f_c=2 GHz, the inter-element distances are set equal to the half-wavelength corresponding to f_c, and the modulation scheme is selected as QPSK. The swept parameters in the simulations are SNR, AoA, and N. While evaluating the convergence of the beamformer and its variation due to the number of considered samples, the value of the AoA has been fixed, and different (N, SNR) combinations have been simulated for both array configurations. Here, each simulated baseband symbol has been oversampled by a factor of 8 with the purpose of transmit filtering application. As a result of the beamformer's convergence examinations, choosing N equal to samples per symbol (SPS) that is identical to the oversampling factor 8 (one symbol) has been shown to be adequate for the proposed algorithm. On average, it takes up to 25 iterations to reach the optimum weight vector. On the other hand, the variations in SNR changes have been shown to induce negligible effects on the overall performance. The time-varying AoA related to the signal source has been then modeled as a random walk process. Fig. <ref> depicts the normalized average power of the proposed blind beamformer together with the outcomes of the conventional MUSIC approach and the prominent state-of-the art beamformer approach (i.e., CMA [4]). For the transmitted QPSK symbols, the advantages achieved by the proposed approach and the CMA in comparison to the fragile MUSIC algorithm have been exhibited due to successive data frame indices for both ULA and UCA configurations with 8 elements. Here, note that, the received power has been normalized to the optimum solution given in (<ref>). Unless otherwise specified, the first 8 symbols of each frames have been utilized for the beamforming algorithms. For the proposed model, after the transient period introduced during the convergence of the weight vector, the steady-state weights are obtained in the form of ω̂ = 𝐚_I(ϕ) e^jγ, where γ is an arbitrary phase shift. 
Consequently, this solution is shown to achieve maximum power at the beamformer's output. As seen in the Fig. <ref>, the proposed algorithm outperforms the other techniques in terms of convergence to the optimum solution and convergence speed. While CMA at some instances struggles to modify the weights adaptively in response to changing AoA, the suggested model can monitor the changes in AoA with some minor dips in the ULA case. The formed beams for different approaches are shown in Fig. <ref>. As seen, the proposed algorithm provides generating the highest peak in the AoA, and CMA has been shown to construct a very similar pattern. Here, the beamformer weights are compensated by the calibration coefficients to result in a real azimuth pattern. Additionally, Fig. <ref> shows the effect of varying the number of symbols for the weight update process on the performance. The vertical axis denotes the average power of beamformers' outputs for all frames normalized with respect to the optimum solution. As clearly seen, the proposed model exhibits considerable robustness even with lower number of observation samples. Here, note that, the deviation with respect to the optimum solution is caused due to the consideration of the first frames before convergence. For more than 20 symbols, CMA converges faster and produces better solution compared to the proposed model. However, it is important to remember the CMA treats symbols iteratively, and the number of iterations is equal to the number of symbols, while the proposed model performs a single iteration at each frame using all the symbols at once. This clearly emphasizes the advantages of the proposed blind beamformer as a lower complexity but efficient solution when compared to the other state-of-art alternatives. The performance of the proposed beamformer was evaluated in an anechoic chamber using the experimental setup shown in Fig. <ref>. A receiver array composed of 4 identical omnidirecitonal antennas <cit.> and a transmitter equipped with a single antenna were utilized. The signals were received using USRP-2945 SDR with 4 coherent channels and the proposed beamformer was implemented via a LabView environment. The measurements have been taken in 2.4 GHz and 2.7 GHz frequencies. For the 2.7 GHz case, the UCA shown in the figure with interelement distance equal to 4.5 cm was utilized. For the 2.4 GHz case, a ULA with interelement distance of 5 cm was utilized. A QPSK-modulated signal with 4 MHz bandwidth has been transmitted, the measurements have been performed for both the static transmitter case together with the moving transmitter case. The measurement results have been depicted in Fig. <ref>. The proposed model has been shown to converge in a fast manner and keep the performance in the static case. Whereas, for the case of a moving transmitter, performance degradations have been introduced despite the beamformer being able to reconverge in short time periods (∼20 ms). § CONCLUSION In this paper, a low-complexity blind receiver beamforming approach has been introduced. This approach has been shown to converge to the optimum solution numerically, and verified by anechoic chamber measurements. Without prior information on the array's geometry, and in the presence of unknown mutual coupling and miscalibration errors, the beamformer was able to adaptively converge to the maximum power level giving weights. The proposed scheme has been tested for both ULA- and UCA-type antenna arrays. 
Since the weights found by the beamformer are very close to the optimum weights, a natural direction for future work is the design of an algorithm that estimates the AoA from the weight vector under the specified imperfections. At the system level, estimating the AoA and the receiver nonidealities could be valuable for transmitter beamforming, which depends on the receiver's feedback.
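To make the weight-update rule of Section III concrete, the sketch below implements the gradient-based blind beamformer in NumPy for a simulated QPSK source: it builds an imperfect array manifold a_I(phi) with random per-element phase errors, forms frames r[n] = s[n] a_I(phi) + noise, and performs a phase-only gradient ascent on the estimated output power. The array geometry, SNR, step size, and frame sizes are illustrative assumptions rather than the values used in the paper, and the gradient is written for the y[n] = w^H r[n] convention, so signs may differ if the opposite convention is adopted.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative setup; geometry, SNR, and step size are assumptions, not the paper's values ---
M = 8                                   # number of array elements
lam = 3e8 / 2e9                         # wavelength at f_c = 2 GHz
pos = np.stack([np.arange(M) * lam / 2, np.zeros(M)], axis=1)   # ULA element positions (p_x, p_y)
alpha = rng.uniform(-0.5, 0.5, M)       # unknown phase errors (mutual coupling / miscalibration)
phi = np.deg2rad(40.0)                  # azimuth angle of arrival
sigma = 10 ** (-10.0 / 20)              # noise std. dev. for a 10 dB per-element SNR
mu, N, n_frames = 0.05, 8, 300          # step size, samples per gradient estimate, frames

def manifold(phi, pos, alpha, lam):
    """Imperfect array manifold a_I(phi): ideal inter-element phase delays plus error terms alpha_i."""
    tau_phase = 2 * np.pi / lam * (pos[:, 0] * np.cos(phi) + pos[:, 1] * np.sin(phi))
    return np.exp(1j * (tau_phase + alpha))

a_I = manifold(phi, pos, alpha, lam)
theta = np.zeros(M)                     # weight phases; w_i = exp(j*theta_i) / sqrt(M)

for _ in range(n_frames):
    # one received frame: r[n] = s[n] a_I(phi) + noise, with unit-power QPSK symbols
    s = (rng.choice([1.0, -1.0], N) + 1j * rng.choice([1.0, -1.0], N)) / np.sqrt(2)
    noise = sigma * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    r = np.outer(a_I, s) + noise

    w = np.exp(1j * theta) / np.sqrt(M)
    y = w.conj() @ r                    # beamformer output y[n] = w^H r[n]

    # sample-mean gradient of the output power with respect to the phases
    grad = 2 * np.mean(np.real(1j * np.exp(1j * theta)[:, None] * r.conj() * y[None, :]), axis=1)
    theta += mu * grad                  # phase-only ascent step; ||w|| = 1 is preserved automatically

p_out = np.mean(np.abs(y) ** 2)
print("output power / optimum:", p_out / (M + sigma ** 2))   # approaches 1 as the weights converge
```

A fuller experiment along the lines of Section IV would oversample and pulse-shape the symbols, let the AoA drift as a random walk across frames, and benchmark the result against CMA and MUSIC.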
http://arxiv.org/abs/2306.06450v1
20230610141410
How movement bias to attractive regions determines population spread and critical habitat size
[ "Vivian Dornelas", "Pablo de Castro", "Justin M. Calabrese", "William F. Fagan", "Ricardo Martinez-Garcia" ]
q-bio.PE
[ "q-bio.PE", "physics.bio-ph" ]
1,2]Vivian Dornelas[These authors contributed equally to this work and share first authorship] 1]Pablo de Castro^* 3,4,5]Justin M. Calabrese 5]William F. Fagan 3,1]Ricardo Martinez-Garcia[Correspondence: [email protected]] [1]ICTP – South American Institute for Fundamental Research and Instituto de Física Teórica, Universidade Estadual Paulista – UNESP, São Paulo, Brazil. [2] National Institute of Chemical Physics and Biophysics - Akadeemia Tee 23, Tallinn 12618, Estonia. [3]Center for Advanced Systems Understanding (CASUS); Helmholtz–Zentrum Dresden–Rossendorf (HZDR), Görlitz, Germany. [4]Department of Ecological Modelling, Helmholtz Centre for Environmental Research – UFZ, Leipzig, Germany. [5]Department of Biology, University of Maryland, College Park, MD, USA. How movement bias to attractive regions determines population spread and critical habitat size [ ============================================================================================== Ecologists have long investigated how the demographic and movement parameters of a population determine its spatial spread and the critical habitat size that can sustain it. Yet, most existing models make oversimplifying assumptions about individual movement behavior, neglecting how landscape heterogeneity influences dispersal. We relax this assumption and introduce a reaction-advection-diffusion model that describes the spatial density distribution of a population with space-dependent movement bias toward preferred regions, including avoidance of degraded habitats. In this scenario, the critical habitat size depends on the spatial location of the habitat edges with respect to the preferred regions and on the intensity of the movement bias components. In particular, we identify parameter regions where the critical habitat size decreases when diffusion increases, a phenomenon known as the “drift paradox”. We also find that biased movement toward low-quality or highly populated regions can reduce the population size, therefore creating ecological traps. Our results emphasize the importance of species-specific movement behavior and habitat selection as drivers of population dynamics in fragmented landscapes and, therefore, the need to account for them in the design of protected areas. How movement bias to attractive regions determines population spread and critical habitat size [ ============================================================================================== § INTRODUCTION Habitat destruction and fragmentation result in smaller and more isolated suitable habitat patches where extinctions are more likely to occur <cit.>. The viability of a population in each of these patches depends on the balance between growth inside the patch and population losses, mainly stemming from dispersal through habitat edges. The interplay between these two processes sets the minimum area required to sustain the population and defines a patch-size threshold <cit.>. Thus, understanding the interaction between demographic and dispersal processes is key to determining critical patch sizes across species, which has important implications for conservation, such as in the design of protected areas or ecological corridors <cit.>. Additionally, determining the expected spatial pattern of population density in patches larger than the critical size can improve understanding of population responses to further habitat destruction. 
The importance of critical patch sizes and population spread in finite habitat patches has led researchers to test model predictions in microcosm experiments using microbial populations <cit.> as well as in large-scale experiments and observations <cit.>. Despite the biological understanding gained from these and other empirical studies, key aspects of complexity remain absent from theoretical modeling of critical patch size phenomena. The most common models to determine critical habitat sizes consist of a reaction-diffusion equation describing the spatio-temporal dynamics of a population density in a bounded region. Within this family of models, the simplest ones assume purely diffusive dispersal coupled with exponential growth and are commonly called KISS models <cit.>. Due to these highly simplified assumptions, KISS models lead to analytical expressions for the critical patch size. More recently, researchers have refined movement descriptions by including space-dependent diffusion within the patch <cit.>, responses to habitat edges <cit.>, and various sources of non-random movement, such as a constant external flow <cit.> or a chemoattractant secreted by the population <cit.>. Other studies have explored more complex growth dynamics, such as Allee effects <cit.>, time-varying environments <cit.>, or heterogeneity in population growth, either through time-dependent demographic rates <cit.> or by introducing a finer spatial structure of habitat quality within the patch <cit.>. Finally, a few studies have combined space-dependent demographic rates with migration toward higher quality regions within the patch, in the presence of an environmental gradient, to determine critical patch sizes <cit.> or population spatial distributions depending on the type of boundary conditions <cit.>. Despite this significant endeavor to refine classical KISS models, some movement features which are present in most species and which directly impact population spread remain underexplored. For example, individuals often show a tendency to move toward certain habitat regions where they concentrate, which makes population ranges smaller than the total amount of habitat available <cit.>. While a considerable effort has focused on understanding how and why individuals show these patterns of space use <cit.>, their population-level consequences, especially in fragmented landscapes, have been less explored. Motivated by colonial central-place foragers such as ants, beavers, and colonial seabirds, one particular study obtained, numerically, the critical patch size when the home-range centers for all the individuals in the population are at the center of the habitat patch <cit.>. Overall, however, the lack of a more general theoretical framework limits the current understanding of how habitat selection within a fragmented landscape determines the spatial distribution and critical patch size for a given population. Here we take a first step to fill this theoretical gap by extending classical KISS models to account for space-dependent deterministic movement. We study how this additional movement component influences population spread in a heterogeneous one-dimensional landscape and the critical habitat patch size that ensures population survival. We consider the simple one-dimensional scenario with a finite high-quality habitat patch embedded in a low-quality “matrix” with high mortality. 
Using both numerical and analytical methods, we measure critical patch size and spatial patterns of population density for different matrix mortality levels. We also vary the intensity of two deterministic space-dependent movement components relative to random dispersal: a bias to preferred landscape locations and avoidance of degraded habitats. We find that the total population lost due to habitat degradation and the critical patch size depend nonlinearly on the parameters that control the different movement components as well as on the spatial distribution of habitat relative to the landscape regions where individuals move more slowly. Overall, our results emphasize the importance of incorporating covariates between movement behavior and landscape features when investigating population dynamics in heterogeneous landscapes. § MATERIAL AND METHODS §.§ Model formulation We consider a one-dimensional heterogeneous landscape with a habitat patch embedded in an infinite matrix (see Fig. <ref>a). The left and right habitat patch edges are located at x=x_L and x=x_R, respectively, and the habitat patch size is L=|x_R-x_L|. The landscape is occupied by a single-species population, which we describe via a continuous density field u(x,t). This population density changes in space and time due to demographic processes and dispersal. For the birth/death dynamics, we assume that the population follows logistic growth with net reproduction rate r and intraspecific competition intensity γ. The net growth rate is constant within each type of region but different between regions: r(x)=r_H>0 inside the habitat patch (high-quality, low-mortality region) and r(x)=r_M<0 in the matrix (low-quality, high-mortality region). The matrix mortality rate r_M defines the degree of habitat degradation, with the limit r_M→-∞ representing complete habitat destruction. For finite mortality rates, whether an individual dies in the matrix or not is determined by the mortality rate itself and the time the individual spends in the matrix. Therefore, when the matrix is not immediately lethal, the population density outside the habitat patch is not zero. For dispersal, we consider two different movement components: random dispersal with constant diffusion coefficient D, and a deterministic tendency of individuals to move toward attractive regions with space-dependent velocity v(x), therefore accounting for the effect of landscape heterogeneity in movement behavior. Importantly, this attractive term in our model generates movement bias toward regions that are not necessarily of higher habitat quality. The actual velocity of an individual is thus equal to v(x) plus a stochastic contribution that comes from diffusion. Combining these demographic and movement processes, the dynamics of the population density is given by ∂ u(x,t)/∂ t =r(x) u(x,t)-γ u(x,t)^2+ D∂^2 u(x,t)/∂ x^2 - ∂/∂ x(v(x)u(x,t)). The functional form of the advection velocity v(x) depends on landscape features, with attractive locations corresponding to x coordinates with slower velocity. We consider two different types of attractive regions. First, we incorporate a tendency to move toward an attractive location with velocity v_P(x). This term could represent, for example, attraction toward a chemoattractant source or toward a special resource, such as a watering hole. We choose v_P(x)=-τ_P^-1(x-x_P), where we assumed that the velocity at which individuals tend to move toward attractive landscape regions increases linearly with the distance to the focus of attraction. 
This is similar to how simple data-driven models for range-resident movement implement attraction to home-range center at the individual level <cit.>. The prefactor τ_P^-1 is the attraction rate toward the attractive location and defines the typical time that individuals take to re-visit x_P. In the following, we use x_P=0 in all our calculations, such that the locations of the habitat edges are measured relative to the focus of attraction. Second, we consider that individuals in the matrix tend to return to the habitat patch with velocity v_M(x), and therefore, we incorporate an additional attraction term biasing movement from the matrix toward its closest habitat edge. Again, we consider a linear spatial dependence, but now only for individuals in the matrix: v_M(x) = -τ^-1_M(x-x_L), x<x_L 0, x_L≤ x≤ x_R . -τ^-1_M(x-x_R), x>x_R The prefactor τ^-1_M is the edge attraction rate that modulates the strength of the matrix-to-habitat attraction v_M(x). In the habitat, v_M(x)=0, whereas in the matrix, it is equal to τ_M^-1 multiplied by the distance to the closest edge. Moreover, the velocity v_M(x) always points toward the habitat patch, therefore biasing the movement of the individuals in the matrix toward the habitat-matrix edges. This matrix avoidance drift assumes that individuals remain aware of the direction in which the favorable habitat is located, which extends previous models for movement response to habitat edges that act only at the habitat-matrix boundary <cit.>. Putting together the movement toward the attractive location and the matrix avoidance bias, we obtain a velocity of the form v(x) = v_P(x)+v_M(x) (see Fig. <ref>b for v(x) and Fig. <ref>c for the population spread emerging from it). We provide a summary of the model parameters in Table <ref>. §.§ Model analysis We analyze the stationary solutions of Eq. (<ref>) using a combination of semi-analytical linear stability analysis and numerical simulations of the full nonlinear equation. We use both approaches in the r_M→-∞ limit and perform only numerical simulations in the more general case with finite r_M. §.§.§ Linear stability analysis of the extinction state when r_M→-∞ In the r_M→-∞ limit, individuals die instantaneously upon reaching the matrix, and we can replace the dynamics of the population density in the matrix by absorbing boundary conditions at the habitat edges, u(x_L,t)=u(x_R,t)=0. In this regime, the movement component that attracts individuals to the habitat edge, v_M, has no effect on the dynamics, and we can perform a linear stability analysis of the extinction solution u(x,t→∞)≡ u_s(x)=0 to determine the habitat configurations (x_L, x_R) that lead to population extinction for a given set of movement parameters. To perform this linear stability analysis, we neglect the quadratic term in the logistic growth and take the limit t→∞ in Eq. <ref>, which is equivalent to setting ∂_t u(x,t)=0. In this limit, Eq. <ref> becomes an ordinary differential equation with solution u_s(x)=exp(-x^2/2 τ_PD)[a H_rτ_P(x/√(2Dτ_P))+ b _1F_1(-r τ_P/2;1/2;x^2/2 Dτ_P)], where a and b are constants that we can obtain from the boundary conditions. _1F_1 is the confluent hypergeometric function of the first kind, and H_n(x) is the Hermite polynomial, with n being a real, not necessarily integer, number <cit.>. Imposing absorbing boundary conditions at the habitat edges on Eq. <ref>, we obtain a system of two equations for a and b that we can use to determine the stability of the solution u_s(x)=0. 
For this system of equations to have non-trivial solutions (that is, different from a=b=0), its determinant has to be zero. With this condition for the determinant and assuming that x_L is fixed, we obtain a transcendental equation in x_R that we can solve numerically to obtain the critical location of the right habitat edge, x_R,C. §.§.§ Numerical solution of the nonlinear model equation We perform all numerical simulations using a central Euler method starting from a random positive initial condition for u. In the r_M→-∞ limit, we further ensure that the initial condition obeys the absorbing boundary conditions at the habitat edges. We integrate Eq. <ref> for a variety of habitat patch sizes keeping x_L constant and decreasing x_R systematically until the population reaches a stationary extinction state. Using this procedure, we can calculate x_R,C and hence the critical patch size defined as the habitat patch size at which the steady-state total population transitions from non-zero to zero (see Fig. <ref> for spatial patterns of population density as x_R decreases, with x_R>x_R,C). Finally, we also use these numerical solutions of Eq. <ref> to measure population loss due to habitat degradation. For this purpose, we introduce a dimensionless quantity, η, defined as the total population size sustained by a finite habitat patch of size L divided by the total population size sustained by an infinite habitat patch. Such remaining population fraction is thus η≡N_T/N_T^∞, where N_T and N_T^∞ are the total population sizes for finite and infinite habitat patches, respectively. We obtain these population sizes by integrating the population density over the entire landscape, including the matrix. § RESULTS §.§ Perfectly absorbing matrix: the r_M→-∞ limit We first consider the simplest scenario in which individuals die instantaneously after they reach the habitat edges. In this limit, the population density is always zero in the matrix and, therefore, the movement component that biases individuals in the matrix toward the habitat edges is irrelevant. Movement is thus solely driven by random diffusion and the bias toward the attractive location x=x_P=0. In large habitat patches, space-dependent movement leads to the accumulation of population density very close to regions with slower movement. However, as the habitat patch decreases in size and regions with slower movement get closer to one of the habitat edges, the spatial pattern of population density changes due to mortality at the habitat edge and the maximum of population density shifts further away from the attractive location and towards the patch center (Fig. <ref>). This asymmetric pattern of space occupation due to space-dependent movement contrasts with well-known results for purely diffusive movement, for which population density reaches its maximum in the center of the habitat patch <cit.>, and significantly alters population loss owing to habitat degradation and the critical patch size. First, to understand population loss due to habitat degradation, we use the remaining population fraction, η, defined in Eq. (<ref>). This remaining population fraction is maximal when the attractive location is at the same distance from the two habitat edges, and it decays symmetrically about the line x_R = -x_L. Moreover, this decay is sharper for stronger bias toward the attractive location (Fig. <ref> and <ref>). 
Finally, when the distance between the attractive location and one of the habitat edges is sufficiently large, further increasing the habitat size does not change the remaining population fraction because population loss through habitat edges is negligible, except for τ_P^-1 = 0. Regarding the critical patch size, when the bias to the attractive location is strong, represented by higher values of τ_P^-1, the population is localized around the attractive location x_P=0, and it goes extinct when the attractive location is within the habitat patch but close to one of its edges (Fig. <ref>a, b). When the bias to the attractive location decreases, however, the population can survive even if the attractive location is outside the habitat and the mortality in the matrix is infinite (Fig. <ref>c and <ref>c). This scenario would correspond to a situation where habitat destruction places the attractive location outside the habitat and individuals have not adapted their movement behavior to this landscape modification. As a result, individuals preferentially move toward regions with low habitat quality, which can be understood as an example of an ecological trap <cit.>. To further investigate how the distance between the attractive location and the habitat edges determines the critical patch size for different movement parameters, we calculate the critical location of the right habitat edge x_R,C assuming that the left habitat edge is fixed and far from the attractive location. In these conditions, if x_R is also large, mortality through the left edge is negligible, but it becomes significant for smaller habitat patches. This setup mimics a situation where an initially large patch shrinks due to continued habitat destruction, slowly enough that the population distribution is at equilibrium for each particular habitat configuration, until it reaches a critical size and the population collapses. We find that the critical patch size is a nontrivial function of the intensity of the movement bias toward the attractive location and the distance between habitat edges and the attractive location. When τ_P^-1=0.1, x_R,C=0 regardless of the value of D. For τ_P^-1>0.1, movement bias is so strong that the attractive location must be within the habitat patch to avoid individuals entering the matrix and dying at a rate that cannot be outbalanced by population growth within the habitat (red region in Fig. <ref> and Fig. <ref>). Moreover, due to strong bias toward the attractive location, the population is concentrated around that location. Increasing the diffusion coefficient D makes x_R,C increase because the population spreads out, and individuals become more likely to reach the matrix and die. Increasing τ_P^-1 from τ_P^-1=0.1 for a fixed diffusion coefficient D, we find a non-monotonic relationship between x_R,C and τ_P^-1. First, the critical patch size increases with τ_P^-1 because the population concentrates around the attractive location and are more likely to reach the matrix and die. For even higher attraction rate, the population concentrates very narrowly around the attractive location and individuals do not reach the habitat edge. As a result, the critical patch size decreases with τ_P^-1. For τ_P^-1<0.1, x_R,C is negative, which means that the population can persist even when the attractive location is in the matrix (blue region in Fig. <ref>). 
In this low-τ_P^-1 regime, x_R,C increases with τ_P^-1 because less random movement increases the relative contribution of the movement bias to the population flux through the edge. Similarly, the critical patch size decreases with increasing D for values of τ_P^-1 not too far from τ_P^-1=0.1. This negative correlation between critical patch size and diffusion appears because a more random movement reduces the tendency of individuals to cross the habitat edge (see Fig. <ref>b for curves of x_R,C versus D with fixed τ_P^-1). This phenomenon, known as the "drift paradox," has been previously observed in organisms inhabiting streams, rivers, and estuaries where downstream drift is continuously present and extinction is inevitable in the absence of diffusion <cit.>. However, as D continues to increase and random diffusion dominates dispersal, the critical patch size increases due to population loss via diffusion through both habitat edges. Finally, for very low values of τ_P^-1, diffusion controls the population flux through habitat edges and the behavior of the critical patch size converges to the theoretical prediction of the purely diffusive case, L_C^D=π√(D/r_H) <cit.>. §.§ Partially absorbing matrix and the effect of matrix-to-habitat bias Considering finite r_M allows us to investigate how changes in movement behavior, once individuals reach the matrix, can alter the spatial pattern of population density and the critical patch size. If individuals in the matrix do not tend to return to the habitat (τ^-1_M≈0), the population density decays into the matrix exponentially, and the critical patch size increases with matrix mortality rate . For low values of τ^-1_M, the tendency to return from the matrix to the habitat edges reduces how much the population penetrates the matrix and increases the population density inside the habitat, especially close to the edges (Fig. <ref>a). The spatial distribution of the population has a skewness that reaches its maximum when the attractive location is in the matrix (Fig. <ref>b). For large enough τ^-1_M, the edges act as almost hard walls, and the term representing the tendency to return from the matrix to the nearest habitat edge behaves effectively as reflecting boundary conditions in Eq. (<ref>). In this limit, the population survives for any habitat size <cit.>. The accumulation of individuals around habitat edges suggests a potential tradeoff between a decrease in mortality in the matrix due to the attraction to habitat edges and an increase in intraspecific competition due to higher population densities in the habitat. To investigate the impact of this tradeoff on population loss due to habitat degradation, we measure the fraction of the population that remains for a given patch size relative to the value for an infinite habitat patch, η. We perform this measurement for several values of the matrix mortality rate r_M and the returning rate to habitat edges τ^-1_M, which are the two main parameters controlling the accumulation of population density at habitat edges. We consider a scenario with the attractive location at the center of the habitat patch, which is the limit where we have a weaker accumulation of individuals at habitat edges and, therefore, the regime in which the tradeoff between matrix mortality and intraspecific competition around habitat edges has a weaker effect on population dynamics. At high matrix mortality rates, the population does not survive (η=0), except for very high returning rates τ^-1_M (Fig. <ref>). 
When the matrix mortality rate decreases, η increases and remains a monotonically increasing function of τ^-1_M. For r_M closer to zero, however, η becomes a non-monotonic function of τ^-1_M. For these values of the matrix mortality rate, increasing the returning rate to habitat edges is initially detrimental to the total population size because it leads to higher intraspecific competition at the habitat edges, which outweighs the decrease in mortality in the matrix. In other words, the density distribution does not penetrate the matrix as far (Fig. <ref>a) while, inside the habitat, competition does not allow for a large enough increase in population, and so the total population decreases. Consequently, the habitat edge itself behaves as an ecological trap in this regime, and our model recovers a behavior similar to previous observations for insects <cit.>. Above a critical value of τ^-1_M at which η is minimal, further increasing the returning rate to habitat edges becomes beneficial for population persistence because now very few individuals enter the matrix and reduced matrix mortality outweighs the increased intraspecific competition at habitat edges. For infinite return rate τ^-1_M, all the curves for different values of the matrix mortality rate r_M converge to the same value because individuals do not penetrate the matrix. For τ^-1_M→∞ and τ^-1_P=0, one has u_s=r/γ inside the habitat and u_s=0 in the matrix (dashed line in Fig. <ref>). The existence of a non-monotonic dependence of population size on advection strength is reminiscent of a behavior reported in a different scenario for a model with advection towards a continuous environmental gradient <cit.>. § DISCUSSION We studied the spatial dynamics of a population in a finite habitat surrounded by an infinite matrix, considering different ratios between matrix mortality and habitat reproduction rates. We additionally incorporated space-dependent deterministic movement through an advection term that attracts individuals toward specific landscape locations, including habitat edges. This advection term can create spatial distributions of population density that are asymmetric with respect to the center of the patch, especially when the patch size is small and attractive regions lie near habitat edges. This result could explain why, in certain species, populations tend to accumulate in the periphery of the species historical range following geographical range contraction <cit.>. Moreover, our results show that both the habitat carrying capacity and critical size depend nonlinearly, sometimes non-monotonically, on movement and demographic parameters and the location of the habitat edges relative to regions of slower movement. Recent work has also found nonlinear and non-monotonic relationships between movement and landscape parameters underlying the stability of prey-predator systems in fragmented landscapes <cit.>. These findings emphasize the importance of untangling the various contributions determining individual movement, including environmental covariates, when designing conservation strategies such as refuges in fragmented landscapes or marine protected areas <cit.>. Specifically, for very low yet non-zero bias intensities, we find a range of values for the diffusion coefficient for which the critical habitat size decreases with increasing diffusion. This counterintuitive phenomenon is known as the “drift paradox” <cit.>. 
On the opposite limit, if movement bias toward the attractive location is very strong, the population becomes ultra-localized and its survival depends on whether the attractive site is in the habitat patch or the matrix; if it is in the patch, the population will persist, but if it is in the matrix, the population will go extinct. In between these two limits, for weak bias toward the attractive location, further increasing bias intensity increases the critical habitat size when the attractive site is inside the habitat but not too far from both edges. Moreover, populations are still viable for these weak bias intensities even if habitat destruction places the attractive location inside the matrix, creating an ecological trap. Ecological traps are often related to human landscape interventions <cit.> such as the construction of bird nest cavities in regions with generally worse conditions than those where the birds would naturally build their nests <cit.>. Roads can also act as ecological traps. For example, female bears with their cubs are often attracted to roads due to higher forage availability and to avoid potential male infanticide, increasing their risk of being killed in vehicle collisions <cit.>. Our model also suggests that movement responses to changes in habitat quality, such as the tendency of individuals to return from the matrix to habitat edges, can result in the accumulation of population density around habitat edges, even when attractive locations are centered in the habitat patch. This accumulation of population density reduces the quality of regions nearby habitat edges relative to the surrounding matrix and turn the neighborhoods of habitat edges into ecological traps. This population crowding nearby habitat edges could, however, be eliminated by density-dependent dispersal, which was not included in the our model. Animal responses to changes in habitat fragmentation, such as the matrix avoidance term included in our model, might be relevant in regulating demographic responses to habitat destruction. Quantifying correlations between movement behavior, habitat quality, and population density in animal tracking data could help to understand the impact of further habitat destruction on population viability. More generally, the existence of ecological traps suggests that movement patterns exhibited by individuals upon habitat destruction do not correspond to an evolutionarily stable strategy <cit.>. However, because ecological traps do not necessarily lead to population extinctions in our model, individuals could potentially adapt their movement behavior to avoid newly degraded regions. Different non-uniform space utilization patterns and preference for specific habitat locations are ubiquitous in nature. We consider that all individuals in the population have the same movement behavior and thus share habitat preferences. This assumption is an accurate modeling choice for certain species, such as central-place foragers <cit.>. Very often, however, habitat preferences vary across individuals in a population, which might impact how individuals interact with one another <cit.>. Incorporating individual-level variability in space utilization would inform how populations of range-resident and territorial species would respond to habitat destruction, and is one of the future directions that could be explored based on this work. 
However, while attractiveness can sometimes be quantified in terms of environmental covariates <cit.> or by knowing the locations of landscape features like watering holes, other times it will be difficult or impossible to quantify, for example when “attractiveness” depends on the unknown distribution of a particular prey species. Future theoretical research should aim to increasingly fill this gap between existing models describing empirically observed patterns of animal movement and higher level ecological processes. § ACKNOWLEDGMENTS We thank Silas Poloni, Eduardo H. Colombo, and Chris Cosner for their critical reading of the manuscript. This work was partially funded by the Center of Advanced Systems Understanding (CASUS), which is financed by Germany’s Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science, Culture and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament; the São Paulo Research Foundation (FAPESP, Brazil) through postdoctoral fellowships No. 2021/10139-2 and No. 2022/13872-5 (P.d.C) and No. 2020/04751-4 (V.D.), BIOTA Young Investigator Research Grant No. 2019/05523-8 (R.M-G); ICTP-SAIFR grant no. 2021/14335-0 (P.d.C) and No. 2016/01343-7 (V.D., P.d.C., R.M-G); the Simons Foundation through grant no. 284558FY19 (R.M-G); the Estonian Research Council through grant PRG1059 (V.D.). The National Science Foundation (NSF, USA) grant DBI1915347 supported the involvement of J.M.C. and W.F.F. This research was supported by resources supplied by the Center for Scientific Computing (NCC/GridUNESP) of the São Paulo State University (UNESP). § SUPPLEMENTARY FIGURES
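As a supplementary numerical sketch, the following NumPy routine mirrors the scheme described in the Methods: a forward-Euler, central-difference integration of Eq. (1) with logistic growth r(x)u - gamma u^2, diffusion, and advection by v(x) = v_P(x) + v_M(x), followed by a coarse downward scan of the right edge x_R to locate the critical edge position. The grid, time step, domain size, boundary closure (u = 0 far inside the matrix), and all parameter values are illustrative assumptions rather than those used for the figures, and an upwind discretization of the advective flux would be more robust when the bias is strong.

```python
import numpy as np

def velocity(x, xL, xR, xP=0.0, tauP=1.0, tauM=10.0):
    """Advection v(x) = v_P(x) + v_M(x): bias toward the attractive location x_P,
    plus a (here weak) bias from the matrix back toward the nearest habitat edge."""
    vP = -(x - xP) / tauP
    vM = np.where(x < xL, -(x - xL) / tauM,
                  np.where(x > xR, -(x - xR) / tauM, 0.0))
    return vP + vM

def simulate(xL, xR, rH=1.0, rM=-5.0, gamma=1.0, D=0.5,
             L_dom=40.0, nx=401, dt=5e-4, t_end=120.0, **vel_kw):
    """Forward-Euler, central-difference integration of Eq. (1); returns x and u(x, t_end)."""
    x = np.linspace(-L_dom / 2, L_dom / 2, nx)
    dx = x[1] - x[0]
    r = np.where((x >= xL) & (x <= xR), rH, rM)      # habitat vs. matrix growth rate
    v = velocity(x, xL, xR, **vel_kw)
    u = np.full(nx, 0.5)                             # positive initial condition
    for _ in range(int(t_end / dt)):
        flux = v * u
        dudt = (r * u - gamma * u**2
                + D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
                - (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx))
        u = np.clip(u + dt * dudt, 0.0, None)
        u[0] = u[-1] = 0.0                           # far-field closure, deep inside the matrix
    return x, u

def critical_edge(xL=-8.0, xR_start=2.0, dxR=0.5, threshold=1e-2, **kw):
    """Shrink the patch by moving x_R toward x_L until the population collapses."""
    xR = xR_start
    while xR > xL:
        x, u = simulate(xL, xR, **kw)
        if np.trapz(u, x) < threshold:
            return xR                                # coarse estimate of x_{R,C}
        xR -= dxR
    return None                                      # no collapse found in the scanned range

x, u = simulate(xL=-8.0, xR=2.0)
print("total population N_T ~", np.trapz(u, x))
print("approximate x_R,C:", critical_edge())
```

The same routine can be reused to estimate the remaining population fraction eta = N_T / N_T^infinity by comparing the integral of u against a run with a very large patch.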
http://arxiv.org/abs/2306.04183v1
20230607063255
Downgrading of a T-variety
[ "Pavankumar Dighe", "Vivek Mohan Mallick" ]
math.AG
[ "math.AG", "14M25, 14L30, 52B20" ]
Downgrading of a T-variety
Pavankumar Dighe, Vivek Mohan Mallick
===============================================================================================================================
For an affine T-variety X with the action of a torus T, this paper provides a combinatorial description of X with respect to the action of a subtorus T' ⊂ T in terms of a T/T'-invariant pp-divisor. We also describe the corresponding GIT fan.
§ INTRODUCTION
This paper studies normal algebraic varieties over algebraically closed fields of characteristic zero admitting an action of an algebraic torus. Such varieties are called T-varieties in the literature. The most common examples of T-varieties are toric varieties, where the dimension of the torus equals the dimension of the variety on which it acts. In <cit.>, the authors studied toric varieties defined over discrete valuation rings, which, in some sense, are T-varieties <cit.>. The study of toric varieties is facilitated by the existence of associated combinatorial data which encodes a gamut of geometric properties in combinatorial terms. In the case of T-varieties, the associated data consists of a combination of algebro-geometric and combinatorial data. Altmann and Hausen <cit.> described this for the affine case, and the same authors along with Suss <cit.> elaborated the picture for the general case. As mentioned, a T-variety is a normal variety X together with an effective action of a torus T. The complexity of a T-variety is the number c(X) = dim X - dim T. The description of a T-variety involves a variety of dimension c(X) and some combinatorial data encoded in the form of pp-divisors, i.e. divisors whose coefficients come from the Grothendieck group associated with the semigroup of polyhedra having a common tail cone. These data were deduced using Geometric Invariant Theory applied to the action of the torus on the variety. The presentation of T-varieties in terms of pp-divisors has been fruitful, and many geometric properties can be translated into combinatorics. A summary of the development up to around 2012 can be found in <cit.>. One useful way to construct examples of T-varieties is to take a known T-variety, say X with the action of a torus T, denoted temporarily by T ↺ X. Assume that T' ⊂ T is a subtorus. Then X is also a T-variety with respect to the action of T', written T' ↺ X, which is called a downgrading of T ↺ X. The case when X is a toric variety was already studied by Altmann and Hausen <cit.>. Ilten and Vollmert <cit.> gave a description of downgrading, and also discussed the reverse construction of upgrading for complexity one T-varieties. Such a downgrade should have a description in terms of a pp-divisor which carries a T/T' action. This was mentioned in <cit.> without proof. In the second part of the paper, we give the details of this construction. In section <ref>, we briefly recall the language of pp-divisors and the notion of GIT-data (see <ref>) for a T-variety. In section <ref>, we define a poset [<ref>] describing the generators of the GIT-fan for an affine toric variety. Using the poset, we describe the GIT-data for a toric variety and for a downgraded affine T-variety. We illustrate this with an example: Example <ref>. In sections <ref> and <ref>, we prove that the base space is a T-variety and that, for the right choice of a section (cf. <ref>), there is a torus-invariant pp-divisor 𝒟' on it such that X(𝒟') = X.
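As a concrete preview of the bookkeeping behind downgrading, the short script below records the Z^3-degrees of the generators of the coordinate ring in the running example treated later in the paper, namely k[x,y,z,w]/(xy - zw) with degrees e_1, e_2, e_3, e_1+e_2-e_3, and regroups them under the lattice projection i(a,b,c) = (a+c,b) induced by the subtorus inclusion (t_1,t_2) -> (t_1,t_2,t_1); this is exactly the coarser grading A_v = (+)_{i(u)=v} A_u used throughout. The script is only an illustration of the regrading, and the variable names are ad hoc.

```python
import numpy as np

# Z^3-degrees of the generators of A = k[x, y, z, w]/(xy - zw),
# i.e. x = chi^{e1}, y = chi^{e2}, z = chi^{e3}, w = chi^{e1+e2-e3}.
degrees_T = {
    "x": np.array([1, 0, 0]),
    "y": np.array([0, 1, 0]),
    "z": np.array([0, 0, 1]),
    "w": np.array([1, 1, -1]),
}

# Lattice projection i: Z^3 -> Z^2, (a, b, c) |-> (a + c, b),
# dual to the subtorus inclusion (t1, t2) |-> (t1, t2, t1).
def project(u):
    return (int(u[0] + u[2]), int(u[1]))

# Sanity check: the defining relation xy - zw is homogeneous for the Z^3-grading,
# hence also for the downgraded Z^2-grading.
assert np.array_equal(degrees_T["x"] + degrees_T["y"], degrees_T["z"] + degrees_T["w"])

# Downgraded Z^2-degrees and the induced coarser decomposition A_v = (+)_{i(u)=v} A_u.
downgraded = {}
for g, u in degrees_T.items():
    downgraded.setdefault(project(u), []).append(g)

for v, gens in sorted(downgraded.items()):
    print(f"downgraded degree {v} collects generators {gens}")
# -> x and z both acquire degree (1, 0), while y and w both acquire degree (0, 1),
#    matching i(e1) = i(e3) and i(e2) = i(e1+e2-e3).
```

Generators that acquire the same downgraded degree (x and z, and likewise y and w) become indistinguishable to the subtorus, which is reflected in the unions of semistable sets computed in the example of Section 3.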
§.§ Acknowlegements The first author thanks UGC (UGC-Ref.No: 1213/(CSIR-UGC NET JUNE 2017) ) for funding the research and IISER Pune for providing facilities. The second author thanks IISER Pune for providing an excellent environment to conduct this research. § PRELIMINARIES §.§ T-varieties A T-variety is an algebraic variety along with an effective action of an algebraic torus T. A complexity of X is dim(X)-dim(T). Let Y be a normal, semiprojective variety, and N be a lattice of finite rank. A tail cone, tail(Δ), of any polyhedron Δ⊂ N_ is defined as tail(Δ) := v ∈ N_v + Δ⊆Δ. For a fixed strongly convex integral polyhedral cone σ in N_, the collection of polyhedra with tail cone σ forms a semigroup under the Minkowski sum. We denote this semigroup by Pol^+(N,σ). A polyhedral divisor on Y with a tail cone σ is a formal sum = ∑_i Δ_i ⊗ D_i, where D_i runs over all prime divisors of Y, and the coefficients Δ_i ∈Pol^+(N,σ), such that only finitely many Δ_i are different from σ. For each u ∈, the evaluation of at u is the Weil divisor (u) := ∑_i ( min_v ∈Δ_iuv) D_i. A proper polyhedral divisor or pp-divisor on (Y,N) is a polyhedral divisor , such that for each u ∈, (u) is a semiample, rational Cartier divisor on Y, which is big whenever u is in the relative interior of . An affine scheme associated to is defined as follows. Let, for u ∈σ∩ M, _u = Y((u)). Then, = ⊕_u ∈σ∩ M_u is a sheaf of Y algebras. Define, the affine scheme associated with to be X() = ( ⊕_u ∈σ∩ MY_u) = (Y). By <cit.>, X() is an affine T-variety of complexity dim(Y). The following theorem says that every affine T-variety arises in this way. <cit.> Let X be a T-variety of a complexity dim(X)- dim(T). Then, there is a pp-divisor on (Y,N), such that X() ≅ X where Y is a normal, semiprojective variety, and N is a lattice of rank dim(T). §.§ GIT quotients In this section, we recall some results regarding geometric invariant theory of T-varieties from <cit.> and <cit.>, which we need later. Consider the following setup. Let be a finite rank lattice, and let A be an integral, finitely generated, -graded -algebra A= ⊕_u ∈ A_u. Consider an affine variety X=Spec(A) with an action of the algebraic torus T=([]), induced by the -grading on A. Let L be the trivial line bundle on X with the following torus action: t·(x,c)=(t · x,χ^u(t)· c), where χ^u is the character corresponding to u ∈. Now consider the canonical projection L → X. This map is a torus equivariant map, and is the T-linearization of the trivial line bundle on X with respect to χ^u. A T-linearization of a line bundle on X is a line bundle L → X along with a fiberwise linear T-action on L such that the projection map is a torus equivariant. <cit.> Any T-linearization of a trivial line bundle over X is the linearization corresponding to the unique character, described above. An invariant section of the linearization of a trivial line bundle with respect to χ^u is precisely an element of some A_nu for n>0. The set of semistable points associated to a linearization of a trivial line bundle is denoted by (u) and defined as: (u) := ⋃_f ∈ A_nu, n ∈_>0 X_f. If two linearized line bundles have same set of semistable points, we say that they are GIT-equivalent. We recall the description of the GIT-equivalence classes given by a linearization of the trivial line bundle in terms of the orbit cones. We will illustrate this by an example of an affine toric variety. The following definitions are from [BH06]. Consider a point x ∈ X. 
The orbit monoid associated to x ∈ X is the submonoid S_T(x) ⊂ M consisting of all u ∈ M that admit an f ∈ A_u with f(x) ≠ 0. The convex cone generated by S_T(x) is called the orbit cone, denote it by ω_T(x). The sublattice generated by the orbit cone ω_T(x) is called the orbit lattice, denote it by M(x). The weight cone ω⊂ is a cone generated by u ∈ M with A_u ≠0 The GIT-cone associated to an element u ∈ω∩ is the intersection of all orbit cones containing u, and is denoted by λ(u). The collection of GIT-cones forms a fan and called a GIT-fan. For the sake of brevity, we summarized all this data associated with a torus action on an affine variety as follows. Suppose X is a T-variety with the action of a torus T. The GIT-data associated with (X,T) consists of orbit monoids, orbit cones, orbit lattices, GIT-cones, and set of semistable points. From <cit.>, we have an order-reversing one-to-one correspondence between the possible sets of semistable points induced by a linearization of the trivial line bundle and GIT-cones. Consider an affine toric variety X=(A) with a character lattice M, and dual lattice N. Note : A=⊕_ u ∈∩ M·χ^u where σ is the polyhedral cone in N, and is its dual. The cone is a full dimensional cone, and each u ∈ is saturated[An element u ∈ M is saturated if A_(v)=⊕_n ∈_≥ 0 A_nv generated by degree one elements.]. Using <ref>, we will compute the (u) . Consider a minimal generating set {u_1,u_2 … u_k } of a cone ∩ M. For u ∈∩ M, (u)= (A_χ^u) . If u=α_1 · u_1 + α_2 · u_2 … + α_k · u_k with α_i ≥ 0 then (u)= (A_χu)= ⋂_α_i ≠ 0 (A_u_i). § DESCRIPTION OF GIT-FANS AND SEMISTABLE POINTS. Let {u_1,u_2 … u_k } be a minimal generating set of a cone ∩ M. The collection of a semistable points is (u)u=α_1 · u_1 + α_2 · u_2 … + α_k · u_k, α_i= 0,1. First observe that above collection is a finite set. If u=α_1 · u_1 + α_2 · u_2 … + α_k · u_k then (u)=(∑_α_i ≠ 0u_i)=⋂_α_i ≠ 0(u_i) . From <cit.>, we are going to compute GIT-cone for each (u). To do this we are going to compute (u_i) for each i ∈{1,2 … , k }. Consider the following poset, ( S=∑_i=1^k α_i · u_iα_i=0,1, ≥) where, for v,w ∈ S, v ≥ w if (v) ⊂(w). With the above notations continuing, for v ∈ S, GIT-cone λ_T(v) is generated by the set u_iv ≥ u_i. The cone generated by set u_iv ≥ u_i is denoted by σ_T(v). First, observe that if (v) ⊂(u_i), then for x ∈ X, χ^v(x) ≠ 0 if and only if ∀ u_i ≤ v, χ^u_i(x) ≠ 0. Hence u_i ∈(x) for all x such that χ^v(x) ≠ 0 hence u_i ∈(v), so σ_T(v) ⊂(v). If u ∈(v), then (v) ⊂(u) ⊂(u_i) where u_i is a summand of u hence v ≥ u_i hence u ∈σ_T(v). §.§ Description of a GIT-fan with respect to the action of a subtorus Consider the following setup. Let X be a normal affine variety with an effective T-action. Consider a subtorus of the torus T with canonical action on X. First, we have the following exact sequence from the torus inclusion, 0 [r] [r] [r, "i"] [r] 0 , where and are the character lattices corresponding to the tori torus T and , respectively. The lattice is the kernel of the lattice homomorphism i. If X=(A), then we have a grading A=⊕_u ∈ A_u with respect to T and, similarly, for the action we have an induced grading A=⊕_v ∈ A_v. In addition we have, A_v= ⊕_i(u)=v A_u . We wish to compute the GIT data associated with (X,) from the GIT data associated with (X,T) and above exact sequence. We are using the same notation i for lattice homomorphism and vector space homomorphism. * Let and be the weight cones associated with the T action and the respectively, then i() =. Consider a point x ∈ X. 
* Let (x) and (x) be the orbit cones associated with the T and action respectively, then i((x)) = (x). * Let S_T(x) and S_(x) be the orbit monoid associated with the T and action respectively, then i(S_T(x)) = S_(x). Note, A_v=i(u)=v⊕A_u then A_u ≠ 0 for some u if and only if A_v ≠ 0. Now the results follows from linearity of i and definition of (resp. ). The statements for orbit monoids and orbit cones follows similarly * Let (u) be the semistable point associated with u ∈ and let (v) be the semistable point associated with v ∈, then (v) = i(u)=nv, n ∈_>0⋃(u). Because of the correspondence between the GIT-cones and sets of semistable points, we have the following result. * Let (u) be GIT-cone associated with u ∈ ( under the T-action) and (v) be GIT-cone associated to v ∈ ( under the -action) then λ^'(v)= (v)= ⋂_i(u)=v i( (u)) . Lets take σ=(e_1,e_2,e_1 + e_3,e_2 +e_3), then [S_σ]= ⊕_u ∈∩ M·χ^u= [u,v,w,uvw^-1] ≡[x,y,z,w]/⟨ xy-zw ⟩ where =(e_1,e_2,e_3,e_1 + e_2- e_3). For this example we have GIT-fans shown in the figure <ref> and semistable points correspondence, (e_1) ⟷(e_1) (e_2) ⟷(e_2) (e_3) ⟷(e_3) (e_1 +e_2)=(e_1+e_2+e_3+e_1+e_2 - e_3) ⟷(e_1 ,e_2) (e_1 +e_2-e_3)=(e_1+e_2+e_1+e_2 - e_3) ⟷(e_1 ,e_2,e_1 +e_2 -e_3) (e_1+e_1+e_2 - e_3) ⟷(e_1,e_1+e_2 - e_3) (e_2+e_1+e_2 - e_3) ⟷(e_2,e_1+e_2 - e_3) (e_1+e_3) ⟷(e_1,e_3) (e_2+e_3) ⟷(e_2,e_3). Consider the above, the torus inclusion map (^*)^2 → (^*)^3 is given by the following map (t_1,t_2) ↦ (t_1,t_2,t_1). The lattice homomorphism associated with this inclusion is the map ^3 →^2 is (a,b,c) ↦ (a+c,b). From Proposition <ref> and Proposition <ref>, the GIT-cones shown in the figure <ref> and semistable points correspondence, e_1^'=(1,0) and e_2^'= (0,1) (e_1^')=(e_1)∪(e_1 + e_3) ∪(e_3) ⟷(e_1^') (e_2^')= (e_2) ∪ (e_1 + e_2 - e_3) ∪(e_2 + e_1 + e_2 -e_3) ⟷(e_2^') (e_1^' + e_2^')= (e_1 + e_2) ∪ (e_1 + e_1 + e_2 - e_3) ∪( e_1 + e_3) ⟷(e_1^',e_2^'). § DESCRIPTION OF THE SPACE ASSOCIATED TO A T- VARIETY WITH RESPECT TO THE ACTION OF A SUBTORUS By <cit.>, we have a proper polyhedral divisor associated with a normal affine variety with an effective torus action. Let X be an affine T-variety, with a torus T. Let be a pp-divisor on (Y,N), where N is a dual lattice of a lattice M=(T,^*), such that X ≅ X()=(A). We assume that is a minimal pp-divisor(<cit.>). For a subtorus of the torus T, we shall construct and a T/-invariant pp-divisor on (, N^'), where N^' is a dual of a lattice M^'= (T^', ^*). Consider an affine T-variety X with the action of the torus T, with weight cone ω⊂ M_. Let be a subtorus of T, and the associated lattice map be i: M →. Then, there exists and a T/-invariant pp-divisor ^' on (,N^') with tail(^')=i(ω) such that X(^') ≅ X. The next part of this paper is about the proof of the Theorem <ref>. Consider the semistable point (v), from <cit.>, the quotient space Y_v = (v) is given by Y_v = (A_(v)), where A_(v)=⊕_n ∈_≥ 0 A_nv. There is effective T/ action on Y_v The set of semistable points (v) is a -invariant open subset of X. Moreover from Proposition <ref>, it is T-invariant. On basic open subsets of Y_v, the torus T/ acts effectively. Consider f ∈ A_(v), in particular, we choose f ∈ A_u for u ∈ M such that i(u)=nv, for some n ∈_≥ 0. The sets ([A_f]_0) covers Y_u and it is enough to prove that T/ acts effectively on each ([A_f]_0), or equivalently, that [A_f]_0 (0 ∈) admits an grading such that the weight cone is full dimension. Note that A_0=⊕_u ∈A_u, which induces an grading on [A_f]_0 . 
Since A_0=u ∈ i(u)=0⊕A_u⊂ [A_f]_0, and T acts effectively on X, (the dimension of (u ∈i(u)=0 is equal to the rank of ). Then, the T/ action is effective on ([A_0]), and hence T/ acts effectively on ([A_f]_0). Using proposition <ref>, we are going to prove that there is a T/ action on . From <cit.>, the collection of all GIT-cones define the GIT-fan, which we denote by Σ_. The map v →(v) is constant on relint((v)). So for λ^'∈Σ_ and w ∈relint(λ^') if we write W_λ^'=(w), Y_λ^'=Y_w, we have the following commutative diagram W_[r, "jλ^'"] [d, "q"] W_λ^'[r, "jλ^'γ^'"] [d, "qλ^'"] W_γ^'[r, "jλ^'0"] [d, "qγ^'"] X = W_0[dddd, "q0"] [r, "pλ^'"] [rrrddd, "p0" ] Y_λ^'[r, "pλ^'γ^'"] [rrddd,"pλ^'0"] Y_γ^'[rddd,"pγ^'0"] Y_0 All the j-- are inclusion maps, so the inverse limit, W_, is the intersection of sets of semistable points. The is normalization of a canonical component. Let Y_1 is limit { Y_}'s and = Norm(q(W_)), where Norm(-) denotes the normalization. We have the following commutative diagram W_λ^'[r, "jλ^'γ^'"] [d, "qλ^'"] W_γ^'[d, "qγ^'"] Y_λ^'[r, "pλ^'γ^'"] Y_γ^' from the Proj construction, qλ^' and qγ^' are torus equivariant maps with respect to the canonical torus map T →T/. Also the map pλ^'γ^' is a T/-equivariant map. Now we shall demonstrate that admits canonical T/ action. To prove T/ acts effectively on , we required some evident statements which are given below. Let Y and be are two varieties with T actions, the map v : Y → Y^' is a T-equivariant birational map with T acts effectively on Y^' then T acts effectively on Y. Let Y be a topological space, and ψ : Y → Y be a continuous map with A ⊂ X such that ψ(A) ⊂ A, then ψ(A) ⊂A. Let X be a T-variety, then Norm(X) has a torus action, satisfying the following commutative diagram, Norm(X) [r, "t"] [d, "g"] Norm(X) [d,"g"] X [r, "t"] X . Consider q_1: W_→ Y_1 the induced by the commutative diagram <ref>. Then, q_1 is a torus equivariant map, and it defines T/ action on Y^'. From Lemma <ref> and <ref> The map pλ^' is given by, Norma(q(W_)) →q(W_)↪ Y_1 → Y_λ^' and each arrow is a torus equivariant map. is a T-variety. We have to prove that is normal variety and action of T/-effective. From construction, it is a normal variety and from Lemma <ref> and above arrows <ref>, the action is effective. § THE PROPER POLYHEDRAL DIVISOR. Let be a character lattice associated to T/. Consider the exact sequence <ref>. Construction of the pp-divisor requires a homomorphism s^' : → Q(A)^*. Note that i(ω) is a full dimensional cone, and given a v ∈ i(ω) ∩, there is a k ∈ℕ, such that kv is saturated. For each v ∈ i(ω) saturated, we will define a Cartier divisor (v). For v ∈int() saturated, Y_,f f ∈ A_u, where i(u) = v is an open cover for Y_. Consider the open cover _f= pλ^'^-1(Y_,f). Since T acts effectively on X, we have a section s: M → Q(A)^* such that s(u) is u-homogeneous. Consider the section s^' : → Q(A)^* defined: s^'(v)= s(u), For fix u ∈ M such that i(u)=v. Now consider the Cartier divisor ^'(v)=(_f,s^'(i(u))/f). Since p are torus equivariant maps, so Y_f are torus invariant open subset. s^'((i(u))/f is homogeneous of degree (s^'(i(u)))-(f) ∈ [Note that degree of element s^'(i(u))/f is equal to 0 in ]. This defines a torus invariant pp-divisor on , ^' : i(ω) →CaDiv_(), ^'(v)=1/k·^'(kv). where kv is a saturated multiple of v. amsalpha
http://arxiv.org/abs/2306.02765v1
20230605104125
Differentially Private Cross-camera Person Re-identification
[ "Lucas Maris", "Yuki Matsuda", "Keiichi Yasumoto" ]
cs.CV
[ "cs.CV" ]
Differentially Private Cross-camera Person Re-identification Lucas Maris1, Yuki Matsuda12, Keiichi Yasumoto12 1 Nara Institute of Science and Technology, Nara, Japan 2 RIKEN Center for Advanced Intelligence Project AIP, Tokyo, Japan Email: {lucas.maris.lo3, yukimat, yasumoto}@is.naist.jp Received; accepted =============================================================================================================================================================================================================================================================================================================================== Camera-based person re-identification is a heavily privacy-invading task by design, benefiting from rich visual data to match together person representations across different cameras. This high-dimensional data can then easily be used for other, perhaps less desirable, applications. We here investigate the possibility of protecting such image data against uses outside of the intended re-identification task, and introduce a differential privacy mechanism leveraging both pixelisation and colour quantisation for this purpose. We show its ability to distort images in such a way that adverse task performances are significantly reduced, while retaining high re-identification performances. differential privacy, person re-identification § INTRODUCTION Cities keep growing, creating logistic challenges in terms of traffic management, city planning, and tourism. In parallel, the interest in smart cities, i.e., cities that use information and communication technologies to transform the city and its governance, and achieve a positive impact on the community inhabiting it <cit.>, is peaking, with both large and small-scale projects being developed around the globe. Japan, for instance, sees its population decrease and its demography evolve, with a decrease of the relative working-age population and a general movement toward metropolitan areas <cit.>. These changes and the resulting shift in citizen expectations make it highly valuable to dispose of comprehensive data regarding human flows within the city. This data can then be exploited by, e.g., policymakers for urban planning decisions <cit.>, transport or tourism companies for commercial ends <cit.>, or individuals for route planning purposes <cit.>. One of the most commonly available sensors in cities is a video camera. Often installed primarily as a means of passive surveillance, with the aim of disposing of a record of events for understanding past incidents, their feeds have the potential to be leveraged for real-time traffic or crowd monitoring. In this context, cross-camera person re-identification has been extensively studied over the past decade, as the computer vision task of matching individuals from different camera perspectives. Notwithstanding promising applications in security, planning, or tourism areas, little concern has been paid to the glaring privacy concerns this raises, and thus the social acceptance of such systems. This study aims to build a privacy-aware cross-camera re-identification system. By empirically comparing the effect of different types and magnitudes of noise, including both traditional image obfuscation methods and a new differential privacy method, on the inputs to a state-of-the-art re-identification model, we evaluate the feasibility of conducting the paradoxical task of re-identifying people without effectively invading their privacy. 
We illustrate this goal by means of an additional empirical evaluation of the ability to predict demographic attributes, such as gender, age, or ethnicity, from these very same noised inputs. The purpose is to distort visual data in such a way that it remains maximally useful for the specific intent of video camera controllers, here chosen to be cross-camera re-identification, while making it minimally useful for other purposes, here defined as common demographic classification tasks. The ability to control the privacy leakage of re-identification systems is key to their social acceptance, which requires building trustful relationships with citizens, customers, or individuals at large. Our contributions are as follows: (1) we formulate a strict image differential privacy mechanism leveraging both pixelisation and colour quantisation; (2) we highlight the robustness of centroid-based re-identification models against noise; and (3) we show that the use of our image differential privacy mechanism allows for nearly state-of-the-art re-identification performances while significantly reducing the utility of images for adverse tasks. § RELATED WORK §.§ Person re-identification (reID) The recognition of individuals across different visual snapshots from different points of view has become a traditional computer vision task over the last 20 years <cit.>. While still a matter of designing informative appearance signatures for individuals <cit.>, the advent of deep convolutional neural networks <cit.> has shifted this problem to yet another learning problem <cit.>. Notwithstanding its many variations, the task is essentially defined as the retrieval of all occurrences of a given individual within a set of videos, often captured from different angles, with only a single snapshot of that individual to work from. This is usually formalised as an image-retrieval task, where the aim is to train a model on a training set such that it can rank images from the gallery set in order of their similarity to a given image from the query set. As the input data is usually a set of videos, this reID definition glances over the object detection, tracking and segmentation aspects to focus more heavily on how to transform images into effective vector representations and on how to score such representations against one another. Recent state-of-the-art reID systems achieve maximum performances on traditional reID datasets such as Market1501 <cit.> or MSMT17 <cit.> by answering these questions with CNN feature extractors <cit.> and triplet loss <cit.>, combined with various other techniques, e.g., the use of spatial-temporal data  <cit.>, large-scale pre-training <cit.> or attention mechanisms <cit.>. §.§ Differential privacy (DP) Over the last decade, differential privacy <cit.> has become the single most popular way of modelling formal data privacy, due to its ability to provide quantifiable protection against arbitrary risks. By perturbing computations over statistical databases, it promises indistinguishability between databases and plausible deniability to every individual composing such databases, providing them with roughly the same privacy that would result from having their data removed from the database. Recent work regarding differential privacy has aimed at extending its desirable properties to other forms of data, e.g., location <cit.> or deep neural network models <cit.>. 
With unstructured data making up most of today's data landscape, a line of work regarding image differential privacy has also emerged, with studies leveraging pixelization <cit.>, autoencoders <cit.> or generative adversarial networks <cit.>. As noted in a recent survey regarding differential privacy for unstructured data <cit.>, the common approach is to vectorize unstructured data into a structured form, which can then be obfuscated with conventional DP methods. § METHOD In this paper, we consider the inherent privacy threat that originates from storing image-representations of individuals for tracking across multiple video feeds. We expect this to be useful for applications where the aim is to collect itinerary data from pedestrians in such a way that the specifics of every individual are protected; for instance, one may wish to know the percentage of people that visit point B shortly after point A, without needing to know whether this percentage includes a specific individual that could be recognised from their visual representation. In other words, we consider the threat of an attacker gaining access to a collection of images from individuals, which were collected for the purpose of re-identification. To lessen the gravity of such a breach, we aim to distort collected images such that they remain maximally useful for the cross-camera reID task but minimally useful for other tasks. The adverse tasks we focus on in this study include gender, age, and ethnicity identification. §.§ DP-obfuscation To provide a quantifiable privacy guarantee on the gallery images, we extend the pixel-level differential privacy definition first introduced as DP-Pix <cit.>. As it provides privacy directly on pixel values, it has since been deemed too strict of a definition, destroying too much of the data's utility in the privacy process <cit.>. This being precisely our goal here, we further restrict its definition; instead of providing indistinguishability between same-sized images differing by at most m pixels, we here aim for indistinguishability between same-sized images differing in any amount of pixels. Definition 1. ε-Image DP: a randomized mechanism ℳ gives ε-image differential privacy if for any two images i and j of same dimension, and for any possible output R ⊆Range(ℳ), Pr[ℳ(i) ∈ R] ≤exp(ε) Pr[ℳ(j) ∈ R] As shown in <cit.>, the Laplace mechanism can achieve such a guarantee, provided ℳ is defined as the noisy function: ℳ(x) = f(x) + n , where n ∼ Laplace(0, Δ f/ε) The exact amount of noise n is to be calibrated to the sensitivity Δ f of function f. We generalise and extend the DP-Pix definition of this function f; instead of defining f as the pixelisation of grayscale image x, we define f as the identity function applied to RGB image x, with optional pixelisation and colour quantisation parameters b and c. The sensitivity of this function is then: Δ f = wh/b^2( 256/c-1 )^3 Where w and h are the width and height of images, b defines the amount of pixelisation to be applied to images, and c characterises the amount of colours to be kept in images. These additional dimensionality reduction parameters are introduced to reduce the magnitude of the sensitivity, increased by our broader neighborhood definition. If b=1 and c=1, the sensitivity is equivalent to that of the identity function. This privacy mechanism is to be applied directly on stored images, prior to their use for re-identification. 
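To make the mechanism concrete, the following is a minimal NumPy sketch of the ε-Image DP pipeline described above: pixelisation into b×b blocks, colour quantisation to 256/c levels per channel, and Laplace noise calibrated to Δf. The function name, the block-averaging implementation, and the quantisation-to-multiples-of-c step are our own illustrative choices and assumptions, not the authors' code.

```python
import numpy as np

def eps_image_dp(img, eps, b=4, c=16, rng=None):
    """Sketch of the eps-Image DP mechanism: pixelise, quantise colours, then
    add Laplace noise scaled to Delta_f = (w*h / b**2) * (256/c - 1)**3.
    `img` is an (h, w, 3) uint8 RGB array; returns the noised reduced image."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = img.shape
    x = img[:h - h % b, :w - w % b].astype(float)

    # Pixelisation: average each non-overlapping b x b block per channel.
    x = x.reshape(h // b, b, w // b, b, 3).mean(axis=(1, 3))

    # Colour quantisation: keep 256/c levels per channel (multiples of c here).
    x = np.round(x / c) * c

    # Laplace mechanism calibrated to the global sensitivity from the paper.
    delta_f = (w * h / b**2) * (256 / c - 1)**3
    x = x + rng.laplace(0.0, delta_f / eps, size=x.shape)
    return np.clip(x, 0, 255).astype(np.uint8)
```

As a sanity check of the sensitivity formula, the FairFace setting used later (224×224 images, b=4, c=32) gives Δf = 3136 · 7³ = 1,075,648, matching the value quoted in the experiments.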
By protecting the image data upfront with a quantifiable privacy guarantee, their sensitivity can be mitigated. Even in the event of a data leak, the utility of training, query, and gallery samples is expected to be decreased for other applications. §.§ Centroid-reID Despite the higher noise-level introduced by our strict pixel-level differential privacy definition, we wish to be able to link obfuscated image representations with their corresponding identity. One recently introduced method to improve on re-identification and image-retrieval tasks at large is to average training samples into mean centroid representations, i.e., aggregated class representations. By shifting the task from ranking specific identity-instances to classifying into actual identities, which arguably also makes more sense in practical applications, state-of-the-art re-identification performances can be achieved on classic reID datasets <cit.>. The training of a centroid-based re-identification model relies on the Centroid Triplet Loss function, which is formulated below. It aims to minimise the distance between embedding f(A) of a training sample A and the class centroid c_P for the class that sample belongs to, while maximising the distance between said embedding f(A) and class centroid c_N for a class the sample does not belong to. ℒ_CTL = [||f(A)-c_P||^2_2 - ||f(A)-c_N||^2_2 + α_c]_+ Class centroids c_k are simply defined as the mean embedding of all samples available at a given point; during training, these are the items of same class as the currently considered sample within a mini-batch; during testing, these are all gallery samples for a given class. If we denote the available class samples as 𝒮_k, the class centroid c_k is: c_k = 1/|𝒮_k|∑_x_i ∈ S_k f(x_i) We here postulate that by using averaged individual representations, the re-identification system can be made more robust against introduced noise. We expect such aggregated representations to be able to magnify the identity-specific latent features that remain underneath the noise. To this end, we train centroid-based models directly onto noised training images, and test them with noised sets of gallery and query samples. §.§ Adverse tasks To evaluate the privacy protection offered by our system beyond the theoretical privacy budget ε, we additionally consider adverse tasks in the form of demographic attribute classification, specifically gender, age range, and ethnic group. As such tasks are both common and feasible with very limited inputs, we believe them a good way to evaluate the practical privacy protection offered by our mechanism. Attribute classification is implemented as a simple fully connected layer on top of a ResNet50v2 <cit.> backbone, itself pretrained on ImageNet <cit.>. § RESULTS In this section, we use an ε-Image DP-protected dataset for both our target task, person re-identification, and adversary tasks, here chosen to be gender, age, and ethnicity classification. We show how little our target task suffers from the privacy protection applied to the dataset, while the performance of adversary tasks drops more drastically. We evaluate both the reID and gender classification performances after obfuscation on Market1501 <cit.>, a common reID dataset, whose balanced attribute labels are unfortunately limited to unbalanced gender annotations <cit.>. 
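As a concrete reference for the centroid-based formulation above, here is a minimal PyTorch-style sketch of the class-centroid computation and the Centroid Triplet Loss. The margin value, function names, and batching are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def class_centroids(embeddings, labels):
    """Mean embedding per identity: c_k = (1/|S_k|) * sum of f(x_i) over S_k."""
    return {int(k): embeddings[labels == k].mean(dim=0) for k in labels.unique()}

def centroid_triplet_loss(anchor, c_pos, c_neg, margin=0.3):
    """CTL: [ ||f(A) - c_P||_2^2 - ||f(A) - c_N||_2^2 + alpha_c ]_+ ,
    averaged over the batch (the margin alpha_c = 0.3 is an illustrative value)."""
    d_pos = (anchor - c_pos).pow(2).sum(dim=-1)
    d_neg = (anchor - c_neg).pow(2).sum(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()
```

At test time the same `class_centroids` routine can be applied to the gallery embeddings of each identity, and a query is matched to the nearest centroid rather than to individual gallery samples.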
To further evaluate the degree of obfuscation provided by our method, we additionally evaluate the performance of gender, age, and ethnicity classification models after applying the same obfuscation to FairFace <cit.>, a recent face image dataset that provides balanced gender, age, and ethnicity annotations. §.§ ReID To properly get a feel for the effect of pixelisation and quantisation parameters b and c, we consider 3 parameter combinations; b=1 and c=64, b=2 and c=32, and b=4 and c=16, which yield sensitivity values Δ f of similar magnitude. Noise levels ε were made to vary between 10^-3 and 10^6. reid_b1c64 shows the performance of both a regular and a centroid-based reID model on data noised with the first set of parameters. reid_b2c32 and reid_b4c16 summarise the results for the second and third sets of parameters, as these yield visually similar graphs. It is striking from our experiments that centroid-based reID models offer high robustness to noise. While non-centroid-based models suffer, on average, a 43.4%, a 74.5%, and a 93.6% mAP decrease, for each of our parameter combinations in ascending b order, centroid-based models only suffer an average 4.2%, 17.0% and 62.4% mAP decrease, respectively. These averages are computed on mAP metrics obtained with ε varying between 10^-3 and 10^3. As reid_b1c64 illustrates, this robustness allows to conserve a mean mAP of 93.6% even at noise levels ε≤10^3. It is also quite apparent from experiments that there exists a hard limit on the effect of pixel-based noising, with ε values lower than 10^3 yielding more or less the same results as those observed at the ε=10^3 level. Considering the fact that our sensitivities Δ f are themselves of the order of 10^5 or 10^6, which naturally increases the relative scale of ε values, and that differentially private noise is meant to be scaled to these values, this is in line with the “All Small Epsilons Are Alike” observation by Dwork  <cit.>. §.§ Attribute classification §.§.§ Market1501 Using the same noised Market1501 images to perform gender classification yields the results summarised in market_attr. The initial 81.3% gender classification accuracy, obtained on the original Market1501 dataset, gets affected both by the dimensionality reduction parameters b and c, as shown in the unnoised row of the table, and the additional noise added on top of this transformation. As with our reID results, the effect of added noise stagnates when setting ε≤ 10^3. The average accuracy decrease, computed with ε varying between 10^-3 and 10^3, for each parameter combination in ascending b order, is 14.6%, 19.0%, and 18.0%. Considering that 57.0% of the total samples are men, which implies a trivial classifier would achieve a 57.0% accuracy, we can see our privacy mechanism brings gender prediction performances on Market1501 sensibly closer to chance-level. §.§.§ Fairface Likewise, we apply the same transformations to the images from the Fairface dataset, on which we then train gender, age, and ethnicity classification models. As the images from this dataset are significantly larger, being 224x224 against 64x128 for Market1501, we intentionally use a higher pixelisation level b, to make the sensitivity Δ f of the images from these different datasets more comparable. For brevity's sake, we only consider a single parameter combination b=4 and c=32, which yields sensitivity Δ f=1075648. fairface_attr goes over the observed results. 
Again, the initial classification accuracies of 79.8%, 43.3% and 46.1%, obtained on the original Fairface dataset for each task respectively, get affected by both the dimensionality reduction step as well as the noising step. The effect of noise also tends to stabilise with ε≤ 10^3; the average accuracy decreases, when ε varies between 10^-3 and 10^3, are 14.1%, 19.5% and 25.7%, for each task respectively. § DISCUSSION §.§ Effect of parameters b and c This section further studies the individual effects of the pixelisation parameter b and quantisation parameter c, in order to get a better understanding of the performance changes that are to be attributed to our privacy mechanism ε-Image DP, and those that are to be attributed to more traditional computer vision transformations. market_reid_dimreduc shows the effect of increasingly large values for b and c on reID performances for Market1501 images. While heavy pixelisation destroys the utility of images for reID, with mAP dipping under 50% when b ≥ 2^4, heavy colour quantisation has nearly no impact on performances, even at c = 2^7. In both cases, a centroid-based reID model proves more effective at identifying people than a regular reID model. market_attr_dimreduc shows the effect of increasingly large values for b and c on gender classification for Market1501 images. Unsurprisingly, the effect of pixelisation is more marked than that of colour quantisation, with the lowest accuracy being 63.4% for the maximum b value 64, against 72.2% for the maximum c value 128. While both are a significant departure from the initial classification accuracy of 81.3%, neither suffice for achieving chance-level predictions. Similarly, fairface_attr_dimreduc gives a summary of the effect of increasingly large dimensionality reduction parameters on gender, age, and ethnicity classification for FairFace images. Again, pixelisation has a larger impact than colour quantisation, and varying these parameters individually only reduces performances halfway to chance-level, at best. §.§ Limits of adverse task reduction Both our experiments and the observations described in the previous section showcase the ever-present tradeoff between utility and privacy; by design, increasing privacy, and thus limiting the information contained within data, comes at the cost of utility. In our case, we are trying to conduct a task benefiting from high-dimensional data, i.e., person re-identification, while limiting the extent to which tasks such as demographic attribute classification can be carried out, which can do with much lower-dimensional data. As shown in market_attr_dimreduc(a), using just two RGB pixels summarising the general colour information in images still allows for significantly above chance-level gender predictions; if any reID-relevant information is to be maintained within images, a utility-driven privacy mechanism can then hardly be expected to reduce demographic attribute classification performances beyond that level. The ability of our utility-driven privacy mechanism to reduce gender prediction accuracy on Market1501 to ∼63.9% (average of all experimented parameter & noise combinations) is thus in line with the practical limit to adverse task performance reduction, which we observed to be 63.4% with nothing but 2 RGB pixels of information, while retaining sensibly higher reID performances. 
The same observation can be made regarding the Fairface dataset, where the application of our privacy mechanism reduces gender, age, and ethnicity prediction accuracies to averages of ∼66.0%, ∼32.0%, and ∼31.5% (averaged over all experimented noise combinations), respectively, in line with the accuracies observed when applying exceedingly high pixelisation, which were 66.9%, 32.2%, and 32.2%, respectively, for images composed of just 9 RGB pixels. § CONCLUSION We introduce a strict pixel-level image differential privacy mechanism, which aims for indistinguishability between any pair of same-sized RGB images, leveraging both pixelisation and colour quantisation to bound the large sensitivity and therefore large noise this would otherwise entail. We show that applying our privacy mechanism to an image dataset markedly reduces the utility of said data for simple tasks such as gender, age, or ethnicity classification, down to performance levels in line with those obtained by means of exceptionally high pixelisation-based obfuscation. Despite this low general-purpose utility, the images can still be used for near state-of-the-art person re-identification when using centroid-based models. We expect these results to be useful for building privacy-compliant camera-based pedestrian flow information systems, able to link together highly noised person representations without compromising pedestrians' privacy.
http://arxiv.org/abs/2306.03865v1
20230606170655
Simultaneous Position-and-Stiffness Control of Underactuated Antagonistic Tendon-Driven Continuum Robots
[ "Bowen Yi", "Yeman Fan", "Dikai Liu", "Jose Guadalupe Romero" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY" ]
Simultaneous Position-and-Stiffness Control of Underactuated Antagonistic Tendon-Driven Continuum Robots
Bowen Yi, Yeman Fan, Dikai Liu, and Jose Guadalupe Romero
This work was partially supported by the Australian Research Council (ARC) Discovery Project under Grant DP200102497, and by the Robotics Institute at the University of Technology Sydney, Australia. B. Yi and Y. Fan contributed equally to this work. (Corresponding author: Y. Fan)
B. Yi, Y. Fan and D. Liu are with the Robotics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2006, Australia (email: [email protected]; [email protected]; [email protected])
J.G. Romero is with Departamento Académico de Sistemas Digitales, ITAM, Río Hondo 1, 01080, Ciudad de México, México (email: [email protected])
July 31, 2023
===============================================================================
Continuum robots have gained widespread popularity due to their inherent compliance and flexibility, particularly their adjustable levels of stiffness for various application scenarios. Despite efforts in dynamic modeling and control synthesis over the past decade, few studies have focused on incorporating stiffness regulation in their feedback control design; however, this is one of the initial motivations to develop continuum robots. This paper aims to address the crucial challenge of controlling both the position and stiffness of a class of highly underactuated continuum robots that are actuated by antagonistic tendons.
To this end, the first step involves presenting a high-dimensional rigid-link dynamical model that can analyze the open-loop stiffening of tendon-driven continuum robots. Based on this model, we propose a novel passivity-based position-and-stiffness controller that adheres to the non-negative tension constraint. To demonstrate the effectiveness of our approach, we tested the theoretical results on our continuum robot, and the experimental results show the efficacy and precise performance of the proposed methodology. Keywords: continuum robot, nonlinear control, stiffness regulation, energy shaping. § INTRODUCTION Continuum robots are a novel class of soft robots within the realm of robotics. Their unique properties, such as scaled dexterity and mobility, make them well suited to human-robot interaction and manipulation tasks in uncertain and complex environments. For example, they can be used for manipulating objects with unknown shapes, performing search and rescue operations, and whole-arm grasping <cit.>. By utilizing soft materials, continuum robots represent a concrete step towards developing robots with performance levels similar to those of biological organisms. Despite the above advantages, rigid-body robotics currently outperforms continuum robotics in tasks requiring adaptable movement and compliant interactions with the environment <cit.>. Consequently, many efforts have been devoted to addressing the challenges of real-time control of continuum robots that facilitates fast, efficient and reliable operation. We refer the reader to the recent survey papers <cit.> for a comprehensive understanding of this point. The existing control approaches for soft robots can be broadly classified into two categories: data-driven (or machine learning) and model-based design. Initially, the data-driven approaches dominated the research because obtaining reliable models of a continuum robot was believed to be overwhelmingly complex <cit.>. In recent years, various state-of-the-art learning methodologies have been applied to the control of soft robotics. These include the Koopman operator <cit.>, Gaussian process temporal difference learning <cit.>, supervised learning via recurrent neural networks <cit.>, and feedforward neural networks <cit.>. However, it is well known that these data-driven approaches have some key limitations, including stringent requirements on data sets and no guarantee of stability or safety. Nonetheless, recent efforts from the control community have been made to impose stability constraints on the model and learning policy <cit.>. On the other hand, in the last few years the resurgence of interest in model-based control approaches has made them particularly appealing for soft robotics. This is because they are robust even when using approximate models, and possess properties that are both interpretable and manageable <cit.>. Unlike rigid manipulators, the elastic deformation of continuum robots theoretically leads to infinite degrees-of-freedom (DoF) motion, e.g., torsion, buckling, extension and bending. This renders them particularly suitable to be modelled by partial differential equations (PDEs) rather than conventional ordinary differential equations (ODEs) <cit.>. In particular, there are two prevalent categories of modeling used in this field, namely mechanics-based and geometry-based approaches. The former focuses on studying the elastic behaviour of the constitutive materials and solving the boundary conditions problem.
For instance, Cosserat rod theory and Euler-Bernoulli beam theory are two methodologies for modelling commonly used in this category <cit.>. To enable numerical implementation of these models, they need to be solved numerically to obtain a closed formulation for each material subdomain. The finite elements method (FEM) has proven to be successful in the design and analysis of continuum robots with high accuracy <cit.>. The FEM approaches have extremely heavy computational burden, thus not adopted to real-time control with a notable exception of <cit.> for quasi-static control. In contrast, geometrical models assume that the soft body can be represented by a specific geometric shape, e.g., piecewise constant curvature (PCC). As these modelling approaches often lead to kinematic models rather than dynamical models, they enable the design of kinematic or quasi-static controllers. For example, <cit.> provides the first exact kinematics in closed forms of trunk sections that can extend in length and bend in two dimensions, and shows how to connect workspace coordinates to actuator inputs. However, some results have shown that such types of kinematic controllers are likely to yield poor closed-loop performance <cit.>. To address the challenges previously mentioned, research has been conducted in the last few years on the dynamic modelling and model-based control of continuum robots.[In this paper, we use the term “dynamic controllers” to refer to feedback laws designed from dynamical and kinematic models for robotics, rather than the terminology in control theory which refers to feedback control with dynamics extension (e.g. adaptive and observer-based control) <cit.>.] Several dynamical models have been adopted for controller synthesis of continuum robots, including the geometrically exact dynamic Cosserat model <cit.>, port-Hamiltonian Cosserat model <cit.>, rigid-link models <cit.>, and reduced-order Euler-Lagrangian model <cit.>. These works have applied various model-based constructive control approaches, such as passivity-based control (PBC), partial feedback linearisation, proportional derivative (PD) control, and immersion and invariance (I&I) adaptive control. Among these, the paper <cit.> provides probably the earliest solution to the design and experimental validation of dynamic feedback control for soft robots. As illustrated above, one of the primary motivations to develop continuum robots is to enable robots exhibit more agile and adaptable movement, and compliant interactions. Consequently, there is an urgent and rapidly growing need to develop high-performance control algorithms to regulate position and stiffness simultaneously. In certain robotic applications involving human interaction or in complicated environments (e.g. search and rescue, industrial inspection, medical service, and home living care), stiffness control outperforms position control in terms of performance. For instance, in minimally invasive surgery or natural orifice transluminal endoscopic surgery, precise control of the robot's position is crucial. Additionally, the robot must dynamically adjust its stiffness to minimize side effects or transmit force. The problem of stiffness control, together with impedance control, is well established for rigid and softly-actuated (or articulated soft) robotics <cit.>. In contrast, the problem of simultaneous position-and-stiffness control for continuum robots is still an open area of research. 
The first stiffness controller for continuum robots in the literature may refer to <cit.>, which can be viewed as an extension of a simple Cartesian impedance controller for continuum robots using a kinematic model. In <cit.>, the authors tailor the classic hybrid motion/force controller for a static model of multi-backbone continuum robots, and the proposed method needs the estimation of external wrenches. In <cit.>, Cartesian stiffness controller was proposed to achieve dynamic control of a fully-actuated soft robot, enabling interaction between the robot and its environment. However, note that these works are not applicable to underactuated dynamical models of continuum robots. This paper aims to address the above gap by proposing a novel dynamical model and a real-time control approach that regulates both position and stiffness concurrently for a class of underactuated antagonistic tendon-driven continuum robots. The main contributions of the paper are: C1 We propose a port-Hamiltonian dynamical model for a class of antagonistic tendon-driven continuum robots, which features a configuration-dependent input matrix that enables to interpret the underlying mechanism for open-loop stiffening. C2 Stiffness flexibility is one of the motivations to develop continuum robots. Using the resulting highly underactuated dynamical model, we propose a novel potential energy shaping controller. Though simultaneous position-and-stiffness control has been widely studied for rigid and softly-actuated robots, to the best of the authors' knowledge, this work is the first such attempt to design a controller, which is capable of simultaneous control of an underactuated continuum robot. C3 We analyse the assignable equilibria set of the proposed model class, which is actuated by tendons that only provide non-negative tensions, and show how to incorporate input constraints into controller design via an input transformation. We conducted experiments in various scenarios to validate the theoretical results presented in the paper. However, due to C2 and the lack of applicable control approaches in the existing literature, we were unable to include a fair experimental comparison with previous work in this study. The reminder of the paper is organised as follows. Section <ref> provides a brief introduction to the proposed dynamical model of continuum robots and the problem studied in the paper. In Section <ref>, we explain the underlying reason why the proposed model can interpret the open-loop stiffening. In Section <ref> we present a simultaneous position-and-stiffness controller which is further discussed in Section <ref>. It is followed by the experimental results which were tested on a robotic platform OctRobot-I in Section <ref>. Finally, the paper presents some concluding remarks and discusses future work in Section <ref>. Notation. All functions and mappings are assumed C^2-continuous. I_n is the n × n identity matrix, 0_n × s is an n × s matrix of zeros, and 1_n := (1, …, 1) ∈^n. Throughout the paper, we adopt the convention of using bold font for variables denoting vectors, while scalars and matrices are represented in normal font. For ∈^n, S ∈^n × n, S=S^⊤ >0, we denote the Euclidean norm ||^2:=^⊤, and the weighted–norm ^2_S:=^⊤ S. Given a function f: ^n → we define the differential operators ∇ f:=(∂ f /∂ x)^⊤, ∇_x_i f:=(∂ f /∂ x_i)^⊤, where x_i ∈^p is an element of the vector . The set is defined as := {1,…,n}. When clear from the context, the arguments of the functions may be omitted. 
§ MODEL AND PROBLEM SET §.§ Modelling of A Class of Continuum Robots In this section, we present a control-oriented high-dimensional rigid-link dynamical model specifically designed for a class of underactuated continuum robots driven by tendons. This model class encompasses a wide range of recently reported continuum robots in the literature, including the elephant trunk-inspired robot <cit.> and our own developed OctRobot-I <cit.>, alongside other notable examples. By employing this versatile model, we aim to provide a general framework that can effectively describe and analyse various underactuated continuum robotic systems, enabling a deeper understanding of their stiffening mechanisms and facilitating control design. In order to visualize the modelling process, we take OctRobot-I as an example to introduce the proposed dynamical model, but keep its generality in mind that the model is not limited to this specific robotic platform. This robot imitates an octopus tentacle's structure and motion mechanism, as shown in Fig. <ref>. The whole continuum manipulator consists of several sections in order to be able to deform in three-dimensional space. Each of them is made of n connected spine segments that are driven by a pair of cables. More details of the continuum robot OctRobot-I are given in Section <ref>, as well as in <cit.>. In this paper, we use a rigid-link model to approximate dynamical behaviours of continuum robots due to three considerations: 1) for the class of continuum robots which we are concerned with, there are “natural” spine segments partitioning into several links; 2) the rigid-link modelling approach is simple for control-oriented tasks; and 3) it is convenient to account for external loading. To obtain the dynamical model, we make the following assumptions: - The manipulator deflection is due to bending only, and the extension is negligible. - In this paper, we specifically concentrate on the two-dimensional case, limiting our analysis to a single section, in order to effectively illustrate the underlying mechanism.[It is promising to extend the main results to the three-dimensional case with multi-sections, and we will consider it as a valuable avenue for further exploration.] - The actuator dynamics is negligible, i.e., the motor is operating in the torque control mode with sufficiently short transient stages. - The sections of continuum robots have a piecewise constant curvature (PCC), conformally with respect to segments.[Each spine segment has constant curvatures but variable in time. ] In the rigid-link dynamical model, we use a serial chain of rigid links with n rotational joints to approximate one section of the continuum robot. Then, the configuration variable can be defined as = q_1 … q_n^⊤∈⊂^n, with q_i ∈ representing the approximate link angles, where is the feasible configuration space; see Fig. <ref> for an illustration. Practically, all angles q_i are in the set [-π 2, π 2] due to physical constraints. We model the continuum robot as a port-Hamiltonian system in the form of <cit.> = 0_n× n I_n - I_n -D()∇ _ H ∇_ H +  0_n    G() + τ_ ext with the generalised momenta ∈^n, the damping matrix D() ∈^n× n_≻ 0, τ_ ext∈^n the external torque, and the input matrix G() ∈^n× m with m<n. The total energy of the robotic system is given by H(,) = 12^⊤ M^-1() + U(), with the inertia matrix M() ≻ 0, and the potential energy U(), which consists of the gravitational part U_ G and the elastic one (), i.e. U()  = () + (). 
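As a rough numerical illustration of how the rigid-link port-Hamiltonian model above can be integrated, the following sketch rolls out the dynamics with forward Euler. The constant inertia matrix, diagonal damping, input matrix, and potential gradient used here are toy placeholders for the quantities defined in the paper, and for a configuration-dependent M(q) the kinetic-energy contribution to ∇_q H would also have to be included.

```python
import numpy as np

def simulate(q0, p0, M, D, G, grad_U, u, dt=1e-3, steps=2000, tau_ext=None):
    """Forward-Euler rollout of
        q_dot = M^{-1}(q) p
        p_dot = -grad_U(q) - D(q) M^{-1}(q) p + G(q) u(q, p) + tau_ext
    with a constant placeholder inertia (so d/dq of the kinetic energy vanishes)."""
    q, p = np.asarray(q0, float).copy(), np.asarray(p0, float).copy()
    traj = [q.copy()]
    for _ in range(steps):
        v = np.linalg.solve(M(q), p)                 # M^{-1}(q) p
        dq = v
        dp = -grad_U(q) - D(q) @ v + G(q) @ u(q, p)
        if tau_ext is not None:
            dp = dp + tau_ext
        q, p = q + dt * dq, p + dt * dp
        traj.append(q.copy())
    return np.array(traj)

# Illustrative placeholders for a 3-link section (not the paper's model data):
n = 3
M = lambda q: np.eye(n)                                      # toy constant inertia
D = lambda q: 0.5 * np.eye(n)                                # toy damping
G = lambda q: np.column_stack([np.ones(n), -np.ones(n)])     # toy antagonistic pair
grad_U = lambda q: 2.0 * q                                   # toy elastic potential only
u = lambda q, p: np.array([1.0, 1.0])                        # equal tensions u1 = u2
traj = simulate(np.zeros(n), np.zeros(n), M, D, G, grad_U, u)
```

With equal tensions and this toy antagonistic input matrix, the section remains at the origin, mirroring the balanced-tension equilibrium used in the open-loop stiffening analysis that follows.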
The potential energy function has an isolated local minimum point at its open-loop equilibrium of the origin = 0_n. The variable ∈^m represents the control input, denoting the tensions along the cables generated by actuators. In particular, for the 2-DoF case we have m=2 with two cables. Due to the specific structure, these two tensions are one-directional, i.e., u_i≥ 0 (i=1,2). The gravitational and elastic potential energy functions can be derived according to the geometric deformation under the uniformity assumptions of the materials. To make the paper self-contained, the details on the modelling of the potential energy functions are given in Appendix. For the studied case, the input matrix G(): ^n →^n× 2 can be conformally partitioned as G()  =   G_1() G_2()  . In the following assumption, some key properties of the matrix G() are underlined when modelling the continuum robot. The input matrix G() of the continuum robot model (<ref>)– or equivalently in (<ref>)– satisfies ( 1a)G() is state-dependent and C^1-continuous. ( 1b)G_1(0_n) = - G_1(0_n). ( 1c)|G_1() + G_2()| ≠ 0 for ∈\{0_n}. Among the above three items, the state-dependency of the input matrix is a key feature of the proposed model, which is instrumental to show tunability of open-loop stiffness of tendon-driven continuum robots. We will give more details about the input matrix in the sequel of the paper. The second item means that at the open-loop equilibrium (i.e.= 0_n), the tensions from two cables have the same value but with opposite directions. Let us now consider a single link in the zoomed-in subfigure in Fig. <ref>. If the forces along the cables are assumed lossless, their directions are nonparallel to the centroid of the continuum manipulator. The torques imposed on the first approximate link are given by u_1 ℒ_1(q_1) and u_2 _2(q_1) with ℒ_1, ℒ_2 the lever's fulcrums, which are nonlinear functions of the configuration q_1. From some basic geometric relations, it satisfies _1(0)= _2(0). This illustrates the rationality of Assumption <ref>. §.§ Problem Set In this paper, we study how to design a feedback controller that is capable of regulating the continuum robot deformation and achieving the variable stiffness capability. To be precise, the closed loop complies with the input constraint (<ref>) and achieves the following aims: A1: In the absence of the external torque (i.e.τ_ ext = 0_n), it achieves the asymptotically accurate regulation of the position, that is lim_t→∞(t) = _⋆, with the desired configuration _⋆∈. A2: We are able to control the stiffness at the closed-loop equilibrium concurrently. § OPEN-LOOP STIFFENING It is widely recognised that in tendon-driven continuum robots, the antagonism mechanism is a popular method for stiffening. By arranging a pair of cables on two sides of the robot and adjusting their tensions simultaneously, the robot body compresses or expands, and generates a reaction that counteracts the tension. As a consequence, the stiffness of the robot changes. In this section, it is shown via an intuitive example that there is a redundant degree of freedom of input in the proposed model, which provides the possibility to regulate the stiffness in the open-loop system. This property is instrumental for the controller design to regulate position and stiffness simultaneously. To this end, the following assumption is imposed. The axial stiffness K_ A at the end-effector is infinite, i.e.K_ A→∞. In other words, the continuum manipulator is axially inextensible. 
Now consider the case of the open-loop system with a pair of identical constant inputs u_1= u_2= μ. This balance will keep the configuration variable at the open-loop equilibrium _⋆=0_n for any feasible μ≥ 0. Specially, when μ =0, the manipulator is in a slack state where its stiffness corresponds to its inherent properties determined by the materials and mechanical structures. Intuitively, as μ increases, the manipulator progressively transitions towards a state of higher rigidity or inflexibility. The proposed dynamical model should be capable of interpreting the above phenomenon – physical common senses tell us that a larger value of μ >0 implies a larger transverse stiffness. In order to study the stiffness at the end-effector, we need the Jacobian J() ∈^2× n from the contact force _ ext∈^2, i.e., satisfying τ_ ext = J^⊤() _ ext, in which τ_ ext∈^n is the external torque vector acting on each link. In the following proposition, we aim to demonstrate the property of stiffness tunability by changing the value μ in the proposed antagonistic tendon-driven model. Consider the antagonistic tendon-driven model (<ref>) for continuum robots, with the constant inputs (<ref>) under Assumption <ref>, and assume that the stiffness matrix is diagonal. If the following hypotheses are satisfied: H1: For j=1,2 ∂ G_n,j∂ q_n () < 0,   ∂ G_n,j∂ q_k () = 0,  k ∈\{n} in some small neighbourhood of the origin. H2: The forward kinematics (mapping from the configuration ∈ to the end-effector Cartesian coordinate ∈^2) is a locally injective immersion. Then, the input (<ref>) guarantees the origin 0_n an equilibrium in the absence of the external perturbation (i.e.τ_ ext=0), and a large input value μ>0 implies a larger transverse stiffness K_ T of the end-effector at this equilibrium. From the second item of Assumption <ref>, the input term G() |_ = 0_n = 0_n with (<ref>) guarantees that the origin _⋆ = 0_n is an equilibrium in the case of τ_ ext = 0. The Cartesian coordinate ∈^2 of the end-effector can be uniquely determined by the configuration as = T() for some smooth function T: ^n →^2, with the open-loop equilibrium _⋆:= T(_⋆). Note that the function T depends on the coordinate selection. Without loss of generality, we assume _⋆ = 0_2 and the local coordinate of :=(x_1, x_2) are selected as the tangential and the axial directions of the n-th link. For a non-zero external torque τ_ ext, let us denote the shifted equilibrium as (, 0_n) in the presence of the external perturbation, with the corresponding end-effector coordinate = T(). In order to study the transverse stiffness, we assume that the external force _ ext is only acting to the n-th link <cit.>. Substituting it into the port-Hamiltonian model (<ref>), it satisfies the following equation at the shifted equilibrium ∇ U()  =  [G_1() + G_2()] μ + J^⊤()_ ext _ ext  =  K_ C(_⋆ - ), in which K_ C∈^2× 2 is the stiffness matrix with the partition K_ C := (K_ T, K_ A) J() = J_1() J_2() with the axial stiffness K_ A and the transverse stiffness K_ T. For the particular coordinate selection as mentioned above, the Jacobian matrix J_1() is in the form J_1 = 0 … 0 l_n, in which l_n>0 represents the distance from the contact point to the centre of the n-th link. From Assumption <ref>, the continuum manipulator is inextensible along the axial direction, which implies x̅_2 - x_⋆_2 =0. Hence, we have ∇ U() = [G_1() + G_2()]μ + K_ T J_1^⊤[x̅_1 - x_⋆ 1]. 
For convenience of presentation and analysis, we define a function f_μ : ^n →^n as f_μ() := ∇ U() - [G_1() + G_2()]μ, which is a function parameterised by μ≥ 0. Invoking that _⋆ = 0_n is the open-loop equilibrium, we have f_μ(_⋆) =0, and thus f_μ() - f_μ(_⋆) = K_ T J_1^⊤() [x̅_1 - x_⋆ 1]. Invoking the local injectivity hypothesis H2, there exists a left inverse function T^ L– which is defined locally– of the function T such that = T^ L (T()) in a small neighborhood of _⋆. From the equation (<ref>), the transverse stiffness K_ T at the open-loop equilibrium _⋆ is defined by taking →_⋆, i.e. K_ T  = J_1 (_⋆) |J_1(_⋆)|^2 lim_x̅_1 → x_⋆ 1 f_μ() - f_μ(_⋆) x̅_1 - x_⋆1  = J_1 (_⋆) |J_1(_⋆)|^2 [∇ f_μ(_⋆)]^⊤∇_x_1 T^ L(_⋆)  = J_1 |J_1|^2 [∇^2 U - μ( ∇ G_1 + ∇ G_2 )^⊤] ∇_x_1 T^ L|__⋆. The hypothesis H1 guarantees that the (n,n)-element of ∇ G_j(_⋆) for j=1,2 is negative. On the other hand, from the local coordinate selection, the variation of x_1 implies that q_n will also change accordingly, as consequence the last element of ∇_x_1 T^ L is non-zero (indeed positive). It is straightforward to see that K_ T is increasing by selecting a larger μ>0. The above calculation shows the underlying mechanism on tunability of stiffness in the open loop. Later on, we will illustrate that the tendon difference (u_1 - u_2) provides another degree to regulate the robot configuration. The hypothesis H1 is a technical assumption, which means G_n,j only depends on the state q_n rather than other configuration variables. It is used to simplify the presentation and analysis. Indeed, it is unnecessary, and with only J_1(∇ G_1 + ∇ G_2) ∇_x_1 T^ L≠ 0, we are able to show the ability to tune the stiffness. The assumption H2 means that we are able to find a unique inverse kinematics in a small neighborhood of a given configuration _⋆. Though it is generally not true to guarantee the existence of a global inverse, it is possible to achieve it locally. § CONTROL DESIGN In this section, we will study how to design a state feedback law to regulate position and stiffness simultaneously using the proposed dynamical model in Section <ref>. To facilitate the controller design, we additionally assume the following for the input matrix G() in terms of the geometric constraints. The matrix G() given by (<ref>) is reparameterised as G_1()  = _1() + _0 G_2()  = _1() - _0 with a constant vector _0 ∈^n and a C^1-continuous function _1:^n →^n satisfying the following: ( 3a)_1() is a smooth odd function; ( 3b)_1() is full column rank for ∈{0_n}; ( 3c) The constant vector _0 and the vector field _1() can be re-parameterised as _0 = g_0 1_n , _1() = g_1() 1_n, and g_0+ g_1() ≠ 0 for all . Clearly, the above is compatible with Assumption <ref>, and the function _1() is related to the open-loop stiffness tunability outlined in Proposition <ref>. §.§ Assignable Equilibria For underactuated mechanical systems, it is crucial to identify the set of assignable equilibria, also known as achievable or feasible equilibria. Although extensively explored, tendon-driven robots face a significant obstacle in the form of the one-directional input constraint (<ref>). In order to facilitate the control design, we make the following input transformation: τ = T_ u, T_ u:=1 -1 0 1 with new input control τ= (τ_1, τ_2) ∈^2. For τ_1, there is no sign constraint for its value; the other input channel verifies τ_2 ≥ 0, and we define the admissible input set as _τ := {τ∈^2: τ_1 ∈,  τ_2 ≥ 0}. 
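A small numerical sketch of the input transformation (with our own function name) makes the constraint bookkeeping explicit: mapping (τ_1, τ_2) back to cable tensions via T_u^{-1} shows that u ≥ 0 holds exactly when τ_2 ≥ 0 and τ_1 ≥ -τ_2.

```python
import numpy as np

T_u = np.array([[1.0, -1.0],
                [0.0,  1.0]])            # tau = T_u @ u
T_u_inv = np.linalg.inv(T_u)             # = [[1, 1], [0, 1]]: u1 = tau1 + tau2, u2 = tau2

def tendon_tensions(tau1, tau2):
    """Map the transformed input (tau1, tau2) back to cable tensions (u1, u2)
    and flag whether the one-directional constraint u_i >= 0 is met."""
    u = T_u_inv @ np.array([tau1, tau2])
    return u, bool(np.all(u >= 0.0))     # holds iff tau2 >= 0 and tau1 >= -tau2

print(tendon_tensions(-0.5, 2.0))        # (array([1.5, 2. ]), True): feasible
print(tendon_tensions(-3.0, 2.0))        # (array([-1., 2. ]), False): violates u1 >= 0
```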
Invoking the intuitive idea in Section <ref>, we may use these two inputs τ_1,τ_2 to regulate the position and stiffness concurrently. For convenience, we define the new input matrix as G_τ() = ρ_1() ρ_2() := G()T_ u^-1 with ρ_1()  = _0 + _1() ρ_2()  =  2_1(), and then the controlled model becomes = 0_n× n I_n - I_n -D()∇ _ H ∇_ H +  0_n   G_τ() τ in the absence of the external perturbation τ_ ext. Indeed, the real constraint for τ_1 should be τ_1 ≥ -τ_2 rather than τ_1∈ in order to guarantee the constraint u_1,u_2≥0. Since we are able to set the value of τ_2arbitrarily, we consider the admissible input set _τ defined above for convenience in the subsequent analysis. According to <cit.> and invoking the full-rankness of T_u, if there were not input constraints, the assignable equilibrium set would be given by {∈^n : G()^∇ U () =0 }. Clearly, this does not hold true for our case, since the feasible solution cannot be guaranteed to live within the set _τ rather than τ∈^2. To address this point, in the following proposition we present the assignable equilibria set for the studied case with constrained inputs. (Assignable Equilibria) Consider the model (<ref>) and the input transformation (<ref>) with the input constraint _τ in (<ref>). All the assignable equilibria are given by the set _q ∩, with the definition _q := {∈^n | (_1^_0)^_1^∇ U =0 _1^⊤∇ U - _1^⊤(_0+_1) (_1^_0)^†_1^∇ U ≥ 0 }. In terms of Assumption (<ref>b), we can always find a left annihilator _1^() ∈^(n-1)× n, which is full rank for all ∈{0_n}. For an equilibrium (, 0), there should exist τ_1 and τ_2 satisfying ∇ U ()  =   G_τ()τ  =   [_0 + _1()]τ_1 + 2 _1()τ_2. Considering the full-rankness of the square matrix _1^ ()    _1^⊤ () ∈^n× n for ∈{0_n}, we have (<ref>)  _1^ _1^⊤∇ U = _1^_0 τ_1 _1^⊤(_0 + _1) τ + 2|_1|^2τ_2 . Its solvability relies on finding all the points ∈⊂^n satisfying _1^∇ U = _1^_0 τ_1 _1^⊤∇ U = _1^⊤(_0 + _1) τ_1 + 2|_1|^2τ_2 at the same time under the constraint τ∈_τ. Clearly, all the feasible equilibria satisfying (<ref>) live in the set {∈^n:(_1^() _0)^_1^() ∇ U() =0 }, and the corresponding input τ_1 is given by τ_1 = (_1^_0)^†_1^∇ U. On the other hand, Assumption (<ref>b) imposes the constraint |_1|^2>0, thus (<ref>) admits a positive solution to τ_2 ≥ 0 if and only if _1^⊤∇ U - _1^⊤(_0+_1) τ_1 ≥ 0. By inserting (<ref>) into the above equation, we complete the proof. After imposing Assumption (<ref>c) to the input matrix G(), we are interested in a class of particular equilibria, and we call them the homogeneous equilibria that are characterised by the set _θ :={∈^n : _⋆ := (θ, …, θ) } for some constant θ. This definition is tailored for the proposed continuum robot model under the assumptions in the paper. In the following, we show all homogeneous equilibria belong to the assignable equilibrium set _q as defined in Proposition <ref>. Consider the model (<ref>) of the continuum robot under Assumptions <ref>-<ref>. Then, all homogeneous equilibria are assignable, i.e._θ⊂_q. For the case with θ = 0, since g_1(0_n) =0, the equilibrium _⋆= θ1_n makes the equation (<ref>) solvable with τ_1 = 0 and any τ_2 ≥ 0. For the case with θ≠ 0 and any fixedτ_1≥ 0, the determination of the set _q is equivalent to solving (<ref>), which can be written as ∇ U() = [_0 + _1()]τ_1 + 2 _1()τ_2 = G_ Nτ_ N with the definitions G_ N  := 1_n τ_ N(τ)  :=  [g_0+g_1()]τ_1 + 2g_1() τ_2. For any fixed τ_2≥ 0, invoking (<ref>) from Assumption <ref>, the mapping τ_1 →τ_ N is a diffeomorphism from →. 
It implies that there is no constraint for τ_ N. As a consequence, the PDE (<ref>) becomes G_ N^∇ U(_⋆) =0 at the desired equilibrium _⋆. A feasible full-rank annihilator of G_ N is given by G_ N^ = 1 -1 0 … 0 0 1 -1 … 0 ⋱ ⋱ 0 … 0 1 -1∈^(n-1)× n, and the Jacobian ∇ U at the desired equilibrium _⋆ is in the form ∇ U (_⋆) = α_1  sin(q_Σ)  ⋮  sin(q_Σ) _sin(nθ) 1_n + α_2  _⋆,1   ⋮   _⋆,n _θ1_n with q_Σ := ∑_i∈_i. It is straightforward to verify that (<ref>) holds true for any τ_2≥ 0 with a homogeneous equilibrium _⋆. Since there is no constraint for the input variable τ_1, the equilibrium for this case is also assignable under the constraint (<ref>). We complete the proof. In the sequel of the paper, our focus will be on control design aimed at regulating certain homogeneous equilibria that have been demonstrated to be assignable within the proposed class of models for continuum robots. §.§ Simultaneous Position-and-Stiffness Control We now aim at stabilising an arbitrary homogeneous equilibrium _⋆ in the subset of _θ with a tunable stiffness of the closed loop. Towards the end, we will employ the passivity-based control (PBC) method since it has a clear energy interpretation and simplifies both modelling and controller design. This makes it suitable for continuum robotics to preserve the system compliance <cit.>. Our basic idea is to fix τ_2 at some constant value τ_2^⋆≥ 0. We then utilise the input τ_1 to achieve potential energy shaping for the regulation task. Compared to the more general approach of interconnection and damping assignment (IDA) PBC <cit.>, on one hand, potential energy shaping may provide a simpler controller form, and on the other hand, as pointed out in <cit.> changing the inertia is prone to fail in practice – albeit being theoretically sound with additional degrees of freedom. For a given input τ_2 = τ_2^⋆≥ 0, the actuation into the dynamics is given by G_τ() τ  =  ρ_1() τ_1 + ρ_2() τ_2^⋆ :=  G_ Nτ_ N((τ_1, τ_2^⋆)), with a new function τ_ N: ^2 →. From Assumption <ref>, the vector field ρ_1() ≠ 0 for all ∈. Now the design target becomes using the control input τ_1 (with a fixed τ_2^⋆) to shape the potential energy function U() into a new shaped potential energy function U_ d(). To this end, we need to solve the PDE <cit.> G_ N^[ ∇ U() - ∇ U_ d() ] = 0. Note that the solution must adhere to the constraints ∇ U_ d(_⋆)  =  0 ∇^2 U_ d(_⋆)  ≻  0, in order to make the desired configuration _⋆ an asymptotically stable equilibrium. We are now in the position to propose the controller for simultaneous control of position and stiffness. Consider the continuum robotic model (<ref>), (<ref>) with the constraint (<ref>) satisfying Assumptions <ref>-<ref>. The controller u = T_u^-1τ with the transformed input τ = τ_ es+ τ_ da + τ_ st and the terms τ_ st = - 2g_1() g_0+g_1() 1 τ_2^⋆ τ_ es = 1 g_0+g_1() G_ N^† (∇ U_ d - ∇ U) 0 τ_ da = - 1 g_0+g_1() G_ N^⊤ K_ d M^-1() 0, where G_ N = 1_n, τ_2^⋆ >0, K_ d≻ 0 is gain matrix, and the desired potential energy function is given by U_ d()  =  - γcos(q_Σ - q_Σ^⋆) + α_2 2| - _⋆|^2 q_Σ^⋆  = ∑_i ∈_⋆,i and the gain γ>0, achieves the following closed-loop properties: P1: (Position regulation in free motion) If the external force τ_ ext =0 and γ <α_2, then the desired equilibrium point _⋆ is globally asymptotically stable (GAS) with lim_t→ + ∞(t) = _⋆. P2: (Compliant behavior) The overall closed-loop stiffness (i.e., from the external torque τ_ ext∈^n to the configuration ∈^n) is K_ O = γ1_n× n + α_2 I_n, where 1_n× n is an n× n matrix of ones. 
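For concreteness, the control law (<ref>)-(<ref>) can be evaluated as in the following minimal sketch (Python). The values of n, α_1, α_2, g_0, γ, K_d, τ_2^⋆ and the function g_1(·) below are placeholders chosen only for illustration (they are not identified robot parameters), and the generalised velocity M^{-1}(q)p is passed in directly as dq.

import numpy as np

n = 6
alpha1, alpha2 = 8.6, 0.5                   # placeholder potential-energy coefficients
g0 = 1.2
def g1(q):                                  # illustrative smooth function with g0 + g1(q) != 0
    return -0.3 * np.sin(np.mean(q))
gamma, tau2_star = 0.3, 5.0                 # shaping gain (gamma < alpha2) and fixed tau_2
K_d = 0.1 * np.eye(n)                       # damping-injection gain
G_N = np.ones(n)
q_star = np.deg2rad(10.0) * np.ones(n)      # desired homogeneous configuration

def grad_U(q):
    # nabla U(q) = alpha1 * sin(q_Sigma) * 1_n + alpha2 * q  (open-loop potential)
    return alpha1 * np.sin(q.sum()) * np.ones(n) + alpha2 * q

def grad_U_d(q):
    # nabla U_d(q) = gamma * sin(q_Sigma - q_Sigma^*) * 1_n + alpha2 * (q - q_star)
    return gamma * np.sin(q.sum() - q_star.sum()) * np.ones(n) + alpha2 * (q - q_star)

def control(q, dq):
    # u = T_u^{-1} (tau_es + tau_da + tau_st); dq stands for M^{-1}(q) p
    k = g0 + g1(q)
    tau_st = np.array([-2.0 * g1(q) / k, 1.0]) * tau2_star
    tau_es = np.array([G_N @ (grad_U_d(q) - grad_U(q)) / (n * k), 0.0])  # G_N^dagger = 1_n^T / n
    tau_da = np.array([-G_N @ (K_d @ dq) / k, 0.0])
    tau = tau_st + tau_es + tau_da
    return np.array([[1.0, 1.0], [0.0, 1.0]]) @ tau                      # (u_1, u_2)

print(control(q=np.zeros(n), dq=np.zeros(n)))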
First, it is straightforward to verify that the vector τ_ st in (<ref>) is in the null space of τ_ N(τ) for any τ_2^⋆, i.e., τ_ N(τ_ st) = 0, ∀τ_2^⋆∈_≥ 0. It means that the first term τ_ st does not change the closed-loop dynamics. Now, let us study the effect of the potential energy shaping term τ_ es. The Jacobian of the desired potential energy function U_ d is given by ∇ U_ d() = γsin(q_Σ - q_Σ^⋆) 1_n + α_2( - _⋆). It satisfies the following: ∇ U_ d (_⋆)  = 0 ∇^2 U_ d ()  = γcos(q_Σ - q_Σ^⋆)1_n× n + α_2 I_n ≻ 0, in which the second equality holds true for all ∈^n by noting that the eigenvalues of the symmetric matrix ∇^2 U_ d are given by {α_2, …, α_2_n-1, α_2 + γ |cos(q_Σ - q_Σ^⋆)|} with all elements positive from the condition γ < α_2 in P1. This implies that the desired potential energy function U_ d is convex and achieves its global minimum at _⋆. For the function U_ d, we have G_ N^ [∇ U - ∇ U_ d] = G_ N^ [ α_1 sin(q_Σ)1_n + α_2 - γsin(_Σ - _Σ^⋆) 1_n - α_2 ( - _⋆ ) ] = G_ N^ [ α_1 sin(q_Σ)1_n - γsin(q_Σ - q_Σ^⋆) 1_n + α_2 _⋆ ] = 0, where in the last equation we have used the fact _⋆∈_θ, so that the PDE (<ref>) is verified. Together with (<ref>), the controller (<ref>) makes the closed-loop dynamics have the form = 0_n × n I_n - I_n -𝖣∇_ H_ d ∇_ H_ d +  0_n    τ_ ext , with H_ d(,) := 12^⊤ M^-1() + U_ d() 𝖣() := D() + G_ N K_ d G_ N^⊤≻ 0. For free motion (i.e.τ_ ext = 0), following the standard Lyapunov analysis we have Ḣ_ d≤ -(∇ H_ d)^⊤𝖣() ∇ H_ d≤ 0. For the system (<ref>), the set {(, ): (∇ H_ d)^⊤𝖣() ∇ H_ d =0} contains only a single point (_⋆, 0_n). From the LaSalle's invariance principle <cit.>, we are able to show the global asymptotic stability of the desired equilibrium (_⋆, 0_n). Hence, we have proven the property P1. The next step is to verify the stiffness property P2 in a small neighborhood B_ε(_⋆) of _⋆ with a sufficiently small ε>0. For a constant external force τ_ ext, the shifted equilibrium point (,0_n) should satisfy - ∇_ U_ d + τ_ ext =0, or equivalently ϕ():= γsin(q̅_Σ - q_Σ^⋆) 1_n + α_2 (- _⋆) = τ_ ext, with q̅_Σ:=∑_i ∈_i. Note that ∇ϕ = ∇^2 U_ d≻ 0, which means that ϕ: ↦τ_ ext is a (locally) injective immersion. Hence, in the small neighborhood B_ε(_⋆) of _⋆, there is a unique solution to (<ref>) for a given τ_ ext. We show that the shifted equilibrium (, 0_n ) is asymptotically stable by considering the Lyapunov function V(,) = H_ d(,) - ^⊤τ_ ext. From the above analysis, it is clear that ∇ V(,0) = ϕ() - τ_ ext 0 = 0 ∇^2 V(,0) ≻ 0. Hence, V qualifies as a Lyapunov function. Its time derivative along the system trajectory is given by V̇  =  - ∇_ H_ d_𝖣^2 - (∇_ H_ d)^⊤τ_ ext - ^⊤τ_ ext  =   - ∇_ H_ d_𝖣^2. It yields the Lyapunov stability of the closed-loop dynamics (<ref>) in the presence of a constant external torque τ_ ext, with all the system states bounded. On the other hand, the set _u:= {(,):∇_ H_ d() =0 } only contains a single isolated equilibrium (,0). According to LaSalle's invariance principle, (,0) is an asymptotically stable equilibrium, in which depends on the (arbitrary) constant torque τ_ ext– in terms of the unique solution of the algebraic equation (<ref>). The overall stiffness K_ O is defined by τ_ ext = K_ O(-_⋆). Substituting it into (<ref>), we have γsin(1_n^⊤- q_Σ^⋆) 1_n + α_2 (- _⋆) = K_ O(-_⋆). The stiffenss at the desired equilibrium _⋆ is calculated from any direction of the limit →_⋆, thus obtaining K_ O|__⋆  = ∂ϕ∂(_⋆)  = lim_→_⋆γcos(1_n^⊤- q_Σ^⋆) 1_n× n + α_2 I_n  = γ1_n× n + α_2 I_n. 
This verifies the property P2, and we complete the proof. The above shows that the proposed controller (<ref>)-(<ref>) can achieve the end-effector position regulation with the closed-loop stiffness K_ O given by (<ref>). It implies our ability to set a prescribed stiffness by selecting the control gain γ>0 properly. More discussions about the control law will be provided in the next section. § DISCUSSIONS The following remarks about the proposed controller are given in order. 1) In the property P2, we study the overall stiffness – from the external torque vector τ_ ext to the configuration ∈, rather than the transverse stiffness at the end-effector. Consider the external force f_ ext∈ at the end-effector along the transverse direction of the n-link with the Jacobian J = [0,…, 0, ℓ]. With a small force f_ ext, the coordinate of the end-effector would shift from ( ℓ(∑_k ∈sin(kθ_⋆)), ℓ(∑_k ∈cos(kθ_⋆)) ) to (F_x( f_ ext), F_y(f_ ext)) with [ F_x; F_y ] = [ ℓsin(β) + ℓ∑_k ∈\{n}sin(kθ_⋆ + γℓ f_ ext); ℓcos(β) + ℓ∑_k ∈\{n}cos(kθ_⋆ + γℓ f_ ext) ] and β := nθ_⋆ + (γ + α_2) ℓ f_ ext. Hence, the transverse stiffness is given by K_ T = lim_ f_ ext→ 0 √([F_x(f_ ext) - F_x(0)]^2 + [F_y(f_ ext) - F_y(0)]^2) f_ ext . As a result, we have K_ T ∝ κ_1γ + κ_2 with some non-zero constants κ_1, κ_2 for θ_⋆≠ 0. It means that, for a given desired equilibrium _⋆∈\{0}, the transverse stiffness is affine in the gain γ, thus providing a way to tune the closed-loop stiffness linearly. 2) The proposed controller can be roughly viewed as a nonlinear PD controller. The first term τ_ st is used to compensate the “anisotropy” in the input matrix G() due to its state-dependency property; the potential energy shaping term τ_ es and the damping injection term τ_ da, indeed, play the role of nonlinear PD control. To be precise, the term τ_ es is the error between the nonlinear functions of the position and its desired value _⋆; and the term τ_ da can be viewed as the negative feedback of velocity errors. This is not surprising, since the original idea of energy shaping has its roots in the pioneering work of Takegaki and Arimoto in robot manipulator control <cit.>, in which they proposed a very well-known “PD + gravity compensation” feedback <cit.>. 3) To ensure that ∇^2 U_ d(_⋆) ≻ 0, it is necessary to impose the condition γ< α_2 on the control gains. However, this condition may restrict the range of closed-loop stiffness values within an interval. If this condition is not imposed, it is only possible to guarantee the positive definiteness of ∇ U_ d in the vicinity of _⋆, which would result in local asymptotic stability. We provide some experimental evidence regarding this point in the next section. 4) Let us now look at the proposed controller (<ref>)-(<ref>). Note that the term M^-1 () corresponds to the generalised velocity of . Thus, the controller depends on only three plant parameters (α_1, α_2 and g_0) and a nonlinear function g_1– which need to be identified beforehand – and two adaptation gains (i.e.K_ d and γ). This means that it is unnecessary to identify all parameters and functions in the plant model. This makes the resulting controller robust vis-à-vis different types of uncertainties. § EXPERIMENTAL RESULTS §.§ Experimental setup In this section, the proposed control approach was tested using the OctRobot-I, a continuum robot developed in our lab at the University of Technology Sydney <cit.>. 
We considered the planar case with six segments (i.e.n=6) with an overall length of 252 mm, and a diameter of approximately 50 mm, which meets the critical assumptions outlined in the paper. Notably, the OctRobot-I has a jamming sheath that can provide an extra degree of freedom for stiffening, though the present paper does not delve into this feature's stiffening capabilities. Additional details of the OctRobot-I can be found in <cit.>. As shown in Fig. <ref>, the test platform used in the experiments consists of the one-section robot (OctRobot-I), two servo motors (XM430-W350, DYNAMIXEL) with customized aluminium spools, three force sensors (JLBS-M2-10kg), a linear actuator, and an electromagnetic tracking systems (Aurora V3, NDI). The control experiments and data collections were conducted using the software MATLAB™. The force sensors used in the platform have a response delay and provide noisy measurements when used for real-time control. On the other hand, the servo motors in the platform can provide accurate position information with high accuracy, making it easier to control cable lengths between the servo motors and the actuator unit. Using Hooke's law, it is possible to approximately consider the cable length proportional to the force for each cable, with some coefficients to be identified off-line using some collected data sets. To verify the linear relationship between the cable length and the tension force, as well as to obtain the coefficients, we conducted a group of experiments with different configurations, and recorded the cable lengths and the corresponding forces. Each configuration was repeatedly conducted three times under the identical condition, and all the data were utilized for identification. In Fig. <ref>, we plot the relation between the right cable length L_2 and the corresponding force, and the one between the length difference Δ L:= L_1 - L_2 and the force difference τ_1 := u_1 - u_2 of these two cables. The correlation coefficients are 0.9977 and 0.9987, which imply the strong linearity between cable lengths and forces. Thus, it is reasonable to use the cable lengths – driven by motors – as the “real” input signals. §.§ Open-loop stiffening experiments In this subsection, we aim to verify the results regarding open-loop stiffening that were presented in Section <ref>. For this purpose, we utilized a linear actuator positioned at the end-effector to generate a small displacement δ x > 0, as illustrated in Fig. <ref>(a). The actuator was attached to the force sensors to measure the external force f_ ext related to the displacement. By dividing the measured force by the applied displacement, i.e., f_ extδ x, we were able to estimate the transverse stiffness, as long as δ x was sufficiently small. This procedure allowed us to validate the findings related to open-loop stiffening presented in Section <ref>. We measured the stiffness values under different open-loop tendon forces μ>0 in the interval [0,45] N. Each experiment was repeatedly conducted three times under the same condition in order to improve reliability. The experimental results are shown in Fig. <ref>(b), where “×” represents the mean values of the calculated stiffness for all μ, and the error bars are ± 1 standard deviation. This clearly verifies the theoretical results in Proposition <ref>. The correlation coefficient between μ and the stiffness is 0.989, which illustrates the strong linearity – exactly coinciding with the equation (<ref>). 
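The post-processing of such measurements is straightforward; the following sketch (Python, with made-up force readings in place of the recorded data) estimates the transverse stiffness as f_ext/δx for each pre-tension level μ, averages over the repeated trials, and quantifies the linearity of K_T in μ.

import numpy as np

delta_x = 1e-3                              # probe displacement (m), assumed
mu_levels = np.array([0.0, 15.0, 30.0, 45.0])           # open-loop tendon forces (N)
forces = np.array([[0.020, 0.022, 0.021],               # three repeated force readings (N)
                   [0.055, 0.057, 0.054],               # per mu level -- made-up numbers,
                   [0.090, 0.093, 0.091],               # not the recorded measurements
                   [0.128, 0.125, 0.127]])

stiffness = forces / delta_x                # K_T ~ f_ext / delta_x  (N/m)
k_mean = stiffness.mean(axis=1)
k_std = stiffness.std(axis=1, ddof=1)       # +/- 1 standard deviation (error bars)

r = np.corrcoef(mu_levels, k_mean)[0, 1]    # linearity of K_T in mu
slope, intercept = np.polyfit(mu_levels, k_mean, 1)
for mu, m, s in zip(mu_levels, k_mean, k_std):
    print(f"mu = {mu:4.0f} N : K_T = {m:6.1f} +/- {s:4.1f} N/m")
print(f"corr(mu, K_T) = {r:.3f};  K_T ~ {slope:.2f} mu + {intercept:.2f}")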
§.§ Closed-loop experiments In order to apply the proposed real-time control algorithm to the experimental platform, we first conducted the identification procedure to estimate the parameters outlined in the fourth discussing point in Section <ref>. It relies on the fact that at any static configuration (i.e. with =0) the identity ∇_ U() = G() holds true, and thus J(θ) =0 with the cost function J(θ,u_1,u_2) := |α_1 sin(nθ) + α_2θ - [g_0 +g_1(θ)] u_1 + [g_0 - g_1(θ)]u_2|^2, that contains all the quantities to be identified. Since the function g_1(·) is infinite-dimensional, we simply parameterised as g_0 + g_1(θ) = c_1 + c_2 sin(θ) with two constants α_1>0 and α_2<0, complying with the assumptions on the input matrix. We regulated the continuum robot to different equilibria θ^j (j=1,…, w with some w∈ℕ_+) by driving the cables, and recorded the corresponding forces (u_1^j, u_2^j). The identification procedure boils down to solving the optimisation problem c_1, α_1,α_2>0, c_2<0min ∑_j∈{1,…, w} J(θ^j, u_1^j, u_2^j) . We ran the identification experiments to collect data at 15 equilibria points (i.e.w = 15), and repeated for six times. Using this data set, the identified parameters were c_1= 1.2143, c_2 = -2.9015, α_1 = 8.6114 and α_2 = 0.001. To evaluate the performance of position control, we first consider a desired configuration 𝐪_⋆ = [θ_⋆, …, θ_⋆]^⊤ with θ_⋆ = 5 deg for the proposed control scheme. We conducted experiments with various values of the gains γ and K_ d, as shown in Figs. <ref>-<ref>, respectively. The second row of these figures depicts the configuration variable at the steady-state stage during [2,8] s. It is worth noting that the control inputs u_i (i=1,2) are mapped to the cable length L_i, as explained in Section <ref>. In all these scenarios, the transient stage lasted for less than 1.5 seconds, and the configuration variable quickly converged to small neigborhoods of the desired angle, demonstrating the high accuracy of the proposed control approach. There were no apparent overshootings in configuration variables. Our results indicate that selecting either a sufficiently small or large K_ d can negatively affect the control performance during the transient stage. On the other hand, setting a large γ>0 may lead to chattering due to measurement noise at the steady-state stage, which is well understood as the deleterious effect of high-gain design in the control literature <cit.>. We conducted additional experiments to test the proposed control approach in different scenarios, including the desired configurations of θ_⋆ = 10 deg and 15 deg, which are shown in Fig. <ref>. These results demonstrate that the algorithm is capable of achieving high accuracy and performance for position control. To quantify the steady performance, we study the configuration trajectories during _ s:=[4,8] s, since for all these scenarios the system states arrive at the steady-state stage. For these two desired equilibria, the proposed design achieved high accuracy, verifying the property P1 in Proposition <ref>. We summarise the accuracy achieved in these experiments with different equilibria (5 deg, 10 deg and 15 deg) and gains of γ and K_ d in Table <ref>, where [θ_ min, θ_ max] represents the minimal and the maximal values during the interval _ s, and θ̅ and ±σ_θ are the average value and the ± 1 standard deviation. For θ_⋆ = 5 deg, it achieved the highest accuracy among the three equilibria, for which the selection of γ as 1 and 5 degraded the steady-state accuracy a little bit. In Fig. 
<ref>, we present a photo sequence of one of the scenarios with the desired configuration θ_⋆ = 10 deg, and the gains K_ d =0.1 and γ=1. This sequence serves as an intuitive illustration of the dynamic behaviour of the closed loop. In addition, we report the result with a largeγ=10. However, as explained in the discussion point 3) in Section <ref>, a large γ>0 may make the desired potential energy function U_ dnon-convex, resulting in instability. This is consistent with the experimental results, as we observe the neutral stability with oscillating behaviours at the steady-state stage for γ=10; see Fig. <ref>. Finally, we present experimental results demonstrating closed-loop stiffness regulation around the desired equilibria, which is related to P2 in Proposition <ref>. While measuring the overall stiffness is generally not manageable, we can test the transverse stiffness as outlined in Item 1) of Section <ref>. To this end, we equipped a linear actuator perpendicularly to the tangential direction of the continuum robot at the end-effector, as shown in Fig. <ref>. We repeated the experiments for two different desired equilibria, namely 8 deg and 10 deg. We collected stiffness data using different gains γ and plotted the results in Figs. <ref> and <ref>. The results match the equation (<ref>) in Section <ref> that the closed-loop stiffness is affine in the control gain γ. This implied that we were able to identify the parameters κ_1 and κ_2, and use them to tune the controller for a prescribed stiffness around the desired configuration. It is important to note that although various control approaches have been proposed for continuum robotics, their suitability for achieving simultaneous control of position and stiffness in underactuated robots is limited, particularly considering the variations in actuation mechanisms across different continuum robotic platforms. Given the absence of applicable control strategies in the existing literature, this study did not provide experimental comparisons to previous works. However, our aim is to lay the groundwork for future exploration and development of experimental studies in this area. § CONCLUSION In this paper, we studied the modelling and control of a class of underactuated antagonistic tendon-driven continuum robots. The proposed model possesses a configuration-dependent input matrix, which effectively captures the mechanism for open-loop stiffening through cable tension regulation. We have thoroughly analysed the assignable equilibria set and devised a potential shaping feedback controller that enables simultaneous position-and-stiffness regulation while adhering the non-negative input constraint. To the best of the authors' knowledge, this is the first design for such a problem. The experimental results on the robotic platform OctRobot-I demonstrate the effectivenss and reliability of the proposed approach. Since the proposed approach relies on only a few intrinsic parameters of the model, rather than fully utilising the exact dynamical model, lending it remarking robustness to modelling errors. Along the research line, the following problems are considered as the potential future works: 1) As per Proposition <ref>, we impose the condition γ<α_2 to guarantee the convexity of the desired potential energy function U_ d. 
It should be noted that the parameter α_2 is a intrinsic characteristic of the continuum robot, and consequently the range of choices for the adaptation gain γ is limited, which restricts our ability to control the stiffness in a relatively narrow interval. Our experimental results support this assertion. To enlarge the closed-loop stiffness range, a potential way is to make full use of jamming in the continuum robot via changing the compression level of jamming flaps <cit.>. 2) Exploring alternative desired potential energy functions may offer a promising approach to enhance closed-loop performance. In addition, applying state-of-the-art energy shaping methodologies, such as those demonstrated in e.g.<cit.>, could prove valuable for solving more complex tasks for continuum robots, e.g., path following and robust simultaneous position-and-stiffness control. 3) Similar to the recent works <cit.>, the approach in the paper is developed for the planar case. It is underway to extend the main results to multi sections in a three-dimensional space. 4) Our proposed approach does not consider the actuation dynamics, opting instead to utilise a high-gain design to enforce time-scale separation and disregard these dynamics. It would be advantageous to take the actuation dynamics in to the controller synthesis by incorporating advanced robustification techniques <cit.>. § A. MODELLING OF POTENTIAL ENERGY FUNCTIONS In this section, we provide additional details on the model, in particular the potential energy functions of the continuum robotic platform. §.§.§ Gravitational energy In order to approximate the gravitational potential energy U_ G, we make the hypothesis that the mass is lumped at the centre of each link with the link lengths l_i>0 and the masses m_i>0 (i∈). From some basic geometric relations, we have U_ G() = ∑_i ∈l_i m_i 2[ cos(q_0 +…+ q_i-1) - cos(q_0+ … +q_i) ] with the parameter q_0 =0, which satisfies U_ G(0_n) = 0. Besides, we make the following assumption on the mass and length. The continuum robot satisfies the uniformity assumptions: ( 4a) The masses verify m_i = m_j,  ∀ i,j ∈. ( 4b) The lengths satisfy the relation: l_0 =ℓ and l_i = 2ℓ (i ∈). The radius of the beam is r. With the above assumption, the potential energy becomes U_ G()   =  α_1 (1-cos(q_Σ)) q_Σ   := ∑_i∈_i, with some coefficient α_1>0. §.§.§ Elastic energy In the designed continuum robot, each spine segment contains two pair of helical compression springs. Since we limit ourselves to the 2-dimensional case, we only consider a pair of springs as illustrated in Fig. <ref>, and make the assumption below. The deformable part of the continuum manipulator is composed of fixed number of segments with constant curvature with differentiable curves everywhere <cit.>. In terms of the above assumptions, the boundary lengths in the i-th segment are given by h_i,1 = q_i[ ℓ(q_i 2) + r ], h_i,2 = q_i[ ℓ(q_i 2) - r ]. Note that the above functions are well-posed when q_i → 0, i.e., lim_q_i → 0 h_i,1 =2ℓ, lim_q_i → 0 h_i,2 =2ℓ. Hence, the elastic energy can be modelled as U_ E = ∑_i ∈k_i[ q_i^2(ℓ^2 cos(q_i 2)^2 + r^2) - ℓ^2 ] + k_i' q_i^2, in which k_i>0 and k_i'>0 are some elastic coefficients to characterise the elastic energies caused by elongation and bending of springs. In the high-dimensional rigid-link model, each configuration variable q_i would generally be small, i.e., q_i∈ [- π 12, π 12], for which the term cos(q_i 2)^2 takes values within [0.983,1]. 
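A quick numerical check of this range, and of the accuracy of the resulting quadratic approximation, can be carried out as in the sketch below (Python, with assumed values for ℓ, r and the elastic coefficients).

import numpy as np

l, r = 0.021, 0.025                         # assumed half link length and beam radius (m)
k, kp = 120.0, 15.0                         # assumed elastic coefficients k_i, k_i'

q = np.linspace(-np.pi / 12, np.pi / 12, 201)
exact = k * (q**2 * (l**2 * np.cos(q / 2)**2 + r**2) - l**2) + kp * q**2
quad = (k * (l**2 + r**2) + kp) * q**2 - k * l**2       # cos(q/2)^2 replaced by 1

print("cos(q/2)^2 ranges over [%.3f, 1.000]" % np.min(np.cos(q / 2)**2))   # ~[0.983, 1]
print("max |exact - quadratic| =", np.max(np.abs(exact - quad)))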
Then, it is reasonable to make the following quadratic assumption to approximate the highly nonlinear function in (<ref>). The elastic energy function U_E has the form U_E(q) = q^⊤Λ q + U_0 with a constant coefficient U_0 and a diagonal matrix Λ := diag(α_2, …, α_2) ≻ 0.

Bowen Yi obtained his Ph.D. degree in Control Engineering from Shanghai Jiao Tong University, China in 2019. From 2017 to 2019 he was a visiting student at Laboratoire des Signaux et Systèmes, CNRS-CentraleSupélec, Gif-sur-Yvette, France. He has held postdoctoral positions at the Australian Centre for Field Robotics, The University of Sydney, NSW, Australia (2019 - 2022), and the Robotics Institute, University of Technology Sydney, NSW, Australia (Sept. 2022 - now). His research interests involve nonlinear systems and robotics. He received the 2019 CCTA Best Student Paper Award from the IEEE Control Systems Society for his contribution to sensorless observer design.

Yeman Fan (Student Member, IEEE) received the B.E. degree in Machine Designing, Manufacturing and Automation, and the M.E. degree in Agricultural Electrification and Automation from Northwest A&F University, Yangling, China, in 2016 and 2019, respectively. He is currently pursuing the Ph.D. degree with the Robotics Institute, University of Technology Sydney, Sydney, NSW, Australia. His research interests include continuum robots and manipulators, robot control systems, and jamming technology for robotics.

Dikai Liu (Senior Member, IEEE) received the Ph.D. degree in Dynamics and Control from the Wuhan University of Technology, Wuhan, China, in 1997. He is currently a Professor in Mechanical and Mechatronic Engineering with the Robotics Institute, University of Technology Sydney, Sydney, NSW, Australia. His main research interests include robotics, including robot perception, planning and control of mobile manipulators operating in complex environments, human-robot collaboration, multi-robot coordination, and bioinspired robotics.

José Guadalupe Romero (Member, IEEE) obtained the Ph.D. degree in Control Theory from the University of Paris-Sud XI, France in 2013. Currently, he is a full-time Professor at the Instituto Tecnológico Autónomo de México (ITAM), Mexico City, Mexico, and since 2019 he has been the Director of the undergraduate mechatronics engineering program. His research interests are focused on nonlinear and adaptive control, stability analysis and the state estimation problem, with application to mechanical systems, aerial vehicles, mobile robots and multi-agent systems.
http://arxiv.org/abs/2306.08794v1
Quantile autoregressive conditional heteroscedasticity
[ "Qianqian Zhu", "Songhua Tan", "Yao Zheng", "Guodong Li" ]
This paper proposes a novel conditional heteroscedastic time series model by applying the framework of quantile regression processes to the ARCH(∞) form of the GARCH model. This model can provide varying structures for conditional quantiles of the time series across different quantile levels, while including the commonly used GARCH model as a special case. The strict stationarity of the model is discussed. For robustness against heavy-tailed distributions, a self-weighted quantile regression (QR) estimator is proposed. While QR performs satisfactorily at intermediate quantile levels, its accuracy deteriorates at high quantile levels due to data scarcity. As a remedy, a self-weighted composite quantile regression (CQR) estimator is further introduced and, based on an approximate GARCH model with a flexible Tukey-lambda distribution for the innovations, we can extrapolate the high quantile levels by borrowing information from intermediate ones. Asymptotic properties for the proposed estimators are established. Simulation experiments are carried out to assess the finite-sample performance of the proposed methods, and an empirical example is presented to illustrate the usefulness of the new model.

Key words: Composite quantile regression; Conditional quantile estimation; GARCH model; Strict stationarity; Tukey-lambda distribution.

§ INTRODUCTION

Since the appearance of autoregressive conditional heteroscedastic (ARCH) <cit.> and generalized ARCH (GARCH) models <cit.>, GARCH-type models have become popular and powerful tools to capture the volatility of financial time series; see <cit.> for an overview. Volatility modeling plays an important role in financial risk management. In particular, it is a key ingredient for the calculation of quantile-based risk measures such as the value-at-risk (VaR) and expected shortfall. As estimating these measures is essentially a quantile estimation problem <cit.>, considerable research has been devoted to the development of quantile regression (QR) methods for GARCH-type models, such as the linear ARCH <cit.> and linear GARCH models <cit.>, the GARCH model of <cit.>, and the asymmetric power GARCH model <cit.>.

A common feature of the above research is that the global structure of the volatility process is captured by a parametric GARCH-type model with distribution-free innovations. This implies that the conditional quantile process will be the product of the volatility process and the quantile of the innovation. Consider the following linear GARCH(1,1) model <cit.>: y_t=ε_t h_t, h_t=a_0+a_1|y_t-1|+b_1h_t-1, where {y_t} is the observed series, and {ε_t} are independent and identically distributed (i.i.d.) innovations with mean zero. The τth conditional quantile function of y_t is Q_τ(y_t|y_t-1, y_t-2, …)=(a_0+a_1|y_t-1|+b_1h_t-1)Q_τ(ε_t)=θ_τ^' z_t, where Q_τ(ε_t) is the τth quantile of ε_t, θ_τ=(a_0,a_1,b_1)^'Q_τ(ε_t), and z_t=(1,|y_t-1|,h_t-1)^'. Thus, Q_τ(y_t|y_t-1, y_t-2, …) can be estimated by replacing θ_τ and the volatility h_t with their estimates; see <cit.> and <cit.>. Note that Q_τ(y_t|y_t-1, y_t-2, …) is dependent on τ only through Q_τ(ε_t), whereas the GARCH parameters remain invariant across different τ.
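The following minimal simulation sketch (Python, with illustrative parameter values and Student-t innovations) makes this point explicit: once the linear GARCH(1,1) parameters are fixed, τ enters the conditional quantile only through Q_τ(ε_t).

import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(0)
a0, a1, b1 = 0.1, 0.2, 0.6                  # illustrative linear GARCH(1,1) parameters
nobs, df = 5000, 5                          # Student-t(5) innovations (any zero-mean law works)

eps = rng.standard_t(df, size=nobs)
y, h = np.zeros(nobs), np.zeros(nobs)
h[0] = a0 / (1.0 - b1)
y[0] = eps[0] * h[0]
for t in range(1, nobs):
    h[t] = a0 + a1 * abs(y[t - 1]) + b1 * h[t - 1]
    y[t] = eps[t] * h[t]

# Q_tau(y_t | F_{t-1}) = (a0 + a1|y_{t-1}| + b1 h_{t-1}) Q_tau(eps_t) = theta_tau' z_t:
# the GARCH parameters are the same at every tau, and tau acts only through Q_tau(eps_t).
tau = 0.05
cond_q = t_dist.ppf(tau, df) * h
print("empirical coverage P(y_t < Q_tau):", np.mean(y < cond_q))   # close to tau = 0.05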
However, in practice the GARCH parameters may vary across quantile levels. The above framework would fail to capture this phenomenon, potentially resulting in poor forecast accuracy; see Section <ref> for empirical evidence. To address this limitation, a natural idea is to allow the GARCH parameters to be τ-dependent. Recently random-coefficient time series models built upon quantile regression have attracted growing attention. By assuming that the AR coefficients are functions of a standard uniform random variable, the quantile AR model in <cit.> allows for asymmetric dynamic structures across quantile levels; see, e.g., <cit.> and <cit.> for various empirical applications of this model. There have been many extensions of the quantile AR model, such as the quantile self-exciting threshold AR model <cit.>, the threshold quantile AR model <cit.>, and the quantile double AR model <cit.>. However, as far as we know, the approach of <cit.> has not been explored for GARCH-type models. To fill this gap, this paper proposes the quantile GARCH model, where the GARCH parameters are allowed to vary across quantile levels. Our main contributions are threefold. First, we develop a more flexible QR framework for conditional heteroscedastic time series, namely the quantile GARCH model, and establish a sufficient condition for its strict stationarity. As the volatility process of the GARCH model is latent and defined recursively, a direct extension of <cit.> would be infeasible. Instead, by exploiting the ARCH(∞) form <cit.> of the GARCH model, we introduce a random-coefficient GARCH process, where the GARCH parameters are functions of a standard uniform random variable. It can be written as a weighted sum of past information across all lags, where the weights are exponentially decaying random-coefficient functions. The proposed model can capture asymmetric dynamic structures and varying persistence across different quantile levels, while including the linear GARCH model as a special case. Secondly, for the proposed quantile GARCH model, we introduce the self-weighted QR estimator. The uniform convergence theory of the estimator, including uniform consistency and weak convergence, is established for the quantile process with respect to the quantile level τ. Note that the weak convergence of the unweighted QR estimator would require E(|y_t|^3)< ∞. By contrast, the self-weighted estimator only requires E(|y_t|^s)< ∞ for an arbitrarily small s>0 and thus is applicable to very heavy-tailed financial data. The major theoretical difficulty comes from the non-convex and non-differentiable objective function of self-weighted QR estimator. To overcome it, we adopt the bracketing method in <cit.> to derive the pointwise Bahadur representation of the self-weighted QR estimator for each fixed τ, hence the pointwise √(n)-consistency and asymptotic normality. Then, we strengthen the pointwise convergence to uniform convergence for all τ, by deriving the Bahadur representation uniformly in τ and proving the asymptotic tightness of its leading term. In addition, to check whether the persistence coefficient is τ-independent, we construct a Cramér-von Misses (CvM) test. Based on the weak convergence result, we obtain the limiting null distribution of the CvM test statistic and propose a feasible subsampling method to calculate its critical values. 
Finally, to remedy the possible inefficiency of the QR at high quantile levels due to data scarcity, we further introduce the self-weighted composite quantile regression (CQR) estimator. High quantile levels are of great interest in financial risk management. A common approach to extremal QR <cit.> is to estimate the quantiles at multiple intermediate levels and then extrapolate those at high levels <cit.>. We adopt such an approach for the quantile GARCH model. Since this model is similar to 's GARCH model, we can conveniently make use of the latter for the extrapolation under a chosen innovation distribution such that an explicit quantile function is available. We choose the Tukey-lambda distribution <cit.>, since it not only has an explicit quantile function, but is flexible in fitting heavy tails and approximating many common distributions such as the Gaussian distribution <cit.>. For the proposed weighted CQR estimator, we derive asymptotic properties under possible model misspecification and provide practical suggestions for computational issues. In addition, our simulation studies and empirical analysis indicate that the CQR outperforms the QR at high quantile levels. The rest of this paper is organized as follows. Section <ref> introduces the quantile GARCH(1,1) model and studies its strict stationarity. Section <ref> proposes the self-weighted QR estimator, together with the convergence theory for the corresponding quantile process and a CvM test for checking the constancy of the persistence coefficient across all quantile levels. Section <ref> introduces the CQR estimator and derives its asymptotic properties. Simulation studies and an empirical example are provided in Sections <ref> and <ref>, respectively. Conclusion and discussion are given in Section <ref>. A section on the generalization to the quantile GARCH(p,q) model, all technical proofs, and additional numerical results are given in the Appendix. Throughout the paper, →_d denotes the convergence in distribution, ⇝ denotes weak convergence, and o_p(1) denotes the convergence in probability. Moreover, · denotes the norm of a matrix or column vector, defined as A=√(tr(AA^'))=√(∑_i,ja_ij^2). In addition, ℓ^∞(𝒯) denotes the space of all uniformly bounded functions on 𝒯. The dataset in Section <ref> and computer programs for the analysis are available at https://github.com/Tansonghua-sufe/QGARCH. § PROPOSED QUANTILE GARCH(1,1) MODEL §.§ Motivation For succinctness, we restrict our attention to the quantile GARCH(1,1) model in the main paper, while the generalization to the quantile GARCH(p,q) model is detailed in the Appendix. To motivate the proposed model, first consider a strictly stationary GARCH(1,1) process in the form of x_t=η_t h_t^1/2, h_t=a_0+a_1x_t-1^2+b_1 h_t-1, where a_0>0, a_1≥ 0, b_1≥ 0, and the innovations {η_t} are i.i.d. random variables with mean zero and variance one. The ARCH(∞) representation <cit.> of model (<ref>) can be written as x_t = η_t(a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1x_t-j^2)^1/2. Then, the τth conditional quantile function of x_t in model (<ref>) is given by Q_τ(x_t|x_t-1, x_t-2, …)=Q_τ(η_t)(a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1x_t-j^2)^1/2, τ∈(0, 1), where Q_τ(η_t) denotes the τth quantile of η_t. The parameters a_0, a_1 and b_1, which are independent of the specified quantile level τ, control the scale of the conditional distribution of x_t, while the distribution of η_t determines its shape. 
As a result, if the GARCH coefficients are allowed to vary with τ and thus capable of altering both the scale and shape of the conditional distribution, we will have a more flexible model that can accommodate asymmetric dynamic structures across different quantile levels. However, note that (<ref>) is nonlinear in the coefficients of the x_t-j^2's. Consequently, a direct extension from (<ref>) to a varying-coefficient model is undesirable, since it will result in a nonlinear conditional quantile function whose estimation is computationally challenging. Alternatively, we will consider the linear GARCH(1,1) model in (<ref>), in which case (<ref>) is revised to y_t = ε_t(a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1|y_t-j|). Then, its corresponding conditional quantile function has the following linear form: Q_τ(y_t|y_t-1, y_t-2, …)=Q_τ(ε_t)(a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1|y_t-j|), τ∈(0, 1). We will adopt (<ref>) to formulate the proposed quantile GARCH model. As shown in <cit.>, the traditional GARCH(1,1) model in (<ref>) has an equivalent form of the linear GARCH(1,1) model in (<ref>) up to a one-to-one transformation T(·). Specifically, for any x_t following model (<ref>), if we take the transformation y_t=T(x_t)=x_t^2 sgn(x_t), then it can be shown that y_t satisfies (<ref>) with ε_t=T(η_t)=η_t^2 sgn(η_t). Note that E(ε_t) may not be zero although E(η_t)=0, and this will not affect our derivation since the conditional quantile function at (<ref>) depends on Q_τ(ε_t) rather than E(ε_t). §.§ The proposed model Let ℱ_t be the σ-field generated by {y_t, y_t-1, …}. To allow the GARCH parameters to vary with τ, we extend model (<ref>) to the following conditional quantile model: Q_τ(y_t|ℱ_t-1)=ω(τ)+α_1(τ)∑_j=1^∞[β_1(τ)]^j-1|y_t-j|, τ∈(0, 1), where ω: (0, 1)→ℝ and α_1: (0, 1)→ℝ are unknown monotonic increasing functions, and β_1: (0, 1)→ [0, 1) is a non-negative real-valued function. Note that both the scale and shape of the conditional distribution of y_t can be altered by the past information |y_t-j|. Assuming that the right hand side of (<ref>) is monotonic increasing in τ, then (<ref>) is equivalent to the following random-coefficient process: y_t=ω(U_t)+α_1(U_t)∑_j=1^∞[β_1(U_t)]^j-1|y_t-j|, where {U_t} is a sequence of i.i.d. standard uniform random variables; see a discussion on the monotonicity of Q_τ(y_t|ℱ_t-1) with respect to τ in Remark <ref>. We call model (<ref>) or (<ref>) the quantile GARCH(1,1) model. Similar to the GARCH model which requires the innovations to have mean zero, the quantile GARCH model also needs a location constraint. For the conditional quantile function (<ref>), we may impose that Q_0.5(y_t|ℱ_t-1)=0. Since β_1(·) is non-negative, condition (<ref>) holds if and only if ω(0.5)=α_1(0.5)=0. For the quantile GARCH(1,1) model, we impose condition (<ref>) throughout this paper. Recall that the functions ω(·) and α_1(·) are monotonic increasing and β_1(·) is non-negative. Under (<ref>) the quantile GARCH(1,1) model (<ref>) can be rewritten into y_t = sgn(U_t-0.5)|y_t|, |y_t| = |ω(U_t)|+∑_j=1^∞|α_1(U_t)|[β_1(U_t)]^j-1|y_t-j|, where y_t, U_t-0.5, ω(U_t) and α_1(U_t) have the same sign at each time t. For simplicity, denote ϕ_0, t=|ω(U_t)| and ϕ_j,t= |α_1(U_t)|[β_1(U_t)]^j-1 for j≥1. Then the quantile GARCH(1,1) model (<ref>) is equivalent to y_t = sgn(U_t-0.5)|y_t|, |y_t|= ϕ_0, t + ∑_j=1^∞ϕ_j,t|y_t-j|, j≥1. This enables us to establish a sufficient condition for the existence of a strictly stationary solution of the quantile GARCH(1,1) model in the following theorem. 
Suppose that condition (<ref>) holds. If there exists s∈(0,1] such that E(ϕ_0, t^s)<∞ and ∑_j=1^∞E(ϕ_j,t^s)<1, or s>1 such that E(ϕ_0, t^s)<∞ and ∑_j=1^∞[E(ϕ_j,t^s)]^1/s<1, then there exists a strictly stationary solution of the quantile GARCH(1,1) equations in (<ref>), and the process {y_t} defined by y_t=(U_t-0.5)(ϕ_0,t+∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1) is the unique strictly stationary and ℱ_t^U-measurable solution to (<ref>) such that E|y_t|^s<∞, where ℱ_t^U is the σ-field generated by {U_t, U_t-1, …}. Theorem <ref> gives a sufficient condition for the existence of a unique strictly stationary solution satisfying E|y_t|^s<∞. The proof relies on a method similar to that of Theorem 1 in <cit.>; see also <cit.> and <cit.>. As discussed in <cit.> and <cit.>, it is very difficult to derive a necessary and sufficient condition on random-coefficient functions to ensure the monotonicity of Q_τ(y_t|ℱ_t-1) in τ for the quantile GARCH(1,1) model in (<ref>). Given that ω(·) and α_1(·) are monotonic increasing, a sufficient condition for monotonicity of Q_τ(y_t|ℱ_t-1) is that the non-negative function β_1(·) is monotonic decreasing on (0, 0.5) and monotonic increasing on (0.5, 1). However, since Q_τ(y_t|ℱ_t-1) could be monotonic increasing even if β_1(·) does not satisfy the above constraint (e.g., if β_1(τ) is constant over τ), we refrain from imposing any monotonicity constraint on β_1(·) in order to avoid overly restricting the function space. When ω(U_t)=a_0ε_t/(1-b_1), α_1(U_t)=a_1ε_t, and β_1(U_t)=b_1, the quantile GARCH(1,1) model in (<ref>) reduces to the linear GARCH(1,1) model in (<ref>). Then, (<ref>) can be simply written as a_1^sE|ε_t|^s+b_1^s<1 for s∈(0,1], while (<ref>) reduces to a_1(E|ε_t|^s)^1/s+b_1<1 with E|ε_t|^s<∞ for s>1. In particular, when s=1, the stationarity condition becomes a_1+b_1<1, which is exactly the necessary and sufficient condition for the existence of a second-order stationary solution to the GARCH(1,1) model in (<ref>). If s=2, then the condition becomes a_1[E(η_t^4)]^1/2+b_1<1 with E(η_t^4)<∞, which is slightly stronger than the necessary and sufficient condition for the existence of a fourth-order stationary solution to the GARCH(1,1) model in (<ref>); see also <cit.> and <cit.>. There are numerous variants of the GARCH model, such as the exponential GARCH <cit.> and threshold GARCH <cit.> models. The quantile GARCH model in this paper can be extended along the lines of these variants. For example, to capture leverage effects in quantile dynamics, as the quantile counterpart of the threshold GARCH model <cit.>, the threshold quantile GARCH(1,1) model can be defined as Q_τ(y_t|ℱ_t-1)=ω(τ)+α_1^+(τ)∑_j=1^∞[β_1(τ)]^j-1y_t-j^+-α_1^-(τ)∑_j=1^∞[β_1(τ)]^j-1y_t-j^-, where ω: (0,1)→ℝ and α_1^+, α_1^-: (0,1)→ℝ are monotonic increasing, β_1: (0,1)→ [0,1), y_t-j^-=min{y_t-j, 0}, and y_t-j^+=max{y_t-j, 0}. We leave this interesting extension for future research. § QUANTILE REGRESSION §.§ Self-weighted estimation Let θ=(ω, α_1,β_1)^'∈Θ be the parameter vector of the quantile GARCH(1,1) model, which belongs to the parameter space Θ⊂ℝ^2× [0,1). From (<ref>), we can define the conditional quantile function below, q_t(θ) =ω + α_1∑_j=1^∞β_1^j-1|y_t-j|. Since the function q_t(θ) depends on observations in the infinite past, initial values are required in practice. In this paper, we set y_t=0 for t≤ 0, and denote the resulting function by q_t(θ), that is, q_t(θ)=ω + α_1∑_j=1^t-1β_1^j-1|y_t-j|. 
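Since the weights β_1^{j-1} are geometric, q̃_t(θ) can be evaluated for all t in a single O(n) pass; a minimal sketch (Python, with toy data and an arbitrary parameter value) is given below.

import numpy as np

def q_tilde(theta, y):
    # q_tilde_t(theta) = omega + alpha_1 * sum_{j=1}^{t-1} beta_1^{j-1} |y_{t-j}|, with y_t = 0 for t <= 0;
    # the geometric weights give the recursion s_t = |y_{t-1}| + beta_1 * s_{t-1}, s_1 = 0.
    omega, alpha1, beta1 = theta
    s = np.zeros(len(y))
    for t in range(1, len(y)):
        s[t] = abs(y[t - 1]) + beta1 * s[t - 1]
    return omega + alpha1 * s

y = np.array([0.3, -0.5, 1.2, -0.1, 0.8])   # toy data
print(q_tilde((0.05, 0.10, 0.60), y))       # q_tilde_1, ..., q_tilde_n at this theta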
We will prove that the effect of the initial values on the estimation and inference is asymptotically negligible. Let ψ_τ(x)=τ-I(x<0), where the indicator function I(·)=1 if the condition is true and 0 otherwise. For any τ∈𝒯⊂ (0,1), we propose the self-weighted quantile regression (QR) estimator as follows, θ_wn(τ) = (ω_wn(τ), α_1wn(τ), β_1wn(τ))^' =_θ∈Θ∑_t=1^n w_tρ_τ(y_t-q_t(θ)), where {w_t} are nonnegative random weights, and ρ_τ(x)=xψ_τ(x)=x[τ-I(x<0)] is the check function; see also <cit.>, <cit.>, and <cit.>. When w_t=1 for all t, (<ref>) reduces to the unweighted QR estimator. In this case, the consistency and asymptotic normality of the estimator would require E|y_t|<∞ and E|y_t|^3<∞, respectively. A sufficient condition for the existence of these moments is provided in Theorem <ref>. However, higher order moment conditions will make the stationarity region much narrower. Moreover, financial time series are usually heavy-tailed, so these moment conditions can be easily violated. By contrast, using the self-weighting approach <cit.>, we only need a finite fractional moment of |y_t|. Denote the true parameter vector by θ(τ)=(ω(τ), α_1(τ), β_1(τ))^'. Let F_t-1(·) and f_t-1(·) be the distribution and density functions of y_t conditional on ℱ_t-1, respectively. To establish the asymptotic properties of θ_wn(τ), we need the following assumptions. {y_t} is strictly stationary and ergodic. (i) The parameter space Θ is compact; (ii) θ(τ) is an interior point of Θ. With probability one, f_t-1(·) and its derivative function ḟ_t-1(·) are uniformly bounded, and f_t-1(·) is positive on the support {x:0<F_t-1(x)<1}. {w_t} is strictly stationary and ergodic, and w_t is nonnegative and measurable with respect to ℱ_t-1 such that E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for j≥ 1. The functions ω(·),α_1(·) and β_1(·) are Lipschitz continuous. Theorem <ref> provides a sufficient condition for Assumption <ref>. In Assumption <ref>, condition (i) is standard for the consistency of estimator, while condition (ii) is needed for the asymptotic normality; see also <cit.> and <cit.>. Assumption <ref> is commonly required for QR processes whose coefficients are functions of a uniform random variable; see Assumption A.3 in <cit.> for quantile AR models and Assumption 4 in <cit.> for quantile double AR models. Specifically, the positiveness and continuity of f_t-1(·) are required to show the uniform consistency of θ_wn(τ) in Theorem <ref>, while the boundedness of f_t-1(·) and ḟ_t-1(·) is needed for the weak convergence in Theorem <ref>. In the special case where the quantile GARCH(1,1) model in (<ref>) reduces to model (<ref>), Assumption <ref> can be simplified to conditions similar to Assumption (A2) in <cit.> and Assumption 4 in <cit.>. Assumption <ref> on the self-weights {w_t} is used to reduce the moment requirement on {y_t} in establishing asymptotic properties of θ_wn(τ); see more discussions on {w_t} in Remark <ref>. Assumption <ref> is required to establish the stochastic equicontinuity for weak convergence in Theorem <ref>. Let T_n(τ)=n^-1/2∑_t=1^nw_tq̇_t(θ(τ))ψ_τ(y_t-q_t(θ(τ))) and Σ_w(τ_1, τ_2)=(min{τ_1, τ_2}-τ_1τ_2)Ω_1w^-1(τ_1)Ω_0w(τ_1, τ_2)Ω_1w^-1(τ_2), where Ω_0w(τ_1, τ_2)=E[w_t^2q̇_t(θ(τ_1))q̇_t^'(θ(τ_2))] and Ω_1w(τ)=E[f_t-1(F_t-1^-1(τ))w_tq̇_t(θ(τ))q̇_t^'(θ(τ))]. Theorems <ref> and <ref> below establish the uniform consistency and weak convergence for the QR process θ_wn(·), respectively. For {y_t} generated by model (<ref>) with condition (<ref>), suppose E|y_t|^s<∞ for some s∈ (0,1). 
If Assumptions <ref>, <ref>(i), <ref> and <ref> hold, then sup_τ∈𝒯θ_wn(τ)-θ(τ)→_p 0 as n→∞. For {y_t} generated by model (<ref>) with condition (<ref>), suppose E|y_t|^s<∞ for some s∈ (0,1) and the covariance kernel Σ_w(τ_1, τ_2) is positive definite uniformly for τ_1=τ_2=τ∈𝒯. If Assumptions <ref>–<ref> hold, as n→∞, then we have √(n)(θ_wn(·)-θ(·)) = Ω_1w^-1(·) T_n(·) + o_p(1) ⇝𝔾(·) in (ℓ^∞(𝒯))^3, where the remainder term is uniform in τ∈𝒯, and 𝔾(·) is a zero mean Gaussian process with covariance kernel Σ_w(τ_1, τ_2). Owing to the self-weights, the above results hold for very heavy-tailed data with a finite fractional moment. The proof of Theorem <ref> is nontrivial. The first challenge comes from the non-convex and non-differentiable objective function of QR. Specifically, we need to prove the finite dimensional convergence of θ_wn(τ), i.e., the √(n)-consistency of θ_wn(τ) for each τ in the form of √(n)(θ_wn(τ)-θ(τ))=O_p(1). We overcome this challenge by adopting the bracketing method in <cit.>. The second challenge is to obtain the Bahadur representation uniformly in τ∈𝒯 and prove the asymptotic tightness of the leading term Ω_1w^-1(·) T_n(·) in this representation. The key to accomplishing this is to verify the stochastic equicontinuity for all remainder terms and T_n(·). In particular, when a fixed quantile level τ∈𝒯 is considered, by the martingale central limit theorem (CLT), we can obtain the asymptotic normality of θ_wn(τ) without the Lipschitz condition in Assumption <ref> as follows. For {y_t} generated by model (<ref>) with condition (<ref>), suppose E|y_t|^s<∞ for some s∈ (0,1) and Σ_w(τ,τ) is positive definite. If Assumptions <ref>–<ref> hold, then √(n)(θ_wn(τ)-θ(τ))→_d N( 0,Σ_w(τ,τ)) as n→∞. To estimate the asymptotic covariance Σ_w(τ,τ) in Corollary <ref>, we first estimate f_t-1(F_t-1^-1(τ)) in Ω_1w(τ) using the difference quotient method <cit.>. Let Q_τ(y_t|ℱ_t-1)=q_t(θ_wn(τ)) be the fitted τth conditional quantile. We employ the estimator f_t-1(F_t-1^-1(τ))=2ℓ[Q_τ+ℓ(y_t|ℱ_t-1)-Q_τ-ℓ(y_t|ℱ_t-1)]^-1, where ℓ is the bandwidth. As in <cit.>, we consider two commonly used bandwidths for ℓ as follows: ℓ_B=n^-1/5{4.5f_N^4(F_N^-1(τ))[2F_N^-2(τ)+1]^2}^1/5 and ℓ_HS=n^-1/3z_α^2/3{1.5f_N^2(F_N^-1(τ))2F_N^-2(τ)+1}^1/3, where f_N(·) and F_N(·) are the standard normal density and distribution functions, respectively, and z_α=F_N^-1(1-α/2) with α=0.05. Then the matrices Ω_0w(τ,τ) and Ω_1w(τ) can be approximated by the sample averages: Ω_0w(τ,τ) =1n∑_t=1^nw_t^2q̇_t(θ_wn(τ))q̇_t^'(θ_wn(τ)) and Ω_1w(τ) =1n∑_t=1^nf_t-1(F_t-1^-1(τ))w_tq̇_t(θ_wn(τ))q̇_t^'(θ_wn(τ)), where q̇_t(θ)=(1,∑_j=1^t-1β^j-1_1|y_t-j|,α_1∑_j=2^t-1(j-1)β^j-2_1|y_t-j|)^'. Consequently, a consistent estimator of Σ_w(τ,τ) can be constructed as Σ_w(τ,τ)=τ(1-τ)Ω_1w^-1(τ)Ω_0w(τ,τ)Ω_1w^-1(τ). The goal of the self-weights {w_t} is to relax the moment condition from E|y_t|^3<∞ to E|y_t|^s<∞ for s∈(0,1). If there is empirical evidence that E|y_t|^3<∞ holds, then we can simply let w_t=1 for all t. Otherwise, the self-weights are needed. There are many choices of random weights {w_t} that satisfy Assumption <ref>. Note that the main role of {w_t} in our technical proofs is to bound the term w_ty_t-j^δ for δ≥ 1 by O(|y_t-j|^s) for some s∈ (0,1). Following <cit.>, we may consider w_t=(∑_i=0^∞ e^-log^2(i+1){I[|y_t-i-1|≤ c]+c^-1|y_t-i-1|I[|y_t-i-1|>c]})^-3 for some given c>0, where y_s is set to zero for s≤ 0. In our simulation and empirical studies, we take c to be the 95% sample quantile of {y_t}_t=1^n. 
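A minimal implementation sketch (Python) of the self-weights in (<ref>) and of the self-weighted QR estimator in (<ref>) is given below; the truncation point of the infinite series and the optimiser starting values are practical choices made here only for illustration.

import numpy as np
from scipy.optimize import minimize

def self_weights(y, trunc=200):
    # w_t from (<ref>); the series is truncated at `trunc` lags, which is harmless since
    # exp(-log^2(i+1)) is below 1e-12 for i >= 200; c is the 95% sample quantile of {y_t}.
    n = len(y)
    c = np.quantile(y, 0.95)
    decay = np.exp(-np.log(np.arange(1.0, trunc + 2.0))**2)       # i = 0, ..., trunc
    w = np.empty(n)
    for t in range(n):
        lagged = np.array([abs(y[t - i - 1]) if t - i - 1 >= 0 else 0.0
                           for i in range(trunc + 1)])            # y_s = 0 for s <= 0
        terms = np.where(lagged <= c, 1.0, lagged / c)
        w[t] = np.sum(decay * terms) ** (-3)
    return w

def q_tilde(theta, y):                       # as in the sketch above
    omega, alpha1, beta1 = theta
    s = np.zeros(len(y))
    for t in range(1, len(y)):
        s[t] = abs(y[t - 1]) + beta1 * s[t - 1]
    return omega + alpha1 * s

def fit_wqr(y, tau, w):
    # self-weighted QR: minimise sum_t w_t rho_tau(y_t - q_tilde_t(theta)) over theta
    def loss(theta):
        u = y - q_tilde(theta, y)
        return np.sum(w * u * (tau - (u < 0)))
    res = minimize(loss, x0=np.array([0.01, 0.05, 0.5]), method="L-BFGS-B",
                   bounds=[(None, None), (None, None), (1e-4, 0.999)])
    return res.x                             # estimate of (omega(tau), alpha_1(tau), beta_1(tau))

# usage: theta_hat = fit_wqr(y, tau=0.05, w=self_weights(y))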
If we are only interested in estimating Q_τ(y_t|ℱ_t-1) at a specific quantile level τ, the L-BFGS-B algorithm <cit.> can be used to solve (<ref>) with the constraint β_1∈(0, 1). Then the estimate Q_τ(y_t|ℱ_t-1)=q_t(θ_wn(τ)) can be obtained for Q_τ(y_t|ℱ_t-1). As a more flexible approach, we may study multiple quantile levels simultaneously, say τ_1<τ_2<⋯<τ_K. However, the pointwise estimates {Q_τ_k(y_t|ℱ_t-1)}_k=1^K in practice may not be a monotonic increasing sequence even if Q_τ(y_t|ℱ_t-1) is monotonic increasing in τ. To overcome the quantile crossing problem, we adopt the easy-to-implement rearrangement method <cit.> to enforce the monotonicity of pointwise quantile estimates {Q_τ_k(y_t|ℱ_t-1)}_k=1^K. By Proposition 4 in <cit.>, it can be shown that the rearranged quantile curve has smaller estimation error than the original one whenever the latter is not monotone; see also the simulation experiment in Section <ref> of the Appendix. The proposed model in (<ref>) assumes that ω(·) and α_1(·) are monotonic increasing. In practice, we can apply the method in <cit.> to rearrange the estimates {ω_wn(τ_k)}_k=1^K and {α_1wn(τ_k)}_k=1^K to ensure the monotonicity of the curves across τ_k's. It is shown in <cit.> that the rearranged confidence intervals are monotonic and narrower than the original ones. §.§ Testing for constant persistence coefficient In this subsection, we present a test to determine if the persistence coefficient β_1(τ) is independent of the quantile level τ for τ∈𝒯⊂ (0,1). This problem can be cast as a more general hypothesis testing problem as follows: H_0: ∀τ∈𝒯, Rθ(τ)=r versus H_1: ∃τ∈𝒯, Rθ(τ)≠ r, where R is a predetermined row vector, and r ∈Γ denotes a parameter whose specific value is unknown, but it is known to be independent of τ. Here the parameter space Γ contains all values Rθ(τ) can take under the proposed model. Then, we can write the hypotheses for testing the constancy of β_1(τ) in the form of (<ref>) by setting R=(0,0,1) and r=β_1∈Γ = (0,1). In this case, the null hypothesis in (<ref>) means that β_1(τ) does not vary cross quantiles. For generality, we present the result for the general problem in (<ref>). Under H_0, we can estimate the unknown r using r=∫_𝒯Rθ_wn(τ)dτ. Define the inference process v_n(τ)=Rθ_wn(τ)-r=R[θ_wn(τ)-∫_𝒯θ_wn(τ)dτ]. To test H_0, we construct the Cramér-von Misses (CvM) test statistic as follows: S_n=n∫_𝒯v_n^2(τ)dτ. Let σ(τ_1,τ_2)=R[Σ_w(τ_1,τ_2)+∫_𝒯∫_𝒯Σ_w(τ,τ^')dτ dτ^'-∫_𝒯Σ_w(τ_1,τ)dτ-∫_𝒯Σ_w(τ,τ_2)dτ]R^'. Denote v_0(τ)=R[𝔾(τ)-∫_𝒯𝔾(τ)dτ] with 𝔾(τ) defined in Theorem <ref>. Suppose the conditions of Theorem <ref> hold. Under H_0, then we have S_n→_d S≡∫_𝒯v_0^2(τ)dτ as n→∞. If the covariance function of v_0(·) is nondegenerate, that is, σ(τ,τ)>0 uniformly in τ∈𝒯, then Pr(S_n>c_α)→Pr(S>c_α)=α, where the critical value c_α is chosen such that Pr(S>c_α)=α. Corollary <ref> indicates that we can reject H_0 if S_n>c_α at the significance level α. In practice, we can use a grid of values 𝒯_n in place of 𝒯. Similar to Corollary 3 in <cit.>, we can verify that Corollary <ref> still holds for the discretization if the largest cell size of 𝒯_n, denoted as δ_n, satisfies δ_n→ 0 as n→∞. Note that the CvM test in (<ref>) is not asymptotically distribution-free due to the estimation of r, which is commonly known as the Durbin problem <cit.>. This complicates the approximation of the limiting null distribution of S_n and the resulting critical value c_α. 
We suggest approximating the limiting null distribution by subsampling the linear approximation of the inference process v_n(τ); see also <cit.>. This approach is computationally efficient as it avoids the repeated estimation over the resampling steps for many values of τ. Specifically, by Theorem <ref>, under H_0 we have √(n)v_n(τ)= 1√(n)∑_t=1^n z_t(τ) + o_p(1), where z_t(τ)=R[m_t(τ)-∫_𝒯m_t(τ)dτ], with m_t(τ)=w_tΩ_1w^-1(τ)q̇_t(θ(τ))ψ_τ(y_t-q_t(θ(τ))). By the consistency of θ_wn(τ) in Theorem <ref>, we can estimate z_t(τ) using z_t(τ)=R[m_t(τ)-∫_𝒯m_t(τ)dτ], where m_t(τ)=w_tΩ_1w^-1(τ)q̇_t(θ_wn(τ))ψ_τ(y_t-q_t(θ_wn(τ))). Thus, a sample of estimated scores {z_t(τ), τ∈𝒯, 1≤ t≤ n} is obtained, where n is the sample size. Then a subsampling procedure is conducted as follows. Given a block size b_n, we consider L_n=n-b_n+1 overlapping blocks of the sample, indexed by B_k={k, k+1,…,k+b_n-1} for k=1,…, L_n. For each block B_k, we compute the inference process v_k,b_n(τ)=b_n^-1∑_t∈ B_kz_t(τ) and define S_k,b_n=b_n∫_𝒯v_k,b_n^2(τ)dτ. Then the critical value c_α can be calculated as the (1-α)th empirical quantile of {S_k,b_n}_k=1^L_n. To establish the asymptotic validity of the subsampling procedure above, we can use a method similar to the proof of Theorem 5 in <cit.>. This is possible under the conditions of Theorem <ref> and an α-mixing condition on y_t, provided that L_n→∞, b_n→∞, and b_n/n→ 0 as n→∞. However, we leave the rigorous proof for future research. Following <cit.>, we consider b_n=⌊ cn^1/2⌋ with a positive constant c, where ⌊ x ⌋ stands for the integer part of x. Our simulation study shows that the CvM test has reasonable size and power when c=0.5,1 or 2. § COMPOSITE QUANTILE REGRESSION §.§ Self-weighted estimation It is well known that the QR can be unstable when τ is very close to zero or one due to data scarcity <cit.>. However, estimating high conditional quantiles is of great interest in financial risk management. As a remedy, this section proposes the composite quantile regression (CQR). To estimate the conditional quantile at a target level τ_0 ∈𝒯⊂ (0,0.01]∪[0.99,1), the main idea is to conduct extrapolation based on estimation results of intermediate quantile levels at the one-sided neighbourhood of τ_0. Suppose that {y_t} follows the quantile GARCH(1,1) model in (<ref>). Note that the conditional quantile function Q_τ(y_t|ℱ_t-1) cannot be extrapolated directly due to the unknown nonparametric coefficient functions. To develop a feasible and easy-to-use extrapolation approach, we leverage the close connection between the linear GARCH(1,1) process in (<ref>) and quantile GARCH(1,1) process in (<ref>). First, we approximate y_t in (<ref>) by the linear GARCH(1,1) model in (<ref>). Then, the τth conditional quantile of y_t in (<ref>) can be approximated by that of the linear GARCH(1,1) model in (<ref>): Q_τ(y_t|ℱ_t-1)≈ Q_τ(ε_t)(a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1|y_t-j|), where ε_t's are the i.i.d. innovations of the linear GARCH(1,1) model. If the quantile function Q_τ(ε_t) has an explicit parametric form, then (<ref>) will be fully parametric and hence can be easily used for extrapolation of conditional quantiles of y_t at high levels. While this parametric approximation will induce a bias, the gain is greater estimation efficiency at high quantile levels; see more discussions on the bias-variance trade-off in Section <ref>. Next we need a suitable distribution of ε_t such that the tail behavior can be flexibly captured. 
There are many choices such that Q_τ(ε_t) has an explicit form, including distributions in lambda and Burr families <cit.>. We choose the Tukey-lambda distribution since it provides a wide range of shapes. It can not only approximate Gaussian and Logistic distributions but also fit heavy Pareto tails well. Given that ε_t follows the Tukey-lambda distribution with shape parameter λ≠ 0 <cit.>, Q_τ(ε_t) has a simple explicit form given by Q_τ(λ) := Q_τ(ε_t;λ) = τ^λ-(1-τ)^λλ. Combining (<ref>) and (<ref>), we can approximate the conditional quantile Q_τ(y_t|ℱ_t-1) by q_t,τ(φ) = Q_τ(λ) (a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1|y_t-j|):= Q_τ(λ)h_t(ϕ), where φ=(ϕ^', λ)^'=(a_0, a_1, b_1, λ)^' is the parameter vector of linear GARCH(1,1) model with ε_t following the Tukey-lambda distribution. Note that Q_0.5(λ)=0 for any λ. Thus, q_t,0.5(φ)=0 holds for any φ, i.e., the location constraint on Q_τ(y_t|ℱ_t-1) in (<ref>) is satisfied. Since q_t,τ(φ) depends on unobservable values of y_t in the infinite past, in practice we initialize y_t=0 for t≤ 0 and define its feasible counterpart as q_t,τ(φ) = Q_τ(λ)(a_0/1-b_1+a_1∑_j=1^t-1 b_1^j-1|y_t-j|):=Q_τ(λ)h_t(ϕ). The initialization effect is asymptotically negligible, as we verify in our technical proofs. Note that q_t,τ(φ) is fully parametric. Since φ is independent of τ, we can approximate the nonparametric function Q_τ_0(y_t|ℱ_t-1) by the parametric function q_t,τ_0(φ), where we replace φ with an estimator obtained by fitting the above Tukey-lambda linear GARCH(1,1) model at lower quantile levels. Let Φ⊂ (0,∞)×[0,∞)× [0,1)×Λ be the parameter space of φ, where Λ=(-∞,0)∪ (0,∞) is the parameter space of λ. To estimate φ locally for the target level τ_0, we utilize the information at lower quantile levels in the one-sided neighborhood of τ_0, namely 𝒯_h=[τ_0,τ_0+h]⊂ (0,0.5) if τ_0 is close to zero and 𝒯_h=[τ_0-h,τ_0]⊂ (0.5,1) if τ_0 is close to one, where h>0 is a fixed bandwidth; see Section <ref> for discussions on the selection of bandwidth h. If Q_τ(y_t|ℱ_t-1) is well approximated by q_t,τ(φ) for τ∈𝒯_h, then we can estimate φ by the weighted CQR as follows: φ̌_wn = (ϕ̌_wn^', λ̌_wn)^' =_φ∈Φ∑_t=1^n ∑_k=1^K w_tρ_τ_k(y_t-q_t,τ_k(φ)), where {w_t} are the self-weights defined as in (<ref>), and τ_1 < ⋯ < τ_K are fixed quantile levels with τ_k ∈𝒯_h for all 1≤ k≤ K; see also <cit.>. In practice, equally spaced levels are typically used. That is, τ_k=τ_0+h(k-1)/(K-1) if τ_0 is close to zero, whereas τ_k=τ_0-h(k-1)/(K-1) if τ_0 is close to one. As a result, the conditional quantile Q_τ_0(y_t|ℱ_t-1) can be approximated by q_t,τ_0(φ̌_wn). §.§ Asymptotic properties Note that the approximate conditional quantile function q_t,τ(φ) can be rewritten using the true conditional quantile function q_t(·) as follows: q_t,τ(φ) = a_0Q_τ(λ)1-b_1+a_1Q_τ(λ)∑_j=1^∞ b_1^j-1|y_t-j| := q_t(θ_τ^*), where θ_τ^*=g_τ(φ)=(a_0Q_τ(λ)/(1-b_1),a_1Q_τ(λ),b_1)^', and g_τ: ℝ^4→ℝ^3 is a measurable function such that q_t,τ=q_t∘ g_τ. Let θ̌_wn^*(τ):=g_τ(φ̌_wn) be the transformed CQR estimator. In view of (<ref>) and the fact that Q_τ(y_t|ℱ_t-1)=q_t(θ(τ)), θ̌_wn^*(τ) can be used as an estimator of θ(τ); see (<ref>) and the definition of q_t(·) in Section <ref>. The pseudo-true parameter vector φ_0^*=(ϕ_0^', λ_0)^'=(a_00, a_10, b_10, λ_0)^' is defined as φ_0^*=_φ∈Φ∑_k=1^K E[w_tρ_τ_k(y_t-q_t,τ_k(φ))], τ_k∈𝒯_h. In other words, for τ∈𝒯_h, the best approximation of the nonparametric function Q_τ(y_t|ℱ_t-1)=q_t(θ(τ)) via the fully parametric function q_t,τ(·) is given by q_t,τ(φ_0^*)=q_t(g_τ(φ_0^*)). 
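In implementation terms, the weighted CQR estimator φ̌_wn defined above and the transformation g_τ(·) are simple to compute. The sketch below assumes τ_0 is close to one (so that τ_k=τ_0-h(k-1)/(K-1)), truncates h_t(φ) at the observed sample as in the feasible version above, and applies L-BFGS-B to the non-smooth composite check loss; the starting values and function names are illustrative choices only.

import numpy as np
from scipy.optimize import minimize

def tukey_lambda_quantile(tau, lam):
    # Q_tau(lambda) = (tau^lambda - (1 - tau)^lambda) / lambda, for lambda != 0
    return (tau ** lam - (1 - tau) ** lam) / lam

def h_trunc(phi, y_abs):
    # truncated h_t(phi) = a0/(1 - b1) + a1 * sum_{j=1}^{t-1} b1^{j-1}|y_{t-j}|, t = 1, ..., n
    a0, a1, b1 = phi
    n = len(y_abs)
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = b1 * s[t - 1] + y_abs[t - 1]       # recursive update of the geometric sum
    return a0 / (1 - b1) + a1 * s

def fit_cqr(y, w, tau0, h=0.1, K=19, start=(0.05, 0.10, 0.80, -0.20)):
    # self-weighted CQR over K levels in the one-sided neighbourhood of tau0 (tau0 near one);
    # lambda = 0 is excluded in the model, so in practice one may also bound lambda away from 0
    taus = tau0 - h * np.arange(K) / (K - 1)
    y_abs = np.abs(y)

    def objective(varphi):
        a0, a1, b1, lam = varphi
        ht = h_trunc((a0, a1, b1), y_abs)
        loss = 0.0
        for tau in taus:
            u = y - tukey_lambda_quantile(tau, lam) * ht
            loss += np.sum(w * u * (tau - (u < 0)))   # check loss rho_tau
        return loss

    res = minimize(objective, x0=np.asarray(start), method="L-BFGS-B",
                   bounds=[(1e-6, None), (0.0, None), (1e-6, 1 - 1e-6), (None, None)])
    return res.x                                      # (a0, a1, b1, lambda)

def g_transform(varphi, tau):
    # transformed estimator g_tau(varphi) = (a0*Q_tau(lam)/(1-b1), a1*Q_tau(lam), b1)'
    a0, a1, b1, lam = varphi
    q = tukey_lambda_quantile(tau, lam)
    return np.array([a0 * q / (1 - b1), a1 * q, b1])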
In general, Q_τ(y_t|ℱ_t-1) may be misspecified by q_t,τ(φ_0^*), and θ(τ)=g_τ(φ_0^*) may not hold for all τ. Thus, asymptotic properties of the CQR estimator φ̌_wn and its transformation θ̌_wn^*(τ)=g_τ(φ̌_wn) should be established under possible model misspecification. The following assumptions will be required. {y_t} is a strictly stationary and α-mixing time series with the mixing coefficient α(n) satisfying ∑_n≥ 1[α(n)]^1-2/δ<∞ for some δ>2. (i) The parameter space Φ is compact and φ_0^* is unique; (ii) φ_0^* is an interior point of Φ. Note that Assumption <ref> is insufficient for the asymptotic normality of φ̌_wn under model misspecification, since E[ψ_τ(y_t-q_t,τ(φ_0^*))|ℱ_t-1]≠ 0 in this case, which renders the martingale CLT no longer applicable. Instead, we rely on Assumption <ref> to ensure the ergodicity of {y_t} and enable the use of the CLT for α-mixing sequences; see <cit.> and more discussions in Remark <ref>. Assumption <ref> is analogous to Assumption <ref>, which is standard in the literature on GARCH models <cit.>. If there is no model misspecification, i.e. Q_τ(y_t|ℱ_t-1) is correctly specified by q_t,τ(φ_0^*) for all τ∈𝒯_h, then the uniqueness of φ_0^* can be guaranteed for K≥ 3 and λ<1. Let q̇_t,τ(φ) and q̈_t,τ(φ) be the first and second derivatives of q_t,τ(φ) with respect to φ, respectively, given by q̇_t,τ(φ)=(Q_τ(λ)ḣ_t^'(ϕ),Q̇_τ(λ)h_t(ϕ))^' and q̈_t,τ(φ)=[ Q_τ(λ)ḧ_t(ϕ) Q̇_τ(λ)ḣ_t(ϕ); Q̇_τ(λ)ḣ_t^'(ϕ) Q̈_τ(λ)h_t(ϕ) ], where Q̇_τ(λ) and ḣ_t(ϕ) (or Q̈_τ(λ) and ḧ_t(ϕ)) are the first (or second) derivatives of Q_τ(λ) and h_t(ϕ), respectively. Denote X_t=∑_k=1^K w_tq̇_t,τ_k(φ_0^*)ψ_τ_k(y_t-q_t,τ_k(φ_0^*)) and Ω_0w^*=E( X_t X_t^')+n^-1∑_t≠ s^nE( X_t X_s^'). Define the matrices Ω_11^*=∑_k=1^KE[w_tq̈_t,τ_k(φ_0^*)ψ_τ_k(y_t-q_t,τ_k(φ_0^*))] and Ω_12^*=∑_k=1^KE[w_tf_t-1(q_t,τ_k(φ_0^*))q̇_t,τ_k(φ_0^*)q̇_t,τ_k^'(φ_0^*)]. Let Ω_1w^*=Ω_12^*-Ω_11^* and Σ_w^*=Ω_1w^*-1Ω_0w^*Ω_1w^*-1. For {y_t} generated by model (<ref>) with condition (<ref>), suppose E|y_t|^s<∞ for some s∈ (0,1) and Σ_w^* is positive definite. If Assumptions <ref>, <ref>, <ref>, <ref>(i) hold, then as n→∞, we have (i) φ̌_wn→_p φ_0^*. Moreover, if Assumption <ref>(ii) further holds, then (ii) √(n)(φ̌_wn-φ_0^*)→_d N( 0,Σ_w^*); and (iii) √(n)(θ̌_wn^*(τ)-θ(τ)-B(τ))→_d N( 0,g_τ(φ_0^*)Σ_w^*g_τ^'(φ_0^*)), where B(τ)=g_τ(φ_0^*)-θ(τ) is a systematic bias. Theorem <ref>(iii) reveals that θ̌_wn^*(τ) is a biased estimator of θ(τ) if g_τ(φ_0^*)≠θ(τ) i.e., when Q_τ(y_t|ℱ_t-1) is misspecified by q_t,τ(φ_0^*). Moreover, the systematic bias B(τ) depends on the bandwidth h, which balances the bias and variance of θ̌_wn^*(τ); see Section <ref> for details. However, at the cost of introducing the systematic bias, the proposed CQR method can greatly improve the estimation efficiency at high quantile levels, as it overcomes the inefficiency due to data scarcity at tails. Similar to Theorem <ref>, we employ the bracketing method in <cit.> to tackle the non-convexity and non-differentiability of the objective function. However, due to the possible model misspecification, the mixing CLT is used instead of the martingale CLT; see Assumption <ref>. We will discuss the estimation of the covariance matrix Σ_w^* in the Appendix. The proof of the mixing property in Assumption <ref> is challenging. For a stationary Markovian process, a common approach to proving that it is geometrically β-mixing and thus α-mixing is to establish its geometric ergodicity <cit.>. 
Note that the proposed quantile GARCH process can be regarded as a random-coefficient ARCH(∞) process. However, ARCH(∞) processes are not Markovian in general <cit.>. Thus, the above approach is not feasible. <cit.> provides an alternative method to establish mixing properties. By deriving explicit bounds for mixing coefficients using conditional densities of the process, they obtain mixing properties of stationary ARCH(∞) processes and show that the bound on the mixing rate depends on the decay rate of ARCH(∞) parameters. This method potentially can be applied to the quantile GARCH process. However, it is challenging to derive the conditional density of y_k+s given {…,y_0,U_1,…,U_k-1,y_k,…,y_k+s-1} due to the random functional coefficients driven by U_t. Thus, we leave this for future research. §.§ Selection of the bandwidth h As shown in Theorem <ref>(iii), the bandwidth h plays an important role in balancing the bias and efficiency of the estimator θ̌_wn^*(τ). In the extreme case that h=0, (<ref>) will become a weighted quantile regression at the fixed quantile level τ_0, and θ̌_wn^*(τ_0) will be equivalent to the QR estimator θ_wn(τ_0). Then we have g_τ_0(φ_0^*)=θ(τ_0) and B(τ_0)=0. Although B(τ) does not have an explicit form with respect to h, our simulation studies show that a larger h usually leads to larger biases but smaller variances of θ̌_wn^*(τ) when the true model is misspecified; see Section <ref> for details. In practice, we can treat h as a hyperparameter and search for h that achieves the best forecasting performance from a grid of values via cross-validation. Specifically, we can divide the dataset into training and validation sets, and choose the value of h that minimizes the check loss in the validation set for the target quantile level τ_0: h^opt=_h∈(0,d)∑_t=n_0+1^n_0+n_1ρ_τ(y_t-q_t,τ_0(φ̌_wn(h))), where n_0 and n_1 are the sample sizes of the training and validation sets, respectively, φ̌_wn(h) is the CQR estimator calculated by (<ref>) with bandwidth h, and d>0 determines the range of the grid search. Usually we take d to be a small value such as 0.1 to avoid large biases. The chosen bandwidth h^opt will be used to conduct CQR for rolling forecasting of the conditional quantile at time t=n_0+n_1+i for any i≥ 1. § SIMULATION STUDIES §.§ Data generating processes This section conducts simulation experiments to examine the finite sample performance of the proposed estimators and CvM test. The data generating process (DGP) is y_t=ω(U_t)+α_1(U_t)∑_j=1^∞[β_1(U_t)]^j-1|y_t-j|, where {U_t} are i.i.d. standard uniform random variables. For evaluation of the QR and CQR estimators, we consider two sets of coefficient functions as follows: ω(τ)=0.1F^-1(τ), α_1(τ)=0.1F^-1(τ), β_1(τ)=0.8, and ω(τ)=0.1F^-1(τ), α_1(τ)=τ-0.5+0.1F^-1(τ), β_1(τ)=0.3+0.6|τ-0.5|, where F(·) is the distribution function of the standard normal distribution or Tukey-lambda distribution in (<ref>) with the shape parameter λ = -0.2, denoted by F_N(·) and F_T(·) respectively. Note that F_T has heavy Pareto tails and does not have the finite fifth moment <cit.>. For coefficient functions in (<ref>), the strict stationarity condition (<ref>) with s=1 in Theorem <ref> can be verified for F=F_N or F_T by direct calculation or simulating 10^5 random numbers for U_t, respectively. Note that the DGP with coefficient functions in (<ref>) is simply the following GARCH(1,1) process: y_t = ε_t(0.1+0.1∑_j=1^∞ 0.8^j-1|y_t-j|), where ε_t follows the distribution F. 
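For reference, this DGP can be simulated directly by drawing U_t and truncating the infinite sum at the simulated past; a minimal sketch, in which the burn-in length, the seed and the function names are illustrative choices:

import numpy as np
from scipy.stats import norm

def simulate_quantile_garch(n, coef, burn=500, seed=None):
    # y_t = omega(U_t) + alpha1(U_t) * sum_{j=1}^{t-1} beta1(U_t)^{j-1} |y_{t-j}|,
    # with the infinite sum truncated at the simulated past; `coef` maps u to
    # (omega(u), alpha1(u), beta1(u))
    rng = np.random.default_rng(seed)
    m = n + burn
    u = rng.uniform(size=m)
    y = np.zeros(m)
    for t in range(1, m):
        om, a1, b1 = coef(u[t])
        lags = np.abs(y[t - 1::-1])               # |y_{t-1}|, |y_{t-2}|, ..., |y_1|
        y[t] = om + a1 * np.sum(b1 ** np.arange(t) * lags)
    return y[burn:]

# e.g. the design omega(tau) = alpha1(tau) = 0.1*F^{-1}(tau), beta1(tau) = 0.8 with F = F_N
coef_const_beta = lambda u: (0.1 * norm.ppf(u), 0.1 * norm.ppf(u), 0.8)
y = simulate_quantile_garch(2000, coef_const_beta, seed=1)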
As a result of this representation, the model is correctly specified for the CQR under (<ref>) with F being the Tukey-lambda distribution (i.e., F=F_T), whereas it is misspecified under all other settings. Two sample sizes, n=1000 and 2000, are considered, and 1000 replications are generated for each sample size. In addition, for the CvM test in (<ref>), we consider the following coefficient functions: ω(τ)=0.1F^-1(τ), α_1(τ)=0.1F^-1(τ), β_1(τ)=0.3+d(τ-0.5)^2, where d=0, 1 or 1.6, and all other settings are the same as those for (<ref>). We can similarly verify that the strict stationarity condition holds with s=1 under this setting. Note that the case of d=0 corresponds to the size of the test, whereas the case of d=1 or 1.6 corresponds to the power. The computation of the QR and CQR estimators and the CvM test involves an infinite sum. For computational efficiency, we adopt an exact algorithm based on the fast Fourier transform instead of the standard linear convolution algorithm; see <cit.> for details.

§.§ Self-weighted QR estimator

The first experiment focuses on the self-weighted QR estimator θ_wn(τ) in Section <ref>. For the estimation of the asymptotic standard deviation (ASD) of θ_wn(τ), we employ the two bandwidths (<ref>). The resulting ASDs with respect to bandwidths ℓ_B and ℓ_HS are denoted by ASD_1 and ASD_2, respectively. Tables <ref> and <ref> display the biases, empirical standard deviations (ESDs) and ASDs of θ_wn(τ) at quantile level τ=0.5%, 1% or 5% for (<ref>) and (<ref>) with F being the standard normal distribution F_N or Tukey-lambda distribution F_T, respectively. We have the following findings. Firstly, as the sample size increases, most of the biases, ESDs and ASDs decrease, and the ESDs get closer to the corresponding ASDs. Secondly, the ASDs calculated using ℓ_HS are marginally smaller than those using ℓ_B and closer to the ESDs. Thus, we use the bandwidth ℓ_HS in the following for its more stable performance. Thirdly, when τ is closer to zero, the performance of θ_wn(τ) gets worse with larger biases, ESDs and ASDs, which indicates that the self-weighted QR estimator tends to deteriorate as the target quantile becomes more extreme. The above results are obtained based on the self-weights in (<ref>) with c being the 95% sample quantile of {y_t}_t=1^n. We have also considered the 90% sample quantile for the value of c, and the above findings are unchanged. In addition, simulation results for the unweighted QR estimator are given in the Appendix. It is shown that the unweighted estimator is less efficient than the self-weighted one when E|y_t|^3=∞. 
Hence, we recommend using b_n=⌊ n^1/2⌋ since it leads to reasonable size and power. For a fixed 𝒯, we have also considered other settings for 𝒯_n, and the above findings are unchanged. This indicates that the CvM test is not sensitive to the choice of the grid. §.§ Self-weighted CQR estimator In the third experiment, we examine the performance of the proposed CQR method in Section <ref> via the transformed estimator g_τ(φ̌_wn)=θ̌_wn^*(τ)=(ω̌_wn^*(τ),α̌_1wn^*(τ),β̌_1wn^*(τ))^'. The DGP is preserved from the first experiment. To obtain the weighted CQR estimator φ̌_wn in (<ref>), we let 𝒯_h={τ_k: τ_k=τ_0+h(k-1)/(K-1)}_k=1^K, where K=19, τ_0=0.5%,1% or 5% is the target quantile level, and h>0 is the bandwidth. To investigate the influence of bandwidth h on the CQR, we obtain the estimator g_τ(φ̌_wn) for each h∈{0.01,0.02,…,0.10} at quantile level τ=0.5%,1% or 5% for the DGP in (<ref>) with (<ref>) or (<ref>), F=F_T, and sample size n=2000. Figures <ref> and <ref> illustrate the empirical squared bias, variance and mean squared error (MSE) of g_τ(φ̌_wn) versus h for coefficient functions in (<ref>) and (<ref>), respectively. Note that the model is correctly specified under coefficient functions in (<ref>) with F=F_T and misspecified under (<ref>) with F=F_T. Figure <ref> shows that the squared bias is close to zero, which is because the model is correctly specified. Meanwhile, as h increases, the variance and MSE get smaller, indicating the efficiency gain from using more data for the estimation. On the other hand, Figure <ref> shows that a larger h leads to larger biases but smaller variances under model misspecification. Consequently, as h increases, the MSE first decreases and then increases. Moreover, it can be observed that the CQR estimator can have much smaller MSE than the QR estimator (i.e., the case with h=0) especially for the high quantiles. This corroborates the usefulness of the CQR for high quantile levels. Next we verify the asymptotic results of the CQR estimator by focusing on a fixed bandwidth h=0.1. The ASD of g_τ(φ̌_wn) is calculated based on ġ_τ(φ̌_wn)Σ̌_w^*ġ_τ^'(φ̌_wn), where Σ̌_w^* is obtained as in Section <ref> of the Appendix. Specifically, to estimate Ω_1w^*, the bandwidth ℓ_k for quantile level τ_k is set to ℓ_HS defined in (<ref>) with τ replaced by τ_k. To obtain the kernel estimator Ω̌_0w^* in (<ref>), we consider the QS kernel in (<ref>) with the automatic bandwidth B_n=1.3221[nα(2)]^1/5, 0.1B_n or 10B_n for B_n, where the latter two choices of B_n correspond to under- or over-smoothing in comparison to B_n, respectively. The resulting ASDs with respect to B_n, 0.1B_n and 10B_n are denoted as ASD_a, ASD_b and ASD_c, respectively. Tables <ref> and <ref> report the biases, ESDs and ASDs of g_τ(φ̌_wn) for the DGP with coefficient functions in (<ref>) and (<ref>), respectively. The quantile levels τ=0.5%,1% and 5% and distributions F=F_N and F_T are considered. We first examine the results in Table <ref>, which corresponds to the DGP with (<ref>) and covers two scenarios: correctly specified (when F=F_T) and missspecified (when F=F_N) models. For both scenarios, we have three main findings as follows. Firstly, as the sample size increases, most of the biases, ESDs and ASDs become smaller, and the ESDs get closer to the corresponding ASDs. Secondly, as τ approaches zero, the biases, ESDs and ASDs of ω̌_wn^*(τ) and α̌_1wn^*(τ) get larger, while that of β̌_1wn^*(τ) is almost unchanged. 
This is expected since ω̌_wn^*(τ) and α̌_1wn^*(τ) are τ-dependent, and their true values have larger absolute values as τ goes to zero. However, β̌_1wn^*(τ)=b̌_1wn is independent of τ. Thirdly, the results of ASD_a, ASD_b and ASD_c are very similar, which suggests that the kernel estimator in (<ref>) is insensitive to the selection of bandwidth B_n. It is also interesting to compare the results under the two scenarios in Table <ref>. However, it is worth noting that the true values of ω(τ) and α_1(τ) for the correctly specified model (i.e., when F=F_T) are larger than those for the misspecified model (i.e., when F=F_N) in absolute value. As a result, the absolute biases, ESDs and ASDs of ω_wn(τ) and α_1wn(τ) are much smaller for F_N than that for F_T in Table <ref>. On the other hand, note that the true values of β_1(τ) are the same for F_N and F_T. Thus, the comparison of the results for β_1wn(τ) under F_N and F_T can directly reveal the effect of model misspecification. Indeed, Table <ref> shows that the absolute biases, ESDs and ASDs of β_1wn(τ) for F_T are much smaller than those for F_N. This confirms that the CQR performs better under correct specification (i.e., F=F_T) than misspecification (i.e., F=F_N). Note that the above misspecification is only due to the misspecified innovation distribution F, whereas the coefficient function (i.e., model structure) is correctly specified via (<ref>). By contrast, the DGP with (<ref>) have a misspecified model structure, which is more severe than the former. As a result, Table <ref> shares the three main findings from Table <ref> for the ESDs and ASDs but not for the biases. In particular, most biases do not decrease as the sample size increases. This is consistent with Theorem <ref>(iii), which shows that g_τ(φ̌_wn) is in general a biased estimator of θ(τ) under model misspecification. It also indicates that the misspecification in the model structure is systematic and has greater impact on the bias than that in the innovation distribution F. We have also considered other choices of the number of quantile levels K and the kernel function K(·). The above findings are unchanged. To save space, these results are omitted. §.§ Comparison between QR and CQR estimators We aim to compare the in-sample and out-of-sample performance of QR and CQR in predicting conditional quantiles. The self-weights {w_t} in (<ref>) are employed for both QR and CQR, and the set 𝒯_h with K=19 and h=0.1 is used for CQR as in the third experiment. For evaluation of the prediction performance, we use q_t(θ(τ)) as the true value of the conditional quantile Q_τ(y_t|ℱ_t-1). Based on the QR estimator θ_wn(τ) and the transformed CQR estimator g_τ(φ̌_wn), Q_τ(y_t|ℱ_t-1) can be predicted by q_t(θ_wn(τ)) and q_t(g_τ(φ̌_wn)), respectively. Note that estimates of Q_τ(y_t|ℱ_t-1) for t=1,…,n are in-sample predictions, and that of Q_τ(y_n+1|ℱ_n) is the out-of-sample forecast. 
We measure the in-sample and out-of-sample prediction performance separately, using the biases and RMSEs of conditional quantile estimates by averaging individual values over all time points and replications as follows: Bias_In(θ_τ) =1Mn∑_k=1^M∑_t=1^n[q_t^(k)(θ_τ)-q_t^(k)(θ(τ))], Bias_Out(θ_τ) =1M∑_k=1^M[q_n+1^(k)(θ_τ)-q_n+1^(k)(θ(τ))], RMSE_In(θ_τ) ={1Mn∑_k=1^M∑_t=1^n[q_t^(k)(θ_τ)-q_t^(k)(θ(τ))]^2}^1/2, RMSE_Out(θ_τ) ={1M∑_k=1^M[q_n+1^(k)(θ_τ)-q_n+1^(k)(θ(τ))]^2}^1/2, where M=1000 is the total number of replications, q_t^(k)(θ_τ) represents the conditional quantile estimate at time t in the kth replication, and θ_τ is the QR estimator θ_wn(τ) or the transformed CQR estimator g_τ(φ̌_wn). Table <ref> reports the above measures for the DGP in (<ref>) with coefficient functions in (<ref>) and (<ref>). Firstly, note that most of the biases and RMSEs decrease as the sample size increases. Secondly, the QR and CQR perform similarly for (<ref>) and (<ref>) with F=F_N in terms of the bias and RMSE. However, when F=F_T, obviously the CQR outperforms the QR in biases and RMSEs especially for high quantiles. This confirms that the CQR can be more favorable than the QR at high quantile levels if the data is heavy-tailed, yet can be comparable to the latter if otherwise. This is also consistent with the findings in Figures <ref> and <ref>. Lastly, although the CQR estimator is biased under model misspecification, the biases of its conditional quantile predictions are very close to or even smaller than those of the QR. This suggests that the CQR can provide satisfactory approximation of conditional quantiles, possibly owing to the flexibility of the Tukey-lambda distribution. In the Appendix, we also provide a simulation experiment to investigate the effect of quantile rearrangement on the prediction performance. § AN EMPIRICAL EXAMPLE This section analyzes daily log returns of the S&P500 Index based on the proposed quantile GARCH model. The daily closing prices from July 1, 2015 to December 30, 2021, denoted by {p_t}, are downloaded from the website of Yahoo Finance. Let y_t=100( ln p_t-ln p_t-1) be the log return in percentage, which has n=1637 observations in total. The time plot of {y_t} suggests that the series exhibits volatility clustering, and it is very volatile at the beginning of 2020 due to COVID-19 pandemic; see Figure <ref>. Table <ref> displays summary statistics of {y_t}, where the sample skewness with value -1.053 and kurtosis with value 23.721 indicate that the data are left-skewed and very heavy-tailed. The above findings motivate us to fit {y_t} by our proposed quantile GARCH model to capture the conditional heteroscedasticity of the return series and possible asymmetric dynamics over its different quantiles. We fit a quantile GARCH(1,1) model to {y_t}. Since the data are very heavy-tailed, the self-weighted QR estimator in (<ref>) is used to obtain estimates of θ(τ)=(ω(τ),α_1(τ),β_1(τ))^', where the self-weights in (<ref>) are employed with c being the 95% sample quantile of {y_t}. The estimates of θ(τ) for τ∈ (0.7,1) together with their 95% pointwise confidence intervals are plotted against the quantile level in Figure <ref>. Note that θ(τ) of our model corresponds to θ_τ=(a_0Q_τ(ε_t)/(1-b_1),a_1Q_τ(ε_t),b_1) in the linear GARCH(1,1) model in (<ref>). 
To compare the fitted coefficients of our model with those of model (<ref>), we also provide estimates of θ_τ using the filtered historical simulation (FHS) method <cit.> based on the Gaussian quasi-maximum likelihood estimation (QMLE). Specifically, a_0,a_1 and b_1 are estimated by Gaussian QMLE of the linear GARCH(1,1) model in (<ref>), and then Q_τ(ε_t) is estimated by the empirical quantile of resulting residuals {ε_t}. From Figure <ref>, we can see that the confidence intervals of ω(τ), α_1(τ) and β_1(τ) do not include the FHS estimates of θ_τ for τ∈ (0.7,0.8), (0.9,1) and (0.9,1) respectively. Since the quantile GARCH model includes the linear GARCH model as a special case, this indicates that the model with constant coefficients fails to capture the asymmetric dynamic structures across different quantiles. In addition, we apply the CvM test in Section <ref> to check whether β_1(τ) is constant for τ∈𝒯_1=[0.700,0.850], τ∈𝒯_2=[0.850,0.950], τ∈𝒯_3=[0.950,0.980], τ∈𝒯_4=[0.980,0.995], and τ∈𝒯=[0.700,0.995]=∪_i=1^4𝒯_i. The CvM test statistic S_n is calculated using a grid 𝒯_n with equal cell size δ_n=0.005. Its critical value is approximated using the proposed subsampling procedure with b_n=⌊ n^1/2⌋. The p-values of S_n for 𝒯_1,…, 𝒯_4, and 𝒯 are 0.585, 0.054, 0.555, 0.017, and 0.150, respectively. Therefore, it is likely that β_1(τ) is varying over [0.850,0.950] and [0.980,0.995]. Since the 5% VaR is of common interest in practice, we report the fitted quantile GARCH model at τ=0.05 as follows: Q_0.05(y_t| ℱ_t-1)=-0.380_0.100-0.341_0.075∑_j=1^∞0.790_0.033^j-1|y_t-j|, where the standard errors are given in the corresponding subscripts of the estimated coefficients. We divide the dataset into a training set (𝒮_train) with size n_0=1000 and a test set (𝒮_test) with size n-n_0=637. Then we conduct a rolling forecast procedure at level τ=0.05 (i.e. negative 5% VaR) with a fixed moving window of size n_0 from the forecast origin t_0=n_0+1 (June 24, 2019). That is, we first obtain the one-step-ahead conditional quantile forecast for t_0 (i.e., the first time point in 𝒮_test) based on data from t=1 to t=n_0, using the formula Q_0.05(y_t_0|ℱ_n_0)=ω_wn_0(0.05)+α_1wn_0(0.05)∑_j=1^n_0[β_1wn_0(0.05)]^j-1|y_t_0-j|. Then for each i=1,…, n-n_0-1, we set the forecast origin to t_0+i and conduct the forecast based on data from t=1+i to t=n_0+i. These forecasts are displayed in the time plot in Figure <ref>. It is clear that the VaR forecasts keep in step with the returns closely, and the return falls below the corresponding negative 5% VaR forecasts occasionally. We also thoroughly compare the forecasting performance of the proposed model with that of existing conditional quantile estimation methods as follows: * FHS: The FHS method <cit.> based on the linear GARCH(1,1) model in (<ref>), where the coefficients are estimated by the Gaussian QMLE, and the residual empirical quantiles are used to approximate the innovation quantiles. * XK: The two-step estimation method QGARCH2 of <cit.> based on linear GARCH(1,1) model (<ref>). Specifically, the initial estimates of {h_t} are obtained by combining the conditional quantile estimates of sieve ARCH approximation h_t=γ_0+∑_j=1^mγ_j|y_t-j| over multiple quantile levels, τ_k=k/20 for k=1,2,…,19, via the minimum distance estimation. Here we set m=3n^1/4 as in their paper. * Hybrid: The hybrid estimation method proposed in <cit.> based on Bollerslev's GARCH(1,1) model in (<ref>) with x_t=y_t. 
* CAViaR: The indirect GARCH(1,1)-based CAViaR method in <cit.>, where we use the same code and settings for the optimization as in their paper. We consider the lower and upper 1%, 2.5% and 5% quantiles and conduct the above rolling forecast procedure for all competing methods. The forecasting performance is evaluated via the empirical coverage rate (ECR), prediction error (PE), and VaR backtests. The ECR is calculated as the percentage of observations in the test set 𝒮_test that fall below the corresponding fitted conditional quantiles. The PE is calculated as follows: PE=1√(τ(1-τ)/(n-n_0))|1n-n_0∑_t=n_0+1^nI{y_t<Q_τ(y_t | ℱ_t-1)}-τ|, where n-n_0 is the size of 𝒮_test, and Q_τ(y_t | ℱ_t-1) is the one-step-ahead conditional quantile forecast based on each estimation method. We conduct two VaR backtests: the likelihood ratio test for correct conditional coverage (CC) in <cit.> and the dynamic quantile (DQ) test in <cit.>. The null hypothesis of the CC test is that, conditional on ℱ_t-1, {H_t} are i.i.d. Bernoulli random variables with the success probability being τ, where H_t=I(y_t<Q_τ(y_t | ℱ_t-1)) is the hit series. For the DQ test in <cit.>, we consider the regression of H_t on a constant and four lagged hits H_t-ℓ with 1≤ℓ≤ 4. The null hypothesis is that the intercept equals to τ and the regression coefficients are zero. If we fail to reject the null hypotheses of the VaR backtests, then the forecasting method is satisfactory. Table <ref> reports the ECRs, PEs and p-values of VaR backtests for the one-step-ahead forecasts. In terms of ECRs and backtests, all methods perform reasonably well, since the ECRs are close to the corresponding nominal levels, and at least one backtest is not rejected at the 5% significance level. However, it is clear that the proposed QR estimator has the smallest PEs in most cases. Furthermore, we compare the performance of the proposed self-weighted QR and CQR estimators at high quantile levels, including the lower and upper 0.1%, 0.25% and 0.5% quantiles. For a more accurate evaluation, we enlarge the S&P500 dataset to cover the period from February 23, 2000 to December 30, 2021, which includes n=5500 observations in total. Moreover, since the self-weighted CQR requires a predetermined bandwidth h, we divide the dataset into a training set (𝒮_train) with size n_0=1000, a validation set (𝒮_val) with size n_1=500, and a test set (𝒮_test) with size n_2=n-n_0-n_1. We choose the optimal h that minimizes the check loss in (<ref>) for 𝒮_val; see Section <ref> for details. Then based on the chosen h, we conduct a moving-window rolling forecast procedure similar to the previous one. The window size is n_0, and the forecast origin is t_0=n_0+n_1+1=1501. That is, we first obtain the conditional quantile forecast for t_0 (i.e., the first time point in 𝒮_test) based on data from t=t_0-n_0=501 to t=t_0-1=1500 (i.e., the last 500 observations in 𝒮_train and all observations in 𝒮_val). We repeat this procedure by advancing the forecast origin and moving window until the end of 𝒮_test is reached. Table <ref> displays the results for the proposed QR, CQR and other competing methods. Notably, the CQR method has the smallest PE and the most accurate ECR at almost all quantile levels, while the QR method is generally competitive among the other methods. In summary, for the S&P 500 dataset, the proposed quantile GARCH model has superior forecasting performance than the original GARCH model, and the proposed CQR estimator outperforms the QR estimator at high quantile levels. 
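The rolling forecast exercise and the ECR/PE evaluation above can be reproduced along the following lines. The sketch re-estimates θ(τ) at every forecast origin by minimising the self-weighted check loss with L-BFGS-B, as suggested earlier, and reuses the self_weights function sketched before; all function names are illustrative, and the CC and DQ backtests are omitted here.

import numpy as np
from scipy.optimize import minimize

def q_trunc(theta, y_abs):
    # truncated conditional quantile q_t(theta) = omega + alpha1 * sum_{j=1}^{t-1} beta1^{j-1}|y_{t-j}|
    omega, alpha1, beta1 = theta
    n = len(y_abs)
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = beta1 * s[t - 1] + y_abs[t - 1]
    return omega + alpha1 * s

def fit_sqr(y, tau, w, start=(0.0, 0.1, 0.5)):
    # self-weighted QR estimate of theta(tau), with beta1 constrained to (0, 1)
    y_abs = np.abs(y)

    def objective(theta):
        u = y - q_trunc(theta, y_abs)
        return np.sum(w * u * (tau - (u < 0)))

    res = minimize(objective, x0=np.asarray(start), method="L-BFGS-B",
                   bounds=[(None, None), (None, None), (1e-6, 1 - 1e-6)])
    return res.x

def rolling_forecasts(y, tau, n0):
    # one-step-ahead conditional quantile forecasts with a fixed moving window of size n0
    n = len(y)
    out = np.empty(n - n0)
    for i in range(n - n0):
        window = y[i:n0 + i]
        w = self_weights(window)                  # self-weights recomputed on each window
        omega, alpha1, beta1 = fit_sqr(window, tau, w)
        lags = np.abs(window[::-1])               # |y_{t0-1}|, ..., |y_{t0-n0}|
        out[i] = omega + alpha1 * np.sum(beta1 ** np.arange(n0) * lags)
    return out

def ecr_pe(y_test, q_forecast, tau):
    # empirical coverage rate and prediction error on the test set
    hits = (y_test < q_forecast).astype(float)
    ecr = hits.mean()
    pe = abs(ecr - tau) / np.sqrt(tau * (1 - tau) / len(y_test))
    return ecr, pe

# e.g. for the 5% level with a window of n0 = 1000 observations:
# f = rolling_forecasts(y, 0.05, 1000); ecr, pe = ecr_pe(y[1000:], f, 0.05)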
Finally, to remedy the quantile crossing problem, we have further conducted the quantile rearrangement <cit.> for the proposed QR method. There are only inconsequential changes to Tables <ref> and <ref>, while all main findings summarized earlier remain the same. In addition, for Figure <ref>, we can also rearrange the self-weighted QR estimates {ω_wn(τ_k)}_k=1^K and {α_1wn(τ_k)}_k=1^K to ensure the monotonicity of the curves. After the rearrangement, the curves for ω(·) and α_1(·) become smoother than those in Figure <ref>. The corresponding confidence intervals are slightly narrower than the original ones; see Section <ref> of the Appendix for details.

§ CONCLUSION AND DISCUSSION

This paper proposes the quantile GARCH model, a new conditional heteroskedastic model whose coefficients are functions of a standard uniform random variable. A sufficient condition for the strict stationarity of this model is derived. To estimate the unknown coefficient functions without any moment restriction on the data, we develop the self-weighted QR and CQR methods. By efficiently borrowing information from intermediate quantile levels via a flexible parametric approximation, the CQR method is more favorable than the QR at high quantile levels. Our empirical analysis shows that the proposed approach can provide more accurate conditional quantile forecasts at high or even extreme quantile levels than existing ones.

The proposed approach can be improved and extended in the following directions. Firstly, the estimation of the asymptotic covariance matrices of the QR and CQR estimators is complicated due to the unknown conditional density function. As an alternative to the kernel density estimation, an easy-to-use bootstrap method, such as the block bootstrap or the random-weight bootstrap, may be developed, and asymptotically valid bootstrap inference for the estimated coefficient functions and conditional quantiles can be further studied. Secondly, it is worth investigating whether it is possible to construct a debiased CQR estimator that is provably no less efficient than the proposed biased estimator at high quantile levels. Thirdly, the expected shortfall (ES), defined as the expectation of the loss that exceeds the VaR, is another important risk measure. It is also of interest to forecast the ES based on the proposed quantile GARCH model. Lastly, the parametric method to model the tails based on the flexible Tukey-lambda distribution is efficient and computationally simple. It can be generalized to other high quantile estimation problems for various data settings.

§ APPENDIX

This appendix presents the generalization of results for the quantile GARCH(1,1) model to the quantile GARCH(p,q) model and the estimation of Σ_w^* in Theorem <ref>. It also provides notation, technical details for Theorems <ref>–<ref> as well as Corollary <ref>, and introduces Lemmas <ref>–<ref>, which give some preliminary results for proving Theorems <ref>–<ref> and Corollary <ref>, and Lemmas <ref>–<ref> for Theorem <ref>. Moreover, additional results for simulation and empirical analysis are also included in this appendix. Throughout the appendix, the notation C is a generic constant which may take different values in different locations, and ρ∈ (0,1) is a generic constant which may take different values at its different occurrences. 
§ GENERAL RESULTS FOR QUANTILE GARCH(P,Q) MODELS §.§ Proposed quantile GARCH(p,q) model This section extends the quantile GARCH(1,1) model and the methods in Sections <ref> and <ref> to the general GARCH(p,q) setting. The linear GARCH(p, q) model <cit.> is given by y_t= h_tε_t, h_t=a_0+∑_i=1^qa_i|y_t-i|+∑_j=1^pb_j h_t-j, where a_0>0, a_i≥ 0 for 1≤ i ≤ q, b_j≥ 0 for 1≤ j ≤ p, and the innovations {ε_t} are i.i.d. random variables with mean zero and variance one. If {y_t} is strictly stationary, then ∑_j=1^pb_j<1, and the process has the linear ARCH(∞) representation, y_t=ε_t[ω+∑_j=1^∞γ_j(a_1, …, a_q, b_1, …, b_p)|y_t-j|], where ω=a_0(1-∑_j=1^pb_j)^-1, the functions γ_j(·)'s are defined on ℝ^q× D with D={(d_1, …, d_p)^'∈ℝ^p: ∑_j=1^pd_j<1, min_1≤ j ≤ pd_j≥ 0}, such that for any (c_1, …, c_q)^'∈ℝ^q and (d_1, …, d_p)^'∈ D, it holds that ∑_j=1^∞γ_j(c_1, …, c_q, d_1, …, d_p)z^j=∑_i=1^qc_i z^i/1-∑_j=1^pd_jz^j, |z| ≤ 1. Motivated by (<ref>), we define the quantile GARCH(p,q) model as follows: Q_τ(y_t|ℱ_t-1)= ω(τ)+ ∑_j=1^∞γ_j (α_1(τ), …, α_q(τ), β_1(τ), …, β_p(τ))|y_t-j|, or equivalently, y_t= ω(U_t) + ∑_j=1^∞γ_j (α_1(U_t), …, α_q(U_t), β_1(U_t), …, β_p(U_t))|y_t-j|, where ω: (0, 1)→ℝ and α_i: (0, 1)→ℝ are unknown monotonic increasing functions, and β_k: (0, 1)→ [0, 1) is a non-negative real-valued function, for 1≤ i≤ q and 1≤ k≤ p, with ∑_j=1^pβ_j(·)< 1. In particular, the quantile GARCH(1,1) model has the form of (<ref>). The quantile GARCH(p,q) model in (<ref>) or (<ref>) also requires condition (<ref>) for its identifiability. By (<ref>) and Lemma 2.1 of <cit.>, we can verify that condition (<ref>) holds if and only if ω(0.5)=α_1(0.5)=⋯=α_q(0.5)=0. Hence, we impose (<ref>) for the quantile GARCH(p,q) model. As in Section <ref>, we refrain from imposing any monotonicity constraint on β_j(·)'s to avoid restricting the flexibility of the functions. Nonetheless, the monotonicity of the right side of (<ref>) in τ is guaranteed if β_1(·), …, β_p(·) are monotonic decreasing on (0, 0.5) and monotonic increasing on (0.5, 1); see Remark <ref>. Moreover, we can write the quantile GARCH(p,q) model in the form of y_t = sgn(U_t-0.5)|y_t|, |y_t| = |ω(U_t) |+ ∑_j=1^∞γ_j (|α_1(U_t)|, …, |α_q(U_t)|, β_1(U_t), …, β_p(U_t)) |y_t-j|. Then the quantile GARCH(p,q) model is equivalent to y_t = sgn(U_t-0.5)|y_t|, |y_t|= ϕ_0, t + ∑_j=1^∞ϕ_j,t|y_t-j|, j≥1, where ϕ_0, t=|ω(U_t)| and ϕ_j, t=γ_j (|α_1(U_t)|, …, |α_q(U_t)|, β_1(U_t), …, β_p(U_t)) for j≥1. This enables us to establish a sufficient condition for the existence of a strictly stationary solution of the quantile GARCH(p,q) model in the following theorem. Suppose condition (<ref>) holds. If there exists s∈(0,1] such that E(ϕ_0, t^s)<∞ and ∑_j=1^∞E(ϕ_j,t^s)<1, or s>1 such that E(ϕ_0, t^s)<∞ and ∑_j=1^∞[E(ϕ_j,t^s)]^1/s<1, then there exists a strictly stationary solution of the quantile GARCH(p,q) equations in (<ref>), and the process {y_t} defined by y_t=(U_t-0.5)(ϕ_0,t+∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1) is the unique strictly stationary and ℱ_t^U-measurable solution to (<ref>) such that E|y_t|^s<∞, where ℱ_t^U is the σ-field generated by {U_t, U_t-1, …}. §.§ Proposed estimation methods The proposed QR and CQR estimators can be extended to the quantile GARCH(p,q) model (<ref>) with minor adjustments in notations and assumptions. For the QR method, denote the parameter vector of model (<ref>) by θ=(ω, ϑ^')^'=(ω, α^',β^')^', where α=(α_1,…,α_q)^', β=(β_1,…,β_p)^', and the parameter space is Θ⊂ℝ^q+1× [0,1)^p. 
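Any implementation of the (p,q) model requires the coefficients γ_j(·), which can be generated recursively from the power-series identity above: matching powers of z gives γ_j = c_j I(j≤ q) + ∑_{k=1}^{min(p,j-1)} d_k γ_{j-k}. A short sketch of this recursion (the function name is ours, and the GARCH(1,1) check in the final comment is only a sanity check):

import numpy as np

def arch_inf_coefficients(c, d, J):
    # first J coefficients gamma_j(c_1,...,c_q, d_1,...,d_p) defined by
    #   sum_{j>=1} gamma_j z^j = (sum_{i=1}^q c_i z^i) / (1 - sum_{k=1}^p d_k z^k),
    # obtained by matching powers of z:
    #   gamma_j = c_j * 1{j <= q} + sum_{k=1}^{min(p, j-1)} d_k * gamma_{j-k}
    c, d = np.asarray(c, float), np.asarray(d, float)
    q, p = len(c), len(d)
    gamma = np.zeros(J + 1)                       # gamma[0] is unused
    for j in range(1, J + 1):
        gamma[j] = c[j - 1] if j <= q else 0.0
        for k in range(1, min(p, j - 1) + 1):
            gamma[j] += d[k - 1] * gamma[j - k]
    return gamma[1:]

# GARCH(1,1) check: gamma_j = c_1 * d_1^{j-1}, e.g.
# arch_inf_coefficients([0.1], [0.8], 5) -> [0.1, 0.08, 0.064, 0.0512, 0.04096]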
Then the conditional quantile functions q_t(θ) and q_t(θ) are given by q_t(θ) =ω + ∑_j=1^∞γ_j (ϑ)|y_t-j| and q_t(θ)=ω + ∑_j=1^t-1γ_j (ϑ)|y_t-j|. Accordingly, the self-weighted QR estimator for quantile GARCH(p,q) model can be defined as θ_wn(τ) in (<ref>). Let the true value of the parameter vector be θ(τ)=(ω(τ), ϑ^'(τ))^'=(ω(τ), α^'(τ), β^'(τ))^', where α(τ)=(α_1(τ),…,α_q(τ))^' and β(τ)=(β_1(τ),…,β_p(τ))^'. Denote the first derivative of q_t(θ) by q̇_t(θ)=(1,∑_j=1^∞γ̇_j^'(ϑ)|y_t-j|)^'. For each j≥ 1, γ_j(·)'s are twice differentiable functions, with derivatives of first and second orders, γ̇_j(·) and γ̈_j(·), satisfying that (i) sup_ν≤ r|γ_j(ν+ϑ(τ))|≤ c_1ρ^j; (ii) sup_ν≤ rγ̇_j(ν+ϑ(τ))≤ c_2ρ^j; (iii) sup_ν≤ rγ̈_j(ν+ϑ(τ))≤ c_3ρ^j for some constants c_1, c_2, c_3>0, where ν∈ℝ^p+q, r>0 is a fixed small value, and 0< ρ <1. For {y_t} generated by model (<ref>) under condition (<ref>), suppose E|y_t|^s<∞ for some s∈ (0,1) and Σ_w(τ,τ) is positive definite. If Assumptions <ref>, <ref>(i), <ref>, <ref> and <ref> hold, then as n→∞, we have (i) θ_wn(τ) →_p θ(τ); (ii) √(n)(θ_wn(τ)-θ(τ))→_d N( 0,Σ_w(τ,τ)) if Assumption <ref>(ii) is further satisfied. For the CQR method, let φ=(ϕ^', λ)^'=(a_0,ψ^',λ)^'=(a_0, a_1,…, a_q, b_1, …, b_p, λ)^' be the parameter vector of the linear GARCH(p,q) model in (<ref>) with the innovation ε_t following the Tukey-lambda distribution in (<ref>). Let Φ⊂ (0,∞)×[0,∞)^q× [0,1)^p×Λ be the parameter space of φ. The conditional quantile functions q_t,τ(φ) and q_t,τ(φ) are defined as in Section <ref> with h_t(ϕ)=a_0(1-∑_j=1^pb_j)^-1+∑_j=1^∞γ_j(ψ)|y_t-j| and h_t(ϕ)=a_0(1-∑_j=1^pb_j)^-1+∑_j=1^t-1γ_j(ψ)|y_t-j|, respectively. Then the self-weighted CQR estimator for the quantile GARCH(p,q) model is given by φ̌_wn in (<ref>), and the transformed CQR estimator is θ̌_wn^*(τ)=g_τ(φ̌_wn), where g_τ(·): ℝ^p+q+2→ℝ^p+q+1 is the measurable function defined as g_τ(φ)=(a_0Q_τ(λ)/(1-∑_j=1^pb_j),a_1Q_τ(λ),…,a_qQ_τ(λ),b_1,…,b_p)^'. Let φ_0^*=(ϕ_0^', λ_0)^'=(a_00,ψ_0^',λ_0)^'=(a_00, a_10, …, a_q0, b_10, …,b_p0, λ_0)^' be the pseudo-true parameter defined as in (<ref>). Define the first derivative of q_t,τ(φ) as q̇_t,τ(φ)=(Q_τ(λ)ḣ_t^'(ϕ),Q̇_τ(λ)h_t(ϕ))^', where ḣ_t(ϕ) and Q̇_τ(λ) are the first derivatives of h_t(ϕ) and Q_τ(λ), respectively. For each j≥ 1, γ_j(·)'s are twice differentiable functions, with derivatives of first and second orders, γ̇_j(·) and γ̈_j(·), satisfying that (i) sup_ν≤ r|γ_j(ν+ψ_0)|≤ c_1ρ^j; (ii) sup_ν≤ rγ̇_j(ν+ψ_0)≤ c_2ρ^j; (iii) sup_ν≤ rγ̈_j(ν+ψ_0)≤ c_3ρ^j for some constants c_1, c_2, c_3>0, where ν∈ℝ^p+q, r>0 is a fixed small value, and 0< ρ <1. For {y_t} generated by model (<ref>) under condition (<ref>), suppose E|y_t|^s<∞ for some s∈ (0,1) and Σ_w^* is positive definite. If Assumptions <ref>, <ref>, <ref>, <ref>(i) and <ref> hold, then as n→∞, we have (i) φ̌_wn→_p φ_0^*. Moreover, if Assumption <ref>(ii) further holds, then (ii) √(n)(φ̌_wn-φ_0^*)→_d N( 0,Σ_w^*); and (iii) √(n)(θ̌_wn^*(τ)-θ(τ)-B(τ))→_d N( 0,g_τ(φ_0^*)Σ_w^*g_τ^'(φ_0^*)), where B(τ)=g_τ(φ_0^*)-θ(τ). § ESTIMATION OF Σ_W^* IN THEOREM <REF> To approximate the asymptotic variance Σ_w^* in Theorem <ref>, it suffices to estimate Ω_0w^* and Ω_1w^*. 
Note that Ω_11^* and Ω_12^* can be consistently estimated by the sample averages: Ω̌_11^* =1n∑_t=1^n∑_k=1^Kw_tq̈_t,τ_k(φ̌_wn)ψ_τ_k(y_t-q_t,τ_k(φ̌_wn)) and Ω̌_12^* =1n∑_t=1^n∑_k=1^Kw_tf̌_t-1(q_t,τ_k(φ̌_wn))q̇_t,τ_k(φ̌_wn)q̇_t,τ_k^'(φ̌_wn), where f̌_t-1(q_t,τ_k(φ̌_wn))=2ℓ_k[q_t,τ_k+ℓ_k(φ̌_wn)-q_t,τ_k-ℓ_k(φ̌_wn)]^-1, with ℓ_k representing the bandwidth defined as in (<ref>) for quantile level τ_k, and q̇_t,τ(φ̌_wn) and q̈_t,τ(φ̌_wn) are obtained from q̇_t,τ(φ̌_wn) and q̈_t,τ(φ̌_wn) by setting the initial values y_t=0 for t≤ 0, respectively. Then Ω_1w^* can be consistently estimated by Ω̌_1w^*=Ω̌_12^*-Ω̌_11^*. As the population covariance matrix of n^-1/2∑_t=1^n X_t, the matrix Ω_0w^* cannot be consistently estimated by the corresponding sample covariance matrix <cit.>. Alternatively, we adopt the following kernel estimator of spectral density matrix <cit.>: Ω̌_0w^*=nn-d∑_ℓ=-n+1^n-1K(ℓB_n)Γ̌(ℓ), where n/(n-d) with d=4 is a small sample degrees of freedom adjustment to offset the effect of estimating φ_0^*∈ℝ^d using φ̌_wn, Γ̌(ℓ)=I(ℓ≥ 0)n^-1∑_t=ℓ+1^nX̌_tX̌_t-ℓ^'+I(ℓ< 0)n^-1∑_t=-ℓ+1^nX̌_t+ℓX̌_t^' with X̌_t=∑_k=1^K w_tq̇_t,τ_k(φ̌_wn)ψ_τ_k(y_t-q_t,τ_k(φ̌_wn)), B_n is a bandwidth, and K(·): ℝ→ [-1,1] is a real-valued kernel function satisfying K(0)=1,  K(x)=K(-x), ∫_-∞^∞K^2(x)dx<∞, and K(·) is continuous. Under Assumption <ref>, if B_n→∞, B_n^2/n→ 0, E( X_t^2δ)<∞, and ∑_n=1^∞n^2[α(n)]^1-2/δ for some δ>2, <cit.> showed that Ω̌_0w^*→_p Ω_0w^* as n→∞. As a result, the asymptotic covariance Σ_w^* can be estimated by Σ̌_w^*=Ω̌_1w^*-1Ω̌_0w^*Ω̌_1w^*-1. Many kernel functions satisfy (<ref>), such as the Bartlett, Parzen, Tukey-Hanning and quadratic spectral (QS) kernels. <cit.> showed that under some regular conditions the QS kernel is optimal with respect to the asymptotic truncated mean squared error (MSE) among the aforementioned kernels. Therefore, we employ the QS kernel defined as follows: K(x)=2512π^2x^2[sin(6π x/5)6π x/5-cos(6π x/5)]. It remains to choose the bandwidth B_n for Ω̌_0w^* in (<ref>). <cit.> introduced the automatic bandwidth for the QS kernel as B_n=1.3221[nα(2)]^1/5, where α(2) is calculated using some approximating parametric models for each element of X_t or X_t as a whole. For simplicity, we fit AR(1) models for {X_it} (i=1,…,4) and obtain the estimates (ρ_i,σ_i^2) for the AR coefficient and innovation variance (ρ_i,σ_i^2), respectively. Then α(2) can be calculated as α(2)=∑_i=1^4ι_i4ρ_i^2σ_i^4(1-ρ_i)^8/∑_i=1^4ι_iσ_i^4(1-ρ_i)^4, where ι_i's are the weights assigned to the diagonal elements of Ω̌_0w^*, and the usual choice of ι_i is one for i=1,…,4. If Q_τ(y_t|ℱ_t-1) is correctly specified by q_t,τ(φ_0^*) (i.e., g_τ(φ_0^*)=θ(τ)) for each τ∈𝒯_h, then E[ψ_τ(y_t-q_t,τ(φ_0^*))|ℱ_t-1]=0, and the martingale CLT can be used to establish the asymptotic normality of φ̌_wn. In this case, Σ_w^* can be largely simplified since Ω_1w^*=Ω_12^* and Ω_0w^*=∑_k=1^K∑_k^'=1^KΨ_k,k^'E[w_t^2q̇_t,τ_k(φ_0^*)q̇_t,τ_k^'^'(φ_0^*)] with Ψ_k,k^'=min{τ_k,τ_k^'}(1-max{τ_k,τ_k^'}). As a result, Ω_0w^* can be estimated by its sample average as for Ω_0w(τ) in Section <ref>, and a consistent estimator Σ̌_w^*=Ω̌_12^*-1Ω̌_0w^*Ω̌_12^*-1 can be constructed for Σ_w^*. § NOTATION §.§ q̇_t(θ) and q̈_t(θ) for quantile GARCH(1,1) model Recall that q_t(θ) =ω + α_1∑_j=1^∞β_1^j-1|y_t-j|, then its first and second derivatives are q̇_t(θ)=(1,∑_j=1^∞β^j-1_1|y_t-j|,α_1∑_j=2^∞(j-1)β^j-2_1|y_t-j|)^', and q̈_t(θ)=[ 0 0 0; 0 0 ∑_j=2^∞(j-1)β^j-2_1|y_t-j|; 0 ∑_j=2^∞(j-1)β^j-2_1|y_t-j| α_1∑_j=3^∞(j-1)(j-2)β^j-3_1|y_t-j| ]. 
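When these sums are truncated at the observed sample (y_t = 0 for t ≤ 0), the entries of q̇_t(θ) can be accumulated in O(n) time with simple recursions, which is convenient when forming the sample-average matrices such as Ω_0w(τ) and Ω_1w(τ). A sketch (function name ours):

import numpy as np

def q_dot_trunc(theta, y_abs):
    # rows t = 1, ..., n of the truncated gradient
    #   q_dot_t(theta) = (1, sum_{j=1}^{t-1} beta1^{j-1}|y_{t-j}|,
    #                        alpha1 * sum_{j=2}^{t-1} (j-1) beta1^{j-2}|y_{t-j}|)',
    # accumulated recursively via s_{t+1} = beta1*s_t + |y_t| and d_{t+1} = beta1*d_t + s_t
    omega, alpha1, beta1 = theta
    n = len(y_abs)
    s = np.zeros(n)
    d = np.zeros(n)
    for t in range(1, n):
        d[t] = beta1 * d[t - 1] + s[t - 1]
        s[t] = beta1 * s[t - 1] + y_abs[t - 1]
    return np.column_stack([np.ones(n), s, alpha1 * d])

# e.g. Omega_hat_0w(tau) = n^{-1} sum_t w_t^2 q_dot_t q_dot_t' is then a single matrix product:
# D = q_dot_trunc(theta_hat, np.abs(y)); Omega0 = (w[:, None] ** 2 * D).T @ D / len(y)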
§.§ q̇_t,τ(φ) and q̈_t,τ(φ) for quantile GARCH(1,1) model Recall that q_t,τ(φ) = Q_τ(λ) (a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1|y_t-j|):= Q_τ(λ)h_t(ϕ). Then its first and second derivatives are as follows q̇_t,τ(φ) =(Q_τ(λ)ḣ_t^'(ϕ),Q̇_τ(λ)h_t(ϕ))^' and q̈_t,τ(φ)=[ Q_τ(λ)ḧ_t(ϕ) Q̇_τ(λ)ḣ_t(ϕ); Q̇_τ(λ)ḣ_t^'(ϕ) Q̈_τ(λ)h_t(ϕ) ], where Q̇_τ(λ) and ḣ_t(ϕ) (Q̈_τ(λ) and ḧ_t(ϕ)) are the first (second) derivatives of Q_τ(λ) and h_t(ϕ), respectively. Specifically, they are defined as follows Q̇_τ(λ) =λ^-2{τ^λ(λlnτ-1)-(1-τ)^λ[λln(1-τ)-1]}, Q̈_τ(λ) =λ^-3{τ^λ[(λlnτ-1)^2+1]-(1-τ)^λ[(λln(1-τ)-1)^2+1]}, ḣ_t(ϕ) =(11-b_1,∑_j=1^∞b_1^j-1|y_t-j|,a_0(1-b_1)^2+a_1∑_j=2^∞(j-1)b_1^j-2|y_t-j|)^', ḧ_t(ϕ) =[ 0 0 1(1-b_1)^2; 0 0 ∑_j=2^∞(j-1)b_1^j-2|y_t-j|; 1(1-b_1)^2 ∑_j=2^∞(j-1)b_1^j-2|y_t-j| 2a_0(1-b_1)^3+a_1∑_j=3^∞(j-1)(j-2)b_1^j-3|y_t-j| ]. § TECHNICAL PROOFS §.§ Proof of Theorem <ref> Let {X_t} be a sequence of random variables with X_t=ϕ_0,t+∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1 taking values in [0,∞]. (i) We first consider the case with s∈(0,1]. For any s∈(0,1], using the inequality (x+y)^s≤ x^s+y^s for x, y≥ 0, we have X_t^s ≤ϕ_0,t^s+∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ^s_0, t-j_1-⋯-j_ℓϕ^s_j_1, tϕ^s_j_2, t-j_1⋯ϕ^s_j_ℓ, t-j_1-⋯-j_ℓ-1. Denote A_s=∑_j=1^∞ E(ϕ^s_j, t). Observe that the ϕ_j, t's in every summand on the right side of the above inequality are independent, where j≥0. Thus it follows that E(X_t^s) ≤ E(ϕ_0,t^s)+∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ E(ϕ^s_0, t-j_1-⋯-j_ℓ)E(ϕ^s_j_1, t)E(ϕ^s_j_2, t-j_1)⋯ E(ϕ^s_j_ℓ, t-j_1-⋯-j_ℓ-1) = E(ϕ^s_0,t) [1+∑_ℓ=1^∞ A_s^ℓ] = E(ϕ^s_0,t)/1-A_s < ∞, where we used the condition in (<ref>). Consequently, {X_t} is a sequence of almost surely finite random variables. With all the summands being non-negative, we can write ∑_j=1^∞ϕ_j,tX_t-j =∑_j_0=1^∞ϕ_0,t-j_0ϕ_j_0, t +∑_j_0=1^∞∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0,t-j_0-j_1-⋯-j_ℓϕ_j_0,tϕ_j_1, t-j_0⋯ϕ_j_ℓ, t-j_0-j_1-⋯-j_ℓ-1 =∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1. Comparing this with (<ref>), we have that {X_t} satisfies the recursive equation X_t= ϕ_0,t+ ∑_j=1^∞ϕ_j,tX_t-j. Hence the existence of a strictly stationary solution to (<ref>) is proved by setting y_t=sgn(U_t-0.5)X_t. In addition, E|y_t|^s=E(X_t^s)<∞ by (<ref>). Now suppose that {y_t} is a strictly stationary and causal solution to the model in (<ref>). Then, for any m∈ℕ, by successively substituting the |y_t-j|'s in the second equation of (<ref>) m times, we have |y_t|=ϕ_0,t+∑_ℓ=1^m∑_j_1,…, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1+R_t,m, where R_t,m=∑_j_1,…, j_m+1=1^∞ϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_m+1, t-j_1-⋯-j_m|y_t-j_1-⋯-j_m+1|. By the causality of {y_t}, the ϕ_j_1, t, ϕ_j_2, t-j_1, …ϕ_j_m+1, t-j_1-⋯-j_m and |y_t-j_1-⋯-j_m+1| in every summand on the right side of the above expression are independent. As a result, E(R_t,m^s) ≤∑_j_1,…, j_m+1=1^∞ E(ϕ_j_1, t^s)E(ϕ_j_2, t-j_1^s)⋯ E(ϕ_j_m+1, t-j_1-⋯-j_m^s)E|y_t-j_1-⋯-j_m+1|^s =A_s^m+1E|y_t|^s, which implies E(∑_m=1^∞R_t,m^s)= ∑_m=1^∞E(R_t,m^s) <∞, since 0≤ A_s<1 and E|y_t|^s<∞. It follows that, as m→∞, R_t,m→0 a.s., and thus |y_t|=X_t a.s. Finally, since y_t=sgn(U_t-0.5)|y_t|, we have y_t=sgn(U_t-0.5)X_t a.s. (ii) We next consider the case with s∈{2,3,4,…}, where we only need to show E(X_t^s)<∞ since the remainder of the proof is the same as for s∈(0,1] in (i). By Minkowski inequality, for s≥ 1, we have X_t_s ≤ϕ_0,t_s + ∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1_s. 
Since E(ϕ_0, t^s)<∞ by the condition in (<ref>), to show E(X_t^s)<∞, it suffices to show that E[(∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1)^s]<∞. Consider the case with s=2 for illustration. By Hölder's inequality and the independence of ϕ_j, t's, it holds that E[(∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1)^2] = ∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞∑_k=1^∞∑_i_1, …, i_k=1^∞E[(ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1) ×(ϕ_0, t-i_1-⋯-i_kϕ_i_1, tϕ_i_2, t-i_1⋯ϕ_i_k, t-i_1-⋯-i_k-1) ] ≤ ∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞∑_k=1^∞∑_i_1, …, i_k=1^∞[E(ϕ_0, t-j_1-⋯-j_ℓϕ_j_1, tϕ_j_2, t-j_1⋯ϕ_j_ℓ, t-j_1-⋯-j_ℓ-1)^2]^1/2 ×[E(ϕ_0, t-i_1-⋯-i_kϕ_i_1, tϕ_i_2, t-i_1⋯ϕ_i_k, t-i_1-⋯-i_k-1)^2]^1/2 = E(ϕ^2_0, t)∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞∑_k=1^∞∑_i_1, …, i_k=1^∞[∏_h=1^ℓE(ϕ^2_j_h, t)]^1/2[∏_m=1^kE(ϕ^2_i_m, t)]^1/2 = E(ϕ^2_0, t){∑_ℓ=1^∞∑_j_1, …, j_ℓ=1^∞[∏_h=1^ℓE(ϕ^2_j_h, t)]^1/2}{∑_k=1^∞∑_i_1, …, i_k=1^∞[∏_m=1^kE(ϕ^2_i_m, t)]^1/2} = E(ϕ^2_0, t) (∑_ℓ=1^∞B_2,1/2^ℓ) (∑_k=1^∞B_2,1/2^k) < E(ϕ^2_0, t)(1-B_2,1/2)^2<∞, where we used the conditions E(ϕ^s_0, t)<∞ and B_s,1/s=∑_j=1^∞[E(ϕ^s_j, t)]^1/s<1 in (<ref>). Hence, E(X_t^s)<∞ holds for s=2. For the cases with s≥ 3, we can similarly show that E(X_t^s)<∞ if (<ref>) holds. The proof of this theorem is complete. §.§ Proofs of Theorems <ref>–<ref> and Corollary <ref> Recall that q_t(θ) =ω + α_1∑_j=1^∞β_1^j-1|y_t-j| and q_t(θ)=ω + α_1∑_j=1^t-1β_1^j-1|y_t-j|, where θ=(ω, α_1,β_1)^'. Define L(θ,τ)=E[w_tℓ_t(θ,τ)], L_n(θ,τ)=n^-1∑_t=1^nw_tℓ_t(θ,τ) and L_n(θ,τ)=n^-1∑_t=1^nw_tℓ_t(θ,τ), where ℓ_t(θ,τ)=ρ_τ(y_t-q_t(θ)) and ℓ_t(θ,τ)=ρ_τ(y_t-q_t(θ)). To show the uniform consistency, we first verify the following claims: (i) sup_τ∈𝒯sup_Θ|L_n(θ,τ)-L_n(θ,τ)|=o_p(1); (ii) E(sup_τ∈𝒯sup_Θw_t|ℓ_t(θ,τ)|)<∞; (iii) L(θ,τ) has a unique minimum at θ(τ). We first prove Claim (i). By the Lipschitz continuity of ρ_τ(·), strict stationarity and ergodicity of y_t by Assumption <ref>, Lemma <ref>(i) and E(w_tς_ρ)<∞ by Assumption <ref>, it holds that sup_τ∈𝒯sup_Θ|L_n(θ,τ)-L_n(θ,τ)| ≤1n∑_t=1^nw_tsup_τ∈𝒯sup_Θ|ρ_τ(y_t-q_t(θ))-ρ_τ(y_t-q_t(θ))| ≤Cn∑_t=1^nw_tsup_τ∈𝒯sup_Θ|q_t(θ)-q_t(θ)| ≤Cn∑_t=1^nρ^tw_tς_ρ=o_p(1), where ς_ρ=∑_s=0^∞ρ^s|y_-s|. We next prove Claim (ii). By Assumption <ref>, there exist constant 0<c<∞ and 0<ρ<1 such that max{|ω|, |α_1|}≤c and 0<β_1≤ρ. By the fact that |ρ_τ(x)|≤ |x|, and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, we have E[sup_τ∈𝒯sup_Θw_t|ℓ_t(θ,τ)|] ≤ E[w_t|y_t|]+E[w_tsup_τ∈𝒯sup_Θ|q_t(θ)|] ≤ E(w_t|y_t|) + cE[w_t(1+∑_j=1^∞ρ^j-1|y_t-j|)] < ∞. Hence, (ii) is verified. We consider Claim (iii). For x≠ 0, it holds that ρ_τ(x-y)-ρ_τ(x) =-yψ_τ(x)+y∫_0^1[I(x≤ ys)-I(x≤ 0)]ds =-yψ_τ(x)+(x-y)[I(0>x>y)-I(0<x<y)], where ψ_τ(x)=τ-I(x<0); see <cit.>. Let ν_t(θ,τ)=q_t(θ)-q_t(θ(τ)) and η_t,τ=y_t-q_t(θ(τ)). By (<ref>), it follows that ℓ_t(θ,τ)-ℓ_t(θ(τ),τ) = -ν_t(θ,τ)ψ_τ(η_t,τ)+[η_t,τ-ν_t(θ,τ)][I(0>η_t,τ>ν_t(θ,τ))-I(0<η_t,τ<ν_t(θ,τ))]. This together with E[ψ_τ(η_t,τ)|ℱ_t-1]=0, implies that L(θ,τ)-L(θ(τ),τ) = E{w_t[η_t,τ-ν_t(θ,τ)][I(0>η_t,τ>ν_t(θ,τ))-I(0<η_t,τ<ν_t(θ,τ))]}≥ 0. Since f_t-1(x) is continuous at a neighborhood of q_t(θ(τ)) by Assumption <ref> and {w_t} are nonnegative random weights, then the above equality holds if and only if ν_t(θ,τ)=0 with probability one for t ∈ℤ. This together with q_t(θ) =ω + α_1∑_j=1^∞β_1^j-1|y_t-j|, implies that ω-ω(τ) =∑_j=1^∞[α_1(τ)β_1^j-1(τ)-α_1β_1^j-1]|y_t-j|. Note that y_t-1 is independent of all the others given ℱ_t-2. 
As a result, we have ω=ω(τ) and α_1=α_1(τ), thus β_1=β_1(τ) follows. Therefore, θ=θ(τ) and (iii) is verified. Note that L(θ,τ) is a measurable function of y_t in Euclidean space for each (θ,τ)∈Θ×𝒯, and L(θ,τ) is a continuous function of (θ,τ)∈Θ×𝒯 for each y_t. Then by Theorem 3.1 of <cit.>, together with Claim (ii) and the strict stationarity and ergodicity of {y_t} under Assumption <ref>, we have sup_τ∈𝒯sup_θ∈Θ|L_n(θ,τ)-L(θ,τ)|=o_p(1). This together with Claim (i), implies that sup_τ∈𝒯sup_θ∈Θ|L_n(θ,τ)-L(θ,τ)|=o_p(1). We next verify the uniform consistency by extending the standard consistency argument in <cit.>; see also Lemma B.1 of <cit.>. For any c>0, with probability tending to 1 uniformly in ϵ≥ c and uniformly in τ∈𝒯: by θ_wn(τ)=_θ∈ΘL_n(θ,τ), it follows that L_n(θ_wn(τ),τ) ≤L_n(θ(τ),τ) + ϵ/3, and by (<ref>), it holds that L(θ_wn(τ),τ) < L_n(θ_wn(τ),τ) + ϵ/3 and L_n(θ(τ),τ) < L(θ(τ),τ) + ϵ/3. Combining (<ref>)–(<ref>), with probability tending to 1, we have L(θ_wn(τ),τ) < L_n(θ_wn(τ),τ) + ϵ/3 ≤L_n(θ(τ),τ) + 2ϵ/3 < L(θ(τ),τ) + ϵ. Pick any δ>0, let {B_δ(τ),τ∈𝒯} be a collection of balls with radius δ>0, each centered at θ(τ). Then B^c≡Θ/B_δ(τ) is compact, and thus inf_θ∈ B^cL(θ,τ) exists. Denote ϵ=inf_τ∈𝒯[inf_θ∈ B^cL(θ,τ)-L(θ(τ),τ)]. Since θ(τ)=_θ∈ΘL(θ,τ) is unique by Claim (iii), then ϵ>0. For any ϵ>0, we can pick c>0 such that Pr(ϵ≥ c)>1-ϵ. Together with (<ref>), it follows that with probability becoming greater than 1-ϵ, uniformly in τ∈𝒯: L(θ_wn(τ),τ) < L(θ(τ),τ) + inf_τ∈𝒯[inf_θ∈ B^cL(θ,τ)-L(θ(τ),τ)] < inf_θ∈ B^cL(θ,τ). Thus with probability becoming greater than 1-ϵ, sup_τ∈𝒯θ_wn(τ)-θ(τ)≤δ. By the arbitrariness of ϵ, it implies that sup_τ∈𝒯θ_wn(τ)-θ(τ)≤δ with probability tending to 1. The proof of this theorem is complete. For u∈ℝ^3, define H_n( u)=n[L_n(θ(τ)+ u)-L_n(θ(τ))], where L_n(θ)=n^-1∑_t=1^nw_tρ_τ(y_t-q_t(θ)). Denote u_n=θ_wn(τ)-θ(τ). By Theorem <ref>, it holds that u_n=o_p(1). Note that u_n is the minimizer of H_n( u), since θ_wn(τ) minimizes L_n(θ). Define J=Ω_1w(τ)/2, where Ω_1w(τ)=E[f_t-1(F_t-1^-1(τ))w_tq̇_t(θ(τ))q̇_t^'(θ(τ))]. By the ergodic theorem and Assumptions <ref>–<ref>, we have J_n=J+o_p(1), where J_n is defined as in Lemma <ref>. Moreover, from Lemmas <ref>–<ref>, it follows that H_n( u_n)= -√(n) u_n^' T_n+√(n) u_n^'J√(n) u_n+o_p(√(n) u_n+n u_n^2) ≥ -√(n) u_n[ T_n+o_p(1)]+n u_n^2[λ_min+o_p(1)], where λ_min is the smallest eigenvalue of J, and T_n=n^-1/2∑_t=1^nw_tq̇_t(θ(τ))ψ_τ(η_t,τ). Note that E(ψ_τ(η_t,τ)|ℱ_t-1)=0 and the positive definite matrix Ω_0w(τ)=E[w_t^2q̇_t(θ(τ))q̇_t^'(θ(τ))]<∞ by Assumptions <ref> and <ref>. Then by the Lindeberg–Lévy theorem for martingales <cit.> and the Cramér-Wold device, together with the stationarity and ergodicity of y_t by Assumption <ref>, T_n converges in distribution to a normal random variable with mean zero and variance matrix τ(1-τ)Ω_0w(τ) as n→∞. Note that λ_min>0 as Ω_1w(τ) is positive definite, and T_n<∞ by Assumptions <ref>, <ref> and <ref>. Since H_n( u_n)≤ 0, then we have √(n) u_n≤ [λ_min+o_p(1)]^-1[ T_n+o_p(1)]=O_p(1). This together with Theorem <ref> verifies the √(n)-consistency of θ_wn(τ), i.e. √(n)(θ_wn(τ)-θ(τ))=O_p(1). Let √(n) u_n^*=J^-1 T_n/2=Ω_1w^-1(τ) T_n, then we have √(n) u_n^*→ N( 0,τ(1-τ)Ω_1w^-1(τ)Ω_0w(τ)Ω_1w^-1(τ)) in distribution as n→∞. Therefore, it suffices to show that √(n) u_n^*-√(n) u_n=o_p(1). 
By (<ref>) and (<ref>), we have H_n( u_n)= -√(n) u_n^' T_n+√(n) u_n^'J√(n) u_n+o_p(1) = -2√(n) u_n^'J√(n) u_n^*+√(n) u_n^'J√(n) u_n+o_p(1), and H_n( u_n^*)= -√(n) u_n^*' T_n+√(n) u_n^*'J√(n) u_n^*+o_p(1) =-√(n) u_n^*'J√(n) u_n^*+o_p(1). It follows that H_n( u_n)-H_n( u_n^*)= (√(n) u_n-√(n) u_n^*)^'J(√(n) u_n-√(n) u_n^*)+o_p(1) ≥ λ_min√(n) u_n-√(n) u_n^*^2+o_p(1). Since H_n( u_n)-H_n( u_n^*)=n[L_n(θ(τ)+ u_n)-L_n(θ(τ)+ u_n^*)]≤ 0 a.s., then (<ref>) implies that √(n) u_n-√(n) u_n^*=o_p(1). We verify the asymptotic normality of θ_wn(τ), and the proof is hence accomplished. Recall that L_n(θ,τ)=n^-1∑_t=1^nw_tρ_τ(y_t-q_t(θ)) and L_n(θ,τ)=n^-1∑_t=1^nw_tρ_τ(y_t-q_t(θ)). Let ψ_τ(x)=τ-I(x<0) and η_t,τ=y_t-q_t(θ(τ)). Denote T_n(τ)=1√(n)∑_t=1^nw_tq̇_t(θ(τ))ψ_τ(η_t,τ) and J_n(τ)=12n∑_t=1^nf_t-1(F_t-1^-1(τ))w_tq̇_t(θ(τ))q̇_t^'(θ(τ)). Note that we have established the uniform consistency and finite dimensional convergence of θ_wn(τ) in Theorem <ref> and Corollary <ref>, respectively. By Corollary 2.2 of <cit.>, to show the weak convergence of θ_wn(τ), we need to prove the stochastic equicontinuity. As a result, it suffices to verify the following claims: (1) If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>–<ref> hold, then for any sequence of random variables u_n≡ u_n(τ) such that sup_τ∈𝒯 u_n(τ)=o_p(1), n[L_n( u_n+θ(τ),τ)-L_n(θ(τ),τ)]-n[L_n( u_n+θ(τ),τ)-L_n(θ(τ),τ)]=o_p(√(n) u_n+n u_n^2), where the remainder term is uniform in τ∈𝒯. (2) If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>–<ref> hold, then for any sequence of random variables u_n≡ u_n(τ) such that sup_τ∈𝒯 u_n(τ)=o_p(1), n[L_n( u_n+θ(τ),τ)-L_n(θ(τ),τ)]=-√(n) u_n^' T_n(τ)+√(n) u_n^'J_n(τ)√(n) u_n +o_p(√(n) u_n+n u_n^2), where the remainder term is uniform in τ∈𝒯. (3) If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>–<ref> hold, then as n→∞, T_n(·) ⇝𝔾_0(·) in (ℓ^∞(𝒯))^3, where 𝔾_0(·) is a zero mean Gaussian process with covariance kernel (min{τ_1, τ_2}-τ_1τ_2)Ω_0w(τ_1, τ_2) with Ω_0w(τ_1, τ_2)=E[w_t^2q̇_t(θ(τ_1))q̇_t^'(θ(τ_2))]. For Claim (1), it extends the pointwise result in Lemma <ref> to the uniform version. For θ=(ω,α_1,β_1)^'∈Θ, note that |α_1|≤c<∞ and 0<β_1≤ρ<1 for τ∈𝒯 by Assumption <ref>. Then Lemma <ref> holds for τ∈𝒯, that is, under Assumption <ref>, we have sup_τ∈𝒯sup_Θ|q_t(θ)-q_t(θ)| ≤ Cρ^tς_ρ and sup_τ∈𝒯sup_Θq̇_t(θ)-q̇_t(θ)≤ Cρ^t(ς_ρ+tς_ρ+ξ_ρ), where ς_ρ=∑_s=0^∞ρ^s|y_-s| and ξ_ρ=∑_s=0^∞sρ^s|y_-s| with the constant ρ∈ (0,1). As a result, (<ref>)–(<ref>) in the proof of Lemma <ref> hold uniformly in τ∈𝒯. For Claim (2), we generalize the notation in Lemma <ref> and decompose n[L_n( u+θ(τ))-L_n(θ(τ))] as follows: n[L_n( u+θ(τ))-L_n(θ(τ))] = -√(n) u^' T_n(τ)-√(n) u^'R_1n( u^*,τ)√(n) u + ∑_i=2^5R_in( u,τ), where T_n(τ)=n^-1/2∑_t=1^nw_tq̇_t(θ(τ))ψ_τ(y_t-q_t(θ(τ))), R_1n( u, τ) =12n∑_t=1^nw_tq̈_t( u+θ(τ))ψ_τ(y_t-q_t(θ(τ))), R_2n( u, τ) = u^'∑_t=1^nw_tq̇_t(θ(τ))E[ξ_1t( u, τ)|ℱ_t-1], R_3n( u, τ) = u^'∑_t=1^nw_tq̇_t(θ(τ))E[ξ_2t( u, τ)|ℱ_t-1], R_4n( u, τ) = u^'∑_t=1^nw_tq̇_t(θ(τ)){ξ_t( u, τ)-E[ξ_t( u, τ)|ℱ_t-1]}and R_5n( u, τ) = u^'2∑_t=1^nw_tq̈_t( u^*+θ(τ))ξ_t( u, τ) u with ψ_τ(x)=τ-I(x<0), u^* between 0 and u, ν_t( u, τ)=q_t( u+θ(τ))-q_t(θ(τ)), ξ_t( u, τ) =∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u, τ)s)-I(y_t≤ F_t-1^-1(τ))]ds ξ_1t( u, τ) =∫_0^1[I(y_t≤ F_t-1^-1(τ)+ u^'q̇_t(θ(τ))s)-I(y_t≤ F_t-1^-1(τ))]ds and ξ_2t( u, τ) =∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u, τ)s)-I(y_t≤ F_t-1^-1(τ)+ u^'q̇_t(θ(τ))s)]ds. Given the pointwise result in Lemma <ref>, to establish Claim (2) it suffices to show the stochastic equicontinuity related to R_in( u, τ) for i=1,…,5. 
For R_1n( u, τ), by Lemma <ref>(ii) and the fact that |ψ_τ(η_t,τ)|≤ 1, we have E[sup_τ∈𝒯sup_ u≤ηw_t q̈_t( u+θ(τ))ψ_τ(η_t,τ)]≤ CE[w_tsup_θ∈Θq̈_t(θ)]<∞. Moreover, E[w_t q̈_t( u+θ(τ))ψ_τ(η_t,τ)]=0 by iterated-expectation and the fact that E[ψ_τ(η_t,τ)| ℱ_t-1]=0. Since {y_t} is strictly stationary and ergodic under Assumption <ref>, then by Theorem 3.1 in <cit.>, we can show that sup_τ∈𝒯sup_ u≤ηR_1n( u, τ)=o_p(1). This together with sup_τ∈𝒯 u_n(τ)=o_p(1), implies that sup_τ∈𝒯R_1n( u_n(τ), τ)=o_p(1). We next focus on R_2n( u, τ). By Taylor expansion, we have R_2n( u, τ)=√(n) u^'J_n(τ)√(n) u+√(n) u^'Π_1n( u, τ)√(n) u, where J_n(τ)=(2n)^-1∑_t=1^nf_t-1(F_t-1^-1(τ))w_tq̇_t(θ(τ))q̇_t^'(θ(τ)) and Π_1n( u, τ)=1n∑_t=1^nw_tq̇_t(θ(τ))q̇_t^'(θ(τ))∫_0^1[f_t-1(F_t-1^-1(τ)+ u^'q̇_t(θ(τ))s^*)-f_t-1(F_t-1^-1(τ))]sds. By Taylor expansion and sup_x|ḟ_t-1(x)|<∞ under Assumption <ref>, for any η>0, we have sup_τ∈𝒯sup_ u≤ηΠ_1n( u, τ) ≤1n∑_t=1^nsup_τ∈𝒯sup_ u≤ηw_tq̇_t(θ(τ))q̇_t^'(θ(τ))sup_x|ḟ_t-1(x)| u^'q̇_t(θ(τ)) ≤ Cη·1n∑_t=1^nw_tsup_τ∈𝒯q̇_t(θ(τ))^3. Then by Assumption <ref>(ii) and Lemma <ref>(i), it holds that E(sup_τ∈𝒯sup_ u≤ηΠ_1n( u, τ))≤ Cη E(w_tsup_Θq̇_t(θ)^3) ≤ Cη tends to 0 as η→ 0. Similar to (<ref>) and (<ref>), for sup_τ∈𝒯 u_n(τ)=o_p(1), we can show that sup_τ∈𝒯Π_1n( u_n, τ)=o_p(1). It follows that sup_τ∈𝒯[R_2n( u_n, τ)-√(n) u_n^'J_n(τ)√(n) u_n]=o_p(n u_n^2). For R_3n( u, τ), by Taylor expansion, the Cauchy-Schwarz inequality and the strict stationarity and ergodicity of y_t under Assumption <ref>, together with (<ref>), Assumption <ref>(ii), Lemma <ref> and sup_xf_t-1(x)<∞ by Assumption <ref>, for any η>0, we have E(sup_τ∈𝒯sup_ u≤η|R_3n( u)|n u^2) ≤ ηn∑_t=1^nE{w_tsup_τ∈𝒯q̇_t(θ(τ))12sup_xf_t-1(x)sup_θ∈Θq̈_t(θ)} ≤ Cη E{√(w_t)sup_θ∈Θq̇_t(θ)·√(w_t)sup_θ∈Θq̈_t(θ)} ≤ Cη[E(w_tsup_θ∈Θq̇_t(θ)^2)]^1/2[E(w_tsup_θ∈Θq̈_t(θ)^2)]^1/2 tends to 0 as η→ 0. Similar to (<ref>) and (<ref>), we can show that sup_τ∈𝒯|R_3n( u_n,τ)|=o_p(n u_n^2). For R_4n( u, τ) and R_5n( u, τ), by Lemma <ref> and sup_τ∈𝒯 u_n(τ)=o_p(1), we have sup_τ∈𝒯R_4n( u_n, τ)=o_p(√(n) u_n+n u_n^2) and sup_τ∈𝒯R_5n( u_n, τ)=o_p(n u_n^2). Combining (<ref>)–(<ref>), it follows that Claim (2) holds. Finally, we consider Claim (3). By the Lindeberg–Lévy theorem for martingales <cit.> and the Cramér-Wold device, together with the stationarity and ergodicity of y_t by Assumption <ref> and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, the finite-dimensional convergence of T_n(τ) has been established in the proof of Corollary <ref>, that is, T_n(τ)→_d N( 0,τ(1-τ)Ω_0w(τ,τ)) as n→∞, where Ω_0w(τ_1, τ_2)=E[w_t^2q̇_t(θ(τ_1))q̇_t^'(θ(τ_2))]. It suffices to verify the stochastic equicontinuity of e_n^' T_n(τ) in ℓ^∞(𝒯) with e_n∈ℝ^3 being an arbitrary vector. Without loss of generality, we will assume that e_n is a sequence of vectors with e_n=1. It holds that e_n^' T_n(τ_2)- e_n^' T_n(τ_1) = 1√(n)∑_t=1^n( e_n^' a_t+ e_n^' b_t), where a_t=w_t[q̇_t(θ(τ_2))-q̇_t(θ(τ_1))] ψ_τ_1(y_t-q_t(θ(τ_1))) and b_t=w_tq̇_t(θ(τ_2))[c_t-E(c_t|ℱ_t-1)] with c_t≡ c_t(τ_1,τ_2)=I(y_t<q_t(θ(τ_1)))-I(y_t<q_t(θ(τ_2))). By Lemma <ref>(ii), the fact that |ψ_τ(x)|<1, the strict stationarity and ergodicity of y_t under Assumption <ref>, and E(w_t^2Δ_ρ,t^2)<∞ under Assumption <ref> with Δ_ρ,t=1+∑_j=1^∞ρ^j-1|y_t-j|+∑_j=2^∞(j-1)ρ^j-2|y_t-j|+∑_j=3^∞(j-1)(j-2)ρ^j-3|y_t-j|+∑_j=4^∞(j-1)(j-2)(j-3)ρ^j-4|y_t-j|, we have E[( e_n^' a_t)^2] ≤ E( e_n^2w_t^2q̇_t(θ(τ_2))-q̇_t(θ(τ_1))^2) ≤ C(τ_2-τ_1)^2. 
Moreover, note that I(X<a)-I(X<b)=I(b< X < a)-I(b> X > a) and E{[I(X<a)-I(X<b)]^2}=E[I(b< X < a)+I(b> X > a)]=|Pr(X<a)-Pr(X<b)|. These together with (X)≤ E(X^2), imply that E{[c_t-E(c_t|ℱ_t-1)]^2|ℱ_t-1}≤ |F_t-1(q_t(θ(τ_2)))-F_t-1(q_t(θ(τ_1)))|=|τ_2-τ_1|. Then by iterative-expectation, together with E(w_t^2q̇_t(θ(τ_2))^2)<∞ implied by Lemma <ref>(i), we have E[( e_n^' b_t)^2] ≤ E{ e_n^2w_t^2q̇_t(θ(τ_2))^2 E{[c_t-E(c_t|ℱ_t-1)]^2|ℱ_t-1}}≤ C|τ_2-τ_1|. Thus, by the Cauchy-Schwarz inequality and E( a_t|ℱ_t-1)=E( b_t|ℱ_t-1)= 0, together with (<ref>) and (<ref>), it can be verified that E[ e_n^' T_n(τ_2)- e_n^' T_n(τ_1)]^2= E[( e_n^' a_t)^2] + E[( e_n^' b_t)^2] + 2E[( e_n^' a_t)( e_n^' b_t)] ≤ 4{E[( e_n^' a_t)^2]E[( e_n^' b_t)^2]}^1/2≤ C|τ_2-τ_1|^3/2. Therefore, for any τ_1 ≤τ≤τ_2 and e_n∈ℝ^3, by the Cauchy-Schwarz inequality, we have E{| e_n^' T_n(τ)- e_n^' T_n(τ_1)|| e_n^' T_n(τ)- e_n^' T_n(τ_2)|} ≤ {E[ e_n^' T_n(τ)- e_n^' T_n(τ_1)]^2}^1/2{E[ e_n^' T_n(τ)- e_n^' T_n(τ_2)]^2}^1/2≤ C|τ_2-τ_1|^3/2. This proves the asymptotic tightness. Then by Theorem 13.5 of <cit.>, we establish the weak convergence of T_n(τ) in (ℓ^∞(𝒯))^3. By the ergodic theorem and Assumptions <ref>–<ref>, we have J_n(τ)=Ω_1w(τ)/2+o_p(1) uniformly on 𝒯, where Ω_1w(τ)=E[f_t-1(F_t-1^-1(τ))w_tq̇_t(θ(τ))q̇_t^'(θ(τ))]. Moreover, Ω_1w(τ) is Lipschitz continuous on 𝒯 by Assumptions <ref>–<ref>. This together with the positive definiteness of Ω_1w(τ), implies that Ω_1w^-1(τ) is continuous on 𝒯, and thus Ω_1w^-1(τ) T_n(τ) is tight on 𝒯 (Theorem 7.3 of <cit.>). Similar to the proof of Corollary <ref>, then by Claims (1)–(3), we can show that √(n)(θ_wn(·)-θ(·)) = Ω_1w^-1(·) T_n(·) + o_p(1) ⇝𝔾(·) in (ℓ^∞(𝒯))^3, where the remainder term is uniform in τ∈𝒯, and 𝔾(·) is a zero mean Gaussian process with covariance kernel Σ(τ_1, τ_2). §.§ Proof of Corollary <ref> Recall that v_n(τ)=Rθ_wn(τ)-r=R[θ_wn(τ)-∫_𝒯θ_wn(τ)dτ] and v_0(τ)=R[𝔾(τ)-∫_𝒯𝔾(τ)dτ]. By continuous mapping theorem and (<ref>) by Theorem <ref>, under the null hypothesis H_0 that Rθ(τ)=r, we have √(n)v_n(τ) =R√(n)(θ_wn(τ)-θ(τ))-√(n)(r-r)+√(n)(Rθ(τ)-r) H_0=R√(n)(θ_wn(τ)-θ(τ))-R∫_𝒯√(n)(θ_wn(τ)-θ(τ))dτH_0→ v_0(τ). Since the covariance function of v_0(·) is nondegenerate by assumption, then by Theorem 11.1 in <cit.>, the distribution of functionals S=∫_𝒯v_0^2(τ)dτ is absolutely continuous on (0,∞). As a result, by continuous mapping theorem, it follows that S_n=n∫_𝒯v_n^2(τ)dτ→_d S≡∫_𝒯 v_0^2(τ)dτ as n→∞. §.§ Proof of Theorem <ref> Recall that q_t,τ(φ) =Q_τ(λ)h_t(ϕ) and q_t,τ(φ) =Q_τ(λ)h_t(ϕ), where φ=(ϕ^', λ)^'=(a_0, a_1, b_1, λ)^', h_t(ϕ)=a_0(1-b_1)^-1+a_1∑_j=1^∞ b_1^j-1|y_t-j| and h_t(ϕ)=a_0(1-b_1)^-1+a_1∑_j=1^t-1 b_1^j-1|y_t-j|. Define L^*(φ)=E[w_tℓ_t^*(φ)], L_n^*(φ)=n^-1∑_t=1^nw_tℓ_t^*(φ) and L_n^*(φ)=n^-1∑_t=1^nw_tℓ_t^*(φ), where ℓ_t^*(φ)=∑_k=1^Kρ_τ_k(y_t-q_t,τ_k(φ)) and ℓ_t^*(φ)=∑_k=1^Kρ_τ_k(y_t-q_t,τ_k(φ)). (I) Proof of (i) φ̌_wn→_p φ_0^* To show the consistency, we first verify the following claims: (1) sup_Φ|L_n^*(φ)-L_n^*(φ)|=o_p(1); (2) E[sup_Φw_t|ℓ_t^*(φ)|]<∞; (3) L^*(φ) has a unique minimum at φ_0^*. For Claim (1), by the Lipschitz continuity of ρ_τ(·), strict stationarity and ergodicity of {y_t} by Assumption <ref>, E(w_tς_ρ)<∞ by Assumption <ref> and Lemma <ref>(i), we have sup_Φ|L_n^*(φ)-L_n^*(φ)| ≤1n∑_t=1^n∑_k=1^Ksup_Φw_t|ρ_τ_k(y_t-q_t,τ_k(φ))-ρ_τ_k(y_t-q_t,τ_k(φ))| ≤Cn∑_t=1^n∑_k=1^Ksup_Φw_t|q_t,τ_k(φ)-q_t,τ_k(φ)| ≤CKn∑_t=1^nρ^tw_tς_ρ=o_p(1). We next show Claim (2). 
Since max{a_0, a_1}≤c<∞ and 0<b_1≤ρ<1 by Assumption <ref>, E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, and by the fact that |ρ_τ(x)|≤ |x| and Assumption <ref>, it holds that E[sup_Φw_t|ℓ_t^*(φ)|] ≤∑_k=1^K{E[sup_Φw_t|y_t|]+E[sup_Φ|Q_τ_k(λ)|w_th_t(ϕ)]} ≤ KE(w_t|y_t|) + CKcE[w_t(1+∑_j=1^∞ρ^j-1|y_t-j|)] < ∞. Hence, Claim (2) is verified. For Claim (3), it holds by definition of φ_0^* in (<ref>) and Assumption <ref>(i). Note that L^*(φ) is a measurable function of y_t in Euclidean space for each φ∈Φ, and L(φ) is a continuous function of φ∈Φ for each y_t. Then by Theorem 3.1 of <cit.>, together with Claim (1) and the strict stationarity and ergodicity of {y_t} implied by Assumption <ref>, we have sup_φ∈Φ|L_n^*(φ)-L^*(φ)|=o_p(1). This together with Claim (1), implies that sup_φ∈Φ|L_n^*(φ)-L^*(φ)|=o_p(1). We next verify the consistency using the standard consistency argument in <cit.>. For any c>0, with probability tending to 1 uniformly in ϵ≥ c, we have: (a) by φ̌_wn=_φ∈ΦL_n^*(φ), L_n^*(φ̌_wn) ≤L_n^*(φ_0^*) + ϵ/3; and (b) by (<ref>), L^*(φ̌_wn) < L_n^*(φ̌_wn) + ϵ/3 and L_n^*(φ_0^*) < L^*(φ_0^*) + ϵ/3. Combining (<ref>)–(<ref>), with probability tending to 1, we have L^*(φ̌_wn) < L_n^*(φ̌_wn) + ϵ/3 ≤L_n^*(φ_0^*) + 2ϵ/3 < L^*(φ_0^*) + ϵ. Pick any δ>0, let B_δ be a ball centered at φ_0^* with radius δ>0. Then B^c≡Φ/B_δ is compact, and thus inf_φ∈ B^cL^*(φ) exists. Denote ϵ=inf_φ∈ B^cL^*(φ)-L^*(φ_0^*). Since φ_0^*=_φ∈ΦL^*(φ) is unique by Claim (3), then ϵ>0. For any ϵ>0, we can pick c>0 such that Pr(ϵ≥ c)>1-ϵ. Together with (<ref>), it follows that with probability becoming greater than 1-ϵ: L^*(φ̌_wn) < L^*(φ_0^*) + inf_φ∈ B^cL^*(φ)-L^*(φ_0^*) = inf_φ∈ B^cL^*(φ). Thus with probability becoming greater than 1-ϵ, φ̌_wn-φ_0^*≤δ. By the arbitrariness of ϵ, it implies that φ̌_wn-φ_0^*≤δ with probability tending to 1. The proof of (i) φ̌_wn→_p φ_0^* is complete. (II) Proof of (ii) √(n)(φ̌_wn-φ_0^*)→_d N( 0,Σ_w^*) Recall that L_n^*(φ)=n^-1∑_k=1^K∑_t=1^nw_tρ_τ_k(y_t-q_t,τ_k(φ)). For u∈ℝ^4, define H_n^*( u)=n[L_n^*(φ_0^*+ u)-L_n^*(φ_0^*)]. Denote ǔ_n=φ̌_wn-φ_0^*. By the consistency of φ̌_wn, it holds that ǔ_n=o_p(1). Note that ǔ_n is the minimizer of H_n^*( u), since φ̌_wn minimizes L_n^*(φ). This together with Lemmas <ref>–<ref>, implies that H_n^*(ǔ_n)= -√(n)ǔ_n^' T_n^*+√(n)ǔ_n^'J^*√(n)ǔ_n+o_p(√(n)ǔ_n+nǔ_n^2) ≥ -√(n)ǔ_n[ T_n^*+o_p(1)]+nǔ_n^2[λ_min+o_p(1)], where λ_min is the smallest eigenvalue of J^*=Ω_1w^*/2 with Ω_1w^* defined before Theorem <ref>, and T_n^*=n^-1/2∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)ψ_τ_k(e_t,τ_k^*) with e_t,τ_k^*=y_t-q_t,τ_k(φ_0^*). Denote X_t=∑_k=1^K w_tq̇_t,τ_k(φ_0^*)ψ_τ_k(y_t-q_t,τ_k(φ_0^*)), then T_n^*=n^-1/2∑_t=1^n X_t. By definition of φ_0^* in (<ref>), we have E( X_t)= 0. Moreover, by Lemma 2.1 of <cit.> and Assumption <ref>, for any nonzero vector c∈ℝ^4, we can show that c^' X_t is also a strictly stationary and α-mixing sequence with the mixing coefficient α(n) satisfying ∑_n≥ 1[α(n)]^1-2/δ<∞ for some δ>2. As a result, by central limit theorem for α-mixing process given in Theorem 2.21 of <cit.> and the Cramér-Wold device, T_n^* converges in distribution to a normal random variable with mean zero and variance matrix Ω_0w^*=E( X_t X_t^')+n^-1∑_t≠ s^nE( X_t X_s^') or equivalently Ω_0w^*=E( X_t X_t^')+∑_ℓ=1^∞[E( X_t X_t-ℓ^')+E( X_t-ℓ X_t^')] as n→∞. Note that λ_min>0 as Ω_1w^*=2J^* is positive definite, and T_n^*<∞ by Assumptions <ref>–<ref>. Since H_n^*(ǔ_n)≤ 0, then we have √(n)ǔ_n≤ [λ_min+o_p(1)]^-1[ T_n^*+o_p(1)]=O_p(1). 
This together with the consistency of φ̌_wn, verifies the √(n)-consistency of φ̌_wn, i.e. √(n)(φ̌_wn-φ_0^*)=O_p(1). Let √(n) u_n^*=J^*-1 T_n^*/2=Ω_1w^*-1 T_n^*, then we have √(n) u_n^*→ N( 0,Σ_w^*) in distribution as n→∞, where Σ_w^*=Ω_1w^*-1Ω_0w^*Ω_1w^*-1. Therefore, it suffices to show that √(n) u_n^*-√(n)ǔ_n=o_p(1). By (<ref>) and (<ref>), we have H_n^*(ǔ_n)= -√(n)ǔ_n^' T_n^*+√(n)ǔ_n^'J^*√(n)ǔ_n+o_p(1) = -2√(n)ǔ_n^'J^*√(n) u_n^*+√(n)ǔ_n^'J^*√(n)ǔ_n+o_p(1), and H_n^*( u_n^*)= -√(n) u_n^*' T_n^*+√(n) u_n^*'J^*√(n) u_n^*+o_p(1) =-√(n) u_n^*'J^*√(n) u_n^*+o_p(1). It follows that H_n^*(ǔ_n)-H_n^*( u_n^*)= (√(n)ǔ_n-√(n) u_n^*)^'J^*(√(n)ǔ_n-√(n) u_n^*)+o_p(1) ≥ λ_min√(n)ǔ_n-√(n) u_n^*^2+o_p(1). Since H_n^*(ǔ_n)-H_n^*( u_n^*)=n[L_n^*(φ_0^*+ǔ_n)-L_n^*(φ_0^*+ u_n^*)]≤ 0 a.s., then (<ref>) implies that √(n)ǔ_n-√(n) u_n^*=o_p(1). We verify the asymptotic normality of φ̌_wn. (III) Proof of (iii) √(n)(θ̌_wn^*(τ)-θ(τ)-B(τ))→_d N( 0,g_τ(φ_0^*)Σ_w^*g_τ^'(φ_0^*)) Based on the proof of Theorem <ref>(i)–(ii), it follows that √(n)(φ̌_wn-φ_0^*)=Ω_1w^*-11√(n)∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)ψ_τ_k(e_t,τ_k^*) + o_p(1), where e_t,τ_k^*=y_t-q_t,τ_k(φ_0^*). By Delta method and the √(n)-consistency of φ̌_wn, we have √(n)[g_τ(φ̌_wn)-g_τ(φ_0^*)] = ġ_τ(φ_0^*)√(n)(φ̌_wn-φ_0^*) + o_p(1) = ġ_τ(φ_0^*)Ω_1w^*-11√(n)∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)ψ_τ_k(e_t,τ_k^*) + o_p(1) →_d N( 0,g_τ(φ_0^*)Σ_w^*g_τ^'(φ_0^*)), where g_τ(·): ℝ^4→ℝ^3 is a measurable transformation function on φ̌_wn such that g_τ(φ̌_wn)=θ̌_wn^*(τ)=(ǎ_0wnQ_τ(λ̌_wn)/(1-b̌_1wn),ǎ_1wnQ_τ(λ̌_wn),b̌_1wn)^', and ġ_τ(φ_0^*)=[ Q_τ(λ_0)1-b_10 0 Q_τ(λ_0)a_00(1-b_10)^2 Q̇_τ(λ_0)a_001-b_10; 0 Q_τ(λ_0) 0 Q̇_τ(λ_0)a_10; 0 0 1 0 ]. The proof of this theorem is hence accomplished. §.§ Lemmas for Corollary <ref> and Theorems <ref>–<ref> This section provides seven preliminary lemmas with proofs, where Lemma <ref> gives basic results for all lemmas and theorems, Lemma <ref> is used to handle initial values, Lemmas <ref>–<ref> are used to prove Corollary <ref>, and Lemmas <ref>–<ref> are basic results to show Theorem <ref>. Specifically, Lemma <ref> verifies the stochastic differentiability condition defined by <cit.>, and the bracketing method in <cit.> is used for their proofs. Lemma <ref> is used to obtain the √(n)-consistency and asymptotic normality of θ_wn(τ), and its proof needs Lemma <ref>. Based on Lemma <ref>, Lemma <ref> will be used to handle initial values in establishing asymptotic normality. Lemma <ref> provides basic results to verify the stochastic equicontinuity, and Lemma <ref> is used to establish the weak convergence of θ_wn(τ). If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>, <ref> and <ref> hold, then we have (i) E(w_tsup_Θq̇_t(θ)^κ) < ∞ for κ=1, 2, 3; (ii) E(w_tsup_Θq̈_t(θ)^κ) < ∞ for κ=1, 2. Let ς_ρ=∑_s=0^∞ρ^s|y_-s| and ξ_ρ=∑_s=0^∞sρ^s|y_-s| be positive random variables depending on a constant ρ∈ (0,1). If Assumption <ref> holds, then we have (i) sup_Θ|q_t(θ)-q_t(θ)| ≤ Cρ^tς_ρ; (ii) sup_Θq̇_t(θ)-q̇_t(θ)≤ Cρ^t(ς_ρ+tς_ρ+ξ_ρ). Under Assumptions <ref>–<ref>, then for any sequence of random variables u_n such that u_n=o_p(1), if E|y_t|^s<∞ for some 0<s≤ 1, then it holds that ζ_n( u_n)=o_p(√(n) u_n+n u_n^2), where ζ_n( u)= u^'∑_t=1^nw_tq̇_t(θ(τ)){ξ_t( u)-E[ξ_t( u)|ℱ_t-1]} with ξ_t( u) =∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u)s)-I(y_t≤ F_t-1^-1(τ))]ds and ν_t( u)=q_t( u+θ(τ))-q_t(θ(τ)). 
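As a side remark before the proofs, the geometric decay of the initial-value effect on q_t(θ) stated in part (i) of the second lemma above is easy to check numerically. The sketch below is purely illustrative and rests on assumptions that are not part of the paper: the parameter values (ω, α_1, β_1)=(0.1, 0.2, 0.7) and the Student-t input series are placeholders, a long pre-sample of length m stands in for the infinite past, and the bound is evaluated with ρ=β_1 and C=α_1/β_1.

import numpy as np

rng = np.random.default_rng(0)
omega, alpha1, beta1 = 0.1, 0.2, 0.7    # illustrative parameter values, not taken from the paper
m, n = 500, 50                          # m pre-sample observations y_{-m+1},...,y_0 and n in-sample ones
y = np.abs(rng.standard_t(df=5, size=m + n))   # placeholder stationary series; only |y_t| enters q_t(theta)
pre, obs = y[:m], y[m:]                 # |y_{-m+1}|,...,|y_0| and |y_1|,...,|y_n|

def q_full(t):
    # q_t(theta): the long pre-sample serves as a proxy for the infinite past
    hist = np.concatenate([pre, obs[:t - 1]])[::-1]        # |y_{t-1}|, |y_{t-2}|, ...
    return omega + alpha1 * np.sum(beta1 ** np.arange(hist.size) * hist)

def q_trunc(t):
    # the feasible version that drops every observation before t = 1
    hist = obs[:t - 1][::-1]
    return omega + alpha1 * np.sum(beta1 ** np.arange(hist.size) * hist)

diff = np.array([abs(q_full(t) - q_trunc(t)) for t in range(1, n + 1)])
sigma = np.sum(beta1 ** np.arange(m) * pre[::-1])                # finite-sample proxy for varsigma_rho
bound = (alpha1 / beta1) * beta1 ** np.arange(1, n + 1) * sigma  # C * rho^t * varsigma_rho
print(bool(np.all(diff <= bound + 1e-12)))                       # True: the truncation error decays geometrically in t

Repeating the same experiment with β_1 closer to one shows a slower, but still geometric, decay of the truncation error.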
If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>–<ref> hold, then for any sequence of random variables u_n such that u_n=o_p(1), we have n[L_n( u_n+θ(τ))-L_n(θ(τ))]= -√(n) u_n^' T_n+√(n) u_n^'J_n√(n) u_n +o_p(√(n) u_n+n u_n^2), where L_n(θ)=n^-1∑_t=1^nw_tρ_τ(y_t-q_t(θ)), and T_n=1√(n)∑_t=1^nw_tq̇_t(θ(τ))ψ_τ(η_t,τ) and J_n=12n∑_t=1^nf_t-1(F_t-1^-1(τ))w_tq̇_t(θ(τ))q̇_t^'(θ(τ)) with ψ_τ(x)=τ-I(x<0) and η_t,τ=y_t-q_t(θ(τ)). If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>–<ref> hold, then for any sequence of random variables u_n such that u_n=o_p(1), we have n[L_n( u_n+θ(τ))-L_n(θ(τ))]-n[L_n( u_n+θ(τ))-L_n(θ(τ))]=o_p(√(n) u_n+n u_n^2), where L_n(θ)=n^-1∑_t=1^nw_tρ_τ(y_t-q_t(θ)) and L_n(θ)=n^-1∑_t=1^nw_tρ_τ(y_t-q_t(θ)). Let Δ_ρ,t=1+∑_j=1^∞ρ^j-1|y_t-j|+∑_j=2^∞(j-1)ρ^j-2|y_t-j|+∑_j=3^∞(j-1)(j-2)ρ^j-3|y_t-j|+∑_j=4^∞(j-1)(j-2)(j-3)ρ^j-4|y_t-j| be positive random variables depending on a constant ρ∈ (0,1). If Assumptions <ref> and <ref> hold, for τ_1,τ_2∈𝒯 and u∈Λ with Λ={ u∈ℝ^3: u+θ(τ) ∈Θ}, then we have (i) |q_t(θ(τ_2))-q_t(θ(τ_1))| ≤ C|τ_2-τ_1|Δ_ρ,t; (ii) q̇_t( u+θ(τ_2))-q̇_t( u+θ(τ_1))≤ C|τ_2-τ_1|Δ_ρ,t; (iii) q̈_t( u+θ(τ_2))-q̈_t( u+θ(τ_1))≤ C|τ_2-τ_1|Δ_ρ,t. Under Assumptions <ref>–<ref>, for any η>0, we have sup_τ∈𝒯sup_ u≤η|R_4n( u,τ)|√(n) u+n u^2=o_p(1) and sup_τ∈𝒯sup_ u≤η|R_5n( u,τ)|n u^2=o_p(1), where R_4n( u, τ) and R_5n( u, τ) are defined in the proof of Theorem <ref>. Recall q̇_t(θ) in (<ref>) and q̈_t(θ) in (<ref>), where θ=(ω, α_1,β_1)^'. For any s∈(0,1], using the inequality (x+y)^s≤ x^s+y^s for x, y≥ 0, we have q̇_t(θ)≤ 1+∑_j=1^∞β^j-1_1|y_t-j|+|α_1|∑_j=2^∞(j-1)β^j-2_1|y_t-j| and q̈_t(θ)≤ 2∑_j=2^∞(j-1)β^j-2_1|y_t-j| + |α_1|∑_j=3^∞(j-1)(j-2)β^j-3_1|y_t-j|. For κ=1,2,3, denote M_κ=max_j{E(w_t|y_t-j|^κ)}, then it follows that M_κ<∞ since E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 under Assumption <ref>. For κ=1, by the strict stationarity and ergodicity of y_t under Assumption <ref>, max{|ω|,|α_1|}<c<∞ and β_1≤ρ<1 by Assumption <ref>, and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, it holds that E(w_tsup_Θq̇_t(θ)) ≤ E(w_t) + M_1∑_j=1^∞ρ^j-1+cM_1∑_j=2^∞(j-1)ρ^j-2 <∞ and E(w_tsup_Θq̈_t(θ)) ≤ 2M_1∑_j=2^∞(j-1)ρ^j-2 + cM_1∑_j=3^∞(j-1)(j-2)ρ^j-3<∞. Thus, (i) and (ii) hold for κ=1. For κ=2, under Assumptions <ref> and <ref>, by the Cauchy-Schwarz inequality, we have E[w_t(∑_j=1^∞ρ^j-1|y_t-j|)^2]≤∑_i=1^∞∑_j=1^∞ρ^i+j-2[E(w_t|y_t-i|^2)]^1/2[E(w_t|y_t-j|^2)]^1/2≤M_2(1-ρ)^2<∞, E[w_t(∑_j=1^∞ (j-1)ρ^j-2|y_t-j|)^2] ≤ M_2(∑_j=1^∞(j-1)ρ^j-2)^2<∞ and E[w_t(∑_j=1^∞ (j-1)(j-2)ρ^j-3|y_t-j|)^2] ≤ M_2(∑_j=1^∞(j-1)(j-2)ρ^j-3)^2<∞. Then under Assumptions <ref>, <ref> and <ref>, by (a+b)^2=a^2+2ab+b^2 and the Cauchy-Schwarz inequality, we can show that E(w_tsup_Θq̇_t(θ)^2) ≤ E(w_t)+E[w_t(∑_j=1^∞ρ^j-1|y_t-j|)^2] + c^2E[w_t(∑_j=1^∞ (j-1)ρ^j-2|y_t-j|)^2] + 2c∑_j=1^∞ρ^j-1E(w_t|y_t-j|) + 2c∑_j=1^∞(j-1)ρ^j-2E(w_t|y_t-j|) + 2c[Ew_t(∑_j=1^∞ρ^j-1|y_t-j|)^2]^1/2[Ew_t(∑_j=1^∞ (j-1)ρ^j-2|y_t-j|)^2]^1/2 <∞, and E(w_tsup_Θq̈_t(θ)^2) ≤ 4E[w_t(∑_j=1^∞ (j-1)ρ^j-2|y_t-j|)^2] + c^2E[w_t(∑_j=1^∞ (j-1)(j-2)ρ^j-3|y_t-j|)^2] + 4c[Ew_t(∑_j=1^∞ (j-1)ρ^j-2|y_t-j|)^2]^1/2[Ew_t(∑_j=1^∞ (j-1)(j-2)ρ^j-3|y_t-j|)^2]^1/2 <∞. Hence, (i) and (ii) hold for κ=2. For κ=3, under Assumptions <ref> and <ref>, by Hölder's inequality, we have E[w_t(∑_j=1^∞ρ^j-1|y_t-j|)^3] ≤ ∑_i=1^∞∑_j=1^∞∑_k=1^∞ρ^i+j+k-3[E(w_t|y_t-i|^3)]^1/3[E(w_t|y_t-j|^3)]^1/3[E(w_t|y_t-k|^3)]^1/3≤M_3(1-ρ)^3<∞, E[w_t(∑_j=1^∞ (j-1)ρ^j-2|y_t-j|)^3] ≤ M_3(∑_j=1^∞(j-1)ρ^j-2)^3<∞ and E[w_t(∑_j=1^∞ (j-1)(j-2)ρ^j-3|y_t-j|)^3] ≤ M_3(∑_j=1^∞(j-1)(j-2)ρ^j-3)^3<∞. 
Then similar to the proof for κ=2, under Assumptions <ref>, <ref> and <ref>, by (a+b)^3=a^3+3a^2b+3ab^2+b^3 and Hölder's inequality, we can show that (i) holds for κ=3. The proof of this lemma is complete. Recall that for θ=(ω, α_1,β_1)^', q_t(θ) =ω + α_1∑_j=1^∞β_1^j-1|y_t-j|, q_t(θ)=ω + α_1∑_j=1^t-1β_1^j-1|y_t-j|, q̇_t(θ)=(1,∑_j=1^∞β^j-1_τ|y_t-j|,α_τ∑_j=2^∞(j-1)β^j-2_τ|y_t-j|)^' and q̇_t(θ)=(1,∑_j=1^t-1β^j-1_1|y_t-j|,α_1∑_j=2^t-1(j-1)β^j-2_1|y_t-j|)^'. It follows that q_t(θ)-q_t(θ) =α_1∑_j=t^∞β_1^j-1|y_t-j| and q̇_t(θ)-q̇_t(θ) =(0,∑_j=t^∞β^j-1_1|y_t-j|,α_1∑_j=t^∞(j-1)β^j-2_1|y_t-j|)^'. Since |α_1|≤c<∞ and 0<β_1≤ρ<1 by Assumption <ref>, it holds that sup_Θ|q_t(θ)-q_t(θ)| ≤ |α_1|∑_j=t^∞β_1^j-1|y_t-j| ≤cρ^t-1∑_s=0^∞ρ^s|y_-s|≤ Cρ^tς_ρ, and sup_Θq̇_t(θ)-q̇_t(θ) ≤sup_Θ[∑_j=t^∞β^j-1_1|y_t-j|+|α_1|∑_j=t^∞(j-1)β^j-2_1|y_t-j|] ≤ρ^t-1ς_ρ+ctρ^t-2ς_ρ+cρ^t-1∑_s=0^∞(s-1)ρ^s-1|y_-s| ≤ Cρ^t(ς_ρ+tς_ρ+ξ_ρ), where ς_ρ=∑_s=0^∞ρ^s|y_-s| and ξ_ρ=∑_s=0^∞sρ^s|y_-s|. The proof of this lemma is complete. Recall that θ=(ω, α_1,β_1)^' and its true parameter vector θ(τ)=(ω(τ), α_1(τ), β_1(τ))^'. For u∈ℝ^d with d=3, note that |ζ_n( u)| ≤√(n) u∑_j=1^3|1√(n)∑_t=1^nm_t,j{ξ_t( u)-E[ξ_t( u)|ℱ_t-1]}|, where m_t,j=w_t∂ q_t(θ(τ))/∂θ_jτ with θ_jτ being the jth element of θ. For 1≤ j≤ d, define g_t=max_j{m_t,j,0} or g_t=max_j{-m_t,j,0}. Let ϱ_t( u)=g_tξ_t( u) and define D_n( u)=1√(n)∑_t=1^n{ϱ_t( u)-E[ϱ_t( u)|ℱ_t-1]}. To establish Lemma <ref>, it suffices to show that, for any δ>0, sup_ u≤δ|D_n( u)|1+√(n) u=o_p(1). We follow the method in Lemma 4 of <cit.> to verify (<ref>). Let 𝔉={ϱ_t( u): u≤δ} be a collection of functions indexed by u. First, we verify that 𝔉 satisfies the bracketing condition defined on page 304 of <cit.>. Let B_r( v) be an open neighborhood of v with radius r>0, and define a constant C_0 to be selected later. For any ϵ>0 and 0< r≤δ, there exists a sequence of small cubes {B_ϵ r/C_0( u_i)}_i=1^K(ϵ) to cover B_r( 0), where K(ϵ) is an integer less than Cϵ^-d, and the constant C is not depending on ϵ and r; see <cit.>, page 227. Denote V_i(r)=B_ϵ r/C_0( u_i)⋂ B_r(0), and let U_1(r)=V_1(r) and U_i(r)=V_i(r)-⋃_j=1^i-1V_j(r) for i≥ 2. Note that {U_i(r)}_i=1^K(ϵ) is a partition of B_r(0). For each u_i∈ U_i(r) with 1≤ i ≤ K(ϵ), define the following bracketing functions ϱ_t^L( u_i) = g_t∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u_i)s-ϵ rC_0q̇_t(θ(τ)))-I(y_t≤ F_t-1^-1(τ))]ds, ϱ_t^U( u_i) = g_t∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u_i)s+ϵ rC_0q̇_t(θ(τ)))-I(y_t≤ F_t-1^-1(τ))]ds. Since the indicator function I(·) is non-decreasing and g_t≥ 0, for any u ∈ U_i(r), we have ϱ_t^L( u_i)≤ϱ_t( u)≤ϱ_t^U( u_i). Furthermore, by Taylor expansion, it holds that E[ϱ_t^U( u_i)-ϱ_t^L( u_i)|ℱ_t-1]≤ϵ rC_0·2sup_xf_t-1(x) w_tq̇_t(θ(τ))^2. Denote ℵ_t=2sup_xf_t-1(x)w_tq̇_t(θ(τ))μ_t. By Assumption <ref>, we have sup_xf_t-1(x)<∞. Choose C_0=E(ℵ_t). Then by iterated-expectation, Assumption <ref>(ii) and Lemma <ref>(i), it follows that E[ϱ_t^U( u_i)-ϱ_t^L( u_i)]=E{E[ϱ_t^U( u_i)-ϱ_t^L( u_i)|ℱ_t-1]}≤ϵ r. This together with (<ref>), implies that the family 𝔉 satisfies the bracketing condition. Put r_k=2^-kδ. Let B(k)=B_r_k(0) and A(k) be the annulus B(k)∖ B(k+1). From the bracketing condition, for fixed ϵ>0, there is a partition U_1(r_k), U_2(r_k), …, U_K(ϵ)(r_k) of B(k). First, consider the upper tail case. For u ∈ U_i(r_k), by (<ref>), it holds that D_n( u) ≤ 1√(n)∑_t=1^n{ϱ_t^U( u_i)-E[ϱ_t^U( u_i)|ℱ_t-1]}+1√(n)∑_t=1^nE[ϱ_t^U( u_i)-ϱ_t^L( u_i)|ℱ_t-1] ≤ D_n^U( u_i)+√(n)ϵ r_k1nC_0∑_t=1^nℵ_t, where D_n^U( u_i)=1√(n)∑_t=1^n{ϱ_t^U( u_i)-E[ϱ_t^U( u_i)|ℱ_t-1]}. 
Define the event E_n={ω: 1nC_0∑_t=1^nℵ_t(ω) < 2 }. For u ∈ A(k), 1+√(n) u>√(n)r_k+1=√(n)r_k/2. Then by (<ref>) and the Chebyshev's inequality, we have Pr(sup_ u ∈ A(k)D_n( u)1+√(n) u>6ϵ, E_n) ≤ Pr(max_1 ≤ i ≤ K(ϵ)sup_ u ∈ U_i(r_k) ∩ A(k)D_n( u)>3√(n)ϵ r_k, E_n) ≤ K(ϵ)max_1 ≤ i ≤ K(ϵ)Pr(D_n^U( u_i)>√(n)ϵ r_k) ≤ K(ϵ)max_1 ≤ i ≤ K(ϵ)E{[D_n^U( u_i)]^2}nϵ^2 r_k^2. Moreover, by iterated-expectation, Taylor expansion and the Cauchy-Schwarz inequality, together with u_i≤ r_k for u_i ∈ U_i(r_k), we have E{[ϱ_t^U( u_i)]^2} = E{E{[ϱ_t^U( u_i)]^2|ℱ_t-1}} ≤ 2E{g_t^2| ∫_0^1[F_t-1(F_t-1^-1(τ)+ν_t( u_i)s+ϵ r_kC_0q̇_t(θ(τ)))-F_t-1(F_t-1^-1(τ))]ds|} ≤ Csup_xf_t-1(x)r_kE{w_t^2[q̇_t(θ(τ))^3+q̇_t(θ(τ))^2sup_θ^*∈Θq̇_t(θ^*)]} ≤ Csup_xf_t-1(x)r_k{E(w_t^2q̇_t(θ(τ))^3)+[E(w_tq̇_t(θ(τ))^2)]^1/2[E(w_tsup_θ^*∈Θq̇_t(θ^*)^2)]^1/2} := Υ(r_k), where θ^*= u_i^*+θ(τ) with u_i^* between 0 and u_i. This, together with sup_xf_t-1(x)<∞ by Assumption <ref>, Lemma <ref>(i) and the fact that ϱ_t^U( u_i)-E[ϱ_t^U( u_i)|ℱ_t-1] is a martingale difference sequence, implies that E{[D_n^U( u_i)]^2} =1n∑_t=1^nE{{ϱ_t^U( u_i)-E[ϱ_t^U( u_i)|ℱ_t-1]}^2} ≤1n∑_t=1^nE{[ϱ_t^U( u_i)]^2}≤Υ(r_k)<∞. Combining (<ref>) and (<ref>), we have Pr(sup_ u ∈ A(k)D_n( u)1+√(n) u>6ϵ, E_n) ≤K(ϵ)Υ(r_k)nϵ^2r_k^2. Similar to the proof of the upper tail case, we can obtain the same bound for the lower tail case. Therefore, Pr(sup_ u ∈ A(k)|D_n( u)|1+√(n) u>6ϵ, E_n) ≤2K(ϵ)Υ(r_k)nϵ^2r_k^2. Note that Υ(r_k)→ 0 as k→∞, we can choose k_ϵ such that 2K(ϵ)Υ(r_k)/(ϵ^2δ^2)<ϵ for k≥ k_ϵ. Let k_n be the integer such that n^-1/2δ≤ r_k_n≤ 2n^-1/2δ, and split B_δ( 0) into two events B:=B(k_n+1) and B^c:=B(0)-B(k_n+1). Note that B^c=⋃_k=0^k_nA(k) and Υ(r_k) is bounded. Then by (<ref>), it holds that Pr(sup_ u ∈ B^c|D_n( u)|1+√(n) u>6ϵ) ≤ ∑_k=0^k_nPr(sup_ u ∈ A(k)|D_n( u)|1+√(n) u>6ϵ, E_n) + Pr(E_n^c) ≤ 1n∑_k=0^k_ϵ-1CK(ϵ)ϵ^2δ^22^2k+ ϵn∑_k=k_ϵ^k_n2^2k+ Pr(E_n^c) ≤ O(1n) + 4ϵ + Pr(E_n^c). Furthermore, for u ∈ B, we have 1+√(n) u≥ 1 and r_k_n+1≤ n^-1/2δ<n^-1/2. Similar to the proof of (<ref>) and (<ref>), we can show that Pr(sup_ u ∈ BD_n( u)1+√(n) u>3ϵ, E_n) ≤Pr(max_1 ≤ i ≤ K(ϵ)D_n^U( u_i)>ϵ, E_n) ≤K(ϵ)Υ(r_k_n+1)ϵ^2. We can obtain the same bound for the lower tail. Therefore, we have Pr(sup_ u ∈ B|D_n( u)|1+√(n) u>3ϵ) = Pr(sup_ u ∈ B|D_n( u)|1+√(n) u>3ϵ, E_n)+ Pr(E_n^c) ≤ 2K(ϵ)Υ(r_k_n+1)ϵ^2 + Pr(E_n^c). Note that Υ(r_k_n+1)→ 0 as n→∞. Moreover, by the ergodic theorem, Pr(E_n)→ 1 and thus Pr(E_n^c)→ 0 as n→∞. (<ref>) together with (<ref>) asserts (<ref>). The proof of this lemma is accomplished. Recall that L_n(θ)=n^-1∑_t=1^nw_tρ_τ(y_t-q_t(θ)) and q_t(θ(τ))=F_t-1^-1(τ). Let ξ_t( u)=∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u)s)-I(y_t≤ F_t-1^-1(τ))]ds with ν_t( u)=q_t( u+θ(τ))-q_t(θ(τ)). By the Knight identity (<ref>), it can be verified that n[L_n( u+θ(τ))-L_n(θ(τ))] = ∑_t=1^nw_t[ρ_τ(η_t,τ-ν_t( u))-ρ_τ(η_t,τ)] = K_1n( u)+K_2n( u), where u∈Λ≡{ u∈ℝ^3: u+θ(τ) ∈Θ}, η_t,τ=y_t-q_t(θ(τ)), K_1n( u)=-∑_t=1^nw_tν_t( u)ψ_τ(η_t,τ) and K_2n( u)=∑_t=1^nw_tν_t( u)ξ_t( u). By Taylor expansion, we have ν_t( u)=q_1t( u)+q_2t( u), where q_1t( u)= u^'q̇_t(θ(τ)) and q_2t( u)= u^'q̈_t( u^*+θ(τ)) u/2 for u^* between u and 0. Then it follows that K_1n( u) =-∑_t=1^nw_tq_1t( u)ψ_τ(η_t,τ)-∑_t=1^nw_tq_2t( u)ψ_τ(η_t,τ) =-√(n) u^' T_n-√(n) u^'R_1n( u^*)√(n) u, where T_n=1√(n)∑_t=1^nw_tq̇_t(θ(τ))ψ_τ(η_t,τ) and R_1n( u^*)=12n∑_t=1^nw_tq̈_t( u^*+θ(τ))ψ_τ(η_t,τ). By Lemma <ref>(ii) and the fact that |ψ_τ(η_t,τ)|≤ 1, we have E[sup_ u^*∈Λw_t q̈_t( u^*+θ(τ))ψ_τ(η_t,τ)]≤ CE[sup_θ^*∈Θw_t q̈_t(θ^*)]<∞. 
Moreover, by iterated-expectation and the fact that E[ψ_τ(η_t,τ) | ℱ_t-1]=0, it follows that E[w_t q̈_t( u^*+θ(τ))ψ_τ(η_t,τ)]=0. Then by Theorem 3.1 in <cit.> and Assumption <ref>, we can show that sup_ u^*∈ΛR_1n( u^*+θ(τ))=o_p(1). This together with (<ref>), implies that K_1n( u_n)=-√(n) u_n^' T_n+o_p(n u_n^2). Denote ξ_t( u)=ξ_1t( u)+ξ_2t( u), where ξ_1t( u) =∫_0^1[I(y_t≤ F_t-1^-1(τ)+q_1t( u)s)-I(y_t≤ F_t-1^-1(τ))]ds and ξ_2t( u) =∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u)s)-I(y_t≤ F_t-1^-1(τ)+q_1t( u)s)]ds. Then for K_2n( u), by Taylor expansion, it holds that K_2n( u)=R_2n( u)+R_3n( u)+R_4n( u)+R_5n( u), where R_2n( u) = u^'∑_t=1^nw_tq̇_t(θ(τ))E[ξ_1t( u)|ℱ_t-1], R_3n( u) = u^'∑_t=1^nw_tq̇_t(θ(τ))E[ξ_2t( u)|ℱ_t-1], R_4n( u) = u^'∑_t=1^nw_tq̇_t(θ(τ)){ξ_t( u)-E[ξ_t( u)|ℱ_t-1]}and R_5n( u) = u^'2∑_t=1^nw_tq̈_t(θ^*)ξ_t( u) u. Note that E[ξ_1t( u)|ℱ_t-1]=∫_0^1[F_t-1(F_t-1^-1(τ)+q_1t( u)s)-F_t-1(F_t-1^-1(τ))]ds. Then by Taylor expansion, together with Assumption <ref>, it follows that E[ξ_1t( u)|ℱ_t-1]= 12f_t-1(F_t-1^-1(τ))q_1t( u) +q_1t( u)∫_0^1[f_t-1(F_t-1^-1(τ)+q_1t( u)s^*)-f_t-1(F_t-1^-1(τ))]sds, where s^* is between 0 and s. Therefore, it follows that R_2n( u)=√(n) u^'J_n√(n) u+√(n) u^'Π_1n( u)√(n) u, where J_n=(2n)^-1∑_t=1^nf_t-1(F_t-1^-1(τ))w_tq̇_t(θ(τ))q̇_t^'(θ(τ)) and Π_1n( u) =1n∑_t=1^nw_tq̇_t(θ(τ))q̇_t^'(θ(τ))∫_0^1[f_t-1(F_t-1^-1(τ)+q_1t( u)s^*)-f_t-1(F_t-1^-1(τ))]sds. By Taylor expansion, together with Assumption <ref>(ii), sup_x|ḟ_t-1(x)|<∞ by Assumption <ref> and Lemma <ref>(i), for any η>0, it holds that E(sup_ u≤ηΠ_1n( u)) ≤1n∑_t=1^nE[sup_ u≤ηw_tq̇_t(θ(τ))q̇_t^'(θ(τ))sup_x|ḟ_t-1(x)| u^'q̇_t(θ(τ))] ≤ Cηsup_x|ḟ_t-1(x)| E[w_tq̇_t(θ(τ))^3] tends to 0 as η→ 0. Therefore, by Markov’s theorem, for any ϵ, δ>0, there exists η_0=η_0(ϵ)>0 such that Pr(sup_ u≤η_0Π_1n( u)> δ)<ϵ2 for all n≥ 1. Since u_n=o_p(1), it follows that Pr( u_n> η_0)<ϵ2 as n is large enough. From (<ref>) and (<ref>), we have Pr(Π_1n( u_n)> δ) ≤Pr(Π_1n( u_n)> δ, u_n≤η_0)+Pr( u_n> η_0) ≤Pr(sup_ u≤η_0Π_1n( u)> δ)+ϵ2<ϵ as n is large enough. Thus Π_1n( u_n)=o_p(1). This together with (<ref>), implies that R_2n( u_n)=√(n) u_n^'J_n√(n) u_n+o_p(n u_n^2). Note that E[ξ_2t( u)|ℱ_t-1]=∫_0^1[F_t-1(F_t-1^-1(τ)+ν_t( u)s)-F_t-1(F_t-1^-1(τ)+q_1t( u)s)]ds. Then by Taylor expansion, the Cauchy-Schwarz inequality and the strict stationarity and ergodicity of y_t under Assumption <ref>, together with Assumption <ref>(ii), sup_xf_t-1(x)<∞ by Assumption <ref> and Lemma <ref>, for any η>0, it holds that E(sup_ u≤η|R_3n( u)|n u^2) ≤ ηn∑_t=1^nE{w_tq̇_t(θ(τ))12sup_xf_t-1(x)sup_θ∈Θq̈_t(θ)} ≤ Cη E{√(w_t)q̇_t(θ(τ))sup_θ∈Θ√(w_t)q̈_t(θ)} ≤ Cη[E(w_tq̇_t(θ(τ))^2)]^1/2[E(sup_θ∈Θw_tq̈_t(θ)^2)]^1/2 tends to 0 as η→ 0. Similar to (<ref>) and (<ref>), we can show that R_3n( u_n)=o_p(n u_n^2). For R_4n( u), by Lemma <ref>, it holds that R_4n( u_n)=o_p(√(n) u_n+n u_n^2). Finally, we consider R_5n( u). Since I(x≤ a)-I(x≤ b)=I(b≤ x ≤ a)-I(b≥ x ≥ a) and ν_t( u)= u^'q̇_t(θ^⋆) with θ^⋆ between θ(τ) and u+θ(τ) by Taylor expansion, we have sup_ u≤η|ξ_t( u)| ≤ ∫_0^1sup_ u≤η|I(F_t-1^-1(τ)≤ y_t≤ F_t-1^-1(τ)+ν_t( u)s)|ds + ∫_0^1sup_ u≤η|I(F_t-1^-1(τ) ≥ y_t≥ F_t-1^-1(τ)+ν_t( u)s)|ds ≤ I(F_t-1^-1(τ)≤ y_t≤ F_t-1^-1(τ)+ ηsup_θ^⋆∈Θq̇_t(θ^⋆)) + I(F_t-1^-1(τ)≥ y_t≥ F_t-1^-1(τ)- ηsup_θ^⋆∈Θq̇_t(θ^⋆)). 
Then by iterated-expectation, the Cauchy-Schwarz inequality and the strict stationarity and ergodicity of y_t under Assumption <ref>, together with sup_xf_t-1(x)<∞ by Assumption <ref> and Lemma <ref>, for any η>0, it follows that E(sup_ u≤η|R_5n( u)|n u^2) ≤ 12n∑_t=1^nE[w_tsup_θ^*∈Θq̈_t(θ^*)E(sup_ u≤η|ξ_t( u)| | ℱ_t-1)] ≤ ηsup_xf_t-1(x) E[w_tsup_θ^*∈Θq̈_t(θ^*)sup_θ^⋆∈Θq̇_t(θ^⋆)] ≤ Cη[E(sup_θ^*∈Θw_tq̈_t(θ^*)^2)]^1/2[E(sup_θ^⋆∈Θw_tq̇_t(θ^⋆)^2)]^1/2 tends to 0 as η→ 0. Similar to (<ref>) and (<ref>), we can show that R_5n( u_n)=o_p(n u_n^2). From (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we have K_2n( u_n)=√(n) u_n^'J_n√(n) u_n+o_p(√(n) u_n+n u_n^2). In view of (<ref>), (<ref>) and (<ref>), we accomplish the proof of this lemma. Recall that η_t,τ=y_t-q_t(θ(τ)), ν_t( u)=q_t( u+θ(τ))-q_t(θ(τ)) and ξ_t( u)=∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u)s)-I(y_t≤ F_t-1^-1(τ))]ds with F_t-1^-1(τ)=q_t(θ(τ)). Let η_t,τ=y_t-q_t(θ(τ)), ν_t( u)=q_t( u+θ(τ))-q_t(θ(τ)) and ξ_t( u)=∫_0^1[I(y_t≤q_t(θ(τ))+ν_t( u)s)-I(y_t≤q_t(θ(τ)))]ds. Similar to (<ref>), by the Knight identity (<ref>), we can verify that n[L_n( u+θ(τ))-L_n(θ(τ))]-n[L_n( u+θ(τ))-L_n(θ(τ))] = ∑_t=1^n w_t {[-ν_t( u)ψ_τ(η_t,τ)+ν_t( u)ξ_t( u)] - [-ν_t( u)ψ_τ(η_t,τ)+ν_t( u)ξ_t( u)]} = A_1n( u)+A_2n( u)+A_3n( u)+A_4n( u), where u∈Λ≡{ u∈ℝ^3: u+θ(τ) ∈Θ}, A_1n( u) = ∑_t=1^n w_t [ν_t( u)-ν_t( u)]ψ_τ(η_t,τ), A_2n( u)=∑_t=1^n w_t [ψ_τ(η_t,τ)-ψ_τ(η_t,τ)]ν_t( u), A_3n( u) = ∑_t=1^n w_t [ν_t( u)-ν_t( u)]ξ_t( u) and A_4n( u)=∑_t=1^n w_t [ξ_t( u)-ξ_t( u)] ν_t( u). We first consider A_1n( u). Since |ψ_τ(·)|≤ 1, {y_t} is strictly stationary and ergodic by Assumption <ref> and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, then by Taylor expansion and Lemma <ref>(ii), we have sup_ u∈Λ|A_1n( u)|√(n) u ≤1√(n)∑_t=1^n w_t sup_ u∈Λ|ν_t( u)-ν_t( u)| u|ψ_τ(η_t,τ)| ≤1√(n)∑_t=1^n w_t sup_Θq̇_t(θ^*)-q̇_t(θ^*) ≤C√(n)∑_t=1^n ρ^t w_t(ς_ρ+ξ_ρ)+ C√(n)∑_t=1^n tρ^t w_tς_ρ=o_p(1), where θ^* is between θ and θ(τ), ς_ρ=∑_s=0^∞ρ^s|y_-s| and ξ_ρ=∑_s=0^∞sρ^s|y_-s|. Therefore, for u_n=o_p(1), it holds that A_1n( u_n)=o_p(√(n) u_n). We next consider A_2n( u). Using I(x < a)-I(x < b)=I(0 < x-b < a-b)-I(0> x-b > a-b) and ψ_τ(η_t,τ)-ψ_τ(η_t,τ)=I(y_t<q_t(θ(τ)))-I(y_t<q_t(θ(τ))), we have E[|ψ_τ(η_t,τ)-ψ_τ(η_t,τ)| |ℱ_t-1] ≤ E[I(0< y_t-q_t(θ(τ))< |q_t(θ(τ))-q_t(θ(τ))|) |ℱ_t-1] + E[I(0> y_t-q_t(θ(τ))> -|q_t(θ(τ))-q_t(θ(τ))|)|ℱ_t-1] ≤ F_t-1(q_t(θ(τ))+|q_t(θ(τ))-q_t(θ(τ))|) -F_t-1(q_t(θ(τ))-|q_t(θ(τ))-q_t(θ(τ))|). Then by iterative-expectation and Cauchy-Schwarz inequality, together with ν_t( u)= u^'q̇_t(θ^*_τ) by Taylor expansion, Lemma <ref>(i), Lemma <ref>(i), sup_xf_t-1(x)<∞ by Assumption <ref> and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, it holds that Esup_ u∈Λ|A_2n( u)|√(n) u ≤1√(n)∑_t=1^n E{ w_tsup_Θq̇_t(θ^*_τ)· E[|ψ_τ(η_t,τ)-ψ_τ(η_t,τ)| |ℱ_t-1] } ≤ 2Csup_xf_t-1(x)1√(n)∑_t=1^n ρ^t E{w_tsup_Θq̇_t(θ^*_τ)ς_ρ} ≤C√(n)∑_t=1^n ρ^t ·[E(w_tsup_Θq̇_t(θ^*_τ)^2)]^1/2·[E(w_tς_ρ^2)]^1/2=o(1). As a result, for u_n=o_p(1), it follows that A_2n( u_n)=o_p(√(n) u_n). For A_3n( u), since |ξ_t( u)|<2, similar to the proof of A_1n( u), for u_n=o_p(1), we have A_3n( u_n)=o_p(√(n) u_n). Finally, we consider A_4n( u). Denote c_t=I(y_t≤q_t(θ(τ)))-I(y_t≤ q_t(θ(τ))) and d_t= ∫_0^1 δ_t(s) ds with δ_t(s)=I(y_t≤q_t(θ(τ))+ν_t( u)s)-I(y_t≤ q_t(θ(τ))+ν_t( u)s). Using I(X≤ a)-I(X≤ b)=I(b≤ X ≤ a)-I(b≥ X ≥ a), it holds that |c_t| ≤ I(|y_t-q_t(θ(τ))| ≤ |q_t(θ(τ))-q_t(θ(τ))|) and sup_ u∈Λ|δ_t(s)| ≤ I(|y_t-q_t(θ(τ))-ν_t( u)s| ≤ |q_t(θ(τ))-q_t(θ(τ))|+sup_ u∈Λ|ν_t( u)-ν_t( u)|s ). 
Then by Taylor expansion, together with sup_xf_t-1(x)<∞ under Assumption <ref> and Lemma <ref>, we have E(|c_t| |ℱ_t-1) ≤ 2sup_xf_t-1(x) |q_t(θ(τ))-q_t(θ(τ))| ≤ Cρ^t ς_ρ and E(sup_ u∈Λ|δ_t(s)| |ℱ_t-1) ≤ 2sup_xf_t-1(x) (|q_t(θ(τ))-q_t(θ(τ))|+sup_ u∈Λ|ν_t( u)-ν_t( u)|) ≤ Cρ^t [ς_ρ + u(ς_ρ+tς_ρ+ξ_ρ)]. These together with ξ_t( u)-ξ_t( u)=d_t-c_t, imply that E(sup_ u∈Λ|ξ_t( u)-ξ_t( u)| |ℱ_t-1) ≤ Cρ^t ς_ρ + C uρ^t(ς_ρ+tς_ρ+ξ_ρ). As a result, by iterative-expectation and Cauchy-Schwarz inequality, together with ν_t( u)= u^'q̇_t( u^*+θ(τ)) by Taylor expansion, Lemma <ref>(i) and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, we have Esup_ u∈Λ|A_4n( u)|√(n) u+n u^2≤ ∑_t=1^n E{w_t E(sup_ u∈Λ|ξ_t( u)-ξ_t( u)|√(n)+n u |ℱ_t-1)sup_ u∈Λ|ν_t( u)| u} ≤ C√(n)∑_t=1^n ρ^t ·[E(w_tsup_Θq̇_t(θ^*)^2)]^1/2·[E(w_tς_ρ^2)]^1/2 + Cn∑_t=1^n ρ^t ·[E(w_tsup_Θq̇_t(θ^*)^2)]^1/2·[E(w_tς_ρ^2)]^1/2 + Cn∑_t=1^n tρ^t ·[E(w_tsup_Θq̇_t(θ^*)^2)]^1/2·[E(w_tς_ρ^2)]^1/2 + Cn∑_t=1^n ρ^t ·[E(w_tsup_Θq̇_t(θ^*)^2)]^1/2·[E(w_tξ_ρ^2)]^1/2 = o(1). Hence, for u_n=o_p(1), it follows that A_4n( u_n)=o_p(√(n) u_n+n u_n^2). Combining (<ref>)–(<ref>), we accomplish the proof of this lemma. Recall that q_t(θ(τ))=ω(τ) + α_1(τ)∑_j=1^∞β_1(τ)^j-1|y_t-j|. By the Lipschitz continuous conditions in Assumption <ref> and Taylor expansion, we have |ω(τ_2)-ω(τ_1)| ≤ C|τ_2-τ_1|, |α_1(τ_2)-α_1(τ_1)|≤ C|τ_2-τ_1| and |β_1^j(τ_2)-β_1^j(τ_1)| ≤ Cj(β_1^*)^j-1|τ_2-τ_1|, where β_1^* is between β_1(τ_1) and β_1(τ_2). These together with |α_1(τ)|<c<∞ and β_1(τ), β_1^*≤ρ<1 by Assumption <ref>, imply that for τ_1,τ_2∈𝒯, |q_t(θ(τ_2))-q_t(θ(τ_1))| = |ω(τ_2)-ω(τ_1) + ∑_j=1^∞ [α_1(τ_2)β_1^j-1(τ_2)-α_1(τ_1)β_1^j-1(τ_1)]|y_t-j|| ≤ |ω(τ_2)-ω(τ_1)| + |α_1(τ_2)-α_1(τ_1)|∑_j=1^∞β_1^j-1(τ_2)|y_t-j| + |α_1(τ_1)|∑_j=1^∞|β_1^j-1(τ_2)-β_1^j-1(τ_1)| |y_t-j| ≤ C|τ_2-τ_1| (1 + ∑_j=1^∞ρ^j-1|y_t-j| + c∑_j=2^∞(j-1)ρ^j-2|y_t-j| ) ≤ C|τ_2-τ_1|Δ_ρ,t, where Δ_ρ,t=1+∑_j=1^∞ρ^j-1|y_t-j|+∑_j=2^∞(j-1)ρ^j-2|y_t-j|+∑_j=3^∞(j-1)(j-2)ρ^j-3|y_t-j|+∑_j=4^∞(j-1)(j-2)(j-3)ρ^j-4|y_t-j|. Therefore, (i) holds. For (ii), recall the first derivative of q_t(θ) defined in (<ref>). It holds that q̇_t( u+θ(τ_2))-q̇_t( u+θ(τ_1)) = (0,∑_j=1^∞{[u_3+β_1(τ_2)]^j-1-[u_3+β_1(τ_1)]^j-1}|y_t-j|, ∑_j=2^∞(j-1){[u_2+α_1(τ_2)][u_3+β_1(τ_2)]^j-2-[u_2+α_1(τ_1)][u_3+β_1(τ_1)]^j-2}|y_t-j|)^'. By Taylor expansion, we have [u_3+β_1(τ_2)]^j-[u_3+β_1(τ_1)]^j=j(β_1^*)^j-1[β_1(τ_2)-β_1(τ_1)], where β_1^* is between u_3+β_1(τ_1) and u_3+β_1(τ_2). Moreover, it follows that [u_2+α_1(τ_2)][u_3+β_1(τ_2)]^j-[u_2+α_1(τ_1)][u_3+β_1(τ_1)]^j = [α_1(τ_2)-α_1(τ_1)][u_3+β_1(τ_2)]^j + j(β_1^*)^j-1[u_2+α_1(τ_1)][β_1(τ_2)-β_1(τ_1)]. These together with (<ref>), |α_1(τ)|<c<∞ and β_1(τ), β_1^*≤ρ<1 by Assumption <ref>, for τ_1,τ_2∈𝒯 and u=(u_1,u_2,u_3)^'∈Λ such that u+θ(τ_i) ∈Θ with i=1,2, imply that q̇_t( u+θ(τ_2))-q̇_t( u+θ(τ_1)) ≤ ∑_j=1^∞|[u_3+β_1(τ_2)]^j-1-[u_3+β_1(τ_1)]^j-1||y_t-j| + ∑_j=2^∞(j-1)|[u_2+α_1(τ_2)][u_3+β_1(τ_2)]^j-2-[u_2+α_1(τ_1)][u_3+β_1(τ_1)]^j-2||y_t-j| ≤ [|α_1(τ_2)-α_1(τ_1)|+|β_1(τ_2)-β_1(τ_1)|] ∑_j=2^∞(j-1)ρ^j-2|y_t-j| + c|β_1(τ_2)-β_1(τ_1)|∑_j=3^∞(j-1)(j-2)ρ^j-3|y_t-j| ≤ C|τ_2-τ_1|Δ_ρ,t. Hence, (ii) is verified. To show (iii), recall the second derivative of q_t(θ) defined in (<ref>). 
For τ_1,τ_2∈𝒯 and u=(u_1,u_2,u_3)^'∈Λ such that u+θ(τ) ∈Θ, by max{|α_1(τ)|, |u_2+α_1(τ)|}<c<∞ and β_1(τ), β_1^*,u_3+β_1(τ)≤ρ<1 for u+θ(τ) ∈Θ under Assumption <ref>, together with (<ref>)–(<ref>), we have q̈_t( u+θ(τ_2))-q̈_t( u+θ(τ_1)) ≤ 2∑_j=2^∞(j-1)sup_Θ|[u_3+β_1(τ_2)]^j-2-[u_3+β_1(τ_1)]^j-2||y_t-j| + sup_Θ|α_1(τ_2)-α_1(τ_1)|∑_j=3^∞ (j-1)(j-2)sup_Θ[u_3+β_1(τ_1)]^j-3|y_t-j| + sup_Θ|u_2+α_1(τ_2)| ∑_j=3^∞ (j-1)(j-2)sup_Θ|[u_3+β_1(τ_2)]^j-3-[u_3+β_1(τ_1)]^j-3||y_t-j| ≤ C|τ_2-τ_1| (3∑_j=3^∞(j-1)(j-2)ρ^j-3|y_t-j| + c∑_j=4^∞(j-1)(j-2)(j-3)ρ^j-4|y_t-j|) ≤ C|τ_2-τ_1|Δ_ρ,t. As a result, (iii) holds. The proof of this lemma is complete. For any η>0, (<ref>) in Lemma <ref> and the proof for (<ref>) in Lemma <ref> imply that sup_ u≤η|R_4n( u,τ)|√(n) u+n u^2=o_p(1) and sup_ u≤η|R_5n( u,τ)|n u^2=o_p(1). By Corollary 2.2 of <cit.>, to show Lemma <ref>, it remains to establish the stochastic equicontinuity of sup_ u≤η|R_4n( u,τ)|/(√(n) u+n u^2) and sup_ u≤η|R_5n( u,τ)|/(n u^2). We first consider the stochastic equicontinuity of sup_ u≤η|R_4n( u,τ)|/(√(n) u+n u^2). Denote R_4n( u,τ)= u^'∑_t=1^nw_tq̇_t(θ(τ))ξ̅_t( u, τ), where ξ̅_t( u, τ)=ξ_t( u, τ)-E[ξ_t( u, τ)|ℱ_t-1], ξ_t( u, τ)=∫_0^1[I(y_t≤ F_t-1^-1(τ)+ν_t( u, τ)s)-I(y_t≤ F_t-1^-1(τ))]ds, with ν_t( u, τ)=q_t( u+θ(τ))-q_t(θ(τ)). Recall that c_t≡ c_t(τ_1,τ_2)=I(y_t<q_t(θ(τ_1)))-I(y_t<q_t(θ(τ_2))). Let d_t( u)≡ d_t( u, τ_1,τ_2)= ∫_0^1 δ_t( u, s) ds, where δ_t( u, s)≡δ_t( u, τ_1,τ_2, s)=I(y_t≤ Q_t( u, τ_2, s))-I(y_t≤ Q_t( u, τ_1, s)) with Q_t( u, τ, s)=F_t-1^-1(τ)+ν_t( u, τ)s=F_t-1^-1(τ) + u^'q̇_t( u^*+θ(τ))s for u^* between 0 and u. Note that ξ_t( u, τ_2)-ξ_t( u, τ_1)=c_t + d_t and |sup_x|g_1(x)|-sup_x|g_2(x)||≤sup_x||g_1(x)|-|g_2(x)|| for functions g_1 and g_2. Then it holds that |sup_ u≤η|R_4n( u,τ_2)|√(n) u+n u^2 - sup_ u≤η|R_4n( u,τ_1)|√(n) u+n u^2| ≤∑_i=1^3 sup_ u≤η|R_4i( u,τ_1,τ_2)|√(n) u, where R_41( u,τ_1,τ_2) = u^'∑_t=1^nw_t[q̇_t(θ(τ_2))-q̇_t(θ(τ_1))]ξ̅_t( u, τ_2), R_42( u,τ_1,τ_2) = u^'∑_t=1^nw_tq̇_t(θ(τ_1))[c_t-E(c_t|ℱ_t-1)] and R_43( u,τ_1,τ_2) = u^'∑_t=1^nw_tq̇_t(θ(τ_1))[d_t( u)-E(d_t( u)|ℱ_t-1)]. Using I(X≤ a)-I(X≤ b)=I(b≤ X ≤ a)-I(b≥ X ≥ a), similar to the proof of (<ref>), we can show that sup_ u≤ηξ_t^2( u, τ) ≤ I(F_t-1^-1(τ)-ηsup_Θq̇_t(θ)≤ y_t≤ F_t-1^-1(τ)+ηsup_Θq̇_t(θ)). Then by Taylor expansion, it follows that E(sup_ u≤ηξ_t^2( u, τ)|ℱ_t-1) ≤ 2ηsup_Θq̇_t(θ). Note that E(ξ̅_t( u, τ)|ℱ_t-1)=0 and (n^-1/2∑_t=1^nM_t)=(M_t) for a martingale difference sequence {M_t}. Then by iterative-expectation and the Cauchy-Schwarz inequality, together with Lemma <ref>(ii), Lemma <ref>(i) and E(w_tΔ_ρ,t^2)<∞ under Assumptions <ref> and <ref>, it can be verified that (sup_ u≤η|R_41( u,τ_1,τ_2)|√(n) u) = (w_tq̇_t(θ(τ_2))-q̇_t(θ(τ_1))sup_ u≤η|ξ̅_t( u, τ_2)|) ≤ E[w_t^2q̇_t(θ(τ_2))-q̇_t(θ(τ_1))^2E(sup_ u≤ηξ_t^2( u, τ_2)|ℱ_t-1)] ≤ C|τ_2-τ_1|^2η E(w_t^2Δ_ρ,t^2sup_Θq̇_t(θ))≤ C|τ_2-τ_1|^2. This implies that sup_ u≤η|R_41( u,τ_1,τ_2)|√(n) u = O_p(1)|τ_2-τ_1|. We next consider R_42( u,τ_1,τ_2). By Lemma <ref>, we can show that sup_ u≤η|Q_t( u, τ_2, s)-Q_t( u, τ_1, s)| ≤ |F_t-1^-1(τ_2)-F_t-1^-1(τ_1)| +sup_ u≤η uq̇_t( u^*+θ(τ_2))-q̇_t( u^*+θ(τ_1)) ≤ C|τ_2-τ_1|(1+η)Δ_ρ,t. Thus, by Taylor expansion and sup_xf_t-1(x)<∞ under Assumption <ref>, together with I(x ≤ a)-I(x ≤ b)=I(0 ≤ x-b ≤ a-b)-I(0≥ x-b ≥ a-b), we have E(sup_ u≤ηδ_t^2( u, s)|ℱ_t-1) ≤ F_t-1(Q_t( u, τ_1, s)+sup_ u≤η|Q_t( u, τ_2, s)-Q_t( u, τ_1, s)|) -F_t-1(Q_t( u, τ_1, s)-sup_ u≤η|Q_t( u, τ_2, s)-Q_t( u, τ_1, s)|) ≤ 2sup_xf_t-1(x)sup_ u≤η|Q_t( u, τ_2, s)-Q_t( u, τ_1, s)| ≤ C|τ_2-τ_1|Δ_ρ,t. 
Therefore, by the Cauchy-Schwarz inequality, it holds that E(sup_ u≤ηd_t^2( u)|ℱ_t-1) = E(∫_0^1∫_0^1sup_ u≤ηδ_t( u, s_1)sup_ u≤ηδ_t( u, s_2)ds_1ds_2 |ℱ_t-1) ≤ ∫_0^1∫_0^1 {E[sup_ u≤ηδ_t^2( u, s_1)|ℱ_t-1]}^1/2{E[sup_ u≤ηδ_t^2( u, s_2)|ℱ_t-1]}^1/2ds_1ds_2 ≤ C|τ_2-τ_1|Δ_ρ,t. Similar to the proof for (sup_ u≤η|R_41( u,τ_1,τ_2)|/(√(n) u)), by iterative-expectation and the Cauchy-Schwarz inequality, together with (<ref>), Lemma <ref>(i) and E(w_tΔ_ρ,t^2)<∞ under Assumptions <ref> and <ref>, it can be verified that (sup_ u≤η|R_42( u,τ_1,τ_2)|√(n) u) ≤ E[w_t^2q̇_t(θ(τ_2))^2 E{[c_t-E(c_t|ℱ_t-1)]^2|ℱ_t-1}] ≤ C|τ_2-τ_1|, and (sup_ u≤η|R_43( u,τ_1,τ_2)|√(n) u) ≤ E[w_t^2q̇_t(θ(τ_2))^2 E{sup_ u≤η[d_t( u)-E(d_t( u)|ℱ_t-1)]^2|ℱ_t-1}] ≤ C|τ_2-τ_1|E[w_t^2q̇_t(θ(τ_2))^2Δ_ρ,t] ≤ C|τ_2-τ_1|. Therefore, it holds that sup_ u≤η|R_42( u,τ_1,τ_2)|√(n) u= O_p(1)|τ_2-τ_1|^1/2, and sup_ u≤η|R_43( u,τ_1,τ_2)|√(n) u= O_p(1)|τ_2-τ_1|^1/2. Combining (<ref>)–(<ref>), the stochastic equicontinuity of sup_ u≤η|R_4n( u,τ)|/(√(n) u+n u^2) follows. Next, we consider the stochastic equicontinuity of sup_ u≤η|R_5n( u,τ)|/(n u^2). It can be verified that |sup_ u≤η|R_5n( u,τ_2)|n u^2 - sup_ u≤η|R_5n( u,τ_1)|n u^2| ≤∑_i=1^3 sup_ u≤η|R_5i( u,τ_1,τ_2)|n u^2, where R_51( u,τ_1,τ_2) = u^'2∑_t=1^nw_t[q̈_t( u^*+θ(τ_2))-q̈_t( u^*+θ(τ_1))] ξ_t( u, τ_2) u, R_52( u,τ_1,τ_2) = u^'2∑_t=1^nw_tq̈_t( u^*+θ(τ_1))c_t u and R_53( u,τ_1,τ_2) = u^'2∑_t=1^nw_tq̈_t( u^*+θ(τ_1))d_t( u) u. By Lemma <ref>(iii), the fact that |ξ_t( u, τ)|<1, the strict stationarity and ergodicity of y_t under Assumption <ref> and E(w_tΔ_ρ,t)<∞ under Assumptions <ref> and <ref>, it holds that sup_ u≤η|R_51( u,τ_1,τ_2)|n u^2 ≤12n∑_t=1^nw_tsup_Θq̈_t( u^*+θ(τ_2))-q̈_t( u^*+θ(τ_1)) ≤ C|τ_2-τ_1|1n∑_t=1^nw_tΔ_ρ,t = O_p(1)|τ_2-τ_1|. For R_52( u,τ_1,τ_2) and R_53( u,τ_1,τ_2), by iterative-expectation and the Cauchy-Schwarz inequality, the strict stationarity and ergodicity of y_t under Assumption <ref>, Lemma <ref>(ii) and E(w_tΔ_ρ,t)<∞ under Assumptions <ref> and <ref>, together with E(c_t^2|ℱ_t-1)=|τ_2-τ_1| by (<ref>) and (<ref>), we have (sup_ u≤η|R_52( u,τ_1,τ_2)|n u^2) ≤ E(w_t^2sup_Θq̈_t(θ)^2 E(c_t^2|ℱ_t-1)) ≤ C|τ_2-τ_1| and (sup_ u≤η|R_53( u,τ_1,τ_2)|n u^2) ≤ E(w_t^2sup_Θq̈_t(θ)^2 E(d_t^2( u)|ℱ_t-1)) ≤ C|τ_2-τ_1|. Then we have |R_52( u,τ_1,τ_2)|n u^2 =O_p(1)|τ_2-τ_1|^1/2 and |R_53( u,τ_1,τ_2)|n u^2 =O_p(1)|τ_2-τ_1|^1/2. Combining (<ref>)–(<ref>), the stochastic equicontinuity of sup_ u≤η|R_5n( u,τ)|/(n u^2) follows. We complete the proof of this lemma. §.§ Lemmas for Theorem <ref> This section provides four preliminary lemmas with proofs. Specifically, Lemma <ref> is used to handle initial values. Lemmas <ref> verifies the stochastic differentiability condition defined by <cit.>, and the bracketing method in <cit.> is used for its proof. Lemmas <ref> and <ref> are used to obtain the √(n)-consistency and asymptotic normality of φ̌_wn, and their proofs need Lemmas <ref> and <ref>, respectively. Let ς_ρ=∑_s=0^∞ρ^s|y_-s| and ξ_ρ=∑_s=0^∞sρ^s|y_-s| be positive random variables depending on a constant ρ∈ (0,1). If Assumption <ref>(i) holds, for τ∈𝒯_h=[τ_0,τ_0+h]⊂ (0,0.5) or τ∈𝒯_h=[τ_0-h,τ_0]⊂ (0.5,1) with h>0, then we have (i) sup_Φ|q_t,τ(φ)-q_t,τ(φ)| ≤ Cρ^tς_ρ; (ii) sup_Φq̇_t,τ(φ)-q̇_t,τ(φ)≤ Cρ^t(ς_ρ+tς_ρ+ξ_ρ). 
Under Assumptions <ref>, <ref>, <ref> and <ref>, if E|y_t|^s<∞ for some 0<s≤ 1, then for any sequence of random variables u_n such that u_n=o_p(1), it holds that ζ_n^*( u_n)=o_p(√(n) u_n+n u_n^2), where ζ_n^*( u)= u^'∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*){ξ_t,τ_k( u)-E[ξ_t,τ_k( u)|ℱ_t-1]} with ξ_t,τ_k( u) =∫_0^1[I(y_t≤ q_t,τ_k(φ_0^*)+ν_t,τ( u)s)-I(y_t≤ q_t,τ_k(φ_0^*))]ds and ν_t,τ( u)=q_t,τ(φ_0^*+ u)-q_t,τ(φ_0^*). If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>, <ref>, <ref> and <ref> hold, then for any sequence of random variables u_n such that u_n=o_p(1), we have n[L_n^*( u_n+φ_0^*)-L_n^*(φ_0^*)]= -√(n) u_n^' T_n^*+√(n) u_n^'J^*√(n) u_n +o_p(√(n) u_n+n u_n^2), where L_n^*(φ)=n^-1∑_k=1^K∑_t=1^nw_tρ_τ_k(y_t-q_t,τ_k(φ)), T_n^*=n^-1/2∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)ψ_τ_k(e_t,τ_k^*) and J^*=J_2^*-J_1^* with e_t,τ_k^*=y_t-q_t,τ_k(φ_0^*), J_1^*=Ω_11^*/2=∑_k=1^KE[w_tq̈_t,τ_k(φ_0^*)ψ_τ_k(e_t,τ_k^*)]/2 and J_2^*=Ω_12^*/2=∑_k=1^KE[w_tq̇_t,τ_k(φ_0^*)q̇_t,τ_k^'(φ_0^*)f_t-1(q_t,τ_k(φ_0^*))]/2. If E|y_t|^s<∞ for some 0<s≤ 1 and Assumptions <ref>, <ref>, <ref> and <ref> hold, then for any sequence of random variables u_n such that u_n=o_p(1), we have n[L_n^*( u_n+φ_0^*)-L_n^*(φ_0^*)]-n[L_n^*( u_n+φ_0^*)-L_n^*(φ_0^*)]=o_p(√(n) u_n+n u_n^2), where L_n^*(φ)=n^-1∑_k=1^K∑_t=1^nw_tρ_τ_k(y_t-q_t,τ_k(φ)) and L_n^*(φ)=n^-1∑_k=1^K∑_t=1^nw_tρ_τ_k(y_t-q_t,τ_k(φ)). For φ=(ϕ^', λ)^'=(a_0, a_1, b_1, λ)^', recall that q_t,τ(φ) = Q_τ(λ) (a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1|y_t-j|):= Q_τ(λ)h_t(ϕ), q_t,τ(φ) = Q_τ(λ) (a_0/1-b_1+a_1∑_j=1^t-1 b_1^j-1|y_t-j|):= Q_τ(λ)h_t(ϕ), q̇_t,τ(φ)=(Q_τ(λ)ḣ_t^'(ϕ),Q̇_τ(λ)h_t(ϕ))^' and q̇_t,τ(φ)=(Q_τ(λ)ḣ_t^'(ϕ),Q̇_τ(λ)h_t(ϕ))^', where Q̇_τ(λ)=λ^-2{τ^λ(λlnτ-1)-(1-τ)^λ[λln(1-τ)-1]}, ḣ_t(ϕ) =(11-b_1,∑_j=1^∞b_1^j-1|y_t-j|,a_0(1-b_1)^2+a_1∑_j=2^∞(j-1)b_1^j-2|y_t-j|)^' and ḣ_t(ϕ) =(11-b_1,∑_j=1^t-1b_1^j-1|y_t-j|,a_0(1-b_1)^2+a_1∑_j=2^t-1(j-1)b_1^j-2|y_t-j|)^'. It follows that q_t,τ(φ)-q_t,τ(φ)=Q_τ(λ)a_1∑_j=t^∞ b_1^j-1|y_t-j| and q̇_t,τ(φ)-q̇_t,τ(φ)=(0,Q_τ(λ)∑_j=t^∞b^j-1_1|y_t-j|,Q_τ(λ)a_1∑_j=t^∞(j-1)b^j-2_1|y_t-j|,Q̇_τ(λ)a_1∑_j=t^∞b^j-1_1|y_t-j|)^'. Since λ≥c>0, a_1≤c<∞ and 0<b_1≤ρ<1 by Assumption <ref>, for τ∈𝒯_h such that Q_τ(λ) and Q̇_τ(λ) are bounded, it holds that sup_Φ|q_t,τ(φ)-q_t,τ(φ)| ≤ |Q_τ(λ)|a_1∑_j=t^∞ b_1^j-1|y_t-j| ≤ Ccρ^t-1∑_s=0^∞ρ^s|y_-s|≤ Cρ^tς_ρ and sup_Φq̇_t,τ(φ)-q̇_t,τ(φ)≤ |Q_τ(λ)|sup_Φ[∑_j=t^∞b^j-1_1|y_t-j|+a_1∑_j=t^∞(j-1)b^j-2_1|y_t-j|] + |Q̇_τ(λ)|sup_Φa_1 ∑_j=t^∞b^j-1_1|y_t-j| ≤ C[ρ^t-1ς_ρ+ctρ^t-2ς_ρ+cρ^t-1∑_s=0^∞(s-1)ρ^s-1|y_-s|]+Ccρ^t-1ς_ρ ≤ Cρ^t(ς_ρ+tς_ρ+ξ_ρ), where ς_ρ=∑_s=0^∞ρ^s|y_-s| and ξ_ρ=∑_s=0^∞sρ^s|y_-s|. The proof of this lemma is complete. Recall that φ=(ϕ^', λ)^'=(a_0, a_1, b_1, λ)^' and its true parameter vector φ_0^*=(ϕ_0^', λ_0)^'=(a_00, a_10, b_10, λ_0)^'. For u∈ℝ^d with d=4, note that |ζ_n^*( u)| ≤√(n) u∑_k=1^K∑_j=1^d|1√(n)∑_t=1^nm_t,τ_k,j{ξ_t,τ_k( u)-E[ξ_t,τ_k( u)|ℱ_t-1]}|, where m_t,τ_k,j=w_t∂ q_t,τ_k(φ_0^*)/∂θ_j with θ_j being the jth element of φ. For 1≤ j≤ d and τ∈𝒯_1 ⊂ [0,0.5) or τ∈𝒯_2 ⊂ (0.5,1], define g_t,τ=max_j{m_t,τ,j,0} or g_t,τ=max_j{-m_t,τ,j,0}. Let ϱ_t,τ( u)=g_t,τξ_t,τ( u) and define D_n,τ( u)=1√(n)∑_t=1^n{ϱ_t,τ( u)-E[ϱ_t,τ( u)|ℱ_t-1]}. To establish Lemma <ref>, it suffices to show that, for any δ>0, sup_ u≤δ|D_n,τ( u)|1+√(n) u=o_p(1). We follow the method in Lemma 4 of <cit.> to verify (<ref>). Let 𝔉_τ={ϱ_t,τ( u): u≤δ} be a collection of functions indexed by u. First, we verify that 𝔉_τ satisfies the bracketing condition defined on page 304 of <cit.>. 
Let B_r( v) be an open neighborhood of v with radius r>0, and define a constant C_0 to be selected later. For any ϵ>0 and 0< r≤δ, there exists a sequence of small cubes {B_ϵ r/C_0( u_i)}_i=1^K(ϵ) to cover B_r( 0), where K(ϵ) is an integer less than Cϵ^-d, and the constant C is not depending on ϵ and r; see <cit.>, page 227. Denote V_i(r)=B_ϵ r/C_0( u_i)⋂ B_r(0), and let U_1(r)=V_1(r) and U_i(r)=V_i(r)-⋃_j=1^i-1V_j(r) for i≥ 2. Note that {U_i(r)}_i=1^K(ϵ) is a partition of B_r(0). For each u_i∈ U_i(r) with 1≤ i ≤ K(ϵ), define the following bracketing functions ϱ_t,τ^L( u_i) = g_t,τ∫_0^1[I(y_t≤ q_t,τ(φ_0^*)+ν_t,τ( u_i)s-ϵ rC_0q̇_t,τ(φ_0^*))-I(y_t≤ q_t,τ(φ_0^*))]ds, ϱ_t,τ^U( u_i) = g_t,τ∫_0^1[I(y_t≤ q_t,τ(φ_0^*)+ν_t,τ( u_i)s+ϵ rC_0q̇_t,τ(φ_0^*))-I(y_t≤ q_t,τ(φ_0^*))]ds. Since I(·) is non-decreasing and g_t,τ≥ 0, for any u ∈ U_i(r), we have ϱ_t,τ^L( u_i)≤ϱ_t,τ( u)≤ϱ_t,τ^U( u_i). Furthermore, by Taylor expansion, it holds that E[ϱ_t,τ^U( u_i)-ϱ_t,τ^L( u_i)|ℱ_t-1]≤ϵ rC_0·2sup_xf_t-1(x) w_tq̇_t,τ(φ_0^*)^2. Denote ℵ_t,τ=2sup_xf_t-1(x)w_tq̇_t,τ(φ_0^*)^2. By Assumption <ref>, we have sup_xf_t-1(x)<∞. Choose C_0=E(ℵ_t,τ). Then by iterated-expectation and Assumption <ref>, it follows that E[ϱ_t,τ^U( u_i)-ϱ_t,τ^L( u_i)]=E{E[ϱ_t,τ^U(φ_i)-ϱ_t,τ^L(φ_i)|ℱ_t-1]}≤ϵ r. This together with (<ref>), implies that the family 𝔉_τ satisfies the bracketing condition. Put r_k=2^-kδ. Let B(k)=B_r_k(0) and A(k) be the annulus B(k)∖ B(k+1). From the bracketing condition, for fixed ϵ>0, there is a partition U_1(r_k), U_2(r_k), …, U_K(ϵ)(r_k) of B(k). First, consider the upper tail case. For u ∈ U_i(r_k), by (<ref>), it holds that D_n,τ( u) ≤ 1√(n)∑_t=1^n{ϱ_t,τ^U( u_i)-E[ϱ_t,τ^U( u_i)|ℱ_t-1]}+1√(n)∑_t=1^nE[ϱ_t,τ^U( u_i)-ϱ_t,τ^L( u_i)|ℱ_t-1] ≤ D_n,τ^U( u_i)+√(n)ϵ r_k1nC_0∑_t=1^nℵ_t,τ, where D_n,τ^U( u_i)=1√(n)∑_t=1^n{ϱ_t,τ^U( u_i)-E[ϱ_t,τ^U( u_i)|ℱ_t-1]}. Define the event E_n={ω: 1nC_0∑_t=1^nℵ_t,τ(ω) < 2 }. For u ∈ A(k), 1+√(n) u>√(n)r_k+1=√(n)r_k/2. Then by (<ref>) and the Chebyshev's inequality, we have Pr(sup_ u ∈ A(k)D_n,τ( u)1+√(n) u>6ϵ, E_n) ≤ Pr(max_1 ≤ i ≤ K(ϵ)sup_ u ∈ U_i(r_k) ∩ A(k)D_n,τ( u)>3√(n)ϵ r_k, E_n) ≤ K(ϵ)max_1 ≤ i ≤ K(ϵ)Pr(D_n,τ^U( u_i)>√(n)ϵ r_k) ≤ K(ϵ)max_1 ≤ i ≤ K(ϵ)E{[D_n,τ^U( u_i)]^2}nϵ^2 r_k^2. Moreover, by iterated-expectation, Taylor expansion and the Cauchy-Schwarz inequality, together with u_i≤ r_k for u_i ∈ U_i(r_k), we have E{[ϱ_t,τ^U( u_i)]^2} = E{E{[ϱ_t,τ^U( u_i)]^2|ℱ_t-1}} ≤ 2E{g_t,τ^2| ∫_0^1[F_t-1(q_t,τ(φ_0^*)+ν_t,τ( u_i)s-ϵ rC_0q̇_t,τ(φ_0^*))-F_t-1(q_t,τ(φ_0^*))]ds|} ≤ Cr_ksup_xf_t-1(x)E[w_t^2q̇_t,τ(φ_0^*)^3+w_t^2q̇_t,τ(φ_0^*)^2sup_φ^†∈Φq̇_t,τ(φ^†)] ≤ Cr_ksup_xf_t-1(x){E(w_t^2q̇_t,τ(φ_0^*)^3)+[E(w_tq̇_t,τ(φ_0^*)^2)]^1/2[E(w_tsup_φ^†∈Φq̇_t,τ(φ^†)^2)]^1/2} := Υ_τ(r_k), where φ^† is between φ_0^* and u_i+φ_0^*. This, together with (<ref>), sup_xf_t-1(x)<∞ by Assumption <ref>, E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, strictly stationarity and α-mixing property of {y_t} under Assumption <ref>, max{a_00,a_10,a_0^†,a_1^†}<c<∞ and b_10,b_1^†≤ρ<1 by Assumption <ref>, and the fact that ϱ_t,τ^U( u_i)-E[ϱ_t,τ^U( u_i)|ℱ_t-1] is a martingale difference sequence, implies that E{[D_n,τ^U( u_i)]^2} =1n∑_t=1^nE{{ϱ_t,τ^U( u_i)-E[ϱ_t,τ^U( u_i)|ℱ_t-1]}^2} ≤1n∑_t=1^nE{[ϱ_t,τ^U( u_i)]^2}≤Υ_τ(r_k)<∞. Combining (<ref>) and (<ref>), we have Pr(sup_ u ∈ A(k)D_n,τ( u)1+√(n) u>6ϵ, E_n) ≤K(ϵ)Υ_τ(r_k)nϵ^2r_k^2. Similar to the proof of the upper tail case, we can obtain the same bound for the lower tail case. Therefore, Pr(sup_ u ∈ A(k)|D_n,τ( u)|1+√(n) u>6ϵ, E_n) ≤2K(ϵ)Υ_τ(r_k)nϵ^2r_k^2. 
Note that Υ_τ(r_k)→ 0 as k→∞, we can choose k_ϵ such that 2K(ϵ)Υ_τ(r_k)/(ϵ^2δ^2)<ϵ for k≥ k_ϵ. Let k_n be the integer such that n^-1/2δ≤ r_k_n≤ 2n^-1/2δ, and split B_δ( 0) into two events B:=B(k_n+1) and B^c:=B(0)-B(k_n+1). Note that B^c=⋃_k=0^k_nA(k) and Υ_τ(r_k) is bounded. Then by (<ref>), it holds that Pr(sup_ u ∈ B^c|D_n,τ( u)|1+√(n) u>6ϵ) ≤ ∑_k=0^k_nPr(sup_ u ∈ A(k)|D_n,τ( u)|1+√(n) u>6ϵ, E_n) + Pr(E_n^c) ≤ 1n∑_k=0^k_ϵ-1CK(ϵ)ϵ^2δ^22^2k+ ϵn∑_k=k_ϵ^k_n2^2k+ Pr(E_n^c) ≤ O(1n) + 4ϵ + Pr(E_n^c). Furthermore, for u ∈ B, we have 1+√(n) u≥ 1 and r_k_n+1≤ n^-1/2δ<n^-1/2. Similar to the proof of (<ref>) and (<ref>), we can show that Pr(sup_ u ∈ BD_n,τ( u)1+√(n) u>3ϵ, E_n) ≤Pr(max_1 ≤ i ≤ K(ϵ)D_n,τ^U( u_i)>ϵ, E_n) ≤K(ϵ)Υ_τ(r_k_n+1)ϵ^2. We can obtain the same bound for the lower tail. Therefore, we have Pr(sup_ u ∈ B|D_n,τ( u)|1+√(n) u>3ϵ) = Pr(sup_ u ∈ B|D_n,τ( u)|1+√(n) u>3ϵ, E_n)+ Pr(E_n^c) ≤ 2K(ϵ)Υ_τ(r_k_n+1)ϵ^2 + Pr(E_n^c). Note that Υ_τ(r_k_n+1)→ 0 as n→∞. Moreover, by the ergodic theorem, Pr(E_n)→ 1 and thus Pr(E_n^c)→ 0 as n→∞. (<ref>) together with (<ref>) asserts (<ref>). The proof of this lemma is accomplished. Denote u=φ-φ_0^*, where φ=(ϕ^', λ)^'=(a_0, a_1, b_1, λ)^' and φ_0^*=(ϕ_0^', λ_0)^'=(a_00, a_10, b_10, λ_0)^'. Recall that L_n^*(φ)=n^-1∑_k=1^K∑_t=1^nw_tρ_τ_k(y_t-q_t,τ_k(φ)) and e_t,τ^*=y_t-q_t,τ(φ_0^*) with q_t,τ(φ) = Q_τ(λ) (a_0/1-b_1+a_1∑_j=1^∞ b_1^j-1|y_t-j|):= Q_τ(λ)h_t(ϕ). Let ξ_t,τ( u)=∫_0^1[I(e_t,τ^*≤ν_t,τ( u)s)-I(e_t,τ^*≤ 0)]ds with ν_t,τ( u)=q_t,τ(φ_0^*+ u)-q_t,τ(φ_0^*). By the Knight identity (<ref>), it holds that n[L_n^*(φ_0^*+ u)-L_n^*(φ_0^*)] = ∑_k=1^K∑_t=1^nw_t[ρ_τ_k(e_t,τ_k^*-ν_t,τ_k( u))-ρ_τ_k(e_t,τ_k^*)] = K_1n^*( u)+K_2n^*( u), where u∈Λ^*≡{ u∈ℝ^4: u+φ_0^* ∈Φ}, K_1n^*( u)=-∑_k=1^K∑_t=1^nw_tν_t,τ_k( u)ψ_τ_k(e_t,τ_k^*) and K_2n^*( u)=∑_k=1^K∑_t=1^nw_tν_t,τ_k( u)ξ_t,τ_k( u). By Taylor expansion, we have ν_t,τ( u)=q_1t,τ( u)+q_2t,τ( u), where q_1t,τ( u)= u^'q̇_t,τ(φ_0^*) and q_2t,τ( u)= u^'q̈_t,τ(φ^†) u/2 with φ^† between φ_0^*+ u and φ_0^*. Then it follows that K_1n^*( u) =-∑_k=1^K∑_t=1^nw_tq_1t,τ_k( u)ψ_τ_k(e_t,τ_k^*)-∑_k=1^K∑_t=1^nw_tq_2t,τ_k( u)ψ_τ_k(e_t,τ_k^*) =-√(n) u^' T_n^*-√(n) u^'R_1n^*(φ^†)√(n) u, where T_n^*=1√(n)∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)ψ_τ_k(e_t,τ_k^*) and R_1n^*(φ^†)=12n∑_k=1^K∑_t=1^nw_tq̈_t,τ_k(φ^†)ψ_τ_k(e_t,τ_k^*). From E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, strictly stationarity and α-mixing property of {y_t} under Assumption <ref> and max{a_0^*, a_1^*}≤c<∞ and b_1^*≤ρ<1 by Assumption <ref>, together with (<ref>) and the fact that |ψ_τ(·)|≤ 1, we have E[sup_φ^†∈Φw_t q̈_t,τ_k(φ^†)ψ_τ_k(e_t,τ_k^*)]≤ CE[sup_φ^†∈Φw_t q̈_t,τ_k(φ^†)]<∞. Moreover, since q̈_t,τ(φ) is continuous with respect to φ∈Φ, then by ergodic theorem for strictly stationary and α-mixing process under Assumption <ref>, together with φ_n=φ_0^*+ u_n=φ_0^*+o_p(1) and φ^†_n between φ_0^*+ u_n and φ_0^*, we can show that R_1n^*(φ^†_n)=J_1^*+o_p(1), where J_1^*=∑_k=1^KE[w_tq̈_t,τ_k(φ_0^*)ψ_τ_k(e_t,τ_k^*)]/2. This together with (<ref>), implies that K_1n^*( u_n)=-√(n) u_n^' T_n^*-√(n) u_n^'J_1^*√(n) u_n+o_p(n u_n^2). Denote ξ_t,τ( u)=ξ_1t,τ( u)+ξ_2t,τ( u), where ξ_1t,τ( u) =∫_0^1[I(e_t,τ^*≤ q_1t,τ( u)s)-I(e_t,τ^*≤ 0)]ds and ξ_2t,τ( u) =∫_0^1[I(e_t,τ^*≤ν_t,τ( u)s)-I(e_t,τ^*≤ q_1t,τ( u)s)]ds. 
For K_2n^*( u), by Taylor expansion, it holds that K_2n^*( u)=R_2n^*( u)+R_3n^*( u)+R_4n^*( u)+R_5n^*( u), where R_2n^*( u) = u^'∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)E[ξ_1t,τ_k( u)|ℱ_t-1], R_3n^*( u) = u^'∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)E[ξ_2t,τ_k( u)|ℱ_t-1], R_4n^*( u) = u^'∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*){ξ_t,τ_k( u)-E[ξ_t,τ_k( u)|ℱ_t-1]}and R_5n^*( u) = u^'2∑_k=1^K∑_t=1^nw_tq̈_t,τ_k(φ^†)ξ_t,τ_k( u) u. Note that E[ξ_1t,τ( u)|ℱ_t-1]=∫_0^1[F_t-1(q_t,τ(φ_0^*)+q_1t,τ( u)s)-F_t-1(q_t,τ(φ_0^*))]ds. Then by Taylor expansion, together with Assumption <ref>, it follows that E[ξ_1t,τ( u)|ℱ_t-1]= u^'2f_t-1(q_t,τ(φ_0^*))q̇_t,τ(φ_0^*) +q_1t,τ( u)∫_0^1[f_t-1(q_t,τ(φ_0^*)+q_1t,τ( u)s^*)-f_t-1(q_t,τ(φ_0^*))]sds, where s^* is between 0 and s. Therefore, it holds that R_2n^*( u)=√(n) u^'J_2n^*√(n) u+√(n) u^'Π_1n^*( u)√(n) u, where J_2n^*=(2n)^-1∑_k=1^K∑_t=1^nw_tf_t-1(q_t,τ_k(φ_0^*))q̇_t,τ_k(φ_0^*)q̇_t,τ_k^'(φ_0^*) and Π_1n^*( u)=1n∑_k=1^K∑_t=1^nw_tq̇_t,τ_k(φ_0^*)q̇_t,τ_k^'(φ_0^*)∫_0^1[f_t-1(q_t,τ_k(φ_0^*)+q_1t,τ_k( u)s^*)-f_t-1(q_t,τ_k(φ_0^*))]sds. By Taylor expansion, together with sup_x|ḟ_t-1(x)|<∞ by Assumption <ref> and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, (<ref>), strictly stationarity and α-mixing property of {y_t} under Assumption <ref>, max{a_00,a_10}≤c<∞ and b_10≤ρ<1 by Assumption <ref>, for any η>0, it holds that E(sup_ u≤ηΠ_1n^*( u)) ≤1n∑_k=1^K∑_t=1^nE[sup_ u≤ηw_tq̇_t,τ_k(φ_0^*)q̇_t,τ_k^'(φ_0^*)sup_x|ḟ_t-1(x)| u^'q̇_t,τ_k(φ_0^*)] ≤ Cηsup_x|ḟ_t-1(x)|∑_k=1^K E[w_tq̇_t,τ_k(φ_0^*)^3] tends to 0 as η→ 0. Therefore, by Markov’s theorem, for any ϵ, δ>0, there exists η_0=η_0(ϵ)>0 such that Pr(sup_ u≤η_0Π_1n^*( u)> δ)<ϵ2 for all n≥ 1. Since u_n=o_p(1), it follows that Pr( u_n> η_0)<ϵ2 as n is large enough. From (<ref>) and (<ref>), we have Pr(Π_1n^*( u_n)> δ) ≤Pr(Π_1n^*( u_n)> δ, u_n≤η_0)+Pr( u_n> η_0) ≤Pr(sup_ u≤η_0Π_1n^*( u)> δ)+ϵ2<ϵ as n is large enough. Therefore, Π_1n^*( u_n)=o_p(1). This together with Assumption <ref>, (<ref>) and J_2n^*=J_2^*+o_p(1) by ergodic theorem for strictly stationary and α-mixing process, implies that R_2n^*( u_n)=√(n) u_n^'J_2^*√(n) u_n+o_p(n u_n^2), where J_2^*=∑_k=1^KE[w_tf_t-1(q_t,τ_k(φ_0^*))q̇_t,τ_k(φ_0^*)q̇_t,τ_k^'(φ_0^*)]/2. For R_3n^*( u), note that E[ξ_2t,τ( u)|ℱ_t-1]=∫_0^1[F_t-1(q_t,τ(φ_0^*)+ν_t,τ( u)s)-F_t-1(q_t,τ(φ_0^*)+q_1t,τ( u)s)]ds. Then by iterated-expectation, Taylor expansion and the Cauchy-Schwarz inequality, together with (<ref>), (<ref>), sup_xf_t-1(x)<∞ by Assumption <ref>, E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, strictly stationarity and α-mixing property of {y_t} under Assumption <ref>, max{a_00,a_10,a_0,a_1}≤c<∞ and b_10,b_1≤ρ<1 by Assumption <ref>, for any η>0, it holds that E(sup_ u≤η|R_3n^*( u)|n u^2) ≤ ηn∑_k=1^K∑_t=1^nE{w_tq̇_t,τ_k(φ_0^*)12sup_xf_t-1(x)sup_Φq̈_t,τ_k(φ^†)} ≤ Cη∑_k=1^K E{√(w_t)q̇_t,τ_k(φ_0^*)sup_Φ√(w_t)q̈_t,τ_k(φ^†)} ≤ Cη∑_k=1^K[E(w_tq̇_t,τ_k(φ_0^*)^2)]^1/2[E(w_tsup_Φq̈_t,τ_k(φ^†)^2)]^1/2 tends to 0 as η→ 0. Similar to (<ref>) and (<ref>), we can show that R_3n^*( u_n)=o_p(n u_n^2). For R_4n^*( u), by Lemma <ref>, it holds that R_4n^*( u_n)=o_p(√(n) u_n+n u_n^2). Finally, we consider R_5n^*( u). 
Since I(x≤ a)-I(x≤ b)=I(b≤ x ≤ a)-I(b≥ x ≥ a) and ν_t,τ( u)= u^'q̇_t,τ(φ^⋆) with φ^⋆ between φ_0^* and u+φ_0^* by Taylor expansion, we have sup_ u≤η|ξ_t,τ( u)| ≤ ∫_0^1sup_ u≤η|I(q_t,τ(φ_0^*)≤ y_t≤ q_t,τ(φ_0^*)+ν_t,τ( u)s)|ds + ∫_0^1sup_ u≤η|I(q_t,τ(φ_0^*) ≥ y_t≥ q_t,τ(φ_0^*)+ν_t,τ( u)s)|ds ≤ I(q_t,τ(φ_0^*)≤ y_t≤ q_t,τ(φ_0^*)+ ηsup_φ^⋆∈Φq̇_t,τ(φ^⋆)) + I(q_t,τ(φ_0^*)≥ y_t≥ q_t,τ(φ_0^*)- ηsup_φ^⋆∈Φq̇_t,τ(φ^⋆)). Then by iterated-expectation, the Cauchy-Schwarz inequality and the strict stationarity and ergodicity of y_t under Assumption <ref>, together with (<ref>), (<ref>), max{a_0^*, a_1^*,a_0^⋆, a_1^⋆}≤c<∞ and b_1^*,b_1^⋆≤ρ<1 by Assumption <ref>, sup_xf_t-1(x)<∞ by Assumption <ref> and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, for any η>0, it follows that E(sup_ u≤η|R_5n^*( u)|n u^2) ≤ 12n∑_k=1^K∑_t=1^nE[w_tsup_φ^†∈Φq̈_t,τ_k(φ^†)E(sup_ u≤η|ξ_t,τ_k( u)| | ℱ_t-1)] ≤ ηsup_xf_t-1(x) ∑_k=1^KE[w_tsup_φ^†∈Φq̈_t,τ_k(φ^†)sup_φ^⋆∈Φq̇_t,τ_k(φ^⋆)] ≤ Cη∑_k=1^K[E(w_tsup_φ^†∈Φq̈_t,τ_k(φ^†)^2)]^1/2[E(w_tsup_φ^⋆∈Φq̇_t,τ_k(φ^⋆)^2)]^1/2 tends to 0 as η→ 0. Similar to (<ref>) and (<ref>), we can show that R_5n^*( u_n)=o_p(n u_n^2). From (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we have K_2n^*( u_n)=√(n) u_n^'J_2^*√(n) u_n+o_p(√(n) u_n+n u_n^2). In view of (<ref>), (<ref>) and (<ref>), we accomplish the proof of this lemma. Denote u=φ-φ_0^*. Recall that e_t,τ^*=y_t-q_t,τ(φ_0^*), ν_t,τ( u)=q_t,τ(φ_0^*+ u)-q_t,τ(φ_0^*) and ξ_t,τ( u)=∫_0^1[I(e_t,τ^*≤ν_t,τ( u)s)-I(e_t,τ^*≤ 0)]ds. Let e_t,τ^*=y_t-q_t,τ(φ_0^*), ν_t,τ( u)=q_t,τ(φ_0^*+ u)-q_t,τ(φ_0^*) and ξ_t,τ( u)=∫_0^1[I(e_t,τ^*≤ν_t,τ( u)s)-I(e_t,τ^*≤ 0)]ds. Similar to (<ref>), by the Knight identity (<ref>) we can verify that n[L_n^*(φ_0^*+ u)-L_n^*(φ_0^*)]-n[L_n(φ_0^*+ u)-L_n(φ_0^*)] = ∑_k=1^K∑_t=1^n w_t {[-ν_t,τ_k( u)ψ_τ_k(e_t,τ_k^*)+ν_t,τ_k( u)ξ_t,τ_k( u)] - [-ν_t,τ_k( u)ψ_τ_k(e_t,τ_k^*)+ν_t,τ_k( u)ξ_t,τ_k( u)]} = ∑_k=1^K[A_1n,k^*( u)+A_2n,k^*( u)+A_3n,k^*( u)+A_4n,k^*( u)], where u∈Λ^*≡{ u∈ℝ^4: u+φ_0^* ∈Φ}, A_1n,k^*( u) = ∑_t=1^n w_t [ν_t,τ_k( u)-ν_t,τ_k( u)]ψ_τ_k(e_t,τ_k^*), A_2n,k^*( u) =∑_t=1^n w_t [ψ_τ_k(e_t,τ_k^*)-ψ_τ_k(e_t,τ_k^*)]ν_t,τ_k( u), A_3n,k^*( u) = ∑_t=1^n w_t [ν_t,τ_k( u)-ν_t,τ_k( u)]ξ_t,τ_k( u) and A_4n,k^*( u) =∑_t=1^n w_t [ξ_t,τ_k( u)-ξ_t,τ_k( u)] ν_t,τ_k( u). We first consider A_1n,k^*( u). Since |ψ_τ(·)|≤ 1, {y_t} is strictly stationary and ergodic by Assumption <ref> and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, then by Taylor expansion and Lemma <ref>(ii), we have sup_ u∈Λ^*|A_1n,k^*( u)|√(n) u ≤1√(n)∑_t=1^n w_t sup_ u∈Λ^*|ν_t,τ_k( u)-ν_t,τ_k( u)| u|ψ_τ_k(e_t,τ_k^*)| ≤1√(n)∑_t=1^n w_t sup_Φq̇_t,τ_k(φ^†)-q̇_t,τ_k(φ^†) ≤C√(n)∑_t=1^n ρ^t w_t(ς_ρ+ξ_ρ)+ C√(n)∑_t=1^n tρ^t w_tς_ρ=o_p(1), where φ^† is between φ and φ_0^*. Therefore, it holds that A_1n,k^*( u_n)=o_p(√(n) u_n). We next consider A_2n,k^*( u). Using I(x < a)-I(x < b)=I(0 < x-b < a-b)-I(0> x-b > a-b) and ψ_τ(e_t,τ^*)-ψ_τ(e_t,τ^*)=I(y_t<q_t,τ(φ_0^*))-I(y_t<q_t,τ(φ_0^*)), we have E[|ψ_τ(e_t,τ^*)-ψ_τ(e_t,τ^*)| |ℱ_t-1] ≤ E[I(0< y_t-q_t,τ(φ_0^*)< |q_t,τ(φ_0^*)-q_t,τ(φ_0^*)|) |ℱ_t-1] + E[I(0> y_t-q_t,τ(φ_0^*)> -|q_t,τ(φ_0^*)-q_t,τ(φ_0^*)|)|ℱ_t-1] ≤ F_t-1(q_t,τ(φ_0^*)+|q_t,τ(φ_0^*)-q_t,τ(φ_0^*)|) -F_t-1(q_t,τ(φ_0^*)-|q_t,τ(φ_0^*)-q_t,τ(φ_0^*)|). 
Then by iterative-expectation and Cauchy-Schwarz inequality, together with ν_t,τ( u)= u^'q̇_t,τ(φ^†) by Taylor expansion, Lemma <ref>(i), Assumption <ref>, max{a_0^†,a_1^†}≤c<∞ and b_1^†≤ρ<1 by Assumption <ref>, sup_xf_t-1(x)<∞ by Assumption <ref> and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, it holds that Esup_ u∈Λ^*|A_2n,k^*( u)|√(n) u ≤1√(n)∑_t=1^n E{ w_tsup_Φq̇_t,τ_k(φ^†)· E[|ψ_τ_k(e_t,τ_k^*)-ψ_τ_k(e_t,τ_k^*)| |ℱ_t-1] } ≤ 2Csup_xf_t-1(x)1√(n)∑_t=1^n ρ^t E{w_tsup_Φq̇_t,τ_k(φ^†)ς_ρ} ≤C√(n)∑_t=1^n ρ^t ·[E(w_tsup_Φq̇_t,τ_k(φ^†)^2)]^1/2·[E(w_tς_ρ^2)]^1/2=o(1), where φ^† is between φ_0^*+ u and φ_0^*. As a result, it follows that A_2n,k^*( u_n)=o_p(√(n) u_n). For A_3n,k^*( u), since |ξ_t,τ( u)|<2, similar to the proof of A_1n,k^*( u), we can verify that A_3n,k^*( u_n)=o_p(√(n) u_n). Finally, we consider A_4n,k^*( u). Denote c_t,τ^*=I(y_t≤q_t,τ(φ_0^*))-I(y_t≤ q_t,τ(φ_0^*)) and d_t,τ^*= ∫_0^1 δ_t,τ^*(s) ds with δ_t,τ^*(s)=I(y_t≤q_t,τ(φ_0^*)+ν_t,τ( u)s)-I(y_t≤ q_t,τ(φ_0^*)+ν_t,τ( u)s). Using I(X≤ a)-I(X≤ b)=I(b≤ X ≤ a)-I(b≥ X ≥ a), it holds that |c_t,τ^*| ≤ I(|y_t-q_t,τ(φ_0^*)| ≤ |q_t,τ(φ_0^*)-q_t,τ(φ_0^*)|) and sup_ u∈Λ^*|δ_t,τ^*(s)| ≤ I(|y_t-q_t,τ(φ_0^*)-ν_t,τ( u)s| ≤ |q_t,τ(φ_0^*)-q_t,τ(φ_0^*)|+sup_Φ|ν_t,τ( u)-ν_t,τ( u)|s ). Then by Taylor expansion, together with sup_xf_t-1(x)<∞ under Assumption <ref> and Lemma <ref>, we have E(|c_t,τ^*| |ℱ_t-1) ≤ 2sup_xf_t-1(x) |q_t,τ(φ_0^*)-q_t,τ(φ_0^*)| ≤ Cρ^t ς_ρ and E(sup_ u∈Λ^*|δ_t,τ^*(s)| |ℱ_t-1) ≤ 2sup_xf_t-1(x) (|q_t,τ(φ_0^*)-q_t,τ(φ_0^*)|+sup_ u∈Λ^*|ν_t,τ( u)-ν_t,τ( u)|) ≤ Cρ^t [ς_ρ + u(ς_ρ+tς_ρ+ξ_ρ)]. These imply that E(sup_ u∈Λ^*|ξ_t,τ( u)-ξ_t,τ( u)| |ℱ_t-1) ≤ Cρ^t ς_ρ + C uρ^t(ς_ρ+tς_ρ+ξ_ρ). As a result, by iterative-expectation and Cauchy-Schwarz inequality, together with Assumption <ref>, ξ_t,τ( u)-ξ_t,τ( u)=d_t,τ^*-c_t,τ^*, ν_t,τ( u)= u^'q̇_t,τ(φ^†) by Taylor expansion, and E(w_t)<∞ and E(w_t|y_t-j|^3)<∞ for all j≥ 1 by Assumption <ref>, we have Esup_ u∈Λ^*|A_4n,k^*( u)|√(n) u+n u^2≤ ∑_t=1^n E{w_t E(sup_ u∈Λ^*|ξ_t,τ_k( u)-ξ_t,τ_k( u)|√(n)+n u |ℱ_t-1)sup_ u∈Λ^*|ν_t,τ_k( u)| u} ≤ C√(n)∑_t=1^n ρ^t ·[E(w_tsup_Φq̇_t,τ_k(φ^†)^2)]^1/2·[E(w_tς_ρ^2)]^1/2 + Cn∑_t=1^n ρ^t ·[E(w_tsup_Φq̇_t,τ_k(φ^†)^2)]^1/2·[E(w_tς_ρ^2)]^1/2 + Cn∑_t=1^n tρ^t ·[E(w_tsup_Φq̇_t,τ_k(φ^†)^2)]^1/2·[E(w_tς_ρ^2)]^1/2 + Cn∑_t=1^n ρ^t ·[E(w_tsup_Φq̇_t,τ_k(φ^†)^2)]^1/2·[E(w_tξ_ρ^2)]^1/2 = o(1). Hence, it follows that A_4n,k^*( u_n)=o_p(√(n) u_n+n u_n^2). Combining (<ref>)–(<ref>), we accomplish the proof of this lemma. § ADDITIONAL SIMULATION STUDIES §.§ Unweighted QR estimator In this experiment, we examine the robustness of the unweighted QR estimator when the data is heavy-tailed such that the condition E|y_t|^3<∞ does not hold. Note that a simulation experiment is conducted in Section 5.2 of the main paper to examine the performance of the self-weighted QR estimator θ_wn(τ). For a direction comparison, we redo the experiment for the unweighted estimator θ_n(τ) under the same settings. To determine whether the third-order moment of y_t exists or not, we generate {y_t} of length 10^5 for Settings (5.2) and (5.3) with F=F_N and F=F_T. By calculating the tail index of y_t using Hill estimator <cit.>, we conclude that possibly E|y_t|^3=∞ if F=F_T and E|y_t|^3<∞ if F=F_N for both settings; see Table <ref> for the tail index of {y_t}. To confirm this, we further conduct tests for the null hypothesis that the kth moment of y_t does not exist <cit.> for k=1,2 and 3. From the p-values in Table <ref>, we confirm the above conclusion. 
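The tail-index calculation referred to above can be reproduced in a few lines. The following is a minimal sketch rather than the exact procedure of the paper: a Student-t(3) sample stands in for the series generated under Settings (5.2) and (5.3), and the values of the tuning constant k (number of upper order statistics) are arbitrary illustrative choices.

```python
import numpy as np

def hill_estimator(y, k):
    """Hill estimate of the tail index based on the k largest values of |y_t|."""
    a = np.sort(np.abs(np.asarray(y)))[::-1]        # |y| in decreasing order
    log_excess = np.log(a[:k]) - np.log(a[k])       # log-excesses over the (k+1)-th largest
    return 1.0 / log_excess.mean()                  # tail index estimate

# illustration on a heavy-tailed sample of length 10^5; a Student-t(3) series
# (tail index 3, so E|y_t|^3 is borderline non-finite) stands in for the simulated process
rng = np.random.default_rng(0)
y = rng.standard_t(df=3, size=100_000)
for k in (200, 500, 1000):                          # sensitivity to the choice of k
    print(k, round(hill_estimator(y, k), 2))
```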
Tables <ref> and <ref> report the biases, empirical standard deviations (ESDs) and asymptotic standard deviations (ASDs) of the unweighted QR estimator θ_n(τ) at quantile level τ=0.5%,1% or 5%. From the results, we observe that the three main findings for the self-weighted estimator summarized in Section <ref> of the main paper also hold true for the unweighted estimator. This indicates that the unweighted estimator is robust to heavy-tailed data without a finite third-order moment. Moreover, we can compare Tables <ref> and <ref> to Tables <ref> and <ref> in the main paper, respectively. It can be observed that both estimators have similar performance for F=F_N, whereas the self-weighted estimator outperforms the unweighted estimator in terms of ESD and ASD for F=F_T. Thus, when E|y_t|^3=∞, although the unweighted estimator is still robust, it is less efficient than the self-weighted estimator. §.§ Quantile rearrangement To evaluate the effect of the quantile rearrangement method on prediction, we conduct a simulation experiment to compare the original quantile curve based on the pointwise quantile estimates {Q_τ_k(y_t|ℱ_t-1)}_k=1^K with the rearranged quantile curve based on the sorted quantile estimates denoted by {Q_τ_k^*(y_t|ℱ_t-1)}_k=1^K. We consider the DGP in (5.1) with Settings (5.2) and (5.3) and F being the standard normal distribution F_N or Tukey-lambda distribution F_T, respectively. The sample size is set to n=1000 or 2000, and 1000 replications are generated for each sample size. For evaluation, we use the ℓ_2-loss to measure the prediction errors of {Q_τ_k(y_t|ℱ_t-1)}_k=1^K and {Q_τ_k^*(y_t|ℱ_t-1)}_k=1^K. Specifically, the in-sample prediction error is defined as [1nMK∑_m=1^M∑_t=1^n∑_k=1^K |Q_τ_k^(m)(y_t|ℱ_t-1)-Q_τ_k(y_t|ℱ_t-1)|^2]^1/2, and the out-of-sample prediction error is defined as [1MK∑_m=1^M∑_k=1^K |Q_τ_k^(m)(y_n+1|ℱ_t-1)-Q_τ_k(y_n+1|ℱ_t-1)|^2]^1/2, where Q_τ_k^(m)(y_t|ℱ_t-1)=Q_τ_k^(m)(y_t|ℱ_t-1) or Q_τ_k^*(m)(y_t|ℱ_t-1) is the estimate in the mth replication, and M=1000 is the total number of replications. Table <ref> reports the prediction errors of estimated curves based on {Q_τ_k(y_t|ℱ_t-1)}_k=1^K and {Q_τ_k^*(y_t|ℱ_t-1)}_k=1^K for τ_k=0.7+0.005k with k=1,…,58 (dense case) and τ_k=0.7+0.05k with k=1,…,5 (sparse case). It can be observed that the rearranged quantile curve has no greater prediction errors than the original quantile curve in finite samples. §.§ A Kolmogorov-Smirnov test and its finite-sample comparison with the CvM test To test whether β_1(τ) is a constant or not, we can also construct the Kolmogorov-Smirnov (KS)-type test S_n^*=√(n)sup_τ∈𝒯|v_n(τ)|, where v_n(τ)=Rθ_wn(τ)-β_1=R[θ_wn(τ)-∫_𝒯θ_wn(τ)dτ]. Similar to Corollary <ref>, under the same regular conditions, we can show that S_n^*→_d S^*≡sup_τ∈𝒯|v_0(τ)| as n→∞ under H_0, where v_0(τ)=R[𝔾(τ)-∫_𝒯𝔾(τ)dτ] with 𝔾(τ) defined in Theorem <ref>. Then the subsampling method in Section 3.2 of the main paper can be used to calculate the critical values of S_n^* with S_k,b_n replaced by S_k,b_n^*=√(b_n)sup_τ∈𝒯|v_k,b_n(τ)|. The same experiment is conducted using the same DGPs as in Section <ref>. To calculate S_n in (<ref>) and S_n^*, we use a grid 𝒯_n with equal cell size δ_n=0.005 in place of 𝒯. For the block size b_n in subsampling, we consider b_n=⌊ cn^1/2⌋ with c=0.5, 1 and 2; see also <cit.>. Tables <ref> and <ref> summarize the rejection rates of S_n (the CvM test) and S_n^* (the KS-type test) at 5% significance level for 𝒯=[0.7,0.995] and [0.8,0.995], respectively. 
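For concreteness, the KS-type statistic and the subsampling critical values described above can be sketched as follows. This is only an illustration under stated assumptions: the CvM form used here (the sample size times the integrated squared process on the equally spaced grid) mirrors the description of S_n but its exact definition is given earlier in the paper, and the unconditional empirical block quantiles below merely stand in for the self-weighted QR estimator, which is not reproduced.

```python
import numpy as np

def ks_stat(beta_hat, taus, n):
    """KS-type statistic S_n^* = sqrt(n) * sup_tau |v_n(tau)|, where v_n(tau) is the
    pointwise estimate minus its average over the tau grid."""
    v = beta_hat - beta_hat.mean()
    return np.sqrt(n) * np.max(np.abs(v))

def cvm_stat(beta_hat, taus, n):
    """CvM-type statistic (assumed form): n times the integral of v_n(tau)^2,
    approximated by a Riemann sum on the equally spaced grid."""
    v = beta_hat - beta_hat.mean()
    return n * np.sum(v ** 2) * (taus[1] - taus[0])

def subsample_critical_value(y, b, estimator, taus, stat, level=0.05):
    """Subsampling: recompute the statistic on every block of length b and use
    its (1 - level) empirical quantile as the critical value."""
    vals = [stat(estimator(y[s:s + b], taus), taus, b) for s in range(len(y) - b + 1)]
    return np.quantile(vals, 1 - level)

# toy illustration with block size b_n = floor(c * n^(1/2)), c = 1
rng = np.random.default_rng(1)
y = rng.standard_t(df=5, size=1000)
taus = 0.7 + 0.005 * np.arange(1, 60)               # equally spaced grid, cell size 0.005
toy_estimator = lambda data, t: np.quantile(data, t)
b = int(np.sqrt(len(y)))
S_n_star = ks_stat(toy_estimator(y, taus), taus, len(y))
print(S_n_star, subsample_critical_value(y, b, toy_estimator, taus, ks_stat))
```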
It can be seen that the KS-type test has lower power than the CvM test for tail quantile intervals in finite samples. As a result, we recommend using the CvM test for testing H_0: ∀τ∈𝒯, Rθ(τ)=β_1 in our model setting. § ADDITIONAL RESULTS FOR THE EMPIRICAL ANALYSIS We have also re-calculated both the backtesting and empirical coverage results for the proposed self-weighted QR method after conducting the quantile rearrangement in <cit.>. Tables <ref>–<ref> report the results for the self-weighted QR method before and after quantile rearrangement. We find that the p-values of the backtests, the empirical coverage rates and prediction errors do not change at all after the percentage points are rounded down to two decimal places for lower and upper 1%, 2.5%, 5% conditional quantiles. Moreover, the empirical coverage rates and prediction errors change very little for lower and upper 0.1%, 0.25%, 0.5% conditional quantiles. As a result, almost the same results can be observed for Tables <ref>–<ref>. Moreover, we also employ the monotone rearrangement method to ensure the monotonicity of estimated curves for ω(·) and α_1(·), respectively. Specifically, we sort the pointwise estimates {ω_wn(τ_k)}_k=1^K and {α_1wn(τ_k)}_k=1^K in Figure <ref> respectively in increasing order to enforce the monotonicity. Moreover, the pointwise confidence intervals of ω(·) and α_1(·) can be rearranged accordingly by sorting the upper and lower endpoint functions. Figure <ref> illustrates the original curve estimates together with their 95% confidence intervals, and the rearranged curve estimates together with rearranged confidence intervals for ω(·) and α_1(·). It can be seen that the rearranged curves and confidence intervals for ω(·) and α_1(·) are monotonic and more smooth than the original estimated curves, and the rearranged confidence interval is shorter in length than the original interval.
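The two rearrangement operations used in this appendix — sorting the pointwise quantile (or coefficient) estimates in increasing order, and sorting the lower and upper endpoint curves of a pointwise confidence band separately — each amount to a one-line transform. The sketch below illustrates them together with the grid-based ℓ_2 prediction error of the quantile-rearrangement experiment; the "true" curve, the noise level and the band width are placeholders, not quantities from the paper.

```python
import numpy as np

def rearrange(curve):
    """Monotone rearrangement: sort the pointwise estimates in increasing order of tau."""
    return np.sort(np.asarray(curve))

def rearrange_band(lower, upper):
    """Rearrange a pointwise confidence band by sorting its endpoint curves separately."""
    return np.sort(np.asarray(lower)), np.sort(np.asarray(upper))

def l2_error(curve, truth):
    """Grid-based l2 prediction error for a single replication: root mean squared
    deviation of the estimated curve from the true curve over the tau grid."""
    diff = np.asarray(curve) - np.asarray(truth)
    return np.sqrt(np.mean(diff ** 2))

# toy illustration on the dense grid tau_k = 0.7 + 0.005 k, k = 1,...,58
rng = np.random.default_rng(2)
taus = 0.7 + 0.005 * np.arange(1, 59)
truth = 1.0 / (1.0 - taus)                           # placeholder increasing quantile curve
est = truth + rng.normal(scale=1.0, size=taus.size)  # pointwise estimates, possibly non-monotone
lo, hi = est - 2.0, est + 2.0                        # placeholder 95% pointwise band
print(l2_error(est, truth), l2_error(rearrange(est), truth))   # rearranged error is never larger
lo_r, hi_r = rearrange_band(lo, hi)                  # monotone rearranged band, as described above
```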
http://arxiv.org/abs/2306.01541v1
20230602134442
Strong tractability for multivariate integration in a subspace of the Wiener algebra
[ "Takashi Goda" ]
math.NA
[ "math.NA", "cs.NA" ]
Building upon recent work by the author, we prove that multivariate integration in the following subspace of the Wiener algebra over [0,1)^d is strongly polynomially tractable: F_d:={ f∈ C([0,1)^d) | f:=∑_∈^d|f̂()|max((()),min_j∈()log |k_j|)<∞}, with f̂() being the -th Fourier coefficient of f, ():={j∈{1,…,d}| k_j≠ 0}, and : 2^{1,…,d}→{1,…,d} being defined by (u):=max_j∈ uj-min_j∈ uj+1, for non-empty subset u⊆{1,…,d} and (∅):=1. Strong polynomial tractability is achieved by an explicit quasi-Monte Carlo rule using a multiset union of Korobov's p-sets. We also show that, if we replace (()) with 1 for all ∈^d in the above definition of norm, multivariate integration is polynomially tractable but not strongly polynomially tractable. 41A55, 41A58, 42B05, 65D30, 65D32 § INTRODUCTION AND MAIN RESULTS This paper concerns numerical integration for multivariate functions defined over the d-dimensional unit cube. For a Riemann integrable function f: [0,1)^d→, we approximate its integral I_d(f)=∫_[0,1)^df() by Q_d,n(f)=∑_h=0^n-1w_h f(_h) with sets of n sampling points {_0,…,_n-1}⊂ [0,1)^d and associated weights {w_0,…,w_n-1}. Quasi-Monte Carlo (QMC) rule denotes a special case of Q_d,n where all the weights w_h are equal to 1/n. 
The worst-case error of an algorithm Q_d,N in a Banach space F with norm · is defined by e^(F,Q_d,n):=sup_f∈ F, f≤ 1| I_d(f)-Q_d,n(f)|. In the field of information-based complexity <cit.>, we are interested in how the information complexity n(ε, d, F) grows in the reciprocal of the error tolerance ε∈ (0,1) and the dimension d. Here, the information complexity is defined as the minimum number of function values, among all possible Q_d,n, needed to make the worst-case error in F no greater than ε, that is, n(ε, d, F):= min{ n∈ | ∃ Q_d,n: e^(F,Q_d,n)≤ε}. In a recent work by the author <cit.>, it has been proven that the information complexity for the following unweighted subspace of the Wiener algebra grows only polynomially both in ε^-1 and d: F_d^1:={ f∈ C([0,1)^d) | f:=∑_∈^d|f̂()|max(1,min_j∈()log |k_j|)<∞}, with f̂() being the -th Fourier coefficient of f, i.e., f̂() = ∫_[0,1)^df()exp(-2π i·), and ():={j∈{1,…,d}| k_j≠ 0}. More precisely, it has been shown that an upper bound n(ε, d, F_d^1)≤ C_1 ε^-3d^3 holds for a positive constant C_1, concluding that the problem of multivariate integration in F_d^1 is polynomially tractable. We refer to <cit.> for more recent progress on this line of research. In this context, an unweighted function space F refers to a space where all variables and groups of variables play an equal role. Therefore, for any permutation matrix π and f∈ F, it holds that f∘π∈ F and f∘π=f. The result presented in <cit.> builds upon the work of Dick <cit.>, who established polynomial tractability for multivariate integration in the intersection of the Wiener algebra and an unweighted space of Hölder continuous functions. As a continuation of <cit.>, we prove the following result in this paper: Let F_d^2 be a subspace of the Wiener algebra defined by F^2_d:={ f∈ C([0,1)^d) | f:=∑_∈^d|f̂()|max((()),min_j∈()log |k_j|)<∞}, where : 2^{1,…,d}→{1,…,d} is defined by (u):=max_j∈ uj-min_j∈ uj+1, for non-empty subset u⊆{1,…,d}, and (∅)=1. Then, there exists a positive constant C_2 such that, for any d∈ and ε∈ (0,1), we have n(ε, d, F_d^2)≤ C_2 ε^-3/(logε^-1). In comparison to the result of <cit.> for F_d^1, by replacing 1 (the first argument in taking the maximum for each ) with (()) in the definition of norms, the polynomial dependence of the information complexity on the dimension d does not show up anymore, meaning that the problem of multivariate integration in F_d^2 is strongly polynomially tractable. This result is strengthened by the following theorem on the former space F_d^1. For any linear algorithm Q_d,n using n function values, we have e^(F_d^1,Q_d,n)≥ d/(2n^2) for any d∈ and n>2d. Note that there is a significant gap between the lower bound on the worst-case error obtained above and the upper bound of order dn^-1/3 shown in <cit.>. Nevertheless, this result implies that a dependence of the information complexity on the dimension d cannot be eliminated for F_d^1. Therefore, the problem of multivariate integration in F_d^1 is polynomially tractable but not strongly polynomially tractable. As a future research direction, it would be interesting to study whether an intermediate space between F_d^1 and F_d^2 still exhibits strong polynomial tractability for multivariate integration. As we have 1≤ |()|≤(()) for all ∈^d when defining |()|=1, one of the most natural spaces we can consider is an unweighted space F_d^3:={ f∈ C([0,1)^d) | f:=∑_∈^d|f̂()|max(|()|,min_j∈()log |k_j|)<∞}. 
Note that, although the space F_d^2 is weighted, it remains invariant under the reversion of the variables, i.e., if f∈ F_d^2, then we have g∈ F_d^2 and f=g where g(x_1,…,x_d)=f(x_d,…,x_1). This is in contrast to many existing results on strong polynomial tractability for multivariate integration in the worst-case setting, where weight parameters are introduced to model the relative importance of each group of variables, and variables are typically assumed ordered in decreasing importance order. See <cit.> among many others. In fact, it seems not possible to characterize the space F_d^2 in such a way. The author believes that further tractability studies in subspaces of the Wiener algebra will offer new insights into the field of information-based complexity, particularly regarding (strong) polynomial tractability in (un)weighted spaces. § PROOF OF THEOREM <REF> This section is devoted to proving Theorem <ref> by providing an explicit QMC rule that attains the desired worst-case error bound. The QMC rule considered here is exactly the same as the one discussed in <cit.>. For an integer m≥ 2, let _m := {⌈ m/2⌉<p≤ m | p is prime}. It is known that there exist constants c_ and C_ with 0<c_<min(1,C_) such that c_m/log m≤ |_m|≤ C_m/log m, for all m≥ 2, see <cit.>. Now, given an integer m≥ 2, we define two different point sets as multiset unions: P_d,m^1=⋃_p∈_mS_d,p and P_d,m^2=⋃_p∈_mT_d,p, where S_d,p={_h^(p)| 0≤ h<p^2} and T_d,p={_h,ℓ^(p)| 0≤ h,ℓ<p} are sets with p^2 points known as Korobov's p-sets <cit.>. These point sets are defined as follows: _h^(p)=( {h/p^2}, {h^2/p^2},…, {h^d/p^2}), and _h,ℓ^(p)=( {hℓ/p}, {hℓ^2/p},…, {hℓ^d/p}), respectively, where we write {x}=x-⌊ x⌋ to denote the fractional part of a non-negative real number x. It is important to note that taking the multiset unions of Korobov's p-sets with different primes p is crucial in our error analysis. Trivially we have |P_d,m^1|=|P_d,m^2|=∑_p∈_mp^2. The following result on the exponential sums refines the known results from <cit.> as well as <cit.>. Let d∈ and p be a prime with p≥ d. For any ∈^d∖{} such that there exists at least one index j^*∈{1,…,d} where k_j^* is not divisible by p, i.e., p∤, the following bounds hold: |1/p^2∑_h=0^p^2-1exp( 2π i·_h^(p))| ≤(())/p, and |1/p^2∑_h,ℓ=0^p-1exp( 2π i·_h,ℓ^(p))| ≤(())/p. Let us consider the first bound. As we have {0,…,p^2-1}={h_0+h_1p | 0≤ h_0,h_1<p} and, for each pair of h_0,h_1∈{0,…,p-1}, it holds that exp( 2π i·_h_0+h_1p^(p)) = exp( 2π i/p^2∑_j∈()k_j (h_0+h_1p)^j) = exp( 2π i/p^2∑_j∈()k_j ∑_a=0^jjah_0^a(h_1p)^j-a) = exp( 2π i/p^2∑_j∈()k_j (h_0^j+j h_0^j-1h_1p)), we obtain |1/p^2∑_h=0^p^2-1exp( 2π i·_h^(p))| = |1/p^2∑_h_0,h_1=0^p-1exp( 2π i/p^2∑_j∈()k_j (h_0^j+j h_0^j-1h_1p))| = |1/p∑_h_0=0^p-1exp( 2π i/p^2∑_j∈()k_j h_0^j)1/p∑_h_1=0^p-1exp(2π i h_1/p∑_j∈()k_j j h_0^j-1)| ≤1/p∑_h_0=0^p-1|1/p∑_h_1=0^p-1exp(2π i h_1/p∑_j∈()k_j j h_0^j-1)| = 1/p∑_h_0=0 ∑_j∈()k_j j h_0^j-1≡ 0 p^p-11, where the last equality follows from the well-known character property for the trigonometric functions <cit.>. Here, by denoting j_min=min_j∈()j and j_max=max_j∈()j, we have ∑_j∈()k_j j h_0^j-1=∑_j=j_min j∈()^j_maxk_j j h_0^j-1= h_0^j_min-1∑_j=j_min j∈()^j_maxk_j j h_0^j-j_min. As the last sum over j is a polynomial in h_0 with degree j_max-j_min, the number of solutions of the congruence ∑_j∈()k_j j h_0^j-1≡ 0 p is at most j_max-j_min+1=(()). Thus the result follows. Since the second bound can be proven in the same manner, we omit the details. 
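As a concrete illustration of the construction above (and of the first bound in the lemma just proven), the point sets S_{d,p} and T_{d,p}, the multiset union P_{d,m}^1 and the equal-weight QMC rule can be generated as follows. This is a minimal sketch under stated assumptions: the test integrand and the values of d, m, p and the frequency vector k are arbitrary choices satisfying the hypotheses (all primes used are at least d, and p does not divide k); exact modular arithmetic is used so that the fractional parts carry no rounding error.

```python
import numpy as np

def primes_in(lo, hi):
    """Primes p with lo < p <= hi (trial division suffices for the sizes used here)."""
    is_prime = lambda n: n > 1 and all(n % q for q in range(2, int(n ** 0.5) + 1))
    return [p for p in range(lo + 1, hi + 1) if is_prime(p)]

def korobov_S(d, p):
    """S_{d,p}: the p^2 points ({h/p^2}, {h^2/p^2}, ..., {h^d/p^2}), h = 0,...,p^2-1."""
    q = p * p
    return np.array([[pow(h, j, q) / q for j in range(1, d + 1)] for h in range(q)])

def korobov_T(d, p):
    """T_{d,p}: the p^2 points ({h*l/p}, {h*l^2/p}, ..., {h*l^d/p}), 0 <= h, l < p."""
    return np.array([[h * pow(l, j, p) % p / p for j in range(1, d + 1)]
                     for h in range(p) for l in range(p)])

def point_set_P1(d, m):
    """Multiset union of S_{d,p} over the primes p with m/2 < p <= m."""
    return np.vstack([korobov_S(d, p) for p in primes_in(m // 2, m)])

def qmc(points, f):
    """Equal-weight quasi-Monte Carlo rule."""
    return np.mean([f(x) for x in points])

# QMC integration of a smooth periodic integrand whose exact integral is 1
d, m = 4, 50                                        # every prime in (25, 50] exceeds d
f = lambda x: np.prod(1.0 + 0.1 * np.cos(2 * np.pi * x))
pts = point_set_P1(d, m)
print(len(pts), qmc(pts, f))

# numerical sanity check of the first exponential-sum bound for S_{d,p}
p, k = 29, np.array([3, 0, 5, 0])                   # p >= d and p does not divide k
S = np.mean(np.exp(2j * np.pi * korobov_S(d, p) @ k))
support = np.flatnonzero(k) + 1                     # nonzero coordinates of k (1-based)
print(abs(S), (support.max() - support.min() + 1) / p)   # |sum| <= spread of support over p
```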
Note that, if k_j is divisible by p for all j, i.e., p|, then we only have a trivial bound on the exponential sum, which is 1. Using this refined result, we obtain the following bounds on the exponential sums for our point sets P_d,m^1 and P_d,m^2. Let d∈ and m≥ 2 with min_p∈_mp≥ d. For any ∈^d∖{}, it holds that |1/|P_d,m^1|∑_p∈_m∑_h=0^p^2-1exp( 2π i·_h^(p))| ≤1/m( 4(())+8/c_min_j∈()log |k_j|), and |1/|P_d,m^2|∑_p∈_m∑_h,ℓ=0^p-1exp( 2π i·_h,ℓ^(p))| ≤1/m( 4(())+8/c_min_j∈()log |k_j|). The following proof for the first bound is similar to that of <cit.>, and the second bound can be proven in a similar way, so we omit the details. Using Lemma <ref>, we have |1/|P_d,m^1|∑_p∈_m∑_h=0^p^2-1exp( 2π i·_h^(p))| ≤1/|P_d,m^1|∑_p∈_m|∑_h=0^p^2-1exp( 2π i·_h^(p))| ≤1/|P_d,m^1|∑_p∈_m p∤p(())+1/|P_d,m^1|∑_p∈_m p|p^2 ≤m|_m|/|P_d,m^1|(())+m^2/|P_d,m^1|∑_p∈_m p|1 ≤m|_m|/(m/2)^2|_m|(())+m^2/(m/2)^2|_m|∑_p∈_m p|1 ≤4/m(())+4log m/c_m∑_p∈_m p|1, where the last inequality follows from (<ref>). To give a bound on the last sum over p∈_m which divides , we use the fact that, for any integers k,n∈, k has at most log_n k prime divisors larger than or equal to n. With (·) denoting the indicator function, for any index j^*∈(), we get ∑_p∈_m p|1 = ∑_p∈_m∏_j∈()(p| k_j)≤∑_p∈_m(p| k_j^*) ≤log_⌈ m/2⌉ +1|k_j^*|≤2log |k_j^*|/log m. Since this inequality applies to any index j^*∈(), it holds that ∑_p∈_m p|1 ≤2/log mmin_j∈()log |k_j|. This completes the proof. Now we are ready to prove Theorem <ref>. Since any function f∈ F_d^2 has an absolutely convergent Fourier series, by letting Q_d,n being the QMC rule using P_d,m^1 (or P_d,m^2) for some m≥ 2 with min_p∈_mp≥ d, it follows from Corollary <ref> that, with n equal to ∑_p∈_mp^2, | I_d(f)-Q_d,n(f)| = | I_d(f)-1/|P_d,m^1|∑_p∈_m∑_h=0^p^2-1f(_h^(p))| = | f̂()-1/|P_d,m^1|∑_p∈_m∑_h=0^p^2-1∑_∈^df̂()exp( 2π i·_h^(p))| = | ∑_∈^d∖{}f̂()1/|P_d,m^1|∑_p∈_m∑_h=0^p^2-1exp( 2π i·_h^(p))| ≤∑_∈^d∖{}|f̂()|| 1/|P_d,m^1|∑_p∈_m∑_h=0^p^2-1exp( 2π i·_h^(p))| ≤1/m∑_∈^d∖{}|f̂()|( 4(())+8/c_min_j∈()log |k_j|) ≤16/c_m∑_∈^d∖{}|f̂()|max((()),min_j∈()log |k_j|) ≤16/c_mf. This leads to an upper bound on the worst-case error as e^(F_d^2,Q_d,n)≤16/c_m. Therefore, in order to make e^(F_d^2,Q_d,n) less than or equal to ε∈ (0,1), it suffices to choose m=⌈ 16c_^-1ε^-1⌉ and we have n(ε, d, F_d^2)≤∑_p∈_⌈ 16c_^-1ε^-1⌉p^2 ≤ C_⌈ 16c_^-1ε^-1⌉/log⌈ 16c_^-1ε^-1⌉×(⌈ 16c_^-1ε^-1⌉)^2, from which the result follows immediately. § PROOF OF THEOREM <REF> We adopt a similar approach as in the proofs of <cit.> and <cit.>. Consider an arbitrary linear algorithm Q_d,n(f)=∑_h=0^n-1w_h f(_h). For a set ⊂^d with enough cardinality ||> n, we define a function g: [0,1)^d→ by g()=∑_∈c_exp(2π i·) with c_∈, which satisfies g(_h)=0 for all h=0,…,n-1. In fact, there exists a non-zero vector of (c_)_∈, as the condition that g(_h)=0 for all h=0,…,n-1 forms n homogeneous linear equations with ||>n unknowns c_. Let us normalize these coefficients in such a way that max_∈|c_|=c_=1 for some ∈. With this and a positive constant C, we define another function g̃: [0,1)^d→ as follows: g̃()=Cexp(-2π i·)g()=C∑_∈c_exp(2π i(-)·). Then we construct a real-valued function g^⋆ defined on [0,1)^d by taking the average of g̃ and its complex conjugate: g^⋆()=(g̃()+g̃())/2. Regarding the norm of g^⋆ in F_d^1, we have g^⋆ ≤g̃+g̃/2 = g̃ = C∑_∈|c_|max(1,min_j∈()log |k_j-ℓ_j|) ≤ C∑_∈max(1,min_j∈()log |k_j-ℓ_j|) ≤ Cmax_∈∑_∈max(1,min_j∈()log |k_j-ℓ_j|). To ensure g^⋆≤ 1, we set C=( max_∈∑_∈max(1,min_j∈()log |k_j-ℓ_j|))^-1. 
By construction, we have g^⋆(_h)=0 for all h=0,…,n-1, which implies Q_n,d(g^⋆)=0. On the other hand, the exact integral is given by I_d(g^⋆)=Cc_=C=( max_∈∑_∈max(1,min_j∈()log |k_j-ℓ_j|))^-1. Since g^⋆∈ F_d^1 with g^⋆≤ 1, the worst-case error of any linear algorithm Q_d,n is bounded below by e^(F_d^1,Q_d,n)≥| I_d(g^⋆)-Q_n,d(g^⋆)|=( max_∈∑_∈max(1,min_j∈()log |k_j-ℓ_j|))^-1. In what follows, let ={}∪{∈^d | d-1 of k_j are all 0 and one non-zero k_j is from {1,…,⌈ n/d⌉}}. It is easy to verify that ||=1+d⌈ n/d⌉ > n. For this choice of , we can restrict ourselves to =(ℓ,0,…,0) for some ℓ∈{0,…,⌈ n/d⌉}. By utilizing the assumption n>2d and the well-known inequality log x ≤ x-1, we have ∑_∈max(1,min_j∈()log |k_j-ℓ_j|) = max(1,logℓ) + ∑_k_1=1^⌈ n/d⌉max(1,log |k_1-ℓ|) + ∑_j=2^d∑_k_j=1^⌈ n/d⌉max(1,log k_j) ≤log⌈ n/d⌉ + d∑_k=1^⌈ n/d⌉max(1,log k) ≤log⌈ n/d⌉ + d⌈ n/d⌉log⌈ n/d⌉ ≤(⌈ n/d⌉-1)·( 1+d⌈ n/d⌉)≤2n^2/d. Since the last bound is independent of ℓ, we obtain e^(F_d^1,Q_d,n)≥( max_∈∑_∈max(1,min_j∈()log |k_j-ℓ_j|))^-1≥d/2n^2. This completes the proof. plain
http://arxiv.org/abs/2306.12380v1
20230621165150
On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation
[ "Samir M. Perlaza", "Iñaki Esnaola", "Gaetan Bisson", "H. Vincent Poor" ]
cs.LG
[ "cs.LG", "cs.IT", "math.IT", "math.PR", "math.ST", "stat.TH" ]
On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation Samir M. Perlaza123, Iñaki Esnaola24, Gaetan Bisson3, and H. Vincent Poor2 1 INRIA, Centre Inria d'Université Côte d'Azur, Sophia Antipolis, France. 2 ECE Dept. Princeton University, Princeton, 08544 NJ, USA. 3 GAATI, Université de la Polynésie Française, Faaa, French Polynesia. 4 ACSE Dept., University of Sheffield, Sheffield, United Kingdom. This work is supported by the Inria Exploratory Action – Information and Decision Making (AEx IDEM) and in part by a grant from the C3.ai Digital Transformation Institute. ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= The dependence on training data of the Gibbs algorithm (GA) is analytically characterized. By adopting the expected empirical risk as the performance metric, the sensitivity of the GA is obtained in closed form. In this case, sensitivity is the performance difference with respect to an arbitrary alternative algorithm. This description enables the development of explicit expressions involving the training errors and test errors of GAs trained with different datasets. Using these tools, dataset aggregation is studied and different figures of merit to evaluate the generalization capabilities of GAs are introduced. For particular sizes of such datasets and parameters of the GAs, a connection between Jeffrey’s divergence, training and test errors is established. § INTRODUCTION The Gibbs algorithm (GA) randomly selects a model by sampling the Gibbs probability measure, which is the unique solution to the empirical risk minimization (ERM) problem with relative entropy regularization (ERM-RER) <cit.>. The input of the GA is twofold. It requires a number of labeled patterns (datasets); and a prior on the set of models in the form of a σ-measure, e.g., the Lebesgue measure, the counting measure, or a probability measure. One of the main features of the GA is that it does not require an assumption on the statistical properties of the datasets <cit.>. Nonetheless, the generalization capabilities of the Gibbs algorithm are often characterized by the generalization error, for which statistical assumptions on the datasets must be considered, e.g., training, and unseen datasets are identically distributed. When the prior on the set of models is a probability measure, a closed-form expression for the generalization error is presented in <cit.>, while upper bounds have been derived in <cit.>, and references therein. In a more general setting, when the prior on the set of models is a σ-measure, the generalization capabilities of the GA have been studied in <cit.>, and <cit.>, using the sensitivity of the empirical risk to deviations from the Gibbs probability measure to another probability measure. 
This method does not require any statistical assumptions on the datasets and is chosen as the workhorse of the present analysis. The main motivation of this work is to break away from the implicit assumption in existing literature that all training datasets are drawn from the same probability measure and thus, can be aggregated to improve the generalization capabilities of a given GA. In practical settings, training data might be acquired from multiple sources that might be subject to different impairments during data acquisition, data storage and data transmission. For instance, consider a GA trained upon a particular dataset and assume that a new dataset from a different source is made available. Hence, the following questions arise concerning the generalization capabilities of such a GA: Would such a GA generalize over the new dataset? Should the new dataset be aggregated to the previous dataset to build a new GA in the aim of improving generalization? How does the GA trained upon the existing dataset compare in terms of generalization with respect to a new GA trained upon the new dataset? The answers to such questions are far from trivial. One of the main challenges to answer such questions stems from the fact that the probability measures generating each of those datasets are unknown and potentially different due to a variety of impairments. This paper introduces a closed-form expression for the difference of the expected empirical risk on a given dataset induced by a GA trained upon this dataset and the one induced by an alternative algorithm (another probability measure). This quantity was coined sensitivity of the GA algorithm in <cit.> and is shown to be central to tackling the questions above. This is in part due to the fact that it allows studying the generalization capabilities of GAs based on actual datasets, which disengages from the assumption that both training and unseen data follow the same probability distribution. More specifically, by studying the sensitivity, closed-form expressions for the difference between training error and test error can be obtained. These expressions lead to a clearer understanding of the roles of the size of datasets chosen for training and testing, as well as the parameters of the GAs. As a byproduct, the difference between the expected empirical risk on the aggregation of two datasets induced by two GAs trained upon the constituent datasets is characterized. Similarly, the difference between the expected empirical risk on one of the constituent datasets induced by two GAs trained upon the aggregated dataset and the constituent dataset is also characterized. These explicit expressions allow comparing two GAs trained upon different datasets, which is relevant under learning paradigms such as federated learning <cit.>. § PROBLEM FORMULATION Let M, X and Y, with M⊆^d and d ∈, be sets of models, patterns, and labels, respectively. A pair (x,y) ∈𝒳×𝒴 is referred to as a labeled pattern or as a data point. Given n data points, with n ∈, denoted by (x_1, y_1 ), ( x_2, y_2), …, ( x_n, y_n ), a dataset is represented by the tuple ((x_1, y_1 ), (x_2, y_2 ), …, (x_n, y_n )) ∈( 𝒳×𝒴)^n. Let the function f: M×𝒳→𝒴 be such that the label y assigned to the pattern x according to the model θ∈M is y = f(θ, x). Let also the function ℓ: Y×Y→ [0, +∞] be such that given a data point (x, y) ∈X×Y, the risk induced by a model θ∈M is  ℓ( f(θ, x), y ). In the following, the risk function ℓ is assumed to be nonnegative and for all y ∈Y,  ℓ( y , y) = 0. 
Given a dataset z = ((x_1, y_1 ), (x_2, y_2 ), …, (x_n, y_n )) ∈( X×Y)^n, the empirical risk induced by the model θ, with respect to the dataset z in (<ref>), is determined by the function 𝖫_z: M→ [0, +∞ ], which satisfies rCl 𝖫_z (θ ) = 1/n∑_i=1^n ℓ( f(θ, x_i), y_i). Using this notation, the ERM problem consists of the following optimization problem: min_θ∈M𝖫_z(θ). Let the set of solutions to the ERM problem in (<ref>) be denoted by T( z) ≜min_θ∈M𝖫_z(θ). Note that if the set M is finite, the ERM problem in (<ref>) always possesses a solution, and thus, T( z) > 0. Nonetheless, in general, the ERM problem might not necessarily possess a solution. Hence, for some cases, it might be observed that T( z) = 0. §.§ Notation The relative entropy is defined below as the extension to σ-finite measures of the relative entropy usually defined for probability measures. Given two σ-finite measures P and Q on the same measurable space, such that Q is absolutely continuous with respect to P, the relative entropy of Q with respect to P is QP≜∫dQ/dP(x) log( dQ/dP(x)) dP(x), where the function dQ/dP is the Radon-Nikodym derivative of Q with respect to P. Given a measurable space ( Ω, ℱ), the set of all σ-finite measures on ( Ω, ℱ) is denoted by ( Ω, ℱ). Given a σ-measure Q ∈( Ω, ℱ), the subset of ( Ω, ℱ) including all σ-finite measures absolutely continuous with Q is denoted by _Q( Ω, ℱ). Given a subset A of ^d, the Borel σ-field on A is denoted by A. §.§ The ERM-RER Problem The expected empirical risk is defined as follows. Let P be a probability measure in ΔM. The expected empirical risk with respect to the dataset z in (<ref>) induced by the measure P is 𝖱_z( P ) = ∫𝖫_z(θ) d P(θ), where the function 𝖫_z is in (<ref>). The following lemma follows immediately from the properties of the Lebesgue integral. Given a dataset z∈( X×Y)^n and two probability measures P_1 and P_2 over the measurable space M, for all α∈ [0,1], the function 𝖱_z in (<ref>) satisfies rcl 𝖱_z( αP_1 + (1- α)P_2) = α𝖱_z( P_1 ) + (1- α) 𝖱_z( P_2). The ERM-RER problem is parametrized by a σ-finite measure on M and a positive real, which are referred to as the reference measure and the regularization factor, respectively. Let Q be a σ-finite measure on M and let λ > 0 be a positive real. The ERM-RER problem, with parameters Q and λ, consists in the following optimization problem: min_P ∈_QM𝖱_z( P ) + λ D( P Q), where the dataset z is in (<ref>); and the function 𝖱_z is defined in (<ref>). For the ease of presentation, the parameters of the ERM-RER problem in (<ref>) are chosen such that Q( {θ∈M: 𝖫_z( θ) = +∞}) =0. The case in which the regularization is QP (instead of PQ) in (<ref>) is left out of the scope of this work. The interested reader is referred to <cit.>. §.§ The Solution to the ERM-RER Problem The solution to the ERM-RER problem in (<ref>) is presented by the following lemma. Given a σ-finite measure Q and a dataset z∈( X×Y)^n, let the function K_Q,z: →∪{ +∞} be such that for all t ∈, rcl K_Q,z(t ) = log( ∫exp( t 𝖫_z(θ) ) dQ(θ) ), where the function 𝖫_z is defined in (<ref>). Let also the set K_Q,z⊂ (0, +∞) be rcl K_Q,z ≜ {s > 0: K_Q,z(-1/s ) < +∞}. Then, for all λ∈K_Q,z, the solution to the ERM-RER problem in (<ref>) is a unique measure on M, denoted by P^(Q, λ)_Θ| Z = z, whose Radon-Nikodym derivative with respect to Q satisfies that for all θ∈ Q, rcl dP^(Q, λ)_Θ| Z = z/dQ ( θ ) = exp( - K_Q,z(- 1/λ ) - 1/λ 𝖫_z( θ)). 
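When the model set M is finite and the reference measure Q is a probability vector, the lemma above can be implemented directly. The sketch below is only an illustration with toy values (the per-model risks, the prior and λ are assumptions): it builds the Gibbs probability measure from the Radon-Nikodym derivative above, computes the expected empirical risk and the relative entropy, and numerically verifies the identity 𝖱_z(P^(Q,λ)) + λ D(P^(Q,λ)‖Q) = -λ K_{Q,z}(-1/λ) stated in the lemma that follows.

```python
import numpy as np

def gibbs_measure(L, Q, lam):
    """Gibbs algorithm on a finite model set: dP/dQ(theta) = exp(-K(-1/lam) - L(theta)/lam)."""
    logw = np.log(Q) - L / lam                      # log of Q(theta) * exp(-L(theta)/lam)
    K = np.log(np.sum(np.exp(logw)))                # K_{Q,z}(-1/lam) = log E_Q[exp(-L/lam)]
    return np.exp(logw - K), K                      # normalized Gibbs probabilities and K

def expected_risk(L, P):
    """Expected empirical risk R_z(P): the integral of L_z with respect to P."""
    return float(np.sum(P * L))

def kl(P, Q):
    """Relative entropy D(P || Q) for discrete measures, P absolutely continuous w.r.t. Q."""
    m = P > 0
    return float(np.sum(P[m] * np.log(P[m] / Q[m])))

# toy example: five models, uniform prior, arbitrary empirical risks
L = np.array([0.9, 0.2, 0.5, 1.4, 0.3])             # L_z(theta) for each model (assumption)
Q = np.full(5, 0.2)
lam = 0.25
P, K = gibbs_measure(L, Q, lam)
# check of the identity R_z(P) + lam * D(P||Q) = -lam * K_{Q,z}(-1/lam)
print(expected_risk(L, P) + lam * kl(P, Q), -lam * K)
```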
Among the numerous properties of the solution to the ERM-RER problem in (<ref>), the following property is particularly useful in the remainder of this work. Given a σ-finite measure Q over the measurable space M, and given a dataset z∈( X×Y)^n, for all λ∈K_Q,z, with K_Q,z in (<ref>), the following holds: rcl 𝖱_z( P^(Q, λ)_Θ| Z = z ) + λP^(Q, λ)_Θ| Z = zQ = - λK_Q,z(- 1/λ ), where the function 𝖱_z is defined in (<ref>); the function K_Q,z is defined in (<ref>); and the probability measure P^(Q, λ)_Θ| Z = z is the solution to the ERM-RER problem in (<ref>). The proof is presented in <cit.>. § SENSITIVITY OF THE ERM-RER SOLUTION The sensitivity of the expected empirical risk 𝖱_z to deviations from the probability measure P^(Q, λ)_Θ| Z = z towards an alternative probability measure P is defined as follows. Given a σ-finite measure Q and a positive real λ > 0, let 𝖲_Q, λ: ( X×Y)^n ×_QM→( - ∞, +∞] be a function such that l 𝖲_Q, λ( z, P ) = {[ 𝖱_z( P ) - 𝖱_z( P^(Q, λ)_Θ| Z = z ) if λ∈K_Q,z; +∞ otherwise, ] . where the function 𝖱_z is defined in (<ref>) and the measure P^(Q, λ)_Θ| Z = z is the solution to the ERM-RER problem in (<ref>). The sensitivity of the expected empirical risk 𝖱_z when the measure changes from P^(Q, λ)_Θ| Z = z to P is 𝖲_Q, λ( z, P). The following theorem introduces an exact expression for the sensitivity in Definition <ref>. Given a σ-finite measure Q over the measurable space M and a probability measure P ∈_QM, it holds that for all datasets z∈( X×Y)^n and for all λ∈K_Q,z, with K_Q,z in (<ref>), rcl 𝖲_Q, λ( z, P ) = λ( P^(Q, λ)_Θ| Z = zQ + PP^(Q, λ)_Θ| Z = z - PQ ), where the probability measure P^(Q, λ)_Θ| Z = z is the solution to the ERM-RER problem in (<ref>). The proof uses the fact that, under the assumption in (<ref>), the probability measure P^(Q, λ)_Θ| Z = z in (<ref>) is mutually absolutely continuous with respect to the σ-finite measure Q; see for instance <cit.>. Hence, the probability measure P is absolutely continuous with respect to P^(Q, λ)_Θ| Z = z, as a consequence of the assumption that P is absolutely continuous with respect to Q. The proof follows by noticing that for all θ∈M, rcl log(dP/dP^(Q, λ)_Θ| Z = z ( θ )) = log(dQ/dP^(Q, λ)_Θ| Z = z ( θ ) dP/dQ ( θ )) = - log(dP^(Q, λ)_Θ| Z = z/dQ ( θ ) ) + log( dP/dQ ( θ )) = K_Q,z(- 1/λ ) + 1/λ 𝖫_z( θ)+ log( dP/dQ ( θ )), where the functions 𝖫_z and K_Q,z are defined in (<ref>) and in (<ref>), respectively; and the equality in (<ref>) follows from Lemma <ref>. Hence, the relative entropy PP^(Q, λ)_Θ| Z = z satisfies rcl PP^(Q, λ)_Θ| Z = z = ∫log(dP/dP^(Q, λ)_Θ| Z= z ( θ )) dP ( θ ) = K_Q,z(- 1/λ ) + ∫( 1/λ 𝖫_z( θ) + log( dP/dQ ( θ )) )dP ( θ ) = K_Q,z(- 1/λ ) + 1/λ𝖱_z( P ) + PQ = - P^(Q, λ)_Θ| Z = zQ + 1/λ ( 𝖱_z( P ) - 𝖱_z( P^(Q, λ)_Θ| Z = z ) ) + PQ, where the function 𝖱_z is defined in (<ref>), the equality in (<ref>) follows from (<ref>), and the equality in (<ref>) follows from Lemma <ref>. Finally, the proof is completed by re-arranging the terms in (<ref>). § VALIDATION OF GIBBS ALGORITHMS Consider the dataset z_0∈( X×Y)^n_0 that aggregates dataset z_1∈( X×Y)^n_1 and dataset z_2∈( X×Y)^n_2 as constituents. That is, z_0 = ( z_1, z_2 ), with n_0 = n_1 + n_2. Datasets z_1 and z_2 are referred to as constituent datasets, whereas, the dataset z_0 is referred to as the aggregated dataset. For all i ∈{ 0,1,2 }, the empirical risk function in (<ref>) and the expected empirical risk function in (<ref>) over dataset z_i are denoted by 𝖫_z_i and 𝖱_z_i, respectively. Such functions exhibit the following property. 
The empirical risk functions 𝖫_z_0, 𝖫_z_1, and 𝖫_z_2, defined in (<ref>) satisfy for all θ∈M, rcl 𝖫_z_0 ( θ ) = n_1/n_0𝖫_z_1( θ ) + n_2/n_0𝖫_z_2( θ ). Moreover, the expected empirical risk functions 𝖱_z_0, 𝖱_z_1, and 𝖱_z_2, defined in (<ref>), satisfy for all σ-finite measures P ∈M, rcl 𝖱_z_0 ( P ) = n_1/n_0𝖱_z_1( P ) + n_2/n_0𝖱_z_2( P ). The proof is presented in <cit.>. For all i ∈{ 0,1,2 }, let Q_i∈M and λ_i∈K_Q_i,z_i, with K_Q_i,z_i in (<ref>), be the σ-finite measure acting as the reference measure and regularization factor for the learning task with dataset i, respectively. Each dataset induces a different ERM-RER problem formulation of the form min_P ∈_Q_iM𝖱_z_i(P ) + λ_i D( PQ_i), where 𝖱_z_i is the expected empirical risk defined in (<ref>). For all i ∈{ 0,1,2 }, the solution to the ERM-RER problem in (<ref>) is the probability measure denoted by P^(Q_i, λ_i)_Θ| Z = z_i. In particular, from Lemma <ref>, it holds that the probability measure P^(Q_i, λ_i)_Θ| Z = z_i satisfies for all θ∈ Q_i, rcl dP^(Q_i, λ_i)_Θ| Z = z_i/dQ_i ( θ ) = exp( - K_Q_i,z_i(- 1/λ_i ) - 1/λ_i 𝖫_z_i( θ)). For all i ∈{ 0,1,2}, the probability measure P^(Q_i, λ_i)_Θ| Z = z_i in (<ref>) represents a GA trained upon the dataset z_i with parameters (Q_i, λ_i). In the following, such an algorithm is denoted by 𝖦𝖠_i and the dataset z_i is often referred to as the training dataset of 𝖦𝖠_i. The dataset z_j, with j ∈{ 0,1,2}∖{ i }, which might contain datapoints that are not in z_i, is referred to as the test dataset for 𝖦𝖠_i. §.§ Gibbs Algorithms Trained on Constituent Datasets The expected empirical risk induced by 𝖦𝖠_i on the training dataset z_i is the training expected empirical risk, which is denoted by 𝖱_z_i( P^(Q_i, λ_i)_Θ| Z = z_i) and often referred to as the training error <cit.>. Alternatively, the expected empirical risk induced by 𝖦𝖠_i on the test dataset z_j is the test expected empirical risk, which is denoted by 𝖱_z_j( P^(Q_i, λ_i)_Θ| Z = z_i) and often referred to as the test error <cit.>. The following theorem provides explicit expressions involving the training and test errors of 𝖦𝖠_1 and 𝖦𝖠_2. Assume that the σ-finite measures Q_1 and Q_2 in (<ref>) are mutually absolutely continuous. Then, for all i ∈{ 1,2} and j ∈{ 1,2}∖{ i }, rcl 𝖱_z_i( P^(Q_j, λ_j)_Θ| Z = z_j) - 𝖱_z_i( P^(Q_i, λ_i)_Θ| Z = z_i ) = λ_i ( P^(Q_i, λ_i)_Θ| Z = z_iQ_i + P^(Q_j, λ_j)_Θ| Z = z_jP^(Q_i, λ_i)_Θ| Z = z_i - P^(Q_j, λ_j)_Θ| Z = z_j Q_i ), where the function 𝖱_z_i is defined in (<ref>) and the measure P^(Q_i, λ_i)_Θ| Z = z_i satisfies (<ref>). The proof is immediate from Theorem <ref> by noticing that for all i ∈{ 1,2 } and for all j ∈{ 1,2 }∖{ i }, the differences 𝖱_z_i( P^(Q_j, λ_j)_Θ| Z = z_j) - 𝖱_z_i( P^(Q_i, λ_i)_Θ| Z = z_i) can be written in terms of the sensitivity 𝖲_Q_i, λ_i( z_i, P^(Q_j, λ_j)_Θ| Z = z_j). A reasonable figure of merit to compare two machine learning algorithms trained upon two different training datasets is the difference between the expected empirical risk they induce upon the aggregation of their training datasets. The following theorem provides an explicit expression for this figure of merit for the case of the algorithms 𝖦𝖠_1 and 𝖦𝖠_2. Assume that the σ-finite measures Q_1 and Q_2 in (<ref>) are mutually absolutely continuous. 
Then, rcl 𝖱_z_0( P^(Q_2, λ_2)_Θ| Z = z_2) - 𝖱_z_0( P^(Q_1, λ_1)_Θ| Z = z_1 ) = n_1/n_0λ_1 ( P^(Q_1, λ_1)_Θ| Z = z_1Q_1 + P^(Q_2, λ_2)_Θ| Z = z_2P^(Q_1, λ_1)_Θ| Z = z_1 - P^(Q_2, λ_2)_Θ| Z = z_2 Q_1 ) - n_2/n_0λ_2 ( P^(Q_2, λ_2)_Θ| Z = z_2Q_2 + P^(Q_1, λ_1)_Θ| Z = z_1P^(Q_2, λ_2)_Θ| Z = z_2 - P^(Q_1, λ_1)_Θ| Z = z_1 Q_2 ), where the function 𝖱_z_0 is defined in (<ref>) and, for all i ∈{ 1,2}, the measure P^(Q_i, λ_i)_Θ| Z = z_i satisfies (<ref>). The proof uses the following argument: rcl 𝖱_z_0( P^(Q_2, λ_2)_Θ| Z = z_2 ) - 𝖱_z_0( P^(Q_1, λ_1)_Θ| Z = z_1 ) = n_1/n_0 𝖱_z_1( P^(Q_2, λ_2)_Θ| Z = z_2 ) + n_2/n_0 𝖱_z_2( P^(Q_2, λ_2)_Θ| Z = z_2 ) - ( n_1/n_0 𝖱_z_1( P^(Q_1, λ_1)_Θ| Z = z_1 ) + n_2/n_0 𝖱_z_2( P^(Q_1, λ_1)_Θ| Z = z_1 ) ) = n_1/n_0 𝖲_Q_1, λ_1( z_1, P^(Q_2, λ_2)_Θ| Z = z_2 ) - n_2/n_0 𝖲_Q_2, λ_2( z_2, P^(Q_1, λ_1)_Θ| Z = z_1 ), where the equality in (<ref>) follows from Lemma <ref>; and the equality in (<ref>) follows from Definition <ref>. The proof is completed by Theorem <ref>. §.§ Averaging Gibbs Measures In practical scenarios, building GAs on the aggregated dataset might be difficult or impossible due to limited computational power or due to the fact that dataset aggregation at one location is not allowed due to privacy constraints. In these cases, a common practice is to average the output of machine learning algorithms trained on constituent datasets, e.g., federated learning <cit.>. In this case, a figure of merit to validate such an approach is to study the difference of the expected empirical risk induced on the aggregated dataset by 𝖦𝖠_0 and a convex combination of 𝖦𝖠_1 and 𝖦𝖠_2. The following theorem provides an explicit expression for this quantity. Assume that the σ-finite measures Q_0, Q_1 and Q_2 in (<ref>) are pair-wise mutually absolutely continuous. Then, for all α∈ [0,1], rcl 𝖱_z_0(αP^(Q_1, λ_1)_Θ| Z = z_1 + ( 1-α) P^(Q_2, λ_2)_Θ| Z = z_2 ) - 𝖱_z_0( P^(Q_0, λ_0)_Θ| Z = z_0 ) = λ_0( P^(Q_0, λ_0)_Θ| Z = z_0Q_0 + α( P^(Q_1, λ_1)_Θ| Z = z_1P^(Q_0, λ_0)_Θ| Z = z_0 - P^(Q_1, λ_1)_Θ| Z = z_1Q_0 ) + ( 1 - α) ( P^(Q_2, λ_2)_Θ| Z = z_2P^(Q_0, λ_0)_Θ| Z = z_0 - P^(Q_2, λ_2)_Θ| Z = z_2Q_0 ) ), where the function 𝖱_z_0 is defined in (<ref>) and, for all i ∈{ 1,2}, the measure P^(Q_i, λ_i)_Θ| Z = z_i satisfies (<ref>). The proof uses the following argument: rcl 𝖱_z_0(αP^(Q_1, λ_1)_Θ| Z = z_1 + ( 1-α) P^(Q_2, λ_2)_Θ| Z = z_2 ) - 𝖱_z_0( P^(Q_0, λ_0)_Θ| Z = z_0 ) = α𝖱_z_0( P^(Q_1, λ_1)_Θ| Z = z_1 ) + ( 1-α) 𝖱_z_0( P^(Q_2, λ_2)_Θ| Z = z_2 ) - α𝖱_z_0( P^(Q_0, λ_0)_Θ| Z = z_0 ) - ( 1-α) 𝖱_z_0( P^(Q_0, λ_0)_Θ| Z = z_0 ) = α( 𝖱_z_0( P^(Q_1, λ_1)_Θ| Z = z_1 ) - 𝖱_z_0( P^(Q_0, λ_0)_Θ| Z = z_0 ) ) + ( 1-α) ( 𝖱_z_0( P^(Q_2, λ_2)_Θ| Z = z_2 ) - 𝖱_z_0( P^(Q_0, λ_0)_Θ| Z = z_0 ) ) = α𝖲_Q_0, λ_0( z_0, P^(Q_1, λ_1)_Θ| Z = z_1 ) + ( 1 - α) 𝖲_Q_0, λ_0( z_0, P^(Q_2, λ_2)_Θ| Z = z_2 ), where the equality in (<ref>) follows from Lemma <ref>, and the equality in (<ref>) follows from Definition <ref>. The proof is completed by Theorem <ref>. The following corollary of Theorem <ref> is obtained by subtracting the equality in (<ref>) with α =1 from the equality in (<ref>) with α =0. Assume that the σ-finite measures Q_0, Q_1 and Q_2 in (<ref>) are pair-wise mutually absolutely continuous. 
Then, for all i ∈{ 0,1,2 }, the probability measure P^(Q_i, λ_i)_Θ| Z = z_i in (<ref>) satisfies rcl 𝖱_z_0( P^(Q_2, λ_2)_Θ| Z = z_2) - 𝖱_z_0( P^(Q_1, λ_1)_Θ| Z = z_1 ) = λ_0 (P^(Q_2, λ_2)_Θ| Z = z_2P^(Q_0, λ_0)_Θ| Z = z_0 - P^(Q_2, λ_2)_Θ| Z = z_2 Q_0) - λ_0 ( P^(Q_1, λ_1)_Θ| Z = z_1P^(Q_0, λ_0)_Θ| Z = z_0 - P^(Q_1, λ_1)_Θ| Z = z_1Q_0 ), where the function 𝖱_z_0 is defined in (<ref>). Corollary <ref> is an alternative to Theorem <ref> involving the GA trained upon the aggregated dataset, i.e., 𝖦𝖠_0. §.§ Gibbs Algorithms Trained on Aggregated Datasets Training a GA upon the aggregation of datasets does not necessarily imply lower expected empirical risk on the constituent datasets. As argued before, datasets might be obtained up to different levels of fidelity. Hence, a validation method for 𝖦𝖠_0 is based on the expected empirical risk induced by 𝖦𝖠_0 on a constituent dataset z_i, with i ∈{ 1,2 }, which is denoted by 𝖱_z_i( P^(Q_0, λ_0)_Θ| Z = z_0). A pertinent figure of merit is the difference 𝖱_z_i( P^(Q_0, λ_0)_Θ| Z = z_0) - 𝖱_z_i( P^(Q_i, λ_i)_Θ| Z = z_i). The following theorem provides an explicit expression for such quantity. Assume that the σ-finite measures Q_0, Q_1 and Q_2 in (<ref>) are pair-wise mutually absolutely continuous. Then, for all i ∈{ 0,1,2 }, rcl 𝖱_z_i ( P^(Q_0, λ_0)_Θ| Z = z_0 ) - 𝖱_z_i( P^(Q_i, λ_i)_Θ| Z = z_i ) = λ_i ( P^(Q_i, λ_i)_Θ| Z = z_iQ_i + P^(Q_0, λ_0)_Θ| Z = z_0P^(Q_i, λ_i)_Θ| Z = z_i - P^(Q_0, λ_0)_Θ| Z = z_0Q_i ) , where, the function 𝖱_z_i is defined in (<ref>) and the measure P^(Q_i, λ_i)_Θ| Z = z_i satisfies (<ref>). The proof is immediate from Theorem <ref> by noticing that for all i ∈{ 1,2 }, the differences 𝖱_z_i( P^(Q_0, λ_0)_Θ| Z = z_0) - 𝖱_z_i( P^(Q_i, λ_i)_Θ| Z = z_i) can be written in terms of the sensitivity 𝖲_Q_i, λ_i( z_i, P^(Q_0, λ_0)_Θ| Z = z_0). §.§ Special Cases Consider a given σ-finite measure Q and assume that for all i ∈{ 0, 1, 2 } and for all A∈M, Q( A) = Q_i( A). Assume also that the parameters λ_0, λ_1, and λ_2 in (<ref>) satisfy λ_1 = n_0/n_1λ_0 and λ_2 = n_0/n_2λ_0. These assumptions are referred to as the case of homogeneous priors with measure Q, and the case of proportional regularization, respectively. The term “proportional” stems from the fact that the regularization factor decreases proportionally to the size of the data set in the optimization problem in (<ref>). Under these assumptions, the following corollary of Theorem <ref> unveils an interesting connection with the Jeffrey's divergence <cit.>. Consider the case of homogeneous priors with a σ-finite measure Q and proportional regularization with parameter λ_0. Then, for all i ∈{ 1,2 }, the probability measure P^(Q, λ_i)_Θ| Z = z_i in (<ref>), satisfies l ( n_1/n_0 𝖱_z_1( P^(Q, λ_2)_Θ| Z = z_2) - n_2/n_0 𝖱_z_2( P^(Q, λ_2)_Θ| Z = z_2 ) ) + ( n_2/n_0 𝖱_z_2( P^(Q, λ_1)_Θ| Z = z_1 ) - n_1/n_0 𝖱_z_1( P^(Q, λ_1)_Θ| Z = z_1 ) ) = λ_0 ( P^(Q, λ_1)_Θ| Z = z_1P^(Q, λ_2)_Θ| Z = z_2 + P^(Q, λ_2)_Θ| Z = z_2P^(Q, λ_1)_Θ| Z = z_1 ). Note that P^(Q, λ_1)_Θ| Z = z_1P^(Q, λ_2)_Θ| Z = z_2 + P^(Q, λ_2)_Θ| Z = z_2P^(Q, λ_1)_Θ| Z = z_1 is the Jeffrey's divergence between the measures P^(Q, λ_1)_Θ| Z = z_1 and P^(Q, λ_2)_Θ| Z = z_2. For all i ∈{ 1,2 } and j ∈{ 1,2 }∖{ i }, the difference n_j/n_0𝖱_z_j( P^(Q, λ_i)_Θ| Z = z_i) - n_i/n_0𝖱_z_i( P^(Q, λ_i)_Θ| Z = z_i) is reminiscent of a validation <cit.>. This follows from noticing that 𝖱_z_j( P^(Q, λ_i)_Θ| Z = z_i) is the testing error of 𝖦𝖠_i over the test dataset z_j, while 𝖱_z_i( P^(Q, λ_i)_Θ| Z = z_i) is the training error of 𝖦𝖠_i. 
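Continuing the finite-model sketch above (and reusing gibbs_measure, expected_risk and kl), the closed-form expressions of this section can be checked numerically: the corollary relating the weighted test-minus-training errors of 𝖦𝖠_1 and 𝖦𝖠_2 to λ_0 times the Jeffrey's divergence under homogeneous priors and proportional regularization, and the theorem comparing 𝖦𝖠_0 (trained on the aggregated dataset) with 𝖦𝖠_1 on the constituent dataset z_1. All per-model risks and dataset sizes below are toy assumptions.

```python
import numpy as np
# reuses gibbs_measure, expected_risk and kl from the sketch above

L1 = np.array([0.8, 0.1, 0.6, 1.2, 0.4])            # per-model risks on z_1 (assumption)
L2 = np.array([0.3, 0.7, 0.2, 0.9, 1.1])            # per-model risks on z_2 (assumption)
n1, n2 = 60, 40
n0 = n1 + n2
L0 = (n1 * L1 + n2 * L2) / n0                       # aggregation lemma for the empirical risks

Q = np.full(5, 0.2)                                 # homogeneous prior
lam0 = 0.2
lam1, lam2 = n0 * lam0 / n1, n0 * lam0 / n2         # proportional regularization

P1, _ = gibbs_measure(L1, Q, lam1)                  # GA_1 trained on z_1
P2, _ = gibbs_measure(L2, Q, lam2)                  # GA_2 trained on z_2
P0, _ = gibbs_measure(L0, Q, lam0)                  # GA_0 trained on the aggregated dataset

# Jeffrey's divergence corollary: weighted test-minus-training errors vs lam0 * Jeffrey
lhs = (n1 / n0) * (expected_risk(L1, P2) - expected_risk(L1, P1)) \
    + (n2 / n0) * (expected_risk(L2, P1) - expected_risk(L2, P2))
print(lhs, lam0 * (kl(P1, P2) + kl(P2, P1)))

# theorem on GA_0 versus GA_1 evaluated on the constituent dataset z_1
lhs0 = expected_risk(L1, P0) - expected_risk(L1, P1)
print(lhs0, lam1 * (kl(P1, Q) + kl(P0, P1) - kl(P0, Q)))
```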
In (<ref>), it holds that P^(Q, λ_1)_Θ| Z = z_1P^(Q, λ_2)_Θ| Z = z_2 and P^(Q, λ_2)_Θ| Z = z_2P^(Q, λ_1)_Θ| Z = z_1 are both nonnegative, which leads to the following corollary of Theorem <ref>. Consider the case of homogeneous priors with a σ-finite measure Q and proportional regularization. Then, for all i ∈{ 1,2 }, the probability measure P^(Q, λ_i)_Θ| Z = z_i in (<ref>), satisfies rcl ( n_1/n_0 𝖱_z_1( P^(Q, λ_2)_Θ| Z = z_2) + n_2/n_0 𝖱_z_2( P^(Q, λ_1)_Θ| Z = z_1 ) ) ⩾ ( n_1/n_0 𝖱_z_1( P^(Q, λ_1)_Θ| Z = z_1 ) + n_2/n_0 𝖱_z_2( P^(Q, λ_2)_Θ| Z = z_2 ) ). Corollary <ref> highlights that the weighted sum of the test errors induced by 𝖦𝖠_1 and 𝖦𝖠_2 is not smaller than the weighted sum of their training errors when the weights are proportional to the sizes of the datasets.
http://arxiv.org/abs/2306.15678v1
20230615101803
Toward high-speed effective numerical simulation of multiple filamentation of high-power femtosecond laser radiation in transparent medium
[ "Andrey Bulygin", "Yury Geints" ]
physics.optics
[ "physics.optics" ]
1 V.E. Zuev Institute of Atmospheric Optics SB RAS, Tomsk, Russia 2Tomsk State University, Lenin ave. 36, Tomsk 634050, Russia *[email protected] High-power femtosecond laser radiation during the propagation in air (and other transparent media) experiences multiple filamentation. Filamentation is a unique nonlinear optical phenomenon, which is accompanied by a wealth of nonlinear optical effects such as formation of extended plasma channels in the beam wake, generation of higher harmonics and supercontinuum, generation of THz radiation. The manifestations of laser filamentation can be useful for solving atmospheric optics problems related to remote sensing of the environment as well as directed transmission of laser power. The classical numerical methods used for simulating the nonlinear long-range atmospheric propagation of high-power radiation with a sufficiently large laser beam aperture have almost reached their limit regarding the acceleration of calculations. To solve this problem and speed-up the numerical simulations of laser filamentation, we propose an improved numerical technique based on a modified method of phase screens constructed on a sparse spatial grid. Within the framework of this technique, we seek for optimal ansatz (substitution function) to the governing equations using the machine learning technology, which provides for the best correspondence to the numerical solution of the test problem using a denser spatial grid. § INTRODUCTION The propagation of high-power femtosecond laser radiation (HPLR) in air usually occurs in a nonlinear regime. In media with optical nonlinearity of the cubic type (Kerr-type nonlinearity), which include most gases and transparent dielectrics, the self-focusing of optical radiation leads to a bright nonlinear phenomenon called laser filamentation<cit.>. Pulse filamentation is accompanied by strong aberrations of the optical radiation, namely the large-scale transformations of its spatial-temporal profile. This leads to the formation of localized high-intensity optical structures usually referred to as the filaments. A characteristic feature of laser filaments is their stable transverse size, which persists over a sufficiently long distance and can be longer than the Rayleigh length of a beam of this diameter. In atmospheric air, the peak intensity in the filament can reach several hundreds of TW/cm^2, while the average filament size varies from several tens to hundreds of micrometers depending on air pressure and laser wavelength <cit.>. Experimentally, the laser pulse filamentation usually manifests itself as the appearance of glowing filaments in the visible spectrum in the beam channel. This is due to the recom-bination of plasma electrons in the regions formed as a result of laser pulse induced air molecules ionization (tunnel, multi-photon). The numerical simulation of the nonlinear propagation in air of a small-aperture (diameter of several millimeters or less) femtosecond laser pulse is usually carried out within the nonlinear Schrodinger equation (NLSE) in (2D+1) or (3D+1) formulation or using the more rigorous analog known as the unidirectional pulse propagation equation (UPPE) [2]. Note, that this is not an easy computational task, since it requires a sufficiently dense computational grid with a transverse spatial step of about tens of micrometers and a longitudinal step less than a few millimeters. 
This significantly complicates the implementation of massive in-parallel numerical calculations and limits the maximum range of pulse propagation simulation to several tens of meters <cit.>. At the same time, the practical needs of nonlinear atmospheric optics demand at least kilometer-long optical distances upon taking into account various optical weather (atmos-pheric aerosol, turbulence). Such long-range nonlinear propagation distances can be achieved only with wide centimeter-scale femtosecond laser beams, because of the fundamental limitation of the nonlinear focus position (the filamentation region) imposed by the Rayleigh diffraction length. When transferring to centimeter-diameter laser beams, the amount of computer resources and time required for numerical simulation of even one implementation of the laser pulse propagation increases enormously. It can be argued that at the present stage of computer technology development, direct numerical calculation of laser filamentation for such experiments within the complete non-stationary model is impractical even with the use of the supercomputer clusters. Therefore, to simplify the numerical calculation, as a rule, a reduction in the problem spatial dimensions is used by moving to its stationary approximation in various forms <cit.> . For example, in <cit.>, a phase-modulated (chirped) pulse was simulated through the substitution of real pulse power dependence on the path length by a simple analytical relationship determined by the dispersion law. However even in such case, one can only talk about the possibility of predicting the filamentation onset, and not about investigation its dynamics. Several efforts also have been made to find the analytical solutions and to study the properties of the NLSE in case of laser filamentation. For example, in <cit.> it is theoretically shown that multiple pulse filamentation can take place in different phase modes (phase transitions of the second kind), which correlates with the experimental results <cit.>. This finding is qualitatively consistent with that in Ref. [16], where based on the self-consistent field approximation the authors propose to restore the dynamics of the HPLR propagation as a whole, i.e., on the macroscopic scale (scale much larger than the size of a single filament). However, for practical implementation of these ideas there is not enough validity and verification on a set of test cases. It should be noted, that along with numerical approaches to the problem of laser fila-mentation there are also attempts to obtain some analytical solutions to the problem con-sidered, in particular, using the geometric optics (GO) approach <cit.>. However, the use of GO approximation can be justified only in the pre-filamentation region. Worthwhile noting, the transition from studying the small-scale optical field dynamics toward the pulse integral parameters, such as the total pulse energy and the root-mean-square (effective) beam radius. Here, one can mention the works <cit.>, which analyze the evolution of the effective radius and other integral properties of laser radiation during the filamentation in air including the use of completely conservative numerical schemes. The knowledge about the behavior of the effective beam radius can be useful both in practical atmospheric optics applications and in the theory of pulse self-focusing as it gives important qualitative ideas about the physics of filamentation. 
If one considers the actual numerical methods for solving the problem of nonlinear focusing of a high-power laser beam in the atmosphere, then in addition to semi-analytical approaches the attempts are made to develop various simplified methods. Here, the method of averaged optical field of a laser pulse is worth mentioning <cit.>. Despite certain theoretical successes in this field, which led to the theoretical discovery of the filamentation mode change as a second-order phase transition, this method has significant drawbacks. They stem from the fact that the transition to a macroscopic description of field dynamics contains many heuristic constructions, and this makes this averaged method ill-conditioned regarding the evidence base. Additionally, the description of the transient regime of pulse self-action is missed, when direct numerical calculation is already difficult and the use of a macroscopic description is yet incorrect. Obviously, a certain transitional bridge from the NLSE-based description to the macroscopic approach should be developed, which will allow not only to verify this approach but also to substantiate and validate it on an array of numerical calculations. In this paper, we propose a novel theoretical approach which is based on the classical method of phase screens to improve the efficiency of the numerical simulations of the problem on the long-range nonlinear propagation of a wide-aperture HPLR in a nonlinear randomly inhomogeneous medium (atmosphere) under conditions of laser pulse self-focusing and multiple filamentation. We demonstrate a new approach to dramatically accelerate the numerical simulation of a wide-aperture HPLR propagation in the real atmosphere. The idea of the technique proposed is to replace the region of active optical radiation interaction with nonlinear inhomogeneous medium (filamentation region, aerosol cloud, turbulent layer) with certain effective phase screen(s) given in the form of an Ansatz, the parameters of which are found by means of the machine learning (ML) methods based on the condition of the closest match to the test problem solutions on the numerical grids with sufficiently dense nodes. Importantly, the simplest situations including narrow laser beams filamentation, are directly calculated with required accuracy and used as a database of the test solutions, as well as the available experimental data on HPLR propagation on real atmospheric paths are also used to obtain a representative array of solutions of the HPLR propagation problem. Worthwhile, the classical numerical methods used to solve this problem have almost reached the limit in accelerating calculations when simulating the long-range atmospheric HPLR propagation with a sufficiently large beam aperture (centimeters). Nowadays, the ef-ficiency increasing of all known numerical methods is possible only by low-level optimization of the software implementations. The semi-analytical method laid down in this work does not cancel the existing developments but completes them with new methods including ML practices. 
§ MATH AND EQUATIONS The Nonlinear Schrodinger equation for HPLR propagation in a nonlinear medium (air) can be considered in general form: 2i k_0 ∂_z U =(Δ_⊥ + ϵ_k UU^* +ϵ_h[U] ) U ≡ (ĥ_k+ϵ_h[U]) U Here, k_0 is the wave number at the carrier wavelength (800 nm),ϵ_k is the coefficient for cubic medium nonlinearity (Kerr nonlinearity), ϵ_h is some (yet formal) complex medium dielectric permittivity associated with the manifestation of higher-order optical nonlinearities. Obviously, when solving Eq. (1) analytically or numerically the main difficulty is the spatial region, where ϵ_h 0. Indeed, this is the pulse filamentation region or in a more general formulation the region of strong plasma nonlinearity. Within the framework of the split-step method according to different physical mechanisms, the propagation of a laser radiation through the entire filamentation region we replace by the scattering of a laser pulse at a certain spatially lumped complex effective phase screen (ES). Meanwhile, one strives to ensure that the scattering at this ES gives the effect as close as possible to that which is realized in the classical filamentation model formulated on a denser numerical grid. This approximation can be corrected in such a form if the region of higher nonlinearities has strongly pronounced properties of spatial localization or in other words, discreteness. This is exactly the peculiarity of the model considered below. And then if the characteristic longitudinal scale of manifestation ϵ_h is l_f then formally, the optical radiation scattering at an effective screen can be represented in the following operator form: U_out= e^-i∫_z^z+l_f ((ĥ_k+ϵ_h))/(2 k_0) dz U_in ≡ e^-i(ĥ_kl_f/(2 k_0)-D̂_f [U_in]) U_in ≈ e^-i(ĥ_kl_f/(2 k_0)-f_lens [U_in,D]) U_in Here, U_in is the optical field before the ES, while U_out is the field that is formed as a result of the scattering on the ES, which is denoted as operater D̂_f [U_in], which can be approximated by some effective screen f_lens [U_in,D]. Note that in relation (<ref>) we both defined the effectiveg phase screen on a dense grid D̂_f [U_in] and introduced its approximation on a sparse grid f_lens, which we will call an effective lens (EL). EL is specified as a nonlinear functional of the incoming field U_in and several (yet unknown) set of the problem parameters D, which must be found based on minimizing the residual value of the exact numerical (true solution) and effective solution on a set of test examples. In this case, the measure of the difference between the true solution and the effective one, i.e. the discrepancy, is determined to some extent arbitrarily based on the physical requirements of a specific problem being solved. Generally, it is necessary to monitor the drop in the optical pulse energy as it propagates in a nonlinear dissipative medium, the structure of the filament and postfilament field surroundings, the dynamics of the maximum pulse intensity along the path, as well as the evolution of the NLSE Hamiltonian <cit.>. Accordingly, when studying the properties of a post-filamentation pulse propagation, it is necessary to trace mainly its parameters, which determine the residual between the exact and approximation solutions. The generation of such EL is based on the solution to the classical problem of a single laser pulse filamentation in a nonlinear medium using the method of the Ansatz <cit.> in combination with any ML methodic, e.g., the genetic algorithm (GA). 
GA allows one to search for a solution to the original dynamic problem for a field function on a fixed class of test functions provided that the resulting function must minimize the residual functional from the true solution. Here, by the true solution we mean the set of solutions obtained on a dense grid through the classical NLSE numerical solution. To find a set of test solutions, it is necessary to solve the nonlinear Schrödinger equation for an optical pulse envelope propagating in a nonlinear medium. In this case, it is necessary to take into account not only the cubic (Kerr) nonlinearity, but also the physical effects associated with higher optical nonlinearities causing the beam transverse collapse arrest and the realization of pulse filamentation regime. We consider the NLSE-type pulse propagation equation in the following form: 2ik_0 ∂_z U=ĥ_k U+ + (ϵ_m (UU^* )^2m+ik_0 α_m (UU^* )^2(m-1))U Here, ϵ_m and α_m are the coefficients for the m-th order nonlinearity leading to the beam self-focusing arrest. These coefficients account for the physical effects simulating plasma refraction and nonlinear absorption in self-induced plasma, respectively. In our analysis, the numerical solution to (<ref>)), which provides the conservation of all integrals of motion, is implemented using a semi-implicit finite-difference scheme <cit.>. One of the main challenges for seeking the Ansatz for Eqs. (1-3) is that the optical field variation in the region of pulse self-compression near the nonlinear focus, where the higher-order nonlinearities strongly manifest themselves has characteristic transverse dimensions of about 100–200 mkm. At the same time, when leaving the collapse region, a pulse is localized on rather larger spatial scales of about 500 µm. Meanwhile, this region hosts the formation of a ring structures in pulse transverse profile, which is specific for laser filamentation (see Fig. la below). This means that the effective screen, which depends on the input field, must be the nonlocal functional of the input optical field. As one of the possible options for constructing the desired EL we can propose a procedure, according to which the optical intensity (squared electric field module), w=UU^* , is subjected by some kind of a diffraction operator upon reaching some chosen "pre-filamentation" value w_pf. In the following, this value is selected in the range from 2 TW/cm^2 to 8 TW/cm^2. In other words, if the condition w =w _pf is satisfied, a new field w_dif is constructed as follows: w_dif=(F̂^(-1) exp(-l_dif)F̂)w. Here, two undefined parameters are introduced, namely w_pf and l_dif. In this case, the symbolF̂ in Eq. (4) denotes the Fourier transform operator. Importantly, the spatial region containing the field maximum has larger dimensions than the dimensions of the surroundings of the maximum of the original field w. At the same time, w_dif monotonically decreases when moving away from the centers of its extremes. Next, to define the EL we trim the field w_dif by some boundary level w_l and obtain the desired lens field profile w_lens (see Fig. 1). The boundary value w_l is a free parameter of the model and is chosen in such a way that after the trimming procedure the dimensions of the generated EL is about ten-times the typical filament diameter, i.e., about 1000 µm. As a result, one obtains the EL array as a set of disconnected regions within the beam field profile. 
Due to the monotonicity of EL field w_lens, there is an unambiguous relationship between the distance from the EL centers and the value of the w_lens field. This allows one using the inverse procedure to obtain the required lens fields w_lens, i.e.: D_f [U_in,d]= ∑_r_cf_lens (r-r_c)=f_lens (w_lens^(-1)). Here, r_c are the coordinates of EL center positions. In particular, based on the numerical simulation of single filamentation of submillimeter laser beams at the wavelength of 800 nm in air, the EL field profile w_lens(r) can be approximated by the cubic dependence: w_lens=c_1 (r^3-c_2) , where c1 and c2 are certain numerical coefficients depending on the pulse energy. Consider the form of the chosen substitution function (Ansatz), which depends on the distance to the EL center obtained from the analysis of direct numerical simulation of pulse laser filamentation: f_lens=∑_j a_j cos(rω_j+ψ_j)(θ(r_j))-θ(r_(j-1)))+ + i∑_l b_l cos(rν_l+η_l)(θ(r_l))-θ(r_(l-1))) Here, θ is the Heaviside function, and the remaining parameters define a set of free optimization parameters for the real (r_j,a_j,ψ_j,ω_j) and imaginary parts (r_l,b_l,η_l,ν_l) of the EL, respectively. The requirement of sufficient EL smoothness (no discontinuity of the function and its derivatives at the joining points) imposes additional restrictions on the value of these functions. As a result, the vector D (gene) used in the genetic algorithm has the dimension (2n_g+3), where n_g=3 is the number of joining points. It should be noted that the best gene D which is found by GA for one variant of the problem being solved may not be optimal for another problem statement. So, when varying the initial data of the test problem simulating a single pulse filamentation in air in terms of changing the pulse power, beam radius, focusing, etc., the best gene can have different form. However, this just means that one needs to expand the number of the control parameters by making it (selection) more flexible. The search for the optimal parameters of the Ansatz is a classical optimization problem belonging to the field of mathematical programming. Due to the fact that the problem considered is multi-parametric, it is instructive to solve it using the GA method. In this study, one of the variants of the classical GA implementations is chosen, which in particular can be implemented for the classical problem of building the optimal lens profile for optical field focusing with minimal wave aberrations <cit.>. For implementing the approach described above it is necessary to solve the reference (test) problem defined by Eq. (3) for single filamentation of an optical pulse by considering a set of initial beam profiles in the spatial form of a Gaussian beam U_n with e-width r_n and initial amplitude A_n (n = 1..N): U_n (r)=A_n exp(-(r/r_n )^2) The range of pulse initial power P=(π A_n r_n^2) and beam size used in the simulations is determined by the typical scales of field perturbations which seed the filaments in air, i.e. we chose P from 3 Pcr to 9 Pcr and rn from 0.5 mm to 2.5 mm <cit.>, where Pcr is the critical self-focusing power of a Gaussian beam (Pcr varies from 4 GW to about 7 GW in air in the near-IR range <cit.>. The next step required for EL building procedure is the determination of proper measure of the difference between the solutions generated using the EL method on a sparse grid and test solutions found on a dense grid. This will allow one to calculate the penalty function required for the evolutionary algorithm to work. 
Assume, that the solutions of problem (3) obtained by the effective lens method can be considered as acceptably close to the test ones according to the criterion defined through the difference in the spatial field moments: Q^mn=∫ (U_n U_n^* )^2m ( x^2+y^2 )^m dxdy, Recall, that another important issue of this measure is the length of the filamentation region and the rate of pulse intensity decrease on the beam axis, which reflects the dynamics of the post-filamentation pulse propagation. The block diagram of GA implementation for searching the Ansatz is shown in Fig. 2. Here, the vector P characterizes the solution for the gene D. P is used for construction the so-called population fitness function f with the help of the Euclidean metric. In order to optimize the GA, the creation of the first viable generation (elite generation with Del gene) is carried out with 5 representatives by selecting from random implementations of generations. Population sizes vary between 5 and 10 chromosomes. The condition for the viability of a generation when finding an elite gene is the value of the deviation of the vector P from the vector P0, which in turn is found from the solution of the problem (3) on a dense grid for a set of sub-millimeter Gaussian beams. In a random enumeration of genes, those of them are selected that according to the norm are at the smallest distances from P0 than the given measure fel. In its turn, fel value is selected in such a way that approximately one of 10 random representatives is viable. Thus, we get the first viable generation with a certain set of Del genes. Worthwhile, the set of Gaussian beams is specially chosen to produce a single laser pulse filamentation. However, if necessary, the proposed methodology allows generalizing this approach for the case of coupled multiple filaments. The result of the GA operation is some efficient gene Deff, which provides the best set of Ansatz function parameters that completely define the effective screen. The selection is carried out by the roulette method. If the launched evolution goes in the wrong way, i.e. if the current state of the generation is worse than the elite implementation, then the destruction of degenerative descendants is carried out and the elite generation is reproduced again. The longitudinal size of the EL in this implementation is fixed and equals to 30 cm, because it is this value that correlates with the build-up distance of the filament from the moments when pulse intensity reaches a pre-filamentation value w_pf until exiting from the collapse at the same intensity value. At the same time, the behavior of the field in the vicinity of the beam collapse has a fairly universal form and weakly depends on the parameters of the incoming optical field <cit.>. Recall, the direct solution to the Eq. (3) requires a sufficiently dense numerical grid. Thus, the transverse grid step should less than 10 mkm (for pulse filamentation in air) and under the action of plasma and other higher nonlinearities the longitudinal step can decrease to micrometer values, which significantly complicates the implementation of massive numerical calculations. When using the ES approach in the filamentation simulation, the requirements for the grid step are reduced by at least an order of magnitude, and in the region of nonlinear effects manifestation the longitudinal step can be reduced by two or more orders of magnitude. 
This makes it possible to carry out large-scale numerical simulations even for centimeter-diameter beams propagating in real atmospheric conditions. § EL BENCHMARKING §.§ EL benchmarking by a single pulse filamentation regime To demonstrate the efficiency of the EL technique, consider the case of a single fila-mentation in air of an unfocused laser pulse with radius of 1 mm and a power of 5 Pcr. In Fig. 3, an illustration of the generated effective lens flens is presented, which is used to simulate the traversal of the first nonlinear focus by the laser beam. As noted above, this situation corresponds to the transverse collapse arrest due to the air plasma nonlinearity. In the figures below, the complex-valued optical thickness D_f of the beam channel is shown by the open points, obtained by direct numerical solution of the problem (3) on dense spatial grid. The complex phase screen D_f is built based on the Eq. (2) and the numerical solution of Eq. (3) on a dense numerical grid. The fields Uin and Uout are chosen at the first nonlinear focus. As noted above, the entry and exit conditions of the nonlinear focus correspond to the limiting value of the field fluence on beam axis equal to wpf. The solid curves in these plots are the approximations of EL dielectric constant flens based on the expression (5). The result of the energy density transformation of the optical field after passing the ef-fective lens can be seen in Fig. 4. In this figure, for comparison, the transverse distribution of the normalized fluence w(x,y) is given as two parts of one image combined along the vertical axis. Namely, on the left side of the image one can see the pulse fluence obtained as the exact solution to the stationary NLSE (3), and on the right the same profile is plotted but calculated through the EL model (6). As can be clearly seen, the beam profiles obtained by both methods are similar in their structure and exhibit even number of conical emission rings. It is also important to compare the global characteristics of laser beam evolution (as a whole) along the optical path. Namely, we compare the peak intensity Im and the total pulse energy normalized to its initial value in the rigorous and effective propagation models. These parameters are shown in Figs. 5(a,d). As seen, the difference between the effective model and the test solution for pulses with different initial power P_0 is less than 15% in terms of the total pulse energy. The most noticeable discrepancy is observed in the behavior of the maximum pulse intensity at the moment of filamentation beginning, which can achieve one order of magnitude. However, on average, despite the differences in the peak intensity dynamics the values of the filamentation length obtained by the EL method are reproduced with sufficient physical accuracy. §.§ EL benchmarking by a single pulse filamentation regime In this Section, we present the results of the benchmarking of the proposed numerical method for the multiple filamentation mode. As an example, consider the initial Gaussian beam with two superposed intensive narrow side lobes, as shown in the inset to Fig. 6. These additional lobes serve as the seeds for off-axis filaments formation. We choose the initial beam diameter as 1 cm and set the pulse power as P_0 = 380 GW = 112 Pcr. In this case, it is important to analyze not only the change in pulse energy but also the dynamics of the number N_ha of the most intense "hot areas" formed inside the beam. 
The filaments subsequently evolve into the so-called postfilament channels (postfilaments), which extend over a sufficiently long distance having subdiffraction angular divergence and moderate peak intensity (up to 0.1 TW/cm^2) <cit.>. The condition,w ≥ w_p, where w_p is certain selective cut-off intensity value, is used to distinguish "hot areas " inside the laser beam. The trace evolution of the normalized laser pulse energy and the N_ha parameter are shown in Fig. 6(a) and (b), respectively. Clearly, the figures show a satisfactory agreement between the two filamentation sim-ulation techniques. Furthermore, the correspondence is observed not only in terms of the qualitative behavior of the pulse parameters, but also in quantitative meaning. This validates the possibility of using EL method for the simulation the multiple filamentation of moderate-aperture laser beams under the conditions of multiple “hot areas” formation inside a beam. §.§ Multiple filamentation regime in turbulent air The above presented results show that the EL method is suitable for numerical simulation of single and multiple filamentation of a femtosecond laser pulse with beam diameter up to several centimeters in clear atmosphere. However, in real situations a wide-aperture femtosecond radiation often propagates in a turbulent air. In this section, we demonstrate the flexibility and robustness of the proposed EL numerical method for such situations. We give the results of our simulation of a sub-terawatt laser pulse filamentation with larger diameter (5 cm) on an airborne trace. On this air path we create an artificial turbulent layer in order to simulate the atmospheric turbulence. Note that such numerical calculations are very challenging even within the stationary 3D-NLSE model, since this requires hundreds of CPU-hours and huge computing resources by using the modern supercomputers. Alternately, the proposed EL method gives a significant increase in computational performance. For example, the average per task runtime (statistical realization) of the considered 3D problem takes about 15 to 20 minutes on a personal computer with average computing performance. To test the efficiency of the proposed numerical technique, we carry out a numerical experiment on the filamentation of high-power pulse of titanium-sapphire femtosecond laser (peak pulse power up to 0.5 TW) on a long air path (∼ 100 m ) with an optical turbulence. We numerically reproduce the experimental condition previously reported in Ref. [28], where one can find detailed information on the methodology of these experiments. In the simulations, air turbulence is set in the form of a spatially localized (about a meter wide) turbulent phase screen which can be placed in different parts of the optical path. Note, in real experiments this turbulence layer was created using an industrial air heater. The structural composition and the strength of the turbulence were controlled by changing the temperature of the air jet. As shown in <cit.>, chaotic modulations of the laser beam supported by the optical nonlinearity of the medium led to the small-scale self-focusing of laser pulse, its spatial fragmentation and the appearance of multiple high-intensity light spots ("hot areas") along the propagation path. 
Unlike <cit.>, here we consider different yet unpublished results of these experiments obtained for a laser beam with larger size of 5 cm (not 2.5 cm as in <cit.>), which is produced by telescoping the initial narrow beam emitted from the femtosecond oscillator output. Recall, that experimentally, the turbulent air layer was created in the form of a hot jet with stepwise variable temperature T at the outlet of the fan nozzle from 100 to 600 C^o. According to our estimation, the values of structural parameter of the artificial turbulent layer are much higher than the typical values of atmospheric turbulence and exceed ∼ 10^-10 m^-2/3. In accordance with real experiments, it is also necessary to numerically reproduce the presence of a turbulent air layer on the optical path. To this end, the NLSE (3) is modified by including along with the effective complex lens f_lens also a random turbulent phase screen ϵ_t: 2ik_0 ∂_z U=ĥ_k U+ (f_lens[U,D]+ϵ_t)U The turbulent phase screen is constructed in a regular way by the spectral method [26]. In this case, to simplify the calculations the spectral density function of turbulent inhomogeneities ϵ_t: is modeled by a step-function: ϵ_t=τθ(k-k_h) Here, k_h is the upper cut-off spatial frequency of the pulse spectrum. The spectral amplitude of turbulent perturbances τ is associated with the tem-perature of the air heater as T=ντ k_h^2 (here ν is an fitting parameter). Figs. 7(a, b) shows the constructed turbulent phase screens (in normalized variables) for two different temperatures T of the air jet as an example. The corresponding results of EL method application for simulation laser beam filamentation in turbulent air are presented in Figs. 7(c, d). In the simulations, the pulse energy is set to 35 mJ (peak power  400 GW), and the distributions of the normalized fluence in the beam cross-section are given at the spatial coordinates right before and inside a multiple filamentation region in air. As seen from these distributions, the proposed EL method numerically reproduces the typical pattern of multiple filamentation of the beam <cit.>. The induced thermal source in-homogeneities of the optical field are clearly distinguishably in the laser beam in Fig. 7(c), which seed later the filaments visible in Fig. 7(d) as the "hot areas". To be more specific, we characterize the "hot areas" by certain threshold pulse intensity value, say w_p = 0.01 Tw/cm^2. Consequently, a "hot area" is defined as a closed region, where w ≥ w_p. Then, we calculate the "hot area" number N_ha by counting the closed regions in the laser beam at optical path end and present the results in Figs. 8(a) and (b). § DISCUSSION From Fig. 8(a) one can see, that the parameter N_ha grows with increasing air jet temperature T. This trend is consistent with the change in the scale of inhomogeneities of the random phase screen (compare Figs. 7(a) and (b)) within the framework of the proposed theoretical model. Generally, this increasing dynamic is non monotonic. In this case, the interval of continuousN_ha growth can be qualitatively explained by the Bespalov-Talanov modulation instability theory (BPT) <cit.>. BPT predicts that the increase in the perturbation of the optical wave phase leads to an increase in the number of "hot areas" in the wave amplitude profile when propagating in a Kerr-type medium. However, only those perturbations will be supported by the self-focusing optical nonlinearity, in which the nonlinearity suppresses the diffraction, i.e. 
which carry sufficient optical power larger than the critical value Pcr. In this case, the greater the average intensity along the beam profile, the smaller the scale of viable perturbations. In our theoretical model, an increase in fan temperature T causes greater fragmentation of the beam profile. Consequently, the number of arising "hot areas" is also growing. It is important to emphasize that the above reasoning can only be regarded as a qualitative analysis of the dynamics of beam fragmentation in a turbulent medium, since the optical intensity is initially spatially inhomogeneous, decreasing in amplitude from the center to the beam periphery (Gaussian shape). In addition, the application of the BPT is correct only for small initial perturbations. In this regard, BPT predicts only the proportionality of the number of viable perturbations to their relative amplitude. However, the very type of this dependence seems to be nonlinear due to the diffraction coupling between different perturbations in the Kerr medium. We calculate the number of "hot areas" formed in the laser beam when the turbulent layer is placed at different spatial positions along the optical propagation path. Specifically, the heater fan is placed at two positions: (a) in front of the pulse filamentation region, which corresponds to a propagation distance of 15 m, and (b) inside the filamentation region at a distance of 45 m from the laser source. The results of these numerical experiments for a 35 mJ pulse are shown in Fig. 8(b). Remember, the initial beam diameter is 5 cm. From this figure one can see, that if the turbulence is positioned before the filamentation region the induced optical phase perturbations cause the monotonic growth of "hot area" number with the fan temperature. Obviously, at the beginning of multiple filamentation, a laser beam as a whole is more concentrated in space and has more energy than after the filamentation region, therefore, the optical radiation is more sensitive to any spatial per-turbations imposed by the turbulent layer. Alternatively, by placing the turbulent screen in the region of "developed" pulse fila-mentation (open symbols), there is a weak growth and sometimes even a decrease of N_ha (T < 200 C^o) with the increase of fan temperature. Similar tendency was reported earlier in similar experiments with a smaller beam size <cit.> . This demonstrates the impressive stability of light filaments and postfilaments to stochastic pulse phase perturbations under conditions of constant maintenance of the dynamic balance between the focusing Kerr and defocusing plasma nonlinearities of the optical medium. A qualitative explanation for this dependence can be found in the fact that in the region of strong nonlinearity, induced fluc-tuations cannot destroy the already formed inhomogeneities in the beam during it self-focusing, and for a low-intensity background they almost no longer affect the conditions for forming new "hot areas". Indeed, when a turbulent screen is placed in a region where filaments have already been formed, its role in the formation of "hot areas " is significantly reduced, since the optical path must be of sufficient length to convert the corresponding phase distortions into new viable amplitude perturbations. Additionally, the phase-induced amplitude perturbations must receive enough optical power for their development, whereas to the end of the optical trace the pulse power density on average decreases. 
§ CONCLUSIONS In conclusion, we propose an effective numerical method for the simulation of high-power wide-aperture laser radiation propagation in a turbid nonlinear medium based on the 3D NLSE numerical solution. This method relays on replacing the regions with strong medium optical nonlinearity and, accordingly, the regions demanding fine numerical grid, by a number of complex-valued (effective) phase screens, which provide the scattering and absorption of optical radiation in a certain manner. The choice of the specific lens structure is based on the selection of the Ansatz (substitution function) to the NLSE with the parameters providing the best accordance to the test problem solutions. This Ansatz is searched by the ML methods (the genetic algorithm). So far, in present form the proposed algorithm is implemented for a simple scenario of stationary laser beam filamentation (single and multiple) and does account for any nonsta-tionary physical effects (pulse group velocity dispersion, optical "shock" waves, retarded Kerr rotational effect, etc.), as well as the mechanisms occurring during filament stochastic clustering ("optical pillars" <cit.>). At the same time, with this method we are able to reproduce quantitatively the experimental dependencies for the evolution of "hot areas" (postfilaments) number formed in a wide laser beam (5 cm in diameter) during the propagation on an extended air path with localized turbulence layer. Worthwhile, the proposed method possesses great opportunities of the implementation both for the non-stationary case of multiple filamentation and for the possible accounting of filament clustering during the propagation of wide-aperture laser radiation. Basically, the developed numerical method for solving nonlinear partial differential equations of the parabolic type (NLSE, unidirectional Maxwell equation) is not limited to the problem of propagation of high-power femtosecond laser radiation. In this regard, the presented method has a broader significance, because it can be extended to other practical problems, such as vision problems in lossy turbulent media, optical energy transmission in a disperse aerosol medium, electromagnetic wave propagation in plasma, etc. It is also possible to use the machine learning methods and build a base of test solutions using the methodology we have proposed.
http://arxiv.org/abs/2306.09454v1
20230615192033
Analytical Evaluation of Elastic Lepton-Proton Two-Photon Exchange in Chiral Perturbation Theory
[ "Poonam Choudhary", "Udit Raha", "Fred Myhrer", "Dipankar Chakrabarti" ]
hep-ph
[ "hep-ph" ]
[][email protected] Department of Physics, Indian Institute of Technology Kanpur, Kanpur-208016, India. [][email protected] Department of Physics, Indian Institute of Technology Guwahati, Guwahati - 781039, India. Department of Physics and Astronomy, Ohio University, Athens, Ohio, 45701 USA [][email protected] Department of Physics and Astronomy, University of South Carolina, Columbia, SC 29208, USA. [][email protected] Department of Physics, Indian Institute of Technology Kanpur, Kanpur-208016, India. We present an exact evaluation of the two-photon exchange contribution to the elastic lepton-proton scattering process at low-energies using heavy baryon chiral perturbation theory. The evaluation is performed including next-to-leading order accuracy. This exact analytical evaluation contains all soft and hard two-photon exchanges and we identify the contributions missing in a soft-photon approximation approach. We evaluate the infrared divergent four-point box diagrams analytically using dimensional regularization. We also emphasize the differences between muon-proton and electron-proton scatterings relevant to the MUSE kinematics due to lepton mass differences. Analytical Evaluation of Elastic Lepton-Proton Two-Photon Exchange in Chiral Perturbation Theory Dipankar Chakrabarti July 31, 2023 ==================================================================================================== § INTRODUCTION Scattering of light leptons off hadron targets has been the most time-honored precision tool to study the internal composite structure and dynamics of hadrons since the pioneering work of Hofstadter and McAllister <cit.>. The point-like nature of the leptons makes them ideal probes of the hadronic electromagnetic structure. Despite a century-long endeavor in understanding the basic nucleon structure, fundamental gaps in our knowledge still persist. An accurate determination of the proton's electromagnetic form factors and parton distributions is known to shed much light on the constituent spin, charge, and magnetic distributions. However, various systematic analyses of high-precision lepton-proton (ℓ^±p) elastic scattering data, providing the cleanest possible information on the proton's internal structure, have brought forth several discrepancies in the recent past that question our conventional notion regarding the proton's structure as revealed from the standard treatments of QED and QCD. A well-known discrepancy is the stark difference in the measured value of the proton's electric (G^p_E) to magnetic (G^p_M) form factor ratio (G^p_E/G^p_M) at momentum transfers Q^2 beyond ≳ 1 (GeV/c)^2 between two different popular experimental methodologies, namely, the Rosenbluth Separation <cit.> and Recoil Polarization Transfer <cit.> techniques (also see, Refs. <cit.> for more details). A resolution to this “form factor puzzle” necessitates a closer investigation of the so-called Two-Photon Exchange (TPE) contributions to the radiative corrections to the elastic ℓ^±p cross-section (for prominent past works and reviews, see e.g., Refs. <cit.>). The TPE corrections give an additional higher-order contribution to the well-known leading order (LO) Born approximation contribution arising from the One-Photon Exchange (OPE) diagram which was assumed to dominate this electromagnetic scattering process at small momentum transfers. Likewise, there exists yet another puzzling scenario in regard to low-momentum transfers where the TPE consideration may prove to be a crucial game changer. 
This concerns the proton's charge radius, as obtained from the slope of G^p_E at Q^2=0. There exist two exclusive means to determine the charge radius, namely, via scattering processes and via atomic spectroscopy. In particular, the muonic Lamb-shift measurements of the rms charge radius by the CREMA Collaboration <cit.> are strikingly inconsistent with the prior CODATA recommended value  <cit.>. Such a discrepancy, the so-called “proton radius puzzle", has been an agenda of serious scientific contention over the last decade since its inception in 2013 <cit.>. Despite the flurry of ingenious ideas and techniques introduced to fix the conundrum, the resolution of this discrepancy remained unsettled thus far. We refer the reader to the recent status report as presented in Ref. <cit.>. With no apparent fundamental flaws either conceptually or in the measurement process to explain this incongruity, the TPE processes may be singularly implicated as culpable under circumstantial evidence, vis-a-vis, the form factor puzzle <cit.>. It is conceivable that a rigorous evaluation of the TPE effects could potentially resolve both the form factor and radius discrepancies. Given the growing consensus in this regard, many new theoretical works on TPE studies have recently appeared in the literature <cit.>. Of the several newly commissioned high-precision scattering experiments <cit.>, the ongoing MUSE Collaboration project at PSI <cit.> is one such endeavor, uniquely designed to simultaneously scatter leptons (ℓ^-≡ e^-, μ^-) as well as anti-leptons (ℓ^+≡ e^+, μ^+) off a proton target. One specialty of MUSE will be its uniqueness in pinning down the charge-odd contributions to the unpolarized lepton-proton (ℓ^±p →ℓ^±p) scattering, which arguably includes the TPE as the dominant process. Nevertheless, isolating the charge-odd contributions does not necessarily preclude other competing low-energy chiral-radiative contributions <cit.> in affecting the extraction of the pure TPE loop contributions. In particular, the recent estimation <cit.> of the “soft” bremsstrahlung (ℓ^±p →ℓ^±pγ^*_ soft) corrections to next-to-leading order (NLO) in heavy baryon chiral perturbation theory (HBχPT) <cit.> revealed novel chiral-odd constituents large enough (and of opposite sign) to supersede the TPE effects pertinent to the MUSE kinematics.[Notably this observation contrasts the standard expectation based on relativistic hadronic models, namely, that the TPE loop diagrams constitute the only charge-odd contribution responsible for asymmetries between the lepton and anti-lepton scatterings. All other radiative effects (vacuum polarization, vertex, and self-energy corrections) including the bremsstrahlung contributions are known to be charge symmetric.] Given that the TPE processes are major sources of systematic uncertainty, their accurate theoretical estimation is of pivotal interest for the purpose of precision analysis of the future MUSE data aimed at sub-percentage accuracy. Since the inception of the radius puzzle, the importance of low-energy TPE contributions has been extensively explored via diverse approaches such as QED-inspired hadronic models <cit.>, Dispersion techniques <cit.>, and Effective Field Theories (EFTs) such as Non-Relativistic Quantum Electrodynamics (NRQED) <cit.>, Baryon Chiral Perturbation Theory (BχPT) <cit.> and HBχPT <cit.>. 
It is well-known that restricting to the low-energy regime, the TPE with elastic proton intermediate state yields the dominant contribution to the radiative corrections <cit.>. Besides the proton, the inclusion of other excited nucleon intermediate states, such as the spin-isospin quartet of Δ (1232) resonances, can yield interesting results <cit.> relevant to the MUSE kinematics that leads to a better understanding of the non-perturbative aspects of the TPE contributions <cit.>. The TPE loop diagrams which have been evaluated in the past for a wide range of intermediate to high momentum transfers <cit.> are either known to be model-dependent or used unwarranted simplifications (generally referred to as soft photon approximation or SPA <cit.>) in the treatment of the intricate four-point functions, (see e.g., Refs. <cit.>). In this work, we present the evaluation of the TPE loop contributions using HBχPT without taking recourse to SPA methods. The outline of the paper is as follows. In Sec. <ref>, after a brief discussion of the HBχPT Lagrangian needed for our intended accuracy, namely, including interactions that at next-to-leading order (NLO) in the power counting, we introduce the possible topologies of the TPE loop diagrams that contribute at LO and NLO. In Sec. <ref>, we discuss the amplitudes of these loop diagrams and their contributions to the elastic cross-section. In Sec. <ref>, we discuss our numerical results and compare them with other works. Finally, we present our summary and conclusions in Sec. <ref>. Appendix <ref> contains the notations of the various generic two-, three- and four-point integral functions that contribute to the TPE corrections to the cross-section. In this work, all such relevant integrals have been evaluated exactly, and their analytical expressions are collected in Appendix <ref>. § TPE DIAGRAMS IN HBΧPT In this work, we evaluate the LO and NLO TPE contributions to lepton-proton elastic scattering cross-section using HBχPT, where only parts of the LO and NLO chiral Lagrangian contributes. In particular, the pion degrees of freedom that appear only via the next-to-next-to-leading order (NNLO) loop diagrams are absent in our current accuracy. This means that in our Lagrangian the non-linear Goldstone field u = √(U) effectively contributes as the identity matrix in two-dimensional isospin space, namely, u↦𝕀_2× 2. Consequently, the relevant LO and NLO parts of the chiral Lagrangian become ℒ^(0)_π N=N (iv· D+g_A S· u)N , and ℒ^(1)_π N=N{1/2M(v· D)^2-1/2MD· D+...} N , respectively, where N=(p n)^ T is the heavy nucleon spin-isospin doublet spinor field, g_A=1.267 is axial coupling constant of the nucleon, and v_μ and S_μ are the nucleon velocity and spin four-vectors satisfying the condition, v· S= 0. Here, the standard choice is v=(1, 0) such that S=(0,σ/2). Furthermore, the gauge covariant derivative is D_μ = ∂_μ+Γ_μ-iv^(s)_μ , with the chiral connection given by Γ_μ = 1/2[u^†(∂_μ-ir_μ)u+u(∂_μ-il_μ)u^†] , and the chiral vielbein is u_μ = iu^†∇_μ U u^†, where ∇_μ U=∂_μ U-ir_μ U+iUl_μ . Here, l_μ and r_μ are external iso-vector chiral source fields, which in our case are given by the photon field, namely, r_μ=l_μ=-eτ^3/2A_μ, with τ^3 being the third Pauli isospin matrix. Finally, v_μ^(s)=-eI/2A_μ is the external iso-scalar vector source field. Figure <ref> shows all relevant TPE diagrams up-to-and-including NLO chiral order. 
The first two diagrams, namely, the box diagram (a) and the crossed-box diagram (b) are of LO where both the photon-proton vertices as well as the proton propagator stem from the LO Lagrangian. However, these two LO diagrams also contain 𝒪(1/M) parts originating from the proton propagators (see below) that are kinematically suppressed. In χPT the perturbative chiral expansion is determined in terms of powers of the expansion parameter Q/Λ_χ, where Q is a generic momentum scale of the process and Λ_χ≈ 1 GeV/c is breakdown scale of the theory. However, since the value of Λ_χ is approximately equal to the proton's mass M, the HBχPT additionally incorporates a non-perturbative power or recoil correction of the interaction vertices and propagators via an expansion in powers of the inverse proton's mass, namely, Q/M∼ Q/ Λ_χ. However, it is also important to formally distinguish such dynamical recoil corrections from the naive kinematical recoil corrections. Thus, for instance, the difference in incoming lepton and outgoing lepton energies, E - E^' =-Q^2/(2M), and the corresponding lepton velocities, β-β^'=-Q^2(1-β^2)/(2MEβ)+𝒪(1/M^2), are interpreted as kinematic recoil corrections of order 1/M. Nevertheless, in practice, such corrections are also referred to as NLO corrections. Consequently, diagrams (a) and (b) are not pure LO contributions since they contain order 1/M “NLO contributions” from the proton propagator, e.g., v· p_p ≈ -p^2_p/(2 M)+𝒪(1/M^2), where p_p is a generic small off-shell four-momentum of the proton which is related to the corresponding full four-momentum in the heavy baryon formalism by the relation, P^μ=Mv^μ+p^μ_p. Next, the diagrams (c), (d), (e), and (f) are genuine NLO chiral order diagrams since they contain one NLO vertex, i.e., one photon-proton vertex is taken from NLO chiral Lagrangian whereas the propagators in each of these diagrams are taken from LO chiral Lagrangian. Likewise, the diagrams (g) and (h) are also genuine NLO box and crossed-box diagrams where the proton propagator in each diagram is derived from NLO chiral Lagrangian whereas the proton-photon vertices are taken from LO chiral Lagrangian. Finally, the triangular-shaped seagull diagram (i) contains an NLO vertex of 𝒪(e^2) with no proton intermediate state. 
The explicit amplitude for each of the TPE diagrams is given by the following expressions: ℳ^(a)_ box = e^4∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^')v_μ v_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) (v· k+v· p_p+i0) , ℳ^(b)_ xbox = e^4∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^')v_μ v_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) (-v· k+v· p^'_p+i0) , ℳ^(c)_ box = e^4/2M∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^'){v_μ(k + 2p_p)_ν-v_μ v_ν v· (k + 2 p_p)}χ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) (v· k + v· p_p+i0) , ℳ^(d)_ xbox = e^4/2M∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^'){v_ν(p_p + p_p^' - k)_μ - v_μ v_ν v · (p_p + p_p^' - k)}χ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) ( v· p^'_p-v· k+i0) , ℳ^(e)_ box = e^4/2M∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^'){v_ν(p_p + p_p^' + k)_μ - v_μ v_ν v·(p_p + p_p^' + k)}χ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p +i0) ( v· p_p+v· k+i0) , ℳ^(f)_ xbox = e^4/2M∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^'){v_μ(2p_p^' - k)_ν - v_μ v_ν v·(2p_p^' - k) }χ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) ( v· p^'_p -v· k +i0) , ℳ^(g)_ box = e^4/2M∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^')v_μ v_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0)(1-(p_p+k)^2/( v· p_p +v· k+i 0)^2) , ℳ^(h)_ xbox = e^4/2M∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^')v_μ v_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0)(1-(p^'_p-k)^2/ ( v· p^'_p - v· k+i 0)^2) , ℳ^(i)_ seagull = 2 e^4/2M∫ d^4k/(2π)^4i[u̅(p^')γ^μ(p̸ -k̸+ m_l)γ^ν u(p)] [χ^†(p_p^')(v_μ v_ν-g_μν)χ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) , where u(p) and u̅(p^') are the incoming and outgoing lepton Dirac spinors, and χ(p_p) and χ^†(p_p^') are the corresponding non-relativistic proton Pauli spinors. We have used the fact that the proton propagators up-to-and-including NLO for the box and crossed-box TPE loops diagrams are respectively given as iS^(1/M)_ full(p_p+k) = i/v·(p_p + k)+i0+ i/2M[1-(p_p+k)^2/(v· p_p + v· k+i0)^2] + O(M^-2) , iS^(1/M)_ full(p^'_p - k) = i/v·(p^'_p - k)+i0+ i/2M[1-(p^'_p - k)^2/(v· p^'_p - v· k+i0)^2] + O(M^-2) . Next, examining the possible cancellations of contributions at the amplitude level, the following comments are in order: * First, it is convenient for us to use the lab-frame kinematics where the initial proton is at rest, i.e., p_p=0, which means that the four-momentum transfer is Q=p^'_p-p_p=p^'_p. * Second, since we observe that v· Q = E - E^'=-Q^2/(2M)∼𝒪(1/M), this difference can therefore be neglected in the numerators of the NLO amplitudes ℳ^(c)_ box, ℳ^(d)_ box, ℳ^(e)_ box and ℳ^(f)_ xbox , as they give rise to higher-order contributions. What remains in the numerators of these four amplitudes following the factor v_μ v_ν are the terms v· k, only. However, it is important to retain the v· Q terms in the denominators of the proton propagators for consistent evaluation of the loop integrals. In many cases, they serve as natural regulators against infrared (IR) divergences. Nevertheless, as stated, we may ignore all v· p_p^' = v· Q terms in the NLO expressions after evaluations of the integrals, only. Consequently, when we sum these amplitudes, the terms containing the factors v_μ v_ν in the four NLO amplitudes of ℳ^(c)_ box, ℳ^(d)_ xbox, ℳ^(e)_ box and ℳ^(f)_ xbox, cancel with the two v_μ v_ν terms with coefficient 1 (within the braces) of ℳ^(g)_ box and ℳ^(h)_ xbox, plus the v_μ v_ν term in the seagull amplitude ℳ^(i)_ seagull. 
After such partial cancellations between the NLO amplitudes, the remaining parts of the NLO box diagrams (c), (e), and (g), and the seagull diagram (i), along with the unaltered LO amplitude (a) are given in the lab-frame as follows: iℳ^(a)_ box = e^4∫ d^4k/(2π)^4[u̅(p^')γ^μ(p̸-k̸+ m_l)γ^ν u(p)] [χ^†(p_p^')v_μ v_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) (v· k+i0) , iℳ^(c)_ box = e^4/2M∫ d^4 k/(2 π)^4[u̅(p^')γ^μ(p̸-k̸+ m_l)γ^ν u(p)] [χ^†(p^'_p)v_μ k_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2 k· p+i0) (v· k+i0) , iℳ^(e)_ box = e^4/2M∫ d^4 k/(2 π)^4[u̅(p^')γ^μ(p̸-k̸+ m_l)γ^ν u(p)] [χ^†(p^'_p)v_ν (k+Q)_μχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2 k· p+i0) (v· k+i0) , iℳ^(g)_ box = e^4/2M∫ d^4 k/2 π^4[u̅(p^')γ^μ(p̸-k̸+ m_l)γ^ν u(p)] [χ^†(p^'_p) v_μ v_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2 k· p+i0)(- k^2/(v· k+i 0)^2) = - e^4/2M∫ d^4 k/2π^4[u̅(p^')γ^μ(p̸-k̸+ m_l)γ^ν u(p)] [χ^†(p^'_p) v_μ v_νχ(p_p)]/[(Q-k)^2+i0] (k^2-2 k· p+i0) (v· k+i0)^2 , iℳ^(i)_ seagull = - e^4/M∫ d^4k/(2 π)^4[u̅(p^')γ^μ(p̸-k̸+ m_l)γ_μ u(p)] [χ^†(p_p^') χ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2-2k· p+i0) . Regarding the other amplitudes for the crossed-box diagrams, it is convenient to shift the integration variable k by means of the transformation k→ -k+Q, which yields the following expressions: iℳ^(b)_ xbox = e^4∫ d^4k/(2π)^4[u̅(p^')γ^μ(p̸+k̸-Q̸+ m_l)γ^ν u(p)] [χ^†(p_p^')v_μ v_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2+2k· p^'+i0) (v· k+i0) , iℳ^(d)_ xbox = e^4/2M∫ d^4 k/(2 π)^4[u̅(p^')γ^μ(p̸+k̸-Q̸+ m_l)γ^ν u(p)] [χ^†(p^'_p)v_ν k_μχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2+2 k· p^' +i0) (v· k+i0) , iℳ^(f)_ xbox = e^4/2M∫ d^4 k/(2 π)^4[u̅(p^')γ^μ(p̸+k̸-Q̸+ m_l)γ^ν u(p)] [χ^†(p^'_p)v_μ (k+Q)_νχ(p_p)]/(k^2+i0) [(Q-k)^2+i0] (k^2+2 k· p^' +i0) (v· k+i0) , iℳ^(h)_ xbox = - e^4/2M∫ d^4 k/2 π^4[u̅(p^')γ^μ(p̸+k̸-Q̸+ m_l)γ^ν u(p)] [χ^†(p^'_p) v_μ v_νχ(p_p)]/[(k-Q)^2+i0] (k^2+2 k· p^' +i0) (v· k+i0)^2 . The above TPE amplitudes include only LO and NLO amplitude contributions, i.e., we neglect all higher-order contributions. These diagrams constitute the radiative corrections to the LO Born amplitude and contribute to the elastic unpolarized lepton-proton scattering cross-section. Their contribution to the cross-section is obtained via the interference of these TPE amplitudes with the LO Born diagram <cit.>. In the following section, we present the complete analytical evaluation of the above TPE contributions to the cross-section up-to-and-including NLO in HBχPT without resorting to any kind of approximation other than the usual perturbative NLO truncation. For this purpose, we first need to isolate the finite parts of the TPE corrections from the IR-divergent parts which are unphysical. § TPE CONTRIBUTIONS TO THE ELASTIC CROSS-SECTION The differential cross-section due to the TPE is given by the fractional TPE contribution δ^ (box)_γγ (Q^2)∼𝒪 (α) times the LO Born differential cross-section [i.e., of 𝒪(α^2)], namely, [ dσ_el(Q^2)/ dΩ^'_l]_γγ =[ dσ_el(Q^2)/ dΩ^'_l]_γδ^ (box)_γγ (Q^2) . 
For unpolarized elastic lepton-proton scattering cross-section only the finite real part of the amplitude contributes: δ^ (box)_γγ (Q^2)=2ℛe∑_spins[ℳ^(0)*_γ ℳ^ (box)_γγ]/∑_spins|ℳ^(0)_γ|^2-δ^ (box)_ IR(Q^2) , where ℳ_γ corresponds to the LO OPE or the Born amplitude for the elastic lepton-proton scattering process, and is given by [We note that there are additional non-vanishing contributions to the TPE cross-section arising from the interference of the proton's spin-independent NLO Born amplitude <cit.>: ℳ^(1)_γ = - e^2/2 M Q^2[u̅_l(p^')γ^μ u_l(p)] [χ^†(p_p^') × {(p_p + p_p^')_μ - v_μ v · (p_p + p_p^') }χ(p_p)] , with the LO TPE amplitudes, ℳ^(a)_ box and ℳ^(b)_ xbox. However, these contributions are of 𝒪(1/M^2) and hence ignored in this work. The corresponding contribution from the spin-dependent NLO Born amplitude (see Ref. <cit.>) identically vanishes in this case.] ℳ^(0)_γ=-e^2/Q^2[u̅(p^')γ^μ u(p) ] [χ^†(p^'_p) v_μχ(p_p)] . The term δ^ (box)_ IR(Q^2) [see Eq. (<ref>)] collects all the IR-divergent parts of the TPE amplitudes and is expected to cancel exactly with IR-divergent terms of the soft bremsstrahlung counterparts. The TPE amplitude, ℳ^ (box)_γγ is the sum of the box, crossed-box, and seagull diagrams presented in Fig. <ref>. As we calculate the LO and NLO contributions to the elastic cross-section, i.e., of 𝒪(α^3/M), from each of the aforementioned TPE box and crossed-box diagrams, we retain all possible 𝒪(1/M) terms. As mentioned earlier, since the LO TPE box (a) and the crossed box (b) diagrams have both LO and NLO terms, in order to derive the “true LO" contribution in the theory, it is necessary to eliminate the 𝒪(1/M) contributions from these LO diagrams. In our HBχPT calculation of the TPE box (and crossed-box) amplitudes, there arise four-point integrals where the proton propagator is non-relativistic and linear in loop momentum. These integrals are simplified using a modified form of Feynman parametrization <cit.>. The calculation of such integrals differs from the standard approach of evaluating loop diagrams with relativistic proton propagators. In this work, we derive the expressions for the TPE four-point functions in terms of two- three-, and four-point scalar master integrals using successive integration-by-parts (IBP) techniques, combining with decomposition via the method of partial fractions <cit.>. The IBP method is a widely known technique whereby complicated loop-integrals containing products of four or more propagators are decomposed in terms of known simpler master integrals up to three propagators. The master integrals, in turn, are straightforward to evaluate using standard techniques for evaluating Feynman loop integrals. However, such methodologies are primarily meant to tackle ultraviolet (UV) divergences in loop functions with relativistic propagators that are quadratic in loop four-momentum. In the current case where IR divergences are involved in loop functions with the non-relativistic propagators, the application of such existing techniques becomes less obvious. In Ref. <cit.>, Zupan demonstrated the evaluation of one-loop scalar integrals up to four-point functions with heavy quark propagators (also linear in four-momentum). The propagator structure of heavy quark is very similar to the heavy baryon (proton) propagator in HBχPT. In effect, the HBχPT theory was formulated on similar lines following the ideas of Heavy Quark Effective Theory (HQET), (see, e.g., Ref. <cit.>). 
Therefore, it is in principle straightforward to extend Zupan's method <cit.> to evaluate the TPE integrals in our HBχPT calculations. One should, however, maintain some degree of caution in directly using the results of Ref. <cit.>, since the techniques presented therein were meant to handle either finite or UV-divergent two-, three- and four-point loop functions with massive propagators. The integrals appearing in our case are in contrast IR-divergent-containing photon propagators. In that case, it may seem straightforward to introduce photon mass regulators to evaluate such IR-divergent integrals. However, such a mass cut-off regularization scheme to evaluate the IR-divergent Feynman integrals using Zupan's technique does not a priori distinguish between v · k+ i0 and v · k -i0 in the propagator denominators with four-momentum k^μ, and may lead to discrepancies in estimating the correct analytic structure of the Feynman amplitudes. In contrast, by employing dimensional regularization to tackle the IR-divergent integrals using methods of complex analysis, albeit much more involved, one can avoid such ambiguities in evaluating the loop amplitudes. §.§ Chiral order contributions and the 𝒪(1/M) terms In any perturbative approach, the dominant contributions are expected to arise from the LO corrections. Although, the diagrams (a) and (b) (see Fig. <ref>) formally give the leading chiral order contributions, they in fact also contain the suppressed 𝒪 (1/M) terms that are numerically commensurate with other NLO chiral order contributions to the cross-section. Likewise, the NLO chiral order diagrams (c) - (i) have similar kinematically suppressed 𝒪 (1/M) terms which effectively contribute as 𝒪 (1/M^2), i.e., of NNLO chiral order. However, since the latter contributions is higher than our desired NLO accuracy, we ignore such terms in our results. In other words, in all our NLO expressions we therefore simply replace the outgoing lepton kinematical variables like E^' and β^' by the incoming variables E and β. We first investigate only the diagrams, (a) and (b). Their contribution to the radiative process is given by δ^(a)_ box(Q^2) = 2ℛe∑_spins[ℳ^(0)*_γℳ^(a)_ box]/∑_spins|ℳ^(0)_γ|^2 = - 4 πα[Q^2/Q^2+4E E^'] ℛe{1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v̸ (p^'+m_l) v (p-k+m_l) v]/(k^2+i0) [(Q-k)^2+i0] (k^2-2 k· p+i0) (v· k+i0)} , and δ^(b)_ xbox(Q^2) = 2ℛe∑_spins[ℳ^(0)*_γℳ^(b)_ xbox]/∑_spins|ℳ^(0)_γ|^2 = - 4 πα[Q^2/Q^2+4E E^'] ℛe{1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v (p^ '+m_l) v (p+k-Q+m_l) v]/(k^2+i0) [(Q-k)^2+i0] (k^2 + 2k· p^'+i0) (v· k+i0)} , respectively. Before proceeding to evaluate the integrals, some comments about δ^(a)_ box and δ^(b)_ xbox corrections are warranted. Equations (<ref>) and (<ref>) contain four-point integrals function with two photon propagators. In the literature, the analytical evaluations of such types of four-point functions (albeit with all relativistic propagators) used SPA <cit.> to simplify calculations. Notably, two kinds of SPA have often been employed in the literature, namely, the one by Mo and Tsai <cit.>, and the other by Maximon and Tjon <cit.>. Invoking such an approximation, while one of the exchanged photons is always considered “soft", either with four-momenta k=0 or k=Q, and the other photon is considered “hard", with (k-Q)^2 ≠ 0 or k^2 ≠ 0, leads to two distinct kinematical domains of the TPE loop configurations. 
Note, however, the fact that the kinematical regions where both the photon exchanges are simultaneously soft or hard are ignored in the SPA methodology (see Ref. <cit.> for a detailed discussion). SPA is an unwarranted approximation that could lead to unknown systematic uncertainties. On the one hand, SPA drastically simplifies the calculation of the four-point integrals without loss of information in the IR domain. On the other hand, SPA neglects finite contributions to the TPE, which could prove to be a major drawback. Thus, when it comes to the precise estimation of the TPE effects, SPA results prove to be mostly unreliable. Therefore, in contrast to recent works based on similar perturbative evaluations of TPE loop diagrams, we do not take recourse to SPA for their exact analytical evaluation. Using IBP technique, the complete expressions of δ^(a)_ box and δ^(b)_ xbox can be expressed in terms of a string of three- and four-point integrals, defined by the generic master integral I^±(p,ω|n_1,n_2,n_3,n_4) (cf. Appendix <ref>), and are given by δ^(a)_ box(Q^2) = - 8πα[Q^2/Q^2+4E E^'] ℛe {E^' I^-(p,0|0,1,1,1)+E I^-(p,0|1,0,1,1) - (E+E^') I^-(p,0|1,1,0,1) - (Q^2+8EE^') I^-(p,0|1,1,1,0) + (Q^2+8EE^') E I^-(p,0|1,1,1,1)} , and δ^(b)_ xbox(Q^2) = - 8πα[Q^2/Q^2+4E E^'] ℛe {E I^+(p^',0|0,1,1,1) + E^' I^+(p^',0|1,0,1,1) - (E+E^') I^+(p^',0|1,1,0,1) + (Q^2+8EE^') I^+(p^',0|1,1,1,0) + (Q^2+8EE^') E^' I^+(p^',0|1,1,1,1) } , respectively. In deriving the above expressions, we have used the facts that p · Q=Q^2/2 and p^'· Q=-Q^2/2. Furthermore, as demonstrated in the Appendix <ref>, the four-point integrals I^-(p,0|1,1,1,1) and I^+(p^',0|1,1,1,1) are decomposed via the method of partial fractions into sums of simpler three-point (I^±) and four-point (Z^±) functions, namely, I^-(p,0|1,1,1,1)=1/Q^2[I^-(p,0|1,0,1,1) + I^-(p,0|0,1,1,1) - 2Z^-(Δ,i√(-Q^2)/2,m_l,E)] , and I^+(p^',0|1,1,1,1)=1/Q^2[I^+(p^',0|1,0,1,1) + I^+(p^',0|0,1,1,1) - 2Z^+(Δ^',i√(-Q^2)/2,m_l,-E^')] , where the four-vector Δ_μ=-Δ^'_μ=(p-Q/2)_μ. The explicit analytical expressions of the I^± and Z^± integral functions appearing above are rather elaborate and, therefore, relegated to Appendix <ref>. §.§ Leading chiral order TPE corrections, i.e., 𝒪(α) First, we extract the true LO terms from the (a) and (b) diagrams to determine the leading finite TPE contributions. For this purpose we eliminate all 𝒪(1/M) terms by substituting E^'=E and β^'=β, which yields the true LO sum of the TPE diagrams in HBχPT: δ^ (0)_γγ(Q^2) = [δ^(a)_ box(Q^2) + δ^(b)_ xbox(Q^2) ]_True LO = 32πα E [Q^2/Q^2+4E^2] ℛe [ I(Q|1,1,0,1) ]. Here we note that the integral, I(Q|1,1,0,1) ≡ I^-(p,0|1,1,0,1) ≡ I^+(p^',0|1,1,0,1), Eq. (<ref>), is solely a function of the squared momentum transfer Q^2. Furthermore, we use the facts that at LO the real parts of the I^- and Z^- integrals appearing in the expression for δ^(a)_ box, Eq. (<ref>), completely cancel with those from the integrals I^+ and Z^+, respectively, appearing in δ^(b)_ xbox, Eq. (<ref>). This is evident from our explicit expressions for these integrals displayed in Eqs. (<ref>) - (<ref>). Thus, the only surviving term in the LO TPE contribution corresponds to the non-vanishing three-point function I(Q|1,1,0,1) given by I(Q|1,1,0,1)=-1/16√(1/-Q^2) . 
This means that the true LO result boils down to δ^ (0)_γγ(Q^2) = πα√(-Q^2)/2E[1/1+Q^2/4 E^2] , which bears a close resemblance to the well-known McKinley-Feshbach contribution <cit.> for massless electrons, given by δ_F (Q^2) = παβsinθ_lab/2[1-sinθ_lab/2/1-β^2 sin^2θ_lab/2] = πα√(-Q^2)/2E[1 - √(-Q^2)/2β E/1+Q^2/4E^2] , where θ_lab refers to the lepton scattering angle in the lab-frame. The above result <cit.> was originally obtained in the context of non-relativistic quantum mechanics employing second-order Born approximation. Given the fact that the SPA approach in HBχPT pursued in Ref. <cit.> had completely missed out on the Feshbach contribution, our LO exact TPE result is already a significant improvement over the SPA results. A notable feature of our HBχPT calculation is the absence of IR divergence at the true LO: δ^(0)_ IR(Q^2) = - 16 πα E ℛe [ I^-(p,0|1,0,1,1) + I^+(p^',0|1,0,1,1)] = 0 . Fig. <ref> displays the numerical results for the true LO TPE fractional corrections,[We re-emphasize that in the HBχPT framework, only the true LO [i.e., of 𝒪(1/M^0)] corrections are regarded as the LO contributions. All other 𝒪(1/M) corrections are treated as NLO contributions.] Eq. (<ref>) (with respect to the Born contribution), for specific MUSE choices of the incoming lepton beam momenta, both for e-p and μ-p scattering. Our displayed results cover the full kinematical scattering range 0<|Q^2|<|Q^2_ max|, where <cit.> |Q^2_ max|=4M^2β^2E^2/m^2_l+M^2+2ME . Evidently, the TPE corrections for muon and electron are different, as they depend on the non-zero lepton mass m_l. §.§ Next-to-leading order TPE corrections, i.e., 𝒪(α/M) The NLO TPE fractional radiative corrections to elastic cross-section are of 𝒪(α/M) contribution from each TPE diagram. As discussed earlier, they include both the 𝒪(1/M) parts of the TPE diagrams, (a) and (b), as well as the NLO chiral order TPE diagrams, (c) - (i), either with one proton-photon vertex stemming from the NLO Lagrangian ℒ^(1)_π N, Eq. (<ref>), or with an insertion of an NLO proton propagator. First, we display our result for the 𝒪(1/M) parts of the (a) plus (b) diagrams. This part is obtained by eliminating the true LO contributions δ^ (0)_γγ from the complete contribution from (a) and (b) diagrams, δ^ (ab)_γγ. It is represented by the following expression: δ^ (ab; 1/M)_γγ(Q^2) = δ^ (ab)_γγ(Q^2) - δ^ (0)_γγ(Q^2) = δ^(a)_ box(Q^2) + δ^(b)_ xbox(Q^2) -[ δ^(a)_ box(Q^2) + δ^(b)_ xbox(Q^2)]_True LO = - 16 πα E ℛe [δ^(1/M)I^+(p^',0|1,0,1,1)] + 8 παQ^2/M ℛe [I^(0)(p,0|1,0,1,1)] - 16 πα E ℛe [δ^(1/M)I^-(p,0|0,1,1,1) + δ^(1/M)I^+(p^',0|0,1,1,1)] + 4 παQ^2/M[8E^2/Q^2+4E^2] ℛe [I^(0)(p,0|0,1,1,1)] + 8 παQ^2/M[Q^2/Q^2+4E^2] [Q^2-4 E^2/Q^2+4E^2] ℛe[ I(Q|1,1,0,1) ] - 8 παQ^2/M[Q^2+8 E^2/Q^2+4E^2] ℛe [ Z^(0)(Δ,i√(-Q^2)/2,m_l,E)] + 16πα E[Q^2+8 E^2/Q^2+4E^2] ℛe [ δ^(1/M) Z^-(Δ,i√(-Q^2)/2,m_l,E) + δ^(1/M) Z^+(Δ^',i√(-Q^2)/2,m_l,-E^')] + 𝒪(M^-2) . Here, the functions I^(0) and Z^(0) [cf. Eqs. (<ref>) and (<ref>)] denote the LO parts of the three-, and four-point functions I^- and Z^-, respectively, as displayed in Appendix <ref>. Also, δ^(1/M)I^± [cf. Eqs. (<ref>) and (<ref>)] and δ^(1/M)Z^± [cf. Eqs. (<ref>) and (<ref>)], denote the 𝒪(1/M) parts of the functions I^± and Z^±, respectively. It is noteworthy that Eq. (<ref>) is IR-singular due to the presence of the IR-divergent integrals I^-(p,0|1,0,1,1)≡ I^(0)(p,0|1,0,1,1) and I^+(p^',0|1,0,1,1) [cf. Eqs. (<ref>) and (<ref>)]. 
The LO IR-divergent terms arising for these integrals, however, cancel each other leaving only 𝒪(1/M) residual IR divergences arising from terms containing the integral I^+(p^',0|1,0,1,1). Thus, the resulting IR divergence from Eq. (<ref>), extracted using dimensional regularization in D=4-2ϵ dimension, with pole ϵ<0 and choice of the renormalization scale, μ=√(-Q^2), has the form: δ^ (ab; 1/M)_γγ(Q^2)|_ IR ≡ δ^ (box)_ IR(Q^2) = - 16 πα E ℛe [δ^(1/M)I^+(p^',0|1,0,1,1)]_ IR + 8 παQ^2/M ℛe [I^(0)(p,0|1,0,1,1)]_ IR = - α Q^2/2π M E β^2(1/ϵ-γ_E+ln 4π) {1+ (β-1/β)ln√(1+β/1-β) } + 𝒪(1/M^2) . where γ_E=0.577216... is the Euler-Mascheroni constant. These NLO TPE contributions constitute the only IR-divergent terms δ^ (box)_ IR that arise in our calculations. They exactly cancel with the soft bremsstrahlung counterparts δ^ (soft)_ IR originating from the 𝒪(1/M) kinematical part of the interference contributions of the LO soft bremsstrahlung radiation diagrams, namely, δ^ (soft)_ IR=-δ^ (box)_ IR. An explicit demonstration of this cancellation will be detailed in a future publication. In Fig. <ref>, we present a comparison between the true LO TPE corrections δ^(0)_γγ and the full TPE corrections arising from the (a) and (b) diagrams δ^(ab)_γγ which also include the proton propagator 𝒪(1/M) NLO contributions. As observed in this figure the true LO and the NLO part of diagrams (a) and (b) in the electron-proton scattering case almost cancel each other. In contrast, for the muon-proton scattering the NLO parts of these diagrams are quite small and barely alters the LO results. This difference is principally attributed to the presence of the lepton mass-dependent logarithms ∝ln(-Q^2/m^2_l) arising from the δ^(1/M)I^+(p^',0|1,0,1,1) [cf. Eq. (<ref>)] and Z^(0)(Δ,i√(-Q^2)/2,m_l,E) [cf. Eq. (<ref>)] functions in Eq. (<ref>), leading to significant enhancement for the electronic case. Next, we turn to the NLO chiral order diagrams (c) and (d), as well as (e) and (f), obtained by replacing one LO vertex with one NLO vertex, namely, δ^(c)_ box(Q^2) = 2ℛe∑_spins[ℳ^(0)*_γℳ^(c)_ box]/∑_spins|ℳ^(0)_γ|^2 = - 2 πα/M[Q^2/Q^2+4EE^'] ℛe {1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v (p^ '+m_l) v (p-k+m_l) k]/(k^2+i0) [(Q-k)^2+i0] (k^2-2 k· p+i0) (v· k+i0)} , δ^(d)_ xbox(Q^2) = 2ℛe∑_spins[ℳ^(0)*_γℳ^(d)_ xbox]/∑_spins|ℳ^(0)_γ|^2 = - 2 πα/M[Q^2/Q^2+4EE^'] ℛe {1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v (p^ '+m_l) k (p+k-Q+m_l) v]/(k^2+i0) [(Q-k)^2+i0] (k^2+2 k· p^'+i0) (v· k+i0)} , δ^(e)_ box(Q^2) = 2ℛe∑_spins[ℳ^(0)*_γℳ^(e)_ box]/∑_spins|ℳ^(0)_γ|^2 = - 2 πα/M[Q^2/Q^2+4EE^'] ℛe {1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v (p^ '+m_l) (k +Q) (p-k+m_l) v]/(k^2+i0) [(Q-k)^2+i0] (k^2-2 k· p+i0) (v· k+i0)} , and δ^(f)_ xbox(Q^2) = 2ℛe∑_spins[ℳ^(0)*_γℳ^(f)_ xbox]/∑_spins|ℳ^(0)_γ|^2 = - 2 πα/M[Q^2/Q^2+4EE^'] ℛe {1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v (p^ '+m_l) v (p+k-Q+m_l) (k+Q)]/(k^2+i0) [(Q-k)^2+i0] (k^2+2 k· p^'+i0) (v· k+i0)} . These TPE diagrams also have higher-order contributions that we neglect in our NLO evaluations. Using IBP methods the contributions, the (c) and (d) diagrams can be expressed in terms of the real part of the IR-finite three-point integral I(Q|1,1,0,1) [cf. Eq. (<ref>)]: δ^(c)_ box(Q^2) = 4παQ^2/M ℛe [I(Q|1,1,0,1)] , and δ^(d)_ xbox(Q^2) = - 4παQ^2/M ℛe [I(Q|1,1,0,1)] , which means that the sum is δ^(cd)_γγ(Q^2) = δ^(c)_ box(Q^2) + δ^(d)_ xbox(Q^2) = 0 . Thus, the net contribution from the (c) and (d) diagrams to the TPE radiative corrections vanishes. 
The other two diagrams, namely, (e) and (f) diagrams are given by the following expressions: δ^(e)_ box(Q^2) = - 4πα/M[Q^2/Q^2+4EE^'] ℛe {-2(Q^2+2E^2) I^-(p,0|0,1,1,1) + 4 E^2 I^-(p,0|1,0,1,1) + (Q^2-4 E^2) × I(Q|1,1,0,1) + 4 E Q^2 I^-(p,0|1,1,1,0) - 4 E^2 Q^2 I^-(p,0|1,1,1,1)} = - 4πα/M[Q^2/Q^2+4E^2] ℛe {-2(Q^2+4E^2) I^(0)(p,0|0,1,1,1) + (Q^2-4 E^2) I(Q|1,1,0,1) + 4 E Q^2 I(Q|1,1,1,0) + 8 E^2 Z^(0)(Δ,i√(-Q^2)/2,m_l,E) } + 𝒪(1/M^2) , and δ^(f)_ xbox(Q^2) = - 4πα/M[Q^2/Q^2+4EE^'] ℛe {2(Q^2+2 E^2) I^+(p^',0|0,1,1,1) - 4 E^2 I^+(p^',0|1,0,1,1) - (Q^2-4 E^2) × I(Q|1,1,0,1) + 4 E Q^2 I^+(p^',0|1,1,1,0) + 4 E^2 Q^2 I^+(p^',0|1,1,1,1)} = - 4πα/M[Q^2/Q^2+4E^2] ℛe {-2(Q^2+4 E^2) I^(0)(p,0|0,1,1,1) - (Q^2-4 E^2) I(Q|1,1,0,1) + 4 E Q^2 I(Q|1,1,1,0) + 8 E^2 Z^(0)(Δ,i√(-Q^2)/2,m_l,E) } + 𝒪(1/M^2) , where the IR-finite three-point integrals, I(Q|1,1,1,0)≡ I^-(p,0|1,1,1,0) ≡ I^+(p^',0|1,1,1,0), are purely relativistic and depend only on the momentum transfer Q^2 [cf. Eq. (<ref>)]. It is important to note that the IR-divergent functions, namely, I^-(p,0|1,0,1,1) and I^+(p^',0|1,0,1,1) [arising due to the decompositions, Eq. (<ref>)], drop out from the above expressions leading to IR-finite contributions from each of (e) and (f) NLO diagrams. Thus, the net contribution to the TPE radiative corrections from the (e) and (f) diagrams is finite and displayed in Fig. <ref>. Notably, the sizable corrections in the case of electron-proton scattering arise primarily due to the dominance of the integral I(Q|1,1,1,0) at large Q^2 values. Next, we display our analytical results for the (g) and (h) NLO diagrams containing the NLO proton propagator insertions, where we employ Eqs. (<ref>) and (<ref>), namely, δ^(g)_ box(Q^2) = 2ℛe∑_spins[ℳ^(0)*_γℳ^(g)_ box]/∑_spins|ℳ^(0)_γ|^2 = 2 πα/M[Q^2/Q^2+4EE^'] ℛe {1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v̸ (p^ '+m_l) v (p-k+m_l) v]/[(Q-k)^2+i0] (k^2-2 k· p+i0) (v· k+i0)^2} = - 4πα/M[Q^2/Q^2+4EE^'] ℛe {(Q^2+8 E^2) I^-(p,0|0,1,1,1) - 2E(Q^2+4 E^2) I^-(p,0|0,1,1,2) - 2(E^' p + Ep^') · I^-_1(p,0|0,1,1,2)} = -4πα/M[Q^2/Q^2+4E^2] ℛe {(Q^2+8 E^2) I^(0)(p,0|0,1,1,1) - E(Q^2+8E^2+4m^2_l) I^(0)(p,0|0,1,1,2) - 2[(E^' p + Ep^') · T^-_1(p,0|0,1,1,2)]_ LO} + 𝒪(1/M^2) , and δ^(h)_ xbox(Q^2) = 2ℛe∑_spins[ℳ^(0)_γℳ^(h)_ xbox]/∑_spins|ℳ^(0)_γ|^2 = 2 πα/M[Q^2/Q^2+4EE^'] ℛe{1/i∫ d^4 k/(2 π)^4 Tr[(p+m_l) v (p^ '+m_l) v (p+k-Q+m_l) v̸]/[(Q-k)^2+i0] (k^2+2 k· p^'+i0) (v· k+i0)^2} = - 4πα/M[Q^2/Q^2+4EE^'] ℛe {-(Q^2+8 E^2) I^+(p^',0|0,1,1,1) - 2E(Q^2+4 E^2) I^+(p^',0|0,1,1,2) + 2(E^' p + E p^') · I^+_1(p^',0|0,1,1,2) } = -4πα/M[Q^2/Q^2+4E^2] ℛe {(Q^2+8 E^2) I^(0)(p,0|0,1,1,1) + E(Q^2+8E^2+4m^2_l) I^(0)(p,0|0,1,1,2) + 2[(E^' p + Ep^') · T^+_1(p^',0|0,1,1,2)]_ LO} - α Q^2/2π M Eβ^3[Q^2+8E^2+4m^2_l/Q^2+4E^2] {ln√(1+β/1-β)-β} + 𝒪(1/M^2) , respectively, The above functions, I^-μ_1(p,0|0,1,1,2) and I^+μ_1(p^',0|0,1,1,2) [or alternatively, T^-μ_1(p,0|0,1,1,2) and T^+μ_1(p^',0|0,1,1,2)], are rank-1 tensor integrals containing a single power of the loop momentum k^μ [or alternatively, (k-p)^μ and (k+p^')^μ] in the numerators, Eqs. (<ref>) and  (<ref>). The symbol [...]_ LO in the above expressions denotes LO terms to be considered within braces. We should note that when we add the (g) and (h) contributions from Eqs. (<ref>) and (<ref>), the integral I^(0)(p,0|0,1,1,2) cancels. Furthermore, only the difference, T_1^- - T_1^+, is relevant when considering the sum of (g) and (h) diagrams. 
The above integrals are evaluated by decomposing into simple scalar master integrals via the standard technique of Passarino-Veltman reduction <cit.> (PV). To this end, we can decompose the tensor structures of T^± μ_1 in terms of three independent external four-vectors, e.g., v, p and p^': T^-μ_1(p,0|0,1,1,2) = v^μ C^-_1 + p^' μ C^-_2 , T^+μ_1(p^',0|0,1,1,2) = v^μ C^+_1 + p^μ C^+_2 . The coefficients, C^±_1,2 = ∘C^±_1,2+𝒪(1/M), are combinations of scalar master integrals, as discussed in Appendix <ref>. Only the LO parts of C^±_1,2 (as obtained by replacing E^'=E and β^'=β) are relevant in our context, namely, ∘C^-_1 = [1-1/β^2]I^(0)(p,0|0,1,1,1) + 1/2Eβ^2[I^(0)(p,0|0,0,1,2) - I(Q|0,1,0,2)] , ∘C^-_2 = 1/Eβ^2I^(0)(p,0|0,1,1,1) - I^(0)(p,0|0,1,1,2) - 1/2E^2β^2[I^(0)(p,0|0,0,1,2) - I(Q|0,1,0,2)] , ∘C^+_1 = - [1-1/β^2]I^(0)(p,0|0,1,1,1) - 1/2Eβ^2[I^(0)(p,0|0,0,1,2) - I(Q|0,1,0,2)] , ∘C^+_2 = - 1/Eβ^2I^(0)(p,0|0,1,1,1) - I^(0)(p,0|0,1,1,2) + 1/2E^2β^2[I^(0)(p,0|0,0,1,2) - I(Q|0,1,0,2)] - 2/(4π)^2 E^2 β^3[ln√(1+β/1-β)-β] . In particular, the two-point functions, I^(0)(p,0|0,0,1,2) and I(Q|0,1,0,2), appear in a difference at our chiral order. Therefore, their UV-divergences cancel exactly as they should since we only take the difference of these integrals in each of the above coefficients ∘C^±_1,2, see Eq. (<ref>) or Eq. (<ref>): I^(0)(p,0|0,0,1,2) - I(Q|0,1,0,2) = 1/4 π^2[ 1 + 1/βln√(1+β/1-β) - ln(-Q^2/4 M m_l) ] . All our results for the TPE radiative corrections are UV-finite, as expected from naive dimensional arguments. We observe that the presence of the lepton mass-dependent logarithmic term ∝ln(-Q^2/Mm_l) of Eq. (<ref>), originating from the (g) and (h) diagrams, enhance the contributions for the electron scattering as compared to muon scattering. (see Fig. <ref>). Furthermore, it is important to note that the three-point function I^(0)(p,0|0,1,1,2) appearing in the individual contributions from the (g) and (h) diagrams, δ^(g)_ box and δ^(h)_ xbox, is linear in the proton's mass M [cf. Eqs. (<ref>)], and therefore, may lead to pathologies with the convergence of the chiral expansion. However, as noted, this integral appearing in both δ^(g)_ box and δ^(g)_ xbox cancel in the sum. Our combined result for the fractional contribution from the (g) and (h) diagrams becomes δ^(gh)_γγ(Q^2) = δ^(g)_ box(Q^2) + δ^(h)_ xbox(Q^2) = - 4πα/M[Q^2/Q^2+4E^2] ℛe { 2[Q^2(1+1/β^2)+8E^2] I^(0)(p,0|0,1,1,1) - (Q^2+4E^2β^2/Eβ^2)[I^(0)(p,0|0,0,1,2) - I(Q|0,1,0,2)] } - α Q^2/π M Eβ^3[ln√(1+β/1-β)-β] + 𝒪(1/M^2) . Finally, the least important NLO contribution to the TPE radiative corrections arises from the seagull diagram (i), without an elastic proton intermediate state. After initial cancellations, only the residual part ℳ^(i)_ seagull [see Eq. (<ref>)] proportional to g^μν contributes to the NLO cross-section: δ^ (seagull)_γγ (Q^2) = 2ℛe ∑_spins[ℳ^(0)*_γ ℳ^(i)_ seagull]/∑_spin|ℳ^(0)_γ|^2 = 4πα/M[Q^2/Q^2+4EE^'] ℛe {1/i∫ d^4 k/(2 π)^4 Tr[(p̸+m_l) γ^ρ (p̸^̸'̸+m_l) γ^μ (p̸-k̸+m_l) γ_μ v_ρ]/(k^2+i0) [(Q-k)^2+i0] (k^2-2 k· p+i0)} = 4πα/M{Q^2/Q^2+4EE^'] ℛe {4(Q^2 v+2 E p+2 E p^')· I^-_1(p,0|1,1,1,0) + 16 m_l^2 E I(Q|1,1,1,0)} = -4 α E/π M[Q^2/Q^2+4E^2] ℛe {(1-ν^2/2 ν^2) ln(-Q^2/m^2_l) - (4π)^2m_l^2(1+ν^2/ν^2) I(Q|1,1,1,0) } + 𝒪(1/M^2) , where ν=√(1-4m^2_l/Q^2) is a Q^2-dependent kinematical variable. The analytical results for the relativistic three-point scalar I(Q|1,1,1,0) and tensor I^-μ_1(p,0|1,1,1,0) integrals are expressed in Eqs. (<ref>) and (<ref>). 
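For orientation, the seagull correction can be evaluated numerically once the closed form of I(Q|1,1,1,0) from Appendix <ref> is supplied. The following Python sketch does this under the assumption that the expressions quoted above and in the appendix are transcribed as written; the dilogarithm is obtained by direct numerical integration, and the kinematical point (a 210 MeV/c muon beam at |Q^2| = 0.02 (GeV/c)^2) is an illustrative choice of ours rather than a value taken from the text.

import numpy as np
from scipy.integrate import quad

ALPHA = 1.0 / 137.036   # fine-structure constant
M = 0.93827             # proton mass [GeV]

def li2(x):
    # dilogarithm Li_2(x) = -int_0^x dt ln(1-t)/t, sufficient for 0 < x < 1
    val, _ = quad(lambda t: -np.log(1.0 - t) / t, 0.0, x)
    return val

def i_1110(q2, m_l):
    # relativistic three-point function I(Q|1,1,1,0), closed form from Appendix <ref> (Q^2 < 0)
    nu = np.sqrt(1.0 - 4.0 * m_l**2 / q2)
    lg = 0.5 * np.log((nu + 1.0) / (nu - 1.0))
    return (np.pi**2 / 3.0 + lg**2 + li2((nu - 1.0) / (nu + 1.0))) / (8.0 * np.pi**2 * q2 * nu)

def delta_seagull(q2, E, m_l):
    # seagull contribution to the fractional TPE correction, Eq. (<ref>)
    nu = np.sqrt(1.0 - 4.0 * m_l**2 / q2)
    bracket = ((1.0 - nu**2) / (2.0 * nu**2) * np.log(-q2 / m_l**2)
               - (4.0 * np.pi)**2 * m_l**2 * (1.0 + nu**2) / nu**2 * i_1110(q2, m_l))
    return -4.0 * ALPHA * E / (np.pi * M) * q2 / (q2 + 4.0 * E**2) * bracket

m_mu = 0.10566                        # muon mass [GeV]
E = np.sqrt(0.210**2 + m_mu**2)       # energy of a 210 MeV/c muon beam (illustrative)
print(delta_seagull(-0.02, E, m_mu))  # fractional correction at |Q^2| = 0.02 (GeV/c)^2

The sketch is only meant to make the structure of the seagull term explicit; the curves shown in Fig. <ref> are of course obtained from the full analytical expressions.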
In the case of muon scattering, the second term in the seagull contribution being proportional to the square of its mass leads to some enhancements in the result as compared to the electron case, where the contribution is tiny (see Fig. <ref>). For a detailed discussion about the seagull contribution, we refer the reader to Ref. <cit.>. § RESULTS AND DISCUSSION In Figs. <ref> - <ref>, we provide the numerical results of our analytically derived expressions for the box, crossed-box, and seagull TPE contributions after isolating out the IR-divergent terms [see Eq. (<ref>)].[The exact cancellation of the IR-divergent terms from the TPE diagrams that we have derived, namely, δ^ (box)_ IR, with the corresponding soft bremsstrahlung counterpart, δ^ (soft)_ IR, is a concrete result that we have established via explicit HBχPT calculations including NLO corrections to the latter contributions, the details of which will be published in a future publication.] In Fig. <ref> our LO correction results, Eq. (<ref>), are plotted for e-p and μ-p scatterings, at the three specific MUSE choices of the incoming lepton momenta. In the same figure, we also compared our results to other TPE works, e.g., Ref. <cit.>, based on relativistic QED invoking SPA <cit.>, as well as the well-known estimate, Eq. (<ref>) of McKinley and Feshbach <cit.>, based on the scattering of massless electrons off static Coulomb potentials in the context of the second Born Approximation. Our final results of evaluating the TPE radiative corrections to the elastic cross-sections are consolidated in Fig. <ref>, where we present a comparison of LO versus NLO contribution from HBχPT. The left panel plots correspond to the electron-proton scattering while the right ones show the results for muon-proton scattering. As seen from the figure [see also Fig. <ref>], for e-p scattering the HBχPT does not appear reliable for Q^2 ≳ 0.04 (GeV/c)^2. In the figures, LO corresponds to the true LO result which stems from diagrams (a) and (b), Eq. (<ref>). In addition, as we have discussed before, diagrams (a) and (b) also contain NLO contributions from Eq. (<ref>) that for the case of e-p scattering are roughly as large as the positive LO part but negative in sign. In fact, in the case of e-p scattering the sum of the NLO contributions from the (a) and (b) diagrams, which dominates for all values of Q^2, makes the total NLO corrections arising from all TPE diagrams, (a) - (i) in Fig. <ref>, negative for a certain low-Q^2 domain, as shown in the left panel plots of Fig. <ref>. Another observation concerns the NLO propagator insertion diagrams, (g) and (h), whose contributions are almost as large as the LO contribution for e-p scattering. The former corrections have a positive sign and they almost cancel the negative NLO corrections from (a) and (b) diagrams, which also include a 1/M correction of the proton propagator. We further observe that the box diagram (c) exactly cancels the contribution from the crossed-box (d) diagram, see below Eq. (<ref>). Finally, we find from Fig. <ref> that for e-p scattering the two IR-finite diagrams, (e) and (f), become dominant for Q^2 ≳ 0.02 (GeV/c)^2, especially due to the contribution of the relativistic three-point integral I(Q|1,1,1,0). In this context, we note that even at the kinematic regime relevant to the MUSE, the electron must be considered relativistic. However, the muon being much heavier is not expected to behave as a true relativistic probe at the MUSE regime. 
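As a rough numerical illustration of the curves just described, the true LO correction of Eq. (<ref>) and the McKinley-Feshbach estimate of Eq. (<ref>) can be evaluated directly from their closed forms. The Python sketch below does so inside the kinematically allowed range 0<|Q^2|<|Q^2_max|, using beam momenta of 115, 153 and 210 MeV/c; these are the nominal MUSE values and are an assumption on our part, since the text does not list them explicitly.

import numpy as np

ALPHA = 1.0 / 137.036            # fine-structure constant
M = 0.93827                      # proton mass [GeV]
M_E, M_MU = 0.000511, 0.10566    # lepton masses [GeV]

def delta_lo(abs_q2, E):
    # true LO TPE correction, Eq. (<ref>): pi*alpha*sqrt(-Q^2)/(2E) * 1/(1 + Q^2/(4E^2)), with Q^2 = -|Q^2|
    return np.pi * ALPHA * np.sqrt(abs_q2) / (2.0 * E) / (1.0 - abs_q2 / (4.0 * E**2))

def delta_feshbach(abs_q2, E, beta):
    # McKinley-Feshbach estimate, Eq. (<ref>), written in terms of Q^2
    return (np.pi * ALPHA * np.sqrt(abs_q2) / (2.0 * E)
            * (1.0 - np.sqrt(abs_q2) / (2.0 * beta * E))
            / (1.0 - abs_q2 / (4.0 * E**2)))

for m_l, label in ((M_E, "e-p"), (M_MU, "mu-p")):
    for p in (0.115, 0.153, 0.210):                     # assumed MUSE beam momenta [GeV/c]
        E = np.sqrt(p**2 + m_l**2)
        beta = p / E
        q2max = 4.0 * M**2 * beta**2 * E**2 / (m_l**2 + M**2 + 2.0 * M * E)
        abs_q2 = 0.5 * q2max                            # sample point inside 0 < |Q^2| < |Q^2_max|
        print(label, p, delta_lo(abs_q2, E), delta_feshbach(abs_q2, E, beta))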
Turning to the muon-proton scattering results, we find significant differences in the TPE corrections compared to the electron-proton case. For the μ-p scattering case, the LO contribution dominates the NLO contribution in the entire MUSE kinematical domain. Furthermore, both LO and NLO contributions are positive in the whole Q^2 range. These results indicate that the perturbative aspects of HBχPT behave more robustly for μ-p scattering. In particular, we find from Fig. <ref> (right panel) that the contributions from the NLO TPE diagrams (g) and (h) are significantly smaller than that of the LO, which sharply contrasts with the corresponding e-p scattering results (left panel). Nonetheless, their contributions remain the most significant among all NLO corrections. Another contrasting feature of the muon results is the following: while, on the one hand, the (e) and (f) diagrams have rather small contributions in this case, the role of the seagull diagram, on the other hand, becomes quite relevant, see Fig. <ref>. Finally, we observe that the total TPE correction including the NLO contributions to μ-p scattering is roughly half the corresponding size for e-p scattering. The impact of our exact analytical results, as compared to the SPA results of Ref. <cit.> obtained in the same HBχPT framework, is clearly visible. There, SPA was applied both in the numerator and the denominator of the lepton propagators that appear in the TPE box and crossed-box diagrams. The results in that work were expressed solely in terms of the IR-divergent three-point functions, I^-(p,0|1,0,1,1) and I^+(p^',0|1,0,1,1). The application of SPA is in effect tantamount to suppressing the effects arising from several such integrals that potentially yield sizeable numerical contributions, as evidenced in Figs. <ref> and <ref>. For instance, the (e), (f), (g), and (h) diagram contributions would be greatly diminished in SPA by the absence of integrals such as the three-point function I(Q|1,1,1,0) and, to some extent, the four-point functions Z^±. The most significant difference in the SPA results compared with our exact result is, however, the vanishing true LO contribution from diagrams (a) and (b) as obtained in Ref. <cit.>. In other words, Ref. <cit.> completely missed out on the Feshbach contribution, Eq. (<ref>) <cit.>. In contrast, our current results in Fig. <ref> suggest that the LO contribution to the TPE radiative corrections exceeds 2% at the largest Q^2 values, essentially stemming from the integral I(Q|1,1,0,1), which is absent in a SPA evaluation. Nevertheless, a notable feature of the TPE radiative corrections is that at Q^2=0 these contributions identically vanish in all the different approaches/approximations. § SUMMARY AND CONCLUSION We have presented in this work an exact analytical evaluation of the LO and NLO contributions from two-photon exchange to the lepton-proton elastic unpolarized cross-section at low energies in HBχPT, taking the nonzero lepton mass into account. We find sizeable contributions beyond the expected SPA results, akin to the ones found via dispersion techniques, e.g., Ref. <cit.>. Our EFT method contrasts with several prior TPE analyses utilizing relativistic hadronic models, which frequently parameterize the proton-photon vertices using phenomenological form factors. The major difference with the SPA evaluation of Ref. <cit.> is that our calculations are done exactly, including all soft- and hard-photon exchange kinematical configurations of the TPE loops.
In fact, our current results are expected to exhibit sizeable differences with any existing SPA calculation using a perturbative/diagrammatic approach. Regarding the isolation of the IR divergences, we used dimensional regularization. They arise from the three-point functions, I^-(p,0|1,0,1,1) and I^+(p^',0 |1,0,1,1) stemming only from diagrams (a) and (b). The remaining TPE diagrams have all finite contributions. Such an IR singular structure of our TPE results agrees with the recent evaluation base of manifestly Lorentz-invariant BχPT <cit.>. In our HBχPT approach, the IR singularities appearing at LO automatically drop out of the calculation with only residual NLO IR-singular terms remaining and they cancel against the relevant soft bremsstrahlung IR-singular contributions at NLO. § ACKNOWLEDGMENTS DC and PC acknowledge the partial financial support from Science and Engineering Research Board under grant number CRG/2019/000895. UR acknowledges financial support from Science and Engineering Research Board under grant number CRG/2022/000027. UR is also grateful to the Department of Physics and Astronomy, University of South Carolina, Columbia, for their local hospitality and support during the completion of the work. The authors convey special thanks to Rakshanda Goswami, Bheemsehan Gurjar, and Ghanashyam Meher for carefully cross-checking many of the calculational steps and for pointing out subtle discrepancies. § NOTATIONS In this section, we present the notations for all the Feynman loop integrals used in this work. First, we present the following generic forms of the one-loop scalar master integrals with external four-momenta p and p^', four-momentum transfer Q_μ=(p-p^')_μ, and loop four-momentum k_μ, having indices n_1,2,3,4∈ℤ_± and a real-valued parameter ω: I^-(p,ω|n_1,n_2,n_3,n_4) = 1/i∫ d^4 k/(2 π)^41/(k^2+i 0)^n1 [(k-Q)^2+i0]^n2 (k^2 - 2k· p+i0)^n_3 (v· k+ω+i0)^n4 , I^+ (p^',ω|n_1,n_2,n_3,n_4) = 1/i∫ d^4 k/(2 π)^41/(k^2+i 0)^n1 [(k-Q)^2+i0]^n2 (k^2 + 2k· p^'+i0)^n_3 (v· k+ω+i0)^n4 , Z^-(Δ,i√(-Q^2)/2,m_l,ω) = 1/i∫ d^4 k/(2 π)^41/[(k+Δ)^2-1/4Q^2+i0] (k^2-m^2_l+i0) (v· k+ω+i0) , and Z^+(Δ^',i√(-Q^2)/2,m_l,ω) = 1/i∫ d^4 k/(2 π)^41/[(k+Δ^')^2-1/4Q^2+i0] (k^2-m^2_l+i0) (v· k+ω+i0 ) , where the Z^± integrals are functions of the additional four vectors, Δ_μ=(p-Q/2)_μ and Δ^'_μ=-(p^'+Q/2)_μ=-(p-Q/2)_μ=-Δ_μ. The parameter ω can be either zero, ω=v· Q=-Q^2/2M, or any finite quantity (e.g., ω=v· p=E , -v· p^'=-E^'), depending on the specific integrals we are dealing with. Next, we have the two tensor integrals with one power of the loop momentum in the numerators appearing in our work. They have the following generic forms: I^-μ_1(p,ω|n_1,n_2,n_3,n_4) = 1/i∫ d^4 k/(2 π)^4k^μ/(k^2+i 0)^n1 [(k-Q)^2+i0]^n2 (k^2- 2k· p+i0)^n_3 (v· k+ω+i0)^n4 , and I^+μ_1(p^',ω|n_1,n_2,n_3,n_4) = 1/i∫ d^4 k/(2 π)^4k^μ/(k^2+i 0)^n1 [(k-Q)^2+i0]^n2 (k^2+ 2k· p^'+i0)^n_3 (v· k+ω+i0)^n4 . In particular, using the following transformations on the tensor integrals, I^-μ_1(p,0|0,1,1,2) and I^+μ_1(p^',0|0,1,1,2), used in this work: 2(E^' p+E p^')· I^-_1(p,0|0,1,1,2) = 2(E^' p + E p^')· T^-_1(p,0|0,1,1,2) - [EQ^2-2m^2_l(E+E^')] I^-(p,0|0,1,1,2) , and 2(E^' p + Ep^')· I^+_1(p^',0|0,1,1,2) = 2(E^' p + Ep^')· T^+_1(p,0|0,1,1,2) + [EQ^2-2m^2_l(E+E^')]I^+(p^',0| 0,1,1,2) , we obtain two more tensor integrals of interest: T^-μ_1(p,0|0,1,1,2) = 1/i∫ d^4 k/(2 π)^4(k-p)^μ/[(k-Q)^2+i0] (k^2- 2k· p+i0) (v· k +i0)^2 , T^+μ_1(p^',0|0,1,1,2) = 1/i∫ d^4 k/(2 π)^4(k+p^')^μ/[(k-Q)^2+i0] (k^2+ 2k· p^'+i0) (v· k+i0)^2 . 
§ ANALYTICAL RESULTS FOR MASTER INTEGRALS Before we display our analytical expressions for the scalar master integrals, we briefly discuss the reduction of the rather complicated four-point functions, I^-(p,0|1,1,1,1) and I^-(p^',0|1,1,1,1), appearing in TPE box and crossed-box diagrams, respectively, into simpler three- and four-point functions via the method of partial fractions. For this purpose, we first express the four-point integrals I^± as I^-(p,0|1,1,1,1) = 1/i∫ d^4 k/(2π)^41/D_1 D_2 D_3 D_4 , and I^+(p^',0|1,1,1,1) = 1/i∫ d^4 k/(2π)^41/D_1 D_2 D_3 D_4 , where for brevity we have defined the following propagator denominators: D_1 = k^2+i0 , D_2 = (k-Q)^2+i0 , D_3 = k^2-2 k p+i0 , D_3 = k^2 + 2 k p^' +i0 , D_4 = v · k +i0 . Next, we use partial fractions to decompose the four-point functions into the following forms: 1/i∫ d^4 k/(2π)^41/D_1 D_2 D_3 D_4 = 1/i∫ d^4 k/(2π)^4 1/Q^2( 1/D_1 D_3 D_4+1/D_2 D_3 D_4 - 2 k·(k-Q)/D_1 D_2 D_3 D_4) , and 1/i∫ d^4 k/(2π)^41/D_1 D_2 D_3 D_4 = 1/i∫ d^4 k/(2π)^4 1/Q^2( 1/D_1 D_3 D_4 +1/D_2 D_3 D_4 - 2 k· (k-Q)/D_1 D_2 D_3 D_4) , which explicitly boil down to the results: I^-(p,0|1,1,1,1) = 1/i∫ d^dk/(2π)^d1/(k^2+i0) [(k-Q)^2+i0] (k^2-2k· p+i0) (v· k +i0) = 1/Q^2[ I^-(p,0|1,0,1,1)+I^-(p,0|0,1,1,1)-2Z^-(Δ,i√(-Q^2)/2,m_l,E)] , and I^+(p^',0|1,1,1,1) = 1/i∫ d^4 k/(2π)^41/(k^2+i0) [(k-Q)^2+i0] (k^2+2k· p^' +i0) (v· k +i0) = 1/Q^2[I^+(p^',0|1,0,1,1)+I^+(p^',0|0,1,1,1)-2Z^+(Δ^',i√(-Q^2)/2,m_l,-E^')] . Here we note that the tensor-like forms of the four-point functions Z^± that follow from Eq. (<ref>) can be conveniently transformed into the corresponding scalar forms via simple transformations of the loop momentum k_μ: Z^-(p,0|1,1,1,1) = 1/i∫ d^4 k/(2π)^4 k· (k-Q)/(k^2+i0) [(k-Q)^2+i0] (k^2-2k· p+i0) (v· k +i0) k→ k+p= 1/i∫ d^4 k/(2 π)^41/[(k+Δ)^2-1/4Q^2+i0](k^2-m^2_l+i0)(v· k+E+i0) ≡ Z^-(Δ,i√(-Q^2)/2,m_l,E) , and similarly, Z^+(p^',0|1,1,1,1) = 1/i∫ d^4 k/(2π)^4 k· (k-Q)/(k^2+i0) [(k-Q)^2+i0] (k^2+2k· p^'+i0) (v· k +i0) k→ k-p^'= 1/i∫ d^4 k/(2 π)^41/[(k+Δ^')^2-1/4Q^2+i0](k^2-m^2_l+i0)(v· k-E^'+i0) ≡ Z^+(Δ^',i√(-Q^2)/2,m_l,-E^') , The three- and four-point functions resulting from the above decompositions are individually rather straightforward to evaluate. While the three-point functions, I^-(p,0|1,0,1,1) and I^+(p^',0|1,0,1,1) are IR-divergent, the three-point functions, I^-(p,0|0,1,1,1) and I^+(p^',0|0,1,1,1), as well as the four-point functions, Z^-(Δ,i√(-Q^2)/2,m_l,E) and Z^+(Δ^',i√(-Q^2)/2,m_l,-E^'), are all IR-finite. Each of these integrals contains one non-relativistic heavy baryon (proton) propagator linear in the loop four-momentum factor, namely, v · k. We first focus on the IR-divergent master integrals, I^-(p,0|1,0,1,1) and I^+(p^',0|1,0,1,1). One method of isolating the IR-divergent parts of such integrals is to utilize dimensional regularization by analytically continuing the integrals to D-dimensional (D = 4 -2ϵ) space-time, where the pole ϵ < 0 yields the IR divergence. Especially, to deal with the non-relativistic propagator it is convenient to employ a special form of the Feynman parameterization <cit.>, namely, [(∏_i=1^N A_i)B]^-1 = N!∫_0^∞ 2 dλ∫∏_i=1^N du_i × δ (1-∑_i=1^N u_i) ∏_i=1^n θ(u_i)/[∑_i=1^N A_i u_i+2 Bλ]^N+1 . Using such a parameterization one can evaluate n-point Feynman integrals with one or more non-relativistic propagators. The detailed derivation of the Feynman integrals will not be presented here. Below, we simply display the analytical expressions of the specific integrals used in this work. 
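As an aside, the partial-fraction decompositions in Eqs. (<ref>) and (<ref>) rest on the purely algebraic identity D_1 + D_2 - 2k·(k-Q) = Q^2, so that 1/(D_1 D_2) = [1/D_1 + 1/D_2 - 2k·(k-Q)/(D_1 D_2)]/Q^2, with the common factor D_3 D_4 simply carried along. A short symbolic check of this identity (a sketch of ours using sympy; the component names are arbitrary) reads:

import sympy as sp

k = sp.symbols('k0 k1 k2 k3', real=True)
Q = sp.symbols('Q0 Q1 Q2 Q3', real=True)

def dot(a, b):  # Minkowski product with signature (+,-,-,-)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

kmQ = [ki - Qi for ki, Qi in zip(k, Q)]
D1, D2, Qsq = dot(k, k), dot(kmQ, kmQ), dot(Q, Q)

lhs = 1 / (D1 * D2)
rhs = (1/D1 + 1/D2 - 2*dot(k, kmQ)/(D1*D2)) / Qsq
print(sp.simplify(lhs - rhs))   # prints 0; multiplying by 1/(D3*D4) then reproduces the decompositions above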
We now explicitly spell out the results for the IR-singular master integrals with the standard choice of the renormalization scale, μ=√(-Q^2): I^-(p,0|1,0,1,1) ≡ I^(0)(p,0|1,0,1,1) = - 1/(4π)^2 β E[(1/ϵ-γ_E+ln 4π) ln√(1+β/1-β) + ln(-Q^2/m_l^2)ln√(1+β/1-β) - Li_2(2β/1+β) - ln^2√(1+β/1-β) - iπ(1/ϵ-γ_E+ln-4 π Q^2 /m_l^2)] , where γ_E=0.577216... is the Euler-Mascheroni constant, and I^+(p^',0|1,0,1,1) = 1/(4π)^2 β^' E^'[(1/ϵ-γ_E+ln 4π) ln√(1+β^'/1-β^') + ln(-Q^2/m_l^2)ln√(1+β^'/1-β^') -Li_2(2β^'/1+β^') - ln^2√(1+β^'/1-β^') - iπ(1/ϵ-γ_E+ln-4 π Q^2/m_l^2) ] . Here, β=|p⃗ |/E and β^'=|p⃗^⃗'⃗ |/E^' are the velocities of the incoming and outgoing lepton, respectively, and Li_2(z) =- ∫_0^z dt ln(1-t)/t , ∀ z∈ℂ , is the standard dilogarithm (or Spence) function. For convenience we split I^+(p^',0|1,0,1,1) into the LO [i.e., 𝒪(1/M^0)] and NLO [i.e., 𝒪(1/M)] parts, namely, I^+(p^',0|1,0,1,1) = - I^(0)(p,0|1,0,1,1) + δ^(1/M)I^+(p^',0|1,0,1,1) + 𝒪(M^-2) , noting that the LO part of I^+(p^',0|1,0,1,1) is simply the negative of the integral I^-(p,0|1,0,1,1) containing purely LO terms only. The NLO part of I^+(p^',0|1,0,1,1) is given as δ^(1/M)I^+(p^',0|1,0,1,1) = - Q^2/2(4π)^2 M E^2 β^3[ (1/ϵ-γ_E+ln 4π)(ln√(1+β/1-β)-β) + ln(-Q^2/m_l^2)(ln√(1+β/1-β)-β) - Li_2(2β/1+β) + 2 ln√(1+β/1-β) - ln^2 √(1+β/1-β) ] . Next, there are four integrals that we give for completeness. These UV-divergent two-point integrals are calculated using dimensional regularization by analytically continuing the integrals to d-dimensional (d = 4 + 2ϵ) space-time, where the pole ϵ < 0 yields the UV-divergence. [In Eq. (<ref>) in the text where we isolated the IR divergence, we have analytically continued to dimension D=4-2ϵ, where ϵ <0. This case should not be confused with the continuation to the dimension d≠ D which we use in this part of Appendix <ref> in the context of the particular UV-divergent integrals of interest. Nonetheless, our TPE results are free of UV-divergences, as seen in Eq. (<ref>) which involves taking differences of such UV-divergent functions.] These integrals are needed later in Appendix <ref> when we evaluate the tensor integrals T^-μ_1 displayed in Appendix <ref>. They are given by the following expressions with μ=√(-Q^2) as the choice of the renormalization scale: I^(0)(p,0|0,0,1,2) ≡ I^-(p,0|0,0,1,2) = 1/8π^2[1/ϵ+γ_E -ln(-4π Q^2/m^2_l)+2/βln√(1+β/1-β) ] , I^+(p^',0|0,0,1,2) = 1/8π^2[1/ϵ+γ_E -ln(-4π Q^2/m^2_l)+2/β^'ln√(1+β^'/1-β^') ] = I^(0)(p,0|0,0,1,2) + Q^2/8π^2 MEβ^2[1-(1-β^2/β)ln√(1+β/1-β) ] + 𝒪(1/M^2) , and I(Q|0,1,0,2) ≡ I^-(p,0|0,1,0,2) ≡ I^+(p^',0|0,1,0,2) = 1/8π^2[1/ϵ+γ_E -ln(-4π M^2/Q^2) - 2 ] , where the last integral is only a function of Q^2. Having discussed the results for all divergent master integrals used in this work, we turn our attention to non-divergent functions. First, we enumerate the analytical expressions of the IR-finite three- and four-point functions I^± and Z^± that appear in the TPE results. These integrals, which contain one heavy baryon propagator each, are conveniently evaluated using Zupan's method <cit.>. 
Again, for the sake of convenience, it is useful to split each of these integrals into LO and NLO parts: I^-(p,0|0,1,1,1) = I^(0)(p,0|0,1,1,1) + δ^(1/M)I^+(p,0|0,1,1,1) + 𝒪(1/M^2) , I^+(p^',0|0,1,1,1) = -I^(0)(p,0|0,1,1,1) + δ^(1/M)I^+(p^',0|0,1,1,1) + 𝒪(1/M^2) , Z^-(Δ,i√(-Q^2)/2,m_l,E) = Z^(0)(Δ,i√(-Q^2)/2,m_l,E) + δ^(1/M)Z^-(Δ,i√(-Q^2)/2,m_l,E) + 𝒪(1/M^2) Z^+(Δ^',i√(-Q^2)/2,m_l,-E^') = -Z^(0)(Δ,i√(-Q^2)/2,m_l,E) + δ^(1/M)Z^+(Δ^',i√(-Q^2)/2,m_l,-E^') +𝒪(1/M^2) , where we note that the LO parts of I^+ and Z^+ differ from the LO parts of I^- and Z^-, namely, I^(0)(p,0|0,1,1,1) =-1/ (4π)^2 β E[π^2/6 - Li_2 (2 β/1+β) - Li_2 (1+β/1-β) - 2ln^2 √(1+β/1-β) + 2ln(-Q^2/2 M E β) ln√(1+β/1-β) ] , and Z^(0)(Δ,i√(-Q^2)/2,m_l,E) = - 1/(4π)^2√(E^2-Δ^2)[1/2Li_2(Δ^2/Δ^2-m^2_l) - 1/2Li_2(Δ^2-m^2_l/α^2) - 1/2Li_2(α^2/Δ^2-m^2_l) + Li_2(1+E(1+β)/α) + Li_2(1+E(1-β)/α) ] , respectively, by overall signs only. Whereas, the NLO terms are given by the following expressions: δ^(1/M)I^-(p,0|0,1,1,1) = - Q^2/(4π)^2 M E^2 β^3[-π^2/12 - β + 1/2Li_2 (2 β/1+β) + 1/2Li_2 (1+β/1-β) - (1+β) ln√(1+β/1-β) + 2 βln√(2 β/1-β) + ln^2 √(1+β/1-β) - ln(-Q^2/2 M E β) ln√(1+β/1-β) + βln(-Q^2/2 M Eβ) + i πβ] , δ^(1/M)I^+(p^' ,0|0,1,1,1) = - Q^2/(4π)^2 M E^2 β^3[ln√(1+β/1-β) -β] , δ^(1/M)Z^-(Δ,i√(-Q^2)/2,m_l,E) = - Q^2/4 (4 π)^2 M (E^2-Δ^2)[(4 π)^2 E Z^(0) - √(Δ^2/Δ^2-m^2_l)ln(√(Δ^2) + √(Δ^2-m^2_l)/√(Δ^2)-√(Δ^2-m^2_l)) - ln(m_l^2/Δ^2-m^2_l) - i π(1+√(Δ^2)+E/√(Δ^2-m^2_l))] , δ^(1/M)Z^+(Δ^',i√(-Q^2)/2,m_l,-E^') = Q^2/4 (4 π)^2 M(E^2-Δ^2)[(4 π)^2 E Z^(0) + √(Δ^2/Δ^2-m^2_l)ln(√(Δ^2)+√(Δ^2-m^2_l)/√(Δ^2)-√(Δ^2-m^2_l)) - 4/βln√(1+β/1-β) - ln(m_l^2/Δ^2-m^2_l) + 2(2+E/√(E^2-Δ^2)) × ln(√(Δ^2-m^2_l)/√(Δ^2-m^2_l)-α) - iπ(1+√(Δ^2)+E/√(Δ^2-m^2_l))] , where α=-E + √(E^2-Δ^2), and Δ^2=m_l^2-1/4Q^2. Furthermore, in the context of evaluating the contributions from (g) and (h) diagrams [cf. Eqs. (<ref>) and (<ref>)], we additionally need to evaluate the three-point scalar master integrals I^-(p,0|0,1,1,2) and I^+(p^',0|0,1,1,2), respectively, as well as the tensor integrals, I^-μ_1(p,ω|0.1,1,2) and I^+μ_1(p^',ω|0,1,1,2), respectively. These are IR-finite functions containing two powers of the heavy baryon propagator each. First, we tackle the scalar integrals by adopting Zupan's methodology <cit.>. For this purpose, we need to generalize our aforementioned expression for the IR-finite functions I^-(p,0|0,1,1,1) and I^+(p^',0|0,1,1,1) [see Eqs. (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>)] into I^-(p,ω|0,1,1,1) and I^+(p^',ω|0,1,1,1) by introducing an infinitesimal parameter ω, and subsequently taking the ω-derivatives evaluated in the limit ω→ 0, namely, I^-(p,0|0,1,1,2) = - lim_ω→ 0[∂/∂ω I^-(p,ω|0,1,1,1)] = I^(0)(p,0|0,1,1,2) + 𝒪(1/M) = - 4 M/(4π)^2 Q^2 Eβln√(1+β/1-β) + 𝒪(1/M) , I^+(p^',0|0,1,1,2) = - lim_ω→ 0[∂/∂ω I^+(p^',ω|0,1,1,1)] = - I^(0)(p ,0|0,1,1,2) - 2/(4π)^2 E^2 β^3[ln√(1+β/1-β)-β] + 𝒪(1/M) , where from Eq. (<ref>) we have I^-(p,ω|0,1,1,1) = - 1/ (4π)^2 β E[π^2/6 - Li_2 (2 β/1+β) - Li_2 (1+β/1-β) - 2ln^2 √(1+β/1-β) + 2ln(ω-Q^2/2 M Eβ) ln√(1+β/1-β) + Q^2/M E β^2{- π^2/12 - β + 1/2Li_2 (2 β/1+β) + 1/2Li_2 (1+β/1-β) - (1+β) ln√(1+β/1-β) + 2βln√(2 β/1-β) + ln^2 √(1+β/1-β) - ln(ω-Q^2/2 M Eβ) ln√(1+β/1-β) + βln(ω-Q^2/2 M E β) + i πβ} + 2ω/β E{1-1/βln√(1+β/1-β) } ] + 𝒪(1/M^2) , I^+(p^',ω|0,1,1,1) = 1/ (4π)^2 β E[π^2/6 - Li_2 (2 β/1+β) - Li_2 (1+β/1-β) - 2ln^2 √(1+β/1-β) + 2ln(ω-Q^2/2 M E β) ln√(1+β/1-β) + Q^2/M E β^2 ( β - ln√(1+β/1-β) ) - 2ω/β E( 1 - 1/βln√(1+β/1-β)) ] + 𝒪(1/M^2) . 
Next, we deal with the rank-1 tensor integrals, T^-μ_1(p,0|0.1,1,2) and T^+μ_1(p^',0|0,1,1,2) as displayed in Appendix <ref>. Using PV reduction <cit.> technique to decompose these tensor functions into the corresponding scalar forms, we first write the integrals as T^-μ_1(p,0|0,1,1,2) = v^μ C^-_1 + p^' μ C^-_2 , T^+μ_1(p^',0|0,1,1,2) = v^μ C^+_1 + p^μ C^+_2 . The above coefficients C^±_1,2 are then obtained by successively contracting with three independent available four-vectors, such as v^μ, p^μ and p^'μ, namely, v· T^±_1, p^'· T^-_1 and p · T^+_1, and subsequently using IBP to decompose these dot products into combinations of two- and three-point scalar master integrals, as discussed in this appendix. We then obtain the following: C^-_1 = [1-1/β^'2]I^-(p,0|0,1,1,1) - [E - E/β^' 2 + m^2_l/E^'β^' 2]I^-(p,0|0,1,1,2) + 1/2E^'β^' 2[I^-(p,0|0,0,1,2) - I(Q|0,1,0,2)], C^-_2 = 1/E^'β^' 2I^-(p,0|0,1,1,1) - [E/E^'β^' 2 - m^2_l/E^' 2β^' 2]I^-(p,0|0,1,1,2) - 1/2E^' 2β^' 2[I^-(p,0|0,0,1,2) - I(Q|0,1,0,2)], C^+_1 = [1-1/β^2]I^+(p^',0|0,1,1,1) + [E^' - E^'/β^2 + m^2_l/E β^2]I^+(p^',0|0,1,1,2) - 1/2Eβ^2[I^+(p^',0|0,0,1,2) - I(Q|0,1,0,2)], C^+_2 = 1/Eβ^2I^+(p^',0|0,1,1,1) + [E^'/Eβ^2 - m^2_l/E^2 β^2]I^+(p^',0|0,1,1,2) + 1/2E^2β^2[I^+(p^',0|0,0,1,2) - I(Q|0,1,0,2)]. Furthermore, there appears another finite three-point integral with one heavy baryon propagator arising from all but the (g) and (h) TPE diagrams, and is given by I(Q|1,1,0,1) ≡ I^-(p,0|1,1,0,1) ≡ I^+(p^',0|1,1,0,1) = - 1/16√(1/-Q^2) . Finally, in contrast to the aforementioned loop integrals containing the non-relativistic proton propagators, there exists two relativistic finite integrals essentially contributing to the seagull diagrams, namely, the scalar three-point function: I(Q|1,1,1,0) ≡ I^-(p,0|1,1,1,0) ≡ I^+(p^',0|1,1,1,0) =1/8π^2Q^2ν[π^2/3+ln^2√(ν+1/ν-1) +Li_2(ν-1/ν+1)] , where ν=√(1-4m^2_l/Q^2), and the tensor three-point function which can be decomposed into the following form: I^-μ_1(p,0|1,1,1,0) = -1/8π^2Q^2ν^2[(p^μ-1/2Q^μ)ln(-Q^2/m^2_l) -8π^2(Q^2p^μ-2m^2_l Q^μ)I(Q|1,1,1,0)] . apsrev 111 Hofstadter:1955ae R. Hofstadter and R. W. McAllister, Phys. Rev. 98, 217 (1955). Rosenbluth:1950yq M. N. Rosenbluth, Phys. Rev. 79, 615 (1950). Akhiezer:1974em A. I. Akhiezer, M. P. Rekalo, Fiz. Elem. Chast. Atom. Yadra 4, 662 (1973). Arnold:1980zj R. G. Arnold and C. E. Carlson and F. Gross, Phys. Rev. C 23, 363 (1981). Gayou:2001qt O. Gayou, et al., Phys. Rev. C 64, 038202 (2001). Jones:1999rz M. K. Jones et al,, Phys. Rev. Lett. 84, 1398 (2000). Perdrisat:2006hj C. F. Perdrisat, V. Punjabi, M. Vanderhaeghen, Prog. Part. Nucl. Phys. 59, 694 (2007). Punjabi:2015bba V. Punjabi et al., Eur. Phys. J. A 51, 79 (2015). Puckett:2010 A. J. R. Puckett, et al., Phys. Rev. Lett. 104, 242301 (2010). Arrington:2003 J. Arrington, Phys. Rev. C 68, 034325 (2003). Guichon:2003 P. A. M. Guichon and M. Vanderhaeghen, Phys. Rev. Lett. 91, 142303 (2003). Blunden:2003sp P. G. Blunden, W. Melnitchouk and J. A. Tjon, Phys. Rev. Lett. 91, 142304 (2003). Rekalo:2004wa M. P. Rekalo and E. Tomasi-Gustafsson, Nucl. Phys. A 742, 322 (2004). Blunden:2005ew P. G. Blunden, W. Melnitchouk, and J. A. Tjon, Phys. Rev. C 72, 034612 (2005). Carlson:2007sp C. E. Carlson and M. Vanderhaeghen, Ann. Rev. Nucl. Part. Sci. 57, 171 (2007). Arrington:2011 J. Arrington, P. G. Blunden, and W. Melnitchouk, Prog. Part. Nucl. Phys. 66, 782 (2011). Pohl:2010zza R. Pohl et al., [CREMA Collaboration] Nature 466 213, 2010. Pohl:2013 R. Pohl, R. Gilman, G. A. Miller, and K. Pachucki, Ann. Rev. Nucl. 
Part. Sci. 63, 175 (2013). Mohr:2012tt P. J. Mohr, et al., “CODATA Recommended Values of the Fundamental Physical Constants: 2010*", Rev. Mod. Phys. 84, 1527 (2012). Antognini:1900ns A. Antognini, et al., Science 339, 417 (2013). Bernauer:2014 J. C. Bernauer and R. Pohl, Scientific American 310, 32 (2014). Carlson:2015 C. E. Carlson, Prog. Part. Nucl. Phys. 82, 59 (2015). Bernauer:2020ont J. C. Bernauer, EPJ Web Conf. 234, 01001 (2020). Gao:2021sml H. Gao and M. Vanderhaeghen, Rev. Mod. Phys. 94, 015002, (2022). Kivel:2012vs N. Kivel and M. Vanderhaeghen, JHEP 04, 029 (2013). Tomalak:2014sva O. Tomalak and M. Vanderhaeghen, Eur. Phys. J. A 51, 24 (2015). Tomalak:2014dja O. Tomalak and M. Vanderhaeghen, Phys. Rev. D 90, 013006 (2014). Tomalak:2015aoa O. Tomalak and M. Vanderhaeghen, Phys. Rev. D 93, 013023 (2016). Tomalak:2015hva O. Tomalak and M. Vanderhaeghen, Eur. Phys. J. C 76, 125 (2016). Tomalak:2016vbf O. Tomalak, B. Pasquini, M. Vanderhaeghen, Phys. Rev. D 95, 096001 (2017). Tomalak:2017npu O. Tomalak, Eur. Phys. J. C 77, 858 (2017). Koshchii:2017dzr O. Koshchii and A. Afanasev, Phys. Rev. D 96, 016005 (2017). Tomalak:2018jak O. Tomalak and M. Vanderhaeghen, Eur. Phys. J. C 78, 514 (2018). Talukdar:2019dko P. Talukdar, V. C. Shastry, U. Raha and F. Myhrer, Phys. Rev. D 101, 013008 (2020). Peset:2021iul C. Peset, A. Pineda and O. Tomalak, Prog. Part. Nucl. Phys. 121, 103901 (2021). Talukdar:2020aui P. Talukdar, V. C. Shastry, U. Raha, and F. Myhrer, Phys. Rev. D 104, 053001 (2021). Guo:2022kfo Q. Q. Guo and H. Q. Zhou, Phys. Rev. C 106, 015203 (2022). Gilman:2013eiv R. Gilman et al., [MUSE Collaboration] AIP Conf. Proc. 1563, 167 (2013). Gasser:1982ap J. Gasser and H. Leutwyler, Phys. Rept. 87, 77 (1982). Jenkins:1990jv E. Jenkins and A. V. Manohar, Phys. Lett. B 255, 558 (1991). Bernard:1992qa V. Bernard, N. Kaiser and U.-G. Meissner, Nucl. Phys. B 338, 315 (1992). Ecker:1994pi G. Ecker, Phys. Lett. B 336, 508 (1994). Bernard:1995dp V. Bernard, N. Kaiser and U.-G. Meissner, Int. J. Mod. Phys. E 4, 193 (1995). Cao:2021nhm X. H. Cao, Q. Z. Li and H. Q. Zheng, Phys. Rev. D 105, 094008 (2022). Kondratyuk:2005kk S. Kondratyuk, P. G. Blunden, W. Melnitchouk and J. A. Tjon, Phys. Rev. Lett. 95, 172503 (2005). Christy:2021snt M. E. Christy, et al., Phys. Rev. Lett. 128 (2022) no.10, 102002 Tsai:1961zz Y. S. Tsai, Phys. Rev. 122, 1898 (1961). zupan:2002 J. Zupan, Eur. Phys. J. C 25 (2002) 233. Chetyrkin:1981qh K. G. Chetyrkin and F. V. Tkachov, Nucl. Phys. B 192, 159 (1981). Grozin:2000cm A. G. Grozin, [arXiv:hep-ph/0008300 [hep-ph]]. Mo:1968cg L. W. Mo and Yung-Su Tsai. Rev. Mod. Phys. 41, 205 (1969). Maximon:1969nw L. .C. Maximon, Rev. Mod. Phys. 41, 193 (1969). Maximon:2000hm L. C. Maximon and J. A. Tjon, Phys. Rev. C 62, 054320 (2000). McKinley:1948zz W. A. McKinley and H. Feshbach, Phys. Rev. 74, 1759 (1948). Passarino:1979 G. Passarino and M. Veltman, Nucl. Phys. B 160, 151 (1979).
http://arxiv.org/abs/2306.12206v1
20230621120125
Tailstorm: A Secure and Fair Blockchain for Cash Transactions
[ "Patrik Keller", "Ben Glickenhaus", "George Bissias", "Gregory Griffith" ]
cs.CR
[ "cs.CR", "cs.DC" ]
Proof-of-work (PoW) cryptocurrencies rely on a balance of security and fairness in order to maintain a sustainable ecosystem of miners and users. Users demand fast and consistent transaction confirmation, and in exchange drive the adoption and valuation of the cryptocurrency. Miners provide the confirmations; however, they primarily seek rewards. In unfair systems, miners can amplify their rewards by consolidating mining power. Centralization, however, undermines the security guarantees of the system and might discourage users. In this paper we present Tailstorm, a cryptocurrency that strikes this balance. Tailstorm merges multiple recent protocol improvements addressing security, confirmation latency, and throughput with a novel incentive mechanism improving fairness. We implement a parallel proof-of-work consensus mechanism with k PoWs per block to obtain state-of-the-art consistency guarantees <cit.>. Inspired by Bobtail <cit.> and Storm <cit.>, we structure the individual PoWs in a tree which, by including a list of transactions with each PoW, reduces confirmation latency and improves throughput. Our proposed incentive mechanism discounts rewards based on the depth of this tree. Thereby, it effectively punishes information withholding, the core attack strategy used to reap an unfair share of rewards. We back our claims with a comprehensive analysis. We present a generic system model which allows us to specify Bitcoin, ℬ_k <cit.>, and Tailstorm from a joint set of assumptions. We provide an analytical bound for the fairness of Tailstorm and Bitcoin in honest networks and we confirm the results through simulation. We evaluate the effectiveness of dishonest behaviour through reinforcement learning. Our attack search reproduces known optimal strategies against Bitcoin, uncovers new ones against ℬ_k, and confirms that Tailstorm's reward discounting makes it more resilient to incentive layer attacks. Our results are reproducible with the material provided online cpr:repo. Lastly, we have implemented a prototype of the Tailstorm cryptocurrency as a fork of Bitcoin Cash. The client software is ready for testnet deployment and we also publish its source online clientImpl. § INTRODUCTION Proof-of-work (PoW) cryptocurrencies can be thought of as stacked systems comprising four layers. The PoW layer moderates access of weakly identified parties using a mining puzzle: proposing a new block requires finding a small hash. The consensus layer allows all participants to agree on a specific block ordering. The application layer maintains a distributed ledger by writing cryptocurrency transactions into the blocks. Finally, the incentive layer motivates participation in the PoW layer by minting new cryptocurrency for successful miners. The circular dependencies between the layers result in interdependent failures. Consensus faults are inevitable when individual miners gain too much control <cit.>. Unreliable consensus enables double spending and erodes confidence in the cryptocurrency. Devaluation of the currency renders the mining rewards worthless as well. Lastly, misaligned incentives encourage centralization among the miners, eventually allowing the strongest one to break consensus.
In this paper, we introduce and analyze Tailstorm, a new cryptocurrency that strengthens two layers of the stack. We employ an innovative incentive mechanism as well as a state-of-the-art consensus mechanism, while retaining the PoW and application layers of Bitcoin. We draw inspiration from the organization of delta blocks in the Storm protocol <cit.> as well as the use of partial PoW in the Bobtail <cit.> and ℬ_k protocols <cit.>. Bitcoin's consensus <cit.> uses a sequential PoW mechanism where each block references a single parent block. The blocks form a tree and the participants mine new blocks that extend the longest branch according to the longest chain rule. Blocks off the longest branch are discarded. Evidently, attackers who possess more than 50 % of the hash rate pose a threat to the system: they can execute double-spend attacks by mining their own branch until it eventually becomes the longest. But even less mining power might suffice since honest participants also discard the blocks of other honest miners if they are not on the same branch. This happens naturally in realistic networks due to propagation delays. If an attacker can induce and exploit communication delays, all discarded blocks may benefit the attacker; effectively increasing their strength <cit.>. In contrast, Tailstorm implements a parallel PoW consensus mechanism that largely avoids discarding blocks. For this, we closely follow the approach taken by Keller and Böhme at AFT '22 <cit.>. Their protocol, ℬ_k, confirms each block with k votes. ℬ_k blocks do not require a PoW, only votes do. Notably, votes confirming the same block can be mined in parallel because they do not depend on each other. Discarding only occurs when there are more than k votes for the same block. Even then, individual discarded votes only account for 1/k of a block's PoW. As Keller and Böhme <cit.> argue, this makes consensus more robust against consensus layer attacks. But it is futile to analyze the consensus without considering incentives. Ideally, rewards are distributed fairly, which means that a miner's expected reward is proportional to its hash rate. Arnosti and Weinberg <cit.> show that even small inequalities in reward allocation encourage substantial centralization of hash rate which ultimately poses a threat to consensus and the cryptocurrency itself <cit.>. Unfairness arises from natural network delays and dishonest behavior. Both of these factors affect Bitcoin. In latent networks, two blocks mined around the same time may refer to the same parent, even if both miners follow the longest chain rule. One of the blocks will be discarded, the other rewarded. The stronger miner has more hash rate to support their own block, hence the weaker miner is worse off. Network-level attackers can additionally exploit latency to manipulate impartial participants in their favor. But even without delays, Bitcoin miners with more than one third of the hash rate can reap an unfair share of rewards by temporally withholding blocks instead of acting honestly <cit.>. As we will demonstrate, ℬ_k suffers from the same problem, partly due to its leader election mechanism. Tailstorm addresses the unfairness problem through an innovative reward scheme that punishes withholding. The Tailstorm blockchain consists of subblocks and summary blocks. Similar to ℬ_k, summaries do not require a PoW, but subblocks do. Assembling a new summary requires k subblocks that confirm the same parent summary. 
To preserve the security properties of ℬ_k, subblocks confirming the same summary are conflict-free, and hence can be mined in parallel. Taking inspiration from Bobtail <cit.>, subblocks optionally refer to another subblock instead of a summary, and hence form a tree. With Tailstorm, we propose to discount rewards based on the depth of this tree, as depicted in Figure <ref>. Mining subblocks in private causes branching of the tree, reduces its depth, and ultimately leads to lower rewards. We support our claims with a comprehensive analysis and make the following contributions. * We formulate a generic system model for PoW cryptocurrencies. The models abstracts PoW and communication, defines a joint set of assumptions, and enables valid comparisons between different consensus protocols and incentive mechanisms. * We specify Bitcoin, ℬ_k, and Tailstorm in the joint model. To isolate the effect of Tailstorm's discount reward scheme and ℬ_k's leader election mechanism, we additionally specify a hybrid protocol, , modelling Tailstorm without discounting and ℬ_k without leader election. * We provide an upper bound for the orphan rate of Tailstorm in honest networks. Compared to Bitcoin <cit.>, Tailstorm creates less orphans and hence presents less opportunity for unfairness. * We implement the system model as a simulator and show that Tailstorm is more fair than Bitcoin in honest but realistic networks with propagation delays. We confirm Bitcoin's inherent tradeoff: while short block intervals are desirable for fast confirmations and a less volatile stream of rewards, they also bias rewards in favour of strong miners. In Tailstorm, these concerns are largely separated by configuring long summary block intervals for fairness and short subblock intervals for fast confirmations and frequent rewards. * We evaluate multiple hard-coded attack strategies against the specified protocols, finding that attacks which are profitable against ℬ_k are less profitable against Tailstorm, with the protocol lying in between. * We follow Hou et al. <cit.> and search optimal attack strategies using reinforcement learning. Our search reproduces optimal strategies against Bitcoin <cit.> and generally matches or outperforms the hard-coded strategies. The regularity of our results indicates that we indeed found near-optimal strategies against all protocols and enables the conclusion that Tailstorm is less susceptible to incentive layer attacks than the other protocols. * We describe the Tailstorm application layer, which implements a cryptocurrency on top of the proposed consensus protocol. It preserves Bitcoin's transaction logic while enabling faster confirmations. Transactions are stored in subblocks in much the same way that the Storm protocol <cit.> stores transactions in delta blocks. * Lastly, we implement a prototype of Tailstorm which is ready for testnet deployments and make the code available online clientImpl. We structure the paper in order of our contributions. Section <ref> defines the system model and Section <ref> presents the specification of Tailstorm; specification of the remaining protocols is deferred to Appendix <ref>. In Section <ref>, we evaluate the protocols in an honest network with propagation delays. In Section <ref>, we evaluate hard-coded attack strategies, and in Section <ref> we conduct the search for optimal policies with reinforcement learning. Section <ref> presents the Tailstorm cryptocurrency and our prototype implementation. 
In Section <ref>, we discuss related work, limitations and future work. Section <ref> concludes. § SYSTEM MODEL Recall the layered view on PoW cryptocurrencies presented in the introduction: The PoW layer moderates access using a mining puzzle, the consensus layer establishes a specific block ordering, the application layer writes cryptocurrency transactions into the blocks, and the incentive layer mints new cryptocurrency for successful miners. We now present a system model that abstracts PoW and communication to enable concise specification of the consensus layer. Application and incentives are considered in later sections. In practice, PoW consensus protocols are executed as distributed systems, where independent nodes communicate over a P2P network. Messages exchanged between the nodes may be subject to natural or potentially malicious delays. To facilitate specification, we abstract this distributed system and model it algorithmically. We define a virtual environment that emulates distributed protocol execution in a single thread of computation. Within the virtual environment, nodes are represented as numbers, and blocks are represented as vertices in a directed acyclic graph (DAG). Mining is simulated as a loop with random delays, while communication is modeled by restricting the visibility of blocks to a subset of the nodes. The behaviour of nodes is defined by functions, which can be customized to model different protocols. We define the virtual environment in Algorithm <ref>. The environment maintains a DAG where each vertex represents one block. Each block b has an associated list of parent blocks that constitute the outgoing edges in the DAG. We denote this list (b). In practice, edges arise from hash references pointing to other blocks in the blockchain. We say that block b is a descendant of block b', if b' is either a parent of b, or is connected transitively by the relationship. In this case, we say b' is an ancestor of b. Each block has properties which are assigned as the protocol unfolds. For example, the virtual environment uses the Boolean property (b) to track whether block b has a PoW or not. We label the participating nodes with integers ranging from 1 to n. For each node, the virtual environment maintains a local view of the DAG and a preferred tip of the chain. In Lines <ref> and <ref>, we restrict the local view of node i to blocks where (b,i) was set to true. Initially, local views are empty and new blocks are not visible to any node. We denote as (i) the preferred tip of node i, the block to which a new block from i will point. We describe the behaviour of nodes as pure functions. These functions are called by the environment to obtain instructions from the node which the environment then interprets according to our assumptions. This makes all modifications of the DAG and all communication explicit in Algorithm <ref>. In particular, nodes do not directly append blocks to the DAG; they return block templates, which the environment then reifies by appending a new block to the DAG. A protocol is fully specified through four functions: and define the structure of the blockchain, while and define the behavior of honest nodes. The function takes no argument and returns a single block, which we call genesis. Initially, the genesis is the only block in the DAG. takes a block as argument and returns if the block is valid and otherwise. E. g., Bitcoin's checks the property and that there is exactly one parent. 
The environment enforces block validity during the reification of blocks, while deployed protocols would reject invalid blocks in the communication layer. The genesis is not subject to the validity rule. The function specifies how nodes react to newly visible blocks, after they are mined locally with PoW, appended locally without PoW, or received from the network. The function takes two arguments: the node's currently preferred tip and the new block. The function returns the new preferred tip, a list of blocks the node intends to share with other nodes, and a list of block templates it wants to append to the chain without PoW. On the other hand, the function defines how nodes grow the chain with PoW. It takes a single argument, a node's currently preferred tip, and returns a template for the block that the node intends to mine. We follow related work <cit.> and model the mining process in continuous time. The virtual environment generates independent mining delays from the exponential distribution (λ), with rate λ measured in expected number of proofs-of-work per second. Accordingly, the expected value of the distribution, 1/λ, is called the mining interval and is measured in seconds. After each mining delay, the environment randomly selects a successful miner, obtains a block template from , and reifies the block by appending it to the DAG. We support arbitrary hash rate distributions among the nodes by setting the weights κ_1, …, κ_n in Line <ref> accordingly. The function captures the process of making blocks visible to nodes. The specified protocols have in common that block validation requires knowledge of all referenced blocks. We avoid a lot of boilerplate code in the specification, by ensuring that parent blocks are delivered before their children. Upon delivery, the virtual environment first invokes the function to obtain the node's new preferred tip, a list of blocks the node wants to share, and a list of block templates the node intends to append without PoW. It then handles the node's requests to share and append. Communication is modelled through delayed delivery, while appends happen immediately. § THE TAILSTORM PROTOCOL This section specifies the Tailstorm consensus protocol and reward mechanism using the algorithmic model described in Section <ref>. The specification serves as the basis for our theoretical analyses, network simulations, and attack search in subsequent sections. We first describe Tailstorm's chain structure in Section <ref>. We then specify the behaviour of honest nodes in Section <ref>. As a point of reference, we also specify Bitcoin and ℬ_k protocols in Appendix <ref>. In this section, we assume that the application layer implements a cryptocurrency which we can use to pay rewards. We defer the description of Tailstorm's application layer to Section <ref>. Throughout this section, we focus on honest miners who follow the protocol as intended. Later sections will consider dishonest behaviour. §.§ Chain Structure Algorithm <ref> defines Tailstorm's chain structure. Each block is either a summary or a subblock. Subblocks must have PoW and they must have exactly one parent. Summaries do not require PoW and reference k subblocks, each confirming the same ancestor summary. Each block b has two integer properties, (b) and (b). The genesis has height and depth zero. Subblocks inherit the height of their parent and they increment the depth by one. Summaries increment the height and reset the depth to zero. 
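To make these structural rules concrete, the following is a minimal validity check written in Python. It is an illustration only: the `Block` fields, the constant `K`, and the function name are our own and do not correspond to identifiers in the reference implementation, and the requirement that a summary's subblocks confirm the same ancestor summary is approximated here by requiring equal heights.

```python
from dataclasses import dataclass, field
from typing import List

K = 3  # subblocks required per summary (protocol parameter)

@dataclass
class Block:
    kind: str                                      # "summary" or "subblock"
    has_pow: bool
    parents: List["Block"] = field(default_factory=list)
    height: int = 0                                # summaries among the ancestors
    depth: int = 0                                 # position in the current subblock tree

GENESIS = Block(kind="summary", has_pow=False)     # genesis is exempt from the rule below

def valid(b: Block) -> bool:
    """Structural rules for a non-genesis Tailstorm block."""
    if b.kind == "subblock":
        # Subblocks carry a PoW, extend exactly one parent,
        # inherit its height, and increment its depth by one.
        return (b.has_pow
                and len(b.parents) == 1
                and b.height == b.parents[0].height
                and b.depth == b.parents[0].depth + 1)
    if b.kind == "summary":
        # Summaries need no PoW, reference k subblocks of equal height
        # (i.e. confirming the same ancestor summary), increment the
        # height, and reset the depth to zero.
        return (not b.has_pow
                and len(b.parents) == K
                and all(p.kind == "subblock" for p in b.parents)
                and len({p.height for p in b.parents}) == 1
                and b.height == b.parents[0].height + 1
                and b.depth == 0)
    return False
```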
Figure <ref> in Section <ref> illustrates a valid Tailstorm chain for k=3. Note that the subblocks confirming the same summary form a tree and that the property tracks the depth of this tree. The property on the other hand counts the number of trees that have been summarized. To incentivize participation, Tailstorm allocates rewards to the miners of subblocks. The reward size is proportional to the depth of the subblock tree: let b be a summary block, and S be the set of subblocks in the corresponding subblock tree. Then all subblocks in S are allocated the same reward (b) = c/k·max_x ∈ S((x) ) , where c represents a tunable upper limit on the subblock reward. In Figure <ref> we show the rewards for c = 1. Note that the reward scheme punishes non-linearities in the blockchain, and this punishment affects all included subblocks equally. §.§ Honest Nodes Algorithm <ref> specifies the behaviour of honest nodes. The algorithm revolves around a preference order (ln. <ref>-<ref>) that ranks summaries first by height, then by number of confirming subblocks, and finally by potential personal reward for the individual node. Nodes set the highest ranked summary as their preferred summary (ln. <ref>+<ref>) and they mine subblocks (ln. <ref>-<ref>) that confirm their preferred summary. To maximize the depth of the subblock tree, nodes append their subblocks to the longest existing branch. Whenever nodes learn about a new block (ln. <ref>-<ref>), they share it with the other nodes and they update their preference. As soon as there are k subblocks (ln. <ref>) confirming the preferred summary, nodes assemble the next summary. When there are more than k subblock candidates for the next summary, nodes choose the subblocks to maximize their own rewards. We present a greedy algorithm for subblock selection in Algorithm <ref>. §.§ Difficulty Adjustment A major goal for most blockchains, including Tailstorm, is for the blockchain itself to grow at a constant rate so as to maintain constant transactional throughput. However, in any deployed blockchain, the puzzle solving rate λ changes over time because nodes may come and go or may add or remove mining hardware. The changes in solving rate lead to changes of the growth rate. Adjusting for the fluctuations requires feedback from the consensus to the PoW layer. Typically, blockchains adjust the puzzle solving difficulty depending on the observed chain growth using a dynamic difficulty adjustment algorithm (DAA). There exists a rich body of prior work concerning DAA design and analysis <cit.>, but a deep investigation of ideal DAAs for the Tailstorm protocol is beyond the scope this paper. We note, however, that existing DAAs for Bitcoin can be adapted to Tailstorm by counting the number of subblocks where Bitcoin DAAs use the length of the blockchain. For any Tailstorm block b (summary or subblock), we define (b) = k ·(b) + (b) , which counts the number of PoWs included in the chain. Any Tailstorm DAA should adjust the puzzle solving difficulty such that grows at a constant rate. §.§ Protocol Variant With Constant Rewards Tailstorm discounts rewards proportional to the depth of the subblock tree. We see the discounting mechanism as a core contribution of this paper. To isolate the effect of discounting, we introduce a protocol variant without discounting, which we call . While Tailstorm pays out at most c units of reward per subblock, the protocol pays out exactly c units of reward per subblock. In all other aspects, is identical to Tailstorm. 
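As a concrete illustration of the reward rule, the sketch below computes the per-subblock payout of one summary under the discount scheme and under the constant-reward variant. The function names are ours; the discounted payout simply restates the formula above, with the depth of the summarized tree taken as the maximum subblock depth.

```python
def discount_rewards(subblock_depths, k, c=1.0):
    """Tailstorm reward per subblock for one summary: c/k times the depth
    of the summarized subblock tree (its maximum subblock depth)."""
    assert len(subblock_depths) == k
    per_subblock = c / k * max(subblock_depths)
    return [per_subblock] * k

def constant_rewards(subblock_depths, k, c=1.0):
    """Variant without discounting: exactly c per subblock, independent
    of the shape of the subblock tree."""
    assert len(subblock_depths) == k
    return [c] * k

# Example for k = 3 and c = 1: a linear chain of subblocks (depths 1, 2, 3)
# earns the full reward, while a branched tree (depths 1, 1, 2) is discounted.
print(discount_rewards([1, 2, 3], k=3))   # [1.0, 1.0, 1.0]
print(discount_rewards([1, 1, 2], k=3))   # [0.666..., 0.666..., 0.666...]
print(constant_rewards([1, 1, 2], k=3))   # [1.0, 1.0, 1.0]
```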
Note that the protocol does not use the subblock tree structure, neither for consensus nor for incentives. In that regard, it resembles the parallel PoW protocol ℬ_k <cit.> where all subblocks refer to a summary, never another subblock. The only difference with ℬ_k is that it does not implement leader election: while ℬ_k restricts the creation of the next summary to the miner of the subblock with the smallest hash, any node may (re-)create valid summaries locally.
§ FAIRNESS UNDER PROTOCOL COMPLIANCE
A fair PoW protocol rewards miners in proportion to the amount of work they do. Disproportionate or inconsistent allocation discriminates against weak miners and encourages the formation of pools and centralization. In this section, we explore the causes of unfairness in Tailstorm under the assumption that all miners are honest. We measure work in the number of hashes evaluated. The system is fair if rewards are proportional to the miners' hash rates. The root cause of unfairness is the inherent asymmetry in reward loss when multiple PoW solutions are discovered in short order. In Tailstorm, if more than k subblocks are produced, all having the same height, then all but k of them will be discarded. The discarded blocks are commonly called orphans and receive no rewards. Typically, miners do not orphan their own blocks. Since miners with a high hash rate are more often able to choose which blocks will be orphaned, miners with a relatively low hash rate lose a disproportionate amount of rewards. In Bitcoin, this effect is amplified because orphaning can occur for each block (set k=1 in the above argument).
§.§ Analytical Orphan Rate Analysis
In this section, we develop an analytical model for the orphan rate and use it to compare the fairness of Bitcoin and Tailstorm. Let B be a random variable denoting the number of subblocks orphaned during the production of a summary block in Tailstorm or the number of orphans per block in Bitcoin. The orphan rate is given by ρ = E[B]/k, which represents the expected number of PoW solutions orphaned for every PoW solution confirmed. In the remainder of this section, we will derive a bound on ρ. Following Rizun's orphan rate analysis for Bitcoin <cit.>, we model block propagation delays as τ(τ_0, z, Q) = τ_0 + Q/z, where τ_0 represents network latency (seconds), z represents bandwidth (bytes per second), and Q represents block size (bytes). In order to adapt this expression for Tailstorm, we assume that the transactions included in a summary block are spread evenly across the k subblocks, so the subblock size is Q/k. The expected summary block interval is T and the subblock mining rate is λ = k/T.
Theorem (orphan rate). For network parameters τ_0, z, and Q, summary block interval T, and subblock count k, Tailstorm's expected orphan rate ρ is bounded from above by ρ(τ_0, z, Q, T, k) ≤ τ_0/T + Q/(zkT).
Table <ref> shows how the upper bound varies depending on the expected summary block interval T and the subblock count k. The orphan rate for Bitcoin <cit.> appears in the column for k=1. In this analysis we assume that Q = 32 MB, z = 100 MB/s, and τ_0 = 5 s. Generally, we can see that the orphan rate decreases with both the expected summary block interval T and k. This suggests that, when summary block intervals are constant across protocols, Tailstorm with high k increases fairness over Bitcoin and Tailstorm with lower k.
§.§ Measuring Fairness in Simulation
Section <ref> provides an analytical bound for Tailstorm's orphan rate.
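For orientation, the bound can be evaluated directly. The snippet below restates the theorem's formula for the parameters assumed above (Q = 32 MB, z = 100 MB/s, τ_0 = 5 s); it is a plain restatement, not an independent derivation, and the exact entries of Table <ref> may be computed differently.

```python
def orphan_rate_bound(tau0, z, Q, T, k):
    """Upper bound on Tailstorm's orphan rate: propagation delay of a
    subblock of size Q/k divided by the summary block interval T.
    tau0: latency [s], z: bandwidth [bytes/s], Q: summary size [bytes],
    T: expected summary block interval [s], k: subblocks per summary."""
    return tau0 / T + Q / (z * k * T)

MB = 1_000_000
for T in (150, 300, 600):
    for k in (1, 8, 64):          # k = 1 corresponds to Bitcoin
        rho = orphan_rate_bound(5.0, 100 * MB, 32 * MB, T, k)
        print(f"T={T:4d}s  k={k:2d}  rho <= {rho:.4f}")
```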
This bound is only tight if all subblocks originate from different miners, which is unlikely to happen in practice. To obtain a more realistic analysis, albeit in a more specific setting, we now consider strong miners who are likely to produce multiple blocks within a single propagation delay. Since the mathematical analysis for this scenario is complex, we rely on simulation instead. Additionally, instead of using the orphan rate metric, itself only a proxy for fairness, we directly measure a miner's deviation from their fair share of rewards. We implement the virtual environment as described in Section <ref>, Algorithm <ref> and measure how the choice of protocol—Bitcoin and Tailstorm under various parameterizations—affects rewards. The network is configured with one weak and one strong miner operating with 1 % and 99 % of the total hash rate, respectively. All messages are delayed for 6 seconds. We simulate one million PoW solutions per configuration. We split the simulation into independent observations, each representing one day of protocol execution. Specifically, with T denoting the expected summary block interval, we terminate individual executions when ⌊ 24 × 3600 × k / T ⌋ PoWs are mined. For each execution, we identify the longest chain of blocks and calculate the accumulated rewards for both miners. Since each observation covers one day of mining, the variance represents daily volatility of rewards. A miner's fair share of reward equals its relative hash rate. To measure fairness, we compare actual rewards to the fair share. Figure <ref> shows the weak miner's relative reward as a percentage of its fair share. The leftmost facet shows the Bitcoin protocol for expected block intervals ranging from roughly 9 seconds to 600 seconds. The remaining three facets show Tailstorm with summary block intervals T of 600, 300, and 150 seconds for k ranging from 2 to 64. Like colors represent such configurations where subblocks are generated at the same frequency λ = k/T. We omit the configurations where the expected subblock interval T/k is lower than the network propagation delay of 6 seconds. The Figure illustrates the tradeoff between reward fairness and volatility in Bitcoin and Tailstorm. In the leftmost facet, it shows how increasing the expected block interval improves fairness but also increases volatility in Bitcoin. In contrast, the right facets show how Tailstorm can be configured to independently control both reward fairness (by adjusting the summary block interval T) and volatility (by adjusting k). In particular, choosing higher k leads to more frequent rewards and lower volatility, while longer summary block intervals tend to increase fairness. The latter observation aligns with the results in Table <ref>, which also shows that fairness tends to increase with the summary block interval. § ATTACK EVALUATION We now extend the virtual environment defined by Algorithm <ref> in Section <ref> to model adversarial behavior. We then specify and evaluate several attack strategies against Tailstorm. To account for the possibility of collusion among multiple dishonest parties, we pessimistically assume that they indeed collude. Therefore, we model a single but strong node deviating from the protocol. Throughout the remaining sections we refer to the single dishonest node as the attacker and to the honest nodes as defenders. The attacker observes the virtual environment's state and reacts to any updates accordingly. 
We describe attacks as policies, which determine the action to take based on the observed state. §.§ Network We deploy a virtual environment comprising n nodes. The first node represents the attacker and is assigned fraction α of the total hash rate. The remaining hash rate is distributed evenly among the n - 1 defenders. Henceforth, we refer to α as the attacker's strength. In Bitcoin, blocks form a linear chain. If there are two blocks with the same height, only one will be included in the blockchain long term. Since honest Bitcoin nodes prefer and mine on the block first received, nodes with better connectivity obtain an advantage. The same block race situation can arise in Tailstorm. Here, there can be only one summary at each height and the nodes' preference of summaries (Alg. <ref>) likewise depends on the order they are received. Attackers able to send blocks more quickly might be able to manipulate the defenders' preference to suit their needs. We follow Eyal and Sirer <cit.> and Sapirshtein et al. <cit.> who model the block race advantage with a single network parameter γ, which defines the proportion of defenders, weighted by hash rate, that will opt for the attacker's block in the case of a block race. We pessimistically assume that the attacker receives all blocks without delay. Defender nodes can send blocks to one another with a negligible delay of ε. When the attacker sends a block to a defender, we introduce a random delay, which we draw independently from a uniform distribution on the interval [0, n-2/n-1ε/γ]. theoremthmGammaAligns Let γ∈ [0, 1] be given. For any choice of ε > 0 and n > 1/1-γ + 1, γ is equal to the attacker's block race advantage. Constraint n > 1/1-γ + 1, from Theorem <ref>, is fundamental; if n is smaller, then it is not possible for an honest miner to communicate a block to another honest miner faster than the attacker can. The constraint implies that γ < n-2/n-1. Hence, we can model γ = 1 only in the limit that the number of miners approaches infinity, which is not possible in practice. Other authors <cit.> have directly considered γ = 1, but we feel that the restriction to γ < 1 is natural. Given that the miner of the defending block in a block race will never adopt the attacker's competing block, γ = 1 implies that individual miners have zero hash rate, which is not realistic. §.§ Observation Space The virtual environment maintains complex state. It tracks the full history of blocks, delayed messages, and the partial views of all nodes. Some of this information is not available to attackers in practice, other parts are not relevant. To enable concise policies, we follow Sapirshtein et al. <cit.> and restrict what the attacker can see. Figure <ref> illustrates our notation. At any given time, let b_a = (1) denote the attackers preferred summary block, and let b_d denote the best block (according to in Alg. <ref>) among the preferred blocks of the defenders: (2), …, (n). We use b_c to refer to the best summary block among the common ancestors of b_a and b_d. In other words, b_c is the latest block that all nodes agree upon. For any summary block b, we use R(b) to identify the subblocks which are both a descendant of b and have the same height as b. Note that Tailstorm's chain structure (Alg. <ref>) implies that R(b) and b together span a tree. Section <ref> defines the depth of a subblock to be its depth within this tree. We define R'(b) as the largest subset of R(b) such that all subblocks are mined by the attacker and R'(b) ∪{b} is still connected. 
In the following, we use |S| to refer to the cardinality of a set S and [S] to refer to the maximum depth among all blocks in S. The attacker observes the following quantities:
* h_a, the attacker's height advantage: height(b_a) - height(b_c),
* h_d, the defenders' height advantage: height(b_d) - height(b_c),
* s_a, the attacker's inclusive subblock count: |R(b_a)|,
* s_a', the attacker's exclusive subblock count: |R'(b_a)|,
* s_d, the defenders' subblock count: |R(b_d)|,
* d_a, the attacker's inclusive depth: [R(b_a)],
* d_a', the attacker's exclusive depth: [R'(b_a)],
* d_d, the defenders' depth: [R(b_d)].
Together, these form the observation tuple (h_a, h_d, s_a, s_a', s_d, d_a, d_a', d_d). The example shown in Figure <ref> implies the observation (1, 1, 2, 2, 2, 2, 2, 2).
§.§ Action Space
The virtual environment (Alg. <ref>) calls a node's function whenever this node learns about a new block: after it was mined locally with PoW, appended locally without PoW, or received from the network. We implement the attacker's dishonest function in Algorithm <ref>. We allow for generic attacks by letting the attacker choose from a set of potentially dishonest actions. We assume that there is a policy 𝒫 which maps observations (see Sect. <ref>) to action tuples of the form (withhold, extend). The withhold action type controls the preferred tip of the chain and the withholding of blocks. We hereby follow closely the actions used by Sapirshtein et al. <cit.> for selfish mining against Bitcoin <cit.>:
* Continue mining on b_a and withhold new blocks.
* Release just enough blocks to induce a block race between b_d and an attacker block.
* Release just enough blocks to make the defenders discard their block b_d.
* Abort the attack, prefer the defenders' block b_d, and discard b_a.
Recall from Section <ref> that “blocks” refers to vertices in the block DAG. This makes the withhold action protocol-agnostic. For Tailstorm, blocks can be subblocks or summaries. Note that while the first and the last of these actions are always feasible, the second and the third are not. If the attacker's chain is too short, the defenders will not consider adopting it. In such cases, the second and third actions, as implemented in Algorithm <ref>, fall back to releasing all withheld blocks. The second action type, extend, is specific to Tailstorm. It controls how the attacker assembles new summary blocks, more specifically, which subblocks it considers for selection with Algorithm <ref>:
* inclusive: use all available subblocks to create new summaries.
* exclusive: use only subblocks which were mined by the attacker.
Note that the attacker never delays the next summary block longer than necessary. As soon as there are enough subblocks for an inclusive or exclusive summary (according to the chosen action), this summary will be created.
§.§ Reference Policies
We evaluate the following policies. In each, the attacker assembles summaries.
* Honest. Emulate Algorithm <ref>: adopt the longest chain and release all blocks: if h_d > h_a. Otherwise .
* Get Ahead. Withhold own subblocks, release own summaries: if h_d > h_a. if h_d < h_a. Otherwise .
* Minor Delay. Withhold own subblocks, override defender summaries as they come out: if h_d > h_a. if h_d = h_c. Otherwise .
To evaluate the policies above, we reuse the simulator from Section <ref>, configuring the network as described in Sections <ref> to <ref>. We set the block race advantage γ∈{5, 50, 95} % and allow the attacker strength α to range from 20 % to 45 %. We run one hundred simulations per configuration, and stop each as soon as the DAG contains 2048 blocks. At the end of each simulation we select the longest chain of blocks and calculate its rewards.
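Before turning to the results, the sketch below illustrates how such reference policies can be written down as pure functions from the observation tuple to a (withhold, extend) pair. The action names (wait, match, override, adopt; inclusive, exclusive) and the exact branching of the example policy are our own reconstruction in the spirit of the selfish-mining literature and the Get Ahead description above, not a verbatim transcription of our specification.

```python
from collections import namedtuple

# Observation fields as defined above; s_a_x and d_a_x stand for s_a' and d_a'.
Obs = namedtuple("Obs", "h_a h_d s_a s_a_x s_d d_a d_a_x d_d")

# Assumed action vocabulary: the withhold component controls block release,
# the extend component controls which subblocks enter the next summary.
WITHHOLD = ("wait", "match", "override", "adopt")
EXTEND = ("inclusive", "exclusive")

def get_ahead_like_policy(o: Obs):
    """Hypothetical policy: withhold own subblocks, release once ahead,
    give up once the defenders are ahead."""
    if o.h_d > o.h_a:
        withhold = "adopt"        # defenders are ahead: abandon the attack
    elif o.h_d < o.h_a:
        withhold = "override"     # attacker is ahead: release and displace b_d
    else:
        withhold = "wait"         # tied: keep mining in private
    return withhold, "inclusive"

print(get_ahead_like_policy(Obs(1, 1, 2, 2, 2, 2, 2, 2)))  # ('wait', 'inclusive')
```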
We configure Tailstorm's discount reward scheme, as defined in Section <ref>, with c = 1 such that at most one unit of reward is minted per unit of chain progress. Some policies cause more orphans than others. Assuming effective difficulty adjustment according to Section <ref>, the amount of simulated time depends on the attacker's behaviour. The attacker's mining cost is proportional to the time spent on the attack. To facilitate comparison across policies, we normalize rewards with respect to simulated time. Formally, we define the normalized reward as the attacker's reward up to block b divided by (b). Note that relative reward, a metric commonly used for Bitcoin <cit.>, is not sufficient to account for Tailstorm's reward discounting. Figure <ref> reports the average normalized reward (across 100 simulations) on the y-axis for the varying α on the x-axis and with γ varying by facet. The green curves represent the different reference policies against Tailstorm. We also evaluate the reference policies against the protocol as defined in Section <ref> (orange curves) and against ℬ_k as defined in Appendix <ref> (red curves). In addition, we evaluate the SM1 policy against Bitcoin (blue curve) as described by Sapirshtein et al. <cit.> and in Appendix <ref>. For orientation we include their upper bound α / ( 1 - α) as a gray dotted curve and add a solid gray line for α, the expected reward of honest behaviour. The figure supports multiple conclusions. First, the curves for the Honest policy coinciding with the line for α indicates that the policy indeed replicates honest behavior in all evaluated protocols. Second, comparing the measurements for Tailstorm with the ones for shows clearly that the discounting of rewards reduces efficacy of all dishonest reference policies. Third, Minor Delay is generally the most effective reference policy. Forth, Minor Delay produces less reward in Tailstorm than SM1 does in Bitcoin. Fifth, for low γ, Minor Delay is more profitable against ℬ_k than SM1 is against Bitcoin. Note however, that Minor Delay might not be the best strategy against Tailstorm, just like SM1 is not always optimal for Bitcoin <cit.>. This motivates to search for optimal policies. § ATTACK SEARCH Previous research has identified optimal attacks against Bitcoin and Ethereum through the use of Markov Decision Processes (MDP) and exhaustive search <cit.>. However, for more complex protocols, the state space of MDPs can become prohibitively large, and exhaustive search becomes impractical <cit.>. As a result, many authors in the past have resorted to evaluating hard-coded attacks instead of searching for optimal policies <cit.>. We adopt the approach introduced by Hou et al. <cit.> and employ reinforcement learning (RL) to search for attacks. The observation and action spaces described in Section <ref> readily define a partially observable MDP. To search for optimal attacks, we replace the hard-coded policy P with an RL agent that learns to select actions based on past observations. We utilize off-the-shelf RL tooling by first exposing our simulator and attack space as an OpenAI Gym <cit.>. Next, we deploy Proximal Policy Optimization (PPO) <cit.> as our agent. To support reproduction and future research, we release the Gym as Python package cpr:pypi and include all training scripts in our open source repository cpr:repo. The modularity of our simulator allows us to apply the same RL pipeline to different protocols. 
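A minimal training loop with these off-the-shelf components might look as follows. The environment id and its keyword arguments are placeholders rather than the actual interface of the released package; the PPO calls are the standard stable-baselines3 API.

```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical environment id and parameters; the released package
# registers its own ids and configuration options.
env = gym.make(
    "cpr-tailstorm-v0",   # placeholder name
    alpha=0.33,            # attacker's relative hash rate
    gamma=0.5,             # block race advantage
    k=8,                   # subblocks per summary
)

# Train a PPO agent on the attack space exposed by the environment.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)

# Evaluate the learned policy on fresh episodes.
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"normalized reward: {mean_reward:.3f} +/- {std_reward:.3f}")
```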
We insert the protocol specifications along with their associated observation and action spaces, while reusing the virtual environment and network assumptions across protocols. For Bitcoin, we adopt the attack space defined by Sapirshtein et al. <cit.>. Details about the attack spaces for Bitcoin and ℬ_k are provided in Appendix <ref>. We use Bitcoin as a reference protocol to measure the completeness of our approach. Previous research by Sapirshtein et al. <cit.> has yielded the optimal policy. As we will demonstrate, our search mechanism reproduces these results closely. We search for optimal policies for all possible combinations of attacker strength α∈{20, 25, 30, … 45} %, and block race advantage γ∈{ 5, 50, 95} %. We evaluate four protocols: Bitcoin, ℬ_k, Tailstorm, and with k=8 where applicable. In total, this amounts to 7 · 3 · 4 = 84 different learning problems. For each learning problem, we conduct multiple training runs with varying hyperparameters. For the objective function we choose normalized reward, as evaluated for the reference policies in Figure <ref>. From the training we obtain 1292 policies. We select the best trained policy for each problem by simulating 100 independent protocol executions for each and proceeding as follows. As in Section <ref>, we stop individual executions as soon as there are 2048 blocks. We then select the longest chain of blocks and observe the attacker's normalized reward. This reward is averaged over the 100 observations per policy to determine the policy with the highest average reward. For reference, we apply the same filtering method to select the best policy among the reference policies from Section <ref>. Figure <ref> shows the performance of the selected policies, with γ varying by facet, α on the x-axis, and average normalized reward on the y-axis. The solid colored curves represent the best trained policies, while the dashed colored curves represent the best reference policies. As in Figure <ref>, we include gray reference curves for Bitcoin's known upper bound α / (1 - α) (dotted) and fair share α (solid). Note that for γ = 50 %, Nakamato and ℬ_k visually overlap. In comparing the performance of different protocols, we say that protocol A performs better (or worse) than B in the event that A returns fewer (alternatively more) rewards to the attacker than does B. Figure <ref> supports the following conclusions. First, ℬ_k performs worse than Bitcoin for low γ, but better than Bitcoin for high γ. The break-even point seems to be at γ = 50 %. Second, Tailstorm consistently outperforms Bitcoin and ℬ_k, even with constant rewards (). Moreover, discounting of rewards consistently makes Tailstorm less susceptible to selfish mining and similar incentive layer attacks. Third, the learned policies consistently match or outperform the hard-coded reference policies. This finding is important because anything else would indicate deficiencies in the training. Forth, the learned policies are close to optimal for Bitcoin. This can be seen in two ways. Firstly, Sapirshtein et al. showed that the SM1 policy is close to optimal for γ = 0 <cit.>. Our figure for γ = 5% shows that the learned policy matches SM1 (the dashed blue curve with ×-markers). Secondly, the authors showed that the optimal policy reaches the upper bound α / (1 - α) for γ = 1 <cit.>. Our figure for γ = 95% shows that the learned policy matches this bound as well. Until now, we have evaluated the efficacy of policies in absolute terms. 
We now take a different perspective and ask: how strong must an attacker be for dishonest behaviour to pay off? Recall that honest behaviour implies an expected normalized reward of α (solid gray line in Figures <ref> and <ref>). To answer the question, we calculate break-even points, which represent the minimum relative hash rate α, such that following the policy produces more than α of normalized reward. We start from the optimal policies presented in Figure <ref> and select only those policies that feature dishonest behavior. Recall that each of the remaining policies was trained on a fixed relative hash rate α. For each protocol and choice of γ, we select the dishonest policy trained for the lowest α. We then evaluate this policy against alternative α values ranging from 5 % to 50 %. Our simulator provides noisy observations and the reward distribution varies stochastically. Hence, we use Bayesian optimization to minimize the difference between the observed reward and α, i. e., to get as close to the break-even point as possible. Table <ref> reports the results. Observe that for the trained policies, the break-even points are consistently either lower or close to the break-even points of the hard-coded reference policies. ℬ_k, , and Tailstorm are also less sensitive to changes in γ than Bitcoin. Tailstorm has the highest break-even points among all protocols, indicating that it is most resilient to incentive layer attacks. § TAILSTORM CRYPTOCURRENCY So far, we have focused on Tailstorm's consensus and incentive mechanisms. Throughout our analysis, we have assumed a cryptocurrency on the application layer that facilitates the creation and distribution of rewards to participants of the consensus layer. In this section, we describe and discuss the Tailstorm cryptocurrency and our prototype implementation. §.§ Transaction Handling Tailstorm implements the unspent transaction output (UTXO) model as it is used in Bitcoin <cit.>: Each UTXO represents a designated amount of cryptocurrency. Transactions consume and create UTXOs, but they never create more cryptocurrency than they consume. Consumed UTXOs cannot be consumed again. Ownership and transfer of value follows from restricting the consummation of UTXOs to the holders of specific cryptographic keys. Tailstorm uses the same public key cryptography as Bitcoin, and it also supports Bitcoin's UTXO scripting facility. The dissemination and processing of transactions follows Bitcoin's approach as well. However, as we describe next, Tailstorm deviates from Bitcoin in how transactions are recorded in the blockchain. Taking inspiration from Storm <cit.>, each subblock contains a list of transactions. Let a be a transaction listed at position a in subblock a summarized in summary block a and let b, b, b, and b be defined similarly. Assume a and b are incompatible, e. g., because they spend the same UTXO twice. If a and b are listed in the same subblock, i. e., a = b and a ≠ b, then this subblock is marked invalid. We proceed similarly if a and b are summarized by different summaries: due to Tailstorm's chain structure (see Alg. <ref>) a and b must differ in height and we mark the higher one as invalid. Honest nodes ignore invalid blocks and all their descendants at the consensus layer. Hence, under the above circumstances, incompatible transactions are not persisted in the blockchain, as is the case in Bitcoin. However, special attention is required for incompatible transactions in different subblocks summarized by the same summary block, i. 
e., a ≠ b and a = b. A critical assumption of our analyses as well as parallel PoW in general <cit.> is that subblocks confirming the same summary are compatible and thus can be mined independently. We thus cannot mark the summary invalid in this case and the incompatible transactions are persisted in the blockchain. To resolve this conflict, Tailstorm executes only one of the transactions at the application layer and ignores the other according the following rules. Let a denote the distance of a to a in the block DAG, and let b be defined similarly. If a < b, then a is ignored. Ties are broken by the PoW hash function, i. e., if a = b and ( a) > ( b), then a is ignored. If a is ignored then all dependent transactions (i.e. those spending the UTXOs produced by a) are ignored as well. Note that even though incompatible transactions are persisted in the blockchain, the double-spending semantics are equivalent to Bitcoin: offending transactions off the longest branch are ignored. §.§ Fast Confirmations To reap the full consistency guarantees of parallel PoW <cit.>, prudent cryptocurrency users should wait for one summary block confirmation before accepting their transactions as final. For example, if their transaction is included in subblock a and first summarized in a, then they should wait for another valid summary b with ( b) = ( a) + 1. Assuming a 10 minute summary block interval and large k, the full confirmation will likely occur in 10 to 20 minutes, depending on whether the transaction was included early in the subblock tree or later. If time is short and the transacted value is low, e. g. if the user is selling a cup of coffee to go, they can consider waiting for a number of subblock confirmations instead. For example, consider Tailstorm with k = 60 and a 10 minute summary block interval. Subblocks will be mined every 10 seconds in expectation. The heuristics are similar to a fast version of Bitcoin: If the seller waits for 6 subblock confirmations, i. e., for a subblock b with ( b) = ( a) + 6, the settlement will take about one minute. Invalidation of the payment, e. g. due to a double spend, implies a fork of the subblock tree of length 6 or more. Tailstorm discounts mining rewards according to the depth of the subblock tree; whereas the tree could have achieved depth 60, it now can achieve depth at most 54. One tenth of the minted rewards is lost, and the cost of the coffee is dwarfed. §.§ Tailstorm Prototype We have implemented Tailstorm and make the code available online clientImpl. We started from a fork Bitcoin Unlimited's implementation of Bitcoin Cash. Our fork implements the Tailstorm consensus layer and incentive mechanism as specified in Section <ref>. The prototype uses k=3, but is easily configurable by changing a compile-time flag. The DAA is calibrated for a target summary block interval of ten minutes. Subblocks are expected to arrive every 200 seconds. To minimize propagation delays, we have implemented several network compression mechanisms. First, we adapt the Graphene protocol <cit.> to avoid redundant transmission of transactions. Second, we adapt the Compact Block protocol <cit.> to reduce the size of summary blocks. Third, the transaction list is encoded bit-efficiently in each subblock. We leave the implementation of Tailstorm's application layer, i. e. the transaction handling as described in Section <ref>, for future work. Finally, we note that our implementation has minimal testing and, being a prototype, is not ready for production use. 
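The back-of-the-envelope argument from the fast-confirmation subsection can be made explicit: the fraction of minted reward forfeited by a fork of the subblock tree follows directly from the discount rule. The helper below assumes the simplified setting of the text, a single fork shortening an otherwise linear tree.

```python
def forfeited_reward_fraction(k, fork_len, c=1.0):
    """Fraction of a summary's minted reward forfeited when the subblock
    tree forks for `fork_len` subblocks instead of staying linear.
    A linear tree reaches depth k; the fork caps it at k - fork_len."""
    full = k * (c / k) * k                    # k subblocks, each paid c/k * depth k
    forked = k * (c / k) * (k - fork_len)     # depth reduced by the fork
    return 1 - forked / full

# k = 60 with six subblock confirmations, as in the coffee example above:
print(forfeited_reward_fraction(k=60, fork_len=6))   # 0.1, i.e. one tenth of the rewards
```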
§ DISCUSSION Tailstorm inherits the consistency guarantees of parallel PoW consensus <cit.>. To increase fairness, we discount rewards based on the tree structure of subblocks which itself is inspired from Bobtail <cit.>. To obtain fast confirmations, we write transactions into subblocks like Storm <cit.>. Alongside these foundational influences, we incorporate insights from a variety of related works, which we here can only list partially. For more details we refer to the surveys of Garay and Kiayias <cit.>, describing the different ways of defining the consensus problem, and Bano et al. <cit.>, focusing on the different solution approaches and protocols. Tailstorm improves fairness by avoiding orphans and discounting rewards. The former is not new <cit.>, however, the motivation differs. Sompolinsky and Zohar <cit.> propose to improve Bitcoin's consistency by referencing blocks that would otherwise be orphaned (called uncles) and then counting both blocks and uncles (GHOST-rule) whereas Bitcoin counts only the blocks (longest chain rule). However, the authors explicitly avoid rewarding uncles and hence do not improve fairness. Follow-up work <cit.> does not discuss rewards at all. In Ethereum PoW, a deployed variant of GHOST <cit.>, and Kaspa, a deployed variant of GHOSTDAG <cit.>, uncles receive partial rewards and the blocks that include uncles get additional rewards. The unfair dynamic of Bitcoin, where included blocks receive the full reward and orphans none, is largely preserved. We think that GHOST-like protocols can become more fair by applying Tailstorm's ideas: discounting of rewards, applied equally to all involved miners. Pass and Shi <cit.> explicitly set out to increase fairness in PoW. Their Fruitchains protocol uses two kinds of blocks like Tailstorm. Blocks record fruits and fruits record transactions. Unlike in Tailstorm, both fruits and blocks require PoW. Applying the 2-for-1 trick of Garay et al. <cit.>, both are mined with the same hash puzzle observing different bits of the output. Fruits have a lower difficulty and thus are created more frequently. The rewards are distributed evenly across recent fruits while blocks receive nothing. According to Zhang and Preneel <cit.>, Fruitchains are more vulnerable to incentive layer attacks than Bitcoin for block race advantage γ = 0, and thereby less fair. Furthermore, Fruitchains suffer from a tradeoff between fairness and transaction confirmation time <cit.>. A related line of research <cit.> extends the 2-for-1 into an n-for-1 trick and makes the miners work on multiple chains in parallel. The chains are then interleaved to form a single coherent transaction ledger. Unlike Fruitchains, however, Prism <cit.> and OHIE <cit.> focus on consistency, liveness, and throughput. We leave for future work to investigate whether and how the fairness of these protocols can be improved by applying Tailstorm's reward discounting. Kiffer and Rajaraman <cit.> propose to discount rewards inversely proportional to the overall hash rate participating in the system. This reduces centralization in their model, but we worry that motivating lower hash rates might harm the primary goals of consensus: consistency and liveness. Their mechanism poses an alternative to Bitcoin's halving mechanism, which statically reduces block rewards by 50 % roughly once every 4 years, to avoid long term inflation of the cryptocurrency. 
In contrast, Tailstorm's reward discounting is focused on the short term and punishes non-linearities within a single subblock tree. But our approach is not without limitations. In Section <ref>, we analyse the impact of message propagation delays on fairness. We assume that these delays are uniform, affecting all miners equally. However, in practice, propagation delays depend on the connectivity of individual miners. Some inequalities arise from economies of scale, larger miners can invest more in low latency connections, others from the underlying physics of information propagation: miners located in the same region or on the same continent implicitly form are cartel, whereas joining the network from a distant location puts them at a disadvantage. We leave for future work to analyze how Tailstorm is affected by more realistic network topologies <cit.>. In Section <ref>, we perform a search for optimal attack strategies. Our search algorithm is based on reinforcement learning (RL), which means it is not exhaustive. It is possible that the RL agent did not discover the optimal strategy against certain protocols. This challenges our conclusion that Tailstorm is less susceptible to incentive layer attacks than the other protocols, Bitcoin and ℬ_k. To encourage further exploration we release the RL Gym environment on PyPI cpr:pypi, allowing others to discover more effective attacks. Assuming the absence of better strategies, our results can be confirmed by encoding our attack space as a Markov Decision Process (MDP) and employing exhaustive search techniques, as has been done for Bitcoin <cit.>, Ethereum <cit.> and other longest chain protocols <cit.>. However, to the best of our knowledge, such techniques have not yet been successfully applied to DAG-based protocols like Tailstorm. Another limitation is that our action space might not cover all possible attacks. We closely follow the modeling of the selfish mining attack space against Bitcoin <cit.>, where we observe consent that the four actions , , , and are indeed complete. We are convinced that adding two actions for summary formation, and , is enough to represent the most profitable attacks. However, our conclusions can be challenged by presenting a more effective incentive layer attack against Tailstorm, which cannot be expressed in our attack space. We encourage this endeavour by making all analytical code available online cpr:repo. After modifying the attack space, new strategies can be drafted, evaluated, and optimized, while the virtual environment, protocol specifications, and RL attack search can be reused without changes. We demonstrate in Section <ref> that ignoring subblocks is no major concern in the long term, i. e., after the mining difficulty was adjusted to the attack. But, as we point out in Appendix <ref>, there is a small incentive to do so in the short term (before difficulty adjustment). This stands in contrast to Bitcoin, where selfish mining implies negative outcome in the short term and positive outcome in the long term. We feel this tradeoff is favorable to Tailstorm. On a separate note, Carlsten et al. <cit.> demonstrate that selfish mining becomes more profitable when considering transaction fees in addition to mining rewards. They present a strategy targeting Bitcoin which leverages transaction fees to outperform honest behavior for any α > 0 and γ < 1. Similar attacks are likely feasible against all PoW cryptocurrencies, including Tailstorm, however they also exceed the scope of this paper. 
Lastly, Arnosti and Weinberg <cit.> show that even small cost imbalances or economies of scale lead to highly centralized PoW mining ecosystems. While, Tailstorm reduces the imbalances compared to other PoW protocols we studied, it cannot fully mitigate centralization. Our notion of fairness revolves around rewarding miners in proportion to their hash rate. This definition is not new <cit.>. However, from an economic standpoint, to prevent centralization, miners should be rewarded in proportion to their operational mining costs <cit.>. Unfortunately, achieving this goal seems to be impossible. Mining costs primarily depend on the price and the availability of energy and specialized mining hardware. Purchasing energy in large quantities tends to be more cost-effective, and strong miners may even consider operating their own power plants. Scaling up operations also enables miners to develop and deploy more efficient mining hardware, and then selling it to weak miners after it has become outdated <cit.>. Given that these unfair scaling effects do not only affect Tailstorm, but PoW cryptocurrencies in general, one might be lead to consider alternative approaches like proof-of-stake <cit.>. But even then, it remains questionable whether any permissionless system can fully avoid centralization <cit.>. § CONCLUSION Tailstorm integrates parallel PoW <cit.>, partial transaction confirmation <cit.>, and a novel incentive mechanism—reward discounting—into a PoW cryptocurrency that provides fast and secure confirmations for its users and fair rewards for its miners. We thereby solve a long standing issue of longest chain protocols, where the operator has to choose between either a short block interval with fast but less reliable confirmations and unfair rewards, or a long block interval with slow confirmations and more fair but infrequent rewards. Our prototype implementation demonstrates that Tailstorm can serve as a replacement for Bitcoin not only theoretically but also in practice. § CONTRIBUTION STATEMENT George originally developed the idea of reward discounting as a way to combat withholding attacks in Bobtail. George and Gregory refined that idea into an early version of Tailstorm suitable for a hard fork of Bitcoin Cash. They also contributed to the prototype implementation. Patrik contributed the protocol specification, simulation-based analyses, and reinforcement learning interface. George contributed the analytical orphan rate analysis. George and Patrik contributed the hard-coded reference policies. Ben and Patrik conducted the attack search with reinforcement learning. Guided by earlier stages of the analyses, Patrik provided improvements to Tailstorm consensus and its transaction handling. Patrik and George wrote this paper. § ACKNOWLEDGEMENTS We wish to thank Bitcoin Unlimited for their financial and technical support as well as Michael Fröwis for his review of this work and for the helpful suggestions he provided. § REFERENCE PROTOCOLS To enable comparison of Tailstorm to Bitcoin <cit.> and ℬ_k <cit.>, we specify these protocols using the system model described in Section <ref>. §.§ Bitcoin We specify the Bitcoin protocol in Algorithm <ref>. Bitcoin's chain structure is simple: each block has a PoW and exactly one parent; the property tracks the number of ancestors. Protocol-compliant nodes try to extend the longest chain of blocks. Tie-breaking between multiple blocks of the same height happens in favour of the block first seen. Freshly mined blocks are shared immediately. 
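In the spirit of the functional specification used throughout, Bitcoin's honest behaviour can be paraphrased in a few lines. The type and function names below are ours, and the return convention merely mirrors the (preferred tip, blocks to share, blocks to append) interface described in Section <ref>.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BtcBlock:
    height: int                         # number of ancestors
    parent: Optional["BtcBlock"] = None

def update(preferred: BtcBlock, new: BtcBlock):
    """Honest Bitcoin handler: prefer the longest chain, keep the first-seen
    block on ties, and relay every newly visible block."""
    tip = new if new.height > preferred.height else preferred
    return tip, [new], []               # (new tip, share, append without PoW)

def mining(preferred: BtcBlock) -> dict:
    """Block template extending the preferred tip with exactly one parent."""
    return {"parent": preferred, "height": preferred.height + 1}

# First-seen tie-breaking: a block of equal height does not displace the tip.
g = BtcBlock(height=0)
a, b = BtcBlock(height=1, parent=g), BtcBlock(height=1, parent=g)
print(update(a, b)[0] is a)   # True
```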
For dishonest behaviour we closely model the attack space used by Sapirshtein et al. <cit.>. Recall Tailstorm's attack space from Section <ref>. For Bitcoin, we restrict the observations to (h_a, h_d) (comp. Sec. <ref>) and use only the first half of the action tuple, withhold, with the actions , , , and (comp. Sec. <ref>). Algorithm <ref> implements the dishonest node (comp. Alg. <ref>). We define a single reference policy for Bitcoin, SM1. It was originally proposed by Eyal and Sirer <cit.>, but we here use the more formal definition by Sapirshtein et al. <cit.>. SM1 Withhold blocks as long as the honest chain is shorter: if h_d > h_a. if h_d = h_a = 1. if h_ = h_a - 1 and h_≥ 1. Otherwise . §.§ ℬ_k We specify the ℬ_k protocol in Algorithm <ref>. ℬ_k is similar to Tailstorm in several ways. Both protocols only require proofs-of-work for subblocks, and constructing a new summary necessitates k subblocks that confirm the previous summary. Compliant nodes extend the longest chain of summaries and break ties by the number of confirming subblocks. However, there are two significant differences between the two protocols. First, ℬ_k subblocks do not form a tree but instead directly reference the previous summary. This makes it pointless to track the subblock depth, and the discount reward scheme cannot be applied. Second, ℬ_k employs a leader election mechanism to restrict who can append summaries based on the hash values of the subblocks. Specifically, lines <ref>, <ref>, and <ref> in Algorithm <ref> ensure that only the miner of the smallest subblock can add the next summary. We have made minor changes to the terminology used by Keller and Böhme <cit.> to align with Tailstorm: what they refer to as a block, we call a summary, and what they refer to as a vote, we call a subblock. Tailstorm's attack space, as defined in Section <ref>, translates directly to ℬ_k. The only modification is that we omit the irrelevant variables d_a, d_a', and d_d for the subblock tree depth from the observation space. The reference policies presented in Section <ref> can be applied without modification. § IGNORING SUBBLOCKS TO MAXIMIZE REWARDS IN THE SHORT TERM The action of our attack space defined in Section <ref> enables attackers to ignore foreign subblocks that lie off the main branch in the subblock tree. Including such subblocks in the next summary would reduce the depth of the tree and, due to Tailstorm's discounting mechanism, also the individual subblock rewards. In Section <ref>, we search for attack strategies maximizing the normalized reward metric, which captures rewards under the hypothesis that the difficulty was already adjusted to the increased number of orphans caused by the attack. The results in Section <ref> demonstrate that ignoring subblocks is no major concern in the long term, that is, after the difficulty adjustment algorithm (DAA) has stabilized. In this section, we investigate the profitability of ignoring subblocks in the short term, i. e., before the DAA modifies the subblock mining rate λ. Let d̅_pow = 1/λ denote the expected time required to mine a subblock. Suppose that k subblocks are mined, all confirming the same parent summary block P. All are connected in a chain except one, which we label o. Miners have two options. They can either assemble a new summary block Q from the k subblocks now, allowing them to start mining subblocks with parent Q, or they can continue to mine with parent P, hoping to achieve a fully linear chain of subblocks with higher rewards. 
We observe both options between time t_0, when parent summary P has been proposed and no confirming subblocks have been mined, and time t_{k+1}, when the next k+1 subblocks are available. Time t_k is the arrival time of the kth subblock, i. e., the time when assembling the new summary Q first becomes possible. Following the definition of our mining process in Section <ref>, the expected values for the times {t_i}_i ≥ 0 are {i ·d̅_pow}. Taking the first option, assembling Q at time t_k, summary Q is of depth k - 1 and, according to the discount rule in Formula <ref>, assigns c / k · (k-1) units of reward to each of the k subblocks. The scaling factor c has to be chosen by the operator; to simplify our argument, we assume c = 1. In total, summary Q allocates k - 1 units of reward. The miners receive (k - 1) / ( k ·d̅_pow ) units of reward per unit of time. The second option, continuing to mine with parent P, can result in a linear chain of k subblocks at time t_{k+1}. This would allow for a summary that allocates k units of reward. The miners would receive k / ( (k+1) ·d̅_pow ) units of reward per unit of time. Option two, delaying the summary and ignoring one subblock off the main branch, gives 1 / ( (k^2 + k) ·d̅_pow ) more units of reward per unit of time than option one, including all subblocks and summarizing early. So it seems there is a small incentive to act selfishly in the short term. This stands in contrast to Bitcoin, where selfish mining implies strictly negative expected utility in the short term and positive utility in the long term. We note, though, that this result is preliminary. We leave for future work to remove the assumption that subblock k+1 is indeed of depth k, and to address the remaining cases with more than one subblock off the main chain. Additionally, it would be interesting to conduct the analysis from the viewpoint of an individual miner, instead of the collective of all miners like we do above. Lastly, we think that there is enough social pressure to follow the rules. Mining is typically a public endeavour: miners tend to identify themselves, particularly those operating in pools. When a pool operator chooses to ignore subblocks, this will be apparent to all miners and they will join a different pool. Conversely, if the majority of miners agrees on maximizing short term reward, they might as well change the protocol arbitrarily, e. g., by simply scaling up the mining rewards.
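The comparison can be checked numerically. The snippet below evaluates the two reward rates stated above for several k, assuming c = 1 and measuring time in units of d̅_pow, and confirms that the gap from delaying the summary shrinks like 1/(k^2 + k).

```python
def reward_rates(k, d_pow=1.0, c=1.0):
    """Expected reward per unit time for the two options in the text:
    summarize now (one subblock off the main branch, tree depth k - 1)
    versus wait one more subblock for a fully linear tree of depth k."""
    summarize_now = (k - 1) * c / (k * d_pow)
    wait_for_linear = k * c / ((k + 1) * d_pow)
    return summarize_now, wait_for_linear

for k in (3, 8, 64):
    now, wait = reward_rates(k)
    print(f"k={k:2d}: now={now:.4f}  wait={wait:.4f}  gain={wait - now:.5f}"
          f"  (1/(k^2+k) = {1 / (k * k + k):.5f})")
```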
http://arxiv.org/abs/2306.05917v1
20230609141732
Impact of conditional modelling for universal autoregressive quantum states
[ "Massimo Bortone", "Yannic Rath", "George H. Booth" ]
quant-ph
[ "quant-ph", "cond-mat.str-el", "physics.comp-ph" ]
[email protected] [email protected] [email protected] Department of Physics, King’s College London, Strand, London WC2R 2LS, United Kingdom We present a generalized framework to adapt universal quantum state approximators, enabling them to satisfy rigorous normalization and autoregressive properties. We also introduce filters as analogues to convolutional layers in neural networks to incorporate translationally symmetrized correlations in arbitrary quantum states. By applying this framework to the Gaussian process state, we enforce autoregressive and/or filter properties, analyzing the impact of the resulting inductive biases on variational flexibility, symmetries, and conserved quantities. In doing so we bring together different autoregressive states under a unified framework for machine learning-inspired ansätze. Our results provide insights into how the autoregressive construction influences the ability of a variational model to describe correlations in spin and fermionic lattice models, as well as ab initio electronic structure problems where the choice of representation affects accuracy. We conclude that, while enabling efficient and direct sampling, thus avoiding autocorrelation and loss of ergodicity issues in Metropolis sampling, the autoregressive construction materially constrains the expressivity of the model in many systems. Impact of conditional modelling for universal autoregressive quantum states George H. Booth =========================================================================== § INTRODUCTION The quantum many-body problem is a keystone challenge in the description of quantum matter from nuclei to materials and many more fields besides. Its formal solution scales exponential with number of interacting particles, but recent developments in neural and tensor network representations have made significant advances in defining compact and expressive approximations to the many-body wave function. This has allowed for accurate solutions to many complex quantum systems in condensed matter physics <cit.>, quantum chemistry <cit.> and beyond <cit.>. Neural Quantum States (NQS) use neural networks as polynomially compact models of the wave function, and are variationally optimized via stochastic sampling of expectation values. Many NQS architectures have been investigated in recent years, starting with the Restricted Boltzmann Machine (RBM) <cit.>. However, it has been shown that a state parameterization inspired by kernel models rather than neural networks can also be used to achieve a similar level of accuracy and flexibility with a simpler functional form, derived straightforwardly from well-defined physical arguments. These `Gaussian process states' (GPS) <cit.>, as well as other related kernel models <cit.>, have been shown to efficiently and compactly represent a large class of low-energy quantum states to high-accuracy. Given the many alternative functional forms that the different machine learning-inspired architectures can imply, a significant advance from the initial demonstration of the NQS in quantum systems has been the refining of particular models based on enforcing desired properties. Motivated by the performance of deep convolutional neural networks (CNN) in the field of computer vision <cit.>, wave function models that incorporate many layers of translationally-invariant convolutional filters to efficiently learn local correlated features have been proposed and applied to find ground states of frustrated quantum spin systems <cit.>. 
The recent success of autoregressive (AR) generative models in machine learning (ML) <cit.> has also captured the attention of physicists interested in the quantum many-body problem, leading to the development of autoregressive quantum states (ARQS) that enforce a strictly normalized state from which configurations can be directly sampled without Metropolis Monte Carlo, autocorrelation times or loss of ergodicity. In this context, Sharir et al. <cit.> were the first to propose an adaption of PixelCNN <cit.> (an autoregressive masked convolutional neural network for image generation) to the quantum many-body problem and applied it to find the ground state of two-dimensional transverse-field Ising and antiferromagnetic Heisenberg systems. Other ML architectures such as recurrent neural networks (RNN) <cit.> and transformer architectures <cit.> have also been proposed as models for ARQS, yielding convincing results about their ability to represent ground states of lattice systems with different geometries and to compute accurate entanglement entropies in systems with topological order <cit.>. Hybrid models that combine the expressivity of autoregressive architectures from the deep learning literature with the physical inductive bias of tensor networks have also been proposed <cit.>. Going beyond quantum spin lattice systems, extensions of ARQS based on deep feed-forward neural networks have been applied to the ab initio electronic structure problem in quantum chemistry, demonstrating good accuracy up to 30 spin-orbitals <cit.>. At the core of any autoregressive model is the application of the product rule of probability to factorize a joint-probability distribution of N random variables into a causal product of probability distributions, one for each variable, conditioned on the realization of previous variables. This modelling approach can be extended to quantum states, yielding explicitly normalized autoregressive ansätze from which independent configurations can be sampled directly via a sequential process. This ability is of particular interest in the context of Monte Carlo, where the computation of expectation values via autocorrelated stochastic processes such as Metropolis sampling can lead to loss of ergodicity or long sampling times <cit.>. In this work, we describe how to adapt general quantum states into an autoregressive form, as well as introduce filters to improve the parameter scaling, enforce translational symmetry and locality of correlation features. We specifically apply these adaptations to the GPS to introduce autoregressive properties and filters in a simpler overall form to NQS. While this makes the efficient sampling of large models possible <cit.>, it is not clear in general how enforcing autoregressive properties affects the expressibility of the state. In particular, while the advantage of direct sampling of configurations from the ansatz has been well demonstrated (though its impact is system-dependent), it has been less clear how the different conditions required for it (such as the masking, the normalization, and the more limited choice of symmetrization) reduce the overall variational freedom afforded by the state. We directly compare the autoregressive state to its parent (unnormalized) GPS, discussing the advantages and otherwise in the choice of AR models in general, finding the normalization of the conditionals to be the dominant factor in the expressibility of these states for an unfrustrated 2D spin lattice. 
We consider the breadth of spin models, fermionic lattices and ab initio systems, considering further the impact of sign structure, choice of representation and numerical expediency. § A FRAMEWORK FOR COMPACT MANY-BODY WAVE FUNCTIONS §.§ Quantum states as product of correlation functions The many-body quantum state of a given system consisting of N modes, each represented by a local Fock space of D local states as x_i∈{0,…,D-1}, is fully described by a set of D^N amplitudes ψ_x_1,…,x_N and basis configurations |𝐱⟩, i.e. |ψ⟩ = ∑_𝐱ψ_x_1,…,x_N|𝐱⟩, where 𝐱=(x_1,…,x_N) is a string representing the local states of each mode in the configuration |𝐱⟩. This presents a challenging problem, since the number of amplitudes grows exponentially with system size (number of modes, sites in a lattice or number of orbitals in ab initio systems). The variational Monte Carlo (VMC) approach circumvents this by replacing the structureless tensor ψ_x_1,…,x_N in Eq. <ref> with a model that can be efficiently evaluated at any configuration, ψ_θ: {0,…,D-1}^N →ℂ, parameterized by a vector θ of size 𝒪(poly(N)). This compact representation of the wave function then enables the estimation of expectation values of operators Ô via stochastic evaluation, as ⟨Ô⟩ = ⟨ψ_θ|Ô|ψ_θ⟩ = ∑_𝐱|ψ_θ(𝐱)|^2∑_𝐱'O_𝐱𝐱'ψ_θ(𝐱')/ψ_θ(𝐱) = 𝐄_𝐱∼ p_θ[O_loc(𝐱)], where p_θ(𝐱)=|ψ_θ(𝐱)|^2 is the Born probability of 𝐱 and O_loc(𝐱)=∑_𝐱'O_𝐱𝐱'ψ_θ(𝐱')/ψ_θ(𝐱) is the local estimator for operator Ô. Since typical operators are k-local, the sum in the local estimator has a polynomial number of terms and can thus be efficiently computed. An approximation to the ground (or low-energy) state of a system with Hamiltonian operator Ĥ is then found by minimizing the expectation value of the variational energy E_θ = ⟨Ĥ⟩ via gradient descent methods, such as stochastic reconfiguration <cit.> or Adam <cit.>, which we describe in more detail in Appendix <ref>. The success (or otherwise) of VMC is thus related to the choice of three key components: 1) an expressive and compact ansatz; 2) a reliable sampling method and 3) a fast and robust optimization of the parameters. Focusing on the first point, it can be important to impose physically motivated constraints on the chosen state in order to obtain a compact representation of the wave function, since one is typically interested in wave functions enforcing particular physical properties (e.g. those with an area scaling law in the entanglement entropy, topological order or antisymmetry for fermionic models). However, in order to accurately model large extended systems, it is also critical that wave functions should be size extensive. This property requires that the error per particle incurred by the model in the asymptotic large system limit should remain constant with system size, ensuring that extensive thermodynamic quantities such as the energy density converge to a constant energy per unit volume. A general guiding principle for size extensive parameterized wave functions is that they can be written in a product separable form as ψ_PS(𝐱) = ∏_i=1^Nψ_i(𝐱), where ψ_i(𝐱) are individual parameterized functions (correlators) describing the i-th site and its correlations with other sites in its environment. How these correlators are modelled has important consequences for the ability of the ansatz to capture different physical aspects of a wave function, such as the length scale or rank of the correlations it can model. 
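To make the stochastic estimation of the expectation value above concrete, the following minimal Python sketch evaluates the local estimator O_loc for a toy transverse-field Ising chain and averages it over configurations drawn from the Born distribution. The Hamiltonian choice, the function names and the `log_psi` interface are illustrative assumptions and not part of the models introduced in this work.

```python
# Illustrative sketch only: stochastic estimation of <H> = E_{x ~ |psi|^2}[H_loc(x)]
# for a toy open transverse-field Ising chain, H = -J sum_i Z_i Z_{i+1} - h sum_i X_i.
# `log_psi` stands for any model of log amplitudes (e.g. a GPS); `samples` are assumed
# to be drawn from |psi|^2 by Metropolis or direct autoregressive sampling.
import numpy as np

def local_energy(x, log_psi, J=1.0, h=1.0):
    """H_loc(x) = sum_x' H_{x,x'} psi(x') / psi(x), with x a configuration of +/-1 spins."""
    e = -J * np.sum(x[:-1] * x[1:])           # diagonal (ZZ) part
    for i in range(len(x)):                   # off-diagonal part: single spin flips
        xp = x.copy()
        xp[i] = -xp[i]
        e += -h * np.exp(log_psi(xp) - log_psi(x))
    return e

def energy_estimate(samples, log_psi):
    """Monte Carlo average of the local estimator over configurations from |psi|^2."""
    return np.mean([local_energy(x, log_psi) for x in samples])
```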
Simple product states have entirely local functions for each correlator, which precludes the description of multi-site correlated physics. Recent ML-inspired variational ansätze such as the RBM or GPS have extensive correlators as long as the number of hidden units or support dimension respectively scales linearly with the system size, or if translational symmetries are taken into account. Furthermore, full coupling of each site to the latent spaces of the model (`hidden layers' or `support states' respectively) allows each site to interact with the whole rest of the system, not formally restricting the rank or range of correlations that each site can describe, as illustrated in Figure <ref>(a). This enables them to capture entanglement scaling beyond the area law and to obtain accurate results formally independent of the dimensionality of the system <cit.>. In contrast, Matrix Product States (MPS) introduce a specific one-dimensional ordering of the degrees of freedom in a system, as shown schematically in Figure <ref>(b), explicitly allowing for the efficient extraction of correlations decaying over finite length scales along the one-dimensional ordering <cit.>. Formalized through entanglement scaling arguments <cit.>, this makes MPS particularly suited for many one-dimensional systems. More general families of tensor decomposed or factorized forms also exist which can unify these ansätze under the same mathematical framework <cit.>. As will be seen in the next section, autoregressive quantum states also rely on the general product structure of Eq. <ref>, but introduce ordering constraints in the correlator functions, with important ramifications for the expressivity of the ansatz. §.§ Universal construction of autoregressive quantum states Modelling the probability distribution of N random variables 𝐱=(x_1,…,x_N) is a task common to many domains of science and engineering. Advances in generative machine learning have popularized efficient approaches to describe and subsequently sample from a joint probability distribution p(𝐱) via an autoregressive (AR) factorization <cit.>. This relies on the probability chain rule to decompose p(𝐱) into a product of conditional distributions p(x_i|𝐱_<i): p(𝐱) = ∏_i=1^N p(x_i|𝐱_<i) = p(x_1)p(x_2|x_1)⋯ p(x_N|x_N-1,…,x_2,x_1), where 𝐱_<i=(x_1,…,x_i-1) is a fixed, ordered sub-sequence of the random variables up to the i-th position. The same autoregressive factorization can be applied to the wave function, where importantly, this form also naturally has the desired product separable structure shown in Eq. <ref>. We can define an autoregressive wave function as ψ_θ(𝐱) = ∏_i=1^N ψ_i(x_i|𝐱_<i), where ψ_i(x_i|𝐱_<i) is the conditional wave function over the D local quantum states of the i-th site of the system, conditioned on 𝐱_<i, a configuration of the sub-Hilbert space of all the sites before the i-th one in a one-dimensional ordering of the system. Here θ denotes the (potentially complex) parameters of the autoregressive wave function, and the number of local quantum states will be D=2 for a spin-1/2 system, or D=4 for a fermionic system. We desire a square-normalized autoregressive state, ∑_𝐱|ψ_θ(𝐱)|^2=1, which can be achieved if all conditional wave functions are normalized for every possible sub-configuration 𝐱_<i, i.e. ∑_x'=0^D-1|ψ_i(x'|𝐱_<i)|^2 = 1. This condition can be explicitly imposed on the state by normalizing the conditionals at the point of evaluation <cit.>. 
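As a concrete illustration of how this point-of-evaluation normalization enables direct sampling, the short sketch below draws a configuration site by site from an arbitrary set of unnormalized conditional amplitudes, normalizing each conditional over the D local states as it is evaluated. The `cond_amplitudes` callable is a stand-in for whichever model is chosen for ψ̃_i and is purely an assumption for illustration.

```python
# Minimal sketch (not the paper's implementation) of ancestral sampling from an
# autoregressive state: since p(x) = prod_i |psi_i(x_i|x_<i)|^2 with normalized
# conditionals, configurations are drawn directly, without a Markov chain.
import numpy as np

def sample_ar_state(cond_amplitudes, n_sites, d_local, rng):
    """cond_amplitudes(i, prefix) -> unnormalized (possibly complex) amplitudes of length d_local."""
    x, log_psi = [], 0.0
    for i in range(n_sites):
        amps = np.asarray(cond_amplitudes(i, x))
        norm2 = np.sum(np.abs(amps) ** 2)
        probs = np.abs(amps) ** 2 / norm2            # normalized conditional Born probabilities
        xi = rng.choice(d_local, p=probs)
        x.append(xi)
        log_psi += np.log(amps[xi]) - 0.5 * np.log(norm2)
    return np.array(x), log_psi                      # x ~ |psi(x)|^2 and its normalized log amplitude
```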
This local normalization scheme is efficiently computable, since it only involves summing over local Fock states of the i-th site, with the global normalization for a given configuration being just the product of the local normalizations of the conditional states. We can therefore always consider unnormalized models for the conditional wave functions ψ̃_i(x_i|𝐱_<i), and apply the normalization as the model is evaluated for a given 𝐱. We note that this autoregressive property remains when the state is multiplied by any function e^i ϕ(x) that models a complex phase. Thus, we can summarize the construction of an autoregressive ansatz into the following general recipe: * define a one-dimensional ordering of the system, which is equivalent to picking unique site indices; * choose a model for each unnormalized conditional wave function ψ̃_i(x_i|𝐱_<i); * in the evaluation of the wave function for a given configuration, compute each conditional and their respective configuration-dependent normalization in the chosen ordering of the sites. This gives the general form for an autoregressive state as ψ_AR(𝐱) = ∏_i=1^Nψ̃_i(x_i|𝐱_<i)/√(∑_x'=0^D-1|ψ̃_i(x'|𝐱_<i)|^2) , which we depict schematically in Fig. <ref>(c), with the conditional wave functions for a few sites shown to be conditioned only on the occupations of the sites preceding them in the chosen site ordering. The property of the ansatz being systematically improvable to exactness holds as long as the models for each conditional are themselves universal approximators. Recently introduced autoregressive ansätze have parameterized these conditional wave functions with machine learning-inspired models such as deep convolutional neural networks <cit.>, recurrent neural networks <cit.>, transformers <cit.>, or hybrid models that incorporate tensor networks with deep learning architectures <cit.>. We will consider a simpler construction, motivated from Bayesian kernel models rather than neural networks, using the recently introduced Gaussian process state (GPS) for each conditional <cit.>. Similar to neural network parameterizations, this model is a systematically improvable universal approximator for these conditionals, written in a compact functional form as ψ_GPS(𝐱) = exp(∑_m=1^M∏_i=1^Nϵ_x_i,m,i), where ϵ is a tensor of adjustable parameters with dimensions (D,M,N), with M denoting the `support dimension', the single model hyperparameter that controls the expressivity of the ansatz. Crucially, increasing M enlarges the class of states that the GPS wave function can span systematically towards exactness, since it formally defines a set of product states on which a kernel model can be trained to support the description <cit.>. The exponential form of Eq. <ref> ensures product separability, and allows the model to capture entanglement beyond area law states. This form can then be related to an infinite series of products of sums of unentangled states, as well as constructively recast into a deep feed-forward neural network architecture <cit.>. We can use this GPS model as a parametric form for each conditional rather than the full state, to adapt the state definition to an autoregressive GPS ansatz (AR-GPS) as ψ_AR-GPS(𝐱) = ∏_i=1^Nexp(∑_m=1^M∏_j≤ iϵ_x_j,m,j^(i))/√(∑_x'=0^D-1|exp(∑_m=1^Mϵ_x',m,i^(i)∏_j<iϵ_x_j,m,j^(i))|^2), where the autoregressive masking of the configuration x is explicitly enforced in the argument of each exponential by only multiplying parameters related to sites with index j≤ i. 
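A minimal sketch of how the AR-GPS amplitude above could be evaluated is given below, with one parameter tensor eps[i] of shape (D, M, N) per site, the masking implemented by only including sites j ≤ i, and each conditional normalized over the local Fock space. The array layout and function name are my own choices for illustration rather than the GPSKet implementation, and no vectorization or fast updating is attempted.

```python
# Hedged sketch of evaluating log psi_AR-GPS(x) for the equation above.
# x: integer configuration of length N; eps has shape (N, D, M, N), where
# eps[i, x_j, m, j] are the parameters of the GPS modelling the conditional of site i.
import numpy as np

def log_ar_gps(x, eps):
    n_sites, _, m_supp, _ = eps.shape
    log_psi = 0.0
    for i in range(n_sites):
        # Masked environment: product over preceding sites j < i for each support index m.
        if i > 0:
            env = np.prod(eps[i, x[:i], :, np.arange(i)], axis=0)
        else:
            env = np.ones(m_supp, dtype=eps.dtype)
        # Unnormalized conditional amplitudes over the D local states of site i.
        cond = np.exp(np.sum(eps[i, :, :, i] * env, axis=1))
        # Explicit normalization of the conditional over the local Fock space.
        log_psi = log_psi + np.log(cond[x[i]]) - 0.5 * np.log(np.sum(np.abs(cond) ** 2))
    return log_psi
```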
This full AR-GPS ansatz has DMN(N+1)/2 parameters since it is a product of normalized GPS models for each site, each with support dimension M over successively larger Hilbert spaces. Tempering this additional scaling in the number of parameters compared to the parent GPS model will be considered via filters in Sec. <ref>. To conclude this section we return to the general formulation of AR models, to stress that there are two conditions which must be enforced, both of which constrain the flexibility of the state compared to the parent parameterization. These are: * A specific site (orbital) ordering is imposed, with the conditional wave function for site i only allowed to depend on x_<i, i.e. the occupation of sites preceding it in this ordering. In the rest of this work, we will denote this step as a masking operation. Explicit dependence of the conditional on occupation changes of sites higher in this ordering is therefore excluded. It can thus also be expected that the flexibility of the state will be dependent on this ordering, as is also commonly observed for tensor network representations relying on an enforced one-dimensional sequence of sites <cit.>. * Each conditional is explicitly normalized over the D local Fock states in the evaluation of any configurational amplitude. This constraint also reduces the expressivity of each conditional, and the overall state. This loss of flexibility is offset by the practical advantages of direct sampling of configurations from the state in statistical estimators. The autoregressive masking and the explicit normalization in fact allow the generation of independent and identically distributed configurations directly from the underlying Born distribution |ψ_θ(𝐱)|^2, avoiding autocorrelation in path-dependent Markov chain construction via e.g. Metropolis sampling and potential loss of ergodicity. Furthermore, there are other applications beyond the variational optimization of unknown states, e.g. in quantum state tomography or real-time evolution, where the constraint of ensuring at all times a normalized model for the state is a critical feature for success in modelling the state with an appropriate inductive bias <cit.>. Nevertheless, it is important to understand the loss of variational flexibility for AR models due to these two constraints, as well as the computational overheads compared to their parent parameterizations and increases in parameter number, to appropriately assess the trade-offs and whether this loss of flexibility can simply be compensated by a more complex model for the conditionals. We will numerically investigate these questions and quantify the impact of the individual constraints in Sec. <ref> by comparing to the original (non-autoregressive) GPS model. §.§ Filters The general autoregressive ansatz in Eq. <ref>, as well as the specific AR-GPS model of Eq. <ref>, allows each site conditional correlator to be modelled independently. While this increases the variational flexibility of the ansatz, it implies that the number of parameters scales as 𝒪(N^2). For large systems, this can be prohibitively expensive, thus schemes that bring the scaling down become necessary. We consider a scheme analogous to the approach of translationally invariant convolutional layers in neural network parameterizations <cit.>, which define local filters of correlation features and can be applied independently, or in conjunction with an autoregressive model, akin to how the PixelCNN model was used as an autoregressive quantum state in Ref. 
sharirDeepAutoregressiveModels2020. If the system being studied has translational symmetry, then it is reasonable to model each conditional correlator centered at a given site as the same, with its dependence being on the distance from the current conditional site to the ones in its environment. We can consider these conditional correlators then as filters that are translated to each site, describing the translationally equivalent correlations of that site with its environment. This ensures that the quantum fluctuations between sites are the same across the system, and only depend on the relative distance between sites. Furthermore, these filters can also be combined with an autoregressive state, as long as the autoregressive masking is applied on top to ensure that only the occupation of preceding sites is accounted for in each correlator. We refer to this kind of autoregressive ansatz as the filter-based model. It should be stressed however that while the application of filters on unnormalized GPS states (which we term the `filter-GPS' model) trivially conserves translational symmetry, the masking operation on top of the filters will break this rigorous translational symmetry. To consider the specific construction for an autoregressive filter-based GPS (`AR-filter-GPS'), we model the unnormalized conditional correlators as ψ̃_i(x_i|𝐱_<i) = exp(∑_m=1^M∏_{𝐫} g_i(σ(𝐫), ϵ_x_σ(𝐫), m, 𝐫_i-𝐫)). To ensure that the form generalizes for lattices of different dimensions, we define the product over sites above not by their index, but rather via the set of N vectors {𝐫} to each site in the system. The function σ(𝐫) then defines the mapping between the vector to the site, and the index of the site. The tensor of variational parameters then depends on the occupation of the site at the position given by the vector (x_σ(𝐫)), the support index (m) defining the latent space, and the relative distance between the central site of the conditional correlator and the site defined by the vector (𝐫_i-𝐫). Note that this relative distance is the shortest distance taking into account the periodic boundary conditions. The g_i(j,x) function then controls the masking operation for the conditional of site i, required for the autoregressive properties, given by g_i(j,x) = 1 if j>i, and g_i(j,x) = x if j≤ i, thus masking out contributions from sites which have an index higher than the central site in the one-dimensional ordering. This state is depicted schematically in Fig. <ref>(d) for a filter size which extends to the whole lattice size. The consequence of the parameters being defined by relative site distances is that the total number of parameters is reduced by a factor 𝒪(N), yielding the same scaling as the parent (non-autoregressive) GPS model, at the expense that each site conditional is no longer independently parameterized. A further reduction in the number of parameters can then be simply achieved by introducing a range cutoff in the convolutional filters, i.e. setting a maximum distance in the range |𝐫_i-𝐫| in Eq. <ref>. Practically, this restricts the range of the correlations that are modelled, and it is common to define range-restricted filters in e.g. CNN-inspired NQS studies <cit.>. The number of parameters in the state is then independent of system size for a fixed range of correlations. However, in this work, we consider filters which extend over the whole lattice, and therefore do not restrict the range of the filters in each conditional to a local set of sites. 
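To make the filter construction and the role of the masking function g_i explicit, the sketch below evaluates one unnormalized filter-based conditional on a periodic 1D chain, indexing the parameters by the minimum-image distance |𝐫_i-𝐫| and skipping sites after i in the ordering. The restriction to one dimension and the function name are illustrative assumptions and not the implementation used for the results.

```python
# Illustrative sketch of a filter-based AR-GPS conditional on a periodic 1D chain.
# eps has shape (D, M, N) and is shared by all sites i; the third index is the
# minimum-image distance between the conditional site i and an environment site j.
import numpy as np

def filter_conditional(i, x, eps, n_sites):
    """Unnormalized amplitudes psi~_i(x_i | x_<i) over the D local states of site i."""
    d_local, m_supp, _ = eps.shape
    amps = np.empty(d_local, dtype=eps.dtype)
    for xi in range(d_local):
        prod = np.ones(m_supp, dtype=eps.dtype)
        for j in range(n_sites):
            if j > i:
                continue                       # masking g_i: sites after i contribute a factor of 1
            rel = abs(i - j)
            rel = min(rel, n_sites - rel)      # minimum-image distance on the periodic chain
            occ = xi if j == i else x[j]
            prod = prod * eps[occ, :, rel]     # the same filter parameters are reused for every site i
        amps[xi] = np.exp(np.sum(prod))
    return amps
```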
An alternative and simpler strategy to reduce the number of parameters in autoregressive models compared to the filter-based approach is simply to share the parameters between different conditional models, i.e. to remove the dependence on i in the ϵ_x_j,m,j^(i) factor in the full AR-GPS model of Eq. <ref>. This `weight-sharing' scheme has been considered in autoregressive models based on feed-forward neural networks, such as NADE <cit.>, which inspired the ARQS in Ref. <cit.>. While this weight-sharing scheme reduces the computational cost for sample generation and amplitude evaluation, it introduces a highly non-trivial relationship between subsequent correlators in the autoregressive sequence which can not directly be linked to physical intuition, making it hard to justify as a parameterization. As a concrete example, in Appendix <ref>, we show a constructive demonstration of how the full AR-GPS with M=1 can exactly describe any product state. However, the weight-sharing adaptation is generally expected to require M=N to describe an arbitrary product state, representing a significant increase in the model complexity, even for the description of entirely unentangled states. We therefore will not consider these weight-sharing AR-GPS models further. A further technique to compress the full autoregressive approach is to exploit recurrent neural network-based AR models <cit.>. These consist of a parameterized function (recurrent cell) that recursively compresses the environmental part of the physical configuration for each conditional, retaining the autoregressive character of the state. Information about the previous sites is encoded in a hidden state vector, which is updated by the recurrent cell at each site. From a modelling perspective, this approach is similar to a range-restricted AR-filter-GPS, but has the advantage of being able to learn a system-dependent description of this filter, instead of specifying it into the model a priori. We will explore connections between these two approaches in future work. Beyond the number of parameters, the computational cost of both generating statistically independent configurations from the AR-GPS wave function (according to |Ψ(𝐱)|^2), and evaluating their amplitude, is given as 𝒪(N^2), since N correlators of complexity 𝒪(N) must be computed. In AR-filter-GPS models with a range cutoff this reduces to 𝒪(NK^d), where K is a measure of the linear length scales included, and d denotes the dimensionality of the filters. However, the dominant cost in a VMC calculation is often in the evaluation of the local energy, particularly in ab initio systems where the Hamiltonian in second quantization has in general a quartic number of non-zero terms (although locality arguments can reduce this to quadratic <cit.>), and the model amplitudes must then be evaluated at all these connected configurations. Here, it is possible to reduce the computational cost of evaluating the AR-GPS model at each of these configurations, by exploiting the fact that these configurations only differ by a small occupation change from a reference configuration. This `fast updating' scheme for the AR-GPS, described in Appendix <ref>, yields a reduction in the naive cost of computing the local energy by a factor of 𝒪(N) and is used in all results. §.§ Universality Given the functional forms introduced in the previous sections, it is important to consider to what degree they are able to exactly represent arbitrary quantum states, and thus be considered universal ansätze. 
For the parent GPS (Eq. <ref>), this property has been demonstrated in Ref. <cit.>, and since the AR-GPS is a product of N GPS models this ansatz is also universal, even in the presence of the masking and normalization. For the filter-GPS where translationally-invariant filters are used on top of the GPS, the ansatz will only be able to represent quantum states with trivial translational symmetry with character one, i.e. those where all translations map configurations onto ones with the same amplitude. As such, the filter-GPS can only be considered a universal ansatz for states exhibiting this trivial translational symmetry, and as long as no locality constraints are applied to the filter, which is allowed to span the whole system (or deep architectures used as in Ref. <cit.>). However, applying the autoregressive adaptation on top of a filter-based ansatz allows the symmetry to be broken by the masking operation, and while this no longer exactly conserves translational symmetry, it also allows the state to become a general universal approximator. Masking can also be applied to filter-GPS for systems without exact translational symmetry, to break the enforcing of this symmetry by the filters and return to a universal approximator for all states (e.g. for lattice models with open boundary conditions) <cit.>. §.§ Symmetries and conserved quantities Incorporating symmetries and conserved quantities of the system into the ansatz is crucial for state-of-the-art accuracy <cit.>, restricting the optimization to the appropriate symmetry sector. Any non-autoregressive GPS state can typically be symmetrized by either symmetrizing the form of the kernel via a filter as described in Sec. <ref> (and as has previously been denoted `kernel symmetrization as in Eq. (B1) of Ref. rathQuantumGaussianProcess2022), or via projective symmetrization where an operator summing over the operations of the group is applied at the point of evaluating the amplitudes (see Eq. (B2) in Ref. rathQuantumGaussianProcess2022). For a set 𝒮 of symmetry operations forming a symmetry group, the projective symmetrization sums over the symmetry operations applied to each configuration that is being evaluated of a non-symmetric ansatz, ψ_θ(𝐱). Projecting onto a specific irreducible representation of the group, Γ, results in the explicitly symmetrized ansatz ψ_θ^Γ(𝐱) = dim(Γ)/|𝒮|∑_τ∈𝒮χ^Γ(τ) ψ_θ(τ(𝐱)), where each τ is a symmetry operation, and χ^Γ(τ) is the character of the symmetry operation in that irrep. For the totally symmetric states considered in this work all characters are one, leading to a simple averaging over the symmetry-related configurations <cit.>. However, since this explicit symmetrization of Eq. <ref> does not preserve the normalization of the state, projective symmetrization is not compatible with the autoregressive property, and thus direct sampling of configurations. Instead, autoregressive wave functions are symmetrized by ensuring that the probability of generating configurations of a certain symmetry-equivalence class from the unsymmetrized ansatz is the same as the probability given by the corresponding amplitude of the symmetrized model. As first proposed for autoregressive NQS in Ref. sharirDeepAutoregressiveModels2020 and further expanded on in Ref. 
rehOptimizingDesignChoices2023, this can be achieved by averaging the real and imaginary part of the wave function amplitude separately, which for a totally symmetric irrep results in ψ_θ^symm(𝐱) = √(1/|𝒮|∑_τ∈𝒮exp(2ℜ[logψ_θ(τ(𝐱))])) ×exp(i arg(∑_τ∈𝒮exp(iℑ[logψ_θ(τ(𝐱))]))), where ψ_θ(𝐱) is the amplitude from the unsymmetrized autoregressive ansatz. This ansatz can be sampled in a two-step process: first a configuration is autoregressively sampled from the unsymmetrized ansatz, then a symmetry operation is drawn uniformly from the set 𝒮 and applied to the sampled configuration. For symmetry operators which are diagonal in the computational basis, there is a simpler way to exactly constrain the sampled irrep in autoregressive models. This includes spin magnetization when working in a computational basis of Ŝ^z eigenfunctions, or electron number symmetry for fermionic models. This can also be extended to full SU(2) spin-rotation symmetry when working in a basis of coupled angular momentum functions <cit.>. These `gauge-invariant' autoregressive models can be implemented with a gauge-checking block, which renormalizes the conditionals in order to respect the overall selected gauge or quantum number. This means that certain local Fock states of conditionals are set to zero when iteratively generating a configuration, if they would result in a symmetry-breaking final configuration. Therefore, this excludes support of the AR-GPS state on these symmetry-breaking configurations. This approach has also been used for autoregressive recurrent neural network-based architectures for the conservation of magnetization in quantum spin systems <cit.> and electron number and multiplicity in ab initio systems <cit.>. In this work, we use the normalization-preserving symmetrization of Eq. <ref> to symmetrize our AR-GPS with the C_4v point-group symmetries of the lattice and ℤ_2 spin-flip symmetry in quantum spin systems (not including translations, which are not preserved even in the presence of filters in the AR states). In addition, we implement a gauge-checking block to conserve the total magnetization in spin systems as well as the electron particle number in fermionic systems. We stress here that the normalization-preserving symmetrization of Eq. <ref> is not equivalent to the projective symmetrization approach of Eq. <ref>. In particular, in keeping with the conclusions of Refs. rehOptimizingDesignChoices2023 and rothHighaccuracyVariationalMonte2023, we find that the symmetric autoregressive state resulting from Eq. <ref> is not as capable in modelling sign structures of quantum states compared to the projective symmetrization of non-autoregressive states. This is due to the requirement to split the amplitude and phase information in Eq. <ref>, which prevents interference between unsymmetrized amplitudes, limiting the flexibility of autoregressive ansätze in frustrated spin and fermionic systems. § RESULTS Having presented the general formulation of autoregressive and filter adaptations to a wave function ansatz, as well as their specific construction for the Gaussian process state (GPS) model, we will now numerically investigate the expressivity of these states. In particular, we aim to understand how the AR constraints of masking conditionals according to a 1D ordering and normalization (Sec. <ref>), as well as the symmetrization (Sec. <ref>) and convolutional filters (Sec. <ref>) change the variational freedom of the state compared to the `parent' unnormalized and non-autoregressive GPS model of Eq. <ref>. 
Note that this is analyzed independently of the benefit in the efficiency of the direct sampling afforded by the AR models, which is considered elsewhere <cit.> and likely to be highly system dependent. We therefore consider the minimized variational energy of these models, with the complexity of the state denoted by the number of parameters. These models are optimized with a variant of the stochastic reconfiguration algorithm <cit.> which is detailed further in Appendix <ref> and has been shown to improve the convergence for autoregressive models and avoid local minima. We implement all the models of Table <ref> and perform the VMC optimization using the NetKet package <cit.>. To distinguish between the effects of the autoregressive masking and the normalization conditions, we also consider one further GPS-derived model. This is an unnormalized ansatz, but including the autoregressive masking, as ψ_masked-GPS(𝐱) = ∏_i=1^Nexp(∑_m=1^M∏_j≤ iϵ_x_j,m,j^(i)). This `masked-GPS' model is purely proposed for illustrative purposes, as it suffers from the increase in variational parameters and loss of flexibility due to masking constraints of the full AR-GPS state, but without the benefit of direct sampling that the full AR construction would afford. However, since it does not introduce the additional normalization constraint, it allows us to disentangle the loss in the flexibility of the model due to these two constraints required for a full AR state construction. We also apply this masking without the explicit normalization in the presence of filters, resulting in the `masked-filter-GPS' similarly used to understand the effect of the masking operation in isolation. We first consider these states applied to the Heisenberg system, before moving on to fermionic Hubbard models, and ab initio systems, to understand the change of variational freedom that these model variants present. While these will be specific to the underlying GPS parameterization, we expect that the conclusions regarding the relative expressibility of these states will also transfer to other (e.g. NQS) parent models with similar universal approximator properties. §.§ Antiferromagnetic Heisenberg system The study of quantum spin fluctuations and magnetic order in condensed matter has for a long time relied on the understanding of the properties of the S=1/2 antiferromagnetic Heisenberg model (AFH), described by Ĥ = J∑_⟨ i,j⟩Ŝ_i·Ŝ_j, where ⟨ i,j⟩ represent nearest-neighbor pairs of localized quantum spins. Correctly describing the quantum fluctuations of the AFH ground state at zero temperature represents a challenge for analytical and numerical techniques and a common benchmark for many emerging methods. On a 6 × 6 square lattice, we can apply the Marshall-Peierls sign rule, transforming the Hamiltonian into a sign-free problem that can be described with real parameter GPS models, avoiding in this case the complications of representing sign-structures as described in Sec. <ref>. For masked and autoregressive models, we follow a zig-zag ordering of the lattice sites, as depicted in Fig. <ref>(c-d). In Fig. <ref> we show the relative variational energy error as a function of the number of parameters for the different ansätze. We include the conservation of total zero magnetization for all models (trivially for the Metropolis-based non-AR models, or via gauge-checking blocks for the AR models), and do not explicitly include further symmetries unless otherwise stated. 
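A minimal sketch of such a gauge-checking block for the spin models studied here is shown below: during the site-by-site sampling, local states that could no longer lead to total S_z = 0 are assigned zero weight and the conditional is renormalized. The `cond_amplitudes` interface is again only an illustrative assumption, not the implementation used for these results.

```python
# Hedged sketch of a gauge-checking block enforcing total S_z = 0 during direct
# sampling of a spin-1/2 configuration (local states: 0 = down, 1 = up). Local states
# that would break the magnetization constraint receive zero probability.
import numpy as np

def gauge_checked_sample(cond_amplitudes, n_sites, rng):
    x, n_up = [], 0
    for i in range(n_sites):
        probs = np.abs(np.asarray(cond_amplitudes(i, x))) ** 2   # unconstrained conditional weights
        if n_up >= n_sites // 2:
            probs[1] = 0.0                                       # no further up spins allowed
        if (i - n_up) >= n_sites // 2:
            probs[0] = 0.0                                       # no further down spins allowed
        probs = probs / probs.sum()                              # renormalize the gauge-checked conditional
        xi = rng.choice(2, p=probs)
        n_up += xi
        x.append(xi)
    return np.array(x)
```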
The accuracy for all states can be systematically improved by increasing the number of parameters (via the support dimension M), noting that due to the difference in scaling the same number of parameters does not necessarily equate to the same value of M. Considering first the GPS models without filters (solid lines), there is a clear loss of variational flexibility for a given number of parameters between the unnormalized parent GPS model (Eq. <ref>), and the AR-GPS model (Eq. <ref>). Interestingly, if we apply a masking operation to the GPS model without normalization (the non-AR masked-GPS model of Eq. <ref>), the energies are almost as good as the parent GPS model, despite the increase in the number of parameters for a given M. This indicates that it is the act of explicitly normalizing the AR-GPS state for each configuration which is providing most of the loss of variational flexibility in the model, rather than the act of masking the physical configuration from certain sites. This normalization step cannot be easily compensated for by an increase in the support dimension. We note here that similar findings have been uncovered in the machine learning literature, pointing to intrinsic limitations of autoregressive models in modelling arbitrary distributions over sequences of a finite length <cit.>. We further consider the relative impact of this masking and normalization of the individual conditionals of the autoregressive state, but now with filters also applied both to autoregressive, masked and parent GPS models (dashed lines). While these models are more parameter efficient than their respective counterparts without filters, the discrepancy between the AR-filter-GPS (including normalization of conditionals) and the masked-filter-GPS (without normalization of conditionals) persists, reaffirming that the normalization rather than the masking is the leading cause of the loss of flexibility going towards AR models in this (unsigned) problem. As expected, the filter-GPS provides the best results for a given number of parameters, due to its dual advantage in both avoiding the masking operation, allowing the model at each site to `see' the full spin configuration, as well as ensuring that translational symmetry is exactly maintained. The quality of these results in this system mirrors the equivalent `kernel-symmetrization' results of Ref. rathQuantumGaussianProcess2022. We should stress again that this analysis considers purely the expressivity of these models for a given compactness, rather than the numerical advantages in faithful and direct sampling of configurations the AR construction admits. In the inset of Fig. <ref> we also report the relative energy error obtained by the filter-based autoregressive GPS model on a larger 10× 10 lattice. This relative error is almost identical to that reached on the smaller 6× 6 lattice with the same value of M, confirming the expectation of a consistent level of accuracy across different system sizes for a given M for the size extensive form of the AR-GPS model. To test whether the inclusion of additional symmetries helps in closing this accuracy gap due to the explicit normalization of AR correlators, we optimize filter-based autoregressive and masked GPS models with the inclusion of C_4v point-group symmetries of the square lattice and ℤ_2 spin-flip symmetry (in addition to the conservation of total zero magnetization, i.e. S_z symmetry). We symmetrize both models following the normalization-preserving method in Eq. 
<ref>, in order to ensure a faithful comparison. Inclusion of these symmetries in Fig. <ref> shows that they help with the accuracy of both models. However, the gap between the accuracy of the models (arising from the requirement of explicit normalization of conditional correlators in the AR-filter-GPS) decreases as a function of M. This is due to a plateau in the accuracy of the masked-filter-GPS model. For comparison, we also show the CNN-based multi-layer NQS results from Ref. chooTwodimensionalFrustratedJ2019. While this non-AR filter-based NQS model is by construction translationally invariant, it is also invariant under all rotations of the lattice (C_4 symmetry). However, contrary to the masked and autoregressive filter-based GPS models in Fig. <ref>, the CNN-NQS was symmetrized via projective-symmetrization, which as shown in Ref. rehOptimizingDesignChoices2023 results in a more expressive model than the normalization-preserving symmetrization required in autoregressive models, which is again demonstrated in these results. Furthermore, even though our models use a translationally-invariant filter to model the conditional wave-functions, the autoregressive masking breaks this invariance, and thus we lose this exact symmetry in the model. Restoring rigorous translational symmetry in the AR-filter-GPS models would require the addition of all the translation operators in Eq. <ref>, yielding an additional 𝒪(N) cost to the evaluation of the amplitudes. We expect this exact conservation of translational symmetry, the more expressive (projective) symmetrization of the point group operations, as well as the lack of masking, to be the dominant cause of the improved CNN results of Ref. chooTwodimensionalFrustratedJ2019 rather than the change in underlying model architecture to the GPS in Fig. <ref>. This is validated via inclusion of the projectively-symmetrized GPS results of Ref. rathQuantumGaussianProcess2022, which provides comparable accuracy to the projectively-symmetrized deep CNN results of Ref. chooTwodimensionalFrustratedJ2019 (albeit noting that the GPS results are also projectively-symmetrized over the translational symmetries, instead of relying on translationally-invariant filters). §.§ 1D Hubbard model We now move to the 1D fermionic Hubbard model of strongly correlated electrons with Hamiltonian H = -t∑_⟨ i,j⟩,σ(ĉ^†_iσĉ_jσ+ĉ^†_jσĉ_iσ) + U∑_in̂_i↑n̂_i↓, where ĉ^†_iσ(ĉ_iσ) is the creation (annihilation) operator for a σ-spin electron at site i, and n̂_iσ=ĉ^†_iσĉ_iσ is the spin-density operator for σ-spin electrons at site i <cit.>. The ansätze introduced can be easily extended to this fermionic setting by enlarging the local Fock space for each site to four possible states, from two in spin systems. In Fig. <ref> we show the relative ground state energy error of an AR-filter-GPS with support dimension M=64 for an N=32 site 1D model in different interaction regimes, from the uncorrelated U=0t to strongly correlated U=10t, compared to the reference energy from a DMRG optimized MPS with bond dimension of 2500 <cit.>. For each interaction strength we consider both open (OBC) and anti-periodic (APBC) boundary conditions. The OBC system no longer strictly obeys translational symmetry; however, the filter ensures that the environment around each site is modelled with the same parameters. In this case, the sum over {r} in Eq. <ref> therefore only ranges over values such that 𝐫_i-𝐫 respects the boundary conditions of the system for each site conditional. As discussed in Sec. 
<ref>, despite the filter ensuring that environmental fluctuations are translationally symmetric, the addition of the masking operation ensures that the AR-filter-GPS is still a universal approximator for this system even with OBC. In fact, we find that the model is able to describe the OBC system in general to higher accuracy than the APBC system, with it being particularly effective at higher U/t values. We can rationalize this as resulting from the masking operation, which requires a 1D ordering of the sites that cannot respect the (anti-)periodic boundary conditions, and therefore biases the ansatz towards the OBC system. This improves at higher U/t values, where the physics is dominated by local fluctuations, and where the imposition of a translationally symmetric filter is less restrictive. In contrast, the translational symmetry of the APBC system works against the constraints imposed by the masking, where the accuracy is worse for all values other than the uncorrelated U/t=0 state. §.§ Ab initio hydrogen models Lastly we look at more challenging ab initio fermionic systems with long-range Coulomb interactions, and test the performance of the autoregressive GPS in describing the electronic ground state of chains and sheets of hydrogen atoms. Analogous to changes in the interaction strength in the Hubbard system, modulating the bond length of the hydrogen atoms leads to qualitatively different correlation regimes. These systems have been studied as models towards realistic bulk materials, with rich phase diagrams that exhibit Mott phases, charge ordering and insulator-to-metal transitions <cit.>. In second quantization, the ab initio Hamiltonian in the Born-Oppenheimer approximation is discretized in a basis of 2L spin-orbitals Ĥ = ∑_ij^2Lh^(1)_ijĉ^†_iĉ_j + 1/2∑_ijkl^2Lh^(2)_ijklĉ^†_iĉ^†_kĉ_jĉ_l, where ĉ^†_i(ĉ_i) are fermionic operators that create (destroy) an electron in the i-th spin-orbital. The single-particle contributions due to the kinetic energy of the electrons and their interaction with external potentials are described by the one-body integrals h^(1)_ij, whereas the Coulomb interactions between electrons are modelled by the two-body integrals h^(2)_ijkl. The choice of molecular orbitals used in the second quantized representation is not unique, since any non-singular rotation of the orbitals would yield another valid basis for the degrees of freedom, without affecting the physical observables of the exact solution. However, depending on the chosen representation for the computational basis, the wave function will have different amplitudes, which changes the ability to sample configurations from an ansatz, as well as faithfully represent them in a given parameterized form. As such the accuracy to which an observable can be estimated by an ansatz will greatly depend on this choice, which will then also impact the optimization process. The practical consequences of this choice on the scalability of VMC calculations have recently been studied in Ref. rathFrameworkEfficientInitio2023 with the GPS as an ansatz. Using the 4× 4× 4 hydrogen cube as benchmark, the authors have demonstrated the benefits of working in a basis of localized orbitals to obtain state-of-the-art results. A localized basis is one in which the electron orbitals are rotated in such a way that they fulfil some locality requirement, concentrating the orbital amplitudes around localized regions in the system, often retaining atomic-like orbital character. 
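For orientation, the following hedged sketch shows one way to generate the two representations compared in this section with PySCF: canonical restricted Hartree–Fock orbitals and Foster–Boys localized orbitals for a hydrogen chain, together with the one- and two-body integrals defining the second-quantized Hamiltonian in either basis. The geometry, basis set and localization of the full orbital set are my assumptions for illustration and may differ from the exact computational setup used for the results below.

```python
# Hedged sketch (assumed geometry/basis, not necessarily the paper's settings):
# canonical RHF orbitals versus Foster-Boys localized orbitals for a hydrogen chain,
# and the integrals h1, h2 entering the second-quantized Hamiltonian in either basis.
from pyscf import gto, scf, lo, ao2mo

r = 1.8  # interatomic separation in Angstrom (illustrative value)
mol = gto.M(atom=[("H", (i * r, 0.0, 0.0)) for i in range(16)], basis="sto-6g")

mf = scf.RHF(mol).run()                  # canonical (delocalized) molecular orbitals
c_can = mf.mo_coeff
c_loc = lo.Boys(mol, c_can).kernel()     # Foster-Boys localization of the full orbital set

def integrals(c):
    """One- and two-body integrals in the chosen orbital representation."""
    h1 = c.T @ mf.get_hcore() @ c
    h2 = ao2mo.restore(1, ao2mo.kernel(mol, c), c.shape[1])
    return h1, h2

h1_loc, h2_loc = integrals(c_loc)        # local-basis Hamiltonian ingredients for the VMC ansatz
```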
In contrast, the orbitals in a more common canonical basis (that diagonalize some single-particle effective Hamiltonian, such as the Hartree–Fock or Kohn–Sham Hamiltonian) are delocalized over the whole system. Since the dominant contribution to the correlated ground state wave function typically would come from the mean-field configuration in this representation, the probability distribution over configurations is generally highly peaked around this configuration. For a local basis, however, many configurations will have similar energy contributions, which then leads to more uniform structure in the probability distribution, improving the ability to faithfully sample from the wave function via Monte Carlo algorithms. It is important to note here that even though autoregressive states allow for independent and uncorrelated sampling of the configurations, they are not immune to sampling effects caused by the choice of representation. In a canonical basis, they will still require potentially many samples to resolve expectation values by integrating over the highly peaked probability distribution given by the ground state many-body density, which is the same problem that can also affect non-autoregressive states as described in Ref. chooFermionicNeuralnetworkStates2020. Autoregressive states are simply more sample efficient, since generated configurations are uncorrelated, and amplitudes of repeatedly generated configurations can be stored in a lookup table as illustrated in the batched sampling procedure of Ref. <cit.>. In Fig. <ref> we investigate this by considering the relative energy error of a fully-variational AR-GPS model with support dimension M=16 on 1D chains with OBC and 4 × 4 square planar 2D systems of 16 hydrogen atoms (32 spin-orbitals) in both local and canonical (restricted Hartree–Fock) bases at different interatomic distances <cit.>. We then obtain a localized basis by performing a Foster-Boys localization <cit.>, which directly minimizes the overall spatial extent of each orbital, whilst preserving orthogonality. The 1D hydrogen chain can be considered an extension of the 1D Hubbard model with OBC of Sec. <ref>, with a natural choice of ordering for the local orbitals required for the masking, but now with long-range interactions giving the potential to induce a non-trivial sign structure of the state, and a higher complexity in the local energy evaluation. As shown in the top panel of Fig. <ref>, the localization of the orbitals in this setting is clearly critical for the sampling of configurations in the optimization of the autoregressive model, as well as the accuracy to which the state can be represented. In the local basis, the ansatz achieves an average relative energy error of ≈ 4× 10^-5 across the whole range of interatomic distances considered, whereas in the canonical basis it fails to reach an acceptable accuracy, with its error increasing as the separation between atoms becomes large in the more strongly correlated regime. We furthermore tested the performance of the AR-GPS ansatz with both real-valued parameters, restricting it to model positive-definite wave function amplitudes, as well as complex parameters, which should enhance the flexibility while also making it possible to model the sign structure of the target state. The additional freedom of the model with complex parameters leads to an improvement in the observed energy in all cases other than the linear chain in a local basis. 
In this case, the high accuracy of the real-valued, strictly positive model could not be matched. Since the complex-parameter model must be able to span the same states as the real-valued analogue, this (small) discrepancy must arise from increased difficulties in the sampling and optimization of the parametrization, even if the model carries the theoretical ability to give a better approximation. The additional flexibility of the complex amplitudes causes more numerical difficulties in practice than benefit found in their expressibility, due to the small change from a stoquastic Hamiltonian and positive-definite wave function that the long-range interactions induce in this case. Rearranging the atoms into a two-dimensional square lattice changes these conclusions, with the canonical basis providing better results up until a bond length of ∼ 2.5Å, at which point the local basis becomes more accurate. At short bond lengths, the canonical basis allows a description of the dominant kinetic energy driven effects with a single configuration, while the `Mott insulating' stretched geometries which are dominated by the interactions favor the local basis as an efficient representation due to the rapidly decaying correlation lengths. Nevertheless, the geometry change coupled to fermionic antisymmetry necessitates a strongly signed set of wave function amplitudes, requiring complex parameters. Furthermore, the ambiguity in defining an ordering of the orbitals (in both representations) impacts upon the accuracy that can be achieved in the state, significantly increasing the relative energy error compared to the 1D system. § CONCLUSIONS AND OUTLOOK With the emergence of highly expressive functional models based on machine learning paradigms as ansätze for the many-body quantum state, autoregressive models seem particularly appealing due to their inherent design which allows for an exact generation of configurational samples. We have presented a general framework for the construction of these autoregressive forms from general approximators, defining the two constraints which must be imposed on their form in terms of masking and normalization steps. Exemplifying the construction for the recently introduced Gaussian process state for the conditional probability distributions which make up these models, we introduced a new autoregressive ansatz, explicitly underpinned by physical modelling assumptions which motivate the GPS ansatz, and adapted for autoregressive sampling. Furthermore, we go beyond autoregressive adaptations of quantum states to consider `filters', designed to model correlations in a translationally symmetric fashion, and allow for a corresponding scaling reduction in parameter numbers. We show how these can then be combined with the quantum states in both a general framework, and specifically with the GPS. While the benefits of direct sampling have been previously highlighted, with the practical optimization difficulties of these expressive states well known <cit.>, we put particular focus on the ramifications for the variational flexibility of these states due to these autoregressive and filter adaptations compared to their parent model. This was numerically investigated for the variational optimization of unknown ground states across spin, fermionic and ab initio systems, highlighting that for the benefits and simplicity of direct sampling of a normalized autoregressive quantum state there can be a significant loss in expressibility. 
We numerically investigate which of the two constraints (masking or normalization) required for an autoregressive state this primarily stems from, finding (perhaps surprisingly) that the explicit normalization affects the expressibility more than the masking constraint of an ordered and causal set of conditionals. While numerical results were specifically obtained from the simple (yet nonetheless universal) autoregressive GPS model, we believe that these general conclusions would transfer to other forms of flexible ansatz, with the choice of `parent' architecture for the conditionals less important in the flexibility of these states compared to the underlying assumptions required for the autoregressive property to emerge. We show that the autoregressive model performs especially well when capturing the correlations emerging from (quasi-)one dimensional systems, where a natural order for the decomposition into a product of conditionals can be found. Indeed, we are able to demonstrate a high degree of accuracy for one-dimensional fermionic systems within different settings and correlation regimes, here exemplified for prototypical Hubbard models as well as fully ab initio descriptions of the electronic structure of hydrogen atom arrays. We furthermore compare the performance across signed and unsigned states, as well as the importance of basis choice in moving towards ab initio systems. Generalizing these constructions for models in higher dimensions, as has started to be done for e.g. recurrent neural networks <cit.>, is an ongoing direction of future work. Bringing these different modelling paradigms together in a general framework can build us towards a practical tool for the description of general quantum states, from their variational optimization, to time-evolution <cit.> and even the simulation of quantum circuits <cit.>. § CODE AVAILABILITY The code for this project was developed as part of the GPSKet plugin (https://github.com/BoothGroup/GPSKet) for NetKet (https://github.com/netket/netket) <cit.> and is made available, together with configuration files to reproduce the figures in the paper, at <https://github.com/BoothGroup/GPSKet/tree/master/scripts/ARGPS>. The authors gratefully acknowledge support from the Air Force Office of Scientific Research under award number FA8655-22-1-7011, as well as the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 759063. We are grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/P020194/1 and EP/T022213/1). Furthermore, we acknowledge the use of the high performance computing environment CREATE at King’s College London <cit.>. § REFERENCES [Arovas et al.(2022)Arovas, Berg, Kivelson, and Raghu]arovasHubbardModel2022 Daniel P. Arovas, Erez Berg, Steven Kivelson, and Srinivas Raghu. The Hubbard Model. Annual Review of Condensed Matter Physics, 130 (1):0 239–274, March 2022. ISSN 1947-5454, 1947-5462. 10.1146/annurev-conmatphys-031620-102024. [Barrett et al.(2022)Barrett, Malyshev, and Lvovsky]barrettAutoregressiveNeuralnetworkWavefunctions2022 Thomas D. Barrett, Aleksei Malyshev, and A. I. Lvovsky. Autoregressive neural-network wavefunctions for ab initio quantum chemistry. Nature Machine Intelligence, 40 (4):0 351–358, April 2022. ISSN 2522-5839. 10.1038/s42256-022-00461-z. [Bond-Taylor et al.(2022)Bond-Taylor, Leach, Long, and Willcocks]bond-taylorDeepGenerativeModelling2022 Sam Bond-Taylor, Adam Leach, Yang Long, and Chris G. 
Willcocks. Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 440 (11):0 7327–7347, November 2022. ISSN 1939-3539. 10.1109/TPAMI.2021.3116668. [Borin and Abanin(2020)]borinApproximatingPowerMachinelearning2020 Artem Borin and Dmitry A. Abanin. Approximating power of machine-learning ansatz for quantum many-body states. Physical Review B, 1010 (19):0 195141, May 2020. 10.1103/PhysRevB.101.195141. [Bukov et al.(2021)Bukov, Schmitt, and Dupont]bukovLearningGroundState2021 Marin Bukov, Markus Schmitt, and Maxime Dupont. Learning the ground state of a non-stoquastic quantum Hamiltonian in a rugged neural network landscape. SciPost Physics, 100 (6):0 147, June 2021. ISSN 2542-4653. 10.21468/SciPostPhys.10.6.147. [Carleo and Troyer(2017)]carleoSolvingQuantumManybody2017 Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. Science, 3550 (6325):0 602–606, February 2017. 10.1126/science.aag2302. [Carleo et al.(2019)Carleo, Choo, Hofmann, Smith, Westerhout, Alet, Davis, Efthymiou, Glasser, Lin, Mauri, Mazzola, Mendl, van Nieuwenburg, O'Reilly, Théveniaut, Torlai, Vicentini, and Wietek]carleoNetKetMachineLearning2019 Giuseppe Carleo, Kenny Choo, Damian Hofmann, James E. T. Smith, Tom Westerhout, Fabien Alet, Emily J. Davis, Stavros Efthymiou, Ivan Glasser, Sheng-Hsuan Lin, Marta Mauri, Guglielmo Mazzola, Christian B. Mendl, Evert van Nieuwenburg, Ossian O'Reilly, Hugo Théveniaut, Giacomo Torlai, Filippo Vicentini, and Alexander Wietek. NetKet: A machine learning toolkit for many-body quantum systems. SoftwareX, 10:0 100311, July 2019. ISSN 2352-7110. 10.1016/j.softx.2019.100311. [Carrasquilla et al.(2019)Carrasquilla, Torlai, Melko, and Aolita]carrasquillaReconstructingQuantumStates2019 Juan Carrasquilla, Giacomo Torlai, Roger G. Melko, and Leandro Aolita. Reconstructing quantum states with generative models. Nature Machine Intelligence, 10 (3):0 155–161, March 2019. ISSN 2522-5839. 10.1038/s42256-019-0028-1. [Cataldi et al.(2021)Cataldi, Abedi, Magnifico, Notarnicola, Pozza, Giovannetti, and Montangero]cataldiHilbertCurveVs2021 Giovanni Cataldi, Ashkan Abedi, Giuseppe Magnifico, Simone Notarnicola, Nicola Dalla Pozza, Vittorio Giovannetti, and Simone Montangero. Hilbert curve vs Hilbert space: Exploiting fractal 2D covering to increase tensor network efficiency. Quantum, 5:0 556, September 2021. 10.22331/q-2021-09-29-556. [Chen and Heyl(2023)]chenEfficientOptimizationDeep2023 Ao Chen and Markus Heyl. Efficient optimization of deep neural quantum states toward machine precision, February 2023. [Chen et al.(2023)Chen, Newhouse, Chen, Luo, and Soljačić]chenANTNBridgingAutoregressive2023 Zhuo Chen, Laker Newhouse, Eddie Chen, Di Luo, and Marin Soljačić. ANTN: Bridging Autoregressive Neural Networks and Tensor Networks for Quantum Many-Body Simulation, May 2023. [Choo et al.(2019)Choo, Neupert, and Carleo]chooTwodimensionalFrustratedJ2019 Kenny Choo, Titus Neupert, and Giuseppe Carleo. Two-dimensional frustrated J_1J_2 model studied with neural network quantum states. Physical Review B, 1000 (12):0 125124, September 2019. 10.1103/PhysRevB.100.125124. [Choo et al.(2020)Choo, Mezzacapo, and Carleo]chooFermionicNeuralnetworkStates2020 Kenny Choo, Antonio Mezzacapo, and Giuseppe Carleo. Fermionic neural-network states for ab-initio electronic structure. Nature Communications, 110 (1):0 2368, May 2020. ISSN 2041-1723. 
10.1038/s41467-020-15724-9. [Clark(2018)]clarkUnifyingNeuralnetworkQuantum2018 Stephen R. Clark. Unifying neural-network quantum states and correlator product states via tensor networks. Journal of Physics A: Mathematical and Theoretical, 510 (13):0 135301, February 2018. ISSN 1751-8121. 10.1088/1751-8121/aaaaf2. [Deng et al.(2017)Deng, Li, and Das Sarma]dengQuantumEntanglementNeural2017 Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Quantum Entanglement in Neural Network States. Physical Review X, 70 (2):0 021021, May 2017. 10.1103/PhysRevX.7.021021. [Donatella et al.(2022)Donatella, Denis, Boité, and Ciuti]donatellaDynamicsAutoregressiveNeural2022a Kaelan Donatella, Zakari Denis, Alexandre Le Boité, and Cristiano Ciuti. Dynamics with autoregressive neural quantum states: Application to critical quench dynamics, September 2022. [Eisert et al.(2010)Eisert, Cramer, and Plenio]eisertAreaLawsEntanglement2010a J. Eisert, M. Cramer, and M. B. Plenio. Area laws for the entanglement entropy. Reviews of Modern Physics, 820 (1):0 277–306, February 2010. 10.1103/RevModPhys.82.277. [Foster and Boys(1960)]fosterCanonicalConfigurationalInteraction1960 J. M. Foster and S. F. Boys. Canonical Configurational Interaction Procedure. Reviews of Modern Physics, 320 (2):0 300–302, April 1960. 10.1103/RevModPhys.32.300. [Giuliani et al.(2023)Giuliani, Vicentini, Rossi, and Carleo]giulianiLearningGroundStates2023 Clemens Giuliani, Filippo Vicentini, Riccardo Rossi, and Giuseppe Carleo. Learning ground states of gapped quantum Hamiltonians with Kernel Methods, March 2023. [Glielmo et al.(2020)Glielmo, Rath, Csányi, De Vita, and Booth]glielmoGaussianProcessStates2020 Aldo Glielmo, Yannic Rath, Gábor Csányi, Alessandro De Vita, and George H. Booth. Gaussian Process States: A Data-Driven Representation of Quantum Many-Body Physics. Physical Review X, 100 (4):0 041026, November 2020. 10.1103/PhysRevX.10.041026. [Hachmann et al.(2006)Hachmann, Cardoen, and Chan]hachmannMultireferenceCorrelationLong2006 Johannes Hachmann, Wim Cardoen, and Garnet Kin-Lic Chan. Multireference correlation in long molecules with the quadratic scaling density matrix renormalization group. The Journal of Chemical Physics, 1250 (14):0 144101, October 2006. ISSN 0021-9606. 10.1063/1.2345196. [Hermann et al.(2020)Hermann, Schätzle, and Noé]hermannDeepneuralnetworkSolutionElectronic2020 Jan Hermann, Zeno Schätzle, and Frank Noé. Deep-neural-network solution of the electronic Schrödinger equation. Nature Chemistry, 120 (10):0 891–897, October 2020. ISSN 1755-4349. 10.1038/s41557-020-0544-y. [Hermann et al.(2022)Hermann, Spencer, Choo, Mezzacapo, Foulkes, Pfau, Carleo, and Noé]hermannAbinitioQuantumChemistry2022 Jan Hermann, James Spencer, Kenny Choo, Antonio Mezzacapo, W. M. C. Foulkes, David Pfau, Giuseppe Carleo, and Frank Noé. Ab-initio quantum chemistry with neural-network wavefunctions, August 2022. [Hibat-Allah et al.(2020)Hibat-Allah, Ganahl, Hayward, Melko, and Carrasquilla]hibat-allahRecurrentNeuralNetwork2020 Mohamed Hibat-Allah, Martin Ganahl, Lauren E. Hayward, Roger G. Melko, and Juan Carrasquilla. Recurrent neural network wave functions. Physical Review Research, 20 (2):0 023358, June 2020. 10.1103/PhysRevResearch.2.023358. [Hibat-Allah et al.(2022)Hibat-Allah, Melko, and Carrasquilla]hibat-allahSupplementingRecurrentNeural2022 Mohamed Hibat-Allah, Roger G. Melko, and Juan Carrasquilla. Supplementing Recurrent Neural Network Wave Functions with Symmetry and Annealing to Improve Accuracy, July 2022. 
[Hibat-Allah et al.(2023)Hibat-Allah, Melko, and Carrasquilla]hibat-allahInvestigatingTopologicalOrder2023 Mohamed Hibat-Allah, Roger G. Melko, and Juan Carrasquilla. Investigating Topological Order using Recurrent Neural Networks, March 2023. [Hinton, Geoffrey et al.(2012)Hinton, Geoffrey, Srivastava, Nitish, and Swersky, Kevin]hintongeoffreyLecture6aOverview2012 Hinton, Geoffrey, Srivastava, Nitish, and Swersky, Kevin. Lecture 6a: Overview of mini-batch gradient descent, 2012. [Hofmann et al.(2022)Hofmann, Fabiani, Mentink, Carleo, and Sentef]hofmannRoleStochasticNoise2022 Damian Hofmann, Giammarco Fabiani, Johan Mentink, Giuseppe Carleo, and Michael Sentef. Role of stochastic noise and generalization error in the time propagation of neural-network quantum states. SciPost Physics, 120 (5):0 165, May 2022. ISSN 2542-4653. 10.21468/SciPostPhys.12.5.165. [Jónsson et al.(2018)Jónsson, Bauer, and Carleo]jonssonNeuralnetworkStatesClassical2018 Bjarni Jónsson, Bela Bauer, and Giuseppe Carleo. Neural-network states for the classical simulation of quantum computing, August 2018. [Kingma and Ba(2017)]kingmaAdamMethodStochastic2017 Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization, January 2017. [King's College London e-Research team(2022)]kingscollegelondone-researchteamKingComputationalResearch2022 King's College London e-Research team. King's Computational Research, Engineering and Technology Environment (CREATE). https://doi.org/10.18742/rnvf-m07, 2022. [Kochkov and Clark(2018)]kochkovVariationalOptimizationAI2018 Dmitrii Kochkov and Bryan K. Clark. Variational optimization in the AI era: Computational Graph States and Supervised Wave-function Optimization, November 2018. [Lin et al.(2021)Lin, Jaech, Li, Gormley, and Eisner]linLimitationsAutoregressiveModels2021 Chu-Cheng Lin, Aaron Jaech, Xin Li, Matthew R. Gormley, and Jason Eisner. Limitations of Autoregressive Models and Their Alternatives. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5147–5173, Online, June 2021. Association for Computational Linguistics. 10.18653/v1/2021.naacl-main.405. [Lin and Pollmann(2022)]linScalingNeuralNetworkQuantum2022 Sheng-Hsuan Lin and Frank Pollmann. Scaling of Neural-Network Quantum States for Time Evolution. physica status solidi (b), 2590 (5):0 2100172, 2022. ISSN 1521-3951. 10.1002/pssb.202100172. [Lovato et al.(2022)Lovato, Adams, Carleo, and Rocco]lovatoHiddennucleonsNeuralnetworkQuantum2022 Alessandro Lovato, Corey Adams, Giuseppe Carleo, and Noemi Rocco. Hidden-nucleons neural-network quantum states for the nuclear many-body problem. Physical Review Research, 40 (4):0 043178, December 2022. 10.1103/PhysRevResearch.4.043178. [Luo et al.(2022)Luo, Chen, Carrasquilla, and Clark]luoAutoregressiveNeuralNetwork2022 Di Luo, Zhuo Chen, Juan Carrasquilla, and Bryan K. Clark. Autoregressive Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation. Physical Review Letters, 1280 (9):0 090501, February 2022. 10.1103/PhysRevLett.128.090501. [Luo et al.(2023)Luo, Chen, Hu, Zhao, Hur, and Clark]luoGaugeinvariantAnyonicsymmetricAutoregressive2023 Di Luo, Zhuo Chen, Kaiwen Hu, Zhizhen Zhao, Vera Mikyoung Hur, and Bryan K. Clark. Gauge-invariant and anyonic-symmetric autoregressive neural network for quantum lattice models. Physical Review Research, 50 (1):0 013216, March 2023. 10.1103/PhysRevResearch.5.013216. 
[Medvidović and Carleo(2021)]medvidovicClassicalVariationalSimulation2021 Matija Medvidović and Giuseppe Carleo. Classical variational simulation of the Quantum Approximate Optimization Algorithm. npj Quantum Information, 70 (1):0 1–7, June 2021. ISSN 2056-6387. 10.1038/s41534-021-00440-z. [Nomura(2021)]nomuraHelpingRestrictedBoltzmann2021 Yusuke Nomura. Helping restricted Boltzmann machines with quantum-state representation by restoring symmetry. Journal of Physics: Condensed Matter, 330 (17):0 174003, April 2021. ISSN 0953-8984. 10.1088/1361-648X/abe268. [Nomura and Imada(2021)]nomuraDiracTypeNodalSpin2021 Yusuke Nomura and Masatoshi Imada. Dirac-Type Nodal Spin Liquid Revealed by Refined Quantum Many-Body Solver Using Neural-Network Wave Function, Correlation Ratio, and Level Spectroscopy. Physical Review X, 110 (3):0 031034, August 2021. 10.1103/PhysRevX.11.031034. [Pfau et al.(2020)Pfau, Spencer, Matthews, and Foulkes]pfauInitioSolutionManyelectron2020 David Pfau, James S. Spencer, Alexander G. D. G. Matthews, and W. M. C. Foulkes. Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Physical Review Research, 20 (3):0 033429, September 2020. 10.1103/PhysRevResearch.2.033429. [Rath and Booth(2022)]rathQuantumGaussianProcess2022 Yannic Rath and George H. Booth. Quantum Gaussian process state: A kernel-inspired state with quantum support data. Physical Review Research, 40 (2):0 023126, May 2022. 10.1103/PhysRevResearch.4.023126. [Rath and Booth(2023)]rathFrameworkEfficientInitio2023 Yannic Rath and George H. Booth. Framework for efficient ab initio electronic structure with Gaussian Process States. Physical Review B, 1070 (20):0 205119, May 2023. 10.1103/PhysRevB.107.205119. [Rath et al.(2020)Rath, Glielmo, and Booth]rathBayesianInferenceFramework2020 Yannic Rath, Aldo Glielmo, and George H. Booth. A Bayesian inference framework for compression and prediction of quantum states. The Journal of Chemical Physics, 1530 (12):0 124108, September 2020. ISSN 0021-9606. 10.1063/5.0024570. [Rawat and Wang(2017)]rawatDeepConvolutionalNeural2017 Waseem Rawat and Zenghui Wang. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Computation, 290 (9):0 2352–2449, September 2017. ISSN 0899-7667. 10.1162/neco_a_00990. [Reh et al.(2023)Reh, Schmitt, and Gärttner]rehOptimizingDesignChoices2023 Moritz Reh, Markus Schmitt, and Martin Gärttner. Optimizing design choices for neural quantum states. Physical Review B, 1070 (19):0 195115, May 2023. 10.1103/PhysRevB.107.195115. [Roth and MacDonald(2021)]rothGroupConvolutionalNeural2021 Christopher Roth and Allan H. MacDonald. Group Convolutional Neural Networks Improve Quantum State Accuracy, May 2021. [Roth et al.(2023)Roth, Szabó, and MacDonald]rothHighaccuracyVariationalMonte2023 Christopher Roth, Attila Szabó, and Allan MacDonald. High-accuracy variational Monte Carlo for frustrated magnets with deep neural networks, May 2023. [Sandvik(1997)]sandvikFinitesizeScalingGroundstate1997 Anders W. Sandvik. Finite-size scaling of the ground-state parameters of the two-dimensional Heisenberg model. Physical Review B, 560 (18):0 11678–11690, November 1997. 10.1103/PhysRevB.56.11678. [Schulz et al.(1996)Schulz, Ziman, and Poilblanc]schulzMagneticOrderDisorder1996 H. J. Schulz, T. A. L. Ziman, and D. Poilblanc. Magnetic order and disorder in the frustrated quantum Heisenberg antiferromagnet in two dimensions. Journal de Physique I, 60 (5):0 675–703, May 1996. ISSN 1155-4304, 1286-4862. 
10.1051/jp1:1996236. [Sharir et al.(2020)Sharir, Levine, Wies, Carleo, and Shashua]sharirDeepAutoregressiveModels2020 Or Sharir, Yoav Levine, Noam Wies, Giuseppe Carleo, and Amnon Shashua. Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems. Physical Review Letters, 1240 (2):0 020503, January 2020. 10.1103/PhysRevLett.124.020503. [Simons Collaboration on the Many-Electron Problem et al.(2017)Simons Collaboration on the Many-Electron Problem, Motta, Ceperley, Chan, Gomez, Gull, Guo, Jiménez-Hoyos, Lan, Li, Ma, Millis, Prokof'ev, Ray, Scuseria, Sorella, Stoudenmire, Sun, Tupitsyn, White, Zgid, and Zhang]simonscollaborationonthemany-electronproblemSolutionManyElectronProblem2017 Simons Collaboration on the Many-Electron Problem, Mario Motta, David M. Ceperley, Garnet Kin-Lic Chan, John A. Gomez, Emanuel Gull, Sheng Guo, Carlos A. Jiménez-Hoyos, Tran Nguyen Lan, Jia Li, Fengjie Ma, Andrew J. Millis, Nikolay V. Prokof'ev, Ushnish Ray, Gustavo E. Scuseria, Sandro Sorella, Edwin M. Stoudenmire, Qiming Sun, Igor S. Tupitsyn, Steven R. White, Dominika Zgid, and Shiwei Zhang. Towards the Solution of the Many-Electron Problem in Real Materials: Equation of State of the Hydrogen Chain with State-of-the-Art Many-Body Methods. Physical Review X, 70 (3):0 031059, September 2017. 10.1103/PhysRevX.7.031059. [Sinitskiy et al.(2010)Sinitskiy, Greenman, and Mazziotti]sinitskiyStrongCorrelationHydrogen2010 Anton V. Sinitskiy, Loren Greenman, and David A. Mazziotti. Strong correlation in hydrogen chains and lattices using the variational two-electron reduced density matrix method. The Journal of Chemical Physics, 1330 (1):0 014104, July 2010. ISSN 0021-9606. 10.1063/1.3459059. [Sorella(2001)]sorellaGeneralizedLanczosAlgorithm2001 Sandro Sorella. Generalized Lanczos algorithm for variational quantum Monte Carlo. Physical Review B, 640 (2):0 024512, June 2001. 10.1103/PhysRevB.64.024512. [Stella et al.(2011)Stella, Attaccalite, Sorella, and Rubio]stellaStrongElectronicCorrelation2011 Lorenzo Stella, Claudio Attaccalite, Sandro Sorella, and Angel Rubio. Strong electronic correlation in the hydrogen chain: A variational Monte Carlo study. Physical Review B, 840 (24):0 245117, December 2011. 10.1103/PhysRevB.84.245117. [Sun et al.(2018)Sun, Berkelbach, Blunt, Booth, Guo, Li, Liu, McClain, Sayfutyarova, Sharma, Wouters, and Chan]sunPySCFPythonbasedSimulations2018 Qiming Sun, Timothy C. Berkelbach, Nick S. Blunt, George H. Booth, Sheng Guo, Zhendong Li, Junzi Liu, James D. McClain, Elvira R. Sayfutyarova, Sandeep Sharma, Sebastian Wouters, and Garnet Kin-Lic Chan. PySCF: The Python-based simulations of chemistry framework. WIREs Computational Molecular Science, 80 (1):0 e1340, 2018. ISSN 1759-0884. 10.1002/wcms.1340. [Sun et al.(2020)Sun, Zhang, Banerjee, Bao, Barbry, Blunt, Bogdanov, Booth, Chen, Cui, Eriksen, Gao, Guo, Hermann, Hermes, Koh, Koval, Lehtola, Li, Liu, Mardirossian, McClain, Motta, Mussard, Pham, Pulkin, Purwanto, Robinson, Ronca, Sayfutyarova, Scheurer, Schurkus, Smith, Sun, Sun, Upadhyay, Wagner, Wang, White, Whitfield, Williamson, Wouters, Yang, Yu, Zhu, Berkelbach, Sharma, Sokolov, and Chan]sunRecentDevelopmentsPySCF2020 Qiming Sun, Xing Zhang, Samragni Banerjee, Peng Bao, Marc Barbry, Nick S. Blunt, Nikolay A. Bogdanov, George H. Booth, Jia Chen, Zhi-Hao Cui, Janus J. Eriksen, Yang Gao, Sheng Guo, Jan Hermann, Matthew R. Hermes, Kevin Koh, Peter Koval, Susi Lehtola, Zhendong Li, Junzi Liu, Narbe Mardirossian, James D. 
McClain, Mario Motta, Bastien Mussard, Hung Q. Pham, Artem Pulkin, Wirawan Purwanto, Paul J. Robinson, Enrico Ronca, Elvira R. Sayfutyarova, Maximilian Scheurer, Henry F. Schurkus, James E. T. Smith, Chong Sun, Shi-Ning Sun, Shiv Upadhyay, Lucas K. Wagner, Xiao Wang, Alec White, James Daniel Whitfield, Mark J. Williamson, Sebastian Wouters, Jun Yang, Jason M. Yu, Tianyu Zhu, Timothy C. Berkelbach, Sandeep Sharma, Alexander Yu. Sokolov, and Garnet Kin-Lic Chan. Recent developments in the PySCF program package. The Journal of Chemical Physics, 1530 (2):0 024109, July 2020. ISSN 0021-9606. 10.1063/5.0006074. [Sun et al.(2022)Sun, Nebabu, Han, Flynn, and Qi]sunEntanglementFeaturesRandom2022 Xiao-Qi Sun, Tamra Nebabu, Xizhi Han, Michael O. Flynn, and Xiao-Liang Qi. Entanglement features of random neural network quantum states. Physical Review B, 1060 (11):0 115138, September 2022. 10.1103/PhysRevB.106.115138. [Szabó and Castelnovo(2020)]szaboNeuralNetworkWave2020 Attila Szabó and Claudio Castelnovo. Neural network wave functions and the sign problem. Physical Review Research, 20 (3):0 033075, July 2020. 10.1103/PhysRevResearch.2.033075. [Torlai et al.(2018)Torlai, Mazzola, Carrasquilla, Troyer, Melko, and Carleo]torlaiNeuralnetworkQuantumState2018 Giacomo Torlai, Guglielmo Mazzola, Juan Carrasquilla, Matthias Troyer, Roger Melko, and Giuseppe Carleo. Neural-network quantum state tomography. Nature Physics, 140 (5):0 447–450, May 2018. ISSN 1745-2481. 10.1038/s41567-018-0048-5. [Tsuchimochi and Scuseria(2009)]tsuchimochiStrongCorrelationsConstrainedpairing2009 Takashi Tsuchimochi and Gustavo E. Scuseria. Strong correlations via constrained-pairing mean-field theory. The Journal of Chemical Physics, 1310 (12):0 121102, September 2009. ISSN 0021-9606. 10.1063/1.3237029. [Uria et al.(2016)Uria, Côté, Gregor, Murray, and Larochelle]uriaNeuralAutoregressiveDistribution2016 Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural Autoregressive Distribution Estimation. Journal of Machine Learning Research, 170 (205):0 1–37, 2016. ISSN 1533-7928. [van den Oord et al.(2016)van den Oord, Kalchbrenner, Espeholt, kavukcuoglu, Vinyals, and Graves]vandenoordConditionalImageGeneration2016 Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, koray kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional Image Generation with PixelCNN Decoders. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. [Vicentini et al.(2021)Vicentini, Hofmann, Szabó, Wu, Roth, Giuliani, Pescia, Nys, Vargas-Calderon, Astrakhantsev, and Carleo]vicentiniNetKetMachineLearning2021 Filippo Vicentini, Damian Hofmann, Attila Szabó, Dian Wu, Christopher Roth, Clemens Giuliani, Gabriel Pescia, Jannes Nys, Vladimir Vargas-Calderon, Nikita Astrakhantsev, and Giuseppe Carleo. NetKet 3: Machine Learning Toolbox for Many-Body Quantum Systems. arXiv:2112.10526 [physics, physics:quant-ph], December 2021. [Vieijra et al.(2020)Vieijra, Casert, Nys, De Neve, Haegeman, Ryckebusch, and Verstraete]vieijraRestrictedBoltzmannMachines2020 Tom Vieijra, Corneel Casert, Jannes Nys, Wesley De Neve, Jutho Haegeman, Jan Ryckebusch, and Frank Verstraete. Restricted Boltzmann Machines for Quantum States with Non-Abelian or Anyonic Symmetries. Physical Review Letters, 1240 (9):0 097201, March 2020. 10.1103/PhysRevLett.124.097201. 
[Wang et al.(2022)Wang, Che, Li, Song, Pei, Bengio, and Li]wangYourAutoregressiveGenerative2022 Yezhen Wang, Tong Che, Bo Li, Kaitao Song, Hengzhi Pei, Yoshua Bengio, and Dongsheng Li. Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One, June 2022. [Westerhout et al.(2020)Westerhout, Astrakhantsev, Tikhonov, Katsnelson, and Bagrov]westerhoutGeneralizationPropertiesNeural2020 Tom Westerhout, Nikita Astrakhantsev, Konstantin S. Tikhonov, Mikhail I. Katsnelson, and Andrey A. Bagrov. Generalization properties of neural network approximations to frustrated magnet ground states. Nature Communications, 11 (1): 1593, March 2020. ISSN 2041-1723. 10.1038/s41467-020-15402-w. [Wu et al.(2023)Wu, Rossi, Vicentini, and Carleo]wuTensorNetworkQuantum2023 Dian Wu, Riccardo Rossi, Filippo Vicentini, and Giuseppe Carleo. From Tensor Network Quantum States to Tensorial Recurrent Neural Networks, March 2023. [Zhai and Chan(2021)]zhaiLowCommunicationHigh2021 Huanchen Zhai and Garnet Kin-Lic Chan. Low communication high performance ab initio density matrix renormalization group algorithms. The Journal of Chemical Physics, 154 (22): 224116, June 2021. ISSN 0021-9606. 10.1063/5.0050902. [Zhang and Di Ventra(2023)]zhangTransformerQuantumState2023 Yuan-Hang Zhang and Massimiliano Di Ventra. Transformer quantum state: A multipurpose model for quantum many-body problems. Physical Review B, 107 (7): 075147, February 2023. 10.1103/PhysRevB.107.075147. [Zhao et al.(2021)Zhao, De, Chen, Stokes, and Veerapaneni]zhaoOvercomingBarriersScalability2021 Tianchen Zhao, Saibal De, Brian Chen, James Stokes, and Shravan Veerapaneni. Overcoming barriers to scalability in variational quantum Monte Carlo. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '21, pages 1–13, New York, NY, USA, November 2021. Association for Computing Machinery. ISBN 978-1-4503-8442-1. 10.1145/3458817.3476219. [Zhao et al.(2022)Zhao, Stokes, and Veerapaneni]zhaoScalableNeuralQuantum2022 Tianchen Zhao, James Stokes, and Shravan Veerapaneni. Scalable neural quantum states architecture for quantum chemistry, August 2022. [Zhou(2020)]zhouUniversalityDeepConvolutional2020 Ding-Xuan Zhou. Universality of deep convolutional neural networks. Applied and Computational Harmonic Analysis, 48 (2): 787–794, March 2020. ISSN 1063-5203. 10.1016/j.acha.2019.06.004.
§ OPTIMIZATION DETAILS
Throughout this work we optimize all the ansätze with an improved Stochastic Reconfiguration (SR) <cit.> algorithm introduced in Ref. lovatoHiddennucleonsNeuralnetworkQuantum2022, which we implemented in our GPSKet plugin for NetKet <cit.>. In the SR scheme, parameters are updated according to the following rule: θ_t+1 = θ_t - η S^-1 g, where η is the step size (learning rate), θ_t are the parameters of the ansatz at iteration t, S is the quantum geometric tensor (QGT) and g is the variational energy gradient. The QGT and the energy gradient can be defined by introducing operators Ô_k representing the derivative with respect to the k-th parameter of the log wave function amplitude according to ⟨𝐱|Ô_k|𝐱'⟩ = δ_𝐱,𝐱' ∂logψ_θ(𝐱)/∂θ_k, where |𝐱⟩ and |𝐱'⟩ are computational basis states. The QGT and the energy gradient can then be evaluated via Monte Carlo sampling of the following expectation values: S_i,j = ⟨Ô^*_i Ô_j⟩ - ⟨Ô^*_i⟩⟨Ô_j⟩, g_i = ⟨Ô^*_i Ĥ⟩ - ⟨Ĥ⟩⟨Ô_i⟩. It is typically required to appropriately regularize the solution of the update of Eq.
(<ref>), which involves solving for the update vector S^-1 g. A common strategy is to add a constant shift to the diagonal of the S matrix to stabilize its inversion in Eq. <ref>. Instead, we update the diagonal entries of S with a parameter-dependent shift based on the scheme introduced in Ref. lovatoHiddennucleonsNeuralnetworkQuantum2022. We found that this approach sometimes significantly helps to reliably optimize the autoregressive parameterization. The scheme adds a regularization shift to the diagonal of the S matrix based on the exponential moving average of the squared gradient, v_t, effectively rotating the parameter updates towards the RMSProp gradient descent update directions <cit.>. This means that the S matrix is regularized by replacing it according to S ↦ (1-ε)S + ε diag(√(v)+10^-8), which depends on an additional hyperparameter ε between 0 and 1, controlling the amount of regularization. The exponential moving average of squared gradients is continuously updated over the course of the optimization according to v ↦ β v' + (1-β)g^2, where v' is the accumulated value from the previous iteration, and an additional momentum hyperparameter β controls the rate of the decay. Within all our numerical tests, we set the momentum value to β = 0.9. We chose a learning rate of η=0.01 with a diagonal shift constant of ε=0.1 for our simulations of lattice models, and a learning rate of η=0.04 with a shift constant ε=0.01 for the ab initio simulations of hydrogen systems. We computed estimates of the variational energy, the gradient, and S matrix elements with 4096 samples (non-symmetric representations) or 1024 samples (symmetric representations) for lattice models, and with 5000 samples for the ab initio systems. For non-autoregressive models, we relied on the Metropolis-Hastings algorithm based on spin exchange proposals to generate samples according to the Born distribution defined by the ansatz. The reported final energies were computed by averaging the sampled variational energy over the last 50 iterations.
§ REPRESENTING PRODUCT STATES WITH AUTOREGRESSIVE GPS
While the practical applications studied in this work specifically focus on capturing non-trivial correlations between the modes with the machine learning inspired ansatz, the model should also be able to reproduce physical characteristics of non-entangled states, as, e.g., obtained for eigenstates of Hamiltonians with vanishing couplings between system fragments. In particular, the ability to represent such simple product states with the model is likely an important building block for modelling ground states, which typically display a low but non-vanishing degree of entanglement <cit.>. In this appendix, we show how these unentangled states can also be obtained with the autoregressive extensions of the GPS model considered in the main text. A general product state for a system comprising N modes decomposes as |ψ⟩ = ⊗_i=1^N |ψ_i⟩, where each state |ψ_i⟩ is associated only with the local Hilbert space of the i-th mode. This means that wave function amplitudes of the configurations in the computational basis for this state evaluate to ψ(𝐱) = ∏_i=1^N c^i_x_i, with an N × D tensor of local amplitudes c^i_x_i = ⟨x_i|ψ_i⟩.
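To make this decomposition concrete, the following minimal NumPy sketch (our own illustration, not code from this work; the array names are arbitrary) evaluates the product-state amplitude ψ(𝐱) = ∏_i c^i_x_i for a given basis configuration from the N × D tensor of local amplitudes.

import numpy as np

def product_state_amplitude(c, x):
    """Amplitude psi(x) = prod_i c[i, x_i] of a product state.

    c : complex array of shape (N, D), local amplitudes c^i_{x_i}
    x : integer array of shape (N,), local occupations in {0, ..., D-1}
    """
    N = c.shape[0]
    return np.prod(c[np.arange(N), x])

# Example: N = 4 modes with local dimension D = 2 and normalized local states
rng = np.random.default_rng(0)
N, D = 4, 2
c = rng.normal(size=(N, D)) + 1j * rng.normal(size=(N, D))
c /= np.linalg.norm(c, axis=1, keepdims=True)  # normalize each |psi_i>

x = np.array([0, 1, 1, 0])
print(product_state_amplitude(c, x))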
To represent a general product state by an autoregressive model, we decompose the wave function amplitudes according to ψ_AR(𝐱) = ∏_i=1^N ψ̃_i(x_i|𝐱_<i)/√(∑_x'=0^D-1 |ψ̃_i(x'|𝐱_<i)|^2), where the local amplitudes c^i_x_i are represented by the conditional wave function amplitudes ψ̃_i(x_i|𝐱_<i). It can directly be seen that the general autoregressive GPS model as defined in Eq. <ref> of the main text, which specifies the wave function amplitudes as ψ_AR-GPS(𝐱) = ∏_i=1^N exp(∑_m=1^M ∏_j≤i ϵ_x_j,m,j^(i)) / √(∑_x'=0^D-1 |exp(∑_m=1^M ϵ_x',m,i^(i) ∏_j<i ϵ_x_j,m,j^(i))|^2), can represent arbitrary product states with a support dimension M=1, by employing the following choice: ϵ_x_j,m,j^(i) = log(c^i_x_i) if j = i, and 1 otherwise. As an approach to impose additional structure into the ansatz (and reduce the number of variational parameters), we introduced a filter-based version of the (autoregressive) GPS models. This relies on transferring a symmetric structure of the system to the model, similar to a convolutional neural network, and is compatible with the autoregressive adaptation, since an additional masking can always be applied in order to ensure that the autoregressive property is maintained. Applying this to the product state representation above results in a fully symmetric product state where all the local states |ψ_i⟩ are equal, i.e., a wave function decomposing as a product with mode-independent amplitudes c_x_i according to ψ(𝐱) = ∏_i=1^N c_x_i. While this filtering approach reduces the number of variational parameters (thus often improving the practical optimizability of the state) for a given support dimension, the fully-symmetric product state representation can only be sensible if the target state agrees with this trivial symmetry. As an alternative to the filtering approach to impose additional structure, in the main text we also consider a `weight sharing' approach in which parameters are shared among the conditionals of different sites according to the model ψ_AR-GPS(𝐱) = ∏_i=1^N exp(∑_m=1^M ∏_j≤i ϵ_x_j,m,j) / √(∑_x'=0^D-1 |exp(∑_m=1^M ϵ_x',m,i ∏_j<i ϵ_x_j,m,j)|^2), characterized by M × N × D parameters ϵ_x,m,j. This ansatz has a factor 𝒪(N) fewer parameters, and caching intermediate values of the product over sites j allows for a computational cost that is linear in the system size when sampling and evaluating configurations. However, with these additional weight sharing constraints on the model, it is no longer obvious how to represent arbitrary product states most compactly, let alone with a constant support dimension M=1, since the same parameters are used in all the correlators. We can still recover a representation of arbitrary product states by using a support dimension matching the size of the system, M=N. In this case, a representation of arbitrary product states with the autoregressive weight-sharing ansatz can be obtained by choosing the model parameters as ϵ_x_j,m,j = log(c^j_x_j) if j = m, 1 if j < m, and 0 otherwise. The required increase in the support dimension of the model to represent fully unentangled states therefore suggests that a weight-sharing construction might not be as suitable for targeting states exhibiting low degrees of entanglement, representing a major drawback of such a construction. This is also in agreement with results from numerical experiments, where we commonly observed a significant decay of the achievable accuracy when utilizing the autoregressive ansatz based on a weight-sharing parameter reduction.
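The first construction above (without weight sharing, M = 1) can be checked numerically. The sketch below is our own illustration under an assumed dense parameter layout eps[i, s, m, j] = ϵ^(i)_s,m,j, not the GPSKet implementation; it builds the M = 1 parameters for a random product state and verifies that the normalized AR-GPS amplitudes reproduce the product-state amplitudes.

import numpy as np

def ar_gps_amplitude(eps, x):
    """Normalized AR-GPS amplitude for one configuration x.

    eps : complex array of shape (N, D, M, N), eps[i, s, m, j] = eps^{(i)}_{s, m, j}
    x   : integer array of shape (N,)
    """
    N, D, M, _ = eps.shape
    psi = 1.0 + 0.0j
    for i in range(N):
        # products over j < i of eps^{(i)}_{x_j, m, j}, one value per support index m
        prefix = np.ones(M, dtype=complex)
        for j in range(i):
            prefix *= eps[i, x[j], :, j]
        # unnormalized conditional amplitudes for every local state at site i
        cond = np.exp(np.sum(eps[i, :, :, i] * prefix[None, :], axis=1))  # shape (D,)
        psi *= cond[x[i]] / np.sqrt(np.sum(np.abs(cond) ** 2))
    return psi

# Build the M = 1 parameters of the choice above for a random product state c of shape (N, D)
rng = np.random.default_rng(1)
N, D, M = 5, 2, 1
c = rng.normal(size=(N, D)) + 1j * rng.normal(size=(N, D))
c /= np.linalg.norm(c, axis=1, keepdims=True)          # normalized local states

eps = np.ones((N, D, M, N), dtype=complex)              # eps^{(i)}_{s,m,j} = 1 for j != i
for i in range(N):
    eps[i, :, 0, i] = np.log(c[i, :])                   # eps^{(i)}_{s,m,i} = log c^i_s

for _ in range(3):
    x = rng.integers(0, D, size=N)
    target = np.prod(c[np.arange(N), x])                # product-state amplitude
    print(np.allclose(ar_gps_amplitude(eps, x), target))  # expected: True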
§ FAST UPDATING OF THE AR-GPS
To reduce the overall computational scaling, it is often useful to exploit the fact that the evaluation of local energies generally requires low-rank updates to wave function amplitudes arising from few-electron changes to configurations of interest. These are applicable for k-local Hamiltonians, which connect a sampled configuration with other computational basis states that only differ in their occupancy on a few sites. While this is, in general, true for all the systems considered in this work, the utilization of a fast updating strategy to evaluate the amplitude of the connected configurations typically becomes particularly important for the ab initio Hamiltonians, where each basis state is connected to a quartically scaling number of connected configurations. In this section, we show how updates to wave function amplitudes can be implemented for the AR-GPS model, resulting in an 𝒪(N) scaling improvement of the connected amplitude evaluations. The fast updating scheme directly follows from the approach for GPS amplitudes as also outlined in Ref. <cit.>. With the definition of the AR-GPS ansatz according to Eq. (<ref>), the model associates a wave function amplitude with a basis configuration 𝐱 according to ψ_AR(𝐱) = ∏_i=1^N exp(∑_m=1^M φ_i,m(𝐱)) / √(∑_x'=0^D-1 |exp(∑_m=1^M ϵ_x',m,i^(i) φ_i,m(𝐱)/ϵ_x_i,m,i^(i))|^2). Here, we introduced the N × M products φ_i,m, which are defined as φ_i,m(𝐱) = ∏_j≤i ϵ_x_j,m,j^(i). The direct evaluation of an amplitude is therefore associated with a cost of 𝒪(N^2 M). To avoid redundant computations of elements for an update of the amplitude for a connected configuration 𝐱̃ with a similar occupancy as the initial configuration 𝐱, we can consider updates to the values of the products φ_i,m. Caching these for configuration 𝐱, their values can be updated for a configuration 𝐱̃ according to φ_i,m(𝐱̃) = φ_i,m(𝐱) × ∏_k θ^(k)_i,m(x_k, x̃_k), where the product only runs over those site indices for which the occupancy is different in configurations 𝐱 and 𝐱̃. The update factor θ^(k)_i,m(x_k, x̃_k) is given as θ^(k)_i,m(x_k, x̃_k) = ϵ_x̃_k,m,k^(i)/ϵ_x_k,m,k^(i) if k ≤ i, and 1 otherwise, and can therefore easily be evaluated in constant time. This means that the full cost to update the amplitude for the connected configuration 𝐱̃ only scales as 𝒪(N M K), where K is the number of local updates that are employed. Within the considered lattice models with nearest neighbor interactions, the number of updates is at most K=2, and for ab initio systems the occupancy changes on at most 4 orbitals through the application of the Hamiltonian.
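As a hedged illustration of this scheme (not the GPSKet implementation; the dense parameter layout and function names are our own), the sketch below caches the products φ_i,m for a reference configuration, applies the θ update factors for a configuration differing on K sites, and checks the result against a full recomputation.

import numpy as np

def compute_phi(eps, x):
    """phi[i, m] = prod_{j <= i} eps[i, x_j, m, j]; direct cost O(N^2 M)."""
    N, D, M, _ = eps.shape
    phi = np.ones((N, M), dtype=complex)
    for i in range(N):
        for j in range(i + 1):
            phi[i] *= eps[i, x[j], :, j]
    return phi

def amplitude_from_phi(eps, x, phi):
    """Assemble the normalized AR-GPS amplitude from cached phi products."""
    N, D, M, _ = eps.shape
    psi = 1.0 + 0.0j
    for i in range(N):
        cond = np.exp(np.sum(eps[i, :, :, i] * (phi[i] / eps[i, x[i], :, i])[None, :], axis=1))
        psi *= cond[x[i]] / np.sqrt(np.sum(np.abs(cond) ** 2))
    return psi

def update_phi(eps, phi, x, x_new, sites):
    """Low-rank update phi(x) -> phi(x_new); cost O(N M K) for K changed sites."""
    phi_new = phi.copy()
    for k in sites:
        theta = eps[:, x_new[k], :, k] / eps[:, x[k], :, k]   # theta^{(k)}_{i,m}, shape (N, M)
        theta[np.arange(phi.shape[0]) < k] = 1.0              # only rows with i >= k change
        phi_new *= theta
    return phi_new

rng = np.random.default_rng(2)
N, D, M = 8, 2, 3
eps = rng.normal(size=(N, D, M, N)) + 1j * rng.normal(size=(N, D, M, N))

x = rng.integers(0, D, size=N)
phi = compute_phi(eps, x)

x_new = x.copy()
sites = [2, 5]                          # K = 2 local occupancy changes
for k in sites:
    x_new[k] = (x_new[k] + 1) % D

phi_fast = update_phi(eps, phi, x, x_new, sites)
print(np.allclose(phi_fast, compute_phi(eps, x_new)))                        # expected: True
print(np.allclose(amplitude_from_phi(eps, x_new, phi_fast),
                  amplitude_from_phi(eps, x_new, compute_phi(eps, x_new))))  # expected: True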
http://arxiv.org/abs/2306.05425v1
20230608175956
MIMIC-IT: Multi-Modal In-Context Instruction Tuning
[ "Bo Li", "Yuanhan Zhang", "Liangyu Chen", "Jinghao Wang", "Fanyi Pu", "Jingkang Yang", "Chunyuan Li", "Ziwei Liu" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CL", "cs.HC" ]
MIMIC-IT: Multi-Modal In-Context Instruction Tuning
====================================================
High-quality instructions and responses are essential for the zero-shot performance of large language models on interactive natural language tasks. For interactive vision-language tasks involving intricate visual scenes, a large quantity of diverse and creative instruction-response pairs is imperative for tuning vision-language models (VLMs). Nevertheless, the current availability of vision-language instruction-response pairs in terms of quantity, diversity, and creativity remains limited, posing challenges to the generalization of interactive VLMs. Here we present MultI-Modal In-Context Instruction Tuning (MIMIC-IT), a dataset comprising 2.8 million multimodal instruction-response pairs, with 2.2 million unique instructions derived from images and videos. Each pair is accompanied by multi-modal in-context information, forming conversational contexts aimed at empowering VLMs in perception, reasoning, and planning. The instruction-response collection process, dubbed Syphus, is scaled using an automatic annotation pipeline that combines human expertise with GPT's capabilities. Using the MIMIC-IT dataset, we train a large VLM named Otter. Extensive evaluations on vision-language benchmarks show that Otter demonstrates remarkable proficiency in multi-modal perception, reasoning, and in-context learning. Human evaluation reveals that it effectively aligns with the user's intentions. We release the MIMIC-IT dataset, instruction-response collection pipeline, benchmarks, and the Otter model. ^*Equal Contribution ^Project Lead Corresponding Author
§ INTRODUCTION
Recent advancements in artificial intelligence have focused on conversational assistants <cit.> that possess a strong ability to understand user intentions <cit.> and then execute actions <cit.>. In addition to the strong generalization ability of large language models (LLMs), the notable achievements of these conversational assistants can be attributed to the practice of instruction tuning <cit.>. It involves fine-tuning LLMs on a range of tasks specified through diverse and high-quality instructions <cit.>. By incorporating instruction tuning, LLMs acquire a heightened comprehension of user intentions <cit.>, enabling them to exhibit improved zero-shot capabilities even in previously unseen tasks <cit.>. One potential reason for the zero-shot performance gain by instruction tuning is that it internalizes the context <cit.>, which is preferred in user interactions, especially when the user input omits commonsense context. Conversational assistants that excel in language tasks have achieved remarkable success. However, an optimal conversational assistant should be able to address tasks involving multiple modalities. This requires access to a diverse and high-quality multi-modal instruction-following dataset. The LLaVA-Instruct-150K dataset <cit.>, also known as LLaVA, is the pioneering vision-language instruction-following dataset. It is constructed using COCO <cit.> images, instructions and responses obtained from GPT-4 <cit.> based on image captions and object bounding boxes.
Although inspiring, LLaVA-Instruct-150K exhibits three limitations. (1) Limited visual diversity: The dataset's visual diversity is constrained due to its exclusive reliance on the COCO image. (2) Single image as visual data: it utilizes a single image as visual data, while a multi-modal conversational assistant should possess the capability to process multiple images or even extensive videos. For instance, it should effectively provide answers when a user presents a collection of images (or a sequence of images, such as a video) alongside the instruction: "Help me think of an album title for these images." (3) Language-only in-context information: it depends solely on language for in-context information, whereas a multi-modal conversational assistant should integrate multi-modal in-context information to better comprehend user instructions. For example, an assistant could more accurately align its description of an image with the tone, style, or other aspects if the human user provides a concrete image example of the desired attributes. Addressing these limitations, we introduce MultI-Modal In-Context Instruction Tuning (MIMIC-IT). MIMIC-IT is characterized by: (1) Diverse visual scenes, incorporating images and videos from general scenes, egocentric view scenes, and indoor RGB-D images across various datasets. (2) Multiple images (or a video) as visual data, supporting instruction-response pairs accompanied by any number of images or videos. (3) Multi-modal in-context information, featuring in-context information formulated in multi-modal formats, including multiple instruction-response pairs and multiple images or videos (see <ref> for data format clarification). To efficiently generate instruction-response pairs, we introduce Sythus, an automated pipeline for instruction-response annotation inspired by the self-instruct method <cit.>. Sythus employs system message, visual annotation, and in-context examples to direct the language model (GPT-4 or ChatGPT) in generating instruction-response pairs based on visual context, including timestamps, captions, and object information, targeting three fundamental capabilities of vision-language models: perception, reasoning, and planning (refer to <ref>). Additionally, instructions and responses are translated from English into seven languages to support multi-lingual usage. On MIMIC-IT, we train a multi-modal model Otter based on OpenFlamingo <cit.>. We evaluate Otter's multi-modal capabilities in two aspects: (1) ChatGPT evaluation on the MMAGIBenchmark <cit.>, comparing Otter's perception and reasoning abilities with other recent vision-language models (VLMs), where Otter demonstrates the strongest performance. (2) Human evaluation on the Multi-Modality Arena <cit.>, where Otter outperforms other VLMs, achieving the highest Elo rating. Furthermore, we assess Otter's few-shot in-context learning ability using the COCO Caption dataset <cit.>, with results showing Otter's superior performance over OpenFlamingo in all few-shot settings. In summary, our contributions include: * MultI-Modal In-Context Instruction Tuning (MIMIC-IT) dataset, a dataset comprising 2.8M multi-modal in-context instruction-response pairs, with 2.2 million unique instructions, across various real-life scenes. * Syphus, an automatic pipeline built with LLMs to generate high-quality and multi-lingual instruction-response pairs based on visual context. 
* Otter, a multi-modal model that demonstrates robust multi-modal perception and reasoning capabilities, effectively following human intent while exhibiting adeptness in in-context learning.
§ RELATED WORK
§.§ Multi-modal Instruction Tuning Dataset The notion of instruction tuning in multi-modal models was initially introduced in the work called Multi-Instruct <cit.>, which encompassed a wide range of multi-modal tasks <cit.> involving visual understanding and multi-modal reasoning, such as Visual Question Answering <cit.>. Similarly, Mini-GPT4 <cit.> created its instruction-based dataset by merging Conceptual Caption <cit.>, SBU <cit.>, and LAION <cit.> with handwritten instruction templates. More recently, LLaVA-Instruct-150K <cit.> has elevated the quality of instruction tuning datasets by utilizing self-instruct and GPT-4 <cit.>, along with handwritten seed instructions on COCO images <cit.>. While these previous works on multi-modal instruction tuning primarily focused on general scene images, our approach categorizes our data sources into indoor scenes, outdoor scenes, conversations, and egocentric videos. Additionally, drawing inspiration from the image-text interleaved structure of the MMC4 dataset <cit.>, our approach further distinguishes itself by incorporating a multi-modal in-context format into instruction tuning. §.§ Multi-modal Foundation Models With the recent success of ChatGPT <cit.>, GPT-4 <cit.>, and other LLMs <cit.>, recent studies have started to explore incorporating information from other modalities into pretrained language models. These studies extend the capabilities of LLMs to more tasks and modalities and can be categorized into two classes: (i) Multi-model Aggregation. These approaches <cit.> take an LLM as a dispatch scheduler and connect different expert models through it to allow for different tasks. Language serves as an interface to call expert visual-language models within their respective task domains. However, this approach is limited in that each expert model cannot be trained individually on new tasks. (ii) End-to-End Trainable Models. These approaches <cit.> connect models from different modalities into integrated end-to-end trainable models, also known as multi-modal foundation models. Among them, Otter, built on the large-scale image-text interleaved pretrained model OpenFlamingo <cit.>, is the first open-sourced model to demonstrate the power of multi-modal in-context instruction tuning.
§ MULTI-MODAL IN-CONTEXT INSTRUCTION TUNING DATASET
We aim to build the MIMIC-IT dataset to support more VLMs in acquiring the ability to comprehend the real world. In this section, we provide an overview of the MIMIC-IT dataset, starting with the data format in <ref> and our automatic instruction generation pipeline, Sythus, in <ref>. §.§ MIMIC-IT Data Format Each instance in the MIMIC-IT dataset comprises an instruction-response pair and a set of N images. We regard it as a query example given by the tuple (I_q, R_q, X_q), where X_q = {x_j}_j=1^N. Here, I_q denotes the q-th instruction in our dataset, R_q represents the response, and X_q refers to the images or videos [Videos can be viewed as ordered sequences of images.]. Our primary objective is to develop a visual language model p_θ(R_q | (I_q, X_q)), parametrized by trainable parameters θ, which generates the response R_q for each query (I_q, X_q). The query example above defines the standard instruction tuning process of a visual language model.
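For illustration only, a query example of this form can be represented by a simple Python structure such as the one below; the field names are our own and do not correspond to the released file schema. The in_context field anticipates the in-context examples C_ψ(I_q, X_q) defined next.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Example:
    instruction: str            # I: the instruction
    response: str               # R: the response
    images: List[str]           # X: paths or IDs of the N associated images / video frames

@dataclass
class MimicItInstance:
    query: Example                                             # (I_q, R_q, X_q)
    in_context: List[Example] = field(default_factory=list)    # C_psi(I_q, X_q), M examples

# A toy instance with one in-context example (all strings are made up for illustration)
instance = MimicItInstance(
    query=Example(
        instruction="What differences do you notice between these two frames?",
        response="The second frame shows an additional car parked near the entrance.",
        images=["frame_0001.jpg", "frame_0002.jpg"],
    ),
    in_context=[
        Example(
            instruction="Describe what changed in this pair of surveillance images.",
            response="A pedestrian has entered the scene on the left sidewalk.",
            images=["ctx_0001.jpg", "ctx_0002.jpg"],
        )
    ],
)
print(len(instance.in_context), len(instance.query.images))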
Further, we define a set of in-context examples as {(I_k, R_k, X_k)}_k=1^M, where M is the size of the set. We then define a context function C_ψ: (I_q, X_q) ↦ {(I_k, X_k)}_k=1^M to associate the in-context examples with the current query example. In summary, each datum in the MIMIC-IT dataset is represented in the following format, a query example together with its corresponding in-context examples: d_q = (I_q, R_q, X_q, C_ψ(I_q, X_q)), d_q ∼ 𝒟. Now the visual language model that incorporates in-context examples can be denoted as p_θ(R_q | (I_q, X_q, C_ψ(I_q, X_q))). C_ψ is task-dependent; we apply different approaches to organize the in-context examples with the current query example. The details will be presented in <ref> and illustrative examples will be showcased in <ref>. §.§ Sythus: Automatic Instruction-Response Generation Pipeline We present Sythus (see <Ref>), an automated pipeline for generating high-quality instruction-response pairs in multiple languages. Building upon the framework proposed by LLaVA <cit.>, we utilize ChatGPT to generate instruction-response pairs based on visual content. To ensure the quality of the generated instruction-response pairs, our pipeline incorporates system messages, visual annotations, and in-context examples as prompts for ChatGPT. System messages define the desired tone and style of the generated instruction-response pairs, while visual annotations provide essential image information such as bounding boxes and image descriptions. In-context examples assist ChatGPT in learning within the context. Since the quality of the core set impacts the subsequent data collection process <cit.>, we employ a cold-start strategy to enhance the in-context examples before the large-scale query. During the cold-start stage, in-context examples are collected by prompting ChatGPT solely through system messages and visual annotations, employing a heuristic approach. This stage concludes only when satisfactory in-context examples are identified. In step 4, once the instruction-response pairs are obtained, the pipeline translates them into Chinese (zh), Japanese (ja), Spanish (es), German (de), French (fr), Korean (ko), and Arabic (ar). For further details, please refer to Appendix <ref>, and task-specific prompts can be found in Appendix <ref>. §.§ Visual Data Exploration Acknowledging the importance of high-quality visual annotations and the need for diverse vision-language instructions that align with the distribution of real-world visual content, we curate a collection of seven image and video datasets spanning a wide spectrum of scenes, from general to specific. Encompassing various topics, the MIMIC-IT dataset covers general scene understanding and reasoning, spotting general and subtle differences, and egocentric view comprehension to assist VLMs in future AR headsets. In the subsequent sections, we will present the application scenarios of our dataset: General Scene Understanding in <ref> and Egocentric View Understanding in <ref>. In each sub-task, we elaborate on the process of organizing various data into an in-context instruction tuning format, based on the previously established guidelines. §.§.§ General Scene Understanding For understanding general scenes, we include five tasks: (1) LLaVA-Interleaved, (2) Spot The Difference, (3) Visual Story Telling, (4) Dense Captions, and (5) TV Show Captions. LLaVA-Interleaved (LA-I). Learning with in-context examples is essential for effective instruction tuning.
To achieve this, we refine the LLaVA-Instruct-150K <cit.> dataset by retrieving ten in-context examples for each instruction-response pair in LLaVA-Instruct-150K, building LLaVA-Interleaved (LA-I). We identify the in-context examples of each instance based on instruction text-to-text similarity or image-to-image similarity. Further details on locating in-context examples and the data sources for LA-I can be found in the Appendix. Spot The Difference (SD). Learning to discern differences between images is vital for understanding real-world changes. Our study encompasses two interrelated task types in Spot The Difference (SD), addressing varying complexity levels in difference identification. The first type, General Scene Difference, involves creating a pair of images by determining the most similar one to the current image, utilizing image-to-image similarity relationships from COCO2017 <cit.>. The second type, Subtle Difference, features pairs of similar images with subtle distinctions sourced from Spot-the-Diff <cit.>, extracted from surveillance footage. For the first type, we prompt ChatGPT using original image captions and object detection annotations, while for the second type, we employ natural language difference descriptions as annotations. The resulting instruction-response pairs focus on identifying differences between the paired images. Visual Story Telling (VIST). Beyond traditional scene understanding, the ability to generate coherent and engaging narratives based on visual input expands the context comprehension of Visual Language Models (VLMs). To enable this, we propose a task using the Visual Storytelling dataset <cit.>, which includes event-based image sequences and corresponding inquiry questions. Given that image annotations often contain narratives and timelines not directly observable, we instruct ChatGPT to act as a viewer answering questions about the images. The prompts also incorporate thought-provoking inquiries to promote creativity. Each task instance comprises multiple images and instruction-response pairs, providing in-context examples. Dense Captions (DC). Expanding the scope of video understanding, DC features dense captions from <cit.> corresponding to clips within longer videos. The instructions pose a diverse set of questions, addressing the general visual content of the video, human actions and behaviors, the chronological sequence of events, and causal relationships. This approach encourages VLMs to delve deeper into the intricacies of video content. TV Show Captions (TVC). The primary purpose of incorporating TV show clips with high-level captions into the training process of VLMs is to enhance their social reasoning abilities and deepen their understanding of complex character dynamics. By organizing drama clips from <cit.> to analyze character relationships and motivations, we aim to challenge VLMs to move beyond mere perception and demonstrate their reasoning capabilities within the context of TV show narratives. This focused approach is crucial for fostering advanced VLMs capable of effectively handling diverse real-world situations and user queries. §.§.§ Egocentric View Understanding Indoor Event Planning (IEP). Emphasizing the planning capabilities of virtual assistants, we utilize visual inputs consisting of a collection of 2D photos depicting a room. We gather indoor scene RGB-D images from ScanNetv2 <cit.> and sample them into multiple 2D visual inputs, representing a room's layout from a first-person perspective.
We prompt ChatGPT to generate instructions that direct humans to perform various activities in indoor spaces. Initially, we have ChatGPT create a personality for the room owner. Subsequently, the generated plans should be intimately related to the room's layout and the generated room-owner personality, underlining the importance of context awareness in VLMs. This approach ensures that models can effectively support users across diverse indoor scenarios. Ego4D (E4D) <cit.>. Utilizing E4D's egocentric videos, we strive to enable VLMs to function effectively as augmented reality (AR) assistants in real-life scenarios. By prompting ChatGPT to generate instructions based on visual descriptions, our goal is to simulate practical interactions between users and AR assistants. To this end, we devise assistant-related questions and tasks that demand context-aware responses. For instance, Instruction: What should I do now? Response: Based on my observation, you can now proceed to do.... This focused approach underscores the potential of VLMs in providing valuable insights and assistance across a diverse range of daily life situations. §.§ Dataset Statistics <Ref> presents the essential statistics pertaining to the generated data. Our dataset comprises over 2.8 million instruction-response pairs, wherein each pair includes at least one multi-modal in-context example and one language-only in-context example. Among these pairs, there are 2.2M unique instructions. Furthermore, to examine the characteristics and diversity of the instructions (refer to <ref> (a)) and responses (refer to <ref> (b)), we analyze the verb-noun structure present in them, referring to <cit.>. Specifically, we employ spaCy for parsing the instructions, extracting the verb closest to the root, and retrieving its first direct noun object[<https://github.com/explosion/spacy-models/releases/tag/en_core_web_md-3.5.0>]. We plot the top 20 most frequently occurring root verbs alongside their top 4 direct noun objects. Our findings reveal that the sentence structure of responses exhibits greater diversity compared to that of instructions. Moreover, we demonstrate diversity in terms of the length of instructions/responses, the number of images per instruction, and the number of in-context examples per instruction, as depicted in <ref> (c).
§ EMPIRICAL EVALUATION
In this section, we showcase the diverse applications of the MIMIC-IT dataset and the potential capabilities of a vision-language model (VLM) trained on it. Firstly, in <ref>, we introduce Otter, an in-context instruction-tuned model developed using the MIMIC-IT dataset. Next, in <ref>, we explore various methods for training Otter on the MIMIC-IT dataset and discuss numerous scenarios in which Otter can be effectively employed. Finally, in <ref> to <ref>, we present a comparative analysis of Otter's performance against other VLMs across an array of benchmarks. §.§ Otter: A Multi-Modal In-context Instruction Tuned Model Otter is designed to support multi-modal in-context instruction tuning based on the OpenFlamingo <cit.> model, which involves conditioning the language model on the corresponding media, such as an image that corresponds to a caption or an instruction-response pair. §.§ Usage Examples and Demonstrations Scene Understanding and Reasoning. The MIMIC-IT dataset comprises approximately 2.8 million in-context instruction-response pairs, which are structured into a cohesive template to facilitate various tasks.
The following template encompasses images, user instructions, and model-generated responses, utilizing the Human and Assistant role labels to enable seamless user-assistant interactions.
<image>Human:instruction Assistant:<answer>response<|endofchunk|>
Training the Otter model on the MIMIC-IT dataset allows it to acquire different capacities, as demonstrated by the LA and SD tasks. Trained on the LA task, the model exhibits exceptional scene comprehension, reasoning abilities, and multi-round conversation capabilities. Meanwhile, on the SD task, the model acquires the ability to adeptly spot general differences or subtle distinctions within daily scenes. We showcase response examples from Otter after training on the MIMIC-IT dataset in <ref>, highlighting its ability to understand situations and reason in a multi-round conversation style. Learning with In-context Examples. As mentioned in <ref>, regarding the concept of organizing visual-language in-context examples, we demonstrate here the acquired ability of the Otter model to follow inter-contextual instructions after training on the LA-T2T task (refer to Appx. for other tasks). The organized input data format is as follows:
# Multiple in-context examples with similar instructions
<image>Human:instruction Assistant:<answer>response<|endofchunk|>
# ....
<image>Human:instruction Assistant:<answer>response<|endofchunk|>
# Query example
<image>Human:instruction Assistant:<answer>
The Otter model's demonstration of regulating its expressions by referencing in-context examples is illustrated in <ref>. Egocentric Visual Assistant. A distinctive feature of the MIMIC-IT dataset is its inclusion of a comprehensive collection of videos and sequential images in an egocentric view, derived from the IEP and E4D scenarios. In the IEP scenario, the content emphasizes understanding and planning within indoor environments, incorporating instructions and responses designed to guide the model in event planning based on interior layouts. The E4D scenario, on the other hand, tailors instructions and responses specifically for first-person augmented reality (AR) headset assistant applications. These two datasets collectively serve to bolster the model's proficiency in perceiving scenes from a first-person viewpoint, strategizing for impending tasks, and providing valuable insights and suggestions to AR headset users. Tailored to this part of the data, we train an egocentric visual assistant, termed Otter-E, which is specifically designed for AR headset applications. As a result, the Otter-E model emerges as an exceptional and visionary Visual Language Model for AR headsets, paving the way for a groundbreaking and immersive experience. In the bottom image of <ref>, Otter-E demonstrates its ability to perceive the first-person view and respond to users' questions, such as guiding users to land a small aircraft (in real-life scenarios, users are not encouraged to consult visual assistants for such hazardous actions). §.§ ChatGPT Evaluation In <ref>, we utilize the MMAGIBench framework <cit.> to provide an extensive evaluation of the perception and reasoning capabilities of vision-language models.
The perception benchmark consists of data derived from COCO images and social network images (e.g., Twitter), covering tasks such as coarse scene and object recognition, fine-grained OCR, celebrity identification, and recognition of well-known locations. The reasoning benchmark, on the other hand, is performed across three dimensions: attribute reasoning, relation reasoning, and future prediction. Current evaluation metrics for vision-language models, like VQAv2 <cit.>, exhibit shortcomings in terms of robustness. For instance, VQAv2 primarily assesses single-word or phrase responses, while many modern models generate sentence outputs. To bridge this gap, we evaluate the models by asking ChatGPT to compare their label predictions with the ground truth labels for each input. A test sample is considered correct if ChatGPT's response indicates that the prediction aligns with the corresponding label. For a more in-depth understanding of MMAGIBench, we recommend referring to the original source <cit.>. <ref> (a) demonstrates that Otter outperforms VideoChatGPT <cit.> by 6.8% in accuracy and 1.8% on the MSVD <cit.> zero-shot question answering and captioning benchmarks, respectively. Similar substantial margins are also observed on the MSRVTT <cit.> dataset. §.§ Human Evaluation Multi-Modality Arena <cit.> uses an Elo rating system to evaluate the usefulness and alignment of VLM responses. The Elo rating system calculates the relative skill levels of players, as commonly used in chess and other competitive games. The difference in Elo ratings between two models predicts the outcome if they were matched against each other. This system works well for evaluating conversational AI models, because multiple models can have pairwise "battles" responding to the same inputs in a user-blind evaluation. <ref>(b) shows that Otter demonstrates superior usefulness and alignment, achieving the highest Elo rating among recent VLMs. §.§ Few-shot In-context Learning Metric Evaluation Otter is finetuned based on OpenFlamingo, an architecture designed for multi-modal in-context learning. Finetuned with the MIMIC-IT dataset, Otter outperforms OpenFlamingo by a substantial margin on COCO caption (CIDEr) <cit.> few-shot evaluation (see <ref>(c)). As expected, the finetuning also brings a marginal performance gain on zero-shot evaluation.
§ DISCUSSION
Limitations. Though we have iteratively refined the system message and instruction-response examples, ChatGPT is prone to language hallucinations; therefore, it might generate incorrect responses. Generally, more trustworthy language models are desired for self-instruct data generation. Future Works. In the future, we plan to support more embodied AI datasets such as Language-Table <cit.> and SayCan <cit.>. We also consider improving the instruction collection with more trustworthy language models or generation techniques. Conclusion. In this work, we propose MIMIC-IT, a large-scale multi-modal in-context instruction tuning dataset. We leverage an automatic pipeline, Syphus, to enable this dataset to cover a diverse set of visual scenes and creative instructions in eight languages. MIMIC-IT empowers our model, Otter, to achieve state-of-the-art performance on perception and reasoning benchmarks as well as in human evaluations.
This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). We thank Peiyu Fu, Xuli Chen, and Mehdi Cherti for their professional advice on the in-context examples for the translation queries in Japanese, French, German, Spanish, Korean, and Arabic.
§ TOTAL COST AND CHATGPT VERSION
We construct MIMIC-IT using the ChatGPT-0301 version. Overall, we query 1,006,746,240 tokens (859,677,150 input tokens and 147,069,090 output tokens). The estimated total cost is $20,134.9248.[<https://openai.com/pricing>]
§ CONTENT COPYRIGHT AND LICENSE
The licenses of the datasets we used in this work are listed below.
Visual Data | Image License | Instruction-Response License
MS-COCO <cit.> | Custom | CC BY-NC-SA
Spot-the-diff <cit.> | Unknown | CC BY-NC-SA
ScanNetv2 <cit.> | Non-commercial | CC BY-NC-SA
ActivityNet Captions <cit.> | Unknown | CC BY-NC-SA
Visual Storytelling <cit.> | Unknown | CC BY-NC-SA
TV Captions <cit.> | Unknown | CC BY-NC-SA
Ego4D <cit.> | Non-exclusive, non-transferable | CC BY-NC-SA
§ SYTHUS: AUTOMATIC INSTRUCTION GENERATION PIPELINE
Safety and Ethical Filtering. Since we use GPT to generate instructions and responses, we generally follow the GPT content policy for safe and ethical use. This policy filters out output suspected of promoting unfair opportunities, stereotyping, over- or under-representation, explicit content, disinformation, or unreliable information. Multi-lingual Support. We enrich the dataset by using GPT to translate the English instruction-response pairs into 7 additional languages: Chinese, Japanese, Spanish, German, French, Korean, and Arabic. See the prompt for the multi-lingual translation query in <ref>.
§ ANNOTATION PROMPT
In this section, we present the prompts used to query ChatGPT for all datasets in detail. Each prompt contains a system message and in-context examples.
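As a hedged sketch of how such a query could be assembled (this is not the authors' released code; the helper function, prompt text, and message layout are illustrative assumptions), the pieces described above (system message, visual annotations, and in-context examples) map naturally onto the ChatGPT chat-completion message format:

import openai  # pre-1.0 interface, matching the ChatGPT-0301 era used in the paper

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user

def build_messages(system_message, in_context_pairs, visual_annotation):
    """Assemble a query: system message, then in-context examples, then the new annotation."""
    messages = [{"role": "system", "content": system_message}]
    for annotation, instruction_response in in_context_pairs:
        messages.append({"role": "user", "content": annotation})
        messages.append({"role": "assistant", "content": instruction_response})
    messages.append({"role": "user", "content": visual_annotation})
    return messages

# Illustrative (made-up) prompt pieces
system_message = (
    "You are an AI visual assistant. Given timestamps, captions and object information "
    "of a video, generate diverse instruction-response pairs about perception, reasoning "
    "and planning, as if you were watching the video yourself."
)
in_context_pairs = [
    ("Captions: a man opens a fridge ... Objects: fridge, milk carton ...",
     "Instruction: What is the man likely to do next?\nResponse: He will probably take out the milk ..."),
]
new_annotation = "Captions: two children play football in a park ... Objects: ball, goal ..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",
    messages=build_messages(system_message, in_context_pairs, new_annotation),
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])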
http://arxiv.org/abs/2306.01514v2
20230602130311
Zeeman and Orbital Driven Phase Transitions in Planar Josephson Junctions
[ "D. Z. Haxell", "M. Coraiola", "D. Sabonis", "M. Hinderling", "S. C. ten Kate", "E. Cheah", "F. Krizek", "R. Schott", "W. Wegscheider", "F. Nichele" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
We perform supercurrent and tunneling spectroscopy measurements on gate-tunable InAs/Al Josephson junctions (JJs) in an in-plane magnetic field, and report on phase shifts in the current-phase relation measured with respect to an absolute phase reference. The impact of orbital effects is investigated by studying multiple devices with different superconducting lead sizes. At low fields, we observe gate-dependent phase shifts of up to φ_0=0.5π which are consistent with a Zeeman field coupling to highly-transmissive Andreev bound states via Rashba spin-orbit interaction. A distinct phase shift emerges at larger fields, concomitant with a switching current minimum and the closing and reopening of the superconducting gap. These signatures of an induced phase transition, which might resemble a topological transition, scale with the superconducting lead size, demonstrating the crucial role of orbital effects. Our results elucidate the interplay of Zeeman, spin-orbit and orbital effects in InAs/Al JJs, giving new understanding to phase transitions in hybrid JJs and their applications in quantum computing and superconducting electronics. Keywords: Hybrid materials, superconductor-semiconductor, phase transitions, orbital effect, spin-orbit interaction, 2DEG, φ-junction
Josephson junctions (JJs) defined in hybrid superconductor-semiconductor materials are the subject of intense investigation as building blocks of gate-tunable superconducting <cit.> and Andreev <cit.> qubits, along with transistors <cit.>, mixers <cit.> and rectifiers <cit.> for superconducting electronics. Additional functionalities are enabled by the interplay between spin-orbit interaction and external magnetic fields, including spin-dependent <cit.> and non-reciprocal supercurrents <cit.>, topological phase transitions <cit.> and anomalous shifts in the ground state <cit.>. The latter constitute a shift in the energy minimum away from a phase difference φ=0 across the JJ, to 0<φ<π by breaking of time-reversal symmetry <cit.> or to φ=π by a Zeeman-induced phase transition <cit.>. Epitaxially-grown InAs/Al heterostructures <cit.> are a promising platform to realize these complex devices, due to their high electron mobility, excellent superconducting properties <cit.> and prospect of scalability. To date, tunneling spectroscopy experiments of planar InAs/Al JJs have revealed the onset of zero-energy states at large in-plane magnetic fields <cit.>, and more refined devices <cit.> have since shown zero-energy states accompanied by closure and reopening of the superconducting gap, consistent with a topological transition. Supercurrent measurements in superconducting quantum interference devices (SQUIDs) demonstrated gate-tunable phase shifts in small magnetic fields <cit.>, as well as large phase jumps at larger fields <cit.> accompanied by a minimum in the supercurrent amplitude, also consistent with a topological transition <cit.>. However, several questions remain on the behavior of planar JJs subject to in-plane magnetic fields. For instance, Ref.
<cit.> reported anomalous phase shifts at small magnetic fields which were considerably larger than theoretical expectations <cit.>. Additionally, orbital effects can resemble the behavior expected from a topological transition <cit.>: a magnetic flux threading the cross-section underneath the superconducting leads can produce non-monotonic switching currents <cit.> together with closure and reopening of the induced superconducting gap. In this context, it is crucial to understand the mechanisms underlying phase shifts in planar JJs in an in-plane magnetic field, to fully harness their properties in quantum computation and superconducting electronics applications. In this work, we present a comprehensive investigation of planar SQUIDs in in-plane magnetic fields. An advanced device geometry allowed simultaneous measurements of the Andreev bound state (ABS) spectrum of a planar JJ and its current-phase relation (CPR), including anomalous phase shifts relative to an absolute phase reference. The role of orbital effects was studied by measuring several devices with varying size of the superconducting leads. For small in-plane magnetic fields oriented perpendicular to the current flow in the JJ, that is along the direction of the Rashba spin-orbit field, we observed phase shifts in the CPR which depended linearly on magnetic field and varied strongly with gate voltage, similar to Ref. <cit.>. For simplicity, we define this as a Type A phase shift. Spectroscopic measurements demonstrated that Type A phase shifts in the CPR were highly correlated with phase shifts of ballistic ABSs in the JJs, but were found to be independent on the size of the superconducting contacts. Upon further increase in magnetic field, we observed a rapid increase of the anomalous phase shift, which did not depend on gate voltage but was instead strongly correlated with the length of the superconducting contacts, indicating an orbital origin. We define this as a Type B phase shift. Strikingly, Type B phase shifts were accompanied by both a local minimum in the amplitude of the CPR and a closure and reopening of the superconducting gap, which might resemble a topological transition. We discuss similarities and differences of our observations with respect to previous work. Our results establish a new baseline understanding of InAs/Al JJs subject to in-plane magnetic fields, and guide towards a more complete understanding of anomalous phase shifts and topological transitions in planar JJs. § RESULTS AND DISCUSSION Experiments were performed on six devices. Figure <ref>(a) shows a false-colored scanning electron micrograph of Device 1, the principal device under study, which consisted of a planar SQUID fabricated in a heterostructure of InAs (pink) and epitaxial Al (blue) <cit.>. The device was covered by a HfO_2 dielectric layer, onto which Au gate electrodes (yellow) were deposited. The superconducting loop, defined in the epitaxial Al, contained a superconductor-normal semiconductor-superconductor (SNS) JJ and a narrow Al constriction. The SNS junction had length L=80 nm, width W=2.5 μ m and Al leads of length =250 nm. The constriction had width W_cons.=130 nm, chosen to limit the switching current of the planar SQUID, while still being much larger than that of the SNS junction. This asymmetric configuration resulted in a phase drop across the SNS junction of φ≈2π(Φ/Φ_0), where a flux Φ=A threaded the area A=10.2 (μ m)^2 enclosed by the SQUID loop (Φ_0=h/2e is the superconducting flux quantum). 
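The phase drop quoted above, φ ≈ 2π(Φ/Φ_0), together with the loop area A = 10.2 μm², fixes the perpendicular-field period of the SQUID oscillations at Φ_0/A. A minimal numerical check of that period, using only fundamental constants and the quoted area, is sketched below; it is a consistency check, not additional data.

```python
# Perpendicular-field period of the SQUID oscillations, from the loop area quoted above.
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

Phi_0 = h / (2 * e)  # superconducting flux quantum, ~2.068e-15 Wb
A = 10.2e-12         # SQUID loop area, m^2 (10.2 square microns)

B_period = Phi_0 / A  # perpendicular field needed to thread one flux quantum
print(f"{B_period * 1e6:.0f} uT")  # ~203 uT, consistent with the ~200 uT period used in the analysis

# One period of perpendicular field corresponds to a 2*pi winding of the phase
# difference across the SNS junction: phi = 2 * pi * (B_perp * A) / Phi_0.
```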
Differently from previous work <cit.>, where two InAs JJs were used, the Al constriction cannot introduce anomalous phase shifts in an in-plane magnetic field due to the absence of spin-orbit and orbital effects. A superconducting probe was integrated close to one end of the SNS junction, comprising a contact of epitaxial Al separated from the SNS junction by a tunnel barrier defined in the InAs. The transparency of the tunnel barrier was controlled by the gate voltages and , applied to the left and right tunnel gates respectively. The carrier density in the SNS junction was controlled via a top-gate voltage . An additional gate was kept at =0 throughout. Devices 2 to 5 were similar to Device 1 except for , resulting in different orbital coupling to in-plane magnetic fields [see Fig. <ref>(b)]. Each measurement presented here was acquired in parallel with measurements of a Reference Device fabricated on the same chip, which consisted of a SQUID with two Al constrictions of different widths [see Fig. <ref>(c)]. Parallel conduction in the InAs surrounding Reference Devices was prevented by setting a global gate to =-1.5 V. Switching currents I were measured using fast current ramps and voltage triggers. A ramped current was injected into the SQUID loop while monitoring the voltage across the device with an oscilloscope. The switching current was defined as the value of at which exceeded a threshold. Particular care was taken to inject the current by symmetrically biasing the measurement circuit, to prevent significant voltage build-up between SQUID and gates. Each CPR data point shown here was obtained by averaging over 32 data points measured with >0 and 32 with <0. This procedure allowed us to improve the experimental accuracy, limit the effect of the broad switching current distributions typical of planar devices <cit.> and cancel trivial phase shifts originating from the kinetic inductance of the loop <cit.>. The CPR of the SNS junction was obtained by subtracting the switching current of the Al constriction from that of the SQUID loop, which had a value between 30 and 45 μ A for all devices. Tunneling conductance measurements were performed by low-frequency lock-in techniques. A voltage bias + was sourced at the tunneling probe and the resulting AC current and voltage gave the differential conductance G≡/. Global magnetic fields were applied via a three-axis vector magnet, nominally along the directions , and as indicated in Fig. <ref>(a). Further details on electronic measurements and on the procedures used to accurately align the chip to the external magnetic field are presented in the Supporting Information. Figure <ref>(d) shows the CPR of Device 1 at =0 (blue line, left axis) and Reference Device (gray line, right axis) at =0.1 T. We highlight the maximum switching current /2 and a -field shift , which was measured where the CPR crossed zero with positive slope (circle and triangle for Device 1 and Reference Device, respectively). Figures <ref>(e) and (f) show /2 and , respectively, as a function of and for various values of . Black triangles in Fig. <ref>(e) represent magnetic field shifts measured in the Reference Device. In Fig. <ref>(e) we plot , that is the maximum supercurrent /2 averaged over positive and negative . We observe a non-monotonous dependence of as a function of , with minima at =±||=±0.6 T (see turquoise arrow). The magnetic field shift in Fig. <ref>(f) shows two distinctive trends. 
For ||≲0.4 T, shows a systematic deviation with respect to the Reference Device (Type A shift, orange shaded area). Type A shifts were larger for =0 (purple) than for =-1.6 V (red). For ||≳0.4 T we observe a more pronounced shift (Type B shift, green shading), without any measurable gate voltage dependence. Notably, at =±, where the supercurrent was at a minimum, the shift was approximately half a SQUID period, corresponding to a phase shift of ∼±π. At =0.9 T, the magnetic field shift accumulated in Device 1 exceeded one SQUID period. Finally, we note a weak "S"-shaped dependence of , both for Device 1 and the Reference Device, which persisted after accurate alignment of the external magnetic field (see Supporting Information). We speculate that the residual trend in originated from flux focusing <cit.> or a non-linearity of the vector magnet. Figure <ref>(g) shows Δ, that is as in Fig. <ref>(f) after subtraction of the data at =-1.6 V, which is the most negative top-gate voltage and follows the trend of the Reference Device for ||≤0.4 T. At each gate voltage, the field shift (circles) was approximately linear in , as highlighted by the linear fits (solid lines). The slope β extracted from the linear fits increased for more positive . Remarkably, no significant phase shift of either Type A or B was observed for in-plane fields applied along the transverse direction, as shown in Fig. <ref>(h) for Type A shifts (see Supporting Information for further details). The lack of Type A shifts as a function of implies a direction-dependent coupling to the external field, with a coupling strength indicated by β. We now present CPR data obtained from Devices 2, 3 and 4, where was 400, 350 and 180 nm, respectively. Switching currents /2 are shown in Figs. <ref>(a, c, e) for Devices 2-4 respectively, with field shifts in Figs. <ref>(b, d, f) for each device (colored markers) alongside those of a Reference Device measured in parallel (black triangles). Devices 2, 3 and 4 showed a qualitatively similar behavior to Device 1, despite having =0.4 T, =0.4 T and =0.8 T, respectively. We repeated the analysis on Type A phase shifts presented in Fig. <ref>(g) on the data of Fig. <ref>(b, d, f), and show the extracted β in Fig. <ref>(g) [see Supporting Information for more details]. As each device operated in a different range of , we compare them by plotting β as a function of Δ, the top-gate voltage relative to the most negative value at which oscillations were observed. Despite some scattering for small Δ, where data analysis is intricate due to the small switching current, we note that β follows a similar trend for all devices. In particular, β increases with Δ and does not depend on . Figure <ref>(h) shows as a function of the inverse superconducting lead length 1/. The data (blue circles) followed a linear trend, fitted by =(Φ_0/d)/ (orange line) describing one flux quantum threading an area d. The result of d=15 nm agrees with the separation of Al and InAs layers, indicating a crucial role of orbital effects in inducing Type B phase shifts. We now complement CPR measurements with spectroscopic data obtained on Device 1. Figure <ref> presents a series of differential conductance maps as a function of and , for increasing values of . All data were obtained at =-1 V (data at more values of are reported in the Supporting Information). As the tunneling probe was constituted by a superconducting lead, the differential conductance G at =0 indicates the density of states in the junction up to a bias shift of ±eΔ. 
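The lead-length scaling extracted above, one flux quantum threading the area d·L_SC under the superconducting lead with fitted d ≈ 15 nm, can be re-evaluated for the quoted lead lengths of Devices 1 to 4. The sketch below is purely a numerical check of that fitted relation against the observed transition fields; it is not an independent model.

```python
# Field scale at which one flux quantum threads the area d * L_SC under a superconducting lead.
Phi_0 = 2.067833848e-15  # superconducting flux quantum, Wb
d = 15e-9                # m, Al-to-InAs separation from the fit quoted above

# Superconducting lead lengths for Devices 1-4 as given in the text (m):
L_SC = {"Device 1": 250e-9, "Device 2": 400e-9, "Device 3": 350e-9, "Device 4": 180e-9}

for name, L in L_SC.items():
    B_star = Phi_0 / (L * d)
    print(f"{name}: B* ~ {B_star:.2f} T")
# Device 1: ~0.55 T, Device 2: ~0.34 T, Device 3: ~0.39 T, Device 4: ~0.77 T,
# consistent with the observed minima at 0.6, 0.4, 0.4 and 0.8 T, respectively.
```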
Further conductance peaks at zero and high bias are attributed to a residual supercurrent and multiple Andreev reflection through the tunneling probe, respectively. For ≤ 0.2 T, the conductance demonstrates a conventional spectrum containing multiple Andreev bound states, some of which have transmission approaching unity and an induced superconducting gap of approximately 180 μ eV. For ≥0.2 T, a finite density of states at the Fermi level was induced in the lead facing the tunneling probe, resulting in a direct mapping of the density of states in the junction <cit.>. For =0.4 T, phase-dependent conductance features approached zero energy, resulting in a significant decrease of the superconducting gap [Fig. <ref>(c)]. For ==0.6 T [Fig. <ref>(d)], conductance features oscillated close to =0 with no clear separation between states at positive and negative bias. As was further increased, a gap reopened in the Andreev bound state spectrum, with discrete states around zero energy. Finally, the gap closed for ≥1 T. Conductance features close to =0 in Fig. <ref>(e) were reminiscent of zero-bias peaks reported for similar devices at high in-plane magnetic fields and understood in terms on topological states <cit.>. However, zero-bias features of Fig. <ref>(d) were not robust to small changes in the top-gate voltage or tunnel gate voltage (see Supporting Information). Figure <ref> compares spectroscopic maps obtained at =0.2 T (a-d) and 0.4 T (e-h), for multiple values of . The value of at which the ABS energy was closest to the gap was found for each value of , as indicated by the blue circles. This was determined as the value where the gradient ∂G/∂ was zero, at a fixed bias and averaged over multiple periods. Blue dashed lines indicate the minimum energy position at =-1.4 V, which is defined as =0 in Fig. <ref>(d). For both =0.2 T and 0.4 T, a clear deviation of the ABS spectrum took place as a function of . The shift in perpendicular field Δ measured from the ABS spectrum is summarized in Fig. <ref>(i) as a function of for =0.2 T (blue) and =0.4 T (orange). The Type A shift Δ obtained from the CPR is plotted on the same axis [squares, dashed lines] and shows remarkable agreement. After demonstrating the occurrence of two types of anomalous phase shifts taking place in hybrid SQUIDs in in-plane magnetic fields, we now discuss their origin. Type A phase shifts, which were approximately linear in and depended on [Fig. <ref>(g)], are associated with spin-orbit-induced anomalous phase shifts <cit.>, as recently reported in similar devices <cit.>. As phase shifts were much more pronounced for in-plane fields aligned perpendicular to the current flow direction () than parallel to it () [Fig. <ref>(h)], and were stronger for higher electron density (more positive  <cit.>), we conclude that spin-orbit interaction in our samples is predominantly of Rashba type. Type A phase shifts reported here, which are of similar magnitude than in Ref. <cit.>, are considerably larger than theoretical predictions <cit.>. Reference <cit.> proposed that the observed phase offsets could be explained by the contribution of several low-transmission modes. However, here we show that Type A shifts obtained from the CPR matched those from tunneling spectroscopy [Fig. <ref>], where conductance features at both high and low bias showed a phase shift. 
Since conductance features at low bias correspond to ABSs with high transmission, we conclude that highly transmissive modes participate in the overall phase shift despite their large Fermi velocity. While this result does not resolve the discrepancy between theoretical predictions and experiments <cit.>, it rules out diffusive modes with small Fermi velocities as the dominant cause of Type A phase shifts. Type B phase shifts were concomitant with a reentrant supercurrrent and a closure and reopening of the superconducting gap, independent of top-gate voltage . At =±, where the supercurrent was at a minimum and the proximitized superconducting gap was suppressed, the phase shift was φ_0≈±π. For ||>, a gap reopened in the ABS spectrum and the phase shift increased to above 2π. A phase shift occurring with a supercurrent minimum and gap closure indicates a 0-π transition at =, where the minimum ABS energy moves from φ≈0 to φ≈π due to coupling of the magnetic and superconducting orders by Zeeman interaction <cit.>. All experimental signatures of Type B shifts were shown to depend on the length , consistent with a flux quantum threading an area d underneath the superconducting leads. The experimentally obtained value of d=15 nm agrees with the separation between the Al and InAs layers (13.4 nm), up to some flux penetration into each layer. We therefore conclude that orbital effects strongly contributed to inducing Type B phase shifts. Type B shifts were observed for in-plane fields <1 T, much lower than the values B_0-π≳9 T expected for InAs/Al heterostructures <cit.>. We explain this by orbital effects, which were responsible for the induced gap reduction, forcing ABSs to move closer in energy. This enabled ABSs to cross even with small Zeeman splitting. Previous work reported similar phase shifts <cit.>, where a π jump in the junction phase was accompanied by a minimum in the switching current. However, phase shifts depended on the top-gate voltage, unlike the Type B shifts reported here. This shows that orbital effects alone are not sufficient to explain the results of Ref. <cit.>. § CONCLUSIONS In conclusion, measurements of the current phase relation and Andreev bound state spectrum in hybrid quantum interference devices showed phase shifts with two distinct characters, referred to as Types A and B. Type A phase shifts are attributed to coupling of the external magnetic field with an internal Rashba spin-orbit field, resulting in a φ_0-junction. Highly transmissive bound states were shown to make a significant contribution to the phase shift, which was much larger than expected for a single ballistic channel. The discrepancy might be due to the presence of many transverse modes, which future studies could investigate by varying the width and length of the Josephson junction. Type B shifts were consistent with a 0-π transition, where orbital effects in the superconducting leads played a critical role. This suggests that the geometry of the superconducting leads, and their impact on orbital effects, is a key ingredient for realizing π-junctions for superconducting electronics <cit.> or in interpreting signatures of topological superconductivity <cit.>. § METHODS Devices were fabricated from a hybrid superconducting-semiconducting heterostructure grown by molecular beam epitaxy on a semi-insulating InP (001) substrate. 
The heterostructure consisted of a step-graded InAlAs buffer, onto which an In_0.75Ga_0.25As/InAs/In_0.75Ga_0.25As quantum well was grown with a termination of two GaAs monolayers. The step-graded metamorphic buffer compensated the lattice mismatch between the InP and InAs, while the GaAs capping layers provided a barrier for In diffusion into the superconducting layer. The 8 nm InAs layer hosted a two-dimensional electron gas (2DEG), buried 13.4 nm below the semiconductor surface, as measured by transmission electron microscopy <cit.>. A 15 nm layer of Al was deposited onto the semiconductor surface, in situ without breaking vacuum in the growth chamber. Measurements of a gated Hall bar in this material showed a peak mobility of 18000 cm^2V^-1s^-1 at an electron sheet density of 8·10^11 cm^-2. This gave an electron mean free path of l_e≳260 nm, implying that all Josephson junctions measured in this work were in the ballistic regime along the length L of the junction. The first step in patterning superconducting quantum interference devices (SQUIDs) was to isolate each device from its neighbors by etching large mesa structures. This was done by selectively removing the Al layer with Transene type D, followed by a 380 nm chemical etch into the III-V heterostructure using a 220:55:3:3 solution of H_2O:C_6H_8O_7:H_3PO_4:H_2O_2. The second step was to pattern the Al device features, by wet etching in Transene type D at 50^∘C for 4 s. A dielectric layer of Al_2O_3 (3 nm) and HfO_2 (15 nm) was deposited across the chip by atomic layer deposition, then gate electrodes were defined on top of the dielectric layer by evaporation and lift-off. Fine gate features were defined in a first step consisting of 5 nm Ti and 20 nm Au; a second deposition of Ti (10 nm) and Al (420 nm) connected the gates on top of the mesa structures to bonding pads, which were defined in the same step. Measurements were performed in a dilution refrigerator with a base temperature at the mixing chamber below 10 mK. Magnetic fields were applied using a three-axis vector magnet, nominally oriented perpendicular to the device () and in the plane of the device (, ). Magnetic fields applied in the direction parallel to the Rashba spin-orbit field, or equivalently the direction perpendicular to the current flow, are denoted by . The in-plane field was rotated by 90 degrees to give , perpendicular to the spin-orbit field. Measurements of the differential conductance were performed with standard lock-in amplifier techniques. An AC voltage =3 μ V was applied to the contact of the superconducting probe with frequency 311 Hz, in addition to a DC source-drain voltage . The AC current and DC current I_SD flowing through the probe to ground was measured via a current-to-voltage (I-V) converter. The differential voltage across the tunnel barrier was measured to give the differential conductance G≡/. The transparency of the tunnel barrier was controlled with the gate voltages (, ), which are denoted by ≡= (symmetric configuration). Measurements were performed in the tunneling regime, where G≪ G_0=2e^2/h. A constant bias offset of 43 μ V was subtracted from all datasets, due to a DC offset at the I-V converter. Since the tunnel probe was superconducting, the measured conductance was a convolution of the density of states (DoS) in the probe and the superconductor-normal-superconductor (SNS) junction: G=G_Probe∗ G_SNS. This amounted to a shift in G_SNS features by ±eΔ^*. 
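The ballistic-regime estimate quoted earlier in this section (l_e ≳ 260 nm from a mobility of 18,000 cm²/Vs at a sheet density of 8×10¹¹ cm⁻²) follows from the standard spin-degenerate 2DEG relation l_e = (ħ/e)·μ·k_F with k_F = √(2πn). The relation itself is assumed here rather than taken from the text; the sketch below only checks the arithmetic.

```python
import math

# Elastic mean free path of the InAs 2DEG from the Hall-bar mobility and density quoted above.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
e = 1.602176634e-19     # elementary charge, C

mu = 18_000 * 1e-4      # mobility: 18,000 cm^2/Vs -> m^2/Vs
n = 8e11 * 1e4          # sheet density: 8e11 cm^-2 -> m^-2

k_F = math.sqrt(2 * math.pi * n)    # Fermi wavevector of a spin-degenerate 2DEG
l_e = (hbar / e) * mu * k_F         # elastic mean free path
print(f"l_e ~ {l_e * 1e9:.0f} nm")  # ~266 nm, consistent with the quoted l_e >~ 260 nm
```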
For elevated in-plane magnetic fields, the superconducting gap in the tunnel probe was softened, leading to a finite DoS at low energy. This enabled measurements of the DoS in the SNS junction using an effectively normal probe, such that the measured conductance was directly proportional to the DoS in the SNS junction <cit.>. In addition to conductance peaks at high source-drain bias corresponding to Andreev bound states (ABSs), we can attribute some features in the conductance spectrum to multiple Andreev reflections or to disorder in the tunnel barrier and sub-gap states in the DoS of the tunnel probe <cit.>. For tunneling spectroscopy measurements at an in-plane magnetic field, a first calibration measurement was performed at each field-value by sweeping the perpendicular field across a range >±3 mT. The position of zero perpendicular field was determined from spectroscopic features, including the size of the superconducting gap, the shape and peak conductance of high-bias features, and the sharpness of spectral lines. Then, each spectroscopic map was taken across >5 oscillation periods such that spectral features were consistent over the full range. Current-biased measurements were performed on the same device. Both contacts at the superconducting probe were floated, such that no current flowed through the probe. The tunnel barrier gate voltages, which also covered large areas of the superconducting loop, were set to =-1.5 V to deplete the InAs surrounding the Al features, thereby preventing parallel conduction and forming a well-defined current path. A DC current was applied by symmetrically biasing the SQUID loop, such that the device potential was not raised with respect to the ground. Hence, the nominal voltage applied to gate electrodes was the same as the potential difference between gates and the device. A ramped current signal was applied from a waveform generator at a frequency of 133 Hz. The voltage drop across the loop was measured with an oscilloscope. The switching current, the current at which the SQUID transitioned from the superconducting to resistive state, was recorded when exceeded a voltage threshold of less than 15 % of the maximum voltage in the resistive state. This measurement was repeated 32 times, and the resulting switching current values were averaged to account for stochastic fluctuations in the switching current <cit.>. Values of switching current reported in this work were averaged between values obtained for positive and negative bias currents . § ASSOCIATED CONTENT Supporting Information is available at [URL]. It includes: details on materials and device fabrication; additional details on Reference Device measurements; extraction of the current phase relation and phase shift from switching current measurements; current phase relation measurements in an in-plane magnetic field transverse to the junction axis, along ; discussion of the origin of zero bias peaks in tunneling spectroscopy; additional tunneling spectroscopy measurements as a function of transverse in-plane field , at different top-gate voltages and in an additional device with large superconducting lead length ; additional measurements of the Type B phase shift in different devices; and a discussion of the kinetic inductance of the superconducting loop. Supporting Information contains additional references <cit.>. § ACKNOWLEDGMENTS We are grateful to C. Bruder, W. Riess and H. Riel for helpful discussions. 
We thank the Cleanroom Operations Team of the Binnig and Rohrer Nanotechnology Center (BRNC) for their help and support. F. N. acknowledges support from the European Research Council (grant number 804273) and the Swiss National Science Foundation (grant number 200021_201082). § DATA AVAILABILITY The data that support the findings of this study are available upon reasonable request from the corresponding author. myc myc2 § REFERENCE DEVICE External magnetic fields were applied using a three-axis vector magnet, nominally aligned in-plane and perpendicular to the surface of the chip. However, small misalignments of the external magnet with respect to the chip mean that large in-plane fields resulted in a perpendicular component, causing a flux through the superconducting loop. To account for this, a Reference Device was fabricated on the same chip, consisting of two Al constrictions in parallel [see Fig. 1(c) of the Main Text]. An example of the switching current of the Reference Device is shown in Fig. <ref>(a), as a function of perpendicular magnetic field . The average switching current ⟨⟩=40 μ A (green dashed line) corresponds to the switching current of the wide Al constriction, =130 nm, giving similar values to that of Device 1. The switching current after subtracting the average, -⟨⟩ is shown on the right axis. The maximum switching current of the narrow Al constriction, =100 nm, is inferred as half the peak-to-peak amplitude of oscillations, Δ/2. The position where -⟨⟩=0 is assumed to be the perpendicular field at which there is no flux threading the loop, [marked by the triangle]. Figure <ref>(b) shows the maximum switching current of the wide and narrow constriction as a function of in-plane magnetic field (green and blue circles, respectively). Full (empty) markers correspond to the values obtained for positive (negative) applied current . At =0, the switching current appears to be slightly suppressed relative to that at a small in-plane field. This is attributed to a change in the interplay between quasiparticle populations in the superconductor and the number of quasiparticle relaxation channels in the superconducting leads <cit.>. At zero magnetic field, quasiparticles in the Al constriction are confined, with few relaxation channels in the superconducting leads, causing a suppression in the superconducting gap. At small magnetic fields, quasiparticles are generated in the large superconducting leads connected to the constrictions, providing additional relaxation channels for quasiparticles in the constriction region. This partially alleviates the suppression of the superconducting gap relative to the zero-field case, leading to an increase in the switching current. At larger magnetic fields, more quasiparticles are generated in the superconductor resulting in suppression of the switching current. This effect was observed for both in-plane and perpendicular magnetic fields. For >0.9 T, a large reduction was observed in the switching current of both constrictions, presumably caused by some portion of the superconducting loop becoming resistive. For this reason, no further studies were performed in this regime. The perpendicular magnetic field offset of the Reference Device as a function of in-plane magnetic field is shown in Fig. <ref>(c). Misalignment between the vector magnet and the chip is evident at large in-plane magnetic fields, as indicated by the dashed lines. This was considered to be identical for the Reference Device and Device 1 since both are on the same chip. 
At small , external fields were distorted, presumably due to flux-focusing effect by the large Al leads <cit.>. Flux-focusing effects in the Reference Device for in-plane fields directed along the junction axis, , were consistent with those measured in all devices. § EXTRACTING THE CURRENT-PHASE RELATION An example of the switching current of Device 1 is shown in Fig. <ref>(a) (circles), as a function of perpendicular magnetic field . A slowly-varying background is associated with the switching current of the Al constriction, which had a large switching current of I≈37 μ A. A weak dependence of the background switching current on is consistent with a change in the number and distribution of quasiparticle relaxation channels, as described in the previous section <cit.>. To remove this background, the data was fitted with a polynomial function over four complete periods, each defined by =Φ_0/A=200 μ T where Φ_0=h/2e is the superconducting magnetic flux quantum and A=10.2 (μ m)^2 is the area enclosed by the superconducting loop. This is shown as the dashed line in Fig. <ref>(a). Due to the large asymmetry between the critical currents of the SNS junction and the Al constriction, the current-phase relation (CPR) of the SNS junction was taken to be the switching current of the SQUID after subtracting the background. This is plotted as the circles in Fig. <ref>(b), at =0 for different top-gate voltages [denoted by color, defined in Fig. <ref>(c)]. The data showed a large forward skewness, consistent with the presence of highly transmissive ABSs in the junction <cit.>. The CPR of an SNS junction containing N modes is described by I(φ) = -2e/ħ∑^N_n=1∂ E_A,n(φ)/∂φ, where E_A,n=Δ√(1-τ_nsin^2(φ/2)) is the energy of the n^th ABS with transmission τ_n, Δ is the superconducting gap and φ is the phase difference across the SNS junction. The total supercurrent is a sum over the contributions of each ABS in the junction. The junctions studied in this work all had a large width W=2.5 μ m, and therefore contained many transverse conducting modes. Since detailed knowledge about individual modes is missing, we instead consider an effective transmission τ̅ to describe the properties of the CPR: the transmission which would reproduce the CPR in a junction where all modes have identical transmission. With the application of an in-plane magnetic field, the CPR is expected to obtain a phase shift φ_0 <cit.>. Accounting for these considerations, we obtain the equation I(φ) = I_Nτ̅sin(φ-φ_0)/(φ-φ_0)/Δ, where I_N=(e/2ħ)N̅Δ and N̅ is the effective number of modes in the junction. The phase difference across the junction is related to the perpendicular magnetic field by φ=2π(· A/Φ_0). The switching current as a function of perpendicular magnetic field is therefore fitted using Eq. <ref> obtaining three parameters: I_0, τ̅ and φ_0≡ 2π(· A/Φ_0). The maximum switching current is not necessarily equal to I_N, so it is obtained as the maximum of I(φ) from the fit. The fits to the data in Fig. <ref>(b) are shown as the solid lines, with the maximum switching current and effective transmission τ̅ plotted in Figs. <ref>(c) and (d), respectively. Note that is not necessarily equal to the critical current of the SNS junction, since stochastic fluctuations of the phase result in a switching current much lower than the critical current in planar Josephson junctions <cit.>. The maximum switching current decreased as a function of top-gate voltage , until no oscillations were visible at <-1.6 V. 
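The fit function introduced in the section above appears with its denominator tied to the Andreev bound-state energy defined just before it; a natural reading is I(φ) = I_N τ̄ sin(φ−φ_0) / [E_A(φ−φ_0)/Δ] with E_A(φ) = Δ√(1−τ̄ sin²(φ/2)), i.e. the standard short-junction current-phase relation with an effective transmission and an anomalous phase. The sketch below assumes this reading, with illustrative parameter values, and reproduces the forward skewness discussed in the text.

```python
import numpy as np

# Short-junction CPR with effective transmission tau_bar and anomalous phase phi_0.
# Assumed form (a reading of the fit function above, with E_A(phi) = Delta*sqrt(1 - tau_bar*sin^2(phi/2))):
#   I(phi) = I_N * tau_bar * sin(phi - phi_0) / (E_A(phi - phi_0) / Delta)

def cpr(phi, I_N, tau_bar, phi_0):
    """Supercurrent (in units of I_N) versus phase difference phi (radians)."""
    x = phi - phi_0
    return I_N * tau_bar * np.sin(x) / np.sqrt(1.0 - tau_bar * np.sin(x / 2.0) ** 2)

phi = np.linspace(-np.pi, np.pi, 721)
I = cpr(phi, I_N=1.0, tau_bar=0.8, phi_0=0.0)  # I_N and tau_bar here are illustrative values
skew = phi[np.argmax(I)] / np.pi               # peak position > 0.5*pi indicates forward skewness
print(f"peak at phi = {skew:.2f} * pi")        # ~0.63*pi for tau_bar = 0.8
```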
The effective transmission did not change appreciably across this range, indicating the presence of highly transmissive ABSs across the full gate range. Results are plotted for positive (>0) and negative (<0) bias current directions, as the full and empty markers respectively. Changing the current direction resulted in a reversal of the skewness of the CPR, since the external perpendicular field had a fixed direction. The sign of the phase φ used in Eq. <ref> was therefore reversed for negative , as was the associated value of coming from the fit. This meant that a larger φ_0 always corresponded to a larger , independent of the current direction. § TYPE B PHASE SHIFTS OF CURRENT-PHASE RELATION At a given in-plane magnetic field , the CPR of the SQUID was found by measuring the switching current as a function of perpendicular field , which was swept multiple times across a small range such that it was stable. The switching current was measured for positive and negative currents, before changing the top-gate voltage . Once the switching current had been collected for all top-gate voltages, was ramped to the next value. The in-plane field was always swept away from =0, such that sweeps in the positive and negative directions began at =0. As such, all measurements are relative to the values obtained at zero in-plane field in that field sweep. Since fitting with Eq. <ref> always returned values for φ_0 in the range [-π, π], results at a given in-plane field were shifted by integer multiples of the oscillation period such that values followed a monotonic trend. The magnetic field was swept multiple times, from -1 T to 1 T, before measurements were taken to minimize hysteresis effects. Nevertheless, some hysteresis was observed at =0, where flux focusing effects were most prevalent. Hence, results for >0 and <0 were combined such that current-averaged features were symmetric for ||≥0.1 T. The results of Figs. 1 and 3 of the Main Text were plotted following this procedure. An identical procedure was followed for in-plane magnetic fields applied transverse to the junction axis, . The CPR as a function of in-plane magnetic field is plotted in Fig. <ref>, where each CPR is normalized to the maximum switching current at that value of . The top-gate voltage was =-1 V, the same as in the tunneling spectroscopy maps of Fig. 3 in the Main Text. Each CPR trace is plotted with respect to of the Reference Device at that in-plane field [see Fig. <ref>(c)], indicated by the vertical dashed line at =0. The position of zero current through the SNS junction is marked by the second dashed line, which encloses the shaded green area to =0. The phase shift shown in Fig. 1(f) of the Main Text is evident, increasing to Δ/≈0.5 at =0.6 T, where the switching current is minimal [see Fig. 1(e) of the Main Text]. For larger , the phase offset moves towards zero, or equivalently towards Δ/=1 as shown in Fig. 1(f) of the Main Text. This result is consistent with the interpretation of a large phase shift induced by orbital effects in the superconducting leads. As the superconducting gap in the leads is suppressed by orbital effects, ABSs in the junction are pushed closer together, such that some cross zero energy due to Zeeman splitting at the finite in-plane field. When the superconducting gap is sufficiently small, most states have sufficient energy splitting that the ground state is at φ=π rather than φ=0 <cit.>. This explains the phase shift of φ=2π(Δ/)≈π at =0.6 T, where the orbital effects are strongest. 
For >0.6 T, the superconducting gap in the leads increases as the orbital effects become weaker. This means that fewer ABSs have sufficient energy splitting to shift the phase of the ground state, and φ_0 moves away from π. The phase shift extends over a range of in-plane fields since the junction contains many ABSs with different transmissions, which will therefore require different Zeeman energies to cross. § CURRENT-PHASE RELATION DEPENDENCE ON Phase shifts induced by orbital effects rely on an in-plane mangetic field generating a flux underneath the superconducting leads. This is particular for in-plane fields applied along the junction axis (), since a field applied in a perpendicular direction () would not generate desructive interference of ABSs in the superconducting leads <cit.>. A strong direction dependence is also predicted for spin-orbit related effects, since planar Josephson junctions in InAs are expected to have dominant Rashba spin-orbit coupling directed perpendicular to the junction axis. This has implications for anomalous phase shifts, as well as proposed topological transitions where angular dependence is a crucial ingredient <cit.>. Figure <ref>(a) shows the maximum switching current of SQUID oscillations in Device 1 as a function of in-plane field . The maximum switching current decreased for larger ||, until no oscillations in the switching current were observed for ||>0.5 T. No minimum and increase in the switching current was observed, nor was there any associated phase jump [Fig. <ref>(b)], unlike for [see Figs. 1(e, f) of the Main Text]. This is consistent with a lack of orbital effects in the superconducting leads. The small difference between for Device 1 and the Reference Device, measured for the same applied , is attributed to different flux focusing effects between the two devices. Figure <ref>(c) shows the perpendicular field offset relative to the most negative top-gate voltage, =-1.6 V. No gate-dependence was present in Δ, and there was no linear trend as a function of in-plane field . The absence of gate-dependent phase shifts as a function of supports the interpretation that Type B phase shifts for are enabled by the presence of spin-orbit coupling. Switching current measurements as a function of were also performed on Device 4. Figure <ref>(d) shows the maximum switching current as a function of , for different top-gate voltages . No minimum and increase in the switching current was observed up to =0.8 T, beyond which no oscillations in switching current were visible. The corresponding offset in perpendicular field [circles, Fig. <ref>(e)] showed no deviation from that of the Reference Device [triangles, Fig. <ref>(e)]. This is consistent with Device 1 [Fig. <ref>], supporting the conclusion that orbital effects do not play a role in measurements in in-plane fields applied perpendicular to the junction axis. Figure <ref>(f) shows the perpendicular field offset relative to =0.2 V. This was chosen to be the reference in this case due to the large deviation of the =0.02 V data from the Reference Device. This was potentially due to the small switching currents at the lowest top-gate voltage, causing an unreliable fit result. Some gate-dependent trend is apparent in Fig. <ref>(f), although with a smaller gradient than observed for [see Fig. 1(f) of the Main Text]. This could be due to stray in-plane fields coupling to the primary spin-orbit direction, or to an additional spin-orbit component in the junction. 
§ ZERO-BIAS PEAK IN TUNNELING SPECTROSCOPY Tunneling spectroscopy measurements at large in-plane fields ≈0.8 T show a peak in the differential conductance G close to zero source-drain bias [see Fig. 3(e) of the Main Text]. In measurements of similar devices, a zero-bias peak (ZBP) has been associated with the emergence of a topological phase <cit.>. Here, we show additional data of the ZBP observed in Fig. 3(e) of the Main Text and comment on its origin. Figures <ref>(a-g) show the conductance G as a function of perpendicular magnetic field , for in-plane magnetic fields >0.6 T (i.e., after the closure of the superconducting gap at =0.6 T). Conductance maps show periodic lobe-like features: each map is plotted such that the center of a lobe is aligned to =0. The top-gate voltage was set to =-1 V, identical to that in Fig. 3 of the Main Text [such that Fig. <ref>(d) is the same as Fig. 3(e) of the Main Text]. A high-conductance feature is visible close to =0 in many maps, but does not appear robustly for all in-plane fields and is rarely well separated from conductance features at higher source-drain bias. To test the robustness of this ZBP, the magnetic field was fixed to =0.8 T and =0, then the top-gate was varied from =-0.92 V to =-1.05 V [Fig. <ref>(h)]. Conductance features moved close to =0 as a function of , but were not stable at =0 for more than a few millivolts. Figures. <ref>(i-k) show the differential conductance as a function of perpendicular field , at top-gate voltages offset from =-1 V by -δ V, where δ V = 0, 21 mV and 34 mV for (i-k) respectively. The conductance spectrum changed appreciably, and a high-conductance feature is evident in Fig. <ref>(j) but not in the others. Note also that the regime of Fig. <ref>(d) was not recovered in (i), despite the identical gate and field configuration. Figure <ref>(l) shows the differential conductance G as a function of tunnel-barrier gate voltage, . High-conductance features were dependent on , and moved across the low-bias region. Zero-bias peaks were shown to be sensitive to in-plane mangetic fields and top-gate voltage , and tunnel-barrier-dependent conductance features were shown to move close to =0. These results suggest that ZBPs were most likely due to ABSs coalescing close to zero energy, rather than being topological in origin. This is despite the gap closure and opening, shown in Fig. 3 of the Main Text and associated with orbital effects in the superconducting leads. This result suggests that additional levels of caution are needed in interpreting ZBPs as indicative of a topological transition, even in the presence of gap closure and reopening. We note that the top-gate voltage =-1 V was chosen to have good visibility of conductance features at low , to be in a regime of single-subband occupation (based on supercurrent measurements) and to match a value used in supercurrent measurements [see Fig. 1(e-h) in the Main Text]. It was not chosen based on the observation of a ZBP; the emergence of a ZBP after gap closure and reopening was by coincidence rather than by fine-tuning of . § TUNNELING SPECTROSCOPY AS FUNCTION OF Current-biased measurements for in-plane magnetic fields aligned perpendicular to the junction axis () are supported by tunneling spectroscopy [see Fig. <ref>]. Measurements were taken with an identical gate voltage configuration to those in Fig. 3 of the Main Text. For small values of , superconductivity in the tunnel probe was quickly softened such that conductance features occurred at low bias [Figs. 
<ref>(a, b)]. Conductance features were periodic with perpendicular magnetic field , but with a weak dependence consistent with the small switching currents oberved in Fig. <ref>. Conductance features did not resemble those of ABSs described by E_A=Δ√(1-τsin^2(φ/2)) , instead forming a complex network and crossing =0 in many places [Figs. <ref>(c, d)]. This became more pronounced at larger [Figs. <ref>(e, f)] until the superconducting gap was largely suppressed and conductance features changed very little with [Figs. <ref>(g, h)]. No reopening of the superconducting gap was observed in these spectroscopic maps, up to large in-plane fields well beyond the value at which no oscillations in the switching current were visible. Conductance features are not well described by a simple model of ballistic ABSs in a short junction, instead showing crossings and interactions at high and low bias. These results indicate the absence of a phase transition, since there was no reopening of the superconducting gap. This is consistent with the lack of orbital effects for in-plane fields applied perpendicular to the junction axis. More sophisticated modeling of ABSs would be required to understand the conductance features in detail, which is beyond the scope of this work. § TUNNELING SPECTROSCOPY FOR DIFFERENT TOP-GATE VOLTAGES Figures <ref> and <ref> show tunneling spectroscopy maps for increasing in-plane magnetic field , at top-gate voltages of =-0.6 V and =-1.4 V respectively. The tunnel barrier gates were adjusted to be in the tunneling regime, so were set to =-2.46 V and (,)=(-1.835,-1.805) V for Figs. <ref> and <ref> respectively. At =-0.6 V, many more conductance features were present relative to =-1 V [Fig. <ref>(a) compared with Fig. 3(a) of the Main Text], consistent with more modes present in the junction. In contrast, only few modes were visible at =-1.4 V [Fig. <ref>(a)]. No -dependent conductance features were observed for top-gate voltages <-1.4 V. For increasing in-plane magnetic field , superconductivity in the tunnel probe was suppressed [Figs. <ref>(b) and <ref>(b)] and -dependent conductance features moved closer to =0 [Figs. <ref>(c) and <ref>(c)]. At =0.6 T, the superconducting gap was suppressed at both top-gate voltages and conductance features had very weak -dependence close to =0 [Figs. <ref>(d) and <ref>(d)]. For larger in-plane fields, some phase-dependence appeared to recover although this was difficult to distinguish due to the poor visibility of conductance features corresponding to individual ABSs [Figs. <ref>(e, f) and <ref>(e, f)]. The superconducting gap was suppressed at =0.6 T at all measured top-gate voltages. This is consistent with current-biased measurements [see Fig. 1(e) of the Main Text], where the minimum in the switching current occurred at =0.6 T independent of top-gate voltage . These results suggest that the cause of gap closure is independent of the properties of the normal region of the junction. Since orbital effects depend only on the properties of the superconducting leads, these findings are consistent with gap closure induced by orbital effects. § TUNNELING SPECTROSCOPY IN DEVICE 5 Tunneling spectroscopy was performed in an additional device to those shown in the Main Text, which was identical to Device 1 in all aspects other than the length of the superconducting leads =400 nm. The superconducting loop in this device, Device 5, was identical to that of Device 2 [Figs. 3(a, b) of the Main Text], where the switching current was measured. 
Conductance maps for different values of in-plane magnetic field are shown in Figs. <ref> and <ref>, for =0.8 V and =0.2 V respectively. These each correspond to the situation of a large [Fig. <ref>(a)] or small [Fig. <ref>(a)] number of modes, similar to Figs. <ref> and <ref> for Device 1. On increasing , the superconducting gap in the tunnel probe was softened [Figs. <ref>(b) and <ref>(b)] and conductance features moved closer to =0 [Figs. <ref>(c, d) and <ref>(c, d)] until the gap between conductance features was closed at =0.4 T [Figs. <ref>(e) and <ref>(e)]. For larger , the gap between conductance features reopened and there was a stronger -dependence [Figs. <ref>(f, g) and <ref>(f, g)]. At =0.7 T, the gap closed again and superconducting features were suppressed [Figs. <ref>(h) and <ref>(h)]. Closure of the superconducting gap was shown to occur at =0.4 T in Device 5, for two top-gate voltages. This is consistent with the minimum in the switching current of Device 2, which had an identical SQUID loop, Al constriction and SNS junction. Tunneling spectroscopy showed a reopening of the gap between conductance features at larger in-plane fields, where a reentrant supercurrent was measured in current-biased experiments. The closure of the superconducting gap and minimum in the switching current both occurred at ≈0.4 T, the expected in-plane field at which one flux quantum threads the area underneath the superconducting leads. This supports the conclusion that gap closure in these devices is induced by orbital effects in the superconducting leads. § DEVICES WITH VARYING SUPERCONDUCTING LEAD LENGTH Measurements were performed on devices with varying superconducting lead length [see Fig. 2 of the Main Text]. Devices consisted of a superconducting loop identical to that of Device 1, other than the length of the superconducting lead which had values =400 nm, 350 nm and 180 nm for Devices 2-4 respectively. These devices did not have a tunnel probe proximal to the SNS junction, so only current-biased measurements were possible. Each device had two gates: a top-gate identical to that of Device 1 to tune the charge density in the SNS junction; and a global gate covering the exposed InAs regions around the junction and superconducting loop. The global gate was set to <-1.5 V throughout the experiment, such that the exposed InAs was depleted everywhere other than in the junction region. Switching current measurements were performed for increasing in-plane magnetic field . At each value of , the switching current was first measured across a wide range of at the most positive top-gate voltage. After subtracting a slowly varying background corresponding to the Al constriction, a recognisable Fraunhofer interference pattern was observed [Fig. <ref>(a), blue line]. In Devices 2 and 3, where the superconducting leads were large, flux focusing effects were strong. This caused a minimum in the Fraunhofer interference pattern at relatively small perpendicular fields . It was therefore important to consider the envelope of switching current oscillations due to Fraunhofer interference. This was extracted from the data by filtering out the high frequency oscillatory component, and fitting the result with the following equation I() = I_0|sinc(-B^(env)_0/B_min.)| There were three free parameters: the maximum current I_0, the perpendicular field at which the current was maximum B^(env)_0 and the perpendicular field at which the first minimum occurred B_min.. The result of this fit for the data in Fig. 
<ref>(a) is shown as the dashed red line. The in-plane field was aligned such that the maximum of the Fraunhofer pattern was close to =0 for each value of in-plane field. This was different in each device, due to flux focusing effects, so a different alignment was needed for each device. As such, the Reference Device was measured with each field alignment, to make a direct comparison. At a given in-plane magnetic field, the switching current was measured as a function of perpendicular magnetic field for different top-gate voltages . The most negative top-gate voltage was chosen such that no oscillations were visible, where the SNS junction is assumed to be completely closed. The bias current therefore only flowed through the Al constriction, giving a direct evaluation of the switching current of the constriction as a function of . This background switching current was subtracted from the data at other , to obtain the current-phase relation at each top-gate voltage [see Fig. <ref>(b)]. The data (circles) for each [colors, defined in (c)] was fitted with Eq. <ref>, adjusted to account for the envelope given by Eq. <ref>: I() = I_0|sinc(-B^(env)_0/B_min.)|·τ̅sin[2π(-)A/Φ_0]/[2π(-)A/Φ_0]/Δ Equation <ref> takes the fixed parameters B^(env)_0 and B_min. obtained from the fit to Eq. <ref>. There are therefore only three free parameters, as in Eq. <ref>: I_0, τ̅ and . As for Device 1, is calculated as the maximum I(). The fit for the data in Fig. <ref>(b) is shown as the colored lines, with the results for and τ̅ in (c) and (d) respectively [positive (negative) bias currents are indicated by the full (empty) markers]. This procedure is applied to every switching current measurement for Devices 2-4, to obtain the values shown in Fig. 2 of the Main Text. Measurements for positive and negative are combined using the same method as for Device 1, as described above. § TYPE A PHASE SHIFTS IN THE CURRENT PHASE RELATION Gate-dependent Type A phase shifts were observed in all devices, for in-plane fields ||≲||, where is the field at which the superconducting gap is suppressed by orbital effects. The results for Devices 1-4 are summarized in Fig. <ref>. The perpendicular field offset relative to the most negative gate voltage, Δ, was linear with in-plane field with steeper gradient β for more positive top-gate voltage [Figs. <ref>(a-d), colors defined in (e-h)]. The data (circles) are fitted with a linear curve (lines) to extract the gradient β, which is plotted in Figs. <ref>(e-h) (filled circles) for Devices 1-4 respectively. The maximum switching current at =0 is also plotted as a function of top-gate voltage (empty squares). The trend of β with is similar to that of the maximum switching current . At the maximum , where was large, β≳100 μ T/T for all devices independent of the superconducting lead length . The size of the shift Δ did not depend strongly on the switching current at that in-plane field, rather on the switching current at =0. This is because the switching current at an in-plane field is significantly influenced by orbital effects, independent of the carrier density at that top-gate voltage. The maximum switching current is linked to the carrier density in the junction, since at lower densities there are fewer transverse modes to carry the supercurrent <cit.>. The switching current is therefore indicative of the carrier density in the InAs, despite that the gate voltages might differ between devices due to local disorder, inhomogeneous material properties and fabrication imperfections. 
For decreasing , the carrier density decreases causing both and β to decrease [Figs. <ref>(e-h)]. This follows a trend consistent with that of Ref. <cit.>, which directly measured the spin-orbit coupling strength as a function of carrier density, in similar InAs quantum wells. However, the size of these Type A phase shifts is much larger than would be expected for a single ballistic channel <cit.>, using the spin-orbit coupling strength for InAs <cit.>. Similar observations were made in Ref. <cit.>, where anomalous phase shifts were reported for planar Josephson junctions in InAs/Al heterostructures. The anomalous phase shift was shown to be consistent with that of ABSs in tunneling spectroscopy [Fig. 4 of the Main Text], implying that the phase shift was not dominated by low transmission modes but had contributions from all modes in the junction. § TYPE A PHASE SHIFTS IN TUNNELING SPECTROSCOPY Figure 4 shows differential conductance maps for different top-gate voltages . The perpendicular field at which the ABS energy was lowest was taken to be where the partial derivative of the differential conductance with respect to perpendicular field, ∂G/∂, was zero at a fixed source-drain bias . The closest conductance feature to =0 was considered. This procedure was repeated across 5 lobes, for positive and negative bias, and extracted values of were shifted by integer multiples of the period to give values within [-/2,/2]. A similar procedure was followed by considering the position where the conductance was closest to =0, which corresponds to φ≈π. All methods gave a similar trend and similar quantitative values for the phase shift. The data plotted in Fig. 4(i) of the Main Text is the average of all values obtained from these methods, with the error bars giving the standard deviation. § PHASE SHIFTS DUE TO KINETIC INDUCTANCE OF THE SUPERCONDUCTING LOOP Switching current measurements were performed by applying large bias currents to the SQUID device. Since the epitaxial Al is very thin, it has an appreciable kinetic inductance L_K, which generates a flux Φ_K=L_K(I_cons.-I_SNS)/2, where I_cons. and I_SNS are the currents flowing in the Al constriction and SNS junction, respectively. The kinetic inductance of the loop is estimated as <cit.> L_K = N_□h/2π^2R_□/Δ≈66 pH, where N_□=38 is the number of squares in the superconducting loop, R_□≈1.5 Ω is the normal-state sheet resistance per unit square measured in a Hall bar geometry on the same material, and Δ≈180 μ eV is the superconducting gap of Al. This gives a shift of Δ B_Kin.≈110 μ T, for typical currents (I_cons.-I_SNS) in the SQUID loop. The shift Δ B_Kin. between positive and negative currents is shown in Fig. <ref>. No top-gate dependence was observed, so points were averaged over all top-gate voltages. The field shift Δ B_Kin. increased for increasing magnitude of in-plane magnetic field, consistent with an increasing kinetic inductance due to quasiparticle generation in the superconducting loop. The values of Δ B_Kin. in Fig. <ref> are consistent with the field shift estimated from the kinetic inductance in Eq. <ref>.
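The kinetic-inductance estimate above can be reproduced directly from the quoted numbers. In the sketch below, the circulating-current imbalance of 35 μA is an assumed "typical" value consistent with the constriction switching currents reported earlier, since the exact value is not stated in the text.

```python
import math

# Kinetic inductance of the Al loop and the resulting flux-induced field shift,
# reproducing the ~66 pH and ~110 uT estimates quoted above.
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

N_sq = 38            # number of squares in the superconducting loop
R_sq = 1.5           # normal-state sheet resistance, Ohm per square
Delta = 180e-6 * e   # Al superconducting gap, J

L_K = N_sq * h * R_sq / (2 * math.pi**2 * Delta)
print(f"L_K ~ {L_K * 1e12:.0f} pH")  # ~66 pH

# Flux from the circulating current imbalance, Phi_K = L_K * (I_cons - I_SNS) / 2,
# expressed as a perpendicular-field shift over the loop area A = 10.2 um^2.
I_diff = 35e-6  # A, assumed typical value (the constriction carries roughly 30-45 uA)
A = 10.2e-12    # m^2
dB = L_K * I_diff / 2 / A
print(f"shift ~ {dB * 1e6:.0f} uT")  # ~114 uT, of the same order as the ~110 uT quoted above
```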
http://arxiv.org/abs/2306.02468v2
20230604204424
The Cosmos in its Infancy: JADES Galaxy Candidates at z > 8 in GOODS-S and GOODS-N
[ "Kevin N. Hainline", "Benjamin D. Johnson", "Brant Robertson", "Sandro Tacchella", "Jakob M. Helton", "Fengwu Sun", "Daniel J. Eisenstein", "Charlotte Simmonds", "Michael W. Topping", "Lily Whitler", "Christopher N. A. Willmer", "Marcia Rieke", "Katherine A. Suess", "Raphael E. Hviding", "Alex J. Cameron", "Stacey Alberts", "William M. Baker", "Rachana Bhatawdekar", "Kristan Boyett", "Andrew J. Bunker", "Stefano Carniani", "Stephane Charlot", "Zuyi Chen", "Mirko Curti", "Emma Curtis-Lake", "Francesco D'Eugenio", "Eiichi Egami", "Ryan Endsley", "Ryan Hausen", "Zhiyuan Ji", "Tobias J. Looser", "Jianwei Lyu", "Roberto Maiolino", "Erica Nelson", "David Puskas", "Tim Rawle", "Lester Sandles", "Aayush Saxena", "Renske Smit", "Daniel P. Stark", "Christina C. Williams", "Chris Willott", "Joris Witstok" ]
astro-ph.GA
[ "astro-ph.GA" ]
0000-0003-4565-8239] Kevin N. Hainline Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0002-9280-7594] Benjamin D. Johnson Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge MA 02138 USA 0000-0002-4271-0364] Brant Robertson Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz CA 96054, USA 0000-0002-8224-4505] Sandro Tacchella Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0003-4337-6211] Jakob M. Helton Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0002-4622-6617] Fengwu Sun Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0002-2929-3121] Daniel J. Eisenstein Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge MA 02138 USA 0000-0003-4770-7516] Charlotte Simmonds Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0001-8426-1141] Michael W. Topping Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0003-1432-7744] Lily Whitler Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0001-9262-9997] Christopher N. A. Willmer Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0002-7893-6170] Marcia Rieke Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0002-1714-1905] Katherine A. Suess Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 USA Kavli Institute for Particle Astrophysics and Cosmology and Department of Physics, Stanford University, Stanford, CA 94305, USA 0000-0002-4684-9005] Raphael E. Hviding Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0002-0450-7306] Alex J. Cameron Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK 0000-0002-8909-8782] Stacey Alberts Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0003-0215-1104] William M. Baker Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0002-4735-8224]Stefi Baum Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB R3T 2N2, Canada 0000-0003-0883-2226] Rachana Bhatawdekar European Space Agency (ESA), European Space Astronomy Centre (ESAC), Camino Bajo del Castillo s/n, 28692 Villanueva de la Cañada, Madrid, Spain; European Space Agency, ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, NL 0000-0001-8470-7094]Nina Bonaventura Cosmic Dawn Center (DAWN), Copenhagen, Denmark Niels Bohr Institute, University of Copenhagen, Jagtvej 128, DK-2200, Copenhagen, Denmark Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0003-4109-304X]Kristan Boyett School of Physics, University of Melbourne, Parkville 3010, VIC, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia 0000-0002-8651-9879] Andrew J. 
Bunker Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK 0000-0002-6719-380X] Stefano Carniani Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy 0000-0003-3458-2275] Stephane Charlot Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France 0000-0002-7636-0534]Jacopo Chevallard Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK 0000-0002-2178-5471]Zuyi Chen Steward Observatory University of Arizona 933 N. Cherry Avenue Tucson AZ 85721, USA 0000-0002-2678-2560]Mirko Curti European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching, Germany Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0002-9551-0534] Emma Curtis-Lake Centre for Astrophysics Research, Department of Physics, Astronomy and Mathematics, University of Hertfordshire, Hatfield AL10 9AB, UK 0000-0003-2388-8172] Francesco D'Eugenio Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0003-1344-9475] Eiichi Egami Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0003-4564-2771]Ryan Endsley Department of Astronomy, University of Texas, Austin, TX 78712, USA 0000-0002-8543-761X] Ryan Hausen Department of Physics and Astronomy, The Johns Hopkins University, 3400 N. Charles St. Baltimore, MD 21218 0000-0001-7673-2257] Zhiyuan Ji Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0002-3642-2446] Tobias J. Looser Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0002-6221-1829] Jianwei Lyu Steward Observatory, University of Arizona, 933 N. 
Cherry Ave, Tucson, AZ 85721, USA 0000-0002-4985-3819] Roberto Maiolino Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK 0000-0002-7524-374X] Erica Nelson Department for Astrophysical and Planetary Science, University of Colorado, Boulder, CO 80309, USA 0000-0001-8630-2031]Dávid Puskás Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0002-7028-5588]Tim Rawle European Space Agency (ESA), European Space Astronomy Centre (ESAC), Camino Bajo del Castillo s/n, 28692 Villafranca del Castillo, Madrid, Spain 0000-0001-9276-7062]Lester Sandles Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK 0000-0001-5333-9970] Aayush Saxena Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK 0000-0001-8034-7802]Renske Smit Astrophysics Research Institute, Liverpool John Moores University, 146 Brownlow Hill, Liverpool L3 5RF, UK 0000-0001-6106-5172]Daniel P. Stark Steward Observatory, University of Arizona, 933 N. Cherry Ave, Tucson, AZ 85721, USA 0000-0003-2919-7495] Christina C. Williams NSF’s National Optical-Infrared Astronomy Research Laboratory, 950 North Cherry Avenue, Tucson, AZ 85719, USA 0000-0002-4201-7367]Chris Willott NRC Herzberg, 5071 West Saanich Rd, Victoria, BC V9E 2E7, Canada 0000-0002-7595-121X] Joris Witstok Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK We present a catalog of 717 candidate galaxies at z > 8 selected from 125 square arcminutes of NIRCam imaging as part of the JWST Advanced Deep Extragalactic Survey (JADES). We combine the full JADES imaging dataset with data from the JEMS and FRESCO JWST surveys along with extremely deep existing observations from HST/ACS for a final filter set that includes fifteen JWST/NIRCam filters and five HST/ACS filters. The high-redshift galaxy candidates were selected from their estimated photometric redshifts calculated using a template fitting approach, followed by visual inspection from seven independent reviewers. We explore these candidates in detail, highlighting interesting resolved or extended sources, sources with very red long-wavelength slopes, and our highest redshift candidates, which extend to z_phot∼ 18. Over 93% of the sources are newly identified from our deep JADES imaging, including 31 new galaxy candidates at z_phot > 12. We also investigate potential contamination by stellar objects, and do not find strong evidence from SED fitting that these faint high-redshift galaxy candidates are low-mass stars. Using 42 sources in our sample with measured spectroscopic redshifts from NIRSpec and FRESCO, we find excellent agreement to our photometric redshift estimates, with no catastrophic outliers and an average difference of ⟨Δ z = z_phot- z_spec⟩= 0.26. 
These sources comprise one of the most robust samples for probing the early buildup of galaxies within the first few hundred million years of the Universe's history.
§ INTRODUCTION
The earliest galaxies that appeared from the Cosmic Dark Ages fundamentally changed the Universe. For hundreds of millions of years after recombination, the decoupling of matter and radiation, the Universe's baryon content consisted predominantly of neutral hydrogen that was gravitationally pooling and collecting, pulled by early dark matter halos. Eventually these massive clouds collapsed and formed the first stars, which gave off energetic ultraviolet (UV) radiation, ionizing the neutral hydrogen medium throughout the universe. Reionization is thought to have taken place across the first billion years after the Big Bang, but exactly how this process occurred, and more specifically, what types of galaxies are responsible for this phase transition, has been an active area of research for decades <cit.>. Observations of early galaxies offer us a vital insight into the first stages of galaxy formation and evolution, and help us understand the emergence of the elements heavier than helium. To aid in understanding these distant sources, in this paper we present a sample of 717 galaxies and candidate galaxies with spectroscopic and photometric redshifts corresponding to the first 200 to 600 Myr after the Big Bang and describe their selection and properties. To explore the very early universe, researchers search for galaxies at increasingly high redshifts using deep observations from space. One of the pioneering early universe surveys was the Hubble Space Telescope (HST) Deep Field project <cit.>, a set of observations at wavelengths spanning the near-ultraviolet to near-infrared (IR). These data provided an opportunity to explore galaxy evolution out to z = 4 - 5 <cit.>. Following the success of the HDF, the next decades were spent observing multiple deep fields down to unprecedented observational depths of 30 mag (AB) at optical and near-IR wavelengths. These surveys included the Hubble Ultra-Deep Field <cit.>, the UVUDF <cit.>, the HST Great Observatories Origins Deep Survey <cit.>, the Cosmological Evolution Survey <cit.>, and the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey <cit.>. Researchers hoping to target fainter galaxies also focused on lensing clusters, leading to the Cluster Lensing and Supernova Survey with Hubble <cit.>, the Hubble Frontier Fields <cit.>, the Reionization Lensing Cluster Survey <cit.>, and the Brightest of Reionizing Galaxy survey <cit.>. It has therefore been exciting to see the fruits of these observations: the discovery of many thousands of galaxies at z > 4 <cit.>. While these sources have been found through multiple methods, the primary method of high-redshift galaxy selection relies on photometry alone. Neutral hydrogen within, surrounding, and between distant galaxies serves to absorb ultraviolet radiation, leading to what is commonly referred to as the “Lyman break” in the spectral energy distribution (SED) at 912 - 1216 Å. By identifying galaxies where this break fell between two adjacent filters at a given redshift, these sources could be selected in large quantities, as done initially in <cit.> and <cit.>. A similar approach involves fitting galaxy photometry to simulated or observed galaxy SEDs, a method that utilizes more data than pure color selection <cit.>.
These results require accurate template sets that span the full color space of the photometric data, and include the effects of both dust extinction and intergalactic medium (IGM) absorption. This template-fitting procedure is uncertain at high-redshifts given the current lack of UV and optical SEDs for galaxies in the early universe <cit.>. While both galaxy selection techniques have been used to find galaxies out to z ∼ 10 with HST, the reddest filter on the telescope's Wide-Field Camera 3 (WFC3/IR) is at 1.6μm, such that potential galaxies at higher redshifts would have their Lyman break shifted out of the wavelength range of the instrument. Exploring the evolution of galaxies at earlier times was limited by the availability of deep, high-resolution near- and mid-IR observations. This changed with the launch of the James Webb Space Telescope (JWST) in late 2021, an observatory carrying a suite of sensitive infrared instruments behind a 6.5m primary mirror. The instruments include NIRCam <cit.>, a high-resolution camera operating at 0.7 - 5.0 μm across a 9.7 square arcminute field of view, and NIRSpec <cit.>, a spectrograph operating at similar wavelengths with a unique multi-object shutter array capable of obtaining spectra at multiple resolutions. In the first year of JWST science, researchers have identified scores of candidate high-redshift galaxies at z > 9 <cit.>. Some of these sources have been spectroscopically confirmed at z > 8 <cit.>, demonstrating the efficacy of using NIRCam for early universe observations. It should be noted, however, that this is an imperfect science - <cit.> describe how the early bright z ∼ 16 candidate CEERS-93316 was spectroscopically found to be at z_spec = 4.9 with strong line emission and dust obscuration simulating the colors of a distant galaxy, a possibility discussed in <cit.> and <cit.>. One of the largest JWST Cycle 1 extragalactic surveys by time allocation is the JWST Advanced Deep Extragalactic Survey <cit.>, a GTO program that will eventually encompass 770 hours of observations from three of the telescope's instruments: NIRCam, NIRSpec, and the mid-infrared instrument MIRI. These data, which focus on the GOODS-S and GOODS-N regions of the sky, are ideal for finding and understanding the most distant galaxies through imaging and follow-up spectroscopy. Because the JADES target regions have been observed by multiple telescopes and instruments across the electromagnetic spectrum, there is a rich quantity of ancillary data for comparing with JWST images and spectroscopy. Early JADES observations resulted in the discovery of the highest-redshift spectroscopically-confirmed galaxy thus far, JADES-GS-z13-0 (z_spec = 13.20^+0.04_-0.07 )<cit.>. Because of NIRCam's wavelength range and dichroic offering simultaneous short wavelength (0.7 - 2.3 μm) and long wavelength (2.4 - 5.0 μm) images, these and other high-redshift candidates are detected in multiple bands at wavelengths longward of the Lyman break. The high-redshift galaxies that can be observed thanks to the wavelength coverage of JWST are vital for exploring the potential downturn in the number density of z>10 galaxies previously predicted by HST observations alone <cit.>. In this study, we present the results of a search through the first year of JADES NIRCam imaging of the GOODS-S and GOODS-N regions for galaxy candidates at z > 8, where we combine the deepest HST optical and near-IR observations with JADES NIRCam data taken across ten filters. 
These data are supplemented by medium-band JWST imaging in five additional filters from both the publicly available JWST Extragalactic Medium Survey (JEMS) <cit.> and First Reionization Epoch Spectroscopic COmplete Survey (FRESCO) <cit.> programs. We perform template fitting in order to select candidate high-redshift candidates, capitalizing on the large number of filters at wavelengths longer than 2μm. Because of both the unparalleled HST coverage and the mixture of medium and wide NIRCam filters present in the JADES data, these data currently represent the best opportunity for uncovering galaxies at z > 8 with minimal low-redshift interlopers. The deepest portions of the JADES dataset probe down to 5σ depths of 2.17 nJy (30.6 mag AB) at 2.7μm, currently deeper than the other similar JWST extragalactic fields studied in the literature. In addition, because of the FRESCO grism spectra and the JADES NIRSpec spectroscopy, we also have a number of spectroscopic redshifts for these sources confirming their selection, providing constraints on the accuracy of photometric redshifts for galaxies in the early universe. The structure of this paper is as follows. We begin by introducing the JADES dataset used in this study, and we discuss our data reduction and photometric and spectroscopic measurements in Section <ref>. In Section <ref> we describe how we estimate photometric redshifts and, from these results, select candidate galaxies at z > 8. We then spend the bulk of this study exploring the resulting sample in Section <ref>, separating the objects into three bins: z = 8 - 10 (<ref>), z = 10 - 12 (<ref>), and z > 12 (<ref>). We then consider candidate galaxies that fall out of our primary selection either because of their template fits (<ref>) or their proximity to brighter sources (<ref>). We also discuss the possibility of these sources being low-mass stars (<ref>), describe which candidates have been included in samples previous to this study (<ref>), and explore the impact of different galaxy template sets for photometric redshifts (<ref>). Finally, we examine the selection and further properties of these sources in Section <ref> and conclude in Section <ref>. Throughout this paper we assume the <cit.> cosmology with H_0 = 67.4 km s^-1 Mpc^-1, Ω_M = 0.315 and Ω_Λ = 0.685. All magnitudes are provided using the AB magnitude system <cit.>. § JADES IMAGING AND PHOTOMETRY JADES is a joint Guaranteed Time Observations (GTO) program between the NIRCam and NIRSpec extragalactic GTO teams that consists of NIRCam imaging, NIRSpec spectroscopy, and MIRI imaging across the GOODS-S (RA = 53.126 deg, DEC = -27.802 deg) and GOODS-N (RA = 189.229, DEC = +62.238 deg) <cit.> fields. In this section we describe the Cycle 1 JADES observations taken as of February 8 2023, the data reduction, and the measurement of fluxes and spectroscopic redshifts. The full description of these observations is provided in <cit.>. §.§ Observations In this paper we will discuss galaxy candidates selected from the NIRCam imaging in both GOODS-S, with observations taken on UT 2022-09-29 through 2022-10-10 (Program 1180, PI:Eisenstein), and GOODS-N, with observations taken on UT 2023-02-03 through 2023-02-07 (Program 1181, PI:Eisenstein). In addition, a set of NIRCam parallels (9.8 square arcmin each) were observed during NIRSpec observation PID 1210 (PI:Ferruit) on UT 2022-10-20 to 2022-10-24 within and southwest of the JADES Medium footprint in GOODS-S. 
Another set of NIRCam observations (9.8 square arcmin) parallel to NIRSpec PID 1286 (PI:Ferruit) was observed on UT 2023-01-12 to 2023-01-13 to the northwest of the JADES Deep footprint in GOODS-S. These data were partly presented in both <cit.> and <cit.>, although here we combine the full suite of JADES data observed as of February 8 2023. The total current survey area of the JADES GOODS-S is 67 square arcminutes, with 27 square arcminutes for the JADES Deep program, and 40 square arcminutes for the JADES Medium program. The filters used for JADES Deep are NIRCam F090W, F115W, F150W, F200W, F277W, F335M, F356W, F410M, and F444W (λ = 0.8 - 5.0 μm), while JADES Medium uses the same filters without F335M. For the 1286 parallel, the JADES observations include the F070W filter. The total current area of the NIRCam GOODS-N program is 58 square arcminutes. The NIRCam filters observed for GOODS-N are F090W, F115W, F150W, F200W, F277W, F335M, F356W, F410M, and F444W (λ = 0.8 - 5.0 μm). The GOODS-N observations are separated into two portions: the northwest (NW) portion, which covers 30.4 square arcminutes, and a southeast (SE) portion, which covers 27.6 square arcminutes. The NW portion was taken under PID 1181 (PI:Eisenstein) with NIRCam as the prime instrument and MIRI in parallel, while the SE portion was taken as part of the same program with NIRSpec as prime and NIRCam in parallel. We also include observations taken for the JWST Extragalactic Medium-band Survey <cit.>. These data, which are part of program PID 1963 (PIs C. Williams, S. Tacchella, M. Maseda), were taken on UT 2022-10-12. For this study, we use the NIRCam data from JEMS, which cover the Ultra Deep Field <cit.> with the NIRCam A module and, with the NIRCam B module, a region to the southwest spanning the JADES Deep and Medium portions, for a total area of 10.1 square arcminutes. The NIRCam observations in the JEMS survey were taken with the F182M, F210M, F430M, F460M, and F480M filters <cit.>. We also supplement our observations with NIRCam data from the First Reionization Epoch Spectroscopic COmplete Survey (FRESCO, PID 1895, PI P. Oesch). While FRESCO is nominally a NIRCam grism survey across GOODS-S and GOODS-N, we use its F182M, F210M, and F444W imaging of both fields to supplement the filters available in JADES. The FRESCO area extends beyond the JADES Deep and Medium region, and we do not select galaxies in that additional area due to the lack of the NIRCam filter coverage afforded by the JADES observations. We use the FRESCO grism data as well as the NIRSpec observations from PID 1210 and 1286 to measure spectroscopic redshifts for sources within our sample. The GOODS-S and GOODS-N regions have been the target of deep HST observations, and we utilize existing HST/ACS and WFC3 mosaics. We use the HST/ACS mosaics from the Hubble Legacy Fields (HLF) v2.0 for GOODS-S and v2.5 for GOODS-N <cit.>. We use data in the HST/ACS F435W, F606W, F775W, F814W, and F850LP filters.
§.§ Data Reduction
§.§.§ JADES NIRCam
The data reduction techniques used in the present study will be fully described in a future paper (Tacchella et al. in prep), but they follow the methods outlined in <cit.> and <cit.>, which we briefly summarize here. For both the JADES GOODS-S and GOODS-N observations, the data were first reduced using the JWST calibration pipeline v1.9.2, with the JWST Calibration Reference Data System (CRDS) context map 1039.
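For reference, the default stages summarized in the next paragraph can be scripted with the public jwst package roughly as follows (a minimal sketch rather than the actual JADES reduction scripts; the file names are placeholders, and the custom corrections described below are applied outside these standard calls):

```python
# Rough sketch of the default JWST pipeline stages summarized in the next paragraph.
# File names are placeholders, and the custom JADES corrections (sky flats, 1/f noise,
# wisp templates, background subtraction) are applied outside these calls.
import os

# Pin the reference-file context used for the reduction (CRDS context map 1039)
# before importing the pipeline, so that CRDS picks it up.
os.environ["CRDS_SERVER_URL"] = "https://jwst-crds.stsci.edu"
os.environ["CRDS_CONTEXT"] = "jwst_1039.pmap"

from jwst.pipeline import Detector1Pipeline, Image2Pipeline

# Stage 1: raw "uncal" exposures -> detector-corrected count-rate ("rate") images.
Detector1Pipeline.call("jw01180_example_uncal.fits", save_results=True)

# Stage 2: flat fielding and flux calibration, largely with default parameters.
Image2Pipeline.call("jw01180_example_rate.fits", save_results=True)
```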
The raw images (uncal frames) are processed using the default JWST Stage 1 pipeline, which performs the detector-level corrections and results in count-rate images (rate frames). The JWST pipeline Stage 2 involves flat fielding and flux calibration, and was run largely with the default values. We convert from counts/s to MJy/sr following <cit.>. During the data reduction, we discovered that the current long wavelength flats used in the JWST pipeline result in non-astrophysical artifacts in the final mosaics. To mitigate this effect, we developed our own sky-flats, stacking in each filter 80 - 200 source-masked raw uncal frames from across PID 1180, 1210, 1286, and JEMS. For F335M and F410M, where we did not have enough exposures to properly perform this stacking procedure, we instead constructed these sky flats via interpolation using the other wide-band LW sky flats. After Stage 2, we used custom corrections for common features seen in JWST/NIRCam data <cit.>. We fit and subtracted the 1/f noise <cit.> assuming a parametric model. To fit for the scattered-light “wisps” in the NIRCam SW channel we constructed templates by stacking our images from the JADES program (PID 1180, 1210, 1286) as well as other publicly available programs (PIDs 1063, 1345, 1837, and 2738), and then subtracted these scaled templates for the SW channel detectors A3, A4, B3, and B4 (Tacchella et al. in prep). The background was removed using the photutils Background2D class <cit.>. We created our final mosaics using the JWST Pipeline Stage 3, after performing an astrometric alignment using a custom version of the JWST TweakReg software. In both GOODS-S and GOODS-N, we calculated the relative and absolute astrometric corrections for the individual images grouped by visit and by photometric band. We matched to sources in a reference catalog created from HST F814W and HST F160W mosaics with astrometry tied to Gaia-EDR3 <cit.>. Following this alignment, we performed the default steps of Stage 3 of the JWST pipeline for each filter and visit. For our final mosaics we chose a pixel scale of 0.03 arcsec/pixel and drizzle parameter of for both the SW and LW images. §.§.§ FRESCO The FRESCO <cit.> NIRCam grism spectroscopic data in the F444W filter (λ = 3.9-5.0 ) were reduced and analyzed following the routines in <cit.> and <cit.>. Here we briefly summarize the main steps of the process. Because we aim to conduct a targeted emission line search of [O III] and Hβ lines for our z>8 galaxy candidates, and we do not expect any of them to have strong continuum emission that can be detected with grism data, we used a median-filtering technique to subtract out the remaining continuum or background on a row-by-row basis, following the methods outlined by <cit.>. We extracted 2D grism spectra using the continuum-subtracted emission-line maps for all objects that are brighter than 28.5 AB mag in the F444W band and within the FRESCO survey area. The emission lines from sources fainter than 28.5 AB mag are not expected to be detected with FRESCO. The FRESCO short-wavelength parallel imaging observations were used for both astrometric and wavelength calibration of the F444W grism spectroscopic data. We extracted 1D spectra from the 2D grism spectra using the optimal extraction algorithm <cit.> using the light profiles of sources in the F444W filter. We then performed automatic identifications of >3σ peaks in 1D spectra <cit.>, and fit these detected peaks with Gaussian profiles. 
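A schematic version of this line-fitting step is sketched below: a Gaussian fit to a detected peak in an extracted 1D spectrum, with the centroid converted to a redshift under the assumption that the line is [O III] λ5008.24 (vacuum). The "observed" spectrum here is a synthetic placeholder:

```python
# Schematic Gaussian fit to an emission-line peak in an extracted 1D grism spectrum,
# with the centroid converted to a redshift assuming the line is [O III] 5008.24 A (vacuum).
# The spectrum below is synthetic and stands in for a real optimally extracted 1D spectrum.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lam, amp, cen, sigma, cont):
    return amp * np.exp(-0.5 * ((lam - cen) / sigma) ** 2) + cont

rng = np.random.default_rng(0)
wave = np.linspace(4.30, 4.34, 80)                                   # wavelength grid (micron)
flux = gaussian(wave, 5.0, 4.321, 0.002, 0.1) + rng.normal(0.0, 0.3, wave.size)

p0 = [flux.max(), wave[np.argmax(flux)], 0.002, 0.0]                 # initial guesses
popt, _ = curve_fit(gaussian, wave, flux, p0=p0)

lam_rest_oiii = 5008.24e-4                                           # rest wavelength (micron)
z_line = popt[1] / lam_rest_oiii - 1.0
print(f"line centroid = {popt[1]:.4f} um -> z = {z_line:.3f}")
```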
We tentatively assigned spectroscopic redshifts for >3σ peaks which minimize the difference from the estimated photometric redshifts (Section <ref>). Visual inspection was performed on these tentative spectroscopic redshift solutions and spurious detections caused by either noise or contamination were removed. The final grism spectroscopic redshift sample of JADES sources will be presented in a forthcoming paper from the JADES collaboration.
§.§.§ JADES NIRSpec
In addition to FRESCO data, we discuss NIRSpec spectroscopic redshifts in Section <ref>, and these were reduced following the same procedure as outlined in <cit.>, <cit.>, <cit.>, and <cit.>. For the present study, we are only using the derived spectroscopic redshifts from these data.
§.§ Photometry

Table: 5σ Photometric Depth In JADES Areas Measured in 0.2^'' Apertures (nJy)

                          ---------------------- GOODS-S ----------------------       --- GOODS-N ---
Instrument     Filter     JADES Deep   JADES Medium   1210 Parallel   1286 Parallel   SE      NW
HST/ACS        F435W      2.33         10.77          10.9            10.43           7.75    9.40
HST/ACS        F606W      3.61         6.96           6.68            8.72            9.81    8.55
HST/ACS        F775W      2.22         15.79          16.46           15.03           13.61   9.21
HST/ACS        F814W      8.20         7.2            7.02            11.03           8.21    7.7
HST/ACS        F850LP     4.28         17.53          18.58           19.55           15.23   17.55
JWST/NIRCam    F070W      -            -              -               8.29            -       -
JWST/NIRCam    F090W      3.55         6.26           2.40            5.92            6.04    11.03
JWST/NIRCam    F115W      2.93         5.44           2.27            5.26            4.51    8.08
JWST/NIRCam    F150W      2.89         5.53           2.15            5.28            5.16    8.44
JWST/NIRCam    F182M      8.04         9.53           10.37           11.29           -       -
JWST/NIRCam    F200W      3.01         5.27           2.37            4.63            4.66    7.78
JWST/NIRCam    F210M      5.83         12.11          13.71           13.53           -       -
JWST/NIRCam    F277W      2.17         4.24           1.64            3.69            3.92    6.17
JWST/NIRCam    F335M      3.64         3.81           2.86            6.08            5.7     9.12
JWST/NIRCam    F356W      2.46         4.07           1.62            3.60            3.81    5.74
JWST/NIRCam    F410M      3.23         6.43           2.39            5.60            6.33    9.524
JWST/NIRCam    F430M      7.84         7.31           -               -               -       -
JWST/NIRCam    F444W      2.79         5.11           2.00            4.62            5.07    7.31
JWST/NIRCam    F460M      10.71        9.61           -               -               -       -
JWST/NIRCam    F480M      7.98         6.51           -               -               -       -

To compute the photometry from both the GOODS-S and GOODS-N mosaics in each filter, we used the software package jades-pipeline developed by authors BR, BDJ, and ST. We began by creating an inverse-variance-weighted stack of the NIRCam F277W, F335M, F356W, F410M, and F444W images as an ultra-deep signal-to-noise ratio (SNR) image. From this SNR image, jades-pipeline utilizes software from the Photutils package to define a catalog of objects with five contiguous pixels above a SNR of 3 <cit.>, creating a segmentation map in the process. From this catalog, we calculated circular and Kron aperture photometry on both the JWST NIRCam mosaics as well as the 30mas pixel scale HST Legacy Fields mosaics <cit.> for ACS F435W, F606W, F775W, F814W, and F850LP filters. Forced photometry was performed using a range of aperture sizes. The uncertainties we report were measured by combining in quadrature both the Poisson noise and the noise estimated from random apertures placed throughout the image <cit.>. Elliptical Kron aperture fluxes were measured using Photutils with a Kron parameter of K = 2.5 and the default circularized radius six times larger than the Gaussian-equivalent elliptical sizes while masking segmentation regions of any neighboring source. We created empirical HST/ACS and JWST/NIRCam point spread functions to estimate and apply aperture corrections assuming point source morphologies (Z. Chen, private communication). For this present study we will fit to the JADES “CIRC1” (0.2^'' diameter aperture) fluxes, which reduces the background noise associated with the use of larger apertures, and is appropriate given the typically small sizes found for high-redshift galaxies <cit.>.
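As an illustration of this kind of forced small-aperture photometry, and of how 5σ depths like those in the table above are defined, a simplified sketch is given below (not the jades-pipeline itself; the mosaic and source positions are placeholders, and aperture corrections are omitted):

```python
# Simplified sketch of forced 0.2"-diameter ("CIRC1") aperture photometry and a
# random-aperture 5-sigma depth estimate.  The mosaic here is a placeholder and
# aperture corrections are omitted.
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

pixscale = 0.03                       # arcsec / pixel of the final mosaics
r_pix = 0.5 * 0.2 / pixscale          # 0.2" diameter -> aperture radius in pixels

rng = np.random.default_rng(1)
image_njy = rng.normal(0.0, 0.4, size=(2048, 2048))   # background-subtracted mosaic, nJy / pixel

# Forced photometry at catalog positions (x, y in pixels)
positions = [(1020.3, 998.7), (1500.1, 640.2)]
phot = aperture_photometry(image_njy, CircularAperture(positions, r=r_pix))
print(phot["aperture_sum"])

# Depth from the scatter of fluxes in apertures placed at random blank positions
rand_xy = rng.uniform(50, 1998, size=(5000, 2))
rand_flux = aperture_photometry(image_njy, CircularAperture(rand_xy, r=r_pix))["aperture_sum"]
depth_5sig_njy = 5.0 * np.std(rand_flux)
depth_5sig_mag = 31.4 - 2.5 * np.log10(depth_5sig_njy)   # AB magnitude for a flux in nJy
print(f"5-sigma depth: {depth_5sig_njy:.2f} nJy ({depth_5sig_mag:.1f} AB)")
```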
We also use Kron aperture fluxes to calculate some derived parameters, such as the UV magnitude M_UV to better encompass the full flux from more extended sources. We estimated the 5σ limiting flux across both GOODS-S and GOODS-N from the 0.2^'' diameter fluxes and uncertainties. In Table <ref>, we report these 5σ limiting fluxes in nJy for these portions of GOODS-S: JADES Deep, JADES Medium, the 1210 Parallel, and the 1286 Parallel. In addition, we report the limiting fluxes for both the shallower NW portion of the GOODS-N field and the SE portion. Understanding these depths is important for exploring the recovery of high-redshift galaxies across the JADES data. § GALAXY SELECTION AT Z > 8 Our final photometric catalogs span 20 optical and near-IR filters, including both HST/ACS and JWST/NIRCam observations. Because of the multiple datasets included in these catalogs, however, objects will only have coverage in a subset of these filters, with the maximum number being in the area of the JEMS survey in the GOODS-S region, where there is coverage in 19 filters (F070W was only observed in the 1286 parallel, no portion of which overlaps with JEMS). In this section we describe how we identified z > 8 sources from the measured line-flux catalog. Throughout this study, we will identify sources using “JADES-GS-” or “JADES-GN-” followed by the right ascension (RA) and declination (DEC) values in decimal degrees corresponding to the source. As discussed in the introduction, we choose to employ template fitting in this study due to the large quantity of available data in the JADES data set, especially longward of the potential Lyman break for objects at z > 8. The rest-frame UV and optical continuum can be fit with the templates as well, better constraining the exact redshift than with color selection alone. In addition, potential strong optical emission lines such as [OIII]λ5007 observed in high-redshift galaxies can boost the flux in photometric filters, and can be modeled with template fitting. The JADES data set includes multiple medium-band filters longward of 3μm, where these effects can be more significant. §.§ EAZY Photometric Redshifts In order to estimate the redshifts of the GOODS-S galaxies, we used the photometric redshift code EAZY <cit.>. EAZY combines galaxy templates and performs a grid-search as a function of redshift. We used the EAZY photometric redshift z_a, corresponding to the minimum χ^2 of the template fits, to identify high-redshift galaxies. For the fits, we started with the EAZY “v1.3” templates, which we plot in the left panel in Figure <ref>. These templates include the original seven templates modified from <cit.> to include line emission, the dusty template “c09_del_8.6_z_0.019_chab_age09.40_av2.0.dat,” and the high-equivalent-width template taken from <cit.> (the equivalent width of [OIII]λ5007 measured for the galaxy this template is derived from, Q2343-BX418, is 285 Å). We supplemented these with seven additional templates that were designed to optimize photometric redshift estimates for mock galaxy observations from the JAGUAR simulations <cit.>. These templates were created to better span the observed color space of the JAGUAR galaxies, including both red, dusty and blue, UV-bright populations. Similar to what has been demonstrated by other authors <cit.>, we found that young galaxies with very high specific star-formation rates can have very blue observed UV continuum slopes, which is made more complex due to strong nebular continuum and line emission <cit.>. 
To aid in fitting these galaxies, we generated additional templates using Flexible Stellar Population Synthesis <cit.>, and added these to the “5Myr” and “25Myr” simple stellar population models introduced in <cit.> for fitting blue galaxies in HUDF. We show our additional templates in the right panel of Figure <ref>, and we provide these templates online hosted on Zenodo: <https://doi.org/10.5281/zenodo.8092529>. Multiple templates from the full set contain nebular continuum and line emission, including that from Lyman-α. In each redshift bin considered, EAZY combines all of the available templates together and applies an IGM absorption consistent with the redshift <cit.>. The best fit in that redshift bin, measured using the minimum χ^2, is recorded in a χ^2(z) surface that is output from the program. We explored the redshift range z = 0.01 - 22, with a redshift step size Δ z = 0.01. We did not adopt any apparent magnitude priors, as the exact relationship between galaxy apparent magnitude and redshift at z > 8 is currently not well constrained, so any attempt to impose a prior would serve to only remove faint objects from the sample. To prevent bright fluxes from overly constraining the fits and to account for any photometric calibration uncertainties not captured by the offset procedure described below <cit.>, we set an error floor on the photometry of 5%, and additionally, we used the EAZY template error file “template error.v2.0.zfourge” to account for any uncertainties in the templates as a function of wavelength. We also explored the use of the EAZY templates discussed in <cit.>, which were used in finding high-redshift galaxies in the JWST Cosmic Evolution Early Release Science (CEERS) observations, and we describe how using these photometric redshifts affects our final sample in Section <ref>. To match the EAZY template set to the observed fluxes in our catalog, we estimated photometric offsets with EAZY. We calculated the offsets for GOODS-S and GOODS-N data separately, where we first fit the observed photometry for a sample of galaxies with an SNR in F200W between 5 and 20, and calculated the offsets from the observed photometry to the template photometry. We then applied these offsets to the photometry and re-fit, iterating on this procedure. We list the final photometric offsets that we used for GOODS-S and GOODS-N, normalized to F200W, in Table <ref>. These offsets are within 10% of unity for all of the filters, with the exception of a large offset used for the F850LP observations in GOODS-N. We find that the F850LP depths are among the shallowest in our dataset (Table <ref>), which is likely contributing to the large offset.

Table: EAZY-derived photometric offsets, normalized to F200W

Instrument   Filter    GOODS-S Offset   GOODS-N Offset
HST          F435W     1.021            1.072
HST          F606W     1.002            0.976
HST          F775W     1.009            0.996
HST          F814W     0.962            0.998
HST          F850LP    0.919            0.774
NIRCam       F070W     0.981            -
NIRCam       F090W     0.987            1.012
NIRCam       F115W     1.008            1.020
NIRCam       F150W     0.994            0.989
NIRCam       F182M     1.001            0.991
NIRCam       F200W     1.000            1.000
NIRCam       F210M     1.014            1.006
NIRCam       F277W     0.998            0.990
NIRCam       F335M     1.035            1.024
NIRCam       F356W     1.057            1.047
NIRCam       F410M     1.071            1.057
NIRCam       F430M     1.014            -
NIRCam       F444W     1.015            1.009
NIRCam       F460M     0.956            -
NIRCam       F480M     1.017            -

We used the χ^2(z) values output from EAZY to calculate a probability P(z) assuming a uniform redshift prior: P(z) = exp[-χ^2(z) / 2], where we normalize such that ∫ P(z) dz = 1.0.
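Concretely, this probability calculation, together with the P(z > 7) and Δχ^2 statistics introduced in the next subsection, amounts to the following (a minimal sketch in which the χ^2(z) surface is a toy placeholder rather than an EAZY output):

```python
# Sketch of P(z) = exp(-chi2(z)/2), normalized to unit integral, plus the P(z > 7)
# and Delta-chi2 statistics used for the selection.  chi2(z) here is a toy placeholder.
import numpy as np

dz = 0.01
z_grid = np.arange(0.01, 22.0 + dz / 2, dz)                 # EAZY grid, step 0.01
chi2 = 20.0 + 3.0 * (z_grid - 10.2) ** 2                    # toy chi2(z) with a high-z minimum
chi2 = np.minimum(chi2, 28.0 + 8.0 * (z_grid - 2.1) ** 2)   # and a shallower low-z minimum

p_z = np.exp(-0.5 * (chi2 - chi2.min()))                    # subtract the minimum for numerical stability
p_z /= p_z.sum() * dz                                       # normalize so that the integral of P(z) dz is 1

p_gt7 = p_z[z_grid > 7.0].sum() * dz                        # summed probability that z > 7
z_a = z_grid[np.argmin(chi2)]                               # photometric redshift at the minimum chi2
delta_chi2 = chi2[z_grid < 7.0].min() - chi2.min()          # Delta chi2 used in the selection

print(f"z_a = {z_a:.2f}, P(z>7) = {p_gt7:.2f}, Delta chi2 = {delta_chi2:.1f}")
```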
The P(z) and χ^2(z) values allowed us to calculate P(z > 7), the summed probability from EAZY that the galaxy is at z > 7, as well as the χ^2 minimum for EAZY fits restricted to z < 7. These statistics, and others, are helpful for identifying and removing interlopers from our sample. In Figure <ref> we show the EAZY fit to an object in GOODS-S, JADES-GS-53.17551-27.78064, along with the P(z) surface, and the JADES NIRCam thumbnails. The source is an F115W dropout, with no visible flux at shorter wavelengths. The fit constrained at z < 7 produces significantly more F115W flux than is observed, providing evidence that this galaxy is at z > 9.
§.§ High-Redshift Galaxy Selection and Catalogs
Because of the extensive deep photometric data for the GOODS-S and GOODS-N fields, we chose to use the EAZY photometric redshifts for finding z > 8 candidates, as template fitting utilizes more photometric data points in the fit than color selection by itself. Following work done in the literature <cit.>, we selected galaxies at z > 8 by imposing these rules on the EAZY fits:
* The redshift of the fit corresponding to the minimum χ^2, z_a, must be greater than 8.
* The SNR in at least two photometric bands must be above 5. For this study, we chose NIRCam F115W, F150W, F200W, F277W, F335M, F356W, F410M, or F444W, as these filters are longward of the Lyman break at z > 8. We used the photometry derived using 0.2^'' diameter apertures for measuring this SNR.
* The summed probability of the galaxy being at z > 7 must be greater than 70%, or ∫_7^22 P(z) dz > 0.7.
* The difference between the overall minimum χ^2 and the minimum χ^2 at z < 7, Δχ^2, must be greater than 4.
* There should be no object within 0.3^'' (10 pixels in the final JADES mosaics), or within the object's bounding box, that is 10 times brighter than the object.
For this study, we targeted galaxies at z_a > 8 as galaxies above this redshift should have no observed flux in the JWST/NIRCam F090W filter. This allows us to use the deep JADES F090W observations to aid in visually rejecting lower-redshift contaminants. The second requirement, that the source be detected in multiple bands, was chosen to ensure that the sources we selected were not artifacts found in individual exposures such as cosmic rays or bad pixels. We imposed the EAZY ∫_7^22 P(z) dz > 0.7 (which we will shorten to “P(z > 7)”) and Δχ^2 limits in order to help remove objects where EAZY could fit the observed SED at low redshift with high probability. In <cit.>, the authors recommend the use of a stricter cut, Δχ^2 > 9, and we consider this cut in Section <ref>. We also, in Section <ref>, discuss those objects where Δχ^2 < 4 in our sample, as these sources, though faint, may contain true high-redshift galaxies that should be considered. Finally, we remove objects in close proximity to bright sources because of the possibility of selecting tidal features or stellar clusters near the edges of relatively nearby galaxies. We list those objects that satisfied our other requirements but were close to a brighter source, along with discussion of these targets, in Section <ref>. We chose not to implement a direct cut on χ^2 as this metric is dependent on the flux uncertainties, which vary across the field in such a way as to make a comparison of the value between objects difficult and potentially non-meaningful. We still report the resulting χ^2 values, however.
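A schematic implementation of the first four cuts is sketched below; the column names are illustrative only, and the proximity-to-bright-source check and the visual inspection described below are applied as separate steps:

```python
# Schematic application of the first four selection cuts to a source catalog.
# Column names are illustrative; the bright-neighbor check and the visual
# inspection stage are carried out separately.
import numpy as np
import pandas as pd

bands = ["F115W", "F150W", "F200W", "F277W", "F335M", "F356W", "F410M", "F444W"]
rng = np.random.default_rng(42)

cat = pd.DataFrame({
    "z_a":        [8.4, 12.1, 3.2],          # EAZY minimum-chi2 redshift
    "P_gt7":      [0.95, 0.99, 0.10],        # integral of P(z) over z = 7 - 22
    "delta_chi2": [12.0, 5.3, 0.4],          # chi2_min(z < 7) minus the overall chi2_min
    **{f"snr_{b}": rng.uniform(0.0, 12.0, 3) for b in bands},
})

snr = cat[[f"snr_{b}" for b in bands]].to_numpy()
n_detected = (snr > 5.0).sum(axis=1)          # bands redward of the break with SNR > 5

selected = (
    (cat["z_a"] > 8.0)
    & (n_detected >= 2)
    & (cat["P_gt7"] > 0.7)
    & (cat["delta_chi2"] > 4.0)
)
print(cat[selected][["z_a", "P_gt7", "delta_chi2"]])
```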
In comparison, the Δχ^2 value is calculated from two fits to the same photometry and uncertainties, and is helpful in exploring the relative goodness of fits at different redshifts. These cuts resulted in 1078 objects in GOODS-S and 636 objects in GOODS-N. From here, we began the process of visual inspection, first to remove obvious non-astrophysical data artifacts, including extended diffraction spikes from stars, and hot pixels caused by cosmic rays. We also removed extended, resolved low-redshift, dusty sources, many of which were not visible in HST imaging. After removing these sources, we were left with 580 possible objects in GOODS-S, and 212 objects in GOODS-N. After this initial inspection, authors KH, JH, DE, MWT, CNW, LW, and CS independently graded each target with a grade of Accept, Reject, or Review. For those objects where 50% or more of the reviewers accepted the candidate, it was then added to the final candidate list. In cases where greater than 50% of the reviewers chose to reject the candidate, this candidate was removed entirely from the candidate list. In all other cases (57 objects in GOODS-N and 102 objects in GOODS-S), the reviewers did one more round of visual inspection with only the grades Accept or Reject, with a larger discussion occurring for objects where necessary. Again, a 50% of Accept grades was required for these galaxies under review to be listed as part of the final sample. § RESULTS l l 2 0pt Overview of Columns in the z > 8 Source Catalog Column Description 1 JADES ID 2, 3 Right ascension and declination, in decimal degrees, of the source 4 m_F277W, Kron (AB) 5 - 11 EAZY z_a, σ_68, low, σ_68, high, σ_95, low, σ_95, high, σ_99, low, σ_99, high 12 EAZY ∫_7^22 P(z) dz 13 EAZY minimum χ^2 14 EAZY minimum z_a (z < 7) 15 EAZY χ^2 (z < 7) 16 EAZY Δχ^2 17, 18 Spectroscopic Redshift, Source (FRESCO or NIRSpec) 19 M_UV 20 Flag indicating the source is fit by a brown dwarf model within Δχ^2 < 4 21 Flag indicating whether or not the source is unresolved (r_eff,F444W < 0.063^'') 22, 23 HST/ACS F435W Flux, 1σ uncertainty (nJy) 24, 25 HST/ACS F606W Flux, 1σ uncertainty (nJy) 26, 27 HST/ACS F775W Flux, 1σ uncertainty (nJy) 28, 29 HST/ACS F814W Flux, 1σ uncertainty (nJy) 30, 31 HST/ACS F850LP Flux, 1σ uncertainty (nJy) 32, 33 JWST/NIRCam F070W Flux, 1σ uncertainty (nJy) 34, 35 JWST/NIRCam F090W Flux, 1σ uncertainty (nJy) 36, 37 JWST/NIRCam F115W Flux, 1σ uncertainty (nJy) 38, 39 JWST/NIRCam F150W Flux, 1σ uncertainty (nJy) 40, 41 JWST/NIRCam F182M Flux, 1σ uncertainty (nJy) 42, 43 JWST/NIRCam F200W Flux, 1σ uncertainty (nJy) 44, 45 JWST/NIRCam F210M Flux, 1σ uncertainty (nJy) 46, 47 JWST/NIRCam F277M Flux, 1σ uncertainty (nJy) 48, 49 JWST/NIRCam F335M Flux, 1σ uncertainty (nJy) 50, 51 JWST/NIRCam F356W Flux, 1σ uncertainty (nJy) 52, 53 JWST/NIRCam F410M Flux, 1σ uncertainty (nJy) 54, 55 JWST/NIRCam F430M Flux, 1σ uncertainty (nJy) 56, 57 JWST/NIRCam F444W Flux, 1σ uncertainty (nJy) 58, 59 JWST/NIRCam F460M Flux, 1σ uncertainty (nJy) 60, 61 JWST/NIRCam F480M Flux, 1σ uncertainty (nJy) 62 JADES Footprint Region We provide these values for the primary z > 8 sample with Δχ^2 > 4, as well as the subsamples outlined in the text: Δχ^2 < 4 and those proximate to brighter sources. Our final z > 8 samples consist of 535 objects in GOODS-S and 182 objects in GOODS-N. In Table <ref> we provide the descriptions of the columns in our final catalog; the catalog itself is provided as an online table on Zenodo: <https://doi.org/10.5281/zenodo.8092529>. 
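For readers working with the released table, a minimal example of reading it and pulling out the highest-redshift candidates might look like the following (the file name and column labels are placeholders; the actual column layout is the one summarized in the table above):

```python
# Minimal example of reading the released candidate catalog and selecting the
# highest-redshift entries.  The file name and column labels are placeholders.
from astropy.table import Table

cat = Table.read("jades_z8_candidates.fits")          # downloaded from the Zenodo record
high_z = cat[(cat["EAZY_z_a"] > 12.0) & (cat["EAZY_delta_chi2"] > 4.0)]
for row in high_z:
    print(row["JADES_ID"], row["RA"], row["DEC"], row["EAZY_z_a"])
```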
We include 0.2^'' diameter aperture photometry in each of the observed photometric bands, as well as the EAZY z_a, χ^2, P(z > 7), and Δχ^2 values used in selecting the galaxies. We also provide the σ_68, σ_95, and σ_99 confidence intervals estimated from the P(z) distribution. In this table we also list the z > 8 candidates that have EAZY Δχ^2 < 4, and we will discuss these sources in Section <ref>. Similarly, in our output table, we list those z > 8 candidates that were either within 0.3^'' or within the bounding box of a target 10 times brighter than the candidate, which we discuss in Section <ref>. We show the positions of the GOODS-N sources in the left panel and the GOODS-S sources in the right panel of Figure <ref>. On these figures, we include both those with EAZY Δχ^2 > 4 (dark points) and EAZY Δχ^2 < 4 (lighter points). The relatively higher density of sources in the southern portion of the GOODS-N observations compared to the northern portion is a result of the increased observational depth in that region. In GOODS-N, we find 2.1 objects in our z > 8 sample per square arcminute in the NE footprint, and 4.3 objects per square arcminute in the SW footprint. Similarly, the deepest portions of the JADES GOODS-S coverage are the large rectangular JADES Deep region, and the smaller 1210 parallels, where a significantly higher density of objects are detected. In GOODS-S, we find 7.8 objects in our z > 8 sample per square arcminute in JADES Deep, 4.5 objects per square arcminute in JADES Medium, 4.1 objects per square arcminute in the 1286 parallel, and 13.9 objects per square arcminute in the 1210 parallel. In Figure <ref>, following similar work done in the literature <cit.>, we show the F277W observed AB magnitude measured using a Kron aperture against the EAZY photometric redshift for each candidate z > 8 galaxy in GOODS-S and GOODS-N. Across the top we show the distribution of the photometric redshifts, and on the right side we show the F277W magnitude distribution for the photometric redshift sample as well as the GOODS-N and GOODS-S sample independently. For those objects where we have spectroscopic redshifts from either NIRSpec or FRESCO, we plot this value instead of the photometric redshift, and indicate those galaxies with larger points with black outlines. The GOODS-S sample, by virtue of the deeper coverage and larger area, extends to much fainter F277W magnitudes. On this diagram, the galaxy GN-z11 <cit.> is the brightest source, as one of two galaxies at m_F277W,Kron < 26 (the other is JADES-GS-53.10394-27.89058, at z_a = 8.35). The redshifts seen in the main panel are discrete because of how EAZY fits galaxies at specific redshift steps. In Figure <ref> we can see how the usage of wide filters for estimating photometric redshift leads to relative dearths of objects at z ∼ 10 and z ∼ 13, as these redshifts are where the Lyman break is between the F090W, F115W, and F150W filters. This is an artificial effect - for Lyman-break galaxies, estimating the exact redshift is highly predicated on the flux in the band that probes the break, and when the break sits between filters, the resulting redshifts are more uncertain. The best-fit inferred redshift can then scatter to higher or lower redshift due to small and insignificant perturbations to χ^2, leading to these apparent gaps. In this section we discuss the candidates in three subcategories: z_a = 8 - 10 (Section <ref>), z_a = 10 - 12 (Section <ref>), and z_a > 12 (Section <ref>). 
For each subcategory we describe the properties of the sample, plot example SEDs for galaxies spanning the magnitude and redshift range, and discuss notable examples.
§.§ z_phot = 8 - 10 Candidates
We find 547 total galaxies and galaxy candidates combined across the JADES GOODS-S (420 sources) and GOODS-N (127 sources) areas at z_a = 8 - 10. We show a subsample of the EAZY SED fits and the JADES thumbnails for eight example candidate high-redshift galaxies in this photometric redshift range in Figure <ref>. In each plot, we show both the minimum χ^2 fit and the fit constrained to z < 7. We chose these objects from the full sample to span a range of F277W Kron magnitudes as well as photometric redshifts. Because of the availability of both NIRSpec and FRESCO spectroscopy for our sample, there are 34 (27 in GOODS-S and 7 in GOODS-N) galaxies in this photometric redshift range where a spectroscopic redshift has been measured. For 14 of these sources (13 in GOODS-S and 1 in GOODS-N), the resulting spectroscopic redshift is z_spec = 7.65 - 8.0. Because these objects satisfied our photometric redshift selection criteria, we chose to include them in our sample, and discuss their spectroscopic redshifts in Section <ref>. There are a number of sources in this photometric redshift range with extended morphologies, often seen in the JADES data as multiple clumps observed in the images at shorter wavelengths. In Figure <ref>, we show a subsample of nine resolved galaxies with z_a = 8 - 10. For each object we show the F090W, F115W, and F356W thumbnails, along with a color image combining these three filters. Each thumbnail is 2^'' on a side, showcasing the resolved sizes of some of these targets. At z = 8 - 10, 1^'' corresponds to 4.3 - 4.9 kpc, and we provide a scale bar of 0.5^'' in each panel. At these redshifts, F090W is to the blue of the Lyman break, so the galaxies should not appear in this filter, while F200W spans the rest-frame UV and F356W the rest-frame optical continuum. We are then seeing UV-bright star-forming clumps in the F200W filter, and the stellar continuum in F356W. We show two sources, JADES-GS-53.1571-27.83708 (top row, left column), and JADES-GS-53.08738-27.86033 (top row, middle column), which have spectroscopic redshifts from FRESCO at z_spec = 7.67 and z_spec = 7.96 respectively, as indicated below the photometric redshifts in the color panel. These nine sources show multiple irregular morphologies, and many are elongated. JADES-GN-189.18051+62.18047 is an especially complex system at z_a = 8.92 with four or five clumps that span almost 7 kpc at this photometric redshift, similar to the “chain of five” F150W dropout system presented in <cit.>. Five of the extended sources we highlight were previously presented in the literature: JADES-GS-53.1571-27.83708, JADES-GS-53.08738-27.86033, JADES-GS-53.08174-27.89883, JADES-GS-53.1459-27.82279 and JADES-GS-53.10393-27.89059 <cit.>. Given the depth and resolution of NIRCam, we can see new details for these sources beyond what was observed in the HST ACS and WFC3 observations, such as the nearly 0.8^''-long haze to the northeast for JADES-GS-53.10393-27.89059, which corresponds to about 4 kpc at the candidate photometric redshift.
§.§ z_phot = 10 - 12 Candidates
We find a total of 137 galaxies and candidate galaxies at z_phot = 10 - 12: 92 in GOODS-S and 45 in GOODS-N. We show the EAZY SED fits and the JADES thumbnails for eight example candidates in this photometric redshift range in Figure <ref>.
In the GOODS-S region, this redshift range includes two of the spectroscopically-confirmed galaxies from <cit.> and <cit.>, JADES-GS-z10-0 (z_spec = 10.38^+0.07_-0.06) and JADES-GS-z11-0 (z_spec = 11.58^+0.05_-0.05). The EAZY photometric redshifts for these targets are z_a = 10.84 for JADES-GS-z10-0, and z_a = 12.31 for JADES-GS-z11-0. Both photometric redshifts are higher than the measured spectroscopic redshift, but considering the P(z) uncertainty in both measurements, the measurements are within 2σ of the true values. Indeed, the Δχ^2 between the minimum value corresponding to z_a and the value at z_spec is 10.25 for JADES-GS-z10-0 and 1.75 for JADES-GS-z11-0. In <cit.>, the authors estimate photometric redshifts for these sources using the Bayesian stellar population synthesis fitting code Prospector <cit.> and recover P(z) surfaces that are similarly offset to higher values than the spectroscopic redshifts. In the GOODS-N region, we find the brightest object overall in our sample (a Kron F277W aperture magnitude of 25.73 AB), GN-z11, discussed at length in <cit.> and spectroscopically confirmed to lie at z = 10.603 ± 0.001 in <cit.>. In our EAZY fit, we estimate z_a = 11.0, which is within 2σ of the spectroscopic redshift, but again, higher than the spectroscopic redshift. We further explore this difference in Section <ref>. In <cit.>, the authors identify nine galaxies within 10 comoving Mpc (212^'') of GN-z11 that have photometric redshifts between z_a = 10 - 11. Six of these sources are included in our Δχ^2 > 4 sample <cit.>, while the other three sources <cit.> are not in our final sample as these sources did not satisfy the requirement of having a flux SNR > 5 in at least two bands to the red of the potential Lyman break, or in the case of JADES-GN-189.07355+62.2375, this source has z_a < 8 with the updated photometry in this study. We want to highlight three galaxies seen in Figure <ref> because of their extended, somewhat complex morphologies. JADES-GS-53.13918-27.84849 (z_a = 10.45, first row, right column), an F115W dropout, has three components and spans 0.5^'', which is 2 kpc at this photometric redshift. We observe an increase in the F444W flux over what is seen at 3 - 4μm, which could either be a result of [OII]λ3727 emission at this redshift or evidence of a Balmer break. The F115W dropout JADES-GS-53.09872-27.8602 (z_a = 10.69, second row, right column) is the southern clump of two morphologically distinct components separated by 0.3^'' (1.2 kpc at this photometric redshift) in the rest-frame UV, which becomes less distinct at longer wavelengths. The northern clump, JADES-GS-53.09871-27.86016 (z_a = 9.59), is also in our sample, but the EAZY fit prefers a lower photometric redshift which is consistent to within 1σ. Finally, JADES-GS-53.07597-27.80654 (z_a = 11.27, third row, right column) consists of two, bright, connected clumps separated by 0.2^'' (580 pc at this photometric redshift). The sources are detected as separate clumps in the relatively shallower FRESCO F182M and F210M data as well. These sources could be interacting seed galaxies or star-forming clumps in the very early universe. §.§ z_phot > 12 Candidates We find 33 galaxies and candidate galaxies across both the JADES GOODS-S (23 sources) and GOODS-N (10 sources) footprints at z > 12. We show their SEDs and thumbnails for eight examples in Figure <ref>, and we show the remaining in Figures <ref>, <ref>, and <ref> in the Appendix. 
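For orientation, the sketch below shows where the redshifted Lyman break lands relative to the relevant NIRCam bandpasses at the redshifts discussed next; the filter edges used here are approximate cut-on/cut-off wavelengths rather than the exact bandpass definitions:

```python
# Rough check of where the Lyman break (rest-frame 1216 A) lands relative to the
# NIRCam bands.  The filter edges below are approximate cut-on / cut-off wavelengths
# in microns, not the exact bandpass definitions.
edges = {"F150W": (1.33, 1.67), "F200W": (1.75, 2.23), "F277W": (2.42, 3.13)}

def locate_break(z, lam_break_um=0.1216):
    lam = (1.0 + z) * lam_break_um
    for name, (blue, red) in edges.items():
        if blue <= lam <= red:
            return f"z = {z:5.2f}: break at {lam:.2f} um, inside {name}"
    return f"z = {z:5.2f}: break at {lam:.2f} um, between bands"

for z in (12.0, 13.2, 17.7):
    print(locate_break(z))
```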
For objects at these redshifts, the Lyman break falls in the F150W filter at z = 12, in between the F150W and F200W filters at z = 13.2, and in between the F200W and F277W filters at z = 17.7. The objects in our z > 12 sample, then, are a mixture of solid F150W dropouts and more tentative galaxies that show evidence for faint F200W flux associated with the Lyman break lying in that filter. Our sample in this redshift range includes the other two high-redshift spectroscopically-confirmed galaxies from <cit.> and <cit.>, JADES-GS-z12-0 (z_spec = 12.63^+0.24_-0.08) and JADES-GS-z13-0 (z_spec = 13.20^+0.04_-0.07). We estimate EAZY photometric redshifts for these targets of z_a = 12.46 for JADES-GS-z12-0, and z_a = 13.41 for JADES-GS-z13-0. While both photometric redshifts are quite uncertain due to the width of the bands used to probe the Lyman break, the range of uncertainties based on the EAZY σ_68 redshifts are consistent with the spectroscopic redshifts. Because of the importance of these galaxies towards understanding galaxy formation in the very early universe, we will discuss the candidate galaxies in this redshift range individually, in order of decreasing photometric redshift. In our descriptions, we do not include the spectroscopically confirmed galaxies from <cit.> and <cit.>, as they have been characterized previously. JADES-GN-189.15981+62.28898 (z_a = 18.79) This F200W dropout, the highest-redshift candidate in our sample, is clearly detected in multiple LW filters. There is no detection in the F200W filter, and we calculate a dropout color assuming a 2σ upper-limit on the F200W flux of m_F200W - m_F277W > 1.29. While this source lies in the relatively shallower GOODS-N NW portion of the survey, the large Δχ^2 provides strong evidence for this source being at high redshift. JADES-GS-53.12692-27.79102 (z_a = 15.77) This is one of the more intriguing objects in our sample, as it is a relatively bright (m_F277W, Kron = 29.37) F150W dropout detected at greater than 16 σ in all of the detection bands. While there may be F115W flux observed in the thumbnail, it is only at SNR = 1.76. Caution should be exercised in adopting the derived redshift for this source as a result, since this object's fluxes are consistent with it being at z = 5. JADES-GS-53.0541-27.70399 (z_a = 15.67) This F150W dropout has quite large photometric redshift uncertainties, but the σ_68 range is still consistent with it being at z > 12. The source 1^'' to the east is a potential F090W dropout with z_a = 8.24, but we measure P(z < 7) = 0.68 from the EAZY fit, so it does not appear in our sample. JADES-GS-53.19592-27.7555 (z_a = 15.32) This slightly extended F150W dropout has SNR >5 in three filters: F277W, F356W, and F444W. It is also over 3^'' away from any bright sources. Because of the non-detection in F150W, we estimate that m_F150W - m_F200W > 0.74 given a 2σ upper limit on the observed F150W flux. JADES-GS-53.07557-27.87268 (z_a = 15.31) This is one of the faintest z > 12 sources (m_F277W, Kron = 30.04), although it is observed at SNR > 5 in three filters: F277W, F356W, and F444W. In the thumbnail we show how this candidate is surrounded by other, brighter sources. The sources to the northwest and southeast are both at z_a ∼ 1.0, while the source with multiple components to the northeast is an F435W dropout at z_a = 3.74. 
JADES-GS-53.17847-27.75591 (z_a = 15.13) This very compact F150W dropout is quite faint (m_F277W, Kron = 29.79), and is relatively isolated, with the nearest bright galaxy being almost 2^'' to the west. The lack of significant detections in the bands to the blue of the proposed Lyman break (m_F150W - m_F200W > 1.14) provides strong evidence of this source's photometric redshift. JADES-GS-53.12914-27.86075 (z_a = 15.07) This F150W dropout is strongly detected (SNR > 7) in F200W and F277W, with m_F150W - m_F200W = 1.73. It is detected at SNR > 3 in F210M, but not in F182M. JADES-GN-189.32608+62.15725 (z_a = 14.77) This is an F150W dropout 0.5^'' northeast of an F850LP dropout galaxy at z_a = 5.2, which is a slightly higher redshift than the potential secondary minimum in the P(z) surface for this source. We measure m_F150W - m_F200W = 2.46, and find that this source is still at z > 12 within the σ_68 range on the photometric redshift. JADES-GS-53.02212-27.85724 (z_a = 14.59) This slightly diffuse F150W dropout has SNR > 5 in all of the bands where it is detected, and we measure m_F150W - m_F200W = 2.91. It is near the western edge of the JADES medium mosaic, and is 2.5^'' southeast of the star GOODS J033205.16-275124.2. JADES-GN-189.23606+62.16313 (z_a = 14.47) This F150W dropout is faint (m_F277W, Kron = 29.40), but has a 7σ detection in F277W, and a 6σ detection in F356W. We measure a very red dropout color m_F150W - m_F200W = 4.28. JADES-GS-53.10763-27.86014 (z_a = 14.44) This is a faint, diffuse F150W dropout that is within 1.5^'' of a larger galaxy at z_a = 0.9. While there is evidence of F115W flux in the thumbnail, it is only at SNR = 1.64. JADES-GS-53.07427-27.88592 (z_a = 14.36) This F150W dropout is not detected (SNR < 0.8) in any of the bands shortward of the potential Lyman break, and we measure m_F150W - m_F200W = 4.05. It is 1^'' away from an F435W dropout at z_a = 4.31, and could be associated with that source, as the secondary P(z) peak indicates. JADES-GN-189.16733+62.31026 (z_a = 14.33) This F150W dropout is very bright in F277W (m_F277W, Kron = 27.28), pushing it above the distribution at these redshifts as seen in Figure <ref>. It is within 0.5^'' of an F435W dropout at z_a = 4.34. As a result, the potential Lyman break for this object could be a Balmer break if these two sources are associated at similar redshifts. JADES-GS-53.11127-27.8978 (z_a = 14.22) This source is an F150W dropout with solid SNR > 5 detections in the LW JADES filters. The SW fluxes for this object may be impacted by detector artifacts which are seen to the northwest and southeast of the source. JADES-GN-189.24454+62.23731 (z_a = 14.0) This F150W dropout is only detected with >5σ in F277W (SNR = 7.32) and F356W (SNR = 7.08), and we measure m_F150W - m_F200W > 0.93. JADES-GS-53.06475-27.89024 (z_a = 14.0) This F150W dropout (m_F150W - m_F200W > 2.27) is detected in the F277W filter at 19.9σ and is found in the exceptionally deep GOODS-S JADES 1210 Parallel. In this region, there are no medium band observations from either JEMS or FRESCO for this source, and we do not see any detection in any of the WFC3 or ACS bands. This source is a quite promising high-redshift candidate, with a Δχ^2 ∼ 65. JADES-GS-53.14673-27.77901 (z_a = 13.68) This F150W dropout is quite well detected in multiple bands, including F182M, but it has a fairly broad P(z) surface, although the σ_68 values are consistent with z > 12 solutions.
At z_phot∼ 13 - 14, the fits are more poorly constrained due to the widths of the F150W and F200W photometric bands and the gap between them. JADES-GN-189.27873+62.2112 (z_a = 13.12) This source has F182M and F210M fluxes boosted by flux from a diffraction spike. It also has an F150W detection at 2.4σ, potentially indicating that it lies at a slightly lower redshift, as suggested by the broad P(z) distribution. JADES-GN-189.11004+62.23638 (z_a = 13.12) The F182M and F210M fluxes for this F150W dropout (m_F150W - m_F200W = 2.69) are also boosted by a diffraction spike from a nearby star. There appears to be F115W flux at 2.14 σ, but this is shifted 0.3^'' to the northeast of the primary source seen in F200W and F277W. JADES-GS-53.06928-27.71539 (z_a = 12.69) This is a faint (m_F277W, Kron = 29.73) F150W dropout with a very red dropout color m_F150W - m_F200W = 4.97. JADES-GN-189.33638+62.16733 (z_a = 12.53) While this F150W dropout is faint (m_F277W, Kron = 30.01), the fit indicates a blue UV slope and the object is detected in multiple filters at >5 σ, with Δχ^2 = 12.94. JADES-GS-53.18129-27.81043 (z_a = 12.52) This very faint (m_F277W, Kron = 30.11) F150W dropout has solid 8σ detections in F200W and F277W, and can be seen in the F356W thumbnail at 4σ. JADES-GS-53.08468-27.86666 (z_a = 12.48) This F150W dropout (m_F150W - m_F200W = 1.96) has a slightly redder potential UV slope, and may be a low-redshift dusty interloper. JADES-GS-53.16635-27.82156 (z_a = 12.46) This bright (m_F277W, Kron = 28.64) F150W dropout (m_F150W - m_F200W = 2.01) has a 26σ detection in F277W, and is observed at the 4-6σ level in the relatively shallower F182M and F210M filters. JADES-GS-53.02868-27.89301 (z_a = 12.39) This source is an F150W dropout with strong detections (SNR > 5) in each filter to the red of the potential Lyman break. JADES-GS-53.10469-27.86187 (z_a = 12.27) This source is an F150W dropout with strong detections in F200W and F277W. The fluxes at F115W and F150W are observed at 1.55σ and 1.39σ significance, respectively. JADES-GN-189.27641+62.20724 (z_a = 12.19) This source is well detected in F200W and F277W (SNR > 10 in both filters), but is quite faint at longer wavelengths. There is a source 1.5^'' to the southeast of the target at z_spec = 2.44 <cit.>, so we caution that the observed Lyman break for the high-redshift candidate may be a Balmer break at 1.5μm. JADES-GN-189.09217+62.25544 (z_a = 12.16) This is a bright (m_F277W, Kron = 28.53) F150W dropout. The F150W detection is at SNR = 2.03, while the F115W flux is only measured at the 1.55σ level. JADES-GS-53.19051-27.74982 (z_a = 12.08) This is a bright (m_F277W, Kron = 28.72) F115W dropout with >10σ detections in multiple filters. JADES-GS-53.14283-27.80804 (z_a = 12.06) This object is an F150W dropout that is primarily seen in F200W (SNR = 5.89) and F277W (SNR = 6.72). The fit very strongly favors the high-redshift solution, and it does not seem to be associated with the nearby galaxy to the east, an F814W dropout at z_a = 5.98 with [OIII]λ5007 potentially boosting the F335M flux. JADES-GS-53.18936-27.76741 (z_a = 12.05) This source is a very faint, slightly diffuse F150W dropout. While the Δχ^2 is still in favor of the z > 8 fit, the lower-redshift solution would help explain the boosted F210M flux as potentially arising from an [OIII] emission line, although the F210M flux is only significant at 2.6σ.
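Several of the dropout colors quoted above for individual candidates are lower limits computed from 2σ flux upper limits in the band blueward of the break. A minimal sketch of that arithmetic is given below, assuming fluxes in nJy (for which the AB zero point is 31.4); the example numbers are hypothetical.

```python
import numpy as np

AB_ZEROPOINT_NJY = 31.4  # m_AB = -2.5 log10(flux / nJy) + 31.4

def ab_mag(flux_njy):
    """AB magnitude for a flux in nJy."""
    return -2.5 * np.log10(flux_njy) + AB_ZEROPOINT_NJY

def dropout_color_lower_limit(sigma_blue_njy, flux_red_njy, nsigma=2.0):
    """Lower limit on (m_blue - m_red) for a band undetected blueward of the break,
    replacing the blue-band flux by an nsigma upper limit."""
    return ab_mag(nsigma * sigma_blue_njy) - ab_mag(flux_red_njy)

# Hypothetical example: an undetected F150W band with sigma = 1.5 nJy and an
# F200W flux of 8 nJy imply m_F150W - m_F200W > 1.06 or so.
print(round(dropout_color_lower_limit(1.5, 8.0), 2))
```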
We caution that photometric redshifts at z > 12 are quite uncertain, and that our sources are observed down to very faint magnitudes, and thus require deep spectroscopic follow-up to confirm. In many cases we also raise the possibility that the source is potentially associated with a nearby galaxy at lower redshift. For the GOODS-S sources, continued observations extending both the size of the JADES Medium region and the depth of JADES Deep planned for Cycle 2 as part of JADES will help to provide evidence as to whether these sources are truly at high-redshift or not. §.§ z > 8 Candidates with Δχ^2 < 4 In the previous sections we explored those objects for which the EAZY fit strongly favors a high-redshift solution. The fits to these sources at their proposed photometric redshifts indicate strong Lyman breaks and more robust upper limits on the photometric fluxes blueward of the break. For cases where the observed HST/ACS or short-wavelength JWST/NIRCam fluxes have higher uncertainties (for fainter objects or those objects in shallower parts of the GOODS-S or GOODS-N footprint), fits at z < 7 are more strongly favored, leading to values of Δχ^2 < 4. We selected candidate z > 8 galaxies in our sample that satisfy our criteria outlined in Section <ref>, but where Δχ^2 < 4. While the bulk of the output EAZY P(z) indicates that the galaxy is at high-redshift (P(z > 7) > 0.7), the minimum χ^2 for the z < 7 solution is more similar to the overall minimum χ^2 at z > 8. In this section we explore these targets, as they represent a non-insignificant number of candidates. Following the initial selection of these objects, they were visually inspected following the same routine as for the Δχ^2 > 4 objects, where an object was removed from the sample if the majority of reviewers flagged it for rejection. Our final sample consists of 163 candidates in GOODS-S and 64 candidates in GOODS-N, for a total of 227 objects. These objects are also plotted with lighter symbols in Figure <ref>. While these sources span the full redshift range of the Δχ^2 > 4 sample, the median F277W Kron magnitude is 29.47 for the GOODS-S objects and 29.11 for the GOODS-N objects, fainter than the median F277W magnitudes of the Δχ^2 > 4 sources (29.25 for GOODS-S and 28.62 for GOODS-N). This is expected as Δχ^2 is strongly dependent on the observed flux uncertainties for each source. In Figure <ref> we highlight some targets from both GOODS-S and GOODS-N with Δχ^2 < 4, demonstrating the variety of targets in this subcategory. The median F090W flux (measured in a 0.2^'' diameter aperture) for the full sample of GOODS-S and GOODS-N z > 8 candidates is -0.02 nJy (the distribution is consistent with 0 nJy), while the median F090W flux for the combined sample of Δχ^2 < 4 targets is 0.39 nJy. Targets like JADES-GS-53.04744-27.87208 (z_a = 12.33) and JADES-GN-189.33478+62.1919 (z_a = 12.38) have faint F115W and F150W flux measurements (with high uncertainties) consistent with dusty z ∼ 4 solutions. Many of these objects are also limited by the lack of deep HST/ACS data; JADES-GS-53.04744-27.87208 only has coverage with F435W and F606W due to its position in the southwest of the JADES GOODS-S footprint. We include these sources and their fluxes to aid in the selection of high-redshift galaxies in future deep JWST surveys with different filter selection and observational depths. 
While a larger number of these objects may be lower-redshift interlopers masquerading as z > 8 galaxies, this sample may serve as a pool of additional sources to be placed on multi-object slit-masks in follow-up spectroscopic campaigns to confirm source redshifts. In addition, these objects are helpful for calibrating template sets as they have colors that can be fit with models at low and high redshift. §.§ z > 8 Candidates Proximate to Brighter Sources In addition to exploring the sources with Δχ^2 < 4, we also looked at objects (at all Δχ^2 values) that were near bright sources, lying within 0.3^'' of, or within the bounding box of, a source with ten times greater brightness. We caution that these sources could be stellar clusters on the outskirts of bright nearby galaxies, which is especially true at fainter magnitudes. In addition, being so close to a bright source can potentially introduce flux into the circular aperture photometry and change the observed colors of the candidate galaxy and the shape of the SED. We went through the same visual classification procedure for these sources as we did for the full sample, and ended up with 41 candidates (30 have Δχ^2 > 4) in GOODS-S and 17 candidates (14 have Δχ^2 > 4) in GOODS-N. These sources have a median F277W Kron magnitude of m_AB = 28.61 for those in GOODS-S, and m_AB = 27.50 for those in GOODS-N, and range in redshift between z = 8.0 and 16.7. We want to specifically highlight some of the higher-redshift candidates from this subsample. Notably, there are three galaxies at z_a > 12: JADES-GS-53.08016-27.87131 (z_a = 16.74), JADES-GS-53.09671-27.86848 (z_a = 12.03), and JADES-GN-189.23121+62.1538 (z_a = 12.16). JADES-GS-53.08016-27.87131 has Δχ^2 < 4, although this source is the most interesting due to its photometric redshift. This source has a very faint F200W detection (SNR > 4 in a 0.2^'' diameter aperture), but is relatively bright at longer wavelengths (with an F277W AB magnitude of 29.20). This source lies 3^'' - 4^'' north of a pair of interacting galaxies at z_spec = 1.1 <cit.>, and flux from the outskirts of these galaxies may be contributing to the aperture photometry for this object, leading to an artificially red UV slope. We caution that this source may be a stellar cluster associated with this pair or the dusty galaxies that are to the north of its position. There are five objects in this subsample that have spectroscopic redshifts from FRESCO: JADES-GS-53.12001-27.85645 (z_a = 8.33, z_spec = 7.652), JADES-GS-53.13341-27.83909 (z_a = 8.18, z_spec = 8.217), JADES-GS-53.07688-27.86967 (z_a = 8.57, z_spec = 8.270), JADES-GS-53.10107-27.86511 (z_a = 8.49, z_spec = 8.195), and JADES-GN-189.27457+62.21053 (z_a = 8.03, z_spec = 8.015). These sources have [OIII]λ5007 line detections from FRESCO, demonstrating that while this class of sources may be associated with their nearby brighter neighbors, there are genuine high-redshift galaxies among them. §.§ Stellar Contamination One primary source of contamination for high-redshift galaxy samples is low-mass Milky Way stars and brown dwarfs, which at low temperatures can have near-IR colors similar to those of high-redshift galaxies, and many studies have explored the selection of these sources from within extragalactic surveys <cit.>. Candidate brown dwarfs have been observed in extragalactic surveys, such as GLASS <cit.>.
To explore whether our sample contains objects with a high probability of being brown dwarfs, we looked at the sizes of the targets in our sample and their fits to stellar models and observed brown dwarf SEDs. We fit the targets in our sample using the jades-pipeline profile fitting software, which utilizes the python lenstronomy package <cit.>. We fit each source, as well as any nearby sources within 2^'' and up to two magnitudes fainter than the primary galaxy, with a Sérsic profile. Objects that are fainter or farther from the source are masked instead of fit. We use the final residuals to determine the goodness of fit for each source. We measured the sizes using the NIRCam F444W mosaic, as brown dwarfs are bright and unresolved at 4 μm <cit.>. To determine whether an object was unresolved, we identified those whose observed half-light radius was smaller than the NIRCam long-wavelength channel pixel size (0.063^''). We note that the maximum half-light radius measured using the same procedure on a sample of stars and brown dwarfs in GOODS-S and GOODS-N was 0.02^'', but we adopted a larger limit to broaden our search. These stars and brown dwarfs were identified using both photometric fits to theoretical brown-dwarf models and proper motions measured against HST observations, and will be described further in Hainline et al. (in prep). We fit the NIRCam photometry of our z > 8 candidates using both the SONORA cloud-free brown dwarf models from <cit.> as well as a sample of observed brown-dwarf spectra from the SpeX Prism Spectral Library[Compiled by Adam Burgasser and found online at <http://pono.ucsd.edu/~adam/browndwarfs/spexprism/>.]. As the SpeX spectra, in general, are only observed to 2.5μm, we took a group of objects across the temperature range that were detected in the Wide Field Infrared Survey Explorer (WISE) allWISE catalog <cit.> and used their photometry at 3.4 and 4.6 μm to create extrapolated spectra out to 5μm, which we used to estimate NIRCam photometry, following <cit.>. We supplemented these with empirical NIRCam SEDs of M dwarfs, obtained from a selection of extremely compact objects in F115W-F200W color-magnitude space, consistent with stellar evolutionary models and JWST observations of globular clusters <cit.>. The full set of model photometry was then fit to the observed NIRCam 0.2^'' diameter aperture photometry for the z > 8 candidate galaxies using a χ^2 minimization approach. We compared the resulting χ^2 minima for the stellar fits to those from the EAZY galaxy templates, and if an object had a Δχ^2 < 4 between the galaxy model fit and the stellar fit, it was flagged as a brown dwarf candidate. We find 303 objects in our z > 8 sample that are unresolved, with a half-light radius less than 0.063^'', while only six objects across both fields have stellar fits within Δχ^2 < 4 (two of these sources have lower χ^2 values with the brown dwarf fits). We flag the sources in the online table if they satisfy either of these requirements. Of these objects, only two sources are both unresolved and have stellar fits within Δχ^2 < 4: JADES-GS-53.0353-27.87776 (z_a = 10.82) and JADES-GN-189.19772+62.25697 (z_a = 8.61). The latter source, which is detected with HST WFC3/IR, was identified as the Y-dropout candidate GNDY-6474515254 in <cit.>. While this object has evidence for being a brown dwarf, FRESCO identified both [OIII]λλ5007,4959 emission lines (z_spec = 8.28), ruling out the brown dwarf hypothesis.
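A minimal sketch of the χ^2 comparison described above is given below; it assumes simple arrays of aperture fluxes and uncertainties, a single free normalization per stellar model, and it omits the details of the actual fitting pipeline (template grids, aperture corrections, and masking).

```python
import numpy as np

def scaled_chi2(model_flux, obs_flux, obs_err):
    """Chi-squared of a model SED against observed photometry, with a single
    free normalization solved for analytically."""
    obs_flux = np.asarray(obs_flux, dtype=float)
    model_flux = np.asarray(model_flux, dtype=float)
    w = 1.0 / np.asarray(obs_err, dtype=float) ** 2
    scale = np.sum(w * obs_flux * model_flux) / np.sum(w * model_flux ** 2)
    return np.sum(w * (obs_flux - scale * model_flux) ** 2)

def brown_dwarf_flag(obs_flux, obs_err, stellar_model_fluxes, chi2_galaxy, dchi2=4.0):
    """Flag a candidate if the best stellar/brown-dwarf fit lies within dchi2 of
    (or below) the best galaxy-template chi-squared."""
    chi2_star = min(scaled_chi2(m, obs_flux, obs_err) for m in stellar_model_fluxes)
    return (chi2_star - chi2_galaxy) < dchi2
```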
There are additional brown dwarf candidates among the z > 8 candidate galaxies with Δχ^2 < 4 identified in <ref>. We find 83 objects with an F444W half-light radius less than 0.063^'', and 17 objects with stellar fits within Δχ^2 < 4 (6 of these sources have lower χ^2 values with the brown dwarf fits). We caution that because of the larger flux uncertainties for these objects, it is more likely that models would fit these data with comparable χ^2 values, but we include flags in the online table in these cases. Only five of the sources in this subsample are unresolved with comparable brown dwarf fits to the EAZY fits: JADES-GS-53.02588-27.87203 (z_a = 8.84), JADES-GS-53.12444-27.81363 (z_a = 8.33), JADES-GS-53.07645-27.84677 (z_a = 8.64), JADES-GN-189.16606+62.31433 (z_a = 8.6), and JADES-GN-189.07787+62.23302 (z_a = 8.1). These sources, on visual inspection, do not appear to be strong brown dwarf candidates because they are quite faint, which would imply potentially unphysical distances compared to models of the halo brown dwarf population <cit.>. While there are 19 unresolved sources in the sample that are proximate to brighter objects (as in Section <ref>), none of these have stellar fits within Δχ^2 < 4 of the EAZY fits. §.§ z > 8 Candidates in the Literature As the GOODS-S and GOODS-N fields have been observed across a wide wavelength range and to deep observational flux limits, a number of the sources in our sample have been previously presented in the literature. As described in <cit.>, both JADES-GS-z10-0 (JADES-GS-53.15883-27.7735) and JADES-GS-z11-0 (JADES-GS-53.16476-27.77463) were previously identified in <cit.>, as UDFj-38116243 and UDFj-39546284, respectively. Both of these galaxies are in our z > 8 sample, as JADES-GS-53.15883-27.7735 and JADES-GS-53.16476-27.77463. Similarly, we also previously discussed GN-z11, first identified in <cit.>, and later further explored in <cit.>, which is present in our sample as JADES-GN-189.10605+62.24205. In <cit.>, the authors use the publicly available JEMS data to search for z > 8 candidates in GOODS-S and construct a sample of ten sources. Nine of the ten sources appear in our sample (their source XDFY-2376346017, which they measure at z_EAZY = 8.3^+0.2_-0.2, is at z_a = 7.89 in our fits, and we additionally measure a FRESCO z_spec = 7.975 for this source), and of those, eight sources were previously known and are in our sample. The remaining two sources were not previously known, and also appear in our sample: JADES-GS-53.13918-27.78273 (z_a = 10.49) and JADES-GS-53.16863-27.79276 (z_a = 11.71). <cit.> perform a similar search, and find two additional candidates that fall into our sample: JADES-GS-53.17551-27.78064 (z_a = 9.66), and JADES-GS-53.12166-27.83012 (z_a = 9.42) (they also independently recover JADES-GS-53.16863-27.79276). The photometric redshifts presented in <cit.> for JADES-GS-53.13918-27.78273 (XDFH-2334046578 in their sample, z_EAZY = 11.8^+0.4_-0.5) and JADES-GS-53.16863-27.79276 (XDFJ-2404647339 in their sample, z_EAZY = 11.4^+0.4_-0.5) are broadly similar to our values, but we measure a much lower redshift for the former due to the availability of the F150W flux from JADES.
Similarly, <cit.>, estimate similar photometric redshifts to what we find for JADES-GS-53.17551-27.78064 (UDF-21003 in their sample, z_phot = 9.79^+0.15_-0.13) and JADES-GS-53.16863-27.79276 (UDF 16748 in their sample, z_phot = 11.77^+0.29_-0.44), but they claim a much higher redshift for JADES-GS-53.12166-27.83012 (UDF-3216 in their sample, z_phot = 12.56^+0.64_-0.66), which is inconsistent with the measured F150W flux. We note that this latter candidate appears in our catalog of sources proximate to brighter objects, although with Δχ^2 = 4.32. For the full sample of Δχ^2 > 4 candidates at z > 8, we additionally cross-matched their sky positions against GOODS-S and GOODS-N high-redshift catalogs in the literature, including <cit.> and <cit.>. Because our JADES mosaics were aligned using the GAIA reference frame, we had to carefully visually match against each sample, which have different reference frames. In Table <ref>, we list the targets that were matched to sources previously discussed in the literature, the photometric redshift for these sources, and we include the references for each object. We find 47 objects across the full Δχ^2 > 4 catalog have been discussed previously in the literature, 42 in GOODS-S and 5 in GOODS-N. As previously mentioned seven are at z_a > 10, three are at z_a = 9 - 10, and the remaining 37 are at z_a = 8 - 9. l l c 3 0pt Δχ^2 > 4 Catalog Sources in the Literature. JADES ID EAZY z_a Reference(s) JADES-GS-53.15751-27.76677 8.00 1, 2, 3, 4, 5, 6, 8, 10, 12, 13, 16 JADES-GS-53.16415-27.78452 8.02 8, 10 JADES-GS-53.13563-27.79185 8.02 12, 16 JADES-GN-189.27457+62.21053^a 8.03 12 JADES-GS-53.08174-27.89883 8.04 15 JADES-GS-53.13849-27.85854 8.04 12, 16 JADES-GS-53.148-27.79571 8.04 4, 10, 12, 15, 16 JADES-GS-53.06029-27.86353 8.04 12, 13, 16 JADES-GS-53.17727-27.78011 8.08 8, 12, 15, 16 JADES-GS-53.13675-27.83746 8.13 13 JADES-GS-53.08745-27.81492 8.17 7, 12, 16 JADES-GS-53.06035-27.86355 8.17 12, 13, 16 JADES-GS-53.07052-27.86725 8.21 12, 16 JADES-GS-53.05924-27.8353 8.22 12, 16 JADES-GS-53.1459-27.82279 8.23 8 JADES-GS-53.13569-27.83884 8.24 13 JADES-GN-189.2032+62.24245 8.28 12 JADES-GS-53.14585-27.82274 8.28 8 JADES-GS-53.10393-27.89059 8.35 12, 16 JADES-GS-53.20988-27.77928 8.36 12, 16 JADES-GN-189.09186+62.25744 8.38 12 JADES-GS-53.1571-27.83708 8.39 12, 15, 16 JADES-GS-53.08738-27.86033 8.46 8, 12, 13, 16 JADES-GS-53.10224-27.85925 8.46 12 JADES-GS-53.16447-27.80218 8.50 4, 6, 8, 9, 13, 16, 19 JADES-GS-53.0865-27.8592 8.50 12, 16 JADES-GS-53.08741-27.8604 8.51 8, 12, 13, 16 JADES-GS-53.15891-27.76508 8.52 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 13, 16, 19 JADES-GS-53.08932-27.8727 8.53 8, 13 JADES-GS-53.1777-27.78478 8.53 6, 8, 9, 19 JADES-GS-53.07581-27.87938 8.55 8, 12, 13, 16 JADES-GS-53.15784-27.76271 8.57 12, 16 JADES-GN-189.19772+62.25697 8.61 12, 13, 16 JADES-GS-53.16767-27.80017 8.64 9, 12, 14, 16, 19 JADES-GS-53.16337-27.77569 8.65 6, 8, 9, 10, 12, 16, 19 JADES-GS-53.15342-27.77844 8.81 10 JADES-GN-189.2114+62.1703 8.92 12, 16 JADES-GS-53.13363-27.84499 9.36 14, 16 JADES-GS-53.12166-27.83012^a 9.42 20 JADES-GS-53.17551-27.78064 9.66 20 JADES-GS-53.13918-27.78273 10.49 19 JADES-GS-53.15883-27.7735^b 10.84 6, 9, 12, 14, 16, 17, 18, 19, 20 JADES-GN-189.10604+62.24204^c 11.00 11, 12, 14, 16 JADES-GS-53.16863-27.79276 11.71 19, 20 JADES-GS-53.16476-27.77463^d 12.31 6, 8, 17, 18, 19, 20 JADES-GS-53.16635-27.82156^e 12.46 17, 18 JADES-GS-53.14988-27.7765^f 13.41 17, 18 a: Proximate to a brighter source, as described in <ref>, b: JADES-GS-z10-0, c: GN-z11, d: 
JADES-GS-z11-0, e: JADES-GS-z12-0, f: JADES-GS-z13-0, References: 1: <cit.>, 2: <cit.>, 3: <cit.>, 4: <cit.>, 5: <cit.>, 6: <cit.>, 7: <cit.>, 8: <cit.>, 9: <cit.>, 10: <cit.>, 11: <cit.>, 12: <cit.>, 13: <cit.>, 14: <cit.>, 15: <cit.>, 16: <cit.>, 17: <cit.>, 18: <cit.>, 19: <cit.>, 20: <cit.>. §.§ Alternate EAZY Template Fitting Results In <cit.>, the authors present a series of theoretical galaxy templates[<https://ceers.github.io/LarsonSEDTemplates>] designed to be used with EAZY to better model the bluer UV slopes expected for very high redshift galaxies. To create these templates, the authors first used EAZY to calculate photometric redshifts for mock galaxies in the CEERS Simulated Data Product V32 catalog using the EAZY “tweak_fsps_QSF_v12_v3” templates, which were derived from FSPS. At this point, the authors created an additional set of templates that better matched the simulated m_F200W - m_F277W colors for the z > 8 galaxies in their sample using the binary stellar evolution models BPASS <cit.> with nebular emission derived from the spectral synthesis code CLOUDY <cit.>. These templates resulted in significantly better photometric redshift estimates for the mock galaxies with the CEERS filter set. To explore how our choice of EAZY templates affects our final z > 8 sample, we fit the photometry for all of the objects recovered across GOODS-N and GOODS-S with EAZY using the recommended template set from <cit.> for fitting z > 8 galaxies: tweak_fsps_QSF_v12_v3 along with the BPASS-only “Set 1” and the “BPASS + CLOUDY – NO LyA” “Set 4” templates. We ran EAZY in an otherwise identical manner, including the template error function, and we utilized the same photometric offsets as provided in Table <ref>. The resulting photometric redshifts for our primary sample of z > 8 sources show no significant differences or noticeably improved fits: only 4% have |z_Larson - z_a| / (1 + z_a) > 0.15 (23 sources in GOODS-S and 5 sources in GOODS-N). More importantly, only 2.5% of the sources (14 in GOODS-S and 4 in GOODS-N) have significantly different photometric redshifts with z_Larson < 8. In the majority of these cases, the lower-redshift solution offered by the <cit.> templates is at the same secondary χ^2 minimum seen for our own template fits, and the validity of the fit is strongly dependent on the observed F090W or F115W fluxes. The sources in our sample with Δχ^2 < 4 fits are less robust, as has previously been discussed, and show more discrepancy between the fits with our EAZY templates and those from <cit.>. Here, 37% have |z_Larson - z_a| / (1 + z_a) > 0.15 (67 sources in GOODS-S and 17 sources in GOODS-N). 64 of these GOODS-S sources and 15 of the GOODS-N sources (for a total fraction of 35% of the Δχ^2 < 4 objects) have z_Larson < 8. In addition, we derived a sample of z_a > 8 sources from fits with the <cit.> templates after applying the same SNR, P(z > 7) and Δχ^2 cuts as described in <ref>. We compared the resulting candidates with those from the original template set, and after visual inspection found a total of 10 additional z > 8 candidates (7 in GOODS-S and 3 in GOODS-N) which we list in Table <ref>. Of those sources, 5 are at z_a = 6 - 8 and 2 have P(z > 7) < 0.7 in our own EAZY fits. The remaining three objects are quite faint, but should be considered alongside the main sample. We conclude that our results would not be significantly improved by using the <cit.> templates.
§.§ Spectroscopic Redshifts In total, we have spectroscopic redshifts for 42 objects in our sample. As discussed previously, five of the high-redshift galaxies have been spectroscopically confirmed to lie at z > 10: JADES-GS-z10-0, JADES-GS-z11-0, JADES-GS-z12-0, JADES-GS-z13-0 <cit.>, and GN-z11 <cit.>. In this section, we discuss the other objects in our sample with spectroscopic confirmation from both JWST NIRSpec and JWST NIRCam grism spectroscopy from FRESCO. We also compare the photometric redshifts to the spectroscopic redshifts and discuss the observed offset between the two values. Four additional GOODS-S sources have NIRSpec spectroscopic redshifts at z > 8. Besides GN-z11, there are no GOODS-N NIRSpec spectroscopic redshifts for our sample. An additional 28 sources in our sample have FRESCO spectroscopic redshifts, with 19 objects in GOODS-S and 8 in GOODS-N. As described in Section <ref>, there are 4 additional GOODS-S sources, and 1 GOODS-N object with FRESCO z_spec that are proximate to other bright sources. These FRESCO spectroscopic redshifts were derived from either single [OIII]λ5007 line detections or, in some brighter cases, multiple line detections. Fifteen of the sources in our sample of z > 8 candidates have FRESCO spectroscopic redshifts at z < 8 (13 in GOODS-S and 1 in GOODS-N), in all cases at z_spec > 7.6. We chose to include these objects as they satisfy our EAZY selection criteria. In Figure <ref> we show the spectroscopic redshifts of the objects in our sample against their photometric redshift. There are no catastrophic outliers, defined here as those objects where |z_spec - z_a| / (1 + z_spec) > 0.15. As discussed previously with individual objects, the photometric redshifts have a systematic offset such that EAZY is slightly overpredicting the distances to these galaxies (⟨Δ z = z_a - z_spec⟩ = 0.26). To estimate the scatter on the relationship, we also calculated the normalized mean absolute deviation σ_NMAD, defined as: σ_NMAD = 1.48 × median( | [δ z - median(δ z)] / [1 + z_spec] | ), where δ z = z_spec - z_phot. For all of our sources with spectroscopic redshifts, σ_NMAD = 0.04. Understanding the source of this offset is quite important given the usage of photometric redshifts in deriving statistical parameters like the UV luminosity function. By constraining the EAZY fit for each of these sources to be at the spectroscopic redshift, we find that the primary driver of these higher-redshift fits is the flux in the filter that spans the Lyman break. In the fits where the redshift is constrained to z_spec, the template fits overestimate the observed fluxes in the band that spans the Lyman break. While this effect may be due to photometric scatter upwards in those bands, it is more likely due to the templates themselves. In <cit.>, the authors present z_spec = 8 - 10 objects with NIRSpec spectroscopy which show a larger offset (⟨Δ z = z_a - z_spec⟩ = 0.50) to higher photometric redshifts, and these authors also hypothesize that this might be a result of potential differences between the observed high-redshift galaxy SEDs and the templates used to model high-redshift galaxies. One potential source of excess flux in the UV is the strength of the Lyman-α emission line in our templates. To explore the effect of this line, we first took our EAZY templates and removed the Lyman-α contribution by cutting out the flux between 1170 and 1290 Å and replacing that portion in each template with a linear fit.
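Before turning to the refit results, note that the offset and scatter statistics used here reduce to a few lines of numpy; the array names below are assumptions for illustration.

```python
import numpy as np

def photoz_vs_specz_stats(z_phot, z_spec):
    """Mean offset <z_phot - z_spec>, sigma_NMAD, and catastrophic-outlier fraction."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    dz = z_spec - z_phot                              # as in the definition above
    mean_offset = np.mean(z_phot - z_spec)            # <Delta z> = <z_a - z_spec>
    sigma_nmad = 1.48 * np.median(np.abs((dz - np.median(dz)) / (1.0 + z_spec)))
    outlier_frac = np.mean(np.abs(z_phot - z_spec) / (1.0 + z_spec) > 0.15)
    return mean_offset, sigma_nmad, outlier_frac
```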
Using these templates without Lyman-α, we re-fit every one of our sources with spectroscopic redshifts, and calculated a new σ_NMAD = 0.02, as well as a difference in the average offset ⟨Δ z = z_a - z_spec⟩ = 0.19. While this is smaller, the offset is still present, indicating that Lyman-α flux is not the dominant factor. One alternate possibility is that the strength of the optical emission lines at long wavelength may not be fully reflected in our limited template set, and for those z_spec < 9 sources (where the optical emission is not redshifted out of the NIRCam filters), this may have an effect of pushing fits at higher redshifts. At the redshift range of our sample, the FRESCO redshifts are calculated preferentially for those objects with strong line emission, which may not be probed by our template set. Understanding this offset may prove important for future fits to high-redshift galaxies, and it will necessitate the creation of templates derived from high-resolution NIRSpec spectra of these sources once larger samples are observed. §.§ Rejected High-Redshift Candidates Finding and characterizing high-redshift galaxies is a complex process, even given the IR filters on board JWST. In our visual inspection, we found a number of bright galaxies that we rejected from our z > 8 sample because of multiple reasons, and in this section, we will provide four examples as case studies to demonstrate the sorts of galaxies with colors that can mimic those of high-redshift galaxies. This analysis follows discussions in <cit.> and <cit.>, and seen directly with CEERS-93316, a candidate galaxy at z_phot = 16.4 which was shown to be at z_spec = 4.912 <cit.>. In Figure <ref> we provide SEDs for JADES-GS-53.0143-27.88355, JADES-GS-53.08294-27.85563, JADES-GS-53.20055-27.78493, and JADES-GN-189.30986+62.20844. Here, we highlight the solution at z < 7 in each, while also leaving the overall minimum χ^2 solution. JADES-GS-53.0143-27.88355 (m_Kron, AB = 29.3), appears from the thumbnails and from the EAZY minimum χ^2 fit to be an F150W dropout at z_a = 12.51. However, the red UV slope indicates that perhaps this object is much dustier and at low-redshift (z_alt = 3.41), where the Hα emission line was boosting the observed F277W flux. This UV slope could also arise from the bright source to the southeast <cit.>. In addition there is what appears to be a flux detection in the F115W thumbnail which helps to rule out the high-redshift solution. JADES-GS-53.08294-27.85563 (m_F277W, Kron = 26.8) appears to be a bright F150W dropout clump immediately adjacent to another object. The SED is well fit at z_a = 14.51, and the fit constrained to be at z < 7 is significantly worse (z_alt = 3.56). The secondary source, which is detected with all five of the JADES HST/ACS bands (although only at 2σ significance for F435W), has an EAZY template redshift z_a = 3.4. This redshift puts the Balmer break between the NIRCam F150W and F200W filters, and is likely what is being seen in JADES-GS-53.08294-27.85563. While there is a detection in this source in F150W at a SNR = 2.33, F090W and F115W are non-detected at SNR < 2. There is potentially a line detection in the FRESCO spectrum for this source, which if it were He1 λ1.08 μm, would put this object at z_spec = 3.16. At that redshift, the [OIII]λ5007 line would contribute flux to F200W and F210M, and Hα flux would boost F277W. JADES-GS-53.20055-27.78493 (m_F277W, Kron = 28.8) appears to be an F200W dropout at z_a = 15.89 southeast of another, brighter source. 
While positive flux is observed at the 1 - 2 nJy level in F115W and F150W, this is at SNR < 0.8 in both cases. This source was ruled out as an F200W dropout because of the 4σ detection of F090W flux, which can be seen in the thumbnail. JADES-GN-189.30986+62.20844 (m_F277W, Kron = 27.4) is best fit at z_a = 11.36, placing the observed Lyman break at 1.5μm. This object is proximate to another, brighter galaxy, with an EAZY fit at z_a = 1.87 and a complex morphology, first observed as part of the GOODS survey in <cit.>. This is at a lower redshift than the alternate EAZY result for JADES-GN-189.30986+62.20844, z_a = 2.58, but the faint F115W detection (SNR = 3.64) demonstrates that the minimum χ^2 redshift solution for this object is erroneous. § DISCUSSION In this section we explore the selection and derived properties of this large sample of candidate high-redshift galaxies in more detail. A full description of the theoretical implications of these sources is outside the scope of this paper. The stellar mass and star formation histories for these sources will be the focus of a study by Tacchella et al. (in prep), while the full estimation of the evolution of the UV luminosity function at z > 8 from the JADES sources will be presented in Whitler et al. (in prep). §.§ UV Magnitudes We calculated the UV magnitudes from the EAZY fits to explore the range of intrinsic UV brightnesses for the sample. To calculate M_UV, we started by fitting the Kron aperture catalog fluxes with the redshift fixed to the value derived from the smaller circular apertures or, if available, the spectroscopic redshift for each source. This was done so as not to bias the resulting UV magnitudes against more extended objects, since the Kron fluxes encompass more of the total flux. From here, we took the best-fitting rest-frame EAZY template for each object and passed it through a mock top-hat filter centered at 1500Å with a width of 100Å, and calculated the intrinsic UV magnitude based on the resulting flux. In Figure <ref> we show the resulting M_UV values against the photometric and spectroscopic redshift for the sample. As can be expected, GN-z11 is by far the brightest source in the sample at M_UV = -22.0. Excitingly, we find 227 objects in our sample with M_UV > -18, and 16 objects (all in GOODS-S) with M_UV > -17, entirely at z_a < 11.5. These UV-faint high-redshift galaxy candidates demonstrate the extraordinary depth of the JADES survey. In addition, these results stand in contrast to the decline in the number counts of HST-observed galaxies discussed in <cit.> and <cit.>, and help to confirm results from other JWST surveys <cit.>. §.§ Dropout Colors As discussed in the introduction, traditionally, high-redshift samples are assembled by targeting Lyman dropout galaxies in color space. In <cit.>, the authors used the JAGUAR mock catalog <cit.> to explore the NIRCam colors of simulated dropout samples, and demonstrated the tradeoff between sample completeness and accuracy for high-redshift dropout galaxies. Because of the utility of dropout selection, we sought to explore how successful this technique alone would be at finding the JADES z > 8 candidate galaxies. We utilized a uniform two-color selection scheme to target F090W, F115W, and F150W dropouts within our primary z > 8 sample, where in each case the color limit for the filters that targeted the Lyman break was m_1 - m_2 > 1.0, while the color limit for the filters that targeted the rest-frame UV was m_2 - m_3 < 0.5.
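A minimal sketch of this two-color cut, applied to catalog AB magnitudes (or magnitude limits for non-detections), is shown below; the band ordering in the usage comment is an assumption for illustration.

```python
import numpy as np

def two_color_dropout(m1, m2, m3, break_color=1.0, uv_color=0.5):
    """Uniform two-color dropout cut: m1 - m2 > 1.0 across the Lyman break and
    m2 - m3 < 0.5 in the rest-frame UV (AB magnitudes, or limits for non-detections)."""
    m1, m2, m3 = (np.asarray(m, dtype=float) for m in (m1, m2, m3))
    return (m1 - m2 > break_color) & (m2 - m3 < uv_color)

# e.g., F115W dropouts (band choices illustrative):
# f115w_dropouts = two_color_dropout(m_f115w, m_f150w, m_f200w)
```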
In Figure <ref>, we show the F090W, F115W, and F150W color selection in the top row plots, targeting the entire z > 8 sample in each panel. In the bottom panel we show a photometric redshift histogram for the sources in the sample with a thick grey line and, in the shaded regions, the F090W, F115W, and F150W dropout sample distributions. We sum these distributions and plot the total with a thick black line. The lone F090W dropout at z_a > 15 is JADES-GS-53.12692-27.79102, which we discuss in Section <ref> and plot in the upper-right panel of Figure <ref>. We find that 71% of the z > 8 sample would be selected as dropouts with these color criteria, while 171 GOODS-S and 37 GOODS-N objects in the sample are not selected by any scheme; these unselected sources are predominantly at z ∼ 8.5 and z ∼ 11.5, as seen from the bottom panel of the figure. These candidates have colors just outside of the selected color space: at z ∼ 8.5, while m_F090W - m_F115W > 1.0, the m_F115W - m_F150W color falls between 0.5 and 1.0. A similar effect is seen for the F115W and F150W dropouts at z ∼ 11.5. This effect could be mitigated by expanding the selection criteria, but at the risk of including significantly more lower-redshift interlopers <cit.>. Another way of looking at color selection is by directly plotting the dropout color against the EAZY photometric redshift. At z_phot = 8, the Lyman break is at ∼ 1.1 μm, which is on the blue edge of the NIRCam F115W band, and by z_phot = 10, the Lyman break should sit between the F115W and F150W filters, so for the objects at increasing photometric redshifts in this range, the F115W SNR will vary as the galaxy's rest-frame UV emission drops out of this band. In Figure <ref>, we plot the m_F115W - m_F150W color against the EAZY z_a value for the GOODS-N and GOODS-S objects at z_phot = 8 - 10. As expected, the m_F115W - m_F150W color increases in this redshift range. We find that 95% of the candidate high-redshift galaxies selected as F115W dropouts by our cuts have z_a > 8.75, while 16% of the candidates at z_a = 8.75 - 10.0 in our sample would still fall outside of this simple color cut. §.§ Using Δχ^2 to Discern Between High- and Low-Redshift Template-fitting Solutions Fitting a galaxy's SED with templates or stellar population synthesis models enables a measurement of the probability of a galaxy being at a range of photometric redshifts. In this study, we have used the difference in χ^2 values between the best-fit model and the model constrained to be at z < 7 as our metric of accuracy. The exact Δχ^2 value we measure for each object is dependent on the template set used, as well as the flux uncertainties and, in our case, the template error function and photometric offsets used. As a result, as is the case for any continuous figure of merit, choosing a specific cut is a tradeoff between sample accuracy and completeness. In <cit.>, the authors use the injection and recovery of mock galaxies in the CEERS extragalactic data to argue that Δχ^2 > 4, the value we adopt in this current work <cit.>, is not sufficient for properly removing low-redshift interlopers. Instead, these authors recommend the stricter cut of Δχ^2 > 9. Because we have a larger number of observed photometric filters in the JADES data, choosing a low Δχ^2 limit may result in the inclusion of more potential interlopers, which has led us to release output catalogs that include all of the sources we visually inspected regardless of the chosen Δχ^2 cut.
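The completeness/accuracy tradeoff can be inspected directly from the output catalogs by tabulating the sample as a function of the Δχ^2 threshold; a short sketch, with assumed column arrays, is given below.

```python
import numpy as np

def sample_vs_dchi2_threshold(dchi2, m_f277w, thresholds=(4, 9, 15, 20)):
    """Number of candidates and median F277W magnitude above each dchi2 threshold."""
    dchi2 = np.asarray(dchi2, dtype=float)
    m_f277w = np.asarray(m_f277w, dtype=float)
    summary = []
    for t in thresholds:
        keep = dchi2 > t
        summary.append((t, int(keep.sum()), float(np.median(m_f277w[keep]))))
    return summary
```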
If we do instead look only at those objects in our sample with Δχ^2 > 9, our primary sample is reduced to 483 candidates (358 in GOODS-S and 125 in GOODS-N), or 67%. This subsample selected with a stricter cut has a similar redshift distribution to our full sample (19 of the 33 candidates at z > 12 would still be included), but the sources have brighter F277W magnitudes, as would be expected. The median F277W magnitude for the Δχ^2 > 4 sample is 29.11, while the median F277W magnitude for the Δχ^2 > 9 sample is 28.96. It should be noted that every source in our sample with a spectroscopic redshift has Δχ^2 > 13. Pushing the cut to even stricter values, we find that 45% of the original sample has Δχ^2 > 15 and 36% of the original sample has Δχ^2 > 20. §.§ Candidate Galaxies with Red Long-Wavelength Slopes In our visual inspection of the galaxy candidates, we find a number of high-redshift candidates with very red long-wavelength slopes, following the discovery of similar sources at z_phot = 5 - 9 in <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. These objects are often very bright and unresolved in F444W, and in many cases are comparatively faint at shorter wavelengths. To systematically search for these sources in our full sample, we selected those objects that have m_F277W - m_F444W > 1.3 and m_F200W - m_F356W > 0.0. These color limits ensure that the observed red long-wavelength slope is not due to an emission line boosting the F444W flux, and return sources similar to those presented in the literature. For our sample, these cuts select 12 objects (9 in GOODS-S and 3 in GOODS-N). Of those sources, 11 are at z_a = 8 - 10, while one source is at z_a = 11.64. We provide the IDs, z_a values, F444W magnitudes (measured in a 0.2^'' aperture), and colors for these sources in Table <ref>, and we show six of these sources in Figure <ref>. Outside of the highest-redshift source, JADES-GS-53.11023-27.74928, these sources have fairly tight lower limits on their redshift due to both the lack of flux observed in the F090W band and the red slope not being easily reproduced at low redshift. However, JADES-GS-53.11023-27.74928 is very faint (∼ 1 nJy) at wavelengths shorter than 2μm, making a photometric redshift estimate difficult. JADES-GS-53.18354-27.77014, a source with a FRESCO spectroscopic redshift (z_spec = 8.38), is extended, with three visible clumps spanning 0.6^'' (2.9 kpc at z_spec = 8.38), of which the central knot has a very red observed UV-through-optical slope.
Sources with Red Long-Wavelength Slopes
JADES ID                       EAZY z_a   m_F444W   m_F277W - m_F444W   m_F200W - m_F356W
JADES-GN-189.05064+62.27935    8.04       27.63     1.495               0.209
JADES-GS-53.04601-27.85399     8.3        26.44     1.493               0.602
JADES-GS-53.19904-27.77207     8.31       29.14     1.313               0.182
JADES-GS-53.19211-27.75252     8.53^a     26.79     1.354               0.608
JADES-GN-189.18036+62.28851    8.69       28.42     1.51                1.021
JADES-GS-53.18392-27.78691     8.71       28.14     1.76                0.907
JADES-GS-53.1387-27.79248      8.87       28.4      1.88                0.475
JADES-GN-189.17121+62.21476    8.91^b     26.41     1.514               0.535
JADES-GS-53.18354-27.77014     8.95^c     27.04     1.529               0.506
JADES-GS-53.18087-27.80577     9.33       29.42     1.414               0.185
JADES-GS-53.18448-27.79696     9.66       28.8      1.355               1.783
JADES-GS-53.11023-27.74928     11.64      28.52     1.339               0.372
a: z_spec = 7.99, b: z_spec = 8.62, c: z_spec = 8.38
The origin of these sources is not obvious. One possible cause of such a red slope is the presence of very hot dust from supermassive black hole growth in these objects, as discussed in <cit.>, <cit.> and <cit.>.
This would be of interest given the lack of ultra-high-redshift active galaxies currently known, and the short timescales over which these supermassive black holes would have had to grow in the early universe. Another alternative is that these sources could have strong optical line emission that boosts the long-wavelength flux, similar to what is presented in <cit.>. In this work, the authors describe how galaxy models with young stellar populations or supermassive black hole growth can replicate the photometry for a sample of sources selected from JWST CEERS. An alternate view is offered in <cit.>, who argue that sources like these are instead very massive, and the red long-wavelength slope is indicative of an evolved population, although this interpretation is in contrast to theoretical models of galaxy growth <cit.>. A continued exploration of the stellar properties of JADES sources at z = 7 - 9 with red long-wavelength slopes is discussed in Endsley et al. (in prep). However, until a number of these sources are followed up with deep spectroscopy, their nature will remain elusive. § CONCLUSIONS In this paper, we have assembled a sample of 717 galaxies and candidate galaxies at z > 8 selected from the 125 sq. arcmin JWST JADES observations of GOODS-N and GOODS-S. We combined these data with publicly available medium-band observations from JEMS and FRESCO, and describe our data reduction and photometric extraction. Our primary results are listed below: * Using the template-fitting code EAZY, we calculated photometric redshifts for the JADES sources, and selected z > 8 candidates based on source SNR, the resulting probability of the galaxy being at z > 7, P(z > 7), and the difference in χ^2 between the best-fit at z > 8 and the fit at z < 7. The final sample was visually inspected by seven of the authors, and contains 182 objects in GOODS-N and 535 objects in GOODS-S, consistent with the areas and observational depths in the different portions of the JADES survey. * The photometric redshifts of these sources extend to z ∼ 18, with an F277W Kron magnitude range of 25 - 31 (AB). The brightest source in our sample is the previously studied galaxy GN-z11 (m_F277W, Kron = 25.73). We find 33 galaxy candidates at z_a > 12, with the highest-redshift candidate being JADES-GN-189.15981+62.28898 with a photometric redshift of z_a = 18.79. * We find a number of galaxies and galaxy candidates at z = 8 - 12 that are visually extended across many kpc and consist of multiple UV-bright clumps with underlying diffuse optical emission, potentially demonstrating very early massive galaxy growth. * Forty-two of the sources in our sample have spectroscopic redshift measurements. Each spectroscopic redshift agrees with the photometric redshift for the source within |z_spec - z_a| / (1 + z_spec) < 0.15. We find an average offset between the calculated photometric redshifts and the spectroscopic redshifts of ⟨Δ z = z_a - z_spec⟩ = 0.26, lower than the results seen with other high-redshift samples in the literature. We speculate that the offset may be due to differences between the templates used to fit these objects and the observed galaxy SEDs, which will be mitigated as more accurate templates are created using high-redshift galaxy spectra from JWST/NIRSpec. * To explore whether any of the sources are consistent with being low-mass stars, we fit our sources with brown dwarf models and measure whether the objects are unresolved.
The galaxy templates fit the photometry with better accuracy than the brown dwarf templates for the vast majority of cases. * We demonstrate that while traditional color selection would find most of the sources in our sample, at specific redshift ranges there are a number of sources that fall outside of typical color selection criteria. * These results are robust to the exact EAZY templates used; the vast majority of sources found in our sample have similar redshifts when fit using the independently derived templates from <cit.>. * Our sample includes a number of intriguing sources with red long-wavelength slopes, potentially from dust heated by a growing supermassive black hole at z > 8. This red slope could also be due to an abundance of strong optical line emission from young stellar populations. Taken together, these sources represent an exciting and robust sample for follow-up studies of the early universe. The detailed stellar populations, as well as the resulting evolution of the mass and luminosity functions for the z > 8 JADES galaxies will be found in forthcoming studies from the JADES collaboration members. We also look forward to JADES Cycle 2 observations which will push to fainter observed fluxes. In addition, many of these sources will be observed with JADES NIRSpec MSA spectroscopy to both confirm their redshifts and to explore their ionization and metallicity properties. JWST has only just opened the door to the early universe, and the years to come promise to be the most scientifically fruitful in the history of extragalactic science. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-03127 for JWST. These observations are associated with PID 1063, 1345, 1180, 1181, 1210, 1286, 1963, 1837, 1895, and 2738. Additionally, this work made use of the lux supercomputer at UC Santa Cruz which is funded by NSF MRI grant AST1828315, as well as the High Performance Computing (HPC) resources at the University of Arizona which is funded by the Office of Research Discovery and Innovation (ORDI), Chief Information Officer (CIO), and University Information Technology Services (UITS). We acknowledge support from the NIRCam Science Team contract to the University of Arizona, NAS5-02015. DJE is supported as a Simons Investigator. E.C.L acknowledges support of an STFC Webb Fellowship (ST/W001438/1). S.C acknowledges support by European Union’s HE ERC Starting Grant No. 101040227 - WINGS. AJB, AJC, JC, IEBW, AS, & GCJ acknowledge funding from the “FirstGalaxies” Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 789056). JW, WB, FDE, LS, TJL, and RM acknowledges support by the Science and Technology Facilities Council (STFC) ERC Advanced Grant 695671, “QUENCH”. JW also acknowledges support from the Foundation MERAC. RM also acknowledges funding from a research professorship from the Royal Society. The research of CCW is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. REH acknowledges support from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746060. 
LW acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2137419. Funding for this research was provided by the Johns Hopkins University, Institute for Data Intensive Engineering and Science (IDIES). This research is supported in part by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. DP acknowledges support by the Huo Family Foundation through a P.C. Ho PhD Studentship. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant no. 140. § ADDITIONAL TABLES AND FIGURES
Additional z > 8 candidates from EAZY <cit.> Template Fits. The first group of columns gives the photometric redshifts from the <cit.> templates; the second group gives the photometric redshifts from this study. Columns in each group: EAZY z_a, χ^2_min, z_σ68, low, z_σ68, high, P(z > 7), Δχ^2.
JADES-GS-53.05706-27.81652   8.92 13.72 8.54 10.01 0.984 6.190  | 1.89 15.14 2.14 10.25 0.793 0.0
JADES-GS-53.1153-27.80992    8.52 6.16 7.42 8.99 0.935 4.108    | 7.39 6.45 7.36 9.55 0.915 3.226
JADES-GS-53.13383-27.82825   8.02 13.13 7.58 8.09 1.000 16.976  | 7.89 10.72 7.72 7.98 1.000 32.386
JADES-GS-53.14036-27.79026   8.61 20.72 8.28 8.77 0.979 5.167   | 6.99 23.51 7.01 8.69 0.859 0.0
JADES-GS-53.14712-27.77639   8.14 18.44 8.04 8.30 0.876 5.791   | 8.30 20.71 6.06 8.44 0.691 2.222
JADES-GS-53.14992-27.88179   8.94 26.30 8.14 8.95 0.919 4.484   | 8.96 25.67 2.42 8.98 0.678 2.254
JADES-GS-53.18389-27.82345   8.20 15.47 7.90 8.38 1.000 20.058  | 1.83 20.18 1.84 8.28 0.578 0.0
JADES-GN-189.07044+62.29257  8.34 23.40 7.50 8.45 1.000 12.288  | 7.20 15.42 7.17 8.22 1.000 12.651
JADES-GN-189.26946+62.19909  8.11 13.93 7.60 8.29 0.997 7.348   | 7.84 15.67 7.40 8.12 0.995 5.804
JADES-GN-189.29444+62.14231  8.79 16.89 8.16 9.00 0.984 7.340   | 1.86 18.56 1.85 8.79 0.499 0.0
Facilities: JWST (NIRCam, NIRSpec), HST (ACS). Software: astropy <cit.>, matplotlib <cit.>, numpy <cit.>, scipy <cit.>, Photutils <cit.>, lenstronomy <cit.>, EAZY <cit.>, fsps <cit.>.
http://arxiv.org/abs/2306.14913v1
20230619155928
FSUIE: A Novel Fuzzy Span Mechanism for Universal Information Extraction
[ "Tianshuo Peng", "Zuchao Li", "Lefei Zhang", "Bo Du", "Hai Zhao" ]
cs.CL
[ "cs.CL", "cs.AI" ]
FSUIE: A Novel Fuzzy Span Mechanism for Universal Information Extraction ===================================================== Universal Information Extraction (UIE) has been introduced as a unified framework for various Information Extraction (IE) tasks and has achieved widespread success. Despite this, UIE models have limitations. For example, they rely heavily on span boundaries in the data during training, which does not reflect the reality of span annotation challenges: slightly adjusted boundary positions can still yield valid spans. Additionally, UIE models do not exploit the fact that spans in IE have limited length. To address these deficiencies, we propose the Fuzzy Span Universal Information Extraction (FSUIE) framework. Specifically, our contribution consists of two concepts: fuzzy span loss and fuzzy span attention. Our experimental results on a series of main IE tasks show significant improvements over the baseline, especially in terms of fast convergence and strong performance with small amounts of data and training epochs. These results demonstrate the effectiveness and generalization of FSUIE in different tasks, settings, and scenarios. § INTRODUCTION Information Extraction (IE) focuses on extracting predefined types of information from unstructured text and covers tasks such as Named Entity Recognition (NER), Relationship Extraction (RE), and Sentiment Extraction (SE). To uniformly model the various IE tasks under a unified framework, a generative Universal Information Extraction (UIE) framework was proposed in <cit.> and has achieved widespread success on various IE datasets and benchmarks. Because generative UIE requires a powerful generative pre-trained model, its time overhead is large and its efficiency is unsatisfactory. For this reason, this paper examines span-based UIE to unify various IE tasks, conceptualizing IE tasks as span prediction. However, UIE models still have some limitations. First, because IE involves training machine learning models to extract specific information from unstructured text, it relies heavily on human annotation, in which annotators identify the information to be extracted and manually mark the corresponding span boundaries in the text. However, due to the complexity of natural language, determining the correct span boundaries can be challenging, leading to the phenomenon of annotation ambiguity. As shown in Figure <ref>, different annotated spans can be considered reasonable. In the span learning of UIE models, the method of teacher forcing is commonly used for loss calculation, making the model dependent on the precise span boundaries given in the training data. This can cause performance bottlenecks due to annotation ambiguity. When the UIE model places too much emphasis on exact boundaries, the supervision information is under-utilized. When predicting span boundaries, positions closer to the ground truth should be treated as more nearly correct than those farther away, as shown in Figure <ref>. For example, words close to the target “car” are more likely to be correct than the word “evening”, which is farther from the target. Provided the span containing "car" is correctly located, both "yellow car" and "yellow sports car" can be regarded as valid vehicle entities. This means that the span boundaries the model learns should be fuzzy rather than precise.
In addition, the use of a pre-trained Transformer <cit.> in UIE to extract the start and end position representations also poses a problem. The Transformer is designed to attend to the global representation of the input text, whereas UIE needs to focus on specific parts of the text to determine span boundaries. This mismatch between the Transformer's global focus and UIE's local focus can negatively impact model performance. When the architecture and the span representation learning are mismatched in this way, the model cannot make good use of prior knowledge in IE. Specifically, given the start boundary (end boundary) of a label span, the corresponding end boundary (start boundary) is more likely to be found within a certain range before or after it, rather than anywhere in the entire sequence. This prior hypothesis that spans have limited length is ignored in the vanilla UIE model. To exploit it, a fuzzy span attention mechanism, rather than fixed attention, should be applied. In this paper, we propose the Fuzzy Span Universal Information Extraction (FSUIE) framework, which addresses the limitations of UIE models by applying the fuzzy span idea, reducing over-reliance on label span boundaries and adaptively adjusting the attention span length. Specifically, to handle fuzzy boundaries, we design a fuzzy span loss that quantitatively represents how correctness information is distributed over a fuzzy span. At the same time, we introduce fuzzy span attention, which restricts attention to a fuzzy range and adaptively adjusts the span length according to the encoding. We conduct experiments on the main IE tasks (NER, RE, and ASTE). The results show that FSUIE yields significant improvements over the strong UIE baseline in different settings. It also achieves new state-of-the-art performance on several NER, RE, and ASTE benchmarks with only a BERT-base architecture, outperforming models with stronger pre-trained language models and more complex neural designs. Furthermore, our model shows extremely fast convergence and good generalization in low-resource settings. These experiments demonstrate the effectiveness and generalization of FSUIE across different tasks, settings, and scenarios. § FSUIE In FSUIE, incorporating the fuzzy span idea into the base UIE model involves two aspects. First, the boundary targets of spans carrying specific semantic types in the training data should be learned as fuzzy boundaries, to reduce over-reliance on annotated span boundaries; to achieve this, we propose a novel fuzzy span loss. Second, during span representation learning, the attention applied to a span should be dynamic and of limited length, rather than covering the entire sequence; to achieve this, we propose a novel fuzzy span attention. §.§ Fuzzy Span Loss (FSL) FSL supplements the traditional teacher-forcing loss (usually implemented as cross-entropy) to guide the model in learning fuzzy boundaries. The challenge for FSL is how to quantify the distribution of correctness information within the fuzzy boundary. Specifically, for a given label span S, conventional one-hot target distributions indicate the exact start and end boundaries. This form follows a Dirac delta distribution that concentrates entirely on the ground-truth positions and therefore cannot model the ambiguity of boundaries. 
To address this challenge, we propose a fuzzy span distribution generator (FSDG). In our method, the ground truth is represented by a probability distribution over span boundaries, which describes the uncertainty of boundary localization more comprehensively. FSDG consists of two main steps: 1) determining the probability density function f; 2) mapping the continuous distribution to a discrete probability distribution based on f. Specifically, let q ∈ S be a boundary of the label span; the total probability mass of its corresponding fuzzy boundary q̂ can be represented as: q̂=∫_R_min^R_max x Q(x) d x, q ∈ S where x denotes a boundary coordinate within the fuzzy range [R_min , R_max], R_min and R_max are the start and end positions of the fuzzy range, q^gt denotes the ground-truth position of boundary q, and Q(x) denotes the probability assigned to coordinate x. The traditional Dirac delta distribution can be viewed as a special case of Eq. (<ref>), where Q(x) = 1 when x = q^gt, and Q(x) = 0 otherwise. Through a mapping function F, we quantize the continuous fuzzy boundary into a discrete variable 𝐪̂=[F(q_1), F(q_2), ⋯, F(q_n)] with n subintervals, where [q_1, q_2, ⋯, q_n] are continuous coordinates in the fuzzy range with q_1=R_min and q_n=R_max; the probability distribution for each boundary of the label span is then obtained within this range via the softmax function. Since the Dirac delta distribution assigns non-zero probability to only a single point, it is not suitable for modeling uncertainty or ambiguity in real-world data. In FSUIE, we therefore choose the Gaussian distribution N(μ,σ^2) as the probability density function f. Compared with other probability distributions, the Gaussian distribution assigns non-zero probability to an entire range of values and has the following advantages: (1) it is continuous and symmetric, and can well represent the distribution of correctness information within the fuzzy boundary, including the gold position; (2) it is a stable distribution with a single peak and no offset, ensuring that the correctness information remains concentrated on the gold position while still being spread over the fuzzy boundary; (3) its integral is 1, which keeps the accuracy distribution after softmax gentle. To obtain the discrete variable 𝐪̂, four parameters are involved: the variance σ, the mean μ, the sampling step s, and the sampling threshold θ, which together control the range, peak position, and density of the fuzzy boundary. Specifically, μ is set to q^gt and the Gaussian distribution is determined using a pre-set σ. Assuming q_g∈ [q_1, q_2, ⋯, q_n] =q^gt, F can be represented as: F(q_i)={[ ε, ε≥θ; 0, ε <θ; ]., ε=f(μ+(i-g)s). Given that values in the marginal regions of the Gaussian distribution are quite small, the sampling threshold θ acts as a filter that eliminates information from unimportant locations. The specific choice of parameters is discussed in the experimental section. We use 𝐪̂ as the distribution of correctness information on the fuzzy boundaries; the start and end fuzzy boundaries together make up the fuzzy span. We then compute the KL divergence between the model's predicted logits and the gold fuzzy span distribution as the fuzzy span loss. The exact boundary and fuzzy boundary distributions are shown in Figure <ref>. 
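As a concrete illustration of the FSDG just described, the short sketch below builds the discrete fuzzy boundary distribution for one annotated boundary; the function name, the use of token indices as sampling points, and the final sum normalisation (in place of the paper's softmax) are simplifying assumptions of this sketch rather than the authors' exact implementation, with σ, s, and θ taken from the training details reported later.

import numpy as np
from scipy.stats import norm

def fuzzy_boundary_distribution(gold_pos, seq_len, sigma=0.5, step=0.3, theta=0.3):
    # epsilon_i = f(mu + (i - g) * s): Gaussian density evaluated at an offset of
    # (i - g) sampling steps from the ground-truth boundary position g.
    offsets = (np.arange(seq_len) - gold_pos) * step
    eps = norm.pdf(offsets, loc=0.0, scale=sigma)
    eps[eps < theta] = 0.0                 # theta filters out marginal positions
    return eps / eps.sum()                 # normalise into a target distribution

# Example: a boundary annotated at token 7 of a 15-token sequence spreads its
# probability mass over tokens 5..9 instead of a one-hot target.
print(np.round(fuzzy_boundary_distribution(gold_pos=7, seq_len=15), 3))

With these default parameters the fuzzy boundary covers roughly two tokens on either side of the annotated position, matching the intuition that nearby positions are more nearly correct than distant ones.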
This fuzzy span loss is then incorporated into the original teacher-forcing loss with a coefficient: ℒ_FS=D_KL(𝐪̂‖ p) =∑_i=1^N𝐪̂(x_i)(log𝐪̂(x_i)/p(x_i)), ℒ =ℒ_ori+λℒ_FS where p is the distribution predicted by the model and 𝐪̂ is the fuzzy span distribution generated by FSDG from the annotations in the training data. ℒ_ori is the original Binary Cross Entropy (BCE) loss of the UIE model, and λ is the coefficient of the fuzzy span loss. §.§ Fuzzy Span Attention (FSA) We construct FSA on top of a multi-head self-attention mechanism with relative positional encoding (RPE), since RPE is better suited to span representation learning with fuzzy bounds. In conventional multi-head attention with RPE, for a token at position t in the sequence, each head computes the similarity between this token and the tokens in the sequence. The similarity between token t and token r can be written as: s_t r=y_t^⊤ W_q^⊤(W_k y_r+p_t-r) where W_k and W_q are the weight matrices for the "key" and "query" representations, y_t and y_r are the representations of tokens t and r, and p_t-r is the relative position embedding; the corresponding attention weight is obtained through a softmax function a_t r=exp(s_t r)/∑_q=0^t-1exp(s_t q). Conventional self-attention focuses on global representations, which mismatches the requirements of fuzzy spans. To address this issue, we present a novel attention mechanism, called Fuzzy Span Attention (FSA), that controls the attention scores of each token in order to learn a span-aware representation. The fuzzy span mechanism of FSA consists of two aspects: (1) the length of the range receiving full attention is adjusted dynamically; and (2) the attention weights at the boundary of the full-attention span are attenuated rather than truncated. Specifically, inspired by <cit.>, we design a mask function g_m to control the attention score calculation. Assuming the maximum length of the possible attention span is L_span, the new attention scores can be represented as: a_t r=g_m(t-r)exp(s_t r)/∑_q=t-L_span^t-1 g_m(t-q)exp(s_t q). The procedure is divided into two stages: (1) determining the attention changing function g_a on the fuzzy span, and (2) constructing the mask function g_m based on g_a for span-aware representation learning. According to the characteristics of the fuzzy span, we set g_a to be a monotonically decreasing linear function. To adjust the attention span length, we define a learnable parameter δ∈ [0,1]. The functions g_a(x) and the corresponding g_m(x) can be represented as follows: g_a(z) =-z+l+d/d, l =δ L_span. g_m(z)={[ 1, g_a(z)>1; 0, g_a(z)<0; g_a(z), otherwise ].. where l controls the length of the full-attention range and d is a hyper-parameter that governs the length of the attenuated attention range. An illustration of the g_m function is depicted in Figure <ref>. The dashed lines represent alternative choices of g_a functions, such as g_a'(z)={[ 1, z ≤ l; 0, z > l; ]., g_a”(z)={[ 1, z ≤ l; 1/√(2 π)·d/3exp(-(z-l)^2/2 (d/3)^2), z > l; ].. Through experimentation, we found that the linear attenuated function performs best (see the comparison in Appendix A). Iterative optimization of δ allows the model to learn the optimal attention span length for a specific task. Note that different heads learn the attention span length independently and thus obtain different optimal fuzzy spans. 
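To make the two definitions above concrete, here is a minimal PyTorch sketch of the combined loss and of the soft mask g_m built from the linear ramp g_a; the function names, tensor shapes, and the defaults λ = 0.01, L_span = 30, d = 32 (taken from the training details below) are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def fuzzy_span_loss(boundary_logits, fuzzy_target, ori_loss, lam=0.01):
    # L = L_ori + lambda * KL(q_hat || p), with p = softmax over sequence positions.
    log_p = F.log_softmax(boundary_logits, dim=-1)
    log_q = torch.log(fuzzy_target.clamp_min(1e-12))   # clamp keeps zero entries finite
    kl = (fuzzy_target * (log_q - log_p)).sum(dim=-1).mean()
    return ori_loss + lam * kl

def fuzzy_span_mask(delta, L_span=30, d=32):
    # g_m(z) over relative distances z = t - r: 1 inside the learned span of
    # length l = delta * L_span, a linear ramp of width d, and 0 beyond it.
    z = torch.arange(L_span + d, dtype=torch.float32)
    g_a = (-z + delta * L_span + d) / d
    return g_a.clamp(min=0.0, max=1.0)

# The mask rescales attention weights before normalisation:
# a_tr = g_m(t - r) * exp(s_tr) / sum_q g_m(t - q) * exp(s_tq)
mask = fuzzy_span_mask(delta=torch.tensor(0.5))
print(mask[::8])   # plateau of ones, linear ramp, then zeros

In a full model this mask would multiply exp(s_tr) inside each attention head before the row-wise normalisation, with one learnable δ per head, since the text notes that different heads learn their span lengths independently.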
In our implementation, instead of stacking multiple fuzzy span attention layers, we construct the span-aware representation with a single fuzzy span attention layer on top of the Transformer encoder, and this layer does not participate in the encoding process. Therefore, although the maximum range of fuzzy span attention is limited by L_span, it only affects span decisions and has no impact on the representation of tokens in the sequence. § EXPERIMENTS §.§ Setup Tasks We conducted experiments on 4 datasets for 3 common information extraction tasks: NER, RE, and ASTE. The datasets include ACE2004, ACE2005, and ADE <cit.> for the NER and RE tasks, and ASTE-Data-V2 <cit.> for the ASTE task. We evaluate our model using different metrics for the three IE tasks. For NER, we use the Entity F_1 score, in which an entity prediction is correct if its span and type match a reference entity. For RE, we use the Relation Strict F_1 score, where a relation is considered correct only if its relation type and the related entity spans are all correct. For ASTE, we use the Sentiment Triplet F_1 score, where a triplet is considered correct if the aspect, opinion, and sentiment polarity are all correctly identified. Training Details We trained two variants of FSUIE, FSUIE-base and FSUIE-large, which are based on the BERT-base and BERT-large architectures and pre-trained parameters, respectively. In addition, we trained a UIE-base built on BERT-base as a baseline without the FSL and FSA layers. In FSUIE, we added the FSA layer and the span boundary prediction layer to both models. Specifically, FSUIE-base has 12 layers of 12-head Transformer layers with a hidden size of 768, while FSUIE-large has 24 layers of 16-head Transformer layers with a hidden size of 1024. During training, we set the parameters of the Gaussian distribution in FSL to σ=0.5, the distribution value truncation threshold θ to 0.3, the sampling step s to 0.3, and the loss coefficient λ to 0.01; the parameter μ is set to the coordinate of the annotated boundary. The hyper-parameters L_span and d are determined from the statistics of the target lengths in the UIE training data. During training, we set L_span to 30 and d to 32; experiments show that the model's performance is not significantly sensitive to the choice of these hyper-parameters (see the comparison in Appendix C). We trained both models for 50 epochs with a learning rate of 1e-5 on the datasets of each task, and selected the final model based on the performance on the development set. The code is available at https://github.com/pengts/FSUIE. §.§ Results on NER tasks We report the results of the NER task in Table <ref>. Comparing our baseline UIE-base with other methods shows that UIE-base achieves results comparable to other methods using the same BERT-base architecture, so it serves as a strong baseline for isolating the enhancements contributed by FSL and FSA. By introducing FSL and FSA, our FSUIE-base achieves significant performance improvements over UIE-base, which lacks the fuzzy span mechanism (+1.15, +1.59, and +1.99 F1). Our proposed FSUIE model shows the most significant improvement on the ADE dataset. This is primarily due to the smaller scale of the ADE training data, which allows the model to easily learn generalized fuzzy span-aware representations and demonstrates the superiority of the FSUIE model. 
FSL and FSA enable the model to reduce over-dependence on label span boundaries and to learn span-aware representations. Compared to existing NER models, FSUIE achieves new state-of-the-art performance on the ADE dataset even with the BERT-base backbone, and FSUIE-large achieves a further significant improvement (+1.42) over FSUIE-base. FSUIE-large also achieves comparable results on the ACE04 and ACE05 datasets, even when compared to models using stronger pre-trained language models such as ALBERT-xxlarge. Furthermore, FSUIE shows an advantage in structure prediction over the generative UIE model: since it does not require the generation of complex linearized IE sequences, our FSUIE-base, which only uses BERT-base as its backbone, outperforms the generative UIE model that uses T5-v1.1-large on the ACE05 dataset. §.§ Results on RE tasks In Table <ref>, we present the results of the RE tasks. Compared to the baseline UIE-base, which does not incorporate the fuzzy span mechanism, our FSUIE-base, which incorporates FSL and FSA, also achieves a significant improvement on the RE task using the same backbone. Furthermore, compared to the Table-Sequence Encoder approach <cit.>, our method learns the label span boundary distribution and span-aware representations, resulting in optimal or competitive results on the RE task even with FSUIE-base, despite using a simpler structure and smaller PLM backbones. Compared to span-based IE models, our method outperforms the traditional joint extraction model by performing two-stage span extraction and introducing the fuzzy span mechanism. Specifically, on the ADE dataset, our method performs better than joint extraction methods that use Bio-BERT, a domain-specific language model pre-trained on biomedical corpora, even though we only use BERT-base as the pre-trained language model. This shows that the fuzzy span mechanism extracts general information from the data and gives the model stronger information extraction capabilities, rather than simply fitting the data. Compared to generative UIE models, our span-based FSUIE reflects the real structure of IE tasks and does not require additional sequence-generation components, achieving higher results with fewer parameters, even with FSUIE-base. Compared to models that perform relation extraction with a pipeline approach, such as PL-Marker, FSUIE improves performance in both stages of the pipeline by introducing FSL and FSA, leading to an overall improvement in relation extraction. Additionally, our model achieves new state-of-the-art results on the ACE04 and ADE datasets, even using only BERT-base as the backbone, and on the ACE05 dataset with FSUIE-large, compared to other models that use more complex structures. This demonstrates the model's ability to extract information effectively with our proposed method. 
This indicates that the fuzzy span mechanism is effective in improving the model's ability to exploit and extract information, as well as its performance on specific tasks, without increasing the number of model parameters. Furthermore, our FSUIE model has a relatively simple architecture compared to other models, showing that FSUIE can improve performance without the need for complex structures. The performance gap between UIE models and other models can be attributed, in part, to the advantage of UIE pre-training, which is further enhanced by our fuzzy span mechanism. Compared to models that decompose the ASTE task into two subtasks of opinion recognition and sentiment classification and handle each with a separate model, our FSUIE model achieves better performance with a unified architecture. For ASTE, span-based UIE models, as opposed to generative UIE models, can leverage the complete semantic information of the predicted aspect span to assist in extracting opinions and sentiments. The fuzzy span mechanism enhances the model's ability to exploit the semantic information within the fuzzy span, where possible opinions and sentiments reside, while ensuring span-aware representation learning, resulting in significant improvements. Furthermore, FSUIE reflects the real structure of IE tasks and avoids the extra parameters that sequence-generation components bring, and it therefore outperforms generative UIE models with fewer parameters. We notice that FSUIE improves relatively less on the RE task than on the ASTE task. In RE, the model has to learn different entities, different types of relations, and how to pair them. In contrast, in ASTE the model only needs to learn different entities, two relations that differ strongly in semantics (opinion and sentiment), and how to form triplets. From this perspective, RE tasks are more challenging than ASTE tasks. §.§ Results on Low-resource Settings To demonstrate the robustness of FSUIE in low-resource scenarios, we conducted experiments with reduced amounts of training data on ACE04 for the NER and RE tasks and on 14res for the ASTE task. Specifically, we created three subsets of the original training data at 1%, 5%, and 25% of the original size. In each low-resource experiment, we trained the model for 200 epochs instead of 50. The results, comparing FSUIE-base and UIE-base, are presented in Table <ref>. These low-resource experiments further confirm the superior performance of FSUIE over UIE in handling limited data: with only a small fraction of the original training data, FSUIE still achieves competitive or even better performance than UIE, demonstrating its robustness and generalization ability. Overall, the low-resource results validate the ability of FSUIE to handle low-resource scenarios effectively and to extract rich information from limited data. We also found that the model performs better on the NER and ASTE tasks than on the RE task under low-resource settings. This is because NER and ASTE are simpler than RE, so less data suffices for good learning performance. Additionally, we noticed a small performance decrease on the ASTE task for the 100% set compared to the 25% set. This may be because the training data is unbalanced, and reducing the training size can alleviate this effect. 
§.§ Ablation Study To verify that FSUIE makes more effective use of the information in the training set, we examine the model training process. Specifically, we recorded the performance of the baseline UIE-base, UIE-base+FSL, UIE-base+FSA, and the full FSUIE-base at different training steps on the ACE04 NER test set; the results are shown in Figure <ref>. We notice that the models with FSA converge significantly faster, indicating that by learning span-aware representations, which are closer to the span prediction goal, span learning becomes easier and more efficient. With FSA, the model can focus its attention on the necessary positions and capture the possible span within a given sequence. FSL, in contrast, shows a convergence trend similar to the baseline and thus may not improve convergence speed. To further investigate the contributions of FSL and FSA to the performance improvement, we conduct ablation experiments on the NER task using the ADE dataset. The results are shown in Table <ref>. Introducing FSL alone improves model performance, whereas using FSA alone causes a slight drop; when FSL and FSA are used together, the model is significantly enhanced. From our perspective, introducing FSA alone makes the model focus on specific parts of the sequence rather than the global representation, losing information from text outside the span. This may explain the slight drop in performance for UIE+FSA; however, it also demonstrates that in IE tasks, sequence information outside a specific span has a very limited impact on the results. Introducing FSL alleviates the model's over-dependence on label span boundaries, allowing it to extract more information and yielding an improvement in both settings. When FSA and FSL operate together, the model extracts more information from the text, and FSA guides the model to filter the most critical information from this richer signal, resulting in the most substantial improvement. §.§ Visualization of FSA To further examine the effectiveness of the fuzzy span mechanism, we visualize the attention distribution of the FSA layer in FSUIE-large, as shown in Figure <ref>. Note that FSA is only placed at the top layer to construct the span-aware representation and does not participate in the encoding process, and thus only affects span decisions rather than the representation of tokens in the sequence. The attention distribution indicates that, for a given input text, each token in the final encoded sequence tends to focus on semantic information within a limited range of preceding tokens rather than on the global representation of the input text. This aligns with our design expectation and confirms that the fuzzy span mechanism does indeed guide the model toward an appropriate attention distribution for IE tasks. § RELATED WORK Universal Model Building universal model structures for a wide range of NLP tasks has been an active research area in recent years. The focus is on building model structures that can be adapted to different sources of data, different types of labels, different languages, and different tasks. 
Several universal models have been proposed, such as models learning deep contextualized word representations <cit.>, event extraction models that can predict different labels universally <cit.>, models that can handle multiple languages <cit.>, a universal fine-tuning approach to transfer learning <cit.>, models that learn syntactic dependency structure over many typologically different languages <cit.>, and models that universally model various IE tasks in a unified text-to-structure framework <cit.>. This paper builds upon UIE by incorporating the fuzzy span mechanism to improve IE performance. Information Extraction IE is the task of extracting structured information from unstructured text data, including NER, RE, ASTE, Event Extraction (EE), Aspect-Based Sentiment Analysis (ABSA), and so on. Numerous approaches have been proposed for IE, such as rule-based <cit.>, machine learning <cit.>, deep learning <cit.>, active learning <cit.>, and logic fusion <cit.> methods. Many task-specific models continue to be proposed on top of previous approaches and structures, e.g., for NER <cit.>, RE <cit.>, ABSA <cit.>, and ASTE <cit.>. For more related work on sparse attention, please refer to Appendix B. § CONCLUSION In this paper, we proposed the Fuzzy Span Universal Information Extraction (FSUIE) framework, an improvement over Universal Information Extraction. To make use of boundary information in the training data and to learn a span-aware representation that is closer to the span decision, we proposed a fuzzy span loss and fuzzy span attention. Extensive experiments on several main IE tasks show that FSUIE achieves significant improvements over the UIE baseline, and achieves state-of-the-art results on the ADE NER dataset, the ACE04, ACE05, and ADE RE datasets, and four ASTE datasets. The experiments also reveal FSUIE's fast convergence and good generality in low-resource settings. All the results demonstrate the effectiveness and generalizability of FSUIE in information extraction. § LIMITATIONS This paper is based on the assumption that Universal Information Extraction (UIE) models have limitations, particularly with regard to over-reliance on label span boundaries and inflexible attention span length. The proposed framework may therefore be computationally and memory intensive, as it requires a more complex attention mechanism and additional computing power for training. Nevertheless, this limitation of the span-based UIE model can be overlooked in comparison to that of the generative UIE model, which relies on a stronger language model. Additionally, the probability density functions explored in FSL are limited; further research is needed to develop a more targeted strategy for adjusting the correctness information distribution. § APPENDIX §.§ g_a in FSA In Table <ref>, we present the performance of models using various g_a functions in FSUIE on the ADE NER test set, where g^l_a denotes the linear attenuated function employed in FSUIE. Compared to UIE-base, which does not integrate the fuzzy span mechanism, all FSUIE-based models employing different g_a functions obtain better results, illustrating the superiority of FSUIE. Regarding the different g_a strategies, FSUIE-base (g'_a) shows minimal enhancement. This is likely because an attenuated attention span better reflects the real reading context and lets the model exploit richer information near the boundary of the attention span, which the hard cutoff of g'_a lacks. 
The best performance is achieved by FSUIE-base (g^l_a), which indicates that the attention should not decay too quickly at the boundary of the attention span, as evidenced by the results of g”_a. §.§ Related Work on Sparse Attention The high time and space complexity of the Transformer (O(n^2)) stems from computing attention between each step and all previous contexts, which makes it difficult for the Transformer to scale with sequence length. To address this issue, sparse attention was proposed <cit.>. This refers to attention mechanisms that focus on a small subset of the input elements, rather than processing the entire input sequence, allowing attention to concentrate on the most contributing factors and reducing memory and compute requirements. Based on the idea of sparse attention, various approaches have been proposed, such as an adaptive width-based attention learning mechanism and a dynamic attention mechanism that allows different heads to learn only their region of attention <cit.>. <cit.> proposed an O(N)-complexity model with three different sparse attentions. <cit.> sought to make the sparse attention matrix predictable. This paper, however, builds on adaptive span attention <cit.> to establish fuzzy span attention, which aims to learn a span-aware representation aligned with the actual needs of information extraction tasks. Our approach differs from previous work in that we aim to obtain a fuzzy span of attention in the process of locating the target, rather than to reduce computational and memory overhead. §.§ L_span and d in FSA In Table <ref>, we present the performance of FSUIE-base models using various values of the hyper-parameter d on the ADE NER test set. In Table <ref>, we present the performance of FSUIE-base models using various values of the hyper-parameter L_span on the ADE NER test set. The results demonstrate that the model's performance is not significantly affected by the choice of these hyper-parameters. §.§ Single-Side and Both-Side Ambiguity in FSL In practice, there may be cases of single-side ambiguity in the labeling of entity boundaries. We therefore report the performance of FSUIE-base models with different FSL strategies in Table <ref>, where "single-side" means applying FSL only on the start boundary and "both-side" means applying FSL on both the start and end boundaries. The results suggest that the influence of single-side versus both-side fuzziness on performance is limited, because not all head words appear at the start or end, and FSL only performs limited left/right extrapolation around precise boundaries without affecting the important information provided by the original boundary. For generality, we use both-side fuzzy spans in FSUIE.
http://arxiv.org/abs/2306.08596v1
20230614155707
Transforming Rydberg Interactions with Floquet Frequency Modulation
[ "Luheng Zhao", "Michael Dao Kang Lee", "Mohammad Mujahid Aliyu", "Huanqian Loh" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "physics.atom-ph" ]
Centre for Quantum Technologies, National University of Singapore, 117543 Singapore, Singapore Centre for Quantum Technologies, National University of Singapore, 117543 Singapore, Singapore Centre for Quantum Technologies, National University of Singapore, 117543 Singapore, Singapore [][email protected] Centre for Quantum Technologies, National University of Singapore, 117543 Singapore, Singapore Department of Physics, National University of Singapore, 117542 Singapore, Singapore The Rydberg blockade is a key ingredient for entangling atoms in arrays. However, it requires atoms to be spaced well within the blockade radius, which limits the range of local quantum gates. Here we break this constraint using Floquet frequency modulation, with which we demonstrate Rydberg-blockade entanglement beyond the traditional blockade radius. Further, we find that the coherence of entangled states can be extended under Floquet frequency modulation. Finally, we realize Rydberg anti-blockade states for two atoms within the blockade radius, where the steady-state population cannot be achieved with only the conventional static drive. Our work transforms between the paradigmatic regimes of Rydberg blockade versus anti-blockade and paves the way for realizing more connected, coherent, and versatile neutral atom quantum processors with a single approach. Transforming Rydberg Interactions with Floquet Frequency Modulation Huanqian Loh July 31, 2023 =================================================================== § INTRODUCTION Ultracold atoms in reconfigurable tweezer arrays have emerged as one of the most powerful and rapidly growing quantum platforms. These systems have demonstrated impressive quantum many-body simulations <cit.>, highly stable frequency standards <cit.>, and promising quantum computation architectures <cit.>. At the heart of these quantum applications lies entanglement, which is often effected in neutral atom arrays via the Rydberg blockade <cit.>. Among the various entanglement schemes <cit.>, the Rydberg blockade has been widely adopted due to its robustness against position disorder. However, it requires the distance between two atoms to be well within the blockade radius to prevent substantial entanglement infidelity due to double Rydberg excitations. This constraint reduces quantum gates on Rydberg atoms to a limited range. Improvements in the Rydberg interaction range would increase the qubit connectivity, which could significantly enhance quantum processing efficiency. Like other quantum processors, neutral atom computations and simulations are limited by decoherence <cit.>. New methods for extending the lifetime of entangled states would improve the fidelity of quantum gates <cit.>. However, even with existing levels of decoherence, neutral atom processors continue to demonstrate excellent quantum simulations <cit.>. As these systems continue to develop more robust and efficient ways to initialize quantum states, they are likely to yield more versatile quantum simulation capabilities. In this work, we report that these neutral atom platforms can be advanced on three critical fronts — extending the blockade-based entanglement range, improving coherence times, and enabling new state-preparation schemes — with Floquet frequency modulation (FFM). Our FFM approach is simple and straightforward to implement in all existing neutral atom array experiments. 
First, we demonstrate that atoms can be entangled outside the traditional blockade radius, thereby significantly increasing the useful range of the Rydberg interaction. Second, we show how FFM can protect a two-atom entangled state against Doppler dephasing, which is the typical mechanism limiting entangled-state coherence in a Rydberg atom array. Finally, we find that closely-spaced atoms can be robustly transferred into an anti-blockaded state. Such a strongly-interacting state cannot be otherwise attained in the steady state with only a static drive, yet its realization would open the door to intricate simulations of quantum dynamics. § FLOQUET FREQUENCY MODULATION: MODEL AND IMPLEMENTATION When atoms are excited from the ground state |g⟩ to the Rydberg state |e⟩, the resulting dynamics are governed by the Hamiltonian: H/ħ = -Δ(t) ∑_i=1^Nσ_ee^i + Ω/2∑_i=1^Nσ_x^i + ∑_i<j V_ijσ_ee^iσ_ee^j , where i indexes the atom, V(r) = C_6/r^6 is the van der Waals interaction between Rydberg atoms, and Ω is the Rabi frequency. Under FFM, the laser detuning Δ(t) is modulated sinusoidally in time with modulation amplitude δ and modulation frequency ω_0 about an offset Δ_0 to give Δ(t)= Δ_0 + δsin (ω_0 t). In this work, we examine the resonant addressing of an atom-array building block comprising two atoms (Δ_0 = 0, N = 2). In the absence of dissipation, the two-atom system can evolve between the |gg⟩, |W⟩ = (|ge⟩ + e^iϕ|eg⟩)/√(2), and |ee⟩ states, where ϕ denotes the relative phase between the atoms arising from their initial positions. With the application of FFM, the two-atom Hamiltonian can be transformed to a new Hamiltonian in a rotating frame, such that the coupling strengths for the |gg⟩↔|W⟩ and |W⟩↔|ee⟩ transitions are respectively rescaled to <cit.>: Ω_a(t) ∝ Ω∑_m=-∞^∞ J_m(α) e^i m ω_0t + imπ/2 , Ω_b(t) ∝ Ω∑_m=-∞^∞ J_m(α) e^i(mω_0 + V)t + imπ/2 , where J_m(α) is the m^th order Bessel function of the first kind with modulation index α = δ/ω_0. A high modulation frequency simplifies the picture as only the resonant terms dominate the coupling strength. Explicitly, the Rabi frequency for |gg⟩↔|W⟩ is dominated by J_0(α), whereas the resonance condition for |W⟩↔|ee⟩ is met by setting mω_0 = -V. The interplay of these two resonance conditions dictates the physics behind the results presented in this study. To implement FFM, we send a Rydberg excitation laser through an acousto-optic modulator driven with a time-varying frequency (Fig. 1a). Since the dynamics of the two-atom system depends critically on the modulation index α, we take care to calibrate the modulation amplitude δ, which can differ from the specified amplitude due to the finite modulator bandwidth. We further minimize the residual amplitude modulation arising from the frequency-dependent diffraction efficiency of the acousto-optic modulator <cit.>. We note that FFM can be easily implemented given the typical bandwidth of commercially available acousto-optic modulators. We point out that FFM is distinct from the Floquet engineering techniques used to control dynamics in Rydberg atoms <cit.>. The latter involves changing the effective Hamiltonian by applying fast, periodic pulses with well-defined phase relations. These pulses are used in the microwave domain to address either two ground states in a Rydberg-dressed system or two Rydberg states. In contrast, FFM directly transforms the effective couplings. 
Further, FFM does not use periodic pulses, which would have been challenging to implement in the optical domain on the time scale of Rabi oscillations between the ground and Rydberg states. A versatile approach, FFM has been used to implement efficient random access quantum processors with superconducting circuits <cit.>, generate strongly interacting polaritons with Rydberg atoms in an optical cavity <cit.>, and create exotic states of light <cit.>. FFM is also analogous to the shaking of optical lattices, which has been used to realize synthetic gauge fields in ultracold atoms <cit.>. In a Rydberg atom array, FFM has been used to stabilize the revivals of quantum many-body scars <cit.>. Here we focus on the entanglement between two atoms, which serves as a basic ingredient of quantum information processing. In addition, the results obtained below can be generalized to a large atom array by using single-site addressing lasers to select two atoms at a given time. Our experiment procedure begins with the D_1 Λ-enhanced loading of single ^23Na atoms into two optical tweezers <cit.>. We excite the ^23Na atoms from the ground state |g⟩ = |3S_1/2, F=2, .m_F = 2⟩ to the Rydberg state |e⟩ = |59S_1/2, m_J = 1/2⟩ via the intermediate state |m⟩ = |3 P_3/2, F = 3, m_F = 3⟩ with two photons at 589 nm and 409 nm. The Rydberg laser intensities and single-photon detuning Δ' are chosen to give an effective single-atom Rabi frequency of Ω/(2π) = 1.0 MHz. FFM is only applied to the 589 nm excitation laser. The optical tweezers are switched off during Rydberg excitation and turned back on at the end of the excitation sequence to image the atoms. Using D_1 imaging, the ground (Rydberg) state is detected as the presence (absence) of a recaptured atom. The false positive error (3%) is dominated by the imaging survival probability, whereas the false negative error due to the decay of Rydberg atoms is estimated to be 5%. § EXTENDED RYDBERG BLOCKADE RANGE For a static Rydberg excitation (α = 0), suppression of population in the |ee⟩ state is an important prerequisite for the high-fidelity generation of the entangled |W⟩ state. Therefore, quantum gates are typically carried out between atoms spaced well within the Rydberg blockade radius R_b, where V(R_b) = Ω. Under FFM, the two-atom couplings are rescaled by Bessel functions (Eqs. (<ref>) and (<ref>)), giving an intuitive picture for the transformation of the Rydberg blockade radius. As an example, we consider two atoms spaced farther than the static blockade radius, such that their interaction strength is given by V(r) = 0.5 Ω. Figures 1b and 1c show the calculated time-averaged |ee⟩ populations and the maximum |W⟩ fidelities, respectively, as a function of the modulation frequency and modulation index. In the regime of high modulation frequency (ω_0 ≥ 2 Ω), the dynamics are relatively robust and are dictated by the Bessel functions. At the Bessel function zeros (J_0(α) = 0), the |ee⟩ population is the most suppressed. However, it would be misleading to use these Bessel function zeros for generating entangled states, as the population is actually trapped in its initial |gg⟩ state. To generate entangled states, it is optimal to select α slightly away from the Bessel function zeros. Here, the rescaled Rabi frequency Ω_a∝ J_0(α) Ω remains finite but small compared to the interaction, such that errors arising from populating the |ee⟩ state can still be suppressed while the |W⟩ state can be generated with good fidelity. 
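As a rough numerical illustration of this rescaling (a sketch only, not the simulation code behind Figs. 1b and 1c), the Bessel weights in the rescaled couplings Ω_a and Ω_b above can be evaluated directly; the parameter values below simply echo the ω_0 = 3 Ω, V = 0.8 Ω, and α = 5.5, 6.9 settings quoted in the text.

import numpy as np
from scipy.special import jv, jn_zeros   # Bessel J_m and the zeros of J_0

Omega = 1.0                              # work in units of the Rabi frequency
omega0, V = 3 * Omega, 0.8 * Omega
print("first zeros of J0:", np.round(jn_zeros(0, 3), 3))

for alpha in (5.5, 6.9):
    scale_gg_W = jv(0, alpha)            # |gg> <-> |W> rescaling, m = 0 carrier
    m_res = round(-V / omega0)           # sideband nearest to m * omega0 = -V
    residual = abs(m_res * omega0 + V)   # leftover detuning of |W> <-> |ee>
    print(f"alpha = {alpha}: J0 = {scale_gg_W:+.3f}, "
          f"|W>-|ee> stays detuned by {residual:.1f} Omega")

Near α = 5.5 the carrier weight J_0 almost vanishes, so the population stays trapped in |gg⟩, while at α = 6.9 the |gg⟩↔|W⟩ drive is weak but finite and the doubly excited state remains off resonance, consistent with the behaviour reported for Figs. 1e and 1f below.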
Furthermore, by operating near a higher order Bessel function zero, the |W⟩ state can be realized with high fidelity over a wider range of modulation indices. Given the finite modulation bandwidth of the acousto-optic modulator, it is advantageous to work with a modulation frequency that is large enough compared to the Rabi frequency, but still low enough to allow a high modulation index (e.g. ω_0 = 3 Ω). Figure 1d shows the measured blockade enhancement under FFM, taken with ω_0 = 3 Ω and at various modulation indices. Compared to the static excitation, the dynamics of the |ee⟩ state under FFM are significantly suppressed (inset, V = 0.8 Ω). The observed Rydberg blockade over a range of atom spacings (r/R_b = 0.93 - 1.3, inferred from the normalized interaction range V/Ω =0.22 - 1.5) agrees well with the theory simulations. Through an appropriate choice of α, we observe either population trapping (Fig. 1e, α = 5.5) or coherent dynamics between the |gg⟩ and |W⟩ states (Fig. 1f, α = 6.9). The |W⟩ fidelity achieved in Fig. 1f is determined from Monte Carlo simulations to be 0.77(5). The observed fidelity is primarily limited by the coherence of the Rydberg excitation lasers, which can be improved with cavity filtering techniques <cit.>. Where the coherence of the Rydberg excitation laser is no longer a dominant constraint, the entangled state fidelity can be optimized by choosing α in exchange for longer gate times. For instance, one can use FFM to access a |W⟩ fidelity of 0.98 using α = 11.1 even at a small interaction strength of V = 0.5 Ω. To access the same |W⟩ fidelity with the static Rydberg excitation scheme, the atoms would have had to experience an interaction strength of V = 4.9 Ω, effectively extending the Rydberg blockade range by a factor of (4.9/0.5)^1/6 > √(2) <cit.>. In other words, for an atom array with a fixed square geometry, the static scheme would have allowed only the four nearest neighbors to be entangled in a pairwise manner with the center atom, whereas implementing FFM would allow the next-nearest neighbors on the diagonals to be pairwise entangled with the center atom, thereby potentially doubling the number of qubit connections on demand. In the rest of this paper, we switch to working well within the Rydberg blockade radius and demonstrate two more useful features of the FFM. § PROTECTION OF ENTANGLED STATE AGAINST DEPHASING The |W⟩ state can be dynamically stabilized under the same conditions that give rise to population trapping (J_0(α) = 0). We demonstrate this by first transferring atoms to the |W⟩ state with a static, resonant π-pulse in the Rydberg blockade regime (V = 8 Ω). Subsequently, we apply the FFM (ω_0 = 6 Ω, α = 5.5) for 2 μs, before returning to the static drive. The Rabi frequency is kept constant throughout the sequence (Fig. 2a). During the FFM, the dynamics of the |W⟩ state are frozen (Fig. 2b), in contrast to the case where the static drive is applied throughout the sequence (Fig. 2c). Instead of FFM, one can also trivially keep atoms in the |W⟩ state by turning off the excitation lasers after the first π-pulse. We refer to this alternative as the laser-free scheme. In each case (laser-free versus FFM), the entangled state coherence is limited by relative Doppler shifts between the two atoms <cit.>, and can be measured by first applying a π-pulse to drive |gg⟩ to |W⟩, then waiting for a variable time 0 ≤ t ≤ 2 μs before applying another π-pulse to measure the population in |gg⟩. 
Figure 2d compares the measured decay of the |W⟩ state for both cases, where the FFM sequence yields a fit decay time of 14(5) μs and the laser-free scheme shows a comparable decay time of 11(2) μs. With a judicious choice of parameters, FFM can protect the entangled |W⟩ state from dephasing and maintain its coherence over the laser-free case. Intuitively, the protection arises from a high-frequency interruption of the dephasing process, thereby yielding a net suppression of decoherence. To access the entanglement-protection regime, one needs to choose parameters such that the |W⟩ state has a strong overlap with an eigenstate of the Floquet Hamiltonian |ϕ_k(0)⟩. The overlap p_k = |⟨ϕ_k(0)|W⟩|^2 is parameterized by the inverse participation ratio (IPR), which is defined <cit.> to be Π^|W⟩ = (1/∑_k p_k^2) - 1. A low IPR is preferred for robust dynamical stabilization of the |W⟩ state. Operating, for instance, at ω_0 = 7 Ω yields a low IPR over an extended range of Doppler shifts (Fig. 2e). After 20 μs of FFM application, the |W⟩ state fidelity under FFM is predicted to be significantly higher than that for the laser-free case (Fig. 2f). The higher fidelity under FFM is maintained over a large range of Doppler shifts, demonstrating the robustness of entanglement under FFM. § ENHANCED RYDBERG ANTI-BLOCKADE DYNAMICS We now turn our attention to Rydberg anti-blockade states <cit.>, which are promising for quantum simulations of interesting dynamics such as that of epidemics <cit.> and of flat band systems in condensed matter physics <cit.>. Rydberg anti-blockade (i.e. |ee⟩ population) is typically achieved for two atoms spaced outside the blockade radius <cit.>. Within the blockade radius, Rabi oscillations between the |gg⟩ and |ee⟩ states can still be realized <cit.> with a finite static detuning, e.g. Δ_0 = V/2. However, this comes at the expense of slower dynamics, particularly when V ≫Ω, as the effective Rabi frequency is set by Ω^2/Δ_0. On the other hand, FFM provides a convenient handle to access the |ee⟩ state from the |gg⟩ state by fulfilling both resonance conditions described in Eqs. (<ref>) and (<ref>) (Fig. 3a). For instance, setting ω_0 = V causes the coupling strength for the |W⟩↔|ee⟩ transition to be dominated by J_1(α), while J_0(α) continues to dominate the |gg⟩↔|W⟩ coupling strength. Consequently, choosing a modulation index that gives a large value for both J_1(α) and J_0(α), such as α = 1.4, realizes the Rydberg anti-blockade (Fig. 3b). Figure 3c shows the corresponding two-atom population dynamics for V = ω_0 = 6 Ω and α = 1.4, where the |ee⟩ state can be accessed with a speedup over the off-resonant static drive. The observed |ee⟩ population is limited by the finite position spread of the atoms at about 1 μK, which gives rise to a range of interaction strengths, over which the resonance condition ω_0 = V does not always hold. This problem can be mitigated by cooling the atoms to the motional ground state <cit.> and increasing the atom-laser coherence <cit.> (Fig. 3c). To boost the |ee⟩ population further, we propose to combine FFM with stimulated Raman adiabatic passage (STIRAP). A reliable state-preparation method, STIRAP has been used to initialize atoms in multiply-excited Rydberg states for studies of symmetry-protected topological phases <cit.>, spin transport in one-dimensional systems <cit.>, and more. However, to date, such STIRAP transfer has only been demonstrated on atoms spaced outside the blockade radius. 
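Returning to the anti-blockade condition above (ω_0 = V), one simple way to see why a modulation index near α ≈ 1.4 is a good operating point is to balance the two Bessel weights numerically; the "maximise the smaller of the two couplings" criterion in the sketch below is our own illustrative choice, not necessarily the authors' selection rule.

import numpy as np
from scipy.special import jv

alphas = np.linspace(0.0, 2.4, 2401)
j0, j1 = jv(0, alphas), jv(1, alphas)
# Both the gg<->W (J0) and resonant W<->ee (J1) couplings should stay large:
# take the alpha that maximises min(J0, J1).
best = alphas[np.argmax(np.minimum(j0, j1))]
print(f"balanced modulation index: alpha ~ {best:.2f} "
      f"(J0 = {jv(0, best):.3f}, J1 = {jv(1, best):.3f})")

This balance point lands close to the α = 1.4 used for Figs. 3b and 3c, where both couplings remain appreciable and the doubly excited state becomes accessible.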
Reducing the spacing between atoms would be desired for stronger interactions, yet the steady-state population of multiply-excited Rydberg states cannot be achieved by applying STIRAP with a static excitation scheme. On the other hand, FFM offers a new, straightforward path for populating the |ee⟩ state through STIRAP for atoms well within the blockade radius. The modulation index is a flexible degree of freedom that controls both two-atom coupling strengths simultaneously. The adiabatic transfer of atoms from the initial |gg⟩ state to the final |ee⟩ state can be accomplished by ramping the modulation index from its first Bessel function zero (α = 2.4; J_0(α) = 0) down to α = 0 (where J_1(α) = 0) over time (Fig. 3d). The adiabatic ramp needs to be performed quickly compared to the decoherence of the |W⟩ state but slowly compared to the coupling strengths {Ω_a, Ω_b}. We note that the effective Rabi frequency for each transition varies asymmetrically in time despite the laser intensities being kept constant throughout the transfer. § DISCUSSION AND OUTLOOK We have demonstrated that FFM is a versatile approach that can be used to increase the entanglement range, protect the entangled state coherence, and initialize strongly-interacting states in Rydberg atom arrays. The full advantages afforded by FFM can be realized by working with a Rydberg state of higher principal quantum number so as to access a longer lifetime and by making several technical upgrades. These include improving the Rydberg laser coherence though cavity filtering <cit.>, suppressing off-resonant scattering rates by increasing the detuning from the intermediate state <cit.>, and reducing the position disorder through ground-state cooling <cit.>, all of which have already been demonstrated separately in other experiment setups. Our results can be extended to the generation and protection of entanglement between two long-lived ground states, where the entanglement is mediated by excitation to a Rydberg state <cit.>. For an arbitrary graph depicting a particular geometric arrangement of single atoms at its vertices, FFM enables connectivity between any two atoms through an appropriate choice of the modulation index. FFM can also be combined with mobile optical tweezers that can transport entanglement over longer distances <cit.>. In such a combination, FFM can reduce both the duration and the number of moves required to perform pairwise entanglement across the entire array, thereby leading to a more streamlined quantum information processing architecture <cit.>. We further note that the enlarged blockade range can be used to entangle multiple atoms simultaneously <cit.>. In this case, FFM can effect the dynamical control of the Rydberg blockade range beyond that accomplished by simply tuning the Rydberg laser intensity, thereby offering a flexible way to access quench dynamics in quantum many-body simulations <cit.>. Importantly, our work redefines the ability to access the two paradigmatic regimes of interactions — Rydberg blockade versus anti-blockade, which have thus far been mostly governed by the precise positioning of atoms and the static blockade radius. The FFM not only enhances the programmability of Rydberg atom arrays but also enables steady-state Rydberg anti-blockade for atoms spaced within the blockade radius, which cannot be otherwise attained with conventional static schemes. 
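The counter-intuitive pulse ordering implicit in this ramp can be made explicit with a few lines of code; the linear ramp shape and the normalised duration below are arbitrary placeholders, since the text only specifies the endpoints α = 2.4 → 0.

import numpy as np
from scipy.special import jv

t = np.linspace(0.0, 1.0, 5)          # normalised ramp time (shape is arbitrary)
alpha_t = 2.405 * (1.0 - t)           # ramp from the first zero of J0 down to 0

for ti, a in zip(t, alpha_t):
    print(f"t = {ti:.2f}: alpha = {a:.2f}, "
          f"gg<->W weight J0 = {jv(0, a):+.2f}, "
          f"W<->ee weight J1 = {jv(1, a):+.2f}")
# At t = 0 only the |W> <-> |ee> coupling is on (J0 = 0, J1 large); at t = 1 only
# the |gg> <-> |W> coupling survives (J0 = 1, J1 = 0), i.e. the STIRAP ordering.

This makes visible the asymmetric time variation of the two effective couplings noted above, even though the laser intensities are held constant throughout the transfer.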
These results open the door to realizing arrays of closely-spaced Rydberg atoms, including those in long-lived circular Rydberg states that are attractive for quantum computing and simulation <cit.>. § ACKNOWLEDGEMENTS We acknowledge Travis Nicholson and Wen Wei Ho for stimulating discussions, as well as Krishna Chaitanya Yellapragada for technical assistance with the experiment setup. This research is supported by the National Research Foundation, Singapore and A*STAR under its Quantum Engineering Programme (NRF2021-QEP2-02-P09) and its CQT bridging grant.
Cornish, @noop journal journal New Journal of Physics volume 22 (year 2020)NoStop [Zhang et al.(2022)Zhang, Picard, Cairncross, Wang, Yu, Fang, and Ni]zhang2022optical author author J. T. Zhang, author L. R. Picard, author W. B. Cairncross, author K. Wang, author Y. Yu, author F. Fang, and author K.-K. Ni, @noop journal journal Quantum Science and Technology volume 7, pages 035006 (year 2022)NoStop [Urban et al.(2009)Urban, Johnson, Henage, Isenhower, Yavuz, Walker, and Saffman]urban2009observation author author E. Urban, author T. A. Johnson, author T. Henage, author L. Isenhower, author D. Yavuz, author T. Walker, and author M. Saffman, @noop journal journal Nature Physics volume 5, pages 110 (year 2009)NoStop [Gaëtan et al.(2009)Gaëtan, Miroshnychenko, Wilk, Chotia, Viteau, Comparat, Pillet, Browaeys, and Grangier]gaetan2009observation author author A. Gaëtan, author Y. Miroshnychenko, author T. Wilk, author A. Chotia, author M. Viteau, author D. Comparat, author P. Pillet, author A. Browaeys, and author P. Grangier, @noop journal journal Nature Physics volume 5, pages 115 (year 2009)NoStop [Jaksch et al.(2000)Jaksch, Cirac, Zoller, Rolston, Côté, and Lukin]jaksch2000fast author author D. Jaksch, author J. I. Cirac, author P. Zoller, author S. L. Rolston, author R. Côté, and author M. D. Lukin, @noop journal journal Physical Review Letters volume 85, pages 2208 (year 2000)NoStop [Jo et al.(2020)Jo, Song, Kim, and Ahn]jo2020rydberg author author H. Jo, author Y. Song, author M. Kim, and author J. Ahn, @noop journal journal Physical Review Letters volume 124, pages 033603 (year 2020)NoStop [de Léséleuc et al.(2018)de Léséleuc, Barredo, Lienhard, Browaeys, and Lahaye]de2018analysis author author S. de Léséleuc, author D. Barredo, author V. Lienhard, author A. Browaeys, and author T. Lahaye, https://doi.org/10.1103/PhysRevA.97.053803 journal journal Physical Review A volume 97, pages 053803 (year 2018)NoStop [Basak et al.(2018)Basak, Chougale, and Nath]basak2018periodically author author S. Basak, author Y. Chougale, and author R. Nath, @noop journal journal Physical Review Letters volume 120, pages 123204 (year 2018)NoStop [Mallavarapu et al.(2021)Mallavarapu, Niranjan, Li, Wüster, and Nath]mallavarapu2021population author author S. K. Mallavarapu, author A. Niranjan, author W. Li, author S. Wüster, and author R. Nath, @noop journal journal Physical Review A volume 103, pages 023335 (year 2021)NoStop [SOM()]SOM @noop note Materials and methods are available as supplementary materials.Stop [Borish et al.(2020)Borish, Markovi ćć, Hines, Rajagopal, and Schleier-Smith]borish2020transverse author author V. Borish, author O. Markovi ćć, author J. A. Hines, author S. V. Rajagopal, and author M. Schleier-Smith, https://doi.org/10.1103/PhysRevLett.124.063601 journal journal Physical Review Letters volume 124, pages 063601 (year 2020)NoStop [Geier et al.(2021)Geier, Thaicharoen, Hainaut, Franz, Salzinger, Tebben, Grimshandl, Zürn, and Weidemüller]geier2021floquet author author S. Geier, author N. Thaicharoen, author C. Hainaut, author T. Franz, author A. Salzinger, author A. Tebben, author D. Grimshandl, author G. Zürn, and author M. Weidemüller, @noop journal journal Science volume 374, pages 1149 (year 2021)NoStop [Scholl et al.(2022)Scholl, Williams, Bornet, Wallner, Barredo, Henriet, Signoles, Hainaut, Franz, Geier et al.]scholl2022microwave author author P. Scholl, author H. J. Williams, author G. Bornet, author F. Wallner, author D. Barredo, author L. Henriet, author A. Signoles, author C. 
Hainaut, author T. Franz, author S. Geier, et al., @noop journal journal PRX Quantum volume 3, pages 020303 (year 2022)NoStop [Naik et al.(2017)Naik, Leung, Chakram, Groszkowski, Lu, Earnest, McKay, Koch, and Schuster]naik2017random author author R. Naik, author N. Leung, author S. Chakram, author P. Groszkowski, author Y. Lu, author N. Earnest, author D. McKay, author J. Koch, and author D. I. Schuster, @noop journal journal Nature communications volume 8, pages 1904 (year 2017)NoStop [Clark et al.(2019)Clark, Jia, Schine, Baum, Georgakopoulos, and Simon]clark2019interacting author author L. W. Clark, author N. Jia, author N. Schine, author C. Baum, author A. Georgakopoulos, and author J. Simon, @noop journal journal Nature volume 571, pages 532 (year 2019)NoStop [Clark et al.(2020)Clark, Schine, Baum, Jia, and Simon]clark2020observation author author L. W. Clark, author N. Schine, author C. Baum, author N. Jia, and author J. Simon, @noop journal journal Nature volume 582, pages 41 (year 2020)NoStop [Aidelsburger et al.(2013)Aidelsburger, Atala, Lohse, Barreiro, Paredes, and Bloch]aidelsburger2013realization author author M. Aidelsburger, author M. Atala, author M. Lohse, author J. T. Barreiro, author B. Paredes, and author I. Bloch, https://doi.org/10.1103/PhysRevLett.111.185301 journal journal Physical Review Letters volume 111, pages 185301 (year 2013)NoStop [Miyake et al.(2013)Miyake, Siviloglou, Kennedy, Burton, and Ketterle]miyake2013realizing author author H. Miyake, author G. A. Siviloglou, author C. J. Kennedy, author W. C. Burton, and author W. Ketterle, https://doi.org/10.1103/PhysRevLett.111.185302 journal journal Physical Review Letters volume 111, pages 185302 (year 2013)NoStop [Eckardt(2017)]eckardt2017colloquium author author A. Eckardt, @noop journal journal Reviews of Modern Physics volume 89, pages 011004 (year 2017)NoStop [Bluvstein et al.(2021)Bluvstein, Omran, Levine, Keesling, Semeghini, Ebadi, Wang, Michailidis, Maskara, Ho et al.]bluvstein2021controlling author author D. Bluvstein, author A. Omran, author H. Levine, author A. Keesling, author G. Semeghini, author S. Ebadi, author T. T. Wang, author A. A. Michailidis, author N. Maskara, author W. W. Ho, et al., @noop journal journal Science volume 371, pages 1355 (year 2021)NoStop [Aliyu et al.(2021)Aliyu, Zhao, Quek, Yellapragada, and Loh]aliyu2021d author author M. M. Aliyu, author L. Zhao, author X. Q. Quek, author K. C. Yellapragada, and author H. Loh, @noop journal journal Physical Review Research volume 3, pages 043059 (year 2021)NoStop [Brown et al.(2019)Brown, Thiele, Kiehl, Hsu, and Regal]brown2019gray author author M. Brown, author T. Thiele, author C. Kiehl, author T.-W. Hsu, and author C. Regal, @noop journal journal Physical Review X volume 9, pages 011057 (year 2019)NoStop [Ang'ong'a et al.(2022)Ang'ong'a, Huang, Covey, and Gadway]angonga2022gray author author J. Ang'ong'a, author C. Huang, author J. P. Covey, and author B. Gadway, https://doi.org/10.1103/PhysRevResearch.4.013240 journal journal Physical Review Research volume 4, pages 013240 (year 2022)NoStop [Levine et al.(2018)Levine, Keesling, Omran, Bernien, Schwartz, Zibrov, Endres, Greiner, Vuletić, and Lukin]levine2018high author author H. Levine, author A. Keesling, author A. Omran, author H. Bernien, author S. Schwartz, author A. S. Zibrov, author M. Endres, author M. Greiner, author V. Vuletić, and author M. D. 
Lukin, @noop journal journal Physical Review Letters volume 121, pages 123603 (year 2018)NoStop [Amthor et al.(2010)Amthor, Giese, Hofmann, and Weidemüller]amthor2010evidence author author T. Amthor, author C. Giese, author C. S. Hofmann, and author M. Weidemüller, @noop journal journal Physical Review letters volume 104, pages 013001 (year 2010)NoStop [Wintermantel et al.(2021)Wintermantel, Buchhold, Shevate, Morgado, Wang, Lochead, Diehl, and Whitlock]wintermantel2021epidemic author author T. M. Wintermantel, author M. Buchhold, author S. Shevate, author M. Morgado, author Y. Wang, author G. Lochead, author S. Diehl, and author S. Whitlock, @noop journal journal Nature Communications volume 12, pages 103 (year 2021)NoStop [Liu et al.(2022)Liu, Yang, Bienias, Iadecola, and Gorshkov]liu2022localization author author F. Liu, author Z.-C. Yang, author P. Bienias, author T. Iadecola, and author A. V. Gorshkov, @noop journal journal Physical Review Letters volume 128, pages 013603 (year 2022)NoStop [Marcuzzi et al.(2017)Marcuzzi, Minář, Barredo, De Léséleuc, Labuhn, Lahaye, Browaeys, Levi, and Lesanovsky]marcuzzi2017facilitation author author M. Marcuzzi, author J. Minář, author D. Barredo, author S. De Léséleuc, author H. Labuhn, author T. Lahaye, author A. Browaeys, author E. Levi, and author I. Lesanovsky, @noop journal journal Physical Review Letters volume 118, pages 063606 (year 2017)NoStop [Kaufman et al.(2012)Kaufman, Lester, and Regal]kaufman2012cooling author author A. M. Kaufman, author B. J. Lester, and author C. A. Regal, @noop journal journal Physical Review X volume 2, pages 041014 (year 2012)NoStop [De Léséleuc et al.(2019)De Léséleuc, Lienhard, Scholl, Barredo, Weber, Lang, Büchler, Lahaye, and Browaeys]de2019observation author author S. De Léséleuc, author V. Lienhard, author P. Scholl, author D. Barredo, author S. Weber, author N. Lang, author H. P. Büchler, author T. Lahaye, and author A. Browaeys, @noop journal journal Science volume 365, pages 775 (year 2019)NoStop [Evered et al.(2023)Evered, Bluvstein, Kalinowski, Ebadi, Manovitz, Zhou, Li, Geim, Wang, Maskara et al.]evered2023high author author S. J. Evered, author D. Bluvstein, author M. Kalinowski, author S. Ebadi, author T. Manovitz, author H. Zhou, author S. H. Li, author A. A. Geim, author T. T. Wang, author N. Maskara, et al., @noop journal journal arXiv preprint arXiv:2304.05420 (year 2023)NoStop [Nguyen et al.(2018)Nguyen, Raimond, Sayrin, Cortiñas, Cantat-Moltrecht, Assemat, Dotsenko, Gleyzes, Haroche, Roux, Jolicoeur, and Brune]nguyen2018towards author author T. L. Nguyen, author J. M. Raimond, author C. Sayrin, author R. Cortiñas, author T. Cantat-Moltrecht, author F. Assemat, author I. Dotsenko, author S. Gleyzes, author S. Haroche, author G. Roux, author T. Jolicoeur, and author M. Brune, https://doi.org/10.1103/PhysRevX.8.011032 journal journal Physical Review X volume 8, pages 011032 (year 2018)NoStop [Meinert et al.(2020)Meinert, Hölzl, Nebioglu, D'Arnese, Karl, Dressel, and Scheffler]meinert2020indium author author F. Meinert, author C. Hölzl, author M. A. Nebioglu, author A. D'Arnese, author P. Karl, author M. Dressel, and author M. Scheffler, @noop journal journal Physical Review Research volume 2, pages 023192 (year 2020)NoStop [Cohen and Thompson(2021)]cohen2021quantum author author S. R. Cohen and author J. D. Thompson, @noop journal journal PRX Quantum volume 2, pages 030322 (year 2021)NoStop
http://arxiv.org/abs/2306.01711v1
20230602173217
OMNI: Open-endedness via Models of human Notions of Interestingness
[ "Jenny Zhang", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Open-ended algorithms aim to learn new, interesting behaviors forever. That requires a vast environment search space, but there are thus infinitely many possible tasks. Even after filtering for tasks the current agent can learn (i.e., learning progress), countless learnable yet uninteresting tasks remain (e.g., minor variations of previously learned tasks). An Achilles Heel of open-endedness research is the inability to quantify (and thus prioritize) tasks that are not just learnable, but also interesting (e.g., worthwhile and novel). We propose solving this problem by Open-endedness via Models of human Notions of Interestingness (OMNI). The insight is that we can utilize large (language) models (LMs) as a model of interestingness (MoI), because they already internalize human concepts of interestingness from training on vast amounts of human-generated data, where humans naturally write about what they find interesting or boring. We show that LM-based MoIs improve open-ended learning by focusing on tasks that are both learnable and interesting, outperforming baselines based on uniform task sampling or learning progress alone. This approach has the potential to dramatically advance the ability to intelligently select which tasks to focus on next (i.e., auto-curricula), and could be seen as AI selecting its own next task to learn, facilitating self-improving AI and AI-Generating Algorithms [The code is available at <https://github.com/jennyzzt/omni>.]. § INTRODUCTION Provided that the real, significant challenges of AI safety and existential risk can be solved <cit.>, there are tremendous gains to be had by creating more powerful AI or even AGI. A great hope for AI is that one day it can produce breakthroughs that fundamentally improve the human condition. These so-far uniquely human advancements and discoveries are the hallmark of civilization, from the invention of the wheel, to farming, vaccines, computers, and even rock and roll. Perhaps someday, unlimited energy and a cure for cancer can be added to the list. What does AI need to possess to discover such new paradigms, as only humans have until now? Much discussed in the field of open-endedness <cit.>, the ephemeral fuel behind civilization's prodigious output is the human intuition for interestingness. Drawing upon eons of human experience, we can sense potential even when we don't precisely know where it leads. Conventional Reinforcement Learning (RL) tools (e.g., intrinsic motivation <cit.> and learning progress <cit.>) are so far only shadows of what such a human sense could do. However, with the rise of large models (LMs) (a.k.a. foundation models <cit.>), an intriguing prospect has suddenly arisen – trained on vast troves of human experience, perhaps LMs have the potential to grapple for the first time with the critical question of what is actually interesting to explore. Open-ended learning algorithms, which could leverage such a notion of interestingness, seek to create AI agents that, like humans, continuously learn a variety of different skills within a vast, complex, ever-changing environment. The challenge addressed by interestingness is that, in such environments, there are an infinite number of possible tasks, requiring some method to choose which tasks to try to learn next at every point in training.
Handcrafting curricula for training agents in open-ended environments can be extremely challenging due to the sheer number of tasks and the need to adapt to the agent's skill level and learning progress. In pursuit of an algorithm that is applicable in any domain and enables perpetual learning, handcrafting curricula proves to be an impractical solution. Learning progress methods are a type of auto-curriculum approach that estimates which tasks are at appropriate difficulty levels for the agent to learn from <cit.>. However, such methods can be distracted by learnable yet uninteresting tasks. For example, an agent could be bogged down indefinitely with rearranging silverware in slightly new configurations, hindering it from trying other interesting tasks. Even after filtering for tasks that the current agent can learn, countless learnable yet uninteresting tasks may persist (e.g., slight variations of previously learned tasks). A key challenge in open-endedness research is the inability to quantify and thus focus on tasks that are not only learnable but also interesting. Many works have tried to encourage a predefined metric of novelty, diversity, exploration, or open-endedness, processes that necessitate the quantification of these ineffable qualities. The optimization of these quantitative measures often leads to undesirable or pathological outcomes, resulting in an output that conforms to the defined metrics, rather than achieving the intended goal <cit.>. As Goodhart's law posits, “when a measure becomes a target, it ceases to be a good measure” <cit.>. For example, an agent might exploit a novelty measure by generating many superficially different but ultimately trivial solutions, thus undermining the goal of discovering genuinely interesting outcomes <cit.>. Similarly, based on how intrinsic motivation is measured, an agent could be biased towards certain types of solutions, leading to a narrow exploration of the problem space rather than developing diverse and valuable insights and innovations <cit.>. Directly specifying a criteria for what constitutes an interesting learning challenge is unlikely to yield satisfactory results. Instead, we propose utilizing neural networks to model the ineffable notion of interestingness that humans have. To borrow from Newton, modern AI sees further by standing on the shoulders of giant human datasets. Training on vast amounts of human-generated data has proven very powerful in many cases, such as text generation (e.g., GPT-3 <cit.>), image generation (e.g., DALL-E <cit.>), and representation learning (e.g., CLIP <cit.>). We propose Open-endedness via Models of human Notions of Interestingness (OMNI). OMNI leverages the power of LMs that have already been trained on extensive human-generated data and have an inherent understanding of human notions of interestingness <cit.>. OMNI utilizes LMs as a model of interestingness (MoI) to focus on tasks that are: (1) learnable, at appropriate difficulty levels for the agents to learn from, and (2) interesting, roughly meaning worthwhile to learn and sufficiently novel. Concepts of “interestingness”, “worthwhile”, and “novelty” are challenging to explicitly define, let alone quantify, which is precisely what OMNI addresses. Humans can intuitively assess these qualities despite their elusive and abstract nature, echoing Justice Potter Stewart's sentiment of “I know it when I see it” <cit.>. 
The goal of OMNI, therefore, is to emulate this human capacity for nuanced interestingness judgement within the context of open-ended learning. We evaluate OMNI on a challenging domain, Crafter <cit.>, a 2D version of Minecraft. OMNI outperforms baselines based on uniform task sampling or learning progress alone. Overall, OMNI has the potential to significantly enhance the ability of AI to intelligently select which tasks to concentrate on next for endless learning and could pave the way for self-improving AI and AI-Generating Algorithms <cit.>. § RELATED WORK §.§ Auto-Curriculum Learning Training neural networks with a curriculum has been extensively studied <cit.>. Auto-curriculum learning has emerged as a promising research area in RL <cit.>, with approaches based on success probabilities and reward thresholds <cit.>, regret <cit.>, or learning progress <cit.>. Static threshold-based approaches provide a straightforward method for curriculum design. These approaches involve setting fixed criteria for tasks based on their difficulty or complexity. An agent progresses to the subsequent task in a predefined order only after mastering a simpler one. To handcraft an effective curriculum, one would have to understand the relative difficulty of each task and identify tasks of suitable difficulty corresponding to each phase of the agent's learning trajectory. Doing this in a vast task space is extremely difficult or even impossible. Regret-based methods compute per-task regret by taking the difference between the maximum known return and the average return over multiple rollouts. Regret-based methods typically select tasks with high regret, under the assumption that these tasks still offer substantial learning opportunities <cit.>. However, in stochastic environments, this approach may favor more stochastic and less learnable tasks instead of less stochastic and more learnable ones <cit.>. Learning-progress-based curricula have the potential to mitigate these issues by monitoring the agent's progress and adapting the task selection accordingly <cit.>. <cit.> demonstrated that learning progress can be measured reliably and that learning-progress-based curricula can be applied to hard RL problems at scale. Our work extends the learning-progress-based curriculum proposed by <cit.>. A notable limitation of existing auto-curricula approaches is their inability to distinguish between interesting and uninteresting tasks. Despite filtering for learnable tasks, open-ended environments may still contain infinite learnable but uninteresting tasks. This paper proposes a novel method for identifying and filtering interesting tasks and integrates it with a learning-progress-based auto-curriculum. §.§ Attempts to Quantify Interestingness There have been many works quantifying novelty, diversity, or intrinsic motivation <cit.>. However, by Goodhart's law, algorithms that optimize against these measures produces pathologies and end up uninteresting <cit.>. Optimizing for novelty might inadvertently promote the generation of unusual and unexpected outputs that lack practical utility or relevance <cit.>. The focus on novelty may overshadow other important aspects, such as its effectiveness or applicability to real-world problems. Similarly, attempts to optimize for diversity can lead to the production of a broad range of solutions that, while appearing to be varied and distinct in a specified area, may ultimately lack depth or significance in other areas <cit.>. 
Intrinsic motivation can also result in the pursuit of tasks that are not inherently interesting or valuable (e.g., twitching in slightly novel ways) <cit.>. In light of these shortcomings, it becomes apparent that the pursuit of handcrafted measures of interestingness is fraught with difficulties. Instead, this paper employs an LM to model human notions of interestingness, gleaned from a large text corpora of existing human-generated data. §.§ Pre-trained Large Models in Open-Endedness Large (language) models have recently shown a remarkable ability to capture rich knowledge on an extensive array of subjects from large-scale text corpora. They achieve impressive performance across a wide range of natural language processing tasks <cit.> and display profound understanding of complex concepts such as physics. Consequently, they are being utilized in many robotics domains <cit.>. There has been growing interest in using them for task selection or generation. Some studies have investigated the application of LMs in breaking down high-level instructions into a sequence of sub-goals, which can be executed by an agent in a zero-shot manner <cit.> or used to train modular sub-policies <cit.>. <cit.> queries LMs for zero-shot commonsense priors and apply them to a planning task. Other studies have utilized LMs to estimate success rates for a given task or desired behavior <cit.>. Moreover, LMs have been employed to generate tasks, enabling structured exploration in various environments <cit.>. However, their method, which primarily serves as a pre-training strategy, does not consider the agent's past achievements. OMNI incorporates feedback on task success rates, allowing the curriculum to adaptively select tasks aligned with the agent's growing capabilities. Furthermore, OMNI leverages LMs' commonsense priors to model human notion of interestingness, potentially enabling open-ended learning by focusing on interesting learnable tasks only (instead of getting lost) within a vast space of possible tasks. § METHODS §.§ Problem Formulation We train task-conditioned agents, and formulate the RL problem as a partially observed Markov decision process <cit.> defined by a tuple (𝒮, 𝒜, 𝒯, ℛ, 𝒪, Ω, γ). Observations o ∈Ω depend on the new environment states s ∈𝒮 and actions taken a ∈ A via 𝒪(o|s, a). The task which the agent is conditioned on is part of the environment state s. 𝒯(s'|s, a) describes the dynamics of the environment. ℛ is the environment's reward function. γ is a discount factor. OMNI focuses on generating learnable and interesting tasks to condition the RL agent on. §.§ Learning Progress Curriculum The task pool in open-ended environments can be very large and diverse, making it challenging for an agent to learn effectively through uniform sampling. Most randomly sampled tasks are likely to be too easy or too difficult for the agent. To automatically identify tasks at the frontier of the agent's capabilities, we extend the learning-progress-based curriculum from <cit.>. The curriculum predominantly samples tasks with high learning progress, defined as an agent's recent change in task success probability. During training, the agent is periodically evaluated, and a recent success probability estimate p_recent is calculated by applying an exponential moving average (EMA) function to the evaluated task success rates. p_recent is smoothed with a second, identical EMA to obtain a slower-to-change reflection p_gradual of the success probability estimate. 
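To make the smoothing step concrete, the sketch below (a simplification, not the authors' released code) applies the two identical EMAs described above; the smoothing constant of 0.1 comes from Appendix A, while the function and variable names are assumptions for illustration.

import numpy as np

def update_ema(prev, new, alpha=0.1):
    # Standard exponential moving average with smoothing constant alpha.
    return alpha * new + (1.0 - alpha) * prev

def update_lp_estimates(success, p_recent, p_gradual, alpha=0.1):
    # success: per-task success rates from the latest evaluation, shape (num_tasks,).
    # p_recent / p_gradual persist across evaluation intervals.
    p_recent = update_ema(p_recent, success, alpha)      # fast estimate
    p_gradual = update_ema(p_gradual, p_recent, alpha)   # slower, smoothed copy
    return p_recent, p_gradual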
Since tasks with low success probabilities are more likely to be novel and are harder to learn because the agent observes fewer successes, p_recent and p_gradual are reweighted to magnify the learning progress in tasks with low success probabilities and reduce the learning progress in tasks with high success probabilities. This reweighting also compensates for the temporal delay caused by the EMA (Figure <ref>). Bidirectional learning progress, the absolute difference between the reweighted p_recent and p_gradual, is used to also focus learning on tasks where performance is degrading due to forgetting. Sampling of training tasks is biased towards those that score the highest on this bidirectional learning progress measure. The reweighting mechanism magnifies differences in low probabilities, putting additional focus on tasks that have high learning progress and low success rates. However, it causes a bias against tasks with initially high success rates achieved by chance even though the agent has not yet learned them. We do not remove the reweighting mechanism since it remains useful in the later stages of training. Instead, we propose an extension to the approach from <cit.>, normalizing the task success rates with the success rates achieved by a random action policy. Normalizing the success rates reduces the bias against tasks with initially high success rates, increasing their sample frequency during the early stages of the learning progress curriculum. Appendix <ref> has more details on the learning progress curriculum and the performance difference without our extension. §.§ Modeling what Humans Find Interesting A learning progress curriculum can be distracted by endless variations of uninteresting tasks. To address this challenge, a Model of Interestingness (MoI) is used to focus on the selection of interesting tasks that offer substantial learning value. Humans often intuitively know what might be useful for learning new skills or achieving goals much later <cit.>. This is evident in children playing to unknowingly acquire skills, or scientists exploring new areas to uncover unexpected and beneficial knowledge for future endeavors. This paper presents one specific instance of the OMNI principle, but there are many other possible instantiations to explore in future work (Section <ref>). First, this section outlines the process of using an LM to determine which tasks are interesting. Then, the section describes how the interestingness predictions are utilized to obtain task sampling weights. Determining Interesting Tasks. This paper capitalizes on the capabilities of autoregressive LMs, specifically GPT-3 <cit.>, to emulate human notions of interestingness. LMs are pretrained on vast and diverse text corpora, enabling them to amass a significant amount of world knowledge. GPT-3 is prompted in a few-shot manner by providing it with a few examples of choosing which tasks are interesting. It takes into account the agent's existing proficiency on a given set of tasks and suggests what humans would typically find interesting. See Appendix <ref> for the full prompt. The prompt excluding the few-shot examples is: Sampling Weights. All tasks are first algorithmically partitioned as either “interesting” or “boring” based on their success rates and the LM's assessment of their relation to tasks already classified as “interesting” (Algorithm <ref>). Since the raw evaluation of task success rates can be noisy, the task success rates referenced in this section are smoothed with an EMA function. 
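Putting the pieces of the learning-progress curriculum together (the reweighting function, the bidirectional difference, and the proposed normalization against a random-action baseline, all detailed in Appendix A), a minimal illustrative sketch could look as follows; it assumes NumPy arrays indexed by task and is not the authors' implementation.

import numpy as np

def reweight(p, p_theta=0.1):
    # f(p) = (1 - p_theta) * p / (p + p_theta * (1 - 2p)): magnifies differences
    # at low success probabilities.
    return (1.0 - p_theta) * p / (p + p_theta * (1.0 - 2.0 * p))

def normalize_to_random_baseline(t_eval, t_rdn):
    # t_norm = (t_eval - t_rdn) / (1 - t_rdn): success beyond a random policy.
    return (t_eval - t_rdn) / (1.0 - t_rdn)

def lp_sampling_weights(p_recent, p_gradual):
    # Bidirectional learning progress: |f(p_recent) - f(p_gradual)|.
    lp = np.abs(reweight(p_recent) - reweight(p_gradual))
    z = (lp - lp.mean()) / (lp.std() + 1e-8)  # z-score
    w = 1.0 / (1.0 + np.exp(-z))              # sigmoid
    return w / w.sum()                        # normalize to probabilities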
Algorithm <ref> iteratively selects the task with the highest success rate not yet categorized, adds it to the “interesting” set (Step <ref>), prompts the LM to identify boring tasks from the remaining tasks in relation to the “interesting” set (Step <ref>), and updates the “boring” set (Step <ref>), repeating until all tasks are categorized. In Step <ref>, the rationale for adding the task with the highest success rate, that is not yet categorized, to the “interesting” set is twofold: first, the LM considers this task to be sufficiently distinct from those already in the “interesting” set, otherwise, it would have already been considered “boring” in Step <ref>; second, its relatively higher success rate indicates that it aligns closer to the agent's current skill level. To illustrate, assume that the “collect wood” task has the highest success rate. It is added to the “interesting” set (Step <ref>). Then the LM deems “collect 2..10 wood” tasks as boring (Step <ref>). Now, when repeating Step <ref>, the algorithm will add the next task with the highest success rate and not yet categorized, e.g., “place table”, to the “interesting” set, whereas without the filter in Step <ref>, “collect 2 wood” might have been selected as interesting instead. Sampling weights of 1.0 and 0.001 are assigned to interesting and boring tasks respectively. The final task sampling rates in OMNI are obtained by multiplying the sampling weights from the learning progress curriculum and the MoI together, and then normalizing to form the probability distribution. § EXPERIMENTS §.§ Crafter Environment We evaluate OMNI on Crafter <cit.>, a 2D version of Minecraft that enables collecting and creating a set of artifacts organized along a technology tree. This means that certain tasks need to be completed, often multiple times, as prerequisites for other more challenging tasks (Figure <ref>). Agents receive RGB pixel observations (64 x 64 resolution) of a 9 x 9 grid area surrounding their position within a 64 x 64 grid landscape that varies with each episode, offering a complex and engaging testing ground. We modify the game to focus on gathering and crafting skills by eliminating the survival component. This removes the need for the agent to learn and continually apply survival tactics against enemies or for food gathering. The “sleep” and “place plant” actions are important for survival in the original game and have been omitted due to their reduced relevance in our modified context, which excludes the survival aspect. The original game consists of 22 tasks, of which, the 15 tasks unrelated to survival are selected and considered interesting. Crafter's challenges (owing to the procedurally generated landscape), such as partial observability, dynamic interactions, and diverse and complex tasks, capture in a microcosm much of the essence of other open-ended environments <cit.>, including the real world. However, other real-world complexities, such as multi-agent interaction and an unboundedly complex world, remain outside the scope of this environment and paper. §.§ Setup To train the Crafter agent, we use PPO <cit.>, a standard RL algorithm. Policy details and hyperparameters can be found in Appendix <ref>. The RL agent is trained in a task-conditioned setting, where it is provided a target task (represented with a bag-of-words encoding) as part of its observation and rewarded exclusively upon successful completion of the conditioned task (with a reward of +1). 
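A rough sketch of the partitioning loop (Algorithm 1) and of the final combination of learning-progress and MoI weights follows; query_lm_for_boring stands in for the few-shot GPT-3 call of Appendix D, and the data structures are assumptions rather than the authors' code.

def partition_tasks(success_rates, query_lm_for_boring):
    # success_rates: dict mapping task name -> EMA-smoothed success rate.
    interesting, boring = set(), set()
    remaining = set(success_rates)
    while remaining:
        # Step 1: promote the uncategorized task with the highest success rate.
        best = max(remaining, key=lambda t: success_rates[t])
        interesting.add(best)
        remaining.remove(best)
        # Step 2: ask the LM which remaining tasks are boring relative to the
        # current "interesting" set (placeholder for the few-shot prompt).
        newly_boring = set(query_lm_for_boring(interesting, remaining))
        # Step 3: move those tasks to the boring set.
        boring |= newly_boring
        remaining -= newly_boring
    return interesting, boring

def omni_sampling_probs(lp_weights, interesting, tasks):
    # MoI weights: 1.0 for interesting tasks, 0.001 for boring ones; multiply
    # with learning-progress weights and renormalize to probabilities.
    moi = {t: (1.0 if t in interesting else 0.001) for t in tasks}
    combined = {t: lp_weights[t] * moi[t] for t in tasks}
    total = sum(combined.values())
    return {t: w / total for t, w in combined.items()}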
To investigate our hypothesis that focusing on interesting tasks with high learning progress will improve performance, we dilute the 15 interesting tasks with 90 “boring” tasks and 1023 “extremely challenging” tasks that serve as potential distractors for learning-progress-based approaches. Boring tasks are generated as numerical repeats of interesting tasks, e.g., “collect N wood” where N ∈ [2, 10], analogous to how minor numerical variations of real-world tasks are less interesting than tasks that differ qualitatively. See Appendix <ref> for the full list of boring tasks. Extremely challenging tasks represent tasks that are too difficult for the agent to complete at its current state of learning, serving as tasks that uniform sampling will waste time on, but that learning-progress-based methods should successfully ignore. The agent is assumed to always fail at these extremely challenging tasks and hence is always assigned a success rate of 0 for them. By analogy, consider the futility of attempting to cook a 5-course meal before learning the basic skill of cutting a vegetable. §.§ Results We compare the performance of agents trained with: (1) Uniform sampling, (2) Learning Progress (LP) only, and (3) OMNI: Learning Progress with additional filtering by a Model of Interestingness (OMNI: LP + MoI). Uniform sampling, the control algorithm, samples all tasks with equal probabilities. Uniform sampling is the most naive and samples tasks that are too easy or too difficult for the agent most of the time. LP samples tasks based on the calculated learning progress weights (Section <ref>), but is distracted by the many boring tasks. OMNI: LP + MoI focuses on the subset of tasks with high learning progress that are also interesting (Section <ref>). All experiments are run for 100 million time steps and are repeated 10 times with different random seeds. Each experiment takes about 33 hours on a 24GB NVIDIA A10 GPU with 30 virtual CPUs. We evaluate our methods with two metrics: (1) the average task success rate, and (2) the number of tasks with success rates exceeding a predetermined threshold α. The parameter α can be arbitrarily chosen from the lower half of the interval [0, 1]. Specifically, this study sets α = 0.2, which is consistent with the selections made in related literature <cit.>. The first metric reflects the agent's average performance across all tasks, while the second metric captures the extent to which the agent is a generalist that has decent competency on many different tasks. These metrics are calculated on the full set of interesting and boring tasks (Figure <ref>). Metrics calculated on interesting tasks only are shown in Appendix <ref>. All confidence intervals given are 95% median bootstrap confidence intervals obtained by resampling 1000 times. Confidence intervals are reported with the following notation: stat (CI: lower – upper) where stat is the median across runs. Shaded areas in graphs also indicate the 95% median bootstrap confidence interval obtained by resampling 1000 times. Uniform sampling. As expected, the results with uniform sampling are poor. Worse, the agents did not improve over time as most tasks sampled are too difficult or too easy for the agent and successes are extremely sparse (Figure <ref>). The agent is considered to have learned a task if its conditional success probability on that task is at least 0.2. The agent learns 4 (CI: 4 – 6) out of 105 interesting and boring tasks and only 3 (CI: 2 – 3) out of 15 interesting tasks. 
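The two evaluation metrics and the bootstrap confidence intervals reported throughout this section can be computed along the lines of the sketch below (an illustration under the stated thresholds, not the authors' evaluation script).

import numpy as np

def evaluation_metrics(success_rates, alpha=0.2):
    # success_rates: per-task success rates for one run.
    rates = np.asarray(success_rates, dtype=float)
    avg_success = float(rates.mean())              # metric (1): average success rate
    num_learned = int((rates >= alpha).sum())      # metric (2): tasks learned at threshold alpha
    return avg_success, num_learned

def median_bootstrap_ci(per_run_values, n_resamples=1000, ci=0.95, seed=0):
    # 95% bootstrap confidence interval for the median across runs.
    rng = np.random.default_rng(seed)
    values = np.asarray(per_run_values, dtype=float)
    medians = [np.median(rng.choice(values, size=len(values), replace=True))
               for _ in range(n_resamples)]
    lo, hi = np.quantile(medians, [(1 - ci) / 2, 1 - (1 - ci) / 2])
    return float(np.median(values)), float(lo), float(hi)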
The agent achieves an average task success rate of 0.030 (CI: 0.026 – 0.033) on interesting and boring tasks, and 0.103 (CI: 0.087 – 0.120) on interesting tasks only. Learning Progress Curriculum. By focusing on tasks with suitable levels of difficulty, the agent learns to do a lot more tasks with higher success rates than uniform sampling. The agent learns 55 (CI: 54 – 56) out of 105 interesting and boring tasks and 9 (CI: 9 – 11) out of 15 interesting tasks. The agent achieves an average task success rate of 0.42 (CI: 0.41 – 0.43) on interesting and boring tasks, and 0.52 (CI: 0.50 – 0.56) on interesting tasks only. Across all metrics, the differences in performance between LP and Uniform at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test), showing that LP significantly outperforms uniform sampling (Figure <ref>). LP samples tasks that are at the frontier of the agent's capabilities (Figures <ref> and <ref>). When a task's conditional success probability changes, the LP curriculum focuses more on it. This means that there will be more rollouts where the task is the given goal and thus more positive examples from which the agent can learn to solve the conditioned task. However, LP is distracted by boring tasks (Figure <ref>). When the conditional success probabilities of boring tasks change, LP allocates higher sampling weights to them even though they are similar to other sampled tasks and might not expand the agent's range of skills. OMNI: Learning Progress + a Model of Interestingness. To automatically select and focus on interesting tasks, an LM is prompted in a few-shot manner to predict which tasks are interesting. By combining LP with an MoI, OMNI focuses on the subset of high learning progress tasks that are interesting. The agent learns 82 (CI: 80 – 87) out of 105 interesting and boring tasks and 14 (CI: 14 – 14) out of 15 interesting tasks. The agent achieves an average task success rate of 0.56 (CI: 0.54 – 0.58) on interesting and boring tasks, and 0.78 (CI: 0.76 – 0.80) on interesting tasks only. Across all metrics, the differences in performance between OMNI and LP at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test), showing that OMNI significantly outperforms an LP-only curriculum (Figure <ref>). OMNI is not distracted by boring tasks with high learning progress, and focuses on the interesting tasks only (Figure <ref>). The trained agent not only achieves higher average task success rates, but also learns more challenging tasks faster (Figure <ref>). We thus know OMNI performs better than LP alone, but how far is it from optimal? To address this, we created an oracle for the MoI, referred to as the Oracle Model of Interestingness (OMoI). Surprisingly and impressively, the performance of the LM-based MoI is nearly on par with the oracle, suggesting that OMNI is highly effective in identifying interesting tasks for the agent to learn on. Across all metrics, the performance of OMNI is almost indistinguishable from that of LP + OMoI (Figure <ref>), and the differences in performance between them at 25%, 50%, 75%, and 100% of the way through training are not statistically significant (all p > 0.05, Mann Whitney U test). Appendix <ref> presents more details on how the OMoI is designed, and provides plots of task sample rates and task success rates for LP + OMoI. 
These findings underscore the effectiveness of OMNI in harnessing the power of LMs to guide AI agents towards interesting and learnable tasks. The experimental setting employed primarily considers repetitive tasks as boring tasks. One may wonder how OMNI performs on different types of boring tasks. To address this, we explored two additional settings in Appendices <ref> and <ref>, where boring tasks are generated as compounds and synonyms respectively. Compound tasks are combinations of interesting tasks (e.g., “collect wood and make stone pickaxe”). Synonymous tasks are tasks with different names but the same success condition (e.g., “collect wood” vs. “gather wood”). Remarkably, the MoI used in OMNI is still able to capture what humans find interesting in these auxiliary settings. As a result, OMNI significantly outperforms the LP-only and Uniform curricula in both of these two additional settings. These findings suggest that OMNI is robust and adaptable to various tasks, including those that deviate from the repetitive tasks initially considered. By demonstrating its effectiveness across diverse task settings, OMNI further strengthens its position as a valuable approach for guiding AI agents towards interesting and learnable tasks. § DISCUSSION, FUTURE WORK, AND CONCLUSION This study explores the vision of using human notions of interestingness to accelerate open-ended learning, an approach we term Open-endedness via Modeling human Notions of Interestingness (OMNI). OMNI has several advantages over other methods by leveraging human concepts of interestingness to guide task selection in open-ended learning (Section <ref>). In this first version, we estimate learning progress through statistical methods and utilize a Model of Interestingness (MoI) based on human data distilled into LMs. A different version of OMNI could have the LM judge both learning progress and interestingness by giving it a history of task selection and task performance. That could allow for a more flexible notion of learning progress, as it can potentially recognize not just success or failure, but also important stepping stones and patterns leading to a solution, echoing the complexity of human learning progression. Consider an agent learning object manipulation, eventually progressing to more complex tasks like peeling an egg. Despite the prolonged lack of reward, the task may not be “too difficult” as statistical learning progress might suggest. Given the agent's proven ability to handle intricate items, it may be primed for egg peeling, simply requiring more attempts. Preliminary results suggest that LMs can successfully integrate these aspects (Figure <ref>). It is worth noting that as LMs continue to advance <cit.>, the performance of all versions of OMNI are expected to improve correspondingly. Ultimately, OMNI offers a general recipe for accelerated learning in open-ended environments with potentially infinite tasks, and steers open-ended learning towards meaningful and interesting progress, instead of meandering aimlessly amidst endless possibilities. Looking ahead, there are several promising avenues for future work. First, pushing beyond the reliance on a predetermined set of tasks, systems that automatically generate tasks could enable learning in a broader range of tasks <cit.>. One way of representing a diverse and vast task space is through language. 
Its advantages include ease of specification by humans, the ability to encapsulate more abstract concepts than standard state-based goals <cit.>, and the potential to enhance the trained agent's generalizability due to its partial compositionality <cit.>. Second, instead of using handcrafted reward signals, we propose training foundation models to automatically provide reward signals, exemplified by previous approaches <cit.>. One version could be a video-language foundation model that evaluates the degree to which an agent completes natural language tasks like “build a house” or “herd sheep to a hilltop”. It is worth noting that these two points are closely related, as the automatic estimation of reward signals for arbitrary tasks specified in natural language is essential in order to use RL with automatically generated natural language tasks. Another possibility is to incorporate multi-modal models, such as vision-language models and other modalities into the MoI. This would allow the MoI to have richer representations and a better comprehension of the agent's capabilities, facilitating a more accurate assessment of the agent's learning progress and task diversity. For example, a vision-language MoI might see that the agent is making progress on or very close to solving a task for which it is getting no reward, such as peeling an egg. Another idea is to adapt the MoI to different notions of interestingness based on the context. For example, an agent with a language prior, and thus an understanding of synonyms, would not benefit from attempting many synonymous tasks. Conversely, an agent lacking such a prior could learn from experience that different synonymous tasks indeed share the same success conditions after tackling numerous such tasks. Our preliminary findings suggest that LMs are flexible and capable of altering their predictions about which tasks are interesting according to the input context (Figure <ref>). An additional direction for future research is enabling the MoI to autonomously analyze quantitative performance measures, make its own assessment of learning progress, and incorporate that into its notion of interestingness. Our preliminary investigations indicate that LMs, such as GPT-3 and GPT-4 <cit.>, possess the ability to automatically analyze numerical results and adjust their understanding of interestingness (Figures <ref> and <ref>). Critically, by Goodhart's law, we expect that any model will have pathologies uncovered once it is a target metric being optimized against. For instance, fine-tuning a task generator to produce tasks that the MoI finds interesting might eventually result in the MoI not being a good indicator of what is interesting. Hence, refining and updating the MoI with additional human feedback could lead to more effective learning systems, an algorithm within the OMNI paradigm we call Open-Endedness with Human Feedback (OEHF). Similar to Reinforcement Learning with Human Feedback (RLHF) <cit.>, the objective is to train a model that can effectively capture an ineffable property that, although challenging to quantitatively measure, is readily identifiable upon observation (e.g., a backflip for RLHF, or whether a task is interesting for OEHF). One version would be to build upon the insights of this paper and start with a model already well-versed with human concepts of interestingness (via unsupervised pre-training on internet-scale human data), and further fine-tune that MoI with additional human evaluation of its output (as often as necessary). 
Such fine-tuning can help minimize suboptimal interestingness judgements, enhance the MoI's understanding of skills not initially present in its zero-shot repertoire, or tailor the MoI to a specific domain. In conclusion, our work demonstrates the potential of using an MoI to significantly enhance auto-curricula and the quest for open-ended learning algorithms by intelligently focusing on learnable and interesting tasks. OMNI addresses the Achilles Heel of open-ended systems, which lies in defining and quantifying interestingness, as previous attempts have resulted in pathologies when optimizing against such definitions and quantifications. OMNI mitigates this problem by leveraging human notions of interestingness to guide AI systems. There are numerous ways to implement the principles of this new paradigm, and exploring different versions presents an exciting avenue for future research. The generality and applicability of OMNI to other open-ended domains with large task spaces further underscores its significance. In the long run, it hints at a synergy between LMs and open-endedness that simultaneously addresses looming challenges for both: how will LMs ultimately rise to the level of creativity seen in the best of human innovation, and how will open-endedness overcome the trap of diverging into a vast space of uninspiring mediocrity? By playing off each other's strengths, LMs can perhaps someday become essential engines of open-ended discovery and begin to participate in the creative dance that has defined civilization since its inception. This work was supported by the Vector Institute, a grant from Schmidt Futures, an NSERC Discovery Grant, and a generous donation from Rafael Cosman. We also thank Cédric Colas and members in our lab at the University of British Columbia, namely Aaron Dharna, Ben Norman, and Shengran Hu, for insightful discussions and feedback. Appendices § LEARNING PROGRESS CURRICULUM DETAILS For each task, we calculate the task success rate t_rdn achieved by a random action policy. At fixed evaluation intervals during training, the evaluated task success rate t_eval of the RL policy is normalized as such: t_norm = (t_eval - t_rdn) / (1 - t_rdn) The above is our contributed extension to the learning progress method in <cit.>. t_norm is smoothed with an EMA function to obtain p_recent (Figure <ref>, green). p_recent is smoothed with a second identical EMA function to obtain p_gradual (Figure <ref>, brown). The exponential smoothing constant applied in all experiments is 0.1. The bidirectional learning progress measure is given by LP = |f(p_recent) - f(p_gradual)| (Figure <ref>, blue), where f is the reweighting function: f(p) = (1 - p_θ)p / (p + p_θ(1 - 2p)) with parameter p_θ = 0.1. We employ a sampling function to transform the measure of learning progress into task sampling weights, focusing mostly on tasks with the largest learning progress. The steps are as follows: * Z-score the reweighted learning progress (subtract mean and divide by standard deviation). * Apply a sigmoid to the result. * Normalize resulting weights to sampling probabilities. §.§ Learning Progress Curriculum Ablation The learning progress curriculum without the proposed extension (Section <ref>) of normalizing the task success rates to a random action baseline is labelled as LP-no-norm. In light of limited compute, experiments in this setting are run for 30 million time steps (vs.
100 million in the main experiments) and are repeated 10 times (same as the main experiments) with different random seeds. The LP curriculum achieves higher average task success rates and learns more tasks with the proposed normalization extension (Figure <ref>). Across all metrics, the differences in performance between LP and LP-no-norm at 50%, 75%, and 100% of the way through training are statistically significant (all p < 0.05, Mann Whitney U test) (Table <ref>). § POLICY AND OPTIMIZATION DETAILS The model architecture is similar to those in previous works <cit.>. The RGB inputs are passed through a 2-layer convolution network with ReLU activations. The RGB convnet is followed by a fully connected layer of size 256. The visual embeddings are concatenated with the task encoding before being passed into an LSTM cell of size 256. The network output is given by a 2-layer linear action head for the policy and a 2-layer linear layer for the value function. Both have Tanh activation functions. Optimization is performed with Proximal Policy Optimization <cit.> and General Advantage Estimation <cit.>. § CRAFTER BORING TASKS The 90 boring tasks in main experiments of the Crafter environment are: * collect N drink, where N ∈ [2, 10] * collect N wood, where N ∈ [2, 10] * collect N coal, where N ∈ [2, 10] * collect N stone, where N ∈ [2, 10] * collect N iron, where N ∈ [2, 10] * collect N diamond, where N ∈ [2, 10] * place N table, where N ∈ [2, 5] * place N furnace, where N ∈ [2, 5] * place N stone, where N ∈ [2, 5] * make N wood pickaxe, where N ∈ [2, 5] * make N wood sword, where N ∈ [2, 5] * make N stone pickaxe, where N ∈ [2, 5] * make N stone sword, where N ∈ [2, 5] * make N iron pickaxe, where N ∈ [2, 5] * make N iron sword, where N ∈ [2, 5] § CRAFTER PROMPT We give GPT-3 three examples as part of the prompt: Given the set of tasks that the agent can do relatively well and the set of tasks that needs to be determined as interesting or not, the additional prompt is: In the few-shot examples, we do not include all possible tasks in to reduce token usage. While each example sets different tasks for , during inference, all tasks needing classification as interesting or not (Section <ref>) are inputted as without any additional filtering. GPT-3 predicts whether each task in is interesting or not with / , following the format in the few-shot examples. In our experience, GPT-3 nearly always conforms to the requested output format. However, in the rare cases that GPT-3 deviates from the expected output format, the responses are regenerated at a higher temperature, making the output less deterministic. For tasks where GPT-3 did not provide an answer, we modify the input to include only the tasks lacking responses, and then regenerate these responses. We access GPT-3 through OpenAI's APIs, opting for the Davinci model in our experiments, which costs $0.02 per 1000 tokens. Caching significantly reduces the number of API queries. We extensive cache GPT-3 responses and consistently reuse this cache across multiple runs. § SUPPLEMENTARY METRICS AND PLOTS The metrics and plots in this section complement the results shown in Section <ref>. To better see how different methods perform on interesting tasks, Figure <ref> offers the same metrics as Figure <ref> but calculated on the subset of interesting tasks only. Figures <ref> and <ref> are subsets of Figures <ref> and <ref> respectively, zoomed into the set of interesting tasks only. 
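For readers who want a concrete picture of the policy in Appendix B, here is a PyTorch-style sketch (two convolution layers, a 256-unit fully connected layer, the task encoding concatenated before a 256-unit LSTM cell, and Tanh-activated two-layer policy and value heads); kernel sizes, strides, and channel counts are not specified in the text and are assumptions here.

import torch
import torch.nn as nn

class CrafterPolicy(nn.Module):
    def __init__(self, num_tasks, num_actions, in_channels=3):
        super().__init__()
        # 2-layer convnet over 64x64 RGB observations (shapes assumed).
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            conv_dim = self.conv(torch.zeros(1, in_channels, 64, 64)).shape[1]
        self.fc = nn.Linear(conv_dim, 256)
        # Visual embedding concatenated with the bag-of-words task encoding.
        self.lstm = nn.LSTMCell(256 + num_tasks, 256)
        self.policy_head = nn.Sequential(nn.Linear(256, 256), nn.Tanh(),
                                         nn.Linear(256, num_actions))
        self.value_head = nn.Sequential(nn.Linear(256, 256), nn.Tanh(),
                                        nn.Linear(256, 1))

    def forward(self, obs, task_encoding, state=None):
        x = torch.relu(self.fc(self.conv(obs)))
        h, c = self.lstm(torch.cat([x, task_encoding], dim=-1), state)
        return self.policy_head(h), self.value_head(h), (h, c)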
Tables <ref> and <ref> present the Mann Whitney U test p-values, indicating statistical significance between methods as detailed in Section <ref>. § ORACLE MODEL OF INTERESTINGNESS The Oracle Model of Interestingness (OMoI) is a meticulously designed model that assigns sampling weights of 1.0 to interesting tasks and 0.001 to boring tasks. Interesting tasks are the set of 15 tasks shown in Figure <ref>, all other tasks are considered boring. The OMoI functions as an oracle that accurately discerns which tasks humans would typically find interesting for the agent to learn the most skills in this environment. Although we were able to design an oracle MoI in this domain, it is important to note that constructing such models in more complex domains will not always be feasible, nor would it scale well given the human labor required. Across all metrics, the differences in performance between OMNI and the oracle at 25%, 50%, 75%, and 100% of the way through training are not statistically significant (all p > 0.05, Mann Whitney U test) (Table <ref>). OMNI's performance is comparable to that of the oracle (Figure <ref>). OMNI and the oracle achieve similar task success rates for each task (Figure <ref>), and induce similar patterns in task sample rates (Figure <ref>). This suggests that LMs can capture key aspects of what humans typically find interesting. It is very encouraging, at least in this domain, that OMNI automatically determines what tasks are interesting and achieves results comparable to the oracle. § USING COMPOUNDS AS BORING TASKS In this setup, the tasks added that are boring are compound and repetitive tasks. In addition to the repetitive boring tasks (Appendix <ref>), compound tasks are generated by combining any two of the 15 interesting tasks (Figure <ref>). Hence, there is a total of 15 interesting tasks, 195 boring tasks, and 1023 extremely challenging tasks. In light of limited compute, experiments in this setting are run for 30 million time steps (vs. 100 million in the main experiments) and are repeated 10 times (same as the main experiments) with different random seeds. We compare the performance of agents trained with: (1) Uniform sampling, (2) Learning Progress (LP) only, (3) OMNI: Learning Progress with additional filtering by a Model of Interestingness (OMNI: LP + MoI), and (4) the oracle: Learning Progress with additional filtering by an Oracle Model of Interestingness (Oracle: LP + OMoI). The high-level summary of the results from these experiments is that they are qualitatively similar to when boring tasks are repetitive tasks only (Section <ref>). Uniform sampling. The results with uniform sampling are poor (Figure <ref>). The agent learns 6 (CI: 3 – 6) out of 195 interesting and boring tasks and 3 (CI: 2 – 3) out of 15 interesting tasks, achieving an average task success rate of 0.019 (CI: 0.017 – 0.021) on interesting and boring tasks, and 0.097 (CI: 0.082 – 0.105) on interesting tasks only. Learning Progress Curriculum. The agent learns 67 (CI: 65 – 69) out of 195 interesting and boring tasks and 9 (CI: 8 – 9) out of 15 interesting tasks, achieving an average task success rate of 0.19 (CI: 0.19 – 0.20) on interesting and boring tasks, and 0.40 (CI: 0.38 – 0.42) on interesting tasks only. Across all metrics, the differences between LP and Uniform at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test) (Table <ref>), showing that LP significantly outperforms uniform sampling (Figure <ref>). 
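As a small illustration of how this ablation's task pool could be enumerated, the sketch below builds the compound tasks from all pairs of the 15 interesting tasks (15 choose 2 = 105) on top of the repetitive tasks of Appendix C; the task strings and count ranges shown are assumptions that mirror the examples in the text.

from itertools import combinations

interesting_tasks = [
    "collect wood", "place table", "make wood pickaxe",  # ... and 12 more
]

def repetitive_tasks(base, counts):
    # e.g. "collect wood" -> "collect 2 wood", ..., "collect 10 wood".
    verb, rest = base.split(" ", 1)
    return [f"{verb} {n} {rest}" for n in counts]

def compound_tasks(tasks):
    # Every unordered pair, e.g. "collect wood and make stone pickaxe".
    return [f"{a} and {b}" for a, b in combinations(tasks, 2)]

boring_pool = compound_tasks(interesting_tasks)
boring_pool += repetitive_tasks("collect wood", range(2, 11))  # count ranges vary per verb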
LP accurately tracks and samples tasks with high learning progress but is distracted by the many boring tasks (Figure <ref>). OMNI: Learning Progress + a Model of Interestingness. The agent learns 96 (CI: 93 – 99) out of 195 interesting and boring tasks and 11 (CI: 11 – 11) out of 15 interesting tasks, achieving an average task success rate of 0.29 (CI: 0.28 – 0.30) on interesting and boring tasks, and 0.59 (CI: 0.56 – 0.60) on interesting tasks only. Across all metrics, the differences between OMNI and LP at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test) (Table <ref>), showing that OMNI significantly outperforms LP (Figure <ref>). Since OMNI focuses on interesting tasks with high learning progress (Figure <ref>), the agent achieves higher average task success rates and learns more challenging tasks faster (Figure <ref>). Oracle: Learning Progress + an Oracle Model of Interestingness. The agent learns 101 (CI: 97 – 105) out of 195 interesting and boring tasks and 11 (CI: 11 – 11) out of 15 interesting tasks, achieving an average task success rate of 0.30 (CI: 0.29 – 0.31) on interesting and boring tasks, and 0.62 (CI: 0.61 – 0.63) on interesting tasks only. The differences between the oracle and OMNI at 25%, 50%, 75%, and 100% of the way through training are sometimes statistically significant (some p < 0.05, Mann Whitney U test) (Table <ref>), showing that OMNI does not always achieve comparable performance to the oracle (Figure <ref>). However, OMNI emulates similar task success rates (Figure <ref>) and task sample rates to the oracle (Figure <ref>). Having OMNI approach the performance benchmark set by the oracle is already an indicator of success. § USING SYNONYMS AS BORING TASKS In this setup, the tasks added that are boring are synonymous and repetitive tasks. In addition to the repetitive boring tasks (Appendix <ref>), synonymous tasks are those with different task representations (i.e., synonymous descriptions of tasks like “collect wood” and “gather wood”) but the same success conditions (Table <ref>). In this setting, the OMoI regards the 15 tasks shown in Figure <ref> and their synonyms as interesting. The repetitive tasks and their synonyms are also regarded as boring. Therefore, the OMoI identifies 90 interesting tasks, 540 boring tasks, and 1023 extremely challenging tasks. The same classification is utilized for subsequent analysis. In light of limited compute, experiments in this setting are run for 30 million time steps (vs. 100 million in the main experiments) and are repeated 10 times (same as the main experiments) with different random seeds. We compare the performance of agents trained with: (1) Uniform sampling, (2) Learning Progress (LP) only, (3) OMNI: Learning Progress with additional filtering by a Model of Interestingness (OMNI: LP + MoI), and (4) the oracle: Learning Progress with additional filtering by an Oracle Model of Interestingness (Oracle: LP + OMoI). The high-level summary from these experiments is that, when furnished with adequate information about the agent, OMNI delivers qualitatively similar results to when boring tasks are repetitive tasks only (Section <ref>), or compound and repetitive tasks (Section <ref>). Uniform sampling. The results with uniform sampling are poor (Figure <ref>). 
The agent learns 80 (CI: 77 – 82) out of 630 interesting and boring tasks and 20 (CI: 19 – 21) out of 90 interesting tasks, achieving an average task success rate of 0.080 (CI: 0.070 – 0.088) on interesting and boring tasks, and 0.14 (CI: 0.13 – 0.14) on interesting tasks only. Learning Progress Curriculum. The agent learns 302 (CI: 287 – 306) out of 630 interesting and boring tasks and 53 (CI: 52 – 54) out of 90 interesting tasks, achieving an average task success rate of 0.32 (CI: 0.31 – 0.33) on interesting and boring tasks, and 0.43 (CI: 0.42 – 0.44) on interesting tasks only. Across all metrics, the differences between LP and Uniform at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-2, Mann Whitney U test) (Table <ref>), showing that LP significantly outperforms uniform sampling (Figure <ref>). OMNI: Learning Progress + a Model of Interestingness. The agent learns 307 (CI: 296 – 318) out of 630 interesting and boring tasks and 66 (CI: 66 – 66) out of 90 interesting tasks, achieving an average task success rate of 0.31 (CI: 0.28 – 0.33) on interesting and boring tasks, and 0.56 (CI: 0.52 – 0.57) on interesting tasks only. On the interesting tasks only, the differences between OMNI and LP at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-2, Mann Whitney U test) (Table <ref>), showing that OMNI significantly outperforms LP (Figure <ref>). However, on all tasks (interesting and boring), the differences between OMNI and LP at 25%, 50%, 75%, and 100% of the way through training are not always statistically significant (not all p < 0.05, Mann Whitney U test) (Table <ref>), showing that OMNI does not outperform LP on all metrics (Figure <ref>). We further investigate this by comparing OMNI with the oracle. In the subsequent paragraphs, we introduce a minor modification to OMNI, which then significantly outperforms LP (as occurred in the previous experimental settings). Oracle: Learning Progress + an Oracle Model of Interestingness. The agent learns 389 (CI: 380 – 393) out of 630 interesting and boring tasks and 66 (CI: 66 – 67) out of 90 interesting tasks, achieving an average task success rate of 0.42 (CI: 0.41 – 0.43) on interesting and boring tasks, and 0.62 (CI: 0.61 – 0.62) on interesting tasks only. On all tasks (interesting and boring), the differences between the oracle and OMNI: LP + MoI at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test) (Table <ref>), showing that the oracle significantly outperforms OMNI: LP + MoI on the set of all tasks (Figure <ref>). This difference in performance can be attributed to the different interestingness notions held by the OMoI vs. the MoI. The current LM-based MoI in OMNI regards all synonymous tasks as boring, even if they are not repetitive tasks. This is because the LM-based MoI assumes that the RL agent already has a language prior and that sampling synonymous tasks would not enable the agent to learn more skills. In other words, the LM-based MoI incorrectly assumes that the RL agent knows how to do the task “gather wood” if it knows how to do the task “collect wood”, not understanding that the RL agent has not yet learned that these two tasks are actually the same thing. 
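To illustrate the point about the missing language prior, the toy computation below shows that under a bag-of-words task encoding a synonym pair looks no more similar than a genuinely different task. The vocabulary and task names are illustrative placeholders, not the actual task set.

import numpy as np

VOCAB = ["collect", "gather", "wood", "stone"]

def bow(task):
    words = set(task.split())
    return np.array([1.0 if w in words else 0.0 for w in VOCAB])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(bow("collect wood"), bow("gather wood")))    # 0.5: synonymous pair
print(cosine(bow("collect wood"), bow("collect stone")))  # 0.5: genuinely different task
# With a pre-trained natural language encoder, the synonymous pair would be expected
# to score much closer to 1.0 than the non-synonymous pair.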
Recall that the RL agent in our setup does not have a language prior yet, because the tasks are encoded as a bag-of-words instead of a using a pre-trained natural language encoder (which is likely to encode “gather wood” and “collect wood” closer in the embedding space than other non-synonymous tasks). OMNI: Learning Progress + an updated Model of Interestingness We update the prompt input to GPT-3, adding information that the agent currently lacks a language prior and perceives synonymous tasks as completely different (Figure <ref>). The MoI with updated prompt is labelled as MoI-updated. The MoI-updated now predicts that synonymous tasks are still interesting. The agent trained with OMNI: LP + MoI-updated learns 393 (CI: 376 – 397) out of 630 interesting and boring tasks and 66 (CI: 66 – 67) out of 90 interesting tasks, achieving an average task success rate of 0.43 (CI: 0.41 – 0.45) on interesting and boring tasks, and 0.62 (CI: 0.61 – 0.63) on interesting tasks only. Across all metrics, the differences in performance between OMNI: LP + MoI-updated and the oracle at 25%, 50%, 75%, and 100% of the way through training are not statistically significant (all p > 0.05, Mann Whitney U test) (Table <ref>). This shows that OMNI: LP + MoI-updated's performance is comparable to that of the oracle (Figure <ref>). OMNI: LP + MoI-updated and the oracle achieve similar task success rates for each task (Figure <ref>), and induce similar patterns in task sample rates (Figure <ref>). These experiments demonstrate that OMNI can be effectively utilized across a range of task settings when provided with sufficient information. Initially, the MoI was not aware that the RL agent lacked a language prior, which led it to categorize synonymous tasks as boring. However, by supplementing the prompt with additional information about the agent (i.e., its lack of a language prior), OMNI was able to recognize synonymous tasks as interesting, thereby accelerating the agent's learning. An alternative approach could involve providing the MoI with more information about the agent's performance, so that it can analyze the data, identify and adapt to the agent's inherent limitations (e.g., the absence of a language prior). LMs can potentially automatically analyze the task success rates and update their predictions of what is interesting without human intervention. Preliminary investigations with both GPT-3 and GPT-4 have shown promising results. These models generated plausible analyses of the differences in task success rates between synonymous tasks, and adjusted their predictions of what is interesting based on these analyses (Figures <ref> and <ref>). These observations imply that LMs may already have the capacity to analyze data autonomously and adapt their outputs accordingly. § INTEGRATING MORE ASPECTS OF INTERESTINGNESS INTO LMS For future research, we plan to explore the potential of leveraging LMs to address multiple aspects of interestingness, including learning progress. By examining these aspects individually or in a combined manner, we aim to develop more robust and adaptive models that can efficiently cater to a wide range of tasks and applications. Preliminary evidence suggests that LMs, such as GPT-4, possess the capacity to integrate multiple aspects of interestingness (Figure <ref>). 
Further investigation and experimentation in this direction could lead to the development of AI models that exhibit a more sophisticated approach to learning by effectively capturing and integrating various aspects of interestingness.
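As a purely illustrative companion to the prompt update described in the synonym experiments above, the sketch below shows how information about the agent could be appended to the MoI prompt. The wording is not the exact prompt used with GPT-3, and query_lm is a placeholder for whatever completion API is called.

BASE_PROMPT = (
    "You are helping choose which tasks are interesting for a reinforcement learning "
    "agent to practice next. Interesting tasks teach genuinely new skills; repetitive "
    "variants of already-mastered tasks are boring.\n"
)
AGENT_INFO = (
    "Note: the agent has no language prior. Task descriptions are encoded as bags of "
    "words, so synonymous descriptions (e.g., 'collect wood' vs. 'gather wood') are "
    "perceived by the agent as completely different tasks.\n"
)

def build_moi_prompt(candidate_tasks, include_agent_info=True):
    # Appending AGENT_INFO corresponds to the MoI-updated variant described above.
    prompt = BASE_PROMPT + (AGENT_INFO if include_agent_info else "")
    prompt += "Label each of the following tasks as interesting or boring:\n"
    prompt += "\n".join("- " + t for t in candidate_tasks)
    return prompt

# labels = query_lm(build_moi_prompt(["collect wood", "gather wood", "collect wood five times"]))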
Discourse Representation Structure Parsing for Chinese
Chunliu Wang, Xiao Zhang, Johan Bos
arXiv:2306.09725v1 [cs.CL], 16 June 2023, http://arxiv.org/abs/2306.09725v1
Previous work has predominantly focused on monolingual English semantic parsing. We instead explore the feasibility of Chinese semantic parsing in the absence of labeled data for Chinese meaning representations. We describe a pipeline for automatically collecting linearized Chinese meaning representation data for sequence-to-sequence neural networks. We further propose a test suite designed explicitly for Chinese semantic parsing, which provides fine-grained evaluation of parsing performance and allows us to study the difficulties of Chinese parsing. Our experimental results show that the difficulty of Chinese semantic parsing is mainly caused by adverbs. Realizing Chinese parsing through machine translation and an English parser yields slightly lower performance than training a model directly on Chinese data. § INTRODUCTION Semantic parsing is the task of transducing natural language text into semantic representations, which are expressed in logical forms underlying various grammar formalisms, such as abstract meaning representations (AMR), minimal recursion semantics (MRS), and Discourse Representation Theory (DRT). In this work, we explore the feasibility of parsing Chinese text into semantic representations based on Discourse Representation Structures (DRSs), meaning representations derived from DRT: recursive first-order logic representations comprising discourse referents (the entities introduced in the discourse) and the relations between them. Several neural parsers for DRS have been developed recently <cit.> and have reached remarkable performance, but they mostly focus on monolingual English or other languages written in the Latin alphabet. Meaning representations are considered to be language-neutral: texts with the same semantics but in different languages share the same meaning representation. The literature presents several examples of parsing multilingual text by training on monolingual English semantic representations <cit.>. Because labeled gold-standard multilingual meaning representation data is relatively scarce, multilingual text parsing relies on silver English meaning representation data. As long as the meanings are expressed in a language-neutral way, this is a valid approach. However, named entities usually are not, because they can (a) have different orthography across languages using the same alphabet (in particular for location names, e.g., Berlin, Berlijn, Berlino, Berlynas) or (b) be written with a completely different character set, as is the case for Chinese. Figure <ref> shows a (nearly) language-neutral meaning representation for a simple English sentence. For non-English Latin-alphabet languages, the named entities in the text are usually consistent with English, and the corresponding Discourse Representation Graph (DRG) would be identical across these languages <cit.>, as shown in Figure <ref>. However, it would be rather absurd to expect a semantic parser for Chinese to produce meaning representations (with interlingual WordNet synsets) where proper names are anchored in the Latin alphabet using English (or any other language's) orthography. We need to keep this important aspect in mind when evaluating semantic parsers for languages other than English.
However, for non-Latin alphabet languages, such as the widely used language of Chinese, is it feasible to use English meaning representation as the meaning representation of Chinese? Our objective is to investigate whether Chinese semantic parsing can achieve the same performance as English semantic parsing while using the same amount of data. We try to investigate whether it is necessary to develop a dedicated parser for Chinese, or whether it is possible to achieve a similar performance using an English parser by leveraging machine translation (MT) on Chinese. We provide inexpensively acquired silver-standard Chinese DRS data to implement our exploration: (1) We collect Chinese and English aligned texts from the Parallel Meaning Bank (PMB, ), which provides parallel multilingual corpora including corresponding English meaning representation expressed in DRSs. (2) We leverage GIZA++ <cit.> to align the word-segmented Chinese and English to obtain Chinese-English named entity alignment pairs, the resulting named entities are used to replace the named entities in our English semantic representation. (3) We train two monolingual parsers on the two languages separately, and then provide a set of fine-grained evaluation metrics to make better comparison between parsers. We aim to answer the following questions: * Can existing DRS parsing models achieve good results for Chinese? (RQ1) * What are the difficulties in semantic parsing for Chinese? (RQ2) * Is it feasible to use machine translation and an English parser to parse Chinese? How is it different from designing a special parser for Chinese? (RQ3) * How to conduct more fine-grained evaluation of experimental results and reduce the workload of manual evaluation? (RQ4) § BACKGROUND §.§ Discourse Representation Structure DRS, as a kind of formal meaning representation, can be used to represent the semantic meaning of sentences and discourse. For the wide coverage of linguistic phenomena at quantification, negation, reference resolution, comparatives, discourse relations, and presupposition, DRT and DRS possess stronger semantic representation power than AMR. A DRS comprises discourse referents and conditions. However, some variants of DRS formats have been introduced in recent years, the format we employ throughout our work being one of them. We use a simplified DRS, which can be called Discourse Representation Graph (DRG) or Simplified Box Notation (SBN;  ). It discards explicit discourse references and variables while maintaining the same expressive power, as shown in Figure <ref>. As introduced by <cit.>, DRS allows two kinds of representations: graph and sequential notation (Figure <ref>). There are five types of semantic information involved in DRS: concepts (, , , ), roles (, , , ), constants (, , , ), comparison operators (=, ≺, ∼, ) and discourse relations (, , , ), where concepts and roles are represented by WordNet synsets <cit.> and VerbNet thematic relations <cit.> respectively. §.§ DRS parsing DRS parsing was originally applied to English and has been continuously extended to other Latin languages. Initially, rule-based systems were predominantly utilized by early parsers for analyzing small English texts <cit.>. The first version of GMB <cit.> which provides English texts with DRS, is built on Boxer <cit.>. With the release of PMB <cit.> and the propose of the first shared tasks <cit.>, related research keeps growing, with a focus on deep learning models <cit.>. 
The target languages have also expanded to other languages: German, Italian, Dutch and Chinese <cit.>. Translation has been utilized in two manners when dealing with cross-lingual parsing: the first involves translating other languages into English and then employing an English parser, while the second involves translating English into other languages and training a parser specific to that language <cit.>. In this paper, we use the existing Chinese-English parallel corpus to design a specific parser for Chinese, and compare the performance of the parser with the first method. § DATA CREATION In previous work, for non-English parsing tasks, the semantic representation of English is usually directly used as the semantic representation of the target language, but most of these works focus on Latin languages <cit.>. For non-Latin languages such as Chinese, named entities are not language-neutral, as illustrated in the work of <cit.>, and are quite different from named entities in English texts. To design a more reasonable Chinese parser, we first focus on replacing the named entities in the English semantic representation with Chinese, so that the parser can parse out the Chinese named entities corresponding to the text content according to different texts. To achieve our goal, we use the data of PMB, the largest parallel corpus of DRS data available, as our experimental object. From the PMB, English-Chinese parallel texts and DRS data for English texts are collected. Based on that, we propose a pipeline to obtain Chinese DRS for Chinese text. Our pipeline has three steps: (1) using tokenizers tools to segment Chinese and English text data; (2) utilizing the English-Chinese alignment tool to obtain the alignment tokens between Chinese and English texts; (3) replacing named entities in English DRS with Chinese named entities. Figure <ref> shows our processing pipeline. §.§ Text Tokenizers Preprocessing data with a tokenizer is an important step in the pipeline because the alignment of Chinese and English texts needs to act on the data after tokenization. At the same time, since the quality of upstream results directly affects downstream performance, the quality of text segmentation also directly affects the correctness of Chinese and English text alignment. In this work, we use Moses <cit.> for English, which is advanced and widely used. It is a collection of complex normalization and segmentation logic that works very well for structured languages like English. For Chinese, we choose HanLP <cit.>, which is an efficient, user-friendly and extendable tokenizer. Different from a widely used Jieba tokenizer, HanLP is based on the CRF algorithm. It takes into account word frequency and context at the same time, and can better identify ambiguous words and unregistered words. §.§ English-Chinese Alignment In order to realize the replacement of named entities in English semantic representation with Chinese named entities, it is very important to obtain the correct alignment of Chinese and English texts, especially the alignment of named entities in the two texts. In order to quickly and effectively obtain the alignment data in Chinese and English, we choose the GIZA++ word aligning tool. GIZA++ is the most popular statistical alignment and MT toolkit <cit.>, which implements the lexical translation models of <cit.> (IBM Models), and the Hidden-Markov alignment Model <cit.>, trained using expectation-maximization (EM). 
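A minimal sketch of steps (2) and (3) of the pipeline is given below. It assumes the GIZA++ alignments have already been parsed into (English index, Chinese index) pairs; the token sequences, the alignment, and the DRS lines are illustrative examples rather than actual PMB data.

import re

def build_ne_dict(en_tokens, zh_tokens, alignments, en_ne_indices):
    # Map each aligned English named-entity token to its Chinese counterpart.
    ne_dict = {}
    for en_i, zh_i in alignments:
        if en_i in en_ne_indices:
            ne_dict.setdefault(en_tokens[en_i], zh_tokens[zh_i])
    return ne_dict

def replace_named_entities(drs_lines, ne_dict):
    # Named entities appear as quoted constants following a Name role in the sequential DRS.
    def swap(match):
        english = match.group(1)
        return 'Name "%s"' % ne_dict.get(english, english)  # unknown names are left unchanged
    return [re.sub(r'Name "([^"]+)"', swap, line) for line in drs_lines]

en = ["Tom", "lives", "in", "Boston"]
zh = ["汤姆", "住", "在", "波士顿"]
alignment = [(0, 0), (1, 1), (2, 2), (3, 3)]          # made-up one-to-one alignment
ne_dict = build_ne_dict(en, zh, alignment, en_ne_indices={0, 3})
drs = ['male.n.02 Name "Tom"', 'live.v.01 Agent -1 Location +1', 'city.n.01 Name "Boston"']
print(replace_named_entities(drs, ne_dict))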
GIZA++ is highly effective at aligning frequent words in a corpus, but error-prone for infrequent words. §.§ Replacing Named Entities The last step to obtain the Chinese semantic representation is to replace the named entities in the English DRS with Chinese named entities. First, the English named entities in DRS data can be easily obtained according to the edge types between two nodes. When the edge type is , the output nodes are named entities in the DRG. After processing the Chinese and English texts with the GIZA++ tool in the second step, we can obtain alignment tokens between Chinese and English. On this basis, a named entity alignment dictionary can be obtained, and then the English named entities in the DRS data can be replaced with Chinese named entities based on this dictionary. § METHODOLOGY §.§ Neural Models We adopt Recurrent Neural Networks (RNN) equipped with Long Short-Term Memory units (LSTM;  ) as our baseline models. Following the work of <cit.>, we use frozen mBERT <cit.> embeddings to initialize the encoder. An attention-based LSTM architecture is used for the decoder, where the attention memory is the concatenation of the attention vectors among all the input tokens. In addition, the copy mechanism <cit.> is added to the decoder, which can integrate the attention distribution into the final vocabulary distribution. The copy mechanism favors copying tokens from the source text into the target text instead of generating all target tokens only from the target vocabulary. §.§ Evaluation Given a document to the DRS parser, it will generate variable-free sequential notation DRS as shown in Figure <ref>(b). The evaluation tool for DRS parsing task was recently proposed by <cit.> and is based on the AMR standard evaluation tool Smatch <cit.>. By converting a sequential DRS into DRG, Penman notation format data <cit.> can be obtained, as shown in Figure <ref> (b), and then Smatch can be used to compute F-scores based on matching triples between system output and gold meanings. However, we note that the scores given by the above evaluation tool have two flaws: (1) the evaluation scores are too inflated, and it is difficult to detect the differences between different parsers. (2) the evaluation tool only gives an overall score without evaluating the different types of constituent elements in the DRS, it is difficult to quantitatively determine what is the difficulty of the parser in the parsing process. Based on that, we propose to compress evaluation scores to improve the above evaluation methods and further propose fine-grained evaluation metrics for different subtasks according to different types of components in DRS. §.§.§ Overall Evaluation Our improvement strategy is mainly aimed at the representation of the Penman format of DRG. We mainly improve on two points, one is WordNet synsets representation, and the other is constants representation. In the previous evaluation method, the WordNet synsets in Penman format are fine-grained during the evaluation process, and the WordNet synsets are divided into three parts (lemma, pos, number) according to their constituents. On this basis, even if the parser generates wrong concepts, such as and , the Smatch still obtains a similar inflated F1 score. To this end, we change the WordNet synsets in the Penman format to a coarse-grained representation to strictly evaluate WordNet synsets qualities generated by parsers, as shown in Figure <ref> (c). 
In addition, we have also modified the constant representation in Penmen format, such as the constant shown in the figure, because the variable c is added to the constant, making the triples in Penman format redundant, which also makes the F1-score higher to a certain extent. By omitting the c variable as shown in Figure <ref> (c), we further compress the F1-score. §.§.§ Fine-grained Evaluation To evaluate the quality of specific subtasks in DRS parsing, we imitate the fine-grained metrics for AMR parsing task <cit.> to DRS parsing. In order to make them compatible with DRS, we make some changes based on the data characteristics of DRS. Our fine-grained metrics consist of three categories in total: graph-level, node-level and edge-level. Each category includes more fine-grained evaluation metrics. All the metrics are proposed based on the semantic information types involved in DRS (see Section <ref>). In graph-level evaluation, , , and are used to represent the Smatch scores of the DRG in Penman format ignoring Roles, Discourse, Operators and Senses respectively. In theory, they are Smatch's coarse-grained scores, which are higher than the original Smatch scores. In node-level evaluation, we compute F-score on the list of parsed information types (such as roles, constants, and discourse relations) instead of using Smatch. Note that different from the metrics in the AMR parsing task, concepts in DRS are represented by WordNet synsets, so can be evaluated more finely by part-of-speech (, , and ). detects all discourse relation labels except NEGATION since it is more common and specific in DRS than other discourse relations labels, the metric is used for evaluation to detect NEGATION edge label alone. In addition, metric is added to evaluate the ratio of the generated concepts. In DRG, member represents the edge label connecting the BOX node and the concepts node, i.e., the dashed line as shown in Figure <ref> (a). For edge-level evaluation, we focus on calculating the F-score based on the number of matching triples in the parsed DRG and the gold DRG. For example, in edge-level is a metric that considers the relations between concepts nodes and named entities, which differs from the metric of in node-level, which only considers the concepts labeled with Name and ignores the accuracy of named entities themselves.  [Our evaluation suite is available at: <https://github.com/wangchunliu/SBN-evaluation-tool>.] § EXPERIMENTS §.§ Dataset We collect all Chinese-English text pairs in the PMB. According to the quality label of English DRS, we divide the data into gold data and silver data, and randomly split the test set and development set from the gold data. Since PMB data may contain duplicate data, before splitting, we first filter the duplicate data. Then we merge the remaining gold data and silver data as our training set, and get a total of 137,781 training instances, 1,000 development instances and 1,000 test instances, each instance contains English DRS data, corresponding English text, and Chinese text. [Our data and code are available at: <https://github.com/wangchunliu/Chinese-SBN-parsing>.] After splitting the data, we use the pipeline introduced in Section <ref> to process our Chinese and English texts to get the Chinese and English word alignment data, and then replace the named entities in the English DRS with Chinese. However, we noticed that not all replacements were successful. We classified the wrong replacement types into four types, as shown in Table <ref>. 
These errors are mainly caused by GIZA++ alignment errors when aligning Chinese and English text words. Among them, the fourth type of error is quite special. In our experiment, we directly ignore the location named entities used to refer to nationality and do not replace them with Chinese named entites. In order to reduce the work of manual correction and make the work reproducible, We only fix incorrect named entity replacements in the test set, where 26 of the 1000 test set instances require manual correction of named entities. §.§ Settings For tokenizers, we use Moses <cit.> and HanLP <cit.> on English and Chinese respectively. We observe that the HanLP tokenizer outperforms Jieba[https://github.com/fxsjy/jieba], a tokenizer widely used in Chinese, in segmenting text containing named entities. This is an important indicator for selecting a tokenizer, because getting the correct Chinese and English named entity pairs is our main goal. In addition, we observed that HanLP's segmentation results also outperformed Jieba's tokenizer on text containing traditional Chinese characters, while the Chinese data in PMB contains traditional Chinese characters. This is also one of the reasons for choosing the HanLP tokenizer. At the top of Table <ref>, we show the difference in name entities between the Jieba tokenizer and the HanLP tokenizer. In addition, we give an example of the impact of different sizes of training data on the alignment performance of GIZA++ at the bottom of Table <ref>, and the results show that it is almost impossible to achieve correct alignment using only gold data. All experiments are implemented based on OpenNMT <cit.>. For the vocabulary, we construct vocabularies from all words, the vocabulary sizes as shown in Table <ref>. The hyperparameters are set based on performance on the development set. We use SGD optimizer with the initial learning rate set to 1 and decay 0.8. In addition, we set the dropout to 0.5 at the decoder layer to avoid overfitting with batch size 32. §.§ Main Results Table <ref> shows the results obtained by the parsers with Smatch, which gives the overall performance for different parsers. The first parser (EN) is trained on the English dataset based on the model introduced in Section <ref>. The Smatch_1 result of our English parser is slightly lower than the results of <cit.>, which we believe is due to slightly different training, development and test set instances. The result of Smatch_2 is significantly lower than the result of Smatch_1, indicating that the F1-score has been significantly compressed and will not be too inflated (see Section 4). The Chinese parser (ZH) is trained on the data created by the pipeline introduced in Section <ref>. The results show that the performance of the Chinese parser is lower than the English parser in all overall evaluation metrics. ZH→EN_zh shows the performance by using the English parser on English text translated from Chinese text instead of training a dedicated model for Chinese text. The only unreasonable point is that the model will generate English named entities, which may not be recognized as the correct Chinese semantic representation. The smatch_1 scores and the smatch_2 scores show that the Chinese parser outperforms using the ZH→EN_zh approach. For the metrics and , the evaluation results have been significantly improved compared with Smatch_2. This shows that Concepts and Roles have a greater impact on evaluation results than Discourse and Operators. 
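For reference, the node-level metrics introduced in the evaluation section reduce to an F-score over the bags of items of one type extracted from the parsed and the gold DRS. A minimal sketch is shown below; the synset lists are illustrative placeholders and the extraction step is assumed to happen elsewhere.

from collections import Counter

def prf(parsed_items, gold_items):
    parsed, gold = Counter(parsed_items), Counter(gold_items)
    matched = sum((parsed & gold).values())            # multiset intersection
    p = matched / max(sum(parsed.values()), 1)
    r = matched / max(sum(gold.values()), 1)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

parsed_nouns = ["city.n.01", "man.n.01"]               # noun synsets produced by a parser (illustrative)
gold_nouns = ["city.n.01", "male.n.02"]                # noun synsets in the gold DRS
print(prf(parsed_nouns, gold_nouns))                   # (0.5, 0.5, 0.5)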
It is worth noting that the performance difference between the Chinese and English parsers is about five percentage points across all metrics, while the difference between the ZH and the ZH→EN_zh narrows at the graph-level metrics compared to Smatch_2 score. §.§ Fine-grained Results and Analysis To further explore the performance of parsers, we apply our proposed fine-grained evaluation metrics to the results of two parsers. Tabel <ref> shows the fine-grained evaluation performance of different component types based on DRG at node-level and edge-level. From the results, we observe that the metric gives completely opposite results at different evaluation levels. On the node-level, the metric in ZH parser scores the lowest, but on the edge-level, metric in ZH→EN_zh gives the lowest scores. This is reasonable and expected because the node-level metric only evaluates whether the parser can parse concepts to contain named entities, so the results of ZH→EN_zh parser should be similar to those of the English parser. However, the edge-level metric evaluates whether the generated named entities completely match the original text, and the ZH→EN_zh parser completely loses the Chinese named entity information. An important observation is that the metric has very low F1 scores on both the node-level and the edge-level for the Chinese parser. Using machine translation and an English parser to parse Chinese (ZH→EN_zh) will further degrade the performance of the metric . Based on the text data and parsed output, we find that discourse relations in Chinese are inconspicuous, and even disappears after being translated into English (see Table <ref> for examples). Table <ref> shows the scores of ZH parser are lower than those for ZH→EN_zh except for the adj category. This is an interesting finding, because the performance of other parts of speech in the ZH parser is worse than that of ZH→EN_zh, while adj is special. We observe that the expressions of adjectives in Chinese translated into English are diverse and may not match the original English text (see Table <ref> and Appendix <ref> for relevant examples). For the English parser, verbs are the most difficult words to parse, scoring significantly lower than other parts of speech. However, the difficulty of Chinese semantic parsing is mainly reflected in adv. In addition, the accuracy of ZH→EN_zh in parsing concepts of adv is significantly better than that of the ZH parser, but it is still the lowest results in four types of parts of speech for ZH→EN_zh. On the one hand, the corpus containing adverb data is smaller, which makes the training insufficient. On the other hand, the adverbs in Chinese are usually not obvious and diverse. For noun and verb, ZH has the worst performance, with the ZH→EN_zh method, the performance of noun and verb is slightly improved, but it is much worse than the EN parser. A typical reason is that the English text translated from Chinese may not be consistent with the original English text. We observe that the DRS sequences parsed using the translated text are overall shorter than those parsed using the original English text, some noun concepts are missing, and the verb concepts may be inconsistent with the reference DRS (see Appendix <ref> for examples). Our fine-grained results obtained by using machine translation and the English parser are not always worse than training a Chinese parser alone. For the metrics and , both methods have similar scores at both the node-level and the edge-level. 
However, when we compare the results of ZH→EN_zh with EN parser, we find that all the results of ZH→EN_zh are significantly lower than those of the EN parser. We found that tense information is usually lost in the process of English-Chinese translation, but almost no tense information is lost in the process of Chinese-English translation. This explains why the result of the Chinese parser operator is significantly lower than that of the English parser, while the result of ZH→EN_zh is the same as that of the ZH parser. For , we can observe something interesting. As the connector NEGATION in English DRss can also express universal quantification (using nesting of two negation operators) for words such as "every" and "always", this information is missing in the translation process, and as a result not picked up by the parser. For this metric, ZH→EN_zh even slightly outperforms the ZH parser, but they are both lower than the EN parser. On the one hand, a free translation may lead to a different ordering of semantic information. Although texts with the same meaning but realised with different word order have the same semantic graph, a parser based on sequence-to-sequence neural networks may get the wrong graph structure leading to a lower evaluation score of the evaluation metric. On the other hand, both evaluation metrics are affected by the correctness of , and in our results, the Chinese parser scored lower than the other two parsers for . § CONCLUSION Given an annotated meaning bank primarily designed for English, it is feasible to develop a semantic parser for Chinese by pairing the "English" meaning representation with Chinese translations, reaching good results. Most difficulties in Chinese parsing are caused by adverbs, while the diversity of Chinese verbs and adjectives also has a big impact on parsing performance. Using Machine Translation as an alternative to approach semantic parsing for Chinese yields slightly lower results. Our fine-grained graph evaluation gives better insight when comparing different parsing approaches. § ACKNOWLEDGMENTS This work was funded by the NWO-VICI grant “Lost in Translation—Found in Meaning” (288-89-003) and the China Scholarship Council (CSC). We thank the anonymous reviewers for detailed comments that improved this paper. We would also like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster. acl_natbib § RESULT PLOTS According to the fine-grained evaluation results, for both English and Chinese DRS parsing, relatively low f1 scores tend to appear in and . The performance of parser declined by approximately five percent after the named entity was converted to Chinese, especially the and , comparing EN with ZH. § OUTPUT DRS
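The appendix figures with example output DRSs are not reproduced here. As a simplified stand-in, the sketch below shows a toy variable-free sequence and one way it can be flattened into triples for Smatch-style matching; the notation, the relative-index convention, and the single-box handling are simplifying assumptions and may differ from the actual PMB/SBN release.

toy_sbn = """\
male.n.02    Name "Tom"
afraid.a.01  Experiencer -1 Stimulus +1
female.n.02  Name "Mary"
"""

def sbn_to_triples(sbn):
    lines = [line.split() for line in sbn.strip().splitlines()]
    triples = []
    for i, (concept, *rest) in enumerate(lines):
        triples.append(("b0", "member", "c%d" % i))        # every concept node belongs to a box
        triples.append(("c%d" % i, "instance", concept))
        for role, target in zip(rest[0::2], rest[1::2]):
            if target.lstrip("+-").isdigit() and target[0] in "+-":
                triples.append(("c%d" % i, role, "c%d" % (i + int(target))))  # relative index to another concept
            else:
                triples.append(("c%d" % i, role, target))                     # constant, e.g. a quoted name
    return triples

for triple in sbn_to_triples(toy_sbn):
    print(triple)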
Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment
Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan
arXiv:2306.03262v1 [cs.LG, cs.DL], 5 June 2023, http://arxiv.org/abs/2306.03262v1
We present the NeurIPS 2021 consistency experiment, a larger-scale variant of the 2014 NeurIPS experiment <cit.> in which 10% of conference submissions were reviewed by two independent committees to quantify the randomness in the review process. We observe that the two committees disagree on their accept/reject recommendations for 23% of the papers and that, consistent with the results from 2014, approximately half of the list of accepted papers would change if the review process were randomly rerun. Our analysis suggests that making the conference more selective would increase the arbitrariness of the process. Taken together with previous research, our results highlight the inherent difficulty of objectively measuring the quality of research, and suggest that authors should not be excessively discouraged by rejected work. § INTRODUCTION Across academic disciplines, peer review is used as a mechanism to vet the quality of research and identify interesting work. However, like other human judgments <cit.>, reviews are noisy, and studies across fields have shown that reviewers often disagree with one another <cit.>. Within the field of machine learning, Corinna Cortes and Neil Lawrence, the 2014 program chairs of the top-tier conference Neural Information Processing Systems (NeurIPS), ran an experiment in which 10% of NeurIPS submissions were reviewed by two independent program committees to quantify the randomness in the review process <cit.>, a type of noise audit <cit.>. They found that there was a high degree of inconsistency between reviewers. One particularly salient result implied that if the review process had been independently rerun with a different assignment of reviewers to papers, approximately half of the accepted papers would have been rejected. Since the time of that experiment, the impact and ubiquity of machine learning have only grown. The number of annual submissions to NeurIPS has increased more than fivefold, with more than 9,000 papers submitted in each of 2020, 2021, and 2022. This growth has required rapidly expanding the pool of reviewers, with upwards of 9,000 program committee members playing a role in the review process in a given year. While serving as NeurIPS program chairs in 2021, we wanted to measure the extent to which the consistency of reviewers' decisions has changed as the conference has grown. We therefore ran a variant of the 2014 consistency experiment during the 2021 review process. As in the original experiment, we duplicated 10% of submitted papers, assigned the two copies of each paper to two independent committees for review, and compared their recommendations. Echoing prior work, our results suggest that the review process contains a high level of noise and subjectivity. Consistent with the findings from 2014, we observed that about half of the list of accepted papers would have changed if we independently reran the review process. Our analyses suggest that paper outcomes would only grow more arbitrary if the conference were made more selective, but that the acceptance rate could be increased without significantly impacting how arbitrary decisions are.
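For clarity, the two headline quantities can be computed directly from the two committees' accept/reject decisions on the duplicated papers, as in the sketch below. The decision vectors here are randomly generated placeholders, not the actual experiment data.

import numpy as np

def disagreement_rate(decisions_a, decisions_b):
    a, b = np.asarray(decisions_a, dtype=bool), np.asarray(decisions_b, dtype=bool)
    return float(np.mean(a != b))

def accept_flip_rate(decisions_a, decisions_b):
    # Among papers accepted by committee A, the fraction rejected by committee B.
    a, b = np.asarray(decisions_a, dtype=bool), np.asarray(decisions_b, dtype=bool)
    return float(np.mean(~b[a]))

rng = np.random.default_rng(0)
a = rng.random(882) < 0.23          # placeholder decisions at roughly the observed acceptance rate
b = rng.random(882) < 0.23
print(disagreement_rate(a, b), accept_flip_rate(a, b))
# With independent placeholder decisions, both values land near the random baselines
# discussed later (about 0.35 and 0.77), not the observed 23% and 50.6%.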
Taken together with results from a complementary study that showed high levels of disagreement between authors and reviewers and even between pairs of co-authors <cit.>, these results indicate an inherent difficulty in objectively measuring the merits of research, raising important issues for the research community to grapple with. In this paper, we provide some background on the NeurIPS 2021 review process, detail the way in which the consistency experiment was implemented, and present an analysis of the results. We conclude with a discussion of the implications and limitations of the experiment. § BACKGROUND ON THE NEURIPS 2021 REVIEW PROCESS Before describing the details of the experiment, we give some background on the way the NeurIPS review process was run in 2021. The review process took place between May 28, 2021, when submissions were due, and September 28, 2021, when authors were notified of accept/reject decisions. For the first time, the entire review process was conducted on OpenReview, a flexible and customizable peer review platform. As in previous years, the review process was confidential; submissions under review were visible only to assigned program committee members. After the review process concluded, reviews for accepted papers were released publicly.[<https://openreview.net/group?id=NeurIPS.cc/2021/Conference>] Authors of rejected papers were given the opportunity to opt in to have their reviews publicly released, but they were kept private by default. The review process was double-blind, meaning that authors were required to anonymize their submissions and were not allowed to include any information that might reveal their identity to reviewers. One new feature added in 2021 was that authors were asked to include in their submission a checklist designed to encourage best practices for responsible machine learning research, including issues of reproducibility, transparency, research ethics, and societal impact <cit.>; in the interest of promoting more responsible research practices, a completed paper checklist for this report is included in Appendix <ref>. The program committee was made up of more than 9,000 program committee members playing different roles. Reviewers were responsible for reading and evaluating an assigned set of submissions and participating in discussions on each paper. Area chairs (ACs) were responsible for recommending reviewers for submissions, ensuring that all submissions received high-quality reviews, facilitating discussions among reviewers, writing meta-reviews, evaluating the quality of reviews, and making decision recommendations. Senior area chairs (SACs) each oversaw the work of a small number of ACs, making sure that the review process went smoothly. SACs were also responsible for helping ACs find expert reviewers, calibrating decisions across ACs, discussing borderline papers, and helping the program chairs make final decisions. Ethics reviewers provided additional reviews for submissions flagged for potential ethical concerns. Their comments were intended to inform reviewer deliberations. In total there were more than 8,000 reviewers, 708 ACs, 95 SACs, and 105 ethics reviewers, plus the 4 program chairs. Program committee members were not compensated for their service. The content of the review forms used by reviewers and ACs is included in Appendix <ref>. Initial reviews were released to authors on August 3, 2021, at which point 99.7% of submissions had at least three reviews available. 
Authors were given one week to submit a response to the reviews before the reviewer discussion period began on August 10. To minimize the chance of misunderstandings, there was also an opportunity for additional rolling discussion between authors and the program committee after the initial response period. If new reviews were added (including ethics reviews), authors had the opportunity to respond to those during the rolling discussion. In total, the conference received 9,122 submissions, of which 2,334 were accepted, for an acceptance rate of 25.6%. Out of 2,334 accepted papers, 55 papers were accepted as oral presentations, 260 as spotlights, and the remaining 2,019 as posters only. Decisions to accept papers as oral presentations or spotlights were made jointly between SACs and the program chairs. In addition to the experiment described in this paper, we ran a survey to understand authors' perceptions about the quality of their submitted papers as well as their perceptions of the peer-review process. Specifically, we asked submitting authors to report their predicted probability of acceptance for each of their papers, their perceived ranking of their own papers based on scientific contribution, and the change in their perception of their own papers after seeing the reviews. We found that authors overestimate the probability their papers will be accepted by roughly a factor of three, with a median prediction of about 70% when the acceptance rate was 25.8%. In cases where an author had one paper accepted and another rejected, authors ranked the rejected paper as higher quality about a third of the time. Surprisingly, when co-authors ranked their jointly authored papers, they also disagreed with each other about the relative merits of their paper about a third of the time. As discussed in Section <ref>, this suggests an inherent difficulty, or perhaps impossibility, of objectively quantifying the merits of a paper. The results of this experiment are explored in a separate report <cit.>. § METHODS During the assignment phase of the review process, we chose 10% of papers uniformly at random to duplicate. We'll refer to these as the duplicated papers. We assigned two ACs and twice the usual number of reviewers to these papers. With the help of the team at OpenReview, we then created a copy of each of these papers and split the ACs and reviewers at random between the two copies. We made sure that the two ACs were assigned to two different SACs so that no SAC handled both copies of the same paper. Any newly invited reviewer for one copy was automatically added as a conflict for the other copy. We'll refer to the SAC, AC, and reviewers assigned to the same copy as the copy's committee. We note that this implementation is slightly different from what was done in 2014. In 2014, the entire program committee was randomly split into two committees for the 10% of the papers in the experiment. One of the program committees was then marked as conflicted with the originals and the other was marked as conflicted with the copies of papers in the experiment. To allow for more flexibility in the assignment of reviewers, we instead effectively created a different split for each paper in the experiment. To mitigate any influence the experiment might have on the program committee's behavior, the duplicated papers' committees were not told about the experiment and were not aware the paper had been duplicated. 
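A minimal sketch of the per-paper committee split is given below; the names are placeholders, and the real assignment additionally respected conflicts of interest and the constraint that the two ACs report to different SACs.

import random

def split_committee(acs, reviewers, seed=0):
    # Randomly split two ACs and a double-sized reviewer pool between the two copies of a paper.
    assert len(acs) == 2 and len(reviewers) % 2 == 0
    rng = random.Random(seed)
    acs, reviewers = list(acs), list(reviewers)
    rng.shuffle(acs)
    rng.shuffle(reviewers)
    half = len(reviewers) // 2
    original = {"ac": acs[0], "reviewers": reviewers[:half]}
    duplicate = {"ac": acs[1], "reviewers": reviewers[half:]}
    return original, duplicate

original, duplicate = split_committee(["AC-1", "AC-2"], ["R-1", "R-2", "R-3", "R-4", "R-5", "R-6"])
print(original)
print(duplicate)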
The OpenReview team reused paper IDs of withdrawn abstract submissions for the duplicated papers to avoid revealing which papers were duplicated. The authors of duplicated papers were notified of the experiment right before initial reviews were released and instructed to respond to each set of reviews independently. They were also asked to keep the experiment confidential. The email that was sent to authors is included in Appendix <ref>. As in 2014, duplicated papers were accepted if at least one of the two copies was recommended for acceptance and no “fatal flaw” was found. This resulted in 92 accepted papers that would not have been accepted had we not run the experiment, increasing the overall conference acceptance rate from 24.6% to 25.6%. Four papers that were accepted by one committee (one as a spotlight, three as posters) were ultimately rejected due to what was considered a fatal flaw. In an additional two cases, the committees for the two papers disagreed about whether a flaw was “fatal.” In these cases, the papers were conditionally accepted with conditions determined jointly by the two committees; both were ultimately accepted. At the time initial reviews were released, 8,765 of the original 9,122 submitted papers were still under review, and 882 of these were duplicated papers. This set of 882 papers is what we consider in the analyses in Section <ref>. This means we do not include duplicated papers that were desk rejected for violations of the call for papers or withdrawn by the authors before initial reviews were released; reviewer scores and acceptance decisions were not available for these papers and the authors of these papers never learned that they were part of the experiment. We do include the 118 duplicated papers that were withdrawn by the authors after initial reviews were released since authors were more likely to withdraw a paper after seeing negative reviews. We note that the withdrawal rate after seeing initial reviews was 45% higher for papers not in the experiment compared with duplicated papers, which we suspect is because authors of duplicated papers had two shots at acceptance. This experiment was independently reviewed and approved by an Institutional Review Board (IRB) which determined the risks to participants were no greater than those experienced by participating in the normal review process. Since the data collected (i.e., submitted papers and their authors, assignments of reviewers to papers, content of the reviews, discussion comments, and so on) contains personally identifiable information, we agreed that only the 2021 program chairs, workflow manager, and the OpenReview staff would access this data. However, for all duplicated papers that were accepted (and any rejected duplicated papers that opted in to make their reviews public), both sets of reviews are publicly available on OpenReview. The conference call for papers[<https://neurips.cc/Conferences/2021/CallForPapers>] included a statement alerting authors that “as in past years, the program chairs will be measuring the quality and effectiveness of the review process via randomized controlled experiments.” § RESULTS Table <ref> summarizes the recommendations for the 882 duplicated papers still under review when initial reviews were released. We discuss several interpretations of these results below. 
Figure <ref> shows the correlation between average initial reviewer scores (i.e., overall scores or “ratings” at the time initial reviews were released) from the two committees assigned to each paper, and the same for average final reviewer scores. §.§ Inconsistent Decisions There are a few ways to think about the results. First, we can measure the fraction of inconsistent accept/reject recommendations—the fraction of duplicated papers that were accepted by only one of the two committees, grouping together oral presentations, spotlights, and posters. The number of papers with such inconsistent recommendations was 203 out of 882, or 23.0%. To put this number in context, we need a baseline. There were 206 papers accepted in the original set and 195 papers accepted in the duplicate set, for an average acceptance rate of 22.7%. If acceptance recommendations were made at random with a 0.227 chance of accepting each paper, we would expect the fraction of inconsistent recommendations to be 35.1%. While the fraction of inconsistent recommendations is closer to the random baseline than it is to 0, many of these papers could genuinely have gone either way. When ACs entered recommendations, they were asked to note whether they were sure or whether the paper could be bumped up or down. (See the full meta-review form in Appendix <ref>.) If we treat the pair “poster accept that can be bumped down” and “reject” as consistent, and do the same for the pair “reject that can be bumped up” and “poster accept that should not be bumped up to spotlight,” then the fraction of inconsistent recommendations drops to only 16.0%. We can see how the fraction of inconsistent recommendations would have changed if we shifted the acceptance threshold in different ways. For example, if the conference were so selective as to accept only those papers that were assigned orals and spotlights, the committees would have accepted 29 and 25 of the duplicated papers respectively, agreeing on only 3 papers. To visualize the impact of shifting the acceptance threshold, the small green dots in Figure <ref> show, for each acceptance rate x, the level of disagreement there would have been between the two committees if the papers with the highest x% of average final reviewer scores were accepted; the brown dots show the same for average initial reviewer scores (i.e., average scores at the time reviews were first released to authors). The gray curve in Figure <ref> extends the random baseline described above to other acceptance rates. Points on the gray curve correspond to the expected fraction of inconsistent recommendations if both committees were making acceptance recommendations at random with the corresponding acceptance probability. We have additionally included the following points on the plot: * Accepting only papers recommended as orals or spotlights leads to a 5.8% disagreement rate, but this is only an 8% relative improvement over the random baseline at the corresponding acceptance rate of 3.1%. * Bumping down all posters marked as candidates to bump down leads to a 20.3% disagreement rate, a 25% relative improvement over the random baseline at the acceptance rate of 16.0%. * As mentioned above, the recommendations made by the NeurIPS 2021 committees led to a disagreement rate of 23.0%, a 35% relative improvement over the random baseline at the acceptance rate of 22.7%. 
* Bumping up all rejects that were marked as candidates for being bumped up leads to a 29.4% disagreement rate, which is also a 35% relative improvement over the random baseline at the corresponding acceptance rate of 34.4%. For comparison, in 2014, of the 166 papers that were duplicated, the two committees disagreed on 43 (25.9%). The acceptance rate was 25% for duplicated papers—a bit higher than the overall 2014 acceptance rate. The random baseline for this acceptance rate is 37.5% disagreement, so this is a 31% relative improvement. Given the small sample size in the 2014 experiment, there is a fairly large confidence interval around these numbers, but the 2021 results appear to be in line with those from 2014, with no obvious increase or decrease in inconsistency. §.§ Accept Precision Another way of measuring disagreement is to look at the fraction of accepted papers that would have changed if we reran the review process. This is also the probability that a randomly chosen accepted paper would have been rejected if it were re-reviewed, previously discussed as the (complement to 1 of) “accept precision” or “arbitrariness” in the context of the 2014 experiment <cit.>. In 2014, 49.5% of the papers accepted by the first committee were rejected by the second (with a fairly wide confidence interval as the experiment included only 116 papers). This year, this number was 50.6% across the two committees (51.9% for one committee and 49.2% for the other). This is a 35% improvement over the random baseline, i.e., the fraction of accepted papers that would be rejected by the other committee if recommendations were made at random with the same acceptance rate. Figure <ref> shows how this number would change if different acceptance thresholds were used. Similar to what we observed for inconsistent recommendations, choosing a more selective acceptance rate leads to lower accept precision: * Accepting only papers recommended as orals or spotlights yields only an 8% relative improvement over the random baseline, with 88.8% of accepted papers rejected by the other committee. * Bumping down all posters marked as candidates for being bumped down leads to a 25% relative improvement over the random baseline, with 63.2% of accepted papers rejected by the other committee. * As noted above, the recommendations made by NeurIPS 2021 committees led to a 35% relative improvement over the random baseline, with 50.6% of accepted papers rejected by the other committee. * Bumping up all rejects that were marked as candidates for being bumped up also yields a 35% relative improvement over the random baseline, with 42.7% of accepted papers rejected by the other committee. We note that there is especially high arbitrariness on which papers are selected for orals and spotlights. These recommendations are made by the ACs, SACs, and program chairs taking into account not only raw reviewer scores but other factors, such as interest to a broad audience. Choosing papers to highlight with oral presentations and spotlights may be a more subjective task than simply assigning a score to a paper. We can also look at the probability that a randomly chosen rejected paper would have been accepted if it were re-reviewed. This number was 14.9% this year, compared to 17.5% in 2014. There were 24 duplicated papers where each committee was unanimous internally about the accept/reject decision yet the two committees disagreed with each other on what that decision should be. 
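As a quick check on the baselines quoted above: if two committees accept papers independently at rate p, the expected disagreement rate is 2p(1-p) and the expected fraction of one committee's accepted papers rejected by the other is 1-p. The snippet below reproduces the numbers used in the text (small differences are due to rounding).

p = 401 / (2 * 882)                          # average acceptance rate on duplicated papers, ~0.227
observed_disagreement = 203 / 882            # ~0.230
observed_accept_flip = 0.506                 # fraction of one committee's accepts rejected by the other

baseline_disagreement = 2 * p * (1 - p)      # expected disagreement under independent random accepts, ~0.351
baseline_accept_flip = 1 - p                 # expected accept flips under the same assumption, ~0.773

print(round(baseline_disagreement, 3), round(baseline_accept_flip, 3))
print(round(1 - observed_disagreement / baseline_disagreement, 3))  # ~0.345, i.e. the ~35% relative improvement
print(round(1 - observed_accept_flip / baseline_accept_flip, 3))    # ~0.345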
§.§ Ethics Flags As discussed in a NeurIPS blog post reflecting on the 2021 ethics review process <cit.>, one of the biggest challenges in implementing ethics review at NeurIPS is the uncertainty that reviewers face around which papers to flag. This uncertainty adds noise into the process and leads to inconsistency in which papers receive ethics reviews. Examining how reviewers flagged duplicated papers for ethics review emphasizes this point. There were 23 papers flagged by one committee and 22 papers flagged by the other, but the overlap between these two sets was only 3 papers—a little over 13%. §.§ Feedback from ACs and SACs After the review period ended and decisions were released, we gave ACs and SACs who were assigned to duplicated papers for which there had been disagreement between committees access to the reviews and discussion for the papers' other copies. We asked them to complete a brief survey to provide feedback on the experiment. Of the 203 papers that were recommended for acceptance by only one committee, we received feedback on 99. Unfortunately, we received feedback from both committees for only 18 papers, which limits the scope of our analysis. Based on this feedback, the vast majority of cases fell into one of three categories. First, there is what one AC called “noise on the decision frontier.” In such cases, there was no real disagreement, but one committee may have been feeling a bit more generous or more excited about the work and willing to overlook the paper's limitations. Indeed, 48% of multiple-choice responses were “This was a borderline paper that could have gone either way; there was no real disagreement between the committees.” Second, there were genuine disagreements about the value of the contribution or the severity of limitations. We saw a spectrum here ranging from basically borderline cases to a few more difficult cases in which expert reviewers disagreed. In some of these cases, there was also disagreement within committees. Third were cases in which one committee found a significant issue that the other did not. Such issues included, for example, close prior work, incorrect proofs, and methodological flaws. 45% of responses were “I still stand by our committee's decision,” while the remaining 7% were “I believe the other committee made the right decision.” We can only speculate about why this may be the case. Part of this could be that, once formed, opinions are hard to change. Part of it is that many of these papers are borderline, and different borderline papers just fundamentally appeal to different people. Part of it could also be selection bias; the ACs and SACs who took the time to respond to our survey may have been more diligent and involved during the review process as well, leading to better decisions. § DISCUSSION A common complaint among members of the machine learning research community is that the NeurIPS review process has become noisier as the conference has grown. In contrast, our analysis shows no evidence that the review process has become noisier with increasing scale. All results appear to be in line with those from the 2014 consistency experiment. On some level, we can view this as good news. Still, there is significant noise in the review process. This may be due in part to fundamental limits on how consistent reviews can be. In a recent analysis revisiting the results of the original 2014 consistency experiment, <cit.> argue that about 50% of the variance in recommendations can be attributed to subjective opinions. 
In a parallel and complementary study that we ran on NeurIPS 2021 authors' perceptions of the quality of their own work <cit.>, we found high levels of disagreement on paper quality between authors and reviewers, but also, perhaps more surprisingly, high disagreement between co-authors on the relative quality of their own co-authored submissions. This suggests a fundamental difficulty in objectively ranking papers. Subjectivity and noise are common in other forms of decision making as well <cit.> and while it may be possible to implement interventions to reduce the level of noise, we will never be able to avoid some level of randomness in the process. In light of this, we would encourage authors to avoid excessive discouragement from rejections. The NeurIPS 2014 program chairs found that more than a third of papers that were rejected from NeurIPS were ultimately published in other high-quality venues and we expect the same of many papers rejected from NeurIPS 2021. This does raise questions for program chairs around how selective conferences should be. The analyses in Sections <ref> and <ref> suggest that shifting NeurIPS to be significantly more selective may significantly increase the arbitrariness as measured by both the disagreement between committees and the fraction of accepted papers with a different decision upon re-review when compared against appropriate baselines. However, increasing the acceptance rate may not decrease the arbitrariness appreciably. We would encourage future program chairs to keep this in mind when considering whether to make the conference more selective. Some might take this argument to the extreme and suggest that we do away with accept/reject decisions completely, instead relying on readers to judge the quality of papers for themselves, or that NeurIPS moves to a model in which papers are reviewed for technical correctness but not more subjective characteristics like perceived importance of the work, similar to venues like PLOS One. While we find value in the peer review process, we invite the community to debate these options and continue to suggest other ways of improving the peer review process. We hope that the analyses presented here can contribute to this discussion. §.§ Limitations There are two caveats we would like to call out that may impact these results. First, although we asked authors of duplicated papers to respond to the two sets of reviews independently, there is evidence that some authors put significantly more effort into their responses for the copy that they felt was more likely to be accepted. In fact, some authors directly informed us that they were only going to spend the time to write a detailed response for the copy of their paper with the higher scores. Overall, there were 50 pairs of papers where authors only left comments on the copy with the higher average score; interestingly, only two of these papers were ultimately accepted. To dig into this more, we had 8,765 papers still under review at the time initial reviews were released. The acceptance rate for the 7,883 papers not in the experiment was 2036/7883 = 25.8%. (Note that the overall acceptance rate for the conference was 25.6%, but this overall rate also includes papers that were withdrawn or rejected for violations of the call for papers prior to initial reviews being released—here we are looking only at papers still under review at this point.) 
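The confidence-interval claim made in the next paragraph can be checked with a normal-approximation binomial interval. A quick sketch (ours), using the count above and the duplicated-paper counts given below:

import math

def binom_ci(k, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Papers not in the experiment vs. duplicated papers (401 acceptances out of
# 882*2 copies; see the next paragraph).
print(binom_ci(2036, 7883))   # roughly (0.249, 0.268)
print(binom_ci(401, 1764))    # roughly (0.208, 0.247); the intervals do not overlap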
As discussed above, the average acceptance rate for duplicated papers was 22.7% (206 papers recommended for acceptance in the original set and 195 papers recommended in the duplicate set, for 401 acceptances out of the total of 882*2 papers). The 95% binomial confidence intervals for the two observed rates do not overlap. Authors changing their behavior may account for this difference. This confounder may have somewhat skewed the results of the experiment. Second, when decisions shifted as part of the calibration process, ACs were often asked to edit their meta-reviews to move a paper from “poster” to “reject” or vice versa, or from “spotlight” to “poster” or vice versa. We observed several cases in which ACs made these changes without altering the field for whether a paper “can be bumped up” or “can be bumped down.” For example, there were nine cases in which it appears that a duplicated paper was initially marked “poster” and “can be bumped down” and later moved to “reject,” ending up marked as the nonsensical “reject” and “can be bumped down.” This could potentially introduce minor inaccuracies into our analysis of shifted thresholds. §.§ Potential Negative Impact on the Community Running this experiment required expanding the NeurIPS 2021 program committee's workload by approximately 10%, which required recruiting additional SACs, ACs, and reviewers. This increases the burden imposed on the machine learning community. We weighed this cost against the potential benefits in terms of future improvements to the review process that might result as a consequence of this study and hope that on the whole this study benefits the community. The experiment additionally led to 92 papers being accepted that would not have been otherwise, raising the overall conference acceptance rate from 24.6% to 25.6%. Given the level of noise and inconsistency in the review process that has been demonstrated by this experiment and other work, we do not believe that this meaningfully impacted the overall quality of the conference program. § ACKNOWLEDGMENTS We would like to thank the NeurIPS 2021 workflow manager Zhenyu (Sherry) Xue and the entire OpenReview team, especially Melisa Bok, for their support with the experiment. We would also like to thank everyone who provided feedback on the design of the experiments, especially Corinna Cortes and Neil Lawrence. Finally, we also thank the reviewers, ACs, and SACs who contributed their time to the review process, and all of the authors who submitted their research to NeurIPS. Figures <ref> and <ref> were generated using the XKCD functionality in matplotlib. § NEURIPS PAPER CHECKLIST While this report is not a traditional machine learning paper, we have applied the NeurIPS paper checklist to promote responsible research practices. We include our responses below. * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? * Did you describe the limitations of your work? See Section <ref>. * Did you discuss any potential negative societal impacts of your work? See Section <ref>. * Have you read the ethics review guidelines and ensured that your paper conforms to them? * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments...
* Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? As discussed in Section <ref>, since the data collected included personally identifiable information, to protect authors' privacy, only the 2021 NeurIPS program chairs and workflow manager and OpenReview staff are permitted to access this data. However, for all duplicated papers that were accepted (and any rejected duplicated papers that opted in), both sets of reviews are publicly available on OpenReview. * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? We did not include error bars in Figure <ref> because there was not a straight-forward way to derive meaningful error bars here. * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? The compute resources required to run our experiments were not significant. * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you cite the creators? * Did you mention the license of the assets? * Did you include any new assets either in the supplemental material or as a URL? * Did you discuss whether and how consent was obtained from people whose data you're using/curating? The IRB review determined that explicit consent was not required, but we were asked to notify authors that experiments would be run in the call for papers; see Section <ref>. * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? See Section <ref>. * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? See Sections <ref> and <ref> in the Appendix. * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? See Section <ref>. * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? As noted in Section <ref>, program committee members were not compensated for their involvement, nor were authors. § REVIEW FORMS The review form used by reviewers contained the following fields (starred fields mandatory): * Summary^* Briefly summarize the paper and its contributions. * Main Review^* Provide a full review of the submission, including its originality, quality, clarity, and significance. See <https://neurips.cc/Conferences/2021/Reviewer-Guidelines> for guidance on questions to address in your review, and faq for how to incorporate Markdown and LaTeX into your review. * Limitations And Societal Impact^* Have the authors adequately addressed the limitations and potential negative societal impact of their work? If not, please include constructive suggestions for improvement. * Ethical Concerns If there are ethical issues with this paper, please describe them and the extent to which they have been acknowledged or addressed by the authors. See <https://neurips.cc/public/EthicsGuidelines> for ethics guidelines. * Needs Ethics Review^* Should this paper be sent for ethics review? 
Options: * Yes * No * Ethics Review Area If you flagged this paper for ethics review, what area of expertise would it be most useful for the ethics reviewer to have? Please click all that apply. Options: * Discrimination / Bias / Fairness Concerns * Inadequate Data and Algorithm Evaluation * Inappropriate Potential Applications & Impact (e.g., human rights concerns) * Privacy and Security (e.g., consent) * Legal Compliance (e.g., GDPR, copyright, terms of use) * Research Integrity Issues (e.g., plagiarism) * Responsible Research Practice (e.g., IRB, documentation, research ethics) * I don’t know * Time Spent Reviewing^* How much time did you spend reviewing this paper (in hours)? * Rating^* Please provide an “overall score” for this submission. Options: * 10: Top 5% of accepted NeurIPS papers, seminal paper * 9: Top 15% of accepted NeurIPS papers, strong accept * 8: Top 50% of accepted NeurIPS papers, clear accept * 7: Good paper, accept * 6: Marginally above the acceptance threshold * 5: Marginally below the acceptance threshold * 4: Ok but not good enough - rejection * 3: Clear rejection * 2: Strong rejection * 1: Trivial or wrong * Confidence^* Please provide a “confidence score” for your assessment of this submission. Options: * 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. * 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. * 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. * 2: You are willing to defend your assessment, but it is quite likely that you did not understand central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. * 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. * Code of Conduct^* (Checkbox) While performing my duties as a reviewer (including writing reviews and participating in discussions), I have and will continue to abide by the NeurIPS code of conduct. The meta-review form used by ACs contained the following fields (starred fields mandatory): * Recommendation^* Please recommend a decision for this submission. Options: * Accept (Oral) * Accept (Spotlight) * Accept (Poster) * Reject * Confidence^* Please qualify your recommendation. (Your response will not be shared with authors or reviewers.) Options: * You are absolutely certain. * This decision can be bumped up (e.g., you are recommending to accept this paper as a poster but it would also make a fine spotlight). * This decision can be bumped down (e.g., you have a preference for accepting this paper as a poster but you wouldn’t mind if it is rejected). * You are not certain what the right decision should be. This is a submission you wish to discuss further with the SAC. * Metareview^* Please provide a meta-review for this submission. Your meta-review should explain your decision to the authors. Your comments should augment the reviews, and explain how the reviews, author response, and discussion were used to arrive at your decision. 
If you want to make a decision that is not clearly supported by the reviews, perhaps because the reviewers did not come to a consensus, please justify your decision appropriately. * Review History^* Have you previously reviewed or area chaired (a version of) this work for another archival venue? (This information is only requested in order to collect aggregate statistics for NeurIPS 2021. Your response will not be shared with authors or reviewers, nor will it affect the submission's chance of acceptance.) * Consider For An Award (Checkbox) Yes, this paper should be seriously considered for an award. § DISCLOSURE TO AUTHORS The following is the text of the email that was sent to authors whose papers were duplicated before initial reviews were released. Subject: NeurIPS reviews: you are part of an experiment, please read this email in full Dear firstname, Preliminary NeurIPS reviews will be released tomorrow and the author response period will begin. When this happens, you will receive two independent sets of reviews for your paper paper title. This email explains why this will happen and how you should respond to these reviews. We ask that you do not discuss the information in this email with anyone outside of the co-authors on your affected submission(s). NeurIPS has a long history of experimentation. For many years, Program Chairs have run randomized controlled experiments to measure the quality of the review process and the effectiveness of proposed alternatives. In 2014, NeurIPS ran an experiment in which 10% of submissions were reviewed by two independent program committees to quantify the randomness in the review process. This year we are repeating a variant of this experiment to see how the quality of the review process has changed over time. Your paper is among the 10% randomly chosen to receive two independent sets of reviews. Below we answer some questions you may have as you prepare for the author response period. Q: Can I discuss the experiment with colleagues or post about it on social media? A: No. The experiment will not be publicly announced until the review process is over. At this time, we are informing only authors of papers in the replicated 10% so that you know how to respond to reviews. We ask that you please keep the experiment and the fact that you have received two sets of reviews confidential. Q: How should I respond to my reviews? A: You should respond to each set of reviews independently. The reviewers, area chairs, and senior area chairs assigned to each copy of your paper do not know that there is more than one copy of the paper under review. Q: What about the rolling discussion? A: The same holds for the rolling discussion between reviewers and authors that will follow the initial author response period. Respond to the two discussions independently. Q: How will a final decision be made about my paper? A: If both sets of reviewers make the same accept/reject recommendation, this recommendation will be followed. If a single set recommends acceptance, the paper will be accepted as long as the other set does not identify a fatal flaw (e.g., an error in a major result that cannot easily be fixed). The Program Chairs will make the final call in such cases. Q: Will both sets of reviews be publicly released if my paper is accepted or if I opt in to have my rejected paper be public? A: Yes, both sets of reviews will be released (along with your responses to the reviews and any follow up discussion with the program committee assigned to your paper). 
Q: Who should I contact if I have questions? A: Please contact the Program Chairs directly at [email protected]. Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jenn Wortman Vaughan NeurIPS 2021 Program Chairs
http://arxiv.org/abs/2306.12181v1
20230621112441
Feature Interactions Reveal Linguistic Structure in Language Models
[ "Jaap Jumelet", "Willem Zuidema" ]
cs.CL
[ "cs.CL" ]
We study feature interactions in the context of feature attribution methods for post-hoc interpretability. In interpretability research, getting to grips with feature interactions is increasingly recognised as an important challenge, because interacting features are key to the success of neural networks. Feature interactions allow a model to build up hierarchical representations for its input, and might provide an ideal starting point for the investigation into linguistic structure in language models. However, uncovering the exact role that these interactions play is also difficult, and a diverse range of interaction attribution methods has been proposed. In this paper, we focus on the question which of these methods most faithfully reflects the inner workings of the target models. We work out a grey box methodology, in which we train models to perfection on a formal language classification task, using PCFGs. We show that under specific configurations, some methods are indeed able to uncover the grammatical rules acquired by a model. Based on these findings we extend our evaluation to a case study on language models, providing novel insights into the linguistic structure that these models have acquired.[All code and data is available here: <https://github.com/jumelet/fidam-eval>] § INTRODUCTION Feature attribution methods (FAMs) are a popular family of tools for explaining the behaviour of deep learning models, by explaining a prediction in terms of contributions of individual features <cit.>. There are many such methods proposed, and mathematical results (such as axiomatic approaches based on game theory) and theoretical frameworks (such as <cit.>'s `Explaining by Removing') are starting to offer a good understanding of how different methods relate to one another. However, there are also some important shortcomings. Perhaps most importantly, popular FAMs mostly ignore the existence of interactions between the effects of features on the prediction.
This is problematic, because Feature Interactions are widely seen as a major factor in the success of neural networks <cit.>. This is all the more important in domains such as language and music processing, because feature interactions allow neural networks to model hierarchical representations of their input, which is considered a key design feature of language and music. To address these shortcomings, there is now an emerging literature on feature interaction detection and attribution methods (FIDAMs) that explain model predictions in terms of interacting features <cit.>. However, assessing the faithfulness of FIDAMs is even more challenging than assessing the faithfulness of feature attribution methods more generally <cit.>. In this paper, we present a systematic framework to characterise FIDAMs, and derive several new FIDAMs based on that framework. We then proceed with creating an evaluation pipeline that measures a FIDAM's ability to recover the structural rules that we have good evidence play an important role in the target model's performance (Figure <ref>). We first test this on a set of small-scale formal language tasks, which provide stronger faithfulness guarantees. Finally, we present a case study of a large language model on the CoLA task for linguistic acceptability. We find that the performance of FIDAMs is very variable, and that the performance on the small-scale formal language tasks may not be predictive of the performance of methods on the large-scale natural language task. This is an illustration of what we call the Attribution Generalisation problem. We argue that this problem remains a key open problem in the study of explanation methods in general. However, there currently is no work that successfully brings these different feature interaction detection and attribution methods (FIDAMs) under one framework, clarifies their interrelations, and assesses their faithfulness. In this paper, we aim at defining exactly this: a framework for characterising and evaluating FIDAMs. Ideally, the framework should meet the following three criteria: (i) providing a number of axes along which existing FIDAMs can be characterised and new FIDAMs can be generated; (ii) providing a way to assess the faithfulness of each FIDAM; (iii) providing a way to generalise from evaluation results on simple models to the desired knowledge on the performance of these methods on interesting models. § RELATED WORK: ASSESSING FAITHFULNESS In this section we discuss related work on assessing the faithfulness of feature attribution methods (FAMs). A model explanation ideally provides better insights into model behaviour. However, it is important that an explanation is faithful to the reasoning of the model, and not merely plausible to a researcher. Unfortunately, attribution models can yield vastly different outcomes <cit.>. Defining a notion of faithfulness itself is an ongoing debate, and it has been argued that we should not be aiming for a binary notion, but a graded one instead <cit.>. To this end, various methodologies have been proposed to evaluate the faithfulness of explanation methods. One research direction introduces metrics to evaluate faithfulness by quantifying the impact of features that were deemed to contribute the most by an attribution method. <cit.> does this by retraining a model on data from which the most contributing features have been removed.
<cit.> provide a more direct measure, by quantifying changes in model predictions when only a subset of the most contributing features is fed to the model. <cit.> build on this notion, introducing a range of diagnostic metrics that capture various aspects of explanation quality including faithfulness, human rationale agreement, and explanation consistency. <cit.> ensure and evaluate faithfulness by only allowing a model access to the set of features that were deemed important by the explanation method, which has also been shown to improve model robustness <cit.>. Another line of work modifies the training data in such a way that we obtain guarantees of certain features the model must be paying attention to when making a prediction: e.g. by shuffling test data such that only part of the input resembles the statistics from the train set <cit.>, or by explicitly adding exploitable heuristics in the train set <cit.>. These two approaches could be characterised as grey box models: we adapt the data in such a way that we gain a degree of confidence about which cues the model must be relying on, without having a full understanding of the model's internal reasoning. A glass box model, on the other hand, is a model whose behaviour is fully understood: it's not derived by training a model on a task, but hand-crafted. <cit.> utilises such models to evaluate FAMs on formal language tasks, providing more robust guarantees on model behaviour. Our own approach is related to the first line of research, making use of grey box models. Instead of evaluating FAMs, we evaluate FIDAMs, which provide more comprehensive insights into model reasoning. Deployment of such methods within NLP has been fairly limited, and as such evaluating their faithfulness in a language context has been an underexplored research topic. § A FRAMEWORK FOR CHARACTERISING FIDAMS Feature attribution methods typically decompose a model prediction into a sum of feature contributions <cit.>. A large contribution then indicates that this feature played an important role in a model's prediction. Although feature attributions can provide meaningful insights into the inner model dynamics, they paint a fairly limited picture of the model behaviour. Most importantly, interactions between features are lumped together, making it impossible to discern whether a large contribution of a feature stemmed from that feature alone, or from its interaction with neighbouring features. To address this, multiple methods have been proposed that decompose a model prediction into a sum of feature interactions, based on similar mathematical formalism as those of feature attributions. Notation A neural network is represented as a single function f. The input to f is denoted as 𝐱, which consists of N input features. A partial input 𝐱_S only consists of input features S⊆ N. A value function v(𝐱_S) quantifies the model output on the partial input 𝐱_S. Padding the missing features in 𝐱_S with replacement features 𝐱_∖ S' is denoted as 𝐱_S ∪𝐱_∖ S'. The attribution value of feature i is denoted as ϕ_i, and the interaction effect of a set of features ℐ is denoted as Γ_ℐ. Attribution Dimensions Attribution methods can generally be characterised along two dimensions <cit.>: 1) how the method deals with feature removal, and 2) how the impact of removing a feature is quantified. FIDAMs are built on the same principles as FAMs, and can be categorised along the same two dimensions.
By discerning these two dimensions we can separately evaluate their impact on the faithfulness of the attribution method. Furthermore, we can combine feature removal procedures with influence quantification methods in order to obtain novel attribution methods, an observation that has also been made in the context of FIDAMs by <cit.>, who, concurrent to our work, provide a general framework for characterising FIDAMs. §.§ Feature Removal It is not straightforward to define the absence of a feature to a model's input. The main goal here is to replace the removed feature with a neutral baseline that adequately represents the absence of the feature. Methods often make use of a neutral input feature, the static baseline 𝐱', such as a zero-valued embedding or a pad token: v(𝐱_S) = f(𝐱_S ∪𝐱'_∖ S) This may, however, lead to input that lies outside of the original input distribution <cit.>. The reason why this is problematic is that the model may behave erratically on such modified input, posing issues to the faithfulness of the explanation. Instead of using a static baseline, we can also opt to use a baseline that is sampled from a background distribution <cit.>. There exist two approaches to this procedure <cit.>. The observational conditional expectation samples the baseline features from a distribution that is conditioned on the set of features that are still present in the input <cit.>: v(𝐱_S) = 𝔼_𝐱'_∖ S[f(𝐱_S ∪𝐱'_∖ S) | 𝐱_S] The interventional conditional expectation drops the conditional, and samples the baseline features from an independent distribution: v(𝐱_S) = 𝔼_𝐱'_∖ S[f(𝐱_S ∪𝐱'_∖ S)] There exist two motivations for the latter approach: <cit.> drop the conditional expectation for computational reasons, allowing them to approximate the observational conditional expectation. <cit.> provide a perspective derived from causality theory, stating that the intervention of removing a feature should break the dependence between the baseline and remaining features, and hence conditioning on these features is fundamentally wrong. The previous two methods sample baseline values for individual missing features, but we can also compute the expectation over the range of possible baselines. This yields the technique of expected explanations <cit.>, in which attributions with different static baselines are averaged out over a background distribution D: ϕ_i = 𝔼_𝐱'∼ D[ϕ_i(𝐱;𝐱')] §.§ Quantifying Feature Influence The simplest method of quantifying the influence of a feature is expressed as the output difference after ablating the feature: ϕ_i = v(𝐱) - v(𝐱_∖ i) Note that this formulation can be combined with any of the feature removal methods: e.g. Occlusion <cit.> combines this influence method with a static baseline (Eq. <ref>), whereas <cit.> combines it with the observational conditional expectation, employing BERT as the conditional distribution. A more involved method leverages a technique from the field of game theory, called the Shapley value <cit.>. Shapley values were originally introduced in the domain of cooperative games, in which players can form coalitions to change the outcome of the game. This setup can be transferred directly to machine learning models, in which features now take up the role of the players. A Shapley value expresses the contribution of a feature as the marginal gain of including that feature in the input, averaged over all possible coalitions of features. § FIDAMS We now address a series of interaction methods that we use in our own experiments.
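As a concrete reference point for the methods below, here is a minimal sketch (ours, not the implementation used in this paper) of the removal-and-influence machinery just described, for a black-box model f over N features and a static baseline; the helper names are our own:

import numpy as np

def value(f, x, keep, baseline):
    """v(x_S): evaluate f with all features outside `keep` replaced by the baseline."""
    x_s = baseline.copy()
    x_s[keep] = x[keep]
    return f(x_s)

def occlusion(f, x, baseline):
    """phi_i = v(x) - v(x without i): the simplest influence measure (Occlusion)."""
    n = len(x)
    full = f(x)
    return np.array([
        full - value(f, x, [j for j in range(n) if j != i], baseline)
        for i in range(n)
    ])

Expected explanations correspond to averaging occlusion(f, x, b) over baselines b drawn from a background distribution, and the interaction methods below replace the single-feature ablation with group or higher-order variants.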
Group Ablation The feature influence principle of Equation <ref> can straightforwardly be extended to groups of features. In our experiments we will focus on pairwise interactions, but any kind of feature subset can be used here. Γ_i,j = v(𝐱) - v(𝐱_∖ ij) Archipelago Explaining model behaviour in terms of pairwise interactions will already yield a better portrayal of its internal behaviour than `flat' attributions, but it neglects the interactions that occur within larger groups of features. Archipelago <cit.> splits up the feature interaction procedure into two phases: first an interaction detection method is performed that clusters features into interaction sets, and afterwards interaction scores are assigned to these sets as a whole. Interaction detection is based on measuring the non-additive effect of pairs of features. The interaction effect that is assigned to an interaction set ℐ is expressed as follows, with respect to a static baseline 𝐱': Γ_ℐ = f(𝐱_ℐ∪𝐱_∖ℐ') - f(𝐱') Note that Archipelago expresses the interaction effect inversely compared to the Group Ablation procedure: instead of measuring the impact of removing a group of features, we now measure the impact of solely keeping this group in the input. Shapley(-Taylor) Interaction Index Both of the previous methods base interaction effects on direct output differences. We can modify the formulation of the Shapley value to yield interaction effects. This modification was originally introduced in the field of game theory, called the Shapley Interaction Index <cit.>. Instead of computing the marginal gain that is achieved by a single feature, we now compute the marginal gain of groups of features. The Shapley-Taylor Interaction Index <cit.> is an extension of SII, satisfying additional theoretical properties. Hessian Analogous to utilising the gradient for feature attributions, we can employ the second-order derivative to quantify interactions between features, which is captured by the Hessian matrix. <cit.> and <cit.> consider an interaction between two variables to exist when the effect of one variable on the response depends on values of the other variable, which can be expressed in terms of the second-order partial derivative: Γ_i,j = [∂^2f(𝐱)/∂ x_i∂ x_j]^2 A common approach when using the gradient of a model as a proxy for feature importance is to multiply it with the input embeddings <cit.>: in our experiments we consider an analogous method to the Hessian that we call Hessian × Input. Integrated Hessians Directly using the Hessian as an explanation method is prone to the same caveats as using the gradient: the interaction signal may vanish due to saturation. Integrated Hessians <cit.> address this issue by integrating over the Hessian manifold along a path between the input and a baseline. This is achieved by applying the method of Integrated Gradients <cit.> to itself. An IH interaction between features i and j can hence be interpreted as the contribution of i to the contribution of j to the model's prediction. The path integral between input and baseline is approximated via a Riemann sum interpolation. Other Methods The methods explained thus far have all been incorporated in our experimental pipeline. The scope of our work focuses mainly on pairwise interactions, but methods that extract higher-order interactions have been proposed as well <cit.>. Comparing such methods to linguistic structure is an exciting avenue that we leave open to future work.
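Continuing the sketch above (it reuses value and numpy from there), the two most direct pairwise scores can be written down in a few lines; the Hessian entry uses a central finite difference as a stand-in for automatic differentiation, and in practice would be taken with respect to continuous input embeddings:

def group_ablation_matrix(f, x, baseline):
    """Gamma_ij = v(x) - v(x without {i, j}) for all feature pairs."""
    n = len(x)
    full = f(x)
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            keep = [k for k in range(n) if k not in (i, j)]
            gamma[i, j] = full - value(f, x, keep, baseline)
    return gamma

def hessian_interaction_matrix(f, x, eps=1e-3):
    """Gamma_ij = (d^2 f / dx_i dx_j)^2, approximated with central differences."""
    n = len(x)
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = 0.0
            for si in (+1, -1):
                for sj in (+1, -1):
                    xp = np.asarray(x, dtype=float).copy()
                    xp[i] += si * eps
                    xp[j] += sj * eps
                    d += si * sj * f(xp)
            gamma[i, j] = (d / (4 * eps ** 2)) ** 2
    return gamma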
Other interaction methods that were not considered include two methods that preceded Archipelago: Neural Interaction Detection <cit.> and MAHE <cit.>. The feature attribution method Contextual Decomposition <cit.> has been extended to extract interactions as well <cit.>, but these methods place the constraint that only contiguous groups of features can interact. Integrated Directional Gradients <cit.>, an extension of Integrated Gradients to capture group attributions, could be adapted to our framework, but we leave this open for future work. § EVALUATING FIDAMS The final component of our framework is a methodology for evaluating the faithfulness of FIDAMs. To lay a robust foundation for such work, we propose to evaluate a range of interaction methods and baselines on smaller deep learning models (using LSTM and Transformer architectures) that have been trained to recognise formal languages, based on a probabilistic context-free grammar (PCFG). Our models are trained on a binary language classification task, in which a model needs to learn to discern between well-formed strings and minimally corrupted counterparts. Models are trained to perfection (100% accuracy) on both train and test set. To obtain perfect performance, a model must rely solely on the grammatical rules that underlie the language, without resorting to spurious heuristics, because only these results allow completely solving the task. This way, due to the controlled nature of the task, we obtain a high degree of confidence about the model's behaviour. The goal of our experimental approach is to recover the structure of the language based on the trained model itself. This is achieved by the FIDAMs outlined in <ref>. We aim to uncover whether a structural dependency between two features results in a high interaction effect. Since our models have been trained to perfection, this allows us to employ our setup as a way of measuring the faithfulness of a FIDAM. A method that assigns a high interaction effect to features that contain a dependency in the original grammar is able to provide a faithful reflection of a model's understanding of the task. By testing a wide range of FIDAMs and baselines we can uncover which configuration yields the most faithful explanations. A graphical overview of our approach is depicted in Figure <ref>. Task The binary language classification task is set up by generating positive examples D^+, based on some PCFG, and negative examples D^-, derived from minimally corrupting the positive examples. We split the union of these two sets into a random train/test split of 80/20%. We train our models with a default cross-entropy loss, using the AdamW optimiser <cit.>, a learning rate of 0.01, and a batch size of 48. Models Our pipeline permits the use of any kind of neural model architecture, in our experiments we considered both LSTMs <cit.> and Transformers <cit.>. In our experiments we report the results of the LSTM model, but we observed similar results for Transformers: due to the black-box approach of our explanation procedure the architecture itself is not of great importance. The models are deliberately small: we use an embedding size that is equal to the number of symbols in the language it is trained on, a hidden state size of 20, and a single layer. This results in models that provide a compute-friendly test bed for evaluating the FIDAMs. Evaluation We focus on pairwise interactions: interactions between individual pairs of features. 
A FIDAM that extracts pairwise interactions for an input sequence 𝐱∈ℝ^N returns a matrix of interaction effects Γ∈ℝ^N× N. Since our goal is to uncover whether structural dependencies result in high interaction effects, we approach the evaluation of the interaction matrix as a retrieval task. By aggregating and normalising the rank of each interaction of interest we can quantify the performance of a FIDAM. We call this metric the Average Relative Rank (ARR): ARR(Γ, ℐ) = 1/|ℐ| ∑_i,j∈ℐ R(Γ_i)_j/(N-1) where ℐ denotes the set of interaction pairs of interest and R(Γ_i) denotes the rank of each interaction between feature i and the other features in input 𝐱 (the lowest interaction is ranked 0, and the highest interaction is ranked N-1). We aggregate these scores over an evaluation set to obtain a general performance score of the FIDAM. A graphical overview of this procedure is provided in Figure <ref>. Baselines We consider a range of baselines in our experiments, based on the procedures explained in <ref>. For the static baselines we consider a zero-valued baseline (𝐱'=0), and a baseline that utilises a fixed mapping T based on the original input symbols (𝐱'=T(𝐱)). Expected attributions are marginalised over samples from the distribution of well-formed strings D^+ and corrupted strings D^-. The interventional conditional expectation (Eq. <ref>) is computed with a corpus-wide unigram distribution (P(x_i)), a unigram distribution that is conditioned on the sentence position (P(x_i|i)), and as a joint distribution over the missing features (P(𝐱'_∖ S)), which we sample from the training corpus. The observational conditional expectation (Eq. <ref>) is computed based on the original corpus data.[Due to the small scale of the PCFGs considered here we can generate the complete language up to a certain length, and sample from strings that have feature overlap with the features that are still present in the partial input. For more complex tasks an auxiliary LM can be used instead.] § EXPERIMENTS ON FORMAL LANGUAGES We apply the evaluation procedure of <ref> to two formal languages: the Identity Rule language and the Dyck-2 language. In the appendix (<ref>) we also present results on a palindrome language. §.§ Identity Rule The first language we consider is a regular language consisting of strings in which the first two symbols are identical, followed by a random sequence of symbols. The language is formed by the following grammar: S → x x A x∈{a,b,c} A → x A | ϵ x∈{a,b,c} The only interaction of interest here is between the first two symbols; all subsequent symbols are irrelevant for the prediction. An ARR score of 1.0 then indicates that for all corpus items the interaction between the first two items was the strongest out of all interactions. We use a corpus size of 1.000, a maximum sequence length of 20, with 3 different input symbols. Corrupted strings are derived by altering one of the first two symbols (e.g. aabcb → cabcb). Results The results for an LSTM that was trained on the language are shown in Table <ref>. Due to the simplicity of the language and for brevity we only report results on three baselines. A static zero-valued baseline provides imperfect interactions for all methods. The Hessian, which does not depend on any baseline, performs better than all other methods here. When sampling the baseline, however, multiple methods perfectly retrieve the interaction between the first two symbols for all corpus items.
Interestingly, Group Ablation and IH benefit from sampling from the distribution of well-formed items, whereas Archipelago performs best when sampling from the distribution of corrupted items. §.§ Dyck-2 The Dyck language is the language of well-nested brackets, and is a popular testbed for research on formal languages. It is a context-free language with center embedding clauses, requiring a model to keep track of a memory stack while processing a string. Earlier work on Dyck languages has shown that a wide range of neural model architectures can learn the grammar, including LSTMs <cit.>, memory augmented RNNs <cit.>, Transformers <cit.>, and handcrafted RNNs <cit.>. We consider the Dyck-2 language, consisting of two types of brackets. The language is formed by the following grammar: S → [ S ] | ( S ) | S S | ϵ We use a corpus size of 15.000, a maximum sequence length of 20, and a maximum branching depth of 4. We use the same branching probabilities as <cit.>, which results in a uniform probability of 0.25 for each rule. Corrupted strings are derived by flipping a single bracket to any other bracket. For the baseline mapping T(𝐱), we map a bracket to the other bracket type, i.e. `(' ↔ `[' and `)' ↔ `]'. This results in a baseline that is of the same structure as the original input, but without feature overlap. Results We report the results for this language in Table <ref>, computed over all our baselines for an LSTM. The zero-valued baseline again turns out to be a mediocre baseline: for none of the methods this results in a high ARR score. The method that performs best is the fixed mapping T(𝐱). For Group Ablation, SII, and STII this results in a perfect ARR; for IH it is the best performing baseline. It is encouraging that a baseline exists that results in perfect ARR scores, but this mapping depends strongly on the nature of the Dyck task itself. It is, for example, unclear how this static mapping would transfer to the natural language domain. Ideally, a more general solution makes no strong assumptions about the baseline itself. The three other baseline types in Table <ref> may provide such a solution, as these only depend on the access to the original training data. Out of these, the observational baseline performs best: for the SII and STII methods this baseline performs nearly on par with the static mapping. Obtaining this conditional distribution is challenging for more complex tasks, and it can be seen here that the interventional baseline with a joint distribution over the missing features performs well too. § A NATURAL LANGUAGE CASE STUDY: COLA As a case study on a larger scale natural language task, we apply our methodology to language models fine-tuned on the CoLA task <cit.>. CoLA is part of the GLUE Benchmark <cit.>, and is defined as a binary classification task of determining the linguistic acceptability of a single input sentence. The task consists of linguistically valid sentences, and sentences that contain either a syntactic, semantic, or morphological violation. A model that performs well on this task must have a thorough grasp of grammatical structure, and as such it provides a useful test bed for our FIDAM evaluation procedure. In the previous experiments there was a degree of certainty about the structure that must be encoded by the model. In the natural language domain, however, we do not have such certainty, and should therefore be careful of making strong claims about faithfulness. 
Furthermore, natural language is highly multi-faceted and cannot be captured by a single hierarchical structure that covers all these facets. Nonetheless, we consider it valuable to test our setup on a natural domain in order to see if interesting differences between FIDAMs arise, and whether particular facets of language such as syntactic dependency structure can be extracted. §.§ Experimental Setup For our experiment we consider the RoBERTa-base model <cit.> which obtains a Matthews Correlation Coefficient score of 69.70 on the in-domain validation split. We filter out sentences that contain words that are split into multiple subwords by the tokenizer, since this leads to issues with aligning the interactions of multiple subwords to the dependency graph that is used for evaluation. Furthermore, we limit sentences to a max length of 14 in order to allow the STII and SII methods to be computed exactly without approximations. This resulted in a subset of around 60% of the original in-domain validation split that we will use in our experiment. We evaluate the FIDAM scores on the dependency parse tree of the sentence, which we obtain with the spaCy parser <cit.>. The ARR score is computed based on the interaction of each token with its parent token. We omit the interaction of the token that has the root node as its parent. An example of this procedure can be found in Appendix <ref>. Do note that our evaluation procedure is one of many possibilities: we make the assumption that a token should interact strongly with its parent, but other interactions are likely to play a role within the model as well. We leave a more detailed investigation into using different types of linguistic structure open for future work. We again consider the FIDAMs of Group Ablation, STII/SII, and Integrated Hessians. We leave out Archipelago, since its procedure of assigning features to a single interaction set is not feasible with our setup in which multiple child tokens might be interacting with the same parent token. Due to computational constraints we were unable to compute the full Hessian matrix of the language model, whose computation scales quadratically in the number of input neurons <cit.>. For the static baselines we again consider the zero-valued baseline, as well as the <mask> token. The interventional baselines are obtained by computing simple count-based distributions over a sample of 100.000 sentences from the Google Books corpus. The distributions are based on the tokenization of the model's tokenizer, and allow for computationally efficient sampling. We leave the incorporation of an observational baseline for future work, where an auxiliary masked LM might provide a useful conditional probability distribution. §.§ Results The results for the experiment are shown in Table <ref>. As expected, due to reasons outlined at the start of this section, none of the methods reaches ARR scores that are close to 1. Nonetheless, it is encouraging to see that various method/baseline combinations attain ARR scores that are far above chance level, indicating that there exists a strong degree of alignment between feature interactions and dependency structure. Contrary to the Dyck results, using a zero-valued baseline yields some of the highest ARR scores, which indicates that within RoBERTa's embedding space this baseline represents a better neutral value.
A closer inspection of these results shows that the ARR scores are strongly negatively correlated with sentence length: for Group Ablation with a <mask> baseline, for example, we obtain a Spearman correlation of -0.38 (p<<0.001, regression plot in Appendix <ref>). This is not surprising: as the sentence length increases, the chance of a token's largest interaction being with its parent decreases. Another correlation of interest is between the ARR score and the model's prediction of a sentence's acceptability. A high correlation would indicate that the FIDAM's alignment with dependency structure is indicative of a model's performance. For this we obtain a Spearman correlation of 0.14 (p=0.036): a relatively weak result that indicates that the structure our FIDAM extracted is only partly driving the model's comprehension of the sentence structure. § DISCUSSION & CONCLUSIONS In this paper, we have presented a framework for characterising FIDAMs and evaluating their faithfulness. For the characterisation we set out two dimensions, feature removal and feature influence, along which existing FIDAMs can be characterised, by extending the `Explaining by Removing' framework of <cit.> to also apply to FIDAMs. This allows us to place each of the known FIDAMs in a two-dimensional grid, and to define novel variants of these models. As such, many of the methods that we incorporated in our experiments are novel FIDAMs, such as combining Archipelago with expected explanations and STII with an observational baseline. To assess the faithfulness of FIDAMs, we made use of formal language theory and `grey box models'. We use formal grammars to generate multiple datasets, each with known feature interactions, and train deep learning models to perfection on those datasets. Using FIDAMs, we can then extract the learned feature interactions based on the model itself, and compare these interactions to the dependencies in the original grammar. We demonstrate that only specific combinations of FIDAMs and baselines are able to retrieve the correct interactions, while methods such as Archipelago and Integrated Hessians consistently fail to do so. Finally, we tested our methodology on a natural language case study using a model fine-tuned on the CoLA task for linguistic acceptability. Our results on the formal language tasks either did not turn out to be predictive of this experiment or, alternatively, the results were predictive but the LMs made less use of dependency graph information than we might have expected. This illustrates the challenge of the Attribution Generalisation problem, and the open question remains how we can transfer faithfulness guarantees from a synthetic, controlled context to the domain of natural language and LLMs. We do show, however, that under certain configurations feature interactions align to some degree with the (syntactic) dependency structure of a sentence. This paves the way for revealing linguistic structure in a more direct way than, for instance, can be achieved with Structural Probes <cit.>. Investigating whether different methods and baseline configurations are able to retrieve different aspects of structure is an exciting next step that we look forward to exploring in more detail. This could be examined, for instance, through the lens of contrastive explanations <cit.>, a procedure that demonstrates that different baselines can reveal different aspects of linguistic structure.
Furthermore, investigating the role that attention plays in modelling interactions could be a fruitful line of work, for instance by incorporating context mixing methods into our pipeline, such as Value Zeroing <cit.> and ALTI <cit.>. § LIMITATIONS Our work has only considered pairwise interactions, but linguistic structure can also manifest through higher-order interactions. We show that our results on small-scale formal languages are different from our results on a natural language task. It would be premature to conclude that small-scale, synthetic tasks cannot be predictive of behaviour on more complex tasks, and a more detailed investigation into the properties of the task that play a role is a viable next step. Some of the FIDAMs we considered, most notably SII and STII, are intractable for larger inputs (scaling O(2^n)), and a necessary step in applying these methods to larger models is to construct better approximation procedures, e.g. by adapting SHAP to SII as has been done before for tabular data by <cit.>. More generally, although we believe our probabilistic formal language setup provides an important step forward, solving the Attribution Generalization problem – i.e., showing that results for small setups generalize to very large models – remains a key open problem. § PALINDROMES One additional language we investigated is the context-free language of palindromes. In order to process a palindrome, a model needs to keep track of the dependency between each token in the first half of the string and its counterpart in the second half. Palindromes can contain a special symbol in the middle of a string to demarcate the two string halves, making it less ambiguous for the model at which point it should track whether the palindrome is well-formed. In our experiments, however, we found our models to perform well on both forms of palindromes. Furthermore, following <cit.>, we use a homomorphic mapping h for the second half of the string, allowing the model to use separate embeddings for symbols occurring in the first and second half of a string: S → x S h(x) | ϵ x∈{a,b,c,⋯} We use a corpus size of 5.000, 10 different input symbols, and a maximum sequence length of 18. For the fixed baseline mapping T(𝐱) we map a symbol onto another random symbol, preserving the grammaticality of the palindrome (e.g. abBA → cdDC). Results The results for this language, trained with an LSTM, are shown in Figure <ref>. Again, the zero-valued baseline performs poorly, with most methods scoring ARRs even below chance level. The fixed baseline mapping again performs well for Group Ablation, SII, and STII, although it is not the best performing baseline this time. These three FIDAMs obtain perfect performance when using the expected baselines over a distribution of well-formed palindromes, which also holds for the interventional baseline with a joint distribution over the missing features. This is in contrast to the Dyck results, where the observational baseline resulted in better ARR scores for all three of these methods. § ARR EXAMPLE An example of a sentence with a high ARR (0.93), for the Group Ablation method with a <mask> baseline: [figure] [figure] § CORRELATION COLA ARR AND SENTENCE LENGTH Correlation between sentence length and ARR, shown here for Group Ablation with a <mask> baseline. Spearman's ρ = -0.38 (p<<0.001): [figure]
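To make the per-sentence ARR that underlies the two appendices above concrete, the following sketch (our reconstruction, not the released code) scores an interaction matrix against spaCy's dependency parse; the exact handling of root attachment is a detail we hedge on and simply skip the root token here:

import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

def arr_against_dependencies(sentence, gamma):
    """Average Relative Rank of each token's interaction with its dependency parent.

    `gamma` is an N x N interaction matrix aligned with the tokens of `sentence`
    (one token per feature; the CoLA experiments filter out multi-subword tokens).
    """
    doc = nlp(sentence)
    n = len(doc)
    assert gamma.shape == (n, n)
    rel_ranks = []
    for tok in doc:
        if tok.head == tok:                 # skip the root token
            continue
        row = gamma[tok.i]
        ranks = row.argsort().argsort()     # lowest interaction -> rank 0
        rel_ranks.append(ranks[tok.head.i] / (n - 1))
    return float(np.mean(rel_ranks))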
http://arxiv.org/abs/2306.04011v1
20230606205044
Direct Observation of Landau Levels in Silicon Photonic Crystals
[ "Maria Barsukova", "Fabien Grisé", "Zeyu Zhang", "Sachin Vaidya", "Jonathan Guglielmon", "Michael I. Weinstein", "Li He", "Bo Zhen", "Randall McEntaffer", "Mikael C. Rechtsman" ]
physics.optics
[ "physics.optics", "cond-mat.mes-hall" ]
Direct Observation of Landau Levels in Silicon Photonic Crystals

We experimentally observe photonic Landau levels that arise due to a strain-induced pseudomagnetic field in a silicon photonic crystal slab. The Landau levels are dispersive (i.e., they are not flat bands) due to the distortion of the unit cell by the strain. We employ an additional strain which induces a pseudoelectric potential to flatten them.

When electrons are confined to a two-dimensional plane and are subjected to an out-of-plane magnetic field, they move in circular cyclotron orbits as a result of the Lorentz force. In the quantum domain, this cyclotron motion is quantized, and as a consequence, the electrons' energy spectrum splits into discrete, highly degenerate states called Landau levels. The integer and fractional quantum Hall effects <cit.> arise as a direct result; in the fractional case, it is the high degeneracy of Landau levels (i.e., that they are flat bands) that gives rise to effectively strong electron-electron interactions and leads to the fractionalization of charge. In free space, photons do not respond to external magnetic fields because they do not carry charge; yet, when propagating in magneto-optical materials, they may respond indirectly as a result of the material's magnetic response. However, this response is weak at optical frequencies. In 2012, an approach was put forward for emulating magnetic behavior in photonic systems by inhomogeneously straining a photonic lattice <cit.>. This implementation was based on an idea proposed for electrons in graphene, where a strain pattern imposed on the lattice would introduce an effective gauge field at the Dirac point, causing electrons to behave as though there were a strong field present, even in the absence of a real magnetic field <cit.>. The effect was later demonstrated by directly observing Landau levels in graphene bubbles, where a strain corresponding to an enormous `pseudomagnetic' field of 300 T was imposed <cit.>. Since the original photonic experiment, Landau levels were also proposed and observed in exciton-polariton condensates <cit.> and in mechanical systems <cit.>. Moreover, there have been a number of theoretical proposals for how Landau levels may be used in the context of photonics that are intrinsically distinct from the electronic case <cit.>.
Here, we directly observe Landau levels in two-dimensional silicon photonic crystal slabs in the nanophotonic domain. Moreover, we go beyond purely pseudomagnetic effects and demonstrate that strains corresponding to pseudoelectric fields act to flatten the Landau levels that inherit dispersion from the form of the pseudomagnetic strain. There are several key differences and advantages of pseudomagnetism in photonic crystals compared to previous realizations of photonic pseudomagnetism. First, photonic crystals have been demonstrated to enhance light-matter interaction via cavity modes and flat bands <cit.>. This enhancement is generated as a result of the lattice. In contrast, for systems composed of individual, isolated guiding or resonant elements (as in Refs. <cit.>), lattice effects are not leveraged because strong enhancement would occur even in a single site. Second, besides having unit cells that are an order of magnitude smaller, photonic crystals can in practice have much larger system sizes compared to previous realizations (millions compared to hundreds of unit cells), and can be realized with smaller loss in the silicon platform. Since Landau level degeneracy scales with system size and the linewidth increases with loss, photonic crystals allow for increased degeneracy and significantly improved spectral resolution of the levels. Further, since photonic crystals do not have an associated tight-binding theory, the original theoretical framework relating strain to pseudomagnetism is not directly applicable, rendering a new understanding necessary; the appropriate effective Hamiltonians for strain-dependent emergent parameters for two-dimensional photonic crystals were derived in our previous theoretical work <cit.>, and are extended to the slab geometry here (2D slab embedded in 3D space). Our establishment of a new analytical method of understanding and describing aperiodicity in photonic crystals (i.e., using pseudomagnetic fields) will be useful in their optimization for many different functions; this has traditionally been approached by using direct numerical optimization <cit.>. Our starting point is a photonic crystal structure consisting of rounded triangular air holes in a silicon slab <cit.> that rests on a silica substrate. The holes form an underlying honeycomb pattern with C_6v-symmetry. As a result, this lattice hosts Dirac points at the 𝐊 and 𝐊^' points in the Brillouin zone <cit.>. As these Dirac points lie below the light line of vacuum, they are not detectable via free-space excitation. To allow radiative coupling from outside the slab, we introduce a small period-doubling perturbation by changing the size of some of the holes (more details can be found in Supplementary Information Section 2). This makes the unit cell of the lattice rectangular, and the band structure is folded such that the Dirac cone resides along the k_x axis and lies above the light line of vacuum. A scanning electron microscope image of the structure is shown in Fig. <ref>(a); the period-doubled unit cell is shaded in purple. We numerically compute the band structure (in the transverse electric polarization) using the guided mode expansion method as implemented in the open-source software package Legume <cit.>. Fig. <ref>(b) shows the linearly-dispersing transverse electric (TE)-like bands that exhibit a Dirac point at 𝐊, with a frequency ω_ D=0.318 [2π ca^-1]. Here a is the lattice constant of the underlying hexagonal lattice structure and c is the speed of light. 
The period-doubling procedure very slightly changes the Dirac frequency (see Supplementary Information Section 2). Next, we introduce a strain pattern in our structure by deforming the lattice as shown in Fig. <ref>(c). Here, the term strain refers not to a strain induced by a physically applied stress, but to the deformation of the dielectric pattern that is directly etched into the silicon. The specific strain pattern is achieved by mapping every point (x, y,z) to (x, y + a (κ x)^2,z), where κ is the strength of the strain. This deformation breaks periodicity in the x direction, but retains periodicity along the y direction. The spatial scale separation ensured by the assumption of small and slowly varying strain, κ a ≪ 1, allows us to develop a multiple scale <cit.> variant of degenerate perturbation theory to expand the eigenstates and eigenvalues of the strained system. The eigenstates are, to leading order in κ, a slow spatial modulation of the degenerate Bloch modes associated with the Dirac point of the unstrained (κ=0) structure. The resulting effective Hamiltonian, which incorporates the strain, is given by ℋ_ eff=E_ Dσ_0+v_ D[(-i∂/∂ x)σ_1+(-i∂/∂ y+4ab_*κ^2/v_ D x)σ_2], where E_ D=(ω_ D/c)^2, σ_0, σ_1 and σ_2 are Pauli matrices, and v_ D=0.915a^-1 and b_*=0.606a^-2 are two parameters calculated from the modes of the unstrained structure at energy E_ D. A detailed derivation can be found in Supplementary Information section 3, where explicit expressions for b_* and v_ D in terms of the eigenstates of the periodic structure are displayed. We note that the effective Hamiltonian displayed in Eq. (1) is derived directly from the continuum theory of photonic crystals; this is fundamentally different from the previous work <cit.> based on the tight-binding approximation. Our approach extends the the methods of Ref. <cit.> to the three-dimensional setting of the slab geometry, where vectorial effects play a role. Equation (<ref>) corresponds to a two-dimensional Dirac Hamiltonian describing massless spin-1/2 relativistic particles under a constant (pseudo)magnetic field pointing in the out-of-plane direction, where the magnetic field has a strength of B_ eff = 4ab_*κ^2/v_ D and is described by a vector potential in the Landau gauge. The discrete energies that are eigenvalues of the Hamiltonian in Eq. (<ref>) for an electron are known as Landau levels. The energy eigenvalue of the n^th level is proportional to √(|n|), where n is an integer. Analogously, for our photonic crystal slabs, the frequency eigenvalues of the electromagnetic eigenmodes are, to first order in κ, proportional to √(|n|) and can be expressed as ω_n = ω_ D±(c^2v_ D/√(2)ω_ D)√(B_ eff| n|), where n is an integer. To corroborate our analytical results given in Eq. (<ref>), we also perform numerical simulations of the strained structure using the guided-mode expansion method. The strain is implemented in a dielectric profile which spans 199 period-doubled unit cells in the x-direction. Due to the preservation of lattice periodicity along the y-direction, k_y is conserved and the frequencies of the bands can be plotted as functions of k_y, as shown in Fig. <ref>(d). Here, we observe the splitting of the spectrum near the Dirac point into discrete Landau levels due to the strain-induced pseudomagnetic field, where the spacing of these levels is proportional to √(|n|) for a fixed value of κ. 
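To make the predicted scaling concrete, the following short Python sketch evaluates the Landau-level frequency formula omega_n above with the parameter values quoted in the text (v_D = 0.915 a^-1, b_* = 0.606 a^-2, omega_D = 0.318 [2 pi c a^-1]). The unit bookkeeping (lengths in units of a, c = 1) is our own, and kappa = 0.0632 a^-1 is the strain strength quoted later in the text; small numerical differences from the reported values come from the rounded parameters.

import numpy as np

# Parameters quoted in the text (lengths in units of the lattice constant a,
# frequencies written as omega*a/(2*pi*c), i.e. in units of [2*pi*c/a]).
v_D = 0.915      # Dirac velocity parameter, units of a^-1
b_star = 0.606   # strain-coupling parameter, units of a^-2
f_D = 0.318      # Dirac frequency in units of [2*pi*c/a]

def landau_levels(kappa, n_max=4):
    """Predicted Landau-level frequencies (units of 2*pi*c/a) for strain strength kappa
    (units of a^-1), using omega_n = omega_D +/- (c^2 v_D / (sqrt(2) omega_D)) * sqrt(B_eff |n|)
    with B_eff = 4 a b_* kappa^2 / v_D."""
    u_D = 2.0 * np.pi * f_D                     # omega_D / c in units of a^-1
    B_eff = 4.0 * b_star * kappa**2 / v_D       # pseudomagnetic field, units of a^-2 (a = 1)
    n = np.arange(-n_max, n_max + 1)
    shift = (v_D / (np.sqrt(2.0) * u_D)) * np.sqrt(B_eff * np.abs(n))
    u_n = u_D + np.sign(n) * shift              # omega_n / c, units of a^-1
    return n, u_n / (2.0 * np.pi)               # back to [2*pi*c/a]

n, f_n = landau_levels(kappa=0.0632)
for ni, fi in zip(n, f_n):
    print(f"n = {ni:+d}:  f = {fi:.4f} [2*pi*c/a]")

# Normalized spacing |omega_n - omega_D| / (kappa sqrt(|n|)); the text quotes ~0.0823 [2*pi*c],
# and the small deviation here comes from the rounded parameter values above.
spacing = abs(f_n[n == 1][0] - f_D) / 0.0632
print(f"normalized spacing ~ {spacing:.4f} [2*pi*c]")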
To demonstrate the formation of Landau levels in such a system, we use electron-beam lithography to fabricate both the periodic and the strained patterns in a silicon slab (ε = 12.11) on top of a silica substrate (ε = 2.25). A detailed description of the fabrication methods can be found in Supplementary Material Section 1. Figures <ref>(a) and (c) show scanning electron microscope (SEM) images of the fabricated structures. The structure in Fig. <ref>(a) has a periodicity along the x direction of 2a=980 nm. To experimentally characterize the photonic bands of these structures, we perform angle- and frequency-resolved reflection measurements. The samples are illuminated by a tunable continuous wave laser (Keysight 81606A) with a wavelength range of λ = 1.45 - 1.65 μ m (±1.5 pm absolute wavelength resolution accuracy), and a laser linewidth coherence control of 10 kHz. We measure the iso-frequency contours of the fabricated photonic crystal slabs using back focal plane (BFP) imaging. We then extract the Landau-level band structures by observing the photonic crystal resonances at a fixed k_x corresponding to the location of the Dirac point of the unstrained structure. Details of the experimental setup can be found in Supplementary Material Section 1. Fig. <ref>(a) shows the bands of the unstrained structure, obtained by BFP imaging, where we clearly observe linearly dispersing bands near the Dirac point. We note that a small gap is observed at the Dirac point - this is due to inevitable fabrication disorder that breaks inversion symmetry. We show in Supplementary Information Section 4 that the breaking of inversion symmetry affects the zeroth Landau level significantly more than the others. Next, we measure the bands of the strained photonic crystal slabs described above and find the emergence of discrete Landau levels, as shown in Fig. <ref>(b). While the effective theory predicts that the Landau levels should be flat, we see that they are dispersive in both simulation (Fig. <ref>(d)) and experiment (Fig. <ref>(b)), i.e., the bands are concave-up. This arises due to the fact that, by adding strain, the unit cell is distorted locally as a function of x. This distortion effectively adds a parabolic potential to the Hamiltonian (i.e., H∼ x^2σ_0), which in turn causes the dispersion of the Landau level bands. A detailed explanation can be found in Supplementary Information Section 5. According to the effective theory (Eq. <ref>), the n=0 level should be at the center of all Landau levels. However, due to the aforementioned inversion-symmetry breaking, this level is slightly shifted away from the center (see Supplementary Information Section 4). As a result, we use a new reference frequency of ω_0^' = 1/2(ω_-1 + ω_1) as the Dirac frequency to calculate the Landau level spacings, defined as ω_n - ω_0^'. In Fig. <ref>(c), we compare the theoretically and experimentally obtained level spacings at k_y=0 [2π a^-1] under different strain strengths (characterized by κ) and observe good agreement between the two. From the experimental data, we also calculate the normalized quantity |ω_n - ω_0^'|/(κ√(|n|)), which should be a constant for all Landau levels. We again observe good agreement between experiment and the theoretically-predicted value of 0.0823 [2π c], as shown in Fig. <ref>(d). In both Figs. <ref>(c) and (d), the theoretical plots (solid lines) are obtained directly from analytical predictions, and have no free parameters. It is clear from Fig. 
<ref>(b) that, as n decreases, the range of k_y values over which the n^th Landau level is observed becomes smaller. This can be intuitively understood as arising from the interaction of the Landau level states with other states that reside toward the far left and right sides of the sample. These states rise in energy as one moves away from the sample center along the x direction. We know from Eq. (1) that the Landau level states are harmonic oscillator eigenstates centered at x=k_y/B_ eff, with spatial widths of Δ x_n=√((2|n|+δ_0,n)/2B_ eff). As k_y is increased, the Landau level center translates and the tail of the Landau level eventually interacts with the states mentioned above, leading to an increased linewidth. More details are given in Supplementary Information Section 3. The fact that the x-position of the Landau level state varies linearly with k_y leads to another clear observable: when the input beam is moved from left to right in real space along the x-direction, the Landau level states at increasing k_y are selectively excited, and therefore appear more clearly in the band structure. We observe this effect directly, as shown in Fig. <ref>(a) through (c): when the input beam is on the left side of the sample (i.e., x<0), we see that the modes on the left side of the band structure (k_y<0) are more strongly excited, but as the input beam is moved rightward, we observe that the modes on the right side of the band structure are increasingly excited. To further study the relationship between x and k_y, we extract the boundary in k_y-space between the modes that are excited and those that are not excited. For input beams positioned left of center, we extract the right boundary, and for input beams positioned right of center, we extract the left boundary. The boundary values differ from the excitation centers by an overall offset, which we remove by fitting the data to a line and subtracting the intercept (one for the left-boundary data and one for the right-boundary data). Using this procedure, we obtain the relationship between the Landau level horizontal position and the average vertical momentum, k_y, of the excited modes. The linear relationship between these, as shown in Fig. <ref>(d), evidences the direct proportionality between the Landau level positions and k_y. We next turn our attention to the mitigation of the Landau level dispersion. As explained earlier, Eq. <ref> predicts flat Landau levels. However, in the simulations and experiments, the Landau levels exhibit quadratic dispersion as k_y is varied. As shown in Ref. <cit.>, it is possible to mitigate this dispersion by introducing an additional strain profile, which induces a pseudo-electric potential. Specifically, we add a cubic term to the deformation such that the point (x, y, z) is mapped to (x + aβ(κ x)^3, y + a(κ x)^2,z). The parameter β controls the strength of this additional strain in the x-direction. A schematic of the strained structure, which induces both pseudomagnetic and pseudoelectric fields, is shown in Fig. <ref>(a) (further details are given in Supplementary Information Section 5). The reason why the pseudoelectric field counters the Landau level dispersion, to leading order, can be explained as follows. To leading order, the form of the pseudoelectric field gives rise to a potential V_ eff=3aβ m κ^2 x^2σ_0 (to be added to (<ref>)) which is similar to that which creates the dispersion in the first place (here m=-3.28a^-2 is a parameter calculated entirely from the states of the periodic structure). 
Since the spatial positions of the Landau level eigenstates grow linearly with k_y, a quadratic potential in x is equivalent to a parabolic dispersion in k_y. An appropriate choice of the field strength (and sign) will then counteract the original dispersion induced by the strain associated with the pseudomagnetic field. By choosing β appropriately, the quadratic dispersion of the Landau levels can be mitigated, leading to nearly flat bands. We note that each Landau level requires a different value of β to counteract its dispersion. More details are given in Supplementary Information Section 5. Fig. <ref>(b) shows numerical simulations of the flattened Landau levels for a structure with pseudomagnetic and pseudoelectric fields induced by a strain with κ=0.0632a^-1 and β=0.0364. Here, the n=0 level is targeted, but other levels are also evidently flatter. Fig. <ref> (c) shows the experimental data for a strained structure with the same values of κ and β given above, where a good agreement is observed between theory and experiment. In conclusion, we have directly observed Landau levels in the spectra of two-dimensional silicon photonic crystal slabs. As in graphene, the Landau level energies are proportional to √(|n|), where n is an integer. The Landau level bands are found to be dispersive, which can be explained by a distortion of the unit cell as a result of the strain. We further showed that this dispersion can be mitigated by adding an additional strain that induces a position-dependent pseudoelectric field (i.e., a potential). Landau levels constitute a new methodology for enhancing light-matter interaction which is distinct from standard slow light or cavity enhancement, because a flat band acts essentially as a `cavity everywhere in space'. The realization of optical pseudomagnetism prompts several new questions and directions of inquiry, including: whether Landau-level flat bands can be used to enhance light-matter coupling more efficiently than conventional photonic crystal flat bands or other points of high degeneracy (such as van Hove singularities); the question of the nature of wave mixing processes such as four-wave mixing among Landau levels; and whether the square-root structure of the eigenvalue spacing can lead to different properties associated with entangled pair or frequency comb generation. More broadly, the framework of pseudomagnetism gives an analytical handle on aperiodic photonic structures, allowing for a new approach to designing devices and better understanding their behavior. § ACKNOWLEDGEMENTS We gratefully acknowledge funding support from the Office of Naval Research MURI program under agreement number N00014-20-1-2325, the Air Force Office of Scientific Research MURI program under agreement number FA9550-22-1-0339, as well as the Kaufman and Packard foundations under grant numbers KA2020-114794 and 2017-66821, respectively. This research was also supported in part by National Science Foundation grants DMS-1620422 (MCR), DMS-1620418 (MIW), DMS-1908657 (MIW) and DMS-1937254 (MIW), as well as Simons Foundation Math + X Investigator Award #376319 (MIW). The authors acknowledge the Nanofabrication Lab within the Materials Research Institute at Penn State and the help of Michael Labella, as well as seed funding from the Center for Nanofabricated Optics at Penn State University. F.G. thanks GenISys and, in particular, Roger McCay for his help in optimizing the fracturing of the electron-beam patterns. M.B. 
thanks Sebabrata Mukherjee and Alexander Cerjan for fruitful discussions in the early stages of the project and help with numerical optimization. We would like to note that the group of Ewold Verhagen has concurrently posted a similar work on the observation of Landau levels in photonic crystals.
http://arxiv.org/abs/2306.01491v1
20230602123414
Learning Local to Global Feature Aggregation for Speech Emotion Recognition
[ "Cheng Lu", "Hailun Lian", "Wenming Zheng", "Yuan Zong", "Yan Zhao", "Sunan Li" ]
cs.SD
[ "cs.SD" ]
Concurrent Classifier Error Detection (CCED) in Large Scale Machine Learning Systems Pedro Reviriego, Ziheng Wang, Álvaro Alonso, Zhen Gao, Farzad Niknia, Shanshan Liu and Fabrizio Lombardi P. Reviriego is with Universidad Politécnica de Madrid, 28040 Madrid, Spain. Email: [email protected]. Z. Wang is with Northeastern University, Dept. of ECE, Boston, MA 02115, USA. Email: [email protected]. A. Alonso is with Universidad Politécnica de Madrid, 28040 Madrid, Spain. Email: [email protected]. Z. Gao is with Tianjin University, Tianjin 300072, China. Email: [email protected]. F. Niknia is with Northeastern University, Dept. of ECE, Boston, MA 02115, USA. Email: [email protected]. S. Liu is with New Mexico State University, Klipsch School of ECE, Las Cruces, NM 88003, USA. Email: [email protected]. F. Lombardi is with Northeastern University, Dept. of ECE, Boston, MA 02115, USA. Email: [email protected]. July 31, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ Transformer has emerged in speech emotion recognition (SER) at present. However, its equal patch division not only damages frequency information but also ignores local emotion correlations across frames, which are key cues to represent emotion. To handle the issue, we propose a Local to Global Feature Aggregation learning (LGFA) for SER, which can aggregate long-term emotion correlations at different scales both inside frames and segments with entire frequency information to enhance the emotion discrimination of utterance-level speech features. For this purpose, we nest a Frame Transformer inside a Segment Transformer. Firstly, Frame Transformer is designed to excavate local emotion correlations between frames for frame embeddings. Then, the frame embeddings and their corresponding segment features are aggregated as different-level complements to be fed into Segment Transformer for learning utterance-level global emotion features. Experimental results show that the performance of LGFA is superior to the state-of-the-art methods. Index Terms: speech emotion recognition, Transformer, time-frequency feature, frame-level, segment-level § INTRODUCTION Speech emotion recognition (SER) is a significant task of affective computing and has attracted wide attention in recent years <cit.>, <cit.>. The key to addressing the SER is how to disentangle the emotion information hidden in speech from the confusion of diverse acoustic factors <cit.>, <cit.>, <cit.>, e. g., background noise, language, speaker identity. 
Actually, the emotional information is always discretely distributed in frames or segments of speech <cit.>, <cit.> due to the presence of special frames or segments without emotional contexts, i.e., empty frames/segments. In other words, emotion information is always discretely distributed in some key frames or segments. Therefore, a practical approach is to capture long-range emotion dependencies from these key frames/segments <cit.>, <cit.>, <cit.>, <cit.>. To this end, Recurrent Neural Networks (RNNs) <cit.>, <cit.> are widely adopted for learning utterance-level emotion features from frame-level or segment-level features. Although previous works based on RNNs, e.g., LSTM and Bi-LSTM, have achieved great success on SER, they still encounter some issues <cit.>, e.g., high time and space complexity for computing cells and only modeling sequential long-term dependencies (from forward to backward, or the reverse). With the emergence of the Transformer <cit.>, these issues have been handled effectively. In the Transformer, Multihead Self-Attention can describe the complete relationship between all speech frames/segments. Also, the time-space complexity can be effectively reduced by parallel matrix computation. Taking these advantages, Speech Transformer models <cit.>, <cit.> have been developed from the Vision Transformer (ViT) <cit.>. However, the Speech Transformer roughly divides the speech spectrogram into identical "chunks" <cit.> (i.e., patches in ViT), leading to the loss of local inter-frame relationships that reflect the fine-grained emotion distribution and to the corruption of frequency-domain information. Since the frame-level and segment-level features contain emotional information at different scales <cit.>, <cit.>, e.g., frames reflect phoneme-level associations and segments correspond to word-level or phrase-level correlations, they should be aggregated complementarily to learn more emotion-discriminative speech features. Likewise, ViT also ignores the local structure information in image patches for computer vision. To handle a similar issue, Han et al. <cit.> proposed a Transformer in Transformer (TNT) to simultaneously learn inter-patch and intra-patch relationships. Inspired by TNT <cit.>, we propose a novel Local to Global Feature Aggregation learning (LGFA) method for SER. The LGFA nests a Frame Transformer inside a Segment Transformer to aggregate different-scale emotion dependencies for the speech emotion representation. The whole learning process of LGFA goes from frame-level to segment-level to utterance-level. Compared with other Speech Transformer-based methods, our LGFA is a novel and specialized Transformer-based model for SER, and its advantages can be summarized in the following three aspects: * it aims to capture long-range emotion-related dependencies at different scales both inside frames and segments instead of the simple image patches adopted in the Transformer. * it takes a frame and a segment as the input of the Frame Transformer and the Segment Transformer, respectively, instead of equally divided image patches. In this case, the frame and segment used in LGFA contain the entire frequency-domain information, such that the frequency features will not be damaged in the speech chunk division (see the sketch after this list). * it can also be extended from the time domain to the frequency domain and the time-frequency domain by different patch partition strategies. This extension can make full use of the time-frequency characteristics of speech signals to represent emotion information.
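To make the contrast with ViT-style patching concrete, the short numpy sketch below divides a 64x128 log-Mel spectrogram into the three chunk types discussed in this paper: 16x16 square patches, 64x1 frame chunks and 64x8 segment chunks (the sizes used in the ablation study later on). The random array simply stands in for a real log-Mel spectrogram.

import numpy as np

spec = np.random.randn(64, 128)                   # log-Mel spectrogram: 64 bands x 128 frames

# ViT-style division: equal square patches that cut across both time and frequency
vit_patches = spec.reshape(4, 16, 8, 16).transpose(0, 2, 1, 3).reshape(-1, 16, 16)

# LGFA division along time only: each chunk keeps the full 64-band frequency axis
frames = spec.T.reshape(128, 64, 1)                      # 128 frame chunks of size 64 x 1
segments = spec.reshape(64, 16, 8).transpose(1, 0, 2)    # 16 segment chunks of size 64 x 8

print(vit_patches.shape, frames.shape, segments.shape)
# (32, 16, 16) (128, 64, 1) (16, 64, 8)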
§ PROPOSED METHOD Considering the inter-frame time property of speech, LGFA feeds a Frame Transformer with frame features, then integrates frame embeddings and segment features as the segment-level aggregation features. This point is the main difference from TNT. Further, these aggregation features are regarded as the input of a Segment Transformer to learn higher-level emotion correlations across segments. Consequently, we can obtain the global utterance-level features of speech emotions through joint training of the Frame and Segment Transformers. The overview of LGFA is shown in Figure. <ref>, in which the Frame Transformer takes the frame-level feature of speech as the input. To this end, we firstly process the frame-level feature of speech. Given the log-Mel-spectrogram feature x∈ℝ^F × T × C of each emotional speech, the i^th frame x_i∈ℝ^F × C of the spectrogram x={x_i}_i=1^T is firstly encoded by a linear projection layer FC(·) as the i^th frame embedding x'_i ∈ℝ^1 × d_f, denoted as x'_i = FC(x_i), where F, T, and C represent the numbers of Mel-scaled frequency, time frame and channel, respectively. d_f is the dimension of frame embeddings. Then, to enhance inductive bias of Frame Transformer <cit.>, we add a learnable position encoding e^f_i ∈ℝ^1 × d_f into x'_i as the input of Frame Transformer, which can be represented as x'_i ←x'_i + e^f_i, where e^f={e^f_i}_i=1^T ∈ℝ^T × d_f. In Frame Transformer, the sequence of speech frame embeddings x'={x'_i}_i=1^T is utilized to characterize local inter-frame correlations of emotions. Then, the frame-level encoding x̂ can be obtained by the frame embedding sequence x' through the following operations: x”^,ℓ = MSA(LN(x'^,ℓ-1)) + x'^,ℓ-1, x̂^ℓ = MLP(LN(x”^,ℓ)) + x”^,ℓ, where ℓ∈ [1,...,L] is the index of the stacked block, L is the number of blocks in Frame Transformer, and x̂^ℓ∈ℝ^T × d_f is encoded by the ℓ^th block. Besides, in Equation (<ref>) and (<ref>), MSA(·), MLP(·) and LN(·) are the operations of Multihead Self-Attention (MSA), MultiLayer Perceptron (MLP), and Layer Normalization (LN), respectively, according to <cit.>, <cit.>. Notably, x'^,0=[x'_1, x'_2,...,x'_T] ∈ℝ^T × d_f in Equation (<ref>) is the initial input of the frame embedding sequence x'. To aggregate the emotion-related dependencies at different scales, we further design a Segment Transformer to learn frame-level and segment-level correlations of speech emotion. Therefore, the input of Segment Transformer is the combination of the frame-level encoding x̂ and segment-level embedding s. Specifically, the log-Mel-spectrogram feature x can be divided into a segment set, where each segment s_j∈ℝ^k × F × C consists of k frames, represented as x={s_j}_j=1^T/k. Similar to the Frame Transformer, each segment s_j is firstly transformed to the segment embedding s'_j ∈ℝ^1 × d_s by a linear projection layer FC(·) in Segment Transformer. Besides, the k^th frame-level encoding x̂^s_j ∈ℝ^k × d_f corresponding to the j^th segment are also used to aggregate into the segment embeddings after another linear projection FC(·), where FC(·) is to ensure dimension match for the addition of frame encoding and segment embedding. Then, the combination embedding s”_j ∈ℝ^1 × d_s of frame-level encoding and segment embeddings is generated by s^'_j = FC(Vec(s_j)), x̂^s_j = [x̂_i+1,x̂_i+2,..,x̂_i+k], s”_j = s'_j + FC(Vec(x̂^s_j)), where Vec(·) is a vectorization operation to flatten the dimension of x̂_j^s or s_j to ℝ^1 × (k × d_f). 
Then, we also add a learnable class token s_cls into input sequence for the final emotion classification. Eventually, the segment-level embedding s”∈ℝ^(T/k+1) × d_s can be written to s”=[s_cls, s”_1, s”_2, ..., s”_T/k]. Similar to Frame Transformer, each segment-level embedding with frame-level aggregation is added the corresponding positions between segments to preserve time-sequence property of inductive bias on speech by a learnable position encoding e^s_j∈ℝ^1 × d_s, which can be denoted as s”_j ←s”_j + e^s_j, where e^s={e^s_j}_j=1^T/k∈ℝ^(T/k+1) × d_s. The Segment Transformer also adopts L stacked standard transformer blocks to encode the aggregation embedding for the utterance-level representation of speech emotion, where the ℓ^th block transformations are formalized to s̅^ℓ = MSA(LN(s”^,ℓ-1)) + s”^,ℓ-1, ŝ^ℓ = MLP(LN(s̅^ℓ)) + s̅^ℓ, where s”^,0 is the initial segment embedding sequence in Equation (<ref>). With all the above operations, our proposed LGFA firstly models local emotion correlations within frames by Frame Transformer I(·), then aggregates the frame-level encoding ŝ and segment embeddings s” to capture global longer-dependencies for the utterance-level emotion representation ŝ through Segment Transformer O(·), which can be denoted as ŝ = O(s”;I(x')), Furthermore, the class token ŝ_cls can be generated from ŝ to input the classifier for speech emotion prediction, represented as y_pred=C(ŝ_cls), where y_pred, C, and ŝ_cls are the predicted labels of emotions, classifier, and s_cls generated by LGFA, respectively. Note that the segment class token s_cls, frame position encoding e^f and segment position encoding e^s are all initialized as zeros in the letter. § EXPERIMENTS In the section, we will introduce the details of our implemented experiments, then discuss the comparison results of the proposed LGFA with state-of-the-art methods. Database: To evaluate the performance of our proposed LGFA, two public emotional speech databases are selected to implement the experiments, i. e., the Interactive Emotional Dyadic Motion Capture database (IEMOCAP) <cit.> and the China Emotional Database (CASIA) <cit.>. In detail, IEMOCAP is an English multimodal database containing video, speech, and text scripts, which is recorded in 5 sessions (1 male and 1 female in each session) by inducing diverse emotions (angry, happy, sad, neutral, frustrated, excited, fearful, surprised, disgusted, and others) of 10 actors under improvised or scripted scenarios. CASIA is a Chinese Emotional Speech Database with 9 600 recording files under 6 emotions (angry, fear, happy, neutral, sad, and surprise). It is collected by inducing 4 actors (2 males and 2 females) to express 6 emotions under several fixed text contents. Note that we adopt 2 280 improvised samples and 4490 scripted+improvised samples with 4 emotions (angry, happy, sad, and neutral) in IEMOCAP, and 1 200 public released samples with 6 emotions in CASIA for experiments. Experimental Settings: In our experiments, all speech sentences are re-sampled to 16 kHz for Short-Time Fourier Transform (STFT) using 20 ms Hamming window size with 50% frame overlapping. Then, they are divided into segments with 128 frames as experimental samples and pad 0 for the segment less than 128 frames. Finally, we obtain the log-Mel-spectrogram with the dimension of ℝ^64 × 128 × 1 for the input of our LGFA, where the number of Mel-filter is set as 64. 
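A possible implementation of this preprocessing in Python with librosa is sketched below. The dB scaling, the exact FFT size and the zero-padding of the last segment are assumptions on our part, and example.wav is a hypothetical file name.

import numpy as np
import librosa

def log_mel_segments(wav_path, sr=16000, n_mels=64, seg_frames=128):
    """Log-Mel segments of shape (64, 128, 1), following the settings above:
    16 kHz audio, 20 ms Hamming window, 50% frame overlap, zero-padding of the last segment."""
    y, _ = librosa.load(wav_path, sr=sr)
    win = int(0.020 * sr)                         # 20 ms -> 320 samples
    hop = win // 2                                # 50% overlap
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=win, win_length=win,
                                         hop_length=hop, window="hamming", n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)            # (n_mels, n_frames)
    n_seg = int(np.ceil(log_mel.shape[1] / seg_frames))
    padded = np.zeros((n_mels, n_seg * seg_frames), dtype=log_mel.dtype)
    padded[:, :log_mel.shape[1]] = log_mel
    return padded.reshape(n_mels, n_seg, seg_frames).transpose(1, 0, 2)[..., None]

segments = log_mel_segments("example.wav")        # hypothetical file name
print(segments.shape)                             # (n_segments, 64, 128, 1)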
For the network of LGFA, the input sizes of Frame Transformer and Segment Transformer are assigned as (64,128,1) and (64,8,1). The number of stacked blocks L is 7. Furthermore, the projection dimensions and the head number of the Frame Transformer are set as 16 and 4, and they are assigned 256 and 4 in the Segment Transformer. The LGFA is implemented by PyTorch with NVIDIA A10 GPUs. And it is optimized by the AdamW Optimizer with a learning rate of 0.0001 and trained from scratch with a batch size of 64. In addition, the Leave-One-Subject-Out (LOSO), i. e., k-fold cross-validation protocol (CV), is adopted for a fair comparison according to <cit.>, <cit.>, where k is the speaker number of dataset. Therefore, the speaker rate of training and testing data in IEMOCAP and CASIA are 9:1 and 3:1, respectively. Furthermore, since the IEMOCAP are class-imbalanced, the weighted average recall (WAR) and the unweighted average recall (UAR) <cit.>, <cit.> are used to effectively evaluate the performance of the proposed method, where WAR is standard recognition accuracy while UAR is the class-wise accuracy. Results and Analysis: We compare the performance of our proposed LGFA with several state-of-the-art methods on IEMOCAP, i. e., CNN+LSTM Model <cit.>, DNN-HMM based model (DNN-HMM_SGMM-Ali.) <cit.>, CNN model with spectrogram (model-2A(spectrogram)) <cit.>, fusion model with different acoustic features (Model-3 (fusion) and Model-1 (dow.+ens.)) <cit.>. The above methods are all implemented on the improvised data (2280 samples). To further demonstrate the performance of LGFA, we also compare the LGFA with other methods (i. e., Bi-LSTM and Greedy+Dro.+Att.+MLP) <cit.> on the scripted+improvised data (4490 samples). Moreover, we also choose other comparison methods on CASIA, i. e., LLDs with dimension reduction (LLD+DR) <cit.>, DNNs with the extreme learning machine (DNN+ELM) <cit.>, weighted spectral feature learning model (HuWSF) <cit.>, and DCNN with discriminant temporal pyramid matching (DTPM) <cit.>. As homologous methods to LGFA, ViT <cit.> and TNT <cit.> were also used as comparasion methods. Note that the results of DTPM, ViT, and TNT are obtained through our own implementations with the released codes[https://github.com/tzaiyang/SpeechEmoRec]^,[https://github.com/lucidrains/vit-pytorch]^,[https://github.com/huawei-noah/CV-Backbones/tree/master/tnt_pytorch]. In addition, to evaluate the experimental performance more comprehensively, these selected comparison methods are based on two commonly used experimental protocols on IEMOCAP, i. e., 10-fold LOSO based on speakers and 5-fold LOSO based on sessions. For example, Bi-LSMT, Greedy+Dro.+Att.+MLP, CNN+LSTM, DNN-HMM_SGMM-Ali., ViT, TNT and our proposed LGFA are all based 10-fold CV, other methods are based on 5-fold CV. The experimental results with WAR and UAR on IEMOCAP are shown in Table <ref>, where ViT and TNT are implemented by the spectrogram size of 128×128 and the chunk size of 16×16 according to <cit.>, <cit.>. From these results, it is obvious that the proposed LGFA achieves the competitive performance on both WAR and UAR. Specifically, based on the scripted+improvised data, our LGFA improves the accuracies (6.25% on WAR and 7.82% on UAR) than comparison methods. Based on the improvised data, LGFA is superior to RNN-based methods (i. e., CNN+LSTM), demonstrating the advantage of the Transformer-based methods in SER. 
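Putting together the architecture described in the Proposed Method section and the settings just listed (input chunks of 64x128 and 64x8, L = 7 blocks, projection dimensions 16 and 256, 4 heads, 4 IEMOCAP classes), a compact PyTorch sketch of LGFA might look as follows. The feed-forward widths, the pre-norm encoder layers (norm_first=True, which requires a recent PyTorch), the zero-initialized tokens and position encodings, and other details are our own assumptions and simplifications, not the authors' exact implementation.

import torch
import torch.nn as nn

class LGFASketch(nn.Module):
    """Minimal sketch of the frame-to-segment aggregation. Hyper-parameters follow the
    text (d_f = 16, d_s = 256, k = 8 frames per segment, L = 7 blocks, 4 heads);
    everything else (dropout, feed-forward widths, initialization) is an assumption."""

    def __init__(self, n_mels=64, n_frames=128, k=8, d_f=16, d_s=256,
                 n_layers=7, n_heads=4, n_classes=4):
        super().__init__()
        n_seg = n_frames // k
        self.k, self.n_seg = k, n_seg
        # Frame branch: one 64-dim spectral column per frame -> d_f
        self.frame_proj = nn.Linear(n_mels, d_f)
        self.frame_pos = nn.Parameter(torch.zeros(1, n_frames, d_f))
        frame_layer = nn.TransformerEncoderLayer(d_f, n_heads, dim_feedforward=4 * d_f,
                                                 batch_first=True, norm_first=True)
        self.frame_tf = nn.TransformerEncoder(frame_layer, n_layers)
        # Segment branch: k flattened frames (k * n_mels values) -> d_s
        self.seg_proj = nn.Linear(k * n_mels, d_s)
        self.frame_to_seg = nn.Linear(k * d_f, d_s)   # aggregates frame encodings into segments
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_s))
        self.seg_pos = nn.Parameter(torch.zeros(1, n_seg + 1, d_s))
        seg_layer = nn.TransformerEncoderLayer(d_s, n_heads, dim_feedforward=4 * d_s,
                                               batch_first=True, norm_first=True)
        self.seg_tf = nn.TransformerEncoder(seg_layer, n_layers)
        self.classifier = nn.Linear(d_s, n_classes)

    def forward(self, spec):                      # spec: (B, n_mels, n_frames)
        B = spec.size(0)
        frames = spec.transpose(1, 2)             # (B, T, n_mels)
        f = self.frame_tf(self.frame_proj(frames) + self.frame_pos)   # (B, T, d_f)
        seg_raw = frames.reshape(B, self.n_seg, -1)                   # (B, T/k, k*n_mels)
        s = self.seg_proj(seg_raw) + self.frame_to_seg(f.reshape(B, self.n_seg, -1))
        s = torch.cat([self.cls_token.expand(B, -1, -1), s], dim=1) + self.seg_pos
        s = self.seg_tf(s)
        return self.classifier(s[:, 0])           # prediction from the class token

logits = LGFASketch()(torch.randn(2, 64, 128))    # two random log-Mel segments
print(logits.shape)                               # torch.Size([2, 4])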
Further, its results also outperform ViT and TNT, which reveals that LGFA effectively captures the long-range emotion dependencies inside frames and segments for better speech representation and is more suitable for the SER task than ViT and TNT. Although our LGFA achieves the best performance, the UAR results are lower than the WAR ones for the comparison methods because of the class imbalance in IEMOCAP. The results on CASIA, illustrated in Table <ref>, also reveal the superiority of our LGFA (improving 3.17% on WAR and UAR). It is better than traditional methods (i.e., LLD+DR and HuWSF) and DNN-based approaches (i.e., DNN_ELM and DTPM). Similar to the results on IEMOCAP, our proposed LGFA again proves its superiority over ViT and TNT on SER. Since CASIA is class-balanced, the results of WAR are equal to those of UAR. Furthermore, to explore the effective components of LGFA, we implement extended experiments to analyze different architectures of our LGFA. Figure <ref> shows the results of the ablation study, where ViT, the Frame Transformer, and the Segment Transformer are implemented with square chunks of size 16×16, frame chunks of size 64×1, and segment chunks of size 64×8, respectively. The ablation results in Figure <ref> indicate that LGFA is superior to the other architectures in speech emotion representation. Namely, our designed frame and segment aggregation learning is more suitable for SER than current Speech Transformers. Furthermore, the Segment Transformer outperforms the Frame Transformer, indicating that larger chunks promote the extraction of speech emotion features by the Transformer. Discussion on the extension of LGFA: In LGFA, to preserve the completeness of the frequency domain in the spectrogram, we divide the spectrogram feature into chunks only along the time domain. To further explore the effect of different chunk division strategies, we extend the chunk division of the proposed LGFA (i.e., LGFA_T in Table <ref>) from the time domain to the frequency and time-frequency domains (i.e., LGFA_F and LGFA_TF in Table <ref>). Compared with LGFA_T, LGFA_F takes each frequency band as a frame and each frequency band group as a segment to learn the sentence-level emotion feature from the frequency domain. Thus, we can obtain the frequency-wise class token ŝ_cls^fre of LGFA_F for emotion prediction, represented as y_pred=C(ŝ_cls^fre). Further, we also complementarily combine the chunk division methods in the frequency and time domains to generate the fusion class token of LGFA_TF, ŝ_cls^fu=cat(ŝ_cls,ŝ_cls^fre), which is fed to the emotion classifier as y_pred=C(ŝ_cls^fu), where cat(·) is the concatenation operation on the feature dimension. The experimental results of different chunk division strategies are shown in Table <ref>. From them, we observe that LGFA_T and LGFA_TF outperform LGFA_F, which may be due to the fact that speech emotion is closely related to the context within frames or segments, whereas in the frequency domain not all emotions have obvious energy activations across frequency bands. Furthermore, LGFA_TF outperforms LGFA_T on CASIA, while performing worse on IEMOCAP. The reason may be that chunk division in the frequency domain not only complements the time-domain chunk division but may also introduce noise caused by the uncertain correlations in the frequency domain under different emotions. The recording environment of CASIA contains less noise, while IEMOCAP is recorded in an open dialogue environment.
Thus, the noise will affect frequency-domain correlations and impair the performance of the time-frequency fusion model. In other words, the frequency information should be screened before it can provide this complementary benefit. § CONCLUSIONS We propose a novel Local to Global Feature Aggregation (LGFA) method for SER. LGFA integrates a Frame Transformer into a Segment Transformer to aggregate local emotion correlations at different scales both within frames and segments for the global utterance-level representation of emotional speech. Through the joint learning of the two Transformers, we can obtain discriminative emotion features and learn the speech emotion representation from frame-level to segment-level to sentence-level. Extensive experimental results on IEMOCAP and CASIA demonstrate the superiority of our proposed LGFA. In future work, we will further explore different chunk division strategies of LGFA for better SER performance. § ACKNOWLEDGEMENTS This work was supported in part by NSFC under Grant U2003207, in part by the National Key R&D Project under Grant 2022YFC2405600, in part by the Jiangsu Frontier Technology Basic Research Project under Grant BK20192004, and in part by the Zhishan Young Scholarship of Southeast University. IEEEtran
http://arxiv.org/abs/2306.09044v1
20230615110717
Hands-on detection for steering wheels with neural networks
[ "Michael Hollmer", "Andreas Fischer" ]
cs.LG
[ "cs.LG" ]
Hands-on detection for steering wheels with neural networks Michael Hollmer Faculty of Computer Science Deggendorf Institute of Technology Dieter-Görlitz-Platz 1 94469 Deggendorf E-Mail: mailto:[email protected]@stud.th-deg.de Andreas Fischer Faculty of Computer Science Deggendorf Institute of Technology Dieter-Görlitz-Platz 1 94469 Deggendorf E-Mail: mailto:[email protected]@th-deg.de (Corresponding author) July 31, 2023 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================== Proc. of the Interdisciplinary Conference on Mechanics, Computers and Electrics (ICMECE 2022) 6-7 October 2022, Barcelona, Spain In this paper the concept of a machine learning based hands-on detection algorithm is proposed. The hand detection is implemented on the hardware side using a capacitive method. A sensor mat in the steering wheel detects a change in capacity as soon as the driver's hands come closer. The evaluation and final decision about hands-on or hands-off situations is done using machine learning. In order to find a suitable machine learning model, different models are implemented and evaluated. Based on accuracy, memory consumption and computational effort the most promising one is selected and ported on a micro controller. The entire system is then evaluated in terms of reliability and response time. machine learning, hands-on detection, driving assistance § INTRODUCTION The development of advanced driver assistance systems is an essential goal for car manufacturers. As can be seen from a survey, driver assistance systems are by now an important purchase criterion for over 60% of potential buyers <cit.>. In addition, a unique selling point over the competition and thus a competitive advantage can be gained through the further automation of vehicles. An example is the system from Mercedes-Benz, which was the first to receive approval for autonomous driving at level 3 in December 2021. Autonomous driving at level 3 enables the driver to divert his attention from what is happening on the road in certain situations. The vehicle takes over the lateral and longitudinal guidance and independently recognizes errors or departure from system limits. In such a case, the system would prompt the driver to take back control of the vehicle. This transfer of vehicle control is a crucial challenge. An autonomous system must be able to recognize whether the driver is ready to take over control of the vehicle again. To ensure this, some form of driver monitoring is required. One way of detecting the driver's condition is a hands-on detection (HOD). This is a system that detects whether the driver's hands are on the steering wheel and therefore control over the vehicle can safely be transferred. A HOD can be implemented inexpensively by measuring steering angle and torque acting on the steering wheel. The necessary sensors are required for the servo-assistance, anyway. However, there is the disadvantage that false hands-off messages often occur in situations where the driver does not exert any significant force for lateral guidance. 
In such a case, the driver would be asked to put his hands back on the steering wheel, even though he has not let go of it. A better HOD variant, also used in this paper, uses a capacitance sensor. This allows the driver's contact with the steering wheel to be detected without relying on any force exerted on the steering wheel. However, the evaluation of capacitance values is more complex, since these depend on the driver and his environment. In this paper, a machine learning algorithm is implemented that is able to distinguish between a hands-on and a hands-off situation based on the capacitance values. The AI model is then ported to a micro controller and the reliability and response time of the HOD is evaluated. A maximum response time of 200ms is assumed to be appropriate for timely HOD. This paper aims to answer the question: Can neural networks increase the reliability of HOD within a response time of 200ms? § BACKGROUND Two techniques are combined in this paper to realize HOD: capacity measurement and machine learning. §.§ Capacity measurement One option to realize HOD is the detection of contact between the driver and the steering wheel by measuring the change in capacitance. There are different methods to measure the capacitance of the steering wheel. In this paper, a frequency-based measurement method is used. Touching the steering wheel is detected by a change in capacitance in a sensor element, with the capacitance being calculated indirectly from the measured frequency. The sensor element represents a measuring capacitor which forms a resonant circuit together with another capacitor and a coil. The frequency of the resonant circuit can be calculated using equation <ref>, which describes an ideal resonant circuit: f_0 = 1/(2π√(L(C_k + C_s))). The equation depends on the capacitance of the capacitor C_k, the capacitance of the sensor element C_s and the inductance of the coil L. As long as the steering wheel is in an untouched state, the resonant circuit oscillates with its maximum frequency f_0. If the driver puts his hands on the steering wheel, the capacity of the sensor element increases, leading to a reduction in the frequency of the resonant circuit. The sensor element is a capacitive mat that is wrapped around the core of the steering wheel and represents the active part of the measuring capacitor. Since there is no opposite side, a stray electric field forms between the active capacitor side and the environment. An approaching object causes a change in the capacitance value of the sensor element. To illustrate, the measuring capacitor can be seen as a plate capacitor, which can be described by the equation C = ϵ_0 ·ϵ_r ·A/d. Here, both electrically conductive and non-conductive objects cause a change in capacitance, for different reasons. A nearby conductive object causes the distance d between the active capacitor side and its surroundings to decrease, increasing the capacitance. Non-conductive objects, on the other hand, lead to an increase in capacity via a change in the relative permittivity ϵ_r. §.§ Machine learning approaches To classify the capacitance values, four different machine learning models are trained. In the following, a brief overview of the different approaches is given. §.§.§ Time Delay Neural Network One machine learning approach is the Time Delay Neural Network (TDNN). The TDNN is structured as a standard multilayer perceptron with a delay buffer connected in front. New values in a time series are buffered until a certain amount is reached.
Subsequently, these buffered values are passed as a single input into the multilayer perceptron, which then carries out the classification <cit.>. §.§.§ Long Short Term Memory As a second approach to classifying the capacitance values, a Long Short Term Memory (LSTM) network, a variant of a recurrent neural network, is used. In contrast to feed-forward networks like the TDNN, the neurons of the LSTM can have connections to neurons in the previous layer, to the same layer or to themselves, in addition to the standard forward-pointing connections. The feedback loops implement a memory, which allows the network to remember previous events <cit.>. This is an advantage for time-dependent series of measurements, since each measured value depends on its predecessor in a certain way. In contrast to the TDNN, which assumes independent measured values, recurrent neural networks can use this memory to take account of the temporal dependency <cit.>. §.§.§ Random Forest The last approach is the random forest, which combines the prediction results of multiple decision trees using the bootstrap aggregating (bagging) method. The idea behind bagging is to train several decision trees with a subset of the training data. The subsets are created by randomly selecting samples from the entire training data. This process is also called bootstrapping <cit.>. The result is a set of decision trees that are structured differently and whose classification errors ideally cancel out. The output of the random forest is the class chosen by most of the decision trees. § RELATED WORK Other work has also used machine learning to develop HOD, differing in sensors, algorithms, and response times. Johansson and Linder <cit.> used a camera system and the torque acting on the steering wheel to implement HOD. For the camera, two CNN approaches were compared in classifying the most recently acquired image. For evaluating the torque measurement, a one-dimensional convolutional neural network and an LSTM network were used. According to the authors, the evaluation of the torque requires a few seconds to detect a hands-off situation and up to two seconds to detect a hands-on. The camera approach reacted to a situation change within 5.4 seconds. Both solutions are thus well above the response time of 200ms we aim for in this work. Hoang Ngan Le et al. <cit.> have also developed a machine learning based HOD with a camera system. In their paper, the image evaluation is performed by a Region Based Convolutional Neural Network (RCNN) which has been improved for this specific purpose. The improved RCNN achieved 0.09 frames per second, which roughly corresponds to the evaluation of one frame every eleven seconds. As such, the time required for detection is also well above the 200ms limit. A solution not based on machine learning was published as a patent by Volkswagen AG. It combines two possible approaches for HOD. On the one hand, the values of the steering angle or torque sensor are used; on the other hand, the capacitance values of the steering wheel are considered to distinguish between a hands-on and a hands-off situation. The idea behind the combined approach is to use the torque sensor to detect hands-on situations with high confidence. During these situations, the corresponding capacitance values are recorded. From this data, a function is set up with which it is possible to quickly decide for each new capacitance value whether it corresponds to a hands-on or hands-off situation <cit.>.
Another non machine learning option for evaluating capacitance sensors was published by Analog Devices <cit.> and relies on dynamic threshold values. An algorithm continuously monitors the values of the capacitance sensor and measures the ambient level if no touch is detected. In addition, the average maximum sensor value is measured with each touch. The threshold from which a capacitance increase is counted as a touch is a certain percentage of the average measured maximum sensor value. These approaches bear a potential problem: If the driver only touches the steering wheel very lightly, the measured average maximum sensor value decreases. The dynamic threshold adapts to the small capacitance values. Thus, at some point a slight increase in capacitance values is erroneously recognized as a touch. If the driver brings both hands close to the steering wheel without touching it, this could trigger a similar increase in capacity as previous two-finger touch. The HOD would then recognize a hands-on situation even though the driver is not touching the steering wheel. § EXPERIMENTAL DESCRIPTION The implementation of the different machine learning models is divided into several steps. First, training data is recorded, classified and processed, which is then used to train the machine learning models. Based on the model results, the most promising model is selected and transferred to a micro controller. Finally, the system is evaluated in terms of reliability and response time. §.§ Generating training data For generating the training data, the steering wheel is alternately touched and released at defined points for five seconds. This process is repeated 30 minutes each for a two-finger, four-finger and two-hand touch. Figure <ref> shows the points of contact. It should be noted that the points in the figure do not only refer to the front of the steering wheel. Alternately also the outside, the back and the inside were touched. Regarding the sampling rate a new capacitance value was recorded every 2 ms. §.§ Preprocessing data for learning After recording the training data two preprocessing steps were implemented. In the first step, every sample was assigned a “hands-on” or “hands-off' label. This was automated by following the change in capacitance when touching or releasing the steering wheel. The corresponding edge was used to separate and label all samples. Once the difference between two measured capacity values is above noise level, it is interpreted as an edge. The required change in capacitance to trigger an edge was set separately and fine tuned for each of the three data sets. A rising edge triggers the “hands-on” label, while a falling edge triggers the “hands-off” label. In the second preprocessing step, every sample in the dataset was normalized in its length. This was done because the machine learning model should learn to classify a hands-on or hands-off situation based on capacitance values of just a few hundred milliseconds to speed up the reaction time of the HOD. Therefore a window with a fixed length of 100 values is placed over every sample. The values in the window form the input for the machine learning models. In each step, this window is moved one value, dropping an old value and adding a new value. Thus, all models have a fixed length input of 100 capacitance values, corresponding to 200 ms of recorded time. §.§ Preparation of gradient data In order for the machine learning models to deliver optimal results, capacitance values have to be normalized. 
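One plausible numpy implementation of the two preprocessing steps described above, edge-based labeling and the 100-value sliding window (200 ms at the 2 ms sampling rate), is sketched below. The edge threshold and the synthetic capacitance trace are illustrative assumptions, since the per-dataset thresholds are only described qualitatively; the state-holding interpretation of the edge labels is our own reading.

import numpy as np

def label_and_window(cap, edge_threshold, win=100):
    """Derive hands-on / hands-off labels from rising / falling edges in a raw capacitance
    trace and slide a 100-value window one value at a time (requires numpy >= 1.20)."""
    diff = np.diff(cap)
    label = np.zeros(len(cap), dtype=int)         # 0 = hands-off, 1 = hands-on
    state = 0
    for i, d in enumerate(diff, start=1):
        if d > edge_threshold:                    # rising edge -> hands-on
            state = 1
        elif d < -edge_threshold:                 # falling edge -> hands-off
            state = 0
        label[i] = state
    windows = np.lib.stride_tricks.sliding_window_view(cap, win)   # (N - 99, 100)
    targets = label[win - 1:]                     # label of the newest value in each window
    return windows, targets

cap_trace = np.random.default_rng(0).normal(0.0, 0.01, 5000).cumsum()  # synthetic trace
X, y = label_and_window(cap_trace, edge_threshold=0.2)                 # threshold is assumed
print(X.shape, y.shape)                           # (4901, 100) (4901,)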
For this, it is necessary to obtain the minimum and maximum capacitance values during execution. In the training phase this is not a problem because all data is known a priori. In a real-world application this is not the case, which means that the minimum and maximum values have to be determined dynamically. An estimate of the minimum value can be obtained by measuring the ambient level when the steering wheel is untouched. The maximum value, however, is a greater challenge. It would require the driver to place both of his hands on the steering wheel, which the system can never be sure is the case. Additionally, estimating the maximum value from the minimum value is not possible, as the change in capacitance caused by a driver heavily depends on his body weight. To eliminate this issue, the absolute capacitance values were converted into gradient values, focusing on the change in capacity over time instead. This makes it easier to normalize the values, since only the maximum capacitive rate of change needs to be known. Figure <ref> shows gradient values when the steering wheel is touched with one hand. § EVALUATION For the evaluation, all machine learning approaches are trained with the created datasets. The most promising model is then selected and ported to the STM32F769 micro controller, where the final reliability and reaction time testing is done. §.§ Training In order to decide which machine learning model is best suited for the classification task, all models are trained with five different combinations of parameters. The resulting machine learning models are examined based on memory consumption, execution time and reliability. §.§.§ Without gradient data First, the models are trained with absolute capacitance values. All models achieve a very high level of accuracy as well as precision and recall, with differences visible mainly in memory usage and execution time (cf. Tab. <ref>). With no major difference in accuracy, the random forest requires far more memory than the other models, which is particularly disadvantageous for embedded systems. Its execution time was therefore not measured. Looking at the neural networks, the biggest difference is the execution time. While the TDNN needs only a few microseconds for a forward pass, the LSTM networks need several milliseconds due to their more complex structure. This is relevant because data is sampled at a rate of one value every 2ms in the experiments. Thus, when used on the micro controller, LSTM networks with more than one hidden neuron lose data because the measurement is faster than the processing. §.§.§ With gradient data Looking at the models trained with the gradient data, the previously observed disadvantages regarding the memory consumption of the random forest remain (cf. Tab. <ref>). The processing time of the LSTM is still inferior to that of the TDNN. However, the LSTM with one hidden neuron performs slightly better in accuracy, precision and recall compared to the TDNN with 50 hidden neurons and, at 27 kB, occupies only about a third of the memory. The TDNN, on the other hand, offers a significantly shorter execution time. The delay between input and output for the TDNN with 50 hidden neurons is only 150 μs compared to the 0.6 ms of the smallest LSTM. Training the TDNN takes 1:08 minutes, which is only a fraction of the training time of the LSTM, which takes 20:48 minutes. For this reason, the TDNN with 50 hidden neurons was selected and ported to the micro controller.
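A minimal sketch of the selected model is given below. The text fixes only the input width (100 buffered gradient values, i.e., 200 ms of data) and the hidden-layer size (50 neurons), so the framework (PyTorch here), the activation function and the two-logit output layout are assumptions of ours rather than the authors' implementation.

import torch
import torch.nn as nn

# Sketch of the selected TDNN: 100 gradient values in, 50 hidden neurons,
# binary hands-on / hands-off output. Activation and output layout are assumed.
tdnn = nn.Sequential(
    nn.Linear(100, 50),   # delay buffer of 100 values feeds a small MLP
    nn.ReLU(),
    nn.Linear(50, 2),     # logits for hands-off (0) / hands-on (1)
)

window = torch.randn(1, 100)          # one buffered window of capacitance gradients
print(tdnn(window).argmax(dim=1))     # predicted class for this window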
§.§ Practical reliability test To test whether the system recognizes touches by the driver reliably, the steering wheel was touched with two fingers, four fingers, one hand and two hands at the points shown in figure <ref>. In the runs in which the steering wheel was touched with two and four fingers, a distinction was made between touching the front, back, inside and outside for each point. In each of the four runs, all points were touched ten times to see whether touches were only recognized sporadically in some places. In these experiments, the recognition of two fingers proved to be the most difficult. Especially on the inside, where there is a seam, the distance to the sensor mat is particularly large. This decreases sensitivity, and a touch triggers only a small increase in capacitance, resulting in no touch detection at all in the two-finger experiments and a maximum of 7 out of 10 correctly identified events in the four-finger experiment. Regarding the position, the 6 o'clock position proved to be difficult, both with two and with four fingers. Somewhat less (but still noticeably) impacted were the 3 o'clock and 9 o'clock positions. These three positions are located where the steering wheel spokes connect to the wheel, which is likely the root cause of the problem. In the 10 o'clock and 2 o'clock positions typical for driving a car, all events were recognized reliably irrespective of finger count, as long as any area apart from the inside of the wheel was touched. Also, when the steering wheel was not just touched but gripped with one or two hands, the success rate rose to 100%. §.§ Reaction time Next, the reaction time of the system was tested with two fingers, which represents the hardest case, as shown in the previous section. Fig. <ref> shows the capacitance values over time when the steering wheel was touched with two fingers. The red line represents the threshold at which the steering wheel is actually touched or released. The increase in capacitance below the red line is caused by the fingers approaching the wheel without yet making contact. Reaction time measurement is started when the capacitance values first exceed the threshold and stopped when the values drop below it. In ten experimental runs, results ranged between 108 ms and 294 ms for “hands-on” events and a significantly faster reaction time of 30–60 ms for “hands-off” events if two fingers were used. With four fingers, reaction times could be reduced to 74–94 ms (hands-on) and 38–58 ms (hands-off), respectively. § CONCLUSION The results show that it is possible to use a machine learning algorithm to evaluate capacitance values for HOD and achieve fast reaction times. By using the change in capacitance instead of the absolute values in the machine learning model, the problem of normalizing the input values was solved, and the HOD worked without external calibration, independent of the driver and environment.
http://arxiv.org/abs/2306.05833v1
20230609120503
Simulation of the 3D Radiative Transfer with Anisotropic Scattering for Convective Trails
[ "Olivier Pironneau", "Pierre-Henri Tournier" ]
math.NA
[ "math.NA", "cs.NA", "math-ph", "math.MP", "85A25, 37N30, 31A10, 35Q30, 68P30, 74S05" ]
http://arxiv.org/abs/2306.09689v1
20230616083937
Mesoscale Description of Interface-Mediated Plasticity
[ "Jinxin Yu", "Alfonso H. W. Ngan", "David J. Srolovitz", "Jian Han" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Jinxin Yu (Department of Materials Science and Engineering, City University of Hong Kong, Hong Kong SAR, China), Alfonso H. W. Ngan (Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong SAR, China), David J. Srolovitz (Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong SAR, China; corresponding author, [email protected]), Jian Han (Department of Materials Science and Engineering, City University of Hong Kong, Hong Kong SAR, China; corresponding author, [email protected]) Dislocation-interface interactions dictate the mechanical properties of polycrystalline materials through dislocation absorption, emission and reflection and interface sliding. We derive a mesoscale interface boundary condition to describe these, based on bicrystallography and Burgers vector reaction/conservation. The proposed interface boundary condition is built upon Burgers vector reaction kinetics and is applicable to any type of interface in crystalline materials with any number of slip systems. This approach is applied to predict slip transfer for any crystalline interface and stress state; comparisons are made to widely applied empirical methods. The results are directly applicable to many existing dislocation plasticity simulation methods. Mesoscale Description of Interface-Mediated Plasticity (July 31, 2023) § INTRODUCTION A classical approach to tailoring the mechanical properties (strength/ductility) of materials is through the manipulation of microstructure. Many common classes of microstructure may be described as a spatial distribution of interfaces; e.g., grain boundaries (GBs) in a polycrystal, interfaces between precipitates and a matrix, or phase boundaries in a multi-phase system. The principle behind the modulation of mechanical properties through microstructure design lies in the interactions between lattice dislocations (carriers of plastic deformation) and homo-/hetero-phase interfaces. Hall-Petch strengthening <cit.> is a remarkable example; it is commonly described as the result of dislocation pileups at impenetrable grain boundaries. To achieve strengthening and toughening simultaneously, researchers have designed and synthesized a spectrum of heterostructured materials <cit.>, such as nanotwinned structures, gradient structured materials, heterogeneous lamella structured materials, dual-phase alloys, etc. One triumph of such a strategy was the understanding of hetero-deformation induced strengthening <cit.>, an interface-mediated mechanism. One consequence of grain-level plasticity interacting with GBs is the activation of GB sliding which, in turn, alters plasticity within the grains. Interface sliding was observed during plastic deformation of bicrystals <cit.> and polycrystalline materials <cit.> and during superplasticity <cit.>. The focus of this work is to predict the interaction of plasticity within grains with interfaces in a rigorous, quantitative manner that respects crystallography and loading conditions. This approach is applicable to all types of homo- (GBs, twins, ...) and hetero-phase interfaces. The interactions between lattice dislocations and interfaces have been widely explored in the literature since they play a major role in plasticity, strengthening, fracture, ... The classic picture of the grain size/interface effect is that lattice dislocations pile up against interfaces, causing back stresses that reduce plastic deformation in grains; this is known as Hall-Petch strengthening.
Although highly simplified, this is a convenient starting point for thinking about the mechanisms by which dislocation/interface interactions affect plasticity. There is considerable evidence that this pileup may be partially relaxed by slip transfer across an interface (i.e., precipitate cutting <cit.>). Transmission electron microscopy (TEM) observations have shown many different forms of dislocation-interface interactions; even within the same material. Kacher et al. <cit.> showed that dislocations may be “transmitted” across a GB and/or be “reflected” back into their source-grain (see Fig. <ref>a). Other observations show that GBs/interfaces themselves may evolve as a result of plastic deformation; e.g., dislocation-GB interactions lead to the formation <cit.> and motion <cit.> of line defects within the GB/interface plane (see Fig. <ref>b and c). These phenomena have also been observed in atomistic and multi-scale simulations <cit.>. Hence, GBs/interfaces are much more than simply blocking dislocations; they are a mediator of plasticity in the grains – sliding, transmitting, absorbing and reflecting lattice dislocations, ... And, GBs/interfaces in a single material system can behave very differently, depending on bicrystallography and loading. Extensive experimental observations of dislocation/GB interactions led Lee, Robertson, Birnbaum, and co-workers <cit.> to propose several criteria to identify the factors that account for slip transfer across a particular GB for a particular loading condition (i.e., the “LRB” criteria). The LRB criteria suggest that slip transfer is favorable for the slip systems in two grains which (i) are well aligned (in both slip planes/directions), (ii) result in the smallest residual Burgers vector left in the GB, and (iii) possess large Schmid factors. While these criteria are based upon sound reasoning, they are empirical and not always mutually consistent. Atomistic <cit.> and coupled atomistic/discrete dislocation methods <cit.> were employed to examine the LRB criteria. Dewald and Curtin <cit.> modified the LRB criteria to account for dislocation dissociation, non-Schmid stresses and step formation. Nonetheless, the proposed criteria simply represent a systemization of observations. A bicrystallographically-based, theoretical understanding should not only be consistent with TEM observation and be consistent with mechanistic observations from experiments and atomistic simulations, but should be predictive. Several mesoscale, dislocation-based simulation models have been proposed for interface-mediated plastic deformation. Commonly used dislocation-based mesoscale, single crystal, plasticity models include discrete dislocation dynamics (DDD) <cit.>, continuum dislocation dynamics (CDD) <cit.> and crystal plasticity finite element methods (CPFEM) <cit.>. Although Neumann boundary condition (BC) oversimplifies what happens at interfaces, this approach is easily applied; hence, in most mesoscale simulations, GBs are treated as Neumann BCs (i.e., impenetrable to dislocations or plastic strain); this BC was employed in CPFEM <cit.>, DDD <cit.> and CDD <cit.> simulations. Note that even if an interface is treated as a Neumann BC, plastic deformation on one side of the interface affects that on the other side (through a dislocation pileup induced concentrated stress field). 
Neumann BCs do not account for the deformation-induced evolution of an interface or for interface sliding, nor do they distinguish between interfaces of different types – except (rarely) through changes of empirical parameters. Some simulation approaches model the effects of dislocation-interface interactions by invoking the empirical LRB criteria <cit.>. For example, in one case, dislocation transmission through a GB is assumed to occur through a Frank-Read mechanism when criterion (iii) is satisfied <cit.>. In others, both criteria (ii) and (iii) were considered to determine whether a dislocation will penetrate a GB <cit.>. In some cases, LRB-like criteria (i.e., criteria established upon slip system alignment and the resolved shear stress) were invoked to determine dislocation absorption/emission at a GB <cit.>. Other semi-empirical GB models were incorporated into CPFEM to study slip transfer in polycrystals <cit.>; e.g., Ma et al. <cit.> considered dislocation transmission according to criterion (ii) (i.e., the minimum residual Burgers vector) in a CPFEM study. Here, the GB is modeled by an element whose slip resistance is assumed to be proportional to the square of the minimum residual Burgers vector. Mayeur et al. <cit.> modeled a GB as an interface-affected zone in CPFEM, where the probability of each transfer event is determined based upon criteria (i) and (iii). While these models achieved some success, the empirical nature of the LRB criteria makes such models unreliable. Also note that these models did not include the possibility of dislocation glide in the GB, i.e., GB sliding. Many phenomenological models include interactions between dislocations and GBs within a strain gradient plasticity framework <cit.>. A general GB model was developed by Gurtin <cit.> based on the continuum thermodynamic Coleman-Noll procedure, which was partially implemented in a finite element method by Özdemir et al. <cit.> (this approach couples bulk and GBs via a microscopic force balance). One merit of this method is that grain misorientation and GB orientation may be included through slip-interaction moduli. This method was extended to microstructure-motivated higher-order internal boundary conditions and numerically implemented to investigate the influence of GBs on shear deformation at both macroscopic and microscopic levels <cit.>. Piao and Le <cit.> noted that standard strain gradient plasticity models ignore the configurational entropy and effective temperature which enable dislocations to satisfy universal laws for plastic flow <cit.>; an alternative continuum approach that accounts for configurational entropy was developed and applied to study dislocation transmission/absorption at a GB at a continuum level. While these models are theoretically more rigorous, the physical mechanisms observed in experiments and atomistic simulations are not explicitly reflected. For example, GB/interface dislocations (residual dislocations) are not arbitrary; their Burgers vectors must correspond to the bicrystallographic translational symmetry <cit.>. Again, these methods do not correctly capture interface sliding induced by dislocation-interface interactions, although it is widely observed in experiments and atomistic simulations <cit.>; Gurtin <cit.> did propose a GB sliding approach, but this has not been implemented (to our knowledge).
In this paper, we address the same question raised by Gurtin <cit.>: “Is there a physically natural method of characterizing the possible interactions between the slip systems of two grains that meet at a grain boundary – a method that could form the basis for the formulation of grain boundary conditions?”. More general, this question applies to all interfaces – not just GBs. Our approach is based upon irreversible thermodynamics (Onsager relations <cit.> and Ziegler's maximum entropy production principle <cit.>). We first propose a Burgers vector reaction-based dislocation-interface interaction model motivated by a wide range of experimental observations and consistent with crystallography constraints. The admissible Burgers vectors of the lattice dislocations are determined by the orientation and lattice structure of the adjoining grains. The admissible Burgers vectors of interface dislocations (or disconnections) must be consistent with the displacement-shift-complete (DSC) vectors determined from the bicrystallography <cit.>. We describe Burgers vector reactions by a linear relationship between dislocation fluxes entering, leaving and within the interface and the applicable driving forces. The kinetic coefficient tensor is constrained by the requirement of Burgers vector conservation (or equivalently, compatibility of interface deformation). The overall kinetics are described by a tensorial Robin boundary condition at the interface that is consistent with the linear response theory. Our model is bicrystallography-respecting, explicit about Burgers vector reactions, and simple to implement within different plasticity simulation schemes. The paper is organized as follows. In Section <ref>, a general model for individual dislocation reactions at interface is formulated as an interface boundary condition and then extended to multiple slip systems/reactions case (several special cases are discussed to validate the interface BC). In Section <ref>, the mesoscale interface BC is applied to a one-dimensional continuum dislocation dynamics (CDD) model to examine the dislocation-GB interactions under different crystallographic orientations. We then propose rigorous dislocation transmission conditions and compare those with the predictions from LRB criteria. the transmitted dislocation density is defined to measure the amount of dislocations reacted at the interface and then they are compared with each LRB criterion. The results show that while the LRB criteria are reasonable, they fail in many cases and are not always self-consistent. § INTERFACE BOUNDARY CONDITION Consider the interface-slip system configuration illustrated in Fig. <ref>a, which shows two slip planes in Phase α (labeled (1) and (2)) and a single slip plane in Phase β (labeled (3)); the two phases are divided by an interface plane (labeled (4)). The Burgers vector of the dislocations on plane (i) is 𝐛^(i). 𝐛^(1), 𝐛^(2) and 𝐛^(3) are lattice dislocation Burgers vectors. 𝐛^(4), on the other hand, is not an arbitrary vector in the interface (like the residual Burgers vector commonly defined in the literature), but must be a DSC lattice vector (i.e., determined from the bicrystallography) <cit.>. For example, for a coherent twin boundary in a face-centered cubic crystal, an admissible interface dislocation is a twinning partial with 𝐛 = ⟨ 112⟩ a_0/6 (see the dislocations along the twin boundaries in Fig. <ref>c). Suppose that dislocations with 𝐛^(1) glide from the Phase α interior to the interface with a flux J^(1). 
The following Burgers vector reaction may occur where 𝐛^(1) contacts the interface (the blue point in Fig. <ref>a): J^(1)𝐛^(1)→ -J^(2)𝐛^(2) -J^(3)𝐛^(3) -J^(4)𝐛^(4), i.e., the dislocations 𝐛^(2), 𝐛^(3) and 𝐛^(4) are nucleated at the contact (the minus sign before J^(i) denotes dislocations flowing away from the contact). In this case, J^(2), J^(3) and J^(4) correspond to dislocation reflection, dislocation transmission and interface sliding, respectively. In general, reactions follow from Burgers vector (flux) conservation: ∑_i=1^4 J^(i)𝐛^(i) = 0, i.e., the four vectors {J^(i)𝐛^(i)} form a tetrahedron with the origin located at the centroid (see the Fig. <ref>a inset). Note that to keep dimensions consistent, we consider the flux along the interface J^(4) as the number of dislocations crossing a point in the interface (in 2D) per unit time divided by the interfacial width δ^I. Equation (<ref>) is not to be confused with a condition of zero net strain rate at the interface (this follows from how the fluxes {J^(i)} are defined – all pointing away from the contact point). For example, if two phases are identical with no interface (i.e., a single crystal), dislocations should simply move across the contact point without reaction; in this case, b^(1)=b^(2) and J^(1)=-J^(2) such that Eq. (<ref>) is trivially satisfied and the strain strain rate is J^(1)b^(1)=-J^(2)b^(2) (i.e., the results are compatible with, but not governed by, Eq. (<ref>)). As a second example, consider an impenetrable, non-reflecting and static interface (J^(2)=J^(3)=J^(4)=0); here Eq. (<ref>) yields J^(1)=0, which is the classical Neumann BC. We now focus on the slip plane (i)/interface intersection; for the example shown in Fig. <ref>b, the slip plane has unit normal 𝐧^(i), dislocation slip direction 𝐬^(i) (parallel to 𝐛^(i)), and line direction ξ^(i), from which we define 𝐥^(i)≡𝐧^(i)×𝐬^(i) and ζ^(i)≡ξ^(i)×𝐧^(i). At the dislocation/interface intersection, ξ^(i) is parallel to the intersection line and (𝐧^(i), ξ^(i), ζ^(i)) form a local coordinate frame; the dislocation density ρ^(i) is the number of dislocation lines threading a unit area with normal ξ^(i). The dislocation flux is 𝐉^(i) = J^(i)ζ^(i), where J^(i) is the number of dislocations cutting through a unit segment in 𝐧^(i) with velocity in ζ^(i) per unit time. The Orowan equation defines the dislocation flux J^(i) = ρ^(i)v^(i) = γ̇^(i)/b^(i) , where v^(i) is the dislocation velocity along the ζ^(i)-axis and γ̇^(i) is the shear rate on the i^th slip system (the dislocation density on the interface is normalized by the interface width δ^I). Hence, Burgers vector conservation, Eq. (<ref>), is equivalent to the requirement that ∑_i γ̇^(i)𝐬^(i) = 0; this guarantees the continuity of the displacement rate component normal to the interface plane. The reaction kinetics described in Fig. <ref>b and Eq. (<ref>) may be developed based upon linear response theory <cit.>. We assume that, at a point on an interface where reaction occurs, the Burgers vector flux is linearly proportional to the driving force such that the interface BC is ([ J^(1) b^(1); J^(2) b^(2); J^(3) b^(3); J^(4) b^(4) ])_ = ([ L_11 L_12 L_13 L_14; L_22 L_23 L_24; L_33 L_34; sym. L_44 ]) ([ f^(1); f^(2); f^(3); f^(4) ])_, where the left-hand side represents a list of Burgers vector fluxes (not dislocation number fluxes), f^(i) is the driving force exerted on Burgers vector 𝐛^(i), and the kinetic coefficient tensor 𝐋 = [L_ij] is symmetric (in accordance with the Onsager reciprocity theorem). 
The subscript “I” denotes quantities evaluated at a point on the interface. The derivation of Eq. (<ref>) based on the maximum entropy production principle <cit.> is given in <ref> – in the derivation we assume local equilibrium and small driving forces. We may understand Eq. (<ref>) from another viewpoint. When a dislocation glides in a grain interior, the dislocation velocity is often expressed by v=Lτ^m, where L is the mobility, τ is the resolved shear stress, and m is unity (in the overdamped regime). This equation can be rewritten as Jb=Lf, where J=ρ v is the flux, and f=τρ b is the driving force. We assume the same form for the reaction kinetics at the interface in Fig. <ref>a, but with two differences. First, the fluxes J considered in Eq. (<ref>) correspond to the annihilation and production rates of dislocations at the interface, rather than movement of dislocations. Hence, mobility L should be treated as a reaction kinetic constant for Burgers vector annihilation and production, rather than dislocation mobility. (Once dislocations are produced at the interface from Burgers vector reaction, they move with different mobility laws within grains.) Second, letting the flux J^(i) on each slip system only depend on the driving force f^(i) on that slip system alone; i.e., writing J^(i)b^(i)=L^(i)f^(i) would not satisfy Eq. (<ref>) for general driving forces on different slip systems. Instead, Eq. (<ref>) will be satisfied for general driving forces if J^(i) also depends on driving force on other slip systems. Therefore, the kinetics law governing the interfacial reaction is written in tensorial form, Eq. (<ref>); we now discuss each term in Eq. (<ref>). The driving forces on the dislocations, {f^(i)}, are Peach-Koehler (PK) force (see <ref>). On the i^th slip system and under stress σ (including external and internal stresses), the PK force is f^(i) = 𝐟^(i)·ζ^(i) = [σ(ρ^(i)𝐛^(i))] ×ξ^(i)·ζ^(i) = τ^(i)ρ^(i) b^(i), where ξ^(i) is the dislocation line direction at the interface, {ζ^(i),ξ^(i),𝐧^(i)} form a local coordinate frame as shown in Fig. <ref>b, and τ^(i)≡𝐬^(i)·σ𝐧^(i) is the resolved shear stress (RSS) on slip system (i) (𝐬^(i) is the slip direction). Substituting the driving forces Eq. (<ref>) into Eq. (<ref>), the interface BC described by Eq. (<ref>) has the form of a Robin BC, i.e., flux is proportional to the densities {ρ^(i)}. The dislocation fluxes at the interface are the result of dislocation reactions, which are constrained by Burgers vector conservation Eq. (<ref>). This constraint requires that the coefficient tensor takes the form 𝐋 = κ([ (c^(234))^2 c^(234)c^(314) c^(234)c^(124) c^(234)c^(132); (c^(314))^2 c^(314)c^(124) c^(314)c^(132); (c^(124))^2 c^(124)c^(132); sym. (c^(132))^2 ])= κ𝐜⊗𝐜, where 𝐜≡ (c^(234), c^(314), c^(124), c^(132))^T, c^(ijk)≡𝐬^(i)·𝐬^(j)×𝐬^(k), and κ is the overall reaction constant for reaction Eq. (<ref>) (κ is discussed below). Hence, there is one kinetic parameter and all other parameters are purely geometric, which is reasonable as there is only one Burgers vector reaction involving four slip systems (including the slip systems in the interface). Clearly, Eq. (<ref>) does guarantee that Eq. (<ref>) is always satisfied under arbitrary driving forces. We obtain the interface BC by substituting Eq. (<ref>) and Eq. (<ref>) into Eq. 
(<ref>): 𝐉_ = κ𝐁^-1 (𝐜⊗𝐜) 𝐁𝐓_ρ_, where 𝐉≡ (J^(1), J^(2), J^(3), J^(4))^T and ρ≡ (ρ^(1), ρ^(2), ρ^(3), ρ^(4))^T are the generalized dislocation flux and density, 𝐁≡diag(b^(1), b^(2), b^(3), b^(4)), 𝐓≡diag(τ^(1), τ^(2), τ^(3), τ^(4)), and the subscript “I” denotes the quantities evaluated at a point on the interface. In this equation, 𝐜⊗𝐜 depends on how the slip systems in the two phases and the interface plane are oriented; hence, it may be related to LRB criterion (i). 𝐓 is simply a matrix of the resolved shear stresses, which may be related to the LRB Schmid factor criterion (ii). 𝐁 is a matrix of the Burgers vectors, which may be related to the LRB residual Burgers vector criterion (iii). In this sense, interface BC Eq. (<ref>), is related to the three empirical LRB criteria; we demonstrate this point in Section <ref>. κ is the reaction constant which depends on the microscopic dislocation reaction mechanism, materials properties and temperature. Harmonic transition state theory suggests that it is reasonable to write κ = (κ_0/T) e^-Q/k_B T, where T is the temperature, k_B is the Boltzmann constant, κ_0 is an attempt frequency, and Q is the energy barrier along the reaction path. The parameter Q contains all the complexity at atomic scale: e.g., the reaction rate will depend sensitively on whether the dislocations are dissociated and the stacking fault energy <cit.>. Q may be determined from atomistic simulations; e.g., Zhu et al. <cit.> obtained Q for the interaction between a screw dislocation and a coherent twin boundary in copper by atomistic simulations. Their work also suggests that Q may be extracted from experiments by, for example, measuring the the strain-rate sensitivity as a function of stress. Although the interface BC proposed for the reaction involving four slip systems (including the slip systems in the interface) is always valid, we examine two special cases of the interface BC to show that it behaves as expected. Case 1: Two collinear Burgers vectors: e.g., b^(1)𝐬^(1) and b^(2)𝐬^(2) are parallel (i.e., 𝐬^(1) = 𝐬^(2)) such that Eq. (<ref>) reduces to (J^(1)b^(1)+J^(2)b^(2))𝐬^(1)+J^(3)b^(3)𝐬^(3)+J^(4)b^(4)𝐬^(4)=0. If 𝐬^(3) and 𝐬^(4) are not collinear, we recover: J^(3)b^(3)=J^(4)b^(4)=0 and J^(1)b^(1)=-J^(2)b^(2). If 𝐬^(3) and 𝐬^(4) are also collinear J^(1)b^(1)=-J^(2)b^(2) and J^(3)b^(3)=-J^(4)b^(4). In each case, the reaction only involves two slip systems. For example, based on Fig. <ref>a, direct transmission (without reflection and a residual Burgers vector) can occur when 𝐛^(1), 𝐛^(3) and the intersection between plane (1) and (3) are collinear. Case 2: Three co-planar Burgers vectors: e.g., b^(1)𝐬^( 1), b^(2)𝐬^(2) and b^(3)𝐬^(3), such that only two are independent. Choosing 𝐬^(1) and 𝐬^(2) as the basis vectors, 𝐬^(3) can be represented as a linear combination: p𝐬^(1)+q𝐬^(2) (p and q are combination coefficients). Thus, Eq. (<ref>) becomes (J^(1)b^(1)+pJ^(3)b^(3))𝐬^(1) + (J^(2)b^(2)+qJ^(3)b^(3))𝐬^(2)+J^(4)b^(4)𝐬^(4)=0; i.e., three homogeneous equations in three unknowns (i.e., the coefficients before {𝐬^(i)}). The solution is J^(4)b^(4) = 0 and J^(1)b^(1) : J^(2)b^(2) : J^(3)b^(3) = p:q:-1. This reaction only involves three slip systems with coplanar Burgers vectors. Since p and q are only geometry-dependent, the magnitudes of {J^(i)b^(i)} (i=1,2,3) have only one degree of freedom. Case 2 applies in two dimensions (2D); e.g., dislocations in monolayer graphene. 
In a 2D space, the vectors 𝐉 and ρ contain three components, 𝐁 and 𝐓 are 3× 3, and 𝐜 = (𝐬^(2)·𝐧^(3), 𝐬^(3)·𝐧^(1), 𝐬^(1)·𝐧^(2))^T. One example in Section <ref> is an application of this 2D model. Above, we only considered the case of a single Burgers vector reaction which may involve up to four slip systems (including the interface). In practice, there may be multiple slip systems in each phase and the interface; reaction may occur amongst any quadruple of these. If there are N slip systems, there will be C_N^4 ≡ M possible reactions. We label the l^th reaction by subscript “l” (l = 1, ⋯, M) such that Eq. (<ref>) (for the l^th reaction) is 𝐉_l, = κ_l 𝐁_l^-1 (𝐜⊗𝐜)_l 𝐁_l 𝐓_l,ρ_l,, where all quantities with subscript l are evaluated with parameters for the four relevant slip systems, and κ_l is the associated reaction constant. There are M equations similar to Eq. (<ref>). The total Burgers vector fluxes, due to all M reactions is (see <ref> for details) 𝐉̅_ = ∑_l=1^M 𝐉̅_l, = 𝐁̅^-1[ ∑_l=1^M κ_l (𝐜⊗𝐜)_l] 𝐁̅𝐓̅_ρ̅_, where the overline indicates the extended quantities: 𝐉̅≡ (J^(1), ⋯, J^(N))^T, ρ̅≡ (ρ^(1), ⋯, ρ^(N))^T, 𝐁̅≡diag(b^(1), ⋯, b^(N)), 𝐓̅≡diag(τ^(1), ⋯, τ^(N)), and (𝐜⊗𝐜)_l is the matrix (𝐜⊗𝐜)_l extended to N× N dimensions with zero padding. The general result, Eq. (<ref>), is the major result of this paper. The reaction constants {κ_l} are associated with the detailed atomic-scale mechanisms for the M reactions and thus, depend on interface structure. In this sense, {κ_l} should also depend on the macroscopic degrees of freedom of an interface (misorientation, inclination, misfit, ...). § APPLICATIONS The interface BC, Eq. (<ref>) or (<ref>), may be easily implemented in different simulation methods, such as CDD, DDD and CPFEM. While we have done such implementations, detailed descriptions are beyond the scope of this paper. Below, we demonstrate the application of the interface BC for the case of a minimal, one-dimensional (1D) CDD model for simplicity and transparency. The goal here is to examine how the interface BC works and if the result is consistent with the empirical LRB criteria. Consider the simple bicrystal microstructure illustrated in Fig. <ref>, which represents a 1D bicrystal, periodic along x. Each period consists of two phases, α and β with domain sizes, λ^α and λ^β. Each phase domain is delimited by two, symmetry related interfaces at x=0 and x=λ^β. For simplicity, assume that there is one slip system in each phase; the slip direction and slip plane normal are 𝐬^(i) and 𝐧^(i) for (i) ∈{α,β}. To model plastic deformation within the grains, we apply the dislocation-density-function crystal plasticity model of Leung et al. <cit.>. We denote the densities of dislocations of opposite sign on each slip system by subscripts “+” and “-” (superscripts denote individual slip systems) such that the dislocation flux 𝐉^(i)_+/-=ρ^(i)_+/-𝐯^(i)_+/-, where 𝐯^(i)_+/- is the dislocation velocity. In the 1D problem, ρ^(i) represents the dislocation density averaged over one period along y (see <ref>). We describe the dislocation velocity magnitude by the power law v^(i)_+/-= ±sgn(τ^(i)) v^*(i)|τ^(i)/τ^*(i)|^n, where v^*(i) and the slip resistance τ^*(i) are material parameters for phase (i), and n is a constant that depends on the range of stress; n≈ 1 at low stress <cit.> and much higher at large stress <cit.>. The velocity law Eq. (<ref>) applies to grain interiors, while Eq. (<ref>) only describes the dislocation reaction at the interface. 
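As an illustration of how such an interface reaction law could be evaluated numerically, the following Python/NumPy sketch assembles the geometric vector 𝐜 from the four slip directions and returns the reaction fluxes for a single Burgers-vector reaction (the four-slip-system case above). The slip geometry, Burgers vector magnitudes, stresses, densities and the reaction constant κ used in the example are arbitrary illustrative values, the normalization of the interface flux by the interface width is omitted, and this is not the authors' implementation.

```python
import numpy as np

def triple(a, b, c):
    """Scalar triple product a . (b x c)."""
    return float(np.dot(a, np.cross(b, c)))

def reaction_fluxes(s, b, tau, rho, kappa):
    """Fluxes J^(i) for one Burgers-vector reaction among four slip systems.

    s     : (4, 3) unit slip directions; s[3] is the slip direction in the interface
    b     : (4,)   Burgers vector magnitudes
    tau   : (4,)   resolved shear stresses
    rho   : (4,)   dislocation densities at the interface point
    kappa : overall reaction constant
    """
    c = np.array([triple(s[1], s[2], s[3]),     # c^(234)
                  triple(s[2], s[0], s[3]),     # c^(314)
                  triple(s[0], s[1], s[3]),     # c^(124)
                  triple(s[0], s[2], s[1])])    # c^(132)
    L = kappa * np.outer(c, c)        # kinetic coefficient tensor L = kappa c (x) c
    f = tau * rho * b                 # Peach-Koehler driving forces
    return (L @ f) / b                # since J^(i) b^(i) = (L f)_i

# Illustrative geometry and parameters (arbitrary values).
s = np.array([[1.0, 1.0, 0.0], [1.0, -1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 0.0]])
s /= np.linalg.norm(s, axis=1, keepdims=True)
b = np.full(4, 0.25)
J = reaction_fluxes(s, b, tau=np.array([0.3, -0.1, 0.2, 0.05]),
                    rho=np.array([1.0, 0.8, 0.5, 0.1]), kappa=1.0)

# Burgers vector conservation: sum_i J^(i) b^(i) s^(i) should vanish identically.
print("fluxes:", J)
print("conservation residual:", ((J * b)[:, None] * s).sum(axis=0))
```

Because the components of 𝐜 are the signed volumes spanned by triples of slip directions, the identity Σ_i c_i 𝐬^(i) = 0 holds for any four directions in 3D, so the printed conservation residual vanishes (to machine precision) for arbitrary driving forces, as required by the construction of 𝐋.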
The dislocation density evolution satisfies the balance ρ̇^(i)_+/-=-∇·𝐉^(i)_+/-+ ρ̇_+/-^(i),ann+ ρ̇_+/-^(i),gen (the equation for the 1D problem is in <ref>). As proposed by Arsenlis et al. <cit.>, annihilation of dislocations occurs when two opposite signed dislocations come within a critical capture radius r_; i.e., the annihilation rates are ρ̇_+^(i),ann = ρ̇_-^(i),ann = -ρ^(i)_+ρ^(i)_-r_|v^(i)_+-v^(i)_-|. When a stress is applied, a dislocation pair (opposite signs) is emitted from a source with generation rates <cit.>: ρ̇_+^(i),gen =ρ̇_-^(i),gen =η|τ^(i)|^m, where η and m are constants. Note that in the present model, the sources are assumed to be present everywhere within the grains. The net dislocation density is ρ^(i)=ρ^(i)_+-ρ^(i)_-. The total RSS τ^(i) in Eq. (<ref>) has contributions from both the external and internal (associated with all other dislocations) stress tensors (σ^ext and σ^int). For the simulation results presented here, we apply the external shear stress σ_xy^ext. The internal stress is a functional of dislocation densities on all slip systems (see <ref>). The flux J^ at the interface obeys the reaction law in Eq. (<ref>), but once dislocations are generated they are assumed to move with velocity v^ of the same form as Eq. (<ref>), albeit with different parameters: v^ = (τ^) v^*|τ^/ τ^*|^n, where we set n=1 on all slip systems/interface, although this is not necessary. In the simulations, we employ reduced variables: ρ̃≡ρ (λ^α)^2, x̃≡ x/λ^α, t̃≡ v^* t/ λ^α, ṽ≡ v / v^*, τ̃≡τ / K, where λ^α is the width of the α phase in x (see Fig. <ref>) and K ≡μ/[2π(1-ν)]. For simplicity, we omit the tilde in the reduced quantities below. §.§ Grain Boundaries When phases α and β represent the same structures (but differently oriented), the interface is a grain boundary (GB). GBs strongly affect the mechanical properties of polycrystals <cit.>. On the other hand, different GBs (different misorientations and inclinations) interact with dislocations differently <cit.>, which suggests that the mechanical properties of polycrystals may be adjusted by GB engineering. Since GB properties are functions of misorientation and inclination (and bonding), we examine the effects of θ^α and θ^β (Fig. <ref>) and the interface properties through the coefficient tensor 𝐋 in Eq. (<ref>). Upon application of an external shear stress σ_xy^ext=0.20 (see Fig. <ref>), the dislocation density profile, stress within the bicrystal, and strain evolve. Figures <ref>a-c show (i) the dislocation density at the interface on the α side of the interface ρ^α(x=0), (ii) the average resolved shear stress in Phase α τ̅^α, and (iii) the strain rate associated with GB sliding ϵ̇^ in steady-state (after the transients have relaxed). The GB sliding strain rate is ϵ̇^≡u̇_y(x=0)/δ^≡ρ^ b^ v^, where the term in double brackets is the jump in the y-displacement rate across the interface and δ^ is the interface width. When θ^β = θ^α, the system is a single crystal. This case is indicated by the vertical dashed lines in Figs. <ref>a-c. Not surprisingly, the net dislocation density at x=0 is zero ρ^α(0)=0 (Fig. <ref>a) and no sliding ϵ̇^ = 0 (Fig. <ref>c), as expected since there is no GB. For small misorientations Δθ≡ |θ^α - θ^β| , the magnitude of the dislocation pileup at the GB |ρ^α(0)| and GB sliding rate both increase with increasing misorientation. When θ^α < 45^∘, ρ^α(0)>0 (in the region close to the dashed lines in Fig. <ref>a1 and a2). This may be understood by reference to schematic Fig. <ref>d. 
Inside α, the applied stress drives positive dislocations to the GB which react with negative dislocations drawn to the GB from the β grain to produce zero pile-up (ρ(0)=0) when Δθ = 0; increasing Δθ increases the positive-dislocation pileup. When θ^α > 45^∘, ρ^α(0)<0 (near the red dashed line in Fig. <ref>a4). Figure <ref>f illustrates that when θ^α = 60^∘, the applied stress leads to the pileup of negative dislocations; the pileup increases with increasing Δθ. When θ^α = 45^∘, the resolved shear stress is zero and no dislocations are generated in α. The dislocation density on the α side of the GB ρ^α(0) is the result of dislocation reactions at the GB. As schematic Fig. <ref>e shows, when θ^β > θ^α, the positive dislocations pile up on the β side of the GB with some transferring into the α phase; this is consistent with the θ^β > 45^∘ results in Fig. <ref>a3. When θ^β < θ^α, some negative dislocations are "transmitted" from the β to α phases by reaction, consistent with the data for θ^β smaller than 45^∘ in Fig. <ref>a3. We also observe a cusp in ρ(0) at θ^β = 0 for all θ^α (see Fig. <ref>a). When θ^β = 0, the resolved shear stress in β and the pileup on the β side of the GB, ρ^β(0), are maximal (since the slip plane is aligned with the external shear stress). Figure <ref>g shows the pileup at the GB for the special no-reaction case κ=0 (i.e., a Neumann BC). For this case, the pileup increases sharply for θ < 15^∘. As shown schematically in Fig. <ref>h, when θ^β=0, ρ^β(0) is very large (corresponding to the sharp peak at θ=0 in Fig. <ref>g). Burgers vector reactions then occur at the GB and produce dislocations whose density dominates that on the α side of the GB. From the Burgers vector reaction shown in Fig. <ref>h, the dislocation density on the α side of the GB ρ^α(0) is a reaction product and is always negative. Figure <ref>b shows that, in general, the resolved shear stress on the α side decreases as the misorientation angle increases; it takes larger values at θ^β≈θ^α because little pileup forms near the interface to counteract the external stress. The resolved shear stress reaches its maximum values at θ^β≈ 0 because the only external load is the applied shear stress. Our interface condition allows GB sliding. Figure <ref>c shows the GB sliding rate, v_s = ρ^ v^ b^, vs. θ^β for a set of θ^α values. For a fixed θ^α, no sliding occurs at the interface when θ^β≈θ^α (the single-crystal case). The sliding rate reaches its maximum at θ^β≈ 0 because the resolved shear stress and the dislocation pileup on the β side are largest there, so the reaction between the two sides becomes stronger and sliding at the interface occurs more readily. §.§ Example 2: Heterophase interface A grain boundary is a special type of interface, and interfaces exert profound effects on the mechanical properties of bicrystals and polycrystals. We now consider the heterogeneous case in which the slip resistance of the β phase, τ^β_0, increases while the slip resistance of the α phase, τ^α_0, is kept constant. The simulation results are shown in Fig. <ref>a, b and c. i) The strain rate at the interface for different slip resistances is presented in Fig. <ref>a. The sliding rate at the interface decreases as the slip resistance of the β phase increases. Fig. <ref>b shows that the stress in the β phase increases as its slip resistance increases.
The residual stress equals the sum of the external stress and the internal stress caused by the dislocations. The dislocation pileup in the β phase decreases as the slip resistance increases, so the internal stress induced by the dislocations, which counteracts the external stress, decreases. Hence, the total stress in the β phase increases when the slip resistance of the material becomes larger. ii) As the slip resistance of the β phase increases, the dislocation density of the α phase piling up at the interface increases, while the dislocation density of the β phase decreases, as Fig. <ref>c shows. This is because the motion of dislocations is suppressed as the slip resistance increases, which reduces the dislocation pileup in the β phase. The reaction flux on the α side naturally decreases because there is less dislocation pileup on the β side (see Eq. (<ref>)), so the remaining dislocation density on the α side increases. iii) When the slip resistance of the β phase increases further and reaches 0.40, the dislocation density of the β phase piling up at the interface even changes sign and becomes positive. This occurs because the rate of dislocation transmission from the α phase across the interface exceeds the rate of dislocation pileup in the β phase. In other words, the amount of positive dislocations transmitted from the α phase surpasses the amount of negative dislocations produced by the β phase, and the net dislocation density in the β phase becomes positive. We examine the effects of changing the orientation of the externally applied shear stress (see the top of Fig. <ref>) by varying ψ (0≤ψ≤45^∘) for a set of bicrystals (θ^α, θ^β). For a GB, α and β are the same material, such that we need only consider one side of the GB (phase α here). Figures <ref>a-c show the dislocation density ρ^α(0) with shear orientation ψ=0^∘, 22.5^∘ and 45^∘, respectively. The two rows of figures represent the results under no-reaction (κ=0, Neumann) and reaction (κ>0, Robin – Eq. (<ref>)) boundary conditions. No dislocations react at the GB when κ=0 (Neumann BC). In this case, the dislocation density ρ^α(0) is an extremum when the Schmid factor m^α=±1, and ρ^α(0) is zero when m^α=0 (see Fig. <ref>). When the reaction BC (Eq. (<ref>)) is applied, the dislocation pileup on the β side affects ρ^α(0). When θ^α=θ^β, the system is a single crystal (there is no GB; see the solid black lines in Fig. <ref>a2-c2) and there is no pileup. Focusing on the ψ=0 reaction BC case (Fig. <ref>a2) as an example, for fixed θ^α (any horizontal line), ρ^α(0) is an extremum when the Schmid factor is an extremum, m^β=±1. When the Schmid factor is an extremum, the magnitudes of the dislocation pileup ρ^β(0) and ρ^α(0) are extrema. This is a result of dislocation reactions at the interface; e.g., at θ^α=15^∘, ρ^α(0) is nearly zero because of reactions with dislocations from β, whose density is very high at θ^β=0/m^β=1. When the applied shear orientation ψ increases, the ρ^α(0) map evolves as seen in Fig. <ref>. There are several special points lying on the lines corresponding to m^β=±1 that represent transitions between positive and negative values of ρ^α(0). For the reaction BC cases in Fig. <ref>, the special points for m^β = -1 are indicated by the orange points while those for m^β = 1 by the green points. All of the special points correspond to ρ^α(0)=0. The special points, along with the ρ^α(0)=0 contours, shift towards the upper right with increasing applied shear orientation from ψ= 0 to 45^∘.
While dislocation pile-up maps may be generated for any GB and applied shear stress orientation, knowledge of these points and how they change with ψ provide heuristic guidance. All symmetric tilt GB (STGB) are located along the diagonal line θ^α=-θ^β in Fig. <ref>. For any fixed misorientation Δθ, diagonal lines with slope one represent all possible GB inclinations; e.g., see the cyan dashed lines in Figs. <ref>a2, b2 and c3. The dislocation density ρ^β(0) maps may be obtained by mirroring ρ^α(0) maps about θ^α=-θ^β. §.§ Comparison with the LRB Criteria The LRB criteria are widely used to determine the likelihood of slip transfer across a grain boundary <cit.>. These criteria are empirical; they are deduced from extensive experimental observations and simple crystallographic ideas. To evaluate slip transfer, we focus on the question of how the dislocation density changes across the GB when transmission occurs. First, note that even if there are no dislocation reactions at the GB, the plasticity in one grain affects that in the other through the stress concentrations associated with dislocation pile-ups. Hence, we focus on the change in dislocation density at the interface between cases where slip can occur κ>0 and cannot occur κ=0: Δρ = ρ_κ>0 - ρ_κ=0. Next, we realize that Δρ will be different on the two sides of the interface (depending on grain orientations with respect to one another and the applied stress); hence, Δρ^(i) = ρ^(i)_κ>0- ρ^(i)_κ=0, where (i)∈{α,β}. We now compare the LRB predictions with our simulation observations for Δρ^(i). The first LRB criterion states that slip transfer tends to occur between the pair of slip systems with the minimal misorientation angle. Our interface BC, Eq. (<ref>), fully incorporates the differences in orientation between the two crystals through the tensor 𝐜⊗𝐜; hence, the interface BC may be used to evaluate the first LRB ansatz. Figure <ref>a shows that, in general, |Δρ^(i)| decreases with increasing misorientation angle (Δθ≡ |θ^α - θ^β|). However, this trend is not universal; we see many examples in Fig. <ref>a where small Δθ corresponds to small |Δρ^(i)|. Of course, when the two slip systems each have small Schmid factors, little slip transfer occurs; this effect is included in the third LRB criterion and shows that the three LRB criteria are not necessarily consistent with each other. Therefore, our simulation results based on the interface BC suggest that the first LRB criterion predicts the correct trends but fails in very many particular cases. The second LRB criterion states that slip transfer occurs in a manner that leads to the smallest residual Burgers vector at the interface. The residual Burgers vector associated with any reaction/slip transfer event is simply related to the difference in the Burgers vectors on the two slip systems 𝐛^I = 𝐛^α-𝐛^β, where 𝐛^I is simply b^I directed along the GB. For a GB, this is related to the misorientation angle by b^I = 2b^αsin(Δθ/2) if b^α = b^β (e.g., for a GB). In Fig. <ref>a, we plot the transmited dislocation density |Δρ^(i)| versus b^I and find that, in general, |Δρ^(i)| decreases with increasing b^I. While this trend is consistent with the second LRB criterion, it too fails in many specific cases. The third criterion states that the two slip systems involved in slip transfer are those for which the resolved shear stress in the phases/grains is a maximum. Abuzaid et al. 
<cit.> suggested the use of the combined resolved shear stresses in the two phases/grains for this condition: τ̂≡ (𝐬^α⊗𝐧^α + 𝐬^β⊗𝐧^β) : σ=τ^α+τ^β. Figure <ref>b shows |Δρ^(i)| versus τ̂ from the simulations. We observe that |Δρ^(i)| indeed increases with increasing τ̂. This (third) criterion properly describe the trends, but, again, does not always work. Based on the interface BC (Eq. (<ref>)), when the interface dislocation density is small (ρ^→0), the flux on the α side of the GB is J^α = κ (c^β)^2 ρ^ατ^α + κ (c^β c^α) ρ^βτ^β. For the very special case of a symmetric tilt grain boundary, c^α=c^β and ρ^α=ρ^β for which Eq. (<ref>) reduces to J^α = κ (c^β)^2 ρ^ατ̂. This shows that, for this special case (STGB), the maximum combined resolved shear stress τ̂ produces the maximum flux and the third LRB criterion is exactly correct, while our interface BC is applicable in general (symmetric and asymmetric GBs; heterophase interfaces). The three LRB criteria are insightful, but largely heuristic and are often not consistent with one another. For example, consider the case of applying an external stress σ_xy^ext (σ_xx^ext=σ_yy^ext=0), keeping θ^α=0, and gradually increasing θ^β from 0 to 90^∘. Since the combined resolved shear stress is a minimum at θ^β≈ 45^∘, the third LRB criterion suggests that the transmitted dislocation density |Δρ^(i)| should also be a minimum at this angle. On the other hand, when θ^β gradually increases from 0 to 90^∘, the misorientation angle between the two slip systems Δθ increases monotonically and, according to the first criterion, |Δρ^(i)| will decrease monotonically. This means that |Δρ^(i)| does not reach a minimum at θ^β≈ 45^∘. This simple example demonstrates that the LRB criteria are not self-consistent, unlike our interface BC. § CONCLUSION We proposed a meso-scale boundary condition to model the interactions between dislocations from within grains and interfaces. Our interface BC, based on interface bicrystallography and rigorous linear kinetics, is novel, self-consistent, and easily applied. Consider the following: (i) The interface BC is established based upon experimentally observed reactions between lattice and interface dislocations/disconnections. Burgers vectors are rigorously conserved. (ii) The interface BC is based on basic kinetic theory (the principle of maximum dissipation rate) for dislocation-interface interactions (rather than energy minimization, as in some other models). (iii) The interface BC is applicable to interfaces in all crystalline systems and to multiple slip systems. (iv) Interface sliding naturally occurs in simulations incorporating the interface BC (i.e., interface dislocations participate in reactions at the interface). We employed the proposed interface BC to examine the validity of the empirical LRB criteria for slip transfer across a GB. We demonstrated that the three LRB criteria correctly predict the slip transfer trends but the LRB criteria fail in many cases and are not self-consistent. The interface BC provides a more rigorous and accurate approach to consider all factors that affect slip transfer, including all bicrystallography (including misorientation angle), the residual Burgers vectors, interface sliding and arbitrary external stress. The proposed interface BC can be applied directly in most plasticity simulation methods, such as continuum dislocation dynamics, discrete dislocation dynamics and crystal plasticity finite element method. 
§ ACKNOWLEDGEMENTS JY, DJS and JH were also supported by the National Key R&D Program of China (2021YFA1200202). JY, AHWN and DJS gratefully acknowledge support of the Hong Kong Research Grants Council Collaborative Research Fund C1005-19G. JH acknowledges support of Early Career Scheme (ECS) Grant from the Hong Kong Research Grants Council 21213921. AHWN also acknowledges support from the Shenzhen Fund 2021 Basic Research General Programme JCY20210324115400002 and the Guangdong Province Basic and Applied Research Key Project 2020190718102. § DISLOCATION DENSITY: DEFINITIONS AND KINEMATICS Define the dislocation density vector ρ as the number of dislocations per unit perpendicular to the line direction: ρ = ρξ, where ξ is the dislocation line direction. Note that in this definition, Burgers vector is not included in the dislocation density. The kinematic equation of the dislocation density vector is (Maxwell-Faraday equation) <cit.> ρ̇ = - ∇×(ρ×𝐯), where 𝐯 is the dislocation velocity. The slip plane normal is 𝐧 and the dislocation line direction is always ξ = 𝐞_z. Define the positive slip direction as 𝐬 = 𝐧×𝐞_z; so, (𝐬, 𝐧, 𝐞_z) in a local Cartesian coordinate system. Based on this coordinate system, ρ = ρ(x_s, x_n)𝐞_z and 𝐯 = v(x_s, x_n)𝐬, where x_s and x_n are the coordinates along the 𝐬- and 𝐧-axes and we assume that everything is uniform along the 𝐞_z-axis. In this case, ρ×𝐯 = |[ 𝐬 𝐧 𝐞_z; 0 0 ρ; v 0 0 ]| = ρ v 𝐧. ∇×(ρ×𝐯) = ∇×(ρ v 𝐧) = ∂(ρ v)/∂ x_s𝐞_z = (cosθ∂(ρ v)/∂ x + sinθ∂(ρ v)/∂ y) 𝐞_z, where θ is the angle between 𝐬 and 𝐞_x. Equation (<ref>) becomes ρ̇ = -∂(ρ v)/∂ x_s = -cosθ∂(ρ v)/∂ x - sinθ∂(ρ v)/∂ y. This describes the conservation of dislocation lines. In order to include the dislocation generation/annihilation, we distinguish dislocations of opposite sign. We define the “+/-” dislocation on a particular slip system as the one which moves in the positive/negative slip direction when the resolved shear stress τ > 0 (depending on the definition of 𝐧). The dynamics of + and - dislocation densities on a particular slip system are described by ρ̇_+ = -∂(ρ_+ v_+)/∂ x_s and ρ̇_- = -∂(ρ_- v_-)/∂ x_s. § LINEAR RESPONSE THEORY The linear response approach, underlying Eq. (<ref>), may be deduced based on the maximum entropy production principle <cit.> or equivalently the principle of maximum dissipation rate <cit.>. The general idea behind the derivation is as follows. First, the global entropy production rate is a functional of the (generalized) flux: Σ̇ = Σ̇[𝐉]. Second, we assume that local thermodynamic equilibrium (1^st law of thermodynamics) applies through the constraint: Γ[𝐉] = 0. Finally, we maximize the global entropy production rate Σ̇ with respect to 𝐉 under the Γ=0 constraint via the Lagrange multiplier method: δ(Σ̇ - λΓ) = 0, where λ is the Lagrange multiplier. The three steps are detailed below. Since any nonequilibrium process is characterized by the presence of a flux, the local entropy production rate is a function of the flux: σ̇ = σ̇(𝐉), where 𝐉 is a vector of the fluxes along all slip systems into the interface. When the system is near equilibrium, we expand σ̇(𝐉) about 𝐉 = 0. From symmetry considerations, σ̇ = 𝐉·𝐂𝐉, where 𝐂 is a coefficient tensor. The global entropy production rate is Σ̇[𝐉] = ∫𝐉·𝐂𝐉 V. 
Local thermodynamic equilibrium implies we may write the entropy production rate as u = T s + ∑_i ϕ^(i)ρ^(i) ⇒ ṡ = 1/Tu̇ + ∑_i (-ϕ^(i)/T) ρ̇^(i), where u is the internal energy density, T is the temperature, s is the entropy density; ϕ^(i) = ϕ^(i)(ζ^(i)) is the free energy of a dislocation (per length) on the i^th slip system and located at ζ^(i), where ζ^(i) is the coordinate along the ζ^(i)-axis (see Fig. <ref>) and ρ^(i) = ρ^(i)(ζ^(i)) is the dislocation density at ζ^(i). The continuity equations at each point on the interface are ṡ = σ̇ - ∑_i ∂ J_s^(i)/∂ζ^(i), u̇ = - ∑_i ∂ J_u^(i)/∂ζ^(i), ρ̇^(i) = - ∂ J^(i)/∂ζ^(i), where J_s^(i), J_u^(i) and J^(i) are, respectively, the entropy flux , the energy flux and the dislocation flux flowing from the i^th slip system to the interface point. Substituting Eq. (<ref>) into Eq. (<ref>), we find σ̇ = ∑_i [ J_u^(i)∂/∂ζ^(i)(1/T) + J^(i)∂/∂ζ^(i)(-ϕ^(i)/T) ]. We define the generalized flux as 𝐉≡ (J_u^(1), ⋯, J_u^(4), J^(1), ⋯, J^(4))^T and the generalized force as 𝐟≡( ∂/∂ζ^(1)(1/T), ⋯, ∂/∂ζ^(4)(1/T), ∂/∂ζ^(1)(-ϕ^(1)/T), ⋯, ∂/∂ζ^(4)(-ϕ^(4)/T) )^T. Then, Eq. (<ref>) can be written as σ̇ = 𝐉·𝐟. Thus, local thermodynamic equilibrium leads to the constraint: Γ[𝐉] = ∫ (𝐉·𝐟 - 𝐉·𝐂𝐉) V = 0. To maximize Eq. (<ref>) under the constraint Eq. (<ref>), we construct the functional: Σ̇'[𝐉] = ∫𝐉·𝐂𝐉 V + ∫λ(𝐉·𝐟 - 𝐉·𝐂𝐉) V, and set δΣ̇'/δλ = 0 and δΣ̇'/δ𝐉 = 0. The solution to this variational problem is 𝐉 = 𝐋𝐟, where 𝐋≡𝐂^-1; thus, we have obtained the linear response expression, Eq. (<ref>). Note that the assumptions include: (i) near-equilibrium and (ii) local equilibrium. It remains to examine the physical meaning of the force 𝐟. Recall that f^(i)≡ -(1/T)(∂ϕ^(i)/∂ζ^(i)). The derivative -∂ϕ^(i)/∂ζ^(i) represents the decrease of the free energy of a dislocation (per length) when the dislocation is displaced by a small distance ζ^(i). So, f^(i)T exactly corresponds to the Peach-Koehler force. In Eq. (<ref>), T is absorbed into the coefficient tensor such that the {f^(i)} represent the Peach-Koehler force. § DERIVATION OF THE INTERFACE BOUNDARY CONDITION FOR MULTIPLE SLIP SYSTEMS Extension of the interface BC to the case of multiple slip systems in both phases is given below. Consider the case where there are multiple non-colinear slip systems in both phases and one slip plane along the interface. As an example, assume that there are two slip systems in the α phase, “α_1” and “α_2”, and two in β, “β_1” and “β_2”. Including the interface “I”, there are five slip systems. Label the five slip systems as “1”≡“α_1”, “2”≡“α_2”, “3”≡“β_1”, “4”≡“β_2”, and “5”≡“I”. Reactions may occur amongst any four of the five slip systems. For example, J^(1) b^(1)𝐬^(1) + J^(2) b^(2)𝐬^(2) + J^(4) b^(4)𝐬^(4) + J^(5) b^(5)𝐬^(5) = 0. Below, we enumerate all possible reactions: [ Reaction 1:  (1)+(2)+(3)+(4)=0, Reaction 2:  (1)+(2)+(3)+(5)=0,; Reaction 3:  (1)+(2)+(4)+(5)=0, Reaction 4:  (1)+(3)+(4)+(5)=0,; Reaction 5:  (2)+(3)+(4)+(5)=0, ] where “(i)” is short for “J^(i)b^(i)𝐬^(i)” for the i^th slip system. Take Eq. (<ref>) (Reaction 3) as an example. Similar to Eq. (<ref>), Reaction 3 kinetics may be described by ([ J^(1)_3b^(1); J^(2)_3b^(2); J^(4)_3b^(4); J^(5)_3b^(5) ]) = ([ L_11 L_12 L_14 L_15; L_22 L_24 L_25; L_44 L_45; sym. L_55 ]) ([ τ^(1)ρ^(1) b^(1); τ^(2)ρ^(2) b^(2); τ^(4)ρ^(4) b^(4); τ^(5)ρ^(5) b^(5) ]), where J^(i)_l is the flux on slip system i generated by Reaction l. 
We can rewrite this equation as ([ J^(1)_3b^(1); J^(2)_3b^(2); J^(3)_3b^(3); J^(4)_3b^(4); J^(5)_3b^(5); ]) = κ_3 ([ (c^(245))^2 c^(245)c^(415) 0 c^(245)c^(125) c^(245)c^(142); (c^(415))^2 0 c^(415)c^(125) c^(415)c^(142); 0 0 0; (c^(125))^2 c^(125)c^(142); sym. (c^(142))^2 ]) ([ τ^(1)ρ^(1) b^(1); τ^(2)ρ^(2) b^(2); τ^(3)ρ^(3) b^(3); τ^(4)ρ^(4) b^(4); τ^(5)ρ^(5) b^(5) ]) ⇒  𝐉̅_3, = 𝐁̅^-1κ_3(𝐜⊗𝐜)_3𝐁̅𝐓̅_ρ̅_, where the quantities with overlines are extended to 5× 1 vectors or 5× 5 matrices with zero padding. The tensor (𝐜⊗𝐜)_3 depends on the geometry, which slip systems are available, and which four slip systems participate in Reaction 3. We obtain the relationship akin to Eq. (<ref>) for all the other reactions. The total flux associated with these 5 reactions is 𝐉̅_ = ∑_l=1^5 𝐉̅_l, = 𝐁̅^-1[ ∑_l=1^5 κ_l (𝐜⊗𝐜)_l] 𝐁̅𝐓̅_ρ̅_. This example is for the five slip systems and five reactions case. If there are N slip systems (including the slip systems in the interface) and M ≡ C_N^4 reactions, the interface BC becomes Eq. (<ref>). § ONE-DIMENSIONAL PROBLEM AS COARSE-GRAINING OF THE TWO-DIMENSIONAL PROBLEM A 1D problem can be deduced from a 2D problem by coarse-graining when the dislocation distribution is periodic along the y-axis in Fig. <ref>. We focus on the configuration illustrated in Fig. <ref>a (i.e., a set of dislocation walls distributed along the x-axis). Each dislocation wall consists of a periodic array of edge dislocations and each dislocation sits on its own slip plane. The dislocation density can be written as ρ(x, y) = ϱ(x) ∑_j=-∞^∞δ(y - y_j(x)), where ϱ(x) is the number density of the dislocation walls distributed along the x-axis (i.e., the vertical dashed lines in Fig. <ref>a), y_j(x) = xtanθ - (j+ς)dθ, d is the interplanar spacing, and ς∈ [0,1) is the fractional offset along the y-axis at x=0. We obtain a 1D quantity by averaging (coarse-graining) the corresponding 2D quantity over a period alone y; i.e., D^-1∫_0^D y, where D ≡ dθ is the period along y. Thus, we define the coarse-grained 1D dislocation density as ρ̅≡1/D∫_0^D ρ y = 1/D∫_0^D ϱ∑_jδ(y-y_j) y = ϱ/D = ϱcosθ/d. Substituting Eq. (<ref>) into Eq. (<ref>) and averaging on both sides of Eq. (<ref>) (without the annihilation and generation rates), we have ρ̇̅̇ = - ∂(ρ̅ v_x)/∂ x = - cosθ∂(ρ̅ v)/∂ x. This equation describes the evolution of dislocation density in 1D. (Note that for the 1D problem studied in the main text, we omit the bar of ρ̅ for simplicity.) Equation (<ref>) requires the evaluation of v = (τ) v^* |τ/τ^*|^n and the RSS τ = 𝐬·σ𝐧. The total stress σ has contribution from the external stress σ^ext and the internal one σ^int associated with the elastic interaction between dislocations. σ^ext is a constant in a stress-controlled experiment. The problem is how to calculate σ^int. The internal stress field due to a distribution of dislocations in a 2D space is σ^int(𝐫) = ∬ρ(𝐫') σ^(𝐫-𝐫') 𝐫', where σ^(𝐫-𝐫') is the stress at the point 𝐫 induced by a dislocation located at 𝐫'. Substituting Eq. (<ref>) (with ς=0) into Eq. (<ref>), σ^int(x, y) = ∬ϱ(x') ∑_j δ(y' - y_j(x')) σ^(x-x', y-y') x' y' = ∫ϱ(x') ∑_j σ^(x-x', y-x'tanθ - jD) x' = ∫ϱ(x') σ^(x-x', y-x'tanθ; D) x', where σ^(x-x', y-y'; D) ≡∑_jσ^(x-x', y-y'-jD) is the stress field at (x,y) due to a vertical dislocation wall for which the dislocation spacing is D and one of the dislocations is located at (x', y'). A dislocation wall may be viewed as the superposition of two dislocation walls for which analytical solutions are known; such composition is shown in Fig. 
<ref>b and can be expressed as σ^w(x, y;D) = σ^⊥(x,y;D) + σ^⊣(x,y;D), where σ^⊥/⊣ is resulted from the dislocation wall associated with the Burgers vector 𝐛^⊥/⊣. The analytical solutions to σ^⊥ and σ^⊣ are <cit.> {[ σ_xx^⊥ = -σ_0 sin Y ( cosh X - cos Y + Xsinh X ); σ_yy^⊥ = -σ_0 sin Y( cosh X - cos Y - Xsinh X ); σ_xy^⊥ = σ_0 X (cosh X cos Y - 1) ]. and {[ σ_xx^⊣ = σ_0 X ( cosh X cos Y - 1); σ_yy^⊣ = σ_0 [ 2sinh X(cosh X - cos Y) - X (cosh X cos Y - 1) ]; σ_xy^⊣ = -σ_0 sin Y( cosh X - cos Y - Xsinh X) ]., where σ_0 ≡μ b/D/2(1-ν)(cosh X - cos Y)^2, X ≡2π x/D, Y ≡2π y/D. The resolved internal shear stress is τ^int = 𝐬·σ^int𝐧. From this, we write τ^int(x,y) = ∫ϱ(x') τ^w(x-x', y-x'tanθ; dθ) x', τ^w = τ^⊥ + τ^⊣, τ^⊥(x,y) = σ_0 X[sinh X sin Y sin 2θ + (cosh X cos Y - 1)cos 2θ], τ^⊣(x,y) = σ_0 { [sinh X (cosh X - cos Y) - X(cosh X cos Y - 1)]sin 2θ - sin Y(cosh X - cos Y - Xsinh X) cos 2θ}. Since all dislocations sit on the slip planes, we only evaluate the stress at (x, xtanθ); τ^int(x) ≡τ^int(x, xtanθ). elsarticle-harv
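For illustration only (this is not part of the original derivation), the evaluation of the coarse-grained internal resolved shear stress τ^int(x) described above can be sketched numerically: the analytical wall contributions τ^⊥ and τ^⊣ are summed to τ^w, and the convolution with the wall density ϱ(x') is discretised on the slip plane y = x tanθ. All material parameters, the wall-density profile rho_w, the grid, and the core cutoff below are placeholders, not values used in the text.

import numpy as np

# Illustrative parameters only (placeholders, not values from the text)
mu, nu = 27e9, 0.33          # shear modulus [Pa], Poisson ratio
b, d   = 2.5e-10, 1.0e-8     # Burgers vector [m], interplanar spacing [m]
theta  = np.deg2rad(30.0)    # slip-plane inclination
D      = d / np.cos(theta)   # dislocation spacing within a wall

def tau_wall(x, y):
    """Resolved shear stress tau^w = tau^perp + tau^|- of a single dislocation
    wall centred at the origin, transcribed from the expressions above."""
    X, Y = 2*np.pi*x/D, 2*np.pi*y/D
    sigma0 = mu*b/D / (2*(1 - nu)*(np.cosh(X) - np.cos(Y))**2)
    t_perp = sigma0*X*(np.sinh(X)*np.sin(Y)*np.sin(2*theta)
                       + (np.cosh(X)*np.cos(Y) - 1)*np.cos(2*theta))
    t_par  = sigma0*((np.sinh(X)*(np.cosh(X) - np.cos(Y))
                      - X*(np.cosh(X)*np.cos(Y) - 1))*np.sin(2*theta)
                     - np.sin(Y)*(np.cosh(X) - np.cos(Y) - X*np.sinh(X))*np.cos(2*theta))
    return t_perp + t_par

# Placeholder wall-density profile rho_w(x') on a uniform grid
xw    = np.linspace(-1e-7, 1e-7, 201)      # wall positions x' [m]
rho_w = 1e7*np.exp(-(xw/4e-8)**2)          # walls per unit length [1/m]
h     = xw[1] - xw[0]

def tau_int(x):
    """Coarse-grained internal RSS on the slip plane, evaluated at y = x tan(theta):
    tau_int(x) ~ sum_x' rho_w(x') tau^w(x - x', (x - x') tan(theta)) h."""
    dx = x - xw
    keep = np.abs(dx) > 0.5*h              # crude core cutoff: skip the nearest wall
    tw = np.zeros_like(dx)
    tw[keep] = tau_wall(dx[keep], dx[keep]*np.tan(theta))
    return float(np.sum(rho_w*tw)*h)

print(tau_int(2.0e-8))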
http://arxiv.org/abs/2306.07119v1
20230612135230
Improving Forecasts for Heterogeneous Time Series by "Averaging", with Application to Food Demand Forecast
[ "Lukas Neubauer", "Peter Filzmoser" ]
stat.ME
[ "stat.ME", "stat.ML" ]
Improving Forecasts for Heterogeneous Time Series by "Averaging", with Application to Food Demand Forecast Lukas Neubauer TU Wien Peter Filzmoser TU Wien July 31, 2023 ========================================================================================================== A common forecasting setting in real world applications considers a set of possibly heterogeneous time series of the same domain. Due to different properties of each time series such as length, obtaining forecasts for each individual time series in a straight-forward way is challenging. This paper proposes a general framework utilizing a similarity measure in Dynamic Time Warping to find similar time series to build neighborhoods in a k-Nearest Neighbor fashion, and improve forecasts of possibly simple models by averaging. Several ways of performing the averaging are suggested, and theoretical arguments underline the usefulness of averaging for forecasting. Additionally, diagnostics tools are proposed allowing a deep understanding of the procedure. § INTRODUCTION In many forecasting settings, one encounters a heterogeneous set of time series for which individual forecasts are required. A heterogeneous set of time series may imply time series of different lengths, shapes, or seasonalities. Hence, it may be difficult to model all of them in a joint way, or choose an approach yielding reasonable results for the entire set of time series. Modelling each time series on its own might be an alternative way to handle this challenge, however, this does not make use of the shared properties of the domain. Depending on this domain each time series might be very difficult to model and forecast. Additionally, a practitioner may even want an automatic procedure to produce forecasts. To this end, some work has been done in comparing local and global forecasting approaches, whereas local means modelling each time series by its own. Global models refer to modelling the set of time series simultaneously. <cit.> argue that global and local models can achieve the same forecasts without needing additional assumptions. Yet, global models such as pooled auto-regressive models or neural networks seem to outperform local models on both homogeneous and heterogeneous sets of time series, whereby those global models allow for much higher complexity. The global approach does, however, require a rather high number of total observations, additional tuning of hyperparameters, and might be very difficult to find in the first place. Especially the number of total observations can, in practice, be not sufficiently large. On the other hand, local models are especially hard to use for short time series, thus we want to fill the gap of forecasting a set of time series having short time series with a still rather low number of total observations. If the set of time series allows for a grouped or hierarchical data structure, work has been done by Hyndman et al. Here the authors use the group structure to combine forecasts of various levels in order to produce overall better forecasts (<cit.>, <cit.>). However, oftentimes there is not reasonable group structure in the data. <cit.> investigate further the notion of global models whereas they closely look at the relatedness of the time series. The authors focus on simulation experiments controlling the data-generating process of each time series to allow for arbitary relatedness in the set of time series as well. 
They then apply various machine learning methods to conclude that the performance and complexity of a global models heavily depends on the heterogeneity of the data as well as the actual amount of available data. A set of very heterogeneous time series requires possibly very complex global models which in turn also requires a lot of overall data. This leads us again to the challenge mentioned above. In terms of short time series, there is little literature available to tackle this challenge. <cit.> compare simple and machine learning based models on a set of short time series (14 to 21 observations each) regarding crimes in Mexico. The authors conclude that simple models like simple moving average or ARIMA perform better than more complex models such as neural networks. Considering all mentioned aspects, our contributions are as follows. This paper proposes a meta framework for forecasting a set of possibly very heterogeneous time series by utilizing a range of local models and aggregating them in an appropriate way, exploiting similarity of those time series. That way, we firstly allow for simple models in case of short time series, and still do not require a large total number of observations as is the case for complex global models. Therefore, we can say that our methodology is a mixture approach: using local models while still taking into account the whole set of time series in the forecast procedure. Compared to <cit.> we do not assume a predefined group structure. We do, however, build groups of time series that are close to each other to improve forecasts yielded by the possibly simple local models. The rest of the paper is structered as follows. In Section <ref> we introduce the measure of (dis-)similarity as it is useed in our methodology, namely Dynamic Time Warping (DTW). Afterwards we shortly outline how DTW can be used to obtain an average time series (Section <ref>). Section <ref> introduces the notion of k-nearest neighbors in the context of time series while in Section <ref> we propose several averaging methods to improve forecasts. A simple theoretical motivation is given in Section <ref> where we validate the notion of model averaging under certain assumptions. Section <ref> is about the evaluation technique we apply followed by the data application using food demand data of smart fridges in Section <ref>. Concluding remarks are given in Section <ref> § METHODOLOGY In this paper we propose following methodology. Given a finite set of time series 𝒯 of possibly different properties, we first propose an appropriate dissimilarity measure on 𝒯 by using asymmetric open-begin open-end Dynamic Time Warping, see Section <ref>. This dissimilarity measure is used to construct neighborhoods for each time series in 𝒯. Next, for a fixed time series its neighbors are aggregated in a new way to obtain better one-step ahead forecasts. This aggregation is proposed to be done in multiple ways. First, we look at a time series averaging technique in Section <ref> to construct a representative time series for the neighborhood. Then, based on this barycenter and the entire neighorhood, we propose to perform model-averaging in a k-nearest neighbor fashion (Section <ref>). The actual model-averaging is described in Section <ref>. §.§ Dynamic Time Warping A common technique for measuring similarity of two time series is Dynamic Time Warping (DTW), introduced by <cit.>. Originating from speech recognition, the goal is to align two sequences using a time-warping function in a cost-minimizing manner. 
In detail, we consider the (possibly multivariate) sequences 𝐗=(𝐗_1,…,𝐗_n)'∈ℝ^n× d, 𝐘=(𝐘_1,…,𝐘_m)'∈ℝ^m× d with possibly n≠ m with appropriate dissimilarity function on the rows of 𝐗 and 𝐘 as d(i,j):=d(𝐗_i,𝐘_j). Denote the entries of 𝐗_i and 𝐘_j by X_ik and Y_jk, k=1,… ,d, respectively. A common choice, also used in this paper, is the Euclidean distance, i.e. d(i,j)=√(∑_k=1^d(X_ik-Y_jk)^2). The DTW distance is now defined as DTW(𝐗,𝐘):=min_ϕ∈Φ∑_k=1^K_ϕw_ϕ(k)d(ϕ_𝐗(k),ϕ_𝐘(k)), where ϕ=(ϕ_𝐗,ϕ_𝐘):{1,… ,K_ϕ}→{1,… ,n}×{1,… ,m} denotes the warping function, and w_ϕ are corresponding weights. The warping function ϕ links the two sequences in a cost-minimizing way, e.g. ϕ(k)=(ϕ_𝐗(k),ϕ_𝐘(k)) implies that 𝐗_ϕ_𝐗(k) and 𝐘_ϕ_𝐘(k) are linked together. The length of the warping function K_ϕ depends on the optimal warping function and is therefore chosen alongside the minimization problem (<ref>). The set of all allowed warping functions Φ depends on the type of time-warping which is usually parametrized by a step pattern. This pattern defines the allowed values of ϕ(k) given ϕ_k(1),…,ϕ_k(k-1) for each k. Basic properties of a warping function are as follows. * Monotonicity. We have ϕ_𝐗(k)≤ϕ_𝐗(k+1),ϕ_𝐘(k)≤ϕ_𝐘(k+1) for all k. This allows for only reasonable matchings where the order of the sequences is unchanged. * Slope. The possible (local) slope of ϕ is given by Q_k_s = ∑_i=1^T_k_s q_i^(k_s)∑_i=1^T_k_s p_i^(k_s), where s denotes the step pattern for the warping function, T_k_s gives the number of allowed steps in the k-th step pattern path, and p_i^(k_s),q_i^(k_s) gives the actual size of steps in 𝐗 and 𝐘 direction, respectively. Thus, we have that p_i^(k_s)=ϕ_𝐗(i+1)-ϕ_𝐗(i) for any i, and q_i^(k_s) analogously. The slope is therefore constrained by min_k_sQ_k_s≤ Q_k_s≤max_k_sQ_k_s. * Normalizability. Depending on ϕ, the DTW distance can also be normalized by M_ϕ=∑_k=1^Kw_ϕ(k), i.e. we have nDTW(𝐗,𝐘):=DTW(𝐗,𝐘)/M_ϕ. More properties arise when moving to more concrete warping functions. §.§.§ Symmetric DTW The most common type is the so-called symmetric matching pattern where * the endpoints must be matched, meaning we have ϕ(1)=(ϕ_𝐗(1),ϕ_𝐘(1))=(1,1) and ϕ(K)=(ϕ_𝐗(K),ϕ_𝐘(K))=(n,m), * all points must be matched, i.e. ϕ_𝐗(k+1)-ϕ_𝐗(k)≤ 1, ϕ_𝐘(k+1)-ϕ_𝐘(k)≤ 1 (this can also be seen as a continuity constraint), * allowed steps to get to (i,j) are (i-1,j)→ (i,j), (i,j-1)→ (i,j), (i-1,j-1)→ (i,j) (see also Eq. (<ref>)), implying that * the slope of ϕ is unconstrained allowing for an arbitary amount of time stretching or compression because Q = q/p, p,q∈{0,1} and 0≤ Q≤∞, and * the weights are chosen to be w_ϕ(k)=ϕ_𝐗(k)-ϕ_𝐗(k-1) + ϕ_𝐘(k)-ϕ_𝐘(k-1) such that M_ϕ = n+m. In fact, the step pattern can be written using a Dynamic Programming <cit.> approach by g(i,j)=min( g(i,j-1)+d(i,j) g(i-1,j-1)+2d(i,j) g(i-1,j)+d(i,j) ), 1≤ i≤ n,1≤ j≤ m, such that g(1,1)=w(1)d(1,1)=2d(1,1) and g(n,m)=:DTW_sym(𝐗,𝐘), where n=|𝐗|,m=|𝐘|. The resulting matrix G of Eq. (<ref>) is called the warping matrix. The corresponding warping path can be extracted from these recursive calculations by backtracking. This distance is naturally symmetric and positive definite. However, it is not a metric since the triangle inequality is usually not fulfilled. This pattern type can be useful when considering sequences of similar lengths. However, for differently sized sequences this might not work reasonably anymore due to above constraints. §.§.§ Asymmetric DTW We are particularly interested in subsequence matching where we relax the endpoint constraints. 
This is also denoted as open-begin open-end (OBE) matching <cit.>. The corresponding optimization problem looks as follows. DTW_OBE (𝐗,𝐘) := min_1≤ p≤ q≤ mDTW(𝐗,𝐘^(p,q)), where 𝐘^(p,q)=(𝐘_p,…,𝐘_q)'∈ℝ^(q-p+1)× d is a subsequence of 𝐘, and n≤ m. This type of matchings requires an asymmetric matching pattern. The most basic one is given by g(i,j)=min( g(i-1,j)+d(i,j) g(i-1,j-1)+d(i,j) g(i-1,j-2)+d(i,j) ), 1≤ i≤ n, p≤ j≤ q, implying that continuity is not imposed anymore. In fact, information of 𝐘 can be skipped. The slope is constrained by min Q=0, max Q=2. Further, the distance is only normalizable by n since we consider subsequences of 𝐘 of different lengths. Because of the subsequence matching, we have an initial value of g(1,p)=d(1,p) and g(n,q)=:DTW_asym, OBE (𝐗,𝐘). §.§.§ Other Step Patterns Many more step patterns exist with different types of slope constraints and other properties. Unfortunately, it is often not very clear which step pattern really is the most appropriate one to use. A general family of step patterns is defined in <cit.> by the notion of symmetricPx, and asymmetricPx, respectively, where x controls the slope parameter. Usual values are 0, 0.5, 1 and 2. The larger the slope can possibly be, the more complicated the dynamic programming equations tend to get. As a limiting step pattern one obtains the rigid step pattern for x→∞. As suggested in <cit.>, this step pattern is only reasonable when considering an OBE matching, since it finds the most appropriate subsequence without any gaps and, in fact, does not perform any time warping. To summarize, DTW is a very flexible sequence matching method which yields the optimal matching as well as an associated distance between any two sequences. §.§ DTW Averaging As all dissimilarity measures, DTW can also be used to find a barycenter of a set of sequences 𝒮. A barycenter is usually defined to minimize the sum of distances in the set. Generally, c∈ X for some metric space (X,δ) is called a barycenter of Y⊂ X if ∑_x∈ Yδ(c,x)≤∑_x∈ Yδ(y,x), for any y∈ X. In terms of time series and DTW we have Y=𝒯 as the finite set of time series of interest, X⊃ Y as the set of all possible time series with some maximum length, and δ=DTW. However, the space of possible solutions is very large which makes the search for the average difficult. Therefore, approximative solutions have been developed. In the context of DTW, it is not straightforward to provide a definition for an average. For the symmetric case of DTW, <cit.> have considered pairwise, coordinate-wise averaging, where two sequences are averaged to one sequence until only one sequence is left. This method is easy to implement, but a big downside is that it heavily depends on the order of sequences. A different approach, introduced in <cit.>, is global averaging. Starting from an initial sequence, the average is updated in each iteration based on all matchings of all sequences in the set. As mentioned in their paper, this heuristic naturally reduces the sum of the warping distances to the average in Eq. (<ref>). One aspect to consider is the length of the averaging sequence. In pairwise averaging, this average can grow twice as long in each step. In global averaging, the length of the resulting average is fixed to the length of the initial sequence. We adapt the global averaging methodology to the asymmetric DTW use case as follows. Denote 𝒯 a set of time series of different lengths. 
Our aim is to have a longer barycenter than the time series of interest, hence as the initial average time series we take the longest one available, i.e. 𝐂_0=argmax_𝐗∈𝒯|𝐗|. Then iteratively we compute 𝐂_i, i=1,2,…: * Compute DTW_asym, OBE(𝐗,𝐂_i) for all 𝐗∈𝒯 to obtain ϕ^(𝐗)(k):=(ϕ_𝐗(k),ϕ_𝐂_i(k)) for k=1,…,K_ϕ. * For each time step t=1,…,|𝐂_0| of 𝐂_i, denoted by 𝐂_i,t, let ℂ_i,t:={𝐗_j:𝐗∈𝒯, ϕ^(𝐗)=(j,t)} be the set of all associated 𝐗 time steps. * Update each time step of the averaging sequence by 𝐂_i+1,t = 1|ℂ_i,t|∑_𝐙∈ℂ_i,t𝐙 if |ℂ_i,t|>0, 𝐂_i,t otherwise. We perform this iteration for a fixed number of I times or until the sum in Eq. (<ref>) does not decrease anymore. We denote now the averaging time series as aDBA(𝒯):=𝐂_min(S,I) where S>S^∗ is the time of no further reduction after a start-up period S^∗ and the averaging function aDBA: 𝒯→𝒵 where 𝒵 denotes the set of all possible time series. §.§ Nearest Neighbors For many classification and regression problems, k-nearest neighbors (k-NN) is an easy yet well-performing method to apply and improve predictions. The basic idea is as follows. Consider a metric space (X,δ) and x∈ X. Then a neighborhood around x can be formed computing δ(x,z) for each z∈ X,z≠ x. The k elements with minimal distance to z are then the neighbors of x, i.e. 𝒩(x):={z_1,…,z_k: δ(x,z_1)≤…≤δ(x,z_k)≤δ(x,z), z≠ z_i, i=1,…,k}. In case of ties one could increase the neighborhood's size or choose one of the equally distanced neighbors randomly. For the neighborhood including x itself we write 𝒩(x):=𝒩(x)∪{x}. From a statistical point of view, we consider observations (x_1,y_1),…,(x_N,y_N)∈ X× Y where Y denotes the set of possible labels and X the feature space. We write y(x) if y is the label corresponding to x. In an unsupervised framework there would not be any labels and we would just be able to form neighborhoods. In a classification setting, Y is a discrete set whereas the easiest example is a binary classifier with Y={0,1}. A new observation x_0 may be classified as the majority class of its neighborhood. In a regression setting, the labels are continuous, e.g. Y=ℝ and the predicted label is set to be ŷ(x_0) = f(𝒩̅(x_0)) where f is an aggregation function. In the simplest case we have ŷ(x_0)=1/k∑_x∈𝒩(x_0)y(x), where the predicted label is just the arithmetic mean of the neighborhood's labels. The choice of k is vital, however, it is still not completely clear how to choose it. There are many heuristics, such as choosing k≈√(N), which do not seem reasonable in many applications. In contrast, the optimal k can be chosen based on an optimization criterion. In supervised settings, one usually splits the data in training and test set, and chooses k to minimize a loss function on the training set. This k is then fixed and used to predict on the test set. For more advanced approaches where k is determined adaptively, see <cit.>. §.§.§ Time Series Nearest Neighbors In the context of time series, k-NN has also been widely used. In time series classification, 1-NN is often used in conjunction with DTW <cit.>. Many approaches also consider k-NN in a feature space. Such feature space could consist of time series features like length, trend, auto-correlation properties, and many more. For such cases, DTW is not even used. However, this approach assumes that the features can be extracted in a reasonable way which might not always be the case. 
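To make the preceding ingredients concrete, the following minimal Python sketch computes the symmetric DTW distance with the dynamic-programming recursion quoted above and uses it to build a k-nearest-neighbour set of centred series. It is an illustration only: the methodology of this paper relies on the asymmetric open-begin open-end variant and R implementations, and the toy series generated below are placeholders.

import numpy as np

def dtw_sym(x, y):
    """Symmetric DTW between 1-D series x and y (Euclidean local cost),
    using the recursion g(i,j) = min(g(i,j-1)+d, g(i-1,j-1)+2d, g(i-1,j)+d),
    g(1,1) = 2 d(1,1); normalised by n + m (0-based indices in code)."""
    n, m = len(x), len(y)
    d = np.abs(x[:, None] - y[None, :])          # local cost d(i, j)
    g = np.full((n, m), np.inf)
    g[0, 0] = 2*d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            cand = []
            if j > 0:
                cand.append(g[i, j-1] + d[i, j])
            if i > 0 and j > 0:
                cand.append(g[i-1, j-1] + 2*d[i, j])
            if i > 0:
                cand.append(g[i-1, j] + d[i, j])
            g[i, j] = min(cand)
    return g[-1, -1] / (n + m)

def knn_neighbourhood(y, pool, k):
    """k nearest (centred) series from `pool` around the centred series y."""
    yc = y - y.mean()
    dists = [(dtw_sym(yc, z - z.mean()), name) for name, z in pool.items()]
    return sorted(dists)[:k]

rng = np.random.default_rng(0)
pool = {f"series_{i}": np.cumsum(rng.normal(size=rng.integers(20, 60)))
        for i in range(10)}
print(knn_neighbourhood(np.cumsum(rng.normal(size=25)), pool, k=3))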
In terms of k-NN for regression or forecasting, <cit.> use a simple approach where a single time series can be forecasted by the mean value of neighboring labels based on Euclidean distances of lagged values. Naturally, this method can also be applied to many time series in a pooled manner but this would not use the idea of employing similar time series for improving forecasts, where similarity is based on DTW. §.§.§ Our Setting We let X be a set of heterogeneous and differently sized time series, equipped with the asymmetric open-begin open-end DTW distance measure. Note that this space is not metric, however, k-NN can still be applied in such setting. The label set Y is not uniquely defined. For the first model, then average models (see Section <ref>), this set consists of one-step ahead forecasts for the time series of interest. In the case of first average, then model, the label set and the aggregation function are more complicated and are described in the corresponding section. §.§ Model Averaging In the next section, let δ(x,y)=DTW_asym, OBE(x,y). Further, let ·̂:(x,M)↦x̂_t+1|t,…,s(M) with x=(x_s,…,x_t) a time series and M any model used to forecast x_t+1, i.e. x̂_t+1(M):=x̂_t+1|t,…,s(M) is the one-step ahead forecast of x obtained by using model M. We refer to M as the baseline model which we aim to improve. We understand a model M as a mapping M:𝒯→ℳ, taking a time series x∈𝒯 and outputting M_x∈ℳ where ℳ denotes the set of all possible models. Given a model M=M_x and a time series z we denote ·̃:(z,M_x)↦M̃_x(z) as the refitting function, i.e. M̃_x(z) is the model M_x, estimated on x but fitted on z. Thus, forecasts of M̃_x(z) are always with respect to z. §.§.§ First Average, Then Model Approach The first approach of improving a single forecast yielded by a baseline model is as follows. Given a time series y=(y_s,…,y_t), we find a neighborhood 𝒩_y={z_1,…,z_k_y} based on k-NN. Since the DTW distance also depends on the difference of level in the time series we center each time series first to avoid this effect. Denote z^c:=(z_s_z-z̅,…,z_t_z-z̅) the centered time series for z = (z_s_z,…,z_t_z) with z̅=∑_i=s_z^t_zz_i/(t_z-s_z+1). Then a neighborhood around y is constructed such that δ(y^c ,z_1^c)≤δ(y_c,z_2^c)≤…≤δ(y,z_k_y^c) and z_i = (z_s_i,…,z_t_i) with t≥ t_i, t_i-s_i>t-s. That means we look for neighboring time series which have more historical data but might not have any recent data. Next, as mentioned in Section <ref>, we compute the neighborhood's averaging time series on centered sequences z^c,z∈𝒩_y, given by avg(y):=aDBA({z^c:z∈𝒩_y}). As the name already suggests, we now take the averaging time series avg(y) and map it to M_avg(y). After estimating model M_avg(y) and its parameters we take this model and use it to forecast on y, i.e. we obtain ŷ_t+1(M̃_avg(y)(y)). Note that since M_avg(y) is based on centered data, we allow the initial parameters of M_avg(y) to be re-estimated using data of y. This procedure automatically handles the re-centering. This averaging will be later referred to as G-AVG. In terms of k-NN we can write the aggregation function as ŷ_t+1 =f(𝒩̅(y)) =·̂(y,·)∘·̃(y,·)∘ M∘aDBA∘center(𝒩̅(y)), where, with abuse of notation, we first center the time series of the neighboorhood, then average them, model the resulting averaged time series, and finish by refitting and performing the actual forecast with respect to y. §.§.§ First Model, Then Average Approach As noted previously, for a time series y=(y_s,…,y_t) we obtain a neighborhood 𝒩_y with size k_y. 
However, instead of averaging the neighboring time series in terms of DTW, we consider a different approach. As mentioned, each time series z∈𝒩_y has been modelled, i.e. there exists M_z for any z∈𝒩_y. In terms of k-NN, the averaging now is of the form ŷ_t+1 =f(𝒩̅(y)) =g_w∘·̂(y,·)∘·̃(y,·)∘ M(𝒩̅(y)), where g_w is a weighted averaging function, defined as g_w(x_1,…,x_n):=∑_i=1^nw_ix_i where n denotes the number of forecasts to be averaged. Compared to the G-AVG approach, we clearly see based on Equations (<ref>) and (<ref>) that the approaches basically differ in when averaging is performed. Next, we propose various ways to choose the weights since it is not clear how to choose them in an optimal way. §.§.§ Simple Average The easiest and straightforward way to combine the forecasts is by taking the simple average. We obtain w_z=w_y=1/k_y+1. This assumes that all forecasts are equally important. We will later refer to this type of averaging as S-AVG. Alternatively, we can opt not to use M_y but only utilize the models M̃_z(y) of the neighbors, and set w_z=1/k_y,w_y=0 (S-AVG-N). §.§.§ Distance-weighted Average For the distance-based model average we consider two ways. First, we can take the DTW distances of the neighborhood and set w_z∝1δ(y^c,z^c), z≠ y, implying that closer neighbors in terms of DTW are assigned higher weights in the model averaging. Since δ(y,y)=0, we have to set w_y=0, implying that this type of averaging only regards neighbors' models. After normalizing, the weights are given by w_z=1δ(y^c,z^c)∑_z̃∈𝒩_y1δ(y^c,z̃^c). We denote this type of averaging as D-AVG-N. This averaging does not take into account the forecasts ŷ_t+1(M_y) because w_y=0. In contrast, we can consider the neighborhood's average avg(y) and corresponding distances δ(z^c,avg(y)) for z∈𝒩_y. Due to the averaging algorithm of Section <ref> and a minimum neighborhood size of 1, we have that δ(z^c, avg(y))>0 for all z. That way we can set up weights by w_z =1δ(z^c,avg(y))∑_z∈𝒩_y1δ(z^c,avg(y)), for z∈𝒩_y, denoted by D-AVG. §.§.§ Error-weighted Average The error-based or, equivalently, performance-based weights are computed from the models' historical performance. They might be calculated in two ways. First, for each time series z∈𝒩_y we obtain residuals r_z based on the one-step ahead forecasts ẑ(M_z), i.e. r_z,i:=z_i-ẑ_i(M_z), i=s_z+1,…,t_z. Next, we calculate the root mean squared scaled error (RMSSE) with respect to random walk forecasts introduced in Section <ref>, by RMSSE(z,ẑ(M_z)) = √(1t_z-s_z∑_i=s_z+1^t_zq_z,i^2), q_z,i =r_z,i√(1i-s_z∑_j=s_z+1^i(z_j-z_j-1)^2), and set the corresponding weights for the model averaging to be w_z=1RMSSE(z,ẑ(M_z))∑_z̃∈𝒩̅_y1RMSSE(z̃,ẑ̃̂(M_z̃)). This method will be referenced to as P-AVG. Alternatively, we consider the refitted residuals r_y(M̃_z(y)) with r(M̃_z(y))_i := y_i - ŷ_i(M̃_z(y)), i=s+1,…,t, z∈𝒩_y. Consequently, we obtain errors of RMSSE(y,ŷ(M̃_z(y)))=√(1t-s∑_i=s+1^tq_y,z,i^2), q_y,z,i=r(M̃_z(y))_i√(1i-s∑_j=s+1^i(y_j-y_j-1)^2), and corresponding weights, w_z=1RMSSE(y,ŷ(M̃_z(y)))∑_z̃∈𝒩̅_y1RMSSE(y,ŷ(M̃_z̃(y)) , and denote the method by P-AVG-R. §.§.§ No-Model Approaches In respect to the non-parametric neighboring search of k-NN, we also consider non-parametric forecasts as follows. Following <cit.> where forecasts are computed based on the next available observed value, we apply a similar methodology. 
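Before detailing these no-model variants, the weighted combinations S-AVG, D-AVG-N and P-AVG introduced above can be summarised in a short sketch. All forecasts, distances and error values below are hypothetical placeholders; the only point is how the normalised weights enter g_w.

import numpy as np

def combine_forecasts(fc, weights):
    """Weighted one-step-ahead combination g_w = sum_i w_i fc_i (weights normalised)."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), np.asarray(fc, dtype=float)))

# Hypothetical neighbourhood of y: per-neighbour one-step forecasts, DTW distances
# to y (or to the barycentre), and historical scaled in-sample errors.
fc    = np.array([41.0, 37.5, 44.2])   # forecasts from the refitted neighbour models
dist  = np.array([0.8, 1.3, 2.1])      # DTW distances (assumed strictly positive)
rmsse = np.array([0.95, 1.10, 1.40])   # historical RMSSE of each model

print(combine_forecasts(fc, np.ones_like(fc)))  # S-AVG-N: simple average of neighbours
print(combine_forecasts(fc, 1.0/dist))          # D-AVG-N: distance-weighted
print(combine_forecasts(fc, 1.0/rmsse))         # P-AVG:   performance-weighted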
For the given time series y=(y_s,…,y_t) and neighbors z=(z_s_z,…,z_t_z)∈𝒩_y, we obtain matchings (on which the neighborhood is based on) such that ϕ^(z)(k)=(ϕ_y(k),ϕ_z(k)) for k=1,…,K_z. Since every index of y must be matched, there exists a value k̃ and t̃_z∈{s_z,…,t_z} such that ϕ^(z)(k̃)=(t,t̃_z). Next, there are two cases to differ. * Case t̃_z < t_z: Then there exists a successor u_z=t̃_z+1 with corresponding value of z_u_z which is used to forecast y_t+1. * Case t̃_z = t_z: No successor exists, thus the time series z cannot be used to forecast y_t+1. To this end, we obtain the set of possible successors given by 𝒮_y = {z_u_z: u_z = t̃_z+1, t̃_z <t_z, z∈𝒩_y}, such that |𝒮_y|≤ |𝒩_y|. Naturally, this type of averaging also only considers neighbors' information because y does not have any successor itself. Similarly to before, we can forecast y_t+1 by ŷ_t+1=∑_s∈𝒮_yw_s s with approriate weights. We choose simple weights as in w_s=1/|𝒮_y| (S-NM-AVG) and distance-based weights as in Eq. (<ref>) (D-NM-AVG). Note that the matchings are based on centered time series, hence the forecasts obtained here are actually forecasts for y^c. Since we do not have a model taking care of re-centering, we have to do it by hand by setting ŷ_t+1=ŷ_t+1^c+y̅. §.§ Theoretical Motivation We want to motivate our approach of using the DTW distance. For that we consider just a simple case. Let X be an ANN model <cit.>. An ANN model is also known as Exponential Smoothing. It does not have any trend or seasonality component, and for a time series (X_t,t=1,…,n) it is given by the recursion l_t^X = α X_t + (1-α)l_t-1^X, X̂_t+h|t = l_t^X, h>0, where l^X denotes the level component of the model which is also equal to the flat forecast X̂_t+h|t for any h>0. The model parameter α is usually found by minimizing the sum of squared forecast errors or by maximum likelihood. §.§.§ Theoretical DTW Computation Let X,Y be two independent ANN models. Without loss of generalization, we assume both initial values are equal to 0, i.e. l_0^X=l_0^Y=0 (otherwise we could just look at X-l_0^X). We write X=(X_1,…,X_n)'∼ ANN(α_X,σ^2_X) and Y=(Y_1,…,Y_n)'∼ ANN(α_Y,σ^2_Y). For both we consider an equal length of n=|X|=|Y|. Considering Eq. (<ref>), we can rewrite the recursive equation using a state-space representation as X_t = l_t-1^X+ϵ_t, Y_t = l_t-1^Y+η_t l_t^X = l_t-1^X+α_Xϵ_t, l_t^Y = l_t-1^Y+α_Yη_t, where the innovations ϵ_tiid∼N(0,σ^2_X),η_tiid∼N(0,σ^2_Y) are assumed to be also pairwise independent for t=1,…,n. The innovations are the one-step ahead forecast errors given by ϵ_t=X_t-l_t-1^X. The states l^X are latent and only X_t itself is observable. Next, we give an explicit expression for the asymmetric DTW distance between X and Y when considering the squared L^2 cross-distance. We denote X to be independent of Y if and only if X_i is independent of Y_j for all i,j. This is a simple assumption, however, following results can be easily extended to a more general setup. Let X∼ ANN(α_X,σ^2_X), Y∼ ANN(α_Y,σ^2_Y) be two independent and centered Exponential Smoothing processes of length n. Let d(i,j):=𝔼[(X_i-Y_j)^2] denote the cross-distance between X and Y. Then the asymmetric DTW distance is given by DTW_asym(X,Y)=σ^2_X(n+ n2α_X^2)+σ^2_Y (n+⌊n^2/4⌋α_Y^2). First, note that the cross-distance is given by d(i,j) = 𝔼[X_i^2]+𝔼[Y_j^2] = σ_X^2(1+(i-1)α_X^2) + σ_Y^2(1+(j-1)α_Y^2), due to the independence of X_i, Y_j for every i, j. Next, we need to recursively compute the warping matrix G as in Eq. (<ref>). We have g(1,1)=d(1,1)=σ_X^2+σ_Y^2. 
Due to the cross-distance being increasing in both i and j, we can easily solve Eq. (<ref>) by g(i,j) = min[ g(i-1,j); g(i-1,j-1); g(i-1,j-2) ] +d(i,j) =g(ĩ,j̃) + ∑_k=0^i-ĩ-1d(i-k,j-2k), where ĩ, j̃ denotes the indices where the recursion needs more specific computation. We need this distinction because for small i, j the minimum value is different than expressed in the sum due to some of the values being not assigned. In detail, we have that g(ĩ,j̃)=∑_k=0^ĩ - 1d(ĩ-k,1) if ĩ>0, j̃=1, g(ĩ-1,1)+d(ĩ, 2) if ĩ>1,j̃=2, NA if ĩ=1,j̃>1. Altogether, we obtain, by induction, g(i,j) = i(σ_X^2+σ_Y^2)+ i2σ_X^2α_X^2+⌊j^2/4⌋σ_Y^2α_Y^2 if 2i-j≥ 1, NA otherwise. Setting i=j=n finishes the proof. §.§.§ Relation to Wasserstein Distance Since we are interested in the one-step ahead forecast, we may look at the corresponding forecast distributions. The forecast distributions are given by the conditional distributions of X̂_n+1|n = X_n+1|l_n^X∼ N(l_n^X,σ_X^2) and Ŷ_n+1|n=Y_n+1|l_n^Y∼ N(l_n^Y,σ_Y^2), respectively. Now we want to measure the distance between those two distributions. For that we use the 2-Wasserstein distance, which is defined as follows <cit.>. Let μ,ν be two measures and π(μ,ν) be the set of all couplings of μ and ν. Then W_2(μ,ν) = √(inf_γ∈π(μ,ν)∫ ||x-y||^2dγ(x,y)). In the case of random variables we can also write W_2(X,Y)=√(inf{𝔼[||X̃-Ỹ||^2]: (X̃,Ỹ)∈π(X,Y) }), for any two random variables X,Y. If both are Gaussian, i.e. X∼ N(μ_1,Σ_1) and Y∼ N(μ_2,Σ_2), then the squared 2-Wasserstein distance is readily computed by W_2^2(X,Y) = ||m_1-m_2||^2 + tr(Σ_1+Σ_2-2(Σ_1^1/2Σ_2Σ_1^1/2)^1/2). Details are available in the work of <cit.>. Applying Eq. (<ref>) to the forecast distributions of the ANN models, we can calculate the 2-Wasserstein distance between X̂_n+1|n and Ŷ_n+1|n yielding W_2^2(X̂_n+1|n,Ŷ_n+1|n) = (l_n^X-l_n^Y)^2 + (σ_X-σ_Y)^2 Thus, both Eq. (<ref>), and (<ref>) are quadratic in the model parameters of ANN allowing us to give following theorem. Let 𝒳 be the space of independent ANN processes of length n>n(p) for some p∈(0,1), equipped with the asymmetric DTW distance, and 𝒴 be the space of corresponding Gaussian forecast distributions equipped with the squared 2-Wasserstein distance. Then the map 𝒳→𝒴:X↦X̂_n+1|n is Lipschitz-continuous with Lipschitz constant L<1 and probability at least p. Let X,Y∈𝒳 be two arbitrary ANN processes. We have by Eq. (<ref>) that W^2_2(X̂_n+1|n,Ŷ_n+1|n)=(l_n^X-l_n^Y)^2+(σ_X-σ_Y)^2 for fixed states l_n^X, l_n^Y. Since l_n^X, l_n^Y are both realizations of independent Gaussian random variables, we obtain ℙ( (l_n^X-l_n^Y)^2≤ q_p n(α_X^2σ_X^2+α_Y^2σ_Y^2))=p, where q_p is the p-quantile of a χ^2(1) distribution. Then using Eq. (<ref>) yields p=ℙ (W^2_2(X̂_n+1|n,Ŷ_n+1|n)DTW_asym(X,Y)≤q_pn(σ_X^2α_X^2+σ_Y^2α_Y^2)+(σ_X-σ_Y)^2σ^2_X(n+ n2α_X^2)+σ^2_X (n+⌊n^2/4⌋α_Y^2)) ≤ ℙ (W^2_2(X̂_n+1|n,Ŷ_n+1|n)DTW_asym(X,Y)≤q_p(σ_X^2(1+nα_X^2)+σ_Y^2(1+nα_Y^2))σ^2_X(n+ n2α_X^2)+σ^2_X (n+⌊n^2/4⌋α_Y^2))≤ ℙ (W^2_2(X̂_n+1|n,Ŷ_n+1|n)DTW_asym(X,Y)<1) using that n2,⌊ n^2/4⌋ > n for n>n(p). Thus, the map X↦X̂_n+1|n is Lipschitz-continuous with constant L<1 and probability at least p. If we want to have Lipschitz-continuity with at least 95% probability, then q_0.95<4 and Theorem <ref> holds with n>16. This result also holds when looking at the normalized DTW measure with Lipschitz constant L>1 and is therefore not a contraction anymore. It also tells us that close time series in terms of DTW are also close in their corresponding forecast distribution both in mean and variance. 
Further, it assures us that small changes in the time series only affect the difference in forecast distributions by a small amount. More detailed results are obtained when considering the mean forecasts given by 𝔼[X_n+1|n]:=𝔼[X_n+1|l_n^X]=l_n^X∼ N(0,nσ_X^2α_X^2), and 𝔼[Y_n+1|n]=l_n^Y∼ N(0,nσ_Y^2α_Y^2), respectively. The corresponding 2-Wasserstein distance is computed to be W_2^2(𝔼 [X_n+1|n],𝔼 [Y_n+1|n]) = nσ_X^2α_X^2+nσ_Y^2α_Y^2-2√(n^2σ_X^2σ_Y^2α_X^2α_Y^2) = n(σ_Yα_X-σ_Yα_Y)^2. Let 𝒳 be the space of independent ANN processes of length n>5 equipped with the asymmetric DTW distance, and 𝒴 be the space of corresponding Gaussian forecast distributions equipped with the squared 2-Wasserstein distance. Then the map 𝒳→𝒴:X↦𝔼[X_n+1|n] is Lipschitz-continuous with Lipschitz constant L<1. Let X,Y∈𝒳 be two arbitrary ANN processes. We have that W^2_2(𝔼[X_n+1|n],𝔼[Y_n+1|n])DTW_asym(X,Y) = n(σ_Xα_X-σ_Yα_Y)^2σ^2_X(n+ n2α_X^2)+σ^2_X (n+⌊n^2/4⌋α_Y^2) < 1, using that n2,⌊ n^2/4⌋ > n for n>5. Thus, the map X↦𝔼[X_n+1|n] is Lipschitz-continuous with constant L<1. §.§.§ Reduction of Mean Squared Error Another result is about the relation of DTW and the mean squared error of a convex combination of the mean forecasts. Let Z_n+1|n(w):=w𝔼[X_n+1|n]+(1-w)𝔼[Y_n+1|n] for w∈[0,1]. We have Z_n+1|n(w)=w l_n^X+(1-w)l_n^Y. However, in practice, the states l_n are not known and need to be estimated by actually estimating the smoothing parameter α. To this end, assume there exists unbiased estimators α̂_X, α̂_Y for α_X, α_Y, respectively. We further assume they have finite second moment. The corresponding estimating forecasts are given by ẑ(w). Then, under certain conditions for the estimation errors made for α_X, α_Y we have following result. Let X∼ ANN(α_X,σ^2_X), Y∼ ANN(α_Y,σ^2_Y) be two independent and centered Exponential Smoothing processes of length n and known variances σ_X^2, σ_Y^2. Then a convex combination of the forecasts Ẑ reduces the mean squared error, i.e. 𝔼[(X_n+1-ẑ(w))^2]≤𝔼[(X_n+1-l̂_n^X)^2], if MSE(α̂_Y)≤σ_X^2/2σ_Y^2((1-w^2)MSE(α̂_X)-nDTW_asym(X,Y)/σ_X^2). We have that 𝔼[(X_n+1-ẑ(w))^2] = 𝔼[(l_n^X+ϵ_n+1-(wl̂_n^X+(1-w)l̂_n^Y))^2] = σ_X^2 + 𝔼[(l_n^X-(wl̂_n^X+(1-w)l̂_n^Y))^2], using the independence of the error term ϵ_n+1. Further, we obtain 𝔼[(X_n+1-ẑ(w))^2] ≤𝔼[(l_n^X-l_n^Y)^2] + w^2𝔼[(l̂_n^X-l̂_n^Y)^2] + 𝔼[(l_n^Y-l̂_n^Y)^2] = n(σ_X^2α_X^2+σ_Y^2α_Y^2)+w^2(𝔼[(l̂_n^X)^2]+𝔼[(l̂_n^Y)^2])+ 𝔼[(l_n^Y-l̂_n^Y)^2]. We can quickly compute the last terms of above by 𝔼[(l_n^Y-l̂_n^Y)^2] = ∫𝔼[ (l_n^Y-l̂_n^Y)^2|α̂_Y=a]ℙ(α̂_Y=a)da =nσ_Y^2 ∫ (a-α_Y)^2 ℙ(α̂_Y=a)da =nσ_Y^2 MSE(α̂_Y), and 𝔼[(l̂_n^X)^2] = nσ_X^2𝔼[α̂_X^2] = nσ_X^2(MSE(α̂_X)+α_X^2). In total we get that 𝔼[(X_n+1-ẑ(w))^2] ≤ nσ_X^2( (1+w^2)α_X^2+w^2MSE(α̂_X))+ nσ_Y^2((1+w^2)α_Y^2+2MSE(α̂_Y)) !≤𝔼[(X_n+1-l̂_n^X)^2] = nσ_X^2MSE(α̂_X). Using that nσ_X^2(1+w^2)α_X^2+nσ_Y^2(1+w^2)α_Y^2≤DTW_asym(X,Y), this finally yields 𝔼[(X_n+1-ẑ(w))^2] ≤DTW_asym(X,Y)+n(w^2σ_X^2MSE(α̂_X)+2σ_Y^2MSE(α̂_Y)) ≤ nσ_X^2MSE(α̂_X), if MSE(α̂_Y)≤σ_X^2/2σ_Y^2((1-w^2)MSE(α̂_X)-nDTW_asym(X,Y)/σ_X^2). In practice, the previous theorems tell us that if X,Y are close in terms of DTW and, additionally, the estimation error made when estimating α_Y is smaller than the error made for α_X, then the convex combination forecast improves the point forecast for X_n+1. The condition of MSE(α̂_Y)<<MSE(α̂_X) might have various reasons. In an application, the fit of the model might be better for Y than for X, resulting in better estimation of α. 
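As a small numerical sanity check of the closed-form asymmetric DTW expression derived above, the dynamic program of the asymmetric step pattern can be run directly on the expected cross-distances d(i,j) of two independent ANN processes and compared with the formula of the lemma. The parameter values below are arbitrary, and the full (non-OBE) matching of the lemma is used; the two printed values should coincide.

import numpy as np
from math import comb, floor

def dtw_asym_expected(n, aX, aY, sX2, sY2):
    """Asymmetric DTW computed by dynamic programming on the expected
    cross-distance d(i,j) = sX2*(1+(i-1)*aX^2) + sY2*(1+(j-1)*aY^2)."""
    d = lambda i, j: sX2*(1 + (i-1)*aX**2) + sY2*(1 + (j-1)*aY**2)
    g = np.full((n+1, n+1), np.inf)   # 1-based indexing; unreachable cells stay inf
    g[1, 1] = d(1, 1)
    for i in range(2, n+1):
        for j in range(1, n+1):
            prev = [g[i-1, j]]
            if j >= 2:
                prev.append(g[i-1, j-1])
            if j >= 3:
                prev.append(g[i-1, j-2])
            g[i, j] = min(prev) + d(i, j)
    return g[n, n]

def dtw_asym_closed_form(n, aX, aY, sX2, sY2):
    """Closed form from the lemma: sX2*(n + C(n,2) aX^2) + sY2*(n + floor(n^2/4) aY^2)."""
    return sX2*(n + comb(n, 2)*aX**2) + sY2*(n + floor(n**2/4)*aY**2)

n, aX, aY, sX2, sY2 = 12, 0.3, 0.7, 1.5, 0.8   # illustrative values
print(dtw_asym_expected(n, aX, aY, sX2, sY2))
print(dtw_asym_closed_form(n, aX, aY, sX2, sY2))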
§.§.§ Conclusions These theoretical results give us arguments of our methodology for the most simple cases of models. However, the arguments might be extended to a broader family of models as given in <cit.>. In practice, we also need to use open begin, open end matching since the asymmetric DTW measure cannot be computed once the reference time series is too long. Still similar arguments should hold, and motivate our approach to using DTW neighborhoods and perform model averaging. The theory does not give us hints how to choose the optimal weights, hence we propose the weights of Section <ref>. extension to obe, to also allow differently sized processes When considering sequences of different lengths, we are limited because once Y is twice as long as X or even longer, the recursion cannot be calculated anymore as Eq. (<ref>) suggests. This happens quite often in practice, hence we use the open-begin open-end matching procedure. Similarly to before, we obtain Let X∼ ANN(α_X,σ^2_X),Y∼ ANN(α_Y,σ^2_Y) be two independent and centered Exponential Smoothing processes of lengths n and m. Let d(i,j):=𝔼[(X_i-Y_j)^2] denote the cross-distance between X and Y. Then the asymmetric DTW distance for any 1≤ p≤ q≤ m is given by DTW_asym(X,Y^(p,q))=σ^2_X(n+ n2α_X^2)+σ^2_Y (n+ c(p,q)α_Y^2), with a constant c(p,q) defined by c(p,q)= p-1+(q-p)^24 q-p even, 1+1+q-p^22+q^2-(p+1)^24 q-p odd. miniming DTW_asym(X,Y^(p,q)) yields p=q=1 since the constant in minimized there. Unfortunately this does not make much sense in practice. § EVALUATION METHODS For the evaluation of our methods we use scaled one-step ahead forecast errors as proposed by <cit.>, adapted to our problem setting of on-line forecasting. The authors denote the mean absolute scaled error (MASE) by MASE(y,ŷ) = 1t-s∑_u=s+1^t |q_u|, q_u = y_u-ŷ_u1/t-s∑_v=s+1^t|y_v-y_v-1|, where y=(y_s,…,y_t) is a time series with one-step ahead forecasts ŷ=(ŷ_s+1|s,…,ŷ_t|t-1). This means we scale each error by the average in-sample error when using the last available observation as the one-step ahead forecast, also known as the random walk forecast. As mentioned by <cit.>, one advantage of this measure is its independence of scale, making it easier to compare results of different time series. Since the measure compares the actual forecast with the mean forecast error based on a random walk forecast, we can say that if MASE<1, then the forecast method used to obtain ŷ works better on average than the naive approach of using the last available value. Similarly, if MASE>1, then the method performs worse then the random walk forecast. However, in an on-line forecasting setting, scaling by the whole in-sample error may not be reasonable, hence we use a different scaling. We adapt the MASE to a root mean squared scaled error (RMSSE) given by RMSSE_s'^t'(y,ŷ,ŷ^b) := √(1t'-s'+1∑_u=s'^t'q_u^2), q_u = y_u-ŷ_u√(1/u-s∑_v=s+1^u(y_v-ŷ^b_v)^2), where s≤ s'≤ t'≤ t. This means we obtain scaled errors q_u by scaling the error y_u-ŷ_u by the averaged benchmark forecast error until time u. Altogether, RMSSE_s'^t' gives the average error of the window {s',…,t'} scaled by the average historical benchmark forecast until time t'. A typical benchmark method is the random walk forecast given by ŷ_v^b = y_v-1. Moreover, the scaled errors q_u can also be used to average over a set of differently scaled time series 𝒯 by setting RMSSE_s'^t'(𝒯) := √(1|𝒯 |∑_y∈𝒯RMSSE_s'^t'(y,ŷ,ŷ^b)^2) = √(1|𝒯|∑_y∈𝒯1t'-s'∑_u=s'+1^t'q_y,u^2), allowing for a global evaluation. 
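A direct transcription of these scaled errors into code may help fix the indexing; the special cases discussed immediately below correspond to a one-element window. The sketch uses 0-based arrays, the random-walk benchmark for scaling, and synthetic placeholder data; it is not the R implementation behind the reported results.

import numpy as np

def scaled_errors(y, yhat):
    """q_u = (y_u - yhat_u) / sqrt(mean_{v<=u}(y_v - y_{v-1})^2), u = 1..T-1,
    where yhat[u] is the one-step-ahead forecast of y[u] (0-based arrays)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    rw_sq = (y[1:] - y[:-1])**2                       # squared random-walk errors
    scale = np.sqrt(np.cumsum(rw_sq) / np.arange(1, len(y)))
    return (y[1:] - yhat[1:]) / scale

def rmsse(y, yhat, lo, hi):
    """RMSSE over the window u = lo..hi (inclusive, 0-based, lo >= 1)."""
    q = scaled_errors(y, yhat)[lo-1:hi]
    return float(np.sqrt(np.mean(q**2)))

rng = np.random.default_rng(1)
y = 40 + np.cumsum(rng.normal(size=30))
yhat = np.r_[np.nan, y[:-1] + 0.5]   # hypothetical one-step-ahead forecasts
print(rmsse(y, yhat, lo=5, hi=29))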
In case of s'=t' we obtain RMMSE_t'(y,ŷ,ŷ^b) = |q_t'|, and RMSSE_t'(𝒯)=√(1/|𝒯|∑_y∈𝒯q_y,t'^2). This can be used to evaluate performance at a specific time step t'. For s'>t' we let RMSSE_s'^t'(y,ŷ,ŷ^b)=0. For evaluation purposes, we will also look at a differently scaled RMSSE value, yielding simpler comparisons between the ETS models and averaging approaches. To this end, we define q̃_u = y_u-ŷ_u√(1/u-s∑_v=s+1^u(y_v-ŷ_v(M_y))^2), and an aggregated value of RMSSE_s'^t'(y,ŷ) = √(1t'-s'+1∑_u=s'^t'q̃_u^2). As the data is usually split into training and test set, we also need to differ the corresponding evaluation techniques. In the training set we compute the RMSSE as given above, yielding a final RMSSE value for the entire training set. For an evaluation on the test set we follow the notation of <cit.> and its use in the R package forecast <cit.> where the test errors are scaled by the training error of the random walk. Consider a time series y=(y_s,…,y_t,y_t+1,…,y_T) split in training and test set at time step t. Then each test set residual is scaled by the root mean squared error of the random walk forecast yielding scaled test set residuals given by q_u = r_u√(1/t-s∑_v=s+1^t(y_v-ŷ^b_v)^2), T≥ u > t. § FOOD DEMAND FORECAST The problem setting motivating this work is as follows. Various data is collected from smart fridges for the goal of estimating their demand in an optimal way. It is especially important to obtain accurate forecasts for fridges that do not have a lot of historical data yet. All in all, the number of smart fridges is increasing, hence manual demand estimation becomes infeasible and error-prone. This work helps to overcome these challenges by proposing a straightforward and transparent methodology to forecast the smart fridges' demand. In the next sections, the paper's methodology is illustrated on this weekly demand data. §.§ Data The data we use is as follows. For a certain number of smart fridges (individuals) the weekly sales are observed. In detail, at week T there are N_T individuals given by y^(T)=(y_s_y,…,y_t_y), n_y^(T)=t_y-s_y+1, t_y≤ T such that the number of individuals may also vary over time. The goal is to forecast the demand of each fridge with t_y=T for the upcoming week T+1. This means we only want to forecast recent individuals in one-step ahead fashion. Especially individuals with little historical data are difficult to forecast, hence the paper tries to tackle this problem. In the following experiments, we have data from week 2021-01-18 (T=1) until 2022-07-11 (T=78) and N=N_78=43 individuals in total. Next, we describe some properties of this unbalanced panel. * The mean length is n̅_y=33.2 and the median length is med_y(n_y)=25. In total the lengths range from 2 to 78. * The mean number of individuals is N̅_T=18.3 and the median is med_T(N_T)=16.5. The number of individuals ranges from 1 to 43. * In total there are 1,427 observations available. We split the panel data in training and test set. The training set consists of data T≤ T_train=74 implying a training size of 1,259 and 43 training individuals. The test set consists of time points T=75,…,78, thus there are 43 test individuals and 168 test observations. Note that we have excluded individuals that do not have enough or any training data at all. The sales are distributed as follows. The overall mean of sales is 39.1 with a standard deviation of 28.8. The individual means of sales range from 8.21 to 165.8 with a standard deviation of 29.9. 
Therefore, the individuals are very heterogenous as seen in some example time series in Figure <ref>. §.§ Baseline Model For the base family of models, any common time series model family can be used. We here use the ETS family of models (<cit.>, <cit.>). These models consist of 3 components: Error, Trend and Seasonality. The most simple ETS model is of the form ANN and is also known as Exponential Smoothing. It does not have any trend or seasonality component, and is given by the recursion l_t = α y_t + (1-α)l_t-1, ŷ_t+h|t = l_t, h>0, where l denotes the level component of the model which is also equal to the flat forecast ŷ_t+1|t. The model parameter α is usually found by minimizing the sum of squared one-step ahead forecast errors. The next more complex model is of the form AAN and is called Holt-Winters model. It is given by l_t = α y_t + (1-α)(l_t-1+b_t-1), b_t = β (l_t-l_t-1) + (1-β)b_t-1, ŷ_t+h|t = l_t+hb_t, h>0, where l denotes the level component again. The newly introduced trend component is denoted by b. In this model the forecasts are not flat anymore, but linear in the forecast horizon h. All smoothing parameters are usually constrained between 0 and 1. For a more detailed review on those models, see <cit.>. With that, this family provides a very flexible way of modelling and forecasting, and is therefore often used in business forecasting applications. The forecast package <cit.> in R also provides an automatic forecasting framework for ETS and other models (such as ARIMA), where the most appropriate model of all possible models in that family is chosen automatically based on some optimization criteria. We use the corrected Akaike's Information Criterion given by AIC_c =AIC+2k(k+1)T-k-1 =-2log(L)+2k+2k(k+1)T-k-1, where L is the model's likelihood, k denotes the total number of parameters in the model, and T is the sample size. The correction is needed because in small samples the regular AIC tends to overfit and select models with too many parameters. Each time series is modelled by ETS and then forecasted in a one-step ahead fashion, irregarding its length. §.§ Global Benchmark Model As a second benchmark model we use a pooled AR(1) model, which can be seen as a global model since it uses all individuals' information. This linear model is given by y_i,t=α+β y_i,t-1+ϵ_i,t, where i=1,…,N denotes the i-th individual and t=s_i,…,t_i denotes the time component of the panel. This means that for the entire panel data we only need to estimate two parameters, the global intercept α as well as the global slope parameter β. In practice, the observations of all individuals are stacked on top of each other to obtain a simple linear model which can be estimated by ordinary least squares. Such models, also called panel models, can be of different forms as well. We also modelled the data using variable intercepts, i.e. each individual has an individual intercept. However, this model turned out to be worse than the pooled model for this data. Another variant is the variable-coefficient model with individual intercept and slope. However, such model is usually estimated by estimating each individual's model by its own. Thus, such approach cannot really be seen as a global model, and thus we opted to not use this model as well. For more details about the models and their statistical properties, see the book of <cit.>. All global panel models have been fitted using the plm package <cit.> in R. 
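For readers who prefer code to formulas, the two benchmarks can be sketched as follows: simple exponential smoothing (the ETS/ANN special case) fitted by a grid search over α minimising the sum of squared one-step-ahead errors, and the pooled AR(1) model fitted by ordinary least squares on the stacked lag pairs. This is an illustrative approximation only — the reported results use the forecast and plm R packages, automatic ETS selection by AIC_c, and estimated initial states — and the toy panel below is a placeholder.

import numpy as np

def ses_fit_forecast(y, grid=np.linspace(0.01, 0.99, 99)):
    """Simple exponential smoothing: choose alpha minimising the sum of squared
    one-step-ahead errors; return (alpha, one-step-ahead forecast).
    The initial level is set to the first observation for simplicity."""
    y = np.asarray(y, float)
    best = (np.inf, None, None)
    for a in grid:
        level, sse = y[0], 0.0
        for obs in y[1:]:
            sse += (obs - level)**2            # forecast error before updating
            level = a*obs + (1 - a)*level
        if sse < best[0]:
            best = (sse, a, level)
    return best[1], best[2]

def pooled_ar1(panel):
    """Pooled AR(1): stack (y_{i,t-1}, y_{i,t}) pairs of all individuals, fit OLS."""
    X = np.concatenate([np.c_[np.ones(len(y) - 1), y[:-1]] for y in panel])
    z = np.concatenate([y[1:] for y in panel])
    (alpha, beta), *_ = np.linalg.lstsq(X, z, rcond=None)
    return alpha, beta

rng = np.random.default_rng(2)
panel = [30 + np.cumsum(rng.normal(size=n)) for n in (15, 40, 70)]
print(ses_fit_forecast(panel[1]))
print(pooled_ar1(panel))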
§.§ Choice of Optimal Parameter In every experiment we perform time-series cross-validation based on a rolling window approach to choose the optimal hyperparameter k in k-NN. We cannot apply regular cross-validation since the assumption of independent data is not valid in a time series context. The initial window contains observations from time step 1 until time step T_0=21. In each iteration, the window is expanded by one time step, yielding N_f=54 folds in total. In each fold we obtain an RMSSE value for each individual and hyperparameter. In detail, assume there exists a hyperparameter k∈Θ, where Θ is the search grid. Let a fold be f={1,…,T_0,…,t_f}, T_0≤ t_f≤ T_train and consider an individual time series y^(i)=(y_s_i,…,y_t_i). Then we compute RMSSE_max(T_0,s_i)^t_f(y^(i), ŷ^(i)(m); k) with random walk benchmark forecasts for each parameter k, time step t_f, individual i and method m. Furthermore, the cross-validation yields standard errors as well by aggregating over the folds, i.e. SE(y^(i),ŷ^(i);k) = √(1T_train-max(T_0,s_i)+1) √(∑_t_f=max(T_0,s_i)^T_train(RMSSE_max(T_0,s_i)^t_f (y^(i),ŷ^(i)(m);k)-μ(y^(i),ŷ^(i)(m);θ))^2T_train-max(T_0,s_i)), with the mean cross-validation error, also known as CV score, of μ(y^(i),ŷ^(i)(m);k)=∑_t_f=max(T_0,s_i)^T_trainRMSSE_max(T_0,s_i)^t_f (y^(i),ŷ^(i)(m);k)T_train-max(T_0,s_i). Note here that each individual i might be available in a different amount of folds since we perform cross-validation based on the time aspect of the panel data. The optimal hyperparameter is now chosen using the one-standard-error rule, i.e. we choose the most parsimonious model lying in the one standard error band of the global minimum of cross-validation errors values. This means we obtain the optimal parameter k^∗ for y^(i) as k^∗_i = min{θ:|k-k_i^(m)|≤SE(y^(i),ŷ^(i);k_i^(m))}, where k_i^(m) is the parameter realizing the minimal mean cross-validation error, i.e. k_i^(m)=argmin_k μ(y^(i),ŷ^(i)(m);k). §.§ Evaluation In each fold we forecast the time series to obtain one-step ahead forecasts as explained in Section <ref>. The ETS models themselves are not being tuned using the folds. The best ETS model is solely chosen in each fold using AIC. That way the only tuning parameter is the size of each individual's neighborhood. §.§.§ Time Series Cross Validation We take a closer look at the time series cross validation (TSCV) procedure and the choice of the optimal tuning parameters. Figure <ref> shows the corresponding TSCV scores and standard errors as defined above for the distance-based averaging methods. Each plot panel shows the results for a selected smart fridge. For demonstration purposes we choose a grid of Θ={1,3,5,10,20} number of neighbors and select the optimal number applying the one standard error rule. We do this for each forecast averaging methodology. The vertical lines indicate the choice of the tuning parameters. We observe different behaviors between the global averaging approach and the others (D-AVG, D-AVG-N). This can be explained by the similarity of the two methods. We also see that the scores for individual 3 obtain a constant level because this individual is one of the longer time series, and hence it is not even possible to have more than 3 neighbors. The actual selected number of neighbors is given in Table <ref>. §.§.§ Model Evaluation First, we start off by evaluating the experiment on a global level. This is where the scaled errors come in handy since these are comparable also on individuals' level. 
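Before turning to the figures, the tuning-parameter selection described in the preceding subsections can be condensed into a short sketch of the fold-wise aggregation and the one-standard-error rule, read in its usual form: the smallest k whose mean cross-validation score lies within one standard error of the best score. The fold scores below are simulated placeholders, not the values produced by the actual TSCV procedure.

import numpy as np

def one_se_rule(ks, fold_scores):
    """ks: candidate neighbourhood sizes; fold_scores[i, f]: RMSSE of k = ks[i] in fold f.
    Returns the smallest k whose CV score is within one SE of the minimising k."""
    fold_scores = np.asarray(fold_scores, float)
    mu = fold_scores.mean(axis=1)                                # CV score per k
    se = fold_scores.std(axis=1, ddof=1) / np.sqrt(fold_scores.shape[1])
    i_min = int(np.argmin(mu))
    admissible = [k for k, m in zip(ks, mu) if m <= mu[i_min] + se[i_min]]
    return min(admissible)

rng = np.random.default_rng(3)
ks = [1, 3, 5, 10, 20]
scores = 1.0 + 0.05*np.arange(5)[:, None] + 0.1*rng.normal(size=(5, 10))
print(one_se_rule(ks, scores))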
Figure <ref> shows boxplots of the RMSSE values as in Eq. (<ref>), split in both training and test errors. Given the errors are skewed, we display them on a log-scale. We clearly observe that the simple, non-models based averaging approaches yield worse results than the ETS benchmark model. However, we do see minor improvements of the ETS forecasts by using distance- and performance-weighted averaging methods. Another interest fact is that approaches not considering an individual's information (S-AVG-N, D-AVG-N) tend to yield also worse results, indicating that these informations should also be included in the forecast procedure, even though these time series might be very short and difficult to forecast. For completeness, Figure <ref> in the Appendix shows the original RMSSE values, but as mentioned before, this makes comparisons to the ETS benchmark model quite difficult. Next, we want to dig deeper into the performance of selected individuals. For this, we selected 4 of the 43 individuals to analyze in more detail. Figure <ref> shows the RMSSE with respect to ETS forecasts for these selected fridges. We also show standard errors of the RMSSE. Due to the different behaviors of the individuals, we also observe different performances of the averaging approaches. While for individual 3 we can improve forecasts using model-based averaging methods, this is not the case for individual 27. We also observe that the panel benchmark model PLM-POOL does not always yield better forecasts and is often even surpassed by any of the averaging approaches. We can dig even deeper into evaluating the errors when looking at the evolution of the RMSSE. For reasons of clarity, Figure <ref> shows the running RMSSE with respect to the RW forecasts for the distance-based averaging methods and the two benchmark methods of ETS and PLM-POOL. The dashed vertical line indicates the split between training and test periods. These plots really show the possibility of improving the benchmark forecasts by smartly taking averages of neighbors' forecasts. Similar plots for the remaining averaging methods can be found in the appendix. §.§.§ Further Diagnostics To understand better the differences in performance on individual level, we perform further diagnostics. For fixed neighborhood size of at most 5 neighbors, we first consider the median, minimum and maximum normalized DTW distance and their evolution over time. This analysis might already give hints on the homogeneity of the neighborhoods (Figure <ref>). We observe that for individuals 3, 15 and 29 the distances and hence the neighborhoods seem to stabilize after the first few steps while for individual 27 the normalized distance increases until the end. This means that this individual becomes harder and harder to match. Another way to evaluate the neighborhoods is as follows. We consider the final training neighborhoods as the ground truth and compare each neighborhood yielded in the training process to this ground truth using the adjusted Rand index (<cit.>, <cit.>). Figure <ref> shows the adjusted Rand index for each time step. We see that individual 3 is perfectly matched from the beginning which is also not suprising since this individual only has very few possible neighbors. However, there is much more variability present for the remaining smart fridges, and especially for individual 27 the adjusted Rand index is hardly increasing, indicating difficulties in finding the proper neighbors for this individual. 
Finally, we take a look at the 2-Wasserstein distances as discussed in Section <ref>. Since we only introduced the Wasserstein for a basic ANN model, we want to go into more detail here. The one-step ahead point forecasts which we obtain and use for averaging are the means of the forecast distribution, hence we can compute the Wasserstein distances in a straight-forward way. Namely, we have that W_2^2(X̂_n+1|n,Ŷ_n+1|n) = (x̂_n+1|n-ŷ_n+1|n)^2 + (σ̂_X-σ̂_Y)^2, where X,Y are two arbitrary ETS models with realized point forecasts x̂,ŷ as well as estimated standard deviations σ̂_X,σ̂_Y, respectively. Figure <ref> shows the mean 2-Wasserstein distances as in Eq. (<ref>) per selected individual. The neighborhoods considered here are the same as in Figure <ref>. The dashed lines indicate the overall mean distance. We clearly observe large distances for individual 27 which is consistent with the rather bad results. Since the DTW distances as in Figure <ref> are increasing for this individual, we cannot expect the Wasserstein distances to be small, and thus also cannot guarantee a reduction in forecast error. Indeed, the overall mean distance is 12.5 which is more than twice as large as the mean values for the remaining individuals (3.2 for individuals 3,15, and around 5.2 for individual 29.) selected in the analysis. This result empirically verifies the extension of our motivating theoretical results. §.§.§ The Best Models For each individual, the best averaging approach may be selected by minimizing the test RMSSE value as given in Table <ref>. We see that the best averaging methods outperform both the ETS and pooled panel model for individuals 3,15,29 based on the test set RMSSE. For individual 27 which we have already observed to be a difficult individual to match, the averaging methods can not improve upon the benchmark ETS model yet still outperform the panel benchmark. The given standard errors indicate a significant improve for only individual 29 compared to the baseline ETS forecast, which is also confirmed by Diebold-Mariano tests to compare forecast accuracies <cit.>. However, this does not give us a global, overall way of evaluating the methodology. To this end, we take a look at the distributional properties of the RMSSE values obtained on the test set and compare the methods based on how many individuals could be improved by averaging. First, Figure <ref> shows the ordered RMSSE test set values with respect to the ETS forecast for each averaging method. The vertical lines indicate the percentile of individuals with smaller error compared to the ETS benchmark. For an overall useful methodology we want the vertical lines to be greater than 0.5, meaning we can improve the forecasting for more than 50% of the individuals. This is the case for the global averaging method G-AVG, the performance-based one P-AVG, P-AVG-R, as well as the simple S-AVG, whereby the best improvement is obtained by the weighted average of using errors of the refitted models (P-AVG-R). Furthermore, we see that all methods not including the individual, namely D-AVG-N and S-AVG-N, yield worse results. Also, the non-model based methods which only regard the DTW matchings do not seem to be very useful at all since they only decrease the forecast error compared to the ETS benchmark for around 36%, and 38% of the individuals, respectively. Finally, we take a look at the actual forecasts for the selected fridges and best models (Table <ref>) which can be seen in Figure <ref>. A clear change of forecasts is present. 
Looking at these forecasts, we see that, while for individuals 3, 15, and 29 the actual forecasts of all methods look quite similar, there are slight improvements in terms of forecast error. This is not the case for individual 27, for which the ETS base forecast is still the best. Due to its high error, though, this ETS base forecast is only slightly better than a random forecast would be. § CONCLUSIONS We present a general framework for the improvement of individual forecasts in a set of possibly heterogeneous time series. It is based on a theoretical motivation regarding simple state-space models in terms of exponential smoothing; however, this motivation may be extended to more general models. In the theoretical part of the work, new approaches are proposed by computing the DTW distance explicitly on random processes as well as by using the Wasserstein distance to compare the forecast distributions of state-space models. Additionally, dynamic time warping is introduced and described in a far more rigorous way. We apply a sequence matching procedure in dynamic time warping which allows us to find similar time series in terms of their shape and which works for any two time series. To this end, we emphasize the use of asymmetric matching and the corresponding extension of the global averaging methodology to this type of matching. The transparency of the algorithm also enables us to perform diagnostics and to understand when this procedure is appropriate and yields reasonable results. This point also differentiates our work from machine learning approaches, which tend to lack transparency and require a large total number of observations to work efficiently. The models we use for our analysis are ETS models, yet any family of models can be used in our procedure, making it very flexible in practice. It also extends automatic forecasting frameworks such as ARIMA or ETS naturally. Overall, this framework fits many real-world applications where one encounters a set of heterogeneous and possibly short time series. Some aspects are still to be considered. Extensions of this work could include using an adaptive, data-driven number of neighbors for each time series instead of a fixed one. Further, a robustification of the procedure could be beneficial, since dynamic time warping in particular is quite sensitive to outliers in the time series. § COMPUTATIONAL DETAILS The results in this paper were obtained using R 4.1.3. Figures were produced by ggplot2 <cit.> and tables were created using knitr <cit.> as well as kableExtra <cit.>. R itself and all packages used are available from the Comprehensive R Archive Network (CRAN) at <https://CRAN.R-project.org/>. The source code of this paper in the form of the R package TSAvg is available from GitHub at <https://github.com/neubluk/TSAvg>. § ACKNOWLEDGMENTS AND DISCLOSURE OF FUNDING We acknowledge support from the Austrian Research Promotion Agency (FFG), Basisprogramm project “Meal Demand Forecast”, and Schrankerl GmbH for the cooperation and access to their data. § FURTHER TSCV EVALUATION PLOTS § FURTHER EVALUATION PLOTS
http://arxiv.org/abs/2306.03289v1
20230605222456
Towards Adapting Computer Science Courses to AI Assistants' Capabilities
[ "Tianjia Wang", "Daniel Vargas-Diaz", "Chris Brown", "Yan Chen" ]
cs.HC
[ "cs.HC" ]
Towards Adapting Computer Science Courses to AI Assistants' Capabilities Tianjia Wang, Daniel Vargas-Diaz, Chris Brown, Yan Chen Department of Computer Science Virginia Tech Blacksburg, VA, USA {wangt7, danielvargasdiaz, dcbrown, ych}@vt.edu July 31, 2023 ==================================================================================================================================================================================== The use of AI assistants, along with the challenges they present, has sparked significant debate within the computer science education community. While these tools demonstrate the potential to support students' learning and instructors' teaching, they also raise concerns about enabling unethical uses by students. Previous research has suggested various strategies aimed at addressing these issues. However, these strategies concentrate on introductory programming courses and focus on one specific type of problem. The present research evaluated the performance of ChatGPT, a state-of-the-art AI assistant, at solving 187 problems spanning three distinct types, collected from six undergraduate computer science courses. The selected courses covered different topics and targeted different program levels. We then explored methods to modify these problems to adapt them to ChatGPT's capabilities and reduce potential misuse by students. Finally, we conducted semi-structured interviews with 11 computer science instructors. The aim was to gather their opinions on our problem modification methods, understand their perspectives on the impact of AI assistants on computer science education, and learn their strategies for adapting their courses to leverage these AI capabilities for educational improvement. The results revealed issues ranging from academic fairness to the long-term impact on students' mental models. From our results, we derived design implications and recommended tools to help instructors design and create future course material that can more effectively adapt to AI assistants' capabilities. Computer science education, Large language model, ChatGPT, Interview § INTRODUCTION Artificial intelligence (AI) is rapidly transforming the way computer science (CS) is taught and learned. Prominent AI assistants, such as ChatGPT <cit.> and GitHub Copilot <cit.>, provide students with access to advanced problem-solving resources. An increasing number of researchers have shown that these AI assistants outperform students when solving complex computing tasks <cit.> and can circumvent plagiarism detection software <cit.>. These advancements bring about concerns regarding cheating and the integrity of assignments and exams, as students who do not use these assistants may be at a disadvantage when learning new concepts. Furthermore, AI assistants may generate incorrect answers, which could lead students to form incorrect mental models of a concept <cit.>. These issues present a challenge for instructors, who lack the support to address the impact these AI assistants have on education. To address these challenges, researchers and organizations have proposed various solutions. For example, studies have investigated strategies to change course materials using prompt engineering <cit.>. Ethan Mollick, a Professor at the University of Pennsylvania, fully embraced AI for his classes by asking students to use AI tools in various ways <cit.>. Some organizations have prohibited the use of these assistants <cit.> or developed detectors to mitigate their use <cit.>.
However, they have not explored the AI assistants' impact across different topics and program levels of CS courses. Also, given the rapidly evolving nature and widespread accessibility of AI assistants, these practices are also not feasible in the long run <cit.>. Therefore, we ask how can we support instructors to more effectively adapt CS education (e.g., materials, and practices) to the capabilities of AI assistants to prevent students from misuses and improve students' learning experiences. A three-phase study was conducted to answer this question, as outlined in Fig <ref>. First, we evaluated ChatGPT's performance in solving problem sets from six undergraduate CS courses, which encompassed a variety of problem types. Second, we explored two problem modification methods based on previous research, including adding distracting information and altering a problem's format and evaluation, to assist instructors in adapting course materials to ChatGPT's capabilities and mitigate potential misuse. Finally, we conducted semi-structured interviews with CS instructors to gauge their understanding of ChatGPT's capabilities, their opinions on our problem modification methods, and their perspectives and concerns regarding the use of AI assistants in CS education. Our findings show mixed feelings and concerns from the interviewees on issues, such as academic fairness, for the long-term impact of AI assistants. Specifically, we found that 1) while they recognized the potential for students to exploit ChatGPT inappropriately, the majority had not yet made changes to their materials to minimize such misuse because of the lack of effective strategies or tool support; 2) instructors perceived the adapting of course materials to incorporate AI assistants as more feasible for advanced courses than introductory ones; 3) although the interviewees expressed ethical concerns about AI assistant usage, these concerns remained unchanged (not worsened) compared to existing issues; and 4) the false answers generated by ChatGPT could potentially mislead students, causing them to develop incorrect mental models. Based on these findings, we discuss two main implications of adapting CS education to AI tools' capabilities, including a) tools and strategies for CS problem adaptation, and b) designing personalized learning experiences using large language model (LLM). Our findings contribute to the body of research on the challenges and opportunities that LLM-based AI assistants bring to our society by providing detailed evidence, insights, and analysis from the CS instructors' perspectives, focusing on CS education <cit.>. These ideas work toward a vision of personalized, fair, and adaptive learning experiences for future CS education. This research thus contributes: * Evaluation of ChatGPT's capabilities of solving various types of problems across various levels and topics within computer science courses, which provides insights into identifying the areas where it may struggle, and recognizing its potential use in different aspects of computer science education. * Insights of applying two problem modification techniques aimed at assisting instructors in preventing the misuse of ChatGPT by students. The results indicate that the prevailing method of adding distracting information may not be as effective as previously thought. * Design implications and tool recommendations to support instructors in adapting course materials to AI assistants' capabilities. 
§ BACKGROUND & RELATED WORK ChatGPT is a state-of-the-art large language model capable of engaging in conversations and delivering a variety of assistance to users <cit.>. It exhibits expertise in both natural languages and programming languages like Python, Java. ChatGPT responds to user prompts based on the input it receives and provides answers in real-time. We selected this language model due to its widespread accessibility and cost-free availability for all users. Recent literature has explored the influence of AI assistants on student learning, such as their use in solving and generating CS problems, and in providing feedback to students <cit.>. Relevant practical reports, such as the lessons and strategies shared in Ethan Mollick's newsletters, also inform our work, helping shape our strategies and interview questions <cit.>. However, due to the recent public release of these techniques, most studies focus on assessing AI tool performance, not addressing associated concerns. More research is needed on long-term impacts and mitigation strategies. Our work aims to fill this gap, expanding course topics and levels, and eliciting instructors' insights. Several adaption strategies have been proposed to address the issues of academic integrity associated with the use of AI assistants. Companies have explored the development of AI-based cheating detection systems, which aim to identify instances of AI-generated text and code submissions <cit.>. However, they often suffer from low accuracy <cit.>. Other studies have suggested redesigning course materials to emphasize computational thinking and problem-solving skills, rather than focusing on specific programming tasks that can be easily solved by AI tools <cit.>. In addition, researchers have investigated alternative assessment methods that could better measure students' understanding of CS concepts and their ability to apply these concepts in novel situations <cit.>. For example, some have proposed using open-ended projects, collaborative assignments, or interactive programming tasks that require students to demonstrate their understanding of the material in a more authentic and engaging manner <cit.>. Our work contributes to this line of research by examining the performance of ChatGPT on CS problem sets and suggesting modifications to these problem sets to mitigate the impact of AI assistants. Understanding instructors' perceptions of AI in education is essential for developing effective strategies to address the challenges posed by AI assistants. Prior research has examined instructors' attitudes towards AI-driven tools, such as Intelligent Tutoring Systems, and their potential impact on teaching and learning <cit.>. These studies have highlighted concerns about the potential negative effects of AI tools on students' motivation and learning outcomes, as well as the need for guidance on how to integrate these tools effectively into the curriculum <cit.>. However, the new AI-based programming assistance has shown its strong capability on performing these tasks, and there is limited research on instructors' perspectives on their impact on CS education. Our study addresses this gap by conducting semi-structured interviews with relevant instructors to gain insights into their perceptions, concerns, and opportunities related to AI techniques in CS education. These qualitative findings inform the design implications for future educational materials and instructional strategies, helping to shape a more effective and fair educational landscape. 
§ DATA COLLECTION AND PERFORMANCE EVALUATION Supplementing prior work, we selected various CS courses across multiple difficulty levels, using collected sample problems from those courses to evaluate ChatGPT's performance. The dataset will be released upon paper acceptance. §.§ Data Selection We assessed ChatGPT's performance across six fundamental and advanced CS courses common in degree programs (Figure <ref>). We collected 30-36 problems per course from public resources like Coursera, Udemy, Udacity, CodeAcademy, and several institutions' course materials, resulting in a total of 187 problems. These problems were chosen for their authenticity and availability of ground truth answers. The problems, excluding open-ended questions and those with non-text information, spanned three types: multiple choice, short answer, and coding. Multiple choice problems necessitated choosing the appropriate option(s), encompassing Single Answer, Multiple Answer, and True/False categories. Short answer problems required concise text responses, and coding problems necessitated correct code implementation. These categories were chosen for their prevalence across different subjects and to avoid bias in evaluation. §.§ Process We employed the Jan 30 and Feb 13 versions of ChatGPT, accessible to students for free. Each problem description from the dataset was used as a prompt for ChatGPT under default settings. The generated solutions, influenced by a degree of randomness due to ChatGPT's temperature parameter, were then compared to the correct answers or assessed through manual evaluation or provided test cases. For each problem, we generated three alternative responses, resulting in a total of 561 answers that were manually evaluated by the first two authors. For multiple-choice problems, the generated answer will be compared to the correct option(s) on the associated answer keys. For short-answer problems, the generated answer will be manually evaluated by comparing it with the ground truth answer. For programming problems, the generated answer will be assessed using provided test cases if available in the original problem set, or through manual evaluation by the first two authors. In the following sections, we report our findings. §.§ Results Table <ref> presents the results of ChatGPT's accuracy in solving CS problem sets across all the selected courses. As stated above, we employed ChatGPT to generate three alternative answers for each problem. We defined a problem as “solvable” if ChatGPT correctly answered it in all three attempts, “partially solvable” if it provided both correct and incorrect answers across the three attempts, and “not solvable” if it answered the problem incorrectly in all three attempts. Overall, ChatGPT is able to solve 61.5% of problems in all three attempts. Specifically, its performance in the senior level Design and Analysis of Algorithms course is subpar compared to the other courses by only being able to completely solve 26.67% of the collected problems. Our findings demonstrate that ChatGPT can solve various problem types across different levels and topics in CS courses with satisfactory performance. This raises concerns regarding the potential for students to simply copy and paste problems into ChatGPT, obtain solutions, and answer them without truly engaging with the material. § PROBLEM MODIFICATION METHODS Our results from the previous section demonstrate the capabilities of ChatGPT in solving various CS course problems. 
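Concretely, those results reduce to counting correct answers over the three attempts generated per problem. The short Python sketch below illustrates this bookkeeping; it is a sketch only, with ask_chatgpt and grade as hypothetical stand-ins for querying the model and for the grading procedure (answer keys, manual checks, or test cases) described above.

def classify_problem(problem, ground_truth, n_attempts=3):
    # Count how many of the generated answers are graded as correct
    n_correct = sum(grade(ask_chatgpt(problem), ground_truth)   # hypothetical helpers
                    for _ in range(n_attempts))
    if n_correct == n_attempts:
        return "solvable"
    if n_correct == 0:
        return "not solvable"
    return "partially solvable"

labels = [classify_problem(p, t) for p, t in dataset]       # dataset: (problem, ground truth) pairs
solvable_rate = labels.count("solvable") / len(labels)      # fraction solved in all three attempts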
Because ChatGPT can successfully solve more than 60% of the CS problems in our dataset, and may therefore be misused by students, we explored two methods for helping instructors and TAs modify the problems to prevent inappropriate use of ChatGPT. Method 1 (M1) manually adds information or context to mislead or distract the model. The goal of this method is to prevent students from simply copying and pasting problems into ChatGPT to obtain answers. Method 2 (M2) asks students to validate the answers generated by ChatGPT. M2 aims to reduce shallow learning and make students aware of misinformation, because they will be evaluating both correct and incorrect answers from the model. §.§ Method 1 (M1): Adding Distracting Information Inspired by prior work <cit.>, M1 adds distracting information to a given problem to mislead ChatGPT, rendering the problem unsolvable by ChatGPT to prevent potential misuse. The template of M1 involves manipulating the “Operation” and the “Content” of the distracting information. The “Operation” includes appending, inserting, and editing, while the “Content” refers to the definition of a conceptual term related to the problem, related homogeneous information, and supplementary context for the problem. For example, consider the multiple-choice problem shown in Appendix <ref>. ChatGPT can answer it correctly by selecting both B and C with the following response: “Both B and C are true statements. However, statement A is false. The Edmonds-Karp algorithm is a variation of the Ford-Fulkerson algorithm that uses breadth-first search to choose the augmenting path, which can make it faster in some cases, but not always.” However, we could apply M1 to add distracting information that makes this problem no longer solvable by ChatGPT. As shown in Appendix <ref>, the original problem is presented in black, while the distracting information is highlighted in blue. After appending the definition of the Ford-Fulkerson algorithm as distracting information to both answers A and C, ChatGPT began to choose B as the only correct answer, and the response now indicates that C is incorrect by saying “C. The statement is also false. The Ford-Fulkerson algorithm does not have a guaranteed polynomial time complexity, and there exist instances where it can take exponential time to find the maximum flow, even with unit edge capacities.” To test the efficacy of this method, we selected 30 problems by taking five problems that are solvable by ChatGPT from each of the six courses in the dataset. We attempted to modify each problem by adding various combinations of “Operation” and “Content” to mislead the model. For each problem, we attempted up to 15 distinct combinations, and if none of the combinations were effective in misleading the model, we considered the method to have failed for that problem. Our findings indicate that only 7 out of the 30 problems could be successfully modified to confuse ChatGPT. For those attempts that successfully misled ChatGPT, an average of 8 iterations were needed to explore various combinations of “Operation” and “Content.” While performing the evaluation, we noticed some limitations to applying M1. It can only be applied to problems with context or textual information. For example, for algorithm problems with only formulas or for code completion problems, it would be hard to add distracting information. More identified limitations will be discussed in Section <ref>.
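The modify-and-test loop behind M1 can be summarized as follows. The Python sketch below is purely illustrative: query_model and is_correct are hypothetical helpers standing in for calling ChatGPT and checking its answer against the answer key, and the abbreviated “Operation” and “Content” lists are placeholders rather than the exact variants we used.

from itertools import product

OPERATIONS = ["append", "insert", "edit"]
CONTENTS = ["definition of a related concept", "related homogeneous information",
            "supplementary context"]                     # abbreviated placeholder lists

def apply_distraction(problem, operation, content):
    # Hypothetical: attach the distracting text according to the chosen operation
    distractor = f"[distracting text: {content}]"
    if operation == "append":
        return problem + " " + distractor
    if operation == "insert":
        return distractor + " " + problem
    return problem.replace(".", f". {distractor}", 1)    # crude stand-in for "editing"

def try_to_mislead(problem, answer_key, max_attempts=15):
    for attempt, (operation, content) in enumerate(product(OPERATIONS, CONTENTS), start=1):
        if attempt > max_attempts:
            break
        modified = apply_distraction(problem, operation, content)
        response = query_model(modified)                 # hypothetical call to ChatGPT
        if not is_correct(response, answer_key):         # hypothetical grading helper
            return modified                              # modification succeeded in misleading the model
    return None                                          # M1 failed for this problem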
§.§ Method 2 (M2): Changing The Problem Evaluation Another limitation of ChatGPT is that its responses may not always be accurate due to its inherent probabilistic nature<cit.>. If students rely solely on the answers provided by ChatGPT without verifying their accuracy, they may potentially develop flawed mental models <cit.>. However, this limitation can be turned into an opportunity to enhance the learning experience and prevent the misuse of ChatGPT as M2. To implement this method, instructors can use ChatGPT to generate multiple answers, compare them to the ground truth, and ensure there is at least one incorrect response. After that, instructors can present the original problem along with the ChatGPT's answers to the students, then rephrase the problem to ask students to “review the problem and answers from ChatGPT, distinguish between correct and incorrect responses, and justify your answer.” For instance, in the multiple-choice problem shown in Appendix <ref>, ChatGPT can be used to generate two distinct answers, with one being correct and one being incorrect. Instructors can then modify the problem format, as shown in Appendix <ref>, by asking students to review the problem with the ChatGPT-generated answers and distinguish between correct and incorrect responses with justifications. By using this method, students are encouraged to develop their analytical and reasoning skills, as they must assess the validity of each answer generated by ChatGPT and determine which ones are accurate or erroneous. By having instructors verify the answers and informing students there are incorrect answers to the problem, it could help students avoid learning from the misinformation generated by ChatGPT and developing a flawed mental model. Compared to M1, it also works for problems that do not contain textual or context information. However, a limitation of this method is that it can only be applied to problems for which ChatGPT is capable of generating incorrect solutions. Consequently, simple problems that ChatGPT consistently answers and explains correctly will not be suitable for this approach. More identified limitations will be discussed in Section <ref>. § INTERVIEWS To gain a deeper understanding of instructors' views on problem modification techniques and their opinions on using AI assistants in CS courses, we conducted surveys and semi-structured interviews with 11 instructors teaching our six selected CS courses. Participants were recruited through a private network within our university and were required to have prior experience as primary instructors for the course. After obtaining their consent, we requested they complete a survey regarding their experience and knowledge of AI assistants. Interviews were scheduled at the participants' convenience, either in-person or online via Zoom, lasting 30-60 minutes. Participants were compensated $25 for their time and effort. The study was approved by the authors' organization's IRB. At the interview's outset, we demonstrated ChatGPT's capability to solve problems related to the participants' courses. We then asked if ChatGPT's performance aligned with their impressions and experiences. Next, we provided examples of the two problem modification methods and asked for their thoughts on each. The examples were based on problems from the course the interviewee had taught. We also discussed their perceptions, ethical concerns, potential applications, and challenges related to ChatGPT in CS courses. 
The interview sessions were recorded, transcribed using an automated tool, and corrected for inaccuracies. The first two authors coded the interviews and identified significant themes from the transcripts. § RESULTS §.§ Instructors' Impressions on Performance of AI Assistants §.§.§ Survey Result Prior to the interview, we conducted surveys to gather information about participants' previous experiences with AI assistants. The results show that 63.63% (7/11) of the participants have prior experience using AI assistants. Additionally, 72.72% (8/11) of the participants are familiar with the problem-solving capabilities of AI assistants for the classes they currently teach or have taught in the past. From the brief overview of their previous experience utilizing AI assistants the participants provided, we noticed that most participants (8/11) have used AI assistants as a teaching aid or for solving personal tasks. Only 2 out of 11 participants have tried to test the performance of AI assistants in solving problems in the course they taught. §.§.§ Participants were aware of ChatGPT's problem-solving performance in CS course To better understand the participants' preexisting opinions and general impressions about ChatGPT, we began the interview by demonstrating its performance in solving problem sets in CS courses. We selected 3-5 examples of different problem types from the course taught by participants, including some that could be solved by ChatGPT and others that could not. We then presented these examples to the participants, along with the statistical results in Table <ref>. After viewing the statistical results and examples provided by us, nearly all (9/11) of the participants reported that ChatGPT's performance in solving CS problems aligned with their previous impressions. One participant (P11) was surprised by ChatGPT's capabilities, which enabled it to solve challenging coding problems that passed all the test cases. §.§ Instructors' Perceptions on Proposed Methods §.§.§ Distracting information can be helpful but also confuse students Our M1 modifed the problems by using distracting information to mislead or distract ChatGPT. Less than half of the participants (5/11) felt that this method is helpful for making problems harder to solve by ChatGPT, with P2 saying “it could discourage my students from looking at the answers to these questions”. Four participants (P1, P8, P9, P11) raised concerns about the use of this method, whereas two participants (P1, P9) mentioned this method may also confuse students. For example: It is good that confuses ChatGPT, it is bad it probably also distracts students, it's distracting information to them as well as ChatGPT.[...] I understand the goal. But we also are trying to be fair to students and be accurate with them. (P1) P8 is concerned that the distracting information could be potentially identified and removed by the students, which let them “easily evade this technique.” In summary, participants' responses suggest that M1 could be an effective way to modify the problem and increase the difficulty of solving it for ChatGPT. However, this approach has limitations, such as potentially distracting students and students can possibly remove the distracting information. §.§.§ Asking students to validate answers can be effective, but would require more effort for grading We found that the majority of participants (7/11) considered M2 to be engaging and effective for adapting course materials. 
Five participants believed it could “facilitate critical thinking” among students while solving the problems. P2 and P6 believed it could “enhance students' understanding of AI in general”. Three participants (P1, P10, P11) teaching introductory level courses, which typically have a large number of enrolled students, pointed out that applying M2 is challenging due to the increased effort required for grading, as one mentioned: However, I am not sure how this could be scalable [...] as an instructor, that means that I have like 300 generated texts that I have to manually go through which [sic], so I won't use it. (P10) The results indicate that instructors in some courses perceive the M2 as an effective method with the potential to enhance students' critical thinking skills and facilitate learning about the use of AI assistants. However, it should be noted that this approach may demand additional effort when assessing student responses. §.§ Instructors’ Perceptions and ethical concerns regarding using AI assistants in CS education §.§.§ Instructors worry about academic integrity problems with AI assistants but are mostly open to using them in class with conditions As we demonstrated in Table <ref>, ChatGPT performs well overall in solving CS problems, and its widespread, free access to all students presents a significant challenge to the education system regarding academic integrity issues. From the results of our interviews, almost all the participants (9/11) acknowledged potential academic integrity concerns posed by AI assistants. However, almost all of them (9/11) were open to students using these tools as supplementary aids rather than completely prohibiting their use in the classroom. As P8 stated, “I think that [banning ChatGPT] is not a good approach, because it is, it is another wave that we can’t evade. Right? We have to face it.” While instructors are receptive to the use of AI assistants, they believe that students should utilize them thoughtfully and within certain constraints. As one participant mentioned: There's some way that we can use it well, to be thoughtful about it and be creative. [...] You have to say, here's where we're okay to use it, and then here's where you can't. (P3) P5 felt the idea of using AI assistants in theory courses was “terrible” as it will take away the learning benefits gained from “writing the proofs by doing them from scratch.” In conclusion, while there are valid concerns about academic integrity, the consensus among participants is that AI assistants like ChatGPT can be valuable educational tools when used thoughtfully and within clearly defined boundaries. §.§.§ Instructors haven't adapted courses for AI assistants, considering more in-class assessments to prevent misuse One of the main impacts that AI assistants have on the CS education system is that they may necessitate instructors to modify their course materials to minimize potential misuse. This viewpoint aligns with the perspectives shared by other scholars <cit.>. As shown in the previous section, 81.81% of participants were aware of the potential misuse of AI assistants. However, we found that only two (P1, P4) participants have modified their course structure and material to adapt to AI assistants. Among the eight participants who have not made any changes, seven are open to making adjustments if they receive proper guidance. 
As participants may have previously adapted their courses to address plagiarism and cheating, we want to gather insights into methods of adapting their courses to AI assistants. In terms of class dynamics and methods of evaluations, the majority of instructors (7/11) were inclined to administer more in-class activities, quizzes, and exams in the future to prevent the potential misuse of AI assistants. Two (P2, P4) mentioned they are considering implementing oral exams. As P4 mentioned during the interview: I'm thinking like, this is in the future, like, I'm most probably doing everything flipped. Okay. So I'm even going to record my lectures, and then they [students] will come to the class to work on the project, under my supervision and help. (P4) In summary, most instructors have not yet updated their course materials to adapt to AI assistants. However, they intend to adapt by implementing more traditional methods, such as conducting in-class activities and in-person evaluation mechanisms. §.§.§ Enforcing fairness was challenging before ChatGPT, and remains (not worsens) relatively unchanged even with the existence of ChatGPT We gathered instructors' opinions on the fairness concerns raised by AI assistants during interviews. We observed that while some students may begin using ChatGPT to enhance their grades, others might choose not to rely solely on it to gain a deeper understanding of the material. Additionally, we noted that ChatGPT offers a “plus” version, providing subscribers with faster and more accurate access to the GPT-4 model. This could potentially result in inequity regarding educational outcomes and resource distribution, as it might create disparities between students who can afford the upgraded version and those who came from a low social-economic status. Initially, based on this observation, we expected that participants might believe ChatGPT would exacerbate the fairness issue. However, only three (P2, P8, P10) out of 11 participants reported that they think ChatGPT could worsen fairness. Contrary to the belief held by some participants that ChatGPT will aggravate the fairness issue, the majority of (8/11) participants stated that ensuring fairness is inherently a difficult endeavor, and ChatGPT does not worsen the problem. As one participant expressed about plagiarism: It's always a concern, even before ChatGPT. [...] The source of the concern is that now instead of stealing the code from you [TAs, instructors, and other sources], they're gonna steal from ChatGPT, so it becomes a different source of the issue. (P3) It is intriguing to observe that participants' opinions diverge from our prior impressions. They assert that enforcing fairness was already a challenging task before ChatGPT's emergence, and its presence has not worsened the situation. §.§.§ Instructors think false ChatGPT results will potentially lead to flawed mental models As the previous findings indicate, most instructors are open to students using AI assistants in their courses. However, we identified a potential risk associated with using AI assistants in CS courses, where students could rely solely on answers generated by ChatGPTs without verifying the accuracy of the information provided. Studies have demonstrated that accepting unreliable sources as valid could lead to the development of flawed mental models <cit.>. As depicted in Table <ref>, ChatGPT fails to provide correct answers for 38.5% of the problems. 
While those generated answers may appear to be credible, they can actually contain misinformation. We discussed this concern with the instructors and found that the majority (9/11) acknowledged the potential for incorrect results generated by ChatGPT to contribute to students' inaccurate mental models. One participant (P3) mentioned: I think also, maybe, as this tool is more widely available in use us having a discussion about it at the beginning of class. [...] It gives the wrong answer sometimes, and you have to be thoughtful about what it tells you, not, maybe not telling them, they should use it completely. (P3) Overall, a major concern among the participants is that students may rely on inaccurate ChatGPT-generated answers, potentially leading to the development of flawed mental models. To address this issue, some participants (P3, P11) believe instructors should inform students that answers generated by ChatGPT could be incorrect and contain misinformation. §.§.§ Instructors of introductory level courses are more worried about the potential threats posed by AI assistants We observed that instructors teaching introductory level courses express heightened concern regarding the potential risks associated with AI assistants. Interviewees mentioned that there are fundamental concepts essential for designing superior programs and applications. The use of AI assistants could lead students to engage in shallow learning and develop misconceptions, which could impact their performance in advanced courses. One participant shares his concern by saying: I feel worried, because I feel like students may be in their introductory classes, like introductory Python, or Java may use ChatGPT, you [students] could do just to solve any assignment. And then they will not build the true understanding. (P4) For high-level courses, instructors feel the problems in their course are less likely to be solved by ChatGPT–and they even support the use of this tool in their classes, with one saying: I mean, it should definitely be used. Like I wouldn't ban anything in my class, especially great resources like this. I just think it's important to understand how it can be applied and what its limitations are. (P6) The findings indicate a potential relationship between the tool's utility and the academic level of users. Research has demonstrated notable differences in the mental models of graduate and undergraduate students <cit.>. Moreover, studies have shown that younger students struggle to identify reliable sources of information <cit.>. Overall, instructors are concerned that students with underdeveloped mental models may experience shallow learning when using AI assistants. However, it is plausible that employing AI assistants could lead to improved outcomes for students in advanced courses. § DISCUSSION AND DESIGN IMPLICATIONS In light of our findings, how can we adapt CS Courses to AI assistants such as ChatGPT to prevent their misuse by students? Additionally, how can we help learners to make better use of AI Assistants to improve their learning experiences? To address these questions, we discuss the design implications in this section. §.§ Adapting Curriculum with AI-driven Tools and Strategies In contrast to educators from other disciplines and institutes <cit.>, our CS instructor participants are generally more open to using AI in their educational practices. 
We surmise that their openness stems from their familiarity with the underlying technology and the potential benefits of enhancing learning experiences and personalizing education. The majority of participants believe it is essential to explore effective methods for utilizing AI assistants while preventing their misuse. As P7 stated, “No one is going to say should we not let students use AI if it helps them to learn (...) the argument is always will the AI stop them from learning? Yeah, and if it does stop them from learning, can we ban it somehow?” Nonetheless, curriculum (e.g., problem sets) modification is a frequent task for educators and teaching staff, and it often necessitates careful planning, acquiring new knowledge, iterative processes, and a significant amount of effort from team members <cit.>. Previous research has explored and developed tools to support these endeavors with various objectives, such as preventing students from using answer keys from previous semesters, incorporating feedback from past students, or updating course materials to reflect rapidly evolving techniques, particularly in the field of computer science. Building on the insights from our study, we found a persistent inclination among educators to revise course content; however, a lack of supportive tools has led to limited advancements in this space. Following are some design implications, grounded in our findings and the existing body of work, that could enhance learning experiences. Enhancing In-class Activities with Support Tools: Many instructors highlighted their preference for conducting real-time, in-class activities to stimulate and assess student learning, bypassing the need for AI tools. Existing tools such as VizProg, PuzzleMe, and CodeOpticon have demonstrated potential for facilitating effective in-class exercises <cit.>. We propose future work to build upon these foundational tools, enhancing them with capabilities to support diverse, longer format in-class activities, and designing features to deter students from directly copying and pasting problems into other applications, similar to the approach adopted by platforms like Hackerrank <cit.>. Automate Problem Modification: Our findings revealed mixed views on the effectiveness of M1 (adding distracting information) and M2 (validating multiple answers) strategies. While some interviewees found them useful, others worried about the labor-intensive implementation, evaluation, and possible student confusion. Also, we found it is easier for instructors to verify answers for multiple choice using correct options in the ground truth and coding problems using test cases, but more effort will be required to manually verify answers for short answer problems due to potential uncertainty in student responses. As both methods required manual verification on short answers either from ChatGPT or students, future systems could provide comparison tools to enhance the visualization of the answers and help instructors efficiently verify correctness across iterations. Furthermore, future work could automate the iterative modify-evaluate process, allowing instructors to focus on decision-making. Sampling the Generated Answers: When using M2, we observed homogeneity in AI-generated responses. This could potentially hinder learning outcomes, as ideally, responses presented to the students should exhibit sufficient diversity to enhance their understanding of concepts from various angles. 
Future work could explore various prompting techniques to generate diverse answers from multiple perspectives. Drawing on the analogy of stratified sampling strategies, where samples are selected based on characteristics such as style or semantic meaning, future systems could allow instructors to efficiently gauge the similarities of generated responses through a range of metrics, and select the most distinct ones for problem modification. §.§ Designing Personalized Learning Experiences with LLMs Our research reveals that the accuracy of responses generated by ChatGPT fluctuates depending on the complexity and type of the problem. In line with prior research <cit.>, our participants expressed concerns about the potential misleading effects of such inconsistencies on learners, and the possibility of creating long-term learning impediments. Although AI tools have inherent limitations, they offer the potential for real-time feedback and personalized learning environments. The challenge, as indicated in previous studies on trustworthy AI <cit.>, is to strike a balance between trust and effective decision-making when interacting with AI assistants. To this end, we propose two directions for the development of AI-assisted learning tools that create a more reliable and personalized learning experience: Facilitating Trust Calibration ChatGPT currently lacks a mechanism to provide transparency regarding the source or reliability of its responses. Learners are left to their own devices to determine the trustworthiness of the provided answers, a task made more challenging by their incomplete or flawed mental models <cit.>. Trust calibration between users and automated systems is a complex yet essential aspect of creating effective human-computer interactions <cit.>. Misplaced trust in a misleading response can lead to long-term learning obstacles, particularly for novice learners still building their understanding of the concepts <cit.>. Prior work offers potential solutions to this issue, such as highlighting uncertain tokens in AI system's code completion, emphasizing tokens with the lowest likelihood of being generated by the generative model, or spotlighting tokens that are most likely to be edited by a programmer <cit.>. Building upon these ideas, future work should focus on developing adaptive tools that facilitate effective use of AI-generated outputs, aligning student trust with the veracity of the AI-generated responses. This might involve deploying a conversational agent capable of eliciting a student's mental model of a concept, then generating mental model adaptations based on the uncertainty and accuracy of the information provided by the AI. This way, we can pave the way for a more trustworthy and productive AI-assisted learning experience. Steering AI Tool Usage Among Students Participants advocated for an approach focused on educating students about the potential short and long-term impacts of AI tools, rather than the laborious and guidance-lacking process of modifying course materials. They believed that by imparting knowledge about the limitations and capabilities of AI tools, students would make more informed and responsible decisions about their usage. Instructors could utilize AI tools to illustrate these impacts through scenario-based examples, thus shaping students' mental models of AI tool usage. 
For instance, an example might highlight a scenario where relying on an AI to generate the correct prompt consumes more time than understanding the concept and solving the problem independently. This approach encourages students to value knowledge acquisition over blind reliance on AI tools. Concurrently, in the dawn of AI's integration into education, these intelligent tools can potentially serve as personal tutors, enhancing the learning experience. The longstanding research in Intelligent Tutoring Systems underscores the potential of AI to foster engaging and personalized learning experiences <cit.>. With recent advancements, future iterations of ITS can move away from reliance on human-crafted feedback and towards a dynamic and adaptive model that scales with students' performance. The future of education could see a shift towards a shared learning experience, where students exchange and learn from each other's interactions with AI. This approach could alleviate the burden of grading and course material creation on instructors, enabling them to focus on areas and students where their influence could be most impactful. §.§ Limitation and Future Work While we strove to collect problems across diverse topics and types, the sample size of our dataset is still relatively small, with approximately 30 problems for each course. To enhance the validity of our findings, future work should aim to collect a more extensive dataset encompassing various courses and problem types. Additionally, our participant recruitment method, which relied on personal networks, has limitations, as the interviewees are primarily from our own institutions. In future work, recruitment efforts should target a more diverse geographic range to ensure a broader representation of participants. In terms of evaluating ChatGPT's performance, our current approach does not incorporate advanced prompting techniques. It is possible that the model could achieve better results with them and future studies should explore the use of advanced prompting methods to further assess the model's performance and potential improvements. § CONCLUSION In conclusion, the primary objective of this study is to provide a comprehensive analysis of the impact of AI assistants, such as ChatGPT, on computer science (CS) education. Our research offers valuable insights into ChatGPT's problem-solving capabilities within CS courses. We examined and evaluated two problem modification methods, innovated from previous research, to prevent potential misuse of ChatGPT by students. Through interviews with CS instructors, we gauged their understanding of ChatGPT's capabilities, collected their assessment of our problem modification methods, and delved into their concerns regarding the use of AI assistants in CS education. Based on our findings, we have suggested design implications to aid instructors in modifying their materials and integrating AI assistants into CS education. Our study contributes to the growing body of research on the challenges and opportunities that AI presents to society, specifically to the ongoing discussion on the use of AI in educational settings, and offers a relevant perspective that can inform and shape future policy, practice, and research in the field of CS education. By building on these insights, we aim to advance the understanding of how AI should be used in CS education, and provide guidance for educators seeking to adapt their course to AI assistants' capabilities to mitigate misuse and improve students' learning experiences. 
§ EXAMPLES OF PROBLEM MODIFICATION METHODS §.§ Example of Method 1 §.§.§ Original problem Which of the statements below is true? (Select all that apply) A. The Edmonds-Karp algorithm is always faster than the Ford-Fulkerson algorithm. B. The sum of the capacities of the edges of a network equals the sum of the capacities of the edges of any residual network. C. The Ford-Fulkerson algorithms runs in polynomial time on graphs with unit edge capacities. §.§.§ Modified problem after applying M1 Which of the statements below is true? (Select all that apply) A. The Edmonds-Karp algorithm is always faster than the Ford-Fulkerson algorithm. The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. B. The sum of the capacities of the edges of a network equals the sum of the capacities of the edges of any residual network. C. The Ford-Fulkerson algorithms runs in polynomial time on graphs with unit edge capacities. The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. §.§ Example of Method 2 §.§.§ Original problem Given an int8_t variable named X, if X is divisible by a power of 2, what would be the right most bits in X? Or, what is the bit pattern for X in the right most bits? A. The rightmost N-1 bits will be 0, where N is 2^N = X. B. The rightmost N bits will be 0, where N is 2N = X. C. There is no predicable pattern in the bits. D. The rightmost N+1 bits will be 0, where N is 2^N = X. §.§.§ Modified problem after applying M2 Review the “Problem” and “ChatGPT Answers” below. For the answers generated by ChatGPT, distinguish between correct and incorrect responses, and justify your answer: Problem: Given an int8_t variable named X, if X is divisible by a power of 2, what would be the right most bits in X? Or, what is the bit pattern for X in the right most bits? A. The rightmost N-1 bits will be 0, where N is 2^N = X. B. The rightmost N bits will be 0, where N is 2N = X. C. There is no predicable pattern in the bits. D. The rightmost N+1 bits will be 0, where N is 2^N = X. ChatGPT Answers:
http://arxiv.org/abs/2306.03944v1
20230606181051
The Formation of Magellanic System and the total mass of Large Magellanic Cloud
[ "Jianling Wang", "Francois Hammer", "Yanbin Yang", "Maria-Rosa L. Cioni" ]
astro-ph.GA
[ "astro-ph.GA" ]
Proceedings of IAU Symposium 379 P. Bonifacio, M.-R. Cioni, F. Hammer, M. Pawlowski, and S. Taibi, eds. ^1 CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Beijing 100101, China ^2 GEPI, Observatoire de Paris, CNRS, Place Jules Janssen, F-92195 Meudon, France ^3 Leibniz-Institüt für Astrophysik Potsdam, An der Sternwarte 16, D-14482 Potsdam, Germany The Magellanic Stream is unique in sampling the MW potential from ∼50 kpc to 300 kpc, and is also unique in constraining the LMC mass, an increasingly important question for Local Group/Milky Way modeling. Here we compare strengths and weaknesses of the two types of models (tidal and ram-pressure) of the Magellanic Stream formation. We present our modeling of the formation of the Magellanic System, including the most recent discoveries in the Stream, in the Bridge, and at the outskirts of the Magellanic Clouds. This model has been successful in predicting the most recent observations of both the stellar and the gas phase. It appears to be an over-constrained model and provides a good path to investigate the Stream properties. In particular, this model requires an LMC mass significantly smaller than 10^11 M_⊙. Galaxies: Magellanic Clouds, Galaxies: interactions, Galaxy: halo, Galaxy: structure The Formation of Magellanic System and the total mass of Large Magellanic Cloud Jianling WANG^1, Francois Hammer^2, Yanbin Yang^2, Maria-Rosa L. Cioni^3 July 31, 2023 =============================================================================== § INTRODUCTION The Magellanic Stream (MS) and Leading Arm (LA) subtend an angle of 230^∘ and were identified as being anchored to the Magellanic Clouds in 1974 by <cit.>. The nature of their formation was still considered unknown in 2012 <cit.>. Modern observations of proper motions from both HST and GAIA indicate that the Clouds are presently at their first passage about the Milky Way <cit.>. Besides the large amount of neutral gas distributed along the Stream, there is mounting evidence that 3-4 times more ionized than neutral gas has been deposited along the Stream <cit.>. In the first-infall frame, the explanations of the MS can be broadly classified into two schemes: one is the tidal tail model <cit.>, the other is the ram-pressure tail model <cit.>. In the tidal tail model, the MS is generated by a close mutual interaction 1-2 Gyr ago, before the MCs entered the halo of the MW. In this scenario, the SMC is assumed to be a long-lived satellite of the LMC, which requires an LMC mass in excess of 10^11 M_⊙. There are several major limitations of the tidal model. First, it falls short of the amount of neutral gas observed in the Stream by a factor of 10. Second, it is unable to reproduce the huge amount of ionized gas that is observed along the Stream. Third, it can only produce a single stream filament, while the MS is made of two filaments, which have been clearly identified by chemical, kinematic, and morphological analyses <cit.>. Fourth, no stars have been observed along the Stream, even though tidal stripping should affect stars as well as gas. Revised tidal models have recently been made by including a hot corona of the LMC to amend some of the above drawbacks <cit.>. But these models require either an unreasonably massive corona of the LMC, even larger than that of the MW, or a dramatic change of the Cloud orbits, which cannot reproduce their observed proper motions within 3σ.
In contrast to the tidal model, the ram-pressure plus collision model <cit.> naturally reproduces most observational properties associated with the Magellanic System, for instance the dual filaments, the huge amount of ionized gas, and the absence of stars in the Stream. Interestingly, several observations made after the elaboration of the model have been reproduced without fine tuning. In this scenario, the Leading Arm is the trailing gas of front-runner dwarfs <cit.>, which is well supported by the determination of low metal abundances in 3 parts of this structure (see Philip Richter's contribution). § TWO HYDRODYNAMIC FILAMENTS FORMED BY RAM-PRESSURE PLUS COLLISION In the frame of the ram-pressure and collision model, we have built a stable model of the Milky Way which includes a hot gas corona. The progenitors of the MCs are gas-rich dwarf galaxies before entering the halo of the MW. Figure <ref> compares the observed neutral and ionized gas to this model. The model naturally generates two HI streams behind the MCs, and a huge amount of ionized gas deposited along the stream. The strong mutual interaction between the MCs has totally stretched the SMC into a 'cigar' shape through gravitational tides, which is well reproduced by this model as shown in Figure <ref>. Recent observations indicate that there is an offset between the ancient stars and the young stellar population in the Bridge region <cit.>, which is also well reproduced by this model. § MANY PREDICTIONS ARE CONFIRMED BY OBSERVATIONS With the progress of observations, new data and findings provide essential tests for any model aiming at reproducing the formation of the Magellanic System. We will show that our ram-pressure plus collision model passes these tests, with many predictions confirmed by recent observations. §.§ Two separated populations in the Bridge region Observations indicate that there are two different populations in the Bridge region, which are separated in both distance and kinematics <cit.>. <cit.> found two populations of red clump stars in the Bridge region, extending from the SMC to the LMC, which show different brightnesses. The bright and faint red clump populations show different distances and kinematics, which is consistent with the findings of <cit.>. In our model, the two populations originate from the SMC and are tidally stripped by the LMC. The foreground population corresponds to the disk component of the SMC, which is tidally stripped and shows interaction debris stretching from the SMC to the LMC, while the background population comes from the spheroid component, which is less affected by the LMC tides and is distributed behind the Bridge region. This model naturally reproduces this new observation, as shown in Figure <ref>. §.§ Periphery of the Clouds With deep observations, many faint features in the periphery of the MCs have been discovered, as shown in the left panel of Figure <ref> from <cit.>, many of which have been well predicted by our model, as shown in the right panel of Figure <ref>. The North Tidal Arm (NTA) is the largest tidal feature rooted in the LMC disk, and it is confirmed to originate from the LMC on the basis of its metallicity, distance, and kinematics <cit.>. Before these observations, the model of <cit.> predicted the existence of the NTA, which is formed by the Galactic tides exerted on the disk of the LMC. From Gaia EDR3, we have selected stars belonging to the NTA according to its morphology, their position, and their proper motions. With this sample of NTA stars, we cross-matched with literature results to obtain their distances and metallicities.
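A schematic version of this selection and cross-matching step is sketched below in Python with astropy; the file names, column names, sky footprint, proper-motion cuts, and matching radius are hypothetical placeholders rather than the actual values adopted in our analysis.

from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.table import Table

gaia = Table.read("gaia_edr3_lmc_region.fits")        # hypothetical Gaia EDR3 extraction

# Hypothetical cuts: keep stars inside an assumed NTA footprint with LMC-like proper motions
in_footprint = (gaia["ra"] > 70) & (gaia["ra"] < 100) & (gaia["dec"] > -60) & (gaia["dec"] < -45)
pm_like_lmc = (abs(gaia["pmra"] - 1.9) < 0.5) & (abs(gaia["pmdec"] - 0.3) < 0.5)
nta = gaia[in_footprint & pm_like_lmc]

# Cross-match the NTA sample with a literature catalogue providing distances and metallicities
lit = Table.read("literature_catalogue.fits")          # hypothetical literature compilation
nta_coords = SkyCoord(nta["ra"], nta["dec"], unit="deg")
lit_coords = SkyCoord(lit["ra"], lit["dec"], unit="deg")
idx, sep2d, _ = nta_coords.match_to_catalog_sky(lit_coords)
matched = sep2d < 1 * u.arcsec                         # assumed matching radius
nta_dist = lit["distance"][idx][matched]
nta_feh = lit["feh"][idx][matched]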
The distance and metallicity variations as a function of radius from the LMC are shown in Figure <ref>. In order to compare the simulation model with observations, we assigned metallicities to the particles of the simulated LMC following the observational constraints <cit.>. <cit.> selected red giant stars of the LMC from DR2, and used a machine-learning method with Gaia+2MASS+WISE data to estimate photometric metallicities for these stars. With this data set, they estimated the radial metallicity profile [Fe/H] = α R + b, finding α=-0.048±0.001 dex kpc^-1 and b=-0.656±0.004 dex. We use this relation to paint the initial metallicity onto our modeled LMC. For simplicity, we have adopted the approximate relation [Fe/H] ∼ [M/H]. In Figure <ref>, the modeled data are shown with red points. The simulation model can explain well the observational results for both the distance and metallicity profiles, without fine tuning. Painting the currently observed metallicity profile of the LMC <cit.> also reproduces the NTA, which indicates that the LMC metallicity profile was settled before the formation of the NTA, or that the mutual interaction of the MCs and the associated gas loss have only a marginal influence on the metallicity structure of the LMC. § CONCLUSION The ram-pressure plus collision model <cit.> not only reproduces the MS, but also succeeds in predicting many observations that have been made in the meantime (see a description in ). This model naturally reproduces the two inter-twisted filaments of the HI MS, as well as the huge amount of ionized gas associated with the MS. This ability also validates that this model goes in the right direction to disentangle the mystery of the Magellanic System formation (Mathewson, private communication). We conjecture that the LMC mass has to be small (a few times 10^10 M_⊙) to form the Magellanic Stream, though further studies are needed to explore the exact mass range. [Anders et al(2022)Anders, Khalatyan, Queiroz, Chiappini, Ardèvol, Casamiquela, Figueras, Jiménez-Arranz, Jordi, Monguió, Romero-Gómez, Altamirano, Antoja, Assaad, Cantat-Gaudin, Castro-Ginard, Enke, Girardi, Guiglion, Khan, Luri, Miglio, Minchev, Ramos, Santiago, & Steinmetz]Anders2022 Anders F. et al., 2022, A&A, 658, A91 [Belokurov et al(2017)Belokurov, Erkal, Deason, Koposov, De Angeli, Evans, Fraternali, & Mackey]Belokurov2017 Belokurov V., Erkal D., Deason A. J., Koposov S. E., De Angeli F., Evans D. W., Fraternali F., Mackey D., 2017, MNRAS, 466, 4711 [Besla et al(2012)Besla, Kallivayalil, Hernquist, van der Marel, Cox, & Kereš]Besla2012 Besla G., Kallivayalil N., Hernquist L., van der Marel R. P., Cox T. J., Kereš D., 2012, MNRAS, 421, 2109 [Cullinane et al(2022a)Cullinane, Mackey, Da Costa, Erkal, Koposov, & Belokurov]Cullinane2022a Cullinane L. R., Mackey A. D., Da Costa G. S., Erkal D., Koposov S. E., Belokurov V., 2022a, MNRAS, 510, 445 [Cullinane et al(2022b)Cullinane, Mackey, Da Costa, Erkal, Koposov, & Belokurov]Cullinane2022b Cullinane L. R., Mackey A. D., Da Costa G. S., Erkal D., Koposov S. E., Belokurov V., 2022b, MNRAS, 512, 4798 [Cullinane et al(2020)Cullinane, Mackey, Da Costa, Koposov, Belokurov, Erkal, Koch, Kunder, & Nataf]Cullinane2020 Cullinane L. R. et al., 2020, MNRAS, 497, 3055 [D'Onghia & Fox(2016)]Donghia2016 D'Onghia E., Fox A. J., 2016, ARA&A, 54, 363 [Fox et al(2014)Fox, Wakker, Barger, Hernandez, Richter, Lehner, Bland-Hawthorn, Charlton, Westmeier, Thom, Tumlinson, Misawa, Howk, Haffner, Ely, Rodriguez-Hidalgo, & Kumari]Fox2014 Fox A. J.
et al., 2014, ApJ, 787, 147 [Gatto et al(2022)Gatto, Ripepi, Bellazzini, Tortora, Tosi, Cignoni, & Longo]Gatto2022 Gatto M., Ripepi V., Bellazzini M., Tortora C., Tosi M., Cignoni M., Longo G., 2022, ApJ, 931, 19 [Gaia Collaboration et al(2021)Gaia Collaboration, Luri, Chemin, Clementini, Delgado, McMillan, Romero-Gómez, Balbinot, Castro-Ginard, Mor, Ripepi, Sarro, Cioni, Fabricius, Garofalo, Helmi, Muraveva, Brown, Vallenari, Prusti, de Bruijne, Babusiaux, Biermann, Creevey, Evans, Eyer, Hutton, Jansen, Jordi, Klioner, Lammers, Lindegren, Mignard, Panem, Pourbaix, Randich, Sartoretti, Soubiran, Walton, Arenou, Bailer-Jones, Bastian, Cropper, Drimmel, Katz, Lattanzi, van Leeuwen, Bakker, Castañeda, De Angeli, Ducourant, Fouesneau, Frémat, Guerra, Guerrier, Guiraud, Jean-Antoine Piccolo, Masana, Messineo, Mowlavi, Nicolas, Nienartowicz, Pailler, Panuzzo, Riclet, Roux, Seabroke, Sordo, Tanga, Thévenin, Gracia-Abril, Portell, Teyssier, Altmann, Andrae, Bellas-Velidis, Benson, Berthier, Blomme, Brugaletta, Burgess, Busso, Carry, Cellino, Cheek, Damerdji, Davidson, Delchambre, Dell'Oro, Fernández-Hernández, Galluccio, García-Lario, Garcia-Reinaldos, González-Núñez, Gosset, Haigron, Halbwachs, Hambly, Harrison, Hatzidimitriou, Heiter, Hernández, Hestroffer, Hodgkin, Holl, Janßen, Jevardat de Fombelle, Jordan, Krone-Martins, Lanzafame, Löffler, Lorca, Manteiga, Marchal, Marrese, Moitinho, Mora, Muinonen, Osborne, Pancino, Pauwels, Recio-Blanco, Richards, Riello, Rimoldini, Robin, Roegiers, Rybizki, Siopis, Smith, Sozzetti, Ulla, Utrilla, van Leeuwen, van Reeven, Abbas, Abreu Aramburu, Accart, Aerts, Aguado, Ajaj, Altavilla, Álvarez, Álvarez Cid-Fuentes, Alves, Anderson, Anglada Varela, Antoja, Audard, Baines, Baker, Balaguer-Núñez, Balog, Barache, Barbato, Barros, Barstow, Bartolomé, Bassilana, Bauchet, Baudesson-Stella, Becciani, Bellazzini, Bernet, Bertone, Bianchi, Blanco-Cuaresma, Boch, Bombrun, Bossini, Bouquillon, Bragaglia, Bramante, Breedt, Bressan, Brouillet, Bucciarelli, Burlacu, Busonero, Butkevich, Buzzi, Caffau, Cancelliere, Cánovas, Cantat-Gaudin, Carballo, Carlucci, Carnerero, Carrasco, Casamiquela, Castellani, Castro Sampol, Chaoul, Charlot, Chiavassa, Comoretto, Cooper, Cornez, Cowell, Crifo, Crosta, Crowley, Dafonte, Dapergolas, David, David, de Laverny, De Luise, De March, De Ridder, de Souza, de Teodoro, de Torres, del Peloso, del Pozo, Delgado, Delisle, Di Matteo, Diakite, Diener, Distefano, Dolding, Eappachen, Enke, Esquej, Fabre, Fabrizio, Faigler, Fedorets, Fernique, Fienga, Figueras, Fouron, Fragkoudi, Fraile, Franke, Gai, Garabato, Garcia-Gutierrez, García-Torres, Gavras, Gerlach, Geyer, Giacobbe, Gilmore, Girona, Giuffrida, Gomez, Gonzalez-Santamaria, González-Vidal, Granvik, Gutiérrez-Sánchez, Guy, Hauser, Haywood, Hidalgo, Hilger, Hładczuk, Hobbs, Holland, Huckle, Jasniewicz, Jonker, Juaristi Campillo, Julbe, Karbevska, Kervella, Khanna, Kochoska, Kontizas, Kordopatis, Korn, Kostrzewa-Rutkowska, Kruszyńska, Lambert, Lanza, Lasne, Le Campion, Le Fustec, Lebreton, Lebzelter, Leccia, Leclerc, Lecoeur-Taibi, Liao, Licata, Lindstrøm, Lister, Livanou, Lobel, Madrero Pardo, Managau, Mann, Marchant, Marconi, Marcos Santos, Marinoni, Marocco, Marshall, Martin Polo, Martín-Fleitas, Masip, Massari, Mastrobuono-Battisti, Mazeh, Messina, Michalik, Millar, Mints, Molina, Molinaro, Molnár, Montegriffo, Morbidelli, Morel, Morris, Mulone, Munoz, Murphy, Musella, Noval, Ordénovic, Orrù, Osinde, Pagani, Pagano, Palaversa, Palicio, Panahi, Pawlak, Peñalosa Esteller, Penttilä, Piersimoni, 
Pineau, Plachy, Plum, Poggio, Poretti, Poujoulet, Prša, Pulone, Racero, Ragaini, Rainer, Raiteri, Rambaux, Ramos, Ramos-Lerate, Re Fiorentin, Regibo, Reylé, Riva, Rixon, Robichon, Robin, Roelens, Rohrbasser, Rowell, Royer, Rybicki, Sadowski, Sagristà Sellés, Sahlmann, Salgado, Salguero, Samaras, Gimenez, Sanna, Santoveña, Sarasso, Schultheis, Sciacca, Segol, Segovia, Ségransan, Semeux, Siddiqui, Siebert, Siltala, Slezak, Smart, Solano, Solitro, Souami, Souchay, Spagna, Spoto, Steele, Steidelmüller, Stephenson, Süveges, Szabados, Szegedi-Elek, Taris, Tauran, Taylor, Teixeira, Thuillot, Tonello, Torra, Torra, Turon, Unger, Vaillant, van Dillen, Vanel, Vecchiato, Viala, Vicente, Voutsinas, Weiler, Wevers, Wyrzykowski, Yoldas, Yvard, Zhao, Zorec, Zucker, Zurbach, & Zwitter]Luri2020 Gaia Collaboration et al., 2021, A&A, 649, A7 [Grady, Belokurov & Evans(2021)Grady, Belokurov, & Evans]Grady2021 Grady J., Belokurov V., Evans N. W., 2021, ApJ, 909, 150 [Hammer et al(2015)Hammer, Yang, Flores, Puech, & Fouquet]Hammer2015 Hammer F., Yang Y. B., Flores H., Puech M., Fouquet S., 2015, ApJ, 813, 110 [Huang et al(2022)Huang, Beers, Wolf, Lee, Onken, Yuan, Shank, Zhang, Wang, Shi, & Fan]Huang2022 Huang Y. et al., 2022, ApJ, 925, 164 [James et al(2021)James, Subramanian, Omkumar, Mary, Bekki, Cioni, de Grijs, El Youssoufi, Kartha, Niederhofer, & van Loon]James2021 James D. et al., 2021, MNRAS, 508, 5854 [Kallivayalil et al(2006)Kallivayalil, van der Marel, Alcock, Axelrod, Cook, Drake, & Geha]Kallivayalil2006 Kallivayalil N., van der Marel R. P., Alcock C., Axelrod T., Cook K. H., Drake A. J., Geha M., 2006, The Astrophysical Journal, 638, 772 [Kallivayalil et al(2013)Kallivayalil, van der Marel, Besla, Anderson, & Alcock]Kallivayalil2013 Kallivayalil N., van der Marel R. P., Besla G., Anderson J., Alcock C., 2013, The Astrophysical Journal, 764, 161 [Lucchini et al(2020)Lucchini, D'Onghia, Fox, Bustard, Bland-Hawthorn, & Zweibel]Lucchini2020 Lucchini S., D'Onghia E., Fox A. J., Bustard C., Bland-Hawthorn J., Zweibel E., 2020, Nature, 585, 203 [Lucchini, D'Onghia & Fox(2021)Lucchini, D'Onghia, & Fox]Lucchini2021 Lucchini S., D'Onghia E., Fox A. J., 2021, ApJL, 921, L36 [Mastropietro(2010)]Mastropietro2010 Mastropietro C., 2010, in American Institute of Physics Conference Series, Vol. 1240, Hunting for the Dark: the Hidden Side of Galaxy Formation, Debattista V. P., Popescu C. C., eds., pp. 150–153 [Mathewson(2012)]Mathewson2012 Mathewson D., 2012, Journal of Astronomical History and Heritage, 15, 100 [Mathewson, Cleary & Murray(1974)Mathewson, Cleary, & Murray]Mathewson1974 Mathewson D. S., Cleary M. N., Murray J. D., 1974, The Astrophysical Journal, 190, 291 [Nidever et al(2010)Nidever, Majewski, Butler Burton, & Nigra]Nidever2010 Nidever D. L., Majewski S. R., Butler Burton W., Nigra L., 2010, ApJ, 723, 1618 [Omkumar et al(2021)Omkumar, Subramanian, Niederhofer, Diaz, Cioni, El Youssoufi, Bekki, de Grijs, & van Loon]Omkumar2021 Omkumar A. O. et al., 2021, MNRAS, 500, 2757 [Onken et al(2019)Onken, Wolf, Bessell, Chang, Da Costa, Luvaul, Mackey, Schmidt, & Shao]Onken2019 Onken C. A. et al., 2019, PASA, 36, e033 [Piatek, Pryor & Olszewski(2008)Piatek, Pryor, & Olszewski]Piatek2008 Piatek S., Pryor C., Olszewski E. W., 2008, The Astronomical Journal, 135, 1024 [Richter et al(2017)Richter, Nuza, Fox, Wakker, Lehner, Ben Bekhti, Fechner, Wendt, Howk, Muzahid, Ganguly, & Charlton]Richter2017 Richter P. 
et al., 2017, A&A, 607, A48 [Ripepi et al(2017)Ripepi, Cioni, Moretti, Marconi, Bekki, Clementini, de Grijs, Emerson, Groenewegen, Ivanov, Molinaro, Muraveva, Oliveira, Piatti, Subramanian, & van Loon]Ripepi2017 Ripepi V. et al., 2017, MNRAS, 472, 808 [Tepper-García et al(2019)Tepper-García, Bland-Hawthorn, Pawlowski, & Fritz]Tepper-Garcia2019 Tepper-García T., Bland-Hawthorn J., Pawlowski M. S., Fritz T. K., 2019, MNRAS, 488, 918 [van der Marel & Kallivayalil(2014)]vdM2014 van der Marel R. P., Kallivayalil N., 2014, ApJ, 781, 121 [Wang et al(2019)Wang, Hammer, Yang, Ripepi, Cioni, Puech, & Flores]Wang2019 Wang J., Hammer F., Yang Y., Ripepi V., Cioni M.-R. L., Puech M., Flores H., 2019, MNRAS, 486, 5907 [Wang, Hammer & Yang(2022)Wang, Hammer, & Yang]Wang2022 Wang J., Hammer F., Yang Y., 2022, MNRAS, 515, 940 [Yang et al(2014)Yang, Hammer, Fouquet, Flores, Puech, Pawlowski, & Kroupa]Yang2014 Yang Y., Hammer F., Fouquet S., Flores H., Puech M., Pawlowski M. S., Kroupa P., 2014, MNRAS, 442, 2419 [Zivick et al(2018)Zivick, Kallivayalil, van der Marel, Besla, Linden, Kozłowski, Fritz, Kochanek, Anderson, Sohn, Geha, & Alcock]Zivick2018 Zivick P. et al., 2018, ApJ, 864, 55 Müller Oliver: Do you think we can find other dwarfs showing a 'cigar' shape like the SMC among extragalactic dwarfs? Jianling Wang: This is difficult, since we need precise distances for individual stars to reveal dwarf shapes in 3D.
http://arxiv.org/abs/2306.10600v2
20230618171734
A Smoothed FPTAS for Equilibria in Congestion Games
[ "Yiannis Giannakopoulos" ]
cs.GT
[ "cs.GT", "cs.CC", "cs.DS" ]
A Smoothed FPTAS for Equilibria in Congestion Games Yiannis Giannakopoulos ===================================================================================== We present a fully polynomial-time approximation scheme (FPTAS) for computing equilibria in congestion games, under smoothed running-time analysis. More precisely, we prove that if the resource costs of a congestion game are randomly perturbed by independent noises, whose density is at most ϕ, then any sequence of (1+ε)-improving dynamics will reach a (1+ε)-approximate pure Nash equilibrium (PNE) after an expected number of steps which is strongly polynomial in 1/ε, ϕ, and the size of the game's description. Our results establish a sharp contrast to the traditional worst-case analysis setting, where it is known that better-response dynamics take exponentially long to converge to α-approximate PNE, for any constant factor α≥ 1. As a matter of fact, computing α-approximate PNE in congestion games is PLS-hard. We demonstrate how our analysis can be applied to various different models of congestion games including general, step-function, and polynomial cost, as well as fair cost-sharing games (where the resource costs are decreasing). It is important to note that our bounds do not depend explicitly on the cardinality of the players' strategy sets, and thus the smoothed FPTAS is readily applicable to network congestion games as well. § INTRODUCTION The systematic study of congestion games has its origins in the seminal work of Rosenthal1973a. Rosenthal, via a remarkably elegant construction, proved that (unweighted) congestion games are potential games Monderer1996a, thereby establishing that they always have pure Nash equilibria (PNE). Since then, congestion games have been extensively studied in (algorithmic) game theory and combinatorial optimization, as they provide a powerful abstraction for modelling incentives in problems where different agents compete over a common collection of resources. From a computational perspective, the problem of computing a PNE of a congestion game is a “canonical” local optimization problem, being a prominent member of the complexity class PLS introduced by Johnson:1988aa. As a matter of fact, as was first shown by FPT04, the problem is PLS-complete. This hardness is two-fold. First, it implies that, unless P=PLS, there does not exist an efficient algorithm to compute equilibria in congestion games. Second, it proves that better-response dynamics, which is simply the implementation of standard local search in congestion games, can take exponentially long to converge to a PNE. It is important to emphasize that the latter result is unconditional, that is, it does not depend on any complexity-theoretic assumptions. Ackermann2008 showed that the PLS-completeness also holds for network congestion games, which are defined succinctly over a graph structure, and even for combinatorially very simple instances with linear resource cost functions. Given the aforementioned hardness results, the natural direction to investigate is the complexity of computing approximate PNE. Unfortunately, it turns out that the problem does not become easier: for any given constant α, Skopalik2008 showed that computing an α-PNE is PLS-complete, and furthermore, proved the unconditional existence of exponentially long better-response sequences, even for “well-behaved” resource costs.
Our goal in this paper is to demystify this dramatic complexity barrier, by proving that the hard instances in congestion games are actually rather “fragile”. To do this formally, we deploy the framework of smoothed analysis. Smoothed analysis was first introduced in the groundbreaking work of Spielman2004, as a model for providing rigorous justification for the empirical fact that the Simplex algorithm for linear programming, although provably having exponential worst-case running-time, in practice performs exceptionally well. Their idea was very natural and remarkably effective: after an input instance has been adversarially fixed, random perturbations are introduced by “nature”, independently, to all numerical parameters. Then, the running-time of an algorithm is measured in expectation with respect to this randomness, termed smoothed running-time. In the original model of Spielman2004, the perturbations are Gaussian around 0, parameterized by their standard deviation σ>0; as σ→ 0 this stochastic model converges to the original, fixed worst-case instance. The seminal result of Spielman2004 says that the smoothed running-time of Simplex (under the shadow-vertex pivot rule) is polynomial in 1/σ (and the size of the input). One way to interpret this is that the “bad” instances for the performance of Simplex (see, e.g., the Klee-Minty cube Klee1972) are “rare” or “isolated”, and exponential precision is needed in their description in order for them to be effective. Since Spielman2004, smoothed analysis has been successfully applied to a wide range of combinatorial problems, including, e.g., integer programming Beier2006a,Roeglin2007a, the k-means method for clustering Arthur11, multiobjective optimization Brunsch15, TSP Englert_2016,Englert14, and an impressive line of work on Local Max-Cut ElsasserT11,Etscheid17,Angel:2017aa,Chen20,Bibak21. As far as game-theoretic problems are concerned, Boodaghians20 studied the smoothed complexity of finding PNE in network coordination games. Congestion games had not been studied from a smoothed analysis perspective until very recently, when ggm2022_arxiv showed that (exact) PNE can be found in smoothed polynomial time for a (rather restrictive) class of games that satisfy a certain “constant-restraint” assumption. For a more in-depth view of smoothed analysis we refer to, e.g., Roughgarden2021,Spielman:2009aa,Beier2006a,ggm2022_arxiv. A more detailed presentation of the specific smoothness framework that we employ in this paper is given in <ref>. We discuss further related work, in particular regarding various results about the computability of approximate equilibria for the different models of congestion games that we study in this paper, in the following, more technical sections. §.§ Our Results and Techniques In this paper we study the smoothed complexity of computing (approximate) pure Nash equilibria (PNE) in (unweighted) congestion games. For our smoothed analysis framework we follow the one recently proposed in ggm2022_arxiv for congestion games, where the costs of the resources at the different possible loads are independently perturbed according to an arbitrary probability distribution with density at most ϕ. We formalize our general congestion game model in <ref>, where we also define all necessary game-theoretic fundamentals. In <ref> we discuss (approximate) better-response dynamics (BRD) and define our FPTAS (see <ref>).
Our main result is stated in <ref>: in general congestion games, a (1+ε)-approximate PNE can be computed in smoothed strongly polynomial time in 1/ε (and the size of the game's description). More precisely, (1+ε)-approximate BRD terminate after at most Õ(ε^-1ϕ n^3 m^5) iterations (in expectation), where n is the number of players and m the number of resources. The proof is given in <ref> and the exact bound can be found in (<ref>). Furthermore, <ref> contains similar positive results for additional, well-established special classes of congestion games. These differ from general congestion games in the way in which the resource cost functions are defined and represented. Namely, we study: step-function costs with (at most) d break points; polynomial costs of constant degree d (and nonnegative coefficients); and fair cost-sharing games, where a fixed cost is equally split among the players who use the corresponding resource. The corresponding smoothed complexity bounds on the number of iterations of (1+ε)-BRD are, respectively: Õ(ε^-1ϕ n m^5 d^3) (see <ref> for the proof), Õ(ε^-1ϕ n^d+2 m^5) (<ref>), and Õ(ε^-1ϕ n m^5). It is worth mentioning that the aforementioned bounds hold for any starting configuration of the dynamics, and for any choice of the intermediate pivoting rule for the player deviations. Furthermore, all our results are immediately valid for network congestion games (see <ref> for a definition and <ref> for a discussion) as well, since the running time of our FPTAS does not depend on the number of strategies available to the players (which, for network games, can be exponential in n and m). The technique for achieving the smoothed polynomial complexity bounds in this paper can be distilled into two core steps. First, we establish that the number of iterations of BRD can be upper bounded by an appropriate function of the ratio between the maximum and minimum resource costs of our game; for general congestion games, for example, this can be seen in (<ref>). Similar inequalities hold for the other special congestion game models that we study, and they all arise from the algebraic relation between player costs and the value of Rosenthal's potential (see <ref> for definitions). Second, we show that when this expression is paired with a trivial (exponential, based on exhaustive search) bound on the running time (see (<ref>)), the expectation of the resulting quantity grows polynomially. This probabilistic property is the cornerstone of our derivation, and we present it in its own <ref>, before we dive into the rest of the technicalities in our proofs. The presentation in <ref> is essentially self-contained, independent of congestion games, and <ref> applies to general ϕ-smooth random variables. As a result, it may prove useful for future work in smoothed analysis, whenever similar bounds involving the ratios of the numerical parameters of the problem can be shown to hold. § MODEL AND NOTATION We will use ℕ, ℝ, and ℝ_+ to denote the sets of nonnegative integer, real, and nonnegative real numbers, respectively. For n∈ℕ we denote [n] ≔ {1,2,…,n} and [0..n] ≔ {0}∪[n]. For a random variable X we use F_X for its cumulative distribution function (cdf) and f_X for its probability density function (pdf). In this paper we only deal with (absolutely) continuous, real-valued random variables. We will use ⪯ for the usual (first-order) stochastic ordering; that is, for two random variables X,Y: X ⪯ Y if and only if F_Y(t) ≤ F_X(t) for all t∈ℝ.
§.§ Congestion Games A congestion game 𝒢=(N,R,{S_i}_i∈ N,{c_r}_r∈ R) is defined by: * a finite set of players N=[n], * a finite set of resources R={r_1,r_2,…,r_m}, * for each player i, a strategy set S_i⊆ 2^R∖{∅}; each element s_i∈ S_i is thus a nonempty set of resources, and is called a strategy for player i, and * for each resource r∈ R, a cost function c_r:[n]→ℝ_+; c_r(ℓ) is interpreted as the cost (or congestion) of resource r when ℓ players use it. A network congestion game is a congestion game where the strategy sets {S_i}_i∈ N are not given explicitly, but induced via an underlying graph structure. More precisely, we are given a directed graph G=(V,E) and, for each player i∈ N, a pair of nodes (o_i,d_i)∈ V. The resources of the game are exactly the edges of the graph, i.e. R=E. Then, the strategies of player i are all simple o_i→ d_i paths in G. Notice how, in the definition above, we do not enforce any monotonicity requirement on the resource cost functions, since our main result does not depend on such an assumption and applies to general congestion games with arbitrary cost functions (see case (<ref>) of <ref> and the corresponding proof in <ref>). We discuss more specialized congestion game models, including step-function and polynomial costs (which are nondecreasing) and cost-sharing games (where the costs are decreasing), in their corresponding <ref>. In such models, the resource costs c_r(ℓ) are not given explicitly, but rather via a more succinct functional expression. A strategy profile of a congestion game 𝒢 is a collection of strategies, one for each player: s=(s_1,s_2,…,s_n)∈S ≔ S_1× S_2×…× S_n. For a strategy profile s, we use ℓ_r( s) for the load it induces on a resource r∈ R, that is, the number of players that use it: ℓ_r( s) ≔ |{i∈ N : r∈ s_i}|. This induces a cost to the players, equal to the sum of the costs of the resources that they are using. That is, the cost of player i∈ N under a strategy profile s∈S is: C_i( s) ≔ ∑_r∈ s_i c_r(ℓ_r( s)). For an α≥ 1, we will say that a strategy profile is an α-approximate pure Nash equilibrium (α-PNE) if no player can improve their cost by more than a (multiplicative) factor of α by unilaterally deviating to another strategy. Formally, s∈S is an α-PNE if[Here we are using the standard game-theoretic notation of s_-i to denote the (n-1)-dimensional vector that remains from the n-dimensional vector s if we remove its i-th coordinate. In that way, for any vector s we can write s=(s_i,s_-i).] C_i( s) ≤α· C_i(s_i',s_-i), for all i∈ N, s_i'∈ S_i. For the special case of α=1 this definition coincides with the stricter, standard notion of a pure Nash equilibrium. To emphasize this, sometimes a 1-PNE is called an exact PNE. It is not hard to verify that any exact PNE is also an α-PNE, for any α≥ 1. The Rosenthal potential of a congestion game 𝒢 is the function Φ:S→ℝ_+ given by Φ( s) ≔ ∑_r∈ R∑_j=1^ℓ_r( s) c_r(j). This is due to the work of Rosenthal1973a, who first defined the quantity in (<ref>) and proved that, for all strategy profiles s of a congestion game, Φ( s) - Φ(s_i',s_-i) = C_i( s) - C_i(s_i',s_-i). An immediate consequence of (<ref>), also shown by Rosenthal1973a, is that a minimizer of Rosenthal's potential s^*∈ argmin_ s ∈SΦ( s) is an exact PNE. This establishes the existence of α-PNE (for any α≥ 1) in all congestion games. The goal of the present paper is to study computationally efficient methods for computing such an α-PNE, for a factor α as close to 1 as possible.
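To make the preceding definitions concrete, the following minimal Python sketch (not part of the paper; all identifiers and the toy instance are our own) represents an explicitly given congestion game and computes loads, player costs, Rosenthal's potential, and an α-PNE check by brute force.

```python
from typing import Dict, List, Sequence, Set

class CongestionGame:
    """Illustrative explicit-representation congestion game (a sketch, not the paper's code)."""

    def __init__(self, strategies: List[List[Set[str]]], costs: Dict[str, List[float]]):
        # strategies[i]: list of strategies (resource subsets) of player i
        # costs[r][l-1]: c_r(l), the cost of resource r under load l = 1..n
        self.strategies = strategies
        self.costs = costs
        self.n = len(strategies)

    def loads(self, profile: Sequence[Set[str]]) -> Dict[str, int]:
        load = {r: 0 for r in self.costs}
        for s_i in profile:
            for r in s_i:
                load[r] += 1
        return load

    def player_cost(self, profile: Sequence[Set[str]], i: int) -> float:
        load = self.loads(profile)
        return sum(self.costs[r][load[r] - 1] for r in profile[i])

    def potential(self, profile: Sequence[Set[str]]) -> float:
        # Rosenthal's potential: sum over resources of the partial sums of their costs.
        load = self.loads(profile)
        return sum(sum(self.costs[r][j] for j in range(load[r])) for r in self.costs)

    def is_alpha_pne(self, profile: Sequence[Set[str]], alpha: float = 1.0) -> bool:
        # No player can improve their cost by more than a factor alpha.
        for i in range(self.n):
            current = self.player_cost(profile, i)
            for s_prime in self.strategies[i]:
                deviation = list(profile)
                deviation[i] = s_prime
                if alpha * self.player_cost(deviation, i) < current:
                    return False
        return True

# Tiny example: two players, two resources, increasing costs.
game = CongestionGame(
    strategies=[[{"r1"}, {"r2"}], [{"r1"}, {"r2"}]],
    costs={"r1": [0.2, 0.5], "r2": [0.3, 0.6]},
)
profile = [{"r1"}, {"r2"}]
print(game.potential(profile), game.is_alpha_pne(profile, alpha=1.0))
```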
§.§.§ Smoothed Congestion Games In this paper we study the complexity of our algorithms under smoothed analysis (see, e.g., [Part 4]Roughgarden2021). In a ϕ-smooth congestion game, we assume that the resource costs {c_r(ℓ)}_r∈ R,ℓ∈[n] are independent random variables; then, we measure running-times in expectation with respect to their realizations. In more detail, we assume that the resource costs are continuous random variables, taking values in [0,1], and that their density functions are upper-bounded by a universal parameter ϕ≥ 1. More generally, we will call such a random variable X with f_X:[0,1]→[0,ϕ], a ϕ-smooth random variable. Notice here that the normalization of the resource costs within [0,1] is without loss for our purposes: one can just divide all costs by their maximum, and get a totally equivalent game that fully maintains the equilibrium structure. Such a scaling is done to facilitate the smoothed analysis modelling, and is standard in the field (see, e.g., Englert_2016,Etscheid17,ggm2022_arxiv). Parameter ϕ allows smoothed analysis to interpolate between average-case analysis (ϕ=1) where all costs are uniformly i.i.d., and worst-case analysis (ϕ=∞) where the ϕ-smooth resource costs degenerate to single values. The aim of smoothed analysis is to capture the complexity of an algorithm, asymptotically as ϕ grows large. To see it from another perspective, smoothed analysis can be seen as introducing small, independent, random perturbations to the numerical values of a problem instance, before performing a traditional, worst-case running-time analysis. The magnitude of this random noise can be “controlled” by a parameter σ=1/ϕ→ 0. A detailed discussion of the fundamentals and subtleties of smoothed analysis is beyond the scope of this paper; for such a treatment, the interested reader is referred to, e.g., Roughgarden2021,Spielman2004,Beier2006a. It is important to clarify that, in this section we introduced our smoothness framework only for general congestion games. Due to its nature, smoothed analysis depends heavily on the representation and the numerical parameters of each problem instance; therefore, different special models of congestion games require their own, tailored smoothness treatment. To assist readability, we have decided to defer the discussion of smoothness for step-function, polynomial, and fair cost-sharing games to their corresponding <ref>. For most of these models, we adopt the smoothness frameworks for congestion games that were first proposed recently by ggm2022_arxiv; except for cost-sharing games (see <ref>), which, to the best of our knowledge, are studied here from a smoothed analysis perspective for the first time. § THE SMOOTHED FPTAS In this section we present our FPTAS (see <ref>). It is based on a very simple, but fundamental idea. Fix an arbitrary game 𝒢 and a parameter α≥ 1. If a strategy profile s of 𝒢 is not an α-approximate PNE, then (by simply considering the negation of (<ref>)) there has to exist a player i and a strategy s_i' of i that improves their cost by a factor larger than α; formally: α C_i(s_i', s_-i) < C_i( s). Such a deviation s→ (s_i',s_-i), which satisfies (<ref>), is called an α-improving move for game 𝒢. This gives rise to the following natural process for finding approximate equilibria in games, called α-better-response dynamics (α-BRD): starting from an arbitrary strategy profile, repeatedly perform α-improving moves. When no such move exists any more, it must be that an α-PNE has been reached.
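In code, this informal process could look like the following minimal sketch (ours, not the paper's pseudocode), which reuses the illustrative CongestionGame class above and adopts one arbitrary pivoting rule, namely the first α-improving move found by exhaustive search.

```python
def alpha_brd(game, start_profile, alpha=1.1, max_iters=10**6):
    """Repeatedly apply alpha-improving moves until an alpha-PNE is reached."""
    profile = list(start_profile)
    for step in range(max_iters):
        move = None
        for i in range(game.n):
            current = game.player_cost(profile, i)
            for s_prime in game.strategies[i]:
                deviation = list(profile)
                deviation[i] = s_prime
                if alpha * game.player_cost(deviation, i) < current:  # alpha-improving move
                    move = deviation
                    break
            if move is not None:
                break
        if move is None:          # no alpha-improving move exists: profile is an alpha-PNE
            return profile, step
        profile = move
    raise RuntimeError("no alpha-PNE reached within max_iters")
```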
For a more formal description, see <ref>. Notice how in Line <ref> of <ref> there might be multiple valid α-improving moves s → (s_i',s_-i) to choose from. Our definition deliberately leaves this underdetermined, as all the results presented in this paper hold for any choice of the α-improving moves. Furthermore, we have made the starting profile part of the input, in order to emphasize the fact that it can be adversarially selected; again, all our bounds are robust to the choice of an initial configuration. The main result of our paper is that, under smoothed running-time analysis, approximate better-response dynamics converge fast to approximate equilibria: In ϕ-smooth congestion games, (1+ε)-BRD always find a (1+ε)-PNE, after an expected number of iterations which is strongly polynomial in 1/ε, ϕ, and the description of the game. This holds for * general, * step-function, * polynomial, and * cost-sharing congestion games, and even under a succinct network representation of all models (<ref>)–(<ref>). More precisely, the expected number of iterations is at most (1+1/ε)·poly(ϕ,n,m), where n is the number of players and m the number of resources. Notice that <ref> gives a bound on the number of iterations of our dynamics (i.e., the while-loop of Lines <ref>–<ref>), and not the total running time. This is due to the fact that checking whether a given strategy profile is an (approximate) equilibrium (Line <ref> of <ref>) and, if not, returning an improving move (Line <ref>), can both be done in polynomial time (in the description of the game), for all congestion game models studied in this paper; therefore, the total running time is indeed dominated by the number of improving-move steps. To see that this is indeed the case, first consider the standard representation of congestion games, where the strategy sets {S_i}_i∈ N of the players are explicitly given in the input, each S_i being a list of at most k subsets of the ground set of resources R. Then, one can actually compute all α-improving moves by simply exhaustively going over all players i∈ N and all strategy deviations s_i'∈ S_i, checking whether C_i( s)/C_i(s_i',s_-i)>α; this can be done in O(nk) time. On the other hand, for congestion game representations where the strategy sets are implicitly given, it might not be possible to efficiently perform such an exhaustive search. Nevertheless, one has access to the fundamental game-theoretic primitive of a best-response, i.e., for every player i and any strategy profile s_-i∈S_-i, being able to efficiently compute an element s_i'∈ argmin_s_i∈ S_i C_i(s_i,s_-i). Then, finding an α-improving move can still be done in polynomial time, since it boils down to going over the players i∈ N, computing a best-response s_i', and checking whether C_i( s)/C_i(s_i',s_-i)>α. If this check fails for all players, then it must be that s is an α-PNE. In particular, this applies to network congestion games (recall the definition from <ref>) where each strategy set S_i is succinctly described as a set of paths between two fixed nodes, and therefore its cardinality might be exponential. However, for such games, best-responses in (<ref>) are simply shortest-path computations, and thus they can indeed be performed efficiently. The rest of our paper is devoted to proving <ref> and describing the smoothness frameworks for all the different congestion game models (<ref>), (<ref>), (<ref>), and (<ref>) that are involved in the statement of <ref> and which we study in this paper.
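As an illustration of the best-response primitive in network congestion games, a cheapest o_i→d_i path under edge costs c_e(ℓ_e^{-i}+1) can be found with a plain Dijkstra search. The sketch below is our own (the paper does not prescribe an implementation); the graph encoding and parameter names are illustrative assumptions.

```python
import heapq

def best_response_path(edges, costs, other_loads, origin, dest):
    """Best response of one player in a network congestion game.

    edges:       dict mapping node -> list of (neighbor, edge_id)
    costs:       dict mapping edge_id -> list [c_e(1), ..., c_e(n)]
    other_loads: dict mapping edge_id -> load induced by the other players
    The deviating player pays c_e(other_loads[e] + 1) on every edge e it uses,
    so a cheapest origin->dest path is exactly an argmin of C_i(s_i, s_-i).
    """
    weight = {e: costs[e][other_loads.get(e, 0)] for e in costs}  # c_e(load+1); list is 0-indexed
    dist, prev = {origin: 0.0}, {}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, e in edges.get(u, []):
            nd = d + weight[e]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, (u, e)
                heapq.heappush(heap, (nd, v))
    path, node = [], dest          # reconstruct the edge list of the cheapest path
    while node != origin:          # (raises KeyError if dest is unreachable)
        node, e = prev[node]
        path.append(e)
    return list(reversed(path)), dist[dest]
```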
§.§ The Key Probabilistic Lemma A critical step in the proof of our main results (<ref>) is that of identifying a key property of ϕ-smooth random variables. In order to highlight its importance, we decided to disentangle it from the rest of the running-time analysis of our FPTAS, and present it beforehand here in its own section, together with its proof (see <ref>). Furthermore, since it is completely independent of congestion games, it can be of particular interest for future work on smoothed analysis. To provide intuition first, consider some combinatorial optimization problem involving numerical inputs w_1,w_2,…,w_m, normalized in (0,1]. For example, these could be the weights of a knapsack instance, or the coefficients of an integer program. Then, W=max_i 1/w_i=1/min_i w_i effectively captures the “magnitude” of the numbers that are involved in our computation. A running time which is polynomial in W does not, in general, imply efficient computation; for example, this is highlighted by many NP-hard problems that are known to admit pseudopolynomial solutions (like knapsack, for example). Nevertheless, what if, under the more optimistic lens of smoothed analysis, one could show that the magnitude of W is “well-behaved” with “high probability”? The following observation immediately shatters such hopes: consider a uniformly distributed random variable X over [0,1], and observe that the expectation 𝔼[1/X]=∫_0^1 1/x dx=∞ of its reciprocal is actually unbounded. Nevertheless, it turns out that a small patch is enough to do the trick: truncating the random variable 1/X from above, even at an exponentially large threshold, results in a polynomially bounded expectation. This is formalized in the following: Let X_1,X_2,…,X_μ be independent ϕ-smooth random variables over [0,1]. For any λ∈ℕ and real α≥ 1, 𝔼[min{max_i∈[μ] (α/X_i)ln(α/X_i), μ^λ}] ≤ ϕαμ^2ln(ϕαμ) (λ+1)^2 + 1. To simplify notation, we first define the function g:[0,1]→ℝ with g(t) ≔ min{(α/t)ln(α/t), μ^λ}. Observe that, for 0< t≤α, the quantity (α/t)ln(α/t) is strictly decreasing with respect to t. Therefore, function g is nonincreasing and, furthermore, the expectation in (<ref>) can then be more simply expressed as 𝔼[g(min_i X_i)]. Now let Y_1, Y_2,…, Y_μ be independent uniformly distributed random variables over [0,1/ϕ]. Notice that they are ϕ-smooth and their cdf is given by F_Y(y)=ϕ y for 0≤ y ≤1/ϕ. Let us also define the random variable Z ≔ 1/min_i∈[μ] Y_i and observe that Z takes values in [ϕ,∞) and its cdf is given by F_Z(z) = ℙ[Z≤ z] = ∏_i=1^μ ℙ[Y_i≥ 1/z] = ∏_i=1^μ[1-F_Y(1/z)] = (1-ϕ/z)^μ, for z≥ϕ. Next, we argue that Y_i ⪯ X_i, where ⪯ denotes the usual (first-order) stochastic order. Indeed, since each X_i is ϕ-smooth, its cdf is upper-bounded by F_X_i(x) ≤∫_0^x ϕ dt =ϕ x = F_Y(x), for all 0≤ x ≤1/ϕ; obviously, F_X_i(x)≤ 1 = F_Y(x) for all x≥1/ϕ as well. Since order statistics preserve stochastic dominance (see, e.g., <cit.>), it must be that min_i Y_i ⪯ min_i X_i, and so by the monotonicity of function g it must be that (see, e.g., <cit.>) 𝔼[g(min_i X_i)] ≤ 𝔼[g(min_i Y_i)] = 𝔼[min{α Z ln(α Z),μ^λ}]. For j=0,1,…,λ, let E_j denote the random event that Z∈ [ϕμ^j,ϕμ^j+1], and let E_λ+1 be the event that Z∈ [ϕμ^λ+1,∞).
Notice that the events {E_j}_j∈[0..λ+1] form a partition of the entire sample space [ϕ,∞) of the random variable Z, so by using the law of total expectation we can upper bound the expectation in (<ref>) by: 𝔼[g(min_i X_i)] ≤∑_j=0^λ ℙ[E_j]·𝔼[α Z ln(α Z) | E_j] + ℙ[E_λ+1]·μ^λ ≤∑_j=0^λ[F_Z(ϕμ^j+1) - F_Z(ϕμ^j)] αϕμ^j+1ln( αϕμ^j+1) + [1-F_Z(ϕμ^λ+1)] μ^λ ≤αϕln( αϕμ) ∑_j=0^λ[F_Z(ϕμ^j+1) - F_Z(ϕμ^j)] μ^j+1 (j+1) + [1-F_Z(ϕμ^λ+1)] μ^λ = αϕln( αϕμ) ∑_j=0^λ[(1-1/μ^j+1)^μ - (1-1/μ^j)^μ]μ^j+1 (j+1) + [1-(1-1/μ^λ+1)^μ] μ^λ ≤αϕln( αϕμ) ∑_j=0^λ[1/(1+μ/μ^j+1) - (1-μ/μ^j) ]μ^j+1 (j+1) + [1-(1-μ/μ^λ+1)] μ^λ = αϕln( αϕμ) [∑_j=0^λ(μ^2 - μ/(1+1/μ^j))(j+1)]+1 ≤αϕln( αϕμ) μ^2 (∑_j=1^λ+1 j)+1 ≤αϕln( αϕμ) μ^2 (λ+1)^2 + 1, where for the first inequality we used (<ref>), for the second the monotonicity of function α z ln(α z) (with respect to variable z≥ϕ≥ 1 ≥1/α), and for the fourth the bounds from <ref>. §.§ General Congestion Games In this section we prove part (<ref>) of <ref>. We have already introduced our smoothness model for general congestion games in <ref>. Recall that, under traditional worst-case analysis, computing an α-PNE of a congestion game is PLS-complete, for any constant α Skopalik2008. Finally, we emphasize that in general congestion games we make no monotonicity assumptions, and therefore our results hold for arbitrary (positive) resource costs. In case one wants to enforce an increasing assumption (which is common in the literature of congestion games), the step-function model of the following <ref> can be used instead, in order for the monotonicity to be preserved under smoothness. Fix an arbitrary congestion game, with n players and |R|=m resources. For simplicity, we will denote the maximum and minimum resource costs by c_max ≔ max_r∈ R,j∈[n] c_r(j) and c_min ≔ min_r∈ R,j∈[n] c_r(j). First observe that, at any outcome s∈S, Rosenthal's potential (<ref>) can be upper-bounded by Φ( s) = ∑_r∈ R∑_j=1^ℓ_r( s) c_r(j) ≤∑_r∈ R∑_j=1^n c_r(j) ≤ mn · c_max and lower-bounded by Φ( s) = ∑_r∈ R∑_j∈[n] 1{ℓ_r( s) ≥ j} c_r(j) ≥ n · c_min, since each player uses at least one resource; here 1{·} denotes the standard indicator function. At the same time, the cost of any player i can be lower-bounded by C_i( s) = ∑_r∈ s_i c_r(ℓ_r( s)) ≥min_r∈ R c_r(ℓ_r( s))≥ c_min. If s → s'=(s_i',s_-i) is a move during the execution of our (1+ε)-BRD, then it must be that (1+ε)C_i( s') < C_i( s). So, we can lower-bound the improvement of the potential by: Φ( s) - Φ( s') = C_i( s) - C_i( s') > ε/(1+ε) C_i( s) (<ref>)≥ε/(1+ε) c_min (<ref>)≥ε/(1+ε)(nm · c_max/c_min)^-1Φ( s) and thus Φ( s') < (1-ε')Φ( s), where ε' ≔ [(1+1/ε) nm · c_max/c_min]^-1. Therefore, after T steps s^0→s^1→…→s^T of the (1+ε)-BRD we have that n c_min (<ref>)≤Φ(s^T) (<ref>)< (1-ε')^T Φ(s^0) (<ref>)≤ (1-ε')^T nm c_max, and by taking logarithms at both sides of this inequality, Tln(1-ε') > ln(c_min/(m c_max)). By taking into consideration that 0<ε'<1 (and thus ln(1-ε')<0), the above gives us the following upper bound on the number of steps for our dynamics: T < -1/ln(1-ε') · ln(m c_max/c_min) ≤ 1/ε' · ln(m c_max/c_min) (<ref>)≤(1+1/ε) nm (c_max/c_min) ln(m c_max/c_min), where for the second inequality we used <ref> and the fact that m c_max≥ c_min. On the other hand, recall that our dynamics goes over strategy profiles, strictly decreasing (see (<ref>)) the potential at every step; therefore, no two profiles with the same potential value can be visited via our dynamics.
Additionally, observe that the values Φ( s) of Rosenthal's potential (<ref>) are fully determined by the configuration of resource loads ℓ_r( s)_r∈ R under profile s, and do not directly depend on the actual identities of the players that use each edge. As a result, we deduce that the total number of iterations cannot be larger than the number of possible different resource-load profiles. Since each resource can be used by at most n players, this is at most (n+1)^R=(n+1)^m. Combining this with (<ref>) we can derive that (1+ε)-BRD terminates after at most T ≤min(1+1/ε) nmc_max/c_minln(mc_max/c_min),(n+1)^m ≤(1+1/ε) n ·minm/c_minln(m/c_min),(n+1)^m/n iterations. For the last inequality we used the fact that c_max≤ 1. Without loss, we can assume that the number of resources is at least m≥ 2; otherwise our congestion game is degenerate, having only a single strategy profile (and thus the dynamics trivially converge in constant time). Then it is (nm/n+1)^m ≥(2n/n+1)^m ≥ 1, since n≥ 1, and so (n+1)^m/n≤ (n+1)^m≤ (nm)^m. Using this, we can further upper bound (<ref>) by: T ≤(1+1/ε) n ·minm/c_minln(m/c_min),(nm)^m. Since the resource costs c_r(j)_r∈ R,j∈[n] are independent ϕ-smooth random variables, we can now deploy <ref>, with the choice of parameters μRn=mn, λ m, and α m, in order to finally bound the expected number of steps of (1+ε)-BRD in (<ref>) by (1+1/ε) n ·[ϕ m (mn)^2 ln(ϕ m · nm)(m+1)^2+1] = O(1/εϕ n^3 m^5 log(ϕ n m)). §.§ Special Congestion Game Models §.§.§ Step-Function Games In this section we formally describe the model of step-function congestion games, and prove the corresponding part (<ref>) of <ref>. It is important to mention that the PLS-hardness of approximation of Skopalik2008, that we have already mentioned for general congestion games, actually uses nondecreasing costs, so it applies to step-functions as well. In step-function congestion games, each resource cost function c_r is represented by a set of d_r integer break points 1=b_r,1 < b_r,2 < … <b_r,d_r≤ n and corresponding value jumps a_r,1, a_r,2,…, a_r,d_r∈ (0,1]. More precisely, the cost of resource r on a load of ℓ∈[n] players is then given by c_r(ℓ) a_r,1+a_r,2+… + a_r,κ, where κ=maxj∈[d_r]b_r,j≤ℓ. Let dmax_r∈ R d_r be the maximum number of break points across the representation of all resources, and let d̃∑_r∈ R d_r ≤ m d be their sum. For our smoothed analysis, we follow the framework recently proposed by ggm2022_arxiv for step-function congestion games, assuming that the jumps a_r,j are independent ϕ-smooth random variable. We emphasize, though, that the break points b_r,j are not perturbed but are (adversarially) fixed. Similarly to (<ref>) and (<ref>) for general congestion games (see <ref>), if we denote a_minmin_r∈ R,j∈[d_r] a_r,j and a_maxmax_r∈ R,j∈[d_r] a_r,j, the potential and the cost of any player i, at any strategy profile s, can be lower-bounded by Φ( s) ≥ n a_min and C_i( s) ≥ a_min. Also, since all cost functions c_e are now nondecreasing, from (<ref>) the potential can be upper-bounded as: Φ(s) ≤∑_r∈ R n c_r(n) = n ∑_r∈ R∑_j=1^d_r a_r,j≤ n ∑_r∈ R d_r a_max =nd̃ a_max. In a totally analogous way to (<ref>), following the steps of the proof of <ref>, we can now bound the expected number of steps of (1+ε)-BRD by T ≤(1+1/ε) n ·mind̃/a_minln(d̃/a_min),(n+1)^m/n Choosing this time parameters μ=αd̃ and λ mln (n+1)/lnμ, we can see that (n+1)^m/n≤ (n+1)^m=μ^mlog_μ(n+1)=μ^λ and so T ≤(1+1/ε) n ·minα/a_minln(α/a_min),μ^λ. 
Noticing that the expectation above is taken with respect to the independent ϕ-smooth random variables a_r,j_r∈ R, j∈[d_r], we can again deploy <ref> to bound it by: T = (1+1/ε)n· O( ϕαμ^2ln(ϕαμ) λ^2 ) =(1+1/ε)n · O(ϕ·d̃^3 ln(ϕ·d̃^2)(mln n/lnd̃)^2) =O(1/εϕlog(ϕ) nlog^2(n) m^2 d̃^3 ) =O(1/εϕlog(ϕ) nlog^2(n) m^5 d^3 ). §.§.§ Polynomial Games In this section we introduce the model of polynomial congestion games, and prove the corresponding part (<ref>) of <ref>. In this model the resource cost functions are polynomials of a constant maximum degree d, with nonnegative coefficients. It is arguably the most established, and well-studied congestion game model in algorithmic game theory. Although finding exact equilibria in polynomial congestion games is still a PLS-complete problem Ackermann2008,Roughgarden16,ggm2022_arxiv, no hardness of approximation results are known. At the same time, the only positive computational results that we have are for efficiently computing d^O(d)-approximate PNE Caragiannis2011,Feldotto2017,gns2018_journal. Closing this gap in our understanding of computability of approximate PNE in polynomial games is one of the most important remaining open problems in the field. To continue with the formal definition of our model, each cost function c_r is represented by a set of coefficients a_r,j_j∈[0..d]∈ [0,1], where d∈, so that for all loads ℓ∈[n] c_r(ℓ) a_r,0+ a_r,1ℓ+… + a_r,dℓ^d. We emphasize that the normalization of the coefficients within [0,1] here is without loss. To perform smoothed analysis in this congestion game model, the natural choice is to consider perturbations on the coefficients of the polynomial costs. However, special care needs to be taken with respect to zero coefficients: any random noise on them would “artificially” introduce monomial terms that did not exist in the original cost function This, arguably, will distort the combinatorial aspect of our instance. Therefore, for our smoothness framework we will assume that only the nonzero polynomial coefficients a_r,j are (independent) ϕ-smooth random variables. For that reason, it will be technically convenient to introduce notation J_rj∈[0..d]a_r,j > 0, d_rJ_r, and d̃∑_r∈ R d_r ≤ m(d+1), for the set of indices of the non-trivial cost coefficients of resource r; then, (<ref>) can now be written as c_r(ℓ) = ∑_j∈ J_r a_r,jℓ^j. We also let a_min=min_r∈ R, j∈ J_r a_r,j and a_max=max_r∈ R, j∈ J_r a_r,j for the minimum and maximum nonzero coefficients across all resources. It is not hard to verify that we can again derive the same lower bounds as in (<ref>) for the potential and player costs, and for the upper bound on the potential, due to the monotonicity of the resource cost function, similarly to (<ref>) we can now get: Φ( s) ≤ n∑_r∈ R c_r(n) = n∑_r∈ R∑_j∈ J_r a_r,j n^j ≤ n∑_r∈ R d_r a_max n^d = d̃ n^d+1 a_max. Following along the lines of the derivations for the previous <ref>, we can now bound the expected number of steps of (1+ε)-BRD by T≤(1+1/ε) n ·mind̃ n^d/a_minln(d̃ n^d/a_min),(n+1)^m/n. Choosing parameters μd̃≤ m(d+1)=O(m), αd̃ n^d+1, and λ mln (n+1)/lnμ so that (<ref>) holds, similarly to the proof in <ref> we can again derive the bound in (<ref>). Thus, applying <ref> for the independent ϕ-smooth random variables a_r,j_r∈ R, j∈ J_r, we can now bound the expected number of steps of our dynamics by: T = O(1/εn ϕαμ^2ln(ϕαμ) λ^2 ) = O(1/εn ϕ·d̃ n^d+1·d̃^2 ·ln(ϕ·d̃ n^d+1·d̃) (mln n/lnd̃)^2) = O(1/εϕlog(ϕ) n^d+2log^3(n) d̃^3 m^2) = O(1/εϕlog(ϕ) n^d+2log^3(n) m^5). 
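The smoothness convention for polynomial costs just described — only the nonzero coefficients are perturbed, so no spurious monomials are introduced — can be illustrated with the following small self-contained sketch (our own helper; a uniform perturbation of width 1/ϕ is just one valid ϕ-smooth choice).

```python
import random

def smoothed_polynomial_cost(coefficients, phi):
    """Return a load -> cost function whose nonzero coefficients are phi-smooth.

    `coefficients` are the adversarially fixed a_{r,0}, ..., a_{r,d} in [0,1].
    Zero coefficients stay exactly zero, preserving the monomial set J_r;
    each nonzero coefficient gets uniform noise on an interval of length 1/phi
    (density exactly phi), clipped so the value remains inside [0,1].
    """
    def perturb(a: float) -> float:
        half = 0.5 / phi
        lo = min(max(a - half, 0.0), 1.0 - 2 * half)   # assumes phi >= 1
        return lo + random.random() * (2 * half)

    perturbed = [perturb(a) if a > 0 else 0.0 for a in coefficients]
    return lambda load: sum(a * load ** j for j, a in enumerate(perturbed))

# Example: c_r(l) = a_0 + a_2 l^2 with the linear term absent (and kept absent).
cost_fn = smoothed_polynomial_cost([0.3, 0.0, 0.7], phi=10.0)
print([round(cost_fn(l), 3) for l in range(1, 4)])
```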
§.§.§ Cost-Sharing Games This section deals with the proof of part (<ref>) of <ref>. Fair cost-sharing games are congestion games where the cost of resources are given by c_r(ℓ)=a_r/ℓ, where a_r>0. Notice that these are decreasing functions; they can be interpreted as a fixed edge cost a_r being equally split among the players that use it. It is known that the problem of finding an exact PNE in fair cost-sharing games is PLS-complete, even in network games Syrgkanis2010, and that better-response dynamics take exponentially long to converge ADKTWR04. To the best of our knowledge, no positive results exist regarding the efficient computation of approximate PNE, apart from the very special case of metric network facility location games, with uniform costs Hansen2009. Unlike the previous congestion game models studied in this paper, no smoothness framework has been proposed before for cost-sharing games. Therefore, we propose here to consider a_r as independent ϕ-smooth random variables; arguably, this seems as the most natural approach. Like before, we denote a_minmin_r∈ R a_r and a_maxmax_r∈ R a_r. Given that resource costs are now decreasing, we can lower-bound the player costs at any profile s by C_i( s)≥min_r∈ R c_r(ℓ_r( s))≥min_r∈ Rc_r(n)=min_r∈ Ra_e/n=1/n· a_min. Furthermore, we can bound the potential values by Φ( s) =∑_r∈ R∑_j=1^ℓ_r( s) c_r(j)≤ m max_r∈ R∑_j=1^n c_r(j) = m max_r∈ R∑_j=1^na_r/j= m H_n · a_max Φ( s) = ∑_r∈ R∑_j=1^ℓ_r( s)a_r/j≥ a_min∑_r∈ R H_ℓ_r( s)≥ H_n· a_min, where H_n∑_j=1^n1/j is the harmonic numbers function. For the last inequality we have used the fact that we have n players in total, each one of whom is using at least one edge, and that the harmonic numbers are increasing and subadditive, i.e. H_k_1+k_2≤ H_k_1+H_k_2 for all positive integers k_1,k_2. Using the above inequalities, we get the following bound on the expected number of steps of (1+ε)-BRD, analogously to (<ref>): T≤(1+1/ε) nH_n ·minm/a_minln(m/a_min),(n+1)^m/nH_n. Similarly to our derivation in <ref>, we can deploy <ref>, this time with parameters μ=α m and λ mln(n+1)/ln m, to get the following bound T =O(1/εn H_n ·ϕαμ^2ln(ϕαμ) λ^2 ) =O(1/εnlog(n) ·ϕ m^3log(ϕ m^2) m^2log^2 n/log^2 m) = O(1/εϕlog(ϕ) n log^3(n) m^5). § APPENDIX § TECHNICAL LEMMAS For any positive integer n and real x≥ 1, 1-n/x≤(1-1/x)^n ≤1/1+n/x. In our proof we will make use of Bernoulli's inequality (for a proof see, e.g., [0.2]Mitrinovic1964): (1+y)^m ≥ 1+my ∀ m∈∖0, ∀ y∈ [-1,∞) Fix an arbitrary positive integer n and a real x≥ 1. Setting y -1/x≥ -1 and m n, from Bernoulli's inequality (<ref>) we get that (1-1/x)^n ≥ 1-n/x, which is exactly the first inequality that we wanted to prove in (<ref>). To complete our proof, we need to show the remaining, second inequality of (<ref>), i.e. prove that: 1/1+n/x≥(1-1/x)^n. Applying Bernoulli's inequality for y1/x>0 and m n we get that 1+n/x≤(1+1/x)^n and so we can bound 1/1+n/x≥(1+1/x)^-n So, to establish the desired (<ref>), it is enough to show that (1+1/x)^n(1-1/x)^n ≤ 1, which is true since: (1+1/x)^n(1-1/x)^n = [(1+1/x)(1-1/x)]^n =(1-1/x^2)^n ≤ 1^n=1, the inequality holding due to the fact that 0<1/x^2≤ 1 (because x≥ 1). For any real x∈(0,1), -1/ln(1-x)≤1/x. First we observe that, due to the fact that function t↦1/t is decreasing in [1,∞), for any real number y≥ 1 it is ln y = ∫_1^y 1/t dt ≥∫_1^y 1/y dt = y-1/y= 1-1/y. Next, fix an arbitrary x∈ (0,1). 
Then 1/(1-x)>1, so we can utilize (<ref>) with y ≔ 1/(1-x) to get: -ln(1-x)=ln(1/(1-x)) (<ref>)≥ 1-(1-x)=x, which is equivalent to the statement of our lemma.
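As a quick numerical sanity check of the two technical lemmas above (purely illustrative; the sampled ranges and tolerances are arbitrary choices of ours):

```python
import math
import random

random.seed(0)
for _ in range(100_000):
    # First lemma: 1 - n/x <= (1 - 1/x)^n <= 1/(1 + n/x), for integer n >= 1 and real x >= 1.
    n = random.randint(1, 50)
    x = 1.0 + 99.0 * random.random()
    middle = (1.0 - 1.0 / x) ** n
    assert 1.0 - n / x <= middle + 1e-12
    assert middle <= 1.0 / (1.0 + n / x) + 1e-12

    # Second lemma: -1/ln(1-x) <= 1/x, for x in (0,1).
    y = random.uniform(1e-6, 1.0 - 1e-6)
    assert -1.0 / math.log(1.0 - y) <= 1.0 / y + 1e-9
print("both inequalities held on all sampled points")
```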
http://arxiv.org/abs/2306.02298v1
20230604082857
NB-IoT Uplink Synchronization by Change Point Detection of Phase Series in NTNs
[ "Jiaqi Jiang", "Yihang Huang", "Yin Xu", "Runnan Liu", "XiaoWu Ou", "Dazhi He" ]
cs.IT
[ "cs.IT", "math.IT" ]
NB-IoT Uplink Synchronization by Change Point Detection of Phase Series in NTNs This paper is supported in part by National Natural Science Foundation of China Program (62271316, 62101322), National Key R&D Project of China (2018YFB1802201), Shanghai Key Laboratory of Digital Media Processing (STCSM 18DZ2270700) and the Fundamental Research Funds for the Central Universities. The authors Jiaqi Jiang, Yihang Huang, Yin Xu, Runnan Liu, Xiaowu Ou and Dazhi He are with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University. The corresponding author is Yin Xu (E-mail: [email protected]). Jiaqi Jiang, Yihang Huang, Yin Xu, Runnan Liu, XiaoWu Ou and Dazhi He School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China {jiangjiaqi1999, huangyihangsg, xuyin, liurunnan,xiaowu_ou,hedazhi}@sjtu.edu.cn July 31, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Non-Terrestrial Networks (NTNs) are widely recognized as a potential solution to achieve ubiquitous connectivity for the Narrowband Internet of Things (NB-IoT). In order to adopt NTNs in NB-IoT, one of the main challenges is the uplink synchronization of the Narrowband Physical Random Access procedure, which refers to the estimation of the time of arrival (ToA) and carrier frequency offset (CFO). Due to the large propagation delay and Doppler shift in NTNs, traditional estimation methods designed for Terrestrial Networks (TNs) cannot be applied to NTNs directly. In this context, we design a two-stage ToA and CFO estimation scheme, consisting of a coarse estimation and a fine estimation, based on abrupt change point detection (CPD) of the phase series with machine learning. Our method achieves high estimation accuracy of the ToA and CFO under low signal-to-noise ratio (SNR) and large Doppler shift conditions, and extends the estimation range without enhancing the Random Access preambles. NB-IoT, NTNs, Random Access, change point detection § INTRODUCTION The Narrowband Internet of Things (NB-IoT) is a key motivation for Beyond fifth Generation (B5G) mobile networks, which in turn will support massive Machine-Type Communication (mMTC) applications such as smart cities, mobile health, and other large-scale NB-IoT use cases <cit.>. With the rapid growth in the number of IoT and NB-IoT devices, especially devices in remote areas that terrestrial IoT cannot cover, non-terrestrial networks (NTNs) have been introduced to help achieve seamless global coverage. In NTNs, Low and very Low Earth Orbit (LEO and vLEO) satellites are widely recognized as the most suitable way to achieve ubiquitous IoT coverage, owing to their lower power consumption and propagation delay compared with Geostationary Earth Orbit (GEO) satellites.
The Third Generation Partnership Project (3GPP) has standardized NB-IoT in Release 16 <cit.> to enable communications between IoT devices, and has started a study item centered on NB-IoT/eMTC support for NTNs in order to adopt NTNs in NB-IoT <cit.>. As is well known, the Random Access (RA) procedure is widely used to establish a wireless link and enable data transmission between the user equipments (UEs) and the satellite base station. In this context, one of the main challenges is the NB-IoT uplink synchronization for RA in NTNs, which refers in particular to accurate time of arrival (ToA) and carrier frequency offset (CFO) estimation. Due to the impairments caused by the large propagation delay and Doppler shift in NTNs, the original estimation method <cit.> designed for TNs does not apply directly. In <cit.>, a Brute Force (BF) algorithm based on differential correlation is proposed to detect the ToA, but its accuracy degrades in the presence of residual CFO. <cit.> exploits the frequency hopping rules in RA to eliminate the impact of the CFO when estimating the ToA, but its estimation range is limited by the preamble format. Many existing studies <cit.> adopt the Global Navigation Satellite System (GNSS) in user equipments to pre-compensate the large propagation delay and Doppler shift, and then tackle the issue with the small residual ToA and CFO left after pre-compensation. Furthermore, <cit.> uses the cumulative sum algorithm (CUSUM) to detect change points of the wavelet coefficients in order to estimate the ToA and CFO, but its performance heavily depends on how well the actual data follow the assumed distribution. However, the assumption of GNSS-assisted devices with high-accuracy positioning for NB-IoT in NTNs is questionable. Major challenges include <cit.>: * The GNSS receiver needs to frequently compensate the CFO and ToA, which is not suitable for NB-IoT devices with limited computation capability and power consumption. * Natural radio propagation impairments and building blockage can undermine the positioning performance of GNSS. * The time synchronization algorithm in the presence of GNSS is vulnerable to different types of intentional attacks. Such impacts heavily degrade the pre-compensation performance of GNSS. Hence the residual ToA and CFO may exceed the estimation range, leading to overall performance degradation. Consequently, instead of using GNSS, we first use a system-level method proposed in <cit.>, which relies on information obtained from downlink synchronization, to compensate the large CFO down to around 600 Hz. Then, aiming to eliminate the impact of the large ToA and achieve accurate estimation of the residual ToA and CFO, we propose a novel method based on change point detection (CPD) of the phase series to identify the existence of the preamble signal. We first multiply the received signal by the local RA preamble and then extract its phase series. The phase series exhibits periodicity when the preamble signal exists and follows a random distribution otherwise. The periodicity reflects the CFO, and the location of each period reflects the ToA. We find that the slope of the phase series changes abruptly at the junctions between periods, caused by phase ambiguity. Then, motivated by <cit.>, we use autoencoders with a time-invariant representation (TIRE) to detect the change points. We select the TIRE method because it can accurately detect slope changes even though the data are not generated from a parametric probability distribution.
Besides, the sensitivity of detection performance to common noise can be mitigated during the process of feature learning by autoencoders. The coarse estimation of ToA is conducted by first two change points of phase series. Then CFO is estimated from the period and the fine estimation of TOA is calculated based on the CFO and the location of first change point. The remainder of this paper is summarized as follows. Section 2 describes the RA structures and the preamble signals. Section 3 proposes our ToA and CFO estimation method based on phase series. In Section 4, the simulation results are presented and the conclusion of this paper is in Section 5. § SYSTEM MODEL This work focuses on addressing the accurate estimation of ToA and CFO in RA procedure via satellite, which manages the uplink synchronization in NTNs. We consider D2 scenario among six reference scenarios presented by 3GPP in <cit.>. It means our system is operated in S-band at a carrier-frequency of 2G, deployed at the altitude of 600km. In this section, the basic structures of RA preamble are introduced and then we model the transmission signal and present our assumptions. §.§ Random Access Preamble Structure In NB-IoT, the RA preamble is transmitted on Narrowband Physical Random Access Channel (NPRACH) which occupies a bandwidth of 180k. There are three commonly used formats tagged by format 0 to format 2 for RA preamble in FDD mode. The starting time of NPRACH transmission is set by N_start. Four symbol groups constitute a basic unit preamble and the number of preamble units in whole RA procedure is determined by N_rep. One symbol group (SG) consists of one cyclic prefix (CP) plus five identical symbols. In format 0, the length of CP (66.67μ s) is a quarter of the length of one basic symbol (266.67μ s) and in format 1 both lengths are identical. In frequency domain, each SG occupies one subcarrier with the bandwidth of 3.75KHz and the total number of subcarriers is denoted as N_sc. The initial subcarrier for NPRACH is defined by N_off. The frequency spacing between SGs follows a predefined hopping pattern and the frequency spacing from one unit preamble to another follows a pseudo-random selection principle summarized in <cit.>. In fact, the allocation of frequency resources in one unit preamble only depends on the subcarrier location of first SG. Fig. <ref> shows an example of preamble structure. §.§ NPRACH Signal Transmission The baseband signal transmitted by NPRACH can be expressed as s_m,p(n)=∑_k^N-1S_m,p[k]e^j2πk/Nn where s_m,p(n) denotes the n-th sample of the time domain waveform in p-th symbol of the m-th SG. S_m,p[k] denotes the m-th symbol on the p-th subcarrier during the m-th SG. The range of sample point n belongs to [N_m,p-N_CP,N_m,p+N-1], in which p∈[0,L-1]. L denotes the number of symbols in one SG and N_m,p=mN_g+pN, with N_g=N_CP+pN being the size of one SG, N_CP denoting the length of CP and N denoting the length of one symbol. Based on <cit.>, S_m,p[k]= 1 k=N_sc(m) 0 others, N_sc(m) is the subcarrier index occupied by m-th SG, so the transmitted signal can be rewritten as s_m,p(n)=∑_m^N_symS_m,p[N_sc(m)]e^j2πN_sc(m)/Nn N_sym=4× N_rep represents the number of SG in format 1. After transmitting to the receiver on NPRACH, the n-th sample of p-th symbol of the received signal can be written as y_m,p(n)=h_me^j2π f_off(n-D)s_m,p(n-D)+w_m,p(n) where f_off denotes the CFO normalized by sampling frequency and D denotes the ToA normalized by the symbol duration. 
w_m,p(n) is the noise term and h_m is the channel coefficient for the m-th SG. Here we consider the impact of Doppler rate. The changing rate of the frequency offset over time changes from f_off to f_off+α (n-D) with the presence of Doppler rate α normalized by squared sampling frequency. In this context, the received signal can be changed as y_m,p(n)=h_me^j2π [f_off(n-D)+1/2α (n-D)^2]s_m,p(n-D)+w_m,p(n) The following assumptions are made based on the considered scenario: 1) the terrestrial UEs are directly connected to LEO satellite. 2) CFO remains constant during the RA procedure. 3) The Doppler rate of each geographical location can be estimated by LEO satellite. For the sake of simplicity, two kinds of transmission channels concluding Additive White Gaussian Noise (AWGN) channel and Tapped Delay Line-C (TDL-C) channel are considered in the following analysis. § ESTIMATION METHOD BASED ON CHANGE POINT DETECTION Based on the preamble index attained from the preamble detection, the same baseband preamble signal can be selected at the local receiver side. Fig. <ref> illustrates the overall estimation scheme. A system level solution based on the information of initial downlink synchronization is used to reduce the large CFO to a relatively small scale <cit.>. In most cases, the residual frequency offset is about 600Hz <cit.> which is still much larger than estimation range of existed studies <cit.>. Therefore, we consider the frequency uncertainty as 600Hz in the following analysis. §.§ Phase Series Analysis We first multiply the received signal by conjugation of local baseband signal: r(n) =y_m,p(n)× s_m,p^∗(n) =h_me^j2π [f_off(n-D)+1/2α (n-D)^2]× e^j-2πN_sc(m)/ND+w_m,p(n) The phase series extracted from r(n) is expressed as φ (n) = (y_m,p(n)× s_m,p^∗(n)) =2π[f_off(n-D)+1/2α (n-D)^2]-2πN_sc(m)/ND +Δ_h+Δ_w Δ_h denotes the phase of channel coefficient and Δ_w denotes the phase of noise. On AWGN channel, Δ_h=0. On TDL-C channel, Δ_h represents the micro phase fluctuation. Because the Doppler rate varies from 0 to -620 in our scenario <cit.>, the parameter α normalized by squared sampling frequency is far less than f_off normalized by sampling frequency. So α can be neglected when phase series is analyzed. The impact of Δ_w to phase series can be weakened by smoothening the mean of received signal. Then the phase series can be simplified as φ (n)=2π f_off(n-D)-2πN_sc(m)/ND It can be seen that the phase series is a linear function of time sample n with the slope as 2π f_off and intercept as -2π D(f_off+N_sc(m)/N), manifested as periodic change of phase series in the range of [-π,π]. §.§ ToA and CFO Estimation Scheme Due to the range of residual CFO after system level compensation being [-600,600], the linearity of phase series would be undermined in extreme cases. In order to transfer the linear feature to the periodic change of phase series, a fixed frequency offset can be added to received signal after the system level compensation. Here we set a 1000 offset plus positive CFO and -600 plus negative CFO to enhance the estimation performance. The frequency uncertainty then turns to [(-1600,-1000),(1000,1600)] which means there exists a constant prior deviation between the estimation value and the true value. During the NPRACH process, based on the assumptions 2), the slope of phase series remains constant and the intercept changes with the SG. Fig. <ref> shows the phase series with the ToA of 200 sample points and the CFO of 1500. 
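To make the construction of the phase series concrete, the following Python/NumPy sketch generates a simplified single-tone NPRACH preamble unit, applies a delay, a CFO and AWGN, multiplies the result by the conjugate of the local replica and extracts the smoothed phase series whose sawtooth shape the subsequent change point detection operates on. All numerical values (sampling rate, subcarrier indices, smoothing window, ToA of 200 samples and CFO of 1500 Hz) are illustrative assumptions rather than the exact simulation configuration.

import numpy as np

# Illustrative parameters (assumptions, not the exact NPRACH configuration of this work)
fs   = 1.92e6            # sampling rate [Hz]
scs  = 3.75e3            # NPRACH subcarrier spacing [Hz]
N    = int(fs / scs)     # samples per symbol (512)
N_cp = N                 # format-1 CP length equals one symbol duration
L    = 5                 # identical symbols per symbol group
k_sc = [10, 11, 5, 6]    # subcarrier indices of the four symbol groups (illustrative hopping)

def symbol_group(k):
    """One CP plus L identical single-tone symbols on subcarrier k (baseband)."""
    n = np.arange(N)
    sym = np.exp(1j * 2 * np.pi * k * n / N)
    return np.concatenate([sym[-N_cp:]] + [sym] * L)

s_local = np.concatenate([symbol_group(k) for k in k_sc])   # local replica of one unit

# Channel: integer delay D (samples), CFO f_off [Hz], AWGN at a given SNR
D, f_off, snr_db = 200, 1500.0, 3.0
tx = np.concatenate([np.zeros(D, complex), s_local])
n = np.arange(len(tx))
rx = tx * np.exp(1j * 2 * np.pi * f_off * (n - D) / fs)
noise = (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx))) / np.sqrt(2)
rx = rx + 10 ** (-snr_db / 20) * noise

# Phase series: multiply by the conjugate of the local replica, smooth, take the angle
r = rx[:len(s_local)] * np.conj(s_local)          # receiver window (preamble arrives D samples late)
win = 16
r_smooth = np.convolve(r, np.ones(win) / win, mode="same")   # weaken the noise phase
phase_series = np.angle(r_smooth)                 # wraps in [-pi, pi]
# phase_series is approximately linear in n with slope 2*pi*f_off/fs, so it appears as a
# sawtooth whose period encodes the CFO and whose offset encodes the ToA.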
Next, we reveal that the information of ToA and CFO can be reflected in phase series. Determined by configuration parameters and formulation (<ref>), the initial phase at the start of NPRACH transmission can be expressed as φ_int =φ(N_start) =2π[f_off(N_start-D)]-2πN_off/NN_start Then the ToA and CFO can be calculated as D=n_l-|±π-φ_int|/f_off f_off=1/T_ph T_ph denotes the period of phase series and n_l denotes the timing location of the first period and the sign of π depends on the sign of CFO. Now the estimation of ToA and CFO refers to the estimation of T_ph and n_l. We use the change point detection method which would be introduced in following part to determine T_ph and n_l. Due to the 2π phase ambiguity, the solution to equation (<ref>) is not unique, resulting several ToA candidates to be selected. A discrimination method based on Doppler rate <cit.> is utilized to select the true ToA among candidates. Because both the Doppler rate and ToA are related to the position of UE, the precise positioning information is not required. To illustrate, assuming the true ToA is 104.7μ s and the solutions to formulation (<ref>) are 104.7μ s, 371.3μ s, 638.0μ s with corresponding Doppler rate of -297/s, -252/s, -215/s respectively. If the Doppler estimation is -240/s, then correct ToA selection is 371.3μ s because its Doppler rate is the nearest one. Then in ToA coarse estimation process, the distance of first two detected change points is recorded as T_ph. Coarse ToA values which are used to compensate large ToA can be calculated based on the T_ph. Noting that the coarse T_ph estimation would result in wrong ToA candidates selection, leading major ToA estimation error in coarse estimation process. After coarse estimation of ToA, the accurate CFO is estimated based on the whole preamble signal. In order to eliminate the impact of different intercept of phase series in each SGs, we divide the phase series into segments with the length of one symbol group and then calculate the distances in each segments, which is shown in Fig. <ref>. The estimation of CFO relies on the average distances of change points in each segments and we use the Tree Sigma Guidelines as the postprocessing scheme to exclude anomalous values of distances to increase estimation accuracy. Then fine estimation of ToA can be conducted. §.§ Slope Change Detection Method We use autoencoders with a time-invariant representation (TIRE) to detect slope change points of phase series and then calculate the T_ph and n_l <cit.>. We first segment phase series into consecutive windows with the length of N. The autoencoders are utilized to extract features of hidden layer from the consecutive windows. These features can be divided into time-invariant features s_n and instantaneous features u_n (<ref>). The structure is shown in Fig. <ref>. h_n=[(s_n)^T,(u_n)^T]^T In (<ref>) h_n denotes the encoded output from encoder layer of the n-th window. Invariant features refer to the statistical characteristics that change only when change point exists in consecutive windows. That means the differences of time-invariant features between time windows can manifest the abrupt change. The differences are summarized by the defined dissimilarity measure D D_n=||s_n^TD-s_n+N^TD||_2 N denotes the time-domain window size. The change points are located at the peaks of dissimilarity measure D_n. In order to reduce the alarming rate, we exploit the prominence of peaks to determine the location of change points by comparing with a predifined threshold. 
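A minimal stand-in for the detector described above is sketched below: instead of a trained TIRE autoencoder, it uses a single hand-crafted feature per window (the locally fitted slope), the dissimilarity between consecutive windows, and peak prominence to locate the change points, and then converts the distance between the first two change points into T_ph, a CFO estimate and the location n_l. This is only meant to illustrate the mechanics of the scheme; the window length and prominence threshold are assumed values, and the actual method uses learned time-invariant features.

import numpy as np
from scipy.signal import find_peaks

def detect_slope_changes(phase, win=64, prominence=0.5):
    """Simplified stand-in for TIRE: one feature per non-overlapping window
    (the least-squares slope), a dissimilarity measure between consecutive
    windows, and peak prominence to declare change points."""
    n_win = len(phase) // win
    t = np.arange(win)
    slopes = np.array([np.polyfit(t, phase[i * win:(i + 1) * win], 1)[0]
                       for i in range(n_win)])
    dissim = np.abs(np.diff(slopes))                  # dissimilarity D_n with a single feature
    peaks, _ = find_peaks(dissim, prominence=prominence)
    return (peaks + 1) * win                          # change-point locations in samples

def coarse_estimate(phase, fs, win=64):
    """T_ph from the first two change points, then |CFO| = fs / T_ph and n_l."""
    cps = detect_slope_changes(phase, win)
    if len(cps) < 2:
        return None, None, None
    T_ph = cps[1] - cps[0]
    return cps[0], T_ph, fs / T_ph

# usage with phase_series and fs from the previous sketch:
# n_l, T_ph, f_off_hat = coarse_estimate(phase_series, fs)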
In our context, we set the number of time invariant features as one in time domain which refers to the slope of phase series. Fig. <ref> demonstrates the phase series and its corresponding dissimilarity measure. The blue lines represent the prominence of peaks. Change points can be detected when the prominence exceeds predefined threshold. § SIMULATION RESULTS In this section, we first discuss the coarse estimation performance of ToA to determine the range of residual ToA through Monte Carlo simulations. Then residual ToA and CFO estimation results are presented. The simulation parameters are listed in TABLE <ref>. §.§ ToA Coarse Estimation Performance The normalization of ToA and CFO is done on the symbol duration and sampling frequency. Table <ref> reports the ToA coarse estimation performance in terms of absolute average estimation error and max estimation error. Based on the estimation performance, the range of residual ToA is set as [-100μ s,100μ s] in the following simulation. §.§ CFO Estimation Performance Fig. <ref> shows the cumulative distribution function (CDF) of normalized CFO error with different SNR (SNR=3dB, SNR=0dB, SNR=-3dB) when the N_rep is set as 8 and the CDF of normalized CFO error with different N_rep (N_rep=8, N_rep=16, N_rep=32) when SNR is set as -3dB. It can be found that the absolute CFO estimation error in 99% cases is less than 1.92 with SNR=3dB. However, the estimation error grows when decreasing the SNR. This can be solved by increasing the repetition of basic preamble units (N_rep). When N_rep=32, the max absolute CFO error is less than 4.5. The estimation accuracy of CFO grows with the N_rep resulting from the amount of detected change points in phase series. The amount also increases with the growth of CFO. Hence, the estimation performance with large CFO can be guaranteed. §.§ ToA Fine Estimation Performance We simulate the ToA estimation process on two kinds of channels: AWGN channel and TDL-C channel. Here the technique proposed in <cit.> and the Brute Force (BF) algorithm based on differential correlation <cit.> are compared with our method. In <cit.>, the author exploits Stationary Discrete Wavelet Transform (S-DWT) to decompose the received signal into 8 levels. It can be found that the decomposed sequence y_i follows two different distributions before and after time delay τ. Then the hypothesis of no change point in whole sequence is described as H_0:θ =θ_0 for 1≤ i≤ N and if one change point appears at i=τ, the hypothesis is presented as H_1:θ =θ_0 for 1≤ i≤τ,θ=θ_1 for τ≤ i≤ N. Based on cumulative sum (CUSUM) algorithm of change point detection, the log-likelihood ratio (LLR) of hypothesis reaches the maximum after the change. This algorithm is indicated as DWT in the following. In <cit.>, the BF algorithm based on differential correlation detects the peak of cross correlation values. In the following, Fig. 8-9 report the normalized ToA error with different SNR (SNR=3dB, SNR=0dB, SNR=-3dB) when the N_rep is set as 8 on AWGN channel and TDL-C channel. It can be seen that the proposed method outperforms the DWT and differential correlation method both on AWGN channel and on TDL-C channel due to the steepest slope of CDF. The performance of differential correlation method is heavily ruined because of the presence of noise. The performance of DWT heavily depends on how well the actual data follows the assumed Gaussian distribution which can be proved by the simulation results that the estimation error of DWT increases apparently on TDL-C channel. 
Since we use TIRE autoencoders to detect the change points of the phase series without any prior assumption on the distribution, our method is applicable to various channels beyond the AWGN channel. When the SNR decreases from 3 dB to -3 dB on the AWGN and TDL-C channels, the curves follow the same trend, and the maximum absolute normalized ToA error grows from 2.1 μs to 4.2 μs on the AWGN channel and from 2.7 μs to 5.3 μs on the TDL-C channel, which demonstrates robustness against noise. The whole estimation process is conducted in the time domain, and both the ToA and CFO are estimated within a single change point detection pass, which saves the computational cost of a Discrete Fourier Transform (DFT). The TIRE autoencoder used in this context can be pre-trained on data from actual preamble signals to speed up uplink synchronization. In addition, our method enlarges the CFO estimation range compared with existing methods. Thus, the overall performance shows that our method is well suited to uplink synchronization for NB-IoT in NTNs. § CONCLUSION In this paper, we propose a phase-series-based approach using change point detection to address NB-IoT uplink synchronization for NTNs without GNSS. We analyze the linearity of the phase series, which allows us to derive expressions for the ToA and CFO. A coarse estimation method is then proposed to eliminate the impact of the large propagation delay. After compensating for the delay, a fine estimation of the residual ToA and CFO is presented. Simulation results demonstrate the superiority of our method.
http://arxiv.org/abs/2306.08125v1
20230613203702
Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD
[ "Yijun Wan", "Abdellatif Zaidi", "Umut Simsekli" ]
stat.ML
[ "stat.ML", "cs.LG", "math.PR" ]
Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD Yijun Wan, Abdellatif Zaidi, Umut Simsekli June 13, 2023 =================================================================================================================== Neural network compression has been an increasingly important subject, due to its practical implications in terms of reducing the computational requirements and its theoretical implications, as there is an explicit connection between compressibility and the generalization error. Recent studies have shown that the choice of the hyperparameters of stochastic gradient descent (SGD) can have an effect on the compressibility of the learned parameter vector. Even though these results have shed some light on the role of the training dynamics over compressibility, they relied on unverifiable assumptions and the resulting theory does not provide a practical guideline due to its implicitness. In this study, we propose a simple modification of SGD, such that the outputs of the algorithm will be provably compressible without making any nontrivial assumptions. We consider a one-hidden-layer neural network trained with SGD, and we inject additive heavy-tailed noise into the iterates at each iteration. We then show that, for any compression rate, there exists a level of overparametrization (i.e., the number of hidden units), such that the output of the algorithm will be compressible with high probability. To achieve this result, we make two main technical contributions: (i) we build on a recent study on stochastic analysis and prove a 'propagation of chaos' result with improved rates for a class of heavy-tailed stochastic differential equations, and (ii) we derive strong-error estimates for their Euler discretization. We finally illustrate our approach with experiments, where the results suggest that the proposed approach achieves compressibility with a slight compromise from the training and test error. § INTRODUCTION Obtaining compressible neural networks has become an increasingly important task in the last decade, and it has essential implications from both practical and theoretical perspectives. From a practical point of view, as modern network architectures might contain an excessive number of parameters, compression has a crucial role in the deployment of such networks in resource-limited environments <cit.>. On the other hand, from a theoretical perspective, several studies have shown that compressible neural networks should achieve a better generalization performance due to their lower-dimensional structure <cit.>. Despite their evident benefits, it is still not clear how to obtain compressible networks with provable guarantees. In an empirical study, <cit.> introduced the 'lottery ticket hypothesis', which indicated that a randomly initialized neural network will have a sub-network that can achieve a performance comparable to that of the original network; hence, the original network can be compressed to the smaller sub-network.
This empirical study has formed a fertile ground for subsequent theoretical research, which showed that such a sub-network can indeed exist (see e.g., <cit.>); yet, it is not clear how to develop an algorithm that can find it in a feasible amount of time. Another line of research has developed methods to enforce compressibility of neural networks by using sparsity enforcing regularizers, see e.g., <cit.>. While they have led to interesting algorithms, the resulting algorithms typically require higher computational needs due to the increased complexity of the problem. On the other hand, due to the nonconvexity of the overall objective, it is also not trivial to provide theoretical guarantees for the compressibility of the resulting network weights. Recently it has been shown that the training dynamics can have an influence on the compressibility of the algorithm output. In particular, motivated by the empirical and theoretical evidence that heavy-tails might arise in stochastic optimization (see e.g., <cit.>), <cit.> showed that the network weights learned by stochastic gradient descent (SGD) will be compressible if we assume that they are heavy-tailed and there exists a certain form of statistical independence within the network weights. These studies illustrated that, even without any modification to the optimization algorithm, the learned network weights can be compressible depending on the algorithm hyperparameters (such as the step-size or the batch-size). Even though the tail and independence conditions were recently relaxed in <cit.>, the resulting theory relies on unverifiable assumptions, and hence does not provide a practical guideline. In this paper, we focus on single-hidden-layer neural networks with a fixed second layer (i.e., the setting used in previous work <cit.>) trained with vanilla SGD, and show that, when the iterates of SGD are simply perturbed by heavy-tailed noise with infinite variance (similar to the settings considered in <cit.>), the assumption made in <cit.> in effect holds. More precisely, denoting the number of hidden units by n and the step-size of SGD by η, we consider the mean-field limit, where n goes to infinity and η goes to zero. We show that in this limiting case, the columns of the weight matrix will be independent and identically distributed (i.i.d.) with a common heavy-tailed distribution. Then, we focus on the finite n and η regime and we prove that for any compression ratio (to be precised in the next section), there exists a number N, such that if n ≥ N and η is sufficiently small, the network weight matrix will be compressible with high probability. Figure <ref> illustrates the overall approach and precises our notion of compressibility. R0.58 format=plain < g r a p h i c s > The illustration of the overall approach. We consider a one-hidden-layer neural network with n hidden units, which results in a weight matrix of n columns (first layer). We show that, when SGD is perturbed with heavy-tailed noise, as n→∞, each column will follow a multivariate heavy-tailed distribution in an i.i.d. fashion. This implies that a small number of columns will have significantly larger norms compared to the others; hence, the norm of the overall weight matrix will be determined by such columns <cit.>. As a result, the majority of the columns can be removed (i.e., set to zero), which we refer to as compressibility. To prove our compressibility result, we make two main technical contributions. 
We first consider the case where the step-size η→ 0, for which the SGD recursion perturbed with heavy-tailed noise yields a system of heavy-tailed stochastic differential equations (SDE) with n particles. As our first technical contribution, we show that as n →∞ this particle system converges to a mean-field limit, which is a McKean-Vlasov-type SDE that is driven by a heavy-tailed process <cit.>. For this convergence, we obtain a rate of n^-1/2, which is faster than the best known rates, as recently proven in <cit.>. This result indicates that a propagation of chaos phenomenon <cit.> emerges[Here, the term chaos refers to statistical independence: when the particles are initialized independently, they stay independent through the whole process even though their common distribution might evolve.]: in the mean-field regime, the columns of the weight matrix will be i.i.d. and heavy-tailed due to the injected noise. Next, we focus on the Euler discretizations of the particle SDE to be able to obtain a practical, implementable algorithm. As our second main technical contribution, we derive strong-error estimates for the Euler discretization <cit.> and show that for sufficiently small η, the trajectories of the discretized process will be close to the one of the continuous-time SDE, in a precise sense. This result is similar to the ones derived for vanilla SDEs (e.g., <cit.>) and enables us to incorporate the error induced by using a finite step-size η to the error of the overall procedure. Equipped with these results, we finally prove a high-probability compression bound by invoking <cit.>, which essentially shows that an i.i.d. sequence of heavy-tailed random variables will have a small proportion of elements that will dominate the whole sequence in terms of absolute values (to be stated formally in the next section). This establishes our main contribution. Here, we shall note that similar mean-field regimes have already been considered in machine learning (see e.g., <cit.>). However, these studies all focused on particle SDE systems that either converge to deterministic systems or that are driven by Brownian motion. While they have introduced interesting analysis tools, we cannot directly benefit from their analysis in this paper, since the heavy-tails are crucial for obtaining compressibility, and the Brownian-driven SDEs cannot produce heavy-tailed solutions in general. Hence, as we consider heavy-tailed SDEs in this paper, we need to use different techniques to prove mean-field limits, compared to the prior art in machine learning. To validate our theory, we conduct experiments on single-hidden-layer neural networks on different datasets. Our results show that, even with a minor modification to SGD (i.e., injecting heavy-tailed noise), the proposed approach can achieve compressibility with a negligible computational overhead and with a slight compromise from the training and test error. For instance, on a classification task with the MNIST dataset, when we set n=10K, with vanilla SGD, we obtain a test accuracy of 94.69%, whereas with the proposed approach, we can remove 44% of the columns of the weight matrix, while maintaining a test accuracy of 94.04%. We provide all the proofs in the appendix. § PRELIMINARIES AND TECHNICAL BACKGROUND Notation. For a vector u∈^d, denote by u its Euclidean norm, and by u_p its ℓ_p norm. For a function f∈ C(^d_1, ^d_2), denote by f_∞:=sup_x∈^d_1f(x) its L^∞ norm. 
For a family of n (or infinity) vectors, the indexing ·^i,n denotes the i-th vector in the family. In addition, for random variables, (d)= means equality in distribution, and the space of probability measures on ^d is denoted by 𝒫(^d). For a matrix A∈^d_1× d_2, its Frobenius norm is denoted by A_F = √(∑_i=1^d_1∑_j=1^d_2|a_i,j|^2). Without specifically mentioning, 𝔼 denotes the expectation over all the randomness taken into consideration. §.§ Alpha-stable processes A random variable X ∈ℝ^d is called α-stable with the stability parameter α∈ (0,2], if X_1, X_2, … are independent copies of X, then n^-1/α∑_j=1^n X_j (d)= X for all n≥ 1 <cit.>. Stable distributions appear as the limiting distribution in the generalized central limit theorem (CLT) <cit.>. In the one-dimensional case (d=1), we call the variable X a symmetric α-stable random variable if its characteristic function is of the following form: 𝔼[exp(iω X)]=exp(-|ω|^α) for ω∈ℝ. For symmetric α-stable distributions, the case α=2 corresponds to the Gaussian distribution, while α=1 corresponds to the Cauchy distribution. An important property of α-stable distributions is that in the case α∈(1,2), the p-th moment of an α-stable random variable is finite if and only if p<α; hence, the distribution is heavy-tailed. In particular, 𝔼[|X|]< ∞ and 𝔼[|X|^2] = ∞, which can be used to model phenomena with heavy-tailed observations. There exist different types of α-stable random vectors in ℝ^d. In this study we will be interested in the following three variants, whose characteristic functions (for u∈ℝ^d) are given as follows: * Type-I. Let Z ∈ℝ be a symmetric α-stable random variable. We then construct the random vector X such that all the coordinates of X is equated to Z. In other words X = 1_d Z, where 1_d ∈ℝ^d is a vector of ones. With this choice, X admits the following characteristic function: exp(i⟨ u,X⟩=exp(-|⟨ u, 1_d ⟩|^α); * Type-II. X has i.i.d. coordinates, such that each component of X is a symmetric α-stable random variable in ℝ. This choice yields the following characteristic function: exp(i⟨ u,X⟩=exp(-∑_i=1^d|u_i|^α); * Type-III. X is rotationally invariant α-stable random vector with the characteristic function exp(i⟨ u,X⟩=exp(-u^α). Note that the Type-II and Type-III noises reduce to a Gaussian distribution when α =2 (i.e., the characteristic function becomes exp(-u^2)). Similar to the fact that stable distributions extend the Gaussian distribution, we can define a more general random process, called the α-stable Lévy process, that extends the Brownian motion. Formally, α-stable processes are stochastic processes (L^α_t)_t ≥ 0 with independent and stationary α-stable increments, and have the following definition: * L^α_0=0 almost surely, * For any 0≤ t_0<t_1<⋯<t_N, the increments L^α_t_n-L^α_t_n-1 are independent, * For any 0≤ s< t, the difference L^α_t-L^α_s and (t-s)^1/αL^α_1 have the same distribution, * L^α_t is stochastically continuous, i.e. for any δ>0 and s≥ 0, ℙ(L^α_t-L^α_s>δ)→ 0 as t→ s. To fully characterize an α-stable process, we further need to specify the distribution of L^α_1. Along with the above properties, the choice for L^α_1 will fully determine the process. For this purpose, we will again consider the previous three types of α-stable vectors: We will call the process L^α_t a Type-I process if L^α_1 is a Type-I α-stable random vector. We define the Type-II and Type-III processes analogously. Note that, when α =2, Type-II and Type-III processes reduce to the Brownian motion. 
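For illustration, the three noise types can be sampled as in the following Python sketch, which relies on scipy.stats.levy_stable; the rotationally invariant Type-III vector is drawn through the standard sub-Gaussian representation X = sqrt(A)·G with A a totally skewed (α/2)-stable variable. The scale constants assume SciPy's default S1 parameterization and α < 2; this is a sketch for generating the noise, not the exact code used in the experiments.

import numpy as np
from scipy.stats import levy_stable

def stable_type1(alpha, d, size):
    """Type-I: X = 1_d * Z with Z a scalar symmetric alpha-stable variable."""
    z = levy_stable.rvs(alpha, beta=0.0, size=size)
    return np.repeat(z[:, None], d, axis=1)

def stable_type2(alpha, d, size):
    """Type-II: i.i.d. symmetric alpha-stable coordinates."""
    return levy_stable.rvs(alpha, beta=0.0, size=(size, d))

def stable_type3(alpha, d, size):
    """Type-III: rotationally invariant alpha-stable vector via the sub-Gaussian
    representation X = sqrt(A) * G, where A ~ S_{alpha/2}(cos(pi*alpha/4)^{2/alpha}, 1)
    is positive and G ~ N(0, 2 I_d); this yields E[exp(i<u,X>)] = exp(-||u||^alpha)."""
    scale_a = np.cos(np.pi * alpha / 4.0) ** (2.0 / alpha)
    a = levy_stable.rvs(alpha / 2.0, beta=1.0, scale=scale_a, size=size)
    g = np.random.normal(0.0, np.sqrt(2.0), size=(size, d))
    return np.sqrt(a)[:, None] * g

# e.g. 10,000 Type-II draws in dimension 5 with alpha = 1.75:
X = stable_type2(1.75, d=5, size=10_000)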
For notational clarity, occasionally, we will drop the index α and denote the process by L_t. §.§ Compressibility of heavy-tailed processes One interesting property of heavy-tailed distributions in the one-dimensional case is that they exhibit a certain compressibility property. Informally, if we consider a sequence of i.i.d. random variables coming from a heavy-tailed distribution, a small portion of these variables will likely have a very large magnitude due to the heaviness of the tails, and they will dominate all the other variables in terms of magnitudes <cit.>. Therefore, if we only keep this small number of variables with large magnitude, we can `compress' (in a lossy way) the whole sequence of random variables by representing it with this small subset. Concurrently, <cit.> provided formal proofs for these explanations. Formally, <cit.> characterized the family of probability distributions whose i.i.d. realizations are compressible. They introduced the notion of ℓ_p-compressibility - in terms of the error made after pruning a fixed portion of small (in magnitude) elements of an i.i.d. sequence, whose common distribution has diverging p-th order moments. More precisely, let X_n=(x_1, …, x_n) be a sequence of i.i.d. random variables such that |x_1|^α=∞ for some α∈ℝ_+. Then, for all p≥α and 0<κ≤ 1 denoting by X_n^(κ n) the ⌊κ n⌋ largest ordered statistics[In other words, X_n^(κ n) is obtained by keeping only the largest (in magnitude) κ n elements of X_n and setting all the other elements to 0.] of X_n, the following asymptotic on the relative compression error holds almost surely: lim_n→∞X_n^(κ n) -X_n_p/X_n_p = 0 Built upon this fact, <cit.> proposed structural pruning of neural networks (the procedure described in Figure <ref>) by assuming that the network weights provided by SGD will be asymptotically independent. In this study, instead of making this assumption, we will directly prove that the network weights will be asymptotically independent in the two layer neural network setting with additive heavy-tailed noise injections to SGD. § PROBLEM SETTING AND THE MAIN RESULT We consider a single hidden-layer overparametrized network of n units and use the setup provided in <cit.>. Our goal is to minimize the expected loss in a supervised learning regime, where for each data z=(x,y) distributed according to π( x, y);[Note that for finite datasets, π can be chosen as a measure supported on finitely many points.] the feature x is included in 𝒳⊂^d and the label y is in 𝒴. We denote by θ^i,n∈^p the parameter for the i-th unit, and the parametrized model is denoted by h_x: ℝ^p →ℝ^l. The mean-field network is the average over models for n units: f_Θ^n(x) = 1/n∑_i=1^n h_x (θ^i,n), where Θ^n=(θ^i,n)_i=1^n ∈ℝ^p × n denotes the collection of parameters in the network and x∈𝒳 is the feature variable for the data point. In particular, the mean-field network corresponds to a two-layer neural network with the weights of the second layer are fixed to be 1/n and Θ^n is the parameters of the first layer. While this model is less realistic than the models used in practice, nevertheless, we believe that it is desirable from theoretical point of view, and this defect can be circumvented upon replacing h_x(θ^i,n) by h_x(c^i,n,θ^i,n) = c^i,nh_x(θ^i,n), where c^i,n and θ^i,n are weights corresponding to different layers. However, in order to obtain similar results in this setup as in our paper, stronger assumptions are inevitable and the proof should be more involved, which are left for future work. 
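For concreteness, a minimal NumPy implementation of the mean-field network f_Θ^n is given below, instantiating h_x(θ) as a single smooth unit θ ↦ tanh(⟨θ, x⟩); this choice is compatible with the regularity assumptions stated below, and the ReLU units used later in the experiments can be substituted in the same way. The choice of h_x, the dimensions and the initialization are illustrative assumptions.

import numpy as np

def mean_field_forward(theta, x, activation=np.tanh):
    """Mean-field network f_Theta(x) = (1/n) * sum_i h_x(theta^{i,n}), with the
    (assumed) choice h_x(theta) = activation(<theta, x>) and a second layer
    fixed to 1/n.

    theta : (d, n) array with one column per hidden unit
    x     : (d,) or (batch, d) array of inputs
    """
    pre = np.atleast_2d(x) @ theta            # (batch, n) pre-activations
    return activation(pre).mean(axis=1)       # average over the n units

# tiny usage example: n = 1000 units, inputs of dimension d = 20
d, n = 20, 1000
theta0 = np.random.randn(d, n)                # theta^{i,n}_0 drawn i.i.d. from mu_0
x = np.random.randn(5, d)
print(mean_field_forward(theta0, x).shape)    # (5,)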
Given a loss function ℓ: ℝ^l×𝒴→ℝ^+, the goal (for each n) is to minimize the expected loss R(Θ^n) = 𝔼_(x,y)∼π[ℓ( f_Θ^n(x),y) ] . One of the most popular approaches to minimize this loss is the stochastic gradient descent (SGD) algorithm. In this study, we consider a simple modification of SGD, where we inject a stable noise vector to the iterates at each iteration. For notational clarity, we will describe the algorithm and develop the theory over gradient descent, where we will assume that the algorithm has access to the true gradient ∇ R at every iteration. However, since we are already injecting a heavy-tailed noise with infinite variance, our techniques can be adapted for handling the stochastic gradient noise (under additional assumptions, e.g., <cit.>), which typically has a milder behavior compared to the α-stable noise[In <cit.> the authors argued that the stochastic gradient noise in neural networks can be modeled by using stable distributions. Under such an assumption, the effect of the stochastic gradients can be directly incorporated into L_t^α. ]. Let us set the notation for the proposed algorithm. Let θ̂^i,n_0, i=1,…,n, be the initial values of the iterates, which are n random variables in ^d distributed independently according to a given initial probability distribution μ_0. Then, we consider the gradient descent updates with stepsize η n, which is perturbed by i.i.d. α-stable noises σ·η^1/αX^i,n_k for each unit i=1,…, n and some σ>0: θ̂^i,n_k+1 = θ̂^i,n_k -η n[∂_θ^i,nR(Θ^n) ]+ σ·η^1/α X^i,n_k θ̂^i,n_0 ∼μ_0 ∈𝒫(^d), where the scaling factor η^1/α in front of the stable noise enables the discrete dynamics of the system homogenize to SDEs as η→ 0. At this stage, we do not have to determine which type of stable noise (e.g., Type-I, II, or III) that we shall consider as they will all satisfy the requirements of our theory. However, our empirical findings will illustrate that the choice will affect the overall performance. We now state the assumptions that will imply our theoretical results. The following assumptions are similar to <cit.>. * Regularity of the model: for each x∈𝒳, the function h_x: ^p →^l is two-times differentiable, and there exists a function Ψ: 𝒳→_+ such that for any x∈𝒳, h_x(·)_∞ + ∇ h_x(·)_∞ + ∇^2 h_x(·)∞≤Ψ(x). * Regularity of the loss function: there exists a function Φ: 𝒴→_+ such that ∂_1 ℓ(·,y)_∞ + ∂^2_1 ℓ(·,y)_∞≤Φ(y) * Moment bounds on Φ(·) and Ψ(·): there exists a positive constant B such that 𝔼_(x,y)∼π[Ψ^2(x)(1+Φ^2(y))] ≤ B^2. Let us remark that these are rather standard smoothness assumptions that have been made in the mean field literature <cit.> and are satisfied by several smooth activation functions, including the sigmoid and hyper-tangent functions. We now proceed to our main result. Let Θ̂^n_k ∈ℝ^p × n be the concatenation of all parameters θ̂^i,n_k, i=1,…,n obtained by the recursion (<ref>) after k iterations. We will now compress Θ̂^n_k by pruning its columns with small norms. More precisely, fix a compression ratio κ∈ (0,1), compute the norms of the columns of Θ̂^n_k, i.e., θ̂^i,n_k. Then, keep the ⌊κ n⌋ columns, which have the largest norms, and set all the other columns to zero, in all their entirety. Finally, denote by Θ̂^(κ n)_k ∈ℝ^p × n, the pruned version of Θ̂^n_k. Suppose that Assumption <ref> holds. 
For any fixed t>0, κ∈ (0,1) and ϵ>0 sufficiently small, with probability 1-ϵ, there exists N∈ℕ_+ such that for all n≥ N and η such that η≤ n^-α/2-1, the following upper bound on the relative compression error for the parameters holds: Θ̂^(κ n)_⌊ t/η⌋ - Θ̂^n_⌊ t/η⌋_F/Θ̂^n_⌊ t/η⌋_F≤ϵ . This bound shows that, thanks to the heavy-tailed noise injections, the weight matrices will be compressible at any compression rate, as long as the network is sufficiently overparametrized and the step-size is sufficiently small. We shall note that this bound also enables us to directly obtain a generalization bound by invoking <cit.>. § PROOF STRATEGY AND INTERMEDIATE RESULTS In this section, we gather the main technical contributions with the purpose of demonstrating Theorem <ref>. We begin by rewriting (<ref>) in the following form: θ̂^i,n_k+1 - θ̂^i,n_k = η b(θ̂^i,n_k, μ̂^n_k) + σ·η^1/α X^i,n_k θ̂^i,n_0 ∼μ_0 ∈𝒫(^d), where μ̂^n_k = 1/nδ_θ̂^i,n_k is the empirical distribution of parameters at iteration k and δ is the Dirac measure, and the drift is given by b(θ^i,n_k,μ^n_k) = - 𝔼[∂_1 ℓ(μ^n_k(h_x(·),y)∇ h_x(θ^i,n)], where ∂_1 denotes the partial derivative with respect to the first parameter and μ^n_k(h_x(·)) := ∫ h_x(θ) μ^n_k(θ) = ∑_i=1^n h_x(θ^i,n_k) = f_Θ^n_k(x). It is easy to check that b(θ^i,n_k,μ^n_k) = -n∂_θ^i,nR(Θ^n). By looking at the dynamics from this perspective, we can treat the evolution of the parameters as a system of evolving probability distributions μ^n_k: the empirical distribution of the parameters during the training process will converge to a limit as η goes to 0 and n goes to infinity. We start by linking the recursion (<ref>) to its limiting case where η→ 0. The limiting dynamics can be described by the following system of SDEs: θ^i,n_t = b(θ^i,n_t,μ^n_t) t + σL^i,n_t θ^i,n_0 ∼μ_0 ∈𝒫(^d), where μ^n_t = 1/nδ_θ^i,n_t and (L^i,n_t)_t≥ 0 are independent α-stable processes such that L^i,n_1 (d)= X^i,n_1. We can now see the original recursion (<ref>) as an Euler discretization of (<ref>) and then we have the following strong uniform error estimate for the discretization. Let (θ^i,n_t)_t≥ 0 be the solutions to SDE (<ref>) and (θ̂^i,n_k)_k∈ℕ_+ be given by SGD (<ref>) with the same initial condition ξ^i,n and α-stable Lévy noise L^i,n_·, i=1,…,n. Under Assumption <ref>, for any T>0, if η k≤ T, there exists a constant C depending on B,T,α such that the approximation error sup_i≤ nθ^i,n_η k-θ̂^i,n_k≤ C (η n)^1/α. In comparison to the standard error estimates in the Euler-Maruyama scheme concerning only the stepsize η, the additional n-dependence is due to the fact that here we consider the supremum of the approximation error over all i≤ n, which involves the expectation of the supremum of the modulus of n independent α-stable random variables. Next, we start from the system (<ref>) and consider the case where n→∞. In this limit, we obtain the following McKean-Vlasov-type stochastic differential equation: θ^∞_t = b(θ^∞_t,[θ^∞_t]) t + L_t [θ^∞_0]=μ∈𝒫(^d), where (L_t)_t≥ 0 is an α-stable process and [θ^∞_t] denotes the distribution of θ^∞_t. The existence and uniqueness of a strong solution to (<ref>) are given in <cit.>. Moreover, for any positive T, sup_t≤ Tθ^∞_t^α<+∞. This SDE with measure-dependent coefficients turns out to be a useful mechanism for analyzing the behavior of neural networks and provides insights into the effects of noise on the learning dynamics. 
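As a concrete companion to the statement above, the pruning procedure (keep the ⌊κn⌋ largest-norm columns, zero out the rest) and the relative compression error appearing in the theorem can be computed with a few lines of NumPy; a helper for the pruning ratio reported later in the experiments (the largest fraction of columns that can be removed while the relative error stays below 0.1) is also included. This is a sketch of the evaluation metrics only, assuming the weight matrix stores one column per hidden unit.

import numpy as np

def prune_columns(theta, kappa):
    """Keep the floor(kappa * n) columns with the largest Euclidean norm, zero the rest."""
    d, n = theta.shape
    k = int(np.floor(kappa * n))
    norms = np.linalg.norm(theta, axis=0)
    keep = np.argsort(norms)[::-1][:k]        # indices of the k largest-norm columns
    pruned = np.zeros_like(theta)
    pruned[:, keep] = theta[:, keep]
    return pruned

def relative_compression_error(theta, kappa):
    """|| Theta^{(kappa n)} - Theta ||_F / || Theta ||_F, the quantity bounded in the theorem."""
    pruned = prune_columns(theta, kappa)
    return np.linalg.norm(pruned - theta) / np.linalg.norm(theta)

def pruning_ratio(theta, err_threshold=0.1):
    """Largest fraction of (smallest-norm) columns that can be zeroed while keeping
    the relative Frobenius error below err_threshold."""
    norms2 = np.sort(np.linalg.norm(theta, axis=0) ** 2)      # ascending
    removed = np.cumsum(norms2) / norms2.sum()                # squared energy removed so far
    return np.searchsorted(removed, err_threshold ** 2, side="right") / theta.shape[1]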
In this step, we will link the system (<ref>) to its limit (<ref>), which is a strong uniform propagation of chaos result for the weights. The next result shows that, when n is sufficiently large, the trajectories of weights asymptotically behave as i.i.d. solutions to (<ref>). Following the existence and uniqueness of strong solutions to (<ref>) and (<ref>), let (θ^i,∞_t)_t≥ 0 be solutions to the McKean-Vlasov equation (<ref>) and (θ^i,n_t)_t≥ 0 be solutions to (<ref>) associated with same realization of α-stable processes (L^i_t)_t≥ 0 for each i. Suppose that (L^i_t)_t≥ 0 are independent. Then there exists C depending on T, B such that sup_t≤ Tsup_i≤ n|θ^i,n_t - θ^i,∞_t|≤C/√(n) It is worth mentioning that the O(1/√(n)) decreasing rate here is better, if α<2, than O(1/n^α) in the litterature on the propagation of chaos <cit.> with classical Lipschitz assumptions on the coefficients of SDEs. The reason is that here, thanks to Assumption <ref>, we can take into account the specific structure of the one-hidden layer neural networks. Finally, we are interested in the distributional properties of the McKean-Vlasov equation (<ref>). The following result establishes that the marginal distributions of (<ref>) will have diverging second-order moments, hence, they will be heavy-tailed. Let (L_t)_t≥ 0 be an α-stable process. For any time t, let θ_t be the solution to (<ref>) with initialization θ_0 which is independent of (L_t)_t≥ 0 such that θ_0<∞, then the following holds θ^∞_t^2 = +∞. We remark that the result is weak in the sense that details on the tails of θ_t with respect to α and t are implicit. However, it renders sufficient for our compressibility result in Theorem <ref>. Now, having proved all the necessary ingredients, Theorem <ref> is obtained by accumulating the error bounds proven in Theorems <ref> and <ref>, and applying <cit.> along with Theorem <ref>. § EMPIRICAL RESULTS In this section, we validate our theory with empirical results. Our goal is to investigate the effects of the heavy-tailed noise injection in SGD in terms of compressibility and the train/test performance. We consider a single-hidden-layer neural network with ReLU activations and the cross entropy loss, applied on classifications tasks. We chose the Electrocardiogram (ECG) dataset <cit.>, MNIST, and the CIFAR10 datasets. By slightly streching the scope of our theoretical framework, we also train the weights of the second layer instead of fixing them to 1/n. For SGD, we fix the batch-size to be one tenth of the number of training data points, the step-size is chosen to be small enough to approximate the continuous dynamics given by the McKean-Vlasov equation in order to stay close to the theory, but also not too small so that SGD converges in a reasonable amount of time. As for the noise level σ, we have tried a range of values for each dataset and n, and we chose the largest σ such that the perturbed SGD converges. Intuitively, we can expect that smaller α with heavier tails will lead to lower relative compression error. However, it does not guarantee better test performance: one has to fine tune the parameters appropriately to achieve a favorable trade-off between compression error and the test performance. We repeat all the experiment 5 times and report and average and the standard deviation. For the noiseless case (vanilla SGD), the results of the different runs were almost identical, hence we did not report the standard deviations. 
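A sketch of the training procedure used in these experiments is given below in PyTorch: after every SGD step, an i.i.d. Type-I α-stable perturbation scaled by σ·η^{1/α} is added to the first-layer weights, mirroring recursion (<ref>). The η n factor of the recursion is absorbed into the learning rate here, and the architecture, learning rate and noise level are placeholders rather than the exact values used in the experiments.

import torch
import torch.nn as nn
from scipy.stats import levy_stable

def type1_stable_noise(n_units, d, alpha):
    """Type-I noise: one symmetric alpha-stable scalar per hidden unit,
    repeated across that unit's d incoming weights; returns an (n_units, d) tensor."""
    z = torch.as_tensor(levy_stable.rvs(alpha, 0.0, size=n_units), dtype=torch.float32)
    return z[:, None].repeat(1, d)

def train_noisy_sgd(model, loader, alpha=1.9, sigma=1e-3, lr=1e-3, epochs=10):
    """SGD with additive heavy-tailed noise injected into the first-layer weights."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    W = model[0].weight                       # first layer, shape (n_units, d)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            with torch.no_grad():             # heavy-tailed perturbation of the iterates
                W.add_(sigma * lr ** (1.0 / alpha)
                       * type1_stable_noise(W.shape[0], W.shape[1], alpha))
    return model

# illustrative architecture: one hidden layer of n = 10,000 ReLU units, 10 classes
# model = nn.Sequential(nn.Linear(784, 10_000), nn.ReLU(), nn.Linear(10_000, 10))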
All the experimentation details are given in Appendix <ref> and we present additional experimental results in Appendix <ref>. In our first experiment, we consider the ECG500 dataset and choose the Type-I noise. Our goal is to investigate the effects α and n over the performance. Tables <ref>-<ref> illustrate the results. Here, for different cases, we monitor the training and test accuracies (over 1.00), the pruning ratio: the percentage of the weight matrix that can be pruned while keeping the 90% of the norm of the original matrix[The pruning ratio has the same role of κ, whereas we fix the compression error to 0.1 and find the largest κ that satisfies this error threshold.], and training/test accuracies after pruning (a.p.) the network with the specified pruning ratio. The results show that, even for a moderate number of neurons n=2K, the heavy-tailed noise results in a significant improvement in the compression capability of the neural network. For α =1.9, we can see that the pruning ratio increases to 39%, whereas vanilla SGD can only be compressible with a rate 11%. Besides, the compromise in the test accuracy is almost negligible, the proposed approach achieves 95.3%, whereas vanilla SGD achieves 95.7% accuracy. We also observe that decreasing α (i.e., increasing the heaviness of the tails) results in a better compression rate; yet, there is a tradeoff between this rate and the test performance. In Table <ref>, we repeat the same experiment for n=10K. We observe that the previous conclusions become even clearer in this case, as our theory applies to large n. For the case where α=1.75, we obtain a pruning ratio of 52% with test accuracy 95.4%, whereas for vanilla SGD the ratio is only 11% and the original test accuracy is 96.3%. In our second experiment, we investigate the impact of the noise type. We set n=10K and use the same setting as in Table <ref>. Tables <ref>-<ref> illustrate the results. We observe that the choice of the noise type can make a significant difference in terms of both compressibility and accuracy. While the Type-III noise seems to obtain a similar accuracy when compared to Type-I, it achieves a worse compression rate. On the other hand, the behavior of Type-II noise is perhaps more interesting: for α=1.9 it both increases compressibility and also achieves a better accuracy when compared to unpruned, vanilla SGD. However, we see that its behavior is much more volatile, the performance quickly degrades as we decrease α. From these comparisons, Type-I noise seems to achieve a better tradeoff. In our next experiment, we consider the MNIST dataset, set n=10K and use Type-I noise. Table <ref> illustrates the results. Similar to the previous results, we observe that the injected noise has a visible benefit on compressibility. When α=1.9, our approach doubles the compressibility of the vanilla SGD (from 10% to 21%), whereas the training and test accuracies almost remain unchanged. On the other hand, when we decrease α, we observe that the pruning ratio goes up to 44%, while only compromising 1% of test accuracy. To further illustrate this result, we pruned vanilla SGD by using this pruning ratio (44%). In this case, the test accuracy of SGD drops down to 92%, where as our simple noising scheme achieves 94% of test accuracy with the same pruning ratio. Our last experiment is a negative result that might be useful for illustrating the limitations of our approach. In this case, we consider the CIFAR10 dataset, set n=5000, use Type-I noise. 
We compute the pruning ratio and accuracies as before and we illustrate the results in Table <ref>. We observe that the injected noise does not bring an advantage in this case: vanilla SGD achieves a better pruning ratio when compared to the case where α=1.9. On the other hand, the noise injections result in a significant drop in the training accuracy, and the situation becomes even more prominent when we decrease α. This might indicate that the injected noise might complicate the training process. Following the arguments of <cit.>, we suspect that, in this case, vanilla SGD already exhibits some sort of heavy tails and the additional noise might not be as beneficial as it was in the other cases. Although neural SGD can achieve similar compressibility, this regime is not easily controllable, and our paper is able to provide a more practical guideline for achieving compressibility along with theoretical guarantees. § CONCLUSION We provided a methodological and theoretical framework for provably obtaining compressibility in mean-field neural networks. Our approach requires minimal modification for vanilla SGD and has the same computational complexity. By proving discretization error bounds and propagation of chaos results, we showed that the resulting algorithm is guaranteed to provide compressible parameters. We illustrated our approach on several experiments, where we showed that, in most cases, the proposed approach achieves a high compressibility ratio, while slightly compromising from the accuracy. The limitations of our approach are as follows: (i) we consider mean-field networks, it would be of interest to generalize our results to more sophisticated architectures, (ii) we focused on the compressibility; yet, the noise injection also has an effect on the train/test accuracy. Hence, an investigation of the noise injection on the training loss needs to be performed to understand the bigger picture. Finally, due to the theoretical nature of our paper, it does not have a direct negative social impact. § ACKNOWLEDGEMENTS The authors thank Alain Durmus for fruitful discussions. U.S. is partially supported by the French government under management of Agence Nationale de la Recherche as part of the “Investissements d'avenir” program, reference ANR-19-P3IA0001 (PRAIRIE 3IA Institute) and by the European Research Council Starting Grant DYNASTY – 101039676. APPENDIX The Appendix is organized as follows. * In Section A, we provide technical lemmas for proving Theorem <ref>, Theorem <ref> and Theorem <ref>. * In Section B, we give the proofs to the theorems in the main paper. * In Section C and D, we present experimental details and results of additional experiments. * In Section E, implications of our compressibility studies on federated learning are discussed. § TECHNICAL LEMMAS Under Assumption <ref>, b(θ_1,μ_1) - b(θ_2,μ_2)≤ B ·(θ_1-θ_2 + 𝔼_x∼π[|μ_1(h_x(·)) - μ_2(h_x(·))|^2]^1/2). Moreover, b(·,·)_∞≤ B, and if μ_1 = 1/n∑_i=1^nδ_θ^i_1, μ_2 = 1/n∑_i=1^nδ_θ^i_2, b(θ_1,μ_1) - b(θ_2,μ_2)≤ Bθ_1 - θ_2 + B/n∑_i=1^n θ^i_1 - θ^i_2. Recall that b(θ,μ) = -∂_1 l(μ(h_x(·)),y)∇ h_x(θ). 
Then it follows from triangular inequality that b(θ_1,μ_1) - b(θ_2,μ_2)≤b(θ_1,μ_1) - b(θ_2,μ_1) + b(θ_2,μ_1) - b(θ_2,μ_2) The first term is upper bounded by b(θ_1,μ_1) - b(θ_2,μ_1) ≤∂_1 l(·,y)_∞·∇^2 h_x_∞·θ_2-θ_1 ≤Φ(y)Ψ(x)·θ_1 - θ_2 ≤(Φ^2(y)Ψ^2(x))^1/2·θ_1 - θ_2 ≤ B ·θ_1-θ_2 The second term is upper bounded by b(θ_2,μ_1) - b(θ_2,μ_2) ≤∂^2_1 l(·,y)_∞·∇ h_x(·) _∞· |μ_1(h_x(·)) - μ_2(h_x(·)) | ≤(Φ^2(y)Ψ^2(x))^1/2|μ_1(h_x(·)) - μ_2(h_x(·)) |^2^1/2 ≤ B ·|μ_1(h_x(·)) - μ_2(h_x(·)) |^2^1/2 We conclude the first inequality by combining (<ref>), (<ref>) and (<ref>). For the boundedness of b in the norm infinity, it is not hard to observe that b(θ,μ) = -𝔼[∂_1 l(μ(h_x(·)), y)∇ h_x(θ)] ≤Φ(y)Ψ(x)≤ B. For the last one, it follows from the first bound and Cauchy-Schwarz inequality that b(θ_1,μ_1)-b(θ_2,μ_2) ≤ Bθ_1 - θ_2 + 1/n𝔼_x∼π[(∑_i=1^n h_x(θ^i_1) - h_x(θ^i_2) )^2 ]^1/2 ≤ Bθ_1-θ_2 + 1/n𝔼_x∼π[∇ h_x_∞(∑_i=1^n θ^i_1-θ^i_2)^2 ]^1/2 ≤ Bθ_1 - θ_2 + 1/n𝔼_x∼π[Ψ^2(x)]^1/2·∑_i=1^n θ^i_1-θ^i_2 ≤ Bθ_1-θ_2 + B/n∑_i=1^n θ^i_1 - θ^i_2. Then the proof is completed. §.§ Propagation of Chaos Let (L_t)_t≥ 0 be an α-stable Lévy process and let (ℱ_t)_t≥ 0 be the filtration generated by (L_t)_t≥ 0. Then under Assumption <ref>, given the the initial condition X_0=ξ, there exists a unique adapted process (X_t)_t∈[0,T] for all integrable datum ξ∈ L^1(^p) such that X_t = ξ + ∫_0^t b(X_t,[X_t]) t + L_t. Moreover the first moment of the supremum of the process is bounded sup_t≤ T X_t < +∞. It follows from Theorem 1 in <cit.> by Lemma <ref> where β is taken to be 1. §.§ Compression Consider a non-integrable probability distribution μ taking values in ℝ_+ such that 𝔼_X∼μ[X]=+∞. Let X_1,…,X_n be n i.i.d. copies distributed according to μ. Then for any C positive, ℙ[ 1/n∑_i=1^n X_i ≤ C ] 0. Using the assumption that μ is non-integrable, let K be a cutoff level for μ such that 𝔼_X∼μ[max (X ,K)] = C+1. Therefore by the law of large numbers, when goes to infinity, lim_n→∞1/n∑_i=1^n max(X_i,K) = C+1 almost surely. To conclude, we remark that 1/nlim inf_n→∞∑_i=1^n X_i ≥1/nlim_n→∞∑_i=1^n max(X_i,K), which is lower bounded by C+1 almost surely. Thus the probability that 1/n∑_i=1^n X_i≤ C goes to 0 when n goes to infinity. § PROOFS §.§ Proof of Theorem <ref> Recall that θ_t = θ_0 + ∫_0^t b(θ_s,[θ_s]) s + L_t, then θ_t^2 = ⟨θ_0 + ∫_0^t b(θ_s,[θ_s]) s + L_t, θ_0 + ∫_0^t b(θ_s,[θ_s]) s + L_t⟩ = θ_0+ ∫_0^t b(θ_s,[θ_s]) s^2 + 2⟨θ_0+ ∫_0^t b(θ_s,[θ_s]) s, L_t⟩ + L_t^2 ≥L_t^2 - 2θ_0·L_t - 2tb(·)_∞·L_t ≥L_t^2 - 2θ_0L_t - 2Bt ··L_t, where the last relation follows from the independence between the initialization θ_0 and the difusion noise (L_t)_t≥ 0 and Lemma <ref>. The proof is completed by noting that L_t^2 = ∞ and θ_0, L_t < ∞. §.§ Proof of Theorem <ref> By identification of the diffusion process (Z^i_t)_t≥ 0 in (<ref>) and (<ref>), the difference of their solutions θ^i,n_t and θ^i,∞_t for all t∈[0,T] satisfies θ^i,n_t - θ^i,∞_t = ∫_0^t [b(θ^i,n_s,μ^n_s) - b(θ^i,∞_s, [θ^i,∞_s]) ] s, where μ_t = 1/n∑_i=1^n δ_θ^i,n_t and [θ^i,∞_t] denotes the distribution of θ^i,∞_t. Using Lemma <ref>, θ^i,n_t - θ^i,∞_t ≤ B∫_0^t θ^i,n_s-θ^i,∞_s s + B ∫_0^t 𝔼_x∼π[|μ^n_s(h_x(·)) - [θ^i,∞_s](h_x(·))|^2]^1/2 s ≤ B ∫_0^t sup_i≤ nθ^i,n_s - θ^i,∞_s s + B∫_0^t 𝔼_x∼π[|μ^n_s(h_x(·)) - μ̅^n_s(h_x(·))|^2]^1/2 s + B∫_0^t 𝔼_x∼π[|μ̅^n_s(h_x(·)) - [θ^i,∞_s](h_x(·))|^2]^1/2 s where μ̅^n_s := 1/n∑_i=1^n δ_θ^i,∞_s, the empirical measure of θ^i,∞_s for i = 1,…,n. the last inequality follows from Cauchy-Schwarz inequality. 
Moreover we have 𝔼_x∼π[|μ^n_s(h_x(·)-μ̅^n_s(h_x(·))|^2]^1/2 ≤𝔼_x∼π[|∇ h_x∞/n∑_i=1^n θ^i,n_s - θ^i,∞_s|^2]^1/2 ≤𝔼_x∼π[Ψ^2(x)] ^1/2·1/n∑_i=1^n θ^i,n_s - θ^i,∞_s ≤ Bsup_i≤ nθ^i,n_s - θ^i,∞_s. Plug the estimate above into (<ref>): θ^i,n_t - θ^i,∞_t≤ B(1+B)∫_0^t sup_i≤ nθ^i,n_s - θ^i,∞_s s + B∫_0^t 𝔼_x∼π[|μ̅^n_s(h_x(·)) - [θ^i,∞_s](h_x(·))|^2]^1/2 s By taking the supremum over i=1,…,n and t, and using the fact that sup∫_·(·) ≤∫_·sup(·), we arrive at sup_t≤ Tsup_i≤ nθ^i,n_t - θ^i,∞_t ≤ B(1+B)∫_0^T sup_t≤ ssup_i≤ nθ^i,n_t - θ^i,∞_t s + B∫_0^t 𝔼_x∼π[|μ̅^n_s(h_x(·)) - [θ^i,∞_s](h_x(·))|^2]^1/2 s Let us now estimate |μ̅^n_s(h_x(·)) - [θ^i,∞_s](h_x(·))|^2 | x^1/n, the expectation under the stable diffusion, rather than the expectation over the data distribution, where the 1/√(n) convergence rate comes from. In deed for fixed x, h_x(θ^i,∞_s), i=1,…,n are bounded i.i.d. random variables with mean [θ^i,∞_s](h_x(·)). Therefore |μ̅^n_s(h_x(·)) - [θ^i,∞_s](h_x(·))|^2 | x^1/2 = .|1/n∑_i=1^n h_x(θ^i,∞_s) - [θ^i,∞_s](h_x(·))|^2 | x^1/2 ≤1/√(n)h_x(·)_∞≤Ψ(x)/√(n). Finally, combining (<ref>), (<ref>), the integrability condition Lemma <ref> and using Fubini's theorem, we arrive at sup_r≤ tsup_i≤ nθ^i,n_r - θ^i,∞_r≤ B(1+B)∫_0^t sup_r≤ ssup_i≤ nθ^i,n_r - θ^i,∞_r s + Bt𝔼_x∼π[Ψ(x)]/√(n). We conclude by Gronwall's inequality that sup_t≤ Tsup_i≤ nθ^i,n_t - θ^i,∞_t≤ (1+B)(BT/√(n) + B^2T^2exp(B T(1+𝔼_x∼π[Ψ(x)]))/2√(n)). Then the proof of Theorem <ref> is completed. §.§ Proof of Theorem <ref> Similarly as in the proof above, we have sup_i≤ nθ^i,n_η - θ̂^i,n_1 ≤sup_i≤ n∫_0^η b(θ^i,n_t,μ^i,n_t) - b(θ̂^i,n_0, μ^n) t ≤ B∫_0^ηsup_i≤ nθ^i,n_t - ξ^i,n + 1/n∑_j=1^n θ^j,n_t - ξ^j,n t ≤ B∫_0^η 2b_∞· t + sup_i≤ nL^i,n_t + 1/n∑_j=1^n L^j,n_t t Recall that b_∞≤ B, therefore by taking the expectation and the scaling of the stable process L^i,n_t, sup_i≤ nθ^i,n_η - θ̂^i,n_1 ≤ B ∫_0^η 2Bt + 2t^1/α·sup_i≤ nL^i,n_1 + 1/n∑_j=1^n L^j,n_1 t ≤ B^2η^2+ Bα·sup_i≤ nL^i,n_1 + L^α_1/α+1η^1+1/α. Denote by C' := sup_i≤ nL^i,n_1 + L^α_1, and ψ_t(ξ) the solution of (<ref>) at time t with initial condition ξ∈^nd, which is the concatenation of n vectors ψ^i,n_t(ξ) ∈^d, i=1, …, n. At time T which is a multiple of η, θ^i,n_T - θ̂^i,n_T/η = ∑_k=0^T/η-1ψ^i,n_T-η k(Θ̂^n_k) - ψ^i,n_T-η(k+1)(Θ̂^n_k+1), where Θ̂^n_k is the concatenation of θ̂^i,n_k. Similarly, for each of the term inside the summation above, ψ^i,n_T-η k(Θ̂^n_k) - ψ^i,n_T-η(k+1) (Θ̂^n_k+1) = [ ∫_η k^η (k+1) b^i,n(ψ_t-η k(Θ̂^n_k)) t + L^i,n_t - (θ̂^i,n_k+1-θ̂^i,n_k) ] - ∫_η(k+1)^T (b^i,n(ψ_t-η k(Θ̂^n_k)) - b^i,n(ψ_t-η(k+1)(Θ̂^n_k+1))) t . Note that the first term in the big bracket is the difference of one-step increment started from Θ̂^n_k. Then, it follows from (<ref>) that sup_i≤ n∫_η k^η (k+1) b^i,n(ψ_t(Θ̂^n_k)) t + L^i,n_t - (θ̂^i,n_k+1-θ̂^i,n_k) ≤ B^2η^2+ Bα· C'/α+1η^1+1/α. The second integral term similarly, sup_i≤ nb^i,n(ψ_t-η k(Θ̂^n_k)) - b^i,n(ψ_t-η(k+1)(Θ̂^n_k+1)) ≤ B ·sup_i≤ nψ^i,n_t-η k(Θ̂^n_k)-ψ^i,n_t-η (k+1)(Θ̂^n_k+1) + B/n∑_j=1^n ψ^j,n_t-η k(Θ̂^n_k)-ψ^j,n_t-η (k+1)(Θ̂^n_k+1) If we combine (<ref>), (<ref>), (<ref>): 𝔼[sup_i≤ nψ^j,n_T-η k(Θ̂^n_k)-ψ^j,n_T-η (k+1)(Θ̂^n_k+1)] ≤ B^2η^2+ 2Bα· C'/α+1η^1+1/α + 2B·∫_η(k+1)^T sup_i≤ nψ^i,n_t-η k(Θ̂^n_k)-ψ^i,n_t-η (k+1)(Θ̂^n_k+1) t. Next it follows from Gronwall's inequality that sup_i≤ nψ^j,n_T-η k(Θ̂^n_k)-ψ^j,n_T-η (k+1)(Θ̂^n_k+1)≤exp(2BT)(B^2η^2+ 2Bα· C'/α+1η^1+1/α). Finally, combined with (<ref>), we obtain sup_i≤ nθ^i,n_T - θ̂^i,n_T/η≤ Texp(2BT)(B^2η+ 2Bα· C'/α+1η^1/α). 
Then the result follows by Lemma <ref> that C' = sup_i≤ nL^i,n_1 + L^α_1≤ (8C_α+1) (n^1/α+1). The proof of Theorem <ref> is therefore completed. Take n i.i.d. α-stable random variables X^i (rmk:distributed as L^α_1. there exists C_α >0 such that for t sufficiently large and any i=1,…,n, ℙ[X^i≥ t] ≥ C_α t^-α.) such that exp(it X^i) = exp(-|t|^α) then sup_i≤ n X^i ≤ (8C_α+1)n^1/α It is not hard to see from the condition ℙ[X^i≥ t] ≥ C_α t^-α that ℙ[sup_i≤ nX^i≥ t] = 1 -∏_i=1^n ℙ[X^i <t] ≤ 1- ( 1-C_α t^-α)^n sup_i≤ nX^i =∫_0^∞ℙ[sup_i≤ nX^i≥ t ] dt = ∑_k=0^-∞∫_(n/2^k+1)^1/α^(n/2^k)^1/αℙ[sup_i≤ nX^i≥ t ] dt + ∫_0^n^1/αℙ[sup_i≤ nX^i≥ t ] dt ≤ 2 n^1/α∑_k=0^-∞ℙ[sup_i≤ nX^i≥ (n/2^k+1)^1/α] + n^1/α ≤ 2 C_α n^1/α∑_k=0^-∞ 2^k+1 + n^1/α ≤ (8C_α + 1) n^1/α. The proof of Lemma <ref> is completed. §.§ Proof of Theorem <ref> The best k-term approximation error σ_k(𝐱) of a vector 𝐱 is defined by σ_k(𝐱) = inf_𝐲_0≤ k𝐱 - 𝐲, where 𝐲_0 is the l^0-norm of 𝐲, which counts the non-zero coefficients of 𝐲. Without mentioned explicitly, 𝐱 denotes the square norm of 𝐱. Denote by 𝐰̂^n_t = (θ̂^1,n_⌊ t/η⌋, …, θ̂^n,n_⌊ t/η⌋) and 𝐰^*_t = (θ^1,∞_t, …, θ^n,∞_t), where the components θ^i,∞_t are independent solutions to (<ref>) in Theorem <ref>. Note that the definition of Frobenius matrix norm ·_F gives that Θ̂^{κ n}_⌊ t/η⌋ - Θ̂^n_⌊ t/η⌋_F = σ_⟨κ n ⟩(𝐰̂^n_t), Θ̂^n_⌊ t/η⌋_F = 𝐰^⋆_t, Therefore it suffices to prove Theorem <ref> for 𝐰̂^n_t. It follows from Theorem <ref> and Theorem <ref> that there exists a constant C independent of n such that sup_i≤ nθ̂^i,n_⌊ t/η⌋ - θ^i,∞_t≤C/√(n) Then by Markov's inequality, ℙ[sup_i≤ nθ̂^i,n_⌊ t/η⌋ - θ^i,∞_t > C/ϵ√(n)] ≤ϵ/3. Denote by E the event E := {sup_i≤ nθ̂^i,n_⌊ t/η⌋ - θ^i,∞_t≤C/ϵ√(n)}. If sup_i≤ nθ̂^i,n_⌊ t/η⌋ - θ^i,∞_t≤C/ϵ√(n) and σ_⌊κ n⌋(𝐰̂^n_t)≥ϵ𝐰̂^n_t, then σ_⌊κ n ⌋(𝐰^⋆_t) ≥σ_⌊κ n ⌋(𝐰̂^n_t) - κ nC/ϵ√(n) ≥ϵ𝐰̂^n_t- C√(n)κ/ϵ ≥ϵ(𝐰^⋆_t - C√(n)/ϵ) - C√(n)κ /ϵ = ϵ𝐰^⋆_t - C√(n)(1 + κ/ϵ) Therefore plugging in (<ref>), ℙ[σ_⌊κ n ⌋(𝐰̂^n_t)≥ϵσ_⌊κ n ⌋(𝐰̂^n_t)] ≤ ℙ[ σ_⌊κ n ⌋(𝐰̂^n_t)≥ϵσ_⌊κ n ⌋(𝐰̂^n_t), E^c ] + ℙ[σ_⌊κ n ⌋(𝐰̂^n_t)≥ϵσ_⌊κ n ⌋(𝐰̂^n_t), E ] ≤ ℙ[sup_i≤ nθ̂^i,n_⌊ t/η⌋ - θ^i,∞_t > C/ϵ√(n)] + ℙ[ σ_⌊κ n ⌋(𝐰^⋆_t)≥ϵ𝐰^⋆_t-C√(n)(1+κ/ϵ)] ≤ ϵ/3 + ℙ[ σ_⌊κ n ⌋(𝐰^⋆_t)≥ϵ𝐰^*_t-C√(n)(1+κ/ϵ)] Moreover, there exists N'>0 such that for all n≥ N', ℙ[ σ_⌊κ n ⌋(𝐰^⋆_t)≥ϵ𝐰^*_t-C√(n)(1+κ/ϵ)] ≤ ℙ[ 𝐰^⋆_t≤ 2C√(n)(1+κ/ϵ)] + ℙ[σ_⌊κ n ⌋(𝐰^⋆_t) ≥ϵ/2𝐰^⋆_t] = ℙ[ 1/n𝐰^⋆_t^2 ≤ 4C^2(1+κ/ϵ)^2] + ℙ[σ_⌊κ n ⌋(𝐰^⋆_t) ≥ϵ/2𝐰^⋆_t] ≤ ϵ/3 + ℙ[σ_⌊κ n ⌋(𝐰^⋆_t) ≥ϵ/2𝐰^⋆_t], where the last inequality follows from Lemma <ref>. By the independence of the n coordinates of the vetor 𝐰^⋆_t, Theorem <ref> and [GCD12, Proposition 1, Part 2], there exists N”>0, for all n≥ N”, ℙ[σ_⌊κ n ⌋(𝐰^⋆_t) ≥ϵ/2𝐰^⋆_t] ≤ϵ/3. We conclude the proof by combining (<ref>), (<ref>), (<ref>) and (<ref>). § EXPERIMENTAL DETAILS The code, implemented in PyTorch, takes about 90 hours to run on the MNIST dataset with five different seeds for n=2K, 5K, 10K on a NVIDIA Tesla P100 GPU. With the same system configuration, it takes 5 minutes to run on ECG5000 with Type-I noise; 3 hours with Type II noise and 30 minutes with Type-III noise. The ECG5000 dataset consists of 5000 20-hour long electrocardiograms interpolated by sequences of length 140 to discriminate between normal and abnormal heart beats of a patient that has severe congestive heart failure. After random shuffling, we use 500 sequences for the training phase and 4500 sequences for the test phase. 
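For concreteness, the two main ingredients of these experiments, injecting α-stable noise into SGD and magnitude-based pruning of hidden units, can be sketched as follows. This is only an illustrative sketch: the helper names, the toy dimensions, and the step-size scaling of the noise (here the α-stable increment is scaled by lr^(1/α), mirroring the Euler-type discretization above) are our own simplifying assumptions rather than the exact implementation behind the reported numbers.

import numpy as np
from scipy.stats import levy_stable

def alpha_stable_noise(shape, alpha, rng=None):
    # Symmetric (beta = 0) alpha-stable increments, standing in for the rotation-invariant noise.
    return levy_stable.rvs(alpha, 0.0, size=shape, random_state=rng)

def noisy_sgd_step(W, grad, lr, alpha, noise_scale, rng=None):
    # One SGD step with injected heavy-tailed noise; an alpha-stable increment over a
    # step of size lr scales like lr ** (1 / alpha).
    return W - lr * grad + noise_scale * (lr ** (1.0 / alpha)) * alpha_stable_noise(W.shape, alpha, rng)

def prune_hidden_units(W_in, kappa):
    # Magnitude pruning: keep the ceil(kappa * n) hidden units (rows of the input-to-hidden
    # weight matrix) with the largest Euclidean norm and zero out the rest.
    n = W_in.shape[0]
    keep = int(np.ceil(kappa * n))
    pruned = np.zeros_like(W_in)
    top = np.argsort(np.linalg.norm(W_in, axis=1))[-keep:]
    pruned[top] = W_in[top]
    return pruned

# Toy usage with a small hidden layer; in the experiments n ranges over 2K, 5K and 10K.
rng = np.random.default_rng(0)
W = 0.01 * rng.normal(size=(1000, 140))         # e.g., 140 inputs as in ECG5000
grad = 0.01 * rng.normal(size=W.shape)          # stand-in for a mini-batch gradient
W = noisy_sgd_step(W, grad, lr=1e-3, alpha=1.8, noise_scale=0.1, rng=rng)
W_pruned = prune_hidden_units(W, kappa=0.1)     # keep 10% of the hidden units

The prunability tests below then re-evaluate train and test accuracy with the pruned weights in place of the original ones, over a range of pruning ratios.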
The hyperparameters used in the ECG5000 classification experiments are summarized in Table <ref> and Table <ref> that follow. The MNIST database of handwritten digits consists of a training set of 60,000 examples and a test set of 10,000 examples of dimension 784. The hyperparameters used in the MNIST classification experiments until 95% training accuracy are specified in Table <ref>. § ADDITIONAL EXPERIMENTS   In this section, we provide additional experiments for the classification task with the one hidden-layer neural network trained using the ECG5000 and MNIST datasets. We conducted prunability tests further for various values of the number of neurons n, the index α and the noise type. §.§ Further results for the ECG5000 classification For the classification of the ECG5000 dataset using the one hidden-layer neural network, we conducted prunability tests for the following values of parameters: number of neurons, n=2K, 5K and 10K, index α=1.75, 1.8 and 1.9 and noises Type-I, II and III. The results, which complement those in Tables  <ref>, <ref>, <ref>, <ref>, are reported in Tables <ref>, <ref>, <ref>, <ref>, <ref> that follow. §.§ Further results for the MNIST classification For the classification of the MNIST dataset using the one hidden-layer neural network, we conducted prunability tests for the following values of parameters: number of neurons, n=2K, 5K and 10K, index α=1.75, 1.8 and 1.9 and noises Type-I, II and III. The results for smaller n=2K and 5K are reported in Tables <ref> and <ref> that follow; and they complement those for n=10K in Table <ref> in the main body of the text. In Tables <ref>, <ref> and <ref> we report train and test accuracies that are obtained in the following way: (i) the one-hidden-layer neural network is first trained on the MNIST dataset using vanilla SGD, i.e., SGD with no noise injection; (ii) then we perform pruning with the same pruning ratios as those given in Tables <ref> and <ref>; and (iii) finally we evaluate the accuracy after pruning on both train and test sets of the used MNIST dataset. It can be observed that the larger the value of n (the size of the neural network), the less compressible is the neural network trained with vanilla SGD, especially for n=10K. In the latter case the test accuracy of the neural network trained using vanilla SGD drops down to 92%, while the noising scheme achieves 94% of test accuracy with the same pruning ratio, as can be seen from Table <ref>. §.§ Effect of heavy-tailed noise injection during SGD on the performance after pruning Table <ref> reports accuracy results for the two-layer neural network trained on the CIFAR10 dataset with heavy-tailed SGD (α=1.8), for various levels of the variance of the added Type-I noise. In this case, the effect of pruning seems to require a larger value of n to start to be visible – the results reported in the table, which suggest that the noise injection may have a non-negligible effect on the train/test accuracy after pruning especially for large values of the noise variance, are obtained with relatively small n=5K for CIFAR10 dataset with samples of dimension 3072. § IMPLICATIONS ON FEDERATED LEARNING The federated learning (FL) setting <cit.> is one in which there are a number of devices or clients, say n; all equipped with the same neural network model and each holding an independent own dataset. Every client learns an individual (or local) model from its own dataset, e.g., via Stochastic Gradient Descent (SGD). 
The individual models are aggregated by a parameter server (PS) into a global model and then sent back to the devices, possibly over multiple rounds of communication between them. The rationale is that the individually learned models are refined progressively by taking into account the data held by other devices; at the end of the training process, all relevant features of all devices' datasets are captured by the final aggregated model. The results of this paper are useful towards a better understanding of the compressibility of the models learned by the various clients in this FL setting. Specifically, viewing each neuron of the hidden layer in the setup of this paper as if it were a distinct client, our results suggest that learning the local models via heavy-tailed SGD would make them more compressible. This is particularly useful for resource-constrained applications of FL, such as telecommunication networks, where bandwidth is scarce and latency is important.
http://arxiv.org/abs/2306.03874v1
20230606172121
Embracing Background Knowledge in the Analysis of Actual Causality: An Answer Set Programming Approach
[ "Michael Gelfond", "Jorge Fandinno", "Evgenii Balai" ]
cs.AI
[ "cs.AI", "cs.LO" ]
examplethmExample definitionDefinition example arrows,positioning,automata,fit not cm d March 2003 2003 firstpage–lastpage S1471068401001193 Embracing Background Knowledge in the Analysis of Actual Causality]Embracing Background Knowledge in the Analysis of Actual Causality: An Answer Set Programming Approach Michael Gelfond, Jorge Fandinno and Evgenii Balai] Michael Gelfond Texas Tech University Jorge Fandinno University of Nebraska at Omaha Evgenii Balai Texas Tech University 5 February 2003 6 April 2023 11 May 2023 [ [ July 31, 2023 ================= This paper presents a rich knowledge representation language aimed at formalizing causal knowledge. This language is used for accurately and directly formalizing common benchmark examples from the literature of actual causality. A definition of cause is presented and used to analyze the actual causes of changes with respect to sequences of actions representing those examples. Answer Set Programming, Causality, Knowledge Representation § INTRODUCTION This work is a part of larger research program, originated by John McCarthy and others in the late fifties. The program is aimed at the development of Knowledge Representation (KR) languages capable of clear and succinct formalization of commonsense knowledge. In this paper we concentrate on a long standing problem of giving a formal account of the notion of actual causality. Despite significant amount of work in this area the problem remains unsolved. We believe that the difficulty is related to insufficient attentions paid to relevant commonsense background knowledge. To analyze causal relations involved in a sequence of events happening even in comparatively simple domains, we need to be able to represent sophisticated causal laws, time, defaults and their exceptions, recursive definitions, and other non-trivial phenomena of natural language. To the best of our knowledge none of the KR-languages used in previous works are capable of representing all of these phenomena. We propose to remedy this problem by analyzing causality in the context of a new rich KR-language 𝒲 based on the ideas from Answer Set Prolog (ASP), Theories of Action and Change (TAC), and Pearl's do-operator . The language is used to define several causal relations capable of accurate analysis of a number of examples which could not have been properly analyzed by the previous approaches. Special emphasis in our approach is given to accuracy and elaboration tolerance <cit.> of translations of English texts into theories of 𝒲. This is facilitated by the well developed methodology of such translations in ASP and TAC. These issues were not typically addressed in work on causality, but they are essential from the standpoint of KR. We focus on the suitability of 𝒲 for causal analysis, illustrated by its application to well-known benchmarks from the literature. The paper is organized as follows. In the next section we motivate the need for a richer KR-language by analyzing such benchmarks. After that, we introduce causal theories of 𝒲 and a methodology for formalizing natural language stories. This is illustrated on these benchmarks. Special care is taken of obtaining accurate and direct translation from natural language sentences, and the elaboration tolerance of the representation. The later is obtained by a clear separation between background commonsense knowledge (formalized in a background theory) and the particular story (formalized as a sequence of events that we call scenario). 
Finally, we introduce our definition of cause and discuss several variations of the benchmark examples. This definition provides answers that match our intuition. Note that, since 𝒲 is a powerful action language based on ASP it can also be used for reasoning about temporal prediction, planning, etc. Due to space limitation the paper does not demonstrate the full power of 𝒲 and the full variety of its causal relations. This will be done in a longer version of the paper. § MOTIVATING EXAMPLES In this section, we discuss two problematic benchmarks from the literature and provide their causal description based on KR perspective. We start by considering the Suzy First example introduced by citehall04a and extensively discussed in the literature. The following reading is by citehalpea01a. Suzy and Billy both pick up rocks and throw them at a bottle. Suzy's rock gets there first, shattering the bottle. Since both throws are perfectly accurate, Billy's would have shattered the bottle had it not been preempted by Suzy's throw. Common sense suggests that Suzy's throw is the cause of the shattering, but Billy's is not. Time and actions are essential features of this example. The reasoning leading to Suzy's throw being regarded as the cause of the bottle directly points to the sentence “Suzy's rock gets there first, shattering the bottle.” Had Billy's throw got there first, we would have concluded that Billy's throw was the cause. Despite the importance of time in this example, most approaches do not explicitly represent time. As a result, the fact that “Suzy's rock gets there first,” which naturally is part of the particular scenario, is represented as part of background knowledge <cit.>. This means that a small change in the scenario such as replacing “Suzy's rock gets there first, shattering the bottle” by “Billy's rock gets there first, shattering the bottle” or “Suzy's rock gets there first, but her throw was too weak to shatter the bottle” requires a complete change of the formal model of the domain instead of a small change to the scenario. This is a problem of elaboration tolerance. Several approaches addressed the lack of representation of time by introducing features from the area of Reasoning about Actions and Change. Approaches in the context of the Situation Calculus <cit.> and Logic Programming <cit.> allow us to reason about actual causes with respect to different sequences of actions, where the order of these actions matter. For instance, citecabfan16a explicitly represent a variation of this example where “Suzy's rock gets there first” is replaced by “Suzy throws first.” The model associated with this example is represented by the following rules: 𝑏𝑟𝑜𝑘𝑒𝑛(I) ←𝑡ℎ𝑟𝑜𝑤(A,I-1), 𝑛𝑜𝑡 𝑏𝑟𝑜𝑘𝑒𝑛(I-1) 𝑏𝑟𝑜𝑘𝑒𝑛(I) ←𝑏𝑟𝑜𝑘𝑒𝑛(I-1), 𝑛𝑜𝑡 𝑏𝑟𝑜𝑘𝑒𝑛(I) 𝑏𝑟𝑜𝑘𝑒𝑛(I) ←𝑏𝑟𝑜𝑘𝑒𝑛(I-1), 𝑛𝑜𝑡 𝑏𝑟𝑜𝑘𝑒𝑛(I) This can be used together with facts: 𝑏𝑟𝑜𝑘𝑒𝑛(0), 𝑡ℎ𝑟𝑜𝑤(𝑠𝑢𝑧𝑦,0),𝑡ℎ𝑟𝑜𝑤(𝑏𝑖𝑙𝑙𝑦,1) representing the particular scenario. We can represent an alternative story where “Billy throws first” by replacing the last two facts by 𝑡ℎ𝑟𝑜𝑤(𝑏𝑖𝑙𝑙𝑦,0) and 𝑡ℎ𝑟𝑜𝑤(𝑠𝑢𝑧𝑦,1). Clearly, this constitutes a separation between model and scenario because we do not need to modify the rules that represent the model of the domain to accommodate the new scenario. We go a step further and show how to represent the fact that “Suzy's rock gets there first” independently of who throws first. The rock may get there first because Suzy throws first, because she was closer, etc. 
The reason why her rock gets there first is not stated in the example and it is unnecessary to determine the cause of the shattering. We are able to do that thanks to the introduction of abstract time-steps in our language, a feature missing in all previously discussed approaches. As a second example, consider the Engineer scenario introduced by citehall00a. An engineer is standing by a switch in the railroad tracks. A train approaches in the distance. She flips the switch, so that the train travels down the right-hand track, instead of the left. Since the tracks reconverge up ahead, the train arrives at its destination all the same; let us further suppose that the time and manner of its arrival are exactly as they would have been, had she not flipped the switch. It is commonly discussed whether flipping the switch should be (part of) the cause of the train arriving at its destination <cit.>. Normally these solutions are not elaboration tolerant. For instance, adding a neutral switch position or a third route that does not reconverge, requires a different model or leads to completely different answers. § CAUSAL THEORY This section introduces a simplified version of knowledge representation language 𝒲, which is used for the analysis of basic causal relations. Formally, 𝒲 is a subset of P-log <cit.> expanded by a simple form of constraints with signature tailored toward reasoning about change. Theories of 𝒲 are called causal. A causal theory consists of a background theory 𝒯 representing general knowledge about the agent's domain and domain scenario 𝒮 containing the record of deliberate actions performed by the agents. A sorted signature Σ of 𝒯, referred to as causal, consists of sorts, object constants, and function symbols. Each object constant comes together with its sort; each function symbol – with sorts of its parameters and values. In addition to domain specific sorts and predefined sorts such as Boolean, integer, etc., a causal signature includes sorts for time-steps, fluents, actions, and statics. Fluents are divided into inertial, transient and time-independent. An inertial fluent can only change its value as a result of an action. Otherwise the value remains unchanged. The default value of a transient fluent is undefined. A time-independent fluent does not depend on time. But, different from a static, it may change its value after the scenario is expanded by new information. The value sort of actions is Boolean. Terms of Σ are defined as usual. Let e be a function symbol, t̅ be a sequence of ground terms not containing time-steps, i be a time-step, and y ∈𝑟𝑎𝑛𝑔𝑒(e). A ground atom of Σ is an expression of one of the forms e(t̅,i)=y e(t̅,i)≠y[As in citebalduccini2012answer and citeBalaiGZ19, f(x̅) ≠ y holds if f(x̅)=z such that z ≠ y. ] where e is an action, a fluent or a static. If e does not depend on time, i will be omitted. If e is a Boolean fluent then e(t̅) = ⊤ (resp. e(t̅) =) is sometimes written as e(i) (resp. e(i)). Atoms formed by actions are called action atoms. Similarly for statics, fluents, etc. Action atom a(i) may be written as occurs(a,i). The main construct used to form background theories of 𝒲 is causal mechanism (or causal law) – a rule of the form: m : e(t̅,I)=y ← body, 𝑎𝑏(m,I) where e is a non-static, I ranges over time-steps, m is the unique name of this causal mechanism, body is a set of atoms of Σ and arithmetic atoms of the form N ≺ AE where N is a variable or a natural number and AE is an arithmetic function built from +, -, ×, etc., and ≺ is =, >, or ≥. 
Special Boolean function 𝑎𝑏(m,I) is used to capture exceptions to application of causal mechanism m at step I. As usual in logic programming we view causal mechanisms with variables as sets of their ground instances obtained by replacing variables by their possible values and evaluating the remaining arithmetic terms. If e in rule (<ref>) is an action we refer to m as a trigger. A causal mechanism of the form (<ref>) says that “at time-step I, body activates causal mechanism m which, unless otherwise specified, sets the value of e to y”. cm1 To conform to this reading we need to enforce a broadly shared principle of causality: “the cause must precede its effect”. Our version of this principle is given by the following requirement: For every ground instance of causal mechanism such that i is a time-step occurring in its head and j is a time-step occurring in its body, the following two conditions are satisfied: * j < i if j occurs within an action atom; and * j ≤ i, otherwise. A scenario of background theory T with signature Σ is a collection of static and arithmetic atoms together with expressions of the form: * init(f=y) – the initial value of inertial fluent f is y; * do(a,i) – an agent deliberately executes action a at time-step i; * do( a,i) – an agent deliberately refrains from executing action a at i; * obs(f,y,i) – the value of f at time-step i is observed to be y; We refer to these expressions as extended atoms of Σ; a set of extended atoms of the form init(f=y), init(g=z)… will be written as init(f=y, g=z, …). We assume that the sort for time-steps consists of all natural numbers and symbolic constants we refer to as abstract time-steps. Atoms, extended atoms and scenarios where all object constants of the sort for time-steps are natural numbers are called concrete; those that contain abstract time-steps are called abstract. The story of Suzy First (Example <ref>) can be represented in 𝒲 by a background theory 𝒯_𝑓𝑠𝑡 which contains a sub-sort 𝑡ℎ𝑟𝑜𝑤 of actions, inertial fluent 𝑏𝑟𝑜𝑘𝑒𝑛, statics 𝑚𝑒𝑚𝑏𝑒𝑟, 𝑎𝑔𝑒𝑛𝑡, and 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛 and causal mechanism [ m_0(A) : 𝑏𝑟𝑜𝑘𝑒𝑛(I) ← 𝑜𝑐𝑐𝑢𝑟𝑠(A,I-D),𝑚𝑒𝑚𝑏𝑒𝑟(A,throw),; 𝑎𝑔𝑒𝑛𝑡(A) = Ag,𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(A)=D,; 𝑏𝑟𝑜𝑘𝑒𝑛(I-1),𝑎𝑏(m_0(A),I) ] The theory will be used together with an abstract scenario 𝒮_𝑠𝑢𝑧𝑦 which includes actions a_1 and a_2 of the sort 𝑡ℎ𝑟𝑜𝑤 and atoms init( broken), do(a_1,t_1), do(a_2,t_2), t_1 +𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) < t_2 +𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) where t_1 and t_2 are abstract time-steps. The last (arithmetic) atom represents the fact that Suzy's stone arrives first. Actions of 𝒮_𝑠𝑢𝑧𝑦 are described by statics 𝑎𝑔𝑒𝑛𝑡(a_1)=suzy 𝑚𝑒𝑚𝑏𝑒𝑟(a_1,throw) 𝑎𝑔𝑒𝑛𝑡(a_2)=billy 𝑚𝑒𝑚𝑏𝑒𝑟(a_2,throw) and arithmetic atoms 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1)≥ 1, 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) ≥ 1. Here, and in other places f(t̅) ≥ y is understood as a shorthand for f(t̅) = d and d ≥ y, where d is a fresh abstract constant. Similarly for >, = and ≠. To save space, we omit executability conditions for causal mechanisms. Note that causal mechanism m_0(A) is a general commonsense law which is not specific to this particular story. This kind of general commonsense knowledge can be compiled into a background library and retrieved when necessary <cit.>. The same applies to all the other causal mechanisms for variations of this example discussed in the paper. Note that we explicitly represent the temporal relation among time-steps and make no further assumptions about the causal relation among the rocks. The definition of cause introduced below is able to conclude that Suzy's rock is the cause of breaking the bottle. 
This is a distinguishing feature of our approach. Representing that Billy's stone arrives first is obtained simply by replacing the corresponding constraint in 𝒮_𝑠𝑢𝑧𝑦 by  t_1 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) > t_2 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2). The Engineer story (Example <ref>) can be represented by a background theory 𝒯_𝑒𝑛𝑔 containing causal mechanisms in Figure <ref>. The arrival of the train is modeled by a time-independent fluent 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑝𝑜𝑖𝑛𝑡). Action 𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ of m_2 causes the train to arrive at the fork after the amount of time determined by static 𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘 (note that since m_1 can fail to cause arrived(fork), this atom cannot be removed from m_2). The switch is controlled by action 𝑓𝑙𝑖𝑝𝑇𝑜 which takes one unit of time. This action can change the switch to any of its three positions: 𝑛𝑒𝑢𝑡𝑟𝑎𝑙, 𝑙𝑒𝑓𝑡, and 𝑟𝑖𝑔ℎ𝑡. Static 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑡𝑟𝑎𝑐𝑘) determines the time it takes the train to traverse the distance between the fork and the train's destination depending on the 𝑡𝑟𝑎𝑐𝑘 taken. When the switch is in the 𝑛𝑒𝑢𝑡𝑟𝑎𝑙 position, the train does not arrive at its destination. Inertial fluent 𝑠𝑤𝑖𝑡𝑐ℎ represents the position of the switch. The times to travel between two points must obey the following constraints included in scenario 𝒮_eng: 𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘≥ 1 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑙𝑒𝑓𝑡) ≥ 1 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑟𝑖𝑔ℎ𝑡) ≥ 1 The rest of the scenario 𝒮_eng consists of the following atoms init(𝑠𝑤𝑖𝑡𝑐ℎ=left) do(𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ,t_3) do(𝑓𝑙𝑖𝑝𝑇𝑜(𝑟𝑖𝑔ℎ𝑡),t_4) 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑙𝑒𝑓𝑡) = 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑟𝑖𝑔ℎ𝑡) We make no assumptions regarding the order in which actions 𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ and 𝑓𝑙𝑖𝑝𝑇𝑜 occur. We can easily modify the scenario to accommodate a variation of the story where traveling down the right-hand track is faster than over the left one by replacing the last arithmetic atom by 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑙𝑒𝑓𝑡) > 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑟𝑖𝑔ℎ𝑡). A causal theory 𝒯(𝒮) is a pair where 𝒯 is a background theory and 𝒮 is a scenario. We identify each causal theory 𝒯(𝒮) with the logic program that consists of causal mechanisms without their labels, all atoms in 𝒮 as facts and the following general axioms: 𝑑𝑒𝑓(f(X̅)) ← f(X̅) = Y ← f(X̅) ≠ Y, 𝑑𝑒𝑓(f(X̅)) f(X̅) ≠ Y ← f(X̅) = Z, Z ≠ Y for every function symbol f, 𝑎𝑏(m,I) ←𝑎𝑏(m,I) for every causal mechanism m, f(0)=y ← init(f=y) f(I)=Y ← f(I - 1)=Y, f(I)≠ Y f(I) ≠ Y ← f(I - 1) ≠ Y, f(I) = Y for every inertial fluent f, a(I) ← do(a,I) a(I) ← do( a,I) a(I) ← a(I) a(I) 𝑎𝑏(m,I) ← do(a = v,I) for every action a, Boolean value v and causal law m with head a(I) = w and v ≠ w. ← obs(f(X̅),Y,I), not f(X̅,I) = Y Axioms (<ref>), (<ref>), and (<ref>) reflect the reading of relation ≠. Axiom (<ref>) ensures that causal mechanisms are defeasible. Axiom (<ref>) ensures that fluents at the initial situation take the value described in the scenario. Axioms (<ref>-<ref>) are the inertia axioms, stating that inertial fluents normally keep their values. Axiom (<ref>) ensures that the actions occur as described in the scenario. Axiom (<ref>) states the close world assumption for actions. Axiom (<ref>) is a cr-rule <cit.> which allows indirect exceptions to (<ref>). Intuitively, it says that a(I) may be true, but such a possibility is very rare and, whenever possible, should be ignored. Axiom (<ref>) ensures that deliberate actions overrule the default behavior of contradicting causal mechanisms (See Example <ref> below for more details). Axiom (<ref>) ensures that observations are satisfied in the model. Note, that if 𝒯(𝒮) contains occurrences of abstract time-steps then its grounding may still have occurrences of arithmetic operations. 
(If d is an abstract time-step then, say, d+1 > 5 will be unchanged by the grounding). The standard definition of answer set is not applicable in this case. The following modification will be used to define the meaning of programs with abstract time-steps. Let γ be a mapping of abstract time-steps into the natural numbers and 𝒯(𝒮) be a program not containing variables. By 𝒯(𝒮)|_γ we denote the result of (a) applying γ to abstract time-steps from 𝒯(𝒮), (b) replacing arithmetic expressions by their values, (c) removing rules containing false arithmetic atoms. Condition (c) is needed to avoid violation of principle of causality by useless rules. By an answer set of 𝒯(𝒮) we mean an answer set of 𝒯(𝒮)|_γ for some γ. If 𝒯(𝒮)|_γ is consistent, i.e., has an answer set then γ is called an interpretation of 𝒯(𝒮). 𝒯(𝒮) is called consistent if it has at least one interpretation. If 𝒯(𝒮) is consistent and for each interpretation γ of 𝒯(𝒮), 𝒯(𝒮)|_γ has exactly one answer set then 𝒯(𝒮) is called deterministic. In this paper we limit ourselves to deterministic causal theories. To illustrate our representation of triggers, parallel actions, and the defeasibility of causal laws, we introduce the following variation of Suzy First (Example <ref>). Suzy and Billy throw rocks by the order of a stronger girl. Suzy's rock gets there first. The effects of orders are described by the causal mechanism: [ m_6(A,T,B) : 𝑜𝑐𝑐𝑢𝑟𝑠(A,I) ← 𝑚𝑒𝑚𝑏𝑒𝑟(B,𝑜𝑟𝑑𝑒𝑟), 𝑜𝑐𝑐𝑢𝑟𝑠(B,T),; 𝑤ℎ𝑎𝑡(B) = A, 𝑤ℎ𝑒𝑛(B)=I,; I > T, 𝑎𝑏(m_6(A,T,B),I). ] The scenario 𝒮_order is obtained from 𝒮_suzy by adding new actions, b_1 and b_2, of the sort 𝑜𝑟𝑑𝑒𝑟 described by statics [ 𝑤ℎ𝑎𝑡(b_1)=a_1 𝑤ℎ𝑒𝑛(b_1) = t_1 𝑤ℎ𝑎𝑡(b_2) = a_2 𝑤ℎ𝑒𝑛(b_2) = t_2 ] and new constraints t_1>0, t_2>0, and replacing its extended atoms by init( broken), do(b_1,0), do(b_2,0). For any interpretation γ, in the unique answer set of 𝒯(𝒮_order)|_γ atom 𝑏𝑟𝑜𝑘𝑒𝑛 becomes true at time-step [ 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) stands for 𝑑 where duration(a_1)=d. ]  γ(t_1)+γ(𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1)). For the sake of simplicity, we assume that orders are given at time-step 0, but in general we would use two abstract time-steps. The example illustrates representation of triggers and parallel actions. To illustrate defeasibility, let us consider a scenario where both Suzy and Billy refuse to follow the order. This can be formalized as scenario 𝒮_order2 obtained from 𝒮_order by adding the extended atoms do( a_1,t_1) and do( a_2,t_2). Due to axioms (<ref>), causal mechanisms m_6(a_1,0,b_1) and m_6(a_2,0,b_2) do not fire, and 𝑏𝑟𝑜𝑘𝑒𝑛 never becomes true. § CAUSE OF CHANGE In this section, we describe our notion of cause of change. We start with scenarios not containing observations. We say that a ground atom e(t̅,k) = y is a change in 𝒯(𝒮)|_γ if the unique answer set M of 𝒯(𝒮)|_γ satisfies e(t̅,k) = y and one of the following conditions holds: * e is inertial and e(t̅,k-1) is either undefined in M or M satisfies e(t̅,k-1) = z with z ≠ y; * e is an action or a transient or time-independent fluent. The definition of cause of change relies on the definition of tight proof that we introduce next. By [P]_i, we denote the sequence consisting of the first i elements of sequence P. By 𝑎𝑡𝑜𝑚𝑠(P) we denote the atoms occurring in P. A proof of a set 𝒰 of ground atoms in 𝒯(𝒮)|_γ is a sequence P of atoms in the unique answer set M of 𝒯(𝒮)|_γ and rules of the ground logic program associated with 𝒯(𝒮)|_γ satisfying the following conditions: * P contains all the atoms in 𝒰. 
* Each element x_i of P is one of the following: * a rule whose body is satisfied by the set atoms([P]_i) ∪{ l : l ∉M}, or * an axiom, i.e., a do-atom or a static from 𝒮|_γ, or * the head of some rule from [P]_i. * No proper subsequence [A sequence obtained from P by removing some of its elements.] of P satisfies the above conditions. Let us consider the Engineer story (Example <ref>) and an interpretation γ of the abstract theory 𝒯_eng(𝒮_eng), that is, a function mapping 𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘, 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑙𝑒𝑓𝑡) and 𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑟𝑖𝑔ℎ𝑡) to natural numbers such that γ(𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑙𝑒𝑓𝑡)) = γ(𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑟𝑖𝑔ℎ𝑡)). For instance, an interpretation γ satisfying γ(t_3) = 0, γ(t_4) = 1, γ(𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘) = 3 and γ(𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑙𝑒𝑓𝑡)) = γ(𝑡𝑖𝑚𝑒2𝑑𝑒𝑠𝑡(𝑟𝑖𝑔ℎ𝑡)) = 5. The unique answer set of 𝒯_eng(𝒮_eng)|_γ contains, among others, atoms 𝑠𝑤𝑖𝑡𝑐ℎ(0) ≠𝑛𝑒𝑢𝑡𝑟𝑎𝑙, …, 𝑠𝑤𝑖𝑡𝑐ℎ(3) ≠𝑛𝑒𝑢𝑡𝑟𝑎𝑙 𝑎𝑟𝑟𝑖𝑣𝑇𝑖𝑚𝑒(𝑓𝑜𝑟𝑘) = 3, 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡) Since 𝑎𝑟𝑟𝑖𝑣𝑒𝑑 is a time-independent fluent, we can conclude that 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡) is a change in this concrete causal theory. In general, we can check that, for any interpretation γ of the abstract theory 𝒯_eng(𝒮_eng) where the switch is flipped before the train arrives to the fork, i.e. satisfying γ(t_4) < γ(t_3) + γ(𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘), the unique answer set of 𝒯_eng(𝒮_eng)|_γ contains atoms 𝑠𝑤𝑖𝑡𝑐ℎ(0) ≠𝑛𝑒𝑢𝑡𝑟𝑎𝑙, …, 𝑠𝑤𝑖𝑡𝑐ℎ(n_1)≠𝑛𝑒𝑢𝑡𝑟𝑎𝑙 𝑎𝑟𝑟𝑖𝑣𝑇𝑖𝑚𝑒(𝑓𝑜𝑟𝑘) = n_1, 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡) where n_1 = γ(t_3) + γ(𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘) is a natural number corresponding to the arrival time of the train to the switch. We can then conclude that 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡) is a change in this causal theory for any such interpretation γ. Figure <ref> depicts (condensed versions) of the two proofs in this scenario for any such interpretation γ. In P_1, we reach the conclusion that the switch is not in the neutral position by inertia. In P_2, the same conclusion is the result of the flipping the switch to the 𝑟𝑖𝑔ℎ𝑡. Both are valid derivations of the change 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡). However, to infer the causes of an event we give preference to proofs using inertia over those using extra causal mechanisms. This idea is formalized in the following notion of tight proof. Let P_1 and P_2 be proofs of change e(t̅,k)=y in 𝒯(𝒮)|_γ. P_1 is (causally) tighter than P_2 if every causal mechanism of P_1 belongs to P_2 but not vice-versa. Proof P of e(t̅,k)=y in 𝒯(𝒮)|_γ is tight if there is no proof of e(t̅,k)=y in 𝒯(𝒮)|_γ that is tighter than P. Clearly, proof P_1 from our running example is tighter than P_2; causal mechanisms of P_1 are m_1, m_2 and m_4, while P_2 contains the additional causal mechanism m_3(right). Given a numeric time-step i and an atom e(t̅,k)=y in 𝒯(𝒮)|_γ, a causal chain Ch(i) from i to e(t̅,k)=y is a sequence a_1,…,a_n, C_1,…,C_m, e(t̅,k) = y of atoms and ground causal mechanisms of 𝒯(𝒮)|_γ with n ≥ 1 and m ≥ 0 such that there is a tight proof P of e(t̅,k)=y in 𝒯(𝒮)|_γ satisfying the following conditions: * a_1 is a do-atom from P with time step i, * a_2,… a_n are all other do-atoms from P with time-steps greater than or equal to i, and * C_1,…,C_m are all causal mechanisms of P with time-steps greater than i. Let us introduce some terminology. We say that Ch(i) is generated from the proof P above. If e(t̅,k)=y is a change, we say that causal chain from i to e(t̅,k)=y in 𝒯(𝒮)|_γ leads to change e(t̅,k)=y. A causal chain is initiated by the set of all its do-atoms. Two proofs of a set of ground atoms 𝒰 are equivalent if they differ only by the order of their elements. Two chains are equivalent if they are generated from equivalent proofs. 
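Since tightness depends only on the causal mechanisms occurring in a proof, checking it is straightforward. The following minimal sketch assumes a simplified representation in which a proof is just a list of labels and causal-mechanism names start with 'm_'; it is an illustration of the definition, not part of the formal machinery of 𝒲:

def mechanisms(proof):
    # Causal-mechanism labels occurring in a proof; atoms and axioms (do-atoms, statics) are ignored.
    return {step for step in proof if step.startswith("m_")}

def tighter(p1, p2):
    # p1 is (causally) tighter than p2 iff every mechanism of p1 belongs to p2 but not
    # vice versa, i.e., the mechanism set of p1 is a proper subset of that of p2.
    return mechanisms(p1) < mechanisms(p2)

def tight_proofs(proofs):
    # Keep the proofs for which no tighter proof exists among the given ones.
    return [p for p in proofs if not any(tighter(q, p) for q in proofs)]

# The two proofs of arrived(dest) in the Engineer scenario discussed above:
P1 = ["do(approach,t3)", "m_1", "m_2", "m_4", "arrived(dest)"]
P2 = ["do(approach,t3)", "do(flipTo(right),t4)", "m_1", "m_2", "m_3(right)", "m_4", "arrived(dest)"]
print(tighter(P1, P2))          # True: P1 uses m_1, m_2, m_4, a strict subset of P2's mechanisms
print(tight_proofs([P1, P2]))   # only P1 survives, so only it generates causal chains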
Continuing with our running example, sequence do(𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ,γ(t_3)), m_1, m_2, m_4, 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡) is a causal chain in this scenario that leads to change 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡), and it is generated by proof P_1 in Figure <ref>. However, sequence do(𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ,γ(t_3)), do(𝑓𝑙𝑖𝑝𝑇𝑜(𝑟𝑖𝑔ℎ𝑡),γ(t_4)), m_1, m_2, m_3(right), m_4, 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡 ) corresponding to proof P_2 is not causal chain because P_2 is not a tight proof. Given causal chains Ch(i) and Ch(j) to e(t̅,k)=y, we say that Ch(i) is more informative than Ch(j) if i < j and Ch(i) contains all elements of Ch(j). A time-step i is called a candidate inflection point of change e(t̅,k) = y in 𝒯(𝒮)|_γ if it satisfies the following conditions: (a) There is a causal chain from i to e(t̅,k)=y in 𝒯(𝒮[i])|_γ, and (b) There is a causal chain from i to e(t̅,k)=y in 𝒯(𝒮)|_γ where 𝒮[i] is the scenario obtained from 𝒮 by removing all do-atoms after i. A candidate inflection point i is called an inflection point of e(t̅,k) = y in 𝒯(𝒮)|_γ if there is a causal chain Ch(i) from i to e(t̅,k) = y in 𝒯(𝒮)|_γ such that there is no candidate inflection point j and causal chain Ch(j) from j to e(t̅,k)=y in 𝒯(𝒮)|_γ which is more informative than Ch(i). Note that a scenario can have more than one inflection point (see Example <ref>). A non-empty set α of do-atoms is called a (deliberate) cause of change e(t̅,k)=y in 𝒯(𝒮)|_γ if there is an inflection point i of e(t̅,k)=y in 𝒯(𝒮)|_γ such that α initiates a causal chain in 𝒯(𝒮)|_γ from i to e(t̅,k)=y. It is said to be (deliberate) cause of change e(t̅,k)=y in 𝒯(𝒮) if it is a cause of change e(γ(k))=y in 𝒯(𝒮)|_γ for every interpretation γ of 𝒯(𝒮). Following with the Engineer example (Example <ref>), let us consider scenario 𝒮_eng. Since this is an abstract scenario, to answer questions about the cause of change we have to consider all interpretations of this scenario. We proceed by cases. Let us first consider an interpretation γ satisfying condition γ(t_4) < γ(t_3) + γ(𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘). As we discussed above, (<ref>) is the only causal chain leading to change 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡). Furthermore, we can see that this is also a causal chain from γ(t_3) to this change in 𝒯(𝒮[γ(t_3)])|_γ. Therefore, time-step γ(t_3) is the unique candidate inflection point of change 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡) and, thus, it is the unique inflection point as well. As a result, singleton set { do(𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ,γ(t_3))} is the unique cause of this change with respect to any such γ. Let us now consider an interpretation γ satisfying γ(t_4) ≥γ(t_3) + γ(𝑡𝑖𝑚𝑒2𝑓𝑜𝑟𝑘). In this case 𝑎𝑟𝑟𝑖𝑣𝑒𝑑(𝑑𝑒𝑠𝑡) is still a change and P_1 is the only proof of this change. Hence, { do(𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ,γ(t_3))} is also the unique cause of this change with respect to any such γ. Consequently, { do(𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ,t_3)} is the unique cause of this change in this story. Let us now consider Suzy First story (Example <ref>). The unique answer set of 𝒯_𝑓𝑠𝑡(𝒮_𝑠𝑢𝑧𝑦)|_γ contains atoms do(a_1,(γ(t_1)), do(a_2,γ(t_2)), 𝑏𝑟𝑜𝑘𝑒𝑛(0), …, 𝑏𝑟𝑜𝑘𝑒𝑛(n_4-1), 𝑏𝑟𝑜𝑘𝑒𝑛(n_4) with n_4 = γ(t_1)+γ(𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1)) being a positive integer representing the arriving time-step of Suzy's rock. This means that 𝑏𝑟𝑜𝑘𝑒𝑛(n_4) is a change. There is only one causal chain leading to this change: do(a_1,γ(t_1)), m_0(a_1), 𝑏𝑟𝑜𝑘𝑒𝑛(n_4) and the only inflection point is γ(t_1). As a result, Suzy's throw, { do(a_1,t_1)} is the only cause of this change. 
Note that the order in which Suzy and Billy throw is irrelevant (as long as the constraint t_1 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) < t_2 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) is satisfied): the reason for Suzy's rock to get first may be because she throws first or because her rock is faster or any other reason. It is easy to check that, if we consider a scenario where Billy's rock gets first – formally a scenario 𝒮_𝑏𝑖𝑙𝑙𝑦 obtained from 𝒮_𝑠𝑢𝑧𝑦 by replacing constraint t_1 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) < t_2 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) by t_1 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) > t_2 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) – then Billy's throw, { do(a_2,t_2)}, is the only cause of this change. In the following variation of Suzy First story broken has two inflection points. Suzy and Billy throw rocks at a bottle, but this time both rocks arrive at the same time. This story can be formalized by a scenario 𝒮_𝑠𝑎𝑚𝑒 obtained from 𝒮_𝑠𝑢𝑧𝑦 by replacing t_1 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) < t_2 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) by t_1 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) = t_2 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) For any interpretation γ of this scenario, we have change 𝑏𝑟𝑜𝑘𝑒𝑛(n_5) with n_5 = γ(t_1) + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1) =γ(t_2) + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2) and two causal chains leading to this change: do(a_1,γ(t_1)), m_0(a_1), 𝑏𝑟𝑜𝑘𝑒𝑛(n_5) do(a_2,γ(t_2)), m_0(a_2), 𝑏𝑟𝑜𝑘𝑒𝑛(n_5) Both γ(t_1) and γ(t_2) are inflection points. They may be the same inflection point or different ones depending of the interpretation γ. In all cases, both { do(a_1,γ(t_1)) } and { do(a_2,γ(t_2)) } are causes of change 𝑏𝑟𝑜𝑘𝑒𝑛(n_5). Let us now consider the variation of this story introduced in Example <ref>, where Suzy and Billy throw by the order of a stronger girl. As we discuss above, 𝑏𝑟𝑜𝑘𝑒𝑛 becomes true at time-step t_1+𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1). In other words 𝑏𝑟𝑜𝑘𝑒𝑛(n_4) is a change in scenario 𝒮_𝑜𝑟𝑑𝑒𝑟. In this scenario there is only one causal chain leading to this change: do(b_1,0), m_6(a_1,t_1,b_1), m_0(a_1), 𝑏𝑟𝑜𝑘𝑒𝑛(n_4) and, thus, { do(b_1,0)} is the only cause of 𝑏𝑟𝑜𝑘𝑒𝑛(n_4). Note that our notion of cause is different from the notion of immediate or direct cause. The immediate cause of breaking the bottle is the throw of the rock, but the deliberate cause is the order. We also discussed the scenario where both Suzy and Billy refuse to follow the order and, thus, 𝑏𝑟𝑜𝑘𝑒𝑛 never happens. Therefore, there was no cause. Next let us consider a story where Suzy refuses to throw but Billy follows the order. This can be formalized by scenario 𝒮_order3 obtained from 𝒮_order by adding extended atom do( a_1,t_1). In this case the change happens later. That is, 𝑏𝑟𝑜𝑘𝑒𝑛(n_6) is a change with n_6 = γ(t_2)+γ(𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_2)). The only causal chain leading to this change is do(b_2,0), m_6(a_2,t_1,b_2), m_0(a_2), 𝑏𝑟𝑜𝑘𝑒𝑛(n_6) and, thus, { do(b_2,0)} is the only cause of 𝑏𝑟𝑜𝑘𝑒𝑛(n_6). Next example illustrates our treatment of preconditions of a cause. As in Example <ref> Suzy picks up a rock and throws it at the bottle. However, this time we assume that she is accurate only if she aims first. Otherwise, her rock misses. Suzy aims before throwing and hits the bottle. Billy just looks at his colleague's performance. 
The story is formalized by causal theory  𝒯_𝑎𝑖𝑚: [ m_0'(A) : 𝑏𝑟𝑜𝑘𝑒𝑛(I) ← 𝑜𝑐𝑐𝑢𝑟𝑠(A,I-D),𝑚𝑒𝑚𝑏𝑒𝑟(A,throw),; 𝑎𝑔𝑒𝑛𝑡(A) = Ag, 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(A)=D,; 𝑎𝑖𝑚𝑒𝑑(Ag,I-D),𝑏𝑟𝑜𝑘𝑒𝑛(I-1),𝑎𝑏(m_0'(A),I) ] [ m_7(A) : 𝑎𝑖𝑚𝑒𝑑(Ag,I) ← 𝑜𝑐𝑐𝑢𝑟𝑠(A,I-D),𝑚𝑒𝑚𝑏𝑒𝑟(A,aim),; 𝑎𝑔𝑒𝑛𝑡(A) = Ag, 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(A)=D,𝑎𝑏(m_7(A),I) ] where 𝑎𝑖𝑚𝑒𝑑 is an inertial fluent and scenario 𝒮_𝑎𝑖𝑚 [ 𝑎𝑔𝑒𝑛𝑡(a_1)=𝑠𝑢𝑧𝑦 𝑚𝑒𝑚𝑏𝑒𝑟(a_1,throw) 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(a_1)≥1; 𝑎𝑔𝑒𝑛𝑡(c) =𝑠𝑢𝑧𝑦 𝑚𝑒𝑚𝑏𝑒𝑟(c,𝑎𝑖𝑚) 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(c) ≥1; do(c,t_5), do(a_1,t_1), t_5 + 𝑑𝑢𝑟𝑎𝑡𝑖𝑜𝑛(c) < t_1 ] The inflection point in 𝒮_𝑎𝑖𝑚 is t_1 and the only deliberate cause of broken(n_4) is { do(a_1,t_1) }. Action do(c,t_5) is necessary for shattering the bottle, because it is required by one of the preconditions of m_0^'(a_1). However, it is not a deliberate cause because at the time of its occurrence the shattering could not be predicted (see condition (a) in Definition <ref>). Definition <ref> can be used to define the notion of causal explanation of unexpected observations: T(S) is called strongly consistent if T^reg(S), obtained from T(S) by dropping cr-rules, is consistent. If T(S) is strongly consistent and T(S ∪{ obs(f,y,i)}) is not we say that obs(f,y,i) is unexpected. We assume that every abductive support[Abductive support of a program Π is a minimal collection of cr-rules of Π which, if turned into regular rules and added to the regular part of Π, produce a consistent program Π^'. Answer set of Π is then defined as an answer set of Π^'. ] U of this theory has exactly one answer set. By a cause of atom f(i)=y we mean a cause of the last change of f to y which precedes i+1 (note that for actions and time-independent fluent f, f(i) = y is a change). By causal explanation of obs(f,y,i) we mean a cause of f(i)=y in T(S_U) where S_U is obtained from S by adding do(a,i) for every rule a(i) from U for some abductive support U. For example, consider a scenario S of 𝒯_𝑓𝑠𝑡 consisting of init( broken), obs(broken, true,2), actions a_1 and a_2 from 𝒮_𝑠𝑢𝑧𝑦 with durations 2 and 4 respectively. The program has one abductive support, a_1(0) and hence do(a_1,0) explains the unexpected observation. If broken were observed at 3 we'd have two explanations: do(a_1,0) and do(a_1,1). This can be compactly represented using a do-atom do(a_1,t) where t is an abstract time step satisfying 0 ≤ t < 2. § CONCLUSIONS The paper describes a new approach for representing causal knowledge, and its use for causal analysis. The approach emphasizes the separation between background theory and scenario. The first contains general knowledge that may be shared by different stories and the latter contains the information specific to the considered story. This, together with the use of abstract constants, provides a higher degree of elaboration tolerance than other approaches to causal analysis. We also propose the use of a rich KR-language that is able to represent sophisticated causal laws, time, defaults and their exceptions, recursive definitions, and other non-trivial phenomena of natural language. As a result, we can obtain accurate and direct formalizations of natural language sentences that, we believe, is essential for causal analysis. We have illustrated this with common challenging examples from the literature on actual causality. Causal analysis is realized over a formal representation rather than over the natural language statements. However, our intuitions are usually more clear with respect to the natural language statements than with respect to the formal representation. 
The closer the formal representation is to the natural language statements of a story, the better we can use our intuition to guide us towards a formal analysis of actual causality. A preliminary version of this paper was presented at a workshop <cit.>. We substantially extend that version and correct mistakes that were discovered after its presentation. This has led us to change the definition of inflection point and to introduce the notions of tight proof and abstract time-steps, among other changes. In the future, this work should be expanded to consider other types of causal relations. Some, like prevention, are not included due to space limitations. Others require further work. In particular, we plan to expand 𝒲 with the probabilistic constructs of P-log and use it to study probabilistic causal relations. Finally, we plan to investigate mathematical properties of causal theories, as well as algorithms for effectively computing the causes of various causal relations and their implementations. The notion of tight proof is closely related to the notion of causal justifications for answer set programs <cit.>. This may open the door to using <cit.> as the first step of a new system for computing causes according to our definition.
http://arxiv.org/abs/2306.12379v1
20230621165135
Fast QSC Solver: tool for systematic study of N=4 Super-Yang-Mills spectrum
[ "Nikolay Gromov", "Arpad Hegedus", "Julius Julius", "Nika Sokolova" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2306.04971v1
20230608065921
A Melting Pot of Evolution and Learning
[ "Moshe Sipper", "Achiya Elyasaf", "Tomer Halperin", "Zvika Haramaty", "Raz Lapid", "Eyal Segal", "Itai Tzruia", "Snir Vitrack Tamam" ]
cs.NE
[ "cs.NE", "cs.LG" ]
A Melting Pot of Evolution and Learning M. Sipper et al. Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 8410501, Israel Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 8410501, Israel DeepKeep, Tel-Aviv, Israel [email protected] <http://www.moshesipper.com/> A Melting Pot of Evolution and LearningThis research was partially supported by the following grants: Israeli Innovation Authority through the Trust.AI consortium; Israeli Science Foundation grant no. 2714/19; Israeli Smart Transportation Research Center (ISTRC); Israeli Council for Higher Education (CHE) via the Data Science Research Center, Ben-Gurion University of the Negev, Israel. (To Appear in Proceedings of Genetic Programming Theory & Practice XX, 2023) Moshe Sipper 0000-0003-1811-472X1 Achiya Elyasaf 0000-0002-4009-53532 Tomer Halperin1 Zvika Haramaty 0000-0002-6225-188X1 Raz Lapid 0000-0002-4818-93381,3 Eyal Segal1 Itai Tzruia1 Snir Vitrack Tamam1 8 May, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== We survey eight recent works by our group, involving the successful blending of evolutionary algorithms with machine learning and deep learning: * Binary and Multinomial Classification through Evolutionary Symbolic Regression, * Classy Ensemble: A Novel Ensemble Algorithm for Classification, * EC-KitY: Evolutionary Computation Tool Kit in Python, * Evolution of Activation Functions for Deep Learning-Based Image Classification, * Adaptive Combination of a Genetic Algorithm and Novelty Search for Deep Neuroevolution, * An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks, * Foiling Explanations in Deep Neural Networks, * Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors. § INTRODUCTION In Evolutionary Computation (EC)—or Evolutionary Algorithms (EAs)—core concepts from evolutionary biology—inheritance, random variation, and selection—are harnessed in algorithms that are applied to complex computational problems. As discussed by <cit.>, EAs present several important benefits over popular machine learning (ML) methods, including: less reliance on the existence of a known or discoverable gradient within the search space; ability to handle design problems, where the objective is to design new entities from scratch; fewer required a priori assumptions about the problem at hand; seamless integration of human expert knowledge; ability to solve problems where human expertise is very limited; support of interpretable solution representations; support of multiple objectives. Importantly, these strengths often dovetail with weak points of ML algorithms, which has resulted in an increasing number of works that fruitfully combine the fields of EC with ML or deep learning (DL). 
Herein, we will survey eight recent works by our group, which are at the intersection of EC, ML, and DL: * Machine Learning (Section <ref>) * Binary and Multinomial Classification through Evolutionary Symbolic Regression <cit.> (Section <ref>) * Classy Ensemble: A Novel Ensemble Algorithm for Classification <cit.> (Section <ref>) * EC-KitY: Evolutionary Computation Tool Kit in Python <cit.> (Section <ref>) * Deep Learning (Section <ref>) * Evolution of Activation Functions for Deep Learning-Based Image Classification <cit.> (Section <ref>) * Adaptive Combination of a Genetic Algorithm and Novelty Search for Deep Neuroevolution <cit.> (Section <ref>) * Adversarial Deep Learning (Section <ref>) * An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks <cit.> (Section <ref>) * Foiling Explanations in Deep Neural Networks <cit.> (Section <ref>) * Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors <cit.> (Section <ref>) If one's interest is piqued by a particular project we invite them to peruse the respective, cited, full paper (which are all freely available online). § MACHINE LEARNING §.§ Binary and Multinomial Classification through Evolutionary Symbolic Regression <cit.> Classification is an important subfield of supervised learning. As such, many powerful algorithms have been designed over the years to tackle both binary datasets as well as multinomial, or multiclass ones. Symbolic regression (SR) is a family of algorithms that aims to find regressors of arbitrary complexity. <cit.> showed that evolutionary SR-based regressors can be successfully converted into performant classifiers. We devised and tested three evolutionary SR-based classifiers: GPLearnClf, CartesianClf, and ClaSyCo. The first two are based on the one-vs-rest approach, while the last one is inherently multinomial. GPLearnClf is based on the GPLearn package <cit.>, which implements tree-based Genetic Programming (GP) symbolic regression, is relatively fast, and—importantly—interfaces seamlessly with Scikit-learn <cit.>. GPLearnClf evolves C separate populations independently, each fitted to a specific class by considering as target values the respective column vector (of C column vectors) of the one-hot-encoded target vector y. The fitness function is based on log loss (aka binary cross-entropy). Prediction is carried out by outputting the argmax of the set of best evolved individuals (one from each population). The hyperparameters to tune were population size and generation count. CartesianClf is based on Cartesian GP (CGP), which grew from a method of evolving digital circuits <cit.>. It is called `Cartesian' because it represents a program using a two-dimensional grid of nodes. The CGP package we used <cit.> evolves the population in a (1 + λ)-manner, i.e., in each generation it creates λ offspring (we used the default λ =4) and compares their fitness to the parent individual. The fittest individual carries over to the next generation; in case of a draw, the offspring is preferred over the parent. Tournament selection is used (tournament size =|population|), single-point mutation, and no crossover. We implemented CartesianClf similarly to GPLearnClf in a one-vs-rest manner, with C separate populations evolving independently, using binary cross-entropy as fitness. The hyperparameters to tune were number of rows, number of columns, and maximum number of generations. 
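A minimal sketch of the one-vs-rest scheme shared by GPLearnClf and CartesianClf is given below. It uses gplearn's SymbolicRegressor as the per-class evolutionary regressor purely for illustration; GPLearnClf itself evolves each population under a log-loss fitness, and the hyperparameter values shown are placeholders to be tuned.

import numpy as np
from gplearn.genetic import SymbolicRegressor

class OneVsRestSRClassifier:
    # One evolutionary SR population per class; prediction is the argmax over the
    # outputs of the best individual evolved for each class.
    def __init__(self, n_classes, population_size=500, generations=20):
        self.models = [SymbolicRegressor(population_size=population_size,
                                         generations=generations)
                       for _ in range(n_classes)]

    def fit(self, X, y):
        Y = np.eye(len(self.models))[y]            # one-hot encode the target vector
        for c, model in enumerate(self.models):    # population c is fitted to column c
            model.fit(X, Y[:, c])
        return self

    def predict(self, X):
        outputs = np.column_stack([m.predict(X) for m in self.models])
        return outputs.argmax(axis=1)

ClaSyCo, described next, keeps the C populations but replaces the independent fitting by a cooperative-coevolutionary fitness assignment.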
ClaSyCo (Classification through Symbolic Regression andCoevolution) also employs C populations of trees; however, these are not evolved independently as with the one-vs-rest method (as done with GPLearnClf and CartesianClf)—but in tandem through cooperative coevolution. A cooperative coevolutionary algorithm involves a number of evolving populations, which come together to obtain problem solutions. The fitness of an individual in a particular population depends on its ability to collaborate with individuals from the other populations <cit.>. Specifically, in our case, an individual SR tree i in population c, 𝑔𝑝^c_i, i ∈{1,…,𝑛_𝑝𝑜𝑝}, c ∈{1,…,C}, is assigned fitness through the following steps (we describe this per single dataset sample, although in practice fitness computation is vectorized by Python): * Individual 𝑔𝑝^c_i computes an output ŷ^c_i for the sample under consideration. * Obtain the best-fitness classifier of the previous generation, 𝑔𝑝^c'_𝑏𝑒𝑠𝑡, for each population c', c' ∈{1,…,C}, c' ≠ c (these are called “representatives” or “cooperators” <cit.>). * Each 𝑔𝑝^c'_𝑏𝑒𝑠𝑡 computes an output ŷ^c'_𝑏𝑒𝑠𝑡 for the sample under consideration. * We now have C output values, ŷ^1_𝑏𝑒𝑠𝑡, ... , ŷ^c_i , ... , ŷ^C_𝑏𝑒𝑠𝑡. * Compute σ(ŷ^1_𝑏𝑒𝑠𝑡, ... , ŷ^c_i , ... , ŷ^C_𝑏𝑒𝑠𝑡), where σ is the softmax function. * Assign a fitness score to 𝑔𝑝^c_i using the cross-entropy loss function. (NB: only individual 𝑔𝑝^c_i is assigned fitness—all other C-1 individuals are representatives.) Tested over 162 datasets and compared to three state-of-the-art machine learning algorithms—XGBoost, LightGBM, and a deep neural network—we found our algorithms to be competitive. Further, we demonstrated how to find the best method for one's dataset automatically, through the use of Optuna, a state-of-the-art hyperparameter optimizer <cit.>. §.§ Classy Ensemble: A Novel Ensemble Algorithm for Classification <cit.> <cit.> presented Classy Ensemble, a novel ensemble-generation algorithm for classification tasks, which aggregates models through a weighted combination of per-class accuracy. The field of ensemble learning has an illustrious history spanning several decades. Indeed, we ourselves employed this paradigm successfully in several recent works: * <cit.> presented conservation machine learning, which conserves models across runs, users, and experiments. As part of this work we compared multiple ensemble-generation methods, also introducing lexigarden—which is based on lexicase selection, a performant selection technique for evolutionary algorithms <cit.>. * <cit.> presented SyRBo—Symbolic-Regression Boosting—an ensemble method based on strong learners that are combined via boosting, used to solve regression tasks. * <cit.> introduced AddGBoost, a gradient boosting-style algorithm, wherein the decision tree is replaced by a succession of stronger learners, which are optimized via a state-of-the-art hyperparameter optimizer. * <cit.> presented a comprehensive, stacking-based framework for combining deep learning with good old-fashioned machine learning, called Deep GOld. The framework involves ensemble selection from 51 retrained pretrained deep networks as first-level models, and 10 ML algorithms as second-level models. Classy Ensemble receives as input a collection of fitted models, each one's overall accuracy score, and per-class accuracy, i.e., each model's accuracy values as computed separately for every class (note: we used 's <cit.> , which avoids inflated performance estimates on imbalanced datasets). 
Classy Ensemble adds to the ensemble the topk best-performing (over validation set) models, for each class. A model may be in the topk set for more than one class. Thus, for each model in the ensemble, we also maintain a list of classes for which it is a voter, i.e., its output for each voter class is taken into account in the final aggregation. The binary voter vector of size n_classes is set to 1 for classes the model is permitted to vote for, 0 otherwise. Thus, a model not in the ensemble is obviously not part of the final aggregrated prediction; further, a model in the ensemble is only “allowed” to vote for those classes for which it is a voter—i.e., for classes it was amongst the topk. Classy Ensemble provides a prediction by aggregating its members' predicted-class probabilities, weighted by the overall validation score, and taking into account voter permissions. Tested over 153 machine learning datasets we demonstrated that Classy Ensemble outperforms two other well-known aggregation algorithms—order-based pruning and clustering-based pruning—as well as our aforementioned lexigarden ensemble generator. We then enhanced Classy Ensemble with a genetic algorithm, creating Classy Evolutionary Ensemble, wherein an evolutionary algorithm is used to select the set of models which Classy Ensemble picks from. This latter algorithm was able to improve state-of-the-art deep learning models over the well-known, difficult ImageNet dataset. §.§ EC-KitY: Evolutionary Computation Tool Kit in Python <cit.> There is a growing community of researchers and practitioners who combine evolution and learning. Having used several EC open-source software packages over the years we identified a large “hole” in the software landscape—there was a lacuna in the form of an EC package that is: * A comprehensive toolkit for running evolutionary algorithms. * Written in Python. * Can work with or without scikit-learn (aka sklearn), the most popular ML library for Python. To wit, the package should support both sklearn and standalone (non-sklearn) modes. * Designed with modern software engineering in mind. * Designed to support all popular EC paradigms: genetic algorithms (GAs), genetic programming (GP), evolution strategies (ES), coevolution, multi-objective, etc'. While there are several EC Python packages, none fulfill all five requirements. Some are not written in Python, some are badly documented, some do not support multiple EC paradigms, and so forth. Importantly for the ML community, most tools do not intermesh with extant ML tools. Indeed, we have personally had experience with the hardships of combining EC tools with scikit-learn when doing evolutionary machine learning. Thus was born EC-KitY: a comprehensive Python library for doing EC, licensed under the BSD 3-Clause License, and compatible with scikit-learn. Designed with modern software engineering and machine learning integration in mind, EC-KitY can support all popular EC paradigms, including genetic algorithms, genetic programming, coevolution, evolutionary multi-objective optimization, and more. EC-KitY can work both in standalone, non-sklearn mode, and in sklearn mode. Below we show two code examples that solve a symbolic regression problem. 
In standalone mode the user can run an EA with a mere three lines of code: [language=Python,upquote=true] from eckity.algorithms.simple_evolution import SimpleEvolution from eckity.subpopulation import Subpopulation from examples.treegp.non_sklearn_mode.symbolic_regression.sym_reg_evaluator import SymbolicRegressionEvaluator algo = SimpleEvolution(Subpopulation(SymbolicRegressionEvaluator())) algo.evolve() print('algo.execute(x=2, y=3, z=4):', algo.execute(x=2, y=3, z=4)) Running an EA in sklearn mode is just as simple: [language=Python,upquote=true] from sklearn.datasets import make_regression from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split from eckity.algorithms.simple_evolution import SimpleEvolution from eckity.creators.gp_creators.full import FullCreator from eckity.genetic_encodings.gp.tree.utils import create_terminal_set from eckity.sklearn_compatible.regression_evaluator import RegressionEvaluator from eckity.sklearn_compatible.sk_regressor import SKRegressor from eckity.subpopulation import Subpopulation X, y = make_regression(n_samples=100, n_features=3) terminal_set = create_terminal_set(X) algo = SimpleEvolution( Subpopulation(creators=FullCreator(terminal_set=terminal_set), evaluator=RegressionEvaluator())) regressor = SKRegressor(algo) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) regressor.fit(X_train, y_train) print('MAE on test set:', mean_absolute_error(y_test, regressor.predict(X_test))) We recently taught a course in which 48 students worked in groups of two or three, submitting a total of 22 projects that used EC-KitY to solve a diverse array of complex problems, including evolving Flappy Bird agents, evolving blackjack strategies, evolving Super Mario agents, evolving chess players, and solving problems such as maximum clique and vehicle routing. EC-KitY proved quite up to the tasks. § DEEP LEARNING §.§ Evolution of Activation Functions for Deep Learning-Based Image Classification <cit.> Artifical Neural Networks (ANNs), and, specifically, Deep Neural Networks(DNNs), have gained much traction in recent years and are now being effectively put to use in a variety of applications. Considerable work has been done to improve training and testing performance, including various initialization techniques, weight-tuning algorithms, different architectures, and more. However, one hyperparameter is usually left untouched: the activation function (AF). While recent work has seen the design of novel AFs <cit.>, the Rectified Linear Unit (ReLU) remains by far the most commonly used one, mainly due to its overcoming the vanishing-gradient problem, thus affording faster learning and better performance. <cit.> introduced a novel coevolutionary algorithm to evolve AFs for image-classification tasks. Our method is able to handle the simultaneous coevolution of three types of AFs: input-layer AFs, hidden-layer AFs, and output-layer AFs. We surmised that combining different AFs throughout the architecture may improve the network's performance. We devised a number of evolutionary algorithms, including a coevolutionary one, comprising three separate populations: 1) input-layer AFs, 2) hidden-layer AFs, 3) output-layer AFs. Combining three individuals—one from each population—results in an AF architecture that can be evaluated. 
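To illustrate what evaluating such an AF architecture might involve, here is a small self-contained sketch; the network shape and the particular functions (ReLU, Swish, and softmax standing in for evolved input-, hidden-, and output-layer AFs) are illustrative assumptions, not the evolved solutions reported in the paper.

[language=Python,upquote=true]
import numpy as np

def relu(x): return np.maximum(0.0, x)
def swish(x): return x / (1.0 + np.exp(-x))   # stand-in for an evolved hidden-layer AF
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(x, weights, af_input, af_hidden, af_output):
    # Evaluate one candidate AF architecture (one individual drawn from each population).
    h = af_input(weights[0] @ x)          # input-layer AF
    for W in weights[1:-1]:
        h = af_hidden(W @ h)              # hidden-layer AF
    return af_output(weights[-1] @ h)     # output-layer AF

rng = np.random.default_rng(0)
weights = [rng.standard_normal(s) for s in [(32, 784), (32, 32), (10, 32)]]
probs = forward(rng.standard_normal(784), weights, relu, swish, softmax)

In the coevolutionary setting, the fitness of a candidate AF in one population is obtained by plugging it into such a forward pass together with the best individuals of the other two populations and training/evaluating the resulting network.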
We compared our novel algorithm to four different methods: standard ReLU- or LeakyReLU-based networks, networks whose AFs are produced randomly, and two forms of single-population evolution, differing in whether an individual represents a single AF or three AFs. We chose ReLU and LeakyReLU as baseline AFs since we noticed that they are the most-used functions in the deep-learning domain. We used Cartesian genetic programming (CGP), wherein an evolving individual is represented as a two-dimensional grid of computational nodes—often an a-cyclic graph—which together express a program <cit.>. An individual is represented by a linear genome, composed of integer genes, each encoding a single node in the graph, which represents a specific function. A node consists of a function, from a given table of functions, and connections, specifying where the data for the node comes from. A sample individual in the evolving CGP population, representing the well-known sigmoid AF, is shown in Figure <ref>. Tested on four datasets—MNIST, FashionMNIST, KMNIST, and USPS—coevolution proved to be a performant algorithm for finding good AFs and AF architectures. §.§ Adaptive Combination of a Genetic Algorithm and Novelty Search for Deep Neuroevolution <cit.> As the field of Reinforcement Learning (RL) <cit.> is being applied to harder tasks, two unfortunate trends emerge: larger policies that require more computing time to train, and “deceptive” optima. While gradient-based methods do not scale well to large clusters, evolutionary computation (EC) techniques have been shown to greatly reduce training time by using modern distributed infrastructure <cit.>. The problem of deceptive optima has long since been known in the EC community: Exploiting the objective function too early might lead to a sub-optimal solution, and attempting to escape it incurs an initial loss in the objective function. Novelty Search (NS) mitigates this issue by ignoring the objective function while searching for new behaviors <cit.>. This method had been shown to work for RL <cit.>. While both genetic algorithms (GAs) and NS have been shown to work in different environments <cit.>, we attempted in <cit.> to combine the two to produce a new algorithm that does not fall behind either, and in some scenarios surpasses both. <cit.> proposed a new algorithm: Explore-Exploit γ-Adaptive Learner (E^2γ AL, or EyAL). By preserving a dynamically-sized niche of novelty-seeking agents, the algorithm manages to maintain population diversity, exploiting the reward signal when possible and exploring otherwise. The algorithm combines both the exploitative power of a GA and the explorative power of NS, while maintaining their simplicity and elegance. Our experiments showed that EyAL outperforms NS in most scenarios, while being on par with a GA—and in some scenarios it can outperform both. EyAL also allows the substitution of the exploiting component (GA) and the exploring component (NS) with other algorithms, e.g., Evolution Strategy and Surprise Search, thus opening the door for future research. § ADVERSARIAL DEEP LEARNING §.§ An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks <cit.> Despite their success, recent studies have shown that DNNs are vulnerable to adversarial attacks. A barely detectable change in an image can cause a misclassification in a well-trained DNN. Targeted adversarial examples can even evoke a misclassification of a specific class (e.g., misclassify a car as a cat). 
Researchers have demonstrated that adversarial attacks are successful in the real world and may be produced for data modalities beyond imaging, e.g., natural language and voice recognition <cit.>. DNNs' vulnerability to adversarial attacks has raised concerns about applying these techniques to safety-critical applications. To discover effective adversarial instances, most past work on adversarial attacks has employed gradient-based optimization <cit.>. Gradient computation can only be executed if the attacker is fully aware of the model architecture and weights. Thus, these approaches are only useful in a white-box scenario, where an attacker has complete access and control over a targeted DNN. Attacking real-world AI systems, however, might be far more arduous. The attacker must consider the difficulty of implementing adversarial instances in a black-box setting, in which no information about the network design, parameters, or training data is provided—the attacker is exposed only to the classifier's input-output pairs. In this context, a typical strategy has been to attack trained replacement networks and hope that the generated examples transfer to the target model <cit.>. The substantial mismatch between the alternative model and the target model, as well as the significant computational cost of alternative network training, often renders this technique ineffective. <cit.> assumed a real-world, black-box attack scenario, wherein a DNN's input and output may be accessed but not its internal configuration. We focused on a scenario in which a specific DNN is an image classifier, specifically, a convolutional neural network (CNN), which accepts an image as input and outputs a probability score for each class. We presented QuEry Attack (for Query-Efficient Evolutionary Attack): an evolutionary, gradient-free optimization approach for generating adversarial instances, more suitable for real-life scenarios, because usually there is no access to a model's internals, including the gradients; thus, it is important to craft attacks that do not use gradients. Our proposed attack can deal with either constrained (ϵ value that constrains the norm of the allowed perturbation) or unconstrained (no constraint on the norm of the perturbation) problems, and focuses on constrained, untargeted attacks. We believe that our framework can be easily adapted to the targeted setting. Figure <ref> shows examples of successful and unsuccessful instances of images generated by QuEry Attack, evaluated against ImageNet, CIFAR10, and MNIST. QuEry Attack is a strong and fast attack that employs a gradient-free optimization strategy. We tested QuEry Attack against MNIST, CIFAR10, and ImageNet models, comparing it to other commonly used algorithms. We evaluated QuEry Attack's performance against non-differential transformations and robust models, and it proved to succeed in both scenarios. §.§ Foiling Explanations in Deep Neural Networks <cit.> In order to render a DL model more interpretable, various explainable algorithms have been conceived. <cit.> coined the term Explainable Artificial Intelligence (XAI), which refers to AI systems that “can explain their behavior either during execution or after the fact”. In-depth research into XAI methods has been sparked by the success of Machine Learning systems, particularly Deep Learning, in a variety of domains, and the difficulty in intuitively understanding the outputs of complex models, namely, how did a DL model arrive at a specific decision for a given input. 
Explanation techniques have drawn increased interest in recent years due to their potential to reveal hidden properties of DNNs <cit.>. For safety-critical applications, interpretability is essential, and sometimes even legally required. The importance assigned to each input feature for the overall classification result may be observed through explanation maps, which can be used to offer explanations. Such maps can be used to create defenses and detectors for adversarial attacks <cit.>. <cit.> showed that these explanation maps can be transformed into any target map, using only the maps and the network's output probability vector. This was accomplished by adding a perturbation to the input that is scarcely (if at all) noticeable to the human eye. This perturbation has minimal effect on the neural network's output, therefore, in addition to the classification outcome, the probability vector of all classes remains virtually identical. Our black-box algorithm, AttaXAI, enables manipulation of an image through a barely noticeable perturbation, without the use of any model internals, such that the explanation fits any given target explanation. AttaXAI explores the space of images through evolution, ultimately producing an adversarial image; it does so by continually updating a Gaussian probability distribution, used to sample the space of perturbations. By continually improving this distribution the search improves (Figure <ref>). Figure <ref> shows a sample result. This work demonstrated how focused, undetectable modifications to the input data can result in arbitrary and significant adjustments to the explanation map. We showed that explanation maps of several known explanation algorithms may be modified at will. Importantly, this is feasible with a black-box approach, while maintaining the output of the model. We tested AttaXAI against the ImageNet and CIFAR100 datasets using 4 different network models. §.§ Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on Object Detectors <cit.> The implications of adversarial attacks can be far-reaching, as they can compromise the security and accuracy of systems that rely on DL. For instance, an adversarial attack on a vehicle-mounted, image-recognition system could cause it to misidentify a stop sign as a speed-limit sign <cit.>, potentially causing the vehicle to crash. As DL becomes increasingly ubiquitous, the need to mitigate adversarial attacks becomes more pressing. Therefore, research into adversarial attacks and defenses is a rapidly growing area, with researchers working on developing robust and secure models that are less susceptible to such attacks. In <cit.> we focused on fooling surveillance cameras (both indoor and outdoor), because of their ubiquity and susceptibility to attack, by creating adversarial patches (Figure <ref>). Our objective was to generate physically plausible adversarial patches, which are performant and appear realistic—without the use of gradients. An adversarial patch is a specific type of attack, where an image is modified by adding a small, local pattern that engenders missclassification. The goal of such an attack is to intentionally mislead a model into making an incorrect prediction or decision. By “physically plausible” we mean patches that not only work digitally, but also in the physical world, e.g., when printed—and used. 
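As a small aside, the digital side of a patch attack is straightforward; the sketch below is an illustrative assumption rather than the paper's pipeline, and shows how a candidate patch would be pasted into a frame before querying the detector, with the patch size and location chosen arbitrarily here.

[language=Python,upquote=true]
import numpy as np

def apply_patch(image, patch, top, left):
    # Paste a small patch into an H x W x 3 image at (top, left); pixel values in [0, 1].
    attacked = image.copy()
    h, w, _ = patch.shape
    attacked[top:top + h, left:left + w, :] = patch
    return attacked

image = np.random.rand(416, 416, 3)   # placeholder for a camera frame
patch = np.random.rand(64, 64, 3)     # candidate patch, e.g. decoded from a GAN latent vector
attacked = apply_patch(image, patch, top=176, left=176)
# The attacked frame is then fed to the object detector, and the detector's confidence for
# the "person" class is the score that the black-box search tries to minimize.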
The space of possible adversarial patches is huge, and with the aim of reducing it to afford a successful search process, we chose to use pretrained generative adversarial network (GAN) generators. Given a pretrained generator, we seek an input latent vector, corresponding to a generated image that leads the object detector to err. We leverage the latent space's (relatively) small dimension, approximating the gradients using an Evolution Strategy algorithm <cit.>, repeatedly updating the input latent vector by querying the target object detector until an appropriate adversarial patch is discovered. Figure <ref> depicts a general view of our approach. We search for an input latent vector that, given a pretrained generator, corresponds to a generated image that “hides” a person from the object detector. The patches we generated can be printed and used in the real world. We compared different deep models and concluded that it is possible to generate patches that fool object detectors. The real-world tests of the printed patches demonstrated their efficacy in “concealing” persons, evidencing a basic threat to security systems. § CONCLUDING REMARK Our main conclusion from the works presented above is simple: When combined judiciously, EC and ML/DL reinforce each other to form a powerful alliance. And we are fervently expanding this lineup of successful joint ventures...
http://arxiv.org/abs/2306.06777v2
20230611211429
Improving the Validity of Decision Trees as Explanations
[ "Jiri Nemecek", "Tomas Pevny", "Jakub Marecek" ]
cs.LG
[ "cs.LG", "cs.AI", "math.OC" ]
In classification and forecasting with tabular data, one often utilizes tree-based models. This can be competitive with deep neural networks on tabular data [cf. Grinsztajn et al., NeurIPS 2022] and, under some conditions, explainable. The explainability depends on the depth of the tree and the accuracy in each leaf of the tree. Here, we train a low-depth tree with the objective of minimising the maximum misclassification error across each leaf node, and then “suspend” further tree-based models (e.g., trees of unlimited depth) from each leaf of the low-depth tree. The low-depth tree is easily explainable, while the overall statistical performance of the combined low-depth and suspended tree-based models improves upon decision trees of unlimited depth trained using classical methods (e.g., CART) and is comparable to state-of-the-art methods (e.g., well-tuned XGBoost). § INTRODUCTION In classification and forecasting with tabular data, one often utilizes axis-aligned decision trees <cit.>. A prime example of a high-risk application of AI, where decision trees are widely used, is credit risk scoring <cit.> in the financial services industry <cit.>. There, the relevant regulation, such as the Equal Credit Opportunity Act in the US <cit.> and related regulation <cit.> in the European Union, bars the use of models that are not explainable <cit.>, which is often construed <cit.> as requiring the use of decision trees. When studying the decision tree that a bank uses, one often focuses on ways that would make it possible to obtain a loan, and one would wish that the corresponding leaf of the decision tree had as high accuracy as possible. In many other domains, the use of tree-based models has an equally long tradition. Consider, for example, judicial applications of AI such as the infamous Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) <cit.>, which is marketed as the “nationally recognized decision tree model”, or medical applications of AI <cit.>. It is hard to overstate the importance of high accuracy of any rule that a medical doctor or a judge may learn from a decision tree. In yet more domains, low-depth decision trees are used <cit.> to provide globally valid explanations of black-box classifiers, which is sometimes known <cit.> as model extraction. Low-depth trees can indeed serve as global explanations for a classifier, or explainable classifiers per se, when each leaf is construed as a logical rule. Because various individuals or subgroups may deem various outcomes of importance, cf. the medical applications, a fair explanation would have as high accuracy in each leaf of the decision tree as possible. The depth needs to be low[According to <cit.>, humans can understand logical rules with boolean complexity of up to 5–9, depending on their ability, where the boolean complexity is the length of the shortest Boolean formula logically equivalent to the concept, usually expressed in terms of the number of literals.], in order for the rule explaining the decision in each leaf to remain comprehensible. Similarly, one could argue that a decision tree can provide misleading explanations. To evaluate how valid[ Our use of the term validity is related to its use in <cit.>, but distinct.
] or misleading the decision tree is, we suggest considering the minimal accuracy in any leaf of a tree (tree's leaf accuracy). Indeed, a member of the public, when presented with the decision tree, may assume that each leaf of a decision tree can be construed as a logical rule. Consider, for example, the decision tree of Figure <ref>, based on the two-year variant of the well-known COMPAS <cit.> dataset, which considers the binary classification problem of whether the individual would reoffend within the next two years. The left-most leaf may be interpreted as suggesting that for 1-3 prior counts and under 23 years of age, the defendant will not reoffend within the next two years after release. However, the validity of this rule is rather questionable: the training accuracy in that leaf is 66.8 %, while the test accuracy is 60 % in that leaf. This suggests that 40 % defendants who meet these criteria will actually reoffend within two years. For a more extreme example, see Figure <ref>, which shows two trees of similar overall accuracy for the pol(e) dataset. When optimizing for overall accuracy, the minimum test accuracy in one leaf can be as low as 57.1% (cf. the top tree in Figure <ref>). However, when maximizing the minimum training accuracy in one leaf, the minimum test accuracy in one leaf increases to 86.5% (cf. the bottom tree in Figure <ref>). One could argue that this improves the validity and fairness of the explanation provided by the tree. Although a recent comparison of the statistical performance of gradient-boosted trees and deep neural networks <cit.> by Grinsztajn et al. has shown that the state-of-the-art tree-based models can outperform state-of-the-art neural networks, across a comprehensive benchmark of tabular data sets, the low depth limits the overall accuracy. Therefore, one would like to improve the accuracy by “hybridizing” the tree, where the top, fixed-depth tree maximizing the minimum leaf accuracy objective would explain as much of the variance as possible, given the depth, while below, tree-based models suspended from all leaves of the fixed-depth tree, need not be interpretable, but would improve the overall accuracy of the hybrid tree. Here, we aim to introduce such hybrid trees and a two-step procedure for training these, to improve upon both the statistical performance and explainability of decision trees. In the first step of the procedure, we use mixed-integer programming (MIP) to train a low-depth tree, with the objective of minimizing the maximum misclassification error across each leaf node, and with constraints bounding the number of samples in each leaf node from below. Seen another way, we maximize the minimal accuracy in any leaf of a tree. In the second step, we train further tree-based models, which are to be suspended from each leaf of the low-depth tree. The low-depth tree with the additional constraints on the accuracy in the leaves is easily explainable, while the overall statistical performance of these hybrid tress <cit.> combining low-depth trees and suspended tree-based models (which we call the hybrid-tree accuracy) improves upon the accuracy of decision trees of unlimited depth trained using classical methods (e.g., CART) and is comparable to state-of-the-art tree-based methods, such as the well-tuned XGBoost of <cit.>. Let us illustrate the statistical performance. Figure <ref> shows that the accuracy of well-tuned gradient-boosted trees of <cit.> on the two-year COMPAS <cit.> test case exceeds 0.68. 
The accuracy of our low-depth tree trained with the leaf-accuracy objective is below 0.65, which should not be surprising, considering the mean accuracy is not the main objective. Nevertheless, by utilising the trees suspended from leaf nodes of the low-depth tree, we can improve the accuracy very close to 0.68, which improves over both the accuracy of CART of the same depth alone (0.67) and CART of the same depth with trees suspended from leaf nodes of the CART tree (around 0.675). this performance is rather typical across the benchmark of <cit.>, with the average improvement of 0.0053 on categorical datasets and 0.0016 on numerical datasets over the limited-depth CART tree with further CART trees suspended from its leaves. Our contributions. We present: * the challenge of fairness (or, equivalently, validity) of an explanation. * leaf accuracy as a criterion for evaluating the validity and fairness of a classification tree as a global explanation. * a method for training decision trees that are optimal with respect to leaf accuracy, which is scalable across a well-known benchmark <cit.>, despite its use of mixed-integer programming. * benchmarking on tabular datasets <cit.> suggesting that the leaf accuracy can be improved by up to 21.16 percentage points (i.e., by 44.84%), while suffering only a very modest drop (at most 2.76 percentage points across the benchmark) in the overall accuracy, compared to well-tuned XGBoost <cit.>. § RELATED WORK Decision trees <cit.> are among the leading supervised machine learning methods, where interpretability and out-of-sample classification performance is important. Random forests <cit.> and gradient-boosting tree-ensemble approaches <cit.> improve upon their statistical performance substantially, while limiting the interpretability, somewhat. We are given n samples (x_i1, …, x_ip ,y_i) with p features each, for i = 1 … n, and their classification y_i ∈ [K] into K classes. Let us denote sample i by x_i = (x_i1, …, x_ip). The decision trees sequentially splits the samples into subsamples: In each non-leaf node t, it splits the samples based on a single covariate x_:t and a threshold b_t. (See Figure <ref> for an illustration.) There is a vast literature, including the construction of confidence intervals <cit.>. More recently, decision trees play an important role in explainable artificial intelligence <cit.> and interpretable machine learning <cit.>. Construction of a optimal axis-aligned binary decision tree is NP-Hard <cit.>, and there are hence cases when all known polynomial-time algorithms such as CART <cit.> produce suboptimal results. Still, CART <cit.>, which utilizes Gini diversity index and cross-validation in pruning trees, ranks among the leading algorithms <cit.> in machine learning. A decade later, Breiman suggested that boosting can be interpreted as an optimization algorithm <cit.>, leading to the development of gradient-boosted trees <cit.>. Their well-tuned variants <cit.> are the state-of-the-art polynomial-time algorithms for training decision trees. We refer to <cit.> for comparisons against deep neural networks. Bertsimas and Dunn <cit.> and, independently others <cit.>, pioneered the use of exponential-time algorithms in the construction of decision trees, under the banner of optimal decision trees. 
The integer-programming formulation of <cit.> suffers from some issues of scalability <cit.>, but can be easily extended by the addition of further constraints, such as sparsity <cit.>, fairness <cit.>, upper bounds on the numbers of leaves <cit.>, incremental progress bounds <cit.>, bounds on similarity of the support <cit.>, a wide variety of privacy-related constraints, and in our case, numbers of samples and accuracy in the leaves. Likewise, there are numerous extensions in terms of the objective <cit.>, including F-score, AUC, and partial area under the ROC convex hull, and in our case, the minimum leaf accuracy. Subsequently, the optimal decision trees have grown into a substantial subfield within machine learning research. There have been a number of important proposals as to alternative convex-programming relaxations for optimal decision trees: <cit.> have demonstrated the use of an extended formulation in a column-generation (branch-and-price) approach; <cit.> have introduced another alternative formulation and a number of valid inequalities (cuts); <cit.> have introduced yet another alternative formulation based on the maximum-flow problem. <cit.>. Independently, <cit.> suggested to use non-linear optimization techniques, such as alternating minimization leading to a much further reserach <cit.>. We refer to <cit.> for overviews of mathematical optimization in the construction of decision trees. Much recent research <cit.> has also focussed on improving the scalability of exponential-time algorithms for optimal decision trees by using branch-and-bound methods without relaxations in the form of convex optimization and, more broadly, dynamic programming. These approaches are sometimes seen as less transparent, as the mixed-integer formulation needs to be translated to the appropriate pruning rules or cost-to-go functions, which are less succinct, and the correctness of the translation can be non-trivial to verify. Nevertheless, <cit.> have demonstrated the scalability of their method to a dataset with over 245,000 samples (utilizing less than 2000 core-hours), for example. On a benchmark of 21 datasets from UCI Repository with over 7,000 samples, the algorithm can improve training accuracy by 3.6% and testing accuracy by 2.8%, compared to the current state-of-the-art. This seems to validate the practical relevance of optimal decision trees. § MIXED-INTEGER FORMULATION We extend the Mixed-Integer Programming (MIP) formulation of optimal decision trees <cit.> to a different objective function and novel constraints. The entire MIP formulation is presented in Figure <ref>. Base model As in the original optimal decision trees <cit.>, we have n samples with p features each. Every point has one of K classes, which is represented in the formulation by a binary matrix Y such that Y_ik = 1 y_i = k. All tree nodes are split into two disjoint sets 𝒯_B and 𝒯_L which are sets of branching nodes and leaf nodes respectively. Variable a_t is a binary vector of dimension p, that selects a variable to be used for decisions in node t. b_t is then the value of the threshold. We assume all data are normalized to [0, 1] range. Equations (<ref>–<ref>) capture the original model of <cit.>, wherein: * Binary variable c_kt is equal to 1 if and only if leaf node t assigns class k to data. * Binary variable l_t is equal to 1 if and only if there is any point classified by the leaf node t. * Binary variable z_it is equal to 1 if and only if point x_i is classified by leaf node t. 
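For concreteness, the following is a minimal gurobipy sketch of how the base variables above might be declared, together with a few of the simpler constraints (one split feature per branching node, one class per non-empty leaf, each sample assigned to exactly one non-empty leaf, and the minimum leaf size); the split-routing constraints and the leaf-accuracy machinery discussed next are omitted, and the exact indexing is an illustrative assumption rather than the authors' implementation.

[language=Python,upquote=true]
import gurobipy as gp
from gurobipy import GRB

def declare_base_model(n, p, K, branch_nodes, leaf_nodes, N_min):
    m = gp.Model("low_depth_tree")
    a = m.addVars(branch_nodes, p, vtype=GRB.BINARY, name="a")   # split feature per branching node
    b = m.addVars(branch_nodes, lb=0.0, ub=1.0, name="b")        # split threshold (data in [0, 1])
    c = m.addVars(leaf_nodes, K, vtype=GRB.BINARY, name="c")     # class assigned to each leaf
    l = m.addVars(leaf_nodes, vtype=GRB.BINARY, name="l")        # leaf is non-empty
    z = m.addVars(n, leaf_nodes, vtype=GRB.BINARY, name="z")     # sample-to-leaf assignment
    m.addConstrs(a.sum(t, "*") == 1 for t in branch_nodes)       # exactly one feature per split
    m.addConstrs(c.sum(t, "*") == l[t] for t in leaf_nodes)      # one class iff the leaf is used
    m.addConstrs(z.sum(i, "*") == 1 for i in range(n))           # each sample lands in one leaf
    m.addConstrs(z[i, t] <= l[t] for i in range(n) for t in leaf_nodes)
    m.addConstrs(z.sum("*", t) >= N_min * l[t] for t in leaf_nodes)   # minimum leaf size
    return m, a, b, c, l, z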
The only modification to the original formulation is the omission of a binary variable d_t that decided whether a certain branching node is used. This introduced a flaw in the original formulation <cit.> which led to invalid trees, so we decided against using it. We assume it to always be 1 instead. To prune redundancies, we introduce a process of tree reduction described in Section <ref>. Equations (<ref>) and (<ref>) implement the split of samples to leaf node t using disjoint sets A_R(t) and A_L(t), containing nodes to which the leaf t is on the right or on the left, respectively. Since we cannot use strict inequality, we use ϵ, a p-dimensional vector of the smallest increments between two distinct consecutive values in every feature space <cit.>: ϵ_j = min{x_j^(i+1) - x_j^(i) | x_j^(i+1) ≠ x_j^(i), ∀ i ∈{1, … ,n - 1}} where x_j^(i) is the i-th largest value in the j-th feature. ϵ_max is the highest value of ϵ_j and serves as a tight big-M bound. Finally, Equation (<ref>) bounds the minimal number of points (N_min) in a single leaf from below, similar to <cit.>. Extensions In the original optimal decision trees <cit.>, the objective is to minimize total misclassification error. Instead, we wish to maximize the minimum leaf accuracy. Because a single sample usually contributes differently to accuracy at different leaves, we need to introduce multiple new variables to track the accuracy in each leaf: * variable s_it represents the potential accuracy that a sample x_i has in leaf t. It takes values in the range [0, 1] and must sum to 1 when summing across all samples assigned to leaf t. This is ensured by setting the value to 0 for all points that are not assigned to the leaf t in Constraint (<ref>). The sum of 1 is enforced in Equation (<ref>) for leaves with some samples assigned. Empty leaves do not have non-zero s_it values for any i, and would thus not sum to 1. * reference accuracy variable r_t serves as a common variable to which all accuracy contributions are equal. This is, of course, required only for points assigned to the leaf t. This is enforced in (<ref>) and (<ref>). * variable S_it represents the true assignment of accuracy given by the sample. That is achieved by setting it to 0 for misclassified points using constraint (<ref>) and by setting it equal to s_it otherwise by constraints (<ref>) and (<ref>). * variable Q is our objective and represents the lowest achieved accuracy across all non-empty leaves as per constraint (<ref>). For empty leaves, this constraint will be trivially satisfied, since Q cannot take value higher than 1 anyway. Tree reduction After the optimizer of the mixed-integer program is obtained, empty leaves are pruned to obtain the resulting unbalanced tree. Furthermore, to account for suboptimal solutions obtained when the solver is run with a strict time limit, each pair of sibling leaves classified in the same class is merged. This is performed recursively until no further action can be performed. This leads to no loss in total accuracy, and oftentimes leads to an improvement in leaf accuracy, given the fact that we consider the minimum over all leaves. Tree extension Finally, we suspend new tree-based models from the remaining leaves, to improve the hybrid-tree accuracy so that it is comparable to the best-performing models. In particular, we used XGBoost to train the suspended models, since it was the best-performing model on the benchmark <cit.>. We trained a separate model for each leaf of the low-depth tree obtained in Section <ref> above.
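The structure of this extension step can be sketched as follows; the sketch uses a depth-4 CART as a stand-in for the MIP-trained top tree (any scikit-learn tree exposing the same apply() routing would do), assumes binary labels in {0, 1} for simplicity, and leaves out the per-leaf hyperparameter tuning described next.

[language=Python,upquote=true]
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

def fit_hybrid(X, y, max_depth=4, min_leaf=50):
    # Top tree: low-depth, explainable; here CART stands in for the MIP-trained tree.
    top = DecisionTreeClassifier(max_depth=max_depth, min_samples_leaf=min_leaf).fit(X, y)
    leaves = top.apply(X)                      # route training samples to leaves
    leaf_models = {}
    for leaf in np.unique(leaves):
        idx = leaves == leaf
        if len(np.unique(y[idx])) > 1:         # leaf not pure: suspend an XGBoost model
            leaf_models[leaf] = XGBClassifier(n_estimators=100).fit(X[idx], y[idx])
    return top, leaf_models

def predict_hybrid(top, leaf_models, X):
    leaves = top.apply(X)
    pred = top.predict(X)                      # fall back to the top tree's leaf class
    for leaf, model in leaf_models.items():
        mask = leaves == leaf
        if mask.any():
            pred[mask] = model.predict(X[mask])
    return pred

At prediction time, a sample is first routed by the explainable top tree and then, if its leaf carries a suspended model, re-scored by that model.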
The hyperparameters of the models were tuned using 50 iterations of a Bayesian hyperparameter search with 3-fold cross-validation in each leaf. In experiments, we extend trees generated by other methods (OCT, CART) in the same way. § NUMERICAL RESULTS We have implemented the method in Python and all code and results are provided in the Supplementary material. We will release them under an open-source license on GitHub once the paper has been accepted. The hyperparameters have been chosen as follows: * The low-depth trees have been trained using the formulation in Figure <ref> with depth limited to four since that is a reasonable threshold for interpretability (e.g., printability on an A4 page, similar to Figure <ref>) and for not diluting the dataset to small parts that would impede the ability to train the “suspended” trees. * To further support this, we set the minimal amount of points in a leaf (N_min) to 50. * MIPFocus and Heuristics hyperparameters were set to 1 and 0.8, respectively, to focus on finding feasible solutions in the search since that leads to the fastest improvements of the solution. However, our experiments in Appendix <ref> show that default MIP solver hyperparameters perform similarly. We performed our experiments on the benchmark dataset of Grinsztajn et al. <cit.>, which contains datasets for both regression and classification. Since our implementation considers only classification, for the time being, we consider only classification datasets. <cit.> divides the datasets into numerical datasets and datasets with some categorical features. We follow this distinction and present results on both kinds of datasets separately. We also follow the suggestion of <cit.> to perform 10 different train-test splits with at most 10,000 datapoints or 80% of total datapoints (whichever is lower) for training across all datasets. That is, each model has been trained on each dataset 10 times, with different seeds for data splits. The training used 80% of all data points data or 10,000 datapoints, whichever is lower, while the remaining 20% of the dataset has been used as the test set for evaluating the accuracy, and minimum leaf accuracy Q (<ref>). All MIP formulations of our low-depth tree were warmstarted using a CART solution trained on the same data with default scikit-learn parameters, except maximal depth and a minimal number of samples in a leaf, which were set to 4 and 50, respectively. We performed all experiments on an internal cluster with sufficient amounts of memory. Each run of the MIP solver has been limited to 8 hours on 8 cores of AMD Epyc 7543, totaling 64 core-hours per split of a dataset. The extension part takes on average around 1 additional 3 core-hours per split. This totals around 15,500 core-hours for the entire classification part of the benchmark of <cit.> and one configuration of hyperparameters. Training each dataset requires between 15 and 95 GB of working memory; details are provided in the Appendix (Figure <ref>). We compare our method of training classification trees to CART, as it is by far the most common. All experiments used the scikit-learn implementation of CART. The hyperparameters for CART were optimized using Bayesian hyperparameter optimization for 100 iterations using 5-fold cross-validation. Hyperparameter search space was notably constrained only by a maximal depth of 4 to ensure comparability to our low-depth trees. In comparison to unconstrained depth CART, our model interestingly performed even better. See Appendix <ref>. 
The entire optimization of CART with the extensions of the leaves took around 500 core-hours for the entire benchmark. The XGBoost results are taken from the authors of the paper <cit.> introducing the benchmark datasets, which suggests 20,000 core-hours have been spent producing these. Figure <ref> shows the average performance (accuracy and minimum leaf accuracy) over categorical and numerical datasets. We include the comparison to optimal classification trees (OCT) <cit.> since it is the formulation we built on. The OCT models are warmstarted by the same trees as Our models. The performance of OCT is similar to CART. Our model improves the minimal leaf accuracy by 11.16 percentage points on average compared to CART. Figure <ref> shows again the average performance separately on Categorical and Numerical datasets divided into three groups by difficulty. The measure of difficulty is based on performance by XGBoost provided by the authors of the benchmark. The thresholds of difficulty categories are 0.7 and 0.8 for datasets containing categorical features and 0.75 and 0.85 for datasets with only numerical features. The thresholds were selected in order to separate too easy and too hard datasets, which make the plots less informative, and to explore behavior over datasets with varying inner complexity. We see that the proposed method always significantly improves the minimum leaf accuracy and also improves the accuracy compared to CART. Table <ref> quantifies the differences numerically. Our model have worse accuracy by about .01 on average when compared to the uninterpretable, best-performing state-of-the-art models (XGBoost). Compared to CART, a different training process of the same model, slightly improves the accuracy, but importantly improves the minimum leaf accuracy, sometimes by up to 20 percentage points. § CONCLUSIONS AND LIMITATIONS We have identified an important problem of the fairness and validity of an explanation, and shown that contemporary tree-based models do leave room for improvement, in terms of fairness. The use of hybrid trees, where the top is constructed with the goal of maximizing minimum leaf accuracy, offers multiple benefits. First, it ensures better validity of every explanation provided, improving the minimum leaf accuracy by around 11 percentage points on average across the benchmark of tabular datasets <cit.>. Second, the extended accuracy with tree-based models suspended from leaves improves over the accuracy of low-depth trees constructed using integer programming as well as hybrid trees, where both top and bottom is obtained using CART. Finally, it is easy to extend to further constraints, such as shape constraints, in the top tree. Overall, we hope that the proposed approach may lead to improving the validity and fairness of decision trees as explanations. The proposed approach shares some of the limitations of the original optimal decision trees <cit.>. Notably, the algorithms for solving mixed-integer programming problems we utilize scale exponentially in the number of decision variables. Having said that, depth-4 trees suffice to match state-of-the-art methods in terms of accuracy when additional tree-based models are suspended from the leaves, which makes exponential time algorithms sufficiently fast in practice. Furthermore, all recently proposed methods <cit.> improving the scalability of optimal decision trees can be applied, in principle. 
§ APPENDIX In the Supplementary material, we also provide the source code and complete results, along with a Jupyter notebook with example tests, which will be publicly available once the paper is accepted. We also tabulate the results of further tests performed and describe ablation analyses. §.§ Datasets We used the classification part of the data sets from the mid-sized tabular data created by <cit.>. The datasets, with their properties, are listed in Table <ref>. Training sets contained 80% of the total amount of samples truncated to at most 10,000 samples. This constraint affects 16 of the 23 total datasets, although some only marginally. The affected datasets have their number of samples in Table <ref> in bold. The remaining 20% of samples were the testing dataset. In a 10-fold cross-validation, we used 10 random seeds that determined the train-test splits of each dataset. Additionally, datasets are either categorical or numerical. Categorical are those that contain at least one categorical feature. Numerical datasets have no categorical features. Four numerical datasets are the same as categorical datasets, but with their categorical features removed (covertype, default-of-credit-card-clients, electricity, eye_movements). Only datasets without missing features and with sufficient complexity are included in the benchmark. For more details on the methodology of dataset selection, we refer to the original paper <cit.>. §.§ MIP formulation description We provide Table <ref> with short descriptions of the parameters and variables in the proposed formulation in Figure <ref>. §.§ MIP Solver We have utilized Gurobi optimizer as a MIP solver. Although the solver makes steady progress towards global optimality, the road there is lengthy. Figure <ref> shows the progress of the MIP Gaps during the 8-hour optimization averaged over all datasets. For a detailed, per-dataset view, see Figure <ref>. We see that the solution is still improving, albeit rather slowly, after 8 hours. The narrowing of the MIP gap is achieved only by finding better feasible solutions. This lack of improvement of the objective bound might have been affected by our hyperparameter settings which focused on finding feasible solutions and heuristic search. However, tests with default parameters did not improve the best bound either. §.§.§ Default hyperparameters of Gurobi solver The performance of the Gurobi optimizer depends on the choice of hyperparameters. For the sake of simplicity, we have considered only two sets of parameters. To measure the performance change of our choice of (hyper)parameters, we ran a test with the default value of the MIPFocus parameter and a test with the default value of the Heuristics parameter. The results (cf. Figure <ref> and Tables <ref>, <ref>) show no significant improvements regarding the MIPFocus parameter. However, with the default value of the Heuristics parameter, we observe an improvement in performance on numerical datasets and a decrease in performance on categorical datasets. Both absolute differences in accuracy are about 0.015, so we opted for the variant with similar performances on both categorical and numerical datasets. That is the proposed variant focusing on heuristics. This proposed configuration also shows a more stable increase in accuracy w.r.t. the performance of CART models. The solver performance varies per dataset, as visualized by Figure <ref>.
These differences in performance suggest that hyperparameter space regarding the MIP solver should be further explored and could yield improvements. A closer look at Figure <ref> suggests that different configurations help achieve better conditions for the solver on different datasets. This might be an area of further hyperparameter tuning based on the specific attributes of the dataset. §.§ Memory requirements Overall, the memory requirements of the datasets were between 15 and 95 GB. On average, all datasets required at most 70 GB of working memory. Figure <ref> shows the memory requirements of our formulation in more detail. The extension phase of the process is negligible in this regard, as it requires only about 1.5 GB of working memory in total and is performed after the MIP optimization. Training and extending the CART models also required less than 2 GB of working memory. The amount of memory required by the MIP solver is dependent on the size of the data in the number of training samples, as well as the number of features. Figure <ref> shows this linear dependence of memory requirements on the size of the training set. Based on the coloring of the nodes, we also see the dependence on the number of features, especially in the case of the Bioresponse dataset. §.§.§ Performance of the model given a shorter time When considering a shorter time for optimization, we can lower the memory requirements to levels attainable by current personal computers. When optimizing our MIP model for one hour, the required memory is below 50 GB for all datasets except Bioresponse, which has one order of magnitude more features than the rest of the datasets included in the benchmark. The mean memory requirement is below 30 GB of working memory (compared to 50 GB for the 8-hour run). See Figure <ref> for details. Figure <ref> shows that even with this limited budget, we can achieve significant improvement compared to CART in leaf accuracy and similar accuracy of hybrid trees. §.§ Reduction of the trees The reduction phase has a beneficial influence on the leaf accuracy of a model. Figure <ref> shows this improvement of mean leaf accuracy over all datasets. In Figure <ref>, we further provide a comparison of the complexity of the created trees by comparing the distributions of the number of leaves (or potential explanations) provided by the method. The maximum amount of leaves of a tree with depth 4 is 16. CART model has, on average, around 8 leaves after reduction. Our proposed model's distribution is close to the distribution of CART models. When solving the MIP formulation directly, the distribution is severely shifted toward very small trees. Our proposed method uses a default CART solution to warmstart the search, which might explain the shape of the distribution compared to the direct method and CART. §.§ Hyperparameter search distributions We needed to optimize hyperparameters for extending models and CART trees used for comparisons. We used Bayesian hyperparameter search for that purpose. §.§.§ Extending XGBoost models For the hyperparameter search of XGBoost models in leaves, we used the distributions listed in Table <ref>. The parameters are almost all the same as used by <cit.>. Only the Number of estimators and Max depth were more constrained to account for the fewer samples available for training. The Bayesian optimization was run for 50 iterations, with 3-fold cross-validation in every leaf that contained enough points to perform the optimization. 
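A minimal sketch of such a per-leaf search, using scikit-optimize's BayesSearchCV around an XGBoost classifier, is given below; the search space shown is purely illustrative and does not reproduce the distributions of Table <ref>.

[language=Python,upquote=true]
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from xgboost import XGBClassifier

def tune_leaf_model(X_leaf, y_leaf):
    search_space = {                       # illustrative ranges only
        "n_estimators": Integer(10, 300),
        "max_depth": Integer(1, 6),
        "learning_rate": Real(1e-3, 0.5, prior="log-uniform"),
        "subsample": Real(0.5, 1.0),
    }
    # 50 iterations of Bayesian search with 3-fold cross-validation, as in the main text.
    search = BayesSearchCV(XGBClassifier(), search_space, n_iter=50, cv=3, n_jobs=-1)
    search.fit(X_leaf, y_leaf)
    return search.best_estimator_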
The same process was used to extend all tested trees. In leaves with an insufficient amount of samples to perform the cross-validation (less than 3 samples of at least one class in our case), we train an XGBoost model with a single tree of max depth 5. In leaves with 100% training accuracy, we do not learn any model, and use the majority class. §.§.§ CART models For the hyperparameter optimization of CART models, we also used Bayesian search, with the distributions shown in the table <ref>. The search was run for 100 iterations, with 5-fold cross-validation on the same training data sets as our model. After this search, the best hyperparameters were used to train the model on the full training data. The resulting tree was reduced and every leaf was extended by an XGBoost model in the same way as our models. In comparisons in later Section <ref>, we optimized a deeper variant of the CART. The process was the same, except for initial distributions of hyperparameters for Max depth and Max leaf nodes. Those were UniformInteger [2, 20] and UniformInteger [8, 512] respectively. §.§ Detailed results We also provide the full results for each dataset. Figures <ref> and <ref> are decomposed variants of Figure <ref> for categorical and numerical datasets respectively. We also provide exact results, in Tables <ref> and <ref> respectively. The detailed results show that the proposed model outperforms the CART model in both accuracy measures on almost all datasets and has comparable accuracy to XGBoost. §.§ Other optimization approaches The best-performing approach of warmstarting the MIP solver with a CART solution is not the only one we tested. In Figure <ref>, we see a comparison of three different approaches to optimization. * Direct refers to the straightforward use of the MIP formulation. * Warmstarted uses a simple CART solution (created using default hyperparameters) as a starting point of the solving process. * Gradual refers to a special process where we start by training a tree with a depth equal to 1 and use the solution found in some given time to start the search for a tree with a depth of 2 and so forth until we reach the desired depth. All of the three approaches were run with the same resources. This meant that even the gradual approach took 8 hours in total. The time was distributed in a way that the available time for the optimization process doubled with each increase in depth. This means 32 minutes for the first run, 64 minutes for the tree of depth 2, 128 for depth 3, and 4 hours 16 minutes for the final tree with depth 4. Interestingly, while the direct approach understandably does not reach a performance similar to the warmstarted variant, the gradual approach shows more promise. It has higher hybrid-tree accuracy by another 0.2 percentage points on average while having lower leaf accuracy by about 1.2 percentage points compared to the warmstarted approach (cf. Table <ref>). §.§ Ablation Analyses We provide some comparing experiments performed by changing a single hyperparameter and 2 closely related hyperparameters, in the case of CART depth. §.§.§ Unlimited depth CART An argument could be made against our choice to compare our method to CART trees with the same limit on depth. Figure <ref> and Table <ref> in more detail show a comparison of CART models with a maximal depth of 4 and a maximal depth of 20. The actual depth limit for each model was optimized along with other hyperparameters using the Bayes hyperparameter optimization procedure. 
More details The aggregated results show worse performance regarding both leaf accuracy and hybrid-tree accuracy. Not only do the deeper trees perform worse, but the length of provided explanations is also well above the 5-9 threshold suggested as the limit of human understanding <cit.>. §.§.§ Non-warmstarted OCT We compare our method to warmstarted OCT because both start from the same CART initial solution. This makes them more comparable. However, we also tested the OCT variant, directly optimized from the MIP formulation. See the results in Figure <ref>. Both OCT models were run with the same hyperparameters as the proposed model. Those being heuristics-oriented solver, depth equal to 4, and a minimal amount of samples in leaves equal to 50. The average OCT performs worse than all our approaches (cf. Figure <ref>), but the improvement from the warmstarted variant is intriguing. Especially considering that it is not caused by the direct OCT method's inability to create complex trees without warmstarting. This is supported by Figure <ref> showing a distribution of leaves similar to the distribution of CART trees (cf. Figure <ref>). This suggests that the OCT trees have comparable tree complexity to CART and provide more valid explanations than CART, even without our extension to the formulation. This is an interesting result, considering the fact that neither CART nor OCT methods optimize for leaf accuracy. Our model, however, almost doubles the improvement of direct OCT. Its improvement is similar in the number of percentage points to OCT's improvement on CART. §.§.§ No minimum number of samples in leaves This comparison, see Figure <ref>, shows the importance of setting a minimal amount of samples in leaves. Without enough points to support the leaf's accuracy, it is more likely to be overfitted. On the other hand, when choosing the N_min parameter too high, we restrict some possibly beneficial splits, supported by a smaller amount of training data. N_min is a critical hyperparameter, and further testing could provide more insight into the proposed model's performance. §.§.§ Deeper trees Lastly, we provide a comparison of the proposed model of depths 4 and 5. Figure <ref> shows better overall results for shallower trees. This is likely caused by the exponential increase in memory requirements, given the decrease in overall accuracy as well. We provide data about its memory usage in Figure <ref>. With a model of twice the complexity, the solver struggles to achieve comparable results to the shallower proposed model. This is certainly a topic of further exploration by incorporating scalability improvements proposed in the literature. §.§ Fixed hyperparameters CART We compare our model to a CART model with optimized hyperparameters, as is common in practice. However, suppose we exclude the depth and minimum of samples in a leaf from the optimization and set them to reflect our settings for the proposed model. In that case, we achieve improved accuracy of the CART models as visualized in Figure <ref> and expressed more clearly in Table <ref>. The proposed model still performs significantly better, by 6 to 8 percentage points in leaf accuracy, with maxima at 18 percentage points. [These results were obtained after the Paper Submission Deadline. That is why they are not presented in the main body of the paper.] §.§ More data The 10,000 size limit on training samples was suggested by the authors of the benchmark <cit.>. 
Another good reason for such a limit is that we want our model to balance the size of the formulation and the capability of the formulated model. In other words, if we take a small amount of data, we are less likely to grasp the intricacies of the target variable distribution within the dataset. And if we take too many samples, we create a formulation that will not achieve good performance in a reasonable time. In a comparison of a model learned on a training dataset limited to 10,000 samples with a dataset limited to 50,000 samples, we see that more data does not necessarily lead to a better model, given same time resources, see Figure <ref>. The 50,000 model is worse because of the too-demanding complexity of the formulation. It improves the model accuracy, which is unsurprising since each leaf obtains more samples. The comparison to XGBoost is unreliable since the mean value for XGBoost was computed from the performance of models trained on at most 10,000 samples.
http://arxiv.org/abs/2306.05836v1
20230609120915
Can Large Language Models Infer Causation from Correlation?
[ "Zhijing Jin", "Jiarui Liu", "Zhiheng Lyu", "Spencer Poff", "Mrinmaya Sachan", "Rada Mihalcea", "Mona Diab", "Bernhard Schölkopf" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Causal inference is one of the hallmarks of human intelligence. While the field of CausalNLP has attracted much interest in recent years, existing causal inference datasets in NLP primarily rely on discovering causality from empirical knowledge (e.g., commonsense knowledge). In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs). Specifically, we formulate a novel task, Corr2Cause, which takes a (set of) correlational statements and determines the causal relationship between the variables. We curate a large-scale dataset of more than 400K samples, on which we evaluate seventeen existing LLMs. Through our experiments, we identify a key shortcoming of LLMs in terms of their causal inference skills, and show that these models achieve close to random performance on the task. This shortcoming is somewhat mitigated when we try to re-purpose LLMs for this skill via finetuning, but we find that these models still fail to generalize – they can only perform causal inference in in-distribution settings when variable names and textual expressions used in the queries are similar to those in the training set, but fail in out-of-distribution settings generated by perturbing these queries. Corr2Cause is a challenging task for LLMs, and would be helpful in guiding future research on improving LLMs' pure reasoning skills and generalizability.[ Our data is at <https://huggingface.co/datasets/causalnlp/corr2cause>. Our code is at <https://github.com/causalNLP/corr2cause>. Our code and data have been uploaded to the submission system, and will be open-sourced upon acceptance. ] § INTRODUCTION Causal inference is a crucial reasoning ability of human intelligence. It is a fundamental aspect of reasoning that involves establishing the correct causal relationships between variables or events. Roughly, there are two distinct ways to obtain causality: one through empirical knowledge, e.g., we know from common sense that preparing a birthday party for a friend will make them happy; the other through pure causal reasoning, as causality can be formally argued and reasoned about using known procedures and rules from causal inference <cit.>. For example, we know that merely observing that A correlates with B does not mean that A causes B. We also know another property from pure causal inference, specifically the study of causal discovery <cit.>, that if A and B are originally independent of each other, but become correlated given C, then we can infer that, in this closed system, C is a common effect of A and B, as illustrated in <ref>. This collider phenomenon can be used to deny the causation between A and B, regardless of what realizations the variables A, B, and C take. We formulate this task as a new task for NLP, namely correlation-to-causation inference (Corr2Cause), and argue that this is a must-have skill for large language models (LLMs). Imagine the scenario in <ref>, where in the training corpus there are a large number of correlations, such as the word vaccine correlating with an increased number of disease cases.
If we take the position that the success of LLMs <cit.> lies in capturing a vast set of statistical correlations among terms <cit.>, then the crucial yet missing step is how to process such correlations and infer causal relationships, for which a fundamental building block is this inference skill. To this end, we collect the first dataset, Corr2Cause, to test the pure causal reasoning abilities of large language models. All the questions in this dataset are centered around testing when it is valid or invalid to infer causation from correlation. To systematically compose this dataset, we ground our generation process in the formal framework of causal discovery <cit.>, which provides rules about how to deduce causal relations among variables given their statistical correlations in the observational data. We generate more than 400K data points, and label a correlation-causation statement pair as valid if and only if there is a bijective mapping between the statistical correlation and the underlying causality. Based on our dataset with 400K samples, we investigate two main research questions: (1) How well do existing LLMs perform on this task? (2) Can existing LLMs be re-trained or re-purposed on this task and obtain robust causal inference skills? Through extensive experiments, we show empirically that none of the seventeen existing LLMs we investigate perform well on this pure causal inference task. We also show that although LLMs can demonstrate better performance after being finetuned on the data, the causal inference skills attained by them are not robust. In summary, our contributions are as follows: * We propose the novel task of Corr2Cause, to probe an aspect of LLMs' reasoning ability, pure causal inference; * We compose a dataset of over 400K samples, using insights from causal discovery; * We evaluate the performance of seventeen LLMs on our dataset, finding that all of them perform poorly, close to the random baseline. * We further explore whether LLMs can learn the skill through finetuning, find that LLMs fail to robustly master the skill under out-of-distribution perturbations, and suggest that future work explore more ways to enhance the pure causal inference skill in LLMs. § PRELIMINARIES: CAUSAL INFERENCE §.§ Directed Graphical Causal Models (DGCMs) A directed graphical causal model (DGCM) is a commonly used representation to express the causal relations among a set of variables. Given a set of N variables X = {X_1, …, X_N }, we can encode the causal relations among them using a directed graph 𝒢 := (X, E), where E is the set of directed edges. Each edge e_i,j∈E represents a causal link X_i → X_j, meaning that X_i is a direct cause of X_j. In the context of this work, we take the common assumption of directed acyclic graphs (DAGs), which most causal discovery methods use <cit.>, as graphs with cycles can make the causal discovery process arbitrarily hard. Following graph-theoretic terminology, we use the analogy of an ancestry tree to denote the relations between two variables. For example, we call X_i a parent of X_j if there is a directed edge X_i → X_j in the graph, and, thus, X_j is a child of X_i. Similarly, we denote X_i as an ancestor of X_j if there exists a directed path from X_i to X_j, and, thus, X_j is a descendant of X_i. Note that a parent is a special case of an ancestor where the directed path has a length of 1. For convenience, we also introduce the notions for some special three-variable relations.
Given two variables X_i and X_j, we call a third variable X_k a confounder (i.e., common cause) if X_k is a parent of both X_i and X_j; a collider (i.e., common effect) if X_k is a child of both X_i and X_j; and a mediator if X_k is both a child of X_i, and a parent of X_j. §.§ D-Separation and Markov Property D-Separation D-separation <cit.> is a fundamental concept in graphical models used to determine whether two sets of nodes X and Y in a DAG 𝒢 are conditionally independent given a third set of nodes Z, where the three sets are disjoint. We say that X and Y are d-separated by Z if all paths between any node in X and any node in Y are blocked by the conditioning set Z. A path between X and Y is blocked by Z if there exists a node A∈Z which satisfies one of the following conditions: A is the parent node in a fork structure on the path (i.e., ·← A →·); A is the mediator node in a chain structure on the path (i.e., ·→ A →·); or in any collider structure on the path (i.e., ·→ A ←·), Z does not contain A or its descendants. Markov Property The Markov property in a DAG 𝒢 states that each node X_i is conditionally independent of its non-descendants given its parents, namely X_i ⊥⊥(X_i) | (X_i), where (X_i) denotes the non-descendants of X_i excluding itself, and (X_i) denotes the parents of X_i. Using the Markov property, we can factorize the joint distribution of all the nodes in the graph into P(X_1, …, X_N) = ∏_i=1^N P(X_i | PA(X_i) ). To infer the causal graph from probability distributions, a common assumption is faithfulness, namely the validity to infer all the d-separation sets in the graph from the independence relations in the probability distribution. In our work, we also take this broadly taken assumption which holds for most real-world scenarios. Markov Equivalence of Graphs We denote two DAGs as Markov equivalent if they induce the same joint distribution P(X). The set of DAGs that are Markov equivalent to each other is called a Markov equivalence class (MEC). Causal graphs in the same MEC can be easily identified since they have the same skeleton (i.e., undirected edges) and V-structures (i.e., structures in the form of A→ B ← C where A and C are not connected). Obviously, there is a one-to-many mapping (i.e., surjection) between the causal graph and statistical distribution. Namely, each causal graph sufficiently determines a statistical distribution, but from a statistical distribution, we cannot necessarily induce a unique causal graph. This is why we say “correlation does not necessarily mean causation”. §.§ Causal Discovery Causal discovery aims to learn the causal relations by analyzing statistical properties in the observational data <cit.>. It can be achieved through constraint-based methods <cit.>, score-based methods <cit.>, or other methods taking advantage of the functional causal models <cit.>. To fit for the spirit of this paper to infer from correlation (expressed in natural language) to causation, we base our dataset design on the widely-used Peter-Clark (PC) algorithm <cit.>. The PC algorithm is based on the principles of conditional independence and the causal Markov assumption, which allows it to efficiently identify causal relationships among variables in a given dataset. The algorithm first starts with a fully connected undirected graph among all the variables. Then it removes the edge between two variables if there is an unconditional or conditional independence relationship between them. 
Afterwards, it orients the directed edges whenever there is a V-structure. And finally, it iteratively checks the direction of the other edges until the entire causal graph is consistent with all the statistical correlations. § DATASET CONSTRUCTION We introduce the construction of our dataset in this section. We start with our task formulation for , and then briefly give an overview of the data generation process, followed by detailed descriptions of each step. We conclude the section with the overall statistics of the dataset. §.§ Task Formulation Given a set of N variables X={X_1, …, X_N}, we have a statement s about all the correlations among the variables, and a hypothesis h describing the causal relation r between the pair of variables X_i and X_j. The task is to learn a function f: (s, h) ↦ v which maps the correlation statement s and the causal relation hypothesis h to their validity v ∈{0, 1}, which takes the value 0 if this inference is invalid, and the value 1 if this inference is valid. §.§ Overview of the Data Generation Process We base the construction our dataset on several concepts of causal inference, including the DGCM, d-separation, and MECs, as introduced in <ref>. As in the overview of our data generation process in <ref>, we first choose the number N of variables (Step 1) and generate all the unique DGCMs with N nodes (Step 2), which we will introduce in the <ref>. Then we collect all the d-separation sets from these graphs to identify MECs (Step 3) in <ref>. Then, in Step 4, we create the formal form of data in <ref>. For each correspondence of the MEC to causal graphs, we compose the correlation statement based on the statistical relations in the MEC, and hypothesize a causal relation between two variables, and produce the validity v=1 if the hypothesis is a shared property of all causal graphs in the MEC, and v=0 if the hypothesis is not necessarily true for all the MEC graphs. Finally, we introduce the verbalization process in <ref>. §.§ Constructing the Graphs with Isomorphism Checks The first step of the data generation is to compose the causal graphs, as in Step 1 and 2 of <ref>. For a set of N variables X = {X_1, …, X_N}, there are N(N-1) possible directed edges, since each node can link to any node other than itself. To remove cycles in the graph, we make the nodes in topological order, which only allows edges X_i → X_j, where i<j. We achieve this by limiting the adjacency matrix of the graph to only having non-zero values above the diagnal, resulting in N(N-1)/2 possible directed edges for the DAGs. At the first glance, for N nodes, there should be 2^N(N-1)/2 possible DAGs (i.e., the power set of all edges). However, there could be isomorphic graphs in this set. To avoid this, we perform a graph isomorphism check <cit.>, and reduce the set so that only unique DAGs are retained, and we show their statistics in <ref>. Although we can handle large graphs, we mostly focus on smaller graphs that can still lead to a reasonably sized dataset, so we empirically set N=6, but future work can use our open-sourced codes to extend to more nodes. §.§ Programmatically Generating the D-Separation Sets Based on the set of unique DAGs, we then programmatically generate the d-separation sets by graph theoretical conditions, as in Step 3 of <ref>. To realize this step, we code an efficient graph-theoretic algorithm to check for all the chain, fork, and collider structures to automatically identify the set of nodes that d-separate each pair of nodes. 
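To make Steps 1–3 concrete, the following Python sketch enumerates topologically-ordered DAGs with isomorphism de-duplication and derives d-separation-based independence facts. It is only an illustration of the procedure described above, not the released Corr2Cause code, and it assumes a networkx version that provides the d_separated function.

# Illustrative sketch (not the released Corr2Cause code): enumerate DAGs over N
# nodes whose edges respect a fixed topological order, deduplicate isomorphic
# graphs, and compute d-separation-based independence facts with networkx.
from itertools import combinations
import networkx as nx

def all_dags(n):
    """Yield unique (up to isomorphism) DAGs on n labeled nodes."""
    nodes = list(range(n))
    possible_edges = [(i, j) for i in nodes for j in nodes if i < j]  # upper-triangular
    seen = []
    for k in range(len(possible_edges) + 1):
        for edges in combinations(possible_edges, k):
            g = nx.DiGraph()
            g.add_nodes_from(nodes)
            g.add_edges_from(edges)
            # pairwise isomorphism check; fine for the small N used here
            if any(nx.is_isomorphic(g, h) for h in seen):
                continue
            seen.append(g)
            yield g

def independence_facts(g):
    """Return {(i, j): all conditioning sets Z with i d-separated from j given Z}.
    An empty list means i and j cannot be d-separated (directly correlated)."""
    nodes = list(g.nodes)
    facts = {}
    for i, j in combinations(nodes, 2):
        rest = [v for v in nodes if v not in (i, j)]
        seps = []
        for r in range(len(rest) + 1):
            for z in combinations(rest, r):
                if nx.d_separated(g, {i}, {j}, set(z)):
                    seps.append(set(z))
        facts[(i, j)] = seps
    return facts

if __name__ == "__main__":
    dags = list(all_dags(3))
    print(f"{len(dags)} unique DAGs with 3 nodes")
    collider = nx.DiGraph([(0, 2), (1, 2)])            # X1 -> X3 <- X2
    print(nx.d_separated(collider, {0}, {1}, set()))   # True: marginally independent
    print(nx.d_separated(collider, {0}, {1}, {2}))     # False: dependent given the collider
    print(independence_facts(collider)[(0, 1)])        # [set()] - only the empty set separates them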
Using the d-separation sets and the faithfulness assumption, we form the statistical correlations as follows. For each pair of nodes, they are conditionally independent given the variables in the d-separation set. If the d-separation set is empty, then the two nodes are unconditionally independent. If no d-separation set can be found for the two nodes, then they are directly correlated. Moreover, using the d-separation sets, we are able to cluster causal graphs to MECs. We achieve it by tracing the mapping between the causal graphs and the set of statistical correlations, and backtracking the graphs with the same d-separation sets to group them in the same MEC. We show in <ref> that each MEC contains on average 2.66 DAGs. §.§ Composing the Hypotheses and Label After generating the set of correlations based on the d-separation sets, we now generate the causal hypotheses. For the causal relation r, we focus on six common causal relations between two nodes introduced in <ref>: Is-Parent, Is-Child, Is-Ancestor (excluding the parents), Is-Descendant (excluding the children), Has-Confounder (i.e., there exists a confounder, or common cause, of the two nodes), and Has-Collider (i.e., there exists a collider, or common effect, of the two nodes). In this way, the set of hypotheses contains all six meaningful causal relations between every pair of variables, resulting in a total size of 6 · N (N-1)/2 = 3N(N-1) hypotheses for a graph with N variables. To generate the ground-truth validity label, we start from the correlation sets in Step 3, then look up all the causal graphs in the same MEC corresponding to the given set of correlations, and check the necessity of the hypothesized causal relation. If the causal relationship proposed in the hypothesis is valid for all causal graphs within the MEC, then we generate the validity v=1; otherwise, we generate v=0. A special case of valid samples is that when the size of the MEC is 1, then there is a bijective mapping between the causal graph and the d-separation sets, so any hypothesis stating the causal properties of that unique causal graph is valid. §.§ Verbalizing into Natural Language Finally, as in the last step of <ref>, we convert all the information above to text data for our task. For the correlation statement, we verbalize the set of correlations in Step 3 into a natural language statement s. When two variables can not be d-separated, i.e., A ⊥̸⊥ B, then we describe them as “A correlates with B” since they are directly correlated and cannot be independent by any condition. And if two variables have a valid d-separation set C, then we describe them as “A is independent of B given C.” In the special case when the d-separation set is empty, we directly say “A is independent of B.” In addition, we disambiguate the setting by starting the correlation statement with the setup of a closed system of the given variables, and no hidden variables: “Suppose there is a closed system of N variables, A, B, … All the statistical relations among these N variables are as follows:”. Finally, to verbalize the hypothesis, we feed the causal relation triplet (X_i, r, X_j) into their hypothesis templates in <ref>. For example, we turn the triplet (A, Is-Parent, B) into “A directly causes B”, as in the example of <ref>. §.§ Statistics of the Resulting Data We show the statistics of our dataset in <ref>. Overall, our dataset contains 415,944 samples, with 18.57% in valid samples. The average length of the premise is 424.11 tokens, and hypothesis 10.83 tokens. 
We split the data into 411,452 training samples and 2,246 samples each for the development and test sets. Since the main purpose of the dataset is to benchmark the performance of LLMs, we prioritize the test and development sets to have comprehensive coverage over all graph sizes. Specifically, we iterate through the subset of our data for each N, and split it entirely between the test and development sets if the subset contains fewer than 1K samples, which is the case for N=2 and 3. For the other, larger subsets, we randomly sample up to 1K or 10% of the data, whichever is smaller, into the test and development sets. We set the cap at 1K in order to keep a reasonable computation budget, since many LLMs are expensive to query in inference mode. Aside from the test and development sets, all the rest of the data goes into the training set. § EXPERIMENTS §.§ Experimental Setup We set up a diverse list of LLMs for the experiments on our dataset. To test existing LLMs, we first include six commonly used BERT-based NLI models in the transformers library <cit.> with the largest numbers of downloads: BERT <cit.>, RoBERTa <cit.>, BART <cit.>, DeBERTa <cit.>, DistilBERT <cit.>, and DistilBART <cit.>. Apart from these BERT-based NLI models, we also evaluate the general-purpose autoregressive LLMs based on GPT <cit.>: GPT-3 Ada, Babbage, Curie, Davinci <cit.>; its instruction-tuned versions <cit.>, text-davinci-001, text-davinci-002, and text-davinci-003; and GPT-3.5 (i.e., ChatGPT), and the latest GPT-4 <cit.>, using the OpenAI API[<https://openai.com/api/>] with temperature 0. We also evaluate the recent, more efficient models LLaMa <cit.> and Alpaca <cit.>. When inspecting the behavior of finetuned models, we adopt a large set of models, including GPT-based models (GPT-3 Ada, Babbage, Curie, and Davinci) using the OpenAI finetuning API for classification,[<https://platform.openai.com/docs/guides/fine-tuning>] BERT-based models from scratch (BERT-Base, BERT-Large, RoBERTa-Base, and RoBERTa-Large), and BERT-based NLI models (BERT-Base MNLI, BERT-Large MNLI, RoBERTa-Base MNLI, and RoBERTa-Large MNLI) using the transformers library <cit.>. Our training details are available in <ref>. For the random baselines, we provide “always majority”, which predicts the majority class 100% of the time; “random (uniform)”, which randomly samples a label with 50% chance for each; and “random (proportional)”, which samples a label from a Bernoulli distribution proportional to the development set label distribution. §.§ The Corr2Cause Skill in Existing LLMs We show the performance of LLMs in <ref>. We can see that pure causal inference is a very challenging task across all existing LLMs. Among all the LLMs, the best performance is 33.38% F1 by BART MNLI, which is even higher than that of the latest GPT-based model, GPT-4. Notably, many models perform worse than random guessing, which means that they completely fail at this pure causal inference task. §.§ Finetuned Performance Next, we address the question: Can we re-purpose LLMs to learn this task? The experimental results in <ref> of 12 models finetuned on our data seem very strong at first sight. Most models see a substantial increase, among which the finetuned BERT-based NLI models demonstrate the strongest performance. The best-performing one, RoBERTa-Large MNLI, achieves a 94.74% F1 score on this task, as well as very high precision, recall and accuracy scores.
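For concreteness, a finetuning run of the kind reported here could be set up with the transformers Trainer roughly as sketched below. This is not the authors' training script; the dataset field names (premise, hypothesis, label) and the validation split name are assumptions, and the hyperparameters are taken from the grid reported in the appendix.

# Illustrative sketch only (not the authors' script): finetuning a RoBERTa-MNLI
# checkpoint as a binary valid/invalid classifier with transformers.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "roberta-large-mnli"
data = load_dataset("causalnlp/corr2cause")   # fields assumed: premise, hypothesis, label
tok = AutoTokenizer.from_pretrained(MODEL)

def encode(batch):
    return tok(batch["premise"], batch["hypothesis"], truncation=True, max_length=512)

data = data.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=2, ignore_mismatched_sizes=True)  # re-purpose the 3-way NLI head

args = TrainingArguments(
    output_dir="corr2cause-roberta",
    per_device_train_batch_size=8,     # batch size reported in the appendix
    learning_rate=1e-5,                # one value from the reported tuning grid
    num_train_epochs=5,                # convergence is reported around five epochs
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["validation"],
                  tokenizer=tok)
trainer.train()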
§.§ Fine-Grained Performance by Causal Relation In addition to the overall results mentioned above, we also conduct a fine-grained analysis to check the performance of the strongest model, RoBERTa-Large MNLI, by our six causal relation types. As in <ref>, the model is very good at judging relations such as Is-Parent, Is-Descendant and Has-Confounder, all with more than 96% F1 scores, whereas it is several points weaker on the Has-Collider relations. This could be because the collider relation is the most special type, requiring identification of the V-structure based on both the unconditional independence of the two variables and the correlations that arise whenever we condition on a common descendant. §.§ Robustness Analysis Looking at the very high performance of the finetuned models, we raise the next question: Did the models really robustly learn the causal inference skills? Two Robustness Tests We design two simple robustness tests: (1) paraphrasing, and (2) variable refactorization. For (1) paraphrasing, we simply paraphrase the hypothesis by changing the text template for each causal relation to some semantically equivalent alternatives in <ref>. For (2) variable refactorization, we reverse the alphabet of the variable names, namely flipping A, B, C to Z, Y, X, and so on. The inspiration behind the two robustness tests comes from the spurious correlation analysis described in <ref>. Specifically, we adopt the common setup of text adversarial attacks <cit.>: we preserve the training set and keep the same saved models, but run inference on the perturbed test set. In this way, we separate the possibility of the models merely overfitting on the training data from that of mastering the reasoning skill. Results After Perturbation We can see from <ref> that all the models drop drastically, by up to 39.29 points when we paraphrase the test set, and by up to 58.38 points when we refactor the variable names. The best-performing model, RoBERTa-Large MNLI, is especially sensitive to paraphrasing, showing the largest drop among all models; however, it is the most robust against variable refactorization, maintaining a high F1 score of 67.87. We conduct a fine-grained analysis for RoBERTa-Large MNLI under perturbation in <ref>. We can see that the main source of the model's performance drop comes from two classes, Is-Ancestor (decreasing to 45.45%) and Is-Descendant (decreasing to 29.41%), while the other classes stay relatively robust, keeping their F1 scores above 70%. From this analysis, we make the following suggestions for future studies testing this skill of LLMs. First, it is safe to use our data as a test set to benchmark existing LLMs' performance, since the data we generate is out-of-distribution with respect to the training data of current LLMs. Second, when testing finetuned models, it is very important to accompany the i.i.d. test set with adversarial attacks. We also provide our perturbed versions of the test set in our data for future work to test generalizability.
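The two perturbations can be reproduced with a few lines of string manipulation, as sketched below; the paraphrase alternatives shown are illustrative placeholders rather than the exact templates listed in <ref>.

# Sketch of the variable-refactorization perturbation described above: variable
# names A, B, C, ... are flipped to Z, Y, X, ... in premise and hypothesis.
# The paraphrase alternatives here are hypothetical examples only.
import re
import string

REVERSED = {u: l for u, l in zip(string.ascii_uppercase,
                                 reversed(string.ascii_uppercase))}  # A->Z, B->Y, ...

def refactor_variables(text: str) -> str:
    # Only rewrite stand-alone single-letter variable names such as "A" or "B".
    return re.sub(r"\b([A-Z])\b", lambda m: REVERSED[m.group(1)], text)

PARAPHRASES = {  # hypothetical semantically-equivalent alternatives for one relation
    "directly causes": ["is a direct cause of", "directly affects"],
}

def paraphrase(hypothesis: str) -> str:
    for original, alternatives in PARAPHRASES.items():
        if original in hypothesis:
            return hypothesis.replace(original, alternatives[0])
    return hypothesis

premise = "Suppose there is a closed system of 3 variables, A, B and C. A correlates with C."
hypothesis = "A directly causes B."
print(refactor_variables(premise))      # ... Z, Y and X. Z correlates with X.
print(refactor_variables(hypothesis))   # Z directly causes Y.
print(paraphrase(hypothesis))           # A is a direct cause of B.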
§ RELATED WORK Existing Causal Reasoning Tasks A large body of existing research on causal reasoning in NLP focuses on leveraging empirical knowledge to do tasks such as inferring the cause and effect of why an agent performs certain tasks <cit.>, the motivation and emotional reaction in a social context <cit.>, how people achieve a given goal with a set of concrete steps <cit.>, the development of a story given a different beginning <cit.>, and how in general LLMs serve as a knowledge base of cause and effect <cit.>. In contrast, our task focuses on the pure causal inference skill of models, which is a knowledge-independent reasoning skill based on formally correct rules from causal inference. Existing Logical and Inference Tasks Another related area of literature is logical and inference tasks. A well-established task is natural language inference (NLI), which identifies the semantic relationship between a pair of sentences <cit.>. NLI datasets mainly focus on set and paraphrase relations; for example, “a group of boys are playing football” can entail “some guys are playing football,” where “boys” is a sub-concept of “guys”, and “a group of” and “some” are paraphrases. Existing datasets cover entailment in news articles <cit.>, image captions <cit.>, and across multiple genres <cit.>. Recently, there have been increasing efforts to extend the inference task to various logical inference skills such as deductive logic and propaganda techniques <cit.>. Our dataset is the first to test the correlation-to-causation inference skill, and is unique of its kind. § LIMITATIONS AND FUTURE WORK We identify several limitations of this work and open future directions: First, in the context of this work, we limit the causal graphs to two to six nodes, but future work is free to explore larger graphs. Another aspect is that we do not assume hidden confounders in this inference problem, so we welcome future work to generate an even more challenging dataset that requires inferring the existence of hidden confounders, analogous to the causal discovery algorithm of fast causal inference (FCI) <cit.>. Finally, much of the motivation behind proposing this task comes from the problem of invalid reasoning patterns in our daily reasoning <cit.>, which could fertilize the ground for a more pervasive spread of fake news. We believe false causal inference is a prevalent type of fallacious belief, and welcome future work to connect the idea of this benchmark to more real-world false beliefs based on confusing correlation with causation. § CONCLUSION In this work, we introduced a novel task, Corr2Cause, to infer causation from correlation, and collected a large-scale dataset of more than 400K samples. We evaluated an extensive list of LLMs on this new task, and showed that off-the-shelf LLMs perform poorly on it. We also showed that it is possible to re-purpose LLMs for this task by finetuning, but future work needs to be aware of the out-of-distribution generalization problem. To avoid Goodhart's law, we recommend using this dataset to benchmark the pure causal inference skills of LLMs that have not seen this dataset. Given the limited reasoning abilities of current LLMs, and the difficulty of separating actual reasoning from training-corpus-derived knowledge, it is imperative that our community focus on work aiming to accurately disentangle and measure both abilities. We believe that the present work is a first such step.
§ ACKNOWLEDGMENT We thank Riley Goodside for valuable suggestions to improve our prompts to LLMs. We thank Luigi Gresele and Amir Hossein Karimi for their suggestions to help us improve the formulation of our causal discovery questions. This material is based in part upon work supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; by the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645; by the Precision Health Initiative at the University of Michigan; by the John Templeton Foundation (grant #61156); by a Responsible AI grant by the Haslerstiftung; and an ETH Grant (ETH-19 21-1). Zhijing Jin is supported by PhD fellowships from the Future of Life Institute and Open Philanthropy. We also thank OpenAI for granting Zhijing quota for their GPT-series API through the Researcher Access Program. § IMPLEMENTATION DETAILS When finetuning on our data, for GPT-based models, we use the default settings of the OpenAI finetuning API; for BERT-based models, we use the transformers library <cit.> and train the models on a server with an NVIDIA Tesla A100 GPU with 40G of memory. To fit within GPU memory, we set the batch size to 8. We use the validation set to tune the learning rate, which takes values in {2e-6, 5e-6, 1e-5, 2e-5, 5e-5}; the dropout rate, which takes values in {0, 0.1, 0.2, 0.3}; and the weight decay, which takes values in {1e-4, 1e-5}. We train the models until convergence, which usually takes around five epochs. § TEMPLATES AND PARAPHRASES We use the verbalization templates in <ref> to compose the hypotheses for all six causal relations. § SPURIOUS CORRELATION ANALYSIS The inspiration for our two robustness tests (paraphrasing and variable refactorization) comes from our data analysis. We check for spurious correlations in the data by reporting in <ref> the point-wise mutual information (PMI) between the label and any n-gram with no more than four tokens. In addition, we also report the difference between the PMI values for the two labels in the |Diff| column of <ref>, and report the top 10 n-grams. The design spirit of our robustness tests is that if the models' correct judgments rely on exploiting these spurious correlations, then such reliance will be broken by our perturbations. We can see that some spurious correlations are rooted in the framing of the hypothesis, such as “a cause (for)” and “a direct (one)” (which we use the paraphrasing test to break), and others are connected to the variable names, such as “for D (but)” and “for E (but)” (which we use the variable refactorization test to break).
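As a minimal illustration of the PMI analysis above (not the original analysis code), the following sketch computes label/n-gram PMI and the |Diff| ranking on toy examples; the exact counting convention used in the paper may differ.

# Sketch of the label/n-gram PMI computation described above:
# PMI(label, ngram) = log p(label, ngram) / (p(label) * p(ngram)).
import math
from collections import Counter

def ngrams(tokens, max_n=4):
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

def pmi_table(samples):
    """samples: list of (hypothesis_text, label) pairs with label in {0, 1}."""
    joint, ngram_cnt, label_cnt, total = Counter(), Counter(), Counter(), 0
    for text, label in samples:
        for g in set(ngrams(text.split())):
            joint[(g, label)] += 1
            ngram_cnt[g] += 1
            label_cnt[label] += 1
            total += 1
    pmi = {}
    for (g, label), c in joint.items():
        p_joint = c / total
        p_g = ngram_cnt[g] / total
        p_l = label_cnt[label] / total
        pmi[(g, label)] = math.log(p_joint / (p_g * p_l))
    return pmi

samples = [("A directly causes B", 1), ("There exists a collider for D but not E", 0)]
table = pmi_table(samples)
diff = {g: abs(table.get((g, 1), 0.0) - table.get((g, 0), 0.0)) for g, _ in table}
print(sorted(diff.items(), key=lambda kv: -kv[1])[:10])   # top-10 n-grams by |Diff|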
http://arxiv.org/abs/2306.02421v1
20230604175330
Auto-Validate by-History: Auto-Program Data Quality Constraints to Validate Recurring Data Pipelines
[ "Dezhan Tu", "Yeye He", "Weiwei Cui", "Song Ge", "Haidong Zhang", "Han Shi", "Dongmei Zhang", "Surajit Chaudhuri" ]
cs.DB
[ "cs.DB", "cs.LG" ]
Work done at Microsoft. University of California, Los Angeles; Microsoft Research Data pipelines are widely employed in modern enterprises to power a variety of Machine-Learning (ML) and Business-Intelligence (BI) applications. Crucially, these pipelines are recurring (e.g., daily or hourly) in production settings to keep data updated, so that ML models can be re-trained regularly and BI dashboards refreshed frequently. However, data quality (DQ) issues can often creep into recurring pipelines because of upstream schema and data drift over time. As modern enterprises operate thousands of recurring pipelines, today data engineers have to spend substantial effort to manually monitor and resolve DQ issues, as part of their DataOps and MLOps practices. Given the high human cost of managing large-scale pipeline operations, it is imperative to automate as much of this as possible. In this work, we propose Auto-Validate-by-History (AVH), which can automatically detect DQ issues in recurring pipelines, leveraging rich statistics from historical executions. We formalize this as an optimization problem, and develop constant-factor approximation algorithms with provable precision guarantees. Extensive evaluations using 2000 production data pipelines at Microsoft demonstrate the effectiveness and efficiency of AVH. Auto-Validate by-History: Auto-Program Data Quality Constraints to Validate Recurring Data Pipelines Yeye He, Weiwei Cui, Song Ge, Haidong Zhang, Han Shi, Dongmei Zhang, Surajit Chaudhuri July 31, 2023 ==================================================================================================== § INTRODUCTION Data pipelines are the crucial infrastructure underpinning the modern data-driven economy. Today, data pipelines are ubiquitous in large technology companies such as Amazon, Google and Microsoft to power data-hungry businesses like search and advertisement <cit.>. Pipelines are also increasingly used in traditional enterprises across a variety of ML/BI applications, in a growing trend to democratize data <cit.>. Production data pipelines are often inter-dependent, forming complex “webs”, where input tables used by downstream pipelines frequently depend on output tables from upstream pipelines. Furthermore, these pipelines are often configured to recur on a regular basis (e.g., hourly or daily), to ensure data stay up-to-date for downstream use cases (e.g., fresh data enables ML models to be re-trained regularly, and BI dashboards refreshed continuously). Recurring Pipelines: Prone to Fail due to DQ. The recurring and inter-dependent nature of production pipelines makes them vulnerable to failure due to data quality (DQ) issues, because over time unexpected DQ issues, such as data drift <cit.> and schema drift <cit.>, can creep in, causing cascading issues in pipelines. Although DQ issues in data pipelines are widely documented in the literature (especially in industry settings <cit.>), we describe a few common types of DQ issues from the literature, in order to make the discussion concrete and self-contained: * Schema drift: A newly arrived batch of input data may have a schema change compared to previous input (e.g., missing columns or extra columns), which can result in incorrect behavior in data pipelines <cit.>.
* Increasing nulls: There is sometimes a sudden increase of null, empty strings, or special values (e.g., -1) in a column due to external factors – for instance, Google reports a DQ incident where null values in a column increase substantially in a short period of time, because the module that populates data in this column encountered an unusual number of RPC time-outs from a networking outage <cit.>. * Change of units: The unit of measurement for numeric values can change over time, when the logic that populates data evolves – for instance, Google reports a real DQ issue in their search ranking <cit.>, where the program that populates the “age” field of web documents previously used the unit of “days” (e.g., a document that is 30 days old will have an “age” value of 30), which later got changed to “hours” (making the same document to be have the “age” value of 720). This leads to orders of magnitude larger “age” values, and incorrect behaviours downstream. * Change of value standards: Value standards for string-valued data can change over time – for instance, Amazon reports a DQ issue where a “language-locale” column previously used lowercase values like “en-gb”, which later changed into uppercase “en-GB”, creating a mixed bag of inconsistent values in the same column, leading to incorrect behaviours in downstream applications <cit.>. * Change of data volume: The volume (e.g., row-count) for a new batch of data in a recurring pipeline can change significantly from previous batches, which can also be indicative of DQ issues. This list of DQ issues is clearly not exhaustive as there are many other types of DQ issues documented in the literature <cit.>. When DQ issues arise in recurring pipelines, they tend to introduce silent failures (i.e., with no explicit exceptions thrown, or error messages generated). The silent nature of DQ issues makes them difficult to catch, but no less damaging. For example, when null values increase significantly, or the unit of measurement changes, downstream ML models will continue to operate but will likely churn out inaccurate predictions (e.g., Google reports a DQ issue in their production pipelines that causes a recommendation model in Google Store to produce sub-optimal results – fixing this single DQ issue improves their apps install rate by 2% <cit.>). In general, “silent” DQ failures pollute downstream data products, which makes it more time-consuming for engineers to detect/debug/fix. Silent DQ failures in pipelines are therefore a major pain point in MLOps and DataOps practices <cit.>. “Guardrails” for Pipelines: Data Validation. Technology companies with large-scale data pipeline operations are among the first to recognize the need for employing data-validation "guardrails" in recurring pipelines to catch DQ issues early as they arise. A number of data-validation tools have been developed, including Google’s TensorFlow Data Validation (TFDV) <cit.> and Amazon’s Deequ <cit.>. These tools develop easy-to-use domain-specific languages (DSLs), so that engineers can write declarative DQ constraints that describe how “normal” data should look like in recurring data pipelines, such that unexpected deviation in the future can be flagged for review. Figure <ref> shows an example code snippet from Amazon Deequ. Using the DSL introduced in Deequ, one could declare the “review_id” column to be unique, the “marketplace” column to be complete (with no NULLs), etc.. These constraints are then used to validate future data arriving in the recurring pipeline. 
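To make the semantics of such declarative constraints concrete, the following plain-Python sketch mimics the checks just described using pandas; it is not the Deequ API, and the star_rating column is only an illustrative addition to the columns named above.

# A plain-pandas illustration of the kind of declarative checks just described
# (this is NOT the Deequ API; it only mimics the semantics of such constraints).
import pandas as pd

def is_unique(df: pd.DataFrame, col: str) -> bool:
    return df[col].is_unique

def is_complete(df: pd.DataFrame, col: str) -> bool:
    return df[col].notna().all()

def has_completeness(df: pd.DataFrame, col: str, at_least: float) -> bool:
    return df[col].notna().mean() >= at_least

def validate(df: pd.DataFrame) -> dict:
    return {
        "review_id is unique": is_unique(df, "review_id"),
        "marketplace is complete": is_complete(df, "marketplace"),
        "star_rating >= 90% complete": has_completeness(df, "star_rating", 0.9),
    }

batch = pd.DataFrame({
    "review_id": ["r1", "r2", "r3"],
    "marketplace": ["US", "US", None],      # a null sneaks into the new batch
    "star_rating": [5, 4, 3],
})
print(validate(batch))   # {'review_id is unique': True, 'marketplace is complete': False, ...}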
Figure <ref> shows a similar example from Google’s TFDV, which specifies that, when a new batch of input data arrives in a pipeline, the distributional distance of values in the “payment_type” column should be similar to the same column from previous batches (the code snippet therefore specifies that the L-infinity distance of the two should be no greater than 0.01). Automate Data Validation: Leveraging History. While these DSL-based declarative data-validation solutions improve upon low-level assertions and can improve DQ in production pipelines as reported in <cit.>, they require data engineers to manually write data constraints one-column-at-a-time like shown in Figure <ref> (with a lone exception in <cit.>, which employs off-the-shelf anomaly detection algorithms). This is clearly time-consuming and hard to scale – large organizations today operate thousands of pipelines, and hundreds of columns in each table, it is impractical for engineers to manually program DQ for each column. We emphasize that writing DQ constraints is not just time-consuming, sometimes it is also genuinely difficult for humans to program DQ correctly, because users need to (1) have a deep understanding of the underlying data including how the data may evolve over time; and (2) be well-versed in complex statistical metrics (e.g., L-infinity vs. JS-divergence), before they can program DQ effectively. Consider the example of online user traffic, such data can fluctuate quickly over time (e.g., between different hours of the day and different days of the week), which is hard to anticipate and even harder to program using appropriate metrics and thresholds. To address this common pain point, in this work we propose to “auto-program” DQ, by leveraging “history”. Our insight is that rich statistical information from past executions of the same pipeline (e.g., row-counts, unique-values, value-distributions, etc.) is readily available, which can serve as strong signals to reliably predict whether a new batch of data may have DQ issues or not. To see why this is the case, consider a simplistic example where all K past executions of a recurring pipeline produce exactly 50 output rows (one row for each of the 50 US states). This row-count becomes a “statistical invariant” unique to this particular pipeline, which can serve as a good predictor for DQ issues in the future – deviations from the invariant in new executions (e.g. an output with only 10 rows or 0 row), would likely point to DQ issues. Obviously, simple row-counts are not the best DQ predictor for all pipelines, as some pipeline can have row counts that can vary significantly. In such cases, DQ constraints based on other types of statistical metrics will likely be more effective. Table <ref> and Table <ref> list common statistical metrics used to program DQ constraints (details of these metrics can be found in Appendix <ref>). Tools like TFDV and Deequ already support many of these metrics today, but it is difficult for humans to manually select suitable metrics, and then guess what thresholds would work well. Our proposed aims to automatically program suitable DQ from this large space of statistical primitives, so that the resulting DQ is tailored to the underlying pipeline, without human intervention. Overall, our proposed is designed to have the following properties that we believe are crucial in recurring pipelines: * Automated. 
Instead of requiring humans to manually program DQ constraints column-at-a-time, can auto-program rich DQ leveraging statistics from past executions. * Highly accurate. is specifically designed to achieve high accuracy, as frequent false-alarms would require constant human attention that can erode user confidence quickly. In , we aim for very low False-Positive-Rate or FPR (e.g., 0.01%), which is also configurable if users so choose. can then auto-program DQ guaranteed not to exceed that FPR, while still maximizing the expected “recall” (the number of DQ issues to catch). * Robust. Unlike traditional ML methods that require a significant amount of training data, we exploit rich statistical properties (Chebyshev, Chantelli, CLT, etc.) of the underlying metrics, so that the predictions are robust even with limited historical data (e.g. only a few days of histories). * Explainable. produces explainable DQ constraints using standard statistical metrics (as opposed to black-box models), which makes it possible for human engineers to understand, review, and approve, if human interactions become necessary. Contributions. We make the following contributions in this work. * We propose a novel problem to auto-program pipeline DQ leveraging history, formalized as a principled optimization problem specifically optimizing both precision and recall. * We develop algorithms that leverage the different statistical properties of the underlying metrics, to achieve a constant-factor approximation, while having provable precision guarantees. * Our extensive evaluation on 2000 real production pipelines suggests that substantially outperforms a variety of commercial solutions, as well as SOTA methods for anomaly detection from the machine learning and database literature. § RELATED WORK Data validation. Data validation for pipelines is an emerging topic that has attracted significant interest from the industry, including recent efforts such as Google’s TensorFlow Data Validation (TFDV) <cit.>, Amazon’s Deequ <cit.>, and LinkedIn's Data Sentinel <cit.>. With the lone exception of <cit.>, most existing work focuses on developing infrastructures and DSLs so that engineers can program DQ constraints in a declarative manner. Anomaly detection. Anomaly detection has been widely studied in time-series and tabular settings <cit.>, and is clearly related. We compare with an extensive set of over 10 SOTA methods from the anomaly detection literature, and show is substantially better in our problem setting of DQ in pipelines, because uniquely exploits the statistical properties of underlying metrics, whereas standard anomaly detection methods would treat each metric as just another “feature dimension”. This enables to have higher accuracy, and excel with even limited data (e.g., 7 days of historical data), as we will show in our experiments. Data cleaning. There is a large literature on data cleaning (e.g., surveyed in <cit.>), which we also compare with. Most existing work focuses on the static single-table setting <cit.>, where data errors need to be detected from one table snapshot. In comparison, we study how multiple historical table snapshots from recurring pipelines can be explicitly leveraged for data quality, which is a setting not traditionally considered in the data cleaning literature. § PRELIMINARY: DQ IN PIPELINES In this section, we introduce necessary preliminaries for programming Data Quality (DQ) in the context of data pipelines. 
As we discussed, data pipelines are ubiquitous today, yet DQ issues are common in recurring pipelines, giving rise to data-validation tools such as Google’s TFDV and Amazon’s Deequ. At its core, these methods validate DQ by checking input/output tables of recurring pipelines, against pre-specified DQ constraints. Most of these constraints are defined over a single column C at a time (Figure <ref>). Single-column DQ is also the type of DQ we focus in this work. While TFDV and Deequ use syntactically different DSLs to program DQ constraints, the two are very similar in essence, as both can be described as constraints based on statistical metrics. DQ constraints by statistical metrics. Most DQ primitives used for pipeline validation can be expressed as statistics-based constraints. Table <ref> and Table <ref> list common statistical metrics used in DQ (e.g., row-count, L-infinity, etc.). We denote this space of possible metrics by 𝐌. This is obviously a large space that requires time and expertise from users to navigate and select appropriately. We now define two types of DQ constraints using metric M ∈𝐌, which we call single-distribution and two-distributions DQ constraints, respectively. A single-distribution DQ constraint, denoted by a quadruple Q(M, C, θ_l, θ_u), is defined using a statistical metric M ∈𝐌, over a target column of data C, with lower-bound threshold θ_l and upper-bound θ_u. The constraint Q(M, C, θ_l, θ_u) specifies that C has to satisfy the inequality θ_l ≤ M(C) ≤θ_u, or the metric M(C) is expected to be between the range [θ_l, θ_u] (otherwise the constraint Q is deemed as violated). Single-distribution DQ can be instantiated using example metrics shown in Table <ref> and Table <ref>. Such DQ constraints rely on a single data distribution of a column C, and can be validated using a newly-arrived batch data alone. We illustrate this using an example below. In the example Deequ snippet shown in Figure <ref>, the “review_id” column is required to be unique, which can be expressed as a single-distribution DQ using the unique_ratio metric from Table <ref>, as Q_1(unique_ratio, review_id, 1, 1), equivalent to Q_1: 1 ≤ unique_ratio(review_id) ≤ 1 (where the upper-bound θ_u and lower-bound θ_l converge to the same value 1). If on the other hand, uniqueness is required to be high, say at least 95% (but not 100%), we can write as Q_2(unique_ratio, review_id, 0.95, 1), or Q_2: 0.95 ≤ unique_ratio(review_id) ≤ 1. Other examples in Figure <ref> can be written as single-distribution DQ similarly. Next, we introduce two-distribution DQ constraints that require comparisons between two distributions of a column. A two-distribution DQ constraint, denoted as Q(M, C, C', θ_l, θ_u), is defined using a statistical metric M ∈𝐌, that compares one batch “target” data in a column C, and a batch of “baseline” data C', using lower-bound threshold θ_l and upper-bound threshold θ_u. Formally we write Q(M, C, C', θ_l, θ_u) = θ_l ≤ M(C, C') ≤θ_u, which states that the metric M(C, C') comparing C and C' is expected to be in the range [θ_l, θ_u] (or Q is as violated otherwise). Two-distribution DQ compares a target column against a baseline column, which can be the same column from two consecutive executions of the same pipeline, or two batches of training/testing data, etc. We illustrate this in the example below. 
In the example TFDV snippet shown in Figure <ref>, the first constraint specifies that for the “payment_type” column, we expect two batches of the same data to differ by at most 0.01 using the L-infinity metric. This can be written as Q_3: 0 ≤ L_inf(C, C') ≤ 0.01. The second constraint in Figure <ref> defined on the “company” column, can be specified using two-distribution DQ similarly. Conjunctive DQ program. Given a target column C, it is often necessary to validate C using multiple orthogonal metrics in 𝐌 (e.g., both row-counts and distribution-similarity need to be checked, among other things). In this work, we consider conjunctions of multiple DQ constraints, which we call a conjunctive DQ program. Note that the use of conjunction is intuitive, as we want all DQ to hold at the same time (prior work in TFDV and Deequ also implicitly employ conjunctions, as the example in Figure <ref> shows). A conjunctive DQ program, defined over a given set of (single-distribution or two-distribution) DQ constraints 𝐒, denoted by P(𝐒), is defined as the conjunction of all Q_i ∈𝐒, written as P(𝐒) = ⋀_Q_i ∈𝐒 Q_i. Continue with Example <ref>, let C denotes the target column “payment_type”. In addition to the aforementioned constraint Q_3: 0 ≤ L_inf(C, C') ≤ 0.01, one may additionally require that this column to be at least 95% complete (with less than 5% of nulls), written as Q_4: 0.95 ≤ complete_ratio(C) ≤ 1. Furthermore, we expect to see no more than 6 distinct values (with “cash”, “credit”, etc.) in this column, so we have Q_5: 0 ≤ distinct_cnt(C) ≤ 6. Putting these together and let 𝐒 = {Q_3, Q_4, Q_5}, we can write a conjunctive program P(𝐒) = ⋀_Q_i ∈𝐒 Q_i (or Q_3 Q_4 Q_5). § While DQ programs are flexible and powerful, they are difficult to write manually. We now describe our to auto-program DQ. §.§ Problem Statement For the scope of this work, we consider auto-generating conjunctive DQ programs for each column C in data pipelines (or only for a subset of important columns selected by users), using column-level single-distribution or two-distribution DQ constraints (Section <ref>). For a given column C, our goal is to program suitable DQ by selecting from a large space of metrics 𝐌 in Table <ref> and Table <ref>. This space of 𝐌 is clearly large and hard to program manually. Also note that while we list commonly-used metrics, the list is not meant to be exhaustive. In fact, is designed to be extensible, so that new metrics (e.g., statistical distances relevant to other use cases) can be added into 𝐌 in a way that is transparent to users. For a given C and a set of possible 𝐌, this induces a large space of possible DQ constraints on C. We denote this space of possible single-distribution and two-distribution DQ as 𝐐, defined as: 𝐐 = {Q(M, C, θ_l, θ_u) | M ∈𝐌, θ_l ∈ℝ, θ_u ∈ℝ, θ_l ≤θ_u } ∪{Q(M, C, C', θ_l, θ_u) | M ∈𝐌, θ_l ∈ℝ, θ_u ∈ℝ, θ_l ≤θ_u } We note that in production settings, it is crucial that auto-generated DQ programs are of high precision, with very few false-alarms (false-positive detection of DQ issues). This is because with thousands of recurring pipelines, even a low False-Positive Rate (FPR) can translate into a large number of false-positives, which is undesirable as they usually require human intervention. Because it is critical to ensure high precision, in we explicitly aim for a very low level of FPR, which we denote by δ, e.g., δ = 0.1%. 
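The notation above maps naturally to code. The following sketch (our own illustration, not the AVH implementation) represents single-distribution constraints Q(M, C, θ_l, θ_u) and a conjunctive program P(𝐒), using the Q_4 and Q_5 checks from the running example; a two-distribution constraint is analogous, with the metric taking an additional baseline column.

# Sketch (our own notation, not the system's implementation) of single-distribution
# constraints Q(M, C, theta_l, theta_u) and a conjunctive program P(S).
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class DQConstraint:
    name: str
    metric: Callable[[Sequence], float]   # M: column -> scalar
    theta_l: float
    theta_u: float

    def holds(self, column: Sequence) -> bool:
        return self.theta_l <= self.metric(column) <= self.theta_u

@dataclass
class DQProgram:                          # P(S) = conjunction of all Q_i in S
    constraints: List[DQConstraint]

    def validate(self, column: Sequence) -> List[str]:
        """Return the names of violated constraints (empty list = batch passes)."""
        return [q.name for q in self.constraints if not q.holds(column)]

# Example: the payment_type checks Q_4 and Q_5 discussed above.
complete_ratio = lambda col: sum(v is not None for v in col) / len(col)
distinct_cnt   = lambda col: len({v for v in col if v is not None})

program = DQProgram([
    DQConstraint("Q4: completeness in [0.95, 1]", complete_ratio, 0.95, 1.0),
    DQConstraint("Q5: distinct count in [0, 6]",  distinct_cnt,   0.0,  6.0),
])
new_batch = ["cash", "credit", None, "cash", "mobile"]
print(program.validate(new_batch))   # ['Q4: completeness in [0.95, 1]'] - too many nulls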
Finally, because we are dealing with data from recurring data pipelines, we assume that the same data from K past executions of this pipeline is available, which we denote as H = {C_1, C_2, …, C_K}. These K previous batches of data are assumed to be free of DQ issues, which is reasonable because engineers usually manually check the first few pipeline runs after a pipeline is developed to ensure it runs properly. DQ issues tend to creep in over time due to data drift and schema drift <cit.>. (). Given these considerations, we now formally define our problem as follows. . Given a target column C from a pipeline, and the same data from previous K executions H = {C_1, C_2, …, C_K}, a space of possible DQ constraints 𝐐, and a target false-positive-rate (FPR) δ. Construct a conjunctive DQ program P(𝐒) with 𝐒⊆𝐐, such that the expected FPR of P(𝐒) is no greater than δ, while P(𝐒) can catch as many DQ issues as possible. We write as the following optimization problem: () max   R(P(𝐒))    FPR(P(𝐒)) ≤δ P(𝐒) = ⋀_Q_i ∈𝐒,  𝐒⊆𝐐 Q_i Where R(P(𝐒)) denotes the expected recall of a DQ program P(𝐒) that we want to maximize, and FPR(P(𝐒)) denotes its expected FPR, which is required to be lower than a target threshold δ. §.§ Construct DQ constraints To solve , in this section we will first describe how to construct a large space of DQ constrains 𝐐 (and estimate their FPR), which are pre-requisites before we can use 𝐐 to generate conjunctive programs for (in Section <ref>). Recall that to instantiate constraints like Q_i(M, C, θ_l, θ_u) (from Definition <ref>), we need to pick a metric M ∈𝐌, apply M on the given column C to compute M(C), and constrain M(C) using suitable upper/lower-bounds thresholds θ_u/θ_l. Here we leverage the fact that a history H = {C_1, C_2, …, C_K} of the same column C from past executions is available. If we apply M on H, we obtain M(H) = {M(C_1), M(C_2), …, M(C_K)}, which forms a statistical distribution[Later, we will discuss exceptions to the assumption (e.g., non-stationary time-series).]. When we apply the same metric M on a newly arrived batch of data C, the resulting value M(C) can then be seen as a data point drawn from the distribution M(H). Let the estimated mean and variance of M(H) be μ and σ^2, respectively. We can construct a DQ constraint Q(M, C, θ_l, θ_u), with the following probabilistic FRP guarantees. For any metric M ∈𝐌, and β∈ℝ^+, we can construct a DQ constraint Q(M, C, θ_l, θ_u), with θ_l = μ - β, θ_u = μ + β. The expected FPR of the constructed Q on data without DQ issues, denoted by E[FPR(Q)], satisfy the following inequality: E[FPR(Q)] ≤ (σ/β)^2 We prove this proposition using Chebyshev's inequality <cit.>. Chebyshev states that for a random variable X, P(|X - μ| ≥ k σ) ≤1/k^2, ∀ k ∈ℝ^+. For the random variable M(C), let k = β/σ. Replacing k with β/σ above, we get P(|M(C) - μ| ≥β) ≤ (σ/β)^2. Note that this implies P(|M(C) - μ| ≤β) ≥ 1 - (σ/β)^2, which can be rewritten as P(-β≤ (M(C) - μ) ≤β) ≥ 1 - (σ/β)^2, or P(μ -β≤ M(C) ≤μ + β) ≥ 1 - (σ/β)^2. Observe that μ -β≤ M(C) ≤μ + β is exactly our Q(C, M, θ_l, θ_u), where θ_l = μ - β, θ_u = μ + β. We thus get P(Q holds on C) ≥ 1 - (σ/β)^2, which is equivalent to saying that the expected FPR of Q is no greater than (σ/β)^2, or E[FPR(Q)] ≤ (σ/β)^2. We use the following example to illustrate such a constructed DQ constraint and its estimated FPR. 
Consider a metric M = complete_ratio from Table <ref> that computes the fraction of values in a column C that are “complete” (not-null), and a history of data from past executions H = {C_1, C_2, …, C_K}. Applying M on H, we obtain the complete_ratio on historical data as M(H) = {0.92, 0.90, …, 0.91}. From the sample M(H), we estimate its mean as μ = 0.9 and its variance as σ^2 = 0.0001. Using Proposition <ref>, suppose we set β = 0.05; we then get a constraint Q_6(complete_ratio, C, 0.85, 0.95) (or equivalently 0.85 ≤ complete_ratio(C) ≤ 0.95), whose expected FPR satisfies the inequality: E[FPR(Q_6)] ≤ (0.01/0.05)^2 = 0.04. Note that using different β allows us to instantiate different constraints with different levels of FPR. For example, setting β = 0.1 will induce a different Q_7(complete_ratio, C, 0.8, 1) (or 0.8 ≤ complete_ratio(C) ≤ 1), whose expected FPR is E[FPR(Q_7)] ≤ (0.01/0.1)^2 = 0.01. Note that this yields a lower FPR than that of Q_6 above, because Q_7 has wider upper/lower bounds for complete_ratio. Using Proposition <ref> and different β values, we can instantiate an array of DQ constraints using the same M but different [θ_l, θ_u] (and thus different FPR guarantees). A DQ constraint with a larger β allows a larger range of M(C) values, which is less sensitive/effective in catching DQ issues, but is also “safer” with lower expected FPR. Tighter bounds of FPR leveraging metric properties. The results in Proposition <ref> apply to any metric M ∈𝐌, and the corresponding bounds on FPR are loose as a result. We derive two tighter FPR bounds for specific types of statistical metrics below, by exploiting unique characteristics of these metrics. For any metric M ∈{EMD, JS_div, KL_div, KS_dist, Cohen_d, L_1, L_inf, Cosine, Chi_squared}, and any β∈ℝ^+, we can construct a DQ constraint Q(M, C, θ_l, θ_u), with θ_l = 0, θ_u = μ + β. The expected FPR of the constructed Q on data without DQ issues, denoted by E[FPR(Q)], satisfies the following inequality: E[FPR(Q)] ≤ σ^2 / (β^2 + σ^2) This bound is derived using Cantelli's inequality <cit.>; a proof can be found in the full version of this paper <cit.>. For any metric M ∈{count, mean, str_len, char_len, digit_len, punc_len, complete_ratio}, and any β∈ℝ^+, we can construct a DQ constraint Q(M, C, θ_l, θ_u), with θ_l = μ - β, θ_u = μ + β. The expected FPR of the constructed Q on data without DQ issues, denoted by E[FPR(Q)], satisfies the following inequality: E[FPR(Q)] ≤ 1 - (2/√(π)) ∫_0^β/(√(2)σ) e^-t^2 dt This bound is derived using the Central Limit Theorem <cit.>; a proof can be found in the full version of this paper <cit.>. We omit examples for Proposition <ref> and Proposition <ref>, but DQ constraints can be constructed similarly to Example <ref>, with tighter bounds. We note that these tighter bounds allow us to construct DQ constraints with better FPR guarantees, which help meet the FPR constraint in Equation (<ref>) more effectively. The pseudo-code of this step can be found in Appendix <ref>. Time-series Differencing for Non-stationary Data. Our analysis so far assumes M(H) to be a well-behaved distribution, generated from a stationary process <cit.>, defined as a process whose probability distribution is static and does not change over time. While this is true for many real cases (e.g., Example <ref>), there are cases where M(H) follows a non-stationary process <cit.>, in which the parameters of the underlying probability distribution change over time.
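Before turning to the non-stationary case illustrated next, the following sketch shows how the bounds above can be inverted to calibrate thresholds from a metric history M(H) for a target FPR; this is our own illustration under the stationarity assumption, not the actual AVH implementation.

# Our own illustration of turning a metric history M(H) into thresholds using the
# FPR bounds above (Chebyshev for arbitrary metrics, Cantelli for non-negative
# distance metrics); not the system's released code.
import math
import statistics

def chebyshev_constraint(history, target_fpr):
    """Two-sided [mu - beta, mu + beta] with E[FPR] <= (sigma/beta)^2 <= target_fpr."""
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)
    beta = sigma / math.sqrt(target_fpr)
    return mu - beta, mu + beta

def cantelli_constraint(history, target_fpr):
    """One-sided [0, mu + beta] with E[FPR] <= sigma^2 / (beta^2 + sigma^2) <= target_fpr."""
    mu = statistics.mean(history)
    var = statistics.pvariance(history)
    beta = math.sqrt(var * (1 - target_fpr) / target_fpr)
    return 0.0, mu + beta

# Example 3 revisited: a complete_ratio history centered around 0.9.
history = [0.92, 0.90, 0.89, 0.90, 0.91, 0.88, 0.90]
lo, hi = chebyshev_constraint(history, target_fpr=0.04)
print(f"Q: {lo:.3f} <= complete_ratio(C) <= {hi:.3f}")   # a Chebyshev-calibrated range around the mean

# A distance metric such as L-infinity between consecutive batches gets the
# one-sided Cantelli bound instead:
linf_history = [0.004, 0.006, 0.005, 0.007, 0.005]
lo, hi = cantelli_constraint(linf_history, target_fpr=0.001)
print(f"Q: {lo} <= L_inf(C, C') <= {hi:.4f}")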
Consider a recurring pipeline that processes one day's worth of user traffic data for a website. Because the overall user traffic grows over time, the volume of data processed by the pipeline increases slightly every day. So for the metric M = row_count, we get a sequence of row-counts for the past K days as M(H) = {100K, 103K, 105K, 106K, …, 151K, 153K}. Note that M(H) is non-stationary here, because the parameters of the underlying distribution (e.g., the mean of M(C)) change over time. Modeling a non-stationary M(H) like the above as stationary, using a static distribution, is clearly sub-optimal, and may lead to false positives and false negatives in DQ applications. To account for non-stationary M(H), we first determine whether M(H) is already stationary, using the Augmented Dickey–Fuller (ADF) test from the time-series literature <cit.>. If the ADF test rejects the null hypothesis that M(H) is non-stationary (i.e., M(H) is deemed stationary, as in Example <ref>), we proceed to construct DQ constraints as before. For cases where M(H) is not stationary (e.g., Example <ref>), we repeatedly apply a technique known as time-series differencing <cit.> on M(H) until it reaches stationarity. While we defer details of the time-series differencing step to the full version of the paper <cit.> due to space limits, we illustrate it using a small example below. Continuing with Example <ref>, where M(H) = {100K, 103K, 105K, 106K, …, 151K, 153K} and the metric is M = row_count, the Augmented Dickey–Fuller (ADF) test will fail to reject the null hypothesis that M(H) is non-stationary. Applying a first-order time-differencing step (<cit.>) with t=1 will produce: M'_t=1(H) = {M(C_2) - M(C_1), M(C_3) - M(C_2), …, M(C_K) - M(C_K-1)} = {3K, 2K, 1K, …, 2K}. The resulting M'_t=1(H) passes the ADF test and is then used as a static distribution to generate 𝐐. We defer details of this step to <cit.>, but note that the differencing step also allows us to handle cyclic time-series M(H) (e.g., weekly or hourly periodic patterns), by transforming M(H) using first-order differencing with lags <cit.>, after which they can be handled like stationary processes as before. §.§ Construct DQ Programs in AVH After we construct the constraints 𝐐 and estimate their FPR bounds, we are ready to solve the AVH problem. Recall that in AVH, in addition to satisfying the hard constraint on FPR (Equation (<ref>)), our objective (Equation (<ref>)) is to maximize the expected “recall” of the constructed DQ program (the number of possible DQ issues it can catch). In order to fully instantiate AVH, we still need to estimate the expected recall benefit of each DQ constraint Q_i ∈𝐐, which can guide us to select the most “beneficial” DQ program. Estimate DQ recall using synthetic “training”. Clearly, we cannot foresee the exact DQ issues that may arise in the future in a particular pipeline, to precisely quantify the benefit of each Q_i ∈𝐐. However, there is a large literature that documents common types of DQ issues in pipelines (e.g., <cit.>), which include things like schema change, unit change, and increased nulls, as discussed earlier. Our observation is that although it is hard to quantify the benefit of Q_i in a specific DQ incident, in the long run, if future DQ issues are drawn from this set of common DQ problems, then we can still estimate the expected recall of a specific Q_i.
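This recall-estimation idea can be illustrated with a handful of corruption procedures, as sketched below; these are hypothetical instances of the common DQ types discussed here, not the paper's full catalog of 60 parameterized procedures.

# A few illustrative corruption procedures D(C) for estimating expected recall
# (hypothetical examples of common DQ types; not the paper's actual catalog).
import random

def inject_nulls(column, fraction=0.3, seed=0):
    rng = random.Random(seed)
    return [None if rng.random() < fraction else v for v in column]

def change_casing(column):                    # e.g., "en-gb" -> "EN-GB"
    return [v.upper() if isinstance(v, str) else v for v in column]

def drop_rows(column, keep=0.2, seed=0):      # severe volume change
    rng = random.Random(seed)
    return [v for v in column if rng.random() < keep]

def expected_recall(constraint_holds, column, corruptions):
    """Fraction of corrupted variants C' on which the constraint fires (is violated)."""
    variants = [corrupt(column) for corrupt in corruptions]
    return sum(not constraint_holds(c) for c in variants) / len(variants)

# Example: the distinct-count "invariant" Q8 for the 50-US-states column.
states = [f"state_{i}" for i in range(50)]
q8_holds = lambda col: len({v for v in col if v is not None}) == 50
corruptions = [inject_nulls, change_casing, drop_rows]
print(expected_recall(q8_holds, states, corruptions))
# ~0.67: the casing change preserves the distinct count and slips past Q8,
# illustrating why complementary constraints are needed in a DQ program.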
With that goal in mind, we carefully reviewed the DQ literature and cataloged a list of 10 common types of DQ issues in pipelines (schema change, unit change, increased nulls, etc.). We then vary the parameters in each type of DQ issue to systematically capture different magnitudes of DQ deviations (e.g., different fractions of values are overwritten with nulls for “increased nulls”, different magnitudes of changes for “unit changes”, etc.), to construct a total of 60 procedures that can systematically inject DQ issues in a given column C by perturbing C. We denote this set of synthetically generated DQ issues on C as 𝐃(C). We give a full list of these common types of DQ issues and their parameter configurations in Appendix <ref>. Intuitively, this 𝐃(C) models a wide variety of data deviations that may happen in C due to DQ issues, which guides us to select salient Q_i ∈𝐐 that are unique “statistical invariants” specific to a pipeline, to best differentiate between the “normal” H and the “bad cases” in 𝐃(C) for this pipeline. This synthetic 𝐃(C) in effect becomes “training data” as in ML, assisting us to estimate the recall benefit of each Q_i. We give an example below to illustrate this. We revisit the example from the Introduction, where a recurring pipeline produces exactly 50 output rows, with one row for each of the 50 US states, over all past K executions in the history. In such a pipeline, for the “state” column in the output, one distinguishing feature is that the column has exactly 50 distinct values, or Q_8: dist_val_cnt(state) = 50 (which, intuitively, is a “statistical invariant” unique to this pipeline). When we synthetically inject DQ issues into C (the “state” column) to produce 𝐃(C), we get variants of C, such as C with an increased number of nulls, C with values taken from a neighboring column (due to schema change), etc. The constraint Q_8: dist_val_cnt(state) = 50 will catch most of such variations in 𝐃(C), thus producing a high expected “recall” and making Q_8 a desirable constraint to use. Intuitively, Q_8 is a good constraint for C in this particular pipeline, because dist_val_cnt = 50 is a unique “statistical invariant” specific to this column and pipeline, which has more discriminating power than other more generic constraints. Formally, we define the expected recall of Q_i, denoted R(Q_i), as the set of issues it can detect in 𝐃(C), written as: R(Q_i) = {C' | C' ∈𝐃(C), C'  fails on  Q_i} Optimizing with guarantees. Given a DQ program with a conjunction of constraints P(𝐒) = ⋀_Q_i ∈𝐒 Q_i for some 𝐒⊆𝐐, naturally the recall of two constraints Q_i, Q_j ∈𝐒 can overlap (with R(Q_i) ∩ R(Q_j) ≠∅). This leads to diminishing recall gains for similar DQ constraints in the same program, and requires us to leverage “complementary” constraints when generating DQ programs. Given a conjunctive program P(𝐒) with 𝐒⊆𝐐, we model the collective recall of 𝐒 as the union of the individual R(Q_i), or ⋃_Q_i ∈𝐒R(Q_i). This becomes a concrete instantiation of the objective function in Equation (<ref>) of the problem. Furthermore, recall that we can upper-bound the FPR of each Q_i ∈𝐒 using Proposition <ref>-<ref>. Given a program P(𝐒), assuming a worst case where the false positives of each Q_i ∈𝐒 are disjoint, we can then upper-bound the FPR of P(𝐒) (Equation (<ref>)) as the sum of the FPR bounds of each Q_i, or ∑_Q_i ∈𝐒FPR(Q_i).
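To illustrate how 𝐃(C) and R(Q_i) interact before we formalize the optimization problem below, here is a toy sketch of issue injection and recall estimation. The two injection procedures, the fractions used, and the toy column are stand-ins for the 60 procedures described above (listed in Appendix <ref>), not the actual catalog.

```python
import random

def inject_increased_nulls(column, fraction):
    """DQ issue: overwrite a fraction of values with nulls (empty strings here)."""
    out = list(column)
    for i in random.sample(range(len(out)), int(fraction * len(out))):
        out[i] = ""
    return out

def inject_casing_change(column, fraction):
    """DQ issue: flip the casing of a fraction of string values."""
    out = list(column)
    for i in random.sample(range(len(out)), int(fraction * len(out))):
        out[i] = out[i].swapcase()
    return out

def generate_issues(column):
    """A toy D(C): a few corrupted variants of C with different magnitudes."""
    variants = [inject_increased_nulls(column, f) for f in (0.01, 0.5, 1.0)]
    variants += [inject_casing_change(column, f) for f in (0.01, 0.1, 1.0)]
    return variants

def estimate_recall(constraint, column):
    """R(Q): the subset of corrupted variants that the constraint catches."""
    # The constraint returns True when the data looks OK, False when it fails.
    return [v for v in generate_issues(column) if not constraint(v)]

# Toy version of the "50 US states" invariant: dist_val_cnt(state) == 50.
states_column = [f"state_{i}" for i in range(50)]
q8 = lambda col: len(set(col)) == 50
print(len(estimate_recall(q8, states_column)), "of",
      len(generate_issues(states_column)), "injected issues caught")
```

Note that, as in the discussion above, such a constraint catches some issue types (e.g., increased nulls) but not others (e.g., casing changes), which is exactly why complementary constraints are needed.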
Together, we rewrite the abstract problem in Equation (<ref>)-(<ref>) as: ()  max  | ⋃_Q_i ∈𝐒 R(Q_i) |  s.t.  ∑_Q_i ∈𝐒 FPR(Q_i) ≤δ,  𝐒⊆𝐐. Intuitively, we want to weigh the “cost” of selecting a constraint Q_i, which is its estimated FPR(Q_i), against its “benefit”, which is its expected recall R(Q_i). Furthermore, we need to account for the fact that constraints with overlapping recall benefits yield diminishing returns, which is analogous to submodularity. We prove that the problem is in general intractable and hard to approximate in Appendix <ref>. Given that it is unlikely that we can solve the problem optimally in polynomial time, we propose an efficient algorithm that gives a constant-factor approximation of the best possible solution in terms of the objective value in Equation (<ref>), while still being guaranteed to satisfy the FPR requirement in Equation (<ref>) in expectation. The pseudo-code of the procedure is shown in Algorithm <ref>. Algorithm <ref> takes as input a set of metrics 𝐌, an FPR target δ, as well as a column C together with its history H = {C_1, C_2, …, C_K}. We start by constructing a large space of possible DQ constraints 𝐐 (Line 1), using the given 𝐌 and H (Section <ref>). Using this 𝐐, we then iterate to find a solution 𝐒⊆𝐐, which is first initialized to empty. In each iteration, we select the best possible Q_s from the remaining constraints in 𝐐 that have not yet been selected (Line 4), based on a cost/benefit calculation, where the “benefit” of adding a constraint Q_i is its incremental recall gain on top of the current solution set 𝐒, written as | R(Q_i) ∖⋃_Q_j ∈𝐒R(Q_j) |, divided by the additional “cost” of adding Q_i, which is the increase in FPR, FPR(Q_i). The selected Q_s is then simply the constraint that maximizes this benefit-to-cost ratio, as shown in Line 4. We add this Q_s to the current solution 𝐒, update the current total FPR as well as 𝐐, and iterate until we exhaust 𝐐. In the final step (Line 9), we compare the best possible singleton Q_m ∈𝐐 that maximizes recall without violating the FPR requirement, with the current 𝐒 from above. We pick the better of {Q_m} and 𝐒 based on their recall as our final solution. We show that Algorithm <ref> has the following properties (a proof of which can be found in Appendix <ref>). Algorithm <ref> is a (1/2-1/2e)-approximation algorithm for the problem in Equation (<ref>), meaning that the objective value produced by Algorithm <ref> is at least (1/2-1/2e) OPT, where OPT is the objective value of the optimal solution. Furthermore, Algorithm <ref> produces a feasible solution in expectation, meaning that the expected FPR of its solution is guaranteed to satisfy Equation (<ref>). § EXPERIMENTS We evaluate the effectiveness and efficiency of , using real production pipelines. Our code will be shared at <cit.> after an internal review. §.§ Evaluation Benchmarks Benchmarks. We perform rigorous evaluations, using real and synthetic benchmarks derived from production pipelines. - Real. We construct a Real benchmark using production pipelines from Microsoft's internal big-data platform <cit.>. We perform a longitudinal study of the pipelines, by sampling 1000 numeric columns and 1000 categorical columns from these recurring pipelines, and tracing them over 60 consecutive executions (which may recur daily or hourly). For each column C, this generates a sequence of history {C_1, C_2, …, C_60}, for a total of 2000 sequences.
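Returning briefly to Algorithm <ref> before describing the tests on these sequences: the greedy selection can be sketched as below. This is one plausible reading of the algorithm, with our own candidate representation and with the FPR budget enforced during selection; it is not the production implementation.

```python
def greedy_dq_program(candidates, delta):
    """Greedy sketch of the benefit-to-cost selection with an FPR budget delta.

    `candidates` is a list of (name, fpr_bound, recall_set) triples, where
    recall_set holds the synthetic issues in D(C) that the constraint catches.
    """
    selected, covered, budget_used = [], set(), 0.0
    remaining = list(candidates)
    while remaining:
        best, best_ratio = None, -1.0
        for name, fpr, recall in remaining:
            if fpr <= 0 or budget_used + fpr > delta:
                continue                                   # would exceed the FPR budget
            ratio = len(recall - covered) / fpr            # incremental recall / added FPR
            if ratio > best_ratio:
                best, best_ratio = (name, fpr, recall), ratio
        if best is None:
            break
        selected.append(best[0])
        covered |= best[2]
        budget_used += best[1]
        remaining.remove(best)
    # Final step: compare against the best feasible singleton (cf. Line 9).
    singletons = [c for c in candidates if c[1] <= delta]
    if singletons:
        best_single = max(singletons, key=lambda c: len(c[2]))
        if len(best_single[2]) > len(covered):
            return [best_single[0]]
    return selected

# Toy candidates: (name, estimated FPR bound, set of injected issues caught).
cands = [("row_count in range", 0.002, {1, 2, 3, 4}),
         ("dist_val_cnt == 50", 0.001, {2, 3, 5, 6, 7}),
         ("L_inf <= theta",     0.004, {1, 5, 8})]
print(greedy_dq_program(cands, delta=0.005))
```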
We evaluate the precision/recall of each algorithm 𝒜 (or otherwise) on the 2000 sequences, by constructing sliding windows of sub-sequences for back-in-time tests of 𝒜's precision/recall (following similar practices in other time-series domains <cit.>): Precision. Given a sequence of past runs H = {C_1, C_2, …, C_K}, if an algorithm 𝒜 looks at H together with the real C_K+1 that arrives next, and predicts C_K+1 to have data-quality issues, then it is likely a false-positive detection, because the vast majority of production pipeline runs are free of DQ issues (if there were anomalous runs, they would have been caught and fixed by engineers, given the importance of the production data). To validate that it is indeed the case in our test data, we manually inspected a sample of our production pipeline data and did not identify any DQ issues. (Details of the process can be found fullversion in Appendix <ref>.) in a full version of the paper <cit.>.) For each full sequence S = {C_1, C_2, …, C_60}, we construct a total of 30 historical sliding windows (each with a length of 30), as H_30 = {C_1, C_2, …, C_30}, H_31 = {C_2, C_3, …, C_31}, etc. Then at time-step K (e.g., 30), and given the history H_K = {C_K-29, C_K-28, …, C_K}, we ask each algorithm 𝒜 to look at H_K and predict whether the next batch of real data C_K+1 has a DQ issue or not, for a total of (2000 × 30) = 60K precision tests. Recall. For recall, because there are few documented DQ incidents that we can use to test algorithms at scale, we systematically construct recall tests as follows. Given a sliding window of prefix H = {C_1, C_2, …, C_30}, we swap out the next batch of real data C_31, and replace it with a column C'_31 that looks “similar” to C_31 (e.g., with a similar set of values). Specifically, we use C_31 as the “seed query”, to retrieve top-20 columns most similar to C_31 based on content similarity (Cosine), from the underlying data lake that hosts all production pipelines. Because C'_31 will likely have subtle differences from the real C_31 (e.g., value-distributions, row-counts, etc.), algorithm 𝒜 should ideally detect as many C'_31 as DQ issues as possible (good recall), without triggering false alarms on the real C_31 (good precision). Because we retrieve top-20 similar columns, this generates a total of 2000 × 20 = 40K recall tests.[It should be noted that some of the C'_31 columns we retrieve may be so similar to C_31 that they become indistinguishable, making it impossible for any 𝒜 to detect such C'_31 as DQ issues. This lowers the best-possible recall, but is fair to all algorithms.] -Synthetic. In addition, we create a Synthetic benchmark, where the precision tests are identical to Real. For recall tests, instead of using real columns that are similar to C_31, we synthetically inject 10 common DQ issues reported in the literature into C_31 (described in Appendix <ref>). This allows us to systematically test against a range of DQ issues with different levels of deviations. Evaluation metrics. For each algorithm 𝒜, we report standard precision/recall results on the 60K precision tests and 40K recall tests described above. We use standard precision and recall, defined as precision = TP/TP + FP, recall = TP/TP + FN, where TP, FP, and FN are True-Positive, False-Positive, and False-Negative, respectively. §.§ Methods Compared We compare with an extensive set of over 20 methods, including strong commercial solutions, as well as state-of-the-art algorithms from the literature of anomaly detection and data cleaning. 
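Before walking through the individual baseline groups, the sliding-window construction of precision and recall tests described above can be sketched as follows; the data shapes and helper names here are illustrative assumptions, not the exact benchmark code.

```python
def make_precision_tests(sequence, window=30):
    """Slide a window of `window` past runs over a 60-run sequence.

    Each test asks: given H_K, is the *real* next run C_{K+1} flagged?
    Flagging it counts as a false positive, since production runs are assumed clean.
    """
    tests = []
    for k in range(window, len(sequence)):
        history, next_run = sequence[k - window:k], sequence[k]
        tests.append((history, next_run, "normal"))
    return tests

def make_recall_tests(sequence, similar_columns, window=30):
    """Swap the real next run for a similar-but-different column.

    `similar_columns` stands in for the top-20 columns retrieved by content similarity.
    """
    history = sequence[:window]
    return [(history, impostor, "abnormal") for impostor in similar_columns]

# Toy sequence of 60 daily snapshots of one column (each snapshot = list of values).
sequence = [[f"v{i}" for i in range(100 + day)] for day in range(60)]
impostors = [[f"w{i}" for i in range(90)] for _ in range(20)]
print(len(make_precision_tests(sequence)), "precision tests,",
      len(make_recall_tests(sequence, impostors)), "recall tests")
```

For 2000 column sequences this yields the 60K precision tests and 40K recall tests quoted above.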
We categorize these methods into groups, which we describe below. Commercial solutions. We compare with the following commercial solutions that aim to automatically validate data pipelines. Google TFDV. We compare with Google's Tensorflow Data Validation (TFDV) <cit.>. We install the latest version from Python pip, and use recommended settings in <cit.>. Amazon Deequ. We compare with Amazon's Deequ <cit.>, using configurations suggested in their documentations <cit.>. Azure Anomaly Detector. Azure Anomaly Detector <cit.> is a cloud-based anomaly detection service for time-series data, utilizing state-of-the-art algorithms in the literature <cit.>. Azure ML Drift Detection. Azure ML has the ability to detect data drift over time <cit.>. We use data from the past K executions as the “baseline” and data from a new execution as the “target”. Time-Series-based anomaly detection. There is a large body of literature on detecting outliers from time-series data. We use a recent benchmark study <cit.> to identify the following four best-performing methods, and use the same implementations provided in <cit.> on our statistical data for comparison purposes. LSTM-AD <cit.> employs LSTM networks to learn and reconstruct time series. It uses the reconstruction error to detect anomalies. Telemanom <cit.> also uses LSTM networks to reconstruct time-series telemetry, identifying anomalies by comparing expected and actual values and applying unsupervised thresholds. Health-ESN <cit.> uses the classical Echo State Network (ESN) and is trained on normal data. Anomalies are detected when the error between the input and predicted output exceeds a certain threshold, which is determined through an information theoretic analysis. COF <cit.> is a local density-based method that identifies time-series outliers, by detecting deviations from spherical density patterns. Classical anomaly detection. We also compare with the following anomaly detection methods developed in tabular settings. One-class SVM <cit.> is a popular ML method for anomaly detection, where only one class of training data is available. We train one-class SVM using historical data, and use it to make predictions. Isolation Forest <cit.> is also a popular method for anomaly detection based on decision trees. We again train Isolation Forest using historical data, and then predict on newly-arrived data. Local Outlier Factor (LOF) <cit.> is another one-class method for anomaly detection based on data density. We configure LOF in a way similar to other one-class methods above. K-MeansAD <cit.> is also a classical anomaly detection method, which is based on the unsupervised K-Means clustering. ECOD <cit.> identifies outliers by estimating the distribution of the input data and calculating the tail probability for each data point. Average KNN (Avg-KNN) is another outlier detection method, and was used to automate data validation in a pipeline setting <cit.> that is similar to the scenario considered in our work. Statistical tests. We compare with the following classical statistical tests used to detect outliers in distributions. Kolmogorov–Smirnov (KS) is a classical statistical hypothesis test for homogeneity between two numeric distributions, and is used in prior work to detect data drift <cit.>. We vary its p-value thresholds to generate PR curves. Chi-squared is a classical hypothesis test for homogeneity between two categorical distributions, and also used in prior work <cit.>. We vary its p-value thresholds like above. 
Median Absolute Deviation (MAD) is a measure of statistical dispersion from robust statistics <cit.>, and has been used to detect quantitative outliers <cit.>. We use MAD-deviation (Hampel X84, similar to z-scores) to produce predictions <cit.>. Database constraints. There is a large literature on using database constraints for data cleaning. We compare with these methods: Functional Dependency (FD). FD is widely used to detect data errors in tables <cit.>, by exploiting correlations between columns (e.g., salary → tax-rate). Since not all columns can be “covered” by FD, to estimate its best possible recall, we detect all possible FDs from our 2000 test tables, and mark a test column C as “covered” if there exists a detected FD that has C in its RHS. We report this as FD-UB (Functional Dependency Upper-bound). Order Dependency (OD) <cit.>. We discover OD using the same statistical information, by ordering tables with statistics in time. Sequential Dependency (SD) <cit.> generalizes OD, and we discover SD using the same statistical information over time. Denial Constraints (DC). We use the approach in <cit.> to discover DC, which generalizes FD and OD, and use them for validating data. (). This is our proposed method as described in Section <ref>. §.§ Evaluation Metrics To compare the performance of the proposed method with other baselines, we use precision/recall curves. Because the precision tests and the recall tests are evaluated on separate test sets (Section <ref>), the quantity reported on the “precision” axis is 1 - FPR, where FPR = FP/N_normal, and the quantity reported on the “recall” axis is recall = TP/N_abnormal. Here FP refers to the number of false-positive detections and N_normal to the total number of normal test cases, while TP refers to the number of true-positive detections and N_abnormal to the total number of abnormal test cases. §.§ Experiment Results Overall quality comparisons. Figure <ref> and <ref> show the average precision/recall of different methods on the Real and Synthetic benchmarks, respectively. Our method is at the top-right corner with high precision/recall, outperforming other methods across all cases. Anomaly detection methods, especially LOF and Health-ESN, are the best performing baselines. However, these methods use each statistical attribute just as a regular dimension in a data record, while our method exploits different statistical properties of the underlying metrics (Chebyshev, Cantelli, CLT, etc., in Proposition <ref>-<ref>), which gives a unique advantage over even the state-of-the-art anomaly detection methods, underscoring the importance of our approach in validating data from recurring data pipelines. Commercial data-validation solutions like Amazon Deequ and Google TFDV have high precision but low recall, because they use predefined and static configurations (e.g., JS-Divergence and L-infinity are the defaults for TFDV), which lack the ability to adapt to different pipelines, and thus the low recall. Similarly, statistical tests (KS/Chi-squared/MAD) use fixed predictors that also cannot adapt to different pipelines, and show sub-optimal performance. Constraint-based data cleaning methods from the database literature (e.g., FD, DC, OD, SD, etc.) are not competitive in our tests, because these methods are designed to handle a single table snapshot, typically using manually designed constraints. Figure <ref> shows a breakdown of the results in Figure <ref> by different types of DQ issues in the Synthetic benchmark. We can see that our method is effective against most types of DQ issues (schema-change, distribution-change, data-volume-change, etc.).
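For concreteness, before examining the per-issue breakdown further, the sketch below shows how one operating point of the reported curves could be computed from the two test sets, following the definitions above; the counts are made up and the function is our own illustration.

```python
def evaluate(flags_on_normal, flags_on_abnormal):
    """Summarize one operating point from the two benchmark test sets.

    flags_on_normal / flags_on_abnormal: lists of booleans, True = detector raised an alarm.
    """
    n_normal, n_abnormal = len(flags_on_normal), len(flags_on_abnormal)
    fp = sum(flags_on_normal)        # alarms on clean production runs
    tp = sum(flags_on_abnormal)      # alarms on injected / swapped-in DQ issues
    fpr = fp / n_normal
    recall = tp / n_abnormal
    precision_axis = 1.0 - fpr       # quantity plotted on the "precision" axis
    return precision_axis, recall

# Toy operating point (the real benchmark uses 60K precision and 40K recall tests).
print(evaluate([False] * 595 + [True] * 5, [True] * 360 + [False] * 40))
```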
On numerical data, we see that it is the most difficult to detect “character-level perturbation” (randomly perturbing one digit character for another digit with small probabilities) and “character deletion” (randomly removing one digit character with small probabilities), which is not unexpected since such small changes may not always change the underlying numerical distributions. On categorical data, “character-level perturbation” is also the most difficult to detect, but is effective against “character deletion” and “character insertion”. Sensitivity and ablation studies. We perform extensive experiments to study the sensitivity of (to the length of history, different types of data-errors, target FPR τ, etc.). We also perform an ablation study to understand the importance of components. In the interest of space, we present these additional experimental results in Appendix <ref> and Appendix <ref>, respectively. Efficiency. Figure <ref> shows the end-to-end latency of to process a new batch of data. We vary the number of rows in a column C (x-axis), and report latency averaged over 100 runs. Recall that can be used offline, since DQ constraints can be auto-installed on recurring pipelines (without involving humans). Nevertheless, we want to make sure that the cost of is small. Figure <ref> confirms this is the case – the latency of on 100K rows is 1.6 seconds on average, making this interactive. The figure further breaks down the time spends into three components: (1) computing single-distribution metrics (green), (2) two-distribution metrics (blue), and (3) DQ programs (red), where computing two-distribution metrics (blue) takes the most time, which is expected. Overall, we see that the overall latency grows linearly with an increasing number of rows, indicating good scalability. § CONCLUSIONS In this work, we develop an () framework to automate data-validation in recurring pipelines. can automatically generate explainable DQ programs that are provably accurate, by leveraging the statistical properties of the underlying metrics. Extensive evaluations on production pipelines show the efficiency and effectiveness of . ACM-Reference-Format fullversion § DETAILS OF STATISTICAL METRICS Table <ref> and Table <ref> give detailed descriptions of the statistical metrics used in , which corresponds to simplified versions in Table <ref> and Table <ref>, respectively. § SYNTHETIC “TRAINING” DATA We carefully reviewed the DQ literature and cataloged a list of 10 common types of DQ issues in pipelines, so that we can systematically synthesize data deviations that are due to DQ issues, which would help us to select the most salient “features” or DQ constraints that are sensitive in detecting common DQ deviations. We enumerate the list of 10 different types of DQ issues below, as well as the parameters we use (to control deviations with different magnitudes). By injecting varying amounts of DQ issues into a given column C, we generates a total of 60 variations C' for each C (e.g., different fractions of values in C are replaced with nulls for the type of DQ issue “increased nulls”). Collectively, we denote this set of synthetically generated DQ issues on C as 𝐃(C). DQ Issue Type 1: Schema change. We replace p% (with p=1, 10, 100) of values in a target column C for which we want to inject DQ variation, using values randomly sampled from a neighboring column of the same type. 
This is to simulate a “schema change”, where some fraction of values in a different column are either partially mis-aligned (e.g., due to a missing delimiter or bad parsing logic), or completely mis-aligned (e.g., due to extra or missing columns upstream introduced over time). Note that p=100 corresponds to a complete schema-change, otherwise it is a partial schema change. DQ Issue Type 2: Change of unit. To simulate a change in the unit of measurement, which is a common DQ issue (e.g., reported by Google in <cit.> like discussed earlier), we synthetically multiply values in a numeric column C by x10, x100 and x1000. DQ Issue Type 3: Casing change. To simulate possible change of code-standards (e.g., lowercase country-code to uppercase, as reported by Amazon <cit.>), we synthetically change p% fraction of values (with p=1, 10, 100) in C, from lowercase to uppercase, and vice versa. DQ Issue Type 4: Increased nulls. Since it is a sudden increase of null values such as NULL/empty-string/0 is common DQ issue, we sample p% values in a C (with p=1, 50, 100), and replace them with empty-strings in the case of categorical attribute, and 0s in the case of numerical attribute. DQ Issue Type 5: Change of data volume. Since a sudden increase/decrease of row counts can also be indicative of DQ issues <cit.>, we up-sample values in C by a factor of x2, x10, or down-sample C with only 50%, 10% of the values. DQ Issue Type 6: Change of data distributions. To simulate a sudden change of data distributions <cit.>, we sorted all values in C first, then pick the first or last p% values as a biased sample and replace C, with p=10, 50. DQ Issue Type 7: Misspelled values by character perturbation. Typos and misspellings is another type of common DQ issue (e.g., “Missisipii” and “Mississippi”), frequently introduced by humans when manually entering data. To simulate this type of DQ issue, we randomly perturb p% of characters in C to a different character of the same type (e.g., [0-9] → [0-9], and [a-z] → [a-z]), with p=1, 10, 100. DQ Issue Type 8: Extraneous values by character insertion. Sometimes certain values in a column C may be associated with extraneous characters that are not expected in clean data. To simulate this, for each value in C, we insert randomly generated characters with probability p%, where p=10, 50. DQ Issue Type 9: Partially missing values by character deletion. Sometimes certain values in a column C may get partially truncated, due to issues in upstream logic. We simulate this by deleting characters for values in C with probability p%, where p=10, 50. DQ Issue Type 10: Extra white-spaces by padding. We randomly insert leading or tailing whitespace for p% of values, where p=10, 50, 100. While we are clearly not the first people to report these aforementioned DQ issues, we are the first to systematically catalog them and synthetically generate such DQ variations, and are the first to use them as “training data” that guides a DQ algorithm to select the most salient DQ features specific to the characteristics of a column C. We release our generation procedures in <cit.>, which can be used for future research. § PATTERN GENERATION In addition to use raw values from columns and compare their distributional similarity (e.g., using L_1, L_inf, Cosine, etc.), sometimes values in a column follows a specific pattern, for example, timestamp values like "2022-03-01 (Monday)", currency values like “$19.99”, zip-codes like “98052-1202”, etc. 
For such values, comparing distributions for raw values that are drawn from a large underlying domain induced by patterns (e.g., time-stamp), typically yields very small overlap/similarity because of the large space of possible values in the underlying domain. (This is in contrast to small categorical domains with a small number of possible values, where distributional similarity is usually high and more meaningful). In , we observe that the pattern strings for such pattern-induced domains is an orthogonal representation of values in a column, which gives another way to “describe” the column and detect possible DQ deviations. For example, timestamp values like "2022-03-01 (Monday)" can be generalized to a pattern "\ d\ d\ d\ d-\ d\ d-\ d\ d (\ l\ l\ l\ l\ l\ l)", currency values like “$19.99” can be generalized to “$\ d\ d.\ d\ d”, etc. Assuming that the format of the data is changed due to upstream DQ issue, e.g., currency values become mixed where some values have no currency-signs, or time-stamps becomes mixed with multiple formats of time-stamps, a distributional similarity of the pattern strings above provides a powerful way to “describe” the expected pattern distribution in a column, which makes it possible to catch DQ issues in columns whose underlying domains are pattern-related. For the metrics that have a prefix “Pat_” in Table <ref>, we first generate pattern-strings for each value v∈ C, by converting each character in v to a wildcard character following a standard [0-9] → \ d (for digits), [a-zA-Z] → \ l (for letters), and replace all punctuation as “-”. We then compute the same distributional similarity (e.g., L_1, L_inf, Cosine, etc.), just as regular distributional similarity metrics for raw string values. (Note that is robust to a large space of DQ constraints, and can intelligently select the most salient features, such that for columns where pattern-based DQ is not a good DQ description, such pattern-based DQ constraints will not be selected automatically.) § CONSTRUCT CONSTRAINTS: PSEUDOCODE We show the pseudo code to construct DQ constraints in Algorithm <ref>. This procedure directly corresponds to Section <ref> § TIME-SERIES DIFFERENCING Algorithm <ref> gives an overview of the time-series differencing step, which we will expand and explain in this section. Details of this step can be found in Algorithm <ref>. Recall that time-series differencing aims to make time series stationary with static underlying parameters. We start by performing the ADF test to determine the stationarity of M(H), and return M(H) if it is already stationary. If it is not, we then perform lag-based transforms <cit.>. Given a sequence of M(H) = {M(C_1), M(C_2), …, M(C_K)}, a lag-based transform with lag=l is defined as M(H)^lag=l = {M(C_l+1) - M(C_1), M(C_l+2) - M(C_2), …, M(C_l+K - M(C_l))} Which performs a difference step for two events that are l time-steps away. Observe that such a differencing step handles cyclic data with periodic patterns (e.g., weekly user traffic data can be differenced away with lag=7), as shown in Example <ref> earlier. For each lag ∈ [1, K-1], if the resulting M(H)^lag is already stationary (passes the ADF test), we return the corresponding M(H)^lag for the next stage for to auto-program DQ (and remember the lag parameter to pre-process data arriving in the future). If none of the lag parameter leads to a stationary time-series, we additionally perform a log transform on M(H), which can better handle time-series with values that are orders of magnitude different. 
We repeat the same process as lag-only transforms like above, until we find a stationary time-series or we return None (in which case, the sequence M(H) associated with this metric M will be ignored by downstream due to its non-stationary nature. Also note that it is possible to perform additional second-order or third-order differencing, which we omit here). § PROOF OF PROPOSITION 2 We prove this proposition using Cantelli's inequality <cit.>. Cantelli's inequality states that for a random variable X, there is a class of one-sided inequality in the form of P(X - μ≥ k σ) ≤1/1 + k^2, ∀ k ∈ℝ^+. For metrics M ∈{EMD, JS_div, KL_div, KS_dist, Cohen_d, L_1, L_inf, Cosine, Chisquared}, which are “distance-like” metrics, DQ constraints can be one-sided only, to guard against deviations with distances larger than usual, e.g., newly-arrived data whose distance from previous batches of data are substantially larger than is typically expected. (On the other hand, if the distance of new data and previous batches of data are smaller than usual, this shows more homogeneity and is typically not a source of concern). For this reason, we can apply Cantelli's inequality for metrics M ∈{EMD, JS_div, KL_div, KS_dist, Cohen_d, L_1, L_inf, Cosine, Chisquared}, with one-sided DQ. For such a metric M, let M(C) be our random variable. Let k = β/σ. Replacing k with β/σ above, we get P(M(C) - μ≥β) ≤σ^2/σ^2 + β^2. Note that P(M(C) - μ≤β) is exactly our one-sided DQ for metrics with distance-like properties. We thus get P(Q violated on C) ≤σ^2/σ^2 + β^2, which is equivalent to saying that the expected FPR of Q is no greater than σ^2/σ^2 + β^2. § PROOF OF PROPOSITION 3 We prove this proposition using Central Limit Theorem (CLT) <cit.>. Recall CLT states that when independent random variables are summed up and normalized, it tends toward normal distribution. Metrics M ∈{count, mean, str_len, char_len, digit_len, punc_len, complete_ratio}, can all be viewed as the sum of independent random variables (for example, str_len, char_len, digit_len etc. are straightforward sum of these functions applied on individual cells; count are the 0/1 sum for a random variable indicating tuple presence/not-presence, etc.). Such sums are then averaged over all cells in the same column C, which would tend to normal distributions per CLT. We can thus apply the tail bound of normal distributions, making it possible to apply tail bounds of normal distributions. For M ∈{count, mean, str_len, char_len, digit_len, punc_len, complete_ratio}, let M(C) be our random variable. From tail bounds of normal distributions, we know P(-k σ≤ M(C) - μ≤ k σ) = erf(k/√(2)) <cit.>, where erf(x) is the Gauss error function. Let k = β/σ. Replacing k with β/σ above, we get P(-β≤ M(C) - μ≤β) = erf(β/√(2)σ) = 2/√(π)∫_0^β/√(2)σ e^-t^2 dt. Note that P(-β≤ M(C) - μ≤β) is exactly P(Q  satisified on C), thus we get E[FPR(Q)] = 1 - 2/√(π)∫_0^β/√(2)σ e^-t^2 dt. § HARDNESS OF THE PROBLEM The problem in Equation (<ref>)-Equation (<ref>) is NP-hard. Furthermore, it cannot be approximated within a factor of (1-1/e) under standard assumptions. We show the hardness using a reduction from the Maximum Coverage problem <cit.>. Recall that in Maximum Coverage, we are given a set of sets S, and the objective is to find a subset S' ⊆ S such that the union of the elements covered by S', |⋃_S_i ∈ S'S_i|, is maximized, subject to a cardinality constraint |S'| ≤ K. We show a polynomial time reduction from Maximum Coverage to as follows. 
For any instance of Maximum Coverage with S = {S_i}, we construct the an problem by converting each S_i into a DQ constraint Q_i, whose FPR(Q_i) is unit cost 1, and recall R(Q_i) is exactly the set of elements in S_i. If we could solve the corresponding problem in polynomial-time, we would have solved the Maximum Coverage, thus contracting the hardness of Maximum Coverage. Also note that through the construction above, the objective value of Maximum Coverage is identical to that of . Thus we can use the inapproximation results from Maximum Coverage  <cit.>, to show that cannot be approximated within a factor of (1 - 1/e). § PROOF OF PROPOSITION 4 We show that Algorithm <ref> is a (1/2 - 1/2e) approximation algorithm for the problem, which follows from the Budgeted Maximum Coverage problem <cit.>. Recall that in Budgeted Maximum Coverage problem, we are given a set of sets S = {S_i}, where each set S_i has a cost c(S_i), and each element in sets has a weight w(e_j), the objective is to find a subset S' ⊆ S such that the weight of all elements covered by S' is maximized, subject to a budget constraint ∑_S_i ∈ S'c(S_i) ≤ B. We show that for any instance of our problem, it can be converted to Budgeted Maximum Coverage as follows. We convert each Q_i into a set S_i, and let the cost c(S_i) be FPR(Q_i). Furthermore, we convert the set of recall items into elements in Budgeted Maximum Coverage, and set the weight of each element to unit weight. Finally, we let the elements covered by S_i in Budgeted Maximum Coverage to be exactly the R(Q_i) in . The approximation ratio in Proposition <ref> follows directly from the Theorem 3 of <cit.> now. We note that there are an alternative algorithm with better approximation ratio (1 - 1/e) <cit.>, which however is of complexity |𝐐|^3, where |𝐐| is the number of DQ constraints constructed from Algorithm <ref>. Because |𝐐| is at least in the hundreds, making the alternative very expensive in practice and not used in our system. We also show that the solution S from our Algorithm <ref> is a feasible solution of , whose expected FPR is lower than δ. In order to see this, recall that we construct DQ Q_i ∈𝐐 and estimate each Q_i's worst case FPR(Q_i) following Proposition <ref>, <ref>, <ref>. Algorithm <ref> ensures that ∑_Q_i ∈ SFPR(Q_i)≤δ. For the conjunctive program P(S) induced by S, the FPR of P(S) follows the inequality FPR(P(S)) ≤∑_Q_i ∈ SFPR(Q_i), because the false-positives from P(S), is produced by a union of the false-positives from each Q_i ∈ S. Combining this with ∑_Q_i ∈ SFPR(Q_i)≤δ, we get FPR(P(S)) ≤δ. § SENSITIVITY ANALYSIS We perform extensive experiments to understand the sensitivity of our method. fullversion We discuss a subset of our sensitivity and ablation studies in this section (additional results can be found in <cit.>). Sensitivity to history length. Since leverages a history of past pipeline executions, where the number of past executions likely has an impact on accuracy. Figure <ref> shows the accuracy results for numerical and categorical data respectively, when {7, 14, 21, 28} days of historical data are available. Overall, having 28-day history leads to the best precision, though with 14 and 7-day histories also produces competitive results. 
We highlight that unlike traditional ML methods that typically require more than tens of data points (Figure <ref> suggests that even 30-day history is not sufficient for ML methods), exploits the unique statistical properties of the underlying metrics (e.g., Chebyshev and CLT), and can work well even with limited data, which is a unique characteristic of . Sensitivity to target precision δ. Figure <ref> shows the relationship between the target FPR δ parameter used in (Equation (<ref>)), and the real FPR observed on results (we note that 1-FPR corresponds to the precision metric). On both numerical and categorical data, the real FPR increases slightly when a larger target FPR δ is used, showing the effectiveness of this knob δ in . Also note that the real FPR is consistently lower than the target-FPR, likely due to the conservative nature of the statistical guarantees we leverage (Chebyshev and Cantelli's inequalities we use in Proposition <ref> and <ref> give worst-case guarantees). § ABLATION STUDIES We perform additional ablation studies, to understand the importance of different components used in . Effect of using single/two-distribution metrics. Recall that in , we exploit both single-distribution and two-distribution metrics (in Table <ref> and Table <ref>) to construct DQ programs. A key difference of the two types of metrics, is that computing two-distribution metrics (e.g., L_inf and L_1) would require both the current column C_K and its previous snapshot C_K-1 (e.g., in L_1(C_K, C_K-1)). This requires raw data from the previous run C_K-1 to be kept around, which can be costly in production big-data systems. In contrast, single-distribution metrics (e.g., row_count and unique_ratio) can be computed on C_K and C_K-1 separately, and we only need to keep the corresponding metrics from C_K-1 without needing to keep the raw C_K-1, which makes single-distribution metrics a lot more efficient and inexpensive to use in . In Figure <ref>, we compare the full (with both single- and two-distribution metrics), with using only single-distribution metrics. Encouragingly, the latter variant produces comparable quality with the full , likely because the large space of single-distribution metrics is already rich and expressive enough. This suggests that we can deploy inexpensively without using two-distribution metrics, while still reaping most of the benefits. Effect of limiting the number of DQ clauses. We also study the number of DQ clauses that generates, because intuitively, the more clauses it generates, the more expressive the DQ programs become, at the cost of human explainability/interpretability (there is a setting in that engineers can review and approve auto-suggested DQ programs). We report that on numerical data in the benchmark, the median/mean of the number of clauses generates is 3 and 2.62, respectively; on categorical data the median/mean is 2 and 2.15, respectively. We believe this shows that the programs generates are not only effective but also simple/understandable. In Figure <ref> we impose an artificial limit on the number of clauses that can generate (in case better readability is required). We observe a drop in performance when only 1 or 2 clauses are allowed, but the performance hit becomes less significant if we allow 3 clauses. Effect of stationary processing. Since we use stationarity test and stationary processing for statistics that are time-series (Section <ref>), in Figure <ref> we study its effect on overall quality. 
We can see that for numerical data, stationary processing produces a noticeable improvement, which however is less significant on categorical data. § MANUAL REVIEW OF PIPELINE DATA Based on our conversations with data engineers and data owners, the data tables collected from production data pipelines used in our benchmark (describe in Section <ref>) are production-quality and likely free of DQ issues, because these files are of high business impact with many downstream dependencies, such that if they had any DQ issues they would have already been flagged and fixed by data engineers. In order to be sure, we randomly sampled 50 categorical and 50 numerical data columns, and manually inspected these sample data in the context of their original data tables, across 60 snapshots, to confirm the quality of the benchmark data. We did not find any DQ issues based on our manual inspection. We also perform a hypothesis test based on the manual analysis, with H_0 stating that over 3% of data has DQ issues. Our inspection above rejects the null hypothesis (p-level=0.05), indicating that it is highly unlikely that the benchmark data has DQ issues, which is consistent with the assessment from data owners, and confirms the quality of the data used in the benchmark. During our conversations with data engineers, we were pointed to three known DQ incidents, which we collect and use as test cases to study 's coverage. was able to detect all such known DQ cases based on historical data. Figure <ref> shows such an example that is intuitive to see. Here each file is an output table (in csv format) produced by a daily recurring pipeline. As can be seen in the figure, for the file produced on “2019-01-19”, the file size (and thus row-count) is much larger than the days before and after “2019-01-19” (21KB vs. 4KB). While small in scale, we believe this study on user-provided data further confirms the effectiveness of . fullversion § ERROR ANALYSIS In addition, we perform a careful error analysis of the false-positive cases generated by on the Real benchmark. Interestingly, some of the false-positives are in fact real DQ issues in production data that are overlooked by engineers, which we verified manually. Figure <ref> shows such an example that is intuitive to see. Here each file is an output table (in csv format) produced by a daily recurring pipeline. As can be seen in the figure, for the file produced on “2019-01-19”, the file size (and thus row-count) is much larger than the days before and after “2019-01-19” (21KB vs. 4KB). In our Real benchmark where we automatically evaluate precision, we treat all production feeds from recurring pipelines as free of DQ issues (true in the vast majority of cases), which allows us construct “precision-tests” where an algorithm 𝒜 sees a prefix of data {C_1, C_2, …, C_K} and predicts on C_K+1, and if C_K+1 is predicted to have DQ issue, then this is most likely a false-positive for 𝒜. While such an automatic evaluation is true in the vast majority of production feeds, there are rare exceptions like shown in Figure <ref>, in which the data corresponding to “2019-01-19” is truly suspicious, and an algorithm like rightfully predict this batch as anomalous (which becomes a false-positive and something that we analyzed in our error-analysis). We turned to the data engineers who owns the pipeline for verification, and after inspecting the data, they confirmed that the batch of data corresponding to “2019-01-19” is indeed having DQ issues, validating the predictions made by . 
There are a couple of cases like this that we were able to verify manually with data owners, and we find it interesting that our method can reveal DQ issues that are overlooked by humans in production. This also underscores the high-precision nature of our method: even if we report a precision like 99.5% in our benchmark evaluation, in practice the real precision is likely even higher, because real DQ incidents like this one are marked as false-positives by our automatic evaluation, thus penalizing the reported precision.
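As a small illustration of why incidents like the one in Figure <ref> are easy for a history-based constraint to catch, consider the sketch below; the daily sizes are invented stand-ins for the real pipeline's outputs, and the 4-sigma width is an arbitrary choice for this example rather than a setting used by the system.

```python
from statistics import mean, pstdev

def two_sided_flag(history, new_value, beta_sigmas=4.0):
    """Flag new_value if it falls outside mu +/- beta, with beta a multiple of sigma."""
    mu, sigma = mean(history), pstdev(history)
    return abs(new_value - mu) > beta_sigmas * sigma

# Illustrative daily output sizes (KB) of a recurring pipeline, then the anomalous day.
daily_kb = [4.0, 4.1, 3.9, 4.0, 4.2, 4.0, 3.8, 4.1, 4.0, 3.9]
print(two_sided_flag(daily_kb, 21.0))   # True: the 21KB batch is flagged
```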
Catch Me If You Can: Deep Meta-RL for Search-and-Rescue using LoRa UAV Networks Mehdi Naderi Soorki1, Hossein Aghajari1, Sajad Ahmadinabi1, Hamed Bakhtiari Babadegani1, Christina Chaccour2, Walid Saad2 1IWiN Research laboratory, Engineering Faculty,Shahid Chamran University of Ahvaz,Ahvaz, Iran, 2Wireless@ VT, Bradly Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA USA, Emails:[email protected],{hn.aghajari,sajadahmadinabi,h.bakhtiaribabadegani}@gmail.com, {christinac, walids}@vt.edu. ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================= empty Long range (LoRa) wireless networks have been widely proposed as a efficient wireless access networks for the battery-constrained Internet of Things (IoT) devices. In many practical search-and-rescue (SAR) operations, one challenging problem is finding the location of devices carried by a lost person. However, using a LoRa-based IoT network for SAR operations will have a limited coverage caused by high signal attenuation due to the terrestrial blockages especially in highly remote areas. To overcome this challenge, the use of unmanned aerial vehicles (UAVs) as a flying LoRa gateway to transfer messages from ground LoRa nodes to the ground rescue station can be a promising solution. In this paper, an artificial intelligence-empowered SAR operation framework using UAV-assisted LoRa network for different unknown search environments is designed and implemented. The problem of the flying LoRa (FL) gateway control in the search-and-rescue system using the UAV-assisted LoRa network is modeled as a partially observable Markov decision process. Then, a deep meta-RL-based policy is proposed to control the FL gateway trajectory during SAR operation. For initialization of proposed deep meta-RL-based policy, first, a deep RL-based policy is designed to determine the adaptive FL gateway trajectory in a fixed search environment including a fixed radio geometry. Then, as a general solution, a deep meta-RL framework is used for SAR in any new and unknown environments to integrate the prior FL gateway experience with information collected from the other search environments and rapidly adapt the SAR policy model for SAR operation in a new environment. The proposed UAV-assisted LoRa network is then experimentally designed and implemented. To analyze the performance of proposed framework in real world scenarios, the proposed SAR system is tested in two different target areas: a wide plain and a slotted canyon at Mongasht mountain ranges, Iran. Practical evaluation results show that if the deep meta-RL-based control policy is applied instead of the deep RL-based one, the number of SAR time slots decreases from 141 to 50. Moreover, the average distance between UAV trajectories under deep meta-RL and deep RL based policies from the UAV trajectory under optimal policy are respectively 619 and 1930 meter during the SAR operation time. 
LoRa technology, Unmanned aerial vehicle, Deep meta-reinforcement learning, Search-and-rescue operation § INTRODUCTION Unmanned aerial vehicles (UAVs) are playing an increasingly important role in next-generation wireless networks such as 5G and beyond <cit.>. For instance, UAVs can guarantee high-speed and ultra-reliable connectivity while also extending the cellular network coverage to three-dimensional (3D) space <cit.>. In particular, we can temporarily move UAVs to cover Internet-of-Thing (IoT) devices and establish communications therein without high-cost conventional network infrastructures. In this regard, UAV-assisted wireless networks can decrease the operational expenditures and improve the efficiency of various IoT applications. With the proliferation of UAV-assisted wireless access networks, our reliance on IoT applications such as smart farming, smart factory, and public safety will be more pronounced <cit.>. However, to support this IoT trend, a reliable wireless access technology with wide reach and low power consumption is required. In this regard, the so-called long-range (LoRa) communication protocol has been proposed as a promising technology for high energy-efficient and long-range communication <cit.>. These two characteristics make LoRa technology an appropriate solution for battery-constrained IoT devices that are often deployed in dispersed rural areas. A typical LoRa-based IoT network begins with a LoRa-enabled embedded sensor node that sends data to the LoRa gateway. Then, data can be sent from LoRa gateway over cellular network and then routed to application servers located at the network core. One of the key challenges of LoRa-based IoT networks is localization for outdoor environments that is needed for different applications such as navigation and tracking, air traffic control, remote sensing, intelligence, surveillance, and reconnaissance, and search-and-rescue (SAR) operations <cit.>. Existing localization techniques are mainly based on the time difference of arrival (TDOA) and the received signal strength index (RSSI) schemes in wireless LoRa networks <cit.>. In the so-called TDOA portioning methods with LoRa networks, the distances between a LoRa node and each LoRa gateway are estimated through a time of arrival in trilateration approach <cit.>. Thus, this method requires the use of a precise clock to synchronize between all LoRa nodes <cit.>. This implies additional communication overheads and higher and, thus, this solution is not appropriate for low-power and low-cost LoRa device <cit.>. In the RSSI trilateration positioning methods, the end-device location is estimated by RSSI value when it transmits data to the LoRa gateways without requirement of clock synchronization. Thus, RSSI-based techniques are employed to develop positioning functions using RSSI in LoRa networks <cit.>. Several recent works such as in <cit.> and <cit.> analyze RSSI-based LoRa localization system for different scenarios. In <cit.>, the authors proposed six new RSSI-based localization algorithms to reduce the effect of non-Gaussian noise in LoRa networks by either eliminating bad anchor nodes or selecting good anchor nodes during localization. In this work, the performance of all localization algorithms is investigated using simulation model with real-data measurement with developed LoRa localization system. 
In <cit.>, since noise-like electronic interference and blocking can affect the accuracy of localization, the authors propose a new approach to improve the performance of a LoRa-based localization system in noisy outdoor environments. Specifically, the work in <cit.> developed two new localization algorithms based on a traditional localization linear model. The first new localization algorithm locates the noisy measurement using k-mean clustering and then re-calculates the localization outcomes without a node thereby deriving the largest estimated RSSI error. The second algorithm in <cit.> requires the localization error is low if the estimated RSSI errors of the estimated location of the target node to other anchor nodes are small. Then, the best solution is chosen by calculating the estimated RSSI errors in all possible estimated locations. In <cit.>, the authors combine fingerprint-based and model-based RSSI methods to solve the outdoor positioning problem. They adopt an interpolation-based approach to build a 3D model with 36 RSSI sampling points to achieve localization model with higher accuracy. The work in <cit.> proposed an RSSI-based method to accurately identify the location of a vehicle, equipped with a LoRa node, travelling along a known path which is divided into segments of length equal to or shorter than the desired accuracy. Values of the RSSI measured by the LoRa gateways are collected and used to characterize each segments. In <cit.>, the RSSI-based method is proposed for the localization of cattle collars communicating with LoRa radios. In particular, the authors developed an RSSI-based distance estimation using realtime adjustment of RSSI-distance mapping, taking advantage of communication between collar nodes and gateway. However, the works in <cit.> and <cit.> are based on the RSSI method and, thus, they need to deploy large number of anchor points on a large scale outdoors for SAR operation which in not practical in highly remote areas. Moreover, the works in <cit.> and <cit.> do not investigate the potential of a UAV-assisted LoRa networks. Fortunately, employing UAV as a flying gateway in localization and tracking system can bring many attractive advantages due to its high possibility of line-of-sight (LoS) links, high mobility, on-demand deployment and low cost <cit.>. Several recent works such as in <cit.>, and <cit.> have proposed the use of UAVs in the LoRa networks. In <cit.>, the authors implemented a prototype of LoRa-based air quality sensor on a UAV and a web-UI for user to configure the route of UAV and view the sensed data immediately. In <cit.>, a UAV-assisted LoRa architecture is suggested in which UAVs act as relays for the traffic generated between LoRa nodes and a base station (BS). Then, they focus on designing a distributed topology control algorithm that periodically updates the UAV topology to adapt to the movement of the ground-based LoRa nodes. The work in <cit.> proposed measuring the RSSI at a LoRa gateway for indoor, suburban, and urban areas, when the LoRa transmitter is in another indoor location or mounted on a UAV. Their result shows that for a suburban environment, the drone height and antenna orientation have a crucial impact on the RSSI. Specifically, if the transmitting antenna is vertical, a stronger signal is received. In<cit.>, the authors experimentally analyzed and modeled the channel of the UAV-to-ground LoRa links in the urban environments. 
Then, they discussed the dependencies between transmission power, spread factor (SF), RSSI, and signal-to-noise (SNR). However, the prior art in <cit.> and <cit.> did not apply a practical localization method for a UAV-Assisted LoRa Networks, namely when used for SAR operations in the highly remote areas. The works in <cit.>, and <cit.> investigated the use of wearable LoRa radios to foster SAR missions in mountain environments. In <cit.>, the authors designed a localization system for SAR operations using LoRa. In this regard, they have characterized the path loss of a LoRa channel in mountain scenarios. However, the work in <cit.> mainly focused on LoRa channel modeling in a specific scenario without considering a flying LoRa gateway and UAVs. Moreover, the authors in <cit.> did not propose a general adaptive solution for SAR in new unknown environment. In <cit.> and <cit.>, the authors reported the measurements of the excessive aerial path loss for modeling ground-to-UAV links in a real mountain canyon, involving a receiving UAV and a transmitting LoRa radio worn by a volunteer lying on the rocks. They also demonstrated that LoRa radio propagation in the canyon is season-independent. Consequently, it is highly essential to practically design and analyze a localization system that leveraged a UAV-assisted LoRa network for the SAR operation in particular for highly remote areas. This is due to the fact that the ground-to-UAV LoRa link has less path loss compare to the ground-to-ground LoRa link. Moreover, by using the UAV as an FL gateway, it is possible to move the location of gateway in the sky in a quicker and more flexible way, compared to ground scenarios. The main contribution of this paper is the implementation and analysis of a novel artificial intelligence-empowered search-and-rescue operation framework using a UAV-assisted LoRa network that can be applied to different unknown search environments. The proposed approach autonomously adapts the control policy of the UAV trajectory to the spatial geometry of a new search environment thereby allowing the system to determine the unknown location of a lost person. To solve the problem of the FL gateway control in the search-and-rescue system using the UAV-assisted LoRa network, we formulate a stochastic optimization problem whose goal is to maximize an episodic return that includes the received power from LoRa node at lost person over future time slots. Next, we model the FL gateway control problem as a partially observable Markov decision process (POMDP). Then, a deep reinforcement learning (RL) policy is proposed to adaptively control the FL gateway trajectory during SAR operation in a given environment. To find a near optimal solution, a parametric functional form policy is implemented using a deep recurrent neural network (RNN) that can directly search the optimal policies of the FL gateway controllers. Then, to increase the generalizability of the our framework, a control policy using deep meta-RL is designed. By applying deep meta-RL, the controller can integrate the prior FL gateway experience with information collected from the other search environments to train a rapidly adaptive policy model for SAR operation in a new environment <cit.>. To analyze the performance of our proposed framework in the real world, we have experimentally designed and implemented our UAV-assisted LoRa networks including the LoRa end node as well as the FL and ground LoRa (GL) gateways. 
Then, we have conducted extensive experiments at the Mongasht mountain ranges, near Ghaletol city, Khuzestan province, Iran. We have practically tested our SAR system in two different target areas: a wide plain and a slotted canyon. Practical evaluation results show that the FL gateway hovers over the lost person's location after 50 and 141 time slots under the deep meta-RL and deep RL control policies, respectively. Moreover, the average distances of the UAV trajectories under the deep meta-RL and deep RL-based policies from the UAV trajectory under the optimal policy are 619 and 1930 meters, respectively, during the SAR operation time.
The rest of the paper is organized as follows. Section II describes the system model and problem formulation. Section III proposes the deep meta-RL framework for controlling the UAV gateway in different unknown SAR environments. In Section <ref>, we introduce our experimental setup, including the hardware used to implement our UAV-assisted LoRa network, as well as our measurement scenarios. Then, in Section <ref>, we numerically evaluate the performance of our SAR system for highly remote areas and of our proposed deep meta-RL-based UAV control policy, which is trained with real data. Finally, conclusions are drawn in Section <ref>.
§ SYSTEM MODEL AND PROBLEM FORMULATION
§.§ System Model
Consider a UAV-assisted LoRa network composed of a LoRa node, an FL gateway, and one GL gateway. Here, a lost person is equipped with a LoRa node that periodically transmits a known signal, called a beacon, with duration τ. The transmission power of the LoRa node is P_Tx in dB. The location of the lost person is an unknown point of interest (POI) (x_P,y_P) in the search area of interest (SAI), 𝒞⊂ℝ^2. The FL gateway is a LoRa gateway mounted on a UAV and is equipped with GPS and LoRa modules. At each time slot t, the FL gateway transmits a message m_t=[β_t,γ_t,x_t,y_t] to the GL gateway. This message contains the RSSI β_t and SNR γ_t of the LoRa beacon signal received from the LoRa node, as well as the FL gateway location (x_t,y_t). For simplicity, we assume the UAV flies at a fixed height z with a fixed speed v at all times. In our model, the control action of the UAV is a_t∈{E,W,N,S,H}, where E, W, N, S, and H represent the movements toward the four cardinal directions, i.e., east, west, north, and south, as well as hovering at the current location. For example, if a_t=N, then in the next time slot t+1 and during τ, the FL gateway will be at (x_t+1,y_t+1)=(x_t,y_t+vτ). Given the values of β_t and γ_t received in the message m_t at the ground station, the received signal power P_Rx,t at the FL gateway at time slot t can be computed as <cit.>: P_Rx,t=β_t-10log_10(1+10^-γ_t/10). The resulting received power P_Rx,t at the FL gateway from the unknown location of the LoRa node is a random variable. This is due to the fact that the LoRa signals transmitted from the LoRa node antenna often encounter the spatial geometry of the SAI, including random obstacles such as trees and rocks, before reaching the moving FL gateway receiver. The radiating electromagnetic field is reflected, diffracted, and scattered by these various obstacles, commonly resulting in a random multiplicity of rays impinging on the FL gateway antenna <cit.>. Thus, the radio geometry of a given SAI is directly affected by its spatial geometry. Generally, the statistically varying received signal power P_Rx,t over the wireless link between the mobile FL gateway and the LoRa node is modeled as follows <cit.>: P_Rx,t=10log_10ν^2+ω+10log_10g(d_t)+P_Tx+10log_10(G_TxG_Rx),
where 10log_10g(d_t)+P_Tx+10log_10(G_TxG_Rx) is the far-field average power P̅_Rx,t, ω is the shadow-fading random variable due to large obstacles, and ν refers to the multipath fading, which results from the rate of change of the signal being proportional to the FL gateway velocity. Here, d_t=√((x_t-x_P)^2+(y_t-y_P)^2+z^2) represents the distance between the FL gateway and the unknown location of the lost person at time slot t. Given the random spatial geometry of each SAI, the resulting radio geometry parameters, such as ω, ν, and the function g, are unknown. In our model, we consider the worst-case scenario in which there is no available model for the radio geometry parameters, because the SAI is generally unknown in SAR operations. However, g is a decreasing function with respect to d_t in UAV-assisted LoRa networks <cit.>. We then define a view circle centered at the FL gateway with radius d_t as follows: 𝒞_t={(x,y)| (x_t-x)^2+(y_t-y)^2+z^2≤ d_t^2 }. Indeed, from the point of view of the FL gateway, the possible location of the lost person is on the edge of this view circle 𝒞_t at time slot t. Since the function g decreases with the distance d_t, if the FL gateway moves toward the lost person correctly, the radius of this circle decreases.
Fig. <ref> illustrates our smart SAR system using the UAV-assisted LoRa network during three consecutive time slots t, t+1, and t+2. During these time slots, the portable rescue station equipped with the GL gateway receives three messages m_t, m_t+1, and m_t+2. As we can see in Fig. <ref>, the FL gateway has the view circles 𝒞_t, 𝒞_t+1, and 𝒞_t+2 at time slots t, t+1, and t+2, respectively. According to the messages received from the FL gateway, the possible locations of the lost person lie on the edges of these view circles. As we can see in Fig. <ref>, using the FL gateway control algorithm, the FL gateway moves in the direction that increases the received power over the time slots, i.e., P_Rx,t<P_Rx,t+1<P_Rx,t+2. Thus, the FL gateway moves toward the unknown location of the lost person over the time slots. Note that, during the SAR operation, the GL gateway at the portable rescue station receives data messages from the FL gateway over LoRa links. Thus, the FL gateway control algorithm runs at the portable rescue station, and the whole SAR operation is monitored there in a real-time manner. Considering the stochastic changes in the received power at the FL gateway, designing an FL gateway control policy that moves the UAV toward the lost person's location is highly challenging, particularly for different SAI scenarios with unknown radio geometry.
§.§ Problem formulation
Our goal is to characterize the FL gateway control policy that moves the UAV toward the lost person over a future finite horizon 𝒯_t={t'|t'=t+1,...,t+T} of length T time slots. The objective of this policy is to minimize the size of the set of possible lost-person locations, |𝒞_t|, over the 2D ground plane. Following (<ref>) and (<ref>), and since g is a decreasing function with respect to d_t in UAV-assisted LoRa networks, minimizing the set size |𝒞_t| is equivalent to increasing the received power P_Rx,t over the corresponding time slots. The FL gateway control policy at a given slot t depends on the unknown radio geometry of the SAI, which is a consequence of the stochastic nature of the wireless channel. Formally, we define a policy Π_t={a_t'| ∀ t' ∈𝒯_t} for the controller that assigns the next location of the FL gateway.
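Before formalizing the control problem, the following minimal Python sketch illustrates how the received power computed from the reported RSSI and SNR and the view-circle radius d_t could be evaluated from a message m_t; the function names and the sample RSSI/SNR values are illustrative assumptions and not part of the implemented system.

import math

def received_power_dbm(rssi_dbm: float, snr_db: float) -> float:
    # Received signal power at the FL gateway from the reported RSSI and SNR,
    # following P_Rx,t = beta_t - 10*log10(1 + 10^(-gamma_t/10)).
    return rssi_dbm - 10.0 * math.log10(1.0 + 10.0 ** (-snr_db / 10.0))

def view_circle_radius(x_fl: float, y_fl: float, z: float, x_p: float, y_p: float) -> float:
    # Radius d_t of the view circle C_t centered at the FL gateway; this is computable
    # only when (x_P, y_P) is known, e.g., in simulation or post-hoc analysis.
    return math.sqrt((x_fl - x_p) ** 2 + (y_fl - y_p) ** 2 + z ** 2)

# Example message m_t = [beta_t, gamma_t, x_t, y_t] with illustrative values:
beta_t, gamma_t = -97.0, 6.5   # RSSI (dBm) and SNR (dB) reported by the FL gateway
print(received_power_dbm(beta_t, gamma_t))   # approximately -97.9 dBm

For instance, with the illustrative values above, the SNR term subtracts only about 0.9 dB from the reported RSSI; as the link degrades toward lower SNR, this correction becomes more significant.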
Consequently, we formulate the FL gateway control problem in our SAR system as follows: max_{Π_t}∑_t'=t+1^t+Tδ^(t'-t) P_Rx,t', s.t. a_t'∈{E,W,N,S,H}, ∀ t' ∈𝒯_t, where δ is a discount factor. Maximizing the objective function in (<ref>) ensures that the received power at the FL gateway increases while the view-circle area of the FL gateway decreases. Thus, the FL gateway moves toward the lost person over the considered time slots. In practice, the solution of (<ref>) faces the following challenges. First, since the location of the lost person is unknown, it is difficult to obtain a closed-form expression of the objective function in (<ref>). Second, the received power is a random variable because the radio geometry of the wireless channel is dynamic and unknown. The complexity of the stochastic optimization problem in (<ref>) becomes more significant due to the unknown probabilities of possible random network changes, such as the fading over the LoRa links and the user's location. Thus, the FL gateway control problem in (<ref>) is a stochastic optimization problem that does not admit a closed-form solution and has exponential complexity <cit.>. Therefore, we propose a framework based on the principles of deep meta-RL for SAR operation in different unknown SAIs to solve the optimization problem in (<ref>) with low complexity and in an adaptive manner. The proposed deep meta-RL method for FL gateway control in the UAV-assisted LoRa network only takes the initial UAV position and the RSSI and SNR of the received LoRa beacon signal as input, and outputs the UAV trajectory after several episodes to move the UAV toward the location of the lost person.
§ DEEP META-REINFORCEMENT LEARNING FOR SAR OPERATION
In this section, we present the proposed adaptive control policy based on a deep meta-RL framework to solve the FL gateway control problem in (<ref>). Traditional policy gradient-based RL algorithms can only determine the adaptive FL gateway control policy in a fixed SAI with a fixed radio geometry. However, the meta-RL framework <cit.> is a novel learning approach that can integrate the prior FL gateway experience with information collected from other SAI radio geometries to train a rapidly adaptive policy model for SAR operation in a new SAI. Therefore, the proposed deep meta-RL can obtain FL gateway control policies that can be quickly updated to adapt to new radio geometry properties using only a few further training steps. Next, we first introduce the deep RL algorithm for an adaptive FL gateway control policy in a given environment. Then, we explain the framework of the deep meta-RL algorithm to train a rapidly adaptive policy model for a new SAI using the information previously collected from the given environment.
§.§ Deep RL framework for a given environment
We model the problem in (<ref>) as a partially observable Markov decision process (POMDP) represented by the tuple {𝒮,𝒜,𝒪,P,R,o_0}, where 𝒮 is the state space, 𝒜 is the action space, 𝒪 is the observation space, P is the stochastic state transition function with P(s',s,a) = ℙ(s_t+1= s'|s_t = s, a_t = a), R_t(a_t,s_t) is the immediate reward function, and o_0 is the initial observation for the controller of the UAV that moves the FL gateway <cit.>. Following the POMDP formulation, the required components of our proposed framework for a given SAI 𝒞 are specified as follows:
* Agent: the controller of the UAV that moves the FL gateway.
* Actions: the control action of the agent at each time slot t is a_t∈𝒜.
The action space 𝒜={E,W,N,S,H} is the set of all possible actions, i.e., moving east, west, north, or south, and hovering.
* Observations: the observation at time slot t consists of the RSSI and SNR of the LoRa beacon signal received by the FL gateway, together with the current location of the UAV, which are reported in the message m_t. Thus, o_t=[β_t,γ_t,x_t,y_t], where (x_t,y_t) ∈𝒞. The observation space 𝒪 is the set of all possible observations.
* States: the state at time slot t consists of the radio geometry characteristics, including the shadow-fading random variable ω, the multipath fading random variable ν, and the unknown decreasing function g of the LoRa link between the FL gateway and the LoRa node, which are not observable due to the unknown location of the lost person. Since the problem is a POMDP, we consider the observation history during the H-consecutive previous time slots as the state <cit.>. Hence, the state at time slot t is ℋ_t=∪_h=0^H-1{β_t-h,γ_t-h,a_t-h}, containing the RSSI and SNR of the received LoRa beacon signal and the FL gateway control action at time slot t-h. The state space 𝒮 is the set of all possible histories.
* Immediate reward: if the distance between the FL gateway and the LoRa node decreases, the received power at the FL gateway increases. Thus, we define the immediate reward as the power received by the FL gateway at time slot t, R_t=P_Rx,t, which is given by (<ref>).
* Episodic return: if Λ_t= ∪_∀ t' ∈𝒯_t{a_t',o_t'} is a trajectory of the POMDP during the future T-consecutive time slots, then the stochastic episodic reward function during these time slots is defined as R_T,t=∑_t'=t+1^t+TR_t'.
* Policy: for a given state, the policy is defined as the probability of the agent choosing each action. Our framework uses a functional-form policy parameterized by a vector θ to map the input state to the output action. Hence, the policy is expressed as π_θ(a_t,ℋ_t)=ℙ(a_t|ℋ_t).
The purpose of deep RL is to find the optimal policy that maximizes the episodic return at the FL gateway of the UAV-assisted LoRa network. Given the policy π_θ and the stochastic changes over the wireless LoRa link, the unknown probability of a trajectory Λ_t during the future T-consecutive time slots is equal to ℙ(Λ_t,θ)=∏_∀ t' ∈𝒯_tπ_θ(a_t',ℋ_t')ℙ{o_t'+1|a_t',o_t'}. For a given SAI, we define the average episodic return for the parameter vector θ at time slot t as J_t(θ) =∑_∀Λ_tℙ(Λ_t,θ) R_T,t. Given the parametric functional-form policy π_θ, the goal of the FL gateway controller is to solve the following optimization problem: max_{θ∈ℝ^N} J_t(θ), s.t. 0≤π_θ(a_t',ℋ_t') ≤ 1, ∀ a_t'∈𝒜,∀ t' ∈𝒯_t, ∑_∀ a_t'∈𝒜π_θ(a_t',ℋ_t') = 1, ∀ t' ∈𝒯_t, where T≪ N and N is the number of parameters in the parametric functional-form policy π_θ. To solve the optimization problem in (<ref>), the FL gateway controller must have full knowledge of the transition probability ℙ(Λ_t,θ) and all possible values of R_T,t for all trajectories Λ_t of the POMDP under policy π_θ. However, acquiring this knowledge is not feasible, especially for dynamic LoRa wireless channels between a mobile FL gateway and a LoRa node at an unknown location. To overcome this challenge, we propose to combine a deep neural network (DNN) with the policy gradient-based RL method. Such a combination was shown in <cit.>, where a DNN learns a mapping from the partially observed state to an action without requiring any lookup table of all trajectories of observation values and policies over time. Consequently, we use a deep RL algorithm that includes a DNN to approximate the policy π_θ for solving (<ref>).
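To make the POMDP components concrete, the following Python sketch shows one possible way to maintain the H-slot history ℋ_t, the immediate reward R_t=P_Rx,t, and a discounted return over a future horizon; the class name, buffer layout, and default discount value are illustrative assumptions, not the authors' implementation.

import math
from collections import deque

class PomdpBookkeeper:
    # Illustrative bookkeeping for the FL gateway POMDP (names and structure are hypothetical).

    def __init__(self, history_len: int, discount: float = 0.95):
        self.history = deque(maxlen=history_len)   # H consecutive (beta, gamma, action) triples
        self.discount = discount                   # discount factor delta in the objective

    def record(self, beta_t: float, gamma_t: float, action_t: str) -> None:
        # Each slot contributes the RSSI and SNR of the beacon plus the action taken.
        self.history.append((beta_t, gamma_t, action_t))

    def state(self):
        # The partially observed state fed to the policy is the whole H-slot history H_t.
        return list(self.history)

    @staticmethod
    def reward(beta_t: float, gamma_t: float) -> float:
        # Immediate reward R_t = P_Rx,t computed from the reported RSSI and SNR.
        return beta_t - 10.0 * math.log10(1.0 + 10.0 ** (-gamma_t / 10.0))

    def discounted_return(self, future_rewards) -> float:
        # Discounted return over a horizon of T future slots, mirroring the objective above.
        return sum(self.discount ** (k + 1) * r for k, r in enumerate(future_rewards))

In practice, such a buffer would play the role of the experience-memory entries ℋ_t used below, with the action stored as an index into 𝒜 rather than a string.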
Our proposed DNN for the deep RL method is presented in Fig. <ref>. Here, the parameter vector θ∈ℝ^N includes the weights over all connections of the proposed DNN, where N is equal to the number of connections <cit.>. The layers of the proposed deep NN implementing the policy π_θ are defined as follows:
* Input layer: the input of the proposed deep RL policy at time slot t is the history of the POMDP during the H-consecutive previous time slots, ℋ_t. Unlike traditional DNN layers, we use a long short-term memory (LSTM) layer of size H at the input of our policy DNN. This LSTM layer is an RNN that learns long-term dependencies between time steps in the sequence of POMDP trajectory data. This is due to the fact that, in our model, the wireless channel affecting the POMDP state transitions continuously depends on the spatiotemporal locations of the UAV and the radio geometry of the SAR operation area. Thus, we need a deep RL method that aggregates the observations over the wireless links during the previous time slots of the UAV trajectory and makes a more precise prediction of the next state of the POMDP <cit.>. Indeed, we use the LSTM layer in the policy function to persist the hidden RSSI and SNR states across the previous FL gateway trajectory for continued adaptation to the radio geometry of a given environment.
* Hidden layers: the hidden layers include fully connected and sigmoid layers. A sigmoid layer applies a sigmoid function to the input such that the output is bounded in the interval (0,1). A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.
* Output layer: the output layers include softmax and classification layers. The softmax layer applies a softmax function to the input. A classification layer computes the cross-entropy loss for classification and weighted classification tasks with mutually exclusive classes. In our model, the output indicates the indices of the actions in the action space 𝒜. More precisely, the output y_t=[y_i,t]∈ℝ^T is a vector of size T, where each element i indicates the action index of the FL gateway at future time slot t+i.
The gradient of the objective function in (<ref>) is ∇_θJ_t(θ)= ∑_∀Λ_t∇_θℙ(Λ_t,θ) R_T,t. Since ∇_θlogℙ(Λ_t,θ)=∇_θℙ(Λ_t,θ)/ℙ(Λ_t,θ), we can write ∇_θJ_t(θ)= 𝔼_Λ_t[∇_θlogℙ(Λ_t,θ) R_T,t]. Here, ℙ(Λ_t,θ)=∏_t'=t+1^t+Tπ_θ(a_t',ℋ_t')ℙ{o_t'+1|o_t',a_t'} and ∇_θℙ{o_t'+1|o_t',a_t'}=0. Thus, ∇_θJ_t(θ)= 𝔼_Λ_t[∑_t'=t+1^t+T∇_θlogπ_θ(a_t',ℋ_t')R_T,t]. Given a sufficient number M of sampled trajectories Λ_t_m, one can approximate the expectation with a sample-based estimator of ∇_θJ_t(θ). As a result, we use the gradient-ascent algorithm to train the deep RL policy π_θ as follows: ∇_θJ_t(θ)≈1/M∑_m=1^M ( ∑_t'=t_m+1^t_m+T∇_θlogπ_θ(a_t',ℋ_t') R_T,t'), θ←θ+α_RL∇_θJ_t(θ), where α_RL is the reinforcement learning rate. As shown in Fig. <ref>, the training batch 𝒟_RL,train is randomly selected from the experience memory ℳ. Thus, a batch 𝒟_RL,train of M samples is available to train the deep NN of π_θ over time. Each training sample m in 𝒟_RL,train includes the history during the H-consecutive time slots before time slot t_m, ℋ_t_m, and the actions in the trajectory of the future T-consecutive time slots after time slot t_m, Λ_t_m. In summary, we implement the parametric functional-form policy π_θ with the proposed deep NN in Fig. <ref>. The proposed deep RL algorithm for FL gateway control is summarized in Algorithm <ref>.
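A compact PyTorch sketch of a recurrent policy network with the layer structure described above (an LSTM input layer over the H-slot history, fully connected and sigmoid hidden layers, and a softmax output over the five actions), together with a REINFORCE-style gradient-ascent step of the kind used in the sample-based estimator, is given below; the layer sizes, optimizer, and learning rate are assumed values, and the code is a schematic sketch rather than the exact network of Fig. <ref>.

import torch
import torch.nn as nn

ACTIONS = ["E", "W", "N", "S", "H"]          # action space

class FLGatewayPolicy(nn.Module):
    # Recurrent policy pi_theta: maps an H-slot history of (RSSI, SNR, action index)
    # triples to a probability distribution over the five control actions.

    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_size, 32),
            nn.Sigmoid(),                     # sigmoid hidden layer
            nn.Linear(32, len(ACTIONS)),
            nn.Softmax(dim=-1),               # softmax output layer
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: tensor of shape (batch, H, 3) holding [beta, gamma, action_index] per slot
        _, (h_n, _) = self.lstm(history)
        return self.head(h_n[-1])

def reinforce_step(policy: FLGatewayPolicy, optimizer, batch):
    # One gradient-ascent step on a sample-based policy-gradient estimator.
    # For clarity, each sample is a single (history, action_index, episodic_return) step;
    # the paper's estimator sums such terms over a full T-slot trajectory.
    loss = torch.zeros(())
    for history, a_idx, ret in batch:
        probs = policy(history.unsqueeze(0)).squeeze(0)
        loss = loss - torch.log(probs[a_idx]) * ret   # descending -log(pi)*R ascends J(theta)
    optimizer.zero_grad()
    (loss / len(batch)).backward()
    optimizer.step()

# Illustrative usage (assumed hyper-parameters):
# policy = FLGatewayPolicy()
# optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

In principle, the same recurrent backbone could also implement π_ϕ in the adaptation phase of the deep meta-RL framework described below, with only the update rule changed.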
During the SAR time, the deep RL policy is trained with probability ξ based on the gradient-ascent algorithm in (<ref>). In the training phase, the deep RL policy is adaptively trained using the training data set from the experience memory ℳ. In the update phase, the experience memory is updated with the histories and trajectories over the time slots. The FL gateway moves using this deep RL-based algorithm until the reward R_t exceeds the defined target reward R_target. This means that the UAV is close enough to the lost person and the received power is more than R_target.
The spatial geometry of a given environment affects the channel fading due to the blockage, reflection, and refraction of wireless waves. Thus, the radio geometry of a given environment is directly affected by its spatial geometry. However, unlike ground-to-ground wireless links, a UAV-to-ground LoRa link is more robust to the fading effects resulting from the spatial geometry. Due to this fact, compared to ground-to-ground LoRa links, the radio geometry information resulting from the UAV-to-ground LoRa link is more stable. Moreover, given the radio geometry information resulting from the UAV-to-ground LoRa link, the knowledge gained while learning the FL gateway control policy in a given spatial geometry could be applied when learning a new FL gateway control policy in another spatial geometry. Consequently, we next use the information of a trained deep RL policy for FL gateway control in a given environment to design a control policy for SAR operation in a new SAI.
§.§ Deep meta-RL framework for new environments
We introduce the deep meta-RL framework to use the information from a given environment to design a control policy for a new, unknown radio geometry. Compared to the deep RL policy, the proposed deep meta-RL policy can integrate the prior experience in one environment with information collected from the FL gateway movement in a new search environment, thus training a rapidly adaptive learning model for FL gateway control. The meta-training procedure requires a realization set of states and near-optimal policies. Here, we use the realization set of states and actions from successful SAR operations in different SAIs to design a policy for a new SAI. In this case, we define our tasks as follows:
* Tasks: given the history of the POMDP, a task 𝒯 is the realization of the FL gateway control policy that solves the optimization problem in (<ref>) at each time slot t in an environment with a specific radio geometry. Thus, for a given SAI environment, 𝒯_t_k={ℋ_t_k∪Λ_t_k} includes the history and the trajectory under the control policy at time slot t_k of the SAR operation, which is accessible from the experience memory in Fig. <ref>.
* Meta-train dataset: 𝒟_Meta-train=∪_k=1^K 𝒯_t_k is defined as K different tasks from previous successful SAR operations.
* Meta-test dataset: 𝒟_Meta-test=∪_e=1^E {ℋ_t_e∪ a_t_e} is defined as E different histories and actions in the new environment under the target policy π_ϕ,ψ.
Here, we use the idea of the most popular policy-gradient meta-RL method, namely model-agnostic meta-learning (MAML), to design the FL gateway control policy in the new search area of interest using the information of previous successful SAR operations <cit.>. During the meta-training procedure, a meta-train dataset 𝒟_Meta-train is first sampled from the experience memory of previous successful SAR operations.
Then, the meta-RL method collects experience information in a variable z from 𝒟_Meta-train and uses the deep meta-RL policy function π_ϕ,ψ to predict actions given the history ℋ_t of the new environment. Indeed, the deep meta-RL policy function factorizes as π_ϕ,ψ= π_ψπ_ϕ,z, in which π_ψ=ℙ(z) is the probability of the experience information variable z and π_ϕ,z=ℙ(a_t|ℋ_t,z) is the probability of choosing action a_t given the history ℋ_t of the new environment and the encoded experience information z from previous successful SAR operations. More concretely, the objective of the meta-training procedure is as follows: max_{π_ϕ,ψ} J_t(ϕ,ψ), s.t. 0≤π_ϕ,ψ(a_t',ℋ_t') ≤ 1, ∀ a_t'∈𝒜,∀ t' ∈𝒯_t, ∑_∀ a_t'∈𝒜π_ϕ,ψ(a_t',ℋ_t') = 1, ∀ t' ∈𝒯_t, where the parametric functional form π_ψ encodes the experience information from the tasks in the meta-train dataset 𝒟_Meta-train to help π_ϕ,z find the optimal policy in the new environment. In our deep meta-RL framework, we use a DNN to approximate the policy π_ψ, where the parameters ψ include the weights over all connections of the DNN. Given a sufficient number M_1 of sampled trajectories Λ_t_m_1 in the dataset 𝒟_Meta-test from the experience memory of the new SAI, one can approximate the expectation with a sample-based estimator of ∇_ϕ,ψJ_t(ϕ,ψ). Here, ∇_ϕlogπ_ϕ,ψ=∇_ϕlog(π_ψπ_ϕ,z), which is equal to ∇_ϕlogπ_ψ+∇_ϕlogπ_ϕ,z. Thus, for a given ψ_0, z_0=π_ψ_0(𝒟_Meta-train) and ∇_ϕlogπ_ϕ,ψ_0=∇_ϕlogπ_ϕ,z_0. In this case, we use the gradient-ascent algorithm to train the deep meta-RL policy π_ϕ,ψ with respect to ϕ as follows: ∇_ϕJ(ϕ,ψ_0)≈ 1/M_1∑_m_1=1^M_1( ∑_t'=t_m_1+1^t_m_1+T∇_ϕlogπ_ϕ,z_0(a_t',ℋ_t') R_T,t'), z_0=π_ψ_0(𝒟_Meta-train), ϕ←ϕ+α_Meta-RL,1∇_ϕJ(ϕ,ψ_0). Since the parameters ϕ are updated based on the data collected in 𝒟_Meta-test of the new environment, the phase of updating ϕ is called the adaptation phase. Here, α_Meta-RL,1 is the meta-RL learning rate for the adaptation phase. For a given ϕ_0, we have ∇_ψlogπ_ϕ_0,ψ=∇_ψlogπ_ψ+∇_ψlogπ_ϕ_0,z, which is equal to ∇_ψlogπ_ψ. Given a sufficient number M_2 of sampled trajectories Λ_t_m_2 in the dataset 𝒟_Meta-train from the experience memory of previous successful SAR operations in different environments, we use the gradient-ascent algorithm to train the deep meta-RL policy π_ϕ_0,ψ with respect to ψ as follows: ∇_ψJ(ϕ_0,ψ)≈ 1/M_2∑_m_2=1^M_2( ∑_t'=t_m_2+1^t_m_2+T∇_ψlogπ_ψ(a_t',ℋ_t') R_T,t_m_2), ψ←ψ+α_Meta-RL,2∇_ψJ(ϕ_0,ψ), where α_Meta-RL,2 is the meta-RL learning rate for the meta-train set. As shown in Fig. <ref>, the training set 𝒟_Meta-train is randomly selected from the experience memory of previous successful SAR operations in different environments. In contrast, each adaptation sample m_1 includes the history during the H-consecutive time slots before time slot t_m_1, ℋ_t_m_1, and the actions in the trajectory of the future T-consecutive time slots after time slot t_m_1, Λ_t_m_1, in the new environment. In summary, we implement the parametric functional-form policy π_ϕ,ψ with the proposed deep NN in Fig. <ref>. The proposed deep meta-RL algorithm for FL gateway control is summarized in Algorithm <ref>. In the meta-training phase, the deep meta-RL policy is trained using the training data set from the experience memory of successful SAR operations in previous environments, while in the adaptation phase, the experience memory containing the histories and trajectories of the new environment during the SAR operation is used.
§ EXPERIMENTAL SETUP
Our implementation of the UAV-assisted LoRa network and the environments of our experimental testbed are presented and discussed in this section.
§.§ Considered Hardware
Our experimental setup of the UAV-assisted LoRa network includes the LoRa end node and the FL and GL gateways. In our setup, all of the designed nodes and gateways are powered by lithium polymer (Li-Po) batteries with an output voltage of 3.7 V and a capacity of 1100 mAh. We have designed our own printed circuit boards (PCBs) on which the LoRa node and FL gateway are assembled. Using the ATmega328, a single-chip microcontroller created by Atmel in the megaAVR family, the LoRa node and FL gateway are assembled and programmed in the C language <cit.>. In our setup, we use the LoRa Ra-02 module from Ai-Thinker, which is equipped with a Semtech SX1278 LoRa core <cit.>, <cit.>. The designed LoRa node has a LoRa module connected to a commercial external folded dipole antenna and transmits a beacon packet on the 433 MHz ISM frequency band. The details of the LoRa node are shown in Fig. <ref>. The FL gateway board is equipped with LoRa Ra-02 and GPS modules, in which the LoRa Ra-02 module is connected to a 433 MHz antenna and the GPS module has a square-shaped GPS receiver antenna. The design of the FL gateway board is shown in Fig. <ref>. The dimensions of the FL gateway board are 295 mm×60 mm×20 mm and its weight is 100 g. Thus, the designed FL gateway board is light enough to be mounted on commercial drones and UAVs. We mount the FL gateway board on a DJI Phantom 4 Pro to set up the FL gateway. To obtain the maximum antenna radiation, we firmly fix the gateway node under the DJI Phantom 4 Pro, with the LoRa antenna pointing vertically toward the earth's surface. In our setup, the FL gateway receives the beacon packet using its LoRa module and the GPS coordinates of the UAV using its GPS module. Then, the FL gateway retransmits a data packet containing the RSSI and SNR of the received LoRa beacon signal and the GPS coordinates of the UAV over the UAV-to-ground LoRa link, to be received by the GL gateway. The GL gateway consists of a LoRa Ra-02 module connected to a 433 MHz antenna and our designed board. The head of the rescue operation can connect to the GL gateway via a USB cable or a Bluetooth wireless link. The GL gateway setup is shown in Fig. <ref>. The GL gateway receives data packets from the FL gateway using its LoRa module and forwards them to a portable computer, such as a laptop, over the USB cable, or to a mobile phone over the Bluetooth link. In this way, the rescue operation can be monitored and analyzed in a real-time manner using the data gathered on the computer or mobile phone. The data packets received in real time are used to run our proposed deep reinforcement learning algorithms. Following our proposed deep RL algorithms, the control policy suggests the next movement of the UAV, and the pilot moves the UAV in that direction. Using an application written in Python, the trajectory of the UAV is depicted for the head of the rescue operation. In this way, the UAV moves the FL gateway toward the location of the LoRa end node, and the location of the lost person is found quickly.
§.§ Experimental Testbed
The measurements are carried out at the Mongasht mountain ranges, near the areas of Ghaletol city, Khuzestan province, Iran. For the lost person, the LoRa node is held in a student's hand while the student moves to an unknown location in the target area. For the FL gateway, the gateway node is mounted on the UAV, which hovers at an altitude of 300 meters. This elevation is high enough that the UAV does not hit any obstacles, such as mountains or trees, on its path.
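As an illustration of the real-time monitoring chain described in the Considered Hardware subsection, the short Python sketch below shows how the portable rescue station could read and parse a forwarded FL gateway message m_t=[β_t,γ_t,x_t,y_t] from the GL gateway over the USB serial link; the port name, baud rate, and comma-separated packet format are assumptions made for illustration and do not reflect the exact firmware protocol.

import serial   # pyserial

def read_fl_gateway_message(port: str = "/dev/ttyUSB0", baud: int = 9600):
    # Read one message m_t = [beta_t, gamma_t, x_t, y_t] forwarded by the GL gateway.
    # Assumes (hypothetically) that the gateway prints comma-separated lines such as
    # "-97.0,6.5,31.5204,49.8765", i.e., RSSI (dBm), SNR (dB), and the FL gateway coordinates.
    with serial.Serial(port, baud, timeout=5) as link:
        line = link.readline().decode(errors="ignore").strip()
    beta_t, gamma_t, x_t, y_t = (float(v) for v in line.split(","))
    return beta_t, gamma_t, x_t, y_t

Each parsed message can then be appended to the experience memory and fed to the control policy running at the rescue station.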
Since we are interested in evaluating our SAR system under different radio geometries, including both LoS and NLoS links, we have conducted our experiments in two different target areas: a wide plain and a slotted canyon. When the target area of the lost person is a wide plain, there is a LoS path between the GL or FL gateway and the LoRa node. However, when the lost person is in a slotted canyon, there may not be any LoS path between the GL or FL gateway and the node, and the LoRa node is in the shadowing area of the mountain walls around the slotted canyon. The experimental scenarios, i.e., the plain and canyon scenarios, are shown in Fig. <ref>.
§ PERFORMANCE ANALYSIS
In this section, we evaluate the performance of our proposed deep meta-reinforcement learning approach for SAR operation using the real measurements of the UAV-assisted LoRa network for the scenarios at the Mongasht mountain ranges. In our measurements, the duration of each time slot is 2 seconds and the UAV speed is 20 meters per second. The maximum lifetime of one UAV battery is 20 minutes. We compare the deep learning policies with the optimal and greedy policies as benchmarks. In the optimal policy, we give the (otherwise unknown) location of the lost person to the UAV pilot, and the UAV then moves directly toward the target location. Under the greedy policy, there are two phases: sense and action. The UAV starts the sense phase with a probability of 0.1, in which the UAV sequentially moves north, east, south, and west for 3 time slots to measure the received power. The UAV starts the action phase with a probability of 0.9, in which the UAV moves in the direction with the highest received power from the previous sense phase (a schematic sketch of this baseline is provided below). In Fig. <ref>, we show the received power at the FL gateway versus the rescue time slots in the wide plain environment. From Fig. <ref>, we observe that the received power at the FL gateway increases within fewer time slots under the optimal policy. However, when the greedy algorithm is used, the received power at the FL gateway increases more slowly. The performance of the proposed deep learning policy lies between those of the optimal and greedy policies. As we can see in Fig. <ref>, when the UAV initial location is 0.8R away from the lost person, the received power at the FL gateway reaches its maximum value around SAR time slots 300, 700, and 800 under the optimal, deep RL, and greedy algorithms, respectively. Moreover, in the plain environment, the greedy and deep learning policies converge and the UAV eventually finds the lost person, because in the plain environment there is mostly a LoS link between the lost person and the FL gateway. Under the deep learning policy, when the initial location of the UAV is at 0.4 of the radius of the plain environment, the UAV finds the lost person at time slot 400; however, if the UAV initial location is at 0.8 of the radius of the plain environment, the lost person is found at time slot 800. In Fig. <ref>, we show the received power at the FL gateway versus the rescue time slots in the slotted canyon environment. For the deep meta-RL policy, we have used the data of previous experiences from different SAR operations in the plain environment, where the initial locations of the UAV were varied and the SAR data was saved in the memory. This data is then used as 𝒟_Meta-train in Algorithm 2. From Fig. <ref>, we observe that the received power at the FL gateway under the deep meta-RL policy reaches its maximum value at SAR time slot 50, while the deep RL policy moves the FL gateway to the location with the maximum received power at time slot 141.
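As referenced above, a schematic Python sketch of the greedy sense/act baseline is given here; the class name, the callbacks move() and measure_power(), and the default parameters simply restate the behaviour described in the text and are not the exact implementation used in the experiments.

import random

class GreedyBaseline:
    # Schematic sense/act greedy policy used as a benchmark.

    DIRECTIONS = ["N", "E", "S", "W"]

    def __init__(self, sense_prob: float = 0.1, slots_per_direction: int = 3):
        self.sense_prob = sense_prob
        self.slots = slots_per_direction
        self.best_direction = random.choice(self.DIRECTIONS)

    def step(self, move, measure_power) -> None:
        # move(direction, n_slots) and measure_power() are assumed callbacks to the UAV pilot loop.
        if random.random() < self.sense_prob:
            # Sense phase: probe north, east, south, and west and record the received power.
            readings = {}
            for d in self.DIRECTIONS:
                move(d, self.slots)
                readings[d] = measure_power()
            self.best_direction = max(readings, key=readings.get)
        else:
            # Action phase: keep moving toward the best direction from the last sense phase.
            move(self.best_direction, 1)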
Moreover, on average, the received power at the FL gateway under the deep meta-RL policy is 25% higher than under the deep RL policy. In Fig. <ref>, we show the FL gateway's horizontal distance from the lost person during the SAR time slots in the slotted canyon environment. From Fig. <ref>, we observe that the deep meta-RL and deep RL-based policies eventually find the lost person; however, the greedy algorithm does not converge to the lost person's location during the SAR operation. As we can see in Fig. <ref>, the UAV hovers at a height of 300 m over the lost person after 31 time slots under the optimal policy, while the deep meta-RL and deep RL-based policies find the lost person at time slots 51 and 141, respectively. Moreover, the FL gateway's horizontal distance from the lost person under the deep meta-RL-based policy is on average 26% smaller than under the deep RL-based policy. In Fig. <ref>, we show the UAV trajectory during the SAR operation in the slotted canyon. From Fig. <ref>, we observe that the UAV under the deep meta-RL and deep RL-based policies eventually hovers over the lost person's location. The greedy algorithm moves the UAV toward the lost person's location, but the UAV is not able to hover over the exact location of the lost person during the SAR operation time. As we can see in Fig. <ref>, the average distances of the UAV trajectories under the deep meta-RL and deep RL-based policies from the UAV trajectory under the optimal policy are 619 and 1930 meters, respectively, during the SAR operation time.
§ CONCLUSION
In this paper, we have introduced a smart SAR system based on a UAV-assisted LoRa network for highly remote areas such as mountain ranges. More precisely, we have designed and implemented an artificial intelligence-empowered SAR operation framework for different unknown search environments. We have modeled the problem of FL gateway control in the SAR system using the UAV-assisted LoRa network as a partially observable Markov decision process. Then, we have proposed a deep meta-RL-based policy to control the FL gateway trajectory during the SAR operation. For the initialization of our deep meta-RL-based policy, a deep RL-based policy first determines the adaptive FL gateway trajectory in a fixed search environment with a fixed radio geometry. Then, as a general solution, our deep meta-RL framework is used for SAR in any new environment. Indeed, the deep meta-RL-based policy integrates the prior FL gateway experience with information collected from other search environments to rapidly adapt the SAR policy model to a new environment. We have experimentally implemented the UAV-assisted LoRa network and tested our proposed SAR system in two different real areas: a wide plain and a slotted canyon at the Mongasht mountain ranges, Iran. Practical evaluation results show that if the deep meta-RL policy is applied instead of the deep RL one to control the UAV, the number of SAR time slots decreases from 141 to 50. Moreover, the average distances of the UAV trajectories under the deep meta-RL and deep RL-based policies from the UAV trajectory under the optimal policy are 619 and 1930 meters, respectively, during the SAR operation time.