Dataset Viewer (auto-converted to Parquet)
Columns: entry_id (string, 33 chars), published (string, 14 chars), title (string, 10–200 chars), authors (sequence), primary_category (string, 5–18 chars), categories (sequence), text (string, 2–817k chars)
http://arxiv.org/abs/2306.17797v1
20230620082028
HIDFlowNet: A Flow-Based Deep Network for Hyperspectral Image Denoising
[ "Li Pang", "Weizhen Gu", "Xiangyong Cao", "Xiangyu Rui", "Jiangjun Peng", "Shuang Xu", "Gang Yang", "Deyu Meng" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Author affiliations: Xi'an Jiaotong University (No. 28, Xianning West Road, Xi'an, Shaanxi, China 710049); Nankai University (No. 38, Tongyan Road, Tianjin, China); Northwest A&F University (No. 22, Xinong Road, Yangling District, Xianyang, Shaanxi, China); University of Science and Technology of China (No. 96, Jinzhai Road, Baohe District, Hefei, Anhui, China). Corresponding author: Xiangyong Cao (Xi'an Jiaotong University). Hyperspectral image (HSI) denoising is essentially ill-posed since a noisy HSI can be degraded from multiple clean HSIs. However, current deep learning-based approaches ignore this fact and restore the clean image with a deterministic mapping (i.e., the network receives a noisy HSI and outputs a clean HSI). To alleviate this issue, this paper proposes a flow-based HSI denoising network (HIDFlowNet) to directly learn the conditional distribution of the clean HSI given the noisy HSI, and thus diverse clean HSIs can be sampled from the conditional distribution. Overall, our HIDFlowNet is induced from the flow methodology and contains an invertible decoder and a conditional encoder, which can fully decouple the learning of low-frequency and high-frequency information of HSI. Specifically, the invertible decoder is built by stacking a succession of invertible conditional blocks (ICBs) to capture the local high-frequency details, since the invertible network is information-lossless. The conditional encoder utilizes down-sampling operations to obtain low-resolution images and uses transformers to capture correlations over a long distance so that global low-frequency information can be effectively extracted. Extensive experimental results on simulated and real HSI datasets verify the superiority of our proposed HIDFlowNet compared with other state-of-the-art methods both quantitatively and visually. CCS concepts: Computing methodologies; Computing methodologies – Reconstruction. [Teaser figure] Instead of performing HSI denoising with a deterministic mapping, our HIDFlowNet learns the conditional distribution of clean HSIs given the corresponding noisy counterpart, which explicitly alleviates the ill-posed nature of HSI denoising and enables us to sample diverse clean HSIs. The charts on the right demonstrate that the reconstructed spectral reflectance of our HIDFlowNet is more consistent with the ground truth than that of other approaches, verifying the superiority of our proposed method. HIDFlowNet: A Flow-Based Deep Network for Hyperspectral Image Denoising § INTRODUCTION Hyperspectral image (HSI) depicts an object in numerous narrow and contiguous spectral bands across the electromagnetic spectrum.
Compared with RGB images, HSIs enable a more comprehensive depiction of captured scenes due to more spectral bands and have been widely applied in various fields including remote sensing <cit.>, medical diagnosis <cit.>, agriculture <cit.> and so on. However, owing to multiple factors such as instrument instability, circuit malfunction and light disturbance, HSIs are often subjected to various noises during the data acquisition stage, which can negatively impact the performance of the downstream applications aforementioned. Therefore, noise reduction is an essential step in HSI analysis and processing. However, HSI denoising is an ill-posed problem since a given noisy HSI can be degraded from multiple clean HSIs, which presents significant challenges when designing HSI denoising approaches. In the last decade, numerous HSI denoising techniques have been proposed and these methods can be categorized into two classes, i.e., model-based approaches and deep learning-based methods. Model-based approaches rely on human handcrafted prior and conduct HSI denoising in an iterative optimization manner. However, since the characteristics of HSIs are complex, the hand-crafted priors only partially reflect the features of HSIs, making these approaches incapable of handling unknown real-world noise. Moreover, the iterative optimization process consumes a substantial amount of time to denoise a single image. In contrast, by utilizing the impressive nonlinearity capability of neural networks, deep learning-based approaches model the intrinsic characteristics of HSIs in a data-driven manner. These methods learn the underlying image features statistically with abundant clean and noisy image pairs. Although these approaches can achieve desirable denoising performance, they can only predict a single clean HSI with a deterministic mapping (see Figure <ref>) and ignore the ill-posed nature of HSI denoising. Compared with distribution learning-based denoising approaches, these deterministic methods overemphasize pixel similarity and tend to predict the average of all possible clean images, resulting in over-smoothed areas and loss of image details. Additionally, most of the existing deep learning-based methods focus on directly learning the network mapping from numerous training pairs and always neglect the fact that noise is part of the high-frequency component. Thus the existing network architectures often fail to decouple the learning of low-frequency and high-frequency and thus lack specific physical meaning. To alleviate these issues, this paper proposes a flow-based hyperspectral image denoising network (i.e., HIDFlowNet). HIDFlowNet aims to directly learn the conditional distribution of the clean HSIs by transforming the unknown conditional distribution of clean HSIs into a known Gaussian distribution (see Figure <ref>). Concretely, the HIDFlowNet decouples the learning of low-frequency and high-frequency information of HSI and contains two main components: a conditional encoder network and an invertible decoder network. The encoder network composed of a series of transformer blocks and down-sampling operations, is utilized to extract global low-frequency information in an unsupervised manner. To be specific, the down-sampling operations employed in the encoder enable the network to obtain low-resolution images so that low-frequency information is extracted efficiently. Transformers which is able to capture long-distance correlations are also adopted to extract global information effectively. 
Additionally, the invertible decoder is built by staking a successive of invertible conditional blocks (ICBs) to preserve local high-frequency details since invertible networks are information-lossless <cit.>. Finally, HIDFlowNet is trained by minimizing the negative log-likelihood of the conditional distribution given the training data and a reconstruction loss to obtain high-quality HSIs. Once the training is finished, diverse clean HSIs corresponding to one noisy HSI can be generated by first sampling in the latent space and then performing inverse transforms. In summary, our contributions are shown as follows: * A flow-based network namely HIDFlowNet is proposed to learn the conditional distribution of a clean HSI given its corresponding noisy counterpart. The model is able to generate diverse restored images by sampling random Gaussian noise and performing inverse transforms. To our knowledge, this is the first attempt to employ a flow-based model for HSI denoising. * The architecture of HIDFlowNet induced from the flow methodology contains two main components and has an explicit physical interpretation since it decouples the learning of low-frequency and high-frequency information of HSI. The invertible decoder preserves the local high-frequency details and the conditional encoder network extracts global low-frequency representation. * Extensive experiments on the simulated and real HSI datasets verify the superiority of our proposed method compared with other state-of-the-art methods. § RELATED WORK In this section, we give a brief review of several research fields related to our work, including two major HSI denoising directions and flow-based generative models. Model-based methods utilize priori information about the underlying statistical properties of the hyperspectral data to perform denoising. Handcrafted priors such as low-rank <cit.>, sparse representation <cit.>, total variation <cit.> and nonlocal similarity <cit.> are proposed and corresponding model regularization terms are designed to obtain promising denoising results. For example, in <cit.>, low-rank matrix recovery (LRMR) is proposed to simultaneously remove various noises by utilizing the low-rank property of HSIs and the sparsity nature of non-Gaussian noise. Cao et al. <cit.> proposed a mixture of exponential power distribution in the low-rank matrix factorization framework to capture the complex noise of HSIs. Xue et al. <cit.> proposed a structured sparse low-rank representation (SSLRR) model to induce sparse property. Spatial-spectral total variation regularized local low-rank matrix recovery (LLRSSTV) <cit.> employed a global reconstruction strategy to fully utilize both low-rank property and smoothness properties of HSIs. He et al. <cit.> proposed NG-Meet which unified spatial and spectral low-rank properties. While these methods effectively preserve the spectral and spatial characteristics of HSIs, the optimization of the model is typically complex and thus these methods can be considerably time-consuming. In addition, the denoising performance is highly dependent on the consistency between the priors and HSIs. However, manually designed priors only reflect the intrinsic characteristics of HSIs partially, limiting their ability for HSI denoising. Recently, deep learning-based methods for HSI denoising gain increasing attention and popularity owing to the powerful nonlinear fitting ability of neural networks. 
These methods capture the statistical characteristics of HSIs in a data-driven manner with a large number of training pairs. For instance, HSI-DeNet <cit.> employs a 2-D convolutional neural network to learn multiple image filters for HSI denoising. HSID-CNN <cit.> employs convolution kernels of multiple sizes to extract multilevel features, which are then fused to restore the HSIs. QRNN3D <cit.> introduces 3-D convolution blocks and quasi-recurrent mechanisms to extract spatial and spectral simultaneously without damaging the image structure. GRN <cit.> used two reasoning modules based on the graph neural network (GNN) to carefully extract both global and local spatial-spectral features. TRQ3DNet <cit.> first introduces a vision Transformer in HSI denoising, modelling the spatial long-range dependencies of HSIs and achieving desirable denoising performance. SST <cit.> conducts attention mechanisms in both spatial and spectral dimensions to fully explore the similarity characteristics of HSIs. HWnet <cit.> is proposed to improve the generalization ability of model-based methods in a data-driven manner. While demonstrating promising denoising performance, these approaches learn a deterministic mapping and neglect the fundamental ill-posed nature of HSI denoising. Flow-based generative models have shown promising results in a variety of applications, including image generation <cit.>, speech synthesis <cit.>, and physics simulations <cit.>. These models transform a complex distribution into a known simple distribution (e.g., Gaussian Distribution) with an invertible network so that diverse samples can be obtained by sampling in the known latent space and performing inverse transforms. For example, NICE <cit.> stacks several additive coupling layers and a rescaling layer to learn manifolds. Based on NICE, RealNVP <cit.> further proposes affine coupling layers with masked convolution to improve fitting ability. Glow <cit.> employs invertible 1 × 1 convolutions to perform channel permutations and actnorm layers to accelerate training. Recently, flow-based models which model complex conditional distribution have been increasingly proposed to tackle various tasks <cit.>. SRFlow <cit.> models the conditional distribution of high-resolution images given corresponding low-resolution images, enabling the trained model to predict diverse high-resolution images. VideoFlow <cit.> predicts high-quality stochastic multi-frame videos based on past observations using a normalizing flow. In this paper, we follow this research line and further exploit the application of flow-based methods in HSI denoising task. § THE PROPOSED METHOD In this section, we provide a detailed description of our proposed HIDFlowNet. Firstly, we present the problem of the ill-posed nature of HSI denoising and then introduce conditional flow models. Next, we illustrate the network structure of HIDFlowNet in detail. §.§ Conditional Generative Flows The task of HSI denoising is to restore clean HSIs from given noisy HSIs. Generally, a degraded HSI can be mathematically modeled as 𝐘 = 𝐗 + ϵ. where 𝐘∈ℝ^H× W× B denotes the degraded HSI, 𝐗∈ℝ^H× W× B is the corresponding clean HSI and ϵ∈ℝ^H× W× B stands for the additive noise. H, W, B denote the height, width and spectral band number of the HSI, respectively. As previously mentioned, HSI denoising is an ill-posed problem since a noisy HSI can be degraded from multiple clean HSIs that are equally reasonable. 
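As a small illustration of the degradation model above, the following sketch synthesizes a noisy HSI from a clean one with additive Gaussian noise; the shapes and the noise level are illustrative values only, not settings prescribed by the paper.

import torch

H, W, B = 64, 64, 31                    # spatial size and number of spectral bands (example values)
X = torch.rand(B, H, W)                 # stand-in clean HSI
sigma = 50.0 / 255.0                    # e.g. a noise level of 50 on the 0-255 intensity scale
Y = X + sigma * torch.randn(B, H, W)    # noisy observation Y = X + eps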
Therefore, instead of learning a deterministic mapping 𝐘→𝐗 as existing deep learning-based methods do, we propose to employ a flow-based network f_θ to learn the conditional distribution p_𝐗|𝐘(𝐗|𝐘, θ) of the clean HSI 𝐗 given its noisy counterpart 𝐘. Specifically, the network is designed to be invertible to guarantee a one-to-one mapping. To put it another way, the invertible network transforms a clean and noisy HSI pair (𝐗, 𝐘) into a latent variable 𝐳 = f_θ(𝐗;𝐘), and the clean HSI 𝐗 can be reconstructed exactly by performing the inverse transform 𝐗 = f^-1_θ(𝐳;𝐘). In this context, by applying the change-of-variables formula, the probability density p_𝐗|𝐘 can be explicitly defined as p_𝐗|𝐘(𝐗|𝐘,θ) = p_𝐳(f_θ(𝐗;𝐘)) |det(∂ f_θ/∂𝐗(𝐗;𝐘))|, where the det(·) term is the determinant of the Jacobian matrix ∂ f_θ/∂𝐗(𝐗;𝐘). Therefore, the conditional distribution of the clean HSI can be directly learned by minimizing the negative log-likelihood (NLL) ℒ_nll(θ;𝐗,𝐘) = -log p_𝐗|𝐘(𝐗|𝐘,θ) = -log p_𝐳(f_θ(𝐗;𝐘)) - log|det(∂ f_θ/∂𝐗(𝐗;𝐘))|. In addition, the flow-based network is decomposed into a succession of invertible layers so that the determinant term in Eq.(<ref>) can be readily calculated. Specifically, the flow-based network consists of N invertible layers, i.e., f_θ = f_θ^N ∘ f_θ^N-1 ∘ ⋯ ∘ f_θ^1, where f_θ^n denotes the n-th layer. The n-th layer takes the output of the previous layer as input, i.e., 𝐡^n+1 = f^n_θ(𝐡^n;𝐘), where 𝐡^1 = 𝐗 and 𝐡^N+1 = 𝐳. Then, by employing the chain rule and the multiplicative property of the determinant, the NLL objective in Eq.(<ref>) can be written as ℒ_nll(θ;𝐗,𝐘) = -log p_𝐳(𝐳) - ∑_n=1^N log|det(∂ f_θ^n/∂𝐡^n(𝐡^n;𝐘))|. As a consequence, we only need to ensure that each layer is invertible and that the corresponding log-determinant of the Jacobian matrix can be efficiently computed, which will be detailed in the following section. Then clean HSIs can be sampled from p_𝐗|𝐘(𝐗|𝐘,θ_*) by drawing samples from a simple distribution (e.g., Gaussian) p_𝐳 and performing the inverse transform, i.e., 𝐗 = f^-1_θ_*(ẑ;𝐘), ẑ ∼ p_𝐳, where θ_* denotes the learnt parameters of the proposed network. §.§ Network Architecture In this section, we illustrate the network architecture and implementation details of our proposed method. §.§.§ Overall Network Architecture. While the invertibility of flow-based networks ensures a one-to-one mapping, this constraint also imposes limitations on the network design and decreases the fitting ability. Furthermore, the dimensionality of HSIs is significantly larger than that of RGB images, making the learning of the HSI distribution more challenging. Therefore, we propose to decouple the learning of the global low-frequency representation and the local high-frequency details. Specifically, we propose a flow-based framework named HIDFlowNet, which is composed of a transformer-based encoder and an invertible decoder as shown in Figure <ref>. The framework employs a conditional encoder without the constraint of invertibility to learn global low-frequency information. Then the flow-based decoder, consisting of invertible conditional blocks (ICBs), takes the feature maps of the conditional encoder's hidden layers as conditional inputs and transforms samples drawn from a Gaussian distribution into local high-frequency information. Since invertible networks are information-lossless and can preserve details <cit.>, the flow-based decoder is ideal for learning the distribution of the high-frequency part of HSIs. Finally, we apply a bilinear upsampling operation to the outputs of the encoder to expand the spatial size.
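As an aside, here is a minimal sketch of how the conditional NLL objective and the sampling procedure above could be computed; it assumes each invertible layer returns its output together with its log-determinant contribution, and the names (flow_layers, cond_feats) are illustrative rather than taken from any released implementation.

import math
import torch

def nll_loss(x, cond_feats, flow_layers):
    # x: clean HSI tensor of shape (batch, bands, height, width)
    # cond_feats: per-layer conditional features derived from the noisy HSI by the encoder
    h = x
    logdet = torch.zeros(x.size(0), device=x.device)
    for layer, t in zip(flow_layers, cond_feats):
        h, ld = layer(h, t)            # each invertible layer also returns log|det Jacobian|
        logdet = logdet + ld
    # standard Gaussian prior on the latent z = h
    log_pz = (-0.5 * (h ** 2 + math.log(2 * math.pi))).flatten(1).sum(dim=1)
    return -(log_pz + logdet).mean()

def sample_clean_hsi(z_shape, cond_feats, flow_layers, device="cpu"):
    # draw z ~ N(0, I) and run the invertible layers in reverse to obtain one clean-HSI sample
    h = torch.randn(z_shape, device=device)
    for layer, t in zip(reversed(flow_layers), reversed(cond_feats)):
        h = layer.inverse(h, t)
    return h

In the full model, the decoder output produced this way is still combined with the upsampled encoder output, as described next.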
The restored HSI is then obtained by adding up the outputs of the encoding network and the flow-based decoder so that the global low-frequency information and the local high-frequency details are restored simultaneously. Next, we introduce the conditional encoder network and the invertible decoder network in detail. §.§.§ Conditional Encoder. Previous works <cit.> perform either a checkerboard-pattern squeeze operation or Haar wavelets to reshape images to lower resolutions and capture information over larger distances when designing invertible networks. However, each time the squeeze operation is performed, the number of channels becomes four times the original number, as the total size of the data needs to remain unchanged to ensure reversibility. Such operations are not suitable for HSIs, which contain tens or even hundreds of spectral bands, as the exponential growth of the number of channels could lead to intolerable computational cost and model complexity. Therefore, inspired by previous work <cit.>, we compress the high-dimensional image data by applying down-sampling operations in the encoder, which is not required to be invertible, to capture low-frequency information while reducing model complexity in an unsupervised manner. Recently, vision transformers have gained great popularity in various tasks such as classification <cit.>, segmentation <cit.> and image restoration <cit.>. The self-attention mechanism in transformers enables networks to capture global dependencies and has demonstrated powerful representation capabilities. Therefore, in this work, the encoding network is built by stacking a succession of transformers with down-sampling operations to obtain global low-resolution representations as shown in Figure <ref>. Specifically, the locally-enhanced window (LeWin) transformer block proposed in <cit.> is employed in HIDFlowNet, as the block is considerably efficient and captures both local and global features. Since the LeWin transformer is not the main point of our proposed method, readers may refer to <cit.> for further details. The downsampling is implemented by a 2-D convolution block with a stride of 2. §.§.§ Invertible Decoder. The architecture of the invertible decoder, which learns the distribution of high-frequency information, requires careful design to ensure that the network is invertible and the Jacobian determinant term in Eq.(<ref>) is tractable. Based on previous works <cit.>, a novel invertible conditional block (ICB) is proposed in this work. As shown in Figure <ref>, each ICB consists of a conditional affine layer and a residual invertible 1 × 1 convolution. The conditional affine layer utilizes an information transfer layer to perform element-wise scaling and addition. Concretely, the conditional affine layer takes the low-resolution feature map t^n of the encoder layer as conditional input and generates a scale and a bias, which can be written as s, b = split(g_θ(up(t^n))), h^n+1 = exp(s) ⊙ h^n + b, where g_θ denotes the information transfer layer, up(·) denotes bilinear upsampling and ⊙ is the Hadamard product. A half instance normalization block <cit.> with channel attention <cit.> (HinCaBlock) is employed as the information transfer layer in our work, which is shown in Figure <ref>. The Jacobian matrix of this affine transformation is diagonal and the log-determinant can be efficiently computed by adding up the elements of the scale s. The inverse of this transformation is given by h^n = (h^n+1 - b) ⊘ exp(s), where ⊘ is element-wise division.
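To make the conditional affine layer concrete, below is a minimal sketch assuming the information transfer layer g (a stand-in nn.Module) maps the upsampled conditional feature map to 2C channels that are split into the scale s and bias b, with the scale applied through exp(s); these parameterization details are assumptions for illustration, not claims about the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalAffine(nn.Module):
    def __init__(self, g):
        super().__init__()
        self.g = g  # information transfer layer, e.g. a HinCaBlock-style module

    def _scale_bias(self, t, spatial_size):
        # upsample the low-resolution conditional feature map to the current spatial size
        t_up = F.interpolate(t, size=spatial_size, mode="bilinear", align_corners=False)
        s, b = self.g(t_up).chunk(2, dim=1)   # scale and bias, each with C channels
        return s, b

    def forward(self, h, t):
        s, b = self._scale_bias(t, h.shape[-2:])
        out = torch.exp(s) * h + b
        logdet = s.flatten(1).sum(dim=1)      # diagonal Jacobian: sum over the scale entries
        return out, logdet

    def inverse(self, h_next, t):
        s, b = self._scale_bias(t, h_next.shape[-2:])
        return (h_next - b) / torch.exp(s)

The residual invertible 1 × 1 convolution that completes the block is discussed next.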
<cit.> proposed an invertible 1 × 1 convolution as a permutation operation. However, the determinant of the convolution weight matrix is likely to be a large value and to change drastically during the training process, as the magnitudes of the matrix elements are comparable. In our work, we further propose a residual invertible 1 × 1 convolution to improve the stability of the training process. Specifically, the residual convolution can be defined as h_ij^n+1 = 𝐖h_ij^n + h_ij^n = (𝐖+𝐈)h_ij^n, where h_ij^n is the feature vector at spatial coordinate (i, j). The log-determinant is computed in a straightforward way as log|det(d ResidualConv(𝐡;𝐖)/d𝐡)| = h · w · log|det(𝐖+𝐈)|, where h and w are the height and width of the feature map 𝐡, and ResidualConv is the residual invertible convolution. Since the channel number remains unchanged in the invertible decoder, the log-determinant can be trivially calculated. In addition, the Jacobian determinant term in Eq.(<ref>) prevents the coefficient matrix 𝐖+𝐈 from being singular. We initialize the parameters 𝐖 with small values, such that the residual convolution initially behaves approximately as an identity function, which is helpful for training deep networks <cit.>. §.§.§ Objective Function. As mentioned earlier, we propose a negative log-likelihood loss ℒ_nll(θ;𝐗,𝐘) to learn the distribution of HSIs. To restore high-quality HSIs and accelerate training, we further define a reconstruction loss as ℒ_rec(θ;𝐗,𝐘,ẑ) = ||f^-1_θ(ẑ;𝐘) - 𝐗||_1. Finally, the total objective function is defined as ℒ_total(θ;𝐗,𝐘,ẑ) = λ_1 ℒ_nll(θ;𝐗,𝐘) + λ_2 ℒ_rec(θ;𝐗,𝐘,ẑ), where λ_1 and λ_2 are hyperparameters. In our experiments, λ_1 and λ_2 are set to 0.001 and 1, respectively. § RESULTS §.§ Experimental Settings In this section, we provide a detailed description of the datasets and training settings used in our experiments. §.§.§ Synthetic Datasets. Two datasets, i.e., CAVE <cit.> and KAIST <cit.>, are used in our experiments. The CAVE dataset consists of 32 HSIs with a spatial resolution of 512 × 512 over 31 spectral bands. The KAIST dataset contains 30 HSIs with a spatial resolution of 2704 × 3376 over 31 spectral bands. For the CAVE dataset, we use 20 images for training, 2 images for validation and 10 images for testing. For the KAIST dataset, 20 images are used for training and the rest are used for testing, while 2 images selected from the CAVE dataset are used for validation. We crop the training images into patches with a spatial size of 64 × 64 and a stride of 16 to enlarge the training set, resulting in 16824 training patches in total. Various transformations, i.e., random flipping and multi-angle image rotation (angles of 0^∘, 90^∘, 180^∘, 270^∘), are employed for data augmentation. §.§.§ Real HSI Data. We evaluate all competing approaches on one real-world noisy HSI, i.e., the Indian Pines dataset, which consists of 145 × 145 pixels with 220 bands. For computational convenience, we crop the centre area with a spatial size of 128 × 128 for comparison. §.§.§ Noise Setting. We consider two types of noise (i.e., Gaussian noise and mixture noise) which are consistent with real-world situations <cit.>. In the Gaussian noise case, HSIs are contaminated by noise with variance set as {50, 70, 90}. In the mixture noise case, HSIs are contaminated by non-i.i.d. Gaussian noise, impulse noise, deadlines and stripes. Specifically, each band of the clean HSIs is first corrupted by Gaussian noise with a random intensity ranging from 10 to 70.
Next, the spectral bands are randomly divided into three parts, and each part is corrupted with impulse noise, stripe noise and deadline noise, respectively. §.§.§ Competing Methods and Evaluation Metrics. Eight HSI reconstruction methods are adopted for comparison, including five model-based methods, i.e., BM4D <cit.>, LRTDTV <cit.>, NMoG <cit.>, FastHyDe <cit.>, LLRGTV <cit.>, and three learning-based methods, i.e., HSIDCNN <cit.>, QRNN3D <cit.>, SST <cit.>. Three commonly used image quality evaluation metrics, including peak signal-to-noise ratio (PSNR), structural similarity (SSIM) <cit.> and spectral angle mapper (SAM) <cit.>, are employed to evaluate the denoising performance of different approaches. Larger values of PSNR and SSIM and smaller values of SAM indicate better image quality. §.§.§ Implementation Details. We implement the proposed HIDFlowNet framework in PyTorch. The Adam optimizer <cit.> with β_1 = 0.9, β_2 = 0.999 is employed to update model parameters and the learning rate is set to 2 × 10^-4. All models are trained in an easy-to-difficult way, which has been proven helpful for network training <cit.>. Concretely, the networks are trained with Gaussian noise for 50 epochs and then trained with mixture noise for another 50 epochs. The training batch size is set to 8. For fair comparisons, all deep learning-based methods are trained and tested in the same way. The models trained for 50 and 100 epochs are employed to remove Gaussian noise and mixture noise, respectively. All deep learning-based models are trained on an NVIDIA GeForce RTX 3090 GPU. §.§ Experimental Results §.§.§ Experiment on Synthetic Data. The denoising results on the CAVE dataset are shown in Table <ref> and Figure <ref>. It can be seen that our proposed HIDFlowNet demonstrates better performance in most cases. While achieving desirable results in the Gaussian noise cases, most model-based methods fail to tackle complex noise, as manually designed priors cannot fully describe complex situations. In addition, although HSIDCNN achieves the best PSNR in several cases by performing multiscale feature extraction, HIDFlowNet also achieves promising PSNR and performs significantly better on the other evaluation metrics. The visualization results of reconstructed HSIs are provided in Figure <ref>. As shown in the figure, model-based approaches yield either still-noisy images or over-smoothed results. Deep learning-based methods obtain promising denoising results but are also prone to produce over-smoothed predictions, since these methods overemphasize pixel similarity and ignore the underlying distribution of clean HSIs. In contrast, HIDFlowNet is more capable of preserving fine-grained details while restoring spatial smoothness without introducing undesirable artefacts. The excellent performance of HIDFlowNet is primarily owing to the fact that the compressive encoding component suppresses noise and enhances the low-frequency part of HSIs, while the flow-based decoder enjoys the information-lossless property and preserves textural details. Moreover, HIDFlowNet also exhibits desirable denoising performance on the KAIST dataset as shown in Table <ref>, which further verifies the superiority of our proposed method. §.§.§ Experiment on Real-World Data. We further apply all trained models to the Indian Pines dataset for real-world HSI denoising to verify the effectiveness of our proposed approach. Since there is no ground truth for real-world data, we provide visualization results, shown in Figure <ref>, for comparison.
It can be observed that the original image is seriously degraded owing to environmental factors such as terrible atmosphere or sensor failure. Compared with other approaches, our HIDFlowNet effectively handles the unknown noise and outputs sharper and more realistic results, convincing the robustness and superiority of HIDFlowNet. §.§.§ Effectiveness of Flow Model. We present visualization results of the generated HSIs derived from different Gaussian noises in Figure <ref> to verify the effectiveness of our proposed flow-based model. It can be observed that while generated HSIs are highly similar which verifies the stability of the trained model, there still exist differences in local details owing to different noises, confirming the effectiveness of our proposed flow-based model. §.§ Ablation Study In this section, we provide an ablation study on the components of HIDFlowNet and model complexity. §.§.§ Feature Decoupling Analysis. In addition to quantitative results, we provide visual analysis to further prove the effectiveness of the proposed encoding network and the flow-based decoder. Specifically, the inputs and the feature maps of the 3th, 6th and 9th layers of the encoder and decoder are depicted in Figure <ref>. It can be seen that with the increase of layers, the outputs of the encoder tend to ignore local details (e.g., the joint of the blocks) and gradually capture global low-frequency information. Since attention is calculated in local windows as elaborated in <cit.>, the feature map of the last layer exhibits a relatively obvious reticular structure. The outputs of the decoder demonstrate that with the guidance of the encoder, random Gaussian noise is transformed into local high-frequency information progressively, convincing the feasibility of the invertible network. §.§.§ Component Analysis. There are two components in an invertible conditional block, including an affine conditional layer and a residual invertible convolution. In this section, to verify the effectiveness and rationality of the two components adopted in our work, we conduct denoising on the KAIST dataset in Gaussian noise case with σ=50 for comparison and the effectiveness of the two components is explored as illustrated in Table <ref>. As can be seen, the model without affine conditional layers demonstrates the worst performance since the decoder is a pure generative network without conditional information in this case, and the quality of the denoising result is highly reliant on the performance of the encoder. HIDFlowNet adopted in our work outperforms other configurations, verifying the rationality of the proposed approach. §.§.§ Model Complexity. We further investigate the influence of the depth of HIDFlowNet by testing models on the KAIST test set in Gaussian noise case with σ=50. As shown in Table <ref>, the denoising performance improves with the increasing number of ICBs. HIDFlowNet with 9 ICBs is adopted in our work for a tradeoff between complexity and performance. § LIMITATIONS AND FUTURE WORK While our proposed HIDFlowNet exhibits plausible denoising performance, there are still several limitations. Specifically, the invertible requirement of flow-based models puts limitations on the use of various operations such as convolution with larger kernels, attention mechanisms and dimension reduction, reducing the fitting ability of the network. 
Moreover, the proposed method lacks control over the generative process and is unable to explicitly generate HSIs with specific expected properties such as higher SSIM. In the future, novel invertible frameworks and controllable generative models are worth further exploration to alleviate these problems. § CONCLUSION To alleviate the ill-posed nature of HSI denoising (i.e., multiple predictions are reasonable for a given noisy HSI), which is ignored by most existing deep learning-based approaches, this paper proposes a novel flow-based network named HIDFlowNet. The network directly learns the distribution of clean HSIs conditioned on their noisy counterparts and is capable of generating diverse clean HSIs. Specifically, the proposed HIDFlowNet is composed of a conditional encoder and an invertible decoder to decouple the learning of low-frequency and high-frequency information. The encoder utilizes transformers and down-sampling operations to obtain low-resolution images so that the global representation is effectively extracted, while the decoder employs a series of invertible conditional blocks to preserve local details. Extensive experiments on two synthetic datasets and one real-world dataset demonstrate the superiority of our proposed model both quantitatively and qualitatively.
http://arxiv.org/abs/2306.05772v2
20230609092548
A Boosted Model Ensembling Approach to Ball Action Spotting in Videos: The Runner-Up Solution to CVPR'23 SoccerNet Challenge
[ "Luping Wang", "Hao Guo", "Bin Liu" ]
cs.CV
[ "cs.CV" ]
A Boosted Model Ensembling Approach to Ball Action Spotting in Videos: The Runner-Up Solution to CVPR'23 SoccerNet Challenge. Luping Wang (equal contribution), Hao Guo (equal contribution), Bin Liu (corresponding author). Research Center for Applied Mathematics and Machine Intelligence, Zhejiang Lab, Hangzhou 311121, China. {wangluping, guoh, liubin}@zhejianglab.com. July 31, 2023. This technical report presents our solution to Ball Action Spotting in videos. Our method reached second place in the CVPR'23 SoccerNet Challenge. Details of this challenge can be found at <https://www.soccer-net.org/tasks/ball-action-spotting>. Our approach is developed based on a baseline model termed E2E-Spot <cit.>, which was provided by the organizer of this competition. We first generated several variants of the E2E-Spot model, resulting in a candidate model set. We then proposed a strategy for selecting appropriate model members from this set and assigning an appropriate weight to each model. The aim of this strategy is to boost the performance of the resulting model ensemble. Therefore, we call our approach Boosted Model Ensembling (BME). Our code is available at <https://github.com/ZJLAB-AMMI/E2E-Spot-MBS>. § INTRODUCTION To better understand the salient actions of a broadcast soccer game, SoccerNet has introduced the task of action spotting, which involves finding all the actions occurring in the videos. This task addresses the more general problem of retrieving moments with specific semantic meaning in long untrimmed videos, extending beyond just soccer understanding. Details of the SoccerNet Ball Action Spotting challenge can be found at <https://www.soccer-net.org/tasks/ball-action-spotting>. In this technical report, we introduce our submitted solution, termed Boosted Model Ensembling (BME), which reached second place in this challenge. Our proposed solution is built on a baseline model termed E2E-Spot <cit.>, which was provided by the organizers of this challenge. We analyzed E2E-Spot and identified three opportunities to improve it for addressing the SoccerNet Ball Action Spotting challenge: * In the data set, only one frame associated with a representative event is labeled, whereas in reality, each considered event should be associated with multiple consecutive frames. * Higher-quality event feature extraction may help, as indicated by <cit.>. * The loss function used in training E2E-Spot is not fully consistent with the evaluation metric adopted by this challenge. Taking all the above issues into consideration, we developed our solution BME, described in detail in Section <ref>. The experimental setting is presented in Section <ref>, and some key results are shown in Section <ref>. Finally, we conclude our work in Section <ref>. § OUR METHOD In this section, we describe our proposed method BME in detail. §.§ The Model Ensembling Operation The key operation of BME is model ensembling, which is illustrated in Figure <ref>. As is shown, the final model ensemble F_T is obtained after T iterations. At each iteration t, an objective function obj_t associated with the target performance metric is defined (namely, Equation (2) in Section 2.2), which is used to select the best model f_i_t and its weight value w_t.
Then, a new model ensemble F_t is got by combining F_t-1 and f_i_t as follows F_t(x) = (1 - w_t)F_t-1(x) + w_tf_i_t(x) §.§ Objective function The objective function used to select the best model f_i_t and its weight value w_t, which appear in Equation (1), is defined as follows obj_t = e(F_t, 𝒟_valid) - e(F_t-1, 𝒟_valid) where the function e(·) denotes the target performance metric ( here), 𝒟_valid the validation data set. At each iteration, we search for an appropriate member model and its corresponding weight that maximizes Equation (2), and then update the model ensemble according to Equation (1). §.§ Generating Candidate Models All candidate models are built on E2E-Spot <cit.>. Their differences lie in: (1) the training samples being used; (2) network architectures for feature extraction; and (3) the optimizer being used. Training samples We generate training samples as shown in Figure <ref>. Firstly, the video is decomposed into a fixed number of frames per second (FPS=25 in our case). Then, all the achieved frames are labeled based on the time of the given events together with the label sharing scope controlled by a hyper-parameter Δ. Finally, a training sample-set 𝒟_s, Δ={(x_s, i, y_Δ, i)}_i=1^N can be constructed by randomly picking N video clips with a fixed clip length L and a fixed frame stride size s. Different settings of s and Δ lead to training sample sets with different properties. If Δ is set to a large value, the ratio of event frames increases while the ratio of error-labeled frames also increases and vice versa. The larger the stride size s, the longer time the clip covers, and the poorer continuity between the frames. Therefore, models trained with such different sample-sets will have different properties. Network architectures for feature extraction The RegNet <cit.> is used as the baseline of the feature architecture in E2E-Spot <cit.>. However, according to the experimental results reported in <cit.>, RegNet performs worse than EfficientNet <cit.> on the problems addressed in that paper. Therefore, RegNet and EfficientNet are the two candidates for the feature architecture considered in our solution. In addition, we incorporated the Gate-Shift Module (GSM) <cit.> into the 2D convolutional operator included in both RegNet and EfficientNet. The two versions of the feature architecture are denoted as rny008_gsm and enetb2_gsm, respectively. The optimizer We use the same baseline optimizer as in  <cit.>, which is AdamW. In addition, we incorporate stochastic weight averaging (SWA)<cit.> into the training process to improve the generalization of the trained model, denoted as AdamW^†. Both AdamW and AdamW^† are considered as candidates for the optimizer used to train a candidate model. Each candidate model is trained with a specific combination of the training sample set, network architecture for feature extraction, and optimizer. Therefore, the number of candidate models is N_1× N_2× N_3, where N_1, N_2, and N_3 represent the number of training sample sets, network architectures, and optimizers, respectively. § EXPERIMENTAL SETTING Datasets We solely used the dataset provided by the challenge organizers in our experiments. We employed five settings for constructing training samples, namely 𝒟_1, 5, 𝒟_1, 4, 𝒟_2, 5, 𝒟_2, 4, and 𝒟_2, 2. The length of the clip is set to L=100, and the dimension of cropping for the frame is 224. During the test phase, we used 𝒟_valid as the validation dataset, while during the challenge phase, it was used as the test dataset. 
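Before turning to how the candidate models are trained, here is a minimal sketch of the BME selection loop defined by Equations (1) and (2); it assumes each candidate model is represented by its score array on the validation set and that evaluate() computes the challenge metric from ensembled scores — both are stand-ins, and the zero initialization of the ensemble and the iteration count are likewise assumptions rather than details taken from the report.

import numpy as np

def boosted_model_ensembling(candidate_scores, labels, evaluate, T=10,
                             weight_grid=np.arange(0.1, 1.01, 0.1)):
    # candidate_scores: list of np.ndarray, all with the same shape (num_frames, num_classes)
    ensemble = np.zeros_like(candidate_scores[0], dtype=float)
    best_metric = evaluate(ensemble, labels)
    for _ in range(T):
        best_gain, best_update = 0.0, None
        for scores in candidate_scores:
            for w in weight_grid:
                trial = (1.0 - w) * ensemble + w * scores       # Equation (1)
                gain = evaluate(trial, labels) - best_metric    # Equation (2)
                if gain > best_gain:
                    best_gain, best_update = gain, trial
        if best_update is None:          # no candidate improves the metric any further
            break
        ensemble, best_metric = best_update, best_metric + best_gain
    return ensemble

The weight grid mirrors the values {0.1, 0.2, ..., 1.0} from which the weights are sampled in our experiments.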
Training candidate models All hyperparameters used for training candidate models were kept the same, unless otherwise specified. We selected GRU <cit.> as the temporal architecture of E2E-Spot and employed data augmentation techniques such as random cropping, random flipping, brightness, contrast, hue, saturation, and MixUp <cit.> during training. The initial learning rate was set to 0.001 and was scheduled based on LinearLR and CosineAnnealingLR after warming up for 3 epochs. Each member model was trained for a total of 100 epochs on A100-GPU-80GB, with a batch size of 8. All the related source code was implemented using PyTorch 1.12.1. Other issues During model inference, the length of overlap between adjacent clips is set to L-1, i.e., overlap_len=99. After model inference, we employed non-maximum suppression (NMS) <cit.> as a post-processing step on the predicted results. The window size, frame rate, and threshold of NMS are set to 10, 25, 0.01, respectively. When using BME to ensemble the sub-models, the weights are sampled from {0.1, 0.2, 0.3, ⋯, 1.0}. § RESULTS To provide a clear view of the performance of each sub-model and the overall result of BME, we present the values of and the weights of the selected candidate models in Table <ref>. From the table, we can observe that the performance of the sub-model candidates is similar, but their abilities and/or properties may differ. However, by combining the results of the selected sub-models through BME, we achieved a significant improvement, with an of 86.37%. These findings suggest that the method of generating candidate models is reasonable and the proposed BME approach is effective. § CONCLUSION In this report, we presented our submitted solution, termed Boosted Model Ensembling (BME), for the CVPR'23 SoccerNet Challenge (<https://www.soccer-net.org/tasks/ball-action-spotting>). BME is a model ensembling approach built on the end-to-end baseline model, E2E-Spot, as presented in <cit.>. We generate several variants of the E2E-Spot model to create a candidate model set and propose a strategy for selecting appropriate model members from this set while assigning appropriate weights to each selected model. BME is characterized by operations for generating candidate models and a novel method for selecting and weighting them during the model ensembling process. The resulting ensemble model takes into account uncertainties in event length, optimal network architectures, and optimizers, making it more robust than the baseline model. Our approach can potentially be adapted to handle various video event analysis tasks. ieee_fullname
http://arxiv.org/abs/2306.03516v1
20230606090840
COPR: Consistency-Oriented Pre-Ranking for Online Advertising
[ "Zhishan Zhao", "Jingyue Gao", "Yu Zhang", "Shuguang Han", "Siyuan Lou", "Xiang-Rong Sheng", "Zhe Wang", "Han Zhu", "Yuning Jiang", "Jian Xu", "Bo Zheng" ]
cs.IR
[ "cs.IR", "cs.LG" ]
Zhishan Zhao and Jingyue Gao contribute equally to this work. Han Zhu is the corresponding author. All authors are with Alibaba Group, Beijing, China. Cascading architecture has been widely adopted in large-scale advertising systems to balance efficiency and effectiveness. In this architecture, the pre-ranking model is expected to be a lightweight approximation of the ranking model, which handles more candidates with strict latency requirements. Due to the gap in model capacity, the pre-ranking and ranking models usually generate inconsistent ranked results, thus hurting the overall system effectiveness. The paradigm of score alignment is proposed to regularize their raw scores to be consistent. However, it suffers from inevitable alignment errors and error amplification by bids when applied in online advertising. To this end, we introduce a consistency-oriented pre-ranking framework for online advertising, which employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize the consistency of ECPM-ranked results. A Δ NDCG-based weighting mechanism is adopted to better distinguish the importance of inter-chunk samples in optimization. Both online and offline experiments have validated the superiority of our framework. When deployed in the Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM. COPR: Consistency-Oriented Pre-Ranking for Online Advertising § INTRODUCTION Online advertising has become a major source of revenue for many web platforms <cit.>. Advertisers ensure effective promotion of products by bidding and paying for user actions (e.g., click and purchase)[Without loss of generality, we regard click as the action in this paper] on advertisements (i.e., ads). To maximize platform revenue, the advertising system typically ranks ads based on their Expected Cost Per Mille (ECPM) <cit.> and selects the top ones for impression: ECPM = 1000 × bid × pCTR, where bid is the price that the advertiser is willing to pay and pCTR is the predicted click-through rate (CTR) denoting the probability that the user clicks the ad. Under strict latency requirements in online deployment, it is infeasible for complex CTR models <cit.> with high inference cost to handle millions of candidates in the ad corpus. To balance efficiency and effectiveness, a common practice in industrial systems is to adopt a cascading architecture <cit.>, which filters ads through multiple phases with increasingly complex models, as illustrated in Fig. <ref>. Particularly, the retrieval model first retrieves tens of thousands of relevant ads from the corpus. Afterwards, the pre-ranking model outputs pCTR for the retrieved candidates, and the top hundreds with the highest ECPM are sent to the ranking model for final selection. To handle a larger candidate set, the pre-ranking model is usually designed to be lightweight, which works more efficiently but less accurately compared with the ranking model. Pre-ranking has recently received increasing attention due to its importance in the cascading architecture. Huang et al. <cit.> propose a two-tower model that maps users and candidates into latent vectors and calculates their inner products. To enable high-order feature interactions, Li et al. <cit.> add fine-grained interactions between two towers and Wang et al.
<cit.> propose to use deep neural network with squeeze-and-excitation block. Despite improvement of accuracy, there is still a non-negligible gap between the pre-ranking and ranking models. They may generate significantly different ranked results on the same candidate set. Such inconsistency hinders the overall system effectiveness. For example, top ads selected from the pre-ranking phase could be less competitive in the ranking phase, causing waste of the computational resource. Also, ads which are preferred in the ranking phase could be unfortunately discarded in the pre-ranking phase, leading to sub-optimal results. Some pioneering studies <cit.> propose to align the pre-ranking and ranking models via distillation on pCTR scores. The pre-ranking model is encouraged to generate same scores as the ranking model <cit.> or generate high scores for top candidates selected by the ranking model <cit.>. Although exhibiting encouraging performance, the paradigm of score alignment suffers from the following issues, especially when applied to the advertising system: * Inevitable alignment errors. Due to simpler architecture and fewer parameters for efficiency concerns, the capacity of the pre-ranking model is limited, making it difficult to well approximate original scores of the complex ranking model. Thus even with explicit optimization, there still exist errors in aligning their scores to be exactly the same. * Error amplification in ECPM ranks[We use ECPM rank to denote the order of an ad in the ECPM-ranked list.]. In both pre-ranking and ranking phases, ads are ranked according to their ECPM as Eq. (<ref>), which is jointly determined by the pCTR score and the bid. Thus the influence of alignment errors could be amplified due to existence of bids. As shown in Table <ref>, when multiplied by corresponding bids, even a tiny difference in pCTR scores of the pre-ranking and ranking models leads to completely different ranked results. Above issues call for rethinking the necessity of strictly aligning pCTR scores in the advertising system. Essentially, given a set of candidates, it is not their absolute pCTR scores but their relative ECPM ranks that determine the results of each phase. Therefore, to achieve consistent results, the pre-ranking model is not required to output same pCTR scores as the ranking model. Instead, it only needs to output scores which yield same ECPM ranks when multiplied by bids. In this way, the requirement of score alignment can be relaxed to that of rank alignment, which is more easier to meet. Moreover, when optimizing pCTR scores for consistent ECPM ranks, the influence of bids can be taken into account beforehand, thus alleviating the issue of error amplification. To this end, we introduce a Consistency-Oriented Pre-Ranking (COPR) framework for online advertising, which explicitly optimize the pre-ranking model towards consistency with the ranking model. Particularly, we collect historical logs of the ranking phase, where each log records a ECPM-ranked list of candidates. COPR segments the list into fixed-sized chunks. Each chunk is endowed with certain level of priority from the view of the ranking phase. With pairs of ads sampled from different chunks, COPR learns an plug-and-play rank alignment module which aims to consistently distinguish their priority using scores at the pre-ranking phase. Moreover, we adopts a Δ NDCG-based weighting mechanism to better distinguish the importance of inter-chunk pairs in optimization. 
Our main contributions can be summarized as follows: * To the best of our knowledge, we are the first to explicitly optimize the pre-ranking model towards consistency with the ranking model in the widely-used cascading architecture for online advertising. * We propose a novel consistency-oriented pre-ranking framework named COPR, which employs a chunk-based sampling module and a plug-and-play rank alignment module for effective improvement of consistency. * We conduct extensive experiments on public and industrial datasets. Both offline and online results validate that the proposed COPR framework significantly outperforms state-of-the-art baselines. When deployed in Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM. § RELATED WORK In this section, we briefly review studies about pre-ranking. Located in the middle of the cascading architecture, the pre-ranking system has played an indispensable role for many large-scale industrial systems <cit.>. The development of a pre-ranking model is mainly for balancing the system effectiveness and efficiency, as the downstream ranking model usually cannot deal with tens of thousands of candidates. To this end, techniques such as the dual-tower modeling <cit.> are commonly adopted. However, this paradigm limits feature interactions between users and items to the form of vector product, which often results in extensive performance degradation. Another line of work strives to enhance high-order feature interactions, and explores the ways to reduce the online latency. Li et al. <cit.> add fine-grained and early feature interactions between two towers. Wang et al. <cit.> propose to use fully-connected layers and employ various techniques from the perspectives of both modeling efficiency and engineering optimization. Specifically, a Squeeze-and-Excitation module <cit.> is utilized to choose the most useful feature set, and meanwhile system parallelism and low-precision computation are exploited whenever possible for latency optimization. Ma et al. <cit.> propose a feature selection algorithm based on feature complexity and variational dropout (FSCD) to search a set of effective and efficient features for pre-ranking. A similar study <cit.> uses network architecture searching (NAS) to determine the optimal set of features and corresponding architectures. These studies mainly focus on improving the accuracy of the pre-ranking model but neglects its interaction with the subsequent ranking model, leading to inconsistent ranked results. Several studies propose to align the pre-ranking and ranking models in terms of pCTR scores via knowledge distillation. RD <cit.> encourages the lightweight student model to score higher for candidates selected by the larger teacher model, which is often used in training pre-ranking models. RankFlow <cit.> regularizes the pre-ranking and ranking models to generate same scores for same candidates. Despite encouraging performance, there still exist inevitable errors in score alignment due to discrepancy in model capacity. When applied in online advertising, influence of such errors would be amplified by bids of ads, yielding inconsistent ECPM-ranked results. In this paper, we propose to relax the objective of score alignment to rank alignment, where bids of ads are incorporated and consistency of ranked results between two phases can be explicitly optimized in an effective manner. 
§ METHODOLOGY In this section, we first introduce background knowledge about the pre-ranking model, and then describe our proposed COPR framework as illustrated in Fig. <ref>. §.§ Background Training Data. When the advertising system serves online traffic as Fig. <ref>, hundreds of ads are ranked through the ranking phase and recorded to logs, which we refer to as ranking logs. Each log contains an ranked list of ads with descending ECPM: 𝐑 = [(ad_1, pCTR_1, bid_1),...,(ad_M, pCTR_M, bid_M)], where pCTR_i is the score output by the ranking model for i-th ad and bid_i denotes its bid. M is the number of candidates. Then top N ads are displayed to the user. User feedback y (click/non-click) on each displayed ad is recorded to impression logs: 𝐈 = [(ad_1,y_1),...,(ad_N, y_N)]. Base Model. The base model for pre-ranking is usually a lightweight CTR model. Here we adopt the architecture of COLD <cit.>. The input features consist of three parts: user features 𝐔 such as age and gender, ad features 𝐀 such as brand and category, context features 𝐂 such as time and device. After pre-selecting a concise set of features, COLD feeds them into embedding layers and concatenate their embeddings for a compact representation 𝐱: 𝐱 = E(𝐔) ⊕ E(𝐀) ⊕ E(𝐂). Then it employs a prediction net consists of multiple fully-connected layers to estimate CTR: ŷ = Sigmoid(MLP(𝐱)) ∈ [0,1]. To accurately predict user click y, the model is optimized with cross entropy loss over impression logs I: L_ctr = ∑_𝐈[-ylog(ŷ)-(1-y)log(1-ŷ)]. §.§ Consistency-Oriented Pre-Ranking Though the pre-ranking model is expected to well approximates the ranking model in the cascading system, their gap in model capacity often hinders satisfying approximation. Thus in addition to L_ctr, we aim to explicitly optimize the pre-ranking model towards consistent results with the ranking model over 𝐑. §.§.§ Chunk-Based Sampling Given candidates {Ad_i}_1^M in ranking logs, an ideal pre-ranking model should output scores that yield same ECPM-ranked list as Eq. (<ref>). Considering its limited capacity, it could be hard to rank hundreds of ad all in correct positions. To reduce the learning difficulty, we partition the ranked list into D=M/K fixed-sized chunks, each constituting K adjacent ads, as shown in Fig. <ref>. We regard ads in the same chunk as candidates with same priority in the ranking phase. The pre-ranking model is not required to distinguish ads in the same chunk. Instead, it only needs to consistently rank candidates in the granularity of chunk. For each chunk, we randomly sample a candidate and endow it with the priority related to this chunk. In this way, for each ranked list, we obtain a concise sub-list: 𝐑_chunk = [(ad_s_d, pCTR_s_d, bid_s_d, D-d)]_d=1^D, where s_d is the index of sampled ad in chunk d and D-d denotes its priority which the larger the better. The above chunk-based sampling has two-fold advantages: 1) It provides a flexible way to control the granularity of consistency, which makes the objective reachable for the lightweight pre-ranking model. By increasing the chunk size K, the objective of consistency gradually shifts from fine-grained to coarse-grained. 2) It effectively reduces the size of ranked list in logs by K times and still maintains coverage of original lists, which is critical for efficient training in industrial machine learning systems. In our production implementation, K is set to 10. §.§.§ Rank Alignment In the following, we introduce how to modify the base model with a plug-and-play rank alignment module. 
Instead of regularizing the difference between ŷ_i in Eq. (<ref>) and pCTR_i in Eq. (<ref>) as score alignment methods <cit.>, we propose to relax the objective to rank alignment on a properly-adjusted pCTR score. Particularly, we employ a relaxation net to learn a factor α > 0, with which we adjust the original pCTR score: α = ReLU(MLP(x))+1e^-6∈ℛ^+, ỹ = α * ŷ, where ỹ denote the adjusted pCTR. Thus ECPM at the pre-ranking phase can be accordingly estimated as ỹ * bid, based on which we aim to correctly rank each inter-chunk pair in 𝐑_chunk. Here we adopt the pairwise logistic loss for its relatively good performance and the simplicity for implementation <cit.>: L_rank = ∑_i<jlog[1+e^-(ỹ_s_i * bid_s_i/ỹ_s_j * bid_s_j-1) ]. For each pair of ad_s_i and ad_s_j sampled from different chunks that i<j, we optimize L_rank by encouraging ỹ_s_i * bid_s_i > ỹ_s_j * bid_s_j, which means ad_s_i would be ranked before ad_s_j by ECPM in the pre-ranking phase. If all inter-chunk pairs can be correctly ranked, we achieve consistent ECPM-ranked results between the pre-ranking and ranking phases over R_chunk. Note that by introducing the relaxation factor α, we slightly modify the original pCTR score to achieve consistent ranked results if necessary. To maintain original value as much as possible, α should be around 1. Thus we add a symmetric regularization to penalize the deviation of α from 1: L_reg = α-1 α>1 1/α-1 α<=1 . It is worth mentioning that the proposed rank alignment module does not rely on specific assumption about the architecture of base model. It is an plug-and-play component that can be added to any pre-ranking models for improvement of consistency. §.§.§ Δ NDCG-Based Pair Weighting L_rank in Eq. (<ref>) fails to consider the relative importance of different pairs in consistency optimization. In practice, consistently ranking ads from chunk 1 and chunk 10 is more important than ranking chunk 11 and chunk 20, since only the top ads will be sent to the ranking phase and displayed to users. It calls for a weighting mechanism that considers chunk-related priorities of candidates. Intuitively, if pair (ad_s_i, ad_s_j) in L_rank are mistakenly ranked, the consistency between the pre-ranking and ranking phase will be hurt. Thus its weight in L_rank should be determined by the negative impact. As each sampled ad_s_d in 𝐑_chunk is endowed with priority D-d, we use NDCG <cit.> to measure the utility of any ranked list p of these candidates: DCG = ∑_i=1^D2^p_i-1/log(i+1), IDCG = ∑_i=1^D2^D-i-1/log(i+1), where p_i denote the priority of i-th ad in the permutation and the IDCG is the ideal DCG achieved by 𝐑_chunk. If we swap the position of ad_s_i and ad_s_j in 𝐑_chunk, the utility of the list will experience a drop which can be further normalized as: Δ NDCG(i,j) = 2^D-i-2^D-j/IDCG[1/log(i+1)-1/log(j+1)]. The utility drop is used to re-weight inter-chunk pairs in consistency optimization: L_rank = ∑_i<jΔ NDCG(i,j) log[1+e^-(ỹ_s_i * bid_s_i - ỹ_s_j * bid_s_j)]. Thus the objective function of COPR can be formulated as: L = L_ctr_CTR Loss+ λ_1L_rank + λ_2 L_reg_Consistency Loss, where λ_1>0, λ_2>0 are weights for corresponding loss terms. By minimizing L, we explicitly optimize the pre-ranking model towards consistency with the ranking model via a plug-and-play rank alignment module. §.§ System Deployment We introduce the deployment of COPR in three stages: data generation, model training, and online serving as shown in Fig. <ref>. Data Generation. 
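Before describing the data generation stage, the rank alignment objective above can likewise be condensed into a sketch. This is an illustrative PyTorch version under our own naming: the relaxation-net architecture and batching are assumptions, the CTR loss L_ctr is computed separately on impression logs and simply added, and the log base 2 in the NDCG discount follows the usual convention.

```python
import torch

def delta_ndcg(i, j, D):
    """Normalized utility drop when the chunk-i and chunk-j representatives are swapped."""
    positions = torch.arange(1, D + 1, dtype=torch.float32)
    idcg = ((2.0 ** (D - positions) - 1.0) / torch.log2(positions + 1.0)).sum()
    gain = 2.0 ** (D - i) - 2.0 ** (D - j)
    discount = 1.0 / torch.log2(torch.tensor(i + 1.0)) - 1.0 / torch.log2(torch.tensor(j + 1.0))
    return gain * discount / idcg

def consistency_loss(pctr, bid, alpha, lambda2=0.2):
    """Delta-NDCG-weighted pairwise rank loss plus the symmetric regularizer on alpha.

    pctr, bid, alpha: (D,) tensors for the sampled chunk representatives,
    ordered by chunk index d = 1..D (chunk 1 is the best).
    """
    D = pctr.numel()
    ecpm = alpha * pctr * bid                          # adjusted ECPM estimate
    l_rank = ecpm.new_zeros(())
    for i in range(1, D + 1):
        for j in range(i + 1, D + 1):
            w = delta_ndcg(i, j, D)
            # encourage ecpm_i > ecpm_j for every inter-chunk pair with i < j
            l_rank = l_rank + w * torch.nn.functional.softplus(-(ecpm[i - 1] - ecpm[j - 1]))
    l_reg = torch.where(alpha > 1, alpha - 1.0, 1.0 / alpha - 1.0).mean()  # keep alpha near 1
    return l_rank + lambda2 * l_reg
```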
During online serving, hundreds of ads are ranked through ranking model and recorded to ranking logs, with which we perform chunk-based sampling. The content of each sample includes user index, ad index, chunk index as well as the bid. Note that the bid at the ranking phase could differ from that at the the pre-ranking phase <cit.>. In this case, we record the pre-ranking bid since it influences L_rank in model training. When ads are displayed to users in the client, we also record user feedback in impression logs, which are used in calculating L_ctr. Model Training. The training procedure is performed on our ODL (Online Deep Learning) <cit.> platform, which consumes real-time streaming data to continuously update model parameters. After training with fixed number of steps, the learnt model will be delivered to the Model Center, which manages all online models. Online Serving. Once a new version of pre-ranking model is ready, pre-ranking server will load it from Model Center to replace the online version in service. § EXPERIMENTS In this section, we conduct experiments on both public dataset and production dataset to validate the effectiveness of COPR in improving consistency and overall system performance. §.§ Experiment Setup Taobao Dataset. It is a public dataset[https://tianchi.aliyun.com/dataset/dataDetail?dataId=56] with 26 million impression logs of 1 million users and 0.8 million items in 8 days. Item price is used as bid. Impressions of first 7 days are used to train DIN <cit.> as the ranking model. For each impression, we sample 10 candidates and collect ECPM-ranked results by the ranking model to train pre-ranking models. Logs of the last day are used for evaluation. To simulate the cascading process, we sample 100 candidates for each impression, among which the pre-ranking and ranking model sequentially select top 10 and top 1 candidates to display. Production Dataset. It contain 8 days of impression logs and ranking logs collected from our system shown in Fig. <ref>. These logs are of the magnitude of billions. The first week of logs are used for training and the last day is used for evaluation. According to the scenario that logs come from, it is further divided into two subsets: Homepage and Post-Purchase. Baselines. COPR is compared with following baselines: * Base adopts the architecture of COLD <cit.> and is trained on impression logs. * Distillation <cit.> directly distills predicted scores of the ranking model on impression logs. * RankFlow <cit.> distills predicted scores of the ranking model on ranking logs and further regularizes the pre-ranking model to generate high scores for candidates selected by the ranking model. * COPR w/o Δ NDCG removes the Δ NDCG-based weighting mechanism from the COPR framework. Metrics. We adopt two groups of metrics in evaluation. * The first group measures the consistency between ECPM-ranked results of the pre-ranking and ranking phases, including HitRatio(HR@K), normalized discounted cumulative gain (NDCG@K), and mean average precision (MAP@K). In HR@K and MAP@K, top 10 candidates selected by the ranking model are treated as relative ones. In NDCG@K, order in ranking logs is used as a proxy of relevance. The standard calculation of these metrics can be found in <cit.>. * The second group measures the overall system performance. We use Click-Through-Rate (CTR) and Revenue Per Mille (RPM) similar to <cit.>, which corresponds to user experience and platform revenue, respectively. 
On public dataset, CTR is simulated as the portion of clicked ads in displayed ads, and RPM is simulated as the product of CTR and average bid of clicked ads. In production experiment, we perform online A/B test to obtain CTR and RPM on real traffic. Hyper-parameters. The chunk size is set to 2 and 10 on the public dataset and the production dataset, respectively. The number of MLP layers in the prediction net and the relaxation net is 3. The embedding size of raw input features is set to 16. λ_1 and λ_2 in Eq. (<ref>) are fixed to 1 and 0.2. §.§ Results on Public Dataset Table <ref> compares COPR and baselines in terms of consistency and system performance. We only show K=10 in HR@K, NDCG@K, and MAP@K due to limited space. Results under other settings of K are similar. From Table <ref>, we draw the following conclusions. First, system performance (CTR and RPM) is highly associated with the consistency between the pre-ranking and ranking phases. For COPR and baselines, the higher consistency generally yields the better system performance. It validates our motivation to explicitly optimize consistency between phases in order to improve the overall effectiveness of the cascading system. Second, COPR achieves best consistent results of all methods, outperforming the state-of-the-art RankFlow by 5.1%, 13.5%, and 33.0% in terms of HR@10, NDCG@10, and MAP@10. We attribute the improvement to our shift of objective from score alignment to rank alignment. By such relaxation, COPR can directly optimize towards consistent ECPM-ranked results and meanwhile reduce the learning difficulty for the lightweight model. Moreover, the influence of bids is considered in training COPR, thus alleviating the issue of error amplification that RankFlow suffers from. We also find that RankFlow is better than Distillation. We think it is because Rankflow aligns scores over ranking logs while the latter is on impression logs which is too sparse. Third, COPR w/o Δ NDCG experiences performance drop compared with COPR. This ablation study verifies the effectiveness of the pair weighting mechanism based on Δ NDCG. By emphasizing more on important inter-chunk pairs in consistency optimization, COPR ensures top candidates are more likely to be consistently ranked, which helps improve the overall utility of pre-ranking results. §.§ Results on Production Dataset We also perform similar evaluation on the production dataset composed of samples from two scenarios. Most conclusions are consistent with those on the public dataset. As shown in more details from Fig. <ref> to Fig. <ref>, COPR significantly outperforms other methods in term of HR@K, NDCG@K, and MAP@K with varying K from 5 to 100 on two scenarios, which demonstrates the stable improvement of consistency achieved by our proposed framework. Moreover, we still observe the gap between COPR and COPR w/o Δ NDCG, which shows that the weighting mechanism also works in the large-scale production dataset. To evaluate system performance in production environment, we perform online A/B test on two scenarios, where these methods are used to serve real users and advertisers. From Table <ref> we find that Distillation, RankFlow, and COPR all perform better than the production baseline, among which COPR achieves the largest improvement, with a lift of up to +12.3% CTR and +5.6% RPM. With impressive performance, COPR has been successfully deployed to serve the main traffic of Taobao display advertising system in the pre-ranking phase since October of 2022. 
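For completeness, one common way to compute the consistency metrics used in the tables above is sketched below. The exact conventions (tie handling, relevance grading for NDCG) follow the cited references and may differ slightly from this simplified version; the function names and normalizations are ours.

```python
import numpy as np

def consistency_metrics(pre_ranked, ranked, k=10, top_rel=10):
    """HR@K, NDCG@K and MAP@K of a pre-ranking list against the ranking list.

    pre_ranked: ad ids ordered by pre-ranking ECPM (best first)
    ranked:     ad ids ordered by ranking-phase ECPM (best first)
    The top `top_rel` ads of `ranked` are treated as the relevant set.
    """
    relevant = set(ranked[:top_rel])
    topk = pre_ranked[:k]

    hits = [ad in relevant for ad in topk]
    hr = sum(hits) / len(relevant)

    # MAP@K: mean precision at the positions of the hits
    precisions = [sum(hits[:pos + 1]) / (pos + 1) for pos, h in enumerate(hits) if h]
    map_k = sum(precisions) / min(len(relevant), k) if precisions else 0.0

    # NDCG@K with graded relevance taken from the ranking-phase order
    rel = {ad: len(ranked) - pos for pos, ad in enumerate(ranked)}
    dcg = sum(rel.get(ad, 0) / np.log2(pos + 2) for pos, ad in enumerate(topk))
    ideal = sorted(rel.values(), reverse=True)[:k]
    idcg = sum(g_ / np.log2(pos + 2) for pos, g_ in enumerate(ideal))
    return hr, dcg / idcg, map_k
```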
§.§ Qualitative Analysis Given ranked results from the pre-ranking and ranking phases, we calculate the average pre-ranking position for candidates at each ranking position, based on which we draw the Ranking-PreRanking Curve (RPC). The ideal RPC happens when results are exactly same. §.§.§ Error Amplification in ECPM Rank. As shown in Fig. <ref> (Left), RPC by pCTR of RankFlow is close to the ideal curve, showing well alignment of raw pCTR in two phases. However, after ranking by ECPM, RPC of RankFlow largely deviates from the ideal one. It verifies that the involvement of bid in ECPM will amplify the influence of errors in score alignment, leading to more inconsistent ECPM-ranked results. This analysis is consistent with the example in Table <ref>. Hence we confirm that merely score alignment is not enough for the cascading architecture in online advertising. §.§.§ More Consistent ECPM Rank. Fig. <ref> (Right) shows RPC by ECPM of different methods. We observe that compared with Base and RankFlow, RPC of COPR is more close to the ideal curve in almost each ranking position. It qualitatively shows that ECPM-ranked results given by COPR are more consistent with results of the ranking phase. It can be attributed to the design of our consistency-oriented framework, where the rank alignment module directly optimizes towards this objective. The incorporation of bid also helps alleviate the above mentioned error amplification. § CONCLUSION In this paper, we introduce a consistency-oriented pre-ranking framework for online advertising, which employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize consistency of ECPM-ranked results. A Δ NDCG-based weighting mechanism is also adopted to better distinguish the importance of inter-chunk samples in optimization. Both online and offline experiments have validated the superiority of our framework. When deployed in Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM. ACM-Reference-Format
http://arxiv.org/abs/2306.07631v1
20230613085950
Time Resolved Investigation of High Repetition Rate Gas Jet Target For High Harmonic Generation
[ "Balázs Nagyillés", "Zsolt Diveki", "Arjun Nayak", "Mathieu Dumergue", "Balázs Major", "Katalin Varjú", "Subhendu Kahaly" ]
physics.optics
[ "physics.optics", "physics.app-ph", "physics.atom-ph", "physics.comp-ph", "quant-ph" ]
[Correspondence: ][email protected] ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Institute of Physics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary LULI–CNRS, CEA, Sorbonne Université, Ecole Polytechnique, Institut Polytechnique de Paris, Paris ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Department of Optics and Quantum Electronics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Department of Optics and Quantum Electronics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary [Correspondence: ][email protected] ELI ALPS, ELI-HU Non-Profit Ltd., Wolfgang Sandner utca 3., Szeged 6728, Hungary Institute of Physics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary High repetition rate gas targets constitute an essential component in intense laser matter interaction studies. The technology becomes challenging as the repetition rate approaches kHz regime. In this regime, cantilever based gas valves are employed, which can open and close in tens of microseconds, resulting in a unique kind of gas characteristics in both spatial and temporal domain. Here we characterize piezo cantilever based kHz pulsed gas valves in the low density regime, where it provides sufficient peak gas density for High Harmonic Generation while releasing significantly less amount of gas reducing the vacuum load within the interaction chamber, suitable for high vacuum applications. In order to obtain reliable information of the gas density in the target jet space-time resolved characterization is performed. The gas jet system is validated by conducting interferometric gas density estimations and high harmonic generation measurements at the Extreme Light Infrastructure Attosecond Light Pulse Source (ELI ALPS) facility. Our results demonstrate that while employing such targets for optimal high harmonic generation, the high intensity interaction should be confined to a suitable time window, after the cantilever opening. The measured gas density evolution correlates well with the integrated high harmonic flux and state of the art 3D simulation results, establishing the importance of such metrology. Time Resolved Investigation of High Repetition Rate Gas Jet Target For High Harmonic Generation Subhendu Kahaly July 31, 2023 =============================================================================================== § INTRODUCTION Investigations in ultrashort laser-plasma science in the strong field regime are generically based on the interaction of an appropriately focused laser driver on to reflective (overdense) or transparent (underdense) targets. The interaction conditions needs to be reproduced, and hence the target needs to be replenished, at the repetition rate of the laser. Recent advances in few cycle, high peak power high repetition rate (≥ 1 kHz) lasers <cit.> has expedited the development and characterization of targets that are able to sustain interactions at such challenging repetition rate in a reproducible and stable manner. 
For all transmission based experiments in this domain, the use of gas targets is widespread because they can provide dense, stable and reproducible medium for laser matter interaction studies. The application space is ever expanding with recent demonstrations of laser wake field acceleration of electrons <cit.> and high harmonic based attosecond pulse generation <cit.>, both operating at a high repetition rate. In both the cases a continuous gas cell has been used for the interaction and the accessible gas density space is limited due to the residual gas load within the vacuum chamber. One straight-forward way to overcome this is to use a high repetition rate gas jet target with appropriate nozzle geometry. Pulse valves working up to a very high pressure and gas density has been demonstrated <cit.>, albeit operating at a low frequency. Nonetheless the available repetition rate for pulsed valves currently allows one to reach up to ∼ 5kHz <cit.>. The importance of careful metrology of gas jets emanating from such valves with respect to their appropriate application space cannot be overemphasized. Such systems are important for the attoscience community and beyond. For example coupled with the emergence of ≥ 1 kHz intense lasers <cit.> such a high repetition rate gas jet target can enable the extension of the recent demonstrations like multi millijoule THz <cit.> and/or relativistic single cycle mid IR pulses <cit.> to the high average power regime, opening up wide ranging applications. The capability of solenoid type Even-Lavie valves operating at less than 2 kHz repetition rate has been demonstrated in the domain of high harmonic spectroscopy of molecules <cit.> and transient absorption spectroscopy <cit.>. Here we undertake the space and time resolved investigation of the gas density profile of a piezo cantilever based high repetition rate gas jet from the perspective of optimizing the high harmonic generation (HHG). HHG is a non-linear process where the strong fundamental laser field gets coherently upconverted to a comb of higher frequency radiation <cit.>. This frequency conversion happens in a gas target most of the cases, when the atomic/molecular system of the gas is driven in the strong field regime <cit.>. The conversion efficiency is inherently defined by the HHG process which is dependent on the characteristics of the generating laser pulse and the gas medium. One of the parameters to optimize the high harmonic radiation, is the pressure of the gas target, since the number of particles determine the number of emitters and absorbers in the HHG and define the phase-mismatch. There is a fine balance between increasing the number of emitters and absorbing the generated harmonics <cit.>, for a given set of laser parameters. Thus, it is evident that proper gas target characterization is essential for optimization of the HHG process. In this article, we perform interferometric characterization of the space-time resolved gas density profile and study the HHG from a cantilever based high repetition rate piezo gasjet system. Our investigation reveal a clear correlation of the HHG yield and the dynamics of the gas density evolution. We further corroborate our observation with 3D strong field simulations that incorporate, microscopic HHG along with macroscopic propagation effects emulating the experimental conditions.Our results show that the gas density profile resulting from such a valve is intricately linked to the dynamics of the cantilever piezo. 
The remarkable correlation of the HHG signal with the gas density dynamics allows us to achieve stable and optimum harmonic signal through careful timing setting of the valve opening with respect to the pulse arrival. This also establishes the importance of such space time resolved characterization for each such high repetition rate piezo cantilever based valve for any given application under consideration. This becomes crucial in systems like the SYLOS COMPACT beamline at ELI-ALPS <cit.> where several high repetition rate pulsed gas jets can be placed in sequence <cit.>, in order to improve XUV beam energy, by optimising the phase matching conditions for applications in nonlinear XUV physics <cit.>. § EXPERIMENTS The required behaviour of a gas jet in HHG is its short opening time, while creating high density jet at its orifice at high repetition rate. It is crucial to reliably synchronize the timing of the nozzle opening of the gas jet and the arrival of the generating laser pulse, in order to get the best harmonic yield. The experiments have been conducted at two separate locations. We developed a standalone test station to characterize the gas density inside the jet under different timing of trigger, valve opening time and backing pressure. We used the outcome to compare it to the harmonic yield obtained from the experiments conducted at the SYLOS COMPACT beamline <cit.> at ELI-ALPS. §.§ Gas jet characterization by interferometry The density profiling of gas targets has been carried out with several different methods (see <cit.> and references teherin), for both static <cit.> and pulsed jets <cit.>. Here we undertake space-time resolved interferometry to access the gas atomic density distribution. The experimental layout is based on an Mach-Zehnder interferometer, Fig. <ref>(a). An expanded He-Ne laser is used to enter the interferometer after being split in two arms. The interaction arm passes through a gas jet to introduce some phase shift in the optical path of the laser beam with respect to the reference arm. Both arms are in vacuum. The gas target transverse plane (represented by I_1(y,z) in Fig. <ref>(a)) is imaged onto a CMOS sensor (a commercial Basler acA1440-73gm camera), where the recombined beams form the interference pattern (I_12(y,z) in Fig. <ref>(b)). Several vital points have been taken into account when characterizing the gasjets: * To keep the signal to noise ratio high (especially at low gas jet densities) an ATH 500M turbo-molecular pump was providing the low ambient pressure in the chamber, 10^-5-10^-6 mbar. Additionally, the whole part of the interferometer, which is not in vacuum had to be covered to protect the beam paths from air fluctuations and the assembly was placed on stable optical table. These precautions, reduced the residual gas load, minimised parasitic vibrations and reduced refractive index fluctuations in the interferometer, allowing the intrinsic noise of the setup to be limited to gas density levels as low as about 3×10^17 cm^-3, estimated from the analysis of reference images of the interference pattern, without activating the valve. * The experimental target gas is argon which has high refractive index of 1.00028 at wavelength λ=633 nm (for example significantly higher compared to helium 1.000034 at the same λ) allowing for more sensitivity in terms of measuring phase difference, in spite of the sub-millimetric width of the gasjet, even in the sub-10^18 cm^-3 density regime. 
* The gas refractive index ansatz is valid, when the characterization is not corrupted due to molecular jet formation with large clusters. Within the parameter range relevant to us the empirical Hagena parameter Γ^*≪ 100 is significantly less that the limit Γ^*∼ 10^3 required for cluster formation <cit.>. For the tests we used an Amsterdam Piezo Valve ACPV2 model, a cantilever piezo with 500 μm nozzle size. Cantilever piezos can deliver large displacements up to hundreds of micrometers to 1 mm, while working at high repetition rates, up to 5kHz. The difference of the cantilever piezos to disk shaped piezos is that by adjusting the free length of the cantilever one can adjust the displacement of the cantilever<cit.>. For example, by decreasing the length, the displacement drops rapidly, while its resonant frequency increases. Cantilever resonance can introduce observable effects in gas density measurement. Since the cantilever will bounce back and forth while opening and closing the pulsed valve, it can introduce pressure and hence number density fluctuation in the released gas within one such cycle of operation. The synchronization between the camera and the jet is realized with a delay generator. The time resolution of the measurement - which is determined by the shortest possible exposure time of the CMOS sensor is 1 μs. For each measurement two images are recorded, one with the nozzle opened and one with the nozzle closed serving as a reference measurement without any gas present in any arms - this is realized by running the camera at twice the repetition rate of the gas source. For resolving the temporal evolution of the gas density while opening the valve, the camera trigger was delayed compared to the trigger signal of the jet. The setup can record images at up to 100 Hz, but in order to get a background free image one has to wait until the turbomolecular pump can reduce the pressure in the chamber to the base - 10^-5-10^-6 level, resulting a few Hertz operation. As depicted in the flowchart in Fig. <ref>(b), the 2D phase shift ϕ(y,z) is extracted from the interferogram using 2D Fourier transformation algorithm described in Ref. <cit.>. One can see from a typical unwrapped phase map presented in Figure <ref>(b) (step 3) that in the plane perpendicular to the propagation axis x, the jet rapidly spreads out as the distance from the nozzle tip increases (vertical z direction). The extra contribution to the phase shift introduced in the probe beam propagating along x by the argon gas density profile is, Δϕ(y,z) = ∫2π/λΔ n(x,y,z)dx, where Δ n(x,y,z)=n(x,y,z)-1, is the shift in index of refraction due to the presence of the gas and n(x,y,z) is the refractive index of argon jet. As explained in the caption of Fig. <ref>, Δϕ(y,z) is calculated from two projection interferograms: one with the gasjet on and the other without any gas in the interaction arm. The measured phase-map Δϕ(y,z) is a 2D projection of the 3D distribution of the phase difference Δϕ(r,z) introduced by the gasjet (r is the radial distance from the center of the gasjet axis z). 
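The Fourier-transform phase-extraction step described above can be sketched as follows. This is a simplified illustration of the algorithm in the cited reference: the carrier fringes are assumed nearly vertical so that one sideband can be isolated with a crude half-plane mask, and a real analysis would filter and unwrap the phase more carefully.

```python
import numpy as np

def fringe_phase(interferogram):
    """Wrapped phase of a fringe pattern via the 2D Fourier-transform method.

    Assumes nearly vertical carrier fringes, so one sideband is isolated with a
    crude half-plane mask (a stand-in for the filtering of the cited algorithm).
    """
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    ny, nx = F.shape
    mask = np.zeros_like(F)
    mask[:, nx // 2 + 3:] = 1.0            # keep one sideband, skip the zero-order peak
    sideband = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.angle(sideband)

def phase_difference(gas_on, gas_off):
    """Phase map Δφ(y, z): gas-on interferogram minus the gas-off reference."""
    dphi = fringe_phase(gas_on) - fringe_phase(gas_off)
    # simple row/column unwrapping; a dedicated 2D unwrapper may be preferable
    return np.unwrap(np.unwrap(dphi, axis=0), axis=1)
```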
Since one can assume that the jet has a cylindrical symmetry it is possible to transform the projection Δϕ(y,z) to a radial distribution θ(r, z) = Δϕ(r,z) using the inverse Abel transform (IAT) <cit.> as follows: θ(r, z) = IAT[Δϕ(y, z)] = -1/π∫_r^∞dΔϕ(y, z)/dy1/√(y^2 - r^2) dy where r is the radial distance from the center of the nozzle, z is the vertical distance from the tip of the nozzle and transverse coordinate y is the coordinate perpendicular to x and z. This is indicated in step 4 of Fig. <ref>(b). We numerically carry out the IAT in Python using the well developed BAsis Set EXpansion (BASEX)<cit.> method in the package PyAbel <cit.>. The refractive index can be expressed with the radial distribution using equation, Δ n(r,z)=n(r,z)-1 = λ/2 πθ (r,z), where λ is the wavelength of the laser. The refractive index is connected to the number density. This can be found through the series of few steps. The molar reflectivity (A) relates the optical properties of the substance with the thermodynamic properties. From the Lorenz—Lorentz expression which is dependent on the temperature (T) through the molar mass (M), A = ( n^2 -1 ) /( n^2 +2 ) M/ρ, where n is the refractive index of the atomic gas and ρ is the gas density <cit.>. The molar mass can be given by, M = R T ρ/p, where R is the universal gas constant and p is pressure. Using the relation between polarizability (α_e=1.664 Å^3 for argon) and molar reflectivity, A = 4/3N_Aα_eπ and ideal gas law as pV = NRT, the number density n_md=N/V in the units of particles/cm^3 can be written as: n_d = n_md/N_A = 3/4( n^2 -1 ) /( n^2 +2 ) 1/N_A^2α_e π This is the last step presented in Fig. <ref>(b). §.§ High Harmonic Generation in the beamline The gas density dynamics during the opening and closing of the cantilever is crosschecked on the SYLOS COMPACT beamline <cit.>. The main goal of this beamline is to achieve high energy isolated attosecond pulses as well as attosecond pulse trains in the sub 150 eV regime at high repetition rate, in order to perform XUV-pump XUV-probe nonlinear experiments <cit.>. To achieve this goal it uses long laser focusing (10 m) and up to four high pressure gas jets to generate XUV radiation. The generated XUV beam is separated from the driver IR by a 200 nm thick Al filter and the XUV signal is detected with a calibrated XUV photodiode. The incoming beam size is around 6 cm which was reduced with an iris to 3 cm in order to maximize the XUV yield - resulting around 12 mJ in the interaction region. These conditions enable the generation of around 30nJ XUV pulses, from argon gas, after XUV filter. The driving laser for this experiment was the SYLOS Experiment Alignment (SEA) laser <cit.> which operates at 10Hz and delivers 34mJ pulse energy with 11fs pulse duration at 825nm central wavelength. When optimizing the XUV energy it is crucial to get the timing of the valve opening and the opening duration correct for the individual gas jets, in order to maximize the gas density in each interaction region. Keeping the valve opening time constant and changing the delay between the laser trigger and the opening of the valve one can study the impact of the dynamics of the valve opening on the integrated yield of the generated XUV. In an ideal case there is a rise in the gas density as the valve opens, so does the XUV yield grow, then it reaches a maximum, when the valve opens the most. 
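Before examining the valve-timing behaviour in more detail, the remaining retrieval steps above (inverse Abel transform and conversion to number density) can likewise be sketched. The paper itself uses the BASEX method of PyAbel for the inversion; the naive quadrature and the Lorentz–Lorenz conversion below (with the argon polarizability quoted in the text) are only an illustrative sketch.

```python
import numpy as np

def inverse_abel(dphi, dy):
    """Naive inverse Abel transform of one horizontal phase profile Δφ(y).

    dphi is sampled on y = 0, dy, 2*dy, ... (half profile, symmetry axis at
    y = 0) and must decay to zero at the edge. Returns θ(r) in rad per unit
    length. This direct quadrature is crude near the integrable singularity;
    the measurements in the text use the BASEX implementation instead.
    """
    grad = np.gradient(dphi, dy)
    y = np.arange(dphi.size) * dy
    theta = np.zeros_like(dphi, dtype=float)
    for ir in range(dphi.size - 1):
        yy = y[ir + 1:]                      # start just above r to avoid the pole
        theta[ir] = -np.trapz(grad[ir + 1:] / np.sqrt(yy**2 - y[ir]**2), yy) / np.pi
    return theta

def number_density(theta, wavelength=633e-9, alpha_e=1.664e-30):
    """Argon number density (m^-3) from θ(r,z) via the Lorentz-Lorenz relation.

    alpha_e is the polarizability quoted in the text, 1.664 Å^3, expressed in m^3.
    """
    n_refr = 1.0 + wavelength * theta / (2.0 * np.pi)       # Δn = λ θ / (2π)
    return 3.0 * (n_refr**2 - 1.0) / ((n_refr**2 + 2.0) * 4.0 * np.pi * alpha_e)
```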
The gas density corresponding to the maximum valve opening should stay fairly constant during the opening time of the valve, then it should slowly drop to zero with the closing valve. This is the typical behavior at the disk shaped piezo valve. We show that the gas density does not stay constant during the opening time of the cantilever piezo valve, which introduces an extra factor to optimize during the high harmonic generation. § RESULTS §.§ Experimental observations In Fig. <ref>(a) we present the retrieved gas atomic number density distribution along the radial (please note that the radial r, x and y distributions are same due to the cylindrical symmetry of the gas flow) and the vertical direction, achieved by applying the protocol presented in the previous section. The presented density map is achieved at one specific delay after opening of the gas valve. Due to the shape of the nozzle, the expanding gas jet is rather confined into a cylinder along the vertical direction with a diameter of 500 μm (which is the opening size of the nozzle) and not expanding much in the radial direction. As expected the maximum value of the number density distribution is close to the exit of the valve and its value is around 1.2×10^19 particles per cm^3. The central z line-out n_d(r=0,z) presented in Fig. <ref>(b) shows the exponential decay of the number density distribution as the distance from the nozzle increases in the vertical direction, a typical feature of such nozzle geometry <cit.>. In Fig. <ref>(c), we plot the radial line-outs of n_d(r,z) for the five different z values marked in Fig. <ref>(b). In all the measurements, the opening window of the valve is set to 400 μs. In Fig. <ref> (a) we present five different snapshots of the dynamic evolution of the gas number density distribution in space, within the opening window of the gas valve. As discussed before, the gas density is exponentially dropping as the function of height from the nozzle exit. Therefore, in order to access the higher density region for the HHG experiments, for the given focusing configuration, one has to shoot as close to the exit of the nozzle as possible, without damaging the nozzle. Because of technical constrains, for our conditions, in the HHG experiments, we kept the center of the intense focused beam around 400 μm above the exit. The black dashed horizontal lines on the colormaps in Fig. <ref> (a) represent the laser propagation axis (z=400 μm) for HHG experiments. The red solid circles in Fig. <ref> (a) indicate the position of maximum atomic gas density along the laser propagation axis. These colormaps show that during the opening time of the valve the laser focus experiences large variations in the gas distribution. In order to closely follow the temporal evolution we obtain a large number of 2D spatial gas number density snapshots as a function of delay within the opening window of the valve. In Fig. <ref> (b) we plot the maximum number density seen by the center of the laser focal spot as a function of this delay. The vertical red lines in Fig. <ref> (b) indicate the temporal delay where the 2D number density snapshots (presented in Fig. <ref> (a)) were taken while the red circles and the black curve correspond to maximum gas number density in the focal spot. In an ideal case, during the scan of the opening window one would see the rise then the drop of the number of particles in the interaction region, while the maximum would show the right delay setting for optimal harmonic yield. 
However, in our case in Figure <ref> (b) we clearly identify several maxima as the function of the delay. Time dependent injection of the gas jet within the pulsed valve aperture time and a consequent gas density depletion has been observed by other research groups as well <cit.>. One can observe two important features on the graph in Fig. <ref> (b): on one hand the opening and closing of the valve is not sudden, but takes several tens of microseconds. On the other hand, during the opening window, the signal oscillates. The first minimum is a drop of approximately 40 percent in the number density, while the following drops are less intense. However the maxima reaches roughly the same level in each case. The observed facts are a clear sign of a damped oscillation of the cantilever piezo <cit.> which is well known from vibrations of cantilever beams <cit.>. The frequency of this oscillation is roughly 7.6 kHz determined by physical parameters of the piezzo and not influenced by the driving frequency <cit.>. The results show that the dynamics of the cantilever when using it as a valve introduces variations in the gas jet density profile emphasizing the importance of such metrology in any experiment that is sensitive to the gas atomic number density. In addition, since the gas expansion from such a valve is non-trivial, the knowledge of the exact distribution of gas density could improve the understanding and modelling of the HHG process, while correlating with experimental observations. Fig. <ref> (c) presents the normalized high harmonic yield (red rectangles) as the function of the delay of the arrival time of the interacting intense focused laser pulse with respect to the opening time (marked as zero delay which signifies a measurable number density above the detection threshold in Fig. <ref>(b)) of the valve. The relative high harmonic yield is experimentally measured with a thin film coated photodiode (Optodiode-AXUV100AL). The HHG yield data presented in Fig. <ref> (c) is normalized with respect to the maximum measured yield. The red vertical lines correspond to the delay times presented in Fig. <ref> (a). Multiple red rectangles at the same delay time (wherever available are presented in Fig. <ref> (c)) represent typical fluctuations in the measured yield. The measured HHG yield data in Fig. <ref> (c) follows remarkably well the number density variations presented in Fig. <ref> (b). For the HHG interaction, here we note the following points: * The laser pulse duration (∼11fs FWHM of the pulse intensity envelop) is negligible on the timescale of gas density evolution. This implies that the interacting pulse sees a frozen gas density distribution in the transverse plane. This ensures that the microscopic emitter distribution across the focal spot, within the pulse duration during HHG are not evolving. * The transit time of the intense laser pulse through the gasjet target (typically < 10ps in our case) is also negligible compared to the time scale of gas dynamics. This ensures that the measured temporal snap-shots of the spatial distribution of gas number density does not change during pulse propagation and thus can be utilised for macroscopic phase matching considerations in HHG. 
* Since the confocal parameter (∼49 cm) is significantly larger than the medium length (∼1 mm) under our experimental configuration, we are not limited by longitudinal variation of intensity and the contribution of Gouy phase, associated with the spatial focusing of the fundamental laser pulse, to phase matching is unimportant. * The gradient in the number density (presented in Fig. <ref>(b)), across the focal spot diameter of ≈300 μm can result in subtle effects like influencing the phase matching condition for the HHG. This can lead, for example, to distortions in the XUV wavefront impacting the focusability of the XUV beam <cit.>, which is beyond the scope of the present manuscript. The experimental results demonstrate that in case of HHG, it is essential to know the exact number density of the gas medium in a space time resolved manner. In addition, in order to optimize the harmonic yield one has to synchronize the arrival of the generating laser pulse with the opening time of the valve and introduce an appropriate relative time delay, depending on spatio-temporal the characteristics of the gas jet under utilization. At this point, we would like to emphasize that the monotonic nature of the HHG yield as a function of measured gas jet atomic density as observed experimentally and presented in Fig. <ref> (c) is not the case in general. In case of coherent light emission - like HHG - the generated photon flux scales quadratically with the number of emitters under ideal conditions <cit.>. The resemblance between the jet density (Fig. <ref> (b)) and the harmonic yield (Fig. <ref> (c)) highlights the importance of phase matching, as it manifests under our specific experimental conditions. A close investigation of the correlation between the number density data in Fig. <ref> (b) and the measurements in Fig. <ref> (c) reveal that within our interaction regime the HHG yield is almost proportional to the gas pressure. Phase matching is a complex dynamical <cit.> process and the relation between gas atomic number density and HHG yield is not straight forward in the short pulse regime. In order to investigate further we undertake numerical simulations in the following. §.§ Numerical validation using 3D Simulation Direct measurement of the gas number density distribution in the HHG interaction region is crucial not just from the optimization of the high harmonic source. Such metrology also enables one to feed experimental measurements into state of the art simulation tools that are often utilised to investigate the strong field interaction further. In this case the numerical simulations can be performed in a virtual experimental set up with initial parameters mimicking the real experimental conditions. This is important, if one needs to reconcile experimental observations with theoretical results and interpret the relevant physics in a correct manner. In our case, we undertake such an effort and use state of the art simulations where the gas jet metrology data is fed as input to simulate the harmonic yield. We note here, that the macroscopic effects like plasma generation, absorption and refraction during propagation play significant part in the phase matching process and hence cannot be neglected for calculation of the HHG yield. 
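As a brief aside before the simulations: the cantilever-induced modulation of the peak density identified earlier can be quantified by fitting a damped oscillation to the density-versus-delay trace. The functional form, starting values, and synthetic data below are illustrative assumptions rather than the analysis actually performed here, but such a fit would return the roughly 7.6 kHz resonance directly.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_density(t, n0, a, f, tau, phi):
    """Plateau density modulated by a damped cantilever oscillation."""
    return n0 * (1.0 + a * np.exp(-t / tau) * np.cos(2.0 * np.pi * f * t + phi))

# delay axis (s) and peak densities would come from the interferometric scan;
# synthetic values are used here only to make the example self-contained
t = np.linspace(0.0, 400e-6, 80)
n_meas = damped_density(t, 1.0, 0.4, 7.6e3, 150e-6, 0.0) + 0.02 * np.random.randn(t.size)

p0 = (1.0, 0.3, 7.0e3, 100e-6, 0.0)                  # rough starting guesses
popt, _ = curve_fit(damped_density, t, n_meas, p0=p0)
print(f"fitted cantilever frequency: {popt[2] / 1e3:.1f} kHz")
```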
In order to investigate the experimental results further, we have performed a series of macroscopic simulations using a three-dimensional (3D) non-adiabatic model, described in detail elsewhere <cit.>. As a short summary, the simulation is performed in three self-consistent computational steps. Firstly, to analyze the propagation of the linearly polarised electric field of the fundamental laser pulse E(𝐫_L,t) in the generation volume, the nonlinear wave equation of the form ∇^2E(𝐫_L,t)-1/c^2∂^2E(𝐫_L,t)/∂ t^2=ω_0^2/c^2(1-n_eff^2(𝐫_L,t))E(𝐫_L,t) , is solved <cit.>. In the previous equation, c is the speed of light in vacuum, ω_0 is the central angular frequency of the laser field, and the suffix L in 𝐫_L indicates that this vector represents the coordinate in the frame with respect to the laser axis (in contrast to the r scalar coordinate described previously around the gas jet symmetry axis). The effective refractive index n_eff(𝐫_L,t) of the excited medium — depending on both space and time — can be obtained by <cit.> n_eff(𝐫_L,t)=n+n̅_2 I(𝐫_L,t)-ω_p^2(𝐫_L,t)/2ω_0^2, where I(𝐫_L,t)=1/2ϵ_0c|Ẽ(𝐫_L,t)|^2 is the intensity envelope of the laser field (note that in this expression the complex electric field Ẽ(𝐫_L,t) is present <cit.>), and ω_p(𝐫_L,t)=[n_e(𝐫_L,t)e^2/(mϵ_0)]^1/2 is the plasma frequency. The plasma frequency is well known to be a function of the electron number density n_e(𝐫_L,t), and its expression also contains the electron charge e, the effective electron mass m, and the vacuum permittivity ϵ_0. Dispersion and absorption, along with the Kerr effect, are thus incorporated via the linear (n) and nonlinear (n̅_2) parts of the refractive index. Absorption losses due to ionization <cit.> are also included, while plasma dispersion is estimated based on ionization values in the last term of n_eff(𝐫_L,t). The model assumes cylindrical symmetry about the laser propagation direction z_L (𝐫_L→ r_L,z_L) and uses the paraxial approximation <cit.>. Applying a moving frame translating at the speed of light, and eliminating the time derivative using the Fourier transform ℱ, Eq. (<ref>) reduces to the explicit form, (∂^2/∂ r_L^2+1/r_L∂/∂ r_L)E(r_L,z_L,ω)-2iω/c∂ E(r_L,z_L,ω)/∂ z_L = ω^2/c^2ℱ[(1-n_eff^2(r_L,z_L,t))E(r_L,z_L,t)]. Equation (<ref>) is solved using the Crank–Nicolson method in an iterative algorithm <cit.>. The ABCD-Hankel transform is used to define the laser field distribution in the input plane of the medium <cit.>. In step two, we calculate the single-atom response (dipole moment D(t)) based on the laser-pulse temporal shapes available on the complete (r_L, z_L) grid, by evaluating the Lewenstein integral <cit.>. The macroscopic nonlinear response P_nl(t) is then calculated by taking the depletion of the ground state into account <cit.> using P_nl(t)=n_aD(t)exp[-∫^t_-∞w(t')dt'], where w(t) is the ionization rate obtained from tabulated values calculated using the hybrid anti-symmetrized coupled channels approach (haCC) <cit.>, showing a good agreement with the Ammosov-Delone-Krainov (ADK) model <cit.>, and n_a is the atomic number density within the specific grid point (r_L, z_L) <cit.>. In the third step we calculate the propagation of the generated harmonic field E_h(𝐫_L,t) using the wave equation ∇^2E_h(𝐫_L,t)-1/c^2∂^2E_h(𝐫_L,t)/∂ t^2=μ_0d^2P_nl(t)/dt^2 , with μ_0 being the vacuum permeability. Equation (<ref>) is solved in a manner similar to Eq. (<ref>), but without an iterative scheme (since the source term is known).
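As a small numerical illustration of the quantities entering the propagation step, the effective refractive index defined above can be evaluated on a grid as follows. The constants are standard SI values; the intensity, electron density, and Kerr coefficient used in the example call are placeholders, not the parameters of the actual simulations.

```python
import numpy as np

# SI constants
e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

def effective_index(n_lin, n2_bar, intensity, n_electron, omega0):
    """n_eff = n + n2_bar * I - omega_p^2 / (2 * omega0^2) on a (r_L, z_L, t) grid.

    n_lin      : linear refractive index of the neutral gas
    n2_bar     : nonlinear (Kerr) index coefficient [m^2/W]
    intensity  : laser intensity envelope I(r_L, z_L, t) [W/m^2]
    n_electron : free-electron density from the ionization model [m^-3]
    omega0     : central angular frequency of the driver [rad/s]
    """
    omega_p_sq = n_electron * e**2 / (m_e * eps0)        # plasma frequency squared
    return n_lin + n2_bar * intensity - omega_p_sq / (2.0 * omega0**2)

# illustrative call for an 825 nm driver (placeholder arrays and coefficients)
omega0 = 2.0 * np.pi * c / 825e-9
I = np.full((64, 64), 1e18)          # W/m^2
ne = np.full((64, 64), 1e23)         # m^-3
print(effective_index(1.0 + 2.8e-4, 1e-23, I, ne, omega0).mean())
```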
The amplitude decrease and phase shift of the harmonic field - caused by absorption and dispersion, respectively - are incorporated at each step when solving equation (<ref>) by taking into account the effect of the complex refractive index on wave propagation. The real and imaginary parts of the refractive index in the XUV regime are taken from tabulated values of atomic scattering factors <cit.>. The simulation method described above assumes radial symmetry around the laser propagation axis. For the laser spatio-temporal profile we use the measured focal spot distribution and the experimental laser pulse duration in order to mimic the real experimental conditions. For the gas jet atomic number density profile we use the measured gas jet number density profile along the axis of laser propagation (peak densities as shown in Fig. <ref> (b)). Thus, within our numerical simulations, the influence of the gas density gradient across the laser focal spot (along the symmetry axis of the gas jet as presented in Fig. <ref>(b)) is lumped into an average value. Figure <ref> (c) presents the simulated harmonic yield (black hollow circles) as a function of the delay from the opening of the valve. The gas jet pressure for the simulation was calculated from the number density variation in Figure <ref> (b). Both the HHG measurements and simulations show a remarkable resemblance to the jet density variation measured with the interferometric technique. The simulations also revealed that under the circumstances that describe these experiments, transient phase matching <cit.> limits efficient generation to the first half of the short laser pulse. At the same time, due to minimal reshaping of the pulsed laser beam, there are spatially homogeneous phase matching conditions in the whole interaction volume. This allows us to apply a simple model <cit.> to explain the variation of the observable harmonic flux in the absorbing medium. The analysis confirmed that with the coherence lengths and absorption lengths involved, the harmonic flux changes close to linearly with the change of atom number density. § CONCLUSION On one hand, an interferometric gas density characterization was developed for underdense gas jets produced from a high-frequency (up to 5 kHz) cantilever piezo valve. On the other hand, we show that the cantilever valve has its characteristic dynamics while opening, resulting in an oscillation of the gas density as a function of time. Using HHG from such a gas jet target, we observe a remarkable experimental correlation between the gas density and the HHG yield. Our results have been corroborated by sophisticated simulations that self-consistently include both microscopic HHG and macroscopic propagation effects under conditions mimicking the real experimental scenario. Our results establish the feasibility of utilizing cantilever-based high repetition rate gas valves for high harmonic generation processes, emphasizing the importance of precise timing control in order to access the proper gas density regime. This also shows that appropriate time- and space-resolved characterization and monitoring of such gas valves is an important aspect for their application, and reproducible performance is easily achieved by properly managing the synchronization of the gas jet with respect to the arrival time of the laser.
The results are also important to a diverse field of studies which can benefit from high repetition rate gas jets, where the signature effects of the phenomena, have sensitive dependence upon the precise gas density profile such as molecular or atomic quantum path interferometry <cit.>, ion spectroscopy from dilute plasma <cit.> spatio-temporal <cit.> and equivalently spatio-spectral <cit.> control of attosecond pulses, and in designing of gas based extreme-ultraviolet refractive optics <cit.>, to name a few. § ACKNOWLEDGMENTS ELI ALPS is supported by the European Union and co-financed by the European Regional Development Fund (ERDF) (GINOP-2.3.6-15-2015-00001). This project has received funding from the European Union Framework Programme for Research and Innovation Horizon 2020 under IMPULSE grant agreement No 871161. S.K. acknowledges Project No. 2020-1.2.4-TÉT-IPARI-2021-00018, which has been implemented with support provided by the National Research, Development and Innovation Office of Hungary, and financed under the 2020-1.2.4-TET-IPARI-CN funding scheme.
http://arxiv.org/abs/2306.04372v2
20230607121154
Thermal expansion of atmosphere and stability of vertically stratified fluids
[ "T. D. Kaladze", "A. P. Misra" ]
physics.ao-ph
[ "physics.ao-ph", "physics.flu-dyn", "physics.geo-ph" ]
1 .001 [email protected] I. Vekua Institute of Applied Mathematics and E. Andronikashvili Institute of Physics, Tbilisi State University, Georgia [email protected]; [email protected] Department of Mathematics, Siksha Bhavana, Visva-Bharati University, Santiniketan-731 235, India The influence of thermal expansion of the Earth's atmosphere on the stability of vertical stratification of fluid density and temperature is studied. We show that such an influence leads to the instability of incompressible flows. Modified by the thermal expansion coefficient, a new expression for the Brunt-Väisälä frequency is derived, and a critical value of the thermal expansion coefficient for which the instability occurs is revealed. Thermal expansion of atmosphere and stability of vertically stratified fluids A.P. Misra July 31, 2023 ============================================================================= § INTRODUCTION Climate change is vitally connected to the warming processes (such as convection, in which the heat energy gets transferred by the movement of neutral fluids from one place to another) in the Earth's atmosphere. In addition, numerous other processes, including meteorological and auroral activities and a solar eclipse, can cause equilibrium density and pressure inhomogeneities, and their gradients. As a result, the atmospheric fluids under gravity become stratified, and in the interior, the small-scale density and pressure fluctuations can produce internal gravity waves (IGWs). The latter are thus of interest in the general circulation of atmospheric stratified fluids <cit.>. So, the characteristics of IGWs become the primary investigation of many scientists. Not only do these waves play crucial roles in particle transport and momentum and energy transfers, as they propagate vertically from the Earth's surface to the upper atmosphere, but these are also relevant in large-scale zonal flows <cit.>, formation of solitary vortices <cit.>, and for the emergence of chaos and turbulence <cit.>. In the generation of IGWs, buoyancy plays the role of restoring force that opposes vertical displacements of fluid particles under gravity, and they are associated with the equilibrium density and temperature inhomogeneities. Typically, the frequency of IGWs ranges in between the Coriolis parameter and the Brunt-Väisälä frequency, i.e., 10^-4 s^-1<ω<1.7×10^-2 s^-1 and their amplitudes are relatively small in the tropospheric and stratospheric layers <cit.>. The linear and nonlinear theories of IGWs have been studied by several authors owing to their fundamental importance in understanding the Earth's atmosphere <cit.>. Typically, the dynamics of stratified fluids are more complex than homogeneous fluids. When the stratified fluids are stable, they can support the existence and propagation of various kinds of gravity waves, including IGWs. However, the stratified fluids may become unstable due to the density variations in different layers of the atmosphere. In this situation, the corresponding Brunt-Väisälä frequency may become imaginary due to a negative density gradient, i.e., when the atmospheric fluid density decreases with height <cit.>. In addition, if the temperature variations (spatial) occur due to differential heating and hence the density variations owing to thermal expansion, there may be competitive roles between the temperature and density gradients, and the relevant fluid dynamics becomes more interesting to study. 
In this letter, we study the influence of thermal expansion on the stability of vertical stratification of atmospheric fluids (in the regions of the troposphere and stratosphere). We show that the Brunt-Väisälä frequency N(z) gets tightly connected to IGWs, and it stimulates their horizontal propagation. In the case when N^2(z)>0, the background vertical stratification is said to be stable, but when N^2(z)<0, the stratification becomes unstable. Also, we discuss the behaviors of N(z) with the effects of the thermal expansion coefficient. § BASIC EQUATIONS AND ANALYSIS WITH OBSERVATIONAL DATA We consider the linear propagation of IGWs in incompressible stratified atmospheric neutral fluids. As a starting point, we consider the following momentum balance and the continuity equations for incompressible neutral fluids. ∂ u/∂ t+( u·∇) u=-1/ρ∇ p+ g, dρ/dt≡∂ρ/∂ t+( u·∇)ρ=0,i.e., ∇· u=0, where u, ρ, and p are the neutral fluid velocity, mass density, and the pressure respectively, and g=(0,0,-g) is the constant gravitational acceleration directed vertically downward. In equilibrium without the fluid flow, we have from Eq. (<ref>) ∂ p_0/∂ z=-ρ_0 g. As said, differential heating causes spatial variations of temperature in the fluid, which in turn produces the density variation due to the thermal expansion. Thus, if β (K^-1) is the volumetric thermal expansion coefficient of the heated incompressible fluid, the equation of state can be written as <cit.> ρ=ρ_0(z)(1-β T), where ρ_0 is the fluid mass density at temperature T=0. Considering the data for the “U.S. Standard Atmosphere Air Properties" <cit.>, the density and temperature variations of the atmosphere with the height (stratification) are presented in Table <ref> and the variations are graphically exhibited in Fig. <ref>. In Table <ref>, the temperature and density gradients are obtained using the central difference formula. From Table <ref> and Fig. <ref>, it is evident that the fluid density decreases with the height, i.e., dρ_0/dz<0 in the whole region of 0<z<50 (km). However, the temperature decreases with the height, i.e., dT_0/dz<0 in the interval 0<z<15 (km), but the same increases in the other interval, i.e., dT_0/dz>0 in 15<z<50 (km). So, we are interested mainly in the altitudes of the troposphere (ranging from 0 to 15 km) and stratosphere (ranging from 15 to 50 km) and consider the vertical distribution of Brunt-Väisälä in the neutral fluid atmosphere. In what follows, we also show the dependence of the thermal expansion coefficient (β) on the temperature T_0 in Fig. <ref>. The data used are as in Ref. <cit.>. It is clear that the expansion coefficient falls off quickly with increasing values of the temperature and that the maximum temperature T_0≈288.15 K occurring at the Earth's surface corresponds to the thermal expansion coefficient β≈0.0035. Later, we will show that such value of β is minimum (corresponding to the maximum temperature T_0≈288.15 K) above which the Brunt-Väisälä frequency becomes negative (N^2<0) and hence the instability of stratified fluid density perturbations. It is well known that the density variations due to internal gravity waves do not exceed 3-4%. So, the ratio between the density perturbation and the unperturbed density is small, i.e., ρ_1/ρ_0≈(1-4)×10^-2. In this case, the momentum equation (<ref>) in the Boussinesq approximation reduces to ∂ u/∂ t+( u·∇) u=-1/ρ_0∇ p_1-ρ_1/ρ_0 gẑ, where the suffix 1 in ρ and p denotes perturbation and ẑ is the unit vector along the z-axis. 
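As a numerical companion to the tabulated profiles above (and anticipating the modified Brunt-Väisälä frequency derived in the remainder of this section), the sketch below recomputes the gradient columns of the table with central differences and evaluates N^2(z) for a few values of β. The short z, T_0, ρ_0 arrays are a coarse excerpt of the U.S. Standard Atmosphere, and the ideal-gas estimate β ≈ 1/T_0 (which reproduces the quoted 0.0035 at 288.15 K) is an assumption of this illustration rather than a value taken from the reference data.

```python
import numpy as np

g = 9.81  # m/s^2
# coarse excerpt of the U.S. Standard Atmosphere (height m, temperature K, density kg/m^3)
z    = np.array([0.0, 5e3, 10e3, 15e3, 20e3, 30e3, 40e3, 50e3])
T0   = np.array([288.15, 255.7, 223.3, 216.65, 216.65, 226.5, 250.4, 270.65])
rho0 = np.array([1.225, 0.7364, 0.4135, 0.1948, 0.08891, 0.01841, 0.003996, 0.001027])

# gradient columns of the table: central differences at the interior points
dT0_dz = np.gradient(T0, z)
drho0_dz = np.gradient(rho0, z)

# ideal-gas estimate of the thermal expansion coefficient at the surface
print("beta(288.15 K) ~", 1.0 / T0[0])          # ~0.0035 K^-1, as quoted in the text

def brunt_vaisala_sq(beta):
    """Modified frequency N^2(z) = g[(beta*T0 - 1)(1/rho0) drho0/dz + beta dT0/dz]."""
    return g * ((beta * T0 - 1.0) * drho0_dz / rho0 + beta * dT0_dz)

for beta in (0.003, 0.0035, 0.004, 0.005):
    n2 = brunt_vaisala_sq(beta)
    print(f"beta = {beta:.4f}: N^2 < 0 at z =", z[n2 < 0] / 1e3, "km")
```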
Next, using the relation (<ref>), Eq. (<ref>) reduces to <cit.> ∂ u/∂ t+( u·∇) u=-1/ρ_0∇ p_1+ gβ T_1ẑ, where T_1 denotes the temperature perturbation. Furthermore, we require the following heat equation for the imcompressible fluid in absence of any heat source <cit.>. ∂ T/∂ t+( u·∇)T=χ∇^2 T, where χ is the coefficient of the thermal diffusivity and is equal to the ratio between the therml conductivity κ (W/mK) and the volumetric heat capacity ρ C_p (J/m^3K). Here, C_p is the specific heat capacity (J/kg K) and the mass density ρ is in the unit of kg/m^3. Representing the total temperature as the sum of its equilibrium and perturbed parts, i.e., T=T_0(z)+T_1, and assuming that α≡ dT_0/dz as more or less a constant equilibrium gradient of temperature along the z-axis, i.e., ∇^2T=∇^2(T_0+T_1)=∇^2T_1, from Eq. (<ref>) we obtain <cit.> ∂ T_1/∂ t+( u·∇)T_1=χ∇^2 T_1-α u_z, where u_z is the component of u along the z-axis and α (>0) represents the action of buoyancy force. Equations (<ref>) and (<ref>) with the conditions ∇· u=0,  dρ/dt=0, are the desired set of equations for the evolution of the temperature and density perturbations of stratified incompressible fluids. To elucidate the role of the temperature gradient (vertical), we consider the linear approximation, i.e., we consider the following simple model equations and remove the suffix 1 in the perturbed variables, for simplicity. Separating the perpendicular and vertical (parallel to the gravity) components of Eq. (<ref>), we obtain ∂ u_⊥/∂ t+1/ρ_0∇_⊥ p=0, ∂ u_z/∂ t+1/ρ_0∂ p/∂ z-gβ T=0. Also, the equation ∇· u=0 gives ∇_⊥· u=-∂ u_z/∂ z. Taking the gradient (∇) of Eq. (<ref>), noting that ∇_⊥^2=Δ_⊥=∂^2/∂ x^2+ ∂^2/∂ y^2, and using Eq. (<ref>), we get ∂^2u_z/∂ t∂ z=1/ρ_0∇_⊥ p. Next, we operate ∂Δ_⊥/∂ t on Eq. (<ref>) to get ∂^2/∂ t^2Δ_⊥ u_z+1/ρ_0∂^2/∂ t∂ z∇_⊥ p-gβ∂/∂ tΔ_⊥ T=0. Furthermore, using Eq. (<ref>) and noting that ρ_0=ρ_0(z), from Eq. (<ref>) we have ∂^2/∂ t^2(Δ u_z+1/ρ_0dρ_0/dz∂ u_z/∂ z)-gβ∂/∂ tΔ_⊥ T=0, where the Laplacian operator, Δ=Δ_⊥+∂^2/∂ z^2. Also, operating Eq. (<ref>) with Δ_⊥, we get ∂/∂ tΔ_⊥ T=χΔ_⊥Δ T-αΔ_⊥ u_z. Combining Eqs. (<ref>) and (<ref>) yields ∂^2/∂ t^2(Δ u_z+1/ρ_0dρ_0/dz∂ u_z/∂ z)-gβχΔ_⊥Δ T+gαβΔ_⊥ u_z=0. Using the thermal expansion relation (<ref>), we recast the density conservation equation (<ref>) as (1-β T_0-β T)dρ_0/d t-ρ_0βd/dt(T_0+T)=0. By means of the heat equation (<ref>), Eq. (<ref>) gives, in the linear approximation, the following. (1-β T_0)1/ρ_0dρ_0/d zu_z =βχΔ T. Finally, from Eqs. (<ref>) and (<ref>), we obtain ∂^2/∂ t^2( Δ u_z+1/ρ_0dρ_0/d z∂ u_z/∂ z)+N^2Δ_⊥ u_z=0, where N^2 is the squared Brunt-Väisälä frequency, given by, N^2(z)=g[(β T_0-1)1/ρ_0dρ_0/d z +βdT_0/d z]. Equation (<ref>) represents a differential equation of only one unknown variable u_z with the frequency N^2 being modified by the temperature stratification (proportional to β). In absence of the latter, one recovers the known Brunt-Väisälä frequency <cit.>. Further simplification of Eq. (<ref>) can be made by neglecting the second term in the parentheses, compared to the first one. Thus, the dynamics of internal gravity waves in stratified fluids can be described by the following equation. ∂^2/∂ t^2Δ u_z+N^2Δ_⊥ u_z=0. To elucidate the influence of the thermal expansion parameter β on the stability of perturbations in vertical stratified fluids, from Eq. 
(<ref>) we find that, N^2 becomes negative when the thermal expansion coefficient β satisfies the inequality: β T_0(L_ρ_0^-1+L_T_0^-1)<L_ρ_0^-1, where L_ρ_0^-1≡(1/ρ_0)|dρ_0/dz| and L_T_0^-1≡(1/T_0)|dT_0/dz|, respectively, denote the inverses of the length scales of density and temperature inhomogeneities. Since in the altitudes of troposphere and stratosphere [0<z<50 (km)], dρ_0/dz<0 (cf. Table <ref>), the inequality (<ref>) reduces to β T_0(1/|L_ρ_0|-1/L_T_0)> 1/|L_ρ_0|. From Table <ref>, it is also evident that |L_T_0^-1|<|L_ρ_0^-1|. Thus, from Eq. (<ref>), we get the following approximate condition of instability in vertical stratified fluids. β T_0>1. From Table <ref>, we find that the maximum value of the temperature is at the Earth's surface (T_0≈288.15 K). So, the instability condition [Eq. (<ref>)] holds for a minimum value of β: β_min≈0.0035. The latter well agrees with the observational data (See the text arrow in Fig. <ref>). The dependence of the squared Brunt-Väisälä frequency (N^2) on the thermal expansion coefficient (β) is shown in Fig. <ref>. It is seen that the instability of atmospheric stratification occurs with an increase of the thermal expansion coefficient beyond the critical value (≈0.0035). The Brunt-Väisälä frequency becomes completely negative for β≳0.005. In the latter, it is also noted that the magnitude of N^2 initially increases in the interval 0≲ z≲10^3 (m), and then decreases in 10^3≲ z≲3×10^4 (m). In the rest of the interval, 3×10^4≲ z≲5×10^4 (m), its magnitude again increases. Such behaviors of N^2 may be due to the variation of the relative magnitudes of the length scales corresponding to the fluid density and temperature as the height z increases from z=0 to z=50 km. It is interesting to note that when the value of β is lower than β=0.005, N^2 can be negative, zero, or positive depending on the altitude z. For example, when β=0.003, N^2<0 in 0≲ z≲2×10^3 (m), N^2≈0 at z=3×10^3 (m), and N^2>0 in 3×10^3≲ z≲50×10^3 (m). Also, when β=0.004, N^2<0 in 0≲ z≲9×10^3 (m), N^2≈0 at z=10×10^3 (m), and N^2>0 in 15×10^3≲ z≲30×10^3 (m). Again, N^2≈0 at z=40×10^3 (m), and N^2<0 at z=50×10^3 (m). Physically, when N^2>0, Eq. (<ref>) admits oscillating solutions for the velocity u_z with frequency N, i.e., if a parcel of stratified neutral fluids moves upward and N^2>0, it will oscillate in between the heights where the fluid density of the parcel matches with the surrounding fluids. In this case, the fluid is said to be stable. However, when N^2=0, the parcel, once pushed up, will not move any further. On the other hand, when N^2<0, i.e., the squared Brunt-Väisälä frequency becomes imaginary, the parcel will move up and up until N^2 becomes zero or positive again in the atmosphere. Typically, such a situation leads to convection, and hence the criterion for the stability of stratified fluids in the atmosphere against convection is that N^2>0. § CONCLUSION We have studied the influence of the thermal expansion of the Earth's atmosphere on the stability of vertical stratification of density and temperature perturbations. We have shown that such an influence can lead to instability in stratified incompressible fluids. Modified by the thermal expansion coefficient, the Brunt-Väisälä frequency is obtained, and a critical value of the expansion coefficient for which the instability occurs is revealed. 
To conclude, the instability of vertical stratification reported here could be helpful for the initiation of large-scale instability (which may be larger than the scales of any external force or turbulence phenomena) as well as the generation of large-scale vortices in the atmosphere <cit.> through which particle momentum and energy transfer take place. In the fluid model, we have neglected dissipative effects, such as those associated with fluid-particle collisions and the kinematic viscosity. These effects will contribute to the evolution equation for internal gravity waves, modify their dispersion properties, and may eventually reduce or prevent the instability of stratified fluids reported here. However, the influence of these forces and the effects of the temperature and density gradients on the propagation characteristics of internal gravity waves are beyond the scope of the present work and remain a project for our future study. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
http://arxiv.org/abs/2306.11808v1
20230620180523
Higgs Footprints of Hefty ALPs
[ "Anisha", "Supratim Das Bakshi", "Christoph Englert", "Panagiotis Stylianou" ]
hep-ph
[ "hep-ph", "hep-ex" ]
DESY-23-082 We discuss axion-like particles (ALPs) within the framework of Higgs Effective Field Theory, targeting instances of close alignment of ALP physics with a custodial singlet character of the Higgs boson. We tension constraints arising from new contributions to Higgs boson decays against limits from high-momentum transfer processes that become under increasing control at the LHC. Going beyond leading-order approximations, we highlight the importance of multi-top and multi-Higgs production for the pursuit of searches for physics beyond the Standard Model extensions. [email protected] School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, United Kingdom [email protected] CAFPE and Departamento de Física Teórica y del Cosmos, Universidad de Granada, Campus de Fuentenueva, E–18071 Granada, Spain [email protected] School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, United Kingdom [email protected] Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany Higgs Footprints of Hefty ALPs Panagiotis Stylianou Received XXX; accepted XXX ============================== § INTRODUCTION Searches for new interactions beyond the Standard Model of Particle Physics have, so far, been unsuccessful. This is puzzling as the Standard Model contains a plethora of flaws that are expected to be addressed by a more comprehensive theory of microscopic interactions. A reconciliation of these flaws can have direct phenomenological consequences for physics at or below the weak scale v≃ 246 GeV. This is particularly highlighted by fine-tuning problems related to the Higgs mass or the neutron electric dipole moment, both of which take small values due to cancellations which are not protected by symmetries in the Standard Model (SM). Dynamical solutions to these issues have a long history, leading to new interactions and states around the TeV scale to address Higgs naturalness, or relaxing into and CP-conserving QCD vacuum via a Peccei-Quinn-like mechanism <cit.>. Often such approaches yield an additional light pseudo-Nambu Goldstone field, in the guise of a composite Higgs boson or the axion <cit.>. The search for a wider class of the latter states referred to as axion-like particles (ALPs) bridges different areas of high energy physics. Efforts to detect ALPs across different mass and coupling regimes have shaped the current BSM programme in many experimental realms (see e.g. <cit.> for recent reviews). In particular, at the Large Hadron Collider (LHC), ALP interactions have been discussed in relation to their tell-tale signatures arising from ∼ FF̃ coupling structures <cit.>, top quarks <cit.>, emerging signatures <cit.>, flavour physics <cit.>, electroweak precision constraints <cit.>, Higgs decays <cit.>, and mixing <cit.>. The methods of effective field theory <cit.> naturally embed ALP-related field theories into a broader framework of a more modern perspective on renormalisability <cit.>. Experimental searches for these states have been carried out using a variety of techniques, including collider searches, precision measurements of atomic and nuclear transitions (e.g. ACME <cit.> and nEDM <cit.>), and searches from astrophysical events <cit.>, over a wide range of ALP mass <cit.>. In particular for the ALP mass range M_𝒜 ∈ [6, 100] GeV, the most stringent exclusion limits for ALPs are derived from ultra-peripheral lead nuclei collision data <cit.>. 
These limits are from exclusive di-photon searches, and define SM–ALPs interactions via electromagnetic interactions (∼ F F). [When the ALP mass M_𝒜 < 2 m_e, only the di-photon channel is the allowed decay process via SM particles. In same manner, with greater ALP mass, the decay modes to other leptons, quarks (jets), gauge bosons open up as well.] The ATLAS limits <cit.> on the ALP–photon cross sections when put in terms of ALP–photon couplings is found in the range g_𝒜γ∈ [0.05,1] TeV^-1 <cit.>. However, in general, the ALP–SM interactions can be defined via gauge bosons, fermions, and scalars; although, its decays will depend on its mass. The limits on ALP couplings to the SM fields (except photons) are less stringent. The exotic decays of the SM Higgs and Z-bosons are promising channels for ALP searches (particularly benefiting from the high-luminosity run of the LHC), e.g., with the decay modes h → Z 𝒜 <cit.>. It is the latter perspective that we adopt in this note to focus on ALP interactions with the Higgs boson, also beyond leading order. Adopting the methodology of Higgs Effective Theory (HEFT), we can isolate particular interactions of the ALP state and trace their importance (and thus the potential for constraints) to representative collider processes that navigate between the low energy precision and the large momentum transfer regions accessible at the LHC. If the interactions of ALPs and Higgs particles is predominantly related to a custodial singlet realisation of the Higgs boson, these areas might well be the first phenomenological environments where BSM could be unveiled as pointed out in, e.g., Ref. <cit.>. In parallel, our results demonstrate the further importance of multi-top and multi-Higgs final states as promising candidates for the discovery of new physics. With the LHC experiments closing in on both Higgs pair <cit.> and four-top production in the SM <cit.>, such searches becoming increasingly interesting for our better understanding of the BSM landscape. This work is organised as follows: In Sec. <ref>, we review the ALP-HEFT framework that we use in this study to make this work self-contained (a comprehensive discussion is presented in <cit.>). In Sec. <ref>, we focus on the decay phenomenology of the Higgs boson in the presence of ALP interactions before we turn to discuss a priori sensitive processes that can provide additional constraints due to their multi-scale nature and kinematic coverage. Specifically, in Sec. <ref> we analyse ALP corrections to Higgs propagation as accessible in four-top final states <cit.>, which informs corrections to multi-Higgs production. We conclude in Sec. <ref>. § ALP CHIRAL HEFT LAGRANGIAN The leading order ALP interactions with SM fields in the framework of chiral (non-linear) electroweak theory are written as ℒ_LO=ℒ^HEFT_LO+ ℒ^ALP_LO . ℒ^HEFT_LO is the chiral dimension-2 HEFT Lagrangian <cit.>. In this framework, the SM Higgs (H) is a singlet field and the Goldstone bosons π^a are parametrised non-linearly using the matrix U U(π^a) = exp(i π^aτ^a/v) , with τ^a as the Pauli matrices with a= 1,2,3 and v≃ 246 GeV. The U matrix transforms under L∈ SU(2)_L, U(1)_Y⊂ SU(2)_R ∋ R as U→ L U R^† and is expanded as U(π^a) = 1_2 + i π^a/vτ^a - 2G^+G^- + G^0 G^0/2 v^21_2 + … , where G^± = ( π^2± i π^1 )/√(2) and G^0 = -π^3. The dynamics of the gauge bosons W^a_μ and B_μ are determined by the usual SU(2)_L× U(1)_Y gauge symmetry. 
Weak gauging of SU(2)_L× U(1)_Y is achieved through the standard covariant derivative D_μU = ∂_μ U + i g_W (W^a_μτ^a /2) U -i g' U B_μτ^3/2 . The gauge fields in the physical (mass and electromagnetic U(1)_em) basis are related to the gauge basis via the Weinberg angle, s_W=sinθ_W, c_W=cosθ_W: W^±_μ = 1/√(2)(W^1_μ∓ W^2_μ) , [ Z_μ; A_μ ] = [ c_W s_W; -s_W c_W ][ W^3_μ; B_μ ] . The leading order HEFT Lagrangian relevant for our discussion is then given by ℒ^HEFT_LO = - 1/4 W^a_μν W^aμν - 1/4 B_μν B^μν + L_ferm+ ℒ_Yuk + v^2/4ℱ_H Tr[D_μ U^† D^μ U] + 1/2∂_μ H ∂^μ H - V(H) . The interactions of the singlet Higgs field with gauge and Goldstone bosons are parametrised by the flare function ℱ_H given as ℱ_H = (1+ 2(1+ζ_1)H/v + (1+ζ_2) (H/v)^2 + ... ) . The couplings ζ_i denote the independent parameters that determine the leading-order interactions of the Higgs boson with the gauge fields. L_ferm parametrises the fermion-gauge boson interactions, which we take SM-like in the following. V(H) is the Higgs potential, which we relate to the SM expectation V(H)= 1/2 M_H^2 H^2 + κ_3 H^3+ κ_4 H^4 , with κ_3≃ 32 GeV, κ_4≃ 0.03 in the SM. In this work, we consider ALP interactions that particularly probe the singlet character of the Higgs boson as parametrised by the HEFT Lagrangian. The interactions are given by ℒ^ALP_LO = 1/2∂_μ𝒜∂^μ𝒜 -1/2M_𝒜^2𝒜^2 + a_2D(i v^2 Tr[ U τ^3 U^† 𝒱_μ] ∂^μ𝒜/f_𝒜 ℱ_2D) with ℱ_2D= (1+ 2ζ_12DH/v + ζ_22D(H/v)^2 + ... ) , and 𝒱_μ= (D_μU)U^† . In Eq. (<ref>), f_𝒜 denotes the scale linked with the ALP interactions. The interactions specified by Eq. (<ref>) are the leading order chiral interactions of the ALP field with SM states. These couplings specifically probe the custodial singlet nature of the Higgs boson <cit.>. Therefore, the phenomenology of these interactions provides relevant insights into the mechanism of electroweak symmetry breaking and its relation to axion-like states. The radiative imprints of these interactions on SM correlations are then captured by the chiral dimension-4 interactions contributing to ℒ^HEFT_LO when HEFT parameters coincide with the SM expectation.[All interactions detailed above are implemented using the FeynRules package <cit.>.] The aim of our analysis is to clarify the phenomenological reach of the couplings involved in Eq. (<ref>) from two different angles. Firstly, these interactions are clear indicators of a singlet character of the Higgs boson in HEFT. Secondly, the interactions ∼ζ_12D will introduce modifications to the Higgs boson propagation and Higgs decay in HEFT, while ζ_22D will imply modifications to the Higgs pair production rate. Although the ALP might be too light to be directly accessible at collider experiments such as the LHC, its virtual imprint through specific predictions for the correlations between four-top and Higgs pair production could reveal its presence. We will turn to the expected constraints in the next section. Throughout, we will identify the HEFT parameters with their corresponding tree-level SM limit except for the deviations introduced by the ALP, which we also detail below. We will focus on the interactions that are generated at order a_2Dζ_12D etc.; fits against the ALP-less HEFT (or the SM as a particular HEFT parameter choice) should be sensitive to these contributions when data is consistent with the latter expectation. To reflect this we will therefore also assume that HEFT operators coincide with their SM expectation.
Specifically this means that we will choose vanishing HEFT parameters arising at chiral dimension-4. Departures from the SM correlations are then directly related to the (radiative) presence of the ALP. § DIRECT CONSTRAINTS FROM HIGGS DECAYS The interactions of an ALP with a Higgs boson via the a_2Dζ_12D coupling of Eq. (<ref>) are tree-level mediated. The exotic decay of the Higgs boson via H →𝒜 Z proceeds at leading order through a vertex whose Feynman rule reads - 2i e/(c_W s_W) (v / f_𝒜) a_2D ζ_12D q^μ(H) , with q^μ(H) denoting the four-momentum carried by the Higgs leg. When kinematically accessible, the decay width of the Higgs boson receives a non-SM contribution Γ( H →𝒜 Z) = v^2 a_2D^2 ζ_12D^2/(274 s_W^2 c_W^2 f_𝒜^2 M_H^3 M_Z^2) (M_𝒜^4-2 M_𝒜^2 (M_H^2+M_Z^2)+(M_H^2-M_Z^2)^2)^3/2. Assuming this two-body process to be the dominant BSM decay involving the ALP, the SM Higgs boson signal strengths get uniformly modified, μ_SM,A = Γ(H)_SM/(Γ( H →𝒜 Z) + Γ(H)_SM) , with Γ(H)_SM≃ 4 MeV as the total Higgs boson decay width in the SM <cit.>. To constrain this BSM decay, we use the well constrained and hence representative signal strength for H →γγ. This has been measured to be μ_γγ=1.04^+0.1_-0.09 <cit.> for the representative ATLAS Run 2 dataset of 139 fb^-1. For the on-shell decay of H→𝒜 Z, the maximum value of the ALP mass allowed kinematically is ≃ 34  GeV with M_H = 125  GeV and M_Z = 91.18  GeV. For heavier ALP masses, the branching ratio quickly dies off due to the offshellness of the involved Z boson. The allowed parameter space in the a_2D/f_𝒜 vs M_𝒜 plane is shown in Fig. <ref> for three different values of ζ_12D. The above 95% limit translates into the upper bound Γ( H →𝒜 Z) < 0.65  MeV using Eq. (<ref>) for ζ_12D=1. The above bound on Γ( H →𝒜 Z) is reduced by half with the HL-LHC projections for H →γγ at 3 ab^-1 <cit.>, i.e. we obtain Γ( H →𝒜 Z) < 0.32  MeV for ζ_12D=1. § HIGGS SIGNALS OF VIRTUAL ALPS §.§.§ Propagation vs. on-shell properties: Four-top production BSM corrections to the Higgs self energy Σ_H can give rise to an oblique correction Ĥ = - (M_H^2/2)Σ_H^'' (M_H^2) , analogously to the Ŵ, Ŷ parameters in the gauge sector, e.g. <cit.>. Such a correction leads to a Higgs propagator modification <cit.> -iΔ_H(q^2) = 1/(q^2 - M_H^2)( 1 + Ĥ(1-q^2/M_H^2) ) , indicating a departure for large momentum transfers at unit pole residue. Measurements of this parameter have by now been established by ATLAS and CMS in Refs. <cit.>. The expected upper limit is Ĥ≤ 0.12 , at 95% CL from the recent four-top production results of Ref. <cit.>. We can re-interpret this in the framework that we consider. In parallel, we can employ an extrapolation of four-top final states to estimate sensitivity improvements that should become available in Ĥ-specific analyses at the high-luminosity LHC (ATLAS currently observe a small tension in their Ĥ fit). Explicit calculation in general R_ξ gauge of the ALP insertion of Eq. (<ref>) into the Higgs two-point function yields the ξ-independent result (see also remarks in <cit.>) Γ(H(q)H(q))=a_2D^2 ζ_12D^2/(4 π^2 f_𝒜^2)(4 M_𝒜^4 - 3 M_𝒜^2 q^2 + ( q^2 - 3 M_Z^2 ) q^2) Δ_UV + … , with MS factor Δ_UV=Γ(1+ϵ)/ϵ (4πμ^2/M_H^2)^ϵ in dimensional regularisation D=4-2ϵ with `t Hooft mass μ. The ellipses in Eq. (<ref>) denote finite terms for ϵ→ 0 (see below). In the following we will adopt the on-shell scheme for field and mass renormalisation (cf. Eq. (<ref>)), and the MS scheme for HEFT parameters (see also <cit.>). On the one hand, part of the divergence of Eq.
(<ref>) are then cancelled by the (divergent, div.) counterterms related to the Higgs wave function and mass renormalisation δZ_H|_div. = 3 a_2D^2 ζ_12D^2 (M_𝒜^2 + M_Z^2) /4 π^2 f_𝒜^2 , δM_H^2 |_div. = a_2D^2 ζ_12D^2 (4 M_𝒜^4 - 3 M_𝒜^2 M_H^2 - 3 M_H^2 M_Z^2) /4π^2 f_𝒜^2 . On the other hand, the appearance of a q^4 contribution signifies the sourcing of the chiral dimension-4 operator 𝒪_□□ of the HEFT Lagrangian 𝒪_□□= a_□□□ H □ H/v^2 . This operator is renormalised by the ALP interactions via δ a_□□= - a_2D^2 ζ_12D^2 v^2/8 π^2 f_𝒜^2 Δ_UV . Together, the renormalised Higgs two-point function then links to the Ĥ parameter as Ĥ = - a_2D^2 M_H^2 ζ_12D^2/8 π^2 f_A^2( 2 B_0(M_H^2, M_A^2, M_Z^2)|_fin. - 4(M_A^2 - M_H^2 + M_Z^2)B'_0(M_H^2, M_A^2, M_Z^2) . .+ [M_A^4 - 2 M_A^2 (M_H^2 + M_Z^2) + (M_H^2 - M_Z^2)^2] B”_0(M_H^2, M_A^2, M_Z^2)) , where `fin.' denotes the UV finite part of the Passarino-Veltman B_0 function after subtracting Eq. (<ref>) and derivatives are taken with respect to the first argument of the B_0 function (an explicit representation can be found in Ref. <cit.>). Ĥ vanishes in the decoupling limit f_A>M_A≫ M_H. Equation (<ref>) shows that propagator corrections that can be attributed to Ĥ probe similar couplings as the Higgs decay of Eq. (<ref>), however, in a momentum transfer-enhanced way, at the price of a loop suppression. This way the energy coverage of the LHC that becomes under increasing statistical control provides additional sensitivity beyond the fixed scale Higgs decay. Any enhanced sensitivity to the on vs. off-shell phenomenology that can be gained from the combination of the processes discussed so far, can then break the degeneracies between the different HEFT coefficients in Eq. (<ref>). To obtain an extrapolation estimate from the current constraints on Ĥ, we implement the modifications from Ĥ in MadGraph5_aMC@NLO <cit.> in order to estimate the changes caused in the four-top cross section from different contributions to the Higgs self-energy, and extrapolate the result of Eq. (<ref>). Assuming a significance S(Ĥ = 0.12) / √(B) = 2 from the constraint of Ref. <cit.> at 140/fb, and then subsequently rescaling the results to 3/ab, we obtain the approximate significance at HL-LHC. While using the more recent results yields improved bounds compared to earlier projections of Ref. <cit.> that include systematics (due to improvements in the analysis procedure utilising ML techniques), our projections remain conservative compared to the previously estimated significance with only statistical uncertainties, see Fig. <ref>. In Fig. <ref>, we also see that if M_A is light, it will freely propagate in the 2 point function thus imparting the characteristic q^4 dependence probed by Ĥ. This also means that this behaviour is essentially independent of the light ALP mass scale. Turning to heavier states, this kinematic dependence is not sourced as efficiently anymore, leading to a quick decoupling from the two-point Higgs function and reduced sensitivity and larger theoretical uncertainty. We will return to the relevance of Ĥ for the discussed scenario after discussing the modifications to Higgs pair production in the next section. §.§.§ Higher terms of the ALP flare function: Higgs pair production Corrections to Higgs pair production under the same assumptions as in the previous section are contained in propagator corrections and corrections to trilinear Higgs coupling. 
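Before turning to the trilinear structure, the size of the propagator effect just mentioned can be illustrated with a short numerical sketch. It simply evaluates the rescaling factor 1 + Ĥ(1 - q²/M_H²) of the off-shell Higgs propagator quoted above at the expected 95% CL limit Ĥ = 0.12; the chosen momentum-transfer values are arbitrary reference points and not outputs of the event simulation.

```python
M_H   = 125.0   # GeV
H_hat = 0.12    # expected 95% CL limit from the four-top analysis quoted above

def propagator_rescaling(q, H_hat=H_hat, M_H=M_H):
    """Deformation of the off-shell Higgs propagator at unit pole residue:
    1 + H_hat * (1 - q^2 / M_H^2)."""
    return 1.0 + H_hat * (1.0 - q**2 / M_H**2)

for q in (125.0, 300.0, 600.0, 1000.0):   # representative momentum transfers in GeV
    print(f"q = {q:6.0f} GeV: factor = {propagator_rescaling(q):+.2f}")

# On-shell (q = M_H) the factor is exactly 1, so Higgs signal strengths are untouched;
# the deviation grows like q^2, which is why high-momentum-transfer final states such as
# four tops add sensitivity beyond what fixed-scale Higgs decays can provide.
```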
As with the chiral dimension-4 operator that leads to new contributions to the Higgs two-point function, there are additional operators that modify the Higgs trilinear interactions. The amputated off-shell three-point function receives contributions (see also <cit.>) v^3Γ_1(H(q)H(k_1)H(k_2)) = a_χ 1 (q^4 + k_1^4 + k_2^4) + 2 a_χ 2 (q^2 k_1^2 + k_1^2 k_2^2 + q^2 k_2^2) + a_χ 3v^2 (q^2+ k_1^2 + k_2^2) , which are renormalised in the MS scheme according to δ a_χ1 = a_2D^2 ζ_12D v^2/8 π^2 f_𝒜^2 (3(1+ζ_1) ζ_12D + 2ζ_22D) Δ_UV , δ a_χ2 = 3 a_2D^2 ζ_12D^2 v^2 /8 π^2 f_𝒜^2 (1+ζ_1) Δ_UV , δ a_χ3 = 3 a_2D^2 ζ_12D v^2/4 π^2 f_𝒜^2 [ (M_A^2+M_Z^2)ζ_22D - 3(M_A^2+2M_Z^2) (1+ζ_1)ζ_12D]Δ_UV . The remaining renormalisation of the chiral dimension-2 term follows from Eq. (<ref>) δΓ_2(H(q)H(k_1)H(k_2))|_div = - 9a_2D^2 ζ_12D^2 /8 π^2 f_𝒜^2 κ_3 (M_A^2+M_Z^2) - a_2D^2 ζ_12D M_A^4 /2 π^2 f_𝒜^2 v ( 2 (1+ζ_1) ζ_12D - ζ_22D) . ATLAS (CMS) have set highly competitive expected 95% confidence level cross section limits of σ/σ_SM<3.9 (5.2) <cit.> in the bb̅ττ channel <cit.> alone. Slightly reduced sensitivity <cit.> can be achieved in the 4b and 2b2γ modes <cit.>. ATLAS have combined these channels to obtain a combined exclusion of 3.1 σ_SM <cit.> with the currently available data and forecast a sensitivity of σ/σ_SM≳ 1.1 at the HL-LHC <cit.>. We use the two latter results to gain a qualitative sensitivity reach of Higgs pair production in the considered scenario. In Fig. <ref>, we show representative invariant Higgs pair mass distributions for 13 TeV LHC collisions, which demonstrate the potential of multi-Higgs final states' sensitivity to the momentum-enhanced new physics contributions characteristic of the ALP.[We have implemented these changes into an in-house Monte Carlo event generator based on Vbfnlo <cit.> employing FeynArts, FormCalc, and LoopTools <cit.> and PackageX <cit.> for numerical and analytical cross checks. Throughout this work we chose a renormalisation scale of μ=2M_H.] The behaviour exhibited by the invariant mass distribution is not sensitive to the mass of the ALP as long as the latter is not close to the ≃ 2M_H threshold that determines the gg→ HH phenomenology. In instances when hefty ALPs propagate freely, their distinctive momentum enhancements will sculpt the Higgs-boson distributions. In parallel, non-linear effects will be important away from the SM reference point as shown in Fig. <ref>. This shows that the constraints that can be obtained in the di-Higgs channel are relatively strongly coupled, which is motivation for us to directly include "squared" BSM effects in our analysis in addition to interference effects. We combine the three representative analyses in a global χ^2 to obtain sensitivity estimates. In the case when the ALP is light, there are significant modifications to Higgs physics, also at large momentum transfers, see also Fig. <ref>. Of course, these large contributions in particular to the Higgs pair rate are tamed by decreasing signal strengths into SM-like states, which quickly result in tension with experimental observations for larger couplings. As Higgs pair production observations need to rely on relatively clean and high branching ratio final states, the prospects of Higgs pair production (and four top) analyses to provide additional sensitivity are relatively low. This is highlighted already in the combination of the Higgs decay constraints with these processes in Fig. <ref>.
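The Higgs decay constraint entering this combination can also be reproduced in a few lines. The following is a rough numerical cross-check, assuming that the 95% lower limit on μ_γγ may be approximated by the central value minus twice the quoted lower uncertainty and taking Γ(H)_SM ≃ 4 MeV as above; it is not a substitute for the full signal-strength treatment.

```python
# mu = Gamma_SM / (Gamma(H -> A Z) + Gamma_SM), solved for the exotic width
gamma_sm = 4.0                      # MeV, total SM Higgs width used in the text
mu_central, sigma_low = 1.04, 0.09  # ATLAS Run 2 H -> gamma gamma signal strength

mu_95_low = mu_central - 2.0 * sigma_low            # crude 95% lower limit (assumption)
gamma_exotic_max = gamma_sm * (1.0 / mu_95_low - 1.0)
print(f"Gamma(H -> A Z) < {gamma_exotic_max:.2f} MeV")   # ~ 0.65 MeV, as quoted earlier

# Halving the allowed exotic width gives a rough proxy for the HL-LHC projection,
# close to the 0.32 MeV quoted above.
print(f"HL-LHC proxy:    < {0.5 * gamma_exotic_max:.2f} MeV")
```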
For parameter choices for which the ALP is above the Higgs decay threshold, this picture changes. Multi-Higgs constraints remain relatively insensitive to the ALP mass scale as long as these states are away from the 2 M_H threshold. The cross section enhancement then translates directly into an enhancement of the observable Higgs boson pair production rate. In turn, constraints on the higher order terms in the ALP flare function become possible. It is important to note that these are independent of the couplings (to first order) that shape the ALP decay phenomenology. As the large enhancements result from the tails of distributions, there is a question of validity. Nonetheless, the momentum dependence introduced by Eq. (<ref>) leads to partial wave unitarity violation as, e.g., HZ scattering proceeds in a momentum-enhanced way. A numerical investigation shows that for O(1) couplings in Eq. (<ref>), requiring zeroth partial wave unitarity to be conserved up to scales ∼ 1.5 TeV sets a lower bound of f_a≳ 300 GeV for unsuppressed propagation with M_A=1 GeV. These constraints are driven by the longitudinal Z polarisations; constraints from transverse modes are comparably weaker. This means that the entire region that is shown in Fig. <ref> is perturbative at tree-level. In parallel, the HL-LHC is unlikely to probe Higgs pairs beyond invariant masses M_HH>600 GeV in the SM (for which the cross section drops to 10% of the inclusive rate). Most sensitivity in HL-LHC searches results from the threshold region. Therefore, the sensitivity expected from the HL extrapolation of <cit.> will probe Eq. (<ref>) in a perturbatively meaningful regime. The combined constraints are largely driven by the Higgs pair constraints, see Fig. <ref>. However, it is worth highlighting that the statistics-only extrapolation does not include changes to the four top search methodology. Improvements of the latter can be expected with increasing luminosity, and the final verdict from four top production might indeed be much more optimistic than our √(luminosity) extrapolation might suggest. § SUMMARY AND CONCLUSIONS Searches for new light propagating degrees of freedom such as axion-like particles are cornerstones of the BSM programme in particle physics as explored at, e.g., the Large Hadron Collider. The Higgs boson, since a global picture of its interactions is still incomplete, provides a motivated avenue for the potential discovery of new physics in the near future as the LHC experiments gain increasing phenomenological sensitivity in rare processes that could be tell-tale signs of Higgs-related BSM physics. We take recent experimental developments in multi-Higgs and multi-top analyses as motivation to analyse effective Higgs-philic ALP interactions, also beyond leading order. This enables us to tension constraints from different areas of precision Higgs phenomenology, combining Higgs decay modifications with large-momentum transfer processes that are becoming increasingly accessible at the LHC. For light states and sizeable HEFT-like couplings, a large part of the sensitivity is contained in Higgs signal strength measurements (see also <cit.>), which, however, only provide limited insights into the Higgs-ALP interactions. Higher terms of the Higgs-ALP flare function still have the phenomenological potential to sizeably modify Higgs pair final states at a level that will be observable at the LHC in the near future. Our findings therefore also highlight further the relevance of multi-top and multi-Higgs final states for the quest for new physics.
We thank Dave Sutherland for insightful discussions. C.E. thanks the high-energy physics group (FTAE) at the University of Granada for their hospitality during early stages of this work. A. is supported by the Leverhulme Trust under grant RPG-2021-031. S.D.B is supported by SRA (Spain) under Grant No. PID2019-106087GB-C21 (10.13039/501100011033), and PID2021-128396NB-100/AEI/10.13039/501100011033; by the Junta de Andalucía (Spain) under Grants No. FQM-101, A-FQM-467-UGR18, and P18-FR-4314 (FEDER). C.E. is supported by the STFC under grant ST/T000945/1, the Leverhulme Trust under grant RPG-2021-031, and the Institute for Particle Physics Phenomenology Associateship Scheme. P.S. is supported by the Deutsche Forschungsgemeinschaft under Germany’s Excellence strategy EXC2121 “Quantum Universe” - 390833306. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 491245950.
http://arxiv.org/abs/2306.02829v1
20230605122431
Dynamic Calculations of Magnetic Field and Implications on Spin Polarization and Spin Alignment in Heavy Ion Collisions
[ "Hui Li", "Xiao-Liang Xia", "Xu-Guang Huang", "Huan Zhong Huang" ]
nucl-th
[ "nucl-th" ]
[email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China [email protected] Department of Physics and Center for Field Theory and Particle Physics, Fudan University, Shanghai 200433, China [email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China Department of Physics and Center for Field Theory and Particle Physics, Fudan University, Shanghai 200433, China Shanghai Research Center for Theoretical Nuclear Physics, NSFC and Fudan University, Shanghai 200438, China [email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA Magnetic field plays a crucial role in various novel phenomena in heavy-ion collisions. We solve the Maxwell equations numerically in a medium with time-dependent electric conductivity by using the Finite-Difference Time-Domain (FDTD) algorithm. We investigate the time evolution of magnetic fields in two scenarios with different electric conductivities at collision energies ranging from = 7.7 to 200 GeV. Our results suggest that the magnetic field may not persist long enough to induce a significant splitting between the global spin polarizations of Λ and Λ̅ at freeze-out stage. However, our results do not rule out the possibility of the magnetic field influencing the spin (anti-)alignment of vector mesons. Dynamic Calculations of Magnetic Field and Implications on Spin Polarization and Spin Alignment in Heavy Ion Collisions Huan Zhong Huang Received 21 February, 2023; accepted 5 June, 2023 ======================================================================================================================= § INTRODUCTION In non-central relativistic heavy-ion collisions, two positively charged nuclei collide with non-zero impact parameters, resulting in the generation of a large magnetic field. This magnetic field can reach 10^18 Gauss in Au + Au collisions at =200 GeV at RHIC and 10^19 Gauss in Pb + Pb collisions at =5020 GeV at LHC <cit.>. The effect of this strong magnetic field on the Quark-Gluon Plasma (QGP) has attracted much attention due to its potential impacts on many novel phenomena, such as the chiral magnetic effect <cit.>, the spin polarization of hyperons <cit.>, the spin alignment of vector mesons <cit.>, the charge-dependent directed flow <cit.>, and the Breit-Wheeler process of dilepton production <cit.> in heavy-ion collisions. When making theoretical predictions about the aforementioned effects, a crucial question to be addressed is how the magnetic field evolves over time. In particular, it is important to determine whether the lifetime of the magnetic field is sufficiently long to maintain significant field strength leading to observable effects. In general, simulations of the magnetic field evolution can be carried out using the following steps. Before the collision, the charge density of the two colliding nuclei can be initialized by utilizing the Wood-Saxon distribution or by sampling the charge position in the nucleus using the Monte-Carlo Glauber model <cit.>. After the collision, the two charged nuclei pass through each other like two instantaneous currents. Currently, several approaches exist to simulate the collision process. 
The simplest method is to assume that the two nuclei pass through each other transparently or to incorporate the charge stopping effect using empirical formulae <cit.>. A more sophisticated approach involves simulating the entire collision process through transport models <cit.>. Once the motion of electric charge is determined, the magnetic field can be calculated using analytical formulae. These methods have been widely employed in previous studies to investigate the evolution of magnetic fields in heavy-ion collisions <cit.>. Previous simulations have shown that the strong magnetic field produced by the colliding nuclei rapidly decays with time in vacuum <cit.>. The lifetime of the magnetic field is primarily determined by fast-moving spectators, and the strong magnetic field only exists during the early stage of the collision. However, the time evolution of the magnetic field can be significantly modified when taking into account the response of the QGP, which is a charge-conducting medium. In this case, when the magnetic field begins to decrease, the induced Faraday currents in the QGP considerably slow down the damping of the magnetic field. Analytical formulae have demonstrated that the damping of the magnetic field in a constant conductive medium can be significantly delayed <cit.>. However, those analytical formulae only apply to the case of a constant conductivity, which is unrealistic because the conductivity only exists after the collision and the value varies as the QGP medium expands. Therefore, it is essential to numerically calculate the magnetic field. Numerical results can overcome the limitations of analytical calculations and provide unambiguous solutions for time-dependent conductive medium. As a result, numerical results can serve as a more accurate reference for final state observations that are sensitive to the evolution of the magnetic field. It is worth noting that some studies have also simulated the magnetic field by numerically solving the Maxwell equations with an electric conductivity <cit.> and by combining the magnetic field with the electromagnetic response of the QGP medium <cit.>. This paper presents a numerical study of the time evolution of magnetic fields with time-dependent electric conductivities at = 7.7–200 GeV. To solve the Maxwell equations, we utilize the Finite-Difference Time-Domain (FDTD) algorithm <cit.>. The paper is organized as follows: Sec. <ref> introduces the analytical formulae. Sec. <ref> describes the numerical model setup of the charge density, charge current, and the electric conductivity. Sec. <ref> describes the numerical method. Sec. <ref> presents the results and discusses the impact on the spin polarization and the spin alignment. Finally, Sec. <ref> concludes the results. § LIMITATION OF ANALYTICAL FORMULA We consider the electromagnetic field which is generated by external current of two moving nuclei and evolves in a conductive medium created in heavy-ion collisions. The electromagnetic field is governed by Maxwell equations: ∇· = ρ, ∇· = 0, ∇× = -∂_t , ∇× = + σ + ∂_t , where ρ and are the charge density and the charge current, and σ is the electric conductivity of the medium. For a point charge q moving with a constant velocity $̌,ρandare: ρ(t,) = qδ^(3)[-_q(t)], (t,) = qδ̌^(3)[-_q(t)], whereis the position of the field point and_q(t)is the position of the point charge at timet. 
If the conductivityσis a constant, the magnetic field has been rigorously derived as follows <cit.>: (t,) =γ×̌/Δ^3/2(1+γσ/2||̌√(Δ))e^A, whereγ=1/√(1-^̌2)is Lorentz contraction factor,≡-_q(t)is the position difference between the field point and the point charge at timet,Δ≡R^2+(γ·̌)^2, andA≡-γσ(γ·̌+||̌√(Δ))/2. Ifσis set to zero, the above formula can recover to the electromagnetic field in vacuum which can be expressed by the Lienard-Wiechert potential: (t,) =γ×̌/[R^2+(γ·̌)^2]^3/2. Because the Maxwell equations (<ref>–<ref>) satisfy the principle of superposition, the formulae (<ref>) and (<ref>) can also be applied to charge distributions rather than just a point charge. Therefore, the formulae have been widely used in the literature <cit.> to calculate the magnetic field generated by nuclei in heavy-ion collisions. However, Eq. (<ref>) is valid only if 1) the point charge moves with a constant velocity, and 2) the conductivityσis constant (fort∈[-∞,∞]). Unfortunately, neither of these conditions is realistic in heavy-ion collisions. First, when the collision occurs, charged particles slow down, and the velocities keep changing during the subsequent cascade scattering. Second, the QGP is produced after the collision, which means that the conductivityσis non-zero only aftert = 0(the time when the collision happens) and the value ofσvaries with time. For these reasons, it is important to develop a numerical method which can solve the Maxwell equations under more complicate and more realistic conditions ofρ,, andσ. In this paper, we focus on studying the influence of time-dependentσon the evolution of magnetic field. § MODEL SETUP §.§ Charge density and current In heavy-ion collisions, the external electric current arises from the contribution of protons in the fast moving nuclei. In this case, we consider two nuclei, which are moving along+zand-zaxis with velocityv_z, and their projections on thex-yplane are centered at(x=±b/2, y=0), respectively, withbbeing the impact parameter. In the rest frame of a nucleus, the charge distribution can be described by the Wood-Saxon distribution: f(r) = N_0/1+exp[(r-R)/a], whereRis the nuclear radius,ais the surface thickness, andN_0is a normalization factor determined by4π∫f(r)r^2dr = Ze. Take the gold nucleus as an example, we haveZ=79,R=6.38fm,a=0.535fm, thereforeN_0≈0.0679 e/fm^3. Then, it is straightforward to derive the charge density and current of the two moving nuclei by a Lorentz boost from Eq. (<ref>), which leads to ρ^±(t,x,y,z) = γ f(√((x∓ b/2)^2+y^2+γ^2(z∓ v_zt)^2)), j_x^±(t,x,y,z) = 0, j_y^±(t,x,y,z) = 0, j_z^±(t,x,y,z) = γ v_z f(√((x∓ b/2)^2+y^2+γ^2(z∓ v_zt)^2)), where the±sign overρandjon the left side indicates the direction of nucleus' motion alongzaxis, the velocityv_z = √(γ^2 - 1) / γ, withγ= / (2m_N)andm_N=938MeV. The total charge density and current are given as follows: ρ(t,x,y,z) = ρ^+(t,x,y,z) + ρ^-(t,x,y,z), (t,x,y,z) = ^+(t,x,y,z) + ^-(t,x,y,z). Eqs. (<ref>) and (<ref>) can describe the charge and current distributions before the collision exactly when the two nuclei are moving at a constant velocity. After the collision, the two nuclei are “wounded”, and some charged particles are stopped to collide with each other. This causes dynamic changes in the charge and current distributions. However, the main goal of this paper is to investigate how the time behavior of the magnetic field is influenced by the time-dependentσ. 
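As a concrete illustration of this source model, the sketch below evaluates the Wood-Saxon normalisation and the boosted charge density and current of Eqs. (<ref>) and (<ref>) for a gold nucleus. The parameter values follow those quoted above; the collision energy, impact parameter, and evaluation point are example inputs chosen for illustration, and the Lorentz factor is taken as the collision energy per nucleon pair divided by 2m_N, as in the text.

```python
import numpy as np
from scipy.integrate import quad

# Gold nucleus Wood-Saxon parameters quoted in the text (lengths in fm, charge in units of e)
Z, R, a = 79, 6.38, 0.535

def woods_saxon_norm():
    # N_0 fixed by 4*pi * int f(r) r^2 dr = Z e
    integral, _ = quad(lambda r: r**2 / (1.0 + np.exp((r - R) / a)), 0.0, 50.0)
    return Z / (4.0 * np.pi * integral)

N0 = woods_saxon_norm()
print(f"N0 ~ {N0:.4f} e/fm^3")      # ~ 0.0679, as quoted above

def f(r):
    return N0 / (1.0 + np.exp((r - R) / a))

def rho_j_plus(t, x, y, z, snn=200.0, b=7.0, m_N=0.938):
    """Charge density and j_z of the +z-moving nucleus; energies in GeV, lengths in fm, c = 1."""
    gamma = snn / (2.0 * m_N)
    v_z = np.sqrt(gamma**2 - 1.0) / gamma
    arg = np.sqrt((x - b / 2.0)**2 + y**2 + gamma**2 * (z - v_z * t)**2)
    rho = gamma * f(arg)
    return rho, v_z * rho

# Density and current at the centre of the +z-moving nucleus at t = 0 (example point)
print(rho_j_plus(0.0, 3.5, 0.0, 0.0))
```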
As a simplification, we currently assume that the two nuclei pass through each other and continue moving with their original velocity, so the charge and current distributions in Eqs. (<ref>) and (<ref>) are unchanged after the collision. This allows us to compare our numerical results with the analytical results obtained by Eq. (<ref>) under the same conditions ofρand, so that we can focus on studying the influence of the time-dependentσ. §.§ Electric conductivity Generally, Eq. (<ref>) is not a realistic description of the electromagnetic response of QGP matter because it assumes a constant conductivity. In reality, the QGP matter exists only after the collision, and the conductivity is time-dependent during the expansion of the system. To provide a more realistic description of the evolution of the magnetic field, it is necessary to consider a time-dependent electric conductivity. In this study, we consider two scenarios for the electric conductivity. In the first scenario, the conductivity is absent before the collision, and it appears to be constant after the collision. Thus, we can introduce aθ(t)function to describe it, σ = σ_0 θ(t). In this equation, if the constant conductivityσ_0were not multiplied by theθ(t)function, the formula (<ref>) would be valid for calculating the magnetic field. However, as we will show in Sec. <ref>, even with such a minor modification on the electric conductivity, the time behavior of the magnetic field becomes very different. In the second scenario, we consider the electric conductivity to be absent before the collision, and after the collision the electric conductivity depends on time via σ = σ_0 θ(t)/(1 + t / t_0)^1/3. The denominator in this equation accounts that the conductivity decreases as the QGP medium expands <cit.>. Thus, this scenario provides a more relativistic description of the magnetic field's time behavior in heavy-ion collisions. § NUMERICAL METHOD In the aforementioned scenarios in Eqs. (<ref>) and (<ref>),σis time dependent, therefore the analytical results in Eq. (<ref>) is not applicable, and the Maxwell equations (<ref>–<ref>) need to be solved numerically. Becauseσis zero before the collision, and the two nuclei move linearly with constant velocity, the electromagnetic field att ≤0can be analytically calculated by the Lienard-Wiechert formula as given by Eq. (<ref>). This provides the initial condition of the electromagnetic field att = 0. Once the initial condition is given, the electromagnetic field att ≥0is calculated by numerically solving the Maxwell equations (<ref>–<ref>). We use the FDTD algorithm <cit.> to solve the Maxwell equations. In detail, electric and magnetic fields are discretized on the Yee's grid, and the updating format forandcan be constructed by discretizing Eqs. (<ref>) and (<ref>) with a finite time step, as follows (t + Δ t) - (t)/Δ t = - ∇×(t+Δ t/2), and (t + Δ t) - (t)/Δ t + σ(t + Δ t) + (t)/2 = ∇×(t+Δ t/2) - (t+Δ t/2). The Yee's grid provides a high-accuracy method to calculate∇×and∇×. As time evolves,andare updated alternately. For example, ifis initially known at timetandis initially known at timet+Δt/2, then one can use the values of(t+Δt/2)and Eq. (<ref>) to updatefromttot + Δt; and after(t + Δt)is obtained, one can use Eq. (<ref>) to updatefromt + Δt/2tot + 3Δt/2. This algorithm provides higher accuracy than the regular first-order difference method. § NUMERICAL RESULTS Using the numerical method described in Sec. 
<ref>, we calculate the magnetic field by solving the Maxwell equations (<ref>–<ref>) under the conditions ofσ= 0,σ= σ_0 θ(t), andσ= σ_0 θ(t)/(1 + t / t_0)^1/3, respectively. As a verification of our numerical method, we have checked that our numerical solution forσ= 0matches the analytical result by Eq. (<ref>). We also calculate the magnetic field under the condition ofσ= σ_0using the analytical formula (<ref>) for comparison. In all the results presented in this section, the values ofσ_0andt_0are set to beσ_0= 5.8MeV andt_0 = 0.5fm/c, which are taken from Ref. <cit.>. §.§ σ = σ_0 θ(t) vs σ = σ_0 Figure <ref> displays the time evolution of the magnetic field in the out-of-plane direction (B_y) at the center of collision (𝐱 = 0) in Au+Au collisions for energies ranging from 7.7 to 200 GeV with impact parameterb = 7fm. The results ofσ= σ_0 θ(t)are calculated using the numerical algorithm described in Sec. <ref>, while the results ofσ= σ_0are calculated using the analytical formula given by Eq. (<ref>). The magnetic field in vacuum (σ= 0) is also shown as a baseline. In general, the presence of electric conductivity delays the decreasing of the magnetic field. However, the time behavior of the magnetic field under the condition ofσ= σ_0 θ(t)is very different from that ofσ= σ_0. In Figure <ref> we can see that, in the case ofσ= σ_0(namely,σis constant at botht < 0andt > 0), the magnitude of magnetic field is different from the vacuum baseline since a very early time. On the other hand, in the case ofσ= σ_0 θ(t), the difference between the magnetic field and the vacuum baseline is negligible at early time stages (t < 1fm/c for 200 GeV ort < 3fm/c for 7.7 GeV). This is because thatσexists only after the collision and it needs some time to build the effect on delaying the magnetic field's decay. Only at very late time stage (t > 7fm/c), when the evolution system has “forgotten” whetherσis zero or not beforet = 0, the curves ofσ= σ_0 θ(t)and ofσ= σ_0converge. In the middle time stage, the magnitude of the magnetic field is ranked in the order:B[vacuum] < B[σ=σ_0θ(t)] < B[σ=σ_0]. Our results indicate that the analytical formula (<ref>) significantly overestimates the magnetic field in the early and middle time stage compared to the numerical results. The difference between the analytical and numerical results arises from theθ(t)function introduced in Eq. (<ref>). It is important to note that the conductivity is absent att < 0in realistic collisions, therefore the formula (<ref>) is not applicable. This remarks the importance of considering time-dependentσand solving the Maxwell equations numerically. At the late time stage, although the analytical results agree well with the numerical ones, the magnetic field has become very small and has little impact on final observables. §.§ σ = σ_0 θ(t) vs σ = σ_0 θ(t) / (1 + t / t_0)^1/3 The electric conductivity in heavy-ion collisions is a time-dependent quantity due to the expansion of the QGP. Therefore, we consider a more realistic scenario where the electric conductivity decreases with time as given by Eq. (<ref>). Figure <ref> shows the corresponding results, which are compared to the results under the conditions ofσ= σ_0 θ(t)andσ= 0. We see again that, in both scenarios ofσ= σ_0 θ(t)and ofσ=σ_0θ(t) / (1 + t / t_0)^1/3, the magnitude of the magnetic field does not obviously diverge from the vacuum baseline at the early time stage. At later time, the differences is manifested, and we see thatB[vacuum] < B[σ=σ_0θ(t) / (1 + t / t_0)^1/3] < B[σ=σ_0θ(t)]. 
Needless to say, the decreasing conductivity has smaller effect on delaying the magnetic field's decay than a constant one. Nevertheless, Figure <ref> shows that the magnitude of the magnetic field withσ=σ_0θ(t) / (1 + t / t_0)^1/3are more close to the one ofσ=σ_0θ(t)than to the vacuum baseline, especially at high energies. This suggests that the even if the conductivity decreases, it still has an obvious effect on delaying the damping of the magnetic field. However, this effect is only significant in late time stage, when the magnetic field has already decreased. §.§ Impact on the spin polarization Now let us discuss the impact of the magnetic field on the splitting between the global spin polarizations ofΛandΛ̅. The magnetic-field-induced global spin polarization ofΛandΛ̅can be calculated using the following formula <cit.> P_Λ/Λ̅ = ±μ_ΛB/T, whereμ_Λis the magnetic moment ofΛand is equal to-0.613μ_N, withμ_Nbeing the nuclear magneton, andTis the temperature when the hyperon spin is “freezed”. We shall use the hadronization temperatureT≈155MeV as an estimate. Then the splitting between theΛandΛ̅global spin polarizations is given by P_Λ̅-P_Λ = 0.0826eB/m_π^2. Based on the numerical results presented in Figure <ref>, the magnitude of the magnetic field at late time is of the order ofeB_y∼10^-3–10^-2 m_π^2, which is significantly smaller than the initial values att=0. Therefore, the effect of the magnetic field on the global spin polarizations ofΛandΛ̅is negligible, as the splitting can be no larger than0.1%. This is consistent with the recent STAR data <cit.> which puts an upper limit ofP_Λ̅-P_Λ < 0.24%at=19.6 GeV andP_Λ̅-P_Λ < 0.35%at=27 GeV. In conclusion, our results suggest that the magnetic field is not sufficiently long-lived to provide a distinguishable splitting between theΛandΛ̅global spin polarizations under the current experimental accuracy; similar results were obtained also in Ref. <cit.>. §.§ Impact on the spin alignment The magnetic field also plays an important role in the spin (anti-)alignment of vector mesons. For vector mesons such asϕandK^*0, the spins of the constituent quarks in the meson have a lager chance to be anti-algined [i.e. the(|↑↓⟩+|↓↑⟩)/√(2)state] than to be aligned (|↑↑⟩or|↓↓⟩state) in an external magnetic field <cit.>. This effect can be explored experimentally by measuring the spin-density matrix elementρ_00. We note thatρ_00is a frame dependent quantity. The following formulae show theρ_00with respect tox,y, andzaxis, respectively <cit.>: ρ_00^(x) = 1-P_x^qP_x^q̅+P_y^qP_y^q̅+P_z^qP_z^q̅/3+𝐏_q·𝐏_q̅, ρ_00^(y) = 1-P_y^qP_y^q̅+P_x^qP_x^q̅+P_z^qP_z^q̅/3+𝐏_q·𝐏_q̅, ρ_00^(z) = 1-P_z^qP_z^q̅+P_x^qP_x^q̅+P_y^qP_y^q̅/3+𝐏_q·𝐏_q̅. where(P_x^q, P_y^q, P_z^q)and(P_x^q̅, P_y^q̅, P_z^q̅)are spin polarization vectors of the constituent quark and anti-quark, respectively. Our results have shown that the global spin polarization induced by the magnetic field is a small amount (<0.1%), therefore one may expect that the contribution from the magnetic field to the spin alignment (measured viaρ_00-1/3, which is proportional to the square of the magnetic field) will be even smaller. However, it should be realized that our calculations do not take into account the fluctuations in the charge density and current. Therefore, the results should be interpreted as the averaged magnetic field, which suggest that the average values such as⟨P_q ⟩and⟨P_q̅ ⟩are small, but do not imply that the correlation betweenP_qandP_q̅is small. 
Instead, when a vector meson is formed by the combination of a quark and an anti-quark, the distance between the quarks should be small enough, thus P_q and P_q̅, which arise from the fluctuation of the magnetic field, are highly correlated. This can lead to a massive contribution to ρ_00. Therefore, our results do not rule out the possible effect of the magnetic field on the spin (anti-)alignment of vector mesons. For the same reason, the spin alignment of vector mesons can also arise from the fluctuation of other fields such as vorticity <cit.>, temperature gradient <cit.>, shear tensor <cit.>, and strong-force field <cit.>. Finally, it is important to note that, if the spin alignment is mainly contributed by fluctuations, then the value of ρ_00 is not constrained by the value of the global or local Λ polarizations. This may explain the significant value of |ρ_00-1/3| in the experimental data <cit.>, whereas the global and local Λ polarizations are much smaller <cit.>. § SUMMARY In this study, we present a numerical method to solve the Maxwell equations and investigate the evolution of the magnetic field in heavy-ion collisions. We also discuss the impact of the magnetic field on the spin polarizations of Λ and Λ̅ as well as the spin alignment of vector mesons. We demonstrate that although the electric conductivity can delay the decay of the magnetic field, this effect has been overestimated by the analytical formula which assumes a constant conductivity. After taking into account that the conductivity only exists after the collision, we find that the magnetic field is not sufficiently long-lived to induce a significant splitting between the global spin polarizations of Λ and Λ̅. On the other hand, the spin alignment of a vector meson is a measure of the correlation between the spin polarizations of the quark and anti-quark, rather than simply the square of the spin polarization. Therefore, although the averaged spin polarization induced by the magnetic field is very small, our results do not rule out the possibility that the fluctuations of the magnetic field, as well as of other fields, can make a significant contribution to the spin alignment of vector mesons. We thank Dmitri Kharzeev and Oleg Teryaev for useful comments at the retreat of the Spin Dynamics, Vorticity, Chirality and Magnetic Field workshop. This work was supported by the NSFC through Grants No. 11835002, No. 12147101, No. 12225502 and No. 12075061, the National Key Research and Development Program of China through Grant No. 2022YFA1604900, and the Natural Science Foundation of Shanghai through Grant No. 20ZR1404100. H. L. was also supported by the China Postdoctoral Science Foundation 2019M661333.
http://arxiv.org/abs/2306.08853v1
20230615044225
"In Search of netUnicorn: A Data-Collection Platform to Develop Generalizable ML Models for Network (...TRUNCATED)
[ "Roman Beltiukov", "Wenbo Guo", "Arpit Gupta", "Walter Willinger" ]
cs.NI
[ "cs.NI", "cs.CR", "cs.LG" ]
"\n\n\n\n\nhttps://netunicorn.cs.ucsb.edu\n\n\n\[email protected]\n0000-0001-8270-0219\n\n UC Sa(...TRUNCATED)
http://arxiv.org/abs/2306.01844v1
20230602180314
Stepped Partially Acoustic Dark Matter: Likelihood Analysis and Cosmological Tensions
[ "Manuel A. Buen-Abad", "Zackaria Chacko", "Can Kilic", "Gustavo Marques-Tavares", "Taewook Youn" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
=1
http://arxiv.org/abs/2306.01863v1
20230602183529
Embedding Security into Ferroelectric FET Array via In-Situ Memory Operation
["Yixin Xu","Yi Xiao","Zijian Zhao","Franz Müller","Alptekin Vardar","Xiao Gong","Sumitha George","(...TRUNCATED)
cs.ET
[ "cs.ET" ]
"\n[\n Ilon Joseph\n June 2, 2023\n================\n\n\n\n\nNon-volatile memories (NVMs) have(...TRUNCATED)

Dataset Card for "arxiv_june_2023"

More Information needed
