entry_id: stringlengths 33-33
published: stringlengths 14-14
title: stringlengths 17-188
authors: sequence
primary_category: stringlengths 5-18
categories: sequence
text: stringlengths 2-629k
http://arxiv.org/abs/2307.05564v1
20230709223937
Augmenters at SemEval-2023 Task 1: Enhancing CLIP in Handling Compositionality and Ambiguity for Zero-Shot Visual WSD through Prompt Augmentation and Text-To-Image Diffusion
[ "Jie S. Li", "Yow-Ting Shiue", "Yong-Siang Shih", "Jonas Geiping" ]
cs.CL
[ "cs.CL" ]
Jie S. Li [1], Yow-Ting Shiue [1], Yong-Siang Shih [2], Jonas Geiping [1]; [1] University of Maryland, College Park; [2] Duolingo, Inc. This paper describes our zero-shot approaches for the Visual Word Sense Disambiguation (VWSD) Task in English. Our preliminary study shows that the simple approach of matching candidate images with the phrase using CLIP suffers from the many-to-many nature of image-text pairs. We find that the CLIP text encoder may have limited abilities in capturing compositionality in natural language. Moreover, the descriptive focus of the phrase varies from instance to instance. We address these issues in our two systems, Augment-CLIP and Stable Diffusion Sampling (SD Sampling). Augment-CLIP augments the text prompt by generating sentences that contain the context phrase with the help of large language models (LLMs). We further explore CLIP models in other languages, as an ambiguous word may be translated into an unambiguous one in the other language. SD Sampling uses text-to-image Stable Diffusion to generate multiple images from the given phrase, increasing the likelihood that a subset of the generated images matches the one paired with the text. § INTRODUCTION The task of Visual Word Sense Disambiguation, as set out in the SemEval-2023 Task 1 overview paper <cit.>, can be described as follows: given a target word (the target word) in the context of a phrase of two or more words (the full phrase) and ten candidate images, pick the image (the gold image) among the ten candidate images that correctly corresponds to the target word. The competition was run in three languages: English, Farsi, and Italian. We participated in the English version of the task. This task is in line with previous tasks connecting images to text, such as <cit.>. We explore two distinct systems to tackle this task. Both systems use Contrastive Language-Image Pre-training (CLIP) <cit.> as a foundation. CLIP was trained to associate text and related images by increasing the cosine similarity (CLIP-similarity) between the normalized text embedding and image embedding of related text-image pairs and decreasing it for unrelated pairs. Our first system (Augment-CLIP) augments the CLIP text embedding by introducing additional context (through key-to-text) and by accessing CLIP text and image embeddings in other languages, through third-party implementations of CLIP for various languages. The second system (SD Sampling) samples Stable Diffusion <cit.> to generate multiple images illustrating the semantics of the full phrase and then applies a distance metric to select the candidate image that is closest to the generated images. As standalone systems, Augment-CLIP and SD Sampling do not outperform Base-CLIP, as the additional context may not correctly extend the target word meaning, but they offer complementary benefits and improve Base-CLIP through ensembling. We ensemble models by first calculating a new probability (or score) for each candidate image as the equally weighted average of the probabilities calculated by the underlying models. Each individual model can output a probability for a candidate image; for CLIP-based models, the probability is the softmax of the candidate image logits. 
We then rank the candidate images based on the new probability in descending order, with the highest-probability candidate image being the predicted image from the ensembled model. See Table <ref>. § SYSTEMS OVERVIEW §.§ Augment-CLIP We look at two methods to create the Augment-CLIP system. Both methods attempt to disambiguate the full phrase containing the target word. The first does this by introducing additional text and the second by accessing additional languages. §.§.§ Augment-CLIP with key-to-text Our baseline approach, referred to as the Base-CLIP approach (or Base-CLIP model), encodes the full phrase using the CLIP text encoder and the candidate images using the CLIP image encoder, then chooses the candidate image whose encoding has the largest similarity to the full-phrase encoding. Base-CLIP models, regardless of their specific underlying architecture, suffer from a weakness in compositionality. Compositionality is the change of word meaning in the presence of other words. For example, the meaning of "baby powder" is not the average of "baby" and "powder", and "powder" means different things in "baby powder" versus "milk powder". This is a general problem with embeddings beyond CLIP, such as text embeddings <cit.>. CLIP is trained with image and caption pairs only, with captions consisting of shorter texts of less diversity than the texts used to train language models, so the complex syntactic and semantic relationships among words, including compositionality, are not well captured by CLIP. In comparison, standard language models are trained on larger text corpora composed of longer texts from a larger variety of sources. We utilize this idea by augmenting Base-CLIP with key-to-text completion [<https://github.com/gagan3012/keytotext>] to leverage additional language knowledge. We use the key-to-text systems "k2t" (k2t 1), "k2t-base" (k2t 2), and "mrm8488/t5-base-finetuned-common_gen" (k2t 3). For example, for the target word "administration" and the full phrase "administration prime minister" from the trial data, we created three additional sets of context texts: * "The administration prime minister is the official title of the leader." * "The Administration Prime Minister is the leader of the country." * "prime minister speaks to the media during his visit." These texts further reinforce the semantic meaning of "administration". The CLIP text embedding of the augmented context text is used to measure the CLIP-similarity to the candidate images. To keep the focus on the benefit of additional text context rather than on optimizing the context itself, we use a greedy method to sample key-to-text and do not evaluate alternative sampling methods. §.§.§ Augment-CLIP through additional languages The second way to augment Base-CLIP is to resolve the ambiguity of the full phrase in the source language by translating the full phrase into a different language via a translation model (we leverage Google Translate[<https://translate.google.com/>]) and then using the other language's CLIP text embedding of the translation to measure the distance to the candidate images. We evaluate this idea with Chinese translations. Chinese Augment-CLIP does not outperform Base-CLIP, often due to poor translation, but, interestingly, it offers sufficient complementarity to Base-CLIP or other Augment-CLIP variants that it improves performance through ensembling. See results in Table <ref>. 
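As a concrete illustration of the ensembling rule described above (the equally weighted average of each model's softmax probabilities over the ten candidate images, followed by ranking), the following is a minimal NumPy sketch; the array names and the two-model example are illustrative placeholders rather than our submitted implementation.

```python
import numpy as np

def softmax(logits):
    # Convert one model's candidate-image logits into probabilities.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def ensemble_rank(per_model_logits):
    """Equally weighted average of per-model probabilities, then ranking.

    per_model_logits: list of 1-D arrays, one per model, each holding one
    logit per candidate image. Returns candidate indices ordered from most
    to least likely; the first index is the ensembled prediction.
    """
    probs = np.mean([softmax(l) for l in per_model_logits], axis=0)
    return np.argsort(-probs)

# Hypothetical example: Base-CLIP plus one Augment-CLIP variant, 10 candidates.
base_clip_logits = np.random.randn(10)
augment_clip_logits = np.random.randn(10)
predicted_image = ensemble_rank([base_clip_logits, augment_clip_logits])[0]
```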
§.§.§ Base-CLIP model differences For Base-CLIP, the performance difference between the two versions of CLIP that we used, ViT-B/32 and ViT-L/14, is notable. ViT-B/32 in fact gave better performance on trial and test data. This is unexpected, as ViT-L/14 is a larger model with more training and more data <cit.>. Further, the organizers' baseline uses CLIP-ViT-large-patch14-336, an even larger model, which improved performance on the test data. See Table <ref>. This leads to the question of how different Base-CLIP embeddings affect performance on this task, which is outside the scope of this paper, as we take the Base-CLIP embedding as a given in our systems. §.§ Stable Diffusion Sampling The second system samples text-to-image Stable Diffusion-v1.4 (SD) to generate multiple images after inputting the full phrase as the text prompt. The system then outputs the candidate image with the closest distance to any of the generated SD images. There are two advantages of this system: the first is access to the larger training data of Stable Diffusion, which includes LAION2B-en <cit.>, a dataset of 2.32 billion Common Crawl image-text pairs. Second, evaluating multiple images for a given text input resolves the ambiguity of the input text and also the pictorial ambiguity in its image representation. As an example of text ambiguity, "angora" can mean a type of fiber or, less frequently, a specific city, as in "Angora City". Sampling several images allows the possibility that a subset of the images correctly expresses the meaning of the target word. Even for an unambiguous word, there may be pictorial diversity in its representation, and sampling multiple images allows for broader coverage of this diversity than a single image. We evaluate two sampling methods of Stable Diffusion, text-to-image and text-and-image-to-image. For each, two similarity metrics were used: CLIP-similarity and the l_2 distance between InceptionV3 <cit.> features of the candidate image and InceptionV3 features of the SD-sampled image. Of these four, text-to-image sampling of Stable Diffusion with CLIP-similarity performs the best on trial data and a subset of train data - this is designated SD Sampling and is our submission 2 for the task. For text-to-image sampling, we input the full phrase to SD and generate 50 output images (independent of any candidate images). We then calculate the maximum CLIP-similarity (CLIP ViT-L/14) between a candidate image and the 50 output images and associate that largest CLIP-similarity with that candidate image (candidate image distance). The system then outputs the candidate image with the largest such CLIP-similarity. § EXPERIMENTAL SETUP The trial, train, and test datasets consist of multiple instances. An instance consists of a target word, a full phrase (containing the target word), and ten candidate images, with one image (the gold image) capturing the semantic meaning of the target word as exemplified in the full phrase. Train, trial, and test have 12869, 16, and 463 instances, respectively. For the test data, there are two versions of the dataset provided by the task organizers, differing in the image size <cit.>. We perform our predictions on the dataset with the smaller image size. We do not train or fine-tune our models on the training data, to demonstrate the zero-shot property of our approach, although we do use the training data in part to inform us of which Augment-CLIP system and which SD Sampling system to select for task submissions. 
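As a rough sketch of the SD Sampling scoring step described above, the snippet below generates images with a Stable Diffusion v1.4 pipeline and scores each candidate image by its maximum CLIP similarity to the generated set; the model identifiers, helper structure, and per-image generation loop are illustrative assumptions, and the exact settings used for our submission may differ.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
sd = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def image_features(images):
    # Normalized CLIP image embeddings for a list of PIL images.
    inputs = proc(images=images, return_tensors="pt").to(device)
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def sd_sampling_scores(full_phrase, candidate_images, n_samples=50):
    # Generate n_samples images for the phrase, then score each candidate by
    # its maximum cosine similarity to any generated image.
    generated = [sd(full_phrase).images[0] for _ in range(n_samples)]
    gen_feats = image_features(generated)          # (n_samples, d)
    cand_feats = image_features(candidate_images)  # (10, d)
    sims = cand_feats @ gen_feats.T                # cosine similarities
    return sims.max(dim=1).values                  # one score per candidate

# predicted = int(torch.argmax(sd_sampling_scores("angora city", candidates)))
```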
Based on trial data performance, among the three k2t systems, we choose k2t 2, and among the SD Sampling systems, we choose text-to-image with CLIP-similarity. As measurements of the performance of the models, hit rate and mean reciprocal rank (mrr) are applied to the model predictions on the trial dataset and test dataset. Based on the inputs, the model assigns a score to each candidate image. The model can output one predicted image, with the highest score, or it can output a list of images ordered in decreasing order of score. Hit rate is the percentage of instances where the predicted image is the gold image. Mean reciprocal rank is the average of the reciprocal of the rank of the gold image in the list of images, ordered based on score. § RESULTS §.§ Augment-CLIP through key-to-text While standalone Augment-CLIP through key-to-text does not outperform Base-CLIP, it does reveal that adding context can improve performance. The additional context, when correctly augmenting the meaning of the target word, can indeed improve performance on the test set. In the best-case scenario, the additional context is an extension and explanation of the meaning of the target word. In the worst case, an incorrect extension of context dilutes the meaning of the target word. In the former case, Augment-CLIP is likely to correctly predict the gold image. In the group of instances in which both Augment-CLIP and Base-CLIP correctly predict the gold image, the CLIP-similarity score is higher in Augment-CLIP than in Base-CLIP, showing the effectiveness of added context. This depends on the quality of the context extension process: if the augmented context does not aid in conveying the correct semantic meaning of the target word, then the incorrect additional context may degrade performance in a standalone system. This is analogous to the performance of a language model with in-context learning, where the performance depends on the quality of the in-context examples <cit.>. Adding a k2t system can improve the performance of Base-CLIP. This can be seen in Table <ref>. For each instance in the dataset, consider the Base-CLIP similarity score between the full phrase and the gold image, and the Augment-CLIP through k2t similarity score between the full phrase and the gold image. The difference between the Augment-CLIP similarity score and the Base-CLIP similarity score is calculated and shown in Table <ref>. This difference shows whether Augment-CLIP would have done better or worse than Base-CLIP. It also shows the potential of Augment-CLIP to improve Base-CLIP's performance. Extra steps can be taken to improve the quality of the k2t text completion, but our focus is not to improve the performance of the k2t system but to show that reasonable additional context offers complementary benefits to Base-CLIP. §.§ Augment-CLIP through other languages We evaluate another method to disambiguate the full phrase by translating the English full phrase into another language and exploring the CLIP text embedding and image embedding in that foreign language. Direct translation to a foreign language (taking the first result of Google Translate), with that language chosen to be Chinese (Aug-CLIP: zh) in our evaluation, does not increase performance, and this is partly due to incorrect translations. Here, identical round-trip translations can serve as a proxy for correct translation from English to Chinese. 
The test instances can therefore be divided into two groups: those with identical round-trip translations and all other instances. Starting with the English full phrase, translating it to Chinese, and then translating that result back to English (English_1 → Chinese → English_2), the first group contains the instances for which English_1 and English_2 are the same, up to capitalization, and the second group contains the rest. The first group has a higher foreign-language CLIP-similarity score than the second. As a standalone system, direct translation does not improve performance, but ensembling with a direct translation system does improve performance. By adding Chinese translation to the ensemble (ensemble(B-CLIP, zh, k2t 2)), test data hit rate increases from 59.18 to 63.71 and test data mrr increases from 73.21 to 76.11. See Table <ref>. §.§ SD Sampling SD Sampling does not outperform Base-CLIP. It is worth noting that the instances where SD Sampling correctly selects the gold image are different from those of Base-CLIP, showing a potential gain from accessing the SD Sampling system. See Table <ref>. There is pictorial diversity in the SD samples, and often that diversity includes the correct image expression of the target word in the full phrase, as intended. There is diversity in viewpoint, proximity, and style of the object presented. See images of various cityscapes outputted by Stable Diffusion for the full phrase "angora city" in Figure <ref>, and see images of various views of different models of "internet routers" in Figure <ref>. There is also diversity in the semantic interpretation of the full phrase: see, for example, Figure <ref> for interpretations of the phrase "breaking wheel" as both a torture device and a music group. This shows that the first goal of the SD Sampling system, producing a diversity of pictorial representations of the desired object, is met. But the subsequent application of the distance metric fails to match the sampled SD image to the gold image. At times, incorrect candidate images have larger CLIP-similarity to the correct sampled SD image than the gold image does, due to a coincidence of similar style, lighting, or material. This is not a shortcoming of CLIP-similarity, as it is intended to be applied to (text, image) pairs and not (image, image) pairs <cit.>. As an alternative, we evaluate metrics such as the l_2 distance between InceptionV3 features of the sampled image and InceptionV3 features of the candidate image. Using the l_2 metric underperforms the CLIP-similarity metric, as shown in Table <ref>. We do not evaluate other image-to-image similarity metrics and leave the search for an effective metric for future work. An issue with SD sampling on this dataset is the domain shift between the dataset on which SD was trained, a common crawl of text and image pairs in English, and the more scientific and technical focus of the full phrases in the test data. For example, the full phrase "breaking wheel" is a historical term, meant to be unambiguous and to resolve to a medieval torture device, and the gold image is of such a device. On the other hand, to the layperson, "breaking wheel" sounds like the name of a band, akin to Stone Sour, Breaking Benjamin, or Nickelback, and this popular understanding of "breaking wheel" is evidenced in the Stable Diffusion sampled images, which include images of band groups. 
Similarly, for other instances whose full phrases are technical and scientific terms not well known to the general public, the Stable Diffusion output images express how a layperson would interpret such a term, rather than the correct technical meaning. § CONCLUSION The Base-CLIP system is a strong solution to the task challenge. Our Augment-CLIP system complements Base-CLIP by resolving text ambiguity and improving the handling of compositionality. Our SD Sampling system provides pictorial diversity for both ambiguous and unambiguous text interpretations. These two methods offer additional ways to connect text and images.
http://arxiv.org/abs/2307.04469v1
20230710103740
Beyond spectroscopy. II. Stellar parameters for over twenty million stars in the northern sky from SAGES DR1 and Gaia DR3
[ "Yang Huang", "Timothy C. Beers", "Hai-Bo Yuan", "Ke-Feng Tan", "Wei Wang", "Jie Zheng", "Chun Li", "Young Sun Lee", "Hai-Ning Li", "Jing-Kun Zhao", "Xiang-Xiang Xue", "Yu-Juan Liu", "Hua-Wei Zhang", "Xue-Ang Sun", "Ji Li", "Hong-Rui Gu", "Christian Wolf", "Christopher A. Onken", "Ji-Feng Liu", "Zhou Fan", "Gang Zhao" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
1School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; [email protected] 2Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, P. R. China; [email protected]; [email protected] 3Department of Physics and Astronomy and JINA Center for the Evolution of the Elements (JINA-CEE), University of Notre Dame, Notre Dame, IN 46556, USA 4Department of Astronomy, Beijing Normal University, Beijing 100875, People's Republic of China 5Department of Astronomy and Space Science, Chungnam National University, Daejeon 34134, Republic of Korea 6Department of Astronomy, School of Physics, Peking University, Beijing 100871, People's Republic of China 7Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People's Republic of China 8Department of Space Science and Astronomy, Hebei Normal University, Shijiazhuang 050024, People's Republic of China 9Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia 10Centre for Gravitational Astrophysics, Research Schools of Physics, and Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia We present precise photometric estimates of stellar parameters, including effective temperature, metallicity, luminosity classification, distance, and stellar age, for nearly 26 million stars, using the methodology developed in the first paper of this series, based on the stellar colors from the Stellar Abundances and Galactic Evolution Survey (SAGES) DR1 and Gaia EDR3. The optimal design of the stellar-parameter sensitive uv filters by SAGES has enabled us to determine photometric-metallicity estimates down to [Fe/H] = -3.5, similar to our previous results with the SkyMapper Southern Survey (SMSS), yielding a large sample of over five million metal-poor (MP; [Fe/H] ≤ -1.0) stars and nearly one million very metal-poor (VMP; [Fe/H] ≤ -2.0) stars. The typical precision is around 0.1 dex for both dwarf and giant stars with [Fe/H] > -1.0, and 0.15-0.25/0.3-0.4 dex for dwarf/giant stars with [Fe/H] < -1.0. Using the precise parallax measurements and stellar colors from Gaia, effective temperature, luminosity classification, distance, and stellar age are further derived for our sample stars. This huge data set in the Northern sky from SAGES, together with similar data in the Southern sky from SMSS, will greatly advance our understanding of the Milky Way, in particular its formation and evolution. § INTRODUCTION Estimates of stellar parameters, in particular the metallicity, of a large, complete sample of stars are of vital importance for understanding the formation and evolution of the Milky Way. In the past decades, massive progress has been achieved by large-scale spectroscopic surveys, such as the HK Survey <cit.>, the Hamburg/ESO Survey (HES; ), the Sloan Digital Sky Survey (SDSS; ), the Radial Velocity Experiment (RAVE; ), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; ), the Galactic Archaeology with HERMES project (GALAH; ), and the Apache Point Observatory Galactic Evolution Experiment (APOGEE; ). However, the total number of observed targets collected from all those surveys is no greater than about ten million, less than one ten-thousandth of the estimated total number of Milky Way stars. This under-sampling, together with the complex target-selection strategies, makes it extremely difficult to understand the full assembly history of our Galaxy. 
In the first paper of this series <cit.>, we proposed to alleviate this issue of current spectroscopic surveys by deriving stellar parameters for a huge number of stars using narrow/medium-bandwidth photometric surveys (see Table 1 of H22 for a summary). As a pioneering experiment, H22 presented measurements of stellar parameters, including metallicity, luminosity classification, effective temperature, distance, and stellar age, for over 24 million stars, based on the stellar colors from the SkyMapper Southern Survey (SMSS; ) and Gaia <cit.>, as well as the parallax measurements from Gaia. This huge data set has already been applied to a number of Galactic studies, including searches for metal-poor stars <cit.>, the discovery of ancient halo substructures <cit.>, and understanding the disk/halo formation history (Hong et al. 2023). Its contribution to this field is just beginning to be explored. In this paper, we present a second pioneering experiment in the Northern sky, using the data from the first data release of the Stellar Abundance and Galactic Evolution Survey <cit.> and Gaia EDR3 <cit.>. SAGES is an optical multi-band (u, v, g, r, i, DDO-51, Hα_ wide, Hα_ narrow) large-scale photometric survey, aiming to cover 12,000 square degrees of the Northern sky with δ > -5^∘ down to a 5σ depth of 21.5 in the u-band <cit.>. The u-band filter is the same as in the Strömgren system <cit.>, and the v-band is optimized to provide reliable metallicity measurements by shifting the central wavelength of the SkyMapper v <cit.> to longer wavelengths, by about 100 Å, to reduce the effect of molecular bands of carbon and nitrogen on the metallicity estimates. The special design of the uv filters (especially the v-band) provides photometric sensitivity to stellar surface gravity and metallicity that is well demonstrated by numerous previous efforts with similar filter systems (e.g., ; H22). The gri filters are SDSS-like and can be used to estimate the stellar effective temperature. The combination of Hα and other filters can be used to estimate the values of reddening. Similar to our effort with SMSS (H22), here we present stellar parameter estimates for over 26 million stars using the uv-band data released in SAGES DR1, along with the photometric and parallax information provided by Gaia EDR3 <cit.>. This paper is structured as follows. In Section 2, we introduce the data adopted in the current work. In Section 3, photometric-metallicity estimates from the stellar colors of SAGES DR1 and Gaia EDR3 are described, along with various checks on the photometric measurements. The determinations of effective temperature, T_ eff, distance, and age are presented in Section 4. Radial velocity measurements collected from previous spectroscopic surveys and the final sample are described in Section 5. We present a summary in Section 6. § DATA In the present work, the SAGES DR1 <cit.> dataset is adopted. SAGES DR1 has released a total of about 100 million sources extracted from 36,092 accepted frames in the uv-bands, collected by the 90-inch (2.3 m) Bok Telescope at Kitt Peak National Observatory in Arizona. DR1 covers about half of the Northern Hemisphere (9960 square degrees), about 90 per cent of the planned area. The median completeness is about 20.4 and 20.3 mag for the u- and v-bands, respectively. This is one of the deepest near-ultraviolet large-scale photometric surveys, with a 5σ depth close to 21.5 in the u-band. 
Compared to other near-ultraviolet deep photometric surveys, e.g., the SDSS <cit.> and the South Galactic Cap u-band Sky Survey <cit.>, SAGES has the advantage of using the two medium-bandwidth filters uv, which are optimized for estimates of stellar parameters. In addition to the uv-band data provided by SAGES DR1, the optical bands of G, G_ BP, and G_ RP, as well as astrometric information, are adopted from Gaia EDR3 <cit.>. The Gaia EDR3 broadband photometry is essentially complete between G = 12 and G = 17. The completeness is quite complicated for sources fainter than G = 17 and is strongly dependent on celestial position <cit.>. In total, nearly 33 million stars are selected by the following cuts: * flag_u/v = 0 in SAGES DR1 * Uncertainties of G, G_ BP, and G_ RP smaller than 0.05 mag * Galactic latitude |b| ≥ 10^∘ SAGES was initially designed to avoid the high-reddening regions with |b| ≤ 10^∘, although a few disk areas are observed for specific reasons. The former two cuts are required for precise metallicity estimates, but they do affect the completeness in the faint range (G > 18.5). The last cut is to exclude those disk regions from our analysis, given their high values of extinction. This sample is referred to as the main sample for our following analysis. In this study, the colors u-G_ BP, v-G_ RP, and G_ BP - G_ RP are used. We note that the mean G_ BP flux in Gaia EDR3 is over-estimated for faint red sources with G ≥ 20.0 <cit.>. However, only 650 thousand stars (no more than 3 per cent of the full sample) in our final catalog are fainter than 20th magnitude in the G-band. Therefore, the systematic issue for G_ BP is minor for the current study. Unless indicated otherwise, these colors are corrected for reddening using the extinction map of <cit.> [Here the SFD98 E(B-V) is corrected for a 14% systematic over-estimate <cit.>]. The reddening coefficients for those colors, as well as for the G-band, are calculated in the same way as in H22. § METALLICITY DETERMINATION §.§ Training Set The key to determinations of metallicity using stellar colors is the training set. The training set adopted here is similar to that used in H22, which consists of 1) LAMOST DR9[<http://www.lamost.org/dr9/v1.0/>], 2) the revised parameters of metal-poor ([Fe/H] ≤ -1.8) stars of SEGUE <cit.>, along with other datasets from SDSS (we refer to the total dataset below as SEGUE) and LAMOST <cit.>, derived by a custom version of the SSPP (LSSPP; Lee et al. 2015) with careful visual inspection (by Beers), and 3) the bibliographical compilation of measurements of stellar atmospheric parameters from high-resolution spectroscopy (HRS) by PASTEL <cit.> and SAGA <cit.>. The metallicity scale of the former two sets is calibrated to the one obtained from the HRS dataset. More details of our efforts to construct a training set with a homogeneous scale of metallicity, as well as other elemental-abundance ratios, will be described in Huang et al. (2023). 
We then cross-match the above training set to the main sample, together with the following cuts: * The stars must have small values of extinction (to minimize uncertainties due to reddening corrections): Galactic latitude |b| ≥ 20^∘ and E (B - V) ≤ 0.08 * The stars must have reliable metallicity estimates: LAMOST/SEGUE spectral signal-to-noise ratio (SNR) greater than 20, effective temperatures in the range 3800 ≤ T_ eff (K) ≤ 7500 (i.e., typical FGK-type stars) * The photometric uncertainties in the SAGES uv and Gaia G_ BP, G_ RP, and G bands must be smaller than 0.035 mag * The stars must have Gaia relative parallax measurement uncertainties smaller than 50% In addition to the above cuts, only about half of the metal-rich ([Fe/H] > -1.0) stars are selected, to avoid large differences in the numbers of metal-rich ([Fe/H] > -1.0) and metal-poor ([Fe/H] < -1.0) stars (see the right panel of Fig. 1). Given the number of stars in common between SAGES and those with spectroscopy, the cut on Galactic latitude would not introduce bias into the training set, e.g., a lack of metal-rich disk populations (see the right panel of Fig. 1). A total of 223,537 stars (182,035 dwarfs and 41,502 giants) are selected to construct the final training set. The absolute G-band magnitudes of these stars are derived by adopting the distances from <cit.>, based on the parallax measurements from Gaia EDR3. The Hertzsprung–Russell (H-R) diagram of the training set is shown in the left panel of Fig. 1. Using the empirical cuts defined in H22, the training stars are further divided into dwarf and giant stars. The right panel of Fig. 1 shows the metallicity distributions of the dwarf and giant stars in the training set. §.§ Metallicity Estimation To estimate photometric metallicity, we first define the metallicity-dependent stellar loci of (u/v - G_ BP)_0 versus (G_ BP - G_ RP)_0 in Fig. 2 for both dwarf stars (top panel) and giant stars (bottom panel). Similar to our results with SMSS DR2 in H22, both (u - G_ BP)_0 and (v - G_ BP)_0 colors exhibit significant sensitivities to stellar metallicity for different types of stars characterized by (G_ BP - G_ RP)_0. Third-order 2D polynomials with 10 free parameters are then applied to describe the stellar loci of dwarf and giant stars: (u/v - G_ BP)_0 = a_0,0 + a_0,1 y + a_0,2 y^2 + a_0,3 y^3 + a_1,0 x + a_1,1 x y + a_1,2 x y^2 + a_2,0 x^2 + a_2,1 x^2 y + a_3,0 x^3, where x and y represent (G_ BP - G_ RP)_0 and [Fe/H], respectively. Two-to-three-sigma clipping is applied in the fitting process. The resultant fit coefficients are listed in Table 1. Using the stellar loci, one can determine the photometric metallicity with the maximum-likelihood approach developed in H22. For a given star, the metallicity is obtained from the probability distribution function (PDF) of [Fe/H] estimated from the likelihood function: L_c = 1/(√(2π) σ_c_ obs) exp[ -(c_ obs - c_ pred)^2 / (2σ_c_ obs^2) ], where c_ obs are the observed colors, i.e., (u/v - G_ BP)_0, with assumed Gaussian errors σ_c_ obs. Here c_ pred represents the same colors predicted by the metallicity-dependent stellar loci (defined by Equation 1), with (G_ BP - G_ RP)_0 from observations and [Fe/H] ranging from -3.5 to +0.8 in steps of 0.01 dex. The uncertainty in the estimated photometric metallicity is taken to be half of the 68% interval of the resultant PDF. From the above approach, we estimate the photometric metallicities of the training-set stars and compare them to the spectroscopic measurements as an internal test. These comparisons are shown in Fig. 
3 for both dwarf stars (top panel) and giant stars (bottom panel). Generally, the estimated photometric metallicities agree very well with the spectroscopic metallicities for both dwarf and giant stars, whether from (u - G_ BP)_0 or (v - G_ BP)_0; for dwarf stars, the overall scatter achieved by (u - G_ BP)_0 and (v - G_ BP)_0 is only 0.09 dex and 0.13 dex, respectively. The scatter of the combined estimates, using an error-weighted mean, is further reduced to 0.08 dex, even better than the precision of low/medium-resolution spectroscopy. As shown in the top-right panel of Fig. 4, no significant systematic offset is found for dwarf stars with photometric [Fe/H] > -1.0, and a mild offset of -0.2 to -0.4 dex (photometric minus spectroscopic) is found for metal-poor dwarf stars with photometric [Fe/H] ≤ -1.0. The metallicity precision for dwarf stars, as revealed by the internal comparisons, is a function of [Fe/H], with scatter smaller than 0.1 dex for [Fe/H] > -0.5, increasing to 0.3-0.4 dex at the extremely metal-poor end ([Fe/H] ∼ -3.0). For giant stars, the overall scatter is around 0.11 dex. The comparisons show that the photometric metallicity derived from (v - G_ BP)_0 is in excellent agreement with that from spectroscopy, with negligible offsets for [Fe/H] > -2.0 and a small offset of -0.2 dex (photometric minus spectroscopic) at the extremely metal-poor end ([Fe/H] ∼ -3.0). The metallicity precision from (v - G_ BP)_0 is around 0.1 dex for [Fe/H] > -1.0, and 0.2-0.3 dex for [Fe/H] ≤ -1.0. The performance of the photometric metallicity derived from (u - G_ BP)_0 is moderately worse, especially for warmer giant stars, which are mostly BHB stars (see the blue box in the bottom-left panel of Fig. 3). Finally, the internal checks indicate that there are no systematic trends with effective temperature for the photometric-metallicity estimates of either dwarf or giant stars (see the top-left panel of Fig. 4). In addition to the internal test, we derive photometric metallicities for LAMOST targets with larger values of E (B-V) that are not included in the training set. Using the LAMOST targets (including those stars with low values of extinction in the training set), we show the metallicity differences between the photometric and spectroscopic values as a function of E (B-V) in Fig. 5. The metallicity differences (photometric minus spectroscopic) steadily decrease with E (B-V), and reach ∼ +0.2 dex at E (B-V) ∼ 0.5 for both dwarf and giant stars. This trend is possibly due to the spatial systematic uncertainties of the SFD98 extinction map, as found most recently by <cit.>. Moreover, <cit.> have shown that the reddening coefficients depend not only on effective temperature/intrinsic colors, but also on extinction itself (ignored in this work). The neglect of the extinction term may also partly contribute to this E (B-V)-dependent trend. To correct for this systematic trend, a fifth-order polynomial is applied to describe the differences as a function of E (B-V) for dwarf and giant stars, respectively. According to the above tests, the final metallicity of a dwarf star is given by the combined estimate if both the (u - G_ BP)_0 and (v - G_ BP)_0 colors are available, or by the single measurement from either (u - G_ BP)_0 or (v - G_ BP)_0, depending on which color is available. The final metallicity of a giant star is given by the measurement from the color (v - G_ BP)_0, or from the color (u - G_ BP)_0 if the former is not available. 
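To make the estimation procedure of Section 3.2 concrete, the following is a minimal sketch of the likelihood evaluation for a single star; the polynomial coefficients and the observed color are placeholders standing in for the values of Table 1 and the real photometry, and the snippet is illustrative rather than the production pipeline.

```python
import numpy as np

def locus_color(bp_rp, feh, a):
    """Third-order 2D polynomial stellar locus (Equation 1).
    a is a dict of the ten coefficients a[i, j]; real values come from Table 1."""
    x, y = bp_rp, feh
    return (a[0, 0] + a[0, 1]*y + a[0, 2]*y**2 + a[0, 3]*y**3
            + a[1, 0]*x + a[1, 1]*x*y + a[1, 2]*x*y**2
            + a[2, 0]*x**2 + a[2, 1]*x**2*y + a[3, 0]*x**3)

def photometric_feh(c_obs, sigma_obs, bp_rp, a,
                    grid=np.arange(-3.5, 0.81, 0.01)):
    """Maximum-likelihood [Fe/H] from one de-reddened color (u or v minus G_BP)."""
    c_pred = locus_color(bp_rp, grid, a)
    like = np.exp(-(c_obs - c_pred)**2 / (2.0 * sigma_obs**2)) / (np.sqrt(2*np.pi) * sigma_obs)
    pdf = like / like.sum()
    best = grid[np.argmax(pdf)]
    # Uncertainty: half of the 68% interval of the resulting PDF.
    cdf = np.cumsum(pdf)
    lo, hi = np.interp([0.16, 0.84], cdf, grid)
    return best, 0.5 * (hi - lo)
```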
In this manner, photometric-metallicity estimates are derived for over 26 million stars (23 million dwarf stars and 3 million giant stars) in SAGES. Note that the extinction-dependent zero-point offsets are corrected using the fifth-order polynomial constructed above. The G-band magnitude distributions of stars with metallicity estimates are shown in the left panel of Fig. 6. The overall completeness limit is around G = 17.5 and 18.5 for dwarf and giant stars, respectively. As mentioned earlier, we caution that the completeness of the Gaia broadband photometry is quite complicated, especially in crowded regions, for stars with G > 17 <cit.>. The photometric-metallicity distributions of dwarf and giant stars are shown in the right panel of Fig. 6. The total number of very metal-poor (VMP; [Fe/H] < -2.0) stars is about one million, which constitutes the largest database of VMP candidates yet assembled using photometric techniques. The metallicity uncertainty of a star comes from two sources: the method error deduced from the internal checks and the random errors derived from the likelihood function. The metallicity uncertainty as a function of G-band magnitude is shown in Fig. 7; it is dominated by the method error at the bright end and by the random errors at the faint end. §.§ Comparison with APOGEE DR17 and GALAH DR3+ The accuracy of our photometric estimates of metallicity is examined by comparisons with the independent spectroscopic measurements from APOGEE DR17 <cit.> and GALAH DR3+ <cit.>. The comparisons are shown in Fig. 8 for 72,995 high-quality (SNR ≥ 30) stars in common with APOGEE and 13,038 high-quality (SNR ≥ 30) stars in common with GALAH DR3+. Generally, the photometric-metallicity estimates agree very well with the spectroscopic values, without significant offsets. The overall scatter is only 0.09 dex for dwarf stars and 0.10-0.15 dex for giant stars. The zero-points and precisions of individual metallicity bins are also examined in the lower panels of Fig. 8; the results are consistent with our internal tests (see Fig. 4). We also present the metallicity differences between the photometric estimates and the spectroscopic values from APOGEE DR17 as a function of E (B-V) in Fig. 9. The plot clearly shows that the offsets are all around zero for different bins of E (B-V), a validation of our polynomial corrections described in Section 3.2 (see Fig. 5). §.§ Comparison with Metal-poor Samples from High-resolution Spectroscopy To explore the capabilities of the SAGES filters for determinations of metallicity for metal-poor stars, we collect samples of independent metallicity estimates from HRS, especially for metal-poor stars. The HRS samples we compare with include a sample of the most metal-poor stars <cit.>, the R-Process Alliance sample <cit.> of over 600 VMP stars, the CFHT ESPaDOnS follow-up observations of 132 metal-poor candidates selected from the Pristine survey <cit.>, the Subaru follow-up observations of 400 VMP candidates selected from LAMOST <cit.>, and the GTC follow-up observations of extremely metal-poor (EMP) candidates identified from the Pristine and LAMOST surveys <cit.>. We cross-match the SAGES sample to the collected HRS samples and find 112 stars in common (54 dwarfs and 58 giants). The comparison result is shown in Fig. 10. Generally, our photometric-metallicity estimates are consistent with the HRS values for metal-poor stars without significant carbon enhancements ([C/Fe] < +0.6). 
The overall scatter of the differences (photometric minus spectroscopic) is 0.57 dex and 0.30 dex for dwarf and giant stars, respectively, with mild offsets of +0.38 dex and +0.18 dex, respectively. The result is in line with our internal checks (see Fig. 4). We note that the photometric-metallicity estimates of ultra metal-poor (UMP; [Fe/H] < -4.0) stars can be over-estimated by up to 2 dex for stars with very high carbon enhancements ([C/Fe] ≥ +2.0). §.§ Comparison with SMSS and Gaia XP Spectra We compare our results to those of H22 from SMSS and those of <cit.> from Gaia XP low-resolution spectra. The latter has recently delivered estimates of metallicity, using a data-driven technique, for over 120 million stars from Gaia XP low-resolution spectra. As shown in Fig. 11, our estimates are consistent with those of <cit.> and H22, with tiny offsets and a scatter smaller than 0.20 dex. Finally, although the total number of our metallicity estimates (SAGES + SMSS) does not exceed 50 million stars, we emphasize that the volume of our sample is much larger than that of the sample constructed from Gaia XP spectra, given that the limiting magnitude of SAGES and SMSS is nearly 3 mag deeper than that of the Gaia XP spectra. This larger volume will enable numerous interesting studies of the Milky Way, e.g., searching for substructures in the stellar halo. § EFFECTIVE TEMPERATURE, DISTANCE, AND AGE ESTIMATES The effective temperatures of dwarf and giant stars are derived from the metallicity-dependent T_ eff–color relations constructed in H22. Here the color is the de-reddened (G_ BP - G_ RP)_0, and the metallicity is given by the photometric [Fe/H]. In this way, effective temperatures are obtained for all of our program stars. As examined with over 159,000 stars in common, the effective temperature estimated in this work is quite consistent with that from LAMOST, with a small offset of around -24 K (this work minus LAMOST) and a scatter of only 84 K (see Fig. 13). Distances estimated by <cit.> are adopted for stars with reliable parallax measurements, i.e., with parallax precision better than 30%, parallax greater than 0.15 mas, and renormalized unit weight error (RUWE) smaller than 1.4. A total of 15,974,812 stars have distances estimated in this way. Using the apparent G-band magnitudes and the SFD E (B-V), the G-band absolute magnitudes have been derived for the nearly 16 million stars with reliable geometric distances. Fig. 12 shows the Hertzsprung-Russell (H-R) diagram for about 8 million stars with relative parallax error better than 10%, parallax greater than 0.4 mas, and RUWE ≤ 1.4. Guided by the isochrones of PARSEC <cit.>, empirical cuts are defined to further classify dwarf stars into main-sequence turn-off, main-sequence, and binary stars. For the stars without geometric distance estimates, the distances are obtained by inferring their absolute magnitudes from the constraints of stellar colors and photometric metallicity. For main-sequence dwarf stars, the G-band absolute magnitudes are derived from the third-order 2D polynomial relation constructed in H22. Combined with the G-band magnitude and the SFD E (B-V), distances are found for over one million main-sequence dwarf stars with (G_ BP - G_ RP)_0 ≥ 1.0. For giant stars, a likelihood method developed in <cit.> and <cit.> is adopted to infer the i-band absolute magnitude using the (g - i)_0 color, the photometric [Fe/H], and empirical color–magnitude fiducials interpolated from six globular clusters. 
Here, the g- and i-band magnitudes are from the Pan-STARRS1 surveys <cit.>; the reddening-correction coefficients are from <cit.>. The interested reader is referred to X14 or <cit.> for more details. In the above manner, a total of over 1.6 million giant stars have their distances estimated. To test the accuracy of our distance estimates for giant stars, Fig. 14 compares them with those of X14 for over 1600 stars in common. The results are consistent with each other, with a tiny relative offset of -3.7% (this work minus X14) and a scatter of 21.7%. This scatter implies that both estimates have a typical precision of about 16%, as expected by X14. Finally, we derive stellar ages for stars with good parallax measurements, i.e., parallax measurements with precision better than 30%, parallax greater than 0.15 mas, and RUWE ≤ 1.4, using the technique developed in H22. Nearly 15 million stars have their ages estimated in this way. We note that the RUWE cut cannot exclude all of the binary stars, whose ages may be over-estimated. As noted by H22, this technique is mostly valid for main-sequence turn-off and sub-giant stars; uncertainties are larger for other types of stars in the H-R diagram. We perform a similar check as done in H22 with over 160,000 stars in common between this work and <cit.>, who derived isochrone ages for over 3 million stars with both spectroscopic and astrometric information. The check shows that the age estimates in this work agree well with those from SD18, with an offset of 5% in the relative age difference (age_ TW - age_ SD18)/age_ SD18 and a scatter in the relative age difference of around 20%. § RADIAL VELOCITIES AND THE FINAL SAMPLE We collect measurements of radial velocities for our sample stars available from completed and ongoing spectroscopic surveys, including GALAH DR3+ <cit.>, SDSS/APOGEE DR17 <cit.>, Gaia DR3 <cit.>, RAVE DR5 <cit.>, LAMOST DR9[<http://www.lamost.org/dr9/v1.0/>] and SDSS/SEGUE DR16 <cit.>, with typical measurement errors of 1.1, 0.5, 1.0-6.0, 2.0, 5.0 and 5.0 km s^-1, respectively. In total, over 4.2 million stars in our final sample have radial velocity measurements. The detailed contributions of radial velocities from each survey are given in Table 2. If a star has radial velocity measurements from two or more surveys, the result from the survey with the highest spectral resolution is adopted. We note that all of the radial velocity zero-points are calibrated to the updated APOGEE radial-velocity standard stars based on SDSS/APOGEE DR17, constructed using the same technique proposed in <cit.>. In the final sample, over 22 million dwarf and 3 million giant stars have photometric-metallicity estimates (see Section 3) from the stellar colors provided by SAGES DR1 <cit.> and Gaia EDR3 <cit.>, and effective temperature estimates from the intrinsic (G_ BP - G_ RP)_0 colors and the photometric [Fe/H] (see Section 4). From the well-developed techniques described in H22, distances and ages are further derived for 18 and 15 million stars in the final sample, respectively (see Section 4). The radial velocity measurements, if available from the spectroscopic surveys, and the astrometric parameters from Gaia EDR3 <cit.> are also included. A description of the information for stars in the final sample catalog is presented in Table 3. The final stellar-parameter sample catalog will be released by the SAGES project as a value-added catalog. 
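The survey-priority rule above (adopting, for stars observed by two or more surveys, the radial velocity from the survey with the highest spectral resolution) can be expressed as a small selection function; the ordering below is illustrative only and should be set from the actual spectral resolutions of the adopted surveys.

```python
def select_radial_velocity(rv_by_survey,
                           priority=("GALAH", "APOGEE", "Gaia", "RAVE", "SEGUE", "LAMOST")):
    """rv_by_survey: dict mapping survey name -> (rv, rv_err) for one star, in km/s.
    priority: survey names ordered by decreasing spectral resolution
    (placeholder ordering for illustration)."""
    for survey in priority:
        if survey in rv_by_survey:
            rv, rv_err = rv_by_survey[survey]
            return survey, rv, rv_err
    return None  # no radial velocity available for this star

# Example: a star observed by both LAMOST and APOGEE keeps the APOGEE value.
# select_radial_velocity({"LAMOST": (35.2, 5.0), "APOGEE": (34.8, 0.5)})
```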
This sample already represents substantial progress in the development of stellar samples from the Northern sky for use in Galactic studies. Together with our former effort with SMSS DR2, described in the first paper in this series, which together provide photometric metallicities for on the order of 50 million stars, these results will shed light on the formation and evolutionary history of our Galaxy. The next step of this project is to extend this technique to derive photometric metallicities with improved precision, especially at the metal-poor end, and other elemental-abundance ratios (e.g., [α/Fe] and [C/Fe]) from narrow/medium-band photometric surveys <cit.>, or from Gaia XP low-resolution spectra, although only for stars with a relatively bright limiting magnitude around G ∼ 17.5 mag <cit.>. § SUMMARY In this, the second paper of this series, we present stellar parameters for over 20 million stars in the Northern sky, using SAGES DR1 and Gaia EDR3. With a careful and comprehensive selection of a training set from spectroscopic measurements, we present photometric-metallicity estimates for nearly 26 million stars (23 million dwarf and 3 million giant stars), with useful metallicity determinations down to [Fe/H] = -3.5. Both internal and external checks show that the precision of our photometric measurements is about 0.1 dex in the metal-rich range ([Fe/H] > -1.0) and 0.15-0.25/0.3-0.4 dex for dwarf/giant stars with [Fe/H] ≤ -1.0. This result is comparable to, or even better than, that obtained from low/medium-resolution spectroscopy. In addition to metallicity, the final sample also includes measurements of effective temperature from metallicity-dependent T_ eff–color relations, distances either from Gaia parallax measurements or from the metallicity-dependent color-absolute magnitude fiducials, and ages from comparisons between observations and stellar isochrones. Radial velocities from spectroscopic surveys and astrometric parameters from Gaia EDR3 are also included. To date, we have delivered stellar parameters for over 50 million stars covering almost 3π steradians of sky, which will be useful for a variety of studies of the Milky Way. § ACKNOWLEDGEMENTS This work is supported by National Key R&D Program of China No. 2019YFA0405500 and National Natural Science Foundation of China grants 11903027, 11833006, 11973001, 11603002, 11811530289 and U1731108. We used data from the European Space Agency mission Gaia (<http://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC; see <http://www.cosmos.esa.int/web/gaia/dpac/consortium>). T.C.B. acknowledges partial support from grant PHY 14-30152, Physics Frontier Center/JINA Center for the Evolution of the Elements (JINA-CEE), awarded by the US National Science Foundation. His participation in this work was initiated by conversations that took place during a visit to China in 2019, supported by a PIFI Distinguished Scientist award from the Chinese Academy of Sciences. Y.S.L. acknowledges support from the National Research Foundation (NRF) of Korea grant funded by the Ministry of Science and ICT (NRF-2021R1A2C1008679). Y.S.L. also gratefully acknowledges partial support for his visit to the University of Notre Dame from OISE-1927130: The International Research Network for Nuclear Astrophysics (IReNA), awarded by the US National Science Foundation. CAO acknowledges support from the Australian Research Council through Discovery Project DP190100252. 
The Stellar Abundance and Galactic Evolution Survey (SAGES) is a multi-band photometric project built and managed by the Research Group of the Stellar Abundance and Galactic Evolution of the National Astronomical Observatories, Chinese Academy of Sciences (NAOC). The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support of the SkyMapper node of the ASVO have been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth's Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS). The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
http://arxiv.org/abs/2307.04940v1
20230710233548
An open-source alignment method for multichannel infinite-conjugate microscopes using a ray transfer matrix analysis model
[ "Gemma S. Cairns", "Brian R. Patton" ]
physics.optics
[ "physics.optics", "physics.ins-det" ]
§ ABSTRACT Multichannel, infinite-conjugate optical systems easily allow implementation of multiple image paths and imaging modes into a single microscope. Traditional optical alignment methods that rely on additional hardware are not always simple to implement, particularly in compact open-source microscope designs. We present here an alignment algorithm and process to position the lenses and cameras in a microscope using only image magnification measurements. We show that the resulting positioning accuracy is comparable to the axial resolution of the microscope. Ray transfer matrix analysis is used to model the imaging paths when the optics are both correctly and incorrectly aligned. This is used to derive the corresponding image magnifications. We can then extract information about the lens positions using simple image-based measurements to determine whether there is misalignment of the objective lens to sample distance (working distance) and with what magnitude and direction the objective lens needs to be adjusted. Using the M4All open-source 3D printable microscope system in combination with the OpenFlexure microscope, we validate the alignment method and highlight its usability. We provide the model and an example implementation of the algorithm as an open-source Jupyter Notebook. § INTRODUCTION Advanced microscopes often include multiple optical paths in the system to enable, for example, multi-colour fluorescence microscopy and to combine multiple modes of microscopy in one instrument. There are two different ways to design an imaging path: finite-conjugate and infinite-conjugate systems. In a finite-conjugate design the sample is positioned between f_o and 2f_o before the objective lens (where f_o is the effective focal length of the objective lens). The objective lens then focuses light at an intermediate image plane <cit.> where either an imaging sensor or relay lens, such as an eyepiece for direct observation, can be placed (figure <ref> (a)). While there are methods for allowing multiple imaging paths in a finite-conjugate system, it can be more complicated to implement than in an infinite-conjugate system. An infinite-conjugate system positions the sample at f_o before the objective lens and produces a collimated beam after the objective lens from a single point source in the focal plane. The collimated beam must subsequently be focused using a tube lens to form an image (figure <ref> (b)) <cit.>. To easily implement multiple imaging paths, non-focusing optics for splitting light into different detection channels, such as dichroic mirrors and beamsplitters, can be placed between the objective and tube lenses in the collimated “infinity space” without introducing spherical aberration into the system and without changing the position of the image plane <cit.>. The distance between the objective and tube lens can also be varied without impacting the magnification, further easing the implementation of multichannel systems. Note that figure <ref> depicts the objective lenses as single lens elements; however, in practice objective lenses contain multiple lens elements. Therefore, f_o is measured from an effective plane within the objective lens body, which is not normally marked. 
Instead, objective lens manufacturers also state a working distance for the lens, which is illustrated in figure <ref>. For objectives designed to work with a coverslip, the working distance does not include the coverslip thickness (i.e. an objective specified to have a 400 μm working distance and working with 170 μm coverslips will have the focal plane 570 μm from the front surface of the objective). For an objective placed at the working distance from the coverslip, as in figure <ref>, this means that the focal plane will be coincident with the bottom surface of the coverslip. When setting up a multichannel infinite-conjugate microscope, it is important that the objective lens - tube lens system is aligned so that the plane being imaged on the sensor is positioned at the correct working distance. If it is not, the infinity space will not be collimated, resulting in different magnifications for each channel if they have different path lengths. In addition, other aberrations and field distortions may be introduced when using the objective at the wrong working distance. Note that collimation refers to light emitted from a point source. As can be seen in figure <ref> (b), extended objects in the sample plane will result in beam divergence in infinity space, as each collimated bundle of rays from each point contributing to the extended object will propagate at a different angle to the optical axis (compare the black and red ray bundles). Therefore, for microscopes that image samples with illumination spread over a wide area, collimation is not the same as looking to see if all the light rays coming out of the back of the objective remain parallel. Instead, checking whether the rays from a point source are collimated (i.e. checking that the sample is at the correct working distance in an infinite-conjugate system) must be achieved through an appropriate technique. Traditional methods, such as using an auto-collimator <cit.> or shear plate, are very effective, but require dedicated hardware. Recently there has been a growing community of researchers focused on developing open-source hardware for microscopy (see <cit.> for an extensive list of projects), where designs are becoming increasingly compact, which results in difficulties in using traditional optical alignment methods. For example, an auto-collimator may not fit into the optical path. Therefore, to align an infinite-conjugate multichannel microscope without the use of additional hardware, we have developed an image-based alignment method based on a mathematical ray transfer matrix analysis model. The only requirements for the method are: * To have a way of accurately controlling the z step (focusing) movements of either the sample or objective lens. * To know the specifications of the optics and cameras in the system. * To decide whether it is important to know the absolute magnifications of the imaging channels or whether it is adequate to know their relative magnifications to one another. This will determine whether a feature-size calibrated sample is required, e.g. a calibrated graticule slide. In this paper we describe the mathematical model and resulting alignment method in detail before showing it used to align a low-cost, open-source and 3D printable multichannel microscope built using M4All <cit.> combined with the OpenFlexure microscope stage <cit.>. 
§ RAY TRANSFER MATRIX ANALYSIS THEORY Mathematical ray tracing calculations within the paraxial approximation (where only light rays which make a small angle to the optical axis are considered, such that sin(θ)≈θ) can be performed using ray transfer matrix analysis (RTMA). Note that in high numerical aperture (NA=nsin(θ)) systems, where θ is larger, ray-tracing still often gives useful results. With this in mind, we can recommend this alignment approach even with high-NA (NA> 0.6) objectives. For those unaware of the theory of RTMA, reference <cit.> provides an excellent introduction to both the theory and the Python library we use in this paper. A light ray at a plane along the optical axis, z, has a height, y, and angle θ, with respect to z which is represented as a ray vector: r = [ y; θ ] In RTMA the input ray vector is transformed through different optical elements or free space propagation paths which are described by 2x2 matrices, known as transfer matrices and also often referred to by their indices as ABCD matrices. The output ray vector is defined by left multiplication of the input ray vector with the transfer matrices for each element (note here that the ABCD matrix represents the transfer matrix for the total system): [ y_out; θ_out ] = [ A B; C D ][ y_in; θ_in ] = [ Ay_in+Bθ_in; Cy_in+Dθ_in ] For this work it is sufficient to use only the transfer matrices for free space and a thin lens respectively, where d is the propagation distance in free space and f is the focal length of the thin lens (transfer matrices for further elements and matrix derivations can be found in Burch et al. <cit.>): [ 1 d; 0 1 ] [ 1 0; -1/f 1 ] The total ABCD matrix can be used to derive some useful properties of the system <cit.>. Most importantly for this work is the fact that when B=0 the system produces a real image at the output plane from an object at the input plane. This is equivalent to y_out being independent of θ_in. The lateral and angular magnifications in this case are given by A and D respectively. § RAY TRANSFER MATRIX ANALYSIS MODEL FOR ALIGNMENT OF INFINITE-CONJUGATE MICROSCOPE DESIGNS The systems we wished to align comprised of a single objective and an additional single tube lens per optical path, as shown in figure <ref>. We therefore demonstrate the application of our alignment routine for such a microscope - we anticipate that it would also work for more complex optical paths with suitable calculation of the total ABCD matrix. To model a correctly aligned microscope with an infinite-conjugate optical design, we define variables in figure <ref>. The total length of the channel from the sample to the camera sensor is d_total, and due to the design of the OpenFlexure microscope and the M4All system, will be treated as being fixed in the following for this example system. For other systems, the total distance may change with sample positioning and this would need to be incorporated in the calculation of the total ABCD matrix. Treating the objective lens as a single thin lens, the distance between the sample and the objective lens is d_sample. The distance between the objective lens and the camera sensor, and the objective lens and the tube lens is d_intercam and d_interlens respectively. Finally, we define the distance between the tube lens and the camera sensor as d_cam. 
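To make the ABCD bookkeeping above concrete before applying it to the microscope channels, the following short Python sketch (a minimal illustration, not the authors' released notebook) builds the two transfer matrices used in this work, composes a simple single-lens system, and checks the imaging condition B = 0 and the lateral magnification A. The numerical values are illustrative only.

import numpy as np

def free_space(d):
    """Transfer matrix for free-space propagation over a distance d."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Transfer matrix for a thin lens of focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

def system_matrix(*elements):
    """Compose element matrices given in the order the light meets them."""
    total = np.eye(2)
    for m in elements:
        total = m @ total          # left multiplication, as in the text
    return total

# illustrative check: single lens (f = 50 mm), object at 2f -> image at 2f, A = -1
f = 50.0
abcd = system_matrix(free_space(2 * f), thin_lens(f), free_space(2 * f))
A, B = abcd[0, 0], abcd[0, 1]
print(f"B = {B:.3e} (B = 0 means a real image forms at the output plane)")
print(f"lateral magnification A = {A:.2f}")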
This set of variables implies the manner in which we align the system - the sample is placed as close to the correct working distance as we can estimate by moving the objective (the sample stage is fixed in position with respect to the propagation axis), the camera is also fixed in position at d_total from the sample by a non-adjusting mount, and we move the tube lens to focus the image of the sample. The defined variables, along with the objective lens effective focal length f_o and tube lens focal length f_t, can be substituted into the ABCD matrices <ref> and <ref> to build the matrix equation <ref> for the infinite-conjugate imaging channel (where the matrices are left multiplied in the order they are positioned in the optical path). [ y_out; θ_out ] = [ 1 d_cam; 0 1 ][ 1 0; -1/f_t 1 ][ 1 d_interlens; 0 1 ][ 1 0; -1/f_o 1 ][ 1 d_sample; 0 1 ][ y_in; θ_in ] The following definition can also be made for d_interlens: d_interlens = d_total - d_sample - d_cam In a correctly aligned system, d_sample is equal to f_o and d_cam is equal to f_t. Therefore the matrix equation becomes: [ y_out; θ_out ] = [ 1 f_t; 0 1 ][ 1 0; -1/f_t 1 ][ 1 d_total - f_o - f_t; 0 1 ][ 1 0; -1/f_o 1 ][ 1 f_o; 0 1 ][ y_in; θ_in ] Upon substituting the microscope design values for d_total, f_o and f_t into the matrix equation for a correctly aligned system and multiplying the transfer matrices to obtain a single transfer matrix for the total channel, the lateral magnification of the image, A, will equal the value M obtained using: M = f_t/f_o However, the magnification of an incorrectly aligned microscope, such as when the objective lens is not at the correct working distance, will differ from equation <ref>. This is because when d_sample ≠ f_o an image can only be formed when d_cam ≠ f_t. The first step in our calibration routine is therefore to calculate the magnification for each channel for a range of suitable d_sample values, centred around the real effective focal length of the objective lens, f_o. To do this calculation we substitute equation <ref> into equation <ref>, set a value for d_sample from the range chosen as appropriate for the objective, and solve for the value of d_cam that gives an image at the sensor. This is easily done by recalling that the B component of the ABCD matrix of a system is equal to zero for systems producing a real image at the output plane from an object at the input plane. Therefore a function that returns the value of B for a given physical setup can be passed to e.g. the Python fsolve routine allowing a numerical solution for the value of d_cam that produces an image for each d_sample in the range of interest. Note that it is possible that no imaging solution can be found, given the fixed camera position and the choice of d_sample range. In this case, our example code fails gracefully and warns the user of the position at which the failure occurs for the relevant path. Substituting the solved d_cam value for each d_sample value back into the matrix equation allows the lateral magnification A to be determined for each iteration. A plot of lateral magnification vs d_sample shows the deviation in magnification when the objective lens is moved away from the correct working distance. For a multichannel microscope, repeating the analysis for each channel allows the theoretical difference in magnification between each channel to be modelled in the situation where the objective lens is not positioned correctly along the optical path. 
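The calibration sweep just described can be sketched as follows. The snippet assumes the example values used later in the text (f_o = 4.5 mm, f_t = 45 mm, d_total = 300, 350 and 400 mm) and, mirroring the text, uses the SciPy fsolve routine to find the d_cam that makes B = 0 for each trial d_sample; it is a rough sketch rather than the authors' implementation.

import numpy as np
from scipy.optimize import fsolve

F_O, F_T = 4.5, 45.0                       # assumed focal lengths in mm

def channel_matrix(d_sample, d_cam, d_total, f_o=F_O, f_t=F_T):
    """Total ABCD matrix of one infinite-conjugate channel."""
    d_interlens = d_total - d_sample - d_cam
    elements = [np.array([[1.0, d_sample], [0.0, 1.0]]),    # sample -> objective
                np.array([[1.0, 0.0], [-1.0 / f_o, 1.0]]),  # objective as thin lens
                np.array([[1.0, d_interlens], [0.0, 1.0]]), # objective -> tube lens
                np.array([[1.0, 0.0], [-1.0 / f_t, 1.0]]),  # tube lens
                np.array([[1.0, d_cam], [0.0, 1.0]])]       # tube lens -> camera
    total = np.eye(2)
    for m in elements:
        total = m @ total
    return total

def magnification_curve(d_total, d_sample_range):
    """Lateral magnification A versus d_sample for one channel."""
    mags = []
    for d_s in d_sample_range:
        # solve B(d_cam) = 0 so that a real image lands on the sensor
        b_of_dcam = lambda d_c, d_s=d_s: channel_matrix(d_s, d_c[0], d_total)[0, 1]
        d_cam = fsolve(b_of_dcam, x0=[F_T])[0]
        mags.append(channel_matrix(d_s, d_cam, d_total)[0, 0])
    return np.array(mags)

d_sample = np.linspace(F_O - 0.05, F_O + 0.05, 101)
for d_total in (300.0, 350.0, 400.0):
    mag = magnification_curve(d_total, d_sample)
    print(f"d_total = {d_total:.0f} mm: |A| at d_sample = f_o is "
          f"{abs(mag[np.argmin(abs(d_sample - F_O))]):.2f}")

At d_sample = f_o every channel should report |A| = f_t/f_o, and the curves diverge from one another away from that point, which is exactly the behaviour exploited by the alignment method.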
Note that the fundamental design choice that creates the differing magnification is the different path lengths for each channel. As such, if the microscope is designed with equal path lengths, it may be worthwhile to introduce a temporary path difference to allow alignment using this method. Since both arms will then be set up after alignment to image at the correct working distance, the path difference can be subsequently removed from the modified arm and that arm corrected relative to the unmodified arm. The theoretical plot of lateral magnification vs d_sample can be used to align the position of the objective lens and tube lenses in an infinite-conjugate microscope. For a single channel microscope the practical magnification of the microscope can be measured using for example a graticule sample. Then using this value, d_sample can be interpolated from the theoretical plot (example plot shown in figure <ref>). The difference in distance between d_sample and f_o is the distance the objective lens needs moved to be at the correct working distance and thus correcting the magnification of the microscope to the expected value. We focus here, however, on alignment of multichannel infinite-conjugate microscopes in the case where a calibration sample, such as a graticule, may not be available. In this case, a plot of the modelled lateral magnification vs d_sample for each channel is created and then, so as to mitigate the need to measure true magnifications, each plot is normalised to a single channel (we select the channel with the shortest path length for consistency) to create a plot of normalised magnification vs d_sample for each channel. An example of the plots is given in figure <ref> for a multichannel microscope with three channels where the path lengths of each channel are d_total = 300 mm, 350 mm and 400 mm respectively. All three channels are otherwise identical in terms of cameras and tube lenses. The camera specification of note is the pixel size; we measure the size of identifiable features within the image, and from there estimate the relative magnification for each channel, using image pixel distances. Therefore, different sized camera pixels will be measuring different absolute distances on the imaging plane and must be compensated for when comparing images from different pixel-sized cameras in the same microscope. Note here that it can be seen that at the point where d_sample = f_o = 4.5 mm, the three channels have equal magnifications as expected. Then, as the objective lens is moved away from the correct position, the magnifications vary from one another. When the objective lens is too close to the sample (d_sample < 4.5 mm) the third channel has the smallest magnification. Whereas the first channel has the smallest magnification when the objective lens is too far away from the sample (d_sample > 4.5 mm). This allows unambiguous estimation of both the magnitude and direction of a positioning error of the sample plane. To use the calculated plots for a multichannel infinite-conjugate microscope to align the objective lens and tube lenses the following steps are followed: * Set up the microscope with each channel in focus. * Capture an image on each channel of a sample where the distance in pixels between the same two points can be measured (a specific calibration slide allows absolute magnifications to be calibrated and measured, while a general sample will allow for alignment but not confirm the final magnification). 
* Normalise the distances measured on the calibration images to the distance measured on the channel which was used for the lateral magnification normalisation calculations. * To determine the predicted error in the position of the objective lens, the measured normalised magnification value for the channel with the largest d_total value, can be plotted on the normalised magnification graph and the corresponding d_sample value can be interpolated, which we define as d_sample_interpolated. The error in the position of the objective lens is then calculated according to equation <ref>. If the working distance error is a positive value then the objective lens is that magnitude too far away from the sample, and if it is a negative value then it is that magnitude too close to the sample. working distance error = d_sample_interpolated - f_o There are some considerations to be made with this approach. Since we are normalising distances relative to an ideal system, we are making some important assumptions e.g. all lenses have the design focal length with no consideration of manufacturing tolerances. As such, there are a few areas where it's worth applying a critical approach when working with this alignment algorithm. Large discrepancies in the estimated current value of d_sample between different channels could indicate some of the following issues: * The measurement of the feature size within the image implicitly assumes an image with no distortions. To minimise the impact of any distortions that are present, try to get feature size measurement from a region central to the image in case the magnification differs over the field of view (typical with high magnification, very simple optical systems). This is the easiest problem to test for, since it just requires repeating the computational side of the alignment, without needing new images. * Tolerance differences on tube lenses (a 45 mm nominal focal length lens might have a different focal length as manufactured) are the final source of error we consider. This is likely the source of absolute errors on calibration (when all paths agree on the magnification, but it differs from the expected magnification, or when the calculated d_sample position is significantly different on each path). It is slightly more likely to be observed when a range of different tube lenses are used (e.g. same focal lengths but different lens types or different focal lengths to suit different cameras). This is the hardest to ascertain on a purely image-based system of calibration - it may be that testing of the focal length is required for each lens. Finally, we note that the use of normalised magnification also allows the alignment of channels that have different absolute magnifications and/or cameras with differing pixel sizes. If the normalisation is performed over both relative magnification and relative pixel size, then the error in d_sample can still be estimated. See our sample code for an example of how to implement this normalisation. § PRACTICAL EXAMPLES To show the alignment procedure works in practice, we used the M4All Fluorescence and TIE Microscope, figure <ref>. M4All is an open-source 3D printable microscope system <cit.> which is compatible with the OpenFlexure microscope <cit.>. Full build instructions can be found on the M4All repository <cit.>. 
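Before turning to the practical build, the error-estimation step of the procedure above can be illustrated with a short sketch. The model curve, pixel sizes and "measured" pixel distances below are made-up illustrative numbers; in practice the normalised model curve comes from the RTMA sweep and the measurements come from the calibration images.

import numpy as np

F_O = 4.5                                   # objective effective focal length (mm)
d_sample = np.linspace(4.4, 4.6, 201)       # modelled positions around f_o (mm)
# modelled magnification of the longest channel normalised to the shortest one;
# a linear placeholder is used here, in practice it comes from the RTMA sweep
norm_model_long = np.linspace(0.97, 1.03, 201)

def working_distance_error(measured_px, pixel_size_um):
    """Estimate the working distance error from per-channel pixel distances."""
    physical = np.asarray(measured_px) * np.asarray(pixel_size_um)  # on-sensor distances
    measured_norm = physical / physical[0]     # normalise to the reference channel
    order = np.argsort(norm_model_long)        # np.interp needs increasing x values
    d_interp = np.interp(measured_norm[-1], norm_model_long[order], d_sample[order])
    return d_interp - F_O                      # positive: objective too far from sample

err = working_distance_error(measured_px=[120.0, 121.5, 123.2],
                             pixel_size_um=[1.12, 1.12, 1.12])
print(f"estimated working distance error: {err * 1000:+.0f} um")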
Briefly, this microscope was designed for single channel fluorescence and simultaneous brightfield multifocal plane imaging to enable computational phase contrast microscopy using the transport of intensity equation (TIE) <cit.>. A schematic of the microscope can be seen in figure <ref> (a) along with photos of the microscope in figures <ref> (b) and (c). A turning mirror (Thorlabs PF10-03-P01) placed below the OpenFlexure microscope stage couples the light into the M4All cubes. A 650 nm shortpass dichroic mirror (Thorlabs DMSP650) then reflects fluorescence emission ≥ 650 nm which is focused by a 125 mm focal length tube lens (Thorlabs AC254-125-A) onto an IDS CMOS camera (UI-3060CP-M-GL Rev. 2). The remaining transmitted laser light is split into the three brightfield channels by 30:70 and 50:50 beamsplitters (Thorlabs BSS10R and BSW10R) and focused by 45 mm focal length tube lenses (Thorlabs AC254-045-A) onto Raspberry Pi v2 camera modules. The three brightfield channels have the same d_total values of 300, 350 and 400 mm as the example in figure <ref>. Before altering the positions of the three brightfield channel tube lenses to enable multifocal plane imaging for future work, we first used the ray transfer matrix alignment method to co-align each channel to image at the correct working distance. We first set the microscope up with each brightfield channel in focus and captured an image of a 10 μm graticule sample (as discussed however, any sample where the distance in pixels between the same two feature points can be measured for every channel can be used). As we were using a graticule, a line profile of the graticule was plotted for each channel and the distance in pixels between the same two points on each plot was measured. The distances were then normalised to the first brightfield channel (d_total = 300 mm), which we call the measured normalised magnifications, and used to interpolate d_sample from the plot of normalised lateral magnification vs d_sample in figure <ref> (b). The working distance error was then calculated from equation <ref>. A flow chart of the alignment steps is given in figure <ref>. The alignment process was then iterated until the three channels had equal magnifications within the tolerances of the equipment and d_sample = f_o within the optical axial resolution limit (d_z). For this example microscope the wavelength of light, λ, was 532 nm, and the numerical aperture, NA, of the objective lens was 0.65, resulting in an axial resolution limit of 2.518 μm using Abbe's axial diffraction equation, d_z = 2λ / NA^2. We carried out the alignment process for the situation where the initial position of the objective lens was intentionally too far away from the sample, and again when it was too close to the sample. The results are shown in figure <ref>. In both the cases shown it took three iterations of the alignment process to reduce the working distance error to less than the axial resolution limit (indicated by the red dashed lines on the working distance error graphs). Repeats for intentional misalignment of the objective lens and performing the alignment procedure can be found in supplemental figure 1. Please note, for transparency, the data in figure <ref> was obtained using an older version of our code where the RTMA model computations were performed using MapleTM (Maple is a trademark of Waterloo Maple Inc.) and the magnification analysis was performed in a Jupyter Notebook, both of which are provided as supplemental material. 
We have since written both the RTMA model and analysis code in a single Jupyter Notebook which gives equivalent results and is also provided. The magnification plots in figures <ref> and <ref> were created using our new code. § CONCLUSION Ray transfer matrix analysis within the paraxial approximation has been shown to effectively model the lateral magnifications of the imaging paths in a multichannel infinite-conjugate microscope when the optics are both aligned and misaligned along the optical axis. Furthermore, we have shown how magnification measurements from images acquired on each channel can be used to interpolate objective lens position from the model and how this information can be used to practically align the microscope optics. We have validated this alignment method on an open-source 3D printed multichannel microscope and shown it is a powerful tool when use of additional alignment hardware is not suitable (however, the method is applicable to all multichannel infinite-conjugate imaging systems). We provide the Python code for the ray transfer matrix analysis model and alignment algorithm as a detailed open-source Jupyter Notebook and believe it will be a useful tool for the open-source microscopy hardware community. §.§.§ Data Accessibility All data and code underpinning this publication are available from Zenodo at https://doi.org/10.5281/zenodo.8125287 §.§.§ Competing Interests We declare we have no competing interests. §.§.§ Authors' Contributions G.S.C - conceptualization, data curation, formal analysis, investigation, methodology, software, validation and writing - original draft. B.R.P - conceptualization, funding acquisition, methodology, software, supervision, writing - original draft. §.§.§ Funding This work was funded under grants from the Royal Society (RGF\EA\181058 and URF\R\180017) and EPSRC (EP/M003701/1). When this work was carried out G.S.C. was funded under ‘OPTIMA: The EPSRC and MRC Centre for Doctoral Training in Optical Medical Imaging’ and B.R.P. held a Royal Society University Research Fellowship. §.§.§ Acknowledgements The work presented in this article originally formed part of Gemma S. Cairns' doctoral thesis at the University of Strathclyde <cit.>. vancouver § SUPPLEMENTAL FIGURE 1
http://arxiv.org/abs/2307.04750v1
20230710175544
Quantum oscillations with topological phases in a kagome metal CsTi$_3$Bi$_5$
[ "Yongkang Li", "Hengxin Tan", "Binghai Yan" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
[email protected] Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel Quantum oscillations can reveal Fermi surfaces and their topology in solids and provide a powerful tool for understanding transport and electronic properties. It is well established that the oscillation frequency maps the Fermi surface area by Onsager's relation. However, the topological phase accumulated along the quantum orbit remains difficult to estimate in calculations, because it includes multiple contributions from the Berry phase, orbital and spin moments, and also becomes gauge-sensitive for degenerate states. In this work, we develop a gauge-independent Wilson loop scheme to evaluate all topological phase contributions and apply it to CsTi_3Bi_5, an emerging kagome metal. We find that the spin-orbit coupling dramatically alters the topological phase compared to the spinless case. Especially, oscillation phases of representative quantum orbits demonstrate a strong 3D signature despite their cylinder-like Fermi surface geometry. Our work reveals the Fermi surface topology of CsTi_3Bi_5 and paves the way for the theoretical investigation of quantum oscillations in realistic materials. Quantum oscillations with topological phases in a kagome metal CsTi_3Bi_5 Binghai Yan August 12, 2023 ============================================================================ § INTRODUCTION Kagome lattice, a 2D corner-sharing triangle lattice, has attracted much interest due to its geometric frustration and non-trivial band geometry. Among various materials containing such 2D lattice structure, Kagome material family AV_3Sb_5 (A = K, Rb, Cs)<cit.> receives special attention since it exhibits many exotic quantum phenomena including ℤ_2 topology and flat bands<cit.>, possible unconventional superconductivity<cit.> and density wave order <cit.>. However, because of the interplay and competition between different correlated states, the origin of these physical properties and their relation to the unique electronic structure remains elusive. Recently, a new Ti-based Kagome material ATi_3Bi_5 (A = K, Rb, Cs) isostructural to AV_3Sb_5 is synthesized<cit.> and investigated<cit.>. Unlike V-based AV_3Sb_5 family, the charge density wave (CDW) order is absent in ATi_3Bi_5 family as shown in transport and scanning tunneling microscopy (STM) experiments<cit.>. First-principles calculation also shows the absence of lattice structural instability<cit.>. Hence, ATi_3Bi_5 could serve as a complementary system to AV_3Sb_5, in which the origin of these exotic phenomena and their relation to electronic properties can be investigated without reference to lattice's effect. For example, the observed two-fold rotational symmetry and orbital selectivity in the electronic structure of ATi_3Bi_5 <cit.> may form a pure electronic nematic phase, similar to that in Fe-based high-temperature superconductors<cit.>. Understanding the band structure and Fermi surface of ATi_3Bi_5 is crucial for further investigating these correlating properties. Quantum oscillation measurement is one way to measure the Fermi surface topology as well as its associated properties like cyclotron mass and carrier mobility<cit.>. More importantly, the phase of the fundamental oscillation is related to the band topology. Usually, a π phase shift in the oscillation is regarded as π Berry phase which indicates a topological band structure<cit.>. 
The quantum oscillation analysis from this perspective has been carried out in AV_3Sb_5<cit.> and also recently in ATi_3Bi_5<cit.>, which claims nontrivial band topology due to this π Berry phase. The topological phase actually has other contributions entangled with the Berry phase<cit.>. Especially in the degenerate case with strong spin-orbit coupling (SOC), such a π phase may mainly come from the orbital or spin magnetic moment rather than from the Berry phase, as revealed recently in CsV_3Sb_5 <cit.>. Hence, the analysis of the topological properties based on the phase shift in quantum oscillations should consider all contributions. Apart from the experiment, this phase can be independently evaluated from ab-initio band structures. However, such a calculation has to deal with the gauge-fixing problem in the presence of degeneracy, which is common for centrosymmetric nonmagnetic materials. A numerical treatment of all phase contributions free of gauge ambiguities has not been carried out in detail before. In this work, we develop a Wilson loop method to determine the quantum oscillation phase and apply it to CsTi_3Bi_5. We first detail the method, which is explicitly gauge independent and can be implemented conveniently in the case of degenerate bands. Then, combining this method with first-principles calculations, we resolve the Fermi surface of CsTi_3Bi_5 and determine the total oscillation phase for all quantum orbits. Finally, its relation to the Fermi surface geometry and band topology is clarified. The 3D nature of several representative quantum orbits is imprinted in the topological phase, although the related Fermi surfaces show a cylinder-like shape. Our work provides a useful theoretical tool to investigate the Fermi surfaces and topological electronic properties of materials. § OVERVIEW ON THE QUANTUM OSCILLATION PHASE In the presence of a strong magnetic field, physical quantities (e.g., resistance and magnetization) show oscillations with respect to the magnetic field (B) due to the formation of quantized Landau levels (LLs). In the semiclassical limit, in which the scale of the k-space orbit is much larger than the inverse magnetic length l_B^-1 (l_B=√(ħ /eB)), the oscillation is periodic with respect to 1/B and can in general be expanded as a sum of Fourier series: δ A = ∑_i∑_r A_i,rcos[r(l_B^2 S_F,i+θ_i+ϕ_M,i) + δ_i + φ_A ]. Here, A is the physical quantity being measured, which is usually the magnetization M or the longitudinal resistivity ρ_xx, δ A is the oscillating part, and A_i,r is the oscillation amplitude for the r-th harmonic of the i-th extremal orbit. S_F,i is the momentum-space area of the i-th extremal orbit on the Fermi surface and determines the i-th oscillation frequency. Here the total oscillation phase is decomposed into four parts: θ_i is the first-order correction to the dynamical phase, including the geometric phase and the (orbital and spin) magnetic moment phase. ϕ_M,i is the Maslov correction, which equals π for a simple closed orbit. δ_i is a dimension-related phase resulting from the integration over k_z if a 3D solid is measured (suppose B is along the z direction). The last term φ_A is a phase related to the measured quantity A (see the following discussion). All phases except φ_A depend only on the Fermi surface properties and are universal for any oscillatory quantity. Below we show that each phase can be determined from first-principles calculations to understand experiments. We note that a comprehensive theory of quantum oscillations was established in Refs. <cit.>.
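To make the role of these phase terms concrete, the following synthetic sketch builds a single-orbit, fundamental-harmonic (r = 1) oscillation with a known total phase and reads that phase back from the intercept of a Landau-fan fit; Onsager's relation F = ħS_F/2πe is used to write l_B^2 S_F = 2πF/B. The numbers are illustrative and are not CsTi_3Bi_5 values.

import numpy as np

F = 336.0                        # oscillation frequency in tesla
theta, phi_M = np.pi, np.pi      # orbit phase shift and Maslov correction
delta, phi_A = -np.pi / 4, 0.0   # dimension- and quantity-related phases
total_phase = theta + phi_M + delta + phi_A

inv_B = np.linspace(1 / 40.0, 1 / 5.0, 20000)          # field window 5 T ... 40 T
signal = np.cos(2 * np.pi * F * inv_B + total_phase)   # fundamental harmonic only

# Landau fan: index the maxima, which occur at 2*pi*F/B + total_phase = 2*pi*n
peaks = np.where((signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:]))[0] + 1
n = np.arange(len(peaks))
slope, intercept = np.polyfit(inv_B[peaks], n, 1)

print(f"fitted frequency : {slope:.1f} T (input {F:.1f} T)")
print(f"fitted intercept : {intercept % 1:.3f} "
      f"(input total phase / 2pi = {(total_phase / (2 * np.pi)) % 1:.3f})")

In an experiment the same construction is applied to the measured maxima (or minima) of the oscillatory quantity, and the recovered intercept contains the sum of all the phases discussed below.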
We first overview this theory and then introduce the Wilson loop method to compute the topological phase. §.§ Phase θ The first two phases θ and ϕ_M (below we focus on a single orbit and ignore the subscript i) are related to LLs. In general, there are no simple rules to determine the exact LL for arbitrary band structure. However, in the semiclassical limit, approximate LL can be determined from Bohr-Sommerfeld-like quantization rules. For a group of D-fold degenerate bands, the j-th LLs can be obtained up to leading order in l_B^-1 as, l_B^2 S(E_a,j) +λ_a + ϕ_M = 2π j + O(l_B^-2/3). a ∈ℤ_D:={1, …, D} is the band index among D degenerate bands and λ_a is a phase that we are interested. λ_a is equivalent to θ if there is no degeneracy,i.e., D=1. ϕ_M is Maslov correction and can be determined from the topology of the orbit, which equals π for a simple closed orbit. Because of degeneracy, D LLs create D oscillation terms with the same frequency F = ħ S_F/2π e by Onsager's relation but different phase shift λ_a. It amounts to a single oscillation term with reduced amplitude C and effective phase shift θ, ∑^D_a=1cos[r(l_B^2 S_F+λ_a+ϕ_M)] = Ccos[r(l_B^2 S_F+θ+ϕ_M)]. For example, all bands are doubly degenerate (D=2) in the presence of combined inversion and time reversal (𝒫𝒯) symmetries, which is the case of kagome metals CsV_3Sb_5 and CsTi_3Bi_5. We regulate λ_1,2 in the range of [-π,π] and then 𝒫𝒯 symmetry leads to λ_1 = - λ_2. Hence, summing two cosine functions in Eq.(<ref>) leads to θ = 0, if |λ_1| < π/2 π, if |λ_1| > π/2 C = |cos(λ_1)| , One can find θ is a quantized topological invariant (0 or π) <cit.> insensitive to orbit details. In general, phase λ_a can be determined from the spectrum {e^iλ_a}_a=1^D of propagator<cit.> 𝒜[𝔬]=exp[i ∮_𝔬{(A+R) · d k+Z(σ^z / v^⊥) d k}]. Here exp means path-ordered product, A(k)_m n=i⟨ u_m k| ∇_k u_n k⟩ is non-Abelian Berry connection and R_m n· d k =∑_l ∉ℤ_DA_m l^x Π_l n^y d k_x / 2 v_y+(x ↔ y) =-iħ∑_l ∉ℤ_DΠ_m l^x Π_l n^y/ε_m k-ε_l kd k_x/2 v_y + (x ↔ y) =-(M_z/ev^⊥)dk, is Roth term and represents the orbital correction (-M_z B_z) to the band energy. Π(k)_l n=⟨ u_l k|(1/ħ)∇_kĤ(k)| u_n k⟩ is velocity matrix element and v=Π_n n is group velocity. ϵ_mk is band energy and v^⊥ is the velocity in xy plane. M_z=i(eħ/2)∑_l ∉ℤ_DΠ_m l^xΠ_l n^y/(ε_m k-ε_l k) - (x ↔ y) is the self rotation part of orbital magnetic moment<cit.>. Furthermore, σ_z,mn = ⟨ u_l k|σ̂_z| u_n k⟩ (σ̂_z is spin Pauli matrix) and Z=g_0 ħ/4m. The last term is the spin Zeeman term. Once the propagator (𝒜[𝔬]) is known, the phase λ_a can be easily obtained by diagonalizing it. Though its formulation is clear in theory, the numerical calculation of this propagator needs to deal with the derivatives in the Berry connection. Besides, the multi-band magnetic moment (including orbital and spin) is a gauge covariant quantity whose matrix elements depend on the gauge. If a random gauge is chosen, the magnetic moment transforms independently at each point along the orbit, rendering the (<ref>) meaningless. To deal with these problems, one can choose a smooth gauge by finding the maximally localized Wannier function<cit.>. Alternatively, the Wilson loop method<cit.> can be applied to avoid the choice of any specific gauge. Below, we shall use the Wilson loop method for the calculation of λ_a. In this way, the quantum orbit is discretized into N segments (Fig.<ref>) and the propagator is written as the product for each segment. 
If the segment is small enough, the exponent can be split into Berry connection and magnetic moment parts. 𝒜[𝔬] = ∏_i=1^Nexp{i[(A(k_i) + R(k_i))· dk_i+Zσ^z/v^⊥|dk_i|]} ≈∏_i=1^Nexp[iA(k_i) · dk_i] exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|]. For numerical calculation, the Berry connection part is usually expressed by an overlap matrix M^i =exp[iA(k_i) · dk_i]. M_i is a D by D matrix with M^i_mn=⟨ u_m k_i+1|u_n k_i⟩. The last ingredient for the propagator is an appropriate expression for the Roth term which shows explicit gauge covariance. It can be written as a summation of velocity matrix elements over all other states as in (<ref>). Instead, we propose another method that considers only D degenerate states on the Fermi surface using the covariant derivative<cit.>. The covariant derivative is defined as |D_α u_n k⟩ =Q_k|∂_α u_n k⟩, Q_k := I-∑_a ∈ℤ_D|u_a k⟩⟨ u_a k|. In numerical calculation, it can be evaluated as an appropriate finite difference<cit.> |D_α u_n k⟩=1/2|q_α|(|u_n, k+q_α⟩-|u_n, k-q_α⟩), where the dual state |u_n, k+q⟩ is a linear combination of |u_n, k+q⟩ and has the property ⟨u_m k| u_n k+q⟩=δ_m n. This ensures the orthogonality between the covariant derivative and states in degenerate space, i.e. ⟨ u_m k| D_α u_n k⟩=0. Dual states are constructed as |u_n, k+q⟩=∑_n^'(S_k, k+q^-1)_n^' n|u_n^', k+q⟩ and (S_k, k+q)_n n^'=⟨ u_n k | u_n^', k+q⟩ . Using covariant derivative, Eq. (<ref>) is expressed only by states inside the degenerate space ℜ_m n· d k = -i/ħ∑_l ∉ℤ_DA_m l^x (ε_n k-ε_l k) A_l n^y d k_x / 2 v_y+(x ↔ y) = -i/ħ∑_l ∉ℤ_D⟨ D_x u_m k| u_l k⟩ (ε_n k-ε_l k) ⟨ u_l k| D_y u_n k⟩ d k_x / 2 v_y + (x ↔ y) = -i/ħ⟨ D_x u_m k| ε_n k-Ĥ(k) | D_y u_n k⟩ d k_x / 2 v_y+(x ↔ y). In Appendix we show both Eq. (<ref>) and Eq. (<ref>) are gauge independent, which can be implemented easily in first-principles calculation. The Eq. (<ref>) is practical for tight-binding models with a small number of bands but quite tedious if the total number of bands is large. The Eq. (<ref>) avoids these problems and focuses only on the degenerate space and it is convenient when covariant derivatives can be easily calculated. §.§ Phase δ The above discussion about phase θ is for a single k-plane perpendicular to the magnetic field. For 3D material, one needs to integrate over k_z to get the contribution from the whole Fermi surface. Extremal orbits will dominate in the integration and this procedure will introduce another phase δ for each of them, which is generally ±π/4 (+ for minimum cross-section and - for maximum cross-section). δ = 0 for 2D material since there is only one k-plane. But for a nearly cylindrical Fermi surface (e.g., Fig.<ref>), δ lies between these two limits. Below we adopt a simple model from Refs. <cit.> to determine δ for every extremal orbit that lies in a mirror plane. Here, we assume 𝒫𝒯 symmetry for simplicity. The oscillation of 3D Fermi surface is calculated first for a 2D plane with thickness dk_z and then integrate with respect to k_z, i.e. A_r = ∑_a ∫ dk_z A_r(k_z) cos[r (2πF(k_z)/B + λ_a(k_z)) + ϕ_M ] ∝∫ dk_z A_r(k_z) cos[r (2πF(k_z)/B + θ(k_z)) + ϕ_M ], where A_r(k_z) is the oscillation amplitude of 2D plane, which depends on k_z through cyclotron frequency F(k_z) and cyclotron mass m(k_z). The relative change of F(k_z) and m(k_z) in the interval where the integral is appreciable is usually small. Hence in the integration of Eq. (<ref>), A_r(k_z) can be treated approximately as a constant while F(k_z) in the cosine function can't be treated as fixed because F(k_z)≫ B. 
Maslov phase ϕ_M remains constant as long as the orbit on the Fermi surface doesn't change its topology. Moreover, 𝒫𝒯 symmetry cause the phase θ(k_z) quantized to 0 or π as in Eq. (<ref>). So the k_z dependence of θ can also be ignored and only the k_z-variation of F(k_z) needs to be considered. We expand F(k_z) near its extremal value to the fourth order and all odd orders are zero due to mirror symmetry. F(k_z) = F_0 + 1/2F_2k_z^2 + 1/24F_4k_z^4. Introducing dimensionless variable x = (2r|F_2|/B)^1/2 k_z and α = sgn(F_2)F_4B/24 r |F_2|^2 then the integration can be calculated as A_r ∝Re∫exp[i(2π rF(k_z)/B + rθ + ϕ_M)] dk_z ∝Re exp[i(2π rF_0/B + rθ + ϕ_M) ] ∫exp[sgn(F_2)iπ/2 x^2 (1+α x^2)]dx ∝cos[r (2πF_0/B + θ) + ϕ_M + δ]. where phase δ is the argument of the last integral δ = arg{∫^x_m_x_mexp[sgn(F_2)iπ/2 x^2 (1+α x^2)] dx}. δ was numerically determined by carrying out the integral with given value α <cit.>, for which F_2, F_4 can be found from the polynomial fitting of F(k_z) around the extremal orbit. The integral limit x_m can be taken as ∞ when α>0 because the main contribution comes from x≈ 0. However, this argument does not apply when α<0 due to the two extra artificial extrema. Since the real cross-section varies monotonically on either side of x=0, x_m should be taken less than the turning point 1/√(2|α|) to avoid these artificial extrema. In calculation, the argument of the integral goes to a steady value before the turning point, which should be assigned as δ. It's obvious that δ= 0 from Eq. (<ref>) when F(k_z)=F_0. For a general 3D material, if α→ 0 (i.e., F_4 → 0 and F_2 k_z^2 is the leading dispersion), one can get δ=±π/4. Otherwise, δ may take a value between 0 and ±π/4. §.§ Phase φ_A The last phase φ_A depends on the type of physical quantity A. When A is the density of states (DOS), this phase vanishes φ_DOS=0. For other quantities, φ_A represents the connection between the oscillation of A and the oscillation of DOS. For example, φ_M=π/2 if A is sample magnetization, and φ_χ=π if A is magnetic susceptibility. In four terminal devices, the longitudinal conductivity σ_xx oscillates in phase with DOS hence φ_σ=0. But since σ_xx=ρ_xx/(ρ_xx^2+ρ_xy^2), the resistivity ρ_xx can be in phase (if ρ_xx≪ρ_xy) or out of phase (if ρ_xx≫ρ_xy) with σ_xx, so φ_ρ = 0 if ρ_xx≪ρ_xy or φ_ρ = π if ρ_xx≫ρ_xy<cit.>. To summarize, all the phases in the oscillation term Eq. (<ref>) have the following intuitive explanations. First, the magnetic-field-dependent term l_B^2 S_F is given by the combination of the de Broglie phase (determined by the number of wavelengths in an orbit) and the Aharonov–Bohm phase. Then there is a phase λ_a associated with each orbit and each band coming from geometric effects and magnetic moment energy. λ_a of degenerate bands for the same orbit will combine to give the phase θ. The reflection of the wave packet at turning points in the orbit causes phase ϕ_M. These phases are the total phase for a single orbit lying in the kx-ky plane. For 3D materials, k_z integration needs to be carried out to incorporate the whole Fermi surface's contribution, which gives phase δ. At last, depending on what quantity A is measured, there will be another phase ϕ_A if the oscillation of A is not synchronized with the oscillation of DOS. § RESULTS AND DISCUSSIONS The crystal structure of CsTi_3Bi_5 is fully relaxed within the Density Functional Theory (DFT) as implemented in the Vennia ab-inito Simulation Package <cit.>. The cutoff energy for the plane-wave basis set is 300 eV. 
The force convergence criteria is 5 meV/Å. The electronic structure is calculated with the full-potential local-orbital minimum-basis code (FPLO) <cit.>. The default atomic basis sets are employed for the wave function expansion. The generalized gradient approximation parameterized by Perdew, Burke, and Ernzerhof (PBE) <cit.> is employed to mimic the exchange-correlation interaction between electrons throughout. The Brillouin zone is sampled by a k-mesh of 12×12×6. The tight-binding Hamiltonian of CsTi_3Bi_5 is extracted via the maximally localized Wannier functions <cit.> as implemented in FPLO, which enforces all crystal symmetries. The Wannier basis set is composed of the Ti d and Bi p orbitals. The Fermi surface is calculated with the tight-binding Hamiltonian on a k-mesh of 300×300×100. We mention that the above Wilson loop method for the total oscillation phase shift has been successfully applied to the 𝒫𝒯 symmetric kagome metal CsV_3Sb_5 <cit.>, which predicted consistent results with experiments. In the following, we will apply the Wilson loop method to the recently discovered kagome superconductor CsTi_3Bi_5 <cit.> to further demonstrate the reliability of this method. We note here that the characterization of the dimensionality of the quantum orbit by the phase δ has not been discussed in our previous work on CsV_3Sb_5. The band structure of CsTi_3Bi_5 with spin-orbit coupling is plotted in Fig.<ref>(a), which contains rich topological properties. Due to the 𝒫𝒯 symmetry in CsTi_3Bi_5, each band is doubly degenerate. Characteristic features of the kagome lattice, such as Dirac points at K/H points away from the Fermi level which are gapped by SOC, van Hove singularities at M/L, and flat bands along M-K/L-H lines <cit.>, are shown. There are also type II Dirac crossings on the Γ-M and A-L lines, which form a Dirac nodal line <cit.> in the Γ-M-A plane. Besides, both the experiment and theory have shown that CsTi_3Bi_5 has topological Dirac surface states at the Γ point on the (001) surface <cit.>. The band structure on the k_z = 0 plane looks similar to the band structure on the k_z = 0.5 plane (in units of 2π/c, c is the lattice constant), which indicates the quasi-two-dimensional feature of the electronic structure of CsTi_3Bi_5. Indeed, the 3D Fermi surface shown in Fig.<ref>(b) shows a good cylindrical shape for all pieces. There are totally four bands crossing the Fermi level creating five pieces of the Fermi surface. By sweeping k_z, all extremal quantum orbits perpendicular to the z-direction are found to locate at the two mirror planes k_z=0 and k_z=0.5, shown in Fig.<ref>(c) and (d). The initial experiment reported an oscillation frequency of 200 T <cit.>. A more recent transport experiment <cit.> reported a series of oscillation frequencies, ranging from 217 to 1013 T. Our calculations show agreement with the experiments in the low-frequency region. For example, the calculated frequencies of 213, 336, and 542 T might correspond to the observed frequencies of 200/217, 281, 498 or 594 T, respectively. We notice that our calculated frequencies are slightly different from the calculations in Ref. Dong2023CTB, which might be induced by the mismatch of Fermi energy and/or different calculation parameters employed. The cyclotron masses m^* of all calculated quantum orbits are summarized in Table <ref>. Except for the two small pockets (336 and 213 T) around M/L points, all other orbits are electron pockets, whose cyclotron masses are defined as positive. 
The two largest hexagonal orbits centered around the Γ point (7488 and 8111 T) have the largest cyclotron masses (1.6∼1.7) while others have relatively small cyclotron masses. The different quantum oscillation phases, as mentioned above, of all orbits are calculated and listed in Table.<ref>. Here every cyclotron orbit is a simple closed curve; thus the Maslov correction ϕ_M=π is omitted in the table. The phase λ_a is calculated by Eq. (<ref>) with random gauge choices to test the gauge invariance, which presents the same results. We also confirm the relation λ_1 = -λ_2 for any two degenerate quantum orbits imposed by the 𝒫𝒯 symmetry. Thus only the positive one λ_1 is listed. The Berry phases without (ϕ_B0) or with SOC (ϕ_B) are also listed for comparison. According to our previous discussion of Eq. (<ref>), the final phase shift of the quantum orbit θ must be quantized to either 0 or π, depending on the magnitude of λ_1, as listed in Table.<ref>. From these phases, it's clear that phase λ_1 is in general different from the Berry phase ϕ_B due to the orbital and spin magnetic moment contribution. Also for the 𝒫𝒯-symmetric system, the topology of the quantum orbit is not equivalent to the band topology of the individual Fermi surface. For example, the quantum orbits of 336 T (around M) and 8111 T (around Γ) have Berry phases ϕ_B close to 0 but the oscillation phase shifts are π. On the contrary, the quantum orbit of 4907 T (around A) has a Berry phase close to π but a zero oscillation phase shift. We note here that the strong SOC is important because these orbits have only a trivial Berry phase in the spinless case. Therefore, the incorporation of the magnetic moment contribution in the oscillation phase by SOC is crucial and the quantum phase shift extracted from the Landau fan diagram should be interpreted more carefully, rather than just interpreting it as the Berry phase. The recent experiment <cit.> finds that the quantum orbit of 281 T is non-trivial with a π phase shift (θ=π), which is consistent with our calculated non-trivial quantum orbit of 336 T. Because the 3D Fermi surface is nearly cylindrical, the dimension-related phase δ should be determined by considering higher order terms in the expansion of F(k_z) in Eq. <ref>. From numerical calculations, the frequency F and cyclotron mass m^* of all extremal orbits have a small relative change on the Fermi surface (less than 5% in the interval |Δ k_z| ≤ 0.1). Since CsTi_3Bi_5 has 𝒫𝒯 symmetry and all extremal orbits locate in mirror planes, Eq. (<ref>) applies, which is used to calculate phase δ. The phase δ is calculated with the magnetic field B varying from 5 T to 40 T, covering the range of B in general oscillation experiments <cit.>. The variation of δ is very small in the considered B range. Thus the δ can be approximately treated as a constant, whose average value is listed in Table <ref>. It shows that all quantum orbits except for the 213 and 802 T ones have a phase δ quite close to ±π/4. Therefore, most orbits should be classified as 3D cases in quantum oscillation, even though the Fermi surfaces in Fig. <ref>(b) show a strong quasi-2D feature. On the other hand, the Fermi surface around A is almost dispersionless along k_z, so the δ for the quantum orbit of 802 T is closer to zero than others. As a result, this quantum orbit is 2D. However, the quantum orbit of 713 T which comes from the same Fermi surface as the 802 T orbit but on the k_z=0 plane, has a δ=π/4. 
Consequently, the character (2D or 3D) of a quantum orbit should not be simply determined from the appearance of the related Fermi surface in the 3D k space. § CONCLUSION We theoretically studied the quantum oscillations by revealing their frequencies and topological phases through a Wilson loop method in CsTi_3Bi_5. We revealed three quantum orbits with θ = π phase shift. Despite most Fermi surfaces are quasi-2D, the dimensional-related phase δ, beyond the angle-dependent frequency, clearly indicates their 3D nature. Our method can be applied to other quantum materials and provides a general way to study quantum oscillations assisted by first-principles calculations. § ACKNOWLEDGEMENT B.Y. acknowledges the financial support by the European Research Council (ERC Consolidator Grant “NonlinearTopo”, No. 815869) and the ISF - Personal Research Grant (No. 2932/21). § APPENDIX The most general gauge transformation is a U(D) basis transformation among the degenerate bands |u_n k⟩ →∑_m=1^D U(k)_m n|u_m k⟩ , U^-1=U^†, It has already been shown that the propagator 𝒜[𝔬] is gauge covariant under such transformation<cit.> provided that the same wave function is used at the initial point and the final point, i.e. | u(k_N+1) ⟩=| u(k_1) ⟩. Here we use the same way to show our numerical formula inherits this property so it's appropriate for calculation. First, covariant derivatives transform as states under the U(D) gauge transformation |u_n, k+q⟩ →∑_n^'(U(k)^† S_k, k+q U(k+q))^-1_n^' n|u_n^', k+q⟩ = ∑_n^',m,l,m^'U(k+q)^-1_n^'m (S_k, k+q^-1)_m l U(k)_l n U(k+q)_m^'n^'|u_m^', k+q⟩ = ∑_m,l (S_k, k+q^-1)_m l U(k)_l n|u_m, k+q⟩ = ∑_l U(k)_l n|u_l, k+q⟩ which makes the covariant derivative expression of Roth term (<ref>) transform covariantly. This is also true for the matrix elements expression of Roth term (<ref>) and spin matrix σ_z, meaning that R(k_i)_m n· d k_i → U(k_i)^-1R(k_i)_m n· d k_i U(k_i) σ_z(k_i)_m n· d k_i → U(k_i)^-1σ_z(k_i)_m n· d k_i U(k_i) Therefore, the second term in (<ref>) is also gauge covariant exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|] →exp{i U(k_i)^-1[R(k_i) · dk_i+Zσ^z/v^⊥|dk_i|]U(k_i)} = U(k_i)^-1exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|] U(k_i) Besides, the overlap matrix M^i_mn=⟨ u_m k_i+1|u_n k_i⟩ transforms like M^i→ U(k_i+1)^-1 M^i U(k_i) Hence, the covariance of discretized propagator (<ref>) follows from the transformation properties of the two separate terms as 𝒜[𝔬] →∏_i=1^N U(k_i+1)^-1 M^i U(k_i) · U(k_i)^-1 exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|]U(k_i) = U(k_N+1)^-1 {∏_i=1^N M^i·exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|]} U(k_1) = U(k_1)^-1𝒜[𝔬] U(k_1) Since propagator 𝒜[𝔬] transforms covariantly, its spectrum {e^iλ_a}_a=1^D is gauge invariant. In other words, the phase λ_a obtained through these numerical formulas is uniquely determined (module 2π) independent of gauge choice in the calculation.
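To complement the gauge-covariance argument of the Appendix, the discretized propagator can be assembled numerically for a toy model. The Python sketch below uses a spinless two-band massive Dirac Hamiltonian, so the band on the orbit is non-degenerate (D = 1) and the spin Zeeman term drops out; units are set to ħ = e = 1. It only illustrates the gauge-invariant Wilson-loop construction of the phase λ_a and the quantized θ, and is not the FPLO/Wannier workflow used for CsTi_3Bi_5.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(k, m=0.3):
    """Spinless massive Dirac toy Hamiltonian."""
    return k[0] * sx + k[1] * sy + m * sz

VX, VY = sx, sy                     # velocity operators dH/dkx, dH/dky

def lambda_and_theta(radius=1.0, n_band=0, N=2000):
    """Discretized propagator A[o] on a circular orbit for a non-degenerate band."""
    ts = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    ks = [radius * np.array([np.cos(t), np.sin(t)]) for t in ts]
    prop, berry = 1.0 + 0j, 1.0 + 0j
    for i in range(N):
        k, k_next = ks[i], ks[(i + 1) % N]
        dk = np.linalg.norm(k_next - k)
        e, u = np.linalg.eigh(H(k))
        _, u_next = np.linalg.eigh(H(k_next))
        un = u[:, n_band]
        overlap = np.vdot(u_next[:, n_band], un)        # Berry-connection factor
        # Roth (orbital moment) factor: exp(-i * Mz / v_perp * dk), hbar = e = 1
        Pix = un.conj() @ VX @ u                        # <n|v_x|l> for all l
        Piy = un.conj() @ VY @ u
        l = 1 - n_band                                  # the single remote band
        Mz = (1j / 2) * (Pix[l] * np.conj(Piy[l]) - Piy[l] * np.conj(Pix[l])) \
             / (e[n_band] - e[l])
        v_perp = np.hypot(Pix[n_band].real, Piy[n_band].real)
        prop *= overlap * np.exp(-1j * (Mz.real / v_perp) * dk)
        berry *= overlap
    lam = np.angle(prop)
    theta = 0.0 if abs(lam) < np.pi / 2 else np.pi
    return lam, np.angle(berry), theta

lam, berry_phase, theta = lambda_and_theta()
print(f"Berry phase      = {berry_phase:+.3f} rad")
print(f"lambda_1 (total) = {lam:+.3f} rad  ->  theta = {theta:.3f}")

Because only closed-loop products and bilinear combinations of matrix elements enter, the printed λ_1 does not change if the eigenvectors returned by eigh are multiplied by arbitrary phases, which is the property proved in the Appendix.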
http://arxiv.org/abs/2307.04000v1
20230708155126
Synthesis of resonant modes in electromagnetics
[ "Antonello Tamburrino", "Carlo Forestiere", "Giovanni Miano", "Guglielmo Rubinacci", "Salvatore Ventre" ]
physics.optics
[ "physics.optics", "physics.class-ph" ]
Department of Electrical and Information Engineering M. Scarano, Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy. Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI-48824, USA. e-mail: [email protected] Department of Electrical Engineering and Information Technology, Università degli Studi di Napoli Federico II, via Claudio 21, Napoli, 80125, Italy Department of Electrical and Information Engineering M. Scarano, Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy. Resonant modes determine the response of electromagnetic devices, including dielectric and plasmonic resonators. Relying on the degrees of freedom that metamaterials provide, this contribution shows how to design, at will, the resonant modes of a dielectric object placed in an unbounded space. Specifically, the proposed method returns in analytical form the spatial distribution of the dielectric susceptibility tensor for which the object exhibits resonances at prescribed frequencies and spatial distributions of the polarization. Together with the synthesis of the material, two key concepts are introduced: the controlled tunability of the resonant modes and the number of essential modes, i.e. the number of modes that uniquely characterize the spatial distribution of the dielectric susceptibility. Moreover, this approach can be applied to design the resonant modes of any system where the constitutive relationship is linear and local. Synthesis of resonant modes in electromagnetics Salvatore Ventre August 12, 2023 ================================================ Media with a spatially inhomogeneous refractive index have fascinated humankind for millennia, exhibiting counter-intuitive effects such as mirages or fata morgana. Archaeological evidence spanning several millennia indicates that humans learned how to engineer refractive index variations to make lenses in antiquity. More recently, nano-fabrication techniques, the discovery of materials with tunable permittivity, and the introduction of the metamaterial concept <cit.> have greatly expanded the landscape of feasible permittivity distributions for electromagnetic design. Anisotropic and even continuous effective variations of the permittivity can now be implemented. Using the degrees of freedom in the choice of the materials, it is possible to control the electromagnetic field, as shown by Pendry et al. <cit.>, who introduced transformation optics <cit.>. They showed that the permittivity and permeability effectively determine a curved spatial geometry for the electromagnetic field. Thus, leveraging this analogy, they showed how anisotropic and inhomogeneous permittivity and permeability profiles can redirect the electromagnetic field in a prescribed way. Recently, several optimization methods have been introduced to design materials that achieve a prescribed electromagnetic response while incorporating fabrication constraints <cit.>. In this manuscript, we take a fresh path to the design of the electromagnetic resonances of a scatterer, which play a central role in electromagnetic devices, e.g. <cit.>. Plasmonic and dielectric nano-resonators are an interesting example. When the resonance condition is met, the near-field and far-field characteristics of the device are dominated by the corresponding resonant mode.
We introduce a theoretical framework that enables the synthesis of the spatial distribution of the permittivity profile of a dielectric object, to design its resonant modes, i.e. polarization current density distributions. The designer first specifies, in the spatial domain occupied by the object, one or several modes, together with the corresponding resonant frequencies. Then, the synthesis process returns the possibly inhomogeneous and anisotropic permittivity profile which guarantees that the dielectric object exhibits the prescribed modes at the specified resonance frequencies. It is a direct method: it does not require the use of any optimization approach, but explicitly returns the analytical solution in a single step. The synthesis approach leverages a formulation of the generalized eigenvalue problem where the contributions of the material and of the electromagnetic field are separated. Yet, this approach is very general: it can be applied to any system where the constitutive relationship is linear and non-spatially dispersive. For instance, it can be used to design the properties of an elastic material to control its vibrational modes. In addition, the proposed framework allows one to clearly identify the physical feasibility and the limitations inherent to the problem of the design of the modes. The main outcome is that the maximum number of modes (essential modes) that can be prescribed at a given resonance frequency is equal to the dimension of the problem (two for a 2D problem and three for a 3D problem). These are inherent physical limits unveiled by the proposed framework. Finally, we also address the problem of tunability where, by scaling the dielectric susceptibility, we can completely change the resonance properties in a controlled way. This feature enables the design of tunable materials, where one can adapt the response of the material dynamically, according to specific needs. § MODES AND EIGENVALUE PROBLEM We consider a linear, nonmagnetic and non-spatially dispersive dielectric of finite size, shown in Fig. <ref>. We denote the space occupied by the dielectric by Ω, its boundary by ∂Ω, and the outward-pointing unit normal to ∂Ω by 𝐧. Under these assumptions, the polarization density 𝐏 is given by 𝐏( 𝐫,ω) = ε_0χ( 𝐫,ω) ·𝐄( 𝐫,ω), where χ is the dielectric susceptibility tensor, ω is the angular frequency (the e^jω t time behavior is assumed), ε_0 is the vacuum permittivity, and · corresponds to the usual dot product between tensors and vectors. When the dielectric scatterer is excited by an external electric field 𝐄^i, the total electric field 𝐄 can be written as the sum of 𝐄^i and of the reaction field 𝐄^P due to the presence of the polarization current density jω𝐏. The constitutive relation can be written as 1/ε_0γ( 𝐫, ω) ·𝐏( 𝐫, ω) - 𝐄^P( 𝐫, ω) = 𝐄^i( 𝐫, ω) in Ω, where the tensor γ is the pointwise inverse of χ, i.e. γ( 𝐫,ω) =χ^-1( 𝐫,ω). Let ℰ( ω) be the operator giving the electric field produced by a prescribed polarization density field 𝐏 radiating in free space at frequency ω <cit.>: 𝐄^P( 𝐫) =jω∫_Ω𝐆 ( 𝐫-𝐫^') 𝐏( 𝐫^') dS^' where 𝐆 is the proper electric-electric dyadic Green function. For any prescribed angular frequency ω, the electromagnetic scattering is governed by the integral equation 1/ε_0γ·𝐏 - ℰ( ω) 𝐏=𝐄^i in Ω. Two particularly significant auxiliary eigenvalue problems can be defined starting from Eq. <ref>, setting the exciting field to zero and assigning the material tensor γ.
Quasi Normal Modes <cit.> (QNM) are nontrivial solutions ω and 𝐏 of ℰ( ω) 𝐏=1/ε_0γ·𝐏 in Ω. QNM are often used to characterize micro- and nano-resonators <cit.>, enabling the calculation of synthetic parameters such as the quality factor, the mode volume <cit.>, and the Purcell factor. QNM are also used in <cit.> to expand the response of micro- and nano-resonators, highlighting the contribution of the individual modes to the overall scattering response. The eigen-frequencies ω are complex numbers, i.e. ω∈ℂ, and (ω, 𝐏) forms a (generalized) eigenvalue/eigenvector pair. Material Modes are nontrivial solutions ξ∈ℂ and 𝐏 of ℰ( ω) 𝐏=ξ1/ε_0γ·𝐏 in Ω, where the frequency ω∈ℂ is prescribed. ξ and 𝐏 form a (generalized) eigenvalue/eigenvector pair. These modes, for ω∈ℝ and a uniform and isotropic material (χ(𝐫) = χ, a scalar constant in Ω), have already been investigated in <cit.>, and have been used to expand the electromagnetic response of nano-resonators <cit.>, and also to design the scalar permittivity of a homogeneous object to achieve a prescribed scattering response, such as scattering cancellation or maximization <cit.>. In this work χ may be non-uniform and/or non-isotropic, and ω may be complex. The characteristic feature of the eigenvalue/eigenvector pair for (<ref>) is to be a homogeneous function of γ, i.e. if γ^'=αγ, then 𝐏^' =𝐏 and 1/ξ^' =α (1/ξ) give an eigenvalue/eigenvector pair for γ^'. Specifically, the eigenvector 𝐏 is a 0-degree homogeneous function, whereas the reciprocal of the eigenvalue ξ is a 1-degree homogeneous function. Owing to this property, we term these modes Homogeneous Material Modes. Homogeneous Material Modes have been successfully introduced in low-frequency electromagnetism for eddy current tomography <cit.>. A unique feature of Material Modes and, more generally, of Homogeneous Material Modes is that, since the eigenvalue ξ and the eigenvector are homogeneous functions of χ, it is possible to tune the electromagnetic system to different resonant modes by scaling the susceptibility. This feature, which we call tunability, opens the door to a systematic design of reconfigurable materials and will be discussed in detail in a subsequent Section. § SYNTHESIS OF MODES (SOM) In this Section, we introduce a theoretical framework enabling the synthesis of the dielectric susceptibility tensor χ = χ( 𝐫, ω) of the object, such that it exhibits the set of resonance modes {(ω_k,ξ_k,𝐏_k) }_k=1… N at prescribed frequencies ω_k. Each individual mode is described by the triplet ( ω_k,ξ_k,𝐏_k). Hereafter, ω_k is referred to as the frequency eigenvalue, ξ_k as the material eigenvalue, and 𝐏_k as the spatial mode. The problem consists in solving, for a proper γ_k ( 𝐫) = γ( 𝐫, ω_k ), the set of equations imposing the modes ℰ( ω_k ) 𝐏_k=ξ_k 1/ε_0γ_k ·𝐏_k in Ω, for k=1, …, N. The synthesis is carried out in two steps. First, we solve the problem at each prescribed angular frequency ω_k by evaluating γ_k as the solution of (<ref>). Then, we interpolate in the frequency domain the collection of tensors χ_1, …, χ_N, being χ_k = γ_k^-1. Hereafter, we consider the scenario where the electromagnetic problem is x_3-invariant and the electric field is transverse to the x_3-axis. This is a 2D case where the tensor is of the type χ( 𝐫, ω) =∑_l,m=1^2χ_lm( 𝐫, ω) 𝐞 _l 𝐞_m, the electric field is 𝐄( 𝐫, ω) =E_1( 𝐫, ω) 𝐞_1+E_2( 𝐫, ω) 𝐞_2, 𝐫=x_1𝐞_1 +x_2𝐞_2, and 𝐞_1 and 𝐞_2 are the unit vectors along the x_1 and x_2 directions, respectively. The elements of the Green function are given in Appendix <ref>.
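Before moving to the synthesis itself, the homogeneity and tunability properties of the Material Modes can be checked at the discrete level. In the Python sketch below, random matrices stand in for a discretized ℰ(ω) operator and for the material matrix (1/ε_0)γ; they are not built from the actual 2D Green function.

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 40                                    # number of discrete unknowns
E_op = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # stands for E(omega)
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))     # stands for (1/eps0) gamma

# material (generalized) eigenpairs:  E_op @ P = xi * G @ P
xi, P = eig(E_op, G)

# homogeneity: gamma -> alpha * gamma leaves the modes unchanged and 1/xi -> alpha/xi
alpha = 2.5
xi_scaled, _ = eig(E_op, alpha * G)
print(np.allclose(np.sort_complex(1 / xi_scaled), np.sort_complex(alpha / xi)))

# tunability: rescaling the material so that gamma -> xi_k * gamma (chi -> chi/xi_k)
# makes the k-th mode an exact source-free solution at the same frequency
k = 0
residual = E_op @ P[:, k] - (xi[k] * G) @ P[:, k]
print(np.linalg.norm(residual) / np.linalg.norm(G @ P[:, k]))   # ~ numerical precision

The second check is the discrete counterpart of the tunability statement made later: rescaling the synthesized material by the material eigenvalue turns the corresponding mode into an exact source-free solution at the prescribed frequency.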
§.§ Synthesis of Modes at a prescribed angular frequency Given a prescribed angular frequency ω_k, we distinguish two cases: (i) a single mode is prescribed or (ii) two modes are prescribed. In a 3D setting, one has to include also the third case, in which three modes are prescribed; its treatment is a straightforward extension of the one needed when two modes are prescribed. Single mode case. Let (ω_k,ξ_k,𝐏_k) be an individual prescribed resonance mode at frequency ω_k, where ω_j ≠ ω_k for j ≠ k. The solution of equation (<ref>) can be expressed in explicit form as γ_k(𝐫) = ε_0 𝐄_k(𝐫)/(ξ_k |𝐏_k(𝐫)|^2) 𝐏_k^∗(𝐫) + α_k(𝐫) 𝐯_k(𝐫) 𝐩_k^∗(𝐫), where ∗ is the complex conjugate operation, 𝐄_k = ℰ(ω_k) 𝐏_k, 𝐩_k(𝐫) ⊥ 𝐏_k(𝐫) for almost every (a.e.) 𝐫∈Ω [Here 𝐚(𝐫) ⊥ 𝐛(𝐫) means that 𝐚^∗(𝐫) ·𝐛(𝐫) = 0.], 𝐯_k is an arbitrary vector field and α_k is an arbitrary scalar field. The solution γ_k given in equation (<ref>) can be easily verified by plugging it into equation (<ref>). A possible choice for 𝐩_k is 𝐩_k = ℛ𝐏_k^∗, where ℛ is the 90^∘ rotation operator in the counterclockwise direction. We notice that ℛ𝐏_k^∗(𝐫) = 𝐏_k^∗(𝐫) ×𝐞_3, where 𝐞_3 is the unit vector along the x_3 direction. Finally, we highlight that by means of the explicit solution of equation (<ref>) one can easily check whether γ_k is bounded or continuous. Specifically, if 𝐄_k and 𝐏_k are continuous (piecewise continuous) and |𝐄_k|/|𝐏_k| is bounded, then γ_k is continuous (piecewise continuous). We conclude this Section with a remark about the scalar case. When 𝐄_k ∥ 𝐏_k, i.e. 𝐏_k(𝐫) = ε_0 χ_k(𝐫) 𝐄_k(𝐫) with χ_k a scalar field, Eq. (<ref>) returns a scalar susceptibility tensor (homogeneous material): γ_k = 1/(ξ_k χ_k) ℐ, where ℐ is the unit dyad. Indeed, Eq. (<ref>) follows from (<ref>) by choosing 𝐩_k(𝐫) = 𝐏_k^∗(𝐫) ×𝐞_3, 𝐯_k(𝐫) = 𝐄_k^∗(𝐫) ×𝐞_3, α_k(𝐫) = χ_k^∗(𝐫)/χ_k(𝐫), and by observing that 𝐮𝐮^∗ + (𝐮^∗×𝐞_3)(𝐮×𝐞_3) gives the (2D) unit dyad ℐ when 𝐮 is an arbitrary unit vector. In this case, the prescribed mode is a material-independent mode <cit.>. Two isofrequential modes. Let ω_1 = ω_2 ≠ ω_j for j>2, and let (ω_1,ξ_1,𝐏_1) and (ω_2,ξ_2,𝐏_2) be the prescribed resonance modes. Let the solution be expressed as γ_1(𝐫) = ∑_l,m=1^2 Γ_lm(𝐫) 𝐔_l(𝐫) 𝐏_m^∗(𝐫), where Γ_lm(𝐫) ∈ℂ and 𝐔_l = ε_0 ℰ(ω_1) 𝐏_l/ξ_l, l=1,2. To find the unknown coefficients Γ_lm, we observe that by imposing Eq. (<ref>) on the two prescribed resonance modes we have 𝐔_r(𝐫) = γ_1(𝐫) ·𝐏_r(𝐫) for a.e. 𝐫∈Ω and r=1,2. Then, by left multiplying this expression by 𝐔_s^∗(𝐫), we have 𝐔_s^∗·𝐔_t = ∑_l,m=1^2 (𝐔_s^∗·𝐔_l) Γ_lm (𝐏_m^∗·𝐏_t) in Ω, s,t=1,2, which, in matrix form, gives 𝐆_U(𝐫) = 𝐆_U(𝐫) Γ(𝐫) 𝐆_P(𝐫), where (G_U)_st = 𝐔_s^∗·𝐔_t, (G_P)_ik = 𝐏_i^∗·𝐏_k and Γ is the matrix made of the unknown coefficients Γ_lm. When both 𝐆_U and 𝐆_P are invertible at location 𝐫, the solution of (<ref>) exists, is unique and is given by Γ(𝐫) = 𝐆_P^-1(𝐫). In the remaining cases, i.e. 𝐆_P and/or 𝐆_U non-invertible, the solution may not exist or may not be unique. It is worth noting that the matrices 𝐆_U and 𝐆_P are Gram matrices and, therefore, 𝐆_U = 𝐆_U^†, 𝐆_U ≥ 0, 𝐆_P = 𝐆_P^† and 𝐆_P ≥ 0. Moreover, the inverse of (<ref>) is (when it exists) χ_1 = ∑_l,m=1^2 Γ_ml^D 𝐏_m 𝐔_l^∗, where Γ^D = 𝐆_U^-1. 
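As a sanity check on the two-mode formulas, the following sketch (our own illustration, not the authors' code) performs the synthesis at a single spatial point: the prescribed in-plane polarizations P_1, P_2 and the associated fields U_l = ε_0 ℰ(ω_1)P_l/ξ_l are placeholders drawn at random, Γ is set to G_P^{-1}, γ_1 is assembled from the dyads U_l P_m^*, and the relation γ_1·P_r = U_r is verified, together with the inverse built with Γ^D = G_U^{-1}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prescribed quantities at one point r (assumed values, 2D in-plane complex vectors):
P = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(2)]  # P_1, P_2
U = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(2)]  # U_l = eps0*E(omega_1)P_l/xi_l

# Gram matrices (G_U)_st = U_s^* . U_t and (G_P)_ik = P_i^* . P_k
G_U = np.array([[np.vdot(U[s], U[t]) for t in range(2)] for s in range(2)])
G_P = np.array([[np.vdot(P[i], P[k]) for k in range(2)] for i in range(2)])

# Gamma = G_P^{-1}, then gamma_1(r) = sum_{l,m} Gamma_lm U_l P_m^*
Gamma = np.linalg.inv(G_P)
gamma1 = sum(Gamma[l, m] * np.outer(U[l], P[m].conj())
             for l in range(2) for m in range(2))

# Both prescribed modes are imposed: gamma_1 . P_r = U_r
for r in range(2):
    assert np.allclose(gamma1 @ P[r], U[r])

# Inverse (susceptibility): chi_1 = sum Gamma^D_ml P_m U_l^*, with Gamma^D = G_U^{-1}
GammaD = np.linalg.inv(G_U)
chi1 = sum(GammaD[m, l] * np.outer(P[m], U[l].conj())
           for l in range(2) for m in range(2))
assert np.allclose(chi1 @ U[0], P[0]) and np.allclose(chi1 @ U[1], P[1])
print("two-mode synthesis verified at this point")
```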
§.§ Parameterization of the frequency response Once the inverse of the susceptibility tensor is found at each prescribed angular frequency ω_k, we need to reconstruct the dispersion relation χ(𝐫,ω), which has to satisfy causality through the Kramers-Kronig conditions and the Hermitian symmetry, namely χ(𝐫,-ω) = χ^∗(𝐫,ω). To this purpose, we parameterize the dispersion relation as follows: χ(𝐫,ω) = ∑_m=1^M 𝐚_m(𝐫) φ_m(ω), where M is the number of terms, each expansion function φ_m is causal and Hermitian, and each tensor field 𝐚_m is real. The φ_m depend on the actual realization of the artificial material. A possible choice consists in assuming each expansion function φ_m to be of the Lorentz-Drude type: φ_m(ω) = ω_p,m^2/(ω_0,m^2 - ω^2 + jωβ_m), where causality requires β_m>0. The tensor fields 𝐚_m can be found by point matching, for instance. Within this approach, we enforce the following constraints ∀ k = 1, …, N: ∑_m=1^M 𝐚_m(𝐫) Re{φ_m(ω_k)} = Re{γ_k^-1(𝐫)}, ∑_m=1^M 𝐚_m(𝐫) Im{φ_m(ω_k)} = Im{γ_k^-1(𝐫)}, where Re{·} and Im{·} are the real and imaginary parts of their argument, respectively. Moreover, from (<ref>) and (<ref>), it follows that M = 2N is required for existence and uniqueness of the solution in terms of the unknown tensor fields 𝐚_m. We remark that the parameters ω_p,m, ω_0,m and β_m depend on the actual realization of the artificial material. For instance, ω_0,m does not need to be equal to the resonant (angular) frequency ω_m prescribed for the Synthesis of the Modes. In the remainder of the paper we select the parameters ω_p,m, ω_0,m and β_m so as to avoid the appearance of any resonance due to the expansion functions at the resonant frequencies prescribed for the Synthesis of the Modes. § TUNABILITY AND ESSENTIAL MODES The tunability of the resonance refers to the possibility of changing the properties of a material in a controlled manner. The Synthesis of Modes entails tunability in a natural manner via the material eigenvalues ξ_k. Indeed, after (<ref>), a material with dielectric susceptibility given by χ/ξ_k, where χ is the result of the synthesis of modes, resonates at the angular frequency ω_k. In other terms, we can control the frequency behaviour of a material (the values of the resonance frequencies and the spatial distributions of the related modes) by simply scaling χ by a proper factor. From another perspective, the proposed approach to the synthesis of the modes allows one to obtain the resonance frequencies and the related spatial modes as a function of an individual parameter: a scaling factor in front of the synthesized χ. This feature opens the door to a systematic design of reconfigurable materials. The concept of essential modes refers to the maximum number of modes that can be arbitrarily prescribed at a given angular frequency ω_k. Equation (<ref>) provides the values of the Γ_lm giving the sought inverse of the dielectric susceptibility tensor in (<ref>). This equation sheds light on a special and not obvious physical feature of the modes: two modes are capable of uniquely defining the material property of the scatterer at the prescribed angular frequency. In other words, γ(·,ω_k) is in a one-to-one correspondence with two of its modes at ω_k. From another perspective, only two modes can be assigned in a completely independent manner or, equivalently, all the modes depend upon two arbitrarily selected modes, at a prescribed angular frequency. We term two arbitrary modes in a one-to-one correspondence with χ(·,ω_k) as essential modes. 
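Returning for a moment to the point-matching step introduced above, the following sketch (our illustration; the frequencies, target values and pole positions are placeholders, not the values of the paper's tables, although the choices ω_p,m = ω_0,m and β_m = 0.1 ω_0,m mirror the ones stated later for the examples) solves, at one spatial point and for one tensor component, the 2N×2N real linear system for the coefficients a_m given the synthesized values χ_k = γ_k^{-1} at N frequencies, with M = 2N Lorentz–Drude terms.

```python
import numpy as np

def lorentz_drude(w, w0, wp, beta):
    """phi_m(w) = wp^2 / (w0^2 - w^2 + j*w*beta)."""
    return wp**2 / (w0**2 - w**2 + 1j * w * beta)

# N prescribed frequencies and the corresponding synthesized values of one
# component of chi_k = gamma_k^{-1} at one point (assumed numbers, for illustration).
w_presc = np.array([2.0e9, 3.0e9, 4.0e9]) * 2 * np.pi           # rad/s
chi_presc = np.array([1.8 - 0.2j, 2.4 + 0.1j, 0.9 - 0.4j])       # target chi(r, w_k)

N = len(w_presc)
M = 2 * N                                                        # M = 2N expansion terms
w0 = np.linspace(1.5e9, 5.0e9, M) * 2 * np.pi                    # placeholder pole positions
wp, beta = w0.copy(), 0.1 * w0                                   # wp_m = w0_m, beta_m = 0.1 w0_m

# Point matching: sum_m a_m Re{phi_m(w_k)} = Re{chi_k},  sum_m a_m Im{phi_m(w_k)} = Im{chi_k}
Phi = np.array([[lorentz_drude(w, w0[m], wp[m], beta[m]) for m in range(M)] for w in w_presc])
A = np.vstack([Phi.real, Phi.imag])          # (2N x 2N) real system matrix
b = np.concatenate([chi_presc.real, chi_presc.imag])
a = np.linalg.solve(A, b)                    # real coefficients a_m

# The interpolated dispersion chi(w) = sum_m a_m phi_m(w) now matches the targets:
print(np.allclose(Phi @ a, chi_presc))
```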
It is worth noting that he number of essential modes is two in a 2D problem and three in a 3D problem. § APPLICATION OF THE THEORY OF SYNTHESIS OF MODES In this Section, we show the effectiveness of the resonance synthesis method by means of three application examples. We demonstrate (i) the capability of the method to synthesise several modes, each one having prescribed polarization density distribution at prescribed frequencies, (ii) the tunability of resonant response, by a proper scaling of the dielectric susceptibility tensor and (iii) the concept of essential modes. In the first two examples, the reference geometry is an indefinite cylinder with square L× L cross-section with L=10cm under the illumination. In the third example the geometry consist of coated spherical gold nanoparticle. The numerical model for solving the electromagnetic problem is derived from Ref. <cit.>. The parameters of the Lorentz-Drude expansion functions φ_k, introduced in Eq. (<ref>), are given in Table <ref>. The plot of each individual expansion function is shown in Figure <ref>. The positions of the peaks of the expansion function are uniformly spaced over the bandwidth of interest. We assume ω_p,k=ω_0,k and β_k=0.1ω_0,k. With this latter choice, each expansion function is localized in a neighborhood of its peak position, but does not present a sharp resonance that could hide those arising from the Synthesis of Modes. The amplitude and the shape of the expansion function are briefly discussed in Appendix <ref>. Synthesis of the modes. In this first application, we prescribe the modes at the three angular frequencies shown in Table <ref>. Specifically, at the angular frequency ω_1 we prescribe two modes: the first one has a polarization density field 𝐏_0, whose shape resembles the number “0" and it is associated with the eigenvalue ξ_A=1; the second mode has a polarization density field 𝐏_1, whose shape resembles the number “1" and it is associated with the eigenvalue ξ_B=2. At the angular frequency ω_2, we prescribe the modes 𝐏_1 and 𝐏_2, where 𝐏_2 has a shape which resembles number 2. Modes 𝐏_1 and 𝐏_2 are associated with eigenvalues ξ_A=1 and ξ_B=2, respectively. Finally, at the angular frequency ω_3 we prescribe modes 𝐏_2 and 𝐏_0, associated with eigenvalues ξ_A=1 and ξ_B=2, respectively. Tables <ref> and <ref> summarize these choices. The synthesis is carried out in two steps: i) we evaluate γ_i ( 𝐫) at the three prescribed frequencies; ii) we interpolate the corresponding dielectric susceptibility as in Eq. (<ref>), by solving (<ref>) and (<ref>). In the first step, the theory for the synthesis of two isofrequential modes is applied at each individual angular frequency using equation (<ref>): (i) for (ω_1, ξ_A, 𝐏_0) and (ω_1, ξ_B, 𝐏_1) at ω_1, (ii) for (ω_2, ξ_A, 𝐏_1) and (ω_2, ξ_B, 𝐏_2) at ω_2 and (iii) for (ω_3, ξ_A, 𝐏_2) and (ω_3, ξ_B, 𝐏_0) at ω_3. Figures <ref>, <ref>, and <ref> show the real and imaginary part of every element of the relative dielectric permittivity tensor ε_R,k=χ_k+1, at ω_1, ω_2, and ω_3, respectively. To validate the proposed method, we performed two tests, where the dielectric susceptibility profile is either χ^𝙰 ( 𝐫, ω ) = χ ( 𝐫, ω ) / ξ^𝙰 or χ^𝙱 ( 𝐫, ω ) = χ ( 𝐫, ω ) / ξ^𝙱, where χ ( 𝐫, ω ) is the outcome of the synthesis of modes. The first test was a direct test and it consisted in i) computing the modes at the three frequencies and in ii) comparing them with the prescribed polarization density field. This test was passed successfully. 
As second test, we evaluate the induced polarization density fields at the three frequencies ω_1, ω_2, and ω_3, when the cylinder is excited by a linearly polarized plane wave, propagating along the horizontal axis. These polarization fields are showed in Fig. <ref> (e-c) assuming a susceptibility tensor χ^𝙰(𝐫,ω) and in Fig. <ref> (d-f) for χ^𝙱. The induced polarization density fields is very close to the prescribed modes. In quantitative terms, Table <ref>, shows the 2-norm of the relative difference between the actual 𝐏 and its projection along the subspaces generated by the prescribed modes, at each specific angular frequency: ρ_k^i = ‖𝐏_i( ·,ω_k) -Π^i_k𝐏_i( ·, ω_k) ‖/‖𝐏_i( ·, ω_k ) ‖ with k=1,2,3 and i=𝙰,𝙱. In (<ref>), 𝐏_𝙰( ·,ω_k) and 𝐏_B ( ·,ω_k) are the polarization vectors at ω_k and for material 𝙰 and 𝙱, Π^𝙰_k and Π^𝙱_k are the projector into the linear space for the modes at the k-th angular frequency ω_k and for material 𝙰 and 𝙱. The detail about projectors Π^𝙰_ks and Π^𝙱_ks is given in Table <ref>. We stress that 𝐏_i ( ·, ω_k) is the polarization vector for the physical system under the prescribed illumination at ω_k. This example clearly illustrates the concept of tunability of the resonant response: by just uniformly halving the value of the susceptibility distribution (passing from χ^𝙰 to χ^𝙱) the resonance modes in correspondence of the peaks change from the ordered sequence 0, 1, 2to 1, 2, 0. Tunability. In this second application we determine the dielectric susceptibility by synthesizing at the frequency ω_1 the degenerate modes 𝐏_ and 𝐏_∨, whose polarization density field distribution resembles the characters and ∨, respectively; and at ω_2 the degenerate modes 𝐏_- and 𝐏_|, whose prescribed field distribution resembles the characters - and |, respectively. To validate the performed synthesis, we excite the infinite cylinder with a plane wave polarized along (𝐞_1+𝐞_2)/√(2). We show the real and imaginary part of the induced polarization field distributions at ω_1 in Figures <ref>(c), (d), and in Figures <ref>(g), (h) at ω_2. It is immediately apparent that at ω_1 the induced polarization field is a linear combination of the two prescribed degenerated modes 𝐏_ and 𝐏_∨, while at ω_1 the induced polarization field is a linear combination of 𝐏_- and 𝐏_|. From the quantitative perspective, the 2-norm relative difference ρ between the actual 𝐏 and its projection along the subspaces generated by the prescribed degenerated modes, is equal at 2.9908 × 10^-2 at ω_1 and 3.5310 × 10^-2 at ω_2. In this case Π_1 projects onto {𝐏_, 𝐏_∨}, whereas Π_2 projects onto {𝐏_-, 𝐏_| }. Essential modes. This final application case demonstrates a key feature of the Theory of the Synthesis of Modes, i.e. the concept of Essential Modes. Specifically, given a scatterer operated at a prescribed angular frequency ω_1 and described by the dielectric susceptivity tensor χ(·,ω_1), we compute two resonance modes (ω_1, ξ_A,𝐏_A) and (ω_1, ξ_B,𝐏_B) and, then, we apply our Theory of the Synthesis to these modes. Since the tensor of the dielectric permittivity is in an one-to-one correspondence with two arbitrary modes, as discussed in a previous Section, we expect that the tensor χ_s(·,ω_1) of the dielectric permittivity synthesized by means of (ω_1, ξ_A,𝐏_A) and (ω_1, ξ_B,𝐏_B) via (<ref>), is equal to χ(·,ω_1). The scatterer of this example consists of a coated (thickness 100 nm) circular (radius 200 nm) gold nanorod operated at f=500 THz (ω_1=π× 10^15 rad/s, free-space wavelength of 600 nm). 
The relative dielectric permittivity of the gold nanoparticle is 9.44-j 1.51, whereas that of the coating is 4. Figures <ref> and <ref> show the real and imaginary parts for the selected modes 𝐏_A and 𝐏_B. The synthesized dielectric permittivity tensor is almost equal to that of the prescribed scatterer. As a figure of merit we evaluated the maximum relative error over the scatterer domain Ω: e=max_𝐫∈Ω||χ(𝐫,ω_1)-χ_s(𝐫,ω_1)||_2/||χ(𝐫,ω_1)||_2, which, in this case, is equal to 3.3 × 10^-11. In (<ref>) χ is the prescribed tensor of the dielectric susceptibility, whereas χ_s is the tensor of the synthesized dielectric susceptibility. § CONCLUSIONS In this work we introduced a theoretical framework to find the permittivity profile of a dielectric object to synthesize at will its resonant modes. Specifically, we are able to control the spatial distribution of the polarization density field and the resonance frequency of a set of modes. The equations for the synthesis are straightforward and in an explicit form, making them suitable for specific customization. Moreover, we can prescribe the modes at many different frequencies. The only limit, arising from the underlying physics, consists in the possibility of assigning at most two modes to each individual frequency and eigenvalue (up to three modes in a 3D setting). Indeed, from the theory of the synthesis of modes arises naturally that, at a prescribed angular frequency, the dielectric susceptivity tensor is in one-to-one correspondence with two of its modes, that we termed as essential modes. We also demonstrated the concept of tunability: the proposed approach enables the design of the permittivity of a dielectric object that not only allows the synthesis at will of its resonant modes, but also allows to changes the resonant modes of the dielectric object in a controlled manner, by multiplying the designed permittivity by a proper multiplicative factor. We also demonstrated the concept of tunability: our approach enables the design of the permittivity of a dielectric object, that not only allows the synthesis at will its scattering resonances, but also allows when such permittivity is multiplied by a proper multiplicative factor, it changes its resonant behaviour in a controlled manner. This is relevant from the practical point of view because this operation (multiplication by a constant) appears to be a simple operation. With this theoretical framework, future development will be aimed to design a real world material approximating the synthesized dielectric susceptibility. Metamaterials are the natural candidates to this purpose. The method introduced can be transplanted to different linear physical systems, where the constitutive relationship is linear and local, including thermal and mechanical systems. § METHODS All the numerical calculations have been carried out by using the numerical method of <cit.>. All the value of the parameters used for generating numerical results have been included into the article. § DATA AVAILABILITY All the data supporting the conclusions of this study are included in the article. Source data are provided with this paper. § CODE AVAILABILITY The computer code and algorithm that support the findings of this study are available from the corresponding author on request. 
§ GREEN FUNCTION The component of the Green function for the illumination are G_11( 𝐫) =-ζ_0/4r^3[ krx_2^2H_0( kr) +( x_1^2-x_2^2) H_1( kr) ] G_12( 𝐫) =-ζ_0/4r^3x_1 x_2[ 2H_1( kr) -krH_0( kr) ] G_21( 𝐫) =G_12( 𝐫) G_22( 𝐫) =-ζ_0/4r^3[ krx_1^2H_0( kr) +( x_2^2-x_1^2) H_1( kr) ] , being ζ_0 the characteristic impedance of vacuum, k=ω/c_0 the wavenumber, and c_0 the speed of light in vacuum. § LORENTZ-DRUDE EXPANSION FUNCTION The (normalized) amplitude of the elementary Lorentz-Drude expansion function is: | φ (ω) |/( ω_p / ω_0)^2 =1/√([ 1 - ( ω/ω_0)^2]^2 +( ω/ω_0)^2 ( β/ω_0)^2). Its maximum value is | φ (ω) |_max/( ω_p / ω_0)^2 =1/β/ω_0√(1 + 3/4( β/ω_0)). and it is achieved at ω/ω_0 = √(1+1/2( β/ω_0)^2) The plot of (<ref>) for different β / ω_0 ratios is showed in Figure <ref>. 10 engheta_metamaterials_2006 N. Engheta and R. W. Ziolkowski, Metamaterials: Physics and Engineering Explorations. John Wiley & Sons, June 2006. pendry_controlling_2006 J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling Electromagnetic Fields,” Science, vol. 312, no. 5781, pp. 1780–1782, 2006. leonhardt_optical_2006 U. Leonhardt, “Optical Conformal Mapping,” Science, vol. 312, no. 5781, pp. 1777–1780, 2006. hughes_adjoint_2018 T. W. Hughes, M. Minkov, I. A. D. Williamson, and S. Fan, “Adjoint Method and Inverse Design for Nonlinear Nanophotonic Devices,” ACS Photonics, vol. 5, pp. 4781–4787, Dec. 2018. Publisher: American Chemical Society. yao_intelligent_2019 K. Yao, R. Unni, and Y. Zheng, “Intelligent nanophotonics: merging photonics and artificial intelligence at the nanoscale,” Nanophotonics, vol. 8, pp. 339–366, Jan. 2019. lalanne_light_2018 P. Lalanne, W. Yan, K. Vynck, C. Sauvan, and J. . P. Hugonin, “Light interaction with photonic and plasmonic resonances,” Laser & Photonics Rev., vol. 12, 2018. van_bladel_electromagnetic_2007 J. G. Van Bladel, Electromagnetic fields, vol. 19. John Wiley & Sons, 2007. kristensen_modes_2013 P. T. Kristensen and S. Hughes, “Modes and mode volumes of leaky optical cavities and plasmonic nanoresonators,” ACS Photonics, vol. 1, 2013. muljarov_brillouin-wigner_2010 E. A. Muljarov, W. Langbein, and R. Zimmermann, “Brillouin-Wigner perturbation theory in open electromagnetic systems,” EPL (Europhysics Letters), vol. 92, p. 50010, Dec. 2010. Publisher: IOP Publishing. lalanne_quasinormal_2019 P. Lalanne, W. Yan, A. Gras, C. Sauvan, J.-P. Hugonin, M. Besbes, G. Demésy, M. D. Truong, B. Gralak, F. Zolla, A. Nicolet, F. Binkowski, L. Zschiedrich, S. Burger, J. Zimmerling, R. Remis, P. Urbach, H. T. Liu, and T. Weiss, “Quasinormal mode solvers for resonators with dispersive materials,” JOSA A, vol. 36, pp. 686–704, Apr. 2019. kristensen_generalized_2012 P. T. Kristensen, C. V. Vlack, and S. Hughes, “Generalized effective mode volume for leaky optical cavities,” Optics Letters, vol. 37, pp. 1649–1651, May 2012. sauvan_theory_2013 C. Sauvan, J.-P. Hugonin, I. Maksymov, and P. Lalanne, “Theory of the spontaneous optical emission of nanosize photonic and plasmon resonators,” Physical Review Letters, vol. 110, no. 23, p. 237401, 2013. Publisher: APS. muljarov_exact_2016 E. A. Muljarov and W. Langbein, “Exact mode volume and Purcell factor of open optical systems,” Physical Review B, vol. 94, p. 235438, Dec. 2016. Publisher: American Physical Society. bergman_theory_1980 D. J. Bergman and D. Stroud, “Theory of resonances in the electromagnetic scattering by macroscopic bodies,” Phys. Rev. B, vol. 22, 1980. forestiere_material-independent_2016 C. Forestiere and G. 
Miano, “Material-independent modes for electromagnetic scattering,” Phys. Rev. B, vol. 94, p. 201406, Nov. 2016. forestiere_volume_2018 C. Forestiere, G. Miano, G. Rubinacci, A. Tamburrino, R. Tricarico, and S. Ventre, “Volume Integral Formulation for the Calculation of Material Independent Modes of Dielectric Scatterers,” IEEE Transactions on Antennas and Propagation, vol. 66, pp. 2505–2514, May 2018. pascale_full-wave_2019 M. Pascale, G. Miano, R. Tricarico, and C. Forestiere, “Full-wave electromagnetic modes and hybridization in nanoparticle dimers,” Scientific Reports, vol. 9, p. 14524, Oct. 2019. forestiere_nanoparticle_2017 C. Forestiere and G. Miano, “On the nanoparticle resonances in the full-retarded regime,” Journal of Optics, vol. 19, p. 075601, June 2017. pascale_spectral_2017 M. Pascale, G. Miano, and C. Forestiere, “Spectral theory of electromagnetic scattering by a coated sphere,” JOSA B, vol. 34, pp. 1524–1535, July 2017. forestiere_directional_2019 C. Forestiere, G. Miano, M. Pascale, and R. Tricarico, “Directional scattering cancellation for an electrically large dielectric sphere,” Optics Letters, vol. 44, pp. 1972–1975, Apr. 2019. su_monotonicity_2017 Z. Su, S. Ventre, L. Udpa, and A. Tamburrino, “Monotonicity based imaging method for time-domain eddy current problems,” Inverse Problems, vol. 33, p. 125007, Nov. 2017. tamburrino_monotonicity_2021 A. Tamburrino, G. Piscitelli, and Z. Zhou, “The monotonicity principle for magnetic induction tomography,” Inverse Problems, vol. 37, p. 095003, Aug. 2021. Publisher: IOP Publishing. Note1 Here 𝐚 ( 𝐫 ) 𝐛 ( 𝐫 ) means that 𝐚^∗ ( 𝐫 ) ·𝐛 ( 𝐫 ) =0. richmond_te-wave_1966 J. Richmond, “TE-wave scattering by a dielectric cylinder of arbitrary cross-section shape,” IEEE Transactions on Antennas and Propagation, vol. 14, pp. 460–464, July 1966.
http://arxiv.org/abs/2307.07269v2
20230714105043
Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation
[ "Asif Hanif", "Muzammal Naseer", "Salman Khan", "Mubarak Shah", "Fahad Shahbaz Khan" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Volumetric Adversarial Frequency Attack and Training (VAFA & VAFT) A. Hanif et al. Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE {asif.hanif,muzammal.naseer,salman.khan,fahad.khan}@mbzuai.ac.ae University of Central Florida (UCF), USA [email protected] Linköping University, Sweden Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation Asif Hanif1 Muzammal Naseer1 Salman Khan1 Mubarak Shah2 Fahad Shahbaz Khan1,3 August 12, 2023 ==================================================================================== It is imperative to ensure the robustness of deep learning models in critical applications such as, healthcare. While recent advances in deep learning have improved the performance of volumetric medical image segmentation models, these models cannot be deployed for real-world applications immediately due to their vulnerability to adversarial attacks. We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models and demonstrate its advantages over conventional input or voxel domain attacks. Using our proposed attack, we introduce a novel frequency domain adversarial training approach for optimizing a robust model against voxel and frequency domain attacks. Moreover, we propose frequency consistency loss to regulate our frequency domain adversarial training that achieves a better tradeoff between model's performance on clean and adversarial samples. Code is available at <https://github.com/asif-hanif/vafa>. § INTRODUCTION Semantic segmentation of organs, anatomical structures, or anomalies in medical images (e.g. CT or MRI scans) remains one of the fundamental tasks in medical image analysis. Volumetric medical image segmentation (MIS) helps healthcare professionals to diagnose conditions more accurately, plan medical treatments, and perform image-guided procedures. Although deep neural networks (DNNs) have shown remarkable improvements in performance for different vision tasks, including volumetric MIS, their real-world deployment is not straightforward particularly due to the vulnerabilities towards adversarial attacks <cit.>. An adversary can deliberately manipulate input data by crafting and adding perturbations to the input that are imperceptible to the human eye but cause the DNN to produce incorrect outputs <cit.>. Adversarial attacks pose a serious security threat to DNNs <cit.>, as they can be used to cause DNNs to make incorrect predictions in a wide range of applications, including DNN-based medical imaging systems. To mitigate these threats, various techniques have been explored, including adversarial training, input data transformations, randomization, de-noising auto-encoders, feature squeezing, and robust architectural changes <cit.>. Although significant progress has been made in adversarial defenses, however, this area is still evolving due to the development of attacks over time <cit.>. Ensuring the adversarial robustness of the models involved in safety-critical applications such as, medical imaging and healthcare is of paramount importance because a misdiagnosis or incorrect decision can result in life-threatening implications. Moreover, the weak robustness of deep learning-based medical imaging models will create a trust deficit among clinicians, making them reluctant to rely on the model predictions. The adversarial robustness of the medical imaging models is still an open and under-explored area <cit.>. 
Furthermore, most adversarial attacks and defenses have been designed for 2D natural images and little effort has been made to secure volumetric (3D) medical data <cit.>. In the context of 2D natural images, it has been recently observed that frequency-domain based adversarial attacks are more effective against the defenses that are primarily designed to “undo” the impact of pixel-domain adversarial noise in natural images <cit.>. Motivated by this observation in 2D natural images, here we explore the effectiveness of frequency-domain based adversarial attacks in the regime of volumetric medical image segmentation and aim to obtain a volumetric MIS model that is robust against adversarial attacks. To achieve this goal, we propose a min-max objective for adversarial training of volumetric MIS model in frequency-domain. For maximization step, we introduce Volumetric Adversarial Frequency Attack - VAFA (Fig. <ref>, Sec. <ref>) which operates in the frequency-domain of the data (unlike other prevalent voxel-domain attacks) and explicitly takes into account the 3D nature of the volumetric medical data to achieve higher fooling rate. For minimization step, we propose Volumetric Adversarial Frequency-domain Training - VAFT (Fig. <ref>, Sec. <ref>) to obtain a model that is robust to adversarial attacks. In VAFT, we update model parameters on clean and adversarial (obtained via VAFA) samples and further introduce a novel frequency consistency loss to keep frequency representation of the logits of clean and adversarial samples close to each other for a better accuracy tradeoff. In summary, our contributions are as follows: * We propose an approach with a min-max objective for adversarial training of volumetric MIS model in the frequency domain. In the maximization step, we introduce a volumetric adversarial frequency attack (VAFA) that is specifically designed for volumetric medical data to achieve higher fooling rate. Further, we introduce a volumetric adversarial frequency-domain training (VAFT) based on a frequency consistency loss in the minimization step to produce a model that is robust to adversarial attacks. * We conduct experiments with two different hybrid CNN-transformers based volumetric medical segmentation methods for multi-organ segmentation. Related Work: There are three main types of popular volumetric MIS model architectures: CNN <cit.>, Transformer <cit.> and hybrid <cit.>. Research has shown that medical machine learning models can be manipulated in various ways by an attacker, such as adding imperceptible perturbation to the image, rotating the image, or modifying medical text <cit.>. Adversarial attack studies on medical data have primarily focused on classification problems and voxel-domain adversaries. For example, Ma et al. <cit.> have used four types of pixel-domain attacks <cit.> on two-class and multi-class medical datasets. Li et al. <cit.> and Daza et al. <cit.> have focused on single-step and iterative adversarial attacks <cit.> on the volumetric MIS. In constant to voxel-domain adversarial attacks, our approach works in the frequency-domain. § FREQUENCY DOMAIN ADVERSARIAL ATTACK AND TRAINING We aim to train a model for volumetric medical segmentation that is robust against adversarial attacks. Existing adversarial training (AT) approaches rely on min-max optimization <cit.> and operate in the input space. They find adversaries by adding the adversarial perturbation to the input samples by maximizing the model loss (e.g., dice loss in segmentation). 
The loss function is then minimized on such adversaries to update the model parameters. In this work, we propose a frequency-domain adversarial attack that takes into account the 3D nature of the volumetric medical data and performs significantly better than the other voxel-domain as well as 2D frequency domain attacks (Tab. <ref>). Based on our attack, we then introduce a novel frequency-domain adversarial training to make the model resilient to adversarial attacks. Additionally, we observe that our approach improves/retains the performance of the robust model on clean samples when compared to the non-robust model. Our approach optimizes adversarial samples by perturbing the 3D-DCT coefficients within the frequency domain using our frequency perturbation module (Fig. <ref>) and adversarial guidance from the segmentation loss (Sec. <ref>). We find adversarial samples with high perceptual quality by maximizing the structural similarity between clean and adversarial samples. Using clean and adversarial samples, we propose updating the model parameters by simultaneously minimizing the segmentation loss (i.e. Dice loss) and the frequency consistency loss (Eq. <ref>) between the clean and adversarial outputs of the segmentation model. 3D Medical Segmentation Framework: Deep learning-based 3D medical segmentation generally uses encoder-decoder architectures <cit.>. The encoder produces a latent representation of the input sample. A segmentation map of the input sample is generated by the decoder using the latent feature representation. The decoder usually incorporates skip connections from the encoder to preserve spatial information <cit.>. Next, we describe our proposed volumetric frequency-domain adversarial attack in Sec. <ref> and then training in Sec. <ref>. §.§ Volumetric Adversarial Frequency Attack (VAFA) Generally, adversarial attacks operate in the voxel domain by adding an imperceptible perturbation to the input data. In contrast, our attack perturbs the 3D-DCT coefficient to launch a frequency-domain attack for 3D medical image segmentation. Our Frequency Perturbation Module (FPM) transforms voxel-domain data into frequency-domain by using discrete cosine transforms (DCTs) and perturbs the DCT coefficients using a learnable quantization. It then takes an inverse DCT of the perturbed frequency-domain data and returns voxel-domain image. We keep the model in a “frozen” state while maximizing the dice loss <cit.> for segmentation and minimizing structural similarity loss <cit.> for perceptual quality. We represent a 3D (volumetric) single channel clean sample by X∈ℝ^1× H× W× D and its ground-truth binary segmentation mask by Y∈{0,1}^NumClass× H× W× D, where “NumClass" is the number of classes. We split X into n 3D patches i.e. X↦{x_i}_i=1^n, where x_i∈ℝ^h× w× d and h≤ H,w≤ W, d≤ D, h=w=d. We apply our frequency perturbation module to each of these patches. Frequency Perturbation Module: We apply a 3D discrete cosine transform (DCT), represented as 𝒟(·), to each patch x_i. The resulting DCT coefficients are then processed through a function φ(·), which performs three operations: quantization, differentiable rounding (as described in <cit.>), and subsequent de-quantization. φ(·) utilizes a learnable quantization table q∈ℤ^h× w× d to modify the DCT coefficients, setting some of them to zero. In particular, φ(𝒟(x),q) ⌊𝒟(x)/q⌋⊙q, where DCT coefficients of a patch (i.e. 𝒟(x)) are element-wise divided by quantization table q. 
After the division operation, the result undergoes rounding using a differentiable rounding operation <cit.>, resulting in some values being rounded down to zero. The de-quantization step involves element-wise multiplication of ⌊𝒟(x)/q⌋ with the same quantization table q. This step allows us to reconstruct the quantized DCT coefficients. Since quantization table is in the denominator of the division operation, therefore, higher quantization table values increase the possibility of more DCT coefficients being rounded down to zero. To control the number of DCT coefficients being set to zero, we can constrain the values of the quantization table to a maximum threshold (constraint in Eq. <ref>). In other words, φ(·) performs a 3D adversarial lossy compression on input through a learnable quantization table. Finally, a 3D inverse DCT (IDCT) is performed on the output of φ(·) in order to obtain an adversarially perturbed voxel-domain representation, denoted by x^'. We show our frequency perturbation module in Eq. <ref> as follows: x↦𝒟(x) ↦φ(𝒟(x),q) _quantization, rounding and de-quantization↦𝒟_I(φ(·)) ↦x^' We repeat the above mentioned sequence of transformations for all patches and then merge {x_i^'}_i=1^n to form adversarial image X^'∈ℝ^H× W× D. Quantization Constraint: We learn quantization table q by maximizing the ℒ_dice while ensuring that q_∞≤ q_max. Quantization threshold q_max controls the extent to which DCT coefficients are perturbed. The higher the value of q_max, the more information is lost. The drop in perception quality of the adversarial sample and the accuracy of the model are directly proportional to the value of q_max. To increase the perceptual quality of adversarial samples, we also minimize the structural similarity loss <cit.> between clean and adversarial samples, denoted by ℒ_ssim(X,X^'), in optimization objective. Our attack optimizes the following objective to fool a target model ℳ_θ: qmaximize  ℒ_dice (ℳ_θ(X^'), Y) - ℒ_ssim(X,X^') s.t.  q_∞≤ q_max, where ℒ_ssim(X,X^') = 1-1/n∑_i=1^nSSIM(x_i,x^'_i) is structural similarity loss <cit.>. Algorithm <ref> presents our volumetric adversarial frequency attack (VAFA). An overview of the attack can be found in maximization step of Fig. <ref>. §.§ Volumetric Adversarial Frequency Training (VAFT) The model parameters are then updated by minimizing the segmentation loss on both clean and adversarial samples (Eq. <ref>). Since our attack disrupts the frequency domain to find adversaries, we develop a novel frequency consistency loss (Eq. <ref>) to encourage frequency domain representation of the model's output (segmentation logits) for the clean sample close to the adversarial sample. Our frequency consistency loss not only boosts the robustness of the model against adversarial attacks but also improves/retains the performance of the robust model on clean images (Sec. <ref>). We present our volumetric adversarial frequency training (VAFT) in Algo. <ref>. θminimize ℒ_dice (ℳ_θ(X), Y)+ ℒ_dice (ℳ_θ(X^'), Y) + ℒ__fr(ℳ_θ(X),ℳ_θ(X^')), ℒ__fr(ℳ_θ(X),ℳ_θ(X^')) = 𝒟(ℳ_θ(X))-𝒟(ℳ_θ(X^'))__1, where X^' = VAFA(X,Y) and 𝒟(·) is 3D DCT function. An overview of the adversarial training can be found in minimization step of Fig. <ref>. Fig. <ref> presents a qualitative results of adversarial examples under different attacks on the standard UNETR model. We highlight areas by red bounding box in Fig. <ref> to show the impact of each attack on the model performance, when compared with prediction on clean sample. 
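As a concrete illustration of the two frequency-domain components described above, the following numpy sketch (our own, not the authors' PyTorch implementation; np.round stands in for the differentiable rounding, the SSIM term and the gradient-based optimization of q are omitted, and the orthonormal DCT normalization is an assumption) applies the frequency perturbation module to one 3D patch and evaluates the frequency consistency loss on a pair of logit volumes.

```python
import numpy as np
from scipy.fft import dctn, idctn

def frequency_perturbation(patch, q):
    """Quantize/de-quantize the 3D-DCT coefficients of one h x w x d patch.

    q is the (learnable) quantization table with ||q||_inf <= q_max; np.round is a
    stand-in for the differentiable rounding used during the attack."""
    coeffs = dctn(patch, norm="ortho")                 # 3D DCT of the patch
    coeffs_q = np.round(coeffs / q) * q                # phi(D(x), q): some coefficients -> 0
    return idctn(coeffs_q, norm="ortho")               # back to the voxel domain

def frequency_consistency_loss(logits_clean, logits_adv):
    """L_fr = || D(M(X)) - D(M(X')) ||_1, with the 3D DCT over the spatial axes."""
    d_c = dctn(logits_clean, axes=(-3, -2, -1), norm="ortho")
    d_a = dctn(logits_adv, axes=(-3, -2, -1), norm="ortho")
    return np.abs(d_c - d_a).sum()

# Toy usage: one 32^3 patch, uniform quantization table at the threshold q_max = 20.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 255.0, size=(32, 32, 32))
x_adv = frequency_perturbation(x, np.full((32, 32, 32), 20.0))
print(float(np.abs(x - x_adv).mean()))                 # voxel-domain distortion introduced

logits = rng.standard_normal((4, 32, 32, 32))          # stand-in for M(X) and M(X')
print(float(frequency_consistency_loss(logits, logits + 0.01 * rng.standard_normal(logits.shape))))
```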
Our attack (VAFA) achieves higher fooling rate as compared to other voxel-domain attacks, while maintaining comparable perceptual similarity. § EXPERIMENTS AND RESULTS Implementation Details: We demonstrate the effectiveness of our approach using two medical segmentation models: UNETR<cit.>, UNETR++<cit.> and two datasets: Synapse (18-12 split) <cit.>, and ACDC <cit.>. Using pre-trained models from open-source Github repositories by the corresponding authors, we launch different adversarial attacks and conduct adversarial training with default parameters. We use the Pytorch framework and single NVIDIA A100-SXM4-40GB GPU for our experiments. For a pixel/voxel range [0,255], we create l_∞ adversarial examples under perturbation budgets of ϵ∈{4,8} for voxel-domain attacks following <cit.> and compare it with our attack VAFA. Unless otherwise specified, all attacks are run for a total of 20 optimization steps. More details about the parameters of the attacks used in different experiments can be found in Appendix. We use mean Dice Similarity Score (DSC), mean 95% Hausdorff Distance (HD95). We also report perceptual similarity between clean and adversarial sample (LPIPS) <cit.>. Results: For each evaluation metric, we take mean across all classes (including background) and test images. In each table (where applicable), green values show DSC and HD95 on clean images. Table <ref> shows comparison of voxel-domain attacks (e.g. PGD <cit.>, FGSM <cit.>, BIM <cit.>, GaussianNoise(GN) <cit.>) with VAFA-2D (2D DCT in FPM applied on each scan independently) and VAFA on UNETR model (Synapse). VAFA achieves a higher fooling rate as compared to other attacks with comparable LPIPS. We posit that VAFA-2D on volumetric MIS data is sub-optimal and it does not take into account the 3D nature of the data and model's reliance on the 3D neighborhood of a voxel to predict its class. Further details are provided in the supplementary material. We show impacts of different parameters of VAFA e.g. quantization threshold (q_max), steps, and patch size (h× w × d) on DSC and LPIPS in Table. <ref>,<ref> and <ref> respectively. DSC and LPIPS decrease when these parameters values are increased. Table <ref> shows a comparison of VAFA (patch size = 32× 32 × 32) with other voxel-domain attacks on UNETR and UNETR++ models. For adversarial training experiments, we use q_max=20 (for Synapse), q_max=10 (for ACDC) and patch-size of 32×32×32 (chosen after considering the trade-off between DSC and LPIPS from Table <ref>) for VAFA. For voxel-domain attacks, we use ϵ=4 (for Synapse) and ϵ=2 (for ACDC) by following the work of <cit.>. Table <ref> presents a comparison of the performance (DSC) of various adversarially trained models against different attacks. ℳ^[0.5]VAFA-FR__*, ℳ^[0.5]VAFA__* denote our robust models which were adversarially trained with and without frequency consistency loss (ℒ_fr, Eq. <ref>) respectively. In contrast to other voxel-domain robust models, our approach demonstrated robustness against both voxel and frequency-based attacks. § CONCLUSION We present a frequency-domain based adversarial attack and training for volumetric medical image segmentation. Our attack strategy is tailored to the 3D nature of medical imaging data, allowing for a higher fooling rate than voxel-based attacks while preserving comparable perceptual similarity of adversarial samples. 
Based upon our proposed attack, we introduce a frequency-domain adversarial training method that enhances the robustness of the volumetric segmentation model against both voxel and frequency-domain based attacks. Our training strategy is particularly important in medical image segmentation, where the accuracy and reliability of the model are crucial for clinical decision making. 
http://arxiv.org/abs/2307.05199v1
20230711120914
Reject option models comprising out-of-distribution detection
[ "Vojtech Franc", "Daniel Prusa", "Jakub Paplham" ]
cs.LG
[ "cs.LG" ]
argmax argmin Argmax Argmin conv T E𝔼 A I M P S Q G B K X Y𝕐 F N H V O C U D pred colℕε∪#1(<ref>)⟨⟩Δ^ pΔ_t^ p#11_#1λψ_ advψ_ manaψ^p_ manaψ_ lpψψ_ mrψ^pψ^p_ rampψ^p_ lpR_ℓR_ℓ^*R_ψR_ψ^pR^p_ manaR_ manaR_ adv rejectϕρ≺κ R^S rejectϕ_nρ_nκ_n R^S_nR_nR ROC RC PR##1#1#1#1[1.3pt[]1.3pt]:=ρϕ_ minρ_ maxκ_ mintheoremTheoremproblemProblemlemmaLemmadefinitionDefinitionclaimClaim[theorem] claimproofProof of claim[claim] Reject option models comprising out-of-distribution detection Vojtech Franc Daniel Prusa Jakub Paplham Department of Cybernetics Faculty of Electrical Engineering Czech Technical University in Prague August 12, 2023 =============================================================================================================================================================== The optimal prediction strategy for out-of-distribution (OOD) setups is a fundamental question in machine learning. In this paper, we address this question and present several contributions. We propose three reject option models for OOD setups: the Cost-based model, the Bounded TPR-FPR model, and the Bounded Precision-Recall model. These models extend the standard reject option models used in non-OOD setups and define the notion of an optimal OOD selective classifier. We establish that all the proposed models, despite their different formulations, share a common class of optimal strategies. Motivated by the optimal strategy, we introduce double-score OOD methods that leverage uncertainty scores from two chosen OOD detectors: one focused on OOD/ID discrimination and the other on misclassification detection. The experimental results consistently demonstrate the superior performance of this simple strategy compared to state-of-the-art methods. Additionally, we propose novel evaluation metrics derived from the definition of the optimal strategy under the proposed OOD rejection models. These new metrics provide a comprehensive and reliable assessment of OOD methods without the deficiencies observed in existing evaluation approaches. § INTRODUCTION Most methods for learning predictors from data are based on the closed-world assumption, i.e., the training and the test samples are generated i.i.d. from the same distribution, so-called in-distribution (ID). However, in real-world applications, ID test samples can be contaminated by samples from another distribution, the so-called Out-of-Distribution (OOD), which is not represented in training examples. A trustworthy prediction model should detect OOD samples and reject to predict them, while simultaneously minimizing the prediction error on accepted ID samples. In recent years, the development of deep learning models for handling OOD data has emerged as a critical challenge in the field of machine learning, leading to an explosion of research papers dedicated to developing effective OOD detection methods (OODD) <cit.>. Existing methods use various principles to learn a classifier of ID samples and a selective function that accepts the input for prediction or rejects it to predict. We further denote the pair of ID classifier and the selective function as OOD selective classifier, borrowing terminology from the non-OOD setup <cit.>. There is an agreement that a good OOD selective classifier should reject OOD samples and simultaneously achieve high classification accuracy on ID samples that are accepted <cit.>. To our knowledge, there is surprisingly no formal definition of an optimal OOD selective classifier. 
Consequently, there is also no consensus on how to evaluate the OODD methods. The commonly used metrics <cit.> evaluate only one aspect of the OOD selective classifier, either the accuracy of the ID classifier or the performance of the selective function as an OOD/ID discriminator. Such evaluation is inconclusive and usually inconsistent; e.g., the two most commonly used metrics, AUROC and OSCR, often lead to a completely reversed ranking of evaluated methods (see Sec. <ref>). In this paper, we ask the following question: What would be the optimal prediction strategy for the OOD setup in the ideal case when ID and OOD distributions were known? To this end, we offer the contributions: [label=(*)] * We propose three reject option models for the OOD setup: Cost-based model, bounded TPR-FPR model, and Bounded Precision-Recall model. These models extend the standard rejection models used in the non-OOD setup <cit.> and define the notion of an optimal OOD classifier. * We establish that all the proposed models, despite their different formulations, share a common class of optimal strategies. The optimal OOD selective classifier combines a Bayes ID classifier with a selective function based on a linear combination of the conditional risk and likelihood ratio of the OOD and ID samples. This selective function enables a trade-off between distinguishing ID from OOD samples and detecting misclassifications. * Motivated by the optimal strategy, we introduce double-score OOD methods that leverage uncertainty scores from two chosen OOD detectors: one focused on OOD/ID discrimination and the other on misclassification detection. We show experimentally that this simple strategy consistently outperforms the state-of-the-art. * We review existing metrics for evaluation of OODD methods and show that they provide incomplete view, if used separately, or inconsistent view of the evaluated methods, if used together. We propose novel evaluation metrics derived from the definition of optimal strategy under the proposed OOD rejection models. These new metrics provide a comprehensive and reliable assessment of OODD methods without the deficiencies observed in existing approaches. § REJECT OPTION MODELS FOR OOD SETUP The terminology of ID and OOD samples comes from the setups when the training set contains only ID samples, while the test set contains a mixture of ID and OOD samples. In this paper, we analyze which prediction strategies are optimal on the test samples, but we do not address the problem of learning such strategy. We follow the OOD setup from <cit.>. Let be a set of observable inputs (or features), and a finite set of labels that can be assigned to in-distribution (ID) inputs. ID samples (x,y)∈× are generated from a joint distribution p_I×→_+. Out-of-distribution (OOD) samples x are generated from a distribution p_O→_+. ID and OOD samples share the same input space . Let ∅ be a special label to mark the OOD sample. Let =∪{∅} be an extended set of labels. In the testing stage the samples (x,y̅)∈× are generated from the joint distribution p×→_+ defined as a mixture of ID and OOD: p(x,y̅) = {[ p_O(x) π y̅=∅; p_I(x,y̅) (1-π) y̅∈ ] ., where π∈[0,1) is the probability of observing the OOD sample. Our OOD setup subsumes the standard non-OOD setup as a special case when π=0, and the reject option models that will be introduced below will become for π=0 the known reject option models for the non-OOD setup. 
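For concreteness, a short sketch (a toy of our own, not the paper's experimental protocol) of how test data distributed according to p(x, ȳ) can be simulated: with probability π an OOD input carrying the label ∅ is drawn, otherwise an ID pair is drawn from p_I; the 1-D densities used here are placeholders.

```python
import numpy as np

def sample_test_set(n, pi, sample_id, sample_ood, rng):
    """Draw n pairs (x, y_bar) from p(x, y_bar); y_bar = None stands for the OOD label."""
    data = []
    for _ in range(n):
        if rng.random() < pi:
            data.append((sample_ood(rng), None))      # OOD sample, y_bar = "empty set"
        else:
            data.append(sample_id(rng))               # (x, y) drawn from p_I
    return data

# Toy 1-D instance (placeholder densities):
def sample_id(r):
    y = int(r.integers(2))                            # two ID classes, equally likely
    return r.normal((-1.0, 1.0)[y], 1.0), y + 1

def sample_ood(r):
    return r.normal(3.0, 0.2)

rng = np.random.default_rng(0)
test = sample_test_set(1000, pi=0.25, sample_id=sample_id, sample_ood=sample_ood, rng=rng)
```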
Our goal is to design OOD selective classifier q→, where =∪{}, which either predicts a label, q(x)∈, or it rejects the prediction, q(x)=, when [label=(*),before=, itemjoin=, , itemjoin*=, or ] * input x∈ prevents accurate prediction of y∈ because it is noisy * comes from OOD. We represent the selective classifier by the ID classifier h→, and a stochastic selective function c→[0,1] that outputs a probability that the input is accepted <cit.>, i.e., q(x)=(h,c)(x) ={[ h(x) c(x); 1-c(x) ] .. In the following sections, we propose three reject option models that define the notion of the optimal OOD selective classifier of the form equ:selClassif applied to samples generated by equ:dataDistr. §.§ Cost-based rejection model for OOD setup A classical approach to define an optimal classifier is to formulate it as a loss minimization problem. This requires defining a loss ℓ̅×→_+ for each combination of the label y̅∈=∪{∅} and the output of the classifier q(x)∈=∪{}. Let ℓ×→_+ be some application-specific loss on ID samples, e.g., 0/1-loss or MAE. Furthermore, we need to define the loss for the case where the input is OOD sample y̅=∅ or the classifier rejects q(x)=. Let ε_1∈_+ be the loss for rejecting the ID sample, ε_2∈_+ loss for prediction on the OOD sample, and ε_3∈_+ loss for correctly rejecting the OOD sample. ℓ, ε_1,ε_2 and ε_3 can be arbitrary, but we assume that ε_2>ε_3. The loss ℓ̅ is then: ℓ̅(y̅,q) = {[ ℓ(y̅,q) y̅∈ q∈; ε_1 y̅∈ q =; ε_2 y̅=∅ q ∈; ε_3 y̅=∅ q = ] . Having the loss ℓ̅, we can define the optimal OOD selective classifier as a minimizer of the expected risk R(h,c) = _x,y∼ p(x,y̅)ℓ̅(y̅,(h,c)(x)). (Cost-based OOD model) An optimal OOD selective classifier (h_C,c_C) is a solution to the minimization problem min_h,c R(h,c) where we assume that both minimizers exist. An optimal solution of the cost-based OOD model requires three components: The Bayes ID classifier h_B(x) ∈_y'∈∑_y∈p_I(y| x)ℓ(y,y') , its conditional risk r_B(x)=∑_y∈p_I(y| x)ℓ(y,h_B(x)), and the likelihood ratio of the OOD and ID inputs, g(x) = p_O(x)/p_I(x), which we defined to be g(x)=∞ for p_I(x)=0. An optimal selective classifier (h_C,c_C) under the cost-based OOD model is composed of the Bayes classifier equ:BayesCls, h_C=h_B, and the selective function c_C(x)={[ 1 s_C(x) < ε_1; τ s_C(x) = ε_1; 0 s_C(x) > ε_1 ] . s_C(x) = r_B(x) + (ε_2-ε_3)π/1-π g(x) where τ is an arbitrary number in [0,1], and _1, _2, _3 are losses defining the extended loss  equ:extendedLoss. Note that τ can be arbitrary and therefore a deterministic selective function c_C(x)= s_C(x)≤_1 is also optimal. An optimal selective function accepts inputs based on the score s_C(x), which is a linear combination of two functions, conditional risk r_B(x) and the likelihood ratio g(x)=p_O(x)/p_I(x). Relation to cost-based model for Non-OOD setup For π=0, the cost-based OOD model reduces to the standard cost-based model of the reject option classifier in a non-OOD setup <cit.>. In the non-OOD setup, we do not need to specify the losses _2 and _3 and the risk R(h,c) simplifies to R'(h,c) = _x,y∼ p_I(x,y) [ℓ(y,h(x)) c(x) + _1 (1-c(x)) ]. The well-known optimal solution is composed of the Bayes classifier h_B(x) as in the OOD case; however, the selection function c'_C(x)= r(x)≤ϵ_1 accepts the input solely based on the conditional risk r(x). 
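A compact sketch of the optimal cost-based strategy just derived (our illustration; h_B, r_B and g are assumed to be available as callables, which in practice means they have to be estimated): predict with the Bayes ID classifier and accept exactly when s_C(x) = r_B(x) + (ε_2−ε_3)π/(1−π)·g(x) ≤ ε_1.

```python
def make_cost_based_selective_classifier(h_bayes, r_bayes, g, eps1, eps2, eps3, pi):
    """Build the optimal cost-based strategy: predict h_B(x) iff s_C(x) <= eps1.

    h_bayes, r_bayes, g are callables for h_B(x), r_B(x) and p_O(x)/p_I(x);
    eps1/eps2/eps3 are the rejection losses and pi is the OOD prior."""
    REJECT = "reject"
    w = (eps2 - eps3) * pi / (1.0 - pi)          # weight of the OOD/ID likelihood ratio

    def score(x):                                # s_C(x) = r_B(x) + (eps2-eps3) pi/(1-pi) g(x)
        return r_bayes(x) + w * g(x)

    def q(x):                                    # deterministic tie-breaking is also optimal
        return h_bayes(x) if score(x) <= eps1 else REJECT

    return q, score
```

Since τ in the theorem may be chosen arbitrarily, the deterministic acceptance rule s_C(x) ≤ ε_1 used here is one of the optimal solutions.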
§.§ Bounded TPR-FPR rejection model The cost-based OOD model requires the classification loss ℓ for ID samples and the specification of the costs ε_1, ε_2, ε_3, which is difficult in practice because the physical units of ℓ and ε_1, ε_2, ε_3 are often different. In this section, we propose an alternative approach which requires only the classification loss ℓ, while the costs ε_1, ε_2, ε_3 are replaced by constraints on the performance of the selective function. The selective function c: 𝒳→[0,1] can be seen as a discriminator of OOD/ID samples. Let us consider ID and OOD samples as positive and negative classes, respectively. We introduce three metrics to measure the performance of the OOD selective classifier (h,c). We measure the performance of the selective function by the True Positive Rate (TPR) and the False Positive Rate (FPR). The TPR is defined as the probability that an ID sample is accepted by the selective function c, i.e., ϕ(c) = ∫_𝒳 p(x|y̅≠∅) c(x) dx = ∫_𝒳 p_I(x) c(x) dx . The FPR is defined as the probability that an OOD sample is accepted by the selective function c, i.e., ρ(c) = ∫_𝒳 p(x|y̅=∅) c(x) dx = ∫_𝒳 p_O(x) c(x) dx . The second identity in equ:TPR and equ:FPR is obtained after substituting the definition of p(x,y̅) from equ:dataDistr. Lastly, we characterize the performance of the ID classifier h: 𝒳→𝒴 by the selective risk R^S(h,c) = ∫_𝒳 ∑_y∈𝒴 p_I(x,y) ℓ(h(x),y) c(x) dx / ϕ(c), defined for non-zero ϕ(c), i.e., the expected loss of the classifier h calculated on the ID samples accepted by the selective function c. Let ϕ_min∈[0,1] be the minimal acceptable TPR and ρ_max∈[0,1] the maximal acceptable FPR. An optimal OOD selective classifier (h_T,c_T) under the bounded TPR-FPR model is a solution of the problem min_h∈𝒴^𝒳, c∈[0,1]^𝒳 R^S(h,c) s.t. ϕ(c) ≥ ϕ_min, ρ(c) ≤ ρ_max, where we assume that both minimizers exist. Let (h,c) be an optimal solution to equ:TprFprModel. Then (h_B,c), where h_B is the Bayes ID classifier equ:BayesCls, is also optimal to equ:TprFprModel. According to Theorem <ref>, the Bayes ID classifier h_B is an optimal solution to equ:TprFprModel that defines the bounded TPR-FPR model. This is not surprising, but it is a practically useful result, because it allows one to solve equ:TprFprModel in two consecutive steps: First, set h_T to the Bayes ID classifier h_B. Second, when h_T is fixed, the optimal selection function c_T is obtained by solving equ:TprFprModel only w.r.t. c, which boils down to: [Bounded TPR-FPR model for known h(x)] Given an ID classifier h: 𝒳→𝒴, the optimal selective function c^*: 𝒳→[0,1] is a solution to min_c∈[0,1]^𝒳 R^S(h,c) s.t. ϕ(c) ≥ ϕ_min , ρ(c) ≤ ρ_max . Problem <ref> is meaningful even if h is not the Bayes ID classifier h_B. We can search for an optimal selective function c^*(x) for any fixed h, which in practice is usually our best approximation of h_B learned from the data. Let h: 𝒳→𝒴 be an ID classifier and r: 𝒳→ℝ its conditional risk r(x) = ∑_y∈𝒴 p_I(y| x) ℓ(y,h(x)). Let g(x) = p_O(x)/p_I(x) be the likelihood ratio of OOD and ID samples. Then, the set of optimal solutions of Problem <ref> contains the selective function c^*(x) = {[ 0 if s(x) > λ; τ(x) if s(x) = λ; 1 if s(x) < λ ] . with s(x) = r(x) + μ g(x), where the decision threshold λ∈ℝ and the multiplier μ∈ℝ are constants and τ: 𝒳→[0,1] is a function implicitly defined by the problem parameters. 
However, if is continuous, the set _s(x)=λ has probability measure zero, up to some pathological cases, and τ(x) can be arbitrary, i.e., the deterministic c^*(x)= s(x) ≤λ is optimal. If is finite, the value of τ(x) can be found by linear programming. The linear program and more details on the form of τ(x) are in the Appendix. Relation to Bounded-Abstention model for the non-OOD setup For π=0, the bounded TPR-FPR model reduces to the bounded-abstention option model for non-OOD setup <cit.>. Namely, ρ(c)≤ρ_ max can be removed because there are no OOD samples, and equ:TprFprModel becomes the bounded-abstention model: min_h,c(h,c), s.t. (c) ≥ϕ_ min, which seeks the selective classifier with guaranteed TPR and minimal selective risk. In the non-OOD setup, TPR is called coverage. An optimal solution of the bounded abstention model <cit.>, is composed of the Bayes ID classifier h_B, and the same optimal selective function as the TPR-FPR model equ:optSelFunTprFprModel, however, with μ=0 and τ(x)=τ, ∀ x∈, i.e., the score depends only on r(x) and an identical randomization is applied in all edge cases <cit.>. Therefore, r(x) is the optimal score to detect misclassified ID samples in non-OOD setup as it allows to achieve the minimal selective risk for any fixed coverage (TPR,). §.§ Bounded Precision-Recall rejection model The optimal selective classifier under the bounded TPR-FPR model does not depend on the prior of the OOD samples π, which is useful, e.g., when π is unknown in the testing stage. In the case π is known, it might be more suitable to constrain the precision rather than the FPR, while the constraint on TPR remains the same. In the context of precision, we denote ϕ(c) as recall instead of TPR. The precision ≺(c) is defined as the portion of samples accepted by c(x) that are actual ID samples, i.e., κ(c) = (1-π) ∫_ p(x|y̅≠∅) c(x) dx /∫_ p(x) c(x) dx = (1-π) ϕ(c)/ρ(c) π+ϕ(c) (1-π) . Let κ_ min∈[0,1] be a minimal acceptable precision and ϕ_ min∈[0,1] minimal acceptable recall (a.k.a. TPR). An optimal selective classifier (h_P,c_P) under the bounded Precision-Recall model is a solution of the problem min_h∈^,c∈[0,1]^(h,c) (c) ≥ϕ_ min ≺(c) ≥κ_ min where we assume that both minimizers exist. Let (h,c) be an optimal solution to equ:PrecRecallModel. Then (h_B,c), where h_B is the Bayes ID classifier equ:BayesCls, is also optimal to equ:PrecRecallModel. Theorem <ref> ensures that the Bayes ID classifier is an optimal solution to equ:PrecRecallModel. After fixing h_P=h_B, the search for an optimal selective function c leads to: [Bounded Prec-Recall model for known h(x)] Given ID classifier h→, the optimal selective function c^*→[0,1] is a solution to min_c∈[0,1]^(h,c) (c) ≥ϕ_ min ≺(c) ≥κ_ min . Let h→ be ID classifier and r→ its conditional risk r(x)=∑_y∈p_I(y| x)ℓ(y,h(x)). Let g(x)=p_O(x)/p_I(x) be the likelihood ratio of OOD and ID samples. Then, the set of optimal solutions of Problem <ref> contains the selective function c^*(x)={[ 0 s(x) > λ; τ(x) s(x) = λ; 1 s(x) < λ ] . s(x) = r(x) + μ g(x) where detection threhold λ∈, and multiplier μ∈ are constants and τ→[0,1] is a function implicitly defined by the problem parameters. §.§ Summary We proposed three rejection models for OOD setup which define the notion of optimal OOD selective classifier: Cost-based model, Bounded TRP-FPR model, and Bounded Precision-Recall model. We established that all three models, despite different formulation, share the class of optimal prediction strategies. 
Namely, the optimal OOD selective classifier (h^*,c^*) is composed of the Bayes ID classifier equ:BayesCls, h^*=h_B, and the selective function c^*(x)={[ 0 s(x) > λ; τ(x) s(x) = λ; 1 s(x) < λ ] . s(x) = r(x) + μ g(x) where λ, μ, and τ(x) are specific to the rejection model used. However, in all cases, the optimal uncertainty score s(x) for accepting the inputs is based on a linear combination of the conditional risk r(x) of the ID classifier h^* and the OOD/ID likelihood ratio g(x)=p_O(x)/p_I(x). On the other hand, from the optimal solution of the well-known Neyman-Pearson problem <cit.>, it follows that the likelihood ratio g(x) is the optimal score for OOD/ID discrimination. Our results thus show that the optimal OOD selective function needs to trade off the ability to detect misclassification of ID samples against the ability to distinguish ID from OOD samples. Single-score vs. double-score OODD methods The existing OODD methods, which we further call single-score methods, produce a classifier h→ and an uncertainty score s→. The score s(x) is used to construct a selective function c(x)= s(x)≤λ, where λ∈ℝ is a decision threshold chosen in post-hoc evaluation. Hence, the existing methods effectively produce a set of selective classifiers ={ (h,c) | c(x)= s(x) ≤λ , λ∈ℝ}. In contrast to existing methods, we established that the optimal selective function is always based on a linear combination of two scores: the conditional risk r(x) and the likelihood ratio g(x). Therefore, we propose the double-score method, which, in addition to a classifier h(x), produces two scores, s_r→ and s_g→, and uses their combination s(x)=s_r(x)+μ s_g(x) to accept inputs. Formally, the double-score method produces a set of selective classifiers ={(h,c)| c(x)= s_r(x) + μ s_g(x)≤λ , μ∈ℝ ,λ∈ℝ}. The double-score strategy can be used to leverage uncertainty scores from two chosen OODD methods: one focused on OOD/ID discrimination and the other on misclassification detection. § POST-HOC TUNING AND EVALUATION METRICS Let =((x_i,y̅_i)∈×| i=1,…,n ) be a set of validation examples drawn i.i.d. from a distribution p(x,y̅). Given a set of selective classifiers , produced by a single-score or double-score OODD method, the goal of the post-hoc tuning is to use to select the best selective classifier (h_n,c_n)∈ and estimate its performance on unseen samples generated from the same p(x,y̅). This task requires a notion of an optimal selective classifier, which we defined via the proposed rejection models. In Sec <ref> and Sec <ref>, we propose the post-hoc tuning and evaluation metrics for the Bounded TPR-FPR and Bounded Precision-Recall models, respectively. In Sec <ref> we review the existing evaluation metrics for OODD methods and point out their deficiencies. We will exemplify the proposed metrics on the synthetic data and OODD methods described in Sec <ref>. §.§ Synthetic data and exemplar single-score and double-score OODD methods Let us consider a simple 1-D setup. The input space is the real line and there are three ID labels {1,2,3}. ID samples are generated from p_I(x,1)=0.3(x;-1,1), p_I(x,2)=0.3(x;1,1), p_I(x,3)=0.4(x;3,1), where (x;μ,σ) is the normal distribution with mean μ and variance σ. OOD is the normal distribution p_O(x)=(x;3,0.2), and the OOD prior is π=0.25. We use the 0/1-loss ℓ(y,y')= y≠ y', i.e., the selective risk is the classification error on the accepted inputs. The known ID and OOD distributions allow us to evaluate the Bayes ID classifier h_B(x) by equ:BayesCls, its conditional risk r_B(x)=min_ŷ∈∑_y∈p_I(y| x)ℓ(y,ŷ) and the OOD/ID likelihood ratio g(x)=p_O(x)/p_I(x).
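For concreteness, these quantities can be written out directly in this synthetic setup. The following sketch (illustrative Python written for this exposition, not part of any released implementation; the second argument of the normal density is read as a variance, as above) evaluates the Bayes ID classifier, its conditional risk r_B(x), the likelihood ratio g(x), and the combined score s(x)=r_B(x)+μ g(x) on a grid:

import numpy as np
from scipy.stats import norm

priors = np.array([0.3, 0.3, 0.4])                          # ID label priors
means = np.array([-1.0, 1.0, 3.0])
sds = np.array([1.0, 1.0, 1.0])                             # ID component variances are 1
p_O = lambda x: norm.pdf(x, loc=3.0, scale=np.sqrt(0.2))    # OOD density, variance 0.2

def bayes_quantities(x, mu):
    # joint ID densities p_I(x, y), shape (n, 3)
    pxy = priors[None, :] * norm.pdf(x[:, None], means[None, :], sds[None, :])
    p_I = pxy.sum(axis=1)                                   # marginal ID density p_I(x)
    post = pxy / p_I[:, None]                               # posteriors p_I(y | x)
    h_B = post.argmax(axis=1)                               # Bayes ID classifier under 0/1-loss
    r_B = 1.0 - post.max(axis=1)                            # conditional risk r_B(x)
    g = p_O(x) / p_I                                        # OOD/ID likelihood ratio g(x)
    return h_B, r_B + mu * g                                # classifier and score s(x)

x_grid = np.linspace(-6.0, 8.0, 1401)
h_B, s = bayes_quantities(x_grid, mu=0.2)                   # e.g. the setting of Method B(0.2) below
accept = s <= 0.3                                           # an arbitrary illustrative threshold

Thresholding s at different values of λ traces out exactly the family of selective functions used by the exemplar methods below.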
We consider three exemplar single-score OODD methods A, B, and C. The methods produce the same optimal classifier h^*(x) and the selective functions c(x)= r_B(x) + μ g(x)≤λ with different settings of μ. I.e., method k∈{A,B,C} produces the set of selective classifiers _k={(h^*(x),c(x))| c(x)= r_B(x) + μ_k g(x) ≤λ , λ∈ℝ}, where the constant μ_k is defined as follows: * Method A(∞): μ=∞, s(x)=g(x). This corresponds to the optimal OOD/ID discriminator. * Method B(0.2): μ=0.2, s(x)=r_B(x)+0.2g(x). A combination of methods A and C. * Method C(0): μ=0, s(x)=r_B(x). This corresponds to the optimal misclassification detector. We also consider a double-score method, Method D(), which outputs the same optimal classifier h^*(x) and the scores s_r(x)=r_B(x) and s_g(x)=g(x). I.e., Method D() produces the set of selective classifiers _D={(h^*(x),c(x))| c(x)= r_B(x) + μ g(x) ≤λ , μ∈ℝ,λ∈ℝ}. Note that we have shown that _D contains an optimal selective classifier regardless of the reject option model used. §.§ Bounded TPR-FPR rejection model The bounded TPR-FPR model is defined using the selective risk (h,c), the TPR (c) and the FPR (c), the values of which can be estimated from the validation set as follows: (h,c)=∑_i∈_Iℓ(y_i,h(x_i)) c(x_i)/∑_i∈_I c(x_i) , (h,c) = 1/|_I|∑_i∈_Ic(x_i) , (h,c) = 1/|_O|∑_i∈_O c(x_i) where _I={i∈{1,…,n}|y̅_i≠∅} and _O={i∈{1,…,n}|y̅_i= ∅} are the indices of ID and OOD samples in , respectively. Given the target TPR _min∈(0,1] and FPR _max∈(0,1], the best selective classifier (h_n,c_n) out of is found by solving: (h_n,c_n)∈_(h,c)∈(h,c) (h,c) ≥ϕ_min , (h,c) ≤ρ_max . Proposed evaluation metric If problem equ:EmpiricalTprFprModel is feasible, the selective risk (h_n,c_n) is reported as the performance estimate of the OODD method producing . Otherwise, the method is marked as unable to achieve the target TPR and FPR. Tab. <ref> shows the selective risk for the methods A-D at the target TPR _min=0.7 and FPR _max=0.2. The minimal selective risk is achieved by method D(), followed by B(0.2) and A(∞), while C(0) is unable to achieve the target TPR and FPR. One can also visualize the selective risk over a range of operating points while bounding only _max or _min. E.g., by fixing _max we can plot the selective risk as a function of the attainable values of the TPR, by which we obtain the Risk-Coverage curve, known from the non-OOD setup, at the given _max. Recall that TPR is coverage. See the Appendix for the Risk-Coverage curve at fixed _max for methods A-D. ROC curve The problem equ:EmpiricalTprFprModel can be infeasible. To choose a feasible target on _min and _max, it is advantageous to plot the ROC curve, i.e., the values of TPR and FPR attainable by the classifiers in . For single-score methods, the ROC curve is a set of points obtained by varying the decision threshold: ()={((h,c),(h,c))| c(x)= s(x)≤λ , λ∈ℝ}. In the case of double-score methods, we vary _max∈[0,1] and for each _max we choose the maximal feasible TPR. I.e., the ROC curve is ()={(ϕ,ρ_max)|ϕ=max_(h,c)∈(h,c) s.t. (h,c)≤ρ_max , ρ_max∈[0,1]}. See the Appendix for the ROC curve of methods A-D. In Tab. <ref> we report the Area Under the ROC curve (AUROC), which is a commonly used summary of the entire ROC curve. The highest AUROC is achieved by Methods A(∞) and D(). Recall that Method A(∞) uses the optimal ID/OOD discriminator and the proposed Method D() subsumes A(∞). §.§ Bounded Precision-Recall rejection model Let (c)=(1-π)(c)/((1-π)(c)+π(c)) be the sample precision of the selective function c. Given the target recall _min∈(0,1] and precision κ_min∈(0,1], the best selective classifier (h_n,c_n) out of is found by solving (h_n,c_n)∈_(h,c)∈(h,c) (h,c) ≥_min , (h,c) ≥κ_min .
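A minimal sketch of how these empirical quantities can be computed from the validation set, and of the feasible-threshold search for a single-score method, is given below (illustrative Python written for this exposition; taking the OOD prior π as the validation OOD fraction is our assumption, not something fixed by the text):

import numpy as np

def empirical_metrics(score, loss, is_id, thr):
    # c(x) = [score <= thr]; loss holds ell(y_i, h(x_i)) for ID samples, 0 otherwise
    acc = score <= thr
    tpr = acc[is_id].mean()                                   # empirical TPR / recall
    fpr = acc[~is_id].mean()                                  # empirical FPR
    risk = loss[is_id & acc].sum() / max(acc[is_id].sum(), 1) # empirical selective risk
    pi = (~is_id).mean()                                      # assumed OOD prior
    prec = (1 - pi) * tpr / (pi * fpr + (1 - pi) * tpr + 1e-12)  # sample precision
    return risk, tpr, fpr, prec

def best_threshold_tpr_fpr(score, loss, is_id, phi_min, rho_max):
    # minimal selective risk subject to TPR >= phi_min and FPR <= rho_max
    best = None
    for thr in np.unique(score):
        risk, tpr, fpr, _ = empirical_metrics(score, loss, is_id, thr)
        if tpr >= phi_min and fpr <= rho_max and (best is None or risk < best[0]):
            best = (risk, thr)
    return best   # None signals that the target TPR/FPR is infeasible for this method

The same loop with the precision constraint in place of the FPR constraint gives the bounded Precision-Recall selection; a faster single-sweep version is sketched later, alongside the algorithm description in the supplementary material.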
Proposed evaluation metric If problem equ:EmpiricalPrecReallModel is feasible, the selective risk (h_n,c_n) is reported as the performance estimate of the OODD method which produced . Otherwise, the method is marked as unable to achieve the target Precision/Recall. Tab. <ref> shows the selective risk for the methods A-D at the target Precision κ_min=0.9 and recall _min=0.7. The minimal selective risk is achieved by the proposed method D(), followed by B(0.2) and A(∞), while method C(0) is unable to achieve the target Precision/Recall. Note that the single-score methods A-C achieve the same selective risk under both the TPR-FPR and Prec-Recall models, while the results for the double-score method D() differ. The reason is that both models share the same constraint TPR ≥ 0.7 (TPR is Recall), which is active, while the other two constraints are not active because each of them is a monotonic function of the value of the decision threshold. Precision-Recall (PR) curve To choose feasible bounds on κ_min and _min before solving equ:EmpiricalPrecReallModel, one can plot the PR curve, i.e., the values of precision and recall attainable by the classifiers in . For single-score methods, the PR curve is a set of points obtained by varying the decision threshold: ()={((h,c),(h,c))| c(x)= s(x)≤λ , λ∈ℝ}. In the case of double-score methods, we vary _min∈[0,1] and for each _min we choose the maximal feasible precision, i.e., ()={(κ,_min)|κ=max_(h,c)∈(h,c) s.t. (h,c)≥_min , _min∈[0,1]}. See the Appendix for the PR curve of methods A-D. We compute the Area Under the PR curve (AUPR) and report it for Methods A-D in Tab. <ref>. The rankings of the methods w.r.t. AUPR and AUROC are the same. §.§ Shortcomings of existing evaluation metrics The most commonly used metrics to evaluate OODD methods are the AUROC and AUPR <cit.>. Both metrics measure the ability of the selective function c(x) to distinguish ID from OOD samples. AUROC and AUPR are often the only metrics reported, although they completely ignore the performance of the ID classifier. Our synthetic example shows that a high AUROC/AUPR does not guarantee a good OOD selective classifier. E.g., Method A(∞), using the optimal OOD/ID discriminator, attains the highest (best) AUROC and AUPR (see Tab. <ref>); however, at the same time Method A(∞) achieves the highest (worst) selective risk under both rejection models, and it is also the worst misclassification detector according to the OSCR score defined below. The performance of the ID classifier h(x) is usually evaluated by the ID classification accuracy (a.k.a. closed set accuracy) <cit.> and by the OSCR score <cit.>. The ID accuracy measures the performance of h(x) assuming all inputs are accepted, i.e., c(x)=1, ∀ x∈; hence, unlike the selective risk, it says nothing about the performance on the actually accepted samples. E.g., Methods A-D in our synthetic example use the same classifier h(x) and hence have the same ID accuracy; however, they perform quite differently in terms of the other, more relevant metrics, like the selective risk or OSCR. The OSCR score is defined as the area under the CCR versus FPR curve <cit.>, where CCR stands for the correct classification rate on the accepted ID samples; in the case of 0/1-loss, CCR equals one minus the selective risk. The CCR-FPR curve evaluates the performance of the ID classifier on the accepted samples, but it ignores the ability of c(x) to discriminate OOD from ID samples as it does not depend on the TPR. E.g., Method C(0), using the optimal misclassification detector, achieves the highest (best) OSCR score; however, at the same time, it has the lowest (worst) AUROC and AUPR. Other, less frequently used metrics include: F1-score, FPR@TPRx, TNR@TPRx, CCR@FPRx <cit.>.
All these metrics are derived from either the ROC, the PR or the CCR-FPR curve, and hence they suffer from the same conceptual problems as AUROC, AUPR and OSCR, respectively. We argue that the existing metrics evaluate only one aspect of the OOD selective classifier, namely, either the ability to discriminate ID from OOD samples, or the performance of the ID classifier on the accepted (or possibly on all) ID samples. We show that in principle there can be methods that are the best OOD/ID discriminators but the worst misclassification detectors, and vice versa. Therefore, using individual metrics can (and often does) provide an inconsistent ranking of the evaluated methods. §.§ Summary We propose novel evaluation metrics derived from the definition of the optimal strategy under the proposed OOD rejection models. The proposed metrics simultaneously evaluate the classification performance on the accepted ID samples and guarantee the performance of the OOD/ID discriminator, via constraints on either the TPR-FPR or the Precision-Recall pair. The advantages of the proposed metrics come at a price. Namely, we need to specify a feasible target TPR and FPR, or Precision and Recall, depending on the model used. However, feasible values of the TPR-FPR and Prec-Recall pairs can be easily read off the ROC and PR curves, respectively. We argue that setting these extra parameters is better than using the existing metrics, which provide an incomplete view of the evaluated methods if used separately, or an inconsistent one if used in combination. Another issue is solving the problems equ:EmpiricalTprFprModel and equ:EmpiricalPrecReallModel to compute the proposed evaluation metrics and figures. Fortunately, both problems lead to optimization w.r.t. one or two variables in the case of the single-score and double-score methods, respectively. A simple and efficient algorithm that solves the problems in O(n log n) time is provided in the Appendix. § EXPERIMENTS In this section, we evaluate single-score OODD methods and the proposed double-score strategy, using the existing and the proposed evaluation metrics. We use MSP <cit.>, MLS <cit.>, ODIN <cit.> as baselines and REACT <cit.>, KNN <cit.>, VIM <cit.> as representatives of recent single-score approaches. We evaluate two instances of the double-score strategy. First, we combine the scores of MSP <cit.> and KNN <cit.>, and second, the scores of MSP and VIM <cit.>. The MSP score is asymptotically the best misclassification detector, while KNN and VIM are the two best OOD/ID discriminators according to their AUROC. We always use the ID classifier of the MSP method. The evaluation data and implementations of the OODD methods are taken from the OpenOOD benchmark <cit.>. Because the datasets have an unrealistically high portion of OOD samples, e.g., π>0.5, we use metrics that do not depend on π: namely, AUROC and OSCR as the most frequently used metrics, and the proposed selective risk at fixed TPR and FPR. We use the 0/1-loss, hence the reported selective risk is the classification error on accepted ID samples with guaranteed TPR and FPR. In all experiments we fix the target TPR to 0.8, while the FPR is set for each dataset to the highest FPR attained by all compared methods. Results are presented in Tab. <ref>. It is seen that the single-score methods with the highest AUROC and OSCR are always different, which prevents us from creating a single conclusive ranking of the evaluated approaches. MSP is almost consistently the best misclassification detector according to OSCR.
The best OOD/ID discriminator is, according to AUROC, one of the recent methods: REACT, KNN, or VIM. The proposed double-score strategies, KNN+MSP and VIM+MSP, consistently outperform the other approaches in all metrics. § CONCLUSIONS This paper introduces novel reject option models which define the notion of the optimal prediction strategy for OOD setups. We prove that all models, despite their different formulations, share the same class of optimal prediction strategies. The main insight is that the optimal prediction strategy must trade off the ability to detect misclassified examples against the ability to distinguish ID from OOD samples. This is in contrast to existing OOD methods that output a single uncertainty score. We propose a simple and effective double-score strategy that allows us to boost the performance of two existing OOD methods by combining their uncertainty scores. Finally, we suggest improved evaluation metrics for assessing OOD methods that simultaneously evaluate all aspects of the OOD methods and are directly related to the optimal OOD strategy under the proposed reject option models. § SUPPLEMENTARY MATERIAL Appendix <ref> provides proofs of the theorems stated in Sec. <ref>, where we presented the proposed reject option models for the OOD setup and their optimal strategies. Appendix <ref> is organized as follows: * Appendix <ref>. Proof of Theorem <ref> providing an optimal strategy of the cost-based OOD model. * Appendix <ref>. Proof of Theorem <ref> and Theorem <ref>, which claim that the Bayes ID classifier equ:BayesCls is an optimal solution of the bounded TPR-FPR and the bounded Precision-Recall model, respectively. The proof of both theorems is the same, hence we put it in the same section. * Appendix <ref>. Proof of Theorem <ref> providing a form of an optimal selective function under the bounded TPR-FPR model for an arbitrary fixed ID classifier. * Appendix <ref>. In this section, we characterize the form of the τ(x) function, which defines the acceptance probability of boundary inputs _s(x)=λ={x∈| s(x)=λ} for the optimal selective function equ:optSelFunTprFprModel. * Appendix <ref>. In the case of a finite input space, i.e., ||< ∞, we can find an optimal selective function under the bounded TPR-FPR model via the Linear Programming formulation described in this section. * Appendix <ref>. Proof of Theorem <ref> providing a form of an optimal selective function under the Bounded Precision-Recall model for an arbitrary fixed ID classifier. Appendix <ref> provides supplementary material for Sec. <ref>. The evaluation curves obtained for the exemplar methods on synthetic data are shown in Sec. <ref>. The algorithm to solve the problems equ:EmpiricalTprFprModel and equ:EmpiricalPrecReallModel is discussed in Sec. <ref>. § PROOFS OF THEOREMS FROM SEC.
<REF> §.§ Proof of Theorem <ref> Due to the additivity of the expected risk R(h,c) = _x,y∼ p(x,y̅)ℓ̅(y̅,(h,c)(x)), the optimal strategy minimizing the risk can be found for each input x∈ separately by solving q^*(x) = _q∈R_x(q) where R_x(q) is the partial risk defined as R_x(q) = ∑_y̅∈ p(x,y̅) ℓ̅(y̅,q) = p_O(x)π ( q=∅ ε_3 + q≠∅ε_2 ) + (1-π)∑_y∈p_I(x,y) ( q = ∅ε_1 + q≠∅ ℓ(y,q) ) We can see that R_x(q=) = p_O(x) π ε_3+p_I(x) (1-π) ε_1 R_x(q≠) = p_O(x) π ε_2 + (1-π) ∑_y∈p_I(x,y) ℓ(y,q) min_q∈ R_x(q) = p_O(x) π ε_2 + (1-π) p_I(x) r_B(x) , where r_B(x) is the minimal conditional risk r_B(x) = min_ŷ∈∑_y∈ p_I(y| x) ℓ(y,ŷ) It is optimal to reject when 0 ≤ min_q∈R_x(q) - R_x(q= ) = p_O(x) π ε_2 + (1-π) p_I(x) r_B(x) - p_O(x) π ε_3-p_I(x) (1-π) ε_1 = p_O(x) π (ε_2-ε_3)+(1-π) p_I(x) (r_B(x)-ε_1) = s(x) . The inequality 0≤ s(x) is equivalent to r_B(x) + (ε_2-ε_3)π/(1-π) g(x) ≥ε_1 . In the case that π<1, and with the convention K/0=∞ for K>0, the optimal strategy then reads q^*={[ s(x) ≥ 0; _ŷ∈∑_y∈p_I(x,y)ℓ(y,ŷ) s(x) ≤ 0 ] . Note that in the boundary case s(x)=0 we can reject or accept arbitrarily without affecting the solution. §.§ Proof of Theorem <ref> and Theorem <ref> The definition of h_B allows one to derive (h_B,c) ≤(h,c) as follows: (h_B,c) =1/ϕ(c)∫_∑_y∈p(x,y) ℓ(y,h_B(x)) c(x) dx = 1/ϕ(c)∫_p(x)c(x)(∑_y∈ p(y | x) ℓ(y,h_B(x))) dx ≤1/ϕ(c)∫_p(x)c(x)(∑_y∈ p(y | x) ℓ(y,h(x))) dx =1/ϕ(c)∫_∑_y∈p(x,y) ℓ(y,h(x)) c(x) dx =(h,c) . §.§ Proof of Theorem <ref> It is a direct consequence of the following theorem. For any (h,c) optimal to equ:TprFprModel, there exist real numbers λ, μ such that ∫_^< p_I(x)c(x)dx = ∫_^< p_I(x)dx , ∫_^> p_I(x)c(x)dx = 0 , where ^< = {x∈| r(x)+μp_O(x)/p_I(x) < λ} , ^> = {x∈| r(x)+μp_O(x)/p_I(x) > λ} . We first give a proof for countable sets , when integrals can be expressed as sums, then we present its generalization to arbitrary . Assume is countable and (h,c) is optimal to equ:TprFprModel. Observe that we do not need to pay attention to those x∈ for which p_I(x)=0 as they do not have any impact on the theorem statement. Denote ^+ = {x∈| p_I(x) > 0} , _0 = {x∈^+ | c(x) = 0} , _1 = {x∈^+ | c(x) = 1} , _2 = {x∈^+ | 0 < c(x) < 1 } . Let : ^+ →_+^2 be a mapping such that (x)=(p_O(x)/p_I(x), R(x)/p_I(x)), where R(x)= ∑_y∈p(x,y) ℓ(y,h(x)) . To confirm the existence of suitable λ, μ, it suffices to show that the sets A_0 = {(x) | x∈_0 } , A_1 = {(x) | x∈_1 } , A_2 = {(x) | x∈_2 } are “almost” linearly separable, i.e., there is a line L that includes A_2 and linearly separates the sets A_0∖ L, A_1∖ L. The existence of such L is ensured if ( ((A_0) ∩(A_1)) ∪ A_2 ) < 2 , where (·) denotes the convex hull and (·) denotes the span of a set of vectors. We will check the validity of condition equ:lincond by using the following two claims. Let x_1,x_2∈^+, r(x_1)>r(x_2), and p_O(x_1)/p_I(x_1)≥p_O(x_2)/p_I(x_2). Then, x_1∈_0 or x_2∈_1. Proof of the claim. By contradiction. Assume c(x_1) > 0 and c(x_2)<1. Define a selective function c' which is identical to c up to c'(x_1)=c(x_1)-Δ, c'(x_2)=c(x_2)+p_I(x_1)/p_I(x_2)Δ, where Δ = min{c(x_1), p_I(x_2)/p_I(x_1)(1-c(x_2))} > 0 . Now, observe that ϕ(c')-ϕ(c)=-Δ· p_I(x_1)+p_I(x_1)/p_I(x_2)Δ· p_I(x_2)=0 , ρ(c')-ρ(c)=-Δ· p_O(x_1)+p_I(x_1)/p_I(x_2)Δ· p_O(x_2) ≤ 0 , and ϕ(c) ((h,c')-(h,c))=-Δ· R(x_1)+p_I(x_1)/p_I(x_2)Δ· R(x_2)=Δ· p_I(x_1)(r(x_2)-r(x_1)) < 0 contradicts the optimality of (h,c). ▪ Let x_1,x_2,x_3 be elements of ^+ such that the points P_1=(x_1), P_2=(x_2), P_3=(x_3) are non-collinear and β· P_3=α_1· P_1 + α_2 · P_2 holds for some α_1, α_2, β∈_+, where α_1+α_2=1.
* If β < 1, then x_3∈_0 or {x_1,x_2}∩_1 ≠∅. * If β > 1, then x_3∈_1 or {x_1,x_2}∩_0 ≠∅. Proof of the claim. We will give a proof for β < 1 and note that the steps for β > 1 are analogous. By contradiction. Assume c(x_1)<1, c(x_2)<1, and c(x_3)>0. To simplify the notation, for i∈{1,2,3}, let p_i=p_I(x_i), q_i=p_O(x_i), and R_i=R(x_i). Define a selective function c' which is identical to c up to c'(x_1) =c(x_1)+Δ·α_1 p_3/p_1 , c'(x_2) =c(x_2)+Δ·α_2 p_3/p_2 , c'(x_3) =c(x_3)-Δ , where Δ = min{c(x_3), p_1/α_1 p_3(1-c(x_1)), p_2/α_2 p_3(1-c(x_2))} > 0 . Observe that ϕ(c')-ϕ(c)=Δα_1 p_3 + Δα_2 p_3 - Δ p_3 = 0 , ρ(c')-ρ(c)=Δα_1 p_3/p_1 q_1+ Δα_2 p_3/p_2 q_2 - Δ q_3 = Δ(β-1)q_3 ≤ 0 , and ϕ(c) ( (h,c')-(h,c) ) =Δα_1 p_3/p_1 R_1+Δα_2 p_3/p_2 R_2-Δ R_3 = Δ p_3 ( α_1 R_1/p_1 + α_2 R_2/p_2) - Δ R_3 = Δ p_3 βR_3/p_3 - Δ R_3 = Δ (β-1) R_3 < 0 contradicts the optimality of c. ▪ We are ready to confirm condition equ:lincond, this is done by analyzing the potential infeasible cases. * ((A_0)∩(A_1))=2. Then, there are x_1,x_2,x_3,x_4∈^+ such that P(x_1), P(x_2), P(x_3) are non-collinear, P(x_4) is inside the triangle P(x_1), P(x_2), P(x_3), and, either x_1,x_2,x_3∈_0, x_4∈_1, or x_1,x_2,x_3∈_1, x_4∈_0. * (A_2) = 2. There are x_1,x_2,x_3∈_2 such that P(x_1), P(x_2), P(x_3) are non-collinear. * ((A_0)∩(A_1))=1 and (((A_0)∩(A_1))∪ A_2)=2. There are x_1,x_2∈_0, x_3,x_4∈_1, x_5∈_2 such that points P(x_1), P(x_3) lie on a half-line H_1, points P(x_2), P(x_4) lie on a half-line H_2, where H_1∩ H_2=∅ and (H_1∪ H_2) is a line not containing P(x_5). * (A_2)=1, and (((A_0)∩(A_1))∪ A_2)=2. There are x_1∈_0, x_2∈_1, x_3,x_4∈_2 such that P(x_3)≠ P(x_4), points P(x_3), P(x_4) lie on a line L, and points P(x_1), P(x_2) lie in one half-plane of L, but not on L. It is not difficult to check that all the listed points configurations always enable to select a subset of two or three points whose existence is ruled out by Claim <ref> or Claim <ref>, respectively. Consider now that is an arbitrary set. For a,b,ε∈_+, where ε > 0, let B_a,b,ε={ (x,y) | a ≤ x < a + ε b ≤ y < b + ε}. For a given ε > 0, we can decompose the positive quadrant Q={(x,y) | x∈_+, y ∈_+} into countably many pairwise disjoint sets as follows Q=⋃ℬ(ε) , ℬ(ε) = { B_ε m, ε n, ε| m,n ∈} . For B∈ℬ(ε), define (B) = {x∈^+ | P(x) ∈ B} , p_I(B) = ∫_(B) p_I(x)dx . In analogy to ^+,_0,_1,_2, define ℬ^+(ε) = { B∈ℬ(ε) | p_I(B)>0} , c(B) = 1/p_I(B)∫_(B) p_I(x)c(x)dx ∀ B∈ℬ^+(ε) , ℬ_0(ε) = { B∈ℬ^+(ε) | c(B) = 0 } , ℬ_1(ε) = { B∈ℬ^+(ε) | c(B) = 1 } , ℬ_2(ε) = { B∈ℬ^+(ε) | 0 < c(B) < 1 } . The set ℬ^+(ε) can thus be viewed as a discretisation of . Since a · p_I(x) ≤ p_O(x)≤ (a+ε)· p_I(x) and b · p_I(x) ≤ R(x)≤ (b+ε)· p_I(x) for all x∈(B_a,b,ε), it holds a · p_I(B_a,b,ε) c(B_a,b,ε) ≤∫_(B_a,b,ε) p_O(x)c(x)dx ≤ (a+ε)· p_I(B_a,b,ε) c(B_a,b,ε) , b · p_I(B_a,b,ε) c(B_a,b,ε) ≤∫_(B_a,b,ε) R(x)c(x)dx ≤ (b+ε)· p_I(B_a,b,ε) c(B_a,b,ε) . Define P̌(B_a,b,ε) =(a,b) , P̂(B_a,b,ε) =(a+ε,b+ε) , i.e., P̌(B_a,b,ε) and P̂(B_a,b,ε) is the bottom-left and top-right corner of B_a,b,ε, respectively. Claims <ref> and <ref> can be generalized to elements of ℬ^+(ε) as follows. Let B_a,b,ε,B_a',b',ε∈ℬ^+(ε), a≥ a'+ε, and b>b'+ε. Then, B_a,b,ε∈ℬ_0(ε) or B_a',b',ε∈ℬ_1(ε). Proof of the claim. Denote B=B_a,b,ε, B'=B_a',b',ε. By contradiction. Assume c(B) > 0 and c(B')<1. Find a selective function c' which is identical to c up to c'(B)=c(B)-Δ, c'(B')=c(B)+p_I(B)/p_I(B')Δ, where Δ = min{c(B), p_I(B')/p_I(B)(1-c(B'))} > 0 . Observe that ϕ(c')-ϕ(c)=-Δ· p_I(B)+p_I(B)/p_I(B')Δ· p_I(B')=0 . 
With the use of equ:estim-q and equ:estim-R, derive ρ(c')-ρ(c) = ∫_(B')p_O(x)c'(x)dx - ∫_(B)p_O(x)c(x)dx ≤ (a'+ε) p_I(B')c(B') - a· p_I(B) c(B) ≤ a (ϕ(c')-ϕ(c)) = 0 , and ϕ(c) ((h,c')-(h,c)) = ∫_(B')R(x)c'(x)dx - ∫_(B)R(x)c(x)dx ≤ (b'+ε)p_I(B')c(B')-b · p_I(B) c(B) < b · (ϕ(c') - ϕ(c)) = 0 . Hence, (h,c') contradicts the optimality of (h,c).▪ Let ε > 0 and B_1,B_2,B_3 be elements of ℬ^+(ε). * If β·P̌(B_3)=α_1·P̂(B_1) + α_2 ·P̂(B_2), where α_1, α_2, β∈_+, α_1+α_2=1, β < 1, then B_3∈ℬ_0(ε) or {B_1,B_2}∩ℬ_1(ε) ≠∅. * If β·P̂(B_3)=α_1·P̌(B_1) + α_2 ·P̌(B_2), where α_1, α_2, β∈_+, α_1+α_2=1, β > 1, then B_3∈ℬ_1(ε) or {B_1,B_2}∩ℬ_0(ε) ≠∅. Proof of the claim. Apply the technique from the proof of Claim <ref> to the proof of Claim <ref>. ▪ For ε>0, define C_0(ε) = ⋃ℬ_0(ε) , C_1(ε) = ⋃ℬ_1(ε) , C_2(ε) = ⋃ℬ_2(ε) . For ε > ε' > 0, let B∈ℬ^+(ε), B'∈ℬ^+(ε'), B'⊂ B. Observe that B∈ℬ_0(ε) and p_I(B') > 0 implies B'∈ℬ_0(ε'). And similarly, B∈ℬ_1(ε) and p_I(B') > 0 implies B'∈ℬ_1(ε'). This means that C_2(ε/2) ⊆ C_2(ε) , C_0(ε/2) ∪ C_2(ε/2) ⊆ C_0(ε) ∪ C_2(ε) , C_1(ε/2) ∪ C_2(ε/2) ⊆ C_1(ε) ∪ C_2(ε) . We can thus define C_2 =lim_n→∞ C_2(ε/2^n) , C_0 = (lim_n→∞[C_0(ε/2^n) ∪ C_2(ε/2^n)] ) ∖ C_2 , C_1 = (lim_n→∞[C_1(ε/2^n) ∪ C_2(ε/2^n)] ) ∖ C_2 . where we utilize the fact: if a sequence of sets {D_n}_n=0^∞ fulfills D_n+1⊆ D_n⊆^2 for all n∈, then lim_n→∞ D_n=⋂_n=0^∞ D_n. Note that each C_i corresponds to A_i (see <ref>– <ref>) in the following sense: ∫_(A_2Δ C_2)p(x)dx = 0 , ∫_((A_0∪ A_2)Δ (C_0 ∪ C_2))p(x)dx = 0 , ∫_((A_1∪ A_2)Δ (C_1 ∪ C_2))p(x)dx = 0 , where Δ denotes the symmetric difference of two sets. It holds ( ((C_0) ∩(C_1)) ∪ C_2 ) < 2 , otherwise we can find ε > 0 and a configuration of two or three elements of ℬ^+(ε) which is ruled out by Claims <ref> and <ref> (the analysis of infeasible configurations is analogous to cases <ref>– <ref>). §.§ Characterization of function τ in Theorem <ref> Let there be real numbers μ, λ such that equ:TprFprModel fulfills R(x)+μ p_O(x) = λ p_I(x) for all x∈. Then, there are real numbers γ_1, γ_2, χ_1, χ_2, where γ_1≤γ_2, and χ_1, χ_2∈ [0,1], such that the selective function τ defined as τ(x) = {[ 1 γ_1 < p_O(x)/p_I(x) < γ_2; χ_1 p_O(x)/p_I(x) = γ_1; χ_2 p_O(x)/p_I(x) = γ_2; 0 ]. is an optimal solution to equ:TprFprModel. Since, for all x∈, R(x)=λ p_I(x) - μ p_O(x), we can write (h,c) = ∫_R(x)c(x)dx/ϕ(c) = λϕ(c)-μρ(c)/ϕ(c) = λ - μρ(c)/ϕ(c) . For a,b∈_+, let M_a,b={x∈| a < p_O(x)/p_I(x) < b}, and M_a={x∈|p_O(x)/p_I(x) = a}. Define continuous functions Φ,Ρ: [0,1]^2 ×_+^2 → [0,1] as Φ(α,β,s,t) = α∫_M_s p_I(x)dx + ∫_M_s,t p_I(x)dx + s < t β∫_M_t p_I(x)dx , Ρ(α,β,s,t) = α∫_M_s p_O(x)dx + ∫_M_s,t p_O(x)dx + s < t β∫_M_t p_O(x)dx . Distinguish two cases. Case μ<0. The problem reduces to min_h,cρ(c)/ϕ(c) (c) ≥ϕ_ min (c) ≤ρ_ max . An optimal solution τ is obtained by setting γ_1 = 0 , γ_2 = inf{ t∈_+ |Φ(1,1,0,t) ≥} , χ_2 = {[ inf{β∈ [0,1] |Φ(1,β,0,γ_2) ≥} if γ_2 > 0; inf{β∈ [0,1] |Φ(β,0,0,0) ≥} otherwise ]. , χ_1 = {[ 1 if γ_2>0; χ_2 otherwise ]. . Note that Ρ(χ_1, χ_2, γ_1, γ_2)> means that the problem is not feasible. Case μ>0. The problem reduces to max_h,cρ(c)/ϕ(c) (c) ≥ϕ_ min (c) ≤ρ_ max . Define a partial function F:[0,1]×_+ → [0,1]×_+ such that F(α,s)=(β,t) iff Ρ(α,β,s,t) = , t = sup{a ∈_+ |Ρ(α,0,s,a) ≤} , β = sup{b ∈ [0,1] |Ρ(α,b,s,t) ≤} . By the assumption that the problem is feasible, an optimal solution τ is obtained by setting γ_1 = sup{s ∈_+ |∃α∈[0,1] : F(α,s)=(β,t) Φ(α, β, s,t) ≥} , χ_1 = sup{α∈ [0,1] | F(α,γ_1)=(β,t) Φ(α, β, γ_1, t) ≥} , (χ_2,γ_2) = F(χ_1,γ_1) . 
§.§ Linear programming formulation of the Bounded TPR-FPR model for finite input sets For any (h,c) optimal to equ:TprFprModel, ϕ(c) = ϕ_min unless (h,c)=0. By contradiction. Assume that (h,c)>0 and ϕ(c) = α·ϕ_min for some α > 1. Let c' be the selective function defined by c'(x)=c(x) / α for all x∈. Then, ϕ(c') = ϕ_min, ρ(c') =ρ(c)/α≤ρ_max, (h,c') =(h,c)/α<(h,c) , and thus (h,c') contradicts the optimality of (h,c). If is a finite set, Lemma <ref> enables us to reformulate Problem <ref> as the following linear program: min_c∈[0,1]^∑_x∈1/ϕ_min R(h,x)c(x) ∑_x∈p_I(x)c(x) = ϕ_min , ∑_x∈p_O(x)c(x) ≤ρ_max . §.§ Proof of Theorem <ref> Let (h,c^*) be optimal to equ:PrecRecallModel. Denote C = ϕ(c^*). By rewriting equ:PrecRecallModel, it turns out that (h,c^*) is optimal to min_h,c∫_1/CR(x)c(x)dx [ ϕ(c) = C; ρ(c) ≤ (1-π)(1-κ_min)/πκ_min C .; ] According to Lemma <ref>, this problem has the same form as equ:TprFprModel, and as a result, Theorem <ref> is applicable to c^*. § POST-HOC TUNING AND EVALUATION METRICS §.§ Figures In the case of the bounded TPR-FPR model, the objective, and also the evaluation metric, is the selective risk attained at the minimal acceptable TPR ϕ_min and the maximal acceptable FPR ρ_max. In addition to reporting a selective risk for a single operating point, it can be useful to fix the maximal acceptable FPR ρ_max and show the selective risk as a function of the varying TPR/coverage, which yields the Risk-Coverage curve at FPR ρ_max. The RC curve at ρ_max=0.2 for our example on synthetic data is shown in Figure <ref>(a). The proposed double-score method D() is seen to achieve the lowest selective risk in the entire range of coverages available. The selective risk of the methods D() and C(0) is the same; however, the method C(0) has a much lower maximal attainable coverage, namely ϕ_max=0.58, and hence the method is marked as unable to achieve the target coverage; see Table <ref>. The problem equ:EmpiricalTprFprModel defining the bounded TPR-FPR model can be infeasible. To choose a feasible target value of _min and _max, it is advantageous to plot the ROC curve, that is, the TPR and FPR values attainable by the classifiers in . The ROC curve for the methods in our example is shown in Figure <ref>(b). The operating point (ϕ_min,ρ_max) is attainable if the ROC curve of the given method is entirely above the point. In the case of the bounded Precision-Recall model, the objective, and also the evaluation metric, is the selective risk attained at the minimal acceptable Precision κ_min and the minimal acceptable Recall/TPR ϕ_min. In our example, the single-score methods achieve the same selective risk under both models, as we use the same target TPR/recall and the selective risk is a monotonic function of the score (see the discussion in Sec. <ref>); hence we do not show the risk-coverage curve at fixed precision. However, we show the Precision-Recall curve, Figure <ref>(c), which is useful for determining feasible target values for precision and recall. Again, the operating point (κ_min,ϕ_min) is achievable if the PR curve of the given method is entirely above the point. §.§ Algorithms The single-score OODD methods output a set of selective OOD classifiers ={ (h,c) | c(x)= s(x) ≤λ , λ∈ℝ} parameterized by the decision threshold λ. Double-score OODD methods output a set ={(h,c)| c(x)= s_r(x) + μ s_g(x)≤λ , μ∈ℝ ,λ∈ℝ} parameterized by λ∈ℝ and μ∈ℝ. The post-hoc tuning aims to find the best OOD selective classifier out of based on the appropriate metric.
To this end, the existing methods use the AUROC, AUPR or OSCR score as the metric to find the best classifier. Instead, we formulate the bounded TPR-FPR and the bounded Precision-Recall models, where finding the best selective classifier amounts to solving the constrained optimization problems equ:EmpiricalTprFprModel and equ:EmpiricalPrecReallModel, respectively. In the case of single-score methods, the problems equ:EmpiricalTprFprModel and equ:EmpiricalPrecReallModel are 1-D optimization problems: namely, one needs to find the decision threshold λ∈ℝ which leads to the minimal selective risk and simultaneously satisfies both constraints on the validation set =((x_i,y̅_i)∈×| i=1,…,n ). The threshold λ influences the involved metrics, that is, ((h,c), (c), (c), (c)), only via the value of the selective function c(x)= s(x) ≤λ, which is a step function of the optimized threshold λ. Hence, we can see ((λ), (λ), (λ), (λ)) as functions of λ, and we can find all n+1 achievable values of ((λ), (λ), (λ), (λ)) in a single sweep over the validation examples sorted according to the value of s(x_i). This procedure has complexity O(n log n), attributed to the sorting of the n examples. In the case of the double-score methods, we need to optimize w.r.t. λ and μ, which are the free parameters of the selective function c(x)= s_r(x) + μ s_g(x)≤λ. The selective classifier can be seen as a binary linear classifier in the 2-D space of the two scores. Hence, we equivalently parameterize the selective function as c(x)= s_r(x)cos(α) + s_g(x)sin(α)≤λ', where α∈=[0,π] and λ'∈ℝ. We approximate by a finite set ⊂ which contains d equidistantly placed values over the interval [0,π]. For each α∈, we compute all n+1 values of ((λ), (λ), (λ), (λ)) using the algorithm described above. We found that setting d=360 is enough, as higher values of d do not change the results.
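The single sweep and the grid over α can be sketched as follows (illustrative Python written for this exposition, not the authors' code): one sort of the validation scores gives all n+1 attainable operating points via cumulative sums, and the double-score case simply repeats the sweep for each angle.

import numpy as np

def sweep_operating_points(score, loss, is_id):
    # all n+1 operating points of (selective risk, TPR, FPR) for c(x) = [score <= thr]
    # O(n log n): one sort plus cumulative sums over the sorted validation examples
    order = np.argsort(score, kind="stable")
    idm = is_id[order].astype(float)
    lss = np.where(is_id[order], loss[order], 0.0)
    acc_id = np.concatenate([[0.0], np.cumsum(idm)])         # accepted ID counts
    acc_ood = np.concatenate([[0.0], np.cumsum(1.0 - idm)])  # accepted OOD counts
    acc_loss = np.concatenate([[0.0], np.cumsum(lss)])       # loss summed over accepted ID
    tpr = acc_id / max(idm.sum(), 1.0)
    fpr = acc_ood / max((1.0 - idm).sum(), 1.0)
    risk = acc_loss / np.maximum(acc_id, 1.0)                # empirical selective risk
    return risk, tpr, fpr

def tune_double_score(s_r, s_g, loss, is_id, phi_min, rho_max, d=360):
    # grid over alpha in [0, pi]; keep the feasible point with minimal selective risk
    best = None
    for alpha in np.linspace(0.0, np.pi, d):
        risk, tpr, fpr = sweep_operating_points(
            s_r * np.cos(alpha) + s_g * np.sin(alpha), loss, is_id)
        feasible = (tpr >= phi_min) & (fpr <= rho_max)
        if feasible.any():
            i = int(np.argmin(np.where(feasible, risk, np.inf)))
            if best is None or risk[i] < best[0]:
                best = (risk[i], alpha, i)
    return best   # None means no (alpha, threshold) pair meets the target TPR/FPR

The bounded Precision-Recall variant only changes the feasibility test; the precision can be computed from TPR, FPR and an assumed OOD prior as in the main text.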
http://arxiv.org/abs/2307.04944v1
20230711002022
Linear mixed models for complex survey data: implementing and evaluating pairwise likelihood
[ "Thomas Lumley", "Xudong Huang" ]
stat.ME
[ "stat.ME", "stat.CO" ]
As complex-survey data becomes more widely used in health and social-science research, there is increasing interest in fitting a wider range of regression models. We describe an implementation of two-level linear mixed models in R using the pairwise composite likelihood approach of Rao and co-workers. We discuss the computational efficiency of pairwise composite likelihood and compare the estimator to the existing stagewise pseudolikelihood estimator in simulations and in data from the PISA educational survey. § INTRODUCTION Mixed or multilevel models for variability in regression associations are important in the social sciences and health sciences, so when data are collected using complex survey designs it is of interest to be able to fit these models and estimate the same population parameters as if data were collected from a cohort or with some ignorable sampling design. Design-based estimation in mixed models is challenging. The classical approach to design-based inference is to use sampling probabilities to reweight estimating functions or pseudolikelihoods that are sums of functions of one observation at a time <cit.>. Since the purpose of mixed models is to estimate relationships between individuals, they intrinsically cannot be reduced to linear estimating functions in this way. Two main approaches to design-based inference for mixed models have been proposed. The first, proposed by <cit.> and expanded by <cit.>, takes advantage of the stagewise independence in stratified multistage sampling. At each stage, a random sample of next-stage units is taken independently within each stratum, and random effects for each unit are introduced, giving a loglikelihood that is a simple sum and can be reweighted using stage-specific sampling probabilities. Implementations of this approach, and extensions to generalised linear models and other latent variable models, have been developed and are available in standard software <cit.>. There does not appear to be a consistent name for this estimation approach; we propose `stagewise pseudolikelihood', emphasising how it takes advantage of stagewise (conditional) independence between clusters at each stage in the design and each level in the model. The second approach is to replace the population objective function with one that can be more easily estimated. <cit.> and <cit.> proposed pairwise composite likelihood, where the population objective function is a sum of loglikelihoods for pairs of observations, reweighted using reciprocals of pairwise sampling probabilities. <cit.> also develop a Bayesian approach to inference in a wider range of models based on pairwise likelihood. Software has not previously been available for this approach and, to our knowledge, no comparisons with stagewise pseudolikelihood have been published. Each approach has theoretical advantages.
Extracting estimates of the realised random effects is straightforward for stagewise pseudolikelihood; these are of interest in themselves and are an important component of efficient quadrature algorithms for generalised linear mixed models. On the other hand, the stagewise pseudolikelihood approach is applicable only when the clusters in the model are the same as (or nested in) the sampling units in the design, and while the approach allows for stratified sampling, the available Stata implementations do not. Since the weights do not enter the loglikelihood linearly, the asymptotic structure for proving consistency of stagewise pseudolikelihood has cluster size as well as cluster number going to infinity, and choices need to be made about the scaling of weights at different stages/levels. Previous research had applied the pairwise estimator only to the settings where the design and model structure are the same (or nested), but in fact the pairwise estimator can be applied to very general designs and models <cit.>. On the other hand, estimators (predictors) of the random effects are currently not known, and as a result it is not currently possible to use adaptive Gaussian quadrature <cit.> to fit generalised linear mixed models with pairwise likelihood. We would also expect some efficiency loss from considering only pairs. In this paper we present a novel R <cit.> implementation of the weighted pairwise likelihood approach, and compare it in simulations to stagewise pseudolikelihood as implemented in Stata <cit.>, and to naive maximum likelihood (which would be appropriate in practice only under ignorable sampling). We consider the setting of a two-level model where the clusters are the same in the design and the model. The R package is available from <https://github.com/tslumley/svylme/tree/pairwise-vs-sequential> and in the supplemental material. § DESIGN AND MODEL We consider a two-level linear mixed model as described by <cit.> for groups indexed by i and observations within groups indexed by j Y_ij = X_ijβ+Z_ijb_i+ϵ_ij where ϵ_ij∼ N(0,σ^2), b_i∼ N(0,σ^2V(θ)). In this paper we are interested in estimating β, σ^2, and θ, not the realised b_i. Under this model, Y|X has a multivariate Normal distribution with mean vector Xβ. The covariance matrix of Y is block-diagonal with the block for group i being σ^2Ξ_i = σ^2(I+Z_i^TVZ_i). The loglikelihood for this model is ℓ(β,θ,σ^2)= -1/2∑_ilog |σ^2Ξ_i(θ)| -1/2σ^2∑_i (Y_i-X_i^Tβ)^TΞ_i(θ)^-1(Y_i-X_i^Tβ) There is a notation clash between the survey literature and the multilevel model literature. Suppose we are studying a sample of students within each of a sample of schools. From the viewpoint of sampling we would call the schools “stage 1” and the students “stage 2”, but from the viewpoint of multilevel models the students are “level 1” and the schools “level 2”. We will specify 'stage' or 'level' explicitly. As noted above, the usual approach to design-based inference is to estimate the population objective function or estimating function by a weighted sum over the sample. It is not straightforward to estimate ℓ from a multistage sample: when there is subsampling within groups Ξ_i^-1 and |Ξ_i| for a group depend on Z for both sampled and non-sampled units. The population objective function (the `census composite loglikelihood') for the pairwise likelihood approach would be ℓ̃_P(β,θ)= ∑_i ∑_j<kℓ_i,jk(β,θ) where ℓ_i,jk(β,θ) is the likelihood based on the two observations Y_ij and Y_ik. 
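To make the pairwise term concrete: under this model a within-group pair (Y_ij, Y_ik) is bivariate Normal with mean (X_ijβ, X_ikβ) and covariance σ^2 Ξ_i,jk, where Ξ_i,jk is the 2×2 matrix built from the two rows of random-effect covariates. The following sketch (illustrative Python written for this exposition; the paper's implementation is in R and is organised differently, and the stacking convention for Z is ours) evaluates one such term:

import numpy as np

def pair_loglik(y, X, Z, beta, V, sigma2):
    # y: length-2 responses, X: 2 x p fixed-effect rows, Z: 2 x q random-effect rows,
    # V: q x q scaled covariance of the random effects, sigma2: residual variance
    Xi = np.eye(2) + Z @ V @ Z.T              # 2 x 2 block Xi_{i,jk}
    Sigma = sigma2 * Xi                       # covariance of the pair
    r = y - X @ beta
    det = Sigma[0, 0] * Sigma[1, 1] - Sigma[0, 1] * Sigma[1, 0]
    quad = (Sigma[1, 1] * r[0] ** 2 - 2.0 * Sigma[0, 1] * r[0] * r[1]
            + Sigma[0, 0] * r[1] ** 2) / det  # r' Sigma^{-1} r via the 2 x 2 adjugate
    return -0.5 * (2.0 * np.log(2.0 * np.pi) + np.log(det) + quad)

The weighted pairwise objective is then the sum of such terms over the sampled pairs, each divided by its pairwise inclusion probability π_i,jk.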
As <cit.> pointed out when originally describing composite likelihood, each ℓ_i,jk(β,θ) is a true loglikelihood, so that E_β_0,θ_0[ℓ_i,jk(β,θ,σ^2)] is maximised at (β,θ,σ^2)=(β_0,θ_0,σ^2_0) and the derivative ∇ℓ_i,jk(β,θ,σ^2) is an unbiased estimating function. It follows immediately that E_β_0,θ_0,σ^2_0[ℓ̃_P(β,θ,σ^2)] is also maximised by (β,θ,σ^2)=(β_0,θ_0,σ^2_0), and standard smoothness arguments then show that the population maximum pairwise likelihood estimators (β̃,θ̃,σ̃^2) are consistent for (β, θ, σ^2) as N→∞. In a sample we observe only those pairs with R_ijR_ik=1, and need to weight by the reciprocal of the probability of observing a pair, π_i,jk=E[R_ijR_ik], to obtain ℓ̂_P(β,θ,σ^2)= ∑_i ∑_j<kR_i,jR_i,k/π_i,jkℓ_i,jk(β,θ,σ^2), the design-weighted pairwise loglikelihood. If the design allows a law of large numbers and a central limit theorem, standard arguments again show that the maximum weighted pairwise loglikelihood estimators (β̂, θ̂,σ̂^2) are consistent and asymptotically Normal. Importantly, the asymptotic setting for consistency does not require group sizes to go to infinity. § COMPUTATIONAL ISSUES <cit.> proposed estimating the parameters by solving the weighted pairwise score equations. We have done this for simple models, but it is relatively inconvenient to automate for more complex models. Instead, we follow the approach of <cit.> by profiling out β and σ^2 to obtain a profile weighted pairwise deviance and then using a general-purpose optimiser to minimise it. Since the dimension of θ is often much lower than that of β, profiling gives a simpler optimisation problem. We define the profile deviance as d̃(θ) = -2max_β,σ^2ℓ̃_P(β,θ,σ^2) for the population and d̂(θ) = -2max_β,σ^2ℓ̂_P(β,θ,σ^2) for the sample. The corresponding estimates β̃_θ and β̂_θ for β are given by generalised least squares in an expanded data set. Define X_P as the 2N_P× p matrix formed by stacking the 2× p design matrices for the N_P pairs. Similarly, Y_P is formed by stacking the 2-vectors of Y for each pair, and Ξ_P is block-diagonal with 2× 2 blocks Ξ_i,jk. In the sample, define X_S and Y_S as the rows of X_P and Y_P corresponding to sampled pairs, and Ξ_S as the submatrix of Ξ_P corresponding to sampled pairs. Note that because Ξ_P is block-diagonal with blocks for each pair, the submatrix of Ξ_P^-1 corresponding to sampled pairs is just Ξ_S^-1. The maximum pairwise likelihood estimate of β does not depend on σ^2, so we can write it just as a function of θ: in the population β̃_θ = (X_P^TΞ_P^-1(θ)X_P)^-1X_P^TΞ_P^-1(θ)Y_P and in the sample β̂_θ = (X_S^TW^1/2Ξ_S^-1(θ)W^1/2X_S)^-1X_S^TW^1/2Ξ_S^-1(θ)W^1/2Y_S, where W is the diagonal matrix whose two entries for the pair (ij, ik) are both π_i,jk^-1. The MPLE of σ^2 is σ̃^2_θ = 1/(2N_P)(Y_P-X_Pβ̃_θ)^TΞ_P^-1(θ) (Y_P-X_Pβ̃_θ) in the population and the weighted estimator in the sample is σ̂^2_θ = 1/(2N̂_P)(Y_S-X_Sβ̂_θ)^TW^1/2Ξ_S^-1(θ)W^1/2 (Y_S-X_Sβ̂_θ) where N̂_P = ∑_i ∑_j<kR_i,jR_i,kπ_i,jk^-1. Plugging these into the pairwise loglikelihoods gives the population profile pairwise deviance d̃(θ) = 2N_Plog(2πσ̃^2_θ) + ∑_i∑_j<klog|Ξ_i,jk(θ)|, and its sample estimator d̂(θ) = 2N̂_Plog(2πσ̂^2_θ) + ∑_i∑_j<kR_i,jR_i,k/π_i,jklog|Ξ_i,jk(θ)|. We use Powell's quadratic bound-constrained optimiser BOBYQA <cit.> to find θ minimising d̂(θ), with a starting value obtained from an unweighted (`naive') maximum-likelihood fit.
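A sketch of the profiling step on the stacked sampled pairs (illustrative Python written for this exposition; applying the weights in both the generalised-least-squares step and the residual quadratic form is one natural reading of the weighted estimator above, and is an assumption here):

import numpy as np

def profile_deviance(theta, X_pairs, Y_pairs, Z_pairs, w, make_V):
    # X_pairs, Y_pairs, Z_pairs: lists of 2 x p, length-2 and 2 x q arrays, one per sampled pair
    # w: reciprocal pairwise sampling probabilities; make_V(theta): the q x q matrix V(theta)
    V = make_V(theta)
    Xi_inv, logdet = [], 0.0
    for Z, wl in zip(Z_pairs, w):
        Xi = np.eye(2) + Z @ V @ Z.T
        Xi_inv.append(np.linalg.inv(Xi))
        logdet += wl * np.log(np.linalg.det(Xi))
    p = X_pairs[0].shape[1]
    A, b = np.zeros((p, p)), np.zeros(p)
    for X, Y, Wi, wl in zip(X_pairs, Y_pairs, Xi_inv, w):
        A += wl * X.T @ Wi @ X                # weighted GLS normal equations
        b += wl * X.T @ Wi @ Y
    beta = np.linalg.solve(A, b)
    N_hat = np.sum(w)
    rss = sum(wl * (Y - X @ beta) @ Wi @ (Y - X @ beta)
              for X, Y, Wi, wl in zip(X_pairs, Y_pairs, Xi_inv, w))
    sigma2 = rss / (2.0 * N_hat)
    return 2.0 * N_hat * np.log(2.0 * np.pi * sigma2) + logdet, beta, sigma2

A bound-constrained optimiser (BOBYQA in the paper; any general-purpose optimiser would do in a sketch like this) is then run over θ alone.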
In computing d̂(θ) we take advantage of the explicit formulas available for inverse and determinant of 2× 2 matrices to rewrite computations over i,j,k and i',j',k' as sets of three or four matrix operations over i and i'. Although there are potentially more pairs than observations, using pairwise composite likelihood does not significantly increase computational effort, and may actually reduce it when group sizes are large. Consider the unweighted case: if there are n_1 groups of size m, computing the pairwise profile deviance involves a bounded number of operations for each of the m(m-1) pairs and so scales as m^2. In general, computing the full deviance or score requires computing a determinant and solving an m× m linear system in each group, both of which scale as m^3. In special cases the determinant may be available in closed form and the matrix Ξ^-1 for a group may be available directly; computing the deviance then requires an m× m matrix–vector multiplication, which scales computationally as m^2 and is of the same order as the pairwise computation. §.§ Variance estimation For general designs, a Horvitz–Thompson-type variance estimator for pairwise composite likelihood estimators involves fourth-order sampling probabilities. These are typically not supplied with survey data. Computing them is straightforward but tedious for simple designs, but is infeasible for the multi-phase designs used in many large surveys. It is common practice in secondary analysis of survey data to approximate the variance by treating PSUs as sampled with replacement. Following <cit.> we use a similar with-replacement approximation to make variance calculations tractable. Define the score for a pair of observations U_i,jk=U_i,jk(β̂,θ̂,σ̂^2)=.∂ℓ_i,jk/∂β|_(β,θ,σ^2)=(β̂,θ̂,σ̂^2) and the corresponding Fisher infomation I_i,jk=-.∂^2 ℓ_i,jk/∂β^2|_(β,θ,σ^2)=(β̂,θ̂,σ̂^2). The empirical population sensitivity and variability matrices <cit.> are, respectively, H̃ = ∑_i∑_j<k I_i,jk J̃ = ∑_i ∑_j<k∑_j'<k'U_i,jk^TU_i,j'k' The variance of the census parameter would be estimated by var[β̃]= H̃^-1J̃H̃^-1 Writing v^⊗ 2 for v^Tv, we approximate the sample sensitivity and variability matrices by Ĥ = ∑_i∑_j<kR_i,jR_i,k/π_ijπ_ikI_i,jk Ĵ = n_1/n_1-1∑_i (R_i/π_i∑_j<kR_i,jR_i,k/π_j|iπ_k|iU_i,jk)^⊗ 2 and the variance of β̂ by var[β̂]= Ĥ^-1ĴĤ^-1. Since β̂ is a generalised least squares estimator, the form of Ĥ and Ĵ involve straightforward weighted sums of squares and products of X and residuals. The same argument can be used to approximate the variances of σ̂^2 and θ̂, but the expressions for U_i,jk and I_i,jk become more complex, and the Normal approximation to their distribution less accurate. We suggest resampling to estimate the uncertainty in the variance parameters when it is of interest; this is implemented in the function. §.§ Strata and additional sampling stages It is straightforward to allow for additional sampling stages before the stage at which the model groups are sampled. Suppose that a survey takes a stratified sample of school districts, then samples schools within the districts and students within the schools. We can fit a two-level model with schools and students as the levels. The sampling probabilities π_i will be probabilities for schools, and π_jk|i will be conditional probabilities for pairs of students given schools. Point estimation proceeds exactly as before. 
The only change in variance estimation is that Ĵ is replaced by Ĵ = ∑_h∈strata n_h/(n_h-1)∑_l=1^n_h(∑_i∈PSU(l)R_i/π_i∑_j<kR_i,jR_i,k/π_j|iπ_k|iU_i,jk)^⊗ 2, where n_h is the number of PSUs — school districts — in stratum h. That is, the variance is computed treating weighted totals for PSUs in each first-stage stratum as independent and identically distributed, rather than treating weighted totals for groups as independent and identically distributed. §.§ User interface The svy2lme function combines user interface ideas from the survey <cit.> and lme4 <cit.> packages. The notation for specifying mixed models is the same as in lme4, using the | conditioning operator as an addition to the traditional model formula notation. The survey design is specified using design objects from the survey package, which combine the data and survey metadata into a single object; this object is then passed to analysis functions in place of a data frame. The notation specifies a model with and and an intercept in the X matrix of fixed effects and and an intercept in the matrix of random effects Z for each value of . As in lme4, the default is that random effects are not constrained to be independent, but this constraint can be added by specifying multiple random-effect groups. That is, specifies a random intercept for school and, independent of this, a random coefficient for without a random intercept. For the common case of two-stage sampling with stratified simple random samples at each stage, the pairwise sampling probabilities can easily be computed from sampling probabilities or population sizes at each stage. When the user supplies probabilities for each stage but these do not agree with stratified simple random sampling, an approximation due to Hájek (1964) is used within each stage π_ij≈π_iπ_j[1-(1-π_i)(1-π_j)/∑_kπ_k(1-π_k)] with the denominator approximated by the unbiased estimator ∑_k R_k/π_kπ_k(1-π_k)=∑_k R_k(1-π_k). § SIMULATIONS We present simulations comparing the point estimates for β, σ^2 and θ for naive maximum likelihood in the sample, our proposed composite likelihood estimator, and three versions of stagewise pseudolikelihood as implemented in Stata <cit.>. The three stagewise pseudolikelihood estimators are based on three approaches to rescaling the sampling weights to reduce bias: unscaled weights, stage-2 weights scaled to sum to the (population) cluster size, and the proposal of <cit.> that scales stage-1 weights to the average weight for observations in the cluster and the stage-2 weights to 1. In the first set of simulations our focus is on the relative efficiency of the composite likelihood estimator. In the second set, we use simulation settings where the stagewise pseudolikelihood estimator is known to be substantially biased and demonstrate that the composite likelihood estimator does not share its bias. In each iteration of each simulation we create a finite population satisfying a linear mixed model and define clusters to be the groups in the mixed model. We define strata without reference to the model except that clusters are nested in strata, take a stratified two-stage sample, and then fit the five estimators. We have one covariate, X, with a fixed slope, and one covariate, Z, with a random intercept and slope. Except as otherwise specified we do not assume the random intercept and slope are independent (though we do not display the covariance estimate for reasons of space). That is, we simulate from Y=β_0+β_1 X+β_2 Z + b_i0+b_iz Z+ϵ and fit the model .
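A compact sketch of this data-generating process (illustrative Python; the actual simulation code in the linked repository is written in R, and the parameter values below are arbitrary placeholders rather than those used in the paper):

import numpy as np

rng = np.random.default_rng(1)

def make_population(n_clusters=1000, m=20, beta=(1.0, 0.5, 0.25),
                    sd_b0=0.5, sd_bz=0.3, rho=0.2, sd_e=1.0):
    # finite population from a two-level model with a correlated random intercept and slope in Z
    b_cov = np.array([[sd_b0 ** 2, rho * sd_b0 * sd_bz],
                      [rho * sd_b0 * sd_bz, sd_bz ** 2]])
    b = rng.multivariate_normal([0.0, 0.0], b_cov, size=n_clusters)   # (b_i0, b_iz)
    cluster = np.repeat(np.arange(n_clusters), m)
    X = rng.normal(size=n_clusters * m)
    Z = rng.normal(size=n_clusters * m)
    Y = (beta[0] + beta[1] * X + beta[2] * Z
         + b[cluster, 0] + b[cluster, 1] * Z
         + rng.normal(scale=sd_e, size=n_clusters * m))
    return cluster, X, Z, Y, b

Strata can then be formed by grouping blocks of clusters, without reference to the model, before the two-stage sample is drawn.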
All of the simulation code is available at <https://github.com/tslumley/svylme-paper> and in the supplementary material. In these simulations we use as the true finite-population parameter values the estimates from a linear mixed model fitted to the whole population and estimate the bias and standard error after subtracting these true values from the estimates. We estimate the bias by the median and the standard error by the scaled median absolute deviation mad(x) = 1.4826×med|x-med(x)|. Since the stagewise pseudolikelihood estimator and the pairwise likelihood estimator use essentially the same sandwich variance estimators, and given space restrictions, we do not report a comparison of the estimated standard errors. §.§ Comparing efficiency under non-informative sampling In Tables 1–6 we are primarily interested in the efficiency of estimation, comparing stagewise pseudolikelihood, pairwise likelihood, and unweighted (naive) maximum likelihood. The sampling in all these simulations is non-informative, allowing efficiency to be compared to naive maximum likelihood. We see across all the tables that, even under non-informative sampling, stagewise pseudolikelihood without weight scaling is unreliable, giving very large biases for the variance components. All the other estimators are essentially unbiased for all the parameters in all the settings considered. stagewise pseudolikelihood with the GK scaling performs very well across these simulations, with essentially no loss of efficiency compared to maximum likelihood. The cluster size scaling has minor loss of efficiency, primarily for the variance components. The pairwise likelihood estimator has substantial efficiency loss for the variance components, especially for the random-intercept variance. The loss of efficiency for variance components is larger when the variance components themselves are larger: eg, comparing tables 2 and 5 to 3 and 4. Notably, there is much less loss of efficiency for the pairwise likelihood estimator in Table 6, where cluster sizes are smaller. If all cluster sizes were equal to two and there was simple random sampling of clusters, the pairwise likelihood and maximum likelihood estimators would be the same, so it is reasonable that pairwise likelihood performs relatively better with smaller clusters. §.§ Comparing bias under strongly informative sampling The stagewise pseudollikelihood estimator is consistent as cluster size goes to infinity, regardless of the sampling design, and without need for weight scaling, because the realised random effects can then be estimated precisely and the estimation problem effectively reduces to linear regression with cluster-specific offsets. When clusters are not large, the stagewise pseudolikelihood estimator can be biased, especially when sampllng is strongly informative for the random effects, and the weight scaling strategy matters. In contrast, the weighted pairwise score equations are unbiased under any sampling design. We present three simulations with strongly informative sampling to illustrate this distinction. They use the same population, strata, and clusters as Table 2, but differ in the sampling. In Table <ref>, 2 or 6 observations are taken from a cluster according to whether the residual variance is above the median, in Table <ref>, 2 or 6 observations are taken according to whether the absolute value of b_1 is above the median, and in Table <ref>, 2 or 6 observations are taken according to whether b_1 is positive or negative. 
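For instance, the last of these informative designs can be sketched as follows (illustrative Python continuing the hypothetical population above; we read b_1 as the random slope, which is an assumption on our part):

import numpy as np

rng = np.random.default_rng(2)

def informative_subsample(cluster, b, n_small=2, n_large=6):
    # clusters with a positive random slope contribute 6 observations, the rest 2
    sizes = np.where(b[:, 1] > 0, n_large, n_small)
    chosen = []
    for i, size in enumerate(sizes):
        members = np.flatnonzero(cluster == i)
        chosen.append(rng.choice(members, size=size, replace=False))
    # second-stage weights are m / size for the selected units, with m the cluster size
    return np.concatenate(chosen), sizes

Because the number of observations taken depends directly on the realised random effect, the stage-2 sampling is strongly informative for the variance components.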
In all three scenarios, the pairwise likelihood estimator is approximately unbiased for all parameters. It still has higher variance than the other estimators, as it did under non-informative sampling. stagewise pseudolikelihood with unscaled weights is approximately unbiased for the regression parameters, but as before has substantial bias for the variance parameters. Each of the scaled stagewise pseudolikelihood estimators fails for at least one of the scenarios, and in Table 9 the two scaled estimators have appreciable bias even in the regression coefficients. § EXAMPLE We fit a linear mixed model to data selected from the PISA 2012 educational survey <cit.>, obtained from the pisa2012lite R package<cit.>. PISA is an international survey including questionnaires about school, parent, and student characteristics and evaluations of student performance on a variety of domains, with subsampling and multiple imputation to reduce respondent burden. We analysed data on mathematics performance and gender, related problem-solving skills, the proportion of girls at the school, and the ratio of students to mathematics teachers, using the New Zealand subset of the data. The outcome variable is provided as five multiple imputations (`plausible values'), and we present results for the first plausible value from weighted pairwise likelihood and from Stata's stagewise pseudolikelihood using the `gk' scaling. We also present a combined result from all five plausible values using weighted pairwise likelihood and combining results with Rubin's rules <cit.>. The full code and results are in the supplementary material and at <github.com/tslumley/svylme-paper>. The stagewise pseudolikelihood estimator was very close to the boundary of the parameter space, and different maximisation options produced somewhat different variance component estimates. Table <ref> shows that the two estimators give broadly comparable results. Since `female' is the reference category for the gender variable, the interaction with proportion of girls shows that girls did better in schools with more girls and boys did better in schools with more boys. Mathematics self-efficacy and openness to problem solving are strongly associated with better results. Staff:student ratio shows no evidence of association. There is modest evidence of variation in the mean result between schools, which could be quite large. Variation in the gender difference appears to be small, and may be essentially non-existent. The sandwich standard errors are 10–20% smaller than the bootstrap standard errors; the bootstrap is recommended if it is feasible. Table <ref> reinforces these messages. As the higher standard deviations for the variance components indicate, the variance component estimates were less stable between plausible values than the fixed-effect estimates. § DISCUSSION The pairwise likelihood estimator is less efficient than the stagewise pseudolikelihood estimator, and while it is more widely reliable, the settings where informative sampling causes bias in the stagewise pseudolikelihood estimator are arguably unrealistic in any practical application. Our results confirm again that appropriate scaling of weights at each stage is important for stagewise pseudolikelihood. It may be surprising that the pairwise likelihood estimator can be so inefficient, since Normal distributions are characterised by their means and variances. There are two potential explanations. 
First, the loglikelihoods for pairs are correlated and we do not take advantage of this correlation. Second, the Gaussian loglikelihood depends more directly on the precision matrix than the variance matrix, and the sample precision matrix is not the sample submatrix of the population precision matrix. The pairwise likelihood estimator has the potential to be extended to settings where the model groups and design clusters are independent. Its performance compared to the stagewise pseudolikelihood estimator is sufficiently good that such an extension would be worth pursuing.
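A side note on the PISA example above: the combination of results across the five plausible values uses Rubin's rules. The following is a minimal R sketch under the assumption that the point estimates and squared standard errors have already been extracted from the five separate fits; the function and object names are ours, not part of any package.

# Rubin's rules for combining M multiply-imputed analyses (here, plausible values).
combine_rubin <- function(est, within) {
  # est: M x p matrix of point estimates (one row per plausible value)
  # within: M x p matrix of squared standard errors from each fit
  M     <- nrow(est)
  qbar  <- colMeans(est)                   # combined point estimate
  ubar  <- colMeans(within)                # average within-imputation variance
  bvar  <- apply(est, 2, var)              # between-imputation variance
  total <- ubar + (1 + 1/M) * bvar         # Rubin's total variance
  data.frame(estimate = qbar, se = sqrt(total))
}

A call such as combine_rubin(rbind(est1, est2, est3, est4, est5), rbind(se1^2, se2^2, se3^2, se4^2, se5^2)) would then return the combined estimates and standard errors.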
http://arxiv.org/abs/2307.04424v1
20230710085906
About the algebraic closure of formal power series in several variables
[ "Michel Hickel", "Mickaël Matusinski" ]
math.AC
[ "math.AC", "math.AG", "13J05, 13F25, 14J99, 12-08" ]
Michel Hickel and Mickaël Matusinski, Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400 Talence, France
[2020] 13J05, 13F25, 14J99 and 12-08

Let K be a field of characteristic zero. We deal with the algebraic closure of the field of fractions of the ring of formal power series K[[x_1,…,x_r]], r≥ 2. More precisely, we view the latter as a subfield of an iterated Puiseux series field 𝒦_r. On the one hand, given y_0∈𝒦_r which is algebraic, we provide an algorithm that reconstructs the space of all polynomials which annihilate y_0 up to a certain order (arbitrarily high). On the other hand, given a polynomial P∈ K[[x_1,…,x_r]][y] with simple roots, we derive a closed form formula for the coefficients of a root y_0 in terms of the coefficients of P and a fixed initial part of y_0.

About the algebraic closure of formal power series in several variables.
Michel Hickel and Mickaël Matusinski
August 12, 2023
========================================================================

§ INTRODUCTION.

Let K be a field of characteristic zero and K̄ its algebraic closure. Let x:=(x_1,…,x_r) be an r-tuple of indeterminates where r∈ℕ, r≥ 2. Let K[x] and K[[x]] denote respectively the domains of polynomials and of formal power series in r variables with coefficients in K, and K(x) and K((x)) their fraction fields. Both fields embed naturally into K((x_r))((x_r-1))⋯((x_1)), the latter being naturally endowed with the lexicographic valuation in the variables (x_1,…,x_r) (see Section <ref>). By iteration of the classical Newton-Puiseux theorem (see e.g. <cit.> and <cit.>), one can derive a description of an algebraic closure of K((x_r))((x_r-1))⋯((x_1)) in terms of iterated fractional Laurent series (see <cit.>, <cit.>): The following field, where L ranges over the finite extensions of K in K̄:
ℒ_r:= ⋃_p∈ℕ^*⋃_L L((x_r^1/p))((x_r-1^1/p))⋯ ((x_1^1/p))
is the algebraic closure of K((x_r))((x_r-1))⋯((x_1)).
Within this framework, there are several results concerning those iterated fractional Laurent series which are solutions of polynomial equations with coefficients either in K(x) or K((x)). More precisely, the authors of these works provide necessary constraints on the supports of such a series (see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). More recently, Aroca, Decaup and Rond study more precisely the support of Laurent-Puiseux power series which are algebraic over K[[x]] (with certain results for K of positive characteristic) <cit.>. As asserted in <cit.>, one can prove the following result (see the proof in Section <ref>), which could also be derived from the methods in <cit.> or <cit.>: The following field 𝒦_r, where L ranges over the finite extensions of K in K̄, is an algebraically closed extension of K(x) and K((x)) in ℒ_r:
𝒦_r := ⋃_(p,q)∈ℕ^*×ℕ^r-1⋃_L L(( ( x_1/x_2^q_1)^1/p,…, ( x_r-1/x_r^q_r-1)^1/p ,x_r^1/p)).
Let ỹ_0∈𝒦_r and f̃,g̃∈ L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] such that ỹ_0=f̃/g̃. Let α be the lexicographic valuation of g̃ (where it is understood that the valuation of x_i^1/p is equal to 1/p times the valuation of x_i). Denote g̃=ax^α(1-ε) with ε having positive valuation.
We expand: ỹ_0=f̃/g̃=f̃ a^-1x^-α∑_k∈ε^k as a generalized power series ∑_n∈(^r,≤_lex) c_n/px^n/p (the latter is well defined by <cit.>). We set: Supp(∑_n∈(^r,≤_lex) c_n/px^n/p):={1/pn∈(1/p^r,≤_lex) | c_n/p≠ 0}. Let us call the elements of 𝒦_r rational polyhedral Puiseux series (since one can observe that the support with respect to the variables x_i's of such a series is included in the translation of some rational convex polyhedral cone). We are interested in those rational polyhedral Puiseux series that are algebraic over K((x)), say the rational polyhedral Puiseux series which verify a polynomial equation P̃(x,y)=0 with coefficients which are themselves formal power series in x: P̃(x,y)∈ K[[x]][y]∖{0}. Let us call such a series algebroid. If such a series ỹ_0 admits a vanishing polynomial of degree at most d in y, we will say that ỹ_0 is algebroid of degree bounded by d. More precisely, we extend our previous work on algebraic (over K(x)) Puiseux series in several variables <cit.>, by dealing with the following analogous questions: ∙ Reconstruction of pseudo-vanishing polynomials for a given algebroid rational polyhedral Puiseux series. In this part, for simplicity reasons, we will assume that K is algebraically closed. For Q̃(x,y)∈ K[[x]][y] a nonzero polynomial, the (x)-adic order of Q̃ is the maximum of the integers k such that Q̃(x,y)∈ (x)^kK[[x]][y] where (x) denotes the ideal of K[[x]] generated by x_1,…,x_r. We consider ỹ_0=f̃/g̃ with f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] algebroid of degree bounded by d. For an arbitrarily large valuation l∈, we provide an algorithm which computes polynomials Q̃(x,y)∈ K[[x]][y] such that the expansion of Q̃(x,ỹ_0)∈𝒦_r as a rational polyhedral Puiseux series has valuation greater than l. More precisely, let us denote ζ_i:=(x_i/x_i+1^q_i)^1/p for i=1,…,r-1, and ζ_r:=x_r^1/p. We suppose that for any k∈, one can compute all the coefficients of ζ^n with n_1+⋯+n_r≤ k in f̃ and g̃. Moreover, we assume that the lexicographic valuations with respect to ζ of f̃ and g̃ are given. Let d∈^* and ν̃_0∈. Let ỹ_0∈𝒦_r be algebroid of degree bounded by d. We assume that there is a vanishing polynomial P̃ of degree bounded by d and of (x)-adic order bounded by ν̃_0. We consider formal power series f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] such that ỹ_0=f̃/g̃. Let β=(β_1,…,β_r) be the lexicographic valuation of f̃g̃ with respect to the variables ζ_i:=(x_i/x_i+1^q_i)^1/p, ζ_r:=x_r^1/p, and q_i':=q_i+β_i+1+1 for i=1,…,r-1. We set: [ L̃: ^r → ; (n_1,…,n_r) ↦ n_r+q'_r-1n_r-1+q'_r-1q'_r-2n_r-2+⋯+q'_r-1q'_r-2⋯ q'_1n_1. ] The algorithm described in Section <ref> provides for any ν∈ a parametric description of the space of all the polynomials Q̃_ν(x,y)∈ K[[x]][y] with _yQ̃_ν≤ d and of (x)-adic order bounded by ν̃_0 such that, for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), one has: L̃(n)≥ν. Note that the condition L̃(n)≥ν for 1/pn∈Supp Q̃_ν(x,ỹ_0) implies that infinitely many coefficients of Q̃_ν(x,ỹ_0) vanish since n∈^r. With more information on ỹ_0, we can use other linear forms L̃, see Theorem <ref>. ∙ Description of the coefficients of an algebroid rational polyhedral Puiseux series in terms of the coefficients of a vanishing polynomial. Now, let a polynomial P̃(x,y)∈ K[[x]][y] with only simple roots and a root ỹ_0∈𝒦_r be given. 
Up to a change of coordinates (see Section <ref>), we reduce to the case of a polynomial P(u,y)∈ K[[u]][y] whose support has constraints (see Lemma <ref>), and a simple root y_0∈ L[[u]] (where [L:K]<∞). In Theorem <ref> and Corollary <ref>, we provide a closed form formula for the coefficients of y_0 in terms of the coefficients of P and the coefficients of a fixed initial part of y_0. This is obtained as a consequence of a generalization of the multivariate Flajolet-Soria formula for Henselian equations (<cit.>), see Theorem <ref>. Our article is organized as follows. In Section <ref>, we prove a monomialization lemma (Lemma <ref>) which is a key to reduce to the case of formal power series annihilating a polynomial whose support has constraints (Lemma <ref>). This is done by a change of variable (<ref>) corresponding to the lexicographic valuation. Moreover, we distinguish two sets s and t of variables and we show that our series y_0 can be expanded as y_0=∑_nc_n(s) t^n where the c_n(s)∈ K[[s]] are algebraic power series (see Lemma <ref>) of bounded degree (see Lemma <ref>). Section <ref> is devoted to the proof of the nested depth lemma (Theorem <ref>). It is used in the subsequent sections to ensure the finiteness of the computations. We use elementary properties on Bézout's identity and the resultant of two polynomials. In Section <ref>, we show how to reconstruct all the polynomials of given bounded degrees which vanish at given several algebraic power series. This is based on Section <ref> and our previous work on algebraic multivariate power series <cit.>. In Section <ref>, we prove our first main result, Theorem <ref> and its variant Theorem <ref>. Sections <ref> and <ref> are devoted to our second question. In Section <ref>, we study what we call strongly reduced Henselian equations (see Definition <ref>) and prove a generalisation of the multivariate Flajolet-Soria formula (see Theorem <ref>). In Section <ref>, we prove how to reduce to the case of a strongly reduced Henselian equation (see Theorem <ref>) and, in the case of an equation with only simple roots, we derive a closed form formula for the coefficients of a solution y_0 in terms of the coefficients of the equation and of a bounded initial part of y_0 (see Corollary <ref>). § PRELIMINARIES Let us denote ℕ:=ℤ_≥ 0 and ℕ^*:=ℕ∖{0}=ℤ_>0. For any set ℰ, we denote by |ℰ| its cardinal. We systematically write the vectors using underlined letters, e.g. x:=(x_1,…,x_r), n:=(n_1,…,n_r), and in particular 0:=(0,…,0). Moreover, x^n:=x_1^n_1⋯ x_r^n_r. The floor function will be denoted by ⌊ q ⌋ for q∈ℚ. For a polynomial P(y)=∑_i=0^d a_iy^i with coefficients a_i in a domain and a_d≠ 0, we consider that its discriminant Δ_P is equal to the resultant of P and ∂ P/∂ y (instead of the more usual convention Δ_P=(-1)^d(d-1)/2/a_dRes(P,∂ P/∂ y)). For any sequence of nonnegative integers m=(m_i,j)_i,j with finite support and any sequence of scalars a=(a_i,j)_i,j indexed by i∈ℤ^r and j∈ℕ, we set: * m!:=∏_i,jm_i,j!; * a^m:=∏_i,ja_i,j^m_i,j; * |m|:=∑_i,jm_i,j, ||m||:= ∑_i,jm_i,j j∈ and g(m) := ∑_i,jm_i,j i∈^r. In the case where k=(k_0,…,k_l), we set k :=∑_j=0^lk_j j. In the case where k=(k_i)_i∈Δ where Δ is a finite subset of ℤ^r, we set g(k):=∑_i∈Δk_i i. We will consider the following orders on tuples in ℤ^r: The lexicographic order n≤_lexm :⇔ n_1<m_1 or (n_1=m_1 and n_2<m_2) or ⋯ or (n_1=m_1, n_2=m_2, … and n_r<m_r). The graded lexicographic order n≤_grlexm :⇔ |n |<|m| or (|n |=|m| and n≤_lexm). 
The product (partial) order n≤m :⇔ n_1≤ m_1 and n_2≤ m_2 ⋯ and n_r≤ m_r. Note that we will apply also the lexicographic order on ℚ^r. Similarly, one has the anti-lexicographic order denoted by ≤_alex. Considering the restriction of ≤_grlex to ^r (for which ^r has order type ω), we denote by S(k) (respectively A(k) for k≠ 0), the successor element (respectively the predecessor element) of k in (ℕ^r,≤_grlex). Given a variable x and a field K, we call Laurent series in x with coefficients in K any formal series ∑_n≥ n^0c_nx^n for some n^0∈ and c_n∈ K for any n. They consist in a field, which is identified with the fraction field K((x)) of K[[x]]. To view the fields K(x) and K((x)) as embedded into K((x_r))((x_r-1))⋯((x_1)) means that the rational fractions or formal meromorphic fractions can be represented as iterated formal Laurent series, i.e. Laurent series in x_1 whose coefficients are Laurent series in x_2, whose coefficients... etc. This corresponds to the following approach. As in <cit.>, we identify K((x_r))((x_r-1))⋯((x_1)) with the field of generalized power series (in the sense of <cit.>, see also <cit.>) with coefficients in K and exponents in ℤ^r ordered lexicographically, usually denoted by K((X^ℤ^r))^lex. By definition, such a generalized series is a formal expression s=∑_n∈ℤ^rc_nX^n (say a map ℤ^r→ K) whose support (s):={n∈ℤ^r | c_n≠ 0} is well-ordered. The field K((X^ℤ^r))^lex comes naturally equipped with the following valuation of rank r: [ v_x: K((X^ℤ^r))^lex → (ℤ^r∪{∞},≤_lex); s≠ 0 ↦ min((s)); 0 ↦ ∞ ] The identification of K((X^ℤ^r)) and K((x_r))((x_r-1))⋯((x_1)) reduces to the identification X^(1,0,…,0)=x_1 , X^(0,1,…,0)=x_2 , … , X^(0,…,0,1)=x_r. By abuse of terminology, we call K((X^ℤ^r))^lex or K((x_r))((x_r-1))⋯((x_1)) the field of (iterated) multivariate Laurent series. Note also that this corresponds to the fact that the power series in the rings K[x] and K[[x]] are viewed as expanded along (ℤ^r,≤_lex). Similarly, the field ℒ_r is a union of fields of generalized series L((X^(ℤ^r)/p))^lex and comes naturally equipped with the valuation of rank r: [ v_x: ℒ_r → (ℚ^r∪{∞},≤_lex); s≠ 0 ↦ min((s)); 0 ↦ ∞. ] We will need another representation of the elements in K(x) and K((x)), via the embedding of these fields into the field K((X^ℤ^r))^grlex with valuation: [ w_x: K((X^ℤ^r))^grlex → (ℤ^r∪{∞},≤_grlex); s≠ 0 ↦ min((s)); 0 ↦ ∞. ] and the same identification: X^(1,0,…,0)=x_1 , X^(0,1,…,0)=x_2 , … , X^(0,…,0,1)=x_r. For a polynomial P(y)=∑_j=0^da_jy^j∈ K((X^ℤ^r))^grlex[y], we denote: w_x(P(y)):=min_j=0,…,d{w_x(a_j)}. We will also use the following notations to keep track of the variables used to write the monomials. Given a ring R, we denote by R((x_1^ℤ,…,x_r^ℤ))^lex and R((x_1^ℤ,…,x_r^ℤ))^grlex the corresponding rings of generalized series ∑_n∈ℤ^rc_nx^n with coefficients c_n in R. Accordingly, let us write R((x_1^ℤ,…,x_r^ℤ))^lex_Mod and R((x_1^ℤ,…,x_r^ℤ))^grlex_Mod the subrings of series whose actual exponents are all bounded by below by some constant for the product order. Note that these subrings are both isomorphic to the ring ⋃_n∈ℤ^rx^nR[[x]]. Let us write also R((x_1^ℤ,…,x_r^ℤ))^lex_≥_lex0 and R((x_1^ℤ,…,x_r^ℤ))^grlex_≥_grlex0 the subrings of series s with v_x(s)≥_lex0, respectively w_x(s)≥_grlex0. Let f be non zero in K[[ξ_1,…,ξ_r]]. There exists ρ_1,…,ρ_r-1∈ℕ such that, if we set {[ η_1 := ξ_1/ξ_2^ρ_1; ⋮; η_r-1 := ξ_r-1/ξ_r^ρ_r-1; η_r := ξ_r ]. then f(ξ_1,…,ξ_r)=η^αg(η_1,…,η_r) where α∈ℕ^r and g is an invertible element of K[[η_1,…,η_r]]. 
Moreover, for all i=1,…,r-1, ρ_i≤ 1+β_i+1 where β:=v_ξ(f). Let us write f=ξ^β h where β=v_ξ(f) and h∈ K((ξ_1^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v_ξ(h)=0. Note that h can be written as h=h_0+h_1 where h_0∈ K((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v_ξ(h_0)=0, and h_1∈ξ_1K[[ξ_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_Mod. If h_1∈ K[[ξ_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod, then we set ρ_1=0. Otherwise, let ρ_1 be the smallest positive integer such that: ρ_1≥sup{1 ; (1-m_2)/m_1, m∈supp h_1}. Note that, since m_1≥ 1 and m_2≥ -β_2, we have that ρ_1≤ 1+β_2. We also remark that the supremum is achieved for 0≥ m_2≥ -β_2 and 1+β_2 ≥ m_1≥ 1. Let η_1:=ξ_1/ξ_2^ρ_1. For every monomial in h_1, one has ξ_1^m_1ξ_2^m_2…ξ_r^m_r=η_1^m_1ξ_2^m_2+ρ_1m_1…ξ_r^m_r. Hence, m_2+ρ_1m_1≥ 1 by definition of ρ_1. So (m_2+ρ_1m_1,…,m_r)>_lex0, meaning that h_1∈ K[[η_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and that v(h_1)>_lex0 where here v is the lexicographic valuation with respect to the variables (η_1,ξ_2,…,ξ_r). So h∈ K[[η_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0. Note that the exponents m_3,…, m_r remain unchanged in the support of h. Suppose now that we have obtained h∈ K[[η_1,…,η_p]]((ξ_p+1^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and that v(h)=0 where v is now the lexicographic valuation with respect to the variables (η_1,…,η_p,ξ_p+1,…,ξ_r). The induction step is similar to the initial one. As before, let us write h=h_0^(p+1)+h_1^(p+1) where h_0^(p+1)∈ K[[η_1,…,η_p]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v(h_0^(p+1))=0, and h_1^(p+1)∈ξ_p+1K[[η_1,…,η_p,ξ_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_Mod. If h_1^(p+1)∈ K[[η_1,…,η_p,ξ_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod, then we set ρ_p+1=0. Otherwise, let ρ_p+1 be the smallest positive integer such that: ρ_p+1≥sup{1 ; (1-m_p+2)/m_p+1, m∈supp h_1^(p+1)}. Note that, since m_p+1≥ 1 and m_p+2≥ -β_p+2 (since these exponents m_p+2 remained unchanged until this step), we have that ρ_p+1≤ 1+β_p+2. If we set η_p+1:=ξ_p+1/ξ_p+2^ρ_p+1, then h∈ K[[η_1,…,η_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0 (where v is now the lexicographic valuation with respect to the variables (η_1,…,η_p+1,ξ_p+2,…,ξ_r)). By iteration of this process, we obtain that h ∈ K[[η_1,…,η_r-1]]((ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0 (where v is now the lexicographic valuation with respect to the variables (η_1,…,η_r-1, ξ_r)), which means that h∈ K[[η_1,…,η_r-1,ξ_r]] with h invertible. Since ξ^β=η^α for some α∈^r, the lemma follows. (i) Let ỹ_0:=f̃/g̃∈𝒦_r. There exist (p,q)∈ℕ^*×ℕ^r-1 and L with [L:K]<+∞ such that ỹ_0∈ L(((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p)). We note that we can rewrite ỹ_0 as a monomial (with integer exponents) times an invertible power series in other variables (( x_1/x_2^q_1')^1/p,…, (x_r-1/x_r^q_r-1')^1/p ,x_r^1/p). Indeed, let us denote ξ=(ξ_1,…,ξ_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p). So ỹ_0=f̃/g̃ for some f̃,g̃∈ L[[ξ]]. By the preceding lemma, we can monomialize the product f̃.g̃, so f̃ and g̃ simultaneously, by a suitable transformation (<ref>). Note that this transformation maps L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] into some L[[(x_1/x_2^q_1')^1/p,…, (x_r-1/x_r^q'_r-1)^1/p ,x_r^1/p]]. 
Indeed, a monomial in ξ is transformed into a monomial in η, and one has that: η_1^i_1/p⋯η_r-1^i_r-1/pη_r^i_r/p= (x_1/x_2^q_1/(x_2/x_3^q_2)^ρ_1)^i_1/p⋯(x_r-1/x_r^q_r-1/x_r^ρ_r-1)^i_r-1/px_r^i_r/p = (x_1/x_2^q_1+ρ_1)^i_1/p⋯(x_r-1/x_r^q_r-1+ρ_r-1)^i_r-1/p x_r^i_r/p(x_3^q_2ρ_1)^i_1/p(x_4^q_3ρ_2)^i_2/p⋯(x_r^q_r-1ρ_r-2)^i_r-2/p and we write (x_3^q_2ρ_1)^i_1/p= (x_3/x_4^q_3+ρ_3)^q_2ρ_1i_1/px_4^(q_3+ρ_3)q_2ρ_1i_1/p and so on. Thus we obtain a monomial in the variables ((x_1/x_2^q_1+ρ_1)^1/p,…, (x_r-1/x_r^q_r-1+ρ_r-1)^1/p, x_r^1/p). (ii) Let f∈ K[[ξ]], ρ_1,…,ρ_r-1∈ℕ, and η be as in the Monomialization Lemma <ref>. Let β=v_ξ(f). If we replace ρ_1,…,ρ_r-1 by ρ_1',…,ρ_r-1' with ρ_i'≥ρ_i for all i, and we proceed to the corresponding change of variables η' as in (<ref>), then we still have f(ξ)=(η')^αg'(η') for some invertible g'∈ K[[η']]. So Lemma <ref> holds true if we take 1+β_i+1 instead of ρ_i whenever ρ_i>0. 𝒦_r is an algebraically closed extension of K((x)). This is a consequence of Abhyankar-Jung Theorem <cit.>, see <cit.>, and our Monomialization Lemma <ref>. Let P(y)=∑_i=0^da_iy^i∈ L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]][y] where [L:K]<+∞, p∈^*, q_i∈ for i=1,..r-1 and a_d≠ 0. We want to show that P has a root in 𝒦_r. Up to multiplication by a_d^d-1 and change of variable z=a_dy, we may assume that P is monic. Let us denote ξ=(ξ_1,…,ξ_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p) and P(y)=P(ξ,y). Up to replacing L by a finite algebraic extension of it, we may also suppose that P(0,y)=(y-c_1)^α_1⋯ (y-c_m)^α_m with c_i∈ L. By Hensel's Lemma [CITE Raynaud Propo 5 4) and Lafon Alg locale, chap 12, theo 12.5 p.166], there exist polynomials P_1(ξ,y),…,P_m(ξ,y) such that P_i(0,y)=(y-c_i)^α_i (i=1,..,m) and P=P_1⋯ P_m. It is enough to show that P_1 has a root in 𝒦_r. By a change of variable y=z-c_1, we are lead to the case of a polynomial P(ξ,y)=y^d+∑_i=0^d-1a_i(ξ)y^i with a_i(0)=0, i=0,..,d-1. By our Monomialization Lemma <ref> and Remark <ref>(i), we may assume that the discriminant of P is monomialized. Hence, Abhyankar-Jung Theorem applies. Note that this last step may require to replace L by a finite algebraic extension. Let ỹ_0∈𝒦_r be a non zero rational polyhedral Puiseux series. Let us show that the existence of a nonzero polynomial P̃(x,y) cancelling ỹ_0 is equivalent to the one of a polynomial P(u,y) cancelling y_0∈ L[[u]], but with constraints on the support of P. Indeed, by our Monomialization Lemma <ref> and Remark <ref>(i), there are (p,q)∈ℕ^*×ℕ^r-1 such that, if we set: (u_1,…,u_r-1,u_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p), then we can rewrite ỹ_0 =∑_n≥ñ^0c̃_nu^n, c̃_ñ^0≠ 0. Let us denote c_n:=c̃_n+ñ^0, and: ỹ_0=u^ñ^0∑_n≥0 c_nu^n=u^ñ^0 y_0 with c_0≠ 0. Hence, y_0 is a formal power series in u with coefficient in a finite algebraic extension L of K. By the change of variable (<ref>), we have: x_k=u_k^pu_k+1^pq_ku_k+2^pq_kq_k+1⋯ u_r^pq_kq_k+1⋯ q_r-1, k=1,…,r The rational polyhedral Puiseux series ỹ_0 is a root of a polynomial P̃(x,y)=∑_j=0^d∑_i∈^rã_i,jx^iy^j ∈ K[[x]][y] of degree d in y if and only if the power series y_0=∑_n∈^r c_nu^n∈ L[[u]] is a root of u^m̃^0P̃( u_1^pu_2^pq_1⋯ u_r^pq_1q_2⋯ q_r-1 , … , u_r^p , u^ñ^0y), the latter being a polynomial P(u,y) in K[[u]][y] for m̃^0 such that m̃^0_k=max{0 ; -ñ_k^0d}, k=1,…,r . Note that the transformation is uniquely defined by p,q,d and ñ^0. In the following lemma, we clarify the constraints on the support of the polynomial P. 
With the notations of (<ref>), we set u=( t_0, s_1, t_1,…, s_σ, t_σ) where t_0 might be empty, such that u_i∈ s_k if and only if q_i≠ 0 (and, so u_i∈ t_k if and only if q_i=0). Moreover, we write s:=( s_1,…, s_σ) and t:=( t_0, t_1,…, t_σ). Hence, a polynomial P̃(x,y) ∈ K[[x]][y] is changed by the transformation induced by (<ref>) and (<ref>) into a polynomial: P(s,t,y)=∑_l≥0∑_j=0^dP_l,j(s)y^j t^l∈ K[s,y][[t]] with for any i such that u_i∈s_k, _u_i(P_l,j(s))-(m̃^0_i+jñ_i^0) ≤_u_i+1 (P_l,j(s) t^l)-(m̃^0_i+1+jñ_i+1^0) /q_i, j=0,..,d. Conversely, any polynomial P(s,t,y)=∑_l≥0∑_j=0^dP_l,j(s)y^j t^l∈ K[s,y][[t]] comes from a unique polynomial P̃(x,y) ∈ K[[x]][y] by the transformation induced by (<ref>) and (<ref>) if and only if each monomial u^αy^j in the support of P satisfies the following conditions: (i) α≥m̃^0+jñ^0; (ii) ∀ i=1,…,r, α_i-(m̃^0_i+jñ_i^0)≡ 0 (p) ; (iii) For any u_i∈s_k, α_i-(m̃^0_i+jñ_i^0)≤α_i+1-(m̃^0_i+1+jñ_i+1^0)/q_i. Let us collect the variables x_i according to the distinction between t_j and s_k among the variables u_l. We set x_k for the sub-tuple of variables x_i corresponding to t_k, and ξ_k for s_k respectively. Let us consider a general monomial: x^ ny^j = x_0^ n_0 ξ_1^ m_1 x_1^ n_1⋯ξ_σ^ m_σ x_σ^ n_σy^j. where n=( n_0, m _1, n_1,…, m_σ, n_σ). For k=1,…,σ, we denote ξ_k=(x_i_k,…,x_j_k-1) and x_k=(x_j_k,…,x_i_k+1-1), and accordingly m_k=(n_i_k,…,n_j_k-1) and n_k=(n_j_k,…,n_i_k+1-1) with i_σ+1:=r+1. For k=0 when t_0 is not empty, we denote x_0= t_0=(x_j_0,…,x_i_1-1) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1. By the change of variable (<ref>), for each k=1,…,σ, we obtain that: ξ_k^ m_k x_k^ n_k= ((x_i_k/x_i_k+1^q_i_k)^1/p)^pn_i_k( (x_i_k+1/x_i_k+2^q_i_k+1)^1/p)^p(n_i_k+1+q_i_kn_i_k)⋯ ((x_j_k-1/x_j_k^q_j_k-1)^1/p)^p(n_j_k-1+q_j_k-2n_j_k-2 +q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k) ×( x_j_k^1/p)^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) ×( x_j_k+1^1/p)^pn_j_k+1⋯(x_i_k+1-1^1/p)^pn_i_k+1-1 = u_i_k^pn_i_ku_i_k+1^p(n_i_k+1+q_i_kn_i_k)⋯u_j_k-1^p(n_j_k-1+q_j_k-2n_j_k-2+q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k) u_j_k^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) u_j_k+1^pn_j_k+1⋯u_i_k+1-1^pn_i_k+1-1 [ = s_i_k^pn_i_ks_i_k+1^p(n_i_k+1+q_i_kn_i_k)⋯s_j_k-1^p(n_j_k-1+q_j_k-2n_j_k-2+q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k); t_j_k^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) t_j_k+1^pn_j_k+1⋯t_i_k+1-1^pn_i_k+1-1. ] Moreover, y^j is transformed into u^m̃^0+jñ^0y^j. For u_i∈ s_k, we denote by c_i its exponent in Formula (<ref>). If i<j_k-1, then u_i+1∈ s_k and its exponent is c_i+1=p(n_i+1+q_in_i+⋯ +q_iq_i-1⋯ q_i_kn_i_k) =pn_i+1+q_ic_i. The total exponent of u_i in the transform of x^ ny^j is c_i+m̃^0_i+jñ_i^0. So, _u_i+1 (P_l,j(s) y^j t^l)-(m̃^0_i+1+jñ_i+1^0) = _u_i+1 (P_l,j(s))-(m̃^0_i+1+jñ_i+1^0) ≥q_i(_u_i (P_l,j(s))-(m̃^0_i+jñ_i^0)). If i=j_k-1, then u_i+1=t_j_k∈ t_k. Likewise, its exponent in (<ref>) is pn_j_k+q_j_k-1c_j_k-1. We obtain that _u_i+1 (P_l(s) y^j t^l)-(m̃^0_j_k+jñ_j_k^0) =_t_j_kt^l-(m̃^0_j_k+jñ_j_k^0) ≥q_j_k-1(_u_j_k-1P_l(s,y)-(m̃^0_j_k-1+jñ_j_k-1^0) ). Conversely, we consider a monomial s_k^λ t_k^μ. It is of the form (<ref>), that is, it comes from a monomial ξ_k^ m_k x_k^ n_k, if and only if _u_is_k^λ≤_u_i+1s_k^λ t_k^μ/q_i and λ_i≡μ_j≡ 0 (p), which are equivalent to the conditions (ii) and (iii). Taking into account the transformation (<ref>), this gives the converse part of the lemma. 
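To make the preceding lemma concrete, here is a small illustrative instance for r=2; this example is ours and is only meant as a sanity check of conditions (i), (ii) and (iii). Take p∈ℕ^*, q=(q_1) with q_1≥ 1, so that u_1=(x_1/x_2^q_1)^1/p=s_1 and u_2=x_2^1/p=t_1 (thus σ=1, τ=1 and t_0 is empty). The change of variables (<ref>) reads x_1=u_1^p u_2^pq_1 and x_2=u_2^p, so a monomial ã_n,jx_1^n_1x_2^n_2y^j of P̃ is sent to ã_n,ju_1^α_1u_2^α_2y^j with
α_1 = pn_1+m̃^0_1+jñ_1^0,   α_2 = p(n_2+q_1n_1)+m̃^0_2+jñ_2^0.
One checks directly that α≥m̃^0+jñ^0 (condition (i)), that α_i-(m̃^0_i+jñ_i^0)≡ 0 (p) for i=1,2 (condition (ii)), and that
α_1-(m̃^0_1+jñ_1^0) = pn_1 ≤ pn_1+pn_2/q_1 = (α_2-(m̃^0_2+jñ_2^0))/q_1,
which is condition (iii).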
Note that, if x^ny^j≠x^n'y^j', the transformation applied to these monomials gives u^αy^j≠u^α'y^j'. For the rest of this section, and also for Sections <ref>, <ref> and <ref>, we assume that the field K is algebraically closed, hence K=L=K. If for all i, q_i=0, namely if u_i=x_i^1/p, then any ỹ_0=f/g with f,g∈ K[[u]] is algebroid. Indeed, let θ_p denote a primitive pth root of unity. We set: P̃(u,y) := ∏_i=1,…,r∏_k_i=0,…,p-1g(θ_p^k_1u_1,…,θ_p^k_ru_r) (y-ỹ_0(θ_p^k_1u_1,…,θ_p^k_ru_r)) = ∏_i=1,…,r∏_k_i=0,…,p-1[g(θ_p^k_1u_1,…,θ_p^k_ru_r) y-f(θ_p^k_1u_1,…,θ_p^k_ru_r)]. Note that P̃(u,ỹ_0)=0. Moreover, since P̃(u_1,…,θ_pu_i,…,u_r,y)=P̃(u,y) for any i=1,…,r, we conclude that P̃∈ K[[x]][y]. Consequently, from now on, we consider the case where q_i≠ 0 for at least one i∈{1,…,r}. Let us denote by τ the number of variables in s, and so r-τ is the number of variables in t. We consider y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n such that c_0,0≠ 0 which satisfies an equation P(s,t,y)=0 where P agrees conditions (i), (ii) and (iii) of Lemma <ref>. The series c_n(s)∈ K[[s]], n∈ℕ^r-τ, are all algebraic over K(s), and lie in a finite extension of K(s). We consider y_0 =∑_n∈ℕ^r-τ c_n(s) t^n root of a non-trivial polynomial P(s,t,y)=∑_l∈ℕ^r-τ P_l(s,y) t^l∈ K[s,y][[t]] which satisfies conditions (i), (ii) and (iii). We proceed by induction on ℕ^r-τ ordered by ≤_ grlex. Given some n∈ℕ^r-τ, we set y_0=z̃_n+c_nt^n+y_n with z̃_n=∑_β<_grlexn c_βt^β, y_n=∑_β>_grlexn c_βt^β, (and z_0:=0 which corresponds to the initial step of the induction). We assume that the coefficients c_β of z̃_n belong to a finite extension L_n of K(s). We set Q_n(t,y):=P(s,t,z̃_n+y)∈ L_n[y][[t]] and we denote it by: Q_n(t,y)=∑_l≥0Q_n,l(y) t^l. We claim that w_t(P)=w_t(Q_n). This is clear if n=0. For n>_ grlex0, let l_0:=w_t(P). We have Q_n(t,y)=P_l_0(s, z̃_n+y)t^l_0+⋯ =( ∑_j=0^d 1/j!∂^j P_l_0/∂ y^j(s,y)z̃_n^j )t^l_0+⋯ Let d_l_0 :=_y P_l_0: the coefficient of y^d_l_0 in the previous parenthesis is not zero for j=0 but zero for j≥ 1. Namely, it is the coefficient of P_l_0(s,y), which is of the form a(s)y^d_l_0t^l_0 and therefore cannot overlap with other terms. By Taylor's formula, we have that: Q_n(t,Ct^n+y)=∑_l≥_ grlexl_0∑_j=0^d 1/j!∂^j Q_n,l/∂ y^j(0) (Ct^n+y)^j t^l. Recall that y_n∈ K[[s]][[t]] with w_t(y_n)>_grlexn. Then Q_n(t,Ct^n+y_n)≠ 0 as a polynomial in C (otherwise P would have more than d roots). Necessarily, w_t( Q_n(t,Ct^n+y_n)) is of the form ω=l_1+j_1 n. Indeed, let us consider ω:=min_l,j{l+j n | ∂^j Q_n,l/∂ y^j(0)≠ 0}, and among the (l,j)'s which achieve this minimum, consider the term with the biggest j. This term cannot be cancelled. The correspondent coefficient of t^ω in Q_n(t,Ct^n+y_n) is a nonzero polynomial in C of the form: ∑_l_k+j_k n=ω1/j_k!∂^j_k Q_n,l_k/∂ y^j_k(0) C ^j_k. Since y_0 is a root of P, this polynomial needs to vanish for C=c_n, which proves by the induction hypothesis that c_n is itself algebraic over K(s). Without loss of generality, we may assume that y_0 is a simple root of P, hence, ∂ P/∂ y(s,t,y_0) ≠ 0. With the same notations as above, we consider n_0:= w_t(∂ P/∂ y(s,t,y_0)) ∈ℕ^r-τ. For any n>_grlexn_0, ∂ Q_n/∂ y(t,0)=∂ P/∂ y(s,t,z̃_ n) and w_t(∂ Q_n/∂ y(t,0)-∂ P/∂ y(s,t,y_0))=w_t(∂ P/∂ y(s,t,z̃_ n)-∂ P/∂ y(s,t,y_0))≥_grlexn>_grlexn_0. So w_t(∂ Q_n/∂ y(t,0))=n_0. By Taylor's formula: Q_n(t,Ct^n+y_n)=∑_j=0^d 1/j!∂^j Q_n/∂y_n^j(t,0) (Ct^n+y)^j. We have: w_t(∂ Q_n/∂ y(t,0) (Ct^n+y_n))= n+n_0, and for any j≥ 2: w_t(∂^j Q_n/∂ y^j(t,0) (Ct^n+y_n)^j)≥_grlex 2n>n+n_0. 
We deduce by (<ref>) that w_t(Q_n(t,0)) ≥_grlexn+n_0 since, otherwise, Q_n(t,Ct^n+y_n) could not vanish at C=c_n. Let us prove by induction on n∈ℕ^r-τ ordered by ≤_ grlex, n≥_ grlexn_0, that the coefficients c_l of t^l in z̃_n all belong to L_ n_0=K(s,c_0,…,c_n_0). The initial case is clear. Assume that the property holds for less than some given n. Let us denote ∂ Q_n/∂ y(t,0)=a_n_0t^n_0+R(t) with w_t(R(t)) >_grlexn_0, a_n_0≠ 0, and Q_n(t,0)=b_n+n_0t^n+n_0+S(t) with w_t(S(t)) >_grlexn+n_0. By (<ref>) and the induction hypothesis, a_n_0 and b_n+n_0 belong to L_n_0. Looking at the coefficient of t^n+n_0 in (<ref>) evaluated at C=c_n, we get: a_n_0c_n +b_n+n_0=0. Hence we obtain that c_n∈ L_n_0=K(s,c_0,…,c_n_0) for all n>_ grlexn_0. Let us recall that A(n) denotes the predecessor element of n in (ℕ^r,≤_grlex). The following lemma will be used in Section<ref> in order to apply the results of Section <ref>. Let d, m̃^0, ñ^0, q, p and P be as above (see (<ref>) and (<ref>)). As in the proof of the previous lemma, we set l_0:=w_t(P). We resume the notations of Lemma <ref>. For k=1,…,σ, with s_k=(u_i_k,…,u_j_k-1), we denote e_s_k:=1/q_i_kq_i_k+1⋯ q_j_k-1+1/q_i_k+1⋯ q_j_k-1+⋯ + 1/ q_j_k-1, and ñ^0,s_k (respectively m̃^0,s_k), the multi-index obtained from ñ^0 (respectively m̃^0), by restriction to the components corresponding to the variables in s_k. Likewise, we set ñ^0,t_k and m̃^0,t_k corresponding to the variables in t_k for k=0,…, σ. Let n∈^r-τ, then there exists T_n∈ K[s,( C_β)_β≤_grlexn]∖{0} such that T_n(s,c_0,…,c_A(n),c_n)=0, T_n(s,c_0,…,c_A(n),C_n)≢0 with _C_βT_n≤ d, _ sT_n≤( |l_0|+d |n| )a+b, where a:=∑_k=1^σ e_s_k, b:=ε(∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k)+∑_k=1^σ |m̃^0,s_k|-∑_k=1^σm̃^0,t_k_j_k e_s_k, with ñ^0,t_k_j_k (respectively m̃^0,t_k_j_k) the first component of ñ^0,t_k (respectively m̃^0,t_k), and ε:={[ 0 if ∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k≤ 0,; d if ∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k> 0. ]. Resuming the notations and computations of the previous lemma (see (<ref>) to (<ref>)), c_n is a root of a nonzero polynomial in C of the form: ∑_l_k+p_k n=ω1/p_k!∂^p_k Q_ n,l_k/∂ y^p_k(0) C ^p_k where ω:=w_t( Q_n(t,Ct^n+y_n))=l_1+p_1 n≤_grlexl_1+d n. Let us denote by T_n the polynomial obtained from the preceding expression by substituting C_n to C and C_β to c_β for β<_grlexn. More precisely, if we set H_n(s,t,(C_β)_β≤_grlexn,y)= P(s,t,∑_β≤_grlexn C_βt^β+y ) =∑_l∈ℕ^r-τ H_n,l(s,(C_β)_β≤_grlexn,y)t^l then T_n(s,(C_β)_β≤_grlexn):=H_n,ω(s,(C_β)_β≤_grlexn,0). Since w_t(Q_ n)=w_t(P) by (<ref>), we observe that l_0=min_≤_grlex{l | ∃ p, ∂^p Q_ n,l/∂ y^p(0)≠ 0 }. Let p_0 = min{p | ∂^p Q_ n,l_0/∂ y^p(0)≠ 0 }. Then the coefficient of C^p_0t^l_0 +p_0n in the expansion of Q_ n(t,C t^n+y_n) is not zero. Since we have that: Q_ n(t,Ct^n+y_n)=∑_l≥0∑_j=0^d 1/j!∂^j Q_ n,l/∂ y^j(0) (Ct^n+y_n)^j t^l, the term 1/p_0!∂^p_0 Q_ n,l_0/∂ y^p_0(0) C ^ p_0 t^l_0+p_0n cannot overlap with other terms since the latter will necessarily be of the form 1/(p-p_0)!p_0!∂^p Q_ n,l/∂ y^p(0) C ^ p_0 t^l+p_0ny_n^p-p_0 with l≥_grlexl_0, p≥ p_0 and w_t(y_n)>_grlexn. (see (<ref>)). So, ω≤_grlexl_0+p_0n≤_grlexl_0+dn. Let us detail the expression of the connection between P and Q_ n. 
We denote P(s,t,y)=∑_l∈ℕ^r-τ(∑_k∈^τ∑_j=0^d a_k,l,js^ky^j) t^l, and we get: Q_ n(s,t,y) =P(s,t,z̃_n+y ) =∑_l∈ℕ^r-τ(∑_k∈^τ∑_j=0^d a_k,l,js^k(∑_β<_grlexn c_βt^β+y)^j) t^l =∑_l∈ℕ^r-τ(∑_k∈^τ∑_j=0^d a_k,l,js^k(∑_|j|=jj!/j!(∏_β<_grlexn c_β^j_β) y^j_nt^g(j)-j_nn)) t^l =∑_l∈ℕ^r-τ∑_k∈^τ∑_j=0^d∑_|j|=j a_k,l,js^kj!/j!(∏_β<_grlexn c_β^j_β) y^j_nt^l+g(j)-j_nn where j=(j_0,…,j_n) and g(j) is as in Notation <ref>. Next, we evaluate y at C t^n+y_n and we consider the (l,j)'s such that l+g(j)=ω for which the coefficient of t^ω is the non-trivial polynomial of which c_n is a root. Then, the multi-indices l involved are such that l≤_grlexl_0+dn. Consider such a monomial s^ kt^ly^j written as u^αy^j as in (<ref>). Recall that the elements of the support of P satisfy Condition (iii) of Lemma <ref>: for any k=1,…,σ, for any u_i∈s_k, α_i-(m̃^0_i+jñ_i^0)≤α_i+1-(m̃^0_i+1+jñ_i+1^0)/q_i. For s_k=(u_i_k,…,u_j_k-1) and t_k=(u_j_k,…,u_i_k+1-1), we claim that for any i=i_k,…,j_k-1, α_i≤α_j_k/q_iq_i+1⋯ q_j_k-1+j ( ñ^0_i-ñ^0_j_k/q_iq_i+1⋯ q_j_k-1)+m̃^0_i-m̃^0_j_k/q_iq_i+1⋯ q_j_k-1. The case i=j_k-1 is given by Condition (iii). Suppose that the formula holds until i+1, i.e. α_i+1≤α_j_k/q_i+1⋯ q_j_k-1+j ( ñ^0_i+1-ñ^0_j_k/q_i+1⋯ q_j_k-1)+m̃^0_i+1-m̃^0_j_k/q_i+1⋯ q_j_k-1. Since, by Condition (iii), we have α_i≤α_i+1/q_i+j(ñ_i^0-ñ_i+1^0/q_i)+m̃_i^0-m̃_i+1^0/q_i, we obtain the formula for α_i as expected. Now, we consider the sum for i=i_k,…,j_k-1 of these inequalities (<ref>): ∑_i=i_k^j_k-1α_i≤α_j_ke_s_k+j(|ñ^0,s_k|-ñ^0 _j_ke_s_k)+|m̃^0,s_k|-m̃^0 _j_ke_s_k. Note that ñ^0 _j_k=ñ^0,t_k_j_k and m̃^0 _j_k=m̃^0,t_k_j_k. Moreover, α_j_k is equal to some l_γ component of l, so α_j_k≤ |l_0|+d|n|. So, ∑_i=i_k^j_k-1α_i≤(|l_0|+d|n|)e_s_k+j(|ñ^0,s_k|-ñ^0,t_k_j_ke_s_k)+|m̃^0,s_k|-m̃^0,t_k_j_ke_s_k. Taking the sum for k=1,…,σ, we obtain: |k|≤(|l_0|+d|n|)∑_i=1^σe_s_k+j(∑_i=1^σ|ñ^0,s_k|-∑_i=1^σñ^0,t_k_j_ke_s_k)+∑_i=1^σ|m̃^0,s_k|-∑_i=1^σm̃^0,t_k_j_ke_s_k. Since 0≤ j≤ d, we finally obtain: |k|≤(|l_0|+d|n|)∑_i=1^σe_s_k+ε(∑_i=1^σ|ñ^0,s_k|-∑_i=1^σñ^0,t_k_j_ke_s_k)+∑_i=1^σ|m̃^0,s_k|-∑_i=1^σm̃^0,t_k_j_ke_s_k. From the previous proof, we observe that, for any monomial s^ kt^ly^j in the support of a polynomial P which satisfies the conditions of Lemma <ref>, one has that: | k|≤ a | l|+b, where a and b are as in Lemma <ref>. To see this, use α_j_k≤ |l| in place of α_j_k≤ |l_0|+d|n| in (<ref>). For r=2, let p,q∈^* and ñ^0=(ñ^0_1,ñ^0_2)∈^2. * Let us consider: ỹ_0=(x_1/x_2^q)^ñ^0_1/px_2^ñ^0_2/p∑_i,j=0^p-1(1/1-x_2x_2^q/x_2^q-x_1) (x_1/x_2^q)^i/p x_2^j/p∈𝒦_2. The series ỹ_0 is algebroid, even algebraic, since it is a finite sum and product of algebraic series. Hence, (u_1,u_2)=( (x_1/x_2^q_1)^1/p, x_2^1/p)=(s,t). Moreover, it has a full support: {1/pñ^0+(k/p, l-qk/p) | (k,l)∈^2 }. < g r a p h i c s > * Let us consider ỹ_0=(x_1/x_2^q)^ñ^0_1/px_2^ñ^0_2/p(1/1-x_2^1/p) exp((x_1/x_2^q)^1/p) ∈𝒦_2. The series ỹ_0 is transcendental over K[[x_1,x_2]]. Indeed, with the same notations as above, ỹ_0=s^ñ^0_1/pt^ñ^0_2/p1/1-texp(s) is algebroid if and only if exp(s) is algebraic by Lemma <ref>. This is clearly not the case. Moreover, ỹ_0 has the same support as above. In <cit.>, the authors ask whether K((x)) is a Rayner field. The above example with p=1 provides us with two series having same support, the first belonging to K((x)), and the second not. Following the argument after <cit.>, this shows that K((x)) is not a Rayner field. § A NESTED DEPTH LEMMA. Let d_x, d, _̣x, ∈̣ℕ^*. 
Given two polynomials P∈ K[x,y]∖{0 }, _xP≤ d_x, _yP≤ d, and Q∈ K[x,y]∖{0 }, _xQ≤_̣x, _yQ≤$̣, we denote byR∈K[x] their resultant. It satisfies_xR≤d_̣x+ḍ_x. Moreover, in the Bézout identity:AP+BQ=R,one can choose the polynomialsS, T ∈K[x,y]which satisfy:{[ _xA≤ d_x(-̣1)+_̣xd _yA≤-̣1; _xB≤ d_x+̣_̣x(d-1) _yB≤ d-1 ].We consider the following linear map: [ φ: K(x)[y]_× K(x)[y]_d → K(x)[y]_d+; (A,B) ↦ AP+BQ, ] where K(x)[y]_n denotes the K(x)-vector space of polynomials of degree less than n in y. The matrix M of φ in the standard basis {(y^i,0)}∪{(0,y^j)} and {y^k} is the Sylvester matrix of P and Q. The polynomial R∈ K[x] is its determinant. So, _xR≤ d_̣x+ḍ_x. Let M' be the matrix of cofactors of M. From the relation M. ^tM'=R Id_d+, one deduces the Bézout identity AP+BQ=R, the coefficients of A and B being minors of M of maximal order minus 1. Let 𝔄 be a domain and 𝔎 its field of fractions. Given n∈, n≥ 2, we consider an n× n matrix M=(m_i,j) with coefficients in 𝔄. We suppose that M (as a matrix with coefficients in 𝔎) has rank n-p for some 1≤ p<n. Then there exists a vector V∈𝔄^n∖{0} whose nonzero coefficients are equal, up to sign ±, to minors of order n-p of M and such that M.V=0. Without loss of generality, we can suppose that the minor of order n-p, say Δ, given by the first n-p rows and columns is not zero. Denote V:=(Δ_1,…, Δ_n). For k>n-p+1, set Δ_k:=0. For k=n-p+1, set Δ_k:=(-1)^n-p+1Δ≠ 0. For k< n-p+1, we set Δ_k equal to (-1)^k times the minor of M given by the first n-p rows, and all but the k'th first n-p+1 columns. Denote M.V:=(c_1,…,c_n). We claim that M.V=0. Indeed, c_1= ∑_j=1^n-p+1 m_1,jΔ_j which is the determinant of the (n-p+1)×(n-p+1)-matrix (δ_i,j) with δ_i,j=m_i,j for 1≤ i≤ n-p and 1≤ j≤ n-p+1, and δ_n-p+1,j=m_1,j for 1≤ j≤ n-p+1. This determinant vanishes since it has two identical rows. Similarly, we have that c_2=⋯=c_n-p=0. Now, c_n-p+1=∑_j=1^n-p+1 m_n-p+1,jΔ_j, which is equal to a minor of order n-p+1 of M. It vanishes since M has rank n-p. Similarly, c_n-p+2=…=c_n=0. Let 𝔄 be a domain and 𝔎 its field of fractions. Let P_1,P_2∈𝔄[y]∖{0} of positive degrees d_1≥ d_2 respectively. The Sylvester matrix of P_1 and P_2 has rank at least d_1. Moreover, it has rank d_1 if and only if aP_1=BP_2 for some a∈𝔄 and B∈𝔄[y]∖{0}. In this case, one can take a=q_d_2^d_1-d_2 + 1 (where q_d_2 is the coefficient of y^d_2 in P_2) and the coefficients of such a polynomial B can be computed as homogeneous polynomial formulas in the coefficients of P_1 and P_2 of degree d_1-d_2+1, each monomial consisting of d_1-d_2 coefficients of P_2 times 1 coefficient of P_1. As in the proof of Lemma <ref>, we denote by M_P_1,P_2 the Sylvester matrix of P_1 and P_2. By definition, its d_1 columns corresponding to the coefficients of y^lP_2, l=0,…,d_1-1, being upper triangular are linearly independent (and the same holds for the d_2 columns corresponding to the coefficients of y^kP_1). Hence, M_P_1,P_2 has rank at least max{d_1,d_2}=d_1. Moreover, an equality aP_1=BP_2 translates exactly into a linear relation between the column corresponding to P_1 and the columns corresponding to y^lP_2 for l=0,…,d_1-d_2. In this case, the linear relation repeats mutatis mutandi between the column corresponding to y^k P_1 and the columns corresponding to y^lP_2 for l=k,…,d_1-d_2+k, corresponding to an equality ay^kP_1=y^kBP_2. Let us consider the submatrix N_P_1,P_2 of M_P_1,P_2 consisting of the column corresponding to P_1 and the columns corresponding to y^lP_2 for l=0,…,d_1-d_2. It has rank d_1-d_2+1. 
By the previous lemma, there exists a nonzero vector in the kernel of N_P_1,P_2, given by minors of order d_1-d_2+1. More precisely, we are in the case of a Cramer system encoding an equality BP_2 = aP_1, with in particular a=q_d_2^d_1-d_2+1 corresponding to the determinant of the matrix of the linear map B↦ BP_2. By Cramer's rules, the coefficients of B are computed as determinants which indeed give homogeneous polynomial formulas with monomials consisting of d_1-d_2 coefficients of P_2 and 1 coefficient of P_1. Let d_x, d, _̣x, ∈̣ℕ^* and P, Q∈ K[x,y]∖{0 }, _xP≤ d_x, _yP≤ d, _xQ≤_̣x, _yQ≤$̣. For any seriesc_0 ∈ K[[x]]such thatP(x,c_0)=0andQ(x,c_0)≠ 0, one has thatord_xQ(x,c_0)≤_̣xd+ d_x.̣Let c_0 be a series as in the statement of Lemma <ref>. We consider the prime ideal ℑ_0:={R(x,y)∈ K[x,y] | R(x,c_0)=0}. Since ℑ_0≠ (0), (K[x,y]/ℑ_0)=trdeg_KFrac(K[x,y]/ℑ_0)≤ r. But, in Frac(K[x,y]/ℑ_0), the elements x_1,…,x_r are algebraically independant (if not, we would have T(x_1,…,x_r)=0 for some non trivial T∈ K[X], i.e. T(x_1,…,x_r)∈ℑ_0, a contradiction). Thus, ℑ_0 is a height one prime ideal of the factorial ring K[x,y]. It is generated by an irreducible polynomial P_0(x,y)∈ K[x,y]. We set d_x,0:=_x P_0 and d_y,0:=_y P_0. Note also that, by factoriality of K[x,y], P_0 is also irreducible as an element of K(x)[y]. Let P be as in the statement of Lemma <ref>. One has that P=SP_0 for some S∈ K[x,y]. Hence d_x,0≤ d_x and d_y,0≤ d. Let Q∈ K[x,y] be such that Q(x,c_0)≠ 0 with _x Q≤_̣x, _yQ≤$̣. SoP_0andQare coprime inK(x)[y]. Their resultantR(x)is nonzero. One has the following Bézout relation inK[x][y]:A(x,y)P_0(x,y)+B(x,y)Q(x,y)=R(x).We evaluate aty=c_0:0+B(x,c_0)Q(x,c_0)=R(x).But, by Lemma <ref>,_x R ≤ d_y,0_̣x+ ḍ_x,0≤ d_̣x+ ḍ_x. Hence, one has that:ord_x Q(x,c_0)≤ord_xR ≤_x R≤ d_̣x+ ḍ_x.Let i, d_x, d, _̣x, ∈̣ℕ, d≥ 2, ≥̣1. There exists ω(i,d_x, d, _̣x, )̣∈ minimal such that: for any j=0,…,i, given c_j=∑_n∈^r c_j,nx^n∈ K[[x]] power series satisfying some equations P_j(x,c_0,…,c_j)=0 where P_j∈ K[x,z_0,z_1,…,z_j ]∖{0 }, _xP_j≤ d_x, _z_kP_j≤ d for k=0,…,j, and P_j (x,c_0,…,c_j-1,z_j)≢0, and given Q_i∈ K[x,z_0,z_1,…,z_i ]∖{0 }, _xQ_i≤_̣x, _z_jQ_i≤$̣ forj=0,…,ia polynomial such thatQ_i(x,c_0,c_1,…,c_i)≠ 0, one has that_xQ_i(x,c_0,c_1,…,c_i) ≤ ω(i,d_x, d, _̣x, )̣.Moreover, for≥̣3: [ ω(i,d_x, d, _̣x, )̣≤ (2.3^d^i-1+⋯+d^2+d+1 -2^i3^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x^̣d^i+; 2^i.3^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2_̣x^̣d^i-1 . ] So, ford≥ 3: ω(i,d_x, d, d_x, d )≤ 2.3^d^i-1+⋯+d^2+d+1 d_x d^d^i+⋯+d^2+d+1 . Finally, for anyε>0, there is_̣εsuch that, for≥̣_̣ε: [ ω(i,d_x, d, _̣x, )̣≤; ( 2.(2+ε)^d^i-1+⋯+d^2+d+1 - (1+ε)^i.(2+ε)^d^i-1+⋯+d^2+d-(i-1))d^d^i-1+⋯+d^2+d+1 d_x^̣d^i +; (1+ε)^i.(2+ε)^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2_̣x^̣d^i-1 , ] and ford≥_̣ε: ω(i,d_x, d, d_x, d ) ≤ 2.(2+ε)^d^i-1+⋯+d^2+d+1 d^d^i+d^i-1+⋯+d^2+d+1 d_x. We proceed by induction on i∈, the case i=0 being Lemma <ref> where we set d^i-1+⋯+d^2+d+1:=0, d^i-1+⋯+d^2+d+2:=d^i-1+⋯+d^2+d+1+1=1 and d^i-1+⋯+d^2+d-(i-1):=0 and where we get: ord_xQ_0(x,c_0)≤_̣xd+ d_x.̣ Suppose that the property holds until some rank i-1≥ 0, and consider polynomials P_i and Q_i as in the statement of the theorem. Let R_1 be the resultant of P_i and Q_i with respect to z_i, and the following Bézout identity according to Lemma <ref> (where x there stands for x or z_j, j=0,..,i-1, here): A_1P_i+B_1Q_i=R_1. There are two cases. 
If R_1(x,c_0,…,c_i-1)≠ 0, since R_1∈ K[x,z_0,…,z_i-1] with _xR_1≤ d_x+̣_̣xd, _z_jR_1≤ 2d $̣ forj=1,…,i-1, we deduce from the induction hypothesis thatord_x R_1(x,c_0,…,c_i-1)≤ω(i-1,d_x,d,d_x+̣_̣xd, 2d )̣. So, by the Bézout identity:ord_x Q_i(x,c_0,…,c_i)≤ord_xR_1(x,c_0,…,c_i-1) ≤ω (i-1,d_x,d,d_x+̣_̣xd, 2d )̣.IfR_1(x,c_0,…,c_i-1)=0, thenB_1(x,c_0,…,c_i-1,c_i)=0. There are several sub-cases. If R_1(x,c_0,…,c_i-1)=0, then there exist A,B∈ K[x,z_0,…,z_i] such that B(x,c_0,…,c_i-1,c_i)=0, B(x,c_0,…,c_i-1,z_i)≢0 and A(x,c_0,…,c_i-1,z_i)P_i(x,c_0,…,c_i-1,z_i)+B(x,c_0,…,c_i-1,z_i) Q_i(x,c_0,…,c_i-1,z_i)=0 with _xB≤ d_x+̣_̣x(d-1), _z_jB≤ (2d-1) $̣ forj=1,…,i-1, and_z_i B≤ d-1. If B_1(x,c_0,…,c_i-1,z_i)≢0, we take A=A_1 and B=B_1, noticing by Lemma <ref> that _xB_1≤ d_x+̣_̣x(d-1), _z_jB_1≤ (2d-1) $̣ forj=1,…,i-1, and_z_i B_1≤ d-1. IfB_1(x,c_0,…,c_i-1,z_i)≡ 0, necessarilyA_1(x,c_0,…,c_i-1,z_i)≡ 0. Let us denoteP̃_i:=P_i(x,c_0,…,c_i-1,z_i)andQ̃_i:=Q_i(x,c_0,…,c_i-1,z_i), henceP̃_i,Q̃_i∈ K[x,c_0,…,c_i-1][z_i], with degreesd̃andinz_irespectively. Note thatd̃≥ 1and≥ 1(if not,R_1(x,c_0,…,c_i-1)≠ 0). LetM_P̃_i,Q̃_ibe the Sylvester matrix ofP̃_iandQ̃_i, andd̃+-pits rank. Hence,p≥ 1. Suppose thatp=1. Let us denote byM'_P̃_i,Q̃_ithe matrix of cofactors ofM_P̃_i,Q̃_i, and by^tM'_P̃_i,Q̃_iits transpose. At least one of the columns of^tM'_P̃_i,Q̃_iis not zero. Since we have thatM_P̃_i,Q̃_i.^tM'_P̃_i,Q̃_i=0, this column determines a non-trivial relationÃP̃_i+B̃Q̃_i=0where the coefficients ofÃ,B̃are given by the coefficients of this column. Moreover,B̃(x,c_0,…,c_i-1,c_i)=0sinceP̃_i(x,c_0,…,c_i-1,c_i)=0andQ̃_i(x,c_0,…,c_i-1,c_i)≠ 0, andB̃(x,c_0,…,c_i-1,z_i)≢0(if not, we would haveÃ(x,c_0,…,c_i-1,z_i)≡ 0sinceP̃_i(x,c_0,…,c_i-1,z_i)≢0). The coefficients ofB̃are homogeneous polynomial formulas incoefficients ofP̃_iandd̃-1coefficients ofQ̃_i. Lifting these formulas toK[x,z_0,…,z_i-1,z_i]by replacing thec_j's by thez_j's, we obtainAandBwith_xB≤ d_x +_̣x(d̃-1), _z_jB≤ d +(̣d̃-1)forj=1,…,i-1, and_z_i B≤d̃-1. We conclude since≤$̣ and d̃≤ d. Suppose that p≥ 2. The columns corresponding to the coefficients of the z_i^k P̃_i's, k=0,..,-1, are linearly independent (since they form an upper triangular system). We complete them with d̃-p columns corresponding to the coefficients of the z_i^k Q̃_i to a maximal linearly independent family. There is a non-zero minor, say Δ, of maximal order +d̃-p of this family. Proceeding as in Lemma <ref>, there is a non-zero vector V in the kernel of M_P̃_i,Q̃_i whose coefficients are minors of order +d̃-p. More precisely, except for Δ, the other minors are obtained by replacing a column of Δ by the corresponding part of another column of M_P̃_i,Q̃_i. Hence, they consist of either d̃-p+1 columns with coefficients of Q̃_i and -1 columns with coefficients of P̃_i, or d̃-p columns with coefficients of Q̃_i and columns with coefficients of P̃_i. We translate the relation M_P̃_i,Q̃_i.V=0 to a non-trivial relation ÃP̃_i+B̃Q̃_i=0 where the coefficients of Ã,B̃ are given by the coefficients de V. Moreover, B̃(x,c_0,…,c_i-1,c_i)=0 since P̃_i(x,c_0,…,c_i-1,c_i)=0 and Q̃_i(x,c_0,…,c_i-1,c_i)≠ 0, and B̃(x,c_0,…,c_i-1,z_i)≢0 (if not, we would have Ã(x,c_0,…,c_i-1,z_i)≡ 0 since P̃_i(x,c_0,…,c_i-1,z_i)≢0). The coefficients of B̃ are homogeneous polynomial formulas in at most coefficients of P̃_i and d̃-p+1 coefficients of Q̃_i. 
Lifting these formulas to K[x,z_0,…,z_i-1,z_i] by replacing the c_j's by the z_j's, since p≥ 2, we obtain A and B with _xB≤ d_x +_̣x(d̃-1), _z_jB≤ d +(̣d̃-1) for j=1,…,i-1, and _z_i B≤d̃-1. We conclude since ≤$̣ andd̃≤ d. We denote byB_1the polynomialBof the previous lemma. In any case, we are in position to replacePbyB_1, with_xB_1≤ d_x+̣_̣x(d-1), _z_jB_1≤ (2d-1) $̣ for j=1,…,i-1, and _z_i B_1≤ d-1. We obtain another Bézout identity: A_2B_1+B_2Q_i=R_2 with R_2 the resultant of B_1 and Q_i with respect to z_i, _xR_2≤ (d_x+̣_̣x(d -1) )+̣_̣x(d -1) = d_x^2+_̣x ((d-1)+̣(d -2)+1), likewise, for j=1,…,i-1, _z_jR_2≤ d ^̣2+(̣(d -1)+̣(d-2)+1). Moreover, [ _xB_2 ≤ (_xB_1)+̣_̣x(_z_iB_1 -1); ≤ (d_x+̣_̣x(d-1))+̣_̣x(d-1 -1)=d_x^̣2+_̣x((̣d-1)+d-2), ] and likewise, for j=1,…,i-1, [ _z_jB_2 ≤ (_z_jB_1)+̣ (_z_iB_1-1) ≤ (2d-1)^̣2+(d-2)=̣d^̣2+(̣(̣d-1)+d-2), ] and _z_iB_2≤_z_i B_1-1≤ d-2. If R_2(x,c_0,…,c_i-1)≠ 0, we proceed as before Lemma <ref>, and we obtain: ord_x Q_i(x,c_0,…,c_i)≤ord_x R_2(x,c_0,…,c_i-1)≤ω(i-1,d_x, d, d_x^2+_̣x ((d-1)+̣(d -2)+1), d ^̣2+(̣(d -1)+̣(d-2)+1)). Note that this new bound for ord_x Q_i(x,c_0,…,c_i-1,c_i) has increased with respect to the previous one, since d≤ (d-1)(+̣1)=(d-1)+̣(d -2)+1 for any d≥ 2, ≥̣1. At worst, one can have repeatedly the second case with successive Bézout identities: A_kB_k-1+B_kQ_i=R_k with R_k(x,c_0,…,c_i-1)=0 where for j=0,…,i-1, {[ _xR_k ≤ d_x^̣k+_̣x(^̣k-1(d-1)+^k-2(d-2)+⋯+(d-(k-1))+(d-k)+1); _z_jR_k ≤ d^k+(^k-1(d-1)+^k-2(d-2)+⋯+(d-(k-1))+(d-k)+1), ]. and with {[ _xB_k ≤ d_x^k+_̣x(^k-1(d-1)+^k-2(d-2)+⋯+(d-k+1)+(d-k)); _z_jB_k ≤ d^k+(^k-1(d-1)+^k-2(d-2)+⋯+(d-k+1)+(d-k)); _z_iB_k ≤ d-k. ]. The greatest bound is obtained for k=d-1, for which B_d-1 has _z_iB_d-1= 1. In this case, B_d-1 has c_i as unique root and Q_i(x,c_0,…,c_i-1,c_i)≠ 0, so R_d(x,c_0,…,c_i-1)≠ 0. We set for n,m∈^*: [ ϕ(n,m) := (n-1)m^n-1+(n-2)m^n-2+⋯+m +1; = ((n-1)m^n-2+(n-2)m^n-3+⋯+2m +1)m+1; = (n-1)m^n+1-nm^n+m^2-m+1/(m-1)^2 for m≠ 1 ] We have for j=0,…,i-1: {[ _xR_d ≤ d_x^̣d+_̣xϕ(d ,)̣; _z_jR_d ≤ d^̣d+ϕ̣(d,)̣, ]. By the induction hypothesis, ord_x R_d(x,c_0,…,c_i-1) is bounded by ω(i-1,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣). We get the corresponding expected bound: ord_x Q_i(x,c_0,…,c_i-1,c_i)≤ω(i-1,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣), which proves the existence of ω(i,d_x,d, _̣x,)̣ with ω(i,d_x,d,_̣x,)≤ω(i-1,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣). To bound ω(i,d_x,d, _̣x,)̣, we need to find estimates for ϕ. First step: for n,m≥ 2, ϕ(n,m)≤ (n-1)m^n. Indeed, ϕ(n,m)=(n-1)m^n+1-nm^n+m^2-m+1/(m-1)^2. For n≥ 2, -nm^n+m^2-m+1≤ 0, so ϕ(n,m)≤(n-1)m^n+1/(m-1)^2 and (n-1)m^n+1/(m-1)^2≤ (n-1) m^n⇔ m/(m-1)^2≤ 1 ⇔ m^2- 3m+1≥ 0 with Δ=5 et m=(3+√(5))/2< 3. This holds for m≥ 3. For m=2, we compute: ϕ(n,2)=(n-1)2^n+1-n2^n+3≤ (n-1)2^n⇔ 3≤ 2^n This holds for n≥ 2. On the other hand, this does not hold for m=1 and n≥ 3. Second step: for n≥ 3, m≥ 2, ϕ(n,m)≤ (2n-3)m^n-1 Indeed, from the first step: [ ϕ(n,m):=(n-1)m^n-1+(n-2)m^n-2+⋯+m +1 = (n-1)m^n-1+ϕ(n-1,m); ≤ (n-1)m^n-1+(n-2)m^n-1; ≤ (2n-3)m^n-1 ] Let ε>0. For n≥ 2, since -nm^n+m^2-m+1≤ 0, the inequality ϕ(n,m)≤ (1+ε)(n-1)m^n-1 is implied by (n-1)m^n+1/(m-1)^2≤ (1+ε)(n-1)m^n-1⇔m^2/(m-1)^2≤ 1+ε. This holds for m large enough, say for m≥ m_ε, since m^2/(m-1)^2 decreases to 1. Now, let us prove the estimates for ω(i,…) by induction on i. For i=0, ω(0,…)≤ d_̣x+ ḍ_x by Lemma <ref>. Suppose that the estimates (<ref>), (<ref>), (<ref>) and (<ref>) hold until some i≥ 0. 
By (<ref>): ω(i+1,d_x,d,_̣x,)≤ω(i,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣) ≤ω(i,d_x,d, d_x^̣d+_̣x (2d-3)^̣d-1, d^̣d+(̣2d-3)^̣d-1) ≤ω(i,d_x,d, d_x^̣d+_̣x 2d^̣d-1, d^̣d+2̣d^̣d-1) ≤ω(i,d_x,d, d_x^̣d+_̣x 2d^̣d-1, 3d^̣d) ≤ (2.3^d^i-1+⋯+d^2+d+1 -2^i3^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x(3d^̣d)^d^i+ 2^i.3^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 (d_x^̣d+_̣x 2d^̣d-1) (3d^̣d)^d^i-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i.3^d^i+d^i-1+⋯+d^2+d-(i-1)-1 d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 1/32^i3^d^i+d^i-1+⋯+d^2+d-(i-1) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2/32^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i+13^d^i+d^i-1+⋯+d^2+d-i) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1. This proves (<ref>), and also (<ref>) by letting ≤̣d and _̣x≤ d_x. Similarly, given ε>0, we use (<ref>) and (<ref>) with ≥̣_̣ε and, since d-1<d, we get: ω(i+1,d_x,d,_̣x,)≤ω(i,d_x,d, d_x^̣d+_̣x (1+ε)d^̣d-1, (2+ε)d^̣d) ≤ (2.(2+ε)^d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x((2+ε)d^̣d)^d^i+ (1+ε)^i.(2+ε)^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 (d_x^̣d+_̣x (1+ε)d^̣d-1) ((2+ε)d^̣d)^d^i-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)-1 d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 1/(2+ε)(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)/(2+ε)(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1.(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1. This proves (<ref>), and also (<ref>) by letting ≤̣d and _̣x≤ d_x. § TOTAL RECONSTRUCTION OF VANISHING POLYNOMIALS FOR SEVERAL ALGEBRAIC SERIES. In the present section, we provide several improvements of <cit.>. §.§ Total reconstruction in the algebraic case. * Let ℱ' and 𝒢' be two strictly increasing finite sequences of pairs (k,j)∈(ℕ^τ×ℕ)_alex* ordered anti-lexicographically: (k_1,j_1) ≤_alex* (k_2,j_2)⇔ j_1 < j_2 or (j_1 = j_2 and k_1 ≤_grlexk_2). We suppose additionally that (k_1,j_1) ≥_alex*(0,1)>_alex*(k_2,j_2) for any (k_1,j_1)∈ℱ' and (k_2,j_2)∈𝒢' (thus the elements of 𝒢' are ordered pairs of the form (k_2,0), and those of ℱ' are of the form (k_1,j_1), j_1≥ 1). We denote d_y'':=max{j, (k,j)∈ℱ'} and d_ s':=max{|k|, (k,j)∈ℱ'∪𝒢'}. * We say that a series y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] is algebraic relatively to (ℱ',𝒢') if there exists a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=0. * Let d_y'', d_ s'∈, d_y''≥ 1. 
We say that a series y_0' ∈ K[[s]] is algebraic of degrees bounded by d_y'' and d_ s' if it is algebraic relatively to (ℱ',𝒢') where ℱ' and 𝒢' are the complete sequences of indices (k,j)∈(ℕ^τ×ℕ)_alex* with j≤ d_y'' and |k|≤ d_ s'. Let us consider a series Y_0'=∑_m∈ℕ^τ C_ms^m∈ K[(C_m)_m∈ℕ^τ][[s]] where s and the C_m's are variables. We denote the multinomial expansion of the jth power Y_0'^j of Y_0' by: Y_0'^j=∑_m∈ℕ^τ C_m^(j)s^m. where C_m^(j)∈ K[(C_m)_m∈ℕ^τ]. For instance, one has that C_0^(j)=C_0^j. For j=0, we set Y_0'^0:=1. More generally, for any m and any j≤ |m|, C_m^(j) is a homogeneous polynomial of degree j in the C_k's for k∈ℕ^τ, k≤m, with coefficients in ℕ^*. Now suppose we are given a series y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0}. For any j∈ℕ, we denote the multinomial expansion of y_0'^j by: y_0'^j=∑_m∈ℕ^τ c_m^(j)s^m. So, c_m^(j)=C_m^(j)(c_0,…,c_m). Let y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0}. * Given a pair (k,j)∈ℕ^τ×ℕ, we call Wilczynski vectorV_k,j (associated to y_0') the infinite vector with components γ_m^k,j with m∈ℕ^τ ordered with ≤_grlex: - if j≥ 1: V_k,j:=(γ_m^k,j)_m∈ℕ^τ with γ_m^k,j={[ =c_m-k^(j) if m≥k; =0 otherwise ]. - otherwise: 1 in the kth position and 0 for the other coefficients, V_k,0:=(0,…,1,0,0,…,0,…). So γ_m^k,j is the coefficient of s^m in the expansion of s^ky_0'^j. * Let ℱ' and 𝒢' be two sequences as in Definition <ref>. We associate to ℱ', 𝒢' and y_0' the (infinite) Wilczynski matrix whose columns are the corresponding vectors V_k,j: M_ℱ',𝒢':=(V_k,j)_(k,j)∈ℱ'∪𝒢' ,ℱ'∪𝒢' being ordered by ≤_alex* as in Definition <ref>. We also define the reduced Wilczynski matrix, M_ℱ',𝒢'^red: it is the matrix obtained from M_ℱ',𝒢' by removing the columns indexed in 𝒢', and also removing the corresponding rows (suppress the kth row for any (k,0)∈𝒢'). This amounts exactly to remove the rows containing the coefficient 1 for some Wilczynski vector indexed in 𝒢'. For (i,j)∈ℱ', we also denote by V_i,j^red the corresponding vectors obtained from V_i,j by suppressing the kth row for any (k,0)∈𝒢' and we call them reduced Wilczynski vectors. The following result is <cit.>: The series y_0' is algebraic relatively to (ℱ',𝒢') if and only if all the minors of order |ℱ'∪𝒢'| of the Wilczynski matrix M_ℱ',𝒢' vanish, or also if and only if all the minors of order |ℱ'| of the reduced Wilczynski matrix M_ℱ',𝒢'^red vanish. Let us give an outline of the reconstruction process of <cit.>. Let ℱ' and 𝒢' be two sequences as in Definition <ref> and y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0} be algebraic relatively to (ℱ',𝒢'). Our purpose is to describe the K-vector space whose non-zero elements are the polynomials P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=0. The components of the infinite vector computed as M_ℱ',𝒢'· (a_k,j)_(k,j)∈ℱ'∪𝒢' are exactly the coefficients of the expansion of P(s,y_0') in K[[s]]. Let us now remark that, in the infinite vector M_ℱ',𝒢'· (a_k,j)_(k,j)∈ℱ'∪𝒢', if we remove the components indexed by k for (k,0)∈𝒢', then we get exactly the infinite vector M_ℱ',𝒢'^red· (a_k,j)_(k,j)∈ℱ'. The vanishing of the latter means precisely that the rank of M_ℱ',𝒢'^red is less than |ℱ|. Conversely, if the columns of M_ℱ',𝒢'^red are dependent for certain ℱ' and 𝒢', we denote by (a_k,j)_(k,j)∈ℱ' a corresponding sequence of coefficients of a nontrivial vanishing linear combination of the column vectors. Then it suffices to note that the remaining coefficients a_k,0 for (k,0)∈𝒢' are uniquely determined as follows: a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j) . 
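As a toy illustration of this reconstruction setup (our own example, not taken from the original text), take τ=1 and y_0'=∑_m∈ℕ s^m=1/(1-s), with ℱ'={((0),1),((1),1)} and 𝒢'={((0),0)}. The Wilczynski vectors, with rows indexed by m=0,1,2,…, are V_(0),1=(1,1,1,…), V_(1),1=(0,1,1,…) and V_(0),0=(1,0,0,…). The reduced Wilczynski matrix is obtained by deleting the column V_(0),0 and the row m=(0); its two columns V_(0),1^red=V_(1),1^red=(1,1,…) are equal, so all minors of order |ℱ'|=2 vanish, in accordance with the theorem above. A nontrivial relation is given by a_(0),1=1, a_(1),1=-1, and the formula just above then yields a_(0),0=-a_(0),1c_0^(1)=-1, recovering P(s,y')=y'-sy'-1, which indeed satisfies P(s,y_0')=0.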
We consider a maximal family ℱ”⊊ℱ' such that the corresponding reduced Wilczynski vectors are K-linearly independent. Proceeding as in Lemma 3.7 in <cit.>, ℱ” is such a family if and only if, in the reduced Wilczynski matrix M_ℱ',𝒢'^red, there is a nonzero minor (A) where A has columns indexed in ℱ” and lowest row with index m such that |m|≤ 2d_s'd_y'' and ℱ” is maximal with this property. Moreover, among such A's, we take one that has its lowest row having an index minimal for ≤_grlex, and we denote the latter index by p̂. For any (k_0,j_0)∈ℱ'∖ℱ”, the family of reduced Wilczynski vectors (V_k,j^red) with (k,j)∈ℱ”∪{(k_0,j_0)} is K-linearly dependent. There is a unique relation: V_k_0,j_0^red =∑_(k,j)∈ℱ”λ_k,j^k_0,j_0 V_k,j^red with λ_k,j^k_0,j_0∈ K. We consider the restriction of M_ℱ',𝒢'^red to the rows of A. For these rows, by Cramer's rule, we reconstruct the linear combination (<ref>). The coefficients λ_k,j^k_0,j_0 of such a linear combination are quotients of homogeneous polynomials with integer coefficients in terms of the entries of these restricted matrix, hence quotients of polynomials in the corresponding c_m's, |m|≤ 2d_s'd_y''. Let P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0}. One has P(s,y_0')=0 if and only if (<ref>) holds as well as: ∑_(k,j)∈ℱ”a_k,j V_k,j^red+∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0 V_k_0,j_0^red=0 ⇔∑_(k,j)∈ℱ”a_k,j V_k,j^red+∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0(∑_(k,j)∈ℱ”λ_k,j^k_0,j_0 V_k,j^red)=0 ⇔∑_(k,j)∈ℱ”( a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0)V_k,j^red=0 ⇔∀ (k,j)∈ℱ”, a_k,j =-∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0, Let ℱ',𝒢',d_s',d_y'', y_0',ℱ” be as above. Then, the K-vector space of polynomials P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y'] such that P(s,y_0')=0 is the set of polynomials such that ∀ (k,j)∈ℱ”, a_k,j =-∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0, and ∀ (k,0)∈𝒢', a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j) , where the λ_k,j^k_0,j_0's are computed as in (<ref>) as quotients of polynomials with integer coefficients in the c_m's for |m|≤ 2d_s'd_y''. Note that the set of polynomials P(s,y')∈ K[s,y'] with support in ℱ'∪𝒢' such that P(s,y_0')=0 is a K-vector space of dimension |ℱ'|-|ℱ”|≥ 1. §.§ Total algebraic reconstruction in the non-homogeneous case. Let ℱ',𝒢', d_y'',d_ s' be as in Definition <ref>. §.§.§ First case. Let y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] be algebraic relatively to (ℱ',𝒢'). Let i, d_s, d' ∈ℕ, d'≥ 3, d_ s'≤ d_ s and d_y''≤ d'. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]] which satisfy some equations P_j(s,y'_0,…,y'_j)=0 where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. In particular, c_m=c_0,m for any m. Let z'=R(s,y'_0,…,y'_i)∈ K[[s]]∖{0}, where R∈ K[s,z_0,z_1,…,z_i ]∖{0 } with _sR≤ d_s,_z_kR≤ d' for k=0,…,i. We want to determine when there is a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=z' and, subsequently, to reconstruct all such possible P's. Let V be the infinite vector with components the coefficients of z', and V^red the corresponding reduced vector as in Definition <ref>. For ℱ” as in the previous section, we have P(s,y_0')=z' if and only if: ∑_(k,j)∈ℱ”( a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0)V_k,j^red= V^red. We want to examine when the vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent. Let N^red be the infinite matrix with columns (V_k,j^red)_(k,j)∈ℱ” and V^red. 
The vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent if and only if all the minors of maximal order of N^red up to the row p with: |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1 vanish. The vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent if and only if all the minors of N^red of maximal order vanish: see <cit.>. Conversely, we suppose that the vectors are linearly independent. So, there is a minor of N of maximal order which is nonzero. Let p be the smallest multi-index for ≤_grlex such that there is such a nonzero minor of N^red of maximal order with lowest row of index p. Hence, there is a subminor of it based on the columns indexed in ℱ” which is nonzero, say (B). The lowest row of B is at most p. So, by minimality of p̂ (see before (<ref>) in the previous section), p≥_grlexp̂. If p=p̂, then | p|≤ 2d_s'd' and we are done. If p>_grlexp̂, let us denote by p̃ the predecessor of p for ≤_grlex. Then p̃≥_grlexp̂. For any multi-index m∈^r, denote by N_m^red, V_k,j,m^red,V_m^red the truncations up to the row m of N^red,V_k,j^red,V^red respectively. By definition of p, the rank of the matrix N^red_p is |ℱ”|+1, whereas the rank of N^red_p̃ is |ℱ”|. There exists a nonzero vector ((a_i,j)_(i,j)∈ℱ”,-a) of elements of K such that N_p̃^red · ( [ (a_i,j)_(i,j)∈ℱ”; -a ])= 0, where a can be chosen to be 1 since the vectors (V_k,j,p̃^red)_(k,j)∈ℱ” are independent. The components of the resulting vector N_p̃^red · ( [ (a_i,j)_(i,j)∈ℱ”; -1 ]) are exactly the coefficients e_k, (k,0)∉𝒢' and k≤_grlexp̃, of the expansion of ∑_(i,j)∈ℱ”a_i,j s^i (y_0')^j-z'. By computing the coefficients a_k,0 for (k,0)∈𝒢' as: a_k,0=-∑_(i,j)∈ℱ”, k>i a_i,jc_k-i^(j)+f_k, where f_k denotes the coefficient of s^k in z', we obtain the vanishing of the first terms of Q(s,y_0',…,y'_i):=∑_(i,j)∈ℱ”∪𝒢' a_i,js^i(y_0')^j-z' up to p̃. So, w_s(Q(s,y_0',…,y'_i))≥_grlexp and, therefore, (Q(s,y_0',…,y'_i))≥ |p|. On the contrary, we have: N_p^red · ( [ (a_i,j)_(i,j)∈ℱ”; -1 ])≠ 0. From (<ref>) and (<ref>), we deduce that the coefficient e_p of s^p in the expansion of ∑_(i,j)∈ℱ”a_i,j x^i (y_0')^j-z' is nonzero. Observe that this term of the latter series does not overlap with the terms of ∑_(i,0)∈𝒢'a_i,0 s^i since (p,0)∉𝒢'. Therefore, w_s(Q(s,y_0',…,y'_i))=p. In particular, Q(s,y_0',…,y'_i)≠ 0, so the bound (<ref>) in Theorem <ref> applies: |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1 . Let us return to (<ref>). Let A be the square matrix defined after (<ref>). For any (k,j)∈ℱ”, we denote by A_k,j the matrix deduced from A by substituting the corresponding part of V^red instead of the column indexed by (k,j). Equality (<ref>) holds if and only if the vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent, and by Cramer's rule, one has: ∀ (k,j)∈ℱ”, a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0= (A_k,j) /( A). Recall that one determines that (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent by examining the dependence of the finite truncation of these vectors according to Lemma <ref>. Finally, the remaining coefficients a_k,0 for (k,0)∈𝒢' are each uniquely determined as follows: a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j)+f_ k , where f_k denotes the coefficient of s^k in z'. As a conclusion, we obtain the affine space of P(s,y')∈ K[s,y']∖{0} such that P(s,y_0')=z' as a parametric family of its coefficients with free parameters the a_k_0,j_0's for (k_0,j_0)∈ℱ'∖ℱ”. §.§.§ Second case. 
Let _̣ s'∈ and y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] be algebraic of degrees d'_y' and _̣ s', but not algebraic relatively to (ℱ',𝒢'). Let i, d_s, d' ∈ℕ, d'≥ 3, d_ s'≤ d_ s and d_y''≤ d'. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]] which satisfy some equations P_j(s,y'_0,…,y'_j)=0 where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. In particular, c_m=c_0,m for any m. Let z'=R(s,y'_0,…,y'_i)∈ K[[s]]∖{0}, where R∈ K[s,z_0,z_1,…,z_j ]∖{0 } with _sR≤ d_s,_z_kR≤ d' for k=0,…,j. As in the previous section, our purpose is to determine when there is a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=z'. Note that such a polynomial is necessarily unique, since y_0' is not algebraic relatively to (ℱ',𝒢'). We consider the corresponding reduced Wilczynski matrix M_ℱ',𝒢'^red. Proceeding as in Lemma 3.7 in <cit.> and using Lemma <ref>, there is a nonzero minor (B) of maximal order where the lowest row of B is indexed by m such that |m|≤(_̣s'+ d'_s)d'_y'. We resume the notations of the previous section. There is a polynomial P such that P(s,y_0')=z' if and only if the vectors (V_k,j^red)_(k,j)∈ℱ' and V^red are K-linearly dependent, since the vectors (V_k,j^red)_(k,j)∈ℱ' are independent. One determines that (V_k,j^red)_(k,j)∈ℱ' and V^red are linearly dependent by examining the dependence of the finite truncation of these vectors according to the following lemma. The vectors (V_k,j^red)_(k,j)∈ℱ' and V^red are linearly dependent if and only if, in the corresponding matrix denoted by N^red, all the minors of maximal order up to the row p with |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1 vanish. The proof is analogous to that of Lemma <ref>, also using Theorem <ref>. We proceed as in the previous section. For any (k,j)∈ℱ', we denote by B_k,j the matrix deduced from B by substituting the corresponding part of V^red instead of the column indexed by (k,j). If the condition of the previous lemma holds, by Cramer's rule, one has: ∀ (k,j)∈ℱ', a_k,j = (B_k,j) /( B). Then it suffices to note that the remaining coefficients a_k,0 for (k,0)∈𝒢' are each uniquely determined as follows: a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j)+f_ k , where f_k denotes the coefficient of s^k in z'. §.§ Total algebraic reconstruction with several algebraic series. Let i, d_s, d' ∈ℕ, d'≥ 3. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]] which satisfy some equations P_j(s,y'_0,…,y'_j)=0 where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. Let 𝒦' and ℒ', 𝒦'≠∅, be two strictly increasing finite sequences of pairs (k,l)∈(ℕ^τ×ℕ^i+1) ordered anti-lexicographically: (k_1,l_1) ≤_alex* (k_2,l_2)⇔l_1 <_grlexl_2 or (l_1 = l_2 and k_1 ≤_grlexk_2). We suppose additionally that 𝒦'≥_alex*(0,(0,…,0,1))>_alex*ℒ' (thus the elements of ℒ' are ordered tuples of the form (k,0), and those of 𝒦' are of the form (k,l), |l|≥ 1). We set d_y'_j':=max{l_j, (k,l)∈𝒦'} for j=0,…,i, and d_ s':=max{|k|, (k,l)∈𝒦'∪ℒ'}. We assume that d_y'_j'≤ d' for j=0,…,i, and d_ s'≤ d_s. Let us set z=(z_0,…,z_i) and y'=(y_0',…,y_i'). We assume that y'≠0. We want to determine when there is a polynomial P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z]∖{0} such that P(s,y')=0 and, subsequently, to reconstruct all such possible P's. It is a generalization of Section <ref>. For any j=0,…,i, for any l_j∈ℕ, we denote the multinomial expansion of y_j'^l_j by: y_j'^l_j=∑_n_j∈ℕ^τ c_j,n_j^(l_j)s^n_j. 
So the coefficient of s^m in y'^l=y_0'^l_0⋯y_i'^l_i is equal to: c_m^(l):=∑_n_0∈^τ,…,n_i∈^τ, n_0+⋯+n_i=m c_0,n_0^(l_0)⋯ c_i,n_i^(l_i). * Given an ordered pair (k,l)∈ℕ^τ×ℕ^i+1, we call Wilczynski vectorV_k,l the infinite vector with components γ_m^k,l with m∈ℕ^τ ordered with ≤_grlex: - if l≥_grlex (0,…,0,1): V_k,l:= (γ_m^k,l)_m∈ℕ^τ with γ_m^k,l={[ =c_m-k^(l) if m≥k; =0 otherwise ]. - otherwise: 1 in the kth position and 0 for the other coefficients, V_k,0:=(0,…,1,0,0,…,0,…). So γ_m^k,l is the coefficient of s^m in the expansion of s^ky'^l. * Let 𝒦' and ℒ' be two sequences as above. We associate to 𝒦' and ℒ' the (infinite) Wilczynski matrix whose columns are the corresponding vectors V_k,l: M_𝒦',ℒ':=(V_k,l)_(k,l)∈𝒦'∪ℒ' ,𝒦'∪ℒ' being ordered by ≤_alex* as above. We also define the reduced Wilczynski matrix, M_𝒦',ℒ'^red: it is the matrix obtained from M_𝒦',ℒ' by removing the columns indexed in ℒ', and also removing the corresponding rows (suppress the kth row for any (k,0)∈ℒ'). This amounts exactly to remove the rows containing the coefficient 1 for some Wilczynski vector indexed in ℒ'. For (i,l)∈𝒦', we also denote by V_i,l^red the corresponding vectors obtained from V_i,l by suppressing the kth row for any (k,0)∈ℒ' and we call them reduced Wilczynski vectors. There exists a nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y' if and only if all the minors of order |𝒦'∪ℒ'| of the Wilczynski matrix M_𝒦',ℒ' vanish, or also if and only if all the minors of order |𝒦'| of the reduced Wilczynski matrix M_𝒦',ℒ'^red vanish. By construction of the Wilczynski matrix M_𝒦',ℒ', the existence of such a polynomial is equivalent to the fact that the corresponding Wilczynski vectors are K-linearly dependent. This is in turn equivalent to the vanishing of all the minors of maximal order of M_𝒦',ℒ'. Suppose that we are given a nonzero vector (a_k,l)_(k,l)∈𝒦'∪ℒ' such that M_𝒦',ℒ'·(a_k,l)_(k,l)∈𝒦'∪ℒ'=0. Observe that, necessarily, the vector (a_k,l)_(k,l)∈𝒦' is also nonzero (since the vectors V_k,0 for (k,0)∈ℒ' are independent). Let us remark that: M_𝒦',ℒ'^red·(a_k,l)_(k,l)∈𝒦'=0 since the latter vector is deduced from the former one by deleting the rows corresponding to (k,0)∈ℒ'. So, the columns of M_𝒦',ℒ'^red are linked, which is equivalent to the vanishing of its minors of maximal order. Conversely, suppose that there exists a nonzero (a_k,l)_(k,l)∈𝒦' such that M_𝒦',ℒ'^red·(a_k,l)_(k,l)∈𝒦'=0. Then, we can complete the list of coefficients (a_k,l)_(k,l)∈𝒦'∪ℒ' by setting: a_k,0=- ∑_(i,l)∈𝒦', i≤k a_i,l c_k-i^(l). There exists a nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y' if and only if all the minors of the reduced Wilczynski matrix M_𝒦',ℒ'^red of order |𝒦'| and with lowest row indexed by m with: |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1, vanish. The direct part follows from the previous lemma. Suppose that there is no nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y'. So there is a nonzero minor of the reduced Wilczynski matrix M_𝒦',ℒ'^red of order |𝒦'| and with lowest row indexed by m that we assume to be minimal for ≤_grlex. Reasoning as in the proof of Lemma <ref>, we obtain a nonzero polynomial Q(s,z_0,…,z_i) with Supp(Q)⊆𝒦'∪ℒ', such that Q(s,y')≠ 0, and with _s(Q(s,y'))≥ |m|. Since d_y'_j'≤ d' for j=0,…,i, and d_ s'≤ d_s, by Theorem <ref>, we obtain that: _s(Q(s,y'))≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1, which gives the expected result. 
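The criterion for several series can be tried on a small instance in the same spirit. In the following sketch (again τ=1, exact rational arithmetic) we take the two related series y_0'=1/(1-s) and y_1'=1/(1-s)^2, the candidate support 𝒦'={(0,(0,1)),(1,(0,1)),(0,(1,0))} with ℒ'=∅, so that the reduced Wilczynski matrix coincides with M_{𝒦',ℒ'}, and we extract a kernel vector of its truncated columns by Gaussian elimination. The series, the support and the truncation depth are our own choices; in general the bound of the previous lemma dictates how deep the truncation must go.

```python
from fractions import Fraction

# Two related toy series (our choice): y0' = 1/(1-s) and y1' = 1/(1-s)^2 = (y0')^2, truncated
# at order N.  Candidate support: K' = {z1, s*z1, z0}, i.e. (k,l) = (0,(0,1)), (1,(0,1)), (0,(1,0)),
# and L' empty, so the reduced Wilczynski matrix is the full one.
N = 12
c0 = [Fraction(1)] * (N + 1)                    # coefficients of y0'
c1 = [Fraction(m + 1) for m in range(N + 1)]    # coefficients of y1'

def mono_column(k, l0, l1):
    """Wilczynski vector of s^k (y0')^l0 (y1')^l1: its coefficients at s^m, m = 0..N."""
    prod = [Fraction(1)] + [Fraction(0)] * N
    for series, power in ((c0, l0), (c1, l1)):
        for _ in range(power):
            new = [Fraction(0)] * (N + 1)
            for a in range(N + 1):
                for b in range(N + 1 - a):
                    new[a + b] += prod[a] * series[b]
            prod = new
    return [prod[m - k] if m >= k else Fraction(0) for m in range(N + 1)]

support = [(0, 0, 1), (1, 0, 1), (0, 1, 0)]     # monomials z1, s*z1, z0
cols = [mono_column(k, l0, l1) for (k, l0, l1) in support]

def kernel_vector(cols):
    """A nonzero kernel vector of the column family (Gaussian elimination), or None."""
    ncols, nrows = len(cols), len(cols[0])
    M = [[cols[j][i] for j in range(ncols)] for i in range(nrows)]
    pivots, r = [], 0
    for j in range(ncols):
        p = next((i for i in range(r, nrows) if M[i][j] != 0), None)
        if p is None:                            # free column: a dependence exists
            v = [Fraction(0)] * ncols
            v[j] = Fraction(1)
            for (pr, pc) in pivots:
                v[pc] = -M[pr][j] / M[pr][pc]
            return v
        M[r], M[p] = M[p], M[r]
        for i in range(nrows):
            if i != r and M[i][j] != 0:
                f = M[i][j] / M[r][j]
                M[i] = [M[i][t] - f * M[r][t] for t in range(ncols)]
        pivots.append((r, j))
        r += 1
    return None

print("kernel vector:", kernel_vector(cols))    # a multiple of (1, -1, -1): z1 - s*z1 - z0 = 0
```

Exact rational arithmetic is used on purpose here: the criterion is the exact vanishing of minors, which floating-point rounding would obscure.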
Let us suppose that there is a nonzero polynomial P with support included in 𝒦'∪ℒ' which vanishes at y'. Our purpose is to determine the space of all such polynomials. For this, we consider a maximal family 𝒦”⊊𝒦' such that the corresponding reduced Wilczynski vectors are K-linearly independent. This is equivalent to the fact that, for the matrix consisting of the (V_k,l^red) with (k,l)∈𝒦”, there is a nonzero minor (A) of maximal order and with lowest row indexed by m with |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1. For any (k_0,l_0)∈𝒦'∖𝒦”, the corresponding family of reduced Wilczynski vectors (V_k,l^red) with (k,l)∈ℱ”∪{(k_0,l_0)} is K-linearly dependent. There is a unique relation: V_k_0,l_0^red =∑_(k,l)∈𝒦”λ_k,l^k_0,l_0 V_k,l^red with λ_k,l^k_0,l_0∈ K. which can be computed by Cramer's rule based on (A). The coefficients λ_k,l^k_0,l_0 of such a linear combination are quotients of homogeneous polynomials with integer coefficients in terms of the entries of these restricted matrices, hence quotients of polynomials in the corresponding c_m's, |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1. Let z=(z_0,…,z_i), and P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z]∖{0}. One has P(s,y')=0 if and only if (<ref>) holds as well as: ∑_(k,l)∈𝒦”a_k,l V_k,l^red+∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0 V_k_0,l_0^red=0 ⇔∑_(k,l)∈𝒦”a_k,l V_k,l^red+∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0(∑_(k,l)∈𝒦”λ_k,l^k_0,l_0 V_k,l^red)=0 ⇔∑_(k,l)∈𝒦”( a_k,l +∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0)V_k,l^red=0 ⇔∀ (k,l)∈𝒦”, a_k,l =-∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0. Let 𝒦',ℒ',d_s,d', y',𝒦” be as above. Then, the set of polynomials P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z] such that P(s,y')=0 is the set of polynomials such that ∀ (k,l)∈𝒦”, a_k,l =-∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0, and ∀ (k,0)∈ℒ', a_k,0=-∑_(i,l)∈𝒦', i≤k a_i,lc_k-i^(j) , where the λ_k,l^k_0,l_0's are computed as in (<ref>) as quotients of polynomials with integer coefficients in the c_m's for |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1. Note that the set of polynomials P(s,z)∈ K[s,z] with support in 𝒦'∪ℒ' such that P(s,y')=0 is a K-vector space of dimension |𝒦'|-|𝒦”|≥ 1. § RECONSTRUCTION OF AN EQUATION FOR AN ALGEBROID SERIES. §.§ The reconstruction algorithm We resume the notations of Section <ref>, in particular Lemma <ref> and after. In particular, recall that τ is the number of variables in s, and so r-τ is the number of variables in t. Let ℱ and 𝒢 be two strictly increasing sequences of triples (k,l,j)∈ℕ^τ×ℕ^r-τ×ℕ ordered as follows: (k_1,l_1,j_1) ≤_*alex* (k_2,l_2,j_2):⇔ j_1 < j_2 or (j_1 = j_2 and (k_1,l_1) ≤_alex* (k_2,l_2)) with (k_1,l_1) ≤_alex* (k_2,l_2):⇔l_1 <_grlexl_2 or (l_1 = l_2 and k_1 ≤_grlexk_2). We suppose additionally that (k_1,l_1,j_1)≥_*alex*(0,0,1)>_*alex*(k_2,l_2,j_2) for any (k_1,l_1,j_1)∈ℱ and (k_2,l_2,j_2)∈𝒢 (thus the elements of 𝒢 are ordered triples of the form (k_2,l_2,0), and those of ℱ are of the form (k_1,l_1,j_1), j_1≥ 1). Moreover, we assume that there is d∈, d≥ 1, such that j≤ d for any (k,l,j)∈ℱ∪𝒢, and we set d:= max{j | ∃ (k,l,j)∈ℱ∪𝒢}. We say that a series y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n∈ K[[s,t]], c_0,0≠ 0, is algebroid relatively to (ℱ,𝒢) if there exists a polynomial P(s,t,y)=∑_(k,l,j)∈ℱ∪𝒢 a_k,l,js^kt^ly^j∈ K[[s, t]][y]∖{0} such that P(s,t,y_0)=0. For any ℱ,𝒢 satisfying Conditions (i), (ii), (iii) of Lemma <ref>, let us denote by (K[s][[t]][y])_ℱ,𝒢 the subset of polynomials in K[s][[t]][y]∖{0} with support in ℱ∪𝒢. 
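The monomial orders just introduced are used at every step of the algorithm below. As a small convenience (not part of the algorithm itself, and with function names of our own choosing), they can be encoded as sort keys as follows.

```python
from itertools import product

# Graded lexicographic order on N^r: compare the total degree first, then lexicographically.
def grlex_key(k):
    return (sum(k), tuple(k))

# (k1,l1) <=_alex* (k2,l2)  iff  l1 <_grlex l2,  or  l1 = l2 and k1 <=_grlex k2.
def alex_star_key(kl):
    k, l = kl
    return (grlex_key(l), grlex_key(k))

# (k1,l1,j1) <=_*alex* (k2,l2,j2)  iff  j1 < j2,  or  j1 = j2 and (k1,l1) <=_alex* (k2,l2).
def star_alex_star_key(klj):
    k, l, j = klj
    return (j, alex_star_key((k, l)))

# Small illustration with tau = r - tau = 1: sort the triples (k,l,j), k in {0,1}, l in {0,1,2}, j in {0,1,2}.
triples = [((k,), (l,), j) for k, l, j in product(range(2), range(3), range(3))]
print(sorted(triples, key=star_alex_star_key)[:6])
```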
The purpose of the following discussion is to make more explicit the conditions in Lemma <ref> for the vanishing of a polynomial P∈(K[s][[t]][y])_ℱ,𝒢 for some ℱ,𝒢 corresponding to (i), (ii), (iii) in Lemma <ref>, at a formal power series y_0∈ K[[s]][[t]]. As we have seen in Section <ref>, one can always assume that y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n is such that c_0,0≠ 0. Let us consider a series Y_0=∑_n∈ℕ^r-τ (∑_m∈ℕ^τ C_m,ns^m)t^n = ∑_n∈ℕ^r-τC_n(s) t^n∈ K[(C_m , n)_m∈ℕ^τ, n∈ℕ^r-τ][[s]][[t]] where s, t and the C_m,n's are variables. We denote the multinomial expansion of the jth power Y_0^j of Y_0 by: Y_0^j=∑_n∈ℕ^r-τ (∑_m∈ℕ^τ C_m,n^(j)s^m)t^n = ∑_n∈ℕ^r-τC_n^(j)(s) t^n where C_m,n^(j)∈ K[(C_k,l)_k≤m, l≤n] and C_n^(j)(s)∈ K[(C_l(s))_l≤n]⊆ K[(C_k,l)_k≤m, l≤_grlexn][[s]]. We also set Y_0^0:=1. Now, suppose we are given a series y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m, ns^mt^n∈ K[[s,t]] with c_0,0≠ 0. For any j∈ℕ, we denote the multinomial expansion of y_0^j by: y_0^j=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m,n^(j)s^mt^n= ∑_n∈ℕ^r-τc_n^(j)(s) t^n. So, c_m,n^(j)=C_m,n^(j)(c_0,0,…,c_m,n) and c_n^(j)(s)=C_n^(j)(c_0(s),…,c_n(s)). We also set y_0^0:=1. For a polynomial P∈(K[s][[t]][y])_ℱ,𝒢∖{0}, we denote P(s,t,y)=∑_(k,l,j)∈ℱ∪𝒢 a_k,l,js^kt^ly^j =∑_l∈ℕ^r-τ, j=0,..,d a_l,j(s)t^ly^j. A series y_0∈ K[[s]][[t]], y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n, is a root of P if and only if the following polynomial relations hold when evaluated at the series c_0(s),…, c_n(s): ∀l∈ℕ^r-τ, ∑_j=0,..,d a_l,j(s) C_0^j(s)=- ∑_i<l, j=0,..,d a_i,j(s) C_l-i^(j)(s) . Let us compute: P(s,t,y_0)=∑_i∈ℕ^r-τ, j=0,..,d a_i,j(s)t^iy_0^j =∑_i∈ℕ^r-τ, j=0,..,d a_i,j(s)t^i(∑_n∈ℕ^r-τc_n^(j)(s) t^n) =∑_l∈ℕ^r-τ(∑_i≤l, j=0,..,d a_i,j(s)c_l-i^(j)(s))t^l. So, y_0 is a root of P if and only if, in the latter formula, the coefficient of t^l for each l vanishes, which is equivalent to the vanishing of (<ref>) (noticing that C_0^(j)= C_0^j for all j). Let ℱ,𝒢 be as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>. Let y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n=∑_n∈ℕ^r-τc_n(s) t^n∈ K[[s,t]], c_0,0≠ 0, be a series algebroid relatively to (ℱ,𝒢). Let P∈(K[s][[t]][y])_ℱ,𝒢∖{0} be a polynomial such that P(s,t,y_0)=0. We notice that w_t(P) is the index of the first non-trivial relation (<ref>), for ℕ^r-τ ordered with ≤_grlex. Let l̂_0∈^r-τ be such that w_t(P)≤_grlexl̂_0. If w_t(P) is known, then one can take l̂_0=w_t(P). §.§.§ First step For any l∈^r-τ, we denote by ℱ_l' and 𝒢_l' the corresponding sets of tuples (k,j)∈^τ× where (k,l,j)∈ℱ and (k,l,0)∈𝒢 respectively. We denote d'_s,l:=max{|k| | (k,j)∈ℱ_l'∪𝒢_l' } (which is well-defined thanks to Condition (iii) of Lemma <ref>). By (<ref>) in Remark <ref>, we have that: d'_s,l≤ a|l|+b, where a and b are as in Lemma <ref>. Let l≤_grlexl̂_0 (or directly l=w_t(P) if known). As we are interested in the first non trivial relation in (<ref>), we consider its following instance: ∑_j=0,..,d a_l,j(s) C_0^j=∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kC_0^j=0 . By Lemma <ref>, there is l≤_grlexl̂_0 such that c_0 satisfies the latter relation, i.e. c_0 is algebraic relatively to (ℱ'_l,𝒢'_l). In particular, c_0 is algebraic relatively to (⋃_l≤_grlexl̂_0ℱ'_l,⋃_l≤_grlexl̂_0𝒢'_l). We denote d'_s:=max_l≤_grlexl̂_0(d'_s,l). Let us now describe the reconstruction method for this first step: * We determine the multi-indices l≤_grlexl̂_0 such that ℱ'_l∪𝒢'_l≠∅. 
* For each l≤_grlexl̂_0 as above, we determine whether c_0 is algebraic relatively to (ℱ'_l,𝒢'_l) by computing the first minors of maximal order of the corresponding Wilczynski matrix M_ℱ'_l,𝒢'_l^red. Proceeding as in <cit.> or Lemma <ref>, it suffices to compute them up to the row indexed by the biggest m∈^τ such that | m|≤ 2 d d'_s. * Let l≤_grlexl̂_0 such that c_0 is algebraic relatively to (ℱ'_l,𝒢'_l). We reconstruct the K-vector space of polynomials corresponding to Equation (<ref>) according to the method in Section <ref>, in particular Lemma <ref>, applied to (ℱ'_l,𝒢'_l) and c_0. We denote by E_l this space. * For each l'<_grlexl, we set a_k, l',j:=0 for (k, l',j)∈ℱ∪𝒢. §.§.§ Second step With the notations of the previous section, let l be such that E_l≠{0}. Let us consider the instances of (<ref>) corresponding to the l' such that: l<_grlexl'<_grlexl+(0,…,0,1), For such l', we claim that the set of indices i such that i<l' and i≥_grlexl is empty. Indeed, by (<ref>), note that | l'|=| l|. For such i, one necessarily has | i|<| l'|=| l|, but also | i|≥ | l|: a contradiction. According to (4) at the end of First Step above and to the previous claim, the right hand sides of such instances are equal to 0. Hence, they also are of the same form as (<ref>): ∑_j=0,..,d a_l',j(s) C_0^j=∑_(k,j)∈ℱ'_l'∪𝒢'_l' a_k,l',js^kC_0^j=0 . We perform the same method of reconstruction as in the First Step <ref> to determine E_l' the K-vector space of polynomials corresponding to this equation. Note that E_l' might be equal to {0}. At this step, for each l≤_grlexl̂_0 such that E_l≠{0} from the First Step, we have built the vector spaces E_l' (possibly {0}) of all the coefficients a_k,l',j for (k, l',j)∈ℱ∪𝒢 satisfying the instances of (<ref>) for l'<_grlexl+(0,…,0,1). §.§.§ Third step Let l≤_grlexl̂_0 such that E_l≠{0} as in the First Step <ref>. We consider the instance of (<ref>) corresponding to l+(0,…,0,1). Note that for i< l+(0,…,0,1), we have that i≤_grlexl. Applying (4) from the end of the First Step, we obtain: ∑_j=0,..,d a_l+(0,…,0,1),j(s) C_0^j=- ∑_j=0,..,d a_l,j(s) C_(0,…,0,1)^(j) . Noticing that C_(0,…,0,1)^(j)=j C_0^j-1C_(0,…,0,1), we get: ∑_(k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) a_k,l+(0,…,0,1),js^kC_0^j=- (∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kj C_0^j-1) C_(0,…,0,1) . There is l≤_grlexl̂_0 such that c_0 and c_(0,…,0,1) satisfy the latter relation, and c_0 satisfies the relations (<ref>) and (<ref>). If c_(0,…,0,1)=0, then there are two cases. Either ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)=∅ i.e. there is no coefficient a_k,l+(0,…,0,1),j to reconstruct. Or else, we obtain an equation like (<ref>) and we derive E_l+(0,…,0,1) as in the first and second step. If c_(0,…,0,1)≠ 0, let us denote θ_s,(0,…,0,1):= (| l̂_0|+d)a +b where a and b are as in Lemma <ref>. By this lemma, there are non-trivial polynomial relations P_0(s,z_0)=0 and P_1(s,z_0,z_1)=0 satisfied by c_0 and c_(0,…,0,1) with _sP_j≤θ_s,(0,…,0,1), _z_0P_j≤ d and _z_1P_1≤ d. There are several cases. ∙ Suppose that ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)=∅. Equation (<ref>) reduces to: ∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kj c_0^j-1= ∑_(k,j)∈ℱ'_l a_k,l,js^kj c_0^j-1=0, which means that c_0 is at least a double root of (<ref>). We resume the notations of Section <ref>. Let us denote by ℱ”_l the family corresponding to ℱ” for (<ref>), and λ_l, k,j^k_0,j_0 the coefficients corresponding to λ_ k,j^k_0,j_0. Formula (<ref>) of Lemma <ref> becomes: ∀ (k,j)∈ℱ”_l, a_k,l,j =-∑_(k_0,j_0)∈ℱ'_l∖ℱ”_la_k_0,l,j_0 λ_l, k,j^k_0,j_0 . 
Substituting this formula in (<ref>) gives: ∑_(k_0,j_0)∈ℱ'_l∖ℱ”_l a_k_0,l,j_0s^k_0j_0 c_0^j_0-1 + ∑_(k,j)∈ℱ”_l( -∑_(k_0,j_0)∈ℱ'_l∖ℱ”_la_k_0,l,j_0 λ_l, k,j^k_0,j_0) s^kj c_0^j-1 =0 , which is: ∑_(k_0,j_0)∈ℱ'_l∖ℱ”_l a_k_0,l,j_0( s^k_0j_0 c_0^j_0-1 - ∑_(k,j)∈ℱ”_l λ_l, k,j^k_0,j_0s^kj c_0^j-1) =0 . Either, the latter relation is trivial, i.e. for all (k_0,j_0)∈ℱ'_l∖ℱ”_l, the contents of the parenthesis are all 0. In this case, the space E_l of possible equations for c_0 remains unchanged. Or, the dimension of E_l drops. Since the contents of these parenthesis are polynomials in s and c_0, by Lemma <ref>, the s-adic order of the non-vanishing ones is at most 2d'_sd. The vanishing of (<ref>) follows from the vanishing of the terms of s-adic order up to 2d'_sd. This gives linear relations (with at least one that is nontrivial) between the a_k_0,l,j_0's for (k_0,j_0)∈ℱ'_l∖ℱ”_l. Accordingly, we derive a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices. ⋆ Suppose now that ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)≠∅. We determine whether c_0 is algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)). For this, we examine the vanishing of the minors of maximal order of M_ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)^red up to the lowest row of order 2d'_s,l+(0,…,0,1)d. There are two subcases. ⋆∙ If c_0 is algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)), according to Equation (<ref>), we set z'=- (∑_(k,j)∈ℱ'_l a_k,l,js^kj c_0^j-1) c_(0,…,0,1). We have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1). We consider as in Section <ref>, a subfamily ℱ”_l+(0,…,0,1) of ℱ'_l+(0,…,0,1), the vectors (V_l+(0,…,0,1), k,j^red)_(k,j)∈ℱ”_l+(0,…,0,1) and V^red_l+(0,…,0,1) for z', and the corresponding matrix N^red_l+(0,…,0,1). According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_l+(0,…,0,1) of maximal order up to the row p with |p| ≤ 2.3. θ_s,(0,…,0,1)d^d+1. Let us consider one of these minors, say (D). For (k,j)∈ℱ'_l, we denote by W_k,j^red the infinite vector corresponding to s^kj c_0^j-1 c_(0,…,0,1). Hence, we have: V^red_l+(0,…,0,1)= -∑_(k,j)∈ℱ'_l a_k,l,j W_k,j^red. For each (k,j)∈ℱ'_l, we set D_k,j the matrix obtained from D by substituting to its last column, i.e. the part of V^red_l+(0,…,0,1), the corresponding part of the W_k,j^red. By multilinearity of the determinant, one obtains: (D)=-∑_(k,j)∈ℱ'_l(D_k,j)a_k,l,j. So, the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,l,j's for (k,j)∈ℱ'_l. Considering the linear relations for all these D's, we derive from E_l a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices. If E_l≠{0}, for each a_l:=(a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l list of coefficients of a polynomial in E_l, we perform the method in Section <ref> and we reconstruct the space Φ_l+(0,…,0,1)(a_l) of coefficients (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) for a relation (<ref>). By (<ref>) and (<ref>), it is an affine space ϕ_l+(0,…,0,1)(a_l) + F_l+(0,…,0,1) where ϕ_l+(0,…,0,1)(a_l) is a point and F_l+(0,…,0,1) a vector space. 
Note that ϕ_l+(0,…,0,1)(a_l) depends linearly on a_l and that its computation is done by computing a finite number of minors of matrices given by the W_k',j'^red's, (k',j')∈ℱ'_l , and the V_k”,j”^red's, (k”,j”)∈ℱ”_l+(0,…,0,1). Also, we have that F_l+(0,…,0,1) is independent of a_l. Finally, we observe that, for a given l, the set of admissible ((a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l , (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1))'s is a nonzero K-vector space. ⋆⋆ If c_0 is not algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)), we have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1). Note that in this case, such a polynomial P is necessarily unique for a given z'. We proceed as above with ℱ'_l+(0,…,0,1) instead of ℱ”_l+(0,…,0,1) and as in Section <ref>, in particular Lemma <ref> with 2.3. θ_s,(0,…,0,1)d^d+1 as bound for the depth of the minors involved. This determines from E_l a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices. Also, if E_l≠{0}, for each a_l∈ E_l≠{0}, we reconstruct the list of coefficients ϕ_l+(0,…,0,1)(a_l):= (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) for a relation (<ref>). By (<ref>) and (<ref>), ϕ_l+(0,…,0,1)(a_l) depends linearly on a_l and its computation is done by computing a finite number of minors of matrices given by the W_k',j'^red's, (k',j')∈ℱ'_l , and the V_k”,j”^red's, (k”,j”)∈ℱ'_l+(0,…,0,1). Again, we observe that, for a given l, the set of admissible ((a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l , (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1))'s is a nonzero K-vector space. To sum up Sections <ref> to <ref>, we have reconstructed a finite number of multi-indices l (i.e. possible initial steps l_0:=w_t(P)) and, for each of these l's, the nonzero K-vector space E_l,l+(0,…,0,1) of coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l≤_grlexl'≤_grlexl+(0,…,0,1) for the initial part of a possible vanishing polynomial for y_0. §.§.§ Induction step. For each l≤_grlexl̂_0 possible initial step as above, we assume that up to some l̃≥_grlexl+(0,…,0,1) we have reconstructed the nonzero K-vector space, say E_l,l̃, of coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃ for the initial part of a possible vanishing polynomial for y_0. Recall that, for λ∈^r, S(λ) (respectively A(λ) for λ≠ 0) denotes the successor (respectively the predecessor) for ≤_grlex of λ in ^r. Equation (<ref>) gives: ∑_j=0,..,d a_S(l̃),j(s) C_0^j=- ∑_i<S(l̃), j=0,..,d a_i,j(s) C_S(l̃)-i^(j) , which we write as: ∑_(k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) a_k,S(l̃),js^kC_0^j=- ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k C_S(l̃)-i^(j)) . Let us denote θ_s,S(l̃):= (|l̂_0|+d |S(l̃)|)a +b where a and b are as in Lemma <ref>. By this lemma, there exist polynomials (P_λ(s,z_0,…,z_λ))_λ= 0,…,S(l̃) such that P_λ(s,c_0,…,c_λ)=0, P_λ(s,c_0,…,c_A(λ),z_λ)≢0, _sP_λ≤θ_s,S(l̃), _z_μP_λ≤ d for μ≤_grlexλ. Let us denote i_S(l̃):=([ |S(l̃)|+r-τ; |S(l̃)| ])-1. Note that i_S(l̃)+1 is at most the number of multi-indices λ such that λ≤_grlexS(l̃). ∙ Suppose that ℱ'_S(l̃)∪𝒢'_S(l̃)=∅. Equation (<ref>) evaluated at c_0,…,c_S(l̃) reduces to: ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j))=0 . Let us expand c_n^(j) in (<ref>): y_0^j= ∑_n∈ℕ^r-τc_n^(j) t^n= (∑_γ∈ℕ^r-τc_γ t^γ)^j, so, c_n^(j)=∑_j / |j|=j g(j)=nj!/j!c^j where j:=(j_0,…,j_n) and c^j:= c_0^j_0⋯ c_n^j_n (and where g is as in Notation <ref>). 
Let us expand the left hand side of (<ref>): ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j))= ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k ∑_j / |j|=j g(j)=S(l̃)-ij!/j!c^j) (where j:=(j_0,…,j_S(l̃)) and c^j:= c_0^j_0⋯ c_S(l̃)^j_S(l̃)). We set 𝒦'_S(l̃) the set of (k,j) where k∈^τ and j:=(j_0,…,j_S(l̃)), j≠0, such that j:=|j|∈{0,…,d} and there exists i∈^r-τ with i<S(l̃), (k,j)∈ℱ'_i∪𝒢'_i, g(j)=S(l̃)-i. Equation (<ref>) becomes: ∑_(k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃)j!/j!a_k,S(l̃)-g(j),j s^kc^j=0 . Thanks to Remark <ref>, for any (k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃), we have that |k|≤ a |S(l̃)|+b≤θ_s,S(l̃). We are in position to apply the method of reconstruction of Section <ref> of all the polynomials such that ∑_(k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃) b_k,j s^kc^j=0. This requires computations of minors of the corresponding Wilczynski matrix up to a finite depth bounded by 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) d^d^i_S(l̃)+⋯+d^2+d+1 (see Lemma <ref>). By Lemma <ref>, the formulas (<ref>) and (<ref>) give us with a vector space B_S(l̃) (possibly zero) of coefficients b_k,j, hence a corresponding vector space A_S(l̃) of coefficients a_k,S(l̃)-g(j),j=j!/j!b_k,j. We take the intersection of A_S(l̃) with E_l,l̃ and we obtain another vector space of admissible coefficients that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices. ⋆ Suppose that ℱ'_S(l̃)∪𝒢'_S(l̃)≠∅. We determine whether c_0 is algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)). For this, we examine the vanishing of the minors of maximal order of M_ℱ'_S(l̃),𝒢'_S(l̃)^red up to the lowest row of order 2d'_s,S(l̃)d (see Section <ref> for the notation). There are two subcases. ⋆∙ If c_0 is algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)), according to Equation (<ref>), we set z':=- ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j)). We have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_S(l̃)∪𝒢'_S(l̃). We consider as in Section <ref>, a subfamily ℱ”_S(l̃) of ℱ'_S(l̃), the vectors (V_S(l̃), k,j^red)_(k,j)∈ℱ”_S(l̃) and V^red_S(l̃) for z', and the corresponding matrix N^red_S(l̃). According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_S(l̃) of maximal order up to the row p with |p| ≤ 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) . d^d^i_S(l̃)+⋯+d^2+d+1 Let us consider one of these minors, say (D). For i<S(l̃), for (k,j)∈ℱ'_i∪𝒢'_i, we denote by W_k,i,j^red the infinite vector corresponding to s^k c_S(l̃)-i^(j). We set D_k,i,j the matrix obtained from D by substituting to its last column, i.e. the part of V^red_S(l̃), the corresponding parts of the W_k,i,j^red's. Since V^red_S(l̃)= ∑_i<S(l̃) ( ∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,j.W_k,i,j^red), one has: (D)=- ∑_i<S(l̃) ( ∑_(k,j)∈ℱ'_i∪𝒢'_i(D_k,i,j) a_k,i,j). So, the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,i,j's for i<S(l̃) and (k,j)∈ℱ'_i∪𝒢'_i. Considering these linear relations, we derive from E_l,l̃ a new space of possible coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃, that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices. 
If this projection is not {0}, so in particular E_l≠{0}, for each a_l̃:=(a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ list of coefficients of a polynomial in E_l,l̃, we perform the method in Section <ref> and we reconstruct the space Φ_S(l̃)(a_l̃) of coefficients (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) for a relation (<ref>). By (<ref>) and (<ref>), it is an affine space ϕ_S(l̃)(a_l̃) + F_S(l̃) where ϕ_S(l̃)(a_l̃) is a point and F_S(l̃) a vector space. Note that ϕ_S(l̃)(a_l̃) depends linearly on a_l̃ and that its computation is done by computing a finite number of minors of matrices given by the W_k',i,j'^red's, i<S(l̃), (k',j')∈ℱ'_i∪𝒢'_i, and the V_k”,j”^red's, (k”,j”)∈ℱ”_S(l̃). Also, we have that F_S(l̃) is independent of a_l̃. Finally, we observe that the set of admissible ((a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ , (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃))'s, for a given l, is a nonzero K-vector space which we denote by E_l,S(l̃). ⋆⋆ If c_0 is not algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)), according to Equation (<ref>), we set z'=- ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j)). We want to determine if there exists a relation P( s, c_0)=z' with P having support in ℱ'_S(l̃)∪𝒢'_S(l̃). As in Section <ref>, we consider the vectors (V_S(l̃), k,j^red)_(k,j)∈ℱ'_S(l̃), V^red_S(l̃) for z', and the corresponding matrix N^red_S(l̃). According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_S(l̃) of maximal order up to the row p with |p| ≤ 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) . d^d^i_S(l̃)+⋯+d^2+d+1 where i_S(l̃) is defined by (<ref>). As previously, for any of such minors, say (D), the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,i,j's for i<S(l̃) and (k,j)∈ℱ'_i∪𝒢'_i. Considering these linear relations, we derive from E_l,l̃ a new space of possible coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃, that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices. If this projection is not {0}, so in particular E_l≠{0}, for each a_l̃:=(a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ list of coefficients of a polynomial in E_l,l̃, we perform the method in Section <ref> and we reconstruct the unique list of coefficients (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) for a relation (<ref>). Note that this list depends linearly on (a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ by relations (<ref>) and (<ref>). Finally, we denote by E_l,S(l̃) the K-vector space of ((a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ , (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃)) admissible. As a conclusion, we obtain: Let ñ^0∈^r, p∈^*, q∈^r-1∖{0}, d∈^* be given. Let ℱ,𝒢 be as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>. Let y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n=∑_n∈ℕ^r-τc_n(s) t^n∈ K[[s,t]], c_0,0≠ 0, be a series algebroid relatively to (ℱ,𝒢). Let l̂_0∈^r-τ be given. Assume that there exists a polynomial P∈(K[s][[t]][y])_ℱ,𝒢∖{0} such that P(s,t,y_0)=0 and w_t(P)≤_grlexl̂_0. For any l≤_grlexl̂_0, for any l̃≥_grlexl, Sections <ref> to <ref> provide the vector space E_l,l̃ of all the polynomials Q_l,l̃∈(K[s][[t]][y])_ℱ,𝒢 such that: w_t(Q_l,l̃)=l and w_t(Q_l,l̃(s,t,y_0) )>_grlexl̃. §.§ Proof of Theorem <ref> Theorem <ref> will be a corollary of the following result: Let d∈^* and ν̃_0∈. Let ỹ_0∈𝒦_r, more precisely ỹ_0=f̃/g̃ for some formal power series f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]]. 
We assume that ỹ_0 is algebroid of degree bounded by d, and that there is a vanishing polynomial P̃ of degree bounded by d and of (x)-adic order bounded by ν̃_0. Let q_i'≥ q_i, i=1,…,r-1, be such that the transform fg of f̃g̃ under the change of variables u_i:=(x_i/x_i+1^q_i')^1/p, i=1,…,r-1, u_r=x_r^1/p, is monomialized with respect to the u_i's: (fg)(u):=(f̃g̃)( u_1^pu_2^pq'_1⋯ u_r^pq'_1q'_2⋯ q'_r-1 , … , u_r-1^pu_r^p q'_r-1 , u_r^p , y) We resume the notations of (<ref>), (<ref>), (<ref>), in particular, x_i∈ξ_k if and only if q_i'>0, and otherwise x_i ∈x_k for some k: x^ ny^j = x_0^ n_0 ξ_1^ m_1 x_1^ n_1⋯ξ_σ^ m_σ x_σ^ n_σy^j. where n=( n_0, m _1, n_1,…, m_σ, n_σ). For k=1,…,σ, we denote ξ_k=(x_i_k,…,x_j_k-1) and x_k=(x_j_k,…,x_i_k+1-1), and accordingly m_k=(n_i_k,…,n_j_k-1) and n_k=(n_j_k,…,n_i_k+1-1) with i_σ+1:=r+1. For k=0 when x_0 is not empty, we denote x_0=(x_j_0,…,x_i_1-1) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1. When x_0 is empty, we set n_0=0. We set: [ L̃_k: ^i_k+1-i_k → ; (m_k,n_k)=(n_i_k,…,n_i_k+1-1) ↦ L̃_k(m_k,0)+ |n_k| ] where: L̃_k(m_k,0):=q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k+⋯+q'_j_k-1q'_j_k-2n_j_k-2 + q'_j_k-1n_j_k-1. Moreover, let L̃(n):=|n_0|+∑_k=1,…,σL̃_k(m_k,n_k). The algorithm described in Section <ref> provides for any ν∈ all the polynomials Q̃_ν(x,y)∈ K[[x]][y] with _yQ̃_ν≤ d and of (x)-adic order bounded by ν̃_0 such that, for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), one has: L̃(n)≥ν. Recall that, by the Monomialization Lemma <ref> and by Remark <ref>, if β=(β_1,…,β_r) is the lexicographic valuation of f̃g̃ with respect to the variables ζ_i:=(x_i/x_i+1^q_i)^1/p for i=1,…,r-1, ζ_r:=x_r^1/p, then the assumptions of Theorem <ref> are satisfied with q_i':=q_i+β_i+1+1. Therefore, Theorem <ref> follows. Let us now deduce Theorem <ref> from Theorem <ref>. Suppose that _xP̃≤ν̃_0. Let ℱ,𝒢 be as in Definition <ref> and such that ℱ∪𝒢 is the total family of multi-indices (α,j) satisfying Conditions (i), (ii), (iii) of Lemma <ref> with q_i' instead of q_i. By the transformations described in (<ref>), (<ref>) and (<ref>) associated to the change of variables u_i:=(x_i/x_i+1^q_i')^1/p, i=1,…,r-1, u_r=x_r^1/p, we obtain a polynomial P(u,y):=u^m̃^0P̃( u_1^pu_2^pq'_1⋯ u_r^pq'_1q'_2⋯ q'_r-1 , … , u_r^p , u^ñ^0y)∈(K[[u]][y])_ℱ,𝒢. Recall that we denote by x_k, ξ_k the sub-tuple of variables x_i corresponding to t_k, s_k respectively. For k=0 when t_0 is not empty, we denote x_0=(x_j_0,…,x_i_1-1), t_0=(u_j_0,…,u_i_1-1)=(x_j_0^1/p,…,x_i_1-1^1/p) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1. According to (<ref>), (<ref>), (<ref>), a monomial x^ n is transformed into a monomial u^α=s^βt^γ such that, for k=1,…,σ, we have: [ ξ_k^ m_k x_k^ n_k= s_i_k^pn_i_ks_i_k+1^p(n_i_k+1+q'_i_kn_i_k)⋯s_j_k-1^p(n_j_k-1+q'_j_k-2n_j_k-2+q'_j_k-2q'_j_k-3n_j_k-3+⋯+ q'_j_k-2q'_j_k-3⋯ q'_i_kn_i_k); t_j_k^p(n_j_k+q'_j_k-1n_j_k-1+q'_j_k-1q'_j_k-2n_j_k-2+⋯+ q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k) t_j_k+1^pn_j_k+1⋯t_i_k+1-1^pn_i_k+1-1. ] Hence, a monomial x^ ny^j of P̃(x,y) gives a monomial u^αu^m̃^0+jñ^0y^j=s^βt^γu^m̃^0+jñ^0y^j of P(u,y). Since (P̃) contains a monomial x^ ny^j such that |n|= |n_0|+∑_k=1^σ(|m_k|+|n_k|)≤ν̃_0, we have that: _tP≤ p|n_0|+ ∑_k=1^σ(pq'_j_k-1q'_j_k-2⋯ q'_i_k|m_k|+p|n_k|) + |(m̃^0+jñ^0)_|t| ≤ p.κ.ν̃_0 + d.ρ where n_|t denotes the components of n corresponding to the exponents of the variables t in u^n, κ:=max_k=1,..,σ(q'_j_k-1q'_j_k-2⋯ q'_i_k) and ρ:=∑_k=0^σ( |ñ^0_j_k|+⋯+|ñ^0_i_k+1-1|). We set l̂_0:= (p.κ.ν̃_0 + d.ρ,0,…,0)∈^r-τ, so that w_t(P)≤_grlexl̂_0. 
Given Q̃_ν(x,y) as in Theorem <ref>, let us denote by Q_ν(u,y) its transform via (<ref>), (<ref>), (<ref>) as recalled between P̃ and P above. One gets Q̃_ν(x,ỹ_0)=u^m̃^0Q_ν(u,y_0). According to (<ref>), (<ref>), (<ref>), a monomial x^ n/p of Q̃_ν(x,ỹ_0) is transformed into a monomial u^α=s^βt^γ such that, for k=1,…,σ, we have: [ ξ_k^ m_k/p x_k^ n_k/p= s_i_k^n_i_ks_i_k+1^n_i_k+1+q'_i_kn_i_k⋯s_j_k-1^n_j_k-1+q'_j_k-2n_j_k-2+q'_j_k-2q'_j_k-3n_j_k-3+⋯+ q'_j_k-2q'_j_k-3⋯ q'_i_kn_i_k; t_j_k^n_j_k+q'_j_k-1n_j_k-1+q'_j_k-1q'_j_k-2n_j_k-2+⋯+ q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k t_j_k+1^n_j_k+1⋯t_i_k+1-1^n_i_k+1-1. ] So the monomials of Q_ν(u,y_0) are of the form u^α-m̃^0. As in the computation of (<ref>), _xQ̃_ν(x,y)≤ν̃_0 implies that _tQ_ν(u,y)≤ p.κ.ν̃_0 + d.ρ, so w_t(Q_ν(u,y))≤_grlexl̂_0. Moreover, since Q̃_ν(x,ỹ_0)=u^m̃^0Q_ν(u,y_0), the condition such that for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), L̃(n)≥ν, is equivalent to _t(Q_ν(u,y_0))+|m̃^0_ |t|≥ν. This is in turn equivalent to w_t(Q_ν(u,y_0))≥(0,…,0,ν-|m̃^0_ |t|). We set l̃_ν:= (0,…,0,ν-|m̃^0_ |t|), and l:=w_t(Q_ν(u,y)). A polynomial Q̃_ν(x,y) satisfying the conditions of Theorem <ref> comes from a polynomial Q_ν(u,y) as above satisfying w_t(Q_ν(u,y))≤_grlexl̂_0 and w_t(Q_ν(u,y_0))≥l̃_ν. The construction of such polynomials Q_ν(u,y)=Q_l,l̃_ν(u,y) is given by Theorem <ref>. This achieves the proofs of Theorems <ref> and <ref>. §.§ Plan of the algorithm and example For the convenience of the reader, we now give several flowcharts in order to describe the algorithm. The first one provides the plan of the algorithm. The others consist of the details of the corresponding steps. < g r a p h i c s > < g r a p h i c s > < g r a p h i c s > < g r a p h i c s > < g r a p h i c s > The purpose of the present example is to illustrate the various points of our Theorem <ref>. For r=d=p=2 and q_1=ν̃_0=1, let us consider ỹ_0=f̃/g̃∈𝒦_2 with f̃,g̃∈ K[[(x_1/x_2)^1/2,x_2^1/2]] a root of the following equation: P̃(x_1,x_2,y) := sin(x_1+x_2)y^2+e^x_1x_1x_2y-x_2^2cos(x_1x_2) = 0. For instance, ỹ_0 := - e^x_1x_1x_2+ √( e^2x_1x_1^2x_2^2+4 x_2^2cos( x_1x_2 ) sin( x_1+x_2 ) )/2 sin( x_1+x_2 ) = - e^x_1/x_2x_2x_1/x_2x_2+ x_2^1/2√( e^2x_1/x_2x_2(x_1/x_2)^2x_2+4 cos( x_1/x_2x_2^2 ) sin( x_1/x_2x_2+x_2)/x_2)/2 sin( x_1/x_2x_2+x_2) / x_2 and therefore: f̃ := [ 2+x_1/x_2-1/4(x_1/x_2)^2+1/8(x_1/x_2)^3-5/64(x_1/x_2)^4+7/128(x_1/x_2)^5] x_2^1/2 -x_1/x_2x_2 +[ 1/4(x_1/x_2)^2-1/8(x_1/x_2)^3+3/32(x_1/x_2)^4-5/64(x_1/x_2)^5]x_2^3/2 -(x_1/x_2)^2x_2^2 +[ -1/6-5/12x_1/x_2-5/16(x_1/x_2)^2+43/96(x_1/x_2)^3-199/768(x_1/x_2)^4+107/512(x_1/x_2)^5] x_2^5/2 -1/2(x_1/x_2)^2x_2^3+⋯ g̃ := [2+2 x_1/x_2]-[ 1/3+x_1/x_2+(x_1/x_2)^2+1/3(x_1/x_2)^3]x_2^2 + [1/60+1/12x_1/x_2+1/6(x_1/x_2)^2+1/6(x_1/x_2)^3+1/12(x_1/x_2)^4 +1/60(x_1/x_2)^5]x_2^4 -1/2520[∑_k=0^7 7!/k!(7-k)! (x_1/x_2)^k ]x_2^6+⋯ In this case, note that the transform fg of f̃g̃ under the change of variables u_1:=(x_1/x_2)^1/2, u_2=x_2^1/2, is monomialized with respect to (u_1,u_2), so that q_1'=q_1=1 and (u_1,u_2)=(s,t). Hence, r-τ=τ=1. 
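Before following the expansion of ỹ_0 that is carried out next, one may sanity-check the example numerically: the displayed root must annihilate P̃, and after the substitution x_1=s^2t^2, x_2=t^2 the quotient ỹ_0/t must approach the coefficient c_0(s) whose first terms appear in the expansion below. A minimal floating-point sketch, where the sample points and the tolerance are arbitrary choices:

```python
import math

# Data of the example: P~(x1,x2,y) = sin(x1+x2) y^2 + exp(x1) x1 x2 y - x2^2 cos(x1 x2),
# and its root y~0 given by the quadratic formula above.
def P(x1, x2, y):
    return math.sin(x1 + x2) * y**2 + math.exp(x1) * x1 * x2 * y - x2**2 * math.cos(x1 * x2)

def y0_tilde(x1, x2):
    disc = math.exp(2 * x1) * x1**2 * x2**2 + 4 * x2**2 * math.cos(x1 * x2) * math.sin(x1 + x2)
    return (-math.exp(x1) * x1 * x2 + math.sqrt(disc)) / (2 * math.sin(x1 + x2))

# (1) the displayed root annihilates P~ (up to floating-point error) at a few sample points
for (x1, x2) in [(0.01, 0.02), (0.05, 0.03), (0.002, 0.001)]:
    assert abs(P(x1, x2, y0_tilde(x1, x2))) < 1e-12

# (2) after x1 = s^2 t^2, x2 = t^2, the quotient y~0 / t approaches
#     c_0(s) = 1 - s^2/2 + 3 s^4/8 - 5 s^6/16 + ...  (the expansion displayed below)
s, t = 0.1, 1e-3
x1, x2 = s**2 * t**2, t**2
c0_trunc = 1 - s**2 / 2 + 3 * s**4 / 8 - 5 * s**6 / 16
print(abs(y0_tilde(x1, x2) / t - c0_trunc))     # about 5e-6 here, i.e. of the size |c_1(s)| t
```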
Therefore, one can expand ỹ_0 as a monomialized power series in (s,t): ỹ_0=ty_0 with y_0 = 1-1/2s^2+3/8s^4-5/16s^6+35/128s^8-63/256s^10+⋯ + ( -1/2s^2+1/2s^4-1/2s^6+1/2s^8-1/2s^10+⋯)t + (1/8s^4-3/16s^6+15/64s^8-35/128s^10+⋯)t^2 +(-1/2s^4+1/2s^6-1/2s^8+1/2s^10+⋯)t^3 +( 1/12+1/8s^2+1/32s^4+47/192s^6-195/512s^8+499/1024s^10+⋯)t^4 ( -1/12s^2-1/12s^4-1/4s^6+1/4s^8-1/4s^10+⋯)t^5 +⋯ = ∑_n∈ℕc_n(s) t^n with c_0,0=1≠ 0 As described after (<ref>), now we are in position to apply the algorithm as stated in Theorem <ref> with ñ^0=(0,1) and ñ^0=(0,0) and l̂_0:= p.κ.ν̃_0 + d.ρ=2× 1× 1+2×1=4. The corresponding support of the vanishing polynomial P belongs to some ℱ∪𝒢 as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>, namely for any (k,l,j)∈ℱ∪𝒢: (i) (k,l)≥ (0,j); (ii)k and l-j are even; (iii)k≤ l-j. For the first step of the algorithm (Section <ref>), the list of plausible indices to begin with are all the non-negative integers l≤l̂_0=4. We resume the notations of Section <ref> (see also the method in Section <ref>). For simplicity, let us write c_0 for c_0(s). Step 1. If l=0 then j=0 and thefore l=k=0, so ℱ'_0=∅ and 𝒢'_0={(0,0,0)}. Equation (<ref>) translates as a_0,0,0=0, which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=0 from the list of admissible indices. If l=1 then j=0 or 1. But l-j has to be even, so j=1 and l-j=0=k. Thus, ℱ'_1={(0,1,1)} and 𝒢'_1=∅. Equation (<ref>) translates as a_0,1,1.s.C_0=0⇔ a_0,1,1=0, which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=1 from the list of admissible indices. If l=2 then j∈{0,1,2}. But l-j has to be even, so j=0 or 2. Since k is even, in the former case, k=0 or 2, and in the latter case k=0. Thus, ℱ'_2={(0,2,2)} and 𝒢'_2={(0,2,0), (2,2,0)}. Equation (<ref>) translates as a_0,2,2.C_0^2+a_0,2,0+a_2,2,0.s^2=0. However, since c_0^2=1-s^2+s^4-s^6+s^8-s^10+⋯ is not a polynomial of degree at most 2, the only possibility is a_0,2,2=a_0,2,0=a_2,2,0=0 which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=2 from the list of admissible indices. If l=3 then j∈{0,1,2} (recall that _y P=2≤ d=2). But l-j has to be even, so j=1. Since k is even, k=0 or 2. Thus, ℱ'_3={(0,3,1), (2,3,1)} and 𝒢'_3=∅. Equation (<ref>) translates as (a_0,3,1+a_2,3,1.s^2).C_0=0⇔ a_0,3,1=a_2,3,1=0, which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=3 from the list of admissible indices. If l=4, again since l-j has to be even, we have that j=0 or 2. Since k is even, in the former case, k∈{0,2,4}, and in the latter case k∈{0,2}. Thus, ℱ'_4={(0,4,2), (2,4,2)} and 𝒢'_2={(0,4,0), (2,4,0),(4,4,0)}. Equation (<ref>) translates as (a_0,4,2+a_2,4,2.s^2).C_0^2+a_0,4,0+a_2,4,0.s^2+a_4,4,0.s^4=0. Let us consider the corresponding Wilczynski matrices, where for simplicity the lines consists only of the coefficients of 1, s^2, s^4, etc. M_ℱ'_4,𝒢'_4 :=[[ 1 0 0 1 0; 0 1 0 -1 1; 0 0 1 1 -1; 0 0 0 -1 1; 0 0 0 1 -1; 0 0 0 -1 1; ⋮ ⋮ ⋮ ⋮ ⋮; ]] and M_ℱ'_4,𝒢'_4^red :=[[ -1 1; 1 -1; -1 1; 1 -1; -1 1; ⋮ ⋮; ]] (Recall that here the reduced matrix is obtained by removing the 3 first rows and columns.) One can easily check that all the minors of maximal order vanish up to order 2d_sd=2× 4× 2=16: as expected, c_0 is algebraic relatively to (ℱ'_4,𝒢'_4). Moreover, a first non-zero minor of order 1 in M_ℱ'_4,𝒢'_4^red is obtained e.g. 
with the coefficient 1 of the second column (this is the coefficient of s^6 in the expansion of s^2.c_0^2). Using the Cramer's rule, we identify it, up to a multiplicative constant λ∈ K, with a_2,4,2, and we also get a_0,4,2=λ. According to (<ref>), we derive a_0,4,0=-λ and a_2,4,0=a_4,4,0=0. As a conclusion, the K-vector space E_4 of polynomials corresponding to Equation (<ref>) is E_4:={λ[(1+s^2)y^2-1]t^4+R(s,t,y) | λ∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 5}. Here, the linear form L̃ of Theorem <ref> is given by: L̃(n_1,n_2)=1n_1+n_2=n_1+n_2. We go back to the variables (x_1,x_2) by the following transformation: Q(s, t,y)=Q̃(s^2t^2,t^2,ty). The space E_4 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form: λ[(x_1+x_2)y^2-x_2^2]+ R̃(x_1,x_2,y) with λ∈ K, R̃∈ K[[x_1,x_2]][y] such that: R̃=ã_0+ã_1y+ã_2y^2 with _x(ã_0)≥ 3, _x(ã_1)≥ 2 and _x(ã_2)≥ 2. Step 2. Here, there isn't any l'>4 as in (<ref>). Step 3. We consider the case where l+1=5 corresponding to Third Step <ref>. By applying Conditions (i), (ii), (iii) of Lemma <ref> as before, we obtain: ℱ'_5={(0,5,1),(2,5,1),(4,5,1) } and 𝒢'_5=∅. The instance of (<ref>) is: [ (a_0,5,1+a_2,5,1.s^2++a_4,5,1.s^4).C_0 = -( a_0,4,2+a_2,4,2.s^2)2C_0C_1; = -λ(1+s^2) 2C_0C_1. ] Here, c_1≠ 0, and c_0 is not algebraic relatively to (ℱ'_5,𝒢'_5) since 𝒢'_5=∅, so we are in the case ⋆⋆ of Third Step <ref>. Note that θ_s,1=(4+2)a+b with a=1, b=0 (see Lemma <ref>), so θ_s,1=6. According to Lemma <ref>, we are assured to find a non zero reconstruction minor at depth at most 2.3.θ_s,(0,…,0,1)d^d+1=2× 3× 6× 2^3=288. However, here, the Wilczynski matrices (where again for simplicity we only consider the lines consisting of the coefficients of 1, s^2, s^4, etc.) are triangular with non zero diagonal coefficients: M_ℱ'_5,𝒢'_5=M_ℱ'_5,𝒢'_5^red =[[ 1 0 0; -1/2 1 0; 3/8 -1/2 1; -5/16 3/8 -1/2; 35/128 -5/16 3/8; ⋮ ⋮ ⋮; ]]. A first nonzero minor is obtained with the three first lines, and is equal to 1. But we notice that, here, Equation (<ref>) can be simplified by C_0 (since c_0≠ 0) and we get: a_0,5,1+a_2,5,1.s^2++a_4,5,1.s^4= -λ(1+s^2) 2 C_1. By evaluating at c_1=-1/2s^2+1/2s^4-1/2s^6+1/2s^8-1/2s^10+⋯, we see that: -λ(1+s^2) 2 c_1=λ s^2 and therefore a_0,5,1= a_4,5,1=0 and a_2,5,1=λ. As a conclusion, the K-vector space E_4,5 of polynomials corresponding to Third Step <ref> is E_4,5:= {λ[(1+s^2)y^2-1]t^4+(λ s^2 y) t^5+R(s,t,y) | λ∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 6}. As before, we go back to the variables (x_1,x_2) by the following transformation: Q(s, t,y)=Q̃(s^2t^2,t^2,ty). The space E_4,5 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form: λ[(x_1+x_2)y^2+ x_1x_2 y-x_2^2]+ R̃(x_1,x_2,y) with λ∈ K, R̃∈ K[[x_1,x_2]][y] such that: R̃=ã_0+ã_1y+ã_2y^2 with _x(ã_0)≥ 3, _x(ã_1)≥ 3 and _x(ã_2)≥ 2. Step 4. We consider the case where S(l̃)=6 corresponding to Induction Step <ref>. By applying Conditions (i), (ii), (iii) of Lemma <ref> as before, we obtain: ℱ'_6={(0,6,2),(2,6,2),(4,6,2) } and 𝒢'_6={(0,6,0),(2,6,0),(4,6,0),(6,6,0) }. The instance of (<ref>) is: [ (a_0,6,2+a_2,6,2.s^2++a_4,6,2.s^4).C_0^2+ a_0,6,0+a_2,6,0.s^2+a_4,6,0.s^4+a_6,6,0.s^6; =-(( a_0,4,2+a_2,4,2.s^2)(2C_0C_2+ C_1^2) +(a_0,5,1+a_2,5,1.s^2++a_4,5,1.s^4).C_1); = -λ[(1+s^2) (2C_0C_2+ C_1^2) + s^2 C_1]. ] Note that we are in the case ⋆∙ of Induction Step <ref> since c_0 is algebraic relatively to (ℱ'_6,𝒢'_6). Moreover, when evaluating at c_0, c_1 and c_2=1/8s^4-3/16s^6+15/64s^8-35/128s^10+⋯, we obtain that the right-hand side of (<ref>) vanishes. 
So we get: (a_0,6,2+a_2,6,2.s^2++a_4,6,2.s^4).C_0^2+ a_0,6,0+a_2,6,0.s^2+a_4,6,0.s^4+a_6,6,0.s^6=0 which is of the same type as (<ref>). The corresponding Wilczynski matrices (where again for simplicity the lines consists only of the coefficients of 1, s^2, s^4, etc.) are M_ℱ'_6,𝒢'_6 :=[[ 1 0 0 0 1 0 0; 0 1 0 0 -1 1 0; 0 0 1 0 1 -1 1; 0 0 0 1 -1 1 -1; 0 0 0 0 1 -1 1; 0 0 0 0 -1 1 -1; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; ]] and M_ℱ'_6,𝒢'_6^red :=[[ -1 1 -1; 1 -1 1; -1 1 -1; 1 -1 1; -1 1 -1; ⋮ ⋮ ⋮; ]] We apply the reconstruction method of Section <ref> with maximal subfamily ℱ”_6={(2,6,2)}. According to Lemma <ref>, we obtain: a_2,6,2= a_0,6,2λ_2,6,2^0,6,2+ a_4,6,2λ_2,6,2^4,6,2 where here λ_2,6,2^0,6,2=-1 is the coefficient relating the column (0,6,2) to the column (2,6,2). Likewise, λ_2,6,2^4,6,2=-1. Let us consider a_0,6,2 and a_4,6,2 as parameters α,β∈ K, so a_2,6,2=-α-β. Moreover, we compute the coefficients of 𝒢'_6 according to (<ref>) in Lemma <ref>: [ a_0,6,0 = -a_0,6,2. 1 = -α; a_2,6,0 = a_0,6,2. 1 -a_2,6,2.1 = 2α+β; a_4,6,0 = - a_0,6,2. 1 +a_2,6,2.1 -a_4,6,2.1 = -2α-2β; a_6,6,0 = a_0,6,2. 1 -a_2,6,2.1 +a_4,6,2.1 = 2α+2β ] As a conclusion, the K-vector space E_4,6 of polynomials corresponding to Induction Step <ref> is [ E_4,6:={λ[(1+s^2)y^2-1]t^4+(λ s^2 y) t^5 +.; [ (α - (α +β )s^2 +β s^4) y^2 -α +(2α+β)s^2- 2(α +β)s^4+ 2(α +β)s^6 ]t^6 +R(s,t,y) |; .λ,α,β∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 7}. ] As before, we go back to the variables (x_1,x_2) by the following transformation: Q(s, t,y)=Q̃(s^2t^2,t^2,ty). The space E_4,6 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form: [ (λ x_1+λ x_2+ αx_2^2- (α +β )x_1x_2 +β x_1^2)y^2+ λ x_1x_2 y; -λx_2^2 -αx_2^3 +(2α+β)x_1x_2^2- 2(α +β)x_1^2x_2+ 2(α +β)x_1^3 + R̃(x_1,x_2,y) ] with λ,α,β∈ K, R̃∈ K[[x_1,x_2]][y] such that: R̃=ã_0+ã_1y+ã_2y^2 with _x(ã_0)≥ 4, _x(ã_1)≥ 3 and _x(ã_2)≥ 3. Note that we recover the beginning of the analytic expansion of P̃ at 0 in (<ref>) for λ=1 and α=β=0. § A GENERALIZATION OF THE FLAJOLET-SORIA FORMULA. In the monovariate context, let Q(x,y)=∑_i,ja_i,jx^iy^j ∈ K[x,y] with Q(0,0)=∂ Q/∂ y(0,0)=0 and Q(x,0)≠ 0. In <cit.>, P. Flajolet and M. Soria give the following formula for the coefficients of the unique formal solution y_0=∑_n≥ 1c_nx^n of the implicit equation y=Q(x,y): [Flajolet-Soria's Formula <cit.>] c_n=∑_m=1^2n-11/m∑_|k|=m, ||k||=m-1, g(k)=nm!/∏_i,jk_i,j!∏_i,ja_i,j^k_i,j, where k=(k_i,j)_i,j, |k|=∑_i,jk_i,j, ||k|| = ∑_i,jj k_i,j and g(k) = ∑_i,ji k_i,j. Note that in the particular case where the coefficients of Q verify a_0,j=0 for all j, one has m≤ n in the summation. One can derive immediately from Theorems 3.5 and 3.6 in <cit.> a multivariate version of the Flajolet-Soria Formula in the case where Q(x,y)∈ K[x,y]. The purpose of the present section is to generalize the latter result to the case where Q(x,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y]. We will need a special version of Hensel's Lemma for multivariate power series elements of K((x_1^ℤ,…,x_r^ℤ))^grlex. Recall that the latter denotes the field of generalized series (K((X^ℤ^r))^grlex, w) where w is the graded lexicographic valuation as described in Section <ref>. Generalized series fields are known to be Henselian <cit.>. For the convenience of the reader, we give a short proof in our particular context. We call strongly reduced Henselian equation any equation of the following type: y=F(u,y) with F(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod, such that w(F(u,y))>_grlex0 and F(u,0) 0. 
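Before turning to the generalized statement, the classical univariate formula recalled at the beginning of this section can be evaluated directly by brute-force enumeration of the multi-indices k. The following sketch does so for the toy equation y=x+y^2 (so a_{1,0}=a_{0,2}=1), whose unique solution has the Catalan numbers 1, 1, 2, 5, 14, 42, … as coefficients; the toy equation and the naive enumeration are our own choices, made only to illustrate the shape of the sum.

```python
from fractions import Fraction
from itertools import product
from math import factorial

# Classical Flajolet-Soria formula, evaluated by brute-force enumeration of the multi-indices
# k = (k_{i,j}).  Toy instance (our choice): Q(x,y) = x + y^2, i.e. a_{1,0} = a_{0,2} = 1,
# whose unique solution of y = Q(x,y) has the Catalan numbers as coefficients.
Q = {(1, 0): Fraction(1), (0, 2): Fraction(1)}      # a_{i,j}, indexed by (i, j)

def coefficient(n, Q):
    mons = sorted(Q)                                # fixed ordering of the support of Q
    total = Fraction(0)
    for m in range(1, 2 * n):                       # m = 1, ..., 2n - 1
        for k in product(range(m + 1), repeat=len(mons)):
            if sum(k) != m:                         # |k| = m
                continue
            if sum(kij * j for kij, (i, j) in zip(k, mons)) != m - 1:   # ||k|| = m - 1
                continue
            if sum(kij * i for kij, (i, j) in zip(k, mons)) != n:       # g(k) = n
                continue
            term = Fraction(factorial(m))
            for kij, mon in zip(k, mons):
                term = term / factorial(kij) * Q[mon] ** kij
            total += term / m                       # the factor 1/m
    return total

print([coefficient(n, Q) for n in range(1, 7)])     # expect 1, 1, 2, 5, 14, 42
```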
[Hensel's lemma] Any strongly reduced Henselian equation admits a unique solution y_0= ∑_n>_grlex0c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex. Let y=F(u,y) be a strongly reduced Henselian equation and let y_0=∑_n>_grlex0c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex. For n∈ℤ^r, n>_grlex0, let us denote z̃_n:= ∑_m<_grlexn c_mu^m. We get started with the following key lemma: The following are equivalent: * a series y_0 is a solution of (<ref>); * for any n∈ℤ^r, n>_grlex0, w(z̃_n-F(u,z̃_n))=w(y_0-z̃_n); * for any n∈ℤ^r, n>_grlex0, w(z̃_n-F(u,z̃_n))≥_grlexn. For n>_grlex0, let us denote ỹ_n:=y_0-z̃_n=∑_m≥_grlexn c_mu^m. We apply Taylor's Formula to G(u,y):=y-F(u,y) at z̃_n: G(u,z̃_n+y) =z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))y +y^2H(u,y), where H(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[y] with w(R(u,y))>_grlex0. The series y_0 is a solution of (<ref>) iff for any n, ỹ_n is a root of G(u,z̃_n+y)=0, i.e.: z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))ỹ_n+ ỹ_n^2H(u,ỹ_n)=0. Now consider y_0 a solution of (<ref>) and n∈ℤ^r, n>_grlex0. Either ỹ_n=0, i.e. y_0=z̃_n: (2) holds trivially. Or ỹ_n≠ 0, so we have: n≤_grlex w((1-∂ G/∂ y(u,z̃_n))ỹ_n) =w(ỹ_n)<_grlex 2w(ỹ_n)<_grlex w(ỹ_n^2H(u,ỹ_n)). So we must have w(z̃_n-G(u,z̃_n))=w(ỹ_n). Now, (2) ⇒ (3) since w(ỹ_n)≥_grlexn. Finally, suppose that for any n, w(z̃_n-F(u,z̃_n))≥_grlexn. If y_0-F(x,y_0)≠ 0, denote n_0:= w(y_0-F(u,y_0)). For n>_grlexn_0, one has n_0=w(z̃_n-F(u,z̃_n))≥_grlexn. A contradiction. Let us return to the proof of Theorem <ref>. Note that, if y_0 is a solution of (<ref>), then its support needs to be included in the monoid 𝒮 generated by the i's from the nonzero coefficients a_i,j of F(x,y). If not, consider the smallest index n for ≤_grlex which is not in 𝒮. Property (2) of Lemma <ref> gives a contradiction for this index. 𝒮 is a well-ordered subset of (ℤ^r)_≥_grlex0 by <cit.>. Let us prove by transfinite induction on n∈𝒮 the existence and uniqueness of a sequence of series z̃_n as in the statement of the previous lemma. Suppose that for some n∈𝒮, we are given a series z̃_n with support included in 𝒮 and <_grlexn, such that w(z̃_n-F(u,z̃_n))≥_grlexn. Then by Taylor's formula as in the proof of the previous lemma, denoting by m the successor of n in 𝒮 for ≤_grlex: G(u,z̃_m)=G(u,z̃_n+c_nu^n) =z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))c_nu^n +c_n^2u^2nH(u,z̃_n). Note that w(H(u,z̃_n))≥_grlex0 since w(z̃_n)>_grlex0 and w(F(u,y))>_grlex0. Therefore, one has: w(G(u,z̃_m))=w(z̃_m-F(u,z̃_m))≥_grlexm>_grlexn if and only if c_n is equal to the coefficient of u^n in F(u,z̃_n). This determines z̃_m in a unique way as desired. We prove now our generalized version of the Flajolet-Soria Formula <cit.>. Our proof, as the one in <cit.>, uses the classical Lagrange Inversion Formula in one variable. We will use Notation <ref>. [Generalized multivariate Flajolet-Soria Formula] Let y=F(u,y)=∑_i,ja_i,ju^iy^j be a strongly reduced Henselian equation. Define ι_0=(ι_0,1,…,ι_0,r) by: -ι_0,k:=min{0, i_k / a_i,j≠ 0, i = (i_1,…,i_k,…,i_r)}, k=1,…,r. Then the coefficients c_n of the unique solution y_0=∑_n>_grlex0 c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex are given by: c_n=∑_m=1^μ_n1/m∑_|M|=m, ||M||=m-1, g(M)=nm!/M!A^M where μ_n is the greatest integer m such that there exists an M with |M|=m, ||M||=m-1 and g(M)=n. Moreover, for n=(n_1,…,n_r), μ_n≤∑_k=1^rλ_k n_k with: λ_k={[ ∏_j=k+1^r-1(1+ι_0,j)+∏_j=1^r-1(1+ι_0,j) if k<r-1;; 1+∏_j=1^r-1(1+ι_0,j) if k=r-1;; ∏_j=1^r-1(1+ι_0,j) if k=r. ]. * In (<ref>), note that the second sum is finite. Indeed, let M=(m_i,j) be such that |M|=m, ||M||=m-1, g(M)=n. 
Since F∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y], if i has a component negative enough, then a_i,j=0. On the other hand, since |M|=m and g(M)=n, the positive components of i are bounded. * By <cit.>, 1/m·m!/M!∈ℕ. If we set m_j:=∑_im_i ,j and N=(m_j)_j, then |N|=m, N=m-1 and: 1/m·m!/M!= 1/m·m!/N!·N!/M!, where N!/M! is a product of multinomial coefficients and 1/m·m!/N! is an integer again by <cit.>. Thus, each c_n is the evaluation at the a_i,j's of a polynomial with coefficients in ℤ. For a given strongly reduced Henselian equation y=F(u,y), one can expand: f(u,y):=y/F(u,y)=∑_n≥ 1b_n(u)y^n ∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]] with b_1≠ 0, which admits a unique formal inverse in K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]]: f̃(u,y)= ∑_m≥ 1d_m(u) y^m. The Lagrange Inversion Theorem (see e.g. <cit.> with ℱ=K((u_1^ℤ,…,u_r^ℤ))^grlex and P=f(u,y)) applies: for any m, d_m(u) is equal to the coefficient of y^m-1 in [F(u,y)]^m, divided by m. Hence, according to the multinomial expansion of [F(u,y)]^m=[∑_i,ja_i,ju^iy^j]^m: d_m(u)=1/m∑_|M|=m, ||M||=m-1m!/M!A^Mu^g(M). Note that the powers n of u that appear in d_m are nonzero elements of the monoid generated by the exponents i of the monomials u^iy^j appearing in F(u,y), so they are >_grlex0. Now, it will suffice to show that, for any fixed n, the number ∑_k=1^rλ_k n_k is indeed a bound for the number μ_n of m's for which d_m can contribute to the coefficient of u^n. Indeed, this will show that f̃(u,y)∈ K[y]((u_1^ℤ,…,u_r^ℤ))^grlex. But, by definition of f̃, one has that: f̃(u,y)=y F(u,f̃(u,y)) ∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]]. Hence, both members of this equality are in fact in K[y]((u_1^ℤ,…,u_r^ℤ))^grlex. So, for y=1, we get that f̃(u,1)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex is a solution with w(f̃(u,1))>_grlex0 of the equation: f(u,y)=y/F(u,y)=1 ⇔ y=F(u,y). It is equal to the unique solution y_0 of Theorem <ref>: y_0=f̃(u,1)= ∑_m≥ 1d_m(u). We consider the relation: g(M)=n ⇔ {[ ∑_i,jm_i,j i_1 = n_1;; ⋮; ∑_i,jm_i,j i_r = n_r. ]. Let us decompose m=|M|=∑_i,jm_i,j as follows: |M|=∑_|i|>0m_i,j+∑_|i|=0, i_1>0m_i,j+⋯+ ∑_|i|=0=i_1=⋯=i_r-2, i_r-1>0m_i,j. So, the relation g(M)=n can be written as: {[ ∑_|i|>0m_i,j i_1+∑_|i|=0, i_1>0m_i,j i_1 = n_1;; ⋮; ∑_|i|>0m_i,j i_k+∑_|i|=0, i_1>0m_i,j i_k+⋯+ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j i_k = n_k;; ⋮; ∑_i,jm_i,j i_r = n_r. ]. Firstly, let us show by induction on k∈{0,…,r-1} that: [ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j ≤ ∑_q=1^k-1[ι_0,k(∏_p=q+1^k-1(1+ι_0,p) + ∏_p=1^k-1(1+ι_0,p) )]n_q; +[1+ι_0,k∏_p=1^k-1(1+ι_0,p) ]n_k; +[ι_0,k∏_p=1^k-1(1+ι_0,p)]n_k+1 +⋯+[ι_0,k∏_p=1^k-1(1+ι_0,p)]n_r , ] the initial step k=0 being: ∑_|i|>0m_i,j≤ n_1+…+n_r. This case k=0 follows directly from (<ref>), by summing its r relations: ∑_|i|>0m_i,j≤∑_|i|>0m_i,j|i|≤ n_1+…+n_r. Suppose that we have the desired property until some rank k-1. Recall that for any i, i_k≥ -ι_0,k. By the k'th equation in (<ref>), we have: [ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j ≤ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j i_k ][ ≤ n_k-( ∑_|i|>0m_i,j i_k+∑_|i|=0, i_1>0m_i,j i_k+⋯+ ∑_|i|=0=i_1=⋯=i_k-2, i_k-1>0m_i,j i_k); ≤ n_k+ι_0,k( ∑_|i|>0m_i,j +∑_|i|=0, i_1>0m_i,j +⋯+ ∑_|i|=0=i_1=⋯=i_k-2, i_k-1>0m_i,j). ] We apply the induction hypothesis to these k sums and obtain an inequality of type: ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j≤α_k,1 n_1+⋯+α_k,r n_r. For q>k, let us compute: [ α_k,q = ι_0,k( 1+ ι_0,1+ ι_0,2(1+ι_0,1)+ι_0,3(1+ι_0,1)(1+ι_0,2)+⋯ + ι_0,k-1∏_p=1^k-2(1+ι_0,p) ); = ι_0,k∏_p=1^k-1(1+ι_0,p). ] For q=k, we have the same computation, plus the contribution of the isolated term n_k. Hence: α_k,k=1+ι_0,k∏_p=1^k-1(1+ι_0,p). 
For q<k, we have a part of the terms leading again by the same computation to the formula ι_0,k∏_p=1^k-1(1+ι_0,p). The other part consists of terms starting to appear at the rank q and whose sum can be computed as: ι_0,k( 1+ ι_0,q+1+ ι_0,q+2(1+ι_0,q+1)+⋯ + ι_0,k-1∏_p=q+1^k-2(1+ι_0,p) ) = ι_0,k∏_p=q+1^k-1(1+ι_0,p). So we obtain as desired: α_k,q= ι_0,k[ ∏_p=q+1^k-1(1+ι_0,p)+ ∏_p=1^k-1(1+ι_0,p)]. Subsequently, we obtain an inequality for m=|M|=∑_i,jm_i,j of type: [ m = ∑_|i|>0m_i,j+∑_|i|=0, i_1>0m_i,j+⋯+ ∑_|i|=0=i_1=⋯=i_r-2, i_r-1>0m_i,j; ≤ α_1 n_1+⋯ +α_r n_r, ] with α_k= 1+∑_l=1^r-1α_l,k for any k. For k=r, let us compute in a similar way as before for α_k,q: [ α_r = 1+ι_0,1+ι_0,2(1+ι_0,1)+⋯ +ι_0,k∏_p=1^k-1(1+ι_0,p)+⋯ +ι_0,r-1∏_p=1^r-2(1+ι_0,p); = ∏_p=1^r-1(1+ι_0,p)=λ_r. ] For k=r-1, we have the same computation plus 1 coming from the term α_r-1,r-1. Hence: α_r-1=1+ ∏_p=1^r-1(1+ι_0,p)=λ_r-1. For k∈{1,…,r-2}, we have a part of the terms leading again by the same computation to the formula ∏_p=1^r-1(1+ι_0,p). The other part consists of terms starting to appear at the rank k and whose sum can be computed as: 1+ι_0,k+1+ι_0,k+2(1+ι_0,k+1)+⋯+ι_0,r-1∏_p=k+1^r-2(1+ι_0,p)=∏_p=k+1^r-1(1+ι_0,p) Altogether, we obtain as desired: α_k=∏_p=k+1^r-1(1+ι_0,p)+∏_p=1^r-1(1+ι_0,p)=λ_k. * Note that for any k∈{1,…,r-1}, λ_k=λ_r(1/(1+ι_0,1)⋯(1+ι_0,k)+1), so λ_1≥λ_k>λ_r. Thus, we obtain that: μ_n≤λ_1|n|. Moreover, in the particular case where ι_0=0– i.e. when Q(x,y)∈ K[[x]][y] and y_0∈ K[[x]] as in <cit.>– we have λ_k=2 for k∈{1,…,r-1} and λ_r=1. Thus we obtain: μ_n≤ 2|n|-n_r≤ 2|n|. Note that : |n| ≤ 2|n|-n_r≤ 2|n| which can be related in this context with the effective bounds 2|n|-1 (case w_x(Q(x,y))≥_grlex0) and |n| (case w_x(Q(x,y))>_grlex0) given in <cit.>. * With the notation from Theorem <ref>, any strongly reduced Henselian equation y=Q(x,y) can be written: x^ι_0y=Q̃(x,y)with Q̃(x,y)∈ K[[x]][y] and w_x(Q̃(x,y))>_grlexι_0. Any element n of Supp y_0, being in the monoid 𝒮 of the proof of Theorem <ref>, is of the form: n=m-k ι_0 with m∈ℕ^r, k∈ℕ and k |ι_0|≤ |m|. Let us consider the following example of strongly reduced Henselian equation: [ y = a_1,-1,2x_1x_2^-1 y^2 + a_-1,2,0x_1^-1x_2^2 +a_0,1,1x_2y+ a_-1,3,0x_1^-1x_2^3 +a_0,2,1x_2^2y; +(a_1, 1, 0+ a_1,1,2y^2)x_1 x_2 +a_1,2,0 x_1x_2^2+a_2,1,1yx_1^2x_2; + a_1,3,0 x_1x_2^3 +a_2,2,1 yx_1^2x_2^2+a_3,1,2y^2x_1^3x_2. ] The support of the solution is included in the monoid 𝒮 generated by the exponents of (x_1,x_2), which is equal to the pairs n=(n_1,n_2)∈ℤ^2 with n_2=-n_1+ l and n_1≥ -l for l∈ℕ. We have ι_0=(1,1), so (λ_1,λ_2)=(3,2) and μ_n≤ 3n_1+2n_2=n_1+2l. We are in position to compute the first coefficients of the unique solution y_0. Let us give the details for the computation of the first terms, for l=0. In this case, to compute c_n_1,-n_1, n_1>0, we consider m such that 1≤ m≤μ_n_1,-n_1≤ n_1, and M=(m_i,j)_i,j such that: {[ |M|=m ⇔ ∑_i,jm_i,j=m≤ n_1;; M=m-1 ⇔ ∑_i,jm_i,jj=m-1≤ n_1-1;; g(M)=n ⇔ {[ ∑_i,jm_i,j i_1 = n_1>0;; ∑_i,jm_i,j i_2 = -n_1<0. ]. ]. The last condition implies that m_1,-1,2≥ n_1. But, according to the second condition, this gives n_1-1≥M≥ 2 m_1,-1,2≥ 2 n_1, a contradiction. Hence, c_n_1,-n_1=0 for any n_1>0. In the case l=1, we consider the corresponding conditions to compute c_n_1,-n_1+1 for n_1≥ -1. We obtain that 1≤ m≤μ_n_1,-n_1+1≤ n_1+2. Suming the two conditions in g(M)=(n_1,-n_1+1), we get m_-1,2,0+m_0,1,1=1 and m_i,j=0 for any i such that i_1+i_2≥ 2. 
So we are left with the following linear system: {[ (L_1) m_1,-1,2 + m_-1,2,0 + m_0,1,1 = m ≤ n_1+2; (L_2) 2 m_1,-1,2 + m_0,1,1 = m-1 ≤ n_1+1; (L_3) m_1,-1,2 - m_-1,2,0 = n_1; (L_4) -m_1,-1,2 + 2 m_-1,2,0 + m_0,1,1 = -n_1+1; ]. By comparing (L_2)-(L_3) and (L_1), we get that m=m-1-n_1, so n_1=-1. Consequently, by (L_1), m=1, and by (L_2), m_1,-1,2=m_0,1,1=0. Since m_-1,2,0+m_0,1,1=1, we obtain m_-1,2,0=1 which indeed gives the only solution. Finally, c_n_1,-n_1+1=0 for any n_1≥ 0 and: c_-1,2=1/11!/1!0!a_-1,2,0^1=a_-1,2,0. Similarly, we claim that one can determine that: [ c_-2,4 = 0, μ_n≤ 2;; c_-1,3 = a_-1,3,0+a_0,1,1a_-1,2,0+a_1,-1,2a_-1,2,0^2, μ_n≤ 3;; c_0,2 = 0, μ_n≤ 4;; c_1,1 = a_1,1,0, μ_n≤ 5;; c_n_1,-n_1+2 = 0 for n_1≥ 0, n_1≠ 1 μ_n≤ n_1+4;; c_n_1,-n_1+3 = 0 for -3≤ n_1≤ -2, μ_n≤ n_1+6;; c_-1,4 = a_0,2,1a_-1,2,0+a_0,1,1a_-1,3,0+2 a_1,-1,2a_-1,2,0a_-1,3,0; +a_0,1,1^2a_-1,2,0+3 a_0,1,1a_1,-1,2a_-1,2,0^2+2 a_1,-1,2^2a_-1,2,0^3, μ_n≤ 5;; ⋮ ] § CLOSED-FORM EXPRESSION OF AN ALGEBROID MULTIVARIATE SERIES. The field K of coefficients has still characteristic zero. Our purpose is to determine the coefficients of an algebroid series in terms of the coefficients of a vanishing polynomial. We consider the following polynomial of degree in y bounded by d_y and satisfying the conditions (i) to (iii) of Lemma <ref>: [ P(u,y) = ∑_i∈^r∑_j=0^d_ya_i,ju^iy^j , with P(u,y)∈ K[[u]][y]∖{0}; = ∑_i∈^rπ_i^P(y)u^i; = ∑_j=0^d_ya_j^P(u)y^j, ] and a formal power series: y_0=∑_n≥_grlex0c_ nu^n, with y_0∈ K[[u]], c_0≠ 0. The field K((u)) is endowed with the graded lexicographic valuation w. For any k∈ℕ^r and for any Q(u,y)=∑_j=0^da_j^Q(u)y^j∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[y], we denote: * S(k) the successor element of k in (ℕ^r,≤_grlex); * w(Q):=min{w (a_j^Q(u)), j=0,..,d}; * For any k∈^r, z_k:=∑_n=0^kc_nu ^n; * y_k:=y_0-z_k=∑_n≥_grlexS( k)c_nu^n; * Q_k(u,y):=Q(u,z_k+u^S(k)y) =∑_i≥_grlexi_kπ^Q_k,i(y)u^i where i_k:=w( Q_k). Note that the sequence (i_k)_k∈ℕ^r is nondecreasing since Q_S(k)(u,y)=Q_k(u,c_S(k)+u^ny) for n=S^2(k)-S(k)>_grlex0, n∈ℤ^r. As for the algebraic case <cit.>, we consider y_0 solution of the equation P=0 via an adaptation in several variables of the algorithmic method of Newton-Puiseux, also with two stages: * a first stage of separation of the solutions, which illustrates the following fact: y_0 may share an initial part with other roots of P. But, if y_0 is a simple root of P, this step concerns only finitely many of the first terms of y_0 since w(∂ P/∂ y (u,y_0)) is finite. * a second stage of unique "automatic" resolution: for y_0 a simple root of P, once it has been separated from the other solutions, we will show that the remaining part of y_0 is a root of a strongly reduced Henselian equation, in the sense of Definition <ref>, naturally derived from P and an initial part of y_0. (i) The series y_0 is a root of P(u,y) if and only if the sequence (i_k)_k∈ℕ^r where i_k:=w( P_k) is strictly increasing. (ii) The series y_0 is a simple root of P(u,y) if and only if the sequence (i_k)_k∈ℕ^r is strictly increasing and there exists a lowest multi-index k_0 such that i_S(k_0)=i_k_0-S(k_0)+S^2(k_0). In that case, one has that i_S(k)=i_k-S(k)+S^2(k)=i_k_0-S(k_0)+S^2(k) for any k≥_grlexk_0. (i) Note that for any k∈ℕ^r,i_k≤_grlex w(P_k(u,0)=w(P(u,z_k)). Hence, if the sequence (i_k)_k∈ℕ^r is strictly increasing in (ℕ^r,≤_grlex), it tends to +∞ (i.e. ∀n∈ℕ^r, ∃k_0∈ℕ^r, ∀k≥_grlexk_0, i_k≥_grlexn), and so does w(P(u,z_k)). The series y_0 is indeed a root of P(u,y). 
Conversely, suppose that there exist k<_grlexl such that i_k≥_grlexi_l. Since the sequence (i_n)_n∈ℕ^r is nondecreasing, one has that i_l≥i_k, so i_l=i_k. We apply the multivariate Taylor's formula to P_j(u,y) for j>_grlexk: [ P_j(u,y) = P_k(u,c_S(k)+ c_S^2(k)u^S^2(k)-S(k) +⋯+c_ju^j-S(k)+u^S(j)-S(k)y); = ∑_i≥_grlexi_kπ^P_k,i(c_S(k)+ c_S^2(k)u^S^2(k)-S(k) +⋯+u^S(j)-S(k)y) u^i; = π^P_k,i_k(c_S(k))u^i_k+b_S(i_k)u^S(i_k)+ ⋯. ] Note that b_S(i_k)= π^P_k,S(i_k)(c_S(k)) or b_S(i_k)= (π^P_k,i_k )'(c_S(k)) c_S^2(k)+π^P_k,S(i_k)(c_S(k)) depending on whether S(i_k)<_grlexi_k+S^2(k)-S(k) or S(i_k)=i_k+S^2(k)-S(k). For j=l, we deduce that π^P_k,i_k(c_S(k))≠ 0. This implies that for any j>_grlexk, i_j=i_k and w(P_j(u,0))=w(P(u,z_j))=i_k. Hence w(P(u,y_0))=i_k≠ +∞. (ii) The series y_0 is a double root of P if and only if it is a root of P and ∂ P/∂ y. Let y_0 be a root of P. Let us expand the multivariate Taylor's formula (<ref>) for j=S(k): [ [ P_S(k)(u,y) = π^P_k,i_k(c_S(k))u^i_k+ π^P_k,S(i_k)(c_S(k))u^S(i_k)+⋯; +[(π^P_k,i_k)'(c_S(k)) y+π^P_k,i_k+S^2(k)-S(k)(c_S(k))]u^i_k+S^2(k)-S(k)+⋯ + ]; [(π^P_k,i_k)”(c_S(k))/2 y^2+(π^P_k,i_k+S^2(k)-S(k))'(c_S(k)) y+π^P_k,i_k+2(S^2(k)-S(k))(c_S(k))]u^i_k+2(S^2(k)-S(k))+⋯ ] Note that if S(i_k)=i_k+S^2(k)-S(k), then there are no intermediary terms between the first one and the one with valuation i_k+S^2(k)-S(k). We have by definition of P_k: ∂ P_k/∂ y(u,y)=u^S(k)(∂ P/∂ y)_k(u,y)=∑_i≥_grlexi_k(π^P_k,i)'(y)u^i One has that π^P_k,i_k(y) 0 and π^P_k,i_k(c_S(k))=0 (see the point (i) above), so (π^P_k,i_k)'(y) 0. Thus: w((∂ P/∂ y)_k)=i_k-S(k). We perform the Taylor's expansion of (∂ P/∂ y)_S(k): [ (∂ P/∂ y)_S(k)(u,y) = (∂ P/∂ y)_k(u,c_S(k)+u^S^2(k)-S(k)y); = ( π^P_k,i_k)'(c_S(k))u^i_k-S(k)+⋯; + [(π^P_k,i_k)”(c_S(k)) y+(π^P_k,i_k+S^2(k)-S(k))'(c_S(k))]u^i_k+S^2(k)-2S(k)+⋯. ] By the point (i) applied to ∂ P/∂ y, if y_0 is a double root P, we must have (π^P_k,i_k)'(c_S(k))=0. Moreover, if π^P_k,i(c_S(k))≠ 0 for some i∈{S(i_k), … , i_k+S^2(k)-S(k)}, by Formula (<ref>) we would have i_S(k)≤_grlexi_k+S^2(k)-S(k) and even i_j≤_grlexi_k+S^2(k)-S(k) for every j>_grlexk according to Formula (<ref>): y_0 could not be a root of P. So, π^P_k,i(c_S(k))= 0 for i=S(i_k),..,i_k+S^2(k)-S(k), and, accordingly, i_S(k)>_grlexi_k+S^2(k)-S(k). If y_0 is a simple root of P, from the point (i) and its proof there exists a lowest k_0 such that the sequence (i_k-S(k))_k∈ℕ^r is no longer strictly increasing, that is to say, such that (π^P_k_0,i_k_0)'(c_S(k_0))≠ 0. For any k≥_grlexk_0, we consider the Taylor's expansion of (∂ P/∂ y)_S(k)=(∂ P/∂ y)_k_0(c_S(k_0)+⋯+u^S^2(k)-S(k_0 )y): [ (∂ P/∂ y)_S(k)(u,y) = (π^P_k_0,i_k_0)'(c_S(k_0))u^i_k_0-S(k_0)+⋯; +[(π^P_k_0,i_k_0)”(c_S(k_0))c_S^2(k_0)+(π^P_k_0, i_k_0+S^2(k_0)-S(k_0))' (c_S(k_0))]u^i_k_0+ S^2(k_0)-S(k_0) +⋯ ] and we get that: w(∂ P/∂ y(z_S(k),0) )=w((∂ P/∂ y)_S(k)(u,0))=w((∂ P/∂ y)_S(k))=i_k_0-S(k_0). By Equation (<ref>), we obtain that w((∂ P/∂ y)_S(k))=i_S(k)-S^2(k). So, i_S(k)=i_k_0-S(k_0)+S^2(k). As every k>_grlexk_0 is the successor of some k'≥_grlexk_0, we get that for every k≥_grlexk_0, i_k-S(k)=i_k_0-S(k_0). So, finally, i_S(k)=i_k-S(k)+S^2(k) as desired. Resuming the notations of Lemma <ref>, the multi-index k_0 represents the length of the initial part in the stage of separation of the solutions. In the following lemma, we bound it using the discriminant Δ_P of P (see just before Notation <ref>). Let P(u,y) be a nonzero polynomial with _y(P)≤ d_y and with only simple roots. Let y_0=∑_n∈^rc_ nu^n, c_0≠ 0 be one of these roots. 
The multi-index k_0 of Lemma <ref> verifies that: |k_0|≤_u(Δ_P(u)). By definition of k_0 and by Formula (<ref>), for any k≥_grlexk_0, w( ∂ P/∂ y(u,z_S(k)))=w(∂ P/∂ y(u,z_S(k_0)))=i_k_0-S(k_0). So, w(∂ P/∂ y(u,y_0))=w(∂ P/∂ y(u,z_S(k_0))). Moreover, by minimality of k_0, the sequence (i_k-S(k))_k is strictly increasing up to k_0, so by Formula (<ref>): w( ∂ P/∂ y(u,y_0))=w(∂ P/∂ y(u,z_S(k_0)))=w((∂ P/∂ y)_S(k_0)(u,0))≥_grlex w((∂ P/∂ y)_S(k_0))≥_grlexk_0. So: |k_0|≤|w( ∂ P/∂ y(u,y_0))|=ord_u∂ P/∂ y(u,y_0). Since P has only simple roots, its discriminant Δ_P is nonzero and one has a Bezout identity: A(u,y)P(u,y)+B(u,y)∂ P/∂ y(u,y)=Δ_P(u) with A,B∈ K[[u]][y]. By evaluating this identity at y=y_0, we obtain that _u(∂ P/∂ y(u,y_0) )≤_u(Δ_P(u)), so |k_0|≤_u(Δ_P(u)) as desired. Resuming Notation <ref> and the content of Lemma <ref>, we set: ω_0:=(π^P_k_0,i_k_0)'(c_S(k_0)). By Formula (<ref>), we note that (∂ P/∂ y)(u,y_0)=ω_0 u^i_k_0-S(k_0)+⋯. Thus, ω_0 is the initial coefficient of (∂ P/∂ y)(u,y_0) with respect to ≤_grlex, hence ω_0≠ 0. Consider the following nonzero polynomial in K[[u]][y] of degree in y bounded by d_y: P(u,y)=∑_i∈^r∑_j=0^d_ya_i,ju^iy^j = ∑_i≥_grlex0π^P_i(y)u^i, and a formal power series which is a simple root: y_0=∑_n≥_grlex0c_nu^n ∈ K[[u]], c_0≠ 0. Resuming Notations <ref> and <ref> and the content of Lemma <ref>, recall that ω_0:=(π^P_k_0,i_k_0)'(c_S(k_0))≠ 0. Then, for any k>_grlexk_0: * either the polynomial z_S(k)=∑_n=0^S(k)c_nu^n is a solution of P(u,y)=0; * or _kR(u,y):=P_k(u,y+c_S(k))/-ω_0u^i_k=-y+ _kQ(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] defines a strongly reduced Henselian equation: y= _kQ(u,y) as in Definition <ref> and satisfied by: t_S(k):=y_0-z_S(k)/u^S(k)=c_S^2(k)u^S^2(k)-S(k)+c_S^3(k)u^S^3(k)-S(k)+⋯. We show by induction on k∈(ℕ^r,≤_grlex), k>_grlexk_0, that _kR(u,y)=-y+ _kQ(u,y) with _kQ(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] is such that w( _kQ(u,y)) >_grlex0. Let us apply Formula (<ref>) with parameter k=k_0. Since i_S(k_0)=i_k_0+S^2(k_0)-S(k_0), we have that π^P_k_0,i(c_S(k_0))=0 for i_k_0≤_grlexi<_grlexi_k_0+S^2(k_0)-S(k_0), and accordingly: P_S(k_0)(u,y)=[ω_0 y+π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0))]u^i_k_0+S^2(k_0)-S(k_0)+ _S(k_0)T(u,y) where _S(k_0)T(u,y)∈ K[[u]][y] with w( _S(k_0)T(u,y))>_grlexi_k_0+S^2(k_0)-S(k_0). Since i_S^2(k_0)=i_k_0+S^3(k_0)-S(k_0)>_grlexi_k_0+S^2(k_0)-S(k_0), we obtain that: π^P_S(k_0),i_k_0+S^2(k_0)-S(k_0)(y)=ω_0 y+π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0)) vanishes at c_S^2(k_0), which implies that c_S^2(k_0)= -π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0))/ω_0. Computing _S(k_0)R(u,y), it follows that: _S(k_0)R(u,y)=-y+ _S(k_0)Q(u,y), with _S(k_0)Q(u,y)=_S(k_0)T(u,y +c_S^2(k_0))/-ω_0u^i_k_0+S^2(k_0)-S(k_0). So _S(k_0)Q(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] with w( _S(k_0)Q(u,y))>_grlex0. Now suppose that the property holds true at a rank k≥_grlexS(k_0), which means that _kR(u,y):=P_k(u,y+c_S(k))/-ω_0u^i_k=-y+ _kQ(u,y). Therefore, for _kQ̌(u,y)=-ω_0 _kQ(u,y-c_S(k))∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] which is such that w( _kQ̌(u, y)) >_grlex0, we can write: [ P_k(u,y) = ω_0(y-c_S(k))u^i_k+ u^i_k· _kQ̌(u,y); = π^P_k,i_k(y)u^i_k+π^P_k,S(i_k)(y)u^S(i_k)+ ⋯. ] Since P_S(k)(u,y)= P_k(u,c_S(k)+u^S^2(k)-S(k)y) and i_S(k)=i_k+S^2(k)-S(k) by Lemma <ref>, we have that: P_S(k)(u,y)=[ω_0 y+π^P_k,i_k+S^2(k)-S(k)(c_S(k))]u^i_k+S^2(k)-S(k)+π^P_S(k),S(i_S(k))(y)u^S(i_S(k))+⋯. But, again by Lemma <ref>, i_S^2(k)=i_S(k)+S^3(k)-S^2(k) >_grlexi_S(k)=i_k+S^2(k)-S(k). So we must have π^P_S(k),i_S(k)(c_S^2(k))=0, i.e. 
c_S^2(k)=-π^P_k,i_k+S^2(k)-S(k)(c_S(k))/ω_0. It follows that: P_S(k)(u,y)=ω_0(y-c_S^2(k))u^i_k+S^2(k)-S(k)+π^P_S(k),S(i_S(k))(y)u^S(i_S(k))+⋯, Since, by definition, _S(k)R(u,y):=P_S(k)(u,y+c_S^2(k))/-ω_0u^i_S(k)=-y+ _S(k)Q(u,y), we get that: [ _S(k)R(u,y) = -y- π^P_S(k),S(i_S(k))(y+c_S^2(k))/ω_0u^S(i_S(k))-i_S(k)+ ⋯; = -y+ _S(k)Q(u,y), _S(k)Q∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y], ] with w( _kQ(u,y)) >_grlex0 as desired. To conclude the proof, it suffices to note that the equation _kR(u,y)=0 is strongly reduced Henselian if and only if _kQ(u,0) 0, which is equivalent to z_S(k) not being a root of P. We will need the following lemma: Let P(u,y)∈ K[[u]][y]∖{0} be a polynomial of degree _y(P)≤ d_y with only simple roots. Assume that y_0, y_1∈ K[[u]] are two distinct roots. One has that: ord_u (y_0-y_1)≤_u(Δ_P(u)). Note that the hypothesis imply that d_y≥ 2. Let us write y_1-y_0=δ_1,0 and k:=w(y_1-y_0)=w(δ_1,0)∈ℕ^r. By Taylor's Formula, we have: [ P(u,y_0+δ_1,0) = 0; = P(u,y_0)+∂ P/∂ y(u,y_0) δ_1,0+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y; = δ_1,0(∂ P/∂ y(u,y_0)+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y-1). ] Since δ_1,0≠ 0 and ∂ P/∂ y(u,y_0)≠ 0, one has that: ∂ P/∂ y(u,y_0)=-δ_1,0(1/2∂^2 P/∂ y^2(u,y_0)+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y-2) The valuation of the right hand side being at least k, we obtain that: w(∂ P/∂ y(u,y_0))≥_grlexk. But, by Lemma <ref>, we must have ord_u(∂ P/∂ y(u,y_0))≤_u(Δ_P(u)). So |k|≤_u(Δ_P(u)). For the courageous reader, in the case where y_0 is a series which is not a polynomial, we deduce from Theorem <ref> and from the generalized Flajolet-Soria's Formula <ref> a closed-form expression for the coefficients of y_0 in terms of the coefficients a_i,j of P and of the coefficients of an initial part z_k of y_0 sufficiently large, in particular for any k∈ℕ^r such that |k|≥_u(Δ_P(u))+1. Recall that i_k=w( P_k(u,y)). Note that for such a k, since y_0 is not a polynomial, by Lemma <ref>, z_S(k) cannot be a root of P. Let P(u,y)∈ K[[u]][y]∖{0} be a polynomial of degree _y(P)≤ d_y with only simple roots. Let k∈ℕ^r be such that |k|≥_u(Δ_P(u))+1. For any p>_grlex S(k), consider n:=p-S(k). Then: c_p=c_S(k)+n=∑_q=1^μ_n1/q(-1/ω_0)^q∑_|S|=q, S≥ q-1A^S(∑_|T_S|=S-q+1 g(T_S)=n+qi_k-(q-1)S(k)-g(S)e_T_SC^T_S), where μ_n is as in Theorem <ref> for the equation y= _kQ(u,y) of Theorem <ref>, S=(s_i,j)_i∈^r, j=0,…,d_y with finite support, and as in Notation <ref>, A^S=∏_i, ja_i,j^s_i,j, T_S=(t_S,i), C^T_S=∏_i=0^S(k)c_i^t_S,i, and e_T_S∈ℕ is of the form: e_T_S= ∑_(n^l,m_i,j,L)q!/∏_l =S(i_k)-i_k,…, d_yS(k)+(d_u,0,…,0)-i_k m=0,…,m_l∏_|i|=0,…,d_u j=m,…,d_y∏_|L|=j-m g(L)=l+i_k-mS(k)-in^l,m_i,j,L!∏_l=S(i_k)-i_k,…, d_y S(k)+(d_u,0,…,0)-i_k m=0,…,m_l∏_|i|=0,…,d_u j=m,…,d_y∏_|L|=j-m g(L)=l+i_k-mS(k)-i(j!/m! L!)^n^l,m_i,j,L, where we denote m_l:=min{d_y, max{m∈ℕ / mS(k)≤_grlexl +i_k}}, L=L_i,j^l,m=(l_i,j,0^l,m,…,l_i,j,S(k)^l,m), and where the sum is taken over the set of tuples (n^l,m_i,j,L)_l= S(i_k)-i_k,…,d_yS(k)+(d_u,0,…,0)-i_k, m=0,…,m_l |i|=0,…,d_u, j=m,…,d_y, |L|=j-m, g(L)=l+i_k-mS(k)-i such that: ∑_l,m∑_L n^l,m_i,j,L=s_i,j, ∑_l,m∑_i,j∑_Ln^l,m_i,j,L=q and ∑_l,m∑_i,j∑_Ln^l,m_i,j,LL= T_S. Note that the coefficients e_T_S are indeed natural numbers, since they are sums of products of multinomial coefficients because ∑_l,m∑_i,j∑_L n^l,m_i,j,L=q and m+|L|=j. In fact, 1/qe_T_S∈ℕ by Remark <ref> as we will see along the proof. 
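Before turning to the proof, let us record a univariate sanity check of the generalized Flajolet-Soria Formula of Theorem <ref> (the case r=1, where it reduces to the classical formula of <cit.>), on which the computation below ultimately rests; the example chosen here and the resulting Catalan identity are only an illustration of the notation |M|, ||M||, g(M), added for the reader's convenience, and are not used in the sequel. Consider the classical equation y=F(u,y)=u+y^2, so that a_{1,0}=a_{0,2}=1 are the only nonzero coefficients. The constraints |M|=m, ||M||=m-1 and g(M)=n force m_{1,0}=n, 2m_{0,2}=m-1 and m_{1,0}+m_{0,2}=m, hence m_{0,2}=n-1 and m=2n-1, and the formula of Theorem <ref> yields
\[
c_n=\frac{1}{2n-1}\,\frac{(2n-1)!}{n!\,(n-1)!}=\frac{1}{n}\binom{2n-2}{n-1},
\]
that is c_1=1, c_2=1, c_3=2, c_4=5,…: the Catalan numbers, in agreement with the closed form y_0=(1-\sqrt{1-4u})/2.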
We get started by computing the coefficients of ω_0u^i_k _kR, in order to get those of _kQ: [ -ω_0u^i_k _kR = P_k(u, y+c_S(k)); = P(u,z_S(k)+u^S(k)y); = ∑_i∈^r , j=0,…,d_ya_i,ju^i(z_S(k)+u ^S(k)y)^j; = ∑_i∈^r , j=0,…,d_ya_i,ju^i∑_m=0^jj!/m! (j-m)!z_S(k)^j-mu^mS(k)y^m. ] For L=(l_0,⋯,l_S(k)), we denote C^L:=c_0^l_0⋯ c_S(k)^l_S(k). One has that: z_S(k)^j-m=∑_|L|=j-m(j-m)!/L!C^Lu^g(L). So: -ω_0u^i_k _kR=∑_m=0^d_y∑_i∈^r j=m,…,d_ya_i,j∑_|L|=j-mj!/m! L!C^Lu^g(L) +mS(k)+i y^m. We set l̂=g(L)+mS(k)+i. It verifies: l̂≥ mS(k). Thus: -ω_0u^i_k _kR=∑_m=0,…,d_y ∑_l̂ ≥ mS(k)∑_i ≤ l̂- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l̂-mS(k)-ij!/m! L!C^Lu^l̂y^m. Since _kR(u,y)=-y+ _kQ(u,y) with w( _kQ(u,y))>_grlex0, the coefficients of _kQ are obtained for l̂≥_grlexS(i_k). We set l:=l̂-i_k and m_l:=min{d_y, max{m∈ℕ / mS(k)≤l +i_k}}. We obtain: _kQ(u,y)=∑_l ≥_grlex S(i_k)-i_k m=0,…,m_lb_l,mu^ly^m, with: b_l,m=-1/ω_0∑_i ≤ l+i_k- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L. According to Lemma <ref>, Theorem <ref> and Lemma <ref>, we are in position to apply the generalized Flajolet-Soria's Formula of Theorem <ref> in order to compute the coefficients of the solution t_S(k)=c_S^2(k)u^S^2(k)-S(k)+c_S^3(k)u^S^3(k)-S(k)+⋯. Thus, denoting B:=(b_l,m), Q:=(q_l,m) with finite support and B^Q:=∏_l,m b_l,m^q_l,m for l≥_grlexS(i_k)-i_k and m=0,…,m_l, we obtain for n>_grlex0: c_S(k)+n=∑_q=1^μ_n1/q∑_|Q|=q, Q=q-1 , g(Q)=nq!/Q!B^Q. As in Remark <ref> (1), the previous sum is finite, and as in Remark <ref> (2), we have 1/q·q!/Q!∈ℕ. Let us compute: [ [ b_l,m^q_l,m = (-1/ω_0)^q_l,m(∑_i ≤ l+i_k- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l +i_k-mS(k)-ij!/m! L!C^L)^q_l,m; = (-1/ω_0)^q_l,m∑_|M_l,m|=q_l,mq_l,m!/M_l,m!A^M_l,m∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)- ij!/m! L!C^L)^m^l,m_i,j ]; where M_l,m=(m^l,m_i,j) for i≤l+i_k- mS(k) , j=0,…,d_y and m^l,m_i,j=0 for j<m. ] Note that, in the previous formula, (-ω_0)^q_l,mb_l,m^q_l,m is the evaluation at A and C of a polynomial with coefficients in ℕ. Since 1/q·q!/Q!∈ℕ, the expansion of (-ω_0)^q1/q·q!/Q!B^Q as a polynomial in A and C will only have natural numbers as coefficients. Let us expand the expression ∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L)^m^l,m_i,j. For each (l,m,i,j), we enumerate the terms j!/m! L!C^L with h=1,…,α_i,j^l,m. Subsequently: [ (∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L)^m^l,m_i,j = (∑_h=1^α_i,j^l,mj!/m! L_i,j,h^l,m!C^L_i,j,h^l,m)^ m^l,m_i,j; = ∑_|N^l,m_i,j|=m^l,m_i,jm^l,m_i,j!/N^l,m_i,j!( ∏_h=1^α_i,j^l,m(j!/m! L_i,j,h^l,m!)^ n^l,m_i,j,h) C^∑_h=1^α^l,m_i,j n^l,m_i,j,hL_i,j,h^l,m, ] where N^l,m_i,j= (n^l,m_i,j,h)_h=1,…,α_i,j^l,m, N^l,m_i,j!= ∏_h=1^α_i,j^l,m n^l,m_i,j,h!. Denoting H_l,m=(h^l,m_0,…,h^l,m_S(k)):= ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m, one computes: [ |H_l,m| = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,h|L_i,j,h^l,m|; = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_h=1^α_i,j^l,m n^l,m_i,j,h)(j-m); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_ym^l,m_i,j(j-m); = M_l,m-m q_l,m. ] Likewise, one computes: [ g(H_l,m) = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,hg(L_i,j,h^l,m); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_h=1^α_i,j^l,m n^l,m_i,j,h)(l+i_k-mS(k)-i); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_ym^l,m_i,j(l+i_k-mS(k)-i); = q_l,m[l+i_k-mS(k)]-g(M_l,m). ] So, according to Formula (<ref>) and the new way of writing the expression ∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! 
L!C^L)^m^l,m_i,j, we obtain: [ b_l,m^q_l,m = (-1/ω_0)^q_l,m∑_|M_l,m|=q_l,mA^M_l,m∑_|H_l,m|=M_l,m-m q_l,m g(H_l,m)=q_l,m[l+i_k-mS(k)]-g(M_l,m) d_H_l,mC^H_l,m; with d_H_l,m:=∑_(N^l,m_i,j)q_l,m!/∏_i ≤ l+i_k- mS(k) j=m,…,d_yN^l,m_i,j!∏_i ≤ l+i_k- mS(k) j=m,…,d_y∏_h=1^α_i,j^l,m(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h, ] where the sum is taken over {(N^l,m_i,j)_i ≤ l+i_k- mS(k) j=m,…,d_y such that |N^l,m_i,j|=m^l,m_i,j and ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,hL_i,j,h^l,m=H_l,m}. Note that, if the latter set is empty, then d_H_l,m=0. Recall that we consider Q:=(q_l,m) with finite support and such that |Q|=q, Q=q-1 and g(Q)=n. We deduce that: [ B^Q = ∏_l ≥_grlexS(i_k)-i_k m=0,…,m_lb_l,m^q_l,m; = (-1/ω_0)^q∏_l,m[∑_|M_l,m|=q_l,mA^M_l,m∑_|H_l,m|=M_l,m-m q_l,mH_l,m=q_l,m(l+i_k-mS(k))-g(M_l,m)d_H_l,mC^H_l,m]. ] Now, in order to expand the latter product of sums, we consider the corresponding sets: 𝒮_Q:={∑_l,mM_l,m / ∃ (M_l,m) s.t. |M_l,m|=q_l,m and ∀l,m, m^l,m_i,j=0 for j<m or i ≰ l+i_k- mS(k)} and, for any S∈𝒮_Q, ℋ_Q,S:={(H_l,m) / ∃ (M_l,m) s.t. |M_l,m|=q_l,m and ∀l,m, m^l,m_i,j=0 for j<m or i ≰ l+i_k- mS(k), . . ∑_l,mM_l,m=S, |H_l,m|=M_l,m-m q_l,m and g(H_l,m)=q_l,m(l+i_k-mS(k))-g(M_l,m) /} and 𝒯_Q,S:={∑_l,mH_l,m / (H_l,m)∈ℋ_Q,S}. We have: [ B^Q = (-1/ω_0)^q∑_S∈𝒮_QA^S∑_T_S∈𝒯_Q,S(∑_(H_l,m)∈ℋ_Q,S∑_l,mH_l,m=T_S∏_l,m d_H_l,m) C^T_S; = (-1/ω_0)^q∑_S∈𝒮_QA^S∑_T_S∈𝒯_Q,Se_Q,T_SC^T_S. ] where : e_Q,T_S:= ∑_(N^l,m_i,j)∏_l,mq_l,m!/∏_l,m∏_i,jN^l,m_i,j!∏_l,m∏_i,j∏_h(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h and where the previous sum is taken over: ℰ_Q,T_S:={( N^l,m_i,j)_l ≥_grlexS(i_k)-i_k, m=0,…,m_li ≤ l+i_k- mS(k), j=m,…,d_y / ∀i,j, ∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h=s_i,j, . . ∀l,m, ∑_i,j|N^l,m_i,j|=q_l,m, and ∑_l,m∑_i, j∑_h=1^α_i,j^l,m n^l,m_i,j,hL_i,j,h^l,m =T_S}. Note that, if the latter set is empty, then e_Q,T_S=0. Observe that 1/qq!/Q!e_Q,T_S lies in ℕ as a coefficient of (-ω_0) ^q1/qq!/Q!B^Q as seen before. Note also that, for any Q and for any S∈𝒮_Q, |S|=∑_l,mq_l,m=q and S≥∑_l,mmq_l,m=Q=q-1. Moreover, for any T_S∈𝒯_Q,S: [ |T_S| = ∑_l,mM_l,m-m q_l,m; = S-Q; = S-q+1 ] and: [ g(T_S) = ∑_l,mq_l,m(l+i_k-mS(k))-g(M_l,m); = g(Q)+|Q| i_k-Q S(k)-g(S); = n+q i_k-(q-1) S(k)-g(S). ] Let us show that: [ ∑_|Q|=q, Q=q-1, g(Q)=nq!/Q!B^Q = (-1/ω_0)^q∑_|S|=q, S≥ q-1A^S∑_|T_S|=S-q+1 g(T_S)=n+qi_k-(q-1)S(k)-g(S)e_T_SC^T_S, ] where e_T_S:=∑_(N^l,m_i,j)q!/∏_l,m∏_i,jN^l,m_i,j!∏_l,m∏_i,j∏_h(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h and where the sum is taken over ℰ_T_S:={(N^l,m_i,j)_l ≥_grlexS(i_k)-i_k, m=0,…,m_li ≤ l+i_k- mS(k), j=m,…,d_y s.t. ∑_l,m∑_h n^l,m_i,j,h=s_i,j, ∑_l,m∑_i,j|N^l,m_i,j|=q. . and ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m=T_S}. Note that, if the latter set is empty, then e_T_S=0. Recall that N^l,m_i,j!= ∏_h=1^α_i,j^l,m n^l,m_i,j,h! and that the L^l,m_i,j,h's enumerate the L's such that |L|=j-m and g(L)=l+i_k-m S(k)-i for given l,m,i,j. Let us consider S and T_S such that |S|=q, S≥ q-1, |T_S|=S-q+1, g(T_S)=n+qi_k-(q-1)S(k)-g(S) and such that ℰ_T_S≠∅. Take an element ( n^l,m_i,j,h)∈ℰ_T_S. Define m^l,m_i,j:=∑_h=1^α_i,j^l,m n^l,m_i,j,h for each i, j, l, m with j≥ m, and m^l,m_i,j:=0 if j<m or i ≰ l+i_k- mS(k). Set M_l,m:=(m^l,m_i,j)_i,j for each l, m. So, ∑_l,mm^l,m_i,j=∑_l,m∑_h=1^α_i,j^l,m n^l,m_i,j,h=s_i,j, and S=∑_l,mM_l,m. Define q_l,m:=∑_i,jm^l,m_i,j=|M_l,m| for each l, m, and Q:=(q_l,m). Let us show that |Q|=q, g(Q)=n and Q=q-1. By definition of ℰ_T_S, |Q|:=∑_l,mq_l,m= ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h=q. Recall that Q:=∑_l,mmq_l,m. 
We have: [ |T_S|= |∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m|=S -q+1; ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h|L_i,j,h^l,m|=∑_i,jjs_i,j-q+1; ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h(j-m)= ∑_i,jjs_i,j-q+1; ⇔ ∑_i,j j∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h- ∑_l,mm∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h= ∑_i,jjs_i,j-q+1; ⇔ ∑_i,j js_i,j-∑_l,mmq_l,m =∑_i,jjs_i,j-q+1; ⇔ Q=q-1. ] Recall that g(Q):=∑_l,mq_l,ml. We have: [ g(T_S)= g(∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,u ^l,m)=n+q i_k -(q-1)S(k) -g(S); ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hg(L_i,j,h^l,m)=n+q i_k -(q-1)S(k) -g(S); ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h(l+i_k -mS(k) -i)= n+q i_k -(q-1)S(k) - g(S); ⇔ [ ∑_l,ml∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h+ i_k∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h -S(k) ∑_l,mm∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h; -∑_i,ji∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h=n+q i_k -(q-1)S(k) -g(S); ]; ⇔ ∑_l,mq_l,ml+q i_k-S(k) ∑_l,mm q_l,m-∑_i,j s_i,ji= n+q i_k -(q-1)S(k) -g(S); ⇔ g(Q)+q i_k-QS(k)-g(S)=n+q i_k -(q-1)S(k) -g(S). ] Since Q=q-1, we deduce that g(Q)=n as desired. So, S∈𝒮_Q for Q as in the left-hand side of (<ref>). Now, set H_l,m:=∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m, so ∑_l,mH_l,m=T_S. Let us show that (H_l,m)∈ℋ_Q,S, which implies that T_S∈𝒯_Q,S as desired. The existence of (M_l,m) such that |M_l,m|=q_l,m and m^l,m_i,j=0 for j<m and ∑_l,mM_l,m=S follows by construction. Conditions |H_l,m|=M_l,m-m q_l,m and g(H_l,m)=q_l,m[l+i_k-mS(k)]-g(M_l,m) are obtained exactly as in (<ref>) and (<ref>). This shows that (n^l ,m_i,j,h) ∈ℰ_Q,T_S, so: ℰ_T_S⊆_|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S. The reverse inclusion holds trivially since |Q|=q, so: ℰ_T_S = _|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S. We deduce that: e_T_S=∑_|Q|=q, g(Q)=n, Q=q-1q!/Q! e_Q,T_S. We conclude that any term occuring in the right-hand side of (<ref>) comes from a term from the left-hand side. Conversely, for any Q as in the left-hand side of Formula (<ref>), S∈𝒮_Q and T_S∈𝒯_Q,S verify the following conditions: |S|=q, S≥ q-1, |T_S|=S-q+1 , T_S=n+q i_k-(q-1)S(k)-g(S) and ℰ_T_S = _|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S, e_T_S=∑_|Q|=q, g(Q)=n, Q=q-1q!/Q! e_Q,T_S. Hence, any term occuring in the expansion of B^Q contributes to the right hand side of Formula (<ref>). Thus we obtain Formula (<ref>) from which the statement of Corollary <ref> follows. Note also that: 1/qe_T_S=∑_|Q|=q, g(Q)=n, Q=q-11/qq!/Q! e_Q,T_S, so 1/qe_T_S∈. We have seen in Theorem <ref> and its proof (see Formula (<ref>) with k=k_0) that ω_0=(π^P_k_0,i_k_0)'(c_S(k_0)) is the coefficient of the monomial u^i_S(k_0)y in the expansion of P_S(k_0)(u,y)=P(u,c_0u_r+⋯+c_S(k_0)u^S(k_0)+ u^S^2(k_0)y), and that c_S^2(k_0)=-π^P_k_0,i_S(k_0)(c_S(k_0))/ω_0 where π^P_k_0,i_S(k_0)(c_S(k_0)) is the coefficient of u^i_S(k_0) in the expansion of P_S(k_0)(u,y). Expanding P_S(k_0)(u,y), having done the whole computations, we deduce that: {[ ω_0 = ∑_i ≤ l+i_k- mS(k), j=1,..,d_y ∑_|L|=j-1, g(L)=i_k_0-S(k_0)-ij!/L!a_i,jC^L ;; c_S^2(k_0) = -1/ω_0∑_i ≤ l+i_k- mS(k), j=0,..,d_y ∑_|L|=j, g(L)=i_S(k_0)-i j!/L!a_i,jC ^L, ]. where C:=(c_0,…,c_S(k_0)) and L:=(l_0,…,l_S(k_0)). amsalpha'' EKM+01[Abh56]abh:val-cent-local-domain S. S. Abhyankar, On the valuations centered in a local domain, Amer. J. Math. 78 (1956), 321–348. [ADR22]aroca-decaup-rond:support-alg-laurent-series F. Aroca, J. Decaup, and G. Rond, The minimal cone of an algebraic Laurent series, Math. Ann. 382 (2022), no. 3-4, 1745–1773 (English). [AI09]aroca-ilardi:puiseux-multivar F. Aroca and G. Ilardi, A family of algebraically closed fields containing polynomials in several variables, Comm. Algebra 37 (2009), no. 
4, 1284–1296. 2510985 (2010f:12008)[AR19]aroca-rond:support-alg-series F. Aroca and G. Rond, Support of Laurent series algebraic over the field of formal power series, Proc. Lond. Math. Soc. (3) 118 (2019), no. 3, 577–605. [EKM+01]evans-al:tot-ord-commut-monoids K. Evans, M. Konikoff, J. J. Madden, R. Mathis, and G. Whipple, Totally ordered commutative monoids, Semigroup Forum 62 (2001), no. 2, 249–278. [EP05]engler-prestel:valued-fields A. J. Engler and A. Prestel, Valued fields, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2005. [FS97]flajolet-soria:coeff-alg-series P. Flajolet and M. Soria, Coefficients of algebraic series, Algorithms seminar 1997-1998, Tech. Report, INRIA, 1997. [GP00]gonzalez-perez_singul-quasi-ord P. D. González Pérez, Singularités quasi-ordinaires toriques et polyèdre de Newton du discriminant, Canad. J. Math. 52 (2000), no. 2, 348–368. [Hah07]hahn:nichtarchim H. Hahn, Über die nichtarchimedischen Grössensystem, Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften, Mathematisch - Naturwissenschaftliche Klasse (Wien) 116 (1907), no. Abteilung IIa, 601–655. [Hen64]henrici:lagr-burmann P. Henrici, An algebraic proof of the Lagrange-Bürmann formula, J. Math. Anal. Appl. 8 (1964), 218–224. [HM17]hickel-matu:puiseux-alg M. Hickel and M. Matusinski, On the algebraicity of Puiseux series, Rev. Mat. Complut. 30 (2017), no. 3, 589–620. [HM19]hickel-matu:puiseux-alg-multivar M. Hickel and M. Matusinski, About algebraic Puiseux series in several variables, J. Algebra 527 (2019), 55–108. [KKS23]kuhlmann-krapp-serra:generalised-LRR L. S. Krapp, S. Kuhlmann, and M. Serra, Generalised power series determined by linear recurrence relations, 2023, Arxiv: arxiv.org/abs/2206.04126. [Leg30]legendre:theorie-nbres A.-M. Legendre, Théorie des nombres t.1, Firmin-Didot (Paris), 1830. [McD95]mcdonald_puiseux-multivar J. McDonald, Fiber polytopes and fractional power series, J. Pure Appl. Algebra 104 (1995), no. 2, 213–233. [Neu49]neumann:ord-div-rings B. H. Neumann, On ordered division rings, Trans. Amer. Math. Soc. 66 (1949), 202–252. [PR12]parusinski-rond:abhyankar-jung A. Parusiński and G. Rond, The Abhyankar-Jung theorem, J. Algebra 365 (2012), 29–41. [Ray74]rayner_puiseux-multivar F. J. Rayner, Algebraically closed fields analogous to fields of Puiseux series, J. London Math. Soc. (2) 8 (1974), 504–506. [Rib92]rib:series-fields-alg-closed P. Ribenboim, Fields: algebraically closed and others, Manuscripta Math. 75 (1992), no. 2, 115–150. [RvdD84]rib-vdd_ratio-funct-field P. Ribenboim and L. van den Dries, The absolute Galois group of a rational function field in characteristic zero is a semidirect product, Canad. Math. Bull. 27 (1984), no. 3, 313–315. [Saf00]safonov:algebraic-power-series K. V. Safonov, On power series of algebraic and rational functions in C^n, J. Math. Anal. Appl. 243 (2000), no. 2, 261–277. [Sat83]sathaye:newt-puiseux-exp_abh-moh-semigr A. Sathaye, Generalized Newton-Puiseux expansion and Abhyankar-Moh semigroup theorem, Inventiones Mathematicae 74 (1983), 149–157, 10.1007/BF01388535. [Sin80]singmaster:binomial-multinomial D. Singmaster, Divisibility of binomial and multinomial coefficients by primes and prime powers, A collection of manuscripts related to the Fibonacci sequence, Fibonacci Assoc., Santa Clara, Calif., 1980, pp. 98–113. [Sok11]sokal:implicit-function A. D. Sokal, A ridiculously simple and explicit implicit function theorem, Sém. Lothar. Combin. 61A (2009/11), Art. B61Ad, 21. [SV06]soto-vicente:polyhedral-cones M. J. Soto and J. 
L. Vicente, Polyhedral cones and monomial blowing-ups, Linear Algebra Appl. 412 (2006), no. 2-3, 362–372. [SV11]soto-vicente_puiseux-multivar, The Newton procedure for several variables, Linear Algebra Appl. 435 (2011), no. 2, 255–269. 2782778[Wal78]walker_alg-curves R. J. Walker, Algebraic curves, Springer-Verlag, New York, 1978, Reprint of the 1950 edition. [Wil19]wilczynski:alg-power-series E. J. Wilczynski, On the form of the power series for an algebraic function., Am. Math. Mon.26 (1919), 9–12 (English).
http://arxiv.org/abs/2307.06068v1
20230712103829
On the renormalization of non-polynomial field theories
[ "Andrea Santonocito", "Dario Zappala" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2307.04562v1
20230710135124
Full-F Turbulent Simulation in a Linear Device using a Gyro-Moment Approach
[ "B. J. Frei", "J. Mencke", "P. Ricci" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
AIP/123-QED ]Full-F Turbulent Simulation in a Linear Device using a Gyro-Moment Approach [email protected] Ecole Polytechnique Fédérale de Lausanne (EPFL), Swiss Plasma Center, CH-1015 Lausanne, Switzerland Max-Planck-Institut für Plasmaphysik, D-85748 Garching, Germany Ecole Polytechnique Fédérale de Lausanne (EPFL), Swiss Plasma Center, CH-1015 Lausanne, Switzerland Ecole Polytechnique Fédérale de Lausanne (EPFL), Swiss Plasma Center, CH-1015 Lausanne, Switzerland The first full-F and turbulent simulations based on the Gyro-Moment (GM) are presented by considering a linear device configuration with open and straight field lines. The simulations are based on a simplified version of the gyrokinetic (GK) model proposed by B. J. Frei et al. [J. Plasma Phys. 86, 905860205 (2020)]. By focusing on the electrostatic and long-wavelength limit, a full-F GM hierarchy equation is derived to evolve the ion dynamics, which includes a nonlinear Dougherty collision operator, localized sources, and Bohm sheath boundary conditions. An electron fluid Braginskii model is used to evolve the electron dynamics, coupled to the full-F ion GM hierarchy equation via a vorticity equation. A set of full-F turbulent simulations is performed using the parameters of the LAPD experiments with different numbers of GMs and regimes of collisionality. The GM results (time-averaged profiles and turbulent properties) are compared with those from two-fluid Braginskii simulations, finding good qualitative agreement. Furthermore, the ion distribution function is analyzed, showing the good convergence properties of the GM approach. [ P. Ricci August 12, 2023 =================== § INTRODUCTION Despite recent progress in the development of gyrokinetic (GK) codes, such as <cit.>, <cit.>, <cit.> and <cit.>, extending the GK model from the core to the boundary remains challenging since it requires dealing with a wide range of collisionality, order-one fluctuations across various scales, complex magnetic field geometry, steep pressure gradients and the interaction of the plasma with the wall. As a consequence, less computationally demanding tools such as fluid simulations (see, e.g., Refs. Stegmeir2018,De2022,Giacomin2022) based on the drift-reduced Braginskii model <cit.>, are used to simulate the plasma dynamics in the boundary. However, the validity of a fluid approach remains limited to the collisional region of the boundary, namely the scrape-off layer (SOL), as the fluid modeling lacks kinetic effects. To tackle the challenges of the boundary region, an approach is formulated in Ref. Frei2020 based on the Hermite-Laguerre expansion of the full (full-F) distribution function, which is referred to as the gyro-moment (GM) approach. This approach features kinetic effects <cit.>, which are absent in Braginskii-like fluid models, and collisional effects modeled using advanced collision operators <cit.>. So far, investigations based on the GM approach are limited to the δ f regime, where only the a priori small deviation of the distribution function from thermal equilibrium is evolved <cit.>. To the knowledge of the authors, this work presents the first full-F turbulent results using a moment approach. In particular, we focus on simulations of plasma turbulence in a linear plasma device. 
Linear plasma devices, such as LAPD <cit.>, HelCat <cit.>, and RAID <cit.>, are experiments that allow for the investigation of basic plasma phenomena in a simplified magnetic geometry characterized by the absence of magnetic gradients, curvature, and shear <cit.>. Despite their simplicity and the lack of kinetic effects such as trapped electrons, linear plasma devices share some of the most important physical processes that occur in the boundary of magnetic confinement devices. In fact, similar to the boundary, the turbulent dynamics in a linear plasma device result from the interplay of cross-field transport, parallel flows to the magnetic field, and plasma losses at the end plates where a sheath forms due to plasma-wall interactions. At the same time, the straight magnetic field lines in these devices facilitate the development of new modeling tools, compared to complex magnetic geometry characterizing the boundary of fusion devices. The modeling in these devices is also simplified by the perpendicular incidence of the magnetic field lines to the wall of the machine, which simplifies the sheath boundary model compared to an oblique incidence <cit.> and by the low plasma temperatures comparable to typical SOL values (e.g., T_i ≲ T_e ∼ 6 eV in typical LAPD discharges <cit.>), which are ideal for applying the full-F GM approach. Indeed, the low plasma temperature allows for a direct comparison of the GM approach with fluid simulations <cit.>, valid in the collisional conditions often met in, e.g., LAPD experiments. By focusing on the drift-kinetic (or long-wavelength) and electrostatic limit of the GK equations <cit.>, a linear plasma device configuration is chosen to perform the first full-F GK simulation in open field lines with the code that uses a discontinuous-Galerkin approach to discretize the velocity-space in Ref. Shi2017. LAPD turbulent simulations using the GK code are also reported in Ref. Pan2018, based on the same physical model. Linear plasma devices provide, therefore, an ideal testbed to perform the first full-F turbulent simulations using the GM approach. In this work, we consider a simplified version of the full-F GM model derived in Ref. Frei2020. In particular, we focus on the long-wavelength and electrostatic limit of the GK model to describe the ion dynamics, with ion-ion collisions modeled using a simple nonlinear Dougherty <cit.> collision operator (similar to the one used in Refs. Shi2017,Pan2018). On the other hand, electrons are assumed collisional, such that their dynamics can be approximated by the drift-reduced Braginskii model <cit.>. In contrast to previous GK simulations of linear devices <cit.>, the ion GK equation is solved within the GM approach where the full ion distribution function F_i is expanded on a Hermite and Laguerre polynomial basis. A parallel (to the magnetic field) velocity-space coordinate shifted by the local ion parallel fluid velocity and the adiabatic invariant are used to describe efficiently sonic ion parallel flows near the end plates where the sheath forms. A full-F ion GM hierarchy equation for the expansion coefficients is then derived. The ion full-F GM hierarchy equation and the fluid electron model are coupled through a vorticity equation. To incorporate the losses at the end plates, Bohm sheath boundary conditions <cit.> are implemented in the parallel direction, which are equivalent to the ones used in the previous Braginskii simulation of LAPD <cit.>. 
Nonlinear simulations of LAPD are then performed with various numbers of GMs. For comparison, a set of nonlinear turbulent simulations are also performed using the two-fluid drift-reduced Braginskii equations <cit.> (or simply Braginskii model), similarly to Refs. Rogers2010,Fisher2015, and using a reduced cold-ion model derived from the full-F ion GM hierarchy. The present results demonstrate that the full-F GM approach properly describes fluctuations in an open-field line geometry. A detailed analysis shows that turbulence, driven by a long perpendicular wavelength Kelvin-Helmoltz instability, is in qualitative agreement with the Braginskii model. Our results are weakly dependent on the number of GMs used in the simulations and on the collisional regime because of the absence of strong kinetic effects in LAPD. The analysis of the velocity-space representation of the ion distribution function demonstrates that the amplitude of the GMs decays rapidly with the order of the polynomial when collisions are considered. On the other hand, a larger number of GMs is necessary to describe deviations from thermal equilibrium at lower collisionality than LAPD. This investigation also reveals that a simple closure based on the truncation of the GM hierarchy is sufficient in our case and has little effect on turbulence. It is important to note that the purpose of these simulations is not to achieve a highly-fidelity and realistic description of LAPD turbulence, but rather to establish confidence in the applicability of the GM approach in full-F turbulent calculations. Furthermore, a direct comparison with LAPD experimental data <cit.> and with previous GK simulations <cit.> falls outside the scope of our study, but will be addressed in future work. The paper is structured as follows. In sec:linearplasmadevicemodel, we derive the ion full-F GM hierarchy equation in a straight magnetic field and introduce the electron fluid model, as well as the two-fluid drift-reduced Braginskii model. The numerical implementation of the full-F GM hierarchy equation is detailed in sec:numericalimplementation. The results of the first full-F GM turbulent simulations are presented in sec:turbulentsimulations, which includes a detailed comparison with the Braginskii simulations and an analysis of the ion distribution function. We conclude in sec:conclusion. § LINEAR PLASMA DEVICE MODEL In this section, we derive the full-F GM hierarchy equation for the ion dynamics by expanding the ion distribution function onto a Hermite and Laguerre polynomials basis. The hierarchy includes particle and energy sources and a simple nonlinear long-wavelength Dougherty collision operator. A reduced cold-ion model is also considered for comparison purposes, which is obtained analytically from the full-F GM hierarchy in the T_i≪ 1 limit. For the electron dynamics, the Braginskii fluid equations are used to evolve the electron density n_e, parallel velocity U_ e, and temperature T_e. A vorticity equation is derived for the electrostatic potential ϕ, which couples the ion and electron models. Finally, we present the two-fluid Braginskii model. The simple magnetic geometry in a linear plasma device allows us to introduce a simple coordinate system. In particular, assuming a rectangular shape of the linear device cross-section, we define the cartesian coordinate system (x,y,z), such that the (x,y) coordinates describe the plane perpendicular to , while z is the coordinate along the magnetic field lines. 
The height and width of the perpendicular cross-section are L_x and L_y, respectively, and the length of the linear plasma device is L_z. The magnetic field can simply be written as = × A = B z, where A is the constant magnetic vector potential and z is the unit vector pointing along the axis of the linear device. This section is structured as follows. We describe the ion full-F model in subsec:iongkmodel and we derive the ion full-F GM hierarchy equation in subsec:fullFhierarchy. A presentation of the reduced cold-ion model is then obtained from the GM hierarchy equation in subsec:coldion. The fluid electron model follows in subsec:electronbraginskii and the vorticity equation is derived in subsec:vorticityequation. subsec:braginskii describes the two-fluid Braginskii model used for comparison purposes and, finally, subsec:bc details the Bohm sheath boundary conditions we use in our simulations. §.§ Ion full-F model Focusing on the electrostatic and long-wavelength limits with constant and straight magnetic field lines, the ion one-form Γ_i <cit.> expressed in the gyrocenter coordinates Z = ( R, μ, v_∥, θ), where R = x x + y y + z z is the gyrocenter position, μ = m_i v_⊥^2/(2 B) is the magnetic moment and v_∥ = b · v is the velocity parallel to the magnetic field with b = B / B, reduces to Γ_i( R, μ, v_∥,t)= q_i A_i^* ·- μ R/Ω_iθ̇- m_i v_∥^2 / 2 - q_i Φ_i, with q_i Φ_i =q_i ϕ + μ B and q_i A_i^* = q_i + m_i v_∥ b. In (<ref>), the electrostatic potential ϕ is evaluated at the gyrocenter position, i.e. ϕ = ϕ( R ), such that ion FLR effects are neglected. From (<ref>), we deduce the ion equations of motion, Ṙ = b v_∥ + × b/B , v̇_∥ = q_i/m_i b · E , and μ̇=0, with v_∥ = · b and = -ϕ the electric field, being = x ∂_x + y ∂_y + z ∂_z. (<ref>) describes the parallel streaming along the magnetic field lines and the perpendicular drift due to the × velocity, while (<ref>) represents the acceleration in the parallel direction associated with the electric field E. Using the equations of motion given in (<ref>), the evolution equation of the full-F (gyrophase-independent) ion distribution function, F_i = F_i ( R, μ, v_∥, t), in the long-wavelength limit of the electrostatic GK ion Boltzmann equation <cit.> is given by ∂/∂ t( _i F_i ) + ·( _i F_i ) + ∂/∂ v_∥( _i F_i v̇_∥) = _i _i + _i S_i, where _i = B / m_i is the gyrocenter phase-space Jacobian, which is a constant in the case of linear devices. On the right-hand side of (<ref>), S_i = S_i( R, μ , v_∥) = S_N + S_E model particle (S_N) and energy (S_E) sources and are defined by <cit.> S_N = 𝒜_N F_Mi, S_E = 𝒜_E ( s_ i^2 + x_i - 3/2) F_Mi, respectively. In (<ref>), the functions 𝒜_N = 𝒜_N(x,y) and 𝒜_E = 𝒜_E (x,y) describe the spatial localization of the sources that mimic, for instance, the ionization processes due to fast electrons and of fast ions <cit.>. We remark that in previous fluid investigations of LAPD, the low ion temperature assumption (T_i ≪ T_e) is used and the ion energy source is neglected. In the present work, we consider a finite ion energy source, S_E. We assume that these sources have a uniform, localized, and top-hat-like shape in the case of the LAPD experiment. 
For instance, 𝒜_N (x,y) is given by <cit.> 𝒜_N (x,y) = 𝒜_N0 0.5 [ 1 - tanh( r -r_s/L_s) ] + 𝒜_N ∞, where r = √(x^2 + y^2) is the perpendicular distance from the center of the device (r =0), r_s is the radial extent of the plasma source, L_s > 0 is its typical source decay scale length, 𝒜_N0 is a positive and constant coefficient, which represents the particle fuelling rate near the center of the device, while 𝒜_N ∞ represents a small positive and constant particle source away from r ∼ r_s added for numerical reasons, in particular, to avoid regions of negative plasma density. Similar definitions for 𝒜_E, 𝒜_E0 and 𝒜_E∞ are used. We remark that the effects of neutral ionization, fast ions, and the presence of localized sources (not uniform in z) near the end plates are neglected in the present work. In (<ref>), we also introduce a shifted Maxwellian distribution function defined by F_Mi = N_i/π^3/2 v_Ti^3 e^- s_∥ i^2 e^- x_i, with the parallel and shifted normalized velocity-space coordinate, s_∥ i= (v_∥ - U_∥ i)/ v_Ti with v_Ti^2 = 2 T_i0 /m_i (T_i0 is the reference constant ion temperature) and U_∥ i = b · u_i = ∫ d v F_i v_∥ / N_i the ion parallel fluid velocity, and the perpendicular velocity-space coordinate x_i = μ B /T_i0. The choice of using the shifted parallel velocity-space coordinate, s_ i, is motivated in subsec:GMspectrum. Finally, the term _i in (<ref>) is a full-F and nonlinear collision operator model describing ion-ion collisions. In particular, we use a long-wavelength Dougherty collision operator <cit.>, given by _i = ν_i∂/∂ v·[ 2 T_i/m_i∂/∂ vF_i- ( v - u_i) F_i ], where T_i = ∫ d F_i m_i ( v - u_i)^2 /( 3N_i) and u_i = ∫ d F_i v / N_i are the ion temperature and mean fluid velocity, and ν_i = 4 √(π) N_i q_i^4 lnΛ/(3 m_i^1/2 T_i0^3/2) is the ion-ion collision frequency, which is constant in the present work. The effects of ion-electron collisions are neglected in (<ref>) since they occur on a time scale larger by, at least, a factor proportional to √(m_i / m_e) than ion-ion collisions. (<ref>) is equivalent to the ion GK model used in previous GK turbulent simulations of LAPD, implemented in the <cit.> and in the <cit.> codes. Both implementations use the same nonlinear Dougherty collision operator for ion-ion collisions (given in (<ref>)) and neglect ion-electron collisions. The code employs a discontinuous-Galerkin approach and uses a finite-volume method to discretize the velocity-space coordinates (v_∥, μ), while our work uses the GM approach to simulate the full-F ion distribution function F_i. To our knowledge, this is the first time such a moment approach is applied to perform nonlinear full-F turbulent simulations. §.§ Full-F ion GM Hierarchy Equation Following Ref. Frei2020, we perform the GM expansion of the full-F ion distribution function, F_i. More precisely, we expand F_i onto a set of Hermite (H_p) and Laguerre (L_j) velocity-space polynomials <cit.>, such that F_i = ∑_p=0^∞∑_j=0^∞^pjH_p(s_∥ i) L_j(x_i)/√(2^p p!)F_Mi/N_i, where ^pj are the ion GMs, evaluated by using the Hermite and Laguerre orthogonality relations <cit.> ^pj = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μB/m_i F_i H_p(s_ i) L_j(x_i)/√(2^p p!). By introducing the GM projector pjχ = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μχB/m_i F_i H_p(s_ i) L_j(x_i)/√(2^p p!). with χ = χ( R, μ, v_∥, t) being an arbitrary gyrocenter phase-space function, we find 𝒩^pj = pj1 from (<ref>). 
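As a purely illustrative complement to the projection defined above, the following Python sketch evaluates the GMs ^pj of a prescribed gyrocenter distribution function by Gauss-Hermite and Gauss-Laguerre quadratures in the (s_∥ i, x_i) coordinates. It is a toy under our own conventions (the constant prefactor absorbed into g, the quadrature orders, and the Maxwellian test case are assumptions of this sketch) and is not the velocity-space discretization used in the code described in sec:numericalimplementation.

```python
# Minimal sketch (not the production code) of the gyro-moment projection:
#   N^{pj} = int ds int dx  exp(-s^2 - x) g(s, x) H_p(s) L_j(x) / sqrt(2^p p!),
# where s = s_{par,i}, x = x_i, and g(s, x) collects F_i, the velocity-space
# Jacobian and the constant prefactor, stripped of the Gaussian weights.
# For the shifted Maxwellian F_Mi of density N_i, g reduces to N_i / sqrt(pi).
import numpy as np
from numpy.polynomial.hermite import hermgauss   # nodes/weights for exp(-s^2)
from numpy.polynomial.laguerre import laggauss   # nodes/weights for exp(-x)
from scipy.special import eval_hermite, eval_laguerre, factorial

def gyro_moments(g, P, J, n_s=64, n_x=64):
    """Return the array N[p, j] of gyro-moments for 0 <= p <= P, 0 <= j <= J."""
    s, ws = hermgauss(n_s)
    x, wx = laggauss(n_x)
    S, X = np.meshgrid(s, x, indexing="ij")
    W = np.outer(ws, wx) * g(S, X)
    N = np.zeros((P + 1, J + 1))
    for p in range(P + 1):
        Hp = eval_hermite(p, S) / np.sqrt(2.0**p * factorial(p))
        for j in range(J + 1):
            N[p, j] = np.sum(W * Hp * eval_laguerre(j, X))
    return N

# Sanity check: for an isothermal shifted Maxwellian only N^{00} = N_i survives,
# so that the relations for P_par,i and P_perp,i give T_par = T_perp = T_i0.
Ni = 1.0
moms = gyro_moments(lambda S, X: Ni / np.sqrt(np.pi) + 0.0 * S, P=4, J=2)
print(moms[0, 0], moms[2, 0], moms[0, 1])   # ~ 1.0, ~ 0.0, ~ 0.0
```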
We remark that, in (<ref>), the shift of the parallel velocity coordinate s_ i, appearing in F_Mi defined in (<ref>) and in the argument of the Hermite polynomial H_p, is necessary to ensure good convergence property of the GM approach with respect to the number of GMs in (<ref>), paticularly in the presence of sonic ion flows (see sec:vsp). These flows appear at the sheath entrance where ions are accelerated to the ion sound speed (see subsec:bc). Additionally, we note that F_Mi, defined in (<ref>), is assumed to have the same parallel and perpendicular temperature, T_∥ i = T_⊥ i = T_i0. The assumption of an isotropic Maxwellian distribution function in (<ref>) is justified by the large ion-ion collision frequency typically found in a linear plasma device (where T_i ≲ 1 eV) compared to the boundary region in fusion devices (where T_i ≳ 10 eV). The absence of strong external energy sources driving temperature anisotropy in LAPD experiments supports this assumption (see (<ref>)). The lowest-order GMs can be related to fluid ion gyrocenter quantities, such as the ion gyrocenter density N_i, the ion parallel velocity U_∥ i, and the ion parallel and perpendicular pressure and temperature P_∥ i = T_∥ i N_i and P_⊥ i = T_⊥ i N_i, respectively. Indeed, using (<ref>), we derive that N_i = ^00 , ^10 =0 , P_∥ i = N_i T_∥ i = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μ B F_i(v_∥ - U_∥ i)^2 = T_i0( √(2)^20 + N_i ), P_⊥ i = N_i T_⊥ i = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μB/m_iμ B F_i = T_i0 ( N_i - ^01), with the total ion temperature defined by T_i = ∫ d F_i m_i ( v - b U_∥ i)^2 /( 3N_i) = (T_∥ i + 2 T_⊥ i)/3 = T_i0 ( √(2)^20 + 3 N_i - 2 ^01)/( 3N_i ). We remark that (<ref>) is a direct consequence of our choice of using a shifted parallel velocity-space coordinate s_ i in (<ref>). We now derive the full-F GM hierarchy equation describing the evolution of an arbitrary number of GMs, ^pj. This is obtained by projecting the ion full-F equation given in (<ref>) onto the Hermite-Laguerre basis. In addition, we normalize time t to R / c_s0 (with c_s0 = √(T_e0 / m_i) the ion sound speed evaluated at the reference constant electron temperature T_e0 and R the radial extension of the plasma chamber in the direction perpendicular to B), the potential ϕ to T_e0 / e, the parallel and perpendicular spatial scales to R and ρ_s0 = c_s0 / Ω_i, respectively. We also normalize the ion and electron densities, N_i and N_e, to the constant reference density N_0, the parallel electron velocity U_∥ e to c_s0, and the electron temperature, T_e, to T_e0. In addition, we assume q_i = e, considering a hydrogen plasma. Hence, we derive the normalized ion GM hierarchy equation, which describes the evolution of the GMs ^pj, i.e. ∂/∂ t^pj + √(p/τ_i)^p-1j∂/∂ t U_ i + ·pj + √(p/τ_i )p-1 j· U_ i - √(p/τ_i)p-1jv̇_∥ = ^pj_i + S_N^pj + S_E^pj, where the GM projections are given by ·pj =√(τ_i)∂_z (√(p+1)^p+1 j+√(p)^p-1 j) + ∂_z ( U_ i^ pj) +1/ρ_*ϕ^ p j, √(p/τ_i)p-1jṘ· U_ i = ( p^pj + √(p(p-1))^p-2j. . + √(p/τ_i) U_ i^p-1j) ∂_z U_ i + √(p/τ_i)1/ρ_*^ p-1 jϕU_ i , √(p/τ_i)p-1jv̇_∥ = - ^p-1j√(p/τ_i)∂_z ϕ, with ρ^* = ρ_s0 / R and τ_i = T_i0 / T_e0. In (<ref>), we introduce the Poisson bracket operator that is fg = ∂_x f ∂_y g - ∂_y f ∂_x g. The GM expansions of the particle and energy sources, S_N^pj and S_E^pj, are given by S_N^pj = 𝒜_N δ_p^0 δ_j^0 = S_N, S_E^pj = 𝒜_E ( δ_p^2 δ_j^0/√(2) - δ_p^0 δ_j^1 ), respectively. Finally, we express the nonlinear Dougherty collision operator in terms of GMs. 
We first express (<ref>) in terms of the velocity-space coordinates (s_ i, x_i, ) and project it onto the Hermite-Laguerre basis. This yields ^pj_i = ν_i[ -(p+2j) ^pj + (T_i -1 ) . . ×(√(p(p-1))^p-2j - 2j ^pj-1) ], where T_i is expressed in terms of the GMs using (<ref>). The nonlinear Dougherty collision operator conserves particles (_i^00 =0), momentum (_i^10 =0) and energy (_i^20 = √(2)_i^01). While simpler in form compared to the GM expansion of the nonlinear Fokker-Planck Landau collision operator <cit.>, the Dougherty collision operator constitutes an initial step to incorporate advanced collisional effects in the nonlinear and full-F ion GM hierarchy equation. The numerical implementation of the nonlinear Fokker-Planck Landau collision operator <cit.> will be considered in future work. To obtain the time evolution of the GMs 𝒩^pj, it is necessary to derive an explicit expression for the time derivative of the ion parallel velocity, ∂_t U_ i which appears in (<ref>) and resulting from the use of the shifted parallel velocity-space coordinate s_ i. By setting (p,j) = (1,0) in (<ref>) and using the fact that ^10 vanishes exactly (see (<ref>)), we derive the desired expression for ∂_t U_ i given by N_i ∂_t U_ i + N_i/ρ_* ϕU_ i +τ_i ∂_z P_∥ i+ N_i U_ i∂_z U_ i + N_i ∂_z ϕ =0, where the parallel ion pressure P_ i is expressed in terms of GMs according to (<ref>). We note that the full-F GM hierarchy equation, given in (<ref>), can also be derived from the electromagnetic full-F GM hierarchy equation described in Ref. Frei2020. This is achieved by considering the electrostatic limit, neglecting FLR effects, and assuming anisotropic ion temperature effects in F_Mi. Notably, the GMs with different p are coupled in (<ref>) due to the parallel streaming terms, associated with the ion Landau damping. On the other hand, the GMs with different j are only coupled through the collision operator (see (<ref>)), since our model neglects FLR effects and magnetic drifts responsible for kinetic effects leading to additional coupling in j <cit.>. As a result, a few Laguerre GMs are expected to be sufficient in our nonlinear turbulent simulations. To carry out the numerical turbulent simulations presented here, a simple closure by truncation is applied to the GM hierarchy equation. More precisely, we set ^pj =0 for all (p,j) > (P,J) with 0 ≤ P,J < ∞. The full-F GM hierarchy equation enables us to perform turbulent simulations of LAPD using an arbitrary number of GMs. Different values of (P,J) are considered in sec:turbulentsimulations where we demonstrate that the closure by truncation is sufficient to perform full-F turbulent simulations in our case. §.§ Cold-ion reduced model We consider here the cold-ion limit of the full-F GM hierarchy and derive a simplified model, similar to the one used in previous turbulent investigations of linear devices based on fluid models (see, e.g., Refs. Rogers2010,Popovich2010,Fisher2015) where the effects of finite ion temperature T_i are neglected. In the cold ion limit, only the GMs _i^00 and _i^10, associated with the ion gyrocenter density and the parallel ion velocity, need to be evolved and the contribution from the parallel ion pressure P_∥ i in (<ref>) can be neglected. As a consequence, the ion GM hierarchy equation given in (<ref>) reduces to the ion gyrocenter continuity equation for N_i and to the ion parallel momentum equation for U_ i, i.e. ∂/∂ t N_i + 1/ρ^*ϕ N_i + ∂_z ( U_ i N_i ) = S_N, ∂_t U_ i + 1/ρ_*ϕU_ i + U_ i∂_z U_ i + ∂_z ϕ =0, respectively. 
We remark that the particle and momentum conservation of the collision operator is used in deriving (<ref>). §.§ Electron fluid model We use the Braginskii model to evolve the electron dynamics, avoiding the evolution of their distribution function, in contrast to Refs. Shi2017,Pan2018. The fluid approach for the electrons is justified when the electron collision frequency is much larger than the ion collision frequency and electron FLR effects are negligible for modes developing at k_⊥ρ_s ∼ 1, which is the case of LAPD experiments. Hence, the time evolution of the electron density n_e, electron parallel velocity U_∥ e, and temperature T_e is determined by the continuity equation, the generalized Ohm's law, and the temperature equation, respectively. These equations are given by ∂_t n_e + 1/ρ^*ϕ n_e + ∂_z ( U_∥ e n_e ) = S_N , ∂_t U_∥ e + 1/ρ^*ϕ U_ e + U_∥ e∂_z U_∥ e = m_i/m_e[ ν_∥ J_∥ + ∂_z ϕ. . - T_e/n_e∂_z n_e - 1.71 ∂_z T_e ] , ∂_t T_e + 1/ρ^*ϕ T_e + U_∥ e∂_z T_e = 2/3T_e ( 0.71/N_e∂_z J_∥ - ∂_z U_∥ e) + ∂_z ( χ_∥ e ∂_z T_e ) + S_T_e , where the normalized parallel electrical resistivity and electron thermal conductivity are given by ν_∥ = ν_0 / T_e^3/2 and χ_∥ e = 1.075 T_e^5/2 / ν_0, respectively. Here, ν_0 = 4 √(2 π) e^4 n_e0 R √(m_e)lnλ /[ 3 c_s0 m_i T_e0^3/2 1.96 ] is the normalized electron collisionality. On the right-hand side of (<ref>) and (<ref>), S_N and S_T_e are the normalized density and temperature sources. In (<ref>), the parallel electrical current is J_∥ = n_e ( U_ i - U_ e ). §.§ Vorticity equation We now obtain the vorticity equation that governs the evolution of the electrostatic potential ϕ. This equation imposes the charge conservation constraint to the time evolution of the plasma densities and electrical currents. To derive the vorticity equation, we consider the quasineutrality condition in the long-wavelength limit, given by <cit.> - e n_e + q_i N_i = - ·( q_i^2 N_i/m_i Ω_i^2_⊥ϕ). (<ref>) neglects the FLR effects, associated with the difference between the particle and gyrocenter position and proportional to the perpendicular ion pressure. We also notice that (<ref>) is equivalent to the quasineutrality condition used in previous GK turbulent simulations of LAPD <cit.> if the Boussineq approximation is used, i.e. N_i ≃ N_0. This approximation is widely used in fluid codes <cit.> and we use it below to derive the vorticity equation. While (<ref>) can be solved to obtain ϕ given the electron and ion densities, n_e and N_i respectively, we use a vorticity equation instead, as this is often considered in turbulent fluid codes <cit.>. The vorticity equation is derived by taking the time derivative of the quasineutrality equation given in (<ref>) and by using the electron and ion continuity equations, given in Eqs. (<ref>) and (<ref>), respectively. It yields - ∂_t Ω - 1/ρ^*ϕΩ - ∂_z( U_∥ iΩ ) + 1/N_i∂_z J_∥ = 0, with Ω = ^2_⊥ϕ the vorticity variable using the Boussinesq approximation. The effects of the Boussinesq approximation on plasma turbulence is the subject of previous studies <cit.>. While it might not be justified in LAPD when steep density gradients are present, it allows us to reduce the computational cost of our simulations when inverting the two-dimensional Laplacian to obtain ϕ from the vorticity variable Ω. We use the vorticity equation given in (<ref>) to evolve Ω when considering the full-F ion GM hierarchy equation and the cold ion models, given in Eqs. (<ref>) and (<ref>) respectively, coupled to the fluid electron model in (<ref>). 
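To make this last step concrete, the following Python sketch inverts the perpendicular Laplacian of the Boussinesq vorticity, ∇_⊥^2 ϕ = Ω, on a single (x,y) plane with a standard 5-point stencil and the homogeneous Neumann conditions used at x = ± L_x/2 and y = ± L_y/2 (see subsec:bc). It is only a schematic illustration, not the solver of our code: the sparse direct solve, the ghost-point Neumann closure and the pinning of the constant nullspace of the pure Neumann problem are simplifying choices made here.

```python
# Minimal sketch (a schematic illustration, not the solver of our code) of the
# Boussinesq inversion Laplacian_perp(phi) = Omega on one perpendicular plane,
# with a 5-point stencil and homogeneous Neumann boundary conditions; the pure
# Neumann problem is defined up to a constant, so phi is pinned at one point.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def invert_laplacian_perp(omega, dx, dy):
    """Solve Laplacian_perp(phi) = omega on a uniform (Nx, Ny) grid."""
    nx, ny = omega.shape

    def d2(n, h):
        # 1D second derivative with ghost-point Neumann closure at both ends.
        D = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                     [-1, 0, 1], format="lil")
        D[0, 1] = 2.0
        D[-1, -2] = 2.0
        return D.tocsr() / h**2

    L = sp.kron(d2(nx, dx), sp.identity(ny)) + sp.kron(sp.identity(nx), d2(ny, dy))
    L = L.tolil()
    rhs = omega.astype(float).ravel()
    L[0, :] = 0.0      # remove the constant nullspace: impose phi = 0 at one node
    L[0, 0] = 1.0
    rhs[0] = 0.0
    phi = spla.spsolve(L.tocsr(), rhs)
    return phi.reshape(nx, ny)

# Usage: one inversion per z-plane (and per Runge-Kutta stage),
# phi[:, :, k] = invert_laplacian_perp(omega[:, :, k], dx, dy).
```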
§.§ Two-fluid Braginskii fluid model We finally introduce the two-fluid Braginskii fluid model <cit.>, valid in the high-collisional regime, for comparison with the full-F ion GM hierarchy equation and the cold-ion model. In addition to the fluid electron fluid equations for n_e, U_ e and T_e already described in subsec:electronbraginskii, the two-fluid Braginskii equations prescribe a parallel ion momentum equation to evolve the ion parallel velocity U_∥ i, an ion temperature equation to evolve the ion temperature T_i, and vorticity equations for Ω. These equations are given by ∂_t U_∥ i + 1/ρ^*ϕ U_∥ i + U_∥ i∂_z U_∥ i = - ∂_z T_e - τ_i ∂_z T_i - (T_e + τ_i T_i) ∂_z n_e/n_e , ∂_t T_i + 1/ρ^*ϕ T_i + U_∥ i∂_z T_i = +2/3 T_i [(U_∥ i- U_∥ e) ∂_z n_e/n_e- ∂_z U_∥ e] + ∂_z ( χ_∥ i∂_z T_i ) + 𝒜_E/n_e +(1 - T_i) S_N/n_e, ∂_t Ω + τ_i ∂_i _⊥^2 T_i = 1/n_e∂_z J_∥ - 1/ρ^*ϕΩ + τ_i _⊥^2 T_i -U_∥ i∂_z ( Ω + τ_i _⊥^2 T_i ). respectively. In (<ref>), χ_∥ i = 1.32 √(m_e / m_i) (τ _i T_i)^5/2 / ν_0 is the normalized parallel ion thermal conductivity. We remark that the two last terms in (<ref>) are the ion temperature sources associated with the energy source S_E (see (<ref>)), which appears on the right-hand side of (<ref>). In contrast to the cold-ion model given in (<ref>), the two-fluid Braginskii model considered here allows for finite ion temperature effects, but assumes quasineutrality, such that n_e ≃ N_i. In addition, the parallel electric field ∂_z ϕ, appearing in (<ref>), is approximated in (<ref>) by the electron parallel pressure gradient, such that ∂_z ϕ≃∂_z P_e with P_e = n_e T_e (see (<ref>)). We remark that the vorticity equation, (<ref>), corresponds to the one implemented in fluid codes used to study the plasma turbulence in the SOL region <cit.>, such as the GBS code <cit.>. We also remark that the terms proportional to the Laplacian of the ion temperature, i.e. τ_i _⊥^2 T_i, are absent in (<ref>). Indeed, these terms are associated with FLR effects, which are neglected in (<ref>). However, we note that, as the ion temperature in LAPD experiments is generally lower than the electron temperature (τ_i < 1), neglecting finite ion perpendicular pressure in the vorticity equation deduced from the quasineutrality condition in (<ref>) is expected not to significantly affect the plasma dynamics in the simulations described below. §.§ Boundary conditions Boundary conditions are required for the ion GMs, ^pj, the electron fluid quantities, N_e, U_∥ e, T_e, and the potential ϕ in the perpendicular (x,y) plane at x = ± L_x / 2 and y = ± L_y / 2 and at the end plates located in the z direction at z = ± L_z /2, where a sheath forms due to the plasma-wall interaction. At x = ± L_x / 2 and y = ± L_y / 2, homogenous Neumann boundary conditions are used for all quantities. These ad-hoc boundary conditions have a negligible effect on plasma turbulence near the center of the device as they are imposed at a distance sufficiently large from the center of the device. On the other hand, the boundary conditions in the z direction have an important impact since the formation of a Debye sheath is observed when the magnetic field lines intercept the end plates that control the plasma losses <cit.>. Since the sheath region cannot be modeled by the field equations derived in subsec:vorticityequation (the GK formalism is violated in this region), the sheath is modeled in our simulations by a set of appropriate boundary conditions imposed at the sheath entrance. 
In previous GK simulations of LAPD <cit.>, a conducting wall is considered. Accordingly, the fraction of electrons that cross the sheath and are lost by absorption at the walls is determined by the value of the potential at the sheath entrance. This fraction is imposed by evaluating the cutoff velocity of the electron distribution function numerically. Leveraging the GM approach, we use the standard fluid Bohm boundary conditions <cit.>, which set the values of the parallel electron and ion velocities, U_∥ e and U_∥ i, at the sheath entrance. Therefore, we assume that <cit.> U_∥ e(x,y,z = ± L_z / 2) = ±√(T_e,s) e^(Λ - ϕ_s / T_e,s), U_∥ i(x,y,z = ± L_z / 2) = ± c_s = ±√(T_e,s)√(1 + τ_i T_i,s / T_e,s), with Λ = log√(m_i/(2π m_e)) ≃ 3 for hydrogen plasmas. In (<ref>), T_e,s and T_i,s are the electron and ion temperatures evaluated at the sheath entrance, i.e. T_e,s = T_e(x,y, z = ± L_z / 2) and T_i,s = T_i(x,y, z = ± L_z / 2), and, similarly, ϕ_s = ϕ(x,y, z =± L_z / 2). We notice that the boundary conditions in (<ref>) reduce to the ones used in Ref. Rogers2010 when T_i ≪ T_e and correspond to the ones used in SOL turbulent simulations using the drift-reduced Braginskii model <cit.>. For the remaining quantities, we assume, for simplicity, that the gradients of the electron density, n_e, electron temperature, T_e, ion GMs, ^pj, and electrostatic potential, ϕ, vanish along the direction of the magnetic field at the sheath entrance, i.e. homogeneous Neumann boundary conditions are imposed at z = ± L_z / 2. While the homogeneous Neumann boundary conditions considered here are sufficient to ensure the numerical stability of the present simulations, further investigations are needed to develop first-principles sheath boundary conditions for the GM approach. In particular, the analytical procedure outlined in, e.g., Refs. Loizu2012,Mosetto2015, can be extended to an arbitrary number of GMs, and kinetic sheath boundary conditions can also be developed <cit.>. Magnetic field lines intercept the machine wall at a small oblique angle in fusion devices, further complicating the treatment of the sheath boundary conditions <cit.>. § NUMERICAL IMPLEMENTATION To solve the full-F ion GM hierarchy in (<ref>) coupled with the electron fluid model in (<ref>), we have developed a new three-dimensional full-F code. This code solves the turbulent dynamics for an arbitrary number of GMs and also implements a two-fluid Braginskii model for comparison with the GM results. To evolve the plasma dynamics, we employ numerical algorithms similar to those of the two-fluid code <cit.>. More precisely, an explicit fourth-order Runge-Kutta time-stepping scheme is used. The perpendicular and parallel directions are discretized using a uniform Cartesian grid in the (x,y,z) coordinates, with the x, y, and z directions discretized using N_x, N_y and N_z points uniformly distributed within the intervals [- L_x /2, + L_x /2], [- L_y /2, + L_y /2] and [- L_z /2, L_z/2], respectively. The Poisson bracket operator, [ f,g] = b ×∇ f ·∇ g = ∂_x f ∂_y g - ∂_y f ∂_x g, with b = B / B = e_z, is evaluated by using a fourth-order Arakawa method <cit.>. The numerical evaluation of the other spatial operators appearing in the GM hierarchy equation is based on a fourth-order centered finite-difference scheme, resulting in a 5-point centered stencil <cit.>.
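For reference, the following Python sketch implements the standard second-order Arakawa discretization of the bracket [f,g] = ∂_x f ∂_y g - ∂_y f ∂_x g on a doubly periodic grid; the Arakawa form is used because it conserves the discrete kinetic energy and enstrophy, preventing spurious nonlinear instabilities. The code described above employs the fourth-order variant of this scheme and non-periodic boundaries, so this is only an illustrative sketch.

    import numpy as np

    def arakawa_bracket(f, g, d):
        """Second-order Arakawa Jacobian of f and g on a doubly periodic
        grid with uniform spacing d (axis 0 = x, axis 1 = y)."""
        def xp(a): return np.roll(a, -1, axis=0)   # a[i+1, j]
        def xm(a): return np.roll(a,  1, axis=0)   # a[i-1, j]
        def yp(a): return np.roll(a, -1, axis=1)   # a[i, j+1]
        def ym(a): return np.roll(a,  1, axis=1)   # a[i, j-1]

        j_pp = ((xp(f) - xm(f)) * (yp(g) - ym(g))
                - (yp(f) - ym(f)) * (xp(g) - xm(g)))
        j_px = (xp(f) * (yp(xp(g)) - ym(xp(g)))
                - xm(f) * (yp(xm(g)) - ym(xm(g)))
                - yp(f) * (xp(yp(g)) - xm(yp(g)))
                + ym(f) * (xp(ym(g)) - xm(ym(g))))
        j_xp = (yp(xp(f)) * (yp(g) - xp(g))
                - ym(xm(f)) * (xm(g) - ym(g))
                - yp(xm(f)) * (yp(g) - xm(g))
                + ym(xp(f)) * (xp(g) - ym(g)))
        return (j_pp + j_px + j_xp) / (12.0 * d * d)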
To avoid checkerboard patterns <cit.>, the grid, referred to as the v-grid, used to evolve the parallel velocities, U_∥ e and U_ i, and the GMs ^pj with odd p, is staggered to the left along the z-direction by Δ z /2 (Δ z is the grid spacing) with respect to the grid, referred to as the n-grid, where the other fluid quantities, i.e. n_e, T_e, Ω (and thus ϕ), and the GMs ^pj with even p are evaluated. Fourth-order interpolation techniques are used between the n- and v- grids <cit.>. To improve the numerical stability of our numerical simulations, parallel and perpendicular numerical diffusions, such as D(f) = η_⊥(∂_xx^2 + ∂_yy^2) f + η_z ∂_zz^2 f, where f denotes one of the evolved quantities, are added to the right hand-side of all equations. We choose the perpendicular and parallel diffusion coefficients, η_⊥ and η_z, to be constant and sufficiently small not to affect significantly the results. The model is implemented in a Fortran code using a MPI domain decomposition in all directions. The initial conditions of the turbulent nonlinear simulations impose equal electron and ion densities and temperatures, such that n_e = ^00 and T_e = T_i with top-hat-like profiles in the perpendicular plane and uniform in z. In addition, we set ϕ = Λ T_e to avoid unphysical and large electron current into the sheath region. The initial values of ^20 and ^01, given the initial ion density and ion temperature T_i profiles, are obtained by inverting (<ref>), which yields ^20 = N_i ( T_i- 1) / √(2) and ^01 = N_i ( 1 - T_i), with T_∥ i = T_⊥ i = T_i. Finally, the parallel velocities, U_ i and U_ e, are initialized with smooth profiles along z, with values at the end plates fixed according to the boundary conditions given in (<ref>). Random noise is added to the initial profiles, with constant amplitude 0.01, to seed turbulence. Typically, a quasi-steady state is achieved after 100 c_s0 / R time unit (corresponding to t ∼ 4 ms), similarly to previous GK and Braginskii turbulent simulations of LAPD <cit.>, where the sources of particle and energy are compensated by the losses at the end plates. § FULL-F TURBULENT SIMULATION RESULTS In this section, we present the first turbulent and full-F simulations of the GM approach of a linear plasma device, focusing on the parameters of the LAPD experiment. We perform a comparison between the turbulent predictions of the full-F GM approach (see subsec:fullFhierarchy), with different numbers of GMs and values of collisionality and compare them with the Braginskii model introduced in subsec:braginskii. Our simulations parameters are similar to those used in Ref. Rogers2010, where a helium LAPD plasma is considered. These parameters are sumarized as follows: n_e0 = 2 × 10^12 cm^-3, T_e0 = 6 eV, T_i0 = 3 eV (τ_i = 0.5), Ω_i∼ 960 kHz, ρ_s0 = 1.4 cm, c_s0= 1.3 × 10^6 cm s^-1, m_i / m_e = 400 and ν_0 = 0.03. The LAPD vacuum chamber has a radius R ≃ 0.56 m (i.e., R ≃ 40 ρ_s0) and a parallel length of L_z ≃ 18 m, such that we use L_x = L_y = 100 ρ_s0 (or L_x ∼ L_y ∼ 1.4 m) and L_z = 36 R. The reference time is R / c_s0∼ 43 μs. We use a numerical resolution of N_x = N_y = 192 in the perpendicular plane and a coarser resolution in the parallel direction of N_z = 64 thanks to the dominant k_∥≃ 0 turbulent structures. We consider the following parameters for the density and temperature sources L_s = ρ_s0, r_s = 20 ρ_s0, 𝒜_N0 = 𝒜_T_e0 = 0.04 (with 𝒜_N∞ = 𝒜_T_e ∞ = 0.001) and 𝒜_E0 = 0.02 (with 𝒜_E∞ = 𝒜_N∞). 
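A minimal sketch of the initialization described above is given below; the relations for 𝒩^20, 𝒩^01, and ϕ = Λ T_e follow the text, while the uniform random-noise seeding applied to the density is an assumption of the sketch rather than the exact procedure of the code.

    import numpy as np

    def initial_gms(Ni, Ti, Te, Lambda=3.0, noise=0.01, rng=None):
        """Initial GM and potential fields from prescribed density and
        temperature profiles: N^20 = N_i (T_i - 1)/sqrt(2),
        N^01 = N_i (1 - T_i), phi = Lambda * T_e, plus small random noise
        (amplitude 0.01) to seed turbulence."""
        rng = np.random.default_rng() if rng is None else rng
        N00 = Ni * (1.0 + noise * (2.0 * rng.random(Ni.shape) - 1.0))
        N20 = N00 * (Ti - 1.0) / np.sqrt(2.0)
        N01 = N00 * (1.0 - Ti)
        phi = Lambda * Te
        return {"N00": N00, "N20": N20, "N01": N01, "phi": phi}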
In order to investigate the impact of ion collisions, we conduct a set of nonlinear simulations in the high (HC) and low (LC) ion collisionality regime. For each set, we consider different numbers of GMs (P,J) to investigate the convergence of the GM approach. More precisely, we consider (P,J) = (2,1), (6,1), (12,1) in the LC regime and (P,J) = (2,1), (6,1) in the HC regime. We change the ion collisionality by varying the ion collision frequency ν_i as an independent parameter while keeping all other parameters constant. In the HC regime, the ion collision frequency is computed using the LAPD physical parameters, such that ν_i = 1.38 √(m_i / m_e)ν_0 /τ_i^3/2≃ 2.34. In this regime, the ion mean-free-path, λ_mpf, is considerably shorter than the total length L_z, i.e. λ_mpf / L_z ≃√(2 τ_i) R / L_z / ν_i ≪ 1, and the effects of the collision operator are expected to be important. On the other hand, we set the ion collision frequency to be small in the LC regime, such that ν_i ≃ 4 × 10^-3 yielding λ_mpf / L_z ∼ 6.9. In this regime, the effect of the collision operator on the GMs is expected to be negligible. We remark that using J=1 is sufficient to represent the ion distribution function F_i since fine structures in x_i are not present due to the absence of strong kinetic effects (e.g., trapped particles). This section is structured as follows. First, sec:simulationresults provides an analysis and comparison of simulations based on the full-F GM hierarchy, the cold-ion, and the Braginskii models. Second, the turbulence characteristics are analyzed and compared in more details in sec:turbulence. Finally, we investigate the ion distribution function in velocity-space in sec:vsp and the GM spectrum in quasi-steady state in subsec:GMspectrum as a function of the number of GMs and for the two collisionality regimes. §.§ Simulation results This section presents a set of nonlinear and turbulent simulations of the LAPD using the full-F GM hierarchy equation given in (<ref>), the cold-ion model in (<ref>), and the Braginskii model introduced in (<ref>). A typical nonlinear evolution of the electron density, n_e, obtained by using the GM hierarchy equation with (P,J) = (6,1) GMs in the HC regime is shown in fig:snapshotsne. For t ≲ 28 R / c_s0, the profiles build up because of the localized particle and energy sources present in the system. The steep density and temperature gradients near r ∼ r_s drive an unstable resistive drift-wave, with the most unstable mode occurring at k_⊥ρ_s0∼ 0.5 (k_⊥ is the perpendicular wavenumber) with finite parallel wavenumber and rotating in the ion diamagnetic direction. Large poloidal flows, with associated velocity typically larger than the phase-velocity of the resistive drift waves <cit.>, nonlinearly trigger a Kelvin-Helmholtz (KH) instability, characterized by a long perpendicular wavelength and k_∥≃ 0. The KH instability becomes clearly visible around t ≃ 33 R / c_s0. This instability, which has been shown to dominate the radial transport in LAPD <cit.>, saturates at t ≃ 43 R / c_s0, transporting the plasma to the r ≳ r_s region and yielding the broadening of the initial profiles. The role of the KH-dominated transport in our simulations is confirmed by the strong steepening of the profiles when the nonlinear term ϕΩ in (<ref>) is artificially suppressed. After t ∼ 91 R / c_s0, a quasi-steady state is reached, where the sources are compensated by the losses at the end plates. 
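The quoted collisionality values can be reproduced directly from the parameters listed in the previous section; the short Python check below is only a consistency check of these numbers.

    import numpy as np

    mi_over_me, nu0, tau_i = 400.0, 0.03, 0.5     # LAPD-like normalized parameters
    R_over_Lz = 1.0 / 36.0                        # L_z = 36 R

    nu_i_HC = 1.38 * np.sqrt(mi_over_me) * nu0 / tau_i**1.5
    for label, nu_i in (("HC", nu_i_HC), ("LC", 4.0e-3)):
        mfp_over_Lz = np.sqrt(2.0 * tau_i) * R_over_Lz / nu_i
        print(label, round(nu_i, 3), round(mfp_over_Lz, 2))
    # HC 2.342 0.01 (strongly collisional);  LC 0.004 6.94 (nearly collisionless)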
A similar qualitative evolution is observed with a higher number of GMs in the LC and HC regime, as well as in the cold-ion and Braginskii simulations. The dynamics in the direction parallel to the magnetic field is shown in fig:snapshotsxz during the quasi-steady state. Instantaneous snapshots of the parallel turbulent structures of the electrostatic potential ϕ, electron density n_e, and electron temperature T_e reveal elongated (k_∥≃ 0) structures. All quantities show larger values at the center (z =0) and decrease near the end plates (located at z= - 18 R and at z = 18 R) due to the particle and energy losses caused by the sheath boundary conditions. Similar parallel structures are obtained when using a larger number of GMs. The turbulent structures observed in fig:snapshotsxz are in good qualitative agreement with previous fluid <cit.> and GK <cit.> turbulent simulations of LAPD. We now examine the time-averaged radial profiles. These profiles are averaged over a time window of ∼ 2 ms during the quasi-steady state as well as over the central region of LAPD - 8 R ≤ z ≤ + 8R (or - 4 m≲ z ≲ 4 m), a region commonly considered to present experimental data <cit.> (a similar approach is used in previous GK simulations <cit.>). The results are shown in fig:profiles, which displays the averaged radial profiles of ϕ, n_e, and T_e obtained in the GM simulations, using different numbers of GMs, in the cold-ion and the Braginskii simulations. Instantaneous profiles are also included for comparison. We note, first, that the plasma profiles extend beyond r_s illustrating the broadening caused by the KH instability <cit.>. More precisely, the profiles are approximatively constant close to the center of the device (r < r_s) and far from the source region (r > r_s), showing a region of steep gradients near r ∼ r_s, where the fluctuation level is large and the radial transport is important (see sec:turbulence). Second, the time-averaged radial profiles from the GM simulations are very similar to the ones obtained from the Braginskii model. Third, no noticeable differences are found between the simulations in the LC and HC regimes and with different numbers of GMs. This suggests that ion kinetic effects may not significantly influence the predictions of the equilibrium (time-averaged) profiles in LAPD. On the other hand, the cold-ion model consistently predicts larger time-averaged radial profiles, while the gradients (not shown here) are of the same order as those obtained in the GM and Braginskii simulations. Fourth, the analysis of the instantaneous profiles (indicated by dotted lines in fig:profiles) shows the existence of large perpendicular turbulent structures associated with the KH instability. Finally, we note that the time-averaged profiles obtained in fig:profiles closely remind those obtained in previous fluid simulations <cit.> and GK simulations <cit.>. We remark that the electrostatic potential profile ϕ follows approximatively the electron temperature T_e, as shown in fig:profiles. Indeed, ϕ∼Λ T_e is required to have comparable electron and ion outflows in steady-state, such that U_i∼ U_ e near the end plates, according to (<ref>). To verify that ϕ∼Λ T_e in our simulations, we evaluate the radial profile of the instantaneous difference, ϕ - Λ T_e, taken at the center of the device (z = 0 R) for the GM (both the LC and HC regimes are considered), cold-ion and Braginskii simulations during the quasi-steady state. The results are shown in fig:philambdaTe. 
We first observe that the GM and Braginskii simulations yield similar ϕ - Λ T_e values. On the other hand, ϕ - Λ T_e is roughly constant and approximately vanishes for all radii in the cold-ion model. Even if the deviations of ϕ from Λ T_e are larger in the GM and Braginskii simulations, the differences ϕ - Λ T_e remain smaller than the values of ϕ and Λ T_e (ϕ - Λ T_e ∼ 0.1 for r ≲ r_s compared to ϕ∼Λ T_e ∼ 2, see fig:snapshotsxz). §.§ Turbulence analysis We now delve into the analysis of the turbulence properties, comparing the GM predictions with the Braginskii simulations. The instantaneous fluctuations are obtained by subtracting the time-averaged profiles from the full quantities, such that the fluctuation of, e.g., the electrostatic potential is defined by δϕ = ϕ - ⟨ϕ⟩, where ⟨ϕ⟩ denotes the time-averaged potential. Similar definitions are used for the other quantities. The top panels of fig:phisnapshots show instantaneous snapshots of ϕ in the plane perpendicular to the magnetic field at the center of the device, z=0 R, while the bottom panels illustrate snapshots of δϕ. The Braginskii, cold-ion, and GM simulations with various (P,J) are considered. We first observe that the fluctuations in the Braginskii model closely resemble those obtained in Ref. Fisher2015. In particular, the level of fluctuations is low at the center of the device and far from the source region, while it is large where the equilibrium gradient is steeper, in particular near r ∼ r_s (see fig:profiles). Notably, the δϕ snapshots reveal the presence of large-amplitude structures propagating outwards. These observations hold for all the GM simulations, demonstrating a good qualitative agreement between the GM approach and the Braginskii model. While the fluctuations of the potential, δϕ, are not significantly affected by the number of GMs used in the simulation or by the collisionality regime, which indicates that the KH instability (which drives turbulence) has a fluid nature, minor differences in the turbulent properties can still be observed. In fact, the use of a small number of GMs tends to produce slightly larger turbulent structures. This can be observed, for instance, by comparing the results of (P,J) = (2,1) with the (12,1) simulations in the LC regime. Finally, we observe that the cold-ion model produces the largest turbulent structures, which is consistent with the broad time-averaged profiles observed in fig:profiles. The same observations apply to the snapshots of the ion gyrocenter density N_i and its associated fluctuation δN_i, as shown in fig:nisnapshots. Similar plots are obtained for n_e and T_e, but are not shown. We now proceed to analyze the root mean square (RMS) of the fluctuations, defined as √(⟨δ n_e^2⟩) in the case of the electron density fluctuation δ n_e, and similarly for the other quantities. fig:rms displays the RMS of the electron density, δ n_e, and electrostatic potential, δϕ, fluctuations plotted as a function of the radius. The data are computed at z = 0 R and normalized to ⟨ n_e⟩(r) (and to ⟨ϕ⟩(r)) <cit.>. We find that the RMS values of the density displayed in fig:rms closely resemble those obtained in previous fluid <cit.> and GK <cit.> simulations. Consistent with the observations made in fig:nisnapshots, the RMS values reach their maximum where the gradients are most pronounced, near r ∼ r_s. For r ≲ r_s and for r ≳ r_s (where the gradients are smaller), the RMS values decrease because of the absence of the instability drive.
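A minimal sketch of these fluctuation diagnostics is given below; it assumes that the fields are stored as (time, x, y) arrays at fixed z and is not the actual post-processing used for the figures.

    import numpy as np

    def fluct_stats(field_txy):
        """Relative RMS and skewness of the fluctuations of a field stored
        as a (time, x, y) array at fixed z (e.g. n_e at z = 0)."""
        mean = field_txy.mean(axis=0)                    # time average
        df = field_txy - mean                            # fluctuation
        rms_rel = np.sqrt((df**2).mean(axis=0)) / mean   # RMS normalized to the mean
        skew = (df**3).mean(axis=0) / (df**2).mean(axis=0)**1.5
        return rms_rel, skew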
Using a low number of GMs or considering the LC regime results in slightly larger RMS values (in particular of δϕ). Overall, this indicates that the level of fluctuations in the steep-gradient region is sensitive to the number of GMs used in the simulations. We demonstrate in sec:vsp that the large RMS values observed in fig:rms are associated with a lack of resolution (i.e., an insufficient number of GMs) to describe the ion distribution function F_i. Finally, we remark that the best agreement with the Braginskii predictions is obtained by the GM simulation with (P,J) = (6,1) in the HC regime, while the largest RMS values (in particular of δ N_i) are obtained in the cold-ion model. We also compare the RMS of the parallel electrical current J_∥ measured at the sheath entrance located at z = - 18 R. The results are shown in fig:rmsjpar as a function of the radius and normalized to the maximum of J_∥(r). It is clearly observed that the boundary conditions imposed on the electron and ion parallel velocities allow the parallel current to fluctuate. This is in contrast to the case of the logical sheath boundary condition, where J_∥ = 0 is imposed everywhere <cit.>. We remark that larger fluctuations of J_∥ are obtained in the Braginskii simulations, while the largest RMS is observed in the case of the cold-ion model. We now turn our attention to the skewness of the ion density fluctuations, which is defined as the third normalized moment of the ion gyrocenter density fluctuation, that is, ⟨δ N_i^3⟩ / ⟨δ N_i^2⟩^3/2. The skewness of the density is often used to characterize the presence of plasma holes and blobs, associated with negative and positive skewness, respectively <cit.>. fig:skewness shows the skewness of the ion density N_i. In all cases, the skewness is negative for r ≲ r_s, indicating the presence of density holes in the region where the plasma source is present. On the other hand, in the region where r ≳ r_s, the skewness is positive. The sign and amplitude of the skewness shown in fig:skewness are consistent with previous fluid <cit.> and GK <cit.> simulations, with the values obtained in the GM simulations being similar to those observed in the Braginskii case, albeit slightly smaller. Overall, the present turbulence analysis demonstrates that the full-F GM approach is in qualitative agreement with the Braginskii model, employed in previous numerical investigations <cit.> and validated with experimental data <cit.>. §.§ Ion distribution function at quasi-steady state We now investigate the features of the ion distribution function F_i in velocity space. To obtain the full-F ion distribution function, F_i, from the GM simulations, we use the expansion in (<ref>), truncated to a finite number of GMs, and we compute it as a function of x_i and the unshifted parallel coordinate v_∥ / v_Ti (v_∥ / v_Ti = s_ i + √(2 τ_i) U_ i). Also for this analysis, we consider the quasi-steady-state period. fig:vsp shows F_i obtained from the (P,J) = (6,1) simulations in the HC regime at the center of the machine (z=0R) and at the sheath entrances, z=- 18R and z=18R. At the two sheath entrances, F_i is centered around the ion parallel velocity, U_ i =± c_s respectively, a consequence of the Bohm sheath boundary conditions given in (<ref>). On the other hand, F_i is centered around v_∥≃ 0 at z = 0R, where U_ i≃ 0.
The absence of fine velocity-space structures in fig:vsp is a consequence of the lack of strong kinetic effects, such as trapped particles and FLR effects <cit.>, in LAPD and explains the weak dependence of the turbulence properties on the number of GMs reported in sec:turbulence. fig:vspslices shows the ion distribution function at the sheath entrance (z = 18 R and x = y = 0) for x_i =0, in the LC and HC regimes and for different values of (P,J). We first observe that the bulk region of F_i (near v_∥ / v_Ti∼ 1) is well approximated by a shifted Maxwellian. However, deviations from the Maxwellian distribution function are noticeable in the tails of F_i in the LC regime. These deviations become pronounced as (P,J) increases (e.g., from (6,1) to (12,1)), which indicates that F_i is not sufficiently resolved in the LC regime at low (P,J). Finally, we remark that collisional effects tend to widen F_i due to the collisional parallel velocity-space diffusion present in the nonlinear Dougherty operator. We note that the use of v_∥ / v_Ti as an argument in the Hermite polynomials, H_p, in (<ref>) would compromise the convergence properties of the GM approach, with respect to the use of the shifted coordinate s_ i, leading to simulations that show unphysical distribution functions with negative values when the same number of GMs is considered as in the simulations presented here (see fig:vsp). In fact, if the unshifted GMs _v_∥^pj, defined with respect to v_∥ / v_Ti as the argument of H_p, i.e. 𝒩^pj_v_∥ = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μ (B/m_i) F_i H_p ( v_∥/ v_Ti) L_j(x_i)/√(2^p p!), are used to expand F_i, it is found that _v_∥^pj≠ 0 for (p,j) > 0, even when F_i is a Maxwellian distribution function centered at U_ i≠ 0. Indeed, using (<ref>), one derives the analytical expression of the unshifted GMs for F_i = F_Mi, _v_∥^pj = δ_j^0/√(π)∫_- ∞^∞ d ( v_∥/v_Ti) e^-( v_∥ / v_Ti - √(2 τ_i ) U_ i)^2 H_p( v_∥/v_Ti) /√(2^p p!) = √(2^p/p!)(√(2 τ_i ) U_ i)^p δ_j^0, where U_ i is normalized to c_s0. While the amplitude of the unshifted GMs decreases rapidly in the presence of a subsonic ion flow, U_ i≪ 1, the decrease of the amplitude with p is slower in the presence of sonic flows, such that ^pj_v_∥∼√(2^p / p!). §.§ GM spectrum at quasi-steady state To better assess the velocity-space representation of F_i in our simulations, we plot the amplitude of the GMs, ^p0, at the sheath entrance of the device, z=18 R and r =0, in fig:gmspectrum (a similar plot is obtained for ^p1, showing considerably smaller amplitudes). These are the amplitudes of the GMs associated with the distribution functions displayed in fig:vspslices. As can be clearly observed, the amplitude of the GMs decays faster in the HC regime than in the LC regime. The results of the LC (P,J) = (12,1) simulation show that P ≳ 12 ensures that F_i is well resolved, since ^P0 provides a negligible contribution to F_i compared to ^00. On the other hand, the contributions from ^p0 with p ≳ 4 are negligible in the HC regime, thereby justifying the closure by truncation for P ≳ 4. We also notice that ^10 =0 in all cases, as a consequence of (<ref>). Finally, we note that the amplitude of the low-order GMs is not sensitive to P, as shown in fig:gmspectrum. More precisely, the low-order GMs for (P,J) = (6,1) strongly resemble the ones of the (P,J) = (12,1) simulation in the LC case. This holds true also in the HC regime, as can be seen, for instance, by comparing the (P,J) = (6,1) and (P,J) = (2,1) simulations.
This suggests (in addition to the similar results obtained in sec:turbulence with different (P,J)) that full-F turbulent calculations using the GM approach are less sensitive to the values of P and J than linear computations <cit.>, where applying a closure by truncation at low P and J can introduce spurious artifacts <cit.>. Otherwise, fig:gmspectrum reveals that the large RMS values depicted in fig:rms (e.g., (P,J) = (6,1) in the LC and (P,J) = (2,1) in the HC regime) correspond to cases where the GM representation of F_i is unresolved, but still yielding to good turbulent predictions. Additional investigations are required to verify the effect of closure in the presence of kinetic effects such as trapped particles and magnetic drifts, which are absent in LAPD. Finally, fig:npj presents snapshots of the GMs for different values of p in the perpendicular plane obtained for the (P,J) = (6,1) simulations in the LC and HC regimes. It is clearly visible that the turbulent structures are dominated by a long-wavelength perpendicular KH instability for all values of p. The decay of the amplitude of the turbulent structures due to collisions and with increasing p is also evident. § CONCLUSIONS In this work, we present the first full-F turbulent simulations based on the GM approach in a linear plasma device configuration with open straight field lines, such as LAPD. We consider an electrostatic and long-wavelength ion GK model for the full ion distribution function F_i, coupled to the electron Braginskii fluid model for the electron density n_e, parallel velocity U_ e, and temperature T_e. The ion GK model is solved by deriving a full-F ion GM hierarchy equation, based on the Hermite-Laguerre polynomials expansion of F_i. In particular, a velocity-space coordinate centered at the local fluid ion parallel velocity is used to expand F_i, which ensures good convergence properties of the Hermite expansion in the presence of sonic ion flows. The GM hierarchy equation we consider is equivalent to the electrostatic and long-wavelength limit of the GK moment model for the boundary region derived in Ref. Frei2020. To account for the parallel losses at the end plates, Bohm sheath boundary conditions, equivalent to the ones previously used in LAPD fluid simulations <cit.>, are used. We also consider a nonlinear ion-ion Dougherty collision operator. The ion GM hierarchy equation is implemented in a numerical code enabling us to perform the first full-F turbulent calculations based on a moment approach. We present the simulations of a linear device using LAPD physical parameters based on a Helium plasma <cit.> and a first-of-the-kind comparison with the two-fluid Braginskii model. Several nonlinear simulations are performed using a different number of Hermite and Laguerre GMs in a low and high-collisional ion regime. Overall, a good qualitative agreement on the time-averaged radial profiles with the Braginskii model is observed with the GM approach. This is expected from our analysis which shows that turbulence is dominated by the long perpendicular wavelength and k_∥≃ 0 Kelvin-Helmoltz instability of fluid nature. The RMS and skewness of the fluctuations in the GM simulations also agree with the ones previously obtained in fluid <cit.> and GK <cit.> simulations of LAPD. In particular, we find that the RMS values are often larger than the ones predicted by the Braginskii model, if the number of GMs is not sufficient to properly resolve the ion distribution function. 
The largest RMS values are observed with the cold-ion reduced model (with a difference up to ∼ 20 % with respect to the Braginskii model), while the results closest to the one of the Braginskii model are obtained if collisions are introduced in the GM approach with a sufficient number of GMs, in this case (P,J) = (6,1). Overall, collisions reduce the turbulent fluctuations level, but they do not significantly alter the observed turbulent regimes and radial transport. At the same time, the analysis of the ion distribution function F_i reveals that collisions damp the amplitudes of the GMs, thereby allowing for a reduction in the number of GMs required in the simulations (typically from (P,J) ∼ (12,1) in the low collisional regime to (P,J) ∼ (6,1) in the high-collisional regime of LAPD). Overall, the present work constitutes a step toward the development of future full-F turbulent simulations of the boundary region of fusion devices using the GM approach, which offers an ideal flexible tool to capture kinetic and collisional effects at the desired level of accuracy. The simulations of the boundary of fusion devices require that the present model is extended to the full GM hierarchy of Ref. Frei2020 to include electron kinetic, electromagnetic, FLR, and geometry effects. In addition, a more accurate description of the role of ion-ion collisions involves the implementation of a nonlinear collision operator model with increasing physics fidelity, such as the nonlinear Coulomb operator <cit.>. Proper sheath boundary conditions for the GM hierarchy equation, which extend the simplified Bohm sheath boundary condition used here (see (<ref>)), can enhance the reliability of our simulations. These boundary conditions can be obtained by following a procedure similar to the one outlined in Ref. Loizu2012. Finally, we remark that the implementation of a kinetic electron description is essential also to perform high-fidelity LAPD simulations, as fast and less collisional electrons (with T_e ∼ 15 eV) are emitted by pulsed plasma discharges in experiments <cit.>. Furthermore, kinetic electrons are important in setting the sheath boundary conditions where electrons are reflected because of the potential drop, yielding strong velocity-space gradients in the electron distribution function <cit.>. § ACKNOWLEDGEMENT The authors acknowledge helpful discussions with Alessandro Geraldini and Stephan Brunner. This work has been carried out within the framework of the EUROfusion Consortium, via the Euratom Research and Training Programme (Grant Agreement No 101052200 — EUROfusion) and funded by the Swiss State Secretariat for Education, Research and Innovation (SERI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union, the European Commission, or SERI. Neither the European Union nor the European Commission nor SERI can be held responsible for them. The simulations presented herein were carried out in part on the CINECA Marconi supercomputer under the TSVVT422 project and in part at CSCS (Swiss National Supercomputing Center). This work was supported in part by the Swiss National Science Foundation.
http://arxiv.org/abs/2307.04423v1
20230710085537
Density asymmetry and wind velocities in the orbital plane of the symbiotic binary EG Andromedae
[ "N. Shagatova", "A. Skopal", "E. Kundra", "R. Komžík", "S. Yu. Shugarov", "T. Pribulla", "V. Krushevska" ]
astro-ph.SR
[ "astro-ph.SR" ]
Astronomical Institute, Slovak Academy of Sciences, 059 60 Tatranská Lomnica, Slovakia [email protected] Main Astronomical Observatory of National Academy of Sciences of Ukraine, 27 Akademika Zabolotnoho St., 031 43, Kyiv, Ukraine Non-dusty late-type giants without a corona and large-scale pulsations represent objects that do not fulfil the conditions under which standard mass-loss mechanisms can be applied efficiently. Despite the progress during the past decades, the driving mechanism of their winds is still unknown. One of the crucial constraints on prospective wind-driving theories can be provided by the measured velocity and density fields of the outflowing matter. The main goal of this work is to match the radial velocities of the absorbing matter with the corresponding depth in the red giant (RG) atmosphere in the S-type symbiotic star EG And. We measured fluxes and radial velocities of ten Fe I absorption lines from spectroscopic observations with a resolution of ≈ 30 000. At selected orbital phases, we modelled their broadened profiles, including all significant broadening mechanisms. The selected Fe I absorption lines at 5151 - 6469 Å originate at a radial distance ≈ 1.03 RG radii from the RG centre. The corresponding radial velocity is typically ≈ 1 km s^-1, which represents a few percent of the terminal velocity of the RG wind. The high scatter of the radial velocities, of several km s^-1, in the narrow layer of the stellar atmosphere points to the complex nature of the near-surface wind mass flow. The average rotational velocity of 11 km s^-1 implies that the rotation of the donor star can contribute to the observed focusing of the wind towards the orbital plane. The orbital variability of the absorbed flux indicates the highest column densities of the wind in the area between the binary components, even though the absorbing neutral material is geometrically more extended on the opposite side of the giant. This wind density asymmetry in the orbital-plane region can be ascribed to gravitational focusing by the white dwarf companion. Our results suggest that both gravitational and rotational focusing contribute to the observed enhancement of the RG wind towards the orbital plane, which makes mass transfer by the stellar wind highly efficient. Density asymmetry and wind velocities in the orbital plane of the symbiotic binary EG Andromedae N. Shagatova <ref> A. Skopal <ref> E. Kundra <ref> R. Komžík <ref> S. Yu. Shugarov <ref> T. Pribulla <ref> V. Krushevska <ref>,<ref> Received / Accepted § INTRODUCTION The atmospheres of late-type giant stars include slow and dense winds reaching terminal velocities lower than 100 km s^-1, with decreasing values for later spectral types <cit.>. For the asymptotic giant branch (AGB) evolutionary stage, the driving mechanism of the outflow is thought to be based on a combination of the levitation of dust-forming material by stellar pulsations and the acceleration by radiation pressure on the dusty envelopes <cit.>. On the other hand, the lack of dust in the atmospheres of normal red giant stars (RGs) and the inefficiency of other known driving mechanisms complicate the understanding of their winds.
Since the late 20th century, the dissipation of magnetic waves is thought to be the key ingredient in their mass-loss process. A review of attempts to resolve the mechanism behind RG winds can be found in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Recently, <cit.> investigated the wind properties of Arcturus (K1.5 III) using the Wentzel–Kramers–Brillouin Alfvén wave-driven wind theory <cit.>. They found that the wave periods that are required to match the observed damping rates correspond to hours to days, consistent with the photospheric granulation timescale. The late-type giants play the role of donor star in the symbiotic stars (SySts), which are long-period (P≳ years) binary systems with a mass transfer of the giant wind towards a compact companion, usually a white dwarf <cit.>. The donor star supplying the dense wind matter is either a normal RG star (S-type SySts) or an AGB star (D-type SySts). The RGs in S-type systems have experienced the first dredge-up, which was confirmed by their low ^12C/^13C ratio in the range 5 - 23 <cit.>. The white dwarf as a source of ultraviolet radiation enables us to probe the cool wind from the giant at different directions. For example, the continuum depression around the Ly-α line as a function of the orbital phase has shown a very slow wind velocity up to 1-2 RG radii, R_ g, above the donor surface and a steep increase to the terminal velocity afterwards in S-type SySts <cit.>. For D-type systems, this way of deriving the wind velocity profile is complicated by very long (P≈ 10-100 yr) and often poorly known orbital periods. However, for single O-rich AGB stars, the expansion velocities of the wind were determined by <cit.> as a measure of the half-width of the molecular lines at the baseline level. The majority of the stars in their sample has a distinct low-velocity region in front of the velocity jump to the terminal value, but in a few cases, the wind reaches terminal velocity already within the innermost parts. In one case, the authors found a deceleration of the gas as it moves away from the star (R Dor). The low-velocity region close to the star and a steep increase to the terminal velocity in O-rich AGB stars was also indicated by molecular line modelling with a non-local thermal equilibrium radiative transfer code <cit.>. In C-rich AGB stars, the wind velocity profile can be steeper because the opacity of the dust grains is higher <cit.>. For the C-rich AGB star CW Leo, a steep increase in the wind velocity was found to start at a distance of ≈ 5 stellar radii <cit.>. The presence of the hot white dwarf <cit.> accompanied by the cool giant <cit.> in S-type SySts leads to a complex ionization structure of the circumbinary material. During quiescent phases when there is no ongoing eruptive burning on the surface of the white dwarf, a fraction of the surrounding RG wind is photoionized by energetic radiation from the hot component. As a result, the neutral area around the RG is cone-shaped, with the RG near its apex facing the white dwarf <cit.>, where a thin boundary between the neutral and ionized zone is determined by the balance between the flux of ionizing photons from the white dwarf and the flux of neutral particles from the RG. EG And is an S-type SySt with no recorded outburst of its white dwarf. The effective temperature of the white dwarf is ≈ 7.5× 10^4 K <cit.> and its mass is 0.4± 0.1 <cit.>. The system is eclipsing <cit.> with an orbital inclination of ≈ 80^∘ <cit.> and an orbital period of 483 days <cit.>. 
The donor star is an RG of spectral class M2-3 III <cit.> with an effective temperature ≈ 3700 K <cit.>, luminosity ≈(1-2)× 10^3 <cit.>, and metallicity [Fe/H]≈ 0 <cit.>. Its mass is estimated to be 1.5± 0.6 <cit.> and its radius is estimated to be 75± 10 R_⊙ <cit.>, corresponding to log g ≈ 0.5 - 1.1. The slow and dense wind of RG is assumed to have a terminal velocity v_∞≈ 30<cit.>. The velocity profile of the wind suggests an almost steady wind up to around 1.5 R_ g from the RG centre and subsequent rapid acceleration towards the terminal velocity, as derived from hydrogen column density values measured from the Lyα-line attenuation <cit.>. This approach accounts for the wind density distribution at the near orbital plane due to the point-like relative size of the white dwarf as a source of the probing radiation. The giant wind in this system is distributed asymmetrically, with denser parts concentrated at the orbital plane and diluted areas located around the poles <cit.>. The geometric distribution and radial velocity (RV) profile of the RG wind are essential components for exploring the physical mechanism driving the outflow and shaping the RG wind. In this work, we analyse the orbital variability of fluxes and RVs of Fe I absorption lines of EG And (Sect. <ref>). We intend to match the resulting RVs of individual lines with the depth of their origin in the atmosphere by modelling their profile using a semi-empirical model atmosphere (Sect. <ref>) and including several broadening mechanisms (Sect. <ref>). The results are given in Sect. <ref>. The discussion and conclusions can be found in Sects. <ref> and <ref>, respectively. § OBSERVATIONS In the optical wavelength range, the main source of the continuum radiation in EG And is the RG companion <cit.>. Its spectrum is superposed with dominant Balmer emission lines arising in the symbiotic nebula and many absorption lines of molecules and atoms originating in the cool giant wind <cit.>. We collected 53 spectroscopic observations from Skalnaté Pleso Observatory (SP) from 2016 - 2023 in the wavelength range 4200 - 7300 Å (Table <ref> or <ref>). The observatory is equipped with a 1.3 m Nasmyth-Cassegrain telescope (f/8.36) with a fibre-fed échelle spectrograph (R∼30 000) similar to the MUSICOS design <cit.>. The spectra were reduced with the Image Reduction and Analysis Facility (IRAF; <cit.>) using specific scripts and programs <cit.>. The spectra were wavelength-calibrated using the ThAr hollow-cathode lamp. The achieved accuracy for our set of spectra corresponds to the systematic error of RV measurements, which typically is in the range 0.2 - 0.6. Our spectra were dereddened with E_ B-V = 0.05 mag <cit.> using the extinction curve of <cit.>. We determined the orbital phase φ of EG And using the ephemeris of the inferior conjunction of the RG (φ = 0) given as <cit.> JD_ sp. conj. = 2 450 683.2(± 2.4) + 482.6(± 0.5)× E . We assumed a systemic velocity of v_ sys = -94.88 kms^-1 <cit.>. Similar values were determined by <cit.>, <cit.>, <cit.>, and <cit.>. We converted the spectra from relative into absolute fluxes by scaling them to the closest-date photometric fluxes using a fourth-degree polynomial function. We used the UBVR_ C photometry of EG And published by <cit.> together with new photometric observations obtained at the G2 pavilion of the Stará Lesná Observatory, which is equipped with a 60 cm, f/12.5 Cassegrain telescope <cit.>. 
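As an illustration, the phase and velocity conventions above can be applied as in the following Python sketch; heliocentric timing corrections and the quoted ephemeris uncertainties are ignored here.

    import numpy as np

    T0, P_ORB = 2450683.2, 482.6      # JD of inferior conjunction and period [d]
    V_SYS = -94.88                    # systemic velocity [km/s]

    def orbital_phase(jd):
        """Orbital phase, with phi = 0 at the inferior conjunction of the RG."""
        return np.mod((np.asarray(jd, dtype=float) - T0) / P_ORB, 1.0)

    def rv_in_systemic_frame(rv_obs):
        """Shift observed radial velocities [km/s] to the systemic-velocity frame."""
        return np.asarray(rv_obs, dtype=float) - V_SYS

    # e.g. orbital_phase(2459580.5) returns the phase of a spectrum taken on that JD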
To complement our dataset during 2022, we used photometric observations available in the International Database of the American Association of Variable Star Observers (AAVSO[<https://aavso.org>]). We converted the photometric magnitudes into fluxes according to the calibration in Table 2.2 of <cit.>. § ANALYSIS AND RESULTS To investigate the velocity distribution in the RG atmosphere of EG And, we selected ten Fe I absorption lines between 5151 and 6469 Å that were not severely blended. We measured their orbital variability and modelled their absorption profiles to track the density conditions and dynamics of the corresponding part of the wind area. §.§ Orbital variations of the Fe I absorption lines The selected absorption lines of neutral iron show the orbital variability in RVs and absorbed fluxes. To measure these changes along the orbit, we fitted the lines with a Gaussian profile superimposed on a fourth-order polynomial function representing the continuum radiation of the spectrum (Sect. <ref>) using the curve-fitting program Fityk[<https://fityk.nieto.pl>] <cit.>. The resulting variability in RV values, v_r, is plotted in Fig. <ref> together with the RV curve of the RG according to the solution of <cit.>. Shifts up to ≈ -5 in the RVs of individual Fe I absorption lines relative to the RG curve are measured. This is consistent with a slow outflow of the absorbing material. However, around orbital phases φ≈ 0 - 0.2, the RV values especially of the Fe I 5340 Å line suggest a slow inflow. The orbital variability of the fluxes (Fig. <ref>) shows the strongest absorption around the orbital phase ≈ 0.6, and the possible weakest absorption can be indicated at φ≈ 0.1, but there is a lack of the data around this orbital phase. When the conical shape of the neutral wind area around the RG is taken into account <cit.>, this result points to the highest densities of the wind between the RG and the apex of the neutral area cone (Fig. <ref>). This agrees with the orbital variability of the absorption and the core-emission component of the Hα line, which suggests that high-density matter lies in the area between the binary stellar components <cit.>. The complete list of measured RV and flux values is given in Tables <ref> and <ref> in the appendix. §.§ Model atmosphere grid The spread of RV values of the Fe I absorption lines within ≈ -5/+3around the RV curve of the RG, v_r^ g (Fig. <ref>) suggests that these lines originate in the vicinity of the stellar surface. To match the velocities with a depth in the RG atmosphere through modelling the profiles of Fe I lines (Sect. <ref>), we constructed a semi-empirical model atmosphere. This model is based on a simplified extension of the MARCS model atmosphere <cit.> up to a distance of 150 R_ g from the stellar centre. We defined the distribution of three physical parameters in the atmosphere as a logarithmically spaced grid: the neutral hydrogen density N_ H [cm^-3], temperature T [K], and electron pressure P_ e [Ba] over the required range of radial distance r [R_ g]. The MARCS model atmosphere extends up to a distance of 1.1 R_ g from the stellar centre. From the available database,[<https://marcs.astro.uu.se>] we selected the model with parameters closest to those of the RG in EG And (Sect. <ref>), a moderately CN-cycled model with ^12C/^13C=20, with a spherical geometry, effective temperature T_ eff=3700 K, mass M=1.0, log g = 0.5, metallicity [Fe/H] =0, and microturbulence parameter of 2, which is a typical value for RGs in S-type SySts <cit.>. 
The selected model atmosphere corresponds to a star with a radius R = 93 R_⊙ and a luminosity L = 1478 L_⊙. Beyond the radial distances covered by the MARCS atmosphere, we extended the model by extrapolation up to r=150 R_ g, where the wind density is sufficiently low to have a negligible impact on the Fe I line absorption profile. At this outer edge of the atmosphere model, we estimated the values of N_ H and T from the hydrodynamical simulation of the M-giant γ Eri wind by <cit.>. We estimated the corresponding value of P_ e assuming a representative value of the ionization fraction of ≈ 10^-6, typical of dense interstellar medium clouds <cit.>. We defined the values of the physical parameters N_ H, T and P_ e between radial distances of 1.1 and 150 R_ g by interpolating the corresponding functions (Table <ref>). The selection of the N_ H(r) interpolation function has a crucial effect on the Fe I absorption line profile. We used the form corresponding to the model of the measured H^0 column densities of EG And by <cit.>, N_ H(r) = n_1/(2λ_1 R_ g) (1 + ξ r^1-K)/r^2, where n_1, ξ [this parameter is given as ξ=n_Kλ_1/(n_1λ_K), where n_K is a model parameter, and λ_K is the Kth eigenvalue of the Abel operator] and K are the model parameters, and λ_1 = π/2 is the Abel operator eigenvalue <cit.>. Since the column density model is most reliable at distances r of several R_ g, we applied the condition on the interpolation function (<ref>) that N_ H(r=3 R_ g)=1.6× 10^10 cm^-3, that is, it equals the value of model J (i=80^∘) from <cit.>. This approach led to smooth profiles of the atmosphere parameters over the required range of radial distances (Fig. <ref>). Finally, we took the asymmetric conical shape of the neutral wind zone into account. For orbital phases at which the line of sight crosses the boundary between the neutral and ionized wind, we estimated its distance from the RG surface from Fig. 6 in <cit.>. We assumed that only the neutral wind contributes to the absorption in the Fe I lines. Therefore, we limited the radial size of the model atmosphere to the H^0/H^+ area border at these orbital phases. At the rest of the orbital phases, the radial length of the neutral area was assumed to be 150 R_ g. §.§ Line profile of the Fe I absorption lines To reproduce the spectral profiles of the ten Fe I absorption lines from 5151 to 6469 Å at all orbital phases with a step of 0.1, we considered several broadening mechanisms that we incorporated into a custom Python code. We used the mass absorption coefficient including natural, pressure, thermal, and microturbulence broadening in the form given by <cit.>. The values of the Ritz wavelengths, the inner quantum numbers J, the oscillator strengths, and the excitation potentials were acquired from the National Institute of Standards and Technology (NIST) database[<https://www.nist.gov/pml/atomic-spectra-database>] and the natural damping constants from the Vienna Atomic Line Database[< http://vald.astro.uu.se>] (VALD). The values of the partition functions for Fe I, Fe II, and Fe III were interpolated through the atmosphere grid from the tables of <cit.> and <cit.>. For the atmosphere layers with temperatures below 1000 K, we assumed constant partition functions. We calculated the values of the Hjerting function as the real part of the Faddeeva function with the wofz function of the scipy.special library[< https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.wofz.html>].
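As a minimal illustration of the last step, the Hjerting function H(a, u) can be evaluated from the Faddeeva function as sketched below; the damping parameter a and the Doppler-width normalization of u are assumed to follow the standard Voigt-profile conventions.

    import numpy as np
    from scipy.special import wofz

    def hjerting(a, u):
        """Hjerting (Voigt) function H(a, u) = Re[w(u + i a)], where w is the
        Faddeeva function, a the damping parameter, and u the wavelength
        offset in Doppler-width units."""
        return np.real(wofz(u + 1j * a))

    # limiting case: H(0, u) reduces to the pure Doppler core exp(-u^2)
    u = np.linspace(-4.0, 4.0, 9)
    print(np.allclose(hjerting(0.0, u), np.exp(-u**2)))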
The pressure broadening was treated as caused by the collisions with neutral hydrogen, using the impact approximation with line-broadening cross sections computed as a function of the effective principal quantum numbers <cit.> with the tabulated values of the broadening cross-section σ and velocity parameter α given by <cit.>. Furthermore, we included rotational broadening using the Python rotBroad function that is part of the PyAstronomy.pyasl library [ <https://pyastronomy.readthedocs.io/en/latest/pyaslDoc/aslDoc/rotBroad.html>]. Since the projected rotational velocity v_ rotsin (i) can be dependent on tidal forces in the outer regions of the RG <cit.>, we allowed it to be a free parameter. After first fitting trials with a free linear limb-darkening coefficient ε, most of the fits converged to ε=1. As this is a reasonable value <cit.>, we kept ε=1 in all line-profile fits. As the typical value of the macroturbulence velocity in RGs is ≈ 3<cit.>, it adds to the broadening of the absorption-line profile. Often, the radial-tangential (RT) anisotropic macroturbulence is the preferred broadening model in a spectroscopic analysis <cit.>. On the other hand, <cit.> showed that the RT macroturbulence model is not adequate at least for solar-type stars because it overestimates turbulent velocity dispersion. They obtained more preferable results for the Gaussian anisotropic macroturbulence model. The resolution of our spectra and relatively low macroturbulent velocity does not allow us to distinguish between different macroturbulence models. Generally, there is agreement that neglecting macroturbulence as a source of line broadening leads to overestimated values of v_ rotsin (i), and, on the other hand, including a simple isotropic Gaussian macroturbulence model provides severely underestimated values of v_ rotsin (i) <cit.>. Therefore, we decided to include the isotropic Gaussian model with two values of macroturbulence velocity, 0 and 3, to obtain lower and upper limits of v_ rotsin (i) values. Finally, we included the instrumental broadening using a Gaussian kernel. The width of the Gaussian profile used in the convolution is given by the resolution R, which depends on the wavelength and was estimated using ThAr lines. For the wavelength range of the selected lines 5151-6469 Å, the spectral resolution of our spectra ranges from ≈ 39100 to 24000. We used the broadGaussFast function from the PyAstronomy.pyasl library[ <https://pyastronomy.readthedocs.io/en/latest/pyaslDoc/aslDoc/broad.html>] to include macroturbulent and instrumental broadening. The line profiles depicted at the right bottom panel of Fig. <ref> compare the strength of individual broadening mechanisms. We performed line profile modelling of Fe I lines at nine different orbital phases. Example fits at orbital phase φ=0.1 are depicted in Fig. <ref>. We evaluated the goodness of fit using the reduced χ-square, χ_ red^2. Its value is often > 2 due to the low value of the degrees of freedom and the uncertain value of the standard observational error, which can vary from one observation to the next. We adopted a rather strict value of a 2% standard deviation of the flux values for all observations to avoid overestimating the errors for the best-quality spectra. The errors due to the simplified model of macroturbulence are relatively small (Fig. <ref> and <ref>) and have practically no effect on the resulting maximum depth of the origin of the spectral line in the atmosphere. 
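A minimal sketch of the instrumental-broadening step is shown below; it assumes a Gaussian instrumental profile with FWHM = λ/R and a uniform wavelength step, whereas the actual analysis performs this convolution with broadGaussFast.

    import numpy as np

    def instrumental_broaden(wvl, flux, R):
        """Convolve a spectrum with a Gaussian instrumental profile of
        FWHM = lambda / R (a stand-in for the broadGaussFast call)."""
        sigma = np.mean(wvl) / R / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
        dw = wvl[1] - wvl[0]                       # uniform step assumed
        half = int(np.ceil(5.0 * sigma / dw))
        x = np.arange(-half, half + 1) * dw
        kernel = np.exp(-0.5 * (x / sigma)**2)
        kernel /= kernel.sum()
        return np.convolve(flux, kernel, mode="same")

For R ≈ 30 000 near 5500 Å, the corresponding FWHM is ≈ 0.18 Å, that is, ≈ 10 km s^-1, comparable to the rotational broadening discussed above.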
The values of the v_ rotsin (i) parameter could be affected by unresolved blending of an absorption line. An important source of systematic error can be introduced by the simplifications in our model, namely the symmetry of the wind distribution, which except for the shape of the neutral area, does not reflect the asymmetry of the egress/ingress and orbital-plane/pole-region in the distribution of the physical quantities (Fig. 4 of <cit.> and Fig. 8 of <cit.>). Moreover, the particular shape of the neutral area itself represents a further source of systematic error. We estimated the distance from the RG centre to the ionization boundary by adapting the shape of the neutral area computed for the symbiotic binary SY Mus <cit.>. This system comprises a white dwarf that is more luminous than the hot companion in EG And by two orders of magnitude <cit.>. On the other hand, the mass loss from its RG is probably higher <cit.>, leading to a denser wind zone. These characteristics affect the shape of the ionization boundary, and the actual shape for EG And can therefore deviate from the one in SY Mus. However, the similar measured egress values of the H^0 column densities and the practically identical asymptote to the egress ionization boundary in the orbital plane, which is located at φ≈ 0.17 <cit.>, strongly suggest that the ionization boundaries in these systems are similar. The lack of measured H^0 column densities at ingress orbital phases for EG And precludes us from modelling the full shape of the ionization boundary. To estimate the sensitivity of the resulting physical parameters on the location of the ionization boundary, we performed fits for a shifted radial distance of the ionization boundary by -0.5R_ g and +0.5R_ g for a subset of modelled spectra with all Fe I absorption lines and orbital phases with the finite radial size of the neutral area represented. For the ionization boundary closer to the RG by -0.5R_ g, we obtained the same values of column densities or lower values by up to 6.0%, and for the boundary that is more distant by +0.5R_ g, the values were the same or higher by up to 1.1%. In both cases, higher values of the errors of n_ H correspond to orbital phases ≈ 0.4 - 0.7, where the position of the ionization boundary is closer to the RG. The corresponding values of the projected rotational velocity v_ rotsin (i) remained unchanged for all fits, as did the values of the minimum distance from the RG centre r (Sect. <ref>). This confirms the dominant role of the densest parts of the RG atmosphere in the formation of Fe I absorption line profiles. Given the rather low magnitude of the errors of n_ H due to the uniform shifts and the most probably similar shape of the ionization structure for both systems, which is supported by similar profiles of the measured H^0 column densities <cit.>, an uncertain precise location of the apex of the neutral zone will probably not seriously affect the ratios of n_ H values at individual orbital phases yielded by the line-profile modelling. Another source of systematic error comes from the uncertain level of the continuum, which is mainly due to the spread in the photometric data. In our dataset, the typical deviation of the continuum values from the average relative to the flux ranges from 3% to 9% at the positions of individual Fe I absorption lines. This leads to errors in the n_ H values with a magnitude of typically ≈ 10 - 20%, v_ rotsin (i) of ≈ 1 - 7% and a minimum distance r of ≈ 0.1 - 0.5%. 
Therefore, the uncertainty in the level of the continuum represents a more significant source of error than the uncertainty in the position of the ionization boundary. Still, these systematic errors are of lower magnitude than the values of the standard deviations of the resulting values from the set of modelled spectra. §.§ Distribution of the physical parameters within the atmosphere §.§.§ The height above the photosphere Our models provided us with the total columns of the wind material that form the spectral profiles of individual Fe I lines in our set. From now on, the values of r are understood as the distances of the lowest layers of the atmosphere model, corresponding to the resulting neutral columns from the line-profile fits. In other words, a particular value r represents the maximum depth within the model atmosphere where the integration of the line-profile stops, and it corresponds to the deepest layer of the origin of the spectral line. The maximum depths of the Fe I line profile fits correspond to a relatively small height, ≈0.02 to ≈0.06 R_ g, above the RG photosphere. Figure <ref> shows this result with the corresponding column densities. The resulting physical parameters averaged over the orbital phases are presented in Table <ref>. There is no sign of significant variations in the column density with orbital phase, but a slightly higher average value is measured at φ=0.5-0.6 (Fig. <ref>). §.§.§ Radial velocities The total average and standard deviation over ten modelled spectra and ten Fe I absorption lines correspond to an RV of -0.89± 1.26 at a radial distance 1.03± 0.01 R_ g (Fig. <ref>). Assuming a terminal velocity of 30, we compared our RV values with velocity profiles obtained for EG And from modelling the measured column densities by <cit.>. As shown in Fig. <ref>, our results support very slow wind velocities close to the RG surface before the acceleration of the wind starts. §.§.§ Rotational velocities The orbit-averaged values of the projected rotational velocities of all modelled lines fall within 9.6 - 12.8 with standard deviations of 4 - 22% (Table <ref>), except for the Fe I 6469 Å line with v_ rotsin (i) = 8.5 and a significantly higher standard deviation of 36%. While it is reasonable not to expect the same rotational velocity at any depth in the RG atmosphere, the measured differences can in part be caused by errors due to the blending of the lines. Moreover, the reliability of the v_ rotsin (i) determination is affected by the comparable strength of the instrumental broadening. The average and standard deviation over the whole sample of ten line-profile models per ten fitted lines correspond to v_ rotsin (i) = 10.9 ± 2.0, which is in a typical range of ≈ 5 - 11 determined for RGs in S-type SySts <cit.>. There are also much faster rotators in this group of stars with v_ rotsin (i) up to ≈ 50 <cit.>. Assuming an orbital inclination of i=80^∘± 10^∘, we obtained v_ rot = 11.1_-2.2^+2.6. Then, for RG radius R_ g=75± 10 R_⊙, the ratio of the orbital to the rotational period is P_ orb/P_ rot=1.4_-0.4^+0.6. Therefore, it is possible that the rotation of the RG is bound to its orbital motion. § DISCUSSION For our sample of ten Fe I absorption lines, we determined the absorbed flux and RV from their Gaussian fits (Sect. <ref>). Both quantities show the relative displacements for individual lines along the orbit (Figs. <ref> and <ref>). The largest average shift in RVs by -3.8 with respect to the v_r^ g(φ) curve is shown by the Fe I 6469 Å line (Fig.
<ref>, dotted line). Around φ = 0.1, the RVs of many lines indicate a slow flow of absorbing material towards the RG, especially the Fe I 5340 Å line with an average RV shift of +0.5. The line-profile models accounting for several broadening mechanisms at the selected ten orbital phases (Sect. <ref>) enabled us to match their RV values with the deepest layer of the atmosphere, where the absorption line is predominantly created, characterized by r, T, N_ H and P_ e values. For the resulting depths in the range of 1.02 - 1.06 R_ g, all averaged RV values are low. Specifically, the outflow values lie within the interval from -0.2 to -3.7 (Table <ref>). This represents 0.7 - 12.3 % of the estimated terminal wind velocity of 30. While the typical RV at r ≈ 1.03 R_ g is ≈ 1 (≈ 3% of v_∞), there is a considerable dispersion in individual RV values (Fig. <ref>, bottom). The highest range of RV values is measured at the shortest distances of ≈ 1.02 - 1.03 R_ g. This variability can be a result of the highly complex flows of matter in the close surroundings of cool evolved stars <cit.>. In the light of our results, the orbital phase ≈ 0.6 seems to be exceptional in several ways. First, most of the Fe I lines from our set reach the maximum absorbed flux at this orbital phase (Fig. <ref>), pointing to a higher column density in the neutral zone between the apex of its cone and the RG, that is, in the direction towards the white dwarf companion (Fig. <ref>). In the same way, we could interpret the local maxima in the resulting column densities of the line-profile models (Fig. <ref>). Simultaneously, a higher dispersion of the RV values and the overall highest outflow velocities were measured around this orbital phase, suggesting enhanced outflow of the wind. The same feature was observed for the core-emission and absorption components of the Hα line at orbital phases 0.6-0.7 <cit.>. Higher densities and, at the same time, higher velocities of the neutral matter may represent a challenge for hydrodynamical simulations of outflows from evolved cool stars in binary systems. In our previous work, we investigated the geometrical distribution of the RG wind in EG And. By modelling H^0 column densities, we found that the wind from the RG is focused towards the orbital plane <cit.>. On the other hand, the RV orbital variability of the [OIII] 5007 Å line, which coincides with the v_r^ g(φ) curve in both phase and amplitude, indicates a dilution of the wind around the poles of the RG <cit.>. However, the underlying mechanism that focuses wind in this system remains unclear. <cit.> applied the wind-compression disk model proposed by <cit.> to RGs in S-type symbiotic systems with rotational velocities of 6-10and found that the wind focusing occurs at the equatorial plane with a factor of 5–10 relative to the spherically symmetric wind. The average value v_ rot = 11.1(Sect. <ref>) is therefore sufficiently high for rotation-induced compression of the wind from the giant in EG And. The wind focusing can also potentially explain the higher densities of the neutral wind between the binary components, in contrast to the lower densities in the opposite direction, even though the neutral zone is more extended there. However, the wind compression by the RG rotation cannot explain this asymmetry because this mechanism acts equally strongly in all outward directions in the plane perpendicular to the rotational axis. 
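As a quick consistency check of the rotation values quoted above, the snippet below de-projects the orbit-averaged projected rotational velocity, converts it into a rotation period for the adopted giant radius, and compares it with the orbital period. The orbital period of ≈482.6 d used as the default is the commonly adopted value for EG And and is not restated in this section, so it should be read as our assumption.

import numpy as np

R_SUN_KM = 6.957e5   # solar radius in km
DAY_S = 86400.0

def rotation_check(vsini=10.9, inc_deg=80.0, r_giant_rsun=75.0, p_orb_days=482.6):
    # de-project the rotational velocity and convert it into a rotation period
    v_rot = vsini / np.sin(np.radians(inc_deg))                        # km/s
    p_rot_days = 2.0 * np.pi * r_giant_rsun * R_SUN_KM / v_rot / DAY_S
    return v_rot, p_rot_days, p_orb_days / p_rot_days

# rotation_check() -> (about 11.1 km/s, about 342 d, P_orb/P_rot of about 1.4)

The returned ratio reproduces the value of ≈1.4 quoted above.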
Therefore, the gravitational effect of the white dwarf companion is the more natural explanation for this measured asymmetry. In a recent 3D hydrodynamical simulation of the accretion process for representative parameters of S-type symbiotic systems by <cit.>, the centre of the oblique region with highest densities around the RG is shifted towards the white dwarf, and the wind enhancement in the area of the orbital plane is also visible in their Fig. 2. For S-type system, recurrent nova RS Oph, the simulations of <cit.> showed a dense equatorial outflow in the system as a result of the interaction of a slow wind with a binary companion. Therefore, gravitational focusing likely shapes the circumstellar matter in S-type SySts, as well as in D-type systems <cit.>. Often, the analysis of spectral lines in stellar atmospheres is focused on the determination of elemental abundances and basic stellar parameters by comparing synthetic and observational spectra <cit.> In our work, we aimed to assess the physical conditions at different heights in the RG atmosphere in interacting binary star from Fe I absorption line profiles. In principle, this approach can also be used for isolated non-dusty RG stars, which can potentially have different wind velocity profiles. The presence of the companion of a mass-loosing star affects the flow of matter in the wind region. Its gravitational pull can support the wind outflow from the RG, and we cannot exclude that in the case of single RGs, the low-velocity region is more extended and the velocities are lower. To form an idea about the proportions of gravitational force of the two stellar components in EG And, we compared the values of the gravitational force of the white dwarf and RG at several distances r on the line joining the two stars. When we assume the separation between the two components of 4.5R_ g from the interval given by <cit.>, the magnitude of the white dwarf force at r = 1.02 - 1.06 R_ g, where the Fe I absorption lines are predominantly created, is small but not negligible. It is about 2% of the value of the RG gravitational force. At r = 1.5R_ g, where the acceleration of the wind starts (Fig. <ref>, top), this value is ≈ 7%, and at r = 2R_ g in the acceleration region, it is ≈ 17%. At the location at ≈ 3R_ g, where the terminal velocity of the wind is reached, the gravitational forces from the two stars are already comparable. Close to the RG surface, where most of the absorption in Fe I lines occurs, the gravitational effect of the white dwarf is small, and we do not observe any tendency in the wind RVs as a function of orbital phase (Fig. <ref>), that is, at different distances of the near-surface regions from the white dwarf companion. Therefore, the RVs near the surface of the RG in EG And are probably comparable to those in isolated giants with similar evolutionary and physical characteristics. In the future, modelling of the Fe I absorption line-profiles for single late-type giants can be used to probe this assumption. § CONCLUSIONS The RVs of the investigated Fe I absorption lines trace the orbital motion of the giant in the binary star EG And. They are displaced from the RV curve of the giant by 0.1 to 3.8 (i.e. up to 13% of the terminal wind velocity), which indicates a slow outflow of mass from the RG (Fig. <ref>). Modelling of their profiles showed that they are formed at maximum depths from ≈ 0.02 to ≈ 0.06 R_ g above the photosphere. 
The typical value of the RV at these distances is around 1, which is consistent with the previously determined wind velocity profile from measured values of H^0 column densities (Fig. <ref>). It is interesting to note that several Fe I lines, especially the 5340 Å line, showed a slow inflow of the absorbing matter towards the RG around orbital phase 0.1. Together with the dispersion of the RV values of several lines, this may be a sign that the nature of the near-surface mass flows in the RG atmosphere is complex (Fig. <ref> and <ref>, bottom). The orbital variations of the Fe I absorption line fluxes (Fig. <ref>) indicate that the matter in the near-orbital-plane area is denser in the region between the binary components than in other directions from the RG. This asymmetry can be the result of gravitational interaction of the white dwarf with the RG wind, as was indicated by numerical simulations of gravitationally focused winds in interacting binaries. The measured rotational velocity of the RG, ≈ 11.1, suggests an additional compression of the wind from the giant towards the orbital plane due to its rotation. Our results therefore support the contribution of both mechanisms to the observed RG wind enhancement and its asymmetry in the orbital plane of EG And. The results of measuring the wind density asymmetry in the near-orbital-plane region are consistent with our previous results on the wind focusing <cit.>. Our direct observational finding shows a wind density enhancement between the binary components. This confirms the high efficiency of the wind mass transfer in SySts. We wish to thank Zoltán Garai, Andrii Maliuk, Matej Sekeráš and Peter Sivanič for obtaining 1-2 spectral/photometric observations each, used in this work. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. This work was supported by the Slovak Research and Development Agency under the contract No. APVV-20-0148 and by a grant of the Slovak Academy of Sciences, VEGA No. 2/0030/21. VK acknowledges the support from the Government Office of the Slovak Republic within the NextGenerationEU programme under project No. 09I03-03-V01-00002. Reproduced with permission from Astronomy & Astrophysics, ESO. § RADIAL VELOCITIES AND FLUXES OF SELECTED FE I ABSORPTION LINES
http://arxiv.org/abs/2307.04257v1
20230709195309
Hyperon polarization and its correlation with directed flow in high-energy nuclear collisions
[ "Ze-Fang Jiang", "Xiang-Yu Wu", "Shanshan Cao", "Ben-Wei Zhang" ]
nucl-th
[ "nucl-th", "hep-ph" ]
This line only printed with preprint option [email protected] Department of Physics and Electronic-Information Engineering, Hubei Engineering University, Xiaogan, Hubei, 432000, China Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China [email protected] Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China [email protected] Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong, 266237, China [email protected] Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou, Guangdong, 510006, China We investigate the hyperon polarization and its correlation with the directed flow of the quark-gluon plasma (QGP) in non-central Au+Au collisions at =27 GeV. A modified 3-dimensional (3D) Glauber model is developed and coupled to a (3+1)-D viscous hydrodynamic evolution of the QGP. Within this framework, we obtain a satisfactory simultaneous description of the directed flow of identified particles and Λ polarization, and show sensitivity of polarization to both the tilted geometry and the longitudinal flow profile of the QGP. A non-monotonic transverse momentum dependence of the Λ polarization is found in our calculation, which is absent from hydrodynamic simulation using other initialization methods and can be tested by future experimental data with higher precision. A strong correlation (or anti-correlation) is found between the global polarization and directed flow of Λ when the longitudinal flow field (or medium deformation) varies, indicating the common origin of these two quantities. Therefore, a combination of these observables may provide a more stringent constraint on the initial condition of the QGP. Hyperon polarization and its correlation with directed flow in high-energy nuclear collisions Ben-Wei Zhang August 12, 2023 ============================================================================================= § INTRODUCTION A highly excited state of nuclear matter, known as the Quark-Gluon Plasma (QGP), is created in high-energy nucleus-nucleus collisions at the Relativistic Heavy-Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). Quantifying the QGP properties becomes one of the primary goals of the heavy-ion collision programs since its discovery at the beginning of this century <cit.>. In non-central heavy-ion collisions, huge orbital angular momentum (OAM) or vorticity field can be deposited into the QGP, leading to the global polarization of hyperons through the spin-orbital coupling <cit.> or spin-vorticity coupling <cit.>. This initiates the exploration of spin physics in a strongly coupled system. The chiral kinetic theory <cit.> and phenomenology, such as chiral vortical effect <cit.>, chiral vortical wave <cit.>, the change of the QCD phase diagram induced by the vorticity effect <cit.> and the spin-hydrodynamics <cit.> are under active investigation. Recently, the STAR experiment has confirmed the global polarization of Λ(Λ̅) hyperons in semi-peripheral Au+Au collisions <cit.>, which implies an average fluid vorticity of ω≈ (9±1) × 10^21 s^-1. This is the most vortical fluid ever observed in nature. 
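The conversion behind the quoted vorticity is a short exercise once the leading-order spin-vorticity relation P ≈ħω/(2k_BT) for spin-1/2 particles is assumed. The snippet below implements it; the hadronization temperature of 160 MeV and the ≈2% average polarization are illustrative inputs chosen by us, not values taken from this paper.

HBAR = 1.054571817e-34      # J s
K_B = 1.380649e-23          # J/K
MEV_TO_K = 1.160451812e10   # temperature equivalent of 1 MeV in kelvin

def vorticity_from_polarization(pol=0.02, temp_mev=160.0):
    # invert P ~ hbar*omega / (2*k_B*T) for the angular velocity omega in 1/s
    return 2.0 * pol * K_B * temp_mev * MEV_TO_K / HBAR

# vorticity_from_polarization() -> about 1e22 1/s, the order of magnitude quoted above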
Further analyses of the global and local polarization have revealed new insights into the vortical properties of the QGP <cit.>. Various theoretical approaches have been developed to study the influence of the fluid vorticity on spin polarization, including transport models (e.g. AMPT) with the assumption of local thermal equilibrium <cit.>, the Quark-Gluon-String Model (QGSM) <cit.> and (3+1)-dimensional viscous hydrodynamic models <cit.>. These models consistently capture the features of the beam energy dependence of the global polarization along the out-of-plane direction (-P^y) as observed from the RHIC to the LHC energies. However, inconsistency still remains in the azimuthal angle dependence of the local polarization between theoretical calculations and the experimental data <cit.>. Considerable efforts have been devoted in resolving this local polarization puzzle <cit.>. Within the hydrodynamic approach, it has been found that the hyperon polarization is sensitive to the initial condition of the QGP evolution <cit.>. Significant impacts on the polarization have been revealed from the initial velocity field of the medium <cit.>, and the initial geometry of the medium which affects the vorticity field inside the QGP <cit.>. Since these aspects are also the origin of other soft hadron observables like their collective flow coefficients, it would be of great interest to study polarization together with these observables in the same framework and utilize their combination to better constrain the initial condition of the QGP <cit.>. This is also the focus of our present work. Following our previous exploration  <cit.> on the interplay between the hydrodynamic initial condition and the directed flow of hadrons in non-central heavy-ion collisions, we will further investigate how the tilted geometry of the QGP fireball and its longitudinal flow velocity field affect the hyperon polarization, including its dependence on rapidity, centrality and transverse momentum. Detailed comparisons on the Λ polarization will be conducted between different contributions to polarization from the kinetic theory, and also between different initialization models of our hydrodynamic simulation. Since the asymmetric initial condition serves as the common origin of both the hyperon polarization and the directed flow of hadrons, we will explore the correlation between these two observables as the medium geometry and flow field vary. We will use Au+Au collisions at =27 GeV as an environment for our discussion, considering the abundance of experimental data on both polarization and directed flow coefficient of Λ hyperons in this collision system. The rest of this paper will be structured as follows. In Sec. <ref>, we will first present the theoretical framework we develop for a simultaneous investigation on directed flow and polarization of the QGP, including a 3-dimensional (3D) Glauber model that involves a tilted medium geometry and an initial longitudinal flow field, a (3+1)-D hydrodynamic model for the QGP evolution and a modified Cooper-Frye formalism for evaluating the polarization pseudo-vector on the chemical freezeout hypersurface. Numerical results on the hadron directed flow and the hyperon polarization will then be presented in Sec. <ref>, with specific focus on the dependence of the Λ polarization on the medium geometry and longitudinal flow profile, and the correlation between polarization and directed flow. In the end, we will summarize in Sec. <ref>. 
§ MODEL FRAMEWORK §.§ Initial condition We use a modified Glauber model to generate the initial condition of hydrodynamic evolution of the QGP, which possesses a counterclockwisely tilted geometry in the reaction plane with respect to the beam (longitudinal) direction <cit.>. The Woods-Saxon (WS) distribution of nucleons is applied to calculate the nuclear thickness function of the Au nucleus as T(x,y)=∫_-∞^∞dzρ_0/1+exp[(r-R_0)/d_0], where ρ_0=0.17fm^-3 is the average nucleon density, r=√(x^2+y^2+z^2) is the radial position with x, y, z being the space coordinates, R_0=6.38 fm is the radius of nucleus and d_0=0.535 fm is the surface diffusiveness parameter. For two nuclei travelling along the longitudinal (±ẑ) direction and colliding with an impact parameter 𝐛, their thickness functions are then given by T_+(𝐱_T)=T(𝐱_T-𝐛/2),    T_-(𝐱_T)=T(𝐱_T+𝐛/2), where 𝐱_T=(x,y) is the transverse plane coordinate. According to the Glauber model, their corresponding densities of participant nucleons of inelasitic scatterings are given by T_1(𝐱_T) =T_+(𝐱_T){1-[1-σ_NN T_-(𝐱_T)/A]^A} , T_2(𝐱_T) =T_-(𝐱_T){1-[1-σ_NN T_+(𝐱_T)/A]^A} , with A being the mass number and σ_NN being the inelastic nucleon-nucleon scattering cross section <cit.>. Inspired by the anisotropy of hadrons emitted by the QGP, it has been proposed in Ref. <cit.> that non-central collisions deposit energy asymmetrically along the longitudinal direction, as illustrated in the upper panel of Fig. <ref>. This leads to a counterclockwise tilt of the QGP fireball in the reaction plane with respect to the beam direction. Different parameterization schemes of the initial condition have been proposed in literature <cit.> to introduce this deformation of the nuclear matter and have been shown consistent with each other. In this work, we follow our earlier studies <cit.> and parameterize the spacetime rapidity (η_s) dependence of wounded (or participant) nucleon distribution as W_N(x,y,η_s)=  T_1(x,y)+T_2(x,y) +  H_t[T_1(x,y)-T_2(x,y)]tan(η_s/η_t), where the parameter H_t reflects the overall imbalance strength of energy deposition between forward and backward η_s. It relies on the impact parameter of collisions, and is set as H_t = 2.07b/fm in the present study in order to consistently describe the centrality dependence of the soft hadron observables in Au+Au collisions at =27 GeV later. Additionally, the function tan (η_s/η_t) in Eq. (<ref>) determines how the imbalance varies with η_s. We use a constant parameter η_t=8.0 in this study, which provides a good description of the directed flow (v_1) of charged particles in our previous work <cit.>. After accounting for contributions from both wounded nucleons and binary (hard) collisions, the total weight function reads W(x,y,η_s)=(1-α)W_N(x,y,η_s)+α n_BC(x,y)/[(1-α)W_N(0,0,0)+α n_BC(0,0)]|_𝐛=0, where n_BC(x,y)=σ_NNT_+(x,y)T_-(x,y) represents the number of binary collisions, and α=0.05 is called the collision hardness parameter determined by the centrality (or 𝐛) dependence of the soft hadron yield <cit.>. Under the Bjorken flow assumption, the initial energy density ε_0 and the normalized local net baryon density n_0 are given by <cit.> ε_0(x,y,η_s) =K · W(x,y,η_s) · H(η_s) , n_0(x,y,η_s) =1/N· W(x,y,η_s) · H(η_s) · H_B(η_s) , with the overall factor K set by the multiplicity distribution (dN_ch/dη or dN_ch/dy) of soft hadrons, and N being a normalization factor for n_0. In Eqs. 
(<ref>) and (<ref>), a function H(η_s)=exp[-(|η_s|-η_w)^2/2σ^2_ηθ(|η_s|-η_w) ] is introduced to describe the plateau structure in the longitudinal distribution of emitted hadrons, in which η_w controls the width of the central rapidity plateau and σ_η determines the width (speed) of the Gaussian decay outside the plateau region <cit.>. In order to model the accumulation of baryons in the forward and backward rapidity regions, we also include the following distribution of baryon density in the longitudinal direction <cit.> H_B(η_s)=exp[-(η_s-η_n)^2/2σ^2_n]+exp[-(η_s+η_n)^2/2σ^2_n], where parameters η_n and σ_n are calibrated by the p_T spectra of protons and antiprotons <cit.>. Since we aim at exploring the hyperon polarization in the same framework, which is sensitive to the gradient of the fluid velocity field <cit.>, we need to extend the initialization model beyond the Bjorken approximation for the fluid velocity. Following Refs. <cit.>, we construct the initial energy-momentum tensor components as T^ττ =ε_0(x,y,η_s)cosh(y_L) , T^τη_s =1/τ_0ε_0(x,y,η_s)sinh(y_L) , where the rapidity variable is modeled as y_L≡ f_v y_CM. Here, the center of mass rapidity y_CM at a given transverse location (x,y) depends on both the beam energy y_beam≡arccosh[√(s_NN)/(2m_N)] and the imbalance between the participant thickness functions as y_CM=arctanh[T_1-T_2/T_1+T_2tanh (y_beam)], where m_N is the nucleon mass and f_v∈ [0, 1] parameterizes the fraction of y_CM deposited into the longitudinal flow velocity. This f_v parameter allows one to vary the magnitude of the longitudinal flow velocity gradient, which influences both local and global polarization of Λ(Λ̅) hyperons. When f_v=0, one recovers the Bjorken flow scenario with y_L=0 <cit.>. With Eqs. (<ref>) and (<ref>), the initial fluid velocity in the η_s direction is given by v_η_s=T^τη_s/(T^ττ+P), in which P is the pressure. In the present work, the initial fluid velocity in the transverse plane is assumed to be zero by setting T^τ x = T^τ y = 0. In Tab. <ref>, we summarize the parameters used to initialize the QGP medium in this study. The first four parameters (K, τ_0, σ_η, and η_w) are adjusted based on the rapidity dependence of the charged particle yields (dN_ch/dy) in the most central collisions at a given beam energy. With these parameters, the combination of our initial condition and the CLVisc hydrodynamic simulation is able to provide a good description of the p_T spectra of different types of identified particles (π^+, K^+, p and p̅) in different centrality regions across the RHIC-BES energies <cit.>. This provides a reliable baseline for our subsequent investigation on the global and local polarization of hyperons in this work. The last parameter (f_v) in In Tab. <ref> is adjusted according to the directed flow coefficients of π^-, p and p̅, Λ and Λ̅. The value of f_v we use here is different from the one used in Ref. <cit.> due to our different assumptions on the initial geometry of the QGP profile. With the decrease of the beam energy, a larger fraction of the longitudinal momentum of the colliding nuclei can be deposited into the initial longitudinal velocity <cit.>. With the model parameters listed above, we first present in Fig. <ref> the distributions of the initial energy density (middle panel) and net baryon number density (bottom panel) on the η_s-x plane for 20-50% (b=8.57 fm) Au+Au collisions at =27 GeV. Their values beyond the Bjorken approximation are solved from the modified energy-momentum tensor components in Eqs. 
(<ref>) and (<ref>). One may clearly observes a tilted geometry of the QGP fireball with respect to the beam direction within this initialization model. Apart from an asymmetrical shift along the forward and backward rapidity directions, a counterclockwise tilt in the η_s-x plane can be seen for both the energy and net baryon densities. Due to their different parameterizations in Eq. (<ref>) and Eq. (<ref>), the baryon density exhibits stronger shift towards large rapidity as well as stronger tilt compared to the energy density. As discussed in <cit.>, this could be understood with the string models of the initial state <cit.>: while the baryon density deposition is driven by the valence quarks in the participant nucleons, energy density deposition originates from the melting of strings that involves both valence and sea quarks. We expect stronger tilt of these density profiles in more peripheral collisions due to the stronger drag experienced by participant nucleons from spectators. In phenomenology, the asymmetry in the energy density is responsible for the rapidity-odd directed flow of soft hadrons, while the asymmetry in the baryon density affects the abundance of baryons and anti-baryons produced from different locations of the QGP <cit.>. §.§ Hydrodynamic evolution Starting with the initial condition constructed in the previous subsection, we utilize a (3+1)-D viscous hydrodynamic model CLVisc <cit.> to describe the further evolution of the QGP medium. Under finite baryon chemical potential, the hydrodynamic equations read <cit.> ∇_μ T^μν =0 , ∇_μ J^μ =0 , where the energy-momentum tensor T^μν and the net baryon current J^μ are defined as T^μν = ε U^μU^ν - PΔ^μν + π^μν , J^μ = nU^μ+V^μ , with ε, P, n, u^μ, π^μν, V^μ being the local energy density, pressure, net baryon density, flow velocity field, shear stress tensor and baryon diffusion current respectively. The projection tensor is given by Δ^μν = g^μν-u^μu^ν with the metric tensor g^μν = diag (1,-1,-1,-1). Effects of the bulk viscosity is not included in the present study yet. The dissipative currents π^μν and V^μ are given by the following expressions based on the Israel-Stewart-like second order hydrodynamic expansion <cit.>: Δ^μν_αβ (u·∂) π^αβ = -1/τ_π(π^μν - η_vσ^μν) - 4/3π^μνθ -5/7π^α<μσ_α^ν>+ 9/704/e+Pπ^<μ_απ^ν>α , Δ^μν (u·∂) V_ν = - 1/τ_V(V^μ-κ_B▽^μμ_B/T)-V^μθ -3/10V_νσ^μν , where θ = ∂· u is the expansion rate, σ^μν = ∂^<μ u^ν> is the shear tensor, η_v and κ_B are the shear viscosity and baryon diffusion coefficient. For an arbitrary tensor A^μν, its traceless symmetric part is given by A^<μν> = 1/2[(Δ^μαΔ^νβ+Δ^ναΔ^μβ)-2/3Δ^μνΔ^αβ]A_αβ <cit.>. The specific shear viscosity C_η_v and the baryon diffusion coefficient κ_B are model parameters in hydrodynamic simulation, which are connected to η_v and parameter C_B via C_η_v = η_v T/e+P, κ_B = C_B/Tn[1/3(μ_B/T)-nT/e+P] , where μ_B is the baryon chemical potential. In this work, we use C_η_v=0.08 and C_B=0.4 for all collision centrality classes <cit.>, and set the relaxation times as τ_π = 5C_η_v/T and τ_V = C_B/T. We solve the hydrodynamic equations using the NEOS-BQS equation of state (EOS) <cit.>, which extends the lattice EOS at zero net baryon density to finite net baryon density via the Taylor expansion <cit.>. This EOS provides a smooth crossover between the QGP and the hadron phase under the conditions of strangeness neutrality (n_S=0) and electric charge density n_Q = 0.4n_B. 
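To make the initialization described above concrete, the sketch below evaluates the tilted participant weight and the resulting T^ττ and T^τη_s at a single space point, following the expressions of the initial-condition subsection. It is schematic: the binary-collision term weighted by α, the plateau function H(η_s), the baryon profile H_B(η_s) and the overall normalization K are omitted, the inelastic cross section σ_NN (not quoted in the text) is set to an assumed ≈33 mb, and τ_0 is a placeholder for the tabulated initial proper time; the returned numbers are therefore illustrative only.

import numpy as np
from scipy.integrate import quad

RHO0, R0, D0 = 0.17, 6.38, 0.535   # fm^-3, fm, fm (Woods-Saxon parameters for Au)
A_AU, M_N = 197, 0.938             # mass number and nucleon mass (GeV)
SIG_NN = 3.3                       # assumed inelastic NN cross section in fm^2 (~33 mb)

def thickness(x, y):
    # nuclear thickness: line-of-sight integral of the Woods-Saxon density
    f = lambda z: RHO0 / (1.0 + np.exp((np.sqrt(x*x + y*y + z*z) - R0) / D0))
    return quad(f, -3.0 * R0, 3.0 * R0)[0]

def tilted_weight(x, y, eta_s, b, eta_t=8.0):
    # participant densities of the two nuclei and the tilted wounded-nucleon weight
    tp, tm = thickness(x - b / 2.0, y), thickness(x + b / 2.0, y)
    t1 = tp * (1.0 - (1.0 - SIG_NN * tm / A_AU) ** A_AU)
    t2 = tm * (1.0 - (1.0 - SIG_NN * tp / A_AU) ** A_AU)
    h_t = 2.07 * b                 # tilt strength H_t = 2.07 b/fm used in the text
    return t1 + t2 + h_t * (t1 - t2) * np.tan(eta_s / eta_t), t1, t2

def initial_tmunu(x, y, eta_s, b, sqrt_snn=27.0, f_v=0.23, tau0=1.0):
    # T^{tau tau} and T^{tau eta_s} up to the overall factor K and H(eta_s);
    # evaluate inside the overlap region, where t1 + t2 > 0
    wn, t1, t2 = tilted_weight(x, y, eta_s, b)
    y_beam = np.arccosh(sqrt_snn / (2.0 * M_N))
    y_cm = np.arctanh((t1 - t2) / (t1 + t2) * np.tanh(y_beam))
    y_l = f_v * y_cm
    return wn * np.cosh(y_l), wn * np.sinh(y_l) / tau0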
§.§ Particlization We use the isoenergy-density freezeout condition <cit.> in our study and determine the freezeout hypersurface by a fixed energy density (e_frz= 0.4 GeV/fm^3) <cit.>. We apply the Cooper-Frye formalism on this hypersurface to obtain the hadron momentum distribution: dN/p_T dp_T dϕ dy = g_i/(2π)^3∫_Σ p^μdΣ_μf_eq(1+δ f_π+δ f_V) . In the above equation, g_i is the spin-color degeneracy factor for identified hadrons, and dΣ_μ is the hypersurface element determined by the projection method <cit.>. The thermal distribution (f_ eq) and the out-of-equilibrium corrections (δ f_π and δ f_V) satisfy f_ eq = 1/exp[(p_μU^μ - Bμ_B )/T_f] ∓ 1 , δ f_π(x,p) = (1± f^eq(x,p)) p_μp_νπ^μν/2T^2_f(e+P), δ f_V(x,p) = (1± f^eq(x,p))(n_B/e+P-B/U^μp_μ)p^μV_μ/κ_B/ τ_V , where T_f is the chemical freezeout temperature, and B represents the baryon number of an identified hadron. The out-of-equilibrium corrections above are obtained from the Boltzmann equation via the relaxation time approximation <cit.>. Contributions from resonance decay have been taken into account in this work based on Ref. <cit.>, although hadronic scatterings after the QGP phase has not been included yet. §.§ Spin polarization In non-central heavy-ion collisions, the quarks are polarized due to the massive initial orbital angular momentum of the QGP fireball <cit.>. We assume collision system to be in local thermal equilibrium on the freezeout hypersurface. Meanwhile, the conservation of spin is respected during hadronization and resonance decay processes <cit.>. The polarization pseudo-vector for spin-1/2 fermions can be obtained using the modified Cooper-Frye formalism as <cit.> 𝒮^μ(𝐩)=∫ d Σ· p 𝒥_5^μ(p, X)/2 m ∫ d Σ·𝒩(p, X), where 𝒥^μ_5 is the axial charge current density and 𝒩^μ(p, X) is the number density of fermions in the phase space. Following the quantum kinetic theory <cit.>, 𝒮^μ(𝐩) can be decomposed into different sources as 𝒮^μ(𝐩) = 𝒮_thermal^μ(𝐩) +𝒮_shear^μ(𝐩)+𝒮_accT^μ(𝐩) +𝒮_chemical^μ(𝐩)+𝒮_EB^μ(𝐩), where 𝒮_thermal^μ(𝐩) = ∫ dΣ^σF_σϵ^μναβp_ν∂_αu_β/T, 𝒮_shear^μ(𝐩) = ∫ dΣ^σF_σϵ^μναβp_ν u_β/(u· p)T × p^ρ(∂_ρu_α+∂_αu_ρ-u_ρDu_α), 𝒮_accT^μ(𝐩) = -∫ dΣ^σF_σϵ^μναβp_νu_α/T(Du_β-∂_βT/T), 𝒮_chemical^μ(𝐩) = 2∫ dΣ^σF_σ1/(u· p)ϵ^μναβp_αu_β∂_νμ/T, 𝒮_EB^μ(𝐩) = 2∫ dΣ^σF_σ[ϵ^μναβp_αu_βE_ν/(u· p)T+B^μ/T], with F^μ = ħ/8m_ΛΦ(𝐩)p^μf_eq(1-f_eq), Φ(𝐩) = ∫ dΣ^μp_μf_eq. The five terms in Eq. (<ref>) represent polarization induced by the thermal vorticity (𝒮_thermal^μ), the shear tensor (𝒮_shear^μ), the fluid acceleration minus temperature gradient (𝒮_accT^μ), the gradient of chemical potential over temperature (𝒮_chemical^μ), and the external electromagnetic field (𝒮_EB^μ), respectively. Detailed expressions of these terms can be derived from the statistic model <cit.> and the Kubo formula <cit.>. Here, S^μ_shear and S^μ_chemical are also named as the shear-induced polarization (SIP) and the baryonic spin Hall effect (SHE) in literature <cit.>. Since the electromagnetic field decays rapidly and its evolution profile has not been well constrained in heavy-ion collisions yet, we only take into account the first four terms but neglect the 𝒮_EB^μ term in the current study <cit.>. The polarization vector of Λ (or Λ̅) in its rest frame can then be constructed as P⃗^*(𝐩) = P⃗(𝐩)-P⃗(𝐩) ·𝐩/p^0(p^0+m)𝐩, where P^μ(𝐩) ≡1/s𝒮^μ(𝐩), with s=1/2 being the particle spin. 
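As a small illustration of the last step, the helper below applies the rest-frame projection of the equation above to a polarization pseudo-vector evaluated in the collision frame; the momentum averaging with the weight Φ(𝐩), defined next, can then be carried out as a plain weighted mean over the freezeout sample. The array-based interface is our choice and is not taken from the original code.

import numpy as np

M_LAMBDA = 1.116  # GeV

def rest_frame_polarization(s_vec, p_vec, mass=M_LAMBDA, spin=0.5):
    # s_vec: spatial part of the polarization pseudo-vector S^mu in the collision frame
    # p_vec: Lambda three-momentum in GeV
    s = np.asarray(s_vec, dtype=float)
    p = np.asarray(p_vec, dtype=float)
    p0 = np.sqrt(mass * mass + np.dot(p, p))
    p_star = s - np.dot(s, p) / (p0 * (p0 + mass)) * p
    return p_star / spin          # P^mu = S^mu / s with s = 1/2

# the global polarization -P^y follows from averaging -rest_frame_polarization(...)[1]
# over p_T in [0.5, 3.0] GeV and |y| < 1 with the weight Phi(p)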
After averaging over the transverse momentum, one obtains the local polarization as ⟨P⃗(ϕ_p) ⟩ = ∫_y_min^y_maxdy ∫_p_Tmin^p_Tmaxp_Tdp_T [ Φ (𝐩)P⃗^*(𝐩)]/∫_y_min^y_maxdy ∫_p_Tmin^p_Tmaxp_Tdp_TΦ(𝐩) , in which ϕ_p is the azimuthal angle, and Φ(𝐩) is an integration on the freezeout hypersurface defined in Eq. (<ref>). The mass of Λ (or Λ) is set as m = 1.116 GeV. Finally, the global polarization of Λ and Λ is obtained by further averaging P⃗^*(𝐩) over ϕ_p in Eq. (<ref>). § NUMERICAL RESULTS In this section, we present the directed flow coefficient and polarization of Λ(Λ̅) hyperons in Au+Au collisions at = 27 GeV from the CLVisc hydrodynamic calculation using the tilted initial geometry with non-zero initial longitudinal flow velocity field. We first analyze the directed flow v_1 of pions, protons and antiprotons in various centrality classes to determine the H_t value for the tilted QGP fireballs at different centralities. Using the H_t value extracted from the directed flow, we then investigate the relation between the global polarization of Λ(Λ̅) hyperons and centrality, transverse momentum, and pseudo-rapidity in Sec. <ref>. We further study the dependence of the global polarization of Λ hyperons on the tilted QGP geometry and the initial velocity field in Sec. <ref>. The global polarization generated by different initial condition models – the tilted Glauber model, AMPT, and SMASH – are compared in Sec. <ref>. The correlation between global polarization and the directed flow of Λ̅ hyperons is investigated in Sec. <ref>. In the end, we present results for the local polarization of Λ hyperons in Sec. <ref> §.§ Directed flow of identified particles and global polarization of Λ hyperons We start with validating our model setup by comparing the directed flow of identified hadrons and global polarization of Λ(Λ̅) hyperons between our calculation and the STAR data <cit.> in Figs. <ref>-<ref>. The directed flow coefficient v_1 can be extracted as the first-order Fourier coefficient of the azimuthal distribution of particle momentum as v_1(y)=⟨cos(ϕ-Ψ_1)⟩=∫cos(ϕ-Ψ_1)dN/dy dϕdϕ/∫dN/dy dϕdϕ, where Ψ_1 is the first order event plane angle of a nucleus-nucleus collision. Due to the use of a smooth initial condition of the energy density and baryon number density, effects of event-by-event fluctuations have not been taken into account. As a result, the event plane coincides with the spectator plane, which can be identified using deflected neutrons measured at large rapidity. In Fig. <ref>, we first present the v_1 of different species of hadrons as a function of rapitidy in Au-Au collisions at =27 GeV. The transverse momentum range 0<p_T<3.0 GeV of these hadrons is used for the analysis. In the upper panel, we show the v_1 of π^- in three different centrality regions. By using a linear dependence H_t=2.07 b/fm between the tilt parameter and the impact parameter in Eq. (<ref>), a reasonable centrality dependence of the pion v_1 can be obtained. Using the same model setup, we present the v_1 of protons and anti-protons in the middle panel for a given centrality bin. As discussed in Refs. <cit.>, introducing the tilted geometry for the net baryon density provides a satisfactory description of the splitting of v_1 between p and p̅. Similarly, our model results on the v_1 of Λ and Λ̅ are also consistent with the STAR observation <cit.>, as shown in the lower panel of Fig. <ref>. In Fig. 
<ref>, we present the global polarization of hyperons along the out-of-plane direction, -P^y, analyzed within the kinematic region of p_T∈ [0.5 GeV, 3.0 GeV] and y∈ [-1, 1]. In the upper panels, we compare different contributions, i.e., different terms in Eq. (<ref>), to the polarization of Λ as functions of (from left to right) centrality, transverse momentum and rapidity, respectively. One observes that after integrating over p_T, the thermal vorticity is the dominant contributor to the global polarization of Λ across different centralities and rapidities (left and right). However, in the middle panel, it is interesting to note that opposite tends with respect to p_T can be seen between the thermal vorticity and shear tensor contributions: the former decreases while the latter increases as p_T becomes larger. Contribution from the shear term becomes non-negligible above p_T∼ 1 GeV and even becomes dominant above p_T∼ 1.5 GeV. Later, we will show that the p_T dependences of these two terms rely on the medium geometry and the longitudinal flow field of the QGP. In the lower panels of Fig. <ref>, we combine contributions from the four terms (thermal, shear, accT, and chemical) and present the global polarization (-P^y) of both Λ and Λ̅ as functions of centrality, transverse momentum and rapidity. Our model calculation provides a satisfactory description of the hyperon polarization compared to the STAR data <cit.>. Only a minor difference is observed between Λ and Λ̅, which results from the chemical term contribution to -P^y. In addition, due to the opposite p_T dependences between thermal and shear contributions (middle panel in the upper row), their combination leads to a non-monotonic dependence of -P^y on p_T (middle panel in the lower row). This feature can be examined with more precise data in the future, and provide more stringent constraints on different components of hyperon polarization. With these validations of our model calculation, we will explore the dependence of hyperon polarization on the medium profiles and its correlation with the directed flow in the rest of this work. §.§ Effects of the initial QGP geometry and longitudinal flow on global polarization In this subsection, we implement a detailed analysis on how the initial geometry and longitudinal flow profiles of the QGP affect the global polarization of Λ hyperons. In Fig. <ref>, we first fix the initial longitudinal flow velocity field with f_v=0.23 and study how the tilt of the QGP geometry influences different components of Λ polarization. The upper plot shows the global polarization as a function of p_T. And in each panel, we study how the H_t parameter affects each contribution – thermal, shear, accT, and chemical – to the Λ polarization. As H_t increases from 0 to 15, one observes the slope of -P^y(p_T) decreases from positive to negative values in the thermal vorticity term, while increases from negative to positive values in the shear tensor term. This could be understood with the -u_β∂_α T/T^2 component in the S_thermal^μ term and the u_β/T component in the S_shear^μ term, which are both amplified with a more asymmetric medium and lower temperature at mid-rapidity when H_t increases. Consequently, the non-monotonic dependence of their combination on p_T may provide additional constraint on the medium geometry if the experimental data becomes sufficiently precise. Little impact from H_t has been found on the Λ polarization from the fluid acceleration (accT) term and the SHE (chemical) term. 
A similar investigation is conducted in the lower plot of Fig. <ref>, where the Λ polarization is studied as a function of rapidity. As the value of H_t increases from 0 to 15, the dip structure of the Λ polarization at mid-rapidity from the thermal vorticity term gradually transits into a peak structure. The value of this global polarization near y=0 is enhanced from 0.40 to 0.73. For the other three terms of global polarization, impact of this tilted deformation of the QGP appears small. In Fig. <ref>, we combine contributions from the four terms above and present the total value of Λ polarization as functions of both p_T (upper panel) and y (lower panel). When the f_v parameter is fixed at 0.23, one observes an enhancement in the value of -P^y as one increases the tilt parameter H_t. Meanwhile, a clear non-monotonic behavior of polarization with respect to p_T appears when H_t is sufficiently large, which may serve as a signature of the tilted geometry of the QGP fireball. Similarly, we study the relation between the longitudinal flow velocity field (or f_v) and the global polarization in Figs. <ref> and <ref>. Here, we fix H_t=2.07b/fm for the medium geometry, which is fitted from the centrality dependence of the hadron v_1 earlier. In Fig. <ref>, we present p_T (upper plot) and y (lower plot) dependences of -P^y for four different contributions separately. As one increases the value of f_v from 0 to 0.3, an enhanced global polarization is seen from the thermal vorticity term. This can be understood with the stronger longitudinal velocity gradient deposited into the QGP when f_v becomes larger, which directly increases the global vorticity of the medium and therefore the Λ polarization. On the other hand, little variation is observed in the other three terms when we change the f_v parameter. The total value of polarization is presented in Fig. <ref> after contributions from the four terms are combined. When the medium geometry is fixed via H_t=2.07b/fm, a non-monotonic p_T dependence of Λ polarization can be observed in the upper panel for different values of f_v applied here. Increasing the f_v value significantly enhances the magnitude of polarization. As shown in the lower panel, this enhancement appears more prominent at mid-rapidity than at large rapidity. §.§ Comparison between different initialization models Constraining the initial condition from the final state hadron observables is an ongoing effort of heavy-ion programs. It has been suggested in Ref. <cit.> that the Λ polarization can be affected by implementing different initialization models. Therefore, it is of great interest to investigate whether the initial condition we develop in this work introduces further impacts on polarization. In this subsection, we compare the Λ polarization between three different initialization methods: the titled optical Glauber model described in Sec. <ref>, SMASH <cit.> and AMPT <cit.>. The parameters and settings of SMASH and AMPT are identical to those used in Ref. <cit.>. And after the CLVisc hydrodynamic evolution, these three initial conditions are able to produce comparable p_T spectra of charged particles. Shown in Fig. <ref> is the global polarization of Λ in 20-50% Au+Au collisions at =27 GeV as functions of p_T (upper panel) and y (lower panel), compared between CLVisc hydrodynamic calculations with three different initialization models. 
One can observe a larger value of polarization from using our current tilted optical Glauber model (labeled as “CCNU") than from using SMASH and AMPT. This results from both the tilted geometry of the QGP fireball and the longitudinal flow gradient introduced in our current model. As discussed in the previous subsection, the tilted geometry also gives rise to the non-monotonic p_T dependence of the global polarization, which is absent in results from using the other two initialization models. When the tilt is strong, the magnitude of shear induced polarization polarization increases rapidly with p_T. On the other hand, this shear term from SMASH or AMPT initial condition only increases moderately in the given p_T region. Currently, it is hard to distinguish between the three initialization models based on the experimental data due to its large uncertainties. Future measurements with higher precision may help better constrain the initial condition in heavy-ion collisions. §.§ Correlation between global polarization and directed flow As seen in the previous two subsections, the value of hyperon polarization strongly depends on the initial condition of the QGP. Meanwhile, the initial geometry and flow field of the QGP also determine the collective flow coefficients of the final state hadrons. Therefore, one would naturally expect certain correlation between these two observables in heavy-ion collisions, as already suggested by both experimental data <cit.> and theoretical studies <cit.>. In this subsection, we will combine our analyses on the directed flow and global polarization of hyperons and explore how they are correlated with each other. Similar to Figs. <ref>-<ref>, we first review the dependence of the hadron v_1 on the tilted geometry and the initial longitudinal flow profile in Fig. <ref> for 10-40% Au+Au collisions at √(s_NN)=27 GeV. Here we choose the Λ̅ hyperon since the anisotropy of the anti-baryons is mainly driven by the energy distribution of the QGP rather than the baryon number density deposited by the projectile and target nuclei <cit.>. In the upper panel, we fix the f_v=0.23 parameter for the initial longitudinal flow and vary the H_t parameter for the tilted deformation of the medium geometry. One observes that as H_t increases from 0 to 25, the slope of directed flow with respect to rapidity (dv_1/dy) around mid-rapidity decreases from positive to negative values. On the other hand, when we fix H_t=14.8 (using H_t=2.07b/fm) for the medium geometry and vary f_v for the longitudinal flow in the lower panel, one observes an increase in dv_1/dy from negative values towards 0. These observations are consistent with our findings for anti-baryons in a prior work <cit.> on the directed flow coefficients of different hadron species at the BES energies. In Fig. <ref>, we combine results of dv_1/dy and -P^y of Λ̅ around mid-rapidity from our hydrodynamic calculation using different values of H_t and f_v. According to Figs. <ref>, <ref> and <ref>, when f_v=0.23 is fixed, increasing H_t increases -P^y but decreases dv_1/dy. This leads to an anti-correlation between the global polarization and the slope of directed flow of Λ̅, as shown by the red diamond symbols in Fig. <ref>. Contrarily, when H_t=14.8 is fixed, increasing f_v simultaneously increases dv_1/dy and -P^y of Λ̅, resulting in a positive correlation between these two observables as shown by the green star symbols. In both cases, good linear relations can be seen between the v_1 slope and the global polarization of Λ̅. 
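The correlation discussed above can be extracted with a few lines of analysis code. In the sketch below, the mid-rapidity slope dv_1/dy is obtained from a linear fit of v_1(y), and the slopes collected from a scan over H_t (or f_v) are then correlated with the corresponding global polarization values; the rapidity window |y|<0.5 and the use of a Pearson coefficient are our illustrative choices.

import numpy as np

def v1_slope(y, v1, y_cut=0.5):
    # dv1/dy near mid-rapidity from a linear fit of v1(y) within |y| < y_cut
    y, v1 = np.asarray(y), np.asarray(v1)
    sel = np.abs(y) < y_cut
    return np.polyfit(y[sel], v1[sel], 1)[0]

def slope_polarization_correlation(slopes, minus_py):
    # Pearson correlation between dv1/dy and -P^y accumulated over a parameter scan
    return np.corrcoef(slopes, minus_py)[0, 1]

A positive coefficient corresponds to the f_v scan and a negative one to the H_t scan described above.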
Therefore, as suggested in Ref. <cit.>, between directed flow and global polarization, one may infer the value of one from the other. §.§ Local polarization of Λ hyperons In the end, we complete our study by presenting the local polarization of Λ hyperons. Shown in Fig. <ref> is the local polarization in the -ŷ direction as a function of the azimuthal angle (ϕ_p) in 20-50% Au+Au collisions at =27 GeV, compared between CLVisc hydrodynamic calculations using different H_t (upper panel) and f_v (lower panel) parameters. Consistent with our previous conclusions on the global polarization, enhancing the tilted deformation of the QGP or its initial longitudinal flow gradient also increases the local value of -P^y at different ϕ_p between 0 and π. Similarly, increasing H_t and f_v also enhances the magnitude of local polarization in the z direction (|P^z|), as shown in the upper and lower panels of Fig. <ref> respectively. Note that the cosine-like feature of -P^y and the negative sine shape of P^z with respect to ϕ_p are both opposite to observations in the experimental data <cit.>. Although it has been proposed that contributions from the shear induced term and the spin Hall term help improve the theoretical description of local polarization towards the experimental observation <cit.>, after combining them with the dominating term of thermal vorticity, the discrepancies still exist in our current results. § CONCLUSIONS We have studied the hyperon polarization and its correlation with the directed flow of hadrons in Au+Au collisions at =27 GeV. The CLVisc hydrodynamic simulation is coupled to a modified 3D Glauber initial condition that models a tilted QGP medium with an initial longitudinal velocity field. Using model parameters determined by the directed flow coefficient of different species of identified particles, our calculation provides a satisfactory description of the global polarization of Λ(Λ̅) hyperons observed at the STAR experiment, as functions of centrality, transverse momentum and rapidity. We find that the thermal vorticity dominates the p_T-integrated global and local polarization of hyperons, while the shear-induced polarization is important at high p_T. Increasing the counterclockwise tilt of the QGP fireball with respect to the beam direction enhances the thermal vorticity contribution to the Λ polarization at low p_T, while suppresses its contribution at high p_T. The opposite trend is found for the shear-induced contribution. Therefore, a non-monotonic dependence on p_T is found for the global polarization of Λ with the presence of a tilted QGP profile. Effects of this tilted geometry on the fluid acceleration term and the baryonic spin Hall term are found small in our calculation. Depositing stronger initial longitudinal flow velocity into the QGP gives rise to a larger orbital angular momentum and therefore a larger thermal vorticity contribution to the Λ polarization. However, effects of this initial velocity on the other three terms of polarization are found negligible. Compared to the same hydrodynamic simulation using SMASH or AMPT initial condition, our current calculation provides a larger value of Λ polarization, indicating the sensitivity of global polarization to the initial geometry and the longitudinal flow velocity of the QGP. Furthermore, a strong correlation is found between the Λ polarization and its directed flow coefficient. 
When the medium geometry is fixed, the Λ polarization is linearly correlated with the slope of v_1(y) near mid-rapidity as the initial longitudinal flow velocity is varied. To the contrary, these two quantities are linearly anti-correlated when the initial flow is fixed while the tilt of the medium is varied. These imply the medium geometry and the longitudinal flow velocity are the common origin of polarization and directed flow, and therefore the combination of these two observables may provide a tight constraint on the initial condition of the QGP produced in non-central heavy-ion collisions. The framework presented in the present work can be extended to studying the hyperon polarization at other beam energies at RHIC and LHC. However, apart from the medium geometry and longitudinal flow profile, other effects might be crucial for understanding the polarization phenomenology at lower collision energies. For instance, the electromagnetic field produced in energetic nuclear collisions can cause directional drift of charged quarks and thus affect the splitting of global polarization between Λ and Λ̅ <cit.>. The deformation of nuclear structure may also contribute to the polarization of hyperons <cit.>. In addition, the correlation between the hyperon polarization and its directed flow found in this work can be further extended to correlation with hard probe observables for an even more stringent constraint on the QGP properties <cit.>. These aspects will be explored in our upcoming efforts. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11935007, 12175122 and 2021-867, Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, the Natural Science Foundation of Hubei Province No. 2021CFB272, the Education Department of Hubei Province of China with Young Talents Project No. Q20212703, the Open Foundation of Key Laboratory of Quark and Lepton Physics (MOE) No. QLPL202104 and the Xiaogan Natural Science Foundation under Grant No. XGKJ2021010016. unsrt
http://arxiv.org/abs/2307.04335v2
20230710041331
The tree-child network problem for line trees and the shortest common supersequences for permutations are NP-hard
[ "Laurent Bulteau", "Louxin Zhang" ]
math.CO
[ "math.CO", "05A16, 05C30, 92D15" ]
Reconstructing phylogenetic networks presents a significant and complex challenge within the fields of phylogenetics and genome evolution. One strategy for reconstruction of phylogenetic networks is to solve the phylogenetic network problem, which involves inferring phylogenetic trees first and subsequently computing the smallest phylogenetic network that displays all the trees. This approach capitalizes on exceptional tools available for inferring phylogenetic trees from biomolecular sequences. Since the vast space of phylogenetic networks poses difficulties in obtaining comprehensive sampling, researchers have switched their attention to inferring tree-child networks from multiple phylogenetic trees, where in a tree-child network each non-leaf node must have at least one child that is a tree node (i.e. an indegree-one node). We prove that the tree-child network problem for multiple line trees remains NP-hard by a reduction from the shortest common supersequence problem for permutations and by proving that the latter is NP-hard. § INTRODUCTION Recent genomic studies have highlighted the significant roles of recombination and introgression in genome evolution <cit.>. Consequently, there has been an increasing use of phylogenetic networks to model the evolution of genomes with the presence of recombination, introgression and other reticulate events <cit.>. A phylogenetic network is a rooted directed acyclic graph (DAG) that represents taxa (genomes, individuals, or species) as its leaves and evolutionary events (speciation, recombination, or introgression) as its internal nodes. Over the past three decades, substantial progress has been made in understanding the theoretical aspects of phylogenetic networks <cit.> (see also <cit.>). The space of phylogenetic networks is vast, making it challenging to perform comprehensive sampling. As a result, popular methods like maximum likelihood and Bayesian approaches, commonly used for phylogeny reconstruction, are not efficient enough for reconstructing phylogenetic networks containing a large number of reticulate events on more than 10 taxa <cit.>. This has prompted researchers to focus on inferring phylogenetic networks with specific combinatorial properties <cit.>. Popular classes of phylogenetic networks include galled trees <cit.>, galled networks <cit.>, and tree-child networks <cit.>, which can be enumerated and counted efficiently <cit.>. Furthermore, researchers are also investigating the parsimonious inference of phylogenetic networks from multiple trees, aiming to infer a network with the smallest hybridization number (HN) that displays all the trees <cit.>. The HN, a generalization of the number of reticulate nodes in binary phylogenetic networks, quantifies the complexity of the network (refer to Section <ref> for more details). Notably, a scalable method has been recently developed to compute a tree-child network with the minimum HN from binary trees <cit.>. Inference of an arbitrary phylogenetic network with the smallest HN is known to be NP-hard, even in the case of two input trees <cit.> and in the case where tree-child networks are inferred <cit.>. In this paper, we prove that the problem remains NP-hard even for inferring tree-child networks from line trees. § BASIC CONCEPTS AND NOTATION Let X be a set of taxa. In this paper, a phylogenetic network on X is a rooted DAG such that: * The root is of indegree 0 and outdegree 1. There is at least one directed path from the root to every other node.
* The leaves (which are of indegree 1 and outdegree 0) are labeled one-to-one with the taxa. * All nodes except for the leaves and the root are either a tree node or a reticulate node. The tree nodes are of indegree 1 and outdegree 2, whereas the reticulate nodes are of indegree more than 1 and outdegree 1. In a phylogenetic network, a node u is said to be below another v if there exists a directed path from v to u. A phylogenetic network is binary if every reticulate node is of indegree 2. A binary phylogenetic tree is a binary phylogenetic network that does not have any reticulate nodes. In this paper, a binary phylogenetic tree is simply mentioned as a binary tree. A line tree is a binary tree in which all internal nodes but the out-degree-1 root have at least one child that is a leaf. An important parameter of a phylogenetic network is the hybridization number (HN). It is defined as the sum over all the reticulation nodes of the indegree of that node minus the number of the reticulate nodes. Note that for a binary phylogenetic network B, each reticulate node has indegree 2 and thus the HN of B is equal to the number of the reticulate nodes of B. A tree-child network is a phylogenetic network in which every non-leaf node has at least one child that is either a tree node or a leaf. Let Σ be an n-letter alphabet, and ℓ be a new letter not in Σ. For a permutation P=p_1p_2⋯ p_n on Σ, we use T(P) to denote the line tree on Σ∪{ℓ} that has the node set Σ∪{r, v_i, ℓ | 1≤ i≤ n} and the directed edge set { (r, v_1), (v_i, v_i+1), (v_i, p_i), (v_n, ℓ), (v_n, p_n) | 1≤ i≤ n-1} (left, Figure  <ref>). Let v be a node of indegree 1 and outdegree 1 in a directed graph. Then, there is a unique edge (u, v) entering v and a unique edge (v, w) leaving v. We contract v by removing v and replacing (u, v) and (v, w) with a new edge (u, w). For a sequence Q=q_1q_2⋯ q_m on Σ, we use N(Q) to denote the one-component tree-child network on Σ∪{ℓ} that is obtained by applying degree-2 node contraction from the DAG consisting of the node set Σ∪{r, ℓ, v_i, r_j | 1≤ i≤ m, 1≤ j≤ n} and the directed edge set E_1∪ E_2, where E_1={ (r, v_1), (v_i, v_i+1), (v_m, ℓ), | 1≤ i≤ m-1}∪{(r_j, a_j) | a_j∈Σ} and E_2 contains (v_i, r_j) if q_i=a_j for every possible i and j (right, Figure <ref>). Clearly, the HN of N(Q) is m-n. §.§ The tree-child network problem A binary tree is displayed in a tree-child network if it can be obtained from the network by (i) deletion of all but one incoming edge for each reticulate node and subsequently (ii) contraction of all indegree-1 and out-degree-1 nodes. We focus on how to infer a tree-child network with the minimum HN that display all the input trees. This problem is formally defined as: The Tree-Child Network (TCN) Problem Input A set of binary trees on X. Output A tree-child network with the minimum HN that displays all the trees. §.§ The shortest common supersequence problem A string on an alphabet is a supersequence of another if the latter can be obtained from the former by the deletion of 0 or more letters. A string is a common supersequence of multiple strings if it is a supersequence of every string. The length of a string is the total number of the occurrences of the letters in the string. A common supersequence is a shortest common supersequence (SCS) if it has the smallest length, over all the common supersequences of the strings. The SCS problem is formally defined as: Input A set of strings on an alphabet. Output A SCS of the strings. 
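For two input strings the problem is easy: an SCS can be found by a standard dynamic program. The following short Python sketch (ours, not part of the paper) computes the SCS length of two strings.

```python
def scs_length(a: str, b: str) -> int:
    """Length of a shortest common supersequence of two strings."""
    m, n = len(a), len(b)
    # dp[i][j] = SCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # only letters of a remain
    for j in range(n + 1):
        dp[0][j] = j                      # only letters of b remain
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1               # share the common letter
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

assert scs_length("dc", "el") == 4        # e.g. "dcel" or "delc"
```

This runs in O(|a||b|) time; the hardness discussed next arises when the number of input strings is unbounded.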
The SCS problem is a fundamental NP-complete problem <cit.>. § TREE-CHILD NETWORK INFERENCE VIA LINEAGE TAXA STRINGS Let X be a set of n taxa and T_i (1≤ i≤ k) be k binary trees on X. The minimum tree-child networks that display all the k trees can be constructed from the lineage taxon strings (LTSs) of the taxa under an ordering on X <cit.>. In this section, we restate the construction process on which our main result is based. Consider an ordering π on X. For any x, x'∈ X, we write x<_π x' if x is less than x' under π. For a node u of a tree on X, we use min_π(u) to denote the smallest of the taxa below u. We label the root with the smallest taxon under π and each non-root internal node u with the larger of min_π(u') and min_π(u”), where u' and u” are the two children of u. In this way, the root and the remaining n-1 internal nodes are uniquely labeled with a taxon. Moreover, the leaf f is below the unique internal node w that has been labeled with f. As a result, there exists a path P_wf from w to f. The LTS of the taxon f consists of the taxon labels of the internal nodes of P_wf other than w, listed from top to bottom. For example, if the alphabet ordering (i.e. a<b<c<d<e<ℓ) is used, in the tree in Figure <ref>, the root is labeled with a; v_1 to v_5 are labeled with e, d, b, c, ℓ, respectively. Therefore, the LTSs of a, b, c are edb, c, ℓ, respectively, whereas the LTSs of d, e, ℓ are the empty string. Let π be π_1<π_2 <⋯ <π_n. Note that X={π_1, π_2, ⋯, π_n}. We further assume that β_1, β_2, ⋯, β_n are n sequences satisfying the following conditions: (C1) For each i<n, β_i is a string on {π_i+1, ⋯, π_n}; (C2) β_n is the empty sequence. It is proved in <cit.> that the following algorithm outputs a tree-child network containing all T_i, written as N(π, {β_i}^n_i=1), whose HN is equal to ∑_1≤ i≤ n|β_i| - (n-1). Tree-Child Network Construction <cit.> 1. (Vertical edges) For each β_i, define a path P_i with |β_i| +2 nodes: h_i, v_i1, v_i2, ⋯, v_i|β_i|, π_i, where β_n is the empty sequence. 2. (Left–right edges) Arrange the n paths from left to right as P_1, P_2, ⋯, P_n. If the m-th symbol of β_i is π_j, we add an edge (v_im, h_j) for each i and each m. 3. For each i>1, contract h_i if h_i is of indegree 1. Consider k binary trees T_j on X. We write α_ji for the LTS of π_i in T_j for each i≤ n and each j≤ k. Then, for each j, α_j1, α_j2, ⋯, α_jn satisfy the conditions (C1) and (C2). Moreover, let β_i be an SCS of α_1i, α_2i, ⋯, α_ki for each i. The sequences β_1, β_2, ⋯, β_n also satisfy the conditions (C1) and (C2). Let T_j (1≤ j≤ k) be k trees on X and let N be a tree-child network on X that displays all the trees. If N has the minimum HN, there exists a permutation π such that N=N(π, {β_i }^n_i=1), where β_i is an SCS of the LTSs α_ji of π_i in the input trees T_j, and whose HN is ∑_1≤ i≤ n|β_i| - (n-1). The proof of Theorem <ref> appears in Section A of the Supplemental Methods of <cit.>. Since there can be multiple SCSs for a set of sequences, Theorem <ref> implies that the TCN problem may have multiple solutions. § EQUIVALENCE OF THE TCN AND SCS PROBLEMS By Theorem <ref>, we know that the TCN problem can be solved by solving an SCS sub-problem for the LTSs of each taxon. Aiming at a reduction from SCS to TCN, we now show that for any instance of SCS where all input strings are permutations, an instance of TCN can be built such that a single taxon has a non-trivial LTS in each tree, and such that each such LTS is exactly one of the input permutations.
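To make the labelling procedure concrete, here is a small Python sketch (ours, not the authors' code) that computes the LTSs of the line tree T(P) under a given ordering; taxa are single characters and the extra leaf ℓ is written as 'l'.

```python
def line_tree_lts(P, order):
    """Lineage taxon strings (LTSs) of the line tree T(P) under an ordering.
    P is a permutation of the alphabet; order lists all taxa (including the
    extra leaf 'l') from smallest to largest."""
    rank = {x: i for i, x in enumerate(order)}
    n = len(P)
    taxa = list(P) + ["l"]
    # suffix_min[i] = smallest taxon among P[i:] together with 'l'
    suffix_min = ["l"] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_min[i] = min(P[i], suffix_min[i + 1], key=rank.get)
    # internal nodes: root, v_1, ..., v_n, labelled as in the text
    labels = [suffix_min[0]]                       # root gets the overall minimum
    for i in range(n):                             # v_{i+1} has children P[i] and (v_{i+2} or leaf 'l')
        labels.append(max(P[i], suffix_min[i + 1], key=rank.get))
    # LTS of taxon f: labels strictly below the node labelled f on the path to leaf f
    leaf_depth = {P[i]: i + 1 for i in range(n)}   # leaf P[i] hangs off v_{i+1}
    leaf_depth["l"] = n
    lts = {}
    for f in taxa:
        w = labels.index(f)                        # depth of the node labelled f
        lts[f] = "".join(labels[w + 1: leaf_depth[f] + 1])
    return lts
```

Calling line_tree_lts("edabc", "abcdel") reproduces the LTSs edb, c and l of a, b and c from the example above.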
Consider an instance of the SCS problem consisting of k permutations P_i (1≤ i≤ k) on Σ. By Theorem <ref>, all the tree-child networks with the smallest HN that display all the T(P_i) can be obtained from the LTSs of the taxa under an ordering. Consider an ordering π: π_1< π_2 <⋯ <π_n <π_n+1 on Σ∪{ℓ}. We have the following two cases. Case 1: ℓ=π_t, where t>1. For each i, we let P_i=p_i1p_i2⋯ p_in. If p_in >_πℓ, the LTS of the leaf ℓ ends with p_in and thus is nonempty in T(P_i), whereas the LTS of the leaf p_in is empty in T(P_i). If p_in<_πℓ, the LTS of ℓ is empty, whereas the LTS of p_in contains ℓ and thus is nonempty in T(P_i). In general, define β_i1=π_1. For each j≥ 1 such that β_ij=p_ix <_πmin_π{p_in, ℓ}, define β_i(j+1)=min_π{ p_i(x+1), ⋯, p_in, ℓ}. We obtain a sequence: β_i1=π_1, β_i2, ⋯, β_iw_i= min_π{p_in, ℓ}. Then, in T(P_i), the LTS of β_ij ends with β_i(j+1) and thus is nonempty under π for each j<w_i; the LTS of β_iw_i ends with ℓ if β_iw_i=p_in and with p_in if β_iw_i=ℓ. It is also true that the LTS is empty for any other taxon under π. Moreover, we have the following fact. Let the LTS of β_ij be S_ij in T(P_i) under π: π_1<π_2<⋯ <π_n+1, where ℓ≠π_1. Then, for each i, P_i= S_i1[1, |S_i1|-1]β_i1S_i2[1, |S_i2|-1]β_i2⋯ S_i(w_i-1)[1, |S_i(w_i-1)|-1]β_i(w_i-1)S'_iw_i, where S'_iw_i=S_iw_i if β_iw_i=ℓ, and S'_iw_i=S_iw_i[1, |S_iw_i|-1]β_iw_i if β_iw_i≠ℓ. Here, S_it[1, |S_it|-1] denotes the string obtained by removing the last letter of S_it for each possible t, and the right-hand side is the concatenation of the strings and letters. Example 1. For the line tree in the left panel of Figure <ref>, the corresponding permutation is P: edabc on the alphabet {a, b, c, d, e}. Under the ordering a<b<c<d<e<ℓ, β_1=a, β_2=b, β_3=c, whose LTSs are edb, c, ℓ, respectively. Proposition <ref> is verified by ed· a· b· c=P, where the symbol '·' is added to indicate the different parts of P for clarity. Let the LTS of β_ij be S_ij in T(P_i) under π. Fix a π_j for some 1≤ j≤ n+1. If the LTS of π_j is empty for every i, define Q_j to be the empty string. If S_ij is nonempty only for the indices i_1, i_2, ⋯, i_j, we define Q_j to be the string obtained from W_j=scs(S_i_1j, S_i_2j, ⋯, S_i_jj) by removing the last letter of W_j, where scs(⋯) denotes a shortest common supersequence of the given strings. Note that different SCSs of the strings give different Q_j of the same length. Example 2. Consider the ordering π: a<b<c<ℓ<d<e for the three line trees in Figure <ref>. The LTSs of the taxa under π in the three trees are listed in the following table, from which we obtain a tree-child network with an HN of 5 (right, Figure <ref>).
Taxon | LTS in T(P_1) | LTS in T(P_2) | LTS in T(P_3) | SCS
a | eb | cb | cb | ecb
b | dc | eℓ | ℓ | dceℓ
c | ℓ | ϵ | ϵ | ℓ
ℓ | ϵ | d | ed | ed
d | ϵ | ϵ | ϵ | ϵ
e | ϵ | ϵ | ϵ | ϵ
Here, ϵ denotes the empty string. The LTSs of b are dc, eℓ and ℓ in T(P_1), T(P_2), T(P_3), respectively. One SCS of dc, eℓ, ℓ is dceℓ, which gives Q_b=dce; the alternative SCS deℓc gives Q_b=deℓ. Similarly, we obtain Q_a=ec and Q_ℓ=e, while Q_c, Q_d and Q_e are empty. Let π_h_1<_ππ_h_2<_π⋯ <_ππ_h_j be the taxa whose LTS is nonempty in at least one of the trees (equivalently, the taxa for which W is defined above). If π_h_j=ℓ, we set Q=Q_h_1π_h_1Q_h_2π_h_2⋯ Q_h_j-1π_h_j-1 W_h_j. If π_h_j <_πℓ, then ℓ must appear in Q_h_j if it is not removed. In this case, we set Q to the string obtained from Q_h_1π_h_1Q_h_2π_h_2⋯ Q_h_j-1π_h_j-1Q_h_jπ_h_j by deleting the occurrences of ℓ. Since |Q| is equal to or less than the sum of the lengths of the SCSs of the LTSs of the π_i in the k line trees T(P_j) (1≤ j≤ k), the HN of N(Q) is equal to or less than the HN of N_π.
On the other hand, by Proposition 1, Q is a common supersequence of P_1, P_2, ⋯, P_k. Thus, |Q|≥ |scs(P_1, P_2, ⋯, P_k)|. Therefore, the HN of the one-component tree-child network N(Q) is not less than that of N(scs(P_1, P_2, ⋯, P_k)). Example 2 (Continued). For the trees in Figure <ref>, using the SCS deℓc for the LTSs of b (so that Q_b=deℓ), Q=Q_aa· Q_bb· Q_cc· W_ℓ=eca· deℓ b· c· ed. After removing ℓ, we obtain Q'=ecadebced, which is also a supersequence of eadbc, caebd, and cabed. The one-component tree-child network N(Q') is shown in Figure <ref>. Case 2: ℓ=π_1. By definition, the LTS is P_i for ℓ and the empty string for π_i for every i>1. In this case, we obtain a tree-child network N(scs(P_1, P_2, ⋯, P_k)). Taken together, the two cases imply the following result. Let N be the tree-child network constructed from T(P_1), T(P_2), ⋯, T(P_k) by applying the algorithm with an ordering π: π_1<π_2<⋯ < π_n+1. It has the smallest HN if and only if ℓ is the smallest element under π, in which case N=N(scs(P_1, P_2, ⋯, P_k)). Propositions 1 and 2 imply the following result. Let X be a set of taxa such that |X|=n+1 and let T be a set of line trees on X in which there is a common lowest leaf ℓ. There is a tree-child network displaying all the trees of T with q reticulations if and only if the permutations on X∖{ℓ} that correspond to the line trees have an SCS of length n+q. § NP-HARDNESS OF THE SCS PROBLEM FOR PERMUTATIONS The SCS problem is NP-hard for permutations. SCS is already known to be NP-hard when all input strings consist of 2 distinct characters <cit.>; let us denote this variant 2-SCS (we further need the trivial constraint that no character appears in every input string). We thus provide a reduction as follows: consider an instance 𝒮 of 2-SCS with m length-2 strings over a size-n alphabet X={x_1,…,x_n}, and an integer k. Let N=n+k+1, and create a size-N set Y={y_1,…,y_N} of separators. In the context of strings, we also write X and Y for the strings x_1… x_n and y_1… y_N, respectively. For any string ab∈𝒮 (with a,b∈ X and a≠ b), we write X_-ab for the subsequence of x_1x_2… x_n obtained by removing a and b, and S_ab = a b · Y · X_-ab. Note that each S_ab is a permutation of X∪ Y. Let us write 𝒮' = {S_ab, ab∈𝒮} and k'=k+N+n. We now prove the following equivalence, which completes the reduction. Strings in 𝒮 have a common supersequence T of size k ⇔ Strings in 𝒮' have a common supersequence T' of size k' ⇒ Build T' = T · Y · X. String T' is a length-k' string, and it is a supersequence of every S_ab∈𝒮' (since T is a supersequence of ab and X is a supersequence of X_-ab). ⇐ Pick such a string T'. It contains at least one occurrence of Y as a subsequence. Let P,R be the matching prefix and suffix of T' (i.e. T'=P· R) such that R is the smallest suffix containing Y as a subsequence. Let T be the subsequence of P obtained by removing all separator characters. We have |P|≤ k'-N = k+n <N, so P cannot contain an entire copy of Y. Hence, for any S_ab = ab· Y· X_-ab∈𝒮', we have that ab is a subsequence of P and X_-ab is a subsequence of R. Overall, P, and also T, are common supersequences of all ab∈𝒮, and R is a common supersequence of all X_-ab. In order to bound their sizes, note that R contains each character of X and Y at least once (every character of X appears in some X_-ab, since no character occurs in every input string), so |R|≥ N+n. Hence, T has size at most k'-N-n=k, and is a common supersequence of 𝒮. The TCN problem is NP-hard even for line trees. Proof. The statement is derived from Theorem 2 and Theorem 3. Open Problem Does the TCN problem remain NP-hard for two line trees?
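To make the reduction in the proof above concrete, the following Python sketch constructs the permutation instance 𝒮' and the budget k' from a 2-SCS instance; the function and variable names are ours, not the paper's.

```python
def build_permutation_instance(pairs, alphabet, k):
    """Build the permutation instance S' and budget k' from a 2-SCS instance
    whose strings are the two-symbol pairs in `pairs`."""
    n = len(alphabet)
    N = n + k + 1
    Y = [f"y{i}" for i in range(1, N + 1)]          # N fresh separator symbols
    S_prime = []
    for a, b in pairs:                               # each 2-SCS string is ab with a != b
        X_minus_ab = [x for x in alphabet if x not in (a, b)]
        S_prime.append([a, b] + Y + X_minus_ab)      # S_ab = ab . Y . X_{-ab}
    return S_prime, k + N + n                        # (S', k')

# toy instance over X = {x1, x2, x3} with budget k = 4
S_prime, k_prime = build_permutation_instance([("x1", "x2"), ("x2", "x3")], ["x1", "x2", "x3"], 4)
```

Each constructed S_ab has length N+n and is a permutation of X∪Y, as required by the argument.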
The TCN problem for three line trees was studied by Van Iersel et al. in <cit.>. § ACKNOWLEDGEMENTS LX Zhang was partially supported by Singapore MOE Tier 1 grant R-146-000-318-114 and Merlin 2023. He thanks Yufeng Wu for useful discussion in the early stage of this work. 10 albrecht2012fast Benjamin Albrecht, Celine Scornavacca, Alberto Cenci, and Daniel H Huson. Fast computation of minimum hybridization networks. Bioinformatics, 28(2):191–197, 2012. bordewich2007computing Magnus Bordewich and Charles Semple. Computing the minimum number of hybridization events for a consistent evolutionary history. Discrete Applied. Math., 155(8):914–928, 2007. cardona2009metrics2 Gabriel Cardona, Mercè Llabrés, Francesc Rosselló, and Gabriel Valiente. Metrics for phylogenetic networks II: Nodal and triplets metrics. IEEE/ACM-TCBB, 6(3):454–469, 2009. cardona2020counting Gabriel Cardona and Louxin Zhang. Counting and enumerating tree-child networks and their subclasses. Journal of Computer and System Sciences, 114:84–104, 2020. elworth2019advances RA Leo Elworth, Huw A Ogilvie, Jiafan Zhu, and Luay Nakhleh. Advances in computational methods for phylogenetic networks in the presence of hybridization. In Bioinformatics and Phylogenetics, pages 317–360. Springer, 2019. Fontaine_15 Michael C Fontaine, James B Pease, Aaron Steele, and et al. Extensive introgression in a malaria vector species complex revealed by phylogenomics. Science, 347(6217):1258524–1258524, 2015. garey1979computers Michael R Garey and David S Johnson. Computers and intractability. Freeman San Francisco, 1979. gogarten2005horizontal J Peter Gogarten and Jeffrey P Townsend. Horizontal gene transfer, genome innovation and evolution. Nature Reviews Microbiol., 3(9):679–687, 2005. gusfield2014book Dan Gusfield. ReCombinatorics: the algorithmics of ancestral recombination graphs and explicit phylogenetic networks. MIT press, 2014. huson2009computing Daniel H Huson, Regula Rupp, Vincent Berry, Philippe Gambette, and Christophe Paul. Computing galled networks from real data. Bioinformatics, 25(12):i85–i93, 2009. huson2010book Daniel H Huson, Regula Rupp, and Celine Scornavacca. Phylogenetic networks: concepts, algorithms and applications. Cambridge University Press, 2010. koblmuller2007reticulate Stephan Koblmüller, Nina Duftner, Kristina M Sefc, Mitsuto Aibara, Martina Stipacek, Michel Blanc, Bernd Egger, and Christian Sturmbauer. Reticulate phylogeny of gastropod-shell-breeding cichlids from lake tanganyika–the result of repeated introgressive hybridization. BMC Evol. Biol., 7(1):1–13, 2007. koonin2001horizontal Eugene V Koonin, Kira S Makarova, and L Aravind. Horizontal gene transfer in prokaryotes: quantification and classification. Annual Rev. Microbiol., 55(1):709–742, 2001. linz2019attaching Simone Linz and Charles Semple. Attaching leaves and picking cherries to characterise the hybridisation number for a set of phylogenies. Adv. Applied Math., 105:102–129, 2019. lutteropp2022netrax Sarah Lutteropp, Céline Scornavacca, Alexey M Kozlov, Benoit Morel, and Alexandros Stamatakis. Netrax: accurate and fast maximum likelihood phylogenetic network inference. Bioinformatics, 38(15):3725–3733, 2022. Marcussen_14 Thomas Marcussen, Simen R Sandve, Lise Heier, Manuel Spannagl, Matthias Pfeifer, The International Wheat Genome Sequencing Consortium, Kjetill S Jakobsen, Brande BH Wulff, Burkhard Steuernagel, Klaus FX Mayer, and Odd-Arne Olsen. Ancient hybridizations among the ancestral genomes of bread wheat. Science, 345(6194):1250092–1250092, 2014. 
mirzaei2015fast Sajad Mirzaei and Yufeng Wu. Fast construction of near parsimonious hybridization networks for multiple phylogenetic trees. IEEE/ACM Trans. Comput. Biol. Bioinform., 13(3):565–570, 2015. pickrell2012inference Joseph Pickrell and Jonathan Pritchard. Inference of population splits and mixtures from genome-wide allele frequency data. Nat Prec, 2012. solis2016inferring Claudia Solís-Lemus and Cécile Ané. Inferring phylogenetic networks with maximum pseudolikelihood under incomplete lineage sorting. PLoS genetics, 12(3):e1005896, 2016. steel2016phylogeny Mike Steel. Phylogeny: discrete and random processes in evolution. SIAM, 2016. timkovskii1989complexity VG Timkovskii. Complexity of common subsequence and supersequence problems and related problems. Cybernetics, 25:565–580, 1989. van2022practical Leo van Iersel, Remie Janssen, Mark Jones, Yukihiro Murakami, and Norbert Zeh. A practical fixed-parameter algorithm for constructing tree-child networks from multiple binary trees. Algorithmica, 84(4):917–960, 2022. van2023three Leo van Iersel, Mark Jones, and Mathias Weller. When three trees go to war. hal.science, 2023. wang2001perfect Lusheng Wang, Kaizhong Zhang, and Louxin Zhang. Perfect phylogenetic networks with recombination. Journal of Computational Biology, 8(1):69–78, 2001. wu2010close Yufeng Wu. Close lower and upper bounds for the minimum reticulate network of multiple phylogenetic trees. Bioinformatics, 26(12):i140–i148, 2010. yamada2020improved Kohei Yamada, Zhi-Zhong Chen, and Lusheng Wang. Improved practical algorithms for rooted subtree prune and regraft (rSPR) distance and hybridization number. J. Comput. Biol., 27(9):1422–1432, 2020. zhang2018bayesian Chi Zhang, Huw A Ogilvie, Alexei J Drummond, and Tanja Stadler. Bayesian inference of species networks from multilocus sequence data. Molecular biology and evolution, 35(2):504–517, 2018. zhang2019clusters Louxin Zhang. Clusters, trees, and phylogenetic network classes. In Bioinformatics and Phylogenetics: Seminal Contributions of Bernard Moret. Springer, 2019. zhang2019 Louxin Zhang. Generating normal networks via leaf insertion and nearest neighbor interchange. BMC Bioinform., 20(20):1–9, 2019. zhang2023fast Louxin Zhang, Niloufar Abhari, Caroline Colijn, and Yufeng Wu. A fast and scalable method for inferring phylogenetic networks from trees by aligning lineage taxon strings. Genome Research, 33:gr–277669, 2023.
http://arxiv.org/abs/2307.05330v1
20230708201724
The Value of Chess Squares
[ "Aditya Gupta", "Shiva Maharaj", "Nicholas Polson", "Vadim Sokolov" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Valuing chess squares and determining the placement of pieces on the board are the main objectives of our study. With the emergence of chess AI, it has become possible to accurately assess the worth of positions in a game of chess. The conventional approach assigns fixed values to pieces (King = ∞, Queen = 9, Rook = 5, Bishop = 3, Knight = 3, Pawn = 1). We enhance this analysis by introducing marginal valuations for both pieces and squares. We demonstrate our method by examining the positioning of Knights and Bishops, and also provide valuable insights into the valuation of pawns. Notably, Nimzowitsch was among the pioneers in advocating for the significance of Pawn structure and valuation. Finally, we conclude by suggesting potential avenues for future research. Key Words: AI, AlphaZero, Bayes, Chess, Deep Learning, Neural Network, Chess Piece Values, Knights, Bishops, Pawns. Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory, there must be a solution, a right procedure in any position. —John von Neumann § INTRODUCTION Chess AI was pioneered by <cit.>, <cit.>, and <cit.>, who developed algorithms for solving chess. Shannon's approach was one of trial and error and “learning” the optimal policy. Turing (and Champernowne) valued the pieces marginally. They had the following positional evaluation functions: piece mobility, piece safety, king mobility, king safety, and castling. Modern-day methods are based on state-dependent objective function evaluation via learning (a.k.a. reinforcement learning) <cit.>. Solving Chess is a daunting NP-hard computational problem; the Shannon number, a lower bound on the game-tree complexity of chess, is on the order of 10^120. A major advance over pure look-ahead calculation engines is the use of deep neural networks, which interpolate the value and policy functions from empirical game playing. For example, AlphaZero uses self-play to allow quick solution paths to be calculated and “learns” chess in less than four hours without any prior knowledge; see <cit.> and <cit.> for further discussion. While much recent work has been done in Chess AI, the question of the value of a chess square has not yet been explored. In this work, we propose a system to measure the advantage/disadvantage offered by control of particular chess squares with different pieces. In particular, we propose a method for measuring the advantage/disadvantage of states of the form s ∈Color×Piece×Square. For example, it is a widely held belief in the world of chess that certain state combinations, such as having a White Knight on f5, provide an advantage to the White player. We analyze these key combinations to see whether the games of high-level chess grandmasters lend merit to this belief. Our investigation will shed light on the strategic nuances and patterns that emerge from such positions and contribute to the understanding of chess at the highest level of play. To value pieces on squares, we create a Neural Network to analyze a dataset of Grandmaster games and make predictions regarding winning probabilities. This uses Centipawn evaluations for specific subsets of chess states involving Knight and Bishop pieces. The results show that our model successfully generated predictions for White Knights and Bishops, as well as Black Knights and Bishops.
The predictions provided valuable insights into the advantages and disadvantages associated with different states and positions on the chessboard. For example, the analysis revealed that Knights placed in the corners of the board had lower winning probabilities, likely due to their limited mobility and restricted influence. On the other hand, as Knights moved closer to the opponent's side, their positional value tended to increase, potentially allowing them to infiltrate enemy territory and exert greater control over the game. The study's results enhance the understanding of chess strategies and gameplay dynamics, aiding in strategic decision-making and the evaluation of different gameplay approaches. Several chess maxims are reflected in our neural network predictions. For example, Pawns are observed to gain in value as they cross the 4th rank, highlighting the significance of advancing pawns beyond this milestone. Pawns positioned on the h and a files on the 5th rank are particularly powerful, contributing to central control and potential attacking opportunities. Pawns on the 6th rank, especially when supported by a pawn on the 5th rank, become highly threatening. Edge pawns tend to be weaker compared to central pawns, emphasizing the importance of controlling central squares. Additionally, kingside pawns are often more dangerous when advanced than queenside pawns, influencing the dynamics of the game. Important squares for the white pawn are identified by examining the highest Centipawn evaluation c(s) values in each column. The squares e4, h4, c5, and h6 are highlighted as critical positions for white pawns. Occupying these squares provides advantages, such as central control, support for piece development, and potential attacking opportunities. Similarly, for black pawns, the squares f5, d5, c4, d3, and f3 emerge as key positions. Placing pawns on these squares enhances black's control of central areas, supports piece coordination, and enables counter-play against white's position. Understanding the significance of these key squares and applying the derived insights allows players to make informed decisions regarding pawn placement, pawn breaks, and strategic plans. This knowledge empowers players to optimize their pawn structures, control critical areas of the board, and leverage their pawns to gain a competitive advantage in the game. The rest of the paper is outlined as follows. Section <ref> provides connections with previous literature. Section <ref> goes over the methods we used. Section <ref> provides an application of the proposed methods to Grandmasters and Magnus Carlsen, the World Chess Champion. Section <ref> provides an application to Pawns. Finally, Section <ref> concludes. §.§ Connections with Previous Work In the field of Chess AI, previous research has primarily focused on predicting the probabilities of winning w(s) and Centipawn evaluations c(s) for more simplified states. <cit.> explored simpler states where s belongs to the set of Piece. In their work, they utilized Logistic Regression methods to determine the value of a chess piece by creating a model that predicts the outcome of a game based on existing piece imbalances in a given position. A recent lichess study also tried similar approaches <cit.> <cit.>. Building upon this previous work, our research extends the scope by proposing an augmented state representation s that encompasses Color×Piece×Square, thereby incorporating the square (location) information as an additional component of the state. 
This augmentation enables a more comprehensive understanding of the game dynamics by considering both the piece and its position on the board. Furthermore, we employ Neural Networks as our chosen methodology, allowing us to capture and model the intricate relationships between the state s and its corresponding Centipawn evaluation c(s). One crucial distinction between our proposed approach and previous methodologies lies in the predictive target. While prior research focused on predicting the binary outcome of the game (win or loss), our proposed model aims to predict the Centipawn evaluation c(s) instead. By doing so, we shift the focus towards assessing the advantage or disadvantage of a particular chess position, providing more granular information beyond a simple win/loss prediction. By using the augmented state representation and employing Neural Networks, our proposed model offers a more comprehensive and nuanced analysis of the chess game. This allows us to capture the intricate interplay between the color, piece type, square, and Centipawn evaluation, providing a deeper understanding of the factors influencing the game's outcome. In the realm of Chess AI research, <cit.> made significant strides by employing Q-learning methods, as discussed in Section <ref>, with a specific focus on chess gambits. Their work aimed to uncover key characteristics and insights associated with these strategic opening moves by calculating Q-values for various chess gambits. This initial exploration into the application of Q-learning in analyzing and understanding chess gambits laid a solid foundation for further research in this field. This paper extends the work of <cit.> and proposes novel architectures that can predict the probabilities of winning w(s) and Centipawn evaluations c(s) for all possible states s ∈Color×Piece×Square. While previous work focused on specific subsets of states, particularly those related to gambits, our approach seeks to encompass the entire chessboard by incorporating the color, piece type, and square information into a comprehensive state representation. By embracing a wider scope of analysis that covers all possible states, our research aims to provide a more comprehensive understanding of the game, surpassing the limitations imposed by narrow subsets. To achieve this, we employ advanced techniques, such as Neural Networks, to capture the intricate relationships between the components of a state and the corresponding probabilities of winning w(s) and Centipawn evaluations c(s). This allows us to offer valuable insights into the dynamics of chess gameplay across a vast array of states, thereby providing a more holistic and comprehensive analysis. Through our research, we strive to advance the field by developing robust and effective models capable of accurately predicting the probabilities of winning and assessing the Centipawn evaluations for any given state. By considering the full spectrum of states represented by Color×Piece×Square, our proposed architectures pave the way for a deeper understanding of chess strategies. They enable us to evaluate the efficacy of these strategies and unravel the intricacies of the game, ultimately contributing to the development of more sophisticated and intelligent Chess AI systems. § CHESS PIECE AND SQUARE VALUATION Our work provides values for states consisting of a combination of a piece and a square. For example, we may wish to assess the value of a queenside fianchetto bishop, as that bishop controls a key diagonal.
We denote this value by V(Bishop, b2); similarly, the value of a white knight on a good outpost such as f5 is denoted V(Knight, f5). As valuation will be based on the probability of winning, as calculated by a chess engine, the law of probability gives us a key identity V(Knight) = ∑_position V(Knight, position), where the sum is taken over all future positions. Hence, we can see that the initial value of the knight (i.e. V(Knight)=3) comes from its total use throughout the game. Once the pieces have moved, there are different marginal values. Our goal is to be able to assess values such as V(Knight, f5). The commonly used chess piece valuations are given by (King, Queen, Rook, Bishop, Knight, Pawn) = (∞, 9, 5, 3, 3, 1). These were modified in <cit.> through the use of Machine Learning techniques to (King, Queen, Rook, Bishop, Knight, Pawn) = (∞, 8.9, 4.6, 3.3, 3, 1), and a recent lichess study on piece values finds (King, Queen, Rook, Bishop, Knight, Pawn) = (∞, 9.82, 4.93, 3.28, 3.16, 1). We build on this line of research by adding square position to the state vector. §.§ Centipawn Evaluation and Optimal Play In our approach, we begin by formalizing the theoretical functions used in Q-learning. The value function, denoted as V(s), represents the probability of winning the game given a specific state s. This state s belongs to the set Color×Piece×Square, and it is worth emphasizing that V(s) is calculated with respect to the color parameter in any given state. To assess any legal chess position, we derive a Centipawn evaluation denoted as c(s). The Centipawn serves as a measurement unit for evaluating the advantage in chess, where one Centipawn is equal to 1/100 of a pawn. The win probability w(s) can be directly obtained from c(s) using the following equation: w(s) = ℙ(winning|s) = 1/(1+10^-c(s)/4), and, inversely, c(s) = 4log_10(w(s)/(1-w(s))). For example, if White has a c(s)=0.2 advantage, then the win probability is w(s) ≈ 0.53. To address the sequential decision problem, we employ the dynamic programming technique known as Q-learning. This methodology involves breaking down the decision problem into smaller sub-problems. A key principle utilized in Q-learning is Bellman's principle of optimality, which states: Bellman Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (Bellman, 1957) To solve this sequential decision problem, we employ Backwards Induction, which determines the optimal action at the last node in the decision tree (i.e., the checkmate position). Utilizing this information, we can then determine the best action for the second-to-last decision point, and this process continues backward until we identify the optimal action for every possible situation, effectively solving the Bellman equation. In recent years, the field of artificial intelligence has witnessed significant advancements, particularly in the realm of AI algorithms like deep learning, alongside the development of remarkably powerful computer chess engines. These technological breakthroughs have revolutionized the way we evaluate and understand chess positions, enabling us to delve into the intricacies of the game with unparalleled precision. One notable achievement stemming from these advancements is the ability to accurately assess chess positions.
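The mapping between Centipawn evaluations and win probabilities defined above is straightforward to compute; a minimal Python sketch (with c(s) expressed in pawn units, as in the example) is given below.

```python
import math

def win_prob(c: float) -> float:
    """w(s) = 1 / (1 + 10^(-c/4)): win probability from a Centipawn advantage c."""
    return 1.0 / (1.0 + 10 ** (-c / 4.0))

def centipawn(w: float) -> float:
    """c(s) = 4 * log10(w / (1 - w)): the inverse mapping."""
    return 4.0 * math.log10(w / (1.0 - w))

print(win_prob(0.2))               # ~0.529, a slight edge for White
print(centipawn(win_prob(0.2)))    # recovers 0.2 up to floating-point error
```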
By leveraging AI algorithms, particularly deep learning techniques, we can now analyze and comprehend chess moves and strategies in a manner that was previously unimaginable. These algorithms have been specifically designed to process vast amounts of data, learn from patterns, and make informed decisions, ultimately resulting in highly accurate evaluations of chess positions. Moreover, the advent of advanced computer chess engines, exemplified by the likes of Stockfish 15 <cit.>, has played a pivotal role in shaping the landscape of chess analysis and study. These engines, meticulously crafted through a combination of cutting-edge algorithms and extensive programming, have transformed the way chess is played and understood. Gone are the days when determining the optimality of specific chess lines of play relied solely on human intuition and analysis. The emergence of chess engines has effectively shifted the burden from human players and theorists to these intelligent systems. By leveraging their computational power and algorithmic prowess, chess engines have assumed the responsibility of assessing various lines of play, thus solving the Bellman equation. By adhering to Bellman's optimality condition, computer chess engines fulfill the requirements of possessing complete knowledge about the chess environment and evaluating all possible actions and their consequences. Through this rigorous analysis, they provide insights into the optimal move in a given position. §.§ Q-Values The corresponding Q-value represents the probability of winning, given a policy/move a in a given state s, by following the optimal Bellman path thereafter: Q(s, a) = ℙ(winning|s, a). To address the optimal sequential decision problem, we employ Q-learning, which calculates the Q-matrix (<cit.>, <cit.>), denoted as Q(s, a) for a given state s and action a. The Q-value matrix describes the value of performing action a and then acting optimally thereafter. The current optimal policy and value function can be expressed as follows: V(s) = max_a Q(s, a) = Q(s, a^*(s)), where a^*(s) = argmax_a Q(s, a). The policy function establishes the optimal mapping from states to actions, and by substituting the Q-values, we obtain the value function for a given state. In Section <ref>, we introduce a Neural Network architecture designed specifically for predicting the value of c(s) given the state s. By harnessing the predictive capability of this Neural Network, we can subsequently determine the probability of a player winning, denoted as w(s), based on their corresponding state s. The Neural Network model comprises interconnected layers, including an input layer that accepts the state s as input. Through a series of computations within the hidden layers, the model captures complex relationships and patterns inherent in the input data. Ultimately, the output layer produces the predicted value of c(s). By employing this trained Neural Network model, we can make predictions of c(s) for unseen states s. These predicted values can then be converted into the probability of a player winning, w(s), via the logistic mapping defined above. With the ability to predict w(s), we gain valuable insights into the probability of a player winning based on their corresponding state s.
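As a toy illustration of the greedy policy a^*(s) = argmax_a Q(s, a), the sketch below ranks the legal moves of a position using the python-chess library; the q_value stub is hypothetical and stands in for the predicted win probability of the position reached after a move.

```python
import chess

def q_value(board: chess.Board, move: chess.Move) -> float:
    """Placeholder for Q(s, a); a trained model would score the resulting position."""
    board.push(move)
    score = 0.5            # e.g. win_prob(predicted c(s)) for the new position
    board.pop()
    return score

def best_move(board: chess.Board) -> chess.Move:
    """a*(s) = argmax_a Q(s, a): greedy one-step move selection."""
    return max(list(board.legal_moves), key=lambda m: q_value(board, m))

print(best_move(chess.Board()))    # with the constant stub, simply the first legal move
```

In our setting, the scores produced this way are the predicted win probabilities w(s) discussed above.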
This information can be harnessed in various ways, including evaluating strategic moves, assessing the overall advantage or disadvantage of specific board configurations, and guiding decision-making during gameplay. The Neural Network's capacity to capture intricate patterns and relationships within the input data significantly contributes to more accurate predictions and a deeper understanding of the dynamics of the chess game. By incorporating the predicted values of c(s) and computing the corresponding probabilities of winning, we enhance our analytical capabilities and facilitate informed decision-making in the context of chess gameplay. §.§ Neural Network Architecture We design a specific 3-layer Neural Network aimed at predicting the value of a chess square and piece combination, denoted as c(s) for s ∈Color×Piece×Square, as shown in Figure <ref>. This model incorporates a hyperbolic tangent (tanh) activation function as a key component of its architecture. By applying the tanh activation function to the network layers, the model becomes capable of capturing and processing intricate patterns and relationships within the input data. To ensure effective training of the model, we curate a meticulously crafted dataset. This dataset consists of two essential elements: the state information, represented by s, and the corresponding critical power level (CPL) recorded for each state. The state information encompasses relevant factors, variables, or parameters that define the chessboard system or environment. Through supervised learning using this dataset, the model learns to associate the given state information with the corresponding CPL. Consequently, it acquires the ability to predict the CPL based on the provided state information as input. This training process involves iteratively adjusting the model's parameters to minimize the disparity between its predictions and the actual CPL values present in the training dataset. The selection of the tanh activation function holds particular significance for our chess square and piece prediction model. The tanh function introduces non-linearity into the model, enabling it to capture complex relationships specific to chessboard configurations. This non-linearity allows the model to interpret intricate patterns and dependencies between the input variables and the output, facilitating more accurate predictions. Furthermore, the tanh activation function maps the input values into the range [-1, 1], which is well-suited for our chess-related application. This bounded output range ensures that the model's predictions for critical power levels remain within a specific value range, aligning with the constraints and limitations inherent to chess strategies. By incorporating the tanh activation function and training the model on the state information and corresponding CPL data, our proposed model strives to provide a robust and dependable framework for predicting critical power levels in various chess scenarios. Its ability to capture the intricate relationships specific to chess squares and pieces makes it particularly valuable for tasks such as evaluating the relative strength of different board configurations, predicting advantageous moves, and assisting in strategic decision-making during chess gameplay. §.§ Data In order to train the Neural Network effectively, a training dataset is constructed, comprising two essential components. 
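The architecture just described can be written down compactly; the following PyTorch sketch is ours, and the 72-dimensional one-hot input (2 colors + 6 piece types + 64 squares), the hidden width, the scalar output for the predicted evaluation, and the training objective are illustrative assumptions rather than specifications from the paper.

```python
import torch
import torch.nn as nn

class SquareValueNet(nn.Module):
    """3-layer MLP with tanh activations mapping a one-hot encoded
    (color, piece, square) state to a predicted evaluation."""
    def __init__(self, in_dim: int = 72, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

model = SquareValueNet()
loss_fn = nn.MSELoss()                                    # regression onto recorded evaluations
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```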
This dataset consists of elements that contain both the state information denoted by s, as well as the corresponding evaluation associated with that particular state. To gather the necessary chess game data for analysis, a vast mega database containing millions of previously played chess games is utilized. Within this database, each game is represented using the Portable Game Notation (PGN) notation, which allows for standardized representation and compatibility with various chess software and applications. The process of constructing the training dataset involves parsing and evaluating all positions p within each game. The Forsyth-Edwards Notation (FEN) is employed to determine the location of relevant chess pieces within each position p. As a result, all states s ∈ p are extracted and added to the training dataset. To navigate through the moves of each chess game systematically, the Python Chess library is utilized. This library provides a comprehensive set of functions and classes specifically designed for working with chess games and positions, enabling efficient traversal of the stored games in the database. For every position p within the dataset, an evaluation is obtained. To accomplish this, the research incorporates the Stockfish engine, a widely recognized and powerful chess engine. Stockfish employs advanced algorithms and evaluation functions to assess the strength of positions. By leveraging the capabilities of Stockfish, the training dataset can determine the evaluation of each position p on the chessboard accurately. Finally, this evaluation is associated with all states s ∈ p, resulting in a comprehensive dataset that encompasses both the state s and the evaluation associated with the position p from which s was derived. This dataset serves as the foundation for training the Neural Network, enabling it to learn and make informed decisions based on the provided state information. § KNIGHT AND BISHOP VALUATION In this study, our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Knight, Bishop}}. Although our focus is initially on the Knight and Bishop pieces, it is important to note that the model can be expanded to encompass all pieces, offering a broader analysis of the game. To provide a visual representation of the predicted values, heat maps are generated for both w(s) and c(s) corresponding to each valid combination within the specified subset. These heat maps offer a comprehensive overview of the probabilities of winning and Centipawn evaluations associated with the Knight and Bishop pieces in different states. To illustrate the efficacy of our model, we first employ it to predict the Centipawn evaluations c(s) specifically for states where the color c is White and the piece p is Knight or Bishop. The resulting predictions are showcased in Figure <ref> and Figure <ref>, providing valuable insights into the relative advantages or disadvantages of such states. Building upon this, we further use c(s) to derive the corresponding probabilities of winning w(s) for these specific states. The model-generated probabilities are visualized in Figure <ref> and Figure <ref>, offering a clear representation of the likelihood of White winning the game given the occurrence of the specified state s. 
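As a concrete illustration of the data-construction pipeline described in the Data subsection above, the sketch below walks a PGN file with the Python Chess library, scores every position with Stockfish, and emits one row per (color, piece, square) state; the file paths and search depth are placeholders.

```python
import chess
import chess.engine
import chess.pgn

def extract_rows(pgn_path: str, stockfish_path: str, depth: int = 12):
    """Yield (color, piece_type, square, evaluation-in-pawns) rows for every
    position of every game in a PGN file."""
    engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
    with open(pgn_path) as fh:
        while (game := chess.pgn.read_game(fh)) is not None:
            board = game.board()
            for move in game.mainline_moves():
                board.push(move)
                info = engine.analyse(board, chess.engine.Limit(depth=depth))
                cp = info["score"].white().score(mate_score=10000) / 100.0
                for square, piece in board.piece_map().items():
                    yield piece.color, piece.piece_type, chess.square_name(square), cp
    engine.quit()
```

Each evaluation is attached to every state s occurring in the position, mirroring the construction described above.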
By leveraging our proposed model, we gain a deeper understanding of the dynamics of the game, specifically in relation to the Knight and Bishop pieces within the context of the White color. This analysis not only facilitates strategic decision-making but also provides a basis for evaluating the effectiveness of various gameplay approaches. Moreover, the model's expandability to encompass all pieces allows for a comprehensive examination of the game across different states, enabling us to uncover additional insights and enhance the overall understanding of chess strategies and gameplay dynamics. The model is then used to determine c(s) and w(s) for states { (c, p, sq) ∈ s : c = "Black", p = "Knight", "Bishop"}, as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref> respectively. Key squares for the Bishops can be seen in <ref>: The applications of the model on Grandmaster games provide valuable insights into the dynamics and strategies employed by top-level chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we gain a deeper understanding of the advantages and disadvantages associated with different chess positions. These insights have several practical applications in chess analysis and gameplay evaluation. The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions. By focusing on specific subsets of states, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to the overall gameplay strategies employed by Grandmasters. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations. Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game and evaluate the effectiveness of various gameplay approaches. This broader perspective enhances our overall understanding of chess strategies and gameplay dynamics. The predictions generated by the model can also be utilized for comparative analysis between different players or groups of players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states, we can identify patterns and trends in the strategies employed by Grandmasters. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities. For example, in Figure <ref>, where w(s) represents the evaluation of the knight-square state, we can observe that the lowest values of w(s) are found in the white corners of the chessboard, specifically squares a1 and h1. This observation aligns with the widely held belief that knights are generally considered being in their worst positions when confined to the corners of the board. The disadvantage of having a knight in the corner may stem from its limited mobility and restricted scope of influence. 
When placed in the corners, knights have fewer potential squares to reach and can easily become isolated from the central and more strategically significant areas of the board. On the other hand, as the knights move closer to the opponent's side of the board, their positional value tends to increase. This is most likely due to the knights' ability to infiltrate enemy territory, potentially attacking key squares, pieces, or pawns. The increasing value of knight-square states as the knights advance can be attributed to several factors. Firstly, the proximity to the opponent's pieces and pawns provides more targets for the knight's maneuvers and attacks. Secondly, knights positioned closer to the enemy's side can exert greater control over central squares and influence the dynamics of the game. This control can restrict the opponent's options and potentially create weaknesses in their position. Analyzing the values of knight-square states in different positions on the board, such as the corners and closer to the opponent's side, supports the claim that the placement of knights significantly affects their effectiveness. Understanding the strengths and weaknesses associated with different knight positions helps players make informed decisions about piece placement, strategic plans, and tactical considerations. Key squares for the knight to occupy are marked in Figure <ref>. The applications of our model on Grandmaster games provide valuable insights into the dynamics and strategies employed in high-level chess. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of the game across different states, facilitating a deeper understanding of chess strategies and enhancing the overall gameplay experience. §.§ Magnus Carlsen Our proposed model can be further applied to gain insights into the playing style and performance of specific players. In this section, we focus on the world-renowned chess player Magnus Carlsen, the reigning World Chess Champion. By applying our model to the games played by Carlsen, we aim to uncover unique patterns and characteristics that contribute to his success and distinguish his gameplay from other Grandmasters. Our proposed model is applied to a dataset consisting of 2000+ Carlsen games played in the last 5 years. Similar to the previous section, we begin by predicting the Centipawn evaluations c(s) for states where Carlsen plays as the “White" color and utilizes the “Knight" or “Bishop" piece. These predictions provide valuable insights into the relative advantages or disadvantages of Carlsen's chosen states, shedding light on his strategic decision-making process. The resulting heat maps, showcased in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, offer a visual representation of the predicted Centipawn evaluations for Carlsen's specific subset of states. Building upon this analysis, we further utilize the Centipawn evaluations c(s) to derive the corresponding probabilities of winning w(s) for Carlsen's selected states. The model-generated winning probabilities provide a clear representation of Carlsen's likelihood of winning the game given the occurrence of the specified state s. 
By focusing on Carlsen's gameplay, we gain a deeper understanding of his preferred strategies and tendencies when employing the Knight piece as the “White" color. This analysis allows us to assess the effectiveness of Carlsen's gameplay choices, providing insights into his decision-making process and potential areas of strength or improvement. Additionally, comparing Carlsen's results to the general dataset of Grandmaster games helps us evaluate his performance against the broader chess community. The model is then used to determine c(s) and w(s) for states (c, p, sq) ∈ s : c = "Black", p = "Knight", "Bishop", as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively. The applications of the model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by one of the world's top chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we can gain a deeper understanding of the advantages and disadvantages associated with different chess positions in Carlsen's games. These insights have numerous practical applications in chess analysis and gameplay evaluation. The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states encountered by Magnus Carlsen. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops in Carlsen's games. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions as encountered by Carlsen. By focusing on specific subsets of states in Carlsen's games, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to Carlsen's overall gameplay strategies. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations based on Carlsen's approach. Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states in Carlsen's games. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game as played by Carlsen and evaluate the effectiveness of various gameplay approaches employed by him. This broader perspective enhances our overall understanding of Carlsen's strategies and gameplay dynamics. The predictions generated by the model can also be utilized for comparative analysis between Magnus Carlsen and other players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states in Carlsen's games, we can identify patterns and trends in his strategies. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities while considering Carlsen's approach. In Figure <ref>, we discover the solution to one of the questions raised in Section <ref>: the value of the white knight on f5. Figure <ref> illustrates the distribution of c(s) for the White Knight on f5 in Carlsen's games. It is evident that the c(s) values for the White Knight exhibit a positive skew, indicating that this particular state s is typically associated with favorable c(s) values. 
Therefore, having a white knight positioned on f5 often confers an advantage. By incorporating such insights into our analysis of Carlsen's games, we gain a more comprehensive understanding of the strengths, weaknesses, and strategic implications of the Knight and Bishop pieces as employed by Magnus Carlsen. In sum, the applications of our model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by this world-class chess player. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions encountered by Carlsen, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of Carlsen's games, facilitating a deeper understanding of his strategies and enhancing the overall gameplay experience. § PAWN VALUATION No pawn exchanges, no file-opening, no attack—Aron Nimzowitsch Our study is not complete until we apply the model to the mighty pawn. Our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Pawn}}. The results of the model when applied to the White Pawn are shown in Figure <ref> and Figure <ref>. We note a few chess maxims that are reflected in the model predictions. * Pawns gain in value as they cross the 4th rank: This point highlights an important principle in chess, where advancing pawns beyond the 4th rank often leads to increased positional strength and potential threats. As pawns move forward, they gain control over more squares, restrict the opponent's piece mobility, and open up lines for their own pieces. Crossing the 4th rank is a significant milestone that can significantly impact the dynamics of the game. * Pawns on the h and a files are very good on the 5th rank: This point emphasizes the strategic importance of pawns positioned on the h and a files when they reach the 5th rank. Pawns on these files can have a powerful influence on the game, particularly in the endgame. Placing pawns on the 5th rank provides support for the central pawns, helps control key central squares, and may facilitate piece activity and potential attacks on the opponent's position. * Pawns on the 6th rank are deadly, especially when supported by a pawn on the 5th rank: This point highlights the strength of pawns on the 6th rank, which is just two steps away from promotion. Pawns advanced to this rank become highly dangerous, as they pose a direct threat to promote to a more powerful piece. When supported by a pawn on the 5th rank, these pawns can create a formidable pawn duo, exerting significant pressure on the opponent's position and potentially leading to advantageous tactical opportunities. * Edge pawns tend to be weaker than central pawns: This point draws attention to the relative weakness of pawns placed on the edges of the board (such as the a and h files) compared to pawns in central positions. Edge pawns have fewer potential squares to advance or support other pieces, limiting their mobility and influence. In contrast, central pawns control more critical squares, contribute to a stronger pawn structure, and have a greater impact on the overall game dynamics. 
* Kingside pawns are more dangerous when advanced than queenside pawns: This point highlights a positional aspect where advancing pawns on the kingside (g and h files for White, g and h files for Black) can have a more immediate and aggressive impact compared to advancing pawns on the queenside (a and b files for White, a and b files for Black). Advanced kingside pawns can create open lines, potentially exposing the opponent's king to attacks or weakening their pawn structure. Understanding this distinction helps players assess the strategic implications of pawn advances on different sides of the board. Important squares for the white pawn can also be seen by examining the highest Centipawn evaluation c(s) values in each column. By analyzing the rows in the heatmap corresponding to the white pawns, we can identify squares that consistently have high Centipawn evaluations, indicating their significance for white pawns. Starting from the top row (from White's perspective), the squares with the highest c(s) values are e4, h4, c5, and h6. These squares represent critical positions for white pawns. The square e4, located in the fourth row, is a well-known central square in chess. Occupying e4 with a white pawn can provide several advantages, such as controlling important central squares, supporting piece development, and establishing a strong pawn presence in the center. Also in the fourth row, we find the square h4. Although it is on the edge of the board, it is an important square for white pawns. Placing a pawn on h4 can serve multiple purposes, including potentially supporting a kingside pawn storm, reinforcing control over the g5 square, or preparing to launch an attack on the opponent's position. In the fifth row, we encounter the square c5. Occupying c5 with a white pawn can contribute to a solid pawn structure and provide control over central squares. It may also support piece mobility and influence the game's dynamics, particularly in the context of pawn breaks or central pawn exchanges. Finally, in the sixth row, the square h6 stands out with the highest c(s) value. Placing a pawn on h6 can have strategic implications, such as potentially supporting kingside attacks or acting as a defensive shield for the king. By identifying these squares with high c(s) values, we gain valuable insights into the strategic positioning of white pawns. These squares offer opportunities for central control, piece activity, attacking potential, and overall pawn structure. Understanding the significance of these squares helps players make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their advantage in the game. We next apply this model to the black pawns. The results are shown in Figure <ref> and Figure <ref>. Similar conclusions can be drawn for the black pawns. By analyzing the highest Centipawn evaluation c(s) values in each column for the black pawns, we can identify the key squares that consistently have high evaluations, signifying their significance for black pawns. Just like for the white pawns, the rows in the heatmap corresponding to the black pawns reveal important squares. The squares with the highest c(s) values for black pawns are f5, d5, c4, d3, and f3. These squares play a crucial role in determining the strength and strategic positioning of the black pawns. The square f5, located in the fifth row, emerges as one of the critical squares for black pawns. 
Placing a pawn on f5 can provide black with control over central squares, potential support for piece development, and opportunities for counterplay. The square d5 stands out with a high c(s) value. Occupying d5 with a black pawn contributes to central control, potentially restricts white's pawn breaks, and provides a solid foundation for black's pawn structure. In the fourth row, the square c4 is identified as an important square for black pawns. Occupying c4 can offer black strategic advantages, such as central control, potential support for piece activity, and the creation of tactical opportunities. Furthermore, the square d3 in the third row holds significance for black pawns. Placing a pawn on d3 strengthens black's central presence, potentially restricts white's pawn advancements, and helps solidify black's position in the center. Lastly, the square f3 in the third row also demonstrates a high c(s) value. Occupying f3 with a black pawn can support kingside counterplay, potentially restrict white's piece mobility, and offer opportunities for tactical operations. Analyzing these key squares for black pawns, namely f5, d5, c4, d3, and f3, provides valuable insights into the strategic considerations and potential strengths of the black pawn structure. Occupying and controlling these squares strategically enhances black's control of central areas, supports piece coordination, and enables counterplay against white's position. By understanding the significance of these squares, players can make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their potential advantage and navigate the complexities of the game from the black perspective. § DISCUSSION In this paper, we presented a comprehensive methodology for evaluating chess positions and predicting the probabilities of winning w(s) and Centipawn evaluations c(s). Our approach utilized a combination of Centipawn evaluation, Q-learning, and Neural Networks to capture the complex dynamics of the game and facilitate informed decision-making. We began by formalizing the theoretical functions used in Q-learning, such as the value function V(s) and Centipawn evaluation c(s). The value function represented the probability of winning the game given a specific state s, while the Centipawn evaluation measured the advantage in chess. We derived the win probability w(s) from the Centipawn evaluation using a mathematical equation. To address the sequential decision problem, we employed the dynamic programming technique of Q-learning, which involved breaking down the problem into smaller sub-problems and solving the Bellman equation. The Q-value matrix represented the probability of winning given a policy/move in a specific state, and we determined the optimal policy and value function using the Q-values. To predict Centipawn evaluations c(s), we designed a Neural Network architecture specifically tailored for chess positions. This model incorporated the tanh activation function to capture intricate patterns and relationships within the input data. By training the Neural Network on a meticulously crafted dataset, we could make accurate predictions of Centipawn evaluations for unseen states. Our methodology expanded upon previous work by considering a comprehensive state representation that encompassed color, piece type, and square information. This allowed for a more nuanced analysis of the game dynamics and a deeper understanding of the factors influencing the outcome. 
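To make the architecture concrete, the following is a minimal sketch of a tanh network of the kind described above, together with a logistic Centipawn-to-win-probability map; the layer sizes, the one-hot state encoding, the PyTorch implementation, and the 400-Centipawn logistic scale are illustrative assumptions, not the exact choices used in our experiments.

import torch
import torch.nn as nn

# A state s = (colour, piece, square) is one-hot encoded: 2 colours + 6 piece
# types + 64 squares = 72 inputs; the network regresses the Centipawn value c(s).
centipawn_net = nn.Sequential(
    nn.Linear(72, 128), nn.Tanh(),
    nn.Linear(128, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def win_probability(centipawns: torch.Tensor) -> torch.Tensor:
    """Map c(s) to w(s) with a logistic curve (an assumed form, for illustration)."""
    return 1.0 / (1.0 + 10.0 ** (-centipawns / 400.0))

s = torch.zeros(1, 72)
s[0, 0] = s[0, 2 + 1] = s[0, 2 + 6 + 37] = 1.0  # e.g. White, Knight, f5
c = centipawn_net(s)        # predicted Centipawn evaluation c(s)
w = win_probability(c)      # corresponding winning probability w(s)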
We also showcased the applications of our model, focusing on specific subsets of states, such as the Knight and Bishop pieces, and visualizing the predicted probabilities of winning and Centipawn evaluations through heat maps. Further research in this area could explore the dynamic nature of square values, taking into account positional changes and the interaction between different pieces. By refining and expanding our methodology, we can continue to deepen our understanding of the intricate dynamics of chess positions and contribute to advancements in the field of chess AI. In conclusion, our methodology provides a robust framework for evaluating chess positions and making informed decisions during gameplay. By combining Centipawn evaluation, Q-learning, and Neural Networks, we achieved a comprehensive analysis of the game dynamics and enhanced our ability to assess strategic moves and guide decision-making. Our research contributes to the development of more sophisticated and intelligent Chess AI systems, paving the way for deeper insights into the intricacies of the game. With our methodology, we strive to unravel the logical relations of chess and provide a comprehensive understanding of the game, empowering players and researchers alike to unlock new levels of strategic thinking and mastery.
http://arxiv.org/abs/2307.04662v2
20230710155941
Baryogenesis and Dark Matter in the Mirror Twin Higgs
[ "Pedro Bittar", "Gustavo Burdman", "Larissa Kiriliuk" ]
hep-ph
[ "hep-ph", "hep-ex" ]
§ INTRODUCTION The standard model (SM) of particle physics is an extremely successful quantum field theory describing the interactions of all known elementary particles.[The only exception, gravity, is non-renormalizable, and its effects can be safely neglected up to extremely high energies.] Nonetheless, there remain many questions that the SM does not address. Among them are the nature of dark matter, the origin of the baryon asymmetry, and the stability and origin of the only energy scale appearing in the SM. The Mirror Twin Higgs Model (MTH) <cit.>, originally conceived to stabilize the electroweak scale, can be an intriguing source of dark matter candidates. For instance, Refs. <cit.> and <cit.> consider thermal relics in the MTH and fraternal <cit.> twin scenarios, respectively. The possibility of Asymmetric Dark Matter (ADM) <cit.> in TH models is considered in Refs. <cit.> for the fraternal TH, and in Refs. <cit.> for a variety of Twin Higgs scenarios, but most importantly for us, in the context of the MTH. In the MTH <cit.>, the SM is extended to include a twin SM copy supplemented by a Z_2 symmetry. The Higgs sector realizes the spontaneous breaking of a global symmetry at some scale f. The SM Higgs is then a pseudo-Nambu-Goldstone boson of this breaking, explaining the stability of the weak scale v, at least up to the energy scale ≃ 4π f. Experimental bounds, mostly from the non-observation of invisible Higgs boson decays to the twin sector, impose the need for a soft Z_2 breaking[In some cases it is even possible to have the MTH with an exact Z_2 symmetry, as shown in <cit.>], resulting in f/v > 1. However, whatever the origin of this soft Z_2 breaking, it does not reintroduce the hierarchy problem, since the Z_2 symmetry is assumed to remain valid in the ultra-violet (UV). In Ref. <cit.>, the MTH was considered to build a model for DM. There it is argued that if a twin baryon is to provide the correct DM abundance, it is necessary to introduce a hard Z_2 breaking in order to allow for m_DM ≃ 5 m_N, where m_N is the nucleon mass and m_DM is the mass of the twin baryon. The need for hard Z_2 breaking results from the fact that just using the soft breaking (i.e., f/v > 1) is not enough to obtain the desired value in (<ref>). Its effects on the renormalization group running in the twin sector, appearing through the modification of quark masses and the resulting speed-up of the twin QCD running, result in only a mild enhancement of Λ̃_QCD, much smaller than the factor of 5 needed. Hence the introduction of a hard breaking in the twin QCD coupling. Although it is possible to arrange for the hard breaking to be small enough not to reintroduce the hierarchy problem, this remains an ad hoc aspect of the ADM models in the MTH. In this paper, we consider an MTH scenario with an additional sector responsible for the baryon and dark matter asymmetries. As we show below, one of the features of the model is that it allows for the correct DM abundance even if (<ref>) is not satisfied, thereby obviating the need for hard Z_2 breaking. This is achieved by showing that we can obtain different baryon and DM number densities without such breaking. In this way, in order to obtain the correct DM to baryon abundance ratio Ω_DM/Ω_B = (n_DM/n_B)(m_DM/m_N), with m_DM ∼ O(1) m_N, the ratio of number densities must be different.
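To quantify how different the number densities must be, one can use the soft-breaking mass ratio m_DM/m_N ≃ (f/v)^(2/9) discussed later in the text; a minimal numerical sketch follows, where the f/v benchmarks are illustrative.

# Required ratio of number densities n_DM/n_B for Omega_DM/Omega_B = 5,
# when only soft Z_2 breaking is allowed so that m_DM/m_N = (f/v)^(2/9).
omega_ratio = 5.0
for f_over_v in (3.0, 5.0, 10.0):
    mass_ratio = f_over_v ** (2.0 / 9.0)      # m_DM / m_N, only a mild enhancement
    n_ratio = omega_ratio / mass_ratio        # needed n_DM / n_B
    print(f"f/v = {f_over_v:4.1f}:  m_DM/m_N = {mass_ratio:.2f},  "
          f"required n_DM/n_B = {n_ratio:.1f}")
# With equal number densities one would instead need m_DM ~ 5 m_N,
# which is what forces a hard Z_2 breaking in earlier proposals.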
The models we consider require the addition of a sector resulting in baryon number violation on both sides of the MTH. We illustrate this with simple models of baryon number violation with out-of-equilibrium decays. These models are mostly available in the literature as applied to the SM alone. We aim to show the general mechanism that allows ADM models in the context of the MTH to obtain the correct DM abundance without hard 2 breaking. On the other hand, it is interesting that the resulting models address the hierarchy problem, the origin of dark matter, and the baryon asymmetry in a natural way. The rest of the paper is organized as follows: In the next section, we review the status of ADM models in the context of the MTH. In Section <ref>, we propose mechanisms for generating both the baryon and dark matter number densities needed on both sides of the MTH without incurring hard 2 breaking. Finally, we conclude in Section <ref>. § ASYMMETRIC DARK MATTER AND THE MIRROR TWIN HIGGS The Twin Higgs mechanism <cit.> was originally introduced as a possible solution to the little hierarchy problem. A copy of the SM matter and interactions, supplemented by a 2 symmetry, results in a global symmetry (SU(4)) which is spontaneously broken at a scale f, resulting in a spectrum of Nambu-Goldstone bosons that make up the SM-like Higgs doublet. The SM interactions explicitly break the global symmetry generating a Higgs potential and leading to electroweak symmetry breaking. In the original version, which we call the MTH, all SM particles and interactions are mirrored in the twin sector. However, as it was first pointed out in Ref. <cit.>, the minimum matter content in the twin sector that addresses the little hierarchy problem does not require an entire copy of the SM but just a twin third generation. This case is the so-called fraternal TH (FTH). The twin Higgs scenario provides several possibilities for DM model building. For instance, models with thermal relics have been considered in the context of the FTH in Refs. <cit.>. In these cases, the twin tau is cosmologically stable due to an accidental U(1) lepton number. The WIMP miracle is recreated since these DM candidates have masses of tens of GeV up to about 100  GeV, and the twin weak interactions determine their thermal relic abundance. Also, in the FTH case, Ref. <cit.> examines asymmetric DM (ADM) models. The preferred scenario involves a light twin b quark with a mass below Λ̃_QCD, asymmetry connected to the SM baryon asymmetry through some UV mechanism, and a cosmologically long-lived twin b baryon. One of the main advantages of the FTH scenarios is that they minimize the new relativistic degrees of freedom, which makes it easier for them to avoid conflicts with the cosmological bounds on N_ eff.. On the other hand, as we will see below, it is more natural to build ADM models in the MTH scenario. We consider the MTH model with an effective cutoff of Λ≃ 4π f, where f is the spontaneous symmetry breaking scale of the twin Higgs global symmetry. In the limit of exact 2 symmetry, f=v, with v the vacuum expectation value of the Higgs doublet in the SM sector. However, the current experimental bounds from the measurements of the Higgs boson couplings at the LHC <cit.> require the f/v ≳ 3 <cit.>. This requirement can be achieved by assuming a soft breaking of the 2 symmetry, i.e., a breaking occurring in the infrared (IR) by some mechanism that respects the 2 symmetry in the ultraviolet (UV). 
This soft breaking guarantees that the hierarchy problem is not reintroduced in loops correcting the Higgs potential since the UV 2 symmetry forces the cancellation of contributions quadratically dependent on the cutoff in the Higgs boson two-point function. The soft 2 breaking paradigm can accommodate all known collider phenomenology with minimal tuning <cit.>. The twin sector of the MTH is particularly well suited to building models of dark matter. In particular, here we consider the scenario where twin baryons, which carry an accidentally conserved global charge just as protons carry baryon number, may constitute all of the observed DM abundance. Thus, we focus on ADM models in the context of the MTH, in which the origin of the baryon and twin baryon asymmetries are related and at the heart of the apparent similarity in the DM and baryon abundances. In particular, it was shown in Ref. <cit.> that the twin neutron in the MTH model is a viable candidate for DM. This results from a scenario where twin neutrinos somehow acquire large masses in order to avoid tight constraints from the cosmological measurements of N_ eff.. However, the twin photon is still in the spectrum. If only twin baryon number B̃ is generated in the twin sector (i.e., no twin lepton number L̃), then charge neutrality of the universe results in the generation of a net twin neutron ñ number after the twin QCD phase transition. Although π̃^± are also stable, their abundance is negligible <cit.>, whereas π̃^0 still decays to twin photons. Finally, nucleosynthesis does not proceed without light twin neutrinos, and we conclude that DM is made entirely of ñ. On the other hand, Ref. <cit.> also raised a problem with this picture. If the softly broken 2 implies that the number densities of baryon and DM are similar, i.e. n_B≃ n_DM , then (<ref>) implies (<ref>). However, it seems that in order to achieve m_ DM≃ 5 m_B, the 2 symmetry has to be broken in the UV. To see this, we notice that in this scenario m_ DM∼Λ̃_ QCD , where Λ̃_ QCD is the twin sector strong interaction IR scale. But if we only allow for a soft 2 breaking, the only effects raising this scale compared to Λ_ QCD are given by the enhancements of the twin quark masses. This results in a speed-up of the running giving Λ̃_ QCD≃ 1.4 Λ_ QCD , slightly depending on the value of f/v. In this way, with only a soft 2, we have m_ DM≃ O(1) m_N . Thus, if the baryon and twin baryon number densities, n_B and n_ DM in (<ref>), were to be equal, we could not obtain the correct DM abundance. Ref. <cit.> argues that the 2 symmetry forces the number density equality and that, in order to obtain the correct DM abundance, the only way out is a hard breaking of the 2 symmetry, which would be enough to give m_ DM≃ 5 m_N. These masses can be achieved, for instance, by having different values of the QCD and twin QCD couplings at the cutoff Λ. However, this reintroduces two-loop contributions to the Higgs mass squared that are quadratic in Λ. It was then argued that it is possible to introduce enough α̃(Λ) - α(Λ) to obtain the desired value of Λ̃_ QCD≃ 5Λ_ QCD at the same time that a fine-tuning of at the most ≃ 1% is required. The situation described above, although technically feasible, is far from satisfactory. The MTH remains a natural extension of the SM controlling the Higgs mass UV sensitivity even after the most recent LHC bounds <cit.>, which come mostly from the constraints on the Higgs couplings, but also from the invisible Higgs boson branching ratio <cit.>. 
Therefore it is desirable to maintain this feature of the model, i.e., to avoid forcing the MTH scenario into a fine-tuned corner of parameter space for the purpose of obtaining the correct DM abundance. Luckily, as we will show below, it is possible to avoid introducing a hard breaking of the 2 and still obtain the observed DM abundance. The key point, of course, is to relax the approximate equality n_ DM≃ n_B to accommodate eqns. (<ref>) and (<ref>) while still using the result (<ref>), i.e. without introducing ad hoc hard 2 breaking. Although we present a full model in Section <ref> as proof of principle for how this can be achieved, we can sketch the general idea here. Models of baryogenesis set n_B after annihilation of the symmetric part of the particle-antiparticle plasma, leaving an asymmetric component. The final number density depends on the CP asymmetry, ϵ_CP <cit.>. The details of its computation depends on the specific baryogenesis model under consideration. However, CP violation generically requires ϵ_CP to be proportional to the relative complex phases of the couplings of the theory, which we denote as sinϕ. Because DM is asymmetric, the same structure for the final number density appears in the DM sector of the MTH. The remaining asymmetric component of the DM plasma sets the final DM number density <cit.>. Therefore, the twin ϵ_CP will also be proportional to the complex phase of the couplings of the twin sector, which we generically denote as sinϕ. Due to the 2 symmetry, the baryon asymmetry and DM asymmetry have the same microscopic origin; however, if the baryon and DM phases ϕ and ϕ are different, we can rewrite (<ref>) as Ω_ DM/Ω_B∼m_ DM/m_N|sinϕ/sinϕ|≃ 5. From (<ref>), we can see that it is possible to satisfy the DM abundance ratio to baryons if there is an order one misalignment between the phases ϕ in the visible sector and ϕ in the twin sector. We argue that this can be the case even in the absence of Z_2 breaking in the UV, given that the relative phases in the visible and twin sectors maybe be defined by IR processes that are not necessarily identical and, therefore, could generally come from a soft Z_2 breaking. A simple example is the vacuum alignment leading to the spontaneous breaking of the twin global symmetry at the scale f. This can be compared with the vacuum alignment in the visible sector leading to electroweak symmetry breaking at the scale v. There is no reason why the visible sector vacuum expectation value (VEV) should be real relative to the twin sector VEV. We can parameterize the vev of the Higgs bi-doublet as ⟨ H⟩ =f [ 0; sinθ; 0; e^iδcosθ ]. where the Twin Higgs VEV ⟨ H_B ⟩ has a relative phase δ with respect of the SM Higgs VEV ⟨ H_A ⟩. This relative phase propagates to the couplings of visible and twin sector states, e.g. fermions, coupled to the SM and twin Higgses. As a result, the SM Higgs couplings will have relative phases with respect to their twin sector counterparts. For instance, this implies that the CKM phases in the twin sector in the MTH need not be the same as those in the SM. Furthermore, since we are introducing couplings among quarks and new fields in both baryogenesis and darkogenesis, the overall relative phase of these is potentially receiving additional IR misalignment. 
Thus, we conclude that the phases entering in (<ref>) need not be related by a Z_2 transformation in the UV and can differ by order-one values, which may result in the correct DM abundance even if the ratio of DM to nucleon masses is still just over unity. In the next section we show an explicit model of ADM in the MTH in which this mechanism is successfully implemented. § BARYOGENESIS AND DARKOGENESIS In this section, we specify a model of baryogenesis and its MTH counterpart to exemplify that it is possible to obtain a successful ADM dark matter abundance in this context without introducing hard Z_2 breaking. §.§ Baryogenesis We start by providing a simple and concrete model for baryogenesis. We generate the baryon asymmetry directly at low scales, below the sphaleron decoupling temperature. Baryogenesis at low temperatures is appropriate for the twin Higgs since we expect the theory to be completed at the UV scale Λ = 4π f ≈ 10 TeV. Therefore, we can imagine that the UV completion can play a role in the baryon asymmetry generation. The model is based on the out-of-equilibrium decays of a singlet fermion N_α that violates baryon number. The need for CP violation requires at least two flavors of N_1,2, with a mass hierarchy M_N_2 > M_N_1, so that the tree-level and loop amplitudes can interfere with different phases. We also require the existence of a colored scalar X in the (3,1)_2/3 representation of the SM gauge group. We then add the following interactions to the SM sector: Δℒ_Bgen = λ_iα N_α X̅^a (u_R^i)_a + ξ_ij ϵ^abc X_a (d_R^i)_b (d_R^j)_c + h.c. Here, i,j are the quark generation indices, α=1,2 is the neutral fermion flavor, and a,b,c are color indices. Because of the antisymmetric nature of ϵ^abc, the ξ_ij coupling must be antisymmetric in flavor. This model is often considered in the context of low-temperature baryogenesis <cit.> since it is a simple realization of baryon number violation without proton decay <cit.>. The baryon asymmetry is generated by the decay of the lightest neutral fermion, N_1. The baryon asymmetry parameter is given by Y_ΔB = (n_N_1/s) [Γ(N_1→ B) - Γ(N_1→ B̅)]/Γ(N_1→ tot) ≡ Y_N_1 ϵ_CP^N_1, where Γ(N_1→ f) are the decay widths of N_1 to baryon number B=+1 or B=-1 final states, Y_N_1 ≡ n_N_1/s is the N_1 yield, s is the entropy density, and ϵ_CP^N_1 is the CP asymmetry. We proceed to compute each piece of (<ref>) separately. We start with ϵ_CP^N_1. CP violation results from the interference of tree-level and loop amplitudes in N_1 decay, as indicated in Fig. <ref>. The decay amplitude can be written as ℳ = c_0 𝒜_0 + c_1 𝒜_1, where in c_0 and c_1 we separate all the couplings in the matrix elements. We can then write the CP asymmetry in the generic form <cit.> ϵ_CP^N_1 = [Γ(N_1→ X u_i^c) - Γ(N_1→ X̅ u̅_i^c)]/[Γ(N_1→ X u_i^c) + Γ(N_1→ X̅ u̅_i^c)] = (Im{c_0 c_1^*}/∑_α|c_0|^2) (2∫ Im{A_0 A_1^*} δ̃ dΠ_uX / ∫ |A_0|^2 δ̃ dΠ_uX), where δ̃ = (2π)^4 δ^4(p_i - p_f) for the initial and final state momenta and dΠ_uX is the final state phase space factor. From (<ref>) we see that to have a CP asymmetry, there must be a complex phase in the product of the couplings as well as a non-zero relative phase between the two matrix elements A_0 and A_1. To meet this last condition, we need on-shell intermediate states in the loop diagrams so that their matrix elements have a complex phase relative to the tree-level one. This, in turn, imposes a lower bound on the mass of N_1 from the mass of the colored scalar X, which should be above the TeV scale due to LHC bounds, M_N_1 > M_X ≳ a few TeV.
Assuming that the couplings are complex[There is no reason a priori for the couplings to be real. In the following sections, we comment on possible sources for the complex phases.], and using the diagrams in Fig. <ref>, we obtain ϵ_CP^N_1 = ∑_i,j Im(λ_i1 λ_i2^* λ_j2^* λ_j1)/(24π ∑_i |λ_i1|^2) [3 ℱ_S(M_N_2^2/M_N_1^2) + ℱ_V(M_N_2^2/M_N_1^2)], where the functions ℱ_S,V(x) coming from the loop diagrams are defined as ℱ_S(x) = 2√(x)/(x-1), ℱ_V(x) = √(x)(1 + 1/x). Assuming that there are no flavor hierarchies between the λ_iα couplings, we take λ_iα = λ_uα for all i=1,2,3 as a simplifying approximation. Then the CP asymmetry is given by ϵ_CP^N_1 = (3|λ_u2|^2/24π) [3 ℱ_S(M_N_2^2/M_N_1^2) + ℱ_V(M_N_2^2/M_N_1^2)] sinϕ, where ϕ is the complex phase of the product of the couplings λ_u1 λ_u2^* λ_u2^* λ_u1. Next, we turn our attention to the N_1 yield Y_N_1. Before the decay of N_1, the yield will be constant once the early-universe processes cease due to the expansion rate. Y_N_1 is set either thermally or non-thermally, depending on the physical processes at play and the values of couplings and masses. The important scale that distinguishes the two cases is T_FO, the freeze-out temperature of the processes that change the number density of N_1 in the thermal bath. In this case, the relevant processes are the decay of N_1, its inverse decay, and N_1 annihilations, as shown in Fig. <ref>. Once the freeze-out conditions are satisfied, the inverse decay and annihilation processes will stop happening, leading to the following conditions, Γ_u X → N_1/(2H) |_T^inv._FO = 1, Γ_N_1 N_1→ u u/(2H) |_T^ann._FO = 1. In these, Γ_i are the process rates for the two reactions, and T^inv._FO and T^ann._FO are their respective freeze-out temperatures. Since we will be working with small couplings, we can safely assume that the freeze-out temperatures of these reactions will be high, above the mass of N_1. Therefore, N_1 is produced in equilibrium at high energies in the thermal scenario, and the thermal distribution determines its yield. As the inverse decay and annihilations freeze out, only the decay process will change the number density of N_1, mainly after the lifetime of N_1 has elapsed. As argued before, the out-of-equilibrium decays of N_1 are responsible for the CP and baryon number violation required for baryogenesis. In addition, we need to compute the N_1 lifetime to know when baryogenesis mostly occurs. A long enough lifetime is required to reach the post-sphaleron baryogenesis window, as inverse decays and annihilations usually freeze out at higher temperatures. The lifetime of N_1 is given by τ_N_1 = 1/Γ_N_1→ u X = (16π^2/3|λ_u1|^2) m_N_1^3/(m_N_1^4 - m_X^4), where we included all the decay channels for the different flavors u_i = {u,c,t}. Assuming that N_1 and X have masses above the few-TeV region[In this model, N_1 and X can be significantly heavier than a few TeV without changing the mechanism and the coupling bounds we derived.], we place a bound on the coupling λ_u1 by requiring the lifetime to be larger than the cosmological time for sphaleron processes on the one hand and lower than the time of BBN on the other. This results in (τ_BBN ≈ 10 s) ≥ τ_N_1 ≥ (τ_Sph ≈ 10^-12 s) ⇒ 10^-14 ≲ λ_u1 ≲ 10^-7. To compute the thermal yield, we assume that the early-universe processes decouple relativistically, as this leads to the largest value of Y_N_1. As we will see shortly, this requirement will minimize the necessary tuning between the different couplings λ_u1 and λ_u2 when fixing the observed value for the baryon asymmetry.
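Before evaluating the yield, the lifetime window quoted above can be checked numerically; the sketch below uses the lifetime expression and the sphaleron/BBN bounds from the text, while the benchmark masses are illustrative choices of ours.

import math

HBAR_GEV_S = 6.582e-25          # hbar in GeV*s, converts a width into a lifetime

def tau_N1(lam_u1: float, m_N1: float, m_X: float) -> float:
    """Lifetime of N_1 in seconds from the width quoted in the text."""
    gamma = 3.0 * lam_u1**2 / (16.0 * math.pi**2) * (m_N1**4 - m_X**4) / m_N1**3
    return HBAR_GEV_S / gamma

m_N1, m_X = 5000.0, 3000.0      # benchmark masses in GeV (illustrative)
for lam in (1e-8, 1e-10, 1e-13):
    t = tau_N1(lam, m_N1, m_X)
    ok = 1e-12 < t < 10.0       # post-sphaleron but pre-BBN window
    print(f"lambda_u1 = {lam:.0e}:  tau = {t:.1e} s  (in window: {ok})")
# For TeV-scale masses, the edges of this window correspond to the
# order-of-magnitude range 1e-14 < lambda_u1 < 1e-7 quoted in the text.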
For relativistic freeze-out, the equilibrium distribution gives the following yield Y_N_1=Y_N_1^EQ≃45 ζ(3)/2π^4g_N/g_*,S(T), where g_N is the N_1 effective number of internal degrees of freedom, and g_*,S is the total entropic effective number of degrees of freedom[The number of effective degrees of freedom is approximately twice the SM since we need to include the twin states as they are coupled to the SM bath through the Higgs portal.]. Finally, we impose the correct baryon asymmetry in order to constrain the parameters of the theory. Using (<ref>), (<ref>) and (<ref>) we have Y_Δ B=45 ζ(3)/16π^5g_N/g_*,S(T)|λ _u2|^2 [ 3 ℱ_S ( M_N_2^2/M_N_1^2 )+ℱ_V ( M_N_2^2/M_N_1^2 ) ] sinϕ. The observed baryon abundance measured by the Planck telescope <cit.> is Y_Δ B^exp=(8.75± 0.23)× 10^-11. Then, we can write (<ref>) as Y_Δ B/8.7× 10^-11=(213.5/g_*,S(T_FO)) ( 3 ℱ_S( 1.5 ) +ℱ_V( 1.5 )/15.3) (|λ _u2|sin^1/2ϕ/2.3× 10^-4)^2 . We have selected m_N_2^2=1.5  m_N_1^2 as a benchmark point, yet the final abundance only exhibits a weak dependence on the specific choice of mass splitting between the neutral fermions. For larger splittings, the value of the coupling |λ_u2|sin^1/2ϕ is expected become slightly larger. As we will argue later, we do not have any theoretical information on the origin of ϕ, the relative phase in the couplings. Because of this, we can assume the phases have values uniformly distributed from 0 to 2π[Note that this range means that the phase sinϕ can be negative. However, we can redefine what particle or anti-particle means in this case, thus always making the baryon asymmetry parameter positive.]. If this is the case, the quantity sinϕ is naturally expected to be an order 𝒪(1) parameter. Therefore, we see from (<ref>) that we can fix the observed value for the baryon asymmetry assuming that the coupling of N_2 is of order λ_u2∼ 10^-4. This value means that in this specific model of low-temperature baryogenesis, a coupling hierarchy between λ_u1 and λ_u2 is necessary. The coupling λ_u1 must be smaller in order for N_1 to be long-lived enough to get to post-sphaleron temperatures, and λ_u2 must be larger to reproduce the observed value of the baryon asymmetry, i.e. 10^-14≲ |λ_u1| ≲ 10^-7, |λ_u2|∼ 10^-4. On the other hand, this hierarchy of couplings is only necessary if we assume thermal production of N_1. Conversely, N_1 could be non-thermally produced if an additional mechanism was active after the freeze-out temperature of the processes in Fig. <ref>. One possibility is the production via the decay of a new heavy particle, for example, the "reaheaton" τ, a scalar field that induces a reheating period in early cosmology. The reheaton could be originated in different BSM scenarios, like non-thermal DM sectors, inflationary models, or SUSY/string models for the early universe <cit.>. The specific origin of this particle is beyond the scope of our work. The advantage of the N_1 non-thermal production mechanism is that there is no need for a hierarchy between the couplings λ_u1 and λ_u2, such as the one in (<ref>) and (<ref>) for the thermal case. We may then consider, for simplicity, that both couplings are equal in absolute value and that N_1 decays shortly after the reaheton decay. The prompt decay of N_1 is achieved by a larger λ_u1 coupling, of order λ_u1∼ 10^-4. 
In this case, the overall baryon asymmetry is given by Y_Δ B = n_τ/s(Γ(τ→ N_1 → B)-Γ(τ→ N_1 →B)/Γ(τ→tot)) =Y_τ Br_N_1 ϵ_N_1^CP Where ϵ_N_1^CP is given by (<ref>), Y_τ is the non-thermal yield due to the decay of the reheaton, and Br_N_1 is the branching ratio of the reheaton decay into N_1. We can estimate the non-thermal yield as a function of the reheating temperature by calculating the number density at this earlier matter-radiation equality epoch. The nonrelativistic energy density of the reheaton-dominated universe is ρ≃ m_τ n_τ. At the radiation epoch, we have the energy density, ρ_τ=π^230g_*(T) T^4. Therefore, we have n_τ∼^4m_τ and we can obtain the reheaton yield as Y_τ = n_τ/s_RH≃3/4/m_τ. Here, s_RH=2π^245g_*,S()^3 is the entropy at the reheating temperature. We also used that the effective degrees of freedom g_*(T) and g_*,S(T) are approximately equal at this early epochs. The reheating temperature can be estimated by the freeze out of the reheaton processes in the early universe. The reheaton decay rate can be obtained assuming that since it is a long-lived particle, its decay is possibly mediated by a nonrenormalizable operator. For example, in Refs. <cit.> a dimension five operator was used, resulting in Γ_τ=α^2/2πm_τ^3/M_*^2 where α is the effective coupling of the processes and M_* is some high scale of the theory. Then, comparing with the Hubble rate at the freeze-out temperature, we have ≃Γ_τ^1/2 M_Pl^1/2/g_*^4(T)=α m_τ^3/2/(2π)^1/2g_*^4(T)M_Pl/M_*^2. Then, assuming M_*∼ M_Pl, the reheaton yield (<ref>) is Y_τ≃3/4α/(2π)^1/2g_*^4(T)(m_τ/M_Pl)^1/2 With equation (<ref>) and (<ref>), we see that Y_τ≲ 10^-3. Notice that the reheating temperatures are below the range of temperatures in which spharaleons processes are active. Because of this, the couplings λ_u1 can be larger. Consequently, N_1 does not need a long lifetime, and baryogenesis will occur at lower temperatures, close to the reheating temperature. Parametrically, we can write the baryon asymmetry for the non-thermal case as Y_Δ B/8.7× 10^-11= ( 213.5/g_*(T_) )^4 ( m_τ/100 )^1/2( 3 ℱ_S( 1.5 ) +ℱ_V( 1.5 )/15.3) (α^1/2 |λ _u2|sin^1/2ϕ/1.1× 10^4)^2 . Notice that in the non-thermal case, the effective degrees of freedom g_*(T_) can assume values with different orders of magnitude depending on the chosen reheating temperature. The reheaton may decay at any time between post-sphaleron and prior to the BBN time. In the numerical example above, we chose a reheating temperature before the SM and Twin QCD phase transition, which yields a total effective degrees of freedom of g_*≃ 213.5. To conclude this section, we emphasize that our proposal is independent of the specific details of the visible sector baryogenesis model. Reproducing the observed baryon asymmetry is sufficient for the purpose of this work. Our primary focus will be to demonstrate how the 2 symmetry ensures the origin of the dark matter abundance in the twin sector, with a particular focus on the baryon asymmetry dependence on the phase and coupling given either by (<ref>) or (<ref>). What is clear is that a generic baryogenesis model that relies on the out-of-equilibrium decay of some new particle should have a similar dependence on these parameters. As such, our findings have implications beyond the specific model we have presented. 
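As a closing consistency check of the thermal estimate, the benchmark numbers above can be recombined directly; the sketch below assumes g_N = 2 internal degrees of freedom for N_1 and sinϕ = 1, both of which are our own illustrative inputs.

import math

zeta3 = 1.2020569
g_N, g_star = 2.0, 213.5            # g_N = 2 assumed for the neutral fermion
lam_u2, sin_phi = 2.3e-4, 1.0       # benchmark coupling and maximal phase
delta = 1.5                          # benchmark mass splitting M_N2^2 / M_N1^2

F_S = 2.0 * math.sqrt(delta) / (delta - 1.0)
F_V = math.sqrt(delta) * (1.0 + 1.0 / delta)
bracket = 3.0 * F_S + F_V

Y_B = (45.0 * zeta3 / (16.0 * math.pi**5)) * (g_N / g_star) \
      * lam_u2**2 * bracket * sin_phi
print(f"bracket = {bracket:.1f},  Y_Delta_B = {Y_B:.1e}")
# Gives roughly 9e-11, close to the observed value of 8.7e-11.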
We leave the extension of the model to higher temperatures via leptogenesis, the achievement of baryogenesis without hierarchical couplings, and the origin of the reheaton in the non-thermal case for future work that focuses explicitly on baryogenesis. §.§ Twin Darkogenesis Now that we have a successful baryogenesis model, we can compute the corresponding DM asymmetry using the twin mechanism discussed in Section 2. The 2 mirror symmetry results in a twin baryon asymmetry that generates the dark matter abundance in the twin sector. Then, to compute the DM abundance, we use the baryogenesis model of Section <ref>. This approach means that, analogously to the SM sector case, we introduce an out-of-equilibrium, CP, and baryon number violating decay. The new twin sector particles are two neutral fermions Ñ_1,2 and a twin-colored scalar X̃ in the (3,1)_2/3 representation of twin QCD, S̃Ũ(3). Then, we can add the following interactions to the twin part of the theory, Δℒ_Bgen^twin= λ_iαN_αX_a (u_R)^i_a + ξ_ijϵ^abcX_a (d_R)^i_b (d_R)^j_c + h.c. Here, i,j are the twin-quark generation indices, α=1,2 is the neutral fermion flavor, and a,b,c are twin-color indices. The tilde superscripts indicate that all quantities are associated with the twin sector. The mechanism for generating the ADM abundance is the result of imposing the 2 symmetry on the baryogenesis mechanism of the previous section. The out-of-equilibrium decay of the twin Ñ_1 violates CP and baryon number and generates a CP asymmetry. As for the case of the N_1, here, the Ñ_1 yield can be obtained thermally or non-thermally. Then, we can write the DM asymmetry as the twin baryon asymmetry parameter as Y_DM^ thermal/8.7× 10^-11 =(213.5/g_*,S(T_FO)) ( 3 ℱ_S( 1.5 ) +ℱ_V( 1.5 )/15.3) (|λ _u2|sin^1/2ϕ/2.3× 10^-4)^2, where the expression above corresponds to the thermal determination of the yield. In the non-thermal process, we would have a similar expression for the DM asymmetry, except for the non-thermal yield (<ref>). Our work does not introduce any new interactions between the visible and twin sectors generating the baryon and DN asymmetries. In this way, we emphasize that the mirror 2 mechanism uniquely gives their common origin without any need for cogenesis and asymmetry transfer between the two sectors. Thus, this method for realizing the ADM idea is similar to previous models of mirror DM <cit.>. One could worry that additional renormalizable interactions could be present or generated in the theory. However, these are very suppressed in our model. Since the particles we introduced in the SM sector do not interact with the Higgs directly and the Higgs portal is the only communication between the twin and SM sectors, any visible-twin interactions happen only at multiple loop order. Lastly, the MTH is expected to be UV completed near the cutoff of the theory. Therefore, one could imagine that both Δℒ_Bgen and Δℒ_Bgen^twin have a common origin approximately at the scale 4π f. In this case, there could be more renormalizable portals beyond the twin Higgs. However, to introduce any new portals, we would need to make assumptions about the structure of the UV theory, which is beyond the scope of this paper. The most important aspect of the baryogenesis extensions we added to both sectors is that they leave the hierarchy problem unaffected, as there are no new interactions with the Higgs. Since the DM abundance is larger than the baryon abundance, some source of misalignment will be necessary to achieve darkogenesis. 
The relation between the abundances is Ω_DM/Ω_B= n_DM/n_Bm_DM/m_B∼ 5. As discussed in Section <ref>, if there is a process that enforces n_B∼ n_DM, we must have m_DM∼ 5m_B. It is difficult to achieve this mass around 5 with only soft 2 breaking in the MTH. To see this, we observe that given that DM is a twin nucleon, the ratio of the QCD and twin QCD confinement scales can predict the ratio between the DM and nucleon masses. In appendix <ref>, we derive the ratio of the two QCD scales, which is given by Λ_/Λ _≃ ( f/v )^2/9 This scaling would result in a twin nucleon mass not much above 1. Thus, a hard 2 breaking was usually assumed in QCD running couplings to make the twin nucleon heavier. The central point of this work is to show that it is possible to have a ≃ 1 twin nucleon DM candidate without resorting to hard 2 breaking. To show that, we can rewrite (<ref>) to make explicit the dependence on the baryon and twin baryon asymmetries, Y_Δ B and Y_DM. This results in Ω_DM/Ω_B=m_p/m_pY_ DM/Y_Δ B(1-r/1-r), where we defined the baryon and twin baryon fractional asymmetries r and r as r=n_B/n_B, r=n_B/n_B. Because we assumed that the SM and twin sectors have the same mechanism to generate the asymmetry, the fractional asymmetries are expected to be the same. Then, using either (<ref>) or (<ref>) and m_p^twin/m_p≃ (f/v)^2/9, we can write (<ref>) as Ω_DM/Ω_B=( f/v)^2/9(3 ℱ_S ( Δ_N )+ℱ_V ( Δ_N ) / 3 ℱ_S ( Δ_N )+ℱ_V ( Δ_N ) )|λ_u2|^2/|λ_u2|^2|sinϕ/sinϕ|, where Δ_N=M_N_2^2/M_N_1^2 and Δ_N=M_N_2^2/M_N_1^2. Since we are not interested in introducing hard 2 breaking, we set |λ_u2|=|λ_u2|. Also, we assume that the mass splittings Δ_N and Δ_Ñ are the same in both sectors. In any case, and as already discussed in Section <ref>, the final abundances only depend weakly on the change of these parameters. Finally, because of the 2 symmetry, the fractional asymmetries in both sectors should be the same. In this way we can rewrite (<ref>) as Ω_DM/Ω_B≃( f/v)^2/9|sinϕ/sinϕ|≃ 5. Therefore, it is possible to satisfy the ADM requirement for the DM to baryon abundance if the phases in the visible and twin sectors are misaligned. In Fig. <ref>, we show the allowed phases needed in order to satisfy relation (<ref>). As we argued in section <ref>, the misalignment of the phases can be viewed as an IR effect and does not qualify as hard 2 breaking. One source of misalignment comes from the relative phase between the SM and twin vevs, δ, defined by (<ref>). Once there is a relative phase difference in the SM and twin masses, the CKM field redefinition introduces different phases on the couplings of (<ref>) and (<ref>). SM: (Flavor Basis) λ_iα N_αX^a (u_R^ i)_a ⟶λ_iα U^ij_u,R N_αX̅^a (u_R^ j)_a (Mass Basis). Twin: (Flavor Basis) λ_iα N_αX^a (u_R^ i)_a ⟶λ_iα e^iδ U^ij_u,R N_αX̅^a (u_R^ j)_a (Mass Basis). Here, U_u,R and e^iδU_u,R are the unitary matrix used to diagonalize the Yukawa terms in the SM and twin sector, respectively. Even if the exact 2 ensures λ_iα=λ_iα in (<ref>) and (<ref>) if the phase is non-zero we expect different imaginary parts of the visible and twin couplings. λ_iα U^ij_u,R≠λ_iα e^iδU^ij_u,R. We conclude that having different phases is not a hard breaking of the 2 since it is still a symmetry of the UV theory and does not affect the hierarchy problem in any way. Concerning the UV theory, we are not addressing the specific structure of the twin Higgs model in the UV; we only assume that the 2 can arise as an exact symmetry at those scales. 
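To illustrate how much misalignment is needed, the relation above can be inverted numerically; in the sketch below the f/v benchmarks and the choice of a maximal twin phase are illustrative.

import math

# Required |sin(phi_twin)/sin(phi)| so that Omega_DM/Omega_B = 5 with only
# soft Z_2 breaking, i.e. m_DM/m_B = (f/v)^(2/9).
for f_over_v in (3.0, 5.0, 10.0):
    mass_ratio = f_over_v ** (2.0 / 9.0)
    required = 5.0 / mass_ratio
    # e.g. a maximal twin phase and a modest visible phase already suffice
    phi_twin = math.pi / 2.0
    phi_vis = math.asin(math.sin(phi_twin) / required)
    print(f"f/v = {f_over_v:4.1f}: need |sin(phi~)/sin(phi)| = {required:.1f}, "
          f"e.g. phi~ = pi/2 and phi = {phi_vis:.2f} rad")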
Then, going to IR scales, the phase misalignment mechanism could have other sources beyond the twin Higgs potential. In general, it is difficult to point out all the sources of phase misalignment since this would imply that we have complete knowledge of the flavor sector of the theory[Even in the SM, we do not have information on the origin of the CP phase in the CKM matrix for example.] and all the relaxation mechanisms that took place in the thermal evolution of the model. Therefore, we are justified in treating the phases ϕ and ϕ as unknown parameters and scanning for the values that reproduce the DM to baryon ratio as in Fig. <ref>. Once we reproduce the observed abundances of DM and visible matter, as well as the baryon asymmetry, we can study the phenomenological implications of the model and see how it can be constrained or observed in the future. § PHENOMENOLOGY Once we have obtained the observed baryon asymmetry from the twin ADM model, we can study the phenomenological constraints and signals of the model. Usually, the most important constraints on twin Higgs models come from cosmology. In the original implementation of the MTH, a mirror SM copy in a hidden sector induces significant contributions to dark radiation at Cosmic Microwave Background (CMB) and Big Bang Nucleosynthesis (BBN) epochs. If the twin sector had a thermal history similar to the SM, relativistic twin neutrinos and photons would contribute to the total effective number of relativistic degrees of freedom of the universe. Many solutions in the literature deal with the potential cosmological problems of Twin Higgs models. One straightforward approach is to decrease the number of relativistic degrees of freedom in the Twin sector with explicit 2 breaking. As discussed in section <ref>, this strategy is used in the Fraternal Twin Higgs model where only the third generation of fermions are kept in the Twin sector <cit.>. This structure is the minimal particle content to address the electroweak Hierarchy Problem. However, it relies on the hard breaking of the 2 symmetry at the UV scale, which does not apply to this work. Another possibility is an asymmetric reheating that injects more energy into the SM sector than the twin sector. This possibility was considered in <cit.>. In asymmetric reheating, a massive long-lived particle freezes out from the thermal bath while still relativistic. As the universe expands, they become non-relativistic and decay in both sectors after they decouple. However, in these mechanisms, the massive particle decays are preferentially arranged to decay into the SM sector. As a consequence, the temperature in the visible sector will be bigger than in the Twin sector, alleviating the Δ_ tension. In principle, we could implement asymmetric reheating in the decays of N_1 of our model. However, since there are preferential decays to the SM, the numerical predictions we found in the last section would change. Instead, we assume a more straightforward solution to the Δ_ tension and leave other implementations for future work. A simple solution to the cosmological problems of the MTH was introduced by <cit.> and worked out in <cit.>. The idea is to give a large mass to twin neutrinos, making them decouple non-relativistically from the thermal plasma much earlier in cosmic history. Effectively, the twin neutrino contribution to Δ_ is removed. We assume that a seesaw-like mechanism exists and is responsible for generating large twin neutrino masses. 
The implementation details can be found in previously mentioned literature on the twin neutrinos solution to Δ N_ eff. Once the twin neutrinos are heavy, our scenario in the MTH has a single viable candidate for DM - twin neutrons. The argument proceeds as in <cit.>. Since Δ N_ eff with heavy twin neutrinos is within experimental bounds, we can keep the twin photon in the spectrum to enforce the 2 symmetry[If the SM prediction of Δ N_ eff remains confirmed with future data, keeping the twin photon in the model can become problematic beyond the 3σ level. If this becomes the case, one could implement asymmetric reheating to avoid the cosmological bounds or have a massive twin photon as in <cit.>.]. Only twin baryon number B̃ is generated with no twin Lepton asymmetry. Therefore, all twin leptons can annihilate, and twin electrons are not DM candidates[If the twin electrons do not annihilate, they could potentially add to the abundance to the point of leading to overclosure.]. The charge neutrality of the universe requires that there is no net production of twin protons since they cannot combine into neutral objects. Therefore, after the twin QCD phase transition a net twin neutron ñ number is generated. Finally, twin nucleosynthesis cannot proceed in the presence of heavy twin neutrinos since there are no protons to combine with neutrons. Because the neutron is stable and the only twin relic, we conclude that dark matter is made entirely of ñ. Now that we have established that the MTH DM candidate is the twin neutron, we can study the direct detection signals. As previously mentioned, we can estimate the mass of the ñ to be near that of the visible nucleons, corrected by the twin sector scale. The precise relation follows from the definition of the QCD scales in both sectors. The leading order contribution to Λ_ QCD arises from the running of the strong couplings coupling, α_s(Q^2)=1/b_0(N_f) lnQ^2Λ^2_ QCD Here, we have defined b_0(N_f)=33-2 N_f and N_f is the number of active quark flavors lighter than the relevant scale m_f<Q. Because of the N_f dependence, there will also be an effect on Λ_ QCD due to the quark mass thresholds. In appendix <ref>, we compute the mass-threshold contributions due to integrating out the heavy quark states f=t,b,c to the QCD scale. If we divide the QCD and twin QCD scales, we obtain the following relation Λ_ QCD/Λ_ QCD∼(y_t/y_ty_b/y_by_c/y_c)^2/27(f/v)^2/9exp[-2π/9(1/α̃_s-1/α_s)] Assuming there is no hard 2 breaking, we can set ỹ_f=y_f and α̃_s=α_s. Finally, there is only a soft 2 breaking due to the heavier vev of the twin sector, and we recover equation (<ref>), Λ̃_/Λ _≃ ( f/v )^2/9. Since the neutron and twin-neutron masses are proportional to their respective QCD scales, we can write m_ DM = m_n=(f/v)^2/9 m_ n. Then, assuming that f/v≳ 3 from the LHC Higgs coupling measurements <cit.> and f/v≲ 10 to limit the fine-tuning of the model, we arrive at a rather narrow range of DM mass in this model: 1.2≲ m_ DM≲ 1.6 . Next, we estimate the nucleon-DM cross-section. The starting point is understanding the halo's local dark matter profile. Because twin dark matter is twin neutrons, its self-interactions should be of the order of the nucleon cross-sections, around ∼ 1cm^2/g at energies of a few . Because of this value, we observe that twin DM is within or borderline close to the bounds from small-scale structure formation and merging clusters <cit.>. 
While the suppression of small-scale structure could be a signal of this or other similar ADM models, we leave this part for future work. Several complications are still under debate in the literature regarding the need for suppression in small scales[The reliability of the collisionless cold dark matter simulations to predict small-scale structure suppression and the role of baryonic feedback are some examples of recent discussions in the literature.]. We therefore assume the twin-neutron self-interaction cross-section satisfies the bound. Assuming this, we then expect twin dark matter to have an approximately uniform distribution within the galaxy halo, allowing for the usual dark matter halo profile and velocity distribution. Direct detection of twin dark matter assumes that the two sectors communicate. This communication can occur either through the Higgs portal or other operators at the UV completion scale of the MTH model. Since we are interested in the scattering of nuclei and twin DM at low energies, we can use the effective theory of light quarks and twin quarks. Generically we can write ℒ_ eff=c_q q^ij/Λ^2(q_iΓ q_i)(q_jΓq_j),Γ,Γ=1, iγ_5,γ^μ, γ_5γ^μ,σ^μν, where, i=u,d,s and j=ũ,d̃, s̃. c_q q^ij are the Wilson coefficients of the operator and Λ is some scale high compared to 1. In general, we can write different Lorentz structures, Γ. However, for our purposes, we are only interested in effective quark operators that generate spin-independent interactions that survive the point-like nucleon approximation. Therefore, we only keep Γ,Γ=1,γ^μ since these generate spin independent NR interactions <cit.>. In the case of (<ref>) being generated by the Higgs portal interaction, we have the following scalar 4-fermion operator, ℒ_ eff^ higgs=y_i y_j/m_h^2ξ (q_i q_i)(q_jq_j), where ξ=v^2/f^2 and we used the 2 symmetry to write the twin quark Yukawa coupling ỹ_j to be equal to the visible Yukawa couplings. In this case, we expect this operator to generate a small nucleon cross-section since a double suppression comes from the Yukawa couplings of the light SM and twin quarks. The other possibility is that the effective operators (<ref>) are generated at the MTH cutoff. Considering this case, we can write the two operators that generate spin-independent non-relativistic interactions, a scalar and a vector operators: ℒ_ eff^Λ_S=c_S/Λ_S^2(q_i q_i)(q_jq_j), ℒ_ eff^Λ_V=c_V/Λ_V^2(q_iγ^μ q_i)(q_jγ_μq_j). Furthermore, we absorb the coefficients of the scalar and vector operators into the definition of the cutoff scales Λ_S,V, effectively setting c_S=c_V=1. The spin-independent cross-section, σ_SI, can be calculated using standard methods as described in appendix <ref>. For the scalar and vector operators, σ_SI is given by σ_SI^ scalar=μ_Nñ^2/πf_N^2 f_ n^2/Λ_S^4, σ_SI^ vector=μ_Nñ^2/πb_N^2 b_ n^2/Λ_V^4. where μ_Nñ is the reduced mass of the twin-neutron and nucleon system, and the zero momentum constants are derived from the form factors as f_N=∑_q f_Tq^(N)≃ 0.3, f_n=∑_q f_Tq̃^( n)≃ 0.3, b_N=∑_q F_1^q,N(0)=3, b_n =∑_qF_1^q,N(0)=3 . Notice that the vector form factors are ten times larger than the scalar ones at zero momentum. Since the form factor goes with the fourth power in (<ref>), there will be a significant difference in reach for the scales in the vector and scalar operators. In Figures <ref> and <ref>, we show the spin-independent twin-neutron nucleon scattering cross-section parameter space for the scalar and vector operators, respectively. 
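As a rough orientation for these figures, the cross-sections can be evaluated at benchmark points; in the minimal sketch below, the values f/v = 3, Λ_S = 40 TeV, and Λ_V = 10 TeV are our own illustrative choices.

import math

GEV2_TO_CM2 = 3.894e-28          # (hbar c)^2: 1 GeV^-2 = 3.894e-28 cm^2

m_n = 0.9396                     # neutron mass in GeV
m_dm = 3.0 ** (2.0 / 9.0) * m_n  # twin-neutron mass for f/v = 3
mu = m_n * m_dm / (m_n + m_dm)   # nucleon / twin-neutron reduced mass

f_N = f_nt = 0.3                 # scalar zero-momentum constants
b_N = b_nt = 3.0                 # vector zero-momentum constants

def sigma_si(coeff_vis, coeff_twin, scale_gev):
    """Spin-independent cross-section in cm^2 for an operator at scale_gev."""
    return mu**2 / math.pi * coeff_vis**2 * coeff_twin**2 / scale_gev**4 * GEV2_TO_CM2

print(f"m_DM = {m_dm:.2f} GeV")
print(f"scalar, Lambda_S = 40 TeV: sigma_SI = {sigma_si(f_N, f_nt, 4.0e4):.1e} cm^2")
print(f"vector, Lambda_V = 10 TeV: sigma_SI = {sigma_si(b_N, b_nt, 1.0e4):.1e} cm^2")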
The green rectangle shows the allowed DM mass given by (<ref>) and the scale of the operator for each cross-section. The different plots highlight the contrasting reach of the scalar and vector scales, with high Λ_S down into the neutrino fog. In the scalar case, the Higgs portal appears at a higher effective scale, around Λ_S ∼ 40, due to the double suppression of the first generation Yukawa couplings. The filled regions are the current exclusion bounds from Darkside 2022 data <cit.>, CRESST-III <cit.> and XENON1T <cit.>. In the case of Darkside, nuclear recoils are subject to quenching effects, which cause a reduction in the energy signal due to various mechanisms whose statistics are not fully understood. Because of these effects, <cit.> considered two models to bound the quenching effect region where quenching fluctuations are suppressed (NQ) or unsuppressed (QF). NQ corresponds to the filled solid pink region (DarkSide50 2022), and QF is the DS50 QF curve in Figs. <ref> and <ref>. While the quenching factor can vary between events, it is typically quantified using calibration sources and simulations. Once these analyses are done, the real exclusion region should lie somewhere in between the NQ and QF curves. For the neutrino background, we present the Xenon neutrino fog as defined by <cit.>. The index n, the gradient of the DM discovery limit over some exposure measure, labels the different neutrino fog curves and is given by n=-(dlogσ/dlog N)^-1 , where σ is the discovery limit, and N is the number of events. Given a cross-section experimental sensitivity, this definition means that reducing the sensitivity by a factor of x requires increasing the exposure by x^n. Therefore, future experiments can put exclusion bounds inside the neutrino fog region by having sufficient exposure time. The dashed lines correspond to projections by the SuperCDMS <cit.> and SBC <cit.> experiments. A large portion of the parameter space for twin dark matter will likely be probed in the future, especially for the effective vector operators. Due to the smaller scalar form factors, reaching very high cutoff scales is more challenging, and part of the interesting parameter space is down the n>3 neutrino fog region. Promising strategies beyond maximizing exposure could be adopted to probe this region. One of these is using the directionality of the neutrino flux to reduce their background. For a review of direct detection prospects below the neutrino fog, we point out to <cit.>. We can conclude from the figures above that the interesting scales for the UV completion of the MTH, typically of the order of 10TeV, are beginning to be probed by the Darkside collaboration. This is clearly the case for the vector operator (Figure <ref>). On the other hand, for the scalar operator (Figure <ref>) the suppressed sensitivity resulting from the smaller zero momentum constants in (<ref>) puts this interesting UV completion scale under the neutrino fog, making its detection more challenging. Additionally, the Higgs portal should be always present in the MTH independently of the UV completion. Therefore, reaching the Twin Higgs portal cross section has the potential of excluding or confirming the model. However, due to the double first-generation Yukawa coupling suppression, the signal for direct detection goes deep into the neutrino fog, with difficult experimental prospects. 
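To make the exposure scaling behind the fog index concrete, a short numerical illustration follows; the values of n and the targeted sensitivity gain are arbitrary examples.

# Exposure needed to improve the discovery limit by a factor x scales as x^n,
# where n is the neutrino-fog index defined above.
x = 10.0                                 # target: one order of magnitude in sigma
for n in (1.0, 2.0, 3.0):
    print(f"n = {n:.0f}: a 10x better limit needs ~{x**n:.0f}x more exposure")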
In any case, we see that the direct exploration of the parameter space of this ADM scenario of the MTH is becoming feasible in current and future experiments. To finish this section, we briefly comment on other possibilities for the phenomenology of the presented model. First, bounds from neutron oscillation experiments do not apply here, since the coupling of X to quarks in (<ref>) is antisymmetric in flavor. Additionally, charged X production could be explored at the LHC. This paper assumes that X is heavy enough to be out of reach of collider experiments. However, we are pursuing the collider phenomenology of this low-temperature baryogenesis scenario in a forthcoming publication. § CONCLUSIONS The primary focus of this paper is to present a Mirror Twin Higgs implementation of asymmetric dark matter, as a proof of principle that there is no need to introduce a hard Z_2 breaking in order to have a consistent dark matter candidate. We focused on adding a particular baryogenesis model to the visible sector of the MTH that successfully generates the baryon asymmetry at low temperatures, after sphaleron decoupling. The model relies on the out-of-equilibrium decays of a neutral fermion that violates baryon number and CP symmetry. We showed that the observed baryon asymmetry can be correctly obtained, assuming that the neutral fermion is produced in the early universe, independently of whether this production is thermal or non-thermal. The Z_2 symmetry of the MTH model extends the cosmological mechanism responsible for baryogenesis to the twin sector. This mirroring gives rise to an abundance of asymmetric dark matter predominantly composed of twin neutrons. Misalignment of the complex phases between the visible and twin sectors makes possible a dark matter abundance consistent with the observed value of Ω_DM ≃ 5 Ω_B. This phase misalignment could arise solely as an IR effect, ensuring that no hard Z_2 breaking needs to be introduced. A simple example is vacuum misalignment between the two scalar sectors, leading to different phases in their couplings entering the CP asymmetries. This, as well as similar misalignments in the phases of fields entering the singlet couplings in both sectors, can be an IR effect and can therefore be thought of as soft Z_2 breaking. Preserving the UV Z_2 symmetry of the twin Higgs model is a desirable feature for the electroweak stability of the theory, and its protection guarantees that the solution to the little hierarchy problem remains unspoiled, without the need for further tunings. It is possible to reach the same final DM abundance with different baryogenesis implementations. Because of this, our results are not limited to the specific baryon asymmetry mechanism used in Section <ref>. Once a visible baryon asymmetry is achieved, the Z_2 symmetry and phase misalignment are enough to reproduce the abundance ratio between DM and baryons. Consequently, the mechanism exhibits the potential for generalization to alternative baryogenesis models, such as high-scale leptogenesis or electroweak baryogenesis. These extensions can be explored in future developments of ADM in the MTH framework. Regardless of the chosen baryogenesis model, implementing ADM in the MTH model without hard Z_2 breaking predicts that dark matter consists mainly of twin neutrons with masses ranging from 1.2 to 1.6 GeV. A significant part of the parameter space can be probed assuming an effective interaction between the light quarks and twin quarks.
Part of the parameter space is excluded by the data from the CRESST-III and DarkSide-50 experiments. Promisingly, future experiments such as SuperCDMS and SBC can probe higher effective scales beyond the TeV range. Furthermore, direct detection experiments below the neutrino fog hold significant potential for uncovering the nature of twin asymmetric dark matter. We conclude that the mirror twin Higgs model is a well-motivated BSM approach to address the electroweak stability, the nature of DM, as well as the origin of the baryon asymmetry with the same core concepts. The model presents a compelling candidate for dark matter in the mass range identified above, requiring extensive exploration through future DM detection experiments. The authors thank Ivone Albuquerque, Nicolás Bernal, Chee Sheng Fong and Seth Koren for helpful discussions. They also acknowledge the support of FAPESP grants 2019/04837-9 and 2021/02757-8, and CAPES 88887.816450/2023-00. § QCD AND TWIN QCD SCALES In this appendix we derive the leading-order relationship between the SM QCD and twin QCD scales used throughout the text, Λ̃_QCD/Λ_QCD = (f/v)^{2/9}. Following <cit.>, the derivation makes use of the quark-mass threshold contributions to Λ_QCD. At leading order, the running coupling α_s(Q^2,N_f) can be written for momenta above the top-quark mass as α_s(Q^2,6) = 1/(b(6) log(Q^2/Λ_UV^2)), Q^2>m_t^2. Here, N_f=6 is the number of active quark flavors at high energies and b(N_f)=33-2N_f. Crucially, the UV QCD scale defined in this relation is the same between the visible and the twin sector. The first quark threshold correction appears when we integrate out the top quark. The coupling below the top-quark mass can be written as α_s(Q^2,5) = 1/(b(5) log(Q^2/Λ_UV^2)+c), Q^2>m_b^2, where c is a constant fixed by requiring the matching between the theory with six and five quarks at the top-quark mass scale, α_s(m_t^2,6)=α_s(m_t^2,5). Calculating c we arrive at 1/α_s(Q^2,5) = b(5) log(Q^2/m_t^2) + b(6) log(m_t^2/Λ_UV^2). On the other hand, α_s(Q^2,5) also defines the QCD scale of the theory with only 5 active quarks, Λ_5: 1/α_s(Q^2,5) = b(5) log(Q^2/Λ_5^2), Q^2>m_b^2. Comparing (<ref>) to (<ref>), we arrive at a relation for the 5-flavor QCD scale, Λ_5 = Λ_UV^{b(6)/b(5)} m_t^{1-b(6)/b(5)}. We can repeat the same procedure to obtain the quark-mass threshold contributions to the QCD scale down to the charm quark. Beyond this point, the theory becomes strongly interacting and we cannot perturbatively integrate out the light quark flavors, since they are below the QCD scale. Thus, the definition of the QCD scale includes threshold contributions from the three heavy states, the top, bottom and charm quarks. We can then write Λ_QCD≡Λ_3 = Λ_UV^{b(6)/b(3)} m_t^{(1-b(6)/b(5)) b(5)/b(3)} m_b^{(1-b(5)/b(4)) b(4)/b(3)} m_c^{1-b(4)/b(3)}. Substituting the values of b(N_f) and using the mass–Yukawa relation m_q=y_q v, we obtain Λ_QCD=Λ_UV^{7/9} y_t^{2/27} y_b^{2/27} y_c^{2/27} v^{2/9}. Similarly, for the twin QCD scale we can write Λ̃_QCD=Λ_UV^{7/9} ỹ_t^{2/27} ỹ_b^{2/27} ỹ_c^{2/27} f^{2/9}. Since there is no 2 breaking in the model, we can write the twin Yukawa couplings as ỹ_q = y_q. Finally, dividing (<ref>) by (<ref>) we obtain the proposed relation, Λ̃_QCD/Λ_QCD = (f/v)^{2/9}. § SPIN-INDEPENDENT CROSS-SECTION OF TWIN DM Now, to find the cross-section, we compute the nucleon-DM scattering matrix. To first order in the perturbative expansion, we have the following nucleon amplitude ℳ_ N = ⟨n' N'|ℒ_ eff|n N⟩ = 1/Λ^2⟨ N'| q_iΓ q_i |N ⟩⟨n'| q_jΓq_j |ñ⟩. We use N' and n' to denote the nucleon and twin-neutron final states, respectively. 
The nucleon-spinor bilinears can parameterize each matrix element, ⟨ N'| q_iΓ q_i |N ⟩ = ∑_i=u,d,s F_i^(N)(q^2) u_N'Γ u_N, where F_i^(N)(q^2) are the hadronic form factors associated with the nucleons N=p,n. For direct detection of twin DM, it is sufficient to use the hadronic form factors at zero transferred momentum, since their variation is negligible over the recoil energies considered. In this limit, we can relate the scalar form factors to the fraction of the nucleon mass carried by the light quarks, ⟨ N| qq |N⟩ = m_N/m_q F_i^(N)(0) u_N u_N = f_Tq^(N) u_N u_N. The vector form factors are related to the conserved flavor-singlet vector current associated with the baryon number, ⟨ N| qγ^μ q |N ⟩ = F_1^q,N(0) u_N γ^μ u_N. The form factors at zero momentum can be obtained by perturbative and lattice calculations or by experiment; we used the values from <cit.>. Finally, we can calculate the spin-independent cross-sections, σ_SI^scalar = μ_Nñ^2/π f_N^2 f_ñ^2/Λ_S^4, σ_SI^vector = μ_Nñ^2/π b_N^2 b_ñ^2/Λ_V^4, where μ_Nñ is the reduced mass of the twin neutron and the nucleon, and we have defined the constants f_N=∑_q f_Tq^(N)≃ 0.3, f_ñ=∑_q̃ f_Tq̃^(ñ)≃ 0.3, b_N=∑_q F_1^q,N(0)=3, b_ñ=∑_q̃ F_1^q̃,ñ(0)=3. Notice that the vector form factors are ten times larger than the scalar ones at zero momentum. Since the form factors enter the cross-section (<ref>) to the fourth power, there will be a significant difference in the reach of the vector and scalar probes of the scale of the operator.
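To get a feel for the scalar/vector contrast implied by the constants above, here is a small, hedged numerical sketch of the zero-momentum cross-section formulas. The twin-neutron mass and the effective scale used below are illustrative assumptions, not values taken from the text.

```python
import math

GEV2_TO_CM2 = 3.894e-28      # (hbar*c)^2 conversion: 1 GeV^-2 expressed in cm^2

def sigma_SI(coupling_nucleon, coupling_twin, Lambda_GeV, m_dm=1.4, m_N=0.939):
    """Spin-independent cross-section (cm^2) from the zero-momentum formula above.
    m_dm is an illustrative twin-neutron mass in GeV (an assumption, not a quoted value);
    Lambda_GeV is the effective operator scale."""
    mu = m_dm * m_N / (m_dm + m_N)                       # reduced mass, GeV
    sigma = mu**2 / math.pi * (coupling_nucleon * coupling_twin)**2 / Lambda_GeV**4
    return sigma * GEV2_TO_CM2

Lam = 10e3                                               # 10 TeV effective scale (illustrative)
print("scalar :", sigma_SI(0.3, 0.3, Lam))               # f_N = f_n ~ 0.3
print("vector :", sigma_SI(3.0, 3.0, Lam))               # b_N = b_n = 3
print("ratio  :", (3.0 / 0.3)**4)                        # ~1e4 enhancement for the vector operator
```

The fourth-power dependence on the nucleon constants is what produces the order-10^4 difference in reach between the vector and scalar probes noted above.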
http://arxiv.org/abs/2307.05427v1
20230711164805
Effective Whitney Stratification of Real Algebraic Varieties
[ "Martin Helmer", "Vidit Nanda" ]
math.AG
[ "math.AG", "14P05, 14P10" ]
[MH] Department of Mathematics, North Carolina State University, Raleigh, NC, [email protected] [VN]Mathematical Institute, University of Oxford, Oxford, [email protected] Effective Whitney Stratification of Real Algebraic Varieties Vidit Nanda August 12, 2023 ============================================================= We describe an algorithm to compute Whitney stratifications of real algebraic varieties. The basic idea is to first stratify the complexified version of the given real variety using conormal techniques, and then to show that the resulting stratifications admit a description using only real polynomials. This method also extends to stratification problems involving certain basic semialgebraic sets as well as certain algebraic maps. One of the map stratification algorithms described here yields a new method for solving the real root classification problem. § INTRODUCTION A pair (M,N) of smooth submanifolds of ^n satisfies Whitney's Condition (B) if the following property holds at every point q ∈ N. Given any pair of sequences p_k⊂ M and q_k⊂ N with lim p_k = q = lim q_k, if the limiting tangent space and the limiting secant line T := lim_k →∞ T_p_kM and ℓ := lim_k →∞[p_k,q_k] both exist, then ℓ⊂ T. A Whitney stratification of a subset X ⊂ℝ^n is any locally-finite decomposition of X = ∐_αM_α into smooth, connected nonempty manifolds M_α⊂ X called strata, so that every pair (M_α,M_β) satisfies Condition (B). The main contribution of this note is a practical algorithm for constructing Whitney stratifications of real algebraic varieties. The existence of such stratifications dates back to the work of Whitney <cit.> — every real algebraic variety X admits a Whitney stratification ∐_α M_α such that for each dimension i ≥ 0, the union X_i ⊂^n of all strata of dimension ≤ i is a subvariety of X <cit.>. §.§ From real to complex and back In prior work, we used conormal spaces and primary decomposition to algorithmically stratify complex algebraic varieties <cit.>. In the introductory remarks to that paper, we highlighted the lack of Gröbner basis techniques over as a primary obstacle to performing similar stratifications for real algebraic varieties and semialgebraic sets. We overcome this obstacle in Section <ref> of this paper by constructing, for any real variety X ⊂ℝ^n, the corresponding complex variety X() ⊂^n — this is precisely the vanishing locus of the defining polynomials of X, treated as an ideal in [x_1,…,x_n]. The key insight is that the subvarieties arising from a stratification of X() produced by the methods of <cit.> are also generated by real polynomials; and the real varieties defined by those polynomials constitute a valid Whitney stratification of X. §.§ Stratifying real algebraic maps A stratification of a real algebraic map f:X → Y is a pair of Whitney stratifications of X and Y so that f sends each stratum M ⊂ X smoothly and submersively to a single stratum N ⊂ Y. Whenever f is proper (i.e., if X is compact) then the restriction of f to f^-1(N) forms a locally trivial fiber bundle over N, which in particular implies that the stratified homeomorphism type of the fiber f^-1(y) is independent of the choice of y ∈ N. Our second contribution, carried out in Section <ref>, is to describe an algorithm for stratifying any given f. As with real varieties, the key step is to first consider a complexified version f_:X() → Y() of the morphism, and to then employ the methods of <cit.>. 
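As a concrete, self-contained illustration of Condition (B) stated in the introduction, the following sketch checks it numerically along one pair of sequences for a simple example of our own choosing (the cone V(x^2+y^2-z^2), stratified as the punctured cone plus the origin); it is not part of the authors' algorithm.

```python
import numpy as np

# Condition (B) check for the cone X = V(x^2 + y^2 - z^2), with strata
# M = X \ {0} and N = {0} (an illustrative example, not from the paper).
# For p_k -> 0 on M and q_k = 0 on N, the limiting secant line should lie
# in the limiting tangent plane, i.e. be orthogonal to the limiting normal.

def unit_normal(p):                   # gradient of x^2 + y^2 - z^2 spans the normal line of M
    x, y, z = p
    n = np.array([2 * x, 2 * y, -2 * z])
    return n / np.linalg.norm(n)

q = np.zeros(3)                                   # the origin, our stratum N
for k in range(1, 6):
    t = 10.0 ** (-k)
    p = np.array([t, 0.0, t])                     # points of M approaching q
    secant = (p - q) / np.linalg.norm(p - q)      # unit secant direction of [p_k, q_k]
    print(k, abs(secant @ unit_normal(p)))        # -> 0: the secant stays in the tangent plane
```

Along this particular sequence the limits exist trivially and the inner product vanishes, so Condition (B) holds; a failure of Condition (B) would show up as a nonzero limiting inner product.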
§.§ Stratifying full semialgebraic sets The ability to algorithmically stratify real varieties also allows us to produce Whitney stratifications of certain basic semialgebraic sets. In Section <ref>, we consider full semialgebraic sets B of the form X ∩ C, where X is a real algebraic variety and C ⊂^n is a region carved out by polynomial inequalities of the form f_i(x) ≥ 0 whose interior is an open n-dimensional submanifold of ^n. Let Y_C be the real hypersurface defined by the vanishing of ∏_i f_i. Our third contribution here is to describe a mechanism for inducing a Whitney stratification of B from Whitney stratifications of the real varieties X and X ∩ Y_C. We note that full semialgebraic sets arise rather frequently in applications, so we expect their stratifications to be useful across a broad spectrum of practical problems. §.§ Dominant maps and real root classification The real root classification problem seeks to describe how the number of real roots of a parametric polynomial system varies as a function of the parameters. This problem appears in a variety of applied contexts, including chemical reaction networks <cit.>, medical imaging <cit.>, computer vision <cit.>, kinematics and robotics <cit.>, ordinary differential equations <cit.>, and quadrature domains <cit.>. In Section <ref> we describe an algorithm for stratifying dominant polynomial maps f:X → Y between real varieties of the same dimension; we are able to partially recreate the stratified fiber bundle property of proper maps by carefully analysing and decomposing the locus of points at which f fails to be proper. In Section <ref>, we show how dominant map stratifications help solve the real root classification problem by decomposing the parameter space into certain strata over which the number of roots is locally constant. § STRATIFYING REAL ALGEBRAIC VARIETIES Let [x_1,…,x_n] be the ring of real polynomials in n indeterminates, and fix a radical ideal I of this ring. By definition, the vanishing locus X := (I) constitutes a real algebraic subvariety of ^n. Since [x_1,…,x_n] is a subring of [x_1,…,x_n], the ideal I similarly defines a complex algebraic subvariety _(I) of ^n, which we will denote by X(). Let X_ reg denote the manifold of smooth points in X. In this section _(X) will denote the dimension of manifold X_ reg and _(X()) will denote the dimension of the manifold (X())_ reg. Our immediate goal here is to show that certain Whitney stratifications of X() induce Whitney stratifications of X. We let ι:^n ^n be the embedding of real points in complex Euclidean space. Let X ⊂^n be a real algebraic variety. * The embedding ι identifies X with the real points of X(). * Assume that _ X equals _ X(). If ι(p) is a smooth point of X() for some p ∈ X, then p is a smooth point of X. The first assertion is a tautology. Turning to the second assertion, set d := _ X() = _ X. Since the roots of real polynomials occur in complex conjugate pairs, the variety X() is invariant under complex conjugation and ι(X) equals the fixed point set of this conjugation. Noting that ι(p) is a smooth point by assumption, the tangent space T_ι(p)X() exists and has complex dimension d; this tangent space also inherits invariance under complex conjugation. Thus, T_ι(p)X() is the complexification of a real d-dimensional vector space V whose elements consist of all real tangent vectors at ι(p); this V is evidently isomorphic to T_pX, as desired. 
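The Gröbner-basis computations alluded to above stay entirely within real (here rational) coefficients. The toy sketch below is not the WhitStrat algorithm referenced later; it only computes the singular locus of the Whitney umbrella via the Jacobian ideal, an example we chose for illustration, to show the kind of symbolic step involved.

```python
from sympy import symbols, groebner, diff

x, y, z = symbols('x y z')
f = x**2 - z * y**2                  # the Whitney umbrella V(f) in R^3 (illustrative example)

# Singular locus: append the partial derivatives to the defining ideal and
# compute a Groebner basis; every coefficient stays rational/real throughout,
# which is the point exploited in this section.
J = [f, diff(f, x), diff(f, y), diff(f, z)]
G = groebner(J, x, y, z, order='lex')
print(G)   # generators such as x, y**2, y*z -- their common zero set is the z-axis
```

The vanishing locus of the output ideal is the z-axis, the singular set of the umbrella, which would form the next piece of a coarse flag before refining towards a Whitney stratification.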
Every finite descending chain I_∙ of radical R-ideals I_0 I_1 ⋯ I_m = I produces an ascending flag X_∙ := (I_∙) of subvarieties of X: X_0 ⊂ X_1 ⊂⋯⊂ X_m = X. The next result shows that successive differences of X_∙ inherit a smooth manifold structure from the successive differences of X_∙(). Let W ⊂ Z be a pair of real algebraic varieties in ^n. If the difference Z()-W() is either empty or a smooth i-dimensional complex manifold, then M := (Z-W) is either empty or a smooth i-dimensional real manifold. There are two cases to consider — either the image ι(Z) lies entirely within the singular locus Z()_sing, or there exists some p ∈ M with ι(p) ∈ Z()_reg. In the first case, since Z()-W() is smooth, we know that Z()_sing lies entirely within W() and hence that ι(Z) ⊂ι(W); but since we have assumed W ⊂ Z, we must have W=Z, whence M is empty. On the other hand, let p be a point in (Z-W) for which ι(p) is a smooth point of Z(). We may safely assume that the generating ideal of Z is prime in [x_1,…,x_n] by passing to the irreducible component which contains ι(p). It now follows from <cit.> or <cit.> that (Z-W) has dimension i. Finally, Lemma <ref> ensures that (Z-W) is a smooth real i-manifold. It follows from the above result that if X_∙() is a Whitney stratification of X(), then the successive differences of X_∙ are either empty or smooth manifolds of the expected dimension. We show below these successive differences also satisfy Condition (B). Let I_∙ be a descending chain of radical ideals in [x_1,…,x_n]. If the flag X_∙() := _(I_∙) constitutes a Whitney stratification of X(), then the corresponding flag X_∙ := (I_∙) yields a Whitney stratification of X. Consider a non-empty connected component M ⊂ S_i, and let V ⊂ X_i be the irreducible component which contains M. Similarly, let W ⊂ X_i() be the irreducible component which contains ι(M) and define M_ := W - X_i-1(). We note that ι(M) forms an open subset of M_, which must in turn by an i-stratum of X(). Similarly, consider a nonempty connected N ⊂ S_j with i > j and analogously define N_⊂ X_j(). We will show that the pair (M,N) satisfies Condition (B). To this end, consider a point q ∈ N along with sequences p_k⊂ M and q_k⊂ N which converge to q. Letting ℓ_k denote the secant line [p_k,q_k] and T_k the tangent plane T_p_kM, we assume further that the limits ℓ = limℓ_k and T = lim T_k both exist. Let ℓ_k() be the secant line [ι(p_k),ι(q_k)] in ^n and T_k() the tangent space T_ι(p_k)M_. Since ι(p_k) and ι(q_k) are real points for all k, the the linear equations defining both the secant lines ℓ_k() and the tangent space T_k() as varieties are exactly the same as those defining ℓ_k and T_k, respectively. Thus, the limits ℓ() and T() both exist because the corresponding real limits exist – one may view these as limits of real sequences inside a complex Grassmannian – and they are defined by the same algebraic equations as their counterparts ℓ and T. By definition of secant lines, the image ι(w) of any w ∈ℓ is a real point of ℓ(). Since the pair (M_,N_) satisfies Condition (B) by assumption, we know that ℓ() ⊂ T(), whence ι(w) must be a real point of T(). Since ι(M) is an open subset of M_, we have T_ι(p_k)M_ = T_ι(p_k)ι(M) for all k; and by the proof of Lemma <ref>, the real points of T_ι(p_k)ι(M) are identified with T_q_kN. Thus, ι(T) contains all the real points of T(), including ι(w). Since ι is injective, we have w ∈ T as desired. Let X be an algebraic variety in ^n. 
The WhitStrat algorithm of <cit.>, when applied to X(), produces a Whitney stratification of X. The WhitStrat algorithm performs three types of operations: ideal addition, Gröbner basis computation, and primary decomposition. Each of these operations leaves the coefficient field of all intermediate polynomials unchanged. § STRATIFYING REAL ALGEBRAIC MORPHISMS Maps between Whitney stratified spaces are typically required to satisfy additional criteria beyond smoothly sending strata to strata — see <cit.> or <cit.> for instance. Let 𝒳_∙ and 𝒴_∙ be Whitney stratifications of topological spaces 𝒳 and 𝒴. A continuous function ϕ:𝒳→𝒴 is stratified with respect to 𝒳_∙ and 𝒴_∙ if for each stratum M ⊂𝒳 there exists a a stratum N ⊂𝒴 satisfying two requirements: * the image ϕ(M) is wholly contained in N; and moreover, * the restricted map ϕ|_M:M → N is a smooth submersion.[Explicitly, its derivative d(ϕ|_S)_x:T_xM → T_ϕ(x)N is surjective at each point x in M.] The pair (𝒳_∙,𝒴_∙) is called a stratification of ϕ. The second requirement of Definition <ref> ensures the following crucial property via Thom's first isotopy lemma <cit.>. If ϕ is a proper map – namely, if the inverse image of every compact subset of Y is compact in X – then for every stratum N ⊂ Y, the restriction of ϕ forms a locally trivial fiber bundle from ϕ^-1(N) to N. In general, the fibers are not guaranteed to be smooth. Consider algebraic varieties X ⊂^n and Y ⊂^m, and let f:X→ Y be an algebraic morphism — concretely, this amounts to an m-tuple of real polynomials (f_1(x_1,…,x_n),   f_2(x_1,…,x_n),  …,   f_m(x_1,…,x_n)) whose evaluation at a point of X yields a point of Y. Since each f_i is automatically a complex polynomial, there is an evident morphism f_:X() → Y() of complex algebraic varieties. Let I_∙ and J_∙ be descending chains of radical ideals in [x_1,…,x_m] and [y_1,…,y_m] respectively so that X_∙() := _(I_∙) and Y_∙() := _(J_∙) constitute Whitney stratifications of X() and Y() respectively. It follows from Theorem <ref> that X_∙ := (I_∙) is a Whitney stratification of X while Y_∙ := (J_∙) is a Whitney stratification of Y. If f_ is stratified with respect to X_∙() and Y_∙(), then f is stratified with respect to X_∙ and Y_∙. Let M ⊂ X be a nonempty connected component of the i-stratum X_i-X_i-1, and let M_ be the i-stratum of X_∙() which contains ι(M). By definition, the image f_(M_) contains f(M) in its locus of real points. Since f_ is stratified with respect to X_∙() and Y_∙(), the first requirement of Definition <ref> guarantees the existence of a single stratum N_⊂ Y which contains f_(M_). Thus, f(M) lies in the locus of real points of N_. Letting N denote the stratum of Y_∙ corresponding to N_, we know that the real locus of N_ equals N, whence we obtain f(M) ⊂ N and it remains to show that the restriction of f to M yields a submersion. Let x be any point of M, and note that ι(x) lies in M_. Since f_|_M_ is a submersion, its derivative at ι(x) is a surjective linear map from the tangent space to M_ at ι(x) to the tangent space to N_ at f_∘ι(x). But by construction, f_∘ι equals f. Thus, we have rank_(df_|_M_(ι(x))) = _ N_. To conclude the argument, we note that the derivative arising on the left side of the above equality may be represented by the Jacobian matrix of f at x, and the rank of this matrix is preserved under field extension to . On the other hand, by Proposition <ref> we know that the complex dimension of N_ equals the real dimension of N. Thus, our equality simplifies to rank_(df|_M(x)) = _ N, as desired. 
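The submersion requirement in the definition of a stratified map can be checked pointwise through the rank of the Jacobian. A minimal sympy sketch, using an illustrative map of our own (f(x,y) = (x, xy)) rather than one from the paper:

```python
from sympy import symbols, Matrix

x, y = symbols('x y')
f = Matrix([x, x * y])                  # an illustrative polynomial map R^2 -> R^2
J = f.jacobian([x, y])                  # [[1, 0], [y, x]]

# Away from the line {x = 0} the Jacobian has full rank, so the restriction of f
# to that open stratum is a submersion onto its image:
print(J.subs({x: 2, y: 5}).rank())      # 2
# On {x = 0} the rank drops, so that line must be split off as its own stratum:
print(J.subs({x: 0, y: 5}).rank())      # 1
```

This is exactly the kind of rank condition that forces the refinement of strata when stratifying a morphism.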
Assume that a morphism f:X → Y has been stratified as described in Theorem <ref>. It is readily checked that the image f(M) of the closure of a stratum M ⊂ X is not an algebraic subvariety of Y in general — the best that one can expect is that f(M) will be semialgebraic. It is important to note, in the context of the above theorem, that we do not obtain semialgebraic descriptions of such images. § STRATIFYING FULL SEMIALGEBRAIC SETS A (basic, closed) semialgebraic set is any subset B ⊂^n which can be expressed as an intersection of the form B := X ∩ C where X is a real algebraic subvariety of ^n, while the set C, called an inequality locus, is given as follows: C := x ∈^n | g_i(x) ≥ 0 for 0 ≤ i ≤ k. Here the g_i's are a finite collection of polynomials in [x_1,…,x_n]. By convention, when the number of inequalities k equals zero, we have C = ^n. Thus, every algebraic variety is automatically a semialgebraic set in the above sense. The sets X and C are not uniquely determined for a given B in general — we may, for instance, safely remove any polynomial generator f:^n → from the defining ideal of X while adding f ≥ 0 and -f ≥ 0 to the inequality locus. It is therefore customary to omit X entirely and simply define B as the set of points which satisfy a collection of polynomial inequalities. We find it convenient to write B = X ∩ C here because this allows us to highlight a relevant sub-class of semialgebraic sets. A semialgebraic set B ⊂^n is called a full if it admits an inequality locus C of the form (<ref>), with the additional requirement that its subset C^∘ := x ∈^n | g_i(x) > 0 for 0 ≤ i ≤ k is an n-dimensional smooth manifold whose closure equals C. Given an inequality locus C of a full semialgebraic set, we call C^∘ its interior and define its boundary as the difference ∂ C := C - C^∘. This boundary is a semialgebraic subset of the real algebraic variety Y_C := (∏_1^k g_i). We adopt the usual convention that the product over the empty set equals 1, which forces ∂ C = Y_C = ∅ when k=0. We recall that a Whitney stratification X_∙ of X is subordinate to a flag F_0 ⊂ F_1 ⊂⋯⊂ F_k = X if for each X_∙-stratum S ⊂ X there exists some j satisfying S ⊂ (F_j-F_j-1), see <cit.> for additional details. Our next result establishes that every full semialgebraic set B = X ∩ C inherits a Whitney stratification from Whitney stratifications of the real algebraic varieties X and X ∩ Y_C. Let B = X ∩ C be a full semialgebraic set, and let X_∙ be a Whitney stratification of X. If Y_∙ is a Whitney stratification of X ∩ Y_C which is subordinate to the flag X_∙∩ Y_C, then setting B_i : =(X_i∪ Y_i) ∩ C produces a Whitney stratification of B. Since B is full, we know that the interior C^∘ of its inequality locus is a smooth open n-dimensional submanifold of ^n. Therefore, the intersections X_i ∩ C^∘ form a Whitney stratification X'_∙ of X ∩ C^∘. Let Y'_∙ be the subset of Y_∙-strata which intersect ∂ C. Since C is the disjoint union of C^∘ and ∂ C, it follows that the union of X'_∙-strata and Y'_∙-strata partitions B. It remains to check that Condition (B) holds for those strata pairs (M,N) of this union for which N intersects the closure of M. There are now three cases to consider, of which the two easy ones are handled as follows: * if both M and N are strata of X'_∙, then Condition (B) holds because both are full-dimensional open subsets of X_∙-strata by construction, and X_∙ is assumed to be a Whitney stratification. 
* if M is a Y'_∙-stratum, then the fact that N intersects the closure of M forces N to be contained in X ∩∂ C, since both X and ∂ C are closed subsets of ^n. Thus, N must also be a Y'_∙-stratum in which case Condition (B) holds because Y'_∙ is Whitney. Turning now to the third case, assume that M is an X'_∙-stratum and N is a Y'_∙-stratum. By construction, M must be (a connected component of) the intersection M_* ∩ C^∘ for some X_∙-stratum M_*. Since C^∘ is n-dimensional, the tangent spaces T_xM and T_xM_* coincide for every x in M. Fix a point p ∈ N, and let N_* be the unique X_∙-stratum containing p in its interior. Since Y_∙ is chosen subordinate to X_∙∩ Y_C, the Y_∙-strata are refinements of X_∙-strata, so N must be obtained by removing some (possibly empty) set from N^*∩∂ C. It follows that N is a subset of N^* in a small ball around p. Finally, (M,N) must satisfy Condition (B) at p because (M_*,N_*) satisfy Condition (B) at p. The stratifications obtained in Theorem <ref> provide a complete description of the flag B_i. Using the techniques of <cit.>, one can perform various fundamental algorithmic tasks involving such strata. These include testing whether the i-stratum B_i - B_i-1 is empty for each i, and sampling points from the non-empty strata. § STRATIFYING DOMINANT MAPS BETWEEN EQUIDIMENSIONAL VARIETIES Let X ⊂𝕂^n and Y ⊂𝕂^m be algebraic varieties defined by ideals I_X and I_Y over a field 𝕂∈,. Let 𝕂[X] denote the coordinate ring 𝕂[x_1,…,x_n]/I_X and similarly for Y; any morphism of varieties f:X → Y canonically induces a contravariant ring homomorphism f^*:𝕂[Y] →𝕂[X]. Let f:X → Y be a morphism of algebraic varieties over 𝕂∈,. * we say that f is dominant if f^* is a monomorphism (or equivalently, if the image f(X) is dense in Y. * we say that f is finite if it is dominant, and moreover, if f^* gives 𝕂[X] the structure of an integral extension of 𝕂[Y]. It is a classical result <cit.> that if f:X → Y is finite in the above sense, then it is also a proper map for any field (see Remark <ref> for the significance of this result in our context). Over 𝕂=, a dominant morphism is finite if and only if it is proper . Throughout this section, f:X→ Y will denote a morphism between real algebraic varieties of the same dimension d; we will further require that the map f_:X() → Y() is dominant and that _ X() = _ Y() = d. The Jelonek set of f is the subset (f) ⊂ Y() consisting of all points y for which there exists a sequence x_n⊂ X() satisfying both lim_n →∞ |x_n| = ∞ and lim_n→∞ f_(x_n) = y. It is shown in <cit.> that (f) is either empty or an algebraic hypersurface of Y(); a Gröbner basis algorithm for computing the Jelonek set is given in <cit.>. It follows from this algorithm that if the Jelonek set is non-empty, then it is defined by a polynomial with real coefficients. Finally, it is shown in <cit.> that (f) is precisely the locus of points at which f fails to be finite. Thus, if we define V() := (f) and W() := f^-1(V), then the restriction of f forms a proper map (X()-W()) → (Y()-V()) — see <cit.> for details. Note that the polynomials defining V() and W() are real; take V to be the real zero set of the polynomials defining V(), and similarly let W be the real zero set of the polynomials defining W(). It follows immediately that the restriction of f to the difference (X-W) constitutes a proper map to the difference (Y-V). 
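The defining property of the Jelonek set can be probed numerically. For the illustrative map f(x,y) = (x, xy) (our example, not one from the paper), points on the line {x = 0} arise as limits of images of sequences that escape to infinity in the source, so f fails to be proper over them.

```python
import numpy as np

f = lambda x, y: np.array([x, x * y])   # illustrative map R^2 -> R^2

target = np.array([0.0, 1.0])           # a candidate point of the Jelonek set (on the line x = 0)
for n in [10, 100, 1000, 10000]:
    x_n, y_n = 1.0 / n, float(n)        # |(x_n, y_n)| -> infinity ...
    print(n, np.linalg.norm([x_n, y_n]), np.linalg.norm(f(x_n, y_n) - target))
    # ... while f(x_n, y_n) = (1/n, 1) -> (0, 1): f is not proper over (0, 1).
```

Removing this non-properness locus (and its preimage) is what restores the proper-map behaviour used in the Jelonek flag construction below.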
The Jelonek flag of f:X→ Y is a pair (W_∙,V_∙) of flags, both of length d = dim X = dim Y: ∅ =W_-1⊂ W_0⊂⋯⊂ W_d=X and ∅ =V_-1⊂ V_0⊂⋯⊂ V_d=Y, defined via reverse-induction on i ∈{d-1,d-2,…,1,0} as follows. Starting with V_d-1 as the real points of (f), we * let W_i be f^-1(V_i), and * let V_i-1 be the real points in (f|_W_i:W_i → V_i). By construction of the Jelonek flag, at each i we have the following alternative: * if V_i() is nonempty, then dim V_i() = dim W_i() = i and the restriction of f forms a proper map (W_i-W_i-1) → (V_i-V_i-1); otherwise, * if V_i() is empty, then f|_W_i:W_i → V_i is not dominant; on the other hand, f|_W_i-1 is dominant, but with dim W_i-1() > dim V_i-1(). Let (W_∙,V_∙) be the Jelonek flag of f:X → Y. Assume that X_∙ is a Whitney stratification of X subordinate to W_∙ and that Y_∙ is a Whitney stratification of Y subordinate to V_∙. Whenever dim V_i = dim W_i holds, we have that for every Y_∙-stratum R ⊂ (V_i-V_i-1), the map f|_f^-1(R):f^-1(R)→ R is a locally trivial fiber bundle. Since the stratification Y_∙ of Y is subordinate to V_∙, every stratum R satisfies R⊂ V_i-V_i-1 for some i; and since X_∙ is subordinate to W_∙, we also have f^-1(R)⊂ W_i-W_i-1. If dim(V_i)=dim(W_i) holds, then the map f:(W_i-W_i-1)→ (V_i-V_i-1) is proper. An appeal to the semialgebraic version of Thom's first isotopy lemma <cit.> achieves the desired result. § REAL ROOT CLASSIFICATION In this section we explore how the methods developed in the previous section can be used to study the real root classification problem. To this end, fix integers m,n ≥ 1 and define the subset P ⊂(ℝ[c_1,…,c_m])[x_1,…,x_n] consisting of all polynomials which have the form f(x,c) = ∑_j=1^k c_j · x_1^a_1,j⋯ x_n^a_n,j+g(x), where a_i,j is some k × n matrix of non-negative integers and g(x) is a polynomial in [x_1,…, x_n]. Consider polynomials f_1(x,c), …, f_n(x,c) in P, and suppose that the system {f_i(x,c)=0 | 1 ≤ i ≤ n} has finitely many complex solutions in ^n for a generically chosen parameter c ∈^m. The real root classification problem seeks a decomposition ^m = ∐_j M_j so that either the number of real solutions to f_i(x,c^*)= 0 is locally constant across c^* ∈ M_j, or the system has infinitely many complex solutions. Henceforth, we treat the polynomials f_1,…, f_n in P as polynomials in [x,c] and consider the variety X=(f_1,…, f_n) in ^n×^m. Note that dim(X)=m by the assumption that for a fixed generic parameter value the system (<ref>) has finitely many complex solutions. Hence, if we take π:^n×^m→^m to be the coordinate projection onto the last m coordinates, then the resulting map π:X→^m is dominant with dim(X)=m. Thus, π is a dominant map between varieties of the same dimension. Note that for a point q∈ Y the fiber π^-1(q) consists of a set of points of the form (x,q) in ^n×^m, where the x values are exactly the solutions in ^n to the system (<ref>) for the choice of parameters c=q; also note that for generic q the set π^-1(q) is finite. Set Y=^m and consider the dominant projection map π:X→ Y defined above. Let (W_∙,V_∙) be the Jelonek flag of π. If (X_∙, Y_∙) is a stratification of π in the sense of Definition <ref>, with X_∙ subordinate to W_∙ and Y_∙ subordinate to V_∙, then for any Y_∙-stratum N, either the number of real points in π^-1(q) is fixed and independent of q ∈ N, or π^-1(q) has infinitely many complex points. Since Y_∙ is subordinate to V_∙ we have that N⊂ (V_i -V_i-1) for some i. There are two cases to consider: * if π:W_i→ V_i is dominant with dim(W_i)>dim(V_i), then we have dim(π^-1(Z))>dim(Z) for any subvariety Z ⊂ V_i. 
Hence, for any q ∈ N the fiber π^-1(q) consists of infinitely many complex points. * if dim(V_i)=dim(W_i), then the conclusion follows immediately from Theorem <ref>, since the fibers are zero-dimensional and the number of real points is exactly the number of connected components. Thus, any Jelonek-subordinate stratification of π:X → Y directly solves the real root classification problem. §.§ Examples We conclude with two simple examples of real root classification arising from dominant map stratification. Consider the quadratic equation: ax^2+bx+c=0 where we think of a,b,c as real parameters. We wish to classify its real solutions; to do this we consider the variety X=(ax^2+bx+c) in ^4 and the projection map π:^4→^3 onto Y=^3 specified by (x,a,b,c)↦ (a,b,c). A stratification of π as in Theorem <ref> is given by: X_∙=(ax^2+bx+c) ⊃( (b^2-4ac, ax^2+bx+c)∪(a, bx+c) )⊃(a,b,c) and Y_∙=^3⊃( (b^2-4ac)∪(a) )⊃(a,b)⊃(a,b,c). The number of real solutions of ax^2+bx+c=0 is locally constant on every stratum of Y. First consider the strata of dimension 3 arising from M_3=^3- ( (b^2-4ac)∪(a) ); M_3 has 4 connected components, and one representative point of each of these is: (-1,1,1)∈ S_1, (-1,1,-1)∈ S_2, (1,1,1)∈ S_3, (1,1,-1)∈ S_4. The boundary of the 4 connected components of M_3 is illustrated in Figure <ref>, which shows the surface (b^2-4ac)∪(a) bounding the connected components of M_3. Using the representative points we see that (<ref>) has two real solutions for all coefficients (a,b,c)∈ S_1 and (a,b,c)∈ S_4, and no real solutions for (a,b,c)∈ S_2 and for (a,b,c)∈ S_3. Note that our algorithm does not produce the semi-algebraic description of the sets S_i, e.g. we do not compute that S_1={(a,b,c)∈^3 | b^2-4ac>0, a<0}, even though in this case it is easy to deduce from the description above. Instead we only sample points from them and determine the number of solutions to the original system in a given region of parameter space. Next we consider the connected strata arising from M_2=( (b^2-4ac)∪(a) )- (a,b). This again has 4 connected components: two arising from (b^2-4ac)- (a,b), one of which has c≥ 0 and one c≤ 0, and two from (a)- (a,b). One representative from each of these is: (1,2,1)∈ T_1, (-1,2,-1)∈ T_2, (0,1,1)∈ T_3, (0,-1,1)∈ T_4. We see that (<ref>) has one real solution for all coefficients (a,b,c)∈ T_i, for i=1,…, 4. Next we consider the connected strata arising from M_1= (a,b)-(a,b,c). This has two connected components; one representative from each of these is (0,0,1)∈ Z_1 and (0,0,-1)∈ Z_2, and (<ref>) has no real solutions for all coefficients (a,b,c) in both Z_1 and Z_2. Finally we have the closed stratum (a,b,c), which is a single point, and the corresponding system (<ref>) has infinitely many solutions. Consider the parametric system of equations in ^2 given by: x^2-y^2+b=-ax+x^2+by=0 where we think of a,b as real parameters and x,y as real variables. To classify the solutions we consider the variety X=(x^2-y^2+b,-ax+x^2+by) in ^4 and the projection map π: ^4→^2 onto Y=^2 defined by (x,y,a,b)↦ (a,b). For the sake of brevity we display only the Y_∙ portion of the stratification of π as in Theorem <ref>, since for real root classification we in fact only use this part; it is: Y_∙=^2⊃( (a^6 - 3a^4b^2 + 3a^2b^4 - b^6 + a^4b - 20a^2b^3 - 8b^5 - 16b^4)∪(b) )⊃(a,b) ∪(a,b+4). The stratification is illustrated in Figure <ref>. Set W=(a^6 - 3a^4b^2 + 3a^2b^4 - b^6 + a^4b - 20a^2b^3 - 8b^5 - 16b^4). 
Figure <ref> shows the algebraic constraints defining the connected strata of Y_∙. There are seven two-dimensional connected strata making up M_2=^2-(W∪(b)). A sample point in each of them is: (0,2)∈ S_1, (4,1)∈ S_2, (-4,1)∈ S_3, (0,-1)∈ S_4, (0,-6)∈ S_5, (-3,-4)∈ S_6, (3,-4)∈ S_7. The system (<ref>) has 4 real solutions for (a,b)∈ S_i for i=2,3,5. The system (<ref>) has 2 real solutions for (a,b)∈ S_i for i=1,6,7. Finally the system (<ref>) has no real solutions for (a,b)∈ S_4. There are eight one-dimensional connected strata making up M_1=(W∪(b))-((a,b) ∪(a,b+4)). A sample point in each of them is: ( 14/27, 4/27)∈ Z_1, ( -14/27, 4/27)∈ Z_2, (-2,0)∈ Z_3, (2,0)∈ Z_4, ( 16/27, -16/27)∈ Z_5, ( -16/27, -16/27)∈ Z_6, (2.74669...,-10)∈ Z_7, (-2.74669...,-10)∈ Z_8. The system (<ref>) has 3 real solutions for (a,b)∈ Z_i for i=1,2,3,4,7,8. The system (<ref>) has 1 real solution for (a,b)∈ Z_i for i=5,6. Finally there are two zero-dimensional strata, the points (0,0),(0,-4). When a=b=0 the system (<ref>) has 1 real solution, when a=0,b=-4 the system (<ref>) has 2 real solutions.
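The solution counts reported for this second example can be double-checked numerically by sampling the representative points listed above. The sketch below uses an elimination step of our own (substituting y = (ax - x^2)/b, which is valid here since every sample point has b ≠ 0) and counts the real roots of the resulting quartic in x; the expected counts are the ones stated in the text.

```python
import numpy as np

def count_real_solutions(a, b, tol=1e-8):
    # Substitute y = (a*x - x**2)/b (valid for b != 0) into x^2 - y^2 + b = 0,
    # which yields the quartic x^4 - 2a x^3 + (a^2 - b^2) x^2 - b^3 = 0.
    roots = np.roots([1.0, -2 * a, a**2 - b**2, 0.0, -b**3])
    return sum(abs(r.imag) < tol for r in roots)

samples = {"S1": (0, 2), "S2": (4, 1), "S3": (-4, 1), "S4": (0, -1),
           "S5": (0, -6), "S6": (-3, -4), "S7": (3, -4)}
for name, (a, b) in samples.items():
    print(name, count_real_solutions(a, b))
# expected counts from the text: 2, 4, 4, 0, 4, 2, 2
```

The same spot-check applies to the quadratic example: at each representative point the number of real roots of ax^2+bx+c is determined by the sign of the discriminant b^2-4ac, matching the counts reported there.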
http://arxiv.org/abs/2307.04719v1
20230710173139
On the curvature of the loss landscape
[ "Alison Pouplin", "Hrittik Roy", "Sidak Pal Singh", "Georgios Arvanitidis" ]
cs.LG
[ "cs.LG" ]
Quark/Gluon Discrimination and Top Tagging with Dual Attention Transformer Minxuan Hee1,addr1,addr2 Daohan Wange2,addr3 ========================================================================== One of the main challenges in modern deep learning is to understand why such over-parameterized models perform so well when trained on finite data. A way to analyze this generalization concept is through the properties of the associated loss landscape. In this work, we consider the loss landscape as an embedded Riemannian manifold and show that the differential geometric properties of the manifold can be used when analyzing the generalization abilities of a deep net. In particular, we focus on the scalar curvature, which can be computed analytically for our manifold, and show connections to several settings that potentially imply generalization. § FLATNESS AND GENERALIZATION IN MACHINE LEARNING The relationship between the generalization ability of a model and the flatness of its loss landscape has been a subject of interest in machine learning. Flatness refers to the shape of the hypersurface representing the loss function, parameterized by the parameters of the model. Flat minima are characterized by a wide and shallow basin. Generalization refers to the ability of a model to perform well on unseen data. A widely accepted hypothesis, proposed by various research groups hochreiter1997flat, hinton1993keeping, buntine1991bayesian several decades ago, suggests that flat minima are associated with better generalization compared to sharp minima. The basis of this hypothesis stems from the observation that when the minima of the optimization landscape are flatter, it enables the utilization of weights with lower precision. This, in turn, has the potential to improve the robustness of the model. Figure: On the left, a surface represents a two-parameter loss function f on its parameter space. We can see two minima, a sharp minimum and a flatter minimum. A Brownian motion navigates the parameter space around those two minima, in blue for the sharp one and red for the shallow one. On the right, the upper panel represents the Brownian motion navigating the parameter space; the same Brownian motion is used for both minima. The lower panel represents the perturbations of the loss f in both the sharp (blue) and flat (red) minima. The loss is more robust to perturbation in the flatter minimum. The notion of flatness has been challenged by dinh2017sharp, who argued that the different flatness measures proposed are not invariant under reparametrization of the parameter space and questioned the assumption that flatness directly causes generalization. Yet, numerous empirical and theoretical studies have presented compelling evidence that supports the relationship between flatness and enhanced generalization. This relationship has been observed in various contexts, by averaging weights izmailov2018averaging, studying inductive biases neyshabur2017geometry, imaizumi2022generalization, introducing different noise in gradient descent chaudhari2019entropy, pittorino2021entropic, adopting smaller batch sizes keskar2016large, and investigating ReLU neural networks yi2019positively. The exact relationship between flatness and generalization is still an open problem in machine learning. In this preliminary work, we build upon the flatness hypothesis as a primary motivation to investigate the curvature of the loss landscape, approaching it from a differential geometric perspective. 
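The robustness contrast sketched in the figure above can be reproduced with a toy one-dimensional experiment (entirely our own illustration, not the experiment of the paper): perturb the weight at a sharp and at a flat quadratic minimum with the same Gaussian noise and compare the loss deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy one-dimensional minima, mirroring the figure above (illustrative only):
sharp = lambda w: 10.0 * w**2      # narrow basin
flat  = lambda w: 0.1 * w**2       # wide, shallow basin

noise = rng.normal(0.0, 0.1, size=10_000)      # the same weight perturbations for both minima
for name, f in [("sharp", sharp), ("flat", flat)]:
    deviations = f(0.0 + noise) - f(0.0)       # change of the loss under perturbation
    print(name, deviations.std())              # the flat minimum is far more robust
```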
In this preliminary work, we analyze the loss landscape as a Riemannian manifold and derive its scalar curvature, an intrinsic Riemannian object that characterizes the local curvature of the manifold. We found that the scalar curvature, at minima, has a straightforward expression and can be related to the norm of the Hessian. While the norm of the Hessian may not always accurately measure flatness, it remains a valuable indicator for understanding optimization. Our findings demonstrate that the scalar curvature possesses all the benefits of the Hessian norm without its limitations. § GEOMETRY OF THE LOSS LANDSCAPE AND CURVATURE We are interested in finding the parameters of a model that minimize the loss function, denoted f. The loss function is a smooth function defined on the parameter space Ω⊂^q, where q is the number of parameters. In order to study the loss landscape of a model, we can look at the geometry of the graph of the loss function, which is a hypersurface embedded in ^q+1. Let f:Ω⊂^q→ be a smooth function. We call graph of the function the set: Γ_f={(θ, y) ∈Ω×| y = f(θ)}. The graph Γ_f is a topological smooth manifold embedded in ^q+1, and it is isometric to the Riemannian manifold (Ω, g) with Ω⊂^q and the induced metric g_ij = δ_ij + ∂_i f ∂_j f. The metric is obtained by pulling back, in one case, the loss function to the parameter space (∂_i f ∂_j f), and in another case, the parameter space to itself (δ_ij), lee2018introduction. Instead of working in the ambient space ^q+1, it is more convenient to study the intrinsic geometry of the loss function in the parameter space (Ω, g). In particular, knowing the Riemannian metric, we can compute the associated geometric quantities of the loss landscape such as the Christoffel symbols, the Riemannian curvature tensor, and the scalar curvature (see Appendix <ref> for an introduction to those quantities). In the following, we will denote ∇ f the Euclidean gradient of the loss function f, and H the Euclidean Hessian of f: Gradient(f): (∇ f)_i = ∂_i f = f_,i, Hessian(f): (H)_ij = ∂_i ∂_j f = f_,ij. §.§ Curvature in Riemannian geometry The Christoffel symbols define a corrective term used to compute covariant derivatives in a curved space. They can be derived from the Riemannian metric. The Christoffel symbols are given by: Γ^i_kl = β f_,i f_,kl, with β = (1+‖∇ f‖^2)^-1. See Appendix <ref>. Using those Christoffel symbols, we can directly compute the Riemannian curvature tensor. Using the Einstein summation convention, the Riemannian curvature tensor is an intrinsic mathematical object that characterizes the deviation of the curved manifold from the flat Euclidean manifold. The Riemannian curvature tensor is given by: R^i_jkm = β (f_,ik f_,jm - f_,im f_,jk) - β^2 f_,i f_,r (f_,rk f_,jm - f_,rm f_,jk), with β = (1+‖∇ f‖^2)^-1. See Appendix <ref>. While this fourth-order tensor gives us a complete picture of the curvature of a manifold, it can be difficult to interpret in practice. Instead, a scalar object, the scalar curvature, can be derived from the Riemannian curvature tensor. The scalar curvature quantifies locally how curved the manifold is. The scalar curvature is given by: R = β(tr(H)^2 - tr(H^2)) + 2β^2 ∇ f^⊤ (H^2 - tr(H) H) ∇ f, with β = (1+‖∇ f‖^2)^-1. See Appendix <ref>. This expression simplifies when the gradient is zero, which corresponds to a critical point of the loss function. 
In this case, the scalar curvature is given by the following result. When an extremum is reached (∇ f=0), the scalar curvature becomes: R(θ_min) = tr(H)^2 - tr(H^2). This is a direct result of Proposition <ref>, when ∇ f = 0. Note that we can also write, at the minimum, R(θ_min) = ‖H‖_*^2 - ‖H‖_F^2, with ‖·‖_* the nuclear norm and ‖·‖_F the Frobenius norm. §.§ The scalar curvature as the deviation of the volume of geodesic balls This scalar curvature has a simple interpretation, as it corresponds to the difference in volume between a geodesic ball embedded in the Riemannian manifold and a ball of reference, the Euclidean ball. In hyperbolic spaces, the Riemannian ball will be bigger than the Euclidean one, and in spherical spaces, it will be smaller. If the curved space is flat, they are both equal in volume, and the scalar curvature is null. [Theorem 3.98]gallot1990riemannian The scalar curvature R(θ) at a point θ of a Riemannian manifold of dimension q is related to the asymptotic expansion of the volume of a ball on the manifold ℬ_g(r) compared to the volume of the ball in Euclidean space ℬ_e(r), when the radius r tends to 0: Vol(ℬ_g(r)) = Vol(ℬ_e(r)) (1 - R(θ)/(6(q+2)) r^2 + o(r^2)). § SCALAR CURVATURE AND OPTIMIZATION Corollary <ref> establishes a connection between the scalar curvature at each peak or valley in the loss landscape and the magnitude of the Hessian: R(θ) = ‖H‖_*^2 - ‖H‖_F^2. Although the Hessian norm plays a key role in optimization tasks, we contend that it is not the most reliable gauge of flatness in all situations. On the one hand, we will delve into some issues that arise from only using the Hessian norm in Section <ref>. On the other hand, we will see how the scalar curvature reduces to the Hessian norm in some cases and supports theoretical findings in optimization in Section <ref>. §.§ Limitations of the trace of the Hessian as a measure of flatness The Hessian of the loss function, specifically its trace, has been shown to influence the convergence of optimization algorithms. For instance, wei2019noise revealed that stochastic gradient descent (SGD) reduces the trace of the loss function's Hessian in the context of over-parameterized networks. In a similar vein, orvieto2022anticorrelated discovered that SGD with anti-correlated perturbations enhances generalization because the induced noise reduces the Hessian's trace. They also identified that the trace serves as an upper limit on the mean loss over a posterior distribution. Furthermore, within graph neural networks, ju2023generalization demonstrated that the trace of the Hessian can evaluate the model's resilience to noise. §.§.§ The saddle point problem Yet, relying solely on the trace of the Hessian may not provide an accurate measure of flatness. For instance, if half of the eigenvalues are positive and the other half are negative, with their sum equaling zero, the trace of the Hessian will also be zero. This is misleading as it suggests a flat region, when in reality it is a saddle point. [Curvature of a parameterized function] Let us imagine that the loss is represented by a function taking as inputs two weights u and v such that: f(u,v) = e^-c u sin(u) sin(v), with c a positive constant. We notably have lim_u→∞ f(u,v) = 0, and so the surface tends to be flatter as u increases. 
The trace of the Hessian of f and its scalar curvature can be computed analytically, and we have, at a point θ=(u,v): tr(H)(θ) = e^-cu (-2c cos(u) + (c^2-2)sin(u)) sin(v), R(θ) = e^2cu ((c^2-1)cos(2u) - cos(2v) - c(c-2sin(2u))) / (e^2cu + cos^2(v)sin^2(u) + (cos(u)-c sin(u))^2 sin^2(v))^2. §.§.§ The expected flatness over mini-batches Figure: The data points fit a sine curve. The dataset is split into 7 batches of different colors. If the flatness is defined as tr(H), the flatness over the entire dataset is equal to the expectation of the flatness of a batch. Thus, the curve is considered flat. Another challenge emerges when the dataset is divided into small batches. If we choose the Hessian's trace as the measure of flatness, the overall flatness of the entire dataset equals the average flatness over these batches (Equation <ref>). This could potentially induce the wrong conclusion depending on the method used to partition the dataset: in Figure <ref>, the dataset is split in such a way that the trace of the Hessian is null for each batch, which means that the curve is considered flat over the entire dataset. The dataset, denoted 𝒟, is split into k mini-batches: {ℬ_1, ℬ_2, …, ℬ_k }. By linearity, the Hessian of the loss function over the entire dataset can be written as the mean of the Hessians of the mini-batches, i.e.: H_𝒟 = 1/k ∑_i H_ℬ_i. As a consequence, since the trace commutes with summation, we have: tr(H_𝒟) = tr(1/k ∑_i H_ℬ_i) = 1/k ∑_i tr(H_ℬ_i) = 𝔼[tr(H_ℬ_i)]. The trace of the Hessian of the loss function over the entire dataset is the expectation of the trace of the Hessian over mini-batches: tr(H_𝒟) = 𝔼[tr(H_ℬ_i)]. The corresponding result does not hold for the scalar curvature in general. The scalar curvature of the Hessian of the full dataset is not equal to the expectation of the scalar curvature over mini-batches; that is, there exists a dataset 𝒟 and mini-batches {ℬ_1, ℬ_2, …, ℬ_k } such that: R(H_𝒟) ≠ 𝔼[R(H_ℬ_i)]. See Appendix <ref>. §.§ The scalar curvature supports previous theoretical findings through the Hessian norm Although the two previous examples suggest that, in some cases, the trace of the Hessian is not a good definition of flatness, it is associated with the optimization process and the model's capacity to generalize in various ways. We will observe that, under certain circumstances, the scalar curvature simplifies to the Hessian norm. §.§.§ Perturbations on the weights seong2018towards showed that the robustness of the loss function to input perturbations is related to the Hessian. We similarly show that the resilience of the loss function to weight perturbations is upper bounded by the norm of the Hessian. Additionally, a smaller scalar curvature implies stronger robustness. Let θ_min be an extremum, ε a small scalar (ε≪ 1) and w a normalized vector (‖w‖=1). The trace of the square of the Hessian is an upper bound on the difference of the loss functions when the weights are perturbed: ‖f(θ_min + εw) - f(θ_min)‖_2^2 ≤ 1/4 ε^4 tr(H^2_θ_min). This is obtained by applying the Taylor expansion, for a very small perturbation ε≪ 1. See Appendix <ref> for the full proof. Figure: Empirical demonstration of Proposition <ref>. We train two identical and differently initialized deep nets using the same optimizer (Adam). We then perturb pointwise the learned weights using Gaussian noise 𝒩(0,0.1^2). As expected, the model on the left with scalar curvature ≈ 430 is more robust to perturbations compared to the right model with scalar curvature ≈ 610. 
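At a minimum, the corollary above reduces the scalar curvature to spectral quantities of H, so both R(θ_min) and the perturbation bound can be evaluated directly from the Hessian eigenvalues. A small numpy sketch with illustrative spectra (not taken from trained networks):

```python
import numpy as np

def scalar_curvature_at_min(eigvals):
    """R(theta_min) = tr(H)^2 - tr(H^2), computed from the Hessian spectrum."""
    lam = np.asarray(eigvals, dtype=float)
    return lam.sum()**2 - (lam**2).sum()

def perturbation_bound(eigvals, eps):
    """Right-hand side (1/4) * eps^4 * tr(H^2) of the proposition above."""
    lam = np.asarray(eigvals, dtype=float)
    return 0.25 * eps**4 * (lam**2).sum()

flat  = np.full(50, 0.1)   # uniformly small eigenvalues: low curvature, small bound
sharp = np.full(50, 1.0)   # uniformly large eigenvalues: high curvature, large bound

for name, lam in [("flat", flat), ("sharp", sharp)]:
    print(name, scalar_curvature_at_min(lam), perturbation_bound(lam, eps=0.1))
# With equal eigenvalues, R is already close to tr(H)^2 (24.5 vs 25 for the flat spectrum).
```

The spectrum with the smaller scalar curvature also has the smaller tr(H^2), and hence the tighter perturbation bound, in line with the discussion of the figure above.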
Let us assume two minima _1 and _2, and we suppose that the loss function at _1 is flatter than the one at _2 in terms of scalar curvature so 0 ≤(_1) ≤(_2). Being at the minimum implies that (_1) = (_1)^2 - (^2_1) and (_2) = (_2)^2 - (^2_2) respectively. Then: 0 ≤(x_1) ≤(x_2) 0 ≤(_1)^2 - (^2_1) ≤(_2)^2 - (^2_2) ⇒(^2_1) ≤(^2_2). A flatter minima (_1)≤(_2) leads to more robustness of the loss function to weights perturbations: f(_1 + ε) - f(_1) _2^2 ≤f(_2 + ε) - f(_2) _2^2. In Figure <ref>, we consider ε∼𝒩(0,0.01) to be a small perturbation and we plotted the original loss function with the perturbed losses. We computed the ^2 at the minimum. When the scalar curvature is smaller, the variance across the perturbations at the minimum is smaller and the perturbations are more centered around the original loss function. §.§.§ Efficiency of escaping minima Stochastic gradient descent can be conceptualized as an Ornstein-Uhlenbeck process uhlenbeck1930theory, which is a continuous-time stochastic process that characterizes the behavior of a particle influenced by random fluctuations mandt2017stochastic. By considering the non-linear relationship between the weights and the covariance, the update rules in gradient descent resemble the optimization approach employed in the multivariate Ornstein-Uhlenbeck process. When approximating the covariance by the Hessian [Appendix A]jastrzebski2017three, the gradient descent can be seen as an Ornstein-Uhlenbeck process with: d_t = - _t dt + ^1/2d W_t The escaping efficiency measure is a metric used to evaluate the performance of optimization algorithms, including gradient descent, in escaping from local minima and finding the global minimum of the loss function, and is defined as [ f(_t)- f()]. zhu2018anisotropic used this definition and the expression of the gradient descent process (Equation <ref>) to approximate the escaping efficiency: [ f(_t)- f()] ≈t/2(^2). Similar to the example above, gradient descent will have more difficulties to escape from a minima with a small scalar curvature, and so it will converge more quickly to the flat minima. §.§.§ The scalar curvature is the squared norm of the Hessian in over-parameterized neural networks We note the Hessian of the loss of a model with q parameters, and the scalar curvature, obtained in Proposition <ref> and Corollary <ref>. When we reach a flat minimum, supposing the eigenvalues of are similar, for a high number of parameters q, we have: (_min) q →∞∼()^2 Let us suppose that, at a flat minimum, all the eigenvalues are similar: λ_1 = ⋯ =λ_q = λ≥ 0. Then we, have _*^2 = q^2 λ^2 and _F^2 = q λ^2. When the number of parameters increases, _F^2 = o(_*^2), and as a consequence _*^2 - _F^2 ∼_*^2. In this proposition, we assume that all the eigenvalues are similar. This strong assumption is supported by empirical results ghorbani2019investigation. The empirical results show that during the optimization process, the spectrum of the eigenvalues becomes entirely flat, especially when the neural network includes batch normalization. §.§ Reparametrization of the parameter space The main argument challenging the link between flatness and generalization is that the flatness definitions, so far, are not invariant under reparametrization. Reparametrization refers to a change in the parametrization of the model, which can be achieved by transforming the original parameters (θ) into a new set of parameters (η). 
Even if we assume that the models have the same performance: {f_θ, θ∈Θ⊂^q} = {f_φ(η), η∈φ^-1(Θ)}, this reparametrization alters the shape of the loss function landscape in ^q. This is the core of the problem: dinh2017sharp compared the flatness of f_θ and f_φ(η) with respect to the same ambient space ^q, while each measure should be defined, and compared, relative to their respective parameter space, and not to an arbitrary space of reference. The scalar curvature is not invariant under reparametrization of the parameter space, and it should not be. It is, however, an intrinsic quantity, which means that it does not depend on an ambient space. As a consequence, it is also equivariant under diffeomorphism, and notably, if and ' are two Riemannian manifolds related by an isometry Ψ:→', then () = (Ψ()), for all ∈. In the case of the scalar curvature, if we apply a diffeomorphism to the parameters space with φ:→', and f:'⊂^q→ the loss function, then: (f ∘φ) = (φ)^⊤(f) (φ) + _k(f) ^k(φ), with (φ), (f) the Jacobian of φ and f, and (f ∘φ), (f) and ^k(φ) the Hessian of f ∘φ and f. we note ^k(φ)_ij = ∂_i ∂_j φ^k the Hessian of the k-th component of φ. At the minimum of the loss function, (f)=0, with φ:→' a diffeomorphism, and '=φ(), the scalar curvatures on and ' is derived as: () = _f_*^2 - _f_F^2, (') = _φ_φ^⊤_f_*^2 - _φ_φ^⊤_f_F^2. § DISCUSSION Our research focused on analyzing the loss landscape as a Riemannian manifold and its connection to optimization generalization. We introduced a Riemannian metric on the parameter space and examined the scalar curvatures of the loss landscape. We found that the scalar curvature at minima is defined as the difference between the nuclear and Frobenius norm of the Hessian of the loss function. The flatness hypothesis forms the basis of our study, suggesting that flat minima lead to better generalization compared to sharp ones. The Hessian of the loss function is known to be crucial in understanding optimization. However, analyzing the spectrum of the Hessian, particularly in over-parameterized models, can be challenging. As a result, the research community has started relying on the norm of the Hessian. We show that, in certain scenarios, the Hessian norm doesn't effectively gauge flatness, whereas scalar curvature does. Despite this, the Hessian norm is still relevant to theoretical results in optimization, including the model's stability against perturbations and the algorithm's ability to converge. Similarly, these characteristics are also satisfied by the scalar curvature. In essence, the scalar curvature combines all the advantages of the Hessian norm while accurately describing the curvature of the parameter space. Future research could explore the curvature within stochastic optimization and investigate the scalar curvature as a random variable affected by the underlying data and batch distribution. It would also be interesting to understand how the scalar curvature relates to the stochastic process and whether it is connected to any implicit regularization in the model. Overall, our study contributes to the understanding of the loss function's parameter space as a Riemannian manifold and provides insights into the curvature properties that impact optimization and generalization. § APPENDIX § A PRIMER ON CURVATURES IN RIEMANNIAN GEOMETRY The key strength of the Riemannian geometry is to allow for calculations to be conducted independently of the choice of the coordinates. However, this flexibility results in more sophisticated computations. 
Specifically, as a vector moves across a manifold, its local coordinates also change. We must consider this shift, which is accomplished by including a correction factor, denoted as Γ, to the derivative of the vector. These factors Γ are known as Christoffel symbols. Let (,g) be a Riemannian manifold, and and two vector fields on . On the manifold, we need to add the Christoffel symbols Γ ^k_ij to account for the variation of the local basis represented by _i. The covariant derivative, or connection, is then defined by: ∇ _ =u^i ∂_i v^j _j + u^i v^j Γ^k_ji_k, with ∇_ = u^i ∂_i v^j _j the covariant derivative of along in the Euclidean plane. We can further compute the Christoffel symbols based on the Riemannian metric tensor g_ij: Γ^k_ij = 1/2 g^kl( ∂_i g_jl + ∂_j g_il - ∂_l g_ij), Now, we are interested in the concept of curvature. In Riemannian geometry, the curvature is defined as the deviation of the manifold from the Euclidean plane. The principal intrinsic tool that assess the curvature of a manifold is the Riemann curvature tensor, denoted . It characterises the change of the direction of a vector, when transported along an infinitesimally small closed loop. The Riemannian curvature tensor is defined the following way: Let (, g, ∇) be a Riemannian manifold. The Riemannian curvature tensor is defined by: (, ; ) = ∇_∇_ - ∇_∇_ - ∇_[,], for any vector fields , , ∈, with [·,·] the Lie bracket. At the local basis represented by _i, it can be expressed in terms of indices: ^l_ijk = ^l (_j, _k; _i), and in terms of the Christoffel symbols as: R_ijk^l = ∂_iljk - ∂_j lik + mjklim - mikljm The Riemann curvature tensor being a fourth order tensor, it can difficult to interpret. Instead, we can look at a scalar quantity called the scalar curvature or equivalently the scalar Ricci curvature, which is a contraction of the Riemann curvature tensor. Let (,g) be a Riemannian manifold. The scalar curvature is defined as: = g^ij^k_ikj, using the Einstein summation convention, with g^ij the inverse of the metric tensor g_ij, and ^k_ikj the components of the Riemannian curvature tensor. Just like the Riemannian curvature tensor and the Riemannian metric tensor, the scalar curvature is defined for every point on the manifold. The scalar curvature is null when the manifold is isometric to the Euclidean plane. It is be negative when the manifold is hyperbolic, or positive when the manifold is spherical. By definition, the scalar curvature is an intrinsic quantity, meaning that it does not depend on the ambient space. As a consequence, the scalar curvature is equivariant under diffeomorphisms. If we map a manifold (, g) to another manifold (', g', ∇') with a diffeomorphism φ: ' →, we can express the connection ∇' as the pullback of ∇: ∇' = dφ^*∇. The curvature of the pullback connection is the pullback of the curvature of the original connection. In other terms: dφ^*(∇) = (dφ^*∇) [Proposition 2.59]andrews2010ricci. In particular, if φ is an isometry: (∇) = (∇'). § THEORETICAL RESULTS §.§ Definition of the scalar curvature and other curvature measures The Christoffel symbols of the metric = + ∇_x f ∇_x f^⊤, in the parameter space Ω⊂^q with f the loss function is given by: Γ^i_kl = f_,i f_,kl/1+∇ f^2 We use below the Einstein sum notation, and in particular, for the scalar function f: ∂_i ∂_j f = f_,ij. The Christoffels symbols are obtained with the Riemannian metric: Γ_kl^i = 1/2 g^im( g_mk,l + g_ml,k - g_kl,m) Our metric is = + ∇ f ∇ f^⊤. 
Using the Sherman-Morrison formula: ^-1 = - ∇ f ∇ f^⊤/1+ ∇ f^2 g_ij = _ij = δ_ij + f_,i f_,j g_ij, k = f_,ik f_,j + f_,i f_,jk g_mk, l + g_ml, k - g_kl, m = 2 f_,kl f_,m g^im = ^-1_im = δ_im - f_,i f_,m/1+∇ f^2 Then: Γ^i_kl = (δ_im - f_,i f_,m/1+∇ f^2) f_,kl f_,m = f_,kl f_,i - f_,kl f_,i f_,m^2/1+∇ f^2 = f_,i f_,kl/1+∇ f^2. The coordinates of the Riemannian tensor curvature can be written with the Christoffel symbols: R^σ_μνκ = ∂Γ^σ_μκ/∂ x^ν - ∂Γ^σ_μν/∂ x^κ + Γ^σ_νλΓ^λ_μκ - Γ^σ_κλΓ^λ _μν The metric tensor = + ∇ f ∇ f^⊤ has for eigenvalues: {1,1,⋯, 1, 1+∇ f^2}. is a symmetric positive definite matrix, hence it is diagonalisable and all its eigenvectors w⃗ are orthogonal. Let's note = ∇ f. For the eigenvector : = (1+^2). For all the other eigenvectors, w⃗ =0 and w⃗=w⃗. The contraction of the Christoffel symbols for the metric = + ∇ f ∇ f^⊤: Γ_ki^i = f_,ik f_,i/1+∇ f^2. By definition, we have Γ^i_ki = ∂_k ln√(). By the previous lemma, we know that G = 1+∇ f^2 = 1+ f_,i^2. Γ^i_ki = ∂_k ln√() = ∂_k ln√( 1+ f_,i^2) = 1/2∂_k (1+ f_,i^2)/1+∇ f^2 = f_,ik f_,i/1+∇ f^2. Another method is to use the general expression of Γ^i_kl = f_,i f_,kl/1+∇ f^2, and the result is obtained for i=l. The Riemannian curvature tensor is given by: R^i_jkm = β ( f_,ik f_,jm - f_,jm f_,jk) - β^2 f_,i f_,r ( f_,rk f_,im - f_,rm f_,jk) The Riemannian curvature tensor is given by: R^i_jkm = ∂_k Γ^i_jm - ∂_m Γ^i_jk + Γ^i_rkΓ^r_jm - Γ^i_rmΓ^r_jk, and we have for Christoffel symbols: Γ^i_jm = β f_,i f_,jm. We note β = (1+∇ f^2)^-1. We have: ∂_k (β f_,i f_,jm) = ∂_k (β) f_,i f_,jm + β ( f_,ik f_,jm+ f_,i f_,jmk), and ∂_k (β) = - 2 β^2 f_ka f_a. ∂_k Γ^i_jm = -2β^2 f_,a f_,ak f_,i f_,jm + β ( f_,ik f_,jm+ f_,i f_,jmk) ∂_m Γ^i_jk = -2β^2 f_,a f_,ak f_,i f_,jm + β ( f_,im f_,jk+ f_,i f_,jkm) Γ^i_rkΓ^r_jm = β^2 f_,i f_,rk f_,r f_,jm Γ^i_rmΓ^r_jk = β^2 f_,i f_,rm f_,r f_,jk The Ricci scalar curvature is given by: R = β(()^2 - (^2)) + 2 β^2 ( ∇ f^⊤ (^2 - () ) ∇ f), with the Hessian of f. We use β^-1 = 1+∇ f^2, the Hessian of f, and ·_1,1 the matrix norm L_1,1. The Ricci tensor is given by: R_ab = R^i_aib = β ( f_,ii f_,ab- f_,bi f_,ai) - β^2 f_,i f_,r( f_,ir f_,ab- f_,br f_,ai) = β (()_ab-_ab^2) - β^2 ((∇ f^⊤∇ f)_ab-(∇ f)_a(∇ f)_b) The Ricci scalar is given by g^ab R_ab = δ_ab R_ab - β f_,a f_,b R_ab, and we notice: _aa = () f_,a_ab f_b = ∇ f^⊤∇ f (∇ f)_a f_,a = ∇ f^⊤∇ f R_ab = R_aa - β f_,a f_,b R_ab R_aa = β (()^2 - ()^2) - β^2 ((∇ f^⊤∇ f)() - ∇ f^⊤^2∇ f) β f_,a f_,b R_ab = β^2 (∇ f^⊤∇ f)() - ∇ f^⊤^2∇ f) - β^3 ((∇ f^⊤∇ f)^2 - (∇ f^⊤∇ f)^2) Then: R = β(()^2 - (^2)) - 2 β^2 ( ∇ f^⊤ (() - ^2) ∇ f) §.§ Perturbations on the weights Let _min an extremum, ε≪ 1 and a normalized vector. Then, minimising the trace of the square of the Hessian is equivalent to minimising the influence of the perturbations on the weights: f(_min + ε) - f(_min)_2^2 ≤1/4ε^4 (^2_min) The general Taylor expansion on f at _min + ε, with ε≪ 1 is: f(_min + ε) = f(_min) + ε^⊤ + ε^2/2^⊤ + o(ε^2 ^2). We now assume that is normalised such that =1. Note that, if is an eigenvector of then: ^⊤ = (). In general, each element of the vector is inferior to 1: _i^2 ≤ 1 and so, λ_i^2 _i^4 ≤λ_i^2. Furthermore, we have (_min) = 0. Thus: f(_min + ) - f(_min)_2^2 = ε^4/4(^⊤)^2 + o(ε^4) ≤ε^4/4(^2) + o(ε^4) §.§ Curvature over minibatches The Scalar curvature of the hessian of the full dataset is not equal to the expectation of the Scalar curvature over mini-batches. 
That is, there exists a dataset 𝒟 and mini-batches {ℬ_1, ℬ_2, …, ℬ_k} such that: R(H_𝒟) ≠𝔼[R(H_ℬ_i)]. Suppose we have a dataset 𝒟 and mini-batches {ℬ_1, ℬ_2} such that the Hessians over the mini-batches are given by: [ -2 0; 4 1 ], [ 1 2; 2 -2 ]. They both have equal trace, -1, and their Ricci curvatures are -2 and -6, respectively. The Hessian over the full dataset is given by: [ -1 2; 6 -1 ]. This has the same trace as the mini-batches, but its Ricci curvature is -22, which is not equal to the average of the Ricci curvatures over the mini-batches.
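To make the curvature computations above concrete, here is a small numerical sketch of the closed-form scalar curvature R = β((tr H)² − tr(H²)) − 2β² ∇f^⊤(tr(H) H − H²)∇f derived in this appendix, together with a check that the curvature of an averaged Hessian need not equal the average of per-batch curvatures. The function name and the two example Hessians H1 and H2 are our own illustrative choices, not the matrices used in the proposition above.

```python
import numpy as np

def scalar_curvature(grad, hess):
    """Scalar curvature of the metric g = I + grad(f) grad(f)^T,
    using the closed form derived above."""
    beta = 1.0 / (1.0 + grad @ grad)
    trH = np.trace(hess)
    trH2 = np.trace(hess @ hess)
    correction = grad @ (trH * hess - hess @ hess) @ grad
    return beta * (trH ** 2 - trH2) - 2.0 * beta ** 2 * correction

# At a critical point (grad = 0) the expression reduces to tr(H)^2 - tr(H^2),
# i.e. squared nuclear norm minus squared Frobenius norm for a PSD Hessian.
grad = np.zeros(2)
H1 = np.array([[2.0, 0.0], [0.0, 1.0]])   # hypothetical mini-batch Hessians
H2 = np.array([[1.0, 0.5], [0.5, 3.0]])   # (illustrative values only)
H_full = 0.5 * (H1 + H2)                  # full-batch Hessian as the mini-batch average

print(np.mean([scalar_curvature(grad, H) for H in (H1, H2)]))  # 4.75
print(scalar_curvature(grad, H_full))                          # 5.875
# The two values differ: the scalar curvature is not linear in the Hessian.
```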
http://arxiv.org/abs/2307.03872v1
20230708012336
Domain Adaptation using Silver Standard Labels for Ki-67 Scoring in Digital Pathology: A Step Closer to Widescale Deployment
[ "Amanda Dy", "Ngoc-Nhu Jennifer Nguyen", "Seyed Hossein Mirjahanmardi", "Melanie Dawe", "Anthony Fyles", "Wei Shi", "Fei-Fei Liu", "Dimitrios Androutsos", "Susan Done", "April Khademi" ]
eess.IV
[ "eess.IV", "cs.AI", "cs.CV", "cs.LG" ]
Deep learning systems have been proposed to improve the objectivity and efficiency of Ki-67 PI scoring. The challenge is that, while very accurate, deep learning techniques suffer from reduced performance when applied to out-of-domain data. This is a critical challenge for clinical translation, as models are typically trained using data available to the vendor, which is not from the target domain. To address this challenge, this study proposes a domain adaptation pipeline that employs an unsupervised framework to generate silver standard (pseudo) labels in the target domain, which are used to augment the gold standard (GS) source domain data. Five training regimes were tested on two validated Ki-67 scoring architectures (UV-Net and piNET): (1) SS Only: trained on target silver standard (SS) labels, (2) GS Only: trained on source GS labels, (3) Mixed: trained on target SS and source GS labels, (4) GS+SS: trained on source GS labels and fine-tuned on target SS labels, and our proposed method (5) SS+GS: trained on target SS labels and fine-tuned on source GS labels. The SS+GS method yielded significantly (p<0.05) higher PI accuracy (95.9%) and more consistent results compared to the GS Only model on target data. Analysis of t-SNE plots showed that features learned by the SS+GS models are more aligned for source and target data, resulting in improved generalization. The proposed pipeline provides an efficient method for learning the target distribution without manual annotations, which are time-consuming and costly to generate for medical images. This framework can be applied to any target site as a per-laboratory calibration method, enabling widescale deployment. Ki-67, proliferation index, domain adaptation, self-supervised learning § INTRODUCTION Breast cancer is the most diagnosed cancer and the leading cause of cancer-related death in women worldwide <cit.>. The Ki-67 immunohistochemistry (IHC) biomarker is gaining traction for evaluating the proliferation rate of invasive breast cancers <cit.>. Ki-67 expression is related to prognosis and can identify high-risk early-stage breast cancers <cit.> and determine treatment modalities <cit.>. The Ki-67 proliferation index (PI) is the score associated with the proportion of Ki-67^+ tumour cells to the total number of tumour cells in a breast tissue section <cit.>. However, quantifying this biomarker is labour-intensive, time-consuming, and subject to poor visual estimation concordance <cit.>. Fortunately, Ki-67 PI can be calculated with deep learning nuclei detection algorithms for more efficient and objective quantification. There have been a few deep learning tools addressing automated Ki-67 PI scoring in the literature, such as piNET <cit.> and UV-Net <cit.>, which were specifically developed for Ki-67 PI quantification in breast cancer. As automated artificial intelligence (AI) tools become more robust, there is a chance for translation and deployment. However, a challenge with widescale adoption is performance degradation at deployed target sites, which results when the target data come from a center that is not included in the (source) training set. This is especially evident in digital pathology given the variation in patient factors, specimen processing, staining protocols and acquisition devices across pathology laboratories.
Annotations from target sites could be included in training sets, but generating gold standard (GS) ground truths are laborious and expensive for medical imaging. Mitigating domain shift has become a topic of extensive research <cit.> and unsupervised domain adaptation (UDA) is gaining considerable attention for this task. UDA methods seek to overcome the domain gap without the need for labelled target data. Self-training (pseudo label-based methods) has emerged as a promising UDA solution <cit.>. Self-training generates a set of pseudo labels in the target domain and re-trains a network based on these pseudo labels. Self-training loss encourages cross-domain feature alignment by learning from the labelled source data and pseudo-labelled target data. Pseudo labels can be quickly generated for any number of datasets, which is cost-effective and reduces development time. However, perfect accuracy cannot be guaranteed which can lead to propagated errors when fine-tuning. Because pseudo labels do not capture detailed features as well as clean labels we hypothesize that pre-training a network on pseudo labels from the target domain will allow a network to first learn dataset-specific characteristics and low-level features that are task-dependent, thereby providing optimal parameter initialization. Fine-tuning with GS (clean) labels from the source domain can then allow more detailed features to be captured by the network. This work proposes a pipeline that (1) uses an unsupervised Ki-67 PI quantification algorithm to generate pseudo labels, which we call silver standard (SS) labels, in the unlabeled target domain, (2) pre-trains a network on SS labels, and (3) fine-tunes the network on GS labels from the source domain. This pipeline can be used to calibrate automated deep learning-based medical imaging tools on a per-dataset basis, in an easy and unsupervised manner. We validate our method on 325 clinical tissue microarrays (TMAs) (20800 patches) from the target domain. Experimental results show the proposed approach achieves superior performance on the pixel-level and patient-level, therefore, providing a DA training method for robust and accurate Ki-67 PI estimation. § METHODS §.§ Deep Learning Models Two deep learning architectures are used for experiments: UV-Net and piNET, both developed for Ki-67 PI quantification in breast cancer and validated on large multi-institutional datasets. piNET was built using the U-NET architecture with an extra layer <cit.> and UV-Net was designed to preserve nuclear features of clustering or overlapping nuclei through dense 'V' blocks to retain the high-resolution details <cit.>. The output of piNET and UV-Net is a multi-channel probability map, with center locations of tumour nuclei detected for two classes: Ki-67^- and Ki-67^+ cells. §.§ Transfer Learning Transfer learning (TL) <cit.> has proven to be effective for many real-world applications by exploiting knowledge in labelled training data from a source domain. TL has made major contributions to medical image analysis as it overcomes the data scarcity problem as well as preserving time and hardware resources. In this study, we introduce a TL approach that uses an unsupervised Ki-67 nuclei detection scheme to generate SS labels in the target domain for pre-training the model. This enables the model to learn the low-level nuclei features and attain optimal parameter initialization. 
We will then fine-tune the model using gold GS labels to capture more precise details and improve the accuracy of the learned features. We compare the performance of two network architectures, UV-Net and piNET, in the following scenarios: (1) pre-training with GS labels and fine-tuning with SS labels, and (2) pre-training with SS labels and fine-tuning with GS labels. The results are compared against training methods without TL. §.§ Pseudo Label Generation: Silver Standards In UDA settings, there are no labels for the target domain. Our goal is to improve performance on the target, so we train the model with the target SS labels generated by a previously developed and validated unsupervised Ki-67 nuclei detection method called the immunohistochemical colour histogram (IHCCH). The process includes vector median filtering, background subtraction, an unsupervised colour separation method that separates blue and brown objects automatically based on the histogram of the b* channel, and adaptive radius nuclei detection. More details can be found in <cit.>. §.§ Dataset This study uses Ki-67 stained invasive breast cancer images obtained from three institutions. Table <ref> summarizes the Ki-67 datasets used for each training method. Source Dataset: 510 patches of 256×256 pixels in size are extracted from whole slide images provided by St. Michael's Hospital (SMH) in Toronto and an open-source database, Deepslide <cit.>. The ×20 Aperio AT Turbo and ×40 Aperio ScanScope scanners were used, respectively. Deepslide images are down-sampled to ×20 for compatibility. Images were annotated by marking Ki-67^- and Ki-67^+ centroids <cit.>. Centroid annotations were recast into a Gaussian kernel to allow the system to contextual learn information from the nuclei to help the classifier discover more robust features. Artifacts including overstaining, background, folders, blur, and dust are common in tissue slides; therefore, 15% of the training dataset includes patches with artifacts and non-tumorous areas to reduce false positives. This dataset represents our source domain and contains GS labels. Each patch contains 58 tumourous cells on average for a total of 29571 cells. Target Dataset: The target dataset was provided by the University Health Network (UHN) and contains 411 tissue microarrays (TMA) from 175 patients. Each patient has 1 to 3 corresponding TMAs of 2000 × 2000 pixels in size and an expert PI estimate is available for each patient. 24 TMAs from 24 patients were used to create the SS labels. These 24 TMAs were tiled into patches of size 256x256 pixels and 345 patches which contained ≥ 80 % tumorous tissue were extracted and the remaining patches were discarded. The TMAs from patients used for SS label generation were removed from our target dataset to prevent patient data leakage. 10 TMAs were randomly selected from the remaining pool and annotated by an anatomical pathology resident (N.N.J.N) and verified by a breast pathologist (S.D.) to produce pixel-wise nuclei annotations for testing in the target domain. Each annotated TMA contains 2093 tumourous cells on average for a total of 20930 cells. Accordingly, the target domain test set contains 325 TMAs from 151 patients with patient-level PI scores and 10 TMAs with nuclei annotations. §.§ Evaluation Metrics Nuclei detection is evaluated by comparing the Ki-67^- and Ki-67^+ centroids between the AI prediction and GS ground truths through the F1 score. 
The F1 score is the harmonic mean of precision and recall which is dependent on the number of true positives (TP), false positives (FP), and false negatives (FN). A TP is detected whenever the Euclidean distance between an annotation centroid and a detected centroid is less than 6 µm. This value corresponds to the average radius of tumourous cells from the source dataset. All detected cells not within 6 µm of a ground truth annotation are considered FP. Multiple detections of an already counted cell are also counted as FP. All ground truth cells without a detection within 6 µm proximity are considered FN. The F1 scores report raw nuclei detection performance, therefore, if a model is operating on an image with a low tumour nuclei count, a single missed nucleus can greatly skew the overall F1 score. Thus, different metrics, such as the proliferation index (PI) error should also be used. Tumour proliferation is measured by: PI=# Ki-67^+ tumour cells/#(Ki-67^+ + Ki-67^-) tumour cells which is computed over the whole TMA based on the detected nuclei. The PI difference is used to investigate the error between predicted and actual PI values: Δ PI= |PI_actual - PI_predicted|. Pairwise one-way ANOVA is used to compare model performance. §.§ Experimental Setup Five training methods are used to study Ki-67 nuclei detection and PI estimation accuracy. The first configuration is GS only, which uses only the GS data from the source domain. The second configuration, SS only, uses the SS data generated by the unsupervised IHCCH algorithm from the target domain. The third configuration, Mixed, includes both GS and SS in the training pool. The fourth configuration, GS+SS, uses GS for pre-training and SS for fine-tuning and the final configuration is our proposed method, SS+GS, which uses SS for pre-training and GS for fine-tuning. All methods that use SS are trained with increments of 100 where each increment contains SS from previous increments. Table <ref> summarizes the configurations of each training method. The IHCCH (unsupervised) method is also evaluated to verify the stand-alone performance of the tool. To ensure robustness to training variations we use a 3-fold cross-validation protocol for all experiments. We divide our 510 source patches with GS annotations into 3 subsets. For each fold, we select one subset as the held-out patches and the other 340 patches are used in the training pool. An Adam optimizer was used with a learning rate of 1e-3, a batch size of 4 with 100 epochs, and a Huber loss function, the epoch with the lowest validation loss was saved. Data augmentations were applied for rotation and scaling. All experiments were run using a GeForce RTX 3070 Ti. § RESULTS Quantitative results are summarized in Table <ref>. Nuclei predictions are shown in Figure <ref>. Reproducibility (standard deviation between 3-fold cross-validation models when predicting on the same target distribution data) is shown in Table <ref>. §.§ Source Domain: Nuclei Detection 170 unseen patches from the source domain with pixel-level Ki-67^- and Ki-67^+ centroid annotations are used to test nuclei detection performance. The distributions of the F1 scores are shown in Figure <ref> and summarized in Table <ref>. The proposed SS+GS method yielded superior or competitive F1 performance on the source domain when compared to the baseline method, GS Only, whereas IHCCH, SS Only, Mixed and GS+SS methods performed generally worse. Nuclei detection performance on the source domain serves as our model verification step. 
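As a concrete reference for the evaluation metrics described above, the sketch below shows one way the centroid-matching F1 and the PI difference could be computed. The function names, the greedy one-to-one matching strategy, and the micron-to-pixel conversion factor are our own illustrative assumptions; the paper only specifies the 6 µm matching radius, the F1 definition, and PI = Ki-67^+ / (Ki-67^+ + Ki-67^-).

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_f1(pred_xy, gt_xy, radius_um=6.0, um_per_px=0.5):
    """Greedy one-to-one matching of predicted and ground-truth centroids.
    A prediction within radius_um of an unmatched ground-truth cell is a TP;
    remaining predictions are FP, unmatched ground-truth cells are FN."""
    radius_px = radius_um / um_per_px          # assumed pixel size (illustrative)
    tp = 0
    if len(pred_xy) and len(gt_xy):
        d = cdist(pred_xy, gt_xy)
        matched_gt = set()
        for i in np.argsort(d.min(axis=1)):    # closest predictions first
            j = int(np.argmin(d[i]))
            if d[i, j] <= radius_px and j not in matched_gt:
                matched_gt.add(j)
                tp += 1
    fp = len(pred_xy) - tp
    fn = len(gt_xy) - tp
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

def proliferation_index(n_pos, n_neg):
    return n_pos / (n_pos + n_neg)

# Delta PI = |PI_actual - PI_predicted| for one patient
delta_pi = abs(proliferation_index(120, 280) - proliferation_index(100, 300))
print(delta_pi)  # 0.05
```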
Our findings indicate that including SS data from the target domain does not degrade model performance on the source domain. §.§ Target Domain: Nuclei Detection We next test our method on an adaptation task as we shift from source domain to target domain pixel-level assessments. 10 TMAs from the target domain with pixel-level Ki-67^- and Ki-67^+ expert annotations were used to test nuclei detection performance. The distribution of the F1 scores on the target domain test set is shown in Figure <ref> and summarized in Table <ref>. The GS+SS method achieves superior performance exceeding all other methods and significantly higher performance than the baseline method regardless of the SS increment. §.§ Target Domain: PI Computation We extend the use of our approach to another adaptation task involving a change in the level of assessment, specifically from patch-level to patient-level. ΔPI is assessed on 151 patients (325 TMAs) from the target domain. The distributions of the ΔPI are shown in Figure <ref> and summarized in Table <ref>. SS+GS achieves superior PI prediction performance exceeding all other methods and achieving significantly lower PI error (p<0.05) compared to the baseline method, GS Only, regardless of the SS increment. The ΔPI for GS only methods is ∼ 7.5%, but using the SS+GS method leads to a decrease in error by ∼ 3.5%, which is a significantly greater improvement compared to other methods. SS+GS methods also yielded the lowest ΔPI standard deviation signifying less variability and more consistent and reliable predictions. As some PI intervals have greater clinical significance, the patient-level PI performance was evaluated in intervals of 10% as depicted in Figure <ref>. SS+GS methods maintain the lowest ΔPI across all intervals (excluding 30% to 40% for UV-Net) which demonstrates optimal performance in clinically relevant ranges. §.§ Qualitative Evaluation: t-SNE We analyze the effects of the models on source and target domains further with t-SNE, a popular method to visualize high-dimensional data in 2D <cit.>. Figure <ref> illustrates such feature visualizations from source and target images obtained from GS Only, GS+SS and SS+GS models. The features learned for the source and target domains in the GS only and GS+SS models are diffuse and mostly non-overlapping, which likely causes reduced generalization. However, features from the SS+GS model are similar across source and target domains, which likely resulted in improved generalization and top performance on target domain data. § DISCUSSION Ki-67 PI is visually assessed by pathologists to estimate prognosis <cit.> and decide whether adjuvant chemotherapy should be added to a patient's treatment plan <cit.>. A high Ki-67 proliferation index is associated with a poor prognosis <cit.> and better eligibility for adjuvant chemotherapy <cit.>. The monarchE Phase 3 <cit.>—establishes >20% Ki-67 PI as a clinically relevant threshold to stratify patients with estrogen receptor-positive early breast cancer eligible for adjuvant chemotherapy. However, various preanalytical, analytical and interpretation factors affect the scoring of Ki-67 by pathologists and lead to high inter-rater variability. Automated tools, such as deep learning can be used to bring objectivity and efficiency, thus improving the clinical utility of Ki-67 scoring. While more accurate than other tools, deep learning methods experience a reduction in performance when applied to out-of-domain data. 
Covariate shifts between source and target domains are common in digital pathology due to different staining protocols and scanning equipment/software. This presents a significant challenge for clinical translation, as the current industry standard is to train models using data only available to the vendor. To address this issue and move closer to widespread deployment, this work presents an unsupervised domain adaptation method for Ki-67 quantification to focus on creating models that generalize to target data. The proposed pipeline learns the target distribution without manual annotations, which would be time-consuming and costly to obtain for medical images. Pseudo labels (SS labels) are extracted from the target domain in an unsupervised manner using the IHCCH method, and this data is used to supplement training datasets to learn domain- and problem-specific features. This framework can be easily implemented at any target site as a laboratory-specific calibration method, which can simplify deployment not only for Ki-67 quantification but also for a wide range of medical imaging applications. We evaluated five training configurations (GS Only, SS Only, Mixed, GS+SS, SS+GS) on two Ki67 architectures (piNET and UV-Net) and found improved performance, particularly for the SS+GS configuration compared to the baseline, GS only. This suggests that although the SS labels may be slightly noisy (F1 score of 0.53 on source and 0.57 on target), incorporating data from the target domain can help the models learn domain-specific features. This was evident from the t-SNE plots, which showed a clear overlap in features learned for the target and source distributions in the SS+GS models. On the other hand, the GS+SS models did not perform as well, despite being the standard practice in the community. We believe that fine-tuning with the noisy SS labels forces the model to remember the noise more prominently. However, in the SS+GS configuration, the model was first trained with the noisy SS labels and then refined with clean GS labels, leading to better performance and an overall PI accuracy of 95.9% achieved using piNET. Furthermore, across clinically relevant PI ranges, the SS+GS models exhibited the best performance and demonstrated consistency (low standard deviation across multiple training runs). We recognize there is ample opportunity to enhance performance and gain a deeper understanding of the impact of SS labels. Our strategy includes enhancing pseudo label generation, refining patch selection, diversifying patient cohorts, and assessing SS label source domain effects. We'll also compare our approach to domain adversarial learning and self-supervised model distillation. Future studies will explore per-site calibration in other datasets and benchmark against state-of-the-art methods. § CONCLUSION In this study, we address the problem of domain adaptation for automated Ki-67 quantification in invasive breast cancer. We present a novel self-supervised approach that shows that using target domain pseudo labels (SS) for pre-training and fine-tuning with ground truth (GS) data from the source domain leads to improved performance on both source and target domains. The proposed method enhances the robustness of AI models to domain variations and improves adaptation to unseen data distributions. The training pipeline overcomes the difficulties of scarce labelled data and costly manual annotations; a challenge in medical imaging applications. 
These findings can drive widespread clinical utilization of automated quantification tools in digital pathology. We acknowledge the Canadian Cancer Society, and MITACs Canada for funding this research. § APPENDIX
http://arxiv.org/abs/2307.04060v1
20230708233916
Double instability of Schwarzschild black holes in Einstein-Weyl-scalar theory
[ "Yun Soo Myung" ]
gr-qc
[ "gr-qc", "hep-th" ]
Double instability of Schwarzschild black holes in Einstein-Weyl-scalar theory Yun Soo Myung^a[e-mail address: [email protected]] ^aInstitute of Basic Sciences and Department of Computer Simulation, Inje University, Gimhae 50834, Korea We study the stability of Schwarzschild black hole in Einstein-Weyl-scalar (EWS) theory with a quadratic scalar coupling to the Weyl term. Its linearized theory admits the Lichnerowicz equation for Ricci tensor as well as scalar equation. The linearized Ricci-tensor carries with a regular mass term (m^2_2), whereas the linearized scalar has a tachyonic mass term (-1/m^2_2). It turns out that the double instability of Schwarzschild black hole in EWS theory is given by Gregory-Laflamme and tachyonic instabilities. In the small mass regime of m_2<0.876, the Schwarzschild black hole becomes unstable against Ricci-tensor perturbations, while tachyonic instability is achieved for m_2<1.174. The former would provide a single branch of scalarized black holes, whereas the latter would induce infinite branches of scalarized black holes. § INTRODUCTION Recently, black hole solutions with scalar hair obtained from Einstein-Gauss-Bonnet-scalar (EGBS) theories <cit.> and Einstein-Maxwell-scalar theory <cit.> have received much attention because they have uncovered easily an evasion of the no-hair theorem <cit.> by introducing a non-minimal (quadratic) scalar coupling function f(ϕ) to Gauss-Bonnet and Maxwell terms. We note that these scalarized black hole solutions are closely related to the appearance of tachyonic instability for bald black holes. In these linearized theories, the instability of Schwarzschild black hole is determined solely by the linearized scalar equation where the Gauss-Bonnet term acts as an effective mass term <cit.>, while the instability of Reissner-Nordström (RN) black hole is given just by the linearized scalar equation where the Maxwell term plays the role of an effective mass term <cit.>. This is allowed because their linearized Einstein and Einstein-Maxwell equations reduce to those for the linearized Einstein theory around Schwarzschild black hole and the Einstein-Maxwell theory around RN black hole, which turned out to be stable against tensor (metric) and vector-tensor perturbations. It was well known that a higher curvature gravity (Einstein-Weyl theory) with a mass coupling parameter m^2_2 has provided the non-Schwarzschild black hole solution which crosses the Schwarzschild black hole solution at the bifurcation point of m_2=0.876 <cit.>. This solution indicates the black hole with non-zero Ricci tensor (R̅_μν≠0), comparing to zero Ricci tensor (R̅_μν=0) for Schwarzschild black hole. We note that the trace no-hair theorem for Ricci scalar played an important role in obtaining the non-Schwarzschild black hole solution. It is worth noting that the instability of Schwarzschild black hole was found in the massive gravity theory <cit.> since the Schwarzschild black hole was known to be dynamically stable against tensor perturbations in Einstein theory <cit.>. In the linearized Einstein-Weyl theory, the instability bound of Schwarzschild black hole was found as m_2<0.876 with r_+=1 when solving the Lichnerowicz equation for the linearized Ricci tensor <cit.>, which is the same equation as the linearized Einstein equation around a (4+1)-dimensional black string where the Gregory-Laflamme (GL) instability appeared firstly <cit.>. 
A little difference is that the instability of Schwarzschild black hole arose from the massiveness of m_2≠0 in the Einstein-Weyl theory, whereas the GL instability appeared from the geometry of an extra z dimension in (4+1)-dimensional black string theory. This means that the mass m_2 trades for the extra dimension z. In the present work, we wish to study two instabilities of Schwarzschild black holes simultaneously by introducing the Einstein-Weyl-scalar theory with a quadratic scalar coupling to Weyl term, instead of Gauss-Bonnet term. In this case, the linearized Ricci-tensor δ R_μν has a regular mass term m^2_2, whereas the linearized scalar δϕ possesses a tachyonic mass term (-1/m^2_2). The linearized scalar equation around Schwarzschild black hole undergoes tachyonic instability for m_2<1.174, while the Lichnerowicz equation for linearized Ricci-tensor reveals GL instability for m_2<0.876. We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits a single branch (m_2≠0) of scalarized black holes. This means that their role of the mass term are quite different for producing scalarized black holes. § EINSTEIN-WEYL-SCALAR (EWS) THEORY We introduce the EWS theory defined by S_ EWS=1/16 π∫ d^4 x√(-g)[ R-2∂_μϕ∂^μϕ-f(ϕ)/2m^2_2 C^2], where f(ϕ)=1+ϕ^2 is a quadratic scalar coupling function, m_2^2 denotes a mass coupling parameter, and C^2 represents the Weyl term (Weyl scalar invariant) given by C^2(≡ C_μνρσC^μνρσ)=2(R_μνR^μν-R^2/3)+ R_ GB^2 with the Gauss-Bonnet term R_ GB^2=R^2-4R_μνR^μν+R_μνρσR^μνρσ. In the limit of m_2^2→∞, the Weyl term decouples and the theory reduces to the tensor-scalar theory. We wish to emphasize that scalar couplings to Gauss-Bonnet term were mostly used to find black holes with scalar hair within EGBS theory because it provides an effective mass term for a linearized scalar without modifying metric perturbations <cit.>. This is so because the Gauss-Bonnet term is a topological term in four dimensions. Actually, the Weyl term is similar to the Maxwell term (F^2) because both they are conformally invariant and their variations with respect to g_μν are traceless. From the action (<ref>), we derive the Einstein equation G_μν=2∂ _μϕ∂ _νϕ -(∂ϕ)^2g_μν+2(1+ϕ^2)B_μν/m^2_2-Γ_μν/m^2_2, where G_μν=R_μν-(R/2)g_μν is the Einstein tensor. Here, B_μν (B^μ _μ=0) coming from the first part of (<ref>) is the Bach tensor defined as B_μν = R_μρνσR^ρσ-g_μν/4 R_ρσR^ρσ- R/3(R_μν-g_μν/4R) + 1/2(∇^2R_μν-g_μν/6∇^2 R-1/3∇_μ∇_ν R) and Γ_μν is given by Γ_μν = -4/3R∇_(μΨ_ν)-∇^αΨ_α(3R_μν-4g_μν/3R)+ 6R_(μ|α|∇^αΨ_ν) - 3 R^αβ∇_αΨ_β g_μν +4R^β_ μαν∇^αΨ_β with Ψ_μ= 2ϕ∂_μϕ. Its trace is not zero as Γ^μ _μ=R∇^ρΨ_ρ-2R^ρσ∇_ρΨ_σ. Importantly, the scalar equation is given by ∇^2 ϕ +C^2/4m^2_2ϕ=0 . Considering ϕ̅=0, the Schwarzschild solution is found from Eqs.(<ref>) and (<ref>) as ds^2_ SBH= g̅_μνdx^μ dx^ν=-(1-r_+/r)dt^2+dr^2/(1-r_+/r)+r^2dΩ^2_2 with horizon radius r_+=2M. This Schwarzschild background gives us R̅_μνρσ≠0, R̅_μν=0, and R̅=0. In this case, one finds easily that C̅^2=R̅_μνρσR̅^μνρσ=12r_+^2/r^6=R̅^2_ GB. § DOUBLE INSTABILITY FOR SCHWARZSCHILD BLACK HOLE For the stability analysis of Schwarzschild black hole, we need the two linearized equations which describe the metric perturbation h_μν in (g_μν=g̅_μν+h_μν) and scalar perturbation δϕ in (ϕ=0+δϕ) propagating around (<ref>). 
They are obtained by linearizing Eqs.(<ref>) and (<ref>) as ∇̅^2δ G_μν+2R̅_μρνσδ G^ρσ-1/3(∇̅_μ∇̅_ν-g̅_μν∇̅^2)δ R-m^2_2 δ G_μν=0 , (∇̅^2+ 3r_+^2/m^2_2r^6)δϕ= 0 with δ G_μν=δ R_μν-δ R g̅_μν/2 the linearized Einstein tensor. Here, we note that `m^2_2' in Eq.(<ref>) is regarded as a regular mass term, while `3r_+^2/m^2_2r^6' in Eq.(<ref>) corresponds to a tachyonic mass term for m^2_2>0. Taking the trace over Eq.(<ref>) leads to m^2_2 δ R=0, which implies the non-propagation of a linearized Ricci scalar as δ R=0. We confirm Eq.(<ref>) by linearizing R=2(∂ϕ)^2+Γ^μ _μ/m^2_2. This non-propagation of linearized scalar plays an important role in obtaining a linearized theory of the EWS theory. Plugging Eq.(<ref>) into Eq.(<ref>), one finds the Lichnerowicz-Ricci tensor equation for the traceless and transverse Ricci tensor δ R_μν as (Δ̅_ L+m^2_2 ) δ R_μν=0, where the Lichnerowicz operator on the Schwarzschild background is given by Δ̅_ Lδ R_μν=-∇̅^2δ R_μν-2R̅_μρνσδ R^ρσ. Here, we consider m^2_2>0 for non-tachyonic case. Actually, Eq.(<ref>) describes a massive spin-2 mode (δ R_μν) with mass m_2 propagating on the Schwarzschild black hole background. Let us solve the Lichnerowicz-Ricci tensor equation (<ref>) by adopting δ R_μν(t, x)=e^Ω tδR̃_μν( x). Its s(l=0)-mode in polar sector satisfies the Schrödinger-type equation when introducing a tortoise coordinate r_*=∫[dr/(1-r_+/r)] d^2δR̃^l=0_μν/dr^2_*-[Ω^2+V_ Z(r)]δR̃^l=0_μν=0, where the Zerilli potential V_ Z(r) is given by <cit.> V_ Z(r)=(1-r_+/r)[m^2_2 +r_+/r^3-12m^2_2r_+(r-0.5r_+)+6m^4_2r^3(2r_+-r)/(r_++m^2_2r^3)^2]. As is shown in (Left) Fig. 1, all potentials with m_2≠0 induce negative region near the horizon, while their asymptotic forms are given by m^2_2>0. The negative region becomes wide and deep as the mass parameter m_2 decreases, implying GL instability of the Schwarzschild black hole. In case of m_2=0, however, there is no GL instability because its potential V_ Z(r) is positive definite outside the horizon. Solving Eq.(<ref>) numerically with appropriate boundary conditions, one finds the GL instability bound from (Left) Fig. 2 as 0<m_2<m_2^ th=0.876, for r_+=1, where m_2^ th denotes threshold of GL instability. It is important to note that this bound is found in the EWS theory, but there is no such bound in the EGBS theory. In the study of the instability for the Euclidean Schwarzschild black hole together with Einstein gravity, Gross, Perry, and Yaffe have found that there is just one normalizable negative-eigenvalue mode of the Licherowicz operator [(Δ^ E_ L-λ_ GPY)h_μν=0] <cit.>. This connection could be realized from Eq.(<ref>) because when one considers δ R_μν=Δ̅_ Lh_μν/2 for ∇̅^μ h_μν=0 and h^μ _μ=0, Eq.(<ref>) implies that Δ̅_ Lh_μν=0 or (Δ̅_ L+m^2_2)h_μν=0. Its eingenvalue is given by λ_ GPY[=-(m_2^ th)^2]=-0.768/r_+^2 which was noted in the early study of Schwarzschild black hole within higher curvature gravity <cit.>. Indeed, λ_ GPY is related to the thermodynamic instability of negative heat capacity C=-2π r_+^2 for Schwarzschild black hole in canonical ensemble. On the other hand, we focus on the linearized scalar equation (<ref>) which is the same form as found in the linearized EGBS theory. Considering δϕ(t,r,θ,φ)=u(r)/re^-iω tY_lm(θ,φ), the radial equation for s(l=0)-mode scalar leads to the Schrödinger-type equation d^2u/dr_*^2+[ω^2-V_ S(r)]u(r)=0, where the scalar potential V_ S(r) is given by V_ S(r)=(1-r_+/r)[r_+/r^3-3r_+^2/m^2_2r^6], where the last term corresponds to a tachyonic mass term. 
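For reference, the short script below evaluates the two effective potentials given above, V_Z(r) for the s-mode of the linearized Ricci tensor and V_S(r) for the s-mode scalar, with r_+ = 1. It is only a plotting aid for reproducing the qualitative behaviour described around Fig. 1 (deepening negative regions near the horizon as m_2 decreases); the chosen m_2 values and radial range are illustrative.

```python
import numpy as np

r_plus = 1.0

def V_Z(r, m2):
    """Zerilli-type potential for the linearized Ricci tensor (s-mode)."""
    f = 1.0 - r_plus / r
    num = 12 * m2**2 * r_plus * (r - 0.5 * r_plus) + 6 * m2**4 * r**3 * (2 * r_plus - r)
    return f * (m2**2 + r_plus / r**3 - num / (r_plus + m2**2 * r**3) ** 2)

def V_S(r, m2):
    """Effective potential for the s-mode scalar with tachyonic mass term."""
    f = 1.0 - r_plus / r
    return f * (r_plus / r**3 - 3 * r_plus**2 / (m2**2 * r**6))

r = np.linspace(1.001, 10.0, 500)
for m2 in (0.5, 0.876, 1.174):
    print(m2, V_Z(r, m2).min(), V_S(r, m2).min())
# Smaller m2 deepens the negative region near the horizon for both potentials.
```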
Considering ∫^∞_r_+ dr [V_ S(r)/(1-r_+/r)]<0, one may introduce a sufficient condition of tachyonic instability for a mass parameter m_2 <cit.> m^2_2r_+^2<12/10⇒ m_2<m_2^ sc=1.095/r_+. However, Eq.(<ref>) is not a necessary and sufficient condition for tachyonic instability. Observing (Right) Fig. 1, one finds that the negative region becomes wide and deep as the mass parameter m_2 decreases, implying tachyonic instability of the Schwarzschild black hole. To determine the threshold of tachyonic instability, one has to solve the second-order differential equation (<ref>) with ω=iΩ numerically, which may allow an exponentially growing mode of e^Ω t as an unstable mode. In this case, we choose two boundary conditions: a normalizable solution of u(∞)∼ e^-Ω r_* at infinity and a solution of u(r_+)∼(r-r_+)^Ω r_+ near the horizon. By observing (Right) Fig. 2 together with r_+=1, we read off the bound for tachyonic instability as m_2<m_2^ sth=1.174 which implies that the threshold of tachyonic instability is given by 1.174 being greater than 1.095 (sufficient condition for tachyonic instability). This corresponds to a bifurcation point between Schwarzschild and n=0 branch of scalarized black holes. In the limit of m^2_2 → 0, one has an infinitely negative potential which implies a large Ω as seen from (Right) Fig. 2. Finally, we obtain an inequality bound for threshold of GL and tachyonic instabilities as m_2^ th<m_2^ sth. However, we remind the reader that the linearized Ricci-tensor δ R_μν carries with a regular mass term (m^2_2), whereas the linearized scalar δϕ has a tachyonic mass term (-1/m^2_2). In this sense, the GL instability is quite different from the tachyonic instability <cit.>. § DISCUSSIONS In this work, we have investigated two instabilities of Schwarzschild black holes simultaneously by introducing the EWS theory with a quadratic scalar coupling to Weyl term. Here, the linearized Ricci-tensor has a regular mass term (m^2_2), whereas the linearized scalar possesses a tachyonic mass term (-1/m^2_2). The linearized scalar equation around black hole indicates tachyonic instability for m_2<1.174, while the Lichnerowicz equation for linearized Ricci-tensor shows GL instability for m_2<0.876. This suggests that their mass terms play different roles for generating scalarized black holes because the GL instability is quite different from the tachyonic instability. We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits single branch (m_2>0) of scalarized black holes. Now, we would like to mention the non-Schwarzschild black hole solutions obtained from the Einstein-Weyl theory (ϕ=0 EWS theory with m_2^2>0). This solution can be obtained numerically by requiring the no-hair theorem for Ricci scalar (R=0) <cit.>. Actually, it corresponds to single branch of non-Schwarzschild black holes with Ricci-tensor hair <cit.>. Recently, it was shown that the long-wave length instability bound for non-Schwarzschild black holes is given by m_2<0.876 <cit.>, which is the same bound as the GL instability for Schwarzschild black hole <cit.>, but it contradicts to the conjecture from black hole thermodynamics addressed in <cit.>. We expect that a single branch of non-Schwarzschild black holes with Ricci-tensor and scalar hairs would be found from the EWS theory with f(ϕ)=1+ϕ^2. On the other hand, we consider the scalar equation (<ref>) with tachyonic mass. 
From its static equation with ω=0, we obtain an infinite spectrum of parameter m_2 : m_2∈ [1.174=m_2^ sth, 0.453, 0.280, 0.202, · · ·], which defines infinite branches of scalarized black holes: n=0((0,1.174]), n=1((0,0.453]), n=2((0,0.28]), n=3((0,0.202]),⋯. Also, n=0, 1, 2, 3,⋯ are identified with the number of nodes for δϕ(z) = u(z)/z profile. Thus, it is expected that infinite branches (n=0, 1, 2, 3,⋯) of black hole with scalar hair would be found when solving Eqs.(<ref>) and (<ref>) numerically. However, this computation seems not to be easy because Eq.(<ref>) includes fourth-order derivatives and its Ricci scalar is not zero (R=2(∂ϕ)^2+Γ^μ _μ/m^2_2). We wish to introduce a conventional case of f(ϕ)=ϕ^2 quadratic coupling function. In this case, there is no GL instability because the Bach tensor-term does not contribute to the linearized Einstein equation (<ref>). Here, the linearized EWS theory reduces to the linearized EGBS theory which provides n=0 band with bandwidth of 1.174 < m_2 < 1.272  <cit.>. This band of black holes with scalar hair is unstable against radial perturbations <cit.>. This is reason why we choose the EWS theory with the quadratic coupling function f(ϕ)=1+ϕ^2. Finally, for the EWS theory with a quartic coupling function f(ϕ)=(1-e^-κϕ^4)/4κ <cit.>, the linearized scalar equation leads to ∇̅^2δϕ=0, which implies that there is no tachyonic instability. Also, its linearized Einstein equation is given by δ G_μν=0 which indicates that there is no GL instability. In this quartic coupling case, the linearized EWS theory reduces to the linearized EGBS theory, showing tachyonic stability. Without tachyonic instability, one expects to have a single branch of nonlinearly scalarized black holes but not infinite branches of scalarized black holes. Acknowledgments The author thanks De-Cheng Zou for helpful discussions. 99 Antoniou:2017acq G. Antoniou, A. Bakopoulos and P. Kanti, Phys. Rev. Lett. 120, no.13, 131102 (2018) doi:10.1103/PhysRevLett.120.131102 [arXiv:1711.03390 [hep-th]]. Doneva:2017bvd D. D. Doneva and S. S. Yazadjiev, Phys. Rev. Lett. 120, no.13, 131103 (2018) doi:10.1103/PhysRevLett.120.131103 [arXiv:1711.01187 [gr-qc]]. Silva:2017uqg H. O. Silva, J. Sakstein, L. Gualtieri, T. P. Sotiriou and E. Berti, Phys. Rev. Lett. 120, no.13, 131104 (2018) doi:10.1103/PhysRevLett.120.131104 [arXiv:1711.02080 [gr-qc]]. Herdeiro:2018wub C. A. R. Herdeiro, E. Radu, N. Sanchis-Gual and J. A. Font, Phys. Rev. Lett. 121, no.10, 101102 (2018) doi:10.1103/PhysRevLett.121.101102 [arXiv:1806.05190 [gr-qc]]. Bekenstein:1995un J. D. Bekenstein, Phys. Rev. D 51, no.12, R6608 (1995) doi:10.1103/PhysRevD.51.R6608 Myung:2018iyq Y. S. Myung and D. C. Zou, Phys. Rev. D 98, no.2, 024030 (2018) doi:10.1103/PhysRevD.98.024030 [arXiv:1805.05023 [gr-qc]]. Myung:2018vug Y. S. Myung and D. C. Zou, Eur. Phys. J. C 79, no.3, 273 (2019) doi:10.1140/epjc/s10052-019-6792-6 [arXiv:1808.02609 [gr-qc]]. Lu:2015cqa H. Lu, A. Perkins, C. N. Pope and K. S. Stelle, Phys. Rev. Lett. 114, no.17, 171601 (2015) doi:10.1103/PhysRevLett.114.171601 [arXiv:1502.01028 [hep-th]]. Babichev:2013una E. Babichev and A. Fabbri, Class. Quant. Grav. 30, 152001 (2013) doi:10.1088/0264-9381/30/15/152001 [arXiv:1304.5992 [gr-qc]]. Brito:2013wya R. Brito, V. Cardoso and P. Pani, Phys. Rev. D 88, no.2, 023514 (2013) doi:10.1103/PhysRevD.88.023514 [arXiv:1304.6725 [gr-qc]]. Regge:1957td T. Regge and J. A. Wheeler, Phys. Rev. 108, 1063-1069 (1957) doi:10.1103/PhysRev.108.1063 Zerilli:1970se F. J. Zerilli, Phys. Rev. Lett. 
24, 737-738 (1970) doi:10.1103/PhysRevLett.24.737 Myung:2013doa Y. S. Myung, Phys. Rev. D 88, no.2, 024039 (2013) doi:10.1103/PhysRevD.88.024039 [arXiv:1306.3725 [gr-qc]]. Gregory:1993vy R. Gregory and R. Laflamme, Phys. Rev. Lett. 70, 2837-2840 (1993) doi:10.1103/PhysRevLett.70.2837 [arXiv:hep-th/9301052 [hep-th]]. Lu:2017kzi H. Lü, A. Perkins, C. N. Pope and K. S. Stelle, Phys. Rev. D 96, no.4, 046006 (2017) doi:10.1103/PhysRevD.96.046006 [arXiv:1704.05493 [hep-th]]. Gross:1982cv D. J. Gross, M. J. Perry and L. G. Yaffe, Phys. Rev. D 25, 330-355 (1982) doi:10.1103/PhysRevD.25.330 Whitt:1985ki B. Whitt, Phys. Rev. D 32, 379 (1985) doi:10.1103/PhysRevD.32.379 Held:2022abx A. Held and J. Zhang, Phys. Rev. D 107, no.6, 064060 (2023) doi:10.1103/PhysRevD.107.064060 [arXiv:2209.01867 [gr-qc]]. Stelle:2017bdu K. S. Stelle, Int. J. Mod. Phys. A 32, no.09, 1741012 (2017) doi:10.1142/S0217751X17410123 Blazquez-Salcedo:2018jnn J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev, Phys. Rev. D 98, no.8, 084011 (2018) doi:10.1103/PhysRevD.98.084011 [arXiv:1805.05755 [gr-qc]]. Doneva:2021tvn D. D. Doneva and S. S. Yazadjiev, Phys. Rev. D 105, no.4, L041502 (2022) doi:10.1103/PhysRevD.105.L041502 [arXiv:2107.01738 [gr-qc]]. Blazquez-Salcedo:2022omw J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev, Phys. Rev. D 105, no.12, 124005 (2022) doi:10.1103/PhysRevD.105.124005 [arXiv:2203.00709 [gr-qc]]. Lai:2023gwe M. Y. Lai, D. C. Zou, R. H. Yue and Y. S. Myung, [arXiv:2304.08012 [gr-qc]].
http://arxiv.org/abs/2307.04110v1
20230709065359
Learning Space-Time Continuous Neural PDEs from Partially Observed States
[ "Valerii Iakovlev", "Markus Heinonen", "Harri Lähdesmäki" ]
cs.LG
[ "cs.LG" ]
We introduce a novel grid-independent model for learning partial differential equations (PDEs) from noisy and partial observations on irregular spatiotemporal grids. We propose a space-time continuous latent neural PDE model with an efficient probabilistic framework and a novel encoder design for improved data efficiency and grid independence. The latent state dynamics are governed by a PDE model that combines the collocation method and the method of lines. We employ amortized variational inference for approximate posterior estimation and utilize a multiple shooting technique for enhanced training speed and stability. Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, overcoming limitations of previous approaches and effectively handling partially-observed data. The proposed model outperforms recent methods, showing its potential to advance data-driven PDE modeling and enabling robust, grid-independent modeling of complex partially-observed dynamic processes. § INTRODUCTION All source code and datasets will be made publicly available after review. 
Modeling spatiotemporal processes allows to understand and predict the behavior of complex systems that evolve over time and space <cit.>. Partial differential equations (PDEs) are a popular tool for this task as they have a solid mathematical foundation <cit.> and can describe the dynamics of a wide range of physical, biological, and social phenomena <cit.>. However, deriving PDEs can be challenging, especially when the system's underlying mechanisms are complex and not well understood. Data-driven methods can bypass these challenges <cit.>. By learning the underlying system dynamics directly from data, we can develop accurate PDE models that capture the essential features of the system. This approach has changed our ability to model complex systems and make predictions about their behavior in a data-driven manner. While current data-driven PDE models have been successful at modeling complex spatiotemporal phenomena, they often operate under various simplifying assumptions such as regularity of the spatial or temporal grids <cit.>, discreteness in space or time <cit.>, and availability of complete and noiseless observations <cit.>. Such assumptions become increasingly limiting in more realistic scenarios with scarce data and irregularly spaced, noisy and partial observations. We address the limitations of existing methods and propose a space-time continuous and grid-independent model that can learn PDE dynamics from noisy and partial observations made on irregular spatiotemporal grids. Our main contributions include: * Development of an efficient generative modeling framework for learning latent neural PDE models from noisy and partially-observed data; * Novel PDE model that merges two PDE solution techniques – the collocation method and the method of lines – to achieve space-time continuity, grid-independence, and data efficiency; * Novel encoder design that operates on local spatiotemporal neighborhoods for improved data-efficiency and grid-independence. Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, opening up the possibility for accurate and efficient modeling of complex dynamic processes and promoting further advancements in data-driven PDE modeling. § PROBLEM SETUP In this work we are concerned with modeling of spatiotemporal processes. For brevity, we present our method for a single observed trajectory, but extension to multiple trajectories is straightforward. We observe a spatiotemporal dynamical system evolving over time on a spatial domain Ω. The observations are made at M arbitrary consecutive time points t_1:M:=(t_1, …, t_M) and N arbitrary observation locations _1:N:=(_1, …, _N), where _i ∈Ω. This generates a sequence of observations _1:M:=(_1, …, _M), where _i ∈ℝ^N × D contains D-dimensional observations at the N observation locations. We define _i^j as the observation at time t_i and location _j. The number of time points and observation locations may vary between different observed trajectories. We assume the data is generated by a dynamical system with a latent state (t, ) ∈ℝ^d, where t is time and ∈Ω is spatial location. The latent state is governed by an unknown PDE and is mapped to the observed state (t, ) ∈ℝ^D by an unknown observation function g and likelihood model p: ∂(t, x)/∂ t = F((t,), ∂_(t,), ∂^2_(t,),…), (t,) ∼ p(g((t,))), where ∂^∙_(t,) denotes partial derivatives wrt . In this work we make two assumptions that are highly relevant in real-world scenarios. 
First, we assume partial observations, that is, the observed state (t,) does not contain all information about the latent state (t,) (e.g., (t,) contains pressure and velocity, but (t,) contains information only about the pressure). Second, we assume out-of-distribution time points and observation locations, that is, their number, positions, and density can change arbitrarily at test time. § MODEL [9]r0.4 < g r a p h i c s > Model sketch. Initial latent state (t_1,) is evolved via F_θ_dyn to the following latent states which are then mapped to the observed states by g_θ_dec. Here we describe the model components (Sec. <ref>) which are then used to construct the generative model (Sec. <ref>). §.§ Model components Our model consists of four parts: space-time continuous latent state (t, ) and observed state (t, ), a dynamics function F_θ_dyn governing the temporal evolution of the latent state, and an observation function g_θ_dec mapping the latent state to the observed state (see Figure <ref>). Next, we describe these components in detail. Latent state. To define a space-time continuous latent state (t, ) ∈ℝ^d, we introduce (t):=(^1(t), …, ^N(t)) ∈ℝ^N × d, where each ^i(t) ∈ℝ^d corresponds to the observation location _i. Then, we define the latent state (t, ) as a spatial interpolant of (t): (t, ) := Interpolate((t))(), where Interpolate(·) maps (t) to an interpolant which can be evaluated at any spatial location ∈Ω (see Figure <ref>). We do not rely on a particular interpolation method, but in this work we use linear interpolation as it shows good performance and facilitates efficient implementation. Latent state dynamics. [13]r0.3 < g r a p h i c s > Latent state (t,) defined as an interpolant of (t) := (^1(t), ..., ^4(t)). Given a space-time continuous latent state, one can naturally define its dynamics in terms of a PDE: ∂(t, x)/∂ t = F_θ_dyn((t,), ∂_(t,), ∂^2_(t,),…), where F_θ_dyn is a dynamics function with parameters θ_dyn. This is a viable approach known as the collocation method <cit.>, but it has several limitations. It requires us to decide which partial derivatives to include in the dynamics function, and also requires an interpolant which has all the selected partial derivatives (e.g., linear interpolant has only first order derivatives). To avoid these limitations, we combine the collocation method with another PDE solution technique known as the method of lines <cit.>, which approximates spatial derivatives ∂^∙_(t,) using only evaluations of (t,), and then let the dynamics function approximate all required derivatives in a data-driven manner. To do that, we define the spatial neighborhood of as 𝒩_S(), which is a set containing and its spatial neighbors, and also define (t, 𝒩_S()), which is a set of evaluations of the interpolant (t, ) at points in 𝒩_S(): 𝒩_S() := {' ∈Ω : '= or ' is a spatial neighbor of }, (t, 𝒩_S()) := {(t, ') : ' ∈𝒩_S() }, and assume that this information is sufficient to approximate all required spatial derivatives at . This is a reasonable assumption since, e.g., finite differences can approximate derivatives using only function values and locations of the evaluation points. Hence, we define the dynamics of (t, ) as ∂(t, )/∂ t = F_θ_dyn(𝒩_S(), (t, 𝒩_S())), which is defined only in terms of the values of the latent state, but not its spatial derivatives. [17]r0.225 < g r a p h i c s > Example of 𝒩_S(_i). Instead of using the observation locations (dots) to define spatial neighbors, we use spatial locations arranged in a fixed predefined pattern (crosses). 
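The following sketch illustrates the neighborhood-based dynamics just defined: interpolate the latent node states, evaluate the interpolant on a fixed template of points around each observation location, and feed the stacked values to a learned dynamics network. It assumes a 2-D domain, SciPy linear interpolation (a non-differentiable stand-in), and a plain MLP for F_θ_dyn; the template radii, number of points, and network sizes only loosely follow the model described here.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import LinearNDInterpolator

def neighborhood_template(r=0.1, n=8):
    """Offsets on two concentric circles of radius r and r/2, plus the center point."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ring = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    return np.concatenate([[[0.0, 0.0]], r * ring, 0.5 * r * ring])  # (2n+1, 2)

class LatentDynamics(nn.Module):
    """F_theta_dyn evaluated at every observation location (method-of-lines style)."""
    def __init__(self, latent_dim, n_points, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_points * latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim))

    def forward(self, x_obs, z):
        # x_obs: (N, 2) observation locations, z: (N, d) latent states at those locations.
        # Non-differentiable stand-in: the actual model needs an interpolant that
        # supports backpropagation through the node values.
        interp = LinearNDInterpolator(x_obs, z.detach().numpy(), fill_value=0.0)
        query = x_obs[:, None, :] + neighborhood_template()[None, :, :]  # (N, K, 2)
        z_nbhd = torch.as_tensor(interp(query.reshape(-1, 2)), dtype=z.dtype)
        return self.net(z_nbhd.reshape(len(x_obs), -1))                  # dz/dt per node
```

In the full model, this forward pass would serve as the right-hand side of the ODE system over the N grid nodes, with the interpolation repeated at every solver step as described next.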
One way to define the spatial neighbors for is in terms of the observation locations _1:N (e.g., use the nearest ones) as was done, for example, in <cit.>. Instead, we utilize continuity of the latent state (t, ), and define the spatial neighbors in a grid-independent manner as a fixed number of points arranged in a predefined patter around (see Figure <ref>). This allows to fix the shape and size of the spatial neighborhoods in advance, making them independent of the observation locations. In this work we use the spatial neighborhood consisting of two concentric circles of radius r and r/2, each circle contains 8 evaluation points as in Figure <ref>. In Appendix <ref> we compare neighborhoods of various shapes and sizes. Equation <ref> allows to simulate the temporal evolution of (t, ) at any spatial location. However, since (t, ) is defined only in terms of a spatial interpolant of (t) (see Eq. <ref>), with ^i(t) = (t, _i), it is sufficient to simulate the latent state dynamics only at the observation locations _1:N. Hence, we can completely characterize the latent state dynamics in terms of a system of N ODEs: d(t)/dt := [ d^1(t)/dt; ⋮; d^N(t)/dt ] = [ ∂(t, _1)/∂ t; ⋮; ∂(t, _N)/∂ t ] = [ F_θ_dyn(𝒩_S(_1), (t, 𝒩_S(_1))); ⋮; F_θ_dyn(𝒩_S(_N), (t, 𝒩_S(_N))) ]. For convenience, we define (t; t_1, _1, θ_dyn) := ODESolve(t;t_1,_1,θ_dyn) as the solution of the ODE system in Equation <ref> at time t with initial state (t_1)=_1 and parameters θ_dyn. We also define (t, ; t_1, _1, θ_dyn) as the spatial interpolant of (t; t_1, _1, θ_dyn) as in Equation <ref>. We solve the ODEs using off the shelf differentiable ODE solvers from torchdiffeq package <cit.>. Note that we solve for the state (t) only at the observation locations _1:N, so to get the neighborhood values (t, 𝒩_S(_i)) we perform interpolation at every step of the ODE solver. Observation function. We define the mapping from the latent space to the observation space as a parametric function g_θ_dec with parameters θ_dec: (t,) ∼𝒩(g_θ_dec((t, )), σ_u^2I_D), where 𝒩 is the Gaussian distribution, σ_u^2 is noise variance, and I_D is D-by-D identity matrix. §.§ Generative model [18]r0.3 < g r a p h i c s > Multiple shooting splits a trajectory with one initial state (top) into two sub-trajectories with two initial states (bottom) and tries to minimize the gap between sub-trajectories (orange arrow). Training models of dynamic systems is often challenging due to long training times and training instabilities <cit.>. To alleviate these problems, various heuristics have been proposed, such as progressive lengthening and splitting of the training trajectories <cit.>. We use multiple shooting <cit.>, a simple and efficient technique which has demonstrated its effectiveness in ODE learning applications <cit.>. We extent the multiple shooting framework for latent ODE models presented in <cit.> to our PDE modeling setup by introducing spatial dimensions in the latent state and designing an encoder adapted specifically to the PDE setting (Section <ref>). Multiple shooting splits a single trajectory {(t_i)}_i=1,...,M with one initial state _1 into B consecutive non-overlapping sub-trajectories {(t_i)}_i ∈ℐ_b, b=1,…,B with B initial states _1:B:=(_1,…,_B) while imposing a continuity penalty between the sub-trajectories (see Figure <ref>). The index set ℐ_b contains time point indices for the b'th sub-trajectory. 
We also denote the temporal position of _b as t_[b] and place _b at the first time point preceding the b'th sub-trajectory (except _1 which is placed at t_1). Note that the shooting states _b have the same dimension as the original latent state (t) i.e., _b ∈ℝ^N × d. Multiple shooting allows to parallelize the simulation over the sub-trajectories and shortens the simulation intervals thus improving the training speed and stability. In Appendix <ref> we demonstrate the effect of multiple shooting on the model training and prediction accuracy. We begin by defining the prior over the unknown model parameters and initial states: p(_1:B, θ_dyn, θ_dec) = p(_1:B|θ_dyn)p(θ_dyn)p(θ_dec), where p(θ_dyn) and p(θ_dec) are zero-mean diagonal Gaussians, and the continuity inducing prior p(_1:B|θ_dyn) is defined as in <cit.> p(_1:B| θ_dyn) = p(_1) ∏_b=2^Bp(_b|_b-1, θ_dyn). Intuitively, the continuity prior p(_b|_b-1, θ_dyn) takes the initial latent state _b-1, simulates it forward from time t_[b-1] to t_[b] to get μ_[b] = ODESolve(t_[b] ; t_[b-1], _b-1, θ_dyn), and then forces μ_[b] to approximately match the initial state _b of the next sub-trajectory, thus promoting continuity of the full trajectory. We assume the continuity inducing prior factorizes across the grid points, i.e., p(_1:B| θ_dyn) = [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N p(_b^j|_b-1, θ_dyn)], = [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N𝒩( _b^j|(t_[b], _j; t_[b-1], _b-1, θ_dyn), σ_c^2I_d )], where p(_1^j) is a diagonal Gaussian, and parameter σ_c^2 controls the strength of the prior. Note that the term (t_[b], _j; t_[b-1], _b-1, θ_dyn) in Equation <ref> equals the ODE forward solution ODESolve(t_[b] ; t_[b-1], _b-1, θ_dyn) at grid location _j. Finally, we define our generative in terms of the following sampling procedure: θ_dyn, θ_dec, _1:B ∼ p(θ_dyn)p(θ_dec) p(_1:B | θ_dyn), (t_i) = (t_i; t_[b], _b, θ_dyn), b ∈{1, ..., B}, i ∈ℐ_b, _i^j ∼ p(_i^j | g_θ_dec((t_i, _j)), i = 1, …, M, j=1,…,N, with the following joint distribution (see Appendix <ref> for details about the model specification.): p(_1:M, _1:B, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N[ p(_i^j|_b, θ_dyn, θ_dec) ] p(_1:B | θ_dyn) p(θ_dyn) p(θ_dec). § PARAMETER INFERENCE §.§ Amortized variational inference We approximate the true posterior over the model parameters and initial states p(_1:B, θ_dyn, θ_dec | _1:M) using variational inference <cit.> with the following approximate posterior: q(θ_dyn, θ_dec, _1:B) = q(θ_dyn) q(θ_dec) q(_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B∏_j=1^Nq_ψ_b^j(_b^j), where q_ψ_dyn, q_ψ_dec and q_ψ_b^j are diagonal Gaussians, and ψ_dyn, ψ_dec and ψ_b^j are variational parameters. To avoid direct optimization over the local variational parameters ψ_b^j, we use amortized variational inference <cit.> and train an encoder h_θ_enc with parameters θ_enc which maps observations _1:M to ψ_b^j (see Section <ref>). For brevity, we sometimes omit the dependence of approximate posteriors on variational parameters and simply write e.g., q(_b^j). 
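The continuity-inducing prior above can be turned into a log-density term rather directly. The following is a hedged sketch that assumes a `dynamics` callable like the one sketched earlier and shooting states stored as a (B, N, d) tensor; it is not the authors' implementation.

```python
import torch
from torchdiffeq import odeint

def continuity_log_prior(s, t_s, dynamics, sigma_c):
    """log p(s_{2:B} | s_{1:B-1}, theta_dyn) of the continuity-inducing prior.

    s       : (B, N, d) shooting states s_1..s_B
    t_s     : (B,) temporal positions t_[1]..t_[B] of the shooting states
    dynamics: callable f(t, Z) -> dZ/dt, the latent dynamics F_dyn
    sigma_c : float, prior strength (smaller -> smaller gaps between sub-trajectories)
    """
    logp = 0.0
    for b in range(1, len(s)):
        # simulate s_{b-1} forward from t_[b-1] to t_[b] to get mu_[b]
        mu_b = odeint(dynamics, s[b - 1], t_s[b - 1:b + 1],
                      method="dopri5", rtol=1e-3, atol=1e-4)[-1]
        dist = torch.distributions.Normal(mu_b, sigma_c)
        logp = logp + dist.log_prob(s[b]).sum()   # factorizes over grid points and dims
    return logp
```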
In variational inference the best approximation of the posterior is obtained by minimizing the Kullback-Leibler divergence: KL[q(θ_dyn, θ_dec, _1:B) ‖ p(θ_dyn, θ_dec, _1:B|_1:N)], which is equivalent to maximizing the evidence lower bound (ELBO), defined for our model as: ℒ = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N𝔼_q(_b, θ_dyn, θ_dec)[ log p (_i^j | _b, θ_dyn, θ_dec) ] _(i) observation model -∑_j=1^NKL[ q(_1^j) ‖ p(_1^j) ]_(ii) initial state prior - ∑_b=2^B∑_j=1^N𝔼_q(θ_dyn, _b-1)[ KL[ q(_b^j) ‖ p(_b^j|_b-1, θ_dyn) ] ]_(iii) continuity prior -KL[q(θ_dyn) ‖ p(θ_dyn)]_(iv) dynamics prior -KL[q(θ_dec) ‖ p(θ_dec)]_(v) decoder prior. The terms (ii), (iv), and (v) are computed analytically, while terms (i) and (iii) are approximated using Monte Carlo integration for expectations, and numerical ODE solvers for initial value problems. See Appendix <ref> and <ref> approximate posterior details and derivation and computation of the ELBO. §.§ Encoder Here we describe our encoder which maps observations _1:M to local variational parameters ψ_b^j required to sample the initial latent state of the sub-trajectory b at time point t_[b] and observation location _j. Similarly to our model, the encoder should be data-efficient and grid-independent. Similarly to our model (Section <ref>), we enable grid-independence by making the encoder operate on spatial interpolants of the observations _1:M (even if they are noisy): _i() := Interpolate(_i)(), i=1,…,M, where spatial interpolation is done separately for each time point i. We then use the interpolants _i() to define the spatial neighborhoods 𝒩_S() in a grid-independent manner. To improve data-efficiency, we assume ψ_b^j does not depend on the whole observed sequence _1:M, but only on some local information in a spatiotemporal neighborhood of t_[b] and _j. We define the temporal neighborhood of t_[b] as 𝒩_T(t_[b]) {k : |t_k - t_[b]| ≤δ_T, k=1,…,M}, where δ_T is a hyperparameter controlling the neighborhood size, and then define the spatiotemporal neighborhood of t_[b] and _j as [t_[b], _j] := {_k() : k ∈𝒩_T(t_[b]), ∈𝒩_S(_j) }. Our encoder operates on such spatiotemporal neighborhoods [t_[b], _j] and works in three steps (see Figure <ref>). First, for each time index k ∈𝒩_T(t_[b]) it aggregates the spatial information {_k()}_∈𝒩(_j) into a vector α_k^S. Then, it aggregates the spatial representations α_k^S across time into another vector α_[b]^T which is finally mapped to the variational parameters ψ_b^j as follows: ψ_b^j = h_θ_enc([t_[b], _j]) = h_read(h_temporal(h_spatial([t_[b], _j]))). Spatial aggregation. Since the spatial neighborhoods are fixed and remain identical for all spatial locations (see Figure <ref>), we implement the spatial aggregation function h_spatial as an MLP which takes elements of the set {_k()}_∈𝒩_S(_j) stacked in a fixed order as the input. Temporal aggregation. We implement h_temporal as a stack of transformer layers <cit.> which allows it to operate on input sets of arbitrary size. We use time-aware attention and continuous relative positional encodings <cit.> which were shown to be effective on data from dynamical systems observed at irregular time intervals. Each transformer layer takes a layer-specific input set {ξ_k^in}_k ∈𝒩_T(t_[b]), where ξ_k^in is located at t_k, and maps it to an output set {ξ_k^out}_k ∈𝒩_T(t_[b]), where each ξ_k^out is computed using only the input elements within distance δ_T from t_k, thus promoting temporal locality. 
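A compressed sketch of the encoder ψ_b^j = h_read(h_temporal(h_spatial([t_[b], x_j]))) follows. A standard TransformerEncoder stands in for the time-aware attention with continuous relative positional encodings used in the paper, and taking the last window element as the output at t_[b] is our simplification; names and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, d_obs, n_nbr_pts, d_model=128, d_latent=3, n_layers=6):
        super().__init__()
        self.h_spatial = nn.Linear(n_nbr_pts * d_obs, d_model)      # fixed-order neighborhood -> alpha_S
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.h_temporal = nn.TransformerEncoder(layer, n_layers)    # alpha_S over N_T(t_[b]) -> alpha_T
        self.h_read = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                    nn.Linear(d_model, 2 * d_latent))

    def forward(self, u_nbrs, attn_mask=None):
        """u_nbrs: (B, T_nbr, n_nbr_pts * d_obs) interpolated observations on the
        spatiotemporal neighborhood of (t_[b], x_j); returns (gamma, tau)."""
        a_s = self.h_spatial(u_nbrs)                       # (B, T_nbr, d_model)
        a_t = self.h_temporal(a_s, mask=attn_mask)[:, -1]  # element at t_[b] (assumed last here)
        gamma, log_tau = self.h_read(a_t).chunk(2, dim=-1)
        return gamma, log_tau.exp()                        # variational mean and std
```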
Furthermore, instead of using absolute positional encodings the model assumes the behavior of the system does not depend on time and uses relative temporal distances to inject positional information. The first layer takes {α_k^S}_k ∈𝒩_T(t_[b]) as the input, while the last layer returns a single element at time point t_[b], which represents the temporal aggregation α_[b]^T. Variational parameter readout. Since α_i^T is a fixed-length vector, we implement h_read as an MLP. § EXPERIMENTS We use three challenging datasets: Shallow Water, Navier-Stokes, and Scalar Flow which contain observations of spatiotemporal system at N ≈ 1100 grid points evolving over time (see Figure <ref>). The first two datasets are synthetic and generated using numeric PDE solvers (we use scikit-fdiff <cit.> for Shallow Water, and PhiFlow <cit.> for Navier-Stokes), while the third dataset contains real-world observations (camera images) of smoke plumes raising in warm air <cit.>. In all cases the observations are made at irregular spatiotemporal grids and contain only partial information about the true system state. All datasets contain 60/20/20 training/validation/testing trajectories. See Appendix <ref> for details. We train our model for 20k iterations with constant learning rate of 3e-4 and linear warmup. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). Training is done on a single NVIDIA Tesla V100 GPU, with a single run taking 3-4 hours. We use the mean absolute error (MAE) on the test set as the performance measure. Error bars are standard errors over 4 random seeds. For forecasting we use the expected value of the posterior predictive distribution. See Appendix <ref> for all details about the training, validation, and testing setup. Latent state dimension. Here we show the advantage of using latent-space models on partially observed data. We change the latent state dimension d from 1 to 5 and measure the test MAE. Note that for d=1 we effectively have a data-space model which models the observations without trying to reconstruct the missing states. Figure <ref> shows that in all cases there is improvement in performance as the latent dimension grows. For Shallow Water and Navier-Stokes the true latent dimension is 3. Since Scalar Flow is a real-world process, there is no true latent dimension. As a benchmark, we provide the performance of our model trained on fully-observed versions of the synthetic datasets (we use the same architecture and hyperparameters, but fix d to 3). Figure <ref> also shows examples of model predictions (at the final time point) for different values of d. We see a huge difference between d=1 and d=3,5. Note how apparently small difference in MAE at d=1 and d=5 for Scalar Flow corresponds to a dramatic improvement in the prediction quality. Grid independence. Here we show the grid-independence property of our model by training it on grids with ≈ 1100 observation locations, and then testing on a coarser, original, and finer grids. For Shallow Water and Navier-Stokes the coarser/finer grids contain 290/4200 nodes, while for Scalar Flow we have 560/6420 nodes, respectively. Figure <ref> shows the model's performance on different spatial grids. We see that A performance drop on coarse grids is expected since as we get less accurate information about the system's initial state and simulate the dynamics on coarse grids. 
Figure <ref> also shows examples of model predictions (at the final time point) for different grid sizes. Comparison to other models. Here we compare our model with two recent models from the literature: MAgNet <cit.> and DINo <cit.>. Similarly to our model, these models also produce space-time continuous predictions: MAgNet uses neural network-based interpolation and Euler time discretization, while DINo uses an implicit neural representation-based decoder and continuous-time dynamics. The test MAE for the different models is summarized in Table <ref>:

Model  | Shallow Water | Navier-Stokes | Scalar Flow
MAgNet | 0.061 ± 0.001 | 0.103 ± 0.003 | 0.056 ± 0.003
DINo   | 0.063 ± 0.003 | 0.113 ± 0.002 | 0.059 ± 0.001
Ours   | 0.016 ± 0.002 | 0.041 ± 0.003 | 0.042 ± 0.001

These two methods also use an encoder that takes a history of observations and maps it to an initial state in the latent space, where the latent dynamics are learned and the latent state is mapped to the observation space via a decoder (we use the non-Markovian version of DINo). We use the official implementations of both models and tune the hyperparameters for the best performance. For Shallow Water and Navier-Stokes we use a history size of 5 and predict the next 20 steps, while for Scalar Flow the history size is 10 and we predict the next 10 steps. See Appendix <ref> for hyperparameter details. The results are shown in Table <ref>, and the model predictions are shown in Figure <ref>. Our model shows the best performance, achieving very accurate predictions on the synthetic data, and also demonstrates the capacity for modeling real-world data, managing to predict the smoke speed, direction, and even the smoke separation. In Figure <ref> we also test the data efficiency of the models and show that our model requires much less data to converge to its lowest error. In Appendix <ref> we further demonstrate our model's capability to learn dynamics from noisy data. § RELATED WORK Closest to our work is <cit.>, which considered the problem of learning PDEs from partial observations and proposed a discrete, grid-dependent model that is restricted to regular spatiotemporal grids. Another related work is that of <cit.>, which proposed a variational inference framework for learning ODEs from noisy and partially-observed data; however, it considers only low-dimensional ODEs and is restricted to regular grids. Other works learn the latent-space PDE dynamics using the “encode-process-decode” approach. <cit.> use a GNN-based encoder and dynamics function, map the observations to the same spatial grid in the latent space, and learn the latent-space dynamics. <cit.> use a similar approach but with CNNs, mapping the observations to a coarser latent grid and learning the coarse-scale dynamics. <cit.> use CNNs to map observations to a low-dimensional latent vector and learn the latent dynamics. However, all these approaches are grid-dependent, limited to regular spatial/temporal grids, and require fully-observed data. Interpolation has been used in numerous studies for various applications. Works such as <cit.> use interpolation to map latent states on coarse grids to observations on finer grids. <cit.> used interpolation as a post-processing step to obtain continuous predictions, while <cit.> used it to recover observations at missing nodes. § CONCLUSION We proposed a novel space-time continuous, grid-independent model for learning PDE dynamics from noisy and partial observations on irregular spatiotemporal grids.
Our contributions include an efficient generative modeling framework, a novel latent PDE model merging collocation and method of lines, and a data-efficient, grid-independent encoder design. The model demonstrates state-of-the-art performance on complex datasets, highlighting its potential for advancing data-driven PDE modeling and enabling accurate predictions of spatiotemporal phenomena in diverse fields. However, our model and encoder operate on every spatial and temporal location which might not be the most efficient approach and hinders scaling to extremely large grids, hence research into more efficient latent state extraction and dynamics modeling methods is needed. plainnat § APPENDIX A §.§ Model specification. Here we provide all details about our model specification. The joint distribution for our model is p(_1:M, _1:B, θ_dyn, θ_dec) = p(_1:N|_1:B, θ_dyn, θ_dec) p(_1:B | θ_dyn) p(θ_dyn) p(θ_dec). Next, we specify each component in detail. Parameter priors. The parameter priors are isotropic zero-mean multivariate normal distributions: p(θ_dyn) = 𝒩(θ_dyn | 0, I), p(θ_dec) = 𝒩(θ_dec | 0, I), where 𝒩 is the normal distribution, 0 is a zero vector, and I is the identity matrix, both have an appropriate dimensionality dependent on the number of encoder and dynamics parameters. Continuity prior. We define the continuity prior as p(_1:B| θ_dyn) = p(_1) ∏_b=2^Bp(_b|_b-1, θ_dyn), = [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N p(_b^j|_b-1, θ_dyn)], = [ ∏_j=1^N𝒩(_1^j | 0, I) ] [ ∏_b=2^B∏_j=1^N𝒩( _b^j|(t_[b], _j; t_[b-1], _b-1, θ_dyn), σ_c^2I ).], where 𝒩 is the normal distribution, 0∈ℝ^d is a zero vector, I ∈ℝ^d × d is the identity matrix, and σ_c ∈ℝ is the parameter controlling the strength of the prior. Smaller values of σ_c tend to produce smaller gaps between the sub-trajectories. Observation model p(_1:N|_1:B, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N p(_i^j|_b, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^Np(_i^j | g_θ_dec((t_i, _j; t_[b], _b, θ_dyn))) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N𝒩(_i^j | g_θ_dec((t_i, _j; t_[b], _b, θ_dyn)), σ_u^2 I), where 𝒩 is the normal distribution, σ_u^2 is the observation noise variance, and I ∈ℝ^D × D is the identity matrix. Note again that (t_i, _j; t_[b], _b, θ_dyn) above equals the ODE forward solution ODESolve(t_i ; t_[b], _b, θ_dyn) at grid location _j. §.§ Approximate posterior specification. Here we provide all details about the approximate posterior. We define the approximate posterior as q(θ_dyn, θ_dec, _1:B) = q(θ_dyn) q(θ_dec) q(_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B∏_j=1^Nq_ψ_b^j(_b^j). Next, we specify each component in detail. Dynamics parameters posterior. We define q_ψ_dyn(θ_dyn) as q_ψ_dyn(θ_dyn) = 𝒩(θ_dyn | γ_dyn, diag (τ_dyn^2)), where γ_dyn and τ_dyn^2 are vectors with an appropriate dimension (dependent on the number of dynamics parameters), and diag (τ_dyn^2) is a matrix with τ_dyn^2 on the diagonal. We define the vector of variational parameters as ψ_dyn = (γ_dyn, τ_dyn^2). We optimize directly over ψ_dyn and initialize γ_dyn using Xavier <cit.> initialization, while τ_dyn is initialized with each element equal to 9 · 10^-4. Decoder parameters posterior. We define q_ψ_dec(θ_dec) as q_ψ_dec(θ_dec) = 𝒩(θ_dec | γ_dec, diag (τ_dec^2)), where γ_dec and τ_dec^2 are vectors with an appropriate dimension (dependent on the number of decoder parameters), and diag (τ_dec^2) is a matrix with τ_dec^2 on the diagonal. We define the vector of variational parameters as ψ_dec = (γ_dec, τ_dec^2). 
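The diagonal-Gaussian posteriors over θ_dyn and θ_dec can be implemented as a small reusable module. This sketch reflects the stated initialization (Xavier for the mean γ, each element of τ initialized to 9·10⁻⁴) but is otherwise our own illustrative code.

```python
import math
import torch
import torch.nn as nn

class DiagGaussianPosterior(nn.Module):
    """q(theta) = N(gamma, diag(tau^2)) with standard normal prior p(theta) = N(0, I)."""
    def __init__(self, n_params, tau_init=9e-4):
        super().__init__()
        gamma = torch.empty(1, n_params)
        nn.init.xavier_uniform_(gamma)                       # Xavier init of the mean
        self.gamma = nn.Parameter(gamma.squeeze(0))
        self.log_tau = nn.Parameter(torch.full((n_params,), math.log(tau_init)))

    def rsample(self):
        tau = self.log_tau.exp()
        return self.gamma + tau * torch.randn_like(tau)      # reparameterized sample

    def kl_to_prior(self):
        # KL( N(gamma, tau^2) || N(0, 1) ), summed over parameters (closed form)
        tau2 = (2 * self.log_tau).exp()
        return 0.5 * (tau2 + self.gamma ** 2 - 1.0 - 2 * self.log_tau).sum()
```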
We optimize directly over ψ_dec and initialize γ_dec using Xavier <cit.> initialization, while τ_dec is initialized with each element equal to 9 · 10^-4. Shooting variables posterior. We define q_ψ_b^j(_b^j) as q_ψ_b^j(_b^j) = 𝒩(_b^j | γ_b^j, diag ([τ_b^j]^2))), where the vectors γ_b^j, τ_b^j ∈ℝ^d are returned by the encoder h_θ_enc, and diag ([τ_b^j]^2) is a matrix with [τ_b^j]^2 on the diagonal. We define the vector of variational parameters as ψ_b^j = (γ_b^j, [τ_b^j]). Because the variational inference for the shooting variables is amortized, our model is trained w.r.t. the parameters of the encoder network, θ_enc. § APPENDIX B §.§ Derivation of ELBO. For our model and the choice of the approximate posterior the ELBO can be written as ℒ = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M, _1:B, θ_dyn, θ_dec)/q(θ_dyn, θ_dec, _1:B)dθ_dyn dθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M|_1:B, θ_dyn, θ_dec)p(_1:B|θ_dyn)p(θ_dyn)p(θ_dec)/q(_1:B)q(θ_dyn)q(θ_dec)dθ_dyn dθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M | _1:B, θ_dyn, θ_dec)dθ_dyn dθ_dec d_1:B - ∫q(θ_dyn, θ_dec, _1:B) lnq(_1:B)/p(_1:B | θ_dyn)dθ_dyn dθ_dec d_1:B - ∫q(θ_dyn, θ_dec, _1:B) lnq(θ_dyn)/p(θ_dyn)dθ_dyn dθ_dec d_1:B - ∫q(θ_dec, θ_dec, _1:B) lnq(θ_dec)/p(θ_dec)dθ_dyn dθ_dec d_1:B = ℒ_1 - ℒ_2 - ℒ_3 - ℒ_4. Next, we will look at each term ℒ_i separately. ℒ_1 = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M | _1:B, θ_dyn, θ_dec)dθ_dyn dθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) ln[∏_b=1^B∏_i ∈ℐ_b∏_j=1^Np(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_1:B = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N∫q(θ_dyn, θ_dec, _1:B) ln[p(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_1:B = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N∫q(θ_dyn, θ_dec, _b) ln[p(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_b = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N𝔼_q(θ_dyn, θ_dec, _b)ln[p(_i^j | _b, θ_dyn, θ_dec)]. ℒ_2 = ∫q(θ_dyn, θ_dec, _1:B) lnq(_1:B)/p(_1:B | θ_dyn)dθ_dyndθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) ln[q(_1)/p(_1)∏_b=2^Bq(_b)/p(_b|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) ln[∏_j=1^Nq(_1^j)/p(_1^j)]dθ_dyndθ_dec d_1:B + ∫q(θ_dyn, θ_dec, _1:B) ln[∏_b=2^B∏_j=1^Nq(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B = ∑_j=1^N∫q(θ_dyn, θ_dec, _1:B) ln[q(_1^j)/p(_1^j)]dθ_dyndθ_dec d_1:B + ∑_b=2^B∫q(θ_dyn, θ_dec, _1:B) ∑_j=1^Nln[q(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B = ∑_j=1^N∫q(_1^j) ln[q(_1^j)/p(_1^j)]d_1^j + ∑_b=2^B∫q(θ_dyn, _b-1, _b) ∑_j=1^Nln[q(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyn d_b-1 d_b = ∑_j=1^N∫q(_1^j) ln[q(_1^j)/p(_1^j)]d_1^j + ∑_b=2^B∫q(θ_dyn, _b-1) ∑_j=1^N[ ∫ q(_b^j) lnq(_b^j)/p(_b^j|_b-1, θ_dyn)d_b^j]dθ_dyn d_b-1 = ∑_j=1^NKL( q(_1^j) ‖ p(_1^j) ) + ∑_b=2^B𝔼_q(θ_dyn, _b-1)[ ∑_j=1^NKL( q(_b^j) ‖ p(_b^j|_b-1, θ_dyn) ) ], where KL is Kullback–Leibler (KL) divergence. Both of the KL divergences above have a closed form but the expectation w.r.t. q(θ_dyn, _b-1) does not. ℒ_3 = KL(q(θ_dyn) ‖ p(θ_dyn)), ℒ_4 = KL(q(θ_dec) ‖ p(θ_dec)). §.§ Computation of ELBO. We compute the ELBO using the following algorithm: * Sample θ_dyn, θ_dec from q_ψ_dyn(θ_dyn), q_ψ_dec(θ_dec). * Sample _1:B by sampling each _b^j from q_ψ_b^j(_b^j) with ψ_b^j = h_θ_enc([t_[b], _j]). * Compute _1:M from _1:B as in Equations <ref>-<ref>. * Compute ELBO ℒ (KL terms are computed in closed form, for expectations we use Monte Carlo integration with one sample). Sampling is done using reparametrization to allow unbiased gradients w.r.t. the model parameters. § APPENDIX C §.§ Datasets. Shallow Water. The shallow water equations are a system of partial differential equations (PDEs) that simulate the behavior of water in a shallow basin. 
These equations are effectively a depth-integrated version of the Navier-Stokes equations, assuming the horizontal length scale is significantly larger than the vertical length scale. Given these assumptions, they provide a model for water dynamics in a basin or similar environment, and are commonly utilized in predicting the propagation of water waves, tides, tsunamis, and coastal currents. The state of the system modeled by these equations consists of the wave height h(t, x, y), velocity in the x-direction u(t, x, y) and velocity in the y-direction v(t, x, y). Given an initial state (h_0, u_0, v_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The shallow water equations are defined as: ∂ h/∂ t + ∂ (hu)/∂ x + ∂ (hv)/∂ y = 0, ∂ u/∂ t + u∂ u/∂ x + v∂ u/∂ y + g∂ h/∂ x = 0, ∂ v/∂ t + u∂ v/∂ x + v∂ v/∂ y + g∂ h/∂ y = 0, where g is the gravitational constant. We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=0.1. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial end temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the wave height h(t,x,y). For each trajectory, we start with zero initial velocities and the initial height h_0(x,y) generated as: h̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)), h_0(x, y) = 1 + h̃_0(x, y) - min(h̃_0)/max(h̃_0) - min(h̃_0), where N = 3 and λ_kl, γ_kl∼𝒩(0, 1). The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively. We use scikit-fdiff <cit.> to solve the PDEs. Navier-Stokes. For this dataset we model the propagation of a scalar field (e.g., smoke concentration) in a fluid (e.g., air). The modeling is done by coupling the Navier-Stokes equations with the Boussinesq buoyancy term and the transport equation to model the propagation of the scalar field. The state of the system modeled by these equations consists of the scalar field c(t,x,y), velocity in x-direction u(t,x,y), velocity in y-direction v(t,x,y), and pressure p(t,x,y). Given an initial state (c_0, u_0, v_0, p_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The Navier-Stokes equations with the transport equation are defined as: ∂ u/∂ x + ∂ v/∂ y = 0, ∂ u/∂ t + u ∂ u/∂ x + v ∂ u/∂ y = - ∂ p/∂ x + ν( ∂^2 u/∂ x^2 + ∂^2 u/∂ y^2), ∂ v/∂ t + u ∂ v/∂ x + v ∂ v/∂ y = - ∂ p/∂ y + ν( ∂^2 v/∂ x^2 + ∂^2 v/∂ y^2) + c, ∂ c/∂ t = - u ∂ c/∂ x - v ∂ c/∂ y + ν( ∂^2 c/∂ x^2 + ∂^2 c/∂ y^2), where ν = 0.002. We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=2.0, but drop the first 0.5 seconds due to slow dynamics during this time period. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial and temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the scalar field c(t,x,y). For each trajectory, we start with zero initial velocities and pressure, and the initial scalar field c_0(x,y) is generated as: c̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)), c_0(x, y) = c̃_0(x, y) - min(c̃_0)/max(c̃_0) - min(c̃_0), where N = 2 and λ_kl, γ_kl∼𝒩(0, 1). The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively. 
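The random initial fields for both synthetic datasets follow the same low-frequency Fourier recipe (N = 3 for the Shallow Water height, N = 2 for the Navier-Stokes scalar field). Below is a small sketch of that sampler with the stated min-max normalization; the function name and seed handling are our own choices.

```python
import numpy as np

def random_fourier_field(xy, n_modes=3, offset=1.0, seed=0):
    """sum_{k,l=-N..N} lam_kl cos(2*pi*(k*x + l*y)) + gam_kl sin(2*pi*(k*x + l*y)),
    min-max normalized to [0, 1]; offset=1.0 reproduces h_0 in [1, 2]
    (use offset=0.0, n_modes=2 for the Navier-Stokes scalar field c_0)."""
    rng = np.random.default_rng(seed)
    x, y = xy[:, 0], xy[:, 1]
    f = np.zeros(len(xy))
    for k in range(-n_modes, n_modes + 1):
        for l in range(-n_modes, n_modes + 1):
            lam, gam = rng.standard_normal(2)
            phase = 2 * np.pi * (k * x + l * y)
            f += lam * np.cos(phase) + gam * np.sin(phase)
    return offset + (f - f.min()) / (f.max() - f.min())

# e.g. h0 = random_fourier_field(xy, n_modes=3, offset=1.0)   # Shallow Water wave height
```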
We use PhiFlow <cit.> to solve the PDEs. Scalar Flow. r0.2 < g r a p h i c s > Spatial grid used for Scalar Flow dataset. This dataset, proposed by <cit.>, consists of observations of smoke plumes rising in hot air. The observations are post-processed camera images of the smoke plumes taken from multiple views. For simplicity, we use only the front view. The dataset contains 104 trajectories, where each trajectory has 150 time points and each image has the resolution 1080 × 1920. To reduce dimensionality of the observations we sub-sample the original spatial and temporal grids. For the temporal grid, we remove the first 50 time points, which leaves 100 time points, and then take every 4th time point, thus leaving 20 time points in total. The original 1080 × 1920 spatial grid is first down-sampled by a factor of 9 giving a new grid with resolution 120 × 213, and then the new grid is further sub-sampled based on the smoke density at each node. In particular, we compute the average smoke density at each node (averaged over time), and then sample the nodes without replacement with the probability proportional to the average smoke density (thus, nodes that have zero density most of the time are not selected). See example of a final grid in Figure <ref>. This gives a new grid with 1089 nodes. We further smooth the observations by applying Gaussian smoothing with the standard deviation of 1.5 (assuming domain size 120 × 213). We use the first 60 trajectories for training, next 20 for validation and next 20 for testing. §.§ Model architecture and hyper-parameters. Dynamics function. For all datasets we define F_θ_dyn as an MLP. For Shallow Water/Navier-Stokes/Scalar Flow we use 1/3/3 hidden layers with the size of 1024/512/512, respectively. We use ReLU nonlinearities. Observation function. For all datasets we define g_θ_dec as a selector function which takes the latent state (t, x) ∈ℝ^d and returns its first component. Encoder. Our encoder h_θ_enc consists of three function: h_θ_spatial, h_θ_temporal, and h_θ_read. The spatial aggregation function h_θ_spatial is a linear mapping to ℝ^128. The temporal aggregation function h_θ_temporal is a stack of transformer layers with temporal attention and continuous relative positional encodings <cit.>. For all datasets, we set the number of transformer layers to 6. Finally, the variational parameter readout function h_θ_read is a mapping defined as ψ_b^j = h_θ_read(α_[b]^T) = [ γ_b^j; τ_b^j ]= [ Linear(α_[b]^T); exp(Linear(α_[b]^T)) ], where Linear is a linear layer (different for each line), and γ_b^j and τ_b^j are the variational parameters discussed in Appendix A. Spatial and temporal neighborhoods. We use the same spatial neighborhoods 𝒩_S() for both the encoder and the dynamics function. We define 𝒩_S() as the set of points consisting of the point and points on two concentric circles centered at , with radii r and r/2, respectively. Each circle contains 8 points spaced 45 degrees apart (see Figure <ref> (right)). The radius r is set to 0.1. For Shallow Water/Navier-Stokes/Scalar Flow the size of temporal neighborhood (δ_T) is set to 0.1/0.1/0.2, respectively. Multiple Shooting. For Shallow Water/Navier-Stokes/Scalar Flow we split the full training trajectories into 4/4/19 sub-trajectories, or, equivalently, have the sub-trajectory length of 6/6/2. §.§ Training, validation, and testing setup. Data preprocessing. We scale the temporal grids, spatial grids, and observations to be within the interval [0, 1]. Training. 
We train our model for 20000 iterations using Adam <cit.> optimizer with constant learning rate 3e-4 and linear warmup for 200 iterations. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). The batch size is 1. Validation. We use validation set to track the performance of our model during training and save the parameters that produce the best validation performance. As performance measure we use the mean absolute error at predicting the full validation trajectories given some number of initial observations. For Shallow Water/Navier-Stokes/Scalar Flow we use the first 5/5/10 observations. The predictions are made by taking one sample from the posterior predictive distribution (see Appendix C.4 for details). Testing. Testing is done similarly to validation, except that as the prediction we use an estimate of the expected value of the posterior predictive distribution (see Appendix C.4 for details). §.§ Forecasting. Given initial observations _1:m at time points t_1:m, we predict the future observation _n at a time point t_n > t_m as the expected value of the approximate posterior predictive distribution: p(_n | _1:m, _1:M) ≈∫ p(_n | _m, θ_dyn, θ_dec) q(_m) q(θ_dyn) q(θ_dec) d_m dθ_dyn dθ_dec. The expected value is estimated via Monte Carlo integration, so the algorithm for predicting _n is: * Sample θ_dyn, θ_dec from q(θ_dyn), q(θ_dec). * Sample _m from q(_m) = ∏_j=1^Nq_ψ_m^j(_m^j), where the variational parameters ψ_m^j are given by the encoder h_θ_enc operating on the initial observations _1:m as ψ_m^j = h_θ_enc([t_m, _j]). * Compute the latent state (t_n) = (t_n; t_m, _m, θ_dyn). * Sample _n by sampling each _n^j from 𝒩(_n^j | g_θ_dec((t_n, _j))), σ_u^2 I). * Repeat steps 1-4 n times and average the predictions (we use n=10). §.§ Model comparison setup. DINo. We use the official implementation of DINo <cit.>. The encoder is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The code dimension is 100. The dynamics function is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The decoder has 3 layers and 64 channels. MAgNet. We use the official implementation of MAgNet <cit.>. We use the graph neural network variant of the model. The number of message-passing steps is 5. All MLPs have 4 layers with 128 neurons each in each layer. The latent state dimension is 128. § APPENDIX D §.§ Spatiotemporal neighborhood shapes and sizes. Here we investigate the effect of changing the shape and size of spatial and temporal neighborhoods used by the encoder and dynamics functions. We use the default hyperparameters discussed in Appendix C and change only the neighborhood shape or size. A neighborhood size of zero implies no spatial/temporal aggregation. Initially, we use the original circular neighborhood displayed in Figure <ref> for both encoder and dynamics function and change only its size (radius). The results are presented in Figures <ref> and <ref>. In Figure <ref>, it is surprising to see very little effect from changing the encoder's spatial neighborhood size. A potential explanation is that the dynamics function shares the spatial aggregation task with the encoder. However, the results in Figure <ref> are more intuitive, displaying a U-shaped curve for the test MAE, indicating the importance of using spatial neighborhoods of appropriate size. 
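Returning to the forecasting procedure of Appendix C.4, the posterior-predictive mean can be estimated with a short Monte Carlo loop. All components below (`encoder`, `q_dyn`, `q_dec`, `dynamics`, `decoder`, `odesolve`) are placeholders for the pieces sketched earlier, so this is a schematic rather than the authors' implementation.

```python
import torch

@torch.no_grad()
def forecast(u_init, t_future, encoder, q_dyn, q_dec, dynamics, decoder,
             odesolve, n_samples=10):
    """Estimate E[u_n | u_{1:m}] by averaging posterior-predictive draws (steps 1-4)."""
    preds = []
    for _ in range(n_samples):
        theta_dyn, theta_dec = q_dyn.rsample(), q_dec.rsample()     # step 1: parameters
        gamma, tau = encoder(u_init)                                 # step 2: q(s_m) from u_{1:m}
        s_m = gamma + tau * torch.randn_like(tau)
        z_future = odesolve(dynamics, s_m, t_future, theta_dyn)      # step 3: latent state z(t_n)
        preds.append(decoder(z_future, theta_dec))                   # step 4: observation mean
    return torch.stack(preds).mean(0)   # averaging the decoder means has the same expectation
```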
Interestingly, the best results tend to be achieved with relatively large neighborhood sizes. Similarly, Figure <ref> shows U-shaped curves for the encoder's temporal neighborhood size, suggesting that latent state inference benefits from utilizing local temporal information. We then examine the effect of changing the shape of the dynamics function's spatial neighborhood. We use n-circle neighborhoods, which consist of n equidistant concentric circular neighborhoods (see examples in Figure <ref>). Effectively, we maintain a fixed neighborhood size while altering its density. The results can be seen in Figure <ref>. We find that performance does not significantly improve when using denser (and presumably more informative) spatial neighborhoods, indicating that accurate predictions require only a relatively sparse neighborhood of appropriate size. §.§ Multiple shooting. Here we demonstrate the effect of using multiple shooting for model training. In Figure <ref> (left), we vary the sub-trajectory length (longer sub-trajectories imply more difficult training) and plot the test errors for each sub-trajectory length. We observe that in all cases, the best results are achieved when the sub-trajectory length is considerably smaller than the full trajectory length. In Figure <ref> (right) we further show the training times; as can be seen, multiple shooting noticeably reduces the training time. § APPENDIX E Noisy Data. Here we show the effect of observation noise on our model and compare the results against other models. We train all models with data noise of various strengths, and then compute the test MAE on noiseless data (we still use noisy data to infer the initial state at test time). Figure <ref> shows that our model can handle noise strengths up to 0.1 without significant drops in performance. Note that all observations are in the range [0, 1].
http://arxiv.org/abs/2307.04525v2
20230710124936
Cluster-Induced Mask Transformers for Effective Opportunistic Gastric Cancer Screening on Non-contrast CT Scans
[ "Mingze Yuan", "Yingda Xia", "Xin Chen", "Jiawen Yao", "Junli Wang", "Mingyan Qiu", "Hexin Dong", "Jingren Zhou", "Bin Dong", "Le Lu", "Li Zhang", "Zaiyi Liu", "Ling Zhang" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Yuan, M. et al. Effective Opportunistic Gastric Cancer Screening ^1DAMO Academy, Alibaba Group ^2Peking University ^3Hupan Lab, 310023, Hangzhou, China ^4Guangdong Province People's Hospital ^5The First Affiliated Hospital of Zhejiang University ^6Peking University Changsha Institute for Computing and Digital Economy Cluster-Induced Mask Transformers for Effective Opportunistic Gastric Cancer Screening on Non-contrast CT Scans Mingze Yuan^1,2,3,*, Yingda Xia^1,, Xin Chen^4,, Jiawen Yao^1,3, Junli Wang^5, Mingyan Qiu^1,3, Hexin Dong^1,2,3, Jingren Zhou^1, Bin Dong^2,6, Le Lu^1, Li Zhang^2, Zaiyi Liu^4,, Ling Zhang^1 August 12, 2023 =================================================================================================================================================================================================== Gastric cancer is the third leading cause of cancer-related mortality worldwide, but no guideline-recommended screening test exists. Existing methods can be invasive, expensive, and lack sensitivity to identify early-stage gastric cancer. In this study, we explore the feasibility of using a deep learning approach on non-contrast CT scans for gastric cancer detection. We propose a novel cluster-induced Mask Transformer that jointly segments the tumor and classifies abnormality in a multi-task manner. Our model incorporates learnable clusters that encode the texture and shape prototypes of gastric cancer, utilizing self- and cross-attention to interact with convolutional features. In our experiments, the proposed method achieves a sensitivity of 85.0% and specificity of 92.6% for detecting gastric tumors on a hold-out test set consisting of 100 patients with cancer and 148 normal. In comparison, two radiologists have an average sensitivity of 73.5% and specificity of 84.3%. We also obtain a specificity of 97.7% on an external test set with 903 normal cases. Our approach performs comparably to established state-of-the-art gastric cancer screening tools like blood testing and endoscopy, while also being more sensitive in detecting early-stage cancer. This demonstrates the potential of our approach as a novel, non-invasive, low-cost, and accurate method for opportunistic gastric cancer screening. Work was done during an internship at DAMO Academy, Alibaba Group. Corresponding authors: [email protected]; {wolfchenxin, zyliu}@163.com § INTRODUCTION Gastric cancer (GC) is the third leading cause of cancer-related deaths worldwide <cit.>. The five-year survival rate for GC is approximately 33% <cit.>, which is mainly attributed to patients being diagnosed with advanced-stage disease harboring unresectable tumors. This is often due to the latent and nonspecific signs and symptoms of early-stage GC. However, patients with early-stage disease have a substantially higher five-year survival rate of around 72% <cit.>. Therefore, early detection of resectable/curable gastric cancers, preferably before the onset of symptoms, presents a promising strategy to reduce associated mortality. Unfortunately, current guidelines do not recommend any screening tests for GC <cit.>. While several screening tools have been developed, such as Barium-meal gastric photofluorography <cit.>, upper endoscopy <cit.>, and serum pepsinogen levels <cit.>, they are challenging to apply to the general population due to their invasiveness, moderate sensitivity/specificity, high cost, or side effects. 
Therefore, there is an urgent need for novel screening methods that are noninvasive, highly accurate, low-cost, and ready to distribute. Non-contrast CT is a commonly used imaging protocol for various clinical purposes. It is a non-invasive, relatively low-cost, and safe procedure that exposes patients to less radiation dose and does not require the use of contrast injection that may cause serious side effects (compared to multi-phase contrast-enhanced CT). With recent advances in AI, opportunistic screening of diseases using non-contrast CT during routine clinical care performed for other clinical indications, such as lung and colorectal cancer screening, presents an attractive approach to early detect treatable and preventable diseases <cit.>. However, whether early detection of gastric cancer using non-contrast CT scans is possible remains unknown. This is because early-stage gastric tumors may only invade the mucosal and muscularis layers, which are difficult to identify without the help of stomach preparation and contrast injection. Additionally, the poor contrast between the tumor and normal stomach wall/tissues on non-contrast CT scans and various shape alterations of gastric cancer, further exacerbates this challenge. In this paper, we propose a novel approach for detecting gastric cancer on non-contrast CT scans. Unlike the conventional “segmentation for classification" methods that directly employ segmentation networks, we developed a cluster-induced Mask Transformer that performs segmentation and global classification simultaneously. Given the high variability in shape and texture of gastric cancer, we encode these features into learnable clusters and utilize cluster analysis during inference. By incorporating self-attention layers for global context modeling, our model can leverage both local and global cues for accurate detection. In our experiments, the proposed approach outperforms nnUNet <cit.> by 0.032 in AUC, 5.0% in sensitivity, and 4.1% in specificity. These results demonstrate the potential of our approach for opportunistic screening of gastric cancer in asymptomatic patients using non-contrast CT scans. § RELATED WORK Automated Cancer Detection. Researchers have explored automated tumor detection techniques on endoscopic <cit.>, pathological images <cit.>, and the prediction of cancer prognosis <cit.>. Recent developments in deep learning have significantly improved the segmentation of gastric tumors <cit.>, which is critical for their detection. However, our framework is specifically designed for non-contrast CT scans, which is beneficial for asymptomatic patients. While previous studies have successfully detected pancreatic <cit.> and esophageal <cit.> cancers on non-contrast CT, identifying gastric cancer presents a unique challenge due to its subtle texture changes, various shape alterations, and complex background, e.g., irregular gastric wall; liquid and contents in the stomach. Mask Transformers. Recent studies have used Transformers for natural and medical image segmentation <cit.>. Mask Transformers <cit.> further enhance CNN-based backbones by incorporating stand-alone Transformer blocks, treating object queries in DETR <cit.> as memory-encoded queries for segmentation. CMT-Deeplab <cit.> and KMaX-Deeplab <cit.> have recently proposed interpreting the queries as clustering centers and adding regulatory constraints for learning the cluster representations of the queries. 
Mask Transformers are locally sensitive to image textures for precise segmentation and globally aware of organ-tumor morphology for recognition. Their cluster representations demonstrate a remarkable balance of intra-cluster similarity and inter-class discrepancy. Therefore, Mask Transformers are an ideal choice for an end-to-end joint segmentation and classification system for detecting gastric cancer. § METHODS Problem Formulation. Given a non-contrast CT scan, cancer screening is a binary classification with two classes as ℒ={0, 1}, where 0 stands for“normal” and 1 for“GC” (gastric cancer). The entire dataset is denoted by 𝒮 = {(𝐗_i, 𝐘_i, 𝐏_i) | i=1,2,⋯,N}, where 𝐗_i is the i-th non-contrast CT volume, with 𝐘_i being the voxel-wise label map of the same size as 𝐗_i and K channels. Here, K=3 represents the background, stomach, and GC tumor. 𝐏_i ∈ℒ is the class label of the image, confirmed by pathology, radiology, or clinical records. In the testing phase, only 𝐗_i is given, and our goal is to predict a class label for 𝐗_i. Knowledge Transfer from Contrast-Enhanced to Non-contrast CT. To address difficulties with tumor annotation on non-contrast CTs, the radiologists start by annotating a voxel-wise tumor mask on the contrast-enhanced CT, referring to clinical and endoscopy reports as needed. DEEDs <cit.> registration is then performed to align the contrast-enhanced CT with the non-contrast CT and the resulting deformation field is applied to the annotated mask. Any misaligned ones are revised manually. In this manner (Fig. <ref>d), a relatively coarse yet highly reliable tumor mask can be obtained for the non-contrast CT image. Cluster-Induced Classification with Mask Transformers. Segmentation for classification is widely used in tumor detection <cit.>. We first train a UNet <cit.> to segment the stomach and tumor regions using the masks from the previous step. This UNet considers local information and can only extract stomach ROIs well during testing. However, local textures are inadequate for accurate gastric tumor detection on non-contrast CTs, so we need a network of both local sensitivity to textures and global awareness of the organ-tumor morphology. Mask transformer <cit.> is a well-suited approach to boost the CNN backbone with stand-alone transformer blocks. Recent studies <cit.> suggest interpreting object queries as cluster centers, which naturally exhibit intra-cluster similarity and inter-class discrepancy. Inspired by this, we further develop a deep classification model on top of learnable cluster representations. Specifically, given image 𝐗∈ℝ^H × W × D, annotation 𝐘∈ℝ^K × HWD, and patient class 𝐏∈ℒ, our model consists of three components: 1) a CNN backbone to extract its pixel-wise features 𝐅∈ℝ^C × HWD (Fig. <ref>a), 2) a transformer module (Fig. <ref>b), and 3) a multi-task cluster inference module(Fig. <ref>c). The transformer module gradually updates a set of randomly initialized object queries 𝐂∈ℝ^N × C, i.e., to meaningful mask embedding vectors through cross-attention between object queries and multi-scale pixel features, 𝐂←𝐂 + max_N (𝐐^c (𝐊^p)^T) 𝐕^p, where c and p stand for query and pixel features, 𝐐^c, 𝐊^p, 𝐕^p represent linearly projected query, key, and value. We adopt cluster-wise argmax from KMax-DeepLab <cit.> to substitute spatial-wise softmax in the original settings. We further interpret the object queries as cluster centers from a cluster analysis perspective. 
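A minimal sketch of the cluster-wise ("k-means") cross-attention update C ← C + argmax_N(Q^c (K^p)^T) V^p described above: the hard one-hot assignment follows the cluster-wise-argmax idea adopted from kMaX-DeepLab, while the single-head form and the raw projection matrices Wq, Wk, Wv are our simplifications, not the released implementation.

```python
import torch
import torch.nn.functional as F

def kmeans_cross_attention(C, Fp, Wq, Wk, Wv):
    """One cluster-wise cross-attention update.

    C  : (N, d)  object queries / cluster centers
    Fp : (M, d)  pixel features (M = H*W*D voxels, flattened)
    Wq, Wk, Wv : (d, d) learned projection matrices
    """
    Q, K, V = C @ Wq, Fp @ Wk, Fp @ Wv             # linear projections
    logits = Q @ K.t()                             # (N, M) cluster-to-pixel affinities
    # cluster-wise argmax over the N axis: each pixel is hard-assigned to one center
    assign = F.one_hot(logits.argmax(dim=0), num_classes=C.shape[0]).float().t()  # (N, M)
    return C + assign @ V   # gradients flow through the value path only (argmax is hard)
```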
All the pixels in the convolutional feature map are assigned to different clusters based on these centers. The assignment of clusters (a.k.a. mask prediction) 𝐌∈ℝ^N × HWD is computed as the cluster-wise softmax function over the matrix product between the cluster centers 𝐂 and pixel-wise feature matrix 𝐅, i.e., 𝐌 = Softmax_N(𝐑) = Softmax_N(𝐂𝐅). The final segmentation logits 𝐙∈ℝ^K × HWD are obtained by aggregating the pixels within each cluster according to cluster-wise classification, which treats pixels within a cluster as a whole. The aggregation of pixels is achieved by 𝐙 = 𝐂_K 𝐌, where the cluster-wise classification 𝐂_K is represented by an MLP that projects the cluster centers 𝐂 to K channels (the number of segmentation classes). The learned cluster centers possess high-level semantics with both inter-cluster discrepancy and intra-cluster similarity for effective classification. Rather than directly classifying the final feature map, we first generate the cluster-path feature vector by taking the channel-wise average of cluster centers 𝐂 = 1/N∑_i=1𝐂_i ∈ℝ^C. Additionally, to enhance the consistency between the segmentation and classification outputs, we apply global max pooling to cluster assignments 𝐑 to obtain the pixel-path feature vector 𝐑∈ℝ^N. This establishes a direct connection between classification features and segmentation predictions. Finally, we concatenate these two feature vectors to obtain the final feature and project it onto the classification prediction 𝐏∈ℝ^2 via a two-layer MLP. The overall training objective is formulated as, ℒ = ℒ_seg(𝐙, 𝐘) + ℒ_cls(𝐏, 𝐏), where the segmentation loss ℒ_seg(·,·) is a combination of Dice and cross entropy losses, and the classification loss ℒ_cls(·,·) is cross entropy loss. § EXPERIMENTS §.§ Experimental setup Dataset and Ground Truth. Our study analyzed a dataset of CT scans collected from Guangdong Province People's Hospital between years 2018 and 2020, with 2,139 patients consisting of 787 gastric cancer and 1,352 normal cases. We used the latest patients in the second half of 2020 as a hold-out test set, resulting in a training set of 687 gastric cancer and 1,204 normal cases, and a test set of 100 gastric cancer and 148 normal cases. We randomly selected 20% of the training data as an internal validation set. To further evaluate specificity in a larger population, we collected an external test set of 903 normal cases from Shengjing Hospital. Cancer cases were confirmed through endoscopy (and pathology) reports, while normal cases were confirmed by radiology reports and a two-year follow-up. All patients underwent multi-phase CTs with a median spacing of 0.75 × 0.75 × 5.0 mm and an average size of (512, 512, 108) voxel. Tumors were annotated on the venous phase by an experienced radiologist specializing in gastric imaging using CTLabeler <cit.>, while the stomach was automatically annotated using a self-learning model <cit.>. Implementation Details. We resampled each CT volume to the median spacing while normalizing it to have zero mean and unit variance. During training, we cropped the 3D bounding box of the stomach and added a small margin of (32, 32, 4). We used nnUNet <cit.> as the backbone, with four transformer decoders, each taking pixel features with output strides of 32, 16, 8, and 4. We set the number of object queries N to 8, with each having a dimension of 128, and included an eight-head self-attention layer in each block. The patch size used during training and inference is (192, 224, 40) voxel. 
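The multi-task cluster-inference head described above (assignments M = softmax_N(C F), segmentation logits Z = C_K M, and a classification branch built from the channel-averaged centers and max-pooled assignments) might look roughly as follows. Dimensions follow the stated defaults (N = 8 clusters, 128 channels, K = 3 segmentation classes, hidden size 128), but the code is an illustrative reconstruction, not the released implementation.

```python
import torch
import torch.nn as nn

class ClusterInference(nn.Module):
    def __init__(self, d=128, n_clusters=8, n_seg_classes=3, hidden=128):
        super().__init__()
        self.cluster_cls = nn.Linear(d, n_seg_classes)           # C_K: per-cluster class logits
        self.head = nn.Sequential(nn.Linear(d + n_clusters, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))           # normal vs. gastric cancer

    def forward(self, C, Fp):
        """C: (N, d) cluster centers, Fp: (d, M) pixel features (M = H*W*D)."""
        R = C @ Fp                                  # (N, M) cluster assignment logits
        M_assign = R.softmax(dim=0)                 # cluster-wise softmax over the N axis
        Z = self.cluster_cls(C).t() @ M_assign      # (K, M) segmentation logits
        cluster_feat = C.mean(dim=0)                # channel-wise average of the centers
        pixel_feat = R.amax(dim=1)                  # global max pool of assignments, (N,)
        P = self.head(torch.cat([cluster_feat, pixel_feat]))      # classification logits
        return Z, P
```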
We followed <cit.> to augment data. We trained the model with RAdam using a learning rate of 10^-4 and a (backbone) learning rate multiplier of 0.1 for 1000 epochs, with a frozen backbone of the pre-trained nnUNet <cit.> for the first 50 epochs. To enhance performance, we added deep supervision by aligning the cross-attention map with the final segmentation map, as per KMax-Deeplab <cit.>. The hidden layer dimension in the two-layer MLP is 128. We also trained a standard UNet <cit.> to localize the stomach region in the entire image in the testing phase. Evaluation Metrics and Reader Study. For the binary classification, model performance is evaluated using area under ROC curve (AUC), sensitivity (Sens.), and specificity (Spec.). And successful localization of the tumors is considered when the overlap between the segmentation mask generated by the model and the ground truth is greater than 0.01, measured by the Dice score. A reader study was conducted with two experienced radiologists, one from Guangdong Province People's Hospital with 20 years of experience and the other from The First Affiliated Hospital of Zhejiang University with 9 years of experience in gastric imaging. The readers were given 248 non-contrast CT scans from the test set and asked to provide a binary decision for each scan, indicating whether the scan showed gastric cancer. No patient information or records were provided to the readers. Readers were informed that the dataset might contain more tumor cases than the standard prevalence observed in screening, but the proportion of case types was not disclosed. Readers used ITK-SNAP <cit.> to interpret the CT scans without any time constraints. Compared Baselines. <ref> presents a comparative analysis of our proposed method with three baselines. The first two approaches belong to “Segmentation for classification" (S4C) <cit.>, using nnUNet <cit.> and TransUNet <cit.>. A case is classified as positive if the segmented tumor volume exceeds a threshold that maximizes the sum of sensitivity and specificity on the validation set. The third baseline (denoted as “nnUNet-Joint") integrates a CNN classification head into UNet <cit.> and trained end-to-end. We obtain the 95% confidence interval of AUC, sensitivity, and specificity values from 1000 bootstrap replicas of the test dataset for statistical analysis. For statistical significance, we conduct a DeLong test between two AUCs (ours vs. compared method) and a permutation test between two sensitivities or specificities (ours vs. compared method and radiologists). §.§ Results Our method Outperforms Baselines. Our method outperforms three baselines (<ref>) in all metrics, particularly in AUC and sensitivity. The advantage of our approach is that it captures the local and global information simultaneously in virtue of the unique architecture of mask transformer. It also extracts high-level semantics from cluster representations, making it suitable for classification and facilitating a holistic decision-making process. Moreover, our method reaches a considerable specificity of 97.7% on the external test set, which is crucial in opportunistic screening for less false positives and unnecessary human workload. AI Models Surpass Experienced Radiologists on Non-contrast CT Scans. As shown in <ref>, our AI model's ROC curve is superior to that of two experienced radiologists. 
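For the statistical analysis described above, confidence intervals from 1000 bootstrap replicas of the test set can be computed along these lines (shown for AUC; sensitivity and specificity at a fixed operating point follow the same pattern). This is a generic sketch, not the study's analysis code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.95, seed=0):
    """Point estimate and 95% bootstrap CI of the AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():   # a replica must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [(1 - alpha) / 2 * 100, (1 + alpha) / 2 * 100])
    return roc_auc_score(y_true, y_score), (lo, hi)
```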
The model achieves a sensitivity of 85.0% in detecting gastric cancer, which significantly exceeds the mean performance of doctors (73.5%) and also surpasses the best performing doctor (R2: 75.0%), while maintaining a high specificity. A visual example is presented in <ref>. This early-stage cancer (T1) is miss-detected by both radiologists, whereas classified and localized precisely by our model. Subgroup Analysis. In <ref>, we report the performance of patient-level detection and tumor-level localization stratified by tumor (T) stage. We compare our model's performance with that of both radiologists. The results show that our model performs better in detecting early stage tumors (T1, T2) and provides more precise tumor localization. Specifically, our model detects 60.0% (6/10) T1 cancers, and 77.8% (7/9) T2 cancers, surpassing the best performing expert (50% T1, 55.6% T2). Meanwhile, our model maintains a reliable detection rate and credible localization accuracy for T3 and T4 tumors (2 of 34 T3 tumors missed). Comparison with Established Screening Tools. Our method surpasses or performs on par with established screening tools <cit.> in terms of sensitivity for gastric cancer detection at a similar specificity level with a relatively large testing patient size (n=1151 by integrating the internal and external test sets), as shown in <ref>. This finding sheds light on the opportunity to employ automated AI systems to screen gastric cancer using non-contrast CT scans. § CONCLUSION We propose a novel Cluster-induced Mask Transformer for gastric cancer detection on non-contrast CT scans. Our approach outperforms strong baselines and experienced radiologists. Compared to other screening methods, such as blood tests, endoscopy, upper-gastrointestinal series, and ME-NBI, our approach is non-invasive, cost-effective, safe, and more accurate for detecting early-stage tumors. The robust performance of our approach demonstrates its potential for opportunistic screening of gastric cancer in the general population. Acknowledgement This work was supported by Alibaba Group through Alibaba Research Intern Program. Bin Dong and Li Zhang was partly supported by NSFC 12090022 and 11831002, and Clinical Medicine Plus X-Young Scholars Project of Peking University PKU2023LCXQ041. splncs04
http://arxiv.org/abs/2307.11168v1
20230712010646
On the hydraulic fracturing in naturally-layered porous media using the phase field method
[ "Xiaoying Zhuang", "Shuwei Zhou", "Mao Sheng", "Gensheng Li" ]
physics.geo-ph
[ "physics.geo-ph", "cs.NA", "math.NA" ]
[figure]labelfont=bf,name=Fig.,labelsep=period apa authoryear,round,aysep=,,yysep=, DeepMapping: The Case for Learned Data Mapping for Compression and Efficient Query Processing Jia Zou ============================================================================================= 2 1 Department of Geotechnical Engineering, College of Civil Engineering, Tongji University, Shanghai 200092, P.R. China 2 Institute of Continuum Mechanics, Leibniz University Hannover, Hannover 30167, Germany 3 State Key Laboratory of Petroleum Resources and Prospecting, China University of Petroleum, Beijing, China * Corresponding author: Shuwei Zhou ([email protected]; [email protected]) In the hydraulic fracturing of natural rocks, understanding and predicting crack penetrations into the neighboring layers is crucial and relevant in terms of cost-efficiency in engineering and environmental protection. This study constitutes a phase field framework to examine hydraulic fracture propagation in naturally-layered porous media. Biot's poroelasticity theory is used to couple the displacement and flow field, while a phase field method helps characterize fracture growth behavior. Additional fracture criteria are not required and fracture propagation is governed by the equation of phase field evolution. Thus, penetration criteria are not required when hydraulic fractures reach the material interfaces. The phase field method is implemented within a staggered scheme that sequentially solves the displacement, phase field, and fluid pressure. We consider the soft-to-stiff and the stiff-to-soft configurations, where the layer interface exhibits different inclination angles θ. Penetration, singly-deflected, and doubly-deflected fracture scenarios can be predicted by our simulations. In the soft-to-stiff configuration, θ=0^∘ exhibits penetration or symmetrical doubly-deflected scenarios, and θ=15^∘ exhibits singly-deflected or asymmetric doubly-deflected scenarios. Only the singly-deflected scenario is obtained for θ=30^∘. In the stiff-to-soft configuration, only the penetration scenario is obtained with widening fractures when hydraulic fractures penetrate into the soft layer. Keywords: Cap layer, Reservoir layer, Numerical simulation, Phase field, Hydraulic fracturing, Staggered scheme § INTRODUCTION Hydraulic fracturing (HF) in porous media is a challenging area for mechanical, environmental, energy, and geological engineering <cit.>. On the one hand, HF applies pressurized fluid <cit.> to form highly permeable fractures into the rock strata and the fracture network facilitate the linking of wellbores with the expected natural resources. On the other hand, HF is also relatively controversial because of its potential impact on the engineering geological environment. Unexpected fractures may be stimulated and propagate into neighboring rock strata, thereby resulting in a risk of water contamination. Moreover, uncontrolled hydraulic fractures along dominant or previously unknown faults may cause an increase in seismic activity <cit.>, which is detrimental to the stability of the entire geological system. Therefore, better prediction of hydraulic fracture propagation is one of the most critical issues in recent years <cit.>, especially in engineering geology. Naturally-layered rock strata are an important and representative part of the engineering geological environment. In general, numerous natural or artificial discontinuities are contained in naturally-layered reservoirs, such as fractures or material interfaces. 
Many studies have examined how fluid-driven fractures interact with natural discontinuities. Some contributions can be referred to <cit.>. These studies have indicated that three fracture patterns exist in layered domains, particular when hydraulic fractures reach the layer interfaces: penetration, singly-deflected, and doubly-deflected scenarios (see Fig. <ref>). However, these studies have yet to achieve a consensus on mechanisms for different fracture penetration patterns. Hence, predicting hydraulic fractures in naturally-layered media remains an open research topic and new insights should be provided to conduct further research. For example, advanced numerical approaches can be applied in naturally-layered media, although hydraulic fractures can be also investigated either analytically <cit.> or experimentally <cit.>. The numerical models for fracture can be classified into discrete and continuous approaches, including the extended finite element method (XFEM) <cit.>, cohesive element method <cit.>, element-erosion method <cit.>, phantom-node method <cit.>, mesh-free methods <cit.>, cracking particle methods <cit.>, peridynamics <cit.>, gradient damage models <cit.>, screened-Poisson models <cit.>, and phase field models (PFMs) <cit.>. Although some continuous and discrete approaches have been used in HF, predicting fracturing networks across different rock formations, such as reservoir and cap layers, remains a challenging topic. For example, an XFEM framework <cit.> was recently established to investigate hydraulic fracture propagation in a layered domain. Nevertheless, mandatory penetration criteria are applied when the fractures reach the layer interfaces. Although singly-deflected and penetration scenarios can be observed, the hydraulic fractures are “man-made", and not the automatically predicted ones with respect to physical principles. The current study investigates the propagation of hydraulic fractures in naturally layered geological formations by applying the phase field model <cit.> to solve the fracture penetration issue at the layer interface and provide a new perspective. Note that the used PFM is based on the quasi-static formulation of <cit.> for fluid-driven fracture, which is different from the PFMs for single-phase solid <cit.> and for dynamic fluid-driven fracture <cit.>. In the numerical investigation, we consider two configurations, namely, the soft-to-stiff and the stiff-to-soft, and also different inclination angles of the layer interface. The phase field characteristics of fluid-driven fracture in layered geological formations are firstly and systematically explored. In addition, the displacement and stress characteristics related to geological system stability are investigated. The relationship between the phase field (fracture pattern) and the stiffness contrast and inclination angle of the geological formations, is also further revealed. Based on minimization of the energy functional, our simulations automatically reveal the three potential fracture scenarios in naturally-layered geological formations, i.e., the penetration, singly-deflected, and doubly-deflected scenarios, which must be achieved by other numerical methods such as XFEM <cit.> by imposing “man-made" penetration criteria. The phase field simulation in this study can deepen our understanding of crack patterns and their governing factors in layered geological formations. 
The successful application of PFM also shows its immense potential in modeling the interface crack between two layers and how cracks appear into neighboring layers, which are not easily captured with conventional methods. The remainder of this paper is organized as follows: Sections <ref>, <ref>, and <ref> present the theoretical model, numerical implementation, and validation examples, respectively. Sections <ref> and <ref> provide the numerical investigations on the soft-to-stiff and stiff-to-soft configurations, respectively. Lastly, Section <ref> concludes this study. § MATHEMATICAL MODELS FOR FRACTURE PROPAGATION §.§ Energy functional Let Ω be a cracked two-dimensional permeable porous solid and x be the position vector in Fig. <ref>. Two different layers, Ω_A and Ω_B, compose the calculation domain Ω (Ω_A ∪Ω_B= Ω). The full bond between the two layers is assumed and continuity conditions are naturally fulfilled. The boundary of the domain Ω is ∂Ω. Two disjointed parts, ∂Ω_u and ∂Ω_t, are defined with the prescribed displacement u̅( x,t) and traction t^*( x,t). Moreover, a body force b( x,t) is applied on Ω. We also define the outward unit normal vector n and the internal fracture in the domain as Γ in Fig. <ref>. The main purpose of this study is to investigate fracture propagation in the domains Ω_A and Ω_B, particularly the fracture behavior around the interface of the two layers. Furthermore, this research assumes that the pore size is considerably smaller than the fracture length scale; the porous media are elastic and homogeneous with compressible and viscous fluids. We use the variational approach of Griffith's theory <cit.> to study fracture propagation. For the purely single phase problem, the energy functional Ψ( u,Γ) for the entire calculation domain is composed of only the elastic energy Ψ_ε(ε), dissipation energy Ψ_f and external work W_ext <cit.>. By contrast, the influence of fluid pressure p must be considered in a porous medium and the energy functional involves an additional pressure-related term <cit.>: Ψ( u,p,Γ) = ∫_Ωψ_ε(ε) dΩ_Ψ_ε-∫_Ωα p · (∇· u) dΩ_pressure-related term+∫_ΓG_c dS_Ψ_f-∫_Ω b· udΩ - ∫_∂Ω_tt^*· udS_W_ext where α and G_c represent the Biot coefficient and critical energy release rate, respectively, and ε represents the linear strain tensor. In Eq. (<ref>), ψ_ε(ε) denotes the elastic energy density, which can be expressed as follows in an intact isotropic and linear elastic solid <cit.>: ψ_ε(ε) = 1/2λε_iiε_jj+με_ijε_ij where λ,μ>0 are the Lamé constants. §.§ Phase field description The phase field model <cit.>, which smears the discontinuous fracture Γ over the domain Ω, is used to describe fracture propagation in the porous media. In this study, the phase field varies from 0 to 1, ϕ=0 means that the material is unbroken, and ϕ=1 is for a “fully" broken region. Note that the value of the phase field can be used in identifying a fracture, while a damaged region bounded by a threshold value of phase field is commonly adopted to reflect the fracture shape similar to those in the discrete settings <cit.>. Thereafter, the crack surface density is represented by the phase field and its gradient as follows <cit.>: γ(ϕ,▽ϕ)=ϕ^2/2l_0+l_0/2∇ϕ·∇ϕ where l_0 is the length scale parameter. The regularized formulation (<ref>) facilitate the transfer of the dissipation energy integration over a discontinuity into an integration over a continuous domain. This treatment considerably facilitates the numerical implementation in fracture problems. Therefore, substituting Eq. 
(<ref>) into Eq. (<ref>) yields the dissipation energy Ψ_f: ∫_ΓG_c dS=G_c∫_ΓdS≈ G_c∫_ΩγdΩ =∫_ΩG_c[ϕ^2/2l_0+l_0/2∇ϕ·∇ϕ]dΩ In the PFM, the elastic energy appears as a driving term in the evolution equation. Therefore, for the purpose of removing unrealistic fracture patterns in the simulation <cit.>, the elastic energy must be decomposed. The strain decomposition is used first where the strain ε is composed of the tensile strain tensor ε^+ and compressive ε^-: ε^±=∑_a=1^d ⟨ε_a⟩^± n_a⊗ n_a where ε_a denotes the principal strains, and n_a represents the direction vectors. The two operators in Eq. (<ref>) are ⟨⟩^+=max(,0) and ⟨⟩^-=min(,0). Thereafter, the positive and negative elastic energy densities can be defined in terms of the tensile and compressive strain tensors: ψ_ε^±(ε) = λ/2⟨ tr(ε)⟩^± 2+μ tr (ε^± 2) According to <cit.>, the total elastic energy density is expressed as ψ_ε(ε)=[(1-k)(1-ϕ)^2+k]ψ_ε^+(ε)+ψ_ε^-(ε) where 0<k≪1 is a stability parameter. In Eq. (<ref>), if the phase field ϕ = 1, then the stiffness against tension is only k times of its original value, and k=0 will lead to a singularity in the stiffness matrix. Therefore, the positive stability parameter can effectively avoid this singularity and its detrimental effect on convergence in the simulation. §.§ Governing equations for the phase field evolution The energy functional (<ref>) is renewed according to Eqs. (<ref>) and (<ref>). For the variational approach <cit.>, fracture initiation and growth at time t is a process that the energy functional Ψ seeks for a minimum value. In a natural manner, setting the first variation of the functional Ψ as 0 yields δΨ=∫_∂Ω_t[(σ_ij-α pδ_ij)n_j-t_i^* ] δ u_i dS_1 -∫_Ω[(σ_ij-α p δ_ij)_,j+b_i]δ u_i dΩ_2 - ∫_Ω[ 2(ϕ-1)(1-k)ψ_ε^+ + G_c ϕ/l_0-G_c l_0∂^2ϕ/∂ x_i^2]δϕdΩ_3 + ∫_∂Ω( ∂ϕ/∂ x_in_i)δϕdS_4=0 where the component of the effective stress tensor σ(ε) is σ_ij= [(1-k)(1-ϕ)^2+k ]∂ψ_ε^+/∂ε_ij+∂ψ_ε^-/∂ε_ij Thereafter, the Cauchy stress tensor σ^por <cit.> is set as follows: σ^por(ε)=σ(ε)-α p I, in Ω In Eq. (<ref>), the dimensionless Biot coefficient α reflects the extent of perturbation in the total stress σ^por owing to the changes in fluid pressure p <cit.>. Given that Eq. (<ref>) consistently holds for all possible δ u and δϕ, in Eq. (<ref>)2 and 3, except the displacement and phase field variations, the main bodies in the integrals must constantly be 0. Therefore, Eq. (<ref>)2 and 3 produce the following governing equations: {∂σ_ij^por/∂ x_j+b_i=0 [2l_0(1-k)ψ_ε^+/G_c+1]ϕ-l_0^2∂^2 ϕ/∂x_i^2=2l_0(1-k)ψ_ε^+/G_c. To ensure the availability of PFM, the irreversibility condition must be established which indicates that a fracture cannot be healed. An easy treatment is the introduction of a history field H( x,t), which represents the maximum tensile elastic energy density at the time interval [0,t] <cit.>. That is, the history field H can be expressed as follows: H( x,t) = max_s∈[0,t]ψ_ε^+(ε( x,s)) History field H satisfies the Kuhn-Tucker conditions <cit.>. Thereby, a monotonic increase in the phase field is ensured under compression or unloading. Substituting H( x,t) into ψ_ε^+, Eq. (<ref>) yields the following strong form: {∂σ_ij^por/∂ x_i+b_i=0 [2l_0(1-k)H/G_c+1]ϕ-l_0^2∂^2 ϕ/∂x_i^2=2l_0(1-k)H/G_c. Apart from the Dirichlet boundary condition, Eq. (<ref>) is also subjected to the Neumann condition: σ_ij^porn_j=t_i, on∂Ω_t In addition, the entire calculation domain has an initial phase field ϕ_0=0 and the approach of <cit.> is used to artificially induce a pre-existing crack. 
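To make the strain decomposition, degradation function, and history field above more concrete, the following NumPy sketch evaluates them at a single two-dimensional material point. It is only an illustrative sketch, not the implementation used in this study; the function names and the value of the stability parameter k are ours.
# Minimal NumPy sketch of the spectral strain split, the degraded elastic energy
# density, and the history-field update described above, at one 2D material point.
# Function names and the default k are illustrative, not taken from the paper's code.
import numpy as np

def strain_split(eps):
    """Split the symmetric 2x2 strain tensor into tensile (+) and compressive (-) parts."""
    vals, vecs = np.linalg.eigh(eps)                 # principal strains and directions
    eps_pos = sum(max(v, 0.0) * np.outer(n, n) for v, n in zip(vals, vecs.T))
    eps_neg = sum(min(v, 0.0) * np.outer(n, n) for v, n in zip(vals, vecs.T))
    return eps_pos, eps_neg

def psi_plus_minus(eps, lam, mu):
    """Positive/negative elastic energy densities from the spectral split."""
    tr = np.trace(eps)
    eps_pos, eps_neg = strain_split(eps)
    psi_p = 0.5 * lam * max(tr, 0.0) ** 2 + mu * np.trace(eps_pos @ eps_pos)
    psi_m = 0.5 * lam * min(tr, 0.0) ** 2 + mu * np.trace(eps_neg @ eps_neg)
    return psi_p, psi_m

def degraded_energy(eps, phi, lam, mu, k=1e-9):
    """Total elastic energy density with the [(1-k)(1-phi)^2 + k] degradation of the tensile part."""
    psi_p, psi_m = psi_plus_minus(eps, lam, mu)
    return ((1.0 - k) * (1.0 - phi) ** 2 + k) * psi_p + psi_m

def update_history(H_old, eps, lam, mu):
    """Irreversibility via the history field: H is the running maximum of psi_plus."""
    psi_p, _ = psi_plus_minus(eps, lam, mu)
    return max(H_old, psi_p)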
§.§ Flow field Three parts of the flow domain are distinguished: unbroken domain (reservoir domain) Ω_r(t), fracture domain Ω_f(t), and transition domain Ω_t(t) <cit.>. These three domains are defined by setting two phase field thresholds, c_1 and c_2. The subdomain is an unbroken domain Ω_r(t) if ϕ≤ c_1, but a fracture domain Ω_f(t) if ϕ≥ c_2. In the case of c_1<ϕ<c_2, the subdomain is a transition domain. Given that a sharp fracture is diffused in the PFM, determining the hydraulic and solid parameters in the transition domain is evidently a challenge. For simplicity, many studies such as <cit.> have adopted linear interpolation between the unbroken and fractured domains for the flow field, which has been shown to yield favorable numerical results. The current study uses a linear relationship between the hydraulic and solid parameters of the unbroken and fractured domains. Thereafter, two indicator functions, χ_r and χ_f, are naturally established by applying the phase field ϕ <cit.>: χ_r(·,ϕ)={ 1, ϕ≤ c_1 c_2-ϕ/c_2-c_1 c_1<ϕ<c_2 0, ϕ≥ c_2 .,χ_f(·,ϕ)={ 0, ϕ≤ c_1 ϕ-c_1/c_2-c_1 c_1<ϕ<c_2 1, ϕ≥ c_2 . Note that the indicator functions χ_r and χ_f have also been established in <cit.>, indicating that the fracture pattern is relatively insensitive to the indicator functions. In addition, <cit.> suggested the relation c_1=0.5-m_x and c_2=0.5+m_x, with 0<m_x<0.5. However, this study considers c_1 and c_2 as two independent values. Darcy's law is applied to describe the flow field in the porous media. In the entire domain, mass conservation is expressed as follows <cit.>: ρ S ∂ p/∂ t+∇·(ρ v)=q_m-ραχ_r∂ε_vol/∂ t where ρ, S, v, q_m, and ε_vol are the fluid density, storage coefficient, flow velocity, source term, and volumetric strain of the domain, respectively. By denoting ρ_r and ρ_f as the fluid densities in Ω_r and Ω_f, we have ρ=ρ_rχ_r+ρ_fχ_f; similarly, α=α_rχ_r+α_fχ_f. Given that the Biot coefficient α=1 for the fracture domain, α=α_rχ_r+χ_f, with α_r representing the Biot coefficient in Ω_r. In addition, the volumetric strain is ε_vol=∇· u. Note that we provide one feasible model for coupling the fluid flow with the phase field; more general methods, such as the one proposed by <cit.>, can be applied to provide an improved description of the fluid flow through the porous media. The storage coefficient S can be expressed as follows <cit.>: S=ε_pc+(α-ε_p)(1-α)/K_Vr where ε_p, c, and K_Vr are the porosity, fluid compressibility, and bulk modulus of Ω_r, respectively. Naturally, c=c_rχ_r+c_fχ_f, with c_r and c_f being the respective fluid compressibilities in Ω_r and Ω_f. Note that we set ε_p=1 for Ω_f. Therefore, ε_p=ε_prχ_r+χ_f, with ε_pr representing the porosity of the reservoir domain. Darcy's velocity v is subsequently expressed as v=-K/μ_e(∇ p+ρ g) where K and μ_e represent the effective permeability and fluid viscosity, respectively. K=K_rχ_r+K_fχ_f, with K_r and K_f being the permeabilities of Ω_r and Ω_f, respectively; μ_e=μ_rχ_r+μ_f χ_f, with μ_r and μ_f being the fluid viscosities in Ω_r and Ω_f, respectively; g denotes gravity. Moreover, the method proposed by <cit.> can describe the fluid flow velocity in porous media more generally; the feasibility and advantages of this method in phase field modeling will be examined in future research. Lastly, the following equation, expressed in terms of p, governs fluid flow in Ω: ρ S ∂ p/∂ t-∇·ρ K/μ_e(∇ p+ρ g)=q_m-ραχ_r∂ε_vol/∂ t which is subject to the Dirichlet and Neumann conditions as shown in <cit.>.
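As a small illustration of how the indicator functions and the linearly interpolated flow properties can be coded, consider the Python sketch below; the threshold values c_1 and c_2 and the permeabilities in the example are placeholders, not the values used in this study.
# Illustrative sketch (not the authors' code) of the indicator functions chi_r and
# chi_f and the linear interpolation of properties between the reservoir and
# fracture domains. Default thresholds c1, c2 are placeholder assumptions.
def chi_r(phi, c1=0.4, c2=1.0):
    """Reservoir-domain indicator as a function of the phase field phi."""
    if phi <= c1:
        return 1.0
    if phi >= c2:
        return 0.0
    return (c2 - phi) / (c2 - c1)

def chi_f(phi, c1=0.4, c2=1.0):
    """Fracture-domain indicator (complement of chi_r in the transition zone)."""
    return 1.0 - chi_r(phi, c1, c2)

def interpolate(prop_r, prop_f, phi, c1=0.4, c2=1.0):
    """Effective property, e.g. permeability K = K_r*chi_r + K_f*chi_f."""
    return prop_r * chi_r(phi, c1, c2) + prop_f * chi_f(phi, c1, c2)

# Example: effective permeability at a point with phi = 0.7, assuming
# (hypothetical) K_r = 1e-15 m^2 and K_f = 1e-10 m^2.
K_eff = interpolate(1e-15, 1e-10, 0.7)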
§ NUMERICAL IMPLEMENTATION Finite element method (FEM) is used to solve the governing equations of the multi-field fracture problem in the porous media. Therefore, weak-form formulations are required <cit.> with a modified stiffness matrix to solve the displacement field. The implicit Generalized-α method <cit.> is used to discretize the time domain and unconditionally enhance numerical stability. In addition, we subsequently solve the three fields in a staggered manner. That is, solving each field is independent in one time step. Note that we couple u and p and solve them in one staggered step. Table <ref> presents the flowchart of the staggered scheme for the hydraulic fracture propagation in naturally-layered porous media. The Newton-Raphson iteration method is used <cit.> in each segregated step. § VALIDATION OF PFM FOR HYDRAULIC FRACTURE The fluid-driven fracture in a permeable solid is affected by competing dissipative processes owing to fluid viscosity and medium toughness <cit.>. Therefore, we check the fracture propagation and fluid pressure in the following cases <cit.> to further validate the used phase field method: * Toughness-dominated regime. The porous medium is impermeable and fracturing expends substantially higher energy than the viscous dissipation. * Viscous-dominated regime. The porous medium is impermeable and energy dissipation in fracture propagation is considerably lower than the viscous dissipation. * Leak-off toughness-dominated regime. The porous medium is permeable and more fluid is stored in the porous medium than in the fracture. In addition, fracturing expends significantly higher energy than the viscous dissipation. A 60 m × 30 m rectangular domain is considered with an initial injection notch of 1.2 m × 0.2 m in the domain center. After being modeled through the initial history field, the initial notch has a fluid injection rate of Q_f. All the outer boundaries of the domain are permeable with p=0 and fixed in the displacement. The parameters listed in Tables <ref> and <ref> are used in the simulation. Note that these parameters correspond to those used in <cit.> for different fluid regimes. We discretize the domain with uniform quadrilateral elements with the maximum element size of h=0.2 m. In all simulations, a time increment Δ t= 0.025 s is used. Figures <ref> and <ref> show the evolution of half-length of the fracture and fluid pressure at the fracture center for the toughness-dominated, viscous-dominated, and leak-off dominated regimes. The two figures also compare the results obtained by PFM and those by using the analytical approach presented by <cit.>. Figures. <ref> and <ref> show that compared with the analytical method, the used PFM reproduces consistent trends in the evolution of the half-length of the fracture and mid-point fluid pressure. Multiple reasons account for the differences in the actual values obtained by both methods. First, the used PFM aims for a permeable porous medium, whereas the analytical approach of <cit.> is for an impermeable medium. Despite using parameters that approximate an impermeable solid, this important difference cannot be completely eliminated in our phase field simulation owing to the inevitable fluid penetration. Second, the fracture in the phase field simulation evolves from an initial notch with a length of 1.2 m. Therefore, the half-length of the fracture starts from 0.6 m, and an evident ascending stage is observed before the fracture initiation in the fluid pressure-time curves. 
However, the analytical method assumes a point fluid injection and a zero initial fracture length, and the rising stage before fracture initiation is therefore disregarded. Moreover, the used PFM and the analytical method include different flow models. In particular, for the leak-off toughness-dominated regime, the leak-off term in the PFM is related to the volumetric strain of the domain, while a fixed leak-off coefficient of 5×10^-4 m/s^1/2 is used in the analytical method according to <cit.>. Another reason is that the PFM smears the fracture over a finite width, while the analytical method treats the fracture in a discrete setting. § SOFT-TO-STIFF CONFIGURATION §.§ Geometry and boundary conditions We investigate the hydraulic fracture pattern in two porous layers, which is a fairly typical setting in engineering geology, rock engineering, and oil/gas exploration. The setup for the two-media problem is described in Fig. <ref>; the center of the analysis domain is set as the origin of the coordinate system. 2D simulations are performed and we label the two layers 1 and 2, respectively. A pre-existing notch is set in layer 1, with its position and length also shown in Fig. <ref>. The width of the pre-existing notch is l_0, and fluid is injected into the notch with the source term q_f= 10 kg/(m^3·s). In addition to the boundary conditions shown in Fig. <ref>, the left boundary of layer 1 is impermeable. The interface of the two layers has an inclination angle of θ. We also test the influence of the inclination angle and choose θ = 0^∘, 15^∘, and 30^∘. Note that although our simulations are performed at a relatively small scale that corresponds to the XFEM simulations of <cit.>, the phase field simulation can easily be extended to large-scale problems by adjusting the geometry, mesh size, and other relevant parameters. A total of twelve cases are examined using the various values of Young's modulus and G_c listed in Table <ref>. The Young's modulus E and G_c of the two layers are denoted as E_1, E_2, G_c1, and G_c2. Note that a stiffer layer exhibits a larger E and G_c than the softer layer. The Poisson's ratios of the two layers are the same (i.e., ν_1=ν_2=0.3). The parameter setting in this paper is similar to that of <cit.>, where the fracture toughness k_I is specified, whereas G_c is used in the PFM. Note that the particular values chosen for the Young's modulus and critical energy release rate do not alter the fracture patterns in the layered domain depicted in Fig. <ref>. The other parameters of the two layers used in the calculations are listed in Table <ref>. In real rock formations, the contrast in Young's modulus and tensile strength between soft and hard layers is often a factor of a few, as in the fracture studies of <cit.>. Therefore, the following cases, with various combinations of material parameters for the soft and hard layers, represent fairly typical situations; for example, a contrast of about 3 in Young's modulus was used in <cit.>. The computation is performed on the finite element meshes shown in Fig. <ref>. In most of the domain we use linear triangular elements with a maximum size of h=8 mm. In addition, we restrict the mesh size in the region where fractures may initiate and propagate to h= 2 mm. The pre-existing notch is established through the initial history field H_0 and the notch width is fixed at 4 mm. In the simulation, a time step Δ t = 0.05 s is used.
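Before turning to the results, the staggered solution procedure described in the numerical implementation section can be summarized by the schematic Python driver below. The three solver arguments stand for the FEM subproblem solves (each containing its own Newton-Raphson iteration) and do not correspond to any specific library; the sketch only fixes the order of operations within a time step.
# Schematic driver for the staggered scheme: within each time step the coupled
# displacement-pressure problem and the phase field problem are solved sequentially.
# The solver callables are placeholders, not a real library API.
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class State:
    u: Any = None      # nodal displacements
    p: Any = None      # nodal fluid pressures
    phi: Any = None    # nodal phase field
    H: Any = None      # history field (max tensile elastic energy density)

def staggered_march(t_end: float, dt: float, state: State,
                    update_history: Callable[[State], Any],
                    solve_u_p: Callable[[State, float], Tuple[Any, Any]],
                    solve_phi: Callable[[State], Any]) -> State:
    t = 0.0
    while t < t_end:
        state.H = update_history(state)          # enforce irreversibility
        state.u, state.p = solve_u_p(state, dt)  # Biot poroelasticity, phi held fixed
        state.phi = solve_phi(state)             # phase field evolution, u and p held fixed
        t += dt
    return state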
§.§ Fracture pattern The settings of the numerical model are classified into two types, namely, soft-to-stiff configuration (Cases 1 to 6) and stiff-to-soft configuration (Cases 7 to 12). Note again that only 2D simulations are performed because 3D cases are time-consuming and unsuitable for analysis on multiple influencing factors. In addition, gravity is disregarded for simplicity, while x- and y- axes correspond to the horizontal and vertical directions, respectively. Figure <ref> depicts the hydraulic fracture patterns for E_2=2E_1 at t = 200 s, in which the fractures completely propagate and their paths are evident. When the inclination angle θ = 0^∘, the penetration scenario is shown. The fracture propagates perpendicular to the interface between two layers of rock and penetrates deep into the layer 2. The reason is that the stiffness of the layer 2 is insufficiently large to depress the fracture across the layer interface. The increasing fluid pressure increases the tensile stress around the fracture tip. Thereafter, the effective stress becomes higher than the tensile strength depending on the Young's modulus, fracture toughness, and length scale in the stiffer layer at the layer interface according to <cit.>. Therefore, the fracture cannot be prevented. A singly-deflected scenario is observed when the inclination angle θ= 15^∘ and 30^∘. At an inclination angle, the fracture propagation is blocked when it reaches the layer interface. The fracture is arrested by the interface and then propagates along the interface towards the bottom boundary of the domain because of the increasing fluid injection. The singly-deflected scenario results from the fact that the fluid pressure-induced stress rotates at the layer interface in a complex manner. In this case, the decomposed effective stress exceeds the tensile strength along the interface more easily than that in the second layer. Figure <ref> presents the hydraulic fracture patterns for E_2=4E_1 at different times. When E_2=4E_1, the fracture patterns are not the same as those for E_2=2E_1. The doubly-deflected scenario is observed when θ = 0^∘ and 15^∘. The stiffness of the layer 2 is sufficiently large and the increasing elastic energy cannot support a fracture penetration into the second layer. The tensile strength of the layer 2 is approximately four times that of the layer 1 according to <cit.>. The fracture branching is a natural result of the evolution equation of the phase field after the fracture reaches the layer interface. Given different inclination angles, the time for a fracture propagates to the same distance is relatively different. Therefore, the time point displayed is 100 s in Fig. <ref>a and <ref>b and 63.02 s in Fig. <ref>c. The branched fractures are symmetrical for θ = 0^∘ but asymmetric for θ = 15^∘. The lower fracture is longer than the upper fracture because of the asymmetry of the layer interface along the vertical direction. For the inclination angle θ =30^∘, the fracture pattern is similar to that of E_2=2E_1 in Fig. <ref>c. Only the singly-deflected scenario is shown and the propagating fracture moves to the bottom boundary while the case of E_2=4E_1 produces a relatively large fracture propagation velocity. In addition, Figs. <ref> and <ref> show a narrower fracture after the fracture penetrates or deflects because of unavoidable fluid penetration in the fractures formed at a relatively small fluid pressure level with normal extension of damage. 
§.§ Effective maximum stress In the soft-to-stiff configuration, the effective maximum stress distributions for E_2=2E_1 and E_2=4E_1 are displayed in Figs. <ref> and <ref>, respectively. Note that the effective maximum stress in this study is defined as max(σ_1,0), with σ_1 being the effective first principal stress. In addition, the region with ϕ≥0.95 is removed from Figs. <ref> and <ref> to show an improved fracture shape. The stress distributions coincide with the fracture patterns. The stress concentration is observed only around the fracture tip for the penetration and doubly-deflected scenarios. However, stress for the singly-deflected scenario also concentrates in the area where the fracture deflects, except the fracture tip. Note that the stress values in Figs. <ref> and <ref> are realistic because stress singularity exists at a fracture tip and the stress value theoretically approaches infinity based on linear elastic fracture mechanics. However, the phase field in a PFM smears the sharp fracture. Therefore, the tensile stress value at the fracture tip cannot be infinity. By contrast, this stress decreases as the length scale l_0 increases. §.§ Displacement field We also investigate the vertical displacement distributions of the calculation domain for E_2=2E_1 and E_2=4E_1 at different time t. In the simulations, the vertical displacement is relatively large around the fracture region and the displacement increases with fracture propagation. We set a straight monitoring path L1 with its starting point (-2 m, 0.02 m) and ending point (2 m, 0.02 m). That is, the vertical coordinate of the path L1 is identical to that of the upper edge of the pre-existing notch. Figure <ref> shows the vertical displacement along path L1 for E_2=2E_1 and E_2=4E_1 when the fracture reaches the layer interface. The displacement pattern is similar to those in <cit.>. The vertical displacement decreases along the direction of the fracture propagation, which is consistent with the analytical solution of <cit.>. In addition, the increase in the inclination angle θ slightly decreases the vertical displacement. The reason is that a larger inclination angle results in a larger stiffer layer volume in the direction normal to the propagating fracture, thereby restricting the upward deformation of the softer layer. Figure <ref> shows the maximum vertical displacement on the upper left edge of the domain when the time increases. In the soft-to-stiff configuration, the maximum displacement increases rapidly in the first 50 s (E_2=2E_1) and 25 s (E_2=4E_1). Thereafter, the maximum displacement increases at a low rate because the fracture propagation is depressed by the stiffer layer 2. In particular, the inclination angle θ=30^∘ achieves a relatively small maximum vertical displacement. Another reason for the second stage with a small increase rate in the vertical displacement is fracture deflection at the layer interface. In addition, slight displacement fluctuation is observed in the XFEM simulation of <cit.>. §.§ Fluid pressure Figures <ref> and <ref> show the fluid pressure field for E_2=2E_1 and E_2=4E_1. The fluid pressure field is consistent with the phase field while the maximum pressure appears in the hydraulic fractures. Figure <ref> shows the fluid pressure with the increasing time. The data is picked at the point (-2 m, 0). The fluid pressure has a similar trend with the maximum vertical displacement in Fig. <ref>. 
Note that the fluid pressure increases rapidly in the first 40 s (E_2=2E_1) and 20 s (E_2=4E_1). Thereafter, the increasing rate of the fluid pressure decreases. When θ=0^∘ and 30^∘, the fluid pressure-time curves are similar. However, the inclination angle θ=30^∘ obtains a relatively low fluid pressure. The pattern of fluid pressure mainly results from the fracture pattern, soft-to-stiff setting, and relatively low porosity used in the simulation. § STIFF-TO-SOFT CONFIGURATION §.§ Fracture pattern This section presents the numerical results for the stiff-to-soft configuration (E_1=2E_2 and E_1=3E_2). Figures <ref> and <ref> represent the hydraulic fracture patterns for E_1=2E_2 and E_1=3E_2 at different time t, respectively. Only the penetration scenario is obtained in the stiff-to-soft configuration, which is different from the observations in the soft-to-stiff configuration. Singly-deflected, doubly-deflected, and penetration scenarios can be all simulated in the soft-to-stiff configuration. The fracture penetration scenario in the stiff-to-soft configuration is a natural result calculated from the evolution equation of the phase field, and the fracture penetration pattern is formed because the softer layer 2 has a lower tensile strength than the stiffer layer 1 according to <cit.>. Figures <ref> and <ref> also show that the fracture width increases when the fracture penetrates into the softer layer 2. By comparing Figs. <ref> and <ref>, it is observed that the fracture in the layer 2 has a larger width when the stiffness of the layer 2 is smaller. The inclination angle also influences the fracture pattern. When θ=0, the fractures in the layers 1 and 2 propagate horizontally. However, when θ=15^∘ and 30^∘, the hydraulic fracture deflects slightly after it crosses the layer interface. The hydraulic fracture in the layer 2 intersects the fracture in the layer 1 at a small angle, which increases slightly as θ increases. §.§ Effective maximum stress In the stiff to soft configuration, the effective maximum stress distributions for E_1=2E_2 and E_1=3E_2 at different time t are shown in Figs. <ref> and <ref>, respectively. The stress distributions coincide with the fracture patterns. The stress concentration is observed only around the fracture tip owing to the penetration scenario. In addition, the effective maximum stress around the fracture tip decreases when the hydraulic fracture propagates into the layer 2. The difference between the stress in the two layers 1 and 2 reflects again the difference in the tensile strength for resisting fracturing. §.§ Displacement field The vertical displacement distributions of the calculation domain for E_1=2E_1 and E_1=3E_2 at different time t is investigated. When hydraulic fracture propagates in the stiffer layer 1, the maximum vertical displacement occurs around the fluid injection region and the displacement increases along with the fracture propagation. However, increasing vertical displacement exists around the fracture domain in the softer layer 2 when the hydraulic fracture penetrates deep into the layer 2. Figure <ref> shows the vertical displacement along the path L1 for E_1=2E_2 and E_1=3E_2 when the fracture reaches the layer interface. The displacement pattern is similar to those in the soft to stiff configuration. The vertical displacement decreases along the direction of fracture propagation while the displacement increases slightly as the inclination angle θ increases. 
Figure <ref> shows the maximum vertical displacement on the upper left edge when time increases in the stiff-to-soft configuration. As shown in Fig. <ref>, the maximum displacement increases at a relative small rate in the first 65 s. Thereafter, when the hydraulic fracture reaches the layer interface and propagates deep into the layer 2, the maximum displacement increases at a relatively large rate. In addition, the curves of the maximum displacement versus time are minimally affected by the inclination angle θ. §.§ Fluid pressure Figure <ref> shows the fluid pressure when the time increases in the stiff-to-soft configuration. The data is also picked at the point (-2 m, 0). The fluid pressure has a similar trend to that in the soft-to-stiff configuration in Fig. <ref>. The fluid pressure increases rapidly in the first 65 s. Thereafter, the increasing rate of the fluid pressure decreases and the pressure is nearly stable. Furthermore, for the stiff-to-soft configuration, the second stage is observed to be considerably shorter than the soft-to-stiff configuration. The inclination angle θ has minimal effect on the fluid pressure-time curve. §.§ Guidance for HF practice Figure <ref> summarizes all the fracture patterns at the layer interface for different E_2/E_1, inclination angle θ, and the data in a single porous layer <cit.> (E_2=E_1). On the basis of the relationship between E_2/E_1 and θ, three regions formed in this figure represent the three fracture patterns–penetration, singly-deflected, and doubly-deflected scenarios. Note that a low E_2/E_1 or a medium E_2/E_1 with a low θ produces the penetration scenario. A high θ with a medium or high E_2/E_1 corresponds to the singly-deflected scenario, while only a high E_2/E_1 with a low θ will form the doubly-deflected scenario. Therefore, the phase field model and numerical investigation in this research can easily reflect the influence of layer stiffness and interface angle on fracture patterns. In addition, the summary figure, which will be further improved by using PFM in future studies, can be applied for unconventional HF in shale gas development and to study safety and environmental concerns and efficiency issues in current HF practices. For example, if the elastic parameters of two neighboring layers are known, then the perforation direction in HF can be optimized according to Fig. <ref> to avoid fracture penetration into the neighboring layer due to the potential risks for water contamination and stability of the geological system. § CONCLUSIONS This study applies a phase field framework to examine hydraulic fracture growth in naturally layered geological formations. The total energy functional used fully includes the influence of fluid pressure, and the fracture pattern is the natural result of minimization of the energy functional. In addition, we consider the soft-to-stiff and the stiff-to-soft configurations where the layer interface exhibits different inclination angles. Therefore, the relationship between the phase field (fracture pattern) and the stiffness contrast and inclination angle of the geological formations, is revealed. The numerical investigation in this study supports the following: (1) The huge advantage of the phase field framework over the X-FEM framework lies in the fact that penetration criteria are also not required when hydraulic fractures reach the material interfaces. In addition, the phase field implementation does not require tracking the fracture paths algorithmically. 
(2) Penetration, singly-deflected, and doubly-deflected fracture scenarios can be predicted using PFM. In the soft-to-stiff configuration, the simulations exhibit penetration or symmetrical doubly-deflected scenarios when the pre-existing fracture is perpendicular to the layer interface. When the interface angle is 15^∘, singly-deflected or asymmetric doubly-deflected scenarios are obtained. Only the singly-deflected scenario is obtained for the interface angle of 30^∘. (3) In the stiff-to-soft configuration, only the penetration scenario is obtained with widening fractures when the hydraulic fractures penetrate into the softer layer. The fracture in the softer layer deflects at a small angle with the fracture in the stiffer layer and the angle increases as the layer interface angle increases. Note that our study includes a perfect bonding at the layer interface, which is not always the case in geological settings <cit.>. Therefore, in future studies, layer interfaces with weak bonding and methods on fixing the interface parameters from the neighboring layers should be involved in the PFM on fracture propagation in layered formations. Laboratory experiments on fractures along weakly-bonded interfaces <cit.> will also be used for further validation of PFM. Another limitation of PFM is that the direct coupling of the permeability and crack opening is difficult to apply because of the smeared representation of fracture. In this sense, future PFMs should introduce substantially accurate permeability models for fractured porous domains. § ACKNOWLEDGMENT The authors gratefully acknowledge the financial support provided by the Natural Science Foundation of China (51474157), and the RISE-project BESTOFRAC (734370).
http://arxiv.org/abs/2307.07486v1
20230714171212
Global sensitivity analysis in the limited data setting with application to char combustion
[ "Dongjin Lee", "Elle Lavichant", "Boris Kramer" ]
math.NA
[ "math.NA", "cs.NA" ]
http://arxiv.org/abs/2307.06168v2
20230712135010
A comparative study of different approaches for heavy quark energy loss, based on the latest experimental data
[ "Marjan Rahimi Nezhad", "Fatemeh Taghavi Shahri", "Sharareh Mehrabi Pari", "Kurosh Javidan" ]
hep-ph
[ "hep-ph", "nucl-th" ]
[email protected] [email protected] (Corresponding author) [email protected] [email protected] ^(1)Department of Physics, Ferdowsi University of Mashhad, P.O.Box 1436, Mashhad, Iran This paper presents a comparative analysis of three distinct methods used to calculate the collisional energy loss of heavy quarks in the Quark-Gluon Plasma. The study focuses on the calculation of the nuclear suppression factor of charm quarks in Pb-Pb collisions at √(S_NN) = 5.02 TeV. All three models are examined using the same numerical evolution based on the well-known Fokker-Planck equation, taking into account critical phenomena such as the non-equilibrium state at the onset of a heavy-ion collision. The outcomes of each approach are compared with the latest data from the ALICE and ATLAS experiments spanning 2018 to 2022. This study aims to compare the degree of agreement between each approach and the recently obtained experimental data in the intermediate and high P_T regions. 12.38.Bx, 12.39.-x, 14.65.Bt
§ INTRODUCTION The interaction between quarks and gluons is described by Quantum Chromodynamics (QCD), in which quarks act as constituents of hadrons and gluons act as the mediating gauge bosons <cit.>. Two prominent features of QCD are asymptotic freedom and confinement. Asymptotic freedom means that the interaction between quarks is weak when they are close to each other; as the quarks move apart, the force between them grows stronger, which leads to confinement. As a result of asymptotic freedom, when matter reaches extremely high temperatures and/or densities, the strong interaction weakens, and quarks and gluons are freed from each other. In other words, at high temperatures and/or densities hadrons melt and the degrees of freedom of matter become quarks and gluons. In this state, a fluid called the Quark-Gluon Plasma (QGP) is formed <cit.>. Studies indicate that matter was in the plasma phase until a few microseconds after the Big Bang, after which a phase transition occurred. Therefore, studying the transition from the quark phase to the hadron phase and investigating the properties of the quark-gluon plasma are very important for understanding the evolution of the early universe. Furthermore, matter in heavy cosmic bodies like neutron stars may exist in this state due to its extremely dense composition. Hence, this is a significant issue in astrophysics as well <cit.>. QGP was first theorized in the 1970s; its existence was later confirmed by experiments at the Relativistic Heavy Ion Collider (RHIC) and subsequently at the Large Hadron Collider (LHC). These experiments involve colliding heavy ions, such as gold or lead, at very high energies, creating a hot and dense environment that allows quarks and gluons to move freely and interact strongly with each other <cit.>. This highly excited state of matter, whose main constituents are light quarks and gluons, displays properties similar to a nearly perfect fluid and can be successfully described by hydrodynamic models <cit.>.
Two different conditions are needed to describe the QGP by hydrodynamics; the first one is that the system should have a local thermal equilibrium for a sufficient period of time, and the second one is that the scale of interactions (or mean free distance of particles) should be much smaller than the dimensions of the system. Hydrodynamics could be considered a macroscopic effective field theory that has the ability to investigate the evolution of non-equilibrium systems. Heavy quarks such as b and c quarks play an essential role in studying the properties of Quark-Gluon Plasma created in heavy-ion collisions <cit.>. These quarks are formed in the early stages of the collision. Due to their large mass, they reach equilibrium with the environment later and may even leave the plasma without reaching equilibrium. So they are good witnesses of the whole space-time history of the deconfined medium. In order to study the evolution of heavy quarks in QGP, a possible approach is to examine the time evolution of their distribution functions in the transverse momentum plane. It is reasonable to assume that these heavy particles in a non-equilibrium state undergo Brownian motion within a heat bath that is in thermodynamic equilibrium. The Fokker-Planck equation can be used to obtain the temporal evolution of the transverse momentum spectrum of heavy quarks. When heavy quarks pass through the plasma, they interact with the QGP constituents and lose energy through radiation and elastic collisions. The energy loss of these quarks provides information about the properties of the QGP, such as its temperature and viscosity. In addition, the study of heavy quark energy loss is important for understanding the mechanism of jet quenching, which is the suppression of high-energy partons in the QGP. In this article, we are going to investigate the evolution of the charm quark distribution function in Pb-Pb collision at √(S_NN)= 5.02 TeV. In our evolution process, we consider different approaches for collisional energy loss, along with radiation energy loss. Eventually, by calculating the nuclear modification factor, R_AA, we are able to compare theoretical results with the most recent experimental data from LHC, in order to determine which method of energy dissipation is most compatible with new experimental data. This paper is organized as follows: In section (<ref>), we review the Fokker-Planck equation and the evolution of the QGP system. We also introduce different methods of energy loss that we have examined. Section (<ref>) is focused on calculating the nuclear suppression factor and presenting our theoretical results. Our R_AA results for each energy loss model are compared with new data from ATLAS and ALICE. Finally, the conclusion is given in Section (<ref>). § METHODS In this section, we describe the details of our modeling framework. To begin with, it would be helpful to review the stages of quark-gluon plasma formation. Heavy-ion collisions pass through various stages from collision to hadronization. The collision initially produces a fireball of quarks and gluons known as quark-gluon plasma. After a while, the system quickly reaches local thermodynamic equilibrium, and high-energy partons lose energy through passing the plasma. As the system continues to expand and cool, all interactions stop and the system reaches the freeze-out temperature (T_f). In this state, dynamic information remains constant and hadrons are formed. 
§.§ System evolution To study the QGP system, various models could be used to calculate the time evolution of dynamic parameters such as temperature and viscosity. Here we consider the time dependence of temperature as follows <cit.>: T(τ) = T_0(τ_0/τ)^1/3[1+2/3τ_0T_0η/s(1-(τ_0/τ)^2/3)] where T_0 and τ_0 are the initial temperature and proper time and η/s is the viscosity to entropy ratio which has been taken from <cit.>. Also using a temperature-dependent function for the running coupling α_s(T) is essential because the temperature is a critical scale that controls the QCD coupling in the QGP system <cit.>: α_s(T) = 6π/(33-2N_f)ln(19T/Λ_MS) where we assume N_f=3 as the number of active flavors in the QGP and the QCD cut-off parameter has been taken as Λ_MS= 80 MeV. The heavy quark evolution in the QGP system can be described by two different approaches: the Langevin transport equation and the Fokker-Planck equation. In this article, we will employ the Fokker-Planck equation which is a simplified form of the Boltzmann equation. The Fokker-Planck equation provides a suitable framework for investigating the temporal evolution of heavy quarks <cit.>. This equation was first introduced by Fokker and Planck to explain the Brownian motion of particles in a fluid. According to this equation, the temporal evolution of the distribution function of heavy quarks is given by: ∂/∂ tf(p,t) = -∂/∂ p[A(p)f(p,t)] + ∂^2/∂ p^2[D(p)f(p,t)]. To solve this equation, we require three input parameters: The initial distribution function of heavy quarks (f_in(p,t)), drag coefficient (A(p)), and diffusion coefficient (D(p)). To derive an analytical solution for the Fokker-Planck equation, we assume that the drag and diffusion coefficients are momentum-independent. This assumption is reasonable since the time dependence of these coefficients arises from temperature fluctuations, which are time-dependent. The drag and diffusion coefficients are determined by the non-equilibrium energy dissipation of particles in a thermal environment. The energy dissipation of heavy quarks in a plasma environment occurs through two main processes: (1) collisions with other particles and (2) gluon bremsstrahlung or radiation due to interactions of heavy quarks with other quarks, anti-quarks, and gluons present in the thermal bath. Therefore, the drag coefficient can be obtained using the following relation <cit.>: A(p) = -1/pdE/dL while we consider energy loss in both cases: dE/dL= (dE/dL)_coll+ k (dE/dL)_rad The value of k is uncertain and needs to be determined through an optimization process. This value shows the impact of radiation term on the results. The diffusion coefficient can be determined using Einstein's relation when there is a weak coupling between the heavy quark and the thermal bath <cit.>: D(p) = T A(p) E T represents the temperature of the thermal bath and E represents the energy of heavy quarks. Note that the drag coefficient carries information about the dynamics of heavy quarks collisions with the medium and is expected to be determined by the properties of the thermal bath. Therefore, the most critical point for finding the time evolution of the HQ distribution function is calculating the drag force acting on the HQ or the corresponding rate of energy loss per unit distance of the HQ path in QGP. One should use a gauge invariant field theory that does not have any infrared divergence to properly account for thermal effects and obtain accurate outcomes for these values. 
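To illustrate how these ingredients combine into the transport coefficients, the following Python sketch evaluates the quoted temperature profile and running coupling and, as one example, the collisional energy loss of Model A (written out explicitly in the appendix), from which the drag and diffusion coefficients follow. The initial temperature, the η/s value, and the unit conventions in the sketch are illustrative assumptions rather than the fitted values of this work, and the radiative term with its weight k is omitted for brevity.
# Hedged numerical sketch of the ingredients entering the drag and diffusion
# coefficients: T(tau), alpha_s(T), and Bjorken's collisional loss (Model A).
# Natural units (GeV) are assumed; T0 and eta/s below are placeholder values.
import numpy as np

HBARC = 0.1973  # GeV*fm, used to make tau0*T0 dimensionless (unit assumption)

def temperature(tau, tau0=0.33, T0=0.55, eta_over_s=1.0 / (4.0 * np.pi)):
    """Cooling law quoted in the text (tau in fm/c, T in GeV); T0 and eta/s are illustrative."""
    pref = (tau0 / tau) ** (1.0 / 3.0)
    visc = (2.0 / (3.0 * tau0 * T0 / HBARC)) * eta_over_s * (1.0 - (tau0 / tau) ** (2.0 / 3.0))
    return T0 * pref * (1.0 + visc)

def alpha_s(T, Nf=3, Lambda_MS=0.080):
    """Temperature-dependent coupling as quoted in the text (T, Lambda in GeV)."""
    return 6.0 * np.pi / ((33.0 - 2.0 * Nf) * np.log(19.0 * T / Lambda_MS))

def bjorken_dEdx(p, T, Nf=3):
    """Collisional energy loss of Model A; returns -dE/dx, i.e. a positive loss rate."""
    a = alpha_s(T)
    mg2 = 4.0 * np.pi * a * T ** 2 * (1.0 + Nf / 6.0) / 3.0
    kD = np.sqrt(3.0 * mg2)
    return (16.0 * np.pi / 9.0) * a ** 2 * T ** 2 \
        * np.log(4.0 * p * T / kD ** 2) * np.exp(-kD / T) * (1.0 + kD / T)

def drag_and_diffusion(p, T, M=1.27):
    """Drag A(p) = -(1/p) dE/dL and Einstein-like relation D = T*A*E for a charm quark."""
    E = np.sqrt(p ** 2 + M ** 2)
    A = bjorken_dEdx(p, T) / p
    D = T * A * E
    return A, D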
By calculating energy loss, we have all parameters to solve the FP equation and study HQ evolution from the onset of plasma formation to reaching the critical temperature and hadronization. §.§ Energy loss approaches The asymptotic freedom of QCD implies that, for a quark-gluon plasma at a sufficiently high temperature, the rate of energy loss dE/dx can be calculated using perturbation theory based on the running coupling constant α_s(T). Unfortunately, it is not possible to compute dE/dx directly by evaluating the tree-level Feynman diagrams for scattering off of thermal quarks and gluons in the plasma. There are different divergences due to the long-range interactions mediated by the gluon. Indeed, gluon exchange diagrams give rise to logarithmically infrared divergent integrals over the momentum transfer q of the gluon. In this study, we compare three different approaches to calculate collisional energy loss for heavy quarks in the QGP. Each of these approaches has addressed the divergence problem in its own way. The Fokker-Planck equation is employed to investigate each approach to obtain the evolution of HQ distribution functions from the time of plasma equilibration to the hadronization. The R_AA plot is utilized to compare the degree of agreement of each approach with the latest experimental data. The first approach for collisional energy loss (Model A) has been calculated by Bjorken <cit.>. It is indeed the first calculation of the heavy quark energy loss due to QGP-HQ interaction. He calculated the energy loss of a massless quark due to elastic scattering off of the QGP constituents by averaging the cross section multiplied by the mean energy transfer over the thermal distribution. Infrared divergences were cut off by hand at a reasonable scale. The second approach (Model B), proposed by Thoma and Gyulassy <cit.>, combines techniques of plasma physics with high-temperature QCD <cit.> in order to calculate collisional energy loss. Through this method, dE/dx is computed using the induced chromoelectric field in the wake of a high-energy quark. That reduced field is related to the longitudinal and transverse dielectric functions, which in turn, can be expressed in terms of the gluon self-energy. An advantage of this approach is its ability to automatically regulate infrared singularities through the Debye mass. The last approach studied in this research (Model C) is proposed by Braaten and Thoma <cit.>, which includes calculating the energy loss of a quark with energy E in two different limits: E ≪M^2/T and E ≫M^2/T. In this method, soft and hard contributions to the energy loss are calculated separately and added together. This approach utilizes the hard-thermal loop (HTL) framework <cit.>. The radiative energy loss of a quark has been calculated using the proposed model in Ref. <cit.>. This formalism has been constructed by considering the reaction operator formalism (DGLV) and employing the generalized dead cone approach <cit.>. The DGLV approach <cit.> relies on expanding the quark energy loss based on the number of scatterings encountered by the quark as it moves through the medium. The single hard scattering limit considers only the leading order term. See the appendix for more details. §.§ Nuclear modification factor Quark-gluon plasma formation cannot be directly observed in the laboratory because the formed matter, quickly cools down and has a very short lifetime (on the order of 10^-23 seconds). 
What is observed and recorded by detectors are only photons, leptons, and stable final hadrons. Therefore, we need measurable quantities that are dependent on the characteristics of the initial stages of the system to obtain information about the early stages of plasma formation. Here, we introduce one of the most important signals of plasma formation which is the nuclear suppression factor (R_AA). This quantity represents the ratio of the number of electrons produced from semi-leptonic decay of mesons per unit rapidity and transverse momentum in nuclear-nuclear collisions to the same value in the proton-proton collisions <cit.>: R_AA(p_T) = (dN^e/dP_T^2 dy)^A-A/N_coll×(dN^e/dP_T^2 dy)^p-p The term "N_coll" in the denominator represents the number of nucleon-nucleon collisions in nucleus-nucleus collisions and can be estimated via Glauber model calculations <cit.>. The nuclear modification factor quantifies the amount of energy lost during nucleus collisions due to heavy quarks' transportation in the partonic medium. When there is no creation of the quark-gluon plasma, the nuclear modification factor is equal to one, which signifies the non-existence of a novel medium. However, if the value of this factor is less than one, it reveals the interaction between high-energy jets and the thermal environment formed during the collision of energetic nuclei. It should be noted that what is observed in detectors are electrons produced from the decay of D and B mesons. Therefore, to obtain more precise results, the corresponding hadron distribution functions can be derived by utilizing a suitable fragmentation function on the output of the FP equation. Although, the application of the fragmentation function to the final result has a negligible impact and can be ignored <cit.>. In this work, the partonic distribution functions are directly divided by each other to calculate the nuclear suppression factor. Adding a scaled factor would result in the following outcome: R_AA = 1/Nf_f(P_T)^A-A/f_i(P_T)^P-P § RESULTS AND DISCUSSION In this section, we calculate the nuclear suppression factor of the charm quark in a Pb-Pb collision at a center-of-mass energy of 5.02 TeV. The parton distribution functions required to perform this calculation are evaluated using the Fokker-Planck (FP) equation. FP equation is solved numerically at third-order relativistic hydrodynamics <cit.> until the fireball cools down to its freeze-out temperature. The evolution of HQ distributions is calculated from initial proper time τ_0=0.33 fm/c to thermal freeze-out T_c=155 Mev for LHC <cit.>. The initial transverse momentum distribution of c quark is obtained from <cit.>. To compute drag and diffusion coefficients, besides considering radiation energy loss, we employ three distinct approaches which previously introduced for evaluating collisional energy loss. The outcomes of each approach are compared with the most recent data from ALICE and ATLAS in 2018, 2021 and 2022 <cit.>. Our purpose is to determine which approach is most consistent with experimental results. The final result is optimized to fit on experimental data by adjusting initial parameters such as k and N, and minimizing the unweighted Chi-squared value: χ^2=∑_i (R_AA^exp (P_T(i))-R_AA^th (P_T(i))^2/σ_i^2 R_AA^exp and R_AA^th are experimental and theoretical predictions for suppression factors, respectively, and σ is related to experimental error. 
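A minimal sketch of how the scaled nuclear modification factor and this unweighted χ^2 can be evaluated and minimized over the free parameters N and k is given below. The spectra_model function is a hypothetical placeholder for the Fokker-Planck evolution, and scipy.optimize is used only as a stand-in for the optimizer employed in the actual analysis.
# Illustrative sketch (not the analysis code used here) of evaluating the scaled
# R_AA and the unweighted chi-squared, and minimizing it over N and k.
import numpy as np
from scipy.optimize import minimize

def R_AA(f_final_AA, f_initial_pp, N):
    """Scaled ratio of the final (evolved, A-A) to the initial (p-p) charm spectrum."""
    return f_final_AA / (N * f_initial_pp)

def chi2(params, pT, raa_exp, sigma_exp, spectra_model):
    """Unweighted chi-squared between model and data; spectra_model(pT, k) is
    assumed to return the evolved and the initial spectra for a given k."""
    N, k = params
    f_final, f_init = spectra_model(pT, k)
    raa_th = R_AA(f_final, f_init, N)
    return np.sum((raa_exp - raa_th) ** 2 / sigma_exp ** 2)

# Example call with hypothetical data arrays and a user-supplied spectra_model:
# result = minimize(chi2, x0=[1.0, 0.5],
#                   args=(pT, raa_exp, sigma_exp, spectra_model),
#                   method="Nelder-Mead")
# N_best, k_best = result.x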
We use the Minuit package <cit.> for our parameter optimization process, which is a powerful tool that enables us to achieve high accuracy in minimizing chi-squared values. We present our results for the nuclear modification factor in Fig.(<ref>) to Fig.(<ref>) over a wide range of P_T for all the three energy loss approaches. These results are compared with ATLAS 2022 data and ALICE data in 2018, 2021, and 2022. In addition, we repeat our fitting procedure for the momentum interval 2<P_T<12, as shown in Fig.(<ref>). This range, known as the intermediate P_T range, is of particular interest in the study of heavy-ion collisions as it encompasses a transition region between low P_T and high P_T. This range of P_T enables us to explore the interaction between high-energy scattering events and the collective properties of the QGP. In tables (<ref>) to (<ref>), we summarize the obtained results for the free parameters, N and K, as well as the χ^2 values for each model. Note that the charm quark distribution has been computed for P_T> 1 GeV. Therefore, our results within P_T< 1 GeV region are invalid. To avoid errors resulting from unreliable regions, we calculate the χ^2 value for data with P_T > 1.5 GeV. As can be seen from the charts, in general, all three energy loss approaches agree well with experimental data from ALICE and ATLAS. Although they describe intermediate P_T values better than large P_T and there is a slight deviation from the experimental data observed for high P_T. That's because all models face divergence at the limits of integration and different approaches have used different methods to resolve this issue. It is anticipated that the introduction of novel methods capable of effectively resolving the divergence associated with the upper and lower limits of interaction would yield improved outcomes for both small and large P_T values. Figure (<ref>) corresponds to the ALICE 2018 dataset, and Table (<ref>) presents the fitting outcomes of these data for the three collisional energy loss models. As χ^2 values show, for the 2018 dataset, there is no significant difference observed among the three approaches. This lack of differentiation can be attributed to the limited number of data points available in 2018, coupled with their relatively high error margin (averaging 0.2). Hence, these data lack sufficient resolution to show the difference between the models and are not a good criterion for concluding. In the Alice 2021 and 2022 experiments, the number of data points has increased, and their errors have decreased. Consequently, these datasets provide a more reliable reference for investigating and comparing different energy loss models. In the ALICE 2021 experiment, the measured data is related to muons decaying from D meson (Pb+Pb →Muon + X), while in the ALICE 2022 experiment, the interaction is related to D meson production (Pb+Pb →D0 + X). Consequently, the ALICE 2021 data has a smaller error compared to the ALICE 2022 data (the average error in the ALICE 2022 data is approximately 0.1, whereas the average error in the ALICE 2021 data is roughly half of that value). However, Alice's 2022 data cover a broader range of P_T values. Therefore, they are more suitable for globally comparing energy loss models. Conversely, for the intermediate range of P_T values, the 2021 data are a better reference due to their smaller error. Figure (<ref>) presents the results obtained for the ALICE 2021 data. 
As it is evident from Table (<ref>) and the χ^2 values, model C performs better than other models in the range of intermediate P_T values. This indicates that the HTL mechanism effectively describes the intermediate P_T range, suggesting that model C is a more suitable choice for this region. Furthermore, model B outperforms model A as expected. Figure (<ref>) illustrates the obtained results for the ALICE 2022 data. The ALICE 2022 data, covering a wider range of P_T values, are more suitable for global analysis and comparison of different approaches. According to Table (<ref>), for all P_T ranges, Model B performs slightly better than other models. However, in general, there is not a significant difference between the three energy loss models in the global analysis. We need more data points with higher P_T values to make a definite conclusion. The fitting outcomes obtained from the ALICE 2022 dataset do not indicate a global advantage of model C. Note that in our analysis, we have considered the range of 2 < P_T < 12 as intermediate P_Ts, while in model C <cit.> the boundary between soft momentum and hard momentum occurs around P_T = 20 GeV. Consequently, we should note that in the review of model C, most of the available data are evaluated using the HTL mechanism. To obtain better results for a global investigation of model C, the boundary for these two regions may need to be modified at higher center-of-mass energies. Finally, the ATLAS 2022 data were analyzed to compare with the ALICE dat; Figure (<ref>). However, the limited number of ATLAS data points and their higher error compared to the ALICE 2021 data make it difficult to differentiate between the three methods of energy loss. Nonetheless, as seen in Table (<ref>), in this case as well, all the models provide better descriptions of the data in the range of intermediate P_T values in comparison by the whole P_T region. § SUMMARY AND CONCLUSIONS Our study employed the Fokker-Planck equation to investigate the evolution of transverse momentum distribution functions of charm quarks produced in lead-lead collisions at 5.02 TeV. During the evolution, we considered three different approaches for collisional energy loss to compare these approaches with each other. Our purpose was to assess their compatibility with the latest experimental data. We have found that the recent data have sufficient precision to distinguish among these different models, particularly in the region of intermediate transverse momentum. Although published data older than 2018, do not exhibit significant differences between the various energy loss models, mainly due to their limited quantity and large errors. In general, all three energy loss approaches describe the range of intermediate P_T better than small or large P_T regions. Among these models, the model proposed by Braaten and Thoma <cit.> provides a better description of the intermediate P_T range, in comparison with the other energy dissipation methods. indicating that the HTL mechanism is an appropriate mechanism for average P_Ts. For the global analysis, we considered ALICE 2022 data as a benchmark because it covers a wider region of P_T. There was no significant difference between the χ^2 values for different energy loss models. In fact, the major difference between these models lies in managing the convergence at high and low momenta and the method of field regularization. 
Therefore, by increasing the number of data points at small and large P_T, and by decreasing their errors, we should be able to distinguish between these energy loss models more effectively in a global analysis. It is also expected that such an evolution will be employed in the near future for the b quark distribution function; at present, the available data on the b quark distribution function are insufficient to determine which model is the proper one. § ACKNOWLEDGMENTS Special thanks go to Dr. Samira Shoeibi for providing guidance in using the Minuit package. This work is supported by the Ferdowsi University of Mashhad under grant number 3/58322 (1401/07/23). § APPENDIX As mentioned before, in order to calculate the drag and diffusion coefficients in the Fokker-Planck equation, we must calculate the energy loss of heavy quarks while passing through the plasma and consider both modes of energy loss, through collisions and through radiation. Here, we introduce three common approaches for calculating the collisional energy loss, as well as one of the most common approaches for calculating the radiative energy loss. The first calculation of the collisional energy loss (model A in our article) was proposed by Bjorken <cit.> and reads: -dE/dx=16π/9α_s^2T^2ln(4pT/k_D^2)[exp(-k_D/T)(1+k_D/T)] where p is the momentum of the particle, T is the temperature of the plasma, and k_D = √(3) m_g. We also have: m_g^2 = 4πα_s T^2/3(1+n_f/6) Another approach for calculating the collisional energy loss is presented by Thoma and Gyulassy <cit.>. In this approach, which is our second model, we have: -dE/dx = 16π/9α_s^2 T^2 ln(k_max/k_D) 1/ν^2[ν + (ν^2 - 1)/2ln(1+ν/1-ν)] in which: k_max≈4pT/√(p^2 + M^2) - p + 4T Model C for the collisional energy loss <cit.> involves calculating the energy loss of a quark with energy E in two different limits, E ≪M^2/T and E ≫M^2/T. A QED calculation has been used to determine contributions to the energy loss for some parts of the calculation. To achieve this, e in the QED calculation is replaced by g_s=4/3√(4πα_s) in the QCD calculation, and the thermal photon mass m=eT/3 is replaced by the thermal gluon mass m_g=g_s T √((1+n_f/6)/3). For the E ≪M^2/T limit we have: -dE/dx = 8πα_s^2 T^2/3(1+n_f/6)[1/v-1-v^2/2v^2ln(1+v/1-v)]ln(2^n_f/(6+n_f)B(v)ET/m_g M) where B(v) is a smooth function that starts at B(0)=0.604, increases to B(0.88)=0.731, and then decreases to B(1)=0.629. In the E ≫M^2/T limit, we have: -dE/dx = 8πα_s^2 T^2/3(1+n_f/6)ln(2^n_f/12+2n_f 0.920 √(ET)/m_g) A smooth connection between the two limits is required for the intermediate region, E ≈ M^2/T. Calculations indicate that we can use the first equation up to E_cross = 1.8 M^2/T and then switch to the second one. Finally, the radiative energy loss of a heavy quark in a QGP is calculated as follows: -dE/dx = 24α_s^3 ρ_QGP1/μ_g (1-β_1) (√(1/1-β_1ln1/β_1) - 1) F(δ) where F(δ) = 2δ - 1/2ln(1+M^2/se^2δ/1+M^2/s e^-2δ) - (M^2/ssinh(2δ)/1+2M^2/scosh(2δ)+M^4/s^2) δ = 1/2ln[ 1/1 - β_1ln( 1/β_1) ( 1 + √(1 - 1 - β_1/ln (1/β_1)))^2 ] and C = 3/2 - M^2/48E^2 T^2 β_0ln[ M^2+6ET(1+β_0)/M^2+6ET(1-β_0)] For more details, see <cit.>. References Seymour:2005hs Michael H. Seymour. https://doi.org/10.48550/arXiv.hep-ph/0505192arXiv:hep-ph/0505192. Report number: CERN-PH-TH-2005-083 QGP.Collins:1974ky J.C. Collins, M.J. Perry, “Superdense matter: Neutrons or asymptotically free quarks?", https://doi.org/10.1103/PhysRevLett.34.1353Phys. Rev. Lett. 34.1353 (1975). QGP.Shuryak:1977ut E. V.
Shuryak, “Theory of Hadronic Plasma,” https://inspirehep.net/literature/121016Sov.Phys.JETP 47 (1978) 212-219 [Zh. Eksp. Teor. Fiz. 74, 408 (1978)]. QGP.Martinez:2013xka G. Martinez, “Advances in Quark Gluon Plasma", https://doi.org/10.48550/arXiv.1304.1452arXiv:1304.1452. [nucl-ex] Hagedorn:1965st R. Hagedorn, https://inspirehep.net/files/0f47965fc72ecec80d0e4f8f71f7d9e5Nuovo Cim. Suppl. 3, 147-186 (1965) CERN-TH-520. STAR:2005gfr J. Adams [STAR Collaboration], et al., “Experimental and theoretical challenges in the search for the quark-gluon plasma: the STAR Collaboration’s critical assessment of the evidence from RHIC collisions". https://doi.org/10.1016/j.nuclphysa.2005.03.085 Nucl.Phys.A 757 (2005) 102-183. PHENIX:2004vcz K. Adcox [PHENIX Collaboration] et al., “Formation of dense partonic matter in relativistic nucleus-nucleus collisions at RHIC: experimental evaluation by the PHENIX collaboration". https://doi.org/10.1016/j.nuclphysa.2005.03.086Nucl.Phys.A 757 (2005) 184-283. Ollitrault:2007du Jean-Yves Ollitrault,“Relativistic hydrodynamics for heavy-ion collisions", https://doi.org/10.1088/0143-0807/29/2/010 Eur.J.Phys. 29 (2008) 275-302. Song:2010mg H. Song, S. A.Bass, U. Heinz, T. Hirano and C. Shen, https://doi.org/10.1103/PhysRevLett.106.192301Phys.Rev.Lett. 106 (2011) 192301. Becattini:2014rea Francesco Becattini, “The Quark Gluon Plasma and relativistic heavy ion collisions in the LHC era", https://doi.org/10.1088/1742-6596/527/1/012012 J.Phys.Conf.Ser. 527 (2014) 012012. HQ.vanHees:2005wb H. van Hees, V. Greco, R. Rapp, “Heavy-quark probes of the quark-gluon plasma at RHIC". https://doi.org/10.1103/PhysRevC.73.034913 Phys.Rev.C 73 (2006) 034913. HQ.Rapp:2009my R. Rapp, H. van Hees, “Heavy Quarks in the Quark–Gluon Plasma". https://doi.org/10.1142/9789814293297_0003 Quark–Gluon Plasma 4, 111–206 (2010). Chattopadhyay:2018apf C. Chattopadhyay, U. Heinz, S. Pal, G. Vujanovic, “Higher order and anisotropic hydrodynamics for Bjorken and Gubser flows”, https://doi.org/10.1103/PhysRevC.97.064909 Phys.Rev.C 97 (2018) 6, 064909. , arXiv:1801.07755 [nucl-th]. Grozdanov:2015kqa S. Grozdanov, N. Kaplis, “Constructing higher-order hydrodynamics: The third order”, https://doi.org/10.1103/PhysRevD.93.066012Phys.Rev.D 93 (2016) 6, 066012. , arXiv:1507.02461 [hep-th]. Arnold:2000dr P.B. Arnold, G.D. Moore, L.G. Yaffe, "Transport coefficients in high temperature gauge theories. 1. Leading log results". https://doi.org/10.1088/1126-6708/2000/11/001JHEP 11, 001 (2000). Arnold:2003zc P.B. Arnold, G.D. Moore, L.G. Yaffe, "Transport coefficients in high temperature gauge theories. 2. Beyond leading log". https://doi.org/10.1088/1126-6708/2003/05/051JHEP 05, 051 (2003). Mattiello:2010nfi S. Mattielloa, W. Cassing, "Shear viscosity of the Quark-Gluon Plasma from a virial expansion". https://doi.org/10.1140/epjc/s10052-010-1459-3Eur. Phys. J. C 70, 243–249 (2010). Sheibani:2021ovo Sheibani, J. and Javidan, K. and Mirjalili, "Impact of EMC effect on D meson modification factor in equilibrating QGP". https://doi.org/10.1140/epjp/s13360-022-02966-3Eur.Phys.J.Plus 137 (2022) 807 Braaten:1989kk E. Braaten, R.D. Pisarski, “Resummation and Gauge Invariance of the Gluon Damping Rate in Hot QCD". https://doi.org/10.1103/PhysRevLett.64.1338 Phys.Rev.Lett. 64 (1990) 1338. FP.Dong:2019byy X. Dong, Y.J. Lee, R. Rapp, “Open Heavy-Flavor Production in Heavy-Ion Collisions", https://doi.org/10.1146/annurev-nucl-101918-023806Ann.Rev.Nucl.Part.Sci. 69 (2019) 417-445. Diffusion.Das:2010tj S.K. Das, J. Alam, P. 
Mohanty, “Drag of heavy quarks in Quark gluon plasma at the large hadron collider". https://doi.org/10.1103/PhysRevC.82.014908 Phys. Rev. C 82 (2010) 014908 , arXiv:1003.5508. Diffusion.Srivastava:2016igg P. K. Srivastava, B. K. Patra, “Drag and Diffusion of Heavy Quarks in a hot and anisotropic QCD medium,” https://doi.org/10.1140/epja/i2017-12299-0 Eur. Phys. J. A 53, no. 6, 116 (2017). Diffusion.Akamatsu:2008ge Y. Akamatsu, T. Hatsuda and T. Hirano, “Heavy Quark Diffusion with Relativistic Langevin Dynamics in the Quark-Gluon Fluid". https://doi.org/10.1103/PhysRevC.79.054907Phys.Rev.C 79 (2009) 054907. Bjorken:1982tu Bjorken, J. D. 1982, Fermilab, Report number: https://lss.fnal.gov/archive/1982/pub/Pub-82-059-T.pdfFERMILAB-PUB-82-059-THY; FERMILAB-PUB-82-059-T. Thoma:1990fm M. Thoma and M. Gyulassy, https://doi.org/10.1016/S0550-3213(05)80031-8Nucl.Phys.B 351 (1991) 491-506 Pisarski:1989cs R.D. Pisarski, Physica A 158 (1989) 146-157. Braaten:1991we Braaten and Thoma, Markus H., https://doi.org/10.1103/PhysRevD.44.R2625Phys.Rev.D 44 (1991) 9, R2625. Saraswat:2017vuy K. Saraswat, P. Shukla, V. Kumar, and V. Singh, “Energy loss of heavy quarks and B and D meson spectra in PbPb collisions at LHC energies”, https://doi.org/10.1016/j.nuclphysa.2017.02.013 Nucl. Phys. A 961, 169-182 (2017). Saraswat:2015ena K. Saraswat, P. Shukla and V. Singh, https://doi.org/10.1016/j.nuclphysa.2015.08.005Nucl.Phys.A 943 (2015) 83-100. [arXiv:1506.06604 [nucl-ex]]. Abir:2012pu R. Abir, U. Jamil, M. G. Mustafa and D. K. Srivastava, https://doi.org/10.1016/j.physletb.2012.07.044Phys.Lett.B 715 (2012) 183-189 [arXiv:1203.5221 [hep-ph]]. Gyulassy:2000fs M. Gyulassy, P. Levai and I. Vitev, https://doi.org/10.1103/PhysRevLett.85.5535Phys.Rev.Lett. 85 (2000) 5535-5538. Djordjevic:2003zk M. Djordjevic and M. Gyulassy, https://doi.org/10.1016/j.nuclphysa.2003.12.020Nucl.Phys.A 733 (2004) 265-298 Wicks:2005gt S. Wicks, W. Horowitz, M. Djordjevic and M. Gyulassy, https://doi.org/10.1016/j.nuclphysa.2006.12.048Nucl.Phys.A 784 (2007) 426-442 CMS:2016xef Khachatryan V et al (CMS Collaboration). https://doi.org/10.1007/JHEP04(2017)039JHEP 04 (2017) 039 , arXiv:1611.01664 [nucl-ex]. Raa.Miller:2007ri M.L. Miller, K. Reygers, S.J. Sanders, P. Steinberg, “Glauber modeling in high energy nuclear collisions". https://doi.org/10.1146/annurev.nucl.57.090506.123020Ann.Rev.Nucl.Part.Sci. 57 (2007) 205-243. Raa.Zigic:2019sth D. Zigic, B. Ilic, M. Djordjevic, M. Djordjevic, “Exploring the initial stages in heavy-ion collisions with high-P_T R_AA and ν_2 theory and data". https://doi.org/10.1103/PhysRevC.101.064909Phys.Rev.C 101 (2020) 6, 064909. Miller:2007ri M. Miller, K. Reygers, S. J. Sanders, and P. Steinberg, “Glauber modeling in high energy nuclear collisions”, arXiv:nucl-ex/0701025. https://doi.org/10.1146/annurev.nucl.57.090506.123020Ann.Rev.Nucl.Part.Sci. 57 (2007) 205-243. Tripathy:2017kwb S. Tripathy, A. Khuntia, S. K. Tiwari, R. Sahoo, “TransverseMomentum Spectra and Nuclear Modification Factor using Boltzmann Transport Equation with Flow in Pb+Pb collisions at √(S_NN)=2.76 TeV”, https://doi.org/10.1140/epja/i2017-12283-8Eur. Phys. J. A 53, no. 5, 99 (2017). Qiao:2020yry L. Qiao, G. Che, J. Gu, H. Zheng, W. Zhang, “Nuclear modification factor in Pb-Pb and p-Pb collisions using Boltzmann transport equation”, https://doi.org/10.1088/1361-6471/ab8744J.Phys.G 47 (2020) 7, 075101. Numerical solution V. Palleschi, F. Sarri, G. Marcozzi, M.R. 
Torquati, "Numerical solution of the Fokker-Planck equation: A fast and accurate algorithm", https://doi.org/10.1016/0375-9601(90)90717-3Physics Letters A, Volume 146, Issues 7–8 (1990). Freeze-Out Parameters.Chatterjee:2015 Sandeep Chatterjee, Sabita Das, Lokesh Kumar, D. Mishra, Bedangadas Mohanty, Raghunath Sahoo, Natasha Sharma, “Freeze-Out Parameters in Heavy-Ion Collisions at AGS, SPS, RHIC, and LHC Energies", https://doi.org/10.1155/2015/349013Advances in High Energy Physics, vol. 2015, Article ID 349013. Freeze-Out Parameters.HotQCD:2014kol A. Bazavov et al., [HotQCD Collaboration], “Equation of state in (2+1)-flavor QCD”, https://doi.org/10.1103/PhysRevD.90.094503Phys. Rev. D 90, 094503 (2014). Modarres:2021gva Modarres, M. and Taghavi, R. and Nik, R. Aminzadeh and Valeshabadi, R. Kord. https://doi.org/10.1103/PhysRevD.104.056005Phys.Rev.D 104 (2021) 5, 056005. Olanj:2020lkt N. Olanj, M. Modarres. https://doi.org/10.1016/j.nuclphysa.2020.121735Nucl.Phys.A 998 (2020) 121735. ALICE:2018lyv ALICE Collaboration, https://doi.org/10.1007/JHEP10(2018)174 JHEP 10 (2018) 174 , Report number: CERN-EP-2018-066. ALICE:2020sjb ALICE Collaboration, https://doi.org/10.1016/j.physletb.2021.136558 Phys.Lett.B 820 (2021) 136558 , Report number: CERN-EP-2020-186. ALICE:2021rxa ALICE Collaboration, https://doi.org/10.1007/JHEP01(2022)174 JHEP 01 (2022) 174 , Report number: CERN-EP-2021-213. ATLAS:2021xtw ATLAS Collaboration, https://doi.org/10.1016/j.physletb.2022.137077 Phys.Lett.B 829 (2022) 137077 , Report number: CERN-EP-2021-153. Minuit.James:1975dr F. James and M. Roos, http://dx.doi.org/10.1016/0010-4655(75)90039-9Comput. Phys. Commun. 10, 343-367 (1975).
http://arxiv.org/abs/2307.04453v1
20230710100605
Tracking the Long-Term GW Phase Evolution for HM Cancri-like Binaries with LISA
[ "Naoki Seto" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "astro-ph.IM" ]
http://arxiv.org/abs/2307.05884v1
20230712031214
Learning Koopman Operators with Control Using Bi-level Optimization
[ "Abhinav Pandey", "Daning Huang", "Yin Yu", "Junyi Geng" ]
eess.SY
[ "eess.SY", "cs.RO", "cs.SY" ]
The accurate modeling and control of nonlinear dynamical effects are crucial for numerous robotic systems. The Koopman formalism emerges as a valuable tool for linear control design in nonlinear systems within unknown environments. However, it still remains a challenging task to learn the Koopman operator with control from data, and in particular, the simultaneous identification of the Koopman linear dynamics and the mapping between the state and Koopman spaces. Conventional approaches, based on single-level unconstrained optimization, may lack model robustness, training efficiency, and long-term predictive accuracy. This paper presents a bi-level optimization framework that jointly learns the Koopman embedding mapping and Koopman dynamics with explicit multi-step dynamical constraints, eliminating the need for heuristically-tuned loss terms. Leveraging implicit differentiation, our formulation allows back-propagation in standard learning frameworks and the use of state-of-the-art optimizers, yielding more stable and robust system performance over various applications compared to conventional methods. § INTRODUCTION Accurately modeling and controlling nonlinear dynamical effects is critical for robots, especially in challenging scenarios such as aerial robotics <cit.>, aerial manipulation tasks <cit.>, offroad driving <cit.>, etc. These scenarios often exhibit nonlinear effects, such as the coupling between translational and rotational motion, between the robot's own motion and the manipulated objects, or the complex dynamics induced by the environment, all of which make control design difficult. Traditional methods, such as state feedback <cit.> or optimization-based control <cit.>, require full knowledge of the system model to predict dynamics and design controllers. However, real-world effects such as wind gusts, boundary layer effects, rough terrain for mobile robots, and the hidden dynamics of chaotic nonlinear effects are too complex to be fully captured, leading to poor control performance in these scenarios. As a result, new approaches are needed to model and control these systems accurately and efficiently, especially when faced with complex, uncertain, or rapidly changing environments. Data-driven approaches have been successful in capturing unknown dynamics and patterns in complex systems <cit.>, allowing for accurate dynamics prediction. However, in many cases, these methods produce nonlinear models and hence require nonlinear control methods such as the iterative Linear Quadratic Regulator (iLQR) <cit.> or Nonlinear Model Predictive Control (NMPC) <cit.> to achieve effective system control. These control methods can become computationally expensive as the number of system states increases, making them infeasible for real-time applications where fast and accurate control is essential. Although some Reinforcement Learning (RL) approaches <cit.>, either model-based or model-free, can also achieve good performance on nonlinear control, they often suffer from sample inefficiency and a lack of generalizability. Therefore, there is a need for more efficient control methods that can be used in conjunction with data-driven techniques to enable real-time control of nonlinear systems. The Koopman operator has recently attracted growing interest and shown great potential to provide an elegant way of addressing the control problem under unknown dynamics <cit.>.
It embeds the nonlinear system dynamics in a lifted, higher-dimensional space where the dynamics is governed by a linear but possibly infinite dimensional operator <cit.>. Data-driven methods for identifying the Koopman models have gained considerable attention due to the strong expressive power and the rigorous operator-theoretic guarantees <cit.>. The learned linear system on the embedded space is readily amenable for linear control techniques. However, finding the mapping between the original space and the linear space and selecting the embedding representation remains a challenging task, especially in terms of maintaining the predictive accuracy and generalizability. There are some existing approaches focusing on learning the embedding functions with either linear regressors or deep neural networks, and then apply linear control methods <cit.>. However, due to the lack of capability to handle constraints in standard learning frameworks, the existing Koopman learning approaches often rely on a single-level unconstrained optimization formulation that attempts to minimize either only one step prediction errors, or multi-step prediction errors via hand-tuned approximate penalty terms. Such approaches not only require significant amount of effort to tune the penalty term coefficients and loss components during practical implementation, but also suffer from increased computational overhead in backpropagation when multi-step prediction errors are optimized. As a result, the existing methods suffer from poor training efficiency, and the learned models may lack robustness to data noise and long-term predictive accuracy. To overcome the above limitations, this paper proposes a bi-level optimization framework to learn the Koopman operator with control by jointly learning the embedding and the Koopman dynamics. Specifically, in the inner optimization, we minimize the loss in the Koopman embedding space with explicit constraints of multi-step Koopman dynamics; in the outer optimization, we minimize the embedding loss in the original space with the inner optimization serving as constraints. This formulation removes the need to hand-tune weight parameters and exactly enforces Koopman dynamics during the learning process. Furthermore, our framework leverages implicit differentiation of the inner optimization, aka. adjoint-based method, to eliminate the nested backpropagation calculations in the conventional formulations to boost up training efficiency while maintaining the compatibility with standard learning frameworks. Overall, the framework enforces the reproduction of dynamics over entire trajectory and thus mitigates the issues in data noise and the long-term prediction instability, and holds promise for a more accurate and numerically stable predictive model for control applications. The paper is organized as follows. Section II presents a brief summary of Koopman operator with control, specifically the Koopman Bilinear Form. In Section III, we provide the details in the formulation, analysis, and numerical algorithms of the proposed bi-level optimization framework. In Section IV, we present numerical examples to show the effectiveness of the proposed methodology in terms of training efficiency, predictive accuracy and generalizability. Finally, we conclude the work and point out possible future directions for further investigation in Section V. 
§ KOOPMAN THEORY PRELIMINARY Koopman Bilinear Form (KBF) <cit.> provides a means to globally bilinearize a control-affine system of the following form, =_0()+∑_i=1^m_i()u_i, (0)=_0 where ∈⊆^r is the state vector, =[u_1… u_m]^⊤∈^m is the input vector, _0:→ is the system dynamics, and _i:→ are the control input coupling terms. In the autonomous case <cit.>, i.e., when =0, the system generates a flow _t(_0)=(t) from an initial condition _0. The continuous time Koopman operator :→ is an infinite-dimensional linear operator such that g = g∘_t for all g∈, where g:→ is a complex-valued observable function of the state vector , is the function space of all possible observables, and ∘ denotes function composition. As a linear operator, admits eigenpairs (λ,φ) such that φ = φ∘_t = e^λ tφ where λ∈ and φ∈ are the Koopman eigenvalue and Koopman eigenfunction, respectively. The infinitesimal generator of associated with _0, referred to as the Koopman generator, is defined as L__0=lim_t→0-I/t, where I is the identity operator, and turns out to be the Lie derivative L__0=_0·∇, with eigenpair (λ,φ), φ̇ = L__0φ = λφ Given a set of eigenpairs {(λ_i,φ_i)}_i=1^n, the Koopman Canonical Transform (KCT) <cit.> of the control-affine system (<ref>) is = + ∑_i=1^mL__i u_i, where =([λ_1,⋯,λ_n]), =[φ_1,⋯,φ_n], and Lie derivatives for the control terms are L__i=_i·∇. Suppose the set of eigenfunctions is sufficiently large, such that span an invariant space for L__i, i.e., each of L__i can be represented using a l× l matrix _i, L__i=_i, then the KCT can be brought to a bilinear form <cit.>, = + ∑_i=1^m_i u_i. Often it is difficult to directly obtain the eigenfunctions of , and instead it is more convenient to learn the bilinear dynamics in a lifted coordinates via a mapping =(;)∈^n with learnable parameters , leading to the commonly used Koopman Bilinear Form <cit.>, =+∑_i=1^m_i u_i The eigendecomposition =^H reproduces the Koopman eigenvalues and the Koopman eigenfunctions =^H. The original states are recovered from an inverse mapping =^-1()≡(). Lastly, for control applications, it is necessary to discretize the KBF model with a time step size Δ t. Assuming a zeroth-order hold of input _k at time t_k and a sufficiently small Δ t, discrete-time KBF takes the form _k+1 = exp(Δ t + ∑_i=1^m_i u_k,iΔ t)_k ≈ (+Δ t)_k + ∑_i=1^m(_i Δ t)_k u_k,i _k+1 ≡_d_k + ∑_i=1^m_d,i_k u_k,i≡_k_k where _k=_d + ∑_i=1^m_d,i u_k,i is effectively a family of matrices parametrized by {_d,_d,1,⋯,_d,m}. Thus the learning of the discrete-time KBF model reduces to the learning of a time-varying Koopman operator <cit.>. § LEARNING KOOPMAN OPERATOR USING BI-LEVEL OPTIMIZATION We first revisit the single-level unconstrained optimization used by most of the other state-of-the-art methods. Then, we present the proposed bi-level optimization framework for learning Koopman system. §.§ Single-level unconstrained optimization First consider a generic form of single-level optimization (SLO) formulation that is typically employed for deep learning of Koopman operator <cit.>. Given a controlled trajectory of (N+1) steps ={_k,_k}_k=0^N, denote ={_k}_k=1^N and the following loss terms are defined, * Reconstruction loss: _r(,;)=∑_k=0^N _k-((_k))^2 * Koopman dynamics loss: _k(,;)=∑_k=1^N (_k)-(∏_i=0^k-1_i)(_0)^2 * Nonlinear dynamics loss: _n(,,;)=∑_k=1^N _k-((∏_i=0^k-1_i)(_0))^2 with the understanding that and in the losses refer to the learnable parameters in the encoder and decoder, respectively. 
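For concreteness, the three losses can be written down once an encoder phi, a decoder psi, and an input-dependent Koopman matrix K_k (built from the discrete-time matrices A_d and B_d,i above) are available as callables. The NumPy sketch below is schematic: the callables, names, and array shapes are assumptions on our part, and batching and automatic differentiation are omitted.

import numpy as np

def slo_losses(xs, us, phi, psi, K_of_u):
    # xs: states x_0..x_N, us: inputs u_0..u_{N-1};
    # phi/psi: encoder/decoder; K_of_u(u_k) returns K_k = A_d + sum_i B_{d,i} u_{k,i}.
    L_r = sum(np.sum((x - psi(phi(x)))**2) for x in xs)   # reconstruction loss
    z = phi(xs[0])                                        # z_0 = phi(x_0)
    L_k = 0.0                                             # Koopman dynamics loss
    L_n = 0.0                                             # nonlinear dynamics loss
    for k, u in enumerate(us):
        z = K_of_u(u) @ z                                 # roll forward: z_{k+1} = K_k z_k
        L_k += np.sum((phi(xs[k + 1]) - z)**2)
        L_n += np.sum((xs[k + 1] - psi(z))**2)
    return L_r, L_k, L_n

In the single-level formulation these terms are then combined with hand-tuned weights into a single objective, as discussed next.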
In addition, typically a regularization term () is included to improve the model generalizability. Note that in these losses, the Koopman states _k are not explicitly involved and the learnable parameters are ={,,}. In the conventional methodology, the multiple losses are combined into one by weighted sum _s=_r+α_1_k+α_2_n+α_3, leading to a single-level unconstrained optimization problem for the learning. Typically, the hand-tuning of multiple weights α_i may turn out to be a tedious task. Furthermore, the performance of the trained model is sensitive to the choice of the weights, and the optimal choice of the weights for one problem typically do not migrate to another problem. The choice and tuning of the weights in the loss is still under active research <cit.>. Specific to Koopman learning, the SLO poses two more major concerns. First, due to the trade-off between the losses, the two dynamics losses would never be exactly driven to zero, resulting in an inaccurate Koopman operator. During the prediction, the error may start to accumulate within a short time horizon, and limit the long-term predictive capability of the learned model. Second, both the dynamics losses have a recursive formulation that requires nested evaluation of the matrix products, leading to a high computational cost in the backpropagation during training on the order of O(N^2), i.e., a quadratic growth with respect to the length of time horizon; this renders the learning process inefficient. §.§ Bi-level equality-constrained optimization To address the limitations of the conventional SLO, this paper proposes an alternative formulation involving two levels of optimizations. First, two new loss terms are introduced, * Encoder loss: _e(;,) = ∑_k=0^N _k-(_k)^2 * Decoder loss: _d(;,) = ∑_k=0^N _k-(_k)^2 where the Koopman states ={_k}_k=0^N are introduced as auxiliary variables. Then, the bi-level optimization (BLO) is formulated as _,, (,;,) s.t. min_(;,,) s.t. _k=_k_k-1, k=1,⋯,N _0 = (_0) At the inner level, (<ref>)-(<ref>), the Koopman states are optimized with a set of equality constraints for the exact enforcement of the Koopman dynamics over the time horizon of length N; at the outer level, the model parameters are optimized given the the optimized that serve as the expected output of the encoder and input to the decoder. In the current implementation, outer loss optimizes autoencoder reconstruction =_r and inner loss optimizes Koopman dynamics =_e+_d. A comparison between the SLO and BLO formulations are shown in Fig. <ref>. If the equality constraints in BLO are exactly satisfied, the encoder and decoder losses correspond to the Koopman and nonlinear dynamics losses, respectively, in the SLO. Clearly, by construction, the BLO entirely removes the loss weights, exactly enforces Koopman dynamics, and eliminates the need for nested backpropagation calculations, which holds promise to produce a more accurate and stable predictive model. §.§ Gradient computation for bi-level optimization The practical implementation of BLO in a learning framework hinges on the efficient computation of the gradient given the constraints defined by the inner optimization. The gradient computation is achieved using the adjoint-based method, such that the computational cost scales linearly with the dimension of . For conciseness, the BLO is written as _ (,) s.t. min_(,) s.t. 
(,) = 0 where is the Koopman states, includes all trainable parameters from the autoencoder and the Koopman dynamics, and and are the outer and inner losses, respectively. First, the inner optimization is solved and transformed to a set of algebraic equations using the Lagrange multiplier method. Define L(,,) = (,) + ^⊤(,) and the solution to the inner optimization is found by solving the stationarity condition, L = + ^⊤ = 0 L = (,) = 0 Denoting (<ref>) as (,)=0 with ={,}, (<ref>) can be solved by a Newton-type algorithm with the Hessian = [ ^⊤; ] where = ∂^2 /∂^2+∂^2 ^⊤/∂^2, = Next, having solved the inner optimization, the BLO is transformed into a single-level equality-constrained optimization, _ (,) s.t. (,)=0 The desired gradient is computed as, = + = - ( )^-1 ≡ + ^⊤ where the adjoint variable is introduced as the solution to a linear system of equations, ( )^⊤ = - ( )^⊤ = [ - 0]^⊤ §.§ Computational complexity analysis The complete Koopman learning algorithm based on BLO is listed in Alg. <ref> and the computational cost of each major step is labelled. The details are discussed further as follows. First, note that in the Koopman learning problem (<ref>), the equality constraints are linear with respect to and _k is only related to _k-1, hence = is a (nN× nN) constant block matrix with diagonal structure and ∂^2 /∂^2=0. The inner loss is a sum of step-wise error norm, hence ∂^2 /∂^2 is a (nN× nN) block-diagonal matrix. Together, the (2nN× 2nN) Hessian matrix is a highly sparse matrix with diagonal structures. The costs for forming the matrix and solving the associated linear systems are both O(nN). Next, examine the steps in Alg. <ref>. Solving the inner optimization problem may require K Newton steps, and hence the total cost is O(nNK). Solving the adjoint system (<ref>) costs O(nN). The gradient computation (<ref>) scales with the dimension of trainable parameters with a cost O(nNP). In total, since typically P≫ K, the computational complexity of the BLO algorithm is O(nNP) and scales linearly with the length of time horizon; this is in sharp contrast with the quadratic growth in single-level formulation. § NUMERICAL SIMULATION To demonstrate the effectiveness of the proposed approach, we investigate two example nonlinear systems: a two dimension nonlinear system which has been widely used in various physical phenomenon <cit.>; and a double pendulum system with dimension 4, which shows that the proposed algorithm can generalize to high-dimensional systems. §.§ A two-dimension nonlinear system To illustrate the effectiveness of the bi-level optimization, we consider a variant of a well-known nonlinear system <cit.>: ẋ_1 = μ x_1 + u_1 + u_3 x_1 ẋ_2 = λ(x_2 - x_1^2) + u_2 where μ=-3 and λ=-2 are pre-defined system parameters controlling characteristic time scales, and (u_1, u_2, u_3) are time-varying controls to the system. The system has an isolated equilibrium point at _e=(-u_1/μ+u_3, u_1^2/(μ+u_3)^2-u_2/λ), and increasing u_3 slows down the convergence to x_e,1. The trajectories for model training and validation were generated by uniformly sampling initial conditions _0 ∈ [-5, 5]×[-5, 5] as with a step size of 0.5 in both directions, producing a total of 121 trajectories. For the controlled case, the control were randomly generated with u_i ∈ [-1.8, 1.8]. All trajectories were generated using by 4th order Runge Kutta with a time step size of 0.02s for 100 steps. All trajectories were normalized to [0, 1]; 105 of them were used for model training, and 16 were used for validation. 
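The data generation just described can be reproduced with a few lines of NumPy. The sketch below integrates the system with a classical fourth-order Runge-Kutta step (time step 0.02 s, 100 steps) under random controls u_i ∈ [-1.8, 1.8]; the normalization of the trajectories to [0, 1] and the train/validation split are omitted, and the function names are ours.

import numpy as np

MU, LAM = -3.0, -2.0      # system parameters mu and lambda

def f(x, u):
    # dynamics of the controlled two-dimensional system
    x1, x2 = x
    u1, u2, u3 = u
    return np.array([MU * x1 + u1 + u3 * x1,
                     LAM * (x2 - x1**2) + u2])

def rk4_step(x, u, dt):
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(x0, dt=0.02, steps=100, seed=0):
    rng = np.random.default_rng(seed)
    xs, us = [np.asarray(x0, dtype=float)], []
    for _ in range(steps):
        u = rng.uniform(-1.8, 1.8, size=3)   # random control input
        us.append(u)
        xs.append(rk4_step(xs[-1], u, dt))
    return np.array(xs), np.array(us)

xs, us = rollout([2.5, -3.0])   # one trajectory from an initial condition in [-5, 5]^2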
A 4-dimensional embedding space is selected based on empirical observation, where three dimensions are learned using a neural network for encoding/decoding, and the remaining dimension is set to be 1 for the completeness of the basis. Both the encoder and decoder have 3 hidden layers with PReLu activation and have sizes (16, 30, 24) and (24, 30, 16), respectively. The model is trained for 5000 epochs using the RMSProp optimizer with learning rate 0.05. §.§.§ Benchmark with single-level optimization We first present the results for the system without control input to establish the feasibility of the propose method. The results are benchmarked with the state-of-the-art single-level optimization <cit.> using the same set of data for training. Some hand-tuning of weights is required for the SLO case and the best result is reported. Figure <ref> shows the comparison in prediction, where the dotted and dashed curves are the trajectories from the SLO-based and BLO-based Koopman models, respectively. The yellow box represents the region that contains the training trajectories. Clearly, when the trajectory starts from outside the sampling region, the SLO-based dynamics deviates significantly from the true dynamics, while the BLO-based model almost exactly predicts the dynamics. This highlights the generalizability and long-term prediction capability of the BLO-based model. Such capability is attributed to the bi-level formulation that exactly enforces the Koopman dynamics and thus enables the reproduction of the dynamics over a wide range of state space beyond the training data. The two methods are further compared in terms of convergence characteristics in Fig. <ref>. Due to the differences in the implementation, only the prediction losses in the original state space are reported, and the losses are normalized by their respective initial values, so that the relative decreases in the loss are compared. While the SLO converges earlier at ∼2400 iterations, but it fails to reduce the predictive loss further, presumably due to the competing effects with the other losses. The BLO shows a fast initial convergence rate and achieves a loss comparable to SLO within 500 iterations, and eventually arrives at a loss that is three orders of magnitude smaller than the SLO results. Note that such superior convergence rate is achieved without the need to hand-tune weight parameters. §.§.§ Learning with time varying control input Next we present the results for the system with time varying control input in Fig. <ref>, where all the 16 validation cases are plotted. The solid curves are the truth and the dotted curves are the prediction. The trajectories without control, but from the same initial conditions, are shown on the left as reference; the effects of control are seen from the distortion of the trajectories on the right. In the controlled case, besides the _d matrix for autonomous dynamics, the control coupling matrices _d,i are also learned from data. Similar to the uncontrolled case, the trajectories are well beyond the sampling region, and yet the learned model robustly captures the ground truth with negligible errors. §.§ Double Pendulum Next, we consider a double pendulum problem to show the feasibility of the proposed algorithm for high-dimensional systems. Following the canonical setup, the masses and lengths of the two pendulums are m_1 = 2 kg (upper), m_2 = 1 kg (lower), l_1 = 1 m, l_2 = 1 m. 
25 trajectories were generated with initial conditions of (θ_1, θ_2) ∈ [-30,30]×[-30,30] and zero initial velocities, such that system stays within the non-chaotic regime. Of these, 20 trajectories were used to train and 5 were used for validation. The embedded space is of dimension 9, with 8 being learned through a neural-network based encoder, and one added to be the constant 1. The encoder and decoder hidden state sizes are (20, 30, 30, 24) and (24, 30, 30, 20) respectively. all the rest of the details are the same as those for the first example. Figure <ref> shows the prediction performance of the learned Koopman model and proposed method performs well even for this higher dimensional system. The model matches with the truth well with the maximum error less than 1%. The prediction also accurately captures the periods of the double pendulum system while maintaining the oscillation amplitude. § CONCLUSION This paper presents a bi-level optimization framework to learn the Koopman Bilinear Form by jointly optimizing the Koopman embedding and dynamics with explicit and exact constraints of multi-step Koopman dynamics. Our approach does not involve the conventional penalty terms that would require hand-tuning the weights. By leveraging the implicit differentiation and the adjoint-based method, our method eliminate the nested back-propagation and boost up training efficiency while maintaining the compatibility with the standard learning frameworks. We validate the proposed approach on two example nonlinear systems with control. Results show that our method successfully learns the nonlinear dynamics. Comparing to the single-level optimization method, our method achieves more accurate prediction with low prediction error and faster convergence, and generalizes to the trajectories well outside the sampling region with longer horizon prediction. Future work will investigate online model prediction with other control methods for more complex physical scenarios, including the aerial vehicle flying in the wind gust environment. The developed differentiable bi-level optimization framework is on-going integrated into our open-source robotic learning library Pypose <cit.>. IEEEtran
http://arxiv.org/abs/2307.10199v1
20230712131008
New solution of Einstein-Yang-Mills equations
[ "Yuewen Chen", "Jie Du", "Shing-Tung Yau" ]
gr-qc
[ "gr-qc" ]
New solution of Einstein-Yang-Mills equations. Yuewen Chen (Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, P.R. China. E-mail: [email protected]), Jie Du (Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, P.R. China. Yanqi Lake Beijing Institute of Mathematical Sciences and Applications, Beijing 101408, P.R. China. E-mail: [email protected]), Shing-Tung Yau (Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, P.R. China. Yanqi Lake Beijing Institute of Mathematical Sciences and Applications, Beijing 101408, P.R. China. Department of Mathematics, Harvard University, Cambridge, MA 02138, USA. E-mail: [email protected]). Abstract: In this paper, we present the numerical solution of the spherically symmetric SU(2) Einstein-Yang-Mills (EYM) equations and show the existence of an entropy weak solution for the EYM system. Key Words: Einstein-Yang-Mills equations. § INTRODUCTION The Einstein-Yang-Mills (EYM) equation plays an important role in general relativity (GR). In this paper, we aim to find a stable solution of the EYM equations with the SU(2) gauge group numerically. §.§ Einstein-Yang-Mills equations We first introduce the formulation and basic properties of the EYM equations. We adopt the following static, spherically symmetric metric g=-A e^-2 δ dt^2 +dr^2/A +r^2 (dθ^2 +sin^2θ dϕ ^2), where (θ,ϕ) are the spherical coordinates, r is the radius, t is the coordinate time, A=A(r), and δ=δ(r). Denoting by τ_1,τ_2,τ_3 the Pauli matrices, the spherically symmetric Yang-Mills connection with SU(2) gauge group can be written in the form 𝔄=W(r) τ_1 dθ +(cotθ τ_3+W(r) τ_2 )sinθ dϕ. The EYM equations with SU(2) gauge potential have been derived in many papers <cit.> and take the following form: r^2 A W'' =((W^2-1)^2/r+r(A-1))W'+W(W^2-1) , r A' =1-(W^2-1)^2/r^2-A(2(W')^2+1) , δ' =-2(W')^2/r, with boundary conditions W(0) =± 1, W(∞) =∓ 1, A(0) =1, δ(0) =0. In this system, Eq. (<ref>) is the matter field equation for solving W(r) in the Yang-Mills field, also called the Yang-Mills equation. Eqs. (<ref>) and (<ref>) are the Einstein equations that determine A(r) and δ(r) in the metric; Eq. (<ref>) is also called the Hamiltonian constraint equation. Notice that Eqs. (<ref>) and (<ref>) do not involve δ, and hence one can first solve these two equations for A(r) and W(r) and then use (<ref>) to obtain δ(r). If the solution to the EYM equations satisfies A(r_*)=0 at some point r_*, then we call such a solution a black hole solution, and r_* is the position of the event horizon. In 1988, Bartnik and McKinnon <cit.> numerically discovered a global nontrivial static nonsingular particle-like solution that is not a black hole solution; this work sparked a great deal of interest in the general relativity community <cit.>.
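Written as a first-order system in (W, V=W', A, δ), the three ODEs above can be integrated outward from a small radius, which is essentially the shooting approach used historically to find such solutions. The following sketch is illustrative only: the near-origin expansion W ≈ 1 - b r^2 and the value of the shooting parameter b are assumptions made here for demonstration, not quantities taken from this paper.

import numpy as np
from scipy.integrate import solve_ivp

def eym_rhs(r, y):
    # First-order form of the static EYM system; singular at r = 0
    # and wherever A = 0 (the event horizon).
    W, V, A, delta = y
    dV = (((W**2 - 1)**2 / r + r * (A - 1)) * V + W * (W**2 - 1)) / (r**2 * A)
    dA = (1 - (W**2 - 1)**2 / r**2 - A * (2 * V**2 + 1)) / r
    dd = -2 * V**2 / r
    return [V, dV, dA, dd]

# Start slightly off the origin with a regular ansatz W ~ 1 - b r^2 (assumed).
b, r0 = 0.45, 1e-3
y0 = [1.0 - b * r0**2, -2.0 * b * r0, 1.0, 0.0]
sol = solve_ivp(eym_rhs, (r0, 20.0), y0, rtol=1e-8, atol=1e-10)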
The EYM equation has also caught the interest of specialists in the field of differential equations based on the numerical observation. J. Smoller and his associates published a number of publications <cit.>that represented the key advancements in the theoretical analysis. The SU(2) EYM equations accept an infinite family of black hole solutions with a regular event horizon, as demonstrated by Smoller and Yau et.al, who conclusively demonstrated the existence of a globally defined smooth static solution . In the meanwhile, they established that there are an endless number of smooth, static, regular solutions to the EYM equations <cit.>. However, all solutions founded in history are not dynamically stable and therefore are not physical <cit.>. In this paper, we present the high order schemes to solve EYM equations globally and show a stable solution for EYM. In this paper, we are interested in finding a stable black hole solution globally. Since the horizon r_* is usually close to r=0 and more details of the solution concentrate in the region with a small r, we adopt the following coordinate transformation x =ln(r) , r=e^x, and then rewrite the equations (<ref>) and (<ref>) for W and A as c(r,A,W)W_x=A W_xx+(1-W^2)W, A_x=-(1+2/r^2W_x^2) A+1-1/r^2(W^2-1)^2, where c(r,A,W)=2A-1+1/r^2(W^2-1)^2. The new idea in this paper, which is different from classical methods, is to consider the steady state of the parabolic version of EYM equations. Instead of solving the above static system of equations (<ref>) and (<ref>) directly, we raise the problem one-dimensional higher and consider the following time-dependent parabolic problem W_t +c(r,A,W)W_x =A W_xx+(1-W^2)W, A_x =-(1+2/r^2 W_x^2) A+1-1/r^2(W^2-1)^2 , where A=A(x,t) and W=W(x,t). By introducing suitable initial conditions, we aim to march this system in time to steady state numerically. We will discuss more details about the choices of the initial condition in the numerical examples section. §.§ High order WENO scheme Since only shooting methods were used to solve the static EYM equations directly and no one considers its time evolution version (<ref>)-(<ref>) in literature, it is meaningful to investigate more suitable numerical methods. This problem forms a convection-diffusion system with source terms. Since the diffusion coefficient in (<ref>) is A, the system becomes degenerate for black hole solutions in which A=0 at the horizon. In this case, there will be a sharp front in the solution near the horizon. In this paper, we are interested in solving this system by using high order WENO methods. §.§ Contributions and organization of the paper The rest of this paper is organized as follows. In section 2 we give the definition of entropy weak solution of EYM equations and the jump condition and the convergence analysis of first order TVD finite difference scheme. In section 3, to make it simpler to compute the implicit scheme, we construct a new WENO scheme for the second derivative of the YM equation. In section 4, we construct a WENO scheme for constraint equation. In section 5, we provide numerical test to demonstrate the behavior of the new WENO scheme and the numerical solution of EYM systems. Finally, we will give the conclusions in Section 6. § FIRST ORDER FINITE DIFFERENCE SCHEME AND CONVERGENCE ANALYSIS In this section, we would like to study the jump condition of EYM and first order scheme. Firstly, we define the weak solution of EYM equation that belong to BV class and satisfy the entropy condition and give the RH jump condition. 
Secondly, we study the convergence of first-order schemes and prove that it can converge to weak solutions.Moreover ,such weak solution also satisfy the entropy condition. There is no a prior estimate for A to guarantee that A ≥ 0 during the evolution, to maintain the parabolic properties, we define Ã=max(0,A). However, by numerical experience, in the steady sate, we show that A ≥ 0, then A=Ã (See Figure <ref>). These guarantee the steady state solution of system (<ref>) satisfy the static EYM equation. §.§ Entropy weak solution and jump condition So, we are interested in the following modified problem A_x =-A(1+2/r^2W_x^2)+1-1/r^2(1-W^2)^2, Ã =max(A,0), W_t+B(x,W) W_x =Ã W_xx+(1-W^2)W , where B(x,W)=2A-1+1/r^2(1-W^2)^2, and we denote g(W)=W(1-W^2), a(x)=2A-1 and f(W)=∫^W (1-s^2)^2 ds=W(1-2/3W^2+1/5W^4). We can rewrite B(x,W)W_x as B(x,W)W_x =(2A-1)W_x+1/r^2f(W)_x =a(x)W_x+1/r^2f(W)_x. A_x =-A(1+2/r^2W_x^2)+1-1/r^2(1-W^2)^2, Ã =max(A,0), W_t+a(x)W_x+1/r^2f(W)_x =Ã W_xx+(1-W^2)W . Following Wu and Yin <cit.>, we give the definition of entropy weak solution for EYM equations. Let Q_T={(t,x): 0<t<T,x∈[-5,5]}. A function W(x,t)∈ L^∞(Q_T)⋂ BV(Q_T) is said to be an entropy weak solution of Eq. (<ref>) if the following two conditions holds: (1) A satisfy constraint equation (18). (2) For any c ∈ℝ and ϕ∈ C_0^∞(ℝ,ℝ^+), ∬_Q_T |W-c|ϕ_t +(a(x)ϕ)_x |W-c|+(1/r^2ϕ)_x sgn(W-c)(f(W)-f(c)) -(Ãϕ)_x|W-c|_x +sgn(W-c) g(W) ϕ dx dt ≥ 0. For more theory about scalar degenerate convection diffusion equation, one can see <cit.>. Numerically, we are interested in the shock speed. We derive the jump condition of W. Since A is Lipschitz continue, there is no jump condition for A. Remark Since Eq. (18) is a standard linear ODE, which can be solved as A=e^Q(-5)-Q(x)+e^-Q(x)∫_-5^x (1-1/r^2(1-W^2)^2)e^Q(s) ds, where Q(x)=∫_-5^x 1+2/r(s)^2W_x^2(s) ds . We will give a jump condition for EYM equation. Let Γ be a smooth curve across which W has a jump discontinuity,where Γ is given by x=x(t). The shock speed of the discontinuity is given by s=dx/dt. Assume P is any point on Γ, and D be a small ball centered at P. Let ϕ∈ C_0^ ∞(D),then we have the follow jump condition. The shock speed s is given by s[W]=1/r^2[f]+a[W]-{Ã_x}[W]-[ÃW_x], where [W]=W^+-W^-, {Ã_x }=1/2(Ã_x^+ +Ã_x^-). Moreover, in static case, we have the RH jump condition for static EYM black hole solution 0=1/r^2[f]/[W]+a-{A_x}-[AW_x]/[W]. Remark As t →∞,A ≥0, then Ã=A. However, it is difficult to prove this phenomena. By lots of numerical experience, we find that Ã=A in steady state (One can see Fig. <ref> for more detail). So we get a jump condition for static EYM black hole solution 0= a+1/r^2[f]/[W]-[AW_x]/[W]-{ A_x}. §.§ Convergence analysis for TVD scheme In this section, we will construct the first order TVD scheme and prove that the approximate solution could converge to the entropy weak solution. We set up a grid: let -5=x_0 <x_1<...<x_N=5 denote a uniform grid with a mesh size h=x_2-x_1. In time direction, t=n dt,n∈ℤ^+. For a function W(x,t), we use the notation W^n_i to denote the value of W at the mesh point (i h, n dt). The discrete L^∞ norm ,the L^1 norm and the BV seminorm are defined as follows: W_∞ =max_0≤ i ≤ N |W_i|, W_1 =h∑_0≤ i ≤ N |W_i|, |W|_BV =∑_0≤ i ≤ N-1|W_i+1-W_i|. 
Then a Lax-Friedrichs scheme for YM equation is given by A^n_i+1-A^n_i+hA^n_i+1(1+2/r_i^2(W^n_i+1-W^n_i)^2/h^2) =h/2(F^n_i+1+F^n_i), Ã^n_i =max(A^n_i,0), 1/dt(W_i^n+1-W^n_i)+a^n_i/2h(W^n_i+1-W^n_i-1)+1/r_i^21/2h(f^n_i+1-f^n_i-1) =α^n_i+1/2/2h(W^n_i+1-W^n_i) -α^n_i-1/2/2h(W^n_i-W^n_i-1) +Ã^n_i1/h^2(W^n_i+1-2W^n_i+W^n_i-1)+g^n_i , where F=1-1/r^2(1-W^2)^2, and the viscosity coefficient α_i+1/2 can be chosen as Lax-Friedrichs type α_i+1/2 =max_i( |a_i|+1/r_i^2(1-W^2_i)^2 ). Next, we would present a-prior estimate for scheme (<ref>). Under following CFL condition 1/2>dt/hmax_i|α^n_i+1/2|+2dt/h^2max_i(|A^n_i|),dt<1/4. We have W^n+1_∞≤ 1. W^n+1_i =W^n_i-dt/2ha^n_i(W^n_i+1-W^n_i-1)-1/r_i^2dt/2hf^'(θ_i)(W^n_i+1-W^n_i-1) +dt/2hα^n_i+1/2(W^n_i+1-W^n_i) -dt/2hα^n_i-1/2(W^n_i-W^n_i-1) +dt/h^2Ã^n_i(W^n_i+1-2W^n_i+W^n_i-1)+dtg^n_i =W_i^n(1/2-dt/2hα^n_i+1/2 -dt/2hα^n_i-1/2-2dt/h^2Ã^n_i) +W^n_i+1(-dt/2ha^n_i-1/r_i^2dt/2hf^'(θ^n_i)+dt/2hα^n_i+1/2+dt/h^2Ã^n_i) +W^n_i-1(dt/2ha^n_i+1/r_i^2dt/2hf^'(θ^n_i)+dt/2hα^n_i-1/2+dt/h^2Ã^n_i)+1/2W^n_i+dt g^n_i, where θ^n_i is between W^n_i-1 and W^n_i+1. Assume that W^n_∞≤ 1, Using Lemma <ref>, we have |1/2W^n_i+dt g^n_i|≤1/2. Under the CFL condition, all the coefficients of W^n_i-1,W^n_i,W^n_i+1 are nonnegative, then |W^n+1_i| ≤ |W^n_i||(1/2-dt/2hα^n_i+1/2 -dt/2hα^n_i-1/2-2dt/h^2Ã_i) | +|W^n_i-1||dt/2ha^n_i+1/r_i^2dt/2hf^'(θ^n_i)+dt/2hα^n_i-1/2+dt/h^2Ã_i | +|W^n_i+1||-dt/2ha^n_i-1/r_i^2dt/2hf^'(θ^n_i)+dt/2hα^n_i+1/2+dt/h^2Ã_i |+1/2 ≤1/2max(|W^n_i-1|,|W^n_i|,|W^n_i+1|)+1/2. Then W^n+1_∞≤ 1. Let Q(W)=1/2W+dt W(1-W^2), if |W| ≤ 1,dt<1/4, then Q'(W)>0, and |Q(W)|≤1/2, i.e. for any W_i+1≥ W_i, |W_i+1|≤1,|W_i|≤ 1, then Q(W_i+1) ≥ Q(W_i). By direct calculation, we get Q'(W)=1/2+dt(1-3W^2). Since |1-3W^2|≤ 2, if we take 1/4>dt, then Q'(W)=1/2+dt(1-3W^2)>0. Then Q(W) attained the maximum Q(1)=1/2 and the minimum Q(-1)=-1/2, so |Q(W)|≤1/2. Then for W_i+1≥ W_i, we get Q(W_i+1) ≥ Q(W_i). A large number of numerical experiments show that for any initial conditions, the obtained steady-state solutions are monotonic, so we only need to study the monotonic initial conditions. For the monotonic initial conditions, we can get the following BV estimate. Assume W^n is monotonically increasing, then |W^n+1|_BV≤ |W^n|_BV. We rewrite the scheme (<ref>) as Harten's version W_i^n+1 =W_i^n+D_i+1/2(W^n_i+1-W_i^n)-C_i-1/2(W_i^n-W^n_i-1)+dt g^n_i =1/2W_i^n+D_i+1/2(W^n_i+1-W_i^n)-C_i-1/2(W_i^n-W^n_i-1)+1/2W_i^n+dt g^n_i, where D_i+1/2 =dt/h(-1/2a_i^n-f'(θ^n_i)/2r_i^2+1/2α^n_i+1/2)+dt/h^2Ã_i, C_i-1/2 =dt/h(+1/2a_i^n-f'(θ^n_i)/2r_i^2+1/2α^n_i-1/2)+dt/h^2Ã_i, and θ^n_i is between W^n_i-1 and W^n_i+1. Using Lemma <ref>, the term 1/2W_i^n+dt g_i is monotone increase with respect W_i, we define Q_i=1/2W_i^n+dt g_i. Taking W^n+1_i+1-W^n_i and sum, we have W^n+1_i+1-W^n+1_i =1/2(W^n_i+1-W^n_i)+D_i+3/2(W^n_i+2-W^n_i+1)-C_i+1/2(W^n_i+1-W_i^n)+Q_i+1 -D_i+1/2(W^n_i+1-W^n_i)+C_i-1/2(W^n_i-W^n_i-1)-Q_i, and ∑_i |W^n+1_i+1-W^n+1_i| ≤∑_i (1/2-C_i+1/2-D_i+1/2)|W^n_i+1-W_i^n| +∑_i C_i-1/2|W^n_i-W^n_i-1| +∑_i D_i+3/2|W^n_i+2-W^n_i+1| +∑_i (Q_i+1-Q_i). Since Q_i is monotone increase with respect W_i, we have ∑_i Q_i+1-Q_i=(Q_N-Q_0)=1, then ∑_i |W^n+1_i+1-W^n+1_i|≤1/2|W^n|_BV+1 ≤ 2. We need a Lemma to show the L_1 continue in time direction. That is W^m-W^n_1≤√((m-n) Δ t). 
We rewrite the scheme (<ref>) as W_i^n+1-W^n_i =dt/h^2Ã_i(W^n_i+1-2W^n_i+W^n_i-1)+dt/hD_i+1/2(W^n_i+1-W^n_i) -dt/hC_i-1/2(W^n_i-W^n_i-1)+dt g^n_i, where D_i+1/2 =-1/2 a^n_i -1/2 r_i^2 f^'(θ^n_i) +1/2α^n_i+1/2, C_i-1/2 =+1/2 a^n_i +1/2 r_i^2 f^'(θ^n_i) +1/2α^n_i-1/2, and θ_i^n is between W^n_i-1 and W^n_i+1. Multiplying the difference equation by test function ϕ(x) ∈ C_0^∞([-5,5]) and summation by parts, we have h∑_i ϕ_i (W^m_i-W^n_i) = h ∑_i ∑_ℓ=n^m-1ϕ_i (W^ℓ-1_i-W^ℓ_i) =h dt ∑_i ∑_ℓ=n^m-1ϕ_i D_i+1/21/h(W^ℓ+1_i+1-W^ℓ_i) - ϕ_i C_i-1/21/h(W^ℓ+1_i-W^ℓ_i-1) +dt h ∑_i ∑_ℓ=n^m-1 g^ℓ _i ϕ_i + dt h ∑_i ∑_ℓ=n^m-11/h^2Ã_̃ĩϕ_i (W^ℓ_i+1-W^ℓ_i) - 1/h^2Ã_̃ĩϕ_i (W^ℓ_i-W^ℓ_i-1) = dt h ∑_i ∑_ℓ=n^m-1 (ϕ_iD_i+1/2-ϕ_i+1C_i+1/2)1/h(W_i+1-W_i) + h dt ∑_i ∑_ℓ=n^m-11/h(ϕ_i Ã_i-Ã_i+1ϕ_i+1)(W^ℓ_i+1-W^ℓ_i)1/h + h dt ∑_i ∑_ℓ=n^m-1 g_i^ℓϕ_i, where 1/h|ϕ_i+1Ã_i+1-ϕ_i Ã_i| =1/h|(Ã_i+1-Ã_i)ϕ+Ã_i(ϕ_i+1-ϕ_i)| ≤1/h|Ã_i+1-Ã_i||ϕ_i|+ |Ã_i|1/h|ϕ_i+1-ϕ_i| ≤ C_1(ϕ_∞+ϕ_x_∞). Here we use the A_x_∞≤ +∞ (since A is Lipschitz continous) and |ϕ_iD_i+1/2-ϕ_i+1C_i+1/2| ≤ |ϕ_i-ϕ_i+1||D_i+1/2|+|ϕ_i+1||D_i+1/2-C_i+1/2| ≤ C_2. Then we have h∑_i |ϕ_i (W^m_i-W^n_i)| ≤ dt ∑_ℓ=n^m-1∑_i |ϕ_i D_i+1/2-ϕ_i+1C_i+1/2||W_i+1^ℓ-W_i^ℓ| + dt ∑_ℓ=n^m-1∑_i 1/h|Ã_i+1ϕ_i+1-ϕ_i Ã_i||W^ℓ_i+1-W^ℓ_i|+dtC ≤ dt (C_2+C_1) ∑_ℓ=n^m-1 |W^ℓ|_BV+dtC ≤ (C_2+C_1(ϕ_∞+ϕ_x_∞ ))dt(m-n) |W^n|_BV+dtC. Next, introduce the function β(x)={ sgn(∑_i ∈ℤ (W^m_i-W^n_i)χ_i(x) ) , if |x|≤ J-ρ 0 , otherwise . where χ_i(x) is the characteristic function of [x_0+ih,x_0+(i+1)h ) and J∈ℤ. Let ω_ρ(x) be a standard C_0^∞ mollifier given by ω_ρ(x)=1/ρω(x/ρ), where ω(x)={1/Ωexp(1/|x|^2-1), |x|<1 0, |x|≥ 1 . and Ω=∫_0^1 exp(1/|x|^2-1) dx. Let β^ρ =ω_ρ *β, we can check that β^ρ_L^∞≤ 1, β^ρ_x_L^∞≤ O(1/ρ). Taking test function ϕ=β^ρ, we get h ∑_i=-J^J |W_i^m-W_i^n| ≤ ((C_3+C_1)+C_4/ρ) dt (m-n) |W^n|_BV+ dt C ≤C/ρdt(m-n)|W^n|_BV. Taking ρ=√((m-n)dt) and J →∞, we have h ∑_i ∈ℤ |W^m_i-W^n_i| ≤ C √((m-n)dt). Next, we will drive a cell entropy inequality. By the standards process in <cit.> and <cit.>. We use the standard notations u ∨ v=max(u,v), u ∧ v =min(u,v). To simplify the notation, we define the finite difference operators, D^-W_i=1/h(W_i-W_i-1), D^+W_i=1/h(W_i+1-W_i). Let U(W)=|W-c|, F(W)=sgn(W-c)(f(W)-f(c)), where c is a constant. Then the following inequality holds. The cell entropy inequality holds for EYM equation 1/dt(U^n+1_i-U^n_i)+a_i^n/2h(U^n_i+1-U^n_i-1)+1/2 h r_i^2(F^n_i+1-F^n_i-1) -(Ã^n_i1/h^2+α^n/2h)(U^n_i+1-2U^n_i+U^n_i-1) -g^n_i sgn(W^n_i-c) ≤ 0. To simplify the proof, taking the viscosity coefficient as α^n we rewrite the scheme as 1/dt(W^n+1_i-W^n_i)+D^-Φ(W^n_i+1,W^n_i)-Ã_iD^-( D^+W^n_i)-g^n_i =0, where Φ is given by Φ(W_i+1,W_i) = a_i/2(W_i+W_i+1) +1/21/r_i^2(f_i+f_i+1)-α^n/2(W_i+1-W_i). Define H(W^n_i-1,W^n_i,W^n_i+1)=W^n_i-dt D^-Φ(W^n_i+1,W^n_i)+dt Ã_i D^-( D^+W^n_i) +dt g^n_i. It is easy to check that if CFL condition is satisfied, then ∂ H/∂ W_j≥ 0, j=i-1, i, i+1. Consider H(c ∨ W_i-1 ,c ∨ W_i,c ∨ W_i+1) =W_i ∨ c -dt D^-Φ(W_i+1∨ c,W_i∨ c)+dt Ã_iD^-( D^+(W_i∨ c)), then H(c ∨ W^n_i-1,c ∨ W^n_i,c ∨ W^n_i+1)-H(c∧ W^n_i-1,c∧ W^n_i,c∧ W^n_i+1) =|W_i-c|- dt D^-(Φ(W_i+1∨ c,W_i∨ c)-Φ(W_i+1∧ c,W_i ∧ c))+dt Ã_i D^-( D^+(W_i∨ c) - D^+(W_i∧ c ))+dt sgn(W_i-c) g_i. By monstrosity of the scheme, we get H(c ∨ W^n_i-1,c ∨ W^n_i,c ∨ W^n_i+1)-H(c∧ W^n_i-1,c∧ W^n_i,c∧ W^n_i+1) ≥ H(W_i-1^n,W_i^n,W_i+1^n)∨ c- H(W_i-1^n,W_i^n,W_i+1^n)∧ c =|W^n+1-c|, which inserted into Eq. 
(<ref>), we get the cell entropy inequality |W^n+1_i-c|-|W^n_i-c|/dt -Ã_iD^-( D^+(W_i^n∨ c)- D^+(W_i^n∧ c) +D^-(Φ(W^n_i+1∨ c,W^n_i∨ c)-Φ(W_i+1^n ∧ c,W_i^n ∧ c)) -sgn(W^n_j-c)g(W_i^n)≤ 0. Using the following relation f(W∨ c)-f(W∧ c) =sgn(W-c)(f(W)-f(c))=F(W), W∨ c-W∧ c =sgn(W-c)(W-c)=U(W), we get Φ(W_i+1∨ c,W_i ∨ c)-Φ(W_i+1∧ c,W_i ∧ c) =a_i^n/2(U_i^n+U^n_i+1)+1/2r_i^2(F^n_i+F^n_i+1)-α^n/2(U^n_i+1-U^n_i), then, equation (<ref>) can be rewritten as 1/dt(U^n+1_i-U^n_i)+a_i^n/2h(U^n_i+1-U^n_i-1)+1/2 h r_i^2(F^n_i+1-F^n_i-1) -(Ã^n_i1/h^2+α^n/2h)(U^n_i+1-2U^n_i+U^n_i-1) -g^n_i sgn(W^n_i-c) ≤ 0. Define (A_Δ,W_Δ) be the interpolating of degree one using A^n_i and W^n_i, where Δ=(h,dt). W_Δ interpolate at the vertices of each rectangle D^n_i=[x_0+i h,x_0+(i+1)h] × [n dt,(n+1) dt] and A_Δ interpolate at the one dimension domain [x_0+ih,x_0+(i+1)h]. (A_Δ,W_Δ) are piece wise line segment, and we have W_Δ(x,t) =W_i^n+(W^n_i+1-W^n_i)x-ih/h+(W^n+1_i-W^n_i)t-n dt/dt, +(W^n+1_i+1-W^n+1_i-W^n_i+1+W^n_i)x-i h/ht-n dt/dt, A_Δ(x,t_n) =A_i^n+(A^n_i+1-A^n_i)x-ih/h. Let {Δ} be a sequence of democratization parameters tending to zeros. Then there exist a subsequence {Δ_i} such that {W_Δ_i} converges in L^1_loc(Q_T) and point-wise almost everywhere in Q_T to a limit W as i →∞. By Lemma 2, we have W_Δ_∞≤ 1. Using Lemma 3, we have ∫_Q_T|∂_x W_Δ| dx dt ≤∑_i,n∫_D^n_i1/h(1-t-n dt/dt)|W_i+1^n-W^n_i| dxdt +∑_i,n∫_D^n_i1/h(t-n dt/dt)|W_i+1^n+1-W^n+1_i| dxdt ≤dt/2∑_i,n|W_i+1^n-W^n_i|+dt/2∑_i,n|W_i+1^n+1-W^n+1_i| ≤ T|W^0|_BV. Using Lemma 5, we have ∫_Q_T|∂_t W_Δ| dx dt ≤∑_i,n∫_D^n_i1/dt(1-x-ih/h)|W^n+1_i-W^n_i| dxdt +∑_i,n∫_D^n_i1/dtx-i h/h|W^n+1_i+1-W^n_i+1| dx dt ≤h/2∑_i,n|W^n+1_i-W^n_i|+h/2∑_i,n|W^n+1_i+1-W^n_i+1| ≤ h √(T) |W^0|_BV. Then, there is a finite constant C(T,|W^0|_BV)>0 such that W_Δ_L^∞(Q_T) ≤ 1, |W_Δ|_BV(Q_T)≤ C(T,|W^0|_BV), which means {W_Δ} is bounded in BV(D) for any compact set D ⊂ Q_T. Since BV(D) is compactly imbedded into L^1(D), there is a sub-sequences converges in L^1(D) and point-wise almost everywhere in D. Next step, we use the diagonal process to construct a sequence converges in L^1_loc(Q_T) and point-wise almost everywhere in Q_T to a limit W, W(x,t) ∈ L^∞(Q_T) ∩ BV(Q_T). Remark It is easy to show that W satisfy the entropy inequality (<ref>). Taking ϕ∈ C_0^∞(Q_T),ϕ≥ 0, multiplying the cell inequality (<ref>) in Lemma 5 by ϕ dt h and summation by parts, we get h dt ∑_i,n1/dt U^n_i(ϕ_i^n-1-ϕ^n_i) +(a^n_i-1ϕ^n_i-1- a^n_i+1ϕ^n_i+1)1/2hU^n_i+F^n_i1/2h(ϕ_i-1^n/r_i-1^2-ϕ_i+1^n/r_i+1^2) +h dt∑_i,n1/h(Ã^n_i+1ϕ^n_i+1 -Ã^n_iϕ^n_i )1/h(U^n_i+1-U^n_i) -h dt ∑_i,nhα/2U^n_i 1/h^2(ϕ^n_i+1-2ϕ^n_i+ϕ^n_i-1)+ g^n_i sgn(W^n_i-c)ϕ^n_i ≤ 0. Taking limit h → 0, we have the entropy inequality ∬_Q_T |W-c|ϕ_t +(a(x)ϕ)_x |W-c|+(1/r^2ϕ)_x sgn(W-c)(f(W)-f(c)) -(Ãϕ)_x|W-c|_x +sgn(W-c) g(W) ϕ dx dt ≥ 0. Finally we have The sequence {A_Δ,W_Δ}, which is constructed from scheme (<ref>), converges in L^1_loc(Q_T) and point-wise almost everywhere in Q_T to a BV entropy weak solution of (18)-(20). Remark Since A_i can be solved by scheme (<ref>), the convergence of numerical for ODE is trivial, so its proof is ignored here. § WENO SCHEMES FOR THE YANG-MILLS EQUATION §.§ A new WENO approximation for the diffusion term In this section, we construct a fourth order WENO approximation for W_xx. Given a uniform grid {x_i}_i=1^N+1⊂ [-5,5] with a constant mesh size h=x_i+1-x_i. 
Consider a real value function W(x) defined on interval [-5,5] and denote W_i=W(x_i), we would like to approximate the second order derivative on a 5-point large stencil S={x_i-2, x_i-1, x_i, x_i+1, x_i+2}. First, let's recall the WENO scheme of Liu, Shu and Zhang <cit.>. Consider a degenerate parabolic equations u_t=g(u)_xx. One can construct a conservative finite difference scheme for (<ref>), written in the form d/dtu_i(t)=1/h^2(ĝ_i+1/2-ĝ_i-1/2), where u_i(t) is the numerical approximation to the point value u(x_i,t) of the solution to (<ref>), and the numerical flux function is given by ĝ_i+1/2=ĝ(u_i-r,..,u_i+s). The construction of WENO schemes in this section consists of the following steps. 1.Taking a big stencil S={ x_i-r,...,x_i+r+1}. 2. We choose s consecutive small stencils, S^(m)={x_i-r+m,...,x_i+r+m+2-s}, m=0,...,s-1, and construct a series of lower order linear schemes with their numerical fluxes denoted by ĝ^(m)_i+1/2. Here, s can be chosen to be between 2 and and 2r + 1, corresponding to each small stencil containing 2r + 1 to 2 points, respectively. 3. We find the linear weights, namely, constants d_m, such that the flux on the big stencil is a linear combination of the fluxes on the small stencils with d_m as the combination coefficients ĝ_i+1/2=∑_m=0^s-1 d_m ĝ^(m)_i+1/2. For fourth order scheme, d_0=-1/12, d_1=7/6, d_2=-1/12. 4. According to the standard procedure in <cit.>, we can compute nonlinear weights ω_0,ω_1,...,ω_s-1. Finally, we have ĝ_i+1/2=∑_m=0^s-1ω_m ĝ^(m)_i+1/2. It is exceedingly expensive to compute the six nonlinear weights required to build a 4th-order WENO scheme. Can we construct a cheaper WENO4 scheme for equation (<ref>)? In this section, we offer a new technique based on the following simple intuitive: three points scheme keep the total variation non-increase. Assume g'(u) ≥ 0, consider three points scheme for equation (<ref>), 1/dt(u_i^n+1-u^n_i)=1/h^2(g_i+1^n-2g_i^n+g_i-1^n) , which can be written as u^n+1_i =u^n_i-dt/h^2g'(θ^n_i-1/2)(u^n_i-u^n_i-1)+dt/h^2g'(θ^n_i+1/2)(u^n_i+1-u^n_i), where θ^n_i-1/2 is between u^n_i-1 and u^n_i. Equation (<ref>) satisfies Harten's Lemma, so TV(u^n+1)≤ TV(u^n), which means the three points scheme has no oscillation. So, we can divided the big stencil S into two sub-stencils {S^0,S^1}, where S^0={x_i-1,x_i, x_i+1},S^1={ x_i-2, x_i, x_i+2}. The fourth-order approximation ĝ_xx,i=g_xx(x_i)+O(h^4) is built through the convex combinations of ĝ^(k)_xx,i, defined in each one of the stencils S^k: ĝ_xx,i=ω_0 ĝ^(0)_xx,i+ω_1 ĝ^(1)_xx,i. If there is shock in big stencil S,one can use more weights of stencil S^0 and less weights of S^1. In this way, we can avoid the oscillation. For EYM equation, we need to approximate W_xx using WENO scheme. We described this process as following. The fourth-order approximation Ŵ_xx,i=W_xx(x_i)+O(h^4) is built through the convex combinations of Ŵ^(k)_xx,i, defined in each one of the stencils S^k: Ŵ_xx,i=ω_0 Ŵ^(0)_xx,i+ω_1 Ŵ^(1)_xx,i, where Ŵ^(0)_xx,i =1/h^2(W_i+1-2W_i+W_i-1), Ŵ^(1)_xx,i = 1/4h^2(W_i+2-2W_i+W_i-2) . The W_xx can be approximated in the big stencil Ŵ_xx,i= 1/12 h^2(-W_i-2+16 W_i-1-30 W_i+16 W_i+1-W_i+2), and Ŵ_xx,i=d_0 W_xx,i^(0)+ d_1 W_xx,i^(1), where d_k are linear weights d_0=4/3, d_1=-1/3. To handle this negative weights, we consider the following standards procedure γ̃^+_k =1/2(d_k+θ |d_k|),k=0,1,θ=2, γ̃^-_k = γ̃^+_k -d_k, σ^+ =γ̃^+_0+γ̃^+_1=13/6, σ^- =γ̃^-_0+γ̃^-_1 =7/6. 
Then we obtain the following γ^+_0 =γ̃^+_0/σ^+=12/13, γ^+_1 =γ̃^+_1/σ^+=1/13, γ^-_0 =γ̃^-_0/σ^-=4/7, γ^-_1 =γ̃^-_1/σ^-=3/7. The smoothness indicators β_0,β_1 are computed as β_k=∫_x_i-1^x_i+1 h (d p_k/d x)^2 dx + ∫_x_i-1^x_i+1 h^3 (d^2 p_k/d x^2)^2 dx, k=0, 1, β_0 =1/2( W_i+1-W_i-1)^2+8/3(W_i+1 -2W_i+W_i-1 )^2, β_1 =1/8(W_i+2-W_i-2)^2+1/6(W_i+2-2W_i+W_i-2)^2. We obtained the nonlinear weights by α^±_k =γ^±_k/(ϵ +β_k)^2, ω^±_k =α^±_k/α^±_0 + α^±_1, ω_k =σ^+ ω^+_r -σ^- ω^-_k, k=0,1 where ϵ is used to avoid the division by zero in the denominator. We take ϵ=10^-10. Then we have W_xx=ω_0 W^(0)_xx + ω_1 W^(1)_xx. We drive a sufficient condition for fourth order convergence of Eq. (<ref>). Adding and subtracting ∑_k=0^1 d_k Ŵ^(k)_xx from Eq. (<ref>) give Ŵ_xx =∑_k=0^1 d_k Ŵ^(k)_xx + ∑_k=0^1 (ω_k -d_k) Ŵ^(k)_xx , where the first term on the right hand side produces the 4th order accurate. The second term must be at least O(h^4) in order for Ŵ_xx to be approximated at 4th order. Noting that the W^(k)_xx are 2nd order approximations of Ŵ_xx(x_i), we have ∑_i=0^1 (ω_k-d_k) Ŵ^(k)_xx = ∑_i=0^1 (ω_k-d_k) (W_xx+O(h^2)) = ∑_i=0^1 (ω_k-d_k) W_xx (x_i) + ∑_i=0^1 (ω_k-d_k) (O(h^2)), where the first term on the right hand side vanishes due to the normalization of the weights. Thus, it is sufficient to require ω_k=d_k +O(h^2). By Taylor expansion, we have β_0 =8/3(W_j+1-2W_j+W_j-1)^2+1/2(W_j+1-W_j-1)^2 =8/3( h^2 W_xx(x_j) )^2+1/2(2h W_x(x_j)+1/3h^3 W_xxx(x_j))^2+O(h^6), β_1 =1/6(W_j+2-2W_j+W_j-2)^2+1/8(W_j+2-W_j-2)^2 =1/6( (2h)^2W_xx(x_j) )^2+1/8( 4h W_x(x_j) +1/3(2h)^3W_xxx(x_j) )^2+O(h^6). If W_x(x_j) ≠ 0, β_0=2h^2(W_x(x_j))^2[1+O(h^2)], β_1=2h^2(W_x(x_j))^2[1+O(h^2)]. If W_x(x_j)=0,W_xx(x_j) ≠ 0, β_0=8/3 h^4 (W_xx (x_j))^2[1+O(h^2)],β_1=4/6 h^4 (W_xx (x_j))^2[1+O(h^2)]. Then we have β_k=D(1+O(h^2) ), k=0,1, where D is a constant independent of the k. By the Taylor expansion γ^±_k /(ε +β_k)^2 =γ^±_k/D^2 ( 1+ O(h^2) )^2, =γ^±_k/D^2(1+O(h^2) ), then γ_k^± =ω^±_k ( ∑_ℓ =0^1 γ^±_ℓ/( ε +β_ℓ)^2)(ε+β_k)^2 =ω^±_k(1/D^2(1+O(h^2) ) )(D(1+O(h^2)) )^2 =ω^±_k+ O(h^2). By the definition of ω_k ω_k =σ^+ ω_k^+ - σ^- ω_k^- =σ^+(γ^+_k+O(h^2)) -σ^-(γ^-_k+O(h^2)) =d_k+O(h^2). To achieve fourth order accuracy in critical points, we fix the nonlinear weights ω_k,k=0,1 by a mapping function <cit.> g_k(ω)=ω(d_k+d_k^2-3d_kω+ω^2)/d_k^2+ω(1-2d_k). The mapped nonlinear weights are given by α_k =g_k(ω_k),k=0,1, ω_k^new =α_k/α_0+α_1. Then, we replace the original nonlinear weights in (<ref>) by ω_k^new. This method worked well. On the other hand,we propose a simple modified limiting procedure: β_k ={ 0, R(β) <ξ β_k, otherwise . where R(β)=max_0≤ k≤ 1β_k, and ξ can be chosen suitably. The basic idea behind this method is that in the smooth region, we just need the nonlinear weights to be the linear weights d_k.Then there is no accuracy loss phenomena in the critical point. §.§ WENO5 scheme for the W_x and convection term We give the left bias fifth order finite difference WENO approximate of the first derivative W_x at the grid point x_j: W_x,j^- =1/h(Ŵ_j+1/2-Ŵ_j-1/2). The numerical flux Ŵ_j is given by Ŵ_j+1/2=ω_1 Ŵ^(1)_j+1/2+ω_2 Ŵ^(2)_j+1/2+ ω_3 Ŵ^(3)_j+1/2, where Ŵ^(i)_j+1/2,i=1,2,3, are three third order fluxes on the three difference small stencils given by Ŵ^(1)_j+1/2 =1/3W_j-2-7/6W_j-1+11/6W_j, Ŵ^(2)_j+1/2 =-1/6W_j-1+5/6W_j+1/3W_j+1, Ŵ^(3)_j+1/2 =1/3W_j+5/6W_j+1-1/6W_j+2 . The nonlinear weights ω_i are given by ω_i =ω̃_i/∑_k=1^3ω̃_k, ω̃_k =γ_k/ ( ε+β_k )^2, with the linear weights γ_k given by γ_1 =1/10, γ_2=6/10, γ_3=3/10. 
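To make the construction above concrete, the following sketch assembles the fourth-order WENO approximation of W_xx from the two sub-stencil differences, the smoothness indicators β_0, β_1 and the split weights γ^±_k, σ^± given above, with ε = 10^-10 as in the text. Function names, loop bounds and the test profile are illustrative choices, not the authors' implementation; only the formulas quoted above are used.

    import numpy as np

    def weno4_wxx(W, h, eps=1e-10):
        # Fourth-order WENO approximation of W_xx at interior points, combining the
        # 3-point stencil S^0 = {i-1, i, i+1} and the sparse stencil S^1 = {i-2, i, i+2}
        # with the split positive/negative weights described above.
        N = len(W)
        Wxx = np.zeros(N)
        gp = np.array([12/13, 1/13])   # gamma_k^+
        gm = np.array([4/7, 3/7])      # gamma_k^-
        sp, sm = 13/6, 7/6             # sigma^+, sigma^-
        for i in range(2, N - 2):
            W0 = (W[i+1] - 2*W[i] + W[i-1]) / h**2        # candidate on S^0
            W1 = (W[i+2] - 2*W[i] + W[i-2]) / (4*h**2)    # candidate on S^1
            b0 = 0.5*(W[i+1] - W[i-1])**2 + (8/3)*(W[i+1] - 2*W[i] + W[i-1])**2
            b1 = (1/8)*(W[i+2] - W[i-2])**2 + (1/6)*(W[i+2] - 2*W[i] + W[i-2])**2
            beta = np.array([b0, b1])
            ap, am = gp/(eps + beta)**2, gm/(eps + beta)**2
            w = sp*ap/ap.sum() - sm*am/am.sum()           # tends to (4/3, -1/3) when smooth
            Wxx[i] = w[0]*W0 + w[1]*W1
        return Wxx

    # rough accuracy check on a smooth profile without critical points
    for n in (40, 80, 160):
        x = np.linspace(-5, 5, n + 1)
        err = np.abs(weno4_wxx(np.exp(x), x[1] - x[0]) - np.exp(x))[2:-2].max()
        print(n, err)

In smooth regions the nonlinear weights fall back to the linear pair (4/3, -1/3), so the sketch reproduces the five-point fourth-order formula; near a shock the weight of the sparse stencil S^1 is suppressed, which is the intended non-oscillatory behaviour.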
The smoothness indicators β_k are given by β_1 =13/12(W_j-2-2W_j-1+W_j )^2    +1/4(W_j-2-4W_j-1+3W_j )^2, β_2 =13/12(W_j-1-2W_j+W_j+1 )^2    +1/4(W_j-1-W_j+1)^2, β_3 =13/12(W_j-2W_j+1+W_j+2 )^2    +1/4(3W_j-4W_j+1+W_j+2)^2. The right bias fifth order finite difference WENO approximate W^+_x is mirror symmetric to that for W^-_x. To achieve fifth order accuracy in the critical point, one can fix the nonlinear weights ω_k by a mapping described in Eq. (<ref>). For more details, one can see <cit.>. Another new idea is given by R. Borges <cit.>, which is called WENO-Z scheme. The novel method is to use the big stencil to construct a new smoothness indicator of higher order than the classical smoothness indicators β_k. The new indicator is denoted as τ_5 τ_5=|β_3-β_1|. We define the new smoothness indicators β_k^z=( β_k+ε/β_k+τ_5+ε), k=1,2,3, and the new WENO weights ω^z_k as ω^z_k =α^z_k/∑_i=1^3α^z_i α^z_i =d_i(1+(τ_5/β_i+ε)^q), i=1,2,3, where ε=10^-10,q=2. Next, we consider the WENO5 approximations for f(W)_x. Since f'(W)=(1-W^2)^2 ≥ 0, then f(W)_x at x_j can be represented as <cit.> 1/h(f̂_j+1/2-f̂_j-1/2). The numerical flux f̂_j+1/2 can be reconstructed by the point value f(W_j) as the following procedure f̂_j+1/2=ω_1 f̂^(1)_j+1/2+ω_2 f̂^(2)_j+1/2+ ω_3 f̂^(3)_j+1/2, where f̂^(i)_j+1/2,i=1,2,3, are three third order fluxes on the three difference small stencils given by f̂^(1)_j+1/2 =1/3f_j-2-7/6f_j-1+11/6f_j, f̂^(2)_j+1/2 =-1/6f_j-1+5/6f_j+1/3f_j+1, f̂^(3)_j+1/2 =1/3f_j+5/6f_j+1-1/6f_j+2 . The nonlinear weights ω_i are given by ω_i =ω̃_i/∑_k=1^3ω̃_k, ω̃_k =γ_k/ ( ε+β_k )^2, with the linear weights γ_k given by γ_1 =1/10, γ_2=6/10, γ_3=3/10. The smoothness indicators β_k are given by β_1 =13/12(f_j-2-2f_j-1+f_j )^2    +1/4(f_j-2-4f_j-1+3f_j )^2, β_2 =13/12(f_j-1-2f_j+f_j+1 )^2    +1/4(f_j-1-f_j+1)^2, β_3 =13/12(f_j-2f_j+1+f_j+2 )^2    +1/4(3f_j-4f_j+1+f_j+2)^2. We can fix this nonlinear weights using (<ref>). §.§ Explicit/Implicit WENO scheme for the Yang-Mills equation In this section we construct the fourth order WENO scheme for the convection-diffusion equation W_t+a(x)W_x+1/r^2f(W)_x=ÃW_xx+(1-W^2)W. Define a_i^+=1/2(a_i+ |a_i| ),a_i^-=1/2(a_i- |a_i| ). We have the following explicit WENO scheme Ã^n_iŴ^n_xx,i+(1-(W_i^n)^2)W_i^n= 1/dt( W_i^n+1-W_i^n ) +a^+_iŴ_i^n- + a^-_iŴ_i^n++1/r_i^2h(f̂^n_i+1/2-f̂^n_i-1/2), where f̂^n_i±1/2 is constructed in (<ref>). The CFL condition is given by dt < 0.3 h^2/2max_i A_i+max_i|c_i|h, where c=2A-1+1/r^2(1-W^2)^2. To accelerate decay, we design an implicit scheme for EYM equations as following Ã^n_iŴ^n+1_xx,i+(1-(W_i^n)^2)W_i^n= 1/dt( W_i^n+1-W_i^n ) +a^+_iŴ_i^n- + a^-_iŴ_i^n++1/r_i^2h(f̂^n_i+1/2-f̂^n_i-1/2), where Ŵ^n+1_xx,i can be approximated by Ŵ^n+1_xx,i=ω_0 1/h^2(W_i+1^n+1 -2W_i^n+1+W^n+1_i-1) + ω_1 1/4h^2(W_i+2^n+1 -2W_i^n+1+W^n+1_i-2), and the nonlinear weights ω_k, k=0,1 are calculated by W^n as (<ref>). § WENO SCHEMES FOR THE EINSTEIN CONSTRAINT EQUATION §.§ WENO type Adams solver First step, we consider a simple ODE problem y_x =f(x), y(x_0) =y_0, where f(x) is discontinued or very sharp at somewhere. The high order integral could cause numerical oscillation near the discontinued point. Integral the equation in interval I_i+1/2=[x_i,x_i+1], then y_i+1-y_i=∫_x_i^x_i+1 f(x) dx, where ∫_x_i^x_i+1 f(x) dx can be approximated by the WENO integral as following. We chose two sub stencils S^0={ x_i-2, x_i-1, x_i},S^1={ x_i-1, x_i, x_i+1}. There is a unique polynomial p_r(x) of degree at most 2 which interpolates f(x) at the nodes in S^r, r=0,1. 
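The fifth-order reconstruction of W_x given earlier for the convection terms can be sketched in the same spirit. The routine below (names and test profile chosen here) evaluates the left-biased flux Ŵ_{j+1/2} from the three third-order candidates with either the classical weights or the WENO-Z weights built from τ_5 = |β_3 - β_1| with q = 2, and then forms W_x^- = (Ŵ_{j+1/2} - Ŵ_{j-1/2})/h; the right-biased W_x^+ is obtained by mirroring the stencils, as noted above.

    import numpy as np

    def weno5_flux_left(W, eps=1e-10, weno_z=False):
        # Left-biased numerical flux: Wh[j] approximates W at x_{j+1/2} from the
        # three candidate stencils {j-2..j}, {j-1..j+1}, {j..j+2}.
        N = len(W)
        Wh = np.zeros(N)
        gamma = np.array([0.1, 0.6, 0.3])
        for j in range(2, N - 2):
            f1 = ( 1/3)*W[j-2] - (7/6)*W[j-1] + (11/6)*W[j]
            f2 = (-1/6)*W[j-1] + (5/6)*W[j]   + ( 1/3)*W[j+1]
            f3 = ( 1/3)*W[j]   + (5/6)*W[j+1] - ( 1/6)*W[j+2]
            b1 = (13/12)*(W[j-2] - 2*W[j-1] + W[j])**2 + 0.25*(W[j-2] - 4*W[j-1] + 3*W[j])**2
            b2 = (13/12)*(W[j-1] - 2*W[j] + W[j+1])**2 + 0.25*(W[j-1] - W[j+1])**2
            b3 = (13/12)*(W[j] - 2*W[j+1] + W[j+2])**2 + 0.25*(3*W[j] - 4*W[j+1] + W[j+2])**2
            beta = np.array([b1, b2, b3])
            if weno_z:   # WENO-Z weights with tau_5 = |beta_3 - beta_1| and q = 2
                alpha = gamma*(1.0 + (abs(beta[2] - beta[0])/(beta + eps))**2)
            else:        # classical nonlinear weights
                alpha = gamma/(eps + beta)**2
            w = alpha/alpha.sum()
            Wh[j] = w[0]*f1 + w[1]*f2 + w[2]*f3
        return Wh

    def weno5_dx_minus(W, h, **kw):
        Wh = weno5_flux_left(W, **kw)
        dW = np.zeros_like(W)
        dW[3:-2] = (Wh[3:-2] - Wh[2:-3])/h
        return dW

    # error of W_x^- on a smooth profile
    x = np.linspace(-5, 5, 201)
    print(np.abs(weno5_dx_minus(np.sin(x), x[1] - x[0]) - np.cos(x))[3:-2].max())

The same reconstruction applied to the point values f(W_j) gives the flux f̂_{j+1/2} for the convection term, since f'(W) = (1-W^2)^2 ≥ 0 makes the left-biased stencil the upwind one.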
We denote the integral of p_r(x) on I_i+1/2 by J^(r) ,r=0,1 J^(0)=h/12(23 f_i-16 f_i-1+5f_i-2), J^(1)=h/12(8f_i-f_i-1+5f_i+1). In the large stencil S={ x_i-2, x_i-1, x_i, x_i+1}, the integral of ∫_x_i^x_i+1 f(x) dx can be approximated with the linear coefficients J=h/24(19 f_i-5 f_i-1+f_i-2 +9 f_i+1) . The WENO integration would take a convex combination of J^(r) defined above as a new approximation to the integral ∫_I_i+1/2 f(x) dx J=ω_0 J^(0)+ω_1J^(1). We ask the nonlinear ω_r ≥ 0 and ω_0+ω_1=1 for stability and consistency. We know that for smooth g(x), then J=d_0 J^(0)+d_1 J^1 =∫_I_i+1/2 f(x) dx+ O(h^2k-1), Where d_0=1/10,d_1=9/10. In the smooth case, we hope to have ω_r=d_r+O(h^k-1),r=0,1 so that 5th order accuracy can be achieved for the integral. When the function g is discontinued at one stencil, we ask the corresponding weights ω_r to essentially 0 to avoid oscillation. We can construct the nonlinear weights as following: First, we construct the smoothness indicators in every small stencil S_i^r, r=0,1, β_r=∫_x_i-1/2^x_i+1/2 h (d p_r/d x)^2 + h^3 (d^2 p_r/d x^2)^2 dx . Then we have β_0 =1/4( 3 f_i-4 f_i-1+f_i-2)^2+13/12(f_i-2f_i-1+f_i-2)^2, β_1 =1/4(f_i+1-f_i-1)^2+13/12(f_i+1-2f_i+f_i-1)^2, or β_r=∫_x_i^x_i+1 h (d p_r/d x)^2 + h^3 (d^2 p_r/d x^2)^2 dx . Then we have β_0 =( 2 f_i-3f_i-1+f_i-2)^2+13/12(f_i-2f_i-1+f_i-2)^2, β_1 =(f_i+1-f_i)^2+13/12(f_i+1-2f_i+f_i-1)^2. Second step, we construct α_r, r=0,1. α_r=d_r/(ε +β_r)^2, r=0,1. Finally, we get the nonlinear weights for central WENO integral ω_r=α_r/α_0+α_1,r=0,1. y_i+1 -y_i =ω_0 h/12 (23f_i -16 f_i-1 + 5 f_i-2 ) +ω_1 h/12 (8f_i - f_i-1 + 5 f_i+1 ). In the smooth region ω_k=d_k, which is the familiar Adams-Moulton 4 scheme: y_i+1-y_i =h/24(19 f_i+9 f_i+1-5f_i-1+f_i-2). At the left boundary, we use the following scheme y_i+1-y_i=h/24(19 f_i+1 +9f_i -5 f_i+2 +f_i+3),i=1. §.§ Three sub stencils WENO integration In this section, we design WENO integration on three sub stencils. Consider sub stencils S^0={ x_i-2,x_i-1}, S^1={x_i-1,x_i}, S^2={x_i, x_i+1}. Denote p_k(x),k=0,1,2 be the first order Interpolation polynomials of f(x) at each sub stencils and J^k, k=0,1,2 are the integral of p_k(x) on the interval [x_i, x_i+1], J^0 =∫_x_i^x_i+1 p_0(x) dx =h/2(5f_i-1-3f_i-2), J^1 =∫_x_i^x_i+1 p_1(x) dx =h/2(-f_i-1+3f_i), J^2 =∫_x_i^x_i+1 p_2(x) dx =h/2(f_i+3f_i+1) . Integrate f(x) on the large stencils, we have J =h/24(f_i-2-5f_i-1+19f_i+9f_i+1) =d_0 J^0+d_1 J^1+d_2 J^2, where the linear weights d_k, k=0,1,2 are given by d_0 =-1/36, d_1 =10/36, d_2 =27/36. To handle this negative weights, we consider the following standards procedure. We define γ̃^+_k =1/2(d_k +θ |d_k|),θ=3,k=0,1,2 γ̃^-_k =γ̃^+_k-d_k, and γ̃_0^+ =1/36, γ̃_0^- =2/36 , γ̃_1^+ =20/36, γ̃_1^- =10/36, γ̃_2^+ =54/36, γ̃_2^- =27/36. Define σ^± as follows σ^+ =∑_ℓ=0^2 γ̃_ℓ^+=75/36, σ^- =∑_ℓ=0^2 γ̃_ℓ^-=39/36. Define γ_k^± =γ̃_k^±/σ^±, then γ^+_0 =1/75 , γ^-_0=2/39, γ^+_1 =20/75 , γ^-_1=10/39, γ^+_2 =54/75 , γ^-_1=27/39. The smoothness indicators β_k ,k=0,1,2 are given by β_0 =(f_i-1-f_i-2)^2 , β_1 =(f_i-f_i-1)^2 , β_2 =(f_i+1-f_i)^2 . The nonlinear weights are computed by α_k^± =γ^±_k/(ε +β_k)^2, ω^±_k =α_k^±/∑_j=0^2 α_j^±. Then ω_k=σ^+ ω_k^+-σ^-ω_k^-, k=0,1,2. Finally, we have a WENO solver for the ODE y_x=f(x): y_i+1-y_i =ω_0 J^0+ω_1 J^1+ω_2 J^2 =ω_0 h/2(5f_i-1-3f_i-2)+ω_1 h/2(-f_i-1+3f_i) +ω_2 h/2(f_i+f_i+1). §.§ WENO type Adams solver for constraint equations We define the auxiliary function , q, S as follows =2W_r^2, S =1-1/r^2(1-W^2)^2, q_x =1+. 
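A compact sketch of the two-stencil WENO-type Adams update derived above, applied to y' = f(x) with a sharp integrand, is given below. The linear weights are (1/10, 9/10), the first two steps use the downwind boundary formula quoted for i = 1 (applied here to both start-up steps), and the names and test profile are illustrative choices rather than the authors' code.

    import numpy as np

    def weno_adams_increment(f, i, h, eps=1e-10):
        # y_{i+1} - y_i for y' = f(x), built from the quadratures on {i-2, i-1, i}
        # and {i-1, i, i+1} with linear weights d_0 = 1/10, d_1 = 9/10.
        J0 = h/12*(23*f[i] - 16*f[i-1] + 5*f[i-2])
        J1 = h/12*( 8*f[i] -    f[i-1] + 5*f[i+1])
        b0 = 0.25*(3*f[i] - 4*f[i-1] + f[i-2])**2 + (13/12)*(f[i] - 2*f[i-1] + f[i-2])**2
        b1 = 0.25*(f[i+1] - f[i-1])**2            + (13/12)*(f[i+1] - 2*f[i] + f[i-1])**2
        a0, a1 = 0.1/(eps + b0)**2, 0.9/(eps + b1)**2
        return (a0*J0 + a1*J1)/(a0 + a1)

    x = np.linspace(0, 2, 401)
    h = x[1] - x[0]
    f = np.tanh(50*(x - 1.0))                 # sharp transition at x = 1
    y = np.zeros_like(x)
    for i in (0, 1):                          # start-up: linear downwind formula
        y[i+1] = y[i] + h/24*(19*f[i+1] + 9*f[i] - 5*f[i+2] + f[i+3])
    for i in range(2, len(x) - 1):
        y[i+1] = y[i] + weno_adams_increment(f, i, h)

    exact = np.log(np.cosh(50*(x - 1.0)))/50  # antiderivative of the test integrand
    print(np.abs(y - (exact - exact[0])).max())

In the smooth parts of the integrand the nonlinear weights reduce to (1/10, 9/10) and the step is the familiar Adams-Moulton formula; near the sharp transition the weight of the stencil crossing it is suppressed, which is the purpose of the construction.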
In this section, we consider the Einstein constraint equation (Ae^q)_x =S e^q. Integrate in the interval I_i+1/2=[x_i,x_i+1], A_i+1 e^q_i+1- A_i e^q_i = ∫_x_i^x_i+1 S e^q dx . ∫_x_i^x_i+1 S e^q dx can be approximate by the WENO described above. We chose two sub stencils S^0={ x_i-2, x_i-1, x_i},S^1={ x_i-1, x_i, x_i+1}. There is a unique polynomial p_r(x) of degree at most 2 which interpolates g:=S e^q at the nodes in S^r, r=0,1. We denote the integral of p_r(x) on I_i+1/2 by J^(r) ,r=0,1, J^(0)=h/12(23 g_i-16 g_i-1+5g_i-2), J^(1)=h/12(8g_i-g_i-1+5g_i+1). In the large stencil S={ x_i-2, x_i-1, x_i, x_i+1}, the integral of ∫_x_i^x_i+1 g dx can be approximated with the linear coefficients J=h/24(19 g_i-5 g_i-1+g_i-2 +9 g_i+1) . The WENO integration would take a convex combination of J^(r) defined above as a new approximation to the integral ∫_I_i+1/2 g(u,x) dx J=ω_0 J^(0)+ω_1J^(1). We ask the nonlinear ω_r ≥ 0 and ω_0+ω_1=1 for stability and consistency. We know that for smooth g(x), then J=d_0 J^(0)+d_1 J^1 =∫_I_i+1/2 g(x) dx+ O(h^2k-1), where d_0=1/10,d_1=9/10. In the smooth case, we hope to have ω_r=d_r+O(h^k-1),r=0,1 so that 5th order accuracy can be achieved for the integral. When the function g is discontinued at one stencil, we ask the corresponding weights ω_r to essentially 0 to avoid oscillation. We can construct the nonlinear weights as following. First ,we construct the smoothness indicators in every small stencil S_i^r, r=0,1, β^r=∫_x_i-1/2^x_i+1/2 h (d p_r/d x)^2 + h^3 (d^2 p_r/d x^2)^2 dx . Then we have β_0 =1/4( 3 g_i-4 g_i-1+g_i-2)^2+13/12(g_i-2g_i-1+g_i-2)^2, β_1 =1/4(g_i+1-g_i-1)^2+13/12(g_i+1-2g_i+g_i-1)^2. Second step, we construct α_r, r=0,1, α_r=d_r/(ε +β_r)^2, r=0,1. Finally, we get the nonlinear weights for central WENO integral ω_r=α_r/α_0+α_1,r=0,1, A_i+1 -A_i e^q_i-q_i+1 =ω_0 h/12 (23S_i e^q_i-q_i+1 -16 S_i-1 e^q_i-1-q_i+1 + 5 S_i-2 e^q_i-2-q_i+1 ) +ω_1 h/12 (8S_i e^q_i-q_i+1 - S_i-1 e^q_i-1-q_i+1 + 5 S_i+1 ), where the q_j-q_i=∫_x_i ^x_j 1+ dx and the integral can be approximated by the WENO procedure described above. Remark To approximate the =2/r^2W_x^2, we only use linear scheme. However, in order to avoid possible oscillations, we follow a simple "min-mod" principle. We define the numerical approximation of W_x^2 in x_i is Ŵ_x,i^2,which can be given by Ŵ_x,i^2=min((W_x,i^-)^2,(W_x,i^+)^2, (W^c_x,i)^2 ), where W^c_x,i=1/2(W_x,i^- +W_x,i^+), W^-_x,i is the left bias fifths order linear approximation of W_x and W^+_x,i is the right bias fifths order approximation. §.§ WENO-Admas schemes in three sub stencils Define g_i-2 =S_i-2 e^q_i-2-q_i+1, g_i-1 =S_i-1e^q_i-1-q_i+1, g_i =S_i e^q_i-q_i+1 , g_i+1 =S_i+1. Then we have the three sub stencils WENO type Admas schemes A_i+1-A_i e^q_i-q_i+1 =ω_0 h/2(5g_i-1-3g_i-2)+ω_1 h/2(-g_i-1+3g_i) +ω_2 h/2(g_i+g_i+1), where the nonlinear weights ω_k are computed as (<ref>) and q_j-q_i=∫_x_i^x_j 1+ dx are calculated by the method described in (<ref>). At the left boundary, we use the following linear scheme A_i+1-A_i e^q_i-q_i+1=h/24(19 S_i+1 +9S_ie^q_i-q_i+1 -5 S_i+2e^q_i+2-q_i+1 +S_i+3e^q_i+3-q_i+1),i=1. We combine (<ref>) and (<ref>) together A_i+1-A_i e^q_i-q_i+1 =ω_0 h/2(5g_i-1-3g_i-2)+ω_1 h/2(-g_i-1+3g_i) +ω_2 h/2(g_i+g_i+1), Ã_iŴ^n+1_xx,i+(1-(W_i^n)^2)W_i^n= 1/dt( W_i^n+1-W_i^n ) +a^+_iŴ_i^n- + a^-_iŴ_i^n++1/r_i^2h(f̂^n_i+1/2-f̂^n_i-1/2). § NUMERICAL EXPERIMENTS In this section, we provide numerical experiments to demonstrate the effect of our methods. 
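A minimal sketch of the marching step for the constraint equation (A e^q)_x = S e^q with the two-stencil weights is shown below. It takes S and q on the grid as inputs; in the full scheme q itself is obtained by integrating 1 plus the auxiliary quantity 2W_x^2/r^2 mentioned in the remark above with the same WENO quadrature, and everything is coupled to the W update. Names are chosen here, the left boundary value is taken as 1 for the test, and the check uses the simple case S = 1, q = x, whose exact solution is A identically equal to 1.

    import numpy as np

    def march_constraint(S, q, h, eps=1e-10):
        # A_{i+1} = A_i e^{q_i - q_{i+1}} + WENO-weighted quadrature of S e^{q - q_{i+1}},
        # with the two-stencil linear weights (1/10, 9/10).
        N = len(S)
        A = np.ones(N)
        for i in range(N - 1):
            w = np.exp(q - q[i+1])           # integrating factors; g_j = S_j w_j
            g = S*w
            if i < 2:                        # start-up: linear downwind formula
                A[i+1] = A[i]*w[i] + h/24*(19*S[i+1] + 9*g[i] - 5*g[i+2] + g[i+3])
                continue
            J0 = h/12*(23*g[i] - 16*g[i-1] + 5*g[i-2])
            J1 = h/12*( 8*g[i] -    g[i-1] + 5*S[i+1])   # note g_{i+1} = S_{i+1}
            b0 = 0.25*(3*g[i] - 4*g[i-1] + g[i-2])**2 + (13/12)*(g[i] - 2*g[i-1] + g[i-2])**2
            b1 = 0.25*(S[i+1] - g[i-1])**2            + (13/12)*(S[i+1] - 2*g[i] + g[i-1])**2
            a0, a1 = 0.1/(eps + b0)**2, 0.9/(eps + b1)**2
            A[i+1] = A[i]*w[i] + (a0*J0 + a1*J1)/(a0 + a1)
        return A

    # check: with S = 1 and q = x the exact solution starting from A = 1 is A = 1
    x = np.linspace(0, 10, 201)
    A = march_constraint(np.ones_like(x), x, x[1] - x[0])
    print(np.abs(A - 1).max())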
We first test our new WENO approximation on the diffusion term in Section 5.1, and then apply WENO schemes to the complete system of EYM equations in Section 5.2. Comparison between the first order Lax-Friedrichs scheme and the high-order WENO scheme will be given in Section 5.3. §.§ Numerical accuracy test for EYM equations In this section, we apply our WENO schemes (<ref>)-(<ref>) to the complete EYM equations. We consider the computational domain [-5,5] and the following initial boundary conditions: W(x,0) =tanh(10(x-0.1)), W(-5,t) =-1, W(5,t)=1, A(-5)=1. We march our scheme in time to steady-state at which the maximum absolute point value of W_t reduces to machine zero. The steady-state solutions for A and W using our WENO schemes with 3200 grid points are shown in Fig. <ref>. Here A is Lipschitz continuous. We further test the accuracy of our scheme in the smooth region. Since we do not know the exact solution of EYM equations, we use numerical solutions obtained on a very fine grid with 2^15 points as approximations to the exact solutions. The L^∞ and L^1 errors and orders of our scheme are shown in Table <ref>, in which we can clearly observe the expected fourth-order convergence rate. §.§ Lax-Friedrichs schemes In this subsection, we compute the EYM equations by using the Lax-Friedrichs schemes (<ref>) and (<ref>) and compare the results of different schemes. We use the same initial boundary conditions as in the last subsection and still take 3200 grid points. The steady-state solution for the Lax-Friedrichs schemes are shown in Fig. <ref>. We also compare the Lax-Friedrichs scheme and WENO scheme in Fig. <ref>. For the Lax-Friedrichs scheme, we can observe that there are few point values of A being negative, even at the steady state. However, we show in Fig. <ref> that as the mesh size h → 0, we have min(A) → 0. So even there are few negative point values, as the mesh size close to 0, we would have A≥ 0, and then Ã=A globally. § CONCLUSIONS In this paper, we consider the SU(2) EYM equations and aim to solve for the stable static solutions. We study the first order TVD scheme theoretically and provide new high-order WENO schemes for solving this problem. Numerical experiments are given to show the effect of our schemes. 99 BartnikR. Bartnik and J. McKinnon, Particle-like solutions of the Einstein Yang-Mills equations, https://doi.org/10.1103/PhysRevLett.61.141Phys. Rev. Lett. 61, 141 (1988). 2P. Bizon, Colored black holes, https://doi.org/10.1103/PhysRevLett.64.2844Phys. Rev. Lett. 64, 2844 (1990). 3 M. S. Volkov and D. V. Gal'tsov, Non-Abelian Einstein-Yang-Mills black holes, Sov. J. Nucl. Phys. 51, 1171 (1990). Kun H. P. Künzle and A. K. M. Masood-ul-Alam, Spherically symmetric static SU(2) Einstein-Yang-Mills fields, https://doi.org/10.1063/1.528773J. Math. Phys. 31, 928 (1990). cho1 M. W. Choptuik, J. Chmaj, and P. Bizoń, Critical Behavior in Gravitational Collapse of a Yang-Mills Field, https://doi.org/10.1103/PhysRevLett.77.424Phys. Rev. Lett. 773, 424-427 (1996). cho2M. Maliborski and O. Rinne, Critical phenomena in the general spherically symmetric Einstein-Yang-Mills system, https://doi.org/10.1103/PhysRevD.97.044053Phys. Rev. D 97, 044053 (2018). cho3 M. W. Choptuik, E. W. Hirschmann, and R. L. Marsa. New Critical Behavior in Einstein-Yang-Mills Collapse, https://doi.org/10.1103/physrevd.60.124011Phys. Rev. D 60, 124011 (1999). cho4 O. Rinne, Formation and decay of Einstein-Yang-Mills black holes, https://doi.org/10.1103/physrevd.90.124084Phys. Rev. D 90, 124084 (2014). 
nu1 A. Zenginoğlu, A hyperboloidal study of tail decay rates for scalar and Yang-Mills fields, https://doi.org/10.1088/0264-9381/25/17/175013Class. Quantum Grav. 25, 175013 (2008). nu2 M. Pürrer and P. C. Aichelburg, Tails for the Einstein-Yang-Mills system, https://doi.org/10.1088/0264-9381/26/3/035004Class. Quantum Grav. 26, 035004 (2009). nu3P. Bizoń, A. Rostworowski, and A. Zenginouğlu, Saddle-point dynamics of a Yang-Mills field on the exterior Schwarzschild spacetime, https://doi.org/10.1088/0264-9381/27/17/175003Class. Quantum Grav. 27, 175003 (2010). s2 J. A. Smoller, A. G. Wasserman, S.-T. Yau, and J. B. McLeod, Smooth static solutions of the Einstein-Yang/Mills equation, https://doi.org/10.1007/BF02100288Commun. Math. Phys. 143, 115-147 (1991). s4 J. A. Smoller, A. G. Wasserman, and S.-T. Yau, Existence of black-hole solutions for the Einstein-Yang/Mills equations, https://doi.org/10.1007/BF02097002 Commun. Math. Phys. 154, 377-401 (1993). s1J. A. Smoller, A. G. Wasserman, Existence of infinitely-many smooth, static, global solutions of the Einstein/Yang-Mills equations, https://doi.org/10.1007/BF02096771Commun. Math. Phys. 151, 303-325 (1993). s3 J. A. Smoller and A. G. Wasserman, Reissner-Nordström-like solutions of the spherically symmetric SU(2) Einstein/Yang-Mills equations, https://doi.org/10.1063/1.532224J. Math. Phys. 38, 6522-6559 (1997). s5 J. A. Smoller and A. Wasserman, Regular solutions of the Einstein-Yang-Mills equations, https://doi.org/10.1063/1.530963J. Math. Phys. 36, 4301-4323 (1995). un1N. Straumann and Z. H. Zhou, Instability of the Bartnik-mckinnon solution of the Einstein-Yang-Mills Equations, https://doi.org/10.1016/0370-2693(90)91188-H Phys. Lett. B 237, 353-356 (1990). un2N. Straumann and Z. H. Zhou, Instability of a colored black hole solution, https://doi.org/10.1016/0370-2693(90)90951-2Phys. Lett. B 243, 33-35 (1990). un3 P. Bizon and R. M. Wald, The n=1 colored black hole is unstable, https://doi.org/https://doi.org/10.1016/0370-2693(91)91243-OPhys. Lett. B 267, 173-174 (1991). un4Z. H. Zhou and N. Straumann, Nonlinear perturbations of Einstein-Yang-Mills solitons and non-abelian black holes, https://doi.org/10.1016/0550-3213(91)90439-5Nucl. Phys. B 360, 180-196 (1991). shu1 Y. Liu, C.-W Shu, and M. Zhang, High order finite difference WENO schemes for nonlinear degenerate parabolic equations, https://doi.org/10.1137/100791002SIAM J. Sci. Comput. 33 (2), 939–965 (2011). shu2 C.-W. Shu, Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws, in Advanced Numerical Approximation of Nonlinear Hyperbolic Equations. B. Cockburn, C. Johnson, C.-W. Shu, and E. Tadmor (Editor: A. Quarteroni), Lecture Notes in Mathematics, volume 1697, Springer, pp. 325-432 (1998). dm B. Carr and F. Kühnel, Primordial black holes as dark matter: recent developments, https://doi.org/10.1146/annurev-nucl-050520-125911Annu. Rev. Nucl. Part. Sci. 70, 355–394 (2020). bcm S. Bird, I. Cholis, J. B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E. D. Kovetz, A. Raccanelli, and A. G. Riess, Did LIGO detect dark matter? https://link.aps.org/doi/10.1103/PhysRevLett.116.201301Phys. Rev. Lett. 116, 201301 (2016). chen Y. Chen, J. Du, and S.-T Yau, Stable black hole with Yang-Mills Hair, https://doi.org/10.48550/arXiv.2210.03046Arxiv: 2210, 03106 (2022). Har A. Harten, B. Engquist, S. Osher, and S. Chakravarthy, Uniformly high order essentially non-oscillatory schemes III, https://doi.org/10.1016/0021-9991(87)90031-3J. Comput. Phys. 
71, 231–303 (1987). Shu1 C.-W. Shu and S. Osher, Efficient implementation of essentially non-oscillatory shock capturing schemes, https://doi.org/10.1016/0021-9991(88)90177-5J. Comput. Phys. 77, 439–471 ( 1988). Shu2 C.-W. Shu and S. Osher, Efficient implementation of essentially non-oscillatory shock capturing schemes II, https://doi.org/10.1016/0021-9991(89)90222-2J. Comput. Phys. 83, 32–78 (1989). LiuO X.-D. Liu, S. Osher, and T. Chan, Weighted essentially non-oscillatory schemes, https://doi.org/10.1006/jcph.1994.1187J. Comput. Phys. 115, 200–212 (1994). JiangShu G.-S. Jiang and C.-W. Shu, Efficient implementation of weighted ENO schemes, https://doi.org/10.1006/jcph.1996.0130J. Comput. Phys. 126, 202–228 (1996). Majda M. G. Crandall and A. Majda, Monotone difference approximations for scalar conservation laws, https://doi.org/10.2307/2006218Math. Comp. 34, 1-21 (1980). Evje S. Evje and K. Karlsen, Monotone difference approximations of BV solutions to degenerate convection-diffusion equations, https://doi.org/10.1137/S0036142998336138SIAM J. Numer. Anal. 37, 1838-1860 (2000). HJ1 G.-S. Jiang, D. Peng, Weighted ENO schemes for Hamilton–Jacobi equations, https://doi.org/10.1137/S106482759732455XSIAM J. Sci. Comput. 21, 2126–2143 (2000). HJ2 X.-G. Li, C. K. Chan, High-order schemes for Hamilton–Jacobi equations on triangular meshes, https://doi.org/10.1016/j.cam.2003.09.051J. Comput. Appl. Math. 167, 227–241 (2004). HJ3 J. Qiu, C.-W. Shu, Hermite WENO schemes and their applicationas limiters for Runge–Kutta discontinuous Galerkin method: one dimensional case, https://doi.org/10.1016/j.compfluid.2004.05.005J. Comput. Phys. 193, 115–135 (2004). HJ4 J. Yan, S. Osher, A local discontinuous Galerkin method for directly solving Hamilton–Jacobi equations, https://doi.org/10.1016/j.jcp.2010.09.022J. Comput. Phys. 230, 232–244 (2011). Wu Z. Wu and J. Yin, Some properties of functions in BV_x and their applications to the uniqueness of solutions for degenerate quasilinear parabolic equations, Northeast. Math. J. 5, 395–422 (1989). map15A. K. Henrick, T. D. Aslam, and J. M. Powers, Mapped weighted essentially non-oscillatory schemes: Achieving optimal order near critical points, https://doi.org/10.1016/j.jcp.2005.01.023J. Comput. Phys. 207, 542–567 (2005). Borg R. Borges, M. Carmona, B. Costa, and W. S. Don, An improved weighted essentially non-oscillatory scheme for hyperbolic conservation laws, https://doi.org/10.1016/j.jcp.2007.11.038J. Comput. Phys. 227, 3191-3211 (2008). Zdo Ya. B. Zel’dovich and A. S. Kompaneetz, Towards a theory of heat conduction with thermal conductivity depending on the temperature, in Collection of Papers Dedicated to the 70th Anniversary of A. F. Ioffe, Izd. Akad. Nauk SSSR, Moscow, 1950, pp. 61–72.
http://arxiv.org/abs/2307.04243v1
20230709184355
Swimming Efficiently by Wrapping
[ "H. Gidituri", "M. Ellero", "F. Balboa Usabiaga" ]
cond-mat.soft
[ "cond-mat.soft", "physics.flu-dyn" ]
1 BCAM - Basque Center for Applied Mathematics, Alameda de Mazarredo 14, E48009 Bilbao, Basque Country - Spain 2 Ikerbasque, Basque Foundation for Science, Calle de Maria Diaz de Haro 3, E48013 Bilbao, Basque Country - Spain 3 Zienkiewicz Center for Computational Engineering (ZCCE), Swansea University, Bay Campus, Swansea SA1 8EN, UK Swimming Efficiently by Wrapping H. Gidituri1 M. Ellero1,2,3 F. Balboa Usabiaga1 [email protected] August 12, 2023 =========================================================================== Single flagellated bacteria are ubiquitous in nature. They exhibit various swimming modes using their flagella to explore complex surroundings such as soil and porous polymer networks. Some single-flagellated bacteria swim with two distinct modes, one with its flagellum extended away from its body and another with its flagellum wrapped around it. The wrapped mode has been observed when the bacteria swim under tight confinements or in highly viscous polymeric melts. In this study we investigate the hydrodynamics of these two modes inside a circular pipe. We find that the wrap mode is slower than the extended mode in bulk but more efficient under strong confinement due to a hydrodynamic increased of its flagellum translation-rotation coupling. § INTRODUCTION Bacteria are prokaryotic microorganisms forced to live in a zero Reynolds number environment. Due to the kinematic reversibility of viscous flows, some bacteria have developed a non-reciprocal propulsion mechanism for locomotion, the rotation of flagella. The cell body and the flagella are rotated in opposite directions by molecular motors. Under rotation the flagella adopt an helical shape and propel the bacterium by working as a screw. Some bacteria can move both forward or backward, in a push or pull mode, depending on the direction of rotation of the molecular motors and on the chirality of their flagella. As bacteria are often found in confined environments they have developed different strategies to swim while foraging in those conditions. One example is a swimming mode used by some monotrichous and bipolar bacteria where bacteria wrap their flagella around their own bodies resembling an Archimedes' screw <cit.>. These bacteria swim alternating between two different modes, the wrapped mode and the extended mode, where the later has the flagella extended away from their bodies. The wrap mode emerges when a cell encounter highly viscous or strongly confined environments <cit.>. When a cell gets trapped during its forward pushing mode a buckling instability occurs in the flagellar hook that triggers the flagellum wrapped mode <cit.>. The number of known bacterial species showcasing a wrap mode under confinement is growing <cit.>. Thus, a natural question arises: is the wrapped mode a mere accident or is it selected due to some advantage to the bacteria? Some studies suggest that the wrapped mode confer advantages to the motion in confinement environments. Kühn et al. observed experimentally that the wrapped mode can enhance the motion in highly viscous and structured environments <cit.>. Kinosita et al. studied the motion of bacteria with wrapped mode in very tight confinements and concluded that the wrapped mode can allow the bacteria to glide over the substrate <cit.>. Along this line of work we investigate how the flagella motion in the wrapped mode favors the motion of bacteria under strong confinement by hydrodynamic interactions only. 
To this end we investigate the swimming of bacteria inside circular pipes by means of CFD simulations. We show that the extended mode is more efficient in bulk and wide pipes while the wrapped mode can be more efficient in tight pipes. The scheme of the paper is the following. In Sec. <ref> we describe our numerical method, describe our results in Sec. <ref> and conclude in Sec. <ref>. § NUMERICAL METHOD We model a monotrichous bacterium as a rigid ellipsoid with an helical flagellum attached to one of its poles. The flagellum is also modeled as a rigid object, which is a good approximation to study steady state swimming <cit.>. The body and the flagellum are connected by inextensible links that allow the flagellum to rotate freely around its main axis but otherwise it is forced to move concomitant to the rigid ellipsoid. The rigid objects, _n, move with linear and angular velocities, _n and _n, where we use the subindex n to denote either the bacterium body or the flagellum. Due to the small bacterium size, the flow Reynolds number is vanishingly small, Re∼ 10^-5. Thus, the flow can be modeled with the Stokes equations - ∇ p + μ∇^2 v = , ∇·v = 0, where p and are the fluid pressure and velocity and μ its viscosity. The no-slip boundary condition is imposed on the surface of the bacterium body and its flagellum () = u_n + ω_n× (r-q_n ) for on the bacterium, where _n is tracking point of the rigid bodies (e.g. the bacterium body center and the flagellum attaching point respectively). To solve the coupled fluid-structure interaction problem we use the rigid multiblob method for articulated bodies. We summarized the numerical method while a detailed description can be found elsewhere <cit.>. The rigid bodies are discretized with a finite number of blobs with position _i as shown in Fig. <ref>. As the inertia is negligible the conservation of momentum reduces to the balance of force and torque. The discrete force and torque balance for the rigid object n can be written as, ∑_i∈_n_i - ∑_i∈ℒ_n_n = _n, ∑_i∈ℬ_n (r_i -q_n) ×λ_i - ∑_i∈ℒ_n (Δl_np -q_n) ×ϕ_n = τ_n, where _n and _n are the external forces and torques acting on the rigid objects while _i are the constrained forces acting on the blobs that ensure the rigid motion of the bacterium body and the flagellum. The second sums in (<ref>)-(<ref>) run over the links, ℒ_n, attached to the rigid object n and _n is the force exerted by the link n to keep the rigid bodies connected while |Δl_np| is the link length. The discrete no-slip condition evaluated at each blob i is, (_i) = ∑_j_ij_j = u_n + ω_n× (r_i-q_n ) for i∈ℬ_n. The mobility matrix _ij gives the hydrodynamic interaction between any two blobs, i and j, of radius a_i and a_j. We use a regularized version of the Oseen tensor, the Rotne-Prager tensor, <cit.>. _ij = 1(4π a_i a_j)^2∫δ (|r'-r_i|- a_i) G(r',r”)δ (|r”-r_j|- a_j) ^3r' ^3r” , where (,') is the Green's function of the Stokes equation and δ(r) the Dirac's delta function. The advantage of this formulation is that the regularized mobility has no divergence even when blobs get close and it is not necessary to use special quadrature rules. The equations (<ref>)-(<ref>) form a linear system for the unknown velocities, _n and _n, and constraint forces, _j and _n, that can be solved efficiently with iterative methods such as GMRES <cit.>. § RESULTS AND DISCUSSION In this section we study the swimming of bacteria inside circular pipes of radius r_0 and length L_0 ≈ 21 r_0 aligned along z. 
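As an illustration of the main computational kernel of the method described above, the sketch below assembles the dense blob-blob mobility matrix for equal blob radii, using the standard closed-form Rotne-Prager-Yamakawa expressions (including the overlapping-blob branch) in place of the surface-averaged kernel written above, and applies it to a vector of blob forces. The constrained rigid-body problem then couples such products with the force, torque and no-slip conditions into a linear system solved iteratively with GMRES, as stated in the text. Function names and numerical values are illustrative, not taken from the authors' code.

    import numpy as np

    def rpy_mobility(blobs, a, eta):
        # Dense 3N x 3N mobility matrix for N equal blobs of radius a in an
        # unbounded fluid of viscosity eta (closed-form RPY expressions).
        N = len(blobs)
        M = np.zeros((3*N, 3*N))
        I = np.eye(3)
        for i in range(N):
            M[3*i:3*i+3, 3*i:3*i+3] = I/(6*np.pi*eta*a)        # self mobility
            for j in range(i + 1, N):
                rij = blobs[i] - blobs[j]
                r = np.linalg.norm(rij)
                P = np.outer(rij, rij)/r**2
                if r >= 2*a:
                    Mij = ((1 + 2*a**2/(3*r**2))*I + (1 - 2*a**2/r**2)*P)/(8*np.pi*eta*r)
                else:                                          # overlapping blobs, finite at r -> 0
                    Mij = ((1 - 9*r/(32*a))*I + (3*r/(32*a))*P)/(6*np.pi*eta*a)
                M[3*i:3*i+3, 3*j:3*j+3] = Mij
                M[3*j:3*j+3, 3*i:3*i+3] = Mij
        return M

    # blob velocities induced by forces on a short straight filament (arbitrary units)
    a, eta = 0.0425, 1.0
    blobs = np.array([[0.0, 0.0, 0.1*k] for k in range(20)])
    forces = np.tile([0.0, 0.0, 1.0], len(blobs))
    velocities = rpy_mobility(blobs, a, eta) @ forces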
Keeping the aspect ratio constant ensures that the flow disturbance created by a bacterium decays to negligible values at the pipes ends <cit.>. We model the pipes as immobile rigid objects <cit.>. We place the bacteria in the middle of the pipes and we use that configuration to compute the bacteria velocity. As the Stokes equation assume a steady state flow solving one mobility problem is enough to determine the velocities. Later, we will consider the case where bacteria freely swim in a pipe periodic along its main axis. We consider two different swimming modes. First, the extended mode where the flagellum is attached to the body front part and it extends away from it. In the second mode the flagellum is wrapped around the bacteria body, see Fig. <ref>. In both cases we apply constant and opposite torques, of magnitude τ=0.46 pNμ m, to the body and the flagellum to model the work exerted by a molecular motor. Thus, we assume that the molecular motor always works on the low frequency (constant torque) regime <cit.>. In most numerical experiments the flagellum extends along its main axis a length similar to the bacterium body. Thus, in the wrapped mode the body is fully covered by the flagellum. The bacterium body, always 2.04 μ m long and 0.49 μ m wide, is discretized with 292 blobs of radius a=0.0425 μ m. The geometric details of the helical flagella and pipes used in this work are presented in Tables <ref> and <ref>. All the motion is driven by the rotation of the flagellum. Therefore, we start looking at its angular velocity, ω_z, see Fig. <ref>a. In bulk the flagellum rotates two times faster in the extended mode than in the wrapped mode. The slower rotation can be explained by the additional drag experienced by the flagellum in the wrapped mode, which is caused by the proximity of the flagellum to the bacterium body. Both modes reduce their angular velocities as r_0 decreases due to the additional hydrodynamic drag generated by the pipe walls. However, the decrease is proportionally less important in the wrapped mode as its initial drag was larger. Thus, the ratio between the angular frequencies of the two modes falls from a factor 2.0 in bulk to a factor 1.6 in the smallest pipe considered. Next, we look at the swimming speed along the pipe axis, u_z, see Fig. <ref>b. We observe that in bulk the wrapped mode swims about twice slower than the extended mode. This result is consistent with experimental observations <cit.>. The slower swimming speed in the wrapped mode is a consequence of the slower rotation of its flagellum. Under confinement the swimming speed, u_z, decreases for the extended mode as the pipe radius is decreased. Again, the additional hydrodynamic drag generated by the pipe walls is responsible for this effect. In contrast, the wrapped mode exhibits a non-monotonic trend in its swimming speed. As the pipe radius is decreased the bacterium swims faster up to the point where the ratio between the pipe radius and the flagellum amplitude is r_0 / α≈ 1.5. Beyond that point the swimming speed decreases with r_0. The Stokes equations are linear and thus the linear and angular velocity are proportional when keeping all geometric parameters constant. We could have imagined that changing the pipe radius would affect the flagellum rotation and the bacterium translation to a similar degree. That is approximately true for the extended mode but completely false for the wrapped mode as shown in the inset of Fig. <ref>a. 
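The flagellar geometries referred to in the tables are helices characterized by an amplitude, a wavelength and an axial length; a small sketch of how such a filament can be discretized into blobs is shown below. The numbers are purely illustrative (the actual amplitudes, pitches, lengths and blob spacings are those of the tables), and details such as handedness and any taper near the attachment point are omitted.

    import numpy as np

    def helix_blobs(amplitude, wavelength, axial_length, n_blobs):
        # Blob positions along a helix whose axis is z; n_blobs sets the spacing,
        # which should be comparable to the blob radius used for the body.
        s = np.linspace(0.0, axial_length, n_blobs)
        k = 2*np.pi/wavelength
        return np.column_stack([amplitude*np.cos(k*s),
                                amplitude*np.sin(k*s),
                                s])

    # illustrative values only, lengths in microns
    flagellum = helix_blobs(amplitude=0.2, wavelength=1.0, axial_length=2.0, n_blobs=60)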
To understand this difference and the unusual swimming speed increase observed with the wrapped mode we consider the motion of a single helical flagellum inside a pipe. We apply a constant torque on the helical flagellum and measure its translational and rotational speeds. Note that in this case the flagellum is not a torque-free swimmer, as there is no body to which apply an opposite torque. Nonetheless, this numerical experiment is useful to understand the more complex wrapped mode. We observe an increase in the swimming speed for decreasing pipe radius with respect to the bulk value above a critical pipe radius, see Fig. <ref>a, similar to the wrapped mode results. For the single flagellum its swimming speed can be written as u_z = M_trτ_z. For moderate confinements the hydrodynamic interactions with the wall increase the value of the mobility coupling term, M_tr, with respect to the bulk values, thus, the swimming speed is increased. For very tight confinements the lubrication interactions dominate the interactions with the wall and M_tr decreases below the bulk values. These effects were already reported by Liu et al. for an infinite flagellum within an infinite pipe <cit.>. This speed increase is observed despite the reduction in the flagellum angular velocity, ω_z, with r_0, see Fig. <ref>a inset. The wrapped mode takes advantage of the increased translation-rotation coupling of its flagellum under confinement to increase its speed. In the extended mode the flagellum translation-rotation coupling is increased just as in the wrapped mode. However, the drag on the body increases faster with smaller r_0, the combined effect is to reduce the swimming speed. In the wrapped mode the body is protected by the flagellum, moving in the same direction, and thus the increase in the body drag is less important. This interplay between the enhanced translation-rotation coupling, which increases thrust and the swimming speed, and the drag on the bacterium body which reduces it, has been observed in a recent experimental study with E. coli <cit.>. Vizsnyiczai et al. observed that a bacterium swimming in a extended mode inside a pipe swims slower than a bacterium in a channel. However, when the bacterium is exiting the pipe and only its flagella remain inside, the swimming speed is larger than a channel. The reason is the increased translation-rotational coupling experienced by the flagella and the lack of an additional drag acting on the bacterium body. This result was nicely reported in Fig. 5 of Ref. <cit.>. After the flagella exit the pipe the speed decreases to the bulk value. Our results agree with their observations. §.§ Power and Efficiency The power consumption is an important quantity for a microswimmer propelling in a viscous environment and the efficiency can be more important than the absolute swimming speed. Thus, we measure these quantities. Considering the chemical energy used within the cell is beyond the scope of our work, thus, we limit ourselves to study the power dissipated by the Stokes flow and the microswimmers hydrodynamic efficiency. The power exerted by a microswimmer to the medium and dissipated by the flow is P = ∑_n _n ·_n = ∑_n _n ·_n + _n ·_n, where the sum is over rigid bodies, in our case the bacterium body and its flagellum. As the power is generated by the motor, the power consumed by a bacteria during its swimming can be rewritten as P_m = _m ·_m = _m · (_flag - _body). In the absence of elastic or soft steric interactions both expressions are equivalent. 
We will always use (<ref>) to account for soft steric interactions used in Sec. <ref>. The wrapped mode consume less power for all pipe radii owing to the slower rotation of its flagellum, see Fig. <ref>b inset. Under confinement the power exerted by the motor decays for both swimming modes. Of more interest is the hydrodynamic efficiency of the swimmers to propel themselves. There are several approaches to define the hydrodynamic efficiency <cit.>. We follow a classical approach and define the inverse efficiency as the power normalized with the power necessary to pull the body with the same speed <cit.> η^-1 = M_zzu_z^2 P, where M_zz=u_z/f_z is body mobility along the pipe axis and u_z the velocity. The Fig. <ref>b shows the variation of the inverse efficiency as a function of the pipe radius. It is evident from the figure that in bulk and wide pipes the extended mode is more efficient. However, there is a crossover and for tight confinements the wrapped mode becomes more efficient. This is a result of the lower power consumption of the wrapped mode and, importantly, its enhanced velocity within the pipe. This result suggest that the wrapped mode is beneficial to selfpropel in confined spaces. So far we have only used one flagellum, model II, and a bacterium placed exactly on the middle of the pipe. In the next two sections we explore whether these results are robust under a change of these conditions. §.§ Robustness of results: Effect of N_λ and L Bacteria species present flagella of different lengths, amplitudes and pitch angles which affect the bacteria bulk speeds and efficiencies <cit.>. Here, we explore if the wrapped mode is a more efficient swimming style in confined environments for a wide variety of flagella models. We build five flagella models by varying simultaneously the flagellum length, L, and the number of waves along its length, N_λ=L_z / λ, where L_z is the flagellum extension along its axis and λ the wavelength of the helical wave, see Fig. <ref>(a,b) and Table <ref>. We present the inverse efficiency for all flagella models and pipe radius in Fig. <ref>(c,d) The general trend is the same as before. For wide pipes the extended mode swims more efficiently than the wrapped mode for all flagella models except one (N_λ=2.5). Under confinement both swimmers increase their efficiency but the improvement is stronger for the wrapped mode which becomes the most efficient for pipes with r_0 / α⪅ 1.7. In those situations the wrapped mode is approximately two times more efficient than the extended mode. The efficiency, for both swimming modes, is non-monotonous on N_λ. When N_λ≪ 1 the flagellum is almost straight, thus, it cannot propel the bacterium. Therefore, the swimming speed and the efficiency initially grow with N_λ. Beyond a certain value of N_λ the flagellum tangent forms a large angle with the direction of motion, which again reduces the propulsion efficiency. For intermediate values of N_λ the flagellum is helical-shaped which allows propulsion. For both modes the flagellum with N_λ=1.5 is the most efficient under confinement for the flagella lengths considered. For bacteria swimming in bulk the optimum is also close to N_λ=1.5, although the exact optimum N_λ depends on the flagellum length <cit.>. For the extended mode, optimal swimming occurs around the non-dimensional pipe radius, r_0/α = 1.5 for all values of N_λ. For the wrap mode the optimal swimming occurs for lower values of r_0/α. 
§.§ Robustness of results: dynamical simulations So far we have computed the swimming speed when the bacteria are located in the middle of the pipe and aligned along it. However, freely swimming bacteria can tilt and move towards the pipe wall. To verify if the results reported so far are robust, we perform dynamic simulations where the bacteria are free to displace away from the pipe centerline and to change orientations. We use the same pipe models as before but imposing periodic boundary conditions along the pipe. To solve the Stokes equations with these boundary conditions we use a periodic Fast Multipole Method implemented in the library STKFMM <cit.>. To avoid the overlap of the bacterium with the pipe we include a steric repulsion interaction between the blobs of pipe and bacterium with a repulsion strength f=5×10^-5pN μ m for overlapping blobs and with an exponential decay with a characteristic length ξ=0.01 μ m for non-overlapping blobs. For all models considered in this section we simulate the bacteria for 10 s so the bacteria can swim at least 70 μ m. We use the last 8 s to extract the swimming speed and the power consumption. The results for bacteria with the flagella model VII, the one used in Fig. <ref>, are shown as full symbols in Fig. <ref>. The same general trend as for the static simulations is observed. However, the efficiency curves do not cross over. The cross over is not observed because this time the wrapped swimming speed along the pipe, u_z, barely increases with confinement, and the efficiency depends strongly on u_z. The magnitude of u_z does not increase because the bacterium swims with a tilt towards the wall, see Fig. <ref>c and Movie 1. In contrast, the extended mode cannot tilt significantly on small pipes as that is prevented by its rigid flagellum, which favours the motion along the pipe. To verify the role of the tilt we run another set of simulations using a longer flagellum, model VI, that extends beyond the bacterium body, see Fig. <ref>c and Movie 2. The results are presented as open symbols in Fig. <ref>. In this case the speed of the wrapped mode is approximately independent on the confinement but larger than with the shorter flagellum. As a result we observe a crossover between the efficiencies of the wrapped and extended modes. Overall, these results show that (i) the swimming speed is less sensitive to confinement for the wrapped mode than for the extended mode, (ii), the efficiency improves strongly for the wrapped mode and (iii), depending on the flagellum details, the wrapped mode can be the most efficient way to swim. § CONCLUSIONS In this paper we have presented the dynamics of two different swimming modes, namely the extended and wrapped modes of monotrichous type bacteria. Under bulk conditions the extended mode swims faster and more efficiently than the wrapped mode. However, under strong confinement the efficiency of the wrapped mode improves faster than for the extended mode. For a wide number of flagella shapes, with different lengths and wavelengths, the bacteria in the wrapped mode swim more efficiently. These results are complementary to the experimental work of Kinosita et al. where the bacteria Burkholderia adopting the wrapped mode was observed to glide in very narrow ducts <cit.>. It seems that, either by gliding over a substrate or by means of hydrodynamic interactions, the wrapped mode promotes the motion of bacteria on tight confinements. 
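The blob-blob repulsion used in the dynamic simulations is simple to state in code. The sketch below treats the quoted strength as the magnitude of the pair force at contact (the text quotes it as an interaction strength, so this prefactor is an interpretation) and uses the exponential decay length ξ = 0.01 μm for non-overlapping blobs; the function name is chosen here.

    import numpy as np

    def steric_force(ri, rj, ai, aj, f0=5e-5, xi=0.01):
        # Repulsive force on blob i from blob j: constant magnitude f0 when the
        # blobs overlap, exponential decay with characteristic length xi otherwise.
        rij = ri - rj
        r = max(np.linalg.norm(rij), 1e-12)
        d = ai + aj
        mag = f0 if r < d else f0*np.exp(-(r - d)/xi)
        return mag*rij/r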
It is interesting to note that some bipolar flagellated bacteria can display a wrapped and an extended mode simultaneously, where the flagellum at the front pole wraps around the body and the rear one remains extended <cit.>. Such a mixed mode could present some advantages under confinement that should be investigated. § ACKNOWLEDGMENTS The project that gave rise to these results received the support of a fellowship from “la Caixa” Foundation (ID 100010434), fellowship LCF/BQ/PI20/11760014, and from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847648. Funding provided by the Basque Government through the BERC 2022-2025 program and by the Ministry of Science and Innovation: BCAM Severo Ochoa accreditation CEX2021-001142-S/MICIN/AEI/10.13039/501100011033 and the project PID2020-117080RB-C55 “Microscopic foundations of soft matter experiments: computational nano-hydrodynamics (Compu-Nano-Hydro)” are also acknowledged.
http://arxiv.org/abs/2307.04039v1
20230708195157
A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers
[ "Guy Blanc", "Caleb Koch", "Carmen Strassle", "Li-Yang Tan" ]
cs.CC
[ "cs.CC", "cs.DS" ]
We prove a strong composition theorem for junta complexity and show how such theorems can be used to generically boost the performance of property testers. The ε-approximate junta complexity of a function f is the smallest integer r such that f is ε-close to a function that depends only on r variables. A strong composition theorem states that if f has large ε-approximate junta complexity, then g ∘ f has even larger ε’-approximate junta complexity, even for ε’ ≫ε. We develop a fairly complete understanding of this behavior, proving that the junta complexity of g ∘ f is characterized by that of f along with the multivariate noise sensitivity of g. For the important case of symmetric functions g, we relate their multivariate noise sensitivity to the simpler and well-studied case of univariate noise sensitivity. We then show how strong composition theorems yield boosting algorithms for property testers: with a strong composition theorem for any class of functions, a large-distance tester for that class is immediately upgraded into one for small distances. Combining our contributions yields a booster for junta testers, and with it new implications for junta testing. This is the first boosting-type result in property testing, and we hope that the connection to composition theorems adds compelling motivation to the study of both topics. § INTRODUCTION The growth in the sizes of modern datasets is both a blessing and a curse. These datasets, many of which now come with billions of features, contain a wealth of information that machine learning algorithms seek to tap into. On the other hand, their size stands in the way of the opportunities they present, as many of the algorithms that we would like to run on them simply cannot handle their dimensionality. Thankfully, for many tasks of interest the vast majority of features are irrelevant. This motivates the design of algorithms that are able to quickly home in on the small number of relevant features, and whose efficiency scales gracefully with the number of such features. Already in the early 1990s Blum <cit.> (see also <cit.>) proposed the clean theoretical challenge of learning an unknown r-junta, a function that depends on r≪ n many of its n variables. Quoting <cit.>, “It is my belief that some of the most central open problems in computational learning theory are, at their core, questions about finding relevant variables.” This is now known simply as the junta problem and is the subject of intensive study <cit.>, having distinguished itself as “the single most important open question in uniform distribution learning" <cit.>. The premise of the junta problem suggests an even more basic algorithmic problem, that of determining if an unknown function is even an r-junta to begin with. This is the problem of testing juntas, introduced by Fischer, Kindler, Ron, Safra, and Samorodnitsky <cit.> and subsequently studied in numerous works <cit.>. Junta testers are also at the heart of the best known testers for numerous other classes of functions, the key insight being that many functions are well-approximated by small juntas (see <cit.> and Chapter 5 of <cit.> for more on this connection). The surveys by Blais <cit.> give broad overviews of various junta testers and their applications throughout theoretical computer science. This work. 
These algorithmic applications motivate the study of approximability by small juntas as a complexity measure. For a function f : ^n → and a distribution 𝒟 over ^n, the ε-approximate junta complexity of f with respect to 𝒟, denoted J_𝒟(f,ε), is the smallest integer r such that f is ε-close to an r-junta. Among the most basic questions one can ask about any complexity measure of functions is how it behaves under composition. In the first part of this paper we develop, from the ground up, a fairly complete understanding of this question for junta complexity. We prove a near-optimal composition theorem (<Ref>) that is built on notions of noise stability, both classical and new. In the second part we draw a general connection (<Ref>) between the type of composition theorem that we prove—a strong composition theorem, which we will soon define—and property testing, showing how they can be used to design the first generic boosters for property testers. Combining our two main contributions yields new implications for junta testing. § OUR RESULTS AND TECHNIQUES §.§ First main result: A strong composition theorem for junta complexity Composition theorems are statements about hardness amplification: the goal is to understand the extent to which the disjoint composition (g ∘ f)(x) g(f(x^(1)),…,f(x^(k))) is more complex than f itself, and how this depends on intrinsic properties of the combining function g. For approximate measures such has junta complexity, we are furthermore interested in strong composition theorems, statements of the form: J_𝒟^k(g∘ f, ε_large)≫ J_𝒟(f, ε_small) even for ε_large≫ε_small. In words, the composed function requires much more resources—in our case, much larger junta approximators—even if one only seeks a much coarser approximation. Strong composition theorems stand in contrast to weak ones that only amplify hardness with respect to one of the two parameters, either resources or approximation quality only. The canonical example in this context is Yao’s XOR lemma <cit.>, which says that if f is mildly hard to approximate with size-s circuits, then XOR∘ f is extremely hard to approximate with size-s’ circuits. A long-recognized downside of this important result, inherent to all known proofs of it <cit.> and its generalizations to arbitrary combining functions <cit.>, is the fact that it is only known to hold for s’ ≪ s, whereas intuitively it should hold even for s’ ≫ s. Composition theorems, both weak and strong, have been studied for a variety of complexity measures but appear to have been underexplored for junta complexity. One reason may be that the question appears deceptively simple. Indeed, things are completely straightforward in the zero-error setting, where we have the intuitive identity J(g ∘ f, 0) = J(g,0)· J(f,0). However, we show that the question becomes surprisingly intricate once error is allowed. §.§.§ Context and motivation: Counterexamples to natural composition theorems The question proves to be tricky even in the special case where the combining function g is symmetric. We now state a sequence of three seemingly intuitive conjectures for this special case. While false, these conjectures and their counterexamples will motivate and lead us to the statement of our actual composition theorem. (Details and proofs of the counterexamples discussed in this section are given in <Ref>.) The following notation will be useful for us throughout this paper: Notation. 
For a function f : ^n→, distribution 𝒟 over ^n, and integer r, we write f̃_𝒟,r to denote the best r-junta approximator of f with respect to 𝒟. When 𝒟 is clear from context, we simply write f̃_r. Conjecture 1. It will be convenient for us to consider composition theorems in their contrapositive form. Suppose we would like to approximate g ∘ f with an R-junta, say with respect to the uniform distribution. If g is a k-variable symmetric function, how would we go about constructing an approximator that achieves the highest accuracy possible? Since g is symmetric, one may be inclined to divide the “junta budget” of R evenly among the k inner functions and conjecture that g ∘f̃_R/k = g(f̃_R/k,…,f̃_R/k) achieves the best, or close to the best, accuracy among all R-junta approximators. However, this is badly false. Let g be the k-variable Majority function and f the n-variable Parity function. For any choice of R satisfying R/k < n (i.e. each inner Parity receiving a budget that falls short of its arity), we have Pr[g∘f̃_R/k g∘ f] = 1/2. This is because it is “all or nothing” when it comes to approximating Parity: no (n-1)-junta can achieve accuracy better than that of a constant approximator. The best strategy is therefore to allocate a full budget of n to as many of the inner Parities as possible (i.e. R/n many of them), and a budget of zero to the others. This shows a gap of 1/2 versus 1-o(1) in the accuracies of the “divide budget equally” strategy and the optimal one. Conjecture 2. In light of this counterexample, one may then conjecture that the best strategy is to partition the junta budget optimally among the k inner functions and feed the respective approximators of f into g. That is, the conjecture is that the best approximator is of the form: g(f̃_r_1,…,f̃_r_k) where ∑_i=1^k r_i = R. While this is true for our example above, it is again badly false in general. In fact, the error of such an approximator can be close to 1, even worse than the trivial bound of ≤1/2 achievable with a constant approximator. Our counterexample reveals another counterintuitive aspect of the overall problem. Consider an approximator for g∘ f of the form g(f̃_r_1,…,f̃_r_k). We show its approximation accuracy can increase if we replace one of the inner approximators for f with a worse one: e.g. if we replace f̃_r_1 with f̃_r_1’ where r_1’ < r_1. In more technical terms that we will soon define: while the noise stability of a function is, as one would expect, monotone in the noise rate, we show that the natural generalization of it where the corruption probabilities of 0’s and 1’s are decoupled (defined in <Ref>) is not monotone. Conjecture 3. Finally, we consider a conjecture that is far laxer than either of the previous ones. It simply states that the optimal approximator for the composed function g∘ f is one of composed form: h(q^(1),…,q^(k)) for some h : ^k → and q^(1),…,q^(k) : ^n →, where the relevant variables of q^(i) fall within the ith block of variables. We show (to our own surprise) that this conjecture is still false: there are composed functions for which the optimal approximator is not of composed form. However, unlike the first two conjectures, our work shows that this conjecture is morally true in a precise sense. 
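The counterexample to Conjecture 1 is easy to see numerically. The sketch below (parameters chosen here for illustration) compares the two budget-allocation strategies for g = Maj_k and f = Parity_n under the uniform distribution: an even split leaves every inner Parity short of its arity, so, as noted above, each inner approximator degenerates to a constant and the composed approximator has advantage near 0, while concentrating the budget on as many full Parities as possible and taking their majority recovers most of the accuracy.

    import numpy as np
    rng = np.random.default_rng(0)

    k, n = 101, 20      # g = Maj_k, f = Parity_n on disjoint blocks of variables
    m = 97              # budget R = 97*n variables, so an even split gives R//k = 19 < n per block
    N = 100_000

    # under the uniform distribution the k inner Parity values are i.i.d. uniform signs
    p = rng.choice([-1, 1], size=(N, k))
    truth = np.sign(p.sum(axis=1))               # k odd, so no ties

    # (a) split the budget equally: each block gets 19 < 20 variables, the best 19-junta
    #     approximator of Parity_20 is a constant, so the composed guess is a constant
    acc_equal = max(np.mean(truth == 1), np.mean(truth == -1))

    # (b) concentrate the budget: m blocks get their exact Parity, the rest get budget zero,
    #     and the exact values are combined by taking their majority
    acc_concentrated = np.mean(truth == np.sign(p[:, :m].sum(axis=1)))

    print(acc_equal, acc_concentrated)           # roughly 0.50 versus roughly 0.94 here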
§.§.§ Our Strong Composition Theorem Our strong composition theorem implies a close quantitative relationship between the error of the optimal approximator and that of the optimal composed form approximator, and indeed one with a specific structure that we call canonical: We say that a composed form approximator for g∘ f is canonical if it is of the form: h(f̃_r_1,…,f̃_r_k), where h : ^k→ is the function: h(y) = (_∼𝒟^k[ (g∘ f)()|y_i = f̃_r_i(^(i)) for all i∈ [k]]). For intuition regarding the choice of h, we note that for the fixed k-tuple of functions f̃_r_1,…,f̃_r_k, it is the combining function that minimizes error with respect to g∘ f. Canonical composed form approximators are therefore ones whose individual components are “locally" optimal: each f̃_r_i is the optimal r_i-junta approximator for f, and h the optimal way of combining the f_r_i's. Our strong composition theorem will say that we can get very close to the globally optimal approximator this way. The notion of noise stability is central to our work: For any μ∈ (-1,1) and vector ρ⃗∈ [0,1]^k, we define the multivariate noise stability of g as _μ,ρ⃗(g) = [g()g()] where independently for each i ∈ [k], we draw (_i, _i) as follows: Using π_μ to denote the unique distribution supported on with mean μ, _i ∼π_μ, and _i = _i w.p. ρ⃗_i Independent draw from π_μ w.p. 1 - ρ⃗_i. When μ = 0 we simply write _ρ⃗(g). This definition allows for a different noise rate for each coordinate, generalizing the more commonly studied definition where the noise rates are the same for every coordinate (see e.g. Chapter 2 of <cit.>). We use the terms multivariate noise stability and univariate noise stability to distinguish these definitions. Even in the case of symmetric combining functions g, our strong composition theorem will naturally involve its multivariate noise stability (necessarily so, as already suggested by the counterexample to Conjecture 1). We present our strong composition theorem as a sequence of two parts that each carries a standalone message, the first of which formalizes the fact that the optimal canonical composed form approximator is a good proxy for the actual optimal approximator. It will be more convenient for us to state our results in terms of advantage instead of error, the two quantities being related via the identity advantage = 1-2·error. Also, for notational clarity we only state here the special case where f is balanced (i.e. _𝒟[f] = 0). [colback = white,arc=1mm, boxrule=0.25mm] Let f : ^n→ and g:^k → be arbitrary functions and 𝒟 be any distribution over ^n. Assume that _𝒟[f]=0. For the task of approximating g ∘ f under 𝒟^k with an R-junta, there is a correlation vector ρ⃗∈ [0,1]^k such that _ρ⃗(g)^2 ≤Advantage of optimal canonical composed form approximator ≤Advantage of optimal approximator≤√(_ρ⃗(g)). For most applications of composition theorems, including those in this paper, the parameters of interest are such that the quartic gap between the upper and lower bounds above are inconsequential. (In particular, if the advantage of the optimal canonical composed form approximator diminishes to 0 as k grows, our bounds imply that the same is true for the actual optimal approximator. Indeed, the two rates of convergence are the same up to a polynomial factor.) Part II of <Ref> elaborates on the correlation vector ρ⃗, showing how it is is determined by the junta complexity of f and the noise stability of g: [colback = white,arc=1mm, boxrule=0.25mm] Theorem 1 (Part II: Explicit description of ρ⃗). 
The correlation vector ρ⃗∈ [0,1]^k in Part I is the vector that maximizes _ρ⃗(g), subject to the constraint: ρ⃗_i = _𝒟[f·f̃_r_i] for all i∈ [k] where ∑_i=1^k r_i = R.

Taken together, the two parts of <Ref> show that the junta complexity of g∘ f is tightly characterized by the junta complexity of f and the multivariate noise stability of g. The theorem furthermore gives a simple and explicit strategy for constructing a near-optimal approximator: first partition the junta budget optimally among the k inner functions; next approximate each inner function optimally with its allocated budget; and finally combine these approximators in the optimal way. Naturally, it would be preferable to understand the strategy for constructing the actual optimal approximator, but our counterexamples suggest that it defies a clean and interpretable description even for symmetric g (indeed, even for g being the And function).

Corollary: Highly noise sensitive functions strongly amplify junta complexity. <Ref> yields a hardness amplification statement of the form <ref> in the following way. Suppose f is mildly hard for r-juntas, i.e. Pr[f̃_r ≠ f] ≥ε_small. Our goal is to show that g ∘ f is extremely hard for R-juntas, Pr[(g∘ f)_R ≠ g∘ f] ≥ε_large≫ε_small (where (g∘ f)_R denotes the best R-junta approximator of g∘ f), even for R ≫ r. For any partition of R = ∑_i=1^k r_i, at most a 0.999-fraction of the r_i's exceed 1.01R/k, which is at most r in the regime R ≤ 0.99kr of interest. <Ref> therefore tells us that the advantage of the optimal R-junta is upper bounded by √(_ρ⃗(g)) where at least a 0.001-fraction of ρ⃗'s coordinates are at most 1-2·ε_small. (Equivalently, at least a 0.001-fraction of coordinates receive at least an ε_small amount of noise.) This motivates the following definition:

The (δ,ε)-noise stability of a function g:^k→ is the quantity max{_ρ⃗(g) : at least a δ-fraction of ρ⃗'s coordinates are at most 1-2ε}.

By the monotonicity of noise stability, this maximum is achieved by a ρ⃗ with exactly a δ-fraction of coordinates being exactly 1-2ε, and the remaining (1-δ)-fraction being 1. We have sketched the following corollary of <Ref>:

Let g : ^k → be a function whose (1/2,ε_small)-noise stability is at most τ. Then for all functions f, J_𝒟^k(g∘ f, 1/2(1-√(τ))) ≥ 0.99k·J_𝒟(f,ε_small).

In words, g ∘ f requires much larger junta approximators, an Ω(k) multiplicative factor more, even if we allow much larger error, 1/2(1-√(τ)) = ε_large instead of ε_small. As two extreme examples of combining functions g,

∘ The (0.001,ε_small)-noise stability of the k-variable Parity function is (1-2·ε_small)^Ω(k), making it an excellent amplifier of junta complexity.

∘ The (0.001,ε_small)-noise stability of a dictator function g(x) = x_i is 1, making it a terrible amplifier of junta complexity as one would expect: if g is a dictator function then g∘ f ≡ f is of course no more complex than f itself.

The partial-noise stability of these two specific examples is straightforward to compute, but the calculations quickly become unwieldy even for other basic functions. In addition to being a quantity of independent technical interest, the upcoming connections between strong composition theorems and the boosting of property testers will also motivate understanding the partial-noise stability of broad classes of functions beyond just parity and dictator. (Roughly speaking, to boost testers for a property 𝒫 we need to analyze a function g such that 𝒫 is closed under g.) Our next result is a general technique that yields sharp bounds on the partial-noise stability, and more generally the multivariate noise stability, of all symmetric functions.
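For small k, the two extremes above can be checked by brute force. The sketch below is a naive computation under illustrative choices of k, δ, ε (uniform distribution, μ = 0): it evaluates Stab_ρ⃗(g) through the Fourier formula Σ_S ĝ(S)² ∏_{i∈S} ρ⃗_i (derived in the preliminaries) at the extremal ρ⃗ from the definition, maximizing over which coordinates receive the noise.

```python
import numpy as np
from itertools import combinations, product

def fourier_coeffs(g, k):
    """All uniform-distribution (mu = 0) Fourier coefficients g_hat(S), by brute force."""
    xs = np.array(list(product([-1, 1], repeat=k)))
    vals = np.array([g(x) for x in xs])
    subsets = [S for r in range(k + 1) for S in combinations(range(k), r)]
    return {S: float(np.mean(vals * np.prod(xs[:, list(S)], axis=1))) for S in subsets}

def delta_eps_stability(g, k, delta, eps):
    """(delta, eps)-noise stability at mu = 0: maximize Stab_rho(g) over rho with at
    least a delta-fraction of coordinates equal to 1 - 2*eps (the rest equal to 1)."""
    coeffs = fourier_coeffs(g, k)
    t = int(np.ceil(delta * k))                      # number of noised coordinates
    best = -1.0
    for noised in combinations(range(k), t):         # which coordinates receive the noise
        rho = np.ones(k)
        rho[list(noised)] = 1 - 2 * eps
        stab = sum(c ** 2 * np.prod(rho[list(S)]) for S, c in coeffs.items())
        best = max(best, stab)
    return best

k, eps = 5, 0.1
parity = lambda x: int(np.prod(x))
dictator = lambda x: x[0]
print("Parity  :", delta_eps_stability(parity, k, delta=0.5, eps=eps))    # (1-2*eps)**3 = 0.512
print("Dictator:", delta_eps_stability(dictator, k, delta=0.5, eps=eps))  # 1.0
```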
The multivariate noise sensitivity of symmetric functions. For a symmetric function g : ^k → one intuits that its multivariate noise stability at a vector ρ⃗∈ [0,1]^k should be related to its univariate noise stability at a value ρ^⋆∈ [0,1] that is an “average" of the coordinates of ρ⃗. (This is certainly not true for general functions; consider for example the dictator function.) Using techniques from the study of negative association, we formalize this intuition and prove that indeed it is sandwiched by the arithmetic and geometric means of the coordinates of ρ⃗: Let g : ^k→ be a symmetric function, μ∈ (-1,1), and ρ⃗∈ [0,1]^k. Define (∏_i ∈ [k]ρ⃗_i)^1/k and 1/k∑_i ∈ [k]ρ⃗_i. Then _μ,(g) ≤_μ,ρ⃗(g) ≤_μ,(g). Furthermore, the lower bound holds under the weaker assumption that g is transitive. The more “reasonable" ρ⃗ is, the closer the upper and lower bounds of <Ref> are. In particular, we get the following bound on the (δ,)-noise stability of symmetric functions: For any symmetric function g:^k →, δ∈ (0,1), and ∈ (0,1/2), the (δ, )-noise stability of g is equal to _μ, ρ^⋆(g) for some ρ^⋆∈ [0,1] satisfying 1 - 2δ - O(^2) ≤ρ^⋆≤ 1 - 2δ. Recall that corresponds to the initial inapproximability factor _small in <Ref>, and so the additive gap of O(^2) between the upper and lower bounds is indeed small for our intended application. §.§ Second main result: Composition theorems and boosting of property testers Composition theorems are most naturally thought of as statements about hardness amplification, and indeed that is how they are most commonly used. As our second main contribution, we show how they can be used fruitfully in their contrapositive form as meta-algorithms. In more detail, we show how they can be used to generically boost the performance guarantees of property testers. While boosting is a story of success in both the theory and practice of machine learning, to our knowledge the analogous concept in property testing has not yet been considered. The connection that we draw can be instantiated with either strong or weak composition theorems, but as we now see, the parameters are qualitatively better in case of strong composition theorems. Within property testing, a major strand of research, initiated by Parnas, Ron, and Samorodnitsky <cit.>, concerns testing whether an unknown function has a concise representation. Consider any parameterized property 𝒫 = {𝒫_s}_s ∈ℕ of boolean functions: size-s parities, size-s juntas, size-s decision trees, s-sparse polynomials over various fields, and so on. The task is as follows: Given queries to an unknown function f : ^n →, access to i.i.d. draws from a distribution 𝒟, and parameters s,s'∈ and > 0, distinguish between: ∘ Yes: f ∈𝒫_s ∘ No: f is ε-far under 𝒟 from every function in 𝒫_s'. Note that the task is more challenging as ε gets smaller, and as the gap between s and s' gets smaller. We show how a composition theorem for 𝒫 allows one to trade off these two parameters: a tester for large ε can be upgraded into one for small ε, at the price of larger gap between s and s'. The stronger the composition theorem, the more favorable this tradeoff is, and with an optimally strong composition theorem one is able to improve the ε-dependence without any associated price in the multiplicative gap between s and s': [colback = white,arc=1mm, boxrule=0.25mm] Let 𝒫 = {𝒫_s }_s∈ be a property and g : ^k→ be such that 𝒫 behaves linearly w.r.t. g. Suppose that 𝒫 admits an (_small, _large,λ)-composition theorem w.r.t. g. 
Then any (_large,ks,λ ks')-tester for 𝒫 can be converted in to an (_small, s,s')-tester for 𝒫. We defer the precise definitions of the terms “(_small,_large,λ)-composition theorem" and “behaves linearly" to the body of the paper, mentioning for now that λ∈ [0,1] measures the strength of the composition theorem: such a theorem says that the composed function requires λ k more resources to achieve _large error than original function to achieve _small error. Therefore λ = 1/k can be viewed as the threshold separating weak and strong composition theorems, with λ = 1 corresponding to an optimally strong one. (<Ref>, for example, achieves λ = 0.99.) Note that if λ = 1 in <Ref>, then an (_large,s,s)-tester for all s yields an (_small,s,s)-tester for all s. The formal version of <Ref> will also show that it upgrades uniform-distribution testers to strong uniform-distribution testers, and distribution-free testers to strong distribution-free testers. This stands in contrast to standard boosting in learning which can only upgrade distribution-free learners. §.§.§ Example applications of <Ref>: New implications for junta testing As mentioned in the introduction, juntas are among the most basic and intensively-studied function classes in property testing. Owing to two decades of research, the complexity of testing juntas in the non-tolerant setting is now fairly well-understood: we have highly-efficient adaptive <cit.>, non-adaptive <cit.>, and distribution-free testers <cit.>, all of them achieving query complexities that are essentially optimal <cit.>. The picture is much less clear in the more challenging tolerant setting. For the uniform distribution, the best known testers require exponentially many queries <cit.>, and there are no known distribution-free testers. By generalization <Ref> to the tolerant setting and instantiating it with our strong composition theorem for juntas, we obtain new implications, both positive and negative, that help clarify this picture. Positive implication: boosting of tolerant junta testers. First, any tolerant junta tester for large distance parameter can now be converted into one for small distance parameters, at the price of a slight gap in the junta sizes of the Yes and No cases. For example, for both the uniform and distribution-free settings we get: Suppose we have a (r)-query tester that distinguishes between ∘ Yes: f is 1/4-close to an r-junta ∘ No: f is 1/3-far from every r-junta. Then for every > 0 we have a (r/)-query tester that distinguishes between ∘ Yes: f is -close to an r-junta ∘ No: f is Ω()-far from every 1.001r-junta. The resulting gap between the junta sizes of the Yes and No cases, while mild, is admittedly not ideal. As alluded to above, this stems from the fact that the “strength parameter" of <Ref> is λ = 0.99 and not λ = 1. Designing boosters that do not incur this gap, either via an optimally strong composition theorem or otherwise, is a natural avenue for future work. On the other hand, we now show that even with this gap, <Ref> already carries with it an interesting consequence. This consequence crucially relies on our composition theorem for juntas being strong; the proof would not have gone through had the strength parameter of <Ref> only been λ = 1/k. Negative implication: NP-hardness in the distribution-free setting. This implication concerns the time rather than query complexity of testers. The same proof of <Ref> also converts a (r,n)-time tester into a (r,1/,n)-time tester. 
Implicit in the work of Hancock, Jiang, Li, and Tromp <cit.> is an NP-hardness result for tolerantly testing juntas in the distribution-free setting. One downside of their result is that it only holds in the regime of = 1/(n). Applying the time-analogue of <Ref>, we lift this hardness up to the standard regime of constant : The following task is NP-hard under randomized reductions. Given queries to a function f : ^n→, access to i.i.d. draws from a distribution 𝒟, and parameters r∈ and > 0, distinguish between: ∘ Yes: f is 1/4-close under 𝒟 to an r-junta; ∘ No: f is 1/3-far under 𝒟 from every r-junta. This implies a fairly dramatic separation between the non-tolerant versus tolerant versions of the problem. The recent (r)-query non-tolerant testers <cit.> are also time efficient, running in (r,n) time. <Ref> shows that any tolerant tester, regardless of query efficiency, must have time complexity that is as bad as that of SAT: e.g. if SAT requires randomized exponential time, then so does any tolerant tester. In fact, our actual result is stronger than as stated in <Ref>: we prove that the task is NP-hard even if the Yes case states that f is 0-close under 𝒟 to an r-junta. We therefore show that the testers of <cit.> are quite fragile in the sense that they break if the Yes case in the definition of non-tolerant testing is changed from “f is an r-junta" to “f is 0-close under 𝒟 to an r-junta". § OTHER RELATED WORK O'Donnell's generalization of Yao's XOR lemma. Yao's XOR lemma states that if f is -hard against circuits of size s, meaning every size-s circuit differs from f on at least an -fraction of inputs, then XOR_k∘ f is (1/2 + 1/2(1-2)^k + δ)-hard against circuits of size s' where s'= Θ(δ^2/log(1/))· s. The (1-2)^k term in the resulting inapproximability factor agrees precisely with the (univariate) noise stability of XOR_k at ρ = 1-2. In <cit.> O'Donnell showed that this is no coincidence. He proved a far-reaching generalization of Yao's XOR lemma that allows for an arbitrary combining function g : ^k → instead of XOR, and showed that the resulting inapproximability of g∘ f is given by the “expected bias" of g, a quantity that is closely related to the (univariate) noise stability of g. Like Yao's XOR lemma, <cit.>'s composition theorem is weak in the sense that the hardness of g∘ f only holds against size s' circuits where s' ≪ s. (In fact, <cit.> incurs an additional multiplicative loss of k in the resulting circuit size.) Our composition theorem concerns a different resource, juntas instead of circuits, and as emphasized in the introduction, our main focus is on proving a composition theorem that is strong in the sense of amplifying both the amount of resource required and the inapproximability factor. Both our work and <cit.> utilize Fourier analysis in our proofs, which is to be expected given the centrality of noise stability to both works. That aside, our overall approach and techniques are entirely different from <cit.>'s—necessarily so, as we elaborate next. Hardness amplification via boosting. In <cit.> Klivans and Servedio observed that most known hardness amplification results are proved via a boosting-type argument. For example, for Yao's XOR lemma and <cit.>'s generalization of it, one proceeds by contradiction: one assumes that XOR_k∘ f can be mildly approximated by a size-s' circuit C (in the language of boosting, C is a weak hypothesis for XOR_k ∘ f), and one constructs a larger circuit C^⋆ of size s that well-approximates f (i.e. C^⋆ is a strong hypothesis for f). 
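Before turning to related work, here is a schematic sketch of the boosting reduction that drives these results: a weak tester for the composed property is run on g ∘ f, with each of its queries answered by k queries to f and each of its samples by k independent draws from 𝒟. Everything named below (weak_tester, the combining function g, the arity k, the toy truth table for f) is a placeholder for illustration, not the paper's pseudocode.

```python
import itertools
import random

def boosted_tester(f_query, sample_D, weak_tester, g, k):
    """Schematic booster: run a weak tester on the composed target g o f under D^k,
    answering each of its queries with k queries to f and each of its samples
    with k independent draws from D."""
    def composed_query(xs):                           # xs: a k-tuple of inputs to f
        return g([f_query(x) for x in xs])
    def composed_sample():
        return tuple(sample_D() for _ in range(k))
    return weak_tester(composed_query, composed_sample)

# Toy instantiation: g = XOR_k (the combining function used for juntas later on),
# a random truth table standing in for f, the uniform distribution, and a
# placeholder "weak tester" that merely exercises its two oracles.
n, k = 6, 4
truth = {x: random.choice([-1, 1]) for x in itertools.product([-1, 1], repeat=n)}
f_query = lambda x: truth[tuple(x)]
sample_D = lambda: tuple(random.choice([-1, 1]) for _ in range(n))
xor_k = lambda bits: 1 if list(bits).count(-1) % 2 == 0 else -1
placeholder_weak = lambda query, sample: all(query(sample()) in (-1, 1) for _ in range(10))

print(boosted_tester(f_query, sample_D, placeholder_weak, xor_k, k))   # True
```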
In boosting, the strong hypothesis is built out of many weak hypotheses; likewise, in Yao's XOR lemma the size-s circuit C^⋆ is built out of many size-s' circuits that are like C. The work of <cit.> formalizes this connection. From this perspective, it becomes clear why such approaches are fundamentally limited to weak composition theorems where s' ≪ s. Strong composition theorems therefore necessitate a different tack, and indeed our proof proceeds via the forward implication instead of the contrapositive: we reason directly about the inapproximability of g∘ f under the assumption about the inapproximability of f. Somewhat ironically, our second main contribution is then an application of strong composition theorems to the boosting of property testers, which goes in the opposite direction to <cit.>'s “Boosting ⇒ Hardness Amplification" observation above. Independent work of Chen and Patel <cit.>. A recent work of Chen and Patel also gives new lower bounds for tolerant junta testing. For the problem of testing whether an unknown function is _1-close to or _2-far from a k-junta under the uniform distribution, they prove a query lower bound of k^Ω(log(1/(_2-_1))), which is superpolynomial when the gap _2-_1 is subconstant. This yields the first superpolynomial query complexity separation between tolerant and non-tolerant testing for a natural property of boolean functions. Their result is incomparable to <Ref> in several respects. We give a time lower bound when the gap _2-_1 is a fixed constant in the distribution-free setting. Being an NP-hardness result, our lower bound is conditional whereas theirs is unconditional. § DISCUSSION AND FUTURE WORK Complexity measures can behave in highly counterintuitive ways under composition, which makes composition theorems, and strong composition theorems in particular, tricky to prove. A motivating goal of this work is to develop an understanding of strong composition theorems from first principles, and hence our focus on junta complexity, perhaps the most basic complexity measure of a function. We are optimistic that our techniques can apply to other measures, though we believe that as in this work, much of the challenge will lie in first figuring out the right statement to prove. Consider for example decision tree complexity, a natural next step from junta complexity. There are existing strong XOR lemmas for decision tree complexity, but they come with limitations and do not appear to be the final word. (Briefly, the XOR lemma of <cit.> is only strong when the initial inapproximability factor _small is at least a constant, and the strong XOR lemma of <cit.> only holds for decision trees that are allowed to “abort".) Indeed, Shaltiel <cit.> has shown that certain hoped-for strong XOR lemmas for decision tree complexity are false, though as he remarked, his counterexample “seems to exploit defects in the formation of the problem rather than show that our general intuition for direct product assertions is false". We hope that our results, and specifically the new connections to various notions of noise stability, can serve as a guide to the right statement for decision tree complexity and other measures. As for our second main result, the general connection between strong composition theorems and the boosting of property testers, we believe that it adds compelling algorithmic motivation to the study of composition theorems, a topic traditionally considered to be mostly of complexity-theoretic interest. 
Likewise, we hope that our work spurs future research on this new notion of boosting for property testers, a notion that we believe is of interest independent of the connections to composition theorems. For example, an ambitious goal for future work is to broadly understand when and how a tester for constant distance parameter can be automatically upgraded into one with the optimal -dependence, as well as the associated costs of such a transformation. § PRELIMINARIES Distributions and random variables. We use bold font (e.g ∼) to denote random variables. For any set S, we use ∼ S as shorthand for ∼Unif(S) where Unif(·) denotes the uniform distribution. Of particular importance to this work will be μ-biased distributions over the Boolean hypercube. For any μ∈ (-1,1), we use π_μ to denote the unique distribution over with mean μ. Formally, for ∼π_μ, = 1 with probability 1 + μ/2 -1 with probability 1 - μ/2. Similarly, for ∈ [-1,1]^k, we use π_ to denote the product distribution π__1×⋯×π__k. Fix some bias μ∈ (-1,1). For any ∈ [0,1]^k and y ∈^k, we write y to denote that for each i ∈ [k], _i is independently drawn as _i = y_i with probability _i Drawn from π_μ with probability 1 - _i. Whenever we use the above notation, the choice of μ will be clear from context. This gives the following more succinct way to express <Ref>, defining multivariate noise stability, _μ,(g) _∼ (π_μ)^k,[g()g()]. Some useful sets. For any integers a ≤ b, we use [a,b] as shorthand for the set {a, a+1, …, b}. Similarly, for b ≥ 1, we use [b] as shorthand for the set [1,b]. For any set S and ℓ≤ |S|, we use Sℓ to denote all subsets of S with cardinality ℓ. Junta complexity. For any function f: ^n →, and S ⊆ [n], we say that f is an S-junta if for all x,y ∈^n for which x_i = y_i whenever i ∈ S it holds that f(x) = f(y). With a slight abuse of notation, when r ∈ [n] is an integer, we say that f is an r-junta if there is a set |S| ≤ r for which f is an r-junta. Advantage. For any functions f, g:^n → and distribution over ^n, we define _(f,g) _∼[f() g()]. With a slight abuse of notation, we define for f:^n → and S ⊆ [n], _(f,S) max_S-junta g:^n →_(f,g). Similarly, for r ∈ [n], _(f,r) max_r-junta g:^n →_(f,g). When the base distribution is clear, we will drop it from our notation. Furthermore, for any function f:^n → and S ⊆ [n] or r ∈ [n], we use f̃_S and f̃_r to denote the S-junta and r-junta respectively maximizing the above two advantages. Function composition. For a function f: ^n →, its direct product f^⊗ k:*^n^k→^k is defined as f^⊗ k(x^(1), …, x^(k)) = (f(x^(1)), …, f(x^(k))). For any g:^k →, we use g ∘ f:*^n^k→ as shorthand for g∘ f^⊗ k, meaning, (g∘ f)(x^(1), …, x^(k)) = g(f(x^(1)), …, f(x^(k))). Vector powers. For any vector v ∈^k and set S ⊆ [k], we'll use the notation v^S as shorthand for v^S ∏_i ∈ S v_i. §.§ Fourier Analysis Our proof of <Ref> will make heavy use of Fourier analysis over the μ-biased hypercube, (π_μ)^k. In this section, we will review relevant definitions and facts. A more complete exposition is given in <cit.>. For any μ∈ (-1,1), we define ϕ_μ(x) x-μ/σ where σ√(1 - μ^2). Every g: ^k → can be uniquely decomposed as g(y) = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(y_i) where ĝ_μ(S) = _∼ (π_μ)^k*g() ∏_i ∈ Sϕ_μ(_i). This decomposition has a number of useful properties stemming from the fact that transforming g from its representation as a truth table to its Fourier coefficients ĝ_μ(S) is an orthonormal transformation. 
[Basic facts about the Fourier decomposition] * Plancherel's theorem: For any g, h: ^k → and μ∈ (-1,1), _∼ (π_μ)^k[g()h()] = ∑_S ⊆ [k]ĝ_μ(S)ĥ_μ(S). * Parseval's theorem: For any g: ^k → and μ∈ (-1,1), _∼ (π_μ)^k[g()^2] = ∑_S ⊆ [k]ĝ_μ(S)^2. In particular, when g has a range of , Parseval's theorem guarantees that the sum of its squared Fourier coefficients is 1. As a result, the following distribution is well defined. For any g: ^k → and bias μ∈ (-1,1), the spectral sample of g, denoted _μ(g), is the probably distribution over subsets of [k] in which the set S has probability ĝ_μ(S)^2. The Fourier decomposition gives a concise way to represent important quantities, as in the following results. For any μ∈ (-1,1) and ∈ [0,1]^k, _μ, can be related to g's μ-biased Fourier decomposition as, _μ, (g) = ∑_S ⊆ [k]ĝ(S)^2 ^S = _∼_μ(g)[()^]. We define g^()(y) _ y[g()]. Then, by Plancherel's theorem, _μ, (g) = _∼ (π_μ)^k[g() g^()()] = ∑_S ⊆ [k]g_μ(S) g^()_μ(S). Next, we compute the Fourier decomposition of g^(). g^()_μ(S) = _∼ (π_μ)^k*g^()() ∏_i ∈ Sϕ_μ(_i) = _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i) = _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i)(,) distributed identically to (, ) = _∼ (π_μ)^k*g() ·_*∏_i ∈ Sϕ_μ(_i). Applying the independence of _1, …, _k conditioned on and that [ϕ_μ(_i)] = _i ϕ_μ(_i), g^()_μ(S) = _∼ (π_μ)^k*g() ·∏_i ∈ S_i ϕ_μ(_i) = ()^S ·_∼ (π_μ)^k*g() ·∏_i ∈ Sϕ_μ(_i) = ()^S g_μ(S). Putting the above together, _μ, (g) = ∑_S ⊆ [k]ĝ_μ(S)^2 ()^S. One immediate corollary of the above is that multivariate noise stability is monotone. For any μ∈ (-1,1), g:^k →, and , ρ⃗'⃗∈ [0,1]^k satisfying _i ≤ρ⃗'⃗_i for all i ∈ [k], _μ, (g) ≤_μ, ρ⃗'⃗(g). Recall that for any ν∈ [-1,1]^k, the distribution π_ν is the unique product distribution supported on ^k with mean ν. The Fourier decomposition of g also gives a useful way to compute _∼π_ν[g()]. For any g: ^k →, μ∈ (-1,1), and ν∈ [-1,1]^k, _∼π_ν[g()] = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(ν_i). We expand g into it's Fourier decomposition [g()] = ∑_S ⊆ [k]ĝ_μ(S) *∏_i ∈ Sϕ_μ(_i)Linearity of expectation = ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*ϕ_μ(_i)_1, …, _k are independent = ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*_i - μ/σDefinition of ϕ_μ = ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ Sϕ_μ(ν_i). Linearity of expectation § A STRONG COMPOSITION THEOREM FOR JUNTAS In this section, we characterize the junta size required to approximate g ∘ f in terms of the multivariate noise stability of g, and the junta size required to approximate f. For any g: ^k →, f: ^n → and base distribution over ^n, let μ = _∼[f()]. * Lower bound on advantage: For any approximators q^(1), …, q^(k): ^n →, define the lower normalized correlations, for each i ∈ [k] as α_i max*0, _(f, q^(i))^2 - μ^2/1 - μ^2. Then, there is an h:^k → for which _^k(g∘ f, h (q^(1), …, q^(k))) ≥_μ, α(g). * Upper bound on advantage: For any S_1,…, S_k, define the upper normalized correlation as β_i max*0,_(f, S_i) - μ^2/1 - μ^2, construct S ⊆ [n] × [k] by taking S_1 from the first block, S_2 from the second block, and so on (formally S ∪_i ∈ [k], j ∈ S_i{(j,i)}). Then, _^k(g∘ f, S) ≤√(_μ, β(g)). Our goal is to understand the error of the best R-junta approximating g ∘ f. <Ref> says that for any way to partition R = r_1 + ⋯ r_k, the approximator h (f̃_r_1, …, f̃_r_k) achieves nearly optimal advantage across all R-juntas that partition their budget this way. Of course, by maximizing both sides across all partitions, we can conclude that there is some partitioning and function h for which h (f̃_r_1, …, f̃_r_k) has nearly optimal advantage among all R-juntas. 
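The identity Stab_{μ,ρ⃗}(g) = Σ_S ĝ_μ(S)² ∏_{i∈S} ρ⃗_i is easy to sanity-check numerically for small k. The sketch below uses arbitrary illustrative choices of g, the bias μ, and the correlation vector ρ⃗: it samples ρ⃗-correlated pairs under (π_μ)^k exactly as in the preliminaries and compares the Monte Carlo estimate of E[g(x)g(y)] against a brute-force evaluation of the μ-biased Fourier expression.

```python
import numpy as np
from itertools import combinations, product

def phi(x, mu):                                   # the biased character phi_mu
    return (x - mu) / np.sqrt(1 - mu ** 2)

def stab_fourier(g, k, mu, rho):
    """Brute-force Stab_{mu,rho}(g) = sum_S g_hat_mu(S)^2 * prod_{i in S} rho_i."""
    xs = np.array(list(product([-1, 1], repeat=k)))
    p = np.prod(np.where(xs == 1, (1 + mu) / 2, (1 - mu) / 2), axis=1)   # (pi_mu)^k weights
    vals = np.array([g(x) for x in xs])
    total = 0.0
    for r in range(k + 1):
        for S in combinations(range(k), r):
            coeff = np.sum(p * vals * np.prod(phi(xs[:, list(S)], mu), axis=1))
            total += coeff ** 2 * np.prod(np.asarray(rho)[list(S)])
    return total

def stab_mc(g, k, mu, rho, trials=100_000, seed=0):
    """Monte Carlo estimate of E[g(x)g(y)] for x ~ (pi_mu)^k and y ~_rho x."""
    rng = np.random.default_rng(seed)
    p_one = (1 + mu) / 2
    x = np.where(rng.random((trials, k)) < p_one, 1, -1)
    fresh = np.where(rng.random((trials, k)) < p_one, 1, -1)
    y = np.where(rng.random((trials, k)) < np.asarray(rho), x, fresh)    # keep x_i w.p. rho_i
    gx = np.apply_along_axis(g, 1, x)
    gy = np.apply_along_axis(g, 1, y)
    return np.mean(gx * gy)

maj = lambda x: 1 if np.sum(x) >= 0 else -1
k, mu, rho = 5, 0.2, [0.9, 0.7, 0.5, 0.3, 0.1]
print(stab_fourier(maj, k, mu, rho), stab_mc(maj, k, mu, rho))           # should nearly agree
```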
Indeed, as a simple corollary of <Ref>, we can show that the error of the optimal canonical composed form approximator is within a factor of 4 of the optimal approximator. Recall that _(q_1,q_2) = _∼[q_1() ≠ q_2()] and is related to advantage via the equality = 1 - 2·. For any g: ^k →, f:^n →, junta budget R, and base distribution , there is an h:^n → and partition of the budget r_1 + ⋯ + r_k = R for which,. _^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤ 4 ·_^k(g∘ f, R). When μ = 0, the guarantee of <Ref> can further be given in the concise form of <Ref>: For an appropriately chosen ∈ [0,1]^k, _ρ⃗(g)^2 ≤Advantage of optimal canonical composed form approximator ≤Advantage of optimal approximator≤√(_ρ⃗(g)). We include the proofs of <Ref> and <Ref> in <Ref>. §.§ Proof of the lower bound on advantage In this subsection, we show that (x_1, …, x_k) → h(f̃_r_1(x_1), …, f̃_r_k(x_k)) is close to the best R-junta approximator for g ∘ f. Here, the function h can be different than g, and this is necessary as shown in the counterexample to conjecture 2 in <Ref>. For any g:^k →, f:^n →, and approximators q^(1), …, q^(k), there is some h:^k → for which _^k(g∘ f, h ∘ (q^(1), …, q^(k))) ≥_μ, α(g), where μ = _∼[f()] and for each i ∈ [k], α_i max*0, (f, q^(i))^2 - μ^2/1 - μ^2. Note α_i naturally interpolates between 0 and 1. Setting q^(i) to the better of the constant -1 or the constant +1 function will lead to α_i = 0, while setting q^(i) = f gives α_i = 1. §.§.§ Characterizing the advantage of composed form approximators To ease notation, we begin with a simpler setting. Suppose we use the same budget, r R/k, in each of the k pieces. Our goal is to understand max_h:^k →(g∘ f, h∘f̃_r) in terms of the noise sensitivity of g and (f, f̃_r). To do so, we will consider unbalanced noise stability. For any x ∈^k, we use the notation x to denote that for each i ∈ [k], _i is independently drawn as * If x_i = -1, with probability a, we set _i = x_i and otherwise set _i = -x_i * If x_i = 1, with probability b, we set _i = x_i and otherwise set _i = -x_i. For any g,h:^k →, μ∈ [-1,1] and a,b ∈ [0,1], we define the unbalanced noise stability as _μ, (a,b)(g,h) = _∼ (π_μ)^k, [g()h()]. We refer to the above notion as unbalanced because when drawing x, the probability of the i^th coordinate flipping from -1 to 1 and from 1 to -1 may differ. Unbalanced noise stability is useful in our setting due to the following proposition. For any f, f̃: ^n → and g,h:^k →, _∼^k[(g ∘ f)() · (h ∘f̃)()] = _μ, (a,b)(g,h), where μ_∼[f()], a _∼[f̃() = -1 | f() = -1], b _∼[f̃() = 1 | f() = 1]. Draw ∼^k and then define f^⊗ k(), f̃^⊗ k(). Clearly, _∼^k[(g ∘ f)() · (h ∘f̃)()] = [g() h()]. Furthermore, the distribution of , is equivalent to if we drew ∼ (π_μ)^k,. The above quantity therefore matches the definition of _μ, (a,b)(g,h). §.§.§ Unbalanced noise stability behaves strangely The most basic requirement of our approximation for g ∘ f is that it have advantage at least 0, as either the constant -1 or the constant +1 function is guaranteed to have such an advantage. Indeed, in the balanced case, it is well known that the approximation will satisfy this basic requirement even if we take h = g. For any g:^k → and a ∈ [0,1/2], _0, (a,a)(g,g) ≥ 0. However, in the unbalanced case, this basic requirement no longer holds. For any k ≥ 0, and a,b ∈ [0,1] for which |a-b| ≥ 0.01, there is a function g:^k → for which _0, (a,b)(g,g) ≤ -(1-2^-Ω(k)). Without loss of generality, we assume b ≥ a + 0.01. We define g(x) 1 if ∑_i ∈ [k]x_i ≥ 0.005k, -1 otherwise. 
Draw ∼ (π_μ)^k,. Then, *∑_i ∈ [k]_i = 0 , *∑_i ∈ [k]_i = k(b-a). Furthermore, a standard application of Hoeffding's inequality implies that [g() = 1] ≤ 2^-Ω(k) , [g() = -1]≤ 2^-Ω(k). By union bound, with probability at least 2^-Ω(k), we have that both g() = -1 and g() = 1. This implies the desired result. §.§.§ Unbalanced noise stability behaves well if we use the best h Surprisingly, we show that if we use the best h, our approximation does meet this most basic requirement. Furthermore, we can relate it to the classical notion of balanced noise stability. The below Lemma directly implies <Ref>. For any g:^k → and distribution over , each in ^k satisfying, * The pairs (_1, _1), …, (_k, _k) are independent of one another. * The means satisfy [_1] = ⋯ = [_k] = μ. Define the correlations α_1, …, α_k as α_i max*0,[_i _i]^2 - μ^2/1 - μ^2. Then, there is an h:^k → for which [g()h()] ≥_μ, α(g). Comparing to <Ref>, if μ = 0, then α_i = max(0,1-a-b) for all i ∈ [k]. Since _μ, α(g) ≥ 0 whenever α≥ 0, <Ref> shows that the phenomenon in <Ref> cannot occur if we use the best approximator h. The following Lemma will be useful in the proof of <Ref>. For any function g: ^k →, let _1, …, _k be independent random variables each with mean μ and supported on [-1,1]. Then, _*_∼π_[g()]^2 = _μ, ([ϕ_μ(_1)^2], …, [ϕ_μ(_k)^2])(g). We'll use the μ-biased Fourier expansion of g. Applying <Ref>, _*_∼π_[g()]^2 = _**∑_S ⊆ [k]ĝ(S) ∏_i ∈ Sϕ_μ(_i)^2 = ∑_S_1, S_2 ⊆ [k]ĝ(S_1)ĝ(S_2)*∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i). We claim that, in the above sum, any term in which S_1 ≠ S_2 is equal to 0. Let S_1 S_2 denote the symmetric difference of S_1 and S_2. Then, due to the independence of _1, …, _k, *∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i) = ∏_i ∈ S_1 ∩ S_2[ϕ_μ(_i)^2] ∏_i ∈ S_1 S_2[ϕ_μ(_i)]. Since the mean of _i is μ, [ϕ_μ(_i)] = ϕ_μ(μ) = 0. If S_1 ≠ S_2, there is at least one element in S_1 S_2, and so the term is 0. We are therefore left with, _*_∼()[g()]^2 = ∑_S ⊆ [k]ĝ(S)^2∏_i ∈ S*ϕ_μ(_i)^2. This is exactly the Fourier expansion for the claimed result. We'll also use the following proposition. For any random variable bounded on [-1,1] almost surely and with mean μ, max*0,[]^2 - μ^2/1 - μ^2≤[ϕ_μ()^2] ≤[] - μ^2/1 - μ^2 . We expand, using linearity of expectation, [ϕ_μ()^2] = *( - μ)^2/1 - μ^2 = [ρ^2] - 2μ[] + μ^2/1 - μ^2. Since [] = μ, we have that [ϕ_μ()^2] = [^2] - μ^2/1 - μ^2. Therefore, by Jensen's inequality, []^2 - μ^2/1 - μ^2≤[ϕ_μ()^2]. Furthermore, since ^2 ≤, [ϕ_μ()^2] ≤[] - μ^2/1 - μ^2. Lastly, [ϕ_μ()^2] ≥ 0 follows from non-negativity. Finally, we are ready to prove <Ref>. For any y ∈^n, we define g_(y) = [g() | = y]. Then, setting h(y) (g_(y)), [g()h()] = _**g_()≥_**g_()^2. Note that, conditioning on = y, the distribution of is still product. Let ν(y) be the mean of this distribution, so that g_(y) = _∼π_ν(y)*g(). By <Ref>, _**_∼π_ν()*g()^2 = _μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g). For each i ∈ [k], [ϕ_μ(ν()_i)^2] ≥max*0,_[ν()_i]^2 - μ^2/1 - μ^2<Ref> ≥max*0,_[_iν()_i]^2 - μ^2/1 - μ^2x≥ cx when c ∈ = max*0,_,[_i_i]^2 - μ^2/1 - μ^2Definition of ν(y) = α_i. Putting all of the above together, [g()h()] ≥_μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g) ≥_μ, ρ(g), where the final inequality follows from the monotonicity of noise stability. §.§ Proof of the upper bound on advantage In this section, we prove the following. For any g: ^k→, f:^n →, μ_∼[f()], and S_1,…, S_k, define the upper normalized correlation as β_i _(f, S_i) - μ^2/1 - μ^2. 
For S ⊆ [n] × [k] constructed by taking S_1 from the first block, S_2 from the second block, and so on (formally S ∪_i ∈ [k], j ∈ S_i{(j,i)}).. Then, _^k(g∘ f, S) ≤√(_μ, β(g)). To begin with, we rewrite advantage in the following form. For any function q: ^m →, distribution over ^m, and S ⊆ [m], define q_S, ^(x) _∼[q() |_S = x_S], where y_S = x_S is shorthand for x_i = y_i for all i ∈ S. Then, _(q, S) = _∼**q_S, ^(). Consider any S-junta h. Then, _(q, h) = _∼[ q() h()] = _∼*_∼[q() h() |_S = _S]. Since h is an S-junta, it must classify x and y the same whenever x_S = y_S. Therefore, (q, h) = _∼*h()_∼[q() |_S = _S] = _∼*h()q^_S,(). to maximize the above advantage among all h, we set h(x) = (q^_S, (x)), in which case (q, h) = _∼**q^_S, (). Given <Ref>, to compute _^k(g∘ f, S), it suffices to understand the function (g ∘ f)^_S,. We proceed to transform that function into a form which is easier to understand. In the setting of <Ref>, for any x ∈ (^n)^k, let ν(x) ∈ [-1,1]^k be the vector where ν(x)_i _∼^k[f() | x^(i)_S_i = _S_i]. Then, (g ∘ f)^_S,^k(x) = _∼π_ν(x)[g()]. Consider drawing ∼ (^n)^k conditioned on _S = x_S. Let = f^⊗ k(). By definition, (g ∘ f)^_S, ^k(x) = [g()]. Therefore, we merely need to show that the distribution of is that of π_ν(x). For this it is sufficient that, * Each _1, …, _k is independent. This follows from the fact _1, …, _k are independent, and that the restriction that _S = x_S is a disjoint restriction for each of the k components. * For each i ∈ [k], that [_i] = ν(x)_i. This follows from the definition of ν(x)_i. The desired result follows from the fact that π_ν(x) is the unique product distribution over ^k with mean ν(x). We now prove the upper bound. Let ν be as defined in <Ref>. Applying it and <Ref>, _^k(g∘ f, S) = _∼^k**_∼π_ν()[g()]≤√(_∼^k**_∼π_ν()[g()]^2). The inequality above is Jensen's. Consider the random variables ν()_1, …, ν()_k. The have the following two properties. * They are independent. This is because the value of ν()_i depends on only the value of _i, which is independent of the other _j for j ≠ i. * They each have mean μ. This is because, [ν()_i] = *_∼[f() | (^(i))_S_i = y_S_i] = _∼[f()] = μ. Therefore, we can use <Ref>: _∼^k**_∼π_ν()[g()]^2 = _μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2])(g). We can further upper bound, [ϕ_μ(ν()_i)^2] ≤[ν()_i] - μ^2/1 - μ^2<Ref> = (f, S_i) - μ^2/1 - μ^2<Ref> = β_i. Putting the above together, we have that _^k(g∘ f, S) ≤√(_μ, β(g)). §.§ Proofs of the consequences of our strong composition theorem In this section, we complete the proofs of <Ref> and <Ref>. For any partition of the budget junta budget r_1 + ⋯ + r_k = R, let (r_1,…,r_k) be the vector, (r_1,…,r_k)_i _D(f, r_i). Then, applying the upper bound on advantage of <Ref> and maximizing over all possible partitions of the budget R, we have that _^k(g∘ f, R) ≤max_r_1 + ⋯ + r_k = R√(_(r_1, …, r_k)(g)). This completes the upper bound on the advantage of the optimal R-junta approximator of g ∘ f of <Ref>. For the lower bound on the advantage of the optimal composed form approximator, let r_1, …, r_k be the partition of budget maximizing _(r_1, …, r_k)(g). Using the lower bound of <Ref>, and using (·)^2 to refer to an elementwise squaring of a vector, _^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≥_(r_1,…,r_k)^2(g). Using the Fourier expression for stability <Ref>, _(r_1,…,r_k)^2(g) = _∼_μ(g)*(((r_1,…,r_k)^2)^ =_∼_μ(g)*(((r_1,…,r_k)^)^2 ≥_∼_μ(g)*(((r_1,…,r_k)^)^2 Jensen's inequality = _(r_1,…,r_k)(g)^2. 
Therefore, there is a composed form approximator with advantage at least _(r_1, …, r_k)(g)^2. Our proof of <Ref> uses the following. For any α_1,…, α_m ∈ [0,1] and β_1, …, β_m ∈ [0,1], satisfying (1-α_i) ≤ 2(1-β_i) for each i ∈ [m], 1 - ∏_i ∈ [m]α_i ≤ 2* 1 - ∏_i ∈ [m]β_i . We consider the vector β' ∈ [0,1]^m satisfying 1 - α_i = 2 · (1 - β'_i). Note that β'_i ≥β_i, which means that 1 - ∏_i ∈ [m]β'_i ≤ 1 - ∏_i ∈ [m]β_i. Now, consider the function q:[0,1] → [0,1] defined as q(x) 1 - ∏_i ∈ [m]1 - x(1- α_i). A quick calculation confirms that the second derivative of q is nonpositive, so q is concave. Furthermore, it satisfies, q(0) = 0, q(1) = 1 - ∏_i ∈ [m]α_i, q(1/2) = 1 - ∏_i ∈ [m]β'_i. We conclude, 1 - ∏_i ∈ [m]α_i concavity of q≤ 2*1 - ∏_i ∈ [m]β'_i≤*1 - ∏_i ∈ [m]β_i. Let r_1 + ⋯ + r_k = R be the partition of R used in the junta achieving minimum error relative to g ∘ f and define, for each i ∈ [k], α_i max*0, _(f, r_i)^2 - μ^2/1 - μ^2, β_i max*0, _(f, r_i) - μ^2/1 - μ^2, which satisfy the relation 1-α_i ≤ 2(1 - β_i). Applying <Ref> and the relation = 1 - /2, we have that _^k(g∘ f, R) ≥1 - √(_μ, β(g))/2, and _^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤1 - _μ, α(g)/2. Our goal is to show the following series of inequalities, which would imply the desired result, 1 - _μ, α(g) (iq 1)≤ 2(1 - _μ, β(g)) (iq 2)≤ 4(1 - √(_μ, β(g))). The second, (inequality 2), follows the fact that for any x ∈ [0,1], (1-x) ≤ 2(1-√(x)). For the first inequality, using <Ref>, we can express stability via the Fourier spectrum of g as 1 - _μ, α(g) = ∑_Sĝ(S)^2(1 - ∏_i ∈ Sα_i) ≤ 2∑_Sĝ(S)^2(1 - ∏_i ∈ Sβ_i) <Ref>, 1-α_i ≤ 2(1 - β_i) = 2(1 - _μ, β(g)). This proves inequality 1, giving the desired result. § MULTIVARIATE NOISE STABILITY OF SYMMETRIC FUNCTIONS In this section, we prove <Ref> and <Ref>, connecting the multivariate noise stability of symmetric functions to their univariate noise stability. For any function g:^k →, a permutation σ:[k]→ [k] is an automorphism of g if for all inputs x ∈^k, g(x) = g(x_σ(1), …, x_σ(k)). We say g is symmetric if every permutation of [k] is an automorphism of g. Similarly, g is transitive if for all i,j ∈ [k], there is an automorphism of g sending i to j. §.§ The upper bound on the multivariate noise stability of symmetric functions For any symmetric g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let 1/k ·∑_i ∈ [k]_i. Then, _μ, (g)≤_μ, (g). Our proof of <Ref> will use make heavy use of the negative association of random variables. A set of random variables _1, …, _m supported on are negatively associated if for all disjoint subsets S_1, S_2 ⊆ [m] and S_1-juntas f_1:^m →, S_2-juntas f_2:^m → both monotonically nondecreasing, [f_1()f_2()] ≤[f_1()][f_2()]. For our purposes, we will only need a few useful facts about negatively associated random variables given in <cit.> (see also <cit.> for a useful overview). [Permutation distributions are negatively associated, <cit.>] For any z_1, …, z_m ∈, draw a uniformly random permutation :[m] → [m] and set _i = z_(i) for each i ∈ [k]. Then, _1, …, _m are negatively associated. [Subsets of negatively associated random variables are negatively associated] For any 2 ≤ m' ≤ m, if _1, …, _m are negatively associated, then _1, …, _m' are also negatively associated. [Product consequence of negative association] For any negatively associated _1, …, _m and nondecreasing f:→_≥ 0, *∏_i ∈ [m]f(_i)≤∏_i ∈ [m]*f(_i). Given the above, facts about negative associated random variables, we can now prove <Ref>. 
We expand _μ, (g) using the Fourier spectrum of g (<Ref>), _μ, (g) = _∼_μ(g)[()^]. Let be the distributed the same as || for ∼_μ(g). Then, _μ, (g) = _*_∼_μ(g)[()^| || = ℓ]. Since g is symmetric, for any |S_1| = |S_2|, ĝ(S_1) = ĝ(S_2). As a result the distribution of ∼_μ(g) conditioned on || = ℓ is simply a uniformly random size-ℓ subset of [k]. Formally, _μ, (g) = _*_∼[k][()^]. Let _1, …, _k be a uniform random permutation of _1, …, _k. Then, the distribution of ()^ for ∼[k]ℓ is identical to that of ∏_i ∈ [ℓ]_i. By <Ref>, _1, …, _ℓ are negatively associated, and so, _∼[k]ℓ[()^] = *∏_i ∈ [ℓ]_i(<Ref>)≤∏_i ∈ [ℓ][_i] = *^ℓ. Therefore, _μ, (g) ≤_**^ = _μ, (g). §.§ The lower bound on the multivariate noise stability of symmetric functions For any transitive g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let *∏_i ∈ [k]ρ⃗_i^1/k. Then, _μ, (g)≥_μ, (g). Note that every transitive g is also symmetric, but the reverse does not hold. Similarly to the proof of <Ref>, let be the distribution of || when ∼_μ(g). Then, _μ, (g) = _*_∼_μ(g)[()^| || = ]. For each S ⊆ [k], we'll use χ(S) ∈^k to denote the characteristic vector of S, meaning χ(S)_i [i ∈ S]. Then, _μ, (g) = _*_∼_μ(g)*∏_i ∈ [k] (_i)^χ()_i | || = = _*_∼_μ(g)*exp*∑_i ∈ [k]χ()_i log(_i) | || = ≥_*exp*_∼_μ(g)*∑_i ∈ [k]χ()_i log(_i) | || = Jensen's inequality = _*exp*∑_i ∈ [k]log(_i) _∼_μ(g)*i ∈| || = . Linearity of expectation Fix any i_1, i_2 ∈ [k] and level ℓ∈ [0,k]. Since g is transitive, there is an automorphism, σ, of g sending i_1 to i_2. Since σ is an automorphism of g, for any S ⊆ [k], for ∼_μ(g), [ = S] = [ = σ(S)]. As a result _∼_μ(g)*i_1 ∈| || = ℓ = _∼_μ(g)*i_2 ∈| || = ℓ, and so _∼_μ(g)*i ∈| || = ℓ must be the same for all i ∈ [k]. The sum of these probabilities is ℓ, meaning each is ℓ/k. This allows us to bound, _μ, (g) ≥_*exp*∑_i ∈ [k]log(_i) ·/k =_*∏_i ∈ [k]*_i^/k =_*()^ = _μ, (g). §.§ Bounding the (δ,)-noise stability of symmetric functions Recall, from <Ref>, that the (δ,)-noise stability of a function g:^k→ is the quantity max{_ρ⃗(g)at least δ-fraction of ρ⃗'s coordinates are at most 1-2}. We prove <Ref>, restated below. For any symmetric function g:^k →, δ∈ (0,1), and ∈ (0,1/2), let δ'kδ/k be δ rounded up to the nearest integer multiple of 1/k. Then, the (δ, )-noise stability of g is equal to _μ, ρ^⋆(g) for some ρ^⋆ satisfying 1 - 2δ' - 4^2 ≤ρ^⋆≤ 1 - 2δ'. Since stability is monotone (<Ref>), the (δ, )-noise stability of g is its multivariate noise stability with a correlation vector where δ' fraction of the coordinates are 1 - 2 and the remainder are 1. The arithmetic mean of this vector is exactly 1 - 2δ', and its geometric mean is (1 - 2)^δ'. The desired result then follows from <Ref> and the inequality (1 - x)^c ≥ 1-cx - (1-c)x^2 ≥ 1 - cx - x^2 which holds for all c,x ∈ [0,1]. To prove this inequality, it is sufficient that q_c(x) ≥ 0 for all x,c ∈ [0,1] where q_c(x) (1-x)^c - 1 +cx + (1-c)x^2. To see this, we note that for any c ∈ [0,1], the function q_c(x) has roots at x = 0 and x=1. It is furthermore increasing at x = 0, and decreasing at x = 1. If q_c(x) were to be negative for any x ∈ [0,1], then, it would need to have at least 3 local extrema. However, the derivative q_c'(x) is concave, so it can only be zero at a maximum of 2 points. This proves the desired inequality. (If the reader prefers, <Ref> gives a “proof by picture".) § COMPOSITION THEOREMS YIELD BOOSTERS FOR PROPERTY TESTING §.§ A general boosting framework Let 𝒫={𝒫_s}_s∈ be a parametrized property of Boolean functions. 
For a function f:^n→ and distribution 𝒟 over ^n, we write _𝒟(f,𝒫_s)min_h∈𝒫_s_𝒟(f,h) to denote f's distance to 𝒫_s over 𝒟. We are interested in the relaxed testing regime for size parameters s>s' where we want to decide whether an unknown target function f belongs to 𝒫_s or is -far from 𝒫_s' under 𝒟: _𝒟(f,𝒫_s')> (recall <Ref>). We say that 𝒫 is (,s,s')-testable if there exists an algorithm for (,s,s')-testing 𝒫 for every distribution 𝒟. As → 0, the gap between the Yes and No cases becomes smaller and (,s,s')-testing becomes more difficult. The main result of this section is that if 𝒫 “behaves well” under function composition, then testers for large can be boosted to testers for the more challenging regime of small . We will specialize our attention to properties which behave linearly with respect to function composition. A parametrized property 𝒫={𝒫_s}_s∈ behaves linearly (with respect to function composition) if f∈𝒫_s ⇒ g∘ f∈𝒫_k· s for all g:^k→, f:^n→, and s∈. Examples. Being an s-junta, depth-s decision tree, depth-s formula, or degree-s polynomial are all properties of Boolean functions which behave linearly with respect to composition. As is often the case, it is straightforward to show from their definitions that these properties behave linearly. Many properties which do not a priori behave linearly can be converted into ones that do by applying an appropriate transformation to their size. For example, the property 𝒫_s={size-exp(s) decision trees} behaves linearly. Strong composition theorems for properties. A property 𝒫 which behaves linearly with respect to function composition is said to admit a strong composition theorem if the upper bound from <Ref> can be shown to be nearly tight. This definition generalizes the relation <ref>. A parametrized property 𝒫={𝒫_s}_s∈ admits an (,,λ)-composition theorem with respect to g:^k→ for ,∈ (0,1) and a constant λ>0 if _𝒟(f,𝒫_s)> ⇒ _𝒟^k(g∘ f,𝒫_λ ks)> for all f:^n→ and distributions 𝒟 over ^n. Strong composition theorems depend on the combining function g. For example, if g is a constant function then one would not expect the upper bound from <Ref> to be tight. For this reason, the dependence on g is made explicit in the definition of strong composition theorem. Roughly speaking, the definition says that if a property 𝒫 behaves linearly and admits a strong composition theorem with respect to g, then composing with g turns a function in 𝒫_s into one in 𝒫_s k and turns a function slightly far from 𝒫_s into one very far from 𝒫_Θ(s k). For a fixed , having an (,,λ)-composition theorem with respect to g becomes stronger as approaches 0. In general, we are interested in (,,λ)-composition theorems when ≫. The parameter λ is built into the definition to tolerate a small amount of slack between the upper and lower bounds on g∘ f. For many applications, this constant factor is necessary. We are now equipped to state our main boosting theorem. Let 𝒫={𝒫_s}_s∈ be a property which behaves linearly and admits an (,,λ)-composition theorem with respect to g:^k→. If 𝒫 is (,s,s')-testable in q(,s,s') queries, then it is (,s,λ^-1 s')-testable using k· q(,ks,ks') many queries. Let be an algorithm for (,s,s')-testing 𝒫. Given queries to a function f:^n→ and random samples from a distribution 𝒟 over ^n, we (,s, λ^-1 s')-test 𝒫 using the procedure in <Ref> where is given an instance of (,ks,ks')-testing 𝒫. Query complexity. The target g∘ f:^nk→ is a (, ks,ks')-testing instance for . Therefore, makes q(,ks,ks') queries to the target g∘ f:^nk→ before terminating. 
Our tester makes k queries to f for each query to g∘ f. So our tester for f makes k· q(,ks,ks') queries in total. Correctness.In the Yes case, f∈𝒫_s. We then have g∘ f∈𝒫_sk since 𝒫 behaves linearly. This ensures that outputs Yes. In the No case, _𝒟(f,𝒫_s'/λ)>. We then have _^k(g∘ f,𝒫_ks')>λ since 𝒫 admits an (,,λ)-composition theorem. This ensures that outputs No. §.§ Implications for current landscape of junta testing Our results have new implications for tolerantly testing juntas. In this regime, the Yes case of <Ref> is relaxed to only require that f is close to an r-junta over 𝒟. Given parameters r≤ r' and ≤, queries to an unknown function f:^n→, and random samples from a distribution 𝒟 over ^n, distinguish between * Yes: f is -close to being an r-junta under 𝒟, and * No: f is -far from being an r'-junta under 𝒟. In all of our applications, we will be using <Ref>, or a variant of it, with g set to _k. For this reason, we start with some useful properties about the noise stability of parity. §.§.§ Noise stability of parity under general product distributions For any f:^n →, distribution over ^n, junta budget R, and R-junta h, _^k(_k ∘ f, h) ≥min_r_1+⋯+r_k=R1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2. Our proof of <Ref> will use the multivariate noise stability of parity. For any μ∈ (-1,1), ρ⃗∈ [0,1]^k, _μ, (_k) = ∏_i ∈ [k]*_i + (1-_i)·μ^2=∏_i ∈ [k]*1 - (1-_i)(1-μ^2). Note that _k(y_1, …, y_k) = ∏_i ∈ [k]y_i. Therefore, _μ, (_k) = _∼ (π_μ)^k, *∏_i ∈ [k]_i _i. Each pair (_i, _i) are independent of another, so _μ, (_k) = ∏_i ∈ [k]*_i _i. The distribution of (_i, _i) can be succinctly described: With probability _i, _i = _i. Otherwise, they are each independent draws from π_μ. Therefore, *_i _i = _i + (1-_i)·μ^2. The desired result follows from combining the above equations We apply our strong composition theorem, <Ref>. It is stated in terms of advantage and gives max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(_μ, β(r_1, …, r_k)(_k)), where we define μ = _∼[f()], and β(r_1, …, r_k) ∈ [0,1]^k is the vector β(r_1, …, r_k)_i = _(f, f̃_r_i) - μ^2/1 - μ^2 = 1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2. Applying <Ref>, max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *1 -1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2(1 - μ^2)) = max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *2·_(f, f̃_r_i)/1 - μ^2(1 - μ^2)) = max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i)). The desired result follows from = 1 - /2. §.§.§ Warmup: weak testers suffice for (0,,r,r')-testing juntas We first boost tolerant testers in the regime where is fixed to 0 in <Ref>. This version is slightly easier to state and is also the version we will use later in proving <Ref>. If juntas can be (0,,r,r')-tested using q(,r,r') queries, then for all k∈ and λ∈ (0,1), they can be (0,,r,λ^-1 r')-tested in k· q(,kr,kr') queries where =1-(1-2)^(1-λ)k/2/2. We will need to following composition theorem for juntas. It is a more precise version of <Ref> stated in terms of <Ref>. For any λ∈ (0,1), the property of being an r-junta admits an (, ,λ)-composition theorem with respect to _k for any ≤ where = 1-(1-2)^(1-λ)k/2/2. Assume that f:^n→ is -far from being an r-junta over 𝒟. We would like to show that _k∘ f is -far from being a λ r k-junta over 𝒟^k where is defined as in the lemma statement. Let r_1+⋯+r_k=λ rk be the partition of the junta budgets which minimizes the expression 1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2 from <Ref>. Let A_≤ r [k] denote the indices for which r_i≤ r and let A_>r=[k]∖ A_≤ r. 
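The closed form for the multivariate noise stability of Parity just derived is simple to verify numerically. The following sketch (illustrative bias, correlation vector, and sample count) compares a Monte Carlo estimate of Stab_{μ,ρ⃗}(XOR_k) against ∏_i (ρ⃗_i + (1-ρ⃗_i)μ²), using the same correlated-pair sampler as in the earlier stability checks.

```python
import numpy as np

def parity_stab_closed_form(mu, rho):
    rho = np.asarray(rho, dtype=float)
    return np.prod(rho + (1 - rho) * mu ** 2)       # = prod_i (1 - (1-rho_i)(1-mu^2))

def parity_stab_mc(mu, rho, trials=400_000, seed=1):
    rng = np.random.default_rng(seed)
    rho = np.asarray(rho, dtype=float)
    k = rho.size
    p_one = (1 + mu) / 2
    x = np.where(rng.random((trials, k)) < p_one, 1, -1)
    fresh = np.where(rng.random((trials, k)) < p_one, 1, -1)
    y = np.where(rng.random((trials, k)) < rho, x, fresh)    # y ~_rho x, coordinatewise
    return np.mean(x.prod(axis=1) * y.prod(axis=1))          # XOR_k of +/-1 bits is the product

mu, rho = 0.3, [0.9, 0.6, 0.4, 0.1]
print(parity_stab_closed_form(mu, rho), parity_stab_mc(mu, rho))   # should nearly agree
```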
By a counting argument, at least a (1-λ)-fraction of r_i satisfy r_i≤ r and so |A_≤ r|≥ (1-λ)k. By our assumption that f is far from being an r-junta, for these r_i, we get _𝒟(f,f_r_i)>. Therefore, we can conclude that for any λ rk-junta h:^nk→: _𝒟^k(_k∘ f,h) ≥1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2<Ref> =1 - √(∏_i ∈ A_≤ r*1 - 2·_(f, f̃_r_i)·∏_i∈ A_>r*1 - 2·_(f, f̃_r_i))/2 ≥1 - √(∏_i ∈ A_≤ r*1 - 2·_(f, f̃_r_i))/2≤1/2 > 1 - *1 - 2^(1-λ)k/2/2_𝒟(f,f_r_i)> for i∈ A_≤ r. Since h was arbitrary, this shows that _k∘ f is -far from being a λ rk-junta. <Ref> is stated in the non-tolerant regime. However, we note that the same theorem holds in the (0,,r,r')-testing regime. That is, under the conditions of <Ref>, if 𝒫 is (0,,s,s')-testable, then it is also (0,,s,λ^-1s')-testable. This is because if f is a 0-approximator of f over 𝒟, then g∘f is a 0-approximator of g∘ f over 𝒟^k. <Ref> shows that the property of being an r-junta admits an (, 1-(1-2)^(1-λ)k/2/2,λ)-composition theorem. Therefore, <Ref> shows that if juntas can be (0,,r,r')-tested in q(,r,r') queries then they can be (, r,r')-tested in k· q(,kr,kr') queries where =1-(1-2)^(1-λ)k/2/2. §.§.§ Weak testers suffice for tolerant junta testing If there is a q(r)-query tester that, given queries to f:^n→ and random samples from a distribution 𝒟, distinguishes between * Yes: f is 1/4-close to an r-junta, and * No: f is 1/3-far from every r-junta, then for every >0 and λ∈ (0,1), there is a q(r/(4))/4-query algorithm that distinguishes between * Yes: f is -close to an r-junta, and * No: f is Ω(/1-λ)-far from every λ^-1r-junta. Let 𝒯 be a q(r)-query tester for juntas that satisfies the theorem statement. Given queries to a function f:^n→ and random samples to , we design an algorithm for (,5/1-λ, r,λ^-1r)-testing f over . The algorithm is straightforward. We choose k=1/4, and run the procedure in <Ref> with g=_k:^k→ and junta size kr. Query complexity.𝒯 makes q(kr)=q(r/4) queries to the target _k∘ f:^nk→ before it terminates. Our tester makes k queries to f for each query to _k∘ f. Therefore, our tester makes k· q(r/4)=(r/4)/(4) queries in total. Correctness.For correctness, we need to show: Yes case: if f is -close to being an r-junta over , then _k∘ f is 1/4-close to being a kr-junta over ^k, and No case: if f is 5/1-λ-far from being an λ^-1r-junta over , then _k∘ f is 1/3-far from being a kr-junta over ^k. Yes case. Let f be an r-junta which -approximates f over . By a union bound: _∼𝒟^k[XOR_k∘ f()≠XOR_k∘f()] ≤_∼𝒟^k[some f(^(i))≠ f(^(i))] ≤ k·_𝒟(f,f)≤ k = 1/4. Since _k∘f is a kr-junta, this shows that _k∘ f is 1/4-close to a kr-junta. No case. If f is 5/(1-λ)-far from being a λ^-1r-junta, then <Ref> implies that _k∘ f is 1-(1-2)^(1-λ)k/2/2 far from being a λλ^-1kr=kr-junta over ^k where 5/(1-λ). Therefore, it is sufficient to show that 1-(1-2)^(1-λ)k/2/2≥1/3. We observe 2/(1-λ)k≤log_1/3(e)· which implies 3^-2/((1-λ)k)≥ e^-2≥ 1-2. It follows: 1/3≥ (1-2)^(1-λ)k/2 which provides the desired bound. §.§.§ Hardness of distribution-free tolerant junta testing We prove the following which implies <Ref>. Given queries to a function f:^n→ and random samples from a distribution 𝒟, and r≤ n, it is NP-hard under randomized reductions to distinguish between * Yes: f is 0-close an r-junta over 𝒟, and * No: f is 1/3-far from every Ω(rlog n)-junta over 𝒟. We reduce from the SetCover problem. A SetCover instance over a universe [m] is a collection of subsets 𝒮 = { S_1,…,S_n} where S_i [m]. 
The SetCover problem is to compute a minimal size subcollection {S_i_1,…, S_i_r} which covers the universe: [m]=S_i_1∪⋯∪ S_i_r. SetCover is known to be hard to approximate. Given a SetCover instance 𝒮 and a parameter r, it is NP-hard to distinguish between * Yes: 𝒮 has a size-r set cover, and * No: 𝒮 requires set covers of size Ω(rlog n). Suppose we have an algorithm 𝒯_weak for testing juntas that can distinguish between the Yes and No cases in the theorem statement. In particular, there is a (0,1/3,r,Ω(rlog n))-tester for juntas. <Ref> implies that there is a (0,,r,Ω(rlog n))-tester, 𝒯_strong, for juntas as long as satisfies 1/3≤1-(1-2)^(1-λ)k/2/2⊛. In the reduction, we will choose appropriately and use this boosted tester to solve SetCover. The reduction.The reduction from SetCover to junta testing is standard <cit.>. We will restate it here for convenience. Let 𝒮 = { S_1,…,S_n} be a SetCover instance over the universe [m] and define u^(1),…,u^(m)∈^n where (u^(j))_i = 1 if j ∈ S_i -1 otherwise. Let 𝒟 be the uniform distribution over { u^(1),…,u^(m), (-1)^n} and let f:^n→ be the function which is the disjunction of its inputs: f x_1⋯ x_n (where 1 is interpreted as true and -1 as false). We choose k=Θ(m) so that <ref> holds with Ω(1/m)<<1/m+1. We then run the boosted tester 𝒯_strong on the function f and distribution 𝒟, to test if f is 0-close to an r-junta or -far from being a Ω(rlog n)-junta (where the parameters r and Ω(rlog n) correspond to the SetCover parameters). Our algorithm for SetCover outputs Yes if and only if the tester accepts f as being 0-close to an r-junta. Runtime. If the tester 𝒯_weak runs in polynomial time, then since k=Θ(m) and =Θ(1/m), the tester 𝒯_strong runs in polynomial time. Queries to the target function f and random samples from can also be simulated in randomized polynomial time. Correctness. For correctness, we need to show: Yes case: if 𝒮 has a size-r set cover, then f is 0-close to an r-junta over , and No case: if 𝒮 requires set covers of size Ω(rlog n), then f is -far from being a Ω(klog n)-junta over . Yes case. Let S_i_1,…, S_i_r be a size-r set cover. Consider the function f=x_i_1⋯ x_i_r. Since these indices form a set cover of 𝒮, f(u^(i))=1 for all i∈ [m] and f((-1)^n)=-1. This shows _(f,f)=0. It follows that f is 0-close to an r-junta over since f is an r-junta. No case. Suppose f is an r'-junta satisfying _𝒟(f,f)< 1/m+1. The relevant variables of f must correspond to a set cover of 𝒮: if some element i∈ [m] is not covered, then f(u^(i))=f((-1)^n) and _𝒟(f,f)≥1/m+1. This shows if 𝒮 requires set covers of size Ω(rlog n) then f is 1/m+1-far from every Ω(rlog n)-junta. In particular, since <1/m+1, every Ω(rlog n)-junta is -far from f. § ACKNOWLEDGMENTS We thank the FOCS reviewers for their helpful comments and feedback. The authors are supported by NSF awards 1942123, 2211237, 2224246 and a Google Research Scholar award. Caleb is also supported by an NDSEG fellowship, and Carmen by a Stanford Computer Science Distinguished Fellowship. alpha § COUNTEREXAMPLES TO NATURAL COMPOSITION THEOREMS §.§ Counterexample to Conjecture 1 For any odd k and n ≥ k let R = (n-1)k and be the uniform distribution over ^n. There are symmetric functions g:^k → and f:^n → for which the following holds. * There is an R-junta h achieving, _^k(g∘ f, h) ≤ O(1/√(k)). * The natural strategy of dividing the budget equally achieves, _^k(g∘ f, g ∘f̃_R/k) = 1/2. We set g = _k to be the majority function on k bits, g(y_1, …, y_k) = 1 if ∑_i ∈ [k] y_i ≥ 0 -1 otherwise. 
and f = _n to be the parity function, f(x_1, …, x_n) = ∏_i ∈ [n] x_i. The following fact will be useful in giving a strategy that achieves low error. Let _1, …, _k-1 each be uniform and independent samples from . Then, for any choice of c, *∑_i ∈ [k-1]_i = c≤ O*1/√(k). We now give the junta achieving low error. Let h = _k-1∘_n. Then, * h is an ((k-1)n ≤ R)-junta. * h achieves, _^k(g∘ f, h) ≤ O(1/√(k)). Clearly h depends on only the first (k-1)n bits of its inputs, so it is an R-junta as long as (k-1)n ≤ (n-1)k, which is guaranteed by the assumption n≥ k in <Ref>. We compute h's error, _^k(g∘ f, h) = _∼^n[_k() ≠_k-1()]. In order for _k() ≠_k-1(), it must be the case that the ∑_i ∈ [k-1]_i is -1 or 0. The desired result follows from <Ref>. We'll next show the natural strategy achieves advantage 0, equivalent to error 1/2. Let f = _n and be the uniform distribution over ^n. Then, _(f, f̃_n-1) = 0. By <Ref>, it is sufficient to show that for any set |S| = n-1 and any x ∈^n, _∼[f() |_S = x_S] = 0. For any fixed x, there are two y ∈^n satisfying y_S = x_S: The first choice if y = x, and the second choice is x with a single bit flipped (the one bit not in S). One of these two choices will have a parity of +1 and one will have a parity of -1, so the average parity is 0, as desired. For any odd k, μ = 0, and = [0,…, 0], _μ, (_k) = 0. For odd k, _k is an odd function, _∼^n[_k()]. Then, _μ, (_k) = __1 ∼^k, _2 ∼^k[_k(_1)_k(_2)] = __1 ∼^k[_k(_1)]__2 ∼^k[_k(_2)] _1, _2 independent = 0 · 0 =0._k is odd The following completes the proof of <Ref>. In the setting of <Ref>, _^k(g∘ f, g ∘f̃_R/k) = 0. This follows from <Ref> and <Ref>. §.§ Counterexample to Conjecture 2 For any n ≥ 10, k ∈, and R ≤ n/2, let be uniform over ^n. There are g: ^k and f:^n → for which, for all partitions r_1 + ⋯ +r_k = R, _^k(g∘ f, g(f̃_r_1, …, f̃_r_k)) ≥ 1 - 2^-Ω(k). <Ref> is particularly surprising in light of the fact that either the constant -1 or constant 1 functions, both of which are 0-juntas, will achieve error ≤ 1/2 with respect to g ∘ f. We begin with a probabilistic construction of f achieving the following. For any n ≥ 10, there is an f: ^n → for which _∼^n[f()] ≤ 0.5 but, for all |S| ≤ n/2 and x ∈^n, _∼^n[f() | = x] > 0. Consider a random function where, for each x ∈^n, (x) ∼π_0.25. We'll show that meets the desired criteria with a strictly positive probability, proving the existence of at least one such f. Let μ() _∼^n[()]. Then μ() is the average of 2^n independent samples of π_0.25. Applying Hoeffding's inequality, [μ() > 0.5] ≤exp(-2 · (0.25)^2 · 2^n) = exp(-2^n/2). Similarly, for any |S| ≤ n/2 and x ∈^n, let μ(, S, x) _∼^n[() | = x]. μ(,S,x) the average of at least 2^n/2 independent samples of π_0.25. Once again, by Hoeffding's inequality, [μ(,S,x) ≤ 0] ≤exp(-2 · (0.25)^2 · 2^n/2) = exp(-2^n/2/2). Union bounding over all 2^n choices of S and 2^n choices for x, we have that meets the desired criteria with probability at least 1 - exp(-2^n/2) - 2^2nexp(-2^n/2/2). When n ≥ 10, the above probability is strictly positive, so such an f must exist. Let f be a function with the properties of <Ref>, and g = And_k return +1 if and only if all k of its inputs are +1. By <Ref>, for any r ≤ n/2, f̃_r is the constant +1 function. Therefore, for any r_1 + ⋯ + r_k = R, g(f̃_r_1, …, f̃_r_k) is the constant +1 function. However, _∼^k[(g ∘ f)() = +1] = (3/4)^k. 
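The Conjecture 2 counterexample can also be observed empirically. The sketch below replaces the existence argument with a direct random construction under illustrative sizes n, k (each value of f is drawn to be +1 with probability 0.625, i.e. from π_{0.25}): it spot-checks that the conditional mean of f stays positive under restrictions of half the coordinates, so every locally optimal small-junta approximator of f is the constant +1, and it then measures how badly the resulting composed-form approximator AND_k(+1,…,+1) does compared with the trivial constant -1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the appendix argument needs n >= 10 and any k, with budgets <= n/2.
n, k = 16, 8

# A random f with each value drawn from pi_{0.25}: +1 with probability 0.625.
f_table = np.where(rng.random(2 ** n) < 0.625, 1, -1).astype(np.int8)
weights = 1 << np.arange(n)

def f(x_bits):                        # x_bits: {0,1}-encoded inputs, last axis of length n
    return f_table[x_bits @ weights]

# Spot-check the key property behind the construction: after fixing any n/2 coordinates,
# the conditional mean of f stays positive, so the best r-junta approximator of f,
# for any r <= n/2, is the constant +1 function.
cond_means = []
for _ in range(20):
    S = rng.choice(n, size=n // 2, replace=False)
    fixed = rng.integers(0, 2, size=n)
    xs = rng.integers(0, 2, size=(20_000, n))
    xs[:, S] = fixed[S]
    cond_means.append(f(xs).mean())
print("min conditional mean over 20 random restrictions:", min(cond_means), "(should be > 0)")

# The composed-form approximator g(f~_{r_1}, ..., f~_{r_k}) = AND_k(+1, ..., +1) is the
# constant +1, a terrible predictor of g o f; the constant -1 does far better.
m = 200_000
X = rng.integers(0, 2, size=(m, k, n), dtype=np.int8)
inner = np.stack([f(X[:, i, :]) for i in range(k)], axis=1)
g_of_f = np.where(np.all(inner == 1, axis=1), 1, -1)
print("Pr[g o f = +1]             :", np.mean(g_of_f == 1))    # roughly 0.625**k
print("error of composed form (+1):", np.mean(g_of_f != 1))    # close to 1
print("error of constant -1       :", np.mean(g_of_f != -1))   # close to 0
```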
§.§ Counterexample to Conjecture 3 There is a g:{-1,1}^k→{-1,1}, an f:{-1,1}^n→{-1,1}, a distribution 𝒟 over {-1,1}^n, and a budget R for which no R-junta of composed form achieves optimal error among all R-juntas for g∘f with respect to 𝒟^k. We'll set k = 2 and g = And_2. Let p:{-1,1}^2 → [0,1] be defined as p(x) = 1 if x_1 = x_2 = 1, p(x) = 3/4 if x_1 ≠ x_2, and p(x) = 3/5 if x_1 = x_2 = -1. We begin by describing a probabilistic construction: given the input x, the value of 𝐟(x) will itself be a random variable. In particular, we set n = 2, and 𝐟(x) is set to +1 with probability p(x) and to -1 otherwise. This probabilistic construction will later be derandomized. We allow a junta budget of R = 4. Next, we construct an optimal approximator for g∘𝐟. Given an input x^(1), x^(2), let 𝐛_1 = 𝐟(x^(1)) and 𝐛_2 = 𝐟(x^(2)). For succinctness, we'll use p_i to refer to Pr[𝐛_i = 1]. Then, since g = And_2, the optimal approximator returns 1 iff p_1 p_2 ≥ 1/2. For our particular 𝐟, the only choices for p_i are 3/5, 3/4, and 1. As a result, h^(opt)(p_1,p_2) = 1 if p_1 = 1 or p_2 = 1, h^(opt)(p_1,p_2) = 1 if p_1 = p_2 = 3/4, and h^(opt)(p_1,p_2) = -1 otherwise. However, no composed form can achieve the above optimal approximator. Recall that composed-form approximators are of the form h(q_1, q_2), where each q_i has range {-1,1}. The fact that the size of this range is 2, while there are three possible choices (3/5, 3/4, 1) for p_i, is the crux of the issue. In more detail, of the three choices (3/5, 3/4, 1) for p_i, q_1 must classify at least two of them the same way. This gives three cases. * If q_1 classifies 3/4 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/5 and p_1 = 1, p_2 = 3/5, and so cannot be optimal. * If q_1 classifies 3/5 and 3/4 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/4 and p_1 = 3/5, p_2 = 3/4, and so cannot be optimal. * If q_1 classifies 3/5 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/5, p_2 = 3/4 and p_1 = 1, p_2 = 3/4, and so cannot be optimal. In all three cases, a composed-form approximator cannot achieve the optimal error; it will always be off by some constant. To derandomize this construction, we set n ≫ 2 sufficiently large. For each x ∈ {-1,1}^n, we sample the value f(x) to be +1 with probability p(x_1,x_2) and -1 otherwise. Note that after randomly selecting the value of f on each input x ∈ {-1,1}^n, f is a deterministic function. Following the same arguments as in <Ref>, with high probability over the random choices in defining f, the error of the optimal 4-junta and of the optimal composed-form 4-junta for g∘f are within ±ε(n) of what they are for g∘𝐟, where ε(n) goes to 0 as n →∞. Therefore, for sufficiently large n, there exists an f meeting the desired criteria.
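The three-case argument can also be verified exhaustively. The sketch below (illustrative) enumerates every composed-form predictor h(q_1, q_2) with binary q_i over the reachable pairs (p_1, p_2) and compares its expected error with that of the unrestricted optimal predictor h^(opt) for the probabilistic construction above; running it shows a constant gap, matching the case analysis.

```python
# Exhaustive check that no composed-form predictor h(q1(.), q2(.)) matches the
# optimal predictor for g = And_2 with p_i in {3/5, 3/4, 1} as above.
from itertools import product

P = {1.0: 0.25, 0.75: 0.5, 0.6: 0.25}          # distribution of p_i = Pr[f(x^(i)) = +1]
vals = list(P)

def expected_error(predict):                    # predict(p1, p2) returns +1 or -1
    total = 0.0
    for (p1, w1), (p2, w2) in product(P.items(), P.items()):
        p_pos = p1 * p2                         # probability the true label is +1
        total += w1 * w2 * (1 - p_pos if predict(p1, p2) == 1 else p_pos)
    return total

opt = expected_error(lambda p1, p2: 1 if p1 * p2 >= 0.5 else -1)   # h^(opt)

best_composed = 1.0
for q1 in product([-1, 1], repeat=3):           # q1, q2: {3/5, 3/4, 1} -> {-1, +1}
    for q2 in product([-1, 1], repeat=3):
        for h in product([-1, 1], repeat=4):    # h: {-1, +1}^2 -> {-1, +1}
            table = dict(zip(product([-1, 1], repeat=2), h))
            pred = lambda p1, p2: table[(q1[vals.index(p1)], q2[vals.index(p2)])]
            best_composed = min(best_composed, expected_error(pred))

print(f"optimal error {opt:.4f}  vs  best composed-form error {best_composed:.4f}")
```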
http://arxiv.org/abs/2307.06344v1
20230712161423
The Whole Pathological Slide Classification via Weakly Supervised Learning
[ "Qiehe Sun", "Jiawen Li", "Jin Xu", "Junru Cheng", "Tian Guan", "Yonghong He" ]
q-bio.QM
[ "q-bio.QM", "cs.CV", "eess.IV" ]
Due to its superior efficiency in utilizing annotations and addressing gigapixel-sized images, multiple instance learning (MIL) has shown great promise as a framework for whole slide image (WSI) classification in digital pathology diagnosis. However, existing methods tend to focus on advanced aggregators with different structures, often overlooking the intrinsic features of H&E pathological slides. To address this limitation, we introduce two pathological priors: nuclear heterogeneity of diseased cells and spatial correlation of pathological tiles. Leveraging the former, we propose a data augmentation method that utilizes stain separation during extractor training via a contrastive learning strategy to obtain instance-level representations. We then describe the spatial relationships between tiles using an adjacency matrix. By integrating these two views, we design a multi-instance framework for analyzing H&E-stained tissue images based on pathological inductive bias, encompassing feature extraction, filtering, and aggregation. Extensive experiments on the Camelyon16 breast dataset and the TCGA-NSCLC lung dataset demonstrate that our proposed framework can effectively handle tasks related to cancer detection and differentiation of subtypes, outperforming state-of-the-art medical image classification methods based on MIL. The code will be released later. § INTRODUCTION Histopathological slide examination is widely regarded as the most reliable and accurate standard for the clinical diagnosis of many diseases <cit.>. During the diagnostic process, pathologists must first locate regions of interest (ROIs) in the low-magnification field of view and then carefully examine them at high magnification for signs of abnormal tissue structure, the presence of a notable number of inflammatory cells, and other relevant factors. In clinical practice, although the majority of breast, colon, and cervical tissue samples obtained through population screening, as well as many lymph node sections removed during surgery, are negative, they still require meticulous screening <cit.>. This process is time-consuming and labor-intensive. To make matters worse, in some regions with limited medical resources, even obtaining a simple pathology report can be challenging, leading to delayed treatment of disease. The situation remained unresolved until the advent of scanners capable of digitizing stained pathology sections into pyramid-structured images, known as whole slide images (WSIs), along with the development of artificial intelligence. Owing to the immense success of deep learning on natural image tasks, computational pathology has also experienced a significant boost in development. Nevertheless, two major challenges remain in transferring deep models to the field of pathological images. First, a WSI at the highest magnification level is a three-dimensional, high-resolution image that contains at least a billion pixels. Scaling it down to a size that can be processed by GPUs results in the loss of cellular-level and tissue-level information. The current solution is to cut it into patches of only about 10^4 pixels each. However, this approach poses a second challenge: obtaining patch-wise labels is difficult and requires experts to annotate millions of images.
Slide-level annotations, which are more accessible, only include basic clinical information such as disease progression, molecular subtypes, and survival rates. Therefore, a current research hotspot is how to fully utilize these clinical-level labels without requiring additional manual annotations. Multiple Instance Learning (MIL) is a special type of weakly-supervised method<cit.> that infers fine-grained information through coarse-grained annotations such as clinical diagnoses. In this context, slide and patch correspond to the concepts of bag and instance, respectively, where the attributes of a bag are the sum of the features possessed by its instances. In other words, a positive bag must contain at least one positive instance, while all instances in a negative bag should be negative. The process of MIL involves the extraction, selection, and aggregation of instance features. Various attention-based aggregators constructed by neural networks have been the key to its success in pathological tasks<cit.>, but little research has been done on feature extractors and selection strategies<cit.>. Most MIL methods use deep residual network pre-trained on the ImageNet<cit.> dataset as instance feature extractors<cit.>. However, the texture and color of natural images differ significantly from those of pathological images stained with hematoxylin-eosin (H&E) dye. To obtain suitable pathological representations without introducing additional supervised signals, self-supervised methods have become crucial. As shown in Figure 1, contrastive learning (CL) methods can effectively distinguish pathological images in feature space, while ResNet<cit.> pre-trained on Imagenet fails. Despite some previous works attempted to address this unrealistic situation, they still lack the ability to utilize the inherent inductive biases of pathological images to guide the classification results. In this paper, we introduce a data augmentation method based on stain separation, which is integrated into the existing contrastive learning framework, allowing the feature extractor to focus on more diagnostically valuable information. Stain separation is the process of separating H&E images into individual images stained with hematoxylin and eosin, as shown in Figure 2. Hematoxylin is a bluish-purple basophilic dye and mainly stains chromatin in the nucleus and nucleic, while eosin is an acidic dye that stains components in the cytoplasm and extracellular matrix red-pink<cit.>. Nuclear abnormality is one of the indicators for pathological diagnosis, and the significance of using separated images that are distinguished from grayscale images as sample pairs in contrastive learning lies in separating the foreground of cell nuclei from the background of cytoplasm, thereby guiding the feature encoder to focus more on nuclear variations. In addition, we also introduced another pathological prior: spatial correlation, which means that adjacent patches in spatial position on the WSI have mutual attention. Therefore, we represented the spatial relationship of all tiles in a slide as an adjacency matrix and used it as the input to a graph attention network (GAT) to constrain the attention flow between representations. Based on this consideration, we designed a aggregation network and conducted experiments on two publicly available pathological datasets——Camelyon16 and TCGA-NSCLC, achieving better performance than the state-of-the-art MIL methods. 
Our main contributions can be summarized as follows: * We proposed a stain-separation based data augmentation technique and applied it to train MIL feature extractors with contrastive learning. * We introduced the absolute positional relationship between tiles to constrain the mutual attention, and designed a graph attention aggregator according to it. * Abundant experiments on two public datasets demonstrate the effectiveness of our framework for slide-level diagnosis. § RELATED WORK In recent years, the development of deep learning has led to the gradual replacement of MIL algorithms based on shallow structures. We will introduce the current situation of deep MIL models from two perspectives: their development and applications. §.§ Deep Multiple Instance Learning Early frameworks used simple maximum or average pooling as feature aggregators<cit.>, but subsequent studies suggested that parameterized neural networks were better suited for fitting the contributions of different instances and achieving better results<cit.>. Ilse et al.<cit.> categorized deep MIL methods into embedding-level and instance-level, following the theorem proposed by Zaheer et al.<cit.>. The key component of embedding-level approach is the aggregator, which incorporates attention mechanisms to account for the varying contributions of individual instances to the bag<cit.>. The advent of Transformer<cit.> has enabled the use of self-attention mechanisms for modeling intrinsic relationships between instances, and it has been demonstrated to reduce the information entropy of MIL, thereby mitigating uncertainty<cit.>. It should be noted that some recent works have recognized the importance of effective bag embeddings. Due to learnability, instance-level approach is employed for training the extractor<cit.>. Nevertheless, such approaches are inherently designed for binary classification problems and may not be well-suited for other types of tasks. To obtain more universal feature extractors, contrastive learning methods, such as SimCLR<cit.> and DINO<cit.>, have been applied to maximize the separation of patches in the feature space<cit.>. Aside from contrastive learning, variational autoencoders (VAEs) and generative adversarial networks (GANs) can also be used as methods for training feature extractors<cit.>. Several studies have also addressed how to filter instance-level embeddings to obtain the optimal bag-level representations, and reinforcement learning (RL) has been employed to select the most representative patches instead of random selection<cit.>. As WSIs are organized in a pyramid data structure, graph neural networks (GNNs), eg. graph convolutional neural networks (GCNs) have been utilized to model the inter-layer and intra-layer relationships<cit.>. Although we employed graph neural network like the aforementioned works, our purpose was to constrain inter-instance attention and learn more interpretable bag representations. To integrate the multi-scale information inherent in pathological images, patch-level features at different magnifications have been utilized as input to the aggregator to model both fine-grained and coarse-grained information of the diseased tissue<cit.>. §.§ Pathology Applications based on MIL In the analysis of whole slide images, multiple instance learning is widely used due to its label-efficiency and interpretability. 
This has been demonstrated on several large, diverse, private datasets, including but not limited to colorectal cancer, lung cancer, prostate cancer, bladder cancer, and skin cancer<cit.>. However, in practice, MIL models have not always met the clinical requirements for small datasets. To address this issue, Zhang et al.<cit.> introduced the concept of "pseudo-bags" to artificially expand the dataset. MIL has also shown excellent performance in segmentation, clustering, and other tasks<cit.>. Moreover, the success of MIL on immunohistochemistry (IHC) images has opened up possibilities for multimodal analysis beyond just hematoxylin-eosin (H&E) images<cit.>. § METHOD We developed a weakly-supervised learning framework for slide-level classification based on two pathological priors. In this section, we will describe how we incorporated these priors into MIL framework and provide an overview of our model. §.§ Multiple Instance Learning Datasets used for multiple instance learning typically contain instances and bags with a hierarchical relationship. There are N bags { ( X_i, Y_i ) } ^N_i=1 in a dataset. Each bag consists of n instances, where n varies across different bags. Then the label of X_i = { x_i^1 , x_i^2,⋯, x_i^n| x_i^k∈ℝ^H × W × C_h} is Y_i∈ℝ^C. In the above equations, C_h=3 for RGB images while C denotes classes for classification task. Assuming that the true label of an instance is denoted by y_i^k∈ℝ^C which is actually unknown, in binary problem, we define MIL as: Y_i= 0, iff ∑_k=1^ny_i^k=0 1, otherwise Expanding the above equation to multiclass classification, we have: Y_i = S(X_i)=g(∑_x_i∈ X_if(x_i)) Where S ( · ) is a scoring function for instances in bag X_i, which is permutation-invariant to x_i, while f ( · ) and g ( · ) are two different transformations<cit.>. Depending on the choice of transformations, MIL can be classified into two categories: instance-level approach and embedding-level approach. Instance-level approach utilizes an instance-level classifier as f ( · ), with g ( · ) being an identity function. However, insufficient training during training may introduce unnecessary error. On the other hand, embedding-level approach tends to construct a bag-level aggregator as g ( · ), with f ( · ) serving as a feature extraction network that is solely used to generate instance embeddings. Nonetheless, due to the absence of large-scale pathological databases such as ImageNet, the extractor may fail to accurately capture the crucial features of instances. We rethought the fomulation of MIL and employed contrastive learning to train the feature extractor on top of the embedding-level methods. By using such a self-supervised method, we can guide the representations of slides with specific data augmentation techniques while minimizing the initial error. §.§ Two Priors Nuclear Heterogeneity of Diseased Cell. Abnormalities in the nucleus and chromosomal organization are hallmarks of many diseases, including cancer<cit.>. Pathologists rely on these aberrations to diagnose and grade tumors. For instance, in low-grade ductal carcinoma in situ (DCIS) of the breast, cells are small, regular, and evenly distributed, with nuclei located centrally. By contrast, high-grade carcinoma features large and irregular nuclei. Intermediate-grade falls somewhere in between. The degree of malignancy is directly associated with the tumor's rate of progression, metastasis, and patient survival<cit.>. 
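To make the bag-labeling rule and the embedding-level decomposition above concrete, here is a toy sketch with random stand-in data; the attention pooling is only one generic choice of the aggregation g(·) and is not the aggregator proposed in this paper.

```python
# Toy illustration of the MIL formulation above (random stand-in data).
import numpy as np

rng = np.random.default_rng(0)

def bag_label(instance_labels):
    # Binary MIL rule: a bag is positive iff at least one instance is positive.
    return int(np.any(np.asarray(instance_labels) == 1))

def f_embed(x, W):
    # Stand-in for the instance-level transformation f(.)
    return np.tanh(x @ W)

def g_aggregate(E, v):
    # Stand-in for a permutation-invariant bag-level aggregation g(.)
    alpha = np.exp(E @ v)
    alpha /= alpha.sum()
    return alpha @ E                            # attention-weighted bag embedding

bag = rng.normal(size=(7, 16))                  # one bag with 7 "instances"
W, v = rng.normal(size=(16, 8)), rng.normal(size=8)
print("label of a bag with instance labels [0, 0, 1, 0]:", bag_label([0, 0, 1, 0]))
print("bag-level embedding shape:", g_aggregate(f_embed(bag, W), v).shape)
```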
Hematoxylin and eosin (H&E) is one of the most widely used staining methods in pathological diagnosis. Hematoxylin displays a high affinity for chromatin within the cell nuclei, yielding a bluish-purple hue of the cell nucleus. Then the presence of nuclear abnormalities can be assessed by pathologists through visual observation. Motivated by this, we decomposed H&E-stained RGB images into H and E components, and subsequently designed an image data augmentation technique to guide the encoder in sensitively capturing the morphological changes of the cell nucleus. In the subsequent section, we will provide a detailed exposition of this data augmentation method. Spatial Correlation of Pathological Tiles. Almost all computational pathology methods involve dividing WSIs into patches to accommodate GPU memory. In the context of MIL, these patches are regarded as individual instances, and their collective representation forms the basis for evaluating the corresponding WSI. Given that the cutting process is typically automated and uncontrollable, a region of cancerous tissue may be shared among several patches, leading to spatially adjacent patches having similar properties. Consequently, during the aggregation process, graph attention may be more appropriate than self-attention, as non-adjacent patches, despite belonging to the same class, lack inherent coupling. §.§ Pathological Prior Based MIL We undertake a reexamination of the limitations inherent in existing MIL frameworks<cit.> and, leveraging the two aforementioned pathological priors, develop an innovative MIL framework, as illustrated in Figure 3. In order to eliminate noise from the background and optimize training efficiency, we use the OTSU algorithm to obtain foreground masks for the WSIs and generate patches at a specific magnification according to them. These patches are used to train a feature extractor with contrastive learning. For ease of exposition, we denote the set of patches obtained from the i-th slide as X_i = { x_i^1 , x_i^2,⋯, x_i^n| x_i^k∈ℝ^H × W × 3}, with corresponding labels Y_i∈ℝ^C. Given that the efficacy of contrastive learning is sensitive to the data augmentation scheme utilized during training<cit.>, we enhance the existing data augmentation strategy by incorporating random H&E separation to improve instance-level embeddings with respect to nuclear heterogeneity. Subsequently, we pass the patches through the feature extractor on a per-bag basis, and concatenate them to obtain the bag-level embedding E_i. This process can be represented as: e⃗_⃗i⃗^⃗k⃗ = f_θ ( x_i^k ) , k=1, 2, …, n E⃗_⃗i⃗ = | | _k=1^ne⃗_⃗i⃗^⃗k⃗ where θ represents parameters of the extractor f_θ, and | | is concatenation operation. Note that during extraction, θ is frozen. Ultimately, E_i serves as the input to train the aggregator g_τ, as follows: Ŷ_i = g_τ ( E_i ) where Ŷ_̂î is the predicted label. In the following, we will present a detailed account of our data augmentation strategy as well as the specific structure of the aggregator. Contrastive Learning for Extractor. Due to the influence of staining agents, the color gamut of pathological images is significantly narrower than that of natural images. Therefore, features such as texture and shape are more critical than color. In most MIL methods, a ResNet pre-trained on ImageNet is directly transferred as the feature extractor. However, this often leads to poor instance discrimination in the feature space, as depicted in Figure 1. 
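Whatever pre-training is used for f_θ, the extraction-and-aggregation pipeline described above has the same structure; the following is a minimal PyTorch-style sketch with placeholder modules and dimensions (patches are kept small here only to keep the toy light), with θ frozen during aggregator training as in the text.

```python
# Minimal sketch of the pipeline: e_i^k = f_theta(x_i^k), E_i = concat_k e_i^k,
# y_hat = g_tau(E_i). Modules and dimensions are placeholders.
import torch
import torch.nn as nn

class BagPipeline(nn.Module):
    def __init__(self, extractor: nn.Module, aggregator: nn.Module):
        super().__init__()
        self.extractor = extractor
        for p in self.extractor.parameters():    # theta is frozen during MIL training
            p.requires_grad_(False)
        self.aggregator = aggregator

    def forward(self, patches):                  # patches: (n, C, H, W) for one bag
        with torch.no_grad():
            e = self.extractor(patches)          # (n, d) instance-level embeddings
        E = e.unsqueeze(0)                       # (1, n, d) bag-level embedding
        return self.aggregator(E)                # (1, num_classes) slide prediction

# Toy stand-ins; a real g_tau must be permutation-invariant and handle variable
# bag sizes (e.g., attention pooling), so the flatten+linear head is only a placeholder.
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
aggregator = nn.Sequential(nn.Flatten(start_dim=1), nn.LazyLinear(2))
model = BagPipeline(extractor, aggregator)
logits = model(torch.randn(12, 3, 64, 64))       # one bag of 12 small patches
print(logits.shape)                              # torch.Size([1, 2])
```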
To address this issue and avoid introducing additional manual annotation, we employ contrastive learning, a self-supervised method, to train the feature extractor f_θ(·). Among the state-of-the-art contrastive learning methods, SimSiam stands out for its ability to learn stable instance-level embeddings even with small batch sizes. In detail, all patches constitute a sample space Ω and are packed into batches. For each patch x ∈ Ω, a pair of samples (x_1, x_2) is generated through random data augmentation. They serve as positive samples for each other, while all other samples in the same batch are negative samples for them. The pairs (x_1, x_2) are then fed into f_θ(·) and a projection MLP to obtain their latent vectors (z_1, z_2), which are further fed into a prediction MLP to maximize their consistency. In particular, for H&E pathological images, we add random H&E separation to the existing data augmentation scheme. Random H&E Separation. To obtain images stained with a single dye, we utilized the Vahadane method<cit.> for stain separation. For a given pathological image, the relative optical density matrix V ∈ ℝ^{m×n} can be expressed as a product of the stain color appearance matrix W ∈ ℝ^{m×r} and the stain density maps H ∈ ℝ^{r×n}, where m is the number of channels, r is the number of stains, and n is the number of pixels: V = log(I_0 / I) = WH, where I represents the matrix of RGB intensities and I_0 is the illuminating light intensity (usually 255 for 8-bit images). We can then estimate W and H by solving a sparse non-negative matrix factorization problem: min_{W,H} (1/2)‖V - WH‖_F^2 + λ ∑_{j=1}^{r} ‖H(j,:)‖_1, s.t. W, H ≥ 0, ‖W(:,j)‖_2^2 = 1. From the stain density maps H, we can derive the H-channel and E-channel images, denoted I_h and I_e, respectively: I_h = I_0 exp(-H[0,:]) and I_e = I_0 exp(-H[1,:]). In practical applications, it is imperative to preserve the color features of pathological images. To this end, we introduce a probabilistic parameter p, which stochastically converts patches to either their H-channel or E-channel images (a simplified code sketch of this augmentation is given below). Spatially-constrained aggregation network. In deep MIL models, attention mechanisms have been widely used in aggregators. Although the bag-level embedding is a whole, it is composed of multiple instance-level embeddings, and according to the formulation of MIL these instance-level embeddings contribute differently to the final prediction. For example, in cancer detection, positive instances carry larger weights in determining the final diagnosis of a slide, while negative instances in a negative slide have a more uniform impact on the result. Additionally, semantic information between instances should also be correlated. A self-attention mechanism is therefore often used to model this, and it has been demonstrated to reduce the uncertainty of MIL<cit.>. However, this approach has significant limitations, as there may not be intrinsic connections between every pair of instance-level embeddings. Judging from the true distribution of lesions in the slides, only instances that are spatially close are likely to have consistent attributes. Taking this into consideration, we propose an aggregator that employs graph attention<cit.> to restrict the flow of mutual attention. In the final pooling stage, we design a global attention module that modifies the calculation of self-attention to better fit the contributions of each instance, as shown in Figure 4.
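As referenced above, a simplified sketch of the random H&E separation augmentation follows. It replaces the sparse-NMF estimate of the Vahadane method with a fixed reference stain-color matrix (commonly used H&E stain vectors, assumed here purely for illustration) and converts the single-stain density maps back to grayscale channel images as in the equations above.

```python
# Simplified random H&E separation: Beer-Lambert optical density + least-squares
# unmixing against a fixed reference stain matrix (a stand-in for sparse NMF).
import numpy as np

W_ref = np.array([[0.65, 0.07],     # columns: hematoxylin, eosin; rows: R, G, B
                  [0.70, 0.99],     # reference stain vectors, assumed for illustration
                  [0.29, 0.11]])
W_ref /= np.linalg.norm(W_ref, axis=0)

def random_he_separation(rgb, p=0.5, I0=255.0):
    """With probability p, replace an RGB patch by its H- or E-channel image."""
    rng = np.random.default_rng()
    if rng.random() > p:
        return rgb
    V = -np.log((rgb.reshape(-1, 3).T.astype(float) + 1.0) / I0)   # optical density (3, n); +1 avoids log(0)
    H = np.linalg.lstsq(W_ref, V, rcond=None)[0].clip(min=0)       # stain density maps (2, n)
    channel = rng.integers(2)                                       # 0: H channel, 1: E channel
    I_c = I0 * np.exp(-H[channel]).reshape(rgb.shape[:2])
    return np.repeat(I_c[..., None], 3, axis=2).clip(0, 255).astype(np.uint8)

patch = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)  # toy patch
print(random_he_separation(patch, p=1.0).shape)
```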
In the graph attention module, the adjacency matrix A of the graph attention layer is generated from the absolute positional indices of the instances: A_{i,j} = 1 if 0 ≤ d_{i,j} ≤ √2, and A_{i,j} = 0 if d_{i,j} > √2, where d_{i,j} denotes the Euclidean distance between the coordinates of the i-th and j-th instances. Because self-attention plays a non-negligible role, the indices i and j may be equal. We then leverage A to compute masked attention, which only assigns attention to the neighbor set N_i of instance x_i (i.e., to instances x_j with j ∈ N_i). The attention score between the node vector e_i of instance x_i and the node vector e_j of a neighboring instance is calculated as: α_{i,j} = exp(LeakyReLU(a^T [W e_i ‖ W e_j])) / ∑_{k ∈ N_i} exp(LeakyReLU(a^T [W e_i ‖ W e_k])), where W denotes a weight matrix that performs a linear transformation of the input features and a is a learnable weight vector implemented as a fully connected layer. The output node vector e'_i is then defined as: e'_i = σ(∑_{j ∈ N_i} α_{i,j} W e_j), where σ is a non-linear activation function. Two successive graph attention layers are employed. In the final pooling stage, we apply global attention to the bag-level representation E_i to derive weights over its spatial dimension, inspired by the self-attention mechanism, which in turn models the contribution of the instance-level embeddings within the bag: E'_i = sigmoid(φ(E_i W_Q (E_i W_K)^T) / √(d_K)) E_i W_V, where the weight matrices W_Q, W_K, and W_V generate the query, key, and value vectors, respectively, and φ denotes a dimensionality-reduction operation used to produce the spatial attention scores; average pooling was eventually chosen. E'_i is finally projected into a low-dimensional space to produce the slide-level prediction (a simplified code sketch of the adjacency construction and masked attention is given below, after the experimental setup). § EXPERIMENT In our experiments, we evaluated our approach on two publicly available clinical pathology datasets: Camelyon16 and TCGA-NSCLC. These datasets offer a diverse range of MIL problems, spanning balanced/unbalanced and single/multi-class scenarios. We conducted comparative experiments to assess the efficacy of our aggregator. Moreover, to corroborate the effectiveness of the individual components of our proposed framework, we carried out ablation studies. §.§ Dataset Camelyon16<cit.> is a public, unbalanced dataset for metastasis detection in breast cancer, focused on differentiating between cancer and non-cancer cases; it consists of 270 slides for training and 129 for testing. Following pre-processing with the OTSU algorithm, we acquired approximately 460,000 patches at 20× magnification, with an average of 1704 patches per slide. TCGA-NSCLC<cit.> includes two subtype projects, Lung Squamous Cell Carcinoma (LUSC) and Lung Adenocarcinoma (LUAD), for a total of 1034 diagnostic WSIs, including 527 LUAD slides and 507 LUSC slides. After pre-processing, the mean number of patches extracted per slide at 20× magnification is 3114. §.§ Experiment Setup and Evaluation Metrics To obtain non-overlapping 256 × 256 patches, we employed the OTSU algorithm to generate foreground masks for all whole slide images. For the Camelyon16 dataset, we split the official training set into training and validation sets in an 8:2 ratio. The model was trained for 50 epochs on the training set, and the checkpoint that performed best on the validation set was evaluated on the official test set.
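As referenced in the Method section, the spatial adjacency construction and a single masked graph-attention layer can be sketched as follows (PyTorch, illustrative dimensions); the full aggregator additionally stacks two such layers and applies the global attention gate.

```python
# Adjacency mask from patch grid coordinates (cutoff sqrt(2), self-loops included)
# and one masked graph-attention layer following the equations above (simplified).
import torch
import torch.nn as nn
import torch.nn.functional as F

def adjacency(coords, cutoff=2.0 ** 0.5):
    d = torch.cdist(coords, coords)              # pairwise Euclidean distances d_ij
    return (d <= cutoff).float()                 # A_ij = 1 iff 0 <= d_ij <= sqrt(2)

class MaskedGraphAttention(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)
        self.a = nn.Linear(2 * d_out, 1, bias=False)

    def forward(self, e, A):                     # e: (n, d_in), A: (n, n)
        We = self.W(e)
        n = We.size(0)
        pair = torch.cat([We.unsqueeze(1).expand(n, n, -1),
                          We.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.a(pair).squeeze(-1))         # (n, n)
        scores = scores.masked_fill(A == 0, float("-inf"))      # attend to neighbors only
        alpha = torch.softmax(scores, dim=-1)
        return F.elu(alpha @ We)                 # e'_i = sigma(sum_j alpha_ij W e_j)

coords = torch.tensor([[0., 0.], [0., 1.], [1., 1.], [5., 5.]])  # toy patch grid
layer = MaskedGraphAttention(16, 8)
print(layer(torch.randn(4, 16), adjacency(coords)).shape)        # torch.Size([4, 8])
```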
For the TCGA-NSCLC dataset, we randomly divided all slides into 80% training and 20% validation, using four-fold cross-validation to assess model performance. We adopted accuracy, area under curve (AUC), and F1-score as evaluation metrics to measure the classification performance of model. §.§ Implementation Details To train the feature extractor, we utilized SimSiam<cit.> and incorporated random H&E separation in addition to random crop and color distortion. We employed Adam optimizer with an initial rate of 1e-4 and decayed the learning rate with the cosine decay schedule. The size of mini-batch was 256 and ResNet50 was selected as backbone. During MIL training, the feature of each patch is embedded in a 1024-dimensional vector by pre-trained extractor. We used Lookahead optimizer<cit.> with a constant learning rate of 2e-4 and weight decay of 1e-5. The mini-batch was 1. §.§ Baseline The baselines we chosed include deep models with traditional pooling operators such as mean-pooling, max-pooling and the current state-of-the-art embedding-level models, the attention gate based pooling operator ABMIL<cit.>, non-local attention based pooling operator DSMIL<cit.>, single-attention-branch CLAM-SB<cit.>, multi-attention-branch CLAM-MB<cit.>, self-attention based aggregator TransMIL<cit.>, and RNN based aggregation MIL-RNN<cit.>. Furthermore, we also evaluate the instance-level approach MIL-Score in our experiments. §.§ Slide-level Classification Results of the cancer/non-cancer detection task on Camelyon16 and the subtypes classification task on TCGA-NSCLC are presented in Table 1. The experimental settings for other comparative methods are consistent with the official code. On the Camelyon16 dataset, only a small fraction of regions exhibit malignant growth. Moreover, the distribution of positive and negative slides manifests a remarkable degree of imbalance. All deep MIL models exhibit superior performance compared to traditional pooling operations. Furthermore, our proposed model outperforms the state-of-the-art ABMIL method in terms of accuracy, AUC, and F1 Score by 1.32%, 1.24%, and 1.44%, respectively. Notably, the DSMIL method, which shares our approach of utilizing contrastive learning, also attains promising results. On TCGA-NSCLC, LUAD and LUSC exhibit significant differences in tissue structure, and the affected area accounts for over 80% of the total tissue region. As there is no patch quantity imbalance between the two classes, instance-level MIL Score methods show great potential. Our proposed method achieves competitive performance compared to highly effective TransMIL and DSMIL models, with an improvement of 0.24% and 0.31% in accuracy and AUC, respectively. Notably, DSMIL outperforms our method in terms of F1 Score. In addition, nuclear heterogeneity is of paramount importance in cancer detection, as cancerous cell nuclei exhibit distinct morphological differences from normal cell nuclei. As a result, our proposed method demonstrates a more significant improvement on Camelyon16, whereas it may not be the case on TCGA-NSCLC. §.§ Ablation Study Our model's primary contribution lies in the introduction of two pathological priors. With this in mind, we conducted a thorough ablation study on the pre-training strategy of the feature extractor, as well as the dimension decay method in the global attention gate of the aggregator. A series of comprehensive experiments conducted on the Camelyon16 dataset confirmed the effectiveness of these components. Pre-training Strategy. 
We utilized two feature extractor models: ResNet50 pre-trained on ImageNet and ResNet50 pre-trained using SimSiam. During the SimSiam training process, we employed three data augmentation schemes, including random cropping and color distortion, and additionally incorporated random H-separation and random H&E-separation instead of random grayscale. We extracted features from the four strategies and used them as inputs to the aggregator to evaluate the effectiveness of our proposed random H&E-separation. The experimental results presented in Table 2 demonstrate that the encoder pre-trained using SimSiam generally outperforms the one pre-trained on ImageNet, achieving an accuracy improvement of 3%-4%. This improvement is more pronounced in terms of AUC and F1 score. As the H-channel emphasizes the morphological characteristics of the nucleus, unlike simple random grayscale, and the E-channel only serves as a supplement, the performance of using random H-separation and random H&E-separation is almost comparable. Dimensionality Reduction. In the aggregator structure, the global attention gate produces attention scores to weight the spatial dimensions and transform package-level embeddings into slice-level representations. We compared the effects of using max-pooling and average-pooling gates on the final performance. As shown in Table 3, the performance of average-pooling is consistently higher than that of max-pooling, with improvements of 1.55%, 1.23%, and 2.30% in accuracy, AUC, and F1 score, respectively. § CONCLUSION In this paper, we propose a MIL framework based on two pathological priors, which has been shown to outperform previous methods on pathological datasets. Our key innovations are twofold. Firstly, we introduce a H&E separation based data augmentation method that emphasizes nuclear heterogeneity and apply it to the pre-training of extractor. Secondly, we design an MIL aggregator based on the principle of positional similarity, which is highly interpretable. We use graph attention to calculate the mutual attention between only relevant patches and fit the weights of different instances through attention gate based on self-attention mechanism. In future research endeavors, the absence of comprehensive large-scale pathology standard databases accentuates the criticality of rational upstream pre-training methodologies. Within the context of MIL, self-supervised approaches warrant further investigation and comparison, including the utility of contrastive learning, autoencoder architectures, and generative adversarial networks, each with their unique advantages. Moreover, while multiscale information is increasingly valued within existing MIL paradigms, pre-processing complexities at different magnifications require due diligence, necessitating further discourse on effectively exploiting coarse and fine-grained features at a fixed magnification. TLvdLC19 [AWM+17]aeffner2017gold Famke Aeffner, Kristin Wilson, Nathan T Martin, Joshua C Black, Cris L Luengo Hendriks, Brad Bolon, Daniel G Rudmann, Roberto Gianani, Sally R Koegler, Joseph Krueger, et al. The gold standard paradox in digital image analysis: manual versus automated scoring as ground truth. Archives of pathology & laboratory medicine, 141(9):1267–1275, 2017. [BVVD+17]bejnordi2017diagnostic Babak Ehteshami Bejnordi, Mitko Veta, Paul Johannes Van Diest, Bram Van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen AWM Van Der Laak, Meyke Hermsen, Quirine F Manson, Maschenka Balkenhol, et al. 
Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. Jama, 318(22):2199–2210, 2017. [CCL+22]chen2022scaling Richard J Chen, Chengkuan Chen, Yicong Li, Tiffany Y Chen, Andrew D Trister, Rahul G Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16144–16155, 2022. [CH21]chen2021exploring Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15750–15758, 2021. [CHG+19]campanella2019clinical Gabriele Campanella, Matthew G Hanna, Luke Geneslaw, Allen Miraflor, Vitor Werneck Krauss Silva, Klaus J Busam, Edi Brogi, Victor E Reuter, David S Klimstra, and Thomas J Fuchs. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nature medicine, 25(8):1301–1309, 2019. [CKNH20]chen2020simple Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020. [CMM+20]caron2020unsupervised Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in neural information processing systems, 33:9912–9924, 2020. [CTM+21]caron2021emerging Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650–9660, 2021. [CZC+21]chen2021diagnose Zhen Chen, Jun Zhang, Shuanlong Che, Junzhou Huang, Xiao Han, and Yixuan Yuan. Diagnose like a pathologist: Weakly-supervised pathologist-tree network for slide-level immunohistochemical scoring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 47–54, 2021. [DDS+09]deng2009imagenet Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009. [DLLP97]dietterich1997solving Thomas G Dietterich, Richard H Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial intelligence, 89(1-2):31–71, 1997. [FZ17]feng2017deep Ji Feng and Zhi-Hua Zhou. Deep miml network. In Proceedings of the AAAI conference on artificial intelligence, volume 31, 2017. [GSA+20]grill2020bootstrap Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271–21284, 2020. [HJH+20]hayward2020derivation Mary-Kate Hayward, J Louise Jones, Allison Hall, Lorraine King, Alastair J Ironside, Andrew C Nelson, E Shelley Hwang, and Valerie M Weaver. Derivation of a nuclear heterogeneity image index to grade dcis. Computational and Structural Biotechnology Journal, 18:4063–4070, 2020. [HSK+16]hou2016patch Le Hou, Dimitris Samaras, Tahsin M Kurc, Yi Gao, James E Davis, and Joel H Saltz. 
Patch-based convolutional neural network for whole slide tissue image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2424–2433, 2016. [HYL+22]hou2022h Wentai Hou, Lequan Yu, Chengxuan Lin, Helong Huang, Rongshan Yu, Jing Qin, and Liansheng Wang. H 2-mil: Exploring hierarchical representation with heterogeneous multiple instance learning for whole slide image analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 933–941, 2022. [HZRS16]he2016deep Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. [ITW18]ilse2018attention Maximilian Ilse, Jakub Tomczak, and Max Welling. Attention-based deep multiple instance learning. In International conference on machine learning, pages 2127–2136. PMLR, 2018. [LLE21]li2021dual Bin Li, Yin Li, and Kevin W Eliceiri. Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14318–14328, 2021. [LWC+21]lu2021data Ming Y Lu, Drew FK Williamson, Tiffany Y Chen, Richard J Chen, Matteo Barbieri, and Faisal Mahmood. Data-efficient and weakly supervised computational pathology on whole-slide images. Nature biomedical engineering, 5(6):555–570, 2021. [PC15]pinheiro2015image Pedro O Pinheiro and Ronan Collobert. From image-level to pixel-level labeling with convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1713–1721, 2015. [QCCL17]quellec2017multiple Gwenolé Quellec, Guy Cazuguel, Béatrice Cochener, and Mathieu Lamard. Multiple-instance learning for medical image and video analysis. IEEE reviews in biomedical engineering, 10:213–234, 2017. [SBC+21]shao2021transmil Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, et al. Transmil: Transformer based correlated multiple instance learning for whole slide image classification. Advances in neural information processing systems, 34:2136–2147, 2021. [TCP+22]thandiackal2022differentiable Kevin Thandiackal, Boqi Chen, Pushpak Pati, Guillaume Jaume, Drew FK Williamson, Maria Gabrani, and Orcun Goksel. Differentiable zooming for multiple instance learning on whole-slide images. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXI, pages 699–715. Springer, 2022. [TCW15]tomczak2015review Katarzyna Tomczak, Patrycja Czerwińska, and Maciej Wiznerowicz. Review the cancer genome atlas (tcga): an immeasurable source of knowledge. Contemporary Oncology/Współczesna Onkologia, 2015(1):68–77, 2015. [TLvdLC19]tellez2019neural David Tellez, Geert Litjens, Jeroen van der Laak, and Francesco Ciompi. Neural image compression for gigapixel histopathology image analysis. IEEE transactions on pattern analysis and machine intelligence, 43(2):567–578, 2019. [US18]uhler2018nuclear Caroline Uhler and GV Shivashankar. Nuclear mechanopathology and cancer diagnosis. Trends in cancer, 4(4):320–331, 2018. [VCC+17]velivckovic2017graph Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017. [VdLLC21]van2021deep Jeroen Van der Laak, Geert Litjens, and Francesco Ciompi. 
Deep learning in histopathology: the path to the clinic. Nature medicine, 27(5):775–784, 2021. [VPS+16]vahadane2016structure Abhishek Vahadane, Tingying Peng, Amit Sethi, Shadi Albarqouni, Lichao Wang, Maximilian Baust, Katja Steiger, Anna Melissa Schlitter, Irene Esposito, and Nassir Navab. Structure-preserving color normalization and sparse stain separation for histological images. IEEE transactions on medical imaging, 35(8):1962–1971, 2016. [VSP+17]vaswani2017attention Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. [WYT+18]wang2018revisiting Xinggang Wang, Yongluan Yan, Peng Tang, Xiang Bai, and Wenyu Liu. Revisiting multiple instance neural networks. Pattern Recognition, 74:15–24, 2018. [XSS+19]xu2019camel Gang Xu, Zhigang Song, Zhuo Sun, Calvin Ku, Zhe Yang, Cancheng Liu, Shuhao Wang, Jianpeng Ma, and Wei Xu. Camel: A weakly supervised learning framework for histopathology image segmentation. In Proceedings of the IEEE/CVF International Conference on computer vision, pages 10682–10691, 2019. [XZE+14]xu2014weakly Yan Xu, Jun-Yan Zhu, I Eric, Chao Chang, Maode Lai, and Zhuowen Tu. Weakly supervised histopathology cancer image segmentation and classification. Medical image analysis, 18(3):591–604, 2014. [YZJ+20]yao2020whole Jiawen Yao, Xinliang Zhu, Jitendra Jonnagaddala, Nicholas Hawkins, and Junzhou Huang. Whole slide images based cancer survival prediction using attention guided deep multiple instance learning networks. Medical Image Analysis, 65:101789, 2020. [ZKR+17]zaheer2017deep Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. Advances in neural information processing systems, 30, 2017. [ZLBH19]zhang2019lookahead Michael Zhang, James Lucas, Jimmy Ba, and Geoffrey E Hinton. Lookahead optimizer: k steps forward, 1 step back. Advances in neural information processing systems, 32, 2019. [ZMZ+22]zhang2022dtfd Hongrun Zhang, Yanda Meng, Yitian Zhao, Yihong Qiao, Xiaoyun Yang, Sarah E Coupland, and Yalin Zheng. Dtfd-mil: Double-tier feature distillation multiple instance learning for histopathology whole slide image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18802–18812, 2022. [ZYF+20]zhao2020predicting Yu Zhao, Fan Yang, Yuqi Fang, Hailing Liu, Niyun Zhou, Jun Zhang, Jiarui Sun, Sen Yang, Bjoern Menze, Xinjuan Fan, et al. Predicting lymph node metastasis using histopathological images based on multiple instance learning with deep graph convolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4837–4846, 2020. [ZYW+22]zhu2022murcl Zhonghang Zhu, Lequan Yu, Wei Wu, Rongshan Yu, Defu Zhang, and Liansheng Wang. Murcl: Multi-instance reinforcement contrastive learning for whole slide image classification. IEEE Transactions on Medical Imaging, 2022.
http://arxiv.org/abs/2307.04755v1
20230710175732
Information decomposition to identify relevant variation in complex systems with machine learning
[ "Kieran A. Murphy", "Dani S. Bassett" ]
cs.LG
[ "cs.LG", "cond-mat.soft", "cs.IT", "math.IT", "physics.data-an" ]
Dept. of Bioengineering, School of Engineering & Applied Science, Dept. of Bioengineering, School of Engineering & Applied Science, Dept. of Electrical & Systems Engineering, School of Engineering & Applied Science, Dept. of Neurology, Perelman School of Medicine, Dept. of Psychiatry, Perelman School of Medicine, Dept. of Physics & Astronomy, College of Arts & Sciences, University of Pennsylvania, Philadelphia, PA 19104, USA The Santa Fe Institute, Santa Fe, NM 87501, USA To whom correspondence should be addressed: [email protected] One of the fundamental steps toward understanding a complex system is identifying variation at the scale of the system's components that is most relevant to behavior on a macroscopic scale. Mutual information is a natural means of linking variation across scales of a system due to its independence of the particular functional relationship between variables. However, estimating mutual information given high-dimensional, continuous-valued data is notoriously difficult, and the desideratum—to reveal important variation in a comprehensible manner—is only readily achieved through exhaustive search. Here we propose a practical, efficient, and broadly applicable methodology to decompose the information contained in a set of measurements by lossily compressing each measurement with machine learning. Guided by the distributed information bottleneck as a learning objective, the information decomposition sorts variation in the measurements of the system state by relevance to specified macroscale behavior, revealing the most important subsets of measurements for different amounts of predictive information. Additional granularity is achieved by inspection of the learned compression schemes: the variation transmitted during compression is composed of distinctions among measurement values that are most relevant to the macroscale behavior. We focus our analysis on two paradigmatic complex systems: a Boolean circuit and an amorphous material undergoing plastic deformation. In both examples, specific bits of entropy are identified out of the high entropy of the system state as most related to macroscale behavior for insight about the connection between micro- and macro- in the complex system. The identification of meaningful variation in data, with the full generality brought by information theory, is made practical for the study of complex systems. Information decomposition to identify relevant variation in complex systems with machine learning Dani S. Bassett Version of June 20, 2023 =================================================================================================== A complex system is a system of interacting components where some sense of order present at the scale of the system is not apparent, or even conceivable, from the observations of single components <cit.>. A broad categorization, it includes many systems of relevance to our daily lives, from the economy to the internet and from the human brain to artificial neural networks <cit.>. Before attempting a reductionist description of a complex system, one must first identify variation in the system that is most relevant to emergent order at larger scales. The notion of relevance can be formalized with information theory, wherein mutual information serves as a general measure of statistical dependence to connect variation across different scales of system behavior <cit.>. 
Information theory and complexity science have a rich history; information theory commonly forms the foundation of definitions of what it means to be complex <cit.>. Machine learning is well-suited for the analysis of complex systems, grounded in its natural capacity to identify patterns in high dimensional data <cit.>. However, distilling insight from a successfully trained model is often infeasible due to a characteristic lack of interpretability of machine learning models <cit.>. Restricting to simpler classes of models, for example linear combinations of observables, recovers a degree of interpretability at the expense of functional expressivity <cit.>. For the study of complex systems, such a trade-off is unacceptable if the complexity of the system is no longer faithfully represented. In this work, we do not attempt to explain the relationship between microscale and macroscale, and are instead interested in identifying the information contained in microscale observables that is most predictive of macroscale behavior—independent of functional relationship. We employ a recent method from interpretable machine learning that identifies the most relevant information in a set of measurements <cit.>. Based on the distributed information bottleneck <cit.>, a variant of the information bottleneck (IB) <cit.>, the method lossily compresses a set of measurements while preserving information about a relevance quantity. Optimization serves to decompose the information present in the measurements, providing a general-purpose method to identify the important variation in composite measurements of complex systems. Identifying important variation is a powerful means of analysis of complex systems, as we demonstrate on two paradigmatic examples. First we study a Boolean circuit, whose fully-specified joint distribution and intuitive interactions between variables facilitate understanding of the information decomposition found by the distributed IB. Boolean circuits are networks of binary variables that interact through logic functions, serving as the building blocks of computation <cit.> and as elementary models of gene control networks <cit.>. Second, we decompose the information contained in the local structure of an amorphous material subjected to global deformation. Amorphous materials are condensed matter systems composed of simple elements (e.g., atoms or grains) that interact via volume exclusion and whose disorder gives rise to a host of complex macroscale phenomena, such as collective rearrangement events spanning a wide range of magnitudes <cit.> and nontrivial phase transitions <cit.>. Although the state space that describes all of the degrees of freedom is large, as is generally true of complex systems, the proposed method is able to identify important bits of variation by partitioning entropy and leveraging machine learning to process the high dimensional data. § METHODS Mutual information is a measure of statistical dependence between two random variables X and Y that is independent of the functional transformation that relates X and Y (in contrast to linear correlation, for example, which measures the degree to which two variables are linearly related). Mutual information is defined as the entropy reduction in one variable after learning the value of the other <cit.>, I(X;Y) = H(Y) - H(Y|X), with H(X)=𝔼_x∼ p(x)[-log p(x)] Shannon's entropy <cit.>. 
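For discrete data, the definition above can be computed directly from an empirical joint distribution; the short sketch below is an illustrative plug-in estimator, not the variational machinery needed for the continuous, high-dimensional measurements considered later.

```python
# Plug-in estimate of I(X;Y) = H(Y) - H(Y|X) from paired discrete samples.
import numpy as np
from collections import Counter

def entropy(samples):
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(xs, ys):
    xs, ys = np.asarray(xs), np.asarray(ys)
    H_y_given_x = sum((xs == x).mean() * entropy(ys[xs == x]) for x in np.unique(xs))
    return entropy(ys) - H_y_given_x

# Toy example: y is the XOR of two fair bits, so I(X;Y) should be 1 bit.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(100_000, 2))
y = x[:, 0] ^ x[:, 1]
x_code = 2 * x[:, 0] + x[:, 1]     # encode the pair as a single discrete symbol
print(round(mutual_information(x_code, y), 3), "bits")
```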
The distributed information bottleneck is an optimization objective to extract the information most relevant to a variable Y from a composite measurement: a random vector X = (X_1, ..., X_N) <cit.>. Each component X_i undergoes lossy compression to an auxiliary variable U_i=f(X_i), and then the compressed variables U=(U_1, ..., U_N) are used to predict the output Y. Minimization of the distributed IB Lagrangian, ℒ_DIB = β∑_i=1^N I(U_i;X_i) - I(U;Y), extracts the entropy (or information) in X that is most descriptive of Y. By sweeping over the magnitude of the bottleneck strength β, a continuous spectrum of approximations to the relationship between X and Y is found. The optimized compression schemes for each component of X reveal the amount of relevant information and the specific entropy selected for every level of approximation. In place of Eqn. <ref>, variational bounds on mutual information have been developed that are amenable to data and machine learning <cit.>. The lossy compression schemes are parameterized by neural networks that encode data points to probability distributions in a continuous latent space. Transmitted information is upper bounded by the expectation of the Kullback-Leibler divergence <cit.> between the encoded distributions and an arbitrary prior distribution, identical to the process of information restriction in a variational autoencoder <cit.>. Over the course of training, the amount of information conveyed by each compression scheme I(U_i;X_i) is estimated using bounds derived in Ref. <cit.>. Although mutual information is generally difficult to estimate from data <cit.>, compressing the partial measurements X_i separately isolates the information such that the amount of mutual information is small enough to allow precise estimates, with the interval between bounds on the order of 0.01 bits. Details about mutual information estimation are in Appendix A. § RESULTS Boolean circuit. A Boolean circuit (Fig. <ref>a) was constructed with ten binary inputs X=(X_1,...,X_10) and a binary output Y. Assuming a uniform distribution over inputs, the truth table specifies the joint distribution p(x_1,...,x_10,y), and the interactions between inputs are prescribed by a wiring of logical , , and gates. An information bottleneck was distributed to every input X_i to monitor from where the predictive information originated via compressed variables U_i (Fig. <ref>b). We trained a multilayer perceptron (MLP) to learn the relationship between the lossy compressions U and Y. Over the course of a single training run, the coefficient of the information bottleneck strength β was swept to obtain a spectrum of predictive models. The distributed information plane (Fig. <ref>c) <cit.> displays the predictive power as a function of the total information about the inputs ∑ I(U_i;X_i). The predictive performance ranged from zero predictive information without any information about the inputs (Fig. <ref>c, lower left) to all entropy H(Y) accounted for by utilizing all ten bits of input information (Fig. <ref>c, upper right). For every point on the spectrum there was an allocation of information over the inputs; the distributed IB objective identified the information across all inputs that was most predictive. The most predictive information about Y was found to reside in X_3—the input that routes through the fewest gates to Y—and then in the pair X_3,X_10, and so on. 
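The following is a minimal sketch of a variational, distributed-IB-style objective of the form of the Lagrangian above, set up for a ten-input binary prediction problem like the Boolean circuit. It assumes Gaussian encoders with a standard-normal prior, so each KL term upper-bounds I(U_i;X_i), and uses cross-entropy as the variational stand-in for -I(U;Y) (up to the constant H(Y)); the layer sizes and names are placeholders rather than the implementation used here.

```python
# Distributed-IB-style objective: one stochastic encoder per input component X_i,
# KL(q(u_i|x_i) || N(0, I)) upper-bounding I(U_i; X_i), and a classifier on the
# concatenated U predicting Y (cross-entropy as the variational -I(U;Y) term).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributedIB(nn.Module):
    def __init__(self, n_features=10, d_latent=2, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2 * d_latent))
            for _ in range(n_features))
        self.decoder = nn.Sequential(nn.Linear(n_features * d_latent, 64),
                                     nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):                              # x: (batch, n_features)
        us, kls = [], []
        for i, enc in enumerate(self.encoders):
            mu, logvar = enc(x[:, i:i + 1]).chunk(2, dim=-1)
            u = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterization
            kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(-1)   # KL to N(0, I)
            us.append(u)
            kls.append(kl.mean())
        return self.decoder(torch.cat(us, dim=-1)), torch.stack(kls)

# One optimization step at a fixed bottleneck strength beta (toy data):
model = DistributedIB()
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
logits, kls = model(x)
loss = 0.1 * kls.sum() + F.cross_entropy(logits, y)    # beta = 0.1, swept in practice
loss.backward()
```

Sweeping beta from large to small values traces out a spectrum of models that use progressively more information about the inputs, which is how the distributed information plane above is populated.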
Powered by machine learning, we traversed the space of lossy compression schemes of X_i, decomposing the information contained in the circuit inputs about the output. Included in the space of compression schemes is information transmitted about each of the 2^10 discrete subsets of the inputs. To be concrete, there are ten subsets of a single input, 45 pairs of inputs, and so on, with each subset sharing mutual information with Y based on the role of the specific inputs inside the circuit. Fig. <ref>d displays the information contained in every discrete subset of inputs (black points) along with the continuous trajectory found by optimization of the distributed IB (gray curve). The distributed IB, maximizing predictive information while minimizing information taken from the inputs, closely traced the upper boundary of discrete information allocations and identified a majority of the most informative subsets of inputs. To decompose the information in the circuit's inputs required only a single sweep with the distributed IB, not an exhaustive search through all subsets of inputs. We note that the product of the distributed IB is not an ordering of single variable mutual information terms I(X_i;Y), which would be straightforward to calculate, but instead the ordering of information selected from all of X that is maximally informative about Y. Decomposing structural information in a physical system. Linking structure and dynamics in amorphous materials—complex systems consisting of particles that interact primarily through volume exclusion—has been a longstanding challenge in physics <cit.>. Searching for signatures of collective behavior in the multitude of microscopic degrees of freedom is an endeavor emblematic of complex systems more generally and one well-suited for machine learning and information theory. We accept that the functional relationship between the micro- and macroscale variation is potentially incomprehensible, and are instead interested in the information at the microscale that is maximally predictive of behavior at the macroscale. While prior work has analyzed the information content of hand-crafted structural descriptors individually <cit.>, the distributed IB searches through the space of information from many structural measurements in combination. Two-dimensional simulated glasses, prepared by either rapid or gradual quenching and composed of small (type A) and large (type B) particles that interact with a Lennard-Jones potential, were subjected to global shear deformation <cit.>. Local regions were identified as the origins of imminent rearrangement and paired with negative samples from elsewhere in the system to create a binary classification dataset. We first considered a scheme of measurements of the microscale structure that has been associated with plastic rearrangement in a variety of amorphous systems: the densities of radial bands around the center of a region <cit.>. By training a support vector machine (SVM) to predict rearrangement based on the radial density measurements, a linear combination of the values is learned. In the literature, that combination is commonly referred to as softness, and has proven to be a useful local order parameter <cit.>. We approached the same prediction task from an information theoretic perspective, seeking the specific bits of variation in the density measurements that are most predictive of collective rearrangement. 
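To connect this with the measurement scheme just described, the sketch below computes radial-shell densities around a region's central particle, separately for each particle type, and fits a linear SVM in the spirit of the softness construction; the data, shell widths, cutoff, and normalization are illustrative placeholders rather than the parameters used in the cited works.

```python
# Radial-shell density features per particle type, then a linear SVM ("softness"-style).
import numpy as np
from sklearn.svm import LinearSVC

def radial_densities(pos, types, center, r_max=5.0, n_shells=50):
    r = np.linalg.norm(pos - center, axis=1)
    edges = np.linspace(0.0, r_max, n_shells + 1)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)        # 2D shell areas
    feats = []
    for t in (0, 1):                                          # type A, type B
        counts, _ = np.histogram(r[types == t], bins=edges)
        feats.append(counts / area)
    return np.concatenate(feats)                              # 2 * n_shells features

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                                          # toy "rearranging" vs. not
    for _ in range(200):
        pos = rng.uniform(-5, 5, size=(60, 2)) * (1.0 + 0.05 * label)
        types = rng.integers(0, 2, size=60)
        X.append(radial_densities(pos, types, center=np.zeros(2)))
        y.append(label)

clf = LinearSVC(dual=False).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```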
Each radial density measurement underwent lossy compression by its own neural network before all compressions were concatenated and used as input to an MLP to predict rearrangement. By sweeping β, a single optimization recovered a sequence of approximations, each allocating a limited amount of information across the 100 density measurements to be most predictive of imminent rearrangement (Fig. <ref>). The trajectories in the distributed information plane, for both gradually and rapidly quenched glasses, reflect the growth of predictive information and prediction accuracy given maximally predictive information about the radial densities (Fig. <ref>a,c). With only one bit of information from the density measurements, 71.8% predictive accuracy was achieved for the gradually quenched glass and 69.5% was achieved for the rapidly quenched glass; with twenty bits, the accuracy jumped to 91.3% and 85.4%, respectively. Beyond twenty bits of density information, the predictive accuracy became comparable to that of the support vector machine, which can utilize all of the continuous-valued density measurements for prediction with a linear relationship. For every point along the trajectory, information was identified from the density measurements that, together, formed the combination of bits that were most predictive of rearrangement (Fig. <ref>b,d). The majority of the information was selected from smaller radii (close to the center of the region), which can be expected given the localized nature of rearrangement events <cit.>. Less intuitive is the information decomposition as it relates to the radial distribution functions g_AA(r) and g_AB(r), the system-averaged radial densities of type A and B particles in regions with a type A particle at the center. For both glasses, the most predictive bits originated in the low density radial bands nearest the center. As more information was incorporated into the prediction, the additional bits came from radial bands that corresponded to particular features of g_AA(r) and g_AB(r). Outside of the first low density trough, the selected information often came from the high density radii of type A particles and the low density radii of type B particles; this trend held true for both glasses. While the information decomposition highlighted similar features in both glasses, the more pronounced structure of selected information out to larger radii for the gradually quenched glass is indicative of its higher structural regularity, which is also seen in the pronounced features of its radial distribution functions g_AA(r) and g_AB(r). The amount of information utilized from each density measurement was predominantly a single bit or less. Of the ways to compress the infinite entropy of a continuous-valued density to a single bit, what was the specific variation extracted from each density measurement? Through inspection of the learned compression schemes, the extracted information can be further decomposed by the degree of distinctions between density values that were preserved for the predictive model (Fig. <ref>a) <cit.>. The single most important bit of information for the gradually quenched glass was a composition of partial bits from multiple density measurements, mostly arising from the first low-density shell of each type of particle (Fig. <ref>b). 
For both measurements, the compression scheme acted as a threshold on the range of possible density values: values less than a cutoff ρ^' were indistinguishable from each other for the purposes of prediction and were partially distinguishable from density values above the cutoff. By examining the distribution of density values in these radial shells, we see that the cutoff values leverage the separability of the density distributions when conditioned on rearrangement. With more information utilized for prediction, some of the compression schemes differed from simple thresholds (shown for the rapidly quenched glass in Fig. <ref>c). For the predictive model operating with a total of twenty bits of density information, two density measurements contributed more than a bit each. The learned compression of the first high-density shell of type A particles essentially counted the number of particles in the shell, with distinguishability between densities as if there were several thresholds over the range of the values that act to roughly discretize the density measurement. Information decomposition with the distributed IB depends upon the particular scheme used to measure the system <cit.>. In the study of complex systems, there can be multiple `natural' schemes of measuring a system state. Density measurements of radial bands lead to an essentially linear relationship between structure and rearrangement <cit.>; what if we had not inherited such a fortuitous measurement scheme? Another natural basis of measurements is the position of all of the particles (Fig. <ref>a). In contrast to radial density measurements, per-particle measurements lack a canonical ordering; accordingly, we used a permutation-invariant transformer architecture for the predictive model <cit.>. Every particle position was transmitted in parallel through a single compression channel, rather than through a uniquely learned compression scheme per measurement as before. An analogue of the distributed IB task is to write a note for each particle in the region with the goal to predict whether the region will rearrange. Under a constraint on time or effort, more careful notes would be taken for the informative particles, while less careful notes would be taken for the rest. The per-particle measurement scheme imposed no structure on the selection of configurational information. Nevertheless, we found that the information cost per particle as a function of the position in the neighborhood had a radial structure (Fig. <ref>b). The information per particle was highest in the low density radial bands near the center of the region (Fig. <ref>c), and inspection of the compression scheme indicated that negligible azimuthal information was transmitted (Fig. <ref>d). The information decomposition allowed for similar insights to be derived as in the radial density measurement scheme, even though the nature of the predictive model in the two cases was substantially different. Additionally, because the distributed IB operates entirely on the input side of an arbitrary predictive model, the information analysis was agnostic to whether the model was a simple fully connected network or a more complicated transformer architecture. § DISCUSSION A universal challenge faced when studying complex systems, fundamental to what makes a system complex, is the abundance of entropy from the perspective of the microscale that obscures relevant information about macroscale behavior. 
The generality of mutual information as a measure of statistical relatedness, and the expressivity of deep learning when handling high-dimensional data, allow the distributed IB to be as readily utilized to identify structural defects relevant to a given material property as it is to reveal gene variation relevant to a given affliction. Tens, hundreds, and potentially thousands of measurements of a complex system are handled simultaneously, rendering practical analyses that would have previously been infeasible through exhaustive search or severely limited by constraints on functional relationships between variables. Information theory has long held appeal for the analysis of complex systems owing to the generality of mutual information <cit.>. However, the estimation of mutual information from data is fraught with difficulties <cit.>, which have hindered information theoretic analyses of data from complex systems. By distributing information bottlenecks across multiple partial measurements of a complex system, entropy is partitioned to a degree that makes precise estimation of mutual information possible while simultaneously revealing the most important combinations of bits for insight about the system. Machine learning navigates the space of lossy compression schemes for each variable and allows the identification of meaningful variation without consideration of the black box functional relationship found by the predictive model. Instead of compressing partial measurements in parallel, the information bottleneck <cit.> extracts the relevant information from one random variable in its entirety about another, and is foundational to many works in representation learning <cit.>. In the physical sciences, the IB has been used to extract relevant degrees of freedom with a theoretical equivalence to coarse-graining in the renormalization group <cit.>, and to identify useful reaction coordinates in biomolecular reactions <cit.>. However, the IB has limited capacity to find useful approximations, particularly when the relationship between X and Y is deterministic (or nearly so) <cit.>. Much of the spectrum of learned approximations is the trivial noisy rendition of a high-fidelity reconstruction <cit.>. Additionally, compression schemes found by IB are rarely interpretable because the singular bottleneck occurs after processing the complete input, allowing the compression scheme to involve arbitrarily complex relationships between components of the input without penalty. The distribution of information bottlenecks is critical to an interpretable information decomposition, and to accurately estimating the necessary mutual information terms. A growing body of literature focuses on a fundamentally different route to decompose the information contained in multiple random variables {X_i} about a relevant random variable Y; that alternative route is partial information decomposition (PID) <cit.>. Although there is no consensus on how to achieve PID in practice, its goal is to account for the mutual information between {X_i} and Y in terms of subsets of {X_i}, by analogy to set theory <cit.>. PID allocates information to the input variables in their entirety, whereas the distributed IB selects partial entropy from the input variables in the form of lossy compression schemes, with one scheme per variable. 
While PID has been proposed as an information theoretic route to study complex systems <cit.> and quantify complexity <cit.>, the super-exponential growth of PID terms renders the methodology rather impractical. There are 5× 10^22 PID terms for a Boolean circuit with 8 inputs <cit.> and the number of terms for the simple 10 input circuit from Fig. <ref> is not known <cit.>. By contrast, the distributed IB offers a pragmatic route to the decomposition of information in a complex system: it is amenable to machine learning and data, and can readily process one hundred (continuous) input variables as in the amorphous plasticity experiments. § ACKNOWLEDGEMENTS We gratefully acknowledge Sam Dillavou and Zhuowen Yin for helpful discussions and comments on the manuscript, and Sylvain Patinet for the amorphous plasticity data. § CODE AVAILABILITY The full code base has been released on Github and may be found through the following link: https://distributed-information-bottleneck.github.io. Every analysis included in this work can be repeated from scratch with the corresponding Google Colab iPython notebook in https://github.com/distributed-information-bottleneck/distributed-information-bottleneck.github.io/tree/main/colab. § DATA AVAILABILITY The train and validation splits of the amorphous plasticity data, consisting of local neighborhoods that were subsequently “measured” as radial densities (Figs. <ref>,<ref>) or as per-particle descriptors (Fig. <ref>), can be found through the project page and can be downloaded from https://drive.google.com/drive/folders/1vzWSv_4dE4VyjAXbLrZtcbuV1R6igFEE. The full dataset with all particle locations before and after all events is available with the permission of the authors of Ref. <cit.>. § APPENDIX A: MUTUAL INFORMATION BOUNDS The full method presented in this work requires us to bound the mutual information for high-dimensional data; obtaining tight bounds of this kind is notoriously difficult <cit.>. Fortunately, there are factors in our favor that facilitate optimization with machine learning and the recovery of tight bounds on the information transmitted by the compression channels U_i. To optimize the distributed information bottleneck objective (Eqn. <ref>) requires an upper bound on I(U_i;X_i) and a lower bound on I(U;Y). The (distributed) variational information bottleneck objective <cit.> upper bounds I(U_i;X_i) with the expectation of the Kullback-Leibler (KL) divergence between the encoded distributions p(u_i|x_i) and an arbitrary prior distribution r(u_i) in latent space, I(U_i;X_i) ≤ 𝔼_x_i ∼ p(x_i) [D_KL(p(u_i|x_i)||r(u_i))]. Normal distributions are used for both the encoded distribution, p(u_i|x_i) = 𝒩(μ=f_μ(x_i), σ=f_σ(x_i)), and the prior, r(u_i)=𝒩(0, 1), so that the KL divergence has a simple analytic form. Over the course of training, the KL divergence is computed for each channel U_i, thereby providing a proxy quantity for the amount of information that is contained in the compression scheme. Although the KL divergence can be used for a qualitative sense of information allocation to features <cit.>, it is a rather poor estimate of the mutual information. Because the encoded distributions p(u_i|x_i) have a known form, we can use the noise contrastive estimation (InfoNCE) lower bound and “leave one out” upper bound from Ref. <cit.> with a large number of samples to obtain tight bounds on the amount of mutual information in the learned compression schemes.
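For reference, with the Gaussian encoder and standard normal prior above, the per-channel penalty has the standard closed form for a D-dimensional diagonal-covariance encoding (a textbook identity, not specific to this work):

D_KL( 𝒩(μ, diag(σ^2)) || 𝒩(0, I) ) = (1/2) ∑_d=1^D ( μ_d^2 + σ_d^2 - 1 - log σ_d^2 ),

which is the quantity accumulated for each channel U_i during training.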
The lower and upper bounds on I(U_i;X_i) are based on likelihood ratios at points sampled from the dataset x_i ∼ p(x_i) and from the corresponding conditional distributions, u_i ∼ p(u_i|x_i). To be specific, the mutual information for each channel U=f(X) (dropping channel indices for simplicity) is lower bounded by I(U;X) ≥ 𝔼[ (1/K) ∑_i=1^K log ( p(u_i|x_i) / ( (1/K) ∑_j=1^K p(u_i|x_j) ) ) ] and upper bounded by I(U;X) ≤ 𝔼[ (1/K) ∑_i=1^K log ( p(u_i|x_i) / ( (1/(K-1)) ∑_j≠i p(u_i|x_j) ) ) ]. The expectation values in both equations are taken over samples {u_i,x_i}_i=1^K of size K extracted repeatedly from the joint distribution p(u,x)=p(x)p(u|x). We estimated with as large an evaluation batch size K as feasible given memory and time considerations, and then averaged over multiple batches to reduce the variance of the bounds. Evaluation with a batch size of 1024, averaged over 8 draws, yielded upper and lower bounds on the mutual information separated by an interval on the order of 0.01 bits for the Boolean circuit and glass data. The size of the validation dataset for the glass and the size of the truth table of the Boolean circuit were both on the order of one thousand points. Hence, the benefit of averaging comes from repeated sampling of the latent representations. We show in Fig. <ref> the performance of the mutual information bounds for compression schemes that encode up to several bits of information. X is a discrete random variable that is uniformly distributed over its support and has one to six bits of entropy; for each X a fixed dataset of size 1024 was sampled for mutual information estimation according to the following method of compression. Each outcome x was encoded to a normal distribution with unit variance in 32-dimensional space, p(u|x)=𝒩(μ, 1). The encoded distributions were placed along orthogonal axes a distance d from the origin; in the limits of d=0 and d≫ 1 the information transmitted by the compression scheme is 0 and H(X), respectively. A Monte Carlo estimate of the mutual information sampled 2×10^5 points from p(u,x) to compute 𝔼_p(u,x)[log p(u|x)/p(u)]. The “leave one out” upper and InfoNCE lower bounds were computed with different evaluation batch sizes K, and averaged over 4096 sampled batches. The standard deviation of the bounds is displayed as the shaded region around each trace, and is left out of the plots for the residual (the difference between the bound and the Monte Carlo estimate) for all but the evaluation batch size of 1024. When the information contained in the compression is less than about two bits—as was the case for the majority of the experiments of the main text—the bounds are tight in expectation for even the smallest evaluation batch size. §.§ Information transmitted per particle For the per-particle measurement scheme on the amorphous plasticity data, a single compression channel U was used for all particles. The information conveyed by the channel I(U;X) may be estimated as above, with X being the particle position and type. Note that we are particularly interested in the information cost for specific particle positions and for each particle type. The outer summation of the bounds (Eqns.
<ref> and <ref>) serves to average over the measurement outcomes x_i in a random sample; we use the summand corresponding to {x_i, u_i} as the information contribution for the specific outcome x_i. To generate the information heatmaps of Fig. <ref>b, we randomly sampled 512 neighborhoods from the dataset, corresponding to an evaluation batch size K=512 neighborhoods×50 particles / neighborhood =25,600 particles (data points), and averaged over 100 such batches. A probe particle with specified particle type and position (one for each point in the grid) was inserted into the batch, and then the corresponding summand for the lower and upper information bounds served to quantify the information transmitted per particle. To be specific, I(X=x;U) ≥ 𝔼[ log ( p(u|x) / ( (1/K) ∑_j=1^K p(u|x_j) ) ) ], with the expectation taken over u ∼ p(u|x) and samples {x_i}_i=1^K ∼ ∏_i^K p(x). The upper bound differed only by inclusion of the distribution p(u|x) corresponding to the probe point in the denominator's sum. § APPENDIX B: IMPLEMENTATION SPECIFICS All experiments were implemented in TensorFlow and run on a single computer with a 12 GB GeForce RTX 3060 GPU. Computing mutual information bounds repeatedly throughout an optimization run contributed the most to running time. Including the information estimation, the Boolean circuit optimization took about half an hour, and the glass experiments took several hours. §.§ Boolean circuit Each input may take only one of two values (0 or 1), allowing the encoders to be extremely simple. Trainable scalars (μ⃗_i,log σ⃗_i^2) were used to encode p(u_i|x_i)= 𝒩 ((2x_i - 1)×μ⃗_i, σ⃗_i^2). The decoder was a multilayer perceptron (MLP) consisting of three fully connected layers of 256 units each, with leaky ReLU activations (α=0.3). We increased the value of β logarithmically from 5×10^-4 to 5 in 5×10^4 steps, with a batch size of 512 input-output pairs sampled randomly from the entire 1024-element truth table. The Adam optimizer was used with a learning rate of 10^-3. §.§ Amorphous plasticity The simulated glass data comes from Ref. <cit.>: 10,000 particles in a two-dimensional cell with Lees-Edwards boundary conditions interact via a Lennard-Jones potential, slightly modified to be twice differentiable <cit.>. Simple shear was applied with energy minimization after each step of applied strain. The critical mode was identified as the eigenvector—existing in the 2N-dimensional configuration space of all the particles' positions—of the Hessian whose eigenvalue crossed zero at the onset of global shear stress decrease. The particle that was identified as the locus of the rearrangement event had the largest contribution to the critical mode <cit.>. We used data from the gradual quench (“GQ”) and rapid quench (high temperature liquid, “HTL”) protocols. Following Ref. <cit.>, we considered only neighborhoods with type A particles (the smaller particles) at the center. We used all of the events in the dataset: 7,255 for the gradually quenched and 10,178 for the rapidly quenched glasses. For each rearrangement event with a type A particle as the locus, we selected at random another region from the same system state with a type A particle at the center to serve as a negative example. 90% of all rearrangement events with type A particles as the locus were used for the training set and the remaining 10% were used as the validation set; the regions and specific training and validation splits used in this work can be found on the project webpage.
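To illustrate how the bounds of Appendix A are evaluated in practice, the following NumPy sketch (our own; it assumes unit-variance Gaussian encodings, as in the benchmark above, and reports both bounds in bits) computes the InfoNCE lower bound and the "leave one out" upper bound for a single channel from one evaluation batch. Averaging the returned values over many sampled batches reduces their variance, as described above.

[language=Python]
import numpy as np
from scipy.special import logsumexp

def info_bounds_bits(mu, u):
    # mu: (K, D) means of the encodings p(u_k|x_k) = N(mu_k, I) for an evaluation batch
    # u:  (K, D) one latent sample drawn from each p(u_k|x_k)
    K, D = mu.shape
    sq = ((u[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)            # squared distances, (K, K)
    logp = -0.5 * sq - 0.5 * D * np.log(2.0 * np.pi)                     # log p(u_i | x_j)
    diag = np.diag(logp)
    lower = np.mean(diag - (logsumexp(logp, axis=1) - np.log(K)))        # InfoNCE lower bound
    off = logp.copy()
    np.fill_diagonal(off, -np.inf)                                       # exclude j = i
    upper = np.mean(diag - (logsumexp(off, axis=1) - np.log(K - 1)))     # "leave one out" upper bound
    return lower / np.log(2.0), upper / np.log(2.0)                      # nats -> bits

# Toy usage with the batch size and latent dimension quoted in the text.
rng = np.random.default_rng(0)
mu = rng.normal(size=(1024, 32))
u = mu + rng.normal(size=mu.shape)
print(info_bounds_bits(mu, u))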
§.§.§ Radial density measurement scheme For the radial density measurements (Figs. <ref>, <ref>), the local neighborhood of each sample was processed into 50 radial density structure functions for each particle type, evenly spaced over the interval r=[0.5, 4]. Specifically, for particle i at the center and the set of neighboring particles 𝒮_A of type A, G_A(i;r,δ)=∑_j∈𝒮_A exp(-(R_ij-r)^2/(2δ^2)), where R_ij is the distance between particles i and j. The same expression was used to compute G_B, the structure functions for the type B particles in the local neighborhood. The width parameter δ was equal to 50% of each radius interval. After computing the 100 values summarizing each local neighborhood, the training and validation sets were normalized with the mean and standard deviation of each structure function across the training set. The best validation results from a logarithmic scan over values of the C parameter were used for the SVM accuracy in Fig. <ref>. For the distributed IB, each of the 100 scalar values for the structure functions was input to its own MLP consisting of 2 layers of 128 units with leaky ReLU activation. The embedding dimension of each U_i was 32. Then the 100 embeddings were concatenated for input to the predictive model, which was an MLP consisting of 3 layers of 256 units with leaky ReLU activation. The output was a single logit to classify whether the particle at the center is the locus of imminent rearrangement. We increased β in equally spaced logarithmic steps from 10^-6 to 1 over 250 epochs (an epoch is one pass through the training data). The batch size was 256. The Adam optimizer was used with a learning rate of 10^-4. §.§.§ Per-particle measurement scheme For the per-particle measurements, the nearest 50 particles to the center of each region were compressed by the same encoder, an MLP with two layers of 128 units with leaky ReLU activation (α=0.1), to a 32-dimensional latent space. The only information available to the encoder was the particle's position and type, though the values were preprocessed before input to the encoder to help with optimization: for each particle position r⃗=(x,y), we concatenated x^2, y^2, r=|r⃗|, log r, log x^2, log y^2, and r⃗/r. All were positionally encoded (i.e., before being passed to the MLP, inputs were mapped to x ← (x, sinω_1 x, sinω_2 x, ... )) with frequencies ω_k = 2^k, k ∈{1, 2, 3, 4, 5} <cit.>. After compression, the 50 representations (one for each particle) were input to a set transformer <cit.>, a permutation-invariant architecture that is free to learn how to relate different particles via self-attention. We used 6 multi-head attention (MHA) blocks with 12 heads each, and a key dimension of 128. Following Ref. <cit.>, each MHA block adds the output of multi-head attention to a skip connection of the block's input, and applies layer normalization to the sum. This intermediate output is passed through an MLP (a single layer with 128 units, in our case) and added to itself (another skip connection) before a second round of layer normalization. After the MHA blocks, the 50 particle representations were mean-pooled and passed through a final fully connected layer of 256 units with leaky ReLU activation (α=0.1) before outputting a logit for prediction. Training proceeded for 25,000 training steps, and the learning rate was ramped linearly from zero to 10^-4 over the first 10% of training. Over the duration of training, β increased logarithmically from 3× 10^-8 to 3 × 10^-3. The batch size was 64.
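As an illustration of the per-particle preprocessing described above, the raw features and their sinusoidal positional encoding can be sketched as follows (our own illustration; the feature list and the frequencies ω_k = 2^k, k ∈ {1, ..., 5}, follow the text, while the small constant guarding the logarithms and the way the particle type is appended are our assumptions).

[language=Python]
import numpy as np

def particle_features(pos, particle_type, freqs=(2, 4, 8, 16, 32)):
    # pos: (x, y) position relative to the region's center; particle_type: 0 for A, 1 for B.
    x, y = pos
    r = np.hypot(x, y)
    eps = 1e-8                                   # guard the logarithms near zero (our choice)
    raw = np.array([x, y, x**2, y**2, r,
                    np.log(r + eps), np.log(x**2 + eps), np.log(y**2 + eps),
                    x / (r + eps), y / (r + eps)])
    # Positional encoding: x -> (x, sin 2x, sin 4x, ...), applied to every raw feature.
    encoded = [raw] + [np.sin(w * raw) for w in freqs]
    return np.concatenate(encoded + [np.array([particle_type])])

feat = particle_features((0.7, -1.2), particle_type=0)
print(feat.shape)   # one input row per particle for the shared encoder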
§ APPENDIX D: CITATION DIVERSITY STATEMENT Science is a human endeavour and consequently vulnerable to many forms of bias; the responsible scientist identifies and mitigates such bias wherever possible. Meta-analyses of research in multiple fields have measured significant bias in how research works are cited, to the detriment of scholars in minority groups <cit.>. We use this space to amplify studies, perspectives, and tools that we found influential during the execution of this research <cit.>.
http://arxiv.org/abs/2307.04960v1
20230711014210
Simple Reference Immutability for System F-sub
[ "Edward Lee", "Ondřej Lhoták" ]
cs.PL
[ "cs.PL" ]
Keeps objects fresh for up to 5X longer! Edward Lee, Computer Science, University of Waterloo, 200 University Ave W., Waterloo, ON N2L 3G1, Canada. Ondřej Lhoták, Computer Science, University of Waterloo, 200 University Ave W., Waterloo, ON N2L 3G1, Canada. Reference immutability is a type-based technique for taming mutation that has long been studied in the context of object-oriented languages, like Java. Recently, though, languages like Scala have blurred the lines between functional programming languages and object-oriented programming languages. We explore how reference immutability interacts with features commonly found in these hybrid languages, in particular with higher-order functions – polymorphism – and subtyping. We construct a calculus which encodes a reference immutability system as a simple extension of System F<: and prove that it satisfies the standard soundness and immutability safety properties. CCS Concepts: Software and its engineering – General programming languages; Software and its engineering – Compilers. Simple Reference Immutability for System F<: Edward Lee and Ondřej Lhoták October 2023 ================================== § INTRODUCTION Code written in a pure, functional language is referentially transparent – it has no side effects and hence can be run multiple times to produce the same result. Reasoning about referentially transparent code is easier for both humans and computers. However, purely functional code can be hard to write and inefficient, so many functional languages contain impure language features. One important side effect that is difficult to reason about is mutation of state. Mutation arises naturally, but can cause bugs which can be hard to untangle; for example, two modules which at first glance are completely unrelated may interact through some shared mutable variable. Taming – or controlling – where and how mutation can occur can reduce these issues. One method of taming mutation is reference immutability <cit.>. In this setting, the type of each reference to a value can be either mutable or immutable. An immutable reference cannot be used to mutate the value or any other values transitively reached from it. Mutable and immutable references can coexist for the same value, so an immutable reference does not guarantee that the value will not change through some other, mutable reference. This is in contrast to the stronger guarantee of object immutability, which applies to values, and ensures that a particular value does not change through any of the references to it. Reference immutability has long been studied in existing object-oriented programming languages such as Java <cit.> and C# <cit.>. However, reference immutability is largely unexplored in the context of functional languages with impure fragments – languages like Scala or OCaml, for example. Many programs in Scala are mostly immutable <cit.>. A system that formally enforces specified patterns of immutability would help programmers and compilers better reason about immutability in such programs. One feature that is important in all languages but especially essential in functional programs is polymorphism.
The interaction of polymorphism and reference immutability raises interesting questions. Should type variables abstract over annotated types including their immutability annotations (such as @readonly Int), or only over the base types without immutability annotations (such as Int)? Should uses of type variables admit an immutability annotation like other types do? For example, should @readonly X be allowed, where X is a type variable rather than a concrete type? If yes, then how should one interpret an annotated variable itself instantiated with an annotated type? For example, what should the type @readonly X mean if the variable X is instantiated with, say, @readonly Pair[Int]? Our contribution to this area is a simple and sound treatment of reference immutability in System F<: <cit.>. Specifically, we formulate a simple extension of System F<: with the following properties: * Immutability safety: When dealing with reference immutability, one important property to show is immutability safety: showing that when a reference is given a read-only type, then the underlying value is not modified through that reference. We introduce a dynamic form of immutability, a term-level construct, which makes precise the runtime guarantees that we expect from a reference that is statically designated as immutable by the type system. We do this by formalizing an untyped calculus with references and seals. Dynamic seals are transitive in that they seal any new references that are read from a field of an object through a sealed reference. * F<:-style polymorphism: our typed calculus preserves the same bounded-quantification structure as System F<:. At the same time, it allows type variables to be further modified by immutability modifiers. * Immutable types are types: To allow for F<:-style polymorphism, we need to treat immutable types as types themselves. To do so, instead of type qualifiers, we introduce a type operator that can be freely applied to existing types (including type variables). The operator turns a type into an immutable version of the same type. While this complicates the definition of subtyping and proofs of canonical forms lemmas, we resolve these issues by reducing types to a normal form. Our hope is to enable reference immutability systems in functional languages by giving simple, sound foundations in System F<:, a calculus that underpins many practical functional programming languages. The rest of this paper is organized as follows. In Section <ref> we give an overview of reference immutability. In Section <ref> we introduce an untyped core calculus to describe sealing and how it relates to reference immutability safety at run time. In Section <ref> we present a typed calculus, which enriches the untyped calculus with F<:-style types, and show that it satisfies the standard soundness theorems. In Section <ref> we use the soundness results of the typed calculus and the dynamic safety results of the untyped calculus to show that our desired immutability safety properties hold. We survey related and possible future work in Section <ref> and we conclude in Section <ref>. Our development is mechanized in the Coq artifact that we will submit to the OOPSLA artifact evaluation process. § REFERENCE IMMUTABILITY Reference immutability at its core is concerned with two key ideas: * Immutable references: References to values can be made immutable, so that the underlying value cannot be modified through that reference. * Transitive immutability: An immutable reference to a compound value that contains other references cannot be used to obtain a mutable reference to another value.
For example, if x is a read-only reference to a pair, the result of evaluating x.first should be viewpoint adapted <cit.> to be a read-only reference, even if the pair contains references that are otherwise mutable. For example, consider the following snippet of Scala-like code that deals with polymorphic mutable pairs.

[language=Scala]
case class Pair[X](var first: X, var second: X)
def good(x : Pair[Int]) = x.first = 5
def bad1(y : @readonly Pair[Int]) = y.first = 7
def bad2(y : @readonly Pair[Pair[Int]]) = y.first.first = 5
def access(z: @readonly Pair[Pair[Int]]): @readonly Pair[Int] = z.first

A reference immutability system would deem the function good to be well-typed because it mutates the pair through a mutable reference x. However, it would disallow bad1 because it mutates the pair through a read-only reference y. Moreover, it would also disallow bad2 because it mutates the pair referenced indirectly through the read-only reference y. This can also be seen by looking at the access function, which returns a read-only reference of type @readonly Pair[Int] to the first component of the pair referenced by z. §.§ Why though? Immutable values are crucial even in impure functional programming languages because pure code is often easier to reason about. This benefits both the programmer writing the code, making debugging easier, and the compiler when applying optimizations. Although most values, even in impure languages, are immutable by default <cit.>, mutable values are sometimes necessary for various reasons. For example, consider a compiler for a pure, functional language. Such a compiler might be split into multiple passes, one which first builds and generates a symbol table of procedures during semantic analysis, and one which then uses that symbol table during code generation. For efficiency, we may wish to build both the table and the procedures in that table with an impure loop.

[language=Scala]
object analysis
  class Procedure(name: String)
    val locals: mutable.Map[String, Procedure] = mutable.Map.empty
    def addLocalProcedure(name: String, proc: Procedure) =
      locals += (name -> proc)

  val table: mutable.Map[String, Procedure] = mutable.Map.empty
  def analyze(ast: AST) =
    ast.foreach(_ => table += (... -> new Procedure(...)))

The symbol table and the properties of the procedure should not be mutable everywhere, though; during code generation, our compiler should be able to use the information in the table to generate code but shouldn't be able to change the table nor the information in it! How do we enforce this though? One solution is to create an immutable copy of the symbol table for the code generator, but this can be fragile. A naive solution which merely clones the table itself will not suffice, for example:

[language=Scala]
object analysis
  private val table = ...
  def symbolTable: Map[String, Procedure] = table.toMap // create an immutable copy of the table

object codegen
  def go() =
    analysis.symbolTable("main").locals += ("bad" -> ...) // whoops...

While this does create an immutable copy of the symbol table for the code generator, it does not create immutable copies of the procedures held in the table itself! We would need to recursively rebuild a new, immutable symbol table with new, immutable procedures to guarantee immutability, which can be an expensive proposition, both in terms of code and in terms of runtime costs. Moreover, creating an immutable copy might not even work in all cases.
Consider an interpreter for a pure, functional language with support for letrec x := e in f. The environment in which e is interpreted contains a cyclic reference to x, which necessitates mutation in the interpreter. Without special tricks like laziness, this sort of structure cannot be constructed, let alone copied, without mutation.
[language=Scala]
abstract class Value
type Env = Map[String, Value]
case class Closure(var env: Env, params: List[String], body: Exp) extends Value
def interpret_letrec(env: Env, x: String, e: Exp, f: Exp) : Value =
  val v = interpret(env + (x -> Nothing), e)
  v match
    case Closure(_, params, body) =>
      v.env = v.env + (x -> v) // Update binding
      interpret(env + (x -> v), f)
Here, the closure that v refers to needs to be mutable while it is being constructed, but since the underlying language is pure, it should be immutable afterwards. In particular, we should not be able to mutate the closure through the self-referential reference v.env = env + (x -> v), nor should we be able to mutate the closure while interpreting f. We would like a system that prevents writes to v from the self-referential binding in its environment and from the reference we pass to interpret (env + (x -> v), f). This is what reference immutability provides.
[language=Scala]
abstract class Value
type Env = Map[String, @readonly Value]
case class Closure(var env: Env, params: List[String], body: Exp) extends Value
def interpret_letrec(env: Env, x: String, e: Exp, f: Exp) : Value =
  val v = interpret(env + (x -> Nothing), e)
  v match
    case Closure(_, params, body) =>
      v.env = env + (x -> @readonly v) // update binding
      interpret(env + (x -> @readonly v), f)
§ DYNAMIC IMMUTABILITY SAFETY Now, to formalize reference immutability, we need to pin down exactly when references are used to update the values they refer to. For example, from above, how do we check that access does what it claims to do?
[language=Scala]
def access(z: @readonly Pair[Pair[Int]]): @readonly Pair[Int] = z.first
How do we check that access returns a reference to z.first that, at runtime, is never used to write to z.first or any other values transitively reachable from it through other references? How do we even express this guarantee precisely? If we consider a reference as a collection of getter and setter methods for the fields of the object it refers to, we could ensure that a reference is immutable by dropping all the setter methods. To ensure that immutability is transitive, we would also need to ensure that the result of applying a getter method is also immutable, i.e. by also dropping its setter methods and recursively applying the same modification to its getter methods. We will make this precise by introducing an untyped calculus with a notion of sealed references. §.§ To answer this question we introduce an untyped lambda calculus with collections of mutable references – namely, records – extended with a mechanism for sealing references. It is adapted from the CS-machine of <cit.> and extended with rules for dealing with sealed references. Sealed references: To address the question about dynamic, runtime safety – can we ensure that read-only references are never used to mutate values – references can be explicitly sealed so that any operation that would mutate the referenced cell will fail to evaluate; see Figure <ref>. rules-untyped The seal form protects its result from writes. A term under a seal form reduces until it becomes a value.
At that point, values that are not records, like functions and type abstractions, are just transparently passed through the seal construct. However, values that are – records – remain protected by the seal form, and do not reduce further. For example: ({y : 0001}) is an irreducible value – a sealed record where the first field is stored at location 1 in the store. Intuitively, this can be viewed as removing the setter methods from an object reference. A sealed reference v behaves exactly like its unsealed variant v except that writes to v are forbidden and reads from v return sealed results. Rules that mutate the cells corresponding to a record explicitly require an unsealed open record; see write-field. This ensures that any ill-behaved program that mutates a store cell through a sealed record will get stuck, while an unsealed record can have its fields updated: [ ⟨{x : 10}.x = 5, []⟩ ⟨{x : 0001}.x = 5, [0001: 10] ⟩; ⟨ 10, [0001: 5] ⟩ ] A sealed record cannot have its fields written to. Unlike record field reads, for which there is a sealed sealed-field counterpart to the standard record read rule field, there is no corresponding rule for writing to a sealed record for write-field. Recall that write-field requires an open, unsealed record as input: [] l : v ∈σ ⟨{x : l }.x = v', σ⟩⟨v, σ[l ↦v'] ⟩ The calculus does not contain any rule like the following, which would reduce writes on a sealed record: [] l : v ∈σ ⟨({x : l }).x = v', σ⟩⟨v, σ[l ↦v'] ⟩ So a term like: [ ⟨ ({x : 10}).x = 5, []⟩ ⟨({x : 0001}).x = 5, [0001: 10] ⟩; ] Dynamic viewpoint adaptation: After reading a field from a sealed record, the semantics seals that value, ensuring transitive safety – see sealed-field. [] l : v ∈σ ⟨({x : l }).x, σ⟩⟨v, σ⟩ For example: [ ⟨ ({y : {x : 10}}).y, []⟩ ⟨({y : {x: 001}}).y, [001: 10] ⟩; ⟨({y : 002}).y, [001: 10, 002: {x: 001}] ⟩; ⟨ ({x: 001}), [001: 10, 002: {x: 001}] ⟩ ] Sealed references and dynamic viewpoint adaptation allow for a succinct guarantee of dynamic transitive immutability safety – that no value is ever mutated through a read-only reference or any other references transitively derived from it. Aside from preventing writes through sealed references, we should show that sealing does not otherwise affect reduction. For this we need a definition that relates pairs of terms that are essentially equivalent except that one has more seals than the other. Let s and t be two terms. We say s ≤ t if t can be obtained from s by repeatedly replacing sub-terms s' of s with sealed subterms s'. This implies a similar definition for stores: Let σ and σ' be two stores. We say σ≤σ' if and only if they have the same locations and for every location l ∈σ, we have σ(l) ≤σ'(l). The following three lemmas formalize how reduction behaves for terms that are equivalent modulo seals. The first one is for a term t that is equivalent to a value – it states that if t reduces, the resulting term is still equivalent to the same value. It also shows that the resulting term has fewer seals than t, which we'll need later for an inductive argument. Let s be a term. Then |s| is the number of seals in s. Let v be a value, σ_v be a store, t be a term such that v ≤ t, and σ_t be a store such that σ_v ≤σ_t. If ⟨ t, σ_t ⟩⟨ t', σ_t'⟩ then v ≤ t', σ_v ≤σ_t', and |t'| < |t|. The next lemma is an analogue of Lemma <ref> for terms. Given two equivalent terms s and t, if s steps to s' and t steps to t', then either s and t' are equivalent or s' and t' are equivalent. 
Moreover, again, to show that reduction in t is equivalent to reduction in s, we have that |t'| < t if s ≤ t'. Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that σ_s ≤σ_t. If ⟨ s, σ_s⟩⟨ s', σ_s'⟩ and ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then: * Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or * s' ≤ t' and σ_s' ≤σ_t'. Together, Lemmas <ref> and <ref> relate how terms s and t reduce when they are equivalent modulo seals. Assuming that both s and t reduce, every step of s corresponds to finitely many steps of t, and they reduce to equivalent results as well. This shows that sealing is transparent when added onto references that are never written to, allowing for a succinct guarantee of immutability safety. Finally, the last lemma states that erasing seals will never cause a term to get stuck. Seals can be safely erased without affecting reduction. Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that σ_s ≤σ_t. If ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then: * Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or * There exists s' and σ_s' such that ⟨ s, σ_s⟩⟨ s', σ_s'⟩, s' ≤ t' and σ_s' ≤σ_t'. From this we can derive the following multi-step analogue, after observing the following lemma: If s is a term and v is a value such that s ≤ v, then s is also a value. Hence: Suppose s and t are terms such that s ≤ t. If ⟨ t, σ_t⟩⟨ v_t, σ_t'⟩ for some value v_t, then for any σ_s ≤σ_t we have ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ such that v_s' ≤ v_s' and σ_s' ≤σ_t'. Finally, it can be shown that the seals are to blame when two equivalent terms s and t reduce differently – in particular, when one reduces but the other gets stuck. Let s, t be terms such that s ≤ t, and let σ_s, σ_t be stores such that σ_s ≤σ_t. If ⟨ s, σ_s⟩⟨ s', σ_s'⟩ and t gets stuck, then the reduction performed on s was a write to a record using rule write-field. (Sketch) As s cannot further reduce, the evaluation context of s and t must match; there are no extraneous seals that need to be discharged. As such, from inspection of the reduction rules, we see that in all cases except for write-field, for every possible reduction that s could have taken, there is a possible reduction that t could have taken as well, as desired. § TYPING AND STATIC SAFETY provides a dynamic guarantee that a given program will never modify its sealed references, but it does not provide any static guarantees about the dynamic behavior of a given program. To do that, we need a type system for that will reject programs like access(seal Pair(3,5)).first = 10, which we know will crash. To ensure that well-typed programs do not get stuck, a type system for needs a static analogue of sealing – a way to turn an existing type into a read-only type. Read-only types denote references that are immutable and that (transitively) adapt any other references read through them to be immutable as well. Issues arise, however, when we introduce polymorphism. §.§ Polymorphism Recall our earlier example – a polymorphic Pair object. [language=Scala] case class Pair[X](var first: X, var second: X) In a functional language, it is only natural to write higher-order functions that are polymorphic over the elements stored in the pair. Consider an in-place map function over pairs, which applies a function to each element in the pair, storing the result in the original pair. This naturally requires mutable access to a pair. 
[language=Scala] def inplace_map[X](pair: Pair[X], f: X => X): Unit = pair.first = f(pair.first); pair.second = f(pair.second); This is all well and good, but we may wish to restrict the behaviour of f over the elements of the pair. It may be safer to restrict the behaviour of f so that it could not mutate the elements passed to it. Note that we cannot restrict access to the pair, however, as we still need to mutate it. [language=Scala] // Is this well founded? def inplace_map[X](pair: Pair[X], f: @readonly X => X): Unit = pair.first = f(pair.first); pair.second = f(pair.second); Now, such a definition requires the ability to further modify type variables with immutability qualifiers. This raises important questions – for example, is this operation even well founded? This depends on what X ranges over. *X ranges over an unqualified type: If type variables range over types which have not been qualified by @readonly, then this operation is clearly well founded – it is simply qualifying the unqualified type that X will eventually be substituted by with the @readonly qualifier. This approach has been used by ReIm for Java and for an immutability system for C# – <cit.>. However, this raises the problem of polymorphism over immutability qualifiers as well – for example, a Pair should be able to store both immutable and mutable object references. The only natural solution is to then introduce a mutablity qualifier binder to allow for polymorphism over immutability qualifiers, as thus: [language=Scala] case class Pair[M, X](var first: M X, var second: M X) def inplace_map[M, X](pair: Pair[M, X], f: @readonly X => M X): Unit = pair.first = f(pair.first); pair.second = f(pair.second); Mutability qualifier binders have been used previously, most notably by <cit.>. For one, updating the binding structure of a language is not an easy task – ReIm notably omits this sort of parametric mutability polymorphism <cit.>. However, this sort of solution has its downsides; in particular, existing higher-order functions need to be updated with immutability annotations or variables, as type variables no longer stand for a full type. For example, an existing definition of List map which appears as thus originally: [language=Scala] def map[X](l: List[X], f: X => X): List[X] needs to be updated to read as the following instead: [language=Scala] def map[M, X](l: List[M X], f: M X => M X): List[M X] Instead, we would like to have X range over fully qualified types as well, but as we will see that poses some issues as well. X ranges over fully-qualified types: If type variables can range over types which have been already qualified by @readonly, then we can avoid introducing mutability binders in the definitions for Pair, inplace_map, and map above. A Pair can be polymorphic over its contents X without caring about the underlying mutability of X. However, this raises the question – how do we interpret repeated applications of the @readonly qualifier? For example, what if we applied inplace_map on a Pair[@readonly Pair[Int]]? Then inplace_map would expect a function f with type @readonly (@readonly Pair[Int]) => @readonly Pair[Int]. While our intuition would tell us that @readonly (@readonly Pair[Int]) is really just a @readonly Pair[Int], discharging this equivalence in a proof is not so easy. One response is to explicitly prevent type variables from being further qualified. Calculi which take this approach include <cit.>. However, this restriction prevents this version of inplace_map from being expressed. 
How can we address this? Our approach, which we explain below, is to treat @readonly as a type operator that works over all types. Following the intuition that sealing removes setters from references, @readonly should be a type operator which removes setters from types. While this does cause complications, we show below how types like @readonly @readonly Pair[Int] can be dealt with, using subtyping and type normalization. §.§ To address these issues, we introduce , which adds a type system in the style of to . The syntax of is given in Figure <ref>; changes from are noted in grey. syntax is a straightforward extension of with collections of mutable references – namely, records – and with two new extensions: read-only types and sealed references. To be close to existing functional languages with subtyping and records, records in are modelled as intersections of single-element record types, to support record subsumption, as in <cit.> and <cit.>. See Figures <ref> and <ref> for full subtyping and typing rules respectively. normalform subtyping typing Read-only types: The readonly type operator transforms an existing type to a read-only version of itself. Unlike the read-only mutability qualifier in Javari and ReIm, which is paired with a type to form a pair of a qualifier and a type, a read-only type in is itself a type. The readonly operator can be seen as the static counterpart of sealing or of deleting setter methods from an object-oriented class type. Any type T is naturally a subtype of its readonly counterpart T, which motivates the choice of as a base calculus. This subtyping relationship is reflected in the subtyping rule mutable. The seal typing rule gives a read-only type to sealed references. Static viewpoint adaptation: The readonly-record-elim rule is a static counterpart of the sealed-field reduction rule. Given a reference s to a record with read-only type, it gives a read-only type to the result of a read s.x of a field x from that reference. If S is the type of field x in the record type given to s, the rule viewpoint-adapts the type, giving s.x the type S. §.§.§ Normal Forms for Types In , is a type operator that can be applied to any type, which enables us to express types such as X, where X is some type variable of unknown mutability. However, if X is itself instantiated with some readonly type T, the type X becomes T, with two occurrences of the type operator. Intuitively, such a type should have the same meaning as T. Additionally, certain types should be equivalent under subtyping. For example, for both backwards compatibility and simplicity, arrow S → T and for-all types ∀ (X S).T should be equivalent under subtyping to their read-only forms (S → T) and (∀ (X S).T), respectively, as well. Having multiple representations for the same type, even infinitely many, complicates reasoning about the meanings of types and proofs of soundness. Therefore, we define a canonical representation for types as follows: A type T is in normal form if: * T is the top type ⊤. * T is a function type S_1 → S_2, where S_1 and S_2 are in normal form. * T is an abstraction type ∀(X S_1).S_2, where S_1 and S_2 are in normal form. * T is an intersection type S_1 ∧ S_2, where S_1 and S_2 are in normal form. * T is a record type { x : S }, where S is in normal form. * T is a read-only record type { x : S }, where S is in normal form. * Type variables X and read-only type variables X are in normal form. 
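As a worked illustration of normalization (writing the read-only type operator explicitly as readonly, and assuming, as the stated subtyping equivalences suggest, that nf distributes readonly over intersections, discards it on function and quantifier types, and collapses repeated applications): the type readonly ({x : ⊤} ∧ (⊤ → ⊤) ∧ readonly {y : ⊤}) normalizes to readonly {x : ⊤} ∧ (⊤ → ⊤) ∧ readonly {y : ⊤}. Each component of the result is a read-only record type or a function type, so the result is in normal form.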
A type in normal form is simple – it is an intersection of function, abstraction, and record types, each possibly modified by a single readonly operator. For example, {x : X}∧{y : Y} is in normal form. The type ({x : X}∧{y : Y}) is not. A grammar for types in normal form can be found in Figure <ref>. This allows us to reason about both the shape of the underlying value being typed, and whether or not it has been modified by a readonly operator. Naturally we need a theorem which states that every type has a normal form and a function nf to compute that normal form. Such a function nf is shown in Figure <ref>. Normalization both computes a normal form and is idempotent – a type in normal form normalizes to itself. normalization For any type T, nf(T) is in normal form. Moreover, if T is in normal form, nf(T) = T. Moreover, types are equivalent to their normalized forms under the subtyping relationship. Γ | Σ⊢ nf(T) T and Γ | Σ⊢ T nf(T). For one direction, note that nf(nf(T)) = nf(T), and hence nf(nf(T)) nf(T). Applying denormalize allows us to show that nf(T) T, as desired. The other case follows by a symmetric argument. Not only does this allow us to simplify types to a normal form, this also allows us to state and prove canonical form lemmas and inversion lemmas, necessary for preservation and progress: Theorems <ref> and <ref>. Below we give examples for record types. Similar lemmas exist and are mechanized for function types and type-abstraction types as well. If S is a subtype of {f : T'}, and S is in normal form, then at least one of its components is a type variable X or a record type {f : S'}, where Γ⊢ T' S' T'. If v is a value and ∅ | Σ⊢ v : {f : T}, then v is a record and f is a field of v that maps to some location l. If S is a subtype of {f : T'}, and S is in normal form, then at least one of its components is a type variable X, read-only type variable X, a record type {f : S'} where Γ⊢ T' S' T', or a read-only record type {f : S'} where Γ⊢ T' S' T'. If v is a value and ∅ | Σ⊢ v : {f : T}, then v is a record or a sealed record and f is a field of v that maps to some location l. Note that normalization is necessary to state the inversion lemmas for read-only records, as {f : T'}, {f : T'}, etc, give an infinite series of syntactically in-equivalent but semantically equivalent types describing the same object – a read-only record where field f has type T'. §.§.§ Operational Safety Operationally, we give small-step reduction semantics coupled with a store to in Figure <ref>. evaluation Again, these rules are a straightforward extension of with mutable boxes and records, with additional rules for reducing sealed records. To prove progress and preservation theorems, we additionally need to ensure that the store σ itself is well typed in the context of some store typing environment Σ – see rule store. The crux of preservation for is to show that sealed records are never given a non-read-only type, so that the typing rule for reading from a mutable record – record-elim – cannot be applied to sealed record values. Suppose Γ | Σ⊢ r : T for some record r. If T is in normal form, then the components of T are: * The top type ⊤, or * a read-only record type {f : T'}. From this key result we can show that preservation holds for . Suppose ⟨ s, σ⟩⟨ t, σ' ⟩. If Γ | Σ⊢σ and Γ | Σ⊢ s : T for some type T, then there is some environment extension Σ' of Σ such that Γ | Σ' ⊢σ' and Γ | Σ' ⊢ t : T. Conversely, values given a non-read-only record type must be an unsealed collection of references. 
Suppose ∅ | Σ⊢ v : {f : T} for runtime value v. Then v is an unsealed runtime record where field f maps to some location l. This lemma is needed to prove progress. Suppose ∅ | Σ⊢σ and ∅, Σ⊢ s : T. Then either s is a value or there is some t and σ' such that ⟨ s, σ⟩⟨ t, σ' ⟩. § STATIC IMMUTABILITY SAFETY Armed with Progress and Preservation, we can state immutability safety for full . allows us to show that sealed records are never used to mutate their underlying referenced values. shows that well-typed programs using seals do not get stuck. To prove immutability safety for , one problem still remains – allows records that are not sealed to be given a read-only type. We still need to show that records with such a type are not used to mutate their values. In other words, we need to show that records with a read-only type could be sealed, and that the resulting program would execute in the same way. We will do this by showing that, given an original, well-typed program s, we can add seals to its read-only subterms to obtain a new, well-typed program t, and furthermore that t behaves the same way as s, up to having additional seals in the resulting state. The first step is to show that sealing does not disturb the typing judgment for terms. Suppose Γ | Σ⊢ t : T. Then Γ | Σ⊢ t : T. By seal, Γ | Σ⊢ t : T. Then since T <: T, by sub, Γ | Σ⊢ t : T, as desired. From this, given a term s and a typing derivation for s, D = Γ | Σ⊢ s : T, we can seal those subterms of s that are given a read-only type in D. Let C be a term context with n holes, and let s=C[s_1, s_2, s_3, , s_n] be a term. Suppose D is a typing derivation showing that Γ | Σ⊢ s : T. Suppose also that D gives each subterm s_i of s a type T_i. Then s' = s[ s_1, s_2, , s_n] has the following properties: * s ≤ s', and * There exists a typing derivation D' showing that Γ | Σ⊢ s' : T as well. (1) is by definition. As for (2), to construct D', walk through the typing derivation D showing that Γ | Σ⊢ s : T. When we reach the point in the typing derivation that shows that s_i is given the type T_i, note that s_i can also be given the type T_i by the derivation given by Lemma <ref>. Replace the sub-derivation in D with the derivation given by Lemma <ref> to give a derivation in D' for s_i, as desired. This motivates the following definition. Let s be a term and let D = Γ | Σ⊢ s : T be a typing derivation for s. Define (s,D) to be the term constructed from s by replacing all subterms s_i of s given a read-only type in D by s_i. A crested term essentially seals any sub-term of the original term that is given a read-only type in a particular typing derivation. By definition, for any term s and typing derivation D for s, we have s ≤(s,D). Moreover, a crested term can be given the same type as its original term as well. Let s be a term and let D = Γ | Σ⊢ s : T be a typing derivation for s. Then s ≤(s,D), and there exists a typing derivation showing that Γ | Σ⊢(s, D) : T as well. Now by progress – Theorem <ref> – we have that for any well typed term s with typing derivation D = ∅ | Σ⊢ s : T, its protected – crested – version (s,D) will also step. By preservation – Theorem <ref> – we have that (s,D) either eventually steps to a value or runs forever, but never gets stuck. It remains to relate the reduction steps of (s,D) to those of s, and specifically to show that if one reduces to some specific value and store, then the other also reduces to an equivalent pair of value and store. 
We will do so by using the dynamic immutability safety properties proven in Section <ref>. satisfies the same sealing-equivalence properties as – seals do not affect reduction, except perhaps by introducing other seals. The following are analogues of Lemmas <ref>, <ref>, and <ref> for . Let v be a value, σ_v be a store, t be a term such that v ≤ t, and σ_t be a store such that σ_v ≤σ_t. If ⟨ t, σ_t ⟩⟨ t', σ_t'⟩ then v ≤ t', σ_v ≤σ_t', and |t'| < |t|. Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that σ_s ≤σ_t. If ⟨ s, σ_s⟩⟨ s', σ_s'⟩ and ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then: * Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or * s' ≤ t' and σ_s' ≤σ_t'. Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that σ_s ≤σ_t. If ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then: * Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or * There exists s' and σ_s' such that ⟨ s, σ_s⟩⟨ s', σ_s'⟩, s' ≤ t' and σ_s' ≤σ_t'. Stepping back, we can see using Lemma <ref> that one step of s to a term s' corresponds to finitely many steps of (s,D); every step that (s,D) takes either removes a seal or corresponds to a reduction step that s originally took. Hence (s,D) eventually steps to a term t' such that s' ≤ t', preserving the desired equivalence of reduction between s and (s,D). The following is a generalization of the previous statement to two arbitrarily chosen well-typed terms s and t satisfying s ≤ t. Suppose ∅, Σ⊢σ_s and ∅, Σ⊢ s : T. Suppose ⟨ s, σ_s ⟩⟨ s', σ_s' ⟩. For σ_s ≤σ_t, and s ≤ t, such that Γ, Σ⊢σ_s and Γ, Σ⊢ t : T, we have that ⟨ t, σ_t ⟩⟨ t', σ_t' ⟩ where s' ≤ t' and σ_s' ≤σ_t'. From Theorem <ref> we have that there exists a t' and σ_t' that ⟨ t, σ_t⟩⟨ t', σ_t'⟩. By Lemma <ref> we have that either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or that s' ≤ t' and σ_s' ≤σ_t'. If s' ≤ t' and σ_s' ≤σ_t' we are done. Otherwise, observe that since |t'| < |t|, a seal was removed. This can only occur a finite number of times, as t and t' have at most a finite number of seals, so we can simply loop until we obtain a t' and σ_t' such that s' ≤ t' and σ_s' ≤σ_t'. Note that Preservation – Theorem <ref> allows us to do so as each intermediate step t' can be given the same type Γ | Σ⊢ t': T. Finally, when s eventually reduces to a value v, we can use Lemma <ref> to show that (s,D) reduces to a similar value v' as well. Again, the following is a generalization of the previous statement to two arbitrarily chosen well-typed terms s and t satisfying s ≤ t. Suppose ∅, Σ⊢σ_s and ∅, Σ⊢ s : T such that s eventually reduces to a value v_s – namely, that ⟨ s, σ_e⟩⟨ v_s, σ_s'⟩ for some σ_s'. Then for any t such that s ≤ t and ∅, Σ⊢ t : T, we have that t eventually reduces to some value v_t, – namely ⟨ t, σ_e⟩⟨ v_t, σ_t'⟩, such that v_s ≤ v_t and σ_s' ≤σ_t'. For each step in the multi-step reduction from ⟨ s, σ_e⟩⟨ v_s, σ_s'⟩ we can apply Lemma <ref> to show that ⟨ t, σ_t⟩ eventually reduces to ⟨ t', σ_t'⟩ where v_s ≤ t' and σ_s' ≤σ_t'. Now by Theorem <ref> and Lemma  <ref> we have that either t' is a value, in which case we are done, or that ⟨ t', σ_t' ⟩ steps to ⟨ t”, σ_s' ⟩ where v_s ≤ t”. Again, we can only take a finite number of steps of this fashion as the rule which reduces t' t” can only be one that removed a seal, so eventually we obtain a value v_s such that ⟨ t, σ_s ⟩⟨ v_t, σ_t'⟩ with v_s ≤ v_t, and σ_s' ≤σ_t', as desired. Again, note that Preservation – Theorem <ref> allows us to do so as each intermediate step t' can be given the same type Γ | Σ⊢ t': T. 
Now from Lemma <ref> we obtain our desired immutability safety results as a consequence – namely, given a well-typed term s that reduces to a value v_s, any references in s with a type are never actually mutated, since they can be transparently sealed (which does not change the typing) to no ill effect. Formally, our main result is: Suppose s is a term, D = ∅ | Σ⊢ s : T is a typing derivation for s, and let σ_s be some initial store such that ∅ | Σ⊢σ_s. Then: * (s, D) can be given the same type as s – ∅ | Σ⊢ crest(s,D) : T. Moreover, if ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩, for some value v_s, then: * (s, D) will reduce to a value v_t – ⟨ crest(s, D), σ_e⟩⟨ v_t, σ_t'⟩, such that * v_t and σ_t' are equivalent to v_s and σ_s', modulo additional seals – namely, that v_s ≤ v_t and σ_s' ≤σ_t'. Finally, it is useful to show that the converse result is also true; seals can be safely removed without affecting reduction. First note that seals themselves can be transparently removed without affecting the types assigned to the term. Suppose Γ | Σ⊢ s : T. Then Γ | Σ⊢ s : T. Moreover, the following analogue of Lemma <ref> holds in . Suppose s and t are terms such that s ≤ t. If ⟨ t, σ_t⟩⟨ v_t, σ_t'⟩ for some value v_t, then for any σ_s ≤σ_t we have ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ such that v_s ≤ v_t and σ_s' ≤σ_t'. While Lemma <ref> is enough to show when s ≤ t, if t reduces to a value then so does s, we need Lemma <ref> to reason about the types of s and v_s. Suppose s and t are terms such that s ≤ t. If ⟨ t, σ_t⟩⟨ v_t, σ_t'⟩ for some value v_t, then for any σ_s ≤σ_t we have ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ for some value v_s such that v_s' ≤ v_s' and σ_s' ≤σ_t'. Moreover, Γ | Σ⊢ s : T and Γ | (Σ', Σ) ⊢ v_s : T for some Σ' as well. By Lemma <ref> we can show that Γ | Σ⊢ s : T. By Lemma <ref> we have that v reduces to some value v_s. By preservation – Theorem <ref> we have that v_s has type T, as desired. § MECHANIZATION Our mechanization of is based on the mechanization of by <cit.>. Our mechanization is a faithful model of as described in this paper except for one case. To facilitate mechanization, reduction in our mechanization is done via explicit congruence rules in each reduction rule instead of an implicit rule for reducing inside an evaluation context, similar to how <cit.> originally mechanize as well. Proofs for all lemmas except for Theorem <ref> and Lemmas <ref>, <ref>, and <ref> have been mechanized using Coq 8.15 in the attached artifact. Theorem <ref> and Lemmas <ref>, <ref>, and <ref> have not been mechanized as they require computation on typing derivations which is hard to encode in Coq as computation on Prop cannot be reflected into Set. Lemma <ref> has been omitted from our mechanization as it is hard to formally state, let alone prove, in a setting where reduction is done by congruence, though it almost follows intuitively from how the reduction rules are set up. As the proofs of Lemmas <ref>, <ref>, <ref>, and <ref> do not rely on any extra structure present in over , proofs for their analogues Lemmas <ref>, <ref>, <ref>, and <ref> have been omitted, as they can be recovered by erasing the appropriate cases from their analogues. § RELATED AND FUTURE WORK §.§ Limitations – Parametric Mutability Polymorphism Unlike other systems, does not support directly mutability polymorphism, neither through a restricted @polyread modifier as seen in <cit.>, nor through explicit mutability variables as seen in <cit.>. 
This is a true limitation of , however, we note that it is possible to desugar parametric mutability polymorphism from a surface language into a core calculus like . As <cit.> point out in their work, parametric mutability polymorphism can be desugared via overloading, noting that overloading itself can be dealt with in a surface language before desugaring into a base calculus, as seen before with Featherweight Java <cit.>. For example, consider the following top-level parametric function, access, which is parametric on mutability variable M: [language=Scala] def access[M](z: [M] Pair[Pair[Int]]): M Pair[Int] = z.first This function can be rewritten instead as two functions with the same name access, one taking in a regular, mutable pair, and one taking in a a readonly pair: [language=Scala] def access(z: Pair[Pair[Int]]): Pair[Int] = z.first def access(@readonly z: Pair[Pair[Int]]): @readonly Pair[Int] = z.first Nested and first-class functions are a little trickier but one can view a polymorphic, first-class function value as a read-only record packaging up both overloads. [language=Scala] access: (z: Pair[Pair[Int]]) => z.first , access: (@readonly z: Pair[Pair[Int]]) => z.first It would be interesting future work to see how one could add parametric mutability polymorphism to . §.§ Future Work – Algorithmic Subtyping The subtyping rules of are fairly involved and it is difficult to see if an algorithmic subtyping system could be devised. We would conjecture that one could do so, using techniques from <cit.>'s integrated subtyping work, but nonetheless algorithmic subtyping for remains an interesting and open problem. §.§ Viewpoint Adaptation Viewpoint adaptation has been used in reference immutability systems to denote the type-level adaptation which is enforced to guarantee transitive immutability safety. When a field r.f is read from some record r, the mutability of the resulting reference needs to be adapted from both the mutability of r and from the type of f in the record itself. While this notion of adaptation was known as early as Javari <cit.>, the term “viewpoint adaptation” was first coined by <cit.>. They realized that this notion of adaptation could be generalized to arbitrary qualifiers – whether or not the type of a field read r.f should be qualified by some qualifier @q should depend on whether or not f's type is qualified and whether or not r's type is qualfied as well – and used it to implement an ownership system for Java references in order to tame aliasing in Java programs. §.§ Reference Immutability Reference immutability has long been studied in the context of existing object-oriented languages such as Java and C#, and more recently has been studied in impure, functional languages like Scala. roDOT <cit.>: roDOT extends the calculus of Dependent Object Types <cit.> with support for reference immutability. In their system, immutability constraints are expressed through a type member field x.M of each object, where x is mutable if and only if M ≤, and x is read-only if and only if M ≥⊤. Polymorphism in roDOT is out of all reference immutability systems closest to how polymorphism is done in . Type variables quantify over full types, and type variables can be further restricted to be read-only as in . Constructing a read-only version of a type, like how we use readonly in , is done in roDOT by taking an intersection with a bound on the type member M. 
For example, inplace_map from before could be expressed in roDOT using an intersection type to modify immutability on the type variable X: [language=Scala] def inplace_map[X](Pair[X]: pair, f: (X M :> Any) => X): Unit Dort et. al. also prove that roDOT respects immutability safety, but with different techniques than how we show immutability safety in . Instead of giving operational semantics with special forms that guard references from being mutated, and relying on progress and preservation to imply static safety, they take a different approach and show instead that values on the heap that change during reduction must be reachable by some statically-typed mutable reference in the initial program. roDOT is a stronger system than , as in particular mutabilities can be combined. For example, one could write a generic getF function which reads a field f out of any record that has f as a field polymorphic over both the mutabilities of the record x and the field f: [language=Scala] def getF[T](x: M: *, f : T) : T M :> x.M = x.f Here, the return type of getF will give the proper, tightest, viewpoint-adapted type for reading x.f depending on both the mutabilities of x and f. This is not directly expressible in and can only be expressed using overloading: [language=Scala] def getF[T](x: @readonly f : T): @readonly T = x.f def getF[T](x: f : T) : T = x.f However, in contrast, roDOT is significantly more complicated than . Immutability for C# <cit.>: Of all the object calculi with reference immutability the calculus of <cit.> is closest to that of roDOT in terms of flexibility. Polymorphism is possible over both mutabilities and types in Gordon's system, but must be done separately; type variables instead quantify over base types that have not been qualified with some immutability annotation, whether that be read-only or mutable. The inplace_map function that we discussed earlier would be expressed with both a base-type variable as well as a mutability variable: [language=Scala] def inplace_map[M, X](Pair[M X]: pair, f: @readonly X => M X): Unit Like roDOT, Gordon's system also allows for mutability annotations to be combined in types, in effect allowing viewpoint adaptation to be expressed at the type level using the mutability operator ~>. For example, getF could be written as the following in Gordon's system: [language=Scala] def getF[MS, MT, T, S <: f : MT T](x: MS S) : (MS  > MT) T = x.f Unlike roDOT however, which allows for inferences to be drawn about the mutability of the type (T & {M :> x.M}).M depending on the bounds on T and S, the only allowable judgment we can draw about MS ~> MT is that it can be widened to @readonly. We cannot conclude, for example, that MS ~> MT <: M in the following, even though both MS <: M and MT <: M: [language=Scala] def getF[M, MS <: M, MT <: M, T, S <: f : MT T](x: MS S) : (MS  > MT) T = x.f Gordon et. al. also demonstrate the soundness and immutability safety of their system but through an embedding into a program logic <cit.>. Javari <cit.>: Reference immutability was first modelled in the context of Java; Javari is the earliest such extension. In Javari's formalization, Lightweight Javari, type variables X stand in for either other type variables, class types, and readonly-qualified class types. Unlike roDOT and , in Lightweight Javari, type variables cannot be further qualified by the readonly type qualifier. 
Lightweight Javari, however, does support parametric mutability polymorphism for class types, but does not support parametric mutability polymorphism directly on methods. Instead, limited parametric mutability method polymorphism in Javari, denoted with the keyword romaybe, is desugared using overloading into the two underlying methods handling the read-only case and the mutable case replacing romaybe in the source. Our earlier example, getF, can be written using romaybe as follows: [language=Java] class HasF<T> T f; romaybe T getF() romaybe return f; However, this example is inexpressible in the core calculus Lightweight Javari, as @readonly T is ill-formed. As for safety, immutability safety is done in Lightweight Javari through a case analysis on how typed Lightweight Javari program terms can reduce. <cit.> claim that the soundness of Lightweight Javari reduces to showing the soundness of Lightweight Java, but no formal proof is given. ReIm: <cit.>: ReIm simplifies Javari to enable fast, scalable mutability inference and analysis. Like Javari, ReIm supports two type qualifiers – readonly and polyread, where readonly marks a read-only type and polyread is an analogue of romaybe from Javari. Like Lightweight Javari, and unlike roDOT and , ReIm restricts how qualifiers interact with generics. ReIm's polymorphism model is similar to that of <cit.> – type variables range over unqualified types. However, ReIm has no mechanism for mutability polymorphism, and therefore getF cannot be written in ReIm at all. Unlike other related work, neither soundness nor immutability safety is proven to hold for ReIm. Immutability Generic Java: <cit.>: Immutability Generic Java is a scheme for expressing immutability using Java's existing generics system. The type List<Mutable> denotes a mutable reference to a List, whereas the type List<Readonly> denotes a read-only reference to a list. Viewpoint adaptation is not supported, and transitive immutability must be explicitly opted into. For example, in the following snippet, the field value of C is always mutable. Transitive immutability must be explicitly opted into by instantiating List with the immutability parameter ImmutOfC. [language=Java] class C<ImmutOfC> List<Mutable /* ImmutOfC for transitivity */, Int> value; Moreover, transitive immutability cannot be expressed at all over fields given a generic type. Type variables by the nature of how immutability is expressed in IGJ range over fully qualified types, and there is no mechanism for re-qualifying a type variable with a new immutability qualifier. For example, the mutability of value in any Box below depends solely on whether or not T is mutable. Hence the value field of a Box is mutable even if it was read through a read-only Box reference – that is, a reference of type Box<ReadOnly>. [language=Java] class Box<ImmutOfBox, T> T value; Box<Readonly, List<Mutable,Int>> b = new Box(...) b.value.add(10); // OK – even though it mutates the underlying List. §.§ Languages with Immutability Systems Finally, some languages have been explicitly designed with immutability in mind. C++: const-qualified methods and values provide limited viewpoint adaptation. Reading a field from a const-qualified object returns a const-qualified field, and C++ supports function and method dispatching based on the constness of its arguments <cit.>. Mutability polymorphism is not explicitly supported but can be done with a combination of templates and overloading. 
[language=C++] struct BoxedInt int v0; ; template<typename T> struct HasF<T> T f; T getF() return f; const T getF() const return f; const HasF<BoxedInt> x; x.getF() // Calls const qualified getF() const BoxedInt OK = x.f; // OK, as x.f is of type const BoxedInt. BoxedInt Bad = x.f; // Bad, discards const-qualifier. In this example a C++ compiler would disallow Bad because the type of x.f has been adapted to a l-value of const BoxedInt. However, viewpoint adaptation does not lift to reference or pointer types in C++. For example, if instead we had a pointer-to-T in HasF: [language=C++] template<typename T> struct HasF<T> T* f; BoxedInt b5; const HasF<BoxedInt> x b; BoxedInt* NotGreat = x.f; // OK, as x stores a constant pointer to a mutable BoxedInt NotGreat->v = 10; // Modifies b! C++'s limited viewpoint adaptation gives x.f the type BoxedInt * const, which is a constant pointer to a mutable BoxedInt, not the type BoxedInt const * const, which would be a constant pointer to a constant BoxedInt. This allows the underlying field to be mutated. D: In contrast to C++, where const becomes useless for pointer and reference fields, D supports full reference immutability and viewpoint adaptation with a transitive const extended to work for pointer and reference types <cit.>. Again, mutability polymorphism is not directly supported but can be encoded with D's compile-time meta-programming system. Rust: In Rust, references are either mutable or read-only, and only one mutable reference can exist for any given value. Read-only references are transitive, like they are in , roDOT, and other reference immutability systems, and unlike C++. Here, in this example, we cannot write to s3.f as it s3 is an read-only reference to s2, even though s2.f has type &mut String. [language=Rust] struct HasF<T> f: T fn main() let mut s1 = String::from("hello"); let s2 = HasF f: mut s1 ; s2.f.push_str("OK"); let s3 = s2; s3.f.push_str("BAD"); Unlike other languages, though, the mutability of a reference is an intrinsic property of the reference type itself. Instead of having a type operator readonly that, given a reference type T, creates a read-only version of that reference type, Rust instead defines & and &mut, type operators that, given a type T, produce the type of a read-only reference to a T and the type of a mutable reference to a T, respectively. Here, in the following example, s1 is a String, s2 is a mutable reference to a s1 – &mut String, and s3 is a read-only reference to s2 – & (&mut String), where all three of s1, s2, and s3 are stored at distinct locations in memory. [language=Rust] let s1 = String::from("hello"); let mut s2 = s1; let s3 = s2; As such, in Rust, one cannot create a read-only version of an existing reference type. This makes higher-order functions over references that are polymorphic over mutability, like inplace_map from above, inexpressible in Rust. However, if we instead had a Pair that owned its elements, we could write the following version of inplace_map: [language=Rust] struct Pair<T> fst: T, snd: T fn inplace_map<T>(p: mut Pair<T>, f: fn ( T) -> T) p.fst = f( p.fst); p.snd = f( p.snd); Note, though, that in this setting, the elements p.fst and p.snd are embedded in the pair p and owned by it. §.§ Type Qualifiers and Polymorphism <cit.> formalize a system for enriching types with qualifiers with support for polymorphism over both ground, unqualified types and qualifiers themselves. 
In this setting, readonly can be viewed as a type qualifier, similar to how C++'s const can be viewed as a qualifier in <cit.>. The resulting calculus which arises is similar to the calculus of <cit.> restricted only to reference immutability qualifiers. §.§ Contracts Our approach to sealing references is similar to and was inspired by practical programming experience with Racket contracts – <cit.>. Sealing, in particular, can be viewed as attaching a chaperone contract which raises an exception whenever the underlying chaperoned value is written to, and attaches fa similar chaperone to every value read out of the value. For example, a dynamic reference immutability scheme for Racket vectors could be implemented with the following chaperone contract: [language=Scheme] (define (chaperone-read vec idx v) (seal v)) (define (chaperone-write vec idx v) (error 'seal "Tried to write through an immutable reference.")) (define (seal v) (cond [(vector? v) (chaperone-vector vec chaperone-read chaperone-write)) [else v])) Strickland et. al. prove that chaperones can be safely erased without changing the behaviour of the underlying program when it reduces to a value. Our results on dynamic safety, Lemmas <ref>, <ref>, and <ref> can be viewed as an analogue of <cit.> specialized to reference immutability. In this setting, our static immutability safety results show that a well-typed program will never raise an error by writing to a chaperoned vector. § CONCLUSION We contributed a simple and sound treatment of reference immutability in . We show how a simple idea, sealing references, can provide dynamic immutability safety guarantees in an untyped context – – and how soundness and -style polymorphism can be recovered in a typed calculus which builds on both and . Our hope is to enable reference immutability systems in functional languages via this work, by giving simple soundness foundations in a calculus () which underpins many impure functional languages today. We thank Yaoyu Zhao for his interesting discussions on reference immutability. We thank Alexis Hunt and Hermann (Jianlin) Li for their useful feedback on early drafts of this work. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by an Ontario Graduate Scholarship. No seals were clubbed in the creation of this paper.
http://arxiv.org/abs/2307.04091v1
20230709042412
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation
[ "Jun Cen", "Shiwei Zhang", "Yixuan Pei", "Kun Li", "Hang Zheng", "Maochun Luo", "Yingya Zhang", "Qifeng Chen" ]
cs.CV
[ "cs.CV" ]
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation ^1Authors are with the Cheng Kar-Shun Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong SAR, China. {jcenaa}@connect.ust.hk. {cqf}@ust.hk. ^2Authors are with Alibaba Group, China. {zhangjin.zsw, zh334251, luomaochun.lmc, yingya.zyy}@alibaba-inc.com. {lk158400}@cainiao.com. ^3Authors are with the SMILES LAB at the School of Information and Communication Engineering, Xi'an Jiaotong University, Xi'an, China. {peiyixuan}@stu.xjtu.edu. ^*Work done as an intern at Alibaba DAMO Academy. Jun Cen^1,2*, Shiwei Zhang^2, Yixuan Pei^3, Kun Li^2, Hang Zheng^2, Maochun Luo^2, Yingya Zhang^2, Qifeng Chen^1 August 12, 2023 ==================================================================================================================== 2D RGB images and 3D LIDAR point clouds provide complementary knowledge for the perception system of autonomous vehicles. Several 2D and 3D fusion methods have been explored for the LIDAR semantic segmentation task, but they suffer from different problems. 2D-to-3D fusion methods require strictly paired data during inference, which may not be available in real-world scenarios, while 3D-to-2D fusion methods cannot explicitly make full use of the 2D information. Therefore, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) in this work. Our method has two contributions. First, our bidirectional fusion scheme explicitly and implicitly enhances the 3D feature via 2D-to-3D fusion and 3D-to-2D fusion, respectively, which surpasses either one of the single fusion schemes. Second, we distill the 2D knowledge from a 2D network (camera branch) to a 3D network (2D knowledge branch) so that the 3D network can generate 2D information even for those points not in the FOV (field of view) of the camera. In this way, RGB images are not required during inference anymore since the 2D knowledge branch provides 2D information according to the 3D LIDAR input. We show that our CMDFusion achieves the best performance among all fusion-based methods on the SemanticKITTI and nuScenes datasets. The code will be released at https://github.com/Jun-CEN/CMDFusion. § INTRODUCTION 3D LIDAR is significant for the perception system of autonomous vehicles, and one of its key tasks is semantic segmentation. Great efforts have been made toward better LIDAR semantic segmentation performance using the single LIDAR modality <cit.>. Recently, several multi-modality methods have been developed <cit.> to fuse the features of LIDAR and color cameras, since they provide complementary information. LIDAR provides reliable depth information and is robust to lighting conditions such as dark nights, while the camera offers dense color appearance and fine-grained textures. In this work, we also aim to study how to effectively leverage these two modalities for better LIDAR semantic segmentation.
Existing fusion-based methods can be divided into 2D-to-3D fusion method (PMF <cit.>) and 3D-to-2D fusion method (2DPASS <cit.>), as shown in Fig. <ref> (a) and (b). PMF injects 2D knowledge into the LIDAR features, so it needs strictly paired data during training and inference. However, the FOV of LIDAR and the camera may not totally overlap with each other, so those points out of the FOV of the camera cannot be tested. For example, SemanticKITTI <cit.> only provides two front-view images, and points at the side and back cannot be involved in the PMF framework. 2DPASS notices this problem and proposed injecting 3D features into 2D features during training to implicitly enhance the 3D features. In this way, 2DPASS does not require images during inference. However, 3D features do not explicitly contain 2D information in such a 3D-to-2D scheme. To solve the mentioned problems of 2D-to-3D and 3D-to-2D fusion methods, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion), as shown in Fig. <ref> (c). Specifically, on the one hand, we propose a Bidirectional Fusion Block (BFB) to explicitly and implicitly enhance the 3D features through 2D-to-3D and 3D-to-2D injection, which owns the benefits of both single fusion schemes. On the other hand, we propose a Cross-Modality Distillation (CMD) module to let a 3D network (2D knowledge branch) memorize the information of the 2D network (camera branch) during training. During inference, the 2D knowledge branch provides the 2D image information based on the 3D LIDAR point cloud inputs so that we can obtain the 2D knowledge for the whole point cloud, including those points not in the FOV of the camera. We evaluate our method on two challenging datasets, including SemanticKITTI <cit.> and NuScenes <cit.>. Experiments show that our method achieves the best performance among all fusion-based methods. In summary, our contributions include the following: * We develop a bidirectional fusion method CMDFusion for the LIDAR semantic segmentation task, which surpasses the single directional 2D-to-3D fusion and 3D-to-2D fusion methods. * We develop a cross-modality distillation module to generate 2D information for those points that are out of the FOV of the camera. * We experimentally show that our method achieves the best performance among fusion-based methods on SemanticKITTI and Nuscenes datasets. § RELATED WORK 3D LIDAR semantic segmentation has grown very fast based on well-annotated public datasets, such as SemanticKITTI <cit.> and NuScenes <cit.>. Most methods in this area are single-modality, i.e., only use LIDAR point cloud to extract information. Specifically, single-modality methods can be categorized into point-based, projection-based, voxel-based, and multi-view fusion methods. 1) Point-based methods <cit.> adapt PointNet <cit.> and PointNet++ <cit.> to the LIDAR domain. These point-based methods do not generalize very well in the LIDAR point cloud scenarios since their sampling and searching algorithms cannot perfectly handle the sparse outdoor point clouds. 2) Voxel-based methods divide the whole point cloud into voxels <cit.> and apply efficient 3D convolution for semantic segmentation like SparseConv <cit.>. Cylinder3D <cit.> proposed a cylindrical partition and asymmetrical 3D convolutional network which follows the geometry structure of the LIDAR point cloud. 
3) Projection-based methods first project the 3D LIDAR point cloud into 2D range-view images <cit.> or bird’s-eye-view (BEV) images <cit.> and then apply a 2D convolutional network for semantic segmentation. However, such a projection inevitably loses some of the 3D geometry information. 4) Multi-view fusion methods combine different views of the LIDAR point cloud as inputs. FusionNet <cit.> and SPVCNN <cit.> fuse voxel- and point-level information, while RPVNet <cit.> fuses the information of voxel, point, and range views. Recently, multi-modality fusion has become popular in the autonomous driving area. In the 3D object detection task, BEV fusion <cit.> unifies the LIDAR and image features in the BEV space and achieves state-of-the-art performance. However, height information is much more critical in the semantic segmentation task than in the object detection task, so the BEV-based method <cit.> has limited performance on the semantic segmentation task. Instead, PMF <cit.> projects the LIDAR point cloud into the image space and then conducts 2D-to-3D fusion for better 3D feature representation. 2DPASS <cit.> finds that 2D-to-3D fusion methods like PMF can only be applied to the points in the overlapping FOVs of the LIDAR and camera, so 2DPASS conducts 3D-to-2D fusion to strengthen the 3D features by supervising them from the 2D branch. Compared to PMF and 2DPASS, our bidirectional fusion network enjoys the benefits of both 2D-to-3D and 3D-to-2D fusion schemes. Besides, we propose a cross-modality distillation module so that our network can be applied to the whole LIDAR point cloud, including the points that are out of the FOV of the camera. § METHODOLOGY §.§ Framework Overview The simplified and detailed overall structures of our proposed CMDFusion are shown in Fig. <ref> (c) and Fig. <ref> (a), respectively. Our CMDFusion is composed of three branches: a camera branch (2D network), a 2D knowledge branch (3D network), and a 3D LIDAR branch (3D network). §.§.§ Training During training, the 2D knowledge branch (a 3D network) learns the 2D image information from the camera branch (a 2D network) via Cross-Modality Distillation (CMD). Although CMD is conducted only on those points in the overlapping FOVs of the LIDAR and camera, the 2D knowledge branch can be generalized to the points that are out of the FOV of the camera. In this way, we can obtain the 2D information of the whole point cloud, which is not possible in PMF <cit.> or 2DPASS <cit.>. Then we fuse the features of the 2D knowledge branch and the 3D LIDAR branch through the Bidirectional Fusion Block (BFB). On the one hand, 2D-to-3D directional fusion explicitly enhances the 3D feature via 2D information injection. On the other hand, 3D-to-2D directional fusion implicitly improves the robustness of the 3D feature since the 3D feature is required to adapt well to the 2D space. Therefore, our BFB enjoys the benefits of both PMF and 2DPASS. §.§.§ Testing During inference, the camera branch is not needed anymore since its knowledge has already been distilled into the 2D knowledge branch. In addition, only 2D-to-3D directional fusion is involved, as the final prediction comes from the 3D LIDAR branch. The right-hand side of Fig. <ref> (c) shows the parts that are needed during inference. §.§ Point-to-pixel Correspondence Point-to-pixel correspondence is the prerequisite of Cross-Modality Distillation (CMD).
Given a LIDAR point cloud P = {p_i}_i=1^N ∈ℝ^N× 3, where p_i = (x_i, y_i, z_i) ∈ℝ^3 denotes the XYZ coordinates of a point and N is the number of points in the point cloud, the projected 2D coordinates of the point p_i are calculated as: [u_i, v_i, 1]^T = 1/z_i× K× T × [x_i, y_i, z_i, 1]^T, where K ∈ℝ^3× 4 and T ∈ℝ^4× 4 denote the intrinsic and extrinsic matrices of the camera, respectively. We then take p̂_i =(⌊ v_i⌋ , ⌊ u_i ⌋ ) as the integer projected 2D coordinates, where ⌊·⌋ is the floor operation. For the SemanticKITTI dataset, K and T are given directly. For the NuScenes dataset, the extrinsic matrix T is calculated as: T=T_C←ego_t_c× T_ego_t_c←G× T_G←ego_t_l× T_ego_t_l←L, where L, C, and G refer to the LIDAR, camera, and global frames. Note that CMD is only applied to the points that are in the overlapping FOVs of the LIDAR and camera, as shown in the colorized region in the input of the 2D knowledge branch in Fig. <ref> (a). Formally, suppose the set of points in the overlapping FOVs of the LIDAR and camera is P^O = {p_i}_i=1^N^O∈ℝ^N^O× 3, where N^O denotes the number of points in the overlapping FOVs; then for each point p_i in P^O, its projected coordinates p̂_i =(⌊ v_i⌋ , ⌊ u_i ⌋ ) should satisfy 0 ≤⌊ v_i⌋≤ H and 0 ≤⌊ u_i⌋≤ W, where H and W refer to the height and width of the corresponding images. Note that for feature maps at different scales, we first upsample the feature maps to the original scale and then apply the point-to-pixel correspondence. §.§ Cross-Modality Distillation Cross-Modality Distillation (CMD) distills the 2D knowledge from the camera branch (a 2D network) into the 2D knowledge branch (a 3D network), so that we can generate 2D information for points out of the FOV of the camera and do not need images during inference. §.§.§ Camera Branch Unlike PMF <cit.> and 2DPASS <cit.>, which train the camera branch with the ground truth projected from the LIDAR point cloud, we use a ResNet101 <cit.> that is pre-trained on the Cityscapes dataset <cit.>. Cityscapes is a popular dataset for 2D image semantic segmentation in the autonomous driving scenario. We adopt this strategy for two reasons. First, if we used ground truth projected from the LIDAR point cloud, the camera branch might learn knowledge that overlaps with the 3D LIDAR branch, since they would share the same ground truth source. In contrast, a camera branch pre-trained on another dataset can provide additional information on top of the LIDAR point cloud. Second, we can freeze the camera branch during training since it is already well-trained, so less back-propagation is needed for the whole structure. In this way, the training process consumes less GPU memory and time. §.§.§ 2D Knowledge Branch Following 2DPASS <cit.>, we use SPVCNN <cit.> as the 3D network in this paper, for both the 2D knowledge branch and the 3D LIDAR branch. Now let us formulate the process of CMD. For points in the overlapping FOVs of the LIDAR and camera, p_i ∈ P^O, we feed them into the 2D knowledge branch f_2D to obtain the features z_2D^s: z_2D^s={ f_2D^s(p_i) }_i=1^N^O∈ℝ^N^O× d, where s∈{1,2,3,4 } and d refer to the feature map scale and the dimension of the features, respectively. Then we obtain the corresponding features z_C^s of P^O from the camera branch through the point-to-pixel projection described in Sec. <ref>. CMD is realized through the loss ℒ_CMD: ℒ_CMD = 1/N^O∑‖ z_2D^s - z_C^s ‖_2, where ‖·‖_2 denotes the L2 norm.
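To make the correspondence and the feature gathering that feed ℒ_CMD concrete, the following minimal NumPy sketch is our own illustration (not the authors' released code); the function names, the normalization by the projected depth, and the strict upper bounds on the pixel indices are our assumptions, while K and T are taken as given, as in the SemanticKITTI calibration files.

import numpy as np

def project_points_to_image(points_xyz, K, T, img_h, img_w):
    """Project LIDAR points into the image plane and mark the set P^O.

    points_xyz : (N, 3) XYZ coordinates in the LIDAR frame.
    K          : (3, 4) camera intrinsic matrix.
    T          : (4, 4) LIDAR-to-camera extrinsic matrix.
    Returns (N, 2) integer (row, col) coordinates and a boolean mask for
    the points inside the camera FOV.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])        # homogeneous coords, (N, 4)
    proj = (K @ T @ pts_h.T).T                               # (N, 3): [u*d, v*d, d]
    depth = proj[:, 2]
    in_front = depth > 1e-6                                  # discard points behind the camera
    uv = proj[:, :2] / np.clip(depth[:, None], 1e-6, None)   # perspective division
    cols = np.floor(uv[:, 0]).astype(np.int64)               # floor(u)
    rows = np.floor(uv[:, 1]).astype(np.int64)               # floor(v)
    in_fov = in_front & (rows >= 0) & (rows < img_h) & (cols >= 0) & (cols < img_w)
    return np.stack([rows, cols], axis=1), in_fov

def gather_pixel_features(feature_map, pixel_rc, in_fov):
    """Sample the per-point 2D features z_C for the points in P^O.

    feature_map : (H, W, d) camera feature map, already upsampled to full scale.
    """
    rc = pixel_rc[in_fov]
    return feature_map[rc[:, 0], rc[:, 1]]                   # (N^O, d)

With the gathered z_C and the corresponding z_2D from the 2D knowledge branch, ℒ_CMD is simply the mean L2 distance between the two feature sets over P^O.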
In this way, the 2D knowledge branch can mimic the function of the camera branch to provide the 2D information based on the 3D LIDAR point cloud. Although ℒ_CMD is only available for P^O during training, the trained 2D knowledge branch can be generalized to the whole point cloud P during inference. §.§ Bidirectional Fusion Our bidirectional fusion block (BFB) is composed of a 3D-to-2D fusion block and a 2D-to-3D fusion block, as shown in Fig. <ref> (b). 2D-to-3D directional fusion explicitly enhances the 3D features via 2D feature injection, while 3D-to-2D implicitly enhances the 3D features via 2D knowledge branch supervision. Note that the 3D-to-2D fusion block and 2D-to-3D fusion block share the same single directional fusion structure, as shown in Fig. <ref> (c), and the only difference is the input position. Fig. <ref> (c) is the example of the 3D-to-2D single directional fusion block, and we can obtain the 2D-to-3D single directional fusion block by simply changing the positions of two inputs in Fig. <ref> (c). Unlike CMD which can only be applied on the P^O, BFB is applied on the whole point cloud. So z_2D^s ∈ℝ^N× d and z_3D^s∈ℝ^N× d in this section. §.§.§ 3D-to-2D Fusion 3D-to-2D fusion is illustrated in Fig. <ref> (c). Formally, we first have: z_3D2D^s = _2((_1(z_3D^s), z_2D^s)), where is a multiplayer perceptron, and refers to the feature concatenation. _1 is used to transfer the 3D feature z_3D^s into the 2D feature space. _2 is responsible to transfer the concatenated feature into the residual space of z_2D^s. Then we have: z̃_2D^s = z_2D^s ⊕σ(_3(((z_3D2D^s),z_3D2D^s))) ⊙ z_3D2D^s, where ⊕ and ⊙ denote the element-wise plus and element-wise multiply, respectively. means global average pooling, and σ means Sigmoid activation function. is used to integrate the gloable information, and _3 is used to transfer the feature into the attention value. z̃_2D^s represents the enhanced 2D features of scale s. Then we concatenate z̃_2D^s and the enhanced features of previous scales z_2DF^s-1 to obtain z_2DF^s: z_2DF^s = (z_2DF^s-1,z̃_2D^s), where z_2DF^s contains all enhanced 2D features from scale 1 to s. Finally, z_2DF^4 contains the enhanced 2D features of all 4 scales, and we use a linear classifier g_2D to output the logits. The loss of 2D knowledge branch ℒ_2D is formulated as: ℒ_2D = -1/N∑ ylog(g_2D(z_2DF^4)_y), where y refers to the ground truth, and g(z_2DF^4)_y denotes the y^th logit of g(z_2DF^4). Note that single directional fusion does not share MLPs for different scales. §.§.§ 2D-to-3D Fusion 2D-to-3D fusion shares the symmetric structure with 2D-to-3D fusion. Formally, we have the following: z_2D3D^s = _2( (_1(z_2D^s), z_3D^s)), z̃_3D^s = z_3D^s ⊕σ(_3(( (z_2D3D^s),z_2D3D^s))) ⊙ z_2D3D^s, z_3DF^s = ( z_3DF^s-1,z̃_3D^s). Similarly, z_3DF^4 is the final enhanced 3D feature, and a linear classifier g_3D is used to output the logits. The loss of 3D knowledge branch ℒ_3D is formulated as: ℒ_3D = -1/N∑ ylog(g_3D(z_3DF^4)_y). Note that 2D-to-3D fusion blocks do not share MLPs and classifiers with 3D-to-2D fusion blocks. §.§ Overall Training and Testing Process §.§.§ Training The overall loss ℒ_all for training the model is calculated as: ℒ_all = ℒ_CMD + ℒ_2D + ℒ_3D. §.§.§ Testing We use the output of the classifier in the 3D LIDAR branch as the final prediction results. Specifically, the prediction result ŷ is: ŷ = max_i=1,2,...,C g_3D(z_3DF^4)_i, where C denotes the total number of classes in the dataset. 
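As an illustration of the single-directional fusion block described above, here is a minimal PyTorch sketch; it is our own reading of the equations, not the authors' implementation. For clarity it treats the point features of one scan as dense (N, d) tensors rather than sparse SPVCNN tensors, and the depth of each MLP and the per-channel attention produced by MLP_3 are assumptions.

import torch
import torch.nn as nn

class DirectionalFusionBlock(nn.Module):
    """One single-directional fusion block (3D-to-2D as written);
    swapping the two inputs gives the 2D-to-3D variant."""

    def __init__(self, dim):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(inplace=True))
        self.mlp2 = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(inplace=True))
        self.mlp3 = nn.Linear(2 * dim, dim)   # maps pooled + per-point features to attention

    def forward(self, z_src, z_tgt):
        # z_src: features to inject (e.g. z_3D); z_tgt: features to enhance (e.g. z_2D).
        fused = self.mlp2(torch.cat([self.mlp1(z_src), z_tgt], dim=-1))   # z_3D2D
        pooled = fused.mean(dim=0, keepdim=True).expand_as(fused)         # GAP over points
        attn = torch.sigmoid(self.mlp3(torch.cat([pooled, fused], dim=-1)))
        return z_tgt + attn * fused                                       # enhanced features

# usage with random point features for one scan at one scale
block = DirectionalFusionBlock(dim=128)
z_tilde_2d = block(z_src=torch.randn(4096, 128), z_tgt=torch.randn(4096, 128))

A separate block instance is kept per scale and per direction, mirroring the statement above that MLPs and classifiers are not shared across scales or between the two directions.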
§ EXPERIMENTS §.§ Experiment Settings §.§.§ Datasets We conduct experiments on three large-scale outdoor benchmarks: SemanticKITTI <cit.>, SemanticKITTI-O <cit.>, and NuScenes <cit.>. SemanticKITTI provides dense segmentation labels for sequences 00-10, in which sequence 08 is used for validation and the others are used for training. The ground truth of sequences 11-21 is not available to the public and is used for testing. Each LIDAR scan in SemanticKITTI is paired with two front-view color images; we use the image captured by the left camera in our experiments. NuScenes contains 28,130 samples for training, 6019 samples for validation, and 6008 samples for testing. Every LIDAR scan in NuScenes is paired with six images, and we randomly pick one image for training. SemanticKITTI-O is a subset of SemanticKITTI that contains the points in the overlapping FOVs of the camera and LIDAR; PMF <cit.> proposed SemanticKITTI-O because PMF cannot be applied to points outside the FOV of the camera due to its 2D-to-3D fusion scheme. §.§.§ Evaluation Metrics We adopt the commonly used mean intersection-over-union (mIoU) over all classes as the evaluation metric. Specifically, mIoU is formulated as: mIoU = 1/C∑_c=1^C TP_c/(TP_c + FP_c + FN_c), where TP_c, FP_c, and FN_c are the numbers of true positive, false positive, and false negative points for class c, and C is the number of classes. In addition, we also report the frequency-weighted IoU (fwIoU) provided by the NuScenes leaderboard; fwIoU is a version of mIoU in which each class is weighted by its point-level frequency. §.§.§ Network Settings The camera branch is a ResNet101 <cit.> network pre-trained on the Cityscapes <cit.> dataset. Following 2DPASS <cit.>, the 2D knowledge branch and the 3D LIDAR branch are two modified SPVCNNs <cit.> with the same structure. The feature maps from the three branches are first reduced to dimensions of 128 and 256 for the SemanticKITTI and NuScenes datasets, respectively, and then upsampled through bilinear interpolation to the original scale and used for CMD and BFB. As shown in Fig. <ref> (a), we use feature maps from 4 scales for better performance. §.§.§ Training and Inference Details Our model is trained in an end-to-end manner with the SGD optimizer. The initial learning rate is set to 0.24, following 2DPASS <cit.> and SPVCNN <cit.>. We train the model for 128 epochs on SemanticKITTI and 80 epochs on NuScenes. We use the augmentation strategies commonly used in LIDAR semantic segmentation, including global scaling with a random scaling factor sampled from [0.95, 1.05] and global rotation around the Z axis with a random angle. Image augmentation includes horizontal flipping and color jitter. The cropped image size is 1200 × 360 (W × H) for SemanticKITTI and 400 × 240 for NuScenes. The voxel size in the 2D knowledge branch and 3D LIDAR branch is set to 0.1. We train our model with batch size 8 on 2 Nvidia Tesla A100 GPUs with 80 GB memory.
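For reference, the mIoU and fwIoU scores reported below can be computed from a confusion matrix as in the short NumPy sketch that follows; this is our own utility code, and the ignore label of 255 for unlabeled points is an assumption.

import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    """Accumulate a CxC confusion matrix from per-point predictions and labels."""
    valid = gt != ignore_index
    idx = num_classes * gt[valid].astype(np.int64) + pred[valid].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_fwiou(conf):
    """Return (mIoU, fwIoU) from a confusion matrix with ground truth on the rows."""
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1.0)
    freq = conf.sum(axis=1) / np.maximum(conf.sum(), 1.0)   # point-level class frequency
    return iou.mean(), (freq * iou).sum()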
§.§ Results on Benchmarks §.§.§ Results on SemanticKITTI-O PMF <cit.> provides a comprehensive benchmark on the SemanticKITTI-O validation set, as shown in Table <ref>. Traditional 2D-to-3D fusion methods like PointPainting <cit.>, RGBAL <cit.>, and PMF conduct both training and inference on paired LIDAR and camera data, while our CMDFusion is trained on LIDAR-camera pairs but does not require camera data during inference. Our method significantly surpasses PMF, by 6.2 mIoU. Note that our CMDFusion can be trained on the whole SemanticKITTI dataset thanks to our 2D knowledge branch and CMD, while PointPainting, RGBAL, and PMF can only be trained on the training set of SemanticKITTI-O due to their 2D-to-3D fusion scheme. §.§.§ Results on SemanticKITTI Similar to 2DPASS <cit.>, our CMDFusion is trained on the LIDAR and camera modalities while only the LIDAR modality is required during inference, so both 2DPASS and our CMDFusion can be tested on the whole LIDAR point cloud. However, our CMDFusion includes both 2D-to-3D and 3D-to-2D fusion while 2DPASS only includes 3D-to-2D fusion, so our method surpasses 2DPASS, as shown in Table <ref>. Note that 2DPASS only released a codebase and a checkpoint that do not include the validation set in the training set or instance-level augmentation, so we retrain their model following the same setting and evaluate it on the test set. We also tried their released checkpoint on the test set and found that both achieve a similar mIoU (67.7). Under this common setting, our method achieves better performance (68.6 mIoU). We also tried the instance-level augmentation from PolarMix <cit.> on 2DPASS and on our method, and our method still surpasses 2DPASS by 0.6 mIoU. Since 2DPASS did not release the code needed to reproduce the performance reported in their paper, we only compare with them under the same training settings, where our method performs better. To avoid the mis-correspondence between images and the LIDAR point cloud caused by instance-level augmentation, we do not involve the camera branch during finetuning; we use the frozen 2D knowledge branch to provide 2D information and only finetune the 3D LIDAR branch. Overall, our method achieves the best performance among all published methods. §.§.§ Results on NuScenes Table <ref> shows that our method achieves better performance than 2DPASS, by 2.0 mIoU. As for SemanticKITTI, the reported 2DPASS performance is the higher of our retrained model and their released checkpoint. Unlike SemanticKITTI, the NuScenes dataset provides 6 images that cover the FOV of the LIDAR, so 2D-to-3D fusion methods like PMF <cit.> and 2D3DNet <cit.> can also be evaluated on the whole LIDAR point cloud. Among all fusion-based methods, our CMDFusion achieves the best performance. §.§.§ Visualization We provide two samples from the SemanticKITTI and NuScenes datasets in Fig. <ref>. The top sample shows that 2DPASS and our method have fewer errors on the building class compared to SPVCNN, which illustrates the effectiveness of multi-modality fusion. Besides, our method has better results on the car and truck classes than 2DPASS, because 2D-to-3D fusion is involved in our method but not in 2DPASS. In addition, we visualize the feature representations of 2DPASS and our method on the NuScenes dataset. As shown in Fig. <ref>, our method has more discriminative features; e.g., the pedestrian class is more separable in our method than in 2DPASS. §.§ Runtime Analysis Table <ref> provides the runtime analysis on the NuScenes dataset. PointPainting, RGBAL, and PMF use 2D networks for semantic segmentation since their inputs are range-view or perspective-view images, so they can be accelerated using TensorRT by a large margin (125.0 to 22.3 ms for the PMF method). In contrast, the 3D networks in Cylinder3D, 2DPASS, and our method cannot be accelerated by TensorRT.
Compared to PMF without TensorRT, our method has fewer FLOPs and parameters during inference, with the same runtime. Compared to 2DPASS, our method achieves better performance but uses two 3D networks during inference (the 2D knowledge branch and the 3D LIDAR branch), which inevitably consumes more runtime. §.§ Ablation Study We conduct a careful ablation study to show the effectiveness of the different modules in our method. The ablation results are based on the SemanticKITTI-O dataset, since the classical 2D-to-3D fusion without CMD can only be applied to the points in the overlapping FOVs of the LIDAR and camera. The results are in Table <ref>. The baseline is a single SPVCNN 3D network. We can see that both 3D-to-2D fusion and 2D-to-3D fusion are helpful, but 2D-to-3D fusion brings a larger performance gain since the camera information is explicitly injected into the LIDAR branch. After we replace the camera branch (CB) with a frozen CB pre-trained on Cityscapes, the performance is further improved. The reason may be that the pre-trained camera branch provides additional information beyond the current LIDAR point cloud dataset. Then we introduce cross-modality distillation (CMD) to let a 3D network output the 2D information, so that the model can be trained on the whole dataset rather than only on the overlapping FOVs of the camera and LIDAR. As a result, the performance is greatly boosted by CMD. Similar to 2DPASS, we also apply voting test-time augmentation (TTA), i.e., rotating the input point cloud to 12 angles around the Z axis and averaging the prediction scores as the final outputs. TTA improves performance by a further 2.46 mIoU. § CONCLUSION In this paper, we propose a Bidirectional Fusion Network with Cross-Modality Knowledge Distillation (CMDFusion) to fuse the information of the camera and LIDAR for better LIDAR semantic segmentation. Compared to the 2D-to-3D fusion-based method PMF <cit.>, our proposed Cross-Modality Distillation (CMD) module solves the problem that the camera branch cannot output 2D information for points out of the FOV of the camera. Compared to the 3D-to-2D fusion-based method 2DPASS <cit.>, our proposed Bidirectional Fusion Block (BFB) contains an additional 2D-to-3D fusion path, which explicitly strengthens the 3D information through 2D information injection for better LIDAR semantic segmentation. We show the effectiveness of our proposed method through comprehensive experiments on the SemanticKITTI and NuScenes datasets. Overall, we provide an alternative approach that fully utilizes multi-modality information for 3D semantic segmentation, and introduce a new and feasible way to handle the problem that the FOVs of multiple sensors do not fully overlap. We hope this paper can provide inspiration for future work on autonomous vehicles and robots. § ACKNOWLEDGMENT This work is supported by Alibaba Group through the Alibaba Research Intern Program. IEEEtran
http://arxiv.org/abs/2307.05086v1
20230711073740
Neutron star equation of state: identifying hadronic matter characteristics
[ "Constança Providência", "Tuhin Malik", "Milena Bastos Albino", "Márcio Ferreira" ]
nucl-th
[ "nucl-th", "astro-ph.HE", "hep-ph" ]
apsrev4-2
http://arxiv.org/abs/2307.03980v1
20230708140837
Building and Road Segmentation Using EffUNet and Transfer Learning Approach
[ "Sahil Gangurde" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
Building and Road Segmentation Using EffUNet and Transfer Learning Approach Sahil Gangurde ABV-Indian Institute of Information Technology & Management, Gwalior, India [email protected] =========================================================================================================================== In a city, information about urban objects such as water supply, railway lines, power lines, buildings, roads, etc., is necessary for city planning. In particular, policymakers need information about the spread, locations, and capacity of these objects to make impactful decisions. This work aims to segment buildings and roads from aerial images captured by satellites and UAVs. Many different architectures have been proposed for the semantic segmentation task, UNet being one of them. In this work, we propose a novel architecture based on Google's recently proposed EfficientNetV2 as an encoder for feature extraction, with a UNet decoder for constructing the segmentation map. Using this approach we achieved benchmark scores for the Massachusetts Building and Road datasets, with mIoU values of 0.8365 and 0.9153, respectively. segmentation, urban planning, state-of-the-art, mask, road, building § INTRODUCTION With increasing population, city areas will grow, and road and building networks will become congested and intertwined. It is difficult for humans to look at aerial views of a scene and create proper layouts of the roads and buildings. Land cover segmentation has been studied for a very long time. The area of unmanned aerial vehicles (UAVs) has seen significant growth in attention in recent years, particularly in research and industry. As unmanned aerial vehicles become more commercially successful, aerial photographs provide a new and intriguing research avenue. Integrating drones with computer vision is a novel and demanding idea that allows unmanned aerial vehicles to understand the overflown region. Aerial image interpretation entails inspecting aerial images with the express goal of detecting the distinguishing qualities of the objects of interest. Several stages are required to acquire complete scene comprehension from an aerial photograph. Given a picture, a segmentation phase is used to separate the scene into regions of specific categories (such as residential areas, flood, woodland, roads, and so on), essentially viewing the entire environment as a fully connected scene in which all categories interact with each other. Semantic segmentation is the process of partitioning different parts of an image into predefined classes. It helps identify the different labels in the image and pinpoint the exact extent of each. Various problems related to medical imagery, satellite imagery, and urban planning can be solved by automating the detection and segmentation of the objects associated with the corresponding domain. The ability to recognize various objects from UAV images, such as railway lines, water bodies, forests, and other categories, could be beneficial in multiple applications, including creating and maintaining maps of cities, improving urban planning, noting environmental changes, and disaster relief. Our study focuses on creating effective ways of recognizing buildings from top-down aerial photos and establishing an efficient automatic system capable of identifying individual structures. In this paper, segmentation of aerial images is performed to extract building masks.
The project then explores road segmentation and can be further extended to other classes. § RELATED WORK One study <cit.> performed road network segmentation from SAR images using FCNs, evaluating three models: FCN-8s, VGG19 with UNet, and DeepLabv3+ <cit.>. The work relies on relatively weak backbone models combined with UNet, which is a major drawback, and it achieved low accuracy on the custom dataset. Another approach <cit.> proposed stacking two UNets to generate the output mask. The input image is first divided into blocks of 224x224 pixels on which the two UNets are trained, and the patches are then reassembled into the full segmented mask. Though it gave promising results on the Massachusetts Building dataset and Inria's aerial dataset, converting the image into patches and then reconstructing it again is computationally expensive. EU-Net <cit.> was proposed to perform building segmentation; it uses a dense spatial pyramid pooling (DSPP) structure after the encoder network to increase multi-class feature extraction, with a UNet-style decoder. The DSPP block achieved better results than a plain UNet when segmenting buildings of different sizes. Another work <cit.> proposed using an attention mechanism in the encoder to extract the features. The encoder produces the segmentation mask and passes it to an edge detection block, which produces road edges; this hybrid encoder mechanism provided very high accuracy for the road segmentation task. A related method <cit.> also used an attention mechanism in the encoder. The proposed model employed a hybrid encoder separated into two parts: the first harvests full-resolution features, while the second creates a high-resolution feature encoding. The second part employs max-pooling layers to expand the network's receptive field, providing adequate context information to work with. Before the features from both parts are combined, a 2D activation map is constructed for each part, letting the network choose how much attention to devote to the features from each encoder stage. This helped the segmentation of large roadways and the development of fine-edged segmentation masks. Finally, a novel vision transformer network was used to perform building segmentation <cit.>. The transformer simultaneously captured global and spatial-detail contexts using a dual-path structure for accurate building segmentation. The disadvantage of this approach is that gathering the global context requires a large search window, which demands very high computational resources. § PROPOSED WORK The problem statement can be formulated as follows: * Develop models to segment buildings and roads from the urban environment and generate the corresponding masks. * Evaluate the models on segmentation metrics and choose the best one. In this paper, we combine state-of-the-art CNN architectures such as ResNet50 <cit.> and variations of EfficientNet <cit.> as encoders for the UNet architecture and train them on the Massachusetts datasets <cit.>. This generates masks of the roads and buildings, so that both can be identified in the original image. Figure <ref> shows the complete pipeline of steps involved in achieving our goal. § DATASET The datasets used in this project are the Massachusetts Roads dataset and the Massachusetts Buildings dataset <cit.>. Both datasets contain aerial views of the Boston area and the corresponding segmented masks of roads and buildings.
§.§ Building Dataset The building dataset used is the Massachusetts Buildings Dataset. It includes a total of 151 images captured from a UAV over the Boston region. Each image has dimensions of 1500 x 1500 pixels and covers an area of approximately 2.25 sq km; the whole dataset spans a region of about 340 sq km. The dataset is split into three parts: * Training data: 137 images * Validation data: 4 images * Test data: 10 images The segmentation masks were created from the building footprints of the OpenStreetMap project. The dataset covers urban and suburban parts of Boston, and the building labels include houses, commercial buildings, and garages of various sizes. The images were made available by the Massachusetts government. After being generated automatically, the segmentation masks were further hand-corrected to improve the quality of model training. Figure <ref> shows sample images and their masks from the building dataset. §.§ Road Dataset The Massachusetts Roads Dataset contains 1171 images captured from a UAV. Each image has a resolution of 1500x1500 pixels and covers an area of 2.25 sq km. The images were randomly divided into three sets: * Training data: 1108 images * Validation data: 14 images * Test data: 49 images The dataset spans over 2600 square kilometers and includes many urban, suburban, and rural areas; the test set alone spans 110 square kilometers. The segmentation masks were created from the road centerline footprints of the OpenStreetMap project, with each centerline given a thickness of 7 pixels. All images are rescaled to a resolution of 1 pixel per square metre. Figure <ref> shows sample images and their masks from the road dataset. § MECHANISM/ALGORITHMS We train on the above datasets using the following models as encoders for the UNet. The encoder's general role is to extract the features present in the image using the mask labels; the decoder then reconstructs the mask for the input. §.§ Encoder-Decoder Architecture The encoder is a CNN model that extracts features from the image. It downsamples the image and reduces the feature map resolution so that it captures high-level details from the original image. This is the approach followed by many earlier state-of-the-art models such as ResNet <cit.>, and it is common practice in CNN architectures to reduce the size of the input image to extract high-level details. It is challenging to create a segmentation map directly from the final feature map of the encoder because of its reduced size. A decoder network therefore consists of a set of layers that upsample the feature maps extracted by the encoder to recover the spatial information. Figure <ref> illustrates this encoder-decoder structure. §.§ UNet The UNet was created for biomedical image segmentation <cit.>. The UNet has two parts, an encoder and a decoder. The encoder extracts the features from the input image, and the decoder achieves exact localization using transposed convolutions. The encoder consists of only convolutional and max pooling layers. Although it was mainly developed for medical image segmentation, for our task we use this architecture together with other encoders to obtain the segmentation mask. §.§ EfficientNet EfficientNet <cit.> introduced a compound scaling strategy for convolutional networks. Figure <ref> displays the architecture of EfficientNet <cit.>. ResNet <cit.> showed that accuracy increases as the depth of the network increases.
However, at some point the accuracy of the network cannot be increased further because of the vanishing gradient problem. To address this, scaling must be performed across all dimensions, i.e., depth, width, and resolution. EfficientNet introduced a method called compound scaling, through which each of these parameters is scaled by a factor ϕ. The scaling rules are given in <ref>: depth(d) = α^ϕ, width(w) = β^ϕ, resolution(r) = γ^ϕ, such that (αβ^2γ^2)^ϕ≈ 2, where α≥1, β≥1, and γ≥1. With ϕ = 1, a grid search gave the values α=1.2, β=1.1, and γ=1.15. Keeping these values constant, the factor ϕ can then be varied to obtain the scaled models EfficientNetB1 up to EfficientNetB7. §.§ EfficientNetV2 Compound scaling in EfficientNet scales all of the model dimensions by the same factor ϕ. Scaling every dimension in lockstep is not always necessary, so EfficientNet gives less control over the individual model parameters. Also, in EfficientNet, as the image size increases the batch size must be decreased, and larger images need more time to compute the features. EfficientNet uses the MBConv layer, which relies on depthwise convolution, an expensive operation. The motivation behind EfficientNetV2 <cit.> was to create a CNN model that increases accuracy (A) while decreasing the training step time (S) and keeping the parameter count (P) low; in effect, max(A) while min(S^w, P^v), where w and v are experimentally determined. To achieve fewer parameters and less training time, NAS was used to create a model with the above objective function. To reduce the depthwise convolution time, the Fused-MBConv block was proposed: instead of performing a depthwise convolution, a regular convolution with a 3x3 filter is performed. As the depthwise convolution performs multiplications over all channels, removing it reduces the computation cost and yields faster models. Figure <ref> shows the MBConv operation. § TECHNOLOGIES USED FOR IMPLEMENTATION Since the problem involves solving a segmentation task, various deep learning libraries, matrix manipulation libraries, image processing libraries, and plotting libraries are used. Table <ref> shows the different libraries, frameworks, and other technologies used in this project. Most of the code runs were done in the Kaggle environment. Kaggle environments are backed by Google Cloud, which provides free computation power to run ML tasks. § DATA PROCESSING The following steps are involved in data processing: * One-hot encoding: For all the images, perform one-hot encoding. One-hot encoding converts the pixel values of the mask into the index of the class each pixel belongs to. Figure <ref> shows the original image, the real mask, and the constructed one-hot encoded mask of the image. * Augmentation: Perform random horizontal flips, vertical flips, and 90-degree rotations on the images and their corresponding masks. * Padding: The encoder models are implemented in such a way that padding is added to an arbitrary input size to match the input size of the various encoders. * Dataset loader: Create a data loader that feeds the model the image as input and the one-hot encoded mask as the label. § RESULTS AND DISCUSSION §.§ System Configuration All the models are trained on Kaggle with a Google Cloud backbone. Table <ref> shows the system parameters of the environment under which the models are trained.
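Before turning to the evaluation metrics, the short sketch below shows one way to assemble the encoder-decoder model described above, a UNet decoder on a pretrained EfficientNet encoder, using the segmentation_models_pytorch library. The paper does not state which library was used, so the library choice, the encoder string, and the demonstration input size are our assumptions; an EfficientNetV2 encoder can be substituted where the installed version provides one.

import torch
import segmentation_models_pytorch as smp

# UNet decoder on an ImageNet-pretrained EfficientNet encoder (transfer learning),
# with a single-channel output for building (or road) vs. background.
model = smp.Unet(
    encoder_name="efficientnet-b7",   # illustrative choice; V2 encoders may appear
                                      # under a different name depending on the version
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
)

# In practice the 1500x1500 tiles are padded to a multiple of 32 (e.g. 1504);
# a smaller tensor is used here only to keep the example light.
x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    logits = model(x)                 # raw mask logits, shape (1, 1, 512, 512)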
§ EVALUATION METRICS The models are evaluated primarily on two metrics, Intersection over Union (IoU) and F1 score, along with accuracy, precision, and recall. * Intersection over Union (IoU): Intersection over Union, also known as the Jaccard index, measures the percentage of overlap between the true mask and the predicted mask: IoU = |y ∩ y'| / |y ∪ y'|. The intersection consists of the pixels found in both the true mask and the predicted mask, and the union consists of the pixels contained in either the true mask or the predicted mask. Equation <ref> shows the formula for the IoU calculation. * F1 score: The F1 score, or Dice coefficient, also measures the overlap of two masks. Its values lie between 0 and 1 inclusive, where 1 denotes perfect overlap and 0 represents no overlap. Equation <ref> shows the formula for the Dice coefficient: Dice Coefficient = 2 |y ∩ y'| / (|y| + |y'|). The loss function that the neural network minimizes is the Dice loss, given in <ref>: Dice Loss = 1 - 2 ∑_pixels y y' / (∑_pixels y^2 + ∑_pixels y'^2). * Accuracy: Accuracy is the ratio of correctly classified pixels, both those correctly identified as belonging to the segmented class and those correctly identified as background, to all pixels. In terms of pixel-level true positives, true negatives, false positives, and false negatives, accuracy is defined as in equation <ref>: Accuracy = (TP + TN) / (TP + TN + FP + FN). * Precision: Precision shows the purity of the positive detections relative to the ground truth. Here TP counts predicted masks with an IoU above the threshold, while FP counts masks with an IoU below the threshold: Precision = TP / (TP + FP). * Recall: Recall measures the completeness of the positive predictions with respect to the ground truth labels. Equation <ref> gives the formula: Recall = TP / (TP + FN). §.§ Building Segmentation The goal of this experiment is to produce a building mask from the input aerial image. Five different encoders are tested with UNet; the experiments train the models on the dataset and report the IoU score and Dice loss. The parameters are the same for all the models; Table <ref> shows the detailed configuration. §.§ Road Segmentation The goal of this experiment is to produce a road mask from the input aerial image. Similar to building segmentation, five different encoders are tested with UNet, and the experiments train the models on the dataset and report the IoU score and Dice loss. The parameters for the road segmentation task are given in Table <ref> and are common to all the models in this experiment. §.§ Results and Discussion The models are tested on the test data, and the results obtained are shown in Tables <ref> and <ref> for building and road segmentation, respectively. The scores written in bold represent the best score achieved under a particular metric. §.§ Benchmarks The results derived from these experiments outperform the previous benchmark scores for both datasets. Table <ref> lists recent work on the building dataset; the best accuracy is achieved by the models presented in this paper. Likewise, Table <ref> compares existing models on the road dataset with respect to mIoU and mDice. The models presented in this paper set new benchmark scores for the Massachusetts datasets.
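The Dice loss and IoU defined above can be implemented directly; the following PyTorch sketch is our own minimal version for binary masks, with a small smoothing constant (an addition of ours) to avoid division by zero.

import torch

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss for a binary mask; logits are raw model outputs of shape (B, 1, H, W)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = (prob ** 2).sum(dim=(1, 2, 3)) + (target ** 2).sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def iou_score(logits, target, thresh=0.5, eps=1e-7):
    """Binary IoU (Jaccard index) after thresholding the predicted probabilities."""
    pred = (torch.sigmoid(logits) > thresh).float()
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) - inter
    return ((inter + eps) / (union + eps)).mean()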
§ LIMITATIONS AND FUTURE SCOPE The size of the input image is a major problem for UAV-based segmentation. Images with higher dimensions require very large GPU memory to load the model and its weights. Standard images consist of 3 channels, but satellite images can contain more than three channels; in that case, the whole UNet architecture must be changed to accommodate the extra channels. In this paper, only roads and buildings are segmented as part of urban object segmentation. Aerial images from different cities could be collected, and masks for additional classes such as manholes, power lines, and railway tracks could be created to expand the set of segmentation classes and allow more objects to be segmented in the urban environment. An attention mechanism could also be explored on the EfficientNet+UNet architecture to further improve accuracy. § CONCLUSION Based on the experiments, we conclude that for building and road segmentation, a UNet architecture with a pre-trained encoder is the best-performing configuration. Using transfer learning, the training time and GPU cost are reduced, and the accuracy of the models is very high. The research gaps discussed regarding transfer learning are addressed by using models pre-trained on the ImageNet dataset. This work presents new benchmark scores for the Massachusetts Building and Road datasets: for the building segmentation task, EfficientNetV2L+UNet achieved an IoU of 0.8365, and for the road segmentation task, EfficientNetB7+UNet gave an IoU of 0.9153. IEEEtranN
http://arxiv.org/abs/2307.07630v1
20230714210037
Optical Studies of Seven Bright Southern Cataclysmic Variable Stars
[ "John R. Thorstensen", "Chase K. Alvarado-Anderson", "Abigail D. Burrows", "Rowan M. Goebel-Bain", "David C. Katz" ]
astro-ph.SR
[ "astro-ph.SR" ]
0000-0002-4964-4144]John R. Thorstensen 0000-0001-5214-9008]Chase K. Alvarado-Anderson 0000-0002-5922-4469]Abigail D. Burrows 0009-0006-4917-4628]Rowan M. Goebel-Bain Department of Physics and Astronomy, 6127 Wilder Laboratory, Dartmouth College, Hanover, NH 03755-3528 We report spectroscopic observations of seven bright southern cataclysmic variable stars, collected on a single two-week observing run using the 1.9-m Radcliffe telescope at the South African Astronomical Observatory. We used radial velocity time series, in some cases in combination with other data, to determine or clarify orbital periods for five of them, namely ATO J061.1478-31.0634, BMAM-V547, MGAB-V202, NSV 4202, and V1147 Cen. For BMAM-V547, we use data from the Transiting Exoplanet Survey Satellite (TESS) to corroborate and sharpen the orbital period; the TESS data also show a photometric period near 3.93 d, likely indicating precession of the accretion disk. Also, we find a periodic modulation in the radial velocities of the SU UMa-type dwarf nova Var Ret2005, but are unable to specify a unique cycle count. Finally, we show a spectrum of ASASSN-V J061528.41-412007.3 that appears typical of a luminous novalike variable. § INTRODUCTION Cataclysmic variables (CVs) are a subclass of mass-exchange binary stars, in which a white dwarf (WD; the primary) accretes matter from a more extended companion (or secondary) that fills its Roche critical lobe. Most commonly, the companion resembles a main-sequence star, but with differences in detail caused by the complicated history of mass transfer <cit.>. Material transferred from the secondary through the inner Lagrangian point usually settles into an accretion disk around the white dwarf primary, and in most CVs the disk dominates the optical luminosity. The class is diverse; <cit.> gives a comprehensive review. If the WD has a strong magnetic field, it can disrupt the formation of the disk; material instead threads onto the field and falls down the field lines onto the polar caps, leading to an AM Herculis star, or `polar', so-called due to the polarization of their optical light. If the accretion is very slow, the disk can be faint enough for the WD to make a strong spectral contribution, especially in the ultraviolet; such systems tend to have very short orbital periods P_ orb. If P_ orb is 4-6 hours or longer, the secondary's contribution to the combined spectrum often becomes visible. CVs with P_ orb > 1 d are rare. As their name implies, all CVs are variable stars. Accretion disks evidently are subject to a limit-cycle instability, leading to dramatic brightening of typically a few magnitudes, developing over hours and lasting for several days, during which enough mass is dropped onto the white dwarf to re-establish the low-density state of the disk. Systems that do this are called dwarf novae (DNae). Most known CVs are DNae, and there is an elaborate taxonomy that describes their outbursts. Other disk CVs persist in their high-accretion states; these are the novalike variables (NLs). Spectroscopically, DNe at minimum light show strong Balmer and He1 lines, greatly broadened by motions in the disk, while NLs show strong continua. Some novalikes show almost no emission, while others show complex line profiles that vary with orbital phase; most of these are SW Sextantis stars <cit.>. Population. 
All CVs vary, and most call attention to themselves through their optical variation, though many have been discovered because of their unusually blue or ultraviolet color, or through X-ray emission. The data have become rich and complete enough that <cit.> created a sample of CVs within 110 pc that they claim is essentially complete. However, the pace of discovery remains extremely high due to the proliferation of high-cadence surveys of sufficient depth such as ZTF <cit.>, ASAS-SN <cit.>, and ATLAS <cit.>. With increasingly-complete samples, it should be possible to extend Pala's project to much greater depth. Complete samples are key observables for CV population synthesis models such as those of <cit.> and <cit.>. We undertook this study to elucidate the nature of several CVs and candidate CVs in the south celestial hemisphere, which remains somewhat less explored than the north. We selected our targets from a master CV list maintained by the lead author. Because of time and aperture constraints, we targeted CVs that remained little-studied, in particular objects with unknown or uncertain P_ orb. § OBSERVATIONS All our observations are from the 1.9 m Radcliffe telescope operated by the South African Astronomical Observatory. We used the SpUpNIC spectrograph <cit.> with Grating 6, which covered from 4220 to 6860 Å. The 1.″1 (1.1 arcsec) slit yielded a FWHM resolution of ∼ 4.5 Å. Most of our individual exposures were between 8 and 20 min. We took spectra of a CuAr arc at each new setting of the telescope, and about once an hour as the telescope tracked. For our final calibration we used the night-sky airglow lines, especially the strong [OI] lines at λλ 5577 and 6300, to adjust the calibration slightly, typically by ∼ 20 km s^-1. The night-sky adjustment failed for a few spectra; for those, we reverted to the arc calibration. When the weather was clear, we observed flux standards in twilight. From the scatter in the standard star normalizations – most likely caused by seeing variations and the narrow slit – we estimate that the absolute calibration is accurate to ∼ 20 per cent, but the relative flux scale should be better than that. To reduce the data we used a combination of IRAF routines called from pyraf, and python scripts that made extensive use of astropy routines. In particular, we extracted 1-dimensional spectra from the images using our own implementation of the variance-weighted extraction algorithm of <cit.>, as well as the modified wavelength calibration described earlier. We measured radial velocities of the Hα emission line – the strongest emission feature in all these objects – using convolution techniques described by <cit.> and <cit.>. When a contribution from a late-type star was present, we also measured its radial velocity using the fxcor task in IRAF, which implements a cross-correlation technique similar to <cit.>. For the correlation template, we used the sum of 76 spectra of IAU velocity standards, mostly early K stars, which were individually shifted to zero velocity before summing. To search for periods we created an oversampled grid of test frequencies ω, and at each ω fit the velocities v(t) with a general least-squares sinusoid v(t) = A sin(ω t) + B cos(ω t) + C, and transformed this to v(t) = γ + K sin(ω(t - T_0)). The periodograms we present are derived from these fits; the quantity plotted as a function of ω is 1/χ^2 = [ (1/(N-3)) ∑_i=1^N ( (v(t_i) - v_i) / σ_i )^2 ]^-1, where the v_i are the N measured velocities, and σ_i are their estimated uncertainties. The N-3 term in the denominator arises because at each ω, the three parameters K, T_0, and γ are adjusted.
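A compact NumPy sketch of this period search, our own illustration of the procedure rather than the scripts actually used, is:

import numpy as np

def sinusoid_periodogram(t, v, sigma, freqs):
    """Weighted least-squares sinusoid fits on a grid of trial frequencies.

    t, v, sigma : arrays of times (d), velocities and uncertainties (km/s).
    freqs       : trial frequencies in cycles per day.
    Returns the 1/chi^2 statistic described above and (P, K, gamma) at the best fit.
    """
    w = 1.0 / sigma**2
    stat = np.zeros_like(freqs)
    best = (-np.inf, None)
    for k, f in enumerate(freqs):
        omega = 2.0 * np.pi * f
        X = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
        # solve the 3x3 weighted normal equations for A, B, C
        coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * v))
        resid = v - X @ coef
        stat[k] = (len(t) - 3) / np.sum(w * resid**2)   # 1/chi^2 with N - 3 adjusted parameters
        if stat[k] > best[0]:
            A, B, C = coef
            best = (stat[k], (1.0 / f, np.hypot(A, B), C))   # P (d), K, gamma
    return stat, best[1]

The semi-amplitude K and mean velocity γ follow from the fitted (A, B, C) as above; the epoch T_0 can be recovered from the phase of (A, B) using the transformation between the two sinusoid forms.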
When a late-type star was present, we estimated its spectral type and contribution to the total spectrum using the procedure described by <cit.>. Table <ref> lists the stars we observed. The first column gives the primary name used in the American Association of Variable Star Observers' International Variable Star Index (VSX) [at https://www.aavso.org/vsx/]. Some of these objects have multiple designations, generally because they have appeared in multiple surveys, and VSX lists these designations. All the objects here are variable, so the G magnitude is only illustrative. In the discussion below, we shorten the lengthier coordinate-based names.
Table: List of Objects (Stars Discussed Here)
VSX Name | α_ICRS [h:m:s] | δ_ICRS [d:m:s] | G [mag] | 1/π_DR2 [pc] | SIMBAD name
ATO J061.1478-31.0634 | 4 04 35.483 | -31 03 48.38 | 14.4 | 481(4) | Gaia 19fes
Var Ret2005 | 4 11 09.288 | -59 11 16.27 | 16.1 | 329(4) | EC 04102-5918
ASASSN-V J061528.41-412007.3 | 6 15 28.406 | -41 20 07.24 | 13.2 | 636(6) | UCAC4 244-008602
BMAM-V547 | 6 57 33.663 | -53 34 22.03 | 14.1 | 1072(17) | Gaia DR3 …
MGAB-V202 | 8 18 08.715 | -42 34 16.91 | 14.1 | 783(11) | Gaia DR3 …
NSV 4202 | 8 39 18.497 | -70 32 41.64 | 16.6 | 730(24) | OGLE MC-DN-32
V1147 Cen | 13 00 57.58 | -49 12 12.46 | 12.6 | 351(3) | V* V1147 Cen
The celestial coordinates and distance estimates are from the Gaia Data Release 2 <cit.>. Distances are inverses of the parallax, without further adjustment. The SIMBAD designations in the final column omit the Gaia numbers for the sake of space. SIMBAD entries for these objects can be found using coordinates. We list all our radial velocities in Table <ref>, and give parameters of the best-fitting sinusoids in Table <ref>. The next section discusses the individual stars in greater detail.
Table: Radial Velocities
Object | Time^a (d) | v_abs (km s^-1) | σ (km s^-1) | v_emn (km s^-1) | σ (km s^-1)
ATO J061-31 | 59990.2968 | 0 | 14 | 48 | 11
ATO J061-31 | 59990.3024 | 35 | 11 | 39 | 9
ATO J061-31 | 59990.3096 | 53 | 10 | 34 | 10
ATO J061-31 | 59990.3180 | 73 | 11 | -22 | 10
ATO J061-31 | 59991.2852 | 47 | 10 | 25 | 9
ATO J061-31 | 59991.2956 | 74 | 11 | -15 | 10
ATO J061-31 | 59991.3061 | 108 | 14 | -21 | 10
^a Time of mid-exposure in Barycentric Julian Days, minus 2,400,000, referred to UTC.
Radial velocities used in this study. The time argument is referred to UTC (not TAI) and is the barycentric Julian date of mid-exposure minus 2,400,000, which differs from MJD by 0.5 d. The full table is published as a machine-readable table, and the first few lines are shown here to indicate its form and content.
Table: Parameters of Sinusoidal Velocity Fits
Data set | T_0 (BJD) | P [d] | K [km s^-1] | γ [km s^-1] | N | σ [km s^-1]
ATO J061-31 abs. | 59994.2295(13) | 0.245282^a | 132(5) | 39(3) | 25 | 11
ATO J061-31 emn. | 59994.363(4) | 0.245282^a | 93(10) | 4(7) | 25 | 23
BMAM-V547 | 59986.376(9) | 0.15536^b | 19(7) | 12(5) | 28 | 17
MGAB-V202 wings | 59988.401(2) | 0.15612(10) | 188(19) | 17(12) | 76 | 57
NSV 4202 | 59989.553(4) | 0.2839(6) | 80(7) | -21(5) | 31 | 16
V1147 Cen abs. | 59997.3812(13) | 0.4190(5)^b | 152(3) | -34(2) | 24 | 8
V1147 Cen emn. | 59997.613(2) | 0.4190 | 134(5) | -7(4) | 24 | 12
^a Period held fixed at twice Monard's value. ^b Period chosen corresponds to the photometric modulation in the TESS data.
§ THE INDIVIDUAL STARS §.§ ATO J061-31 The Catalina light curve <cit.> of this bright dwarf nova shows a relatively steady minimum near 14.4 < V < 14.7, and a single outburst to V = 12.3. It has been followed for some years, but there are evidently no spectra in the literature.
P_ orb is not definitively determined; VSX lists 0.122641(4) d (≈ 2.94 h) from a photometric modulation at minimum light, attributed to B. Monard[The vsnet-alert site maintained at Kyoto University retains an archive of messages about variable stars, mainly CVs. Monthly digests of the messages can be downloaded from their website, http://ooruri.kusastro.kyoto-u.ac.jp/mailman3/postorius/lists/vsnet-alert.ooruri.kusastro.kyoto-u.ac.jp/. Monard's period is relayed by T. Kato in vsnet-alert 23816 (from 2019 December).]. The 0.1226-d period would be unusual for a dwarf nova; CVs near this period, near the long edge of the roughly 2- to 3-h `gap' in the CV period distribution, tend to be NLs rather than DNae. On the other hand, DNae with P_ orb twice as long (∼ 5.88 h) often have prominent secondary stars and display two `humps' per orbit due to the changing aspect of the tidally-elongated secondary. During our observing run, the target was west of the meridian at evening twilight, so we could not determine a definitive P_ orb from our velocities alone. Our aim instead was to distinguish between candidate periods of 2.94 h and 5.88 h. We obtained 25 exposures totaling 5.9 h, spread over three nights, spanning somewhat over 3 h of hour angle. Fig. <ref> shows the results. The mean spectrum (top panel) shows multiple absorption features from a late-type star in addition to the broadened Balmer and HeI emission typical of dwarf novae. The middle panel shows the periodogram of the absorption velocities, which, despite the limited sampling, clearly indicates a 5.88-h period, double the VSX value. There is no significant modulation at half this period (P = 2.94 h). Allowing the period to vary we find P = 0.24521(12) d, consistent within the uncertainties with (double) the more precise Monard period; we therefore adopt P_ orb = 0.245282(8) d. The lower panel shows the folded radial velocities. As expected, the emission velocities move approximately in antiphase to the absorption. The upper trace of Fig. <ref> shows the mean spectrum, and the lower shows it after subtraction of a scaled spectrum of the K0.5V-type star, HD124752. The scaling factor was chosen interactively to best cancel the late-type features in the difference spectrum. Our best estimate of the spectral type is K0-1, with a plausible range from G6 to K4. <cit.> compiled numerous spectral-type estimates for CV donor stars with known period, and found that around P_ orb = 6 hr, the typical spectral type is near M0 (see his Fig. 7). The secondary in ATO J061-31 therefore appears to be significantly warmer than typical. This might indicate that some nuclear evolution has taken place in the secondary. Evolved donor stars can be much hotter than expected at a given P_ orb <cit.>. ATO J061-31 was observed by the Transiting Exoplanet Survey Satellite (TESS) in Sectors 4 and 5, with 1800-s cadence. We downloaded the TESS `PDCSAP' data using the lightkurve python module, edited out obvious artifacts, and folded the remaining data on the 0.245282-d period. The result (Fig. <ref>) shows double-humped modulation due to the changing aspect of the tidally-distorted secondary star. Note that one maximum appears slightly fainter than the other, corroborating once again that the period is 5.88 hours and not half that. The TESS data were taken in late 2018, about 4.5 years before our spectra, and our nominal period (based on doubling the Monard value) is not quite precise enough to specify an unambiguous cycle count across this gap.
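The TESS folding described here can be reproduced with a few lines of lightkurve; the sketch below is our own, and the target-name resolution, the choice of pipeline products, and the outlier clipping are assumptions rather than the exact procedure used.

import lightkurve as lk

# Fetch the Sector 4-5 light curves at 1800-s cadence; the 30-min products are
# typically served under the "TESS-SPOC" or "QLP" authors -- adjust as needed.
search = lk.search_lightcurve("ATO J061.1478-31.0634", mission="TESS",
                              sector=[4, 5], exptime=1800)
lc = search.download_all().stitch()              # combine sectors into one light curve
lc = lc.remove_nans().remove_outliers(sigma=5)   # crude editing of artifacts

P_orb = 0.245282                                 # adopted orbital period, days
folded = lc.fold(period=P_orb)                   # phase-fold on P_orb
ax = folded.scatter()                            # double-humped ellipsoidal modulation
ax.figure.savefig("ato_j061_fold.png")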
§.§ Var Ret2005 The Gaia alerts-index lists this object as Gaia20cdc, and the Gaia light curve shows typical quiescent G magnitude between 16 and 17, but with outbursts to G ∼ 13 at intervals ranging from several months to over a year. Outbursts were also noted on the vsnet-alert message board; T. Kato, in vsnet-alert 23341 (2019 July) classified some of these as superoutbursts, and concluded that the object is “almost certainly an SU UMa star.” However, we were unable to find mention of any candidate superhump period, nor any spectroscopic studies in the literature. The average of our 20 spectra (Fig. <ref>, top) is typical of quiescent short-period dwarf novae (see, e.g., ), with strong Balmer and HeI emission lines, and no hint of a late-type companion. On the nights we observed we could not obtain a range of hour angles sufficient to determine an unambiguous radial velocity period, and weather constrained our two visits to be two nights apart. Consequently, the periodogram (Fig. <ref>, middle) shows strong aliases spaced by Δ f = 1 / (2 d). Nonetheless, we constrain the period to the values shown in Table <ref>. The candidate periods are shorter than 2 h. This corroborates Kato's suggested SU UMa classification, since nearly all dwarf novae in this range are SU UMa stars. Assuming the classification is correct, more complete observations of a superoutburst should reveal a superhump period, which in turn would resolve the orbital period ambiguity, since P_ sh is generally a few per cent longer than the orbital period in SU UMa stars (see, e.g. and references therein).
Table: Candidate Periods in Var Ret2005
Rank | P (d) | 1/P (d^-1) | σ (km s^-1)
1 | 0.06355 | 15.736 | 11.9
2 | 0.06557 | 15.252 | 12.1
3 | 0.06165 | 16.220 | 12.5
4 | 0.06771 | 14.769 | 13.0
5 | 0.05986 | 16.706 | 13.6
6 | 0.07000 | 14.286 | 14.4
7 | 0.05816 | 17.193 | 15.3
8 | 0.07244 | 13.804 | 16.3
9 | 0.05656 | 17.682 | 17.2
Ranked list of alias periods and corresponding frequencies from the Var Ret2005 velocities. The last column gives the scatter around the best fit at each period. The uncertainties in the individual periods are of order 5 × 10^-5 d.
The lower panel of Fig. <ref> shows our Hα radial velocities folded at our best period, but readers are cautioned that the period chosen is not unambiguous. §.§ ASASSN J0615-41 In a 2018 July message on vsnet-chat[http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-chat/8036], T. Kato suggested this object is a novalike CV, based on its absolute magnitude. The Catalina Real Time Survey (CRTS; ) collected 284 magnitudes from 2005 to 2013, which show irregular variation between 12.8 and 13.8, similar to many NLs. The object is listed in the SIMBAD database as a `star'. We observed the object because we were unable to find any published spectra. Fig. <ref> shows the average of two 480-s exposures. Hα emission is present with an emission equivalent width of 3.3 Å and a FWHM around 14 Å. Hβ is in absorption, with an emission core, and the higher Balmer lines are entirely in absorption. The absorption feature just shortward of λ 5900 appears to be NaD absorption, likely interstellar, blended somewhat with weak HeI λ 5876 absorption. The spectrum is consistent with a thick-disk, or UX-UMa type, novalike variable <cit.>. The variability and spectrum bolster the case that this is a bona fide novalike CV. §.§ BMAM-V547 This object was first noted by Mariusz Bajer in archival data[This and MGAB-V202 have apparently not been discussed in the literature indexed by SIMBAD and ADS. Please refer to the VSX entries for details.].
The ASAS-SN light curve shows it varying irregularly around V = 14.2, and more recently fading to about g = 15.0, still with irregular variations. It is not classified as a CV in SIMBAD. The mean spectrum (top panel of Fig. <ref>) shows a strong, blue continuum and weak, narrow emission lines, typical of a novalike variable. The amplitude of the emission radial velocity variations is small, and their periodogram (middle panel) does not indicate a unique period. However, one of the possible periods, marked with a vertical line in the figure, aligns with the photometric period we derive from TESS observations (see below). The lower panel shows the velocities folded on this period, which we identify as the likely P_ orb.
Table: TESS Observations of BMAM-V547
Sector | Start^a | End | Mean Flux (electrons s^-1)
2 | 2018-08-23 | 2018-09-15 | 838
6 | 2018-12-15 | 2019-01-06 | 644
29 | 2020-08-26 | 2020-09-19 | 320
33 | 2020-12-18 | 2021-01-13 | 329
34 | 2021-01-14 | 2021-02-08 | 286
35 | 2021-02-10 | 2021-03-06 | 330
39 | 2021-05-27 | 2021-06-24 | 307
^a Dates are the UT of the first and last points used, in year-month-day form.
We also analyzed TESS observations of this star, which are summarized in Table <ref>. We downloaded the PDCSAP files, edited out apparent artifacts (and some possible flares as well, since our aim was to find periodicities), and computed periodograms using the LombScargle class from the astropy.timeseries module. All the sectors separately showed very strong modulation near 6.435 cycles d^-1, equivalent to P = 0.1554 d, or 3.730 h. To explore this, we combined data from four sectors in which the mean brightness was consistent and relatively low – sectors 29, 33, 35, and 39 – and searched this data set for periods (see the top panel of Fig. <ref>). This refined the frequency to 6.4365(9) d^-1, or P = 0.15536(2) d (near 3.73 hr), where the uncertainty was estimated by examining light curves folded over a range of nearby periods. The modulation apparently maintains coherence over the 301-day span of the data, which amounts to 1940 cycles. The middle panel of Fig. <ref> shows the low-state TESS data set folded on this period. All the TESS data sectors except Sector 2 (during which the source was brightest) show a second, weaker modulation (also indicated in Fig. <ref>) near 0.254 d^-1, or 3.93 d. The lower panel of Fig. <ref> shows the low-state TESS data folded on this much longer period. The period of the ∼ 3.73-hr modulation in the TESS data is typical of NL orbital periods. This, together with its coherence and the evidence for radial velocity modulation consistent with the same period, suggests that the 0.15536-d period is P_ orb, rather than being caused by some other clock in the system, although given the relatively weak velocity modulation we cannot be certain of this. The spectrum, photometric modulation, and velocity modulation are all consistent with a novalike variable. Periods comparable to the 3.93-d period, much longer than P_ orb and often called superorbital periods, are seen in other novalike CVs (see, e.g., and references therein). These are generally attributed to the precession of a disk – either precession of the major axis of an elliptical disk, or precession of the line of nodes of a tilted disk. Often, systems with eccentric or tilted disks also show superhumps, periodic modulations at frequencies close to the orbital frequency f_0. These frequencies are thought to be beats between the precession and the orbit, and appear at f_0 + f_p or f_0 - f_p, where f_p is a disk precessional frequency. We do not find these frequencies in BMAM-V547. Our favored P_ orb is flanked by sidelobes, but these appear to be artifacts of the gaps between TESS sectors. In particular, we do not detect noticeable power near f_0 ± f_p.
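The Lomb-Scargle search described here is a one-liner with astropy; the sketch below is our own illustration, with synthetic data standing in for the edited low-state PDCSAP fluxes and with the frequency grid chosen by us.

import numpy as np
from astropy.timeseries import LombScargle

# In practice t (days) and flux come from the edited PDCSAP light curves of the
# low-state sectors (29, 33, 35, 39); synthetic data stand in for them here.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 25.0, 2000))
flux = 1.0 + 0.01 * np.sin(2 * np.pi * 6.4365 * t) + 0.005 * rng.standard_normal(t.size)

frequency = np.linspace(0.05, 12.0, 100000)          # trial frequencies, cycles per day
power = LombScargle(t, flux).power(frequency)

f_peak = frequency[np.argmax(power)]                 # near 6.4365 cycles/day for BMAM-V547
print(f"strongest peak: {f_peak:.4f} cycles/day -> P = {1.0 / f_peak:.5f} d")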
We do not find these frequencies in BMAM-V547. Our favored P_ orb is flanked by sidelobes, but these appear to be artifacts of the gaps between TESS sectors. In particular, we do not detect noticeable power near f_0 ± f_ p. §.§ MGAB-V202 This object was apparently first identified as a CV candidate by Gabriel Murawski. The VSX listing includes a light curve from ASAS-SN showing irregular variation 13.8 ≲ V ≲ 14.4. Again, SIMBAD does not include a classification as a CV. TESS observed the source in Sectors 34, 35 (2021 February and March, roughly) and 61 (starting in 2023 January). Lomb-Scargle periodograms of the data from Sectors 34 and 35 both show an apparently significant periodicity near 5.797 cycles d^-1. In a simple fold of the data (Fig. <ref>, top), the modulation is evidently masked by irregular flickering, but averaging in phase bins does reveal a low-amplitude modulation (middle panel). The data from Sector 61 show a stronger periodicity near a different frequency, 6.037 d^-1. This modulation is discernible in the folded data (lower panel). In phase-binned averages (not shown), its minimum and maximum are respectively near 305 and 325 TESS counts s^-1. We obtained 76 spectra of MGAB-V202, a total of 13.8 h of exposure time spanning hour angles from -2.1 h to +5.6 h. The mean spectrum (top panel of Fig. <ref>) shows relatively strong Balmer and HeI emission lines on a blue continuum. For the radial velocities, we obtained the clearest result using the double-Gaussian convolution with a separation of 42 Å, which isolated the motion of the rather faint wings (or base) of the Hα emission line. This gave the periodogram shown in the middle panel. The prominent peak is at 6.405 cycles d^-1, or 0.1561 d. This is, notably, not seen in any of the TESS photometry, and it is not a daily alias of any of the TESS periods, either. Thanks to the large span of hour angle, it is determined without significant ambiguity in the cycle count; a 1000-trial Monte Carlo simulation of the measurement <cit.> returned the correct period every time. The lower panel shows the folded line-wing velocities with the best-fit sinusoid superposed. Fig. <ref> displays the spectra as a function of phase in a two-dimensional image. The lower panel is `stretched' to show the large-amplitude motion of the Hα line wings. Also, the HeI emission lines at λλ 5876 and 6678 both show absorption over part of the phase that appears to drift blueward, which is a classic symptom of the SW Sex phenomenon <cit.>. Faint, large-amplitude Balmer line wings are also seen in the novalikes V795 Her <cit.> and LAMOST J204305.95+341340.6 <cit.>. Based on its spectral appearance, orbital period, and detailed spectral behavior, MGAB-V202 is clearly an SW Sex star. Both of the photometric periods seen in TESS data taken ∼ 2 years apart are distinct from P_ orb. The 0.1725-d period seen in 2021 is 10.5 per cent longer than P_ orb, and the stronger 0.1656-d period in early 2023 is 6.1 per cent longer. As noted earlier, novalikes in this range of P_ orb frequently show superhumps: either positive superhumps, with P_ sh somewhat longer than P_ orb, thought to be caused by the apsidal precession of an eccentric disk, or negative superhumps, with periods shorter than P_ orb, thought to arise from the nodal precession of a tilted disk. <cit.> recently studied long-term TESS light curves of a large sample of novalikes, and found many examples in which the superhump modulations disappear and/or change period, as seen here.
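The fractional period excesses quoted above for MGAB-V202 follow directly from the measured periods; a minimal check:

```python
# Superhump period excess (P_sh - P_orb)/P_orb for MGAB-V202, values from the text.
P_orb = 0.1561                                       # d, spectroscopic period
for year, f_phot in [(2021, 5.797), (2023, 6.037)]:  # TESS frequencies, cycles/d
    P_sh = 1.0 / f_phot
    excess = 100.0 * (P_sh - P_orb) / P_orb
    print(f"{year}: P_sh = {P_sh:.4f} d, excess = {excess:.1f} per cent")
```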
The modulations in MGAB-V202 appear to be examples of positive superhumps. §.§ NSV 4202 This object was apparently first noticed by <cit.>. Sebastian Otero added it to the VSX catalog and classified it as a low-amplitude dwarf nova based on its light curve from ASAS-3. It was also detected by the OGLE-III survey <cit.>. The ASAS-SN light curve shows a rather flat quiescence near V = 14.4, and outbursts to V ∼ 12.8 at irregular intervals of order 100 days, all typical of dwarf novae. We were unable to find any candidate orbital period in the literature. The mean spectrum (Fig. <ref>, top panel) shows strong Balmer and HeI emission typical of a dwarf nova at minimum light. The Hα radial velocities show periodicity at 0.2839(6) d, or 6.81 h, which corresponds to 3.52 cycles d^-1. A daily cycle-count alias at 4.55 cycles d^-1, or 5.3 h, is marginally possible but gives a much poorer fit. The data have good alias discrimination because of the 6.2-h range of hour angle covered, and because of the amplitude of the modulation relative to the noise (i.e., K/σ); the Monte Carlo test <cit.> prefers the stronger period more than 97 per cent of the time. The lower panel shows velocities folded on the 6.81-h period. The period is fairly long for a dwarf nova, so it is somewhat surprising that the mean spectrum shows no contribution from a late-type secondary. A secondary contribution is almost always seen in high signal-to-noise spectra of quiescent dwarf novae with periods above six hours or so (see, e.g., the spectra of ATO 061-31 that were discussed earlier), so we looked for other evidence to corroborate the period. Unfortunately, none of the synoptic surveys appear to have sampled this object densely enough to confirm it, and although TESS has observed this deep southerly location many times, NSV 4202 is ∼ 15 arcsec from a significantly brighter star and light curves are not available. We also prepared a phase-resolved image of the rectified spectra, similar to that of MGAB-V202 shown in Fig. <ref>, but that also showed no sign of a secondary star's spectrum. §.§ V1147 Cen This object, first noted as a variable star by <cit.>, is apparently the longest-known of those studied here. It was recognized as a likely U Gem star by <cit.>, who presented an ASAS-3 light curve showing a quiescent level near 13.5 mag and frequent outbursts to 11.0 mag. <cit.> bestowed the designation V1147 Cen, and listed the type as “UGSS:”. The ASAS-SN light curve is entirely typical of an active dwarf nova, with outbursts typically ∼ 40 days apart. No detailed study appears to have been published, and the orbital period remains unknown. TESS light curves are available from Sectors 11 and 37. Both show a strong periodicity at 4.767 cycles d^-1 (P = 5.035 h), with less power at half that frequency (10.07 h). The mean spectrum (Fig. <ref>, top panel) shows typical dwarf nova emission lines and also a contribution from a late-type star. We have only 4.0 hours of data covering 4.8 hours of hour angle, so the velocities do not define the period uniquely, but both the emission and absorption velocities show a strong, consistent low-frequency modulation (Fig. <ref>, middle panel). One of the aliases of this modulation is at P = 0.4190(5) d, or 2.38(3) cycles d^-1, consistent with half the dominant TESS frequency. The TESS modulation is clearly due to ellipsoidal variation of the secondary, with two humps per orbit.
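Phase-folding and phase-bin averaging of the kind applied repeatedly to the TESS light curves in this paper can be sketched as follows; this is illustrative only (the file name is a placeholder, and the period is the V1147 Cen value quoted above).

```python
import numpy as np

# Generic fold-and-bin utility for a light curve (times t in days, fluxes y).
def fold_and_bin(t, y, period, t0=0.0, nbins=50):
    phase = ((t - t0) / period) % 1.0
    bins = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, bins) - 1
    centers = 0.5 * (bins[:-1] + bins[1:])
    means = np.array([y[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(nbins)])
    return phase, centers, means

# Hypothetical input file; folding on P_orb shows the two ellipsoidal humps per orbit.
t, flux = np.loadtxt("v1147cen_tess_lc.txt", unpack=True)
phase, centers, means = fold_and_bin(t, flux, period=0.4190)
```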
The lower panel shows both emission and absorption velocities folded on the spectroscopic orbital period, which amounts to 10.06 h. The top trace in Fig. <ref> is the average of the flux-calibrated spectra of V1147 Cen, each resampled into the rest frame of the secondary prior to averaging. The lower trace shows the difference between this average and a scaled spectrum of the K2V star HD109111. The secondary features cancel very well; we found adequate cancellation for types K0 to K3. Comparing to other CVs with P_ orb∼ 10 h in Fig. 7 of <cit.>, the secondary in V1147 Cen is cooler than the analytic fit, but similar to several other examples plotted in that figure and listed in Knigge's Table 2. In summary, we confirm that V1147 Cen is a UGSS star and show that it has a relatively long P_ orb. Had it been more northerly, it would likely have already been swept up by SDSS, LAMOST, and other surveys and attracted more attention. § SUMMARY We obtained spectra of selected bright CVs in the southern hemisphere, with the aim of characterizing them more fully. The orbital period of a CV is its most fundamental observable, and for most of our targets we succeeded in measuring P_ orb. Our targets, while selected on the basis of tractability, represent several different subclasses of CVs – three novalikes, including an apparent SW Sextantis star, and four dwarf novae, including two with visible secondary stars and one short-period SU UMa-type system. None of the objects appears grossly atypical, but there are a few notable findings:
* MGAB-V202 is evidently a new SW Sextantis star. In TESS photometry it shows two periods clearly different from P_ orb, neither of which is detected consistently. These may be related to disk precession, and further monitoring may be enlightening.
* The TESS photometry of BMAM-V547 shows a clear, persistent modulation at a period that agrees with one of our possible radial velocity periods. In addition, the TESS photometry shows a superorbital period near 3.93 d.
* The secondary star in the dwarf nova ATO J061-31 is slightly warmer than expected at its orbital period.
* The dwarf nova NSV 4202 does not show a secondary-star spectrum, despite its relatively long P_ orb.
§ ACKNOWLEDGMENTS This paper uses observations made at the South African Astronomical Observatory (SAAO). We are deeply thankful to the SAAO staff for their warm hospitality and expert assistance. Student travel to and from the observatory, and accommodations at SAAO, were underwritten by a generous donation from Heather and Jay Weed. The observations reported here were taken as part of the Dartmouth College Foreign Study Program in astronomy; Professors Brian Chaboyer and Ryan Hickox were essential in arranging, supporting, and carrying out this program. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555.
http://arxiv.org/abs/2307.06180v1
20230712141159
Compact dual-band spectral analysis via multiplexed rotated chirped volume Bragg gratings
[ "Oussama Mhibik", "Murat Yessenov", "Leonid Glebov", "Ayman F. Abouraddy", "Ivan Divliansky" ]
physics.optics
[ "physics.optics" ]
APS/123-QED CREOL, The College of Optics & Photonics, University of Central Florida, Orlando, FL 32816, USA Corresponding author: [email protected] CREOL, The College of Optics & Photonics, University of Central Florida, Orlando, FL 32816, USA CREOL, The College of Optics & Photonics, University of Central Florida, Orlando, FL 32816, USA CREOL, The College of Optics & Photonics, University of Central Florida, Orlando, FL 32816, USA CREOL, The College of Optics & Photonics, University of Central Florida, Orlando, FL 32816, USA Chirped Bragg volume gratings (CBGs) offer a useful alternative for spectral analysis, but increasing the bandwidth necessitates increasing the device area. In contrast, recently developed rotated CBGs (r-CBGs), in which the Bragg structure is rotated by 45^∘ with respect to the device facets, require increasing only the device length to extend the bandwidth, in addition to the convenience of resolving the spectrum at normal incidence. Here, we multiplex r-CBGs in the same device to enable spectral analysis in two independent spectral windows without increasing the system volume. This new device, which we term an X-CBG, allows for compact multi-band spectroscopy in contiguous or separated spectral windows in the visible and near-infrared for applications in nonlinear microscopy and materials identification and sensing. Compact dual-band spectral analysis via multiplexed rotated chirped volume Bragg gratings Ivan Divliansky August 12, 2023 =========================================================================================== Spectrometry is one of the fundamental tools used in optics for the identification of materials and chemical compounds, for monitoring changes in the environment, for quality control of food and industrial processes, and for medical diagnostics <cit.>. Optical spectral analysis can be implemented via diverse strategies, including tunable narrowband filters <cit.>, Fourier transform systems <cit.>, and – more recently – by exploiting computational techniques for reconstructing the spectrum by trained detectors <cit.>. The leading approach for many commercial systems in the visible and near-infrared (NIR) remains the utilization of a dispersive element, such as diffraction gratings, to spatially resolve the spectrum before recording it with a linear detector array <cit.>. Such systems occupy a middle ground with respect to system performance, cost, and size. Over the past three decades, efforts have been dedicated towards miniaturization of the volume of spectrometers <cit.>, which enables applications that require handheld or portable devices <cit.> for use in smartphones <cit.>, for hyperspectral imagers on drones, or for on-chip implementations <cit.> (see the recent reviews in Refs. <cit.>). One example exploits random media in lieu of a conventional diffraction grating, whether a multimode fiber <cit.> or an on-chip structure <cit.>. Another recent example combines computational techniques with new measurement techniques to drastically reduce the system volume <cit.>. It is nevertheless now well-understood that any optical functionality, such as spectral analysis, requires a minimal volume to be carried out <cit.>. In other words, even if the optical components used are reduced to thin elements (e.g., metasurfaces), a minimum system volume is nevertheless required. This volume is usually dominated by the space needed for free propagation rather than the optical components themselves. 
We recently introduced a new class of compact optical devices for spectral analysis that we have called rotated chirped volume Bragg gratings (r-CBGs) <cit.>. In an r-CBG, the Bragg structure is rotated by 45^∘ with respect to the device facets. Consequently, the spectrum of a field incident normally on the input facet is spatially resolved and exits normally from a side facet orthogonal to the input. Such a device enables constructing compact, mechanically stable spectrometers. Moreover, r-CBGs have been shown to facilitate ultra-compact systems for spatio-temporally structuring light, which has been validated by synthesizing space-time wave packets <cit.>. Despite the rich tapestry of available spectroscopic techniques, a useful degree-of-freedom has yet to be realized to the best of our knowledge; namely, multiplexing multiple spectrometers in the same volume, each operating in a different spectral range and with potentially different bandwidths. Of course, two different spectrometers could be combined, but this adds to the system volume and complexity. Providing spectral analysis simultaneously in two different ranges in the same compact device with no need for first separating the input signals has important applications in pump-probe measurements, and fluorescence and nonlinear microscopy. In many of these scenarios, two optical signals in different spectral ranges but sharing a common path are of interest. In this paper, we report on a new advance in the functionality of CBGs by realizing a dual-band spectral analysis device in which two r-CBGs are multiplexed in the same volume. One r-CBG is written at an angle 45^∘ with respect to the propagation axis and operates in one spectral range, and another r-CBG operating in a distinct spectral range is written at an angle -45^∘ (orthogonal to the first). Because the chirped Bragg structure has the form of orthogonally intersecting lines throughout the device volume, we call this new optical element an X-CBG. When two optical signals in different spectral bands are incident normally at the entrance facet, the spectrum of one signal is spatially resolved and directed normally out of one facet, while the spatially resolved spectrum of the other signal exits from a different facet. We confirm the operation of X-CBG's in visible and NIR spectral channels, and in contiguous spectral ranges in the visible. We verify that the parameters of each spectral channel (central wavelength and bandwidth) can be tuned independently by implementing different chirp profiles. This work paves the way to compact multi-channel spectral analysis in the visible and the near-infrared (NIR). We start by describing the writing process and the structure of X-CBGs in comparison to traditional CBGs and the recently introduced r-CBGs, which is illustrated in Fig. <ref>. A pair of UV laser beams (one converging and the other diverging) are combined at a prescribed relative angle to produce an interference pattern and write the target chirped Bragg structure in a photosensitive sample <cit.>. This results in a conventional CBG [Fig. <ref>(a)] in which the Bragg structure is parallel to the sample facets <cit.>. The central wavelength is changed by tuning the relative angle between the two overlapping beams, and the chirp rate is changed by varying the focal lengths of the two lenses producing the converging and diverging beams <cit.>. A normally incident field is retro-reflected after acquiring a spectral chirp and is thus stretched temporally [Fig. 
<ref>(b)] because each wavelength is reflected from a different depth within the structure <cit.>. This configuration is useful for coherent pulse amplification systems <cit.>. Such a CBG can be operated in a different modality to spatially resolve the spectrum of the incident field. A collimated, obliquely incident field is reflected with the spatially resolved spectrum <cit.> – so-called spatial dispersion <cit.>, which is useful in optical communications for signal multiplexing and de-multiplexing; see Fig. <ref>(b). As mentioned earlier, increasing the bandwidth at a fixed chirp rate requires increasing both the length L and the width W <cit.>. It is easy to see that increasing the resolved bandwidth at a fixed chirp rate requires increasing the area of the CBG <cit.>. An r-CBG can be written using the same interference pattern after rotating it by 45^∘ with respect to the sample volume [Fig. <ref>(c)]. For a normally incident field, different wavelengths reflect from different positions along the device and exit normally from a facet orthogonal to the input [Fig. <ref>(d)]. By virtue of this novel geometric configuration, increasing the bandwidth of an r-CBG at fixed chirp rate necessitates increasing only the length of the device L rather than its area. This dramatically reduces the volume required for resolving a target bandwidth with respect to a conventional CBG <cit.>. Moreover, the spectrum is resolved immediately at the r-CBG exit without need for any further propagation. This feature, in addition to the convenience of normal incidence and exit, allows for constructing ultra-compact, mechanically stable spectrometers by abutting a one-dimensional calibrated detector array to the r-CBG exit facet. An X-CBG is produced by first writing an r-CBG in the sample [Fig. <ref>(c)] and then rotating the sample by 90^∘ followed by recording a second r-CBG using a new interference pattern [Fig. <ref>(e)]. The two interferograms can be designed independently to produce chirped Bragg structures in different spectral bands and different bandwidths. Therefore, two independent but spatially overlapping Bragg structures are recorded in the same volume orthogonally to each other: one r-CBG is rotated by 45^∘ with respect to the sample axis length L, and the other r-CBG is rotated by -45^∘. When a field is incident normally on the input facet, part of the spectrum is spatially resolved by one r-CBG and is directed to exit one facet, while a different spectral range is spatially resolved by the second r-CBG and exits the opposing facet [Fig. <ref>(f)]. Because the two Bragg structures are written independently of each other, the two resolved spectral bands can be – in principle – chosen arbitrarily (within the transparency window of the sample material used). For example, one r-CBG can be designed to resolve the visible spectrum and the other to resolve the NIR. Alternatively, the two r-CBGs can be designed to resolve two contiguous spectral ranges. Both of these scenarios will be realized experimentally below. We recorded the gratings in Photo-Thermo-Refractive glass (PTR) <cit.> using a He:Cd UV laser at a wavelength 325 nm via a pair of positive and negative cylindrical lenses. The focal lengths are f=±100 cm for both outputs in X-CBG_1 [Fig. <ref> (b,c)]; f=±50 cm for output-1 of X-CBG_2 [Fig. <ref> (d)] and f=±100 cm for output-2 [Fig. <ref> (e)]; f=±25 cm for both outputs in X-CBG_3 [Fig. <ref> (f,g)]. 
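As a rough aid to intuition (not part of the fabrication procedure), the chirp written into each r-CBG translates into an approximately linear wavelength-versus-position mapping at the exit facet. The sketch below uses the ~3 nm/mm chirp rate measured later in the text for the visible channel of X-CBG_2 and the 25-mm device length; the centre wavelength is a placeholder.

```python
import numpy as np

# lambda(z) ~ lambda_c + beta * (z - z_c): linear wavelength-position mapping.
beta = 3.0            # nm/mm, measured chirp rate (visible channel of X-CBG_2)
length = 25.0         # mm, device length quoted in the text
lam_c = 550.0         # nm, placeholder centre wavelength for illustration only

z = np.linspace(0.0, length, 6)                  # positions along the exit facet
lam = lam_c + beta * (z - 0.5 * length)
print("resolved span ~", beta * length, "nm")    # ~75 nm, close to the 73 nm quoted
for zi, li in zip(z, lam):
    print(f"z = {zi:5.1f} mm  ->  lambda ~ {li:6.1f} nm")
```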
For each r-CBG, the angle between the two focused beams was varied to adjust for the target wavelength. To characterize the X-CBGs, we used two broadband sources. A supercontinuum source (SuperK EVO ERL-04, NKT Inc.) was used to characterize the visible grating, while a superluminescent diode (SLD1050S-A60) was used to characterize the NIR grating. Capturing the light field with a multimode fiber of 100-μm core diameter, we record the spectra of the visible and NIR spectra using two different spectrometers (Ocean Optics, S2000 and Thorlabs, CCS175, respectively). Scanning the fiber along the spatially resolved fields emerging from the two X-CBG spectral channels, we determined the chirp rate and the bandwidth of the two multiplexed r-CBGs. We produced here three X-CBGs whose spectra for both channels are plotted in Fig. <ref>(b-g). The lengths of all three devices are L≈25 mm, and the input facet dimensions are 12× 6 mm^2. In the first device, X-CBG_1, we introduced an r-CBG operating in the visible (bandwidth Δλ≈28 nm) and another operating in the NIR (Δλ≈39 nm). The measured chirp rates for the visible and NIR spectral channels are β≈1.8 nm/mm [Fig. <ref>(b)] and β≈1.9 nm/mm [Fig. <ref>(c)], respectively. In a second device, X-CBG_2, we maintained the central wavelengths of the visible and NIR spectral channels as in X-CBG_1 but increased the bandwidth of the visible channel (Δλ≈73 nm) by increasing the chirp rate to β≈3 nm/mm [Fig. <ref>(d,e)]. In the third device, X-CBG_3, we maintained the bandwidth of the first spectral channel but shifted the central wavelength of the NIR channel into the visible and increased the bandwidth (Δλ≈124 nm, β≈5 nm/mm). The two channels thus occupy contiguous portions of the visible spectrum [Fig. <ref>(f,g)]. To evaluate the spectral resolution of the X-CBGs, we illuminate the device with a broadband source and then scan a fiber along the spatially resolved spectrum. We carried out measurements in both spectral channels of X-CBG_2 [Fig. <ref>(d,e)]. The spectra were recorded with commercial spectrometers: Thorlabs CCS175 with a 1-nm resolution for the visible channel, and an optical spectrum analyzer (Yokogawa AQ6370D) with spectral resolution ≈20 pm in the NIR channel. We fix the fiber location and reduce the fiber core size, which results in a narrowing of the recorded spectrum. We find the spectrum width to reach a minimum value at a fiber core size of 100 μm: 1 nm for the visible channel and 0.5 nm for the NIR channel. These measurements reveal the intrinsic spectral resolution of the X-CBG in the two channels. Next, we scan the fiber to determine the spectral chirp rate along the z-axis (along which the spectra are spatially resolved). The FWHM of the spectral linewidth is 0.5 mm and 0.3 mm in the visible and NIR channels. The X-CBG is expected to lead to miniaturized spectroscopic devices. In Fig. <ref>(a,b) we depict schematically the envisioned compact dual-band spectrometer enabled by an X-CBG. The field is incident normally at the X-CBG input facet, and two prescribed spectral ranges are directed to the left and to the right normally to the exit facets. Two wavelength-appropriate linear detector arrays are placed at these facets to intercept the spatially resolved spectra [Fig. <ref>(a)]. Because the spectra are spatially resolved with no additionally required free-space propagation, the two arrays can be directly abutted to the X-CBG, resulting in an ultra-compact dual-band spectrometer system [Fig. <ref>(b)]. In Fig. 
<ref>(c,d) we demonstrate one possibility using the X-CBG from Fig. <ref>(f,g) that resolves two portions of the visible spectrum. We make use of two linear silicon CCD chips each comprising 3648 pixels (Toshiba TCD1304DG). The width of each pixel is 8 μm, so that the CCD width of ≈30 mm matches the width of the spatially resolved spectrum emerging from the X-CBG (the height of the CCD chip is 200 μm). The two CCD chips can be abutted directly to the X-CBG [Fig. <ref>(d)] and connected to the appropriate electronic circuitry. To validate the potential for such a dual-band spectrometer, we make use of the supercontinuum source and measure the spectrum directly with a commercial spectrometer (Ocean Optics, S2000). We direct a beam of diameter ∼1 mm from the source to the input facet of the X-CBG and capture the spatially resolved spectra using the CCD chips on either side of the X-CBG. We plot the measured spectra in each channel as resolved by the X-CBG alongside the reference spectra in Fig. <ref>(e) and Fig. <ref>(f). The measured and reference spectra are in good agreement except at the short-wavelength end of the spectrum, which emerges at the far end of the X-CBG as depicted schematically in Fig. <ref>(a). Note that we have not calibrated the CCD chip to account for the spectral efficiency of the X-CBG. Further work is needed to optimize the diffraction efficiency of the multiplexed r-CBGs in an X-CBG. In addition to multiplexing two orthogonal r-CBGs in the same volume, another possibility is to multiplex distinct devices along the sample axis. This would yield massive multi-functionality in an ultra-compact footprint. For example, two axially multiplexed X-CBGs can provide 4 widely different spectral channels in the same device. Furthermore, the two facets of the device – the top and bottom facets in Fig. <ref>(a,b) – can also be exploited as spectral channels. In this conception, a single device volume can provide massively parallel spectral analysis capabilities. Finally, we note that r-CBGs also provide the opportunity for polarization-sensitive spectral analysis, which we have not exploited here but will report on in more detail in a separate study. In conclusion, we have realized a novel optical device that we have called an X-CBG, which comprises multiplexed r-CBGs in the same volume. By writing multiple holograms in the same volume, the resulting X-CBG provides the possibility of dual-band spectral analysis in a compact footprint. We realized here several X-CBGs in which we vary the central wavelength and the bandwidth in both spectral channels. Such X-CBGs may pave the way to new applications in miniaturized multi-wavelength and multi-band spectral analysis in fluorescence and nonlinear microscopy, environmental sensing, and portable or handheld devices. Funding: U.S. Office of Naval Research (ONR) N00014-17-1-2458 and N00014-20-1-2789. Disclosures: The authors declare no conflicts of interest. Data availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
http://arxiv.org/abs/2307.04239v1
20230709175720
First-order Phase Transition interpretation of PTA signal produces solar-mass Black Holes
[ "Yann Gouttenoire" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "gr-qc" ]
[email protected] School of Physics and Astronomy, Tel-Aviv University, Tel-Aviv 69978, Israel We perform a Bayesian analysis of NANOGrav 15yr and IPTA DR2 pulsar timing residuals and show that the recently detected stochastic gravitational-wave background (SGWB) is compatible with a SGWB produced by bubble dynamics during a cosmological first-order phase transition. The timing data suggests that the phase transition would occur around QCD confinement temperature and would have a slow rate of completion. This scenario can naturally lead to the abundant production of primordial black holes (PBHs) with solar masses. These PBHs can potentially be detected by current and advanced gravitational wave detectors LIGO-Virgo-Kagra, Einstein Telescope, Cosmic Explorer, by astrometry with GAIA and by 21-cm survey. First-order Phase Transition interpretation of PTA signal produces solar-mass Black Holes Yann Gouttenoire August 12, 2023 ========================================================================================= § INTRODUCTION By measuring cross-correlations in the arrival times of pulses emitted by rotating neutron stars, Pulsar Timing Arrays (PTAs) have been established as a mean to detect nano-Hertz (nHz) frequency Gravitational Waves (GW). In 2020, a common low-frequency noise has been identified in the datasets of the NANOGrav <cit.>, EPTA <cit.>, PPTA <cit.> and IPTA <cit.> which combines data from the former and therefore provides the largest data release to date. To distinguish a GW origin from systematic effects requires timing delay correlations to have a quadrupolar dependence on the angular separation between pulsars <cit.>. In June 2023, upon analysing their most recent data, the NANOGrav and the EPTA collaboration (NG15 and EPTA DR2) have found statistical evidences for such interpulsar correlations <cit.>, with Bayes factors of 600 and 60 respectively. The primary expected source of GWs at low frequencies is believed to be from supermassive black holes binaries (SMBH) <cit.>. The stochastic GW background (SGWB) inferred from PTA data corresponds to the upper limit of the astrophysical predicted interval, see Fig. <ref>. Recent studies suggest the possibility of SMBH binaries being slightly more massive and more numerous than initially anticipated <cit.>. Alternatively, the PTA SGWB might originate from new physics taking place in the early universe <cit.>. The last hypothesis however comes with its own set of challenges. For instance, ascribing the SGWB to inflation necessitates unnaturally large values for the spectral tilt n_t ≃ 1.8 and a low reheating temperature T_ reh≲ 10  GeV <cit.>. GW induced by a Gaussian spectrum of curvature perturbation would results in excessive PBHs production <cit.>, same for a SGWB produced from domain wall annihilation <cit.>. A SGWB resulting from PBH mergers would not align with structure formation <cit.>. A cosmic strings network, when arising from a global symmetry is excluded by Big-Bang Nucleosynthesis (BBN) <cit.>, while when arising from a local symmetry is not favoured by the Bayesian analysis <cit.>. To evade BBN bound, a first-order phase transition (1stOPT) sourcing PTA signal would necessitate the latent heat to be released dominantly to the Standard Model, e.g. <cit.> Interestingly however, the 1stOPT interpretation of PTA SGWB requires a reheating temperature around the scale of QCD confinement 100 MeV, with a rather low completion rate β/H ∼ 10 and a large latent heat fraction α≳ 1 <cit.>. 
This overlaps with the region where 1stOPT have been recently found to produce PBH in observable amount <cit.>. The PBH prior has been omitted in all previous analysis of the 1stOPT interpretation of PTA data <cit.>. In this letter, we perform a Bayesian search for SGWB from 1stOPT in NANOGrav 15-year (NG15) and IPTA DR2 (IPTA2) timing residuals, including both BBN-N_ eff-bound and PBH-overproduction constraints as priors in the analysis. To simplify the numerical strategy, we focus on the region α≫ 1 of strong supercooling where PBH production is the most efficient.[The Bayesian analysis of 1stOPT with finite α will be presented elsewhere.] We argue that the SGWB from 1stOPT is given by the bulk flow model independently of whether the latent heat is still stored in bubble walls at percolation or has been released to the plasma before. We find that PBH formation does not exclude the 1stOPT interpretation of PTA signal. Instead, a SGWB from supercooled PT is favoured with respect to the SMBH binary hypothesis by a Bayes factor of 15 in NG15 data set. We point for the first time, the existence of a multi-messenger window: the NG15 posterior contains a region producing [10-100] solar masses PBHs, see Fig. <ref>. The merging of such PBHs would source GWs with kHz frequencies in the range of LIGO-Virgo <cit.>, and ET/CE <cit.>. Additionally, their presence could be detected from lensing in GAIA <cit.> or from heating in 21-cm survey <cit.>. We also consider the negative hypothesis in which the SGWB observed in PTA would not result from a supercooled PT and derive lower limits on the rate of completion β/H ≳ [10-20], implying that the universe could not have boiled longer than [5%-10%] of a Hubble time during the QCD phase transition. § GRAVITATIONAL WAVES FROM FIRST-ORDER PT PT parameters — The strength of a 1stOPT is characterized by the ratio of its latent heat Δ V, defined as the vacuum energy difference between the two minima of the potential driving the transition, to the radiation energy density ρ_ rad(T_n) at the nucleation temperature T_n α≡Δ V/ρ_ rad(T_n)≡( T_ eq/T_ n)^4. In this work, we assume α≫ 1, in which case the universe enters a stage of vacuum-domination at temperature T_ eq which ends at T_n when bubble growth converts the latent heat into radiation energy density. The rate at which nucleation takes place is controlled by the time derivative of the tunneling rate per unit of volume Γ_V β≡1/Γ_VdΓ_V/dt. After the phase transition completes, the universe is reheated back to the temperature T_ eq up to changes in number of degrees of freedom which we neglect. Energy budget — The dynamics of weak phase transition α < 1 is rather well understood <cit.>. The non-relativistic motion of bubble walls, γ_w≃ 1, converts the latent heat into thermal and kinetic energy of the plasma, which propagate under the form of long-lasting sound waves <cit.>, and ultimately turn into turbulence <cit.>. GWs sourced by sound waves have been intensively simulated on the lattice in the recent years <cit.>, and analytical modelling have been proposed <cit.>. The dynamics of supercooled phase transition α >1 is more complex due to the large Lorentz factor γ_w≫ 1 of bubble walls <cit.>. In the relativistic limit, the acceleration of bubble walls with tension σ is set by the pressure balance <cit.> dγ_w/dt = Δ V-𝒫_ fric/σ. 
The friction pressure 𝒫_fric is dominantly induced by transition radiation <cit.>, which, resummed at leading logs, reads <cit.> 𝒫_fric = c_0 g_D^3 γ_w v_ϕ T_n^3 log(v_ϕ/T_n), with c_0 = 𝒪(1), where g_D is a gauge coupling and v_ϕ is the vev of the scalar field driving the phase transition. As bubble walls accelerate, the retarding pressure 𝒫_fric grows linearly with γ_w. Scalar field gradient — It is necessary to distinguish two scenarios according to whether the retarding pressure stops the walls from accelerating before collision (𝒫_fric = Δ V) or not (𝒫_fric ≪ Δ V) <cit.>. In the latter case, bubble walls run away (γ_w ↗), and the latent heat is dominantly kept in the form of bubble-wall kinetic energy, which is then the main source of GWs. This occurs for very large supercooling, T_n/T_eq ≲ 5.3 × 10^-5 [ (v_ϕ / 1 GeV) ((β/H)/10) (0.45/g_D) ]^{1/4}. GWs from scalar field gradients were first computed in the so-called “envelope” approximation, where walls are infinitely thin and collided parts are neglected <cit.>. Later, collided parts were added to the computation in the so-called “bulk flow" model at the analytical <cit.> and numerical level <cit.>. It was found that the long-lasting propagation of the infinitely thin shells produces an IR enhancement of the GW spectrum, Ω_PT ∝ f^1 instead of Ω_PT ∝ f^3. For relativistic wall velocities, the bulk flow model predicts <cit.> Ω_PT h^2 ≃ 10^-6 (g_*/100)^{-1/3} (H_*/β)^2 [α/(1+α)]^2 S_PT(f) S_H(f), with the spectral shape S_PT(f) peaked at f_PT, S_PT(f) = 3 (f/f_PT)^{0.9} / [2.1 + 0.9 (f/f_PT)^3], f_PT = (a_*/a_0) 0.8 (β/2π), and the redshift factor between percolation “*” and today “0”, a_*/a_0 = 1.65 × 10^-2 mHz (T_eq/100 GeV) (g_eff,reh/100)^{1/6} H_*^{-1}. We added the correction factor S_H(f) = (f/f_*)^{2.1} / [1 + (f/f_*)^{2.1}], with f_* = c_* (a_*/a_0)(H_*/2π) and c_* = 𝒪(1), to impose an f^3 scaling for emitted frequencies smaller than the Hubble factor H_*/(2π), as required by causality <cit.>. We fix c_* = 1 and leave a more precise determination of c_* for future studies. Plasma dynamics — If Eq. (<ref>) is not satisfied, bubble walls reach a constant Lorentz factor (γ̇_w = 0), and the latent heat of the phase transition is dominantly transferred to the plasma, which is then the main source of GWs. Friction-dominated bubble-wall motion is expected to generate extremely thin and relativistic fluid configurations, which become long-lasting shock waves after bubble collisions <cit.>. The large hierarchy between the bubble radius and the thickness of the shock front is a major challenge for numerical treatment. However, from a gravitational viewpoint, an extremely peaked momentum distribution carried by the plasma should be indistinguishable from an extremely peaked momentum distribution carried by the scalar field. Hence we expect the GW signal in both situations to be similar. A second difficulty in modelling the plasma dynamics is the possibility for bubble walls to be followed by relativistic shells of free-streaming particles <cit.>, breaking down the fluid description. A recent study in the moderately relativistic regime γ_w ≲ 10 <cit.> suggests that the GW spectrum again resembles the one predicted in the bulk flow model. For these two reasons, in the present work we assume the GW signal to be given by the bulk flow model in Eq. (<ref>) in the whole strongly supercooled regime T_n ≪ T_eq, independently of whether Eq. (<ref>) is satisfied or not.[We thank Ryusuke Jinno for fruitful discussions regarding this point.]
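For reference, the spectral shape just defined is straightforward to evaluate; the sketch below implements Ω_PT h^2, S_PT and S_H exactly as written above (with c_* = 1 and α ≫ 1), using illustrative parameter values only.

```python
import numpy as np

def omega_pt_h2(f, T_eq_GeV=0.1, beta_over_H=10.0, g_star=20.0, alpha=1e3):
    # (a_*/a_0) H_* redshifted to today, in Hz (1.65e-2 mHz = 1.65e-5 Hz).
    aH = 1.65e-5 * (T_eq_GeV / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)
    f_pt = 0.8 * beta_over_H * aH / (2.0 * np.pi)   # peak of S_PT
    f_st = aH / (2.0 * np.pi)                       # causality break, c_* = 1
    s_pt = 3.0 * (f / f_pt) ** 0.9 / (2.1 + 0.9 * (f / f_pt) ** 3)
    s_h = (f / f_st) ** 2.1 / (1.0 + (f / f_st) ** 2.1)
    return (1e-6 / (g_star / 100.0) ** (1.0 / 3.0) / beta_over_H ** 2
            * (alpha / (1.0 + alpha)) ** 2 * s_pt * s_h)

f = np.logspace(-9.5, -7.0, 200)   # Hz, roughly the PTA band
spectrum = omega_pt_h2(f)
```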
§ PTA DATA ANALYSIS Numerical strategy — We searched for GW from 1stOPT in two open-access datasets, NG15 <cit.> and IPTA2 <cit.>. The released data are presented in terms of the timing-residual cross-power spectral density S_ab(f)≡Γ_ab h^2_c(f)/(12π^2)f^-3, where h_c(f)≃ 1.26· 10^-18(Hz/f)√(h^2Ω_GW(f)) signifies the characteristic strain spectrum <cit.> and Γ_ab denotes the Overlap Reduction Function (ORF) between pulsars 'a' and 'b' within a given PTA <cit.>. We used the software packages enterprise <cit.> and enterprise_extensions <cit.> to compute the likelihood of observing given timing residuals assuming the presence of the SGWB from 1stOPT given in Eq. (<ref>). We used PTMCMC <cit.> to generate the posterior distribution. For IPTA2, we marginalized over white, red and dispersion measure noises as prescribed in <cit.>. For NG15, we instead used the handy wrapper PTArcade <cit.> with “enterprise” mode in which marginalization over noise parameters is automatized. We used GetDist <cit.> tool to plot the results. To circumvent pulsar-intrinsic excess noise at high frequencies, the SGWB search was confined to the lowest 14 and 13 frequency bins of the NG15 and IPTA2 datasets, respectively. We included the BBN constraints assuming that the 1stOPT sector reheates dominantly into Standard Model degrees of freedom and, when specified, the one from PBH overproduction discussed in Sec. <ref>, to infer the prior distribution of 1stOPT parameters. Detailed information regarding data analysis and prior choices can be found in App.<ref>. Supercooled PT — We conducted searches for GW from strong 1stOPT (α≫ 1) in isolation, GW from SMBH binaries individually, as well as a combined analysis of 1stOPT and SMBH binaries. In Fig. <ref>, we show the GW spectra with parameters set to their mean posterior values given in Tab. <ref>. The 68% and 95% confidence contours are depicted in Fig. <ref>-left. The posterior for the combined analysis of 1stOPT and SMBH is reported to Figs. <ref> and <ref> in the appendix. We assumed a flat prior on the strain amplitude of the SGWB from SMBH binaries. To quantify the evidence provided by the observed PTA data, denoted as 𝒟, in favor of one model, say X, versus another, say Y, we employ the Bayesian factor BF_Y,X≡𝒫(𝒟|Y) / 𝒫(𝒟/X), which we compute using the product-space sampling method <cit.> implemented in enterprise_extensions <cit.>. Here, 𝒫(𝒟/X) is the likelihood probability of observing data D given the model X. The outcomes of the Bayesian model comparison presented in Tab. <ref>, according to Jeffrey's scale <cit.>, suggests that NG15 data `substantially' favours the presence of a GW signal from 1stOPT aside to the one from SMBHB. Instead, IPTA2 data remains inconclusive. Exclusion bounds — Under the assumption that the PTA signal is due to SMBHB or any sources other than 1stOPT, we have derived upper limits on the GW signal emanating from 1stOPT. As depicted in Fig. <ref>-right, these limits correspond to lower bounds on the rate of completion, going up to β/H ≲ 20. § PRIMORDIAL BLACK HOLES Supercooled late-blooming mechanism — In <cit.>, it was demonstrated that PBHs could be produced in observable amount during supercooled PT through a process termed “late-blooming”. During 1stOPT, the nucleation sites of bubbles are randomly dispersed across the entire volume of the false vacuum. As the universe gets close to the point of percolation, there remains a non-zero probability of identifying Hubble-sized regions where nucleation has not yet initiated. 
Throughout the supercooled PT, these delayed regions maintain a constant vacuum energy, while the energy density in their vicinity redshifts like radiation. Upon completion of percolation, these “late-bloomers” evolve into over-dense regions. If these regions are Hubble-sized and exceed a certain density threshold, δρ/ρ ≳ 0.45, they collapse into PBHs. We direct the reader to <cit.> for the precise analytical formula to estimate the abundance and mass of those PBHs.[Some other works <cit.> find a different PBH abundance. Refs. <cit.> find a lower PBH abundance because their formalism restricts the collapsing patch to remain 100% vacuum dominated until collapse. Ref. <cit.> finds a larger abundance because nucleation is not accounted for in the entire past light-cone of a collapsing patch. Instead, Ref. <cit.> accounts for nucleation taking place not only in the whole past light-cone but also in the collapsing patch itself, as long as the critical overdensity is reached.] We included the PBH overproduction constraints as a prior in the Bayesian analysis. The Bayes factors shown in Tab. <ref> are unaffected for IPTA2 and only decrease from 24 to 15 for NG15. We have plotted the contour lines representing the PBH fraction of dark matter f_ PBH in Fig. <ref> and the PBH mass in Fig. <ref>. In addition, we overlay cosmological and astrophysical constraints on this population of PBHs. Excluded regions and detection prospects — With solid lines, we show current constraints. In yellow, we have the exclusion regions arising from distortions of the Cosmic Microwave Background (CMB) caused by X-rays from accretion, which modify the ionization history between recombination and reionization <cit.>. In purple, we show the constraints from the search for photometric magnification (strong lensing) of stars in the Magellanic Clouds conducted on EROS data <cit.>. The solid cyan-colored region represents constraints derived from the data collected by the LIGO/Virgo interferometers <cit.>. With dashed lines, we show future prospects. In green, we have the reach of 21-cm surveys due to heating and ionization of the intergalactic medium via X-rays produced during accretion <cit.>. In red, we have the forecast from the search for transient astrometric deviations (weak lensing) of single or multiple stars in GAIA time-series data <cit.>. Finally, in dashed cyan we show the prospect for detecting GWs from PBH binaries with the Einstein Telescope and Cosmic Explorer <cit.>.
We checked that, in contrast to the scalar-induced <cit.> and domain-wall <cit.> interpretations of the PTA signal, the 1stOPT interpretation does not fall into the PBH graveyard of PTA interpretations. The Bayes factor of the strong 1stOPT interpretation with respect to the SMBH binary one is only reduced from 24 to 15 in NG15 after including the PBH prior, while it is not affected in IPTA2. We further assessed the potential for detecting these PBHs using different observational techniques, including 21-cm cosmological hydrogen line observations, astrometry with the GAIA mission, and next-generation kilohertz-frequency GW interferometers such as the Einstein Telescope (ET) and Cosmic Explorer (CE). In the event that an astrophysical explanation becomes definitive, we established 68% and 95% exclusion constraints on the parameter space of the 1stOPT, reaching up to β/H ≳ 20. Under these conditions, such a bound would effectively preclude any possibility of detecting PBHs from supercooled PTs within the mass range [1 M_⊙, 10^3 M_⊙]. We must emphasize that our current comprehension of the GW spectrum resulting from supercooled phase transitions is still in its early stages. Our assumptions are founded on the bulk flow model, in which GWs are sourced by the expansion of an infinitely thin distribution of the stress-energy-momentum tensor. Future investigations are needed to probe potential modifications of the GW spectrum that could be induced by non-linear effects, such as those arising from relativistic shock waves, or by deviations from a fluid description. Notwithstanding these limitations, the concept of employing multi-messenger observations of GWs at nHz and kHz frequencies to investigate supercooled phase transitions occurring around the QCD epoch remains an approach consistent with the overarching goal of exploring the cosmos using all available messengers and signals. Acknowledgements.— The author is grateful to Iason Baldes, Ryusuke Jinno, Marius Kongsore, Fabrizio Rompineve, Miguel Vanvlasselaer and Tomer Volansky for fruitful discussions and to the Azrieli Foundation for the award of an Azrieli Fellowship. § DATA ANALYSIS The purpose of this Appendix is to delineate the Bayesian search methodology employed in our study. We rely on the NG15 dataset <cit.> and on Version B of the IPTA2 dataset <cit.>. To ascertain the noise parameters of IPTA2, we closely follow the approach adopted by the IPTA collaboration <cit.>, see also <cit.>. We then checked that we obtained consistent results with the software PTArcade <cit.>, in which noise marginalization has been automatised, see Fig. <ref>-left. Instead, the Bayesian analysis of NG15 was done solely with the “enterprise” mode of PTArcade <cit.>. We perform the search for a SGWB in the first 13 and 14 frequency bins of IPTA2 and NG15, respectively. We now describe the Bayesian analysis of IPTA2, which we performed ourselves without the use of PTArcade <cit.>. We adapted the software packages enterprise <cit.> and enterprise_extensions <cit.> to incorporate GW spectra from 1stOPT in terms of the power spectrum of the timing residuals, and used them to compute the likelihood function, symbolized as 𝒫(𝒟|θ). This function encapsulates the probability of observing the data 𝒟 given a specific set of model parameters θ.
The posterior distribution, 𝒫(θ|𝒟), which illustrates the probability distribution of model parameters θ given the observed data 𝒟, is linked to the likelihood function via Bayes's theorem 𝒫(θ|𝒟) = 𝒫(𝒟|θ)𝒫(θ)/𝒫(𝒟). Within this equation, 𝒫(θ) is the prior distribution, representing preliminary knowledge of the parameters prior to data observation, while 𝒫(𝒟) is the marginal likelihood or evidence, functioning as a normalization constant to ensure that the posterior distribution integrates to 1. The parallel-tempering Markov Chain Monte-Carlo sampler PTMCMC <cit.> was employed to reconstruct the posterior distribution 𝒫(θ|𝒟) using an enhanced version of the Metropolis-Hastings algorithm <cit.>. The GetDist tool <cit.> was subsequently used to plot the posterior distributions and upper limits. The pulsar noise parameters employed in the likelihood function can be classified into three distinct categories: white noise, red noise, and dispersion measures (DM). The white noise parameters are grouped into three sets for each backend/receiver associated with a given pulsar: EFAC (E_k), EQUAD (Q_k[s]), and ECORR (J_k[s]). The values of the white noise parameters are fixed to the mean posterior values obtained by performing single pulsar analysis devoid of GW signals. We only kept pulsars with more than 3 years of observation time which corresponds to 53 pulsars. Instead the Bayesian analysis of NG15 data performed via PTArcade contains 68 pulsars with more than 3 years of observation. We employ the Jet Propulsion Laboratory Development Ephemeris DE438 and the Terrestrial Time reference timescale of the International Bureau of Weights and Measures BIPM18. Next, for the multi-pulsar analysis incorporating the GW signals, we account for two power-law red noise parameters per pulsar, specifically the amplitude at the reference frequency of yr^-1 denoted as A_red, and the spectral index denoted as γ_red. Additionally, we incorporate power-law errors associated with dispersion measures (DM). We note that the treatment of DM noise as a Gaussian process is specific of IPTADR2 dataset. Instead, in the analysis of NG15 data performed via PTArcade, but also in the analysis of NANOGrav 12.5-year (NG12) done in <cit.>, pulse dispersion is modelled by a set of “per- epoch” parameters describing the DM offset from a nominal fixed value <cit.>. These can add dozens of additional parameters per pulsar <cit.>. In the individual pulsar analysis of PSR J1713+0747 (in IPTA2 but also in NG15), we extend our consideration to encompass a DM exponential dip parameter, following the methodology described in <cit.>. The priors for the noise parameters are reported in Tab. <ref>, along with the priors for the parameters for the GW spectra from 1stOPT and SMBH binaries. To economize on computational time, we adopt the methodology of previous studies <cit.> and in our search for a GW background we utilize only auto-correlation terms I=J in the Overlap Reduction Function (ORF) Γ_IJ, rather than the complete Hellings-Downs ORF with I≠ J. We acquire 10^6 samples per analysis presented in this study and discard 25% of each chain as burn-in. We could replicate the posteriors of <cit.> and <cit.> for a power-law model with excellent concurrence. The violin features shown in Figs. <ref> and <ref> are obtained with the free-spectrum approach described in <cit.>. We do not repeat this analysis and instead take the data directly from https://zenodo.org/record/8060824NG15 and https://zenodo.org/record/5787557IPTA2. 
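For completeness, the full Hellings-Downs overlap reduction function mentioned above (for distinct pulsars a ≠ b) has the standard closed form sketched below; this is a reminder illustration, not code from our pipeline.

```python
import numpy as np

# Hellings-Downs correlation for two distinct pulsars separated by an angle zeta.
def hellings_downs(zeta_rad):
    x = 0.5 * (1.0 - np.cos(zeta_rad))
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

zeta = np.linspace(0.01, np.pi, 500)   # start slightly above 0 to avoid x*log(x) = nan
gamma = hellings_downs(zeta)
print(f"most negative correlation {gamma.min():.2f} near "
      f"{np.degrees(zeta[np.argmin(gamma)]):.0f} deg")
```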
Our study encompasses two types of analyses. The first, a detection analysis, identifies the region of parameter space in which GWs from a 1stOPT can account for the common-spectrum process in the datasets. Here, we use a uniform prior on the logarithm of each parameter and adopt a prior on β/H due to the BBN bound and – when mentioned – PBH overproduction. The second, a lower-limit analysis, seeks to constrain the rate of completion of the phase transition, β/H. There, we use a uniform prior on H/β instead of log_10(β/H), as described in <cit.>. All prior choices are given in Tab. <ref>. BBN prior. — As a sub-component of the total energy density of the universe, the latent heat Δ V can impact the expansion rate of the universe, which is strongly constrained by BBN and the CMB. Its effect can be encoded in the effective number of extra neutrino relics, N_eff = (8/7) [(ρ_tot − ρ_γ)/ρ_γ] (11/4)^{4/3}, where ρ_γ is the photon energy density. The total effective number N_eff is constrained by CMB measurements <cit.> to N_eff = 2.99^{+0.34}_{-0.33} and by BBN predictions <cit.> to N_eff = 2.90^{+0.22}_{-0.22}, whereas the SM prediction <cit.> is N_eff ≃ 3.045. The latent heat parameter of a generic 1stOPT reads α = ρ_DW(T) / [(π^2/30) g_*(T) T^4], where T is the photon temperature and g_*(T) contains eventual dark degrees of freedom. The maximal contribution to N_eff occurs at reheating after percolation, Δ N_eff(T) = 2.20 g_*(T) α(T). The BBN bound Δ N_eff ≲ 0.3 <cit.> applies after neutrinos decouple below the temperature T_dec, where g_*(T < T_dec) ≡ 2 + (7/8) · 6 · (4/11)^{4/3} ≃ 3.36. We obtain Δ N_eff = 7.4 α ≲ 0.3. Two scenarios must be distinguished. The first one is when reheating after percolation occurs in a dark sector, in which case Eq. <ref> is the BBN constraint. The second one is when reheating after the 1stOPT occurs into the Standard Model, in which case Eq. <ref> applies only if the reheating temperature is below the neutrino decoupling temperature, T_reh ≲ 1 MeV. The latter case is the scenario we consider in this work. PBH prior. — The condition of not producing PBHs with an energy density larger than that of the observed dark matter, f_PBH < 1, implies a lower bound on the rate of completion of a 1stOPT <cit.>, β/H ≳ [5.54 + 0.232 log_10(T_reh/GeV) − 0.00512 log_10^2(T_reh/GeV)] × [1 − 0.0695 ln(1 + 908.1/α^3.204)], where we have introduced an analytical function fitted to the numerical results of <cit.>. When specified, we include the constraint in Eq. (<ref>) as prior information on β/H and T_eq. Due to the exponential dependence of the PBH abundance on β/H, the precise PBH constraints from astrophysical and cosmological observations, as shown in e.g. Fig. <ref>, make little difference with respect to the simple criterion f_PBH < 1. § COMBINED GW FROM 1STOPT AND SMBH BINARIES The squared characteristic strain spectrum of a population of circular SMBHBs n(z, ℳ) can be written as <cit.> h_c^2(f) = [4 G^{5/3} / (3 π^{1/3} f^{4/3})] ∫ dℳ ∫ dz [ℳ^{5/3} / (1+z)^{1/3}] d^2n/(dz dℳ), where ℳ = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5} is the chirp mass and 1/(1+z) accounts for the cosmological redshifting of the GW energy. The strain spectrum can be expressed as a red-tilted power law, h_c(f) = A_SMBH (f / 1 yr^-1)^{-2/3}, where A_SMBH is the strain amplitude at f = 1 yr^-1 ≃ 3.2 × 10^-8 Hz. In terms of the fractional energy density, it corresponds to the blue-tilted power law Ω_SMBH(f) = [2π^2/(3 H_0^2)] f^2 h_c^2(f) ∝ f^{2/3}. We conduct a search for combined GWs from both supercooled 1stOPT and SMBH binaries.
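As an illustration of the conversion just given, the power-law strain can be mapped onto Ω_GW(f) numerically; the amplitude below is a placeholder value, and h ≃ 0.674 is assumed for the Hubble constant.

```python
import numpy as np

H0 = 67.4e3 / 3.086e22                # Hubble constant in 1/s (h ~ 0.674 assumed)
f_yr = 1.0 / (365.25 * 24 * 3600.0)   # 1 yr^-1 in Hz

def omega_smbh(f, A=2.0e-15):         # A: placeholder strain amplitude at f = 1/yr
    h_c = A * (f / f_yr) ** (-2.0 / 3.0)
    return (2.0 * np.pi ** 2 / (3.0 * H0 ** 2)) * f ** 2 * h_c ** 2  # ~ f^(2/3)

f = np.logspace(-9, -7, 100)          # Hz
omega = omega_smbh(f)
```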
We present the posterior distribution of model parameters (A_SMBH, T_reh, β/H) in Fig. <ref>. We included BBN and PBH constraints in the prior distribution of 1stOPT parameters. The mean posterior values of the parameters are reported in Tab. <ref> and the associated GW spectra are plotted in Fig. <ref>.
http://arxiv.org/abs/2307.04713v1
20230710172234
Sphaleron in the Higgs Triplet Model
[ "Jiahang Hu", "Bingrong Yu", "Shun Zhou" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "hep-th" ]
footnote Sphaleron in the Higgs Triplet Model Jiahang Hu ^a [E-mail: [email protected]], Bingrong Yu ^a, b [E-mail: [email protected] (corresponding author)], Shun Zhou ^a, b [E-mail: [email protected] (corresponding author)] ^a School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China ^b Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China The Higgs triplet model (HTM) extends the Standard Model (SM) by one complex triplet scalar (also known as the type-II seesaw model), offering a simple and viable way to account for nonzero neutrino masses. On the other hand, the nontrivial couplings of the triplet to the gauge fields and to the SM Higgs field are expected to influence the topological vacuum structure of the SM, and consequently, the energy and the field configuration of the electroweak sphaleron. The sphaleron process plays a crucial role in dynamically generating the baryon asymmetry of the Universe. In this work, we study the vacuum structure of the gauge and Higgs fields and calculate the saddle-point sphaleron configuration in the HTM. The coupled nonlinear equations of motion of the sphaleron are solved using the spectral method. We find the inclusion of the triplet scalar could in principle significantly change the sphaleron energy compared with the SM. Nevertheless, at zero temperature, the current stringent experimental constraint on the vacuum expectation value of the triplet suppresses the difference. Interestingly, we find that there still exists some narrow parameter space where the sphaleron energy can be enhanced up to 30% compared with the SM case. footnote § INTRODUCTION Despite its great success, the Standard Model (SM) of particle physics is unable to accommodate nonzero neutrino masses, which has been firmly established by the neutrino oscillation experiments during the last two decades <cit.> (see, e.g., Ref. <cit.> for a recent theoretical review). Another important unsolved problem in the SM is the observed baryon asymmetry of the Universe <cit.>. Given the 125 GeV Higgs boson discovered at the Large Hadron Collider <cit.>, the SM cannot provide a successful electroweak (EW) baryogenesis since the EW phase transition in the SM is a smooth cross-over <cit.>, failing to depart from thermal equilibrium <cit.>. Therefore, the SM should be incomplete, and new physics beyond the SM is indispensable. The extension of the SM by adding one triplet scalar with hypercharge Y=-1, dubbed the Higgs Triplet Model (HTM), offers an economical way to explain the tiny neutrino masses through the type-II seesaw mechanism <cit.>. On the other hand, following the idea of thermal leptogenesis <cit.>, the out-of-equilibrium decays of the heavy triplets in the early Universe generate the lepton number asymmetry <cit.>,[In order to generate CP violation, at least two triplet scalars are needed. Alternatively, one can also introduce one triplet scalar and one additional heavy Majorana neutrino, which is able to accommodate both the neutrino mass spectrum and the observed baryon asymmetry <cit.>. Recently, it was pointed out that the inclusion of only one triplet scalar could fulfill successful leptogenesis through the Affleck-Dine mechanism <cit.> while the triplet could also play a role in inflation <cit.>.] which can partly be converted to the baryon number asymmetry via the sphaleron process <cit.>. 
In addition, the triplet scalar modifies the scalar potential of the SM and thus may change the pattern of the EW phase transition. Recently, it was found that there exists viable parameter space for a strong first-order EW phase transition in the HTM, and the spectrum of the produced gravitational waves was calculated <cit.>. Nevertheless, it is still unclear whether or not a successful EW baryogenesis could be fulfilled in the framework of the HTM. To achieve this goal, a necessary step is to calculate the sphaleron configuration in the presence of a triplet scalar, which is the main purpose of the present work. The sphaleron process plays a crucial role in dynamically generating the cosmological matter-antimatter asymmetry <cit.>. It is well known that the vacuum structure of non-Abelian gauge theories is nontrivial and the topologically distinct vacua are characterized by the Chern-Simons numbers <cit.>, which can be directly related to the baryon (B) and lepton (L) numbers. Due to the chiral anomaly <cit.>, B and L are not conserved in the SM. The transition between two topologically distinct vacua changes the Chern-Simons number and hence B and L (but with B-L conserved). The energy barrier between different vacua is characterized by the sphaleron energy E^_ sph. At zero temperature, we have E^_ sph∼ 4π v/g ∼ 5  TeV, where v≈ 246  GeV is the EW vacuum expectation value (VEV) and g≈ 0.65 is the SU(2)_ L^ gauge coupling. Therefore, the B-violating sphaleron rate is highly suppressed at low temperatures: Γ^_ sph∼ exp(-E^_ sph/T) <cit.>. At temperatures above the EW scale, the VEV becomes zero and the energy barrier vanishes. In this case, the B-violating rate is no longer suppressed[Strictly speaking, there is no classical sphaleron solution above the critical temperature T^_c of the EW phase transition. This is because the temperature-dependent VEV v(T) turns out to be zero at T>T^_c and the classical configuration scale 1/v(T) goes infinity. However, the B-violating process is still significant above T^_c and the temperature provides a typical scale (α^_ W T)^-1 for the sphaleron-like configuration <cit.>.] and is given by Γ^_ sph∼α_ W^5 T^4 with α^_ W≡ g^2/(4π) <cit.>. On the other hand, from the view of the classical field theory, the sphaleron configuration is the saddle-point solution of the energy functional <cit.>. The sphaleron energy in the SM is mainly contributed by the Higgs and the gauge bosons. However, in the HTM, the triplet scalar has additional couplings to the gauge fields and to the SM Higgs field, hence is expected to influence the vacuum structure and the sphaleron configuration. As has been discussed above, the sphaleron energy plays an important role in both EW baryogenesis and leptogenesis. Therefore, it is necessary to recalculate the sphaleron configuration in the presence of a triplet scalar in order to realize a self-consistent baryogenesis in the framework of the HTM. The remaining part of this paper is organized as follows. In Sec. <ref>, we briefly review the minimax procedure to find the sphaleron solution and set up our formalism. In Sec. <ref> and Sec. <ref>, we calculate the sphaleron configuration in the HTM, where a minimal version of the potential and a full potential is adopted, respectively. Our main conclusion is summarized in Sec. <ref>, together with some further discussions. Finally, the numerical techniques to solve the equations of motion (EOM) of the sphaleron are provided in appendices. 
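Before moving on, it is instructive to put numbers to the scales quoted above. A minimal Python sketch (the temperatures below are illustrative choices, not values used in this paper) evaluates the zero-temperature sphaleron energy scale 4πv/g and the Boltzmann suppression of the transition rate:

import numpy as np

g = 0.65                    # SU(2)_L gauge coupling
v = 246.0                   # electroweak VEV in GeV

E_sph = 4 * np.pi * v / g   # zero-temperature sphaleron energy scale
print(f"E_sph ~ 4*pi*v/g = {E_sph / 1e3:.2f} TeV")

# Boltzmann suppression exp(-E_sph/T) of the rate at low temperatures
for T in (50.0, 100.0, 160.0):   # GeV, illustrative values only
    print(f"T = {T:5.1f} GeV  ->  exp(-E_sph/T) ~ {np.exp(-E_sph / T):.3e}")

# above the EW scale the rate is instead ~ alpha_W^5 T^4 (unsuppressed)
alpha_W = g**2 / (4 * np.pi)
print(f"Gamma_sph(T = 200 GeV) ~ {alpha_W**5 * 200.0**4:.3e} GeV^4")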
§ THEORETICAL SETUP AND SPHALERON ANSATZ In this section, we set up the general formalism to calculate the sphaleron configuration in the SM extended by a complex triplet scalar. We make the following two reasonable assumptions: * The contribution from fermion fields to the sphaleron is neglected. * The finite Weinberg angle has little influence on the sphaleron (e.g., less than 1% correction to the sphaleron energy) <cit.>. Therefore, we can safely neglect the mixing between SU(2)_ L and U(1)_ Y gauge bosons such that the sphaleron configuration is spherically symmetric. Under the above assumptions, the Lagrangian in the HTM is given by L_ HTM=-1/2(F^_μνF^μν_)+(D^_μϕ)^†_(D^μ_ϕ)+1/2[(D^μ_Δ)^†_(D^_μΔ)]-V(ϕ,Δ) . The field strength in Eq. (<ref>) is defined as F^_μν=∂^_μ W^_ν-∂^_ν W^_μ- ig[W^_μ,W^_ν], where W^_μ≡ W_μ^a σ^a_/2 with W_μ^a the SU(2)^_ L gauge fields and σ^a_ (for a=1,2,3) the Pauli matrices. In addition, D^_μ is the covariant derivative, ϕ is the SM Higgs doublet, and Δ is the triplet scalar with hypercharge Y=-1 and transforms according to the adjoint representation of the SU(2)^_ L group ϕ=( ϕ^+_ ϕ^0_) , Δ=( Δ^-_ -√(2)Δ^0_ √(2)Δ^–_ -Δ^-_) . The VEVs of the scalar fields, namely ⟨ϕ⟩=v^_ϕ/√(2) and ⟨Δ⟩= -v^_Δ, are determined by minimizing the scalar potential V(ϕ,Δ), and satisfy √(v_ϕ^2+2v_Δ^2)=v≈ 246  GeV. We will discuss it in more detail later. For the calculation of the sphaleron, since we are only focusing on the static field configuration, all the time components in Eq. (<ref>) can consistently be set to zero. Then the energy density reads H[W^_μ,ϕ,Δ]=1/2g^ik_g^jl_ Tr(F^_ijF^_kl)+g^ij_(D^_iϕ)^†_(D^_jϕ)+1/2g^ij_[(D^_i Δ)^†_(D^_jΔ)]+V(ϕ,Δ) , where g^ij_ is the metric of the coordinate system. Since the sphaleron has a spherical symmetry in a pure SU(2)_ L gauge theory, it is most convenient to adopt the spherical coordinates (r,θ,φ). Then we have g^_ij=(g^ij_)^-1_= diag(1,r^2_,r^2_sin^2_θ). Moreover, the degrees of freedom from the gauge symmetry allow us to take the polar gauge. That is, the radial part of the gauge field can always be set to zero: W^_r=0. The total energy is determined by integrating over the whole space E[W^_μ,ϕ,Δ]=∫_0^2π dφ∫_0^π dθsinθ∫_0^∞ dr r^2_ H[W^_μ,ϕ,Δ] , which is the functional of the field configuration. Below we use the minimax procedure <cit.> to find the sphaleron solution in the HTM. The basic idea is to construct a set of non-contractible loops[The loops are defined on the infinite-dimensional field configuration space {W_μ( x),ϕ( x),Δ( x)}, on which the energy functional E[W_μ( x),ϕ( x),Δ( x)] is also defined. Here x denotes the general spatial indices.] starting and ending at the vacuum. For each of the loop there exists a configuration with maximum energy. Then the infimum of the maximum energies defines the sphaleron configuration, which corresponds to the saddle point of the energy functional. Along this line, the sphaleron configuration in the SM can be worked out <cit.>. Similar strategies have also been used to study the sphaleron in the new-physics scenarios, which extend the SM by adding new singlet or doublet scalars <cit.>. However, as far as we know, the study of the sphaleron in the presence of a triplet scalar is still lacking. In what follows we show that the minimax procedure works in the HTM as well. 
First, the fields at infinity (r→∞) should be related to the vacuum configuration via W_j^∞ = - i/g∂^_j U^_∞(θ,φ)U_∞^-1(θ,φ) , j=θ,φ , ϕ^∞_ = 1/√(2)U_∞(θ,φ)^( 0 v^_ϕ) , Δ^∞_ = U^_∞(θ,φ)( 0 -v^_Δ 0 0 )U_∞^-1(θ,φ) , where U^_∞(θ,φ)∈ SU(2)^_ L denotes the gauge transformation that preserves the polar gauge condition. Note that Eq. (<ref>) satisfies the pure gauge such that the field strength F_μν vanishes at the infinity, and Eq. (<ref>) comes from the fact that Δ belongs to the adjoint representation of SU(2)^_ L. The gauge transformation U_∞(θ,φ) (or equivalently, the Higgs field at infinity ϕ^∞_) defines a map: S^2 → S^3 that is contractible, because the homotopy group π^_2(S^3_) is trivial. This implies that the fields at infinity can be continuously transformed to the vacuum configuration. In order to find a non-contractible loop in the field configuration space, we could introduce a new parameter μ∈ [0,π], and extend the gauge transformation to U(μ,θ,φ)=( e^ iμ_(cosμ- isinμcosθ) e^ iφ_sinμsinθ -e^- iφ_sinμsinθ e^- iμ_(cosμ+ isinμcosθ) ) , which satisfies U(μ,θ=0,φ)=U(μ=0,θ,φ)=U(μ=π,θ,φ)=1 with 1 the identity matrix. Therefore, μ=0 and μ=π correspond to the vacuum configuration, and the varying μ∈ [0,π] parametrizes the loop. Then it follows that equipped with the loop parametrized by μ, the gauge transformation U(μ,θ,φ) defines a map: S^3 → S^3. Since the homotopy group is π_3(S^3)=ℤ, the topological degree of the map is nonzero and the loop is non-contractible. Now it is straightforward to construct the general field configuration using Eq. (<ref>). A suitable ansatz is W^_j(μ,r,θ,φ) = - i/gf(r)∂^_j U(μ,θ,φ)U^-1_(μ,θ,φ) , j=θ,φ , ϕ(μ,r,θ,φ) = v^_ϕ/√(2)h(r)U(μ,θ,φ)( 0 1 ) , Δ(μ,r,θ,φ) = v^_Δ h^_Δ(r) U(μ,θ,φ)( 0 -1 0 0 )U^-1(μ,θ,φ) , where f(r), h(r) and h^_Δ(r) are radial profile functions to be determined. Since the polar gauge is singular at the origin, the smoothness requires the profile functions of all gauge multiplets to vanish at the origin. In addition, at spatial infinity the field configuration should go back to the vacuum configuration. This ensures the finiteness of the energy. Therefore, the boundary conditions of the profile functions should be f(0) = h(0)=h_Δ(0)=0 , f(∞) = h(∞)=h^_Δ(∞)=1 . Substituting Eqs. (<ref>)-(<ref>) into Eq. (<ref>), we obtain the kinematic terms 1/2g^jk_g^jl_(F^_ijF^_kl) = 4/g^2_ r^4_sin^2_μ[2f^2_(1-f)^2sin^2_μ+r^2_ f'^2_] , g^ij_(D^_iϕ)^†_(D^_jϕ) = v_ϕ^2/2r^2_[2(1-f)^2_ h^2_sin^2_μ+r^2_ h'^2_] , 1/2g^ij_[(D^_i Δ)^†_(D^_jΔ)] = v_Δ^2/2r^2_[(5-cos2θ)(1-f)^2_h_Δ^2 sin^2_μ+r^2_ h_Δ'^2] , where we have suppressed all arguments in the profile functions for simplicity, and all derivatives are with respect to r. It is interesting to notice that the kinetic terms of gauge fields and the doublet are spherically symmetric while that of the triplet is not. Also note that the contribution from the kinetic term of the triplet is suppressed by v_Δ^2/v_ϕ^2 compared with that of the doublet. Furthermore, once the scalar potential V(ϕ,Δ) is known (as shown in the next two sections), one could obtain the total energy E(μ) by performing the integral in Eq. (<ref>), which is the function of the loop parameter μ. The sphaleron configuration (labeled by μ^_0) is determined by finding the maximum energy along the non-contractible loop, namely δ E(μ)/δμ|_μ=μ_0 = 0 , δ^2 E(μ)/δμ^2|_μ=μ_0 < 0 . The sphaleron energy is given by E_ sph=E(μ_0), and the EOM of the sphaleron are obtained from δ E(μ^_0)/δ f=δ E(μ^_0)/δ h=δ E(μ^_0)/δ h^_Δ=0 . 
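Before solving these equations, it is a useful sanity check to confirm numerically that the gauge transformation U(μ, θ, φ) introduced above is indeed an SU(2) matrix and reduces to the identity at μ = 0, μ = π, and θ = 0, as claimed. A minimal NumPy sketch of this check (ours, for illustration only):

import numpy as np

def U(mu, theta, phi):
    # gauge transformation parametrizing the non-contractible loop (as defined in the text)
    return np.array([
        [np.exp(1j * mu) * (np.cos(mu) - 1j * np.sin(mu) * np.cos(theta)),
         np.exp(1j * phi) * np.sin(mu) * np.sin(theta)],
        [-np.exp(-1j * phi) * np.sin(mu) * np.sin(theta),
         np.exp(-1j * mu) * (np.cos(mu) + 1j * np.sin(mu) * np.cos(theta))],
    ])

rng = np.random.default_rng(0)
for _ in range(5):
    mu, th, ph = rng.uniform(0, np.pi, 3)
    M = U(mu, th, ph)
    assert np.allclose(M @ M.conj().T, np.eye(2))   # unitarity
    assert np.isclose(np.linalg.det(M), 1.0)        # SU(2): unit determinant

# the loop starts and ends at the vacuum, and is trivial along theta = 0
for M in (U(0.0, 1.0, 2.0), U(np.pi, 1.0, 2.0), U(0.7, 0.0, 2.0)):
    assert np.allclose(M, np.eye(2))
print("U(mu, theta, phi) passes the SU(2) and boundary checks")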
Solving the EOM together with the boundary conditions in Eq. (<ref>), one obtains the field configuration of the sphaleron. In the next two sections, we will use the above formalism to calculate the sphaleron configuration in the HTM. § SPHALERON WITH THE MINIMAL POTENTIAL §.§ Scalar Potential The most general scalar potential in the HTM has 8 independent parameters. Before investigating the full potential in the next section, we first consider a simplified potential V(ϕ,Δ)=λ(ϕ^†_ϕ)^2_-κ^2_ϕ^†_ϕ+1/2M_Δ^2 (Δ^†_Δ)-(λ^_Δ M^_Δϕ^ T_ϵΔϕ+ h.c.) , where ϵ≡ iσ^2_. In Eq. (<ref>), only the trilinear interaction (ϕ-Δ-ϕ) is kept and all the quartic terms of triplet self-interaction and doublet-triplet interaction are turned off. This is a minimal version of the HTM, which still violates the lepton number and can accommodate the tiny neutrino masses. We will restrict ourselves to the minimal HTM throughout this section. It helps to exhibit the effects of the triplet on the sphaleron in a more apparent way. Without loss of any generality, we can take M_Δ and λ_Δ in Eq. (<ref>) to be real and positive. Substituting the VEVs into the scalar potential we have V(v^_ϕ,v^_Δ)≡ V(⟨ϕ⟩,⟨Δ⟩)=1/4λ v_ϕ^4-1/2κ^2_ v_ϕ^2+1/2M_Δ^2 v_Δ^2-λ^_Δ M^_Δ v^_Δ v_ϕ^2 . The VEVs are determined by minimizing the potential ∂/∂ v^_ϕV(v^_ϕ,v^_Δ)=λ v_ϕ^3-κ^2_ v^_ϕ-2λ^_Δ M^_Δ v^_Δ v^_ϕ=0 , ∂/∂ v^_ΔV(v^_ϕ,v^_Δ)=M_Δ^2 v^_Δ-λ^_Δ M^_Δ v_ϕ^2=0 , from which one obtains v^_ϕ=√(κ^2_/λ-2λ_Δ^2) , v_Δ^=λ^_Δ v_ϕ^2/M^_Δ . In order to have a real positive v_ϕ, we require κ^2_>0 and λ-2λ_Δ^2>0. Besides, the vacuum stability requires λ>0. Substituting the VEVs back to Eq. (<ref>) we obtain the minimum V^_ min=-κ^4_/4(λ-2λ_Δ^2)=-1/4(λ-2λ_Δ^2) v_ϕ^4 . The nonzero minimum of the potential would bring about infinity after integrating over the whole space. To obtain a finite energy, one can perform a constant shift to the potential V(ϕ,Δ) → V(ϕ,Δ)+1/4(λ-2λ_Δ^2) v_ϕ^4 = λ(ϕ^†_ϕ-v_ϕ^2/2)^2_+2λ_Δ^2 v_ϕ^2(ϕ^†_ϕ-v_ϕ^2/2)+λ_Δ^2 v_ϕ^4/2v_Δ^2[(Δ^†_Δ)-v_Δ^2] +λ_Δ^2 v_ϕ^2/v^_Δ[v^_Δ v_ϕ^2-2 Re(ϕ^ T_ϵΔϕ)] . Note that such a shift has no impact on the sphaleron configuration since it does not involve any dynamical degrees of freedom. In Eq. (<ref>) we have replaced κ^2_ and M^_Δ with the VEVs using Eq. (<ref>). Therefore, in the minimal HTM the scalar potential depends on 4 real positive parameters: {λ,λ^_Δ,v^_ϕ,v^_Δ}. Substituting Eqs. (<ref>)-(<ref>) into Eq. (<ref>), we get the scalar potential in terms of the profile functions V(ϕ,Δ)=1/4v_ϕ^4[λ(1-h^2_)^2_+2λ_Δ^2(2h^2_-1-h^_Δ)(1-h^_Δ)] . It can be seen that the scalar potential is also spherically symmetric, although the fields themselves (i.e., ϕ and Δ) are not. §.§ Equations of Motion Now one can calculate the total energy using Eq. (<ref>). It is helpful to define the following dimensionless quantity ξ≡ g v r ≈ 8.1 ×(r/10^-15  cm) , where we have used g≈ 0.65 and v=√(v_ϕ^2+2v_Δ^2)≈ 246  GeV. As one can see later, ξ characterizes the typical scale of the sphaleron. Substituting Eqs. (<ref>)-(<ref>) and (<ref>) into Eq. (<ref>) and integrating out the angular part, we obtain E(μ)=4π v/g∫_0^∞ dξ( H^_ gauge+ H^_ doublet+ H^_ triplet) , where[From here on, unless otherwise specified, all derivatives are with respect to ξ.] 
H^_ gauge = 4 f'^2_sin^2_μ+8/ξ^2_f^2_(1-f)^2sin^4_μ , H^_ doublet = ϱ^_1/4β^2_ξ^2_(1-h^2_)^2+1/2βξ^2_ h'^2_+1/βh^2_(1-f)^2sin^2_μ , H^_ triplet = ϱ_2/4β^2_ξ^2(2h^2_-1-h^_Δ)(1-h^_Δ)+ϱ^_3/6β[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2sin^2_μ] , and ϱ^_1≡λ/g^2_ , ϱ^_2≡2λ_Δ^2/g^2_ , ϱ^_3≡v_Δ^2/v_ϕ^2 , β≡v^2_/v_ϕ^2=1+2ϱ^_3 . In Eq. (<ref>) we have divided the contributions into three parts: H^_ gauge and H^_ doublet come from the kinetic and self-interaction terms of the gauge bosons and the doublet, respectively, while H^_ triplet arises from the triplet kinetic term, the triplet mass term, and the doublet-triplet interaction. To reduce to the SM case, one can simply take ϱ^_2=ϱ^_3=0. The next step is to determine the value of μ corresponding to the maximum energy. To this end, we calculate the variation of the energy with respective to μ, i.e., δ E(μ)/δμ=4π v/3gsin2μ∫_0^∞ dξ[12f'^2_+1/β(1-f)^2_(3h^2_+8ϱ^_3 h_Δ^2)+48/ξ^2_f^2_(1-f)^2_sin^2_μ]=0 , which gives μ=0, π/2 or π. A further investigation of the second-order variation leads to δ^2_ E(μ)/δμ^2|^_μ=0 =δ^2_ E(μ)/δμ^2_|^_μ=π=4π v/g∫_0^∞ dξ[8f'^2_+2/3β(1-f)^2(3h^2_+8ϱ^_3 h_Δ^2)]>0 , δ^2_ E(μ)/δμ^2_|^_μ=π/2 =4π v/g∫_0^∞ dξ[-8f'^2_-2/3β(1-f)^2_(3h^2_+8ϱ^_3 h_Δ^2)-32/ξ^2_f^2_(1-f)^2_]<0 . Therefore, μ=0 or π corresponds to the minimum energy (i.e., the vacuum configuration) as expected, while μ=π/2 corresponds to the maximum energy (i.e., the sphaleron configuration). Substituting μ=π/2 into Eq. (<ref>) we obtain the sphaleron energy E^_ sph=4π v/g∫_0^∞ dξ { 4f'^2_+8/ξ^2_f^2_(1-f)^2_+1/β(1-f)^2_h^2_+1/2βξ^2_ h'^2_+ϱ^_3/6β[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2_]. .+ξ^2_/4β^2_[(ϱ^_1-ϱ^_2)(1-h^2_)^2_+ϱ^_2(h^2_-h^_Δ)^2_] } . The EOM of the fields are determined by the variation of the sphaleron energy with respect to the profile functions δ E^_ sph/δ f=δ E^_ sph/δ h=δ E^_ sph/δ h^_Δ=0 , which results in ξ^2_ f” = 2 f(1-f)(1-2f)-ξ^2_/4β(1-f)h^2_-2ϱ^_3/3βξ^2_(1-f)h_Δ^2 , (ξ^2_ h')' = 2(1-f)^2_ h-ξ^2_/β[(ϱ^_1-ϱ^_2)h(1-h^2_)-ϱ^_2h(h^2_-h^_Δ)] , (ξ^2_ h_Δ')' = 16/3(1-f)^2_h^_Δ-ϱ^_2/2βϱ^_3ξ^2_(h^2_-h^_Δ) . In addition, the profile functions should satisfy the boundary conditions in Eq. (<ref>). Once the solutions of the EOM are found, one can simply substitute them back to Eq. (<ref>) to get the sphaleron energy, which is expected to be of the order of 4π v/g≈ 5  TeV. Before solving Eqs. (<ref>)-(<ref>), it is interesting to first take a look at the heavy-mass limit of the triplet scalar (i.e., M_Δ→∞ or v^_Δ/v^_ϕ→ 0). Note that the coupling ϱ^_2/(2ϱ^_3) in Eq. (<ref>) is actually M_Δ^2/(g^2 v_ϕ^2) using the second relation in Eq. (<ref>). In the heavy-mass limit, M_Δ^2/(g^2_ v_ϕ^2) goes infinity and Eq. (<ref>) enforces h^_Δ→ h^2_. Then the EOM of f(ξ) and h(ξ) reduce to ξ^2_ f” = 2f(1-f)(1-2f)-ξ^2_/4(1-f)h^2_ , (ξ^2_ h')' = 2(1-f)^2_ h-ξ^2_(ϱ^_1-ϱ^_2)h(1-h^2_) , which are exactly those in the SM <cit.>, except for the replacement ϱ^_1→ϱ^_1-ϱ^_2, or equivalently, λ→λ_ eff≡λ-2λ_Δ^2. Therefore, a very heavy triplet scalar has no influence on the sphaleron but only shifts the quartic Higgs coupling λ to λ^_ eff. This is consistent with the result that one integrates out the triplet scalar at the tree level and retains only the leading-order term: L^_ eff= L^_ SM+2λ_Δ^2 (ϕ^†_ϕ)^2_+ O(1/M^_Δ) . §.§ Sphaleron Solution The EOM in Eqs. (<ref>)-(<ref>) are coupled nonlinear differential equations. It is difficult to solve them analytically. In Appendix <ref>, we have developed a numerical algorithm based on the spectral method that can be used to efficiently solve the sphaleron EOM. 
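As an independent cross-check of the spectral algorithm described in the appendix, the heavy-mass-limit (SM-like) system above can also be handled by a generic boundary-value solver. The sketch below uses SciPy's solve_bvp on a truncated domain; the cutoff, the initial guess, and the tolerance are ad hoc choices, and convergence to the expected E_sph ≈ 1.92 × 4πv/g for ϱ1 ≈ 0.306 should be checked rather than assumed.

import numpy as np
from scipy.integrate import solve_bvp

rho1 = 0.306                      # lambda_eff / g^2, the SM value
xi_min, xi_max = 1e-3, 30.0       # truncated radial domain in xi = g*v*r

def rhs(xi, y):
    # y = [f, f', h, h']; heavy-mass-limit equations of motion for f and h
    f, fp, h, hp = y
    fpp = (2.0 * f * (1.0 - f) * (1.0 - 2.0 * f)
           - 0.25 * xi**2 * (1.0 - f) * h**2) / xi**2
    hpp = (2.0 * (1.0 - f)**2 * h - rho1 * xi**2 * h * (1.0 - h**2)
           - 2.0 * xi * hp) / xi**2
    return np.vstack([fp, fpp, hp, hpp])

def bc(ya, yb):
    # f = h = 0 at the origin, f = h = 1 at large xi
    return np.array([ya[0], ya[2], yb[0] - 1.0, yb[2] - 1.0])

xi = np.linspace(xi_min, xi_max, 400)
y0 = np.zeros((4, xi.size))
y0[0] = y0[2] = np.tanh(xi / 3.0)                   # crude initial profiles
y0[1] = y0[3] = (1.0 - np.tanh(xi / 3.0)**2) / 3.0
sol = solve_bvp(rhs, bc, xi, y0, tol=1e-6, max_nodes=100000)
print("converged:", sol.status == 0)

# sphaleron energy in units of 4*pi*v/g (SM limit of the energy functional)
x = sol.x
f, fp, h, hp = sol.y
integrand = (4.0 * fp**2 + 8.0 * f**2 * (1.0 - f)**2 / x**2
             + (1.0 - f)**2 * h**2 + 0.5 * x**2 * hp**2
             + 0.25 * rho1 * x**2 * (1.0 - h**2)**2)
E = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))   # trapezoid rule
print("E_sph / (4*pi*v/g) =", E)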
See Appendix <ref> for more details. The solutions of the profile functions and the sphaleron energy density obtained from the spectral method are shown in Fig. <ref>. Note that ϱ_3 violates the custodial symmetry and thus is strictly constrained by the EW precision measurements: √(ϱ^_3) = v^_Δ/v^_ϕ≲ 0.03 <cit.>. Moreover, in the SM, ϱ^_1 is related to the mass ratio of the Higgs boson and W boson via ϱ_1^ SM=m_h^2/(8m_ W^2)≈ 0.306. In Fig. <ref>, as an illustration, we have taken ϱ^_3 to saturate the experimental upper bound, namely ϱ^_3=10^-3_ (corresponding to v^_Δ≈ 8  GeV). We also fix ϱ^_1=ϱ_1^ SM and show the solutions of profile functions and the sphaleron energy density for different ϱ^_2. From Fig. <ref>, it can be seen that all the profile functions approach the vacuum configuration [i.e., f(∞)=h(∞)=h^_Δ(∞)=1] quickly. The sphaleron energy is restricted within a very narrow region: ξ≲ 10, corresponding to r≲ 10^-15_  cm using Eq. (<ref>), which is even two orders of magnitude smaller than the length scale of a proton. This implies that the sphaleron looks like a “particle" localized near the origin. If the triplet couples with the doublet, then a larger trilinear coupling ϱ^_2 makes the profile functions tend to the vacuum configuration more slowly. In addition, ϱ^_2 would diffuse the distribution of the sphaleron energy density and also decrease the total energy of the sphaleron. It is also interesting to investigate the asymptotic behavior of the triplet field near the origin. First, from Eqs. (<ref>) and (<ref>), the smoothness of the profile functions at the origin requires f and h to satisfy f∼ξ^2 and h∼ξ, which is the same as the SM case <cit.>. Then suppose h^_Δ∼ξ^α_ (with α>0) near ξ=0 and substitute it into Eq. (<ref>). If ϱ^_3≠ 0, keeping only the leading-order term of ξ one obtains[If ϱ^_3=0, the term proportional to ξ^2_/ϱ^_3 in Eq. (<ref>) cannot be neglected near ξ=0. Instead, the finiteness of the both sides of Eq. (<ref>) enforces h_Δ→ h^2. Therefore we have h_Δ∼ h^2∼ξ^2 near the origin if ϱ^_3=0.] α(α-1)+2α=16/3 ⇒ α=1/6(√(201)-3)≈ 1.86 . The above asymptotic behavior of the triplet field near the origin has also been verified numerically. In the left panel of Fig. <ref>, we show the contour plot of the sphaleron energy with respect to ϱ^_1 and ϱ^_2, where ϱ^_3=10^-3_ is fixed. It is obvious that a larger ϱ^_1 (or ϱ^_2) would increase (or decrease) the sphaleron energy. One may wonder how large is the difference of the sphaleron energy between the minimal HTM and the SM. The answer is that for ϱ^_3≲ 10^-3_ the difference is negligible. This is because for such a small ϱ^_3, the triplet almost decouples and shifts λ to λ-2λ_Δ^2. As a result, the sphaleron energy in the minimal HTM only depends on ϱ^_1-ϱ^_2, as is shown in the left panel of Fig. <ref>. In Table <ref>, we compare the sphaleron energy in the SM and in the minimal HTM. As one can see, the difference is only about 1‰, if one replaces ϱ_1 in the SM with ϱ^_1-ϱ^_2 in the minimal HTM. Note that such a difference is of the same order of ϱ^_3. However, things are different for a larger ϱ^_3.[We comment here that a large value of v^_Δ/v^_ϕ may be available when taking into account the temperature corrections in the early Universe. See more discussions in Sec. <ref>.] In the right panel of Fig. <ref> we show the behavior of E^_ sph with ϱ_3. It can be seen that a large ϱ^_3 could significantly decrease the sphaleron energy. This can be understood as follows. 
For small ϱ^_3, β≈ 1, h^_Δ≈ h^2_, and the term proportional to ϱ^_3 in Eq. (<ref>) is suppressed, which means the contribution of the triplet to the sphaleron energy is negligible, and it reduces to the SM case. However, for large ϱ^_3 we have β≈ 2ϱ^_3, then the terms relevant to the doublet in Eq. (<ref>) are suppressed by the inverse power of β. In this case, the sphaleron energy is dominated by the contribution of gauge fields and the triplet. More explicitly, we have E^_ sph(ϱ^_3≫ 1)≈4π v/g∫_0^∞ dξ{ 4f'^2_+8/ξ^2_f^2_(1-f)^2_+1/12[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2_] }≈ 1.32×4π v/g , which tends to a fixed value. This explains why curves with different ϱ^_2 in the right panel of Fig. <ref> converge together in the large ϱ^_3 limit. Compared with the case of small ϱ^_3, we find the sphaleron energy could be decreased by 30% if ϱ^_3 is sufficiently large. To summarize, in the minimal HTM, there are three relevant parameters which could affect the sphaleron configuration, i.e., the doublet quartic coupling ϱ^_1, the doublet-triplet trilinear coupling ϱ_2, and the VEV-ratio parameter ϱ^_3. As in the SM, the sphaleron energy increases monotonically with ϱ^_1, while the two additional parameters ϱ^_2 and ϱ^_3 would decrease the sphaleron energy. However, at zero temperature, the stringent constraint on the triplet VEV has highly suppressed the effects of the triplet on the sphaleron. The sphaleron energy in the minimal HTM can be simply obtained from that in the SM with the replacement ϱ^_1→ϱ^_1-ϱ^_2. As we will see below, the situation becomes different when considering the full potential in the HTM. § SPHALERON WITH THE FULL POTENTIAL In this section, we calculate the sphaleron configuration in the HTM with the full potential. §.§ Scalar Potential and Equations of Motion The most general scalar potential in the HTM is given by V(ϕ,Δ)= λ(ϕ^†_ϕ)^2_-κ^2_ϕ^†_ϕ+1/2M_Δ^2 (Δ^†_Δ)-(λ^_Δ M^_Δϕ^ T_ϵΔϕ+ h.c.) +λ_1/4[(Δ^†_Δ)]^2_+λ^_2/4[(Δ^†_Δ)^2_]+λ^_3(ϕ^†_ϕ)(Δ^†_Δ)+λ^_4ϕ^†_ΔΔ^†_ϕ , where λ^_i (for i=1,2,3,4) are real couplings. Substituting the VEVs of the doublet and the triplet into the potential above and minimizing it leads to ∂/∂ v^_ϕV(v^_ϕ,v^_Δ) =(-κ^2_+λ v_ϕ^2-2λ^_ΔM^_Δv^_Δ+λ^_3v_Δ^2)v^_ϕ=0 , ∂/∂ v^_ΔV(v^_ϕ,v^_Δ) =-λ^_ΔM^_Δv_ϕ^2+M_Δ^2v^_Δ+(λ^_1+λ^_2)v_Δ^3+λ^_3v_ϕ^2v^_Δ=0 . From Eqs. (<ref>) and (<ref>) one can determine v^_ϕ and v^_Δ from the couplings, though the general expressions are very tedious. Alternatively, we could also use Eqs. (<ref>) and (<ref>) to express the couplings as λ^_3 =κ^2_-λ v_ϕ^2+2λ^_ΔM^_Δv^_Δ/v_Δ^2 , λ^_1+λ^_2 =-M^_Δ/v_Δ^3(v^_Δ M^_Δ + λ^_Δ v_ϕ^2 )+v_ϕ^2/v_Δ^4(λ v_ϕ^2 - κ^2_) . With the help of Eqs. (<ref>) and (<ref>), the vacuum energy is given by V(v_ϕ,v_Δ)=1/4[M^_Δv^_Δ(M^_Δv^_Δ-λ^_Δv_ϕ^2)-κ^2_v_ϕ^2] . As what we have done before, in order to have a finite total energy, we perform a shift to the potential to make the vacuum energy being zero V(ϕ,Δ) → V(ϕ,Δ)-1/4[M^_Δv^_Δ(M^_Δv^_Δ-λ^_Δv^2_)-κ^2_v_ϕ^2] = +λ[(ϕ^†_ϕ)-v_ϕ^2/2]^2_+(λ v_ϕ^2-κ^2_)[(ϕ^†_ϕ)-v_ϕ^2/2]+1/2M_Δ^2 [(Δ^†_Δ)-v_Δ^2] -λ^_Δ M^_Δ[2 (ϕ^ T_ϵΔϕ)-v^_Δ v_ϕ^2]+λ^_1/4{[(Δ^†_Δ)]^2_-v_Δ^4}+λ^_2/4{[(Δ^†_Δ)^2_]-v_Δ^4} +λ^_3[(ϕ^†_ϕ)(Δ^†_Δ)-1/2v_ϕ^2 v_Δ^2]+λ^_4 ϕ^†_ΔΔ^†_ϕ . With the above scalar potential, the total energy turns out to be E(μ)=4π v/g∫_0^∞ dξ( H^_ gauge+ H^_ doublet+ H^_ triplet) , where H^_ gauge and H^_ doublet are the same as those in the minimal HTM [i.e., Eqs. 
(<ref>) and (<ref>)], and H^_ triplet is given by H^_ triplet= +λ_Δ^2/2g^2_β^2_ξ^2_(2h^2_-1-h^_Δ)(1-h^_Δ)+v_Δ^2/6β v^2_ϕ[3ξ^2_ h_Δ'^2+16h_Δ^2(1-f)^2_sin^2_μ] +λ_Δ^2/2g^2_β^2_ξ^2_{κ^2_-(λ-2λ_Δ^2 )v_ϕ^2/λ_Δ^2 v_ϕ^2(1-h^2_). . +(v^_Δ M^_Δ/λ^_Δ v_ϕ^2-1)[2(1-h^2_ h_Δ^2)-(v^_Δ M^_Δ/λ^_Δ v_ϕ^2+1)(1-h_Δ^2)]} -λ^_1+λ^_2/4g^2_β^2_v_Δ^4/v_ϕ^4ξ^2_(1-h_Δ^4)-λ^_3/2g^2_β^2_v_Δ^2/v_ϕ^2ξ^2_(1-h^2_ h_Δ^2) , where β is still defined as β≡ v^2_/v_ϕ^2. Note that λ_4 does not appear in the energy, because ϕ^†_ΔΔ^†_ϕ always vanishes with the ansatz in Eqs. (<ref>) and (<ref>). It is easy to check that in the limit of λ^_1+λ^_2 =0 and λ^_3=0, the parameters κ^2_ and M^_Δ are related to the VEVs by Eq. (<ref>), then the 2nd to 4th lines of Eq. (<ref>) vanish and Eq. (<ref>) reduces to Eq. (<ref>). Moreover, the terms in the 2nd to 4th lines of Eq. (<ref>) are independent of the loop parameter μ, implying that they do not influence the extreme points of the energy. Therefore, we conclude that the sphaleron configuration in the HTM with the full potential is still located at μ=π/2. In order to recast the sphaleron energy into a more compact form, we introduce the following dimensionless parameters ϱ^_1≡λ/g^2_ , ϱ^_2≡2λ_Δ^2/g^2_ , ϱ^_3≡v_Δ^2/v_ϕ^2 , ϱ^_4≡κ^2_/g^2_ v_ϕ^2 , ϱ^_5≡M_Δ^2/g^2_ v_ϕ^2 . Then λ_1+λ_2 and λ_3 are related to them via λ^_1+λ^_2=g^2_(-ϱ^_5/ϱ^_3-ϱ^_5/ϱ^_3√(ϱ^_2/2ϱ^_3ϱ^_5)+ϱ^_1-ϱ^_4/ϱ_3^2) , λ^_3=g^2_(ϱ^_4-ϱ^_1/ϱ^_3+√(2ϱ^_2ϱ^_5/ϱ^_3)) . Notice that in the limit of λ^_3 = 0 and λ^_1+λ^_2=0, it goes back to the minimal HTM, where ϱ^_4 and ϱ^_5 are not independent and they are related to other three parameters by ϱ^_4 =ϱ^_1-ϱ^_2 and ϱ^_5=ϱ^_2/(2ϱ^_3). With the help of Eq. (<ref>), the sphaleron energy can be written as E^_ sph=4π v/g∫_0^∞ dξ { 4f'^2_+8/ξ^2_f^2_(1-f)^2_+1/β(1-f)^2_ h^2_+1/2βξ^2_ h'^2_. .+ξ^2/4β^2_[(ϱ^_1-ϱ^_2)(1-h^2_)^2_+ϱ^_2(h^2_-h^_Δ)^2_]+ϱ^_3/6β[3ξ^2_ h_Δ'^2+16h_Δ^2 (1-f)^2_]. .+ξ^2_/4β^2_[2(ϱ^_4-ϱ^_1+ϱ^_2 )(1-h^2_)-(2ϱ^_3 ϱ^_5 -ϱ^_2)(1-h_Δ^2)]. .+ξ^2_/2β^2_(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)(1-h^2_ h^_Δ)+ξ^2_/2β^2_(ϱ^_1-ϱ^_4-√(2ϱ^_2 ϱ^_3 ϱ^_5))(1-h^2_ h_Δ^2). .-ξ^2_/4β^2_(ϱ^_1-ϱ^_4-ϱ^_3 ϱ^_5-√(ϱ^_2 ϱ^_3 ϱ^_5/2))(1-h_Δ^4)} . Starting with the energy, we obtain the sphaleron EOM via Eq. (<ref>) ξ^2_ f” = 2f(1-f)(1-2f)-ξ^2_/4β(1-f)h^2_-2ϱ^_3/3βξ^2_(1-f)h_Δ^2 , (ξ^2_ h')' = 2(1-f)^2_ h-ξ^2_/β[(ϱ^_1-ϱ^_2)h(1-h^2_)-ϱ^_2 h(h^2_-h^_Δ). .+(ϱ^_4-ϱ^_1+ϱ^_2)h+(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)h h^_Δ+(ϱ^_1-ϱ^_4-√(2ϱ^_2ϱ^_3 ϱ^_5))h h_Δ^2 ] , ϱ^_3 (ξ^2_ h_Δ')' = 16/3ϱ^_3 (1-f)^2_ h^_Δ -ϱ^_2 ξ^2_/2β(h^2_-h^_Δ)+ξ^2_/2β[(2ϱ^_3 ϱ^_5 - ϱ^_2 )h^_Δ-(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)h^2_. .-2(ϱ^_1-ϱ^_4-√(2ϱ^_2 ϱ^_3 ϱ^_5)) h^2_ h^_Δ+2(ϱ^_1-ϱ^_4-ϱ^_3 ϱ^_5-√(ϱ^_2 ϱ^_3 ϱ^_5/2))h_Δ^3] . The profile functions f, h and h_Δ should also satisfy the boundary conditions in Eq. (<ref>). Although there are totally 8 parameters in the scalar potential, namely λ, λ^_Δ, κ^2_, M^_Δ, and λ^_i (for i=1,2,3,4), the sphaleron configuration is only affected by 5 independent parameters, i.e., ϱ^_1-ϱ^_5 defined in Eq. (<ref>). This implies that not all parameters in the HTM are relevant to the B-violating process. §.§ Constraints on the Parameters We have seen that the sphaleron configuration in the HTM is determined by 5 parameters. Using the spectral method developed in Appendix <ref>, one can solve Eqs. (<ref>)-(<ref>) and calculate the sphaleron energy in Eq. (<ref>) for any given parameters. However, there are constraints from both theoretical and experimental aspects on the parameters in the HTM <cit.>. 
Below we list all the constraints that are relevant to the sphaleron. * Triplet VEV: From the first equality of Eq. (<ref>) one can obtain ϱ^_4 = ϱ^_1-1/2ϱ^_3ϱ^_5(2+√(2ϱ^_2/ϱ^_3ϱ^_5))-(λ^_1+λ^_2)ϱ_3^2/g^2_ ≈ ϱ^_1-1/2ϱ^_3ϱ^_5(2+√(2ϱ^_2/ϱ^_3ϱ^_5)) , where in the second line we have neglected the term proportional to ϱ_3^2. This is a good approximation because the EW precision measurements require ϱ^_3≲ 10^-3, and λ^_i cannot be too large for unitarity. Therefore, ϱ^_4 can be approximated using Eq. (<ref>) in the calculation of the sphaleron. Substituting Eq. (<ref>) back to the second equality of Eq. (<ref>) we have λ^_3/g^2≈√(ϱ^_2ϱ^_5/2ϱ^_3)-ϱ^_5 . * Bounded-from-below conditions and the requirement of unitarity: These conditions provide a series of inequalities on the couplings λ_i in the scalar potential, and part of them can be translated to the constrains on ρ_i. For a complete set of these constraints, see Refs. <cit.>. Here we only list those which are relevant to the sphaleron: 0 < ϱ^_1⩽4π/g^2_ , -√(4π/g^2_ϱ^_1) < √(ϱ^_2ϱ^_5/2ϱ^_3)-ϱ^_5⩽4π/g^2_ , ϱ^_1-ϱ^_3ϱ^_5-√(ϱ^_2ϱ^_3ϱ^_5/2)>0 . In addition, there are also constraints relevant to λ_4: -√(4π/g^2_ϱ^_1)<λ^_3+λ^_4/g^2_⩽4π/g^2_ , |2λ^_3 + 3λ^_4|⩽8π , |2λ^_3-λ^_4|⩽ 8π . Although λ^_4 does not directly contribute to the sphaleron configuration, it would be related to other parameters via the Higgs mass (as discussed below). * Higgs mass: The HTM should also predict a CP-even neutral Higgs boson h, whose mass is around 125 GeV. In the HTM, the mass of h is predicted by m_h^2=g^2_ v_ϕ^2 [ϱ^_1+1/2√(ϱ^_2ϱ^_5/2ϱ^_3)+λ^_1+λ^_2/g^2_ϱ^_3. .-√((ϱ^_1-1/2√(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_1+λ^_2/g^2_ϱ^_3)^2_+4(√(ϱ^_2ϱ^_5/2)-λ^_3+λ^_4/g^2_√(ϱ^_3))^2_)] . The terms proportional to λ^_1+λ^_2 in Eq. (<ref>) are suppressed by ϱ^_3 and can be safely neglected. Then one can extract λ^_4 in terms of m_h and ϱ^_i: λ^_4/g^2_≈ϱ^_5±1/(2ϱ^_3)^3/4√((ϱ^_1-m_h^2/2 g^2_ v_ϕ^2)(√(ϱ^_2ϱ^_5)-√(2ϱ^_3)m_h^2/g^2_ v_ϕ^2)) . Given g≈ 0.65, m^_h≈ 125  GeV and v^_ϕ≈ 246  GeV, the combination of Eqs. (<ref>), (<ref>) and (<ref>) provides additional constraints on ϱ^_i. * Collider constraints: The collider searches put the lower bound on the mass of doubly-charged Higgs, namely m^_H^±±_≳ 350  GeV or m^_H^±±_≳ 1  TeV for the decay channels dominated by vector-boson (v^_Δ≳ 10^-4_  GeV) or charged-lepton (v^_Δ≲ 10^-4_  GeV) final states, respectively <cit.>. In the HTM, the mass of the doubly-charged Higgs is predicted to be m_H^±±_^2=g^2_v_ϕ^2(√(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_4/g^2_-λ^_2/g^2_ϱ^_3)≈ g^2_v_ϕ^2(√(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_4/g^2_) . For ϱ^_3=10^-3_, the dominant decay channel is the gauge-boson final state, so the collider constraint implies √(ϱ^_2ϱ^_5/2ϱ^_3)-λ^_4/g^2_≳ 4.8 , where g≈0.65 and v^_ϕ≈ 246  GeV have been used. * Charged lepton flavor violation (cLFV): The lack of the observation of cLFV in the HTM gives <cit.> M^_Δ v^_Δ≳ 10^2  GeV· eV ⇒ ϱ^_3ϱ^_5≳ 10^-24_ . This constraint is easy to satisfy for v^_Δ∼ O( GeV). In summary, the relevant constraints on the parameters that contribute to the sphaleron configuration are given by Eqs. (<ref>), (<ref>), (<ref>) and (<ref>), where λ^_3 and λ^_4 are given by Eqs. (<ref>) and (<ref>), respectively. §.§ Sphaleron Solution Basically, the contribution of the triplet to the sphaleron energy is suppressed by its VEV. For a small enough VEV-ratio parameter ϱ^_3, it should reduce to the SM case. 
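For the numerical scan described next, it is convenient to bundle the constraints listed above into a single acceptance test. The rough Python helper below implements the approximate relations quoted above with g ≈ 0.65, v_ϕ ≈ 246 GeV, and m_h ≈ 125 GeV; the sample grid at the end is purely illustrative and makes no claim about the size of the allowed region.

import numpy as np

g, v_phi, m_h = 0.65, 246.0, 125.0       # GeV units
x_h = m_h**2 / (2 * g**2 * v_phi**2)     # m_h^2 / (2 g^2 v_phi^2) ~ 0.306

def passes_constraints(r1, r2, r5, r3=1e-3):
    # check the sphaleron-relevant constraints (approximate forms from the text);
    # lam3 and lam4 below denote lambda_3/g^2 and lambda_4/g^2
    fourpi = 4 * np.pi / g**2
    lam3 = np.sqrt(r2 * r5 / (2 * r3)) - r5
    rad = (r1 - x_h) * (np.sqrt(r2 * r5) - np.sqrt(2 * r3) * 2 * x_h)
    if rad < 0:
        return False                                  # no real lambda_4 from the Higgs mass
    lam4_branches = r5 + np.array([1.0, -1.0]) * np.sqrt(rad) / (2 * r3)**0.75
    ok_common = (
        0 < r1 <= fourpi
        and -np.sqrt(fourpi * r1) < lam3 <= fourpi
        and r1 - r3 * r5 - np.sqrt(r2 * r3 * r5 / 2) > 0
        and r3 * r5 > 1e-24                           # cLFV bound
    )
    if not ok_common:
        return False
    for lam4 in lam4_branches:                        # either sign branch of lambda_4
        if (-np.sqrt(fourpi * r1) < lam3 + lam4 <= fourpi
                and abs(2 * lam3 + 3 * lam4) <= 8 * np.pi / g**2
                and abs(2 * lam3 - lam4) <= 8 * np.pi / g**2
                and np.sqrt(r2 * r5 / (2 * r3)) - lam4 >= 4.8):   # doubly-charged Higgs mass
            return True
    return False

# crude illustrative scan at rho_1 = 0.4
pts = [(0.4, r2, r5) for r2 in np.linspace(0.01, 2.0, 40)
                      for r5 in np.logspace(-1, 3, 40)]
print(sum(passes_constraints(*p) for p in pts), "of", len(pts), "sample points allowed")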
Therefore, we fix ϱ^_3 to be its upper bound (i.e., ϱ^_3=10^-3_), and see how much the difference of the sphaleron energy between the HTM and the SM is under all theoretical and experimental constraints. In addition, ϱ^_4 could be calculated from Eq. (<ref>) as a good approximation. Therefore, we are left with three independent parameters, namely the doublet quartic coupling ϱ^_1, the doublet-triplet trilinear coupling ϱ^_2, and the triplet mass parameter ϱ^_5. In the SM, ϱ^_1 is completely fixed by the Higgs mass, i.e., ϱ_1^ SM=m_h^2/(2g^2_ v_ϕ^2)≈ 0.306, and so is the sphaleron energy E_ sph^ SM≈ 1.92× 4π v/g. However, in the HTM, ϱ^_1 is not fixed because the Higgs mass depends on other parameters [see Eq. (<ref>)]. It is not difficult to prove that for ϱ^_1<ϱ_1^ SM there is no allowed parameter space under the constraints discussed in Sec. <ref>. Therefore we must have ϱ^_1⩾ϱ_1^ SM≈ 0.306 and ϱ^_2ϱ^_5⩾ 2 ϱ^_3 m_h^4/(g^4_ v_ϕ^4)≈ 7.5 × 10^-4_. In Fig. <ref>, we have taken ϱ^_1=0.306 and shown the sphaleron energy with respect to ϱ^_2 and ϱ_5. It is clear that a larger ϱ^_5 (corresponding to a heavier triplet) would decrease the sphaleron energy, though the difference is small compared with the SM case because of the suppression from ϱ^_3. However, unlike the SM where ϱ^_1 is fixed to be 0.306, ϱ^_1>0.306 is also allowed in the HTM. Due to the constraints in Sec. <ref>, the parameter space of ϱ_2 and ϱ^_5 begins to split into two distinct regions when ϱ^_1≳ 0.34, as is shown in Fig. <ref>. In Region A (left panel of Fig. <ref>), it can be seen that the allowed parameter space of ϱ^_2 and ϱ^_5 moves to upper-right as ϱ^_1 increases. This can be understood by observing the expression of λ^_4 in Eq. (<ref>), whose magnitude should be bounded by the requirement of unitarity. Moreover, the sphaleron energy decreases as ϱ^_5 increases, while larger ϱ^_1 would bring about larger sphaleron energies. The value of ϱ^_1 can keep increasing until the unitarity bound, i.e., ϱ_1^ max=4π/g^2, is reached. We have verified numerically that the maximum sphaleron energy in Region A is around 1.97× 4π v/g. Basically, the parameters in Region A correspond to a heavy mass scale M^_Δ of the triplet scalar, which can reach TeV or above. Things are quite different for Region B (shown in the right panel of Fig. <ref>). The allowed values of ϱ^_2 and ϱ^_5 are much smaller. More explicitly, the lower and upper bounds of ϱ^_2ϱ^_5 in Region B are given by √(ϱ^_2ϱ^_5) ⩽ 1/2[1/√(2ϱ^_3)(ϱ^_1-m_h^2/2 g^2_ v^2_ϕ)-4√(2π)/g√(ϱ^_1ϱ^_3)-√(A^_1)] , √(ϱ^_2ϱ^_5) ⩾ 1/2[2√(2ϱ^_3)(ϱ^_5+24/5)+1/√(2ϱ^_3)(ϱ^_1-m_h^2/2g^2_v^2_ϕ)-√(A^_2)] , where A^_1 ≡ 1/2ϱ^_3(ϱ^_1-m_h^2/2 g^2_ v^2_ϕ)[ϱ^_1-16√(π)ϱ^_3/g√(ϱ^_1)-m^2_h/2g^2_ v^2_ϕ(1+16ϱ^_3)] , A^_2 ≡ 1/10ϱ^_3(ϱ^_1-m^2_h/2g^2_v^2_ϕ)[5ϱ^_1+8ϱ^_3(24+5ϱ^_5)-5 m_h^2/2g^2 v^2_ϕ(1+16ϱ^_3)] . Note that Eq. (<ref>) comes from the constraints in Sec. <ref>, where m^_h≈ 125  GeV, g≈ 0.65 and ϱ^_3=10^-3_ should be substituted to evaluate the lower and upper bounds. The allowed values of ϱ^_2 and ϱ^_5 are restricted to a narrow parameter space by Eq. (<ref>). For example, for ϱ^_1=0.6, the validity of Eq. (<ref>) requires ϱ^_5≲ 0.987 and 1.05 × 10^-3_≲ϱ^_2ϱ^_5≲ 1.22× 10^-3_, which corresponds to the narrow band in the bottom-right subfigure of Fig. <ref>. Since ϱ^_5 is relatively small, the sphaleron energy in Region B can be significantly enhanced as ϱ^_1 increases. 
In particular, for ϱ^_1=ϱ_1^ max=4π/g^2, the sphaleron energy can reach 2.48× 4π v/g, which is enhanced by about 30% compared with the sphaleron energy in the SM. The parameters in Region B correspond to a much smaller M^_Δ than that in Region A (basically lighter than 1 TeV). However, it does not violate the collider constraints on the mass of doubly-charged Higgs, because m^_H^±± depends on the combination of ϱ^_2ϱ^_5 rather than ϱ^_5 itself, and is enhanced by ϱ_3^-1/2 [see Eq. (<ref>)]. On the other hand, since the allowed parameter space in Region B is quite narrow and is sensitive to the lower bound of m^_H^±±, we point out that it is readily testable by future collider searches and EW precision measurements. In Fig. <ref>, we have shown the sphaleron energy with respect to ϱ^_1 and ϱ^_2 for different values of ϱ^_5. Note that all allowed parameters in Fig. <ref> belong to Region A because the corresponding values of ϱ^_5 are not small enough to satisfy Eq. (<ref>). It is clear that for larger ϱ^_5, the allowed parameter space moves to upper-right. The increase of ϱ^_1 (or ϱ^_5) would enhance (or reduce) the sphaleron energy. For ϱ^_5≳ 100 (corresponding to M^_Δ≳ 1.6  TeV), the lower bound of the sphaleron energy tends to about 1.88× 4π v/g. To sum up, the sphaleron energy in the SM is completely fixed by the Higgs mass, while that in the HTM is not. The allowed parameter space begins to split into two regions when ϱ^_1≳ 0.34. In Region A, the sphaleron energy is bounded to be 1.88 × 4π v/g ≲ E_ sph^≲ 1.97 × 4 π v/g. The difference of the sphaleron energy between the HTM and the SM is less than 3%. On the contrary, in Region B, since ϱ^_5 is relatively small, the sphaleron energy could be significantly enhanced as ϱ^_1 increases. Therefore we have 1.92 × 4π v/g ≲ E_ sph^≲ 2.48 × 4 π v/g, where the sphaleron energy could be enhanced up to about 30% compared with the SM case. § SUMMARY AND DISCUSSIONS The origin of neutrino masses and the baryon asymmetry of the Universe are two of the most important unsolved problems in the SM. Both of them are possible to be explained in a unified framework of the HTM, which extends the SM by adding a complex triplet scalar. The couplings of the triplet to the gauge fields and to the SM Higgs field are expected to affect the sphaleron configuration in the SM, which plays an important role in baryogenesis. Therefore, to realize a self-consistent baryogenesis in the HTM, either via EW baryogenesis or via leptogenesis, the calculation of the sphaleron energy is indispensable. In this work, we calculate the sphaleron configuration in the HTM for the first time, where both the doublet and the triplet scalar fields exist. Although there are 8 parameters in the scalar potential of the HTM, we find that the sphaleron configuration is determined by only 5 independent parameters, i.e., those defined in Eq. (<ref>). Among them, the doublet quartic parameter ϱ^_1 would increase the sphaleron energy, as in the SM case; while the doublet-triplet trilinear parameter ϱ^_2, the VEV-ratio parameter ϱ^_3, and the triplet mass parameter ϱ^_5 would decrease the sphaleron energy in general compared with the SM. Nevertheless, at zero temperature, the constraint from EW precision measurements on the triplet VEV puts a stringent upper bound on ϱ^_3, thus highly suppresses the difference of the sphaleron energy between the HTM and the SM. 
Interestingly, we find there still exists some narrow parameter space where the sphaleron energy could be enhanced by 30% compared with the SM case. Such narrow parameter space can be tested by future collider searches of doubly-charged Higgs and EW precision measurements. In the following, we discuss some possible extensions of the present work. All of the calculations in this paper have neglected the finite-temperature effects. However, the sphaleron transition rate is significant above the temperature of O(100)  GeV in the early Universe, which is a crucial process for baryogenesis. Therefore, in principle one should include the finite-temperature corrections as well as the one-loop corrections into the scalar potential in Eq. (<ref>) and recalculate the sphaleron configuration using the formalism developed above. This is beyond the scope of this paper, and will be left for a future work. As a good approximation, one could estimate the sphaleron energy at finite temperatures using the scaling law <cit.> E^_ sph(T)=E^_ sphv(T)/v , where v and E_ sph are the VEV and the sphaleron energy at zero temperature, and v(T)=[v_ϕ^2(T)+2v_Δ^2(T)]^1/2_ is the VEV at a finite temperature, with v^_ϕ(T) and v^_Δ(T) being the VEVs of the doublet and the triplet. On this point, it is worthwhile to emphasize that v^_Δ(T)/v^_ϕ(T) is not constrained by experiments as at zero temperature, and hopefully we could have a larger ϱ^_3 at finite temperatures. As has been shown in the right panel of Fig. <ref>, a large ϱ^_3 would significantly decrease the sphaleron energy compared with the SM. Apart from the finite-temperature effects, one can study the sphaleron configuration in the Georgi-Machacek (GM) model <cit.>. The GM model further extends the HTM by introducing an additional real triplet scalar with hypercharge Y=0, and can maintain the custodial symmetry at the tree level by adjusting the VEVs of the complex and real triplets. In this way, the VEVs of the triplets are no longer suppressed and can even be larger than that of the doublet. This may significantly change the sphaleron configuration in the SM according to the results in this work. Therefore, it would be interesting to calculate the sphaleron energy and investigate whether a successful EW baryogenesis could be carried out in the GM model, given that the strong first-order EW phase transition is possible in this model <cit.>. Note added. During the final preparation of this paper, a relevant work <cit.> appeared, which studied the sphaleron configuration in extensions of the SM with general electroweak multiplets (see also Ref. <cit.> for earlier efforts). In particular, Ref. <cit.> calculated the sphaleron energy in a septuplet extension of the SM. Besides, Ref. <cit.> focused on the scenario where the neutral component of the multiplet can be a dark matter candidate. In this case, the hypercharge of the multiplet should be zero and the VEV is vanished at zero temperature. This is different from the scenario we considered in the current work. § ACKNOWLEDGEMENTS We would like to thank Huai-Ke Guo, Yu Tian, Yanda Wu and Deshan Yang for helpful discussions about the sphaleron energy and the spectral method. This work was supported in part by the National Natural Science Foundation of China under grants No. 11835013 and No. 12235008. § SPECTRAL METHODS The EOM of the relevant fields in the calculation of the sphaleron configuration are nonlinear differential equations coupled with each other. 
It is usually difficult to solve them in an analytical way. In this appendix we show how to use the spectral method to numerically solve the EOM and calculate the sphaleron energy.[The code is publicly available at https://github.com/Bingrong-Yu/Spectral_Sphaleron_Solverhttps://github.com/Bingrong-Yu/Spectral_Sphaleron_Solver.] The main advantage of the spectral method is that it converges very quickly with high precision as the number of the grid points increases. In what follows, we first give a brief introduction to the spectral method, and then apply it to the SM and the HTM. §.§ Basic Ideas The spectral method is an efficient technique to numerically solve differential equations <cit.>. The core idea is to approximate the unknown function by a set of basis functions. Let {ϕ^_n(x)} being a set of orthogonal and complete functions, the unknown function u(x) can be expanded as u(x)=∑_n=0^∞a^_n ϕ^_n(x) , a^_n=∫ dx u(x) ϕ^*_n(x) . For practical numerical computation, one has to truncate at a finite number n=N, and u(x) can be approximated by u(x)≈ u^_N(x)=∑_n=0^Na^_n ϕ^_n(x) , where the coefficients a^_n are calculated at grid points {x^_i} a^_n ≈∑_i=1^Nu^_i ϕ^*_n(x_i) , with u_i≡ u(x_i). Substituting Eq. (<ref>) back to (<ref>) one obtains u^_N(x) = ∑_n=0^N∑_i=0^Nu^_i ϕ_n^*(x_i)ϕ^_n(x) . Then the derivative of the unknown function can be approximated by that of the basis functions, namely u_j' ≈ u_N'(x)|_x=x_j = ∑_n=0^N∑_i=0^Nu^_i ϕ_n^*(x_i)ϕ_n'(x)|_x=x^_j . The differentiation matrix D^_N, which relates the unknown function to its derivative at grid points, is given by (D_N)^_ji=∑_n=0^Nϕ_n^*(x^_i) ϕ_n'(x)|_x=x^_j . Starting from the differentiation matrix, the values of the derivative function can be easily expressed as the linear combination of the values of the raw function. For example, we have u_j' = ∑_i=1^N(D^_N)^_jiu^_i , u_j” = ∑_i=1^N(D_N^2)^_jiu^_i . Then the differential equations of u(x) are reduced to a set of algebraic equations of {u^_i}, which can be numerically solved directly. The numerical error of the above method is described by the residual function R(x)=|u(x)-u^_N(x)|. Therefore, a “good choice" of the basis functions {ϕ^_n(x)} and the grid points {x^_i} should make the residual function as small as possible. For periodic functions, the best choice of the basis functions is the Fourier series. However, for non-periodic functions, as what we encountered in the calculation of the sphaleron, it can be shown that in most cases the best choice of the basis functions is the Chebyshev polynomials (see Appendix <ref>) <cit.>. In addition, the grid points should be taken as the extrema of the Chebyshev polynomials, i.e., x_j = cos(jπ/N) , j=0,1,⋯,N . Then it is straightforward to construct the Chebyshev spectral differentiation matrix <cit.> (D^_N)^_00 = 2N^2_+1/6 , (D^_N)^_NN = -2N_^2+1/6 , (D^_N)^_jj = -x^_j/2(1-x_j^2), j=1,⋯, N-1 , (D^_N)^_ij = c^_i/c^_j(-1)_^i+j/x^_i-x^_j, i≠ j, 0⩽ i,j ⩽ N , where c^_i={ 2 i=0 or N 1 otherwise . . One should keep in mind that when using the Chebyshev spectral method to solve differential equations, the following two conditions need to be satisfied * domain of the variable: x∈ [-1,1] ; * boundary conditions: u(-1) = u(1) =0 . They are easily to achieve after a linear transformation of the variable. In the following parts we will show how to use the spectral method introduced above to solve the differential equations relevant to the sphaleron. 
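The construction above translates directly into code. A minimal NumPy implementation of the extrema grid and the Chebyshev differentiation matrix defined above is given below, together with a quick accuracy check on a smooth test function of our own choosing; the rapid decrease of the error with N illustrates the spectral convergence.

import numpy as np

def cheb(N):
    # Chebyshev extrema grid x_j = cos(j*pi/N) and differentiation matrix D_N
    assert N >= 1
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.where((j == 0) | (j == N), 2.0, 1.0)
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for k in range(N + 1):
            if i != k:
                D[i, k] = (c[i] / c[k]) * (-1.0) ** (i + k) / (x[i] - x[k])
    D[0, 0] = (2 * N**2 + 1) / 6.0
    D[N, N] = -(2 * N**2 + 1) / 6.0
    for i in range(1, N):
        D[i, i] = -x[i] / (2.0 * (1.0 - x[i] ** 2))
    return D, x

# spectral accuracy check: differentiate a smooth test function
for N in (8, 16, 32):
    D, x = cheb(N)
    u = np.exp(x) * np.sin(5 * x)
    du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
    print(f"N = {N:2d}:  max derivative error = {np.max(np.abs(D @ u - du_exact)):.2e}")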
§.§ Sphaleron in the Standard Model As a warm up, we first use the spectral method to calculate the sphaleron configuration in the SM. There are only two dynamical fields [i.e., f(ξ) and h(ξ)], and their EOM are given by (recalling that we have defined ξ≡ g v r and ϱ^_1≡λ/g^2_) ξ_^2 f” = 2f(1-f)(1-2f)-ξ_^2/4(1-f)h_^2 , (ξ_^2 h')' = 2(1-f)_^2 h-ϱ_1 ξ_^2 h(1-h_^2) , with the boundary conditions f(0)=h(0)=0 and f(∞)=h(∞)=1. In the practical calculation, the variable is truncated at some finite distance ξ^_ max=2a. This is reasonable because the sphaleron energy is localized near the origin and the profile functions f and h tend to the constant quickly as the distance increases. In order to satisfy the conditions of the Chebyshev spectral method, we perform a linear transformation to the variable ξ→ x=ξ/a-1 . In addition, the profile functions should be shifted to f(x) →f̅(x) = f(x) - 1+x/2 , h(x) →h̅(x) = h(x) - 1+x/2 . Then the domain of the variable is x∈ [-1,1] and the boundary conditions become f̅(-1)=f̅(1)=h̅(-1)=h̅(1)=0. The EOM of the shifted profile functions turn out to be 2(1+x)_^2 f̅” = (2f̅+1+x)(2f̅-1+x)(2f̅+x) +a_^2/16(1+x)_^2(2f̅-1+x)(2h̅+1+x)_^2 , (1+x)_^2 h̅” + (1+x)(2h̅'+1) = 1/4(2f̅-1+x)_^2(2h̅+1+x) - a_^2 ϱ^_1/8(1+x)_^2(2h̅+1+x)[4-(2h̅+1+x)_^2] . Note that all the derivatives in Eqs. (<ref>) and (<ref>) are with respective to x rather than ξ. Now we can use the Chebyshev spectral method introduced above to solve the EOM. Given the grid points in Eq. (<ref>), it is straightforward to construct the (N+1)× (N+1) differentiation matrix D_N using Eq. (<ref>). The derivatives of the profile functions are given by f̅' = D^_N f̅, h̅' = D^_N h̅, f̅” = D_N^2 f̅, and h̅” = D_N^2 h̅. Then Eqs. (<ref>) and (<ref>) are reduced to 2(N-1) algebraic equations with respect to {f̅(x^_1),⋯,f̅(x^_N-1),h̅(x^_1),⋯,h̅(x^_N-1)} , which can be numerically solved directly. Finally, the profile functions should be shifted back via f(x)=f̅(x)+(1+x)/2 and h(x)=h̅(x)+(1+x)/2, and the energy of the sphaleron could be computed by E^_ sph=4π v a/g∫_x^_N-1^x^_1 dx {4/a^2_f'^2_+8/a_^2(1+x)_^2f_^2(1-f)_^2+(1-f)_^2h_^2+1/2(1+x)_^2 h'^2_. .+ϱ^_1/4 a_^2(1+x)_^2 (1-h_^2)_^2 } , where the upper and lower bounds x^_1 and x^_N-1 are given by Eq. (<ref>). We find the results converge rapidly as the number of grid points N increases (see Fig. <ref>). For N ≳ 20, the numerical results are stable and independent of the cut-off a. This is because the profile functions and the sphaleron energy density tend to constants quickly as ξ increases. In Fig. <ref> we show the sphaleron configuration in the SM obtained using the spectral method, where N=60 and a=30 have been taken. It is worthwhile to mention that the spectral method takes only about 1 second to calculate the sphaleron configuration for a given ϱ^_1 using a usual personal desktop. In particular, for ϱ_1=0, ϱ^_1=0.306 and ϱ^_1 →∞, we obtain E^_ sph≈ 1.54, E^_ sph≈ 1.92 and E^_ sph≈ 2.71 (in units of 4π v/g), which matches very well with the result in the literature <cit.>. §.§ Sphaleron in the Higgs Triplet Model Then we turn to calculate the sphaleron configuration in the HTM using the spectral method. We have three dynamical fields, i.e., f(ξ), h(ξ), and h^_Δ(ξ). As what we did in the SM, in order to satisfy the boundary conditions of the spectral method, the variable ξ should be transform to x via Eq. (<ref>), and the profile functions should be shifted to f(x) →f̅(x) = f(x) - 1+x/2 , h(x) →h̅(x) = h(x) - 1+x/2 , h^_Δ(x) →h̅^_Δ(x) = h^_Δ(x) - 1+x/2 . 
Now the domain of the variable is x∈ [-1,1] and the boundary conditions become f̅(-1)=f̅(1)=h̅(-1)=h̅(1)=h̅^_Δ(-1)=h̅^_Δ(1)=0. After some straightforward calculations, the EOM of the shifted profile functions turn out to be 2(1+x)_^2 f̅” = (2f̅+1+x)(2f̅-1+x)(2f̅+x) +a_^2/16β(1+x)_^2(2f̅-1+x)(2h̅+1+x)_^2 +a_^2 ϱ_3/6β(1+x)_^2(2f̅-1+x)(2h̅^_Δ+1+x)_^2 , (1+x)_^2 h̅” + (1+x)(2h̅'+1) = 1/4(2f̅-1+x)_^2(2h̅+1+x) -a_^2/8β(1+x)_^2{(ϱ^_1-ϱ^_2)(2h̅+1+x) [4-(2h̅+1+x)_^2]. .-ϱ^_2(2h̅+1+x)[(2h̅+1+x)_^2-2(2h̅^_Δ+1+x)]. .+4(ϱ^_4-ϱ^_1+ϱ^_2)(2h̅+1+x). .+2(√(2ϱ^_2ϱ^_3ϱ^_5)-ϱ^_2)(2h̅+1+x)(2h̅^_Δ+1+x). .+(ϱ^_1-ϱ^_4-√(2ϱ^_2ϱ^_3ϱ^_5))(2h̅+1+x)(2h̅^_Δ+1+x)_^2} , ϱ^_3(1+x)_^2 h̅_Δ” + ϱ^_3(1+x)(2h̅_Δ'+1) = 2ϱ^_3/3(2f̅-1+x)_^2(2h̅^_Δ+1+x) -a_^2 ϱ^_2/8β(1+x)_^2[(2h̅+1+x)_^2-2(2h̅^_Δ+1+x)] +a_^2/8β(1+x)_^2[2(2ϱ^_3ϱ^_5-ϱ^_2)(2h̅^_Δ+1+x). .-(√(2ϱ^_2ϱ^_3ϱ^_5)-ϱ^_2)(2h̅+1+x)^2. .-(ϱ^_1-ϱ^_4-√(2ϱ^_2ϱ^_3ϱ^_5))(2h̅+1+x)_^2 (2h̅^_Δ+1+x). .+(ϱ^_1-ϱ^_4-ϱ^_3ϱ^_5-√(ϱ^_2ϱ^_3ϱ^_5/2))(2h̅^_Δ+1+x)_^3] . Note that all the derivatives are with respective to x. If we take ϱ^_4=ϱ^_1-ϱ^_2 and ϱ^_5=ϱ^_2/(2ϱ^_3), then Eqs. (<ref>)-(<ref>) simply reduce to the EOM of the shifted profile functions in the minimal HTM. Constructing the differentiation matrix D_N using Eq. (<ref>), the derivatives of the profile functions are given by f̅' = D_N f̅ , h̅' = D_N h̅ , h̅_Δ' = D_N h̅_Δ , f̅” = D_N^2 f̅ , h̅” = D_N^2 h̅ , h̅”_Δ= D_N^2 h̅^_Δ . Then Eqs. (<ref>)-(<ref>) reduce to 3(N-1) algebraic equations with respective to {f̅(x^_1),⋯,f̅(x^_N-1),h̅(x^_1),⋯,h̅(x^_N-1),h̅^_Δ(x^_1),⋯,h̅^_Δ(x^_N-1)} , and they can be numerically solved directly. The profile functions should be shifted back: f(x)=f̅(x)+(1+x)/2, h(x)=h̅(x)+(1+x)/2, and h^_Δ(x)=h̅^_Δ(x)+(1+x)/2. Finally, the energy of the sphaleron is calculated by E_ sph=4π v a/g∫_x^_N-1^x^_1 dx {4/a_^2f'^2_+8/a_^2(1+x)_^2f_^2(1-f)_^2+1/β(1-f)^2h_^2+1/2β(1+x)_^2 h'^2_. .+a_^2(1+x)_^2/4β^2_[(ϱ^_1-ϱ^_2)(1-h_^2)^2_+ϱ^_2(h_^2-h^_Δ)^2_]+ϱ^_3/6β[3(1+x)_^2 h_Δ'^2.. .. +16h_Δ^2 (1-f)_^2]+a_^2(1+x)^2/4β^2_[2(ϱ^_4-ϱ^_1+ϱ^_2 )(1-h_^2).. ..-(2ϱ^_3 ϱ^_5 -ϱ^_2)(1-h_Δ^2)]+a_^2(1+x)_^2/2β_^2(√(2ϱ^_2 ϱ^_3 ϱ^_5)-ϱ^_2)(1-h_^2 h^_Δ). .+a_^2(1+x)_^2/2β_^2(ϱ^_1-ϱ^_4-√(2ϱ^_2 ϱ^_3 ϱ^_5))(1-h_^2 h_Δ^2). .-a^2_(1+x)^2_/4β^2_(ϱ^_1-ϱ^_4-ϱ^_3 ϱ^_5-√(ϱ^_2 ϱ^_3 ϱ^_5/2))(1-h_Δ^4)} , where x^_1 = cos(π/N) and x^_N-1=cos[(N-1)π/N]=-cos(π/N). As in the SM, we find the final results converge rapidly as N increases and depend very weakly on a. Therefore, in the numerical calculation throughout this work, we fix N=60 and a=30. § CHEBYSHEV POLYNOMIALS In this mathematical appendix, we briefly review some properties of the Chebyshev polynomials. We also demonstrate why the Chebyshev polynomials serve as a “good candidate" of the basis functions in the spectral method. The Chebyshev polynomial of degree n is defined as T^_n(cosθ)=cos(nθ) , n=0,1,2,⋯ . From the definition one can obtain T^_0(x) = 1 , T^_1(x) = x , T^_n+2(x) = 2xT^_n+1(x)-T^_n(x) . It is easy to show that the Chebyshev polynomials satisfy the following properties: * Orthonormality. The Chebyshev polynomials are orthogonal with respect to the weight function ρ(x)=1/√(1-x_^2), i.e., ∫_-1^1 dx/√(1-x^2)T^_m(x) T^_n(x) = 0 for m≠ n , ∫_-1^1 dx/√(1-x^2)T_n^2(x) = { π for n=0 π/2 for n=1,2,3,⋯ . . * Completeness. Any function u(x) defined on [-1,1] can be expanded as u(x) = '∑_n=0^∞ a^_n T^_n(x) , a^_n=2/π∫_-1^1 dx/√(1-x^2)u(x)T^_n(x) , where ∑' denotes a sum whose first term is halved. * Roots and extrema. 
The Chebyshev polynomial of degree n has n+1 extrema and n roots in [-1,1] extrema: x^_j = cos(jπ/n) , j=0,1,⋯,n , roots: x̃^_j = cos(2j+1/2nπ) , j=0,1,⋯,n-1 . For the practical numerical calculation, the infinite sum in Eq. (<ref>) should be truncated at n=N, and the coefficients are evaluated at grid points <cit.> u(x)≈ u^_N(x) = ”∑_j=0^N b^_n T^_n(x) , b^_n = 2/N”∑_n=0^N u(x^_j) T^_n(x^_j) , where ∑” denotes a sum whose first and last terms are halved, and x^_j = cos(jπ/N) (for j=0,1,⋯,N) are extrema of the Chebyshev polynomial of degree N. Alternatively, one can also evaluate the coefficients at roots of the Chebyshev polynomials u(x)≈ũ^_N(x)= '∑_n=0^Nb̃^_n T^_n(x) , b̃^_n = 2/N+1∑_j=0^N u(x̃^_j) T^_n(x̃^_j) , where x̃^_j=cos[(2j+1)π/(2N+2)] (for j=0,1,⋯,N) are roots of of the Chebyshev polynomial of degree N+1. Then it follows that the interpolation functions u^_N(x) and ũ^_N(x) fit u(x) exactly at the grid points, i.e., u^_N(x^_j)=u(x^_j) and ũ^_N(x̃^_j)=u(x̃^_j). Moreover, it can be proved that the upper bounds of the residue functions turn out to be <cit.> |u(x)-u^_N(x)| ⩽ 2 ∑_n=N+1^∞|a^_n| , |u(x)-ũ^_N(x)| ⩽ 2 ∑_n=N+1^∞|a^_n| . This means the error of evaluating the coefficients at grid points can never exceed twice the error of computing the coefficients using the integral in Eq. (<ref>). The grid points in Eqs. (<ref>) and (<ref>) are known as extrema grid and roots grid, respectively. Both of them have been widely used in the Chebyshev spectral method <cit.>. In this work, we take the grid points to be extrema grid [see Eq. (<ref>)]. One could also compare the interpolation using the Chebyshev polynomials with other polynomials. First, recall that the general Lagrange interpolation of u(x) is given by L(x) = ∑_j=0^Nu^_j ℓ^_j(x) , where u^_j≡ u(x^_j) and ℓ^_j(x)=1/c^_j∏^N_k=0 k≠ j(x-x^_k) , c^_j=∏^N_k=0 k≠ j(x^_j-x^_k) . Then we have L(x^_j)=u(x^_j) (for j=0,1,⋯ N). The remainder of the Lagrange interpolation reads R(x)=u(x)-L(x)=u_^(N+1)(ζ)/(N+1)!P^_N+1(x) , P^_N+1(x)≡(x-x^_1)⋯(x-x^_N) , where u_^(N)(x) is the N-th derivative of u(x) and ζ∈ (-1,1). The question is: how to choose the grid points x_j so that we could have the smallest remainder? An intuitive answer is to look at the upper bound of the remainder, which turns out to be max|R(x)|⩽ max|u_^(N+1)(x)|/(N+1)! max|P^_N+1(x)| . It is not difficult to prove that max|P^_N+1(x)|⩾1/2^N_ max|T^_N+1(x)|=1/2^N_ . If P^_N+1(x) is the monic Chebyshev polynomial T^_N+1(x)/2^N, namely the grid points x_j are taken to be the roots of T^_N+1(x), then max|R(x)| has the minimum upper bound. Therefore, the Chebyshev polynomial is the “best choice" of the interpolation polynomial, in the sense that the remainder has a minimum upper bound. elsarticle-num
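The practical consequence of this choice of grid can be seen by interpolating a function with steep gradients on the Chebyshev extrema grid and on an equispaced grid of the same size. The short sketch below uses SciPy's barycentric interpolator; the test function 1/(1+25x²) is a standard illustration and is not taken from this paper.

import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)      # steep but smooth test function
x_fine = np.linspace(-1.0, 1.0, 2001)

for N in (10, 20, 40):
    x_cheb = np.cos(np.pi * np.arange(N + 1) / N)    # extrema grid defined above
    x_equi = np.linspace(-1.0, 1.0, N + 1)
    err_cheb = np.max(np.abs(BarycentricInterpolator(x_cheb, f(x_cheb))(x_fine) - f(x_fine)))
    err_equi = np.max(np.abs(BarycentricInterpolator(x_equi, f(x_equi))(x_fine) - f(x_fine)))
    print(f"N = {N:2d}:  max error  Chebyshev grid {err_cheb:.2e}   equispaced grid {err_equi:.2e}")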
http://arxiv.org/abs/2307.05361v1
20230708230112
A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics
[ "Yue Shi", "Shuhao Ma", "Yihui Zhao", "Zhiqiang Zhang" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.LG", "cs.RO" ]
Muscle force and joint kinematics estimation from surface electromyography (sEMG) is essential for real-time biomechanical analysis of the dynamic interplay among neural muscle stimulation, muscle dynamics, and kinetics. Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner. However, the small-sample nature of biomechanical data and the need for physical interpretability limit the applications of DNNs. This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics. This method seamlessly integrates Lagrange's equation of motion and an inverse dynamic muscle model into the generative adversarial network (GAN) framework for structured feature decoding and extrapolated estimation from the small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to restrain the structured decoding of the high-level features following the laws of physics, and a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding the consistent physical representation of the extrapolated estimations and the physical references. Experimental validations are conducted on two scenarios (i.e., walking trials and wrist motion trials). Results indicate that the estimations of the muscle forces and joint kinematics are unbiased compared to the physics-based inverse dynamics, and that the proposed method outperforms the selected benchmark methods, including the physics-informed convolutional neural network (PI-CNN), the vanilla generative adversarial network (GAN), and the multi-layer extreme learning machine (ML-ELM). § INTRODUCTION Human movements involve complex interactions within the neuromuscular system. The surface electromyography (sEMG)-driven estimation of muscle force and joint kinematics provides detailed biomechanical analysis to understand the neuromuscular system <cit.>, which benefits various applications, such as sports rehabilitation treatments <cit.>, <cit.>, and optimizing robotic design for individuals with impairments <cit.>. Although physics-based models explicitly explain and map sEMG signals to joint kinematics, the high cost of their static optimization has always limited the practical applications of these models <cit.>. Recently, deep neural networks (DNNs) have provided an alternative solution for mapping sEMG signals to joint kinetics and kinematics <cit.>. In this kind of model, the multi-layer convolution architecture has been explored to establish relationships between movement variables and neuromuscular status <cit.>. For example, Nasr et al. <cit.> mapped sEMG signals to the regression of joint angle, joint velocity, joint acceleration, joint torque, and activation torque, illustrating that multi-layer convolution operators are capable of extracting underlying motor control information. Zhang et al. <cit.> developed an active deep convolutional neural network to enhance the dynamic tracking capability of the musculoskeletal model on unseen data. Despite these advantages, traditional DNNs are data-hungry, and their performance is highly dependent on the quantity and quality of data <cit.>. Meanwhile, biomechanical analysis is typically a physics-based extrapolation process with a small-sample nature <cit.>. 
Therefore, it is a challenge to train DNNs with small sample data so that the DNNs perform consistently with the physics-based model. To fill this research gap, the low-shot learning (LSL) technique has attracted many researchers' attention <cit.>. For example, Rahimian et al <cit.> introduced a Few-Shot Learning Hand Gesture Recognition (FS-HGR) model to enhance the generalization capability of DNNs from a limited number of instances. Lehmler et al <cit.> explored a low-shot learning methodology that adjusts DNNs to new users with only a small size of training data. In addition, the generative adversarial network (GAN) framework has shown great potential in handling physical extrapolating and predictive problems <cit.>. The GAN-based model is capable of discovering the structured patterns of the references and extrapolating the underlying data distribution characteristics during the adversarial learning process <cit.>. For example, Chen et al <cit.> tested and evaluated the performance of the deep convolutional generative adversarial network (DCGAN) on sEMG-based data enhancement, and their results indicated that the extrapolated data is able to augment the diversity of the original data. Fahimi et al <cit.> proposed a generative adversarial learning framework for generating artificial electroencephalogram (EEG) data to extrapolate the brain-computer interface, and their findings suggest that generated EEG augmentation can significantly improve brain-computer interface performance. In this study, we propose a physics-informed low-shot learning method for muscle force and joint kinematics estimation from multi-channel sEMG signals. This method seamlessly integrates physics knowledge with the GAN framework for structured feature decoding and extrapolated estimation from the small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to restrain the structured decoding of the high-level features following the laws of physics. And a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding the consistent physical representation of the extrapolated estimations and the physical references. Results show the muscle forces and joint kinematics estimated from the proposed method are unbiased compared to the physics-based inverse dynamics. The remainder of this paper is organized as follows: Section <ref> detailed describes the algorithm of the proposed physics-informed policy gradient for reinforcement generative adversarial learning, including the mathematics framework of the algorithm and network architectures. Section <ref> presents the material and experimental methods. Section <ref> discusses the experimental results and model evaluations. and Section <ref> presents the conclusions. § PHYSICS-INFORMED LOW-SHOT LEARNING METHOD The continuous estimation of muscle forces (F) and joint kinematics(θ) from multi-channel sEMG can be denoted as the time-series generation problem. Thus, given a real multi-channel sEMG time series, we train a σ parameterized generative network G_σ to estimate the muscle force (F̂) and joint kinematics (θ̂). In this section, we propose a GAN framework, as shown in Fig.<ref>, to train the G_σ on the small sample data. Specifically, we denote the F̂ and θ̂ estimated by G_σ as the negative samples (see details in Section <ref>), the ground truth (θ) and the inverse dynamics-based (F) <cit.> as positive samples (i.e. references). 
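To make the sample roles concrete, the following PyTorch-style sketch (our illustration, not the authors' code; the window length, channel counts, and the placeholder generator are assumptions) shows how a batch of positive samples (ground-truth kinematics paired with inverse-dynamics muscle forces) and negative samples (generator outputs) could be assembled for the discriminator introduced next.

import torch
import torch.nn as nn

T, SEMG_CH, OUT_CH = 100, 2, 3            # time steps, sEMG channels, [F, theta] channels (assumed)

class TinyGenerator(nn.Module):           # stand-in for G_sigma, not the paper's architecture
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SEMG_CH, 32), nn.ReLU(), nn.Linear(32, OUT_CH))
    def forward(self, semg):              # semg: (batch, T, SEMG_CH)
        return self.net(semg)             # (batch, T, OUT_CH) -> estimated [F_hat, theta_hat]

g_sigma = TinyGenerator()
semg = torch.randn(8, T, SEMG_CH)         # a batch of sEMG windows (stand-in data)
references = torch.randn(8, T, OUT_CH)    # ground-truth theta stacked with inverse-dynamics F (stand-in)

negative = g_sigma(semg).detach()         # generator estimates = negative samples
positive = references                     # physics-based references = positive samples
disc_inputs = torch.cat([positive, negative], dim=0)
disc_labels = torch.cat([torch.ones(8), torch.zeros(8)])   # 1 = reference, 0 = generated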
The ϕ-parameterized discriminative model D_ϕ is introduced to distinguish the positive samples and negative samples (see details in Section <ref>). During adversarial learning, the task of D_ϕ is to determine if an input sample is positive or negative, and the task of G_σ is to generate the unbiased negative samples to fool the discriminator D_ϕ. The model optimization process is driven by the newly proposed physics-informed policy gradient (see details in Section <ref>) which rewards the homogeneity of physics representation and structural characteristics between the positive and negative samples. §.§ GAN optimization via physics-informed policy gradient The physics-informed policy gradient method, inspired by reinforcement learning <cit.>, aims to optimize the learning process of the GAN-based model yielding physical extrapolations from the small sample data (i.e. low-shot learning). Mathematically, the physics-informed policy gradient method maximizes its expected reward J(σ) based on the physics law and structured characteristics from the small sample data. The J(σ) consists of two parts, the structural reward R_G_σ and physics representation action Q_D(ϕ)^G(σ). The J(σ) is defined as follows. J(σ) = 𝔼[R_G_σ(G_σ(sEMG_0:T))] · Q_Dϕ^Gσ((G_σ(sEMG_0:T), [F,θ]_0:T) = 𝔼[R_G_σ ([F̂, θ̂]_0:T)] · Q_Dϕ^Gσ([F̂, θ̂]_0:T, [F, θ]_0:T) where sEMG_0:T is the input multi-channel sEMG time series for T time steps. The J(σ) is beginning with the expected reward from a predetermined state from the positive samples. And then, the R_G_σ and Q_D(ϕ)^G(σ) will jointly optimize the generative network G_σ to generate the unbiased ([F̂, θ̂]_0:T) following the physics laws. Specifically, the structural reward R_G_σ is computed by the G_σ and defined as follows. R_G(([F̂, θ̂]_0:T) = exp ^ PL^2 ([F̂, θ̂]_0:T) where PL([F̂, θ̂]_0:T) is the physics law used to restrict the hierarchical structure of the generated data, which provides the additional information to the regularize the learning process from the small sample data. In this case, we use the Lagrange equation of motion <cit.> as the physics law, which is defined as follows. PL([F̂, θ̂]_0:T) = 1/T∑_t=1^T (m(θ̂_t)θ̈̂̈_t + c(θ̂_t, θ̇̂̇_t + g(θ̂_t) - ∑_n=1^NF̂^n_t)^2 where T is the number of time-steps, N is the channels of the F̂, m(θ̂_t), c(θ̂_t, θ̇̂̇_t, and g(θ̂_t) denote mass matrix, the Centrifugal and Coriolis force, and the gravity, respectively <cit.>. In this manner, the G_σ will generate the structured outputs of (F̂, θ̂). The Q_D(ϕ)^G(σ) is computed by the D(ϕ) and interprets the physics constraint action values as the estimated probability of being physics real by D(ϕ). These physics constraint action values lead to the improvement of GAN model in physical extrapolation from the small training data. The Q_D(ϕ)^G(σ) can be formulated as: Q_Dϕ^Gσ((G_σ( sEMG_0:T), [F, θ]_0:T) = 𝔼_[F̂, θ̂]_0:T∼ [F, θ]_0:T [log Dϕ([F̂, θ̂]_0:T)] + 𝔼_[F̂, θ̂]_0:T∼ G_σ(sEMG_0:T))[log (1-Dϕ([F̂, θ̂]_0:T))] For each epoch, once the new R_G and Q_D(ϕ)^G(σ) has been obtained, the policy model G(σ) will be updated following the gradient of the reward function as follows. ∇_σ J(σ) = 𝔼_[F̂, θ̂]_0:T∼ G_σ(sEMG_0:T)∑∇_σ R_G_σ([F̂, θ̂]_0:T|[F, θ]_0:T) · Q^G_σ_D_ϕ ([F̂, θ̂]_0:T, [F, θ]_0:T) Using likelihood ratios, the unbiased estimation for Eq. <ref> on one epoch can be described as follows. 
∇_σJ(σ) ≃1/T∑_t=1^T∑_y_t ∈ [F̂, θ̂]_t∇_σ R_G_σ(y_t|[F, θ]_t) · Q^G_σ_D_ϕ (y_t, [F, θ]_t) =1/T∑_t=1^T ∑_y_t ∈ [F̂,θ̂]_t G_σ(y_t|[F, θ]_t) ∇_σlog G_σ(y_t|[F, θ]_t) · Q^G_σ_D_ϕ(y_t, [F, θ]_t) The parameters of the policy model G_σ can be updated as follows. σ←σ + α∇_σ J(σ) where α∈ℝ is the learning rate. To summarize, Algorithm 1 provides an in-depth look at our proposed GAN optimization via a physics-informed policy gradient. Initially, G_σ is pre-trained on the training set sEMG = {X_1:T} using the maximum likelihood estimation (MLE). And then, the G_σ and D_ϕ undergo adversarial learning. As the G_σ improves, the D_ϕ is routinely retrained to stay synchronized with the G_σ improvement. We ensure balance by generating an equal number of negative samples for each training step as the positive samples. §.§ The generative network The proposed physics-informed low-shot learning method does not depend on the specific generative network architecture. In this study, considering the long-term temporal dependencies of the F and θ sequences to the input multi-channel sEMG sequence, we employ the Long Short-Term Memory (LSTM) cells to our generative model <cit.>. The architecture of the generator network G is shown in Fig.<ref>. It serves three functions: multi-channel sEMG feature extraction, residual learning with LSTM, and musculoskeletal tokens sequence generation. Firstly, for the multi-channel sEMG feature extraction, a 1-dimensional (1D) convolution filter with a 2 /times 1 kernel is introduced to capture the multiple sEMG features at time step t. The extracted convolution features represent the hierarchical structures of the multi-channel sEMG. In this study, the convolution kernel is set to 1 × b for a b-channel sEMG input. Considering the batch normalization (BN) layer would normalize the features and get rid of the range flexibility for upscaling features <cit.>, no BN layer is used here to avoid blurring the sEMG responses hidden in the extracted features. The max-pooling layer is used to combine the extracted sEMG features into a single neuron by using the maximum value from each convolution window. The max-pooling operation reduces the number of parameters and network computation costs and has the effect of adjusting over-fitting. Secondly, the LSTM blocks are employed for residual learning of the time-series characteristics of the target musculoskeletal tokens. The LSTM layer is well suited for time-series sequence generation by addressing the explosive and vanishing gradient issues <cit.>. An LSTM block consists of a memory cell, an input gate, an output gate, and a forget gate, the detailed definitions of the components are described in <cit.>'s study. Specifically, in this study, in time step t, the memory cell remembers structured feature values over the previous t-1 intervals and the three gates regulate the flow of information into and out of the memory cell, which has a great preference for preserving long-term temporal structure characteristics by consolidating previous temporal correlations as memory units. Meanwhile, the high-level sEMG features extracted from the convolution layer represent the current multi-channel sEMG responses to muscle force and joint kinematics. The skip-connect of the memory cell and the high-level sEMG features not only represent extracted local kinetic invariances but also represent the temporal dynamics of the motions. It is noteworthy that the traditional LSTM layer only produces fitness between the current time step and the previous time steps. 
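Before turning to the Monte Carlo roll-out below, the structural reward R_G and the generator update σ ← σ + α∇_σJ(σ) defined above can be summarized in a short sketch (ours, not the authors' implementation). For a single-DoF joint we use a viscous term and a sinusoidal gravity term as stand-ins for c(·,·) and g(·); the reward is written here as exp(-PL^2), on the assumption that it should decay with the Lagrangian residual so that physical consistency is rewarded.

import torch

def lagrange_residual(theta_hat, f_hat, dt, m=1.0, c_coef=0.1, g_coef=9.81):
    # finite-difference velocity and acceleration of the generated joint angle
    theta_dot = (theta_hat[1:] - theta_hat[:-1]) / dt           # length T-1
    theta_ddot = (theta_dot[1:] - theta_dot[:-1]) / dt          # length T-2
    # stand-ins for the mass, Coriolis/centrifugal, and gravity terms (assumed forms)
    res = (m * theta_ddot + c_coef * theta_dot[1:]
           + g_coef * torch.sin(theta_hat[1:-1]) - f_hat[1:-1].sum(dim=-1))
    return (res ** 2).mean()

def structural_reward(theta_hat, f_hat, dt):
    pl = lagrange_residual(theta_hat, f_hat, dt)
    return torch.exp(-pl ** 2)                                  # assumed sign, see note above

T, dt = 100, 0.01
theta_hat = torch.randn(T, requires_grad=True)                  # stand-in generator output
f_hat = torch.randn(T, 2, requires_grad=True)                   # two estimated muscle forces
q_value = torch.tensor(0.7)                                     # stand-in D_phi "physically real" score
objective = structural_reward(theta_hat, f_hat, dt) * q_value   # reward times action value, as in J(sigma)
objective.backward()                                            # ascend this objective to update G_sigma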
However, we also expect the model to account for the resulting future outputs. In order to compute the action value for future physical fitness, a Monte Carlo (MC) search with a roll-out strategy is used to sample the unknown last T-t time steps, and the N-time Monte Carlo search can be formulated as: {(F_0:T, θ_0:T)^1, ..., (F_0:T, θ_0:T)^N} = MC(F_0:t, θ_0:t) Finally, fully connected layers are used to generate the musculoskeletal token sequence over a motion period. The output of the LSTM unit is flattened to a feature vector and scaled to the muscle force F and joint kinematics θ. §.§ The discriminative model In this study, a ϕ-parameterized discriminator network D_ϕ is built to guide the iterations of G_σ from the small sample data. D_ϕ outputs a probability indicating the heterogeneity between [F̂, θ̂] and [F, θ]. For this purpose, we employ a convolution neural network (CNN) <cit.> as the discriminative model because of its successful applications in sequence classification. In this study, we concentrate on the situation where the discriminator estimates the likelihood that a completed [F̂, θ̂] time series comes from the physical-law model (i.e. ID). We first represent an input muscle force and joint kinematics time series x_1,...,x_T as E_1:T = [F̂, θ̂]_1 ⊕ [F̂, θ̂]_2 ⊕ ... ⊕ [F̂, θ̂]_T where x_t ∈ℝ^b is the muscle force and joint kinematics in time-step t and ⊕ is the concatenation operator used to build the matrix E_1:T∈ℝ^T× b. Then the convolution operator is used to produce a new feature map: c_i = ρ(w ⊙ E_i:i+l-1 + b) where ⊙ is the element-wise product, b is a bias term and ρ is a non-linear function. In this study, the discriminator, as shown in Fig.<ref>, employs various numbers of kernels with different window sizes to extract different features from the input musculoskeletal sequence. A max-pooling operation is then applied over the feature maps to reduce the number of parameters and the network computation costs. In order to enhance the discrimination performance, a highway operator <cit.> based on the pooled feature maps is also employed in our discriminative model. Finally, a fully connected layer with softmax activation is used to output the estimation of the likelihood that the input sequence conforms to physical laws. § MATERIAL AND EXPERIMENTAL METHODS In this study, we test our proposed method on two joint motion scenarios. The first one is knee joint modeling from an open-access dataset of walking trials, and the second one is wrist joint modeling from a self-collected dataset of wrist motions. §.§ Open-access dataset of walking trials The open-access dataset of walking trials is obtained from a real-world experiment reported in <cit.>. This dataset involves six healthy participants with an average age of 12.9 ± 3.2 years and an average weight of 51.8 ± 19.1 Kg. Participants are instructed to walk at four distinct speeds, which include very slow (0.53 ± 0.1 m/s), slow (0.75 ± 0.1 m/s), free (1.15 ± 0.08 m/s), and fast (1.56 ± 0.21 m/s) speeds. The sEMG signals are captured from the biceps femoris short head (BFS) and the rectus femoris (RF), as they are the primary flexor and extensor of the knee joint. In this study, we normalize each gait cycle into 100 frames for model training and testing, and keep the original data for model extrapolation evaluation. In the model training and testing session, each walking trial sample is formatted into a source matrix that includes the time step, gait motion data, and enveloped sEMG signals. 
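As an illustration of the sample formatting just described (not the authors' pipeline), one gait cycle can be resampled onto 100 frames and stacked into a source matrix of time step, gait motion data, and the two enveloped sEMG channels:

import numpy as np

def to_source_matrix(t, knee_angle, semg_bfs, semg_rf, n_frames=100):
    # resample one gait cycle onto n_frames and stack the columns
    phase = np.linspace(t[0], t[-1], n_frames)
    cols = [phase] + [np.interp(phase, t, sig) for sig in (knee_angle, semg_bfs, semg_rf)]
    return np.stack(cols, axis=1)          # shape (n_frames, 4)

# stand-in data for a single gait cycle recorded at an arbitrary rate
t = np.linspace(0.0, 1.1, 137)
sample = to_source_matrix(t, np.sin(2 * np.pi * t),
                          np.abs(np.sin(4 * np.pi * t)), np.abs(np.cos(4 * np.pi * t)))
print(sample.shape)                        # (100, 4)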
All of the samples from different participants are combined to create a comprehensive dataset for model training and testing. §.§ Self-collected dataset of wrist motions Our wrist motions experiment, approved by the MaPS and Engineering Joint Faculty Research Ethics Committee of the University of Leeds (MEEC 18-002), involved six participants with signed consent. Participants were instructed to keep their torso straight with their shoulder abducted at 90 degrees and their elbow joint flexed at 90 degrees. The VICON motion capture system is used to record continuous wrist flexion/extension motion. Joint motions are calculated using an upper limb model with 16 reflective markers with 250 Hz sampling rate. Concurrently, sEMG signals are captured from the primary wrist muscles (n = 1, 2,..., 5), including the flexor carpi radialis (FCR), the flexor carpi ulnaris (FCU), the extensor carpi radialis longus (ECRL), the extensor carpi radialis brevis (ECRB), and the extensor carpi ulnaris (ECU) using Avanti Sensors (sampling rate is 2000 Hz). Electrodes are placed by palpation and their placement is validated by observing the signal during contraction before the experiment. The sEMG signals and motion data were synchronized and resampled at 1000 Hz. Each participant performed five repetitive trials with a three-minute break between trials to prevent muscle fatigue. The recorded sEMG signals are pre-processed by a 20 Hz and 450 Hz band-pass filter, full rectification, and a 6 Hz low-pass filter. These signals are then normalized based on the maximum voluntary contraction recorded prior to the experiment, yielding the enveloped sEMG signals. We normalize each motion cycle into 156 frames for model training and testing, and the original data for model extrapolation evaluation. A total of 360 motion data are then combined to create a comprehensive dataset for model training and testing, and 6 motion data are used for model evaluation. §.§ Benchmark models and parameter settings To evaluate the performance and effectiveness of the proposed physics-informed policy gradient for low-shot generative adversarial learning, the benchmark models employ three representative methods, including physics-Informed convolutional neural network (PI-CNN) <cit.> which represents the state-of-the-art deep learning based musculoskeletal modeling method, ML-ELM <cit.> which represents the general musculoskeletal modeling method, and the vanilla GAN which represents the traditional GAN family without physical-law <cit.>. §.§ Evaluation metrics The evaluation metrics include 1) the metrics for evaluating the quality of the generated samples including the information entropy associated peak signal-to-noise ratio (PSNR) <cit.>, coefficient of Determination (R^2) <cit.>, root mean square error (RMSE) <cit.>, Spearman's Rank Correlation Coefficient (SRCC) <cit.>, and 2) the metrics for evaluating the mode collapse of GANs, including 1) inception score (IS) <cit.>, and 2) Frechet inception distance (FID) <cit.>. § RESULTS AND DISCUSSION In this section, we evaluate the performance of the proposed physics-informed low-shot learning in the knee joint and wrist joint scenarios. We first carry out overall comparisons of the results from the proposed and benchmark methods. We also evaluate the model performance on small training data and handling mode collapse. Lastly, we investigate the robustness and generalization performance of the proposed method in intersession scenarios. 
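For reference, the enveloping applied to the raw wrist sEMG above (20-450 Hz band-pass, full rectification, 6 Hz low-pass, MVC normalization) can be sketched as follows; the filter orders and the use of zero-phase filtering are our assumptions, as they are not specified here.

import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=2000.0, mvc_peak=1.0, band=(20.0, 450.0), lp_cut=6.0, order=4):
    b, a = butter(order, band, btype="bandpass", fs=fs)
    bandpassed = filtfilt(b, a, raw)
    rectified = np.abs(bandpassed)                 # full-wave rectification
    b, a = butter(order, lp_cut, btype="lowpass", fs=fs)
    envelope = filtfilt(b, a, rectified)
    return envelope / mvc_peak                     # normalize by the maximum voluntary contraction

fs = 2000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
raw = np.random.randn(t.size) * (0.2 + np.exp(-((t - 1.0) ** 2) / 0.02))   # stand-in raw sEMG burst
env = emg_envelope(raw, fs=fs, mvc_peak=np.abs(raw).max())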
The training of the proposed framework and benchmark methods was conducted using PyTorch on a workstation equipped with NVIDIA Quadro K4200 graphics cards and 256G RAM. §.§ Overall evaluation of the muscle force dynamics modeling In this section, we first carry out overall comparisons between the proposed and benchmark methods on the test dataset. Fig. <ref> demonstrates the overall results of the joint kinematics generation in one motion circle from the proposed and benchmark methods for both the knee joint (the first row of Fig. <ref>) and wrist joint cases (the second row of Fig. <ref>). The average joint kinematics and standard deviation distribution from the proposed method align well with the ground truth in both the knee joint and wrist joint cases. These findings indicate the proposed model achieves the best performance among the benchmark models on the unbiased estimation of the joint kinematics. Similarly, Fig. <ref> and Fig.<ref> demonstrate the overall results of the muscle force estimations in one motion circle for both the knee joint (i.e. RF and BFS) and wrist joint (i.e. FCR, FCU, ECRL, ECRB, and ECU) cases, respectively. The average muscle forces estimated by the proposed method align well with the inverse dynamics, demonstrating the excellent multiple muscle tracking capability of the proposed model. In addition, the standard deviation distribution of the proposed model-generated muscle forces is perfectly consistent with the standard deviation distribution of the inverse dynamics-based references. These results indicate that the proposed model achieves the best performance among the benchmark models on the unbiased estimation of the muscle force from the multi-channel sEMG signals. To further assess the extrapolation performance quantitatively, we present detailed comparisons of the proposed and benchmark models on both of the test data and evaluation data. Table <ref> and Table <ref> respectively shows the results for the knee joint case and the wrist joint case. The results indicate that the proposed model performs best on both of the testing and evaluation data. Specifically, for model testing, the PSNR, R^2, RMSE, SRCC of the proposed model are 15.57%, 6.22%, 28.08%, 7.2% higher than that of the second best model (i.e. PI-CNN). For model evaluation, the PSNR, R^2, RMSE, SRCC of the proposed model are 24.72%, 16.29%, 38.99%, 17.66% higher than that of the second best model (i.e. GAN). In addition, because the evaluation data involve the original sEMG recordings, the comparison of the testing results and evaluation results indicates the model extrapolation from the experimental scenarios to real scenarios. The proposed model shows the best extrapolated estimation of muscle force and joint kinematics among the benchmark models, the results from the testing data and evaluation data is consistent. In contrast, the performance of the benchmark models show serious decline on evaluation data. §.§ Evaluation of low-shot learning The proposed physics-informed policy gradient incorporates the temporal relationship of the muscle force and joint kinematics dynamics from the Lagrange motion equation, resulting in an improved kinetics estimation from the low-shot samples. Initially, the physical information is used to constrain the model reward accumulated following the periodic multi-channel sEMG signals. And then, the accumulative reward is used to guide the Monte Carlo search to generate the unbiased estimation of muscle force and joint kinematics dynamics. 
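The quantitative metrics used throughout these evaluations (PSNR, R^2, RMSE, SRCC) can be computed as in the sketch below (our illustration; in particular, the peak value used for PSNR is taken to be the dynamic range of the reference signal, which is an assumption since the convention is not stated).

import numpy as np
from scipy.stats import spearmanr

def evaluation_metrics(y_ref, y_est):
    err = y_ref - y_est
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_ref - y_ref.mean()) ** 2)
    peak = y_ref.max() - y_ref.min()               # assumed PSNR peak value
    psnr = 20.0 * np.log10(peak / rmse)
    srcc = spearmanr(y_ref, y_est).correlation
    return {"PSNR": psnr, "R2": r2, "RMSE": rmse, "SRCC": srcc}

t = np.linspace(0.0, 1.0, 200)
y_ref = np.sin(2 * np.pi * t)                      # stand-in reference trajectory
y_est = y_ref + 0.05 * np.random.randn(t.size)     # stand-in estimate
print(evaluation_metrics(y_ref, y_est))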
To quantitatively assess the effectiveness of the proposed method on low-shot learning, we first regard the modeling results shown in Table <ref> and Table <ref> as the baselines that represent the optimal performance of the proposed and benchmark models, and then we train the models with different training sample sizes for 1500 epochs as low-shot learning. The percentages of the low-shot learning results relative to the baseline joint kinematics modeling results, denoted as P-PSNR, P-R^2, P-RMSE, and P-SRCC, are used as the evaluation metrics to describe what percentage of the baseline models' performance can be achieved with the new models. The evaluation of the low-shot learning of the proposed and benchmark models on the knee joint and wrist joint kinematics modeling is shown in Table <ref>. It is obvious that the proposed model with the physics-informed policy gradient outperforms all of the benchmark models in low-shot learning. The 10-shot learning is able to achieve over 80% of the baseline performance in terms of PSNR, R^2, RMSE, and SRCC. In comparison, the PI-CNN and GAN models require at least 80-shot learning to achieve a similar modeling performance. Therefore, it can be inferred that the proposed physics-informed policy gradient relies heavily on the physical representations and temporal structural characteristics of the training data, rather than on the quantity of the data. This is encouraging, as it suggests that the proposed method relieves the applications of deep learning in biomechanical engineering from the general issue of limited sample size. §.§ Mode collapse evaluation Mathematically, the generative model can easily converge to a biased estimation caused by mode collapse, which leads to the generated samples only being located in the part of the real distribution where they can fool the discriminative model, ignoring other modes of the real distribution during adversarial learning. To handle this issue, the proposed physics-informed policy gradient alleviates the random noise and makes the generated feature sequence governed by the physics law, which facilitates the estimation of compound kinematics patterns and achieves an unbiased estimation of the generated kinematics. In order to evaluate the performance of the proposed method on alleviating mode collapse, we test and compare the proposed model with the benchmark models from three aspects: 1) a quantitative evaluation of the diversity of the generated motions, based on the distance-derived IS and FID metrics; 2) a monotonicity assessment on the generator iterations during the network training process; and 3) visualization of the distributions of the real and the generated motion samples. Firstly, the quantitative evaluation of the diversity of the generated motions is conducted on the testing dataset. A higher IS and a lower FID indicate better diversity of the generated motion samples, which further indicates the alleviation of mode collapse. The results demonstrated in Table <ref> show the proposed model outperforms the competitors in terms of the IS and FID measurements for both the knee joint and wrist joint motion generation. In addition, the proposed model is 19.11% higher in IS and 14.23% lower in FID than the benchmark GAN model, whose network architecture is the same as that of the proposed model. These findings suggest that the proposed physics-informed policy gradient optimization approach has great performance in alleviating mode collapse during adversarial learning. 
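For completeness, the Fréchet distance underlying the FID values reported above can be sketched as follows (our illustration; how feature vectors are extracted from the real and generated motion samples is not specified here and is assumed), before turning to the iteration-wise monotonicity assessment.

import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_real, feat_gen):
    mu1, mu2 = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov1 = np.cov(feat_real, rowvar=False)
    cov2 = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * covmean))

rng = np.random.default_rng(0)
feat_real = rng.normal(size=(500, 8))      # stand-in feature vectors of real motions
feat_gen = rng.normal(size=(500, 8))       # stand-in feature vectors of generated motions
print(frechet_distance(feat_real, feat_gen))   # near zero for identical distributions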
Secondly, in order to further explore the performance of the proposed physics-informed policy gradient on the mode collapse issue, we compare the generator iterations of the same GAN architectures with and without the physics-informed policy gradient (Fig. <ref>). The IS and FID curves from the GAN with the proposed physics-informed policy gradient are more monotonous than the GAN without the physics-informed policy gradient, along with the increase of iteration number. Thus, the curves of IS from the proposed physics-informed policy gradient steadily increase and the curves of FID steadily decrease for both knee joint (<ref>a and b) and wrist joint (<ref>c and d) cases. §.§ Model application on intra-session scenario In musculoskeletal modeling, the intra-session scenario is regarded as the multiple sets of motions that occur within the same session. To test the robustness of the proposed model in the intra-session scenario, we use the knee joint data with different walking speeds for one subject as the intra-session evaluation dataset. The muscle force and joint kinematics modeling results, as shown in Fig. <ref>, indicate that the proposed framework performs best among the baseline methods. Importantly, the median and interquartile values of the proposed model with physics-informed policy gradient remain consistent with the real data across different walking speeds. In comparison, the median and quartiles of the baseline methods, such as the GAN model without using the physics-informed policy gradient, show significant inconsistencies with the real data, indicating a declined performance in the intra-session scenario due to the variability in walking speeds. These findings suggest that the model optimized by the proposed physics-informed policy gradient has great robustness in intra-session scenarios. §.§ Model application on inter-session scenario The inter-session scenario generally refers to a situation where motion data are collected across multiple sessions. To test the robustness of the proposed model in the inter-session scenario, we use the wrist joint data with different subjects as the evaluation dataset. The muscle force and joint kinematics modeling results, as shown in Fig. <ref>, indicate that the proposed framework performs best on the musculoskeletal modeling among the baseline methods. Specifically, the median and interquartile values of the proposed model with physics-informed policy gradient remain consistent with the real data across different subjects. In comparison, the baseline methods, such as the GAN model without using the physics-informed policy gradient, show a declined performance in the inter-session scenario due to the variability in walking speeds. These findings suggest that the model optimized by the proposed physics-informed policy gradient has great robustness in inter-session scenarios. § CONCLUSION This paper develops a physics-informed low-shot learning method, which seamlessly integrates the Lagrange equation of motion and inverse dynamic muscle model into the adversarial learning process, to train the generative network for the unbiased estimation of the muscle force and joint kinematics from the small size sEMG time series. Specifically, the Lagrange equation of motion is introduced as physical constraint, which facilitates the generator to estimate the muscle force and joint kinematics with more temporal structural representations. 
Meanwhile, the physics-informed policy gradient rewards the physical consistency between the generated muscle force and joint kinematics and the inverse dynamics-based references, which improves the extrapolation performance of the generative network. Comprehensive experiments on the knee joint and wrist joint indicate the feasibility of the proposed method. The resultant findings suggest that the proposed method performs well in handling the mode collapse issue on small sample data, and that the estimations of the muscle forces and joint kinematics are unbiased compared to the physics-based inverse dynamics. These findings suggest that the proposed method may reduce the gaps between laboratory prototypes and clinical applications. However, it is worth noting that the physics reference (i.e. the inverse dynamics for this study) plays an important role in constraining the physics representation of the generated samples. Therefore, the choice of physics module may vary when the proposed approach is extended to other application cases. Going forward, we plan to delve deeper into the properties of the physics-informed deep learning framework in the context of sEMG-based musculoskeletal modeling. We aim to investigate the potential of the low-shot learning-based model for the continuous and simultaneous estimation of multiple joint kinematic chains from sEMG signals. We also plan to adjust the components of the proposed method to cater to different application scenarios. Furthermore, we intend to evaluate the reliability and accuracy of the proposed framework on more complex movements.
http://arxiv.org/abs/2307.03908v1
20230708054722
Incorporating Deep Q -- Network with Multiclass Classification Algorithms
[ "Noopur Zambare", "Ravindranath Sawane" ]
cs.LG
[ "cs.LG" ]
1 Indian Institute of Technology, Jodhpur, India 2 Western University, Ontario, Canada In this study, we explore how Deep Q-Network (DQN) might improve the functionality of multiclass classification algorithms. We will use a benchmark dataset from Kaggle to create a framework incorporating DQN with existing supervised multiclass classification algorithms. The findings of this study will bring insight into how deep reinforcement learning strategies may be used to increase multiclass classification accuracy. They have been used in a number of fields, including image recognition, natural language processing, and bioinformatics. This study is focused on the prediction of financial distress in companies in addition to the wider application of Deep Q-Network in multiclass classification. Identifying businesses that are likely to experience financial distress is a crucial task in the fields of finance and risk management. Whenever a business experiences serious challenges keeping its operations going and meeting its financial responsibilities, it is said to be in financial distress. It commonly happens when a company has a sharp and sustained recession in profitability, cash flow issues, or an unsustainable level of debt. DQN (Deep Q - Network)Deep Reinforcement Learning Financial Distress Multiclass Classification, Decision Tree Classifier Naive Bayes, Random Forest Classifier § INTRODUCTION §.§ Background The goal of Reinforcement Learning (RL), a branch of machine learning, is to train agents how to make decisions sequentially in an environment that optimises a reward signal. By interacting with the environment, getting feedback in the form of rewards or penalties, and adapting their behaviour in response, RL algorithms learn through trial and error. The Deep Q-Network (DQN) is a deep reinforcement learning method that combines the Q-learning algorithm and the capability of deep neural networks. Financial distress refers to a state in which a company faces considerable challenges in meeting its financial obligations. Early indications of financial problems might help proactive actions like restructuring, obtaining more finance, or putting cost-cutting measures into place. Machine learning has made breakthroughs in recent years when it comes to applying reinforcement learning algorithms, particularly DQN, to different problem domains. We use a wide range of supervised learning algorithms, such as Decision Tree, Random Forest Classifier, and Naive Bayes, to create the DQN framework. The DQN ensemble's underlying models are represented by these algorithms. We intend to study the potential advantages and performance enhancements that can be achieved by combining supervised learning with the reinforcement learning approach of DQN using supervised learning algorithms as the foundation models. The use of DQN for multiclass classification to forecast financial difficulties in businesses is explored in this study. §.§ Problem Statement The goal of this paper is to investigate the use of Deep Q-Network in multiclass classification problems. We intend to adapt and use DQN's skills for resolving multiclass classification issues despite the fact that its typical application is mostly in the field of reinforcement learning. The subject of interest is the application of DQN for multiclass classification to predict financial distress in businesses. By effectively resolving this problem, we want to open up the possibility of applying reinforcement learning principles to a variety of classification problems. 
§ STATE OF THE ART In DQN, our goal is to train an action-value function Q(s, a) that calculates the predicted cumulative reward for performing action 'a' in state 's'. The Bellman Equation or Q-Learning update equation is defined as follows: Q(s,a) = (1 - ϵ) Q(s,a) + α[ r + γ max Q(s^', a^') - Q(s,a) ] where, Q(s, a) = Current estimate of the predicted future benefits of action 'a' in state 's' ϵ = exploration-exploitation trade-off α = learning rate r = immediate received reward γ = discount factor A variation of the Q-learning process called Deep Q-Network makes use of neural networks to make approximations of the Q-value function. The expected reward for performing a specific action in a given condition is provided by the Q-value function. The Q-value function is represented as a table in conventional Q-learning but as a neural network in DQN. Experience replay and a technique called fixed Q-targets are both used by the DQN algorithm to stabilise the learning process. Experience replay involves sampling small batches of experiences for training and storing observed transitions (s, a, r, and s') in a replay buffer. Using a target network with set parameters for a predetermined number of iterations before updating it with the parameters of the online network is recognised as leveraging fixed Q-targets. § METHODOLOGY §.§ Dataset The study involves the use of a dataset gathered by Kaggle that includes different financial parameters and company characteristics. The dataset, which is accessible in CSV format, includes statistics on the company's performance as well as relevant contextual information. Using methods like label encoding, a preprocessing step is implemented to handle missing data, normalise features, and transform categorical variables. Then, training and testing sets are created from the preprocessed dataset. §.§ Baseline Multiclass Classification Algorithms §.§.§ Decision Tree In this algorithm, the space of features is recursively divided according to a set of criteria in order to generate a decision tree. Information gain or Gini impurity is the most widely used criterion. They can handle categorical and numerical features, as well as non-linear relationships, and they can capture both. Decision trees, show a tendency to overfit the training set if they are not appropriately regularised or pruned. Overfitting can be reduced using strategies like pruning, establishing a minimum number of samples needed to split a node, or using ensemble methods. §.§.§ Random Forest Classifier An ensemble technique called the Random Forest Classifier combines several decision trees to produce predictions. A random subset of features is taken into account at each split of each tree, which is trained on a bootstrap sample of the training data. By combining the predictions of various trees, either through majority voting or averaging, the final prediction is obtained. §.§.§ Naive Bayes The Naive Bayes algorithm is a probabilistic classifier that relies on the Bayes theorem and makes the assumption that features are independent of the class. Given the input features, it calculates the probabilities of each class and chooses the class with the highest probability as the prediction. §.§ Multiclass Classification Algorithms with DQN Integration §.§.§ Defining Agent The DQN class is used to represent the agent. Based on the input features given, it acts as the decision-making entity that learns to categorise the different levels of financial distress. 
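The update rule above, together with the epsilon-greedy selection and the ±1 reward used by the agent, can be illustrated with a short self-contained sketch (ours, not the paper's implementation; the toy task and all parameter values are assumptions). Note that in the common form of the update the weighting uses the learning rate α rather than ϵ.

import numpy as np

def q_update(q_table, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = r + gamma * np.max(q_table[s_next])
    q_table[s, a] += alpha * (td_target - q_table[s, a])

def epsilon_greedy(q_table, s, rng, epsilon=0.1):
    if rng.random() < epsilon:
        return int(rng.integers(q_table.shape[1]))   # explore: random class label
    return int(np.argmax(q_table[s]))                # exploit: best known class label

# Toy run: 5 discrete states, 3 class labels, reward +1 for the correct label
# of each state and -1 otherwise, mirroring the reward design described below.
true_label = np.array([0, 2, 1, 0, 2])
q = np.zeros((5, 3))
rng = np.random.default_rng(0)
for _ in range(2000):
    s = int(rng.integers(5))
    a = epsilon_greedy(q, s, rng)
    r = 1 if a == true_label[s] else -1
    q_update(q, s, a, r, s)                          # self-transition in this toy task
print(np.argmax(q, axis=1))                          # recovers true_label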
The agent employs a method akin to the DQN, using a group of Decision Tree Classifier, Random Forest Classifier and Naive Bayes models as the Q-network. §.§.§ Defining Environment In this case, the environment is the classification problem itself, which involves determining the levels of financial distress based on the given input features. The agent receives rewards from the environment as feedback, which helps it improve its classification performance. §.§.§ State Representation The input features that were utilised to train the agent define the state representation. In this instance, the features Company, Time, x1, x2, x3, and x4 serve as representations of the state. These features are taken out of the data frame and sent to the classification agent as input. §.§.§ Setting Reward Function The act() method of the DQN class contains the definition of the reward. If any of the true class labels in the y variable match the predicted action (class label), the agent is rewarded with a value of 1. If not, it is rewarded with -1. The goal of the reward system is to encourage the agent to forecast classes correctly. §.§.§ Selection of Action The action selection method makes sure that the model chooses the best class label depending on the situation at hand and previously learnt information. The class labels that are available in this situation make up the action space. To determine the class for a particular input, the agent will select an action (class label) from this collection. The size of the action space is given by the number of classes in the classification problem. §.§.§ Training The training process iterates through episodes and through the steps within each episode. Following an epsilon-greedy exploration-exploitation strategy, the agent chooses a course of action (class label). It is rewarded according to the accuracy of its forecast, and the ensemble of decision tree models is updated. Training continues for the specified number of episodes. §.§.§ Evaluation By comparing the predicted labels with the actual labels on the test data, it is possible to assess how accurate the agent's predictions are. The calculated accuracy of the base model and the accuracy of the DQN-based agent after training are compared. §.§ Evaluation Metrics The metrics involved in the analysis are accuracy, recall score and precision score. The performance of the models was also analyzed using a confusion matrix. § RESULTS AND ANALYSIS §.§ Comparison with Baseline Algorithms On the chosen benchmark dataset, the performance of the proposed framework, which incorporates Deep Q-Network with multiclass classification algorithms, is compared with that of the baseline algorithms.

Comparative Analysis
Model            Accuracy  Recall  Precision
Decision Tree    0.98      0.50    0.50
  with DQN       0.33      0.28    0.34
Random Forest    0.99      0.50    0.50
  with DQN       0.32      0.29    0.34
Naive Bayes      0.99      0.75    0.67
  with DQN       0.31      0.28    0.34

§.§ Analysis of Computational Efficiency §.§.§ Decision Tree * In comparison to the DQN-based model, the baseline model (Decision Tree Classifier) often takes less time to train. As they do not require iterative optimisation, decision trees can be trained quickly because they directly learn the decision boundaries and feature splits. In comparison, the DQN-based model employs a more computationally costly training procedure that requires repeatedly training an ensemble of Decision Tree Classifiers. 
* Compared to the DQN-based approach, the baseline model often uses a smaller amount of memory. To store the separate models, the ensemble of Decision Tree Classifiers utilised in the DQN model needs more memory. The baseline approach, in comparison, only has to keep one decision tree, which requires less memory. * Comparing the baseline model to the DQN-based approach, the baseline model is certainly more effective in terms of computation. §.§.§ Random Forest Classifier * In comparison to the DQN-based model, the baseline model (Random Forest Classifier) often requires less training time. Due to its ability to create numerous decision trees at once while using parallel processing, random forests can be trained effectively. A random subset of characteristics and data samples is used to train each decision tree individually. The DQN-based model, on the other hand, requires several pieces of training for an ensemble of Random Forest Classifiers, which can be computationally more taxing. * Usually, the baseline model uses less memory than the DQN-based model. The DQN model's ensemble of Random Forest Classifiers requires extra memory to store each individual model. * In conclusion, compared to the DQN-based model, the baseline model (Random Forest Classifier) is anticipated to be computationally more efficient in terms of training time, inference time, and memory use. §.§.§ Naive Bayes * Due to its simplicity, the basic framework (Gaussian Naive Bayes) is computationally effective for both training and prediction. While the DQN model similarly employs Naive Bayes classifiers, the ensemble technique adds more complexity and increases processing overhead when compared to the base model. * A single Naive Bayes classifier, along with its associated parameters and probability distributions, must be stored in memory by the base model (Gaussian Naive Bayes). In order to store an ensemble of Naive Bayes classifiers, which consists of various models with their unique parameters and probability distributions, the DQN model needs memory. § DISCUSSION §.§ Advantages Multiclass classification methods that incorporate Deep Q-Network (DQN) have various benefits and provide special capabilities to the task. Benefits involve : * Handling Complex Decision-Making * Adaptability to Dynamic Environments * Handling Imbalanced Datasets * Real-time Classification §.§ Limitations §.§.§ Large Memory Requirements Especially when employing experience replay, which includes storing and sampling from a significant replay buffer, DQN often needs a lot of RAM. §.§.§ Curse of Dimensionality Finding the most effective measures and achieving efficient convergence can be more difficult when the DQN training and learning process is impacted by the curse of dimensionality. Consequently, DQN's ability to do multiclass classification well may be constrained by its ability to handle significant feature spaces. §.§.§ Limited Generalization to New Classes It often acquires policies unique to the classes found in the training set. They are efficient at handling well-known classes, but they have a limited ability to generalise to unfamiliar or new classes. In dynamic classification contexts where new classes continually emerge, the technique is less adaptive since incorporating new classes into the model often requires retraining or considerable fine-tuning. §.§ Future scope Future prospects are promising when Deep Q-Network is incorporated into multiclass classification algorithms. 
Such as Transfer Learning and Knowledge Transfer, Real-time Classification, Hierarchical Multiclass Classification, Adaptive Learning and Dynamic Feature Selection, and many others. § CONCLUSION The study uses multiclass classification to show the significance of using DQN for financial distress prediction in businesses. The study's findings may help businesses, investors, and financial institutions make informed decisions and take preventive action to reduce the risks associated with the financial crisis. Possible reasons for less accuracy by the DQN model than the base model : * The classifier for the base model is trained directly on the labelled training data using a traditional supervised learning methodology. In a single step, it learns the probability distributions and class boundaries from the data. While the DQN model iteratively changes its ensemble of classifiers based on the rewards it receives from the environment, it is trained using a reinforcement learning methodology. This recurrent training procedure may generate noise and instability, resulting in less accurate convergence. * While lowering bias and variance can help ensembles perform better, they also add to the complexity and risk of inconsistencies across the various models. Lower accuracy may be the consequence if the ensemble is unable to fully capture the underlying patterns and relationships. § REFERENCES * Melrose Roderick, James MacGlashan, Stefanie Tellex "Implementing the Deep Q-Network" arXiv:1711.07478v1 [cs.LG] 20 Nov 2017 * Jianqing Fan, Zhaoran Wang, Yuchen Xie, Zhuoran Yang Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:486-489, 2020 * Z. Gao, Y. Gao, Y. Hu, Z. Jiang and J. Su, "Application of Deep Q-Network in Portfolio Management," 2020 5th IEEE International Conference on Big Data Analytics (ICBDA), Xiamen, China, 2020, pp. 268-275, doi: 10.1109/ICBDA49040.2020.9101333. * Mills P. Solving for multi-class: a survey and synthesis. arXiv preprint arXiv:1809.05929. 2018 Sep 16. * Wen G, Wu K. Building decision tree for imbalanced classification via deep reinforcement learning. Asian Conference on Machine Learning 2021 Nov 28 (pp. 1645-1659). PMLR. * Fu Q, Li K, Chen J, Wang J, Lu Y, Wang Y. Building energy consumption prediction using a deep-forest-based DQN method. Buildings. 2022 Jan 27;12(2):131. * Reddy EM, Gurrala A, Hasitha VB, Kumar KV. Introduction to Naive Bayes and a Review on Its Subtypes with Applications. Bayesian Reason. Gaussian Process. Mach. Learn. Appl. 2022 Apr 19:1-4. * Whitaker RB. The early stages of financial distress. Journal of Economics and Finance. 1999 Jun;23(2):123-32. * Lau AH. A five-state financial distress prediction model. Journal of Accounting Research. 1987 Apr 1:127-38. * Mselmi N, Lahiani A, Hamza T. Financial distress prediction: The case of French small and medium-sized firms. International Review of Financial Analysis. 2017 Mar 1;50:67-80. * Grandini M, Bagli E, Visani G. Metrics for multi-class classification: an overview. arXiv preprint arXiv:2008.05756. 2020 Aug 13. * Toupas P, Chamou D, Giannoutakis KM, Drosou A, Tzovaras D. An intrusion detection system for multi-class classification based on deep neural networks. In2019 18th IEEE International Conference on machine learning and Applications (ICMLA) 2019 Dec 16 (pp. 1253-1258). IEEE. * Li J, Liu Y, Yin R, Zhang H, Ding L, Wang W. Multi-class learning: From theory to algorithm. Advances in Neural Information Processing Systems. 2018;31.
http://arxiv.org/abs/2307.05164v2
20230711104759
Conformal bounds in three dimensions from entanglement entropy
[ "Pablo Bueno", "Horacio Casini", "Oscar Lasso Andino", "Javier Moreno" ]
hep-th
[ "hep-th", "cond-mat.str-el" ]
http://arxiv.org/abs/2307.04447v1
20230710095733
Combinatorial Nullstellensatz and Turán numbers of complete $r$-partite $r$-uniform hypergraphs
[ "Alexey Gordeev" ]
math.CO
[ "math.CO" ]
Combinatorial Nullstellensatz and Turán numbers of complete r-partite r-uniform hypergraphs Alexey Gordeev =========================================================================================== In this note we describe how Lasoń's generalization of Alon's Combinatorial Nullstellensatz gives a framework for constructing lower bounds on the Turán number (n, K^(r)_s_1,…,s_r) of the complete r-partite r-uniform hypergraph K^(r)_s_1,…,s_r. To illustrate the potential of this method, we give a short and simple explicit construction for the Erdős box problem, showing that (n, K^(r)_2,…,2) = Ω(n^r - 1/r), which asymptotically matches best known bounds when r ≤ 4. § INTRODUCTION §.§ Turán numbers of complete r-partite r-uniform hypergraphs A hypergraph H = (V, E) consists of a set of vertices V and a set of edges E, each edge being some subset of V. A hypergraph is r-uniform if each edge in it contains exactly r vertices. An r-uniform hypergraph is r-partite if its set of vertices can be represented as a disjoint union of r parts with every edge containing one vertex from each part. The complete r-partite r-uniform hypergraph with parts of sizes s_1, …, s_r contains all s_1 ⋯ s_r possible edges and is denoted by K^(r)_s_1, …, s_r. Let H be an r-uniform hypergraph. The Turán number (n, H) is the maximum number of edges in an r-uniform hypergraph on n vertices containing no copies of H. A classical result of Erdős <cit.> implies that for s_1 ≤…≤ s_r, (n, K^(r)_s_1, …, s_r) = O( n^r - 1/s_1 ⋯ s_r - 1). In <cit.>, Mubayi conjectured that bound (<ref>) is asymptotically tight. Recently, Pohoata and Zakharov <cit.> showed that this is true whenever s_1, …, s_r ≥ 2 and s_r ≥ ((r - 1)(s_1 ⋯ s_r - 1 - 1))! + 1, extending earlier results of Alon, Kollár, Rónyai and Szabó <cit.> and Ma, Yuan and Zhang <cit.>. Nevertheless, the conjecture remains open even in a special case (n, K^(r)_2,…, 2), which is often referred to as the Erdős box problem. The best known lower bound is due to Conlon, Pohoata and Zakharov <cit.>, who showed that for any r ≥ 2, (n, K^(r)_2,…, 2) = Ω( n^r - ⌈2^r - 1/r⌉^-1). §.§ Generalized Combinatorial Nullstellensatz Let be an arbitrary field, and let f ∈[x_1,…,x_r] be a polynomial in r variables. A monomial x_1^d_1⋯ x_r^d_r is a monomial of a polynomial f if the coefficient of x_1^d_1⋯ x_r^d_r in f is non-zero. Recall the famous Combinatorial Nullstellensatz by Alon (see Theorem 1.2 in <cit.>). Let x_1^d_1⋯ x_r^d_r be a monomial of f, and let f ≤ d_1 + … + d_r. Then for any subsets A_1, …, A_r of with sizes |A_i| ≥ d_i + 1, f does not vanish on A_1×…× A_r, i.e. f(a_1,…,a_r) ≠ 0 for some a_i ∈ A_i. A monomial x_1^d_1⋯ x_r^d_r of f is maximal if it does not divide any other monomial of f. Lasoń showed the following generalization of Combinatorial Nullstellensatz (see Theorem 2 in <cit.>). It should be mentioned that an even stronger theorem was proved by Schauz in 2008 (see Theorem 3.2(ii) in <cit.>). Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f. Then for any subsets A_1, …, A_r of with sizes |A_i| ≥ d_i + 1, f does not vanish on A_1×…× A_r, i.e. f(a_1,…,a_r) ≠ 0 for some a_i ∈ A_i. Notably, in most applications of Combinatorial Nullstellensatz the condition f ≤ d_1 + … + d_r from Theorem <ref> turns out to be sufficient and thus the more general Theorem <ref> is not needed. Below we give a rare example of an application in which the full power of Theorem <ref> is essential. 
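As a toy illustration of the gap between the two statements (separate from the application below), consider f(x, y) = xy + x^3 over the rationals: its total degree is 3 > 1 + 1, so Alon's theorem gives no information for the monomial xy, yet xy is maximal, so Lasoń's generalization guarantees that f does not vanish on any A_1 × A_2 with |A_1|, |A_2| ≥ 2. The following brute-force check over small grids is our sketch, not part of the note.

from itertools import combinations, product

def f(x, y):
    return x * y + x ** 3      # monomials xy and x^3; xy is maximal but deg f = 3

S = range(-2, 3)
ok = all(any(f(a, b) != 0 for a, b in product(A, B))
         for A in combinations(S, 2) for B in combinations(S, 2))
print(ok)                      # True: f is nonzero somewhere on every 2-by-2 grid in S x S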
§ THE FRAMEWORK For subsets B_1, …, B_r of a field denote the set of zeros of a polynomial f ∈[x_1, …, x_r] on B_1 ×…× B_r as Z(f; B_1,…, B_r) := { (a_1, …, a_r) ∈ B_1 ×…× B_r | f(a_1, …, a_r) = 0 }. In the case B_1 = … = B_r = B we will write Z(f; B, r) instead of Z(f; B_1, …, B_r). The set Z(f; B_1,…, B_r) can be viewed as the set of edges of an r-partite r-uniform hypergraph H(f; B_1, …, B_r) with parts B_1, …, B_r. Our key observation is the following lemma which immediately follows from Theorem <ref>. Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f. Then for any subsets B_1, …, B_r of the hypergraph H(f; B_1,…, B_r) is free of copies of K^(r)_d_1 + 1, …, d_r + 1. This lemma gives us a new tool for constructing lower bounds on (n, K^(r)_s_1, …, s_r). In Section <ref> we give a simple example of such construction for (n, K^(r)_2, …, 2) which asymptotically matches (<ref>) when r ≤ 4. Combining Lemma <ref> with (<ref>), we also get the following Schwartz–Zippel type corollary, which may be of independent interest. Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f, where d_1 ≤…≤ d_r. Then for any subsets B_1, …, B_r of with sizes |B_i| = n, | Z(f; B_1, …, B_r) | = O( n^r - 1/(d_1 + 1) ⋯ (d_r - 1 + 1)). The described framework was also recently discussed in an article by Rote (see Section 8 in <cit.>). § CONSTRUCTION Here _p^r is the finite field of size p^r and _p^r^* = _p^r∖{0}. Let p be a prime number, and let f ∈_p^r[x_1, …, x_r] be the following polynomial: f(x_1, …, x_r) = x_1 ⋯ x_r + ∑_i = 1^r ∏_j = 1^r - 1 x_i + j^p^r - p^j, where indices are interpreted modulo n, i.e. x_r + 1 = x_1, x_r + 2 = x_2, etc. Then |Z(f; _p^r^*, r)| = p^r - 1 ( p^r - 1 )^r - 1. Note that for any a_1, …, a_r ∈_p^r^* we have a_i^p^r = a_i, so f(a_1, …, a_r) = a_1 ⋯ a_r ( 1 + ∑_i = 1^r ∏_j = 0^r - 1 a_i + j^-p^j) = a_1 ⋯ a_r ( 1 + ( a_1^-1 a_2^-p⋯ a_r^-p^r - 1) ), where (a) = a + a^p + … + a^p^r - 1 is the trace of the field extension _p^r / _p. Now let us fix a_2, …, a_r ∈_p^r^*. As a_1 runs over all values of _p^r^*, so does a_1^-1 a_2^-p⋯ a_r^-p^r - 1. There are exactly p^r - 1 elements a ∈_p^r^* for which (a) = -1, i.e. for any fixed a_2, …, a_r there are exactly p^r - 1 values of a_1 for which f(a_1, …, a_r) = 0. For any r ≥ 2, (n, K^(r)_2, …, 2) = Ω( n^r - 1/r). Note that x_1 ⋯ x_r is a maximal monomial of the polynomial f from Lemma <ref>. Thus, due to Lemma <ref>, a hypergraph H_p = H(f; _p^r^*, r) with r(p^r - 1) vertices and p^r - 1 ( p^r - 1 )^r - 1 edges is free of copies of K^(r)_2, …, 2 for every prime p, which gives the desired bound. § CONCLUDING REMARKS The construction from Section <ref> in the case r = 3 is structurally similar to the one given by Katz, Krop and Maggioni in <cit.>. Their construction can be generalized to higher dimensions giving an alternative proof of Theorem <ref> (private communication with Cosmin Pohoata; see also Proposition 11.2 in <cit.>). Our approach gives a simpler construction and a much shorter proof. Motivated by the ideas discussed in Section <ref>, Rote posed a problem (see Problem 1 in <cit.>), equivalent to asking how large can the set Z(f; B_1, B_2) be for a polynomial of the form f(x, y) = xy + P(x) + Q(y) and sets B_1, B_2 of size n each. Lemma <ref> answers this question asymptotically if sets B_1 and B_2 are allowed to be taken from the finite field _p^2. § ACKNOWLEDGEMENTS I would like to thank Danila Cherkashin and Fedor Petrov for helpful discussions, and Günter Rote for useful comments on a draft of this note. abbrv
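As a computational sanity check of the zero-count lemma in the Construction section (an illustration only, not part of the argument), the following sketch counts the zeros of the construction in the smallest case p = 2, r = 2, where f(x_1, x_2) reduces to x_1x_2 + x_1^2 + x_2^2 over GF(4) and the lemma predicts p^(r-1) (p^r - 1)^(r-1) = 2 * 3 = 6 zeros on (GF(4)^*)^2.

# GF(4) = {0, 1, w, w + 1} with w^2 = w + 1, encoded as the integers 0, 1, 2, 3;
# addition in GF(4) is XOR of the bit representations.

def gf4_mul(a, b):
    prod = 0
    if b & 1:
        prod ^= a
    if b & 2:
        prod ^= a << 1
    if prod & 0b100:           # reduce w^2 -> w + 1
        prod ^= 0b111
    return prod

def f(x1, x2):
    return gf4_mul(x1, x2) ^ gf4_mul(x1, x1) ^ gf4_mul(x2, x2)

zeros = [(x1, x2) for x1 in range(1, 4) for x2 in range(1, 4) if f(x1, x2) == 0]
print(len(zeros))              # prints 6, matching the predicted count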
http://arxiv.org/abs/2307.05810v1
20230711212131
The Clifford theory of the $n$-qubit Clifford group
[ "Kieran Mastel" ]
quant-ph
[ "quant-ph", "math-ph", "math.MP", "math.RT" ]
The n-qubit Pauli group and its normalizer the n-qubit Clifford group have applications in quantum error correction and device characterization. Recent applications have made use of the representation theory of the Clifford group. We apply the tools of (the coincidentally named) Clifford theory to examine the representation theory of the Clifford group using the much simpler representation theory of the Pauli group. We find an unexpected correspondence between irreducible characters of the n-qubit Clifford group and those of the (n+1)-qubit Clifford group. § INTRODUCTION The Pauli group and its normalizer, the Clifford group, are fundamental structures in quantum information theory. These groups have applications in quantum error correction <cit.> and randomized benchmarking <cit.>. By the Gottesman-Knill theorem, quantum computation using Clifford unitaries is efficiently simulable on a classical computer <cit.>. The Clifford group is a unitary 2-design <cit.>, in other words, `averages over the Clifford group approximate averages over the unitary group well.' Generating Haar random Clifford unitaries is less computationally expensive than sampling Haar random unitaries <cit.>. Thus random Clifford elements have utility in performing randomized protocols. Recent applications of the Clifford group to randomized benchmarking and classical shadow estimation have utilized its representation theory <cit.>. Determining the character table of the Clifford group, which classifies its irreducible representations, is a natural open problem prompted by these papers. Surprisingly, despite the usefulness of the representation theory of the Clifford group, its character table has not been determined. The representation theory of the Pauli group is simple and explained in section 4. Thus it would be advantageous to use our understanding of the representation theory of the Pauli group to examine that of the Clifford group. To do this, we can apply the tools of Clifford theory (which is named after Alfred H. Clifford, while William K. Clifford gave his name to the group). Clifford theory is the subset of representation theory focused on relating representations of a normal subgroup N of G to representations of G. The inertia subgroup I_G(σ) is the subgroup of G that maps σ to an isomorphic representation under conjugation. The central result of Clifford theory is the Clifford correspondence between irreducible representations of the inertia subgroup and certain irreducible representations of G. 
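For concreteness, the defining normalizer property recalled at the start of this section can be checked numerically for the single-qubit Clifford generators H and S (a simple illustration of ours, not taken from the paper): conjugating each Pauli matrix by H or S returns a Pauli matrix up to a phase.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

paulis = {"I": I2, "X": X, "Y": Y, "Z": Z}
phases = [1, -1, 1j, -1j]

def as_pauli(M):
    # return (phase, name) if M equals phase * P for some Pauli P, else None
    for name, P in paulis.items():
        for ph in phases:
            if np.allclose(M, ph * P):
                return ph, name
    return None

for gate_name, U in [("H", H), ("S", S)]:
    for name, P in paulis.items():
        print(gate_name, name, "->", as_pauli(U @ P @ U.conj().T))
# e.g. H X H^dag = Z, H Z H^dag = X, S X S^dag = Y, S Y S^dag = -X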
When the inertia subgroup is understood, this simplifies the calculation of irreducible characters of G. Since any two nontrivial irreducible Pauli representations are conjugate in the Clifford group and conjugate representations have isomorphic inertia subgroups, we need only examine one inertia subgroup. In our first result, we determine the inertia subgroup of a nontrivial irreducible representation of the n-qubit Pauli group in the n-qubit Clifford group up to complex phases for n≥ 2. Clifford theory does not fully calculate the character table of the Clifford group. The Clifford correspondence does not give us any information when the inertia subgroup I_G(σ) is all of G. In particular, the Clifford correspondence does not help when σ is the trivial representation of N. If G is a group, inflation produces a bijection between irreducible representations of G/N and irreducible representations of G whose restriction to N is trivial. We can thus understand the case where the Clifford correspondence offers no information by examining the representation theory of the quotient group. For the n-qubit Clifford group and the n-qubit Pauli group, the quotient group is the symplectic group Sp(2n,2). The symplectic group is a finite group of Lie type, and thus its representation theory is calculated by the Deligne-Lusztig theory <cit.>, which we do not examine in this paper. Together with the representations calculated using Clifford theory, this accounts for all the irreducible representations of the Clifford group. In section <ref>, we show that the inertia quotient group I_G(σ)/N of a nontrivial Pauli representation in the n-qubit Clifford group is a central extension of the affine symplectic group Sp(2(n-1),2)⋉ℤ^2(n-1)_2 by ℤ_2. The Clifford group, in the literature on finite group extensions, is the unique non-split extension of Sp(2n,2) by ℤ^2n_2. By examining the Clifford group from this perspective, Bernd Fischer showed in <cit.> that the Clifford and affine symplectic groups have identical character tables. Combining these facts allows us to produce a surprising correspondence between irreducible characters of the n-qubit Clifford group C_n and the (n+1)-qubit Clifford group 𝒞_n+1. Any irreducible character of 𝒞_n can be viewed as an irreducible character of Sp(2n,2)⋉ℤ_2^2n which inflates to an irreducible character of IN_n+1 which induces an irreducible character of 𝒞_n+1. The natural map of characters from 𝒞_n to 𝒞_n+1 is induction. However, unlike induction of characters, our correspondence maps irreducible characters to irreducible characters. Knowing the irreducible characters of 𝒞_n allows us to calculate an equal number of the irreducible characters of 𝒞_n+1. §.§ Acknowledgements I thank William Slofstra and Jack Davis for helpful discussions. I acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), in particular this work was supported by an NSERC Post Graduate Scholarship - Doctoral. § PRELIMINARIES §.§ Representation theory In this section, we recall from <cit.> some basic facts about representation and character theory of finite groups. A linear representation of a finite group G is a homomorphism ρ from the group G into the group GL(V), where V is a vector space over ℂ. If W is a vector subspace of V such that ρ(g)x ∈ W for all g ∈ G and x ∈ W, then the restriction ρ^W(g) of ρ(g) to W is a linear representation of G on W. We call W a subrepresentation of V. 
An irreducible representation is one where V is not 0 and no nontrivial vector subspace of V is G-stable. It is a standard result that every representation is a direct sum of irreducible representations. If ρ and σ are representations of a finite group G on the vector spaces V and W respectively then a linear map ϕ:V → W is called an intertwining map of representations if ϕ(ρ(g)v) = σ(g)ϕ(v) for all g ∈ G and all v ∈ V. The vector space of all such G-linear maps between V and W is denoted by Hom_G(ρ,σ) or Hom_G(V,W). If ϕ is also invertible, it is said to be an isomorphism of representations. When we classify irreducible representations, we do so up to isomorphism. Isomorphic representations are sometimes called equivalent representations. Let F be a field, a projective representation of a finite group G is a is a map Φ:G → GL_n(F) such that for every g,h ∈ G, there exists a scalar α(g,h)∈ F such that Φ(g)Φ(h) = Φ(gh)α(g,h). α is called the factor set. Note that it is uniquely determined by Φ. The notions of equivalence and irreducibility translate verbatim for projective representations. We refer the reader to section 7.2 of <cit.> for a more exhaustive discussion of projective representations. Let ρ: G → GL(V) be a linear representation of a finite group G. For each g ∈ G, define χ_ρ(g) = Tr(ρ(g)), with Tr(ρ(g)) being the trace of the operator ρ(g)∈ GL(V). The function χ_ρ on G is called the character of the representation ρ. If ρ is irreducible, we call χ_ρ an irreducible character. It is a standard result that two representations are isomorphic if and only if they have the same character. Note that, from properties of the trace, χ_ρ(h^-1gh) = χ_ρ(g) and thus characters are constant on the conjugacy classes of groups. In other terms, characters are class functions. Since characters form an orthonormal basis of the space of class functions, the number of inequivalent irreducible representations equals the number of conjugacy classes of G. If χ is the character of a representation (ρ,V) of G, and e∈ G is the identity, then χ(e) = dim V and is called the degree of the character. If G is abelian, then every character is of degree 1. The character table of a finite group G is the table with rows corresponding to inequivalent irreducible characters of G and columns corresponding to conjugacy classes of G. Entry (i,j) of the table is the value of the i^th irreducible character of G on the j^th conjugacy class of G. Clifford theory deals with induced and restricted representations, which we will now define. If ρ is a representation of G and H is a subgroup of G then we can define the restriction of ρ to H (Res_H^Gρ)(h) := ρ(h), for all h∈ H. The restriction is a representation of H by definition. If χ is the character of ρ, we can also define the restriction of χ to H by (Res^G_Hχ)(h) := χ(h), for all h∈ H. Notice that Res^G_Hχ is the character of Res_H^Gρ. Let ρ be a representation of G and ψ be a subrepresentation of the restriction Res^G_Hρ of ρ to a subgroup H of G. Let V and W be the respective representation spaces of ρ and ψ. For s ∈ G the vector space ρ(s)W depends only on the left coset sH of s. Thus if γ is a left coset of H we can define the subspace W_γ of V to be ρ(s)W for any s ∈γ. Clearly, the W_γ are permuted by ρ(s) for s ∈ G. This tells us that ∑_γ∈ G/HW_γ is a subrepresentation of V. We say that the representation ρ of G is induced by the representation ψ of H on W if V is equal to the direct sum of the W_σ for σ∈ G/H. 
Restriction and induction of representations do not preserve irreducibility in general. We state the following theorem from <cit.> without proof. Let (W,ψ) be a representation of H. There exists a linear representation of G induced by ψ which we denote Ind_H^Gψ or Ind_H^GW. This induced representation is unique up to isomorphism. §.§ Clifford theory The objective of Clifford theory is to study the representation theory of a group via the representation theory of its normal subgroups. Here we review the central results of Clifford theory. In this section, we largely follow the outline of Clifford theory given in part 2 of <cit.>, and we refer the reader there for proofs and a more thorough exposition. In addition, we collect some results from <cit.>, which will prove essential to our analysis. Let G be a finite group and N G be a normal subgroup of G. Let G and N denote the set of all irreducible representations of G and N respectively up to equivalence. For two representations ρ and σ we write σ≽ρ to denote that ρ is a subrepresentation of σ and ρ∼σ to denote that ρ and σ are isomorphic representations. Let σ∈N and g ∈ G. We define G(σ) = {θ∈G : Res^G_N(θ)≽σ}. The g-conjugate of σ is the representation ^gσ∈N defined by ^gσ(n) = σ(g^-1ng), for all n ∈ N. The inertia subgroup of σ∈N is defined I_G(σ) = {g ∈ G : ^gσ∼σ}. Note that ^gσ is irreducible since any subspace invariant under ^gσ is also invariant under σ. Since ^ghσ(n) = σ((gh)^-1n(gh)) = σ(h^-1(g^-1ng)h) = ^g(^hσ(n)), equation <ref> defines an action of G on N, and thus I_G(σ) is the stabilizer of σ in G. Notice that ^n_1σ(n) = σ(n_1^-1nn_1) = σ(n_1)^-1σ(n)σ(n_1)for n_1,n∈ N. So if χ is the character of σ and χ_n_1 is the character of ^n_1σ we have for n ∈ N χ_n_1(n) = tr(^n_1σ(n)) = tr(σ(n_1)^-1σ(n)σ(n_1)) = tr(σ(n)) = χ(n). Thus we have ^n_1σ∼σ for n_1 ∈ N, and therefore N ≤ I_G(σ). If σ and ^gσ are conjugate representations of a normal subgroup N of a finite group G, and I_G(σ) and I_G(^gσ) are the respective inertia subgroups, then I_G(σ) and I_G(^gσ) are conjugate subgroups of G and in particular I_G(σ) and I_G(^gσ) are isomorphic. We can now recall from <cit.> some central results of Clifford theory. Let R be a family of coset representatives for the left I_G(σ)-cosets in G with e_G ∈ R, that is G = _r ∈ RrI_G(σ). Then {^gσ : g ∈ G} = {^rσ : r ∈ R} and the representations ^rσ are pairwise inequivalent. Suppose that N G and σ∈N and θ∈G(σ). If we set d = [I_G(σ):N] and let l denote the multiplicity of σ in Res^G_Nθ, we have: * Hom_G(Ind^G_Nσ,Ind^G_Nσ) ≅ℂ^d as vector spaces. * Res_N^G θ≅ l⊕_r ∈ R^rσ. The number l = dim(Hom_N(σ,Res^G_Nθ)) is called the inertia index of θ∈G(σ) with respect to N. Let N G, σ∈N and I = I_G(σ), then I(σ) →G(σ): ψ⟼Ind^G_Iψ is a bijection. The inertia index of ψ∈I(σ) with respect to N coincides with the inertia index of Ind^G_Iψ with respect to N. In turn, the inertia index of Ind^G_Iψ with respect to N is equal to the multiplicity m_ψ of ψ in Ind^I_Nψ. Furthermore, Res^I_Nψ = m_ψσ. Unfortunately, this correspondence does not tell us anything in the case where the inertia subgroup is all of G. The study of what happens in this case is known as stable Clifford theory and can be quite complicated <cit.>. Adapting a result from section 8.1 of <cit.> to our notation we can make the following corollary. If N is an abelian normal subgroup of G, the degree of each irreducible representation ρ of G divides the index [G:N] of N in G. 
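To make the Clifford correspondence concrete before specializing to the Pauli and Clifford groups, the following small Python check (our own illustration, not part of the original text) works through the toy case G = S_3 with N = A_3 ≅ ℤ_3: the two nontrivial characters of N are conjugate in G, the inertia subgroup of either of them is N itself, and inducing from the inertia subgroup produces an irreducible character of G (the two-dimensional character of S_3), exactly as the correspondence predicts.

import cmath
from itertools import permutations

# Toy illustration (ours) of the Clifford correspondence: G = S_3, N = A_3.
G = list(permutations(range(3)))
def mul(p, q):                       # composition: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))
def inv(p):
    return tuple(p.index(i) for i in range(3))

e, c, c2 = (0, 1, 2), (1, 2, 0), (2, 0, 1)
N = [e, c, c2]
w = cmath.exp(2j * cmath.pi / 3)
sigma = {e: 1, c: w, c2: w * w}      # a nontrivial character of N

# The inertia subgroup {g : ^g sigma ~ sigma} is N itself; conjugating by any
# transposition swaps the two nontrivial characters of N.
def g_conj(g):
    return {n: sigma[mul(inv(g), mul(n, g))] for n in N}
inertia = [g for g in G if all(abs(g_conj(g)[n] - sigma[n]) < 1e-9 for n in N)]
assert sorted(inertia) == sorted(N)

# Clifford correspondence: inducing sigma from its inertia subgroup gives an
# irreducible character of G (here the 2-dimensional character of S_3).
def induced(g):
    vals = [sigma[mul(inv(x), mul(g, x))] for x in G if mul(inv(x), mul(g, x)) in sigma]
    return sum(vals) / len(N)
norm = sum(abs(induced(g)) ** 2 for g in G) / len(G)    # <Ind sigma, Ind sigma>_G
assert abs(norm - 1.0) < 1e-9
print("Ind_N^G(sigma) is irreducible:", round(norm, 6))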
Let ψ be a representation of G/N, the inflation ψ of ψ is a representation of G defined by setting ψ(g) = ψ(gN) for all g ∈ G. If χ and χ be characters of ψ and ψ respectively, then the map χ⟼χ is a bijection between the irreducible characters of G/N and the irreducible characters of G with N in their kernel (i.e. χ(n)=deg χ). Note that deg(χ) = deg(χ). Let σ be an irreducible representation of G and ρ_1 be the trivial representation of N (the representation mapping every n∈ N to 1). If we suppose Res^G_Nσ≽ρ_1 then notice that ^hρ_1(g) = ρ_1(h^-1gh) = 1 = ρ_1(g) for all h,g∈ G, thus ^hρ_1 ∼ρ_1 for all h ∈ H. Combining this observation with part 2 of theorem <ref> we see Res_N^G σ≅⊕_l = 1^deg(σ)ρ_1. So N ≤ker(σ) and thus σ is the inflation of an irreducible representation of G/N. Let H≤ G, and let σ be a representation of H. We call a representation σ' of G an extension of σ if Res_H^Gσ' = σ. We can now state a consequence of the Clifford correspondence that will prove very useful in our study of the Clifford group. Let G be a finite group with N G a normal subgroup. Suppose that any σ∈N has an extension σ' to its inertia group I_G(σ). In N define an equivalence relation ≈ by setting σ_1 ≈σ_2 if there exists g∈ G such that ^gσ_1 ∼σ_2. Let Σ be a set of representatives for the equivalence classes of ≈. For ψ∈I_G(σ)/N let ψ be its inflation to I_G(σ). Then G = {Ind^G_I_G(σ)(σ'⊗ψ):σ∈Σ, ψ∈I_G(σ)/N}, that is, the representations σ'⊗ψ form a complete list of irreducible representations of G and are pairwise inequivalent. Let Q, G, and N be groups. If we have an injective homomorphism ι: N → G, and a surjective homomorphism π: G → Q, and if ι(N) = ker(π), then we call G an extension of Q by N. If ι(N) is contained in the center of G, then we call G a central extension. A group extension G is often written as a short exact sequence 1→ N G Q→ 1. Theorem <ref> classifies the irreducible representations of a group extension G under the constraint that the irreducible representations σ of the normal subgroup N can always be extended to representations σ' of the corresponding inertia subgroup I_G(σ). Let 1 → B G H → 1 be a central extension. When G is a central extension and G ≇H× B, the little group method does not apply. To examine this case we require more specialized machinery. A section of the extension is a map t:H→ G which is a right inverse for π, that is π(t(h)) = h for all h ∈ H. We call the section normalized if t(1_H) = 1_G. For h,k ∈ H we have π[t(h)t(k)] = π(t(h))π(t(k)) = hk = π(t(hk)), so there exists a unique b(h,k) ∈ B such that t(h)t(k) = t(hk)ι[b(h,k)]. Let H^α denote all the irreducible projective representations of a finite group H with factor set α. We may now state a version of the little group method for central extensions. For every ξ∈B we have I_G(ξ) = G. Let η(h,k) = ξ(b(h,k)), Let η̅(h,k) = (η(h,k))^-1, then the map H^η̅ ⟶G(ξ) Φ ⟼Θ is a bijection, with Θ defined by Θ(t(h)b) = ξ(b)Φ(h) for all h ∈ H(ξ), b ∈ B. Finally, G = {Θ: ξ∈B, Θas in (<ref>), Φ∈I_G(ξ)/B^η̅}. § THE PAULI AND CLIFFORD GROUPS §.§ Definitions Here we recall the definitions of the Pauli and Clifford groups and discuss the various definitions of the Clifford group in the literature. Let U(d) be the set of d-by-d unitary matrices where d is some power of 2. This has a standard representation on the complex vector space ℂ^d <cit.>. 
Let v_0, v_1 be an orthonormal basis of ℂ^2 and define the linear operators X, Y and Z by Xv_l = v_l+1, Zv_l = (-1)^lv_l, Yv_l = -iZXv_l = (-1)^liv_l+1 for l ∈{0,1}, with addition over indices being modulo 2. These operators are unitary. We define the n-qubit Pauli group 𝒫_n as the subset of the unitary group U(2^n) consisting of all n-fold tensor products of elements of 𝒫_1 := <X,Z,iI_2>, where I_2 is the identity on ℂ^2. 𝒫_1 is a group of order 16 with centre |Z(𝒫_1)| = 4. Since 𝒫_n consists of n-fold tensor products of elements of 𝒫_1 it is a central product of 𝒫_1, and thus |𝒫_n| = 4^n+1. The operators X, Y, and Z can be written in matrix form with respect to the eigenbasis of Z as X = [ 0 1; 1 0 ]Z =[ 1 0; 0 -1 ]Y = [ 0 -i; i 0 ]. These are known as the Pauli matrices. The n-qubit Clifford group Cliff(n) is the normalizer of the n-qubit Pauli group in the unitary group Cliff(n) = {U∈ U(2^n) : U𝒫_nU^†⊆𝒫_n}. Since in quantum information theory global phases have no measurable effect, it is common to define the Clifford group mod phases. We denote this group 𝒞_n = {U∈ U(2^n) : U𝒫_nU^†⊆𝒫_n}/U(1), and will call the n-qubit projective Clifford group to differentiate it from other ways the Clifford group is defined in the literature. This is the version of the Clifford group whose representation theory we would like to understand. The Clifford group Cliff(n) is generated by the Hadamard (H) and Phase (S) gates on each qubit (i.e. on each tensor factor), and Controlled-Z (CZ) gate on each pair of qubits, along with phases. In matrix form these gates are H = 1/√(2)[ 1 1; 1 -1 ]S =[ 1 0; 0 i ]CZ =[ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 -1 ]. Operators that are n-fold tensor products only of 2-by-2 matrices are said to consist only of single-qubit operations. If an operator has 4-by-4 matrices in its decomposition, such as CZ that do not decompose further into tensor factors, it is said to contain multi-qubit, or entangling, operations. Multi-qubit Pauli operators either commute or anticommute. Notice that the group C_4 = ⟨ iI_n⟩ of phases in 𝒫_n is the centre of 𝒫_n. Define the n-qubit projective Pauli group to be 𝒫_n = 𝒫_n/C_4. Since C_4 contains the commutator subgroup C_2 = ⟨ -I_n⟩ of 𝒫_n, we have that 𝒫_n is abelian. Notice, 𝒫_n is a normal subgroup of 𝒞_n. Since 𝒫_n ≅ℤ^2n_2 we will often just write ℤ^2n_2 for the projective Pauli group. §.§ The symplectic structure of the Clifford group The quotient of the Clifford group by the Pauli group and phases is essential to the stable Clifford theory of the Clifford group. We will need the following proposition, which we present without proof. Let ϕ:𝒫_n →𝒫_n be an automorphism of the Pauli group that fixes scalars. That is, ϕ(i^lI_2^n) = i^lI_n^2. Then there exists U ∈Cliff(n) unique up to phases such that for all P ∈𝒫_n we have UPU^† = ϕ(P). This says that 𝒞_n = Aut_⟨ i⟩(𝒫_n), that is, the Clifford group consists of the automorphisms of the Pauli group that fix the centre. Following arguments from <cit.> and <cit.> we can show the following. The quotient of the Clifford group Cliff(n) by the Pauli group and phases is 𝒞_n/𝒫_n ≅ Sp(2n,2), the symplectic group of degree 2n over ℤ_2. For 𝐱 = (𝐩,𝐪) ∈ℤ^2n define the Weyl Operator W_𝐱 = W_𝐩,𝐪 = i^-𝐩·𝐪(Z^p_1X^q_1)⊗⋯⊗(Z^p_nX^q_n). Clearly, all Weyl operators are elements of the Pauli group 𝒫_n, and any element of the Pauli group is a Weyl operator up to a power of i. 
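The relations just introduced are easy to verify numerically. The short Python check below (ours, not part of the paper) builds the Pauli matrices and the Clifford generators H, S and CZ, confirms that conjugation by each of them sends every two-qubit Pauli tensor product to another one up to a power of i (the normalizer property defining Cliff(n)), and verifies the single-qubit relations HXH^-1 = Z and SXS^-1 = Y used later in the proofs.

import numpy as np
from itertools import product

# Numerical check (ours): H, S and CZ normalize the two-qubit Pauli group.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = -1j * Z @ X
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
CZ = np.diag([1, 1, 1, -1]).astype(complex)

paulis = [I2, X, Y, Z]
two_qubit = [np.kron(a, b) for a, b in product(paulis, repeat=2)]

def pauli_up_to_phase(M):
    # True if M equals some Pauli tensor product times a power of i.
    return any(np.allclose(M, ph * P) for P in two_qubit for ph in (1, 1j, -1, -1j))

for U in (np.kron(H, I2), np.kron(S, I2), CZ):
    assert all(pauli_up_to_phase(U @ P @ U.conj().T) for P in two_qubit)

assert np.allclose(H @ X @ H.conj().T, Z)      # HXH^-1 = Z
assert np.allclose(S @ X @ S.conj().T, Y)      # SXS^-1 = Y
print("H, S and CZ normalize the two-qubit Pauli group")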
Weyl operators only depend on 𝐱 modulo 4, since W_𝐱+2𝐳 = (-1)^[𝐱,𝐳]W_𝐱, where we have introduced the ℤ-valued symplectic form [·,·] on ℤ^2n [𝐱,𝐳] = [(𝐩,𝐪),(𝐩',𝐪')] = 𝐩·𝐪'-𝐪·𝐩'. We will use this form when 𝐱,𝐲∈ℤ^2n_4, and interpret accordingly. For example, by direct computation we have W_𝐱W_𝐲 = i^[𝐱,𝐲]W_𝐱+𝐲. Then for all 𝐱,𝐲∈ℤ^2n_4, we have W_𝐱W_𝐲 = i^[𝐱,𝐲]W_𝐱+𝐲 = i^-[𝐲,𝐱]W_𝐲+𝐱 = i^-2[𝐲,𝐱]W_𝐲W_𝐱 = (-1)^[𝐱,𝐲]W_𝐲W_𝐱. Thus the commutation relation depends on [𝐱,𝐲] mod 2. By definition if U ∈Cliff(n) then for every 𝐱∈ℤ^2n_2, UW_𝐱U^† is proportional to a Weyl operator W_𝐱' by some power of i and by equation <ref> we can take 𝐱' ∈ℤ^2n_2. We define the function g: ℤ_2^2n→ℤ_4 where i^g(𝐱)W_𝐱' = UW_𝐱U^†. Since conjugation preserves commutation relations, we have (-1)^[𝐱,𝐲]W_𝐲'W_𝐱' = W_𝐱'W_𝐲' = (-1)^[𝐱',𝐲']W_𝐲'W_𝐱'. Thus the map Γ: 𝐱↦𝐱' preserves the symplectic form [·,·]. Furthermore, i^g(𝐱+𝐲)+[𝐱,𝐲]W_(𝐱+𝐲)' = Ui^[𝐱,𝐲]W_𝐱+𝐲U^† = UW_𝐱W_𝐲U^† = UW_𝐱U^†UW_𝐲U^† = i^g(𝐱)+g(𝐲)W_𝐱'W_𝐲' = i^[𝐱,𝐲]+g(𝐱)+g(𝐲)W_𝐱'+𝐲'. Thus i^g(𝐱+𝐲)W_(𝐱+𝐲)' = i^g(𝐱)+g(𝐲)W_𝐱'+𝐲'. Then W_(𝐱+𝐲)' = ± W_𝐱'+𝐲'. So by equation <ref>, Γ is compatible with addition in ℤ^2n_2. Since ℤ_2 has only the scalars 0 and 1, we deduce that Γ is linear and thus symplectic Γ∈ Sp(2n,2). Then for each U ∈Cliff(n) there is a Γ∈ Sp(2n,2) and a function g: ℤ_2^2n→ℤ_4 such that UW_𝐱U^† = i^g(𝐱)W_Γ(𝐱). Now notice that the n-qubit Pauli matrices form a basis of the vector space M_2^n(ℂ) of all 2^n-by-2^n matrices. If we specify the action of U ∈Cliff(n) on a generating set of the 𝒫_n, then we determine U up to a phase since U' = e^iθU has the same action as U by conjugation. From lemma <ref> we have for any automorphism ϕ of 𝒫_n that fixes scalars, that ϕ(P) = UPU^† for all P ∈𝒫_n and some U ∈Cliff(n). For any linear Γ: ℤ^2n_4 →ℤ^2n_4 that preserves the symplectic product modulo 4, we can define the map Φ : 𝒫_n →𝒫_n by Φ(W_𝐱) = W_Γ𝐱, for all 𝐱∈ℤ_4^2n. To see this is well defined, notice that W_Γ𝐱 is expressible as a linear combination of other Weyl operators only if ± W_Γ𝐲 = W_Γ𝐱 for some 𝐲∈ℤ_4^2n. Then by dimension counting and equation <ref> we have Γ𝐲 = Γ(𝐱+2𝐳) = Γ𝐱+2Γ𝐳. Thus the sign is given by (-1)^[Γ𝐱,Γ𝐳] = (-1)^[𝐱,𝐳]. We see that W_𝐱 = ± W_𝐲 with the same sign so Φ is well defined. Furthermore, since Φ(W_𝐱)Φ(W_𝐲) = W_Γ𝐱W_Γ𝐲 = i^[𝐱,𝐲]W_Γ𝐱+Γ𝐲 = i^[𝐱,𝐲]Φ(W_𝐱+𝐲) = Φ(W_𝐱W_𝐲), extending Φ by linearity defines an automorphism on 𝒫_n that fixes scalars. We thus have a U ∈Cliff(n) such that UW_𝐱U^† = W_Γ𝐱. Let {𝐞_j}_j=1^2n be a basis of ℤ_2^2n. For any Γ∈ Sp(2n,2), let Γ𝐞_j = 𝐯_j for all j ∈{1,…,2n}. In other words let C = [𝐯_1⋯𝐯_2n] be the matrix corresponding to the symplectic map Γ. Define 𝐯_1 := 𝐯_1. For each subsequent j>1 we define 𝐯_j := 𝐯_j+2𝐱_j with 𝐱_j ∈ℤ_2^2n. Notice that for all j we have 𝐯_j ∈ℤ_4^2n. We choose the 𝐱_j such that [𝐯_j,𝐯_j] = 0 mod 4 and [𝐯_h,𝐯_j] = δ_h,n+j-δ_j,n+h mod 4 for h<j, where δ_a,b is the Kronecker delta. Since Γ preserves the symplectic product mod 2 (and thus preserves commutators and anticommutators), we have both of these restrictions already satisfied mod 2. The matrix C = [𝐯_1⋯𝐯_2n] is symplectic modulo 4 and C = C mod 2. This means that for each Γ∈ Sp(2n,2) there is a Γ∈ Sp(ℤ_4^2n) such that Γ𝐱 = Γ𝐱 mod 2 thus we obtain Γ𝐱-Γ𝐱 = 2𝐳 for some 𝐳∈ℤ^2n. If we define the function f:ℤ_2^2n→ℤ_2 by f(𝐱) = [Γ𝐱,𝐳] mod 2, we have (-1)^f(𝐱)W_Γ𝐱 = W_Γ𝐱+2𝐳 = W_Γ𝐱. This implies that for every Γ∈ Sp(2n,2) there exists a U ∈Cliff(n) and a function f: ℤ_2^2n→ℤ_2 such that for all 𝐱∈ℤ_2^2n UW_𝐱U^† = (-1)^f(𝐱)W_Γ𝐱. 
This U is determined uniquely up to phase since we have determined its action by conjugation. Thus we have a surjective correspondence U ↦Γ between 𝒞_n and Sp(2n,2), and we see that the quotient of Cliff(n) by Paulis and phases is Sp(2n,2). Note that this implies that 𝒞_n is an extension of Sp(2n,2) by ℤ_2^2n, but since we cannot specify that f ≡ 0 for all choices of U ∈ Cliff(n) in equation <ref> the extension does not split for n > 1. For n = 1 we have 𝒞_1 ≅ S_4 ≅ Sp(2,2)⋉ℤ^2_2. Since |Sp(2n,2)| = 2^n^2∏_j=1^n(2^2j-1), we obtain the following immediate corollary. The order of the Clifford group is |𝒞_n| = |𝒫_n||Sp(2n,2)| = 2^n^2+2n∏_j=1^n(2^2j-1). §.§ The character table of the projective Pauli group Since the n-qubit projective Pauli group is abelian it has only degree one irreducible characters. One-dimensional representations characters coincide since the trace leaves 1-by-1 matrices invariant. Elements of the n-qubit projective Pauli group have order at most 2. Thus, the character of an element of 𝒫_n must be ±1. 𝒫_n is generated by {[X_1],[Z_1],…,[X_n],[Z_n]}, where [A_j] is the equivalence class of the Pauli operator A acting on the j^th qubit A_j = I_2^⊗ j-1⊗ A⊗ I_2^⊗ n-j. So a character of 𝒫_n is fully determined by a choice of ±1 for [X_i] and [Z_i] for each i ∈{1,…,n}. The 4 choices for each qubit leave us with 4^n choices for the whole group. There are 4^n = |𝒫_n| characters since 𝒫_n is abelian and thus has only singleton conjugacy classes. Irreducible characters that disagree on any one element must be distinct, so this completely determines the character table of 𝒫_n. Thus, the character table of 𝒫_n can be written by filling the first row and column of a 4^n-by-4^n table with ones, then in the rest of each remaining row writing each permutation of 4^n/2-1 ones and 4^n/2 negative ones. § THE INERTIA SUBGROUP To begin our study of the character theory of the n-qubit Clifford quotient group, we examine the inertia subgroups of the representations of the n-qubit projective Pauli group in the n-qubit projective Clifford group. Let σ and ρ be nontrivial irreducible representations of 𝒫_n, then there exists g ∈𝒞_n such that ^gσ∼ρ. In other words, all nontrivial irreducible representations of 𝒫_n are conjugate in 𝒞_n. We begin the proof by noticing that HXH^-1 = Z HZH^-1 = X HYH^-1 = -Y. So we have that conjugation of Pauli matrices by H maps X to Z, and vice versa, while mapping Y to -Y. Thus conjugation by [H] maps [X] to [Z] and vice versa while leaving [Y] invariant. We can calculate SXS^-1 = Y SZS^-1 = Z SYS^-1 = -X, thus conjugation by [S] maps [X] to [Y] and vice versa while leaving [Z] invariant. Furthermore conjugation by [H][S][H] maps [Z] to [Y] and vice versa while leaving [X] invariant. We see that we can permute the equivalence classes of the non-identity one-qubit Pauli matrices in any way via conjugation by elements of 𝒞_n. We now turn our attention to 2-qubit operators. Consider the swap gate, if A and B are any 2-by-2 matrices we have (SWAP)(A⊗ B)(SWAP) = B⊗ A. Our previous calculations for 1-qubit matrices tell us that any pair of nontrivial representations σ and ρ of 𝒫_n that have the same number of pairs of generators ([X_i],[Z_i]) in their kernels, that is |{i∈{1,…,n}: ρ([X_i]) = ρ([Z_i]) = 1}| = |{i∈{1,…,n}: σ([X_i]) = σ([Z_i]) = -1}|, are conjugate in 𝒞_n. Suppose that we have two representations ρ and σ of 𝒫_2 with σ([X⊗ I]) = σ([I⊗ Z]) = -1 σ([Z⊗ I]) = σ([I⊗ X]) = 1 ρ([X⊗ I]) = ρ([I⊗ X]) = ρ([Z⊗ I]) = 1 ρ([I⊗ Z]) = -1. 
Recall that this completely determines σ and ρ. Now we calculate CZ(I⊗ X)CZ = (Z⊗ X) CZ(Z⊗ I)CZ = (Z⊗ I) CZ(X⊗ I)CZ = (X⊗ Z) CZ(I⊗ Z)CZ = (I⊗ Z). Thus we have ^CZρ([X⊗ I]) = ρ([X⊗ Z]) = -1 = σ([X⊗ I]) ^CZρ([Z⊗ I]) = ρ([Z⊗ I]) = 1 = σ([Z⊗ I]) ^CZρ([I⊗ X]) = ρ([Z⊗ X]) = 1 = σ([I⊗ X]) ^CZρ([I⊗ Z]) = ρ([I⊗ Z]) = -1 = σ([I⊗ Z]). Thus nontrivial irreducible representations σ and ρ of 𝒫_2 with differing numbers of ([X_i],[Y_i]) pairs in their kernels, that is |{i∈{1,2}: ρ([X_i]) = ρ([Z_i]) = 1}| ≠ |{i∈{1,2}: σ([X_i]) = σ([Z_i]) = 1}|, are conjugate in 𝒞_2. Since restricting irreducible representations of 𝒫_n to any two qubits gives an irreducible representation of 𝒫_2, taking all the previous calculations together, we have that all nontrivial irreducible representations of 𝒫_n are conjugate in 𝒞_n. Since by lemma <ref> conjugate representations of normal subgroups have isomorphic inertia subgroups, we see that there is only one inertia subgroup to calculate for the nontrivial representations of the projective Pauli group in the Clifford group. We have the following immediate corollary. If ρ is an irreducible representation of 𝒞_n and σ an irreducible representation of 𝒫_n with Res^𝒞_n_𝒫_nρ≽σ then one of two cases hold: * σ is trivial, in which case 𝒫_n is in the kernel of ρ. In this case, ρ is the inflation of an irreducible representation of Sp(2n,2). * σ is nontrivial in which case we can apply Lemma <ref> and Theorem <ref> to obtain Res^𝒞_n_𝒫_nρ = l⊕_θ∈Irr(𝒫_n) θ nontrivialθ, where Irr(𝒫_n) is the set of irreducible representations of 𝒫_n and l is the inertia index of ρ with respect to 𝒫_n. Additionally, since 𝒫_n is an abelian normal subgroup of 𝒞_n we have that the degree of ρ divides [𝒞_n:𝒫_n] = 2^n^2∏_j=1^n(2^2j-1) by Corollary <ref>. If we specialize to case 2, then equation <ref> and the fact that all irreducible representations of 𝒫_n have degree 1 implies that the degree of ρ is divisible by 4^n-1 (the number of nontrivial irreducible representations of 𝒫_n). If χ is the character of ρ, then equation <ref> implies Res^𝒞_n_𝒫_nχ = l∑_ψ∈IrrChar(𝒫_n) ψ nontrivialψ, where IrrChar(𝒫_n) is the set of irreducible characters of 𝒫_n. In particular if g ∈𝒫_n is a non-identity element then χ(g) = -l, since for any such g the summand takes the value -1 a total of 2^2(n-1)+1 times and takes the value 1 a total of 2^2(n-1)+1-1 times. To understand case 2, we need to calculate the inertia subgroup I_𝒞_n(σ) of a nontrivial representation σ of the Pauli group in the Clifford group. Let M = CX_1,2(Z_1H_1X_2)CX_1,2 Then we have the following theorem. For n ≥ 2 the inertia subgroup of a nontrivial representation of 𝒫_n in 𝒞_n is isomorphic to IN_n:=⟨{[M],[H_1],[X_1],[I⊗ A]:for A∈Cliff(n-1)}⟩. Notice that if σ is a nontrivial irreducible representation of 𝒫_n, and ψ an irreducible representation of I = I_𝒞_n(σ) with Res^I_𝒫_nψ≽σ then by the Clifford correspondence we have m_ψ(2^2n-1) = deg m_ψ⊕_θ∈Irr(𝒫_n) θ nontrivialθ = deg Ind^𝒞_n_Iψ=[𝒞_n:I]deg ψ, where m_ψ is the inertia index of ψ with respect to 𝒫_n. Additionally, by the Clifford correspondence, [𝒞_n:I]deg ψ = [𝒞_n:I]m_ψdeg σ = m_ψ[𝒞_n:I], Thus [𝒞_n:I] = 2^2n-1 and |I| = 1/2^2n-1|𝒞_n| = 2^n^2+2n∏_j=1^n-1(2^2j-1) = 2^2n+1|𝒞_n-1|. So if for any particular σ we can find a subgroup of 𝒞_n that preserves σ under conjugation and has this order, then we have found the inertia subgroup. Consider the irreducible character σ_1 of 𝒫_n defined by σ_1([X_1]) = σ_1([Z_1]) = -1 and σ_1([X_i]) = σ_1([Z_i]) = 1 for all i ∈{2,…,n}. 
We want to calculate I_𝒞_n(σ_1) = {g ∈𝒞_n : ^gσ_1 ∼σ_1}. So we want to find the elements of 𝒞_n that preserve the presence of X or Z in the first tensor factor by conjugation. We immediately see that [I⊗ A] ∈ I_𝒞_n(σ_1) for A∈Cliff(n) since operations restricted to other qubits do not affect the first qubit. Since conjugation by H simply exchanges X and Z, we also have [H_1] ∈ I_𝒞_n(σ_1). Additionally, we have the operator CX(ZH⊗ X)CX = 1/√(2)[ 0 1 1 0; 1 0 0 1; -1 0 0 1; 0 -1 1 0 ]. The action of this matrix on 𝒫_2 by conjugation is CX(ZH⊗ X)CX(X⊗ I)CX(HZ⊗ X)CX = -Z⊗ X CX(ZH⊗ X)CX(Z⊗ I)CX(HZ⊗ X)CX = X⊗ X CX(ZH⊗ X)CX(I⊗ X)CX(HZ⊗ X)CX = I⊗ X CX(ZH⊗ X)CX(I⊗ Z)CX(HZ⊗ X)CX = -ZX⊗ ZX. Then we have σ_1([ZX⊗ ZX]) = σ_1([Z⊗ Z])σ_1([X⊗ X]) = (-1)^2 = σ_1([I⊗ Z]). Thus the action of [CX(ZH⊗ X)CX] preserves σ_1. The action of IN_n leaves [X_1Z_1] invariant, thus there are 2^2n-1 possible images of the pair ([X_1],[Z_1]). The order of IN_n is thus (2^2n-1)(2^2n)|Sp(2(n-1),2)| = 2^2n+1|𝒞_n-1|. Note that there is no way to write [X_1] in terms of the other generators of IN_n. Since [X_1]^-1 = [X_1], any reduction we perform on a word written in these generators preserves the parity of the number of [X_1]s. For g a word in IN_n, let n_X_1(g) be the number of times X_1 appears in g. The above analysis implies that the map σ_1': IN_n ⟶{1,-1} g ⟼ (-1)^n_X_1(g) is an irreducible character of IN_n, and clearly Res^IN_n_𝒫_nσ_1' = σ_1. Thus σ_1' is an extension of σ_1 to IN_n, its own inertia group. Since all nontrivial irreducible representations of the projective Pauli group ℤ_2^2n are conjugate, we have that any irreducible representation σ of the projective Pauli group can be extended to a representation σ' of its own inertia group I(σ) in the Clifford group. We apply the little group method to obtain the following. The irreducible representations of the projective Clifford group are 𝒞_n = {Ind^𝒞_n_IN_n (σ_1'⊗ψ) : ψ∈IN_n/ℤ^2n_2}∪{ψ: ψ∈Sp(2n,2)}, where the ψ in the left set is an inflation to an irreducible representation of IN_n and in the right set is an inflation to an irreducible representation of 𝒞_n. Theorem <ref> gives a complete list of the irreducible representations of the n-qubit Clifford group. To actually calculate these representations, we would like to know the representations of Sp(2n,2) and those of the quotient group IN_n/𝒫_n. Using theorem <ref>, we may calculate the following example character tables. The character table of the 1-qubit projective Pauli group is [I_2] [X] [Z] [Y] ψ_1 1 1 1 1 ψ_2 1 -1 1 -1 ψ_3 1 1 -1 -1 ψ_4 1 -1 -1 1 . Notice that the inertia group of the representation ψ_4 is just the subgroup I(ψ_4) = ⟨[H],[X],[Z]⟩⊂𝒞_1. The extension ψ_4' of ψ_4 to I(ψ_4) is achieved by defining the value of ψ_4'([H]) = 1. The character table of I(ψ_4)/ℤ_2^2 is [[I_2]] [[H]] ϕ_1 1 1 ϕ_2 1 -1 . Via GAP4 calculation, the character table of Sp(2,2) is [[I_2]] [[H]] [[S]] θ_1 1 1 1 θ_2 1 -1 1 θ_3 2 0 -1 . Then by theorem <ref> the character table of the 1-qubit Clifford group 𝒞_1 is [I_2] [S] [X] [H] θ_1 1 1 1 1 1 θ_2 1 -1 1 1 -1 θ_3 2 0 -1 2 0 Ind^𝒞_1_I(ψ_4)(ψ_4'⊗ϕ_1) 3 -1 0 -1 1 Ind^𝒞_1_I(ψ_4)(ψ_4'⊗ϕ_2) 3 1 0 -1 -1 , §.§ The representation theory of the inertia quotient group To understand the irreducible representations of the Clifford group with nontrivial restriction to the Pauli group, we will now examine the irreducible representations of IN_n/ℤ_2^2n. Notice that H_1^M = (XZ⊗ X)H_1, and that H_1 commutes with all other non-Pauli operators in the generating set of IN_n. 
From this we see that ℤ_2 ≅⟨[[I]],[[H_1]]⟩ forms an order 2 normal subgroup of IN_n/ℤ^2n_2. For convenience we make the definition ℋ_1,n:=⟨ [H_1],{[X_i],[Z_i], for i=1,…,n}⟩. We are ready to prove the following lemma. The inertia quotient group has the affine symplectic group as a quotient group, that is (IN_n/ℤ_2^2n)/ℤ_2 ≅ IN_n/ℋ_1,n≅ Sp(2(n-1),2)⋉ℤ_2^2(n-1). For x ∈ℤ_2^2(n-1), let W_x be the Weyl operator defined in the proof of theorem <ref>. Consider operators of the form X⊗ W_x and Z⊗ W_x, which we will call inertia Weyl operators since by definition n-qubit Weyl operators of this form are preserved by the inertia subgroup IN_n under conjugation. Since the n-qubit Pauli group is generated by these inertia Weyl operators, the action of U ∈ IN_n by conjugation on these operators defines the action of U on the Pauli group. From theorem <ref>, we know that conjugating an inertia Weyl operator by I⊗ U for U ∈Cliff(n-1) will give us X⊗ W_Γ x or Z⊗ W_Γ x respectively for some Γ∈ Sp(2(n-1),2) with potential phase factors. Furthermore, we know that any such Γ is realized by some U ∈Cliff(n-1). Conjugation by H_1 will exchange the X and Z on the first qubit. Conjugating by MS_2H_2S^-1_2 amounts to multiplication by X_2 on the left with a possible phase factor of -1 and a possible exchange of X and Z on the first qubit. Similarly conjugation by H_1H_2MS_2H_2S^-1_2H_2 amounts to multiplication by Z_2 on the left with a possible phase factor of -1 and a possible exchange of X for Z. Notice that the actions by conjugation of the matrices we have examined generate the affine symplectic group Sp(2(n-1),2)⋉ℤ_2^2(n-1) on the index x of a Weyl operator W_x, with an extra operator H that exchanges the X and Z on the first qubit. Since the equivalence classes of said matrices also generate IN_n, and the inertia Weyl operators along with H_1 generate ℋ_1,n, we have the result. From the proof of this lemma, we see that the quotient group IN_n/ℤ_2^2n is a central extension of Sp(2(n-1),2)⋉ℤ^2(n-1)_2 by ℤ_2. Through GAP4 calculation we have determined that in general, the extension will not be a direct product, although it is in the case of two qubits. Fix a normalized section t of the central extension IN_n/ℤ_2^2n of Sp(2(n-1),2)⋉ℤ^2(n-1)_2 by ℤ_2. Let b(h,k)∈ℤ_2 be the corresponding factor set. The only nontrivial irreducible representation ξ of ℤ_2 maps the non-identity element to -1. Let η(h,k) = ξ(b(h,k)). By applying proposition <ref>, we obtain IN_n/ℤ_2^2n = {ψ:ψ∈Sp(2(n-1),2)⋉ℤ^2(n-1)_2}∪{Θ:Ψ∈ (Sp(2(n-1),2)⋉ℤ^2(n-1)_2)^η}, with Θ defined by Θ(t(h)b) = ξ(b)Ψ(h) for all h∈ Sp(2(n-1),2)⋉ℤ^2(n-1)_2 and b∈ℤ_2. § LIFTING IRREDUCIBLE CHARACTERS TO HIGHER DIMENSIONAL CLIFFORD GROUPS We will now explain how irreducible characters of the n-qubit Clifford group can be used to explicitly construct characters of the (n+1)-qubit Clifford group. First, we need to understand the representation theory of the affine symplectic group Sp(2n,2)⋉ℤ_2^2n. It is clear that if U acts on ℤ^2n_2 by Γ∈ Sp(2n,2) then ^(𝐱,Γ)σ∼^Uσ for any σ∈ℤ^2n_2 and (𝐱,Γ)∈ Sp(2n,2)⋉ℤ^2n_2. Let σ_1 be the irreducible representation of ℤ^2n_2 defined in section 5, then it follows that I_Sp(2n,2)⋉ℤ^2n_2(σ_1)/ℤ^2n_2≅ IN_n/ℤ^2n_2 Let σ_1” be the extension of σ_1 to I_Sp(2n,2)⋉ℤ^2n_2(σ_1) via σ_1”(x,Γ) = σ_1(x). Applying theorem <ref> we immediately obtain the following. 
The irreducible representations of the affine symplectic group are Sp(2n,2)⋉ℤ_2^2n = {Ind^Sp(2n,2)⋉ℤ_2^2n_(IN_n/ℤ^2n_2)⋉ℤ_2^2n (σ_1”⊗ψ) : ψ∈IN_n/ℤ^2n_2}∪{ψ: ψ∈Sp(2n,2)}, where ψ in the left set is the inflation to (IN_n/ℤ^2n_2)⋉ℤ_2^2n and in the right set is inflation to Sp(2n,2)⋉ℤ_2^2n. We can now prove the following lemma which was first proven by Bernd Fischer using the technique of Fischer-Clifford matrices in <cit.>. Sp(2n,2)⋉ℤ^2n_2 and 𝒞_n have identical character tables. This is trivially true if n=1, as in that case, the groups are isomorphic. For n>1 we first notice that (Sp(2n,2)⋉ℤ^2n_2)/ℤ^2n_2 ≅ Sp(2n,2)≅𝒞_n/𝒫_n. The irreducible characters that come from Sp(2n,2) are nothing but inflations of the irreducible characters of Sp(2n,2). Thus if χ is an irreducible character of Sp(2n,2) and χ and χ' are its inflations to 𝒞_n and Sp(2n,2)⋉ℤ^2n_2 respectively, we have χ(U) = χ(Γ) = χ'(𝐱,Γ) for all 𝐱∈ℤ^2n_2 and U∈𝒞_n such that UW_𝐱U^† = (-1)^f(𝐱)W_Γ𝐱. Fix a normalized section t: Sp(2n,2)→𝒞_n of the extension 1→ℤ_2^2n→𝒞_n→ Sp(2n,2)→ 1 such that σ_1(t(Γ)) = 1. Define mapping ϕ: Sp(2n,2)⋉ℤ^2n_2→𝒞_n by ϕ(𝐱,Γ) = W_𝐱t(Γ). It is clear that this mapping is one-to-one and onto, and σ_1”(s) = σ_1'(ϕ(s)) for all s ∈ Sp(2n,2)⋉ℤ^2n_2. Using the notation of equation <ref> we see that χ(ϕ(s)) = χ'(s) for all s∈ Sp(2n,2)⋉ℤ^2n_2. Let ψ be an irreducible representation of IN_n/ℤ^2n_2, and ψ and ψ' be its inflations to I_𝒞_n(σ_1) and I_Sp(2n,2)⋉ℤ^2n_2(σ_1) respectively. From the formula for induced characters, we have Ind^Sp(2n,2)⋉ℤ^2n_2_I_Sp(2n,2)⋉ℤ^2n_2(σ_1)(ψ'⊗σ_1”)(s) = 1/|I_Sp(2n,2)⋉ ℤ^2n_2(σ_1)|∑_r∈ Sp(2n,2)⋉ℤ^2n_2 r^-1sr ∈ I_Sp(2n,2)⋉ℤ^2n_2(σ_1)ψ'⊗σ_1”(r^-1sr), and Ind^𝒞_n_I_𝒞_n(σ_1)(ψ⊗σ_1')(s) = 1/|I_𝒞_n(σ_1)|∑_r∈𝒞_n r^-1sr ∈ I_𝒞_n(σ_1)ψ⊗σ_1'(r^-1sr). Since the action by conjugation of ϕ(𝐱,Γ) depends only on Γ, we see that ϕ(r)^-1ϕ(s)ϕ(r)∈ I_𝒞_n(σ_1) if and only if r^-1sr∈ I_Sp(2n,2)⋉ℤ^2n_2(σ_1) for any r,s∈ Sp(2n,2)⋉ℤ^2n_2, and furthermore ψ(ϕ(r)^-1ϕ(s)ϕ(r)) = ψ'(r^-1sr). Finally, we obtain Ind^𝒞_n_I_𝒞_n(σ_1)(ψ⊗σ_1')(ϕ(s)) = Ind^Sp(2n,2)⋉ℤ^2n_2_I_Sp(2n,2)⋉ℤ^2n_2(σ_1)(ψ'⊗σ_1”)(s) for all s ∈ Sp(2n,2)⋉ℤ^2n_2. By column orthogonality of character tables we have that r,s∈ Sp(2n,2)⋉ℤ^2n_2 are conjugate if and only if ϕ(r) and ϕ(t) are conjugate in 𝒞_n. Thus the map ϕ respects conjugacy classes and the character tables are identical. Taken together these lemmas imply a remarkable property of the Clifford group. Let ϕ:Sp(2n,2)⋉ℤ^2n_2→𝒞_n be the map defined in the proof of lemma <ref>. If χ is an irreducible character of the n-qubit Clifford group 𝒞_n then Ind^𝒞_n+1_IN_n+1(χ∘ϕ)⊗σ_1' is an irreducible character of the (n+1)-qubit Clifford group 𝒞_n+1. By lemma <ref> we see that every irreducible character χ of 𝒞_n is also an irreducible character of Sp(2n,2)⋉ℤ^2n_2 when precomposed with the bijection ϕ of the conjugacy classes of the two groups. We can then see by lemma <ref> that the irreducible character χ_ϕ := χ∘ϕ of Sp(2n,2)⋉ℤ^2n_2 inflates to an irreducible character χ_ϕ of IN_n+1 that contains ℋ_1,n+1 in its kernel. In Particular this means that 𝒫_n+1 will be contained in the kernel of χ_ϕ, so we know that χ_ϕ⊗σ_1' is an irreducible character of IN_n+1 that has σ_1 in the decomposition of its restriction to 𝒫_n+1 into irreducible representations. Therefore, by the Clifford correspondence we obtain the result. This gives a straightforward method for obtaining irreducible characters of the (n+1)-qubit Clifford group from irreducible characters of the n-qubit Clifford group. 
As an example, we demonstrate the lifting procedure from the 1-qubit to the 2-qubit Clifford group. In this case, because of the isomorphism 𝒞_1 ≅ Sp(2,2)⋉ℤ^2_2, we know that the inertia quotient IN_2/ℤ^4_2 group is a central extension of 𝒞_1 by ℤ_2. Moreover, in this case, the extension splits and we have IN_2/ℤ^2_2 ≅𝒞_1×ℤ_2. If we denote the characters of 𝒞_1 by χ_i for i ∈{1,… 5} and denote the characters of ℤ_2 by θ_1 and θ_2 where θ_1 is the trivial representation. The character table of 𝒞_1×ℤ_2 is ([I_2],0) ([I_2],1) μ_1:=χ_1×θ_1 1 1 1 1 1 1 1 1 1 1 μ_2:=χ_2×θ_1 1 1 1 -1 -1 -1 -1 1 1 1 μ_3:=χ_1×θ_2 1 1 1 -1 -1 1 1 -1 -1 -1 μ_4:=χ_2×θ_2 1 1 1 1 1 -1 -1 -1 -1 -1 μ_5:=χ_3×θ_1 2 2 -1 0 0 0 0 -1 2 2 μ_6:=χ_3×θ_2 2 2 -1 0 0 0 0 1 -2 -2 μ_7:=χ_4×θ_1 3 -1 0 -1 1 -1 1 0 3 -1 μ_8:=χ_4×θ_2 3 -1 0 -1 1 1 -1 0 -3 1 μ_9:=χ_5×θ_2 3 -1 0 1 -1 -1 1 0 -3 1 μ_10:=χ_5×θ_1 3 -1 0 1 -1 1 -1 0 3 -1 . So from every character of 𝒞_1 we will get two characters of 𝒞_2, and the character table of 𝒞_2 is determined entirely by these characters, and inflated characters from Sp(4,2). Thus the character table of 𝒞_2 is ψ_1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ψ_2 1 1 -1 -1 1 1 1 -1 -1 -1 1 -1 1 1 -1 -1 1 1 1 -1 -1 ψ_3 5 5 -1 -1 1 1 1 3 3 3 -1 -1 2 2 1 1 -1 -1 0 0 0 ψ_4 5 5 1 1 1 1 1 -3 -3 -3 -1 1 2 2 -1 -1 -1 -1 0 0 0 ψ_5 5 5 -3 -3 1 1 1 1 1 1 2 0 -1 -1 -1 -1 -1 -1 0 1 1 ψ_6 5 5 3 3 1 1 1 -1 -1 -1 2 0 -1 -1 1 1 -1 -1 0 -1 -1 ψ_7 9 9 -3 -3 1 1 1 -3 -3 -3 0 0 0 0 1 1 1 1 -1 0 0 ψ_8 9 9 3 3 1 1 1 3 3 3 0 0 0 0 -1 -1 1 1 -1 0 0 ψ_9 10 10 -2 -2 -2 -2 -2 2 2 2 1 1 1 1 0 0 0 0 0 -1 -1 ψ_10 10 10 2 2 -2 -2 -2 -2 -2 -2 1 -1 1 1 0 0 0 0 0 1 1 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_1⊗σ_1') 15 -1 -3 1 -1 -1 3 1 1 -7 0 0 3 -1 1 -1 1 -1 0 -1 1 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_2⊗σ_1') 15 -1 -3 1 3 -1 -1 -3 1 5 0 0 3 -1 -1 1 -1 1 0 -1 1 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_3⊗σ_1') 15 -1 3 -1 -1 -1 3 -1 -1 7 0 0 3 -1 -1 1 1 -1 0 1 -1 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_4⊗σ_1') 15 -1 3 -1 3 -1 -1 3 -1 -5 0 0 3 -1 1 -1 -1 1 0 1 -1 ψ_11 16 16 0 0 0 0 0 0 0 0 -2 0 -2 -2 0 0 0 0 1 0 0 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_5⊗σ_1') 30 -2 -6 2 2 -2 2 -2 2 -2 0 0 -3 1 0 0 0 0 0 1 -1 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_6⊗σ_1') 30 -2 6 -2 2 -2 2 2 -2 2 0 0 -3 1 0 0 0 0 0 -1 1 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_7⊗σ_1') 45 -3 -3 1 -3 1 1 1 -3 9 0 0 0 0 1 -1 -1 1 0 0 0 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_8⊗σ_1') 45 -3 3 -1 -3 1 1 -1 3 -9 0 0 0 0 -1 1 -1 1 0 0 0 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_9⊗σ_1') 45 -3 -3 1 1 1 -3 5 -3 -3 0 0 0 0 -1 1 1 -1 0 0 0 Ind^𝒞_2_I_𝒞_2(σ_1)(μ_10⊗σ_1') 45 -3 3 -1 1 1 -3 -5 3 3 0 0 0 0 1 -1 1 -1 0 0 0 , where the ψ_i are inflated characters from Sp(4,2). amsalpha
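As a consistency check on these tables (our own verification, using only quantities stated above), the squared degrees of the irreducible characters must sum to the group order given by the formula |𝒞_n| = 2^n^2+2n∏_j=1^n(2^2j-1), and lifting a character of 𝒞_1 multiplies its degree by [𝒞_2:IN_2] = 2^4-1 = 15, so the degrees 1, 1, 2, 3, 3 of 𝒞_1 should reappear as 15, 15, 30, 45, 45 among the induced characters of 𝒞_2. Both checks pass:

from math import prod

# Sanity checks (ours) on the tables above.
def clifford_order(n):
    return 2 ** (n * n + 2 * n) * prod(2 ** (2 * j) - 1 for j in range(1, n + 1))

deg_C1 = [1, 1, 2, 3, 3]                       # degrees read off the C_1 table
deg_C2 = [1, 1, 5, 5, 5, 5, 9, 9, 10, 10,      # degrees read off the C_2 table
          15, 15, 15, 15, 16, 30, 30, 45, 45, 45, 45]

assert sum(d * d for d in deg_C1) == clifford_order(1) == 24
assert sum(d * d for d in deg_C2) == clifford_order(2) == 11520

# Lifting C_1 -> C_2 multiplies degrees by [C_2 : IN_2] = 2**4 - 1 = 15; the
# resulting degrees all occur among the induced characters in the C_2 table.
lifted = sorted(15 * d for d in deg_C1)        # [15, 15, 30, 45, 45]
assert all(d in deg_C2 for d in lifted)
print("degree sums:", sum(d * d for d in deg_C1), sum(d * d for d in deg_C2))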
http://arxiv.org/abs/2307.04799v1
20230710180008
Black hole complementarity from microstate models: A study of information replication and the encoding in the black hole interior
[ "Tanay Kibe", "Sukrut Mondkar", "Ayan Mukhopadhyay", "Hareram Swain" ]
hep-th
[ "hep-th", "cond-mat.str-el", "gr-qc", "quant-ph" ]
http://arxiv.org/abs/2307.05361v1
20230708230112
A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics
[ "Yue Shi", "Shuhao Ma", "Yihui Zhao", "Zhiqiang Zhang" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.LG", "cs.RO" ]
Muscle force and joint kinematics estimation from surface electromyography (sEMG) is essential for real-time biomechanical analysis of the dynamic interplay among neural muscle stimulation, muscle dynamics, and kinetics. Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner. However, the small sample nature and physical interpretability of biomechanical analysis limit the applications of DNNs. This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics. This method seamlessly integrates Lagrange's equation of motion and an inverse dynamic muscle model into the generative adversarial network (GAN) framework for structured feature decoding and extrapolated estimation from small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to restrain the structured decoding of the high-level features so that it follows the laws of physics, and a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding consistency between the physical representations of the extrapolated estimations and the physical references. Experimental validations are conducted in two scenarios (i.e., walking trials and wrist motion trials). Results indicate that the estimated muscle forces and joint kinematics are unbiased compared to the physics-based inverse dynamics, and that the proposed method outperforms the selected benchmark methods, including the physics-informed convolutional neural network (PI-CNN), the vanilla generative adversarial network (GAN), and the multi-layer extreme learning machine (ML-ELM). § INTRODUCTION Human movements involve complex interactions within the neuromuscular system. The sEMG-driven estimation of muscle force and joint kinematics provides a detailed biomechanical analysis for understanding the neuromuscular system <cit.>, which benefits various applications, such as sports rehabilitation treatments <cit.>, <cit.> and optimizing robotic design for individuals with impairments <cit.>. Although physics-based models explicitly explain and map sEMG signals to joint kinematics, the high cost of their static optimization has always limited the practical applications of these models <cit.>. Recently, deep neural networks (DNNs) have provided an alternative solution for mapping sEMG signals to joint kinetics and kinematics <cit.>. In this kind of model, the multi-layer convolution architecture has been explored to establish relationships between movement variables and neuromuscular status <cit.>. For example, Nasr et al. <cit.> mapped the sEMG signals to the regression of joint angle, joint velocity, joint acceleration, joint torque, and activation torque, illustrating that multi-layer convolution operators are capable of extracting underlying motor control information. Zhang et al. <cit.> developed an active deep convolutional neural network to enhance the dynamic tracking capability of the musculoskeletal model on unseen data. Despite these advantages, traditional DNNs are data-hungry, and their performance is highly dependent on the quantity and quality of data <cit.>. Meanwhile, biomechanical analysis is typically a physics-based extrapolation process with a small-sample nature <cit.>.
Therefore, it is a challenge to train DNNs with small sample data so that the DNNs perform consistently with the physics-based model. To fill this research gap, the low-shot learning (LSL) technique has attracted many researchers' attention <cit.>. For example, Rahimian et al <cit.> introduced a Few-Shot Learning Hand Gesture Recognition (FS-HGR) model to enhance the generalization capability of DNNs from a limited number of instances. Lehmler et al <cit.> explored a low-shot learning methodology that adjusts DNNs to new users with only a small size of training data. In addition, the generative adversarial network (GAN) framework has shown great potential in handling physical extrapolating and predictive problems <cit.>. The GAN-based model is capable of discovering the structured patterns of the references and extrapolating the underlying data distribution characteristics during the adversarial learning process <cit.>. For example, Chen et al <cit.> tested and evaluated the performance of the deep convolutional generative adversarial network (DCGAN) on sEMG-based data enhancement, and their results indicated that the extrapolated data is able to augment the diversity of the original data. Fahimi et al <cit.> proposed a generative adversarial learning framework for generating artificial electroencephalogram (EEG) data to extrapolate the brain-computer interface, and their findings suggest that generated EEG augmentation can significantly improve brain-computer interface performance. In this study, we propose a physics-informed low-shot learning method for muscle force and joint kinematics estimation from multi-channel sEMG signals. This method seamlessly integrates physics knowledge with the GAN framework for structured feature decoding and extrapolated estimation from the small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to restrain the structured decoding of the high-level features following the laws of physics. And a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding the consistent physical representation of the extrapolated estimations and the physical references. Results show the muscle forces and joint kinematics estimated from the proposed method are unbiased compared to the physics-based inverse dynamics. The remainder of this paper is organized as follows: Section <ref> detailed describes the algorithm of the proposed physics-informed policy gradient for reinforcement generative adversarial learning, including the mathematics framework of the algorithm and network architectures. Section <ref> presents the material and experimental methods. Section <ref> discusses the experimental results and model evaluations. and Section <ref> presents the conclusions. § PHYSICS-INFORMED LOW-SHOT LEARNING METHOD The continuous estimation of muscle forces (F) and joint kinematics(θ) from multi-channel sEMG can be denoted as the time-series generation problem. Thus, given a real multi-channel sEMG time series, we train a σ parameterized generative network G_σ to estimate the muscle force (F̂) and joint kinematics (θ̂). In this section, we propose a GAN framework, as shown in Fig.<ref>, to train the G_σ on the small sample data. Specifically, we denote the F̂ and θ̂ estimated by G_σ as the negative samples (see details in Section <ref>), the ground truth (θ) and the inverse dynamics-based (F) <cit.> as positive samples (i.e. references). 
The ϕ-parameterized discriminative model D_ϕ is introduced to distinguish the positive samples from the negative samples (see details in Section <ref>). During adversarial learning, the task of D_ϕ is to determine whether an input sample is positive or negative, and the task of G_σ is to generate unbiased negative samples that fool the discriminator D_ϕ. The model optimization process is driven by the newly proposed physics-informed policy gradient (see details in Section <ref>), which rewards consistency of the physical representation and structural characteristics between the positive and negative samples. §.§ GAN optimization via physics-informed policy gradient The physics-informed policy gradient method, inspired by reinforcement learning <cit.>, aims to optimize the learning process of the GAN-based model so that it yields physical extrapolations from small sample data (i.e., low-shot learning). Mathematically, the physics-informed policy gradient maximizes the expected reward J(σ) based on the physics law and the structured characteristics of the small sample data. J(σ) consists of two parts, the structural reward R_G_σ and the physics representation action Q_D_ϕ^G_σ, and is defined as follows. J(σ) = 𝔼[R_G_σ(G_σ(sEMG_0:T))] · Q_D_ϕ^G_σ(G_σ(sEMG_0:T), [F,θ]_0:T) = 𝔼[R_G_σ([F̂, θ̂]_0:T)] · Q_D_ϕ^G_σ([F̂, θ̂]_0:T, [F, θ]_0:T) where sEMG_0:T is the input multi-channel sEMG time series over T time steps. J(σ) starts from the expected reward at a predetermined state taken from the positive samples. Then, R_G_σ and Q_D_ϕ^G_σ jointly optimize the generative network G_σ to generate unbiased ([F̂, θ̂]_0:T) that follow the laws of physics. Specifically, the structural reward R_G_σ is computed by G_σ and defined as follows. R_G_σ([F̂, θ̂]_0:T) = exp^PL^2([F̂, θ̂]_0:T) where PL([F̂, θ̂]_0:T) is the physics law used to restrict the hierarchical structure of the generated data, which provides additional information to regularize the learning process on the small sample data. In this case, we use the Lagrange equation of motion <cit.> as the physics law, which is defined as follows. PL([F̂, θ̂]_0:T) = 1/T∑_t=1^T (m(θ̂_t)θ̈̂_t + c(θ̂_t, θ̇̂_t) + g(θ̂_t) - ∑_n=1^N F̂^n_t)^2 where T is the number of time steps, N is the number of channels of F̂, and m(θ̂_t), c(θ̂_t, θ̇̂_t), and g(θ̂_t) denote the mass matrix, the centrifugal and Coriolis force, and the gravity, respectively <cit.>. In this manner, G_σ generates the structured outputs (F̂, θ̂). The term Q_D_ϕ^G_σ is computed by D_ϕ and interprets the physics-constraint action value as the probability, estimated by D_ϕ, that a sample is physically real. These physics-constraint action values lead to the improvement of the GAN model in physical extrapolation from the small training data. Q_D_ϕ^G_σ can be formulated as: Q_D_ϕ^G_σ(G_σ(sEMG_0:T), [F, θ]_0:T) = 𝔼_[F̂, θ̂]_0:T∼ [F, θ]_0:T[log D_ϕ([F̂, θ̂]_0:T)] + 𝔼_[F̂, θ̂]_0:T∼ G_σ(sEMG_0:T)[log (1-D_ϕ([F̂, θ̂]_0:T))] For each epoch, once the new R_G_σ and Q_D_ϕ^G_σ have been obtained, the policy model G_σ is updated following the gradient of the reward function as follows. ∇_σ J(σ) = 𝔼_[F̂, θ̂]_0:T∼ G_σ(sEMG_0:T)∑∇_σ R_G_σ([F̂, θ̂]_0:T|[F, θ]_0:T) · Q_D_ϕ^G_σ([F̂, θ̂]_0:T, [F, θ]_0:T) Using likelihood ratios, the unbiased estimate of Eq. <ref> for one epoch can be written as follows.
∇_σJ(σ) ≃1/T∑_t=1^T∑_y_t ∈ [F̂, θ̂]_t∇_σ R_G_σ(y_t|[F, θ]_t) · Q^G_σ_D_ϕ (y_t, [F, θ]_t) =1/T∑_t=1^T ∑_y_t ∈ [F̂,θ̂]_t G_σ(y_t|[F, θ]_t) ∇_σlog G_σ(y_t|[F, θ]_t) · Q^G_σ_D_ϕ(y_t, [F, θ]_t) The parameters of the policy model G_σ can be updated as follows. σ←σ + α∇_σ J(σ) where α∈ℝ is the learning rate. To summarize, Algorithm 1 provides an in-depth look at our proposed GAN optimization via a physics-informed policy gradient. Initially, G_σ is pre-trained on the training set sEMG = {X_1:T} using the maximum likelihood estimation (MLE). And then, the G_σ and D_ϕ undergo adversarial learning. As the G_σ improves, the D_ϕ is routinely retrained to stay synchronized with the G_σ improvement. We ensure balance by generating an equal number of negative samples for each training step as the positive samples. §.§ The generative network The proposed physics-informed low-shot learning method does not depend on the specific generative network architecture. In this study, considering the long-term temporal dependencies of the F and θ sequences to the input multi-channel sEMG sequence, we employ the Long Short-Term Memory (LSTM) cells to our generative model <cit.>. The architecture of the generator network G is shown in Fig.<ref>. It serves three functions: multi-channel sEMG feature extraction, residual learning with LSTM, and musculoskeletal tokens sequence generation. Firstly, for the multi-channel sEMG feature extraction, a 1-dimensional (1D) convolution filter with a 2 /times 1 kernel is introduced to capture the multiple sEMG features at time step t. The extracted convolution features represent the hierarchical structures of the multi-channel sEMG. In this study, the convolution kernel is set to 1 × b for a b-channel sEMG input. Considering the batch normalization (BN) layer would normalize the features and get rid of the range flexibility for upscaling features <cit.>, no BN layer is used here to avoid blurring the sEMG responses hidden in the extracted features. The max-pooling layer is used to combine the extracted sEMG features into a single neuron by using the maximum value from each convolution window. The max-pooling operation reduces the number of parameters and network computation costs and has the effect of adjusting over-fitting. Secondly, the LSTM blocks are employed for residual learning of the time-series characteristics of the target musculoskeletal tokens. The LSTM layer is well suited for time-series sequence generation by addressing the explosive and vanishing gradient issues <cit.>. An LSTM block consists of a memory cell, an input gate, an output gate, and a forget gate, the detailed definitions of the components are described in <cit.>'s study. Specifically, in this study, in time step t, the memory cell remembers structured feature values over the previous t-1 intervals and the three gates regulate the flow of information into and out of the memory cell, which has a great preference for preserving long-term temporal structure characteristics by consolidating previous temporal correlations as memory units. Meanwhile, the high-level sEMG features extracted from the convolution layer represent the current multi-channel sEMG responses to muscle force and joint kinematics. The skip-connect of the memory cell and the high-level sEMG features not only represent extracted local kinetic invariances but also represent the temporal dynamics of the motions. It is noteworthy that the traditional LSTM layer only produces fitness between the current time step and the previous time steps. 
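For illustration, a minimal PyTorch sketch of a generator with this overall shape (a 1D convolution over the sEMG channels, max pooling, an LSTM, and a fully connected output head) is given below. The layer sizes, kernel shapes and class names are our own placeholder choices, and the skip connection and Monte Carlo roll-out described in this section are omitted, so this is a simplified sketch rather than the authors' implementation.

import torch
import torch.nn as nn

class SketchGenerator(nn.Module):
    """Illustrative generator: sEMG (batch, T, channels) -> muscle forces and joint angle."""
    def __init__(self, n_emg=5, n_muscles=5, hidden=64):
        super().__init__()
        # 1D convolution + max pooling over the multi-channel sEMG sequence.
        self.conv = nn.Sequential(
            nn.Conv1d(n_emg, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        # LSTM over the pooled feature sequence.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        # Fully connected head producing n_muscles forces plus one joint angle per frame.
        self.head = nn.Linear(hidden, n_muscles + 1)

    def forward(self, emg):                   # emg: (batch, T, n_emg)
        x = self.conv(emg.transpose(1, 2))    # (batch, 32, T // 2)
        x, _ = self.lstm(x.transpose(1, 2))   # (batch, T // 2, hidden)
        out = self.head(x)                    # (batch, T // 2, n_muscles + 1)
        return out[..., :-1], out[..., -1]    # estimated forces, joint angle

# Example: a batch of 8 wrist-motion windows of 156 frames with 5 sEMG channels.
gen = SketchGenerator()
forces_hat, theta_hat = gen(torch.randn(8, 156, 5))
print(forces_hat.shape, theta_hat.shape)      # (8, 78, 5) and (8, 78)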
However, we also expect the model to account for future outputs, not only the preceding time steps. To compute the action value for future physical fitness, a Monte Carlo (MC) search with a roll-out strategy is used to sample the unknown last T-t time steps, and the N-time Monte Carlo search can be formulated as: {(F_0:T, θ_0:T)^1, ..., (F_0:T, θ_0:T)^N} = MC(F_0:t, θ_0:t) Finally, the fully connected layers are used to generate the musculoskeletal token sequence over a motion period. The output of the LSTM unit is flattened to a feature vector and scaled to the muscle force F and joint kinematics θ. §.§ The discriminative model In this study, a ϕ-parameterized discriminator network D_ϕ is built to guide the iterations of G_σ on the small sample data. D_ϕ outputs a probability indicating the heterogeneity between [F̂, θ̂] and [F, θ]. For this purpose, we employ a convolutional neural network (CNN) <cit.> as the discriminative model because of its successful applications in sequence classification. In this study, we concentrate on the situation where the discriminator estimates the likelihood that a completed [F̂, θ̂] time series comes from the physical-law model (i.e., ID). We first represent an input muscle force and joint kinematics time series x_1,...,x_T as E_0:T = [F̂, θ̂]_0 ⊕ [F̂, θ̂]_1 ⊕ ... ⊕ [F̂, θ̂]_T where x_t ∈ℝ^b is the muscle force and joint kinematics at time step t and ⊕ is the concatenation operator used to build the matrix E_0:T. Then the convolution operator is used to produce a new feature map: c_i = ρ(w ⊙ E_i:i+l-1 + b) where ⊙ is the element-wise product, b is a bias term and ρ is a non-linear function. In this study, the discriminator, as shown in Fig. <ref>, employs various numbers of kernels with different window sizes to extract different features from the input musculoskeletal sequence, and a max-pooling operation over the feature maps is used to reduce the number of parameters and the network computation cost. In order to enhance the discrimination performance, a highway operator <cit.> based on the pooled feature maps is also employed in our discriminative model. Finally, a fully connected layer with softmax activation is used to output the estimated likelihood that the input sequence conforms to physical laws. § MATERIAL AND EXPERIMENTAL METHODS In this study, we test our proposed method on two joint motion scenarios. The first is knee joint modeling from an open-access dataset of walking trials, and the second is wrist joint modeling from a self-collected dataset of wrist motions. §.§ Open-access dataset of walking trials The open-access dataset of walking trials is obtained from a real-world experiment reported in <cit.>. This dataset involves six healthy participants with an average age of 12.9 ± 3.2 years and an average weight of 51.8 ± 19.1 kg. Participants are instructed to walk at four distinct speeds: very slow (0.53 ± 0.1 m/s), slow (0.75 ± 0.1 m/s), free (1.15 ± 0.08 m/s), and fast (1.56 ± 0.21 m/s). The sEMG signals are captured from the biceps femoris short head (BFS) and the rectus femoris (RF), as they are the primary flexor and extensor of the knee joint. In this study, we normalize each gait cycle to 100 frames for model training and testing, and use the original data for model extrapolation evaluation. In the model training and testing session, each walking trial sample is formatted into a source matrix that includes the time step, gait motion data, and enveloped sEMG signals.
All of the samples from different participants are combined to create a comprehensive dataset for model training and testing. §.§ Self-collected dataset of wrist motions Our wrist motions experiment, approved by the MaPS and Engineering Joint Faculty Research Ethics Committee of the University of Leeds (MEEC 18-002), involved six participants with signed consent. Participants were instructed to keep their torso straight with their shoulder abducted at 90 degrees and their elbow joint flexed at 90 degrees. The VICON motion capture system is used to record continuous wrist flexion/extension motion. Joint motions are calculated using an upper limb model with 16 reflective markers with 250 Hz sampling rate. Concurrently, sEMG signals are captured from the primary wrist muscles (n = 1, 2,..., 5), including the flexor carpi radialis (FCR), the flexor carpi ulnaris (FCU), the extensor carpi radialis longus (ECRL), the extensor carpi radialis brevis (ECRB), and the extensor carpi ulnaris (ECU) using Avanti Sensors (sampling rate is 2000 Hz). Electrodes are placed by palpation and their placement is validated by observing the signal during contraction before the experiment. The sEMG signals and motion data were synchronized and resampled at 1000 Hz. Each participant performed five repetitive trials with a three-minute break between trials to prevent muscle fatigue. The recorded sEMG signals are pre-processed by a 20 Hz and 450 Hz band-pass filter, full rectification, and a 6 Hz low-pass filter. These signals are then normalized based on the maximum voluntary contraction recorded prior to the experiment, yielding the enveloped sEMG signals. We normalize each motion cycle into 156 frames for model training and testing, and the original data for model extrapolation evaluation. A total of 360 motion data are then combined to create a comprehensive dataset for model training and testing, and 6 motion data are used for model evaluation. §.§ Benchmark models and parameter settings To evaluate the performance and effectiveness of the proposed physics-informed policy gradient for low-shot generative adversarial learning, the benchmark models employ three representative methods, including physics-Informed convolutional neural network (PI-CNN) <cit.> which represents the state-of-the-art deep learning based musculoskeletal modeling method, ML-ELM <cit.> which represents the general musculoskeletal modeling method, and the vanilla GAN which represents the traditional GAN family without physical-law <cit.>. §.§ Evaluation metrics The evaluation metrics include 1) the metrics for evaluating the quality of the generated samples including the information entropy associated peak signal-to-noise ratio (PSNR) <cit.>, coefficient of Determination (R^2) <cit.>, root mean square error (RMSE) <cit.>, Spearman's Rank Correlation Coefficient (SRCC) <cit.>, and 2) the metrics for evaluating the mode collapse of GANs, including 1) inception score (IS) <cit.>, and 2) Frechet inception distance (FID) <cit.>. § RESULTS AND DISCUSSION In this section, we evaluate the performance of the proposed physics-informed low-shot learning in the knee joint and wrist joint scenarios. We first carry out overall comparisons of the results from the proposed and benchmark methods. We also evaluate the model performance on small training data and handling mode collapse. Lastly, we investigate the robustness and generalization performance of the proposed method in intersession scenarios. 
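Throughout the evaluation we rely on the signal-level metrics defined above (PSNR, R^2, RMSE, SRCC). As a point of reference, the following is a small numpy/scipy sketch of how these can be computed for one reference trajectory and one model estimate; the tooling and the choice of the signal range as the peak value in the PSNR are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def psnr(y_true, y_pred):
    peak = np.max(y_true) - np.min(y_true)   # signal range taken as the peak value
    return float(20.0 * np.log10(peak / rmse(y_true, y_pred)))

def srcc(y_true, y_pred):
    rho, _ = spearmanr(y_true, y_pred)       # Spearman's rank correlation
    return float(rho)

# Example usage (hypothetical arrays): psnr(reference_force, estimated_force)
# for one muscle over a normalized gait cycle.
```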
The training of the proposed framework and benchmark methods was conducted using PyTorch on a workstation equipped with NVIDIA Quadro K4200 graphics cards and 256G RAM. §.§ Overall evaluation of the muscle force dynamics modeling In this section, we first carry out overall comparisons between the proposed and benchmark methods on the test dataset. Fig. <ref> demonstrates the overall results of the joint kinematics generation in one motion circle from the proposed and benchmark methods for both the knee joint (the first row of Fig. <ref>) and wrist joint cases (the second row of Fig. <ref>). The average joint kinematics and standard deviation distribution from the proposed method align well with the ground truth in both the knee joint and wrist joint cases. These findings indicate the proposed model achieves the best performance among the benchmark models on the unbiased estimation of the joint kinematics. Similarly, Fig. <ref> and Fig.<ref> demonstrate the overall results of the muscle force estimations in one motion circle for both the knee joint (i.e. RF and BFS) and wrist joint (i.e. FCR, FCU, ECRL, ECRB, and ECU) cases, respectively. The average muscle forces estimated by the proposed method align well with the inverse dynamics, demonstrating the excellent multiple muscle tracking capability of the proposed model. In addition, the standard deviation distribution of the proposed model-generated muscle forces is perfectly consistent with the standard deviation distribution of the inverse dynamics-based references. These results indicate that the proposed model achieves the best performance among the benchmark models on the unbiased estimation of the muscle force from the multi-channel sEMG signals. To further assess the extrapolation performance quantitatively, we present detailed comparisons of the proposed and benchmark models on both of the test data and evaluation data. Table <ref> and Table <ref> respectively shows the results for the knee joint case and the wrist joint case. The results indicate that the proposed model performs best on both of the testing and evaluation data. Specifically, for model testing, the PSNR, R^2, RMSE, SRCC of the proposed model are 15.57%, 6.22%, 28.08%, 7.2% higher than that of the second best model (i.e. PI-CNN). For model evaluation, the PSNR, R^2, RMSE, SRCC of the proposed model are 24.72%, 16.29%, 38.99%, 17.66% higher than that of the second best model (i.e. GAN). In addition, because the evaluation data involve the original sEMG recordings, the comparison of the testing results and evaluation results indicates the model extrapolation from the experimental scenarios to real scenarios. The proposed model shows the best extrapolated estimation of muscle force and joint kinematics among the benchmark models, the results from the testing data and evaluation data is consistent. In contrast, the performance of the benchmark models show serious decline on evaluation data. §.§ Evaluation of low-shot learning The proposed physics-informed policy gradient incorporates the temporal relationship of the muscle force and joint kinematics dynamics from the Lagrange motion equation, resulting in an improved kinetics estimation from the low-shot samples. Initially, the physical information is used to constrain the model reward accumulated following the periodic multi-channel sEMG signals. And then, the accumulative reward is used to guide the Monte Carlo search to generate the unbiased estimation of muscle force and joint kinematics dynamics. 
To quantitatively assess the effectiveness of the proposed method in low-shot learning, we first regard the modeling results shown in Table <ref> and Table <ref> as the baselines representing the optimal performance of the proposed and benchmark models, and we then train the models with different training sample sizes for 1500 epochs as low-shot learning. The ratios of the low-shot learning results to the baseline joint kinematics modeling results, denoted P-PSNR, P-R^2, P-RMSE, and P-SRCC, are used as evaluation metrics; they describe what percentage of the baseline performance can be achieved by the models trained on few samples. The evaluation of the low-shot learning of the proposed and benchmark models on the knee joint and wrist joint kinematics modeling is shown in Table <ref>. The proposed model with the physics-informed policy gradient clearly outperforms all benchmark models in low-shot learning. With 10-shot learning it already achieves over 80% of the baseline performance in terms of PSNR, R^2, RMSE, and SRCC. In comparison, the PI-CNN and GAN models require at least 80-shot learning to reach a similar modeling performance. It can therefore be inferred that the proposed physics-informed policy gradient relies on the physical representations and temporal structural characteristics of the training data rather than on the quantity of the data. This is encouraging, as it suggests that the proposed method helps deep learning overcome the general issue of limited sample size in biomechanical engineering applications. §.§ Mode collapse evaluation Mathematically, the generative model is prone to a biased estimation caused by mode collapse: the generated samples concentrate on the part of the real distribution where they can fool the discriminative model, while other modes of the real distribution are ignored during adversarial learning. To handle this issue, the proposed physics-informed policy gradient suppresses random noise and keeps the generated feature sequence governed by the physical law, which facilitates the estimation of compound kinematics patterns and yields an unbiased estimation in kinematics generation. In order to evaluate how well the proposed method alleviates mode collapse, we test and compare the proposed model with the benchmark models from the following aspects: 1) a quantitative evaluation of the diversity of the generated motions, based on the distance-derived IS and FID metrics; 2) a monotonicity assessment of the generator iterations during the network training process; and 3) a visualization of the distributions of the real and the generated motion samples. Firstly, the quantitative evaluation of the diversity of the generated motions is conducted on the testing dataset. A higher IS and a lower FID indicate a better diversity of the generated motions, which in turn indicates an alleviation of mode collapse. The results in Table <ref> show that the proposed model outperforms the competitors in terms of the IS and FID measurements for both the knee joint and the wrist joint motion generation. In particular, compared with the benchmark GAN model, which shares the same network architecture, the proposed model achieves a 19.11% higher IS and a 14.23% lower FID. These findings suggest that the proposed physics-informed policy gradient optimization approach is effective in alleviating mode collapse during adversarial learning.
Secondly, in order to further explore the effect of the proposed physics-informed policy gradient on the mode collapse issue, we compare the generator iterations of the same GAN architecture with and without the physics-informed policy gradient (Fig. <ref>). The IS and FID curves of the GAN with the proposed physics-informed policy gradient evolve more monotonically with the number of iterations than those of the GAN without it. That is, with the physics-informed policy gradient the IS curves steadily increase and the FID curves steadily decrease for both the knee joint (<ref>a and b) and the wrist joint (<ref>c and d) cases. §.§ Model application on intra-session scenario In musculoskeletal modeling, the intra-session scenario refers to multiple sets of motions recorded within the same session. To test the robustness of the proposed model in the intra-session scenario, we use the knee joint data of one subject at different walking speeds as the intra-session evaluation dataset. The muscle force and joint kinematics modeling results, shown in Fig. <ref>, indicate that the proposed framework performs best among all compared methods. Importantly, the median and interquartile values of the proposed model with the physics-informed policy gradient remain consistent with the real data across the different walking speeds. In comparison, the medians and quartiles of the baseline methods, such as the GAN model without the physics-informed policy gradient, show significant inconsistencies with the real data, indicating a decline in performance in the intra-session scenario due to the variability in walking speeds. These findings suggest that the model optimized by the proposed physics-informed policy gradient is robust in intra-session scenarios. §.§ Model application on inter-session scenario The inter-session scenario generally refers to a situation where motion data are collected across multiple sessions. To test the robustness of the proposed model in the inter-session scenario, we use the wrist joint data of different subjects as the evaluation dataset. The muscle force and joint kinematics modeling results, shown in Fig. <ref>, indicate that the proposed framework performs best in musculoskeletal modeling among all compared methods. Specifically, the median and interquartile values of the proposed model with the physics-informed policy gradient remain consistent with the real data across the different subjects. In comparison, the baseline methods, such as the GAN model without the physics-informed policy gradient, show a decline in performance in the inter-session scenario due to the variability across subjects. These findings suggest that the model optimized by the proposed physics-informed policy gradient is robust in inter-session scenarios.
Meanwhile, the physics-informed policy gradient rewards the physical consistency of the generated muscle force and joint kinematics and the inverse dynamics-based references, which improve the extrapolation performance of the generative network. Comprehensive experiments on the knee joints and wrist joints indicate the feasibility of the proposed method. The resultant findings suggest that the proposed method performs well in handling the mode collapse issue on the small sample data, and the estimations of the muscle forces and joint kinematics are unbiased compared to the physics-based inverse dynamics. These findings suggest that the proposed method may reduce the gaps between laboratory prototypes and clinical applications. However, it is worth noting that the physics reference (i.e. the inverse dynamics for this study) plays an important role in constraining the physics representation of the generated samples. Therefore, the choice of physics module may vary when the proposed approach is extended to other application cases. Going forward, we plan to delve deeper into the properties of the physics-informed deep learning framework in the context of sEMG-based musculoskeletal modeling. We aim to investigate the potential of the low-shot learning-based model on the continuous and simultaneous estimation of multiple joint kinematic chains from sEMG signals. We also plan to adjust the compositions of the proposed method to cater to different application scenarios. Furthermore, we intend to evaluate the reliability and accuracy of the proposed framework through more complex movements. unsrtnat
http://arxiv.org/abs/2307.07238v1
20230714092133
Remarks on Parikh-recognizable omega-languages
[ "Mario Grobler", "Leif Sabellek", "Sebastian Siebertz" ]
cs.FL
[ "cs.FL" ]
Several variants of Parikh automata on infinite words were recently introduced by Guha et al. [FSTTCS, 2022]. We show that one of these variants coincides with blind counter machines as introduced by Fernau and Stiebe [Fundamenta Informaticae, 2008]. Fernau and Stiebe showed that every ω-language recognized by a blind counter machine is of the form ⋃_iU_iV_i^ω for Parikh-recognizable languages U_i, V_i, but blind counter machines fall short of characterizing this class of ω-languages. They posed as an open problem to find a suitable automata-based characterization. We introduce several additional variants of Parikh automata on infinite words that yield automata characterizations of classes of ω-languages of the form ⋃_iU_iV_i^ω for all combinations of languages U_i, V_i being regular or Parikh-recognizable. When both U_i and V_i are regular, this coincides with Büchi's classical theorem. We study the effect of ε-transitions in all variants of Parikh automata and show that almost all of them admit ε-elimination. Finally we study the classical decision problems with applications to model checking. § INTRODUCTION Finite automata find numerous applications in formal language theory, logic, verification, and many more, in particular due to their good closure properties and algorithmic properties. To enrich this spectrum of applications even more, it has been a fruitful direction to add features to finite automata to capture also situations beyond the regular realm. One such possible extension of finite automata with counting mechanisms has been introduced by Greibach in her study of blind and partially blind (one-way) multicounter machines <cit.>. Blind multicounter machines are generalized by weighted automata as introduced in <cit.>. Parikh automata (PA) were introduced by Klaedtke and Rueß in <cit.>. A PA is a non-deterministic finite automaton that is additionally equipped with a semi-linear set C, and every transition is equipped with a d-tuple of non-negative integers. Whenever an input word is read, d counters are initialized with the value 0, and every time a transition is used, the counters are incremented by the values in the tuple of that transition accordingly. An input word is accepted if the PA ends in an accepting state and, additionally, the resulting d-tuple of counter values lies in C. Klaedtke and Rueß showed that PA are equivalent to weighted automata over the group (ℤ^k, +, 0), and hence equivalent to Greibach's blind multicounter machines, as well as to reversal bounded multicounter machines <cit.>. Recently it was shown that these models can be translated into each other using only logarithmic space <cit.>. In this work we call the class of languages recognized by any of these models Parikh-recognizable. Klaedtke and Rueß <cit.> showed that the class of Parikh-recognizable languages is precisely the class of languages definable in weak existential monadic second-order logic of one successor extended with linear cardinality constraints. The class of Parikh-recognizable languages contains all regular languages, but also many more, even languages that are not context-free, e.g., the language {a^nb^nc^n | n ∈ℕ}.
On the other hand, the language of palindromes is context-free, but not Parikh-recognizable On finite words, blind counter automata, Parikh automata and related models have been investigated extensively, extending <cit.> for example by affine PA and PA on letters <cit.>, bounded PA <cit.>, two-way PA <cit.>, PA with a pushdown stack <cit.> as well as a combination of both <cit.>, history-deterministic PA <cit.>, automata and grammars with valences <cit.>, and several algorithmic applications, e.g. in the context of path logics for querying graphs <cit.>. In the well-studied realm of verification of reactive systems, automata-related approaches provide a powerful framework to tackle important problems such as the model checking problem <cit.>. However, computations of systems are generally represented as infinite objects, as we often expect them to not terminate (but rather interact with the environment). Hence, automata processing infinite words are suited for these tasks. One common approach is the following: assume we are given a system, e.g. represented as a Kripke structure K, and a specification represented as an automaton (or any formalism that can be translated into one) accepting all counterexamples. Then we can verify that the system has no bad computations by solving intersection-emptiness of K and . Yet again, the most basic model of Büchi automata (which recognize ω-regular languages) are quite limited in their expressiveness, although they have nice closure properties. Let us consider two examples. In a three-user setting in an operating system we would like to ensure that none of the users gets a lot more resources than the other two. A corresponding specification of bad computations can be modeled via the ω-language , stating that one user gets more resources than the other two users combined infinitely often. As another example consider a classical producer-consumer setting, where a producer continuously produces a good, and a consumer consumes these goods continuously. We can model this setting as an infinite word and ask that at no time the consumer has consumed more than the producer has produced at this time. Bad computations can be modeled via the ω-language {α∈{p,c}^ω|there is a prefix w of α with |w|_c > |w|_p}. Such specifications are not ω-regular, as these require to “count arbitrarily”. This motivates the study of blind-counter and Parikh automata on infinite words, which was initiated by Fernau and Stiebe <cit.>. Independently, Klaedte and Rueß proposed possible extensions of Parikh automata on infinite words. This line of research was recently picked up by Guha et al. <cit.>. Guha et al. <cit.> introduced safety, reachability, Büchi- and co-Büchi Parikh automata. These models provide natural generalization of studied automata models with Parikh conditions on infinite words. One shortcoming of safety, reachability and co-Büchi Parikh automata is that they do not generalize Büchi automata, that is, they cannot recognize all ω-regular languages. The non-emptiness problem, which is highly relevant for model checking applications, is undecidable for safety and co-Büchi Parikh automata. Furthermore, none of these models has ω-closure, meaning that for every model there is a Parikh-recognizable language (on finite words) L such that L^ω is not recognizable by any of these models. They raised the question whether (appropriate variants of) Parikh automata on infinite words have the same expressive power as blind counter automata on infinite words. 
Büchi's famous theorem states that the ω-regular languages are exactly the languages of the form ⋃_i U_i V_i^ω, where the U_i and V_i are regular languages <cit.>. As a consequence of the theorem, many properties of ω-regular languages are inherited from regular languages. For example, the non-emptiness problem for Büchi automata can basically be solved by testing non-emptiness for nondeterministic finite automata. In their systematic study of blind counter automata, Fernau and Stiebe <cit.> considered the class of ω-languages of the form ⋃_i U_i V_i^ω for Parikh-recognizable languages U_i and V_i. They proved that the class of ω-languages recognizable by blind counter machines is a proper subset of this class. They posed as an open problem to provide automata models that capture classes of ω-languages of the form ⋃_i U_i V_i^ω where U_i and V_i are described by a certain mechanism. In this work we propose reachability-regular Parikh automata, limit Parikh automata, and reset Parikh automata as new automata models. We pick up the question of Fernau and Stiebe <cit.> and consider classes of ω-languages of the form ⋃_i U_i V_i^ω where U_i and V_i are described by a certain mechanism. We define the four classes ℒ_Reg,Reg, ℒ_Reg,PA, ℒ_PA,Reg and ℒ_PA,PA of ω-languages of the form ⋃_iU_iV_i^ω, where the first index indicates whether the U_i are regular or Parikh-recognizable languages of finite words and the second index refers to the V_i in the same way. By Büchi's theorem the class ℒ_Reg,Reg is the class of ω-regular languages. We show that the newly introduced reachability-regular Parikh automata, which are a small modification of reachability Parikh automata (as introduced by Guha et al. <cit.>), capture exactly the class ℒ_PA,Reg. This model turns out to be equivalent to limit Parikh automata, a model that was hinted at in the concluding remarks of <cit.>. Fully resolving the classification of the above-mentioned classes, we introduce reset Parikh automata. In contrast to all other Parikh models, these are closed under the ω-operation, while maintaining all algorithmic properties of PA (in particular, non-emptiness is NP-complete and hence decidable). We show that the class of reset PA-recognizable ω-languages is a strict superclass of ℒ_PA,PA. We show that appropriate graph-theoretic restrictions of reset Parikh automata exactly capture the classes ℒ_Reg,PA and ℒ_PA,PA, yielding the first automata characterizations for these classes. The automata models introduced by Guha et al. <cit.> do not have ε-transitions, while blind counter machines have such transitions. Towards answering the question of Guha et al. we study the effect of ε-transitions in all Parikh automata models. We show that all models except safety and co-Büchi Parikh automata admit ε-elimination. This in particular answers affirmatively the question of Guha et al. <cit.> whether blind counter automata and Büchi Parikh automata have the same expressive power over infinite words. We show that safety and co-Büchi automata with ε-transitions are strictly more powerful than their variants without ε-transitions; in particular, ε-transitions give these models enough power to recognize all ω-regular languages. § PRELIMINARIES §.§ Finite and infinite words We write ℕ for the set of non-negative integers including 0, and ℤ for the set of all integers. Let Σ be an alphabet, i.e., a finite non-empty set, and let Σ^* be the set of all finite words over Σ. For a word w ∈Σ^*, we denote by |w| the length of w, and by |w|_a the number of occurrences of the letter a ∈Σ in w. We write ε for the empty word of length 0. An infinite word over an alphabet Σ is a function α : ℕ∖{0}→Σ.
We often write α_i instead of α(i). Thus, we can understand an infinite word as an infinite sequence of symbols α = α_1α_2α_3… For m ≤ n, we abbreviate the finite infix α_m …α_n by α[m,n]. We denote by Σ^ω the set of all infinite words over Σ. We call a subset L ⊆Σ^ω an ω-language. Moreover, for L ⊆Σ^*, we define L^ω = {w_1w_2…| w_i ∈ L ∖{ε}}⊆Σ^ω. §.§ Regular and ω-regular languages A nondeterministic finite automaton (NFA) is a tuple = (Q, Σ, q_0, Δ, F), where Q is the finite set of states, Σ is the input alphabet, q_0 ∈ Q is the initial state, Δ⊆ Q ×Σ× Q is the set of transitions and F ⊆ Q is the set of accepting states. A run of on a word w = w_1 … w_n∈Σ^* is a (possibly empty) sequence of transitions r = r_1 … r_n with r_i = (p_i-1, w_i, p_i)∈Δ such that p_0=q_0. We say r is accepting if p_n ∈ F. The empty run on ϵ is accepting if q_0 ∈ F. We define the language recognized by as L() = {w ∈Σ^* |there is an accepting run of on w}. If a language L is recognized by some NFA , we call L regular. A Büchi automaton is an NFA = (Q, Σ, q_0, Δ, F) that takes infinite words as input. A run of on an infinite word α_1α_2α_3… is an infinite sequence of transitions r = r_1 r_2 r_3 … with r_i = (p_i-1, α_i, p_i) ∈Δ such that p_0=q_0. We say r is accepting if there are infinitely many i with p_i ∈ F. We define the ω-language recognized by  as L_ω() = {α∈Σ^ω| there is an accepting run of on α}. If an ω-language L is recognized by some Büchi automaton , we call L ω-regular. Büchi's theorem establishes an important connection between regular and ω-regular languages: A language L ⊆Σ^ω is ω-regular if and only if there are regular languages U_1, V_1, …, U_n, V_n ⊆Σ^* for some n ≥ 1 such that L = U_1V_1^ω∪…∪ U_nV_n^ω. If every state of a Büchi automaton is accepting, we call a safety automaton. §.§ Semi-linear sets For some d ≥ 1, a linear set of dimension d is a set of the form {b_0 + b_1z_1 + … + b_ℓ z_ℓ| z_1, …, z_ℓ∈}⊆^d for b_0,…, b_ℓ∈^d. A semi-linear set is a finite union of linear sets. For vectors = (u_1, …, u_c)∈^c, = (v_1, …, v_d) ∈^d, we denote by · = (u_1, …, u_c, v_1, …, v_d) ∈^c+d the concatenation of and . We extend this definition to sets of vectors. Let C ⊆^c and D ⊆^d. Then C · D = {·|∈ C, ∈ D}⊆^c+d. We denote by ^d (or simply if d is clear from the context) the all-zero vector, and by ^d_i (or simply _i) the d-dimensional vector where the ith entry is 1 and all other entries are 0. We also consider semi-linear sets over (∪{∞})^d, that is semi-linear sets with an additional symbol ∞ for infinity. As usual, addition of vectors and multiplication of a vector with a number is defined component-wise, where z + ∞ = ∞ + z = ∞ + ∞ = ∞ for all z ∈, z ·∞ = ∞· z = ∞ for all z > 0∈, and 0 ·∞ = ∞· 0 = 0. §.§ Parikh-recognizable languages A Parikh automaton (PA) is a tuple = (Q, Σ, q_0, Δ, F, C) where Q, Σ, q_0, and F are defined as for NFA, Δ⊆ Q ×Σ×^d × Q is a finite set of labeled transitions, and C ⊆^d is a semi-linear set. We call d the dimension of and refer to the entries of a vector in a transition (p, a, , q) as counters. Similar to NFA, a run of on a word w = x_1 … x_n is a (possibly empty) sequence of labeled transitions r = r_1 … r_n with r_i = (p_i-1, x_i, _i, p_i) ∈Δ such that p_0 = q_0. We define the extended Parikh image of a run r as ρ(r) = ∑_i ≤ n_i (with the convention that the empty sum equals ). We say r is accepting if p_n ∈ F and ρ(r) ∈ C, referring to the latter condition as the Parikh condition. 
We define the language recognized by as L() = {w ∈Σ^* |there is an accepting run of on w}. If a language L⊆Σ^* is recognized by some PA, then we call L Parikh-recognizable. §.§ Graphs A (directed) graph G consists of its vertex set V(G) and edge set . In particular, a graph G may have loops, that is, edges of the form (u, u). A path from a vertex u to a vertex v in G is a sequence of pairwise distinct vertices v_1 … v_k such that v_1 = u, v_k = v, and (v_i, v_i+1) ∈ E(G) for all 1 ≤ i < k. Similarly, a cycle in G is a sequence of pairwise distinct vertices v_1 … v_k such that (v_i, v_i+1) ∈ E(G) for all 1 ≤ i < k, and (v_k, v_1) ∈ E(G). If G has no cylces, we call G a directed acyclic graph (DAG). For a subset U ⊆ V(G), we denote by G[U] the graph G induced by U, , the graph with vertex set U and edge set {(u,v) ∈ E(G) | u, v ∈ U}. A strongly connected component (SCC) in G is a maximal subset U ∈ V(G) such that for all u, v ∈ U there is a path from u to v, , all vertices in U are reachable from each other. We write SCC(G) for the set of all strongly connected components of G (observe that SCC(G) partitions V(G)). The condensation of G, written C(G), is the DAG obtained from G by contracting each SCC of G into a single vertex, that is V(C(G)) = SCC(G) and (U, V) ∈ E(C(G)) if and only if there is u ∈ U and v ∈ V with (u, v) ∈ E(G). We call the SCCs with no outgoing edges in C(G) leaves. Note that an automaton can be seen as a labeled graph. Hence, all definitions translate to automata by considering the underlying graph (to be precise, an automaton can be seen as a labeled multigraph; however, we simply drop parallel edges). § PARIKH AUTOMATA ON INFINITE WORDS In this section, we recall the acceptance conditions of Parikh automata operating on infinite words that were studied before in the literature and introduce our new models. We make some easy observations and compare the existing with the new automata models. We define only the non-deterministic variants of these automata. Let = (Q, Σ, q_0, Δ, F, C) be a PA. A run of on an infinite word α = α_1 α_2 α_3 … is an infinite sequence of labeled transitions r = r_1 r_2 r_3 … with r_i = (p_i-1, α_i, _i, p_i)∈Δ such that p_0 = q_0. The automata defined below differ only in their acceptance conditions. In the following, whenever we say that an automaton accepts an infinite word α, we mean that there is an accepting run of on α. * The run r satisfies the safety condition if for every i ≥ 0 we have p_i ∈ F and ρ(r_1 … r_i) ∈ C. We call a PA accepting with the safety condition a safety PA <cit.>. We define the ω-language recognized by a safety PA as S_ω() = {α∈Σ^ω| accepts α}. * The run r satisfies the reachability condition if there is an i ≥ 1 such that p_i ∈ F and ρ(r_1 … r_i) ∈ C. We say there is an accepting hit in r_i. We call a PA accepting with the reachability condition a reachability PA <cit.>. We define the ω-language recognized by a reachability PA as R_ω() = {α∈Σ^ω| accepts α}. * The run r satisfies the Büchi condition if there are infinitely many i ≥ 1 such that p_i ∈ F and ρ(r_1 … r_i) ∈ C. We call a PA accepting with the Büchi condition a Büchi PA <cit.>. We define the ω-language recognized by a Büchi PA as . Hence, a Büchi PA can be seen as a stronger variant of a reachability PA where we require infinitely many accepting hits instead of a single one. * The run r satisfies the co-Büchi condition if there is i_0 such that for every i ≥ i_0 we have p_i ∈ F and ρ(r_1 … r_i) ∈ C. 
We call a PA accepting with the co-Büchi condition a co-Büchi PA <cit.>. We define the ω-language recognized by a co-Büchi PA as CB_ω() = {α∈Σ^ω| accepts α}. Hence, a co-Büchi PA can be seen as a weaker variant of safety PA where the safety condition needs not necessarily be fulfilled from the beginning, but from some point onwards. Guha et al. <cit.> assume that reachability PA are complete, i.e., for every (p,a)∈ Q×Σ there are ∈^d and q∈ Q such that (p,a,,q)∈Δ, as incompleteness allows to express additional safety conditions. We also make this assumption in order to study “pure” reachability PA. In fact, we can assume that all models are complete, as the other models can be completed by adding a non-accepting sink. We remark that Guha et al. also considered asynchronous reachability and Büchi PA, where the Parikh condition does not necessarily need to be satisfied in accepting states. However, for non-deterministic automata this does not change the expressiveness of the considered models <cit.>. We now define the models newly introduced in this work. As already observed in <cit.> among the above considered models only Büchi PA can recognize all ω-regular languages. For example, {α∈{a,b}^ω| |α|_a=∞} cannot be recognized by safety PA, reachability PA or co-Büchi PA. We first extend reachability PA with the classical Büchi condition to obtain reachability-regular PA. In <ref> we show that these automata characterize ω-languages of the form , hence, providing a robust and natural model. * The run satisfies the reachability and regularity condition if there is an i ≥ 1 such that p_i ∈ F and ρ(r_1 … r_i) ∈ C, and there are infinitely many j ≥ 1 such that p_j ∈ F. We call a PA accepting with the reachability and regularity condition a reachability-regular PA. We define the ω-language recognized by a reachability-regular PA as RR_ω() = {α∈Σ^ω| accepts α} and call it reachability-regular. Note that (in contrast to reachability PA) we may assume that reachability-regular PA are complete without changing their expressiveness. Observe that every ω-regular language is reachability-regular, as we can turn an arbitrary Büchi automaton into an equivalent reachability-regular PA by labeling every transition with 0 and . We next introduce limit PA, which were proposed in the concluding remarks of <cit.>. As we will prove in <ref>, this seemingly quite different model is equivalent to reachability-regular PA. * The run satisfies the limit condition if there are infinitely many i ≥ 1 such that p_i ∈ F, and if additionally ρ(r) ∈ C, where the jth component of ρ(r) is computed as follows. If there are infinitely many i ≥ 1 such that the jth component of _i has a non-zero value, then the jth component of ρ(C) is ∞. In other words, if the sum of values in a component diverges, then its value is set to ∞. Otherwise, the infinite sum yields a positive integer. We call a PA accepting with the limit condition a limit PA. We define the ω-language recognized by a limit PA as L_ω() = {α∈Σ^ω| accepts α}. Still, none of the yet introduced models have ω-closure. This shortcoming is addressed with the following two models, which will turn out to be equivalent and form the basis of the automata characterization of and . * The run satisfies the strong reset condition if the following holds. Let k_0 = 0 and denote by k_1 < k_2 < … the positions of all accepting states in r. Then r is accepting if k_1, k_2, … is an infinite sequence and ρ(r_k_i-1+1… r_k_i) ∈ C for all i ≥ 1. 
We call a PA accepting with the strong reset condition a strong reset PA. We define the ω-language recognized by a strong reset PA as SR_ω() = {α∈Σ^ω| accepts α}. * The run satisfies the weak reset condition if there are infinitely many reset positions 0 = k_0 < k_1 < k_2, … such that p_k_i∈ F and ρ(r_k_i-1+1… r_k_i) ∈ C for all i ≥ 1. We call a PA accepting with the weak reset condition a weak reset PA. We define the ω-language recognized by a weak reset PA as WR_ω() = {α∈Σ^ω| accepts α}. Intuitively worded, whenever a strong reset PA enters an accepting state, the Parikh condition must be satisfied. Then the counters are reset. Similarly, a weak reset PA may reset the counters whenever there is an accepting hit, and they must reset infinitely often, too. In the following we will often just speak of reset PA without explicitly stating whether they are weak or strong. In this case, we mean the strong variant. We will show the equivalence of the two models in <ref> and <ref>. Let be the automaton in <ref> with C = {(z,z'), (z, ∞) | z' ≥ z}. * If we interpret as a PA (over finite words), then we have L() = {w ∈{a,b}^* ·{b}| |w|_a ≤ |w|_b}∪{ε}. The automaton is in the accepting state at the very beginning and every time after reading a b. The first counter counts the occurrences of letter a, the second one counts occurrences of b. By definition of C the automaton only accepts when the second counter value is greater or equal to the first counter value (note that vectors containing an ∞-entry have no additional effect). * If we interpret as a safety PA, then we have S_ω() = {b}^ω. As q_1 is not accepting, only the b-loop on q_0 may be used. * If we interpret as a reachability PA, then we have R_ω() = {α∈{a,b}^ω|α has a prefix in L()}. The automaton has satisfied the reachability condition after reading a prefix in L() and accepts any continuation after that. * If we interpret as a Büchi PA, then we have B_ω() = L()^ω. The automaton accepts an infinite word if infinitely often the Parikh condition is satisfied in the accepting state. Observe that C has no base vector and the initial state as well as the accepting state have the same outgoing transitions. * If we interpret as a co-Büchi PA, then we have CB_ω() = L() ·{b}^ω. This is similar to the safety PA, but the accepted words may have a finite “non-safe” prefix from L(). * If we interpret as a reachability-regular PA, then we have RR_ω() = {α∈{a,b}^ω|α has a prefix in L() and |α|_b = ∞}. After having met the reachability condition the automaton still needs to satisfy the Büchi condition, which enforces infinitely many visits of the accepting state. * If we interpret as a limit PA, then we have L_ω() = {α∈{a,b}^ω| |α|_a < ∞}. The automaton must visit the accepting state infinitely often. At the same time the extended Parikh image must belong to C, which implies that the infinite word contains only some finite number z of letter a (note that only the vectors of the form (z, ∞) have an effect here, as at least one symbol must be seen infinitely often by the infinite pigeonhole principle). * If we interpret as a weak reset PA, then we have WR_ω() = L()^ω. As a weak reset PA may (but is not forced to) reset the counters upon visiting the accepting state, the automaton may reset every time a (finite) infix in L() has been read. * If we interpret as a strong reset PA, then we have SR_ω() = {b^*a}^ω∪{b^* a}^* ·{b}^ω. Whenever the automaton reaches an accepting state also the Parikh condition must be satisfied. 
This implies that the a-loop on q_1 may never be used, as it would increase the first counter value to at least 2, while the second counter value is 1 upon reaching the accepting state q_0 (which resets the counters). The automaton in the last example is deterministic. We note that L_ω() is not deterministic ω-regular but deterministic limit PA-recognizable. § BÜCHI-LIKE CHARACTERIZATIONS It was observed in <cit.> that Büchi PA recognize a strict subset of ℒ_PA,PA. In this section we first show that the class of reset PA-recognizable ω-languages is a strict superset of ℒ_PA,PA. Then we provide an automata-based characterization of ℒ_PA,Reg, ℒ_Reg,PA, and ℒ_PA,PA. Towards this goal we first establish some closure properties. Guha et al. <cit.> have shown that safety, reachability, Büchi, and co-Büchi PA are closed under union using a modification of the standard construction for PA, i.e., taking the disjoint union of the automata (introducing a fresh initial state) and the disjoint union of the semi-linear sets, where disjointness is achieved by “marking” every vector in the first set with an additional 1 (increasing the dimension by 1) and every vector in the second set with an additional 2. We observe that the same construction also works for reachability-regular and limit PA, and that a small modification suffices to make the construction work for reset PA as well. We leave the details to the reader. The classes of reachability-regular, limit PA-recognizable, and reset PA-recognizable ω-languages are closed under union. Furthermore, we show that these classes, as well as the class of Büchi PA-recognizable ω-languages, are closed under left-concatenation with PA-recognizable languages. We provide some details in the next lemma, as we will need to modify the standard construction in such a way that we do not need to keep the accepting states of the PA on finite words; this will be helpful for the characterizations via (restricted) reset PA below. The classes of reachability-regular, limit PA-recognizable, reset PA-recognizable, and Büchi PA-recognizable ω-languages are closed under left-concatenation with PA-recognizable languages. We begin with reset PA. Let _1 = (Q_1, Σ, q_1, Δ_1, F_1, C_1) be a PA of dimension d_1 and let _2 = (Q_2, Σ, q_2, Δ_2, F_2, C_2) be a reset PA of dimension d_2. We sketch the construction of a reset PA of dimension d_1 + d_2 that recognizes L(_1) · SR_ω(_2). We assume wlog that q_2 is accepting (this can be achieved by introducing a fresh initial state). Furthermore, for now we assume that ε∉ L(_1), that is, every accepting run of _1 is non-empty. Again, the new automaton consists of disjoint copies of _1 and _2, but only the accepting states of _2 remain accepting, and its initial state is q_1. All transitions of the copy of _1 use only the first d_1 counters (that is, the remaining d_2 counters are always 0) and, likewise, the transitions of _2 use only the last d_2 counters (that is, the first d_1 counters are always 0). Finally, we copy every transition of _1 that leads to an accepting state of _1 such that it also leads to q_2, that is, we add the transitions {(p, a, · 0^d_2, q_2) | (p,a,, q) ∈Δ_1, q ∈ F_1}. The semi-linear set C of the new automaton is C_1 ·{0^d_2}∪{0^d_1}· C_2. As every accepting run of _1 is non-empty by assumption, the new automaton may guess the last transition of an accepting run of _1 and replace it with one of the new transitions that leads to _2 instead. As q_2 is accepting, the counters are reset there, which justifies the choice of C. Now, if ε∈ L(_1), observe that L(_1) · SR_ω(_2) = ((L(_1) ∖{ε}) · SR_ω(_2)) ∪ SR_ω(_2).
Hence, we may remove ε from L(_1) by replacing q_1 with a fresh non-accepting copy and use the closure under union. Hence, in any case only the copies of accepting states of _2 remain accepting; in particular, no state of _1 is accepting in the corresponding copy of the new automaton. The construction for reachability-regular PA, limit PA and Büchi PA is very similar. The only difference is that we choose C = C_1 · C_2 for the semi-linear set of the constructed automaton, as counters are never reset here. Before we continue, we show that we can normalize PA (on finite words) such that the initial state is the only accepting state. This observation simplifies several proofs in this section. Let = (Q, Σ, q_0, Δ, F, C) be a PA of dimension d. Then there exists an equivalent PA ' of dimension d + 1 with the following properties. * The initial state of ' is the only accepting state. * SCC(') = {Q}. We say that ' is normalized. The normalized PA ' is obtained from by adding a fresh state q_0', which is the initial state and the only accepting state, and which inherits all outgoing transitions from q_0 and all incoming transitions from the accepting states. Furthermore, all transitions get a new counter, which is set to 0 except for the new incoming transitions of q_0', where the counter is set to 1, and all vectors in C are concatenated with 1 (and we add the all-zero vector if we want to accept ε). Finally, we remove all states that cannot reach q'_0 (such states can appear when shortcutting the incoming transitions of F and are useless in the sense that their removal does not change the accepted language; however, the removal is necessary for the second property). We observe that L() = L('). Observe that we have SR_ω(') = L()^ω, that is, every normalized PA interpreted as a reset PA recognizes the ω-closure of the language recognized by the PA. As an immediate consequence we obtain the following corollary. The class of reset PA-recognizable ω-languages is closed under the ω-operation. Combining these results, we obtain that every ω-language in ℒ_PA,PA, i.e., every ω-language of the form ⋃_i U_i V_i^ω for Parikh-recognizable U_i and V_i, is reset PA-recognizable. We show that the other direction does not hold, i.e., the inclusion is strict. The class ℒ_PA,PA is a strict subclass of the class of reset PA-recognizable ω-languages. The inclusion is a direct consequence of <ref>, <ref>, and <ref>. Hence we show that the inclusion is strict. Consider the ω-language L = {a^n b^n | n ≥ 1}^ω∪{a^n b^n | n ≥ 1}^* ·{a}^ω. This ω-language is reset PA-recognizable, as witnessed by the strong reset PA in <Ref> with C = {(z,z) | z ∈ℕ}. We claim that L ∉ℒ_PA,PA. Assume towards a contradiction that L ∈ℒ_PA,PA, i.e., there are Parikh-recognizable languages U_1, V_1, …, U_n, V_n such that L = U_1 V_1^ω∪…∪ U_n V_n^ω. Then there is some i ≤ n such that for infinitely many j ≥ 1 the infinite word α_j = aba^2b^2 … a^jb^j · a^ω∈ U_iV_i^ω. Then V_i must contain a word of the form v = a^k, k > 0. Additionally, there cannot be a word in V_i with infix b. To see this, assume for the sake of contradiction that there is a word w ∈ V_i with ℓ = |w|_b > 0. Let β = (v^ℓ+1 w)^ω. Observe that β has an infix that consists of at least ℓ+1 many a, followed by at least one and at most ℓ many b; hence no word of the form uβ with u ∈ U_i is in L. This is a contradiction, thus V_i ⊆{a}^+. Since U_i is Parikh-recognizable, there is a PA _i with L(_i) = U_i. Let m be the number of states of _i and w' = aba^2b^2 … a^m^4+1 b^m^4+1. Then w' is a prefix of a word accepted by _i.
Now consider the infixes a^ℓ b^ℓ and the pairs of states q_1,q_2, where we start reading a^ℓ and end reading a^ℓ, and q_3,q_4 where we start to read b^ℓ and end to read b^ℓ, respectively. There are m^2 choices for the first pair and m^2 choices for the second pair, hence m^4 possibilities in total. Hence, as we have more than m^4 such infixes, there must be two with the same associated states q_1,q_2,q_3,q_4. Then we can swap these two infixes and get a word of the form ab … a^rb^s … a^s b^r … a^m^4+1 b^m^4+1 that is, prefix of some word in L(_i) = U_i. But no word in L has such a prefix, a contradiction. Thus, U_1 V_1^ω∪…∪ U_nV_n^ω≠ L. §.§ Characterization of Büchi Parikh automata As mentioned in the last section, the class of ω-languages recognized by Büchi PA is a strict subset of , , languages of the form ⋃_i U_i V_i^ω for Parikh-recognizable U_i and V_i. In this subsection we show that a restriction of the PA recognizing the V_i is sufficient to exactly capture the expressiveness of Büchi PA. To be precise, we show the following. The following are equivalent for all ω-languages L ⊆Σ^ω: * L is Büchi PA-recognizable. * L is of the form ⋃_i U_i V_i^ω, where U_i ∈Σ^* is Parikh-recognizable and V_i ∈Σ^* is recognized by a normalized PA where C is a linear set without base vector. We note that we can translate every PA (with a linear set C) into an equivalent normalized PA by <ref>. However, this construction adds a base vector, as we concatenate {1} to C. In fact, this can generally not be avoided without losing expressiveness. However, this loss of expressiveness is exactly what we need to characterize the class of ω-languages recognized by Büchi PA as stated in the previous lemma. The main reason for this is pointed out in the following lemma. Let L be a language recognized by a (normalized) PA = (Q, Σ, q_0, Δ, {q_0}, C) where C is linear and has no base vector. Then we have B_ω() = L()^ω. In this proof we assume that C = {b_1 z_1 + … + b_ℓ z_ℓ| z_1, … z_ℓ∈} for some ℓ≥ 1. ⇒ To show B_ω() ⊆ L()^ω, let α∈ B_ω() with accepting run r = r_1r_2r_3 … where r_i = (p_i-1, α_i, _i, p_i). As r satisfies the Büchi condition and is normalized there are infinitely many accepting hits, that is, infinitely many i such that p_i = q_0 and ρ(r_1 … r_i) ∈ C. By Dickson's Lemma <cit.>, there is an infinite monotone (sub)sequence of accepting hits s_1 < s_2 < …, , for all j > i we have ρ(r_1 … r_s_i) = b_1 z_1 + … + b_ℓ z_ℓ for some z_i ∈ and ρ(r_1 … r_s_j) = b_1 z'_1 + … + b_ℓ z'_ℓ for some z'_i ∈, and z'_k ≥ z_k for all k ≤ℓ. Hence, every infix α[s_i + 1, s_i+1] for i ≥ 0 (assuming s_0 = 0) is accepted by . ⇐ To show L()^ω⊆ B_ω(), let w_1w_2 …∈ L()^ω such that w_i ∈ L() for all i ≥ 1. Let r^(i) be an accepting run of on w_i. Observe that for every i ≥ 1 we have that r^(1)… r^(i) is an accepting run of on w_1 … w_i, as C has no base vector, and hence we have ρ(r^(1)… r^(i)) = ρ(r^(1)) + … + ρ(r^(i)) ∈ C. Hence, the infinite sequence r^(1) r^(2)… is a run of on w_1 w_2 … with infinitely many accepting hits. Hence w_1 w_2 …∈ B_ω(). This is the main ingredient to prove <ref>. We note that the proof in <cit.> showing that every ω-language L recognized by a Büchi-PA is of the form ⋃_i U_i V_i for PA-recognizable U_i and V_i already constructs PA for the V_i of the desired form. This shows the implication (1) ⇒ (2). To show the implication (2) ⇒ (1), we use that the ω-closure of languages recognized by PA of the stated form is Büchi PA-recognizable by <ref>. 
As Büchi PA are closed under left-concatenation with PA-recognizable languages (<ref>) and union <cit.>, the claim follows. §.§ Characterization of In this subsection we characterize by showing the following equivalences. The following are equivalent for all ω-languages L ⊆Σ^ω. * L is of the form ⋃_i U_i V_i^ω, where U_i ∈Σ^* is Parikh-recognizable, and V_i ⊆Σ^* is regular. * L is limit PA-recognizable. * L is reachability-regular. Observe that in the first item we may assume that L is of the form ⋃_i U_i V_i, where is Parikh-recognizable, and V_i ⊆Σ^ω is ω-regular. Then, by simple combinatorics and Büchi's theorem we have ⋃_i U_i V_i = ⋃_i U_i (⋃_j_i X_j_i Y_j_i^ω) = ⋃_i, j_i U_i (X_j_i Y_j_i^ω) = ⋃_i, j_i (U_i X_j_i) Y_j_i^ω, for regular languages X_j_i, Y_j_i, where U_i X_j_i is Parikh-recognizable, as Parikh-recognizable languages are closed under concatenation. [To the best of our knowledge there is no explicit construction for concatenation in the literature for PA on finite words, however, a standard construction very similar to the one of <ref> works. ] To simplify the proof, it is convenient to consider the following generalizations of Büchi automata. A transition-based generalized Büchi automaton (TGBA) is a tuple = (Q, Σ, q_0, Δ, ) where ⊆ 2^Δ is a collection of sets of transitions. Then a run r_1 r_2 r_3 … of is accepting if for all T ∈ there are infinitely many i such that r_i ∈ T. It is well-known that TGBA have the same expressiveness as Büchi automata <cit.>. <ref> will be a direct consequence from the following lemmas. The first lemma shows the implication (1) ⇒ (2). If L ∈, then L is limit PA-recognizable. As the class of limit PA-recognizable ω-languages is closed under union by <ref>, it is sufficient to show how to construct a limit PA for an ω-language of the form L = UV^ω, where U is Parikh-recognizable and V is regular. Let _1 = (Q_1, Σ, q_1, Δ_1, F_1, C) be a PA with L(_1) = U and _2 = (Q_2, Σ, q_2, Δ_2, F_2) be a Büchi automaton with L_ω(_2) = V^ω. We use the following standard construction for concatenation. Let = (Q_1 ∪ Q_2, Σ, q_1, Δ, F_2, C) be a limit PA where Δ = Δ_1 ∪{(p, a, , q) | (p, a, q) ∈Δ_2}∪{(f, a, , q) | (q_2, a, q) ∈Δ_2, f ∈ F_1}. We claim that L_ω() = L. ⇒ To show L_ω() ⊆ L, let α∈ L_ω() with accepting run r_1 r_2 r_3 … where r_i = (p_i-1, α_i, _i, p_i). As only the states in F_2 are accepting, there is a position j such that p_j-1∈ F_1 and p_j ∈ Q_2. In particular, all transitions of the copy of _2 are labeled with , , _i = for all i ≥ j. Hence ρ(r) = ρ(r_1 … r_j-1) ∈ C (in particular, there is no ∞ value in ρ(r)). We observe that r_1 … r_j-1 is an accepting run of _1 on α[1,j-1], as p_j-1∈ F_1 and ρ(r_1 … r_j-1) ∈ C. For all i ≥ j let r'_i = (p_i-1, α_i, p_i). Observe that (q_2, α_j, p_j)r'_j+1 r'_j+2… is an accepting run of _2 on α_j α_j+1α_j+2…, hence α∈ L(_1) · L_ω(_2) = L. ⇐ To show L = UV^ω⊆ L_ω(), let w ∈ L(_1)=U with accepting run s, and α∈ L_ω(_2)=V^ω with accepting run r = r_1 r_2 r_3 …, where r_i = (p_i-1, α_1, p_i). Observe that s is also a partial run of on w, ending in an accepting state f. By definition of Δ, we can continue the run s in basically as in r. To be precise, let r'_1 = (f, α_1, , p_1), and, for all i > 1 let r'_i = (p_i-1, α_i, , p_i). Then s r'_1 r'_2 r'_3 … is an accepting run of on w α, hence w α∈ L_ω(). 
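The concatenation construction in the proof above is purely syntactic and finite, so it is straightforward to implement. The following Python sketch builds the combined limit PA from a PA recognizing U and a Büchi automaton recognizing V^ω, following the definition of Δ in the proof; the tuple-based encoding of automata, and the assumptions that the two state sets are disjoint and that Δ_1 is non-empty, are our own choices of representation, not the paper's.

```python
def concat_limit_pa(pa, buchi):
    """Limit PA for U * V^omega from a PA A1 with L(A1) = U and a Buchi
    automaton A2 with L_omega(A2) = V^omega (construction of the lemma above).

    pa:    (Q1, q1, Delta1, F1, C) with Delta1 a set of tuples (p, a, vec, q)
    buchi: (Q2, q2, Delta2, F2)    with Delta2 a set of tuples (p, a, q)
    """
    Q1, q1, Delta1, F1, C = pa
    Q2, q2, Delta2, F2 = buchi
    d = len(next(iter(Delta1))[2])        # dimension of the counter vectors
    zero = (0,) * d

    delta = set(Delta1)
    # Transitions of the Buchi automaton are labelled with the zero vector.
    delta |= {(p, a, zero, q) for (p, a, q) in Delta2}
    # Glue transitions: every transition leaving the initial state of A2 can
    # also be taken from every accepting state of A1.
    delta |= {(f, a, zero, q) for (p, a, q) in Delta2 if p == q2 for f in F1}

    # Initial state q1, accepting states F2, semi-linear set C unchanged.
    return (set(Q1) | set(Q2), q1, delta, set(F2), C)
```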
Observe that the construction in the proof of the lemma works the same way when we interpret as a reachability-regular PA (every visit of an accepting state has the same good counter value; this argument is even true if we interpret as a Büchi PA), showing the implication (1) ⇒ (3). If L ∈, then L is reachability-regular. For the backwards direction we need an auxiliary lemma, essentially stating that semi-linear sets over C ⊆ (∪{∞})^d can be modified such that ∞-entries in vectors in C are replaced by arbitrary integers, and remain semi-linear. Let C ⊆ (∪{∞})^d be semi-linear and D ⊆{1, …, d}. Let C_D ⊆^d be the set obtained from C by the following procedure. * Remove every vector = (v_1, …, v_d) where v_i = ∞ for an i ∉ D. * As long as C_D contains a vector = (v_1, …, v_d) with v_i = ∞ for an i ≤ d: replace by all vectors of the form (v_1, … v_i-1, z, v_i+1, …, v_d) for z ∈. Then C_D is semi-linear. For a vector = (v_1, …, v_d) ∈ (∪{∞})^d, let () = {i | v_i = ∞} denote the positions of ∞-entries in . Furthermore, let = (v̅_1, …, v̅_d) denote the vector obtained from by replacing every ∞-entry by 0, , v̅_i = 0 if v_i = ∞, and v̅_i = v_i otherwise. We carry out the following procedure for every linear set of the semi-linear set independently, hence we assume that C = {b_0 + b_1z_1 + … + b_ℓ z_ℓ| z_1, …, z_ℓ∈} is linear. We also assume that there is no b_j with (b_j) ⊈D, otherwise, we simply remove it. Now, if (b_0) ⊈D, then C_D = ∅, as this implies that every vector in C has an ∞-entry at an unwanted position (the first item of the lemma). Otherwise, C_D = {b_0 + ∑_j≤ℓb̅_j z_j + ∑_i ∈(b_j)_i z_ij| z_j, z_ij∈}, which is linear by definition. We are now ready to prove the following lemma, showing the implication (2) ⇒ (1). If L is limit PA-recognizable, then L ∈. Let = (Q, Σ, q_0, Δ, F, C) be an limit PA of dimension d. The idea is as follows. We guess a subset D ⊆{1, …, d} of counters whose values we expect to be ∞. Observe that every counter not in D has a finite value, hence for every such counter there is a point where all transitions do not increment the counter further. For every subset D ⊆{1, …, d} we decompose into a PA and a TGBA. In the first step we construct a PA where every counter not in D reaches its final value and is verified. In the second step we construct a TGBA ensuring that for every counter in D at least one transition adding a non-zero value to that counter is used infinitely often. This can be encoded directly into the TGBA. Furthermore we delete all transitions that modify counters not in D. Fix D ⊆{1, …, d} and f ∈ F, and define the PA ^D_f = (Q, Σ, q_0, Δ, {f}, C_D) where C_D is defined as in <ref>. Furthermore, we define the TGBA ^D_f = (Q, Σ, f, Δ^D, ^D) where Δ^D contains the subset of transitions of Δ where the counters not in D have zero-values (just the transitions without vectors for the counters, as we construct a TGBA). On the other hand, for every counter i in D there is one acceptance component in ^D that contains exactly those transitions (again without vectors) where the ith counter has a non-zero value. Finally, we encode the condition that at least one accepting state in F needs to by seen infinitely often in ^D by further adding the component {(p, a, q) ∈Δ| q ∈ F} (now we need to see an incoming transition of a state in F infinitely often). We claim that L_ω() = ⋃_D ⊆{1, …, d}, f ∈ F L(^D_f)· L_ω(^D_f), which by the comment below <ref> and the equivalence of TGBA and Büchi automata implies the statement of the lemma. 
⇒ To show L_ω() ⊆⋃_D ⊆{1, …, d}, f ∈ F L(^D_f) · L_ω(^D_f), let α∈ L_ω() with accepting run r_1 r_2 r_3 … where r_i = (p_i-1, α_i, _i, p_i). Let D be the positions of ∞-entries in ρ(r) = (v_1, …, v_d). As the v_i with i ∉ D have integer values, there is a position j such that in all _k for k ≥ j the i-th entry of _k is 0. Let ℓ≥ j be minimal such that p_ℓ in F. We split α = w β, where w = α[1,ℓ], and β = α_ℓ + 1α_ℓ +2…. First we argue that w ∈ L_ω(^D_p_ℓ). Observe that ^D_p_ℓ inherits all transitions from , hence r_1 … r_ℓ is a run of ^D_p_ℓ on w. As p_ℓ is accepting by definition, it remains to show that ρ(r_1 … r_ℓ) ∈ C_D. By the choice of ℓ, all counters not in D have reached their final values. As C_D contains all vectors of C where all ∞-entries are replaced by arbitrary values, the claim follows, hence w ∈ L(^D_p_ℓ). Now we argue that β∈ L_ω(^D_p_ℓ). For every k > ℓ define r'_k = (p_k-1, α_k, p_k). Observe that r' = r'_k+1 r'_k+2… is a run of ^D_p_ℓ on β (all r'_k+1 exist in ^D_p_ℓ, as the counters not in D of all transitions r_k have zero-values by the definition of ℓ). It remains to show that r' is accepting, , that for every counter in D at least one transition with a non-zero value is used infinitely often, and an accepting state is visited infinitely often. This is the case, as these counter values are ∞ in ρ(r) and by the acceptance condition of limit PA, hence β∈ L_ω(^D_p_ℓ). We conclude α∈⋃_D ⊆{1, …, d}, f ∈ F L(^D_f) · L_ω(^D_f). ⌟ ⇐ To show ⋃_D ⊆{1, …, d}, f ∈ F L(^D_f) · L_ω(^D_f) ⊆ L_ω(), let w ∈ L(^D_f) and β∈ L_ω(^D_f) for some D ⊆{1, …, d} and f ∈ F. We show that wβ∈ L_ω(). Let s be an accepting run of ^D_f on w, which ends in the accepting state f by definition. Let ρ(s) = (v_1, …, v_d). By definition of C_D, there is a vector = (u_1, …, u_d) in C where u_i = ∞ if i ∈ D, and u_i = v_i if i ∉ D. Furthermore, let r = r_1r_2r_3…, where r_i = (p_i-1, α_i, p_i), be an accepting run of ^D_f on β, which starts in the accepting state f by definition. By definition of ^d, for every counter i ∈ D at least one transition where the i-th counter of the corresponding transition in Δ is non-zero is used infinitely often. Hence, let r' = r'_1 r'_2 r'_3 … where r'_i = (p_i-1, α_i, _i, p_i) for a suitable vector _i. Furthermore, the labels of transitions of counters not in D have a value of zero, hence ρ(r') = (x_1, …, x_d), where x_i = ∞ if i ∈ D, and x_i = 0 if i ∉ D. A technical remark: it might be the case that there are more than one transitions in Δ that collapse to the same transition in Δ^D, say δ_1 = (p, a, , q) and δ_2 = (p, a, , q) appear in Δ and collapse to (p, a, q) in Δ^D. If both transitions, δ_1 and δ_2, are seen infinitely often, we need to take care that we also see both infinitely often when translating the run r back. This is possible using a round-robin procedure. Now observe that sr' is a run of on wβ (recall that s ends in f, and r' starts in f). Furthermore, we have ρ(sr') = ρ(s) + ρ(r') = (v_1 + x_1, …, v_d + x_d), where v_i + x_i = ∞ if i ∈ D, and v_i + x_i = v_i if i ∉ D by the observations above. Hence ρ(sr') ∈ C. Finally, ^D enforces that at least one accepting state in ^D_f is seen infinitely often, hence wβ∈ L_ω(). Observe that the construction in <ref> yields a limit PA whose semi-linear set C contains no vector with an ∞-entry. Hence, by this observation and the construction in the previous lemma we obtain the following corollary. For every limit PA there is an equivalent limit PA whose semi-linear set does not contain any ∞-entries. 
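To make the decomposition of the preceding proof concrete, the fragment below sketches, for a fixed D and f, how the pair (A^D_f, B^D_f) is assembled from a limit PA. The encoding is ours; the set C_D is only recorded symbolically here (it can be computed with the sketch above), and the TGBA is represented by its list of acceptance components.

def split_limit_pa(limit_pa, D, f):
    """Sketch of the pair (A^D_f, B^D_f) from the proof above.
    limit_pa = (Q, q0, Delta, F, C, d), Delta a set of (p, a, vec, q)."""
    Q, q0, Delta, F, C, d = limit_pa
    # A^D_f: same automaton, single accepting state f, semi-linear set C_D
    pa_part = (Q, q0, Delta, {f}, ('C_D', C, frozenset(D)), d)

    # B^D_f: keep only transitions whose counters outside D are untouched
    keep = [(p, a, vec, q) for (p, a, vec, q) in Delta
            if all(vec[i] == 0 for i in range(d) if i not in D)]
    delta_D = {(p, a, q) for (p, a, _, q) in keep}
    # one generalized-Buchi component per counter in D (that counter is
    # incremented infinitely often) plus one component enforcing that an
    # accepting state of F is visited infinitely often
    components = [{(p, a, q) for (p, a, vec, q) in keep if vec[i] != 0}
                  for i in sorted(D)]
    components.append({(p, a, q) for (p, a, q) in delta_D if q in F})
    tgba_part = (Q, f, delta_D, components)
    return pa_part, tgba_part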
Finally we show the implication (3) ⇒ (1). If L is reachability-regular, then L ∈. Let = (Q, Σ, q_0, Δ, F, C) be a reachability-regular PA. The intuition is as follows. a reachability-regular PA just needs to verify the counters a single time. Hence, we can recognize the prefixes of infinite words α∈ B_ω() that generate the accepting hit with a PA. Further checking that an accepting state is seen infinitely often can be done with a Büchi automaton. Fix f ∈ F and let _f = (Q, Σ, q_0, Δ, {f}, C) be the PA that is, syntactically equal to with the only difference that f is the only accepting state. Similarly, let _f = (Q, Σ, f, {(p,a,q) | (p,a,, q) ∈Δ}, F) be the Büchi automaton obtained from by setting f as the initial state and the forgetting the vector labels. We claim that RR_ω() = ⋃_f ∈ F L(_f) · L_ω(_f). ⇒ To show RR_ω() ⊆⋃_f ∈ F L(_f) · L_ω(_f), let α∈ B_ω() with accepting run r = r_1 r_2 r_3 … where r_i = (p_i-1, α_i, _i, p_i). Let k be arbitrary such that there is an accepting hit in r_k (such a k exists by definition) and consider the prefix α[1,k]. Obviously r_1 … r_k is an accepting run of _p_k on α[1,k]. Furthermore, there are infinitely many j such that p_j ∈ F by definition. In particular, there are also infinitely many j ≥ k with this property. Let r'_i = (p_i-1, α_i, p_i) for all i > k. Then r'_k+1 r'_k+2… is an accepting run of _p_k on α_k+1α_k+2… (recall that p_k is the initial state of _p_k). Hence we have α[1,k] ∈ L(_p_k) and α_k+1α_k+2…∈ L_ω(_p_k). ⇐ To show ⋃_f ∈ F L(_f) · L_ω(_f) ⊆ RR_ω(), let w ∈ L(_f) and β∈ L_ω(_f) for some f ∈ F. We show wβ∈ B_ω(). Let s = s_1 … s_n be an accepting run of _f on w, which ends in the accepting state f with ρ(s) ∈ C by definition. Furthermore, let r = r_1 r_2 r_3 … be an accepting run of ^D_f on β which starts in the accepting state f by definition. It is now easily verified that sr' with r' = r'_1r'_2r'_3… where r'_i = (p_i-1, α_i, _i, p_i) (for an arbitrary _i such that r'_i ∈Δ) is an accepting run of on wβ, as there is an accepting hit in s_n, and the (infinitely many) visits of an accepting state in r translate one-to-one, hence wβ∈ B_ω(). As shown in <ref>, the class of Büchi PA-recognizable ω-languages is equivalent to the class of ω-languages of the form ⋃_i U_i V_i^ω where U_i and V_i are Parikh-recognizable, but the PA for V_i is restricted in such a way that the initial state is the only accepting state and the set is linear without base vector. Observe that for every regular language L there is a Büchi automaton where the initial state is the only accepting state with L_ω() = L^ω (see e.g. <cit.>). Hence, is a subset of the class of Büchi PA-recognizable ω-languages. This inclusion is also strict, as witnessed by the Büchi PA in <ref> which has the mentioned property. The class is a strict subclass of the class of Büchi PA-recognizable ω-languages. We finish this subsection by observing that (complete) reachability PA capture a subclass of where, due to completeness, all V_i = Σ. The following are equivalent for all ω-languages L ⊆Σ^ω. * L is of the form ⋃_i U_i Σ^ω where U_i ⊆Σ^* is Parikh-recognizable. * L is reachability PA-recognizable. §.§ Characterization of and In this section we give a characterization of and a characterization of . As mentioned in the beginning of this section, reset PA are too strong to capture this class. However, restrictions of strong reset PA are a good candidate to capture as well as . 
In fact we show that it is sufficient to restrict the appearances of accepting states to capture , as specified by the first theorem of this subsection. Further restricting the vectors yields a model capturing , as specified in the second theorem of this subsection. Recall that the condensation of is the DAG of strong components of the underlying graph of . The following are equivalent for all ω-languages L ⊆Σ^ω. * L is of the form ⋃_i U_i V_i^ω, where U_i, V_i ⊆Σ^* are Parikh-recognizable. * L is recognized by a strong reset PA with the property that accepting states appear only in the leaves of the condensation of , and there is at most one accepting state per leaf. (1) ⇒ (2). Let _i = (Q_i, Σ, q_i, Δ_i, F_i) for i ∈{1,2} be PA and let L = L(_1) · L(_2)^ω. By <ref> we may assume that _2 is normalized (recall that by <ref> this implies SR_ω(_2) = L(_2)^ω) and hence write L = L(_1) · SR_ω(_2). As pointed out in the proof of <ref>, we can construct a reset PA that recognizes L such that only the accepting states of _2 remain accepting in . As _2 is normalized, this means that only q_2 is accepting in . Hence  satisfies the property of the theorem. Finally observe that the construction in <ref> maintains this property, implying that the construction presented in <ref> always yields a reset PA of the desired form ⌟ (2) ⇒ (1). Let = (Q, Σ, q_0, Δ, F, C) be a strong reset PA of dimension d with the property of the theorem. Let f ∈ F and let _f = (Q, Σ, q_0, Δ_f, {f}, C ·{1}) with Δ_f = {p,a,· 0,q) | (p,a,, q) ∈Δ, q ≠ f}∪{(p, a, · 1, f) | (p, a, , f) ∈Δ} be the PA of dimension d+1 obtained from by setting f as the only accepting state with an additional counter that is, 0 at every transition except the incoming transitions of f, where the counter is set to 1. Additionally all vectors in C are concatenated with 1. Similarly, let _f,f = (Q, Σ, f, Δ_f, {f}, C ·{1}) be the PA of dimension d+1 obtained from by setting f as the initial state and only accepting state, where Δ_f is defined as for _f. We claim SR_ω() = ⋃_f ∈ F L(_f) · L(_f,f)^ω. ⇒ To show SR_ω() ⊆⋃_f ∈ F L(_f) · L(_f,f)^ω, let α∈ S_ω() with accepting run r = r_1 r_2 r_3 … where r_i = (p_i-1, α_i, _i, p_i). Let k_1 < k_2 < … be the positions of accepting states in r, , p_k_i∈ F for all i ≥ 1. First observe that the property in the theorem implies p_k_i = p_k_j for all i, j ≥ 1, , no two distinct accepting states appear in r, since accepting states appear only in different leaves of the condensation of . For j ≥ 1 define r'_j = (p_j-1, α_j, _j · 0, p_j) if j ≠ k_i for all i ≥ 1, and r'_j = (p_j-1, α_j, _j · 1, p_j) if j = k_i for some i ≥ 1, , we replace every transition r_j by the corresponding transition in Δ_f. Now consider the partial run r_1 … r_k_1 and observe that p_i ≠ p_k_1 for all i < k_1, and ρ(r_1 … r_k_1) ∈ C by the definition of strong reset PA. Hence r' = r'_1 … r'_k_1 is an accepting run of _p_k_1 on α[1, k_1], as only a single accepting state appears in r', the newly introduced counter has a value of 1 when entering p_k_1, , ρ(r') ∈ C ·{1}, hence α[1, k_1] ∈ L(_p_k_1). Finally, we show that α[k_i + 1, k_i+1] ∈ L(_p_k_1,p_k_1). Observe that r'_k_i + 1… r'_k_i+1 is an accepting run of _p_k_1,p_k_1 on α[k_i + 1, k_i+1]: we have ρ(r_k_i + 1… r_k_i+1) = ∈ C by definition. Again, as only a single accepting state appears in r'_k_i + 1… r'_k_i+1, we have ρ(r'_k_i + 1… r'_k_i+1) = · 1 ∈ C ·{1}, and hence α[k_i + 1, k_i+1] ∈ L(_p_k_1,p_k_1). We conclude α∈ L(_p_k_1) · L(_p_k_1, p_k_1)^ω. 
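The structural condition of the theorem, namely that accepting states occur only in leaves of the condensation and at most once per leaf, is a purely graph-theoretic property of the automaton and can be tested mechanically. The sketch below uses networkx for the SCC computation (an assumption on our part; any SCC routine would do) on the underlying graph of a reset PA; the converse inclusion of the claim is shown next.

import networkx as nx

def has_leaf_accepting_shape(states, transitions, accepting):
    """Check: in the condensation of the underlying graph, accepting
    states occur only in leaf components, at most one per leaf.
    'transitions' is any iterable of (p, a, vec, q); labels and vectors
    are ignored.  A sketch only."""
    G = nx.DiGraph()
    G.add_nodes_from(states)
    G.add_edges_from((p, q) for (p, _, _, q) in transitions)
    cond = nx.condensation(G)                  # DAG of strongly connected components
    for comp in cond.nodes:
        members = cond.nodes[comp]['members']
        acc_here = members & set(accepting)
        if cond.out_degree(comp) == 0:         # a leaf of the condensation
            if len(acc_here) > 1:
                return False
        elif acc_here:                         # accepting state outside a leaf
            return False
    return True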
⇐ To show ⋃_f ∈ F L(_f) · L(_f,f)^ω⊆ SR_ω(), let u ∈ L(_f), and v_1, v_2, …∈ L(_f,f) for some f ∈ F. We show that uv_1v_2 …∈ SR_ω(). First let u = u_1 … u_n and r' = r'_1 … r'_n with r'_i = (p_i-1, u_i, _i · c_i, p_i), where c_i ∈{0,1}, be an accepting run of _f on u. Observe that ρ(r') ∈ C ·{1}, hence ∑_i ≤ n c_i = 1, , p_n is the only occurrence of an accepting state in r' (if there was another, say p_j, then c_j = 1 by the choice of Δ_f, hence ∑_i ≤ n c_i > 1, a contradiction). For all 1 ≤ i ≤ n let r_i = (p_i-1, u_i, _i, p_i). Then r_1 … r_n is a partial run of on w with ρ(r_1 … r_n) ∈ C and p_n = f. Similarly, no run of _f,f on any v_i visits an accepting state before reading the last symbol, hence we continue the run from r_n on v_1, v_2, … using the same argument. Hence uv_1v_2 …∈ SR_ω(), concluding the proof. As a side product of the proof of <ref> we get the following corollary, which is in general not true for arbitrary reset PA. Let = (Q, Σ, q_0, Δ, F, C) be a strong reset PA with the property that accepting states appear only in the leaves of the condensation of , and there is at most one accepting state per leaf. Then we have SR_ω() = ⋃_f ∈ F S_ω(Q, Σ, q_0, Δ, {f}, C). By even further restricting the power of strong reset PA, we get the following characterization of . The following are equivalent for all ω-languages L ⊆Σ^ω. * L is of the form ⋃_i U_i V_i^ω, where U_i ⊆Σ^* is regular and V_i ⊆Σ^* is Parikh-recognizable. * L is recognized by a strong reset PA with the following properties. * At most one state q per leaf of the condensation of may have incoming transitions from outside the leaf, this state q is the only accepting state in the leaf, and there are no accepting states in non-leaves. * only transitions connecting states in a leaf may be labeled with a non-zero vector. Observe that property (a) is a stronger property than the one of <ref>, hence, strong reset PA with this restriction are at most as powerful as those that characterize . However, as a side product of the proof we get that property (a) is equivalent to the property of <ref>. Hence, property (b) is mandatory to sufficiently weaken strong reset PA such that they capture . In fact, using the notion of normalization, we can re-use most of the ideas in the proof of <ref>. (1) ⇒ (2). We can trivially convert an NFA into an equivalent PA by labeling every transition with 0 and choosing C = {0}. Let be an arbitrary PA and assume that it is normalized; in particular implying that it only a single SCC. Again, we have L()^ω = S_ω() and the constructions for concatenation and union do not destroy the properties, hence we obtain a strong reset PA of the desired form. ⌟ (2) ⇒ (1) Let = (Q, Σ, q_0, Δ, F, C) be a strong reset PA of dimension d with properties (a) and (b). Fix f ∈ F and let with Q_f = {q ∈ Q | q appears in a non-leaf SCC of C()}∪{f} be the NFA obtained from by removing all leaf states except f, and removing all labels from the transitions. Recycling the automaton from <ref>, let with Δ_f = {(p,a,· 0,q) | (p,a,, q) ∈Δ, q ≠ f}∪{(p, a, · 1, f) | (p, a, , f) ∈Δ}. We claim SR_ω() = ⋃_f ∈ F L(_f) · L(_f,f)^ω. ⇒ To show SR_ω() ⊆⋃_f ∈ F L(_f) · L(_f,f)^ω, let α∈ SR_ω() with accepting run r = r_1r_2r_3 … where r_i = (p_i-1, α_i, _i, p_i), and let k_1< k_2< … be the positions of the accepting states in r, and consider the partial run r_1 … r_k_1 (if k_1 = 0, , the initial state is already accepting, then r_1 … r_k_1 is empty). 
By property (a) we have that p_k_1 is the first state visited in r that is, located in a leaf of C(). Hence r'_1 … r'_k_1, where r'_i = (p_i-1, α_i, p_i), is an accepting run of _p_k_1 on α[1, k_1] (in the case k_1 = 0 we define α[1, k_1] = ε). By the same argument as in the proof of <ref> we have p_k_i = p_k_j for all i,j ≥ 1, hence α[k_i + 1, k_i+1] ∈ L(_p_k_1, p_k_1), and hence α∈ L(_p_k) · L(_p_k_1, p_k_1)^ω. ⇐ To show ⋃_f ∈ F L(_f) · L(_f,f)^ω⊆ SR_ω(), let u ∈ L(_f), and v_1, v_2, …∈ L(_f,f) for some f ∈ F. We show that uv_1v_2 …∈ S_ω(). First observe that properties (a) and (b) enforce that ∈ C, as the accepting state of a leaf of C() is visited before a transition labeled with a non-zero can be used. Let u = u_1 … u_n and s_1 … s_n with s_i = (p_i_1, u_i, p_i) be an accepting run of _f on u. Define s'_i = (p_i_1, u_i, , p_i) and observe that s'_1 … s'_n is a partial run of with ρ(s'_1 … s'_n) ∈ C and p_n = f by the observation above. Again we can very similarly continue the run on v_1, v_2, … using the same argument. Hence uv_1v_2 …∈ SR_ω(), concluding the proof. § BLIND COUNTER MACHINES AND Ε-ELIMINATION As mentioned in the introduction, blind counter machines as an extension of automata with counting mechanisms were already introduced and studied in the 70s <cit.>. Over finite words they are equivalent to Parikh automata <cit.>. Blind counter machines over infinite words were first considered by Fernau and Stiebe <cit.>. In this section we first recall the definition of blind counter machines as introduced by Fernau and Stiebe <cit.>. The definition of these automata admits ε-transitions. It is easily observed that Büchi PA with ε-transitions are equivalent to blind counter machines. Therefore, we extend all Parikh automata models studied in this paper with ϵ-transitions and consider the natural question whether they admit ε-elimination (over infinite words). We show that almost all models allow ε-elimination, the exception being safety and co-Büchi PA. For the latter two models we observe that ε-transitions allow to encode ω-regular conditions, meaning that such transitions give the models enough power such that they can recognize all ω-regular languages. A blind k-counter machine (CM) is quintuple = (Q, Σ, q_0, Δ, F) where Q, Σ, q_0 and F are defined as for NFA, and Δ⊆ Q × (Σ∪{ε}) ×^d × Q is a finite set of integer labeled transitions. In particular, the transitions of Δ are labeled with possibly negative integer vectors. Observe that ε-transitions are allowed. A configuration for an infinite word α = α_1α_2α_3… of is a tuple of the form c = (p, α_1 …α_i, α_i+1α_i+2…, ) ∈ Q ×Σ^* ×Σ^ω×^k for some i ≥ 0. A configuration c derives into a configuration c', written c ⊢ c', if either c' = (q, α_1 …α_i+1, α_i+2…, + ) and (p, α_i+1, , q) ∈Δ, or c' = (q, α_1 …α_i, α_i+1α_i+2…, + ) and (p,ε, ,q) ∈Δ.  accepts an infinite word α if there is an infinite sequence of configuration derivations c_1 ⊢ c_2 ⊢ c_3 ⊢… with c_1 = (q_0, ε, α, ) such that for infinitely many i we have c_i = (p_i, α_1 …α_j, α_j+1α_j+2…, ) with p_i ∈ F and for all j ≥ 1 there is a configuration of the form (p, α_1 …α_j, α_j+1α_j+1…, q) for some p, q ∈ Q in the sequence. That is, a word is accepted if we infinitely often visit an accepting state when the counters are , and every symbol of α is read at some point. We define the ω-language recognized by as L_ω() = {α∈Σ^ω| accepts α}. Parikh automata naturally generalize to Parikh automata with ϵ-transitions. 
An ϵ-PA is a tuple = (Q, Σ, q_0, Δ, , F, C) where ⊆ Q ×{ε}×^d × Q is a finite set of labeled ε-transitions, and all other entries are defined as for PA. A run of on an infinite word α_1α_2α_3 … is an infinite sequence of transitions r ∈ (^* Δ)^ω, say r = r_1r_2r_3 … with r_i = (p_i-1, γ_i, _i, p_i) such that p_0 = q_0, and γ_i = ε if r_i ∈, and γ_i = α_j if r_i ∈Δ is the j-th occurrence of a (non-ε) transition in r. The acceptance conditions of the models translate to runs of ε-PA in the obvious way. We use terms like ε-safety PA, ε-reachability PA, etc, to denote an ε-PA with the respective acceptance condition. Note that we can treat every PA as an ϵ-PA, that is, a PA = (Q,Σ, q_0, Δ, F, C) is equivalent to the ε-PA ' = (Q, Σ, q_0, Δ, ∅, F, C). §.§ Equivalence of blind counter machines with Büchi PA We start with the following simple observation. CM and ε-Büchi PA are equivalent. We first show that for every CM there is an equivalent ε-Büchi PA . Let = (Q, Σ, q_0, Δ, F) be a k-counter machine. For a vector we define the vector ^± = (x_1^+, … x_k^+, x_1^-, … x_k^-) ∈^2k as follows: if x_i is positive, then x_i^+ = x_i and x_i^- = 0. Otherwise, x_i^+ = 0 and x_i^- = |x_i|. We construct an equivalent ε-Büchi PA = (Q, Σ, q_0, Δ', ', F, C) of dimension 2k, where Δ' = {(p, a, ^±, q) | (p, a, , q) ∈Δ} and ' = {(p, ε, ^±, q) | (p, ε, , q) ∈Δ}. Finally, let C = {(x_1, …, x_k, x_1, …, x_k) | x_i ∈}. It is now easily verified that L_ω() = P_ω(). For the reverse direction we show that for every Büchi PA there is an equivalent CM . Let = (Q, Σ, q_0, Δ, F, C) be a Büchi PA of dimension d where C = C_1 ∪…∪ C_ℓ for linear C_i. Note that we have B_ω() = ⋃_i ≤ℓ B_ω(Q, Σ, q_0,Δ, F, C_i) by the infinite pigeonhole principle. Hence, we can assume that C is linear as CM are closed under union <cit.>. We construct a blind d-counter machine that simulates as follows: consists of a copy of where the accepting states have additional ε-transitions labeled with the negated period vectors of C. We only need to consider the base vector of C a single time, hence we introduce a fresh initial state q_0' and a ε-transition from q'_0 to q_0 labeled with the negated base vector of C. Observe that a vector lies in C = {b_0 + b_1z_1 + … + b_ℓ z_ℓ| z_1, …, z_ℓ} if and only if - b_1z_1 - … - b_ℓ z_ℓ - b_0 = for some z_i. Intuitively, computes the vector in the copies of Q and guesses the z_i in the accepting states. We construct = (Q ∪{q_0'}, Σ, q_0', Δ', F) where Δ' = Δ∪{(q_0', ε, -b_0, q_0}∪{(q_f, ε, -b_i, q_f) | q_f ∈ F, i ≤ℓ}. It is now easily verified that B_ω() = L_ω(). §.§ ϵ-elimination for Parikh automata We now show that almost all PA models admit ϵ-elimination. We first consider Büchi PA, where ϵ-elimination implies the equivalence of blind counter machines and Büchi PA by <ref>. We provided a direct but quite complicated proof in the manuscript <cit.>. We thank Georg Zetzsche for outlining a much simpler proof, which we present here. ϵ-Büchi PA admit ϵ-elimination. Observe that the construction in <ref> translates ϵ-free CM into ϵ-free Büchi PA. We can hence translate a given Büchi PA into a CM and eliminate ϵ-transitions and then translate back into a Büchi PA. Therefore, all we need to show is that CM admit ϵ-elimination. To show that CM admit ϵ-elimination we observe that L is recognized by a CM ⟺ L=⋃_i U_iV_i^ω, where U_i is a language of finite words that is recognized by a CM and V_i is a language of finite words that is recognized by a CM where F={q_0}. 
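The translation between counter machines and ε-Büchi PA in the proof above only reshuffles counter values. The fragment below sketches the first direction in Python (an illustration, not code from the paper): the splitting of an integer vector into its positive and negative parts, the relabeling of the transition relation, and the membership test for C = {(x_1, …, x_k, x_1, …, x_k)}.

def split_pm(vec):
    """Map an integer vector (x_1,...,x_k) to its 2k-dimensional
    non-negative encoding (x_1^+,...,x_k^+, x_1^-,...,x_k^-)."""
    pos = tuple(x if x > 0 else 0 for x in vec)
    neg = tuple(-x if x < 0 else 0 for x in vec)
    return pos + neg

def cm_to_buchi_pa_transitions(delta):
    """Relabel every CM transition (p, a, vec, q), where a may be the
    empty word '', with the split vector."""
    return {(p, a, split_pm(vec), q) for (p, a, vec, q) in delta}

def in_C(counts):
    """Acceptance test for C = {(x_1,...,x_k,x_1,...,x_k)}: the positive
    and negative contributions cancel, i.e. every original counter is zero."""
    k = len(counts) // 2
    return counts[:k] == counts[k:]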
The proof of this observation is very similar to the proof of <ref> and we leave the details to the reader. As shown in <cit.>, CM on finite words admit ε-elimination. Furthermore, from the proof technique established in <cit.> is is immediate that the condition F = {q_0} can be preserved. We obtain ϵ-free CM _i' and _i' for the languages U_i and V_i. Using the construction of <cit.>, we can translate _i' and _i' into PA _i and _i, where the _i satisfy F_i={q_0} and the sets C_i are linear and have no base vector b_0 (Theorem 32 of <cit.>). Now the statement follows by <ref>. We continue with ε-reachability, ε-reachability-regular and ε-limit PA, as we show ε-elimination using the same technique for these models. As shown in <ref> and <ref>, the class of ω-languages recognized by reachability PA coincides with the class of ω-languages of the form ⋃_i U_i Σ^ω for Parikh-recognizable U_i, and the class of reachability-regular and limit PA-recognizable ω-languages coincides with the class of ω-languages of the form ⋃_i U_i V_i^ω for Parikh-recognizable U_i and regular V_i, respectively. It is well-known that NFA and PA on finite words are closed under homomorphisms and hence admit  <cit.> (as a consequence of <cit.>, ε-transitions can even be eliminated without changing the semi-linear set). The characterizations allow us to reduce ε-elimination of these infinite word PA to the finite case. ε-reachability, ε-reachability-regular, and ε-limit PA admit ε-elimination. We show the statement for ε-reachability PA. The technique can very easily be translated to the other two models. Let  be an ε-reachability PA with R_ω() = L ⊆Σ^ω. Let _e be the reachability PA obtained from by replacing every ε-transition with an e-transition, where e is a fresh symbol that does not appear in Σ. Let h be the homomorphism that erases the letter e, i.e., h(e) = ε. Observe that _e recognizes an ω-language L_e ⊆ (Σ∪{e})^ω with the property that h(L_e) = L (note that by definition {ϵ}^ω=∅). Now, by <ref> we can write L_e as ⋃_i U_i · (Σ∪{e})^ω where U_i ⊆ (Σ∪{e})^* is Parikh-recognizable. As the class of Parikh-recognizable languages is closed under homomorphisms <cit.>, we have L = h(L_e) = h(⋃_i U_i · (Σ∪{e})^ω) = ⋃_i h(U_i)·Σ^ω, and can hence find a reachability PA for L. The proof for reachability-regular and limit PA works the same way, as the regular languages are also closed under homomorphisms. Finally we show that safety and co-Büchi PA do not admit ε-elimination. ε-safety PA and ε-co-Büchi PA do not admit ε-elimination. Consider the automaton in <ref> with C={(z,z') | z' ≥ z}. If we interpret as an ε-safety or ε-co-Büchi PA, we have we have S_ω() = CB_ω() = {ab^+}^ω. This ω-language is neither safety PA nor co-Büchi PA-recognizable (one can easily adapt the proof in <cit.> showing that {α∈{a,b}^ω| |α|_a = ∞} is neither safety PA nor co-Büchi PA-recognizable). Observe how utilizes the ε-transition to enforce that q_0 is infinitely often: whenever the b-loop on q_1 is used, the first counter increments. The semi-linear set states that at no point the first counter value may be greater than the second counter value which can only be increased using the ε-loop on q_0. Hence, any infinite word accepted by may contain arbitrary infixes of the form b^n for n < ∞, as the automaton can use the ε-loop on q_0 at least n times before, but not b^ω. 
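The ε-elimination for reachability, reachability-regular and limit PA established earlier in this subsection rests on two elementary transformations: replacing every ε-label by a fresh letter e, and erasing e again by a homomorphism once a decomposition into finite-word languages is at hand. A minimal sketch (the helper names and the choice of fresh symbol are ours):

FRESH = 'e'   # assumed not to occur in the original alphabet Sigma

def epsilon_to_letter(delta, eps_delta):
    """Turn an epsilon-PA transition relation into an ordinary one over
    the enlarged alphabet Sigma + {FRESH}."""
    return set(delta) | {(p, FRESH, vec, q) for (p, _, vec, q) in eps_delta}

def erase_fresh(word):
    """The homomorphism h with h(FRESH) = empty word, applied to a
    finite word given as a sequence of letters."""
    return [a for a in word if a != FRESH]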
As a consequence of the previous proof we show that ε-safety PA and ε-co-Büchi PA recognize all ω-regular languages, as the presented trick can be used to encode ω-regular conditions, that is ε-transitions can be used to enforce that at least one state of a subset of states needs to be visited infinitely often. Every ω-regular language is ε-safety PA and ε-co-Büchi recognizable. Let L be an ω-regular language and let = (Q, Σ, q_0, Δ, F) be a Büchi automaton recognizing L. Wlog we assume that q_0 ∈ F (this can be achieved by creating a fresh accepting copy of q_0). We construct an ε-safety (or in this case equivalent ε-co-Büchi) PA ' = (Q, Σ, q_0, Δ', ', Q) with two counters where Δ' is defined as follows. The set Δ' inherits every transition of Δ where the first counter value is increased by one. For every q ∈ F we create an ε-loop on q that increments the second counter by one. The semi-linear set C enforces that the second counter value must be greater or equal the first counter value all the time. Formally, we have Δ' = {(p, a, (1,0), q) | (p,a,q) ∈Δ} and ' = {(q, ε, (0,1), q) | q ∈ F}, and C={(z,z') | z' ≥ z}. Hence, every accepting run of ' must visit at least one accepting state an infinite amount of times, as only accepting states can increase the second counter which is necessary to use other transitions. To be precise, if r is an accepting run of with accepting positions 0 = k_0, k_1, k_2, …, then ' can mimic the run r by using the inherited transitions, and additionally, using an ε-loop at position k_i at least k_i+1 - k_i many times. Finally we show that strong ε-reset PA and weak ε-reset PA admit ε-elimination. We show that these two models are equivalent. Hence to show this statement we only need to argue that strong ε-reset PA admit ε-elimination. Every strong ε-reset PA is equivalent to a weak ε-reset PA ' that has the same set of states and uses one additional counter. If is a strong reset PA, then ' is a weak reset PA. Let = (Q, Σ, q_0, Δ, , F, C) be a strong ε-reset PA. We construct an equivalent weak ε-reset PA ' that simulates , ensuring that no run visits an accepting state without resetting. To achieve that, we add an additional counter that tracks the number of visits of an accepting state (without resetting). Moreover, we define C'=C·{1}, such that this new counter must be set to 1 when visiting an accepting state, thus disallowing to pass such a state without resetting. Now it is clear that ' is a weak ε-reset PA equivalent to . Observe that if has no ϵ-transitions, then ' has no ϵ-transitions. Every weak ε-reset PA is equivalent to a strong ε-reset PA ' with at most twice the number of states and the same number of counters. If is a strong reset PA, then ' is a weak reset PA. Let = (Q, Σ, q_0, Δ, , F, C) be a weak ε-reset PA. We construct an equivalent strong ε-reset PA ' that simulates by having the option to “avoid” accepting states arbitrarily long. For this purpose, we create a non-accepting copy of F. Consequently, ' can decide to continue or reset a partial run using non-determinism. Again, it is clear that ' is equivalent to . Observe that if has no ϵ-transitions, then ' has no ϵ-transitions. Strong ε-reset PA admit ε-elimination. Let = (Q, Σ, q_0, Δ, , F, C) be a strong ε-reset PA of dimension d. We assume wlog that q_0 has no incoming transitions (this can be achieved by introducing a fresh copy of q_0). Furthermore, we assume that F ≠∅ (otherwise SR_ω() = ∅). Let the states of Q be ordered arbitrarily, say Q = {q_0, …, q_n-1}. 
We construct an equivalent strong reset PA ' = (Q', Σ, q_0, Δ', F', C') of dimension d + n. In the beginning, ' is a copy of (keeping the ϵ-transitions for now), which is modified step-by-step. The purpose of the new counters is to keep track of the states that have been visited (since the last reset). Initially, we hence modify the transitions as follows: for every transition (q_i, γ, , q_j) ∈Δ∪ we replace by ·_j^n. Let p, q ∈ Q. Assume there is a sequence of transitions λ̃= r_1 … r_j … r_k ∈^* Δ^*; 1≤ j ≤ k ≤ 2n+1, where * r_j = (p_j-1, a, _j, p_j) ∈Δ, and * r_i = (p_i-1, ε, _i, p_i) ∈ for all i ≠ j, i ≤ k, * such that p_0 = p, p_k = q, and p_i≠ p_ℓ for i,ℓ≤ j and p_i≠ p_ℓ for i,ℓ≥ j, and * all internal states are non-accepting, , p_i ∉ F for all 0 < i < k. Then we introduce the shortcut (p, a, ρ(λ̃), q), where ρ(λ̃) is computed already with respect to the new counters, tracking that the p_i in λ̃ have been visited, , the counters corresponding to the p_i in this sequence have non-zero values. Let p,q∈ Q. We call a (possibly empty) sequence λ = r_1 … r_k ∈^*; k≥ 0 with r_i = (p_i-1, ε, _i, p_i) and p_0 = p, p_k = q a no-reset ε-sequence from p to q if all internal states are non-accepting, , p_i ∉ F for all 0 < i < k. A no-reset ϵ-path is a no-reset sequence such that p_i≠ p_j for i≠ j. Observe that the set of no-reset ε-paths from p to q is finite, as the length of each path is bounded by n-1. We call the pair (p,q) a C-pair if there is a no-reset ε-sequence r from p to q with ρ(r) ∈ C, where ρ(r) is computed in . Let S=(f_1,…, f_ℓ) be a non-empty sequence of pairwise distinct accepting states (note that this implies ℓ≤ n). We call S a C-sequence if each (f_i,f_i+1) is a C-pair. For all p,q∈ F and C-sequences S such that p=f_1 if p∈ F and q=f_ℓ if q∈ F, we introduce a new state (p,S,q). We add (p,S,q) to F', that is, we make the new states accepting. State (p,S,q) will represent a partial run of the automaton with only ϵ-transitions starting in p, visiting the accepting states of S in that order, and ending in q. Observe that in the following we introduce only finitely many transitions by the observations made above; we will not repeat this statement in each step. Let p, q ∈ Q and S = (f_1, …, f_ℓ) be a C-sequence. For every transition of the form (s, a, , p) ∈Δ we insert new transitions {(s, a, + ρ(λ), (p,S,q)) |λ is a no-reset ε-path from p to f_1} to Δ'. Similarly, for every transition of the form (q, a, , t) ∈Δ we insert new transitions {((p,S,q), a, + ρ(λ), t) |λ is a no-reset ε-path from f_ℓ to q} to Δ'. Again this set is finite. Additionally, let p', q' ∈ Q and S' = (f_1', …, f'_k) be a C-sequence. For every sequence λ̃=λδλ' where λ is a no-reset ε-path from f_ℓ to q, δ = (q, a, , p'), and λ' is a no-reset ε-path from p' to f'_1 we add the shortcuts ((p,S,q), a, ρ(λ̃), (p',S',q')) to Δ'. Lastly, we connect the initial state q_0 in a similar way (recall that we assume that q_0 has no incoming transitions, and in particular no loops). For every transition (p, a, , q) ∈Δ and every C-sequence S = (f_1, …, f_l) with the property that (q_0, f_1) is a C-pair and there is a no-reset ε-path λ from f_ℓ to p, we introduce the transition (q_0, a, ρ(λ) + , q) for every such path λ. Additionally, for every C-sequence S' = (f_1', … f'_k) such that there is a no-reset ε-path λ' from q to f_1', we introduce the transition (q_0, a, ρ(λ) + + ρ(λ'), (q,S',t)) for all such paths λ, λ' and t∈ Q. 
Furthermore, for every no-reset ε-path λ̂ from q_0 to p, we introduce the transition (q_0, a, ρ(λ̂) + + ρ(λ'), (q,S',t)) for all t ∈ Q. A reader who is worried that we may introduce too many transitions at this point shall recall that (q,S',t) has no outgoing transition if there does not exist a no-reset ϵ-path from f_k' to t. Finally, we delete all ϵ-transitions. We define C' similar to the construction by Klaedtke and Ruess <cit.> used to eliminate ε-transitions in the finite setting. For every q ∈ Q ∖ F we define C_q = {ρ(r) | r ∈^* is partial run of starting and ending in q that does not visit any accepting state}. As a consequence of Parikh's theorem <cit.> and <cit.>, the sets C_q are semi-linear. Then C' = {· (x_0, …, x_n-1) | + ∈ C, ∈∑_x_i ≥ 1 C_q_i}. By this, we substract the C_q_i if the counter for q_i is greater or equal to one, that is, the state has been visited. This finishes the construction. We now prove that ' is equivalent to . In the one direction we compress the run by using the appropriate shortcuts, in the other direction we unravel it accordingly. ⇒ To show that SR_ω()⊆ SR_ω('), let α∈ SR_ω() with accepting run r = r_1 r_2 r_3 …. If there are no ε-transitions in r, we are done (as r is also an accepting run of ' on α). Otherwise, we construct an accepting run r' of ' on α by replacing maximal in r step-by-step. Let i be minimal such that r_i … r_j is a maximal ε-sequence. Let r_i = (p_i-1, ε, _i, p_i), r_j = (p_j-1, ε, _j, p_j), and r_j+1 = (p_j, α_z, _j+1, p_j+1). It might be the case that i = 1, , the run r starts with an ε-transition leaving q_0. Otherwise i > 1 and we can write r_i-1 = (p_i-2, α_z-1, _i-1, p_i-1). By allowing the empty sequence, we may assume that there is always a second (possibly empty) maximal ε-sequence r_j+2… r_k starting directly after r_j+1. We distinguish (the combination of) the following cases. * At least one state in r_i … r_j is accepting, , there is a position i-1 ≤ℓ≤ j such that p_ℓ∈ F (F) or not (N). * At least one state in r_j+2… r_k is accepting, , there is a position j+1 ≤ℓ' ≤ k such that p_ℓ'∈ F (F) or not (N). If r_j+2… r_k is empty, we are in the case (N). Hence, we consider four cases in total. * Case (NN). That is, there is no accepting state in r_i … r_k. Note that the ε-sequence r_i … r_j can be decomposed into an ε-path and ε-cycles as follows. If we have p_i_1≠ p_j_1 for all i ≤ i_1 < j_1 ≤ j we are done as r_i … r_j is already an ε-path. Otherwise let i_1 ≥ i be minimal such that there is j_1 > i_1 with p_i_1 = p_j_1, that is, r_i_1+1… r_j_1 is an ε-cycle. If r_i … r_i_1 r_j_1+1… r_j is an ε-path, we are done. Otherwise, let i_2 > j_1 be minimal such that there is j_2 > i_2 with p_i_2 = p_j_2, that is, r_i_2+1… r_j_2 is an ε-cycle. Then again, if r_i … r_i_1 r_j_1+1… r_i_2 r_j_2+1… r_j is an ε-path, we are done. Otherwise, we can iterate this argument and obtain a set of ε-cycles r_i_1+1… r_j_1, …, r_i_m+1… r_j_m for some m, and an ε-path r̂_i,j = r_i … r_i_1 r_j_1+1… r_i_m r_j_m+1… r_j which partition r_i … r_j. Now observe that ρ(r_i_1+1… r_j_1) + … + ρ(r_i_m+1… r_j_m) ∈ C_p_i_1 + … + C_p_i_m. We can do the same decomposition for the ε-sequence r_j+2… r_k into a set of ε-cycles and an ε-path r̂_j+2,k. By the construction of Δ', there is a shortcut δ = (p_i-1, α_z, (ρ(r̂_i,j) + _j+1 + ρ(r̂_j+2,k)) ·^n, p_k), where ^n is the n-dimensional vector counting the states appearing in r̂_i,j and r̂_j+2,k and the state p_j+1. 
By the construction of Δ' and C', we may subtract all ε-cycles that have been visited in r_i … r_k, hence, we may replace r_i … r_k by δ to simulate exactly the behavior of . * Case (NF). That is, there is no accepting state in r_i … r_j but at least one accepting state in r_i+2… r_k (in particular, this sequence is not empty). Let ℓ_1, …, ℓ_m denote the positions of accepting states in r_i+2… r_k, and let ℓ_0 < ℓ_1 be maximal such that ℓ_0 is resetting (this is before r_i, and if such an ℓ_0 does not exist, let ℓ_0 = 0), , ℓ_0 is the position of the last reset before the reset at position ℓ_1. As r is an accepting run, the sequence S = (ℓ_1, …, ℓ_m) is a C-sequence (we may assume that all states in S are pairwise distinct, otherwise there is a reset-cycle, which can be ignored). In the same way as in the previous case we can partition the ε-sequence r_i … r_j into an ε-path r̂_i,j and a set of ε-cycles, which may be subtracted from C. Likewise, we can partition the sequence r_j+2… r_ℓ_1 into an ε-path r̂_j+2,ℓ_1 and ε-cycles with the same property. By the construction of Δ' there is a shortcut (p_i-1, a, ρ(r̂_i,j) + _j+1, p_j+1) and hence a transition δ = (p_i-1, a, ρ(r̂_i,j) + _j+1 + ρ(r̂_j+2, ℓ_1), (p_j+1, S,p_k)) (note that this is also the case if i = 1). Thus, we replace r_i … r_k by δ. In particular, ρ(r_ℓ_0+1… r_i-1δ) can be obtained from ρ(r_ℓ_0+1… r_ℓ_1) by subtracting all ε-cycles that have been visited within this partial run. Furthermore, observe that ρ(r_ℓ_1+1… r_ℓ_2) ∈ C, …, ρ(r_ℓ_m-1+1… r_ℓ_m) ∈ C depend only on the automaton, and not the input word. As the counters are reset in r_ℓ_m, we may continue the run from δ the same way as in r_k, using an appropriate transition from Δ' that adds the vector ρ(r̂_ℓ_m+1, k), thus respecting the acceptance condition. * Case (FN). Similar to (NF), but this time we replace r_i-1r_i … r_j by an appropriate transition into a state of the form (p_i-2, α_z, , (p_i-1, S, p_j)) for a suitable C-sequence S and vector , followed by a shortcut leading to p_k. If i = 0 (we enter a C-sequence before reading the first symbol), we make use of the transitions introduced especially for q_0. * Case (FF). Similar to (FN) and (NF), but we transition from a state of the form (p_i-1, S, p_j) into a state of the form (p_j+1, S', p_k) for suitable C-sequences S, S', again respecting the case i = 0. ⌟ ⇐ To show that SR_ω(')⊆ SR_ω() we unravel the shortcuts and (p, S,q)-states introduced in the construction. Let α∈ SR_ω(') with accepting run r' = r'_1 r'_2 r'_3 …. We replace every transition r'_i ∈Δ' ∖Δ (, transitions that do not appear in ) by an appropriate sequence of transitions in . Let i ≥ 1 be minimal such that r'_i is a transition in Δ' ∖Δ. We distinguish the form of r'_i and show that the possible forms correspond one-to-one to the cases in the forward direction. * Case (NN). The case that r'_i = (p, a, ρ(λ̃), q) is a shortcut, , λ̃∈^* Δ^*, corresponds to the case (NN). In particular, there are no accepting states in r. Let k < i be the position of the last reset before r'_i, and k' the position of the first reset after r'_i, where k' = i if r'_i transitions into a accepting state. By the acceptance condition we have ρ(r'_k+1… r'_k') ∈ C - (∑_q ∈ Q' C_q) for some set Q' ⊆ Q based on the counter values. Hence, we can replace r'_i by the partial run λ̃ filled with possible ε-cycles on some states in Q'. * Case (NF). 
The case that r'_i = (s, a, + ρ(λ), (p,S,q)) such that S = (f_1, … f_ℓ) is a C-sequence, there is a transition δ = (s, a, , p) ∈Δ and λ is a no-reset ε-path from p to f_1, corresponds to the case (NF). By the definition of C-sequence there is a sequence r_f_1, f_ℓ of ε-transitions in starting in f_1, ending in f_ℓ, visiting the accepting states f_1 to f_ℓ (in that order) such that the reset-acceptance condition is satisfied on every visit of one the accepting states. Then we can replace r'_i by δλ r_f_1, f_ℓ, possibly again filled with some ε-cycles based on the state counters of λ, similar to the previous case. Note that at this point we do not yet unravel the path from f_ℓ to q, as it depends on how the run r' continues (as handled by the next two cases). * Case (FN). The case that r'_i = ((p,S,q), a, + ρ(λ), t) such that S = (f_1, … f_ℓ) is a C-sequence, there is a transition δ = (q, a, , t) ∈Δ and λ is a no-reset ε-path from f_ℓ to q, corresponds to the case (FN). Similar to the previous case, we can replace r'_i by λδ, possibly again amended with some ε-cycles based on the state counters of λ. If i = 1, the transition might also be of the form r'_1 = (q_0, α_1, ρ(λ) + , t) such that S is a C-sequence with the property that (q_0, f_1) is a C-pair. Then there is a sequence of ε-transitions r_q_0, f_ℓ in as above. Then we replace r'_1 by r_q_0, f_ℓλδ (with possible ε-cycles) instead. * Case (FF). The case that r'_i = ((p,S,q), a, ρ(λ̃), (p', S', q') such that S = (f_1, … f_ℓ) and S' = (f'_1, …, f'_k) are C-sequences, there is a transition δ = (q, a, , p') ∈Δ and λ̃ = λδλ', where λ is a no-reset ε-path from f_ℓ to q and λ' is a no-reset ε-path from p' to f_1', corresponds to the case (FF). This case is basically the union of the previous cases. There is a sequence r_f'_1, f'_k of ε-transitions in , as in the case (RF). Hence, we replace r'_i by λ̃ r_f'_1, f'_k (with possible ε-cycles). If i = 1, the transition might also be of the form r'_1 = (q_0, α_1, ρ(λ) + + ρ(λ'), (p', S', q')) such that (q_0, f_1) is a C-pair. Then there is a sequence of ε-transitions r_q_0, f_ℓ in as above, and we replace r'_1 by r_q_0, f_ℓλ̃ r_f'_1, f'_k (with possible ε-cycles). ⌟ Observe that the size of ' is in (||^2||!). This finishes the proof of the lemma. § DECISION PROBLEMS As shown by Guha et al. <cit.>, the results for common decision problems translate from the finite case to reachability PA and Büchi PA, that is, non-emptiness is -complete, and universality (and hence inclusion and equivalence) are undecidable. We show that these results translate to reset PA (which are more expressive), even if we allow ε-transitions (which does not increase their expressiveness but our ε-elimination procedure constructs an equivalent reset PA of super-polynomial size). Hence, (ε-)reset PA are a powerful model that can still be used for algorithmic applications, such as the model checking problem. The main reason for this is that the ω-languages recognized by reset PA are ultimately periodic, meaning that whenever a reset PA accepts at least one infinite word, then it also accepts an infinite word of the form uv^ω. Let be an ε-reset PA. If SR_ω() ≠∅, then accepts an infinite word of the form uv^ω. Assume SR_ω() ≠∅. Then there exists an infinite word α∈ SR_ω() with accepting run r = r_1 r_2 r_3 …, where r_i = (p_i-1, γ_i, _i, p_i). Let k_1 < k_2 < … be the positions of all accepting states in r. 
Let k_i < k_j be two such positions such that p_k_i = p_k_j and has read at least one symbol from α after leaving p_k_i and entering p_k_j. Let u = γ_1 …γ_k_i be the prefix of α read upon visiting p_k_i and v = γ_k_i + 1…γ_k_j the infix read between p_k_i and p_k_j. Note that v ≠ε by the choice of k_j. Then also accepts uv^ω, as r_1 … r_k_i (r_k_i + 1… r_k_j)^ω is an accepting run of on uv^ω by definition. As a consequence, we can reduce non-emptiness for reset PA to the finite word case, as clarified in the following lemma. Non-emptiness for ε-reset PA is -complete. The -hardness follows from the finite word case <cit.>, hence we focus on the membership in . Let be a strong reset PA. By the previous lemma, it suffices to check whether accepts an infinite word uv^ω with u ∈Σ^* and v ∈Σ^+. If such a word exists, we may assume that there is an accepting run r_u r_v^ω of on uv where neither r_u nor r_v visit the same accepting state twice. For any p,q ∈ Q we define _p ⇒ q = (Q ∪{q_0'}, Σ, q_0', Δ', ', {q}, C), where Δ' = {(q_1, a, , q_2) | (q_1, a, , q_2) ∈Δ, q_1 ∉ F}∪{(q_0', a, , q_2) | (p, a, , q_2) ∈Δ} and, analogously, . Now, the following algorithm solves non-emptiness: * Guess a sequence f_1, …, f_k of accepting states with k ≤ 2|F| such that f_i = f_k for some i ≤ k. * Verify that L(_q_0 ⇒ f_1) ≠∅ and L(_f_j ⇒ f_j+1) ≠∅ for all 1 ≤ j < k (interpreted as PA over finite words). * Verify that L(_f_i ⇒ f_i+1) ·…· L(_f_k-1⇒ f_k) ⊈{ε}. The second step can be done by adding a fresh symbol (say e) to the automata and replacing every ε-transition with an e-transition (observe that this does construction does not change the emptiness behavior, and is, in contrast to the ε-elimination procedure in <cit.> computable in polynomial time). Afterwards we use the NP-algorithm for non-emptiness for PA <cit.>. The third step essentially states that not all L(_f_j ⇒ f_j+1) for j ≥ i may only accept the empty word, as we require v ≠ε. To check this property, we can construct a PA[This is possible in polynomial time by a standard construction very similar to the one of <ref>.] recognizing L(_f_i ⇒ f_i+1) ·…· L(_f_k-1⇒ f_k), and again replace every ε-transition with an e-transition. Finally, we build the product automaton with the PA (NFA) that recognizes the language {w ∈ (Σ∪{e})^* | w contains at least 1 symbol from Σ}, which is possible in polynomial time <cit.> and test non-emptiness for the resulting PA. Furthermore, we study the following membership problem for automata processing infinite words. Given an automaton and finite words u, v, does accept uv^ω? Note that we can always construct a safety automaton that recognizes uv^ω and no other infinite word with |uv| many states. Recall that every state of a safety automaton is accepting. We show that the intersection of a reset PA-recognizable ω-language and a safety automaton-recognizable ω-language remains reset PA-recognizable using a product construction which is computable in polynomial time. Hence, we can reduce the membership problem to the non-emptiness the standard way. The class of reset PA-recognizable ω-languages is closed under intersection with safety automata-recognizable ω-languages. We show a construction for strong ε-reset PA that is computable in polynomial time. Let _1 = (Q_1, Σ, q_1, Δ_1, _1, F_1, C_1) be a strong ε-reset PA and _2 = (Q_2, Σ, q_2, Δ_2, Q_2) be a safety automaton. 
Consider the product automaton = (Q_1 × Q_2, Σ, (q_1, q_2), Δ, , F_1 × Q_2, C_1) with Δ = {((p,q), a, , (p',q') | (p, a, , p') ∈Δ_1 and (q, a, q') ∈Δ_2} and = {((p,q), ε, , (p',q)) | (p, ε, , p') ∈_1 and q ∈ Q_2}. As every state of _2 is accepting, we need to take care that does not use a transition that is not enabled in _2 while mimicking the behavior of _1. Hence, it is easily verified that SR_ω() = SR_ω(_1) ∩ L_ω(_2). As the membership problem for PA (on finite words) is -complete <cit.>, and the construction in the previous lemma can be computed efficiently, we obtain the following result. Membership for ε-reset PA is -complete. Finally, we observe that universality, inclusion and equivalence remain undecidable for PA, as these problems are already undecidable for Büchi PA <cit.> and the constructions showing that the class of Büchi PA-recognizable ω-languages is a subclass of , and that is a subclass of the class of reset PA-recognizable ω-languages are effective. § CONCLUSION We conclude by giving an overview of all characterizations and inclusions shown in this paper, as depicted in <ref>. Recall the ω-languages motivated by the model checking problem from the introduction, namely {α∈{a,b,c}^ω| there are infinitely many prefixes w of α with |w|_a > |w|_b + |w|_c}, representing unfair resource distributions of an operating system, and {α∈{p,c}^ω|there is a prefix w of α with |w|_c > |w|_p}, representing invalid computations in a producer-consumer setting. Both of these ω-languages are Reset PA-recognizable (in fact, the first is Büchi PA-recognizable and the second is even reachability PA-recognizable). As mentioned, in a common approach we are given a system represented as a Kripke structure K, and a specification of counter-examples given as an automaton, e.g. a reset PA . By moving the labels of the states of K to its transitions, we can see a Kripke structure as a safety automaton _K (see <cit.> for details). As every state of a safety automaton is accepting, we can easily find a reset automaton recognizing all bad computations of K (that is the intersection of the ω-languages recognized by _K and ) by <ref>. As (non-)emptiness is decidable for reset PA, we can solve the model-checking problem by computing the product automaton of _K and and testing for emptiness, which is in by <ref>. We recall that deterministic ω-regular languages are characterized as regular arrow-languages L⃗, where L⃗ = {α|α[1,i] ∈ L for infinitely many i} <cit.>. This characterization can easily be adapted to show that deterministic Büchi PA-recognizable ω-languages are captured by arrows of deterministic Parikh-recognizable languages. In future work we plan to study the expressiveness of the deterministic variants of the introduced models and find similar characterizations. Observe that the proof showing that every strong reset PA can be translated into an equivalent weak reset PA relies on non-determinism. Hence we conjecture that the class of ω-languages recognized by deterministic weak reset PA is a strict subclass of those recognized by deterministic strong reset PA (and that nondeterministic reset PA are strictly more powerful than their deterministic counterparts). In particular, it would be nice to understand the structure of ω-languages that are not reset PA-recognizable. Furthermore, one could define a reset-counterpart on finite words and study the resulting automata. 
Although the existence of a natural logic capturing the expressiveness of the presented models of PA on infinite words is very unlikely due to their bad closure properties <cit.>, we hope that our characterizations in terms of regular and Parikh-recognizable (finite word) languages (for which equivalent logics are known) help us to gain more insights.
http://arxiv.org/abs/2307.04209v1
20230709154400
Sharper Asymptotically Optimal CDC Schemes via Combinatorial Designs
[ "Yingjie Cheng", "Gaojun Luo", "Xiwang Cao", "Martianus Frederic Ezerman", "San Ling" ]
cs.IT
[ "cs.IT", "math.CO", "math.IT" ]
Sharper Asymptotically Optimal CDC Schemes via Combinatorial Designs Yingjie Cheng, Gaojun Luo, Xiwang Cao, Martianus Frederic Ezerman, and San Ling Y. Cheng, and X. Cao are with the Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China, and also with Key Laboratory of Mathematical Modeling and High Performance Computing of Air Vechicles (NUAA), MIIT, Nanjing 210016, China, e-mails: { xwcao,chengyingjie}@nuaa.edu.cn G. Luo, M. F. Ezerman, and S. Ling are with the School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, Singapore 637371, e-mails: { gaojun.luo, fredezerman, lingsan}@ntu.edu.sg. G. Luo, M. F. Ezerman, and S. Ling are supported by Nanyang Technological University Research Grant No. 04INS000047C230GRT01. X. Cao, Y. Cheng, and G. Luo are also supported by the National Natural Science Foundation of China under Grant 12171241. August 12, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Coded distributed computing (CDC) was introduced to greatly reduce the communication load for MapReduce computing systems. Such a system has K nodes, N input files, and Q Reduce functions. Each input file is mapped by r nodes and each Reduce function is computed by s nodes. The architecture must allow for coding techniques that achieve the maximum multicast gain. Some CDC schemes that achieve optimal communication load have been proposed before. The parameters N and Q in those schemes, however, grow too fast with respect to K to be of great practical value. To improve the situation, researchers have come up with some asymptotically optimal cascaded CDC schemes with s+r=K from symmetric designs. In this paper, we propose new asymptotically optimal cascaded CDC schemes. Akin to known schemes, ours have r+s=K and make use of symmetric designs as construction tools. Unlike previous schemes, ours have much smaller communication loads, given the same set of parameters K, r, N, and Q. We also expand the construction tools to include almost difference sets. Using them, we have managed to construct a new asymptotically optimal cascaded CDC scheme. Almost difference set, coded distributed computing, communication load, symmetric design. § INTRODUCTION Processing large amount of data efficiently is a must in this era of big data. Handling such a computational task lies beyond the capability of a single computer. The challenge to complete huge computational assignments motivates the design of distributed computing systems. 
The main objective is to greatly expedite task execution by letting distributed computing nodes perform computational jobs in parallel by exploiting the distributed nature of available resources, both computing and storage. It is often the case that a large amount of data needs to be exchanged among the computing nodes, which limits the system's performance. In a Facebook Hadoop cluster, for example, it has been observed that 33% of the overall job execution time was spent on data shuffling <cit.>. We know from <cit.> that 70% of the overall job execution time is spent on data shuffling when running a self-join application on an Amazon EC2 cluster. S. Li et al. in <cit.> introduced coded distributed computing(CDC) to reduce the communication load in distributed computing systems. The reduction is the result of CDC's capability to increase the computation load of the so-called Map functions to create novel coding opportunities. Some systems, which had already been in use by then, including Dean and Ghemawat's MapReduce <cit.> and Spark of Zaharia et al. from <cit.>, could subsequently be improved. We call a system a (K,N,r,s,Q)-CDC when the system has K computing nodes, N input data files of equal size, and Q output values, each of which is computed by a function on the N files. A computation in this system is divided into three phases, namely Map, Shuffle, and Reduce. In the Map phase, a given input file is exclusively mapped by a distinct r-subset computing nodes to Q intermediate values (IVs) with T bits. In the Shuffle phase, each tt Reduce function is assigned to an s-subset of computing nodes. Subsequently, all computing nodes generate coded symbols from their respective local IVs in such a way that each computing node can derive the needed IVs that it cannot, by itself, calculate locally. In the Reduce phase, any computing node can compute each Reduce function assigned to it after receiving the coded signals during the Shuffle phase. We underline the fact that nodes have to spend most of their execution time in exchanging IVs among themselves, causing a substantial communication bottleneck in the system <cit.>. Hence, it is highly desirable to reduce the execution time in the Shuffle phase. A fundamental trade-off between computation load in the Map phase and communication load in the Shuffle phase was formulated and characterized by Li et al. in <cit.>. Increasing the computation load by a factor of r can reduce the communication load by the same factor. The authors of the said work also proposed several CDC schemes that achieved the optimal communication load. Their main idea is as follows. In the Map phase the nodes need to compute some side information locally. In the Shuffle phase, the nodes exchange some coded data among themselves. The side information makes each coded data simultaneously useful for multiple Reduce tasks. In a general (K,N,r,s,Q)-CDC scheme, if s=1, then each Reduce function is calculated by exactly one node. This scheme is similar to the coded caching scheme for the D2D network treated in, e.g., <cit.> and <cit.>. If s≥ 1, then each Reduce function is calculated by multiple nodes. The scheme is known as cascaded CDC scheme. Numerous works, e.g., <cit.>, <cit.>, <cit.> and <cit.>, proposed CDC schemes with stragglers. In heterogeneous networks, CDC schemes have been studied in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Extension to various setups had been pursued. 
Without attempting a complete listing, we mention works on CDC schemes in wireless network in <cit.> and <cit.>, and in the context of matrix multiplication in <cit.> and <cit.>. Our present work focuses on cascaded CDC schemes. Prior works on such schemes include <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. The scheme in <cit.>, henceforth the Li-CDC, splits the data set into N=Kr files and designs Q=Ks output functions, with r being the average number of nodes that store each file and s being the average number of nodes that calculate each function. The Li-CDC achieves the minimum communication load. The number N=Kr of files and the number Q=Ks of functions in the Li-CDC grow too fast with respect to K for practical scenarios. This was shown by Konstantinidis and Ramamoorthy in <cit.>. Woolsey, Chen, and Ji in <cit.> introduced a combinatorial structure called hypercube structure to design the file and function assignments. Their scheme requires the data set to be split into N=(K/r)^r-1 files and designs Q=(K/r)^r-1 output functions. They also showed that the communication load of their scheme is close to that of the one in <cit.>. Jiang and Qu in <cit.> put forward some cascaded CDC schemes with N=(K/r)^r-1 and Q=K/(K,s) by using placement delivery array. Such an array had previously been introduced by Yan et al. to construct coded caching schemes in <cit.>. The communication load of the schemes built by Jiang and Qu, however, is about twice as large as that of the Li-CDC. Recently, Jiang, Wang, and Zhou in <cit.> used a symmetric balanced incomplete block design (SBIBD) to generate the data placement and the Reduce function assignment to obtain an asymptotically optimal scheme with K=N=Q. In <cit.>, Cheng, Wu, and Li proposed some asymptotically optimal schemes based on t-designs, with t≥ 2. More specifically, their main tool consists of t-group divisible designs (GDDs). For ease of reference, we list the above-mentioned known cascaded CDC schemes in the first part of Table <ref>. This paper has two main contributions. First, we construct a new class of asymptotically optimal schemes for the cascaded case with r+s=K. In our construction, we carefully arrange the data placement and assign the Reduce functions by using symmetric designs that meet specific requirements. As shown in Table <ref>, our new schemes have the following advantages. * Compared with the Li-CDC scheme in <cit.>, our schemes have much smaller N and Q. Using the known symmetric designs listed in Table <ref>, the respective communication loads of our schemes approximate that of the Li-CDC scheme. * Compared with the schemes of Jiang and Qu in <cit.>, ours have smaller respective communication loads for the same (K,r,N,Q). Although both our schemes and those in <cit.> make use of symmetric designs in their constructions, we devise a different transmission scheme from what Jiang and Qu had chosen. Second, we present a class of new asymptotically optimal cascaded CDC schemes. They are constructed based on specially built 1-designs from almost difference sets. Although our schemes bear some similarities with the schemes of Cheng, Wu, and Li in <cit.>, the parameters differ, as shown in Table <ref>. In terms of organization, Section <ref> introduces useful properties of symmetric designs, almost difference sets, and cascaded CDC systems. We explain two new constructions of CDC schemes in Section <ref>. Comparative performance analysis of our new schemes relative to known schemes can be found in Section <ref>. 
The section also collects concluding remarks. § PRELIMINARIES We denote by |·| the cardinality of a set or the length of a vector. For any positive integers a and b with a<b, we use [a,b] to denote the set {a,a+1,…,b}. If a=1, then we use the shorter form [b]. §.§ Cascaded Coded Distributed Computing Systems In a coded distributed computing system, K distributed computing nodes compute Q Reduce functions by taking advantage of N input files, each of equal size. Let W={w_1,w_2,…,w_N} be the set of the N files, each of size B bits. The set of functions is 𝒬={ϕ_1,ϕ_2,…,ϕ_Q}, where, for any q∈[Q], ϕ_q maps the N files to a C-bit value u_q:=ϕ_q(w_1,w_2,…,w_N) ∈𝔽_2^C. Figure <ref> depicts how each output function ϕ_q is decomposed into ϕ_q(w_1,w_2,…,w_N) = h_q(g_q,1(w_1),g_q,2(w_2),…,g_q,N(w_N)). Here, g_q,n is a Map function for any q∈[Q] and n∈[N], whereas h_q is a Reduce function for any q∈[Q]. We name v_q,n := g_q,n(w_n) ∈𝔽_2^T, where q∈[Q] and n∈[N], an intermediate value (IV) of length T. Figure <ref> shows that a cascaded CDC consists of three phases. * Map Phase: Each node k∈𝒦 stores M files. For each file w_n, let 𝒟_n represent the set of nodes, each of which stores file w_n. We can then write the files stored by node k as elements in the set 𝒵_k={w_n : n∈[N],k∈𝒟_n}. Using the stored files in (<ref>) and the Map functions in {g_q,n(·) : q ∈ [Q] n ∈ [N]}, node k can compute the IVs in ℐ_k={v_q,n=g_q,n(w_n) ∈𝔽_2^T: q∈[Q],n∈[N],k∈𝒟_n}. * Shuffle Phase: For any k∈𝒦, let 𝒬_k={ϕ_q : q∈[Q],k∈𝒜_q} be the set of output functions to be calculated by node k. Collectively and in a coordinated way, the nodes exchange calculated IVs such that each node can derive the IVs that it cannot locally calculated. The node k, for k∈𝒦, multicasts a coded message X_k of length ℓ_k. * Reduce Phase: Upon receiving the coded signals 𝒳={X_1,X_2,…,X_K} and its locally computed IVs in ℐ_k, node k∈𝒦 can compute each Reduce function in 𝒬_k. Keeping the relevant definitions from <cit.>, we know that there are two important quantities that measure the goodness of a CDC. First, the average number of nodes that store each file is the computation load r=∑^K_k=1 |𝒲_k|/N. Second, the ratio of the amount of transmitted data to the product Q N T is the communication load L=∑^K_k=1ℓ_k/Q N T. <cit.> Let K ∈ℕ. Given r,s∈[K], there exists a CDC scheme that achieves the optimal communication load L=∑^min(r+s,K)_ℓ = max(r+1,s)K-rK-ℓ rℓ-s/Ks ℓ-r/ℓ-1, where r is the computation load and s is the number of nodes that calculate each function. Given K, r, and s, we call any CDC scheme whose L is as in (<ref>) a Li-CDC scheme and denote by L_ Li the communication load of a Li-CDC scheme. It is clear from Lemma <ref> that, given r and s, we seek to minimize L. §.§ Almost Difference Sets We recall useful results on almost difference sets. Let (A,+) be a finite abelian group of order n and let D be a subset of size k in (A,+). The difference function on a subset D of (A,+) is diff_D(x) = |D∩(D+x)|, where D+x={y+x:y∈ D} and x∈ A. <cit.> Let (A,+) be an abelian group of order n. A k-subset D of A is an (n,k,λ,t) almost difference set (ADS) of A if diff_D(x) takes on λ altogether t times and λ+1 altogether n-1-t times as x traverses the nonzero elements of A. We list useful facts from <cit.>. Let an abelian group (A,+) be given. * If an (n,k,λ,t) ADS exists, then k(k-1)=tλ+(n-1-t)(λ+1). * If D is an (n,k,λ,t) ADS, then its complement D^c=A ∖ D is an (n,n-k,n-2k+λ,t) ADS in (A,+). 
* If D is an (n,k,0,t) ADS with t=n-1-k(k-1), then D is also called a modular Golomb ruler in (A,+). Ruzsa introduced a class of modular Golomb ruler, which we will use in the next section, in <cit.>. <cit.> For every prime p, there exists a (p^2-p,p-1,0,2p-3) almost difference set. The missing differences are the 2p-2 multiples of p or p-1. §.§ Symmetric Designs We gather useful results on relevant combinatorial designs. <cit.> Let 𝒳 be a set of v elements. Let ℬ:={B_1,B_2,…,B_u} be such that B_i⊆𝒳 and |B_i| =t for any i∈[u]. Given any two distinct elements x,y∈𝒳, if there exist exactly λ elements in ℬ containing them, then (𝒳,ℬ) is a (v,t,λ) balanced incomplete block design (BIBD). For each i∈[u], we call B_i a block. Since any BIBD is also a 2-design, we have |ℬ|=λ v (v-1)/t (t-1) for any (v,t,λ) BIBD (𝒳,ℬ). We will soon use symmetric designs as a main construction tool for a class of CDC schemes. <cit.> A (v,t,λ) BIBD (𝒳,ℬ) is a (v,t,λ) symmetric design (SD) if |ℬ|=v. <cit.> Given a (v,t,λ) symmetric design (𝒳,ℬ), the following statements hold. * Each x∈𝒳 is contained in t blocks among the elements of ℬ. * For any two distinct blocks B and B' in ℬ, we have | B⋂ B'|=λ. * λ=t (t-1)/v-1. In <cit.>, Ionin and van Trung listed the parameters of four known classes of symmetric designs. <cit.> A (v,k,λ) symmetric design exists if its parameters can be found in Table <ref>. The four classes of symmetric designs in the table play important roles in our construction of asymptotically optimal cascaded CDC schemes. § TWO CONSTRUCTIONS OF CDC SCHEMES §.§ Construction One This subsection introduces a new construction method for the case r≠ s. Let (𝒳,𝔅) be an (N,t,λ) symmetric design, 𝒳={x_1,x_2,…,x_N}, and 𝔅 = {ℬ_1,ℬ_2,…,ℬ_N}. By Lemma <ref>, any two distinct blocks ℬ_i and ℬ_j intersect in exactly λ points. We now construct a CDC scheme with N nodes, where 𝒦 = ℬ, on N files, which are elements of 𝒲 = {w_x_1,w_x_2,…,w_x_N}, and N functions in 𝒬 = {ϕ_x_1,ϕ_x_2,…,ϕ_x_N}. Each node stores t files and each Reduce function is computed by s nodes. During the Map phase, each node ℬ∈𝔅 stores the files in the set Z_ℬ={w_x : x∈ℬ, x∈𝒳}. Since |ℬ|=t, for any block B, the computation load is r=∑^N_i=1 |Z_i|/N=t N/N=t. In the Shuffle phase, we arrange each node ℬ∈𝔅 to compute the Reduce functions 𝒬_ℬ= {u_y = ϕ_y(w_x_1,w_x_2,…,w_x_N) : y∈𝒳, y∈ℬ}, with ℬ denoting the complement set of ℬ with respect to 𝒳. Given the assigned stored files and the set 𝒬_ℬ, node ℬ can compute the intermediate values in the set ℐ_ℬ = {v_y,x = g_y,x(w_x) : x,y∈𝒳, x∈ℬ}. Hence, for any x,y∈𝒳 and for any block ℬ∈𝔅, the intermediate value v_y,x is both required and cannot be locally computed by node ℬ if and only if y ∈ℬ and x ∈ℬ, i.e., y ∉ℬ and x ∉ℬ. The intermediate value v_y,x is locally computable by node ℬ if and only if x∈ℬ. Based on what we have just investigated, we can divide the delivery strategy into two classes. The first class is for the N intermediate values v_x,x : x ∈𝒳. We cluster each v_x,x into t segments as v_x,x = (v^ℬ_k_1_x,x, v^ℬ_k_2_x,x, …, v^ℬ_k_t_x,x), where x ∈ℬ_k_i for each i∈ [t]. Since v_x,x∈𝔽_2^T, we know that v^ℬ_k_i_x,x∈𝔽_2^T/t. A node ℬ_k has access to t stored files in {w_z_1,w_z_2,…,w_z_t}, giving it the intermediate values in 𝒱_k = {v_w_z_1,w_z_1,v_w_z_2,w_z_2, …,v_w_z_t,w_z_t}. If α_1,α_2,…,α_t∈𝔽_2^T/t are all distinct, then t must be a divisor of T and T ≥ t^2. 
Node ℬ_k multicasts the t-λ signals X^ℬ_k[1] = v^ℬ_k_w_z_1,w_z_1+ v^ℬ_k_w_z_2,w_z_2+…+ v^ℬ_k_w_z_t,w_z_t, X^ℬ_k[2] =α_1v^ℬ_k_w_z_1,w_z_1+ α_2v^ℬ_k_w_z_2,w_z_2+ …+α_tv^ℬ_k_w_z_t,w_z_t, ⋮ X^ℬ_k[t-λ] =α^t-λ-1_1v^ℬ_k_w_z_1, w_z_1+α^t-λ-1_2v^ℬ_k_w_z_2, w_z_2+…+ α^t-λ-1_tv^ℬ_k_w_z_t,w_z_t, which we express as [ X^ℬ_k[1]; X^ℬ_k[2]; ⋮; X^ℬ_k[t-λ] ] = [ 1 1 ⋯ 1; α_1 α_2 ⋯ α_t; ⋮ ⋮ ⋱ ⋮; α^t-λ-1_1 α^t-λ-1_2 ⋯ α^t-λ-1_t; ] [ v^ℬ_k_w_z_1,w_z_1; v^ℬ_k_w_z_2,w_z_2; ⋮; v^ℬ_k_w_z_t,w_z_t ]. The total number of bits transmitted by ℬ_k is, therefore, (t-λ)T/t, which comes from (t-λ)1/t intermediate values. Thus, the total number of intermediate values transmitted by all the nodes combined is (t-λ)v/t. If a node ℬ is unable to compute v_y,y, then w_y∉ℬ. Hence, there exist nodes ℬ_u_i : i∈[t] such that w_y∈ℬ_u_i. Without loss of generality, let ℬ_u_1 be a node whose stored files are in {w_y,w_ℓ_1,w_ℓ_2,…,w_ℓ_t-1}. By Lemma <ref>, we have |ℬ_u_1⋂ℬ|=λ. If these λ stored files are elements of {w_ℓ_t-λ,w_ℓ_t-λ+1,…, w_ℓ_t-1}, then node ℬ can locally compute v^ℬ_u_1_w_ℓ_t-λ, w_ℓ_t-λ, v^ℬ_u_1_w_ℓ_t-λ+1,w_ℓ_t-λ+1, …, v^ℬ_u_1_w_ℓ_t-1,w_ℓ_t-1. Thus, ℬ only needs to solve the system of equations [ X^ℬ_u_1[1]- ∑^t_i=t-λ+1 v^ℬ_u_1_w_l_i-1,w_l_i-1; X^ℬ_u_1[2]-∑^t_i=t-λ+1α_iv^ℬ_u_1_w_l_i-1,w_l_i-1; ⋮; X^ℬ_u_1[t-λ]-∑^t_i=t-λ+1α^t-λ-1_iv^ℬ_u_1_w_l_i-1,w_l_i-1 ] = [ 1 1 ⋯ 1; α_1 α_2 ⋯ α_t-λ; ⋮ ⋮ ⋱ ⋮; α^t-λ-1_1 α^t-λ-1_2 ⋯ α^t-λ-1_t-λ; ] [ v^ℬ_u_1_w_y,w_y; v^ℬ_u_1_w_l_1,w_l_1; ⋮; v^ℬ_u_1_w_l_t-λ-1,w_l_t-λ-1 ]. The coefficient matrix is clearly Vandermonde. Since α_1,α_2,…,α_t are all distinct, node ℬ decodes v^ℬ_u_1_w_y,w_y for node ℬ_u_1. Similarly, node ℬ can also derive v^ℬ_u_i_w_y,w_y for any node ℬ_u_i : i∈{2,3,…,t}. Thus, node ℬ can derive v_w_y,w_y = {v^ℬ_u_1_w_y, w_y,v^ℬ_u_2_w_y,w_y, …, v^ℬ_u_t_w_y,w_y}. Proceeding to the second class of intermediate values v_x,y, where x, y ∈𝒳 are distinct, we cluster v_x,y into the λ segments v_x,y= (v^ℬ_s_1_x,y, v^ℬ_s_2_x,y,…, v^ℬ_s_λ_x,y), where x,y∈ℬ_s_i for any i∈ [λ]. Since v_x,y∈𝔽_2^T, it is immediate to confirm that v^ℬ_s_i_x,y∈𝔽_2^T/λ for any i∈ [λ]. Any node ℬ_s has access to t stored files in {w_a_1,w_a_2,…,w_a_t}. Hence, ℬ_s has the intermediate values in {v_w_a_1,w_a_2, v_w_a_1,w_a_3, …,v_w_a_1,w_a_t,…,v_w_a_t, w_a_1,v_w_a_t,w_a_2,…, v_w_a_t,w_a_t-1}. If β_1,β_2,…,β_t-1∈𝔽_2^T/λ are all distinct, then λ divides T and T ≥λ(t-1). The t (t-λ-1) signals that node ℬ_k multicasts can be expressed as [ Y_i^ℬ_s[1]; Y_i^ℬ_s[2]; ⋮; Y_i^ℬ_s[t-λ-1] ] = [ 1 1 ⋯ 1; β_1 β_2 ⋯ β_t-1; ⋮ ⋮ ⋱ ⋮; β^t-λ-2_1 β^t-λ-2_2 ⋯ β^t-λ-2_t-1; ][ v^ℬ_s_w_a_i,w_a_1; v^ℬ_s_w_a_i,w_a_2; ⋮; v^ℬ_s_w_a_i,w_a_t ], where i∈ [t]. The total number of bits transmitted by ℬ_s is, therefore, t (t-λ-1) T/λ, which comes from t (t-λ-1) 1/λ intermediate values. Thus, the total number of intermediate values v_x,y transmitted by all nodes combined is t (t-λ-1) v/λ. If a node ℬ_m is unable to compute v_x,y, then w_x,w_y∉ℬ_m. Since (𝒳,𝔅) is a symmetric design, there exist λ nodes ℬ_n_i : i∈[λ] with access to files w_x and w_y. Without loss of generality, let ℬ_n_1 be a node such that its stored files are the elements in {w_x,w_y,w_b_1,w_b_2,…,w_b_t-2}. By Lemma <ref>, |ℬ_n_1⋂ℬ_m|=λ. If the λ stored files are the elements in {w_b_t-λ-1, w_b_t-λ, …,w_b_t-2}, then node ℬ can locally compute v^ℬ_n_1_w_x, w_b_t-λ-1, v^ℬ_n_1_w_x, w_b_t-λ, …, v^ℬ_n_1_w_x,w_b_t-2. 
Thus, ℬ_m only needs to solve the system of equations [ Y_x^ℬ_n_1[1] - ∑^t_i=t-λ+1 v^ℬ_n_1_w_x,w_b_i-2; Y_x^ℬ_n_1[2] - ∑^t_i=t-λ+1β_i-1 v^ℬ_n_1_w_x,w_b_i-2; ⋮; Y_x^ℬ_n_1[t-λ-1]- ∑^t_i=t-λ+1β^t-λ-2_i-1 v^ℬ_n_1_w_x,w_b_i-2 ] = [ 1 1 ⋯ 1; β_1 β_2 ⋯ β_t-λ-1; ⋮ ⋮ ⋱ ⋮; β^t-λ-2_1 β^t-λ-2_2 ⋯ β^t-λ-2_t-λ-1 ][ v^ℬ_n_1_w_x,w_y; v^ℬ_n_1_w_x,w_b_1; ⋮; v^ℬ_n_1_w_x,w_b_t-2 ]. The coefficient matrix is obviously Vandermonde. Since β_1,β_2,…,β_t-1 are all distinct, node ℬ_m decodes v^ℬ_n_1_w_x,w_y for node ℬ_n_1. Similarly, node ℬ_m can provide v^ℬ_n_i_w_x,w_y to any node ℬ_n_i : i∈{2,3,…,λ}. Thus, node ℬ_m can derive v_w_x,w_y= (v^ℬ_n_1_w_x,w_y, v^ℬ_n_2_w_x,w_y,…, v^ℬ_n_t_w_x,w_y). Since λ=t(t-1)/v-1 in the known (v,t,λ) SD, the communication load is L = t(t-1-λ)Tv/λ + vT/t(t-λ)/Q N T = t(t-1-λ) Tv/λ + vT/t(t-λ)/v^2T = t(t-1-t(t-1)/v-1) v-1/t(t-1) + 1/t(t-t(t-1)/v-1)/v = (v-1)^2-t(v-1)+v-1-t+1/v(v-1) = (v-1)^2-tv+v/v(v-1). In the Reduce phase, we know that each node ℬ∈𝔅 can derive the intermediate values {v_x,y : x,y∈𝒳, x,y∈ℬ} during the Shuffle phase. Node ℬ can locally compute the Reduce functions 𝒬_ℬ = {u_y = ϕ_y(w_x_1,w_x_2,…,w_x_N) : y∈𝒳, y∈ℬ}. We formalize the above discussions in the following theorem. Given a (v,t,λ) SD with t>λ+1, one can construct a CDC scheme with v distributed computing nodes, N=v files and Q=v output functions such that * each output function is computed by s=v-t nodes, * the computation load is r=t, and * the communication load is L=(v-1)^2-tv+v/v(v-1). We use (7,3,1) SD in an example to illustrate our construction. When N=Q=K=7, there are 7 files in 𝒲 = {w_1,w_2,…,w_7} and 7 functions in 𝒬= {ϕ_1, ϕ_2,…,ϕ_7}. In the first stage, the nodes store the respective files 𝒵_ℬ_1 ={w_1,w_2,w_4}, 𝒵_ℬ_2 ={w_2,w_3,w_5}, 𝒵_ℬ_3 ={w_3,w_4,w_6}, 𝒵_ℬ_4 ={w_4,w_5,w_7}, 𝒵_ℬ_5 ={w_1,w_5,w_6}, 𝒵_ℬ_6 ={w_2,w_6,w_7}, 𝒵_ℬ_7 ={w_1,w_3,w_7}. The computation load is r=3 · 7/7=3. If the Reduce functions are arranged by nodes as 𝒬_ℬ_1 ={ϕ_3,ϕ_5,ϕ_6,ϕ_7}, 𝒬_ℬ_2 ={ϕ_1,ϕ_4,ϕ_6,ϕ_7}, 𝒬_ℬ_3 ={ϕ_1,ϕ_2,ϕ_5,ϕ_7}, 𝒬_ℬ_4 ={ϕ_1,ϕ_2,ϕ_3,ϕ_6}, 𝒬_ℬ_5 ={ϕ_2,ϕ_3,ϕ_4,ϕ_7}, 𝒬_ℬ_6 ={ϕ_1,ϕ_3,ϕ_4,ϕ_5}, 𝒬_ℬ_7 ={ϕ_2,ϕ_4,ϕ_5,ϕ_6}, then each function is computed by s=4 nodes. The locally computable intermediate values, arranged by nodes, can be listed as ℐ_ℬ_1 ={v_q,n : q∈[7], n∈{1,2,4}}, ℐ_ℬ_2 ={v_q,n : q∈[7], n∈{2,3,5}}, ℐ_ℬ_3 ={v_q,n : q∈[7], n∈{3,4,6}}, ℐ_ℬ_4 ={v_q,n : q∈[7], n∈{4,5,7}}, ℐ_ℬ_5 ={v_q,n : q∈[7], n∈{1,5,6}}, ℐ_ℬ_6 ={v_q,n : q∈[7], n∈{2,6,7}}, ℐ_ℬ_7 ={v_q,n : q∈[7], n∈{1,3,7}}. Table <ref> lists the intermediate values required by each of the nodes. We cluster each v_x,x : x∈𝒳 into 3-segments v_1,1 = (v^ℬ_1_1,1,v^ℬ_5_1,1, v^ℬ_7_1,1), v_2,2 = (v^ℬ_1_2,2, v^ℬ_2_2,2, v^ℬ_6_2,2), v_3,3 = (v^ℬ_2_3,3, v^ℬ_3_3,3, v^ℬ_7_3,3), v_4,4 = (v^ℬ_1_4,4, v^ℬ_3_4,4, v^ℬ_4_4,4), v_5,5 = (v^ℬ_2_5,5, v^ℬ_4_5,5, v^ℬ_5_5,5), v_6,6 = (v^ℬ_3_6,6, v^ℬ_5_6,6, v^ℬ_6_6,6), v_7,7 = (v^ℬ_4_7,7, v^ℬ_6_7,7,v^ℬ_7_7,7). When this is the case, the nodes can collectively send the coded signals listed in Table <ref>, with distinct α_1,α_2,α_3∈𝔽_2^T/3. Node ℬ_1, for instance, sends the coded signals v^ℬ_1_1,1+ v^ℬ_1_2,2+ v^ℬ_1_4,4α_1 v^ℬ_1_1,1 + α_2 v^ℬ_1_2,2 + α_3 v^ℬ_1_4,4. After receiving the signals in (<ref>), node ℬ_2 can individually decode the intermediate values v^ℬ_1_1,1 by using the locally computed intermediate value v^ℬ_1_2,2. Similarly, node ℬ_2 can decode the required intermediate values v^ℬ_5_1,1 and v^ℬ_7_1,1 from nodes ℬ_5 and ℬ_7, respectively. Doing so allows node ℬ_2 to decode v_1,1. 
It is straightforward to verify that the situation holds for each node and the required value v_x,x : x ∈[7]. Let us now consider v_x,y : x ≠ y. Node ℬ_1, for example, sends the coded signal v_1,2 + v_1,4. Upon receiving the signal, nodes ℬ_3 and ℬ_4 can individually decode v_1,2 by using the locally computable v_1,4. Nodes ℬ_2 and ℬ_6 can individually decode v_1,4 from the locally computable v_1,2. Similarly, all other nodes can obtain their respective intermediate values. Thus, the communication load of our scheme is L=7 ·2/3 + 3 · 7/7 · 7 = 11/21. When K=7, r=3, and s=4, we reproduce a cascaded CDC scheme from <cit.> with N=Q=7 whose communication load L'=7-3/7-1=2/3 is larger than that of ours. §.§ Construction Two Cheng, Wu, and Li in <cit.> constructed some asymptotically optimal cascaded CDC schemes by using t-designs and t-GDDs with t ≥ 2. We propose a construction of such schemes based on 1-designs. For the case of r=s we use almost difference (AD) sets. For any (n,k,λ,μ) AD set (A,D) with λ < k-1, we denote by (A,+) the abelian group {0,1,…,n-1} under addition and by D the set {i_1,i_2,…,i_k : i_t∈{0,1,…,n-1} t∈ [k] }. There exist n subsets ℬ_r={i'_1,i'_2,⋯,i'_k}⊆ A, where i'_t≡ i_t+r-1 n with t ∈[k] and r∈ [n]. By the definition of difference function, when λ<k-1, we know that ℬ_u≠ℬ_v for any u,v∈ [n] such that m≠ n. Letting ℬ = {ℬ_1,ℬ_2,…,ℬ_n}, we confirm that (A,𝔅) is a 1-design with parameters (n,k,k). To verify that (A,𝔅) is not a 2-design, we observe that, if {a,b}⊆ A with | diff_D(a-b)| = λ+1, then {a,b} is contained in the λ+1 elements of 𝔅. If {c,d}⊆ A with | diff_D(c-d)|=λ, then {c,d} is contained in the λ elements of 𝔅. Focusing on the A2 subsets of two elements in A, there are, respectively, nt/2 and (n-1-t)n/2 such subsets which are contained in λ and λ+1 elements of 𝔅. We use A={0,1,2,3,4,5} to form an abelian group under addition. We verify that D={0,1,3} is a (6,3,1,4) AD set, where the function diff_D(x) takes on 1, in total, 4 times, if x∈{1,2,4,5}, and takes on 2 once if x=3. Our construction yields the composite structure (A,𝔅), where 𝔅 = {{0,1,3},{1,2,4},{2,3,5},{3,4,0},{4,5,1},{5,2,0}}. We confirm that (A,𝔅) is a 1-design with parameters (6,3,3). It, however, is not a 2-design since the pairs {0,3}, {1,4}, {2,5} are contained in 2 elements of 𝔅, but the pairs {0,1}, {0,2}, {0,4}, {0,5}, {1,2}, {1,3}, {1,5}, {2,3}, {2,4}, {3,4}, {3,5}, {4,5} are contained in only a single element of 𝔅. Let (A,+)={0,1,2,3,4,5} be the abelian group. We confirm that D={0,1} is a (6,2,0,3) AD set. Its diff_D(x) takes on 1, in total, twice for x ∈{1,5} and 0, in total, 3 times if x∈{2,3,4}. We have the composite structure (A,𝔅) with 𝔅 = {{0,1},{1,2},{2,3},{3,4},{4,5},{5,0}}. We verify that (A,𝔅) is a 1-design with parameter (6,2,2). It is not a 2-design since the pairs {0,1}, {1,2}, {2,3}, {3,4}, {4,5}, {0,5} are contained in a single element of 𝔅, but the pairs {0,2}, {0,3}, {0,4}, {1,3}, {1,4}, {1,5}, {2,4}, {2,5}, {3,5} are not contained in any element of 𝔅. We refine our construction into two cases: λ≥1 and λ=0. We start with the case of λ≥1 and construct a CDC scheme with N nodes, 𝒦=ℬ, n files in 𝒲 = {w_0,w_1,…,w_n-1}, and n functions in 𝒬 = {ϕ_0,ϕ_1,…,ϕ_n-1}. Each node stores k files and each Reduce function is computed by s nodes. During the Map phase, let each node ℬ∈𝔅 store the files in Z_ℬ={w_x : x∈ℬ, x∈𝒳}. Since the cardinality of any block is |ℬ| = k, the computation load is r=∑^n_i=1| Z_i|/n = kn/n=k. 
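The design-theoretic claims in the two examples above are easy to confirm by brute force. The following sketch (plain Python; the helper names are ours and not part of the construction) computes the difference function, checks that D={0,1,3} and D={0,1} are a (6,3,1,4) and a (6,2,0,3) almost difference set in ℤ_6, builds the n cyclic shifts used as blocks, and counts how many blocks contain each pair of points.

```python
from itertools import combinations

def diff_counts(n, D):
    """diff_D(x) = |D ∩ (D+x)| for every nonzero x in Z_n."""
    D = set(D)
    return {x: len(D & {(d + x) % n for d in D}) for x in range(1, n)}

def ads_parameters(n, D):
    """Return (lam, t) if D is an (n,|D|,lam,t) almost difference set, i.e. the
    difference function only takes the values lam (t times) and lam+1; else None."""
    counts = diff_counts(n, D)
    lam = min(counts.values())
    if set(counts.values()) <= {lam, lam + 1}:
        return lam, sum(1 for c in counts.values() if c == lam)
    return None

n = 6
for D in ({0, 1, 3}, {0, 1}):
    result = ads_parameters(n, D)
    assert result is not None, "not an almost difference set"
    lam, t = result
    print(f"D={sorted(D)} is a ({n},{len(D)},{lam},{t}) almost difference set")

    # The blocks of Construction Two are the n cyclic shifts of D.
    blocks = [frozenset((d + r) % n for d in D) for r in range(n)]
    reps = {sum(1 for B in blocks if x in B) for x in range(n)}
    print("  replication numbers:", sorted(reps), "(a 1-design when this is a single value)")

    pair_coverage = {}
    for x, y in combinations(range(n), 2):
        pair_coverage.setdefault(sum(1 for B in blocks if {x, y} <= B), []).append((x, y))
    for c, pairs in sorted(pair_coverage.items()):
        print(f"  pairs contained in {c} block(s): {pairs}")
```

The printout reproduces the pair counts stated in the examples: for D={0,1,3} the pairs {0,3}, {1,4}, {2,5} lie in two blocks and the rest in one, while for D={0,1} six pairs lie in one block and the remaining nine in none.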
In the Shuffle phase, let each node ℬ∈𝔅 be arranged to compute the Reduce functions in 𝒬_ℬ = {u_y=ϕ_y(w_0,w_1,…,w_n-1) : y∈ A, y∈ℬ} Using the stored files and the functions in 𝒬, node ℬ can compute the intermediate values ℐ_ℬ = {v_y,x = g_y,x(w_x) : x,y∈ A,x∈ℬ}. For any x,y∈ A and any block ℬ∈𝔅, the intermediate value v_y,x is required. It is not locally computable by node ℬ if and only if y∈ℬ and x∉ℬ. On the other hand, v_y,x is locally computable by node ℬ if and only if x∈ℬ. We devise our delivery strategy accordingly. If x,y∈ A are such that | diff_D(x-y)| =λ+1, then there exist λ+1 nodes with access to the pair of files (x,y). We call these nodes ℬ_1,j with j ∈[λ+1]. There are k-λ-1 nodes with access to file x but not file y. We label these nodes ℬ_2,u with u ∈[k-λ-1]. There are k-λ-1 nodes with access to file y but not file x. We name these nodes ℬ_3,v with v∈[k-λ-1]. Our delivery strategy must allow for the exchange of relevant intermediate values among the nodes. Each node ℬ_1,j can locally compute v_x,y and v_y,x since it stores files w_x and w_y. Each node ℬ_2,u can locally compute v_y,x since it stores file w_x but requires v_x,y from some other nodes. This node does not store w_y but is assigned to compute the Reduce function u_y. Each node ℬ_3,v can locally compute v_x,y since it stores file w_y but requires v_y,x from some other nodes. This node does not store w_x but is assigned to compute the Reduce function u_x. We divide v_x,y and v_y,x into λ+1 sub-intermediate values v_y,x={v^(1)_y,x,v^(2)_y,x,…,v^(λ+1)_y,x} v_x,y={v^(1)_x,y,v^(2)_x,y,…,v^(λ+1)_x,y}. Node ℬ_1,j multicasts {v^(i_1)_y,x + v^(i_1)_x,y : i_1∈ [λ+1] } to nodes ℬ_2,i_2 and ℬ_3,i_3, with i_2,i_3∈ [k-λ-1]. Hence, any ℬ_2,i_2 and ℬ_3,i_3 can derive v^(i_1)_x,y and v^(i_1)_y,x, respectively. Since (A,D) is an (n,k,λ,μ) AD set, for each node ℬ∈𝔅 there exist n-1-μ pairs (x,y), with {x,y}⊆ℬ, such that | diff_D(x-y)| = λ+1. Paying closer attention to the A2 subsets of two elements in A, we infer that there are (n-1-μ)n/2 such subsets which are contained in λ+1 elements of 𝔅. Thus, in this particular delivery strategy, there are exactly (n-1-μ)(λ+1)n/2 transmitted sub-intermediate values, each of which has T/λ+1 bits. The total number of bits transmitted by the nodes is (n-1-μ)Tn/2. If u,v∈ A are such that | diff_D(u-v)| = λ, then we have the desired exchange scheme. In total, the number of bits transmitted is μ Tn/2 for μ Tn+(n-1-μ) Tn/2=n(n-1)T/2 signals. The communication load is L=n(n-1)T/2n^2T=n-1/2n, leading us to the following theorem. Given an (n,k,λ,μ) almost different set (A,D) with 1≤λ<k-1, one can construct a CDC scheme with n distributed computing nodes, N=n files, and Q=n output functions such that each output function is computed by s=k nodes. The scheme's respective computation and communication loads are r=k and L=n-1/2n. Continuing from Example <ref>, we can construct the following coded distributed computing. When N=Q=K=6, we have 6 files 𝒲={w_0,w_1,⋯,w_5} and 6 output functions 𝒬={ϕ_0,ϕ_1,…,ϕ_5}. In the Map phase, the nodes and their respective stored files are 𝒵_ℬ_1 ={w_0,w_1,w_3}, 𝒵_ℬ_2 ={w_1,w_2,w_4}, 𝒵_ℬ_3 ={w_2,w_3,w_5}, 𝒵_ℬ_4 ={w_3,w_4,w_0}, 𝒵_ℬ_5 ={w_4,w_5,w_1}, 𝒵_ℬ_6 ={w_5,w_0,w_2}. The computation load is r=3 · 6/6=3. Let the Reduce functions be arranged by nodes, such that each function is computed by s=3 nodes, as 𝒬_ℬ_1 ={ϕ_0,ϕ_1,ϕ_3}, 𝒬_ℬ_2 ={ϕ_1,ϕ_2,ϕ_4}, 𝒬_ℬ_3 ={ϕ_2,ϕ_3,ϕ_5}, 𝒬_ℬ_4 ={ϕ_3,ϕ_4,ϕ_0}, 𝒬_ℬ_5 ={ϕ_4,ϕ_5,ϕ_1}, 𝒬_ℬ_6 ={ϕ_5,ϕ_0,ϕ_2}. 
The indicated nodes can then locally compute their respective intermediate values ℐ_ℬ_1 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{0,1,3}}, ℐ_ℬ_2 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{1,2,4}}, ℐ_ℬ_3 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{2,3,5}}, ℐ_ℬ_4 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{3,4,0}}, ℐ_ℬ_5 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{4,5,1}}, ℐ_ℬ_6 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{5,0,2}}. Table <ref> lists the required intermediate values in relation to the nodes. We cluster all intermediate values v_0,3, v_3,0, v_1,4, v_4,1, v_5,2 and v_2,5 into the 2-segments v_0,3 = (v^(1)_0,3,v^(2)_0,3), v_1,4 = (v^(1)_1,4,v^(2)_1,4), v_2,5 = (v^(1)_2,5,v^(2)_2,5), v_3,0 = (v^(1)_3,0,v^(2)_3,0), v_4,1 = (v^(1)_4,1,v^(2)_4,1), v_5,2 = (v^(1)_5,2,v^(2)_5,2). The nodes send the coded signals listed in Table <ref>. Node ℬ_1, for instance, sends v_0,1+v_1,0. After receiving it, nodes ℬ_4 and ℬ_6 get the required v_0,1 since they can locally compute v_1,0. Similarly, nodes ℬ_2 and ℬ_5 can obtain v_1,0 after receiving v_0,1+v_1,0. Inspecting the rest of the nodes, we have analogous situation. The communication load is (2+1/2)6/6· 6 = 5/12. We continue to the case of λ=0. If {u,w}⊆ A such that | diff_D(u-w)|=1, then there exists an element of 𝔅 that contain u and w. If {u,w}⊆ A such that | diff_D(u-w)|=0, then there is no element of 𝔅 that contains u and w. We recall that an (n,k,0,μ) almost difference sets are also known as modular Golomb rulers. Since the Map phase is the same as in the case of λ≥ 1 above, the computation load is also r=k. In the Shuffle phase, if each node ℬ∈𝔅 is to compute the Reduce functions in 𝒬_ℬ = {u_y=ϕ_y(w_0,w_1,…,w_n-1) : y∈ A, y∈ℬ}, then s=r=k. Using the stored files and the functions in 𝒬, node ℬ can compute the intermediate values ℐ_ℬ = {v_y,x = g_y,x(w_x) : x,y∈ A,x∈ℬ}. Hence, for any x,y∈ A and any block ℬ∈𝔅, the intermediate value v_y,x is required but not locally computable by node ℬ if and only if y ∈ℬ and x ∉ℬ. It is locally computable by node ℬ if and only if x∈ℬ. By a similar analysis as in the case of λ≥ 1, there exist k(k-1)n/2 pairs of elements of A which are contained by elements of 𝔅. On the other hand, there exist nμ/2 pairs of elements of A which are not contained in any element of 𝔅. We adjust the delivery strategy accordingly. First, for the k(k-1)n/2 pairs, we use the delivery strategy in the proof of Theorem <ref>. There are k(k-1)nT/2 transmitted signals in total. Second, for the remaining nμ/2 intermediate values v_u,w for which {u,w} is not contained in any element of 𝔅, no node can broadcast the coded signal v_u,w+v_w,u. If u,w ∈ A are files such that | diff_D(u-w)|=0, then u and w are stored by nodes whose block representatives contain u. Both files are required by nodes whose block representatives contain w. We collect the respective blocks containing u and w into sets 𝔅_u ={ℬ^u_1, ℬ^u_2,…,ℬ^u_k}𝔅_w = {ℬ^w_1,ℬ^w_2,…, ℬ^w_k}. We split v_u,w into k sub-intermediate values v^(1)_u,w, v^(2)_u,w, …, v^(k)_u,w. Node ℬ^w_i sends v^(i)_u,w to nodes in 𝔅_u. Clearly, each node in 𝔅_u can obtain v_u,w from all k sub-intermediate values sent by the nodes in 𝔅_w. In total, there are (nμ+k(k-1)n/2)T transmitted signals. Since k(k-1)=n-1-μ, the system transmits (n(n-1)-k(k-1)n/2)T signals. From the above discussion, the communication load is L=2n-2-k(k-1)/2n. We have thus proved the next result. Given an (n,k,0,μ) almost different set (A,D), one can construct a CDC scheme with n distributed computing nodes, N=n files, and Q=n output functions such that each output function is computed by s=k nodes. 
The respective computation and communication loads are r=k and L=\frac{2n-2-k(k-1)}{2n}. Continuing from Example <ref>, we construct the following CDC scheme. When N=Q=K=6, we have 6 files in 𝒲={w_0,w_1,…,w_5} and 6 output functions in 𝒬={ϕ_0,ϕ_1,…,ϕ_5}. In the Map phase, the nodes and their respective stored files are 𝒵_ℬ_1 ={w_0,w_1}, 𝒵_ℬ_2 ={w_1,w_2}, 𝒵_ℬ_3 ={w_2,w_3}, 𝒵_ℬ_4 ={w_3,w_4}, 𝒵_ℬ_5 ={w_4,w_5}, 𝒵_ℬ_6 ={w_5,w_0}. Hence, the computation load is r=2 · 6/6=2. Let the Reduce functions be arranged by nodes such that each function is computed by s=2 nodes as 𝒬_ℬ_1 ={ϕ_0,ϕ_1}, 𝒬_ℬ_2 ={ϕ_1,ϕ_2}, 𝒬_ℬ_3 ={ϕ_2,ϕ_3}, 𝒬_ℬ_4 ={ϕ_3,ϕ_4}, 𝒬_ℬ_5 ={ϕ_4,ϕ_5}, 𝒬_ℬ_6 ={ϕ_5,ϕ_0}. The indicated nodes can then compute the respective intermediate values ℐ_ℬ_1 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{0,1}}, ℐ_ℬ_2 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{1,2}}, ℐ_ℬ_3 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{2,3}}, ℐ_ℬ_4 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{3,4}}, ℐ_ℬ_5 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{4,5}}, ℐ_ℬ_6 = {v_q,n : q∈{0,1,2,3,4,5}, n∈{5,0}}. Table <ref> lists the required intermediate values in relation to the nodes. We cluster the intermediate values v_0,2, v_0,3, v_0,4, v_1,3, v_1,4, v_1,5, v_2,4, v_2,5, v_2,0, v_3,1, v_3,5, v_3,0, v_4,1, v_4,2, v_4,0, v_5,2, v_5,3, v_5,1 into the 2-segments v_0,2 = (v^(1)_0,2,v^(2)_0,2), v_0,3 = (v^(1)_0,3,v^(2)_0,3), v_0,4 = (v^(1)_0,4,v^(2)_0,4), v_1,3 = (v^(1)_1,3,v^(2)_1,3), v_1,4 =(v^(1)_1,4,v^(2)_1,4), v_1,5 =(v^(1)_1,5,v^(2)_1,5), v_2,4 =(v^(1)_2,4,v^(2)_2,4), v_2,5 =(v^(1)_2,5,v^(2)_2,5), v_2,0 =(v^(1)_2,0,v^(2)_2,0), v_3,5 =(v^(1)_3,5,v^(2)_3,5), v_3,0 =(v^(1)_3,0,v^(2)_3,0), v_3,1 =(v^(1)_3,1,v^(2)_3,1), v_4,2 =(v^(1)_4,2,v^(2)_4,2), v_4,0 =(v^(1)_4,0,v^(2)_4,0), v_4,1 =(v^(1)_4,1,v^(2)_4,1), v_5,2 =(v^(1)_5,2,v^(2)_5,2), v_5,3 =(v^(1)_5,3,v^(2)_5,3), v_5,1 =(v^(1)_5,1,v^(2)_5,1). In this case, the nodes can send the coded signals listed in Table <ref>. Node ℬ_1, for example, sends v_0,1+v_1,0. After receiving the signal, node ℬ_6 can obtain the intermediate value v_0,1 because it can locally compute v_1,0. Similarly, node ℬ_2 can obtain the required v_1,0 after receiving v_0,1+v_1,0, since it can locally compute v_0,1. On the other hand, nodes ℬ_2 and ℬ_3 send the respective coded signals v^(1)_0,2 and v^(2)_0,2. Upon receiving v^(1)_0,2 and v^(2)_0,2, nodes ℬ_1 and ℬ_6 can obtain v_0,2. The rest of the nodes can obtain their respective required intermediate values in a similar manner. The communication load is \frac{(1+6·\frac{1}{2})· 6}{6 · 6}=\frac{2}{3}. Although the schemes in Theorems <ref> and <ref> are quite similar to the scheme in <cit.>, they yield asymptotically optimal cascaded CDC schemes with different parameters. The next section focuses on their performance for comparative purposes. § PERFORMANCE COMPARISON AND CONCLUDING REMARKS For fixed (r,s), the number of files N=\binom{K}{r} and functions Q=\binom{K}{s} in the Li-CDC schemes grow fast as the number of computing nodes K increases. In practical scenarios, as Konstantinidis and Ramamoorthy have shown in <cit.>, this fast growth is detrimental to the performance of the schemes. The number of input files and output functions in each of our new schemes, in contrast, is equal to the number of computing nodes, confirming the superiority of our schemes in this respect. What about the respective communication loads? This section compares the communication load of our scheme in Subsection <ref> with that of the Li-CDC. Jiang, Wang, and Zhou in <cit.> have constructed an asymptotically optimal cascaded CDC scheme with r≠ s based on symmetric designs. 
Their scheme, which we call Jiang-CDC for ease of reference, has a larger communication load than our scheme in Subsection <ref> for the same input files, output functions, and computation load. Here we compare the communication load of our scheme with that of the Jiang-CDC in <cit.>. §.§ On the CDC Schemes from Theorem <ref> Using a (v,t,λ) SD, a Jiang-CDC scheme with r=t and s=v-t has communication load L_{Jiang}=\frac{v-t}{v-1}. From the same (v,t,λ) SD, Theorem <ref> gives us a CDC scheme with r=t and s=v-t whose (minimum) communication load is L_{ours}=\frac{(v-1)^2-tv+v}{v(v-1)}. It is straightforward to prove that L_{ours} is smaller than L_{Jiang} for the same number of input files, output functions, computing nodes, r, and s. Let a suitable (v,t,λ) SD be given. For a contradiction, let us assume that L_{Jiang}≤ L_{ours}. Hence, \frac{v-t}{v-1}≤\frac{(v-1)^2-tv+v}{v(v-1)}, which is equivalent to v≤ 1. It is then clear that L_{Jiang}≤ L_{ours} if and only if v≤ 1, which contradicts the very definition of a symmetric design. We know that Jiang-CDC schemes constructed from the symmetric designs in Table <ref> are all asymptotically optimal. Thus, using the same symmetric designs, our cascaded CDC schemes are also asymptotically optimal. Figure <ref> provides a concrete performance comparison between our CDC schemes and the Jiang-CDC schemes based on the specified SDs. §.§ On the CDC Schemes from Theorem <ref> For any prime p, Lemma <ref> and Theorem <ref> lead to a construction of a class of CDC schemes. The class has r=s=p-1, K=p^2-p, and communication load L_1:=\frac{p^2+p-4}{2(p^2-p)}. The class has p^2-p input files and the same number, p^2-p, of output functions. We establish that the schemes in this class are asymptotically optimal, that is, L_1/L_{Li} converges to 1 as p grows large. We begin with the following lemma, whose proof will be given in the appendix. For a positive integer p≥ 5, ∑^{p-1}_{ℓ=0}ℓ\binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} > (p-3)\binom{p^2-p}{p-1}. For p≥ 3 we have p^2-p>2(p-1), so min(r+s,K)=2(p-1) when r=s=p-1 and K=p^2-p. Taking r=s=p-1 and K=p^2-p in Lemma <ref> yields L_{Li} =∑^{2(p-1)}_{ℓ=p}\frac{ℓ-(p-1)}{ℓ-1}\,\frac{\binom{p^2-2p+1}{p^2-p-ℓ}\binom{p-1}{ℓ-(p-1)}}{\binom{p^2-p}{p-1}} = \frac{1}{\binom{p^2-p}{p-1}}∑^{p-1}_{ℓ=1}\frac{ℓ}{ℓ+p-2}\binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ}. By Lemma <ref>, we obtain L_{Li} = \frac{1}{\binom{p^2-p}{p-1}}∑^{p-1}_{ℓ=1}\frac{ℓ}{ℓ+p-2}\binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} ≥ \frac{1}{\binom{p^2-p}{p-1}(2p-3)}∑^{p-1}_{ℓ=1}ℓ\binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} > \frac{(p-3)\binom{p^2-p}{p-1}}{\binom{p^2-p}{p-1}(2p-3)} = \frac{p-3}{2p-3}. On the other hand, L_{Li} = \frac{1}{\binom{p^2-p}{p-1}}∑^{p-1}_{ℓ=1}\frac{ℓ}{ℓ+p-2}\binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} ≤ \frac{1}{\binom{p^2-p}{p-1}}\,\frac{p-1}{2p-3}∑^{p-1}_{ℓ=1}\binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} < \frac{1}{\binom{p^2-p}{p-1}}\,\frac{p-1}{2p-3}∑^{p-1}_{ℓ=0}\binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} = \frac{p-1}{2p-3}. Thus, \frac{p-3}{2p-3} < L_{Li} < \frac{p-1}{2p-3}. Since \lim_{p→∞}\frac{p-3}{2p-3} = \lim_{p→∞}\frac{p-1}{2p-3} = \lim_{p→∞} L_1 = \frac{1}{2}, we conclude that \lim_{p→∞}\frac{L_1}{L_{Li}} = 1, confirming that our cascaded CDC scheme is asymptotically optimal. Figure <ref> compares the communication load of our scheme with L_{Li}. 
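The comparisons above are easy to reproduce numerically. The sketch below (plain Python; the function names are ours) implements the optimal load of Lemma <ref>, the loads of Theorems <ref> and <ref>, and L_{Jiang}, and evaluates them for the (7,3,1) symmetric design of Example <ref> and for the prime p=5, i.e., K=p^2-p=20 and r=s=p-1=4.

```python
from math import comb

def L_li(K, r, s):
    """Optimal communication load of Lemma <ref> (the Li-CDC benchmark)."""
    return sum(comb(K - r, K - l) * comb(r, l - s) / comb(K, s) * (l - r) / (l - 1)
               for l in range(max(r + 1, s), min(r + s, K) + 1))

def L_ours_sd(v, t):
    """Load of Theorem <ref> for a (v,t,lambda) symmetric design (r=t, s=v-t)."""
    return ((v - 1) ** 2 - t * v + v) / (v * (v - 1))

def L_jiang(v, t):
    """Load of the Jiang-CDC scheme built from the same symmetric design."""
    return (v - t) / (v - 1)

def L_ours_ads(p):
    """Load L_1 of the scheme built from a (p^2-p, p-1, 0, 2p-3) almost difference set."""
    return (p * p + p - 4) / (2 * (p * p - p))

# (7,3,1) symmetric design: K = N = Q = 7, r = 3, s = 4.
print("SD (7,3,1):  L_Li =", round(L_li(7, 3, 4), 4),
      " L_ours =", round(L_ours_sd(7, 3), 4),      # 11/21
      " L_Jiang =", round(L_jiang(7, 3), 4))        # 2/3

# ADS-based scheme for p = 5: K = 20, r = s = 4.
p = 5
print("ADS, p = 5:  L_Li =", round(L_li(20, p - 1, p - 1), 4),
      " L_1 =", round(L_ours_ads(p), 4))
```

For the Fano plane the printout illustrates L_{Li} ≤ L_{ours} < L_{Jiang}, in line with the comparison proved above, and as p grows both L_1 and the bounds \frac{p-3}{2p-3}, \frac{p-1}{2p-3} approach 1/2.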
This means that the growth of the number N (of input files) and Q (of functions to run) can be nicely calibrated to suit practical constraints. The schemes built in Subsection <ref> have N=Q=n, with n being the number of nodes. Since Q ≥ n, the schemes require only the least possible number of computing nodes and the least number of functions to complete the given task. The framework depicted in Figure <ref> does not appear to have incorporated some error-control mechanism. The assumption is that the whole system is robust, e.g., none of the nodes can fail and that the broadcasts are sent and received error-free. In practice, some small number of nodes may fail or a few intermediate values cannot be made available due to transmission errors. The question of error-control coding form CDC schemes seems open for investigation. § APPENDIX: PROOF OF LEMMA <REF> We begin by establishing the inequality p^2-2p+1p-1 > p-13 p^2-2p+1p-4 by observing directly that p^2-2p+1p-1/p-13 p^2-2p+1p-4 =6 (p^2-3p+5) (p^2-3p+4) (p^2-3p+3)/(p-1)^2 (p-2)^2 (p-3)^2 =6 (p^2-3p+5) (p^2-3p+4) (p^2-3p+3)/(p^2-4p+3)^2 (p^2-4p+4) >1. Our next step is to prove the inequality 2 p^2-2p+1p-4 p-1 3 > ∑^p-4_ℓ=0 (p-3-ℓ) p-1p-1-ℓ p^2-2p+1ℓ. For any ℓ∈{0,1,…,p-4}, let d_ℓ = (p-3-ℓ) p-1p-1-ℓ p^2-2p+1ℓ. Hence, as ℓ increases in the range 0,1,…,p-5, the function d_ℓ/d_ℓ+1 = (p-3-ℓ/p-4-ℓ) (ℓ+1)^2/(p^2-2p+1-ℓ) (p-1-ℓ) is increasing. Hence, for any ℓ∈{0,1,…,p-5}, d_ℓ/d_ℓ+1≤d_p-5/d_p-4 = 2 (p^2-8p+16)/4 (p^2-3p+6) < 1/2, making it evident that d_ℓ < 1/2 d_ℓ+1 < … < (1/2)^p-4-ℓ d_p-4 ∑^p-4_ℓ=0 d_ℓ < ∑^p-4_ℓ=0(1/2)^p-4-ℓ d_p-4 = (2-(1/2)^p-4) d_p-4 < 2d_p-4, settling (<ref>). Our last step is to establish (<ref>). We use (<ref>) and (<ref>), respectively, to get the last two inequalities in the expression ∑^p-1_ℓ=0ℓ p^2-2p+1ℓp-1ℓ = ∑^p-4_ℓ=0ℓ p^2-2p+1ℓ p-1p-1-ℓ + (p-3) ∑^p-1_ℓ=p-3p^2-2p+1ℓ p-1p-1-ℓ + p-11 p^2-2p+1p-2 + 2 p^2-2p+1p-1 >∑^p-4_ℓ=0ℓp^2-2p+1ℓ p-1p-1-ℓ + (p-3) ∑^p-1_ℓ=p-3p^2-2p+1ℓ p-1p-1-ℓ + 2 p^2-2p+1p-1 > ∑^p-4_ℓ=0ℓ p^2-2p+1ℓ p-1p-1-ℓ + (p-3) ∑^p-1_ℓ=p-3p^2-2p+1ℓ p-1p-1-ℓ + 2 p^2-2p+1p-4 p-13 > (p-3) ∑^p-4_ℓ=0p^2-2p+1ℓ p-1p-1-ℓ + (p-3) ∑^p-1_ℓ=p-3p^2-2p+1ℓ p-1p-1-ℓ = (p-3) p^2-pp-1. The proof is now complete. 10 url@samestyle chowdhury2011 M. Chowdhury, M. Zaharia, J. Ma, M. I. Jordan, and I. Stoica, “Managing data transfers in computer clusters with orchestra,” ACM SIGCOMM computer communication review, vol. 41, no. 4, pp. 98–109, 2011. zhang2013 Z. Zhang, L. Cherkasova, and B. T. Loo, “Performance modeling of mapreduce jobs in heterogeneous cloud environments,” in 2013 IEEE Sixth International Conference on Cloud Computing.1em plus 0.5em minus 0.4emIEEE, 2013, pp. 839–846. li2017 S. Li, M. A. Maddah-Ali, Q. Yu, and A. S. Avestimehr, “A fundamental tradeoff between computation and communication in distributed computing,” IEEE Transactions on Information Theory, vol. 64, no. 1, pp. 109–128, 2017. dean2008 J. Dean and S. Ghemawat, “Mapreduce: simplified data processing on large clusters,” Communications of the ACM, vol. 51, no. 1, pp. 107–113, 2008. zaharia2010 M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, I. Stoica et al., “Spark: Cluster computing with working sets.” HotCloud, vol. 10, no. 10-10, p. 95, 2010. ji2015 M. Ji, G. Caire, and A. F. Molisch, “Fundamental limits of caching in wireless d2d networks,” IEEE Transactions on Information Theory, vol. 62, no. 2, pp. 849–869, 2015. agrawal2020 S. Agrawal and P. 
Krishnan, “Low complexity distributed computing via binary matrices with extension to stragglers,” in 2020 IEEE International Symposium on Information Theory (ISIT).1em plus 0.5em minus 0.4emIEEE, 2020, pp. 162–167. lee2017 K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, “Speeding up distributed machine learning using codes,” IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1514–1529, 2017. li2016 S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “A unified coding framework for distributed computing with straggling servers,” in 2016 IEEE Globecom Workshops (GC Wkshps).1em plus 0.5em minus 0.4emIEEE, 2016, pp. 1–6. yan2020 Q. Yan, M. Wigger, S. Yang, and X. Tang, “A fundamental storage-communication tradeoff for distributed computing with straggling nodes,” IEEE Transactions on Communications, vol. 68, no. 12, pp. 7311–7327, 2020. kiamari2017 M. Kiamari, C. Wang, and A. S. Avestimehr, “On heterogeneous coded distributed computing,” in GLOBECOM 2017-2017 IEEE Global Communications Conference.1em plus 0.5em minus 0.4emIEEE, 2017, pp. 1–7. shakya2018 N. Shakya, F. Li, and J. Chen, “On distributed computing with heterogeneous communication constraints,” in 2018 52nd Asilomar Conference on Signals, Systems, and Computers.1em plus 0.5em minus 0.4emIEEE, 2018, pp. 1795–1799. woolsey2021combinatorial N. Woolsey, R.-R. Chen, and M. Ji, “A combinatorial design for cascaded coded distributed computing on general networks,” IEEE Transactions on Communications, vol. 69, no. 9, pp. 5686–5700, 2021. woolsey2019 ——, “Cascaded coded distributed computing on heterogeneous networks,” in 2019 IEEE International Symposium on Information Theory (ISIT).1em plus 0.5em minus 0.4emIEEE, 2019, pp. 2644–2648. woolsey2020coded ——, “Coded distributed computing with heterogeneous function assignments,” in ICC 2020-2020 IEEE International Conference on Communications (ICC).1em plus 0.5em minus 0.4emIEEE, 2020, pp. 1–6. xu2019 F. Xu and M. Tao, “Heterogeneous coded distributed computing: Joint design of file allocation and function assignment,” in 2019 IEEE Global Communications Conference (GLOBECOM).1em plus 0.5em minus 0.4emIEEE, 2019, pp. 1–6. li2019 F. Li, J. Chen, and Z. Wang, “Wireless mapreduce distributed computing,” IEEE Transactions on Information Theory, vol. 65, no. 10, pp. 6101–6114, 2019. li2016edge S. Li, Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “Edge-facilitated wireless distributed computing,” in 2016 IEEE Global Communications Conference (GLOBECOM).1em plus 0.5em minus 0.4emIEEE, 2016, pp. 1–7. lee2017high K. Lee, C. Suh, and K. Ramchandran, “High-dimensional coded matrix multiplication,” in 2017 IEEE International Symposium on Information Theory (ISIT).1em plus 0.5em minus 0.4emIEEE, 2017, pp. 2418–2422. d2020notes R. G. D’Oliveira, S. El Rouayheb, D. Heinlein, and D. Karpuk, “Notes on communication and computation in secure distributed matrix multiplication,” in 2020 IEEE Conference on Communications and Network Security (CNS).1em plus 0.5em minus 0.4emIEEE, 2020, pp. 1–6. cheng2023 M. Cheng, Y. Wu, and X. Li, “Asymptotically optimal cascaded coded distributed computing via combinatorial designs,” arXiv preprint arXiv:2302.05826, 2023. jiang2022 J. Jiang, W. Wang, and L. Zhou, “Cascaded coded distributed computing schemes based on symmetric designs,” IEEE Transactions on Communications, vol. 70, no. 11, pp. 7179–7190, 2022. woolsey2021 N. Woolsey, R.-R. Chen, and M. 
Ji, “A combinatorial design for cascaded coded distributed computing on general networks,” IEEE Transactions on Communications, vol. 69, no. 9, pp. 5686–5700, 2021. jiang2020 J. Jiang and L. Qu, “Cascaded coded distributed computing schemes based on placement delivery arrays,” IEEE Access, vol. 8, pp. 221 385–221 395, 2020. kon2020 K. Konstantinidis and A. Ramamoorthy, “Resolvable designs for speeding up distributed computing,” IEEE/ACM Transactions on Networking, vol. 28, no. 4, pp. 1657–1670, 2020. yan2017 Q. Yan, M. Cheng, X. Tang, and Q. Chen, “On the placement delivery array design for centralized coded caching scheme,” IEEE Transactions on Information Theory, vol. 63, no. 9, pp. 5821–5833, 2017. ding2014 C. Ding, Codes from difference sets.1em plus 0.5em minus 0.4emWorld Scientific, 2014. ruzsa1993 I. Z. Ruzsa, “Solving a linear equation in a set of integers i,” Acta arithmetica, vol. 65, no. 3, pp. 259–282, 1993. ionin2006 Y. J. Ionin and T. van Trung, “Symmetric designs,” in Handbook of Combinatorial Designs.1em plus 0.5em minus 0.4emChapman and Hall/CRC, 2006, pp. 136–149.
http://arxiv.org/abs/2307.05835v2
20230711225926
On the Forking Path Conjecture
[ "Gonzalo Jiménez" ]
math.RT
[ "math.RT", "math.CO", "20C30 (Primary), 05E10 (Secondary)" ]
On the Forking Path Conjecture Gonzalo Jiménez August 12, 2023 ============================== We prove the Forking Path Conjecture for all but one element in the symmetric group S_4. Two specific paths in the rex graph of that element give a counterexample for the conjecture. We propose a refined conjecture for the longest element of any S_n. § INTRODUCTION In the 2011 paper <cit.>, N. Libedinsky studied morphisms induced by paths in the reduced expression graph (see Definition <ref>) of extra-large Coxeter systems. He showed the surprising fact that morphisms induced by complete paths are idempotents on the corresponding Bott-Samelson bimodules. In 2016, B. Elias provided in <cit.> an extension of his work with M. Khovanov <cit.>, where they gave a diagrammatic presentation of the category of Bott-Samelson bimodules 𝔹𝕊Bim. Here, morphisms can be translated into linear combinations of planar graphs, and stacking planar graphs can be interpreted as composing morphisms. Elias used their diagrammatic calculus to construct an idempotent using the reduced expression graph of the longest element w_0, in the symmetric group S_n. These results, plus thousands of cases checked by computer, motivated Libedinsky in 2017 to announce the Forking Path Conjecture <cit.>. [Forking Path Conjecture] Let x ∈ S_n, let p, q be two complete paths with the same starting points and the same ending points in the reduced expression graph of x. The morphisms induced by these paths are equal. In this document we prove the conjecture for all but one element in S_4. The outstanding element is the one that sends 1 to 4, 2 to 2, 3 to 3, and 4 to 1. Viewed in the context of Coxeter systems with generators s_i=(i i+1), its reduced expression graph is the following (we simplify notation writing ijk in place of s_is_js_k). The counterexample is given by 1 ⊗_s_1 1 ⊗_s_3 1 ⊗_s_2 x_3 ⊗_s_3 1 ⊗_s_1 1, in the Bott-Samelson bimodule B_s_1B_s_3B_s_2B_s_3B_s_1. The complete paths inducing the different morphisms are the following. It might seem surprising to find a counterexample in a group of such a low rank, but we need to recall that there is an infinity of paths for each graph. Section <ref> contains background material, notations and conventions. In Section <ref> we prove the FPC for all but one element in S_4. In Section <ref> we present a counterexample for the FPC with a diagrammatic verification. Finally, in Section <ref> we explain (without giving a proof) how to generate a family of counterexamples. We also propose a refined conjecture for the longest element of any symmetric group. Acknowledgments: I would like to thank my advisor Nicolás Libedinsky for posing this problem, for many helpful discussions and valuable comments, and I would like to thank Ben Elias, for helpful discussions and feedback. This paper was partially funded by FONDECYT project number 1200061, and by ANID scholarship number 21171339. § BACKGROUND §.§ Rex graphs For n ∈ℕ, let (W,S) be the Coxeter system with W=S_n the symmetric group on {1,…,n}, and the set of generators S={s_i | i=1, 2, … , n-1} where each s_i is the transposition (i i+1). They are also known as simple reflections. For s,t ∈ S, m_st is the order of st (it can be 2, or 3 if s≠ t). Let l:W→ℤ_≥0, be the length function and w_0,n the longest element of W. If no confusion is possible, we write w_0 instead of w_0,n. When no confusion is possible, we denote by i the simple reflection s_i. The longest elements in S_n for n=2,3,4 are respectively 1, 121, 121321. 
We can obtain w_0,n+1 inductively, by joining the sequence n(n-1)… 21 on the right of w_0,n. The reduced expression graph of an element w ∈ W, usually abreviated rex graph and denoted by Rex(w), is the graph defined as follows. Its vertices are the reduced expressions of w, with an edge between two reduced expressions if they differ by a single braid relation. These relations are s_is_i+1s_i = s_i+1s_is_i+1 for all i∈{1,2…, n-2} and s_is_j = s_js_i when |i-j|≥2. We call the edges determined by the former identity adjacent edges, and the ones determined by the latter, distant edges. The reduced expression graph of 21321. Given a rex graph of w∈ W, we can draw the distant edges with dashed lines. With this convention, we name this colored graph the expanded expressions graph of w. We symbolize it by Γ_w. The expanded expressions graph of 12321. The expanded expressions graph of w_0,4. There are different kinds of cycles appearing in Figure <ref>. For instance, a square is formed between 213231 and 231213, because there are two disjoint distant moves connecting them. In other words, these movements can be applied in either order. Any square of this kind in any graph is called a disjoint square. A disjoint square can involve distant or adjacent edges. For example, there is a disjoint square of adjacent edges from 121343 to 212434. See Example <ref>. The expanded expressions graph of 1214, a distant Octagon. §.§ Bott-Samelson bimodules Let R be the polynomial ring over ℝ in variables x_1,…, x_n, together with an action of W where s_i permutes the variables x_i and x_i+1. The ring R is graded with deg(x_i) = 2. If M = ⊕ M^i is a graded R-module, then the grading shift convention will be M(i)^j = M^i+j. For s in S, we denote by R^s the subring of R consisting of polynomials invariant under the action of s. Let B_s denote the graded R-bimodule B_s := R ⊗_R^s R(1). The Bott-Samelson bimodule related to an expression w=(s, r…, t), and denoted by B_w, is the graded R-bimodule given by the tensor product of bimodules B_w = B_s⊗_RB_r⊗_R …⊗_R B_t. Direct sums of shifts of Bott-Samelson bimodules form the full monoidal graded subcategory of R-bimodules denoted by 𝔹𝕊Bim. We simplify the notation by writing B_i instead of B_s_i. For tensor products, we write B_i B_j instead of B_s_i⊗_RB_s_j. We refer to subsets of S as parabolic subsets. Given such a subset, J ⊂ S, we let R^J denote the subring of R of polynomials invariant under all the simple reflections in J. The full (additive monoidal graded) subcategory of R-bimodules additively generated by all the shifts of direct summands of Bott-Samelson bimodules is the category 𝕊Bim of Soergel bimodules. Soergel proved in <cit.>, that the isomorphism classes of indecomposable Soergel bimodules (up to grading shift) are parameterized by W. The indecomposable bimodule B_w appears as a summand inside B_sB_r… B_t in any reduced expression sr⋯ t of w, and does not appear in any Bott-Samelson bimodule associated to any other element smaller than w in the Bruhat order. Let B_J be the R-bimodule B_J := R ⊗_R^J R, and let w_J be the longest element of the parabolic subgroup generated by J. It is possible to show that B_w_J≅ B_J (see <cit.>). Thus B_J will appear as a summand of B_sB_r… B_t whenever sr⋯ t is a reduced expression for w_J, the longest element of J. §.§ Braid morphisms f_sr Consider the bimodules X_sr = B_sB_rB_s… and X_rs = B_rB_sB_r…, each product having m_sr terms. We write 1^⊗ for 1⊗ 1 ⊗…⊗ 1 ∈ R⊗_R^s R ⊗_R^r…⊗_R^tR. 
The morphism f_sr is defined as the only degree 0 morphism from X_sr to X_rs sending 1^⊗ to 1^⊗. We write f_s_is_j as f_ij. We describe these maps in terms of certain generators (as an (R,R)-bimodule) of the corresponding Bott-Samelson bimodules. There are three cases to consider: First case: If |i-j|≥ 2. The morphism f_ij : B_iB_j → B_jB_i is determined by the formula f_ij(1^⊗) = 1^⊗, because 1^⊗ generates B_iB_j as a bimodule. Second case: The morphism f_i(i+1) : B_iB_i+1B_i → B_i+1B_iB_i+1 is determined by the formulae f_i(i+1)(1^⊗) = 1^⊗ and f_i(i+1)(1⊗x_i⊗1⊗1) = (x_i+x_i+1)⊗1⊗1⊗1 - 1⊗1⊗1⊗x_i+2. Third case: The morphism f_i(i-1) : B_iB_i-1B_i → B_i-1B_iB_i-1 is determined by the formulae f_i(i-1)(1^⊗) = 1^⊗ and f_i(i-1)(1⊗x_i+1⊗1⊗1) = 1⊗1⊗1⊗(x_i+x_i+1) - x_i-1⊗1⊗1⊗1. §.§ Path morphisms Let G = (V, E, φ) be a graph. Here, V denotes the set of vertices, E denotes the set of edges, and φ is the incidence function φ :E→{{x,y}| x,y∈ V and x≠ y}. Let G = (V, E, φ) be a graph. A path p is a sequence of edges (e_1, e_2, …, e_n-1) for which there is a sequence of vertices [v_1, v_2,…, v_n] such that φ(e_i) = {v_i, v_i + 1} for i = 1, 2, …, n-1. The sequence [v_1, v_2,…, v_n] is the vertex sequence of the path. Note that it is possible to recover the edges of a path from its vertex sequence, so we will work with vertex sequences and paths indistinctly. For any path p we denote by [p] the associated sequence of vertices. We say that the length of p is n. We give a semi-orientation to the rex graph. We orient adjacent edges with the lexicographic order, so these edges go from i(i + 1)i to (i + 1)i(i + 1). The distant edges remain unoriented. When we speak of an oriented path in a semi-oriented graph, we refer to a path which may follow unoriented edges freely, but can only follow oriented edges along the orientation. A reverse-oriented path is a path oriented backwards. When we say path with no specification, we refer to any path. The starting point (vertex) and the ending point of a path p will be referred as p_a and p_z respectively. Here, a subpath is a path that makes up part of a larger path. For a pair of Bott-Samelson bimodules B, B' whose expressions differ by a single braid relation, we have a morphism of the type Id⊗…⊗ f_sr⊗…⊗Id∈Hom(B,B') where s and r depend on the aforementioned braid relation. In S_4, the expressions 212321 and 213231 are reduced expressions of the same element, and they differ by the braid relation 232=323. The aforementioned morphism from 212321 to 213231 has the following form. Id^2⊗ f_23⊗Id B_2 B_1(B_2 B_3 B_2) B_1→ B_2 B_1 (B_3 B_2 B_3) B_1. For each path p in the rex graph Rex(w) we call f(p) the associated morphism between the Bott-Samelson bimodules B_p_a and B_p_z. We call f(p) a path morphism. Note that for expressions related by distant edges (first case), the morphisms f_sr are isomorphisms. We will see in Section <ref> that the path morphism associated to a composition of distant edges only depends on the starting point and the ending point. In this way, we can collapse the dashed lines obtaining a new graph that we now define. The conflated expression graph, denoted by Γ_w, is the quotient of Γ_w (or Rex(w)) by all its distant edges. In other words, if p is a path such that all its edges are distant, then we identify p_a and p_z. We remark that there are no possible adjacent edges between p_a and p_z because the sum of the indices of a reduced expression remains unchanged when applied to a distant edge and varies when applied to an adjacent edge. 
When identifying the vertices we must choose a representative, which usually will be a specific one depending on the path morphisms we are working with. When the representative is not explicit, by convention, we will consider that it is the lower in the lexicographical order among the identified elements. We remark that there might be multiple edges between two vertices in this graph (see Example <ref>), as opposed to the expanded expressions graph. Here we choose a representative following the same criteria, avoiding multigraphs. If e is an edge (resp. v is a vertex) of the expanded expressions graph we call π(e) (resp. π(v)) its image in the conflated expression graph. In particular, if e is a distant edge, π(e)=∅. For a sequence of edges p=(e_1,…, e_n) we denote by π(p) the sequence (π(e_1),…, π(e_n)) omitting π(e_j) when it is empty. The following figure is the conflated expression graph of 121343. This configuration is also known as disjoint square. The conflated expression graph for 12321 in S_4 has three vertices. The expanded expressions graph for 246 in S_7 and its conflated expression graph. A configuration like the first one is known as distant hexagon. Considering the semiorientation in Definition <ref>, and the quotient in Definition <ref>, we obtain a proper orientation in Γ_w. This orientation is known as the Manin-Schechtman orientation <cit.>. The conflated expression graph of w_0,4 with the Manin-Schechtman orientation. We refer to this cycle in any of its forms (i.e. in its reduced, expanded, or conflated expression graph) as a Zamolodchikov cycle. For any w∈ S_n, the Manin-Schechtman orientation determines a unique source and a unique sink in Γ_w. We refer to them as s and t respectively. In <cit.>, it is proven that the Manin-Schechtman orientation satisfies the following properties. * It is BS-consistent or consistent with Bott-Samelson bimodules (<cit.>). This means that for any pair of oriented (or reverse-oriented) paths p and q, with p_a=q_a and p_z=q_z, we have f(p)=f(q). * For w_0,n, the orientation is said to be idempotent-magical. This means that the morphism associated to an oriented path from s to t composed with the morphism associated to a reverse-oriented path from t to s is an idempotent. A complete path in a graph is a path passing through every vertex of the graph at least once. Recall from the introduction the Forking Path Conjecture. Let w ∈ S_n, and let p, q be two complete paths in Rex(w), with p_a=q_a and p_z=q_z. Then f(p)=f(q). §.§ Conflated expression graph of the longest element We now restrict our attention to the graph Γ = Γ_w_0,n with the Manin-Schechtman orientation. For 𝐱, 𝐲∈Γ, we denote 𝐱↘𝐲 (resp. 𝐲↗𝐱) for some oriented (resp. reverse-oriented) path from 𝐱 to 𝐲 (resp. 𝐲 to 𝐱), presuming that one exists. We use f_𝐱↘𝐲 and f_𝐲↗𝐱 for the induced path morphisms, which do not depend on the choice of oriented path by the BS-consistency. The following is Proposition 3.16 in <cit.>. There is a unique source 𝐬, and a unique sink 𝐭 in Γ. Let m be the length of the shortest (not necessarily oriented) path from 𝐬 to 𝐭. Then every vertex lies on some oriented path 𝐬↘𝐭 of length m, and every oriented path 𝐱↘𝐲 can be extended to a length m path 𝐬↘𝐱↘𝐲↘𝐭. For 𝐱, 𝐲∈Γ, let DUD_𝐱, 𝐲 = f_𝐬↘𝐲∘ f_𝐭↗𝐬∘ f_𝐱↘𝐭. That is, DUD_𝐱, 𝐲 corresponds to any oriented path which goes from 𝐱 down to the sink, up to the source, and down to 𝐲. Let UDU_𝐱, 𝐲 = f_𝐭↗𝐲∘ f_𝐬↘𝐭∘ f_𝐱↗𝐬 corresponds to any path which goes from 𝐱 up to the source, down to the sink, and up to 𝐲. 
<cit.> For all 𝐱, 𝐲∈Γ, we have DUD_𝐱, 𝐲 = UDU_𝐱, 𝐲. Its image is the indecomposable object B_w_0 corresponding to the longest element of S_n. Let Z= f_𝐬↘𝐭 denote the unique oriented path morphism from source to sink. Let Z= f_𝐭↗𝐬 denote the unique reverse-oriented path morphism from sink to source. Note that DUD_𝐭,𝐬 = Z, UDU_𝐬, 𝐭= Z, and DUD_𝐬,𝐬 = UDU_𝐬, 𝐬 = Z∘ Z. Also, note that considering 𝐱 = 𝐬 and 𝐲=𝐭, Theorem <ref> says that Z∘Z∘ Z = Z. Analogously, we have Z∘ Z∘Z = Z. § FORKING PATH CONJECTURE IN S_4 §.§ Distant edges identification If w∈Γ_w is a vertex in the conflated expression graph, the set π^-1(w)∈Γ_w is called a cloud. If C is a cloud, then by definition, every two vertices in C are connected by a sequence of distant edges. If we consider the statistic N(w) given by adding all the indexes of the reduced expression (for example N(s_1s_3s_2)=1+3+2=6) we can see that the function N is constant in the vertices of a cloud. Consider any w∈ S_n. Let p be a path in the conflated expression graph Γ_w. The path morphism f(p) defined by p is f(p̃), where p̃=(e_1, e_2, …, e_n-1) is any path in the expanded expressions graph Γ_w with p̃_a=p_a, p̃_z=p_z, and such that one obtains p from p̃ by applying π (see Notation <ref>) to this sequence. Path morphisms in conflated expression graphs are well-defined. In other words, given two paths p̃, p̃' in Γ_w satisfying the conditions in Definition <ref>, we have f(p̃) = f(p̃'). Any two paths in Γ_w defining f(p) will only differ on their distant edges connecting two successive adjacent edges. So, for a fixed pair of successive adjacent edges, each sequence of distant edges will have the same starting vertex and the same ending vertex. These sequences represent oriented paths, and therefore, their induced path morphisms are the same (see <cit.>). Repeating this argument in each sequence of distant edges, we have the result. Definition <ref> does not depend on π. If we change the choices of adjacent edges e such that π(e)=ϕ, by <cit.> we obtain the same path morphism. The next two propositions show the equivalence between working with paths in rex graphs and working with paths in conflated expression graphs. We will therefore deduce that there is an equivalence between the Forking Path Conjecture (that we call FPC in the rest of this paper) for Γ_w and for Γ_w. For any w∈ S_n, finding paths in its conflated expression graph giving a counterexample for the FPC gives a counterexample for the FPC in its rex graph. For any w∈ S_n, let p,q be complete paths in Γ_w, with p_a=q_a and p_z=q_z, such that f(p)≠ f(q). By definition, f(p) is equal to f(p̃), where p̃ is a path in Γ_w satisfying the requirements in Definition <ref>. Similarly for f(q) and f(q̃). As p and q are complete, p̃ and q̃ pass through every cloud in Γ_w. We can modify p̃ and q̃ to pass through every vertex in Γ_w as follows. Each time p̃ or q̃ passes through a cloud, add a complete closed path in that cloud and then continue as before (this does not alter the path morphism). Let us call p̃_0 and q̃_0 the new paths in Rex(w), then they are complete paths such that f(p̃_0)≠ f(q̃_0). For any w∈ S_n, the FPC in Γ_w implies the FPC in Rex(w). Suppose that the FPC is true in Γ_w for some element w∈ S_n. Let p̃, q̃ be complete paths in Rex(w), with p̃_a=q̃_a and p̃_z=q̃_z. When applying the projection π to these paths, it is possible to obtain π(e) = ϕ for some adjacent edges of these paths. 
This means that these edges vanish while doing the identifications of vertices and choices of edges in the construction of Γ_w. If this is the case, it is possible to replace each of these edges e at a time. We can do that replacement with a path that goes through the corresponding cloud from the same starting vertex of e to the starting vertex of the edge that does not vanish, then follows that edge, and then goes back through the corresponding cloud again to the same ending vertex of e, returning to the original path. These local replacements (one for each vanishing edge e) do not alter the resulting path morphism, because the involved subpaths are oriented paths (see <cit.>). So, by modifying the paths as so, the resulting projections will satisfy the hypothesis of the FPC in Γ_w. Therefore f(p̃) = f(π(p̃)) = f(π(q̃))= f(q̃) and we have the FPC for Rex(w). So we obtain an equivalent conjecture. [FPC for conflated expression graphs] Let w ∈ S_n. Let p, q be two complete paths in Γ_w, with p_a = q_a and p_z = q_z. Then f(p) = f(q). §.§ Calculating path morphisms Consider w ∈ S_n and Γ_w with the Manin-Schechtman orientation. We say that a path is straight if it goes in an oriented or reverse-oriented fashion from one vertex x to another vertex y. We denote that by x → y. We use this notation when we do not want to specify if x↗ y or x↘ y. In particular, straight paths s ↘ t or t ↗ s will be called direct paths, and we denote them by the letter d. We say that a pair of paths p, q are equivalent and we write p ≃ q if f(p)=f(q), i.e., if they define the same path morphism. If a direct path (resp. straight path) is a subpath of a larger path, we call it a direct (resp. straight) subpath of the larger path. Now we restrict our attention to Γ_w_0,n. Let p, q be two complete paths with p_a= q_a and p_z = q_z, both containing a direct subpath d. We will show that p ≃ q. The main idea is to construct equivalent paths that lead us to a reduced problem, i.e., studying equivalences between a small set of paths. We divide p into three parts: the path before d, from p_a to d_a, which we call the p^α subpath, the direct subpath d, and the path after d, from d_z to p_z, which we call the p^β subpath. If there exist more than one direct subpath, it does not matter which one we choose to work with. We now focus on the p^α subpath. Let p be a complete path in Γ_w_0,n with a direct subpath d. Then p^α is equivalent to a path p' of the form p_a ↗𝐬↘𝐭↗𝐬↘…→ d_a, or p_a ↘𝐭↗𝐬↘𝐭↗…→ d_a. We assume without loss of generality that d_a= s. Then, p^α ends in the vertex 𝐬, that is, p^α_z= s. If p^α=id (i.e. the empty sequence), we are done. If not, the path p^α has a straight subpath from a vertex x_1 to s (with x_1≠ s ) which we take maximal in this sense, i.e. the straight subpath x_1 ↗𝐬 is not contained in any larger straight subpath. If x_1 ↗𝐬 = p^α, we have the desired path p'; if this is not the case, there exists a vertex x_2 (maximal in the same sense) such that x_2 ↘ x_1 is a subpath of p. We have that x_2↘ x_1↗𝐬↘𝐭 is a subpath of the path p corresponding to the end of p^α, followed by d. Using equation (<ref>), we rewrite (<ref>) as x_2↘ x_1↗𝐬↘𝐭↗𝐬↘𝐭. Now, by Theorem <ref>, we apply UDU_x_1,𝐭=DUD_x_1,𝐭, to obtain x_2↘ x_1↘𝐭↗𝐬↘𝐭↗𝐬↘𝐭. Using again (<ref>) to simplify, we see that (<ref>) has the same path morphism as x_2↘𝐭↗𝐬↘𝐭. We now consider the subpath 𝐭↗𝐬 as our new direct subpath d. Using repeatedly this process we obtain the equivalent path p' of the prescribed form. 
The same arguments work for the β subpath, mutatis mutandis. This way, after simplifications using the identities (<ref>) and (<ref>) if needed, from p it is possible to obtain a new path p̂ consisting of the following: a straight path from p_a to 𝐬 or 𝐭, followed by one or two direct paths, and then a straight path from 𝐬 or 𝐭 to p_z, satisfying f(p̂)=f(p). If p̂_a or p̂_z are 𝐬 or 𝐭, then p̂ does not have the α or the β subpaths (both cases may occur at the same time). The path p̂ obtained from the application of Proposition <ref> to p will be called a simplified path. In Γ_w_0, consider any pair of simplified complete paths, p and q, both containing a direct path d. Suppose that p_a=q_a and p_z=q_z. Then f(p)=f(q). ∙ First case: p_a=q_a=𝐬. If p_z=q_z=𝐭, then they are necessarily equivalent to 𝐬↘𝐭. If p_z=q_z=𝐬, they will be equivalent to 𝐬↘𝐭↗𝐬. If p_z=q_z=u, with u being a vertex that is not 𝐬 or 𝐭, we have two possibilities: 𝐬↘𝐭↗ u and 𝐬↘𝐭↗𝐬↘ u. By Theorem <ref>, UDU_𝐬, u= DUD_𝐬,u, so they are equivalent. ∙ Second case: If p_a=q_a=𝐭 or p_z=q_z=𝐬 or p_z=q_z=𝐭, we repeat a similar analysis as in the first case and conclude that f(p)=f(q). ∙ Third case: Now we study the case p_a=q_a=u and p_z=q_z=v, with u and v being vertices that are neither 𝐬 nor 𝐭. There are four possible cases for the paths p, q. * u↗𝐬↘𝐭↗𝐬↘ v * u↘𝐭↗𝐬↘ v * u↗𝐬↘𝐭↗ v * u↘𝐭↗𝐬↘𝐭↗ v The equation UDU_u, 𝐬=DUD_u,𝐬 implies that the first is equivalent to the second, UDU_u, v= DUD_u,v implies that the second is equivalent to the third, and UDU_u, 𝐭=DUD_u,𝐭 implies that the third is equivalent to the fourth. The FPC would follow if we could guarantee the existence of a direct subpath in any path, but this is not the case. Despite this, we will show that in Γ_w_0,4, any complete path will always contain a subpath that is equivalent to a direct path, proving the conjecture for this element. §.§ Diagrammatic calculus We begin by considering a complete path p in Γ_w_0,4 and its path morphism f(p). Since the path is complete, the vertices 𝐬 and 𝐭 are part of the path. Note that it could be possible to visit these points multiple times. So there is at least one subpath, that we will call candidate path, starting in 𝐬 and ending in 𝐭, or starting in 𝐭 and ending in 𝐬, minimal with this property. This means that there are no proper subpaths of the candidate path starting in 𝐬 and ending in 𝐭, or starting in 𝐭 and ending in 𝐬. Without loss of generality we will assume that the candidate path starts in 𝐬 and ends in 𝐭. Since the Zamolodchikov cycle has a “ring shape” (see Figure <ref>), our conditions imply that the candidate path will be hosted either in the left or in the right half of this cycle. Let us suppose without loss of generality that the candidate path is in the left half. Let us consider the following path which represents the desired direct subpath. 𝐬↘ A↘ B ↘ C ↘𝐭 This is a path from 𝐬 to 𝐭 where 𝐬=121321, A=212321, B = 213231 = 231231 = 213213 = 231213, C=232123, 𝐭=323123. Recall from Def. <ref> that for any path p we denote by [p] the associated sequence of vertices. The beginning of our candidate path k is from 𝐬 to A. We cannot return to 𝐬 by the minimality of the candidate path. So we have [k]=[𝐬, A, B,…, 𝐭]. Once we are in B we can go back to A or go forward to C. If we go back to A, since we cannot return to 𝐬, we have to return to B. For s, A, B as in path (<ref>), we have [A, B, A, B] ≃ [A, B] (so [𝐬, A, B]≃ [𝐬, A, B,A,B] ≃ [𝐬, A, B, A, B, A, B], and so on). Also, [B, A, B, A] ≃ [B, A]. 
This is a well-known identity. Such composition of morphisms can be represented and decomposed as illustrated in Figure <ref> below. We will use black rectangular frames to highlight spots where we use local relations. Blue, red, and green correspond to indexes 1, 2, and 3 respectively. The following is a consequence of <cit.> The part inside the rectangle in the second summand in Figure <ref> is decomposed as in Figure <ref>, by <cit.>. Each summand in the right-hand side is zero by <cit.>. Reading the diagrams upside down we conclude that [B, A, B, A] ≃ [B, A]. Note that, using the same diagrams (but with different colors), we can also find the equivalence [B, C] ≃ [B, C, B, C]. Without loss of generality we will assume that our candidate path has minimal length when compared to all its equivalent paths. Because of this, the candidate path [𝐬, A, B,…, 𝐭] has no subsequences of the forms [A,B,A,B] and [B,C,B,C]. So we can assume that our candidate path starts with [𝐬, A, B, C]. Being at C, if we go to 𝐭 we are done. We will find a contradiction if this is not the case. If we don't go to 𝐭, the path returns to B. From B we can not return to C, because we would have [B, C, B, C] as a subpath. Thus, from B we go to A. Since we cannot return to 𝐬 we have to go to B. So our path starts as follows [𝐬, A, B, C, B, A, B]. Again, by minimality of the length of the candidate path, the next vertex has to be C. The following proposition proves the contradiction. For A, B, C as in path <ref>, we have [A, B, C, B, A, B, C] ≃ [A, B, C] The equivalence is proved diagrammatically in Figure <ref>. The local relations that we use are all from <cit.>. In particular, from 1) to 2) we apply Eq. 2.26. From 2) to 4) we apply Eq. 2.20 twice. From 3) we obtain 5) and 6) by means of Eq. 2.26. Applying Lemma <ref> to the term 4) we obtain Figure <ref>. Using the concluding observation of the proof of Lemma <ref>, we find that each of the summands 5) and 6) can be rewritten as a sum of two morphisms which both are zero. Hence, 5) and 6) are both zero, as shown in Figure <ref>. From 5) and 6), to 7) and 8) we apply Eq. 2.15. From 7) and 8) to 9) and 10), we repeatedly apply Eq. 2.20 and Eq. 2.22. As in Figure <ref>, we recognize in 9) and 10) compositions equivalent to the zero morphism. The Forking Path Conjecture is true for w_0,4. By <ref> and <ref>, we have that any path starting at 𝐬 and ending in 𝐭, that does not visit 𝐬 or 𝐭 in the rest of the path, and that is located on the left half of the Zamolodchikov cycle (See Figure <ref>) is equivalent to a direct path. The same result is true if one considers paths from 𝐬 to 𝐭 or from 𝐭 to 𝐬, located on the left or on the right half of the cycle. The reason for this is that the proofs for the four cases will be the same as before, but turning the diagrams upside down for the case 𝐭 to 𝐬 in the left half, applying a vertical axial symmetry to the diagrams for the right half for the case 𝐬 to 𝐭 and turning the diagrams upside down and applying a vertical axial symmetry to the diagrams for the case 𝐭 to 𝐬 on the right[ A deeper reason for this is that we are implicitly applying some equivalences of categories. There is a contravariant equivalence of monoidal categories 𝕊Bim→𝕊Bim, given by the flip (that sends a diagram to its horizontal flip) and also an auto-equivalence of 𝕊Bim associated to the only non-trivial automorphism of the Dynkin diagram of type A_n, sending s_i↦ s_n-i+1. ]. 
So we conclude that any complete path in Γ_w_0,4 has a subpath that is equivalent to a direct path. Therefore, by Proposition <ref> the proof of the FPC for w_0 in S_4 is complete. Now we verify the conjecture for the remaining elements in S_4 different from 12321. The elements w and their Γ_w graphs oriented according to Manin-Schechtman are given in the following table. The last entry Zam is the Zamolodchikov cycle, as introduced in Example <ref>. There is no need to verify the trivial graphs Γ_w (∙), since the only possible morphism is the id. For any ∙→∙ case, the proof of the Forking Path Conjecture follows easily from Lemma <ref>. It remains to check the ∙→∙→∙ cases. We will concentrate in the cases 23121 and 12312 because the other case is the one giving the counterexample to the Forking Path Conjecture. We now study the element 23121. The Forking path conjecture is true for the element 23121. In this case we can speak of a simplified path p similar to that of Definition <ref>. Consider {x,y}={𝐬, 𝐭}. These paths will be of the form p_a→ x→ y→ p_z, where p_a could be equal to x, and p_z could be equal to y, or alternatively, of the form p_a→ x→ y→ x→ p_z, where p_a and p_z could be equal to x. The path p_a → x (resp. y→ p_z in the first case, and x→ p_z in the second) will have length one only when p_a=c (resp. p_z=c), where c is the only vertex different to 𝐬 and 𝐭. We consider simplified paths p and q. ∙ We first study the case p_a, p_z∈{𝐬, 𝐭}. It is immediate that p≃ q, since there is only one possible path, i.e., p=q. ∙ Consider the case p_a=𝐬 and p_z=c. There are two possible simplified paths, P_1:=[𝐬, c, 𝐭 , c] and P_2:=[𝐬, c, 𝐭 , c, 𝐬 , c]. We will prove that P_1 is equivalent to P_2. In the following figure, part 1) represents the path morphism of P_2. We begin in 1) applying the relation seen in Figure <ref>. Then, we apply the same relation in 3). Diagram 5) is zero by the same reason as diagram 9) in Figure <ref>. In 6) we focus on the red and blue strands because we can retract the green dots. We have the following diagrams. So we obtain that diagram 6) is zero. Thus, diagram 1) is equal to diagram 4), or in other words, the path morphism of P_2 is equal to the path morphism of P_1. ∙ We have proved the proposition when p_a=𝐬 and p_z is any vertex. One can prove similarly the proposition for p_a=𝐭 and p_z any vertex, by symmetry. By flipping the diagrams we can prove the proposition for any p_z∈{𝐬,𝐭}. ∙ The only case that remains to show is the equivalence for p and q such that p_a=q_a=p_z=q_z=c. There are four possible simplified paths Q_1:=[c,𝐬,c,𝐭,c], Q_2:=[c,𝐭,c,𝐬,c], Q_3:=[c,𝐬,c,𝐭,c,𝐬,c], Q_4:=[c,𝐭,c,𝐬,c,𝐭,c]. In the left hand-side of the following figure, we draw the path morphism corresponding to Q_1, which can be seen to be equal to the path morphism corresponding to Q_2 after an application of the local relation (2.15) in <cit.>. By reading Figure <ref> upside down, we have that [𝐬,c,𝐭,c,𝐬,c] ≃ [𝐬,c,𝐭,c]. This way, we deduce that Q_4 ≃ [c,𝐭,c,𝐬,c,𝐭,c] ≃ [c,𝐭,c,𝐬,c,𝐭,c,𝐬,c] ≃ [c,𝐭,c,𝐬,c] ≃ Q_2. Analogously, [c,𝐬,c,𝐭,c,𝐬,c] ≃ [c,𝐬,c,𝐭,c], so Q_3≃ Q_1. The proof of the Forking Path Conjecture for the element 12132=12312 is essentially the same as the proof given in Proposition <ref>, after applying the auto-equivalence of 𝕊Bim given by the unique non-trivial automorphism of the Dynkin diagram, i.e., applying a vertical symmetry to all diagrams. § THE COUNTEREXAMPLE Let us consider the element σ=12321 ∈ S_4. 
The rex graph Rex(σ) corresponds to the following figure: Note that the four vertices in the middle are the same vertex in the conflated expression graph. Let us consider the element x:=1 ⊗_s_1 1 ⊗_s_3 1 ⊗_s_2 x_3 ⊗_s_3 1 ⊗_s_1 1 in the Bott-Samelson bimodule B_1B_3B_2B_3B_1. Let v_1 and v_2 be the following paths respectively: To simplify calculations we need the following. From Equation (<ref>) we obtain f_i(i+1)(1⊗_s_ix_i+1⊗_s_i+11⊗_s_i1) = 1⊗_s_i+11⊗_s_i1⊗_s_i+1x_i+2. Similarly, from Equation (<ref>) we obtain f_i(i-1)(1⊗_s_i1⊗_s_i-1x_i⊗_s_i1) = x_i-1⊗_s_i-11⊗_s_i1⊗_s_i-11. We will use a diagrammatic method to evaluate homomorphisms in 𝕊Bim. The next figure shows the evaluation of f(v_1) (left) and f(v_2) (right) in the element x defined above. We will prove that the elements obtained are different. It is known that B_3B_2B_3 ≅ B_323⊕ B_3 , where B_323≅⟨1^⊗⟩ and B_3 ≅⟨ 1⊗ 1⊗ x_3 ⊗ 1 ⟩ . So it is possible to generate the R-bimodule B_3B_2B_3 with the elements 1^⊗ and 1⊗ 1⊗ x_3 ⊗ 1. In B_1B_3B_2B_3B_1, we have f(v_1)(x) = 1 ⊗_s_1 1 ⊗_s_3 1 ⊗_s_2 x_3 ⊗_s_3 1 ⊗_s_1 1 and f(v_2)(x) = 1 ⊗_s_1 x_2 ⊗_s_3 1 ⊗_s_2 1 ⊗_s_3 1 ⊗_s_1 1. If they were the same, applying dots over both B_1 in B_1B_3B_2B_3B_1 we would have that x_2 · 1^⊗ = 1⊗_s_3 1 ⊗_s_2 x_3 ⊗_s_3 1 in B_3B_2B_3, which, as stated above, is not true. § A FAMILY OF COUNTEREXAMPLES The element σ=12321 is the only one where the FPC fails for the group S_4. We proved this by showing that the diagrams in Figure <ref> (the same diagrams as in Figure <ref>) are not equal. The first diagram decomposes as in Figure <ref>, while the second decomposes as in Figure <ref>. The only summand that is different is the last one. This allowed us to find the correct element to find the counterexample. Repeating the same idea, we see that the elements of the form 12… (n-1)n(n-1)… 21 have a line as conflated expression graph. Let's say this line is the following figure. Considering p the path [E_2, E_1, E_2, E_3, …, E_n, E_n-1, …, E_2] and q the path [E_2, E_3, …, E_n, E_n-1, …, E_1, E_2]. We can check in general that f(p)≠ f(q) by evaluating these path morphisms in particular elements. We will not give a rigorous proof of this fact, but the general strategy can be inferred from Figure <ref>. The purple strand is related to the index 4. The black, to the index 5. There are also elements of symmetric groups which can be used to produce counterexamples, and whose conflated expression graphs are not linear, i.e., elements whose conflated expression graph is different from Figure <ref>. The reader can verify that one such element is 12134325 in S_6. Note that all elements in our family of counterexamples are paths with p_a∉{𝐬, 𝐭} and p_z∉{𝐬, 𝐭}. This could make us think that the behavior for complete paths with p_a∈{𝐬, 𝐭} or p_z ∈{𝐬, 𝐭} is different. That is not the case! Consider the same element σ=12321, and Γ_σ, the paths [𝐬,c,𝐭,c,𝐬,c] and [𝐬,c,𝐭,c] give a counterexample. It is enough to evaluate both path morphisms in the element 1 ⊗_s_1 x_2 ⊗_s_2 1 ⊗_s_3 1 ⊗_s_2 1 ⊗_s_1 1. These counterexamples (and some others that we do not show here) have in common particular choices of elements and paths, however, verification for other families of elements show that there is a phenomenon hidden underneath. To be precise, we have observed that for the longest element in S_n, any path p with p_a=𝐬 and p_z=𝐭 will be equivalent to an oriented path from 𝐬 to 𝐭 (we proved this for S_4 in Section <ref>). 
In other words, we do not need to follow the Manin-Schechtman orientation as long as we start and end in the right vertices. The same when we start in 𝐭 and end in 𝐬. We propose the following strengthening of the FPC for w_0. Let w_0,n∈ S_n and 𝐬, 𝐭 be the source and sink of the Manin-Schechtman orientation. Let p, q be two paths in Γ_w_0,n, which pass through 𝐬 and 𝐭, satisfying p_a=q_a, and p_z=q_z. Then f(p)=f(q). We also conjecture the same for other choices of 𝐬 and 𝐭 obtained from other orientations different from the one given by Manin and Schechtman. Of course, this conjecture implies the FPC for w_0,n. plain
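Since the verifications above ultimately rest on enumerating reduced expressions and the braid moves between them, a small computational sketch may help fix ideas. The following Python snippet is ours, not part of the original development, and all function names are illustrative assumptions: it enumerates Rex(w), separates distant (commutation) edges from adjacent (braid) edges, and groups vertices into clouds, i.e. the vertices of the conflated expression graph. It does not compute the Bott-Samelson path morphisms f(p), so it cannot by itself decide the Forking Path Conjecture.

```python
from collections import defaultdict

def apply_s(p, i):
    """Right-multiply the permutation p (one-line notation) by s_i,
    i.e. swap the entries in positions i and i+1 (1-indexed)."""
    q = list(p)
    q[i - 1], q[i] = q[i], q[i - 1]
    return tuple(q)

def right_descents(p):
    return [i for i in range(1, len(p)) if p[i - 1] > p[i]]

def reduced_words(p):
    """All reduced words of p, as tuples of simple-reflection indices."""
    descents = right_descents(p)
    if not descents:                        # identity permutation
        return [()]
    return [u + (i,) for i in descents for u in reduced_words(apply_s(p, i))]

def rex_edges(words):
    """Distant edges (commutation moves) and adjacent edges (braid moves)."""
    distant, adjacent = [], []
    for w in words:
        for k in range(len(w) - 1):
            a, b = w[k], w[k + 1]
            if abs(a - b) >= 2:             # far-apart letters commute
                v = w[:k] + (b, a) + w[k + 2:]
                if w < v:
                    distant.append((w, v))
        for k in range(len(w) - 2):
            a, b, c = w[k], w[k + 1], w[k + 2]
            if a == c and abs(a - b) == 1:  # i(i+1)i <-> (i+1)i(i+1)
                v = w[:k] + (b, a, b) + w[k + 3:]
                if w < v:
                    adjacent.append((w, v))
    return distant, adjacent

def clouds(words, distant):
    """Connected components under distant edges: the conflated-graph vertices."""
    neigh = defaultdict(set)
    for u, v in distant:
        neigh[u].add(v)
        neigh[v].add(u)
    seen, components = set(), []
    for w in words:
        if w in seen:
            continue
        component, stack = set(), [w]
        while stack:
            x = stack.pop()
            if x not in component:
                component.add(x)
                seen.add(x)
                stack.extend(neigh[x])
        components.append(component)
    return components

# sigma = 12321 in S_4, built by right-multiplying the identity by s1 s2 s3 s2 s1
sigma = (1, 2, 3, 4)
for i in (1, 2, 3, 2, 1):
    sigma = apply_s(sigma, i)
words = reduced_words(sigma)
distant, adjacent = rex_edges(words)
print(len(words), "reduced words,", len(clouds(words, distant)), "clouds")
```

For σ = 12321 in S_4 this prints six reduced words collapsing into three clouds, matching the description of Rex(σ) and of its conflated expression graph given above.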
http://arxiv.org/abs/2307.06009v1
20230712084117
A Linear Algebraic Framework for Dynamic Scheduling Over Memory-Equipped Quantum Networks
[ "Paolo Fittipaldi", "Anastasios Giovanidis", "Frédéric Grosshans" ]
quant-ph
[ "quant-ph", "cs.NI" ]
Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. XXXX Sorbonne Université, CNRS, LIP6, F-75005 Paris, France (email: {paolo.fittipaldi, anastasios.giovanidis, frederic.grosshans}@lip6.fr) We acknowledge funding by the French state through the Programme d’Investissements d’Avenir managed by the Agence Nationale de la Recherche (project ANR-21-CMAQ-0001) and by the European Union’s Horizon 2020 research and innovation program under Grant Agreement No. 820445 and project name “Quantum Internet Alliance”. This work extends <cit.>, which was presented at the IEEE International Conference of Quantum Computing and Engineering 2022. Corresponding author: Paolo Fittipaldi (email: [email protected]). Quantum Internetworking is a recent field that promises numerous interesting applications, many of which require the distribution of entanglement between arbitrary pairs of users. This work deals with the problem of scheduling in an arbitrary entanglement swapping quantum network — often called first generation quantum network — in its general topology, multicommodity, loss-aware formulation. We introduce a linear algebraic framework that exploits quantum memory through the creation of intermediate entangled links. The framework is then employed to mathematically derive a natural class of quadratic scheduling policies for quantum networks by applying Lyapunov Drift Minimization, a standard technique in classical network science. Moreover, an additional class of Max-Weight inspired policies is proposed and benchmarked, reducing significantly the computation cost, at the price of a slight performance degradation. The policies are compared in terms of information availability, localization and overall network performance through an ad-hoc simulator that admits user-provided network topologies and scheduling policies in order to showcase the potential application of the provided tools to quantum network design. Dynamic Scheduling, Optimal scheduling, Integer programming, Lyapunov methods, Queueing analysis, Quantum Communication, Quantum entanglement, Quantum networks, Scheduling, Scheduling algorithms, Teleportation. =-15pt A Linear Algebraic Framework for Dynamic Scheduling Over Memory-Equipped Quantum Networks Paolo Fittipaldi, Anastasios Giovanidis, and Frédéric Grosshans August 12, 2023 =========================================================================================== § INTRODUCTION As experimental demonstrations of quantum repeater links and small-scale quantum networks<cit.><cit.><cit.> start to surface, the vision of a future Quantum Internet moves closer to reality<cit.><cit.><cit.><cit.>. Despite it still being a long-term goal, the road is partially paved by the development of the classical internet, that identified and solved all the problems intrinsic to scaling a network up and operating it in a distributed way. The solutions to such problems are not directly translatable to quantum networks in general because quantum hardware is radically different, creating the need for a new branch of network science with its own set of specialized tools. The present work aims to describe a novel framework to formulate and solve the problem of scheduling entanglement swapping operations in quantum networks, and showcase its potential through some application examples. In classical networks, communication is achieved by making information packets hop through a series of network nodes until they reach their destination. 
Whenever several packets from different users need to pass through the same node, the node needs to have a specific discipline that regulates the order in which the packets are relayed. Depending on the application, the network might want to minimize all wait times, prioritize the packets that have certain properties or use more sophisticated specialized algorithms to determine the order of passage. The set of rules that a node applies to solve this problem is called a scheduling policy, and it is an integral part of every well-functioning network architecture<cit.>. Switching to quantum networks, the concept of packet going from a source to a destination no longer applies. The cornerstone of a large and varied set of communication applications<cit.><cit.> in the quantum domain is quantum entanglement, and the ultimate task of a quantum network system is to distribute entanglement to arbitrary sets of users. Due to the difficulties that come with distributing entanglement over a long link, the task is achieved in practice through entanglement swapping operations at intermediate nodes <cit.> that may serve several distinguished pairs of end users. The challenge of scheduling in quantum networks revolves therefore around entanglement swapping operations, which must be scheduled by the nodes following what will be addressed in the following as a quantum scheduling policy. Despite there being several solutions that yield an extensive choice of well-established policies for classical networks, the scheduling problem remains an active challenge for quantum networks: pioneeristic effort has been undertaken to solve the scheduling problem in specific quantum networking settings<cit.><cit.><cit.><cit.><cit.>, but no trivial generalization of the results presented in these works to medium and large scale networks is possible. In this context, our work aims to provide a general framework that can be employed for designing and benchmarking scheduling policies on general quantum networks. We stress that our findings pertain to arbitrary network topologies with no theoretical limit on scale and enable users to work with multiple commodities requesting streams of entangled pairs. Furthermore, our framework actively exploits quantum memory slots: even when not all elementary links along a given route are ready, the network is still allowed to create intermediate entangled pairs that cover a part of the route exploiting the available links and store them in memory for future use. The idea of intermediate links has already appeared in other works<cit.><cit.><cit.>, and we seek to extend it to our general setting as a core mechanism of operation of the network systems we model. It should be noted that, while some scheduling policies are proposed and analysed in the following, the broader focus of this work is on describing the framework as a practical tool and providing examples of its application to non-trivial scenarios. Our work is primarily aimed at first generation quantum networks as detailed in <cit.>, but our methods might prove interesting for a future treatment of second and third generation systems as well. The paper follows the following structure: in sec. <ref>, the relevant scientific literature is reviewed and compared with our contribution. Sec. <ref> provides a detailed description of the system we are modeling and the various components of our algebraic framework. We follow up with sec. <ref>, where we introduce and analyze an array of scheduling policies through the tools we propose. Sec. 
<ref> is devoted to presenting numerical results obtained by applying our tools to several network setups. § CONTEXT AND RELEVANCE OF THIS WORK As a cross-disciplinary topic, quantum networks are interesting to both quantum physicists and classical network scientists. As such, it is common to try and adapt classical networks ideas and know-how to the quantum world. Much like our work, <cit.> provides a formulation of the scheduling problem on quantum networks, the main difference being that the cited work approaches the problem through architecture design and heuristic scheduling, while our contribution is more geared towards building a general algebraic framework to mathematically derive and compare scheduling policies. Concerning purely theoretical results, an optimal theoretical bound for entanglement distribution across a line network with a single commodity is derived in <cit.> and expanded upon in <cit.>. References <cit.>,<cit.>, and <cit.> are all examples of stochastic analysis of a single quantum switch to characterize the scheduling policies that stabilize it. The physical model employed in these works is deeper, in that it accounts for purely quantum imperfections that we neglect, but their scope is somewhat narrower than ours because they all consider a single quantum switch that has to serve a set of users in a star-like configuration. More specifically relevant to our work, <cit.> and <cit.> detail the application of Lyapunov stability theory to a quantum switch and the subsequent derivation of a throughput-optimal Max Weight <cit.> policy, much like it is done for the quadratic policies we propose. The key differences rely in the generality of our work, which applies to arbitrary topologies with multiple commodities, and in the fact that the cited papers model a switch as a single-hop queuing system dealing with entanglement requests, i.e. requests arrive at the switch and are served after waiting in a queue. In our work, we add a complexity layer: together with the single-hop queuing model for the requests, we propose a multi-hop model for entangled pairs in quantum memories, modeling swapping as movement of pairs between queues. This new set of queues acts as a variable resource that the network must regulate according to a suitable scheduling policy. The usage of memory in our framework is physically similar to the Virtual Quantum Link idea first introduced in <cit.> and revisited in <cit.><cit.><cit.>: the introduction of memory at the nodes enables them to seek a balance between depleting their supply of entangled pairs for swapping and conserving it for future use or direct user consumption. The deeper implication of this point is that the network is free to create intermediate links and store them: this leads to distributing pairs across a service route in a “growing” fashion, that both increases performance and removes the need for end-to-end link state information, while naturally adapting to a multi-hop queuing scenario. As a final remark, we stress that due to the abundance of interesting research that has been carried out to perform quantum routing on several network topologies<cit.><cit.><cit.><cit.> we assume the existence of a set of static pre-computed routes that connect each end-users pair, under the premise that our work should be easily integrable with a more refined routing technique. 
To conclude the section, we summarize the key contributions of the present manuscript: * We introduce a general framework for scheduling in quantum networks that poses no assumptions on topology, number of commodities or choice of scheduling policy (sec. <ref>); * We extend the idea of intermediate virtual link to the general network case (ibidem); * Through the help of our framework, we derive an optimal quadratic scheduling policy that works over our multi-hop model. We then formulate suboptimal versions of this policy that relax information requirements (sec. <ref>). * Finally, we propose a novel, Max-Weight inspired class of scheduling policies that is shown to perform satisfactorily while posing feasible communication constraints on the network (ibidem). § SYSTEM DESCRIPTION In this section, we describe the physical model that we will rely on to develop our framework. Since the framework we provide is composed of two interconnected queuing models, we devote subsections <ref> and <ref> to describe respectively the details behind ebit queues and demand queues. As a preliminary step, we clarify the notation conventions that are adopted in this work: lower case for scalars (x), bold lower case for vectors (𝐱), bold upper case for matrices (𝐗) and calligraphic upper case for sets (𝒳). Well-known matrices such as the identity matrix or the null matrix are indicated in blackboard bold and subscripted with their dimension, as in I_n and 0_n× m. Since the term is ubiquitous in the following, we state the definition of a quantum switch as a device that is equipped with quantum memories to store qubits, a Bell State Measurement (BSM) apparatus to perform entanglement swapping operations, and local quantum processing capabilities. An entanglement swapping operation is assumed to be instantaneous and always successful, and the classical communication overhead that comes with entanglement swapping (such as sharing measurement results) is considered free. We assume our quantum switches to be connected to a classical communication infrastructure to coordinate control operations for protocols and, if the chosen scheduling policy so requires, exchange status information with other nodes and/or a central scheduling controller. Moreover, every node is assumed to possess unlimited memory slots. While this might look like too coarse of an assumption, both the literature<cit.><cit.> and some preliminary results we present here suggest that, while indeed being an important modeling point, limiting the memory slots might not be the first network limitation that must be taken into account. The physical system we consider is a network of quantum switches connected by lossy fiber links. We model it as an arbitrary connected graph 𝒢 = (𝒱,ℰ), where the switches are deployed at the locations specified by the vertices 𝒱 and interconnected by edges (i,j) ∈ℰ that represent a fiber link plus a generic elementary entangement generation scheme (such as a χ^(2) crystal, a Bell State Analyzer in the middle<cit.> or at one of the stations<cit.>). Every switch has a number of memory slots, assumed to be infinite in this work, in which qubits may be stored. Pairs of entangled qubits (referred to as an ebit<cit.> hereafter) are generated by each fiber link with a given constant average rate, which may be heterogeneous across links but is constant in time, and stored inside memories at the end nodes of the respective link. 
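To make the objects above concrete, the following minimal sketch (our own illustrative encoding, not taken from the paper or its simulator; all names and rate values are placeholders) spells out how such a network could be described in code: switches as vertices, elementary entanglement generation links as edges annotated with their average generation rate, and one static pre-computed route per user pair, as assumed at the end of the previous section.

```python
# Toy encoding of the network model: an A-B-C-D repeater chain with one
# user pair (A, D) served by the single static route A-B-C-D.
TOY_NETWORK = {
    "nodes": ["A", "B", "C", "D"],
    # physical edges (i, j) -> average ebit generation rate alpha_ij
    # (ebits per time step), already including all link-level losses
    "physical_links": {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0},
    # (Alice, Bob) pairs -> static pre-computed route serving them
    "routes": {("A", "D"): ["A", "B", "C", "D"]},
}
```

The A-B-C-D chain is also the running example developed in the next subsections.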
Among the network nodes, there are n pairs {(𝐴𝑙𝑖𝑐𝑒_1, 𝐵𝑜𝑏_1),…,(𝐴𝑙𝑖𝑐𝑒_n, 𝐵𝑜𝑏_n)} that request ebits in a random way to realize a generic application. Each (𝐴𝑙𝑖𝑐𝑒_n, 𝐵𝑜𝑏_n) pairs is connected by one or more routes that are not necessarily disjoint form the ones connecting other users, and therefore can create congestion that needs to be managed by a scheduling policy. We stress that since we assume unlimited memory we are choosing to focus on the link congestion case: we leave node congestion for future investigation. Given this starting point, the purpose of a quantum network is to perform entanglement swapping operations in order to distribute ebits to its users in a way that is optimal under a given performance metric. In pursuing this objective, the network must rely on a scheduling policy to minimize congestion by carefully deciding which swaps to perform when, while also being hindered by link-level fiber losses and by quantum memory imperfection causing the loss of stored ebits. Memory and fiber losses are the only two sources of imperfection that are accounted for in this paper: for simplicity reasons, we neglect sources of state degradation other than losses in this formulation of our algebraic model, since they require a far lower level of abstraction, and lead to more complex multiobjective problems <cit.>. However, our model could be reinterpreted in the context of more modern error-corrected networks if we state that each link generates entangled pairs with a given logical rate, i.e. the rate of creation of error-corrected ebits. For practical reasons, we discretize the time axis: since the scheduler is supposed to take decisions at fixed times, it is natural to take a discrete time step Δ t as the time unit of interest. Between two subsequent clock ticks the system is free to evolve stochastically and at the end of each time step a scheduling decision is taken. This places a lower bound on Δ t: no decision can happen before all information has been successfully communicated to all deciding agents, thus Δ t must be at least as large as the classical communication delay introduced by state-related information exchange. We note that, while at the moment our work does not take into account finite communication delays, the design process of a real system would need to consider that a policy that requires more communication, despite being better informed, will suffer from more losses (as they depend on the length of the time step) and be less reactive to instantaneous change. §.§ Ebit Queues To model ebits stored at memory nodes, the concept of an ebit queue is introduced: each pair of nodes e = (i,j) inside the extended edge set ℰ̃=𝒱×𝒱 is said to possess an ebit queue q_ij(t). Furthermore, among ebit queues, every q_ij(t) associated to an edge (i,j)∈ℰ corresponds to an elementary entanglement generation link, and is therefore called a physical queue, while all other ebit queues are called virtual queues. Ebit queues are therefore a piece of classical control information introduced to keep track of which nodes share entanglement: q_ij(t) = n means that there are n qubits at node i and n qubits at node j, taking up n memory slots at the respective nodes and sharing pairwise entanglement. In the following, we describe how all the processes that ebits undergo in our model are translated to queue operations. §.§.§ Ebit Generation At each time step, every fiber link — and thus every physical queue — generates a random number of ebits a_ij(t). 
This term can be seen as an open interface to the specific random process that models ebit generation and it is modeled hereafter as a Poisson process of constant mean value α_ij≥0 for generality. It should be noted that α_ij is the final generation rate after accounting for link-level imperfections — finite brightness of the source, propagation losses, finite success probability of pair-generation BSMs, etc. — as a cascade of Poisson filtration processes, at the end of which we obtain a value for α_ij. Thus, ebit generation is modeled by a direct enqueueing operation along the relevant queue. It should be noted that, since this operation models entanglement generation at the physical level, it only concerns physical queues. For virtual queues, a_ij(t) = 0 ∀ t. §.§.§ Ebit Losses To model (symmetrical) memory loss, we employ a standard quantum memory model and calculate the storage-and-retrieval efficiency of the memories as η = exp(-Δ t/τ), where τ is the expected lifetime of a qubit in the memory and Δ t is the duration of a time step. This figure of merit models the probability to correctly retrieve a qubit from a memory after it has been stored in it for one time step. We assume losses to be symmetrical in that whenever one loss event happens, either both ends of the link lose their respective qubit or one end loses it and instantly communicates loss to the other concerned node. Therefore, one loss event always models the loss of one complete ebit. At every time step, every queue throws as many biased coins as there are stored qubits and removes as losses all the ones that fail the random check. Losses are therefore modeled by the binomially distributed random variable ℓ_ij(t), with as many trials as there are ebits stored in queue (i,j) and probability to lose one pair 1 - η. It should be clear that the number of trials for the geometric distribution is based on q_ij(t), i.e. on the pairs present at the beginning of the time step, meaning that new arrivals are immune to losses for the ongoing time step. We remark that the statistical distribution of ebit survival times follows the geometric distribution defined by η, whose mean value 11-η tends to the expected τΔ t for small Δ t/τ, τ being the expected lifetime of ebits in the memories. The remaining difference is an effect of the dicretization. Finally, we stress that accounting for losses in such a time-dependent way makes the presented framework valid as a tool to determine the optimal frequency at which scheduling decision should be taken, given the technological parameters. §.§.§ Entanglement Swapping After covering generation and loss, the last mechanism that can modify the amount of ebits in a queue is entanglement swapping. Entanglement swapping always involves consuming two "shorter" pairs to obtain one longer pair, which naturally translates to our queue-based formalism as two removals from the parent queues, and one addition to the child queue. We introduce the following notation: let r_i[j]k(t) indicate the number of swapping operations that happen at a given time step, at node j, from queues (i,j) and (j,k) to queue (i,k): as a notation example, r_A[B]C(2) = 3 means that the scheduler has ordered three BSMs to be performed at node B to swap three pairs from queues AB, BC to AC at time step 2. 
There will be as many r_i[j]k(t) terms as there are transitions allowed by the chosen routing: if for instance there are two parallel paths ABCD and AB'C'D across the Alice-Bob pair AD, but only ABCD is explicitly routed, the system will include terms r_A[B]C(t) and r_A[C]D(t), but not r_A[B']C'(t) and r_A[C']D(t), effectively ignoring the second path. This is a limitation that directly arises from assuming that routing is static and known, but it is also easily circumvented by adding more paths to the routing, since we place no theoretical limit on the number of routes that can serve a user pair. To clarify how all the pieces introduced until now fit together, suppose the Alice-Bob pair AD is connected by the route ABCD, as shown in Fig. <ref>. Assume the average generation rates to be α_AB = α_BC = α_CD = 1 (time steps)^-1. Lastly, assume that all the memories in the system have η = 0.9 storage-and-retrieval efficiency for the chosen time step duration. Fig. <ref> shows how the full system evolves throughout two time steps, while Fig. <ref> shows the same test run but focuses on queue AB, to highlight the timing of the various phenomena at play.

Figure: Explicit example of two time steps over a simple topology. Continuous lines represent physical queues and dashed lines virtual ones. Grey circles represent ebits that were in the queue at the beginning of a time step, red ones ebits that arrived during that time step. Blue crosses represent the loss of an ebit. Upper figures (a) show the situation at the beginning of the corresponding time step, lower figures (b) at the end of it.

Figure: Example of two time steps from the point of view of queue AB. Queue snapshots q_ij(t) are taken at the very beginning of a time step, while arrivals and losses happen stochastically but are only assessed at the end of the step, when the scheduling decision is taken. Note that ebits arriving during the current time step are not subject to losses in this model.

∙ During time step 1:
* At the beginning of the time step, the queue states are: q_AB(1) = q_CD(1) = 1, q_BC(1) = 0;
* At the end of the time step, new ebits have been generated across AB and BC (a_AB(1) = 2, a_BC(1) = 1) and one has been lost across CD (ℓ_CD(1) = 1). The scheduling decision taken from this configuration is r_A[B]C(1) = 1: one swap at node B from queues AB and BC to AC.

∙ During time step 2:
* The initial configuration sees two stored pairs in AB which were not employed in the last time step (q_AB(2) = 2) and the freshly swapped one in AC (q_AC(2) = 1);
* Throughout the time step, one pair was lost across AB (ℓ_AB(2) = 1) and one generated across CD. The scheduler may now decide r_A[C]D(2) = 1 to move to AD, or store the pairs for future use.

To categorize transitions in terms of their net effect on queues, we say that a given transition i[j]k is incoming for queue (i,k), because it adds pairs to it, and outgoing for queues (i,j) and (j,k), because it takes pairs from them. A queue's evolution can therefore be summarized as follows, i.t. and o.t. being shorthand for incoming and outgoing transitions: q_ij(t+1) = q_ij(t) + a_ij(t) - ℓ_ij(t) - ∑_o ∈ o.t. r_o(t) + ∑_k ∈ i.t. r_k(t). For clarity, we reiterate that while all terms of (<ref>) are calculated for every queue, a_ij(t) across a virtual queue will always be zero, because virtual queues do not generate ebits.
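As a minimal illustration of the per-queue evolution equation above, the following sketch (our own naming, not the paper's simulator) samples Poisson arrivals and binomial memory losses for a single queue and applies one scalar update step. As in the model, freshly arrived ebits are not exposed to losses during the current step; and whereas in the paper the scheduling decision is taken only after arrivals and losses have been observed, here the resulting incoming and outgoing totals are simply passed in for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_single_queue(q_ij, alpha_ij, eta, outgoing, incoming):
    """One time-step update of a single ebit queue.

    q_ij     -- ebits stored at the beginning of the step
    alpha_ij -- average generation rate (0 for a virtual queue)
    eta      -- storage-and-retrieval efficiency, exp(-dt / tau)
    outgoing -- swaps/consumptions scheduled out of this queue this step
    incoming -- swaps scheduled into this queue this step
    """
    a_ij = rng.poisson(alpha_ij)            # elementary ebit generation
    l_ij = rng.binomial(q_ij, 1.0 - eta)    # losses hit previously stored ebits only
    return q_ij + a_ij - l_ij - outgoing + incoming

# e.g. queue AB from the example above, holding 1 ebit with alpha = 1, eta = 0.9,
# and one swap A[B]C scheduled out of it:
# step_single_queue(1, 1.0, 0.9, outgoing=1, incoming=0)
```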
Moreover, it is quite rare for a physical pair to have incoming transitions, but not impossible: it may happen in a peculiar topology such as the ABC triangle with AB as an Alice-Bob pair and ACB as service route. In this edge case, transition A[C]B is incoming for a physical queue. Conversely, it should be stressed that the loss term ℓ_ij(t) is calculated in the same way for all queues, because ebit storage is always handled by memories at the network nodes. §.§.§ Vector Formulation A description of the whole system requires |ℰ̃| equations like (<ref>), ushering a natural transition to a model built with matrices and vectors. The first vector terms are 𝐪(t), 𝐚(t) and ℓ(t), whose N_queues entries correspond to the individual q_ij(t), a_ij(t) and ℓ_ij(t) values (the ordering is irrelevant as long as it is consistent). Moreover, since the effect of swapping on the queues is linear, it is possible to describe it by introducing the vector 𝐫(t), which has N_transitions elements — and a matrix 𝐌 with N_queues rows and N_transitions columns to translate the transition rates into their net effect on queues. The 𝐫(t) vector embodies the scheduling decision and it is a mere list of all the r_i[j]k terms, while the 𝐌 matrix introduces an efficient encoding of the network topology and routes: For each of its columns, associated to transition i[j]k, the 𝐌 matrix has -1 on the rows associated to queues (i,j) and (j,k), and +1 on the row associated to queue (i,k). All other terms are zero. An example of the 𝐌 matrix is given in table <ref> in order to provide the reader with intuition on how it's built. We remark that in all non-trivial examples that are analyzed in this work the 𝐌 matrix is automatically generated by our simulator. System-wide queue evolution can be restated as the following simple linear equation: 𝐪(t+1) = 𝐪(t) - ℓ(t) + 𝐚(t) + 𝐌𝐫(t). Looking at tab. <ref>, notice that, as this work only involves bipartite entanglement, all columns of M have two -1 terms and one 1. It would be possible to generalize this model to n-party entanglement by introducing multipartite queues and defining transitions that add to them by drawing from three or more bipartite queues to model a protocol similar to the ones shown in <cit.><cit.>. For the sake of simplicity and avoiding the severe scaling issues this generalization would create, we focus on bipartite states for now. This entails that every column of M sums to -1, i.e. every swap operation has the net effect of removing one pair from the system. §.§.§ Ebit Consumption Up to now, the scheduler can freely swap pairs in the network but there is no mechanism for users to employ the received pairs. The missing piece of the puzzle for ebit queues is consumption: whenever there is availability of entangled pairs across one of the final (𝐴𝑙𝑖𝑐𝑒_n,𝐵𝑜𝑏_n) pairs, the scheduler must be able to use the available pairs to serve requests, i.e. consume the distributed resource. This is implemented in the model by extending the matrix 𝐌 through concatenation of a negative identity block to obtain 𝐌̃ = [ 𝐌|-I_N_queues], and the 𝐫(t) vector to have N_transitions + N_queues components. What this extension achieves is to have a set of new transitions that only remove one pair from a given queue, modeling actual consumption of the distributed pair by the users. 
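As an illustration of how 𝐌 and its extension 𝐌̃ = [𝐌 | -I] can be assembled in practice, the following sketch builds them for the A-B-C-D example, with one possible queue and transition ordering chosen by us (not necessarily that of table <ref>), and applies the vectorized update; in the paper this construction is automated by the simulator, so the code below is only an assumption-laden sketch.

```python
import numpy as np

# Queues and one possible set of swap transitions compatible with the route
# A-B-C-D; the ordering of rows and columns is an arbitrary choice.
queues = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D"), ("A", "D")]
transitions = [("A", "B", "C"),   # r_A[B]C : AB + BC -> AC
               ("B", "C", "D"),   # r_B[C]D : BC + CD -> BD
               ("A", "B", "D"),   # r_A[B]D : AB + BD -> AD
               ("A", "C", "D")]   # r_A[C]D : AC + CD -> AD
row = {q: k for k, q in enumerate(queues)}

M = np.zeros((len(queues), len(transitions)), dtype=int)
for col, (i, j, k) in enumerate(transitions):
    M[row[(i, j)], col] = -1      # parent queue (i, j) loses one ebit
    M[row[(j, k)], col] = -1      # parent queue (j, k) loses one ebit
    M[row[(i, k)], col] = +1      # child queue (i, k) gains one ebit

# Extension with consumption columns: M~ = [M | -I]
M_ext = np.hstack([M, -np.eye(len(queues), dtype=int)])

def step_ebit_queues(q, losses, arrivals, r):
    """Vectorized ebit-queue update q(t+1) = q(t) - l(t) + a(t) + M~ r(t)."""
    return q - losses + arrivals + M_ext @ r
```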
Extending 𝐌 to 𝐌̃ empowers the scheduler but also adds a new facet to the decision problem: if a given queue has n pairs inside, the scheduler not only needs to balance swapping and storage for future use, it might also have to account for direct consumption of some of the available ebits. Putting all the terms together, the vector of ebit queues evolves as: 𝐪(t+1) = 𝐪(t) - ℓ(t) + 𝐚(t) + 𝐌̃𝐫(t). §.§ Demand Queues The ultimate purpose of a communication network is to serve the requests that users issue. Therefore, we need to include in our discussion a mechanism that allows to keep track of user demand: at any given time, every (𝐴𝑙𝑖𝑐𝑒_n, 𝐵𝑜𝑏_n) pair will issue a random number of demands and store them in a backlog called the demand queue. Every time a direct consumption operation is scheduled and a pair is consumed along link ij, a demand is contextually removed from the demand queue of link ij. This physically corresponds to the users measuring their qubits and “consuming” one ebit to realize the specific application they are implementing. Thus, it becomes natural to introduce another set of queues to describe the evolution of demands. Similarly to ebits, demands arriving to the system and being held for future service are modeled through queues: alongside every ebit queue, there exists a demand queue d_ij(t) that keeps track of the number of user-issued requests (as introduced in <cit.> for a single switch and generalized in this work for an arbitrary topology). At each time step, every demand queue d_ij(t) receives b_ij(t) demands, which for simplicity and generality are again modeled as a Poisson process with constant average value β_ij (as in the case of ebit generation, this term may be interpreted as an open interface to more refined traffic patterns). To maintain the model's uniformity, all edges belonging to ℰ̃ have a demand queue, but only the ones that are associated to an (𝐴𝑙𝑖𝑐𝑒_n, 𝐵𝑜𝑏_n) pair have nonzero arrivals. For all the other links, b_ij(t) = 0 ∀ t. Demand queues have a simpler evolution than ebit queues, since a demand is only a request for one ebit to be distributed across a given (𝐴𝑙𝑖𝑐𝑒, 𝐵𝑜𝑏) pair: demands enter their queues when they are received and exit when they are served. Demand service can be naturally controlled by the ij terms of the 𝐫(t) vector, i.e. the same terms that control ebit consumption. We therefore introduce the matrix 𝐍̃ = [0_N_queues× N_transitions| -I_N_queues] as a mean of interfacing with the consumption part of the 𝐫(t) vector without being affected by the scheduling one, which is irrelevant to demand queues. Demand evolution may therefore be stated as: 𝐝(t+1) = 𝐝(t) + 𝐛(t) + Ñ𝐫(t) By construction, the last N_queues components of the 𝐫(t) vector regulate both demand and ebit consumption: one demand always consumes one ebit. § SCHEDULING POLICIES §.§ General Overview After introducing all the components of the model, we move to describing scheduling policies and how they can be tested through our tools. We first outline what a scheduling policy is in the context of our work and follow up with subsections dedicated to three categories of scheduling policies: subsection <ref> describes the Greedy scheduler, i.e. the simplest policy we analyze in this work; subsection <ref> features a mathematical derivation of a quadratic family of scheduling policies; subsection <ref> shows how the quadratic schedulers can be modified to obtain a class of policies that perform similarly but require lighter computations. 
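Before turning to specific policies, the following sketch (again our own notation, not the paper's simulator) assembles the ebit- and demand-queue updates above into a single simulated time step, into which any scheduling policy — greedy, quadratic, or otherwise — can be plugged as a function returning the decision vector 𝐫(t).

```python
import numpy as np

def build_N_ext(n_queues, n_transitions):
    """N~ = [0 | -I]: only the consumption part of r(t) affects demand queues."""
    return np.hstack([np.zeros((n_queues, n_transitions), dtype=int),
                      -np.eye(n_queues, dtype=int)])

def simulation_step(q, d, alpha, beta, eta, M_ext, N_ext, policy, rng):
    """One time step: sample randomness, ask the policy for r(t), update queues.

    q, d        -- ebit and demand backlogs at the start of the step
    alpha, beta -- per-queue average ebit generation and demand rates
    eta         -- storage-and-retrieval efficiency of the memories
    policy      -- callable (q, d, a, l, b) -> integer decision vector r(t)
    """
    a = rng.poisson(alpha)                      # elementary ebit generation
    l = rng.binomial(q, 1.0 - eta)              # memory losses on stored ebits
    b = rng.poisson(beta)                       # new user requests
    r = policy(q, d, a, l, b)                   # scheduling decision
    q_next = q - l + a + M_ext @ r              # ebit queues
    d_next = np.maximum(d + b + N_ext @ r, 0)   # demand queues, never negative
    return q_next, d_next
```

Which of the arguments a, l and b a given policy is actually allowed to see is precisely the information-availability question discussed next.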
We define a Scheduling Policy as any arbitrary set of rules that at every time step t takes as its input some degree of information about the network state and returns a scheduling decision 𝐫(t), i.e. a scheduling vector as defined in the previous section. We first subdivide policies according to their localization degree: in distributed policies, the nodes themselves determine the operations to perform; in centralized ones, the system features a physical scheduler to which all the nodes communicate their status information and receive orders from. It is moreover possible to categorize policies in terms of information availability: we remark that in all policies that we analyze in the following we work on the assumption that (𝐪(t),𝐝(t)), i.e. the exact state of the system at the beginning of time step t, is known to all parties. However, since networks are distributed systems, it may happen that some crucial information (such as the realizations of the random processes a_ij(t) and ℓ_ij(t) for faraway queues) is not available or outdated when the scheduling decision is taken, introducing the notion of feasibility of a scheduling decision, which is detailed in the following paragraph. To start, assume a centralized scheduler, with complete access to information. As shown in sec. <ref>, the net effect of a scheduling decision 𝐫(t) on the ebit and demand queues is given respectively by 𝐌̃𝐫(t) and Ñ𝐫(t). We can set two bounds on the decision: * The net number of outgoing ebits from any given queue can never exceed what is physically available: -𝐌̃𝐫(t)≤𝐪(t) - ℓ(t) + 𝐚(t). * Along a queue, the number of consumed ebits should never be higher than the demands: -Ñ𝐫(t)≤𝐝(t) + 𝐛(t) We refer to those bounds as the feasibility bounds. If we now suppose (as will be the case for most of the scheduling policies presented hereafter) to have incomplete access to information, one or more of the random processes' realizations become inaccessible, making it impossible to exactly formulate the feasibility bounds. Despite it still being possible to design scheduling policies that perform well while only using educated guesses based on averages, it is not possible to guarantee that their decisions at each time instant will respect (<ref>) and (<ref>). Infeasibilities in general arise when n ebits are available in a queue and n'>n are scheduled out of it; they may be caused by a central scheduler relying on outdated information and scheduling more pairs than available, or by race conditions between two node-local schedulers that try to draw from the same queue. Infeasibile decisions themselves do not prevent a network from operating (performing more measurements than there are available ebits simply results in failure of the excess measurements), but infeasibility that is not properly managed may entail sensible degradation of performance. Therefore, a working quantum network stack also needs a specific discipline to manage infeasible orders. In the context of this work, conflicts are managed by assigning a random timeout to all measurement operations, and then executing them with a first-come-first-serve (FCFS) discipline. However, to avoid artificially degrading the performance, we also introduce a ranking system, shown in fig. <ref>, such that high-rank operations are always executed after low-rank ones. Were not such a system in place, it could happen that the scheduler ordered to feed q_AC through r_A[B]C = 1, exploit the new AC pair in r_A[C]D = 1 and finally serve one request with r_AD = 1. 
Each of these operations depends on the one before it, and if the execution order is not respected the system will serve one less AD request, possibly also wasting the intermediate links in the process. In practice, we assume there is a control layer in charge of sending a control signal to the nodes to apply their scheduling decision at the end of the time step. To ensure proper priority is respected, we subdivide the signal from the control layer into multiple sequential “apply” signals, one per rank (i.e. one per horizontal set of nodes in fig. <ref>). In the following sections we propose some examples of scheduling policies and provide detail on their degree of localization and information availability.

§.§ Greedy Scheduler
The Greedy Scheduler is a nontrivial, distributed scheduling policy that works with minimal communication between the nodes. It is a natural and immediate solution to the scheduling problem, and it is commonly found in classical network literature as a test case. Under a greedy scheduling policy, all nodes perform swapping operations as soon as they are available, regardless of user demand. When several competing operations are available, the node selects randomly. It should be noted that, although it disregards user demand, the greedy scheduler we examine is still routing-aware: if the route ABCD is to be served, the scheduler will never attempt "downward" transitions like A[D]C. The greedy scheduler's advantage lies in the fact that it requires no additional communication infrastructure on top of the one already employed by ebit generation and swapping, since the policy works on strictly local information. The downside to such simplicity is the low performance of this policy, which is only interesting as a lower bound for other policies to beat in order to justify the additional communication overhead required. Simulation data for the greedy policy, as well as a comparison with more refined schedulers, is provided in sec. <ref>.

§.§ Quadratic Scheduling
We now turn to mathematically stating and solving the scheduling problem through the lens provided by our framework. Before solving the problem and displaying results, we briefly describe our tools.

§.§.§ Drift Minimization
Lyapunov Drift Minimization (LDM) is a standard technique that is often used in classical network science to stabilize queuing systems<cit.>. We provide in this section a demonstration of how and why LDM works, and follow up with its application to quantum networks. As a first step, let V(𝐪(t), 𝐝(t)) = V(𝐬(t)) be an arbitrary non-negative, convex function V: ℕ^n→ℝ of the current state of the system, that we call the Lyapunov function. In short, choosing an arbitrary Lyapunov function and showing that it satisfies certain conditions will allow us to infer that the system is stable. This method entails a great simplification of the analysis of highly multivariate systems, because it reduces the problem to a scalar one: when V(𝐬(t)) is small, all the queues are small, and when it is big, at least one queue is accumulating. A common convention<cit.> in network science is to use the square norm of the queue backlog vector as V(𝐬(t)). After choosing a suitable Lyapunov function, the next step is to define its drift Δ V(𝐬(t)) as: Δ V(𝐬(t)) = 𝔼[V(𝐬(t+1)) - V(𝐬(t)) | 𝐬(t)].
Some intuition about this formulation can be gained by thinking of the Lyapunov function as a potential, akin to the electrical one in physics: the drift is positive if from t to t+1 the system evolves into an higher-potential, less stable state, and negative otherwise. It is possible to prove<cit.> that if Δ V(𝐬(t)) is negative on the entire state space of the system, except possibly for a compact subset of 𝐬(t) values, then the Markov chain describing the system is positive recurrent, i.e. the network is stable and user requests will not accumulate boundlessly. Such property is known as the Foster-Lyapunov criterion. Intuitively, the drift being positive only on a compact set means that there is a region of the state space in which the system evolves away from stability: since the drift is negative everywhere outside said region the system is always pushed back inside it, so that the Lyapunov function is never allowed to diverge. To visualize this, one may think of a charged particle in a potential well: even if it manages to exit in some way, it is eventually pushed back by the higher potential region. In its most general form, the Foster-Lyapunov criterion can be phrased as: Δ V(𝐬(t)) ≤ -f(𝐬(t)) + g(𝐬(t)), where f and g are two non-negative functions and the right-hand side is positive on a compact region of the state space of our system. Therefore, the practical goal is to find a bound for the drift and minimize it, in order to satisfy the Foster-Lyapunov criterion: min_R(t)∈ℛΔ V(𝐬(t)) ≤ -f(𝐬) + g(𝐬(t)) where ℛ is the set of all feasible scheduling policies. Notice that everything in our equation is defined only in terms of t and t+1: the optimization must be repeated at every time step because of the t dependence, and since the system only sees up to t+1 we call this process a myopic optimization. Solving the myopic problem at every time step can be proven<cit.> to be a suboptimal solution to the infinite horizon Markov Decision Problem of stabilizing the network at steady state. §.§.§ Application to the Framework We now move to the application of drift minimization to our quantum problem. We first remark that we only seek to stabilize demand queues, because ebit queues play the role of a resource, and their accumulation is not an indicator of the ability of the network to serve user demand(accumulating ebit queues merely amount to more ebits being available and more freedom to the scheduler, especially under unlimited memory assumptions). Additionally, we remark that experimental quantum networks will have a finite number of quantum memory slots at every node, enforcing a hard upper bound on 𝐪(t). To make our analysis apply to any arbitrary scheduling decision in ℕ^n, we refine our definition of 𝐝(t): 𝐝(t+1) = (𝐝(t) + 𝐛(t) + Ñ𝐫(t))^+, where (·)^+ is a shorthand for max(·,0). This is a failsafe measure that prevents the queues in our mathematical model from going negative even if a scheduling policy prescribes more service than there are requests. To apply drift minimization to our case, the first step is to choose a Lyapunov function that satisfies the requirements detailed above. As is customary in classical networks, we opt for the square norm of the queue backlog: V(t) = 1/2𝐝^T(t)𝐝(t). 
From there, we obtain the drift:
Δ V = 1/2𝔼[ 𝐝^T(t+1)𝐝(t+1) - 𝐝^T(t)𝐝(t) | 𝐝(t) ].
If we let 𝐝(t)+𝐛(t)=𝐝̃(t) and note that [max(x,0)]^2 ≤ x^2, we can bound the drift as:
1/2𝔼[ 𝐝^T(t+1)𝐝(t+1) - 𝐝^T(t)𝐝(t) | 𝐝(t) ] ≤ 1/2𝔼[ (𝐝̃(t)+𝐍̃𝐫(t))^T(𝐝̃(t)+𝐍̃𝐫(t)) - 𝐝^T(t)𝐝(t) | 𝐝(t) ] = 1/2[ 𝔼[𝐝̃^T(t)𝐝̃(t) | 𝐝(t)] - 𝐝^T(t)𝐝(t) + 𝔼[𝐝̃(t) | 𝐝(t)]^T𝐍̃𝐫(t) + 𝐫^T(t)𝐍^T𝐍𝐫(t) ].
Therefore, stabilizing the system amounts to finding the 𝐫(t) that minimizes the 𝐫(t)-dependent part, U(𝐫(t)) = 𝔼[𝐝̃(t) | 𝐝(t)]^T𝐍̃𝐫(t) + 𝐫^T(t)𝐍^T𝐍𝐫(t). Notice that U(0) = 0, which implies that min_𝐫(t) U(𝐫(t)) ≤ 0: the optimal decision can only make the bound on the drift more negative.
§.§.§ Fully Informed Quadratic Scheduler The derivation presented in the previous section yielded an expression that has a direct effect on stability: the more negative U(𝐫(t)) is, the more stable the network. In other words, the task of a scheduler in this context is to choose at every time step a decision 𝐫(t) such that U(𝐫(t)) is minimized. The natural tool to solve this problem is optimization. Assuming, as an initial ideal case, that all information about the network state is available (and therefore dropping the expectation from U(𝐫(t))), it is possible to formulate a central scheduling policy that at each time step solves the following quadratic integer program:
min 𝐰(t) ·𝐫(t) + 𝐫(t)^T𝐍^T𝐍𝐫(t) s.t. 𝐫(t)∈ℛ(t)
with weights 𝐰(t) = (𝐝(t) + 𝐛(t))^T𝐍̃. Since we assumed complete information availability, we can use as constraints the feasibility conditions mentioned in <ref> (d being a shorthand for the dimension of 𝐫(t)):
ℛ(t) = {𝐫(t)∈ℕ^d | -𝐌̃𝐫(t) ≤ 𝐪(t) - ℓ(t) + 𝐚(t), -𝐍̃𝐫(t) ≤ 𝐝(t) + 𝐛(t)}.
This constraint set binds the system so that, along every queue: * No more outgoing transitions are scheduled than there are stored ebits; * No more ebits are consumed than there is demand. Solving this problem at every time step will guarantee the best possible scheduling decision 𝐫(t) that can be obtained starting from a 2-norm Lyapunov function, even though such a policy carries a crucial flaw that hinders its experimental realizability: since this is a centralized policy, there must be a physical scheduling block that acts as an authority; all the nodes in the network submit local status information and receive a scheduling decision to apply. In the time it takes for the information to reach the scheduling agent and for the decision to be relayed back to the nodes and applied, the physical layer of the network has continued stochastically generating and losing ebits, so that when the decision finally arrives it is based on outdated information. Two possible solutions to this issue are addressed in the following, in the form of two policies that rely on less information being available.
§.§.§ Partially Informed Quadratic Scheduler One solution to the stale-information problem detailed in the previous section is to replace all unavailable information with sensible expectation values and thus implement a partially informed quadratic scheduler. We assume that for each queue, the scheduler has access to: * The average arrival rate α; * The loss parameter η; * The average demand rate β; * The system state (𝐪(t),𝐝(t)) at the beginning of each time step. This information set relaxes the requirements because the network can take a snapshot of its state at the beginning of each time step and exploit the leftover time to communicate it to the scheduler. The scheduler will in turn use the average parameters to build an expectation of the system's state at the end of the time step and take its decision based on that.
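For concreteness, a minimal sketch (not the authors' implementation) of how one such fully informed scheduling step can be handed to an off-the-shelf solver is given below. The array names (M_tilde, N_tilde, q, ell, a, d, b, w) mirror the symbols above and are assumptions of the sketch; for simplicity the quadratic penalty is built from the same 𝐍̃ that appears in the constraints, and the partially informed variant is obtained by replacing the exact right-hand sides with their expected values (η𝐪(t)+α and 𝐝(t)+β).

```python
import numpy as np
import cvxpy as cp

def quadratic_schedule(w, M_tilde, N_tilde, q, ell, a, d, b):
    """One fully informed quadratic scheduling step:
    minimize w.r + r^T N^T N r over the feasible integer decisions r."""
    dim = N_tilde.shape[1]
    r = cp.Variable(dim, integer=True)
    objective = cp.Minimize(w @ r + cp.sum_squares(N_tilde @ r))
    constraints = [
        r >= 0,
        -M_tilde @ r <= q - ell + a,   # no more outgoing transitions than stored ebits
        -N_tilde @ r <= d + b,         # no more ebits consumed than there is demand
    ]
    # Integer variables plus a quadratic objective require a MIQP-capable
    # backend (e.g. SCIP or Gurobi) to be installed for CVXPY.
    cp.Problem(objective, constraints).solve()
    return np.rint(r.value).astype(int)
```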
Note that if these requirements are still too tight, it is always possible to formulate a policy that knows the exact state of the system with n time steps of delay, or even hybrid localized policies where every node knows the state of the surrounding queues with a delay that depends on their physical distance. Let ℐ={𝐪(t),𝐝(t),α,β,η} be the set of available information at time t. To formulate our partially informed policy, we re-use the (<ref>) problem, but change the constraint set to: ℛ(t) = {𝐫(t)∈ℕ^d |. .-𝐌̃𝐫≤𝐪(t) - ℓ(t) + 𝐚(t)| ℐ(t), . . -𝐍̃𝐫(t)≤𝐝(t) + 𝐛(t)|ℐ(t)}. Which in practice reads: ℛ(t) = {𝐫(t)∈ℕ^d | . .-𝐌̃𝐫≤η𝐪(t) + αI_N_queues×1., .-𝐍̃𝐫(t)≤𝐝(t) + βI_N_queues×1}. This class of partially informed policies still outperforms greedy ones but removes the stale information problem. It should be stressed that, since this policy relies on a guess made using averages, it is not guaranteed that its decisions will satisfy the feasibility conditions. Moreover, since the policy was formulated by manually modifying the result of LDM, it is by nature a suboptimal policy. The performance of this policy is reviewed in sec. <ref>. §.§.§ Node-Localized Quadratic Scheduler As mentioned before, information availability is one of the main points to consider when choosing a scheduling policy: a well-designed policy must be able to take sensible decisions while leveraging the available information to the best extent possible. Following this idea, we propose a distributed, optimization-based original policy and subsequently benchmark it to assess its expected performance. Since we are describing a distributed policy, we shift our point of view to that of a node in the network: we assume that every node i in the network has access to all relevant average values, which can be communicated before the network is booted or measured in a rolling average fashion. Additionally, let node i have access to the queue state of the full network at the start of each time step (𝐪(t),𝐝(t)), where the same remarks we gave in the previous section apply. Finally, due to how entanglement generation and swapping are implemented, node i should have access to how many qubits are stored in its memory slots and with whom they are entangled, which means that node i also knows exact arrivals and exact losses for all the queues connected to it, both physical and virtual, and can exploit this additional information when taking a scheduling decision. To formalize this, let 𝒞^i be the set of queues connected to node i, i.e. the set of edges e in the extended set ℰ̃ such that e is connected to node i. Using this concept, we can define a node-local version of the information set ℐ^i(t) which contains the entirety of the information available to node i: ℐ^i(t) = {𝐪(t),𝐝(t),η,β,α,a_e(t),ℓ_e(t),b_e(t), ∀ e∈𝒞^i}, where a_e(t),ℓ_e(t) and b_e(t) correspond to the additional local exact information that is unique to each node. Instead of phrasing a global optimization problem, node i may now formulate an individual problem and solve it to obtain a strictly local scheduling decision to apply directly, without waiting for a discrete scheduler to send back a decision. To do so, the node builds all the relevant quantities (backlogs, arrivals, losses) with exact information from the queues it is connected to and expectation values from the other queues. The i-localized quadratic integer program can thus be written as: min𝐰^i(t) ·𝐫(t) + 𝐫^T(t)𝐍^T𝐍𝐫(t) s.t. 
𝐫(t)∈ℛ^i(t) where the weights are given by 𝐰^i(t) = 𝐝(t) + 𝐛(t)|ℐ^i(t)^T𝐍̃, In accordance with its previous definition, the set ℛ^i(t) of all possible scheduling decisions 𝐫(t) at time slot t localised at node i is defined as: ℛ^i(t) = {𝐫(t)∈ℕ^d |. .-𝐌̃𝐫≤𝐪(t) - ℓ(t) + 𝐚(t)| ℐ^i(t), . . -𝐍̃𝐫(t)≤𝐝(t) + 𝐛(t)|ℐ^i(t)}, where each individual expected value will locally resolve to a form similar to eq. <ref> (i.e. all exact values) for queues that are connected to node i and to eq. <ref> (i.e. all averages) for queues that are not. As an example, node A will be able to formulate a problem that includes the constraint -𝐌̃_AB,·𝐫≤ q_AB(t) - ℓ_AB(t) + a_AB(t) (where 𝐌̃_AB, is row AB of 𝐌̃) because queue AB is directly connected to it, but will have to resort to -𝐌̃_CD,·𝐫≤η q_CD(t) + α for queue CD, because it has no up-to-date information about it. The locally informed quadratic scheduler provides a practically implementable alternative to the globally informed policy while still retaining good enough performance. We remark once again that, while the centralized fully informed method came from exact calculations, this scheduler was modified and is thus partially heuristic. Therefore, while the decisions taken by the fully informed scheduler were optimal, the ones taken by the localized one are not: one of the tasks of performance analysis is to characterize this margin of suboptimality in order to gauge how close a distributed scheduler can get to its centralized, idealistic variant. §.§ Max Weight Scheduling The quadratic policies that have been detailed in the previous section are valid solutions to the scheduling problem in quantum networks. However, situations might arise in which computational complexity is a stricter constraint than network performance. To accommodate such cases, we present in this section a class of policies that perform almost as well as the quadratic ones, for a fraction of the computational cost. Looking at the policies presented until now, we notice two interesting points: * The objective function features a linear term that depends on queue length plus a quadratic penalty that does not; * The linear terms are reminiscent of the objective function for the Max Weight<cit.> policy, an extremely well-established result of classical network theory. It is therefore natural to propose a class of semi-heuristic scheduling policies derived by taking our quadratic objectives and suppressing the quadratic penalty, which does not depend on the queue backlog. For brevity, we explicitly formulate only the fully informed variant of the Max Weight scheduler. The partial and local information quadratic schedulers can be turned to their linear variants following the same steps. The fully informed Max Weight problem is obtained by simply suppressing the quadratic term from (<ref>): min𝐰^T(t) ·𝐫(t) s.t. 𝐫(t)∈ℛ(t), and solving it with the same constraints as (<ref>). The partial and local information policies may be constructed in the same way, by suppressing the quadratic term from (<ref>) and using the constraint sets (<ref>) and (<ref>) respectively. The performance analysis for the globally, partially and locally informed linear schedulers is provided in section <ref>. § NUMERICAL ANALYSIS In this section, we give an overview of how our simulation tool works and then provide results for the numerical analysis of all the proposed schedulers. 
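Before moving to the simulator, the computational saving promised by the linear policies can be made concrete with a companion sketch of the fully informed Max Weight step (again illustrative only; array names follow the previous sketch). Because the quadratic penalty is dropped, the mixed-integer linear solver shipped with SciPy (HiGHS) is sufficient, whereas the quadratic step required a dedicated MIQP backend.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def max_weight_schedule(w, M_tilde, N_tilde, q, ell, a, d, b):
    """One fully informed Max Weight step: minimize w.r over the same feasible set,
    i.e. the quadratic program with the r^T N^T N r penalty suppressed."""
    dim = len(w)
    A = np.vstack([-M_tilde, -N_tilde])           # stack both families of constraints
    upper = np.concatenate([q - ell + a, d + b])  # and their right-hand sides
    res = milp(c=np.asarray(w, dtype=float),
               constraints=LinearConstraint(A, ub=upper),
               integrality=np.ones(dim),          # every component of r is integer
               bounds=Bounds(lb=0))               # r >= 0
    return np.rint(res.x).astype(int)
```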
§.§ Simulator Architecture All the results shown in this work were obtained through an ad-hoc simulator implemented in Python, relying on the <cit.> solver for the optimization calculations and <cit.> as a graph backend. In the following, we provide a quick breakdown of how our simulator works, from the point of view of a user that is not necessarily experienced with writing code. Interested readers may find more information on the simulator's GitHub repository <cit.>. From a black-box perspective, the focus of the code design phase of our work was on an object-oriented model of the network system that is as modular and layered as possible. The motivation driving this approach was that an ideal version of the controlling code should be abstract enough not to be aware whether it is driving our model, another more refined simulator or even a real network. In the following, we give a brief rundown of the kind of parameters that a user of our framework and simulator may expect to tune. The simulator's input files are composed of two sets of ingredients for the user to provide: the first set of parameters is devoted to the generation of the network topology, the choice of service pairs and demand rates. Users are free to choose one of the topologies we propose in this work (with tunable parameters) or provide an entirely custom network graph. After selecting the topology, the user selects the set of scheduling policies that the simulator will analyze. As before, it is possible to select one of the policies we analyzed here or provide a custom one. The code provides seamless access to all the information we used in our policies through simple specification of an “information availability” parameter. The second set of input values is related to physics and low-level simulation parameters, enabling fine-tuning of generation rates across physical links and losses at nodes, but also number and duration of the time steps. A set of parameters related to the optimization of the simulator's performance concludes the user inputs for our code. A discussion of these parameters is out of the scope of this paper as they are only relevant to raw computational performance, but can be found in the full code documentation of the simulator on GitHub<cit.>. §.§ Results To avoid excessively prolonging this section, we show in fig. <ref> that the quadratic schedulers provide a negligible, if any, increase in performance at the cost of a major increase in computational complexity (quadratic optimization calculations are much more taxing than linear ones). They were therefore omitted from the complete discussion of numerical results, that only shows the greedy scheduler and the three linear ones. The main goal of the following analysis is to showcase how the proposed scheduling policies affect the performance of quantum networks of various topologies, both deterministic and randomly generated. The topologies on which our analysis was run, shown in fig. <ref>, are a complete 5x5 grid, a 6x6 grid with some randomly removed nodes, and two realizations of the Watts–Strogatz<cit.> and Erdős–Rényi<cit.> models of 25 nodes each. Since our 𝐌 matrix is built from the static routes that connect the service pairs, building a nontrivial example requires more than two routes. To obtain such an example, we increase the number of users we consider: for each topology, we run our simulation with ten user pairs, of which two are manually fixed (red and blue in fig. 
<ref>) and eight are randomly selected at the beginning of each simulation run to mimic different traffic configurations (green in <ref>). Every user pair is connected, when possible, by two semi-distinct routes. Since routing is outside the scope of this work, for we simply take the shortest path connecting each user pair, remove the edges that compose it with a given tunable probability, and then take the shortest path in the newly obtained graph as a second route, under the assumption that in a real application scenario users will provide sensibly computed static routes. We sweep the demand rate of the two manually selected pairs, while fixing the random ones to a constant load value L, and then average together the results of ten runs to remove the bias that one particular parasitic pairs set may entail. Fig. <ref> provides a showcase of all the results that we obtain from our simulation: given the complete 5x5 grid topology shown in fig. <ref> and the fully informed Max Weight scheduler, we select the four corners of the grid as the two main user pairs, randomize the parasitic pairs and run the simulation, displaying all outputs. Since tracing the capacity region of a network requires gauging its stability, we rely on fig. <ref> as an aid to clarify our definition of this crucial concept. In the context of dynamical systems, stability may be defined in several different ways, depending on the amount of mathematical rigor required. The established definition states that a system of queues is stable if the time it takes for the cumulative queue length to return to zero is finite on average (i.e. the queues keep returning to zero, regardless of how far they stray). Of course, such a definition loses meaning in a finite-time context, because there is no way to tell whether a system would turn back to zero if left running for a longer wall time, even though it looks unstable over a finite time window. However, arguments can be made to justify the usage of such a notion in a context such as ours. First of all, it is safe to say that a queue whose length is constantly zero is stable (This is apparent from fig. <ref>, plot in the (0,0) cell, which depicts the temporal trend of the total demand, with all demand rates set to zero). Secondly, we may state that a queue that has Poissonian arrivals and is never depleted will accumulate in a roughly linear fashion, and it will surely be unstable. Thirdly, we claim that the stability front of a network system is a Pareto boundary: if a given load L = (l_1,l_2,...l_i,...,l_n) cannot be served by the network and is therefore outside its stability region, then all higher loads L' = (l_1,l_2,...l'_i,...,l_n) such that l'_i > l_i are unstable (<ref>,upper-right cluster of linear plots, depicting total demand in a high-load scenario). These considerations make a finite-time simulation slightly more insightful: if the queue length returns to zero several times during the simulation window, the system is likely stable. If the system shows a clear linear trend, there is high possibility that it is not. If a cluster of points all show a linear trend, the possibility of instability further increases. Moreover, to conform with standard practice in the classical network field, we also include as a performance metric the average demand queue length, plotted as a colormap in the background of fig. <ref>'s cells. 
This is the metric on which we focus for the rest of the analysis, since it yields a more easily legible graph of the stability of a load point and is therefore more suitable for high-resolution plots and/or comparison of a large number of results. Another reason why we choose to employ the average queue length because a color map is that it provides a visual approximation of the capacity region of the network we are considering. To give a sense of scale, we complement our outputs with the maximum excursion of the cumulative demand backlog, shown in the top-left corner of every cell. Running our analysis over all topologies and schedulers and displaying the average demand backlog, we obtain four arrays of plots that show the performance of our network as a function of the information granted to the scheduling policy (Greedy to Fully Informed Global) and the load on the parasitic pairs, shown in fig. <ref>. From these arrays of plots, insight on several levels may be obtained. Firstly, looking at all the plots for any given topology, we observe that changing the scheduler entails radical change on the capacity region of a quantum network, providing proof that not only the scheduling problem is an interesting one to formulate in the context of quantum networking, but its solution brings non-negligible performance margins to the operation of a quantum network. Another piece of information that may be gathered resides in the shapes of the stability margin: when the dark region is not shaped like a rectangle it means that the two plotted pairs are in direct competition, as increasing demand along one of the axes reduces the amount of demand that can be served along the other one. To an end user employing our tool for network design, this would mean that the network is bottlenecked by routing, since there is a set of routes across which the scheduler must balance service to two or more competing commodities. Another point that can be made from these results comes from looking at the difference between the fully informed global scheduler and the local ones: as mentioned before, the fully informed Max Weight scheduler can be interpreted as a performance upper bound for a Max Weight policy. Therefore, when designing an original scheduler, one may gauge its performance by comparing stability regions with the fully informed scheduler. There is a noticeable difference between FI and LI but it may be deemed acceptable because of the information trade-off: the region still has the same shape and, although smaller, is still comparable to the upper bound, meaning that the locally informed policy we are proposing performs very well in this scenario. Conversely, a rectangular shape (e.g. <ref>, Fully Informed scheduler column) is an indicator that the two main pairs we selected are not directly competing over a shared bottleneck. This does not necessarily mean that the network is not congested: traffic from the parasitic pairs is still stressing the network (as demonstrated by the reduction in size of the stability region when going up along the parasitic load axis) and requiring careful scheduling decisions. § LIMITATIONS OF THE FRAMEWORK AND FUTURE OUTLOOK In this section, we discuss the main limitations and open questions in our model, and propose some seed ideas for future directions. The first limitation to talk about is the modelization of strictly quantum imperfections such as decoherence, that degrade the quality of a quantum state without necessarily meaning the state is lost. 
Despite being well aware of the paramount importance of noise in quantum modeling, the history of the classical Internet shows that a successful large-scale network infrastructure is best thought of in terms of separate functional layers, and a layered architecture has already been proposed for a prospective future quantum internet <cit.> that effectively separates the Link Layer, where quantum error correction should be implemented, from the Network Layer, which is the scope of our work. While we are aware that in real implementations, especially initial ones, theoretically separate layers leak and blend with each other, the Quantum Internet should eventually converge to a well defined network stack, making it redundant to treat noise in the same layer as scheduling. Thus, while we remain interested in an expansion of our work that treats quantum imperfections, the lack of explicit state quality modeling does not make our work irrelevant. A similar concern could be raised for the memory at the network nodes: despite this being another issue that is very close to hardware, its integration with scheduling policies would seem crucial because it could radically change how a scheduling decision is taken: if a node only has a finite number of memory slots, the scheduler would have the additional constraint of free space (or lack thereof, in some cases having to “waste” ebits in order to free up memory). As a matter of fact, a similar problem has been analysed over a single switch in <cit.> and <cit.>, showing that the memory requirements of an isolated quantum switch are quite low (on the order of 5 slots) to achieve performance comparable to that of a switch with unlimited memory slots, making the memory problem not as concerning. Moreover, <cit.> formulates the problem of exploiting limited memory slots and develops a Max-Weight memory allocation policy for quantum nodes that could be adapted to our scenario. Furthermore, it is possible to look at the memory problem from a different direction: while a solution inside our framework could in principle be to add compound constraints to the optimization problems, we stress that results such as fig. <ref> (maximal excursion numbers) gauge the accumulation of total demand in a stable network, effectively providing an upper bound for memory requirements in the design of a real quantum network system. The third limitation of our work is how the framework scales: The fact that the number of queues we need to account for grows quadratically with the number of nodes in the network entails quick growth of the 𝐌 matrix, which makes the integer programs required by several policies presented here increasingly complex. While this is not as much of a problem currently as it was in the past decades, it is still an issue that is worth closely investigating, perhaps to find sub-optimal scheduling strategies that require only a subset of the extended edge set (akin to an overlay network, as demonstrated in <cit.>). We note here that easing scaling concerns would also enable a future extension of our framework to multipartite entanglement: as mentioned in the beginning, an extension in this direction would require the definition of new multipartite virtual queues, together with ad-hoc transitions that interface them with the bipartite ones, greatly increasing the overall number of queues and therefore the problem's complexity. 
Finally, it would be interesting to delve into other physical imperfections, such as finite speed of communication between nodes, which entail a stricter definition of what information is local and accessible to a node at a given time. One interesting implication of such analysis would be the case in which only one of the qubits in an ebit is lost, and what happens if the loss is not communicated before other swapping operations are undertaken, i.e. the error propagates along the swapping route. All these considerations would require a more refined physical model, which would in turn imply revisions to our mathematical framework, but should not be excessively difficult to include in the numerical part of our discussion: the simulator code was written from the ground up in order to provide a simpler and more agile contribution, but it was designed with particular attention to keeping a layered and modular structure that should be reasonably adaptable to well-established quantum network simulation packages such as NetSquid<cit.> or QuISP<cit.>. § CONCLUSIONS In this work, we presented a general framework that allows to formulate and solve the scheduling problem in general, lossy memory-endowed quantum networks in a dynamical way. We then integrated our framework with Lyapunov Drift Minimization in order to mathematically derive an optimal quadratic scheduling policy for quantum networks and proposed several other suboptimal policies with various advantages. Finally, we showcased how our framework may be exploited by people interested in policy design to benchmark and fine-tune a general quantum network's performance under arbitrary scheduling policies. Despite a sizable amount of work still needing to be tackled before a collective quantum network science exists, the promising results we presented could eventually become one of many assets in the quest for the Quantum Internet. IEEEtran
http://arxiv.org/abs/2307.04672v1
20230710161831
Black-hole powered quantum coherent amplifier
[ "Avijit Misra", "Pritam Chattopadhyay", "Anatoly Svidzinsky", "Marlan O. Scully", "Gershon Kurizki" ]
quant-ph
[ "quant-ph", "gr-qc", "hep-th" ]
[email protected] AMOS and Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel [email protected] AMOS and Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel [email protected] Texas A& M University, College Station, Texas 77843, USA [email protected] Texas A& M University, College Station, Texas 77843, USA Baylor University, Waco, Texas 76798, USA Princeton University, Princeton, New Jersey 08544, USA [email protected] AMOS and Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel Atoms falling into a black hole (BH) through a cavity are shown to enable coherent amplification of light quanta powered by the BH gravitational vacuum energy. This process can harness the BH energy towards useful purposes, such as propelling a spaceship trapped by the BH. The process can occur via transient amplification of a signal field by falling atoms that are partly excited by Hawking radiation reflected by an orbiting mirror. In the steady-state regime of thermally equilibrated atoms that weakly couple to the field, this amplifier constitutes a BH-powered quantum heat engine. The envisaged effects substantiate the thermodynamic approach to BH acceleration radiation. Black-hole powered quantum coherent amplifier Gershon Kurizki August 12, 2023 ============================================= Introduction: Imagine a scene that can play out in a science fiction movie (Fig. <ref>): a spaceship is helplessly falling into a black hole (BH) because its fuel supply is dwindling and does not suffice for a breakaway maneuver. Luckily, its SOS message has been received by a faraway spaceship, which is equipped with a powerful laser that can transfer coherent energy to its distressed sister ship. Unlike heat, coherent energy transfer is associated with ergotropy <cit.> that can perform mechanical work <cit.> to propel the ship. Unfortunately, coherent energy transfer would have poor efficiency due to diffraction and BH gravitational lensing over large distances between the ships. Yet a revolutionary technique may still rescue the ill-fated spaceship: the laser signal can be coherently amplified in a novel fashion by atoms in free fall through a cavity. Namely, the amplification can only occur through excitation of the free-falling atoms by BH Hawking radiation redirected by an orbiting mirror. The envisioned amplification can strongly enhance the coherent power transfer to the falling spaceship, providing it with enough thrust to free itself from the grip of the BH. What is the theoretical basis for this fantastic story? It is the mind-boggling idea that the Unruh vacuum <cit.> yields thermal Hawking radiation near the BH horizon, but cannot directly excite atoms falling into the BH, as opposed to a bright star that can directly heat up falling atoms in its vicinity. By contrast, near a BH the free-falling atoms feel the heat only if the Hawking radiation is redirected by a mirror placed on a stable orbit around the BH (Fig. <ref>). Then, counter-intuitively, BH gravity can act on atoms as a heat bath, although the process is purely unitary <cit.>. For atoms falling into a BH during their passage through a cavity, a perturbative (master-equation) approach maps this BH-gravitational problem onto that of a quantum heat engine that acts as a two-level maser/laser without population inversion coupled to two baths at different temperatures <cit.>. 
Here the piston of the heat engine is the signal laser field whereas the BH scalar field modes redirected by a mirror replace the hot bath as the energy source and the cold bath as the entropy dump of the engine. This uniquely quantum mechanical manifestation of anomalous, gravitational vacuum effect unequivocally demonstrates the validity of the thermodynamic approach to acceleration radiation near a BH. Another intriguing limit is the strong-coupling field-atom regime mediated by the BH vacuum state, a novel manifestation of gravity-induced quantum electrodynamics. Analysis: A cloud of two-level atoms (TLA) initially in their ground state, is freely falling towards the BH through a cavity. The TLA are coupled to the gravitational field of the BH by a quantized scalar field <cit.> Φ̂(r,t)=∑_i[â_iϕ _i(r,t)+H.c.], where H.c. stands for the Hermitian conjugate, index i labels the field modes, r=(r,Θ ) denotes the radial and angular coordinates, and â_i is the i-th mode annihilation operator. The scalar field is coupled with the TLA as depicted in the space-time diagram (Fig. <ref>b). An atom freely falling into a non-rotating BH while still above the horizon can (see App. <ref>) be resonant with the following scalar field modes (in the Kruskal-Szekeres coordinates) ϕ _1Ω(T,X)=e^-iΩ( T-X) , ϕ _2Ω(T,X)=( T+X) ^-iΩθ (T+X), where θ is the step function and Ω >0. From the perspective of the free-falling atom the modes (<ref>)-(<ref>) harmonically oscillate as a function of the atom's proper time with positive frequency. The form of the outgoing mode (<ref>) and the ingoing mode (<ref>) derived here (App. <ref>) is, as shown below, key to our ability to employ the BH as a source of useful quanta. The free-falling atoms may resonantly interact with the outgoing plane-wave field ϕ _1Ω and with the ingoing Rindler field ϕ _2Ω. However, in the Unruh vacuum, which by consensus represents the state of the evaporating BH field <cit.>, there are no photons in the modes ( <ref>) and (<ref>). Consequently, free-falling atoms cannot become excited in the Unruh vacuum (see App. <ref>). Instead, we might consider exciting these atoms by the outgoing Rindler photons, which fill the Unruh vacuum and constitute the Hawking radiation <cit.>. They thermally populate the modes ϕ _3Ω(T,X)=( X-T) ^iΩθ (X-T). Yet, it can be shown (App. <ref>) that these outgoing Rindler photons cannot excite free-falling atoms. Is there another way to excite these atoms by BH radiation? Indeed, there is: we show that free-falling atoms can be excited by redirecting the outgoing Rindler photons (Hawking radiation) towards the BH via a mirror. The mirror should orbit the BH at a fixed radius r=r_0. To be stable, the mirror orbit should lie at r > 3r_g, r_g being the gravitational radius, but otherwise the value of r does not affect the result (see below). In the presence of such a mirror, the mode function satisfying the boundary condition ϕ (t,r_0)=0 at the mirror surface acquires a new, advantageous form ϕ (T,X)=( X-T) ^iΩ_ϕ _c mode-e^iΩ( r_0+ln (r_0-1)) ( T+X) ^-iΩ_ϕ _h mode. This hitherto unexplored scalar field mode has two parts: the outgoing Rindler photon mode (the first term on the rhs) and a part reflected from the mirror into the ingoing Rindler mode (the second term on the rhs). This ingoing Rindler mode acts as a hot bath mode, denoted as ϕ _h(r,t) with frequency Ω =Ω _h, that can excite the free-falling atom. The outgoing Rindler modes act as a cold-bath (vacuum state) mode denoted as ϕ _c(r,t). 
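As a short consistency check, one can verify directly that this mode vanishes on the mirror surface by using the Kruskal–Szekeres relations quoted in the Appendix (in the dimensionless units r_g=1 used there), X-T=e^{-1/2(t-r-ln(r-1))} and T+X=e^{1/2(t+r+ln(r-1))} for r>1: at r=r_0 the first term becomes (X-T)^{iΩ}=e^{-iΩ(t-r_0-ln(r_0-1))/2}, while the second becomes e^{iΩ(r_0+ln(r_0-1))}(T+X)^{-iΩ}=e^{iΩ(r_0+ln(r_0-1))}e^{-iΩ(t+r_0+ln(r_0-1))/2}=e^{-iΩ(t-r_0-ln(r_0-1))/2}. The two terms coincide at r=r_0, so their difference ϕ(t,r_0) vanishes, as required by the boundary condition at the mirror.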
We wish to show that the redirected Hawking radiation can enable coherent amplification of a signal mode. The complete field-atom interaction Hamiltonian has then the form H_int=∑_ig_hiϕ _hib̂^†â_hi|e⟩⟨ g|+∑_jg_jϕ_cjĉ_j|e⟩⟨ g|+H.c. Here b̂ stands for the signal-mode annihilation operator, â _hi is the i-th mode annihilation operator of the hot bath mode ϕ_hi of the redirected Hawking radiation, and ĉ_j for that of the j-th cold bath mode ϕ_cj of the redirected Hawking radiation (Eq. (<ref>)). The atom-scalar field interaction (first term on the rhs of Eq. (<ref>)) represents an anti-resonant Raman process whereby a scalar-field quantum in the i-th redirected Hawking-radiation mode ϕ _hi is converted into a signal photon by the atomic transition between the ground (g) and excited (e) states, with coupling strength g_hi. The interaction Hamiltonian of the atom with the cold bath ϕ_cj involves the same atomic transition operator |e⟩⟨ g| with coupling strength g_cj. Our goal is to maximize the energy gain of the signal mode in a non-passive (ergotropy-carrying) form, capable of delivering work <cit.>. Strong TLA-BH coupling: Here we assume that while traversing the cavity, the atom is strongly coupled to one redirected Hawking radiation mode ϕ _h with a coupling strength g_h that overwhelms the coupling strengths g_cj to all cold bath modes. This scenario corresponds to a high-Q cavity which allows for strong coupling of a single Hawking radiation mode to the atom. To render the problem single-mode, we choose the TLA resonant frequency ω _0, the cavity frequency ω _c, the signal ν and the Ω _h frequency of the redirected mode ϕ _h in (<ref>) such that ν≈Ω _h-ω _0. Then the interaction Hamiltonian in Eq. (<ref>) simplifies to H_int=g_hϕ _hb̂^†â_h|e⟩⟨ g|+H.c. The basis for the combined atom-field energy states can then be |1⟩ = |g,n_s,n_h⟩ , |2⟩ = |e,n_s+1,n_h-1⟩ , where |n_s⟩ and |n_h⟩ are Fock states of the signal mode and the BH ϕ _h mode respectively. At short times, where first-order transitions between the atom and the field modes predominate, the subspace in Eq. (<ref>) is decoupled from other subspaces, whilst keeping the total number of excitations constant. Let us assume that the atom and the signal mode are initially in the ground and Fock state |n_s⟩ respectively. Thus, the initial state of the combined system is ρ^i= |g⟩⟨ g| ⊗ |n_s⟩⟨ n_s|⊗ρ_T_c⊗ρ_T_h , where ρ_T_c and ρ_T_h are the thermal field states at temperature T_c and T_h, respectively. In this problem, T_c = 0. Then the initial state is a mixture of the pure states |g⟩ |n_s⟩ |n_h⟩ with probabilities p_n_h=e^-β _hΩ _hn_h/Z_β _h, where β _h=1/k_BT_H is the effective BH (Hawking) temperature <cit.>. The final-states of the atom and the signal mode after their unitary evolution over time t are then (App. <ref>) ρ_atom^f= |u|^2|g⟩⟨ g|+ |v|^2 |e⟩⟨ e|, ρ_s^f= |u|^2|n_s⟩⟨ n_s|+ |v|^2 |n_s+1⟩⟨ n_s+1| where u = e^-1/2 i δ t(cos(1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))+i δsin(1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))/√(δ ^2+4 g _h^2 ϕ _h^2)), v = -2 i g_h ϕ _h e^-1/2 i δ tsin( 1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))/√(δ ^2+4 g_h^2 ϕ _h^2), δ = ω_0 + ν - Ω_h. The work capacity (ergotropy) change following the interaction in the cavity is Erg(ρ _s^f)-Erg(ρ _s^i)=ν (|v|^2-|u|^2), which is maximized for |v|=1, |u|=0. For the choice δ =0, g_h t|ϕ _h|=(2m+1)π /2, where m is an integer, the atom is transferred to the excited state and the signal adds a photon to its mode, ρ _s^f=|n_s+1⟩⟨ n_s+1|. The highest amplification per atom is achieved for n_s = 1. 
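The algebra behind these expressions is easy to verify symbolically; the snippet below is an illustrative check only, with symbols mirroring u, v, δ, g_h and ϕ_h above (the latter treated as a real, positive amplitude). It confirms both the unitarity relation |u|^2+|v|^2=1 and the full population transfer at δ=0, g_h t|ϕ_h|=π/2.

```python
import sympy as sp

t, g_h, phi_h = sp.symbols('t g_h phi_h', positive=True)
delta = sp.Symbol('delta', real=True)
root = sp.sqrt(delta**2 + 4*g_h**2*phi_h**2)

u = sp.exp(-sp.I*delta*t/2) * (sp.cos(root*t/2) + sp.I*delta*sp.sin(root*t/2)/root)
v = -2*sp.I*g_h*phi_h*sp.exp(-sp.I*delta*t/2) * sp.sin(root*t/2)/root

# Unitarity of the two-level evolution: |u|^2 + |v|^2 = 1
print(sp.simplify(u*sp.conjugate(u) + v*sp.conjugate(v)))   # -> 1

# On resonance (delta = 0) the transfer amplitude is |v| = |sin(g_h*phi_h*t)|,
# so the choice g_h*phi_h*t = pi/2 (m = 0) gives |v| = 1 and |u| = 0.
v_res = sp.Abs(sp.simplify(v.subs(delta, 0)))
print(v_res.subs(g_h*phi_h*t, sp.pi/2))                     # -> 1
```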
The efficiency of work extraction by the signal from the BH is then η = ν/ω _0+ν. This efficiency can closely approach the Scovil-Schulz-Dubois (SSD) bound of quantum heat engine/amplifiers  <cit.> ν /(ω _0+ν ). In turn, the SSD efficiency η_ SSD can approach the Carnot efficiency η_C if T_h/T_c≳Ω_h/ω_c. However, as T_c → 0, the atom resonant frequency must approach zero in order to attain the Carnot efficiency, which is unfeasible. The maximal average power of work extraction in this regime is given by Ẇ= 2 g_h |ϕ_h|ν/(2m+1)π, where the maximal power corresponds to m=0. Spectacular power boost can be obtained in the Dicke regime of N atoms that are collectively coupled to the hot bath mode. Following <cit.>, we can have Ẇ→ N Ẇ. Weak TLA-BH coupling: Let us now consider the opposite limiting regime of a cavity with insufficiently high Q, such that its leakage to cold bath modes ϕ_c outside the cavity is stronger than the coupling of the atom to the Hawking radiation mode ϕ_h. In this regime, the atom that is energized by the redirected Hawking radiation reaches a steady state (equilibrates) under the action of the cold bath while in the cavity. Hence, the process is analogous to our continuously operating heat-engine maser based on a TLA <cit.>. Here, the atom together with the signal at frequency ν are coupled to a hot field mode near resonantly, but the coupling strength g_h is assumed to be weaker than the coupling to the cold modes g_cj. The atom then reaches a steady state under the action of the cold bath (App. <ref>). The atom-scalar field interaction obeys the Raman Hamiltonian that in the interaction picture reads (cf. Ref <cit.> for derivation) H_(t)=g_h∑_i( ϕ _hiâ_hib̂^†|e⟩⟨ g|e^-i[Ω _hi-(ν +ω _0)]t+H.c.) . Under this interaction, we then get a master equation for the state of the hot scalar field. By tracing out the atom, which has reached a steady population under the influence of the cold bath, we then find the time evolution of the signal mode (see SI) The ergotropy (work capacity) of the signal state in this regime that corresponds to coherent amplification grows as 𝒲= ν |α_0|^2 e^𝒢t, where |α _0| is the mean initial signal amplitude and 𝒢 is the gain (see SI). The power of the gained work is therefore given by 𝒲̇= 𝒢ν |α_0|^2 e^𝒢t. As in the strong-coupling regime, N-fold collective (Dicke) power boost <cit.> is attainable by N atoms. The efficiency can be computed as the ratio of power generated by the signal to the heat flux from the BH, Q̇_h. This efficiency evaluates to (see App. <ref>) η = Ẇ/Q̇_h = ν/Ω _h|α _0|^2/|α |^2+ n_h(n_c+1)/n_h-n_c, where |α _0| is the mean initial signal amplitude. It approaches the Scovil-Schulz-Dubois (SSD) bound ν /(ω _0+ν ) as |α _0|>>1 (Fig.<ref>). In Fig. <ref> we show that the division of the gained signal energy between ergotropy and heat tends in favor of ergotropy (coherent work production) as the gain increases. Conclusions: We have put forth the possibility of black hole (BH) gravity to act as the energizing source of coherent light amplification. The amplification is mediated by the Hawking radiation of the BH in the presence of an orbiting mirror that transforms outgoing Hawking radiation into ingoing Rindler quanta. It can be viewed as a BH-fueled heat engine that converts Hawking radiation into work in a coherent signal mode. The main energy source in our model is Hawking radiation, and not the kinetic or potential energy of the atoms. 
In principle, one can also use the kinetic energy of ground-state atoms passing through the cavity to amplify light <cit.>. Our results corroborate the view <cit.> that, despite the unitarity of such processes, a BH can act as a heat source on falling matter (cf. <cit.>). Concepts of quantum information theory and optics have been gaining prominence in the context of quantum effects of gravity <cit.>. We here venture in yet another direction, demonstrating that such effects may find practical use, such as propelling a spaceship by atoms falling into a BH. These results open a new avenue that bridges quantum optics, quantum thermodynamics and BH gravity. Acknowledgements: GK and MOS acknowledge the support of NSF-BSF. GK acknowledges the support of PACE-IN (QUANTERA), PATHOS (EU FET OPEN) and DFG (FOR 2724). MOS acknowledges the support of the Air Force Office of Scientific Research (Grant No. FA9550-20-1-0366 DEF), the Robert A. Welch Foundation (Grant No. A-1261), and the National Science Foundation (Grant No. PHY 2013771). Author contributions: GK conceived the initial idea, and then all authors conceptualized and designed the project. AM, PC and AS did the analytical study. PC did the figures and plots. GK and MOS supervised the project. All authors were involved in the analysis and interpretation of the results. GK, AM and AS wrote the manuscript with input from all authors. Competing interests: The authors declare no competing interests. Data availability: Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. 5pt § MODE FUNCTIONS OF PHOTONS RESONANT WITH FREE-FALLING ATOMS Here we consider a two-level atom with transition frequency ω freely falling into a nonrotating BH of mass M along a radial trajectory from infinity with zero initial velocity. We choose the gravitational radius r_g=2GM/c^2 as a unit of distance and r_g/c as a unit of time and introduce the dimensionless distance, time, and frequency as r→ r_gr, t→ (r_g/c)t, ω→ (c/r_g)ω. In dimensionless Schwarzschild coordinates the atom trajectory is described by the equations dr/dτ=-1/√(r), dt/dτ=r/r-1, where t is the dimensionless time in Schwarzschild coordinates and τ is the dimensionless proper time for the atom. Integration of equations (<ref>) yields τ =-2/3r^3/2+const, t=-2/3r^3/2-2√(r)-ln( √(r)-1/√(r)+1 ) +const. For a scalar photon in the Regge-Wheeler coordinate r_∗=r+ln (r-1) the field propagation equation reads [ ∂ ^2/∂ t^2-∂ ^2/∂ r_∗^2+( 1-1/r) ( 1/r^3- Δ/r^2) ] ψ =0, where Δ is the angular part of the Laplacian. We are interested in solutions of this equation outside of the event horizon, that is for r>1. If the dimensionless photon frequency ν≫ 1, then the first two terms in Eq. (<ref>) dominate and one can approximately write ( ∂ ^2/∂ t^2-∂ ^2/∂ r_∗^2) ψ =0. The general solution of this equation reads ψ =F( t± r_∗) =F( t± r±ln (r-1)) , where F is an arbitrary function. We consider a trajectory of the atom near the event horizon and choose the origin of τ such that τ =0 when the atom crosses the horizon. In the vicinity of the horizon, we obtain for the atom's trajectory t≈ -ln (-τ )+5/4τ +const, r≈ 1-τ -1/4τ ^2, and, therefore, along the atom's trajectory t-r-ln (r-1)≈ -2ln (-τ )+const, t+r+ln (r-1)≈1/2τ +const. Eqs. (<ref>) and (<ref>) yield the following mode functions of the field which harmonically oscillates as a function of τ along the atom's trajectory ψ _1ν(t,r)=e^iν e^-1/2( t-r-ln (r-1)) ≈ e^-iντ, ψ _2ν(t,r)=e^-2iν( t+r+ln (r-1)) ≈ e^-iντ. 
It is insightful to write the mode functions (<ref>) and (<ref>) in the Kruskal-Szekeres coordinates T and X that are defined in terms of the Schwarzschild coordinates t and r as T=√(r-1)e^r/2sinh( t/2) , X=√(r-1)e^r/2cosh( t/2) , for r>1, and T=√(1-r)e^r/2cosh( t/2) , X=√(1-r)e^r/2sinh( t/2) , for 0<r<1. In these coordinates, we obtain for r>1 e^-1/2( t-r-ln (r-1)) =X-T, T+X=e^1/2( t+r+ln (r-1)) , and, therefore, ψ _1ν(T,X)=e^-iν( T-X) , ψ _2ν(T,X)=( T+X) ^-4iν. § STRONG-COUPLING AMPLIFIER REGIME The initial state of the combined system is ρ^i= |g⟩⟨ g| ⊗ |n_s⟩⟨ n_s|⊗ρ_T_h, which is a mixture of the pure states |g⟩ |n_s⟩ |n_h⟩ with thermal occupation probability of the hot bath mode p_ n_h= e^-(β_h Ω_h n_h)/Z_β_h. Each such pure state can be written in the basis in Eq. (<ref>) as |ψ⟩^i=( [ 1; 0 ]), which under the unitary evolution maps to |ψ⟩^f=( [ [ e^-1/2 i δ t(cos(1/2 t √(δ ^2+4 g_h^2 ϕ _h^2)); +i δsin(1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))/√(δ ^2+4 g_h^2 ϕ _h^2)) ] & -2 i g_h ϕ _h e^-1/2 i δ tsin(1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))/√(δ ^2+4 g_h^2 ϕ _h^2) ]) = ( [ u; v ]). The final state of the atom after time t is then ρ_atom^f= |u|^2|g⟩⟨ g|+ (|v|^2) |e⟩⟨ e|, and the final state of the piston is ρ_p^f= |u|^2|n_s⟩⟨ n_s|+ |v|^2 |n_s+1⟩⟨ n_s+1|. Here we have taken the sum over all pure state in Eq. (<ref>) with the thermal probability p_ n_h in the hot bath mode. The initial ergotropy of the piston mode is [ρ_s^i]= ν n_s. The final ergotropy of the piston mode is [ρ_s^f]= ν [n_s+ (|v|^2-|u|^2)]. The ergotropy gain or the work gain is _gain= ν (|v|^2-|u|^2), which is maximized when |v|^2=1. § WEAK-COUPLING AMPLIFIER REGIME The Hamiltonian in Eq. (<ref>) holds only when the cold and the hot modes are not in the ground state, but their probability of being in the ground state for a thermal distribution is p_0,0= (1/Z_β_c)(1/Z_β_h), which is the probability to have no transition from the initial state. Then the master equation (ME) for the combined signal-atom state associated with the hot bath mode is <cit.> ρ̇_h = g_h^2 |I_h,gi|^2 (n̅_h+1)([Sρ_h, S^†]+[S,ρ_h S^†]) + g_h^2 |I_h,ei|^2 n̅_h([S^†ρ_h, S]+[S^†,ρ_h S]), where S=b |g⟩⟨ e|, n_h is the mean quanta number in the thermal state associated with the Hawking radiation, and |I_h,gi|^2= ∫_t_i ^t_f dt^' e^-i δ_ci t^'ϕ_h^⋆ (t^') ∫_t_i ^t_f dt^'' e^i δ_ci t^''ϕ_h (t^'') , |I_h,ei|^2= ∫_t_i ^t_f dt^' e^i δ_ci t^'ϕ_h (t^') ∫_t_i ^t_f dt^'' e^-i δ_ci t^''ϕ_h^⋆ (t^''), where δ_ci =(Ω_ci- ω_0). Upon tracing out the atom, we obtain for the signal mode s the ME ρ̇_s = g_h^2 [|I_h,gi|^2 (n̅_h+1) ρ_ee([bρ_s, b^†]+[b,ρ_s b^†]) + |I_h,ei|^2 n̅_hρ_gg([b^†ρ_s, b]+[b^†,ρ_s b]) ], where we have assumed for simplicity that |I_h,gi| = |I_h,ei| and ρ_ee/ρ_gg≈n̅_c/n̅_c+1= exp [-ħω/k_B T_c], T_c being the cold bath temperature. The resulting time evolution of the signal-mode Fock state n_s is given by ṅ_s = - 2 g_h^2 |I_h,gi|^2 ((n̅_h+1) n_s ρ_ee - n̅_h (n_s+1) ρ_gg), For the Glauber-Sudarshan P-distribution of the signal state, i.e., ρ_s = ∫ P(α) |α⟩⟨α | d^2 α, one obtains the Fokker-Planck (FP) equation ∂/∂ t P(α) = -𝒢/2( ∂/∂α + ∂/∂α^⋆) P + 𝒟∂^2 P/∂α∂α^⋆, with 𝒢 = 2 g_h^2 |I_h,ai|^2 (n_h-n_c)/2n_c+1 𝒟 = 2 g_h^2 |I_h,ai|^2 n_h (n_c +1)/2n_c + 1. Here 𝒢 describes the effective gain rate in the amplification regime and 𝒟 describes the diffusion rate for the process. An initial coherent state |α_0⟩ then evolves into P(α, t) = 1/πσ^2 (t) Exp ( -|α - α_0 e^𝒢t/2|^2/σ^2 (t)), with σ^2 (t) = 𝒢/𝒟 (e^𝒢t -1).
http://arxiv.org/abs/2307.03873v1
20230708012434
Why does dissolving salt in water decrease its dielectric permittivity
[ "Chunyi Zhang", "Shuwen Yue", "Athanassios Z. Panagiotopoulos", "Michael L. Klein", "Xifan Wu" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.dis-nn", "physics.chem-ph" ]
Department of Physics, Temple University, Philadelphia, Pennsylvania 19122, USA Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08544, USA Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08544, USA [email protected] Department of Physics, Temple University, Philadelphia, Pennsylvania 19122, USA Institute for Computational Molecular Science, Temple University, Philadelphia, Pennsylvania 19122, USA Department of Chemistry, Temple University, Philadelphia, Pennsylvania 19122, USA [email protected] Department of Physics, Temple University, Philadelphia, Pennsylvania 19122, USA Institute for Computational Molecular Science, Temple University, Philadelphia, Pennsylvania 19122, USA The dielectric permittivity of salt water decreases on dissolving more salt. For nearly a century, this phenomenon has been explained by invoking saturation in the dielectric response of the solvent water molecules. Herein, we employ an advanced deep neural network (DNN), built using data from density functional theory, to study the dielectric permittivity of sodium chloride solutions. Notably, the decrease in the dielectric permittivity as a function of concentration, computed using the DNN approach, agrees well with experiments. Detailed analysis of the computations reveals that the dominant effect, caused by the intrusion of ionic hydration shells into the solvent hydrogen-bond network, is the disruption of dipolar correlations among water molecules. Accordingly, the observed decrease in the dielectric permittivity is mostly due to increasing suppression of the collective response of solvent waters. Why does dissolving salt in water decrease its dielectric permittivity Xifan Wu ====================================================================== In chemistry and biology, water is widely referred to as the universal solvent <cit.>. As salts dissolve in water, the anomalously large dielectric permittivity of water promotes the solubilization of salt by screening interionic Coulomb interactions. At the same time, the dielectric response of water is influenced by the presence of dissolved salts <cit.>. Almost 100 years ago, it was found that the static dielectric permittivity of sodium chloride (NaCl) solution decreases as more salt is dissolved <cit.>. Later, more sophisticated experiments revealed a nonlinear behavior in which dielectric decrement slows down at high solute concentrations <cit.>. A theoretical explanation of this phenomenon was conceived soon after the first experiment. As stated in their dielectric saturation theory, Debye <cit.> and Sack <cit.> envisioned the formation of hydration shells due to the tendency of water dipoles to be aligned along electric fields of dissociated ions. Debye further estimated that ionic electric fields are strong enough to saturate the polarizability of water molecules near the ions and therefore lower the dielectric response <cit.>. Because of its built-in physical intuition, dielectric saturation has been, to date, the most adopted theory to explain dielectric decrement in salt water <cit.>. The past half-century has witnessed significant progress in understanding water through principles of quantum mechanics and statistical physics <cit.>. This progress calls into question the dielectric saturation explanation. 
Indeed, consensus has been reached that the high dielectric permittivity of water is closely associated with correlated dipole fluctuations of water molecules on the underlying hydrogen(H)-bond network <cit.>. However, this collective dipolar response is missing in the picture of dielectric saturation which mainly concerns the suppressed dielectric response of individual water molecules <cit.>. More disturbingly, based on classical electrodynamics, dielectric saturation is estimated to occur on water molecules that are a few angstroms away from ions <cit.>. The above length scale is comparable to the estimated de Broglie wavelength of electrons at room temperature <cit.>. Physical interactions at such length scales are governed by quantum mechanics rather than a classical description. In this regard, density functional theory (DFT)-based <cit.> ab initio molecular dynamics (AIMD) <cit.> provides an ideal framework to predict properties of liquids from quantum mechanical principles. Indeed, recent AIMD simulations found that polarizabilities of water molecules in ionic first hydration shells are only slightly different from that in neat water <cit.>, which contradicts the dielectric saturation hypothesis. Due to the long-range nature of the dipole-dipole interaction and the disordered liquid structure, the prediction of dielectric response in water demands both a spatially extensive model containing many hundreds of water molecules and a simulation time beyond nanoseconds <cit.>. However, AIMD simulations of such large timescale and system size are simply not feasible using current computer architectures. Thus, to date, dielectric decrement has been mostly studied using molecular dynamics with classical force fields, and the effect of electronic polarizability has been neglected <cit.>. Herein, we overcome the challenge by studying dielectric decrement by combining AIMD and deep neural networks (DNNs) <cit.>. The liquid structures of NaCl solutions are simulated by a DNN that explicitly incorporates long-range electrostatic interactions <cit.> with periodic simulation cells containing about 4000 water molecules. Importantly, the potential is trained on DFT calculations based on the strongly constrained appropriately normed (SCAN) functional <cit.>. In addition, a second DNN <cit.> is trained separately for centers of electronic orbitals, in terms of maximally localized Wannier functions <cit.>. Notably, this second DNN allows us to rigorously partition the electronic charge density into contributions from dipole moments of individual water molecules. The dual DNNs enable efficient computations of dielectric permittivity at the DFT accuracy. (See Supplemental Material <cit.> for more details on this methodology.) Based on linear response theory, the static dielectric permittivity of NaCl solutions, ε_NaCl(aq), is related to the fluctuation of the total dipole moment, M, by <cit.> ε_NaCl(aq) =⟨M^2⟩/3 V k_B T ε_0+ε_∞ =⟨(M_W(aq)+M_I(aq))^2⟩/3 V k_B T ε_0+ε_∞ =ε_W(aq)+ε_W(aq)-I(aq)+ε_I(aq)+ε_∞ where V, k_B, T, and ε_0 are the system volume, Boltzmann constant, temperature, and vacuum permittivity, respectively. ε_∞ is the electronic contribution in the high-frequency limit. As expected, the theoretical ε_∞ are small values around 1.88-1.99 at concentrations under consideration. We report the computed dielectric permittivity of NaCl solutions in Fig. <ref> together with experimental data. Note that both results have been normalized to enable a better comparison of dielectric decrement behavior. 
There is good agreement between experiments and present calculations. In particular, the nonlinear behavior in dielectric decrement observed in experiment is well reproduced. The dielectric permittivity drops steeply at low concentrations, but its slope becomes gradually flattened as solute concentration increases. Notably, the nonlinearity generates a bowing feature in dielectric decrement. Absolute values of the computed dielectric permittivity are reported in Supplemental Material Table 1 <cit.>. It should be noted that the predicted dielectric permittivity of neat water by SCAN functional is 102.5, which is larger than the experimental value of 78. The overestimation of the dielectric permittivity is consistent with a previous study employing the SCAN functional <cit.>, and this overestimation is particularly attributed to the self-interaction error in the SCAN functional that over-strengthens H-bonds. The slightly overstructured liquid water has been widely reported in literature <cit.> and its effects on observables can be approximated by the effects of decreasing the temperature, which does not affect our conclusions. In NaCl solutions, the fluctuation of the overall dipole moment, M, involves contributions from both water molecules, M_W(aq), and ions, M_I(aq). Therefore, the dielectric permittivity, ε_NaCl(aq) in Eq. <ref> is composed of the self-terms, ε_W(aq) and ε_I(aq) whose dipole fluctuations are restricted to water molecules and solvated ions only, and the cross-coupling term ε_W(aq)-I(aq) reflecting dipole fluctuations in water induced by the movements of ions or vice versa. The computed values of above terms are presented in the inset of Fig. <ref>. Notably, ε_NaCl(aq) is dominated by ε_W(aq) at all concentrations, which agrees with previous findings <cit.>. Thus, dielectric decrement observed in NaCl solutions is due to the weakened dielectric response of solvent water molecules. The dielectric component ε_W(aq) due to solvent water can be further evaluated via the dipolar correlation formalism proposed by Kirkwood <cit.> as ε_W(aq)=ρμ^2 G_K/3 k_B T ε_0, where ρ and μ denote water number density and average dipole moment per water molecule respectively, and G_K is the so-called correlation factor that measures the total angular correlations among water dipoles. In polar liquids, G_K is obtained by the integration of the dipolar correlation function as G_K=∫𝒞(r)dr=1/N∑_i=1^N∑_j=1^N μ̂_i·μ̂_j, where μ̂_i is the unit vector of the ith molecular dipole and N is the number of water molecules. The dipolar correlation is defined as 𝒞(r)=⟨d(0)·d(r)⟩, accounting for the spatial correlation between the dipolar density as a function of distance, r. Because of the discretized nature of water molecules, the dipolar density is defined as d(r)=∑_i=1^N μ̂_i δ(r-r_i) with r_i denoting the position vector of the ith water molecule. In neat water, both the dipole moment, μ, and the correlation factor, G_K, are largely enhanced by the underlying H-bond network, leading to the anomalously large dielectric permittivity <cit.>. In NaCl solutions, as shown in Fig. <ref> (a), the correlation factor, G_K, the water number density, ρ, and the water dipole moment, μ, all decrease as increasing amounts of salt dissolved, which according to Eq. <ref> leads to dielectric decrement. The effect from the disrupted H-bond network As seen in Fig. <ref> (a), dielectric decrement of NaCl solutions is mostly attributed to the decreased correlation factor, G_K, relative to that of neat water. 
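As an aside for readers who want to reproduce this kind of analysis, the two formulas above translate into only a few lines of post-processing code. The sketch below is illustrative rather than a description of the present workflow: it assumes per-frame arrays of molecular dipole vectors (in SI units) are already available from the simulation, approximates μ^2 by the average squared molecular dipole, and omits trajectory I/O and error estimation.

```python
import numpy as np

K_B, EPS_0 = 1.380649e-23, 8.8541878128e-12  # SI constants

def kirkwood_epsilon(dipoles_per_frame, volume, temperature):
    """Estimate G_K and eps_W from molecular dipoles (C*m), volume (m^3), T (K)."""
    g_k_sum, mu2_sum, n_frames = 0.0, 0.0, 0
    for mu in dipoles_per_frame:                 # mu: (N, 3), one row per water molecule
        n_mol = mu.shape[0]                      # assumed constant along the trajectory
        mu_hat = mu / np.linalg.norm(mu, axis=1, keepdims=True)
        total = mu_hat.sum(axis=0)
        g_k_sum += total @ total / n_mol         # (1/N) sum_ij mu_hat_i . mu_hat_j
        mu2_sum += (np.linalg.norm(mu, axis=1) ** 2).mean()
        n_frames += 1
    g_k = g_k_sum / n_frames
    mu2 = mu2_sum / n_frames
    rho = n_mol / volume                         # water number density
    eps_w = rho * mu2 * g_k / (3.0 * K_B * temperature * EPS_0)
    return g_k, eps_w
```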
Thus, the strong correlation among dipole moments in neat water is significantly suppressed in salt solutions. In neat water, the large G_K is closely associated with the tetrahedral H-bond structure, in which a water molecule at the center of a tetrahedron is H-bonded with four neighboring water molecules. The directions of dipole moments of any two H-bonded water molecules, therefore, point in a similar direction, resulting in a positive μ̂_i·μ̂_j, which gives rise to the first positive sharp peak at 2.7 Å in the dipolar correlation function in Fig. <ref> (b). Under the influence of the directional H-bonding, dipole moments on vertices of a tetrahedron also prefer to be aligned in a similar direction to some extent, which yields a second positive peak around 5.1 Å in Fig. <ref> (b). In the same fashion, the dipolar correlation propagates to the third coordination shell and beyond. The H-bond network is disrupted increasingly as more salt is dissolved. Salt ions exert electrostatic fields that can attract water molecules by competing with the H-bonding. In the close vicinity of ions, water molecules hydrate the ions by orienting their electric dipole moments towards the ions, thereby lowering the electrostatic energy of the system, as schematically shown in Fig. <ref> (b). For a sodium cation, the first hydration shell can be described as a relatively tight sphere comprised of about 5 or 6 water molecules, whose oxygen is attractive to the cation at the center <cit.>. On the other hand, the first hydration shell of a chloride ion is a relatively large sphere composed of as many as 6-8 water molecules whose protons are attracted to the chloride lone pair electrons <cit.>. Because of the intrusion of the hydration shells, water molecules in the solvent are now divided into two distinct categories: the “hydration (H) water” inside the ionic hydration shells and the “bulk (B) water” outside it. As such, the pattern of dipolar correlation is fundamentally revised. As shown in Eq. <ref>, G_K =∫[𝒞^B(r)+𝒞^H(r)+𝒞^BH(r)]dr =G_K^B+G_K^H+G_K^BH, the total correlation factor G_K involves the self-terms of G_K^B (G_K^H) by dipolar correlation restricted to “bulk water” (“hydration water”) only, and the coupling term G_K^BH due to the dipolar correlation between “bulk water” and “hydration water”. The above components in correlation factors, relative to neat water, are presented in Fig. <ref> (a). (See: Supplemental Material <cit.> for more details.) As seen in Fig. <ref> (a), the reduction in the overall correlation factor, G_K, is mostly from G_K^H, which describes the correlation among “hydration water”. This is because water molecules in hydration shells are constricted by the ion-water attraction instead of H-bonding. Within a hydration shell, the cation (anion)-water attraction reorientates the dipole moments from an H-bonding direction to a central-force direction pointing outwards (towards) ions. As such, the dipolar correlation between two neighboring “hydration water” molecules is thereby significantly suppressed. This is evidenced by the sharp negative peak at ∼ 2.7 Å in the dipolar correlation function Δ𝒞^H(r) as plotted relative to neat water in Fig. <ref> (c). Moreover, the absence of H-bonding even causes anti-correlations between two “hydration water” molecules located on the opposite sides of a single ion as schematically shown by opposite directions of water molecular dipoles in the inset of Fig. <ref> (c). Therefore, the aforementioned positive peak of neat water in Fig. 
<ref> (b) due to correlated dipole moments on the vertices of a tetrahedron at 5.1 Å disappears. Instead, it is replaced by two negative peaks at 4.8 and 6.1 Å, which are caused by the anti-correlated water dipoles in the hydration shells of Na^+ and Cl^- ions, respectively. At long range, water molecules in one hydration shell should, in principle, be correlated with those in another hydration shell. However, such correlations are also weaker than those in neat water, as expected from Fig. <ref> (c). As the concentration increases, the loss of G_K^H should accumulate linearly, which is responsible for most of the linear dielectric decrement in salt water. Of course, “hydration water” is H-bonded to “bulk water”, and in this way the H-bond network is partially restored. Nevertheless, the reconstructed H-bond structure deviates from that found in neat water. Within a hydration shell, two water molecules located on opposite sides of a single ion are anti-correlated, as mentioned above. Because of the highly directional nature of H-bonding, the anti-correlation extends to the correlation between one “hydration water” molecule and one “bulk water” molecule that is H-bonded to another “hydration water” molecule on the other side of the ion, as schematically shown by the opposite directions of the green arrows in the inset of Fig. <ref> (d). Again, these anti-correlations can be identified as a broad negative peak centered at 8 Å, which weakens the dipolar correlation. As a result, G_K^BH also contributes to the decreased overall correlation factor G_K relative to neat water, as shown in Fig. <ref> (a). Moreover, G_K^BH plays a surprisingly important role in the nonlinear dielectric decrement, as evidenced by its arc shape in Fig. <ref> (a). This nonlinearity is an intrinsic property because G_K^BH describes the correlation between the dipolar density of “bulk water”, d^B(r), and the dipolar density of “hydration water”, d^H(r), and its value depends on the existence of both types of water, i.e., ⟨d^B(0)·d^H(r)⟩. In neat water, G_K^BH = 0 since the dipolar density of “hydration water”, d^H(r), is zero. As salt dissolves in water, hydration shells appear in the solution, and the absolute value of G_K^BH starts to increase, reaching its maximum at about 2.3 M, at which the NaCl solution is roughly equally occupied by “bulk water” and “hydration water”. Beyond this maximum, G_K^BH decreases with further elevated concentrations. In principle, it will vanish again at d^B(r) = 0, when the entire solution is completely occupied by hydration shells. The tetrahedral H-bond network is expected to recover in the “bulk water” outside the hydration shells. The dipolar correlation among “bulk water” molecules is captured by the G_K^B component of the correlation factor. Indeed, the analysis in Fig. <ref> (a) shows that G_K^B of NaCl solutions at all concentrations differs little from that of neat water. Thus, the large decrease in the correlation factor, G_K, in salt water is mostly due to the disrupted H-bond network in the “hydration water”. Excluded volume effect. Due to short-range repulsion, ions and water molecules are separated by 2-4 Å. This extra volume demanded by the ions is no longer accessible to water molecules, and the water number density is therefore decreased. In the literature, this is referred to as the excluded volume effect <cit.>. According to Eq. <ref>, this effect should lead to a decreased dielectric permittivity.
Indeed, the present computations show that the excluded volume effect makes a small contribution to dielectric decrement: the water number density decreases slightly with increasing solute concentration, as shown in Fig. <ref> (a). Since the volume repelled by the ions is proportional to the salt concentration, the dielectric decrement due to the excluded volume effect is indeed linear, as expected. Local field effect. Hydrated ions, like all charged defects, change the electrostatic potential profile throughout the solution. As expected, water molecules near an ion are polarized in a different manner than in neat water. In condensed matter physics, related phenomena have already been identified, for example around defects in semiconductors or at interfaces in solid materials, and they have long been recognized as the local field effect <cit.>. There is consensus that a proper description of local field effects, particularly for regions close to charged defects, demands electronic structure details computed from quantum mechanics. Based on DFT, the present DNN simulations yield a dipole moment μ = 2.85 (2.91) Debye for the “hydration water” of the cation (anion), which is only slightly smaller than the value of 2.99 Debye in neat water. This suggests that the capability of ions to polarize the water dipole is comparable to that of H-bonding. Indeed, it is also consistent with the recent theoretical finding that molecular polarizabilities of the “hydration water” are only marginally different from those in neat water <cit.>. Since H-bonding is mostly electrostatic in nature, this strongly indicates that water molecules near ions are far from being saturated by the ions’ local fields. Nevertheless, the local field effect also contributes slightly to dielectric decrement, as indicated by Eq. <ref>. Because the μ of the “hydration water” is only a little smaller than in neat water, μ^2 of NaCl solutions drops slowly as a function of concentration, as shown in Fig. <ref> (a). In addition to the SCAN ab initio simulations, we also simulated the dielectric permittivity using the classical OPC water model <cit.>. As shown in Supplemental Material <cit.>, the results obtained using the OPC model agree well with those from the SCAN-DFT approach. A notable distinction between the OPC model and the SCAN-DFT model is that the OPC model is a rigid model with a fixed dipole moment of 2.48 D, indicating that the DFT approach is necessary for accurately capturing the local field effect. In conclusion, dielectric decrement is a century-old problem that has been studied extensively over the decades. However, a critical question remains unresolved in the field regarding the main origin of the dielectric decrement: whether it is the dielectric saturation effect <cit.> or the loss of dipolar correlation on the H-bond network <cit.>. To provide an unambiguous answer, theoretical simulations must explicitly include both a polarizable model of water molecules and an accurate model of H-bonding, which can account for the dielectric saturation effect and the correlation effect simultaneously. Importantly, the polarizable models of water molecules should be described from first principles at the quantum mechanical level, because the length scale of the dielectric saturation effect is a few angstroms, which is comparable to the de Broglie wavelength of electrons at room temperature. In this work, we achieve the above goal by reproducing the dielectric decrement in NaCl solutions at the DFT level using advanced DNNs.
The results unambiguously determine that the dielectric decrement in NaCl solutions is dominated by the loss of correlations between water molecules due to the intrusion of ionic hydration shells into the H-bond network, while the contribution from dielectric saturation effect is small. Importantly, the present computations provide a quantitative explanation of dielectric decrement in salt water; we found that the linear dielectric decrement is due to the loss of correlation within hydration shells, while nonlinear dielectric decrement is due to the loss of correlation between water in hydration shells and bulk water. We thank Roberto Car, Linfeng Zhang, and Han Wang for fruitful discussions. This work was supported by National Science Foundation through Awards No. DMR-2053195. We also acknowledge support from the “Chemistry in Solution and at Interfaces” (CSI) Center funded by the U.S. Department of Energy through Award No. DE-SC0019394. This research used resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the U.S. Department of Energy (DOE), Office of Science under Contract No. DE-AC02-05CH11231. This research includes calculations carried out on HPC resources supported in part by the National Science Foundation through major research instrumentation grant number 1625061 and by the U.S. Army Research Laboratory under contract No. W911NF-16-2-0189. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
http://arxiv.org/abs/2307.04481v1
20230710110332
Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling
[ "Giuseppe Desolda", "Andrea Esposito", "Florian Müller", "Sebastian Feger" ]
cs.HC
[ "cs.HC", "cs.AI", "H.5.2; I.2.1" ]
Digital Modeling for Everyone G. Desolda et al. Department of Computer Science, University of Bari Aldo Moro, Bari, Italy {giuseppe.desolda, andrea.esposito}@uniba.it LMU Munich, Munich, Germany {florian.mueller, sebastian.feger}@um.ifi.lmu.de Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling Giuseppe Desolda10000-0001-9894-2116 Andrea Esposito10000-0002-9536-3087 Florian Müller20000-0002-9621-6214 Sebastian Feger20000-0002-0287-0945 August 12, 2023 ==================================================================================================================================================== Manufacturing tools like 3D printers have become accessible to the wider society, making the promise of digital fabrication for everyone seemingly reachable. While the actual manufacturing process is largely automated today, users still require knowledge of complex design applications to produce ready-designed objects and adapt them to their needs or design new objects from scratch. To lower the barrier to the design and customization of personalized 3D models, we explored novice mental models in voice-based 3D modeling by conducting a high-fidelity Wizard of Oz study with 22 participants. We performed a thematic analysis of the collected data to understand how the mental model of novices translates into voice-based 3D modeling. We conclude with design implications for voice assistants. For example, they have to: deal with vague, incomplete and wrong commands; provide a set of straightforward commands to shape simple and composite objects; and offer different strategies to select 3D objects. § INTRODUCTION The digital fabrication revolution aims to democratize the way people create tangible objects <cit.>. With the widespread availability of 3D printing together with many other digital fabrication technologies such as laser cutters or CNC routers, end users are moving from passive consumers to active producers. While the actual manufacturing process is largely automated today, users are still required to have a profound knowledge of complex 3D modeling applications, when they adapt models to their needs or even design new objects from scratch <cit.>. Thus, even if the introduction of technologies such as 3D printers has revolutionized the hobbyist community, lowering the barrier of entry to manufacturing even for novices (who can now put their hands in the process of creating artifacts without relying on third parties), we argue that the design of the 3D objects to be manufactured still requires a high level of knowledge and expertise. These limitations have pushed researchers to investigate natural interaction techniques to simplify 3D modeling tools <cit.>. For example, research explored gestures <cit.>, virtual/augmented reality <cit.>, eye tracking <cit.>, brain-computer interface <cit.> and their combination <cit.> as a multimodal approach. However, their adoption is reserved for technical users and it is strongly limited by hardware costs and excessive size/weight that can make the users easily fatigued <cit.>. As another possible solution, voice-based interaction has been explored, to both integrate the traditional GUI interface (e.g., to enable shortcuts via voice commands) <cit.>) or as the primary interaction paradigm (e.g., see <cit.>). 
Although voice-based interaction requires only a microphone, it does not yet provide adequate digital modeling support for everyone: existing solutions either do not consider final users at all <cit.>, or only target 3D experts <cit.>, and novices are not considered potential target beneficiaries of the proposed innovations. To lower the barrier to the design and customization of personalized 3D models by exploiting the potential of voice-based interaction, this study aims to understand how the mental model of novices translates into voice-based 3D modeling. We conducted a high-fidelity WoZ study to elicit novices' mental model, for example, their expectation, beliefs, needs, and abilities. We recruited a total of 22 participants without skills in 3D modeling, who performed 14 tasks revolving around some basic concepts of 3D modeling like the creation of objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite objects. All the WoZ sessions' recordings were analyzed through thematic analysis. The findings of the study have been distilled in the form of lessons learned. For example, we found that: voice assistants must manage the corrections the novices do during and after the commands; deal with vague and incomplete commands; consider the prior novices' knowledge; provide only a simplified set of operations for creating simple and composite 3D objects; design a workflow similar to what novices would do if they were building real objects; understand chained commands; understand commands that are relative to the users’ point of view. The contribution of this paper is two-fold. First, we report the results of our WoZ study presenting the themes that emerged from the thematic analysis. Second, based on these results, we provide a set of design implications for the future design of voice-based interaction paradigms for 3D modeling for novices. § BACKGROUND AND RELATED WORK This study revolves around the concept of voice-based 3D modeling as a key factor for enabling the democratization of digital fabrication. This section starts by illustrating some of the existing solutions based on natural interaction that try to address the complexity of 3D modeling (<ref>). Next, we provide an overview of the requirements for interacting with voice assistants (<ref>). Finally, we provide a brief summary of the motivation of this study and introduce the research question that guided our work (<ref>). §.§ Addressing the Complexity of 3D modeling To mitigate the issues of traditional GUI-based CAD systems, researchers explored natural interaction paradigms like eye tracking <cit.>, brain-computer interface <cit.>, gestures <cit.>, virtual/augmented reality <cit.> and their combination <cit.> as a multimodal approach for 3D modeling. The goal of natural interactions with CAD systems is to increase their usability for both expert users and, especially, novice users. Specifically, they aim to: [label=*)] * reduce the learning curve of the system; * allow a more intuitive interaction process; * enhance the design abilities of the designers <cit.>. An example of a multimodal system is “3D Palette” by Billinghurst et al.: a mix of tablet and pen inputs, electromagnetic sensors and voice commands are used to support the digital design process <cit.>. Similarly, Nanjundaswamy et al. explored a mix of gesture-based interaction, speech recognition, and brain-computer interfaces to reduce the initial learning curve of the design system <cit.>. 
A complete overview of the multimodal solutions for CAD is reported by Niu et al. <cit.>. Despite these potential benefits, such multimodal techniques require the adoption of specialized hardware (e.g., depth-sensing cameras for gesture recognition, headsets to recognize brain signals), which use can be limited by their prices, sizes, weight, and complexity of use <cit.>. Thus, it is still hard for novice users to really adopt them in real and daily contexts <cit.>. To overcome these limitations, researchers also investigated voice-based interaction because of its intuitive nature and the simplicity of the required hardware, i.e., a microphone, which nowadays is embedded in any laptop, tablet, or webcam <cit.>. Furthermore, considering the ubiquity of smartphones and the rise of AR and VR glasses, voice-based interaction can be generalized to technologies where other interaction modalities are not available options. Attempts of integrating voice-based interaction to CAD systems date as back as 1985 <cit.>. A more recent work suggests the use of voice commands to allow users to either quickly search commands by simply stating their intention <cit.>, or to annotate 3D models <cit.>. Systems, where the entire modeling process is carried out by voice commands, have also been explored. An example is the solution presented by Kou and Tan, where voice commands related to a CAD-specific lexicon and grammar are understood by a context-aware algorithm <cit.>. A similar example was proposed by Xue et al., which improves the previous solution by allowing free-form sentences in <cit.>. Another example of a fully-working system is the one presented by Grigor et al.: it follows the same ideas as the previous ones but uses AI to understand the users' inputs, thus allowing for more freedom in the commands, <cit.>. Similarly, Kou et al. proposed a flexible voice-enabled CAD system, where users are no longer constrained by predefined commands by exploiting a knowledge-guided approach to infer the semantics of voice input <cit.>. Among all the previous examples, it must be highlighted that the design of their paradigm was made without any kind of involvement of the final users <cit.> or by solely involving experts in the final testing phase <cit.>. For example, the study by Nanjundaswamy et al. evaluates a multimodal system using gestures, speech and a brain-computer interface by involving a group of five skilled people <cit.>. Similarly, Khan et al. involve a total of 41 skilled users from an architecture or engineering background to elicit the requirements of a CAD system based on gestures and speech commands <cit.>. As another example, Vyas et al. test the usability of a speech-based CAD system involving 6 students with backgrounds in engineering, architecture and visualization <cit.>. The work proposed by Cuadra et al. investigated how novices use voice assistants to design 3D objects <cit.>. They performed a WoZ study to compare voice assistants with and without the use of a video channel showing the design in progress, investigating how the two approaches impact users' accuracy and satisfaction. Cuadra et al. validate the idea of using voice assistants, as participants are more satisfied with their objects and suffer less from cognitive overload when the design process is supported by video, but it does not provide any insight on the mental model of novices approaching the digital modeling task <cit.>. 
§.§ Interacting with Voice Assistants The first solution of voice interaction implementing speech recognition dates as back as 1952, when Davis et al. proposed a prototype able to recognize digits <cit.>. In recent years, the evolution of machine learning and AI fostered the spreading of powerful commercial voice assistants, often based on deep neural networks trained on a plethora of data. However, such powerful speech recognition models alone are not sufficient to build an effective voice assistant, since the interaction with such systems must be considered in the design of the whole system <cit.>. This need, together with the growing availability of commercial voice assistants, has fostered a sharp uptick of studies on user interaction with voice assistants <cit.>. Aspects like the cues that drive the conversation <cit.>, the properties that a voice assistant should have <cit.>, the user's mental model <cit.>, emotions felt during the conversation <cit.>, conversational design patterns <cit.> have been investigated. In addition, solutions to design and evaluate interaction with voice assistants are beginning to be proposed (see, for example, <cit.>). Careful consideration of these design aspects gains importance when voice assistants aim to simplify challenging or technical operations (e.g., see <cit.>). Since 3D modeling represents such a demanding task for novices, the elicitation of the novices' mental model is crucial to lower the barrier for 3D modeling. §.§ Summary and Research Question The analysis of the literature highlights that to simplify the 3D modeling, often the existing solutions are based on multimodal techniques such as gestures, eye tracking, or brain-computer interfaces; however, their adoption in real contexts is strongly limited by the adoption of specialized hardware and, overall, they target technical users. Voice interaction seems a promising paradigm that can overcome the limitations of multimodal solutions, but the existing voice-based solutions are still lacking for three important reasons: [label=*)] * users are often not considered throughout the design phase, or they are only involved too late in testing phases; * to the best of our knowledge, novices are never considered as target users; * the voice-based interaction is built on top of the existing CAD systems (and their complexity), instead of designing from scratch the voice paradigm and the whole system. Considering these limitations, to really democratize digital fabrication considering novices, users should be able to access 3D modeling tools even without special skills. All these motivations pushed us to explore novices' mental model in voice-based 3D modeling, in order to reduce the cost of their entry in the digital fabrication era. This is an aspect that has never been explored before and that deserves attention to really democratize digital fabrication. Therefore, our work addresses the following research question: How does the mental model of novices translate into voice-based 3D modeling? § METHOD To answer our research question, we performed a high-fidelity WoZ study <cit.> because it has been proven successful in eliciting the user's mental model for voice-based interaction (e.g., see <cit.>). Then, we carried out an inductive thematic analysis <cit.> on the qualitative data, i.e., the transcriptions of the WoZ sessions and the answers of the participants to the open questions. 
§.§ Participants A total of 22 participants (F=15, M=7) have been recruited through convenience sampling <cit.> on the social circles of the authors of this article. This number of participants is in line with other similar studies (e.g., see <cit.>). Half of the participants were Italians while the other half were Germans. Their mean age was 24.1 years (σ = 3.7, min = 21, max = 34). The entire study was performed in English so as not to have results related to specific languages, which is out of the scope of this study. To ensure that the collected data is not biased toward knowledgeable users, we only recruited participants without any kind of experience with 3D modeling. Regarding the participants' level of education, around 45.45% already have a High School Diploma or a German A-level, 36.36% have a Bachelor's Degree, 13.64% have a Master's Degree, and only one participant (representing the remaining 4.55%) has not provided any information. Most participants (15 out of 22) do not have a STEM education, while 6 of the remaining 7 do not have any computational thinking skills, as they studied or worked in non-IT scientific fields (e.g., pharmaceutical and nutrition sciences). Regarding the participants' skills, they had an average level of IT knowledge (x̅ = 6.5/10; σ = 2.1), a medium-low level of knowledge of voice assistants (x̅ = 3.1/10; σ = 2.0) and very low knowledge of 3D modeling (x̅ = 1.6/10; σ = 1.1). §.§ Tasks A total of 14 tasks have been designed by two authors of this paper, both experts in 3D modeling, taking into account the most common and useful activities that are required to create simple and composite 3D objects. The resulting tasks revolve around basic concepts of 3D modeling, like the creation of simple objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite geometries. The details of the tasks are reported in the task table in the attached appendix (the list of all the graphical tasks is available in the attached appendix, sub-folder tasks). To reduce the impact of the primer effect <cit.> that providing a textual description of a task would have on the participants, we chose to provide the participants with graphical tasks: each task is composed of a brief prompt and a diagram showing the participants a 3D object or a 3D transformation that should be recreated (an example of graphical tasks is provided in <ref>). The representations chosen for each task were validated during a pilot study with 3 novices that were not considered in the final WoZ study. §.§ Apparatus We carried out the WoZ study remotely by using Zoom[<https://zoom.us>]. Four researchers have been involved: two Italians acted respectively as conductors and wizards for the Italian participants, while two German researchers acted as conductors and wizards for the German participants. In both groups, researchers switched roles to minimize the risk of bias introduced when conducting the test. To create the illusion for participants that they are interacting with a real voice-based system for 3D modeling, we decided to use Blender[<https://www.blender.org>], explaining to participants that they can interact with it through voice commands. Blender has been selected since it is a free and open-source software that, among other features like sculpting or rendering, allows one to design and visualize 3D objects. 
One of the main features that made Blender the perfect choice for our WoZ study is the availability of API for the Python language[<https://docs.blender.org/api/current/>] that can be used inside a shell-like environment: this allows the Wizard to immediately create and modify the objects programmatically when the participants provide voice commands, thus preventing the participants from noticing anything odd and increasing the speed at which the Wizard is capable of satisfying the participants' requests. Taking advantage of this feature, we pre-defined a set of functions in a Python module to simplify the use of Blender's APIs for the purpose of this study (the module is available in the supplementary materials, sub-folder python module). To show the participants the task they had to complete, we overlaid the graphical tasks on the bottom-right side of the Blender's window. To this aim, we used Open Broadcaster Software (or, more commonly, OBS)[<https://obsproject.com>], a free and open-source software for video recording and live streaming. Using OBS, it was also possible to define animations and transitions to show when users are moving to the next task and to signal to the participants that the “voice assistant” (i.e., the Wizard) is listening to the user's command or it is actually performing it. In particular, for each task, both the Blender window and the graphical task are visible (see <ref>). When the participants activate the Blender voice assistant by saying “Hey Blender”, the “I'm listening” label indicates that participants can provide the command to solve the task (see <ref>). Then, when the voice command has been issued, a rotating icon indicates that the voice assistant is analyzing it, creating the illusion that there is a real voice assistant (see <ref>). During the loading, the Wizard writes the Python statements related to the user commands and the result is finally shown in Blender (see <ref>). §.§ Procedure For each participant, when the Zoom session started, both the conductor and the Wizard were connected on Zoom but the latter never appeared or interacted with the participant. While the conductor introduced the participant to the study, the Wizard shared his screen, in particular the window created by using OBS. The sessions were recorded using Zoom's built-in recorder. Before starting the recordings, participants were asked to sign (either in digital or in verbal form) a privacy policy. It is worth mentioning that our universities require approval by an ethics committee only in the case of medical and clinical studies. For other studies like ours, they require that test participants give consent in a written or digital form; thus, we informed participants about all the details of the study and asked them to agree before starting the study. All of them agreed. As soon as the participant agreed to attend the study, the conductor invited the participant to complete a set of tasks. The webcam of the conductor was turned off during task execution to avoid disturbing the participant. To reduce the variability between sessions and between the Italian and German participants, the same introductory script was defined (available in the attached appendix, sub-folder "introductory script"). In summary, the conductor explains that the goal of the study was to validate a new voice assistant called Blender, which we created to assist novices in 3D modeling. 
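As an aside on the apparatus, the Wizard-side helper module described above could, for instance, wrap a handful of Blender bpy operators as in the sketch below. This is a purely illustrative reconstruction: the function names, argument conventions, and defaults are our assumptions, and the authors' actual module is the one distributed with their supplementary materials.

# Illustrative Wizard-side helpers, meant to be run in Blender's Python console.
import math
import bpy

def create_box(width, depth, height, location=(0.0, 0.0, 0.0), name=None):
    """Add a box and set its x/y/z dimensions in scene units."""
    bpy.ops.mesh.primitive_cube_add(location=location)
    obj = bpy.context.active_object
    obj.dimensions = (width, depth, height)
    if name:
        obj.name = name
    return obj

def create_cylinder(radius, height, location=(0.0, 0.0, 0.0), name=None):
    """Add a cylinder; bpy calls the cylinder's height 'depth'."""
    bpy.ops.mesh.primitive_cylinder_add(radius=radius, depth=height, location=location)
    obj = bpy.context.active_object
    if name:
        obj.name = name
    return obj

def move(obj, dx=0.0, dy=0.0, dz=0.0):
    """Translate an object relative to its current position."""
    obj.location.x += dx
    obj.location.y += dy
    obj.location.z += dz

def rotate(obj, axis="Z", degrees=0.0):
    """Rotate an object around a global axis by the given angle in degrees."""
    index = {"X": 0, "Y": 1, "Z": 2}[axis.upper()]
    obj.rotation_euler[index] += math.radians(degrees)

With helpers of this kind, a spoken command such as "create a cylinder of height 30" reduces to a single short call (e.g., create_cylinder(radius=5, height=30)) that the Wizard can type while the loading animation is displayed.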
Then, the conductor asks to complete a set of tasks and that, for each of them, a graphical representation appears on the right-bottom side of their screen. The conductor also specifies that the participant had to first activate the voice assistant by saying “Hey Blender” and then, once the “I'm listening” label appears, the participant can provide a sequence of voice commands that, in their opinion, is the best to solve the task (for example “create a cube”). No examples of voice commands have been provided to avoid introducing bias. At the end of each task, the participants had to communicate with the conductor to move on to the next task. At the end of the session, each participant filled in a questionnaire that includes questions on demographics, as well as some usability-related questions to evaluate the effectiveness of the Blender voice assistant. Furthermore, since (to the extent of our knowledge) there were no previous examples of graphical tasks for a WoZ study, we have also chosen to add some questions to evaluate how easy it was for the user to understand the tasks (available in attached appendix, sub-folder questionnaire). The entire procedure lasted around 30 minutes for each participant. A graphical synthesis of the entire procedure and the data collected is shown in <ref>. §.§ Data Analysis The first analysis regarded the questionnaire answers that evaluate the choice of providing the tasks in graphical format. Specifically, we included a question that asked “How easy it was to understand the graphical tasks?” and it ranges from 1 (not simple at all) to 10 (very simple). Both the median and average scores are 8.2/10, with a standard deviation of 1.0. These results seem to validate the idea of presenting the tasks graphically, but it also highlights that for some tasks (the ones with an ambiguous representation) the conductor of the study must be able to guide the participants to the right interpretation (without the use of words that may introduce a primer effect <cit.>). In our study, this issue impacted only the 11th task for four participants and it was solved by turning the webcam on and mimicking the action depicted in the task, in case the user was showing difficulties in understanding a task or if he/she explicitly requested help. After ensuring the quality of the graphical tasks, we analyzed the qualitative data collected during the study, which helped us answer the research question, i.e., video transcriptions, questionnaire responses and participants' comments. All the video recordings (a total of about 11 hours) were first transcribed and expanded by including the annotations that identify pauses, the start and the end of the processing by the WoZ, and eventual errors or over-correction by the WoZ. This dataset was completed by reporting the participants comments and the answers to the three open questions we included in the questionnaire: [label=*)] * What did you like the most about the system used and the interaction with it? * What did you like less about the system and the interaction with it? and * Would you use a system like Blender to model in 3D? Please motivate your answer. This data was analyzed in a systematic qualitative interpretation using Inductive Thematic Analysis <cit.>. The initial coding was conducted independently by four researchers, who are co-authors of this article and are experienced in qualitative data analysis: two of them analyzed the Italian results while the other two the German results. 
The two couples of researchers began with open coding independently. Once all the data was coded, the set of initial codes was further refined by merging the different codes. This first filtering phase allowed us to obtain a set of code groups that capture meaning at a higher level. The identified code groups were then used by each group to extract the main themes. At the end, both the codes and the themes of the two groups were compared to identify similarities and differences. With the exception of some minor differences related to their naming, both the codes and the themes identified by the two couples of researchers were identical in meaning. The final themes that will be presented here derive from a joint naming session carried out by all four researchers. Only a few small differences were identified, and they will be discussed as part of the design implications. The final codes and themes with the relationships among them are available in the attached appendix, sub-folder Codes and Themes. § RESULTS The thematic analysis resulted in the description of five themes reported in the following sub-sections. For each theme, significant participant quotes are reported. For the sake of conciseness, we will refer to participants as “P” followed by the participant number, and to the WoZ system as simply “system”. §.§ Basic Operations This theme frames the strategies of interactions that novices have when they approach the 3D modeling activities of creation and manipulation. §.§.§ Creation. Novices tend to provide simple commands in the form “”, where the used verbs are typically “create”, “draw”, “build”, and examples of shape names are “cube”, “box”, or “cylinder”. This behavior has been observed in tasks that required the creation of simple or composite objects. Strictly related to this is the object duplication. Novices usually keep the requests simple by asking them to duplicate a precise object, as P4 did in task 12 when he said “duplicate the cube”. When the novices, instead, have to face the creation of multiple identical objects, without using the duplication requests (for example, because there was no previous copy in the scene), they simply use a basic creation request by also providing the number of copies: this is clearly exemplified by P5 in task 14 in “create four cylinders”. §.§.§ Manipulation The manipulation operations used by novices during the study are translation, rotation, and scaling. It is worth mentioning that the manipulation operations require some kind of reference frame to be performed; to this aim, novices often use relative references (for more details see theme theme:mental-model where the references used by the novices are discussed). In more complex cases, novices provided commands containing both a creation request and an implicit manipulation request, where the manipulation is often expressed as a set of constraints on the final object. As an example, in task 14, P8 asked the system to “create four cylinders on the corners of the lower rectangle”: in this example, the multiple creation request is clearly visible, and it is put alongside a relative positioning request. Finally, one of the most interesting identified open codes is the one that relates to moving objects with respect to implicit construction shapes. 
As an example, P4 during the last task asked “place the four cylinders at the four corners of a square.” In this example, the participant did not have a square in the scene but implicitly requested the system to create a square, place the cylinders at its corners, and delete the square once the operation was completed. This kind of operation was quite common throughout the last task: around 45% of the participants provided a command that used a construction shape like the one previously cited. §.§ Selection of Objects This theme covers the strategies adopted to identify and select objects, specifically absolute selection, relative selection, or implicit selection. In the case of absolute selection, most participants explicitly refer to the entire scene, or to a single object in a scene by using its name (the one shown in the “inspector” view in Blender, as P11 asked during task 14 by saying “should I call it Box 0001 if I want to move it?”) or by its shape (as P1 did during task 6 by saying “move the cube 20 cm downwards”). A specialization of the latter case is the reference to a shape using a 2D approximation. One example is echoed by P8 during task 14: “Hey blender, move the upper rectangle on the side of the lower one”. Here, the user referred to two 3D boxes by their 2D approximation (rectangles). The relative selection resulted in four commonly used strategies to select objects, namely: * their relative time of creation (e.g., P3 in task 14: “Blender, place the second box under the first”); * their relative position (e.g., P8 in task 14: “Hey Blender, create four cylinders in the corners of the lower rectangle”); * their dimensions (e.g., P11 in task 14: “Hey Blender, move the tallest box attaching it to the side of the other box”); * by inverting the current selection, possibly applying additional filters (e.g., P3 in task 14: “Blender, place the other two cylinders like you placed the previous ones”). Finally, users also often performed implicit selections of the objects in the scene, for example, by referring to a single object in the scene or by referring to the last edited object, either explicitly or implicitly (e.g., P1 in task 8 implicitly referred to the last edited object by saying “increase the volume by three times”). It is worth remarking that novices neither differentiate nor have preferences among the various methods and, in fact, often mix them to be sure that the selection is clear and precise (e.g., in a previously shown example by P8 in task 14, “Hey blender, move the upper rectangle on the side of the lower one”, the user performs the selection by using both an absolute reference to the 2D approximation of the shape of an object, and a relative reference to the positioning of another object). §.§ Errors Due to the lack of geometry knowledge and/or 3D modeling expertise, novices often commit errors of which they are aware, and errors of which they are not aware. In the first case, they try to prevent or correct the errors; for this reason, we named it “error correction”. In the second case, when a user is either not aware of an error or does not care about trying to fix it, the error simply represents a mistake made during the task execution; for this reason, we named it “execution errors”. We analyze the details of each case in the following paragraphs. §.§.§ Error correction. Different behaviors for correcting the errors have been observed, specifically during and after the command.
Regarding the error correction made during the command, some novices try to prevent their own errors when they recognize one while stating the command, by providing a correction in the same command. For example, P9 during the chair construction task says “Hey blender, create a rectangle over the quadrilateral of length – I mean, height 30 centimeters, depth 5 and side 20–22...”. This command contains multiple corrections, starting from the correction of the name of the dimension that the user wants to set to 30 centimeters, and then correcting the actual size of the side of the rectangle to 22 centimeters Regarding the corrections made after the commands, most of the participants expected some utility commands that are typically available in GUI-based software, like the “undo” and “redo” functions. As an example, P3 during task 14 provided both the command “Blender, undo the last operation”, and “place the other two cylinders as you've placed the previous ones.” This highlights how, although novices may not be familiar with the task of 3D modeling or voice-based interaction, they were able to transfer the knowledge of other software they may have used in the past, expecting that their previous experience would be applicable to the new, unknown system. §.§.§ Execution errors. Some of the mistakes committed by the novices are strictly related to lapsus, lack of knowledge, or system shortcomings. In the case of lapsus, some participants referred to shapes and objects using the wrong name (e.g., P10 was trying to refer to a box by calling it “cylinder” during task 14). In case of lack of knowledge, errors range from wrong names used for dimensions and primitives, to being unaware of the direction of the axis, perhaps by referring to previous knowledge obtained in school. For example, the Y axis in a 2D plane is usually the vertical one, thus some novices expect the Y axis to be the vertical one also in 3D. Finally, we identified system shortcomings, i.e. errors made by the wizard during the execution of the commands: all of these errors can be traced back to the incomprehension of the command, often due to its intrinsic vagueness (see the theme of “theme:mental-model”). §.§ The Gulf of Execution This theme represents the way novices translate their goals into commands. Throughout the sessions, before providing specific commands, we immediately noticed that novices often think aloud to understand what they have to do and how they can translate it to commands like P16 said during task 14 by saying “so, the picture has a different point of view. I should move it a little bit. Ok. Hey Blender, make the cylinder bigger.” Then, by analyzing their commands, we identified three main aspects of the commands where the gulf of execution becomes critical, specifically: [label=*)] * relativity * vagueness * abstraction. §.§.§ Relativity. Here we summarize how novices think about positions, scale, rotation, and selection relative to other parts of the scene. Two main overall frames of reference are used by the novices: the axes and other objects. 
To select an axis, novices adopt three approaches, namely: [label=*)] * axis relative direction: a common way of selecting axes is through their relative direction (depending on the user's point of view), as echoed by P9 during task 11, by saying “move the geometric shape 20 cm to the right”; * the axis color: as an example, during the execution of the last task (the one of creating a chair), P2 referred to the Y axis by its color stating “turn of 180 degrees the box on the green axis”; * axis name: some novices also refer to axes by their actual name, as P19 did during the 12th task by asking the system to “move the right cube 10 centimeters along the X axis.”. When referring to objects' dimensions, novices adopted two main approaches for selection. A first approach consists of using the dimensions' name, as P3 has done in the task of chair creation by saying “move along the y axis of a length equal to the base of the second box the last cylinder”. A second approach used a relative comparison to other dimensions; for example, P3 during task 14 selected an object by stating “move the third cylinder under the highest box [...]”. §.§.§ Vagueness. It encloses a lack of information in the commands provided to reach the goals. In general, the lack of information is caused by: * chaining of multiple commands to describe at a high level a composite shape, as shown by P22 during the chair creation task, by asking “create four cylinders with the same distance to each other.”; * missing data that the system needs to execute the requests; as an example, novices forget to provide some or all dimensions of a shape (e.g., P1 in task 1 stated “create a cube” without providing any dimension), they forget to specify a parameter for a transformation (e.g., P7 in task 10 asked to “rotate of 30 degrees the figure” without specifying a direction). §.§.§ Abstraction. We noticed two behaviors related to the abstraction of the commands. The first one relates a general abstraction over the process to reach the desired goal, as exemplified by P2 that tried to solve task 14 by saying “create a chair using two boxes and four cylinders”. The second one refers to how novices translate the desired 3D shapes into words. For example, shapes are created by providing a general description (e.g., P10 in task 4 by saying “create a 3D rectangle 30 cm high, 20 cm deep, and long 10 cm”, referred to a box as a “3D rectangle”, thus simply describing the shape) or by approximating the desired shape with a similar 2D shape (e.g., P8 during task 4 used “rectangle” instead of “box” by saying “create a rectangle of height 30, width 20, depth 10”). Furthermore, especially German participants, novices also refer to the 3D shapes by using similar real-world objects (e.g., P17 during task 3 stated “create a dice with an edge length of 30 centimeters”, using “dice” instead of “cube”). §.§ Users' Requests We collected requests and suggestions provided by the participants, which provide useful insights on novices' mental model. Among the most common requests, participants often asked to rotate the camera and change their point of view. As an example, P11 during the last task of creating a chair, asked “can I see it from below?” and “can I see it from above” to perform some minor adjustments and corrections to the positions of the 3D objects. This behavior underlines the need to provide a way to allow novices to rotate their point of view. 
This functional requirement is strictly related to the theme of theme:selection-of-objects as it may benefit from different interaction modalities that could be explored (e.g., using AR). Another common request is related to the actual dimensions: when novices explicitly set size in the command (for example, in the third task), they want to check that the system created an object of the right size. This is exemplified by P10 which explicitly asked if “can I ask it to check the dimensions?” in the third task. This suggestion does not translate to an additional requirement for the AI model that recognizes users' commands, but it rather provides some insights on the requirements of the whole 3D modeling tool. Other minor suggestions regarded the customization of the axis: some participants expected the Y axis to be the “vertical” one as it usually happens in 2D drawings, rather than the Z axis as it happens in 3D modeling tools like Blender. Providing such a customization option would surely reduce the error rate in a final system, as the novices could adapt it to their own knowledge. § DISCUSSION AND IMPLICATIONS Based on the findings of the WoZ study, in the following we present design implications for the development of future voice-based 3D modeling tools for novice designers and relate them to the wider research literature around voice assistants and general user experience principles. §.§.§ Understand user corrections and adapt to them. This requirement stems from the errors the users are aware of (see theme theme:errors). It poses requirements that impact two different facets of future voice-based digital modeling tools: the NLU layer and the conversation flow. Regarding the NLU layer, systems must be able to intercept user corrections and aborted commands. Based on our findings, we note that recognizing uncertainty, hesitation, doubt, and error awareness early on is particularly crucial in the digital modeling context, as users displayed them frequently due to their unfamiliarity with 3D modeling <cit.>. Regarding the conversation flow, after intercepting the error correction, it is important to design a dialog that helps users understand the error and recover from it <cit.>. Moore and Arar <cit.> provide valuable pointers through their Natural Conversation Framework which proposes a set of conversational patterns. Some of these patterns relate to user corrections and can be applied to voice-based digital modeling. An example inspired by this framework that relates to errors that users correct while they issue a 3D modeling command might be: User: Hey blender, increase of 10 centimeters -no- of 20 centimeters the sides of the geometric figure Agent: I'm sorry, I didn't understand. Do you mean an increase of 10 or 20 centimeters? User: 20 centimeters. Agent: Ok, I'm increasing of 20 centimeters the sides of the geometric figure. §.§.§ Deal with vague and incomplete commands . We have identified numerous theme:errors by the lack of knowledge and the system's shortcomings that users were unaware of. These errors are related to incomprehension due to the vagueness and abstraction of some commands. Self-repair strategies should be introduced to improve interaction <cit.>. To this aim, we identified two possible solutions. The first one consists of sensible defaults: in case of a vague command, the voice assistant fixes it by selecting a relevant parameter from a list of alternatives. 
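One way to wire such a strategy into the NLU layer is sketched below: a partially filled intent is completed either from a sensible default drawn from the scene context or by emitting a clarification question. The slot names, default values, and dialogue policy are our own assumptions for illustration and are not part of the system evaluated in this study.

from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class CreateShapeIntent:
    shape: str                       # e.g. "cylinder"
    size: Optional[float] = None     # missing slot in a vague command
    reference: Optional[str] = None  # e.g. "on top of the cube"

def resolve(intent, scene_context, ask_user=False):
    """Return a (possibly completed) intent and the assistant's next utterance."""
    if intent.size is not None:
        return intent, None
    if not ask_user:
        # Sensible default: reuse a dimension of the referenced object, if any.
        default = scene_context.get("cube_side", 10.0)
        filled = replace(intent, size=default)
        return filled, f"OK, I'm creating a {intent.shape} with size {default}. Is it OK?"
    # Interactive alternative: hand control back to the user.
    return intent, f"What size should the {intent.shape} be?"

Under these assumptions, calling resolve(CreateShapeIntent("cylinder", reference="on top of the cube"), {"cube_side": 20.0}) returns a completed intent together with a confirmation prompt, in line with the hybrid solution discussed below.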
For example, if the user says “create a cylinder on top of the cube”, the cylinder diameter is not specified. In this case, the system can assume that the diameter is equal to the side of the cube. This solution can also benefit from the dialog context: as suggested by Jain et al., resolving and maintaining the dialog context can help select the most appropriate sensible default from a list of alternatives <cit.>. For example, if other cylinders have been previously created with a given diameter on top of cubes the same can be applied to the new ones in case of vague commands. This allows the system to be proactive, anticipating the users' requests as suggested by Völkel et al. <cit.>. The second solution consists of interactively guiding the user by providing the missing information. With reference to the previous command of the box and cylinder, instead of using defaults, the voice assistant can explicitly ask the user for the desired radius. The strategy adopted by the voice assistant is informed by the degree of system autonomy or desired user control. A hybrid solution can also benefit from both approaches: the selected sensible default can be used by the voice assistant to ask the user if the default is right, for example, with reference to the previous case the voice assistant can reply: “OK, I'm creating a cylinder with a diameter equal to the side of the cube. Is it OK?” §.§.§ Translate interaction conventions to voice-based digital modeling . Users commonly apply their experience with software applications to other applications or even different domains. As an example, some participants expected to execute “undo” or “redo” commands, which are common across applications and domains. This is in line with the traditional Nielsen heuristics of “user control and freedom” and “consistency and standard” <cit.>. The latter states that “users should not have to wonder whether different words, situations, or actions mean the same thing”, thus the system should “follow platform and industry conventions” (from Nielsen <cit.>). For this reason, a voice-based 3D modeling system should provide such common operations, like the aforementioned “undo” and “redo” commands. Further exploration may be required to clearly define and match the set of expected commands to voice-based digital modeling. §.§.§ Adopt simple operations even for the creation of composite 3D models . Based on the theme theme:creation-and-manipulation, we note that most users follow similar and simple approaches even in complex tasks. For example, by analyzing task 13 (which consisted of creating a figure having a cylinder on top of the cube), multiple approaches might be adopted, but novices used only basic operations (creation and translation) to create both a simple cube and a cylinder and then moving the latter on top of the former. This highlights that, although many technical operations may be implemented in voice assistants for digital modeling, it is important to provide novices with simple operations to create and compose 3D objects, rather than prescribing more complex operations like “extrusion” and “insetting”, which are most adequate for skilled users <cit.>. §.§.§ Match digital modeling workflows with novices' expectations and experiences from building physical objects . 
Related to the theme:creation-and-manipulation, but by focusing on the last task (that consisted of the creation of a chair), we noticed that the majority of the users started by creating the base cylinders (almost all users started with a phrase like “create four cylinders”). This surely provides an interesting insight on how people approach the creation of composite 3D objects. By creating the base cylinders first, users are basically following an approach that starts from the bottom and proceeds upwards. This is not different from the approach that users should follow if they were composing physical shapes: by starting from the bottom, they are able to stack the various shapes without the risk of their composition to “fall down”. This indication can be useful if wizard procedures are introduced to guide the creation of composite 3D objects; for example, the voice assistants can start the interaction by asking which is the shape, with its features, that must be placed at the bottom, then going on guiding the user to create other shapes on top of the previous ones. §.§.§ Provide alternatives for the selection of 3D objects . By reflecting on the theme of theme:selection-of-objects, we argue that it is among the most critical ones: most of the 3D modeling revolves around the selection of objects to be composed. We found that several and different techniques have been adopted by the novices. For example, a common solution is represented by commands to select an object by referring to the entire scene, in other words in an absolute way. We also documented commands that use relative references, for example, their relative time of creation, their relative position, their dimensions, and by inverting the current selection. The last approach is represented by the implicit selection of the objects in the scene. These strategies represent different solutions the users can adopt to select a 3D object, and thus the voice assistant should accommodate all of them. To simplify the interaction, future voice assistants can be complemented with additional interaction modalities like gestures or eye tracking, where users could simply point <cit.> or gaze <cit.> at the object or surface they want to select. §.§.§ Understand commands that are relative to the user's point of view . As described in the themes theme:mental-model and theme:selection-of-objects, users often execute commands that are related to their point of view, in particular, to change the camera perspective, to select an axis, and to select a 3D object. In other words, we found that a common way for novices to issue commands is through the “screen” coordinate system <cit.>, as provided by some professional 3D modeling systems[<https://shorturl.at/fGLRZ>], by using common words such as “left” and “right”, as P9 did during task 11 with the command “move the geometric shape 20 cm to the right”. Furthermore, novices often provided commands relative to both their point of view and other objects (as P10 did during task 13: “insert a cylinder on top of the cube”). This implies that future voice assistants must be equipped with some way of understanding the 3D context into which the command is provided, and they must take into account the user's point of view during the intent-matching process. §.§.§ Grant multiple ways to refer to the axes . 
§.§.§ Grant multiple ways to refer to the axes . Users referred to the axes of the 3D scene by adopting different approaches: by indicating the axis color, by referring to the user's relative direction, or by using the axis name (see theme theme:mental-model); some users also preferred to switch which of the Y and Z axes acts as the “vertical” axis (see theme theme:users-suggestions). This ambiguity is also found in professional systems, as some of them use the Z axis as vertical while others use the Y axis instead <cit.>. This behavior should be considered in the design of voice assistants for 3D modeling, since this is a core activity that, if not adequately supported, might lead to ineffective user interaction. §.§.§ Design for complex commands . Users often uttered multiple chained commands to execute various actions. In our study, it was possible to accommodate such chained commands thanks to the WoZ setup, but voice assistants are typically restricted to simple standalone commands. Similar to what Fast et al. proposed for complex tasks <cit.>, voice-based systems for 3D modeling should also address this requirement, which strongly impacts the design of their NLU layer, since it must be able to understand and execute multiple chained commands. §.§.§ Favor explicit trigger words . Previous work by Vtyurina et al. argued that forcing the use of explicit trigger words would constrain user interactions, suggesting the use of implicit conversation cues for driving the dialog <cit.>. On the contrary, during our experiments novices used implicit conversational cues while thinking about their workflow and as a natural reaction after a successful command execution (see theme:mental-model), that is, in utterances that were not intended as commands: this highlights the need for future voice-based systems to provide clear explicit activation cues and trigger words, to avoid any unintentional activation that would disrupt the users' workflow. §.§.§ Embrace diversity in naming approaches . As novices usually have little to no knowledge of the 3D modeling domain, they often have to resort to different naming approaches when dealing with shapes for which they do not recall the “right” name. As already highlighted in theme:mental-model, novices can refer to shapes by providing high-level descriptions (e.g., “3D rectangle” instead of “box”), 2D approximations (“rectangle” instead of “box”), or by associating them with a real-world object (e.g., “dice” instead of “cube”). For this reason, future systems must be able to understand both analogies and descriptions of shapes. A concrete solution might be the adoption of a lexical ontology like WordNet <cit.> to infer the shape name from the real-world object mentioned by the user. § LIMITATIONS OF THE STUDY Our study is an initial step toward understanding how novices approach voice-based 3D modeling. We have identified some limitations of our work. First, the novices' languages deserve a wider exploration: our study highlights very small differences between German and Italian participants that can be ascribed to their culture; however, a similar study where participants use their native languages might be useful to understand how language might impact the resulting mental model. Similarly, this study does not focus on how aspects like ethnicity, socio-economic status, and age might impact the novices' mental model. Another limitation regards the tasks: the ones used in the study are representative of the most common operations for designing 3D models, but digital fabrication often involves the design of objects that are more complex than a chair.
In addition, the set of proposed tasks does not cover all possible operations (e.g., selecting textures and making holes). Future work may also study differences between the mental model of lay users (the target of this study) and novices in 3D modeling who are domain experts (e.g., they have expertise in sculpting or 3D world composition, but do not know how to model). Similarly, the proposed voice-based interaction approach may be compared with alternative solutions based on mouse and keyboard or multi-modal approaches, to explore the pros and cons of each solution. Finally, Blender has been selected as the 3D modeling tool because of the advantages reported in <ref>; however, its UI is designed for WIMP interaction, thus it presents commands, buttons, functions, etc., that might bias or confuse novices. Although we carefully hid all the unnecessary parts of the Blender UI, a system purposely designed to better fit voice interaction might be adopted to elicit the mental model. § CONCLUSION Voice interaction is emerging as a promising paradigm that can simplify 3D modeling for digital fabrication. However, the novices' mental model is never considered when designing voice-based 3D modeling systems. In addition, voice interaction is usually built on top of WIMP systems instead of designing the voice paradigm and the whole system from scratch. This study addresses these limitations by investigating the novices' mental model in 3D modeling and contributes to the state of the art by identifying a set of design implications that support the definition of voice-based interaction paradigms for the design and customization of personalized 3D models. This contribution aims to lower the barrier to 3D modeling, thus supporting the wider democratization of digital fabrication. As future work, we are now addressing the limitations reported in the previous section. We are also working on the development of a prototype of a voice assistant integrated into Blender: it is currently being developed in DialogFlow <cit.> and has been designed considering the design implications proposed in this study. The aim is to study novices' behavior when interacting with real systems, also exploring if and how the design indications suggested in this study accommodate the design of more complex objects in more realistic situations, for example, by proposing scenarios instead of tasks. §.§.§ Acknowledgements This work has been funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 952026 (<https://www.humane-ai.eu/>). The research of Andrea Esposito is funded by a Ph.D. fellowship within the framework of the Italian “D.M. n. 352, April 9, 2022” - under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment 3.3 - Ph.D. Project “Human-Centered Artificial Intelligence (HCAI) techniques for supporting end users interacting with AI systems”, co-supported by “Eusoft S.r.l.” (CUP H91I22000410007).
http://arxiv.org/abs/2307.05429v1
20230711165101
A study of spirallike domains: polynomial convexity, Loewner chains and dense holomorphic curves
[ "Sanjoy Chatterjee", "Sushil Gorai" ]
math.CV
[ "math.CV", "math.DS", "32E20, 32H02, 30K20" ]
A study of spirallike domains]A study of spirallike domains: polynomial convexity, Loewner chains and dense holomorphic curves Department of Mathematics and Statistics, Indian Institute of Science Education and Research Kolkata, Mohanpur – 741 246 [email protected] Department of Mathematics and Statistics, Indian Institute of Science Education and Research Kolkata, Mohanpur – 741 246 [email protected], [email protected] Sanjoy Chatterjee is supported by CSIR fellowship (File No-09/921(0283)/2019-EMR-I). Sushil Gorai is partially supported by a Core Research Grant (CRG/2022/003560) of SERB, Govt. of India [2020]Primary: 32E20, 32H02, 30K20; Secondary: 47A16 In this paper, we prove that the closure of a bounded pseudoconvex domain, which is spirallike with respect to a globally asymptotic stable holomorphic vector field, is polynomially convex. We also provide a necessary and sufficient condition, in terms of polynomial convexity, on a univalent function defined on a strongly convex domain for embedding it into a filtering Loewner chain. Next, we provide an application of our first result. We show that for any bounded pseudoconvex strictly spirallike domain Ω in ^n and given any connected complex manifold Y, there exists a holomorphic map from the unit disc to the space of all holomorphic maps from Ω to Y. This also yields us the existence of (Ω, Y)-universal map for any generalized translation on Ω, which, in turn, is connected to the hypercyclicity of certain composition operators on the space of manifold valued holomorphic maps. [ Sanjoy Chatterjee and Sushil Gorai August 12, 2023 ====================================== § INTRODUCTION AND STATEMENTS OF THE RESULTS The domains we study in this paper are pseudoconvex domains that are spirallike with respect to certain holomorphic vector fields. Recall that a holomorphic vector field V on a domain ⊂ℂ^n is a real vector field on such that V(z)=∑_i=1^n a_i(z) ∂/∂ x_i+b_i(z) ∂/∂ y_i, where (a_j(z)+ib_j(z)) is holomorphic function on for all j ∈{1,2, ⋯ ,n}. We denote the set of all holomorphic vector fields on by 𝔛_𝒪(). Any holomorphic map F→ can be viewed as a holomorphic vector field on . We will use matrices with complex entries while talking about linear vector fields on . We will need the notion of spirallike domain with respect to a holomorphic vector field from <cit.> to move further into our discussion. Let Ω and be domains in ℂ^n, such that 0 ∈⊂⊆. Suppose that Φ be a holomorphic vector field on Ω such that Φ(0)=0. Then Ω is said to be spirallike with respect to Φ, if for any z∈Ω, the initial value problem dX/dt =Φ(X(t)) X(0) =z, has a solution defined for all t ≥ 0 with X(t,z) ∈ for all t>0 and X(t) → 0 as t →∞. We say that is strictly spirallike with respect to holomorphic vector field Φ if X(t,z) ∈ for all t>0 and for all z ∈. Let ⊂ be a domain containing the origin. We say that is spiralshapelike (strictly spiralshapelike), with respect to the holomorphic vector field Φ, if there exists Ψ∈ Aut(), such that Ψ(0)=0, and Ψ() is spirallike (strictly spirallike) with respect to Φ. Next, we briefly mention some notions of stability of the equilibrium point of a system of differential equations (see <cit.> for details). The notions of stability will play a vital role in our study. Let E ⊂ℝ^n be an open set containing the origin. Suppose that f E →ℝ^n is a continuously differentiable mapping such that f(0)=0. Consider the system of differential equation dX(t)/dt = f(X(t)) ,   X(0) =x_0. 
Assume that the solution of the system (<ref>) exists for every t ≥ 0 and ∀ x_0∈ E. Then: * The origin is said to be a stable equilibrium point of the system (<ref>) if for every ϵ>0 there exists δ >0 such that X(t,x_0) ∈ B(0, ϵ) for every x_0∈ B(0, δ) and every t ≥ 0. * The origin is said to be a globally asymptotically stable equilibrium point (with E=ℝ^n) if the origin is stable and lim_t→∞ X(t,x_0) =0 for all x_0∈ℝ^n. A vector field V ∈𝔛_𝒪(ℂ^n) is said to be a globally asymptotically stable vector field if the origin is the globally asymptotically stable equilibrium point of V. In this paper, we always consider globally asymptotically stable vector fields whose equilibrium point is the origin. In this paper we study the polynomial convexity of the closure of pseudoconvex strictly spirallike domains. We also study the embedding of a univalent function into a filtering Loewner chain through the polynomial convexity property, along the lines of Hamada <cit.>. We also use the polynomial convexity of certain domains to study dense holomorphic curves in the space of all holomorphic maps. We now describe each of these separately in the following subsections. §.§ Polynomial convexity For a compact subset K⊂ℂ^n, the polynomially convex hull of K, denoted by K̂, is defined by K̂:={z ∈ℂ^n:|p(z)| ≤sup_w ∈ K|p(w)|, ∀ p ∈ℂ[z_1,z_2, ⋯ ,z_n]}. We say that K is polynomially convex if K̂=K. Polynomial convexity is the main ingredient in the study of uniform approximation by polynomials. In ℂ, a compact subset K is polynomially convex if and only if ℂ∖ K is connected. In general, for n >1, it is difficult to determine whether a compact subset of ℂ^n is polynomially convex or not. It is known that any compact convex subset of ℂ^n is polynomially convex. In particular, the closure of any bounded convex domain is polynomially convex. However, in <cit.>, it was shown that the closure of a bounded strongly pseudoconvex domain with a smooth boundary may not be polynomially convex. In <cit.>, Joiţa gave an example of a strongly pseudoconvex domain in ℂ^n with real analytic boundary whose closure is not polynomially convex. The domain in the example of Joiţa <cit.> is also a Runge domain. This raises a natural question: for which classes of pseudoconvex domains is the closure polynomially convex? Hamada <cit.> proved that bounded pseudoconvex domains that are strictly spirallike with respect to certain linear vector fields have polynomially convex closure. [Hamada] Let A∈ M_n(ℂ) be such that inf_||z||=1 Re⟨ Az, z⟩>0. Let Ω⊂ℂ^n be a bounded pseudoconvex domain containing the origin such that e^-tAw ∈Ω for all t > 0 and for all w ∈Ω̄. Then Ω̄ is polynomially convex. In this article, we are able to provide a generalization of <Ref>, which enlarges the class of bounded pseudoconvex domains that have polynomially convex closure. We need the following definition for the demonstration of our results. The first result of this paper states that the conclusion of <Ref> is also true if the domain D is spirallike with respect to any asymptotically stable holomorphic vector field. More precisely, we present: Let V ∈𝔛_𝒪(ℂ^n) (n ≥ 2) be a complete asymptotically stable vector field. Let D ⊆ℂ^n be a bounded pseudoconvex domain containing the origin. If there exists ψ∈ Aut(ℂ^n) such that ψ(D) is a strictly spirallike domain with respect to V, then D̄ is polynomially convex. It is proved in <cit.> (in <cit.> for the linear case) that any spirallike domain with respect to an asymptotically stable vector field is Runge.
However, in view of Joiţa <cit.>, this does not imply that the closure of the domain, in case the domain is bounded, is polynomially convex. Hamada also provided an example in <cit.> showing that the strictly spirallike assumption is crucial, even when the vector field is linear. This suggests that the strictly spirallike assumption in <Ref> is a natural one. The following corollary gives a condition for polynomially convex closure in terms of the defining function of the domain. For a holomorphic vector field V(z)=∑_i=1^n a_i(z) ∂/∂ x_i+b_i(z) ∂/∂ y_i, we define V(f)(z):=∑_j=1^n(a_j(z)+ib_j(z))∂ f/∂ z_j, where f ∈(). Let D ⊂ℂ^n be a bounded pseudoconvex domain with C^α boundary, for some α≥ 1, containing the origin. Let V ∈𝔛_𝒪(ℂ^n) be a complete globally asymptotically stable vector field. Suppose that θ:ℝ×ℂ^n→ℂ^n is the flow of the vector field V. Assume that U is an open subset such that {θ(t,z)|t ≥ 0, z ∈D̄}⊂ U and r: U →ℝ is a defining function of D. If Re(V(r))<0 on U, then D̄ is polynomially convex. §.§ Loewner chains The issue of embedding univalent functions within Loewner chains is the subject of our next discussion. Loewner <cit.> invented a technique, now referred to as Loewner chains, for embedding univalent functions within particular families of univalent functions. The Loewner theory on Kobayashi hyperbolic complex manifolds was studied in <cit.>. Poreda studied the Loewner chain on the polydisc in his papers <cit.> (see <cit.>, <cit.> and the references therein for an overview of recent results about the embedding of univalent maps into Loewner chains). The embedding of a univalent map defined on a bounded strongly convex domain into some Loewner chain is the subject of our second result. We need the following definitions before we present the statement. Let D ⋐ℂ^n be a domain containing the origin and 𝒮(D):={f:D→ℂ^n: f(0)=0, df(0)=I_n, f is univalent}. Let d ∈ [1, ∞]. A family of mappings f_t: D →ℂ^n is called an L^d-normalized Loewner chain on D if i. for each fixed t ≥ 0, f_t: D →ℂ^n is a univalent holomorphic mapping such that f_t(0)=0 and df_t(0)=e^tI_n; ii. for 0≤ s <t<∞, f_s(D) ⊂ f_t(D); iii. for any compact set K ⊂ D and any T > 0, there exists a function κ_K,T∈ L^d([0,1], [0,∞)) such that for all z ∈ K and for all 0 ≤ s ≤ t ≤ T we have f_s(z)-f_t(z)≤∫_s^tκ_K,T(x) dx. The Loewner range of a Loewner chain is defined as the biholomorphism class of R(f_t):=∪_s ≥ 0Ω_s, where Ω_s:=f_s(D). A function f ∈𝒮(D) is said to be embedded into an L^d-normalized Loewner chain if there exists an L^d-normalized Loewner chain (f_t) such that f_0=f. Here we fix some notation. 𝒮^1(D) :={f ∈𝒮: f embeds into a normalized Loewner chain (f_t)} 𝒮^0(D) :={f ∈𝒮^1: {e^-tf_t}_t ≥ 0 is a normal family} 𝒮_ℛ(D) :={f ∈𝒮: f(D) is a Runge domain} For the unit disc 𝔻⊂ℂ, 𝒮^0(𝔻)=𝒮^1(𝔻)=𝒮(𝔻), but, for n ≥ 2, the following chain of inclusions holds for D=𝔹^n: 𝒮^0(𝔹^n) ⊊𝒮^1(𝔹^n) ⊊𝒮(𝔹^n). The class 𝒮^0(𝔹^n) is compact in the topology of uniform convergence on compact subsets of 𝔹^n, but 𝒮(𝔹^n) and 𝒮^1(𝔹^n) are not compact. Hence, in higher dimensions, 𝒮^0(𝔹^n)⊊𝒮^1(𝔹^n) (see <cit.>). In <cit.>, it is shown that 𝒮^1(𝔹^n) ⊊𝒮(𝔹^n). Recently, Bracci-Gumenyuk <cit.> showed that 𝒮_ℛ(𝔹^n) ⊊𝒮^1(𝔹^n). In <cit.>, Arosio-Bracci-Wold introduced the notion of a filtering normalized Loewner chain on 𝔹^n. We mention it here for any bounded domain. Let D ⋐ℂ^n and let (f_t) be a normalized Loewner chain on D. We say that (f_t) is a filtering normalized Loewner chain provided the family Ω_t:=f_t(D) satisfies the following conditions. 1. Ω_s⊂Ω_t for all t >s; and 2.
for any open set U containing Ω_s there exist t_0>s such that _t⊂ U for all t ∈ (s, t_0). Let 𝒮_𝔉^1(D)={f ∈𝒮(D): f embeds into filtering normalized Loewner chain, R(f_t)=}. The connection between an univalent function to be embedded in a filtering normalized Loewner chain and the polynomial convexity of the closure of the image under that function was first explored by Arosio-Bracci-Wold<cit.>. They proved the following result for ⋐ be a bounded pseudoconvex domain with ^∞ boundary, which is biholomorphic to the open unit ball. <cit.> Let n ≥ 2 and let f ∈𝒮_ℛ. Assume that :=f(𝔹^n) is bounded strongly pseudoconvex domain with C^∞ boundary. Then f ∈𝒮_𝔉^1 if and only if Ω is polynomially convex. In the same article, Arosio-Bracci-Wold deduced that 𝒮_𝔉^1(𝔹^n)=𝒮_ℛ (𝔹^n) as a corollary of <Ref> (see <cit.>). Our second result in this article we replace the unit ball with a strongly convex domain. Let 0 ∈ D ⋐ be a strongly convex domain with C^m boundary and f D → f(D) be a biholomorphism. Assume that :=f(D) is bounded strongly pseudoconvex domain with C^m boundary for some m>2+1/2. Then f can be embedded into a filtering L^d-Loewner chain with Loewner range if and only if f(D) is polynomially convex. It also follows from <Ref> and Andersen-Lempert theorem (<cit.>) that, for any bounded strongly convex domain D ⋐, 𝒮_𝔉^1(D)=𝒮_ℛ (D) (See <Ref>). In order to prove <Ref>, we proved the following theorem which might be of independent interest. Let ⋐ be a strongly pseudoconvex domain with C^k boundary which is biholomorphic to some bounded strongly convex domain with C^k boundary for some k>2+1/2. Then the following are equivalent. 1. is polynomially convex. 2. ∃ Ψ∈Aut() such that Ψ() is strongly convex. Moreover, if one of the conclusions holds then is a Runge domain. <Ref> is a general version of <cit.> and <cit.>. The following corollary provides a class of strongly pseudoconvex domains that are biholomorphic to strongly convex domains through automorphisms of ^n. Let ⋐ be a strongly pseudoconvex domain with ^α boundary which is biholomorphic to a strongly convex domain with ^α boundary for α >2+1/2 and is spirallike with respect to a globally asymptotically stable vector field. Then there exists Ψ∈ Aut() such that Ψ() is strongly convex. §.§ Dense holomorphic curves Next, we will demonstrate an application of <Ref> in the context of finding a dense holomorphic map and constructing universal mapping. For any two complex manifolds X, Y, the set of all holomorphic maps from X to Y is denoted by (X, Y). If Y is then the set of all holomorphic functions on X is denoted by (X). Let Y be a complex manifold. The main question here is: For a given complex manifold Z, does there exists a holomorphic map f Z → Y such that f(Z) is dense in Y? In this case, we say that f is a dense holomorphic map from Z to Y. In <cit.>, Winkelmann proved that if X and Y are irreducible complex spaces and X admits a non-constant bounded holomorphic function then there exists a holomorphic map from X to Y with dense image (for the notions of complex manifold and complex space see <cit.>). In <cit.>, Forstnerič and Winkelmann showed that if X is a connected complex manifold then the set of all holomorphic maps f→ X with f()=X is dense in 𝒪(, X) with respect to the compact open topology. In this paper we consider the holomorphic maps with value in (X,Y), where X and Y are complex manifolds. Let X, Y, Z be connected complex manifolds and 𝒮(X,Y). 
We say that a map f Z →𝒮 is holomorphic if the map f̂ Z × X→ Y defined by f̂(z,x)= f(z)(x) is holomorphic. In this case, f̂ is said to be associated holomorphic map for f. Following the terminology introduced by Kusakabe <cit.>, we say a subset 𝒮𝒪(X,Y) is Z-dominated if there exists a dense holomorphic map f Z →𝒮. In <cit.>, Kusakabe proved the following result. <cit.> Let ⋐ be a bounded convex domain and Y be a connected complex manifold. Then (,Y) is 𝔻-dominated. By demonstrating an example <cit.>, Kusakabe showed also that <Ref> is not true in general for bounded pseudoconvex domain. In this paper, we use <Ref> to provide a class of pseudoconvex domains in for which (, Y) is 𝔻-dominated. Our next theorem reads as Let ⋐ be a bounded pseudoconvex domain containing the origin and is strictly spirallike with respect to globally asymptotic stable vector field V ∈𝔛_𝒪() and Y be a connected complex manifold. Then (,Y) is 𝔻-dominated. §.§ Universal mappings and composition operators We also apply <Ref> to obtain a result providing the existence of a universal mapping. In <cit.>, Birkhoff constructed for every sequence of real number {b_k}_k ∈𝐍 with lim_k →∞ b_k=∞, a holomorphic function F ∈(), with the property that if F_k→ defined by F_k(z)=F(z+b_k) then {F_k| k ∈ℕ} is dense in () with respect to the compact open topology. Such a function is called universal function. Later, Seidel and Walsh <cit.> proved the result for the unit disc replacing the Euclidian translation with a non-Euclidian translation. In <cit.>, Fernando proved that any compactly divergent sequence of automorphisms of the open unit ball and the polydisc admits a universal function. Existence of universal function has a very close connection with the hypercyclicity of a composition operator (See <cit.> for a nice survey in this topic). Let T X → X be a self-map on a topological vector space X, and (n_k)_k ∈ℕ is an increasing sequence of natural numbers. T is said to be hypercyclic with respect to (n_k) if there exists x ∈ X such that {T^n_k(x)| k ∈ℕ} is dense in X, where T^m:=T ∘ T ∘⋯∘ T_m times for every m ∈ℕ . In <cit.>, Zaja̧c proved the following result: Let be a connected Stein manifold, and ϕ∈(, ) and let (n_k)_kℕ be an increasing sequence. Then the composition operator C_ϕ() →() defined by C_ϕ(f)=f ∘ϕ is hypercyclic with respect to (n)_k if and only if ϕ in injective and for every compact () convex subset K there exists such that K ∩ϕ^n_k(K) ≠∅ and the set K ∪ϕ^n_k(K) is () convex. Motivated by the properties of ϕ in the <Ref>, Andrist and Wold <cit.>, defined generlized   translation which is as follows : Let X be a Stein space and τ∈ Aut(X). The automorphism τ is a generalized translation if for any compact 𝒪(X)-convex subset K ⊂ X there exists j ∈ℕ such that 1. τ^j(K) ∩ K =∅, and 2. τ^j(K) ∪ K is 𝒪(X)-convex. In <cit.>, it is proved that if X is a stein manifold with density property (see <cit.> for the notion of density property), and τ∈ Aut(X) is a generalized translation, then there exists F ∈ Aut_0(X) such that the subgroup generated by τ and F is dense in Aut_0(X) in compact open topology, where Aut_0(X) is path connected component of identity automorphism. In <cit.>, Kusakabe first studied the hypercyclicity of the composition from the space (, Y), where Y is a connected complex manifold. Let X be a complex manifold and τ∈ Aut(X) and assume that 𝒮⊂(X, Y) is a τ^* invariant subset, i.e. τ^*𝒮⊂ 𝒮, where τ^*(X,Y) →(X,Y) defined by τ^*(f)=f∘τ. 
A holomorphic map F ∈𝒮 is called an 𝒮-universal map for τ if {F ∘τ^j}_j is dense in 𝒮. From the above-mentioned result due to Brikhoff, for every translation mapping τ(z)=z+a, with a ≠ 0, there exists an ()-universal map for τ. In <Ref>, we had the existence of universal map when X=, a Stein manifold, and Y=, and τ∈(, ) is generalized translation. As an application of <Ref>, Kusakabe proved that <cit.> Let be a bounded convex domain, τ∈ Aut() is a generalized translation, and Y be a connected complex manifold. Then there exists an (, Y)-universal map for τ. The Carathéodory pseudodistance is defined as follows. We will need this definition to discuss our next result. Let be a domain and ρ denotes the Poincaré distance in . The Carathéodory pseudodistance between z,w ∈ is denoted by c_(z,w) and is defined by c_(z,w)=sup{ρ(f(z),f(w))|f ∈(, )}. A domain is said to be Carathéodory hyperbolic if (, c_) is a metric space. A c_-ball centered at x∈ and with radius r>0 is defined by B_c_(x,r)={z∈| c_(x,z)<r}. A Carathéodory hyperbolic domain is said to be c_-finitely compact if all c_-balls with center in and finite radius, is relatively compact in with respect to the usual topology of . As an application of <Ref>, we are also able to extend <Ref>, for bounded pseudoconvex domains which are strictly spirallike with respect to a globally asymptotic stable vector field and c-finitely compact. More precisely, our next theorem is Let V ∈𝔛_𝒪() be a complete globally asymptotic stable vector field. Let ⋐ be a bounded pseudoconvex domain containing the origin. Suppose that is c_-finitely compact and strictly spirallike domain with respect to the vector field V. Then for any generalized translation τ∈ Aut() and any connected complex manifold Y, there exists an (, Y)-universal map for τ. In particular, the conclusion of the theorem is true if is a strongly pseudoconvex domain with ^2 boundary and spirallike with respect to the vector field V. <Ref> and <Ref> can be interpreted as the composition operator C_τ(,Y)→(, Y) is hypercyclic for corresponding domains in those results. Our theorems extend Kusakabe's results. For example, Ω={(z_1,z_2)∈ℂ^2 | |z_1|<5, |z_2|<e^-|z_1|)} is a bounded pseudoconvex domain satisfying the hypothesis of <Ref>, <Ref>. The domain is not convex (See <Ref>). Now, applying <Ref> and <Ref>, we obtain the following corollary Let Ω={(z_1,z_2)∈ℂ^2 | |z_1|<1, |z_2|<e^-|z_1|)} and Y be any complex manifold. Then i. The set of all dense holomorphic maps f →(, Y)) is dense in (, (,Y). ii. For every connected complex manifold Y and any generalized translation τ∈ Aut() there exists an 𝒪(, Y)-universal map for τ. The proof of the above corollary can be seen as follows: It is proved in <Ref> that is strictly spirallike with respect to an asymptotic stable vector field. Therefore, <Ref> and <Ref> proves the corollary. It seems from the above discussion that it is very important to know which are the generalized translation on a given domain . In this regard, we state our next result: Let be a bounded strongly pseudoconvex domain with ^3 boundary containing the origin and strictly spirallike with respect to complete globally asymptotic stable vector field V ∈𝔛_𝒪(^n). Then for an automorphism τ∈ Aut() the following are equivalent. 1. τ is a generalized translation. 2. τ has no fixed point. 3. {τ^j}_j ∈ℕ is compactly divergent. For bounded convex domain this result was proved in <cit.>. We extend that to a certain class of strongly pseudoconvex domains. 
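The corollary above rests on two facts about Ω = {(z_1,z_2)∈ℂ^2 : |z_1|<1, |z_2|<e^{-|z_1|}}: it is not convex, and (as shown in the Examples section) it is strictly spirallike with respect to the globally asymptotically stable vector field F(z_1,z_2)=(-2z_1, -3z_2+z_1z_2), whose flow is X(t,z)=(z_1e^{-2t}, z_2e^{-3t}e^{(z_1/2)(1-e^{-2t})}). The short NumPy script below is only a numerical sanity check of these two facts, not part of any proof; the helper names in_omega and flow are ours.

import numpy as np

def in_omega(z1, z2, r=1.0):
    return abs(z1) < r and abs(z2) < np.exp(-abs(z1))

def flow(t, z1, z2):
    # Flow of F(z1, z2) = (-2 z1, -3 z2 + z1 z2), as written in the Examples section.
    w1 = z1 * np.exp(-2 * t)
    w2 = z2 * np.exp(-3 * t) * np.exp((z1 / 2) * (1 - np.exp(-2 * t)))
    return w1, w2

# 1) Omega is not convex: two points of Omega whose midpoint leaves Omega.
a = (0.1 + 0j, np.exp(-0.1) - 1e-3)
b = (0.9 + 0j, np.exp(-0.9) - 1e-3)
mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
print(in_omega(*a), in_omega(*b), in_omega(*mid))       # True True False

# 2) The flow maps points of the closure of Omega (boundary points included)
#    into Omega for every t > 0, and pushes them towards the origin.
rng = np.random.default_rng(0)
samples = [(np.exp(1j * p), np.exp(-1) * np.exp(1j * q))
           for p, q in [(0.0, 0.0), (1.0, 2.0), (np.pi, 0.5)]]
for _ in range(1000):
    rho = rng.uniform(0.0, 1.0)
    samples.append((rho * np.exp(1j * rng.uniform(0, 2 * np.pi)),
                    rng.uniform(0.0, np.exp(-rho)) * np.exp(1j * rng.uniform(0, 2 * np.pi))))
for z1, z2 in samples:
    for t in (1e-3, 0.1, 1.0, 10.0):
        assert in_omega(*flow(t, z1, z2))
print("X(10, (0.99, 0.37)) is close to 0:", abs(flow(10.0, 0.99, 0.37)[0]) < 1e-8)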
If V ∈𝔛_𝒪() is a globally asymptotic stable vector field on , with a non-zero equilibrium point and the given domain is spirallike with respect to the vector field V ∈𝔛_𝒪() containing the equilibrium point of the vector field V, then <Ref>, <Ref>, <Ref>, <Ref>, <Ref> are also true. § PRELIMINARIES AND TECHNICAL RESULTS For any subset A ⊂, we denote B(A,r):={z ∈: dist(z, A)<r }, where dist(z, A):=inf_a ∈ Az-a. For a ^2 function ρ^n→ℝ we denote ∇ρ(x)=(∂ρ(x)/∂ x_1,∂ρ(x)/∂ x_2, ⋯, ∂ρ(x)/∂ x_n) and Hessian matrix of the function ρ at the point x ∈^n is denoted by H(ρ)(x). For a real number k >0 and D ⊂ we say that a mapping f ∈ C^k(D) if f has continuous partial derivatives of order [k] on D and the partial derivatives of order [k] are Hölder continuous with exponent k-[k] on D. For any two complex manifolds X, Y, the space (X, Y) equipped with compact open topology, forms a second countable and completely metrizable space. In particular, it is separable Baire space (see <cit.>). Let M, N be two smooth manifolds and r ∈ℕ. Suppose that ^r(M,N) denotes the set of all r times differentiable map. Let f ∈^r(M,N). Suppose that (U, ϕ), (V, ψ) be two charts on M, N respectively. Let K U be a compact subset such that F(K) V. For >0 we denote a weak subbasic neighborhood of f by 𝒩^r(f, (U, ϕ), (V, ψ), K , ) and it is defined by {g ∈^r(M,N)|g(K) V, sup_x ∈ KD^k(ψ f ϕ^-1(x))-D^k(ψ g ϕ^-1(x))<, k ∈{0,1,2, ⋯ ,r}}. The weak topology generated by sets 𝒩^r(f, (U, ϕ), (V, ψ), K , ) is called compact open C^r topology and it is denoted by ^r_W(M, N). The next result is well known (see <cit.>). We will use it for proving <Ref>. A ^1 immersion f M → N, which is injective on a closed set K ⊂ M, is injective on a neighborhood of K. Moreover, f has a neighborhood 𝒩⊂^1_S(M,N) and K has a neighborhood U of M such that every g ∈𝒩 is injective on U. If K is compact then 𝒩 can be taken in ^1_W(M, N). We need the following result for proving <Ref>. <cit.> Let ⊂R^n be a strongly convex domain. Then there exist a constant c>0 and a defining function ρ̃ for such that ∑_j,k =1^n∂ ^2 ρ̃/∂ x_j∂ x_k(P)w_jw_k≥ c w^2, ∀ P ∈∂Ω and ∀ w ∈ℝ^n. We need the following notion for proving <Ref> (see <cit.> for detail). A compact subset K is said to be Stein compactum if for every neighborhood U of K there exists a domain of holomorphy V_U such that K ⊂ V_U⊂ U. For any compact subset K in , 𝒫(K) denotes the set of all continuous functions that can be approximated by holomorphic polynomials uniformly on K and (K) denotes the set of all continuous functions on K that can be approximated uniformly on K by holomorphic functions in a neighborhood of K. The following result from Range's book will be used in our proof of <Ref>. <cit.> Let K be a Stein compactum and assume that 𝒪(K) ⊂𝒫(K). Then K is polynomially convex. Recall that a domain U ⊂ is said to be Runge if every f ∈𝒪(U) can be approximated by holomorphic polynomials. Next, we state a result that will be used to prove <Ref> <cit.> Let F ∈𝔛_𝒪(ℂ^n) be a complete globally asymptotically stable vector field. If ∋ 0 is a spirallike domain with respect to F containing the origin then Ω is a Runge domain. The following result is from <cit.>, related to holomorphic approximation. We will use this in the proof of <Ref>. [Forstnerič, <cit.>] Let K_0 and S= K_0∪ M be compact holomorphically convex subsets in a complex manifold X such that M=S∖ K_0 is a totally real m dimensional submanifold of class ^r. Assume that r ≥m/2+1 and let k be an integer satisfying 0 ≤ k ≤ r-m/2-1. 
Given an open set U X containing K_0 and a map f U ∪ M → Y to a complex manifold Y such that f|_U is holomorphic f|_M∈^k(M), there exist open sets V_j X containing S and holomorphic map f_j V_j→ Y (j=1,2,, ⋯ ) such that, as j →∞, the sequence f_j converges to f uniformly on K_0 and in the ^k-sense on M. If in addition, X_0 is a closed complex subvariety of X which does not intersect M and s ∈ℕ then we can choose the approximating sequence such that f_j agrees to order s with f along X_0∩ V_j for all j = 1, 2, 3 ⋯. The next result from <cit.>, gives a necessary and sufficient condition for being a continuous map hypercyclic. We will use this result to prove <Ref>. [Birkhoff, <cit.>] Let T X → X be a continuous map on a separable complete metric space X without isolated points. Then the following assertions are equivalent: i. For any pair U, V of nonempty open subsets of X, there exists some n ≥ 0 such that T^n(U) ∩ V ≠∅ ii. there exists some x ∈ X such that {T^m(x)|m ∈ℕ} is dense X. The next result from <cit.>, is related to the diffeomorphic extension of a biholomorphism defined on a strongly pseudoconvex domain. It will be used in the proof of <Ref>. [Hurumov, <cit.>] Let D_1, D_2⋐ be strongly pseudoconvex domain with the boundaries of class C^m(m ≥ 2), and f: D_1→ D_2 is a biholomorphism or proper holomorphic mapping. Then f ∈ C^m-1/2(D_1), if m-1/2 is not an integer and f ∈ C^m-1/2 -(D_1), for arbitrarily small ϵ>0 if m-1/2 is an integer. The next result from <cit.>, is a Mergelyan type approximation on a strongly pseudoconvex domain. It will be used in the proof of <Ref>. <cit.> Let X be a Stein manifold and ⋐ X be a strongly pseudoconvex domain of class C^k for k ≥ 2. Then for any f ∈ C^k()∩𝒪() , k ≥ 2 there exists a sequence of function f_m∈𝒪() such that lim_m →∞f_m-f_C^k()=0. The following result can be proved using <cit.>. We will use it in the proof of <Ref>. Let K be a compact polynomially convex subset of containing the origin. Then ((𝔻∪{2}) × K)∪ [1,2] ×{0}⊂ℂ^1+n is also polynomially convex. The next three lemmas are the main ingredients of the proof of <Ref> and <Ref>. Let ⋐ be a domain and f ∈^1( , ). Suppose that f is injective on and Df(z) is invertible for all z ∈∂. Then there exists a neighborhood U of , such that f is injective on U. Let f → be an injective ^1 map. Since f is ^1 on closed set, hence it is ^1 on a neighborhood of . Assume that f is not injective on any neighborhood of . Therefore, there exists N ∈ℕ and z_m, w_m∈ B(, 1/m), such that z_m≠ w_m and f(z_m)=f(w_m), for m >N. Clearly, {z_m} and {w_m} are two bounded sequence. Hence, passing to subsequence we can assume that there exist z_0, w_0∈ such that z_m→ z_0 and w_m→ w_0, as m →∞. Clearly, lim_m →∞f(z_m)=lim_m →∞f(w_m). Hence, f(z_0)=f(w_0). Since f is injective on , hence we have z_0=w_0. Now from the inverse function theorem, it follows that f is a local diffeomorphism at z_0. Since every neighborhood of z_0, contains two distinct points z_m and w_m such that f(z_m)=f(w_m), hence f can not be a locally injective map at z_0. Therefore, we get a contradiction. This proves the lemma. Let ⊂ be a bounded domain and U be a Runge domain in containing . If h: U → h(U) is a biholomorphism such that h(U) is convex and h() is strongly convex with ^2 boundary then there exists Ψ∈ Aut() such that Ψ() is a strongly convex domain. Let U be a Runge domain. Assume that h U → h(U) is a biholomorphism such that h(U) is a convex domain. 
We now invoke Andersén-Lempert theorem <cit.>, to get that h^-1:h(U) → U can be approximated by Aut(ℂ^n) uniformly over every compact subset of h(U). Let ψ_m^-1∈ Aut() such that ψ_m^-1 converges to h^-1 uniformly on every compact subset of h(U). Then ψ_m converges to h uniformly on every compact subset of U. We prove that ψ_m() is strongly convex for large enough m ∈ℕ. From <Ref>, we obtain a defining function ρ: ℝ^2n→ℝ of the domain h() and a constant C>0, such that the following holds: ∑_j,k =1^2n∂ ^2 ρ/∂ x_j∂ x_k(P)w_jw_k≥ C w^2, ∀ P ∈∂ h() ∀ w ∈ℝ^2n. Clearly, ρ∘ h: U → is a ^2 defining function for . Let V', V”⊂ h(U) be such that h() ⋐ V' ⋐ V”⋐ h(U). Now ψ_m^-1→ h^-1 uniformly over V”. Thus, there exists m_0∈ℕ such that ψ_m^-1(V') ⋐ h^-1(V”) ⋐ U. Therefore, the function ρ̃_m V' → defined by ρ̃_m(z)=ρ∘ h ∘ψ_m^-1(z) is well defined and a defining function for ψ_m(), for all m>m_0. Since ψ_m^-1 converges to h^-1 on every compact subset of h(U), hence h ∘ψ_m^-1→ id_V' uniformly on every compact subset of V'. Consequently, ρ̃_m→ρ locally uniformly on h(). Clearly, ∇ρ̃_m(z)=∇ρ (h(ψ_m^-1(z)))D(h∘ψ_m^-1)(z). Since h ∘ψ_m^-1 is holomorphic map, hence D(h∘ψ_m^-1)(z) converges to I_n uniformly on every compact subset of V'. Since ∇ρ is continuous on V', hence ∇ρ_m(z) →∇ρ(z) locally uniformly on V'. From the chain rule of second order derivative, we get that H(ρ_m)(z) =D(h∘ψ_m^-1(z))^TH(ρ)(h∘ψ_m^-1(z))D(h∘ψ_m^-1(z))+ ∑_j=1^2n∂ρ((h∘ψ_m^-1(z)))/∂ x_jH(Π^j((h∘ψ_m^-1))(z), where Π^jℝ^2n→ℝ is projection on j th component. Since ρ is a ^2 function, hence H(ρ)(h∘ψ_m^-1(z)) converges to H(ρ)(z) locally uniformly on V'. Since h ∘ψ_m^-1 is holomorphic for all m ∈ℕ, hence, D(h∘ψ_m^-1)(z) → I_n and H(Π^j((h∘ψ_m^-1))(z) → O as m →∞ locally uniformly on h(U). Therefore, we conclude that H(ρ̃_m) → H(ρ) uniformly on every compact subset of V', particularly on ∂h(). Let us define a map F_m∂h()× S^2n-1→ℝ by F_m(p,x)=x^T(H(ρ̃_m)(p)-H(ρ)(p))x, where S^2n-1:={z ∈|z=1}. Since we have H(ρ̃_m)(p)-H(ρ)(p))→ 0 uniformly on ∂h( ), hence F_m(p,x) → 0 uniformly over ∂h()× S^2n-1. Consequently, there exists N ∈ℕ, such that for all m >N, for every p ∈∂ h() and for every non zero (w_1,w_2, ⋯ ,w_2n) ∈ℝ^2n we have (w/w)^tD^2ρ̃_m(p)(w/w)>(w/w)^tHρ(p)(w/w)-C/2=C/2. Here ρ̃_m is a ^2 smooth function. Hence, there exist r_p>0, C'>0, such that for all q ∈ B(p, r_p), for all non zero w ∈ℝ^2n, for all m>N, we have (w/w)^tH(ρ̃_m)(q)(w/w)>C'. Since ∂ h() is a compact subset, hence we conclude that there exists >0 such that (w/w)^tH(ρ̃_m)(p)(w/w) >C', for all p ∈ B(∂ h(), ) and for all w ∈ S^2n-1. We choose m_1∈ℕ such that ∂ψ_m() ⊂ B(∂ h(), ), for all m >m_1. Therefore, ψ_m() is strongly convex ∀ m >max{m_0, m_1, N}. Let ⋐ be a strongly convex domain with ^2 boundary. Suppose that f_m→ with f_m∈(, )∩^2() is a sequence of diffeomorphism such that f_m→ i_, uniformly over , in ^2 topology. Then there exists N ∈ℕ such that f_m() is strongly convex for all m>N. Let ⋐ be a strongly convex domain with ^2, boundary. Applying <Ref>, we choose a defining function ρ :→ℝ of the domain and C>0 such that for all P ∈∂ and for all w=(w_1, w_2,⋯ ,w_2n) ∈ℝ^2n the following holds: ∑_j,k =1^n∂ ^2 ρ(P)/∂ x_j∂ x_kw_jw_k≥ C w^2. Here f_m is injective on , and Df_m(z) is invertible for all z ∈, for all m ∈ℕ. Hence, from <Ref>, we conclude that for all m ∈ℕ there exists a neighborhood of V_m of , such that f_m is also injective on V_m. Clearly, ρ∘ f_m^-1 f_m(V_m) →ℝ is a defining function of the domain _m:=f_m(). 
Since for every m∈ℕ, f_m→ f_m() is a diffeomorphism, hence _m=f_m() and ∂_m=f_m(∂). Consequently, for all x ∈∂_m we have that f_m^-1(x) ∈∂. Now for every x ∈∂_m we have Df_m^-1(x)=(Df_m(f_m^-1(x)))^-1. Now (Df_m(y))^-1-I→ 0 locally uniformly over . Since for all m ∈ℕ and for all x ∈∂_m we have f_m^-1(x) ∈∂, hence Df_m^-1(x) → I uniformly over ∂_m, as m →∞ in ^1 topology. Let (f_m^-1)^j denotes the j th component of the map f_m^-1. Clearly, we get that ∇ (f_m^-1)^j(x) →(0,0, ⋯ 1, ⋯, 0)_j th position as m →∞ uniformly over ∂_m in ^1 norm. Since H((f_m^-1)^j(x))=D(∇ (f_m^-1)^j)(x), hence H((f_m^-1))^j(x) → O, uniformly over ∂_m as m →∞. Next, using the chain rule of the Hessian matrix we obtain that H(ρ∘ f_m^-1)(x) = (Df_m^-1(x))^THρ (f_m^-1(x)) (Df_m^-1(x))+∑_j=1^2n∂ρ/∂ x_j(f_m^-1(x))H((f_m^-1)^j)(x), for all x∈∂_m and for all m ∈ℕ. We have that H((f_m^-1))^j(x) → O uniformly over ∂_m as m →∞. Hence, there exists N_1∈ℕ such that for all m >N_1 sup_w=1, x∈∂ D_m|∑_j=1^2n∂ρ/∂ x_j(f_m^-1(x))w^TH((f_m^-1)^j(x))w |<C/3. Therefore, taking into account (<ref>), (<ref>), (<ref>) we conclude that w^TH(ρ∘ f_m^-1)(x)w >2C/3, for all w ∈, with w=1, for all x ∈∂_m and for all m>N_1. Therefore, from <Ref>, we conclude that _m is strongly convex for large enough m ∈ℕ. § POLYNOMIAL CONVEXITY AND SPIRALLIKE DOMAINS We begin this section with the following lemma that will be used to prove <Ref>. Let ⋐ be a domain containing the origin and spirallike with respect to V ∈𝔛_𝒪(). Then X(t,z) ∈, ∀ z ∈, ∀ t ≥ 0. Moreover, if has C^1 smooth boundary and V(z) ∉ T_z∂ (i.e. V(z) is transversal to the boundary) then X(t,z) ∈, for all z ∈, for all t>0. Let z_1∈∂. Suppose that X(s, z_1) ∉, for some s >0. Then ∃  r>0 such that B(X(s,z_1),r)∩=∅. Since X_s→ is a continuous map, hence there exist δ_r>0, such that X_s(z) ∈ B(X_s(z_1),r), for all z ∈ B(z_1,δ_r). Therefore, for all z ∈∩ B(z_1, δ_r), it follows that X_s(z) ∈ B(X_s(z_1),r). Choose w ∈ B(z_1,δ_r) ∩. Then X_s(w) ∉. This contradicts the assumption that is spirallike with respect to V. Suppose that V(z) ∉ T_z∂, for every z ∈∂. Assume that z ∈∂, and X(t_1,z) ∈∂ for some t_1>0. Then we get that X(t,z) ∈∂ for all t ∈ [0, t_1]. Now if X(t,z) ∈∂ for t ∈ [0,t_1], then d/dt(X(t,z)) =V(X(t,z)) ∈ T_X(t,z)∂ for t ∈ (0,t_1). This again leads to a contradiction with the fact V(z)∉ T_z∂Ω. Since polynomial convexity remains invariant under automorphism of , hence, without loss of generality we can assume that D is a strictly spirallike domain with respect to the globally asymptotic stable vector field V ∈𝔛_𝒪(). Suppose that Xℝ×→ is the flow of the holomorphic vector field V. We show that the family {X_-t(D)}_t ≥ 0 forms a Runge and Stein neighborhood basis of D. Since D is pseudoconvex domain and X_t∈ Aut(), hence, X_t(D) is pseudoconvex domain for every t ∈ℝ. Let t>0 and w ∈ X_-t(D). Since D is spirallike with respect to the vector field V, hence, for any τ >0 we have X(t+τ, w) ∈ D. Therefore, X(-t, (X(t+τ, w)))∈ X_-t(D). Consequently, for all t>0, X_-t(D) is spirallike with respect to V ∈𝔛_𝒪(). We now invoke <Ref>, to conclude that X_-t(D) is Runge domain for all t>0. Clearly, ∀ z ∈D we have z=X_-t(X_t(z)). Since D is strictly spirallike, hence, we get that X_t(z) ∈ D for all z ∈D. Therefore, D⊂ X_-t(D) for all t ≥ 0. Let U be a domain in such that D⊂ U ⊆. Now for every z ∈D ∃ r_z>0 such that B(z,r_z) ⊂ U. Clearly, D⊆∪_z ∈DB(z,r_z/3). Since D is compact, hence there are z_1,z_2, ⋯ ,z_m∈D such that D⊂∪_i=1^mB(z_i, r_z_i/3) . 
Since X(t,z) is continuous map, hence, there exists T>0 such that X_t(z_j) ∈ B(z_j, r_z_j/3) for all 0<t <T and for all j ∈{1, 2, ⋯ m}. Let B:=sup_z ∈DDV(z). Now, for all w ∈, we have w ∈ B(z_j, r_z_j/3) for some z_j∈D. Let 0<T'<min{1/Bln3/2,T}. Applying <cit.>, we conclude that, for all t ∈ (0,T'), the following holds: X_-t(w)-z_j =X_-t(w)-X_-t(X_t(z_j)) ≤ e^Btw-X_t(z_j) ≤ e^Bt(w-z_j+z_j-X_t(z_j)) < e^Bt.2r_z_j/3 <r_z_j. Hence, for every t ∈ (0,T'), we obtain D X_-t(D) X_-t(D) ⊂∪_j=1^mB(z_j, r_z_j) ⊂ U. Therefore, {X_-t(D)}_t ≥ 0 forms a Runge and Stein neighborhood basis of D. Clearly, D is Stein compactum. Since admits a Runge neighborhood basis, hence, 𝒪(D) ⊂𝒫(D). Therefore, from <Ref>, we conclude that D is polynomially convex. We deduce the following corollary using <Ref> and <Ref>. Let V ∈𝔛_𝒪()(n ≥ 2) be a complete asymptotic stable vector field. Let D ⊆ be a pseudoconvex domain with ^1 boundary containing the origin. Assume that D is spirallike with respect to V and V(z) ∉ T_z∂ D, ∀ z ∈∂ D (i.e. boundary is transversal to the vector field). Then D is polynomially convex. Let D be a bounded pseudoconvex domain. Let θ(t,z) be the flow of the vector field V. Let σ [0, ∞) →ℝ defined by σ(t)=r(θ(t,z)). Clearly, for every z ∈D, r(z) ≤ 0. Now we have the following d σ(t) dt =∑_j=1^n∂ r (θ(t,z))/∂ z_j(θ(t,z))^j/ dt+∂ r (θ(t,z))/∂z_j(θ(t,z))^j/ dt =2·(V^j(θ(t,z))∂ r(θ(t,z))/∂ z_j)<0. From our assumption we get that d σ(t) dt<0 for all t ∈ [0, ∞). Hence, from (<ref>), we have r(θ(t,z)) < r(z) ∀ z ∈D. Therefore, D is a strictly spirallike domain with respect to a globally asymptotic stable vector field V. Therefore, applying <Ref>, we conclude that D is polynomially convex. Since compact convex subsets of are polynomially convex and polynomial convexity is invariant under Aut() hence (2) (1). Now, we prove that (1) (2). Let ⋐ be a strongly pseudoconvex domain with ^m boundary for some m>2+1/2, such that is polynomially convex. Suppose that D is a bounded strongly convex domain with the same boundary regularity as and f→ D be a biholomorphism. In view of <Ref>, it is enough to construct the following: * A Runge neighborhood U of and a univalent map h U → h(U), such that h(U) is convex. * h() is strongly convex with ^2 boundary. Since D is a bounded strongly convex domain with ^2+1/2 boundary, hence, D is a bounded strongly pseudoconvex domain with the same boundary regularity. Now applying <Ref>, we get a diffeomorphic extension of f on . Let f̃→D be the diffeomorphic extension of f. In view of <Ref> we conclude that there exist a sequence ϕ_m∈(, ) such that ϕ_m converges to f uniformly over in ^2 topology. Here f̃→D is a diffeomorphism (at least ^2 regularity) . Therefore, from <Ref>, we obtain a neighborhood V' of such that f̃ injective on V'. Shrinking V' if needed we can assume that Df̃(z) is invertible for every z ∈ V'. We consider an open set V such that V ⋐ V'. Clearly, f̃ V →f̃(V) is a ^1 immersion. Here is a compact subset of V and ϕ_m converges uniformly to the map f̃ on in ^2 topology. Hence, invoking <Ref>, we get a large enough m' ∈ℕ and a neighbourhood U_2 of in V such that ϕ_m is injective on U_2 for all m >m'. Since ϕ_m is also holomorphic on , hence it is injective holomorphic on a neighborhood V'_m of . Hence, ϕ_m is particularly a diffeomorphism on for all m>m'. Therefore, for all m>m', ϕ_m∘f̃^-1D→ sequence of diffeomorphism on D such that ϕ_m∘f̃^-1→ i_D in ^2 topology. 
Hence, we infer from <Ref>, that there exist m_1∈ℕ such that ϕ_m∘f̃^-1(D)=ϕ_m() is strongly convex domain for all m >max{m_1,m'}. Therefore, taking h=ϕ_m, for any fix m>max{m',m_1}, we get a domain U_2 containing such that h U_2→ h(U_2) is a univalent map and h() is strongly convex. Since Ω is polynomially convex, hence, it has a Runge neighborhood basis. Choose U ” a Runge neighborhood of Ω and h(U”) is a neighborhood of h(Ω). Since h() is a strongly convex domain, hence, it has a strongly convex neighborhood basis. Now choose a strongly convex domain D' such that h() ⊂ D' ⊂ h(U”). Since D' is Runge in , hence, (D', h(U”)) is a Runge pair. Therefore, (h^-1(D'), U”) is a Runge pair. Since (U”, ) is a Runge pair, hence, (h^-1(D'), ) is a Runge pair. Therefore, h^-1(D') is a Runge domain. Choose, U=h^-1(D'), which implies that h(U) is convex and U is Runge. Therefore, we are done. Since every convex domain is Runge (see <cit.>) and the automorphism of maps the Runge domain onto the Runge domain, hence conclusion (2) implies that is a Runge domain. From <cit.>, we conclude that is polynomially convex implies that ^∘ is Runge. Since for every convex domain D we have D^∘=D, hence, we have ^∘=f(D)^∘=f(D)^∘=f(D^∘)=f(D)= Hence, is a Runge domain. § LOEWNER CHAINS Let A ∈ Gl(n, ). We say that a family of the univalent map (f)_t on D is an A-normalized Loewner chain on D if f_t(0)=0 and df_t(0)=e^tAI_n, for all t ≥ 0, with f_s(D) ⊂ f_t(D). The following Proposition is a generalization of <cit.>. We denote Aut_0()={ψ→|  ψ is an automorphism, ψ(0)=0, Dψ(0)=I_n}. A particular version of the proposition, when the vector field V =-I, will be used to prove <Ref>. Let D ⋐ be a domain containing the origin. Let f D → f(D) ⋐ be a univalent map such that f ∈𝒮(D). If there exists Ψ∈ Aut_0() such that Ψ(f(D)) is bounded strictly spirallike domain with respect to a complete globally asymptotic stable vector field V ∈𝔛_𝒪() then f embeds into a filtering -DV(0)- normalized L^d-Loewner chain with range . Let f D → f(D) be a univalent map such that f ∈𝒮(D). Here, f(D) is a bounded domain and there exists Ψ∈Aut_0() such that Ψ(f(D)) is strictly spirallike domain with respect to the globally asymptotic stable vector field V. Suppose that Xℂ×→ be the flow of the vector field V. We show that f [0,∞) × D → defined by f_t=Ψ^-1(X_-t(Ψ(f(z)))) forms a filtering -DV(0)- normalized L^d-Loewner chain with range . For any 0 ≤ s <t, assume that f_s(D)=Ω_s. At first we show that _s_t. If w_s∈f_s(D), then there exist a sequence {z_sm}_m ∈ℕ in D such that w_s=lim_m →∞Ψ^-1(X_-s(Ψ(f(z_sm)))). Let z_tm=f^-1(Ψ^-1(X_(t-s)(Ψ(f(z_sm))))). Then we get that w_s=lim_m →∞f_t(z_tm)=f_t(lim_m →∞z_tm). Clearly, lim_m →∞(Ψ(f(z_sm))) ∈Ψ(f(D)). Since Ψ(f(D)) is strictly spirallike domain with respect to the vector field V, hence, we conclude that X_τ(Ψ(f(D))) ⊂Ψ(f(D)) for all τ>0. Hence, X_t-s(lim_m →∞(Ψ(f(z_sm)))) ∈Ψ (f(D)) for all 0<s <t. Hence, we have lim_m →∞z_tm∈ D for all t>0. From this we obtain that _s⊂_t for all s<t. Now let _s⊂ U. Then, we have X_-sΨ(f(D)) ⊂Ψ(U). Since X_-s(Ψ(f(D))) is a compact subset and Ψ(U) is an open subset of , hence, there exists r_s>0 such that B(X_-sΨ(f(D)),r_s) ⊂Ψ(U). Choose t_0∈ (0,r_s/2R), where R=sup_z ∈ (Ψ(f(D)))V(X(-s,z)). If w ∈ X_-t(Ψ(f(D))) for some t ∈ (s, s+t_0). We have w=X(-(s+t'), z') for some z'∈Ψ(f(D)) and 0<t'<t_0. Now we deduce the following: dist(w,X_-s(Ψ(f(D))) ≤X(-(s+t'),z')-X(-s,z') =|t'||V(X(-s,z'))| <r_s. It follows that X_-t(Ψ(f(D)))⊂Ψ(U). 
Consequently, Ψ^-1(X_-tΨ(f(D))) ⊂ U for all t ∈ (s, s+t_0). Therefore, f_s is a filtering Loewner chain. Clearly, for every t,s ≥ 0 with 0 ≤ s ≤ t ≤ T and for any compact subset K ⊂ D, f_s(z)-f_t(z) ≤sup_0 ≤τ≤ T, ξ∈ KDf_t(ξ)t-s ≤∫_t^sκ(ζ) dζ, where κ(ζ) is constant function on [0,T] →ℝ^+. Clearly κ∈ L_loc^d([0,T], ℝ^+). Let w ∈. Since Ψ^-1(X_t(Ψ(w)) )→ 0, therefore, there exists t>0 large enough so that Ψ^-1(X_tΨ(w)) ∈ f(D). Choose, z=f^-1(Ψ^-1X_tΨ(w)) ∈ D. Then, we have w ∈ f_t(D). Therefore, Rf_t(D)=. Clearly, Df_t(0)=e^-tDV(0). Therefore, f_s is filtering -DV(0) normalized L^d-Loewner chain. Let D be a bounded strongly convex domain. Then, 𝒮_ℛ(D)= 𝒮_ℛ(D), where closure is taken in compact open topology. Let f ∈𝒮_ℛ(D). Hence, there exists a sequence f_n∈𝒮_ℛ(D) such that f_n(D) is Runge for all n ∈ℕ. Now in view of Andersén-Lempert theorem <cit.>, every f_n can be approximated by elements of Aut() locally uniformly on D. Therefore, f D → can also be approximated by elements Aut() locally uniformly. Since D is Runge, hence using <cit.>, we conclude that f(D) is a Runge domain. Hence, f ∈𝒮_ℛ(D). Here we present proof of <Ref>. Suppose that 0 ∈ D ⋐ is a strongly convex domain and f:D → is an univalent map such that f ∈𝒮_𝔉^1(D). Let _s:=f_s(D). According to our assumption, f can be embedded into a filtering L^d-Loewner chain with the Loewner range . Hence, from <cit.>, (_s, Rf_t(D)) is a Runge pair for all s ∈ [0,∞). Since Rf_t(D)=, hence, {_s}_s>0 are Runge. Since _s is biholomorphic to D, _s are also stein domain for every s ∈ [0, ∞). Therefore, {_s}_s>0 forms a Runge and stein neighborhood basis of f(D). From <Ref>, we conclude that f(D) is polynomially convex. Conversely, suppose that f D → be a univalent map such that f(0)=0 and f(D) is a bounded strongly pseudoconvex domain with ^m boundary, with m>2+1/2 and f(D) is polynomially convex. Now, invoking <Ref>, we get that there exists Ψ∈ Aut() with Ψ(0)=0 such that Ψ(f(D)) is convex. Particularly, Ψ(f(D)) is spirallike with respect to -I. Now in view of <Ref>, we conclude that f ∈𝒮^1_𝔉(D). Let D be a bounded strongly convex domain containing the origin. Then, 𝒮_𝔉^1(D)=𝒮_ℛ(D) . Clearly, 𝒮_𝔉^1(D)⊂𝒮_ℛ(D). Thus, we have 𝒮_𝔉^1(D)⊆𝒮_ℛ(D). From <Ref>, we conclude that 𝒮_𝔉^1(D)⊆𝒮_ℛ(D). Now let f ∈𝒮_ℛ(D). Then by Andersén-Lempert theorem <cit.>, there exist a sequence {Ψ_m}_m ∈ℕ∈ Aut() such that ψ_m→ f uniformly over every compact subset of D. Since f(0)=0 and df(0)=id, hence, we can assume that ψ_m(0)=0 and dψ_m(0)=id. From <cit.> we conclude that the convex domain is strictly spirallike with respect to the vector field -I. Clearly, ψ_m(D) satisfies the condition of <Ref>. Therefore, ψ_m|_D∈𝒮_𝔉^1(D). Consequently, f ∈𝒮_𝔉^1(D). to examine the existence of universal mappings of operators on certain classes of function spaces. In functional analysis and topological dynamics, the universality of the operators is investigated. For a detailed survey, see <cit.>. For any X be a topological space and any self map T X → X we denote T^m= T∘ T ∘⋯∘ T_m times. Let X be a topological space. A self-map T X → X is called hypercyclic if there exists an f ∈ X, called hypercyclic element for T, such that the orbit {T^m(f)| m ∈ℕ} is dense in X. Brikhoff proved that given any sequence {a_m}_m ∈ℝ such that lim_m →∞ a_m = ∞ there exists f ∈𝒪() such that {F_m∈𝒪(): F_m(z)=f(z+a_m)}=𝒪(). In particular, it says that for every translation map τ→ defined by τ(z)=z+a, where a ≠ 0, the composition operator C_τ𝒪() →𝒪() defined by C_τ(f)=f∘τ is a hypercyclic opertor. 
Let X,Y be complex space (we will always assume that complex spaces are reduced and second countable). It is a fundamental problem that given a complex space Z does there exists a holomorphic map f Z → Y such that f(Z)=Y. Let X,Y, Z be reduced complex space. A subspace 𝒮(X,Y) is said to Z- dominated if there exists a holomorphic map f Z →𝒮 with dense image (i.e. f(Z)=𝒮). § DENSE HOLOMORPHIC CURVES AND UNIVERSAL MAPPINGS We start this section with a couple of key lemmas that will be used to prove <Ref>, <Ref>. The following lemma is an application of <Ref>. It will be used crucially in the proof of <Ref>. Let ⊂ be a pseudoconvex domain containing the origin. Assume that is a strictly spirallike domain with respect to a complete globally asymptotic stable vector field V ∈𝔛_𝒪(). Let Y be a connected complex manifold. Set K=(𝔻∪{2}) × and I=[1,2] ×{0}. Let W ⊂ℂ^n+1 be a neighborhood of K and f W ∪ I → Y is a continuous map that is holomorphic on W. Then there exists a sequence {f_j D_j→ Y}_j ∈ℕ of holomorphic maps from open neighborhood of K ∪ I ⊂ℂ^n+1 such that 1. f_j|_K ∪ I→ f as j →∞ 2. f_j(2,.)|_=f(2,.)|_Ω for all j ∈ℕ. Invoking <Ref> we conclude that is polynomially convex. Now from <Ref>, we get that K ∪ I is polynomially convex. Now invoking <Ref>, we get there exists a sequence of holomorphic maps defined on the neighborhood of K ∪ I such that f_j|_K ∪ I→ f. Since for every x ∈ we can construct a hyperplane L_x in passing through x and not containing the origin. Consider a closed complex subvarity {2}× L_x and again using <Ref>, we get f_j(2,x)=f(2,x) for all x ∈. This proves the lemma. Let ⋐ be a bounded pseudoconvex domain containing the the origin which is strictly spirallike with respect to a complete globally asymptotic stable vector field V ∈𝔛_𝒪(). Let Y be a connected complex manifold. Let f 𝔻→(, Y) be a holomorphic map and 𝒰⊂(, Y) a nonempty open subset. Then there exists a sequence of holomorphic maps {f_j W_j→(, Y)}_j ∈ℕ from open neighborhoods W_j of 𝔻∪ [1,2] such that 1. f_j |_𝔻→ f 2. f_j(2) ∈𝒰 for all j ∈ℕ. Suppose that f̂𝔻×→ Y is associated holomorphic map for f defined by f̂(z,x)=f(z)(x). Note that 𝔻 and are strictly spirallike with respect to -id and V respectively. Hence, 𝔻× is strictly spirallike with respect to the vector field V(z)=(-z, V(z)) ∈𝔛_𝒪(ℂ^n+1). Clearly, Xℝ×ℂ^n+1 defined by X(t,(z,x))=(e^-tz, X(t,x)) is the flow of the vector field V. We have X_t(𝔻×) ⊂𝔻×, for all t >0. Hence, for all t>0, we get 𝔻×⊂X_-t(𝔻×). Define f_tX_-t(𝔻×) → Y by f_t(z',x')=f(X_t(z',x')). Therefore, f_t is defined on a neighborhood of ×. We now show that f_t→f locally uniformly on 𝔻×. Let K ⊂𝔻× be a compact set. Since X_t→ id_× as t → 0^+ uniformly on K and f is a uniformly continuous map on every compact subset, hence f_t→f uniformly on K as t → 0^+. Let u ∈𝒰. Similarly, for all t>0 we consider û_t: X_t() → Y defined by û_t(x)=u(X(t,x)). Then, for each t>0, we have u_t∈(, Y) and u_t→ u locally uniformly on . Since f̂ can be approximated locally uniformly by holomorphic functions defined on ×, it is enough to consider that f̂∈(×). Similarly, we can assume that û∈(, Y). Now proceeding in a similar way as <cit.> and using <Ref>, we conclude that there exist sequence {g_j V_j→ Y}_j ∈ℕ of holomorphic map from open neighborhoods V_j of ((D∪{2})×) ∪ ([1,2] ×{0}) ⊂ℂ^n+1 such that i. g_j|_×→f̂ as j →∞ locally uniformly ii. g_j(2, .)| =u, for all j ∈ℕ. Since is a strictly spirallike domain with respect to V, hence for any open neighbourhood N_ of , we have X_t() ⊂ N_, for all t ∈ [0,1]. 
From the continuity of the map X_t and compactness of [0,1] ×, we obtain an open set U× G ×ℂ^n+1, with [0,1] × U × G and X_t(G) N_ for all t ∈ U. Now again proceeding the same way as <cit.> and using <Ref>, we conclude the lemma. The next lemma is an application of <cit.>. It can be proved similarly as <cit.>, using <Ref>. Therefore, we omit its proof. Let {D_j}_j ∈ℕ be a sequence of open neighborhood of ∪ [1,2]. Then there exists a sequence {ϕ_j→ D_j}_j ∈ℕ of holomorphic maps such that for all j ∈ℕ the following holds 1. ϕ_j|_→ id|_ 2. 2 ∈ϕ_j() Let X, Y be topological spaces. A sequence {f_ν}(X, Y) is compactly diverges if for every pair of compacts H ⊆ X and K ⊆ Y there exists ν_0∈ℕ such that f_ν(H) ∩ K= ∅ for every ν≥ν_0. A domain is said to be taut if every sequence {f_n}_n(, ) is compactly divergent in (, ) or has a subsequence convergent in (, ). If is taut, then, from <cit.>, we get that for every complex manifold Y and every sequence sequence {f_n}_n ∈ℕ(Y, ) either {f_n} has a convergent subsequence or {f_n} is compactly divergent. Next lemma will be used to prove <Ref>. Let be c_ finitely compact pseudoconvex domain. Assume that contains the origin is taut domain and strictly spirallike with respect to complete globally asymptotic stable vector field V ∈𝔛_𝒪(^n). Let τ∈ Aut() be an automorphism such that {τ^j}_j ∈ℕ is compactly divergent. Then the map C_τ(, ) →(, ) defined by C_τ(f)=f∘τ is hypercyclic with respect to the sequence (j)_j ∈ℕ. We have to show that there exists F ∈(, ) such that {C_τ^j(F):j ∈ℕ} is dense in (, ) with respect to compact open topology. In view of <Ref>, it is enough to show that for any pair of nonempty open subsets G, U⊂(X, ), there exists j ∈ℕ such that (C_τ)^j(G) ∩ U ≠ 0. Let g ∈ G and h ∈ U. Since is strictly spirallike domain with respect to the vector filed V, hence, for any t>0, we obtain X_t(g())⊆ X_t(g()) ⊆ X_t(). Clearly, X_t() ⊂ is a compact set, for all t>0 . Hence, X_t∘ g (). Here, G is an open set containing g. Since X_t∘ g → g as t → 0^+ in compact open topology, hence, for small enough t>0, we conclude that g_t:=X_t∘ g in G. Therefore, without loss of generality we can assume that g(), h(). For j ∈{1,2} consider the following maps π_j×→ defined by π_j(x_1,x_2)=x_j. Clearly, × is spirallike domain with respect to the vector field (V,V) ∈𝔛_𝒪(^2n). Hence, from <Ref>, there exists x_1, x_2∈ and a holomorphic map f →(×, ) such that f(x_1)=π_1, f(x_2)=π_2. Hence, the rest of the proof goes the same as <cit.>. Clearly, g∘τ ^j, h ∘τ ^-j→ is sequence of holomorphic map such that g∘τ ^j(), h∘τ ^-j(). Consequently, both the sequence of holomorphic maps g∘τ ^j, h ∘τ ^-j is not compactly divergent. Here is a taut domain. Therefore, after passing to subsequence we conclude that there exist g̃, h̃∈𝒪(, ) such that g̃=lim _j →∞ g∘τ ^j and h̃=lim _j →∞ h∘τ ^-j. By taking the composition of suitable automorphism of the unit disc with the function f, we can assume that f(0)=π_1 and f(x'_2)=π_2 for some x_2' ∈. Now consider a disc Δ such that Δ and 0,x_2' ∈∂Δ. Using Möbius transformation, we can construct a holomorphic map ψ→Δ, such that ψ(-1)=0, ψ(1)=x_2'. Denoting f∘ψ again by f we obtain a holomorphic map f→(×, ) such that f(-1)=π_1 and f(1)=π_2. Therefore, we have f(-1)∘ (g(x), h̃(x))=g(x) ∈𝒢 and f(1)∘ (g̃(x), h(x))=h(x) ∈𝒟. Let x ∈. From <cit.> and <cit.>, we get that is strongly complete. Since τ^j is compactly divergent, any closed ball with respect to the Carathéodory distance can not contain any subsequence of τ^j(x). 
Therefore, it follows that c_(x, τ^j(x)) →∞ as j →∞. In view of <Ref> we get that there exists a sequence of holomorphic map ψ_m→ such that ψ_m(x) → -1 and ψ_m∘τ^j_m(x) → 1. Now from Montel's theorem and maximum modulus principle we get that ψ_m→ -1 and ψ_m∘τ^j_m→ 1 locally uniformly on . For each m ∈ℕ, let F_m∈(, ) defined by F_m(x)=f(ψ_m(x))((g(x), h(τ^-j_m(x))). From our construction it follows that F_m→ f(-1)(g, h̃ )=g ∈𝒢. We have F_m∘τ^j_m(x)=(f(ψ(τ^j_m(x))))(g∘τ^j_m(x),h(x)). Therefore, from our construction it follows that F_m∘τ^j_m→ f(1)(g̃, h)=h ∈𝒟. Therefore, it follows that there exists m_0∈ℕ such that for all m >m_0 we have (τ^*)^j_m(F_m) ∈𝒟 and F_m∈𝒢. Consequently, (τ^*)^j_m(𝒢) ∩𝒟≠∅. This proves the lemma. Now we present the proof of <Ref>. The topological space (𝒟, Y) has a countable base with respect to the compact open topology (see <cit.>). Hence, we can choose a countable base {𝒰_j}_j ∈ℕ for the topological space (𝒟, Y) such that 𝒰_j≠∅. Consider the set 𝒲_j={f ∈( ,(𝒟, Y)): f() ∩𝒰_j≠∅}. Clearly, the set is open. We will show that 𝒲_j is dense subsets of (, (𝒟, Y)) with respect to compact open topology for all j ∈ℕ. Since (×𝒟, Y) is a Baire space, hence, (,(𝒟, Y)) is also a Baire space. Therefore, countable intersection of open dense subsets in (, (𝒟, Y)) is again dense. Consequently, we conclude that 𝒲:=∩_j ∈ℕ𝒲_j is dense in (, (𝒟, Y)). Let g ∈∩_j ∈ℕ𝒲_j and 𝒱 be any open subset of (𝒟, Y). Then, there exists 𝒰_j such that 𝒰_j⊂𝒱. From the choice of g, there exists z_j∈ such that g(z_j) ∈𝒰_j. Hence, for any open subset 𝒱(𝒟, Y) we have g() ∩𝒱≠∅. Therefore, g()=(𝒟, Y). Consequently, we get that (𝒟, Y) is -dominated. Now we show that 𝒲_j are dense in (, (𝒟, Y)). Fix k ∈ℕ. Let f ∈(, (𝒟, Y)). Now, invoking <Ref>, we get a sequence {f_m𝒟_m→(𝒟 , Y)} of holomorphic map from open neighbourhood of ∪ [1,2] such that f_m|_→ f as m →∞ and f_m(2) ∈𝒰_k for all m ∈ℕ. Now, using <Ref>, there exists a sequence of holomorphic maps ϕ_m→𝒟_m such that ϕ_m→ id locally uniformly on and there exists z_m∈ such that 2 = ϕ_m(z_m) for all m ∈ℕ. Now, we consider the sequence of the holomorphic map f_m∘ϕ_m→(𝒟, Y). Since ϕ_m→ id locally uniformly, hence, we have f_m∘ϕ_m|_→ f locally uniformly. Now, for all m ∈ℕ, we have f_m(ϕ_m(z_m))=f_m(2) ∈𝒰_k. Therefore, we conclude that for any k ∈ℕ and f ∈(, (𝒟,Y)), there exists a sequence g_m:=f_m∘ϕ_m∈𝒲_k such that g_m→ f locally uniformly . Consequently, each 𝒲_k are dense. From the above proof we obtain that 𝒲=(,(𝒟, Y)) and for all f ∈𝒲 we have f()=(𝒟, Y). Therefore, the set of all dense holomorphic maps f→(𝒟, Y) is dense in (,(𝒟, Y)). With a little modification of the proof of the <Ref> and <Ref> we can prove that if u_1, u_2∈(, Y) such that both u_1, u_2 has holomorphic extension on , then, there exists a holomorphic map f→(, Y) and x_1, x_2∈ and such that f(x_1)=u and f(x_2)=v. Next, we show that there exists subsequence φ_q_j_m of the sequence of φ_m such that φ_q_j_m(x) → -1 and φ_q_j_m∘τ^j_m(x) → +1 as m →∞. Let ^n be a bounded pseudoconvex domain containing the origin and strictly spirallike with respect to complete globally asymptotic stable vector field V ∈𝔛_𝒪(^n). Let τ∈ Aut() be a generalized translation and Y be a complex space. Let 𝒵⊂(X, Y) be a τ^*-invariant irreducible component (with respect to Zariski topology). Then 𝒵 is -dominated if and only if there exists a 𝒵-universal map for τ. Now we present the proof of <Ref>. 
Given that 𝒟 is a bounded pseudoconvex domain containing the origin that is strictly spirallike with respect to the complete globally asymptotic stable vector field V ∈𝔛_𝒪(^n) and c_𝒟-finitely compact. We have to show that there exists a holomorphic map G𝒟→ Y such that the set {G∘τ^j|j ∈ℕ} is dense in (𝒟, Y) with respect to the compact open topology. From <Ref>, there exists a holomorphic map f →(𝒟, Y) such that f()=(𝒟, Y). Let f×𝒟→ Y defined by f(z,x)=(f(z))(x) is associated holomorphic map induced by f. Clearly, we obtain a continuous map f_*(𝒟, ×𝒟) →(𝒟, Y) defined by f_*(g)=f∘ g. At first we show that f_*((𝒟, ×𝒟) )=(𝒟, Y). Let {U_m}_m ∈ℕ be countable basis for (𝒟, Y). Since f is dense holomorphic map, hence, there exists z_m∈ such that f(z_m) ∈ U_m. Now define g_m𝒟→×𝒟 by g_m(x)=(z_m,x). Then, we have f_*(g_m(x))=f(z_m,x)=f(z_m). Since {U_m} are basic open sets, hence, we conclude that for every open set U ∈(𝒟, Y) there exists g ∈(𝒟, (, 𝒟)) such that f(g) ∈ U. Since 𝒟 is c_𝒟-finitely compact hence it is particularly a taut domain. Therefore, ×𝒟 is a taut domain and also strictly spirallike with respect to complete globally asymptotic stable vector field (-I, V) ∈𝔛_𝒪(^1+n). Since 𝒟 is a bounded pseudoconvex domain that is c_𝒟-finitely compact and {τ^j} compactly diverges on 𝒟, hence, invoking <Ref>, we infer that there exists (𝒟 ,×𝒟)-universal map F for τ. Hence, we obtain that {F∘τ^j| j ∈ℕ}=(𝒟, ×𝒟). Let us consider the map G=f_*(F) ∈(𝒟, Y). We show that {G∘τ^j| j ∈ℕ} is dense in (𝒟, Y). Let 𝒰(𝒟, Y) be any open set. Since f_*((𝒟, ×𝒟) )=(𝒟, Y), hence, there exists g ∈(𝒟, ×𝒟) such that f_*(g) ∈𝒰. Now f_* is a continuous map. Therefore, f̂_*^-1(𝒰) is an open set in (𝒟, ×𝒟) containing g. Since, {F ∘τ^j|j ∈ℕ}=(𝒟, ×𝒟), hence, there exists j_0∈ℕ such that F∘τ ^j_0∈f_*^-1(𝒰). Consequently, f̂_*∘ F ∘τ^j_0∈𝒰. Since we have G∘τ^j=f̂∘ F ∘τ^j, therefore, we conclude that for every open subset 𝒰 of (𝒟 ,Y) there exists j_0∈ℕ such that G ∘τ^j_0∈𝒰. This proves the existence of (𝒟, Y)-universal map for τ. This proves the theorem. Let as <Ref> and τ∈ Aut() is generalized translation. If there exists (,Y)-universal map for τ, then ( ,Y) is -dominated. This can be seen as follow: Let F𝒟→ Y be a (𝒟, Y)-universal map for τ. From <Ref>, we get a holomorphic map f→(𝒟, 𝒟) such that image of f is dense in (𝒟, 𝒟). Now F_*(𝒟, 𝒟) →(𝒟, Y) defines a continuous map by F_*(g)=F∘ g. Then F_*∘ f →(𝒟, Y) is a dense holomorphic map. (1) (2) follows from definition. Since 𝒟 is spirallike with respect to complete globally asymptotic stable vector field V ∈𝔛_𝒪(^n), hence it is contractible (see<cit.>). Now, from <cit.>, we get (2)⇔ (3). Suppose that (3) holds. Since 𝒟 is c_𝒟-finitely compact, hence, from <cit.>, we obtain that C_τ() →() is hypercyclic with respect to (n). Then, from <cit.> and <cit.> we conclude that, for every compact 𝒪(𝒟)-convex subset K, there exist j_K such that K ∪τ^j_K(K) is 𝒪(𝒟)-convex. Hence, (3) (1). § EXAMPLES Let us consider the following domain: Let r>0 and Ω={(z_1,z_2)∈ℂ^2 | |z_1|<r, |z_2|<e^-|z_1|)}. Clearly, is a Hartogs domain (see <cit.>) over an open ball of radius r in . Since -loge^-|z_1|=|z_1| is plurisubharmonic function hence from <cit.>, we get that is pseudoconvex domain. Now consider the vector field F(z_1,z_2)=(-2z_1,-3z_2+z_1z_2). The flow of the vector field is defined by X(t,z)= (z_1e^-2t,z_2e^-3te^z_1/2(1-e^-2t)), where (t,z) ∈ℝ×ℂ^2. We show that is strictly spirallike with respect to F. Let (z_1,z_2) ∈ and t>0. Suppose that w_1=z_1e^-2t and w_2=z_2e^-3te^z_1/2(1-e^-2t). 
Clearly, |w_1|<r for all t>0. Since, for all t>0, we have (1-e^-2t)(-|z_1|+(z_1)/2)-3t<0, hence for all t>0, we have the following |w_2| =|z_2|e^-3t+(z_1)/2(1-e^-2t) ≤ e^-|z_1|-3t+(z_1)/2(1-e^-2t) <e^-|w_1|. Therefore, is a bounded pseudoconvex domain that is a strictly spirallike domain with respect to the complete globally asymptotic stable vector field F. Clearly, is not convex. Let for j ={1,2} V_j:={(z_1,z_2) ∈| z_j =0}. Clearly, is a pseudoconvex Reinhardt domain such that ∩ V_j≠∅. Therefore, from <cit.> it follows that is c_-finitely compact. Therefore, the conclusion of <Ref>, <Ref> is true for this domain. Next, we give an example of a non-convex, strongly pseudoconvex domain with polynomially convex closure that is biholomorphic to a bounded strongly convex domain. Therefore, the conclusion of <Ref> as well as <Ref> holds. Let _1={(z_1,z_2)∈ℂ^2:|z_1|^2+|z_2|^2+|z_1|^2|z_2|^2-1<0}. Here, _1 is a strongly convex domain. Since _1 is a circular domain hence from Cartan's Theorem <cit.>, it follows that any biholomorphism from the open unit ball onto _1 is a linear map. Therefore, the defining function of _1 can not contain the term |z_1|^2|z_2|^2. Hence, _1 is not biholomorphic to the open unit ball. Let U be any non-convex simply connected domain. Suppose that p,q ∈ U can not be connected by a straight line contained in U. By Riemann mapping theorem, we get there exists a biholomorphism f→ U such that f(0)=p and f(x)=q. Clearly x ≠ 0. Choose >0 such that 0< <1/|x|-1. Let (0, 1+ϵ)={z ∈| |z|<1+}. Note that _1(0, 1+ϵ) ×(0, 1+ϵ). Let G(0, 1+ϵ) ×(0, 1+ϵ) → U ×(0, 1+ϵ) defined by G(z,w)=(f(z/1+),w). Here G(0,0) and G((1+)x,0) can not be connected by a straight line by construction. Hence, G(_1) is not convex. Since G has holomorphic extension on a neighborhood of _1, hence G(_1) is strongly pseudoconvex domain with ^∞ boundary. Note that G is a biholomorphism from a star-shaped domain onto a Runge domain. Hence, in view of <cit.>, we conclude that G can be approximated by Aut(). Consequently, G(_1) is polynomially convex. Therefore, from <Ref>, we conclude that there exists Ψ∈Aut() such that Ψ(G(_1)) is convex. Equivalently, G can be embedded into a filtering Loewner chain. Acknowledgements. Sanjoy Chatterjee is supported by a CSIR fellowship (File No-09/921(0283)/2019-EMR-I) and also would like to thank Golam Mostafa Mondal for several discussions and fruitful comments. Sushil Gorai is partially supported by a Core Research Grant (CRG/2022/003560) of SERB, Government of India. plain
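A quick numerical sanity check of the first example above is possible, since both the domain Ω and the flow X(t,z) are explicit. The following Python sketch (our own illustration; the radius r, the sampling scheme and the number of trials are arbitrary choices) samples points of Ω and positive times t and verifies that X_t maps them back into Ω, in line with the strictly spirallike property established above.

import numpy as np

rng = np.random.default_rng(0)
r = 2.0  # radius of the base disc of the Hartogs domain; any r > 0 works

def in_omega(z1, z2):
    # Membership test for Omega = {|z_1| < r, |z_2| < exp(-|z_1|)}
    return abs(z1) < r and abs(z2) < np.exp(-abs(z1))

def flow(t, z1, z2):
    # Flow of F(z_1, z_2) = (-2 z_1, -3 z_2 + z_1 z_2) quoted in the example
    w1 = z1 * np.exp(-2 * t)
    w2 = z2 * np.exp(-3 * t) * np.exp(z1 / 2 * (1 - np.exp(-2 * t)))
    return w1, w2

violations = 0
for _ in range(20000):
    z1 = r * np.sqrt(rng.random()) * np.exp(2j * np.pi * rng.random())
    z2 = np.exp(-abs(z1)) * np.sqrt(rng.random()) * np.exp(2j * np.pi * rng.random())
    t = 10.0 ** rng.uniform(-3, 1)            # strictly positive times
    if not in_omega(*flow(t, z1, z2)):
        violations += 1
print("points leaving Omega under the flow:", violations)   # expected: 0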
http://arxiv.org/abs/2307.04830v2
20230710180826
Double-Fourier engineering of Josephson energy-phase relationships applied to diodes
[ "A. Mert Bozkurt", "Jasper Brookman", "Valla Fatemi", "Anton R. Akhmerov" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
Double-Fourier engineering of Josephson energy-phase relationships applied to diodes A. Mert Bozkurt,1,2,* Jasper Brookman,1 Valla Fatemi,3† and Anton R. Akhmerov1 1 Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 4056, 2600 GA Delft, The Netherlands 2 QuTech, Delft University of Technology, P.O. Box 4056, Delft 2600 GA, The Netherlands 3 School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853 USA [email protected][email protected] [email protected] August 12, 2023 § ABSTRACT We present a systematic method to design arbitrary energy-phase relations using parallel arms of two series Josephson tunnel junctions each. Our approach employs Fourier engineering in the energy-phase relation of each arm and the position of the arms in real space. We demonstrate our method by engineering the energy-phase relation of a near-ideal superconducting diode, which we find to be robust against the imperfections in the design parameters. Finally, we show the versatility of our approach by designing various other energy-phase relations. § INTRODUCTION Josephson junction circuits allow to create many functional devices (such as SNAILs, quartons etc). The Josephson tunnel junction is the fundamental building block of superconducting circuits <cit.>. These junctions have enabled the development of a wide range of functional devices such as superconducting quantum interference devices (SQUIDs), superconducting low-inductance undulatory galvanometers (SLUGs) <cit.>, superconducting nonlinear asymmetric inductive elements (SNAILs) <cit.>, quantum-limited amplifiers <cit.>, and a bevy of superconducting qubits <cit.>. One such device is a superconducting diode, which also exists in SNS junctions under magnetic field. An example device that can be realized using Josephson junctions is a superconducting diode: a junction with unequal critical currents in different directions. Superconducting diode effect manifests generically in inhomogeneous Josephson junctions subject to a magnetic field <cit.>. Recently, however, there has been renewed interest in studying different physical mechanisms for the creation of superconducting diodes. While superconducting diodes require breaking both time-reversal and inversion symmetries—otherwise the current-phase relationship (CPR) is anti-symmetric in phase—the way in which these symmetries are broken reveals information about the underlying physical systems. To name several examples, recent studies reported superconducting diode effect in spin-orbit coupled in 2d-electron gases under external magnetic field <cit.>, superconducting thin films <cit.>, topological insulators <cit.>, finite-momentum superconductors <cit.>. An alternative to controlling the junction CPR for creating a supercurrent diode is to combine multiple junctions in a supercurrent interferometer either consisting of multiple high transparency junctions <cit.> or arrays of Josephson tunnel junctions <cit.>. We propose an approach to design arbitrary energy-phase relationships using Josephson junction arrays. We propose a systematic approach to engineer arbitrary energy-phase relationships (EPRs) of a two-terminal device using parallel arrays of Josephson tunnel junctions. We draw inspiration in the observation that circuits of conventional tunnel Josephson junctions implement a variety of Hamiltonians <cit.>, originally proposed for difficult-to-engineer microscopic structures. 
We show that the EPR of a Josephson junction array can be engineered by combining Fourier engineering of the EPRs of each arm of the array, variation of the arm strengths in real space, and phase offsets created by an external magnetic field. Our design relies on using standard fabrication techniques and is resilient against fabrication imperfections. We promote that the schemes presented here may be useful in designing sophisticated energy-phase landscapes for decoherence-protected qubit designs <cit.>. Our recipe consists of several steps: creation of higher Fourier components, FT, and adding zero trick. § THE ARBITRARY EPR ALGORITHM The elementary unit (or building block) of our design consists of two tunnel junctions in series, which behaves as a short classical junction. Our conceptual algorithm relies on the following realizations: * The current-phase relation of two Josephson junctions in series matches the functional form of that of a short Josephson junction with a finite transparency. This allows single-parameter control over the Fourier components in the energy-phase relation of the arm. * The energy-phase relation of multiple parallel junctions is a convolution of the individual energy-phase relations with the vector of junction strengths when each arm has equal phase offsets and transparency. * Shifting the total Josephson energy of all arms by the same amount does not change the lowest Fourier components, and therefore the overall shape of the current-phase relation stays the same. The elementary unit of the design used to generate higher harmonics of a CPR, of an arm of the Josephson junction array, consists of two Josephson tunnel junctions connected in series, with Josephson energies E_J1 and E_J2 [see Fig. <ref>(a)]. The EPR of each Josephson junction is U_i(φ_i) = - E_Jicos(φ_i), where φ_i is the phase drop across the junction. [We ignore the weak higher harmonic terms that have recently been reported in single Josephson tunnel junctions <cit.>. Their influence can be easily incorporated and does not substantially alter the claims of our work.] Current conservation and the additivity of phase differences yields: E_J1sin(φ_1) = E_J2sin(φ - φ_1), where φ = φ_1 + φ_2, with φ the total phase difference across the arm (see Figure <ref>). Solving for φ, we obtain the CPR of an arm: I_▸◂(φ) = E_J τ/4 Φ_0sin(φ)/√(1 - τsin^2(φ/2)), with Φ_0 = ħ/2e the superconducting flux quantum. The corresponding EPR is E_▸◂(φ) = -E_J√(1 - τsin^2(φ/2)), where E_J ≡ E_J1 + E_J2 is an overall Josephson energy of an arm and τ≡ 4 E_J1E_J2 / (E_J1 + E_J2)^2 controls the relative strength of the higher harmonics of the EPR.[We note that Eq. (<ref>) is consistent with the microscopic model for supercurrent in double barrier Josephson junctions <cit.>. For the case τ=1, Eq. (<ref>) is also consistent with the semiclassical model for CPR in a superconducting trilayer system <cit.>.] This EPR has the same functional form as that of a short, single-channel finite transparency junction with transparency τ and gap E_J—a remarkable coincidence, for which we have no explanation. The EPR and CPR of an arm become highly nonsinusoidal at τ≈ 1 or E_J1≈ E_J2, see Fig <ref>. We introduce the Fourier transform of the normalized EPR of an arm: 𝒰(τ, φ) = √(1 - τsin^2(φ/2))≡∑_m=-∞^∞𝒰_m (τ) e^i mφ, where 𝒰_m are the Fourier coefficients of 𝒰(τ,φ). In the high transparency limit, τ≈ 1, 𝒰_m ∼ 1/m^2 for m ≲ 1/(1-τ). We plot τ-dependence of several lowest Fourier coefficients of a single arm EPR in Fig. <ref>(c). 
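For concreteness, the coefficients 𝒰_m(τ) can be evaluated numerically with a discrete Fourier transform of the normalized arm EPR. A minimal Python sketch (grid size, mode count and the sampled values of τ are illustrative choices, not taken from the paper) reads:

import numpy as np

def arm_fourier_coefficients(tau, n_modes=8, n_grid=4096):
    # Fourier coefficients U_m of U(tau, phi) = sqrt(1 - tau * sin^2(phi/2)),
    # normalized so that U(tau, phi) = sum_m U_m exp(i m phi)
    phi = 2 * np.pi * np.arange(n_grid) / n_grid
    u = np.sqrt(1 - tau * np.sin(phi / 2) ** 2)
    coeffs = np.fft.fft(u) / n_grid
    return np.real(coeffs[:n_modes])          # U is real and even, so U_m is real

for tau in (0.5, 0.95, 0.999):
    print(tau, np.round(arm_fourier_coefficients(tau), 4))

Near τ ≈ 1 the printed coefficients decay roughly as 1/m^2, consistent with the estimate above.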
Connecting these elementary units in parallel and threading flux through them enables us to design the overall energy-phase relationship. With this way to create higher order harmonics of a single arm EPR, we utilize a Josephson junction array shown in Fig. <ref>(b) to engineer arbitrary EPRs. In addition to varying the strengths of each Josephson junction, and therefore E_J,n and τ_n of n-th arm, we utilize phase offsets by adding magnetic flux between the arms. Magnetic flux gives rise to phase differences δφ_n between arms n and n-1. In this way, we shift the phase offset of each arm by an amount ϕ_n=∑_n'=1^nδφ_n' with respect to a reference arm n=0. For the rest of the discussion, we define an arm strength distribution by assigning a position to each arm, namely E_J,n≡ E_J(x_n), and correspondingly distributions of the effective transparency τ_n ≡τ(x_n) and phase offsets ϕ_n ≡ϕ(x_n). The EPR of the Josephson junction array is U(φ) = -∑_n=0^N-1 E_J(x_n)𝒰(τ(x_n), φ + ϕ(x_n)), where N is the total number of arms. This EPR is highly nonlinear in τ_n and x_n, and linear in E_J. Our goal is to find U(φ) that approximates a target EPR, U_target(φ), by optimizing the design parameters E_J, τ and ϕ. Because the role of τ is to introduce higher harmonics, and the role of x_n is to break time-reversal symmetry, we choose to make τ_n and x_n uniform to simplify the problem. Specifically, we use ϕ(x_n) = 2π n/N and τ(x_n) = τ≈ 1, which makes the right hand side of Eq. (<ref>) a convolution of E_J(x_n) and 𝒰(τ, φ). We then find an approximate solution of the optimization problem by requiring that two EPRs agree at a set of discrete points U(2 π m/N) = U_target(2 π m/N), with integer 0 ≤ m < N. In other words, the Josephson junction strengths E_J(x_n) are obtained by Fourier transforming U_target, dividing the coefficients by the Fourier components of 𝒰(φ) and applying an inverse Fourier transform: E_J(x_n) = -ℱ^-1{ℱ{U_target(φ_m)}/𝒰_m}_n. We find that adding the most negative junction strength makes all E_J positive, while only minimally changing the current-phase relationship. In general, the set of Josephson energies E_J(x_n) found by inverse discrete Fourier transform includes negative values, whereas the stable state of a single arm has a positive E_J. We resolve this obstacle by adding the most negative E_J,min to all the Josephson energies E_J(x_n). Because ∑_n 𝒰(ϕ - 2π n/N) has a period of 2π/N, N of its lowest Fourier components are absent, and therefore adding it to the EPR only changes it minimally, as shown in Fig. <ref>. This concludes the design of a Josephson junction array with a target EPR. § OPTIMIZING THE SUPERCONDUCTING DIODE EFFICIENCY The ideal SC diode EPR is special case of interest because it features a discontinuity, and the target metric is highly nonlinear. We now apply our approach to design a superconducting diode. This device has an asymmetric CPR with unequal critical currents in opposite directions. The diode efficiency η is the degree of asymmetry of its two critical currents: η = | I_c+ - I_c-|/I_c+ + I_c-, where I_c± are the maximum critical currents for current flow in opposite directions. An ideal superconducting diode with η=1 has a sawtooth-shaped EPR: U_sawtooth(φ) = φ/2π - ⌊φ/2π⌋, where ⌊φ⌋ is the floor function. To optimize a superconducting diode, we apply the algorithm of the previous section with U_target = U_sawtooth, with the results shown in Fig. <ref>. 
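A minimal numerical sketch of this design procedure, applied below to the sawtooth target, may be written as follows (Python; all function names, N = 16 and τ = 0.95 are our own illustrative choices, and the sign convention of the phase offsets merely fixes the arm labelling):

import numpy as np

def arm_epr(tau, phi):
    # Normalized single-arm EPR, sqrt(1 - tau * sin^2(phi/2))
    return np.sqrt(1 - tau * np.sin(phi / 2) ** 2)

def design(u_target, tau, n_arms):
    # Discrete-Fourier design of the junction strengths E_J(x_n) for
    # equidistant phase offsets, followed by the positivity shift
    phi = 2 * np.pi * np.arange(n_arms) / n_arms
    arm_fft = np.fft.fft(arm_epr(tau, phi))               # n_arms times U_m
    e_j = -np.real(np.fft.ifft(np.fft.fft(u_target(phi)) / arm_fft))
    return e_j - e_j.min()                                # add the most negative strength

def array_epr(e_j, tau, phi, offsets):
    # EPR of the parallel array; flipping the offset sign mirrors the CPR
    return -sum(e * arm_epr(tau, phi - o) for e, o in zip(e_j, offsets))

n_arms, tau = 16, 0.95
sawtooth = lambda phi: (phi / (2 * np.pi)) % 1.0          # ideal-diode target EPR
e_j = design(sawtooth, tau, n_arms)
offsets = 2 * np.pi * np.arange(n_arms) / n_arms
phi_fine = np.linspace(0, 2 * np.pi, 400, endpoint=False)
u_recon = array_epr(e_j, tau, phi_fine, offsets)
# u_recon agrees with the sawtooth at the N sample points up to an overall
# constant; the positivity shift only adds harmonics at multiples of N.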
Because U_sawtooth is discontinuous, its Fourier approximation exhibits oscillatory behavior near the discontinuity, known as the Gibbs phenomenon. This reduces the superconducting diode efficiency by allowing small side peaks of the opposite sign next to the main peak in the CPR. To attenuate the Gibbs phenomenon, we modify the Fourier coefficients of E_J using the σ-approximation <cit.>. In Fig. <ref>(a), we demonstrate the effect of the σ-approximation on CPR of a superconducting diode. With increasing degree of regularization the efficiency of the superconducting diode increases and eventually peaks at η=0.92 (for N=78 arms). We then choose a degree of regularization that maximizes the efficiency for a given number of arms and τ. In Fig. <ref>(b), we show N dependency of the EPR and CPR of a Josephson junction array for a fixed τ=0.95. As N increases, the main peak in the CPR gets higher and narrower, resulting in a larger efficiency. § GENERALIZATION OF THE ALGORITHM We then relax the evenly spaced arms condition and perform stochastic optimization to test the robustness of our diode against imperfections that may arise from fabrication. The discrete Fourier transform approach yields a closed form solution, it applies to any target EPR using the setup of Fig. <ref>. On the other hand, it relies on several simplifications: * It makes U(φ) agree with U_target(φ) at N points, instead of minimizing an error norm. * It requires that all τ_n are equal and x_n are equidistant. * It does not take into account the random variation of junction strengths. To relax the first limitation we observe that as long as the error norm is quadratic in U(φ) - U_target(φ), the optimization problem stays a least squares problem (LS), implemented in the SciPy library <cit.>. Relaxing the second and third limitations makes the problem nonlinear, but keeps it solvable using stochastic global optimization techniques. We use LS to minimize off-current curvature. To apply LS to the superconducting diode design, we use the error norm E_J(x_n)min∑_i[U^''(φ_i)]^2, which makes the negative current as constant as possible in the range φ_i ∈[ φ_min, φ_max]. To make the solution nonzero we fix E_J(x_0)=1 and solve for the Josephson junction strengths of the remaining N-1 arms. After finding a solution to the LS problem, we add the most negative junction strength, similar to the Fourier method. We then apply a brute force optimization to determine the phase region [ φ_min, φ_max] that yields highest η. We implement stochastic optimization using differential evolution. We solve the nonlinear problem by applying the SciPy's <cit.> implementation of the differential evolution method <cit.> to the problem of finding max_{x_n}, {E_Jn}η for a given N. This procedure yields the results shown in Fig. <ref>. Because differential evolution allows the presence of noise, we allow the junction strengths to vary by ± 2%, similar to the experimental state of the art <cit.>. We find mean diode efficiency of η≈ 0.71 for N=5 arms, much larger than the result of the Fourier method. We compare the three optimization methods. In Fig. <ref>, we compare the diode efficiencies produced by the three optimization methods in perfect conditions and in presence of noise. All three methods show improvement with increasing N. The differential evolution method yields highest efficiencies for low N and converges the fastest, while showing only limited degradation in presence of disorder. 
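Continuing the sketch above, the diode efficiency and the effect of a σ-type regularization can be estimated directly from the designed junction strengths. The snippet below reuses the hypothetical arm_epr, array_epr and sawtooth helpers from the previous sketch; the Lanczos-type σ-factor is only one possible choice of regularization and is not claimed to match the exact scheme used for the figure.

import numpy as np

def diode_efficiency(e_j, tau, offsets, n_phi=2000):
    # CPR from the numerical derivative of the array EPR (2e/hbar prefactor omitted)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    current = np.gradient(array_epr(e_j, tau, phi, offsets), phi)
    i_plus, i_minus = current.max(), -current.min()
    return abs(i_plus - i_minus) / (i_plus + i_minus)

def design_sigma(u_target, tau, n_arms, use_sigma=True):
    # Same DFT design as before, optionally damping high harmonics with
    # Lanczos sigma-factors to attenuate the Gibbs ringing
    phi = 2 * np.pi * np.arange(n_arms) / n_arms
    quotient = np.fft.fft(u_target(phi)) / np.fft.fft(arm_epr(tau, phi))
    if use_sigma:
        k = np.fft.fftfreq(n_arms, d=1.0 / n_arms)        # harmonic index
        quotient *= np.sinc(k / n_arms)
    e_j = -np.real(np.fft.ifft(quotient))
    return e_j - e_j.min()

offsets = 2 * np.pi * np.arange(16) / 16
for use_sigma in (False, True):
    e_j = design_sigma(sawtooth, 0.95, 16, use_sigma)
    print("sigma:", use_sigma, " eta =", round(diode_efficiency(e_j, 0.95, offsets), 3))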
The superior performance of this method is expected, however the computational costs become prohibitively high for large N. The discrete Fourier transform method is the most constrained, and therefore it performs worst, albeit the difference with LS vanishes at high N. The LS approach is the least resilient to disorder once N becomes large due to overfitting. § OTHER EXAMPLE EPRS Finally, we demonstrate the generality of our approach through other energy-phase relationship examples: A square barrier, a triangular barrier and a double well. To demonstrate the generality of our approach, we apply it to other example EPRs: a square wave, a triangular wave, and a double well potential. For square and triangular wave potentials, we employ the discrete Fourier transform approach. Similar to the superconducting diode EPR case, we choose a constant τ and solve for the Josephson energy distribution. The convergence of this method with N, shown in Fig. <ref> confirms that it allows to generate arbitrary EPRs. Through the double well example, we show that our method is not only effective in designing overall energy-phase relationships, but also for a specific phase range. The double well EPR example demonstrates how to apply the same device to design an EPR that is only defined within a limited phase range. Specifically, we consider a double well potential of the form: U_dw(φ) = φ^4 - 1/2φ^2. By discretizing Eq. (<ref>) and eliminating equations outside the region of interest, we obtain an overdetermined set of equations, which we solve using LS and shift the Josephson energies by the most negative one when necessary. Due to absence of sharp features in double well potential, we choose a low value of τ=0.1. The resulting EPR of the Josephson junction array with N=4 arms, shown in Fig. <ref>, agrees with target EPR given in Eq. (<ref>) in the phase region of interest, depicted by the yellow dashed line. § CONCLUSION AND OUTLOOK We proposed and investigated an approach to design arbitrary energy-phase relationships using Josephson tunnel junction arrays. In particular, our approach allows to design a superconducting diode with a desired efficiency and the resulting design is robust against variation in device parameters. The main building block of our approach is possibly the simplest source of a non-sinusoidal CPR: two Josephson tunnel junctions in series. While our method does not rely on a specific arm EPR, this choice offers practical advantages. For example, more than two junctions in series generally have a multi-valued CPR <cit.> and does not allow for a simple parametrization. An alternative way of generating higher harmonics is a Josephson junction in series with an inductor <cit.>, however it has a non-periodic CPR, and is therefore more complicated to use. We have focused on the DC properties of the circuit, and we envision engineering the RF characteristic as the next logical step. For example, we expect that diode effects are correlated with odd-order RF nonlinearities, which we could explore <cit.>. Furthermore, so far, we have ignored the role of junction capacitance E_C, which sets the plasma frequency of the superconducting junctions, and consequently the islands. This plasma frequency limits the range of operation frequencies, therefore incorporating the dynamics of the superconducting islands into the picture would be relevant for designing quantum coherent devices. 
Finally, our scheme can be extended to two- or three-dimensional energy-phase landscapes, and sensitivity to parametric knobs can be included as an optimization input for the design of protected qubits <cit.>. To make the design usable in such quantum coherent circuits, E_C must remain high compared to the range of operation frequencies; this requirement is most relevant for devices such as qubits and parametric amplifiers. § ACKNOWLEDGEMENTS We acknowledge useful discussions with Alessandro Miano, Nicholas E. Frattini, Pavel D. Kurilovich, Vladislav D. Kurilovich, and Lukas Splitthoff. Data availability The code used to generate the figures is available on Zenodo <cit.>. Author contributions A.R.A. and V.F. defined the research question. A.R.A. oversaw the project. J.B. implemented the initial version of the optimization as a part of his bachelor project. A.M.B. implemented the final version of the optimization and performed the numerical simulations in the manuscript. All authors contributed to identifying the final algorithm. A.M.B., A.R.A. and V.F. wrote the manuscript. Funding information This work was supported by the Netherlands Organization for Scientific Research (NWO/OCW) as part of the Frontiers of Nanoscience program, a Starting Grant (638760), a subsidy for top consortia for knowledge and innovation (TKI toeslag), and an NWO VIDI Grant (016.Vidi.189.180). A.M.B. acknowledges NWO (HOTNANO) for the research funding.
http://arxiv.org/abs/2307.04883v1
20230710200601
Doping driven metal-insulator transition in disordered graphene
[ "Kaiyi Guo", "Ying Liang", "Tianxing Ma" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.dis-nn", "cond-mat.mes-hall" ]
Department of Physics, Beijing Normal University, Beijing 100875, China [email protected] Department of Physics, Beijing Normal University, Beijing 100875, China Key Laboratory of Multiscale Spin Physics(Ministry of Education), Beijing Normal University, Beijing 100875, China [email protected] Department of Physics, Beijing Normal University, Beijing 100875, China Key Laboratory of Multiscale Spin Physics(Ministry of Education), Beijing Normal University, Beijing 100875, China Controlling the metal-insulator transition in graphene-based material is a crucial topic as it directly impacts its potential applications. Inspired by recent experiments, we study the effects of doping and bond disorder on metal-insulator transition in graphene within the Hubbard model on a honeycomb lattice. By using the determinant quantum Monte Carlo method, we first conduct tests on the value of sign under various parameters, such as electron density, on-site interactions, temperature, and lattice size, so as to select the appropriate parameters to alleviate the impact of the sign problem. Given the knowledge that bond disorder can lead to a mental-insulator transition, our study has revealed, after ruling out the influence of size effects, that the critical strength of disorder increases as the electron density decreases while decreasing as the on-site interactions increase. Furthermore, we compared our results with experimental data and concluded that, in actual graphene materials, the localization effect induced by doping plays a dominant role, resulting in an insulating phase. Doping driven metal-insulator transition in disordered graphene Tianxing Ma ^1 Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP, F-69342, Lyon Cedex 07, France ^2 CNRS, Univ de Lyon, ENS de Lyon, Laboratoire de Physique, F-69342 Lyon, France ^3 Department of Network and Data Science, Central European University, 1100 Vienna, Austria ^4 Rényi Institute of Mathematics, 1053 Budapest, Hungary ^*Corresponding author: [email protected] =========================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Since the discovery of graphene, a honeycomb single layer of sp^2-bonded carbon atoms, it has attracted enormous attention because of its excellent electrical, structural, mechanical, and optical properties, which have always been the critical and challenging aspects of the research.<cit.> Due to its unique semimetal nature, intrinsic graphene can not provide sufficient conductivity for desired applications, and doping is considered as an optimal way to tailor the electronic structure of graphene,<cit.> which allows for control of the Fermi level E_F even pushes the van Hove singularity into the vicinity of E_F and impact on superconducting pairing.<cit.> Moreover, doping plays an extremely important role in various applications, such as photodetectors,<cit.> sensors,<cit.> field-effect transistors,<cit.> and so on. In these applications, the regulation of metal-insulator transition (MIT) in graphene materials is very crucial, as it has a direct impact on further applications of these materials.<cit.> Therefore, doping-dependent MIT in graphene is a worthwhile problem to investigate. 
In essence, MIT can be driven by various mechanisms, resulting in different types of insulators: changing the chemical potential can produce a transition from a metal to a band insulator.<cit.> Strong correlations can drive metals into Mott insulators with an energy gap,<cit.> while Anderson insulators originate from disorder-induced localized insulators, where no gap can be observed in the spectrum.<cit.> It is of great importance to tune and control MIT on graphene for applications.<cit.> However, the nature of the metal-insulator transition remains elusive despite tremendous effort due to the complex interaction of doping, chemistry, elastic strain, and other applied fields.<cit.> There have been many experimental studies on MIT in graphene-based system. As early as 2009, researchers found that dosing atomic hydrogen on the surface of graphene would cause the system to transition from a metallic phase to an insulating phase and they discussed this phenomenon by possible transition to a strongly Anderson localized ground state.<cit.> Reports on MIT in nitrogen-doped and oxygen-doped graphene materials in 2016 further indicated that doping would transform the material from a metallic phase into an insulating phase.<cit.> Recent reports also suggest the possibility of modulating MIT in graphene through an externally applied electric field.<cit.> Drawing inspiration from the aforementioned research, we conducted an investigation on the mechanical properties of graphene lattices at MIT. Due to the fact that doping leads to changes in carrier density and introduces disorder into the system at the same time,<cit.> while an applied electric field can also modulate electron density,<cit.> we took into account both disorder and electron density in the system and studied their interplay and the impact they have on the MIT. In order to investigate strongly correlated problems with both disorder and doping, the determinant quantum Monte Carlo (DQMC) method is a powerful tool<cit.>. In the context of QMC simulations, various interesting MIT phenomena have been reported in the honeycomb lattice.<cit.> For example, a disorder-induced nonmagnetic insulating phase is found to emerge from the zero-temperature quantum critical point, separating a semimetal from a Mott insulator at half filling.<cit.> Furthermore, recent QMC simulations on a bilayer honeycomb lattice have identified a potential deconfined quantum critical point in interacting Dirac fermions as a new area of study for investigating the MIT.<cit.> Localization due to the on-site Coulomb interaction and disorder can also induce an insulating transition.<cit.> In this paper, we completed our simulations by the DQMC method for cases with different electron densities and bond disorder strength to investigate the MIT in doped graphene with a disordered Hubbard model. Our main focus is on the impact of electron density, on-site Coulomb interaction, and bond disorder on the conductivity σ_dc. We analyzed the interplay between these three factors and found that doping increases conductivity, which is favorable for the formation of metallic phases, while disorder has the opposite effect. The impact of the on-site Coulomb interaction on σ_dc depends on the particle-hole symmetry: at half-filling, the on-site Coulomb interaction suppresses conductivity, while deviating from half-filling can promote conductivity. 
Our study expands the understanding of MIT in honeycomb lattice through doping and disorder and may provide some inspiration for modulating MIT in experiments. § MODEL AND METHODS The Hamiltonian for disordered Hubbard model on a honeycomb lattice is defined as Ĥ= -∑_i,j,σt_ ij(ĉ_ iσ^†ĉ_ jσ^†+ĉ_ jσ^†ĉ_ iσ^†)-μ∑_ iσn̂_ iσ +U∑_ in̂_ i↑n̂_ i↓ where t_ ij represent the hopping amplitude between two nearest-neighbor sites i and j, ĉ_ iσ^†(ĉ_ jσ^†) is the creation (annihilation) operator of a spin-σ electron at site i( j), and n̂_ iσ=ĉ_ iσ^†ĉ_ jσ^† is the number operator, denotes the number of spin-σ electrons at site i. The chemical potential μ determines the density of the system, and when μ=U/2, n=1, the system is half-filled, indicating the particle-hole symmetry. Here U>0 represent the on-site repulsive interaction. Bond disorder is induced by modifying the matrix element t_ ij of the hopping matrix, which is chosen from t_ ij∈[t-Δ/2,t+Δ/2] and zero otherwise with a probability P(t_ ij)=1/Δ. We set t=1 as the energy scale. The strength of disorder can be characterized by Δ, which represents the magnitude of the modification of matrix elements t_ ij in the hopping matrix. In the presence of disorder, reliable results are obtained by taking an average of 20 disorder simulations, as it has been demonstrated to effectively avoids errors introduced by randomness.<cit.> The DQMC method is employed to complete simulations on disordered Hubbard model of doped honeycomb lattice at finite temperature with periodic boundary condition. In DQMC, the partition function Z=Tr e^-β H is represented as an integral over the configuration space of a set of interacting fermions on a lattice and the integral is completed by the Monte Carlo sampling. The imaginary time interval (0,β) is discretely divided into M slices of interval Δτ, which is chosen as small as 0.1 to control the “Trotter errors". The diagonalization of two-operator products can be achieved with simplicity; however, the same cannot be said for on-site interaction involving four-operator products as they need to be decoupled into quadratic terms before computation by a discrete Hubbard-Stratonovich (HS) field. Then, by analytically integrating the Hamiltonian quadratic term, the partition function can be converted into the product of two fermion determinants, where one is spin up and the other is spin down. The value of the fermion determinant is not always positive in calculations, except for a few exceptional cases, and this will cause sign problem. We calculated the average fermion sign sign, which is the ratio of the integral of the product of up and down spin determinants to the integral of the absolute value of the product<cit.> ⟨ S ⟩ = ∑_ X det M_↑( X) det M_↓( X) /∑_ X | det M_↑( X) det M_↓( X) | to measure the severity of the sign problem. sign=1 indicates the absence of sign problem. To study the MIT of the system, we computed the T-dependent DC conductivity from calculating the momentum q- and imaginary time τ-dependent current-current correlation function Λ_xx(q,τ): σ_dc(T)=β^2/πΛ_xx(q=0,τ=β/2) where Λ_xx(q,τ)=<ĵ_x(q,τ)ĵ_x(-q,0)>, β=1/T, ĵ_x(q,τ) is the Fourier transform of time-dependent current operator ĵ_x(r,τ) in the x direction: ĵ_x(r,τ) = e^Hτ/hĵ_x(r)e^-Hτ/h where ĵ_x(r) is the electronic current density operator, defined in Eq.(<ref>). 
ĵ_x(r) = i∑_σt_i+x̂,i×(c_i+x̂,σ^+c_iσ-c_i σ^+c_i+x̂,σ) Eq.(<ref>) has been used for MIT in the Hubbard model in many studies.<cit.> § RESULTS AND DISCUSSION As the system is doped away from half-filled, the particle-hole symmetry no longer exists, resulting in a sign problem. We have known that sign∼ e^-β N_sγ, where γ relies on the values of n and U. In the case of a given fixed n value, γ is a monotonic function of U; whereas, with respect to a designated U value, γ is relatively small at certain specific values of n. To ensure the reliability of the data, the value of the average sign sign, given by Eq.(<ref>), was calculated and the corresponding results are presented in Fig.<ref>. We present the average sign sign as a function of the electron density n for different values of (a) disorder strength, (b) on-site interaction, (c) temperature, and (d) lattice size. Our studies were conducted in the region of n≥0.85, with the dashed line indicating the case of n=0.85. Obviously, when the system is doped, the average sign deviates from 1 and starts to decrease rapidly. The sign problem becomes more severe as the inverse temperature, interaction strength, lattice size increase, while introducing disorder can alleviate the sign problem to some extent. This is consistent with the preceding investigations.<cit.> Fig.<ref>(a) shows the variation of average sign with respect to n for different disorder strengths Δ at L=12, U=3.0 and β=10. It can be observed that in the clean limit, Δ=0.0, the sign problem is severe and the calculation is almost impossible even with minor doping. However, the introduction of disorder partially alleviates the sign problem, and in the regime Δ≥1.0, which is of our primary interest, the sign problem is effectively suppressed. Fig.<ref>(b) exhibits the influence of on-site interaction on the sign problem when L=12, Δ=1.5 and β=10, implying that a larger U greatly exacerbates the sign problem. Moreover, it is observable that when U<2.5, sign∼ 1, making the impact of the sign problem almost negligible. The similar consequence is also evident in the Fig.<ref>(c): when β <6, the sign problem has a minimal impact; however, as β increases and the temperature decreases, the sign problem becomes increasingly severe. Fig.<ref>(d) displays the effect of lattice size L on the sign problem: as the lattice size increases, sign decreases and the sign problem becomes dire. Given the significance of the sign problem, along with the computational processing time considerations, we opt to utilize a lattice size of L=12 as the primary subject of inquiry in this article, building upon the conclusion presented in Fig.<ref>. In Fig.<ref>, the dc conductivity is shown as a function of the temperature T for several values of the disorder strength Δ. The values are computed on the L=12 lattice with coupling strength U=2.0. Figs.<ref>(a)-(d) represent the situations under different density: (a) n=1.00; (b) n=0.95; (c) n=0.90; and (d) n=0.85. We have known that the system behaves as a mental in the clean limit at half-filling with the coupling strength U=2.0<cit.>, which means that in the low-temperature regime, dσ_dc/dT<0 and σ_dc diverges as the temperature is further decreased to the limit T→0. Then consider about the situations with bond disorder, the system will transfer from metallic to insulating phase, indicating by dσ_dc/dT>0 at low-T, with increasing value of Δ, as is shown in Fig.<ref>(a). At this condition, the critical disorder strength for MIT Δ_c is currently between 1.5 and 2.0. 
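To make the disorder model explicit, the sampling and disorder averaging described in the Model and Methods section can be organized as in the Python sketch below. The dqmc_current_correlator callable is a hypothetical stand-in for the actual DQMC measurement of Λ_xx(q=0, τ=β/2), which is not reproduced here; only the bond sampling, the conductivity formula, and the 20-realization average follow the text above.

import numpy as np

def sample_bonds(n_bonds, t=1.0, delta=1.5, rng=None):
    # Bond disorder: each hopping drawn uniformly from [t - delta/2, t + delta/2]
    rng = rng if rng is not None else np.random.default_rng()
    return rng.uniform(t - delta / 2, t + delta / 2, size=n_bonds)

def sigma_dc(lambda_xx_beta_half, beta):
    # sigma_dc(T) = beta^2 / pi * Lambda_xx(q=0, tau=beta/2)
    return beta ** 2 / np.pi * lambda_xx_beta_half

def disorder_averaged_sigma(dqmc_current_correlator, n_bonds, beta, delta,
                            n_realizations=20, seed=0):
    # dqmc_current_correlator(bonds, beta) -> Lambda_xx(q=0, beta/2) is a placeholder
    rng = np.random.default_rng(seed)
    values = [sigma_dc(dqmc_current_correlator(sample_bonds(n_bonds, delta=delta, rng=rng),
                                               beta), beta)
              for _ in range(n_realizations)]
    return np.mean(values), np.std(values) / np.sqrt(n_realizations)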
When the system deviates from half-filling, as is shown in Figs.<ref>(b)-(d), distinct insulation behavior is only observed for Δ>1.5. From this, we may draw the conclusion that in disordered systems, doping will increase the critical disorder strength Δ_c required for MIT. The impact of electron density n on MIT will be further discussed in Fig.<ref>. To exclude the influence of system size being smaller than the localization length on insulation, we compute the finite-size effect. Fig.<ref> exhibits the response of the conductivity σ to the lattice size L=9,12,15, with respect to different electron density (a) n=0.95, (b) n=0.85 and varying values of disorder (a) Δ=1.5, 2.0 and (b) Δ=0.0, 2.5. Upon comparison, it is evident that both the metallic and insulating phases are minimally affected by system size in terms of conductivity. Additionally, Fig.<ref>(a) illustrates that the critical disorder strength values remain consistent across varying lattice dimensions of L=9,12,15. As the computational simulation time rapidly increases with an increase in lattice size, and a larger L suggests more severe sign problems while deviating from half-filling, it is reasonable that we selected L=12 as the primary focus of our study. In Fig.<ref>, we further investigate the impact of electron densities n on the MIT. Fig.<ref>(a) and Fig.<ref>(b) respectively demonstrate the effect of n on the σ_dc-T curve for L=12, U=2.0, and the disorder strength (a)Δ=1.5 and (b)Δ=2.0: When Δ=1.5, as shown in Fig.<ref>(a), at n=1.00, the system exhibits an insulating phase due to hopping disorder, while deviating away from half-filling, the conductivity σ_dc increases with decreasing temperature, indicating metallic behavior, thus demonstrating a MIT induced by doping; When Δ=2.0, however, as shown in Fig.<ref>(b), the system will always remain in an insulating phase irrespective of the variation in n. We have also included the σ_dc-T curve for n=0.7, which reveals that within our measurement range, doping will not induce a MIT when the disorder strength Δ=2.0. A similar situation can be observed at on-site Coulomb interaction U=3.0, as shown in Fig.<ref>(c)L=12, U=2.0, Δ=1.5 and (d)L=12, U=2.0, Δ=2.0. Doping induces a transition from an insulating to a metallic phase at Δ=1.5, whereas there is no metallic phase observed in the range of n≤0.85 when Δ=2.0. To obtain a more accurate determination of the critical disorder strength for the MIT, we plot the variation of conductivity σ_dc with disorder strength Δ at the three lowest temperatures β=6,8,10 in Fig.<ref>(a)-(c). When Δ<Δ_c, the σ_dc increases with decreasing temperature, exhibiting metallic behavior, while for Δ>Δ_c, the σ_dc decreases with decreasing temperature, exhibiting insulating behavior. The three curves in each subplot of Fig.<ref> intersect nicely at a point where the conductivity σ_dc becomes temperature-independent, marking the critical point of MIT. Here, (a) corresponds to L=12,U=2.0,n=0.95; (b) corresponds to L=12,U=2.0,n=90; and (c) corresponds to L=12,U=3.0,n=0.90. We have conducted extensive calculations to obtain the values of Δ_c for different parameters and plot the variation of Δ_c with on-site Coulomb interaction U for electron density n=1.00 and n=0.85 in Fig.<ref>(d), where the curves above denote the insulating phase and the curves below denote the metallic phase. 
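The crossing-point construction used in these plots can be automated; a minimal post-processing sketch (our own, operating on disorder-averaged σ_dc data assumed to be available as arrays) locates the disorder strength at which the lowest- and highest-temperature curves intersect.

import numpy as np

def crossing_point(deltas, sigma_by_beta):
    # deltas: disorder strengths; sigma_by_beta: dict beta -> sigma_dc(Delta) array
    betas = sorted(sigma_by_beta)
    diff = sigma_by_beta[betas[-1]] - sigma_by_beta[betas[0]]   # lowest T minus highest T
    # diff > 0 signals metallic behavior, diff < 0 insulating behavior
    idx = np.where(np.diff(np.sign(diff)) != 0)[0]
    if len(idx) == 0:
        return None                       # no crossing inside the sampled window
    i = idx[0]                            # linear interpolation between bracketing points
    frac = diff[i] / (diff[i] - diff[i + 1])
    return deltas[i] + frac * (deltas[i + 1] - deltas[i])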
An interesting phenomenon can be observed: as n=1.00 and the system is half-filled, the critical disorder strength Δ_c of MIT decreases with an increase in U, indicating a suppressing effect of U on the metallic state; whereas when n=0.85 and the system deviates from half-filling, Δ_c increases with an increase in U, signifying a promoting effect of U on the metallic state. Next we move on to the role of U in the MIT for half-filled and doped cases. Fig.<ref>(d) demonstrates that at n=1.0 and Δ=1.5, an increase in U drives the system from a metallic state to an insulating state, whereas at n=0.85 and Δ=2.0, an increase in U leads the system from an insulating state to a metallic state. We set n=1.00,0.95,0.90,0.85 in Fig.<ref>(a)-(d). In order to observe the phase transition, we set the disorder strength to Δ=1.5 for half-filling and Δ=2.0 for deviations from half-filling, respectively. Furthermore, we set the minimum temperature parameter to β=14. Although this approach incurs a significant degree of error, it still yields valuable information. We then proceed to calculate the temperature dependence of the conductivity σ at different on-site Coulomb interactions U=1.0,2.0,3.0. Fig.<ref>(a) shows the transition of the system from a metallic state to an insulating state as the on-site Coulomb interaction U increasing, while Fig.<ref>(b)-(d) show the transition in the opposite direction. At U=1.0,2.0, the system shows insulating phases and at U=3.0, the system exhibits metallic phase. Overall, Fig.<ref> demonstrates that in half-filled systems, U has a suppressing effect on the metallic state, while in doped systems, U has a promoting effect on the metallic state. § CONCLUSION In summary, we employed the determinant quantum Monte Carlo method to investigate the regulatory effects of doping and disorder on the metal-insulator transition process in graphene materials. We discussed the factors affecting the MIT, including doping, temperature, lattice size and on-site Coulomb interactions by carrying out calculations for variations of the DC conductivity σ_dc with temperature under different values, utilizing the reciprocal of the variation of σ_dc with temperature T to determine the metallic or insulating phase of the system. Through our calculations, we have reached the conclusion that doping increases conductivity and induces a transition from insulator to metal phase, while disorder has the opposite effect. In experiments, substitutional doping or adsorbate doping often simultaneously alters the carrier density and introduces disorder, thus making the competition between doping and disorder important in the study of MIT in graphene materials. Our calculations show that when doping and disorder coexist, a larger disorder strength may cause the system to transition from the metal phase to the insulating phase. This finding is consistent with the metal-insulator transition phenomenon observed in hydrogen, nitrogen, and oxygen substitutional doped graphene materials in experiments.<cit.> Our research contributes to a deeper understanding of the mechanisms underlying the metal-insulator transition in graphene materials, and may be helpful in the development of applications for graphene materials. § ACKNOWLEDGEMENTS This work was supported by NSFC (No. 11974049). The numerical simulations in this work were performed at HSCC of Beijing Normal University.
http://arxiv.org/abs/2307.05774v1
20230711200600
Spectral Stability of Periodic Traveling Wave Solutions for a Double Dispersion Equation
[ "Fábio Natali", "Thiago P. de Andrade" ]
math.AP
[ "math.AP", "math-ph", "math.MP", "76B25, 35Q51, 35Q53" ]
http://arxiv.org/abs/2307.04337v1
20230710042906
Detection of temporal fluctuation in superconducting qubits for quantum error mitigation
[ "Yuta Hirasaki", "Shunsuke Daimon", "Toshinari Itoko", "Naoki Kanazawa", "Eiji Saitoh" ]
quant-ph
[ "quant-ph" ]
]Detection of temporal fluctuation in superconducting qubits for quantum error mitigation Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan. [Author to whom correspondence should be addressed: ][email protected] Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan. Quantum Materials and Applications Research Center, National Institutes for Quantum Science and Technology (QST), Tokyo 152-8550, Japan. IBM Quantum, IBM Research-Tokyo, 19-21 Nihonbashi Hakozaki-cho, Chuo-ku, Tokyo, 103-8510, Japan. Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan. Institute for AI and Beyond, The University of Tokyo, Tokyo 113-8656, Japan. WPI Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577, Japan. Institute for Materials Research, Tohoku University, Sendai 980-8577, Japan. We have investigated instability of a superconducting quantum computer by continuously monitoring the qubit output. We found that qubits exhibit a step-like change in the error rates. This change is repeatedly observed, and each step persists for several minutes. By analyzing the correlation between the increased errors and anomalous variance of the output, we demonstrate quantum error mitigation based on post-selection. Numerical analysis on the proposed method was also conducted. [ Eiji Saitoh August 12, 2023 =================== Over the last few decades, there has been a growing trend towards developing quantum computers and advances in quantum engineering technologies are overwhelming <cit.>. Among diverse materials or artificial atoms proposed to serve as quantum bits (qubits), superconducting qubits<cit.> are one of the most promising candidates. A number of studies have been conducted to improve the performance of superconducting qubits and several breakthroughs have been achieved<cit.>. Nevertheless, even the state-of-the-art qubits unpredictably interact with the surrounding environments and suffer from noise during computation, which places a critical limit on their computational abilities<cit.>. Several attempts have been made to identify microscopic pictures of unexpected interactions and improve the device's performance <cit.>. Recent evidence suggests that superconducting qubits exhibit a temporal change in their coherence times under a continuous measurement <cit.>. Qubit instability poses a serious threat to quantum computers. A sudden decrease in the qubit lifetime can temporarily degrade the device's performance. In addition, most of the current quantum error mitigation (QEM) techniques <cit.> are unable to mitigate time dependent noise<cit.>, and a temporal change in decoherence calls for re-learning of a noise model or developing more sophisticated QEM techniques. Therefore, it is imperative to investigate the dynamics of a superconducting qubit system and assess its stability. In this paper, we report a temporal change in the qubit errors in a superconducting quantum computer. We also developed an anomaly detection method for a temporal change in errors. All the experiments were performed on , which is one of the IBM Quantum systems. This quantum computer has 27 transmon qubits and the readout assignment errors are around 1% on average. The energy relaxation times of the qubits are approximately 1.2× 10^2 μ s on average, with the phase damping times around 1.2× 10^2 μ s. We iterate a same quantum circuit and a subsequent measurement for L times at a sampling rate of several hundred microseconds. 
As a result, we obtain a binary sequence 𝐗∈{0, 1}^L. To estimate the qubit output fluctuations, we transform a subsequence of 𝐗 with size N into a fluctuation indicator S, which is defined by S = 1/m-1∑_j = 1^m(Y_j - Y)^2/Y ( 1 - Y)/n., where Y_j = 1/n∑_i = (j-1)n + 1^jnX_i, and Y = 1/m∑_j = 1^mY_j with some integers n and m that satisfy the condition N = nm≪ L. In the experiments below, we obtain a time series of S from the entire sequence 𝐗∈{0, 1}^L using the following procedure. We first take the average of every n data to obtain a time series 𝐘 with the length M = ⌊L/n⌋. We then calculate the time series 𝐒 from 𝐘 by applying a sliding window of size m, and thus the length of 𝐒 is given by l = M- m + 1. The indicator S is introduced based on the following background. From the Born's rule, the measurement outcome X_i in the i-th measurement is a random variable whose distribution is given by the binomial distribution B(1, P_1), where P_1 denotes the probability of measuring the excited state. The average Y_j is also a random variable whose probability distribution is determined by the binomial distribution B(n, P_1). Thus, the expectation value of the sample mean Y = 1/m∑_j = 1^mY_j is equal to P_1, and that of the unbiased sample variance V_samp = 1/m-1∑_j = 1^m(Y_j-Y)^2 is equal to P_1(1 - P_1)/n. Since P_1 is unknown, we estimate the expected variance with V_bi = Y(1-Y)/n, and S is given by the ratio of V_samp and V_bi in Eq. (<ref>). Intuitively, S quantifies the extent to which the sample variance deviates from what is expected under the assumption that {X_i}_i are generated from an identical binomial distribution. S can be used to detect a temporal change in qubit errors and exclude abnormal outcomes in quantum computing as discussed later. Note that S is a random variable obtained from the random variables X_1, X_2, …, X_N and S takes several values with different probabilities. The probability distribution of S is well described by the chi-squared distribution with (m-1) degrees of freedom and the mean of S is given by 1 with the variance σ^2 = 2/m-1, whose rigorous derivation is provided in the latter part of this letter. Thus, when we calculate S from an experimental result (for clarity we represent the experimental value as S_exp and use S_theo when we describe a stochastic characteristic of S), S_exp should spread randomly around 1 with the statistical fluctuation σ = √(%s/%s)2m-1. If S_exp significantly deviates from the probabilistic behavior of S_theo, we reject the hypothesis that the binary data X_1, X_2,… ,X_N are generated from an identical binomial distribution B(1, P_1) and the data are classified as anomalous in our QEM method. First, we performed a one-qubit continuous measurement on the IBM quantum processor. The pulse sequence is depicted in Fig. <ref>(a). The qubit is initialized to the ground state with the reset pulse, excited with the π pulse, and then measured. We repeated this pulse sequence for 1000 seconds with the repeat delay time τ≈ 6× 10^2 μ s to record normal and abnormal behavior in a single set of experimental data. The time series of S_exp defined by Eq. (<ref>) was calculated from the obtained outcomes with the parameters n = m = 128 and L = 1787904. Figure <ref>(b) illustrates the time series of S_exp. The value of S_exp remains almost constant for the first 230 s. This behavior is consistent with the fact that the expectation value of S_theo is equal to 1 with the standard deviation σ≈ 0.125. 
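For reference, the construction of the time series of S_exp from a measured binary sequence can be written compactly as in the Python sketch below; the function name is ours, but the block size n, window size m and the estimator follow Eq. (<ref>) and the procedure described above.

import numpy as np

def fluctuation_indicator(x, n=128, m=128):
    # Time series of S from a 0/1 outcome sequence x: average over blocks of n
    # shots to obtain Y_j, then evaluate S on a sliding window of m block averages
    x = np.asarray(x, dtype=float)
    M = len(x) // n
    y = x[: M * n].reshape(M, n).mean(axis=1)
    s = np.empty(M - m + 1)
    for j in range(M - m + 1):
        w = y[j: j + m]
        v_samp = w.var(ddof=1)                    # unbiased sample variance of Y_j
        v_bi = w.mean() * (1 - w.mean()) / n      # expected binomial variance
        s[j] = v_samp / v_bi
    return s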
Then, however, S_exp abruptly increases to approximately 4 [see the red band in Fig. <ref>(b)], which is 24 standard deviations above the mean and cannot be explained in terms of statistical error. This increase persists for 110 seconds, and sharp switching behavior is repeatedly observed in the rest of the record, as visualized by the four red bands in Fig. <ref>(b). This phenomenon is observed repeatedly in other experiments on . Figure <ref>(c) compares the error rates in two time periods. The red bar represents 1 - P_1 in the time period from 430 s to 720 s, while the black bar shows that from 870 s to 1000 s, where P_1 denotes the average of the binary outcomes and should be 1 in the absence of errors. The temporal increase in S_exp appears to be closely related to a temporal increase in errors. This correlation between S_exp and the errors suggests that we can reduce errors by classifying the obtained outcomes based on the values of S_exp and eliminating the anomalous outcomes. Based on this, we propose a QEM technique based on post-selection (we also refer to it as anomaly detection). We first compute the time series 𝐒_exp from an obtained binary sequence 𝐗. Then, we compare each element of 𝐒_exp against a threshold value S_thre. If an element exceeds the threshold, we label the corresponding subsequence of 𝐗 as anomalous and segregate it from the remaining sequence. The threshold is determined based on the p-value of the detection; here we employ S_thre = 1.5, which corresponds to a p-value of 0.006334%. This method can easily be extended to multi-qubit computations by computing the time series of S_exp for each qubit individually. We performed a Bell state measurement to demonstrate the proposed QEM, as illustrated in Fig. <ref>. We obtained two binary sequences from the two qubits and calculated the time series of S_exp from the two sequences individually. For each time window of size N, we calculate S_1 and S_2 from the two binary subsequences by Eq. (<ref>). If either S_1 or S_2 exceeds the threshold value S_thre = 1.5, the corresponding two binary subsequences are labeled as anomalous, and as normal otherwise. The time series of ⟨Z_1Z_2⟩ is depicted in Fig. <ref>(a), where ⟨Z_1Z_2⟩ denotes the expectation value of the observable Z_1Z_2, calculated from the two binary sequences within the same window. ⟨Z_1Z_2⟩ should be 1 in the absence of errors. The red colored regions represent the time periods labeled as anomalous based on S_exp, and the blue represents the normal state. ⟨Z_1Z_2⟩ exhibits a pronounced drop to around 0.85 in the anomalous time period [the red band in Fig. <ref>(a)], while it shows little fluctuation around 0.97 in the normal time periods. We obtain two histograms from the normal and anomalous outcomes, as depicted in Fig. <ref>(b). The probabilities of measuring the four states, |00⟩, |01⟩, |10⟩, and |11⟩, are visualized by the black bars in Fig. <ref>(b). The top panel shows the probability distribution calculated from the data classified as the normal state [colored blue in Fig. <ref>(a)], while the bottom panel depicts that from the anomalous state (colored red). The probability distribution of the anomalous state exhibits a prominent peak in the |10⟩ state. We compare the values of 1 - ⟨Z_1Z_2⟩ obtained from the two categorized data sets, as shown in Fig. <ref>(c); the error is markedly smaller for the normal data. This means that our method successfully removes the abnormal data and improves the fidelity in estimating the expectation value of a physical observable.
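For the two-qubit case, the post-selection described above can be sketched as follows. This is an illustrative re-implementation rather than the authors' pipeline; the helper names, the convention that outcome 0 (ground state) maps to the Z eigenvalue +1, and the window bookkeeping are our assumptions, while the threshold S_thre = 1.5 and n = m = 128 are taken from the text.

```python
import numpy as np

def s_series(x, n=128, m=128):
    """Sliding-window fluctuation indicator S for one qubit's binary outcomes."""
    M = len(x) // n
    y = np.asarray(x[: M * n], dtype=float).reshape(M, n).mean(axis=1)
    return np.array([y[k:k + m].var(ddof=1) /
                     (y[k:k + m].mean() * (1 - y[k:k + m].mean()) / n)
                     for k in range(M - m + 1)])

def post_selected_zz(x1, x2, n=128, m=128, s_thre=1.5):
    """Estimate <Z1 Z2> per window and flag windows where either qubit's S
    exceeds the threshold; x1 and x2 are aligned outcome sequences of equal length."""
    s1, s2 = s_series(x1, n, m), s_series(x2, n, m)
    keep = (s1 <= s_thre) & (s2 <= s_thre)

    M = len(x1) // n
    z1 = 1.0 - 2.0 * np.asarray(x1[: M * n], dtype=float)   # outcome 0 -> +1, 1 -> -1
    z2 = 1.0 - 2.0 * np.asarray(x2[: M * n], dtype=float)
    zz_blocks = (z1 * z2).reshape(M, n).mean(axis=1)         # per-block <Z1 Z2>
    zz_win = np.array([zz_blocks[k:k + m].mean() for k in range(len(keep))])
    return zz_win, keep

# Usage: zz, keep = post_selected_zz(x1, x2)
#        print("normal:", zz[keep].mean(), " anomalous:", zz[~keep].mean())
```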
We then benchmarked the proposed protocol on a quantum volume circuit <cit.> as an example of a sampler task, in which we measure the probability distributions of the final quantum states. The result is shown in Fig. <ref>. The circuit comprises three qubits, and the qubits are measured after three layers of operations, as shown in Fig. <ref>(a). Each layer is characterized by sampling a random permutation and then applying a random unitary transformation to the first two qubits. We compute the time series of S_exp for the three qubits and classify the outcomes into anomalous and normal state data, as illustrated in Fig. <ref>(b). The blue regions represent the outcomes classified as normal, while the red correspond to the anomalous ones. We obtain two probability distributions from the two categorized experimental data sets and compare them with the ideal distribution (the black bars), as depicted in Fig. <ref>(c). The distribution derived from the normal data is overall closer to the ideal distribution, demonstrating a 5.5% improvement in the Hellinger fidelity <cit.>. We note that in our setup the circuit outcomes have been recorded for a sufficiently long time to investigate the time variation of S_exp. However, our mitigation technique can be applied at a moderate sampling overhead of tens of thousands of shots, which is readily available with IBM Quantum processors. Finally, we perform a theoretical analysis of the probability distribution of S_theo introduced in Eq. (<ref>). Note that the i-th measurement outcome X_i is given by a random variable following the Bernoulli distribution B(1, p_i), where p_i is the probability of measuring the excited state in the i-th measurement. Here we make two fundamental assumptions, namely, that p_i is a constant P_1 and that {X_i}_i independently obey the identical Bernoulli distribution. Under these assumptions, it analytically follows that the random variables nY_j = ∑_i=(j-1)n+1^jn X_i independently obey the binomial distribution B(n, P_1) and that the variance of {Y_j}_j is given by P_1(1 - P_1)/n. Since n is sufficiently large (in the experiments n = 128), we can apply the central limit theorem and approximate the probability distribution of {Y_j}_j with a Gaussian distribution. We then express S_theo in Eq. (<ref>) in terms of new random variables {Z_j}_j defined by Z_j = (Y_j - P_1)/√(P_1(1 - P_1)/n), which independently obey the standard normal distribution 𝒩(0, 1), where 𝒩(μ, σ^2) denotes a Gaussian distribution with mean μ and variance σ^2. The expression for S_theo is given by S_theo = [1/(m-1)∑_j=1^m (Z_j-Z)^2] / [( Z/√(n) + √(P_1/(1 - P_1)) )( -Z/√(n) + √((1 - P_1)/P_1) )], where Z = 1/m∑_j=1^m Z_j ∼ 𝒩(0, 1/m). Z/√(n) takes values of order 1/√(nm) with high probability; thus, when 1/√(nm) is much smaller than √(P_1/(1 - P_1)) and √((1 - P_1)/P_1), Z/√(n) is negligible compared to √((1 - P_1)/P_1) and √(P_1/(1 - P_1)) with high likelihood. As a result, Eq. (<ref>) reduces to S_theo ≈ S̃ ≡ 1/(m-1)∑_j=1^m (Z_j-Z)^2. The sum ∑_j=1^m (Z_j - Z)^2 obeys the chi-squared distribution with (m - 1) degrees of freedom <cit.>, and therefore the statistical characteristics of S̃ can be derived analytically. In particular, the mean of S̃ is μ = 1 and the variance is σ^2 = 2/(m - 1), which is independent of P_1. This fact suggests that we can use the same threshold for anomaly detection in practical quantum computation, where P_1 (or the measured quantum state) is unknown.
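The chi-squared characterization can be checked numerically with a few lines. The snippet below is our own sanity check (it relies on scipy.stats, and the choice P_1 = 0.9 and the seed are arbitrary); it relates the threshold S_thre = 1.5 to its p-value and reproduces the mean and variance of S̃ by Monte-Carlo sampling.

```python
import numpy as np
from scipy import stats

n, m = 128, 128
s_thre = 1.5

# Under the null model, (m-1)*S follows a chi-squared distribution with m-1
# degrees of freedom, so the p-value of the threshold is its survival function.
p_value = stats.chi2.sf((m - 1) * s_thre, df=m - 1)
print(f"p-value at S_thre = {s_thre}: {p_value:.3e}")   # ~6.3e-5, i.e. ~0.0063 %

# Monte-Carlo check of the mean and variance of S for one choice of P_1.
rng = np.random.default_rng(1)
P1, trials = 0.9, 100_000
Y = rng.binomial(n, P1, size=(trials, m)) / n
S = Y.var(axis=1, ddof=1) / (Y.mean(axis=1) * (1 - Y.mean(axis=1)) / n)
print(S.mean(), S.var())   # expected: ~1 and ~2/(m-1) ~ 0.0157
```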
The condition √((1 - P_1)/P_1), √(P_1/(1 - P_1)) ≫ 1/√(nm) is satisfied in most of our experiments since we use n = m = 128 and the inequality 0.01 ≤ P_1 ≤ 0.99 holds due to the 1% readout assignment errors. We then performed a Monte-Carlo simulation to support the validity of the discussion above, and the result is illustrated in Fig. <ref>. We numerically prepared 100,000 samples of S_theo for each of the P_1 values we chose and compared the distributions of S_theo with those of S̃. The sample means of S_theo for several P_1 values (the blue dots) and the expectation value of S̃ (⟨S̃⟩ = 1) (the red line) are depicted in Fig. <ref>(a), while Fig. <ref>(b) compares the variances of S_theo and S̃. The results show close agreement between the numerical and theoretical analyses for all the P_1 values. The probability density functions generated from the Monte-Carlo simulation are presented as the blue histograms in Fig. <ref>(c) for several P_1 values. The red lines show the functions calculated theoretically, in good agreement with the numerical histograms. In conclusion, we have investigated a temporal change in fluctuations in superconducting qubits by developing a statistic that quantifies the qubit stability. The measured temporal change is closely related to a temporal increase in errors, and we have demonstrated QEM by post-selecting the output based on this correlation. Furthermore, we have conducted an analytical study of the QEM method and performed a numerical simulation to verify the result. This work was supported by CREST (Nos. JPMJCR20C1, JPMJCR20T2) from JST, Japan; Grant-in-Aid for Scientific Research (S) (No. JP19H05600), Grant-in-Aid for Transformative Research Areas (No. JP22H05114) from JSPS KAKENHI, Japan. This work is partly supported by the IBM-UTokyo lab. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request. § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. §.§ Author Contributions Y. Hirasaki: Conceptualization (equal); Formal analysis (lead); Investigation (lead); Methodology (lead); Software (lead); Validation (equal); Writing – original draft (lead). S. Daimon: Conceptualization (lead); Funding acquisition (equal); Investigation (supporting); Methodology (supporting); Project administration (lead); Software (equal); Supervision (supporting); Validation (equal); Writing – review & editing (supporting). T. Itoko: Methodology (supporting); Validation (supporting); Writing – review & editing (supporting). N. Kanazawa: Project administration (supporting); Software (supporting); Supervision (supporting); Writing – review & editing (supporting). E. Saitoh: Funding acquisition (lead); Project administration (equal); Supervision (lead); Validation (equal); Writing – review & editing (lead).
http://arxiv.org/abs/2307.04037v2
20230708195151
Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages
[ "E. C. Nunes" ]
cs.RO
[ "cs.RO" ]
Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages 1st Eduardo Carvalho Nunes Department of Engineering University of Trás-os-Montes and Alto Douro 5000-801, Vila Real, Portugal ORCID: 0000-0002-5345-8854 ===================================================================================================================================================================== This article explores the use of drones in agriculture and discusses the various types of drones employed for different agricultural applications. Drones, also known as unmanned aerial vehicles (UAVs), offer numerous advantages in farming practices. They provide real-time and high-resolution data collection, enabling farmers to make informed irrigation, fertilization, and pest management decisions. Drones assist in precision spraying and application of agricultural inputs, minimizing chemical wastage and optimizing resource utilization. They offer accessibility to inaccessible areas, reduce manual labor, and provide cost savings and increased operational efficiency. Drones also play a crucial role in mapping and surveying agricultural fields, aiding crop planning and resource allocation. However, challenges such as regulations and limited flight time need to be addressed. The advantages of using drones in agriculture include precision agriculture, cost and time savings, improved data collection and analysis, enhanced crop management, accessibility and flexibility, environmental sustainability, and increased safety for farmers. Overall, drones have the potential to revolutionize farming practices, leading to increased efficiency, productivity, and sustainability in agriculture. Drone, Agriculture, UAV § INTRODUCTION The use of drones in agriculture has gained significant attention in recent years due to their potential to revolutionize farming practices. Drones, also known as unmanned aerial vehicles (UAVs), offer a range of applications that can enhance efficiency, productivity, and sustainability in agriculture. One of the key advantages of using drones in agriculture is their ability to provide real-time and high-resolution data collection <cit.>. Drones equipped with cameras, sensors, and imaging technologies can capture detailed imagery of crops, soil conditions, and field topography <cit.>. This data can be used for crop monitoring, assessment, and precision agriculture practices <cit.>. By analyzing this data, farmers can make informed decisions regarding irrigation, fertilization, and pest management, leading to optimized resource utilization and improved crop yields <cit.>. Drones also play a crucial role in precision spraying and application of agricultural inputs <cit.>. With their ability to navigate through fields and deliver targeted treatments, drones can reduce chemical wastage, minimize environmental impact, and improve the efficiency of pesticide and fertilizer application <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and optimize resource utilization <cit.>. Furthermore, drones offer accessibility to inaccessible or inaccessible areas by traditional means <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. This enables farmers to monitor large farmland areas quickly and efficiently, reducing the time and labor required for manual inspections <cit.>. 
Drones can cover large farmland areas in a fraction of the time it would take using traditional methods, leading to cost savings and increased operational efficiency <cit.>. In addition to data collection and monitoring, drones can assist in mapping and surveying agricultural fields. They can create high-resolution maps and 3D models, providing valuable information for crop planning, land management, and resource allocation. Drones equipped with advanced sensors, such as LiDAR or hyperspectral cameras, can capture detailed data for precise analysis and decision-making <cit.>. This enables farmers to identify areas of nutrient deficiencies, optimize irrigation practices, and implement site-specific management strategies. The use of drones in agriculture is challenging. Regulations and licensing requirements for drone operation vary across countries and regions, and compliance with these regulations is essential to ensure safe and responsible drone use <cit.>. Additionally, drones' limited flight time and battery capacity can pose challenges in large-scale farming operations <cit.>. However, advancements in drone technology, such as improved battery life and payload capacity, are addressing these limitations and expanding the possibilities for drone applications in agriculture. § DIFFERENT TYPES OF DRONES USED IN AGRICULTURE In agriculture, different types of drones are used for various applications. These drones offer unique capabilities and functionalities that cater to specific agricultural needs. Some of the commonly used types of drones in agriculture include: * Multi-Rotor Drones: Multi-rotor drones (Figure <ref>), such as quadcopters and hexacopters, are popular in agriculture due to their maneuverability and stability <cit.>. They are equipped with multiple rotors that allow them to hover in place, fly at low altitudes, and capture high-resolution imagery. Multi-rotor drones are suitable for tasks that require close and contained object capture, such as monitoring crop health, detecting pests and diseases, and applying targeted treatments <cit.>. * Fixed-Wing Drones: Fixed-wing drones (Figure <ref>) have a wing-like structure and are designed to fly like airplanes <cit.>. They are known for their long-flight endurance and ability to cover large areas. Fixed-wing drones are commonly used for mapping and surveying agricultural fields, as they can fly faster and cover more considerable distances. However, they require a runway for takeoff and landing, which can be a limitation in specific agricultural settings. * Hybrid Drones: Hybrid drones (Figure <ref>) combine the features of multi-rotor and fixed-wing drones <cit.>. They can take off and land vertically like multi-rotor drones and then transition to fixed-wing flight for longer endurance and coverage <cit.>. Hybrid drones are suitable for applications that require both close-range imaging and large-scale mapping, providing flexibility and versatility in agricultural operations. * Thermal Imaging Drones: Thermal imaging drones (Figure <ref>) are equipped with thermal cameras that capture infrared radiation emitted by objects <cit.>. These drones are used in agriculture to monitor crop health, detect irrigation issues, and identify areas of heat stress or pest infestation <cit.>. Thermal imaging drones can provide valuable insights into the temperature distribution and thermal patterns in agricultural fields, aiding precision agriculture practices. 
* Spraying Drones: Spraying drones (Figure <ref>), also known as agricultural drones or crop dusting drones, are specifically designed for the targeted application of pesticides, fertilizers, and other agricultural inputs <cit.>. These drones are equipped with spraying systems that can accurately and efficiently deliver chemicals to crops, reducing the need for manual labor and minimizing chemical wastage <cit.>. Spraying drones offer precise and controlled applications, reducing environmental impact and optimizing resource utilization. * Surveillance Drones: Surveillance drones (Figure <ref>) are used in agriculture for monitoring and security purposes <cit.>. These drones are equipped with cameras and sensors that capture real-time video footage and imagery, allowing farmers to monitor their fields, livestock, and infrastructure remotely <cit.>. Surveillance drones can help detect unauthorized activities, track animal movements, and identify potential threats or risks in agricultural operations. * Mapping and Surveying Drones: Mapping and surveying drones (Figrue <ref>) are used to create high-resolution maps and 3D models of agricultural fields <cit.>. These drones have advanced sensors, such as LiDAR (Light Detection and Ranging) or photogrammetry cameras, to capture detailed and accurate data <cit.>. Mapping and surveying drones are valuable tools for precision agriculture, enabling farmers to analyze topography, monitor soil conditions, and plan efficient land management strategies. * Payload-Specific Drones: Drones are designed for specific agricultural applications besides the above types. For example, there are drones equipped with hyperspectral sensors for detailed analysis of crop health and nutrient content <cit.>. There are also drones with specialized sensors for monitoring soil moisture levels, detecting weed infestations, or assessing plant growth parameters <cit.>. These payload-specific drones (Figure <ref>) cater to specific data collection needs in agriculture. § ADVANTAGES OF USING DRONES IN AGRICULTURE Using drones in agriculture offers several advantages contributing to improved efficiency, productivity, and sustainability in agricultural practices. The advantages of using drones in farming are: * Precision Agriculture: Drones enable precision agriculture practices by providing high-resolution imagery and data collection capabilities <cit.>. They can capture detailed information about crop health, soil conditions, and pest infestations, allowing farmers to make informed decisions and apply targeted treatments <cit.>. This precision approach helps optimize resource utilization, reduce input wastage, and increase crop yields <cit.>. * Cost and Time Savings: Drones can cover large areas of farmland quickly and efficiently, reducing the time and labor required for manual inspections and data collection <cit.>. They can perform tasks such as crop monitoring, mapping, and spraying in a fraction of the time it would take using traditional methods <cit.>. This leads to cost savings by minimizing the need for manual labor and reducing the use of resources such as water, fertilizers, and pesticides <cit.>. * Improved Data Collection and Analysis: Drones equipped with various sensors, such as cameras, thermal imaging, and multispectral sensors, can collect a wide range of data about crops, soil, and environmental conditions <cit.>. 
This data can be used for detailed analysis and monitoring, enabling farmers to detect early signs of crop stress, nutrient deficiencies, or disease outbreaks <cit.>. The data collected by drones can be processed using advanced analytics and machine learning algorithms to generate actionable insights for better decision-making <cit.>. * Enhanced Crop Management: Drones provide real-time and up-to-date information about crop health, allowing farmers to implement timely interventions and optimize crop management practices <cit.>. For example, drones can help identify areas of the field that require additional irrigation or fertilization, enabling precise application and reducing waste <cit.>. They can also assist in monitoring crop growth, estimating yield potential, and predicting harvest times <cit.>. * Accessibility and Flexibility: Drones offer accessibility to areas that are difficult to reach or inaccessible by traditional means, such as steep slopes or dense vegetation <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. Drones can be deployed quickly and easily, allowing farmers to respond rapidly to changing conditions or emergencies <cit.>. * Environmental Sustainability: Using drones in farming can contribute to environmental sustainability by reducing the use of chemicals and minimizing the environmental impact of agricultural practices <cit.>. Drones enable targeted spraying of pesticides and fertilizers, reducing the amount of chemicals applied and minimizing their dispersion into the environment <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and promote ecological balance <cit.>. * Safety: Drones eliminate or reduce the need for farmers to physically access hazardous or difficult-to-reach areas, such as tall crops, steep terrains, or areas with potential safety risks <cit.>. This improves the safety of farmers and reduces the risk of accidents or injuries associated with manual labor <cit.>. § CONCLUSION Using drones in agriculture holds immense promise for revolutionizing farming practices and improving efficiency, productivity, and sustainability. The various types of drones available cater to specific agricultural needs, ranging from crop monitoring and assessment to precision spraying, mapping, and surveying. Drones provide real-time and high-resolution data collection, enabling farmers to make informed decisions regarding resource allocation and optimize crop management practices. They offer cost and time savings by reducing manual labor and minimizing the use of resources. The ability of drones to access inaccessible areas and provide comprehensive views of the fields enhances their usability and efficiency in large-scale farming operations. Furthermore, drones contribute to environmental sustainability by enabling targeted spraying, reducing chemical wastage, and minimizing the environmental impact of agricultural practices. The safety aspect of using drones must be considered, as they eliminate or reduce the need for farmers to access hazardous areas physically. Despite challenges such as regulations and limited flight time, advancements in drone technology are continually addressing these limitations. 
Overall, the advantages of using drones in agriculture are significant, and their integration into farming practices has the potential to transform the industry, leading to optimized resource utilization, improved crop yields, and sustainable agricultural practices. 00 10.1002/net.21818Otto, A., Agatz, N., Campbell, J., Golden, B. & Pesch, E. Optimization Approaches for Civil Applications of Unmanned Aerial Vehicles (UAVs) or Aerial Drones: A Survey. Networks. (2018) 10.1007/s41666-020-00080-6Nasajpour, M., Pouriyeh, S., Parizi, R., Dorodchi, M., Valero, M. & Arabnia, H. Internet of Things for Current COVID-19 and Future Pandemics: An Exploratory Study. Journal Of Healthcare Informatics Research. (2020) 10.3390/rs9010088Jakob, S., Zimmermann, R. & Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sensing. (2017) 10.3390/s20051487Gao, D., Sun, Q., Hu, B. & Zhang, S. A Framework for Agricultural Pest and Disease Monitoring Based on Internet-of-Things and Unmanned Aerial Vehicles. Sensors. (2020) 10.1109/access.2020.2982086Castellanos, G., Deruyck, M., Martens, L. & Joseph, W. System Assessment of WUSN Using NB-IoT UAV-Aided Networks in Potato Crops. Ieee Access. (2020) 10.1038/s41598-020-67898-3Santangeli, A., Chen, Y., Kluen, E., Chirumamilla, R., Tiainen, J. & Loehr, J. Integrating Drone-Borne Thermal Imaging With Artificial Intelligence to Locate Bird Nests on Agricultural Land. Scientific Reports. (2020) 10.3390/land10020164Ayamga, M., Tekinerdogan, B. & Kassahun, A. Exploring the Challenges Posed by Regulations for the Use of Drones in Agriculture in the African Context. Land. (2021) 10.3390/drones6070160Javan, F., Samadzadegan, F., Gholamshahi, M. & Mahini, F. A Modified YOLOv4 Deep Learning Network for Vision-Based UAV Recognition. Drones. (2022) 10.1109/access.2021.3130900Dutta, A., Roy, S., Kreidl, O. & Bölöni, L. Multi-Robot Information Gathering for Precision Agriculture: Current State, Scope, and Challenges. Ieee Access. (2021) 10.5937/ekonomika1804091sSpalević, Ž., Ilic, M. & Savija, V. The Use of Drones in Agriculture: ICT Policy, Legal and Economical Aspects. Ekonomika. (2018) 10.3390/app11052138Kim, S., Ahmad, H., Moon, J. & Jung, S. Nozzle With a Feedback Channel for Agricultural Drones. Applied Sciences. (2021) 10.5194/isprs-archives-xlii-2-789-2018Oliveira, R., Khoramshahi, E., Suomalainen, J., Hakala, T., Viljanen, N. & Honkavaara, E. Real-Time and Post-Processed Georeferencing for Hyperpspectral Drone Remote Sensing. The International Archives Of The Photogrammetry Remote Sensing And Spatial Information Sciences. (2018) 10.1111/sum.12771Chen, Q., Li, L., Chong, C. & Wang, X. AI‐enhanced Soil Management and Smart Farming. Soil Use And Management. (2021) 10.1088/1757-899x/1259/1/012015Borikar, G., Gharat, C. & Deshmukh, S. Application of Drone Systems for Spraying Pesticides in Advanced Agriculture: A Review. Iop Conference Series Materials Science And Engineering. (2022) 10.1016/j.jairtraman.2020.101929Merkert, R. & Bushell, J. Managing the Drone Revolution: A Systematic Literature Review Into the Current Use of Airborne Drones and Future Strategic Directions for Their Effective Control. Journal Of Air Transport Management. (2020) 10.1371/journal.pone.0141006Lisein, J., Michez, A., Claessens, H. & Lejeune, P. Discrimination of Deciduous Tree Species From Time Series of Unmanned Aerial System Imagery. Plos One. 
(2015) 10.3390/drones5020041Krul, S., Pantos, C., Frangulea, M. & Valente, J. Visual SLAM for Indoor Livestock and Farming Using a Small Drone With a Monocular Camera: A Feasibility Study. Drones. (2021) 10.3390/agronomy11091809Huzaifah, M., Juraimi, A., Che'ya, N., Sulaiman, N., Manaf, M., Ramli, Z. & Motmainna, M. Using Remote Sensing and an Unmanned Aerial System for Weed Management in Agricultural Crops: A Review. Agronomy. (2021) 10.30657/pea.2021.27.10Dadi, V., Nikhil, S., Mor, R., Agarwal, T. & Arora, S. Agri-Food 4.0 and Innovations: Revamping the Supply Chain Operations. Production Engineering Archives. (2021) 10.22438/jeb/43/1/mrn-1912Verma, A., Singh, M., Parmar, R. & Bhullar, K. Feasibility Study on Hexacopter UAV Based Sprayer for Application of Environment-Friendly Biopesticide in Guava Orchard. Journal Of Environmental Biology. (2022) 10.1007/978-981-16-4369-9_25Kumaar, A. & Kumaar, A. GPS-Based Path Planning Algorithm for Agriculture Drones. (2021) 10.3390/agriculture13051075McCarthy, C., Nyoni, Y., Kachamba, D., Banda, L., Moyo, B., Chisambi, C., Banfill, J. & Hoshino, B. Can Drones Help Smallholder Farmers Improve Agriculture Efficiencies and Reduce Food Insecurity in Sub-Saharan Africa? Local Perceptions From Malawi. Agriculture. (2023) 10.1051/matecconf/202133502002Lee, C., Phang, S. & Mun, H. Design and Implementation of an Agricultural UAV With Optimized Spraying Mechanism. Matec Web Of Conferences. (2021) 10.1051/e3sconf/202338101048Zhichkin, K., Nosov, V., Zhichkina, L., Anichkina, O., Borodina, I. & Beketov, A. Efficiency of Using Drones in Agricultural Production. E3s Web Of Conferences. (2023) 10.1109/access.2019.2949703Farooq, M., Riaz, S., Abid, A., Abid, K. & Naeem, M. A Survey on the Role of IoT in Agriculture for the Implementation of Smart Farming. Ieee Access. (2019)
http://arxiv.org/abs/2307.06339v1
20230712054254
Real-time Trading System based on Selections of Potentially Profitable, Uncorrelated, and Balanced Stocks by NP-hard Combinatorial Optimization
[ "Kosuke Tatsumura", "Ryo Hidaka", "Jun Nakayama", "Tomoya Kashimata", "Masaya Yamasaki" ]
cs.ET
[ "cs.ET", "q-fin.ST" ]
Real-time Trading System based on Selections of Potentially Profitable, Uncorrelated, and Balanced Stocks by NP-hard Combinatorial Optimization Kosuke Tatsumura^∗, Ryo Hidaka, Jun Nakayama, Tomoya Kashimata, and Masaya Yamasaki Corporate Research and Development Center, Toshiba Corporation, Japan ^∗Corresponding author: Kosuke Tatsumura (e-mail: [email protected]) ====================================================================================================================================================================================================================================================== Financial portfolio construction problems are often formulated as quadratic and discrete (combinatorial) optimization that belong to the nondeterministic polynomial time (NP)-hard class in computational complexity theory. Ising machines are hardware devices that work in quantum-mechanical/quantum-inspired principles for quickly solving NP-hard optimization problems, which potentially enable making trading decisions based on NP-hard optimization in the time constraints for high-speed trading strategies. Here we report a real-time stock trading system that determines long(buying)/short(selling) positions through NP-hard portfolio optimization for improving the Sharpe ratio using an embedded Ising machine based on a quantum-inspired algorithm called simulated bifurcation. The Ising machine selects a balanced (delta-neutral) group of stocks from an N-stock universe according to an objective function involving maximizing instantaneous expected returns defined as deviations from volume-weighted average prices and minimizing the summation of statistical correlation factors (for diversification). It has been demonstrated in the Tokyo Stock Exchange that the trading strategy based on NP-hard portfolio optimization for N=128 is executable with the FPGA (field-programmable gate array)-based trading system with a response latency of 164 μs. § INTRODUCTION Many portfolio construction/selection problems in finance are, with considering minimum transaction lots or other discretenesses of decision variables as realistic constraints, known to be nondeterministic polynomial (NP)-hard in computer science <cit.>. Those include discrete optimizations of Markowitz's mean-variance model <cit.> for better risk-return characteristics <cit.>, multi-period portfolio optimizations (or optimal trading trajectory problems) <cit.>, and correlation-diversified portfolio constructions including maximum independent set (MIS) problem-based ones <cit.> and permutation of assets-based one <cit.>. Recently, special-purpose computers for NP-hard combinatorial (or discrete) optimization, called Ising machines <cit.>, have attracted intense attention. Ising problems are the ground (energy minimum)-state search problems of Ising spin models, which consist of binary variables, called spins, coupled each other with pairwise interactions. The Ising problem belongs to the NP-hard class <cit.>; a variety of notoriously hard problems can be represented in the form of the Ising problem <cit.>. The Ising machine is a heuristic methodology and searches for the optimal (exact) or near-optimal solutions of the Ising problem in the whole solution space. Many Ising machines have claimed higher speed performance than simulated annealing <cit.> (on von Neumann computers), a conventional heuristic for combinatorial optimization. 
The Ising machines are implemented with various hardware including superconducting flux qubits <cit.>, hybrid electronic-optical systems <cit.>, memristor-based neural networks <cit.>, probabilistic bits <cit.>, coupled-oscillator circuits <cit.>, analog computing units <cit.>, application specific integrated circuits (ASICs) <cit.>, field programmable gate array (FPGAs) <cit.>, and graphics processing units (GPUs) <cit.>. The Ising machines may enable making more rational judgments based on NP-hard combinatorial optimizations for automated trading systems <cit.> that become increasingly important in financial markets <cit.>. Those trading systems are typical real-time systems that must respond (sense, judge, and react) within critically defined time constraints. Many high-speed trading systems <cit.> utilize FPGAs to shorten the latency from the market feed arrival to order packet issuance. Thus, among various Ising machines, FPGA-based ones (Ising machines that can be accelerated with modern FPGA architectures <cit.>) are suitable for high-speed trading systems because they can be embedded together with other system components in the FPGAs. The trading systems utilizing FPGA-based embeddable Ising machines as in <cit.> have been, however, not extensively studied. Furthermore, the execution capability of such a trading system needs to be validated in the actual market because of the latency of the system and the lifetime of the trading opportunity depending on the activities of other trading entities. Here we propose a trading strategy based on selections of potentially profitable, uncorrelated, and balanced stocks by NP-hard combinatorial optimization and show through real-time trading that the strategy is executable with an automated real-time system using an FPGA-based embedded Ising machine for the discrete selection problem. Based on the demand in the direction of convergence of the stock price to the volume-weighted average price (VWAP), the proposed strategy considers the deviations of stock prices from the VWAPs as instantaneous expected returns and selects a balanced (delta-neutral) group of stocks from an N-stock universe according to an objective function involving maximizing the expected returns and minimizing the summation of statistical correlation factors (for correlation-diversification). The selection problem is formulated as quadratic and discrete optimization and solved by an Ising machine based on a quantum-inspired algorithm called simulated bifurcation (SB) <cit.>. SB was derived from a classical counterpart to a quantum adiabatic optimization method called a quantum bifurcation machine <cit.> and numerically simulates the adiabatic time-evolution of a classical nonlinear oscillator network exhibiting bifurcation phenomena, where two branches of the bifurcation in each oscillator correspond to two states of each Ising spin. To reduce the system-wide latency by decreasing the input data size of the SB machine (SBM, a hardware implementation of SB) from 𝒪(N^2) to 𝒪(N), we separate the data describing the problem into two components that change tick-by-tick or day-by-day and customize the basic SBM design <cit.>. We discuss the execution capability of the system by comparing the real-time transaction records of the system in the Tokyo Stock Exchange (TSE) with a backcast simulation of the strategy assuming the orders issued are necessarily filled. The rest of the paper is organized as follows. In Sec. 
<ref> (trading strategy), we describe the proposed strategy and formulate the discrete selection problem in the forms of quadratic unconstrained binary optimization (QUBO) and the Ising problem. Sec. <ref> (system) describes the architecture of the system, the customization of the SBM core, and the implementation details. Sec. <ref> (experiment) describes the transaction records in the TSE and the execution capability of the system. § TRADING STRATEGY §.§ Discrete optimization-based strategy The proposed strategy considers the deviations of stock prices from the VWAPs as instantaneous expected returns and bets that the deviations would eventually converge (partially) in the trading hours. To improve the reward-to-variability ratio (or the Sharpe ratio <cit.>), it simultaneously holds multiple positions selected through a discrete portfolio optimization problem making the group of positions being market-neutral <cit.> and correlation-diversified <cit.>. There is demand in the direction of convergence of the stock price to the VWAP <cit.>. For institutional investors mainly through passive investments, one of the common methods for reducing the trading impact on market prices is that the fund managers, with a certain fee promised, ask brokerages to execute their large volume trades on the VWAP determined at the end of the trading hours. If the average executed price is the same as the end-of-trading-hours VWAP, the brokerage earns the fee. If the brokerage executes the trades at prices more favorable than the VWAP, this brokerage earns more than the fee. Considering the deviations of stock prices from the VWAP as expected returns, the strategy takes long positions of the underperforming stocks and short positions of the outperforming stocks and statistically expects that the underperforming stocks would move up while the outperforming stocks would move down. To adapt to various market conditions (uptrend, downtrend, or sideways), the strategy matches long(/short) positions with short(/long) positions so that the overall deltas of the positions total almost zero (delta neutral) <cit.>. In addition, to statistically reduce the deviation of the returns (risk), the strategy incorporates the concept of correlation-diversified portfolio <cit.>; the multiple long/short positions are selected so that the stocks involved are uncorrelated with each other. The Sharpe ratio <cit.> is, in this work, the ratio of the mean to the standard deviation of the return (the profit and loss per period for an investment) from a strategy as in <cit.>. To enhance the Sharpe ratio of the proposed strategy, a group including N_s stocks is selected from an N-stock universe as the candidates of open positions (positions to be taken) so that (i) the summation of instantaneous expected returns is maximized (for maximizing returns), (ii) the summation of statistical correlation factors is minimized (for diversification), and (iii) the numbers of long/short positions are equal (delta-neutral). This is a discrete optimization problem. The selection of N_s-stock group is executed every time the market situation changes and then the selected group is evaluated for determining the opening. The deviation of the stock price from the VWAP (Δ p_i) normalized with the base price on the day (p_i^b) is expressed by Δ p_i=(p_i-VWAP_i)/p_i^b, where p_i is the middle price between the best ask (ask) and the best bid (bid). When the sign of Δ p_i is negative (/positive), ith stock is the candidate of long (/short) position. 
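The expected-return signal can be computed directly from the definition above. The following sketch is our own illustration (the function names are ours, and the VWAP helper assumes access to executed trade prices and volumes, which in the actual system is supplied by the CPU part); it takes per-stock arrays of best ask, best bid, session VWAP, and base price.

```python
import numpy as np

def vwap(exec_prices, exec_volumes):
    """Volume-weighted average price of the executions observed so far in the session."""
    p = np.asarray(exec_prices, dtype=float)
    v = np.asarray(exec_volumes, dtype=float)
    return float((p * v).sum() / v.sum())

def expected_returns(ask, bid, vwaps, base_prices):
    """Delta_p_i = (p_i - VWAP_i) / p_i^b with p_i the mid price between ask and bid.
    A negative Delta_p_i marks a long (buy) candidate, a positive one a short candidate."""
    mid = (np.asarray(ask, float) + np.asarray(bid, float)) / 2.0
    dp = (mid - np.asarray(vwaps, float)) / np.asarray(base_prices, float)
    side = np.where(dp < 0, "long", "short")
    return dp, side
```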
The absolute value of Δ p_i corresponds to the instantaneous expected return of the ith-stock position. The number of lots per order for a stock (L_i) is determined to make the amount of transaction (A_trans) common for all tradable stocks by rounding with considering the minimum tradable shares per order (a lot) of the stock (S_i^min) and the base price on the day (p_i^b); L_i=⌊ A_trans/S_i^min p_i^b⌋. The number of intraday positions is controlled to be within a maximum number (P_max) and all positions are closed (unwind) before the close of the day. Duplicate pair positions are not allowed. In this work, the correlation factor between ith and jth stocks for a business day (σ̂_i,j) is defined based on the price deviation sequences against the VWAP as follows. σ̂_i,j=∑_k(p_i^k-VWAP_i^k)(p_j^k-VWAP_j^k) /∑_k|p_i^k-VWAP_i^k|∑_k|p_j^k-VWAP_j^k|, where p_i^k and VWAP_i^k are the middle price and VWAP of ith stock sampled at one-second intervals. The correlation factor (σ_i,j) in the strategy is the average value for the last five business days of σ̂_i,j and is normalized to be in [0,1]. §.§ Formulation The problem to select N_s stocks from an N-stock universe according to a cost (objective) function involving maximizing instantaneous expected returns (| Δ p_i |) and minimizing the summation of statistical correlation factors (σ_i,j) of the stocks involved under the constrain for delta-neutral positions is formulated in the form of quadratic unconstrained binary optimization (QUBO). In this subsection, we explain the QUBO formulation for explanatory clarity, but the Ising machine takes the input data for the Ising problem in a one-to-one relationship with the QUBO problem as described in the next subsection. The primitive data defining the problem ({Δ p_i} vector and σ matrix) are converted directly to the Ising formulation in the system (not via the QUBO formulation). Define a decision (bit) variable b_i (b_i∈{0,1}) as taking value 1 if ith stock is selected and 0 otherwise. When ith stock is selected, the sign of Δ p_i [sgn(Δ p_i)] indicates whether it corresponds to a long or short position. We prepare N bit variables for an N-stock universe. In the QUBO formulation, we search for the bit configuration {b_i} that minimizes the QUBO cost function H_QUBO. H_QUBO is a linear combination of a cost function H_cost and a penalty function H_penalty. H_QUBO=∑_i^N∑_j^NQ_i,jb_ib_j=H_cost+H_penalty. The cost function to be minimized is defined by H_cost=∑_i∑_jQ_i,j^costb_ib_j, Q_i,j^cost= -c_1| Δ p_i | (if i=j), σ_i,j (otherwise), where c_1 is a positive coefficient. Note that b_i^2=b_i for diagonal terms (i=j). The constraints for N_s-stock selection and delta-neutral positions are represented as a penalty function expressed by H_penalty=c_2( (∑_i b_i)-N_s)^2 +c_3(∑_isgn(Δ p_i)b_i)^2. where c_2 and c_3 are positive coefficients. The first and second terms correspond to the constraints for N_s-stock selection and delta-neutral positions, respectively. Constraint violations increase the penalty, with H_penalty=0 if there are no violations. Note that the nondiagonal elements in the coupling coefficient matrix Q in Eq. (<ref>) include not only σ_i,j in Eq. (<ref>) but also components of sgn(Δ p_i) coming from the second term in Eq. (<ref>). QUBOs are known to be NP-hard problems for classical computers <cit.>. Since the cost function in Eq. (<ref>) is quadratic, the discrete optimization involved in the strategy is thought to be NP-hard problems. 
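The cost and penalty functions defined above can be assembled into a dense N×N QUBO matrix and, where an Ising-form input is needed, converted with the standard substitution s_i = 2b_i − 1 (the mapping spelled out in Appendix A). The sketch below is our own reference construction, not the FPGA preprocessing; the coefficient values c_1, c_2, c_3 are placeholders, constant energy offsets are dropped, and the conversion assumes a symmetric Q.

```python
import numpy as np

def build_qubo(dp, sigma, n_s, c1=1.0, c2=1.0, c3=1.0):
    """Q for H_QUBO = sum_ij Q_ij b_i b_j: the return/correlation cost plus
    penalties for selecting exactly n_s stocks and for delta-neutrality."""
    dp = np.asarray(dp, dtype=float)
    sgn = np.sign(dp)
    N = len(dp)

    Q = np.array(sigma, dtype=float)           # off-diagonal: correlation sigma_ij
    np.fill_diagonal(Q, -c1 * np.abs(dp))      # diagonal: -c1 |Delta_p_i|

    # c2 * ((sum_i b_i) - n_s)^2, using b_i^2 = b_i; the constant c2*n_s^2 is dropped.
    Q += c2 * np.ones((N, N))
    Q[np.diag_indices(N)] += -2.0 * c2 * n_s

    # c3 * (sum_i sgn(Delta_p_i) b_i)^2 enforces the delta-neutral constraint.
    Q += c3 * np.outer(sgn, sgn)
    return Q

def qubo_to_ising(Q):
    """Ising coefficients for a symmetric Q (s_i = 2 b_i - 1):
    J_ij = -Q_ij / 2 for i != j, J_ii = 0, h_i = sum_j Q_ij / 2."""
    Q = np.asarray(Q, dtype=float)
    J = -Q / 2.0
    np.fill_diagonal(J, 0.0)
    h = Q.sum(axis=1) / 2.0
    return J, h

def qubo_energy(Q, b):
    b = np.asarray(b, dtype=float)
    return float(b @ Q @ b)
```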
§.§ Separation of problem components The discrete optimization problem to be solved at a market situation is described as an N × N size of coupling coefficient matrix Q (in Eq. (<ref>)), which should be transferred to the Ising machine (in this work, SBM) every time the market situation changes. To reduce the system-wide latency by decreasing the size of data transferred from 𝒪(N^2) to 𝒪(N), we separate the data describing the problem into two components that change tick-by-tick or day-by-day. We prepare additional circuit units for the computation depending only on the tick-by-tick data to the basic SBM design (see Sec. <ref>). In the QUBO formulation in Eqs. (<ref>), (<ref>), (<ref>) and (<ref>), the {Δ p_i} vector is the tick-by-tick change component and the σ matrix is the day-by-day change component. The QUBO problem can be represented in the form of the Ising problem (see APPENDIX A), where the decision variables are binary variable called spins s_i (s_i∈{-1,1}) and the problem is represented by a coupling coefficient matrix J and a bias vector h. We describe the Ising cost function H_Ising as a linear combination of terms that include the day-by-day change components (J^day, h^day) or tick-by-tick change components (J^tick, h^tick) as follows; H_Ising= -1/2∑_i^N∑_j^NJ_i,j^days_is_j-1/2∑_i^N∑_j^NJ_i,j^ticks_is_j +∑_i^Nh_i^days_i+∑_i^Nh_i^ticks_i. Here, the (J^day, h^day) and (J^tick, h^tick) can be calculated from the σ matrix and {Δ p_i} vector, respectively. The SBM core described in the next section stores the (J^day, h^day) and (J^tick, h^tick) data in the separated memories. Note that the size of J_i,j^tick is N × N, but the N-size intermediate values are stored in the separated memory (see APPENDIX B for details). When a market feed (informing the change of ask or bid of a stock) arrives, the SBM core updates the (J^tick, h^tick) intermediate data [the size is 𝒪(N)] with keeping the (J^day, h^day) data [the size is 𝒪(N^2)]. § SYSTEM The real-time stock trading system is a hybrid FPGA/CPU system, featuring an event-driven SBM module that starts processing the discrete optimization involved in the proposed strategy when detecting the changes in ask or bid of tradable stocks. The system-wide latency from the market feed arrival to order packet issuance is shortened by co-integrating, in the FPGA, the SBM module together with other system components including communication interfaces. The processing units and memory subsystems in the basic SBM circuit design <cit.> have been customized (modified) for the proposed strategy to further improve the system latency. §.§ Architecture Figure <ref> (a) and (b) show the block diagram and timing chart of the hybrid FPGA/CPU system. The FPGA part responds to the changes in the market in a low latency, i.e., it receives the market information, determines the opening of positions based on the NP-hard portfolio optimization by the SBM module, and then issues the order packets. The CPU part controls the whole system and manages the positions using state machines for opened positions (the closing of the positions is determined by the CPU part). The market information (including the changes in ask or bid) is received by both the FPGA and CPU parts. The order (buying/selling) packets are issued only from the FPGA part. The execution-result packets informing the results (fill/lapse) of the orders are received by the CPU part. The FPGA and CPU parts are connected with the peripheral component interconnect-express (PCIe) bus. 
The system components in the FPGA part are, in the order of data flow, a receiver (RX), a price buffer (P) that accommodates the price list of ask or bid for the N-stock universe, the SBM module including the three memory units (Δ p, σ, VWAP) which are updated at different timing, a judgment module, a message generator, and a transmitter (TX). Those components are implemented as independent (not synchronized) circuit modules, which are connected with directed streaming data channels with FIFO (first-in-first-out) buffers. One of the characteristics of the SBM module is that it has three memory units (Δ p, σ, and VWAP) to store data updated at the three timing of tick-by-tick, day-by-day, and every second. The VWAP information, a {VWAP_i} vector, is updated in the CPU part and informed to the FPGA part at one-second intervals. The SBM module also has a preprocessing submodule (pre) to generate the Ising problem described by (J^day, h^day) and (J^tick, h^tick) based on the data in the three memory units. The data in the Δ p memory is changed depending also on the open list (O memory) in the judgment module. The open position is registered in the open list when the opening is decided (before the issuance of the order) and deregistered from there when the closing of the position is confirmed with the message from the CPU part. The Δ p of the stocks listed in the O memory is set to zero as the duplicate opening is prohibited. Figure <ref> (b) shows the timing chart for the operation of the SBM module when representative events (Events 1 to 7) happen. When no event happens for a certain time, the SBM module is idling (polling to the FIFO buffers from the price buffer, judgment, and PCIe I/F modules). When a market feed arrives (Event 1), the SBM module immediately starts the preprocessing (updating of the Δ p memory and the J^tick/h^tick memory) and then the main processing (the discrete optimization). After that, the SBM module evaluates the solution output from the core circuit in terms of the constraint violation and the objective function and then informs the open candidates to the judgment module if the evaluation passes (postprocessing, post). After checking the open candidates, the judgment module finally determines the open positions, registers them in the O memory, and concurrently issues order packets via the message generator (Event 2). Regardless of whether or not order packets are issued, the SBM module repeats the main processing for a predetermined number of times with different initial states generated by an internal random number generator <cit.>. As the simulated bifurcation is a heuristic algorithm, the SBM module may find a better solution or another solution enough for the opening. When repeating the main processing, the SBM module also repeats the preprocessing if the order packets have been issued at the last run (in the case of Event 2) since the Δ p memory (O memory) has been changed, but skips the preprocessing otherwise (Event 3). When the Ising problem changes during the main processing because of the arrivals of the new market feed (Event 4), the VWAP updating information (Even 5), or the close confirmation information (Event 6), the information is incorporated at the beginning of the next execution of the SBM module (Event 7). 
§.§ Customized SBM core circuit To reduce the data size transferred to the SBM core from 𝒪(N^2) to 𝒪(N) (for improving the system latency) when the market situation (or the internal state) changes, we add computation and memory units to the basic SBM design. Instead of combining the tick-by-tick (N-size) and day-by-day (N× N-size) change information as the coefficients describing the Ising problem, we store that information in separate memory units, separately calculate the updating components of the internal variables (corrections of momenta) coming from those two sources, and then combine the updating components (the number of components is N). Simulated bifurcation <cit.> simulates the time evolution of N nonlinear oscillators according to the Hamiltonian equations of motion, where the nonlinear oscillators correspond to the spin variables and the state of the ith oscillator is described by the position and momentum (x_i, y_i). The SB time-evolution step consists of calculating the correction of momenta {Δ y_i} based on the many-body interaction [computationally corresponding to the matrix-vector multiplication (MM) of the J coupling matrix and the { x_i} position vector] and calculating the updated (time-evolved) state variables, { x_i^k+1} and { y_i^k+1}, from the {Δ y_i}, { h_i}, and the current state variables, { x_i^k} and { y_i^k}. The additional circuits calculate the correction of the momentum depending only on the tick-by-tick J and h components (Δ y_i^tick). Figure <ref> shows the block diagram of the SB core circuit, where the additional circuit units for the computation depending only on the tick-by-tick change problem components (J^tick, h^tick) are highlighted in blue and the remaining units are architecturally the same as in the basic SBM design <cit.>. In the basic design, the main computation components are JX units corresponding to the multiply-accumulate (MAC) operations of ∑_j=1^N J_ij x_j and TE units corresponding to the time-evolution operation, which are combined into MMTE units (each responsible for updating a subgroup of coupled oscillators). The MMTE units are organized with the global X^'_mem memory unit to make a circulative structure as a whole, corresponding to the iteration of the SB time-evolution steps. The J^tick and h^tick data are stored in the J^tick and H^tick memory units, which are separated from the J^day and H^day memory units storing the day-by-day change problem components (J^day, h^day). The JX^tick module calculates an intermediate value Δ Y^tick common to all the oscillators and supplies the Δ Y^tick and sgn(Δ p_i) data to the TE units. The TE unit calculates the Δ y_i^tick for each oscillator based on the Δ Y^tick, sgn(Δ p_i), x_i, and h_i and updates the state of the ith oscillator with the Δ y_i^tick. §.§ Implementation We implemented the system described in Sec. <ref> with a CPU server with a network interface card (NIC) and an FPGA board having another network interface (see APPENDIX C for details). Figure <ref> (a) shows the architecture and implementation results of the SBM module for 128-stock universes (N=128). Among the three variants of simulated bifurcation (adiabatic, ballistic, and discrete SB) <cit.>, ballistic SB is adopted in this work, with the SB parameters N_step=300 and dt=0.02. The machine size (the number of spins) is 128 spins with all-to-all connectivity, and the computation precision is 32-bit floating point.
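For reference, a plain software version of the ballistic SB update described above can be written as follows. This is a simplified, unoptimized sketch of one common bSB formulation, not the customized FPGA core; the pump schedule, the default scaling of c_0, and the sign convention for the bias term h are our assumptions, while N_step=300 and dt=0.02 follow the parameters above.

```python
import numpy as np

def ballistic_sb(J, h, n_steps=300, dt=0.02, a0=1.0, c0=None, seed=0):
    """Minimize H = -1/2 s^T J s + h^T s over s_i in {-1, +1} by simulating
    N coupled oscillators: momentum correction from the coupling term, position
    update, and perfectly inelastic walls at |x| = 1 (the ballistic variant)."""
    J = np.asarray(J, dtype=float)
    h = np.asarray(h, dtype=float)
    N = len(h)
    rng = np.random.default_rng(seed)
    if c0 is None:                          # heuristic scaling of the coupling strength
        c0 = 0.5 / (np.sqrt((J ** 2).sum() / max(N * (N - 1), 1)) * np.sqrt(N) + 1e-12)

    x = 0.02 * (rng.random(N) - 0.5)        # positions (soft spins)
    y = 0.02 * (rng.random(N) - 0.5)        # momenta
    for k in range(n_steps):
        a = a0 * k / n_steps                # pump amplitude ramped from 0 to a0
        y += (-(a0 - a) * x + c0 * (J @ x - h)) * dt    # correction of momenta
        x += a0 * y * dt                                # time evolution of positions
        hit = np.abs(x) > 1.0
        x[hit] = np.sign(x[hit])            # inelastic walls at |x| = 1
        y[hit] = 0.0
    return np.where(x >= 0, 1, -1)          # spins read out from the position signs

def ising_energy(J, h, s):
    return float(-0.5 * s @ J @ s + h @ s)

# Typical use with the QUBO construction sketched earlier:
#   J, h = qubo_to_ising(build_qubo(dp, sigma, n_s=4))
#   s = ballistic_sb(J, h)
#   b = (s + 1) // 2   # back to selection bits
```

Unlike the single J @ x - h product above, the actual core computes the J^day/h^day and J^tick/h^tick contributions to the momentum correction in separate circuit units and sums them, which is what keeps the per-tick data transfer at 𝒪(N).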
Figure <ref> (b) shows the result of the placement of system modules in the FPGA. The SBM module is dominant, and the circuit resources used are listed in Fig. <ref> (a). The system clock frequency determined as a result of circuit synthesis, placement, and routing is 208 MHz. The clock cycles of the SB main (core) processing, preprocessing, and postprocessing are 33,000 per run (110 per SB step), 129, and 648, respectively. The computation time (the module latency) per run (t_pre+t_core+t_post) is 162.1 μs, where the SBM core processing is dominant (t_core=158.4 μs). The system-wide latency from the market feed arrival to order packet issuance depicted in Fig. <ref>(b) as a red arrow is 164 μs (including the latencies of the RX, price buffer, judgment, SBM, message generator, and TX modules). § EXPERIMENT The trading system described in Sec. <ref> was installed at the JPX Co-location area of the TSE and operated through real-time trading to examine whether the strategy based on the NP-hard combinatorial optimization proposed in Sec. <ref> is executable. The trading results are compared with a backcast simulation of the strategy assuming the orders issued are necessarily filled. The proposed strategy determines the opening of positions based on an instantaneous market situation (a price list of ask and bid for the N-stock universe). Because of the latency of a system that executes the strategy and the activities of other trading entities, the orders issued are not necessarily filled at the ask/bid prices used for the decision-making. We developed a simulator that processes the historical market feeds provided by the TSE and emulates the internal state of the trading system. The simulator assumes that the orders issued are necessarily filled at the intended prices. Figures <ref> (a) and (b) show the cumulative values of the amounts of transactions per day and the profit and loss (including ask-bid spread costs and commission) per day for real-time trading (red line) and backcast simulation (black line) with fixed strategic parameters of N=128, N_s=4, P_max=4, and A_trans=4 million Japanese yen (JPY). The 128 stocks were selected from the Nikkei 225 or TOPIX 100 constituents in terms of high liquidity. The system is allowed to take positions in the afternoon session of the TSE. The simulation data is from May. 1, 2020, to May. 11, 2023. The real trade data is from Feb. 1, 2023, to Mar. 31, 2023, being adjusted with the simulation at the first day. The Sharpe ratio of the strategy over the simulation period (approximately 3 years) is 1.23, where the annualized return and risk (the standard deviation of the return) are 3.6 % and 2.9 %, respectively, for an investment of 16 million JPY (A_trans× P_max); the strategy proposed can be profitable (a positive annualized return) with a reasonable risk (a low level of annualized risk compared to the annualized return). The cumulative value of the amounts of transactions by the system (118,956,828 JPY) over the experiment (252 hours of real-time trading) is coincident well (+0.01 %) with the simulation value (118,948,300 JPY), indicating that the strategy proposed is executable with the trading system with a latency of 164 μs. Note that the slight difference in the transaction amounts comes from the executed prices. Figure <ref> shows a typical transaction by the trading system observed on Feb. 24, 2023. 
On that day (Feb. 24, 2023), the number of market feeds reporting changes in the ask/bid prices of stocks in the N(=128)-stock universe was 5,565,723; the feeds arrived at intervals of 3.6 ms on average. The system decided to open the positions at 1:13 PM JST (15,186 seconds after 9:00 AM) based on the selection of codes 8411, 6762, 8036, and 9735 by the SBM module, leading to a profitable closing of the positions before the end of the day [Fig. <ref> (a)]. The selection of the four stocks (N_s=4) by the SBM module was based on the deviations of the stock prices from the VWAPs (Δ p_i) and the correlation factors (σ_i,j) shown in Figs. <ref> (b), (c), and (d). Codes 8411 and 8036 were selected as the candidates for long positions mainly because of their instantaneous expected returns (the maximum and second-maximum at that moment). From the candidates for short positions balancing codes 8411 and 8036, codes 6762 and 9735 were chosen based not only on their relatively high expected returns but also on their relatively low correlation factors with both codes 8411 and 8036. The solution of the SBM module satisfies the constraints of the discrete optimization, ∑_i^Nb_i=N_s and ∑_i^Nsgn(Δ p_i)b_i=0, in the representation using bit variables b_i.

§ CONCLUSION We proposed a strategy based on the selection of potentially profitable, uncorrelated, and balanced stocks by NP-hard, quadratic, discrete optimization, and we have demonstrated with real-time transaction records from the TSE that the strategy is executable in terms of response latency with an automated trading system that uses the SB-based embeddable Ising machine for the selection problem. The cost function of the N_s-stock selection problem is designed to combine maximizing the instantaneous expected returns defined as deviations from volume-weighted average prices (VWAPs), minimizing the summation of statistical correlation factors (for correlation diversification), and penalty functions enforcing the N_s-stock selection and delta-neutral positions. The selection problem is formulated as an Ising problem, and the data describing the problem are separated into two components that change tick-by-tick (N-size) or day-by-day (N× N-size). By customizing the SBM core circuit to have two sets of memory and computation modules, respectively, for the tick-by-tick and day-by-day change problem components, we reduced the data size transferred to the SBM core from 𝒪(N^2) to 𝒪(N) when the market situation changes and thus improved the system latency. This technique improves the system latency when the problem components change on different timescales and is applicable to the SB algorithm and to other algorithms based on Hamiltonian equations of motion. The automated trading system is a hybrid FPGA/CPU system featuring an event-driven SBM module in the FPGA part. The FPGA part (hardware processing) decides the opening of a group of long/short positions using the SBM and then issues the corresponding orders, while the CPU part (software processing) manages the positions (including the decision to close positions). The system-wide latency from market feed arrival to order packet issuance is 164 μs for a 128-stock universe. The trading system was installed at the JPX Co-location area of the TSE and operated for a real-time trading period of 42 business days, or 252 hours. The real-time transaction records were compared with a backcast simulation of the strategy that assumes the orders issued are always filled at the intended prices.
Based on the good agreement in the cumulative transaction amounts and detailed comparison analysis of transactions between the experiment and simulation, we have concluded that the response latency of the system with the SB-based Ising machine is sufficiently low to execute the trading strategy based on the NP-hard discrete portfolio optimization. Automated trading systems with embedded Ising machines would be applicable to the strategies based on various discrete portfolio optimizations characterized by different definitions of expected returns and correlations [diagonal and non-diagonal terms in Eq. (<ref>)] and other trading strategies that rely on high-speed discrete optimization. § APPENDICES §.§ A. QUBO & Ising representations The QUBO formulation (b_i∈{0,1}), H_QUBO=∑_i^N∑_j^NQ_i,jb_ib_j, is represented also in the Ising formulation (s_i∈{-1,1}) as follows. H_Ising=-1/2∑_i^N∑_j^NJ_i,js_is_j+∑_i^Nh_is_i, where s_i=2b_i-1, J_i,j= -Q_i,j/2 (if i≠ j), 0 (if i= j), h_i=∑_j^NQ_i,j/2. §.§ B. Additional computation units The correction (Δ y_i^tick) of the momentum per SB time-evolution step for ith oscillator depending on the tick-by-tick J and h components is expressed by Δ y_i^tick=c_3/2( Δ Y^tick-x_i) sgn(Δ p_i)-h_i^tick, where Δ Y^tick=∑_i^N sgn(Δ p_i) x_i, h_i^tick=-c_1 | Δ p_i |/2 + c_3 ( ∑_j^Nsgn(Δ p_j)/2) sgn(Δ p_i). The JX^tick in the MAC^tick module (Fig. <ref>) is provided with the x_i and sgn(Δ p_i) data from the global X^'_mem memory and the J^tick memory and calculates the Δ Y^tick in a spatially parallel manner using multiple MAC processing elements. The time evolution (TE) module receives the Δ Y^tick and sgn(Δ p_i) data from the MAC^tick module and also receives the h_i^tick data from the H^tick memory, and then updates the momentum of each oscillator by respective correction of Δ y_i^tick in a temporal parallel manner (pipelining). §.§ C. Implementation details An FPGA board and a high-speed network interface card (NIC) are mounted on a host server with dual CPUs (Intel Xeon Silver 4215R) and DDR-DRAM modules (384 GB). The FPGA (Intel Arria 10 GX 1150 FPGA) on the board has 427,200 adaptive logic modules (ALMs) including 854,400 adaptive look-up-tables (ALUTs, 5-input LUT equivalent) and 1,708,800 flip-flop registers, 2,713 20Kbit-size RAM blocks (BRAMs), and 1,518 digital signal processor blocks (DSPs). The system components in the FPGA described in Section <ref> were coded in a high-level synthesis (HLS) language (Intel FPGA SDK for OpenCL, ver. 18.1). The FPGA interfaces including a PCIe IP (PCIe Gen3×8), a 10 Gbps Ethernet PHY IP and communication IPs (RX, TX) were written in Verilog HDL and incorporated in the board support package (BSP). §.§ Acknowledgment The experiment in the Tokyo Stock Exchange was conducted under a joint project between Toshiba Corporation and Dharma Capital. K.K. The authors thank Ryosuke Iio and Kohei Shimane for fruitful discussions and technical support. §.§ Conflicts of Interest K.T., R.H., and M.Y. are included in inventors on two U.S. patent applications related to this work filed by the Toshiba Corporation (no. 17/249353, filed 20 February 2020; no. 17/565206, filed 29 December 2021). The authors declare that they have no other competing interests. 00 bienstock96 D. Bienstock, “Computational study of a family of mixed-integer quadratic programming problems,” Mathematical programming 74, pp. 121–140, 1996. [Online]. Available: https://doi.org/10.1007/BF02592208 mansini99 R. Mansini, M. G. 
Speranza, “Heuristic algorithms for the portfolio selection problem with minimum transaction lots,” European Journal of Operational Research 114, pp. 219–233, 1999. [Online]. Available: https://doi.org/10.1016/S0377-2217(98)00252-5 markowitz52 H. Markowitz, “Portfolio selection,” The Journal of Finace 7, pp. 77–91, 1952. [Online]. Available: https://doi.org/10.2307/2975974 venturelli19 D. Venturelli, A. Kondratyev, “Reverse quantum annealing approach to portfolio optimization problems,” Quantum Machine Intelligence 1, pp. 17–30, 2019. [Online]. Available: https://doi.org/10.1007/s42484-019-00001-w lang22 J. Lang, S. Zielinski, S. Feld, “Strategic Portfolio Optimization Using Simulated, Digital, and Quantum Annealing,” Applied Sciences 12, 12288, 2022. [Online]. Available: https://doi.org/10.3390/app122312288 rosenberg15 G. Rosenberg, P. Haghnegahdar, P. Goddard, P. Carr, K. Wu, M. L. De Prado, “Solving the optimal trading trajectory problem using a quantum annealer,” Proc. of Workshop on High Performance Computational Finance (WHPCF), pp. 1–7, 2015. [Online]. Available: https://doi.org/10.1145/2830556.2830563 steinhauer20 K. Steinhauer, T. Fukadai, S. Yoshida, “Solving the Optimal Trading Trajectory Problem Using Simulated Bifurcation,” arXiv preprint arXiv:2009.08412, 2020. [Online]. Available: https://doi.org/10.48550/arXiv.2009.08412 mugel22 S. Mugel, C. Kuchkovsky, E. Sanchez, S. Fernandez-Lorenzo, J. Luis-Hita, E. Lizaso, R. Orus, “Dynamic portfolio optimization with real datasets using quantum processors and quantum-inspired tensor networks,” Physical Review Research 4, 013006, 2022. [Online]. Available: https://doi.org/10.1103/PhysRevResearch.4.013006 butenko03 S. Butenko,, “Maximum independent set and related problems, with applications,” Ph.D. dissertation, the Industrial and Systems Engineering Department, University of Florida, 2003. [Online]. Available: https://ufdcimages.uflib.ufl.edu/UF/E0/00/10/11/ 00001/butenko_s.pdf boginski04 V. Boginski, S. Butenko, P. M. Pardalos, “Network-based Techniques in the Analysis of the Stock Market,” in Supply Chain and Finance, eds. P. M. Pardalos, A. Migdalas, G. Baourakis, World Scientific, pp. 1–14, 2004. [Online]. Available: https://doi.org/10.1142/9789812562586_0001 marzec16 M. Marzec, “Portfolio optimization: Applications in quantum computing,” in Handbook of High-Frequency Trading and Modeling in Finance eds. I. Florescu, M. C. Mariani, H. E. Stanley, F. G. Viens, Wiley Online Library, pp. 73–106, 2016. [Online]. Available: https://doi.org/10.1002/9781118593486.ch4 sakurai21 Y. Sakurai, Y. Yuki, R. Katsuki, T. Yazane, F. Ishizaki, “Correlation Diversified Passive Portfolio Strategy Based on Permutation of Assets,” Journal of Investment Strategies 10, pp. 1–22, 2021. [Online]. Available: http://doi.org/10.21314/JOIS.2021.010 sbm1 H. Goto, K. Tatsumura, A. R. Dixon, “Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems,” Science Advances 5, eaav2372, 2019. [Online]. Available: https://doi.org/10.1126/sciadv.aav2372 FPL19 K. Tatsumura, A. R. Dixon, H. Goto, “FPGA-Based Simulated Bifurcation Machine,” Proc. of IEEE International Conference on Field Programmable Logic and Applications (FPL), pp. 59–66, 2019. [Online]. Available: https://doi.org/10.1109/FPL.2019.00019 sbm2 H. Goto, K. Endo, M. Suzuki, Y. Sakai, T. Kanao, Y. Hamakawa, R. Hidaka, M. Yamasaki, K. Tatsumura, “High-performance combinatorial optimization based on classical mechanics,” Science Advances 7, eabe7953, 2021. [Online]. 
Available: https://doi.org//10.1126/sciadv.abe7953 NatEle K. Tatsumura, M. Yamasaki, H. Goto, “Scaling out Ising machines using a multi-chip architecture for simulated bifurcation,” Nature Electronics 4, pp. 208–217, 2021. [Online]. Available: https://doi.org/10.1038/s41928-021-00546-4 kanao23 T. Kanao, H. Goto, “Simulated bifurcation for higher-order cost functions,” Applied Physics Express 16, 014501, 2023. [Online]. Available: https://doi.org/10.35848/1882-0786/acaba9 johnson11 M. W. Johnson, M. H. S. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, E. M. Chapple, C. Enderud, J. P. Hilton, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, C. J. S. Truncik, S. Uchaikin, J. Wang, B. Wilson, G. Rose, “Quantum annealing with manufactured spins,” Nature 473, pp. 194–198 (2011). [Online]. Available: https://doi.org/10.1038/nature10012 king23 A. D. King, J. Raymond, T. Lanting, R. Harris, A. Zucca, F. Altomare, A. J. Berkley, K. Boothby, S. Ejtemaee, C. Enderud, E. Hoskinson, S. Huang, E. Ladizinsky, A. J. R. MacDonald, G. Marsden, R. Molavi, T. Oh, G. Poulin-Lamarre, M. Reis, C. Rich, Y. Sato, N. Tsai, M. Volkmann, J. D. Whittaker, J. Yao, A. W. Sandvik, M. H. Amin, “Quantum critical dynamics in a 5,000-qubit programmable spin glass,” Nature 617, pp. 61–-66 (2023). [Online]. Available: https://doi.org/10.1038/s41586-023-05867-2 honjo21 T. Honjo, T. Sonobe, K. Inaba, T. Inagaki, T. Ikuta, Y. Yamada, T. Kazama, K. Enbutsu, T. Umeki, R. Kasahara, K. Kawarabayashi, H. Takesue, “100,000-spin coherent ising machine,” Science Advances 7, eabh095 (2021). [Online]. Available: https://doi.org/10.1126/sciadv.abh0952 pierangeli19 D. Pierangeli, G. Marcucci, C. Conti, “Large-Scale Photonic Ising Machine by Spatial Light Modulation,” Physical Review Letters 122, 213902 (2019). [Online]. Available: https://doi.org/10.1103/PhysRevLett.122.213902 cai20 F. Cai, S. Kumar, T. V. Vaerenbergh, X. Sheng, R. Liu, C. Li, Z. Liu, M. Foltin, S. Yu, Q. Xia, J. J. Yang, R. Beausoleil, W. D. Lu, J. P. Strachan, “Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks,” Nature Electronics 3, pp. 409–418, 2020. [Online]. Available: https://doi.org/10.1038/s41928-020-0436-6 aadit22 N. A. Aadit, A. Grimaldi, M. Carpentieri, L. Theogarajan, J. M. Martinis, G. Finocchio, K. Camsari, “Massively parallel probabilistic computing with sparse Ising machines,” Nature Electronics 5, pp. 460–468, 2022. [Online]. Available: https://doi.org/10.1038/s41928-022-00774-2 moy22 W. Moy, I. Ahmed, P. Chiu, J. Moy, S. S. Sapatnekar, C. H. Kim, “A 1,968-node coupled ring oscillator circuit for combinatorial optimization problem solving,” Nature Electronics 5, pp. 310–317, 2022. [Online]. Available: https://doi.org/10.1038/s41928-022-00749-3 sharma22 A. Sharma, R. Afoakwa, Z. Ignjatovic, M. Huang, “Increasing Ising machine capacity with multi-chip architectures,” Proc. of Annual International Symposium on Computer Architecture (ISCA), pp. 508–521, 2022. [Online]. Available: https://doi.org/10.1145/3470496.3527414 takemoto19 T. Takemoto, M. Hayashi, C. Yoshimura, M. Yamaoka, “A 2×30k-Spin Multi-Chip Scalable Annealing Processor Based on a Processing-In-Memory Approach for Solving Large-Scale Combinatorial Optimization Problems,” IEEE Journal of Solid-State Circuits 55, pp. 145–156, 2019. [Online]. Available: https://doi.org/10.1109/JSSC.2019.2949230 kawamura23 K. Kawamura, J. Yu, D. Okonogi, S. Jimbo, G. 
Inoue, A. Hyodo, Á. L. García-Anas, K. Ando, B. H. Fukushima-Kimura, R. Yasudo, T. Van Chu, M. Motomura, “Amorphica: 4-replica 512 fully connected spin 336MHz metamorphic annealer with programmable optimization strategy and compressed-spin-transfer multi-chip extension,” Proc. of IEEE International Solid-State Circuits Conference (ISSCC), pp. 42–43, 2023. [Online]. Available: https://doi.org/10.1109/ISSCC42615.2023.10067504 matsubara20 S. Matsubara, M. Takatsu, T. Miyazawa, T. Shibasaki, Y. Watanabe, K. Takemoto, H. Tamura, “Digital annealer for high-speed solving of combinatorial optimization problems and its applications,” Proc. of Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 667–672, 2020. [Online]. Available: https://doi.org/10.1109/ASP-DAC47756.2020.9045100 waidyasooriya21 H. M. Waidyasooriya, M. Hariyama, “Highly-parallel FPGA accelerator for simulated quantum annealing,” IEEE Transactions on Emerging Topics in Computing 9, pp. 2019–2029, 2021. [Online]. Available: https://doi.org/10.1109/TETC.2019.2957177 okuyama19 T. Okuyama, T. Sonobe, K. Kawarabayashi, M. Yamaoka, “Binary optimization by momentum annealing,” Physical Review E 100, 012111, 2019. [Online]. Available: https://doi.org/10.1103/PhysRevE.100.012111 barahona82 F. Barahona, “On the computational complexity of Ising spin glass models,” Journal of Physics A: Mathematical and General 15, pp. 3241–-3253, 1982. [Online]. Available: https://doi.org/10.1088/0305-4470/15/10/028 lucas14 A. Lucas, “Ising formulations of many NP problems,” Frontiers in physics 2, 5, 2014. [Online]. Available: https://doi.org/10.3389/fphy.2014.00005 SA83 S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, “Optimization by simulated annealing,” Science 220, pp. 671–-680, 1983. [Online]. Available: https://doi.org/10.1126/science.220.4598.671 yoo23 S. Yoo, H. Kim, J. Kim, S. Park, J.-Y. Kim, J. Oh, “LightTrader: A Standalone High-Frequency Trading System with Deep Learning Inference Accelerators and Proactive Scheduler,” IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 1017–1030, 2023. [Online]. Available: https://doi.org/10.1109/HPCA56546.2023.10070930 fil20 M. Fil, L. Kristoufek, “Pairs trading in cryptocurrency markets,” IEEE Access 8, pp. 172644–172651, 2020. [Online]. Available: https://doi.org/10.1109/ACCESS.2020.3024619 huang19 B. Huang, Y. Huan, L. D. Xu, L. Zheng, Z. Zou, “Automated trading systems statistical and machine learning methods and hardware implementation: a survey,” Enterprise Information Systems 13, pp. 132–144, 2019. [Online]. Available: https://doi.org/10.1080/17517575.2018.1493145 denholm15 S. Denholm, H. Inoue, T. Takenaka, T. Becker, W. Luk, “Network-level FPGA acceleration of low latency market data feed arbitration,” IEICE Transactions on Information and Systemss E98-D, pp. 288–297, 2015. [Online]. Available: https://doi.org/10.1587/transinf.2014RCP0011 leber11 C. Leber, B. Geib, H. Litz, “High frequency trading acceleration using FPGAs,” Proc. of IEEE International Conference on Field Programmable Logic and Applications (FPL), pp. 317–322, 2011. [Online]. Available: https://doi.org/10.1109/FPL.2011.64 betz12 V. Betz, J. Rose, A. Marquardt, “Architecture and CAD for deep-submicron FPGAs,” Springer New York, NY, 1999 [Online]. Available: https://doi.org/10.1007/978-1-4615-5145-4 ISCAS20 K. Tatsumura, R. Hidaka, M. Yamasaki, Y. Sakai, H. Goto, “A Currency Arbitrage Machine based on the Simulated Bifurcation Algorithm for Ultrafast Detection of Optimal Opportunity,” Proc. 
of IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, 2020. [Online]. Available: https://doi.org/10.1109/ISCAS45731.2020.9181114 qbm H. Goto, “Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network,” Scientific Reports 6, 21686, 2016. [Online]. Available: https://doi.org/10.1038/srep21686 malceniece23 L. Malceniece, K. Malcenieks, T. J. Putniņš, Tālis, “High frequency trading and comovement in financial markets,” Journal of Financial Economics 134, pp. 381–399, 2019. [Online]. Available: https://doi.org/10.1016/j.jfineco.2018.02.015 brogaard14 J. Brogaard, T. Hendershott, R. Riordan, “High-Frequency Trading and Price Discovery,” The Review of Financial Studies 27, pp. .2267-2306, 2014. [Online]. Available: https://doi.org/10.1093/rfs/hhu032 sharpe66 W. F. Sharpe, “Mutual fund performance,” The Journal of Business 39, pp. 119–138, 1966. [Online]. Available: https://www.jstor.org/stable/2351741 backus93 D. K. Backus, A. W. Gregory, C. I. Telmer, “Accounting for forward rates in markets for foreign currency,” The Journal of Finance 48, pp. 1887–1908, 1993. [Online]. Available: https://doi.org/10.1111/j.1540-6261.1993.tb05132.x gatev06 E. Gatev, W. N. Goetzmann, K. G. Rouwenhorst, “Pairs trading: Performance of a relative-value arbitrage rule,” The Review of Financial Studies 19, pp. 797–827, 2006. [Online]. Available: https://doi.org/10.1093/rfs/hhj020 berkowitz88 S. A. Berkowitz, D. E. Logue, E. A. Noser Jr, “The total cost of transactions on the NYSE,” The Journal of Finance 43, pp. 97–112, 1988. [Online]. Available: https://doi.org/10.1111/j.1540-6261.1988.tb02591.x kakade04 S. M. Kakade, M. Kearns, Y. Mansour, L. E. Ortiz, “Competitive algorithms for VWAP and limit order trading,” Proc. of ACM conference on Electronic commerce, pp. 189–198, 2004. [Online]. Available: https://doi.org/10.1145/988772.988801 bialkowski08 J. Białkowski, S. Darolles, G. Le Fol, “Improving VWAP strategies: A dynamic volume approach,” Journal of Banking & Finance 32, pp. 1709–1722, 2008. [Online]. Available: https://doi.org/10.1016/j.jbankfin.2007.09.023 marsaglia03 G. Marsaglia, “Xorshift RNGs,” Journal of Statistical software 8, pp. 1–6, 2003. [Online]. Available: https://doi.org/10.18637/jss.v008.i14
http://arxiv.org/abs/2307.04419v1
20230710085104
Constraints on primordial curvature power spectrum with pulsar timing arrays
[ "Zhi-Qiang You", "Zhu Yi", "You Wu" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
§ INTRODUCTION Recently, four pulsar timing array (PTA) collaborations, namely NANOGrav <cit.>, PPTA <cit.>, EPTA <cit.>, and CPTA <cit.>, all announced strong evidence for a stochastic signal consistent with the Hellings-Downs angular correlations, pointing to the gravitational-wave (GW) origin of this signal. Assuming the signal originates from an ensemble of binary supermassive black hole inspirals and a fiducial f^-2/3 characteristic-strain spectrum, the strain amplitude is estimated to be of the order of ∼ 10^-15 at a reference frequency of 1  yr^-1 <cit.>. However, the origin of this signal, whether from supermassive black hole binaries or other cosmological sources, is still under investigation <cit.>. A promising candidate to explain the signal is scalar-induced gravitational waves (SIGWs) accompanying the formation of primordial black holes <cit.>. Other physical phenomena (see e.g. <cit.>) can also be sources in the PTA band. SIGWs are sourced by the scalar perturbations generated during the inflationary epoch <cit.>. They offer valuable insights into the physics of the early Universe and can be detected not only by PTAs but also by space-based GW detectors such as LISA <cit.>, Taiji <cit.>, TianQin <cit.>, and DECIGO <cit.>. Significant SIGWs require the amplitude of the power spectrum of the primordial curvature perturbations to be around 𝒜_ζ∼𝒪(0.01), which is approximately seven orders of magnitude larger than the constraint from large-scale measurements of cosmic microwave background (CMB) anisotropy observations, 𝒜_ζ= 2.1× 10^-9 <cit.>. Therefore, to account for the observed gravitational-wave signal detected by PTAs, the curvature power spectrum must possess at least one high peak. This can be achieved through inflation models that include an ultra-slow-roll phase <cit.>. To characterize a single-peak primordial curvature power spectrum, various parameterizations such as the δ-function form, the box form, the lognormal form, or the broken power law form are employed. Among them, the δ-function, box, and lognormal parameterizations are investigated in Ref. <cit.>, where the constraints from the PTA data on the parameters of these models are also given. The constraints on the broken power law form are provided in Ref. <cit.>, where the role of non-Gaussianity is also considered. However, these analyses do not determine which model among them is the most compatible with the PTA signal. For the multi-peak primordial curvature power spectrum model <cit.>, we parameterize the primordial curvature power spectrum with a double lognormal form. In this study, we aim to determine whether the PTA signal favors a single-peak or a multi-peak primordial curvature power spectrum and to identify the model most compatible with the PTA signal. The organization of this paper is as follows: Section II provides a brief review of scalar-induced gravitational waves. Section III presents the constraints on the power spectrum for the different forms and identifies the best-fitting model based on the PTA signal. Finally, Section IV summarizes our findings and provides concluding remarks.

§ SCALAR-INDUCED GRAVITATIONAL WAVES The large scalar perturbations seeded by the primordial curvature perturbations generated during inflation can act as a source that induces GWs during the radiation-dominated epoch. In this section, we give a brief review of SIGWs.
In the cosmological background, the metric with perturbation in Newtonian gauge is d s^2= -a^2(η)(1+2Φ)dη^2 +a^2(η)[(1-2Φ)δ_ij+1/2h_ij]d x^i d x^j, where a is the scale factor of the Universe, η is the conformal time, dη =dt/a(t), Φ is the Bardeen potential, and h_ij are the tensor perturbations. The tensor perturbations in the Fourier space can be obtained by the transform h_ij(x,η)=∫ d^3k e^ik·x/(2π)^3/2 [h_k(η)e_ij(k)+h̃_k(η)ẽ_ij(k)], where the plus and cross polarization tensors e_ij(k) and ẽ_ij(k) are e_ij(k)=1/√(2)[e_i(k)e_j(k)-ẽ_i(k)ẽ_j(k)], ẽ_ij(k)=1/√(2)[e_i(k)ẽ_j(k)+ẽ_i(k)e_j(k)], and the basis vectors satisfying e·ẽ= e ·k= ẽ·k. For the source from the second order of linear scalar perturbations, the tensor perturbations with either polarization in the Fourier space satisfy <cit.> h”_k+2ℋh'_k+k^2h_k=4S_k, where ℋ=a'/a is the conformal Hubble parameter and a prime denotes the derivative with respect to the conformal time η. The second order source S_k is S_k= ∫d^3k̃/(2π)^3/2e_ij(k)k̃^ik̃^j [2Φ_k̃Φ_k-k̃1/2+ 1/ℋ^2(Φ'_k̃+ℋΦ_k̃) (Φ'_k-k̃+ℋΦ_k-k̃)]. The Bardeen potential in the Fourier space, Φ_k, can be connected to the primordial curvature perturbations ζ_k produced during inflation epoch through the transfer function, Φ_k=3+3w/5+3wT(k,η) ζ_k, where w is the equation of state parameter and the transfer function T(k,η) satisfy T(k,η)=3[sin(k η/√(3))-(kη/√(3)) cos(kη/√(3))/(kη/√(3))^3]. The equation of the tensor perturbations (<ref>) can be solved by the Green function method and the solution is h_k(η)=4/a(η)∫_η_k^ηd η̃g_k(η,η̃)a(η̃)S_k(η̃), where g_k is the corresponding Green function with the form g_k(η,η')=sin[k(η-η')]/k. The definition of the power spectrum of tensor perturbations h_k is ⟨ h_k(η)h_k̃(η)⟩ =2π^2/k^3δ^(3)(k+k̃)𝒫_h(k,η). Combining it with the solution of h_k (<ref>), we have <cit.> 𝒫_h(k,η)= 4∫_0^∞dv∫_|1-v|^1+vdu [4v^2-(1-u^2+v^2)^2/4uv]^2 × I_RD^2(u,v,x)𝒫_ζ(k v)𝒫_ζ(ku), where u=|k-k̃|/k, v=k̃/k, x=kη, and 𝒫_ζ is the power spectrum of the curvature perturbation which is parameterized in the following section. The integral kernel I_RD is I_RD(u, v, x)= ∫_1^x dy y sin(x-y){3T(uy)T(vy) +y[T(vy)u T'(uy)+v T'(vy) T(uy)] +y^2 u v T'(uy) T'(vy)}. The definition of the energy density of gravitational waves is Ω_GW(k,η)=1/24(k/aH)^2𝒫_h(k,η). By combining the equation (<ref>) and the definition (<ref>), we obtain <cit.> Ω_GW(k,η)= 1/6(k/aH)^2∫_0^∞dv∫_|1-v|^1+vdu×[4v^2-(1-u^2+v^2)^2/4uv]^2 ×I_RD^2(u, v, x)𝒫_ζ(kv)𝒫_ζ(ku), where I_RD^2 represents the oscillation time average of the integral kernel. The energy density of gravitational waves undergoes the same evolution as radiation. Exploiting this property, it becomes straightforward to determine the energy density of gravitational waves at present, which is Ω_GW(k,η_0)=c_gΩ_r,0Ω_GW(k,η)/Ω_r(η), where Ω_r(η)=1 is the energy density of the radiation at the generation of SIGWs during radiation domination, Ω_r,0 is that at present, and <cit.> c_g=0.387(g_*,s^4g_*^-3/106.75)^-1/3. § MODELS AND RESULTS At large scales, the observational data from the CMB impose a constraint on the amplitude of the primordial curvature power spectrum, which is limited to 𝒜_ζ = 2.1 × 10^-9 <cit.>. However, there are minimal constraints on the primordial curvature power spectrum at small scales. Consequently, in order to generate significant SIGWs, it is necessary to enhance the primordial curvature power spectrum to approximately 𝒜_ζ∼𝒪(0.01) at small scales. 
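Two closed-form ingredients of the SIGW computation quoted above translate directly into code: the radiation-era transfer function T(kη) and the rescaling factor c_g used to bring Ω_GW to its present-day value. The sketch below implements only these two pieces as a numerical convenience; the full Ω_GW(k) double integral with the oscillation-averaged kernel is not reproduced here. The fiducial value g_* = g_{*,s} = 106.75 is taken from the normalization appearing in the c_g formula, while the present radiation density parameter Ω_{r,0} is deliberately left as an argument because its value is not specified in the text.

```python
import numpy as np

def transfer_T(k_eta):
    """Radiation-era transfer function from Section II:
    T = 3 [sin(z) - z cos(z)] / z^3 with z = k*eta/sqrt(3)."""
    z = np.asarray(k_eta, dtype=float) / np.sqrt(3.0)
    return 3.0 * (np.sin(z) - z * np.cos(z)) / z**3

def c_g(g_star_s=106.75, g_star=106.75):
    """c_g = 0.387 * (g_{*,s}^4 g_*^{-3} / 106.75)^(-1/3); for the fiducial
    g_* = g_{*,s} = 106.75 this reduces to 0.387."""
    return 0.387 * (g_star_s**4 * g_star**(-3) / 106.75) ** (-1.0 / 3.0)

def omega_gw_today(omega_gw_rd, omega_r0):
    """Present-day spectrum Omega_GW(k, eta_0) = c_g * Omega_{r,0} * Omega_GW(k, eta),
    with Omega_r(eta) = 1 deep in radiation domination; omega_r0 must be supplied."""
    return c_g() * omega_r0 * omega_gw_rd

print(transfer_T(np.array([0.1, 1.0, 10.0])))  # T -> 1 for k*eta << 1 (super-horizon scales)
print(c_g())                                    # -> 0.387 for the fiducial g_* values
```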
The profile of the primordial curvature spectrum must therefore exhibit at least one pronounced peak at intermediate scales, while displaying lower amplitudes at both large and very small scales. In this section, we consider primordial curvature spectra with a single peak and with a double peak, respectively. For the single peak, the commonly employed parameterizations of the primordial curvature spectrum are the simple δ-function form 𝒫_ζ = Aδ(ln k -ln k_p), the box form 𝒫_ζ = A Θ(k - k_min) Θ(k_max - k), the lognormal form 𝒫_ζ = A/(√(2π)Δ)exp[-(ln k -ln k_p)^2/(2Δ^2)], and the broken power law form 𝒫_ζ =A(α+β)/[β(k/k_p)^-α+α(k/k_p)^β]+A_*(k/k_*)^{n_{s*}-1}. For the double-peak model, we parameterize the primordial curvature spectrum with a double lognormal form 𝒫_ζ= A_1/(√(2π)Δ_1)exp[-(ln k -ln k_p_1)^2/(2Δ_1^2)]+ A_2/(√(2π)Δ_2)exp[-(ln k -ln k_p_2)^2/(2Δ_2^2)]. We conducted a Bayesian analysis of the NANOGrav 15-yr data to investigate the parameterizations of the power spectrum of the primordial curvature perturbation described by Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>). In our analysis, we utilized the 14 frequency bins reported in <cit.> to fit the posterior distributions of the model parameters. The Bilby code <cit.> was employed for the analysis, utilizing the dynesty algorithm for nested sampling <cit.>. The log-likelihood function was constructed by evaluating the energy density of SIGWs at the 14 specific frequency bins. Subsequently, we computed the sum of the logarithm of the probability density functions obtained from 14 independent kernel density estimates corresponding to these frequency values <cit.>. The likelihood function is ℒ(Θ)=∏_i=1^14ℒ_i(Ω_GW(f_i, Θ)), where Θ is the collection of parameters of the δ-function, box, lognormal, broken power law, and double lognormal models. These parameters and their priors are shown in Table <ref>. We divide these models into two categories. The first category comprises the single-peak power spectrum models, including the δ-function (<ref>), box (<ref>), lognormal (<ref>), and broken power law (<ref>) models, while the second is the double-peak model, namely the double lognormal model (<ref>). The posterior distributions for the parameters in Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>) are depicted in Figure <ref>, Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively. We summarize the mean values and 1-σ confidence intervals for the parameters of these models in Table <ref>. When comparing the results of the double-peak lognormal primordial curvature power spectrum with the single-peak models using the δ, box, lognormal, and broken power law forms, the Bayesian analysis yields no support in favor of the single-peak models, with respective Bayes factors of lnℬ= 0.42, lnℬ=0.26, lnℬ =0.46, and lnℬ =0.45. Thus, the PTA data show no significant evidence for or against the single-peak primordial curvature power spectrum over the double-peak primordial curvature power spectrum. Due to the very close values of the logarithmic evidence, it is also difficult to determine which single-peak model provides a better fit. After obtaining the best-fit values from the posteriors, we present the power spectrum of the primordial curvature perturbations in Figure <ref> and the corresponding SIGWs in Figure <ref>.
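For concreteness, the peak parameterizations introduced above can be written as a few short functions; the sketch below implements the lognormal, broken power law, and double lognormal forms of 𝒫_ζ(k) so that they can be plugged into the Ω_GW(k) integral of Section II (the integration itself is omitted). The bracketing of the broken power law follows the reconstruction given above, and the large-scale amplitude A_*, pivot scale k_*, and tilt n_{s*} defaults are illustrative CMB-like values rather than the fitted parameters of this analysis; the width Δ = 1 used in the example evaluation is likewise an arbitrary choice.

```python
import numpy as np

def pzeta_lognormal(k, A, k_p, Delta):
    """Lognormal peak: P_zeta = A / (sqrt(2*pi)*Delta) * exp(-(ln k - ln k_p)^2 / (2*Delta^2))."""
    return A / (np.sqrt(2.0 * np.pi) * Delta) * np.exp(-0.5 * (np.log(k / k_p) / Delta) ** 2)

def pzeta_broken_power_law(k, A, k_p, alpha, beta,
                           A_star=2.1e-9, k_star=0.05, n_s_star=0.965):
    """Broken power law peak plus a nearly scale-invariant large-scale component.
    A_star, k_star [Mpc^-1], and n_s_star defaults are illustrative, not fitted values."""
    peak = A * (alpha + beta) / (beta * (k / k_p) ** (-alpha) + alpha * (k / k_p) ** beta)
    return peak + A_star * (k / k_star) ** (n_s_star - 1.0)

def pzeta_double_lognormal(k, A1, k_p1, D1, A2, k_p2, D2):
    """Double-peak model: sum of two lognormal peaks."""
    return pzeta_lognormal(k, A1, k_p1, D1) + pzeta_lognormal(k, A2, k_p2, D2)

# Example evaluation near the best-fit scale and amplitude quoted in the text
# (k_p ~ 1e8 Mpc^-1, A ~ 0.1).
k = np.logspace(6, 10, 200)   # Mpc^-1
print(pzeta_lognormal(k, A=0.1, k_p=1e8, Delta=1.0).max())
```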
In Figure <ref>, the orange thin solid line, blue thick solid line, red dashed line, black dotted line, and green dash-dotted line denote the primordial curvature power spectrum with the δ-function, box, lognormal, broken power law, and double-lognormal parameterizations, respectively. The peak scale of these parameterizations is around k_p∼ 10^8  Mpc^-1, and the amplitude of the primordial curvature power spectrum of these parameterizations at the peak is around A∼ 0.1. In Figure <ref>, the orange thin solid line, blue thick solid line, red dashed line, black dotted line, and green dash-dotted line represent the energy density of the SIGW from the primordial curvature power spectrum with the δ-function, box, lognormal, broken power law, and double-lognormal parameterizations, respectively. If the PTAs data indeed arises from the SIGWs, this PTAs signal can also be detected by space-based detectors in the future. And the parameterizations of the primordial curvature power spectrum can also be distinguished by the space-based detectors. § CONCLUSION The stochastic signal detected by the NANOGrav, PPTA, EPTA, and CPTA collaborations points to the GW origin and can be explained by the SIGWs, where the scalar perturbations are seeded from the primordial curvature perturbations. To determine the SIGWs model that best fits the observed stochastic signal, we explore both single-peak and double-peak parameterizations for the power spectrum of the primordial curvature perturbations. For the single-peak scenarios, we consider parameterizations using the δ-function form, box form, lognormal form, and broken power law form. Additionally, in the double-peak scenario, we employ the double lognormal form. The best-fit values for the scale and amplitude of the primordial curvature perturbations at the peak, obtained from these five parameterizations, are approximately k_p ∼ 10^8  Mpc^-1 and A∼ 0.1. Comparing the results with the double-peak scenarios, the Bayesian analysis provides no support in favor of the single-peak models, with respective Bayes factors of lnℬ= 0.42, lnℬ=0.26, lnℬ =0.46, and lnℬ =0.45 for the δ-function, box, lognormal, and broken power law forms, respectively. If the stochastic signal observed by the PTAs indeed originates from SIGWs, it may also be detectable by space-based gravitational wave detectors in the future, potentially allowing for the distinction between different types of SIGWs. Although our analysis in this paper focuses on the double-peak model, our conclusion can be extended to multi-peak models. In conclusion, the recent gravitational wave background signal can be explained by SIGWs, without preference for a single peak in the primordial curvature power spectrum over a multi-peak configuration. We thank Xiao-Jing Liu for useful discussions. ZQY is supported by the China Postdoctoral Science Foundation Fellowship No. 2022M720482. ZY is supported by the National Natural Science Foundation of China under Grant No. 12205015 and the supporting fund for young researcher of Beijing Normal University under Grant No. 28719/310432102. 100 NANOGrav:2023hde NANOGrav collaboration, The NANOGrav 15 yr Data Set: Observations and Timing of 68 Millisecond Pulsars, https://doi.org/10.3847/2041-8213/acda9aAstrophys. J. Lett. 951 (2023) L9 [https://arxiv.org/abs/2306.162172306.16217]. NANOGrav:2023gor NANOGrav collaboration, The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background, https://doi.org/10.3847/2041-8213/acdac6Astrophys. J. Lett. 
951 (2023) L8 [https://arxiv.org/abs/2306.162132306.16213]. Zic:2023gta A. Zic et al., The Parkes Pulsar Timing Array Third Data Release, https://arxiv.org/abs/2306.162302306.16230. Reardon:2023gzh D.J. Reardon et al., Search for an Isotropic Gravitational-wave Background with the Parkes Pulsar Timing Array, https://doi.org/10.3847/2041-8213/acdd02Astrophys. J. Lett. 951 (2023) L6 [https://arxiv.org/abs/2306.162152306.16215]. Antoniadis:2023lym J. Antoniadis et al., The second data release from the European Pulsar Timing Array I. The dataset and timing analysis, https://arxiv.org/abs/2306.162242306.16224. Antoniadis:2023ott J. Antoniadis et al., The second data release from the European Pulsar Timing Array III. Search for gravitational wave signals, https://arxiv.org/abs/2306.162142306.16214. Xu:2023wog H. Xu et al., Searching for the Nano-Hertz Stochastic Gravitational Wave Background with the Chinese Pulsar Timing Array Data Release I, https://doi.org/10.1088/1674-4527/acdfa5Res. Astron. Astrophys. 23 (2023) 075024 [https://arxiv.org/abs/2306.162162306.16216]. NANOGrav:2023hvm NANOGrav collaboration, The NANOGrav 15 yr Data Set: Search for Signals from New Physics, https://doi.org/10.3847/2041-8213/acdc91Astrophys. J. Lett. 951 (2023) L11 [https://arxiv.org/abs/2306.162192306.16219]. Antoniadis:2023xlr J. Antoniadis et al., The second data release from the European Pulsar Timing Array: V. Implications for massive black holes, dark matter and the early Universe, https://arxiv.org/abs/2306.162272306.16227. Franciolini:2023pbf G. Franciolini, A. Iovino, Junior., V. Vaskonen and H. Veermae, The recent gravitational wave observation by pulsar timing arrays and primordial black holes: the importance of non-gaussianities, https://arxiv.org/abs/2306.171492306.17149. Liu:2023ymk L. Liu, Z.-C. Chen and Q.-G. Huang, Implications for the non-Gaussianity of curvature perturbation from pulsar timing arrays, https://arxiv.org/abs/2307.011022307.01102. Vagnozzi:2023lwo S. Vagnozzi, Inflationary interpretation of the stochastic gravitational wave background signal detected by pulsar timing array experiments, https://arxiv.org/abs/2306.169122306.16912. Cai:2023dls Y.-F. Cai, X.-C. He, X. Ma, S.-F. Yan and G.-W. Yuan, Limits on scalar-induced gravitational waves from the stochastic background by pulsar timing array observations, https://arxiv.org/abs/2306.178222306.17822. Wang:2023ost S. Wang, Z.-C. Zhao, J.-P. Li and Q.-H. Zhu, Exploring the Implications of 2023 Pulsar Timing Array Datasets for Scalar-Induced Gravitational Waves and Primordial Black Holes, https://arxiv.org/abs/2307.005722307.00572. Yi:2023mbm Z. Yi, Q. Gao, Y. Gong, Y. Wang and F. Zhang, The waveform of the scalar induced gravitational waves in light of Pulsar Timing Array data, https://arxiv.org/abs/2307.024672307.02467. Bi:2023tib Y.-C. Bi, Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Implications for the Supermassive Black Hole Binaries from the NANOGrav 15-year Data Set, https://arxiv.org/abs/2307.007222307.00722. Wu:2023hsa Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Cosmological Interpretation for the Stochastic Signal in Pulsar Timing Arrays, https://arxiv.org/abs/2307.031412307.03141. Zhu:2023faa Q.-H. Zhu, Z.-C. Zhao and S. Wang, Joint implications of BBN, CMB, and PTA Datasets for Scalar-Induced Gravitational Waves of Second and Third orders, https://arxiv.org/abs/2307.030952307.03095. Franciolini:2023wjm G. Franciolini, D. Racco and F. 
Rompineve, Footprints of the QCD Crossover on Cosmological Gravitational Waves at Pulsar Timing Arrays, https://arxiv.org/abs/2306.171362306.17136. Zeldovich:1967lct Y.B. Zel'dovich and I.D. Novikov, The Hypothesis of Cores Retarded during Expansion and the Hot Cosmological Model, Soviet Astron. AJ (Engl. Transl. ), 10 (1967) 602. Hawking:1971ei S. Hawking, Gravitationally collapsed objects of very low mass, Mon. Not. Roy. Astron. Soc. 152 (1971) 75. Carr:1974nx B.J. Carr and S.W. Hawking, Black holes in the early Universe, Mon. Not. Roy. Astron. Soc. 168 (1974) 399. Chen:2018czv Z.-C. Chen and Q.-G. Huang, Merger Rate Distribution of Primordial-Black-Hole Binaries, https://doi.org/10.3847/1538-4357/aad6e2Astrophys. J. 864 (2018) 61 [https://arxiv.org/abs/1801.103271801.10327]. Chen:2018rzo Z.-C. Chen, F. Huang and Q.-G. Huang, Stochastic Gravitational-wave Background from Binary Black Holes and Binary Neutron Stars and Implications for LISA, https://doi.org/10.3847/1538-4357/aaf581Astrophys. J. 871 (2019) 97 [https://arxiv.org/abs/1809.103601809.10360]. Liu:2018ess L. Liu, Z.-K. Guo and R.-G. Cai, Effects of the surrounding primordial black holes on the merger rate of primordial black hole binaries, https://doi.org/10.1103/PhysRevD.99.063523Phys. Rev. D 99 (2019) 063523 [https://arxiv.org/abs/1812.053761812.05376]. Liu:2019rnx L. Liu, Z.-K. Guo and R.-G. Cai, Effects of the merger history on the merger rate density of primordial black hole binaries, https://doi.org/10.1140/epjc/s10052-019-7227-0Eur. Phys. J. C 79 (2019) 717 [https://arxiv.org/abs/1901.076721901.07672]. Chen:2019irf Z.-C. Chen and Q.-G. Huang, Distinguishing Primordial Black Holes from Astrophysical Black Holes by Einstein Telescope and Cosmic Explorer, https://doi.org/10.1088/1475-7516/2020/08/039JCAP 08 (2020) 039 [https://arxiv.org/abs/1904.023961904.02396]. Liu:2020cds L. Liu, Z.-K. Guo, R.-G. Cai and S.P. Kim, Merger rate distribution of primordial black hole binaries with electric charges, https://doi.org/10.1103/PhysRevD.102.043508Phys. Rev. D 102 (2020) 043508 [https://arxiv.org/abs/2001.029842001.02984]. Liu:2020vsy L. Liu, O. Christiansen, Z.-K. Guo, R.-G. Cai and S.P. Kim, Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: Circular orbits on a cone, https://doi.org/10.1103/PhysRevD.102.103520Phys. Rev. D 102 (2020) 103520 [https://arxiv.org/abs/2008.023262008.02326]. Liu:2020bag L. Liu, O. Christiansen, W.-H. Ruan, Z.-K. Guo, R.-G. Cai and S.P. Kim, Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: elliptical orbits on a cone, https://doi.org/10.1140/epjc/s10052-021-09849-4Eur. Phys. J. C 81 (2021) 1048 [https://arxiv.org/abs/2011.135862011.13586]. Wu:2020drm Y. Wu, Merger history of primordial black-hole binaries, https://doi.org/10.1103/PhysRevD.101.083008Phys. Rev. D 101 (2020) 083008 [https://arxiv.org/abs/2001.038332001.03833]. Chen:2021nxo Z.-C. Chen, C. Yuan and Q.-G. Huang, Confronting the primordial black hole scenario with the gravitational-wave events detected by LIGO-Virgo, https://doi.org/10.1016/j.physletb.2022.137040Phys. Lett. B 829 (2022) 137040 [https://arxiv.org/abs/2108.117402108.11740]. Liu:2022wtq L. Liu and S.P. Kim, Merger rate of charged black holes from the two-body dynamical capture, https://doi.org/10.1088/1475-7516/2022/03/059JCAP 03 (2022) 059 [https://arxiv.org/abs/2201.025812201.02581]. Chen:2022fda Z.-C. Chen, S.-S. Du, Q.-G. Huang and Z.-Q. 
You, Constraints on primordial-black-hole population and cosmic expansion history from GWTC-3, https://doi.org/10.1088/1475-7516/2023/03/024JCAP 03 (2023) 024 [https://arxiv.org/abs/2205.112782205.11278]. Chen:2022qvg Z.-C. Chen, S.P. Kim and L. Liu, Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: hyperbolic orbits on a cone, https://doi.org/10.1088/1572-9494/acce98Commun. Theor. Phys. 75 (2023) 065401 [https://arxiv.org/abs/2210.155642210.15564]. Liu:2022iuf L. Liu, Z.-Q. You, Y. Wu and Z.-C. Chen, Constraining the merger history of primordial-black-hole binaries from GWTC-3, https://doi.org/10.1103/PhysRevD.107.063035Phys. Rev. D 107 (2023) 063035 [https://arxiv.org/abs/2210.160942210.16094]. Zheng:2022wqo L.-M. Zheng, Z. Li, Z.-C. Chen, H. Zhou and Z.-H. Zhu, Towards a reliable reconstruction of the power spectrum of primordial curvature perturbation on small scales from GWTC-3, https://doi.org/10.1016/j.physletb.2023.137720Phys. Lett. B 838 (2023) 137720 [https://arxiv.org/abs/2212.055162212.05516]. Zhu:2018lif X.-J. Zhu, W. Cui and E. Thrane, The minimum and maximum gravitational-wave background from supermassive binary black holes, https://doi.org/10.1093/mnras/sty2849Mon. Not. Roy. Astron. Soc. 482 (2019) 2588 [https://arxiv.org/abs/1806.023461806.02346]. Chen:2021wdo Z.-C. Chen, C. Yuan and Q.-G. Huang, Non-tensorial gravitational wave background in NANOGrav 12.5-year data set, https://doi.org/10.1007/s11433-021-1797-ySci. China Phys. Mech. Astron. 64 (2021) 120412 [https://arxiv.org/abs/2101.068692101.06869]. Wu:2021kmd Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Constraining the Polarization of Gravitational Waves with the Parkes Pulsar Timing Array Second Data Release, https://doi.org/10.3847/1538-4357/ac35ccAstrophys. J. 925 (2022) 37 [https://arxiv.org/abs/2108.105182108.10518]. Chen:2021ncc Z.-C. Chen, Y.-M. Wu and Q.-G. Huang, Searching for isotropic stochastic gravitational-wave background in the international pulsar timing array second data release, https://doi.org/10.1088/1572-9494/ac7cdfCommun. Theor. Phys. 74 (2022) 105402 [https://arxiv.org/abs/2109.002962109.00296]. Chen:2022azo Z.-C. Chen, Y.-M. Wu and Q.-G. Huang, Search for the Gravitational-wave Background from Cosmic Strings with the Parkes Pulsar Timing Array Second Data Release, https://doi.org/10.3847/1538-4357/ac86cbAstrophys. J. 936 (2022) 20 [https://arxiv.org/abs/2205.071942205.07194]. PPTA:2022eul PPTA collaboration, Constraining ultralight vector dark matter with the Parkes Pulsar Timing Array second data release, https://doi.org/10.1103/PhysRevD.106.L081101Phys. Rev. D 106 (2022) L081101 [https://arxiv.org/abs/2210.038802210.03880]. IPTA:2023ero IPTA collaboration, Searching for continuous Gravitational Waves in the second data release of the International Pulsar Timing Array, https://doi.org/10.1093/mnras/stad812Mon. Not. Roy. Astron. Soc. 521 (2023) 5077 [https://arxiv.org/abs/2303.107672303.10767]. Wu:2023pbt Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Search for stochastic gravitational-wave background from massive gravity in the NANOGrav 12.5-year dataset, https://doi.org/10.1103/PhysRevD.107.042003Phys. Rev. D 107 (2023) 042003 [https://arxiv.org/abs/2302.002292302.00229]. Wu:2023dnp Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Pulsar timing residual induced by ultralight tensor dark matter, https://arxiv.org/abs/2305.080912305.08091. tomita1967non K. 
Tomita, Non-linear theory of gravitational instability in the expanding universe, Progress of Theoretical Physics 37 (1967) 831. Saito:2008jc R. Saito and J. Yokoyama, Gravitational wave background as a probe of the primordial black hole abundance, https://doi.org/10.1103/PhysRevLett.102.161101Phys. Rev. Lett. 102 (2009) 161101 [https://arxiv.org/abs/0812.43390812.4339]. Young:2014ana S. Young, C.T. Byrnes and M. Sasaki, Calculating the mass fraction of primordial black holes, https://doi.org/10.1088/1475-7516/2014/07/045JCAP 1407 (2014) 045 [https://arxiv.org/abs/1405.70231405.7023]. Yuan:2019udt C. Yuan, Z.-C. Chen and Q.-G. Huang, Probing primordial–black-hole dark matter with scalar induced gravitational waves, https://doi.org/10.1103/PhysRevD.100.081301Phys. Rev. D 100 (2019) 081301 [https://arxiv.org/abs/1906.115491906.11549]. Yuan:2019wwo C. Yuan, Z.-C. Chen and Q.-G. Huang, Log-dependent slope of scalar induced gravitational waves in the infrared regions, https://doi.org/10.1103/PhysRevD.101.043019Phys. Rev. D 101 (2020) 043019 [https://arxiv.org/abs/1910.090991910.09099]. Chen:2019xse Z.-C. Chen, C. Yuan and Q.-G. Huang, Pulsar Timing Array Constraints on Primordial Black Holes with NANOGrav 11-Year Dataset, https://doi.org/10.1103/PhysRevLett.124.251101Phys. Rev. Lett. 124 (2020) 251101 [https://arxiv.org/abs/1910.122391910.12239]. Yuan:2019fwv C. Yuan, Z.-C. Chen and Q.-G. Huang, Scalar induced gravitational waves in different gauges, https://doi.org/10.1103/PhysRevD.101.063018Phys. Rev. D 101 (2020) 063018 [https://arxiv.org/abs/1912.008851912.00885]. Ananda:2006af K.N. Ananda, C. Clarkson and D. Wands, The Cosmological gravitational wave background from primordial density perturbations, https://doi.org/10.1103/PhysRevD.75.123518Phys. Rev. D 75 (2007) 123518 [https://arxiv.org/abs/gr-qc/0612013gr-qc/0612013]. Baumann:2007zm D. Baumann, P.J. Steinhardt, K. Takahashi and K. Ichiki, Gravitational Wave Spectrum Induced by Primordial Scalar Perturbations, https://doi.org/10.1103/PhysRevD.76.084019Phys. Rev. D 76 (2007) 084019 [https://arxiv.org/abs/hep-th/0703290hep-th/0703290]. Alabidi:2012ex L. Alabidi, K. Kohri, M. Sasaki and Y. Sendouda, Observable Spectra of Induced Gravitational Waves from Inflation, https://doi.org/10.1088/1475-7516/2012/09/017JCAP 09 (2012) 017 [https://arxiv.org/abs/1203.46631203.4663]. Nakama:2016gzw T. Nakama, J. Silk and M. Kamionkowski, Stochastic gravitational waves associated with the formation of primordial black holes, https://doi.org/10.1103/PhysRevD.95.043511Phys. Rev. D 95 (2017) 043511 [https://arxiv.org/abs/1612.062641612.06264]. Kohri:2018awv K. Kohri and T. Terada, Semianalytic calculation of gravitational wave spectrum nonlinearly induced from primordial curvature perturbations, https://doi.org/10.1103/PhysRevD.97.123532Phys. Rev. D 97 (2018) 123532 [https://arxiv.org/abs/1804.085771804.08577]. Cheng:2018yyr S.-L. Cheng, W. Lee and K.-W. Ng, Primordial black holes and associated gravitational waves in axion monodromy inflation, https://doi.org/10.1088/1475-7516/2018/07/001JCAP 07 (2018) 001 [https://arxiv.org/abs/1801.090501801.09050]. Cai:2019amo R.-G. Cai, S. Pi, S.-J. Wang and X.-Y. Yang, Resonant multiple peaks in the induced gravitational waves, https://doi.org/10.1088/1475-7516/2019/05/013JCAP 05 (2019) 013 [https://arxiv.org/abs/1901.101521901.10152]. Cai:2018dig R.-g. Cai, S. Pi and M. Sasaki, Gravitational Waves Induced by non-Gaussian Scalar Perturbations, https://doi.org/10.1103/PhysRevLett.122.201101Phys. Rev. Lett. 
122 (2019) 201101 [https://arxiv.org/abs/1810.110001810.11000]. Cai:2019elf R.-G. Cai, S. Pi, S.-J. Wang and X.-Y. Yang, Pulsar Timing Array Constraints on the Induced Gravitational Waves, https://doi.org/10.1088/1475-7516/2019/10/059JCAP 10 (2019) 059 [https://arxiv.org/abs/1907.063721907.06372]. Cai:2019bmk R.-G. Cai, Z.-K. Guo, J. Liu, L. Liu and X.-Y. Yang, Primordial black holes and gravitational waves from parametric amplification of curvature perturbations, https://doi.org/10.1088/1475-7516/2020/06/013JCAP 06 (2020) 013 [https://arxiv.org/abs/1912.104371912.10437]. Cai:2020fnq R.-G. Cai, Y.-C. Ding, X.-Y. Yang and Y.-F. Zhou, Constraints on a mixed model of dark matter particles and primordial black holes from the galactic 511 keV line, https://doi.org/10.1088/1475-7516/2021/03/057JCAP 03 (2021) 057 [https://arxiv.org/abs/2007.118042007.11804]. Pi:2020otn S. Pi and M. Sasaki, Gravitational Waves Induced by Scalar Perturbations with a Lognormal Peak, https://doi.org/10.1088/1475-7516/2020/09/037JCAP 09 (2020) 037 [https://arxiv.org/abs/2005.123062005.12306]. Domenech:2020kqm G. Domènech, S. Pi and M. Sasaki, Induced gravitational waves as a probe of thermal history of the universe, https://doi.org/10.1088/1475-7516/2020/08/017JCAP 08 (2020) 017 [https://arxiv.org/abs/2005.123142005.12314]. Liu:2021jnw L. Liu, X.-Y. Yang, Z.-K. Guo and R.-G. Cai, Testing primordial black hole and measuring the Hubble constant with multiband gravitational-wave observations, https://doi.org/10.1088/1475-7516/2023/01/006JCAP 01 (2023) 006 [https://arxiv.org/abs/2112.054732112.05473]. Papanikolaou:2021uhe T. Papanikolaou, C. Tzerefos, S. Basilakos and E.N. Saridakis, Scalar induced gravitational waves from primordial black hole Poisson fluctuations in f(R) gravity, https://doi.org/10.1088/1475-7516/2022/10/013JCAP 10 (2022) 013 [https://arxiv.org/abs/2112.150592112.15059]. Papanikolaou:2022hkg T. Papanikolaou, C. Tzerefos, S. Basilakos and E.N. Saridakis, No constraints for f(T) gravity from gravitational waves induced from primordial black hole fluctuations, https://doi.org/10.1140/epjc/s10052-022-11157-4Eur. Phys. J. C 83 (2023) 31 [https://arxiv.org/abs/2205.060942205.06094]. Danzmann:1997hm K. Danzmann, LISA: An ESA cornerstone mission for a gravitational wave observatory, https://doi.org/10.1088/0264-9381/14/6/002Class. Quant. Grav. 14 (1997) 1399. Audley:2017drz LISA collaboration, Laser Interferometer Space Antenna, https://arxiv.org/abs/1702.007861702.00786. Hu:2017mde W.-R. Hu and Y.-L. Wu, The Taiji Program in Space for gravitational wave physics and the nature of gravity, https://doi.org/10.1093/nsr/nwx116Natl. Sci. Rev. 4 (2017) 685. Luo:2015ght TianQin collaboration, TianQin: a space-borne gravitational wave detector, https://doi.org/10.1088/0264-9381/33/3/035010Class. Quant. Grav. 33 (2016) 035010 [https://arxiv.org/abs/1512.020761512.02076]. Gong:2021gvw Y. Gong, J. Luo and B. Wang, Concepts and status of Chinese space gravitational wave detection projects, https://doi.org/10.1038/s41550-021-01480-3Nature Astron. 5 (2021) 881 [https://arxiv.org/abs/2109.074422109.07442]. Kawamura:2011zz S. Kawamura et al., The Japanese space gravitational wave antenna: DECIGO, https://doi.org/10.1088/0264-9381/28/9/094011Class. Quant. Grav. 28 (2011) 094011. Akrami:2018odb Planck collaboration, Planck 2018 results. X. Constraints on inflation, https://doi.org/10.1051/0004-6361/201833887Astron. Astrophys. 641 (2020) A10 [https://arxiv.org/abs/1807.062111807.06211]. Martin:2012pe J. Martin, H. 
http://arxiv.org/abs/2307.04276v1
20230709230219
Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant
[ "Yann Hicke", "Tonghua Tian", "Karan Jha", "Choong Hee Kim" ]
cs.CL
[ "cs.CL" ]
2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). LAK'23: Workshop on Partnerships for Cocreating Educational Content, March 13, 2023, Arlington, TX, USA.
Yann Hicke ([email protected]), Cornell University, Department of Computer Science; Tonghua Tian, Cornell University, Department of Operations Research and Information Engineering; Karan Jha, Cornell University, Department of Mechanical Engineering; Choong Hee Kim, Cornell University, Department of Mechanical Engineering. Corresponding author: Yann Hicke. These authors contributed equally.
Automated Essay Scoring has been explored as a research and industry problem for over 50 years. It has drawn a lot of attention from the NLP community because of its clear educational value as a research area that can engender the creation of valuable time-saving tools for educators around the world. Yet, these tools are generally focused on detecting good grammar, spelling mistakes, and organization quality, but tend to fail at incorporating persuasiveness features in their final assessment. The responsibility of giving actionable feedback to the student to improve the strength of their arguments is left solely on the teacher's shoulders. In this work, we present a transformer-based architecture capable of achieving above-human accuracy in annotating argumentative writing discourse elements for their persuasiveness quality, and we expand on planned future work investigating the explainability of our model, so that actionable feedback can be offered to the student and a partnership between the teacher's advice and the machine's advice can potentially be enabled.
Keywords: Automated Essay Scoring, Argument Mining, Large Language Models.
Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant [ August 12, 2023 ========================================================================
§ INTRODUCTION ETS e-rater <cit.> is one of many commercial tools available today that can automatically grade essays and hence save a substantial amount of human time. It follows a long lineage of tools created over the past 50 years, all traced back to Page's pioneering work on the Project Essay Grader <cit.>. All high school students taking the SAT to get into college, and undergraduate students applying to graduate schools with their GRE or GMAT scores, will have their essays graded by an Automated Essay Scoring (AES) system. The vast majority of AES software are holistic graders in the sense that they summarize the entire quality of an essay in one single score. The main reason for this trend is that the vast majority of available annotated corpora come with only a holistic score. In August 2022, Crossley et al. released a dataset somewhat unique of its kind: a large-scale corpus of writing with annotated discourse elements (PERSUADE) indicating their level of persuasiveness <cit.>. The originality of this dataset is what motivates our work: can we achieve human-level accuracy on the persuasiveness prediction task? And, building on that performance, can we provide feedback to the student-writer?
§ RELATED WORK We outline here work in Automated Essay Scoring that does not solely focus on holistic scoring.
§.§ Identifying Argumentative Discourse Structures in Persuasive Essays: In 2014, Stab and Gurevych <cit.> developed a corpus of essays and tried to identify the structure of arguments in persuasive essays as well as novel feature sets for identifying argument components and argumentative relations, which was one of the first approaches in the field of argument mining. §.§ SVM Regressor for Modeling Argument Strength: In 2015, Persing and Ng <cit.> proposed an SVM regressor model to score an essay based on the strength of an argument. This paper also released a human-annotated dataset of 1000 essays publically to stimulate further research. In this dataset, the essays were scored from 1 through 4, higher score indicating a strong argument. §.§ Neural Models for Predicting Argument Persuasiveness: In 2018 Carlile et. al. <cit.> released an argument mining dataset, annotating the arguments within the essay as MajorClaims, Claims, Premises, Support and Attack, as well as scored these sections on the basis of attributes like Specificity, Eloquence, Strength, etc. The same group of people in another paper in 2018 <cit.> proposed a bidirectional LSTM model with attention for providing scores on these metrics (Specificity, Eloquence, Strength, etc.) using a neural network, using the same dataset. In 2019, Toledo et. al. also released a new dataset annotating arguments on the basis of quality and comparing pair-wise arguments for the stronger argument. They used a BERT-base-uncased model architecture to create word embeddings, with an Argument Classification and an Argument Ranking head. <cit.> § METHOD §.§ Data Preprocessing and Problem Formulation Recall that our goal is to predict the effectiveness rating for each discourse element given its type label. In the training data, except a table of discourse elements, we also have access to the complete essays which these discourse elements are extracted from. Each essay contains a variable number of discourse elements, with possibly repeated type labels. §.§.§ Data Preprocessing: In order to fully utilize the context information, when evaluating each discourse element, we aim to include all other discourse elements extracted from the same essay in the input as well. For the purpose of efficiency, predictions are done on the essay level. The data preprocessing goes as follows. For each essay, we first look for every discourse element extracted from it and locate them within the essay. Then we add special tokens to the beginning and the end of each discourse element indicating the corresponding discourse type. Finally, we concatenate all the discourse elements together to form a new essay, following the same order as when they appear in the original one. An example of preprocessed essays is the following: After this step, we tokenize the essays and use the resulting sequences as the inputs to our model. §.§.§ Problem Formulation: Eventually, we want to produce one prediction, which is a probability distribution over the three different ratings, for every discourse element. Essentially this is a sequence classification problem. However, instead of directly handling it as a sequence classification task, we find that it is more efficacious to formulate the problem as a token classification task. At training time, we label each token with the effectiveness rating of its corresponding discourse element and try to correctly predict all labels. 
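To make the preprocessing and the token-level labeling concrete, the sketch below shows one plausible way to wrap each discourse element in discourse-type markers, concatenate them in essay order, and assign every token the effectiveness rating of its element. The special-token strings, the DiscourseElement container, and the whitespace tokenization are illustrative assumptions, not the exact pipeline used in this work.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical container for one annotated discourse element.
@dataclass
class DiscourseElement:
    text: str    # span extracted from the essay
    dtype: str   # e.g. "Claim", "Evidence"
    label: int   # effectiveness rating: 0, 1, or 2

def build_example(elements: List[DiscourseElement]) -> Tuple[str, List[int]]:
    """Wrap each element in discourse-type markers, concatenate them in essay
    order, and label every (whitespace) token with its element's rating."""
    pieces, token_labels = [], []
    for el in elements:
        wrapped = f"<{el.dtype}> {el.text} </{el.dtype}>"
        pieces.append(wrapped)
        token_labels.extend([el.label] * len(wrapped.split()))
    return " ".join(pieces), token_labels

essay, labels = build_example([
    DiscourseElement("Phones distract students.", "Claim", 2),
    DiscourseElement("A 2019 survey showed lower grades.", "Evidence", 1),
])
```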
Then at inference time, we take the average scores of overall tokens to obtain a prediction for each discourse element. §.§ Model Selection We build our classifier on the pre-trained large language model DeBERTaV3. The DeBERTa model, originally proposed in <cit.>, improves BERT <cit.> using a disentangled attention mechanism and an enhanced mask decoder. Then DeBERTaV3 <cit.> further improves the original DeBERTa model with a new ELECTRA-style pre-training method. We briefly introduce the three techniques here. §.§.§ Disentangled Attention In a classical attention mechanism, each token is represented by one vector which is the sum of the content embedding and the position embedding, whereas in DeBERTa these two embeddings are kept separate. For each pair of tokens x_i and x_j at position i and j respectively, we have two pairs of embeddings: the content embeddings H_i, H_j and the relative position embeddings P_i|j, P_j|i. The cross-attention score between x_i and x_j is then calculated as A_i,j = H_iH_j^⊤ + H_iP_j|i^⊤ + P_i|jH_j^⊤, where position-to-position attention is omitted in the implementation for lack of useful information. §.§.§ Enhanced Mask Decoder: DeBERTa is pre-trained using masked language modeling (MLM). The absolute position information is important in this task as well as many other NLP tasks. To use this information, DeBERTa incorporates the absolute position embeddings after the Transformer layers but before the softmax layer for masked token prediction, hence enhancing the mask decoder. §.§.§ ELECTRA-style Pre-training: DeBERTaV3 replaces the MLM pre-training procedure in DeBERTa with an ELECTRA-style pre-training procedure. In the pre-training stage, DeBERTaV3 trains a generator that aims to minimize an MLM loss and a discriminator which aims to minimize an RTD (Replaced Token Detection) loss simultaneously. A Gradient-Disentangled Embedding Sharing method is adopted to avoid the tug-of-war between the generator and the discriminator. §.§ Memory constraints The "scaling laws" draw us towards picking models that are ever larger. Yet, this added performance brought by a larger number of weights does not come for free for all Machine Learning practitioners. DeBERTaV3_large represents around 800MB of weights when stored in a PyTorch bin file. The virtual machines that a lot of practitioners have access to for free (be it Google collab or Kaggle notebooks) tend to have around 13GB of RAM available. It is not sufficient memory; we lay out a simple memory requirement estimation example below. For a 1 billion fp32 parameter model we can break down the memory needs as such: 4GB of data just for the weights since these are fp32 numbers, the same amount is needed to store the gradients. On top of this 8 GB, in the case of Adam as a choice of the optimizer (which is the optimizer that we are using) we need to add 8GB for storing the first and second moments of each gradient. Therefore as a rough estimation, we end up with 16GB required just to properly load a 1bn parameter model for training. before taking into account the required memory to store the activations during the forward pass. If we want to load a decently sized model such as DeBERTaV3_large we need to make use of a few engineering tricks to circumvent these constraints. §.§.§ fp16: Mixed precision training is the first trick that we used <cit.>. It is a very intuitive technique that we used which relies on cleverly using low-precision arithmetic. 
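Before continuing with the memory-saving tricks, it may help to make the disentangled attention score A_i,j above concrete. The following is a minimal sketch of the content-to-content, content-to-position, and position-to-content terms only; it deliberately omits the query/key projections, multiple heads, and score scaling of the actual DeBERTa implementation, so it should be read as an illustration of the formula rather than the library code.

```python
import torch

def disentangled_scores(H: torch.Tensor, P: torch.Tensor, K: int) -> torch.Tensor:
    """H: (L, d) content embeddings; P: (2K+1, d) relative-position embeddings,
    where P[k + K] embeds a relative offset k clipped to [-K, K]."""
    L = H.size(0)
    idx = torch.arange(L)
    rel = torch.clamp(idx[:, None] - idx[None, :], -K, K) + K  # offset of j seen from i (one convention)
    P_ji = P[rel]      # (L, L, d): stands in for P_{j|i}
    P_ij = P[rel.T]    # (L, L, d): stands in for P_{i|j}
    c2c = H @ H.T                               # H_i · H_j
    c2p = torch.einsum("id,ijd->ij", H, P_ji)   # H_i · P_{j|i}
    p2c = torch.einsum("ijd,jd->ij", P_ij, H)   # P_{i|j} · H_j
    return c2c + c2p + p2c

A = disentangled_scores(torch.randn(6, 8), torch.randn(2 * 3 + 1, 8), K=3)
```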
Instead of storing all real numbers in their 32-bit representation, we shift all of them to be represented as 16-bit numbers. It enables saving a lot of memory while not compromising the accuracy of computations (fp16 loses in representative power due to its limited numerical range but has decent precision otherwise). Therefore based on the architecture which uses batch normalization layers we can assume that quantization errors will most likely be negligible since activations are frequently normalized. We can shift the quantization of our neural network by passing "fp16" as an argument to the Trainer object. The implementation that Huggingface uses for quantizing a model does not involve representing all weights, gradients, activations, and moments as fp16 numbers; simply the forward activations saved for gradient computation. Thus it does not halve the memory needs; more optimization techniques are required. §.§.§ Gradient checkpointing: Chen et al. introduced this technique - also known as "activation checkpointing" - in their paper <cit.>. It uses significantly less memory. When enabled a lot of memory can be freed at the cost of a small decrease in training speed. The memory savings tend to be in the order of 𝒪(n) with n the number of feed-forward layers. The general idea of this technique is to cleverly analyze the computation graph and based on it decide on what results to store. For example, if a low-cost operation of a forward pass can be dropped and only recomputed later during the backward pass it becomes a savvy trade-off between computation and memory. §.§.§ Gradient accumulation: This technique modifies the last step of the backward pass when training a neural network. Instead of updating the gradients after the forward pass and backward pass of each mini-batch, the gradients are saved and the update is only done after several mini-batches. It enables an algorithm to emulate a network training procedure on larger batches even though the execution of the forward and backward pass is done on smaller batches by performing the weights update on their accumulation; hence saving extra memory. §.§ Ensembling Ensembling is an approach that combines predictions from multiple models in order to obtain a better predictive performance. There are multiple ways of ensembling models. In our model, we used a "K-fold cross-training" approach - a combination of K-fold cross-validation and bagging approaches, where we divided the training data into five folds and trained five models independently with each fold as a test set. The final prediction is an average of all five models. A brief description of bagging and other ensembling methods is listed below. §.§.§ Bagging We noticed that models trained over different folds of training samples had high variance. To reduce this variance, we used bagging <cit.> by averaging the predictions of five different models. §.§.§ Boosting This other paradigm had us use a LightGBM <cit.> Bag-of-Words model <cit.>, which is a Bag-of-words classifier with Light Gradient Boosting Method <cit.>. This model worked decently well, slightly better than a BERT-base sequence classifier, but we needed a stronger model to get a competitive model. §.§.§ Stacking Another ensembling technique that we tried was stacking <cit.>; we tried to ensemble each of the five models' predictions and the Bag-of-words classifier using another meta-model, which was a neural network. This approach is useful in reducing bias among models. 
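For concreteness, the bagging average described above amounts to the following computation; this is a framework-agnostic sketch in which the per-fold logits are assumed to be precomputed.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bagged_probabilities(fold_logits: list) -> np.ndarray:
    """fold_logits: one (n_elements, 3) logit array per fold-specific model.
    Returns the equally weighted average of the per-model class probabilities."""
    probs = np.stack([softmax(l) for l in fold_logits], axis=0)
    return probs.mean(axis=0)

# Example with five hypothetical fold models and four discourse elements.
rng = np.random.default_rng(0)
avg = bagged_probabilities([rng.normal(size=(4, 3)) for _ in range(5)])
```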
Stacking, however, did not improve the final results, because the five models combined already make a strong learner while the Bag-of-words model is quite weak; the meta-network therefore almost completely ignores the Bag-of-words model. It would be interesting to stack a few other weak models before ensembling with our main strong model, which could plausibly improve the loss, as was done in prior work on identifying fake news <cit.>.
§.§ Hyperparameter Optimization As mentioned before, DeBERTaV3_large is a heavy model. Training the four models that we then ensemble takes about 12 hours on a Tesla P100, which illustrates the limitations one faces when using free VM access such as Colab free or Kaggle notebooks. Hyperparameter optimization was therefore a low priority because of our inability to run instances for more than 30 hours per week. Yet, we decided to focus on one hyperparameter: the accumulation step interval. We decreased it so that gradients would be updated after every batch, at the expense of extra memory usage. It granted us an improvement of +0.01 in AUC.
§.§ Regularization We used several regularization techniques to help our model generalize. We used dropout probabilities of 10%, 20%, 30%, 40% and 50% and averaged the outputs to get the final output logits. Additionally, we used Adversarial Weight Perturbation, which is briefly described below. We also tried to implement Scale-Invariant Fine-Tuning, a work in progress that we also describe below. §.§.§ Adversarial Weight Perturbation: Adversarial Weight Perturbation <cit.>, <cit.> is a regularization method that perturbs the weights of the deep neural network to prevent overfitting to the data. Here, we apply weight perturbation every time the training loss goes below a set threshold, adding noise in the direction of the gradient of the loss function with respect to the weights. This method works similarly to Stochastic Weight Averaging, which keeps pushing the learner away from local minima. §.§.§ Scale-Invariant Fine-Tuning: Virtual Adversarial Perturbation <cit.> is a regularization technique that introduces small perturbations in the input, encouraging the model to produce the same output for an example as for its adversarial perturbation. In the case of NLP tasks, these perturbations are added to the word embeddings instead of the original sequence. The problem is that the word embedding values vary largely between different words and models. To solve this, the authors of the DeBERTa paper <cit.> suggest Scale-Invariant Fine-Tuning (SiFT), which normalizes the embedding layers before applying perturbations. This method significantly improves the performance of the model on downstream NLP tasks.
§ RESULTS §.§ Evaluation Metric The output is evaluated based on the log loss as follows. log-loss = -1/N∑_i=1^N ∑_j=1^M y_ij log(p_ij) where N is the number of rows in the test set, M is the number of class labels, y_ij is 1 if observation i is in class j and 0 otherwise, and p_ij is the predicted probability that observation i belongs to class j. §.§ Overall Results Table <ref> below describes the performance on the given task of the different language models that were used for generating the embeddings. The models trained on DeBERTa Large embeddings significantly outperformed the other models. Section <ref> explains how DeBERTa improves over the BERT model.
Using a larger model improves the performance significantly, however, it also severely increases the computational costs and memory requirements. Section <ref> explains how we overcame such memory constraints. § DISCUSSION This project seeks to identify a way to include Artificial intelligence in assessing argument-persuasiveness. This model, combined with an argument-mining AI, is capable of identifying the sections of an essay that are "effective" or "ineffective" in persuading the reader. Based on the segments identified as "ineffective" by the machine, a teacher can go through those sections carefully, identify the scope of improvements and provide the necessary feedback. It allows to specifically target the segment of the essay that needs attention, thus making the job of the teacher much more specific. It ends up really helping the student identify the sections where they need to strengthen their arguments. In this project, various aspects of language models were explored in order to achieve competitive accuracy. This discussion section tries to summarize some of the more vital aspects of the architecture, and the key takeaways from those. §.§ Choosing the right language model The choice of model was a vital piece of the puzzle. A vanilla DeBERTa_base<cit.> model performs on par with a fine-tuned BERT_base<cit.> architecture which is one of the most popular Transformer-based architectures. Recall that the DeBERTa architecture differs from BERT<cit.> or RoBERTa<cit.> mainly due to the introduction of the disentangled attention property. Increasing the size of the model by using the DeBERTa_large architecture improved the performance significantly. This led us to two major conclusions. Firstly, that the disentangled attention, that separates the token and positional embeddings, creates a more robust representation of text for assessing the its persuasiveness. The AI index report of 2021 <cit.> shows that the DeBERTa architecture tops the leaderboard of the SuperGLUE benchmark <cit.> which is a benchmark for complex language understanding tasks. This shows that DeBERTa model, with its disentangled attention mechanism, better encapsulates the contextual understanding of the text and hence, supports the sentence evaluation better than the other Language Models. Secondly, a little more trivial conclusion was that incrementing the size of the model actually significantly improves the performance of the model. §.§ Ensembling methods Among the various methods applied to our initial architecture, ensembling methods applied to DeBERTaV3 led to the most significant improvement. We ensembled five identical DeBERTaV3 architectures each using different training/validation splits of our working dataset. After ensembling these models we reached 0.63 in log loss. However, ensembling requires substantial extra GPU memory during training due to having to deal with four other models. Section <ref> describes how we went about solving the computational challenges. We also applied boosting on the bag-of-words model<cit.>. Even though a BERT<cit.>_base model is known to provide significantly better performance for most NLP tasks as compared to a bag-of-words model, results show that boosting improves the bag-of-words model's performance for specific tasks. However, the worsening of performance by stacking shows that ensembling unbalanced models can lead to poorer models. §.§ Overcoming Computational Challenges Training a large-language model requires an appropriate quantization of the model. 
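The memory-saving switches discussed in this work (mixed precision, gradient checkpointing, gradient accumulation) can all be enabled through the Hugging Face Trainer configuration. The sketch below is a minimal, illustrative setup; the checkpoint name, batch size, accumulation interval, and other hyperparameters are placeholder values rather than the exact ones used for DeBERTaV3_large here.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "microsoft/deberta-v3-large"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=3)

args = TrainingArguments(
    output_dir="out",
    fp16=True,                      # mixed-precision training
    gradient_checkpointing=True,    # trade compute for activation memory
    per_device_train_batch_size=2,  # small per-step batch...
    gradient_accumulation_steps=8,  # ...accumulated to a larger effective batch
    num_train_epochs=3,
    learning_rate=1e-5,
)

# trainer = Trainer(model=model, args=args, train_dataset=...)  # datasets omitted
```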
Reducing the precision of the model to fp16 reduced the memory requirements significantly. Furthermore, increasing the gradient accumulation steps reduced the amount of computation required and hence, the amount of time required to train the model. Overall, training a large language model might require making trade-offs between accuracy and training speed, as well as making judgement calls regarding the precision that the model requires. § FUTURE WORK In future work, we aspire to use Explainable AI (XAI) to have the machine pivot from a grader position to a Teaching Assistant position. The hope is to transform our predictive model into a feedback provider. The feedback would then trigger a conversation between the student, the teacher, and the machine. XAI is a subset of artificial intelligence that focuses on making the decision-making process of a model transparent and interpretable to human users. In the context of providing feedback to students on the strength of their arguments in an essay, a large language model can be used to analyze the text and then identify key elements such as the structure of the argument, the use of evidence, and the clarity of the writing so that the student can improve on those. In future work, we aspire to use XAI to make the model's feedback more explainable and use natural language generation (NLG) techniques to generate human-readable explanations as feedback. For example, the model could identify that a student's essay lacks a clear thesis statement and generate the feedback "Your essay does not have a clear thesis statement. A strong thesis statement is essential for guiding the structure and direction of your argument." XAI would be done by using techniques such as attention visualization, which can show which parts of the text the model is focusing on when making its grade predictions. Once identified, it is shown to the student and thus can help them understand why the machine is giving them a certain grade and how they can improve their writing. On a more granular level, a way to use attention visualization for feedback is to display a heatmap of the essay, where each word is colored based on the level of attention the model is giving to it. The words that are colored more brightly are the ones that the model is paying more attention to, and therefore are the ones that are most important for the student to focus on when revising their essay. For example, if the model is giving low attention to the introduction, it could be an indication of weak thesis statement or lack of a clear direction for the essay or if the model is giving low attention to certain key vocabulary words related to the topic, it could be an indication of lack of understanding of the topic or weak research. Overall, attention visualization can be a useful tool for providing feedback to students on their writing by allowing them to see which parts of the essay the model is focusing on and why. This can help them to better understand the model's feedback and make more informed revisions to their writing. Thanks to Professor Kilian Weinberger for his support and ideas throughout our work.
http://arxiv.org/abs/2307.06045v1
20230712094835
Microwave conductivity due to impurity scattering in cuprate superconductors
[ "Minghuan Zeng", "Xiang Li", "Yongjun Wang", "Shiping Feng" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
[email protected] Department of Physics, Beijing Normal University, Beijing 100875, China The microwave surface impedance measurements on cuprate superconductors provide the crucial information of the effect of the impurity scattering on the quasiparticle transport, however, the full understanding of the effect of the impurity scattering on the quasiparticle transport is still a challenging issue. Here based on the microscopic octet scattering model, the effect of the impurity scattering on the low-temperature microwave conductivity in cuprate superconductors is investigated in the self-consistent T-matrix approach. The impurity-dressed electron propagator obtained in the Fermi-arc-tip approximation of the quasiparticle excitations and scattering processes is employed to derive the electron current-current correlation function by taking into account the impurity-induced vertex correction. It is shown that the microwave conductivity spectrum is a non-Drude-like, with a sharp cusp-like peak extending to zero-energy and a high-energy tail falling slowly with energy. Moreover, the microwave conductivity decreases with the increase of the impurity concentration or with the increase of the strength of the impurity scattering potential. In a striking contrast to the dome-like shape of the doping dependence of the superconducting transition temperature, the microwave conductivity exhibits a reverse dome-like shape of the doping dependence. The theory also show that the highly unconventional features of the microwave conductivity are generated by both the strong electron correlation and impurity-scattering effects. 74.25.Nf, 74.62.Dh, 74.25.Fy, 74.72.-h Microwave conductivity due to impurity scattering in cuprate superconductors Minghuan Zeng, Xiang Li, Yongjun Wang, and Shiping Feng ============================================================================ § INTRODUCTION For a conventional superconductor with a s-wave pairing symmetry, the impurity scattering has little effect on superconductivity<cit.>. However, cuprate superconductors are anomalously sensitive to the impurity scattering <cit.>, since superconductivity involves a paring state with the dominant d-wave symmetry<cit.>. In particular, the superconducting (SC) transition temperature T_ c in cuprate superconductors is systematically diminished with impurities <cit.>, which therefore confirms definitely that the impurity scattering has high impacts on superconductivity<cit.>. In this case, the understanding of the effect of the impurity scattering on superconductivity is a central issue for cuprate superconductors. Among the striking features of the SC-state in cuprate superconductivity, the physical quantity which most evidently displays the dramatic effect of the impurity scattering on superconductivity is the quasiparticle transport<cit.>, which is manifested by the microwave conductivity. This microwave conductivity contains a wealth of the information on the SC-state quasiparticle response, and is closely associated with the superfluid density<cit.>. By virtue of systematic studies using the microwave surface impedance measurements, the low-temperature features of the SC-state quasiparticle transport in cuprate superconductors have been well established <cit.>, where an agreement has emerged that the microwave conductivity are dominated mainly by thermally excited quasiparticles being scattered by impurities. 
In particular, as an evidence of the very long-live quasiparticle excitation deep in the SC-state, the low-temperature microwave conductivity spectrum has a cusp-like shape of the energy dependence <cit.>. However, it is still unclear how this microwave conductivity evolves with the impurity concentration. Moreover, the experimental observations have also shown that even minor concentrations of impurities lead to changes in the temperature dependence of the magnetic-field penetration-depth from linear in the pure systems to quadratic<cit.>, while the ratio of the low-temperature superfluid density and effective mass of the electrons n_ s(T→ 0)/m^* is decreased when one increases the impurity concentration<cit.>. In the d-wave SC-state of cuprate superconductors, the SC gap vanishes along the nodal direction of the electron Fermi surface (EFS)<cit.>, and then as a natural consequence, the most properties well below T_ c ought to be controlled by the quasiparticle excitations at around the nodal region of EFS. In this case, the d-wave Bardeen-Cooper-Schrieffer (BCS) type formalism<cit.>, incorporating the effect of the impurity scattering within the self-consistent T-matrix approach, has been employed to study the effect of the impurity scattering on the microwave conductivity of cuprate superconductors <cit.>, where the impurity scattering self-energy was evaluated in the nodal approximation of the quasiparticle excitations and scattering processes, and then was used to calculate the electron current-current correlation function by including the contributions of the impurity-induced vertex correction and Fermi-liquid correction <cit.>. The obtained results show that both the impurity-induced vertex correction and Fermi-liquid correction modify the microwave conductivity<cit.>. However, (i) although the contribution from the Fermi-liquid correction is included, these treatments suffer from ignoring the strong electron correlation effect in the homogenous part of the electron propagator <cit.>, while this strong electron correlation effect also plays an important role in the SC-state quasiparticle transport; (ii) moreover, the angle-resolved photoemission spectroscopy (ARPES) experiments <cit.> have shown clearly that the Fermi arcs that emerge due to the EFS reconstruction at the case of zero energy <cit.> can persist into the case for a finite binding-energy, where a particularly large fraction of the spectral weight is located at around the tips of the Fermi arcs. These tips of the Fermi arcs connected by the scattering wave vectors q_i thus construct an octet scattering model, and then the quasiparticle scattering with the scattering wave vectors q_i contribute effectively to the quasiparticle scattering processes<cit.>. In particular, this octet scattering model has been employed to give a consistent explanation of the experimental data detected from Fourier transform scanning tunneling spectroscopy <cit.> and the ARPES autocorrelation pattern observed from ARPES experiments<cit.>. These experimental results <cit.> therefore have shown clearly that the shape of EFS has deep consequences for the various properties of cuprate superconductors, while such an aspect should be also reflected in the SC-state quasiparticle transport. 
In the recent work<cit.>, we have started from the homogenous part of the electron propagator and the related microscopic octet scattering model, which are obtained within the framework of the kinetic-energy-driven superconductivity <cit.>, to discuss the influence of the impurity scattering on the electronic structure of cuprate superconductors in the self-consistent T-matrix approach, where the impurity scattering self-energy is derived in the Fermi-arc-tip approximation of the quasiparticle excitations and scattering processes, and then the impurity-dressed electron propagator incorporates both the strong electron correlation effect and the impurity-scattering effect. The obtained results<cit.> show that the decisive role played by the impurity scattering self-energy in the particle-hole channel is the further renormalization of the quasiparticle band structure with a reduced quasiparticle lifetime, while the impurity scattering self-energy in the particle-particle channel induces a strong deviation from the d-wave behaviour of the SC gap, leading to the existence of a finite gap over the entire EFS. In this paper, we study the effect of the impurity scattering on the microwave conductivity in cuprate superconductors along with this line by taking into account the impurity-induced vertex correction, where the impurity-dressed electron propagator<cit.> is employed to evaluate the vertex-corrected electron current-current correlation function in the self-consistent T-matrix approach, and the obtained results in the Fermi-arc-tip approximation of the quasiparticle excitations and scattering processes show that the low-temperature microwave conductivity spectrum is a non-Drude-like, with a sharp cusp-like peak extending to zero-energy and a high-energy tail falling slowly with energy, in agreement with the corresponding experiments <cit.>. In particular, although the low-energy cusp-like peak decay as → 1/[ω+ constant], the overall shape of the microwave conductivity spectrum exhibits a special non-Drude-like behavior with the depicted formula that has been also used to fit the corresponding experimental data in Ref. Turner03. Moreover, the microwave conductivity decreases with ascending impurity concentration or with rising strength of the impurity scattering potential. Our these results therefore show that the highly unconventional features of the microwave conductivity are induced by both the strong electron correlation and impurity-scattering effects. The remainder of this paper is organized as follows: Sec. <ref> contains details regarding the calculation technique of the microwave conductivity in the presence of the impurity scattering. The quantitative characteristics of the impurity-scattering effect on the doping and energy dependence of the microwave conductivity are presented in Sec. <ref>, where we show that in a striking contrast to the dome-like shape doping dependence of T_ c, the minimum of the microwave conductivity occurs at around the optimal doping, and then increases in both underdoped and overdoped regimes. Finally, we give a summary in Sec. <ref>. In the Appendix, we present the details of the derivation of the vertex kernels of the electron current-current correlation function. 
§ THEORETICAL FRAMEWORK It was recognized shortly after the discovery of superconductivity in cuprate superconductors that the essential physics of cuprate superconductors is contained in the square-lattice t-J model<cit.>, H = -∑_ll'σt_ll'C^†_lσC_l'σ +μ∑_lσC^†_lσC_lσ +J∑_lη̂ S_l· S_l+η̂, where C^†_lσ (C_lσ) creates (annihilates) a constrained electron with spin index σ=↑,↓ on lattice site l, S_l is spin operator with its components S^ x_l, S^ y_l, and S^ z_l, and μ is the chemical potential. The kinetic-energy part includes the electron-hopping term t_ll'=t_η̂=t between the nearest-neighbor (NN) sites η̂ and the electron-hopping term t_ll'=t_η̂'=t' between the next NN sites η̂', while the magnetic-energy part is described by a Heisenberg term with the magnetic interaction J between the NN sites η̂. As a qualitative discussion, the commonly used parameters in the t-J model (<ref>) are chosen as t/J=2.5 and t'/t=0.3 as in our previous discussions <cit.>. However, when necessary to compare with the experimental data, we set J=1000K. The basis set of the t-J model (<ref>) is restricted by the requirement that no lattice site may be doubly occupied by electrons<cit.>, i.e., ∑_σC^†_lσC_lσ≤ 1. Our method employs a fermion-spin theory description of the t-J model (<ref>) together with the on-site local constraint of no double electron occupancy <cit.>, where the constrained electron operators C_l↑ and C_l↓ in the t-J model (<ref>) are separated into two distinct operators as, C_l↑=h^†_l↑S^-_l,     C_l↓=h^†_l↓S^+_l, with the spinful fermion operator h_lσ=e^-iΦ_lσh_l that describes the charge degree of freedom of the constrained electron together with some effects of spin configuration rearrangements due to the presence of the doped hole itself (charge carrier), while the spin operator S_l that represents the spin degree of freedom of the constrained electron, and then the local constraint of no double electron occupancy is fulfilled in actual analyses. Starting from the t-J model (<ref>) in the fermion-spin representation (<ref>), the kinetic-energy-driven SC mechanism has been established <cit.>, where the charge carriers are held together in the d-wave pairs in the particle-particle channel due to the effective interaction, which originates directly from the kinetic energy of the t-J model (<ref>) in the fermion-spin representation (<ref>) by the exchange of spin excitations, then the d-wave electron pairs originating from the d-wave charge-carrier pairing state are due to the charge-spin recombination, and their condensation reveals the d-wave SC-state. 
In these previous discussions, the homogenous electron propagator of the t-J model (<ref>) in the SC-state has been obtained explicitly in the Nambu representation as<cit.>, G̃( k,ω) = ( [ G( k,ω), ( k,ω); ^†( k,ω), -G( k,-ω) ]) = 1 F( k,ω){[ω-Σ_0( k,ω)]τ_0 +Σ_1( k,ω)τ_1 + Σ_2( k,ω)τ_2+[ε_ k +Σ_3( k,ω)]τ_3}, where τ_0 is the unit matrix, τ_1, τ_2, and τ_3 are Pauli matrices, ε_ k=-4tγ_ k+4t'γ_ k'+μ is the energy dispersion in the tight-binding approximation, with γ_ k=( cosk_x+ cos k_y)/2, γ_ k'= cosk_x cosk_y, F( k,ω)=[ω-Σ_0( k,ω)]^2-[ε_ k +Σ_3( k,ω)]^2-Σ^2_1( k,ω) -Σ^2_2( k,ω), and the homogenous self-energy has been expanded into its constituent Pauli matrix components as, Σ̃( k,ω)=∑_α=0^3Σ_α( k,ω)τ_α = ( [ Σ_0( k,ω)+Σ_3( k,ω), Σ_1( k,ω)-iΣ_2( k,ω); Σ_1( k,ω)+iΣ_2( k,ω), Σ_0( k,ω)-Σ_3( k,ω) ]), with Σ_0( k,ω) and Σ_3( k,ω) that are respectively the antisymmetric and symmetric parts of the homogenous self-energy in the particle-hole channel, while Σ_1( k,ω) and Σ_2( k,ω) that are respectively the real and imaginary parts of the homogenous self-energy in the particle-particle channel. Moreover, these homogenous self-energies Σ_0( k,ω), Σ_1( k,ω), Σ_2( k,ω), and Σ_3( k,ω) have been derived explicitly in Ref. Feng15a in terms of the full charge-spin recombination. In particular, the sharp peaks visible for temperature T→ 0 in Σ_0( k,ω), Σ_1( k,ω), Σ_2( k,ω), and Σ_3( k,ω) are actually a δ-functions, broadened by a small damping used in the numerical calculation for a finite lattice. The calculation in this paper for Σ_0( k,ω), Σ_1( k,ω), Σ_2( k,ω), and Σ_3( k,ω) is performed numerically on a 120× 120 lattice in momentum space, with the infinitesimal i0_+→ iΓ replaced by a small damping Γ=0.05J. The homogenous electron spectral function can be obtained directly from the above homogenous electron propagator (<ref>). In this case, the topology of EFS in the pure system has been discussed in terms of the intensity map of the homogenous electron spectral function at zero energy<cit.>, and the obtained results show that EFS contour is broken up into the disconnected Fermi arcs located around the nodal region<cit.>, however, a large number of the low-energy electronic states is available at around the tips of the Fermi arcs, and then all the anomalous properties arise from these quasiparticle excitations located at around the tips of the Fermi arcs. In particular, these tips of the Fermi arcs connected by the scattering wave vectors q_i naturally construct an octet scattering model, and then the quasiparticle scattering with the scattering wave vectors q_i therefore contribute effectively to the quasiparticle scattering processes <cit.>. Moreover, this octet scattering model can persist into the case for a finite binding-energy <cit.>, which leads to that the sharp peaks in the ARPES autocorrelation spectrum with the scattering wave vectors q_i are directly correlated to the regions of the highest joint density of states. §.§ Impurity-dressed electron propagator In the low-temperature limit, the framework for the discussions of the impurity-scattering effect is the self-consistent T-matrix approach <cit.>. 
The discussions of the low-temperature microwave conductivity of cuprate superconductors in this paper builds on the impurity-dressed electron propagator, which is obtained from the dress of the homogenous electron propagator (<ref>) via the impurity scattering<cit.>, where the self-consistent T-matrix approach is employed to derive the impurity scattering self-energy in the Fermi-arc-tip approximation of the quasiparticle excitations and scattering processes. For a convenience in the following discussions of the microwave conductivity, a short summary of the derivation process of the impurity-dressed electron propagator <cit.> is therefore given in this subsection. The homogenous electron propagator in Eq. (<ref>) is dressed due to the presence of the impurity scattering<cit.>, and can be expressed explicitly as, G̃_ I( k,ω)^-1=G̃( k,ω)^-1 -Σ̃_ I( k,ω), where in a striking similarity to the homogenous self-energy (<ref>), the impurity scattering self-energy Σ̃_ I( k,ω) can be also expanded into its constituent Pauli matrix components as, Σ̃_ I( k,ω)=∑_α=0^3Σ_ Iα( k,ω)τ_α = ( [ Σ_ I0( k,ω)+Σ_ I3( k,ω), Σ_ I1( k,ω)-iΣ_ I2( k,ω); Σ_ I1( k,ω)+iΣ_ I2( k,ω), Σ_ I0( k,ω)-Σ_ I3( k,ω) ]). The above impurity scattering self-energy together with the dressed electron propagator (<ref>) can be analyzed in the self-consistent T-matrix approach <cit.>, where Σ̃_ I( k,ω) can be derived approximately as, Σ̃_ I( k,ω)=n_ iNT̃_ k k(ω), with the impurity concentration n_ i, the number of sites on a square lattice N, and the diagonal part of the T-matrix T̃_ k k(ω), while the self-consistent T-matrix equation that can be expressed formally by the summation of all impurity scattering processes as, T̃_ k k'=1 Nτ_3V_ k k'+1 N∑_ k” V_ k k”τ_3G̃_ I( k”,ω)T̃_ k” k', where V_ k k' is the momentum dependence of the impurity scattering potential. It thus shows that the initial and final momenta of an impurity scattering event must always be equal to the momentum-space sited in the Brillouin zone (BZ). However, in the microscopic octet scattering model<cit.> shown in Fig. <ref>, a particularly large fraction of the spectral weight is accommodated at around eight tips of the Fermi arcs in the case of low temperatures and low energies, indicating that a large number of the quasiparticle excitations are induced only at around these eight tips of the Fermi arcs. On the other hand, the strength of the impurity scattering potential V_ k k' in the T-matrix equation (<ref>) falls off quickly when the momentum shifts away from the tips of the Fermi arcs. In this case, the initial and final momenta of an impurity scattering event are always approximately equal to the momentum-space sited at around one of these eight tips of the Fermi arcs. In this Fermi-arc-tip approximation<cit.>, we only need to consider three possible cases as shown in Fig. 
<ref> for the impurity scattering potential V_ k k' in the T-matrix equation (<ref>): (i) the impurity scattering potential for the scattering process at the intra-tip of the Fermi arc V_ k k'=V_1, where k and k' are located at the same tip of the Fermi arc; (ii) the impurity scattering potentials for the scattering process at the adjacent-tips of the Fermi arcs V_ k k'=V_2, V_ k k'=V_3, V_ k k'=V_7, and V_ k k'=V_8, where k and k' are located at the adjacent-tips of the Fermi arcs; (iii) the impurity scattering potentials for the scattering process at the opposite-tips of the Fermi arcs V_ k k'=V_4, V_ k k'=V_5, and V_ k k'=V_6, where k and k' are located at the opposite-tips of the Fermi arcs, and then the impurity scattering potential V_ k k' in the self-consistent T-matrix equation (<ref>) is reduced as a 8× 8-matrix, Ṽ =( [ V_11 V_12 ⋯ V_18; V_21 V_22 ⋯ V_28; ⋮ ⋮ ⋱ ⋮; V_81 V_82 ⋯ V_88 ]), where the matrix elements are given by: V_jj=V_1 for j=1,2,3,... 8, V_jj'=V_j'j=V_2 for j=1,2,3,6 with the corresponding j'=7,4,5,8, respectively, V_jj'=V_j'j=V_3 for j=1,2,3,4 with the corresponding j'=8,7,6,5, respectively, V_jj'=V_jj'=V_4 for j=1,2,3,4 with the corresponding j'=6,5,8,7, respectively, V_jj'=V_j'j=V_5 for j=1,2,3,4 with the corresponding j'=5,6,7,8, respectively, V_jj'=V_j'j=V_6 for j=1,2,4,5 with the corresponding j'=3,8,6,7, respectively, V_jj'=V_j'j=V_7 for j=1,2,5,6 with the corresponding j'=4,3,8,7, respectively, and V_jj'=V_j'j=V_8, for j=1,3,5,7 with the corresponding j'=2,4,6,8, respectively. With the help of the above impurity scattering potential matrix Ṽ, the self-consistent T-matrix equation (<ref>) is reduced as a 16× 16-matrix equation around eight tips of the Fermi arcs as, T̃_jj'=1 Nτ_3V_jj'+1 N∑_j” k”V_jj”[τ_3G̃_ I( k”,ω)]T̃_j”j', where j, j', and j” label the tips of the Fermi arcs, the summation k” is restricted within the area around the tip j” of the Fermi arc, T̃_jj' is now an impurity-average quantity, and then the impurity scattering self-energy Σ̃_ I( k,ω) in Eq. (<ref>) is obtained as, Σ̃_ I(ω)=n_ iNT̃_jj(ω). It has been shown that the diagonal propagator in Eq. (<ref>) is symmetrical about the nodal direction, while the off-diagonal propagator is asymmetrical about the nodal direction, since the SC-state has a d-wave symmetry<cit.>. In this case, the region of the location of the tips of the Fermi arcs has been separated into two groups: (A) the tips of the Fermi arcs located at the region of |k_y|>|k_x|, and (B) the tips of the Fermi arcs located at the region of |k_x|>|k_y|, and then the dressed electron propagator G̃_I( k,ω) in Eq. (<ref>) can be also derived in the regions A and B as<cit.>, G̃^ (A)_ I( k,ω) =1 F^ (A)_ I( k,ω){[ω-Σ_0( k,ω) -Σ_ I0(ω)]τ_0 +[Σ_1( k,ω)+Σ^ (A)_ I1(ω)]τ_1 +[Σ_2( k,ω)+Σ^ (A)_ I2(ω)]τ_2 +[ε_ k+Σ_3( k,ω)+Σ_ I3(ω)] τ_3},   G̃^ (B)_ I( k,ω) =1 F^ (B)_ I( k,ω){[ω-Σ_0( k,ω) -Σ_ I0(ω)]τ_0 +[Σ_1( k,ω)+Σ^ (B)_ I1(ω)]τ_1 +[Σ_2( k,ω)+Σ^ (B)_ I2(ω)]τ_2 +[ε_ k+Σ_3( k,ω)+Σ_ I3(ω)] τ_3}, respectively, where F^ (A)_ I( k,ω)=[ω-Σ_0( k,ω) -Σ_ I0(ω)]^2-[ε_ k+Σ_3( k,ω) +Σ_ I3(ω)]^2-[Σ_1( k,ω) +Σ^ (A)_ I1(ω)]^2-[Σ_2( k,ω) +Σ^ (A)_ I2(ω)]^2, F^ (B)_ I( k,ω) =[ω-Σ_0( k,ω) -Σ_ I0(ω)]^2-[ε_ k+Σ_3( k,ω) +Σ_ I3(ω)]^2-[Σ_1( k,ω) +Σ^ (B)_ I1(ω)]^2-[Σ_2( k,ω) +Σ^ (B)_ I2(ω)]^2. 
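For bookkeeping, the 8×8 impurity-scattering potential matrix specified element by element above can be assembled programmatically. The NumPy sketch below simply encodes the listed tip pairings; the numerical values for V_2–V_8 are the representative ratios of V_1 quoted later in the quantitative-characteristics discussion and are used here only for illustration.

```python
import numpy as np

def impurity_potential_matrix(V):
    """V: sequence (V1, ..., V8). Returns the symmetric 8x8 matrix V_tilde
    built from the tip pairings listed in the text (tips labeled 1..8)."""
    pairs = {
        2: [(1, 7), (2, 4), (3, 5), (6, 8)],  # adjacent tips, V2
        3: [(1, 8), (2, 7), (3, 6), (4, 5)],  # adjacent tips, V3
        4: [(1, 6), (2, 5), (3, 8), (4, 7)],  # opposite tips, V4
        5: [(1, 5), (2, 6), (3, 7), (4, 8)],  # opposite tips, V5
        6: [(1, 3), (2, 8), (4, 6), (5, 7)],  # opposite tips, V6
        7: [(1, 4), (2, 3), (5, 8), (6, 7)],  # adjacent tips, V7
        8: [(1, 2), (3, 4), (5, 6), (7, 8)],  # adjacent tips, V8
    }
    M = np.full((8, 8), float(V[0]))          # diagonal (intra-tip) entries V1
    for n, plist in pairs.items():
        for j, jp in plist:
            M[j - 1, jp - 1] = M[jp - 1, j - 1] = V[n - 1]
    return M

V1 = 1.0
V_tilde = impurity_potential_matrix(
    [V1, 0.85 * V1, 0.8 * V1, 0.7 * V1, 0.65 * V1, 0.75 * V1, 0.8 * V1, 0.9 * V1])
```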
In the self-consistent T-matrix approach, these impurity scattering self-energies Σ^ (A)_ I0(ω) [Σ^ (B)_ I0(ω)], Σ^ (A)_ I1(ω) [Σ^ (B)_ I1(ω)], Σ^ (A)_ I2(ω) [Σ^ (B)_ I2(ω)], and Σ^ (A)_ I3(ω) [Σ^ (B)_ I3(ω)] and the related T-matrix T̃^ (A)_jj'=∑_αT^(α)_ Ajj'τ_α [T̃^ (B)_jj'=∑_αT^(α)_ Bjj'τ_α] with the matrix elements T^(α)_ Ajj' [T^(α)_ Bjj'] in Eq. (<ref>) have been obtained in the Fermi-arc-tip approximation of the quasiparticle excitations and scattering processes, and given explicitly in Ref. Zeng22. With the help of the above dressed electron propagator (<ref>) [then the dressed electron spectral function], we<cit.> have also discussed the influence of the impurity scattering on the electronic structure of cuprate superconductors, and the obtained results of the line-shape in the quasiparticle excitation spectrum and the ARPES autocorrelation spectrum are well consistent with the corresponding experimental results <cit.>. §.§ Microwave Conductivity Now we turn to derive the microscopic conductivity of cuprate superconductors in the presence of impurities, which is closely associated with the dressed electron propagator (<ref>). The linear response theory allows one to obtain the microwave conductivity in terms of the Kubo formula<cit.>, σ^↔(Ω,T) = - ImΠ^↔(Ω)Ω, where Π^↔(Ω) is the retarded electron current-current correlation function, and can be expressed explicitly as, Π^↔(iΩ_m)=-1 N∫_0^βdτ e^iΩ_mτ⟨ T_τJ(τ)J(0)⟩, with β = 1/T, the bosonic Matsubara frequency Ω_m=2π m/β, and the current density of electrons J. This current density of electrons can be obtained in terms of the electron polarization operator, which is a summation over all the particles and their positions<cit.>, and can be expressed explicitly in the fermion-spin representation (<ref>) as P=∑_lσ R_lĈ^†_lσĈ_lσ=1 2∑_lσ R_lh_lσh^†_lσ. Within the t-J model (<ref>) in the fermion-spin representation (<ref>), the current density of electrons is obtained by evaluating the time-derivative of the polarization operator using the Heisenberg's equation of motion as, J = -ie[H, P] = -i1 2et∑_⟨ lη̂⟩η̂(h^†_l+η̂↑ h_l↑S_l^+S^-_l+η̂+h^†_l+η̂↓h_l↓ S^†_lS^-_l+η̂) +i1 2et'∑_⟨ lη̂'⟩η̂'(h^†_l+η̂'↑ h_l↑S_l^+S^-_l+η̂'+h^†_l+η̂'↓h_l↓ S^†_lS^-_l+η̂') = i1 2et∑_⟨ lη̂⟩ση̂C^†_lσ C_l+η̂σ-i1 2et'∑_⟨ lη̂'⟩ση̂' C^†_lσC_l+η̂'σ≈ -eV_ F∑_kσC^†_ kσ C_ kσ, with the electron charge e, the electron Fermi velocity V_ F, which can be derived directly from the energy dispersion ε_ k in the tight-binding approximation in Eq. (<ref>) as, V_ F = V^(x)_ Fk̂_x+V^(y)_ Fk̂_y = V_ F[k̂_xcosθ_k_ F +k̂_ysinθ_k_ F], where V^(x)_ F=tsin k^(x)_ F-2t'sin k^(x)_ Fcos k^(y)_ F, V^(y)_ F=tsin k^(y)_ F-2t'sin k^(y)_ Fcos k^(x)_ F, cosθ_k_ F=V^(x)_ F/V_ F, sinθ_k_ F=V^(y)_ F/V_ F, and V_ F=√([V^(x)_ F]^2+[V^(y)_ F]^2). For a convenience in the following discussions of the electron current-current correlation function (<ref>), the electron operators can be rewritten in the Nambu representation as Ψ^†_ k=(C^†_ k↑,C_- k↓) and Ψ_ k=(C_ k↑,C^†_- k↓)^ T, and then the current density of electrons in Eq. (<ref>) can be rewritten in the Nambu representation as, J = -eV_ F∑_kΨ_k^†τ_0Ψ_k. 
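The components of the Fermi velocity entering the current density above can be evaluated directly from the expressions quoted for V_F^(x) and V_F^(y); a small numerical sketch, with t and t' in units of J and a Fermi-arc-tip momentum chosen arbitrarily for illustration:

```python
import numpy as np

def fermi_velocity(kx, ky, t=2.5, tp=0.3 * 2.5):
    """V_F^(x), V_F^(y), |V_F| and the angle factors at a Fermi point (kx, ky),
    using the expressions quoted in the text; defaults follow t/J = 2.5, t'/t = 0.3."""
    vx = t * np.sin(kx) - 2.0 * tp * np.sin(kx) * np.cos(ky)
    vy = t * np.sin(ky) - 2.0 * tp * np.sin(ky) * np.cos(kx)
    vf = np.hypot(vx, vy)
    return vx, vy, vf, vx / vf, vy / vf  # last two: cos(theta_kF), sin(theta_kF)

# Illustrative (not fitted) Fermi-arc-tip momentum:
vx, vy, vf, cos_t, sin_t = fermi_velocity(0.4 * np.pi, 0.8 * np.pi)
```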
With the help of the above current density of electrons (<ref>), the impurity-induced vertex-corrected current-current correlation function (<ref>) can be formally expressed in terms of the dressed electron propagator as, Π^↔(iΩ_m)=1 N∫_0^βdτ e^iΩ_mτΠ^↔(τ) =(eV_ F)^21 N∑_k1β∑_iω_nk̂Tr[G̃_I(k,iω_n) G̃_I(k,iω_n+iΩ_m) Γ̃(k,iω_n,iΩ_m)], where ω_n=(2n+1)π/β is the fermionic Matsubara frequency, while the impurity-induced vertex correction in the ladder approximation can be generally expressed as<cit.>, Γ̃(k,iω_n,iΩ_m)=k̂τ_0+n_iN∑_k”T̃_kk”(iω_n+iΩ_m) G̃_I(k”,iω_n+iΩ_m) Γ̃(k”,iω_n,iΩ_m)G̃_I(k”,iω_n) T̃_k”k(iω_n). Starting from the homogenous part of the d-wave BCS type formalism, the effect of the impurity scattering on the microwave conductivity has been discussed in the self-consistent T-matrix approach by taking into account the impurity-induced vertex correction <cit.>, where the vertex-corrected electron current-current correlation function and the related impurity-dressed electron propagator have been evaluated in the nodal approximation. In the following discussions, the vertex-corrected electron current-current correlation function is generalized from the previous case obtained in the nodal approximation <cit.> to the present case in the Fermi-arc-tip approximation, where the impurity-induced vertex correction for the electron current-current correlation function (<ref>) can be expressed explicitly in the regions A and B as, Γ̃^(A)(k,iω_n,iΩ_m) = k̂_ F^(j)τ_0 +k̂_x^(j)Λ̃^(A)_x(iω_n,iΩ_m)+k̂_y^(j)Λ̃^(A)_y(iω_n,iΩ_m),      for j∈ odd, Γ̃^(B)(k,iω_n,iΩ_m) = k̂_ F^(j)τ_0 +k̂_x^(j)Λ̃^(B)_x(iω_n,iΩ_m)+k̂_y^(j)Λ̃^(B)_y(iω_n,iΩ_m),      for j∈ even, respectively, while the vertex kernels Λ̃^(A)_x(iω_n,iΩ_m), Λ̃^(A)_y(iω_n,iΩ_m), Λ̃^(B)_x(iω_n,iΩ_m), and Λ̃^(B)_y(iω_n,iΩ_m) satisfy the following self-consistent equations, k̂_x^(j)Λ̃_x^(A)(iω_n,iΩ_m) + k̂_y^(j)Λ̃_y^(A)(iω_n,iΩ_m)=n_iN{∑_k∈A j”∈ oddT̃_jj”(iω_n+iΩ_m) G̃^(A)_I(k,iω_n+iΩ_m) × [k̂^(j”)_ Fτ_0 +k̂_x^(j”)Λ̃^(A)_x(iω_n,iΩ_m) +k̂_y^(j”)Λ̃^(A)_y(iω_n,iΩ_m)] G̃^(A)_I(k,iω_n)T̃_j”j(iω_n) + ∑_k∈B j”∈ evenT̃_jj”(iω_n+iΩ_m) G̃^(B)_I(k,iω_n+iΩ_m)[k̂^(j”)_ Fτ_0+k̂_x^(j”)Λ̃^(B)_x(iω_n,iΩ_m) + k̂_y^(j”)Λ̃^(B)_y(iω_n,iΩ_m)] G̃^(B)_I(k,iω_n)T̃_j”j(iω_n)},            for j∈ odd, k̂_x^(j)Λ̃_x^(B)(iω_n,iΩ_m) + k̂_y^(j)Λ̃_y^(B)(iω_n,iΩ_m)=n_iN{∑_k∈A j”∈ oddT̃_j j”(iω_n+iΩ_m) G̃^(A)_I(k,iω_n+iΩ_m) × [k̂^(j”)_ Fτ_0+k̂_x^(j”)Λ̃^(A)_x(iω_n,iΩ_m) +k̂_y^(j”)Λ̃^(A)_y(iω_n,iΩ_m)] G̃^(A)_I(k,iω_n)T̃_j”j(iω_n) + ∑_k∈B j”∈ evenT̃_j j”(iω_n+iΩ_m) G̃^(B)_I(k,iω_n+iΩ_m) [k̂^(j”)_ Fτ_0+k̂_x^(j”)Λ̃^(B)_x(iω_n,iΩ_m) + k̂_y^(j”)Λ̃^(B)_y(iω_n,iΩ_m)] G̃^(B)_I(k,iω_n)T̃_j”j(iω_n)},            for j∈ even. Substituting the above results in Eq. (<ref>) into Eqs. (<ref>) and (<ref>), the vertex-corrected electron current-current correlation function (<ref>) now can be expressed as, Π^↔(iΩ_m) = (eV^ (TFA)_ F)^21 N∑_k1β∑_iω_n(k̂_x+k̂_y) Tr{G̃_I(k,iω_n) G̃_I(k,iω_n+iΩ_m)[k̂_ Fτ_0 +k̂_xΛ̃_x(iω_n,iΩ_m) +k̂_yΛ̃_y(iω_n,iΩ_m)]} = (eV^ (TFA)_ F)^2∑_j∈ odd1β∑_iω_n(k̂_x^(j)+k̂_y^(j))Tr{1 N∑_k∈AG̃^(A)_I(k,iω_n) G̃^(A)_I(k,iω_n+iΩ_m)[k̂_ F^(j)τ_0 +k̂_x^(j)Λ̃_x^(A)(iω_n,iΩ_m) + k̂_y^(j)Λ̃_y^(A)(iω_n,iΩ)_m]} + (eV^ (TFA)_ F)^2∑_j∈ even1β∑_iω_n(k̂_x^(j)+k̂_y^(j)) Tr{1 N∑_k∈BG̃^(B)_I(k,iω_n) G̃^(B)_I(k,iω_n+iΩ_m) × [k̂_ F^(j)τ_0+k̂_x^(j)Λ̃_x^(B)(iω_n,iΩ_m)+k̂_y^(j)Λ̃_y^(B)(iω_n,iΩ_m)]}, with the electron Fermi velocity V^ (TFA)_ F at around the tips of the Fermi arcs. 
However, in the absence of an external magnetic field, the rotational symmetry in the system is unbroken, indicating that Π_xy(Ω)=Π_yx(Ω)=0 and Π_xx(Ω)=Π_yy(Ω), and then the above vertex-corrected electron current-current correlation function (<ref>) is reduced as, Π^↔(iΩ_m)= ([ Π_xx(iΩ_m) 0; 0 Π_yy(iΩ_m) ]) =τ_0Π_xx(iΩ_m), where Π_xx(iΩ_m) is given by, Π_xx(iΩ_m) = (2eV^ (TFA)_ F)^21β∑_iω_n J_xx(iω_n,iω_n+iΩ_m), with the kernel function, J_xx(iω_n,iω_n+iΩ_m) = 1 N∑_α=0^3{cos^2θ^ (A)_ FĨ^(A)_0(α,iω_n,iω_n+iΩ_m) Tr[τ_α[τ_0+Λ̃_x^(A)(iω_n,iΩ_m)]] + cos^2θ^ (B)_ FĨ^(B)_0(α,iω_n,iω_n+iΩ_m) Tr[τ_α[τ_0+Λ̃_x^(B)(iω_n,iΩ_m)]]}, where the functions Ĩ^(A)_0(α,iω_n,iω_n+iΩ_m) and Ĩ^(B)_0(α,iω_n,iω_n+iΩ_m) are defined as, ∑_k∈AG̃^ (A)_ I(k,iω_n)τ_γG̃^ (A)_ I(k,iω_n+iΩ_m) = ∑_β=0^3Ĩ_γ^(A)(β,iω_n,iω_n+iΩ_m)τ_β,   ∑_k∈BG̃^ (B)_ I(k,iω_n)τ_γG̃^ (B)_ I(k,iω_n+iΩ_m) = ∑_β=0^3Ĩ_γ^(B)(β,iω_n,iω_n+iΩ_m)τ_β, respectively. After a quite complicated calculation, the function Tr[τ_αΛ̃^(A)_x(ω,Ω)] in the above kernel function (<ref>), which is a trace of the product of the vertex kernel Λ̃^(A)_x(ω,Ω) and matrix τ_α with α=0,1,2,3 in the region A of BZ, and the function Tr[τ_αΛ̃^(B)_x(ω,Ω)] in the above kernel function (<ref>), which is a trace of the product of the vertex kernel Λ̃^(B)_x(ω,Ω) and matrix τ_α in the region B of BZ, can be derived straightforwardly [see Appendix <ref>], and then the above kernel function J_xx(ω,ω+Ω) can be obtained explicitly. On the other hand, the dressed electron propagators G̃_ I(k,iω_n) and G̃_ I(k,iω_n+iΩ_m) are involved directly in the above kernel function J_xx(iω_n,iω_n+iΩ_m) in Eq. (<ref>), then the singularity of J_xx(iω_n,iω_n+iΩ_m) only lies at the real axes [ϵ∈ℝ] and these parallel to the real axes [ϵ-iΩ_m]. In this case, the contribution for the summation of the kernel function J_xx(iω_n,iω_n+iΩ_m) in Eq. (<ref>) over the fermionic Matsubara frequency iω_n comes from the two branch cuts: ϵ∈ℝ and ϵ-iΩ_m, and then the vertex-corrected electron current-current correlation function (<ref>) can be expressed as, Π_xx(iΩ_m) = i(2eV^ (TFA)_ F)^2∫_-∞^∞dϵ 2πn_ F(ϵ)[ J_xx(ϵ+iδ,ϵ+iΩ_m)-J_xx(ϵ-iδ,ϵ+iΩ_m) + J_xx(ϵ-iΩ_m,ϵ+iδ) -J_xx(ϵ-iΩ_m,ϵ-iδ)], By virtue of the analytical continuation iΩ_m→Ω+iδ, the above vertex-corrected electron current-current correlation function (<ref>) can be obtained explicitly as, Π_xx(Ω) = i(2eV^ (TFA)_ F)^2∫_-∞^∞dϵ 2π{ n_ F(ϵ)[ J_xx(ϵ+iδ,ϵ+Ω+iδ) -J_xx(ϵ-iδ,ϵ+Ω+iδ)] + n_ F(ϵ+Ω)[ J_xx(ϵ-iδ,ϵ+Ω+iδ) - J_xx(ϵ-iδ,ϵ+Ω-iδ)]}, and then the microwave conductivity σ^↔(Ω,T)=τ_0σ(Ω,T) in Eq. (<ref>) in the presence of impurities is obtained as, σ(Ω)=- ImΠ_xx(Ω)Ω =(2eV^ (TFA)_ F)^2∫_-∞^∞dϵ 2πn_ F(ϵ) -n_ F(ϵ+Ω)Ω[ ReJ_xx(ϵ-iδ,ϵ+Ω+iδ) - Re J_xx(ϵ+iδ,ϵ+Ω+iδ)]. § QUANTITATIVE CHARACTERISTICS In the self-consistent T-matrix approach, the strength of the impurity scattering potential is an important parameter. 
Unless otherwise indicated, the adjacent-tip impurity scattering potentials V_2, V_3, V_7, and V_8, and the opposite-tip impurity scattering potentials V_4, V_5, and V_6 in the following discussions are chosen as V_2=0.85V_1, V_3=0.8V_1, V_7=0.8V_1, V_8=0.9V_1, V_4=0.7V_1, V_5=0.65V_1, and V_6=0.75V_1, respectively, as in the previous discussions of the influence of the impurity scattering on the electronic structure<cit.>, while the strength of the intra-tip impurity scattering V_1 is chosen as V_1=V_ scale tan(π d/2) with V_ scale=58J and the adjustable parameter d of the impurity scattering potential strength, where the case of d∼ 0 [then tan(π d/2)∼ 0] corresponds to the case V_j∼ 0 with j=1,2,3,...8 in the Born-limit, while the case of d∼ 1 [then tan(π d/2)∼∞] corresponds to the case V_j∼∞ in the unitary-limit. We are now ready to discuss the effect of the impurity scattering on the microwave conductivity in cuprate superconductors. We have performed a calculation for the microwave conductivity σ(ω,T) in Eq. (<ref>), and the results of σ(ω,T) as a function of energy at the doping concentration δ=0.15 for temperatures T=0.005J∼ 5K (black-line), T=0.009J∼ 9K (red-line), and T=0.015J∼ 15K (blue-line) together with the impurity concentration n_i=0.0025 and the impurity scattering potential strength parameter d=0.05 are plotted in Fig. <ref> in comparison with the corresponding experimental results of the microwave conductivity observed on the cuprate superconductor<cit.> YBa_2Cu_3O_6.993 (inset). The results in Fig. <ref> therefore show clearly that the energy dependence of the low-temperature microwave conductivity in cuprate superconductors <cit.> is qualitatively reproduced, where the highly unconventional features of the low-temperature microwave conductivity spectrum can be summarized as: (i) a sharp cusp-like peak develops in the low-energy limit; (ii) the low-temperature microwave conductivity spectrum is non-Drude-like; (iii) a high-energy tail falls slowly with the increase of energy. To see this non-Drude behavior in the low-temperature microwave conductivity spectrum more clearly, the results of the low-temperature microwave conductivity spectra shown in Fig. <ref> have been numerically fitted with the following fit form, σ(ω,T)=σ_0/[1+(ω/C_0T)^ y], as has been done in the experiments<cit.>, and the fit result at the temperature T=0.015J∼ 15K is plotted in Fig. <ref> (black-line), where the fit parameters are σ_0=238.073, C_0=4.145, and y=1.333. For a better understanding, we have also fitted the low-energy part of the microwave conductivity spectrum alone with the fit form σ(ω,T)=A_0/[ω+B_0], and the numerical fit result at the same temperature T=0.015J∼ 15K is also plotted in Fig. <ref> (inset), where the fit parameters are A_0=15.676 and B_0=0.063. These fit results in Fig. <ref> thus indicate clearly that although the low-energy cusp-like peak in Fig. <ref> decays as 1/[ω+B_0], the overall shape of the low-temperature microwave conductivity spectrum in Fig. <ref> exhibits a special non-Drude-like behavior, which can be well fitted by the formula in Eq. (<ref>), in agreement with the corresponding experimental observations<cit.>. More specifically, in comparison with the other fit results at the temperatures T=0.005J∼ 5K and T=0.009J∼ 9K, we also find that the fit parameter y in the fit form (<ref>) is almost independent of temperature and remains relatively constant, taking the average value of y=1.333. 
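As an illustration of this fitting procedure (this is only a minimal sketch, not the analysis code used for the figures), the fit form above can be handled by any standard least-squares routine; in the snippet below the arrays omega and sigma_data are placeholders for the computed conductivity spectrum at a given temperature, and the initial guesses are arbitrary.

import numpy as np
from scipy.optimize import curve_fit

T = 0.015   # temperature in units of J (illustrative value quoted in the text)

def sigma_form(omega, sigma0, C0, y):
    # non-Drude fit form sigma(omega, T) = sigma0 / [1 + (omega / (C0 T))^y]
    return sigma0 / (1.0 + (omega / (C0 * T))**y)

# placeholders for the computed spectrum at this temperature
omega = np.linspace(1e-3, 0.1, 200)
sigma_data = sigma_form(omega, 238.0, 4.1, 1.33)

popt, pcov = curve_fit(sigma_form, omega, sigma_data, p0=(200.0, 4.0, 1.3))
print("sigma_0 = %.3f, C_0 = %.3f, y = %.3f" % tuple(popt))

Repeating the fit at several temperatures and comparing the extracted exponents y is the operation referred to in the text when the temperature independence of y is discussed.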
This anticipated value of the fit parameter y=1.333 is not too far from the corresponding value of y=1.45(± 0.06), which has been employed in Ref. Turner03 to fit the corresponding experimental data with the same fit formula (<ref>). The qualitative agreement between the present theoretical results and experimental data therefore also shows that the kinetic-energy-driven superconductivity, incorporating the effect of the impurity scattering within the framework of the self-consistent T-matrix theory, can give a consistent description of the low-temperature microwave conductivity spectrum found in the microwave surface impedance measurements on cuprate superconductors<cit.>. As a natural consequence of the doped Mott insulator, the microwave conductivity in cuprate superconductors evolves with doping. In Fig. <ref>, we plot the result of σ(ω,T) [black-line] as a function of doping with T=0.002J for energy ω=0.0025J together with n_i=0.0025 and d=0.05. For comparison, the corresponding result<cit.> of T_ c obtained within the framework of the kinetic-energy-driven superconductivity is also shown in Fig. <ref> (red-line). Apparently, in striking contrast to the dome-like shape of the doping dependence of T_ c, the microwave conductivity exhibits a reverse dome-like doping dependence: σ(ω,T) is a decreasing function of the doping concentration in the underdoped regime, reaches its minimum at around the optimal doping, and then increases again with the further increase of the doping concentration in the overdoped regime. This reverse dome-like shape of the doping dependence of the microwave conductivity at low energies and low temperatures is also qualitatively consistent with the microwave conductivity σ_ ul∝ 1/Δ̅ in the universal limit of ω→ 0 and T→ 0, since the SC gap parameter Δ̅ obtained within the framework of the kinetic-energy-driven superconductivity<cit.> has a similar dome-like doping dependence. For a further understanding of the intrinsic effect of the impurity scattering on the SC-state quasiparticle transport in cuprate superconductors, we now turn to discuss the evolution of the microwave conductivity with the impurity concentration in the universal-limit case. The microwave conductivity σ_ ul in the universal limit can be obtained directly from the energy and temperature dependence of the microwave conductivity (<ref>) in the zero-temperature (T→ 0) and zero-energy (Ω→ 0) limits as, σ_ ul = lim_Ω→ 0, T→ 0σ_xx(Ω) = (2eV^ (TFA)_ F)^2/(2π)lim_ϵ→ 0[ ReJ_xx(ϵ-iδ,ϵ+iδ) - ReJ_xx(ϵ+iδ,ϵ+iδ)]. In this case, we have made a series of calculations for σ_ ul at different impurity concentrations and different strengths of the impurity scattering potential, and the results of σ_ ul as a function of the impurity concentration n_i at δ=0.15 for d=0.05 (black-line) and d=0.5 (red-line) are plotted in Fig. <ref>, where the main features can be summarized as: (i) for a given set of the impurity scattering potential strengths, the microwave conductivity gradually decreases with the increase of the impurity concentration; (ii) for a given impurity concentration, the microwave conductivity decreases when the strength of the impurity scattering potential is increased. In other words, the crucial role played by the impurity scattering is the further reduction of the microwave conductivity. 
In the present theoretical framework, the effect of the strong electron correlation on the microwave conductivity is reflected in the homogenous part of the electron propagator (then the homogenous self-energy), while the effect of the impurity scattering on the microwave conductivity is reflected both in the impurity-dressed electron propagator (then the impurity-scattering self-energy) and the impurity-induced vertex correction to the electron current-current correlation function. In other words, the microwave conductivity is further renormalized by the impurity-induced vertex correction. For the understanding of this renormalization of the microwave conductivity from the impurity-induced vertex correction, the microwave conductivity in the case of the universal-limit in Eq. (<ref>) can be rewritten as, σ_ ul=β_vcσ_ ul^(0), where the characteristic factor β_vc is the impurity-induced vertex correction to the universal bare result of the microscopic conductivity σ_ ul^(0), while this σ_ ul^(0) can be reduced directly from σ_ ul in Eq. (<ref>) by ignoring the impurity-induced vertex correction as, σ_ ul^(0) = (2eV^ (TFA)_ F)^2πlim_ϵ→ 0∑_μ=A,BΘ^(μ)(θ_ F) Re [ Ĩ^(μ)_0(0,ϵ-iδ,ϵ+iδ) -Ĩ^(μ)_0(0,ϵ+iδ,ϵ+iδ) ], with the function, Θ^(μ)(θ_ F)= {[ cosθ^ (A)_ F,    for μ=A; cosθ^ (B)_ F,    for μ=B ]. In Fig. <ref>, we plot characteristic factor β_vc-1 as a function of the impurity concentration n_i at δ=0.15 for d=0.05 (black-line) and d=0.5 (red-line), where for a given set of the impurity scattering potential strength, the characteristic factor monotonically increases as the impurity concentration is increased. On the other hand, for a given impurity concentration, β_vc-1 increases with the increase of the strength of the impurity scattering potential. It thus shows clearly that the impurity-induced vertex correction is quite significant in the renormalization of the microwave conductivity <cit.>, and then all the effects of the strong electron correlation, the impurity-scattering self-energy, and the impurity-induced vertex correction lead to the highly unconventional behaviors in the microwave conductivity of cuprate superconductors<cit.>. § SUMMARY Starting from the homogenous electron propagator and the related microscopic octet scattering model, which are obtained within the framework of the kinetic-energy-driven superconductivity, we have rederived the impurity-dressed electron propagator in the self-consistent T-matrix approach, where the impurity scattering self-energy is evaluated in the Fermi-arc-tip approximation of the quasiparticle excitations and scattering processes, and then the impurity-dressed electron propagator incorporates both the strong electron correlation and impurity-scattering effects. By virtue of this impurity-dressed electron propagator, we then have investigated the effect of the impurity scattering on the low-temperature microwave conductivity of cuprate superconductors, where the electron current-current correlation function is derived by taking into account the impurity-induced vertex correction. The obtained results show clearly that the low-temperature microwave conductivity spectrum is a non-Drude-like, with a sharp cusp-like peak extending to zero-energy and a high-energy tail falling slowly with energy, in agreement with the corresponding experimental observations <cit.>. 
In particular, although the low-energy cusp-like peak decays as A_0/[ω+B_0], the overall shape of the low-temperature microwave conductivity spectrum exhibits a special non-Drude-like behavior, and can be well fitted by the formula σ(ω,T)=σ_0/[1+(ω/C_0T)^ y] with a relatively temperature-independent exponent y. Moreover, the low-temperature microwave conductivity decreases with the increase of the impurity concentration or with the increase of the strength of the impurity scattering potential. Our results therefore indicate that the highly unconventional features of the microwave conductivity in cuprate superconductors arise from both the strong electron correlation and impurity-scattering effects. The theory also predicts a reverse dome-like shape of the doping dependence of the microwave conductivity, which is in striking contrast to the dome-like shape of the doping dependence of T_ c, and therefore should be verified by further experiments. § ACKNOWLEDGEMENTS This work is supported by the National Key Research and Development Program of China under Grant No. 2021YFA1401803, and the National Natural Science Foundation of China under Grant Nos. 12247116, 11974051, and 12274036. § DERIVATION OF VERTEX KERNELS OF ELECTRON CURRENT-CURRENT CORRELATION FUNCTION Starting from the homogenous part of the d-wave BCS type formalism, the electron current-current correlation function has been discussed by taking into account the impurity-induced vertex correction <cit.>, where the T-matrix approach has been employed to derive the vertex kernels of the electron current-current correlation function in the nodal approximation. In this Appendix <ref>, we generalize these previous calculations <cit.> for the vertex kernels of the electron current-current correlation function from the nodal approximation to the present case in the Fermi-arc-tip approximation. In the microscopic octet scattering model shown in Fig. <ref>, the tips of the Fermi arcs labelled by the odd numbers are located in the region A of BZ, where |k_y|>|k_x|, while the tips of the Fermi arcs labelled by the even numbers are located in the region B of BZ, where |k_x|>|k_y|. For convenience in the following discussions, j=1 in Eq. (<ref>) is chosen in the region A of BZ, and j=2 in Eq. 
(<ref>) is chosen in the region B of BZ, then the trace of the product between the self-consistent equation (<ref>) and the unit vector k̂_x^(1) in the region A and the trace of the product between the self-consistent equation (<ref>) and the unit vector k̂_x^(2) in the region B can be obtained as, Tr[τ_0Λ̃^(A)_x(ω,Ω)]=n_iNcos^2θ^ (A)_ F∑_k∈ATr[G̃^(A)_I(k,ω) ∑_j”∈ oddk̂_x^(1)·k̂^(j”)_ FT̃_j”1(ω) T̃_1j”(ω+Ω)G̃^(A)_I(k,ω+Ω) [τ_0+Λ̃^(A)_x(ω,Ω)]] + n_iNcos^2θ^ (A)_ F∑_k∈B Tr[G̃^(B)_I(k,ω)∑_j”∈ evenk̂_x^(1)·k̂^(j”)_ FT̃_j”1(ω) T̃_1j”(ω+Ω)G̃^(B)_I(k,ω+Ω)[τ_0 +Λ̃^(B)_x(ω,Ω)]],     Tr[τ_0Λ̃^(B)_x(ω,Ω)]=n_iNcos^2θ^ (B)_ F∑_k∈ATr[G̃^(A)_I(k,ω) ∑_j”∈ oddk̂_x^(2)·k̂^(j”)_ FT̃_j”2(ω) T̃_2 j”(ω+Ω)G̃^(A)_I(k,ω+Ω) [τ_0+Λ̃^(A)_x(ω,Ω)]] +n_iNcos^2θ^ (B)_ F∑_k∈B Tr[G̃^(B)_I(k,ω)∑_j”∈ evenk̂_x^(2)·k̂^(j”)_ FT̃_j”2(ω)T̃_2j”(ω+Ω) G̃^(B)_I(k,ω+Ω)[τ_0 +Λ̃^(B)_x(ω,Ω)]], respectively, where the Fermi velocity unit vectors k̂_ F^(j) with j=1,2,3,...,8 at the tips of the Fermi-arc are defined as follows: k̂_ F^(1)=k̂_xcosθ_ F+k̂_ysinθ_ F, k̂_ F^(2)=k̂_xsinθ_ F+k̂_ycosθ_ F, k̂_ F^(3)=k̂_xcosθ_ F-k̂_ysinθ_ F, k̂_ F^(4)=k̂_xsinθ_ F-k̂_ycosθ_ F, k̂_ F^(5)=-k̂_xcosθ_ F-k̂_ysinθ_ F, k̂_ F^(6)=-k̂_xsinθ_ F-k̂_ycosθ_ F, k̂_ F^(7)=-k̂_xcosθ_ F+k̂_ysinθ_ F, k̂_ F^(8)=-k̂_xsinθ_ F+k̂_ycosθ_ F. In particular, it is easy to verify the following relations, n_iNcos^2θ_ F∑_j”∈ oddk̂_x^(1)·k̂_ F^(j”)T̃_j”1(ω)T̃_1j”(ω+Ω) = n_iN[T̃_11(ω)T̃_11(ω+Ω)+T̃_31(ω) T̃_13(ω+Ω) - T̃_51(ω)T̃_15(ω+Ω)-T̃_71(ω) T̃_17(ω+Ω)],     n_iNcos^2θ_ F∑_j”∈ evenk̂_x^(1)·k̂_ F^(j”)T̃_j”1(ω)T̃_1j”(ω+Ω) = tanθ_F n_iN[T̃_21(ω)T̃_12(ω+Ω) +T̃_41(ω)T̃_14(ω+Ω) - T̃_61(ω)T̃_16(ω+Ω)-T̃_81(ω) T̃_18(ω+Ω)],     n_iNsin^2θ_ F∑_j”∈ oddk̂_x^(2)·k̂_ F^(j”)T̃_j”2(ω)T̃_2j”(ω+Ω) = θ_F n_iN[T̃_12(ω)T̃_21(ω+Ω) +T̃_32(ω)T̃_23(ω+Ω) - T̃_52(ω)T̃_25(ω+Ω)-T̃_72(ω) T̃_27(ω+Ω)], n_iNsin^2θ_ F∑_j”∈ evenk̂_x^(2)·k̂_ F^(j”)T̃_j”2(ω)T̃_2j”(ω+Ω) = n_iN[T̃_22(ω)T̃_22(ω+Ω)+T̃_42(ω) T̃_24(ω+Ω) - T̃_62(ω)T̃_26(ω+Ω)-T̃_82(ω) T̃_28(ω+Ω)], in the regions A and B of BZ, respectively, with the T-matrix, T^(α)(ω) = ( [ T_AA^(α)(ω) T_AB^(α)(ω); T_BA^(α)(ω) T_BB^(α)(ω) ]), where the matrixes T_μν^(α)(ω) (μ,ν = A, B) with the corresponding matrix elements have been given explicitly in Ref. Zeng22. Moreover, a general formalism is satisfied by T̃_jn(ω)T̃_nj(ω+Ω) as, T̃_jn(ω)T̃_nj(ω+Ω)= ∑_α, β = 0^3τ_α T^(α)_jn(ω)τ_βT^(β)_nj(ω+Ω)= ∑_α, β,γ = 0^3iϵ̅_αβγ T^(α)_jn(ω)T^(β)_nj(ω+Ω)τ_γ, with iϵ̅_αβγ that is defined as, iϵ̅_αβγ=δ_αβδ_γ0+(1-δ_α0) δ_β0δ_γα+δ_α0(1-δ_β0)δ_γβ +iϵ_αβγ, where ϵ_αβγ is the Levi-Civita tensor, and then iϵ̅_αβγ satisfies the following identities: τ_ατ_β=∑_γiϵ̅_αβγτ_γ and iϵ̅_αβγ=iϵ̅_γαβ. With the help of the above general formalism (<ref>), the relations in Eq. 
(<ref>) can be derived as, n_iNcos^2θ_ F∑_j”∈ oddk̂_x^(1)·k̂_ F^(j”)T̃_j”1(ω)T̃_1j”(ω+Ω) = ∑_γC^(x)_A1(γ)τ_γ, C^(x)_A1(γ)=n_iN∑_α, β = 0^3 iϵ̅_αβγ[ T^(α)_11(ω)T^(β)_11(ω+Ω) + T^(α)_31(ω)T^(β)_13(ω+Ω)-T^(α)_51(ω) T^(β)_15(ω+Ω) - T^(α)_71(ω)T^(β)_17(ω+Ω)],     n_iNsin^2θ_ F∑_j”∈ oddk̂_x^(2)·k̂_ F^(j”)T̃_j”1(ω)T̃_1j”(ω+Ω) = ∑_γC^(x)_A2(γ)τ_γ, C^(x)_A2(γ)=θ_ Fn_iN∑_α, β = 0^3 iϵ̅_αβγ[ T^(α)_12(ω)T^(β)_21(ω+Ω) + T^(α)_32(ω)T^(β)_23(ω+Ω)-T^(α)_52(ω) T^(β)_25(ω+Ω) - T^(α)_72(ω)T^(β)_27(ω+Ω)],    n_iNcos^2θ_ F∑_j”∈ evenk̂_x^(1)·k̂_ F^(j”)T̃_j”1(ω)T̃_1j”(ω+Ω) = ∑_γC^(x)_B1(γ)τ_γ, C^(x)_B1(γ)=n_iNtanθ_ F∑_α, β = 0^3 iϵ̅_αβγ[ T^(α)_21(ω)T^(β)_12(ω+Ω) + T^(α)_41(ω)T^(β)_14(ω+Ω)-T^(α)_61(ω) T^(β)_16(ω+Ω) - T^(α)_81(ω)T^(β)_18(ω+Ω)],        n_iNsin^2θ_ F∑_j”∈ evenk̂_x^(2)·k̂_ F^(j”)T̃_j”2(ω)T̃_2 j”(ω+Ω) = ∑_γC^(x)_B2(γ)τ_γ, C^(x)_B2(γ)=n_iN∑_α, β = 0^3 iϵ̅_αβγ[ T^(α)_22(ω)T^(β)_22(ω+Ω) + T^(α)_42(ω)T^(β)_24(ω+Ω)-T^(α)_62(ω) T^(β)_26(ω+Ω) - T^(α)_82(ω)T^(β)_28(ω+Ω)]. Substituting the above results in Eq. (<ref>) into Eq. (<ref>) and Eq. (<ref>), Tr[τ_0Λ̃^(A)_x(ω,Ω)] and Tr[τ_0Λ̃^(B)_x(ω,Ω)] can be obtained explicitly as, Tr[Λ̃^(A)_x(ω,Ω)]=∑_β = 0^3{ Tr[τ_β[τ_0+Λ̃^(A)_x(ω,Ω)]] R_A1β^(x)(ω,ω+Ω)+Tr[τ_β[τ_0 +Λ̃^(B)_x(ω,Ω)]] R_B1β^(x)(ω,ω+Ω)},        Tr[Λ̃^(B)_x(ω,Ω)]=∑_β = 0^3{ Tr[τ_β[τ_0+Λ̃^(A)_x(ω,Ω)]] R_A2β^(x)(ω,ω+Ω)+Tr[τ_β[τ_0 +Λ̃^(B)_x(ω,Ω)]] R_B2β^(x)(ω,ω+Ω)}, respectively, with the functions, R_A1β^(x)(ω,ω+Ω)=∑_γ =0^3 C^(x)_A1(γ)Ĩ_γ^(A)(β,ω,ω+Ω),        R_A2β^(x)(ω,ω+Ω)=∑_γ =0^3 C^(x)_A2(γ)Ĩ_γ^(A)(β,ω,ω+Ω),     R_B1β^(x)(ω,ω+Ω)=∑_γ =0^3 C^(x)_B1(γ)Ĩ_γ^(B)(β,ω,ω+Ω),        R_B2β^(x)(ω,ω+Ω)=∑_γ =0^3 C^(x)_B2(γ)Ĩ_γ^(B)(β,ω,ω+Ω). Now we turn to evaluate the similar traces of the product between the vertex kernel Λ̃^(A)_x(ω,Ω) and matrix τ_α with α=1,2,3 in the region A and the product of the vertex kernel Λ̃^(B)_x(ω,Ω) and matrix τ_α in the region B in the kernel function (<ref>), where the derivation processes are almost the same as the derivation processes for the above Tr[τ_0Λ̃^(A)_x(ω,Ω)] in Eq. (<ref>) and Tr[τ_0Λ̃^(B)_x(ω,Ω)] in Eq. 
(<ref>), and the obtained results can be expressed explicitly as, Tr[τ_αΛ̃^(A)_x(ω,Ω)] = ∑_β = 0^3{ Tr[τ_β[τ_0+Λ̃^(A)_x(ω,Ω)]] R_A1β^(x)(α,ω,ω+Ω) + Tr[τ_β[τ_0+Λ̃^(B)_x(ω,Ω)]] R_B1β^(x)(α,ω,ω+Ω)},    Tr[τ_αΛ̃^(B)_x(ω,Ω)] = ∑_β = 0^3{ Tr[τ_β[τ_0+Λ̃^(A)_x(ω,Ω)]] R_A2β^(x)(α,ω,ω+Ω) + Tr[τ_β[τ_0+Λ̃^(B)_(x)(ω,Ω)]] R_B2β^(x)(α,ω,ω+Ω)}, with the functions, R_A1β^(x)(α,ω,ω+Ω) = ∑_λ = 0^3 C^(x)_A1α(λ)Ĩ^(A)_λ(β,ω,ω+Ω), C^(x)_A1α(λ) = n_iN∑_μ, ν = 0^3(∑_σiϵ̅_μνσiϵ̅_σαλ) η_α(ν)[T^(μ)_11(ω)T^(ν)_11(ω+Ω)+T^(μ)_31(ω) T^(ν)_13(ω+Ω) - T^(μ)_51(ω)T^(ν)_15(ω+Ω)-T^(μ)_71(ω) T^(ν)_17(ω+Ω)], R_B1β^(x)(α,ω,ω+Ω) = ∑_λ = 0^3 C^(x)_B1α(λ)Ĩ^(B)_λ(β,ω,ω+Ω), C^(x)_B1α(λ) = n_iNtanθ_ F∑_μ, ν = 0^3(∑_σiϵ̅_μνσ iϵ̅_σαλ)η_α(ν)[T^(μ)_21(ω) T^(ν)_12(ω+Ω)+T^(μ)_41(ω)T^(ν)_14(ω+Ω) - T^(μ)_61(ω)T^(ν)_16(ω+Ω)-T^(μ)_81(ω) T^(ν)_18(ω+Ω)], R_A2β^(x)(α,ω,ω+Ω) = ∑_λ = 0^3 C^(x)_A2α(λ)Ĩ^(A)_λ(β,ω,ω+Ω), C^(x)_A2α(λ) = n_iNθ_ F∑_μ, ν = 0^3(∑_σiϵ̅_μνσ iϵ̅_σαλ)η_α(ν)[T^(μ)_12(ω) T^(ν)_21(ω+Ω)+T^(μ)_32(ω)T^(ν)_23(ω+Ω) - T^(μ)_52(ω)T^(ν)_25(ω+Ω)-T^(μ)_72(ω) T^(ν)_27(ω+Ω)], R_B2β^(x)(α,ω,ω+Ω) = ∑_λ = 0^3 C^(x)_B2α(λ)Ĩ^(B)_λ(β,ω,ω+Ω), C^(x)_B2α(λ) = n_iN∑_μ, ν = 0^3(∑_σiϵ̅_μνσ iϵ̅_σαλ)η_α(ν)[T^(μ)_22(ω) T^(ν)_22(ω+Ω)+T^(μ)_42(ω)T^(ν)_24(ω+Ω) - T^(μ)_62(ω)T^(ν)_26(ω+Ω)-T^(μ)_82(ω) T^(ν)_28(ω+Ω)], where ∑_σiϵ̅_μνσiϵ̅_σαλ satisfies the following identity, ∑_σiϵ̅_μνσ iϵ̅_σαλ = -4δ_μ0δ_ν0δ_α0δ_λ0 +δ_αμδ_λ0δ_ν0+δ_ανδ_μ0δ_λ0 +δ_λμδ_ν0δ_α0+δ_μ0δ_α0δ_λν +δ_αλδ_μν+δ_ανδ_λμ -δ_αμδ_λν + iδ_α0ϵ_λμν+iδ_λ0ϵ_αμν + iδ_μ0ϵ_ναλ+iδ_ν0ϵ_μαλ, and the tensor η_α(ν) is defined as, η_α(ν) = {[ 1, ν = 0, α; -1, others . ]. Substituting the above results in Eqs. (<ref>) and (<ref>) into Eq. (<ref>) of the main text, we therefore obtain the kernel function J_xx(ω,ω+Ω) in Eq. (<ref>) of the main text. 00 Schrieffer64 J. R. Schrieffer, Theory of Superconductivity, (Benjamin, New York, 1964). Anderson58 P. W. Anderson, Phys. Rev. 109, 1492 (1958). Basov01 D. N. Basov, and T. Timusk, in Handbook on the Physics and Chemistry of Rare Earths, Vol. 31 (Elsevier Science, Amsterdam, 2001), p. 437. Hussey02 See, e.g., the review, N. E. Hussey, Adv. Phys. 51, 1685 (2002). Balatsky06 See, e.g., the review, A. V. Balatsky, I. Vekhter, and J.-X. Zhu, Rev. Mod. Phys. 78, 373 (2006). Alloul09 See, e.g., the review, H. Alloul, J. Bobroff, M. Gabay, and P. J. Hirschfeld, Rev. Mod. Phys. 81, 45 (2009). Tsuei00 See e.g., the review, C. C. Tsuei and J. R. Kirtley, Rev. Mod. Phys. 72, 969 (2000). Ishida91 K. Ishida, Y. Kitaoka, T. Yoshitomi, N. Ogata, T. Kamino, and K. Asayama, Physica C 179, 29 (1991). Legris93 A. Legris, F. Rullier-Albenque, E. Radeva, and P. Lejay, J. Phys. I 3, 1605 (1993). Giapintzakis94 J. Giapintzakis, D. M. Ginsberg, M. A. Kirk, and S. Ockers, Phys. Rev. B 50, 15967 (1994). Fukuzumi96 Y. Fukuzumi, K. Mizuhashi, K. Takenaka, and S. Uchida, Phys. Rev. Lett. 76, 684 (1996). Tolpygo96 S. K. Tolpygo, J.-Y. Lin, M. Gurvitch, S. Y. Hou, and J. M. Phillips, Phys. Rev. B 53, 12454 (1996). Attfield98 J. P. Attfield, A. L. Kharlanov, and J. A. McAllister, Nature 394, 157 (1998). Bobroff99 J. Bobroff, W. A. MacFarlane, H. Alloul, P. Mendels, N. Blanchard, G. Collin, and J.-F. Marucco, Phys. Rev. Lett. 83, 4381 (1999). Eisaki04 H. Eisaki, N. Kaneko, D. L. Feng, A. Damascelli, P. K. Mang, K. M. Shen, Z.-X. Shen, and M. Greven, Phys. Rev. B 69, 064512 (2004). Bonn93 D. A. Bonn, R. Liang, T. M. Riseman, D. J. Baar, D. C. Morgan, K. Zhang, P. Dosanjh, T. L. Duty, A. MacFarlane, G. D. Morris, J. H. Brewer, W. N. Hardy, C. Kallin, and A. J. Berlinsky, Phys. Rev. B 47, 11314 (1993). 
Lee96 S.-F. Lee, D. C. Morgan, R. J. Ormeno, D. M. Broun, R. A. Doyle, J. R. Waldram, and K. Kadowaki, Phys. Rev. Lett. 77, 735 (1996). Hosseini99 A. Hosseini, R. Harris, S. Kamal, P. Dosanjh, J. Preston, R. Liang, W. N. Hardy, and D. A. Bonn, Phys. Rev. B 60, 1349 (1999). Turner03 P. J. Turner, R. Harris, S. Kamal, M. E. Hayden, D. M. Broun, D. C. Morgan, A. Hosseini, P. Dosanjh, G. K. Mullins, J. S. Preston, R. Liang, D. A. Bonn, and W. N. Hardy, Phys. Rev. Lett. 90, 237005 (2003). Harris06 R. Harris, P. J. Turner, S. Kamal, A. R. Hosseini, P. Dosanjh, G. K. Mullins, J. S. Bobowski, C. P. Bidinosti, D. M. Broun, R. Liang, W. N. Hardy, and D. A. Bonn, Phys. Rev. B 74, 104508 (2006). Bonn94 D. A. Bonn, S. Kamal, K. Zhang, R. Liang, D. J. Baar, E. Klein, and W. N. Hardy, Phys. Rev. B 50, 4051 (1994). Bucci94 C. Bucci, P. Carretta, R. D. Renzi, G. Guidia, F. Licci, L. G. Raflob, H. Keller, S. Lee, I. M. Savićc, Physica C 235-240, 1849 (1994). Bernhard96 C. Bernhard, J. L. Tallon, C. Bucci, R. DeRenzi, G. Guidi, G. V. M. Williams, and C. Niedermayer, Phys. Rev. Lett. 77, 2304 (1996). Bobroff05 J. Bobroff, Ann. Phys. (Paris) 30, 1 (2005). Hirschfeld94 P. J. Hirschfeld, W. O. Putikka, and D. J. Scalapino, Phys. Rev. B 50, 10250 (1994). Durst00 A. C. Durst and P. A. Lee, Phys. Rev. B 62, 1270 (2000). Berlinsky00 A. J. Berlinsky, D. A. Bonn, R. Harris, and C. Kallin, Phys. Rev. B 61, 9088 (2000). Hettler00 M. H. Hettler and P. J. Hirschfeld, Phys. Rev. B 61, 11313 (2000). Durst02 A. C. Durst and P. A. Lee, Phys. Rev. B 65, 094501 (2002). Kim04 W. Kim, F. Marsiglio, and J. P. Carbotte, Phys. Rev. B 70, 060505(R) (2004). Nunner05 T. S. Nunner and P. J. Hirschfeld, Phys. Rev. B 72, 014514 (2005). Wang08 Z. Wang, H. Guo, and S. Feng, Physica C 468, 1078 (2008); Z. Wang and S. Feng, Phys. Rev. B 80, 174507 (2009). Chatterjee06 U. Chatterjee, M. Shi, A. Kaminski, A. Kanigel, H. M. Fretwell, K. Terashima, T. Takahashi, S. Rosenkranz, Z. Z. Li, H. Raffy, A. Santander-Syro, K. Kadowaki, M. R. Norman, M. Randeria, and J. C. Campuzano, Phys. Rev. Lett. 96, 107006 (2006). He14 Y. He, Y. Yin, M. Zech, A. Soumyanarayanan, M. M. Yee, T. Williams, M. C. Boyer, K. Chatterjee, W. D. Wise, I. Zeljkovic, T. Kondo, T. Takeuchi, H. Ikuta, P. Mistark, R. S. Markiewicz, A. Bansil, S. Sachdev, E. W. Hudson, and J. E. Hoffman, Science 344, 608 (2014). Restrepo23 F. Restrepo, J. Zhao, J. C. Campuzano, and U. Chatterjee, Phys. Rev. B 107, 174519 (2023). Norman98 M. R. Norman, H. Ding, M. Randeria, J. C. Campuzano, T. Yokoya, T. Takeuchi, T. Takahashi, T. Mochiku, K. Kadowaki, P. Guptasarma, and D. G. Hinks, Nature 392, 157 (1998). Shi08 M. Shi, J. Chang, S. Pailhés, M. R. Norman, J. C. Campuzano, M. Mánsson, T. Claesson, O. Tjernberg, A. Bendounan, L. Patthey, N. Momono, M. Oda, M. Ido, C. Mudry, and J. Mesot, Phys. Rev. Lett. 101, 047002 (2008). Sassa11 Y. Sassa, M. Radović, M. Mánsson, E. Razzoli, X. Y. Cui, S. Pailhés, S. Guerrero, M. Shi, P. R. Willmott, F. Miletto Granozio, J. Mesot, M. R. Norman, and L. Patthey, Phys. Rev. B 83, 140511(R) (2011). Fujita14 K. Fujita, C. K. Kim, I. Lee, J. Lee, M. H. Hamidian, I. A. Firmo, S. Mukhopadhyay, H. Eisaki, S. Uchida, M. J. Lawler, E.-A. Kim, and J. C. Davis, Science 344, 612 (2014). Comin14 R. Comin, A. Frano, M. M. Yee, Y. Yoshida, H. Eisaki, E. Schierle, E. Weschke, R. Sutarto, F. He, A. Soumyanarayanan, Yang He, M. L. Tacon, I. S. Elfimov, Jennifer E. Hoffman, G. A. Sawatzky, B. Keimer, and A. Damascelli, Science 343, 390 (2014). Kaminski15 A. Kaminski, T. Kondo, T. 
Takeuchi, and G. Gu, Phil. Mag. 95, 453 (2015). Loret17 B. Loret, S. Sakai, S. Benhabib, Y. Gallais, M. Cazayous, M. A. Méasson, R. D. Zhong, J. Schneeloch, G. D. Gu, A. Forget, D. Colson, I. Paul, M. Civelli, and A. Sacuto, Phys. Rev. B 96, 094525 (2017). Chen19 S. D. Chen, M. Hashimoto, Y. He, D. Song, K. J. Xu, J. F. He, T. P. Devereaux, H. Eisaki, D. H. Lu, J. Zaanen, and Z. -X. Shen, Science 366, 1099 (2019). Yin21 See, e.g., the review, J.-X. Yin, S. H. Pan, and M. Z. Hasan, Nat. Rev. Phys. 3, 249 (2021). Pan01 S. H. Pan, J. P. ÓNeal, R. L. Badzey, C. Chamon, H. Ding, J. R. Engelbrecht, Z. Wang, H. Eisaki, S. Uchida, A. K. Gupta, K.-W. Ng, E. W. Hudson, K. M. Lang, and J. C. Davis, Nature 413, 282 (2001). Kohsaka07 Y. Kohsaka, C. Taylor, K. Fujita, A. Schmidt, C. Lupien, T. Hanaguri, M. Azuma, M. Takano, H. Eisaki, H. Takagi, S. Uchida, and J. C. Davis, Science 315, 1380 (2007). Kohsaka08 Y. Kohsaka, C. Taylor, P. Wahl, A. Schmidt, J. Lee, K. Fujita, J. W. Alldredge, K. McElroy, J. Lee, H. Eisaki, S. Uchida, D.-H. Lee, and J. C. Davis, Nature 454, 1072 (2008). Hamidian16 M. H. Hamidian, S. D. Edkins, S. Hyun Joo, A. Kostin, H. Eisaki, S. Uchida, M. J. Lawler, E.-A. Kim, A. P. Mackenzie, K. Fujita, J. Lee, and J. C. S. Davis, Nature 532, 343 (2016). Zeng22 M. Zeng, X. Li, Y. Wang, and S. Feng, Phys. Rev. B 106, 054512 (2022). Feng0306 S. Feng, Phys. Rev. B 68, 184501 (2003); S. Feng, T. Ma, and H. Guo, Physica C 436, 14 (2006). Feng12 S. Feng, H. Zhao, and Z. Huang, Phys. Rev. B. 85, 054509 (2012); Phys. Rev. B 85, 099902(E) (2012). Feng15 See, e.g., the review, S. Feng, Y. Lan, H. Zhao, L. Kuang, L. Qin, and X. Ma, Int. J. Mod. Phys. B 29, 1530009 (2015). Feng15a S. Feng, L. Kuang, and H. Zhao, Physica C 517, 5 (2015). Anderson87 P. W. Anderson, Science 235, 1196 (1987). Zhang88 F. C. Zhang and T. M. Rice, Phys. Rev. B 37, 3759 (1988). Yu92 See, e.g., the review, L. Yu, in Recent Progress in Many-Body Theories, edited by T. L. Ainsworth, C. E. Campbell, B. E. Clements, and E. Krotscheck (Plenum, New York, 1992), Vol. 3, p. 157. Feng93 S. Feng, J. B. Wu, Z. B. Su, and L. Yu, Phys. Rev. B 47, 15192 (1993). Zhang93 L. Zhang, J. K. Jain, and V. J. Emery, Phys. Rev. B 47, 3368 (1993). Guillou95 J. C. LeGuillou and E. Ragoucy, Phys. Rev. B 52, 2403 (1995). Feng0494 S. Feng, J. Qin, and T. Ma, J. Phys.: Condens. Matter 16, 343 (2004); S. Feng, Z. B. Su, and L. Yu, Phys. Rev. B 49, 2368 (1994). Liu21 Y. Liu, Y. Lan, and S. Feng, Phys. Rev. B 103, 024525 (2021). Gao18 D. Gao, Y. Liu, H. Zhao, Y. Mou, and S. Feng, Physica C 551, 72 (2018). Gao19 D. Gao, Y. Mou, Y. Liu, S. Tan, and S. Feng, Phil. Mag. 99, 752 (2019). Mahan81 See, e.g., G. D. Mahan, Many-Particle Physics, (Plenum Press, New York, 1981). Hirschfeld89 P. J. Hirschfeld, P. Wölfle, J. A. Sauls, D. Einzel, and W. O. Putikka, Phys. Rev. B 40, 6695 (1989). Hirschfeld93 P. J. Hirschfeld and N. Goldenfeld, Phys. Rev. B 48, 4219 (1993) Dessau91 D. S. Dessau, B. O. Wells, Z.-X. Shen, W. E. Spicer, A. J. Arko, R. S. List, D. B. Mitzi, and A. Kapitulnik, Phys. Rev. Lett. 66, 2160 (1991). Hwu91 Y. Hwu, L. Lozzi, M. Marsi, S. LaRosa, M. Winokur, P. Davis, M. Onellion, H. Berger, F. Gozzo, F. Lévy, and G. Margaritondo, Phys. Rev. Lett. 67, 2573 (1991). Randeria95 M. Randeria, H. Ding, J-C. Campuzano, A. Bellman, G. Jennings, T. Yokoya, T. Takahashi, H. Katayama-Yoshida, T. Mochiku, and K. Kadowaki, Phys. Rev. Lett. 74, 4951 (1995). Fedorov99 A. V. Fedorov, T. Valla, P. D. Johnson, Q. Li, G. D. Gu, and N. Koshizuka, Phys. Rev. Lett. 
82, 2179 (1999). Lu01 D. H. Lu, D. L. Feng, N. P. Armitage, K. M. Shen, A. Damascelli, C. Kim, F. Ronning, Z.-X. Shen, D. A. Bonn, R. Liang, W. N. Hardy, A. I. Rykov, and S. Tajima, Phys. Rev. Lett. 86, 4370 (2001). Sakai13 S. Sakai, S. Blanc, M. Civelli, Y. Gallais, M. Cazayous, M.-A. Méasson, J. S. Wen, Z. J. Xu, G. D. Gu, G. Sangiovanni, Y. Motome, K. Held, A. Sacuto, A. Georges, and M. Imada, Phys. Rev. Lett. 111, 107001 (2013). DMou17 D. Mou, A. Kaminski, and G. Gu, Phys. Rev. B 95, 174501 (2017).
http://arxiv.org/abs/2307.04792v1
20230710180004
Generalized Hall current on a finite lattice
[ "Srimoyee Sen", "Semeon Valgushev" ]
hep-th
[ "hep-th", "cond-mat.str-el", "hep-lat", "nucl-th" ]
Generalized Hall current on a finite lattice Srimoyee Sen, Semeon Valgushev Department of Physics and Astronomy, Iowa State University, Ames, IA, 50011 ======================================================================================================================= Gapped fermion theories with gapless boundary fermions can exist in any number of dimensions. When the boundary has even space-time dimensions and hosts chiral fermions, a quantum Hall current flows from the bulk to the boundary in a background electric field. This current compensates for the boundary chiral anomaly. Such a current inflow picture is absent when the boundary theory is odd dimensional. However, in recent work, the idea of the quantum Hall current has been generalized to describe odd dimensional boundary theories in continuous, infinite-volume Euclidean space-time. In this paper we extend this idea to a lattice regulated finite volume theory of 1+1 dimensional Wilson-Dirac fermions. This fermion theory with a domain wall in the fermion mass can host gapless modes on the wall. The number of gapless fermions is equal to the integral of the divergence of the lattice generalized Hall current. § INTRODUCTION Odd dimensional Dirac fermion field theories are interesting when there is a domain wall in the fermion mass. In that case, the domain wall defect is even dimensional and hosts massless chiral fermions <cit.>. When this theory is coupled to electromagnetic fields, the boundary suffers from a chiral anomaly, leading to non-conservation of the vector current in the presence of background electromagnetic fields. However, as Callan-Harvey showed <cit.>, a vector current flows from the bulk to the boundary, restoring current conservation in the higher dimensional theory. In order to compute this current one integrates out the fermion away from the domain wall, which leaves behind a Chern-Simons theory for the electromagnetic field. This explains the inflowing current from the bulk to the boundary. As is well known, the odd dimensional gapped bulk theory of a free Dirac fermion describes the physics of the quantum Hall effect. The inflowing current is analogous to the quantum Hall current, whereas the massless chiral fermions on the domain wall are analogs of the quantum Hall edge states. More generally, gapped fermion field theories can host massless fermions on domain walls irrespective of whether the wall is even or odd dimensional. They describe the physics of topological insulators and superconductors with corresponding edge states in various dimensions <cit.>. When the boundary is odd dimensional, in contrast to the quantum Hall effect, the boundary theory does not have a chiral anomaly. Therefore, we don't expect an inflowing current from the bulk to the boundary as in the case of the quantum Hall effect. The boundary theory can, however, have discrete anomalies which connect the existence of the edge states to the gapped bulk theory <cit.>. In a recent paper <cit.> the authors showed that the idea of the Hall current can be generalized to odd dimensional boundaries. The idea was inspired by the index calculation of a fermion-vortex system in <cit.>. This generalization of the Hall current relies on the following step: the Minkowski space domain wall fermion theory with a massless boundary fermion is first connected to another Euclidean fermion theory where the Euclidean fermion operator has a nonzero index. This index equals the number of massless fermions in the original Minkowski theory. 
From there, it was shown <cit.> that one can construct a generalized Hall current: the space-time integral of its divergence equaling the index of the fermion operator. The construction outlined in <cit.> holds for non-interacting fermions in infinite volume and continuum space-time. The goal of this paper is to extend that analysis to a discrete space-time lattice of infinite and finite volume. The analysis in <cit.> included several different fermion theories in various space-time dimensions. In this paper, we choose to work with the simplest example: 1+1 dimensional Dirac fermion with a domain wall in its mass <cit.>. The domain wall hosts a massless fermion which may suffer from discrete anomalies (<cit.>), but does not suffer from chiral anomaly. As a result, one doesn't expect a Hall current flowing from bulk to the boundary. However, the generalized Hall current exists for this system in infinite volume and continuum space-time. We explore how the generalized Hall current for this system can be constructed on an infinite and finite lattice. A crucial observation which makes the continuum construction of the generalized Hall current possible is the following. Whenever the index of a Euclidean elliptic fermion operator is nonzero, there is a current in the system: the space-time integral of the divergence of this current equals the index. We call this current the generalized Hall current. Note that the index of a Euclidean elliptic operator is the difference between the number of zero modes of that operator and that of its Hermitian conjugate <cit.>. Therefore, no generalized Hall current exists if the index of the operator is zero. This observation is not meant to be self-evident and its proof is outlined in <cit.>. We will discuss the proof briefly in the next section of this paper. This observation can then be used to construct the generalized Hall current for massless fermion edge states of any Minkowski fermion theory as follows. The first step is to use the Minkowski fermion operator to construct its Euclidean counterpart. Since the Minkowski fermion operator has massless states living on the defect, the corresponding Euclidean operator has unnormalizable zero eigenvalue eigenstates living on the same defect. These states are not zero modes since they are not normalizable. As a result the index of the Euclidean operator at this stage is zero. Ref. <cit.> then introduces a slight deformation to this Euclidean operator through the introduction of a background diagnostic field in such a way that this unnormalizable zero eigenvalue state becomes localized and normalizable. i.e. the deformed Euclidean operator has a zero mode iff the original Minkowski fermion operator had a massless fermion in its spectrum. The introduction of this diagnostic field also creates an imbalance between the number of zero modes of the fermion operator and its Hermitian conjugate resulting in a nonzero index for the deformed theory. Additionally, the construction carefully ensures that the index survives in the limit of the diagnostic field being taken to zero. We expect a generalized Hall current to flow as long as the index is nonzero. In the continuum analysis, one can obtain this Hall current by simply perturbing in the diagnostic field and integrating out the fermions in a one loop diagram. This is analogous to the Goldstone-Wilczek calculation <cit.>. 
As we embark on generalizing the above construction on the lattice, both infinite and finite, we explore which elements of the continuum construction can be carried over to the lattice without significant modification and which elements need to be reformulated. Since we will work with the 1+1 dimensional fermion theory, from this point onward we exclusively focus on it. The organization of the paper is as follows. We will begin with a brief overview of the generalized Hall current construction in the continuum specializing to the case in 1+1 dimensions. We will then discuss how this construction is generalized to an infinite lattice analytically. The following section will describe the numerical analysis of this construction and demonstrate that a generalized Hall current exists on a finite lattice. § INFINITE VOLUME CONTINUUM ANALYSIS The procedure for constructing the generalized Hall current in the continuum in infinite volume is described in detail in <cit.>. We briefly review this construction here. Consider a Minkowski fermion operator D_M with a mass defect which causes it to have a massless fermion in the spectrum that is stuck to the defect. To construct the generalized Hall current * We analytically continue this fermion operator to Euclidean space-time, denoting it by 𝒟. * Introduce background diagnostic field to deform the fermion operator 𝒟 to have an index of one (equal to the number of massless fermions in the original Minkowski theory). * Obtain the generalized Hall current following a Goldstone-Wilczek <cit.> inspired calculation using one loop Feynman diagram. * Take the background diagnostic field to zero at the end of calculation and confirm that the generalized Hall current and the index survives taking this limit. Before we apply this construction to 1+1 dimensional example, let's first attempt to understand how the index of a fermion operator gives rise to an inflowing current in infinite volume continuous space-time of Euclidean signature. Note that the index of the fermion operator 𝒟 is given by I=Dim(ker D)-Dim(ker D^†). In the example we will consider, the number of zero modes of either the operator 𝒟 or the operator 𝒟^† is zero. As a result the magnitude of the index ends up being equal to the number of zero modes of one operator or the other. Furthermore, the number of zero modes of the operator 𝒟 coincides with the number of zero modes for the operator 𝒟^†𝒟 and the number of zero modes of 𝒟^† coincides with that of 𝒟𝒟^†. Therefore the formula for the index can be re-expressed by defining ℐ(M)=M^2/M^2+𝒟^†𝒟-M^2/𝒟𝒟^†+M^2 and noting that I=lim_M→ 0ℐ(M). Interestingly, the quantity ℐ(M) can now be recast as the matrix element ℐ(M)=-∫ d^d+1x⟨Ψ̅Γ_χΨ⟩ in a fermion theory with the following action 𝒮=∫ d^d+1x Ψ̅(K+M)Ψ where K=[ 0 -𝒟^†; 𝒟 0 ] and Γ_χ=[ 1 0; 0 -1 ]. Note that, d+1 is the number of space-time dimensions in which the original fermion operator 𝒟 is defined. The spinor Ψ has twice the dimension of the spinors of the original theory. The gamma matrices for this theory can be easily read off using Γ_μ=i∂K̃(p)/∂ p_μ where K̃ is the Fourier transform of K. The theory of Eq. <ref> has its own fermion number symmetry which works as Ψ→ e^iθΨ. In the M→ 0 limit, it also has an axial symmetry Ψ→ e^iΓ_χαΨ where this new axial symmetry has nothing to do with the symmetries of the original theory. 
We can now construct an axial current 𝒥_μ^χ=Ψ̅Γ_μΓ_χΨ and write down the Ward identity for it ∂_μ𝒥_μ^χ=2MΨ̅Γ_χΨ-𝒜 where 𝒜 is the “anomaly contribution" 𝒜=-2 lim_Λ→∞Tr(Γ_χe^K^2/Λ^2)=-2ℐ(∞). This anomaly contribution can be computed using the methods outlined in Fujikawa <cit.>. It is found to vanish for the theory under consideration Eq. <ref> and was elaborated in <cit.>. At this point we can take the limit M→ 0 in Eq. <ref> to write I=ℐ(0)=-lim_M→ 0 M∫ d^d+1x ⟨Ψ̅Γ_χΨ⟩=-lim_M→ 01/2∫ d^d+1x ⟨∂_μ𝒥_μ^χ⟩. We have now expressed the index of the fermion operator in terms of the “axial" current of the theory in Eq. <ref>. We call this current the generalized Hall current. This generalized Hall current Ψ̅Γ_μΓ_χΨ can now be computed using one loop Feynman diagrams by perturbing in the mass defect as well as the other background fields. We will review how this is done for 1+1 dimensional Dirac fermion with a domain wall in its mass. §.§ 1+1 dimensional Dirac fermion in continuum Let's consider the Lagrangian of a Dirac fermion in Minkowski space-time with Dirac mass denoted as ϕ_1. It has the Lagrangian ℒ=ψ̅(iγ^μ∂_μ-ϕ_1)ψ where μ takes values 0 and 1, x_0 is the temporal and x_1 is the spatial coordinate. We can take the γ matrices as γ^0=σ_2, γ^1=-iσ_1, γ^χ=σ_3 where γ_χ is the chirality operator. If we introduce a domain wall in ϕ_1 along the spatial coordinate x_1, ϕ_1=m_0ϵ(x_1) with m_0>0 and ϵ(x) = +1, x ≥ 0 -1, x < 0 , then we will have a massless fermion mode living on the domain wall at x_1=0 as seen from the Dirac equation in the domain wall background iγ^0∂_0ψ+iγ^1∂_1ψ-ϕ_1ψ=0. To look for massless state, we can set ∂_0ψ=0 and find that the Dirac equation is solved by ψ=1/√(2)[ 1; -1 ]e^-m_0 |x_1|. In order to construct the generalized Hall current we first have to analytically continue to Euclidean space-time where the Lagrangian is now ℒ_E=ψ̅(γ_μ∂_μ+ϕ_1)ψ with Euclidean gamma matrices defined as γ_0=σ_2, γ_1=-σ_1, γ_χ=σ_3. We also denote two dimensional identity matrix as σ_0. The corresponding fermion operator γ_μ∂_μ+ϕ_1 has an unnormalizable zero eigenvalue eigenstate. However this state doesn't count as zero mode which should be normalizable. In order to engineer a zero mode we turn on a background pseudo-scalar field with a domain wall profile in the Euclidean time direction. We also refer to this field as a diagnostic field. The corresponding Lagrangian is of the form ℒ_E=ψ̅(γ_μ∂_μ+ϕ_1+iϕ_2γ_χ)ψ where ϕ_2=μ_0ϵ(x_0) with μ_0>0. Let's denote this fermion operator as 𝒟 with 𝒟=(γ_μ∂_μ+ϕ_1+iϕ_2γ_χ). We find that the operator 𝒟 has one zero mode of the form ψ=1/√(2)[ 1; -1 ]e^-m_0|x_1|-μ_0|x_0|. We can also look for zero modes for the operator 𝒟^† and find that there are none for this specific choice of domain wall profile (m_0>0, μ_0>0). More generally, for other choices of the domain wall profile, e.g. with m_0>0, μ_0<0 or m_0<0, μ_0>0 we find a zero mode for the operator 𝒟^† and the operator 𝒟 has no zero modes. Similarly, the choice of m_0<0, μ_0<0 yields a zeromode for 𝒟 and none for 𝒟^†. In other words, the magnitude of the index of the fermion operator remains 1 as long as there is a domain wall in both ϕ_1 and ϕ_2. However, whether the index is positive or negative depends on the profile of choice. There is a simple way to relate the domain wall profile with the index of the fermion operator. To see this, we can first express ϕ_1+iϕ_2 as ϕ_1+iϕ_2=v e^iθ. 
It is easy to see that for a crossed domain wall profile in ϕ_1 and ϕ_2, if one considers a polar coordinate system centered at x_0=x_1=0, then the phase variable θ completes a winding of 2π or -2π as one travels along a contour encircling the center over a polar angle of 2π. The crossed domain wall defect can therefore be thought of as a vortex in ϕ_1+iϕ_2. We have now constructed the intended fermion operator whose index is equal to the winding in the crossed domain wall configuration. Note that the index and the winding survives in the limit μ_0→ 0. §.§ Generalized Hall current (GHC) in the continuum We now review the one loop Feynman diagram calculation to compute the generalized Hall current and then verify that the space-time integral of its divergence equals the index. Following the prescription outlined in Eq. <ref>,<ref>,<ref> we construct the K matrix which we can re-express in momentum space as K=Γ_μk_μ+i ϕ_2Γ_2+iϕ_1Γ_3 where we have defined Γ_i=σ_1⊗γ_i, Γ_2=σ_1⊗γ_χ, Γ_3=-σ_2⊗σ_0, Γ_χ=σ_3⊗σ_0. To compute the “axial" current, we rewrite the mass terms as ϕ_1+iϕ_2=(v+ρ(x))e^iθ(x) and expand the K matrix in θ with K=K_0+δ K where K_0=γ_μk_μ+ρ and δ K=i vθΓ_2+iρΓ_3. Up to linear order in θ we get 𝒥_μ^χ = +v∂θ/∂ x_μ∫d^2q/(2π)^2Tr(Γ_μΓ_χdK_0^-1/dq_νΓ_2 K_0^-1) = ϵ_μν∂_νθ∫d^2q/(2π)^24v^2/(q^2+v^2)^2 = 1/πϵ_μν∂_νθ We can now compute the space-time integral of the divergence of this current and relate it to the index with ℐ(0)=-1/2∫ d^2x ∂_μ𝒥_μ^χ=-ν_θ where ν_θ is the winding of the crossed domain wall or vortex configuration. For the specific domain wall profile we have chosen this winding is -1. Therefore we get an index of 1 which is consistent with the index we obtained for the fermion operator in the previous subsection. This demonstrates that whenever the Minkowski theory specified by Eq. <ref> has a domain wall in fermion mass hosting massless edge state, one can construct a corresponding Euclidean fermion operator with the following properties: * The Euclidean fermion operator has an index of ± 1 in the presence of a background diagnostic field. * In the limit of diagnostic field going to zero this Euclidean operator coincides with the Euclidean analytic continuation of the Minkowski operator in Eq. <ref>. * The index of this Euclidean operator persists in the limit of the diagnostic field being taken to zero and is equal to the space-time integral of the divergence of the GHC. In the next section, we will to extend our Euclidean fermion operator construction to discrete space-time. In order to mimic the continuum construction sufficiently closely we will have to maintain the following * The lattice fermion operator or its Hermitian conjugate should not have more than one zeromode. * We will exclude regions in parameter space where the number of zeromodes for the fermion operator and its Hermitian conjugate are the same. The second condition ensures that the index of the fermion operator is nonzero. § 1+1 CASE ON THE LATTICE IN INFINITE VOLUME We begin with the fermion operator in Eq. <ref> and discretize spacetime setting the lattice spacing to 1. If we first set ϕ_2=0 and naively discretize space-time, we observe an important difference from the spectrum in the continuum : i.e. we see fermion doubling. This is to say, in the continuum we had a single solution to the equation 𝒟|_ϕ_2=0ψ=0 with ψ being localized in the x_1 direction and constant in the x_0 direction. On the lattice, there are more than one solution of this form. 
Removing the fermion doubling so as to retain only one solution requires us to introduce higher dimensional operators in the Lagrangian, similar to the Wilson terms used in domain wall fermions <cit.>. Since our end goal is to construct a Euclidean fermion operator with a single zeromode we have two simple choices for this higher dimensional term: * Wilson-like operator: Inspired by the Wilson term in lattice field theory, we introduce the following higher-derivative operators into the Lagrangian, which we call Wilson-like terms <cit.>: 𝒟_1 = ∑_μγ_μ∇_μ + R/2∇_1^2+i γ_χR/2∇_0^2. We set the parameter R=1. * Fermion operator with Wilson term: We introduce in the Lagrangian the standard Wilson term: 𝒟_2 = ∑_μγ_μ∇_μ + R/2∑_μ∇_μ^2. We again set the Wilson parameter to R=1. We now look for the zeromodes of these operators by varying the parameters of our theory. §.§ Zeromodes In this subsection we aim to obtain zeromode solutions by varying parameters such as the domain wall heights for the two types of lattice fermion operators introduced in the previous section. We first present an analytic calculation for the zeromode of the Wilson-like operator in infinite and finite volume. The corresponding expressions for the zeromode profile are simple and illuminating. An analogous analytic calculation for the Wilson fermion case is more difficult and not particularly illuminating. Therefore we defer the discussion of the Wilson fermion operator to subsection <ref>, where we present a numerical analysis of both the Wilson and Wilson-like cases. §.§.§ Analytic solution for the zeromode in infinite volume We begin with the Wilson-like operator given by Eq. <ref>: 𝒟_1=[ ϕ̃_1+iϕ̃_2 -i∇_0-∇_1; i∇_0-∇_1 ϕ̃_1-iϕ̃_2 ] where ϕ̃_1=ϕ_1+1/2∇_1^2 and ϕ̃_2=ϕ_2+1/2∇_0^2. With an ansatz of ψ_+=[ 1; -1 ]φ_+ of γ_1 eigenvalue +1 we get two equations for φ_+, ∇_1φ_++ϕ̃_1φ_+=0, ∇_0φ_++ϕ̃_2φ_+=0. Then using an ansatz of φ_+=z_0^x_0z_1^x_1 we see that there exists a normalizable solution with z_0=(1-ϕ_2), z_1=(1-ϕ_1) when 0<m_0<2 and 0<μ_0<2. Let's fix m_0=1 and μ_0=1. Now we consider the ansatz of ψ_-=[ 1; 1 ]φ_-. The EOMs for φ_- are ∇_1φ_- -ϕ̃_1φ_-=0, ∇_0φ_- -ϕ̃_2φ_-=0. These are solved by the ansatz φ_-=z_0^x_0 z_1^x_1 with z_0=1/(1-ϕ_2), z_1=1/(1-ϕ_1). The solution is not normalizable for our choice of m_0=1 and μ_0=1. Therefore ψ_- is not a zeromode of 𝒟_1; thus 𝒟_1 has a single zeromode specified by the expression for ψ_+ in Eq. <ref>. Now, let's look at the zero modes of 𝒟_1^†. With an ansatz of ξ_-=[ 1; 1 ]χ_- and ξ_+=[ 1; -1 ]χ_+ we get the following EOMs for χ_- and χ_+, ∇_1χ_-+ϕ̃_1χ_-=0 ∇_0χ_- -ϕ̃_2χ_-=0 and ∇_1χ_+-ϕ̃_1χ_+=0 ∇_0χ_++ϕ̃_2χ_+=0 Using an ansatz of the form z_0^x_0z_1^x_1 for χ_- and χ_+ we see that there are no normalizable solutions for either. Thus we have accomplished what we set out to do, i.e. engineer a Euclidean fermion operator on the lattice with an index of +1 using the Wilson-like terms. Note that, if we vary the parameters, the pattern of zeromodes changes. E.g. for -2<m_0<0, -2<μ_0<0 we find a zeromode solution with γ_1 eigenvalue -1. Similarly, with 2>m_0>0, 0>μ_0>-2 and 0>m_0>-2, 2>μ_0>0 we find no normalizable zeromode for the operator 𝒟_1. However, we find a zeromode for the operator 𝒟_1^†: γ_1 eigenvalue -1 for 2>m_0>0, 0>μ_0>-2 and γ_1 eigenvalue 1 for 2>μ_0>0, 0>m_0>-2. §.§.§ Finite volume Our next goal is to generalize the infinite volume construction to finite volume, i.e. on S^1 × S^1. At this point we will have to resort to numerical techniques. 
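Before doing so, it is worth recording why the infinite-volume solution above takes the form z=(1-ϕ); the short check below assumes that ∇_μ denotes the symmetric lattice difference and ∇_μ^2 the nearest-neighbour Laplacian, with the lattice spacing set to 1. Substituting the ansatz φ_+=z_0^x_0z_1^x_1 into the x_1 equation of motion gives ∇_1φ_+ + ϕ̃_1φ_+ = [ (z_1-z_1^-1)/2 + ϕ_1 + (z_1+z_1^-1-2)/2 ]φ_+ = (z_1-1+ϕ_1)φ_+ = 0, i.e. z_1=1-ϕ_1, and the same manipulation in the x_0 direction gives z_0=1-ϕ_2. The profile decays on both sides of the walls only if |1-ϕ_1|<1 and |1-ϕ_2|<1, which reproduces the window 0<m_0<2 and 0<μ_0<2 quoted above.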
We will take the lattice size to be L× L, where the domain wall in ϕ_1 is located at x_1=0 and the anti-wall is located at x_1=L/2. Similarly, the domain wall in ϕ_2 is located at x_0=0 with the anti-wall at x_0=L/2. Therefore in effect we have four vortex-like defects at (x_0=0, x_1=0), (x_0=0, x_1=L/2), (x_0=L/2, x_1=0) and (x_0=L/2, x_1=L/2). There are several subtleties with this finite volume analysis which we describe below. Exact zeromode and tuning: The two types of lattice fermion operators, which we call the Wilson-like and Wilson fermion operators, will in general not exhibit exact zeromodes in finite volume for an arbitrary choice of domain wall heights. To understand why this is the case, consider the Wilson-like fermion operator. Since we are considering S^1× S^1 with periodic boundary conditions, any solution to the equation of motion, including the zeromode, should satisfy: φ_+(x_μ = -L/2) = φ_+(x_μ = L/2) for μ=0,1. The solution obtained in Eq. <ref> for an infinite lattice with equal magnitude of the domain wall height on the two sides of the wall will not satisfy this periodic boundary condition (PBC) in finite volume. In order to obtain an exact zeromode solution which satisfies the PBC, we will need to assume a more general domain wall configuration: ϕ_1(x_1) = m_+ x_1≥ 0 m_- x_1<0 , ϕ_2(x_0) = μ_+ x_0≥ 0 μ_- x_0<0 . Then we find an exact zeromode for the choice 1/(1-m_-) = (1-m_+), 1/(1-μ_-) = (1-μ_+). Note that these equations do not depend on the lattice size; thus if they are satisfied then the exact zeromode of 𝒟_1 will exist in any volume. A similar analysis is much more complicated for the Wilson case and is not particularly interesting. It is important to consider, however, that the Minkowski space domain wall theory in continuous space-time and in infinite volume hosts massless edge states without requiring any tuning of the domain wall height. Therefore, on the finite lattice too, we seek a formulation which does not rely on tuning of the domain wall heights. Since a finite volume lattice fermion operator 𝒢 does not have an exact zeromode in general, we shift our attention to the operator 𝒢^†𝒢. This is also motivated by the observation that the index formula for the fermion operator involves the kernels of the operators 𝒢^†𝒢 and 𝒢𝒢^†. However, the operator 𝒢^†𝒢 (or 𝒢𝒢^†) doesn't have exact zeromodes in finite volume either. In order to recover them one has to take the infinite volume limit. Interestingly, this limit is smooth for 𝒢^†𝒢 (or 𝒢𝒢^†) but not necessarily for 𝒢 (or 𝒢^†) itself. We will use this observation to enable the GHC construction. The index formula in infinite volume is related to the difference of the numbers of zeromodes of the operators 𝒟_1/2𝒟_1/2^† and 𝒟_1/2^†𝒟_1/2. We will work with the same definition for the “index” in finite volume. As we will see, in finite volume, the operators 𝒟_1/2𝒟_1/2^† and 𝒟_1/2^†𝒟_1/2 will exhibit smooth convergence towards the infinite volume zeromodes without any fine tuning of the domain wall heights, whereas 𝒟_1/2 will not. This will enable us to construct a tuning-independent lattice GHC. Although we don't need fine tuning of the domain wall heights, the domain wall heights must satisfy the following constraint to host a zeromode in the infinite volume limit: e.g. for a crossed domain wall configuration of the form ϕ_1=m_0ϵ(x_1) and ϕ_2=μ_0ϵ(x_0) we must have 0<m_0,μ_0<2 in order for there to be a zeromode. Therefore, in the rest of the paper we will choose parameters that satisfy this condition. 
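Incidentally, the volume independence of the tuning condition noted above has a simple heuristic explanation, using the same first-order reduction of the Wilson-like equations of motion as before (so this is only a sketch): with R=1 the zeromode obeys φ_+(x_1+1)=(1-ϕ_1(x_1))φ_+(x_1), so transporting it once around the x_1 circle multiplies it by ∏_x_1(1-ϕ_1(x_1)) = [(1-m_+)(1-m_-)]^L/2. Single-valuedness on S^1 requires this product to equal 1, and the L-independent way to satisfy that is (1-m_+)(1-m_-)=1, which is precisely 1/(1-m_-)=(1-m_+); the same argument in the x_0 direction gives 1/(1-μ_-)=(1-μ_+).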
Finally, even though our goal is to construct a GHC formulation which does not rely on tuning of the domain wall height, we will present the results for the tuned case of the Wilson-like fermion operator to illustrate a GHC in the presence of an exact lattice zeromode. Index in finite volume: In a finite volume, a domain wall setup will appear accompanied by an anti-wall. As a result, with a domain wall in the mass and in the diagnostic field, we will have four vortex-like defects in finite volume (two vortices and two anti-vortices), as described at the beginning of this subsection. Clearly the net winding of this system is zero. Therefore the net “index" in this finite volume lattice theory is also zero. However, locally in a region near each of the vortex defects we should be able to define an “index" which we can then attempt to connect to a lattice version of the generalized Hall current. In other words, in finite volume, the operators 𝒟_1/2𝒟_1/2^† and 𝒟_1/2^†𝒟_1/2 have the same number of zeromodes. This implies that the difference between the number of zeromodes for the two is zero, or the net “index" is zero. However, the zeromodes for these two operators will be localized on different vortex defects. E.g. 𝒟_1^†𝒟_1 will have zeromodes on the defects at (x_0=0, x_1=0) and (x_0=L/2, x_1=L/2). Similarly, 𝒟_1𝒟_1^† will have zeromodes at (x_0=L/2, x_1=0) and (x_0=0, x_1=L/2). As a result, e.g., near the vortex at (x_0=0, x_1=0) we expect the index to be 1. Our goal is to show that the integral of the divergence of the lattice GHC in a region around the vortex equals the index. §.§.§ Zeromode numerics and singular value decomposition (SVD) In this subsection we study the eigenvalues of the finite volume lattice operators numerically. Our goal is to map the lowest eigenstate of the suitable finite volume lattice operator to the zeromode of the infinite volume continuum fermion operator. As stated earlier, this mapping cannot be performed smoothly in the infinite volume limit by directly considering the eigenvalues of 𝒟_1/2 and 𝒟_1/2^†. Instead, we need to consider the eigenvalues of 𝒟_1/2^†𝒟_1/2 and 𝒟_1/2𝒟_1/2^†. Our goal, therefore, is to find the lowest eigenvalues of 𝒟_1/2^†𝒟_1/2 and 𝒟_1/2𝒟_1/2^† and confirm that they go to zero in the infinite volume limit. This discussion is organized as follows: first, we present numerical methods for finding the zeromodes of 𝒟_1/2𝒟_1/2^† and 𝒟_1/2^†𝒟_1/2; we then apply this method first to a 0+1 dimensional Wilson fermion operator with a domain wall, and then to the lattice fermion operators we wish to study in 1+1 dimensions, i.e. 𝒟_1 and 𝒟_2. To describe the numerical technique, we use a fermion operator 𝒟 which serves as a proxy for both 𝒟_1 and 𝒟_2. We can now consider the spectrum of the operators 𝒟𝒟^† and 𝒟^†𝒟 using the eigenvalue equations 𝒟𝒟^† u_i = σ_i^2 u_i, 𝒟^†𝒟 v_i = σ_i^2 v_i, where σ_i^2 is an eigenvalue. The eigenvectors u_i and v_i are called left and right singular vectors and the corresponding σ_i ≥ 0 is called a singular value of 𝒟. Note that the vectors u_i and v_i are in general distinct since the fermion operator is not normal, i.e. [𝒟^†,𝒟] ≠ 0. Another possible way to arrive at the same result is to look for a vector v^' which will minimize the norm |𝒟 v^'|. The square of this norm is a positive-definite quadratic form given by 𝒟^†𝒟, therefore the minimum is delivered by the eigenvector v_min corresponding to the smallest eigenvalue σ^2_min. Analogously, u_min will deliver the minimum of |𝒟^† u^'|. 
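These statements are easy to check numerically; in the few lines below a random non-normal matrix merely stands in for 𝒟, and the check verifies that the squared singular values coincide with the eigenvalues of 𝒟^†𝒟 and 𝒟𝒟^†, and that the right-singular vector with the smallest singular value minimizes |𝒟 v|.

import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))   # stand-in for a non-normal D

U, s, Vh = np.linalg.svd(D)                      # D = U diag(s) Vh, s sorted in descending order
print(np.allclose(np.sort(s**2), np.linalg.eigvalsh(D.conj().T @ D)))   # eigenvalues of D^dag D
print(np.allclose(np.sort(s**2), np.linalg.eigvalsh(D @ D.conj().T)))   # eigenvalues of D D^dag

v_min = Vh[-1].conj()                            # right-singular vector of the smallest sigma
print(np.linalg.norm(D @ v_min), s[-1])          # |D v_min| equals sigma_min up to round-off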
Interestingly, there is a simple relationship between u_i and v_i, since they together with σ_i ≥ 0 define a singular values decomposition (SVD) of the operator 𝒟: 𝒟^† u_i = σ_i v_i, 𝒟 v_i = σ_i u_i. The SVD can be written in a compact matrix form as follows: 𝒟 = U Σ V^†, where unitary matrix U is composed of (column) singular vectors u_i, unitary matrix V – of singular vectors v_i and Σ is a diagonal matrix of corresponding singular values σ_i. It is clear from SVD that neither u_i nor v_i are straightforwardly related to eigenvectors of 𝒟 if the operator is not normal. However, the singular values of the operator 𝒟 and 𝒟^† map one to one to the eigenvalues of the operators 𝒟^†𝒟 and 𝒟𝒟^†. Therefore, the SVD of 𝒟/𝒟^† is equivalent to eigen-decomposition of 𝒟^†𝒟/𝒟𝒟^† etc. In the rest of the paper we will refer to the lowest eigenmode of 𝒟^†𝒟/𝒟𝒟^† as near-zeromode of the operator 𝒟/𝒟^† and the vectors u_i, v_i as singular vectors. Wilson fermion operator in 0+1 dimension: We first demonstrate the utility of our approach in the simple case of 0+1 dimensional Wilson fermion operator 𝒟_1d in the presence of a domain wall. We use periodic boundary conditions on S^1 and a domain wall in the fermion mass: m(x) = m_+ L/2>x≥ 0 m_- -L/2≤ x<0 . The equation of motion is given by: 𝒟_1dψ(x) = 1/2(ψ(x+1) - ψ(x-1)) + m(x) ψ(x) +R/2(ψ(x+1) + ψ(x-1)-2 ψ(x)) = 0, which in the case of R=1 can be simplified to: 𝒟_1dψ(x) = (m(x) - 1) ψ(x) + ψ(x+1) = 0. We numerically find singular vectors u_i and v_i together with singular values σ_i of this operator and study their dependence on the lattice size L. Let us first consider singular values σ(L) which we depicted on the Fig. <ref>. We observe that the smallest singular value σ_0 approaches zero exponentially fast: σ_0∼ O(e^-L), whereas other singular values remain finite. This indicates that in the infinite volume there exists a zero mode of 𝒟_1d given by infinte volume limit of corresponding singular vector v_min. We show the near-zero mode v_0 on the Fig. <ref>. We compare it to exact solution of equation 𝒟_1dψ_inf(x) = 0 in the infinite volume which is given by: ψ_0^inf(x) = ( 1-m_+)^x, x ≥ 0, ( 1-m_-)^x, x < 0, where m_± are bulk fermion masses on either sides of the domain wall. Here we work with m_-=3/4, m_+=-1. We find an excellent agreement between v_0 and ψ_inf already for lattice sizes L > 20. We also show on the Fig. <ref> how near-zeromodes of 𝒟_1d and 𝒟_1d^† are related by SVD in Eq. <ref>. Fermion operators in 1+1 dimension: Let us now consider 1+1 dimensional fermion operators we proposed in section <ref> and analyze the corresponding zeromodes and near-zeromodes. As mentioned before, in 1+1 D, it is possible to obtain an exact zeromode for the Wilson-like operator in finite volume by tuning the domain wall heights. However, we didn't find such a solution for the Wilson fermion operator. Here we will use SVD to instead find near-zeromodes for the Wilson fermion 𝒟_2 and Wilson-like fermion operators 𝒟_1. The results for the Wilson-like case are very similar to the Wilson fermion case. Therefore, we only present results for the Wilson fermion case here. In order to study the singular values of the Wilson fermion operator we use two-dimensional lattice of the size L × L and impose periodic boundary conditions. We also use the domain wall configuration Eq. <ref> with 0>-m_-=m_+>0 and 0>-μ_-=μ_+>0. 
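A minimal sketch of this construction in numpy is given below. It is not the code used by the authors: it assumes that ∇_μ is the symmetric difference and ∇_μ^2 the nearest-neighbour Laplacian, sets R=1, places the walls at x_1=0, L/2 and x_0=0, L/2 as described above, and uses the illustrative values m_0=μ_0=1 inside the allowed window 0<m_0,μ_0<2.

import numpy as np

def wilson_operator(L, m0=1.0, mu0=1.0, R=1.0):
    # gamma_0 = sigma_2, gamma_1 = -sigma_1, gamma_chi = sigma_3, as in the text
    g0 = np.array([[0.0, -1j], [1j, 0.0]])
    g1 = -np.array([[0.0, 1.0], [1.0, 0.0]])
    gchi = np.array([[1.0, 0.0], [0.0, -1.0]])
    I2, I = np.eye(2), np.eye(L)
    S = np.roll(np.eye(L), 1, axis=1)            # (S psi)(x) = psi(x+1), periodic
    grad = 0.5 * (S - S.T)                       # symmetric difference
    lap = S + S.T - 2.0 * np.eye(L)              # nearest-neighbour Laplacian
    grad0, grad1 = np.kron(grad, I), np.kron(I, grad)   # site index = x0*L + x1
    lap0, lap1 = np.kron(lap, I), np.kron(I, lap)
    wall = np.where(np.arange(L) < L // 2, 1.0, -1.0)   # wall at 0, anti-wall at L/2
    phi1 = np.kron(np.ones(L), m0 * wall)        # phi_1 depends on x_1
    phi2 = np.kron(mu0 * wall, np.ones(L))       # phi_2 depends on x_0
    # D_2 = gamma_mu grad_mu + (R/2)(lap_0 + lap_1) + phi_1 + i phi_2 gamma_chi
    return (np.kron(g0, grad0) + np.kron(g1, grad1)
            + np.kron(I2, 0.5 * R * (lap0 + lap1))
            + np.kron(I2, np.diag(phi1))
            + 1j * np.kron(gchi, np.diag(phi2)))

for L in (8, 12, 16):
    s = np.linalg.svd(wilson_operator(L), compute_uv=False)
    print(L, np.sort(s)[:4])   # inspect the smallest few singular values as L grows

Inspecting the smallest singular values as L grows should reproduce the qualitative behaviour described next: a nearly degenerate pair that decreases with the volume.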
By performing the SVD numerically for different lattice sizes L we find a complete set of singular values σ_i(L) and the corresponding singular vectors v_i(L) and u_i(L). Let us first consider the few lowest singular values σ_i(L), which are presented in Fig. <ref>. We observe that the smallest two of them (take them to be i=0, 1) are degenerate and exhibit a clear exponential decay as L →∞. Thus, we find the first evidence for the emergence of two degenerate zero modes of the Wilson fermion operator in the infinite volume. Let us now study the corresponding singular vectors v_i(L) and u_i(L). Note that there are two degenerate singular vectors v_i=0,1(L) corresponding to the lowest σ_0=σ_1. The same is true for u_i. These degenerate vectors are superpositions of two near-zeromodes localized on the appropriate vortex defects, i.e. v_i=0,1 are superpositions of near-zeromodes on defects with winding -1. These two defects are localized at (x_0=0, x_1=0) and (x_0=L/2, x_1=L/2). Similarly, u_i=0,1 are superpositions of near-zeromodes located on defects with winding 1, (x_0=0, x_1=L/2) and (x_0=L/2, x_1=0). At this point we can change basis by writing v^'_i = α_i v_0 + β_i v_1 with |α_i|^2 + |β_i|^2 = 1 and i=0, 1, in order to find near-zeromodes which are completely localized on the vortices. One can achieve this by minimizing the Inverse Participation Ratio (IPR), which can serve as a measure of the localization: IPR = 1/∑_x_0,x_1 |v^'(x_0,x_1)|^4 for a normalized mode v^'. Intuitively, if a mode is uniformly distributed over the entire lattice of volume V then one would find that IPR = V. On the other hand, if the mode is localized at a single point then IPR = 1. Using this method we find two vectors v'_i=0,1(L) which are exponentially localized on the two vortices of the same winding number ν_θ = -1, as shown in Fig. <ref> and Fig. <ref>. Thus, we have identified two near-zeromodes of the Wilson fermion operator 𝒟_2. For convenience, we will refer to these vectors as v_i=0,1 and forego the superscript prime, i.e. v'→ v. We do the same for the vectors u_0/1. The same procedure yields two vectors u_i(L), corresponding to the same two singular values, localized on the other two vortices of winding number ν_θ = +1 (at x_0=0, x_1=L/2 and x_0=L/2, x_1=0). Finally, let us describe how the near-zeromodes behave if one switches the diagnostic field off, i.e. ϕ_2 → 0. If the lattice volume is kept fixed, then at sufficiently small ϕ_2 the near-zeromodes completely delocalize in the direction μ=0, and the SVD spectrum becomes consistent with that of the ϕ_2 = 0 case. Namely, we find that the near-zeromodes transform into plane wave excitations living on the two remaining domain walls. This can be seen by direct inspection of | v_i(x_0,x_1) | and from the behavior of the singular values σ_i(L) ∼ 2π n / L, characteristic of the spectrum of plane waves in a finite box. Furthermore, the lowest singular values are 4 times degenerate, accounting for the 2 remaining domain walls and the 2 possible spinor polarizations. Additionally, by imposing an anti-periodic boundary condition in the μ=0 direction we again observe that the flow of the singular values σ_i(L) ∼ 2π (n + 1/2) / L is characteristic of plane waves in an anti-periodic box, see Fig. <ref>. The true near-zeromode should not, in general, be sensitive to such a change of boundary conditions. This reorganization happens because for sufficiently small ϕ_2 the localization width of the near-zeromodes becomes comparable to or bigger than the lattice size, and thus they completely delocalize. 
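(The basis rotation used above to isolate localized near-zeromodes can be sketched as follows; this is an illustrative brute-force scan, not the authors' procedure, and v0_raw, v1_raw stand for the two orthonormal degenerate singular vectors reshaped to the lattice.)

import numpy as np

def ipr(v):
    # IPR = 1 / sum_x |v(x)|^4 for a normalized mode v
    v = v / np.linalg.norm(v)
    return 1.0 / np.sum(np.abs(v) ** 4)

def split_degenerate_pair(v0_raw, v1_raw, n_theta=121, n_phi=121):
    # Scan v' = cos(t) v0 + e^{ip} sin(t) v1 (so |alpha|^2 + |beta|^2 = 1) and
    # keep the most localized combination, i.e. the one of minimal IPR.
    best = (np.inf, 0.0, 0.0)
    for t in np.linspace(0.0, 0.5 * np.pi, n_theta):
        for p in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
            cur = ipr(np.cos(t) * v0_raw + np.exp(1j * p) * np.sin(t) * v1_raw)
            if cur < best[0]:
                best = (cur, t, p)
    _, t, p = best
    va = np.cos(t) * v0_raw + np.exp(1j * p) * np.sin(t) * v1_raw
    vb = -np.exp(-1j * p) * np.sin(t) * v0_raw + np.cos(t) * v1_raw   # orthogonal partner
    return va / np.linalg.norm(va), vb / np.linalg.norm(vb)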
If ϕ_2 is kept fixed, then one should recover the near-zeromodes by increasing the volume. Therefore we find that the limits ϕ_2 → 0 and L →∞ do not commute. In order to correctly define the “index" from the finite volume analysis one has to take the infinite volume limit first and only then switch the diagnostic field off. § GENERALIZED HALL CURRENT IN THE FINITE VOLUME In this part we will study the realization of the Generalized Hall Current (GHC) for the Wilson-like and the Wilson fermions and the corresponding “indices". Before we proceed to the computations, let us outline the plan of this section. First, we will present how we have computed the GHC on the lattice. Next, we will study the GHC for the Wilson-like operator 𝒟_1, taking the domain wall heights to satisfy the tuning condition (Eq. <ref>). This will illustrate how the GHC reproduces the index of the fermion operator in finite volume in the case when there is an exact zeromode. This will give us an opportunity to study the GHC and its relation to the index without the complications of finite volume effects. Next, we will proceed to the study of the Wilson fermion operator and see how near-zeromodes and finite volume effects influence the realization of the GHC. The results for the Wilson-like operator in the same setup (when exact zeromodes are absent) are essentially the same; therefore we will not present them. §.§ Computation of the Generalized Hall Current on the lattice The lattice generalized Hall current J^H_μ(x) can be defined as follows: J^H_μ(x) = Ψ̅Γ̃_μ(x) Γ_χΨ where Γ̃_μ(x) is given by: Γ̃_μ(x) = -i δ K(A_μ(x))/δ A_μ(x)|_A_μ(x) = 0. Here A_μ(x) is a U(1) gauge field and K(A_μ(x)) is the gauged lattice Dirac operator of the double theory obtained via the standard Peierls substitution δ_x+a_μ,y→δ_x+a_μ,y exp(i A_μ(x)). The expectation value of J^H_μ(x) is evaluated numerically by a straightforward computation of the matrix (K + M)^-1 and taking the trace. The divergence is computed as usual with the help of the lattice backward difference ∇^B_μ: ∇^B_μ J^H_μ(x) = ∑_μ=0,1( J^H_μ (x - a_μ) - J^H_μ(x) ). We posit that the space-time integral of the divergence should produce the “index" of interest. We compute the “index" I_lat according to the lattice version of Eq. <ref>: I_lat = -1/2∑_x ∈ S∇^B_μ J^H_μ(x) where S is the area over which the divergence of the lattice GHC current J^H_μ(x) is integrated. The area S can be the entire lattice; however, in that case the total index has to vanish. Thus we will integrate only over some portion of the lattice adjacent to the defect (vortex) of interest. To implement this, we divide the lattice into 4 equal squares centered around each of the 4 vortices created by the domain walls and then integrate the divergence of the lattice GHC over these four squares separately to compute the corresponding index. §.§ GHC for Wilson-like lattice operator and exact zeromodes Let us first present results for the GHC for the Wilson-like operator 𝒟_1 when the domain wall configuration satisfies the tuning condition Eq. <ref>. In this case there is an exact zeromode for the fermion operator in finite volume. We have computed the GHC J^H_μ(M) and the “index" I_lat(M) for several values of the regulator mass from M = 10^-5 to 2 on a lattice of size L × L = 32 × 32. We present the current J^H_μ(x) and its divergence in Fig. <ref> for the smallest value, M = 10^-5. We observe that the divergence is localized around the vortices. It has its maximal value at the vortex center. The sign is consistent with the winding number of the defect. 
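(The bookkeeping behind the last two formulas can be summarized in the following sketch. It is illustrative only: the current is assumed to be available as a float array J[μ, x_0, x_1] on the periodic L × L lattice, and its computation from tr[Γ̃_μΓ_χ(K+M)^{-1}] is not shown.)

import numpy as np

def backward_divergence(J):
    # div(x) = sum_mu [ J_mu(x - a_mu) - J_mu(x) ], as defined above;
    # J has shape (2, L, L) with periodic boundary conditions.
    div = np.zeros_like(J[0])
    for mu in (0, 1):
        div += np.roll(J[mu], 1, axis=mu) - J[mu]
    return div

def local_index(J, vortex, L):
    # I_lat = -1/2 * sum of the divergence over the (L/2) x (L/2) square
    # centered on `vortex`; periodic wrapping is handled by rolling the lattice.
    div = backward_divergence(J)
    x0, x1 = vortex
    centered = np.roll(np.roll(div, L // 4 - x0, axis=0), L // 4 - x1, axis=1)
    return -0.5 * np.sum(centered[:L // 2, :L // 2])

# usage sketch, with J computed elsewhere from the fermion propagator:
# L = 32
# for vortex in [(0, 0), (L // 2, L // 2), (0, L // 2), (L // 2, 0)]:
#     print(vortex, local_index(J, vortex, L))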
The current J^H_μ(M) flows preferentially along the edges of the domains from one vortex to another. The divergence exhibits an exponential decay around the vortex, as shown in Fig. <ref>. Now we want to verify that the space-time integral of the divergence of the lattice GHC produces the correct “index". As discussed previously, we divided the lattice into 4 equal squares centered around each vortex and performed the integration of the divergence of the GHC over them. Due to the exponential decay of the GHC away from the defect, we expect that the integral approaches its infinite volume value quickly. The resulting “index" I_lat(M) is shown in Fig. <ref> as a function of M. We observe that it clearly goes towards ± 1 as M → 0. The sign of the index depends on the vortex defect under consideration. Also, as expected, the “index" approaches zero as M becomes very large. In order to quantify finite volume effects we have computed the deviation: ϵ(L) = |± 1 - I_lat(M → 0)| where the plus or minus sign is chosen according to the winding of the vortex and I_lat is the corresponding “index" computed by integrating ∇_μ^B J_μ^H. This function is shown in Fig. <ref>, where one can see that the error is indeed exponentially small: ϵ(L) ∼ e^-L. Therefore, after performing the infinite volume extrapolation, our computations show that the lattice GHC correctly reproduces the index of the Euclidean fermion operator. Finally, we find that the generalized Hall current and its divergence vanish when ϕ_2 → 0 for fixed L and M. This shows that we have to take the infinite volume limit first and then take ϕ_2 to zero in order to retain a nonzero index in the limit of ϕ_2→ 0. §.§ GHC for Wilson fermion operator and near-zero modes We now present results for the GHC and the index for the Wilson fermion operator 𝒟_2. The results for the untuned Wilson-like operator are very similar. We use the same strategy in order to compute the “index", which is presented in Fig. <ref> for several values of M and lattice sizes L = 8 … 32. First of all, we observe that the “index" vanishes when we naively take M → 0. This is an expected behaviour since the spectrum of 𝒟_2 is, strictly speaking, gapped: σ_0 ∼exp(-L)≠ 0. In order to understand it better one can expand the contribution to Dim(ker 𝒟_2) in powers of M/σ_0 ≪ 1: M^2/(𝒟_2^†𝒟_2 + M^2) = M^2/σ_0^2 + O(M^4/σ_0^4). We indeed find this dependence, as shown in Fig. <ref>. The “index" exhibits a pronounced maximum at some M_0 > σ_0 and then decays exponentially fast as M →∞. We find that the maximum tends to ± 1 as the lattice size gets bigger, also exponentially fast, as illustrated in Fig. <ref>. Moreover, the position of the maximum M_0 tends to zero as L →∞ exponentially as well, see Fig. <ref>. Therefore we find that in order to reproduce the index of the fermion operator one has to take the infinite volume limit first and only then take M = M_0 → 0. § CONCLUSIONS In this paper we extended the idea of the generalized Hall current proposed in <cit.> to discrete space-time in finite volume. Our construction is focused on one of the several examples presented in <cit.>: a 1+1 dimensional Dirac fermion with a domain wall in its mass. It is well known that the domain wall hosts a massless fermion in the continuum. The continuum GHC construction connects the existence of this massless fermion to a Euclidean fermion operator with an index of 1 by turning on some diagnostic field in the theory. 
We extend this construction to discrete Euclidean space-time in finite volume (S^1× S^1) by introducing higher dimensional operators which we call Wilson-like and Wilson terms. We tackle several nontrivial features associated with a finite volume analysis, which include the net vorticity of the defects on S^1× S^1 being zero. We have four defects on the lattice, two vortices and two anti-vortices. In order to mimic the GHC construction of the continuous infinite volume space-time, we focus on the region of space-time around only one of these vortices. We were successful in engineering a nonzero index for the fermion operator on each of these vortices. We then computed the lattice GHC to show that the space-time integral of its divergence, computed locally, reproduced the “index" correctly. Future research directions involve extending this lattice finite volume construction to higher dimensional theories. Ref. <cit.> constructed the continuum GHC for several examples, including the 1+1 dimensional example we focus on here. The other examples included domain wall fermions in higher dimensions. The GHC construction in these higher dimensional examples involved diagnostic background gauge fields as well as diagnostic scalar and pseudo-scalar fields. Our plan is to extend these continuum constructions to the lattice. Also, the continuum construction of the GHC in <cit.> applies to free fermion theories. In particular, the GHC is computed using a one-loop Feynman diagram in perturbation theory. It is however well known that in a multiflavor theory, introducing interactions can sometimes gap out massless fermions through nonperturbative effects. This is even more interesting when the interactions in question do not break any anomalous symmetries of the non-interacting theory. E.g. see symmetric mass generation <cit.>. The non-perturbative effects of interactions on the GHC may not be captured using a one-loop Feynman diagram as described in <cit.>. One may need to resort to a numerical analysis to uncover these effects. Even though our lattice GHC construction was formulated for non-interacting 1+1 dimensional fermions, it can be easily modified to take interactions into account. This will enable us to compute the generalized Hall current taking into account non-perturbative effects. § ACKNOWLEDGEMENT We acknowledge support from the U.S. Department of Energy, Nuclear Physics Quantum Horizons program through the Early Career Award DE-SC0021892.
http://arxiv.org/abs/2307.05583v1
20230710144048
Resistivity in Quantum Vortex Liquid of Clean Two-Dimensional Superconductor
[ "Naratip Nunchot", "Ryusuke Ikeda" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
Department of Physics, Kyoto University, Kyoto 606-8502, Japan Motivated by a recent controversy on a possible quantum phase in thin films of relatively clean superconductors under an out-of-plane magnetic field, the quantum fluctuation effects on the phase diagram and the resistivity are reexamined. It is argued that most of the features seen in the corresponding resistivity data in relatively clean systems reported recently are explained within the present theory, and that the fan-shaped resistivity curves, suggestive of the presence of a superconductor to insulator transition at zero temperature, in the vortex liquid regime are a consequence of the insulating behavior of the Aslamasov-Larkin fluctuation conductivity in the quantum regime. Resistivity in Quantum Vortex Liquid of Clean Two-Dimensional Superconductor Naratip Nunchot and Ryusuke Ikeda August 12, 2023 ============================================================================ § INTRODUCTION In thin films of type II superconductors under a magnetic field perpendicular to the plane, the resistivity often shows a behavior insensitive to the temperature T over wide field and temperature ranges <cit.>. Possibilities of a novel two-dimensional (2D) quantum phase based on this quantum metallic behavior have been discussed repeatedly over the past two decades <cit.>. However, it has been clarified recently that most of the T-independent behavior of the resistivity is removed by adequately filtering external radiation from the film sample <cit.>, strongly suggesting that external noise has created the quantum metallic behavior in experiments. The presence of a quantum metal state has still been argued in some recent experimental works on relatively clean systems, i.e., with weak disorder <cit.>. Since a nearly flat resistivity curve is seen even in the temperature range of the same order as the mean field T_c in some film samples, such a peculiar resistive behavior cannot be due to the randomness or the sample disorder, which becomes more effective at lower temperatures. In addition, a crossing behavior leading to assuming the presence of a superconductor to insulator quantum transition (SIT) at zero temperature <cit.> is seen at relatively higher fields in samples of relatively clean films <cit.>. Then, one might wonder what the flat resistivity curve appearing in clean samples in lower fields than the apparent SIT field implies. In the present work, the quantum superconducting (SC) fluctuation effects on the resistivity in clean 2D superconductors are reexamined by performing a detailed analysis within the framework of the renormalized fluctuation theory <cit.>. It was argued in a previous theoretical work of one of the present authors <cit.> that, based on a dimensional analysis, the melting curve H_m of the 2D vortex lattice becomes insensitive to T at low enough temperatures due to the quantum SC fluctuation, and that, in such a quantum regime, the vortex flow resistance in a narrow field range close to H_m is also insensitive to T and takes a value of the order of the quantum resistance R_q = πħ/(2e^2) = 6.45(kΩ). However, this explanation of the crossing behavior seen in the field dependence of the resistivity curves seems to be inconsistent with the observation of the apparent SIT behavior in a couple of experiments <cit.> where the crossing of the resistivity is seen in a much higher field than the nominal vortex lattice melting field at low temperatures. 
Below, the vortex lattice melting transition line will be first examined without resorting to the rough argument <cit.> and by comparing the free energy of the renormalized fluctuation of the SC order parameter with that of the vortex lattice corrected by the Gaussian fluctuation <cit.>. In contrast to the previous estimate of the quantum melting line <cit.>, the resulting melting field H_m grows upon cooling everywhere at nonzero temperatures, while H_m(T=0) can take a much lower value than H_c2(T=0), and the resulting quantum vortex liquid regime becomes well-defined <cit.>. Next, the in-plane resistivity computed within the renormalized fluctuation theory is examined in a consistent way with the calculation of the melting line. Bearing in our mind that the characteristic features of the resistivity curves in the quantum regime seem to depend on the details of the materials, the resistivity curves will be discussed by focusing on the two extremely different cases: One is the case with a moderate strength of the thermal fluctuation and an extremely strong quantum fluctuation, and the other is the case with strong thermal fluctuation and weak quantum fluctuation. In both cases, the crossing behavior of the resistivity leading to erroneously assuming the presence of an SIT at zero temperature appears in a finite temperature range, as a consequence of the fact that the Aslamasov-Larkin (AL) term of the dc fluctuation conductivity vanishes in the vortex liquid in zero temperature limit <cit.>. The resistivity curve insensitive to T tends to appear more frequently when the thermal fluctuation is stronger. This paper is organized as follows. We explain the theoretical treatment used in the present work in sec.2. The resulting numerical results on the phase diagram and the resistivity curves are presented in sec.3. Summary of our results is given and relevance to the experimental data are given in sec.4. § THEORETICAL EXPRESSIONS In the unit of k_ B=ħ=1, we start from the partition function Z= Trexp(- S). Here, in the high field approximation where the pair field ψ( r) consists only of the lowest Landau level (LLL) modes ψ_0( r), the action S expressing the Ginzburg-Landau (GL) model takes the form <cit.> S = ∑_ω, p (s ω^2 + γ_0|ω| + ε_0) |ψ̃_0(p; ω)|^2 + g/2 d β^2∫_0^β dτ∫ d^2r |ψ_0( r, τ)|^4. Here, the order parameter field was rescaled so that the dependences on the film thickness d and the temperature T=β^-1 appear only in the quartic term. Further, the order parameter field was expanded in terms of the normalized eigen functions u_p( r) in LLL in the manner ψ_0( r, τ) = ∑_p, ωψ̃_0(p, ω) e^-i ωτ u_p( r), ω is the Matsubara frequency for bosons, and p measures the macroscopic degeneracy in LLL. The microscopic T and H dependences of the positive coefficients s, γ_0, and g are, for simplicity, neglected, and the bare mass ε_0 will be assumed to be linearly dependent on H and T like ε_0 = t-1+h, where h=H/H_c2(0), and t=T/T_c0. The mean field H_c2(T) line is given by ε_0=0. Further, since the ω^2 term in the action S was introduced only to cut off an inessential divergence in the frequency summation, the coefficient s is assumed to be small so that s ≪γ_0^2. The simplest approximation describing reasonably the fluctuation renormalization is the Hartree approximation which is reached through the self-consistent replacement |ψ_0|^4 → 4 ⟨ |ψ_0|^2 ⟩ |ψ_0|^2 in the quartic term, where ⟨ ⟩ denotes the statistical average within the Hartree approximation. 
Then, the fluctuation propagator G_0(p, ω)=⟨ |ψ̃_0(p, ω)|^2 ⟩ is given by 1/[r_0 + γ_0|ω| + s ω^2], where r_0 = ε_0 + g h/πξ_0^2 d β^-1∑_ω1/r_0 + γ_0|ω| + s ω^2, where ξ_0 is the coherence length in zero temperature limit. Note that, according to the BCS theory <cit.>, the mode-coupling strength g is a positive constant of the order of (N(0) T_c0^2)^-1, where N(0) is the density of states of the quasiparticles on the Fermi energy in the normal state. To rewrite the frequency summation into a tractable form, the spectral representation <cit.> 1/r_0 + γ_0|ω| + s ω^2 = 1/π∫_-∞^∞ du ρ(r_0; u)/u - iγ_0 ω will be used, where ρ(r; u) = u/(u^2 + (a r)^2)((us/γ_0^2)^2 + a^-2). This expression (7) of the spectral function is valid when r < γ_0^2/(4s). Then, the coefficient a in eq.(<ref>) is given by a^-1= (1 + √(1 - 4sr/γ_0^2))/2. Since we are interested in the region below H_c2(T)-line where r_0 ≪ 1, the coefficient a will be replaced by unity in the ensuing expressions. Therefore, we will use hereafter the following self-consistent relation on the renormalized mass r_0 of the LLL fluctuation r_0 = ε_0 + 2 ε_ G^(2) h/πγ_0 T_c0∫_0^∞ du coth(u/2 γ_0 T) u/(u^2 + r_0^2)( 1 + (s u/γ_0^2)^2), where H_c2(0) is the depairing field in zero temperature limit, ε_ G^(2) = g T_c0/2 πξ_0^2 d is the Ginzburg-number in 2D, and the identity coth(u/2 γ_0 T) = 2 γ_0 T ∑_ω1/u - iγ_0 ω was used. Note that eq.(<ref>) can be regarded as being a definition of ε_0(r_0) as a function of r_0. Then, we have ∂ε_0(r_0)/∂r_0 = 1 + 2 ε_ G^(2) h/πγ_0 T_c0∫_0^∞ du coth(u/2 γ_0 T) 2 r_0 u/(u^2 + r_0^2)^2. §.§ Free energy Next, the expressions on the free energy density will be derived. Using the identity on the fluctuation free energy F_> ∂ F_>/∂ε_0 = ∑_p, ω G_0(p, ω), the fluctuation free energy density f_> in the vortex liquid regime of a SC thin film with thickness d is simply given by f_> = h/2 π^2 ξ_0^2 d γ_0∫_r_c^r_0 dμ∫_0^∞ dx coth(x/2 γ_0 T) ρ_μ(x) ∂ε_0(μ)/∂μ, where the prefactor proportional to h arises from the degeneracy in LLL. Then, f_> will be expressed in terms of eq.(<ref>) as f_>=f_ G(r) + f_ H, where f_ H = h/2 π^2 ξ_0^2 d γ_0∫_r_c^r_0 dμ∫_0^∞ dx coth(x/2 γ_0 T) ρ_μ(x) (∂ε_0(μ)/∂μ - 1 ). The cut-off r_c will be determined in examining f_ G(r_0) (see below). Regarding the remaining term f_ G(r_0) = f_> - f_ H which is nonvanishing even when g=0, i.e., even in the absence of the mode-couplings, the μ-integral will be performed firstly. Then, f_ G(r_0) takes the form f_ G(r_0) = H/ϕ_0 π d γ_0∫_0^∞ dx 1/1+(sx/γ_0^2)^2 coth(x/2 γ_0 T) [ tan^-1(x/r_c) - tan^-1(x/r_0) ]. Here, to determine the cut-off r_c, we take the thermal limit of eq.(<ref>) in which coth(x/(2 γ_0 T)) is replaced by 2 γ_0 T/x. By comparing it with the corresponding result in ref.16, h T ln(r_0/r_c)/(2 πξ_0^2 d), the cut-off will be chosen hereafter as r_c = πγ_0 T. On the other hand, by making use of eq.(<ref>) determining the T and H dependences of the renormalized mass r_0, f_ H may be rewritten in the following simpler form f_ H = - 1/4g (r - ε_0 )^2. The free energy derived above can be used as the SC fluctuation contribution to the free energy in the normal phase. To determine the 2D quantum melting transition line, the corresponding free energy density f_< in the vortex lattice phase corrected by the Gaussian fluctuations is needed. Within the GL approach, the contribution of the shear elastic energy is smaller <cit.> in the order of the magnitude than that of the amplitude (or, Higgs) mode and hence, will be simply neglected. 
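(As an aside before continuing with the vortex-lattice free energy: the self-consistency relation above, r_0 = ε_0 + [2ε_G^(2)h/(πγ_0T_c0)] ∫_0^∞ du coth(u/2γ_0T) u/[(u^2+r_0^2)(1+(su/γ_0^2)^2)], can be solved by simple root finding. The Python sketch below is not the authors' code; it works in units γ_0 = T_c0 = 1, uses parameter values quoted later in the numerical section (ε_G^(2)=2×10^-4, s=(10^-6γ_0)^2) purely for illustration, and assumes r_0 > 10^-6 when bracketing the root.)

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def hartree_integral(r0, t, s, u_split=50.0):
    # numerical head on [0, u_split] plus an analytic tail where coth -> 1 and
    # the integrand reduces to 1/(u (1 + (s u)^2))
    f = lambda u: u / (np.tanh(u / (2.0 * t)) * (u**2 + r0**2) * (1.0 + (s * u)**2))
    head, _ = quad(f, 0.0, u_split, points=[min(r0, 0.5 * u_split)], limit=300)
    tail = 0.5 * np.log(1.0 + 1.0 / (s * u_split) ** 2)
    return head + tail

def solve_r0(t, h, eps_G=2.0e-4, s=1.0e-12):
    eps0 = t - 1.0 + h                        # bare mass eps_0 = t - 1 + h
    c = 2.0 * eps_G * h / np.pi               # prefactor 2 eps_G^(2) h / (pi gamma_0 T_c0)
    g = lambda r: r - eps0 - c * hartree_integral(r, t, s)
    r_hi = 1.0
    while g(r_hi) < 0.0:                      # bracket the root from above
        r_hi *= 2.0
    return brentq(g, 1.0e-6, r_hi)

print("r_0 at t=0.10, h=0.5:", solve_r0(0.10, 0.5))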
Then, f_< becomes f_< = - 1/2 β_ A( ε_0^2/g - 1/√(2) f_ G(-2 ε_0) ), where β_ A is the Abrikosov factor 1.1596 of the triangular lattice. Using these expressions, the transition line of the 2D vortex lattice melting occurring through not only the thermal but also the quantum fluctuations of the SC order parameter is determined by the relation f_>=f_<. §.§ Fluctuation Conductivity The fluctuation conductivity in the moderately clean case is dominated by the Aslamasov-Larkin (AL) term of the conductivity due to the renormalized SC fluctuation which is expressed in dc limit by <cit.> d R_q σ_ AL = 2 T r_1^2 ∑_ω[ G_0(ω) G_1(ω) ( γ_0 G_0(ω) + γ_1 G_1(ω) ) - γ_0^2 [ G_0(ω)]^2 + γ_1^2 [ G_1(ω)]^2/γ_0 r_1 + γ_1 r_0], where G_n(ω)= 1/(γ_n|ω| + r_n) (see also eq.(15) of Ref.14 and eq.(21) of Ref.22). The time scale γ_1 is the counterpart in the second lowest (n=1) Landau level (LL) fluctuation of γ_0 of the LLL fluctuation, and the tiny ω^2 term introduced in eq.(5) as a cut-off term for the frequency summation is unnecessary in obtaining σ_ AL and hence, has been neglected in eq.(20). As shown previously <cit.>, the renormalized mass r_1 of the n=1 LL fluctuation is renormalized to be 2h deep in the vortex liquid regime in the Hartree approximation. Hereafter, the relations r_1=2h and γ_0=γ_1 will be assumed for simplicity in our numerical analysis. § NUMERICAL RESULTS Now, we will explain typical examples of the resistivity curves following from eqs.(<ref>) and (<ref>) together with the corresponding phase diagrams which follow from eqs.(<ref>) and (<ref>). Below, the coefficient of the ω^2 term of eq.(<ref>) which plays the role of a cutoff on the dissipative dynamics will be chosen as s = (10^-6γ_0)^2 throughout this paper. In our work, the DOS and Maki-Thompson fluctuation terms of the conductivity are not taken into account from the outset based on the well-known fact <cit.> that, in clean limit, those terms and the subleading contribution of the Aslamasov-Larkin term cancel with one another in 2D systems with no Pauli paramagnetic depairing. For this reason, the total dimensionless conductivity R_q d σ_ tot is assumed hereafter to be given by the sum of the leading contribution of the Aslamasov-Larkin term in dc limit, eq.(<ref>), and the dimensionless normal conductivity d R_q σ_ N. Regarding σ_ N, the same model as used in Ref.14, d R_q σ_ N = (1 + (8 π)^-1 ln(T_c0/T))^-1, will be used here to describe a weakly insulating resistivity curve in the normal state of a couple of materials <cit.>. To clarify what are typical consequences originating from strong quantum SC fluctuations, typical results following from two highly different sets of the parameter values will be compared with each other. Below, the strengths of the thermal fluctuation and the quantum fluctuation will be measured, respectively, by ε_ G^(2) = [λ(0)]^2/(d Λ(T_c0)) and ħ/(γ_0 k_ BT_c0), where λ(0) is the magnetic penetration depth at T=0, Λ(T)=ϕ_0^2/(16 π^2 T), and ϕ_0 is the flux quantum <cit.>. Here, we have used the relation between g and λ(0) in the BCS theory <cit.>. First, the results of the phase diagram in a case with moderately strong thermal fluctuation and unusually strong quantum fluctuation are shown in Fig.1 where ħ/(γ_0 k_ BT_c0)= 100 and ε_ G^(2)=2.0 × 10^-4. This ε_ G^(2)-value corresponds to, e.g., the set of the parameter values T_c0=10(K), d=25(A), and λ(0)=330(A). It is found that the melting field H_m(T) is linear in the temperature over a wide field range except close to T_c0. 
Close to T_c0, the quantum fluctuation is negligible so that H_m(T) in the present LLL-GL approach obeys the 2D LLL scaling <cit.> H_m(T) ≃ (T_c0 - T)^2 (see the Inset of Fig.1). Such a large deviation of the melting line from its LLL scaling behavior over the wide field range is a consequence of the strong quantum fluctuation in this case, and the T=0 melting field H_m(0) becomes 0.62 H_c2(0). Figure 2 expresses the resistivity curves ρ(T) at various magnetic fields, H/H_c2(0) = 0.5, 0.55, 0.6, 0.65, 0.66, 0.67, 0.68, 0.69, and 0.7. The two curves in lower fields than H_m(0) are found to become flat, i.e., insensitive to T, below the melting line, while each of other curves in H > H_m(0) simply shows a drop at a temperature without a clear flat portion accompanied. We note that each temperature T_d at which the resistivity starts to drop is much lower than T_c2(H) corresponding to the mean field H_c2(T)-line. For instance, at H=0.66 H_c2(0), T_d/T_c0=0.06, while T_c2/T_c0=0.34 <cit.>. Such a large deviation of T_d from T_c2 is a consequence of strong reduction of σ_ AL (eq.(20)) due to the unusually strong quantum fluctuation assumed in Figs.1 and 2. On the other hand, the flat (i.e., metallic) portion is not clearly seen in those resistivity curves. As will be stressed below, it appears that the flat portion does not become remarkable as far as the thermal fluctuation is not strong enough. Nevertheless, as a consequence of the strong quantum superconducting fluctuation, the so-called fan-shaped T-dependence of the resistivity curves which often leads to assuming the presence of a superconductor to insulator transition (SIT) at T=0 is seen in the field range (H_m(0) <) 0.67 H_c2(0) < H < 0.7 H_c2(0) in spite of the absence of a quantum continuous transition. It seems that these resistivity curves are qualitatively similar to the data in Refs. 5, 8, and 9. Of course, it should be noted that those resistive behaviors explained above in H > 0.6 H_c2(0) are not their genuine low T results. Since there are no quantum transitions above H_m(0) in the present clean limit, all curves of the normalized resistance 1/(d R_q σ_ tot) in H > H_m(0) start to grow at much lower temperatures than 0.01 T_c0 and reduce to their normal values 1/(d R_q σ_ N) on approaching T=0 reflecting the vanishing of σ_ AL at T=0 <cit.>. Next, the case with exceptionally strong thermal fluctuation and a moderate strength of the quantum fluctuation will be considered. In Fig.3 and 4, we have used ε_ G^(2) = 0.12, corresponding to, e.g., the set of the parameter values T_c0=30(K), d=5(A), and λ(0)=2000(A), and ħ/(γ_0 k_ BT_c0)= 1.0. As Fig.3 shows, the vortex liquid regime is expanded particularly at higher temperatures reflecting the large ε_ G^(2), and the melting curve is bent upwardly at low enough temperatures reflecting the relatively weaker quantum fluctuation. Nevertheless, the H_m(0)/H_c2(0)-value is remarkably low, and, in H > 0.2 H_c2(0), we have only the vortex liquid regime at any temperature. In Fig.4, the corresponding resistivity curves are shown. It is noticeable that nearly flat resistivity curves are seen over a wide field range. This is a consequence of the strong thermal fluctuation assumed here. In particular, the flat resistivity curves appear in fields below H_m(0), i.e., the fluctuating vortex solid phase. Here, we stress that, in 2D case, the freezing from the vortex liquid to the vortex solid tends not to be reflected in the resistivity curve. 
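(Given the repeated role of σ_AL in these curves, the dc Aslamasov-Larkin sum of eq.(<ref>) can be evaluated directly once r_0 is known. The sketch below is illustrative only: it sets γ_0=γ_1=γ and r_1=2h as in the text, works in units γ_0=T_c0=1, truncates the Matsubara sum at a large cutoff, and uses a representative r_0 rather than the self-consistent value; its output decreases as T is lowered at fixed r_0, which is the insulating tendency of σ_AL referred to in this discussion.)

import numpy as np

def sigma_AL(T, r0, r1, gamma=1.0, n_max=500_000):
    def bracket(w):                            # w = |omega|, G_n = 1/(gamma |omega| + r_n)
        G0 = 1.0 / (gamma * w + r0)
        G1 = 1.0 / (gamma * w + r1)
        return (G0 * G1 * (gamma * G0 + gamma * G1)
                - (gamma**2 * G0**2 + gamma**2 * G1**2) / (gamma * r1 + gamma * r0))
    w_pos = 2.0 * np.pi * T * np.arange(1, n_max + 1)   # bosonic Matsubara frequencies
    total = bracket(0.0) + 2.0 * np.sum(bracket(w_pos))
    return 2.0 * T * r1**2 * total             # this is d * R_q * sigma_AL

h, r0 = 0.5, 0.05   # r0 is a stand-in; in the full theory it is fixed by the Hartree equation
for t in (0.02, 0.05, 0.1, 0.2):
    print(f"t={t:.2f}  d R_q sigma_AL ~ {sigma_AL(T=t, r0=r0, r1=2.0 * h):.2f}")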
It has been recently clarified through a detailed diagrammatic analysis <cit.> that this feature on the resistivity in 2D case has a theoretical foundation. Even in the resistivity data of Fig.4, a crossing behavior of the resistivity curves is seen at nonzero temperatures. As is seen in the lower figure of Fig.4, the resistivity curves in the field range 0.2 H_c2(0) < H < 0.34 H_c2(0) obey an approximate crossing behavior around H=0.28 H_c2(0) in the temperature range 0.04 T_c0 < T < 0.16 T_c0. Again, this crossing behavior never implies the presence of a genuine quantum transition and is merely a reflection of the insulating behavior of the fluctuation conductivity <cit.> arising from the vanishing of eq.(<ref>) at T=0. § SUMMARY AND DISCUSSION In this work, we have examined possible field v.s. temperature phase diagrams and the corresponding resisitivity curves to be seen in thin films of clean superconductors under a magnetic field perpendicular to the two-dimensional plane. Since a moderately strong fluctuation has been assumed in obtaining those figures, the field range of our interest in which the vortex lattice melting occurs at zero temperature is low enough to neglect the paramagnetic pair-breaking effect. In this situation, the fluctuation conductivity in clean 2D superconductors is given by only the well-known Aslamasov-Larkin term <cit.>. For this reason, we have been able to assume that, even at low temperatures, the total conductivity is the sum of a quasiparticle contribution and the conventional fluctuation conductivity following from a time-dependent GL dynamics. We note that, in a moderately dirty system <cit.> case, the sum of the Maki-Thompson and DOS terms of the fluctuation conductivity has a contribution leading to a negative magnetoresistance in the fluctuation regime <cit.>. Therefore, the absence of such a negative magnetoresistance would play a key role in judging whether the present theory is applicable to experimental data of the resistivity or not. The resistivity curves obtained based on the renormalized fluctuation theory <cit.> are highly dependent on the relative magnitude of the quantum fluctuation to the thermal one. When the thermal fluctuation is of a moderate strength, enhanced quantum fluctuation tends to create fan-shaped resistivity curves R(T), often leading to erroneously assuming the presence of a quantum SIT, in the quantum vortex liquid but far above the vortex lattice melting field in T=0 limit. This type of resistivity data have been reported in several works <cit.>. Further, even in quite a different case where the thermal fluctuation is quite strong, while the quantum fluctuation has a moderate strength, the resistive behavior suggestive of the presence of an apparent SIT is visible in the experimentally measurable temperature range. We conclude that, except the observations in dirty systems <cit.>, the SIT behavior of the resistivity in relatively clean systems is a consequence of the insulating behavior <cit.> of the Aslamasov-Larkin fluctuation conductivity in dc limit in the quantum regime. In the present work, any pinning effect arising from some randomness or defects in the SC material has been neglected. In analyzing resistivity data in thin films, the resistivity drop upon cooling at intermediate temperatures is often modelled according to the empirical thermal activation (TA) (or the so-called Arrhenius) formula. 
Within the GL model, this TA behavior may be conveniently incorporated as an exponential growth in the inverse temperature T^-1 of the coefficient γ_1. As far as the vortex lattice melting transition does not occur due to weak disorder in the material, the present fluctuation theory can be used even for the lower temperature region, in which a flat resistive behavior may be seen, than the region of the vortex liquid in which the TA behavior is seen. In fact, it is interesting to regard a (if any) flat resistivity curve as a consequence of a competition between the insulating fluctuation conductivity <cit.> and an increase of γ_1 on cooling. The present work was supported by a Grant-in-Aid for Scientific Research [Grant No.21K03468] from the Japan Society for the Promotion of Science. 9 Kapi1 D. Ephron, A. Yazdani, A. Kapitulnik, and M. R. Beasley, Phys. Rev. Lett. 76, 1529 (1996). Kapi2 N. Mason and A. Kapitulnik, Phys. Rev. Lett. 82, 5341 (1999). Kapi3 J. A. Chervenak and J. M. Valles, Jr., Phys. Rev. B 61, 9245(R) (2000). nature Y. Qin, C. L. Vicente, and J. Yoon, Phys. Rev. B 73, 100505(R) (2006). Nojima1 Y. Saito, T. Nojima, and Y. Iwasa, Nature Comm. 9, 778 (2018). Tamir I. Tamir, A. Benyamini, E. J. Telford, F. Gorniaczyk, A. Doron, T. Levinson, D. Wang, F. Gay, B. Sacepe, J. Hone, K. Watanabe, T. Taniguchi, C. R. Dean, A. N. Pasupathy, and D. Shahar, Sci. Adv. 5, 3826 (2019). India Surajit Dutta, Indranil Roy, Soumyajit Mandal, John Jesudasan, Vivas Bagwe, and Pratap Raychaudhuri, Phys. Rev. B 100, 214518 (2019). Ienaga K. Ienaga, T. Hayashi, Y. Tamoto, S. Kaneko, and S. Okuma, Phys. Rev. Lett. 125, 257001 (2020). Masonjyanaihou Wei Liu. LiDong Pan, Jiajia Wen, M. Kim, G. Sambandamurthy, and N. P. Armitage, Phys. Rev. Lett. 111, 067003 (2013). Shahar23 A. Haug and D. Shahar, arXiv: 2305.1593. MPAF M. P. A. Fisher, Phys. Rev. Lett. 65, 923 (1990). HP A. F. Hebard and M. A. Paalanen, Phys. Rev. Lett. 65, 927 (1990). IOT R. Ikeda, T. Ohmi, and T. Tsuneto, J. Phys. Soc. Jpn. 58, 3770 (1989). IOT2 R. Ikeda, J. Phys. Soc. Jpn. 72, 2930 (2003). RI96b R. Ikeda, Int. J. Mod. Phys. B 10, 601 (1996). Hikami S. Hikami, A. Fujita, and A. I. Larkin, Phys. Rev. B 44, 10400(R) (1991). Blatter G. Blatter, B. Ivlev, Y. Kagan, M. Theunissen, Y. Volokitin, and P. Kes, Phys. Rev. B 50, 13013 (1994). RI96a R. Ikeda, J. Phys. Soc. Jpn. 65, 33 (1996). deGennes P. G. de Gennes, Superconductivity of Metals and Alloys (Addison Wesley, 1989). Tsuneto E. Abrahams and T. Tsuneto, Phys. Rev. B 11, 4498 (1975). RI90 G. Eilenberger, Phys. Rev. 164,628 (1967). Nunchot N. Nunchot, D. Nakashima, and R. Ikeda, Phys. Rev. B 105, 174510 (2022). Varlamov D. V. Livanov, G. Savona, and A. A. Varlamov, Phys. Rev. B 62, 8675 (2000). Galitski V. M. Galitski and A. I. Larkin, Phys. Rev. B 63, 174506 (2001). com For simplicity, effects of the higher LL modes making the vertical portion of the melting curve in lower fields in the field v.s. temperature phase diagram will be neglected. See T. Saiki and R. Ikeda, Phys. Rev. B 83, 174501 (2011). comhc2 In Ref.5, the H_c2(T)-curve has been determined based on the LLL scaling relation <cit.> formulated by neglecting the quantum fluctuation in spite of the fact that the resistivity curves show the fan-shaped SIT behavior. 
In the phase diagram proposed in Ref.5 (Fig.4 there), the correct H_c2(T)-curve must lie at a much higher temperature at least in higher fields, and it seems to us that their erroneous determination of the H_c2(T)-curve has led to their argument on the presence of a quantum Griffiths state which should not appear in cleaner systems of a type studied in Ref.5. Nunchot2 N. Nunchot and R. Ikeda, unpublished. Gant V. F. Gantmakher, M. V. Golubkov, V. T. Dolgopolov, G. E. Tsydynzhapov, and A. A. Shashkin, JETP Letters 68, 344 (1998).
http://arxiv.org/abs/2307.05173v1
20230711111039
Shot Noise as a Diagnostic in the Fractional Quantum Hall Edge Zoo
[ "Sourav Manna", "Ankur Das", "Moshe Goldstein" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.str-el" ]
[email protected] Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv, 6997801, Israel [email protected] Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Tel Aviv, 6997801, Israel Bulk-boundary correspondence allows one to probe the bulk topological order by studying the transport properties of the edge modes. However, edge modes in a fractional quantum Hall (FQH) state can undergo edge reconstruction; moreover, they can be in the coherent regime or exhibit varying degrees of charge and thermal equilibration, giving rise to a zoo of intriguing scenarios. Even more possibilities arise when a quantum point contact (QPC) is introduced and tuned into a conductance plateau. Distinguishing among the different models and equilibration regimes is an outstanding problem, which cannot be resolved by dc electrical conductance measurement. In this work we show that electrical shot noise at a QPC conductance plateau can serve as such diagnostic. As a prototypical example we consider the ν=2/3 FQH state, and show that different inequalities between the auto- and cross-correlation electrical shot noise hold for different edge models. In particular, our results offer several possible scenarios for the QPC conductance plateaus e^2/3h (observed previously), e^2/2h (recently observed), and 5e^2/9h (our prediction), as well as how to distinguish among them via shot noise. Shot Noise as a Diagnostic in the Fractional Quantum Hall Edge Zoo Moshe Goldstein August 12, 2023 ================================================================== Introduction.— The oldest known examples of topological states of matter are the quantum Hall states in a two-dimensional electron gas (2DEG) subject to a strong magnetic field <cit.>. These gapped bulk phases have chiral edge modes <cit.> carrying both charge and energy <cit.>. In simple cases, such as the Laughlin states <cit.>, these modes can be co-propagating. The situation becomes more interesting when counter-propagating modes appear, either due to topological constraints <cit.> (e.g. hole-conjugate ν=2/3 filling fraction) or due to edge reconstruction <cit.>. Therefore, for a given bulk topological order a number of edge models can be found which are consistent with the bulk-boundary correspondence. Moreover, the edge modes can be coherent or experience different degree of charge and thermal equilibration can exist, which give rise to a rich zoo of scenarios. Distinguishing between them based on an experimentally relevant diagnostic in a single device is an important and interesting avenue. A quantum point contact (QPC), that is, a constriction in 2DEG, is an essential component for manipulating and controlling edge modes. As we make the QPC constriction narrower, the conductance across the QPC can have plateaus; e.g., for ν=2/3 bulk filling, plateaus at e^2/2h <cit.> and at e^2/3h <cit.> were observed experimentally. This leads to even more possibilities for the corresponding edge mode structure, which cannot be resolved by dc electric conductance measurements. In this work we will both enumerate such possibilities and show how they could be distinguished experimentally using purely-electrical means. 
Earlier, it was shown that the shot noise (described later) across the QPC can be used to determine the fractional charge carried by an edge mode <cit.>. However, even though noise is not expected at a QPC conductance plateau, it was observed experimentally <cit.> and discussed theoretically <cit.>. In this work we show how auto- and cross-correlation shot noise can resolve the edge structure and equilibration state. System.— We consider a Hall bar at filling ν interrupted by a QPC with filling ν_i. We denote its four contacts as a source S, on which a dc voltage V_dc is applied, a ground G, and two drains D_1, D_2, and we assume the typical arm length L_A to be much larger than the typical QPC size L_Q (Figs. <ref>, <ref>, <ref>). In addition to the edge modes, dictated by topology, there can be edge reconstruction leading to the introduction of counter-propagating edge modes for each filling. For each edge structure the modes can be in the coherent regime at zero temperature and can be renormalized due to inter-mode interactions and random disorder induced charge tunnelings reaching a renormalization group (RG) fixed point. Also, in each edge structure equilibration can take place at finite temperature. Recent experiments have shown that the charge equilibration length l_eq^ch is typically very short <cit.>, hence full charge equilibration can be assumed in each segment of the device, leading to l_eq^ch≪ L_Q≪ L_A. On the other hand, the thermal equilibration length l^th_eq can be parametrically larger, allowing for three regimes of thermal equilibration: (1) each segment is thermally unequilibrated, L_Q≪ L_A≪ l_eq^th (no), (2) the QPC is thermally unequilibrated while the other segments are thermally equilibrated, L_Q≪ l_eq^th≪ L_A (mixed), and (3) each segment is thermally equilibrated, l_eq^th≪ L_Q≪ L_A (full). For full charge and thermal equilibration, the modes in each segment form a chiral hydrodynamic mode characterized by its electrical and thermal conductances, which eliminates any effect of edge reconstruction. We denote by I_1 and I_2 the currents (correspondingly Q_1 and Q_2 are the charges) entering the drains D_1 and D_2, respectively. The dc current-current auto-correlations are defined as δ^2 I_1=⟨ (I_1 - ⟨ I_1 ⟩)^2 ⟩ in D_1 and δ^2 I_2=⟨ (I_2 - ⟨ I_2 ⟩)^2 ⟩ in D_2, while the cross-correlation is δ^2 I_c=⟨(I_1 - ⟨ I_1 ⟩) (I_2 - ⟨ I_2 ⟩) ⟩ <cit.>. Correspondingly, the correlations in charge fluctuations are δ^2 Q_1, δ^2 Q_2 and δ^2 Q_c. The Fano factors are defined as F_j = |δ^2 I_j|/2e ⟨ I ⟩ t(1-t) = |δ^2 Q_j|/e τ⟨ I ⟩ t(1-t), with j ∈{1,2,c}, where ⟨ I ⟩ is the source current, τ is time, and t = ⟨ I_1 ⟩/⟨ I ⟩ is the QPC transmission <cit.>. The QPC conductance is G_D_1e^2/h, where G_D_1=t⟨ I ⟩τ/e. To make the discussion of different edge model concrete, from now on we focus our attention on the prototypical example of ν=2/3, its QPC conductance plateaus, and shot noise in those plateaus. As mentioned above, we will show that, by measuring both the auto- and cross-correlations of the electrical current across a QPC, one may discern both the edge configuration and its degree of equilibration. ν=2/3 edge models.— We consider the prototypical example of filling ν=2/3 in a QPC (Figs. <ref>, <ref>, <ref>) and take the structure of the bare edge modes as the MacDonald model <cit.>, consisting of two counter-propagating modes having filling factor discontinuities (from bulk to edge) δν = [-1/3,+1]. 
We note that in the coherent regime the MacDonald edge structure fails to be consistent with several experimental observations <cit.>. Subsequently, it was realized that an interplay of the inter-mode interactions and disorder-induced charge tunneling can drive the system into a disorder dominated RG fixed point, known as the Kane-Fischer-Polchinski (KFP) RG fixed point <cit.>, consistent with the experimental observations. Later, a number of experimental observations <cit.> were found which could not be explained by the KFP RG fixed point. To reconcile these experimental observations, a model (reconstructed MacDonald edge <cit.>) was proposed consisting of four counter-propagating modes having δν = [-1/3,+1, -1/3, +1/3], which, in the coherent regime, may give rise to a new Wang-Meir-Gefen (WMG) <cit.> intermediate fixed point. For each edge structure, different equilibration regimes can happen as explained earlier. Emergence of different G_D_1 plateaus from different models and a classification of those based on the shot noise are listed in <ref>. We assume that there is no bulk-leakage <cit.>. The G_D_1=1/2 plateau.— Recent experiments have shown the emergence of an intermediate QPC conductance plateau at G_D_1=1/2 <cit.>. A theoretical explanation for the appearance of it was provided <cit.>, which is similar to an earlier work <cit.>. Here, we show that this plateau may arise due to different mechanisms in either the coherent or equilibrated regimes and show that shot noise can be used to discriminate among them (<ref>). (a) Coherent scenario.— We consider the MacDonald edge structure <cit.>, consisting of counter-propagating e/3 and e charge modes (from bulk to edge) (<ref>(a)). We assume that the contacts are clean, where the modes are non-interacting <cit.>. In each region between a contact and the QPC the e and e/3 modes are renormalized to (2/3 + ϵ)e and ϵ e charge modes, respectively, where ϵ > 0 (KFP region) <cit.>. At the KFP RG fixed point we have ϵ = 0 and the ϵ e mode becomes neutral. At the QPC the e/3 mode is fully backscattered and the e mode is fully transmitted at a plateau having QPC conductance G_D_1. We consider a wavepacket having charge e emanating from S in the charge mode e in time τ. The wavepacket encounters an infinite number of stochastic reflections and transmissions while entering and leaving the KFP regions. The values of those reflection and transmission coefficients are parametrized by the elements of the density kernel matrix <cit.> in each KFP region, which are determined by the conductance matrix <cit.>. The wavepacket leaves the device through either D_1 or D_2. To calculate the total charge reaching different contacts we write down an infinite series of terms, each of which is composed of the following factors: (i) a tunnelling factor for the first entrance to the QPC region from S, (ii) a factor for the shortest path leaving the QPC region to reach a contact (D_1 or D_2), and (iii) a factor giving the contribution of multiple reflections from different KFP regions, where the piece arising from “(iii)" is the same for both D_1 and D_2 <cit.>. This process gives rise to the shot noise, and we note that δ^2 Q_1 = ⟨ Q_1 ⟩ (1- ⟨ Q_1 ⟩), δ^2 Q_2 = ⟨ Q_2 ⟩ (1- ⟨ Q_2 ⟩), and δ^2 Q_c = -⟨ Q_1 ⟩⟨ Q_2 ⟩. Summing up this series we find that to second order in ϵ the source current becomes I ≈ (2/3 + 0.25 ϵ + 2.25 ϵ^2)e/τ, t ≈ (3/4 + 0.281 ϵ + 2.21 ϵ^2), and G_D_1≈ (1/2 + 0.75 ϵ + 3.375 ϵ^2) <cit.>. 
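(The variance relations quoted above are those of a binomial splitting of each charge-e wavepacket between the two drains, as the following Monte Carlo caricature verifies; it is a single-splitting toy with an arbitrary probability p, whereas the full calculation sums the infinite series of reflections described above.)

import numpy as np

rng = np.random.default_rng(1)
p, n_packets = 0.75, 200_000
q1 = (rng.random(n_packets) < p).astype(float)   # charge (in units of e) collected in D1
q2 = 1.0 - q1                                    # the remainder reaches D2

d2Q1 = q1.var()
d2Q2 = q2.var()
d2Qc = np.mean((q1 - q1.mean()) * (q2 - q2.mean()))

print("auto D1 :", d2Q1, " vs <Q1>(1-<Q1>) =", q1.mean() * (1 - q1.mean()))
print("auto D2 :", d2Q2, " vs <Q2>(1-<Q2>) =", q2.mean() * (1 - q2.mean()))
print("cross   :", d2Qc, " vs -<Q1><Q2>    =", -q1.mean() * q2.mean())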
Moreover, F_1 ≈ 2 - 0.75ϵ + 3.375ϵ^2, F_2 ≈ 1.111 - 4.416ϵ+7.375ϵ^2 and F_c ≈ -0.666 +2.25ϵ - 7.875ϵ^2 <cit.>. For ϵ=0 we find the results at the KFP RG fixed point and we note that G_D_1=1/2 matches with the recent experimental observations <cit.>. (b) Equilibration scenario.— In this case, differences in the transport properties can arise depending on the degree of edge equilibration. We assume that the charge transport is ballistic (B), moving “downstream" along each segment of the QPC (<ref>(b)). The nature of heat transport in that segment can be B, diffusive (D), or antiballistic (AB, i.e., “upstream”). <cit.>. In these three regimes we have, respectively, an exponentially suppressed, an algebraically decaying, or a constant shot noise as a function of the geometric length of the segment <cit.>. From now on, we neglect the exponentially suppressed contribution to the shot noise, arising from B heat transport. As contacts S and G are at different potentials, there are potential drops in the device which occur in the regions marked as hot spots H_1, H_2, resulting in Joule heating (<ref>(b)) <cit.>. In principle, there exist two possible hot spots near the drains D_1, D_2. However, the heat generated there cannot flow back to the QPC region, in this configuration, and hence cannot contribute to the noise <cit.>. In addition, four noise spots (M,N,O,P) are formed due to the creation of thermally excited particle-hole pairs and their stochastic splitting into the two drains D_1, D_2 (<ref>(b)) <cit.>. The shot noise is computed by collecting the contributions from M,N,O,P, which are determined by the nature of heat transport in the outer, line, and upper segments (<ref>(b)). The Fano factors are found to be <cit.> F_1 = F_2 = F_O + F_P + F_M + F_N and F_c = F'_O + F'_P - F_M - F_N, where F_α is the contribution from the noise spot α∈{M,N,O,P} and F'_β is the contribution to the cross-correlation from the noise spot β∈{O,P}. We consider three possible edge structures giving rise to a 1/2 QPC conductance plateau. They correspond to three different combinations of {ν,ν_i}, where ν and ν_i are the bulk and QPC filling factors, respectively. We take {ν,ν_i} = { 2/3, 1 } or { 2/3(R), 1 } or { 2/3(R), 1(R) }, where 2/3(R) refers to the reconstructed MacDonald edge and 1(R) denotes edge reconstruction in QPC leading to the QPC filling factor discontinuities (from bulk to edge) δν_i = [+1, -1/3, +1/3] <cit.>. We sum up an infinite series to compute total current at D_1 and thereby find G_D_1=1/2, that is, t=3/4 for all those edge structures; this result is due to the assumed full charge equilibration. For no thermal equilibration we have only B and AB heat transports leading to constant Fano factors (<ref>). For mixed and full thermal equilibration, the heat transport in the outer segment becomes D, and hence the heat, generated at the hot spots H_1, H_2, flows to the contacts very slowly. Thus, the noise spots M, N acquire a √(L_A/l_eq^th) contribution to their temperatures, which is manifested in the shot noise, while the noise spots O, P provide asymptotically constant contributions (<ref>) <cit.>. The G_D_1=5/9 plateau.— Here only a coherent scenario is possible. We consider the reconstructed MacDonald edge structure, consisting of counter-propagating e/3 (“innermost"), e, e/3 and e/3 (“outermost") charge modes (from bulk to edge) <cit.>(<ref>). 
At the QPC, we consider the case when the outermost e/3 mode is fully transmitted, the innermost e/3 mode is fully backscattered, and the remaining modes are renormalized to the vicinity of the KFP RG fixed point <cit.>. The renormalized charge modes become (2/3 + ϵ_3)e and ϵ_3 e, where ϵ_3 > 0 (KFP region). In each region between a contact and the QPC the remaining modes are renormalized to the vicinity of the WMG RG fixed point <cit.>. The renormalized charge modes become (1/3 + ϵ_1 + ϵ_2)e, ϵ_1 e and ϵ_2 e, where ϵ_1 > 0,ϵ_2 > 0 (WMG region). At the RG fixed points we have, respectively, ϵ_3=0 or ϵ_1=ϵ_2=0, and the ϵ_1 e, ϵ_2 e, ϵ_3 e modes become neutral. Similarly to the 1/2 QPC plateau considered before, we write down an infinite series with contributions (i), (ii), and (iii) to calculate the total charge reaching different contacts. Differently from before, here the piece (iii) contains three types of contributions as a factor which arises due to multiple reflections (iiia) among all the contacts, (iiib) between S and D_1, and (iiic) between G and D_2 <cit.>. Summing up all the contributions to first order in ϵ_1,2,3, the source current becomes I ≈[2/3 + 0.55 (ϵ_1 + ϵ_2)]e/τ, hence t ≈[5/6 + 0.5 ϵ_3 - 1.36 (ϵ_1 + ϵ_2)], and G_D_1≈ [5/9 + 0.33 ϵ_3 - 0.44(ϵ_1+ϵ_2)] <cit.>. We also find F_1 ≈[1.866 + 6.48ϵ_3 - 16.41(ϵ_1 + ϵ_2)], F_2 ≈[1.066 + 2.56ϵ_3 - 15.32(ϵ_1 + ϵ_2)] and F_c ≈[-0.266 -1.04ϵ_3 + 4.63(ϵ_1 + ϵ_2)]. For ϵ_1 = ϵ_2 = ϵ_3 = 0 we obtain the results at the RG fixed points. The G_D_1=1/3 plateau.— Earlier experiments have shown the emergence of an intermediate QPC conductance plateau at G_D_1=1/3 <cit.>. Here, we provide its theoretical explanation based on either coherent and equilibrated scenarios and show that shot noise can be used to discriminate among those (<ref>). (a) Coherent scenario.— We consider the renormalized reconstructed MacDonald edge structure (WMG RG fixed point <cit.>), consisting of n_1,n_2, e/3 (“inner") and e/3 (“outer") modes (from bulk to edge), where n_1, n_2 denote the neutral modes (<ref>(a)). A plateau is observed at transmission t=1/2, leading to G_D_1=1/3, when the inner e/3 charge mode is fully backscattered and outer e/3 charge mode is fully transmitted <cit.>. At this transmission plateau, it has been shown earlier that the neutral modes can create particle-hole pairs, which stochastically split and reach different contacts, thus creating current fluctuations in D_1 and D_2 and the Fano factors were found to be F_1=F_2=2/3 <cit.>. Using the same stochastic variable approach, one finds that F_c=-2/3 <cit.>. (b) Equilibration scenario.— We consider two possible edge structure combinations, {ν,ν_i} = { 2/3, 1/3 } or { 2/3(R), 1/3 }. Employing the same techniques as for the 1/2 plateau, we find G_D_1=1/3 leading to t=1/2 for both of these edge structures (<ref>(b)). Again, the equality is due to full charge equilibration. For no thermal equilibration we have constant Fano factors while for mixed and full thermal equilibration, the Fano factors acquire a √(L_A/l_eq^th) contributions (<ref>) <cit.>. Summary and outlook.— One FQH state may feature different edge modes due to reconstruction. Moreover, the modes can be coherent or equilibrated to varying extent. We show that different models can give rise to the same QPC conductance plateau but the models can be distinguished based on shot noise. We have established our claim by studying the ν=2/3 FQH state, and found that different inequalities among the Fano factors hold for different scenarios. 
Our results include several possible scenarios for the recently observed e^2/2h <cit.>, previously observed e^2/3h <cit.> QPC conductance plateaus in experiments and the means to distinguish between them. In addition, we predict a possible 5e^2/9h (only in the coherent regime) QPC conductance plateau. Our scheme is realizable with the present day experimental abilities. The analyses can be extended to other quantum Hall states <cit.>, graphene quantum Hall, and edge reconstructed ℤ_2 topological insulators <cit.>. Recently, Ref. Dima2023 has also discussed the auto- and cross-correlation noise. We thank Yuval Gefen for many illuminating discussions and collaboration on related works. We also thank Christian Glattli, Kun Yang, Michael J. Manfra, and Udit Khanna for their useful discussions. S.M. was supported by the Weizmann Institute of Science, Israel Deans fellowship through Feinberg Graduate School, as well as the Raymond Beverly Sackler Center for Computational Molecular and Material Science at Tel Aviv University. A.D. was supported by the German-Israeli Foundation Grant No. I-1505-303.10/2019, DFG MI 658/10-2, DFG RO 2247/11-1, DFG EG 96/13-1, and CRC 183 (project C01). A.D. also thanks the Israel Planning and budgeting committee (PBC) and the Weizmann Institute of Science, the Dean of Faculty fellowship, and the Koshland Foundation for financial support. M.G. has been supported by the Israel Science Foundation (ISF) and the Directorate for Defense Research and Development (DDR&D) Grant No. 3427/21, and by the US-Israel Binational Science Foundation (BSF) Grant No. 2020072.
http://arxiv.org/abs/2307.04113v1
20230709080545
Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping
[ "Kazuya Nishimura", "Ami Katanaya", "Shinichiro Chuma", "Ryoma Bise" ]
cs.CV
[ "cs.CV" ]
Mitosis Detection from Partial Annotation K. Nishimura et al. Kyushu University, Fukuoka, Japan [email protected] Kyoto University, Kyoto, Japan Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping Kazuya Nishimura1 Ami Katanaya2 Shinichiro Chuma2 Ryoma Bise1 Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023 ========================================================================================== Detection of mitosis events plays an important role in biomedical research. Deep-learning-based mitosis detection methods have achieved outstanding performance with a certain amount of labeled data. However, these methods require annotations for each imaging condition. Collecting labeled data involves time-consuming human labor. In this paper, we propose a mitosis detection method that can be trained with partially annotated sequences. The base idea is to generate a fully labeled dataset from the partial labels and train a mitosis detection model with the generated dataset. First, we generate an image pair not containing mitosis events by frame-order flipping. Then, we paste mitosis events to the image pair by alpha-blending pasting and generate a fully labeled dataset. We demonstrate the performance of our method on four datasets, and we confirm that our method outperforms other comparisons which use partially labeled sequences. Code is available at <https://github.com/naivete5656/MDPAFOF>. § INTRODUCTION Fluorescent microscopy is widely used to capture cell nuclei behavior. Mitosis detection is the task of detecting the moment of cell division from time-lapse images (the dotted circles in Fig. <ref>). Mitosis detection from fluorescent sequences is important in biological research, medical diagnosis, and drug development. Conventionally tracking-based methods <cit.> and tracking-free methods <cit.> have been proposed for mitosis detection. Recently, deep-learning-based mitosis-detection methods have achieved outstanding performance <cit.>. However, training deep-learning methods require a certain amount of annotation for each imaging condition, such as types of cells and microscopy and the density of cells. Collecting a sufficient number of labeled data covering the variability of cell type and cell density is time-consuming and labor-intensive. Unlike cell detection and segmentation, which aims to recognize objects from a single image, mitosis detection aims to identify events from time series of images. Thus, it is necessary to observe differences between multiple frames to make mitosis events annotation. Comprehensively annotating mitosis events is time-consuming, and annotators may be missed mitosis events. Thus, we must carefully review the annotations to ensure that they are comprehensive. Partial annotation has been used as a way to reduce the annotation costs of cell and object detection <cit.>. Fig. <ref> shows an example of partially annotated frames. Some mitosis events are annotated (a red-dotted circle), and others are not (light-blue-dotted circles). The annotation costs are low because the annotator only needs to plot a few mitotic positions. In addition, this style of annotation allows for missing annotations. Therefore, it would be effective for mitosis detection. Unlike supervised annotation, partial annotation can not treat unannotated areas as regions not containing mitosis events since the regions may contain mitosis events (Fig. <ref>). The regions naturally affect the training in the partial annotation setting. 
To avoid the effect of unlabeled objects in unlabeled regions, Qu et al. <cit.> proposed a Gaussian masked mean squared loss, which calculates the loss only around the annotated regions. This loss function works in tasks in which foreground and background features have clearly different appearances, such as cell detection. However, it does not work for mitosis detection, since several non-mitotic cells appear similar to mitotic cells; it produces many false positives. In this paper, we propose a cell-mitosis detection method for fluorescent time-lapse images that generates a fully labeled dataset from partially annotated sequences and then trains a mitosis detection model with the generated dataset. To generate the fully labeled dataset, we must address two problems: (1) there is no label indicating regions that do not contain mitotic cells, and (2) there are few mitosis annotations. We can easily generate regions not containing mitotic cells by using one image twice. However, such regions do not contribute to distinguishing mitotic cells from non-mitotic cells, since the data do not show natural cell motions. For the training to be effective, the regions not containing mitotic cells should show the natural movements of cells. To generate such regions, we propose frame-order flipping, which simply flips the frame order of a consecutive frame pair. As shown in the white rectangles in Fig. <ref>, the flipping operation converts a mitosis event into a cell fusion. Hence, the flipped pair is a region not containing mitosis events. Even though the frame order is flipped, the non-mitotic cells still exhibit natural time-series motion, as shown in the yellow rectangles in Fig. <ref>. In addition, we make the most of the few partial annotations by using a copy-and-paste-based technique. Unlike regular copy-and-paste augmentation <cit.> for supervised augmentation of instance segmentation, which relies on object mask annotations, we only have point-level annotations. Thus, we propose an alpha-blending pasting technique which naturally blends two images. Experiments conducted on four types of fluorescent sequences demonstrate that the proposed method outperforms other methods that use partial labels. Related work Some methods use partially labeled data to train models <cit.>. Qu <cit.> proposed a Gaussian masked mean squared loss, which calculates the loss around the annotated areas. To identify negative and positive samples more accurately, positive unlabeled learning has been used for object detection <cit.>. These methods apply positive unlabeled learning to candidates detected with partial annotation in order to identify whether the candidates are labeled objects or background. However, positive unlabeled learning requires a positive prior, and since the candidates detected with partial annotation include many false positives and the appearances of mitotic events and backgrounds are similar in the mitosis detection task, it is difficult to estimate this prior. These methods therefore do not work well for mitosis detection. § METHOD: MITOSIS DETECTION WITH PARTIAL LABELS Our method aims to detect the coordinates and timing (t, x, y) of mitosis events from fluorescent sequences. For training, we use time-lapse images ℐ = {I_t}_t=1^T and partial labels (a set of annotated mitosis cells). Here, I_t denotes the image at frame t, and T is the total number of frames. 
Our method generates a fully labeled dataset 𝒟_p= { (I'_t-1, I'_t), 𝒫_t' }^T-1_t=1 from time-lapse images ℐ and partial labels and then trains a mitosis detection model f_θ with the generated dataset. Here, I'_t is a generated image, and 𝒫_t' is a set of mitotic coordinates contained in (I'_t-1, I'_t). Since our method trains the network with partial labels, it can eliminate the costs of checking for missed annotations. §.§ Labeled dataset generation Fig. <ref> shows an overview of our dataset generation. We randomly pick a pair of consecutive frames (I_t-1, I_t) from time-lapse images ℐ. Since the pair may contain unannotated mitosis events, we forcibly convert the pair into a negative pair (i.e., a pair which does not contain mitosis events) by using frame-order flipping. Next, we paste mitosis events to a generated pair using alpha-blending pasting and obtain a generated pair (I'_t-1, I'_t). Since we know the pasted location, we can obtain the mitosis locations 𝒫'_t of the generated pair. Negative pair generation with frame-order flipping: In this step, we generate a pair not containing mitotic cells by using a simple augmentation-based frame-order flipping. Fig. <ref> shows an example of the pair images (I_t-1, I_t). The pair may contain mitosis events. If we assume that the pair does not contain mitotic cells, it affects the training of the mitosis detection model f_θ. To prevent the pair from containing mitosis events, we flip the frame order and treat the flipped pair (I_t, I_t-1) as a pair of negative. Since mitosis is the event that a cell divides into two daughter cells, the mitosis event is transformed into an event in which two cells fuse into one by flipping the order (Fig. <ref>). The flipped event can treat as a non-mitotic event. Note that the motivation behind using frame flipping is to be able to utilize pixels showing the motions of non-mitotic cells negatives by transforming mitosis into other events. Even if the order is flipped, the movements of the non-mitotic cell are still a non-mitotic cell feature, and we consider that these cells are effective for the training of the negative label. Mitosis label utilization with alpha-blending pasting: Next, we paste mitosis events to the flipped pair by using copy-and-paste techniques in order to utilize the positive labels effectively. Copy and paste augmentation has been used for supervised augmentation of instance segmentation <cit.>. Unlike instance segmentation with object masks, we only have locations (t, x, y). A simple solution is to crop images around the mitosis position and copy and paste them to the target image, like in CutMix <cit.>. However, the cropped image naturally contains surrounding objects, and the generated image appears unnatural. Unnatural images cause the detection network to make biased predictions and reduce generalization performance. To avoid this problem, we propose alpha-blending pasting with a Gaussian blending mask. We blend two images by leaving the pixel value in the center and blurring the vicinity of the edge of the image. First, we crop the image around the positive annotations and obtain a set of cropped pair 𝒞 = {(C_t-1^i, C_t^i )}^N_i=0 and initialize (I'_t-1, I'_t)=(I_t, I_t-1) and 𝒫_t'= {}. Here, N is the total number of partial annotations, while C_t-1^i and C_t^i are images before and after the mitosis of the i-th annotation (Fig. <ref>). Define I_t'(l⃗^j), I_t-1'(l⃗^j) as a cropped pair image at the j-th random spatial location l⃗^j. 
We crop each image centered at l⃗^j to a size that is the same as that of C_t^i. We update the randomly selected patch I_t'(l⃗^j), I_t-1'(l⃗^j) by blending a randomly selected cropped pair (C_t-1^i, C_t^i) with the following formula: I_t'(l⃗^j) = (1-α) ⊙I_t'(l⃗^j) + α⊙C_t^i, where α is a Gaussian blending mask (Fig. <ref>). We generate the blending mask by blurring a binary mask around the annotation with a Gaussian filter. We use a random sigma value for the Gaussian filter. Then, we add the paste location l⃗^j to the set 𝒫_t'. We repeat this process random k times. §.§ Mitosis detection with generated dataset We modified a heatmap-based cell detection method <cit.> to work as a mitosis detection method. Fig. <ref> is an illustration of our mitosis detection model. Given two consecutive frames (I'_t-1, I'_t), the network output heatmap Ĥ_t. We treat the channel axis as the time axis for the input. The first channel is I'_t-1, and the second is I'_t. First, we generate individual heatmaps H_t^j for each pasted coordinate l⃗^j = (l^j_x, l^j_y). H_t^j is defined as H_t^j(p_x, p_y) = exp( -(l_x^j - p_x) ^2 + (l_y^j - p_y) ^ 2/σ^2), where p_x and p_y are the coordinates of H_t^j and σ is a hyper parameter that controls the spread of the peak. The ground truth of the heatmap at t is generated by taking the maximum through the individual heatmaps, H_t = max_j (H^j_t) (H_t in Fig. <ref>). The network is trained with the mean square error loss between the ground truth H_t and the output of the network Ĥ_t. We can find the mitosis position by finding a local maximum of the heatmap. § EXPERIMENTS Dataset: We evaluated our method on four datasets. The first set is HeLa <cit.>, in which live cell images of HeLa cells expressing H2B-GFP were captured with 1100 × 700 resolution <cit.> [We used the publicly available CTC data-set <http://celltrackingchallenge.net/>. We only use HeLa since the number of mitosis events in other cells is small.]. Each sequence contains 92 fluorescent images with 141 mitosis events on average. The second set is ES, in which live cell images of mouse embryonic stem cells expressing H2B-mCherry were captured with 1024 × 1024 resolution. Each sequence contains 41 fluorescent images with 33 mitosis events on average. The third set is ES-D in which mouse embryonic stem cells expressing H2B-mCherry were induced to differentiate and used to capture live cell images. Each sequence contains 61 fluorescent images with 18 on average events on average. The fourth set is Fib, in which live cell images of mouse fibroblast cells expressing H2B-mCherry were captured with 1024 × 1024 resolution. Each sequence contains 42 fluorescent images with 11 mitosis events on average. Each dataset consists of four sequences of images. We performed four-fold cross-validation in which two sequences were used as training data, one as validation data, and one as test data. As shown in Fig. <ref>, the appearance and density are different depending on the dataset. Implementation details: We implemented our method within the Pytorch framework <cit.> and used a UNet-based architecture <cit.> for the mitosis-detection network. The model was trained with the Adam optimizer with a learning rate of 1e-3. σ, which controls the spread of the heatmap, was 6. The cropping size of the positive annotations was 40 pixels. We randomly change the number of pasting operations k between 1 and 10. We used random flipping, random cropping, and brightness change for the augmentation. 
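For concreteness, the dataset-generation step described above can be summarized by the following minimal NumPy sketch (ours, not the released implementation linked in the abstract). Array shapes, helper names, and the way the blending mask is built (a direct Gaussian profile rather than a Gaussian-blurred binary mask) are illustrative assumptions.

import numpy as np

def gaussian_peak(h, w, sigma):
    # Gaussian profile centered in an (h, w) patch; used both as the blending
    # mask alpha and as a heatmap peak.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / sigma ** 2)

def generate_training_pair(frame_prev, frame_next, mitosis_crops,
                           k_max=10, sigma_heat=6.0, rng=None):
    # frame_prev, frame_next: consecutive frames I_{t-1}, I_t (H x W arrays).
    # mitosis_crops: list of (crop_before, crop_after) patches around annotated
    # mitosis events (e.g., 40 x 40 pixels).
    rng = rng or np.random.default_rng()
    # Frame-order flipping: the flipped pair contains no mitosis events
    # (divisions look like fusions), while non-mitotic motion stays realistic.
    img_a, img_b = frame_next.astype(float), frame_prev.astype(float)
    H, W = img_a.shape
    locations = []
    for _ in range(rng.integers(1, k_max + 1)):
        c_before, c_after = mitosis_crops[rng.integers(len(mitosis_crops))]
        h, w = c_before.shape
        y, x = rng.integers(0, H - h), rng.integers(0, W - w)
        alpha = gaussian_peak(h, w, sigma=rng.uniform(0.3, 0.6) * h)
        # Alpha-blending pasting: keep the patch center, fade toward the edges.
        img_a[y:y+h, x:x+w] = (1 - alpha) * img_a[y:y+h, x:x+w] + alpha * c_before
        img_b[y:y+h, x:x+w] = (1 - alpha) * img_b[y:y+h, x:x+w] + alpha * c_after
        locations.append((y + h // 2, x + w // 2))
    # Heatmap target: maximum over Gaussian peaks at the pasted locations.
    ys, xs = np.mgrid[0:H, 0:W]
    heatmap = np.zeros((H, W), dtype=np.float32)
    for cy, cx in locations:
        peak = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / sigma_heat ** 2)
        heatmap = np.maximum(heatmap, peak)
    return (img_a, img_b), heatmap

The generated two-channel pair and its heatmap would then be fed to the detection network described in the method section.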
Evaluation metrics: We evaluated our method using the F1 score <cit.>, which is widely used in mitosis detection. Given ground-truth coordinates and detected coordinates, we performed one-by-one matching. If the distance of the matched pair was within spatially 15 pixels and temporally 6, we associated the closest coordinate pairs. We treated the matched pair as true positives (TP), unassociated coordinates as false positives (FP), and unassociated ground-truth coordinates as false negatives (FN). Comparisons: We conducted four comparisons that involved training the model with partially labeled data. For the first method, we trained the model by treating unlabeled pixels as non-mitosis ones (Baseline <cit.>). The second method used the Gaussian masked loss (GM <cit.>). The masked loss was calculated on the masked pixels around the positive-label pixels. Thus, the method ignored unlabeled pixels. The third method used positive unlabeled learning to identify mitosis from candidates obtained by the detection model trained with the masked loss (PU <cit.>). The fourth method generated pseudo-labels from the results of positive unlabeled learning and retrained the detection model with the pseudo-label (PU-I <cit.>). In Table <ref>, we compared our method with previous methods in one and five-shot settings. We used N samples per sequence in the N-shot settings. For a robust comparison, we sampled one or five mitosis annotations under five seed conditions and took the average. Overall, our method outperformed all compared methods in F1 metric. GM <cit.>, PU <cit.>, and PU-I <cit.> are designed for detecting objects against simple backgrounds. Therefore, these methods are not suited to a mitosis detection task and are inferior to the baseline. The baseline <cit.> treats unlabeled pixels as non-mitosis cell pixels. In the partially labeled setting, unlabeled pixels contain unannotated mitosis events, and unannotated mitosis affects performance. Unlike cell detection, mitosis detection requires identifying mitosis events from various non-mitotic cell motions, including motions that appear mitotic appearances. Although GM <cit.> can ignore unlabeled mitosis pixels with the masked loss, it is difficult to identify such non-mitosis motions. Therefore, GM estimates produce many false positives. PU <cit.> uses positive unlabeled learning to eliminate false positives from candidates obtained from the detection results with partial labels. However, positive unlabeled learning requires a positive prior in the candidates and a certain amount of randomly sampled positive samples. Since the candidates contain many false positives, the positive prior is difficult to estimate. In addition, there is no guarantee that positive unlabeled learning can work correctly with the selected N-shot annotations. Moreover, since positive unlabeled learning does not work in the mitosis detection task, PU-I <cit.> can not select accurate pseudo labels. Unlike these methods, our method can estimate mitosis events accurately. Since our method generates a fully labeled dataset from a partial label, it effectively uses a few partial annotations. Effectiveness of each module: We performed an ablation study on the HeLa dataset to investigate the effectiveness of the proposed module. We used random augmentation (i.e., random elastic transformation <cit.>, brightness change, and gaussian noise) instead of using frame-order flipping (FOF). We generated I_t^aug by augmenting I_t and input the pair (I_t, I_t^aug) to the network. 
In the w/o ABP setting, we directly pasted cropped images on the target image as in CutMix <cit.>. Table <ref> demonstrates that the proposed modules improve mitosis detection performance. Fig. <ref> shows examples of the estimation results for each condition. Without the FOF setting, the detection model estimates a high value for all moving cells, leading to over-detection. Without the ABP setting, the detection model overfits the directly pasted image. The directly pasted image tends to include unnatural boundaries on the edge, leading to missed detections in real images. Robustness against missing annotations: We confirmed the robustness of the proposed method against missing annotations on the ES dataset. We changed the missing annotation rate from 0% to 30%. A comparison with the supervised method in terms of F1-score is shown in Fig. <ref>. The performance of the supervised method deteriorates as the percentage of missing labels increases, whereas the performance of the proposed method remains steady. Since our method flips the frame order, we can avoid the effects of missing annotations. Appearance of generated dataset: Fig. <ref> shows an example of the generated image pair. The cropped mitosis image pairs were pasted on the red-dotted circle. It can be seen that the borders of the original image and the pasted image have been synthesized very naturally. § CONCLUSION We proposed a mitosis detection method using partially labeled sequences with frame-order flipping and alpha-blending pasting. Our frame-order flipping transforms unlabeled data into non-mitosis labeled data through a simple flipping operation. Moreover, we generate various positive labels with a few positive labels by using alpha-blending pasting. Unlike directly using copy-and-paste, our method generates a natural image. Experiments demonstrated that our method outperforms other methods that use partially annotated sequences on four fluorescent microscopy images. Acknowledgements: This work was supported by JSPS KAKENHI Grant Number JP21J21810 and JST ACT-X Grant Number JPMJAX21AK, Japan. splncs04
http://arxiv.org/abs/2307.04943v1
20230711001558
Dispersive estimates for 1D matrix Schrödinger operators with threshold resonance
[ "Yongming Li" ]
math.AP
[ "math.AP" ]
Department of Mathematics Texas A&M University College Station, TX 77843, USA [email protected] The author was partially supported by NSF grants DMS-1954707 and DMS-2235233. We establish dispersive estimates and local decay estimates for the time evolution of non-self-adjoint matrix Schrödinger operators with threshold resonances in one space dimension. In particular, we show that the decay rates in the weighted setting are the same as in the regular case after subtracting a finite rank operator corresponding to the threshold resonances. Such matrix Schrödinger operators naturally arise from linearizing a focusing nonlinear Schrödinger equation around a solitary wave. It is known that the linearized operator for the 1D focusing cubic NLS equation exhibits a threshold resonance. We also include an observation of a favorable structure in the quadratic nonlinearity of the evolution equation for perturbations of solitary waves of the 1D focusing cubic NLS equation. Dispersive estimates for 1D matrix Schrödinger operators with threshold resonance Yongming Li October 2023 ================================================================================= § INTRODUCTION In this article, we establish dispersive estimates and local decay estimates for the (non-self-adjoint) matrix Schrödinger operators = _0 + = [ -∂_x^2 + μ 0; 0 ∂_x^2 - μ ] + [ -V_1 -V_2; V_2 V_1 ] on L^2() × L^2(), where μ is a positive constant and V_1, V_2 are real-valued sufficiently decaying potentials. The operator is closed on the domain D() = H^2() × H^2(). These matrix operators arise when linearizing a focusing nonlinear Schrödinger equation around a solitary wave. By our assumptions on V_1 and V_2, Weyl's criterion implies that the essential spectrum of is the same as that of _0, given by (-∞,-μ] ∪ [μ,∞). As a core assumption in this paper, we suppose that the edges ±μ of the essential spectrum are irregular in the sense of Definition <ref>. This implies that there exist non-trivial bounded solutions to the equation Ψ⃗_± = ±μΨ⃗_±, see Lemma <ref>. The dispersive estimates for when the thresholds ±μ are regular have been obtained in Section 7-8 of the paper by Krieger-Schlag <cit.>, building on the scattering theory developed by Buslaev-Perel'man <cit.>. See also the recent work of Collot-Germain <cit.>. Our proof is instead based on the unifying approach to resolvent expansions first initiated by Jensen-Nenciu <cit.>, and then further refined in Erdogan-Schlag <cit.> for matrix Schrödinger operators. We also adopt techniques from Erdogan-Green <cit.>, where the authors prove similar dispersive estimates for one-dimensional Dirac operators. §.§ Motivation Our interest in developing dispersive estimates for (<ref>) stems from the asymptotic stability problem for solitary wave solutions to nonlinear Schrödinger (NLS) equations. The NLS equation i∂_t ψ + ∂_x^2 ψ + F(|ψ|^2)ψ = 0, ψ_t ×_x →, appears in many important physical contexts such as the propagation of a laser beam, the envelope description of water waves in an ideal fluid, or the propagation of light waves in nonlinear optical fibers. See, e.g., Sulem-Sulem <cit.> for physics background. Under certain general conditions on the nonlinearity F(·) (see, e.g., <cit.>), the equation (<ref>) admits a parameterized family of localized, finite energy, traveling solitary waves of the form ψ(t,x) = e^itα^2ϕ(x;α), where ϕ(·;α) is a ground state, i.e., a positive, decaying, real-valued solution to the (nonlinear) elliptic equation - ∂_x^2 ϕ + α^2 ϕ = F( ϕ^2)ϕ. 
The existence and uniqueness of these ground state solutions are well-understood, see, e.g., <cit.>, <cit.>. The solitary wave solutions (or simply, solitons) are of importance due to the special role they play for the long-time dynamics of the Cauchy problem (<ref>). Consequently, over the last few decades there has been a significant interest in the study of stability (or instability) of such solitary waves under small perturbations. The primary notion of stability is that of orbital stability, and it is by now well-understood for the NLS equation. The pioneering works in this direction were due to Cazenave-Lions <cit.>, Shatah-Strauss <cit.>, and Weinstein <cit.>; see also <cit.> for the general theory. On the other hand, a stronger notion of stability is that of asymptotic stability. There are two general approaches for the asymptotic stability problem. The first approach is to use integrability techniques, when the underlying partial differential equation is completely integrable and inverse scattering is available. A second approach is perturbative, which means that one studies the dynamics of the nonlinear flow in the neighborhood of the solitary wave, on a restricted set of the initial data. Generally, one starts by decomposing the perturbed solution into a sum of a solitary wave and a dispersive remainder term. For the perturbative approach, dispersive estimates for the linear flow are key. Let us briefly describe the perturbative approach for the NLS equation. To keep our exposition short, we will not take into account any modulation aspects related to the Galilean invariance of the equation. For small α>0, consider the perturbation ansatz ψ(t,x) = e^itα^2(ϕ(x) + u(t,x)) with the ground state ϕ(·) = ϕ(·;α) and the dispersive remainder term u(t,x). The linearization of (<ref>) around the solitary wave e^itα^2ϕ(x) then leads to the following nonlinear partial differential equation i ∂_t u = (- ∂_x^2 + α^2 - V)u + W u+ N, where N = N(ϕ,u,u) is nonlinear in the variables (u,u), and V = F(ϕ^2) + F'(ϕ^2)ϕ^2 and W = F'(ϕ^2)ϕ^2 are real-valued potentials related to the ground state ϕ. Equivalently, the above equation can be recast as a system for the vector U := (u,u)^⊤, which is given by i∂_t U - U = , where is a nonlinear term, and is a matrix Schrödinger operator of the form (<ref>) with the parameters μ = α^2, V_1 = V, and V_2 = W. For the study of asymptotic stability of solitary waves for NLS, it is thus crucial to fully understand the spectral properties of the matrix operator as well as to derive dispersive estimates for the linear evolution operator e^it. One of the key steps in a perturbative analysis is to prove that the dispersive remainder (<ref>) decays to zero in a suitable topology. Let us consider for example, the 1D focusing NLS with a pure power nonlinearity, i.e. i∂_t ψ + ∂_x^2 ψ + |ψ|^2σψ = 0,σ>0. The ground state ϕ(x;1) has an explicit formula for all σ > 0 given by ϕ(x;1) = (σ + 1)^1/2σ^1/σ(σ x), and the linearized operator around e^itϕ(x;1) takes the form _σ = [ -∂_x^2 - (σ+1)^2^2(σ x) + 1 - σ(σ+1)^2(σ x); σ(σ+1)^2(σ x) ∂_x^2 + (σ+1)^2^2(σ x) - 1 ]. For monomial nonlinearities, we may obtain ϕ(x;α) from rescaling by ϕ(x;α) = α^1/σϕ(α x,1). The matrix operators when linearizing around e^itα^2ϕ(x;α) are also equivalent to the matrix operator _σ by rescaling. The spectra for these matrix operators were investigated in <cit.>; see also Section 9 of <cit.>. 
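As a quick worked check (ours, not part of the cited analysis), specialize to the cubic case σ = 1, where the formula above gives ϕ(x;1) = √(2) sech(x) and, with F(s) = s and α = 1, the elliptic equation reads -∂_x^2 ϕ + ϕ = ϕ^3. Using
∂_x^2 sech(x) = sech(x) - 2 sech^3(x),
we obtain
-∂_x^2 ϕ(x;1) + ϕ(x;1) = -√(2)(sech(x) - 2 sech^3(x)) + √(2) sech(x) = 2√(2) sech^3(x) = ϕ(x;1)^3,
so the stated profile is indeed a ground state for the cubic nonlinearity, consistent with the explicit soliton used later for the 1D focusing cubic NLS.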
For σ≥ 2, Krieger-Schlag <cit.> were able to construct finite co-dimensional center-stable manifolds around the solitary waves and prove asymptotic stability using dispersive and Strichartz estimates developed for the evolution operator e^it. However, for the completely integrable case (σ =1), it was shown in <cit.> that the matrix operator _1 exhibits the threshold resonance Ψ(x) = (tanh^2(x),-^2(x) )^⊤ at λ = 1. The dispersive estimates developed in <cit.> do not apply in this case. Furthermore, we note that a key assumption in the papers <cit.>, <cit.>, <cit.>, <cit.> is that the linearized matrix operator does not possess threshold resonances at the edges of the essential spectrum. In these “generic" (regular) cases, it can be shown that the evolution operator enjoy improved decay estimates in weighted spaces; see, e.g., Proposition 8.1 in <cit.>. Thus, a meaningful motivation for this paper is to prove dispersive estimates in the presence of threshold resonances under some general spectral assumptions on the matrix operator , which are applicable to the 1D cubic NLS case (σ=1). We will discuss this particular case briefly in Section <ref>. §.§ Main result We are now in the position to state the main result of this paper. We begin by specifying some spectral assumptions on . (A1) -σ_3 is a positive matrix, where σ_3 is one of the Pauli matrices (c.f. (<ref>)), (A2)L_- := -∂_x^2 + μ - V_1 + V_2 is non-negative, (A3) there exists β>0 such that | V_1 (x) | + | V_2 (x) |≲ e^-(√(2μ)+β)| x | for all x ∈, (A4) there are no embedded eigenvalues in (-∞,- μ)∪(μ,∞). Under these assumptions, we recall the general spectral theory for from <cit.>.[The results in Section 2 of <cit.> are stated for dimension 3, but they in fact hold for all dimensions. Moreover, only a polynomial decay on V_1 and V_2 is assumed in <cit.>. See also <cit.>.] <cit.> Suppose Assumption <ref> holds. The essential spectrum of equals (-∞,-μ] ∪ [μ,∞). Moreover, () = -() = () = (^*), and () ⊂∪ i. The discrete spectrum of consists of eigenvalues {z_j}_j=1^N, 0≤ N < ∞, of finite multiplicity. For each z_j ≠ 0, the algebraic and geometric multiplicities coincide and (-z_j) is closed. The zero eigenvalue has finite algebraic multiplicity, i.e., the generalized eigenspace ∪_k=1^∞(^k) has finite dimension. In fact, there is a finite m ≥ 1 so that (^k) = (^k+1) for all k ≥ m. The symmetry (<ref>) is due to the following commutation properties of , ^* = σ_3 σ_3, - =σ_1 σ_1, with the Pauli matrices σ_1 = [ 0 1; 1 0 ], σ_2 = [ 0 -i; i 0 ], σ_3 = [ 1 0; 0 -1 ]. As a core assumption in this paper, we impose that the thresholds ±μ of the essential spectrum are irregular. (A5) The thresholds ±μ are irregular in the sense of Definition <ref>. This implies that there exist non-trivial bounded solutions Ψ⃗_± = (Ψ_1^±,Ψ_2^±)^⊤ to the equation Ψ⃗_± = ±μΨ⃗_±. (A6) The vanishing (bilateral)-Laplace transform condition holds [V_2Ψ_1^+ + V_1 Ψ_2^+](±√(2μ)) = ∫_-∞^∞ e^∓√(2μ) (V_2 Ψ_1^+ + V_1 Ψ_2^+)(y) y = 0. For details about the characterization of the threshold functions Ψ⃗, we refer the reader to Definition <ref> and Lemma <ref> in Section 4. Due to the commutation identity (<ref>), we have the relation Ψ⃗_+ = σ_1 Ψ⃗_-. We emphasize that assumption (A6) is used to infer that (non-trivial) bounded solutions Ψ⃗_± = (Ψ_1^±,Ψ_2^±) to the equation Ψ⃗_± = ±μΨ⃗_± satisfy Ψ_1^+ = Ψ_2^- ∈ L^∞()∖ L^2(). Let P_d L^2()× L^2() → L^2() × L^2() be the Riesz projection corresponding to the discrete spectrum of , and define P_s := I - P_d. 
We now state the main theorem of this article. Suppose assumptions (A1) – (A6) hold, and let Ψ⃗ = (Ψ_1,Ψ_2) be the L^∞()× L^∞()∖ L^2() × L^2() distributional solution to Ψ⃗ = μΨ⃗, with the normalization lim_x →∞( |Ψ_1(x)|^2 + |Ψ_1(-x) |^2 ) = 2. Then, for any f⃗=(f_1,f_2) ∈() ×(), we have * the unweighted dispersive estimate ‖ e^itP_sf⃗ ‖_L^∞()× L^∞()≲| t |^-1/2‖f⃗ ‖_L^1() × L^1(), ∀ | t |≥ 1, * and the weighted dispersive estimate ‖⟨ x ⟩^-2 (e^itP_s - F_t)f⃗ ‖_L^∞()× L^∞()≲| t |^-3/2‖⟨ x ⟩^2f⃗ ‖_L^1() × L^1(), ∀ | t|≥ 1, where F_tf⃗ := e^itμ/√(-4 π i t)⟨σ_3Ψ⃗,f⃗ ⟩Ψ⃗ - e^-itμ/√(4π i t)⟨σ_3 σ_1Ψ⃗, ⟩σ_1Ψ⃗. We proceed with some remarks on the main theorem: * The estimate (<ref>) is an analogue of the weighted dispersive estimates obtained by Goldberg <cit.> for the scalar Schödinger operator H = -∂_x^2 + V on the real line for non-generic potentials V; see <cit.>. The local decay estimate (<ref>) shows that the bulk of the free wave e^itP_s enjoys improved local decay at the integrable rate (| t |^-3/2), and that the slow (| t |^-1/2) local decay can be pinned down to the contribution of the finite rank operator F_t. Such sharp information can be useful for nonlinear asymptotic stability problems, see also Section <ref> below. * We make some comments on the spectral hypotheses. The assumptions (A1)–(A4) are known to be satisfied by the linearized operator around the solitary wave for the 1D focusing power-type NLS (<ref>). In the case of the 1D focusing cubic NLS (σ = 1), the linearized operator _1 satisfies the assumptions (A1)–(A6); see Section <ref> below. More generally, in Lemma <ref>, we show for matrix operators of the form (<ref>) satisfying assumptions (A1)–(A6) that the edges ±μ of the essential spectrum of cannot be eigenvalues, and that the non-trivial bounded solutions _± = (Ψ_1^±,Ψ_2^±)^⊤ to Ψ⃗_± = ±μΨ⃗_± belong to L^∞∖ L^2 since Ψ_1(x) has a non-zero limit as x →±∞. In this sense, we characterize the solutions Ψ⃗_± as threshold resonances. However, it is not yet clear to the author whether assumption (A6) is strictly needed to show that non-trivial bounded solutions Ψ⃗_± to Ψ⃗_± = ±μΨ⃗_± cannot be eigenfunctions. Moreover, an inspection of the proof of Lemma <ref> reveals that the strong exponential decay assumption (A3) and the vanishing condition assumption (A6) are only used in a Volterra integral equation argument. In all other proofs, we only use some polynomial decay of the potentials V_1 and V_2. * It might be possible to prove Theorem <ref> using the scattering theory developed by <cit.>. However, one major difficulty for this approach is due to the fact that the matrix Wronskian associated with the vector Jost solutions is not invertible at the origin for cases where the matrix operators exhibit threshold resonances. Hence, the vector-valued distorted Fourier basis functions are not immediately well-defined at zero frequency. See Corollary 5.21 and Section 6 in <cit.> for further details. §.§ Previous works In this subsection, we collect references related to dispersive estimates for Schrödinger operators and to the study of the stability of solitary waves. For dispersive estimates for the matrix operator , we refer to Section 5-9 of <cit.> in dimension 1, and to <cit.> in higher dimensions. A comprehensive study on the spectral theory for the matrix operator arising from pure-power type NLS is given in <cit.>. See also <cit.> for related analytical and numerical studies. 
For dispersive estimates for the scalar Schrödinger operators, pioneering works include <cit.>, and we refer to <cit.> for a sample of recent works. Finally, we mention the papers <cit.> on resolvent expansions for the scalar Schrödinger operator. On the general well-posedness theory for the NLS Cauchy problem (<ref>), we refer to the pioneering works <cit.>. Results on the orbital stability (or instability) of solitary waves for the NLS equation were first obtained by <cit.>, and a general theory was established in <cit.>. Subsequent developments for general nonlinearities were due to <cit.>. Regarding the asymptotic stability of solitary waves, the first results were due to Buslaev-Perel'man <cit.>. Subsequent works in this direction were due to <cit.>. For surveys on the stability of solitary waves, we refer to the reviews <cit.> and the monographs <cit.>. §.§ On the solitary wave for the 1D focusing cubic NLS In this subsection, we present two observations related to the asymptotic stability problem for the solitary wave of the 1D focusing cubic NLS. First, we verify that the assumption (A6) holds for the linearized operator around the solitary wave of the 1D focusing cubic NLS. Second, we use the local decay estimate (<ref>) to shed some light on the leading order structure of the quadratic nonlinearity in the perturbation equation for the solitary wave of the 1D focusing cubic NLS. We note that a proof for the asymptotic stability problem has been given by Cuccagna-Pelinovsky <cit.> via inverse scattering techniques. On the other hand, a perturbative proof that does not explicitly rely on the integrable structure has not yet appeared in the literature to the best of the author's knowledge. We now briefly discuss the evolution equation for perturbations of the solitary wave for the 1D focusing cubic NLS. To keep our exposition short, we do not discuss the modulation aspects for the solitary wave. For simplicity, consider the perturbation ansatz ψ(t,x) = e^it(Q(x)+u(t,x)) for the equation (<ref>) (σ = 1). The ground state has the explicit formula Q(x) := ϕ(x;1) = √(2)(x). The evolution equation for the perturbation in vector form u⃗ = (u_1,u_2) :=(u,u̅) is given by i ∂_t u⃗ - ℋ_1u⃗ = () + (u⃗), where _1 = ℋ_0 + 𝒱_1 = [ -∂_x^2 + 1 0; 0 ∂_x^2 - 1 ] + [ -4^2(x) - 2^2(x); 2^2(x) 4^2(x) ], and () := [ - Qu_1^2 - 2Qu_1u_2; Qu_2^2 + 2Qu_1u_2 ], () := [ - u_1^2u_2; u_1u_2^2 ]. Recall from <cit.> that the matrix operator _1 has the essential spectrum (-∞,-1]∪[1,∞), and a four-dimensional generalized nullspace _g(_1) = {[ Q; - Q ], [ (1+x∂_x)Q; (1+x∂_x)Q ], [ ∂_x Q; ∂_x Q ], [ x Q; - xQ ]}, as well as a threshold resonance at +1 given by Ψ⃗≡Ψ⃗_+ := [ Ψ_1; Ψ_2 ] = [ 1-1/2Q^2; -1/2Q^2 ] = [ tanh^2(x); -^2(x) ]. By symmetry, there is also a threshold resonance function at -1 given by Ψ⃗_- = σ_1Ψ⃗_+ = [ -^2(x); tanh^2(x) ]. The eigenfunctions listed in (<ref>) are related to the underlying symmetries for the NLS equation. Note that we have normalized the resonance function Ψ⃗ to satisfy the condition (<ref>) stated in Theorem <ref>. §.§.§ On assumption (A6) for the 1D focusing cubic NLS Our first observation is that the assumption (A6) is satisfied by the matrix operator _1. Let V_1(x) = 4^2(x), V_2(x) = 2^2(x), and (Ψ_1(x),Ψ_2(x)) = (tanh^2(x),-^2(x)). Then, we have ∫_ e^±√(2)y(V_2(y) Ψ_1(y) + V_1(y)Ψ_2(y)) y = 0. We denote the (two-sided) Laplace transform by [f](s) = ∫_-∞^∞ e^-syf(y) y, s ∈, which is formally related to the Fourier transform by [f](s) = √(2π)[f](is). 
By direct computation, (V_1Ψ_2+V_2Ψ_1)(x) = 2^2(x) - 6 ^4(x), and ^4(x) = 2/3^2(x)-1/6∂_x^2(^2(x)). Recall from <cit.> that as equalities in (), [^2](ξ) = √(π/2)ξ/sinh(π2ξ). Hence, using the basic property [-∂_x^2 f](ξ) = ξ^2 [f](ξ) and (<ref>), we obtain [^4](ξ) = 1/6√(π/2)ξ(4+ξ^2)/sinh(π2ξ). As complex functions, we recall that sinh(iz) = i sin(z) and that z ↦z/sin(z) is analytic[to be pedantic, there is a removable singularity at z=0 which we can remove by setting the function z/sin(z) equal to 1 at z=0.] in the strip {s+iσ: s ∈ (-π,π), σ∈}. Thus, by analytic continuation, [V_1 Ψ_2 + V_2 Ψ_1](s) = √(2π)(2 [^2](is) - 6 [^4](is) ) = π s(-2+s^2)/sin(π s2), for any s ∈ with (s) ∈ (-2,2), which in particular proves the vanishing condition (<ref>). The other assumptions (A1)–(A5) for _1 are also satisfied by either checking directly or invoking the results from Section 9 in <cit.>. §.§.§ Null structure for perturbations of the solitary wave of the 1D focusing cubic NLS Due to the slow local decay of the Schrödinger waves in the presence of a threshold resonance, the spatially localized quadratic nonlinearity in (<ref>) may pose significant difficulties for proving decay of small solutions to (<ref>). The weighted dispersive estimate (<ref>) shows that the slow local decay is only due to the finite rank projection F_t. To shed some light on the expected leading order behavior of the quadratic nonlinearity () in (<ref>), it is instructive to insert a free Schrödinger wave _free(t) := e^-itP_s, for some fixed ∈() ×(). By Theorem <ref>, we have _free(t) = c_- e^-it/√(t)[ Ψ_1; Ψ_2 ] + c_+ e^it/√(t)[ Ψ_2; Ψ_1 ] + (t), with c_- = 1/√(-4 π i)⟨σ_3 Ψ⃗, ⟩, c_+ = -1/√(4π i)⟨σ_3 σ_1 Ψ⃗, ⟩, and where the remainder (t) satisfies ‖⟨ x ⟩^-2(t) ‖_L_x^∞() × L_x^∞()≲| t |^-3/2‖⟨ x ⟩^2 ‖_L_x^1() × L_x^1(). Thus, owing to the spatial localization of the quadratic nonlinearity, we have (_free(t)) = c_+^2e^2it/t𝒬_1(Ψ⃗) + c_+c_-/t𝒬_2(Ψ⃗) + c_-^2e^-2it/t𝒬_3(Ψ⃗) + _L^∞(| t |^-2), where 𝒬_1(Ψ⃗) = [ -QΨ_2^2 - 2QΨ_1Ψ_2; Q Ψ_1^2 + 2QΨ_1Ψ_2 ], 𝒬_2(Ψ⃗) = [ - 2QΨ_1Ψ_2 - 2Q(Ψ_1^2+Ψ_2^2); 2QΨ_1Ψ_2 + 2Q(Ψ_1^2+Ψ_2^2) ], 𝒬_3(Ψ⃗) = -σ_1_1(Ψ⃗) = [ -QΨ_1^2 - 2QΨ_1Ψ_2; Q Ψ_2^2 + 2QΨ_1Ψ_2 ]. Due to the critical (| t |^-1) decay of the leading order terms on the right-hand side of (<ref>), it is instructive to analyze the long-time behavior of small solutions to the inhomogeneous matrix Schrödinger equation with such a source term { i∂_t _src- _1 _src = P_s(c_+^2e^2it/t𝒬_1(Ψ⃗) + c_+c_-/t𝒬_2(Ψ⃗) + c_-^2e^-2it/t𝒬_3(Ψ⃗) ), t ≥ 1, _src(1) = 0⃗. . To this end, it will be useful to exploit a special conjugation identity for the matrix Schrödinger operator _1. It was recently pointed out by Martel, see <cit.>, that the matrix operator _1 can be conjugated to the flat matrix Schrödinger operator _0. By first conjugating _1 with the unitary matrix = 1/√(2)[ 1 i; 1 - i ], we obtain the equivalent matrix Schrödinger operator _1 = -i ^-1_1 := [ 0 L_-; -L_+ 0 ] = _0 + := [ 0 -∂_x^2 + 1; ∂_x^2 -1 0 ] + [ 0 - 2^2(x); 6^2(x) 0 ]. Introducing the operator := [ 0 (-∂_x^2+1)S^2; -S^2L_+ 0 ], S := Q ·∂_x · Q^-1 = ∂_x + tanh(x), one has the conjugation identity (see also <cit.>) _1 = _0 . We then transfer the above identity to the matrix operator by setting := ^-1 to obtain the conjugation identity _1 = _0 . Moreover, it can be checked directly that η⃗= 0 for any generalized eigenfunction η⃗∈_g(_1), and this implies that P_d≡ 0, which is equivalent to saying that = P_s. 
Hence, by applying the transformation to the equation (<ref>), we obtain the transformed equation i∂_t _src - _0 _src = (c_+^2e^2it/t𝒬_1(Ψ⃗) + c_+c_-/t𝒬_2(Ψ⃗) + c_-^2e^-2it/t𝒬_3(Ψ⃗) ), where _src := _src is the transformed variable. Note that the above equation features the flat operator _0 on the left. The Duhamel formula for _src(t) at times t ≥ 1 reads _src(t) = -i∫_1^t e^-i(t-s)_0(c_+^2e^2is/s𝒬_1(Ψ⃗) + c_+c_-/s𝒬_2(Ψ⃗) + c_-^2e^-2is/s𝒬_3(Ψ⃗)) s. The flat, self-adjoint, matrix operator _0 has the benefit that the semigroup e^-it_0 can be represented in terms of the standard Fourier transform by the formula (e^-it_0) (x) = 1/√(2π)∫_ e^-it(ξ^2+1)g_1(ξ)e^ixξ ξ e_1 + 1/√(2π)∫_ e^it(ξ^2+1)g_2(ξ)e^ixξ ξ e_2, where = (g_1,g_2)^⊤ and e_1,e_2 are the standard unit vectors in ^2. The profile of _src(t) is given by _src(t) := e^it_0_src(t). Setting _j(Ψ⃗) =: (G_j,1,G_j,2)^⊤ 1≤ j ≤ 3, we have for times t ≥ 1 that ℱ[_src(t)](ξ) =c_+^2 ∫_1^t e^is(ξ^2+3)/sG_1,1(ξ) s e_1 + c_+c_- ∫_1^t e^is(ξ^2+1)/sG_2,1(ξ) s e_1 + c_-^2 ∫_1^t e^is(ξ^2-1)/sG_3,1(ξ) s e_1 + c_+^2 ∫_1^t e^-is(ξ^2-1)/sG_1,2(ξ) s e_2 + c_+c_- ∫_1^t e^-is(ξ^2+1)/sG_2,2(ξ) s e_2 + c_-^2 ∫_1^t e^-is(ξ^2+3)/sG_3,2(ξ) s e_2. The uniform-in-time boundedness in L_ξ^∞ of the Fourier transform of the profile ℱ[_src(t)](ξ) is related to recovering the free decay rate for _src(t). However, in view of the critical decay of the integrand, this requires favorable time oscillations. Observe that the above terms with time phases e^± is(ξ^2+1), e^± is(ξ^2+3) are non-stationary for any s ∈ which implies that they have a better decay rate using integration by parts in the variable s. On the other hand, the terms with the phases e^± is(ξ^2-1) are stationary at the points ξ = ± 1. Thus, it is important to know if the Fourier coefficients G_3,1(±1) and G_1,2(± 1) vanish. Indeed, this is true due to the following lemma. It holds that G_3,1(±1) = G_1,2(±1)= 0. First, to ease notation, we write = i/2[ (-D_1 - D_2) (D_1-D_2); (-D_1 + D_2) (D_1 + D_2) ], where D_1 := (-∂_x^2+1)S^2 = (-∂_x^2+1)(∂_x + tanh(x))(∂_x + tanh(x)), D_2 := S^2L_+ = (∂_x + tanh(x))(∂_x + tanh(x))(-∂_x^2 - 6^2(x) + 1). Since σ_1 = - σ_1 and _3(Ψ⃗)= -σ_1 _1(Ψ⃗) (c.f. (<ref>)), it follows that G_3,1≡ G_1,2 as functions. Note that G_3,1 = i/2( D_1(QΨ_1^2) + D_1(QΨ_2^2) + 2D_1(2QΨ_1Ψ_2) + D_2(QΨ_1^2) - D_2(QΨ_2^2)), where (QΨ_1^2)(x) = √(2)(x)tanh^4(x), (QΨ_1Ψ_2)(x) = -√(2)^3(x)tanh^2(x), (QΨ_2^2)(x) = √(2)^5(x). By using the trigonometric identity ^2(x) + tanh^2(x) =1, we may simplify the expression for G_3,1 into G_3,1(x) = i √(2)/2(D_1((x)-6^3(x)+6^5(x)) + D_2((x) - 2^3(x)) ). By patient direct computation, we find F_1(x) := D_1((x)-6^3(x)+6^5(x)) =192^3(x) - 3456^5(x) + 9720 ^7(x) - 6720^9(x) and F_2(x) :=D_2((x)-2^3(x)) = 48^3(x) -264^5(x) + 240^7(x). Moreover, using the identities (∂_x^2)(x) = (x) - 2^3(x), (∂_x^4)(x) = (x) - 20^3(x)+24^5(x), (∂_x^6)(x) = (x) -182^3(x)+840^5(x)-720^7(x), (∂_x^8)(x) = (x) - 1640^3(x) +23184^5(x)-60480^7(x) + 40320^9(x), we obtain F_1(x) = - 1/6(-∂_x^2 + 3∂_x^4 - 3∂_x^6 + ∂_x^8)(x)= -1/6(-∂_x^2+1)^3(-∂_x^2)(x), and F_2(x) = 1/3(-∂_x^2 + 2∂_x^4 - ∂_x^6)(x) = 1/3(-∂_x^2+1)^2(-∂_x^2)(x). Thus, using the property [-∂_x^2 f] = ξ^2 [f](ξ) and the fact that (ξ)= √(π/2)(πξ/2), we compute that G_3,1(ξ) = i√(2)/2(F_1(ξ)+F_2(ξ))= -i √(π)/12 (ξ^2-1)ξ^2(ξ^2+1)^2(πξ/2), which implies (<ref>) as claimed. We determined the identities (<ref>) – (<ref>) with the aid of the Wolfram Mathematica software. 
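Since the identities above were obtained with computer algebra, we record a short SymPy sketch (ours; purely a sanity check, not part of the proof) that reproduces the sech-power reductions and confirms that the closed-form Fourier coefficient vanishes at ξ = ±1.

import sympy as sp

x, xi = sp.symbols('x xi', real=True)
sech = lambda t: 1 / sp.cosh(t)

# Reproduce the sech-power reductions quoted above; each residual should
# simplify to zero.
targets = {
    2: sech(x) - 2*sech(x)**3,
    4: sech(x) - 20*sech(x)**3 + 24*sech(x)**5,
    6: sech(x) - 182*sech(x)**3 + 840*sech(x)**5 - 720*sech(x)**7,
    8: sech(x) - 1640*sech(x)**3 + 23184*sech(x)**5 - 60480*sech(x)**7 + 40320*sech(x)**9,
}
for order, target in targets.items():
    residual = sp.simplify((sp.diff(sech(x), x, order) - target).rewrite(sp.exp))
    print(order, residual)   # expected: 0 for every order

# The closed-form Fourier coefficient vanishes at xi = +1 and xi = -1.
G31 = -sp.I*sp.sqrt(sp.pi)/12 * (xi**2 - 1) * xi**2 * (xi**2 + 1)**2 * sech(sp.pi*xi/2)
print(G31.subs(xi, 1), G31.subs(xi, -1))   # expected: 0 0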
The above lemma shows that the localized quadratic resonant terms are well-behaved for the nonlinear perturbation equation (<ref>). The presence of this null structure is potentially a key ingredient for a perturbative proof of the asymptotic stability of the solitary wave solutions to the 1D focusing cubic NLS. We end this subsection with the following closing remark. The motivation for analyzing the quadratic nonlinearity in the perturbation equation (<ref>) and for uncovering the null structure for the localized quadratic resonant terms in Lemma <ref> is due to the recent work by Lührmann-Schlag <cit.>, where the authors investigate the asymptotic stability of kink solutions to the 1D sine-Gordon equation under odd perturbations. In <cit.>, the authors employ a similar conjugation identity like the one we used in (<ref>) to transform the scalar Schrödinger operator H_1 := -∂_x^2 -2^2(x) to the flat operator H_0 := -∂_x^2 for the perturbation equation. In fact, it is easy to check that one has the conjugation identity SH_1 = H_0 S, where S = ∂_x + tanh(x). Moreover, an analogue of Lemma <ref> on the non-resonant property for the localized quadratic resonant terms in the perturbation equation for the sine-Gordon kink was first obtained in <cit.>. This remarkable null structure for the sine-Gordon model played a key role in the asymptotic stability proof in <cit.>. In <cit.>, the same authors obtained long-time decay estimates for even perturbation of the soliton of the 1D focusing cubic Klein-Gordon equation. The absence of the null structure in the nonlinearity of the perturbation equation in the focusing cubic Klein-Gordon model is a major obstruction to full co-dimension one asymptotic stability result under even perturbations. Our short discussion on the effects of the threshold resonance on the quadratic term for (<ref>) suggests that the localized quadratic resonant terms are well-behaved for the perturbation equation in the 1D cubic NLS model. However, note that a full perturbative proof of the asymptotic stability problem for this model has to encompass the modulation theory associated to the moving solitary wave, and take into account the long-range (modified) scattering effects due to the non-localized cubic nonlinearities in the perturbation equation. We point out that Collot-Germain <cit.> recently obtained general such asymptotic stability results for solitary waves for 1D nonlinear Schrödinger equations under the assumption that the linearized matrix Schrödinger operator does not exhibit threshold resonances. §.§ Organization of the article The remaining sections of this paper are devoted to the proof of Theorem <ref>. In Section 2, we state a few stationary phase lemmas, which will be heavily utilized in Sections 5 and 6, and we will also provide an analogue of Theorem <ref> for the free matrix operator _0. In Section 3, we employ the symmetric resolvent expansion following the framework in <cit.>, and in Section 4, we carefully extract the leading operators for these resolvent expansions. A characterization of the threshold resonance is stated in Lemma <ref> under the spectral assumptions (A1)–(A6). Then, in Section 5, we prove dispersive estimates for the evolution operator e^it in the low energy regime. The approach taken in Section 5 largely follows the techniques employed in <cit.> for one-dimensional Dirac operators. In Section 6, we prove dispersive estimates for the remaining energy regimes and finish the proof of Theorem <ref>. 
§.§ Notation For any = (f_1,f_2)^⊤, = (g_1,g_2)^⊤∈ L^2() × L^2(), we use the inner product ⟨,⟩ := ∫_f⃗^* g⃗ x = ∫_(f̅_1g_1 + f̅_2 g_2) x,f⃗^* := (f̅_1,f̅_2). The Schwartz space is denoted by () and we use the weighted L^2-spaces X_σ := ⟨ x ⟩^-σL^2() ×⟨ x ⟩^-σL^2(), ‖‖_X_σ := ‖x^σ‖_L^2()× L^2(), σ∈. Note that for any α > β > 0, one has the continuous inclusions X_α⊂ X_β⊂ X_0 = L^2()× L^2() ⊂ X_-β⊂ X_-α, and the duality X_α^* = X_-α. Our convention for the Fourier transform is [f](ξ) = (ξ) = 1/√(2π)∫_ e^-ixξf(x) x, ^-1[f](x) = (x) = 1/√(2π)∫_ e^ixξf(ξ) ξ. We denote by C>0 an absolute constant whose value is allowed to change from line to line. In order to indicate that the constant depends on a parameter, say θ, we will use the notation C_θ or C(θ). For non-negative X, Y we write X ≲ Y if X ≤ CY. We use the Japanese bracket notation ⟨ x ⟩ = (1+x^2)^1/2 for x ∈. The standard tensors on ^2 are denoted by e_1 = [ 1; 0 ], e_2 = [ 0; 1 ], e_11 = e_1e_1^⊤ =[ 1 0; 0 0 ], e_22 = e_2e_2^⊤= [ 0 0; 0 1 ]. Acknowledgments. The author would like to thank his Ph.D. advisor Jonas Lührmann for suggesting the problem and patiently checking the manuscript. The author is grateful to Andrew Comech, Wilhelm Schlag, Gigliola Staffilani, and Ebru Toprak for helpful discussions. § FREE MATRIX SCHRÖDINGER ESTIMATES In this section, we derive dispersive estimates for the free evolution semigroup e^it_0. We recall that the free matrix Schödinger operator _0 = [ -∂_x^2 + μ 0; 0 ∂_x^2 - μ ], has a purely continuous spectrum (_0) = σ_ac(_0) = (-∞,-μ] ∪ [μ,∞), and the resolvent operator of _0 is given by (_0 - λ)^-1 = [ R_0(λ - μ) 0; 0 -R_0(-λ-μ) ], λ∈∖ (-∞,-μ] ∪ [μ,∞), where R_0 is the resolvent operator for the one-dimensional Laplacian, with an integral kernel given by R_0(ζ^2)(x,y) := (-∂^2 - ζ^2)^-1(x,y) = -e^i ζ| x - y|/2i ζ, ζ∈_+, where _+ is the upper half-plane. We obtain from the scalar resolvent theory due to Agmon <cit.> that the limiting resolvent operators (_0 - (λ± i0) )^-1 = lim_↓ 0 (_0 - (λ± i))^-1, λ∈ (-∞,-μ) ∪ (μ,∞), are well defined as operators from X_σ→ X_-σ for any σ > 1/2. Here, the matrix operator _0 is self-adjoint and Stone's formula applies: e^it_0 = 1/2 π i∫_|λ|≥μ e^itλ[(_0 - (λ + i0))^-1 - (_0 - (λ - i0))^-1] λ. Let us focus on the spectrum on the positive semi-axis [μ,∞), as the negative part can be treated using the symmetric properties of (c.f. Remark <ref>). By invoking the change of variables λ↦λ = μ + z^2 with 0< z <∞, the kernel of e^it_0P_s^+ is then given by e^it_0P_s^+(x,y) = e^itμ/π i∫_0^∞ e^itz^2z [(_0 - (μ+z^2+ i0))^-1 - (_0 - (μ+z^2- i0))^-1](x,y) z. Here, the notation P_s^+ means that we restrict the free evolution e^it_0 to the positive semi-axis in the integral representation (<ref>). By (<ref>) and (<ref>), we have (_0 - (μ+z^2± i0))^-1(x,y) = [ ± ie^± i z | x - y |/2 z 0; 0 -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) ], 0 < z < ∞, and thus, e^it _0P_s^+(x,y) = e^it μ/2π∫_ e^it z^2 e^i z | x - y |e_11 z. Note that the above integral is to be understood in the principal value sense, due to the pole in (<ref>). To this end, we recall the following standard stationary phase results. The first lemma is a direct consequence of the classic van der Corput lemma. Let r ∈, and let ψ(z) be a compactly supported smooth function. Then for any | t| > 0, |∫_ e^itz^2 + i z rψ (z) z |≤ C | t |^-1/2‖∂_zψ‖_L_z^1(). Moreover, if ψ(z) is supported away from zero, then for all | t | > 0, |∫_ e^itz^2 + i z rψ (z) z |≤ C | t |^-3/2‖ [∂_z^2 + i r ∂_z](ψz) ‖_L_z^1(). 
The bound (<ref>) follows from the van der Corput lemma (see e.g. <cit.>) by observing that the phase ϕ(z) = z^2 + zr/t satisfies |∂_z^2 ϕ(z)| = 2>0. The last bound follows by first integrating by parts ∫_ e^itz^2e^i z rψ (z) z = -1/2 i t∫_ e^itz^2∂_z [e^iz rψ(z)/z] z =-1/2 i t∫_ e^itz^2 + iz r[ir + ∂_z][ψ (z)/z] z, and then invoking the van der Corput lemma. We will also need the following sharper stationary phase lemma, which may be found in many text on oscillatory integrals with a Fresnel phase. Let χ(z) be a smooth, non-negative, even cut-off function such that χ(z) = 1 for z ∈ [-1,1] and χ(z) = 0 for | z |≥ 2. For r, t ∈, define G_t(r) := ∫_ e^itz^2+izrχ(z^2) z. Then there exists C = C(‖χ(z^2)‖_W^4,1()) >0 such that for any r ∈ and for any | t | > 0, | G_t(r) - √(π)/√(-it) e^-ir^2/4t|≤ C | t |^-3/2⟨ r ⟩. Moreover, if r_1, r_2 ≥ 0, then | G_t(r_1+r_2) - √(π)/√(-it) e^-ir_1^2/4te^-ir_2^2/4t|≤ C | t |^-3/2⟨ r_1 ⟩⟨ r_2 ⟩. First, the phase ϕ(z) := z^2 + zr/t has a critical point at z_* = -r/2t∈ with ϕ”(z) = 2 > 0. We use Taylor expansion of ϕ(z) and shift the integral by the change of variables z ↦ z + z^* to obtain G_t(r) = ∫_ e^itϕ(z)χ(z^2) z = ∫_R e^itϕ(z^*)+ ϕ”(z_*)(z-z_*)^2χ(z^2) z = e^-ir^2/4t∫_ e^itz^2χ((z+z_*)^2) z. Using the Fourier transform of the free Schrödinger group and the Plancherel's identity, we have ∫_ e^itz^2χ((z+z_*)^2) z = 1/√(-2 i t)∫_ e^-iξ^2/4t_z →ξ[χ((z+z_*)^2)](ξ) ξ = 1/√(-2 i t)∫__z →ξ[χ((z+z_*)^2)](ξ) ξ + 1/√(-2 i t)∫_(e^-iξ^2/4t-1) _z →ξ[χ((z+z_*)^2)](ξ) ξ = √(2π)/√(-2it)χ(z_*^2) + 1/√(-2 i t)∫_(e^-iξ^2/4t-1) e^iz_*ξ[χ((z+z_*)^2)](ξ) ξ. Using the bound | e^iξ^2/4t-1|≤ C| t |^-1ξ^2 and the Hölder's inequality, we bound the remainder term by |1/√(-2 i t)∫_(e^-iξ^2/4t-1) e^iz_*ξ[χ((z+z_*)^2)](ξ) ξ| ≤ C | t|^-3/2∫_|ξ^2 [χ(z^2)](ξ) | ξ ≤ C | t |^-3/2‖χ(z^2)‖_W^4,1()≤ C | t |^-3/2. Next, we use the fact that | 1 - χ(z^2) |≤ C | z | for all z ∈ and for some C>0 large enough so that | 1-χ(z_*^2)|≤ C | z_* |≤ C | t |^-1⟨ r ⟩. Then (<ref>) follows (<ref>)–(<ref>). Finally, we use the estimate (<ref>) to obtain | G_t(r_1+r_2) - √(2π)/√(-2it) e^-i(r_1-r_2)^2/4t|≤ C | t |^-3/2⟨ r_1 - r_2 ⟩≤ C | t |^-1⟨ r_1 ⟩⟨ r_2 ⟩. Thus, by the triangle inequality and the bound | e^-i(r_1-r_2)^2/4t - e^-ir_1^2/4te^-ir_2^2/4t| = | e^-ir_1^2/4te^-ir_2^2/4t|| e^ir_1 r_2/2t - 1 |≤ C | t |^-1⟨ r_1 ⟩⟨ r_2 ⟩, we conclude (<ref>). Next, we prove the analogue of Theorem <ref> for the free evolution. We emphasize that the free matrix Schrödinger operator _0 has threshold resonances _0 e_1 = μe_1 and _0 e_2 = -μe_2. For any u⃗ = (u_1,u_2) ∈() ×() and for any | t|≥ 1, we have ‖ e^it_0 P_s^+ u⃗ ‖_L_x^∞× L_x^∞≲| t |^-1/2‖u⃗ ‖_L_x^1 × L_x^1, and ‖⟨ x ⟩^-1( e^it_0P_s^+ - F_t^0)u⃗ ‖_L_x^∞× L_x^∞`≲| t |^-3/2‖⟨ x ⟩u⃗ ‖_L_x^1 × L_x^1, where F_t^0(x,y) := e^it μ/√(-4π i t)e^-ix^2/4te_1e^-iy^2/4te_1^⊤. We first begin by splitting the evolution operator into low and high energy parts[Symbols like χ(_0 - μ I) are only used in a formal way to represent the cut-off χ(z^2) in the z-integrals, where they arise.]: e^it_0P_s^+(x,y) = e^it_0χ(_0 - μ I)P_s^+(x,y) +e^it_0(1-χ(_0 - μ I))P_s^+(x,y) = e^itμ/2π∫_ e^itz^2+iz| x - y |χ(z^2) z e_11 + e^itμ/2π∫_ e^itz^2+iz| x - y | (1-χ(z^2)) z e_11, where χ(z) is a standard smooth, even, non-negative cut-off function satisfying χ(z) = 1 for | z |≤ 1 and χ(z) =0 for | z |≥ 2. In the high energy part in (<ref>), following the ideas from <cit.> <cit.>, we prove the estimate |∫_ e^itz^2+iz| x - y | (1-χ(z^2))dz |≲min{| t |^-1/2,| t |^-3/2⟨ x ⟩⟨ y ⟩}. 
For a more rigorous treatment, we instead use a truncated cutoff χ_L(z) = (1-χ(z^2))χ(z/L), where L ≥ 1, and we prove the uniform estimate sup_L ≥ 1|∫_ e^itz^2 + iz| x - y |χ_L(z) z |≤ Cmin{| t |^-1/2,| t |^-3/2⟨ x ⟩⟨ y ⟩}, with a constant C>0 independent of L. This estimate will imply (<ref>). Indeed for any | t | >0, by the Plancherel's identity, we have sup_a ∈|∫_ e^itz^2+iazχ_L(z) z | = sup_a ∈|∫_^-1[e^itz^2+iaz] (ξ ) [χ_L(z)](ξ) ξ|≤ C| t |^-1/2‖[χ_L]‖_L_ξ^1(). Here, we use that the Fourier transform of the tempered distribution e^itz^2+iaz has | t |^-1/2 decay. Using the definition of χ_L, the scaling properties of the Fourier transform, and Young's convolution inequality, we obtain ‖[χ_L]‖_L_ξ^1() ≤‖[χ(z/L)]‖_L_ξ^1() + ‖[χ(z/L)]‖_L_ξ^1()‖[χ(z^2)]‖_L_ξ^1() ≤ C ‖ L [χ](Lξ)‖_L_ξ^1() = C ‖[χ](ξ)‖_L_ξ^1()≤ C ‖χ‖_W^2,1()≲ 1. For the high-energy weighted dispersive estimate, we use integration by parts to find that |∫_ e^itz^2e^iz | x - y |)χ_L(z) z |≤ C | t |^-1|∫_ e^itz^2∂_z( e^iz| x - y | z^-1χ_L(z) ) z|. When the derivative falls onto e^iz | x - y |, the weights ⟨ x ⟩⟨ y ⟩ appear, whereas the term z^-1χ_L(z) is smooth since χ_L is compactly supported away from the interval [-1,1]. By following the previous argument, we conclude the (| t |^-3/2⟨ x ⟩⟨ y ⟩) bound for (<ref>) in the high-energy regime. Next we turn to the low-energy estimates. For the low-energy unweighted estimate, we employ Lemma <ref> to obtain |∫_ e^itz^2+iz| x -y |χ(z^2) z |≤ C | t |^-1/2‖∂_z χ(z^2) ‖_L^1()≤ C | t |^-1/2. On the other hand, for the low-energy weighted estimate, we observe that by Lemma <ref>, |∫_ e^itz^2+iz| x -y |χ(z^2) z - √(2π)/√(-2it) e^-ix^2/4te^-iy^2/4t|≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩. Hence, using that e_11 = e_1e_1^⊤, we arrive at the kernel estimate | e^it_0χ(_0 - μ)P_s^+(x,y) - F_t^0(x,y) |≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩, where F_t^0 is given by (<ref>). Thus, by combining the high energy bounds (<ref>) and the low energy bounds (<ref>) - (<ref>), we conclude the dispersive estimates (<ref>) and (<ref>). § SYMMETRIC RESOLVENT IDENTITY By assumption (A1), we can factorize the matrix potential = -σ_3 v v = v_1 v_2, with v_1 = -σ_3 v := [ -a -b; b a ] v_2 = v := [ a b; b a ], where a := 1/2(√(V_1+V_2) + √(V_1 - V_2)) b := 1/2(√(V_1+V_2) - √(V_1 - V_2)). It will be helpful in later sections to keep in mind that V_1 = a^2 + b^2, V_2 = 2ab. We denote the resolvent of = _0 + by (-z)^-1 for z ∈ρ(). The resolvent identity states that ( - z)^-1 = (I+(_0 -z)^-1)^-1(_0-z)^-1, ∀ z ∈ρ(_0) ∩ρ(). This identity was used in <cit.> to establish that there is a limiting absorption principle for the resolvent of on the semi-axes (-∞,-μ)∪ (μ,∞) in the weighted L^2-spaces X_σ→ X_-σ, σ>1/2. Note that the lemma below applies in any spatial dimension. (<cit.>, see also the proof in <cit.>) Suppose assumptions (A1) – (A4) hold. Then, the following holds. * For σ > 1/2, and |λ| > μ, the operator (_0 - (λ± i0))^-1: X_-σ→ X_-σ is compact and I + (_0 - (λ± i0))^-1 is boundedly invertible on X_-σ. * For σ>1/2 and λ_0>μ arbitrary, we have sup_|λ|≥λ_0, > 0|λ|^1/2‖( - (λ± i))^-1‖_X_σ→ X_-σ<∞. * For |λ| > μ, define ( - (λ± i0) )^-1 := (I+ (_0 - (λ± i0))^-1)^-1(_0 -(λ± i0) )^-1. Then, as ↘ 0, ‖( - (λ± i) )^-1 - ( - (λ± i0) )^-1‖_X_σ→ X_-σ⟶ 0 for any σ > 1/2. We recall the following spectral representation of e^it from <cit.>. 
(<cit.>) Under assumptions (A1) – (A6), there is the representation e^it= 1/2π i∫_|λ|≥μ e^itλ[( - (λ+i0))^-1 - ( -(λ - i0))^-1] λ + ∑_j e^itP_z_j, where the sum runs over the entire discrete spectrum and P_z_j is the Riesz projection corresponding to the eigenvalue z_j. The formula (<ref>) and the convergence of the integral are to be understood in the sense that if ϕ,ψ∈ [W^2,2() × W^2,2()] ∩ [⟨ x⟩^-1- L^2() ×⟨ x⟩^-1- L^2()], then ⟨ e^itϕ,ψ⟩ = lim_R →∞1/2π i∫_R ≥|λ|≥μ e^itλ⟨[(-(λ+i0))^-1-( - (λ - i0))^-1]ϕ,ψ⟩ λ + ∑_j⟨ e^itP_z_jϕ,ψ⟩ , for all t ∈. We write P_s = P_s^+ + P_s^-, where the signs ± refer to the positive and negative halves of the essential spectrum (-∞,-μ]∪ [μ,∞). In the following sections, we will focus on the analysis on the positive semi-axis part of the essential spectrum. We can treat the negative semi-axis of the essential spectrum by taking advantage of the symmetry properties of , see Remark <ref> below. In view of the spectral representation of e^it from Lemma <ref>, we use the change of variables λ↦λ = μ+z^2 with 0<z<∞ to write e^itP_s^+ = e^itμ/π i∫_0^∞ e^itz^2 z [( - (μ + z^2 + i0))^-1 - ( - (μ + z^2 - i0))^-1] z. For the upcoming dispersive estimates, it is convenient to first open up the domain of integration for the above integral to the entire real line by means of analytic continuation for the perturbed resolvent. Following the framework of Section 5 in <cit.>, we introduce the operator (z) := ( - (μ + z^2 + i0))^-1, for z>0, (z) := ( - (μ + z^2 - i0))^-1 = ( - (μ + z^2 + i0))^-1, for z<0, so that e^itP_s^+ = e^itμ/π i∫_ e^itz^2 z (z) z. Here, the integral should be understood in the principal value sense due to the pole associated with the resolvent (z) at the origin. We also set _0(z) := (_0 - (μ + z^2 + i0))^-1, for z>0, _0(z) := (_0 - (μ + z^2 + i0))^-1, for z<0. In particular, with this definition, we have by (<ref>) for all z ∈∖{0} that _0(z)(x,y) = (_0 - (μ + z^2 + i0))^-1(x,y) = [ ie^ i z | x - y |/2 z 0; 0 -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) ]. As in <cit.>, we employ the symmetric resolvent identity (z) = _0(z) - _0(z)v_1(M(z))^-1v_2_0(z), where M(z) = I + v_2 _0(z)v_1, z ∈∖{0}. By inserting the above identity, one checks that e^itP_s^+ = e^itμ/π i∫_ e^itz^2zℛ_0(z) z - e^itμ/π i∫_ e^itz^2zℛ_0(z)v_1 (M(z))^-1v_2ℛ_0(z) z. In the next section, we will investigate the invertibility of the matrix operator M(z) near the origin. We give the following remark for the evolution operator in the negative part of the essential spectrum. Using the identities = -σ_1 σ_1, = -σ_1 σ_1, we infer that e^itP_s^- = σ_1 e^-itP_s^+σ_1. Furthermore, since these identities also hold for _0, the analogue of Proposition <ref> for the weighted estimate of the free evolution e^it_0P_s^- is given by ‖⟨ x ⟩^-1( e^it_0P_s^- - F_t^0) ‖_L_x^∞≤ C | t |^-3/2‖⟨ x ⟩ ‖_L_x^1, | t |≥ 1, where F_t^0(x,y) := e^-it μ/√(4π i t)e^i x ^2/4te_2e^i y^2/4te_2^⊤. Note that F_t^0 = σ_1 F_-t^0 σ_1. § LAURENT EXPANSION OF THE RESOLVENT NEAR THE THRESHOLD In this section we study asymptotic expansions of the perturbed resolvent operators near the thresholds of the essential spectrum, closely following the framework of the seminal paper <cit.> for the scalar Schrödinger operators H = -∂_x^2 + V on the real line. As specified in the introduction, we are interested in the irregular case, where the matrix Schrödinger operator exhibits a threshold resonance. See Definition <ref> for a precise definition. This means that there exist globally bounded non-trivial solutions of Ψ = ±μΨ. 
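Before continuing, we record an elementary consistency check of the kernel formula for _0(z) displayed above; this is a routine verification included only for the reader's convenience and is not used elsewhere. Writing κ := √(z^2+2μ), one has
∂_x e^{iz| x - y |} = iz sgn(x-y) e^{iz| x - y |},
and, since ∂_x sgn(x-y) = 2δ(x-y) in the sense of distributions,
∂_x^2 e^{iz| x - y |} = -z^2 e^{iz| x - y |} + 2iz δ(x-y), ∂_x^2 e^{-κ| x - y |} = κ^2 e^{-κ| x - y |} - 2κ δ(x-y).
Consequently,
(-∂_x^2 - z^2)[ i e^{iz| x - y |}/(2z) ] = δ(x-y), (∂_x^2 - (z^2+2μ))[ -e^{-κ| x - y |}/(2κ) ] = δ(x-y),
which is precisely the resolvent property encoded in the two diagonal entries of the kernel of _0(z).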
In this context, we mention that the threshold regularity can also be characterized by the scattering theory introduced by <cit.>; see Lemma 5.20 of <cit.>. We begin with the terminology used in <cit.>. We say an operator A L^2()× L^2() → L^2()× L^2() with an integral kernel A(x,y)∈^2 × 2 is absolutely bounded if the operator with the kernel | A(x,y) | := (| A(x,y)_i,j|)_i,j=1^2∈^2 × 2 is bounded from L^2()× L^2() → L^2()× L^2(). In particular, Hilbert-Schmidt and finite rank operators are absolutely bounded. To investigate the asymptotic expansions of the operator M(z) (c.f. (<ref>)), we start with the following Taylor expansions of the free resolvent around the origin z=0. Let z_0 := min{1,√(2μ)}. For any 0 < | z| < z_0, we have the following expansion _0(z)(x,y) = i/2ze_11 + _0(x,y) + z_1(x,y) + E(z)(x,y) where _0(x,y) := [ - | x - y |/2 0; 0 - e^-√( 2μ)| x - y |/2√(2μ) ], _1(x,y) := [ | x - y |^2/4i 0; 0 0 ], and E(z) is an error term which satisfies the estimate | z|^k |∂_z^k E(z)(x,y)|≤ C_μ,k | z |^2 ⟨ x ⟩^3+k⟨ y ⟩^3+k, ∀ k=0,1,2, for any | z | < z_0. Recall from (<ref>) that _0(z)(x,y) = [ ie^i z | x - y |/2 z 0; 0 -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) ]. For 0 < | z | < 1, we have the Laurent expansion i e^i z | x -y |/2 z = i/2 z + -| x - y |/2 + | x - y |^2/4iz + r_1(z,| x - y |), where the remainder term is r_1(z,| x - y|) := i/2z_1(z,| x - y|), _1(z,| x - y|) := (iz| x - y |)^3/2!∫_0^1 e^isz | x - y | (1-s)^2 s. By direct computation, for any x, y ∈ and for any | z | <1, we have the estimate | z |^k |∂_z^k r_1(z,| x - y |) |≲| z |^2 ⟨ x⟩^3+k⟨ y ⟩^3+k, k=0,1,2. In the lower component of the resolvent kernel, for | z | < 2μ, we have the Taylor expansion -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) = -e^-√(2μ)| x - y|/2√(2μ) + r_2(z,| x - y |), where we denote the remainder term by r_2(z,| x - y |) := z^2/2!∫_0^1 (1-s) (∂_z^2 g_μ)(sz,| x - y |) s, g_μ(z,| x - y |) := -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ). Using the fact that for any η∈, ⟨η⟩ := (1+η^2)^1/2, one has the bounds |∂_η^k ⟨η⟩^-1|≤ C_k ⟨η⟩^-1-k |∂_η^k ⟨η⟩|≤ C_k ⟨η⟩^1-k, k =0,1,2,…, it follows that all derivatives of √(z^2+2μ) and 2(z^2+2μ)^-1/2 are uniformly bounded in z up to a constant depending only on μ and the number of derivatives. Therefore, by the Leibniz formula, we have the estimate sup_z ∈|∂_z^k g_μ(z,| x- y |) |≤ C_μ,k⟨ x ⟩^k ⟨ y ⟩^k, k=0,1,…,4, which in turn implies that | z |^k|∂_z^k r_2(z,| x - y |) |≲| z |^2 ⟨ x ⟩^2+k⟨ y ⟩^2+k, k=0,1,2. Thus, by using (<ref>) and (<ref>), the error term given by E(z)(x,y) := [ r_1(z, | x - y |) 0; 0 r_2(z,| x - y |) ] satisfies (<ref>) as claimed. We insert the above asymptotic expansion into the operator M(z) = I + v_2_0(z)v_1. First, we have the transfer operator T on L^2() × L^2() with a kernel given by T(x,y) = I + v_2(x) _0(x,y) v_1(y). Note that T is self-adjoint because (v_2_0v_1)^* = v_1^*_0 v_2 = (-vσ_3)_0v = v_0(-σ_3v) = v_2_0v_1. Since the potentials v_1 and v_2 have exponential decay by assumption (A3), it follows that v_2_0 v_1 is a Hilbert-Schmidt operator on L^2() × L^2(). Hence, T is a compact perturbation of the identity, and therefore the dimension of (T) is finite by the Fredholm alternative. Recalling the formulas for v_1 and v_2 from (<ref>), we have the identity v_2 e_11 v_1 = -[ a 0; b 0 ][ a b; 0 0 ] = - [ a; b ][ a b ]. Next, we define the orthogonal projection onto the span of the vector (a,b)^⊤∈ L^2() × L^2() by P[ f_1; f_2 ](x) := ∫_( a(y)f_1(y) + b(y)f_2(y)) y/‖ a^2 + b^2 ‖_L^1()[ a(x); b(x) ] = 1/‖ V_1 ‖_L^1()⟨ (a,b)^⊤, f⃗ ⟩[ a(x); b(x) ]. 
Note that we use the identity (<ref>) above. From (<ref>), the contribution of the singular term i/2ze_11 of _0(z) to M(z) will be associated to the following integral operator with the kernel i/2zv_2(x)e_11v_1(y) = - i/2z[ a(x); b(x) ][ a(y) b(y) ] =: g(z)P(x,y), where g(z) := -i/2z‖ V_1 ‖_L^1(). Lastly, we denote the orthogonal projection to the complement of the span of (a,b)^⊤ by Q := I - P. In summary, we have the following proposition. Suppose | a(x) |, | b(x) |≲⟨ x ⟩^-5.5-, and let z_0 := min{1,√(2μ)}. Then, for any 0<| z | < z_0, we have M(z) = g(z)P + T + zM_1 + _2(z), where M_1 and _2(z) are Hilbert-Schmidt operators on L^2() × L^2() defined by M_1(x,y) := v_2(x)_1(x,y)v_1(y) = | x - y |^2/4i[ a(x); b(x) ][ a(y) b(y) ], _2(z)(x,y) := v_2(x)E(z)(x,y)v_1(y), with G_1 and E(z) defined in Lemma <ref>. Moreover, the error term _2(z) and its derivatives satisfy the absolute bound | z |^k ‖|∂_z^k _2(z) |‖_L^2() × L^2() → L^2() × L^2()≲| z |^2, k =0,1,2, for all | z | < z_0. The identity on the right of (<ref>) follows from (<ref>). We recall that operators of the following type U(x)⟨ x⟩^k⟨ y⟩^k W(y) are Hilbert-Schmidt operators on L^2() whenever U and W are smooth potentials with polynomial decay | U(x) |, | W(x) |≲⟨ x ⟩^-k-1/2-, for k ∈. Hence, under the assumptions on a(x) and b(x), and using the fact that |_1(x,y)|≲| x -y |^2≤⟨ x ⟩^2⟨ y ⟩^2, it follows that M_1 is Hilbert-Schmidt. The same argument can be applied to the error term _2(z) and its derivatives using the remainder estimates in (<ref>) and we are done. The next definition characterizes the regularity of the endpoint μ of the essential spectrum. * We say that the threshold μ is a regular point of the spectrum of provided that the operator QTQ is invertible on the subspace Q(L^2() × L^2()). * Suppose μ is not a regular point. Let S_1 be the Riesz projection onto the kernel of QTQ, and we define D_0 = (Q(T+S_1)Q)^-1. Note that QD_0Q is an absolutely bounded operator on L^2()× L^2(). The proof for this follows from Lemma 8 of <cit.> with minor changes. See also <cit.>. Note that since we impose symmetry assumptions on the potential , the thresholds μ and -μ are either both regular or irregular. The invertibility of QTQ is related to the absence of distributional L^∞() × L^∞() solutions to Ψ = μΨ. The following lemma establishes the equivalent definitions. See <cit.> for the analogue in the scalar case. Suppose assumptions (A1) – (A5) hold. Then the following holds. * Let Φ∈ S_1(L^2() × L^2()) ∖{0}. If Ψ = (Ψ_1,Ψ_2)^⊤ is defined by Ψ(x) := -_0[ v_1 Φ](x) + c_0 e_1, with c_0 = ⟨(a,b)^⊤, TΦ⟩/‖ V_1 ‖_L^1(), then Φ = v_2 Ψ, and Ψ∈ L^∞() × L^∞() is a distributional solution to Ψ = μΨ. Furthermore, if additionally assumption (A6) holds, i.e., c_2,± := 1/2√(2μ)∫_ e^±√(2μ)y(V_2(y) Ψ_1(y) + V_1(y)Ψ_2(y)) y = 0, then lim_x →±∞Ψ_1(x) = c_0 ∓ c_1, where c_1 := 1/2⟨ x(a(x),b(x))^⊤,Φ(x)⟩ = 1/2∫_ x ( a(x)Φ_1(x) + b(x) Φ_2(x)) x. In particular, Ψ_1 ∉ L^2(). More precisely, the constants c_0 and c_1 cannot both be zero. * Conversely, suppose there exists Ψ∈ L^∞() × L^∞() satisfying (<ref>) in the distributional sense. Then Φ = v_2 Ψ∈ S_1 (L^2() × L^2()). * Suppose assumptions (A1) – (A6) hold. Then, S_1(L^2() × L^2()) ≤ 1. 
In the case S_1(L^2() × L^2()) =1, i.e., S_1(L^2()× L^2()) = {Φ} for some Φ = (Φ_1,Φ_2)^⊤∈ L^2() × L^2() ∖{0}, we have the following identities S_1 T P T S_1 = | c_0| ^2 ‖Φ‖_L^2()× L^2()^-2‖ V_1 ‖_L^1() S_1, PTS_1TP = | c_0|^2 ‖Φ‖_L^2()× L^2()^-2‖ V_1 ‖_L^1()P, S_1M_1S_1 = -2i | c_1 |^2‖Φ‖_L^2()× L^2()^-2 S_1, where the constants c_0 and c_1 are given by (<ref>) and (<ref>) respectively for this Φ. Let Φ = (Φ_1,Φ_2) ∈ S_1(L^2() × L^2()) with Φ≠ 0. Since S_1(L^2() × L^2()) is a subspace of Q(L^2() × L^2()), we have QΦ = Φ. Using the fact that Φ∈(QTQ) and the definition of T (c.f (<ref>)), we obtain 0 = QTQΦ = (I- P)TΦ = (I+v_2 _0 v_1)Φ - PTΦ. Since (a,b)^⊤ = v_2 e_1 and P is the orthogonal projection onto the span of (a,b)^⊤, we have PTΦ = ⟨ (a,b)^⊤ , TΦ⟩/‖ V_1 ‖_L^1()(a,b)^⊤ = c_0 v_2 e_1, with c_0 defined in (<ref>). It follows that Φ = -v_2_0v_1 Φ + c_0 v_2 e_1 = v_2(-_0v_1 Φ + c_0 e_1) = v_2 Ψ. This proves (<ref>). Next, we show (<ref>). Denoting Φ = (Φ_1,Φ_2)^⊤ and using the definition of 𝒢_0 (c.f. (<ref>)), we have (_0 - μ I)_0 (v_1Φ) = v_1 Φ , i.e., (-∂_x^2)∫_- | x - y |/2(-a(y)Φ_1(y) - b(y)Φ_2(y)) y = -a(x)Φ_1(x) - b(x)Φ_2(x), (∂_x^2 - 2μ) ∫_-e^-√( 2μ)| x - y |/2√(2μ)(b(y)Φ_1(y) + a(y)Φ_2(y)) y = b(x)Φ_1(x) + a(x)Φ_2(x). This equation is well-defined, since v_1Φ∈⟨ x ⟩^-1- L^1() ×⟨ x ⟩^-1- L^1(). Using (<ref>), (<ref>), and (H_0 - μ I)(c_0 e_1) = 0, we have (_0 - μ I)Ψ = (H_0 - μ I)[-_0 (v_1 Φ)+c_0 e_1] = - v_1 Φ = -v_1 v_2 Ψ = -Ψ, which implies (<ref>). We now show that Ψ = (Ψ_1,Ψ_2)^⊤ is in L^∞() × L^∞(). Noting that Ψ_1(x) = c_0 + 1/2∫_| x - y |(a(y)Φ_1(y) + b(y) Φ_2(y)) y, by employing the orthogonality condition ⟨ (a,b)^⊤,Φ⟩ = 0, we have Ψ_1(x) = c_0 + 1/2∫_ (| x - y | - | x |) (a(y)Φ_1(y) + b(y) Φ_2(y)) y. Using || x - y | - | x ||≤| y | and | a(y) | + | b(y) |≲⟨ y ⟩^-2, we have the uniform bound sup_x ∈|Ψ_1(x) |≤| c_0 | + 1/2∫| y || a(y)Φ_1(y) + b(y) Φ_2(y)| y ≲‖Φ‖_L^2()× L^2()≲ 1. Since (a,b)^⊤ and Φ are in L^2() × L^2(), we have the uniform bound on Ψ_2 by the Cauchy-Schwarz inequality sup_x ∈|Ψ_2(x) |≲∫_| b(y)Φ_1(y) + a(y)Φ_2(y) | y ≤‖ b ‖_L^2()‖Φ_1 ‖_L^2() + ‖ a ‖_L^2()‖Φ_2 ‖_L^2()≲ 1. Thus, we have shown that Ψ =(Ψ_1,Ψ_2)^⊤∈ L^∞() × L^∞(). Finally, we now assume c_2,± = 0 and show that Ψ_1 cannot be in L^2() ∖{0} by a Volterra argument. Using ⟨ (a,b)^⊤,Φ⟩ = 0, for x ≥ 0 large, we write Ψ_1(x) = c_0 - c_1 + ∫_x^∞ (y-x) (a(y)Φ_1(y) + b(y) Φ_2(y)) y. Using c_2,± = 0, we insert -e^-√(2μ)xc_2,+ = 0 to write Ψ_2(x) = 1/2√(2μ)∫_x^∞(e^-√(2μ)(y-x)-e^-√(2μ)(x-y)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y. Similarly, for x<0, using e^√(2μ x)c_2,-=0, we have Ψ_1(x) = c_0 + c_1 +∫_-∞^x (x-y)(V_1(y) Ψ_1(y) +V_2(y) Ψ_2(y)) y, Ψ_2(x) = 1/2√(2μ)∫_-∞^x (e^-√(2μ)(x-y)-e^-√(2μ)(y-x)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y. Suppose now that c_0 = c_1 = 0. Owing to the exponential decay of V_1, V_2 by assumption (A3), we obtain from (<ref>) and (<ref>) a homogeneous Volterra equation for Ψ=(Ψ_1,Ψ_2)^⊤ satisfying Ψ(x) = ∫_ K(x,y) Ψ(y) y, x ≥ 0, where | K(x,y) |≲ e^-γ| y |1_y > x for some 0< γ < β, which is a quasi-nilpotent operator. By performing a standard contraction on L^∞(M,∞), with M>0 sufficiently large, one arrives at a solution Ψ(x) ≡ 0 for all x ≥ M. By the uniqueness theorem for ODEs, this implies that Ψ≡ 0 on . Then, by the relation Φ = v_2 Ψ and the fact that v_2 is a positive matrix, one finds that Φ≡ 0, which contradicts the hypothesis Φ≠ 0. Thus, the conclusion is that c_0 and c_1 cannot be both zero. In particular, it follows from (<ref>) and (<ref>) that lim_x →±∞Ψ_1(x) = c_0 ∓ c_1. 
Since either c_0+c_1 ≠ 0 or c_0 - c_1 ≠ 0, we conclude that Ψ_1 ∉L^2(). Proof of (2). Define Φ = v_2 Ψ. Since Ψ is a distributional solution to (<ref>), using = v_1v_2, we have (_0 - μ I)Ψ = v_1 Φ⟺Ψ_1” = a Φ_1 + bΦ_2, Ψ_2” - 2μΨ_2 = b Φ_1 + a Φ_2. Let η∈ C_0^∞() be a non-negative function satisfying η(x) = 1 for | x |≤1 and η(x) = 0 for | x |≥ 2. Using the first equation from above and integrating by parts, we have for any >0, |∫_(a(y)Φ_1(y) + b(y)Φ_2(y)) η( y) y | = |∫_Ψ_1”(y) η( y) y | = |∫_Ψ_1(y) ^2 η”( y) y |≤‖Ψ_1 ‖_L^∞()∫_|η”(x) | x. By taking the limit → 0 and using the Lebesgue dominated convergence theorem, we find that ⟨ (a,b)^⊤, Φ⟩ = 0. Thus, PΦ = 0, i.e. Φ∈ Q(L^2() × L^2()). Following this fact and using Φ = v_2 Ψ, we have QTQΦ = QTΦ = Q(I+v_2_0v_1)Φ = Qv_2(Ψ +_0(Ψ)). Now set u := Ψ + _0(Ψ). Since u = (u_1,u_2)^⊤ is a distributional solution of (_0 - μ I)u = 0, i.e. -u_1” = 0, u_2” - 2μ u_2 = 0, we find that u_1(x) = κ_1 + κ_2x, u_2(x) = κ_3e^-√(2μ)x + κ_4e^√(2μ)x, for some κ_i ∈, i ∈{1,…,4}. By similar arguments from Item (1), we obtain that _0(Ψ) ∈ L^∞() × L^∞(). Since Ψ∈ L^∞() × L^∞(), it follows that u ∈ L^∞() × L^∞(), which implies that κ_2 = κ_3 = κ_4 = 0. Thus, we have u(x) ≡ (κ_1,0)^⊤ = κ_1e_1. Since Qv_2 e_1 = 0, we conclude from (<ref>) using the definition of u(x) that QTQΦ = 0, whence Φ∈ S_1(L^2() × L^2()). Proof of (3). Suppose there are two linearly independent Φ,∈ S_1(L^2() × L^2()). As in the proof of Item (1), for x ≥ 0, we have Ψ_1(x) = c_0 - c_1 + ∫_x^∞ (y-x) (V_1(y)Ψ_1(y) + V_2(y) Ψ_2(y)) y, Ψ_2(x) = 1/2√(2μ)∫_x^∞(e^-√(2μ)(y-x)-e^-√(2μ)(x-y)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y, and Ψ_1(x) = d_0 - d_1 + ∫_x^∞ (y-x) (V_1(y)Ψ_1(y) + V_2(y) Ψ_2(y)) y, Ψ_2(x) = 1/2√(2μ)∫_x^∞(e^-√(2μ)(y-x)-e^-√(2μ)(x-y)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y, where d_0 and d_1 are constants defined from which are analogous to c_0 and c_1. There is some constant θ∈ such that c_0 - c_1 = -θ (d_0 - d_1), which imply the Volterra integral equation [ Ψ_1 + θΨ_1; Ψ_2 + θΨ_2 ](x) = ∫_x^∞[ y - x 0; 0 e^-√(2μ)(y-x)-e^-√(2μ)(x-y)/2√(2μ) ](y) [ Ψ_1(y) + θΨ_1(y); Ψ_2(y) + θΨ_2(y) ]dy, for any x ≥ 0. By the same Volterra equation argument used in Item (1), we obtain Ψ+ θΨ≡ 0, which implies that Φ + θΦ≡ 0, but this contradicts that Φ and Φ are linearly independent. Thus, we have shown that S_1(L^2() × L^2()) ≤ 1. Next, we prove (<ref>)–(<ref>). Write S_1 = ‖Φ‖_L^2 × L^2^-2⟨Φ,·⟩Φ. By (<ref>) and the fact that P, S_1, and T are self-adjoint, we compute for any u ∈ L^2() × L^2() that S_1 T P T S_1 u = ‖Φ‖_L^2 × L^2^-2⟨Φ,u ⟩ S_1 T P T Φ = ‖Φ‖_L^2 × L^2^-2c_0⟨Φ,u⟩ S_1T [ a; b ] = | c_0 |^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()S_1 u. A similar computation reveals PTS_1TPu = | c_0|^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()Pu. For the third identity (<ref>), in view of (<ref>) and (<ref>), we write M_1(x,y) = v_2(x)G_1(x,y)v_1(y) = i| x - y |^2/4[ a(x); b(x) ][ a(y) b(y) ]. By using the orthogonality ⟨Φ,(a,b)^⊤⟩ = ∫_(Φ_1(x)a(x) + Φ_2(x)b(x)) x = 0, and the identity | x - y |^2 = x^2 + y^2 - 2xy, we have [S_1M_1S_1](x,y) = ∫_^2 S_1(x,x_1)M_1(x_1,y_1)S_1(y_1,y) x_1 y_1 = i/4Φ(x)/‖Φ‖_L^2 × L^2^2∫_^2( | x_1 - y_1 |^2 Φ^*(x_1)[ a(x_1); b(x_1) ][ a(y_1) b(y_1) ]Φ(y_1)) x_1 y_1 Φ^*(y)/‖Φ‖_L^2 × L^2^2 = -2i (∫_x_1/2Φ^*(x_1)[ a(x_1); b(x_1) ] x_1) (∫_y_12[ a(y_1) b(y_1) ]Φ(y_1) y_1) ‖Φ‖_L^2 × L^2^-2S_1(x,y) = -2i | c_1 |^2 ‖Φ‖_L^2 × L^2^-2 S_1(x,y). This proves (<ref>) and we are done. 
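For later use we record an elementary consequence of the limits just established. Since lim_{x →±∞}Ψ_1(x) = c_0 ∓ c_1, the parallelogram (polarization) identity
| c_0 - c_1 |^2 + | c_0 + c_1 |^2 = 2(| c_0 |^2 + | c_1 |^2)
yields
lim_{x →∞}( |Ψ_1(x)|^2 + |Ψ_1(-x)|^2 ) = 2(| c_0 |^2 + | c_1 |^2).
In particular, normalizing the resonance function Ψ so that the left-hand side equals 2 amounts to | c_0 |^2 + | c_1 |^2 = 1; this is the normalization that will be invoked in the low-energy analysis below.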
By direct computation, the conjugation identity σ_3 = ^* σ_3 and the identity v_1 = -σ_3 v_2 imply that the vector Ψ := σ_3 Ψ solves ^* Ψ = μΨ, where Ψ is the distribution solution to (<ref>). Moreover, one has the identities σ_3 Ψ = _0 (v_2 Φ) + (c_0,0)^⊤, Φ = v_2 Ψ = -v_1^⊤Ψ Similarly, using the conjugation identity σ_1 = - σ_1, we note that the vector Ψ_- = σ_1 Ψ solves the system Ψ_- = -μΨ_-. Following the preceding discussion, we assume the threshold μ is irregular and we derive an expansion for the inverse operator M(z)^-1 on a small punctured disk near the origin. We employ the inversion lemma due to Jensen and Nenciu <cit.>. Let H be a Hilbert space, let A be a closed operator and S a projection. Suppose A+S has a bounded inverse. Then A has a bounded inverse if and only if B = S - S(A+S)^-1S has a bounded inverse in SH, and in this case, A^-1 = (A+S)^-1 + (A+S)^-1SB^-1S(A+S)^-1, on H. We will now state the inverse operator of M(z) away from z=0. Suppose assumptions (A1) – (A6) hold. Let S_1(L^2() × L^2()) = ({Φ}) for some Φ = (Φ_1,Φ_2)^⊤≠0⃗. Let κ := (2i)^-1‖ V_1 ‖_L^1(), and let d be the constant defined by d := -2i(| c_0 |^2 + | c_1 |^2) ‖Φ‖_L^2 × L^2^-2≠ 0, with c_0 and c_1 defined by (<ref>) and (<ref>) respectively for this Φ. Then, there exists a positive radius z_0>0 such that for all 0 < | z| < z_0, M(z) is invertible on L^2() × L^2() and M(z)^-1 = 1/d(1/zS_1 - 1/κ PTS_1 - 1/κ S_1TP) +( 1/κ + | c_0 |^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()/dκ^2)zP + Q Λ_0(z) Q + zQΛ_1(z) + zΛ_2(z)Q + z^2Λ_3(z), where Λ_j(z) are absolutely bounded operators on L^2() × L^2() satisfying the improved bounds ‖|∂_z^k Λ_j(z) |‖_L^2() × L^2() → L^2() × L^2()≲ 1, k=0,1,2, j=0,1,2,3, uniformly in z for | z| < z_0. Throughout the proof, we will denote by _j(z), for 0 ≤ j ≤ 3, as error terms that satisfy the absolute bound | z |^k ‖|∂_z^k _j(z) |‖_L^2() × L^2() → L^2() × L^2()≲| z |^j, ∀ k = 0,1,2, ∀ | z | < z_0, for some z_0>0 small. This convenient notation will be useful in invoking Neumann series inversion for small values of z. Since we only need the expansion of M(z)^-1 up to a few powers of z, the exact expressions of _j(z) are insignificant and we allow it to vary from line to line. By Proposition <ref>, we rewrite M(z) by setting (z) := z/κM(z) = P + z/κ (T + zM_1 + _2(z)), where _2(z) is the error term in Proposition <ref>. Using I = P + Q, we write (z) + Q = I + z/κ (T + zM_1 + _2(z)), and by choosing z small enough, a Neumann series expansion yields the inverse operator [(z)+Q]^-1 = ∑_n ≥ 0 (-1)^n (z/κ(T + zM_1 + _2(z)))^n on L^2()× L^2(). We collect the terms of power order up to 2 to obtain [(z)+Q]^-1 = I - z/κT - z^2 ( 1/κM_1 -1/κ^2T^2 ) + _3(z). Note that z_2(z) is of the form _3(z). Recall by Lemma <ref> that the operator (z) is invertible on L^2() × L^2() if and only if the operator B_1(z) := Q-Q[(z)+Q]^-1Q is invertible on the subspace QL^2 ≡ Q(L^2() × L^2()). Using (<ref>), we find that B_1(z) = z/κQTQ + z^2(1/κQM_1Q - 1/κ^2QT^2Q) + Q_3(z)Q. We rewrite B_1(z) by setting _1(z) := κ/zB_1(z) = QTQ + z(QM_1Q - 1/κQT^2Q) + Q_2(z)Q. Since the threshold μ is not regular, the operator QTQ is not invertible on QL^2 according to Definition <ref>. By considering the operator _1(z) + S_1 = (QTQ + S_1) + z(QM_1Q - 1/κQT^2Q) + Q_2(z)Q, and the fact that we have QD_0Q = D_0 = (QTQ+S_1)^-1 on QL^2, we can pick z small enough such that ‖ z(QM_1Q - 1/κQT^2Q) + Q_2(z)Q ‖_L^2 × L^2 → L^2 × L^2 < ‖ QD_0Q ‖_L^2 × L^2 → L^2 × L^2^-1. This allows for the more complicated Neumann series expansion (c.f. 
Lemma <ref>) on QL^2: (_1(z) + S_1)^-1 = D_0∑_n≥0 (-1)^n( (z (QM_1Q - κ^-1 QT^2Q ) + Q_2(z)Q)D_0)^n on QL^2. We collect the leading order terms in this expansion and write (_1(z) + S_1)^-1 = D_0 - zD_0(QM_1Q - κ^-1QT^2Q)D_0 + Q_2(z)Q. At this step, it is crucial that the operator D_0 is absolutely bounded to ensure that the remainder term Q_2(z)Q and its derivatives are absolutely bounded. Next, we set B_2(z) := S_1 - S_1(_1(z) + S_1)^-1S_1, on S_1L^2 ≡ S_1(L^2()× L^2()). Using the orthogonality conditions S_1D_0 = D_0 S_1 = S_1, S_1Q= QS_1 = S_1, QTS_1 = S_1TQ = 0, we obtain B_2(z) = z S_1(M_1 - κ^-1 T^2)S_1 + S_1_2(z)S_1. By Lemma <ref>, we note that S_1L^2 is spanned by Φ(x) and that PTΦ = TΦ holds (c.f. (<ref>)), whence S_1T^2S_1 = S_1TPTS_1. Using Lemma <ref> (c.f. (<ref>), (<ref>)), we obtain that d := (S_1(M_1 - κ^-1T^2)S_1) = (S_1M_1S_1)-κ^-1(S_1TPTS_1) =-2i(| c_0 |^2 + | c_1 |^2)‖Φ‖_L^2 × L^2^-2≠ 0. Hence, we apply another Neumann series expansion to invert the operator B_2(z) on S_1L^2 for small z and write B_2(z)^-1 = 1/dzS_1 + S_1_0(z)S_1 on S_1L^2. Moreover, by Lemma <ref>, we have _1(z)^-1 = (_1(z)+S_1 )^-1 + (_1(z)+S_1)^-1S_1B_2(z)^-1S_1(_1(z)+S_1)^-1 on QL^2. Using (<ref>), (<ref>), and (<ref>), we find that _1(z)^-1 = 1/dzS_1 + Q_0(z)Q on QL^2. Hence, B_1(z)^-1 = κ/z_1(z)^-1 = κ/d z^2S_1 + κ/zQ_0(z)Q on QL^2. We return to the expansion of (z)^-1 by using Lemma <ref> with (<ref>) to obtain that (z)^-1 = ((z)+Q)^-1 + ((z)+Q)^-1QB_1(z)^-1Q((z)+Q)^-1 = (I - z/κT) + κ/d z^2S_1 -1/dzTS_1 - 1/dzS_1T + 1/d κTS_1T + κ/z(Q_0(z)Q + _1(z)Q + Q_1(z) + _2(z)). Here, we used the identity Q = IQ = QI. By reverting back to M(z) = κ/z(z), we have M(z)^-1 = z/κ(z)^-1 = z/κI + 1/d zS_1 - 1/dκTS_1 - 1/dκS_1T + z/dκ^2TS_1T + Q_0(z)Q + _1(z)Q + Q_1(z) + _2(z). Note that we absorb the z^2/κ^2T term into the error _2(z) above. By using the identities I = Q + P, QTS_1 = S_1TQ = 0, and by factoring the powers of z from the error terms _j(z), we obtain the expansion of M(z)^-1 on L^2: for 0 < | z | < z_0, M(z)^-1 = z/κP + 1/d(1/zS_1 - 1/κ PTS_1 - 1/κ S_1TP + 1/κ^2PTS_1TP ) + Q Λ_0(z) Q + zQΛ_1(z) + zΛ_2(z)Q + z^2Λ_3(z), where the operators Λ_j(z), j=0,…,3, satisfy (<ref>). Here, we choose z_0>0 sufficiently small such that the expansion (<ref>) and the Neumann series inversions (<ref>), (<ref>), (<ref>) are valid for all 0<| z | < z_0. Finally, by Lemma <ref> (c.f. (<ref>)), the term PTS_1TP can be simplified to | c_0 |^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()P, which finishes the proof. We appeal to the reader that each leading term in the expansion (<ref>) plays an important role in revealing the cancellations among the finite rank operators that arise in the local decay estimate (<ref>). Such a precise expression was also obtained for the one-dimensional Dirac operators in <cit.>, even though the proof we give here is different. See Remark 3.7 in that paper. For the low-energy unweighted dispersive estimates, it is sufficient to work with the simpler expression M(z)^-1 = 1/zQΛ_0(z)Q + QΛ_1(z) + Λ_2(z)Q + zΛ_3(z), where we absorb the operators S_1,S_1TP,PTS_1,P in (<ref>) into the operators QΛ_0(z)Q, QΛ_1(z), Λ_2(z)Q, Λ_3(z) respectively. The operators Λ_j(z), for j=0,…,3, satisfy the same estimates as (<ref>). § LOW ENERGY ESTIMATES In this section, we prove the low energy bounds for the perturbed evolution, following the ideas in Section 4 of <cit.>. We will frequently exploit the crucial orthogonality condition ∫_e_11v_1(x) Q(x,y) x = ∫_ Q(x,y) v_2(y) e_11 y= 0_2× 2. 
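This orthogonality condition is an immediate consequence of the definitions of P and Q; we indicate the verification once, for the reader's convenience. Since P is the orthogonal projection onto the span of (a,b)^⊤, we have Q(a,b)^⊤ = (I-P)(a,b)^⊤ = 0, and since Q is self-adjoint, ⟨ (a,b)^⊤, Qf⃗ ⟩ = ⟨ Q(a,b)^⊤, f⃗ ⟩ = 0 for every f⃗ ∈ L^2(ℝ) × L^2(ℝ). In kernel form these read
∫_ℝ (a(x), b(x)) Q(x,y) dx = 0 and ∫_ℝ Q(x,y) (a(y), b(y))^⊤ dy = 0.
Since e_11 = e_1e_1^⊤, we also have e_11 v_1(x) = -e_1 (a(x), b(x)) and v_2(y) e_11 = (a(y), b(y))^⊤ e_1^⊤, and both identities in the display above follow.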
The following calculus lemma will be helpful for dealing with the lower entry of the free resolvent kernel. For any m>0 and r ≥ 0, we define g_m(x) := e^-r√(x^2+m^2)/√(x^2+m^2). Then, there exists C_m > 0 (independent of r) such that ‖∂_x^k g_m ‖_L^∞()≤ C_m ≲ 1, ∀ k=0,1,2. First, by rescaling, we set g_m(x) = 1/m(x/m) where (x) := e^-rm √( x^2+1)/√(x^2+1) = 1/e^⟨ x ⟩⟨ x ⟩, := rm. Hence, it sufficient to prove the same estimate (<ref>) for (x). For k=0, it is clear that |(x) |≤ 1 for all x ∈. For k=1,2, direct computation shows that ∂_x (x) = - x(1+⟨ x ⟩)/e^⟨ x ⟩⟨ x ⟩^3, and ∂_x^2 (x) = 3x^2 + 3 x^2⟨ x ⟩-⟨ x ⟩^2 + ^2 x^2⟨ x ⟩^2 - ⟨ x ⟩^4/e^⟨ x ⟩⟨ x ⟩^5. Since e^-⟨ x ⟩max{1,,^2}≤ 1, it follows from (<ref>), (<ref>) that the estimate (<ref>) holds for and thus for g(x) too. The next proposition establishes the dispersive estimates for the evolution semigroup e^itP_s^+ for small energies close to the threshold μ. Let the assumptions of Theorem <ref> hold. Let χ_0(z) be a smooth, even, non-negative cut-off function satisfying χ_0(z) = 1 for | z |≤z_0/2 and χ_0(z) = 0 for | z |≥ z_0, where z_0>0 is given by Proposition <ref>. Then, for any | t |≥ 1, and u⃗ = (u_1,u_2) ∈() ×(), we have ‖ e^itχ_0( - μ I)P_s^+ u⃗‖_L^∞()× L^∞()≲| t |^-1/2‖u⃗‖_L^1() × L^1(), and ‖⟨ x ⟩^-2(e^itχ_0( - μ I)P_s^+ - F_t^+ )u⃗‖_L^∞()× L^∞()≲| t |^-3/2‖⟨ x ⟩^2u⃗‖_L^1() × L^1(), where F_t^+ is defined by F_t^+(x,y) = e^itμ/√(-4 π i t)Ψ⃗(x) [σ_3 Ψ⃗(y)]^⊤. We begin with the proof of the dispersive decay estimate (<ref>). We recall the spectral representation from (<ref>): e^itP_s^+ = e^itμ/π i∫_ e^itz^2zℛ_0(z) z - e^itμ/π i∫_ e^itz^2zℛ_0(z)v_1(M(z))^-1v_2ℛ_0(z) z. Note that the first term on the right is the spectral representation for the free evolution e^it_0P_s^+ and it satisfies the same estimate as (<ref>) thanks to Proposition <ref>. We insert the weaker expansion (<ref>) for M(z)^-1 following Remark <ref>, and write ∫_ e^itz^2zχ_0(z^2)ℛ_0(z)v_1(M(z))^-1v_2ℛ_0(z) z =∫_ e^itz^2χ_0(z^2)ℛ_0(z)v_1 QΛ_0(z)Q v_2ℛ_0(z) z + ∫_ e^itz^2zχ_0(z^2)ℛ_0(z)v_1 QΛ_1(z) v_2ℛ_0(z) z +∫_ e^itz^2zχ_0(z^2)ℛ_0(z)v_1 Λ_2(z)Q v_2ℛ_0(z) z+∫_ e^itz^2z^2χ_0(z^2)ℛ_0(z)v_1 Λ_3(z) v_2ℛ_0(z) z =: J_1 + J_2 + J_3 + J_4. It remains to show that ‖ J_k‖_L^1→ L^∞≤ C | t |^-1/2, ∀ k=1,…,4 . We treat the case for J_1 since the other cases follow similarly. First, we recall the kernel of _0(z) from (<ref>) and write _0(z)(x,y) := _1(z)(x,y) + _2(z)(x,y) := ie^iz | x - y |/2ze_11 + -e^-√(z^2 + 2μ)| x - y |/2√(z^2 + 2μ)e_22, and we further decompose the integral J_1 as J_1 = J_1^(1,1) + J_1^(1,2) + J_1^(2,1) + J_1^(2,2), where J_1^(i,j)(x,y) := ∫_ e^itz^2χ_0(z^2)[ℛ_i(z)v_1 QΛ_0(z)Q v_2ℛ_j(z)](x,y) z, i,j ∈{1,2}. We begin with the most singular term J_1^(1,1)(x,y) = ∫_^3 e^itz^2 + iz(| x - x_1 | + | y - y_1 |)χ_0(z^2)/(2iz)^2 [e_11v_1QΛ_0(z)Qv_2e_11](x_1,y_1) z x_1 y_1 . The orthogonality conditions (<ref>) imply that ∫_ e^iz | x |e_11 v_1(x_1)Q(x_1,x_2) x_1 = ∫_ e^iz | y | Q(y_2,y_1)v_2(y_1)e_11 y_1 = 0. Hence, writing e^iz| x - x_1| - e^iz| x | = iz ∫_| x |^| x - x_1 |e^izs_1 s_1 and e^iz| y - y_1| - e^iz| y | = iz ∫_| y |^| y - y_1 |e^izs_2 s_2, we obtain J_1^(1,1)(x,y) = 1/4∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_ e^itz^2 + iz(s_1+s_2) A(z,x_1,y_1) s_1 s_2 x_1 y_1 z, where A(z,x_1,y_1) = χ_0(z^2)[e_11v_1QΛ_0(z)Qv_2e_11](x_1,y_1), and note that A is differentiable and compactly supported in z due to Proposition <ref> and the compact support of χ_0(z^2). 
We obtain by Lemma <ref> and the Fubini theorem that | J_1^(1,1)(x,y) |≤ C | t|^-1/2∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_|∂_z A(z,x_1,x_2)| z s_1 s_2 x_1 y_1. Using ∫_| x |^| x - x_1 |∫_| y |^| y - y_1 | 1 s_1 s_2 ≤|| x - x_1 | - | x ||·|| y - y_1 | - | y ||≲⟨ x_1 ⟩⟨ y_1 ⟩, as well as ∂_z A(z,x_1,y_1) = [e_11v_1Q ∂_z(χ_0(z^2)Λ_0(z))Qv_2e_11](x_1,y_1), along with the bound (<ref>) on Λ_0, we deduce that ∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_|∂_z A(z,x_1,x_2)| z s_1 s_2 x_1 y_1 ≤ C‖ Q ‖_L^2 → L^2^2 ‖⟨ x_1 ⟩ v_1(x_1)‖_L^2()‖⟨ y_1 ⟩ v_2(y_1)‖_L^2() ·∫_[-z_0,z_0] (‖|Λ_0(z)|‖_L^2× L^2 → L^2× L^2 + ‖|∂_z Λ_0(z)|‖_L^2× L^2 → L^2× L^2) z ≲ 1. Hence, ‖ J_1^(1,1)‖_L^1 × L^1 → L^∞× L^∞≤ C | t |^-1/2. Next, we consider the least singular term J_1^(2,2)(x,y) = ∫_^3 e^itz^2B(z,x,y,x_1,y_1) x_1 y_1 z, where B(z,x,y,x_1,y_1) := e^-√(z^2+2μ)(| x - x_1 | + | y - y_1 |)χ_0(z^2) /4(z^2+2μ) [e_22v_1QΛ_0(z)Qv_2e_22](x_1,y_1). By Lemma <ref>, we have | J_1^(2,2)(x,y)|≤ C | t |^-1/2, if we can show the uniform estimate sup_x,y ∈∫_^3|∂_z B(z,x,y,x_1,y_1)| z x_1 y_1 ≲ 1. By Lemma <ref>, we have sup_z∈|∂_z^k ( e^-√(z^2+2μ)(| x - x_1 | + | y - y_1 |)/4(z^2+2μ)) |≤ C_μ≲ 1, k=0,1, uniformly in the x,y,x_1,y_1 variables. Hence, using the Cauchy-Schwarz inequality in the x_1,y_1 variables and the bound (<ref>) on Λ_0, we have ∫_^3|∂_z B(z,x,y,x_1,y_1)| z x_1 y_1 ≤ C_μ∫_^3| (1+∂_z)χ_0(z^2)[e_22v_1QΛ_0(z)Qv_2e_22](x_1,y_1) | z x_1 y_1 ≲‖ Q ‖_L^2 × L^2 → L^2 × L^2^2 ‖ v_1‖_L^2()‖ v_2‖_L^2() ∫_[-z_0,z_0](‖|Λ_0(z)|‖_L^2 × L^2 → L^2 × L^2 + ‖|∂_z Λ_0(z)|‖_L^2 × L^2 → L^2 × L^2) z ≲ 1. Hence, the bound (<ref>) is proven. The remaining terms J_1^(1,2) and J_1^(2,1) can be treated similarly with the same techniques, while for the remaining cases J_2,J_3, and J_4, we use the additional powers of z in place of the missing Q operators to obtain the same bounds (<ref>) as the term J_1. This finishes the proof of (<ref>). Next, we turn to the proof of the low-energy weighted estimate (<ref>). Recall that the threshold resonance function Ψ = (Ψ_1,Ψ_2)^⊤ has been normalized in Theorem <ref>, which means that we need to carefully treat the constants relating to the function Φ where Φ := v_2Ψ. By Lemma <ref>, note that Φ spans the subspace S_1(L^2()× L^2()). We define η := ‖Φ‖_L^2() × L^2()^-2≠ 0, so that S_1(x,y) = η Φ^*(y)Φ(x), and we fix the constants c_0 and c_1 defined by (<ref>) and (<ref>) respectively for this Φ. By Lemma <ref>, one finds the relation 2 = lim_x →∞(|Ψ_1(x)|^2 + |Ψ_1(-x)|^2) = 2(| c_0 |^2 + | c_1 |^2), by the polarization identity (c.f. (<ref>)). Thus, the precise expansion (<ref>) of M(z)^-1 from Proposition <ref> simplifies to M(z)^-1 = i/2η zS_1 + 1/η‖ V_1 ‖_L^1() PTS_1 + 1/η‖ V_1 ‖_L^1() S_1TP + (2i/‖ V_1 ‖_L^1() + 2 | c_0 |^2/i‖ V_1 ‖_L^1() )zP + Q Λ_0(z) Q + zQΛ_1(z) + zΛ_2(z)Q + z^2Λ_3(z), 0 < | z | < z_0. 
We insert the above expression into the spectral representation of e^itχ_0( - μ I)P_s^+, and obtain that e^itχ_0( - μ I)P_s^+ = e^itμ/π i∫_e^itz^2zχ_0(z^2)_0(z) z - e^itμ/π i∫_e^itz^2zχ_0(z^2)_0(z)v_1(M(z))^-1v_2_0(z) z = e^itμ/π iI_1 -e^itμ/π i( i/2ηI_2,1 + 1/η‖ V_1 ‖_L^1()I_2,2 + 1/η‖ V_1 ‖_L^1() I_2,3 + (2i/‖ V_1 ‖_L^1() + 2 | c_0 |^2/i‖ V_1 ‖_L^1() )I_2,4) -e^itμ/π i(I_3,1 + I_3,2 + I_3,3 + I_3,4), where I_1 := ∫_ e^itz^2z χ_0(z^2) _0(z) z, I_2,1 := ∫_ e^itz^2χ_0(z^2) [_0(z)v_1 S_1 v_2_0(z)] z, I_2,2 := ∫_ e^itz^2 zχ_0(z^2) [_0(z)v_1 S_1 T P v_2_0(z)] z, I_2,3 := ∫_ e^itz^2 zχ_0(z^2) [_0(z)v_1 P T S_1 v_2_0(z)] z, I_2,4 := ∫_ e^itz^2 z^2χ_0(z^2) [_0(z)v_1 P v_2_0(z)] z, and I_3,1 := ∫_ e^itz^2 zχ_0(z^2) [_0(z)v_1 Q Λ_0 (z) Q v_2_0(z)] z, I_3,2 := ∫_ e^itz^2 z^2χ_0(z^2) [_0(z)v_1 Q Λ_1(z) v_2_0(z)] z, I_3,3 := ∫_ e^itz^2 z^2χ_0(z^2) [_0(z)v_1 Λ_2(z)Q v_2_0(z)] z, I_3,4 := ∫_ e^itz^2 z^3χ_0(z^2) [_0(z)v_1 Λ_3(z) v_2_0(z)] z. Now we study the local decay of the terms I_1, I_2,j, I_3,ℓ, for j,ℓ∈{1,…,4} and we will observe in the following propositions that the terms I_1,I_2,1,…,I_2,4 contribute to the leading order for the local decay estimate while the remainder terms I_3,1, …, I_3,4 satisfy the stronger local decay estimate (| t |^-3/2⟨ x ⟩⟨ y ⟩). We first handle these remainder terms by Lemma <ref> in a similar spirit to the proof for the (unweighted) dispersive bound (<ref>), exploiting the additional power of z. For i∈{1,2,…,4} and | t|≥ 1, we have | I_3,i(x,y) |≤ C | t|^-3/2⟨ x ⟩⟨ y ⟩. We treat the case for I_3,1 as the other cases follow similarly by using the additional powers of z in place of the missing operators Q. As before, we consider the decomposition I_3,1 = I_3,1^(1,1) + I_3,1^(1,2) + I_3,1^(2,1) + I_3,1^(2,2), where I_3,1^(i,j) := ∫_ e^itz^2zχ_0(z^2)[_i(z)v_1QΛ_0(z)Qv_2_j(z)] z, i,j∈{1,2}, with _1 and _2 defined in (<ref>). We begin with the term I_3,1^(1,1)(x,y) = ∫_^3 e^itz^2 + iz(| x - x_1 | + | y - y_1 |)zχ_0(z^2)/(2iz)^2 [e_11v_1Q Λ_0(z)Qv_2e_11](x_1,y_1) z x_1 y_1. Using the orthogonality condition (<ref>) like in (<ref>), we obtain I_3,1^(1,1)(x,y) = 1/4∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1|∫_e^itz^2+iz(s_1+s_2)z A(z,x_1,y_1) z s_1 s_2 x_1 y_1, where A(z,x_1,y_1) := χ_0(z^2)[e_11v_1QΛ_0(z)v_2Qe_11](x_1,y_1). By Lemma <ref>, we obtain that | I_3,1^(1,1)(x,y) | ≲| t |^-3/2∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1|∫_[-z_0,z_0](|∂_z^2 A | + (s_1+s_2)|∂_z A | + | A |) z s_1 s_2 x_1 y_1. Using the bounds ∫_| x |^| x - x_1 |∫_| y |^| y - y_1| 1 s_1 s_2 ≲⟨ x_1 ⟩⟨ y_1 ⟩, ∫_| x |^| x - x_1 |∫_| y |^| y - y_1| (s_1+s_2) s_1 s_2 ≲⟨ x_1 ⟩^2 ⟨ y_1 ⟩^2 ⟨ x ⟩⟨ y ⟩, we have | I_3,1^(1,1) (x,y) |≲| t |^-3/2∫_^2∫_[| z |≤ z_0]⟨ x_1 ⟩⟨ y_1 ⟩ (|∂_z^2 A | + ⟨ x_1 ⟩⟨ y_1 ⟩⟨ x ⟩⟨ y ⟩|∂_z A | + | A | ) z x_1 y_1. Noting that ⟨ x⟩ v_1(x_1) and ⟨ y_1 ⟩ v_2(y_1) are in L^2 and that Λ_0 satisfies the bound (<ref>), we apply Cauchy-Schwarz inequality in x_1 and y_1 variables to obtain the bound | I_3,1^(1,1)(x,y) | ≲| t |^-3/2‖ Q ‖_L^2 → L^2^2 ‖⟨ x_1 ⟩ v_1 ‖_L_x_1^2() ‖⟨ y_1 ⟩ v_2 ‖_L_y_1^2() ·∫_[| z |≤ z_0] (‖|∂_z^2 Λ_0(z) |‖_L^2 × L^2 → L^2 × L^2 + ‖|Λ_0(z) |‖_L^2 × L^2→ L^2× L^2 ) z +| t |^-3/2⟨ x ⟩⟨ y ⟩‖ Q ‖_L^2 → L^2^2‖⟨ x_1 ⟩ v_1 ‖_L_x_1^2() ‖⟨ y_1 ⟩ v_2 ‖_L_y_1^2() ·∫_[| z |≤ z_0]‖|∂_z Λ_0(z) |‖_L^2 × L^2 → L^2× L^2 z ≲| t |^-3/2⟨ x ⟩⟨ y ⟩ . Next, we consider the term I_3,1^(1,2)(x,y) = ∫_^3 e^itz^2 + iz| x - x_1 | - √(z^2+2μ)| y - y_1 |χ_0(z^2)/4i√(z^2+2μ) [e_11v_1Q Λ_0(z)Qv_2e_22](x_1,y_1) z x_1 y_1. By using the Q orthogonality (c.f. 
(<ref>)) condition, we write I_3,1^(1,2)(x,y) = ∫_^3∫_| x |^| x - x_1 | e^itz^2 + izs_1 zB(z,x_1,y_1,x,y) s_1 z x_1 y_1, where B(z,x_1,y_1,x,y) := e^- √(z^2+2μ)| y - y_1 |/4i√(z^2+2μ)χ_0(z^2) [e_11v_1Q Λ_0(z)Qv_2e_22](x_1,y_1) . Since B is compactly supported in z, we can exchange the order of integration and we use Lemma <ref> to obtain | I_3,1^(1,2)(x,y) |≤ C | t |^-3/2∫_^2∫_| x |^| x - x_1|∫_| [∂_z^2 + is_1 ∂_z] B(z,x_1,y_1,x,y) | z s_1 x_1 y_1. By Lemma <ref>, we have sup_z ∈|∂_z^k (e^- √(z^2+2μ)| y - y_1 |4i√(z^2+2μ)) |≤ C_μ≲ 1, ∀ k=0,1,2, which implies by Hölder's inequality and Leibniz rule that ∫_| [∂_z^2 + is_1 ∂_z] B(z,x_1,y_1,x,y) | z ≤ C ⟨ s_1 ⟩∫_|e_11v_1Q [1+∂_z + ∂_z^2](χ_0(z^2)Λ_0(z))Qv_2e_22| z. Repeating the arguments from (<ref>)–(<ref>), we obtain | I_3,1^(1,2)(x,y) |≤ C | t |^-3/2⟨ x ⟩. Similarly, one has the bounds | I_3,1^(2,1)(x,y) |≤ C | t |^-3/2⟨ y ⟩, | I_3,1^(2,2)(x,y) |≤ C | t |^-3/2, and we are done. For all | t |≥ 1, we have | I_2,1(x,y) - F_t^1(x,y) |≤ C | t |^-3/2⟨ x⟩^2 ⟨ y ⟩^2, where F_t^1(x,y) := η√(π)/√(-it) [c_0 e_1 - Ψ(x)][σ_3 Ψ(y) - c_0e_1]^*. As in the previous propositions, we decompose I_2,1 into the sum I_2,1 = I_2,1^(1,1)+I_2,1^(1,2)+I_2,1^(2,1)+I_2,1^(2,2), with I_2,1^(i,j) := ∫_ e^itz^2χ_0(z^2) [_i(z)v_1 S_1 v_2_j(z)] z, i,j ∈{1,2}. We start with the most singular term I_2,1^(1,1)(x,y) = ∫_^3 e^itz^2 + iz(| x - x_1 | + | y - y_1 |)χ_0(z^2)/(2iz)^2 [e_11v_1S_1v_2e_11](x_1,y_1) x_1 y_1 z. Noting that S_1L^2 ⊂ QL^2, the orthogonality conditions (<ref>) imply that ∫_ e^iz | x |e_11 v_1(x_1)S_1(x_1,x_2) x_1 = ∫_ e^iz | y | S_1(y_2,y_1)v_2(y_1)e_11 y_1 = 0_2× 2, ∀ x, y ∈. Hence, by the Fubini theorem, I_2,1^(1,1) (x,y) = 1/4∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_ e^itz^2 + iz(s_1 + s_2)χ_0(z^2) [e_11v_1S_1v_2e_11](x_1,y_1) z s_1 s_2 x_1 y_1 = 1/4∫_| x |^| x - x_1 |∫_| y |^| y - y_1 | G_t(s_1+s_2) s_1 s_2 ∫_^2 [e_11v_1S_1v_2e_11](x_1,y_1) x_1 y_1, where G_t(·) is the function defined in Lemma <ref>, which satisfies the estimate | G_t(s_1+s_2) - √(π)/√(-it) e^-is_1^2/4te^-is_2^2/4t|≤ C | t |^-3/2⟨ s_1 ⟩⟨ s_2 ⟩. Using the bound ∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |⟨ s_1 ⟩⟨ s_2 ⟩ s_1 s_2 ≲⟨ x_1⟩^2⟨ y_1⟩^2⟨ x ⟩⟨ y ⟩, the decay assumptions on v_1,v_2, and the estimate (<ref>), we have | I_2,1^(1,1)(x,y) - √(π)/4√(-it) e^i π/4∫_^2H_t(x_1,x)[e_11v_1S_1v_2e_11](x_1,y_1)H_t(y_1,y) x_1 y_1| ≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩‖ S_1 ‖_L^2 × L^2 → L^2 × L^2 ‖⟨ x_1 ⟩^2 v_1(x_1) ‖_L^2‖⟨ y_1 ⟩^2 v_2(y_2) ‖_L^2≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩, where we set H_t(x_1,x) := ∫_| x |^| x_1 - x | e^-is^2/4t s. Since S_1(x,y) = ηΦ(x)Φ^*(y), the orthogonality conditions (<ref>) imply that ∫_| x |e_11 v_1(x_1) S_1(x_1,y_1) x_1 = η∫_| x |e_11v_1(x_1)Φ(x_1) x_1Φ^*(y_1) = 0_2× 2, ∀ y ∈, ∫_| y | S_1(x_1,y_1) v_2(y_1)e_11 y_1 = ηΦ(x_1)∫_| y |Φ^*(y_1)v_2(y_1)e_11 y_1 = 0_2 × 2, ∀ x ∈. Hence, using the bound | H_t(x_1,x) - (| x - x_1 | - | x | )|≤ C | t |^-1⟨ x ⟩^2 ⟨ x_1 ⟩^3, and the exponential decay of v_1,v_2, we conclude the estimate | I_2,1^(1,1)(x,y) - η√(π)/√(-it) [G_0(e_11v_1Φ)(x)][G_0(e_11v_2Φ)(y)]^* |≤ C| t |^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where G_0(x,y) := -1/2| x - y|, and [G_0(e_11v_1Φ)(x)] := -1/2∫_| x - x_1 |e_11v_1(x_1)Φ(x_1) x_1, [G_0(e_11v_2Φ)(y)]^* := -1/2∫_| y - y_1 |Φ^*(y_1)v_2(y_1) e_11 y_1. In the preceding definition, we used the identity v_2^* = v_2. Next, we treat the term I_2,1^(2,2)(x,y) = ∫_^3 e^itz^2χ_0(z^2)e^-√(z^2+2μ)| x - x_1|/-2√(z^2+2μ) [e_22v_1S_1v_2e_22](x_1,y_1)e^-√(z^2+2μ)| y - y_1|/-2√(z^2+2μ) x_1 y_1 z. 
By Taylor expansion, we have I_2,1^(2,2)(x,y) =∫_^3 e^itz^2χ_0(z^2)e^-√(2μ)| x - x_1|/-2√(2μ) [e_22v_1S_1v_2e_22](x_1,y_1)e^-√(2μ)| y - y_1|/-2√(2μ) x_1 y_1 z + ∫_^3 e^itz^2z^2χ_0(z^2)[e_22v_1S_1v_2e_22](x_1,y_1)κ(x,x_1)κ(y,y_1) x_1 y_1 z = η∫_ e^itz^2χ_0(z^2) z [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* + ∫_^3 e^itz^2z^2χ_0(z^2)[e_22v_1S_1v_2e_22](x_1,y_1)κ(x,x_1)κ(y,y_1) x_1 y_1 z, where we set G_2(x,y) := e^-√(2μ)| x - y|/-2√(2μ), and where κ(x,x_1)κ(y,y_1) is an error term bounded by C⟨ x ⟩⟨ x_1 ⟩⟨ y ⟩⟨ y_1 ⟩ e^-c(| x - x_1 | + | y - y_1 |), for some C,c >0, (c.f. (<ref>)). The definitions for G_2(e_22v_1Φ)(x) and G_2(e_22v_2Φ)(y) are defined analogously to the ones for G_0(e_11v_1Φ)(x) and G_0(e_11v_2Φ)(y). By non-stationary phase, one has the uniform estimate |∫_ e^itz^2z^2χ_0(z^2) z |≤ C| t |^-3/2. Hence, we can control the remainder term in I_2,1^(2,2) by |∫_^3 e^itz^2z^2χ_0(z^2)[e_22v_1S_1v_2e_22](x_1,y_1)κ(x,x_1)κ(y,y_1) x_1 y_1 z |≤ C | t|^-3/2⟨ x ⟩⟨ y ⟩. On the other hand, by Lemma <ref>, one has ∫_ e^itz^2χ_0(z^2) z = √(π)/√(-it) + R_t, | R_t |≤ C| t |^-3/2. Hence, the leading contribution of I_2,1^(2,2) can be written as |∫_ e^itz^2χ_0(z^2) z [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* - η√(π)/√(-it) [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^*|≤ C| t |^-3/2. Thus, one concludes the estimate for I_2,1^(2,2): | I_2,1^(2,2) - η√(π)/√(-it) [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* |≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩. Finally, we note that a similar analysis holds for the terms I_2,1^(1,2) and I_2,1^(2,1) yielding the contributions | I_2,1^(1,2) - η√(π)/√(-it)[G_0(e_11v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* |≤ C| t |^-3/2⟨ x ⟩^2 ⟨ y ⟩, | I_2,1^(2,1) -η√(π)/√(-it)[G_2(e_22v_1Φ)(x)][G_0(e_11v_2Φ)(y)]^*|≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩^2. By adding all leading order contributions, we obtain F_t^1(x,y) =η√(π)/√(-it)[(G_0e_11 + G_2e_22)v_1Φ](x)[(G_0e_11 + G_2e_22)v_2Φ]^*(y). Recalling that _0 = G_0 e_11 + G_2e_22 from Lemma <ref>, that _0(v_1 Φ ) = c_0e_1 - Ψ from Lemma <ref>, and that _0(v_2 Φ) = σ_3 Ψ - c_0 e_1 from Remark <ref> (c.f. (<ref>)), we arrive at F_t^1(x,y) = η√(π)/√(-it)[c_0 e_1 - Ψ(x)][σ_3 Ψ(y) - c_0e_1]^*, as claimed We continue the analysis for the terms involving the operators S_1TP and PTS_1. For all | t |≥ 1, we have | I_2,2(x,y) - F_t^2(x,y) |≤ C | t|^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, | I_2,3(x,y) - F_t^3(x,y) |≤ C | t|^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where F_t^2(x,y) := iη‖ V_1 ‖_L^1()/2√(π)/√(-it)[c_0 e_1 - Ψ(x)][e^iy^2/4tc_0e_1]^*, F_t^3(x,y) := -iη‖ V_1 ‖_L^1()/2√(π)/√(-it) [e^-ix^2/4tc_0 e_1][σ_3 Ψ(y) - c_0e_1]^*. As in the proof of Proposition <ref>, we decompose I_2,2 into I_2,2 = I_2,2^(1,1) + I_2,2^(1,2) + I_2,2^(2,1) + I_2,2^(2,2), with I_2,2^(i,j) := ∫_ e^itz^2 z χ_0(z^2) [_i(z) v_1 S_1TP v_2 _j(z)] z, i,j ∈{1,2}, where _1 and _2 were defined in (<ref>). We start with I_2,2^(1,1)(x,y) = ∫_^3 e^itz^2zχ_0(z^2) e^iz | x - x_1 |/-2iz[e_11v_1S_1TPv_2e_11](x_1,y_1)e^iz | y - y_1 |/-2iz x_1 y_1 z. Using the orthogonality (<ref>), we have I_2,2^(1,1)(x,y) = 1/4∫_^3∫_| x |^| x - x_1 |∫_| y |^| y - y_1 | e^itz^2+iz(s_1+s_2)zχ_0(z^2) [e_11v_1S_1TPv_2e_11](x_1,y_1) s_1 s_2 x_1 y_1 z + 1/4i∫_^3∫_| x |^| x - x_1 |e^itz^2+izs_1χ_0(z^2) [e_11v_1S_1TPv_2e_11](x_1,y_1)e^iz | y | s_1 x_1 y_1 z =: I_2,2;1^(1,1) + I_2,2;2^(1,1). By Lemma <ref>, we have |∫_e^itz^2+iz(s_1+s_2)zχ_0(z^2) z |≤ C| t |^-3/2⟨ s_1 ⟩⟨ s_2 ⟩. 
Using this estimate, the bound ∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |⟨ s_1 ⟩⟨ s_2 ⟩ s_1 s_2 ≲⟨ x_1 ⟩^2 ⟨ y_2 ⟩^2⟨ x ⟩⟨ y ⟩, the absolute boundedness of S_1TP, and the exponential decay of v_1,v_2, we deduce that | I_2,2;1^(1,1)(x,y) |≲| t |^-3/2⟨ x ⟩⟨ y ⟩∫_^2|⟨ x_1 ⟩^2 ⟨ y_2 ⟩^2 [e_11v_1S_1TPv_2e_11](x_1,y_1)| x_1 y_1 ≲| t |^-3/2⟨ x ⟩⟨ y ⟩. By Lemma <ref> and direct computation, ∫_ S_1TP(x_1,y_1)v_2(y_1)e_11 y_1 = η‖ V_1 ‖_L^1()Φ(x_1)[c_0e_1]^*. Hence, integrating in y_1, we have I_2,2;2^(1,1)(x,y) = η‖ V_1 ‖_L^1()/4i(∫_∫_| x |^| x - x_1 |∫_ e^itz^2+iz(s_1 + | y |)χ_0(z^2) e_11 v_1(x_1)Φ(x_1) z s_1 x_1)[c_0e_1]^* = η‖ V_1 ‖_L^1()/4i(∫_∫_| x |^| x - x_1 | G_t(s_1 + | y |) s_1 e_11 v_1(x_1)Φ(x_1) x_1)[c_0e_1]^*, where G_t is the function defined in Lemma <ref>. By Lemma <ref> (c.f. (<ref>)–(<ref>) for similar computations), we have | I_2,2;2^(1,1)(x,y) - i η‖ V_1 ‖_L^1()/2 [G_0(e_11v_1Φ)(x)][e^iy^2/4tc_0e_1]^* |≤ C | t |^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where G_0 is the operator defined in (<ref>). This completes the analysis of the term I_2,2^(1,1). Next, we treat the term I_2,2^(2,1)(x,y) = ∫_^3 e^itz^2z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1S_1TPv_2e_11](x_1,y_1)e^iz | y - y_1 |/-2iz x_1 y_1 z. By inserting e^iz | y |, we write I_2,2^(2,1)(x,y) = -1/2∫_^3 e^itz^2z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1S_1TPv_2e_11](x_1,y_1) ∫_| y |^| y - y_1 |e^izs_2 s_2 x_1 y_1 z + ∫_^3 e^itz^2z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1S_1TPv_2e_11](x_1,y_1)e^iz | y |/-2iz x_1 y_1 z =: I_2,2;1^(2,1)(x,y) + I_2,2;2^(2,1)(x,y), where I_2,2;2^(2,1) is the leading term. By Lemma <ref> and Lemma <ref>, |∫_ e^itz^2+izs_2 z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ) z |≤ C | t |^-3/2⟨ s_2 ⟩. Hence, using the absolute boundedness of S_1TP and the bound (<ref>), we have | I_2,2;1^(2,1)(x,y) |≲| t |^-3/2∫_^2⟨ y_1 ⟩^2 ⟨ y ⟩[e_22v_1S_1TPv_2e_11](x_1,y_1) x_1 y_1 ≲| t |^-3/2⟨ y ⟩. On the other hand, we treat I_2,2;1^(2,1) similarly as in (<ref>) - (<ref>) and find that | I_2,2;2^(2,1)(x,y) - i/2∫_^3 e^itz^2 + iz | y |χ_0(z^2)G_2(x,x_1)[e_22v_1S_1TPv_2e_11](x_1,y_1) x_1 y_1 z |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩, where G_2 is defined in (<ref>). Hence, by Lemma <ref> and (<ref>), we conclude that | I_2,2^(2,1)(x,y) - iη‖ V_1 ‖_L^1() /2√(π)/√(-it) [G_2(e_22v_1Φ)(x)][e^iy^2/4tc_0e_1]^* |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. Finally, we show that the terms I_2,2^(1,2) and I_2,2^(2,2) satisfy the better decay rates of (| t |^-3/2⟨ x ⟩⟨ y ⟩). By orthogonality (c.f. (<ref>)), I_2,2^(1,2)(x,y) = 1/-2∫_^3e^itz^2zχ_0(z^2) ∫_| x |^| x - x_1| e^izs_1 s_1[e_11v_1S_1TPv_2e_22](x_1,y_1) e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) x_1 y_1 z. By Lemma <ref> and Lemma <ref>, we note that the z-integral satisfy the bound |∫_ e^itz^2+izs_1 z χ_0(z^2)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) z |≤ C| t |^-3/2⟨ s_1 ⟩. Hence, by the absolute boundedness of S_1TP and decay of v_1, v_2, we conclude that | I_2,2^(1,2)(x,y) |≤ C| t |^-3/2⟨ x ⟩. The analysis of I_2,2^(2,2) is analogous to the preceeding one, yielding the bound | I_2,2^(2,2)(x,y) |≤ C | t |^-3/2⟨ y ⟩. Thus, using _0 = G_0 e_11 + G_2e_22, and _0(v_1Φ) = c_0e_1 - Ψ from Lemma <ref>, we conclude (<ref>) and (<ref>). For the estimate (<ref>) involving I_2,3, one should instead use the identity ∫_e_11v_1(x_1)PTS_1(x_1,y_1) x_1 = -η‖ V_1 ‖_L^1() c_0e_1 Φ(y_1)^*, and we leave the remaining details to the reader. Next, we remark that the analysis for I_2,4 involving the operator P leads to a similar estimate as the free evolution in Proposition <ref>. 
For all | t |≥ 1, we have | I_2,4(x,y) - F_t^4(x,y) |≤ C | t|^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where F_t^4(x,y) := ‖ V_1 ‖_L^1()/4√(π)/√(-it)e^-ix^2/4te_1 e^-iy^2/4te_1^⊤. As before, we write I_2,4 = I_2,4^(1,1) + I_2,4^(1,2) + I_2,4^(2,1) + I_2,4^(2,2), with I_2,4^(i,j) := ∫_ e^itz^2 z^2 χ_0(z^2) [_i(z) v_1 P v_2 _j(z)] z, i,j ∈{1,2}, where _1 and _2 were defined in (<ref>). We first treat the leading term I_2,4^(1,1)(x,y) = ∫_ e^itz^2 z^2 χ_0(z^2) e^iz | x - x_1 |/2iz[e_11v_1Pv_2e_11](x_1,y_1)e^iz | y - y_1 |/2iz x_1 y_1 z. By adding and subtracting e^iz | x | and e^iz | y | twice, we further consider I_2,4^(1,1)(x,y) = ∫_^3 e^itz^2 z^2 χ_0(z^2) e^iz | x |/2iz[e_11v_1Pv_2e_11](x_1,y_1)e^iz | y |/2iz x_1 y_1 z + 1/2∫_^3 e^itz^2 z^2 χ_0(z^2) e^iz | x |/2iz[e_11v_1Pv_2e_11](x_1,y_1)∫_| y |^| y - y_1 |e^izs_2 s_2 x_1 y_1 z + 1/2∫_^3 e^itz^2 z^2 χ_0(z^2) ∫_| x |^| x - x_1 |e^izs_1ds_1[e_11v_1Pv_2e_11](x_1,y_1)e^iz | y |/2iz x_1 y_1 z + 1/4∫_^3 e^itz^2 z^2 χ_0(z^2) ∫_| x |^| x - x_1 |e^izs_1ds_1[e_11v_1Pv_2e_11](x_1,y_1)∫_| y |^| y - y_1 |e^izs_2 s_2 x_1 y_1 z =: I_2,4;1^(1,1)(x,y) +I_2,4;2^(1,1)(x,y) +I_2,4;3^(1,1)(x,y) +I_2,4;4^(1,1)(x,y). By direct computation, ∫_^2 [e_11v_1Pv_2e_11](x_1,y_1) x_1 y_1 = - ‖ V_1 ‖_L^1()e_1e_1^⊤. Hence, by Lemma <ref>, | I_2,4;1^(1,1)(x,y) - ‖ V_1 ‖_L^1()/4√(π)/√(-it) e^-ix^2/4te_1e^-iy^2/4te_1^⊤|≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. For the terms I_2,4;2^(1,1), I_2,4;3^(1,1), the additional factor of z allows to invoke Lemma <ref>, |∫_ e^itz^2+iz (| x | + s_2)zχ_0(z^2) z |≤ C| t |^-3/2⟨ x ⟩⟨ s_2⟩, |∫_ e^itz^2+iz (s_1 + | y |)zχ_0(z^2) z |≤ C| t |^-3/2⟨ y ⟩⟨ s_1⟩ . Thus, we infer from the exponential decay of v_1 and v_2 that | I_2,4;2^(1,1)(x,y) | + | I_2,4;3^(1,1)(x,y) |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. For the term I_2,4;4^(1,1), we can use non-stationary phase to conclude the same bound. Hence, we have | I_2,4^(1,1)(x,y) - ‖ V_1 ‖_L^1()/4√(π)/√(-it) e^-ix^2/4te_1e^-iy^2/4te_1^⊤|≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. Thus, it remains to prove that the other terms I_2,4^(1,2), I_2,4^(2,1), I_2,4^(2,2) have the better (| t |^-3/2⟨ x ⟩⟨ y ⟩) weighted decay estimate to finish the proposition. We first treat the term I_2,4^(1,2)(x,y) =1/2i∫_^3 e^itz^2 z χ_0(z^2) e^iz | x - x_1 | [e_11v_1Pv_2e_22](x_1,y_1)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) x_1 y_1 z. By Lemma <ref> and Lemma <ref>, |∫_ e^itz^2+iz(| x-x_1|)zχ_0(z^2) e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) z |≤ C | t |^-3/2⟨ x ⟩⟨ x_1 ⟩. Hence, using the decay assumptions on v_1 and v_2, we conclude that | I_2,4^(1,2)(x,y) |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. The same bound holds for the term I_2,4^(2,1) and we will skip the details. Finally, we are left with I_2,4^(2,2)(x,y) = ∫_^3 e^itz^2 z^2 χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1Pv_2e_22](x_1,y_1)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) x_1 y_1 z. By direct computation using (<ref>), [e_22v_1Pv_2e_22](x_1,y_1) = 1/‖ V_1 ‖_L^1()[V_2e_2](x_1)[V_2e_2]^⊤(y_1), and by Lemma <ref> and Lemma <ref>, we have the uniform estimate |∫_e^itz^2 z^2 χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ)dz |≤ C_μ| t |^-3/2. Hence, by exchanging the order of integration, we conclude that | I_2,4^(2,2)(x,y) |≤ C | t |^-3/2. Thus, we conclude (<ref>) by summing over the four terms. Finally, we are ready to complete the proof of the local decay estimate (<ref>). 
We sum the leading contributions of the spectral representation of e^itχ_0( - μ I)P_s^+ in (<ref>) by invoking Proposition <ref>, Proposition <ref>, Proposition <ref>, and Proposition <ref> to obtain F_t^0 - e^itμ/π i(i/2η F_t^1 + 1/η‖ V_1 ‖_L^1() F_t^2 + 1/η‖ V_1 ‖_L^1() F_t^3 + (2i/‖ V_1 ‖_L^1()+2 | c_0 |^2 /i‖ V_1 ‖_L^1()) F_t^4 ) = e^itμ/√(-4 π i t)( - [c_0 e_1 - Ψ(x)][σ_3 Ψ(y) - c_0e_1]^* - [c_0 e_1 - Ψ(x)][e^ iy^2/4tc_0e_1]^*. . + [e^-ix^2/4tc_0 e_1][σ_3 Ψ(y) - c_0e_1]^* + | c_0 |^2 e^-ix^2/4t e^-iy^2/4te_1e_1^⊤) = e^itμ/√(-4 π i t)( Ψ(x) [σ_3 Ψ(y)]^* + (e^-i x^2/4t-1)c_0 [σ_3 Ψ(y)]^* + (e^-i y^2/4t-1) Ψ(x) [ c_0e_1]^* . . + (1-e^-i x^2/4t - e^-i y^2/4t + e^-ix^2/4t e^-iy^2/4t)| c_0 |^2 e_1e_1^⊤), where we use the cancellation F_t^0 - e^itμ/π i2i/‖ V_1 ‖_L^1() F_t^4 = 0 in the first equality. We note that the first term gives us the finite rank operator F_t^+(x,y) = e^itμ/√(-4 π i t)Ψ(x) [σ_3 Ψ(y)]^*, and we show that the last three terms satisfy the better decay rate. Using, | 1 - e^-i x^2/4t|≤x^2/4| t|, and the fact that Ψ∈ L^∞() × L^∞(), we have |e^itμ e^iπ/4/2√(π)√(t)(e^-i x^2/4t - 1)c_0 e_1[σ_3 Ψ(y)]^* |≲| t |^-3/2⟨ x ⟩^2, and similarly |e^itμ e^iπ/4/2√(π)√(t)(e^-i y^2/4t-1) c_0Ψ(x) e_1^⊤|≲| t |^-3/2⟨ y ⟩^2. For the last term, we have | 1-e^-i x^2/4t - e^-i y^2/4t + e^-ix^2/4t e^-iy^2/4t| = | 1 - e^-i x^2/4t|| 1 - e^-i y^2/4t|≲| t |^-2⟨ x ⟩^2 ⟨ y ⟩^2. Thus, the leading contribution to e^itχ_0( - μ I)P_s^+ is F_t^+. § INTERMEDIATE AND HIGH ENERGY ESTIMATES In order to complete the proof of Theorem <ref>, we also need to prove the dispersive estimates when the spectral variable is bounded away from the thresholds ±μ. As usual, we focus on the positive semi-axis [μ,∞) of the essential spectrum and prove the dispersive estimates for energies λ > μ. The negative semi-axis (-∞,-μ] can be treated by symmetry of . We recall from Section 2 that the kernel of the limiting resolvent operator for _0 has the formula _0^±(z)(x,y) := (_0-(z^2+μ± i0))^-1 = [ ±ie^± i z | x -y |/2 z 0; 0 -e^-√(z^2+2μ)| x - y |/2 √(z^2 + 2μ) ], ∀ 0 < z <∞. From this, we have the following bound ‖_0^±(z) ‖_L^1 × L^1 → L^∞× L^∞≤ C | z |^-1. Hence, for sufficiently large z, the perturbed resolvent ^±(z) can be expanded into the infinite Born series ^±(z) = ∑_n=0^∞_0^±(z)(-_0^±(z))^n. More precisely, since the operator norm L^1 × L^1 → L^∞× L^∞ in the n-th summand above is bounded by C | z| ^-1 (C‖‖_1 | z|^-1)^n, the Born series converges in the operator norm whenever | z| > z_1 := 2C‖‖_L^1 × L^1. We define the high-energy cut-off by χ_h(z) := 1-χ(z), where χ(z) is a standard smooth even cut-off supported on [-z_1,z_1] satisfying χ(z) = 1 for | z |≤z_1/2 and χ(z) = 0 for | z |≥ z_1. We insert the cut-off and the Born series expansion into the spectral representation e^itχ_h(-μ I) P_s^+ and look to bound the following |⟨ e^itχ_h(-μ I) P_s^+ u⃗,v⃗⟩| = |∫_0^∞ e^itz^2z χ_h(z^2) ⟨ [^+(z) - ^-(z)]u⃗,v⃗⟩ z | ≤ C ∑_±∑_n=0^∞|∫_0^∞ e^itz^2z χ_h(z^2) ⟨_0^±(z)(_0^±(z))^nu⃗,v⃗⟩ z|, where u⃗,v⃗∈() ×(). From <cit.>, we have the following dispersive estimates: Under the same hypothesis as Theorem <ref>, we have ‖ e^itχ_h(-μ I)P_s^+ u⃗ ‖_L^∞()× L^∞()≲| t |^-1/2‖u⃗ ‖_L^1() × L^1(), and ‖⟨ x ⟩^-1e^itχ_h(-μ I) P_s^+u⃗ ‖_L^∞()× L^∞()≲| t |^-3/2‖⟨ x ⟩u⃗ ‖_L^1() × L^1(), for any | t |≥ 1. For (<ref>), see the proof of <cit.>, and for (<ref>), see the proof of <cit.>. Note that the high-energy dispersive estimate holds irrespective of the regularity of the thresholds ±μ. Let z_0>0 be the constant from Proposition <ref>. 
It may happen that z_1 is strictly larger than z_0. In this case, we need to derive estimates analogous to the above proposition in the remaining intermediate energy regime [-z_1,-z_0]∪[z_0,z_1]. To this end, we set χ_m(z) to be the intermediate energy cut-off given by χ_m(z) := 1 - χ_0(z) - χ_h(z), where χ_0(z) was the cut-off defined in the previous section in Proposition <ref>. For any | t |≥ 1, we have ‖ e^itχ_m(-μ I)P_s^+ u⃗ ‖_L_x^∞()× L_x^∞()≲| t |^-1/2‖u⃗ ‖_L_x^1() × L_x^1(), and ‖⟨ x ⟩^-1e^itχ_m(-μ I) P_s^+u⃗ ‖_L_x^∞()× L_x^∞()≲| t |^-3/2‖⟨ x ⟩u⃗ ‖_L_x^1() × L_x^1(). Before proving the above proposition, we need the following lemmas for pointwise bounds and operator norm bounds on the resolvent operators and its derivatives. The first lemma follows immediately from the expression (<ref>) and the triangle inequality || x - x_1 | - | x ||≤| x_1 |. Let γ_0 > 0. For every z > γ_0, and k ∈{0,1,2}, we have |∂_z^k _0^±(z)(x,y) |≤ C γ_0^-1-k⟨ x - y ⟩^k, and hence ‖∂_z^k _0^±(z)(x,·) ‖_X_-(1/2+k)-≤ C γ_0^-1-k⟨ x ⟩^k. Moreover, define _±(z)(x,x_1) = [ e^∓ i z | x | 0; 0 1 ]_0^±(z)(x,x_1)=[ ±ie^± i z (| x - x_1 | - | x |)/2 z 0; 0 -e^-√(z^2+2μ)| x - x_1 |/2 √(z^2 + 2μ) ]. Then, for any k ≥ 0, sup_x ∈|∂_z^k ^±(z)(x,x_1)|≤ C γ_0^-1-k| x_1 |. With these bounds, we are able to give operator norm bounds on the perturbed resolvent via the resolvent identity. Let γ_0 > 0. We have sup_| z | > γ_0‖∂_z ^±(z) ‖_X_3/2+→ X_-3/2-≲ 1, sup_| z | > γ_0‖∂_z^2 ^±(z) ‖_X_5/2+→ X_-5/2-≲ 1. By Lemma <ref>, for any | z | > γ_0, we have ^±(z) = (I+_0^±(z))^-1_0^±(z) =: S^±(z)^-1_0^±(z), as a bounded operator from X_1/2+ to X_-1/2-. Note that S^±(z) is boundedly invertible on X_-σ for any σ>0. By differentiation, we have ∂_z ^±(z) = -S^±(z)^-1∂_z_0^±(z) S^±(z)^-1_0^±(z) + S^±(z)^-1∂_z _0^±(z). Moreover, as a multiplication operator, :X_-σ→ X_σ is bounded for any σ>0 due to the exponential decay of . By Lemma <ref>, ∂_zR_0^±(z):X_3/2+→ X_-3/2- is bounded and since the embedding X_-1/2-⊂ X_-3/2- is continuous, we infer the bound (<ref>) by taking composition. By a similar argument, ‖∂_z^2 ^±(z) ‖_X_5/2+→ X_-5/2-≲ 1. By iterating the second resolvent identity, we write the perturbed resolvent as a finite sum ^±(z) = _0^±(z) - _0^±(z)_0^±(z) + _0^±(z)^±(z)_0^±(z), and we write e^itχ_m( - μ I)P_s^+(x,y) = ∑_j=1^3 ∫_0^∞ e^itz^2zχ_m(z^2)(-1)^j+1(_j^+(z) - _j^-(z))(x,y)dz, with _1^±(z) = _0^±(z), _2^±(z) = _0^±(z)_0^±(z), _3^±(z) = _0^±(z)^±(z)_0^±(z). Hence, to prove (<ref>) and (<ref>), it is sufficient to establish the estimates sup_±sup_j=1,2,3|∫_0^∞ e^itz^2zχ_m(z^2)_j^±(z)(x,y) z |≲min{| t |^-1/2,| t |^-3/2⟨ x ⟩⟨ y ⟩}. The term involving _1^± is handled by the earlier Proposition <ref>, while the second term involving _2^± can be treated analogously as in Proposition <ref>. We refer the reader to <cit.> and <cit.> for similar computations. For the term involving _3^±, we first write _0^±(z)(s_1,s_2) = [ e^± i z| s_1 | 0; 0 1 ]_±(z)(s_1,s_2), where the operator _±(z) was defined in (<ref>). 
Then, using that the kernel _0^±(z)(x,y) is symmetric in x and y variables, and using the matrix identity e_jj[ a_11 a_12; a_21 a_22 ]e_kk = a_jke_je_k^⊤, j,k ∈{1,2}, we compute the following kernel identity _3^±(z)(x,y) = ∫_^2_0^±(x,x_1)[^±(z)](x_1,y_1)_0^±(y,y_1) x_1 y_1 = [ e^± iz | x | 0; 0 1 ]∫_^2^±(x,x_1)[^±(z)](x_1,y_1)^±(y,y_1) x_1 y_1[ e^± iz | y | 0; 0 1 ] = e^± iz (| x | + | y |)⟨ (^±)^*(z)(x,·)e_1,^±(z)^±(z)(y,·)e_1⟩ e_1e_1^⊤ + e^± iz | x |⟨ (^±)^*(z)(x,·)e_2,^±(z)^±(z)(y,·)e_1⟩ e_1e_2^⊤ + e^± iz | y |⟨ (^±)^*(z)(x,·)e_1,^±(z)^±(z)(y,·)e_2⟩ e_2e_1^⊤ + ⟨ (^±)^*(z)(x,·)e_2,^±(z)^±(z)(y,·)e_2⟩ e_2e_2^⊤ =: e^± iz (| x | + | y |) A_1^±(z,x,y) + e^± iz | x | A_2^±(z,x,y) + e^± iz | y |A_3^±(z,x,y) + A_4^±(z,x,y). We plug this identity into the left hand side of (<ref>), and hence it will be sufficient to provide the bounds |∫_0^∞ e^itz^2 ± iz r zχ_m(z^2) A_k^±(z,x,y) z |≲min{| t |^-1/2,| t |^-3/2⟨ r ⟩}, k∈{1,…,4}, where r can represent 0 or | x|, | y|, or the sum of both variables. For the case k=1, by Lemma <ref>, we have that |∫_0^∞ e^itz^2 ± iz (| x | + | y |) zχ_m(z^2) A_1^±(z,x,y) z |≤ C | t |^-1/2‖∂_z (zχ_m(z^2) A_1^±(z,x,y) )‖_L_z^1(). Since the term zχ_m(z^2) is smooth and has compact support, we only need to track the derivatives when they fall onto either ^±(z) or ^±(z). In any case, thanks to the exponential decay of , and the bounds (<ref>), (<ref>) from the previous lemmas, we have the following uniform bound sup_±sup_z ∈ (χ_m)sup_j,k =1,2|∂_z ⟨ (^±)^*(z)(y,·)e_j, ^±(z)^±(z)(x,·)e_k⟩| ≲sup_±sup_z ∈ (χ_m)sup_j,k =1,2‖√(||)(x_1) (|^±(z)(x_1,x_2) | + |∂_z ^±(z)(x_1,x_2) |) √(||)(x_2) ‖_L_x_2^2 → L_x_1^2 ·‖√(||)(x_1) (|^±(z)(x,x_1)| + |∂_z^±(z)(x,x_1)|) e_j‖_L_x_1^2 ·‖√(||)(x_2)(|^±(z)(x_2,y)| + |∂_z^±(z)(x_2,y)|) e_k‖_L_x_2^2 ≲ 1, for all x,y ∈. To prove the weighted dispersive estimate, we invoke the stronger estimate in Lemma <ref>: |∫_0^∞ e^itz^2 ± iz (| x | + | y |) zχ_m(z^2) A_1^±(z,x,y) z |≤ C | t |^-3/2‖ [∂_z^2 ± i(| x | + | y |)∂_z] (χ_m(z^2) A_1^±(z,x,y) )‖_L_z^1() Here, we can apply the same argument as in (<ref>) for the two derivatives bound on A_1^± using the estimates (<ref>) and (<ref>), whereas the bound on one derivative for A_1^± leads to the weights ⟨ x ⟩⟨ y ⟩. Thus, we prove (<ref>) for k=1. The other cases follow by the same argument and we are done. Finally, we conclude with the proof of Theorem <ref>. By combining the estimates from Proposition <ref>, Proposition <ref>, and Proposition <ref>, we have established the bounds ‖ e^itP_s^+ ‖_L_x^∞()× L_x^∞()≲| t |^-1/2‖ ‖_L_x^1() × L_x^1(), as well as ‖⟨ x ⟩^-2 (e^itP_s^+ - F_t^+) ‖_L_x^∞()× L_x^∞()≲| t |^-3/2‖ ‖_L_x^1() × L_x^1(), for any := (u_1,u_2)^⊤∈() ×() and | t |≥ 1, with F_t^+ given by (<ref>). By Remark <ref>, we can similarly deduce that the unweighted dispersive estimate for the evolution e^itP_s^- using the identity (<ref>). On the other hand, for the weighted estimate, we find that the leading contribution to e^itP_s^- is given by F_t^-(x,y) = σ_1 F_-t^+(x,y)σ_1 = -e^-itμ/√(4 π i t)[σ_1Ψ(x)][σ_3σ_1Ψ(y)]^*, where we used the anti-commutation identity σ_3 σ_1 = - σ_1 σ_3. Thus, we conclude the local decay estimate (<ref>) and the formula (<ref>) by setting F_t := F_t^+ + F_t^-. § NEUMANN SERIES Let A be an invertible operator and B be a bounded operator satisfying ‖ B ‖ < ‖ A^-1‖^-1. Then, A-B is invertible with (A-B)^-1 =A^-1∑_n=0^∞ (BA^-1)^n = A^-1 + A^-1BA^-1 + A^-1BA^-1BA^-1 + ⋯, and ‖ (A-B)^-1‖≤ (‖ A^-1‖^-1 - ‖ B ‖)^-1. By the hypothesis ‖ B ‖ < ‖ A^-1‖^-1, we have ‖ A^-1B‖ <1. 
Consider the identity (A-B)^-1 = (I-A^-1B)^-1A^-1. The term on the right-hand side can be written as the usual Neumann series (I-A^-1B)^-1 = ∑_n=0^∞ (A^-1B)^n. Thus, multiplying by A^-1 and using (A^-1B)^n A^-1 = A^-1(BA^-1)^n, we deduce (<ref>). Note that the argument also holds true for (A-B)^-1 = A^-1(I-BA^-1)^-1. Now, since we have the estimate ‖ (I- A^-1B)^-1‖≤ (1 - ‖ A^-1B‖)^-1, we deduce (<ref>) by the sub-multiplicative property of the operator norm.
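For readers who want a quick numerical sanity check of the lemma above, the following NumPy sketch (our illustration, not part of the original argument) truncates the Neumann series for random matrices and verifies the norm bound; the matrix size, scaling, and truncation order are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # a small perturbation of I, invertible
A_inv = np.linalg.inv(A)

# Scale B so that ||B|| < ||A^{-1}||^{-1} (spectral norms), as the lemma requires.
B = rng.standard_normal((n, n))
B *= 0.5 / (np.linalg.norm(B, 2) * np.linalg.norm(A_inv, 2))

# Truncated Neumann series: (A-B)^{-1} is approximated by A^{-1} sum_{k=0}^{K} (B A^{-1})^k.
K = 60
series = sum(np.linalg.matrix_power(B @ A_inv, k) for k in range(K + 1))
approx = A_inv @ series
exact = np.linalg.inv(A - B)
print("truncation error:", np.linalg.norm(approx - exact, 2))

# Norm bound: ||(A-B)^{-1}|| <= (||A^{-1}||^{-1} - ||B||)^{-1}.
lhs = np.linalg.norm(exact, 2)
rhs = 1.0 / (1.0 / np.linalg.norm(A_inv, 2) - np.linalg.norm(B, 2))
print("bound holds:", lhs <= rhs)
```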
http://arxiv.org/abs/2307.04429v1
20230710090926
Designing Novel Cognitive Diagnosis Models via Evolutionary Multi-Objective Neural Architecture Search
[ "Shangshang Yang", "Haiping Ma", "Cheng Zhen", "Ye Tian", "Limiao Zhang", "Yaochu Jin", "Xingyi Zhang" ]
cs.NE
[ "cs.NE", "cs.AI", "cs.LG" ]
Designing Novel Cognitive Diagnosis Models via Evolutionary Multi-Objective Neural Architecture Search Manuscript received –. This work was supported in part by the National Key Research and Development Project under Grant 2018AAA0100105 and 2018AAA0100100, in part by the National Natural Science Foundation of China under Grant 61822301, 61876123, 61906001, 62136008, U21A20512, and U1804262, in part by the Anhui Provincial Natural Science Foundation under Grant 1808085J06 and 1908085QF271, in part by the Collaborative Innovation Program of Universities in Anhui Province under Grant GXXT-2020-013, and in part by the State Key Laboratory of Synthetical Automation for Process Industries under Grant PAL-N201805 (Corresponding authors: Limiao Zhang and Xingyi Zhang). Shangshang Yang, Haiping Ma, Cheng Zhen, Ye Tian, Limiao Zhang, Yaochu Jin, Fellow, IEEE, and Xingyi Zhang, Senior Member, IEEE S. Yang and X. Zhang are with the Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Artificial Intelligence, Anhui University, Hefei 230039, China (email: [email protected]; [email protected]). C. Zhen, Y. Tian, and H. Ma are with the Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Institutes of Physical Science and Information Technology, Anhui University, Hefei 230601, China (email: [email protected]; [email protected]; [email protected]). L. Zhang is with the Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, 230601, Anhui, China (email: [email protected]). Y. Jin is with the Faculty of Technology, Bielefeld University, Bielefeld 33619, Germany (email: [email protected]). August 12, 2023 =========================================================================================================================================================== Cognitive diagnosis plays a vital role in modern intelligent education platforms by revealing students' proficiency in knowledge concepts for subsequent adaptive tasks. However, due to the requirement of high model interpretability, existing manually designed cognitive diagnosis models have architectures that are too simple to meet the demands of current intelligent education systems, and the bias of human design also limits the emergence of effective cognitive diagnosis models. In this paper, we propose to automatically design novel cognitive diagnosis models by evolutionary multi-objective neural architecture search (NAS).
Specifically, we observe that existing models can be represented by a general model handling three given types of inputs, and we thus first design an expressive search space for the NAS task in cognitive diagnosis. Then, we propose multi-objective genetic programming (MOGP) to explore the NAS task's search space by maximizing model performance and interpretability. In the MOGP design, each architecture is transformed into a tree architecture and encoded by a tree for easy optimization, and a tailored genetic operation based on four sub-genetic operations is devised to generate offspring effectively. Besides, an initialization strategy is also suggested to accelerate the convergence by evolving half of the population from existing models' variants. Experiments on two real-world datasets demonstrate that the cognitive diagnosis models searched by the proposed approach exhibit significantly better performance than existing models and also hold as good interpretability as human-designed models. Cognitive diagnosis models, neural architecture search, evolutionary algorithm, multi-objective optimization, genetic programming, model interpretability. § INTRODUCTION Cognitive diagnosis (CD) in the field of intelligent education <cit.> aims to reveal students' proficiency in specific knowledge concepts according to their historical response records of answering exercises and the exercise-concept relational matrix (termed Q-matrix) <cit.>. Fig. <ref> gives an illustrative example of CD, where students {A,B} have practiced a series of exercises (i.e., {e_1,e_3,e_4} and {e_1,e_2,e_3}) and got the corresponding responses. Based on the records and the Q-matrix, the students' knowledge proficiency in each concept can be obtained through CD. A wide range of intelligent education tasks, such as personalized exercise recommendation <cit.> and targeted training <cit.>, can then benefit from the students' diagnosis results. With the rising demand for cognitive diagnosis models (CDMs) in online education platforms, many researchers have developed various CD approaches, which are generally grouped into two types. The first genre of approaches is mainly proposed by researchers in educational psychology.
Their designed CDMs usually rely on simple handcrafted functions to model student-exercise interactions and portray the student learning ability in a one-dimensional vector or other manners. The representatives include Item Response Theory (IRT) <cit.>, Deterministic Input, Noisy ’And’ gate (DINA) <cit.>, Multidimensional IRT (MIRT) <cit.>, and Matrix Factorization (MF) <cit.>. Item Response Theory (IRT) <cit.> and Deterministic Inputs, Noisy-And gate (DINA) <cit.> are two pioneering approaches, where IRT and DINA utilize a unidimensional continuous vector and a binary vector respectively to denote the student mastery for predicting the probabilities of a student correctly answering exercises. In addition, there are also some CD approaches improving above two CDMs or using other techniques, such as MIRT <cit.> which extends IRT's unidimensional student and exercise latent traits into multidimensional space, and MF <cit.> based on the matrix factorization technique. The second genre of ones <cit.> is based on neural networks (NNs), where the student learning ability is portrayed by an inner latent vector. The representatives contain Neural Cognitive Diagnosis (NCD) <cit.>, Prerequisite Attention model for Knowledge Proficiency diagnosis (PAKP) <cit.>, and Relation map driven Cognitive Diagnosis (RCD) <cit.> . As the critical components of CDMs, diagnostic functions are mainly responsible for predicting student exercising scores by integrating three types of input vectors (i.e., student/exercise/concept-related input vector) in a highly interpretable manner. To pursue high model interpretability, existing CDMs' diagnostic functions are desired to hold simple architectures. For example, IRT <cit.> and MF <cit.> utilize the simple logistic function and inner-product respectively as their diagnostic functions. However, there exist two kinds of problems for these simple handcrafted diagnostic functions. Firstly, simple diagnostic functions' architectures disable CDMs from modeling complex relationships between students and exercises well <cit.>, failing to meet the demands of modern education systems containing a large quantity of student exercising data. Secondly, the design of existing diagnostic functions heavily relies on researchers' knowledge of both educational psychology and NNs <cit.>, which is labor-intensive and needs a lot of trial-and-error. And the human design bias may limit the emergence of novel diagnostic functions to some extent. Furthermore, recent CD approaches <cit.> put less focus on the architecture design of diagnostic functions but on enhancing the input vectors for high performance, which hinders the development of CDMs to some extent. As the key components of CDMs, diagnostic functions are mainly responsible for predicting student exercising scores by integrating three types of input vectors in a high interpretability manner, including the student ability-related latent vector (i.e., student-related vector), the exercise-related latent vectors, and the concept-related latent vectors. Due to the aim of pursuing high model interpretability during the model design, existing CDMs' diagnostic functions are desired to hold simple architectures. For example, IRT <cit.> and MF <cit.> utilize the simple logistic function and the intuitive inner-product respectively as their diagnostic functions to linearly combine student-related and exercise-related latent vectors. However, there exist two aspects of problems for these simple manually-designed diagnostic functions. 
On the one hand, the simple architectures of diagnostic functions disable CDMs from modeling the complex relationship between students and exercises well, failing to meet the demands of current intelligent education systems containing a large quantity of student exercising data. On the other hand, the design of existing diagnostic function architectures heavily relies on researchers' domain knowledge in both educational psychology and NNs, where not only the design process is labor-intensive and needs a lot of trial-and-error but also the human design bias may limit the emergence of novel diagnostic functions to some extent. Although NCD <cit.> argues to find an automatic way to learn the complex interactions between students and exercises, its simple diagnostic function architecture is still manually designed by summarizing architectures of previous CDMs. Furthermore, recent CD approaches do not focus on the architecture design of diagnostic functions but on enhancing the input vectors based on existing diagnostic function architectures for improving the prediction performance. Therefore, it is necessary to design more effective novel diagnostic function architectures to meet the demands of current intelligent education systems. For the above reasons, this paper aims to develop novel CDMs by automatically designing effective diagnostic function architectures. Since Zoph and Le <cit.> proposed to search neural architectures for image tasks, neural architecture search (NAS) <cit.> has been widely applied to many research fields and achieved significant success <cit.>. Among various search strategies of NAS, including reinforcement learning <cit.> and gradient optimization <cit.>, evolutionary algorithms (EAs), especially multi-objective evolutionary algorithms (MOEAs), have shown a more powerful ability to search <cit.>. Moreover, compared to other NAS approaches, MOEA-based NAS approaches <cit.> are superior in getting out of local optima and presenting trade-offs among multiple objectives , where many architectures holding different attributes can be found in a single run. The representative approaches include Neural Architecture Search using Multi-Objective Genetic Algorithm (NSGA-Net) <cit.>, and Lamarckian Evolutionary algorithm for Multi-Objective Neural Architecture DEsign (LEMONADE) <cit.>. However, existing NAS approaches cannot be applied to CD due to the difference in search space between CD and other tasks, and different search space generally needs different MOEAs <cit.>, whose representations and genetic operations are task-tailored <cit.>, further hindering them from being applied to CD. Therefore, this paper proposes an evolutionary multi-objective NAS to design novel CDMs (termed EMO-NAS-CD), where an expressive search space is first devised and multi-objective genetic programming (MOGP) is employed to explore the search space to develop high-performance CDMs with good interpretability. Specifically, our main contributions are as follows: * This paper is the first NAS work to design CDMs, which explores the search space design and search strategy design of NAS. Regarding the search space, we first design an expressive search space for the NAS task of CD (NAS-CD) by summarizing existing diagnostic function architectures. Within, each candidate architecture is denoted by a general model, which takes at most three given types of input vectors as input nodes. 
Then, regarding the search strategy, we propose MOGP to explore the search space by solving a bi-objective problem of NAS-CD, which maximizes the objectives of model performance and interpretability simultaneously. To make the searched architectures highly interpretable, the interpretability of an architecture is intuitively characterized by its depth, its breadth, and the number of computation nodes it contains. * In the MOGP design, we first transform architectures under the search space into tree architectures and then encode them by trees for easy optimization, which avoids the optimization difficulties of vector-based encoding (e.g., the problem of variable-length encoding). Based on four sub-genetic operations, a tailored genetic operation is devised for effective offspring generation in the MOGP. Besides, to accelerate the MOGP's convergence, we further design a prior knowledge-based initialization strategy to evolve partial individuals of the population from existing CDMs' variants. * To validate the effectiveness of the proposed EMO-NAS-CD, we compare it with some representative CDMs on two popular education datasets. Experimental results show that EMO-NAS-CD can find a set of architectures to build CDMs, which present trade-offs between interpretability and performance. The found architectures hold both significantly better prediction performance and good interpretability. Moreover, we verify the effectiveness of the suggested genetic operation as well as the initialization strategy, and we also demonstrate the superiority of the devised model interpretability objective over the common model complexity. The rest of this paper is organized as follows. Section II reviews existing CD approaches and presents the motivation for this work. Section III introduces the proposed search space. Section IV presents the details of the proposed approach. The experiments are shown in Section V, and we give conclusions and future work in Section VI. § PRELIMINARIES AND RELATED WORK §.§ Preliminaries of Cognitive Diagnosis Task Formally, there are N students, M exercises, and K knowledge concepts in an intelligent education platform for the cognitive diagnosis task, which can be represented by S = {s_1,s_2,⋯,s_N}, E= {e_1,e_2,⋯,e_M}, and C={c_1,c_2,⋯,c_K}, respectively. Besides, there is commonly an exercise-concept relation matrix Q= (Q_jk∈{0,1})^M× K, termed Q-matrix, to depict the relationship between exercises and knowledge concepts, where Q_jk=1 means that exercise e_j contains knowledge concept c_k and Q_jk=0 otherwise.
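To make the notation above concrete, here is a tiny NumPy sketch of the Q-matrix for a toy setting; the sizes and entries are made up for illustration and are unrelated to the datasets used later in the paper.

```python
import numpy as np

N, M, K = 4, 5, 3          # toy numbers of students, exercises, knowledge concepts

# Q-matrix: Q[j, k] = 1 iff exercise e_{j+1} contains knowledge concept c_{k+1}.
Q = np.array([
    [1, 0, 0],   # e_1 -> c_1
    [1, 1, 0],   # e_2 -> c_1, c_2
    [0, 1, 0],   # e_3 -> c_2
    [0, 1, 1],   # e_4 -> c_2, c_3
    [0, 0, 1],   # e_5 -> c_3
], dtype=np.int64)
assert Q.shape == (M, K)

# Concepts attached to exercise e_4 (0-based row index 3).
print(np.nonzero(Q[3])[0])   # -> [1 2], i.e., concepts c_2 and c_3
```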
R_log is used to denote the students' exercising response logs and it can be represented by a set of triplets (s_i,e_j,r_ij), where s_j ∈ S, e_j ∈ E, and r_ij∈{0,1} refers to the response score of student s_i on exercise e_j. Here r_ij=1 indicates the answer of student s_i on e_j is correct and r_ij=0 otherwise. Based on the students' response logs R_log and Q-matrix, the cognitive diagnosis task mines the students' proficiency in knowledge concepts by building a model ℱ to predict the students' exercising score. To predict the score of student s_i on exercise e_j, the model ℱ can take three types of inputs, including the student-related feature vector 𝐡_S∈ R^1× D, the exercise-related feature vector 𝐡_E∈ R^1× D, and the knowledge concept-related feature vector 𝐡_C∈ R^1× K, which can be obtained by . 𝐡_S = 𝐱_i^S × W_S, W_S∈ R^N× D 𝐡_E = 𝐱_j^E × W_E, W_E∈ R^M× D 𝐡_C = 𝐱_j^E × Q = (Q_j1, Q_j2,⋯, Q_jK) ., where D is the embedding dimension (usually equal to K for consistency), 𝐱_i^S ∈{0,1}^1× N is the one-hot vector for student s_i, 𝐱_j^E ∈{0,1}^1× M is the one-hot vector for exercise e_j, and W_S and W_E are trainable matrices in the embedding layers. Then, the model ℱ outputs the predicted response r̂_ij as r̂_ij = ℱ(𝐡_S,𝐡_E,𝐡_C), where ℱ(·) is the diagnostic function to combine three types of inputs in different manners. Generally speaking, after training the model ℱ based on students' response logs, each bit value of 𝐡_S represents the student's proficiency in the corresponding knowledge concept. §.§ Related Work on Cognitive Diagnosis In the past decades, a series of CDMs have been developed based on researchers' experiences in educational psychology and deep neural networks (DNNs), mainly from two perspectives. §.§.§ Incorporating Richer Input Information As introduced above, there are three types of inputs that can be used for the diagnostic function in a CDM, including the student-related vector 𝐡_S, the exercise-related vector 𝐡_E, and the knowledge concept-related vector 𝐡_C. Therefore, the first type of approaches aims to incorporate richer context information or other information into these input vectors to boost the diagnostic function inputs for improving the prediction performance. To achieve this, Zhou et al. <cit.> proposed Educational context-aware Cognitive Diagnosis (ECD) <cit.> to model educational context-aware features in student learning. Specifically, the student's educational contexts (e.g., school information, student personal interests, parents' education) are incorporated into the student-related vector 𝐡_S by a hierarchical attention NN. Then, the integrated student-related vector 𝐡_S will be processed by a common diagnostic function. The incorporated educational context information can indeed improve the diagnosis performance of different diagnostic functions, including IRT, MIRT, and NCD. In <cit.>, Gao et al. proposed RCD to incorporate the model inputs with the prior relations between knowledge concepts. To be specific, students, exercises, and concepts are first built as a hierarchical graph. This graph contains a student-exercise interaction map, a concept-exercise correlation map, and a concept dependency map that is extracted from the prior relations between knowledge concepts. Then, a multi-level attention NN is used to achieve node aggregation of the hierarchical graph, and the aggregated node features are used as three input vectors, 𝐡_S, 𝐡_E, and 𝐡_C, to improve the model performance. Similarly, Wang et al. 
<cit.> proposed CDGK (i.e., Cognitive DiaGnosis by Knowledge concept aggregation) to incorporate the relations between knowledge concepts into input vectors. Different from RCD, CDGK only builds the graph structure of knowledge concepts according to the dependency among knowledge concepts. Only the leaf nodes in the constructed graph will be used to aggregate the target node's features. Finally, the aggregated knowledge concept features will be taken as the concept-related vector 𝐡_C used for subsequent diagnosis process. §.§.§ Designing Diagnostic Functions The above CD approaches only focus on incorporating extra information into input vectors, and directly employ existing diagnostic functions to handle the enhanced input vectors for diagnosis. In contrast, the second type of approaches focuses on designing powerful diagnostic functions, which are responsible for combining input vectors in highly interpretable manners. As the most typical CDM, the diagnostic function of DINA <cit.> is to first obtain two binary student and concept latent features (θ, β∈{0,1}^1× K) and two exercise latent features (guessing g∈ R^1 and slipping sl∈ R^1) from input vectors. Then, the score of student s_i on exercise e_j can be represented as r̂_ij = g^1-nt(1-sl)^nt, where nt = ∏_kθ_k^β_k. Despite the high interpretability of its diagnostic function, DINA suffers from poor prediction performance in current CD tasks due to its poor scalability on large-scale student exercising data. As another typical CDM, the diagnostic function of IRT <cit.> first takes student-related and exercise-related vectors 𝐡_S and 𝐡_E, and then transforms them into one student latent feature θ∈ R^1 and two exercise latent features (β∈ R^1 and a ∈ R^1), respectively. Next, a simple logistic function is applied to the linear transformation of θ, β, and a, e.g., a simple version is Sigmoid(a(θ -β)) as stated in <cit.>. Finally, the diagnostic function outputs the predicted scores of the student on exercises. Similarly, MIRT <cit.> applies the same logistic function as IRT to the linear transformation of the student latent feature θ∈ R^1× K, the exercise latent feature β∈ R^1, and the knowledge concept latent feature α∈ R^1× K. θ and α are equal to 𝐡_S and 𝐡_C, and β is transformed from 𝐡_E. Note that student and knowledge concept latent features in MIRT are multidimensional for the demands of multidimensional data <cit.>. Finally, its prediction process can be output as r̂_ij = Sigmoid(β+∑α⊙θ). Compared to IRT, MIRT exhibits better performance yet without losing interpretability. Differently, MF <cit.> is originally proposed for recommender systems but can be used for CD from the data mining perspective, where students and exercises in CD can correspond to users and items in recommender systems. As demonstrated in <cit.>, the diagnostic function of MF can be modeled as directly applying the inner-product to 𝐡_S and 𝐡_E. Finally, its prediction process can be represented by r̂_ij = ∑𝐡_S⊙𝐡_E, whose architecture is quite simple yet effective compared to other CDMs. The most representative approach NCD <cit.> builds a new diagnostic function with one shallow layer and three fully connected (FC) layers. Firstly, the student latent feature 𝐟_S∈ R^1× K and two exercise latent features 𝐟_diff∈ R^1× K and f_disc∈ R^1 are first obtained by { 𝐟_S = Sigmoid(𝐡_S) 𝐟_diff = Sigmoid(𝐡_E) f_disc = Sigmoid(𝐡_E× W_disc), W_disc∈ R^D× 1.. 
Then, the shallow layer inspired by MIRT is used to linearly combine the above features and the concept-related vector 𝐡_C as 𝐲 = 𝐡_C⊙(𝐟_S-𝐟_diff )× f_disc. Afterward, the hidden feature 𝐲 is fed into three FC layers with the monotonicity property to get the final prediction output. Ma et al. proposed Knowledge-Sensed Cognitive Diagnosis (KSCD) to diagnose the student's proficiency. Similar to NCD, KSCD's diagnostic function <cit.> consists of two FC layers followed by one shallow layer. The two FC layers are used to combine the learned knowledge concept features with 𝐡_S and 𝐡_E, respectively, to obtain enhanced student and exercise features. Then, the shallow layer is used to further combine the enhanced features and 𝐡_C to get the prediction. §.§ Motivation of This Work Despite the competitive performance of the above CDMs, their diagnostic function architectures are too simple to model complex student-exercise interactions well <cit.>, especially for the large-scale student exercising data in current intelligent education systems. Moreover, the design of existing diagnostic function architectures heavily relies on researcher expertise in the domains of both education and NNs, which needs a lot of trial-and-error and thus is labor-intensive and costly <cit.>. Besides, the human design bias may cause some potential yet beyond-human-knowledge architectures to be missed. Therefore, in contrast to current CD approaches focusing on improving model inputs, this paper aims to develop more effective diagnostic function architectures for CD. As an automated neural architecture design paradigm <cit.>, NAS has been widely used in many research domains <cit.> and has made significant progress since it was first proposed by Zoph and Le in <cit.>. Existing NAS approaches have achieved great success in searching for the best architectures of various prevailing DNNs, including convolutional neural networks (CNNs) for computer vision (CV) tasks <cit.>, recurrent neural networks (RNNs) for natural language processing (NLP) <cit.> and speech-related <cit.> tasks, graph neural networks (GNNs) for tasks with non-Euclidean data <cit.>, and Transformers for CV <cit.>, NLP <cit.>, and speech-related <cit.> tasks. However, due to the difference in search space among different domains, these NAS approaches cannot be applied to search for the optimal diagnostic function architecture. Besides, the architectures of existing diagnostic functions can be seen as a general model, which handles three given types of inputs and outputs a scalar or a vector. To this end, this paper proposes an evolutionary multi-objective optimization-based NAS approach for automatically designing effective diagnostic function architectures to build novel CDMs.
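Before moving on to the proposed search space, it is worth seeing how compact the handcrafted diagnostic functions reviewed above are. The sketch below mirrors the IRT, MIRT, and MF prediction rules stated in this section; the random vectors stand in for learned embeddings, and the scalar traits used for IRT and the scalar summary used for MIRT's difficulty are simplifications made purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
K = 8                                        # number of knowledge concepts
h_S = rng.standard_normal(K)                 # student-related vector
h_E = rng.standard_normal(K)                 # exercise-related vector
h_C = (rng.random(K) < 0.4).astype(float)    # concept-related vector (a Q-matrix row)

# IRT (simple version): Sigmoid(a * (theta - beta)) with scalar latent traits.
theta, beta, a = h_S.mean(), h_E.mean(), 1.0          # scalar stand-ins for illustration
irt_pred = sigmoid(a * (theta - beta))

# MIRT: Sigmoid(beta + sum(alpha * theta)), with theta = h_S and alpha = h_C.
mirt_pred = sigmoid(h_E.mean() + np.sum(h_C * h_S))   # beta taken as a summary of h_E

# MF: inner product of the student- and exercise-related vectors.
mf_pred = np.sum(h_S * h_E)                           # raw score, as in r_ij = sum(h_S * h_E)

print(irt_pred, mirt_pred, mf_pred)
```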
Here, we first design an expressive search space by summarizing existing architectures, and then we propose MOGP to explore the devised search space by optimizing the objectives of model performance and model interpretability simultaneously. To the best of our knowledge, our work is the first to apply the NAS technique to the CD task. § THE PROPOSED SEARCH SPACE FOR CD As stated above, the search space of existing NAS approaches <cit.> is task-specific, which cannot be applied to CD for searching diagnostic functions. To design the search space for CD, we first observe and summarize existing CD approaches that design novel diagnostic functions. Then we find that their diagnostic functions combine three types of input vectors in a linear or non-linear manner and finally output a scalar or a vector for the score prediction. In other words, the diagnostic function architecture can be seen as a general model that has three input nodes, some internal nodes, and one output node. Both its output node and its internal nodes are computation nodes to handle their inputs by their adopted operators. We can find that the general model is similar to models under the search space of RNN in NAS <cit.>. Fig. <ref>(a) plots the RNN cell found by Efficient Neural Architecture Search (ENAS) <cit.>, where x[t] and h[t-1] are two input nodes, avg is the output node, and others are computation nodes. By summarizing previous CD approaches, we collected some operators that be used for computation nodes of the general model. These operators are divided into two types, i.e., unary and binary operators, which are used to receive one input and two inputs, respectively. Here computation nodes (including the output node) in the general model can only handle at most two inputs, which is different from that of RNNs. As a result, we take the general model as the proposed search space for CD, where 15 candidate operators in Table <ref> can be adopted by each computation node and the following are their descriptions: * Unary operators. Each unary operator only takes one input x and returns its output. FFN_D returns the vectors, Sum, Mean, and FFN return the scalar outputs, while the other eight unary operators return the outputs having the same shape as their inputs, which contains five arithmetic operators (i.e., Neg, Abs, Inv, Square, and Sqrt) and three activation functions Tanh <cit.>, Sigmoid <cit.>, and Softplus <cit.>. * Binary operators. Three binary operators considered in the general model: in addition to addition Add and multiplication Mul, we further consider a Concat operator to aggregate two input vectors into one vector. Note that the output shapes of Add and Mul are determined by the maximal shape of two inputs. For example, when one input is a scalar x and another input is a vector 𝐲∈ R^1× D, the output shape is same as 𝐲 (equal to 1× D). Here FFN and Concat are NN-based operators containing learnable parameters, which make the proposed search space more expressive than that of RNN. Note that the general model may output a scalar y or a vector 𝐲, because the general model may adopt different operators while the output shapes of candidate operators are different. To make the prediction process successful, the general model has to execute the following process to get the prediction score of student s_i on exercise e_j: r̂_ij={ y, if y ∈ R^1 FC_3(FC_2(FC_1(𝐲))), if 𝐲∈ R^1× D ., where FC_1(·), FC_2(·), and FC_3(·) are three FC layers with output dimensions H_1, H_2, and H_3, respectively. 
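As a rough illustration of how this search space can be realized, the sketch below implements a subset of the unary and binary operators from Table 1 together with the output rule in the preceding equation. The reduced hidden sizes, the random weights, and the omission of the NN-based operators FFN/FFN_D (and of the monotonicity constraint on the FC layers, discussed next) are simplifications on our part.

```python
import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

# A subset of the unary operators (Sum and Mean reduce a vector to a scalar).
UNARY = {
    "Neg": lambda x: -x,          "Abs": np.abs,
    "Square": np.square,          "Sqrt": lambda x: np.sqrt(np.abs(x)),
    "Tanh": np.tanh,              "Sigmoid": sigmoid,
    "Softplus": lambda x: np.log1p(np.exp(x)),
    "Sum": np.sum,                "Mean": np.mean,
}

# Binary operators: Add/Mul broadcast scalar-vector inputs; Concat joins two vectors.
BINARY = {
    "Add": lambda a, b: a + b,
    "Mul": lambda a, b: a * b,
    "Concat": lambda a, b: np.concatenate([np.atleast_1d(a), np.atleast_1d(b)]),
}

def output_head(y, h1=16, h2=8, rng=np.random.default_rng(0)):
    """Second part of the general model: identity for a scalar output,
    a small FC stack (toy sizes, untrained weights) for a vector output."""
    if np.ndim(y) == 0:
        return float(y)
    w1 = rng.standard_normal((y.size, h1))
    w2 = rng.standard_normal((h1, h2))
    w3 = rng.standard_normal((h2, 1))
    h = sigmoid(y @ w1)
    h = sigmoid(h @ w2)
    return float(sigmoid(h @ w3))

# Example CD cell: Sum(Mul(h_S, h_C)) yields a scalar, so the identity branch is used.
h_S, h_C = np.random.rand(5), np.array([1.0, 0.0, 1.0, 0.0, 0.0])
print(output_head(UNARY["Sum"](BINARY["Mul"](h_S, h_C))))
```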
The three FC layers are set to hold the monotonicity property according to the experiences in <cit.>. By doing so, the probability of a correct response to the exercise is monotonically increasing at any dimension of the student’s knowledge proficiency, which enables FC layers to hold the same interpretability as the identity operation. For better understanding, Fig. <ref>(b) presents the general diagnostic function architecture under the proposed search space. The general diagnostic function architecture (the general model) contains two parts. The first part (termed CD cell) is similar to the RNN cell in NAS, and the second part is a three-layer FC NN or an identity operation as shown in (<ref>). The CD cell has several computation nodes (represented by ovals) and at most three input nodes (𝐡_S, 𝐡_E, and 𝐡_C, represented by triangles). Different from the RNN cell, its output node is also computation node, and computation nodes are selected from unary operators (denoted by green nodes) or binary operators (denoted by orange nodes). After obtaining the CD cell's output y, either the identity operation or the three-layer FC NN will be applied to get the final prediction r̂_ij. As stated in <cit.>, a promising search space should contain not only a large number of expressive neural architectures but also as many existing handcrafted architectures as possible. To demonstrate the effectiveness of the proposed search space, we take four representative CDMs, including IRT, MIRT, MF, and NCD, as illustrative examples. Fig. <ref>(a) to Fig. <ref>(d) present their diagnostic function architectures under the proposed search space. As can be seen, these typical CDMs can be easily represented under the proposed search space by specific computation nodes and selected input nodes. § THE PROPOSED EMO-NAS-CD This section will first present the proposed EMO-NAS-CD framework, and then sequentially give individual representation, objectives, and a tailored genetic operation. Finally, other details are introduced. §.§ Overall Framework of EMO-NAS-CD The main idea of the proposed EMO-NAS-CD is to search high-performance diagnostic function architectures holding high interpretability under the devised search space. To this end, we aim to solve the NAS-CD task by optimizing a multi-objective optimization problem (MOP), which has two objectives: model performance and model interpretability. To avoid the difficulties of using vector-based encoding for the devised search space (e.g., variable-length encoding problem), we propose MOGP (a popular type of MOEAs <cit.>) to solve the MOP by transforming architectures into tree architectures and encoding them by trees, because genetic programming (GP) <cit.> can solve tree-encoding-based problems well. The devised MOGP follows the framework of NSGA-II <cit.>, and we devise an effective genetic operation and a population initialization strategy for the MOGP. As can be seen that the proposed EMO-NAS-CD is a MOGP-based NAS approach for CD. Based on the classical NSGA-II <cit.>, the main idea of the proposed EMO-NAS-CD is to search effective diagnostic function architectures holding high interpretability by maximizing the objectives of model performance and model interpretability. 
Instead of using vector-based encoding for each architecture in the proposed search space, we first transform each architecture into its corresponding tree architecture, and then encode it by the tree-based representation, which avoids some difficulties of vector-based encoding (e.g., variable-length encoding difficulty) in general MOEAs and is easier to be optimized by GP <cit.>. Moreover, we devise an effective genetic operation inspired by GP and a population initialization strategy for the proposed EMO-NAS-CD. As a result, the proposed EMO-NAS-CD is a MOGP-based NAS approach for CD. The overall framework of the proposed EMO-NAS-CD is summarized in Fig. <ref>, which is mainly composed of five steps. Firstly, a population initialization strategy (in Section <ref>) is employed to randomly generate Pop individuals as population 𝐏. Second, the standard binary tournament selection is employed to select individuals for getting the mating pool 𝐏'. Next, a novel genetic operation is applied to 𝐏' to generate offspring individuals and form the offspring population 𝐐. Fourth, train the architecture of each individual of 𝐐 for a certain number of (Num_E) epochs to compute its objective values. Fifth, the environmental selection in NSGA-II <cit.> will be employed to identify and maintain the individuals that hold better objective values from the union of population 𝐏 and offspring population 𝐐. The second to the fifth step will be repeated until the maximal number of generation Gen is exceeded, then the non-dominated individuals will finally be output. For details, Algorithm <ref> also summarizes the main procedures of the proposed EMO-NAS-CD. It is worth noting that there exist some individuals during the whole optimization process, whose neural architectures achieve terrible performance, nearly close to random performance. The reason behind this is that these architectures will encounter the gradient explosion problem when they continuously use some operations (e.g., Square, Tanh, and Softplus), which makes it difficult for general training paradigms to train them well. To solve this problem, in the individual evaluation, we adopt a simple early-stopping strategy <cit.> to stop the training of a neural architecture if its performance does not improve for several epochs. §.§ Individual Representation To represent architectures in the proposed search space, vector-based encoding is naturally our first choice because of its high popularity in many real-world optimization problems. Suppose the vector-based encoding for i-th computation node of an architecture is n_i={link_1, link_2,Op}, where link_1 and link_2 denote node n_i receiving which nodes' outputs and Op denotes which operator is adopted, and then each architecture is represented by a set of nodes {n_i| 1≤ i ≤ num_c} (num_c denotes the number of computation nodes). However, as shown in Fig. <ref>(b), the architectures in the proposed search space are variable. Thus it is difficult and unsuitable to represent architectures by vector-based encoding due to two challenges. The first challenge is that num_c is not fixed but variable, and thus the vector-based encoding of each architecture is variable-length, which is difficult to solve by general MOEAs <cit.>. Secondly, different from the output node of the RNN cell, the output node in the proposed search space is a computation node and receives at most two inputs. This poses a decision constraint in using vector-based encoding as individual representation and thus is also difficult to solve. 
It can be found from Fig. <ref>(b) that there are two challenges for vector-based encoding in general MOEAs to represent architectures in the proposed search space. Suppose the vector-based encoding for i-th computation node of an architecture is n_i={link_1, link_2,Op}, where link_1 and link_2 denote node n_i receiving which nodes' outputs and Op denotes which operator is adopted, and then each architecture is represented by a set of nodes {n_i| 1≤ i ≤ num_c} (num_c denotes the number of computation nodes). The first challenge is that num_c is not fixed but variable and thus the vector-based encoding of each architecture is variable-length, which is difficult to solve by general MOEAs <cit.>. Secondly, different from the output node of the RNN search space, the output node in the proposed search space is a computation node and thus receives at most two inputs, which poses a decision constraint in using vector-based encoding as individual representation and thus is also difficult to solve. To avoid the above issues, we propose to utilize tree-based representation to encode architectures in our proposed search space, and we propose MOGP to solve the MOP to search novel CDMs because of the superiority of GP in solving tree-encoding-based optimization problems <cit.>. For this aim, we have to transform the architectures under the proposed search space into their corresponding single-root tree architecture. Fig. <ref> (e) gives the transform process by taking the general model as an illustrative example: the input nodes are seen as the leaf nodes of the tree architecture, the output node is equal to the root node, and the whole tree architecture can be seen as a single-root binary computation tree, where the obtained tree architecture is similar to the Koza-like tree in GP <cit.>. Based on the tree-based representation, the proposed MOGP can effectively search diagnostic function architectures but still needs the assistance of some tailored strategies, such as genetic operations and initialization strategies. we observe that the general model can be transformed into a corresponding single-root tree architecture. Fig. <ref> (e) gives the transform process: the input nodes are seen as the leaf nodes of the tree architecture, the output node is equal to the root node, and the whole tree architecture can be seen as a single-root binary computation tree, where the obtained tree architecture is similar to the Koza-like tree in GP <cit.>. Considering the superiority of GP in solving tree-encoding-based optimization problems <cit.>, we adopt tree-based representation to encode architectures in our search space, and thus the proposed MOEA turns out to be a MOGP, which needs effective tailored genetic operations. §.§ Objectives To make the searched architectures hold good performance and high interpretability, the proposed MOGP is to optimize the following MOP: max_𝒜 F(𝒜)={ f_1(𝒜) = AUC(𝒜,D_val) f_2(𝒜) = model interpretability(𝒜) ., where 𝒜 denotes the candidate architecture to be optimized. f_1(𝒜) represents the AUC (Area Under an ROC Curve) value <cit.> of 𝒜 (i.e., model performance) on validation dataset D_val. f_2(𝒜) represents the model interpretability of architecture 𝒜, since an architecture holding high model interpretability is preferred for CD. To obtain reasonable f_2(𝒜), an intuitive idea is to compute the model complexity by counting how many computation nodes and leaf nodes are in 𝒜. 
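To make the tree-based encoding and the quantities discussed here concrete, the following sketch (class and function names are ours) encodes a MIRT-like diagnostic function as a binary tree and reads off its depth, its breadth (number of leaf/input nodes), and its number of computation nodes, which are the three ingredients used by the interpretability objective introduced next. The depth convention (root counted as level 1) is one possible choice and may differ from the paper's figures by an offset.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: str                          # operator name, or an input name for a leaf
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    @property
    def is_leaf(self) -> bool:
        return self.left is None and self.right is None

def depth(n: Node) -> int:       # root counted as level 1
    if n.is_leaf:
        return 1
    return 1 + max(depth(c) for c in (n.left, n.right) if c is not None)

def breadth(n: Node) -> int:     # number of leaf (input) nodes
    if n.is_leaf:
        return 1
    return sum(breadth(c) for c in (n.left, n.right) if c is not None)

def num_comp(n: Node) -> int:    # number of computation nodes
    if n.is_leaf:
        return 0
    return 1 + sum(num_comp(c) for c in (n.left, n.right) if c is not None)

# A MIRT-like tree: Sigmoid(Add(Sum(Mul(h_C, h_S)), h_E)).
tree = Node("Sigmoid",
            Node("Add",
                 Node("Sum", Node("Mul", Node("h_C"), Node("h_S"))),
                 Node("h_E")))
print(depth(tree), breadth(tree), num_comp(tree))   # -> 5 3 4
```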
But it is not reasonable <cit.> to some extent since much research <cit.> indicates that the model depth plays the most important role in the model interpretability. Besides, some research on interpretable trees <cit.> further indicates that binary operators commonly provide better interpretability than unary operators. More importantly, recent CDMs prefer introducing extra inputs and more feature fusions in the models because it is easier to interpret the model performance <cit.>. This implies that more inputs in CDMs represent higher interpretability, further indicating that binary operators are more important than unary ones since binary operators will introduce more inputs. However, the model complexity that counts the number of nodes in 𝒜 can not reflect the above fact. As shown in Fig. <ref>, despite more nodes, we think 𝒜_2 holds better interpretability than 𝒜_1 due to a smaller depth. Due to larger breadth (more inputs), 𝒜_4 and 𝒜_3 should be better than 𝒜_1 but worse than 𝒜_2. 𝒜_5 should be better than 𝒜_3 but worse than 𝒜_4 due to containing more nodes. Even compared to 𝒜_3 having the same depth as 𝒜_1, 𝒜_1 is worse than 𝒜_3 because 𝒜_3 holds a larger breadth than 𝒜_1, where the tree breadth is equal to the number of leaf nodes. As can be seen from the comparisons among 𝒜_1, 𝒜_3, and 𝒜_4, a tree holding a larger breadth means more binary operators contained in the tree, and thus indicates the tree holds higher interpretability. Besides, 𝒜_4 holds higher interpretability than 𝒜_5 since 𝒜_4 has fewer computation nodes. With the above considerations, we characterize the model interpretability of architecture 𝒜 by its tree's depth, breadth, and computation node number. The model interpretability of 𝒜 is first determined by the tree depth depth, then by the tree breadth breadth (equal to the number of leaf nodes), and finally by the number of computation nodes num_c. As a consequence, the f_2(𝒜) can be computed by f_2(𝒜) = (1-depth-1/10)+breadth/200+(0.001- num_c/20000), where we make the depths of all architectures less than 10 in this paper to hold high model interpretability and thus f_2(𝒜)∈ (0,1) has five decimal places. The first decimal place is determined by depth, The second and third decimal places are determined by breadth, and the remaining decimal places are determined by num_c. Note that three parameters (10, 200, 2000) are empirically set and can be other choices, which will not affect the proposed approach's result as long as two criteria are met. Firstly, the decimal place(s) determined by depth, breadth, and num_c do not affect each other; secondly, the decimal place(s) determined by depth is most important, followed by breadth, and finally num_c. In Fig. <ref>, the depths of five architectures are 3, 2, 3, 3, and 3, their breadths are 1, 2, 3, 4, and 4, and their computation node numbers are 3, 3, 4, 4, and 5. According to (<ref>), their second objective values are 0.80585, 0.91085, 0.81580, 0.82080, and 0.82075, respectively, which are consistent with our consideration. §.§ Genetic Operation For effective offspring generation in the proposed MOGP, we propose an effective genetic operation based on four sub-genetic operations that modified and inspired from GP <cit.>. The following introduces four modified sub-genetic operations: Exchange, Delete, Replace, and Insert. * Exchange. 
Given two individuals, 𝐏'_1 and 𝐏'_2, randomly select two sub-trees, t_1 and t_2, from the trees corresponding to two individuals, respectively, and then exchange two sub-trees to generate two new trees and form two offspring individuals, 𝐎_1 and 𝐎_2. (The root nodes will not be selected.) * Delete. Given a parent individual 𝐏'_1, randomly select a computation node from the tree corresponding to 𝐏'_1. To delete this node, one of the left and right child trees of this node will be randomly connected to its parent node (if exists). The newly generated tree can form the offspring individual 𝐎_1. * Replace. For the tree corresponding to individual 𝐏'_1, randomly select a node to be replaced and replace the node's operator by a new operator randomly sampled from Table <ref>. If the original operator is unary but the sampled operator is binary, a new leaf node will be generated and connected to this node as its child tree, where the new leaf node is randomly sampled from {𝐡_S, 𝐡_E, 𝐡_C}. If the original operator is binary but the sampled operator is unary, only one of the left and right child trees of this node will be kept. As a result, offspring individual 𝐎_1 can be obtained based on the revised tree. * Insert. A new operator is first randomly sampled from the predefined operators, and a computation node is randomly selected from individual 𝐏'_1. Then, the sampled operator is inserted between this node and its parent node (if exists) as a new computation node. If the sampled operator is binary, an additional leaf node will be randomly sampled from {𝐡_S, 𝐡_E, 𝐡_C} and added to the new computation node as its child tree. Finally, offspring individual 𝐎_1 will be generated. Note that the root node will not be involved in Exchange since the Exchange operation will be meaningless or ineffective if root nodes are selected. For a better understanding of the above operations, Fig. <ref> gives some illustrative examples of generating offspring individuals. The pink area denotes the selected computation nodes (or corresponding sub-trees) needed to be handled, and the light purple area represents the executed changes. As can be seen, Exchange will lead to big modifications between generated individuals and corresponding parent individuals, while other operations commonly lead to small modifications. Therefore, the Exchange operation can be used for exploration, and others can be used for exploitation <cit.>. Equipped with four sub-genetic operations, we empirically combine them to form our proposed genetic operation, whose basic procedures are summarized in Algorithm <ref>. Four operations are called four sub-genetic operations because they can constitute many other genetic operations when adopting different combination manners. In Algorithm <ref>, two individuals 𝐏'_i (i-th individual in 𝐏') and 𝐏'_i+1 are first selected from the mating pool 𝐏', and the numbers of computation nodes in the two individuals are computed as num_c^i and num_c^i+1 (Lines 3-4). Second, randomly sample an integer rand from {1,2,3,4} if both num_c^i and num_c^i+1 are not smaller than 2, otherwise randomly sample rand from {3,4}. Numbers 1, 2, 3, and 4 correspond to Exchange, Delete, Replace, and Insert , respectively (Lines 5-9). This is because Exchange and Delete will be ineffective, even meaningless, if there is only one computation node in the individual. Third, the sub-genetic operation corresponding to rand will be applied to 𝐏'_i and 𝐏'_i+1 to generate offspring individuals 𝐎_1 and 𝐎_2 (Lines 10-14). 
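The dispatch just described (Lines 3-14 of Algorithm 2) can be sketched as follows. Here, exchange, delete, replace_op, and insert are placeholders for the four sub-genetic operations above, applying the chosen sub-operation to each of the two selected parents is our reading of the description, and the random-number handling is a simplification.

```python
import random

# Placeholders for the four sub-genetic operations described above.
def exchange(p1, p2): ...        # returns two offspring trees
def delete(p): ...
def replace_op(p): ...
def insert(p): ...

def mate(p1, p2, num_comp):
    """One mating step of the tailored genetic operation (sketch only)."""
    if num_comp(p1) >= 2 and num_comp(p2) >= 2:
        rand = random.choice([1, 2, 3, 4])    # Exchange / Delete / Replace / Insert
    else:
        rand = random.choice([3, 4])          # Exchange and Delete would be ineffective
    if rand == 1:
        return exchange(p1, p2)
    if rand == 2:
        return delete(p1), delete(p2)
    if rand == 3:
        return replace_op(p1), replace_op(p2)
    return insert(p1), insert(p2)
```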
Next, the obtained 𝐎_1 and 𝐎_2 will be added to the offspring population Off (Line 15). The first to the fourth step will be repeated until all offspring individuals are generated. After that, an individual repair strategy in Section <ref> is used to make offspring individuals feasible since there exist some constraints for some operators in computation nodes of trees (Line 17). For example, Sum, Mean, FFN, and Concat only receive vectors as their inputs. Finally, the obtained offspring population Off is output. §.§ Related Details In the mating pool selection of EMO-NAS-CD, two individuals are first randomly selected each time, and then their non-dominated front sizes and crowding distance values are compared to keep the better one. The computation of non-dominated front size and crowding distance for each individual is the same as for NSGA-II <cit.>. Due to the simple topologies of tree architectures, this will generate many duplicated individuals. To address this issue, a simple archive stores the individuals that have appeared and identifies whether a newly generated individual has already occurred. In addition, there are the population initialization strategy and the individual repairing strategy in the proposed approach. §.§.§ Population Initialization Instead of evolving architectures entirely from scratch <cit.>, we aim to introduce prior knowledge about existing CDMs' diagnostic functions into the search process. To this end, one half of the individuals in the population are generated from four existing CDMs (IRT, MIRT, MF, and NCD) by applying the proposed genetic operation. To maintain the diversity of the population and avoid getting trapped into local optima, another half of individuals are randomly generated from scratch. Here, we utilize a hyperparameter Node_range = {node_h1, node_h2} to limit the computation node number sampled in each randomly generated individual. Here node_h1 and node_h2 refer to the lower and upper bounds of the number of generated nodes. §.§.§ Individual Repair Most operators in Table <ref> can be applied to the input with any shape, except for Sum, Mean, FFN, and Concat, which can only receive one-dimensional vectors as their inputs. The first three operators are specially used to extract a high-level scalar feature from vectors, while Concat is specially used to concatenate and map two vectors to one vector. Therefore, one generated individual is infeasible and needs repairing if its contained nodes are equipped with the above four operators but take scalar inputs (termed infeasible nodes). To tackle this issue, we first execute the post-order traversal for each individual to check whether each node is feasible and then directly replace the operator of the infeasible node with other unary operators or other binary operators (e.g, replace Concat by Add, and replace Mean by Neg). §.§.§ Complexity Analysis The time complexity of the proposed EMO-NAS-CD is mainly determined by two components, i.e., the training of each architecture and the optimization process of NSGA-II. Suppose the size of a training dataset is |D_train|, the time complexity of training each architecture <cit.> is O(Num_E× |D_train| × D), and the time complexity of one generation of NSGA-II is O(Pop^2) <cit.>. Therefore, the overall time complexity of EMO-NAS-CD is O(Pop× Gen × Num_E× |D_train| × D) +O(Pop^2× Gen). Since Num_E× |D_train| × D ≫ Pop× Gen, the time complexity of EMO-NAS-CD can be regarded as O(Pop× Gen × Num_E× |D_train| × D). 
On the other hand, its space complexity is mainly determined by the population and the offspring population, and each population has Pop individuals encoded by trees. Suppose the average number of computation nodes in the trees is AvgNum, the space complexity of an individual is O(AvgNum*3) since each node needs three numbers to specify its operation and two subtrees. As a result, the whole space complexity of EMO-NAS-CD is O(AvgNum*3× Pop × 2), i.e., O(AvgNum× Pop × 6). of EMO-NAS-CD is determined by hidden vectors with the size of 1× D in each architecture, where the number of hidden vectors is determined by the number of contained computation nodes and three leaf nodes. Suppose the average number of computation nodes in each architecture is AvgNum_node, the space complexity of EMO-NAS-CD is O(Pop× Gen ×(AvgNum_node+3)× D). § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets To verify the effectiveness of the proposed EMO-NAS-CD, we conducted experiments on two real-world education datasets, including ASSISTments <cit.> and SLP <cit.>. We have summarized the statistics of two datasets in Table <ref> and presented the descriptions of two datasets as follows: * ASSISTments (ASSISTments 2009-2010 skill builder) <cit.> is an openly available dataset created in 2009 by the ASSISTments online tutoring service system. Here we adopted the public corrected version that does not contain duplicate data. As can be seen, there are more than 4 thousand students, nearly 18 thousand exercises, and over 300 thousand response logs in the dataset. * SLP (Smart Learning Partner) <cit.> is another public education dataset published in 2021. SLP collects the regularly captured academic performance data of learners during their three-year study on eight different subjects, including Chinese, mathematics, English, physics, chemistry, biology, history and geography. The dataset contains nearly 58 thousand response logs of 1,499 students on 907 exercises. According to the experiences of previous work <cit.>, we filtered out students with less than 15 response logs for all datasets to ensure that there are sufficient data to model each student for diagnosis. §.§.§ Compared Approaches and Metrics To validate the effectiveness of the proposed approach, we compared the diagnostic function architectures found by the proposed EMO-NAS-CD with state-of-the-art CDMs, including DINA <cit.>, IRT <cit.>, and MIRT <cit.>, MF <cit.>, NCD <cit.>, RCD <cit.>, CDGK <cit.>, and KSCD <cit.>. The detailed descriptions of these comparison CDMs can be found in Section <ref>. The source codes of most compared approaches are available at <https://github.com/orgs/bigdata-ustc/repositories>. Note that the results of RCD on SLP are not reported since RCD needs extra manually enhanced inputs that SLP does not have. To measure the performance obtained by all CDMs, three evaluation metrics including AUC, accuracy (ACC), and root mean square error (RMSE) are adopted. §.§.§ Parameter Settings * 1. Architecture Settings The dimension D is equal to the number of knowledge concepts K, H_1, H_2, and H_3 are set to 512, 256, and 1, respectively. * 2. Search Settings During the search process in the proposed EMO-NAS-CD, each student's response logs in each dataset are randomly split into 70%, 10%, and 20% as training, validating, and testing datasets, respectively. 
To train the architecture encoded by each individual, the Adam optimizer with a learning rate of 0.001 is used to optimize the Cross-Entropy loss between the prediction results and the targets, where the size of each batch is set to 128, and the number of training epochs Num_E is set to 30. For the proposed EMO-NAS-CD, the population size Pop is set to 100, the maximal number of generation Gen is set to 100, and the initial node range Node_range is set to {2,4}. * 3. Training Settings For more convincing results, we adopted multiple different settings to split the dataset into training and test datasets for evaluating the model performance, where the settings contain 50%/50%, 60%/40%, 70%/30%, and 80%/20% as suggested in <cit.>. Each found architecture needs to be retrained from scratch for 50 epochs, the settings are the same as that in the above search settings. For a fair comparison, the parameter settings of all comparison CDMs are the same as those in their original papers to hold their best performance. All experiments were conducted on a NVIDIA RTX 3090 GPU. Our source code can be available at <https://github.com/DevilYangS/EMO-NAS-CD>. §.§ Effectiveness of The Proposed EMO-NAS-CD Table <ref> summarizes the prediction performance comparison between the proposed EMO-NAS-CD and comparison CDMs in terms of ACC, RMSE, and AUC values that are averaged on 30 independent runs on the two datasets, where five different splitting settings are considered. Here seven architectures (with different degrees of model interpretability) found by EMO-NAS-CD in a single run are selected for comparison, where architectures A1 to A7 are found on the ASSISTments and architectures S1 to S7 are found on the SLP. To this end, for some architectures that have similar model interpretability, the architecture with the best performance among these architectures will be selected for final comparison. For more convincing explanations, Table <ref> further shows the results of A1 (S1) to A7 (S7). A1 refers to the average results on ten different runs of EMO-NAS-CD. In each run, the architecture that has similar interpretability to A1 is used to compute A1, which is the same to obtain A2 to A7 and S1 to S7. Besides, the Friedman test with Nemenyi procedure <cit.> (under significance level α=0.05) was conducted on the results of comparison CDMs and A1 (S1) to A7 (S7), which is a nonparametric statistical procedure to check whether a set of samples are statistically different. Table <ref> summarizes the statistical results including significance analysis and rank of each method, where '1' indicates significant difference between two methods and '0' otherwise. As can be observed from Table III and Table IV, nearly all architectures found by EMO-NAS-CD (except for the simplest architectures A1 and S1) exhibit significantly better performance than all comparison CDMs. Take the results under the splitting setting of 80%/20% for analysis, and the boxplots for AUC values (under this setting) of comparison CDMs and seven found architectures are further presented in Fig.<ref> for explicit observation. As can be seen, the most effective architecture A7 outperforms the current best CDM (RCD) by over 0.07 on the ASSISTments dataset in terms of the AUC value. Even for the simplest architecture A1, there still holds the superiority of performance over most CDMs, which is competitive to KSCD and only worse than RCD, but KSCD and RCD use extra input information to enhance the performance. 
Therefore, compared to the CDMs that do not have such input information, the performance difference between our best-found architectures and these CDMs is more significant: the performance leading of A7 over the best of these CDMs is up to 0.08 in terms of AUC values, and architecture A1 also outperforms these CDMs. It can be seen that the proposed approach achieves such a tremendous performance improvement by only designing more effective architectures without extra input information. In addition, we can find that the standard deviation of the proposed approach is very small from the comparisons between A1 to A7 and A1 to A7. We can make the same observations and conclusions based on the results on the SLP dataset. For a deep insight into found architectures, we presented all non-dominated individuals found by the proposed approach on two datasets in Fig. <ref> and Fig. <ref>, where the architectures corresponding to these individuals are further plotted in the right parts of two figures. As can be observed, A1 or S1 is the shallowest architecture, which holds the highest model interpretability but worse prediction performance, while A7 or S7 is the deepest architecture, which holds the best performance but the worst interpretability among all selected architectures. In addition, we can obtain some interesting and insightful observations from these best-found architectures on two datasets. Firstly, from the comparisons of S1 and S2, A2 and A3, as well as A4 and A5, we can find that adding a proper activation such as Sigmoid and Softplus can enhance the model performance without losing interpretability; Secondly, in most shallower architectures, the exercise-related input 𝐡_E tends to be directly combined with the student-related input 𝐡_S by some binary operators, while in most deeper architectures, 𝐡_E tends to be first combined with the knowledge concept-related input 𝐡_C and then combined with 𝐡_S. Finally, all shallower architectures prefer FC layers as their second parts to output the final prediction, while for the deeper architectures with better performance, the Identity operation seems to be a more effective second part. These deeper architectures commonly obtain the final prediction with the assistance of the Mean operator. The above observations provide some valuable guidelines for manually designing novel CDMs. §.§ Architecture Transferring Validation As can be seen from Figs. <ref> and <ref>, the two sets of selected best-architectures on two datasets are a bit different from each other. Only architecture A1 is same as architecture S1 and similar to S2 and S4, and architectures A6 and A7 are similar to architecture S5. To further investigate the transferability and generalization of the found architectures, Table <ref> presents the performance of architectures A2 to A7 and architectures S2 to S7 on the two datasets under the splitting setting of 80%/20%, where the results of A1 and S1 are not contained since they have the same architecture. As can be observed, the architectures found on the ASSISTments still hold competitive performance on the SLP; similarly, the architectures found on the SLP also hold comparable performance on the ASSISTments. Note that architecture A5 and S2 have the best generalization to hold the most promising performance on both datasets. §.§ Ablation Study This section will validate the effectiveness of some devised strategies and analyze the parameter sensitivity. 
In the following, only the results on the SLP dataset is presented due to higher search cost on the ASSISTments. To verify the effectiveness of the proposed initialization strategy, we equipped the proposed approach with other two initialization strategies to form two variant approaches, EMO-NAS-CD (random) and EMO-NAS-CD (existing). The initial population of the former is randomly generated, and the latter initializes its population purely from existing CDMs. Besides, we also established another variant called EMO-NAS-CD (crossover+mutation), which generates offspring by first applying the Exchange operation and then randomly applying one of other three operations. As a result, Fig. <ref> presents the convergence profiles of hypervolume (HV) <cit.> obtained by the proposed approach and its variants. HV measures convergence and diversity of a population, and a large HV value indicates a good convergence and diversity. The comparison between EMO-NAS-CD and other two variants indicates that the suggested population initialization strategy can indeed speed up the convergence and lead to better final convergence. Besides, we can observe that the proposed genetic operation is significantly better for the proposed approach than the compared genetic operation. The reason behind this is that successively employing two sub-genetic operations to generate offspring will cause major modifications between the generated individuals and their parent individuals, which can promote the exploration of the algorithm but hinder the exploitation of the algorithm to some extent. To sum up, the effectiveness of the proposed initialization and genetic operation can be demonstrated. To validate the effectiveness of objective f_2(𝒜) of (<ref>) in assisting the proposed approach to search interpretable CDMs, Fig. <ref> exhibits the non-dominated individuals found by two variants of the proposed approach and plots their six representative architectures for observation. Here, the first variant takes the model complexity as the second objective: f_2^com = 1-numc+breadth/30, and the second variant computes the model complexity as f_2^com_dep = 1-numc+breadth+depth-1/30. f_2^com is measured by the sum of the numbers of computation nodes and leaf nodes, and f_2^com_dep additionally considers the influence of the tree depth (30 is a parameter used for normalizing the objective value). As can be seen from Fig. <ref>(a), compared to the architectures in the right part, the architectures in the left part have better performance but at the expense of a much larger increase of depth. Besides, the architectures (located in the upper left area) are much deeper compared to the architectures with similar performance in Fig. <ref>. The reason behind this is that the objective of model complexity prefers adding a unary operator node, whereas adding a binary operator node would introduce an extra leaf node, leading to a worse objective value. The same observation and conclusion can be drawn from Fig. <ref>(b), where the found architectures are still very deep. This is because f_2^com_dep is basically the same as f_2^com yet implicitly assigns a smaller penalty to binary operator nodes, where the assigned penalty is still larger than the penalty assigned to unary operator nodes by f_2^com_dep. Finally, the effectiveness of the devised model interpretability objective can be validated. To analyze the sensitivity of the proposed approach to the framework of MOEAs and the hyperparameters Pop and Node_range, Fig. 
<ref> compares HV values on the SLP obtained by EMO-NAS-CD under different hyperparameter combinations of Pop and Node_range. According to Taguchi method <cit.>, Pop is set from 10 to 120 with step equal to 10, while node_h2 in Node_range is set from 1 to 12 with step equal to 1 and node_h1 is fixed to 2. The original EMO-NAS-CD is under NSGA-II, but EMO-NAS-CD[NSGA-III] and EMO-NAS-CD[VAEA] are EMO-NAS-CD under NSGA-III <cit.> and VAEA <cit.>, respectively. As can be seen from Fig. <ref>, firstly, the proposed EMO-NAS-CD is robust to the framework of MOEAs; secondly, the proposed EMO-NAS-CD can obtain relatively good performance when the population size is greater than 80, and it is not necessary to set Pop to 120 for a slightly higher HV value at the expense of an extra 0.2 times of cost; thirdly, the setting of node_h2 has a big influence on the result of the EMO-NAS-CD, and EMO-NAS-CD can obtain relatively good performance when node_h2 lies from 3 to 5. Therefore, current hyperparameter settings for EMO-NAS-CD are good enough to some extent. §.§ Discussion This section will discuss three guidelines for researchers in various domains after the experiments. The first guideline is for researchers in NAS. To design a task-specific NAS approach, researchers should make the best of their domain knowledge to create a search space. By doing so, the search space can include existing models for the target task and many other potential models. In addition, the search strategy should also be based on the search space's characteristics and the target task's domain knowledge. The second guideline is for researchers in CD, inspiring them on how to design effective CDMs, where the detailed guideline can be found in the last paragraph of Section <ref>. The third guideline is for researchers interested in NAS and intelligent education. Considering the success made by our approach, it is promising for other tasks in intelligent education to employ the NAS technique to design effective neural architectures. Besides, researchers can borrow experiences from this paper to design the objectives of model interpretability, generalization, and robustness, formulate their multiple objectives as a MOP, and then employ a suitable MOEA to solve the MOP. § CONCLUSION AND FUTURE WORK In this paper, we proposed to design novel CDMs by leveraging evolutionary multi-objective NAS. Specifically, we first proposed an expressive search space for CD, which contains a large number of potential architectures and existing architectures. Then, we proposed an effective MOGP to search high-performance architectures with high interpretability in the search space by optimizing the MOP having two objectives. To avoid some optimization difficulties, each architecture is first transformed into its corresponding tree architecture and then encoded by tree-based representation for easy optimization. Besides, in the proposed MOGP, an effective genetic operation is designed for offspring generation, and a population initialization strategy is devised to accelerate the convergence. Experimental results demonstrate the superiority of the architectures found by the proposed approach to existing CDMs in terms of performance and model interpretability. This work has shown the promising prospect of leveraging NAS for CD, but there still exist some threats to the validity of the proposed approach, including internal, external, and construct threats. Firstly, the devised model interpretability objective is the primary internal threat. 
As seen from Fig. <ref>, Fig. <ref>, and Fig. <ref>, the proposed approach finds rather different architectures when different model interpretability objectives are adopted. Besides, the proposed model interpretability objective is empirically designed based on experiences from decision trees, which may limit the emergence of novel architectures as well as the application of the found architectures in the real world, owing to an imperfect definition of model interpretability. Therefore, we would like to design more reasonable model interpretability objectives in the future. Secondly, the dataset utilized in the proposed approach is the main external threat. The architectures found on different datasets are quite different, which indicates that the architectures found by the proposed approach on a single dataset are not general for the cognitive diagnosis task. Besides, the size of the utilized dataset affects the search efficiency of the proposed approach, which leads to an extremely high computation cost when a large-scale dataset is encountered, e.g., the search cost on ASSISTments is about 15 GPU days. Therefore, in the future, we would like to design generalized CDMs and explore surrogate models <cit.> to reduce the search cost. Finally, the proposed search space is the main construct threat since it is designed based on the summary of existing architectures and forces all architectures to be single-root trees. Despite its high effectiveness, the current search space may limit the emergence of more potential architectures since CDMs need not always be single-root trees. Therefore, it is interesting to devise other types of search spaces that contain more effective CDMs.
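As a side illustration of the hypervolume (HV) indicator used in the ablation study above, the following Python sketch (ours, not part of the original paper) computes the HV of a small two-objective population with respect to a reference point, assuming both objectives are to be maximised; the objective values and the reference point are made up, and a vetted EMO library implementation should be preferred in practice.

def hypervolume_2d(points, ref):
    # Area dominated by a set of 2-objective points (both maximised)
    # with respect to a reference point that is worse in both objectives.
    # keep only points that strictly improve on the reference point
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    # sort by the first objective, best first
    pts.sort(key=lambda p: p[0], reverse=True)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > prev_f2:  # only non-dominated strips add new area
            hv += (f1 - ref[0]) * (f2 - prev_f2)
            prev_f2 = f2
    return hv

# toy front: (prediction performance, interpretability objective), reference point (0, 0)
front = [(0.78, 0.2), (0.76, 0.5), (0.72, 0.8)]
print(hypervolume_2d(front, ref=(0.0, 0.0)))  # approximately 0.6

The sweep works because, after sorting by the first objective, the horizontal strip between a point's second objective value and the best value seen so far can only be covered by that point.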
http://arxiv.org/abs/2307.05060v1
20230711071001
Satisfiability of Arbitrary Public Announcement Logic with Common Knowledge is $Σ^1_1$-hard
[ "Rustam Galimullin", "Louwe B. Kuijer" ]
cs.LO
[ "cs.LO" ]
Satisfiability of Arbitrary Public Announcement Logic with Common Knowledge is Σ^1_1-hard Rustam Galimullin Louwe B. Kuijer ======================================================================================================================= Arbitrary Public Announcement Logic with Common Knowledge (APALC) is an extension of Public Announcement Logic with a common knowledge modality and quantifiers over announcements. We show that the satisfiability problem of APALC on S5-models, as well as that of two other related logics with quantification and common knowledge, is Σ^1_1-hard. This implies that neither the validities nor the satisfiable formulas of APALC are recursively enumerable, which, in turn, implies that APALC is not finitely axiomatisable. § INTRODUCTION Quantified Public Announcement Logics. Epistemic logic (EL) <cit.> is one of the better-known formalisms for reasoning about knowledge of agents in multi-agent systems. It extends the language of propositional logic with constructs □_a φ meaning that `agent a knows φ'. Formulas of EL are interpreted on epistemic models (or, equivalently, S5-models) that comprise a set of states, equivalence relations for each agent between states, and a valuation function that specifies in which states propositional variables are true. However, EL provides only a static description of the distribution of knowledge in a system. Extensions of the logic that allow one to reason about how the information of individual agents and groups thereof changes as a result of some epistemic event are collectively known as dynamic epistemic logics (DELs) <cit.>. The prime example of a DEL, and arguably the most well-studied logic in the family, is public announcement logic (PAL) <cit.>. A public announcement is an event of all agents publicly and simultaneously receiving the same piece of information. The language of PAL extends that of EL with formulas [ψ]φ that are read as `after the public announcement of ψ, φ is true'. Quantification over various epistemic actions, and in particular over public announcements, has been explored in the last 15 or so years <cit.>. Adding quantification over public announcements allows one to shift the emphasis from the effects of a particular announcement to the question of the (non-)existence of an announcement leading to a desired epistemic goal. In this paper, we focus on the three perhaps most well-known quantified PALs (QPALs). The first of the three is arbitrary PAL (APAL) <cit.>, which extends the language of PAL with constructs [!]φ meaning `after any public announcement, φ is true'. A formula with the dual existential quantifier ⟨ ! ⟩φ is read as `there is a public announcement, after which φ is true'. Observe that the quantifiers of APAL do not specify whether an announcement can be made by any of the agents, or groups thereof, modelled in a system. Hence, a more `agent-centric' quantified PAL was proposed. Group announcement logic (GAL) <cit.> extends the language of PAL with formulas [G]φ meaning `after any announcement by agents from group G, φ is true'. The dual of the universal GAL quantifier is ⟨ G ⟩φ, which is read as `there is an announcement by agents from group G that makes φ true'.
Once we start reasoning about what groups of agents can achieve by making public announcements, it is only too natural to consider their abilities in a game-theoretic setting. In particular, we may let agents outside of the group make their own announcements in an attempt to preclude the group from reaching their epistemic goals. A QPAL with such a competitive flavour to it is called coalition announcement logic (CAL) <cit.>. The logic extends PAL with modalities [⟨ G ⟩] φ that are read as `whatever agents from coalition G announce, there is a counter-announcement by the anti-coalition that makes φ true'. The diamond version ⟨ [ G ] ⟩φ is then means that `there is an announcement by coalition G, such that whatever the anti-coalition announces at the same time, they cannot avoid φ'. Observe, that compared to APAL and GAL, modalities of CAL contain double quantification: ∀∃ and ∃∀ correspondingly. As the name of the logic suggests, modalities of CAL were inspired by coalition logic <cit.>, and they capture game-theoretic notions of α- and β-effectivity <cit.>. Some Logical Properties of QPALs. One of the most pressing open problems in the area is the existence of finitary axiomatisations of QPALs. Both finitary and infinitary axiom systems for APAL were proposed in <cit.>, but later the finitary version was shown to be unsound <cit.>. The infinitary axiomatisation is, however, sound and complete <cit.>. As the axiomatisation of GAL <cit.> is quite similar to that of APAL, its finitary version is also not sound <cit.>, and its infinitary version can be shown to be sound and complete by a modification of the proof from <cit.>. To the best of our knowledge, there are no known sound and complete proof systems, finitary or infinitary, for CAL[A complete infinitary axiomatisation with CAL modalities and additional operators was given in <cit.>]. The satisfiability problem for QPALs is known to be undecidable <cit.>. The result is achieved by a reduction from the classic tiling problem that consists in answering the question whether a given finite set of tiles can tile the ℕ×ℕ plane. Since this problem is co-RE-complete <cit.>, or, equivalently, Π^0_1-complete, the reduction amounts to the fact that the satisfiability problem for QPALs is co-RE-hard (or Π^0_1-hard). Note that this result does not rule out the existence of finitary axiomatisations of QPALs. A prime example of a logic with a co-RE-complete satisfiability problem and a finitary axiomatisation is first-order logic. Overview of the paper and our result. In this paper we consider extensions of QPALs with common knowledge <cit.>, which is a classic variant of group knowledge in multi-agent systems. Its intuitive meaning is that `φ is common knowledge among agents in group G if everyone in G knows φ, everyone in G knows that everyone in G knows φ and so on ad infinitum'. Semantically, common knowledge among agents from G corresponds to the reflexive transitive closure of equivalence relations of all agents from group G. We call extensions of APAL, GAL, and CAL with common knowledge APALC <cit.>, GALC, and CALC, correspondingly, or QPALCs if we refer to all of them at the same time. The result we prove in this paper is that the satisfiability problems for QPALCs are Σ_1^1-hard. We do this by showing that the recurring tiling problem, which is known to be Σ_1^1-complete <cit.>, can be reduced to satisfiability of QPALC formulas. 
Because the satisfiability problems are Σ_1^1-hard, it follows that, in particular, the set of valid QPALC formulas is not recursively enumerable. That, in turn, implies that QPALCs have no finitary axiomatisations. The non-existence of a finitary axiomatisation of a somewhat related arbitrary arrow update logic <cit.> with common knowledge was shown in <cit.> by the reduction from the non-halting problem. Moreover, the recurring tiling problem was used in <cit.> to demonstrate that the satisfiability problem of PAL with iterated announcements and common knowledge is Σ^1_1-complete. The use of common knowledge is instrumental in our paper, since it allows us to have a `tighter' grid than the ones from <cit.> and <cit.>. We deem our result important in at least two ways. First, the non-existence of finitary axiomatisations of QPALCs is interesting in its own right as it demonstrates that presence of common knowledge in QPALCs is a sufficient condition for Σ^1_1-hardness. Second, having both our construction (with common knowledge) and the constructions from <cit.> and <cit.> side by side, allows one to flesh out crucial differences between Σ^1_1-hardness and Σ_1^0 -hardness arguments, and, hopefully, move closer to tackling the open problem of (non-)existence of finitary axiomatisations of QPALs. Outline of the paper. The rest of the paper is organised as follows. In Section <ref> we cover the background on QPALCs. After that, in Section <ref>, we prove the main claim of this paper, and, finally, we conclude in Section <ref>. § QUANTIFIED PUBLIC ANNOUNCEMENT LOGICS WITH COMMON KNOWLEDGE Let A be a finite set of agents, and P be a countable set of propositional variables. The languages of arbitrary public announcement logic with common knowledge 𝖠𝖯𝖠𝖫𝖢, group announcement logic with common knowledge 𝖦𝖠𝖫𝖢, and coalition announcement logic with common knowledge 𝖢𝖠𝖫𝖢 are inductively defined as 𝖠𝖯𝖠𝖫𝖢 ∋ φ::= p |φ|(φφ) |□_a φ|[φ]φ|▪_G φ|[!]φ 𝖦𝖠𝖫𝖢 ∋ φ::= p |φ|(φφ) |□_a φ|[φ]φ|▪_G φ|[G]φ 𝖢𝖠𝖫𝖢 ∋ φ::= p |φ|(φφ) |□_a φ|[φ]φ|▪_G φ|[⟨G ⟩]φ where p ∈ P, a ∈ A, and G ⊆ A. Duals are defined as _a φ := ¬□_a ¬φ, ⟨ψ⟩φ := ¬ [ψ]¬φ, ⧫_G φ := ¬▪_G ¬φ, ⟨ ! ⟩φ := ¬ [!] ¬φ, ⟨ G ⟩φ := ¬ [G] ¬φ and ⟨ [ G ] ⟩φ := ¬ [⟨ G ⟩ ] ¬φ. The fragment of 𝖠𝖯𝖠𝖫𝖢 without [!]φ is called public announcement logic with common knowledge 𝖯𝖠𝖫𝖢; the latter without [φ]φ is epistemic logic with common knowledge 𝖤𝖫𝖢; 𝖯𝖠𝖫𝖢 and 𝖤𝖫𝖢 minus ▪_G φ are, correspondingly, public announcement logic 𝖯𝖠𝖫 and epistemic logic 𝖤𝖫. Finally, fragments of 𝖠𝖯𝖠𝖫𝖢, 𝖦𝖠𝖫𝖢 and 𝖢𝖠𝖫𝖢 without ▪_G φ are called arbitrary public announcement logic 𝖠𝖯𝖠𝖫, group announcement logic 𝖦𝖠𝖫 and coalition announcement logic 𝖢𝖠𝖫 respectively. A model M is a tuple (S, ∼, V), where S is a non-empty set of states, ∼: A → 2^S × S gives an equivalence relation for each agent, and V:P → 2^S is the valuation function. By ∼_G we mean reflexive transitive closure of ⋃_a ∈ G∼_a. We will denote model M with a distinguished state s as M_s. We would like to stress that agent relations in our models are equivalence relations (and hence our models are S5 models). The results of this paper do not generalise to arbitrary agent relations in any obvious way. It is assumed that for group announcements, agents know the formulas they announce. In the following, we write 𝖯𝖠𝖫𝖢^G = {⋀_i ∈ G□_i ψ_i |for all i ∈ G, ψ_i ∈𝖯𝖠𝖫𝖢} to denote the set of all possible announcements by agents from group G. We will use ψ_G to denote arbitrary elements of 𝖯𝖠𝖫𝖢^G. 
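To make the preceding definitions concrete, the following Python sketch (ours, not the authors') represents a small finite model M = (S, ∼, V) with one equivalence relation per agent, computes the reflexive transitive closure ∼_G used for common knowledge, and checks □_a and ▪_G for a single propositional variable; the states, agents, and valuation are invented for illustration.

from itertools import product

# A finite epistemic model M = (S, ~, V): states, one equivalence
# relation per agent (as a set of pairs), and a valuation.
S = {"s0", "s1", "s2"}
rel = {
    "a": {("s0", "s0"), ("s1", "s1"), ("s2", "s2"), ("s0", "s1"), ("s1", "s0")},
    "b": {("s0", "s0"), ("s1", "s1"), ("s2", "s2"), ("s1", "s2"), ("s2", "s1")},
}
V = {"p": {"s0", "s1"}}  # p holds in s0 and s1

def group_relation(agents):
    # reflexive transitive closure of the union of the agents' relations (~_G)
    R = {(s, s) for s in S} | set().union(*(rel[a] for a in agents))
    changed = True
    while changed:  # naive closure; fine for small finite models
        changed = False
        for (x, y), (y2, z) in product(list(R), list(R)):
            if y == y2 and (x, z) not in R:
                R.add((x, z))
                changed = True
    return R

def box(agent, s, prop):
    # box_agent prop at s: prop holds in every state the agent cannot distinguish from s
    return all(t in V[prop] for (u, t) in rel[agent] if u == s)

def common(agents, s, prop):
    # common knowledge of prop at s: prop holds in every ~_G-reachable state
    return all(t in V[prop] for (u, t) in group_relation(agents) if u == s)

print(box("a", "s0", "p"))            # True: both a-indistinguishable states satisfy p
print(common(["a", "b"], "s0", "p"))  # False: s2 is reachable via b and falsifies p

The announcement update M^ψ defined next then amounts to restricting S, each ∼_a, and V to the states that satisfy ψ.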
Let M_s = (S, R, V) be a model, p ∈ P, G ⊆ A, and φ, ψ∈𝖠𝖯𝖠𝖫𝖢∪𝖦𝖠𝖫𝖢∪𝖢𝖠𝖫𝖢. M_s p iff s ∈V(p) M_s ¬φ iff M_s φ M_s φψ iff M_s φ and M_s ψ M_s □_a φ iff ∀t ∈S: s ∼_a t implies M_t φ M_s ▪_G φ iff ∀t ∈S: s ∼_G t implies M_t φ M_s [ψ] φ iff M_s ψ implies M_s^ψφ M_s [!] φ iff ∀ψ∈𝖯𝖠𝖫𝖢: M_s [ψ] φ M_s [G] φ iff ∀ψ_G ∈𝖯𝖠𝖫𝖢^G: M_s [ψ_G] φ M_s [ ⟨G⟩] φ iff ∀ψ_G ∈𝖯𝖠𝖫𝖢^G, ∃χ_A ∖G ∈𝖯𝖠𝖫𝖢^A∖G: M_s ψ_G implies M_s ⟨ψ_G χ_A ∖G⟩φ where M_s^ψ = (S^ψ, R^ψ, V^ψ) with S^ψ = {s ∈ S | M_s ψ}, R^ψ (a) is the restriction of R(a) to S^ψ for all a ∈ A, and V^ψ(p) = V(p) ∩ S^ψ for all p ∈ P. Observe, that it follows from the definition of the semantics that in the case of the grand coalition A, M_s [A] φ if and only if M_s [⟨ A ⟩ ]φ. For the case of the empty group ∅, we assume that the conjunction of an empty set of formulas is a tautology. For APAL, GAL, and CAL, we assume that quantification ranges over a quantifier-free fragment of the language, i.e. over PAL, which is equally expressive as EL <cit.>. This is, however, not as straightforward once we consider ELC and PALC. The latter is strictly more expressive than ELC <cit.>, and ELC, in its turn, is strictly more expressive than EL, and thus it matters, expressivity-wise, which quantifer-free fragment of a QPALC the quantification ranges over. These matters are explored in <cit.>, where also infinitary axiomatisations of APALC and GALC are given. For our current purposes, though, the difference in the range of quantification does not play a role. § THE SATISFIABILITY PROBLEM OF QPALCS IS Σ^1_1-HARD As I mentioned in an e-mail, there are some (fixable) issues where I don't think the current proof quite works. For the sake of having an easy comparison, I'm going to create two copies of this section, one I will leave as is, the other I will make some changes. This one is the original copy. We prove the Σ^1_1-hardness of the satisfiability problem of QPALCs via a reduction from the recurring tiling problem <cit.>. Let C be a finite set of colours. A tile is a function τ:{𝗇𝗈𝗋𝗍𝗁, 𝗌𝗈𝗎𝗍𝗁, 𝖾𝖺𝗌𝗍, 𝗐𝖾𝗌𝗍}→ C. A finite set of tiles T is called an instance of the tiling problem. A solution to an instance of the tiling problem is a function f:ℕ×ℕ→T such that for all (i,j) ∈ℕ×ℕ, f(i,j) (𝗇𝗈𝗋𝗍𝗁) = f(i,j+1) (𝗌𝗈𝗎𝗍𝗁) and f(i,j) (𝖾𝖺𝗌𝗍) = f(i+1,j) (𝗐𝖾𝗌𝗍). Let T be a finite set of tiles with a designated tile τ^∗∈T. The recurring tiling problem is the problem to determine whether there is a solution to instance T of the tiling problem such that τ^∗ appears infinitely often in the first column. For our construction we will require four propositional variables — 𝗇𝗈𝗋𝗍𝗁, , , and — to designate the corresponding sides of tiles. Additionally, we will have designated propositional variables for each colour in C, and for each tile τ_i ∈T there is a propositional variable p_i that represents this tile. Finally, we will use p^∗ for designated τ^∗. In our construction, we will represent each tile with four states: one for each of the four sides of a tile. As for agents, we require only three of them for our construction. Agent s, for square, cannot distinguish states within the same tile. Agent v, for vertical, cannot distinguish between the northern part of one tile and the southern part of the tile above. Similarly, the horizontal agent h cannot distinguish between the eastern and western parts of adjacent tiles. Let an instance T of the recurring tiling problem be given. We start by construction of formula Ψ_T that will be satisfied in a given model if and only if the model is grid-like. 
We will build up Ψ_T step-by-step, defining useful subformulas along the way. Let 𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 be the following set 𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 := {𝗇𝗈𝗋𝗍𝗁, 𝗌𝗈𝗎𝗍𝗁, 𝖾𝖺𝗌𝗍, 𝗐𝖾𝗌𝗍}. The first constraint, expressed by formula 𝑜𝑛𝑒_𝑐𝑜𝑙, is that each state is coloured by exactly one colour. To ensure that all four parts — north, south, east, and west — are present in a current square, we state in 𝑎𝑙𝑙_𝑝𝑎𝑟𝑡𝑠 that in all squares the square agent s has access to only relevant states. We need some brackets here. Do you prefer \left( and \right), or just ( and )? I usually do \left( and \right) in display mode 𝑜𝑛𝑒_𝑐𝑜𝑙 := ⋁_c ∈ C c ⋀_d ∈ C ∖{c}¬ d 𝑎𝑙𝑙_𝑝𝑎𝑟𝑡𝑠 := □_s ⋁_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 q ⋀_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇_s q With 𝑜𝑛𝑒_𝑝𝑜𝑠 we force each state to satisfy only one propositional variable from 𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇, and with 𝑜𝑛𝑒_𝑡𝑖𝑙𝑒 we ensure that all states within the same tile are labelled by the tile proposition. 𝑜𝑛𝑒_𝑝𝑜𝑠 := ⋁_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 (q ⋀_q^'∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇∖{q}¬ q^') 𝑜𝑛𝑒_𝑡𝑖𝑙𝑒 := ⋁_τ_i ∈T (p_i □_s p_i ⋀_τ_j ∈T∖{τ_i}¬ p_j) Next, we force each state in a square to satisfy exactly one atom corresponding to their designated colour: 𝑠𝑡𝑎𝑡𝑒_𝑐𝑜𝑙 := ⋁_τ_i ∈T (p_i →⋀_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 (q →τ_i(q))) All the formulas considered so far deal with the representation of a single tile. We will use the following abbreviation: ψ_𝑡𝑖𝑙𝑒 := 𝑜𝑛𝑒_𝑐𝑜𝑙𝑎𝑙𝑙_𝑝𝑎𝑟𝑡𝑠𝑜𝑛𝑒_𝑝𝑜𝑠𝑜𝑛𝑒_𝑡𝑖𝑙𝑒𝑠𝑡𝑎𝑡𝑒_𝑐𝑜𝑙 As adjoining tiles are required to have the same colour, we simulate this by requiring that agents h and v consider a current colour in the top and right directions. In such a way we also ensure that the grid is infinite in the positive quadrant. 𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠 := ⋀_c ∈ C( (𝗇𝗈𝗋𝗍𝗁 c →_v 𝗌𝗈𝗎𝗍𝗁□_v c) (𝖾𝖺𝗌𝗍 c →_h 𝗐𝖾𝗌𝗍□_h c) ) We are concerned with the reduction from the ℕ×ℕ recurring tiling problem, i.e. our grid will have left and bottom edges. We force the existence of a tile at position (0,0) with the following formula: 𝑖𝑛𝑖𝑡 := ⧫_{h,v,s}□_s ((𝗌𝗈𝗎𝗍𝗁→□_v 𝗌𝗈𝗎𝗍𝗁) (𝗐𝖾𝗌𝗍→□_h 𝗐𝖾𝗌𝗍)) Next two formulas specify that vertical and horizontal agents can `see' one step ahead of them in down to up and left to right directions correspondingly. Note that the formulas contain the arbitrary announcement operator which quantifies over all 𝖯𝖠𝖫𝖢 formulas that can be of arbitrary large modal depth. In such a way we guarantee that vertical (resp. horizontal) agent can reach only one, up to 𝖯𝖠𝖫𝖢 indistinguishability, state to the north (resp. east). 𝑑𝑜𝑤𝑛&𝑢𝑝 := ⋀_τ_i ∈T p_i → [!]□_s(𝗇𝗈𝗋𝗍𝗁⋁_τ_j ∈T_v (𝗌𝗈𝗎𝗍𝗁 p_j _s ¬𝗌𝗈𝗎𝗍𝗁) → →□_v ((𝗇𝗈𝗋𝗍𝗁 p_i ) (𝗌𝗈𝗎𝗍𝗁 p_j _s ¬𝗌𝗈𝗎𝗍𝗁))) 𝑙𝑒𝑓𝑡&𝑟𝑖𝑔ℎ𝑡 := ⋀_τ_i ∈T p_i → [!]□_s(𝖾𝖺𝗌𝗍⋁_τ_j ∈T_v (𝗐𝖾𝗌𝗍 p_j _s ¬𝗐𝖾𝗌𝗍) → →□_v ((𝖾𝖺𝗌𝗍 p_i ) (𝗐𝖾𝗌𝗍 p_j _s ¬𝗐𝖾𝗌𝗍))) Finally, we establish the following commutativity property: if a tile is reachable by first an h-step and then a v-step, then the tile is reachable by first making a v-step and then an h-step, and vice versa. Similarly to the previous two formulas, arbitrary announcement operators here allow us to accurately approximate the fact that in both cases we reach the same tile (up to 𝖯𝖠𝖫𝖢 indistinguishability). 𝑟𝑖𝑔ℎ𝑡&𝑢𝑝:= ⋁_τ_i ∈T[!](𝖾𝖺𝗌𝗍→ (_h (𝗐𝖾𝗌𝗍_s(𝗇𝗈𝗋𝗍𝗁_v(𝗌𝗈𝗎𝗍𝗁 p_i _s 𝗐𝖾𝗌𝗍))) → →□_s (𝗇𝗈𝗋𝗍𝗁→□_v(𝗌𝗈𝗎𝗍𝗁→□_s(𝖾𝖺𝗌𝗍→_h (𝗐𝖾𝗌𝗍 p_i)))))) 𝑢𝑝&𝑟𝑖𝑔ℎ𝑡:= ⋁_τ_i ∈T[!](𝗇𝗈𝗋𝗍𝗁→ (_h (𝗌𝗈𝗎𝗍𝗁_s(𝖾𝖺𝗌𝗍_v(𝗐𝖾𝗌𝗍 p_i _s 𝗌𝗈𝗎𝗍𝗁))) → →□_s (𝖾𝖺𝗌𝗍→□_h(𝗐𝖾𝗌𝗍→□_s(𝗇𝗈𝗋𝗍𝗁→_h (𝗌𝗈𝗎𝗍𝗁 p_i)))))) We carry on with our tiling approximation by requiring that going counter-clockwise from the current state leads us to the same tile we started from. 𝑐𝑦𝑐𝑙𝑒:= ⋀_τ_i ∈T p_i → [!] 
□_s (𝖾𝖺𝗌𝗍→□_h(𝗐𝖾𝗌𝗍→□_s (𝗇𝗈𝗋𝗍𝗁→□_v(𝗌𝗈𝗎𝗍𝗁→ →□_s(𝗐𝖾𝗌𝗍→□_h(𝖾𝖺𝗌𝗍→□_s(𝗌𝗈𝗎𝗍𝗁→□_v(𝗇𝗈𝗋𝗍𝗁→_s( p_i east))))))) ) We abbreviate the 𝑥&𝑦 formulas with quantifiers as ψ_x&y:= 𝑑𝑜𝑤𝑛&𝑢𝑝𝑙𝑒𝑓𝑡&𝑟𝑖𝑔ℎ𝑡𝑟𝑖𝑔ℎ𝑡&𝑢𝑝𝑢𝑝&𝑟𝑖𝑔ℎ𝑡 In our reduction, we are interested in grids where a special tile appears infinitely often in the first column of the grid. The following formula requires that the special tile appears only in the leftmost column: 𝑡𝑖𝑙𝑒_𝑙𝑒𝑓𝑡 := p^∗→□_s(𝗐𝖾𝗌𝗍→□_h𝗐𝖾𝗌𝗍) All of this completes the necessary requirements for the grid. Now, by adding a common knowledge modality for all agents, we force all of the aforementioned formulas to hold everywhere in the grid. Ψ_T := ▪_{h,v,s}( ψ_𝑡𝑖𝑙𝑒𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠𝑖𝑛𝑖𝑡ψ_𝑥&𝑦𝑡𝑖𝑙𝑒_𝑙𝑒𝑓𝑡) Observe that Ψ_T does not say anything about the special tile τ^∗ appearing infinitely often in the first column. The formula merely requires that if there is a special tile, then it should appear in the first column. We first show that Ψ_T forces a grid-like model, and only after that will we consider the (in)finite number of occurrences of the special tile. Let T be an instance of the recurring tiling problem. If T can tile ℕ×ℕ, then Ψ_T is satisfiable. Assume that there is a tiling of the ℕ×ℕ plane with a finite set of tiles T. We construct model M = (S, ∼, V) satisfying Ψ_T directly from the given tiling. In particular, * S = ℕ×ℕ×{𝔫, 𝔰, 𝔢, 𝔴}, * ∼_s = {(i,j,𝔩), (i',j',𝔩') | i = i' and j = j'} * ∼_h = {(i,j,𝔫), (i, j+1, 𝔰)} * ∼_v = {(i,j,𝔢), (i+1, j, 𝔴)} * for all τ_i ∈T, V(p_i) = {(i,j,𝔩) |τ_i is at (i,j)} * for all c ∈ C, V(c) = {(i,j,𝔩) |τ(𝔩)} * for all l ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇, V(l) = {(i,j,𝔩) | l corresponds to 𝔩} To argue that M_(0,0,𝔢)Ψ_T we first notice that due to the fact that T tiles the ℕ×ℕ plane and by the construction of M, subformulas of Ψ_T that do not involve arbitrary announcements are straightforwardly satisfied. Let us now show that M_(0,0,𝔢)▪_{h,v,s}𝑟𝑖𝑔ℎ𝑡&𝑢𝑝. By the definition of semantics, this is equivalent to the fact that for all (i,j,𝔩) such that (0,0,𝔢) ∼_{h,v,s} (i,j,𝔩), it holds that M_(i,j,𝔩)𝑟𝑖𝑔ℎ𝑡&𝑢𝑝. We need to show that for some p_i and for all ψ∈𝖯𝖠𝖫𝖢, M_(i,j,𝔩)^ψ 𝖾𝖺𝗌𝗍→ (_h (𝗐𝖾𝗌𝗍_s(𝗇𝗈𝗋𝗍𝗁_v(𝗌𝗈𝗎𝗍𝗁 p_i _s 𝗐𝖾𝗌𝗍))) → →□_s (𝗇𝗈𝗋𝗍𝗁→□_v(𝗌𝗈𝗎𝗍𝗁→□_s(𝖾𝖺𝗌𝗍→_h (𝗐𝖾𝗌𝗍 p_i))))) Let us assume that 𝔩 = 𝔢 and thus M_(i,j,𝔩)^ψ𝖾𝖺𝗌𝗍, and assume furthermore that M_(i,j,𝔩)^ψ_h (𝗐𝖾𝗌𝗍_s(𝗇𝗈𝗋𝗍𝗁_v(𝗌𝗈𝗎𝗍𝗁 p_i _s 𝗐𝖾𝗌𝗍))). This implies that the following states (and thus corresponding relations) were preserved after the announcement of ψ: (i,j,𝔢), (i+1,j,𝔴), (i+1,j,𝔫), (i+1,j+1,𝔰), and (i+1,j+1,𝔴). To see that M_(i,j,𝔩)^ψ□_s (𝗇𝗈𝗋𝗍𝗁→□_v(𝗌𝗈𝗎𝗍𝗁→□_s(𝖾𝖺𝗌𝗍→_h (𝗐𝖾𝗌𝗍 p_i)))), we consider two cases. First, assume that after the announcement of ψ the following states were preserved: (i,j,𝔫), (i,j+1,𝔰), and (i,j+1,𝔢). In this case, we indeed have M_(i,j,𝔩)^ψ□_s (𝗇𝗈𝗋𝗍𝗁→□_v(𝗌𝗈𝗎𝗍𝗁→□_s(𝖾𝖺𝗌𝗍→_h (𝗐𝖾𝗌𝗍 p_i)))) since state (i,j,𝔫) is the only state satisfying 𝗇𝗈𝗋𝗍𝗁 that can be reached by agent s from (i,j,𝔢). Similarly for states (i,j+1,𝔰), and (i,j+1,𝔢). Moreover, subformula _h (𝗐𝖾𝗌𝗍 p_i) is satisfied because we assumed that state (i+1,j+1,𝔴), satisfying 𝗐𝖾𝗌𝗍 p_i and reachable from (i,j+1,𝔢) by a v-transition, is preserved after the announcement. In the second case, if one of the aforementioned states is not preserved, the whole formula is vacuosly satisfied. That other 𝑥&𝑦 formulas are satisfied in model M can be shown by a similar reasoning. Let T be an instance of the recurring tiling problem. If Ψ_T is satisfiable, then T can tile ℕ×ℕ. Assume that for some M_(i,j,𝔩) we have that M_(i,j,𝔩)Ψ_T. We argue that model M is grid-like. 
We proceed by taking conjuncts of Ψ_T one at a time. The fact that M_(i,j,𝔩)𝑜𝑛𝑒_𝑐𝑜𝑙 implies that the current state satisfies one and only one colour. Next, according to 𝑎𝑙𝑙_𝑝𝑎𝑟𝑡𝑠, agent s can reach all positions {𝗇𝗈𝗋𝗍𝗁, 𝗌𝗈𝗎𝗍𝗁, 𝖾𝖺𝗌𝗍, 𝗐𝖾𝗌𝗍} and only them. Moreover, satisfaction of 𝑜𝑛𝑒_𝑡𝑖𝑙𝑒 means that all states s considers possible from the current one satisfy exactly one p_i. Thus, s can reach all positions, and all positions reachable by s are labelled with the same tile proposition. Finally, 𝑠𝑡𝑎𝑡𝑒_𝑐𝑜𝑙 ensures that the current state labelled by some tile proposition corresponds to the colouring of that tile. So far, we have captured the local properties of a tile. Now, let us consider 𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠: M_(i,j,𝔩)𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠 implies that for all colours c ∈ C, if the current state satisfies c and 𝗇𝗈𝗋𝗍𝗁, then there must be a v-relation to a state labelled with 𝗌𝗈𝗎𝗍𝗁 and all states reachable by v satisfy c. Similarly for agent v and positions 𝖾𝖺𝗌𝗍 and 𝗐𝖾𝗌𝗍. The requirement of the existence of v and h neighbours forces the model to be growing in 𝗌𝗈𝗎𝗍𝗁 to 𝗇𝗈𝗋𝗍𝗁 and 𝗐𝖾𝗌𝗍 to 𝖾𝖺𝗌𝗍 directions. The fact that M_(i,j,𝔩)𝑖𝑛𝑖𝑡 implies that there is a tile reachable by a ∼_{h,v,s} such that its corresponding state satisfying (resp. ) does not have a 𝗇𝗈𝗋𝗍𝗁 (resp. ) neighbour reachable by v (resp. h), i.e. the formula implies the existence of the initial tile at position (0,0). Formula 𝑡𝑖𝑙𝑒_𝑙𝑒𝑓𝑡 specifies that if the current state satisfies the special tile proposition, then the current state is in the leftmost column of the grid. Now, let us argue that M_(i,j,𝔩)𝑑𝑜𝑤𝑛&𝑢𝑝 forces v-transitions to connect only two classes of states up to 𝖯𝖠𝖫𝖢 indistinguishability. For suppose to the contrary that from the current state labelled with p_i 𝗇𝗈𝗋𝗍𝗁 there are v-successors w and w^' that are both labelled with p_j 𝗌𝗈𝗎𝗍𝗁 but can be distinguished by some formula ψ∈𝖯𝖠𝖫𝖢: M_w ψ and M_w^'ψ. W.l.o.g. we also assume that w^' is the only state satisfying ψ. Announcing χ:=¬𝗌𝗈𝗎𝗍𝗁→□_s ¬ψ results in model M^χ, where all states s-reachable from w^' that satisfy ¬𝗌𝗈𝗎𝗍𝗁 are removed. Hence, we have that M_(i,j,𝔩)^χ satisfies p_i _s (𝗇𝗈𝗋𝗍𝗁_v (𝗌𝗈𝗎𝗍𝗁 p_j _s ¬𝗌𝗈𝗎𝗍𝗁) _v (¬ (𝗇𝗈𝗋𝗍𝗁 p_i ) ¬ (𝗌𝗈𝗎𝗍𝗁 p_j _s ¬𝗌𝗈𝗎𝗍𝗁))), which, taking into account that p_j was arbitrary, contradicts M_(i,j,𝔩)𝑑𝑜𝑤𝑛&𝑢𝑝. We can use analogous reasoning for 𝑙𝑒𝑓𝑡&𝑟𝑖𝑔ℎ𝑡. Now, let us consider M_(i,j,𝔩)𝑟𝑖𝑔ℎ𝑡&𝑢𝑝 and let us assume that M_(i,j,𝔩)𝖾𝖺𝗌𝗍. According to 𝑟𝑖𝑔ℎ𝑡&𝑢𝑝, if from the current state lablelled with we can make first an h-step and then a v-step to reach a state corresponding to tile τ_i, then we can reach the same state (up to 𝖯𝖠𝖫𝖢 indistinguishability) by taking first a v-state and then an h-state. To see this, assume to the contrary, that by going right and up we can reach state w satisfying and some p_i, and by going up and right we reach a state w^', also satisfying 𝗐𝖾𝗌𝗍 p_i, which can be distinguished from w by some formula ψ∈𝖯𝖠𝖫𝖢: M_w ψ and M_w^'ψ. Announcing formula χ: = 𝗐𝖾𝗌𝗍 p_i →ψ results in model M^χ, where s-successors of w^', including itself, that satisfy 𝗐𝖾𝗌𝗍 p_i are removed, while state w is still present in the model. Hence, we have that M_(i,j,𝔩)^χ _h (𝗐𝖾𝗌𝗍_s(𝗇𝗈𝗋𝗍𝗁_v(𝗌𝗈𝗎𝗍𝗁 p_i _s 𝗐𝖾𝗌𝗍))) _s (𝗇𝗈𝗋𝗍𝗁_v(𝗌𝗈𝗎𝗍𝗁_s(𝖾𝖺𝗌𝗍¬_h (𝗐𝖾𝗌𝗍 p_i)))), which contradicts the fact that M_(i,j,𝔩)𝑟𝑖𝑔ℎ𝑡&𝑢𝑝. We can reason in the same manner for 𝑢𝑝&𝑟𝑖𝑔ℎ𝑡. All of the above formulas are satisfied throughout all reachable states in M due to the common knowledge operator for {h,v,s}. 
Hence, M is a grid-like model on ℕ×ℕ, where each side of each tile in the grid may be represented by several states in M that are indistinguishable by any 𝖯𝖠𝖫𝖢 formula. Moreover, formula Ψ_T, and in particular conjunct ψ_𝑥&𝑦, guarantees that the corresponding tiling is unambiguous, i.e. tile at position (i,j) can be reached from the initial tile by any combination of i steps north and j steps south. The final formula that is satisfied in a grid model if and only if a given tiling has a tile that occurs infinitely often in the first column would be Ψ_T⧫_{h,v,s} (⧫_{v,s} p^∗▪_{v,s}[▪_{h,s}¬ p^∗] ¬Ψ_T). Intuitively, the formula states that if we remove all rows with the special tile, then our model is no longer a grid. Let T be an instance of the tiling problem with a special tile τ^∗∈T. Set T can tile ℕ×ℕ with τ^∗ appearing infinitely often in the first column if and only if Ψ_T⧫_{h,v,s} (⧫_{v,s} p^∗▪_{v,s}[▪_{h,s}¬ p^∗] ¬Ψ_T) is satisfiable. Assume that the set of tiles T can tile the ℕ×ℕ plane with a special tile τ^∗∈T appearing infinitely often in the first column. We can construct a corresponding grid-like model following the proof of Lemma <ref>. In such a construction, propositional variable p^∗ will hold in all states (i,j,𝔩) such that position (i,j) in the tiling is covered by τ^∗. Moreover, our construction of M satisfies 𝑡𝑖𝑙𝑒_𝑙𝑒𝑓𝑡 ensuring that p^∗ is true only in some states of the first column. We need to argue that M_(0,0,𝔢)Ψ_T⧫_{h,v,s} (⧫_{v,s} p^∗▪_{v,s}[▪_{h,s}¬ p^∗] ¬Ψ_T). That M_(0,0,𝔢)Ψ_T follows from Lemma <ref>. By the semantics, M_(0,0,𝔢)⧫_{h,v,s}(⧫_{v,s} p^∗▪_{v,s}([▪_{h,s}¬ p^∗] ¬Ψ_T)) is equivalent to the fact that there is a (i,j, 𝔩) such that (0,0,𝔢) ∼_{h, v,s} (i,j, 𝔩) and M_(i,j, 𝔩)⧫_{v,s} p^∗▪_{v,s} ([▪_{h,s}¬ p^∗] ¬Ψ_T). Since it is required that M_(i,j, 𝔩)⧫_{v,s} p^∗ and due to the fact that τ^∗ appears only in the first column of the grid, we have that i=0. Hence, M_(0,j, 𝔩)⧫_{v,s} p^∗. By the construction of M and by the definition of semantics, M_(0,j, 𝔩)▪_{v,s} ([▪_{h,s}¬ p^∗] ¬Ψ_T) is equivalent to the fact that for all (0,k, 𝔩) such that (0,j,𝔢) ∼_{v,s} (0,k, 𝔩) we have M_(0,k, 𝔩) [▪_{h,s}¬ p^∗] ¬Ψ_T. Note that ∼_{v,s} allows us to reach only states in the first column of the model. W.l.o.g. let us assume that M_(0,k, 𝔩)▪_{h,s}¬ p^∗, i.e. p^∗ is not satisfied anywhere on row k. The result of announcing ▪_{h,s}¬ p^∗ is model M^▪_{h,s}¬ p^∗, where if M_(0, l, 𝔩) p^∗, then (k, l, 𝔩) ∉S^▪_{h,s}¬ p^∗ for all k∈ℕ and 𝔩∈{𝔫, 𝔰, 𝔢, 𝔴}. In other words, all rows in M that start with a state satisfying p^∗ are not preserved in M^▪_{h,s}¬ p^∗. It is now left to argue that M^▪_{h,s}¬ p^∗_(0,k, 𝔩)Ψ_T. Since we assumed that tile τ^∗ appears infinitely often in the first column, then p^∗ is satisfied by an infinite number of states in the first column of M. Thus, removing all corresponding rows with the announcement ▪_{h,s}¬ p^∗ guarantees that no matter how large k is, i.e. how high we are in the tiling, we are always on a part of the grid with a finite height. Hence, Ψ_T (in particular, ▪_{h,v,s}𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠) is not satisfied . To prove the other direction, let us show the contrapositive. Assume that set T cannot tile the ℕ×ℕ plane with a special tile τ^'∈T appearing infinitely often in the first column. We argue that in this case, Ψ_T⧫_{h,v,s}(⧫_{v,s} p^∗▪_{v,s}([▪_{h,s}¬ p^∗] ¬Ψ_T)) is not satisfiable. The first two conjuncts are straightforward. If T cannot tile the ℕ×ℕ plane, then, by Lemma <ref>, Ψ_T is not satisfiable. 
If T can tile the ℕ×ℕ plane, but τ^∗ never appears in the first column, then ⧫_{v,s} p^∗ is trivially false. Finally, assume that T can tile the ℕ×ℕ plane, and τ^∗ appears in the first column only finitely often. This means that there is some position (0,j) in the grid, such that τ^∗ does not cover it and any other positions above (i.e. (0,j) is above the final occurrence of τ^∗ in the first column). By the construction of M, we have that there is a state (0,j,𝔩) with (0,k,𝔩) ∉V(p^∗) for all k ⩾ j. As (0,j,𝔩) is in the first column, it is reachable by ∼_{v,s}. As argued above, announcement of ▪_{h,s}¬ p^∗ removes all rows in M that do not start with a p^∗-state. However, since the number of occurrences of p^∗ is finite, and (0,j,𝔩) is above the last row that satisfied p^∗, M_(0,j,𝔩)^▪_{h,s}¬ p^∗Ψ_T. See Figure <ref> for the depiction of the situation. In the construction of Ψ_T and proofs of Lemmas <ref> and <ref>, we used APALC quantifiers [!]. We can prove the similar results for GALC and CALC quantifers by substituting [!] with [{h,v,s}] and [ ⟨{h,v,s}⟩ ] correspondingly, and substituting 𝖯𝖠𝖫𝖢 with 𝖯𝖠𝖫𝖢^{h,v,s}. We get the hardness result from the Σ^1_1-completeness of the recurring tiling problem <cit.>. The satisfiability problem of QPALCs is Σ^1_1-hard. The Σ^1_1-hardness of the satisfiability problems of QPALCs implies that the sets of validites of the logics are not RE, which, in turn, implies that QPALCs are not finitely axiomatisable. The set of valid formulas of QPALCs is neither RE nor co-RE. QPALCs do not have finitary axiomatisations. We are perhaps a bit too vague here. § THE SATISFIABILITY PROBLEM OF QPALCS IS Σ^1_1-HARD We prove the Σ^1_1-hardness of the satisfiability problem of QPALCs via a reduction from the recurring tiling problem <cit.>. Let C be a finite set of colours. A tile is a function τ:{𝗇𝗈𝗋𝗍𝗁, 𝗌𝗈𝗎𝗍𝗁, 𝖾𝖺𝗌𝗍, 𝗐𝖾𝗌𝗍}→ C. A finite set of tiles T is called an instance of the tiling problem. A solution to an instance of the tiling problem is a function[Throughout the paper we assume that 0 ∈ℕ.] f:ℕ×ℕ→T such that for all (i,j) ∈ℕ×ℕ, f(i,j) (𝗇𝗈𝗋𝗍𝗁) = f(i,j+1) (𝗌𝗈𝗎𝗍𝗁) and f(i,j) (𝖾𝖺𝗌𝗍) = f(i+1,j) (𝗐𝖾𝗌𝗍). Let T be a finite set of tiles with a designated tile τ^∗∈T. The recurring tiling problem is the problem to determine whether there is a solution to instance T of the tiling problem such that τ^∗ appears infinitely often in the first column. We assume without loss of generality that the designated tile τ^∗ occurs only in the first column. §.§ Encoding a Tiling For our construction we will require five propositional variables — , , , and — to designate the corresponding sides of tiles. Additionally, we will have designated propositional variables for each colour in C, and for each tile τ_i ∈T there is a propositional variable p_i that represents this tile. Finally, we will use p^∗ for the special τ^∗. In our construction, we will represent each tile with (at least) five states: one for each of the four sides of a tile, and one for the centre. As for agents, we require only three of them for our construction. Agent s, for square, cannot distinguish states within the same tile. Agent v, for vertical, cannot distinguish between the northern part of one tile and the southern part of the tile above. Similarly, the horizontal agent h cannot distinguish between the eastern and western parts of adjacent tiles. See Figure <ref> for the depiction of an intended grid-like model. Let an instance T of the recurring tiling problem be given. 
We start by construction of formula Ψ_T that will be satisfied in a given model if and only if the model is grid-like. We will build up Ψ_T step-by-step, defining useful subformulas along the way. Let 𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 be the following set 𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 := {𝗇𝗈𝗋𝗍𝗁, 𝗌𝗈𝗎𝗍𝗁, 𝖾𝖺𝗌𝗍, 𝗐𝖾𝗌𝗍, 𝖼𝖾𝗇𝗍𝗋𝖾}. The first constraint, expressed by formula 𝑜𝑛𝑒_𝑐𝑜𝑙𝑜𝑢𝑟, is that each state is coloured by exactly one colour. To ensure that all five parts — north, south, east, west, and centre — are present in a current square, we state in 𝑎𝑙𝑙_𝑝𝑎𝑟𝑡𝑠 that in all squares the square agent s has access to all five relevant states. 𝑜𝑛𝑒_𝑐𝑜𝑙𝑜𝑢𝑟 := ⋁_c ∈ C(c ⋀_d ∈ C ∖{c}¬ d) 𝑎𝑙𝑙_𝑝𝑎𝑟𝑡𝑠 := □_s ⋁_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇 q ⋀_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇_s q The formulas ℎ𝑜𝑟 and 𝑣𝑒𝑟𝑡 state that the relation h only allows us to move between 𝖾𝖺𝗌𝗍 and 𝗐𝖾𝗌𝗍 states, while v only allows movement between 𝗇𝗈𝗋𝗍𝗁 and 𝗌𝗈𝗎𝗍𝗁 states. ℎ𝑜𝑟 := ⋀_q∈{𝗇𝗈𝗋𝗍𝗁,𝗌𝗈𝗎𝗍𝗁,𝖼𝖾𝗇𝗍𝗋𝖾} (q→□_h q) 𝑣𝑒𝑟𝑡 := ⋀_q∈{𝖾𝖺𝗌𝗍,𝗐𝖾𝗌𝗍,𝖼𝖾𝗇𝗍𝗋𝖾} (q→□_v q) With 𝑜𝑛𝑒_𝑝𝑜𝑠 we force each state to satisfy exactly one propositional variable from 𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇, and with 𝑜𝑛𝑒_𝑡𝑖𝑙𝑒 we ensure that all states within the same tile are labelled by the tile proposition. 𝑜𝑛𝑒_𝑝𝑜𝑠 := ⋁_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇(q ⋀_q^'∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇∖{q}¬ q^') 𝑜𝑛𝑒_𝑡𝑖𝑙𝑒 := ⋁_τ_i ∈T(p_i □_s p_i ⋀_τ_j ∈T∖{τ_i}¬ p_j) Next, we force each state in a tile to satisfy exactly one atom corresponding to their designated colour: 𝑠𝑡𝑎𝑡𝑒_𝑐𝑜𝑙 := ⋁_τ_i ∈T(p_i →⋀_q ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇∖{𝖼𝖾𝗇𝗍𝗋𝖾} (q →τ_i(q))), where τ_i(q) is the colour of the tile τ_i on the side q (e.g. τ_i (𝗌𝗈𝗎𝗍𝗁) is the bottom colour of tile τ_i). All the formulas considered so far deal with the representation of a single tile. We will use the following abbreviation: ψ_𝑡𝑖𝑙𝑒 := 𝑜𝑛𝑒_𝑐𝑜𝑙𝑜𝑢𝑟𝑎𝑙𝑙_𝑝𝑎𝑟𝑡𝑠ℎ𝑜𝑟𝑣𝑒𝑟𝑡𝑜𝑛𝑒_𝑝𝑜𝑠𝑜𝑛𝑒_𝑡𝑖𝑙𝑒𝑠𝑡𝑎𝑡𝑒_𝑐𝑜𝑙 Adjoining tiles are required to have the same colour on the sides facing each other, we simulate this by requiring that agents h and v consider a current colour in the top and right directions. In such a way we also ensure that the grid is infinite in the positive quadrant. 𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠 := ⋀_c ∈ C( (𝗇𝗈𝗋𝗍𝗁 c →_v 𝗌𝗈𝗎𝗍𝗁□_v c) (𝖾𝖺𝗌𝗍 c →_h 𝗐𝖾𝗌𝗍□_h c) ) We are concerned with the reduction from the ℕ×ℕ recurring tiling problem, i.e. our grid will have left and bottom edges. We force the existence of a tile at position (0,0) with the following formula: 𝑖𝑛𝑖𝑡 := ⧫_{h,v,s} (▪_{v,s}(𝗐𝖾𝗌𝗍→□_h𝗐𝖾𝗌𝗍) ▪_{h,s}(𝗌𝗈𝗎𝗍𝗁→□_v𝗌𝗈𝗎𝗍𝗁)) For the remaining formulas, it is useful to define two abbreviations. We use □_𝑢𝑝φ to denote □_s (𝗇𝗈𝗋𝗍𝗁→□_v(𝗌𝗈𝗎𝗍𝗁→φ)), i.e., we first move, by agent s, to the state representing the northern quadrant of the tile, then we move, by agent v, to southern quadrant of the tile above, where we evaluate φ. Similarly, we use □_𝑟𝑖𝑔ℎ𝑡φ to denote □_s(𝖾𝖺𝗌𝗍→□_h(𝗐𝖾𝗌𝗍→φ)). The duals ◊_𝑢𝑝 and ◊_𝑟𝑖𝑔ℎ𝑡 are defined as usual. The next two formulas are used to guarantee that for every tile there are unique tiles, up to PALC-indistinguishability, above it and to its right. 𝑢𝑝 := [!](◊_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾→□_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾) 𝑟𝑖𝑔ℎ𝑡 := [!](◊_𝑟𝑖𝑔ℎ𝑡◊_s𝖼𝖾𝗇𝗍𝗋𝖾→□_𝑟𝑖𝑔ℎ𝑡◊_s𝖼𝖾𝗇𝗍𝗋𝖾) Additionally, we use the following two formulas to establish a commutative property: going up and then right results in a state that is PALC-indistinguishable from going right and then up. 𝑟𝑖𝑔ℎ𝑡&𝑢𝑝 := [!](◊_𝑟𝑖𝑔ℎ𝑡◊_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾→□_𝑢𝑝□_𝑟𝑖𝑔ℎ𝑡◊_s𝖼𝖾𝗇𝗍𝗋𝖾) 𝑢𝑝&𝑟𝑖𝑔ℎ𝑡 := [!](◊_𝑢𝑝◊_𝑟𝑖𝑔ℎ𝑡◊_s𝖼𝖾𝗇𝗍𝗋𝖾→□_𝑟𝑖𝑔ℎ𝑡□_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾) Finally, we make sure that any two states that are h or v related and that are in the same position are parts of indistinguishable tiles. 
𝑛𝑜_𝑐ℎ𝑎𝑛𝑔𝑒 := ⋀_q,q'∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇[!]((q∧◊_s q')→ (□_h (q→◊_s q')∧□_v (q→◊_sq'))) The formula ℎ𝑜𝑟 states that unless we are in a 𝖾𝖺𝗌𝗍 or 𝗐𝖾𝗌𝗍 position, we cannot go to a different position using h. Similarly, 𝑣𝑒𝑟𝑡 states that unless we are in a 𝗇𝗈𝗋𝗍𝗁 or 𝗌𝗈𝗎𝗍𝗁 position we can't use v to change position. The formula 𝑛𝑜_𝑐ℎ𝑎𝑛𝑔𝑒 then states that any move by relation h or v that does not change the position must lead to an indistinguishable tile. We abbreviate formulas with quantifiers as ψ_x&y:= 𝑢𝑝𝑟𝑖𝑔ℎ𝑡𝑟𝑖𝑔ℎ𝑡&𝑢𝑝𝑢𝑝&𝑟𝑖𝑔ℎ𝑡𝑛𝑜_𝑐ℎ𝑎𝑛𝑔𝑒 In our reduction, we are interested in grids where a special tile appears infinitely often in the first column of the grid. The following formula requires that the special tile appears only in the leftmost column: 𝑡𝑖𝑙𝑒_𝑙𝑒𝑓𝑡 := p^∗→□_s(𝗐𝖾𝗌𝗍→□_h𝗐𝖾𝗌𝗍) All of this completes the necessary requirements for the grid. Now, by adding a common knowledge modality for all agents, we force all of the aforementioned formulas to hold everywhere in the grid. Ψ_T := ▪_{h,v,s}( ψ_𝑡𝑖𝑙𝑒𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠𝑖𝑛𝑖𝑡ψ_𝑥&𝑦𝑡𝑖𝑙𝑒_𝑙𝑒𝑓𝑡) Observe that Ψ_T does not say anything about the special tile τ^∗ appearing infinitely often in the first column. The formula merely requires that if there is a special tile, then it should appear in the first column. We first show that Ψ_T forces a grid-like model, and only after that will we consider the (in)finite number of occurrences of the special tile. Let T be an instance of the recurring tiling problem. If T can tile ℕ×ℕ, then Ψ_T is satisfiable. Assume that there is a tiling of the ℕ×ℕ plane with a finite set of tiles T. We construct model M = (S, ∼, V) satisfying Ψ_T directly from the given tiling. In particular, * S = ℕ×ℕ×{𝔫, 𝔰, 𝔢, 𝔴, 𝔠}, * ∼_s = {(i,j,𝔩), (i',j',𝔩') | i = i' and j = j'} * ∼_v is the reflexive closure of {(i,j,𝔫), (i, j+1, 𝔰)} * ∼_h is the reflexive closure of {(i,j,𝔢), (i+1, j, 𝔴)} * for all τ_k ∈T, V(p_k) = {(i,j,𝔩) |τ_k is at (i,j)} * for all c ∈ C, V(c) = {(i,j,𝔩) |τ(𝔩)=c} * for all l ∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇, V(l) = {(i,j,𝔩) | l corresponds to 𝔩} To argue that M_(0,0,𝔢)Ψ_T we first notice that due to the fact that T tiles the ℕ×ℕ plane and by the construction of M, subformulas of Ψ_T that do not involve arbitrary announcements are straightforwardly satisfied. Now, consider the formula 𝑢𝑝. For every (i,j,𝔩), there is at most one (i',j',𝔩') that is reachable by taking an s-step to a 𝗇𝗈𝗋𝗍𝗁 state followed by a v-step to a 𝗌𝗈𝗎𝗍𝗁 state, namely (i',j',𝔩')=(i,j+1,𝔰). Furthermore, this property is retained in any submodel of M. As a consequence, in any state of any submodel of M, ◊_𝑢𝑝χ implies □_𝑢𝑝χ, for every χ. In particular, it follows that M_(i,j,𝔩) [!](◊_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾→□_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾), i.e., M_(i,j,𝔩)𝑢𝑝. Similar reasoning shows that (i,j,𝔩) satisfies the other conjuncts of ψ_x&y. Hence M_(i,j,𝔩)ψ_𝑡𝑖𝑙𝑒𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠𝑖𝑛𝑖𝑡ψ_𝑥&𝑦𝑡𝑖𝑙𝑒_𝑙𝑒𝑓𝑡, for all (i,j,𝔩), and thus M_(0,0,𝔢)Ψ_T. The more complex part of the reduction is to show that if Ψ_T is satisfiable, then a tiling exists. Let T be an instance of the recurring tiling problem. If Ψ_T is satisfiable, then T can tile ℕ×ℕ. Let M be such that M_sΨ_T. The model M is partitioned by ∼_s, we refer to these partitions as grid points, and label these points as follows. * The grid point containing s is labelled (0,0). * If A and B are grid points, A is labelled (i,j) and there is a 𝗇𝗈𝗋𝗍𝗁-state in A that is v-indistinguishable to a 𝗌𝗈𝗎𝗍𝗁-state in B, then B is labelled (i,j+1). * If A and B are grid points, A is labelled (i,j) and there is a 𝖾𝖺𝗌𝗍-state in A that is h-indistinguishable to a 𝗐𝖾𝗌𝗍-state in B, then B is labelled (i+1,j). 
Note that a single grid point might have multiple labels. We say that (i,j) is tiled with τ_i if there is some grid point labelled with (i,j) that contains a state where p_i holds. We start by noting that because the main connective of Ψ_T is ▪_{h,v,s}, the formula holds in every labelled grid point. For every labelled grid point X and every x∈ X, we therefore have M_xψ_𝑡𝑖𝑙𝑒. So X contains states for every direction, each labelled with exactly one colour that corresponds to the tile that holds on X. We continue by proving the following claim. Claim 1: Let X, A and B be grid points where X is labeled (i,j) while A and B are both labeled (i,j+k) by virtue of being k-steps to the north of X. Then A and B are PALC-indistinguishable, in the sense that for every χ∈𝖯𝖠𝖫𝖢, if there is an a∈ A such that M_aχ then there is a b∈ B such M_bχ (and vice versa). Proof of Claim 1: By induction on k. As base case, let k=1 and suppose towards a contradiction that, for some χ∈𝖯𝖠𝖫𝖢 and a ∈ A, M_aχ while for every b∈ B, M_bχ. Consider then the formula 𝖼𝖾𝗇𝗍𝗋𝖾→◊_sχ. Every 𝖼𝖾𝗇𝗍𝗋𝖾 state in A satisfies this formula, while none of the 𝖼𝖾𝗇𝗍𝗋𝖾 states in B do. Hence, for every state x∈ X, M_x [𝖼𝖾𝗇𝗍𝗋𝖾→◊_sχ](◊_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾□_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾). But that contradicts M_x𝑢𝑝. From this contradiction, we prove the base case k=1. Now, suppose as induction hypothesis that k>1 and that the claim holds for all k'<k. Again, suppose towards a contradiction that M_aχ while M_bχ for all b∈ B. Let A' and B' be grid points that lie k-1 steps to the north of X and one step to the south of A and B, respectively. Then for every a'∈ A' and b'∈ B', M_a'◊_𝑢𝑝◊_sχ and M_b'◊_𝑢𝑝◊_sχ. By the induction hypothesis, A' and B' are indistinguishable, so M_a'◊_𝑢𝑝◊_sχ∧◊_𝑢𝑝◊_sχ. But then there are distinguishable grid points one step to the north of A', contradicting the induction hypothesis. From this contradiction, we prove the induction step and thereby the claim. Similar reasoning shows that any two grid points A, B that are labeled (i+k,j) by virtue of being k steps to the right of the same grid point X are indistinguishable. Now, we can prove the next claim. Claim 2: Let X, A and B be grid points, where X is labelled (i,j), A is labelled (i+1,j+1) by virtue of being above A' which is to the right of X, and B is labelled (i+1,j+1) by virtue of being to the right of B' which is above B. Then A and B are PALC-indistinguishable. Proof of claim 2: Suppose towards a contradiction that for some χ∈𝖯𝖠𝖫𝖢 and a ∈ A we have M_aχ, while M_bχ for all b∈ B. Then for x∈ X we have M_x [𝖼𝖾𝗇𝗍𝗋𝖾→◊_sχ](◊_𝑟𝑖𝑔ℎ𝑡◊_𝑢𝑝◊_s𝖼𝖾𝗇𝗍𝗋𝖾∧◊_𝑢𝑝◊_𝑟𝑖𝑔ℎ𝑡◊_s𝖼𝖾𝗇𝗍𝗋𝖾), contradicting M_x𝑟𝑖𝑔ℎ𝑡&𝑢𝑝. From Claim 1 it follows that any A and B that are labelled (i,j) by virtue of being i steps to the right and then j steps up from (0,0) are PALC-indistinguishable. Claim 2 then lets us commute the “up” and “right” moves. Any path to (i,j) can be obtained from the path that first goes right i steps then up j steps by a finite sequence of such commutations. Hence any grid points A and B that are labelled (i,j) are PALC-indistinguishable. The tile formulas p_i, for every τ_i∈T, are PALC-formulas, so there is exactly one tile τ_i that is assigned to the grid point (i,j). Furthermore, 𝑠𝑡𝑎𝑡𝑒_𝑐𝑜𝑙 then guarantees that each side of a grid point has the colour corresponding to the tile, and 𝑎𝑑𝑗_𝑡𝑖𝑙𝑒𝑠 guaranteees that the tile colours match. This shows that if Ψ_T is satisfiable, then T can tile ℕ×ℕ. 
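Before moving to the recurring tile, the plain tiling constraints from the definition above can also be seen in executable form; the following Python sketch (illustrative only, with made-up tiles and patch) checks the north/south and east/west colour-matching conditions on a finite patch.

# Tiles are functions from sides to colours; here, simple dicts.
T1 = {"north": "red", "south": "blue", "east": "green", "west": "green"}
T2 = {"north": "blue", "south": "red", "east": "green", "west": "green"}

def consistent(patch, width, height):
    # check f(i,j)(north) = f(i,j+1)(south) and f(i,j)(east) = f(i+1,j)(west)
    for i in range(width):
        for j in range(height):
            tile = patch[(i, j)]
            if j + 1 < height and tile["north"] != patch[(i, j + 1)]["south"]:
                return False
            if i + 1 < width and tile["east"] != patch[(i + 1, j)]["west"]:
                return False
    return True

# a 2x2 patch that alternates T1 and T2 in each column
patch = {(0, 0): T1, (0, 1): T2, (1, 0): T1, (1, 1): T2}
print(consistent(patch, 2, 2))  # True: colours match vertically and horizontally

A solution in the sense of the definition is exactly an assignment for which every such finite patch passes this check.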
§.§ Encoding the Recurring Tile The final formula that is satisfied in a grid model if and only if a given tiling has a tile that occurs infinitely often in the first column would be Ψ_T▪_{v,s}[▪_{h,s}¬ p^∗] ¬Ψ_T. In other words, the recurring tiling problem can be reduced to the APALC-satisfiability problem, where the reduction maps the instance (T,τ^∗) of the recurring tiling problem to the satisfiability of Ψ_T▪_{v,s}[▪_{h,s}¬ p^∗] ¬Ψ_T. Intuitively, the formula states that if we remove all rows with the special tile, then our model is no longer a grid. See Figure <ref>, where on the left we have a grid with the special grey tile τ^∗ appearing infinitely often in the first column (every other tile in the first column is grey). Formula ▪_{h,s}¬ p^∗ holds only in those squares of the grid that lie on rows without the special tile. Thus, announcing ▪_{h,s}¬ p^∗ removes all rows that has the grey tile (see the right part of Figure <ref>). Since the grey tile appears infinitely often in the original grid, we have to remove an infinite number of rows after the announcement of ▪_{h,s}¬ p^∗, thus ensuring that what is left of the original model is not a grid. Let T be an instance of the tiling problem with a special tile τ^∗∈T. Set T can tile ℕ×ℕ with τ^∗ appearing infinitely often in the first column if and only if Ψ_T▪_{v,s}[▪_{h,s}¬ p^∗] ¬Ψ_T is satisfiable. First, let us can extend the labelling from the proof of Lemma <ref> as follows: * For every q∈𝖯𝗈𝗌𝗂𝗍𝗂𝗈𝗇, if A and B are grid points, A is labeled (i,j) and there is a q state in A that is v or h-indistinguishable from a q state in B, then B is labeled (i,j). It follows from 𝑛𝑜_𝑐ℎ𝑎𝑛𝑔𝑒 that this extended labelling retains the property that any two grid points with the same label are PALC-indistinguishable. Furthermore, from ℎ𝑜𝑟 and 𝑣𝑒𝑟𝑡 it follows that every grid point that is reachable by h, v and s is now labelled with some coordinates (i,j). Hence we can identify the {h, v, s}-reachable grid points in any model of Ψ_T with ℕ×ℕ. Now, assume that set T cannot tile the ℕ×ℕ plane with a special tile τ^'∈T appearing infinitely often in the first column. We argue that in this case, Ψ_T▪_{v,s}([▪_{h,s}¬ p^∗] ¬Ψ_T) is not satisfiable. The first conjunct is straightforward. If T cannot tile the ℕ×ℕ plane, then, by Lemma <ref>, Ψ_T is not satisfiable. So suppose that T can tile the plane, but only in such a way that τ^∗ occurs finitely often. For every model M_(0,0,𝔩) of Ψ_T, there is then some k∈ℕ that is the last row in which p^∗ is true. The formula ▪_{h,s} p^∗ holds exactly on those rows where p^∗ does not hold in the first column. As a result, the update [▪_{h,s} p^∗] does not remove any rows past row k. The grid points ℕ×ℕ_>k then still form a grid that is isomorphic to ℕ×ℕ, and that is tiled. See Figure <ref> for a depiction of the situation. It follows that M_(0,k,𝔩)[▪_{h,s} p^∗]Ψ_T, and therefore M_(0,0,𝔩)▪_{v,s}[▪_{h,s} p^∗]Ψ_T. This is true for every model of Ψ_T, so Ψ_T▪_{v,s}[▪_{h,s} p^∗]Ψ_T is not satisfiable. If, on the other hand, T can tile the plane in such a way that τ^∗ occurs infinitely often in the first column, then there is a model of Ψ_T where the modality [▪_{h,s} p^∗] removes infinitely many rows, and therefore does not leave any infinite grid. So Ψ_T∧▪_{v,s}[▪_{h,s} p^∗]Ψ_T is satisfiable. In the construction of Ψ_T and proofs of Lemmas <ref> and <ref>, we used APALC quantifiers [!]. We can prove the similar results for GALC and CALC quantifers by substituting [!] 
with [{h,v,s}] and [ ⟨{h,v,s}⟩ ] correspondingly, and substituting 𝖯𝖠𝖫𝖢 with 𝖯𝖠𝖫𝖢^{h,v,s}. We get the hardness result from the Σ^1_1-completeness of the recurring tiling problem <cit.>. The satisfiability problem of QPALCs is Σ^1_1-hard. The Σ^1_1-hardness of the satisfiability problems of QPALCs together with the fact that the class of Σ^1_1 problems is strictly greater than the class of co-RE problems <cit.> imply that the sets of validites of the logics are not RE, which, in turn, implies that QPALCs are not finitely axiomatisable. The set of valid formulas of QPALCs is neither RE nor co-RE. QPALCs do not have finitary axiomatisations. § DISCUSSION The existence of finitary axiomatisations of any of APAL, GAL, and CAL is a long-standing open problem. In this paper, we have showed that the satisfiability problem of the logics extended with common knowledge modality is Σ^1_1-hard, and thus they do not admit of finitary axiomatisations. Table <ref> contains the overview of the known results, including those shown in this paper, and open questions. It is important to point out that the use of common knowledge is instrumental in our construction. Arguments from <cit.> did not rely on common knowledge to enforce local grid properties globally, and instead the authors used an agent with the universal relation over the set of states. This approach is good enough if one wants to demonstrate the existence of a grid-like model. However, if we also require that the grid satisfies some property, like a special tile occurring infinitely often in the first column, then the presence of the global agent makes it harder to ensure this. The problem is that such an unrestrained relation may access other grids within the same model, and thus we may end up in the situation when the property is satisfied by a set of grids taken together and not by any single grid. Our construction is `tighter' than those in <cit.>. In particular, our vertical and horizontal agents can `see' only one step ahead. This guarantees that we stay within the same grid. In order to force grid properties globally, we use common knowledge operators that allow us to traverse a given grid-like model in all directions. It is not yet clear how to have a `tight' grid and still be able to traverse the model without common knowledge. With this work, apart from showing that QPALCs are Σ^1_1-hard, we also hope to have elucidated the exact obstacle one has to overcome in order to claim the same about QPALs. §.§ Acknowledgements We would like to thank the three anonymous reviewers for their encouraging comments and constructive suggestions, which helped us to improve the presentation of our result. eptcs
http://arxiv.org/abs/2307.07644v1
20230714222428
Non-Gaussian Saha ionization equation in Rindler space
[ "L. L. Sales", "F. C. Carvalho" ]
astro-ph.CO
[ "astro-ph.CO", "physics.plasm-ph", "stat.AP" ]
^1Departamento de Física, Universidade do Estado do Rio Grande do Norte, 59610–210, Mossoró – RN, Brazil This paper investigates the non-Gaussian effects of the Saha equation in Rindler space via Tsallis statistics. By considering a system with cylindrical geometry and the equivalence principle, we deduce the non-Gaussian Saha ionization equation for a partially ionized hydrogen plasma that expands with uniform acceleration. We examine the photoionization of hydrogen atoms and the electron-positron pair production at high temperatures. Our findings reveal that the non-Gaussian binding energy exhibits a quadratic dependence on the gravitational field, in contrast to the linear dependence predicted by Boltzmann-Gibbs statistics. Hence, both photoionization and pair production are more intensely suppressed in regions with a strong gravitational field in a non-Gaussian context than in the Boltzmann-Gibbs framework. Finally, constraints on the gravitational field and the electron and positron chemical potentials are derived. Keywords: Saha equation. Rindler space-time. Tsallis statistics. Non-Gaussian Saha ionization equation in Rindler space F. C. Carvalho^1,[[email protected]] August 12, 2023 ====================================================== § INTRODUCTION The Saha equation, formulated by the Indian astrophysicist Meghnad Saha in 1920 <cit.>, is crucial in studying the fraction of ionized atoms as a function of particle densities and temperature. This approach provides essential information about various phenomena, still not very well understood, such as the creation of neutrinos in the solar core and the estimation of light element concentrations in the early universe, among other issues in astrophysics and cosmology. A recent study on the Saha equation in Rindler space has been carried out by De and Chakrabarty <cit.>. An analysis of the photoionization of hydrogen atoms and the pair production process was performed. The authors proved that strong gravitational fields suppress the photoionization of hydrogen atoms and also pair production at high temperatures. The so-called Rindler space refers to a reference frame undergoing a uniformly accelerated motion with respect to an inertial frame. In this space-time, the relationship between two frames of reference, one inertial and the other non-inertial with uniform acceleration, is given by the well-known Rindler coordinates <cit.>. This theoretical framework has been used in a variety of physical systems, as evidenced by the references <cit.>. The photoionization and pair production processes are primarily determined by electromagnetic reactions, which exhibit long-range interactions, as well as the gravitational field. Since Boltzmann-Gibbs (BG) statistical mechanics is unsuitable for describing systems with long-range interactions <cit.>, the conventional Saha equation in Rindler space is also not suitable to accurately describe such physical processes. New observational evidence, reported in recent publications, indicates that q-thermostatistics appropriately describes certain aspects of astrophysical self-gravitating systems (see, for instance, <cit.>). Thus, in this context, we chose to apply Tsallis statistics to analyze photoionization and pair production processes in Rindler space so that we can account for the long-range interactions in the process due to the presence of the gravitational field. 
Since the original publication of Tsallis in 1988 <cit.>, Tsallis statistics has found successful application in several fields of knowledge. Examples include astrophysics and cosmology <cit.>, mathematical physics <cit.>, general relativity <cit.>, among others. This statistical framework has proved to be valuable in describing various physical systems, particularly those that exhibit anomalous behavior or long-range interactions, such as the gravitational and electromagnetic forces <cit.>. This paper is structured as follows. In Section <ref>, an overview of Rindler space is presented. In Section <ref>, we show the methodological procedure to obtain the non-Gaussian Saha equation in Rindler space, as well as the analysis of photoionization and the pair production. Lastly, conclusions are shown in Section <ref>. Throughout the paper, we will adopt the following system of natural units: c=k_B=ħ=1. § RINDLER'S SPACE: BRIEF INTRODUCTION In special relativity, the relationship between the coordinates of two inertial frames S and S^' is given by the well-known Lorentz transformations <cit.> x^' = γ(x-vt) , y^' = y , z^' = z , t^' = γ(t-vx) , where γ=(1-v^2)^-1/2 is the Lorentz factor. Here, S^' moves uniformly in the x-direction concerning the S-frame. Let's assume, now, that the frame of reference S^' is moving along the x-direction with uniform acceleration α concerning S. In this instance, the transformations are given by the Rindler coordinates <cit.>: x = (1/α + x^')cosh(α t^') , t = (1/α + x^')sinh(α t^') . The line element in Rindler space-time can be written as <cit.> ds^2 = (1+α x^')^2dt^'^2 -dx^'^2 -dy^'^2 -dz^'^2 , whose metric tensor takes the form g^μν = diag[ (1+α x^')^2,-1,-1,-1] . Employing the concepts of the relativistic dynamics of special relativity, it is possible to prove that the Rindler Hamiltonian (single particle energy) can be written as <cit.> H ≡ E= m(1+α x)(1+p^2/m^2)^1/2 , where m is the rest mass of the particle. In the non-relativistic regime, Eq. (<ref>) can be approximated as <cit.> E= m^'' + p^2/2m' , in which we define m^' = m(1+α x)^-1 and m^'' = m(1+α x) . § NON-GAUSSIAN SAHA EQUATION IN RINDLER SPACE Some basic considerations must be made before proceeding to the derivation of the Saha equation in Rindler space. Consider the following system configuration <cit.>: (i) a partially ionized hydrogen plasma (a reactive mixture of neutral hydrogen atoms, hydrogen ions, electrons and photons); (ii) cylindrical geometry: in this scenario, the plasma expands along the positive x-direction, which coincides with the symmetry axis of the cylinder, with constant acceleration α; and (iii) principle of equivalence: the photoionization or recombination of accelerated particles (non-inertial frame) is equivalent to the same processes occurring in an inertial reference frame in the presence of a uniform gravitational field α. When the photoionization rate equals the recombination rate, we may express the reaction H_n + γ↔ H^+ + e^- , where n denotes the hydrogen atom's energy level. In a chemical equilibrium situation, we have that μ_e^-+μ_H^+=μ_H_n, since μ_γ=0. In Ref. <cit.>, the Einstein equivalence principle is used to illustrate how the gravitational field affects the production of electron-positron pairs and the photoionization of hydrogen atoms. 
The Saha equation in a non-inertial frame was obtained by the authors as follows <cit.>:  R(α)=n_H^+n_e/n_H_n = G_(e,+,n)n_Q_eexp(-βΔε_n) , where G_(e,+,n)=g_eg_+/g_n is the degeneracy factor and Δε_n = (1+α x)Δ m, with Δ m=m_e+m_H^+-m_H_n being the hydrogen ionization potential. In 1988, inspired by multifractal systems, Tsallis introduced a new entropic form for the BG entropy <cit.>. This new theoretical framework recovers the BG entropy as a particular case. Mathematically, the Tsallis entropy is defined as follows: S_q = 1/q-1(1-∑_i=1^Ωp_i^q) , where p_i is the probability of the system being in the microstate i, Ω the total number of settings, and q the parameter that measures the intensity of correlations of the system commonly referred to as the entropic index. When q→ 1 the classic entropy is recovered. Besides, S_q is concave for q>0 and convex for q<0. Tsallis entropy is nonadditive, for example, for a system composed of two statistically independent subsystems A and B, the q-entropy behaves like S_q(A+B) = S_q(A) + S_q(B) + (1-q)S_q(A)S_q(B) , where the cross term expresses the degree of nonadditivity of the system and is controlled by the q-parameter. In general, the Tsallis entropy is nonextensive and nonadditive for q≠ 1. However, it is extensive for strongly correlated systems, but only for a certain family of values of the q-parameter, e.g., q=1-1/ρ for typical complex systems with Ω(N)∝ N^ρ. While the additivity property comes from the form of the entropic functional, extensivity is a purely thermodynamic property. The logarithm and exponential functions are rewritten as power laws as follows: ln_q(x)=x^1-q-1/1-q , and exp_q(x) = e_q^x = [1+(1-q) x]^1/1-q . These modified functions return to usual functions when q→ 1. Many helpful properties in the Tsallis framework can be found in Ref. <cit.>. §.§ Non-Gaussian number density in Rindler space In the context of Tsallis' non-Gaussian statistics, the particle number q-density of species i can be defined as n_i^q≡N_i^q/Δ V = g_i/(2π)^3∫ d^3p 𝒩_i^q , where 𝒩_i^q is the generalized occupation number and g_i the degeneracy of species i. Besides, Δ V=AΔ x is a small volume element with A being the cross-sectional area of the cylinder, and Δ x is a small length element in the x-direction at a distance x from the origin. Using a non-Gaussian fermionic distribution and neglecting the quantum effects, Eq. (<ref>) becomes n_i^q = 4π g_i/(2π)^3e_q^β(μ_i-m_i^'')∫_0^a dpp^2e_q^-γ_i p^2 , where a = {[ ∞, q>1; [(1-q)γ_i] ^-1/2, q<1 , ]. being the factor γ_i defined as γ_i = β/2m_i^'[1+(1-q)β(μ_i-m_i^'')] . The number density for non-relativistic particles is given by n_i^q = g_iB_qn_Q_i[ e_q^β(μ_i-m_i^'')]^5-3q/2 , where n_Q_i = (m_i^'T/2π)^3/2 , and B_q = {[ 1/(q-1)^3/2Γ(5-3q/2(q-1))/Γ(1/q-1), 1<q<5/3; ; 1/(1-q)^3/2Γ(2-q/1-q)/Γ(7-5q/2(1-q)), q<1 . ]. The equations presented above were originally derived in Ref. <cit.>. However, in this study, we extend this framework by considering the effects of non-relativistic energy in Rindler space, as shown in Eq. (<ref>). From Eq. (<ref>), we obtain a general expression for the chemical potential of species i as a function of its concentration, which reads as μ_i = m_i^'' + Tln_q(n_i^q/g_iB_qn_Q_i)^2/5-3q . These expressions revert to their original form in the limit q→ 1. The next assignment is to build the non-Gaussian Saha equation in Rindler space for the photoionization and pair production processes. 
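As a quick numerical illustration of the expressions above, the following sketch (our own, in natural units c = k_B = ħ = 1) evaluates the q-deformed functions, the prefactor B_q, and the non-Gaussian number density n_i^q of a non-relativistic species in Rindler space. The function names and all sample values are placeholders chosen for illustration; the q → 1 limit recovers the usual exponential and logarithm.

# Minimal numerical sketch (our own illustration) of the q-deformed functions
# and of the non-Gaussian number density n_i^q, in natural units.
import math

def exp_q(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def ln_q(x, q):
    """Tsallis q-logarithm; reduces to ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def B_q(q):
    """Prefactor B_q for 1 < q < 5/3 and for q < 1, as defined above."""
    if 1.0 < q < 5.0 / 3.0:
        return (math.gamma((5 - 3 * q) / (2 * (q - 1))) /
                ((q - 1) ** 1.5 * math.gamma(1 / (q - 1))))
    if q < 1.0:
        return (math.gamma((2 - q) / (1 - q)) /
                ((1 - q) ** 1.5 * math.gamma((7 - 5 * q) / (2 * (1 - q)))))
    raise ValueError("q outside the ranges used in the text")

def n_q(q, g, T, m, mu, alpha_x):
    """Non-Gaussian number density of a non-relativistic species in Rindler space.
    alpha_x = alpha * x, so m'' = m(1 + alpha_x) and m' = m / (1 + alpha_x)."""
    beta = 1.0 / T
    m_dprime = m * (1.0 + alpha_x)
    m_prime = m / (1.0 + alpha_x)
    n_Q = (m_prime * T / (2.0 * math.pi)) ** 1.5
    return g * B_q(q) * n_Q * exp_q(beta * (mu - m_dprime), q) ** ((5 - 3 * q) / 2)

# As q -> 1 the q-deformed functions recover the usual ones:
print(exp_q(0.3, 1.05), math.exp(0.3))
print(ln_q(2.0, 0.95), math.log(2.0))
# Placeholder call with illustrative (non-physical) values:
print(n_q(q=1.05, g=2, T=0.1, m=1.0, mu=0.2, alpha_x=0.01))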
§.§ Photoionization From now on, let's examine the photoionization process in a partially ionized hydrogen plasma in the Tsallis framework, and assess the effects of a non-Gaussian contribution. Taking n_i^q for electrons (e^-), hydrogen ion (H^+), and hydrogen atoms in their n-th excited state (H_n), the non-Gaussian version of Eq. (<ref>) is R(α,q)=n_H^+^qn_e^q/n_H_n^q = G_(e,+,n)n_Q_eB_q( e_q^-βΔε_q,n)^5-3q/2 , where we set Δε_q,n =Δε_n+(q-1)β m_e^''m_H^+^''/1+(q-1)β m_H_n^'' , or in terms of acceleration α as Δε_q,n = (1+α x)Δ m+(q-1)(1+α x)^2β m_em_H^+/1+(q-1)(1+α x)β m_H_n . Note that this effective biding energy depends on α^2. Therefore, by treating α as the gravitational field via the equivalence principle, the non-Gaussian effect on the binding energy is characterized by a quadratic dependence on the gravitational field that is governed by the q-parameter. In contrast, using the BG statistics, the effective binding energy has a linear dependency of α in Rindler space. By definition, Δε_n >0, so we have Δε_q>0 if q>1, and Δε_q<0 if q<1. At very high temperatures (T→∞), the quadratic dependence in Eq. (<ref>) disappears and then Δε_q,n→Δε_n. In the limit of q→ 1 and in the absence of a gravitational field, Eq. (<ref>) recovers the standard Saha equation. The effective binding energy defined in Ref. <cit.> reads as[It is noteworthy that although it is convenient to introduce the notion of effective binding energy, the binding energy itself, strictly speaking, is determined solely by the laws of quantum mechanics and is independent of temperature or of any other parameters related to thermo-statistics.] ε_q =ε_0+(q-1)β m_em_p/1+(q-1)β m_H . An analysis of the limiting cases of equation above shows that lim_T →∞ε_q→ε_0 and lim_T → 0ε_q≈ m_e = 0.511 MeV . This suggests that the hydrogen ionization potential in an environment with temperatures close to absolute zero must be on the order of the electron's rest energy. On the other hand, with the effective binding energy in Rindler space, Eq. (<ref>), the result analogous to that of Eq. (<ref>) is given by lim_T → 0Δε_q,n≈ m_e^'' = (1+α x)m_e , where a correction due to the gravitational field emerges. A meaningful remark is that we can obtain a constraint for the gravitational field according to the following property: exp_q(x)/exp_q(y) = exp_q[x-y/1+(1-q)y]    (y≠1/q-1) . Hence, Eq. (<ref>) is constrained as μ_H_n-m_H_n^''≠T/q-1 , or α≠1/m_H_nx(μ_H_n-m_H_n-T/q-1) . This implies that the non-Gaussian effects on photoionization in Rindler space only hold if we take into account the constraint given by Eq. (<ref>). Now consider the ratio with α>0 and α=0: I(α,q) ≡R(α,q)/R(0,q) = (1+α x)^-3/2(e_q^-βϕ_q)^5-3q/2 , where I measures how the photoionization of hydrogen atoms is affected by the gravitational field. Furthermore, we define ϕ_q = Δε_q,n(α)-Δε_q,n(0)/1+(q-1)βΔε_q,n(0) . We have demonstrated in Eq. (<ref>) that Δε_q,n depends on α^2. Therefore, the term ϕ_q also has a quadratic dependency of the gravitational field. This suggests that the non-Gaussian photoionization of hydrogen atoms in Rindler space is more intensely suppressed in regions with a strong gravitational field. In the limit q→ 1, the linear dependency is recovered, i.e., I(α,1) = (1+α x)^-3/2exp(-βα xΔ m) . This result was also found in the work of De and Chakrabarty <cit.>. §.§ Electron-positron pair production In order to investigate the electron-positron pair production, consider the following reaction: γ + γ↔ e^- + e^+ . 
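Before imposing the equilibrium condition for this reaction, the photoionization results above can be illustrated numerically. The following minimal sketch (our own) evaluates the suppression ratio I(α, q) from the effective binding energy Δε_{q,n}; the masses, temperature, and α x values below are placeholders chosen only for illustration, and q → 1 reproduces the Boltzmann-Gibbs expression I(α, 1) = (1 + α x)^{-3/2} exp(-β α x Δ m).

# Minimal sketch (our own illustration, natural units): the photoionization
# suppression ratio I(alpha, q) built from the effective binding energy.
import math

def exp_q(x, q):
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def delta_eps(alpha_x, q, beta, m_e, m_Hp, m_Hn):
    delta_m = m_e + m_Hp - m_Hn  # hydrogen ionization potential
    first = (1.0 + alpha_x) * delta_m
    second = ((q - 1.0) * (1.0 + alpha_x) ** 2 * beta * m_e * m_Hp /
              (1.0 + (q - 1.0) * (1.0 + alpha_x) * beta * m_Hn))
    return first + second

def suppression_ratio(alpha_x, q, beta, m_e, m_Hp, m_Hn):
    """I(alpha, q) = R(alpha, q) / R(0, q)."""
    d_a = delta_eps(alpha_x, q, beta, m_e, m_Hp, m_Hn)
    d_0 = delta_eps(0.0, q, beta, m_e, m_Hp, m_Hn)
    phi_q = (d_a - d_0) / (1.0 + (q - 1.0) * beta * d_0)
    return (1.0 + alpha_x) ** -1.5 * exp_q(-beta * phi_q, q) ** ((5 - 3 * q) / 2)

# Placeholder values (Delta m = 0.5, T = 0.1, alpha*x = 0.01), for illustration only:
m_e, m_Hp, m_Hn = 1.0, 1836.0, 1836.5
beta = 1.0 / 0.1
for q in (1.0, 1.1):
    print(q, suppression_ratio(0.01, q, beta, m_e, m_Hp, m_Hn))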
Here, the chemical equilibrium reads as μ_e^-+μ_e^+=0, since μ_γ=0. In this situation, the particle number q-density is written as n_e^∓^q = g_eB_qn_Q_e{exp_q[β(μ_e^∓-m_e^∓^'')]} ^5-3q/2 . Hence, the product of electron-positron concentration is given by C(α,q) ≡ n_e^-^qn_e^+^q = g_e^2B_q^2n_Q_e^2[exp_q(ξ_q)]^5-3q/2 , where we set ξ_q = (1-q)β^2[ (1+α x)^2m_e^2-μ_e^2] -2β(1+α x)m_e , and we employ the equilibrium chemical condition. The ratio between the product of electron-positron concentrations with α>0 and α=0 can be written as follow: C(α,q)/C(0,q) = (1+α x)^-3(e_q^λ_q)^5-3q/2 , where we define λ_q = ξ_q(α)-ξ_q(0)/1+(1-q)ξ_q(0) . It is worth noting that Eq. (<ref>) also portrays a quadratic dependence of the gravitational field, as seen in the term ξ_q. Accordingly, the effect of α on pair production is analogous to that on the photoionization process. In other words, pair production is also more intensely suppressed in regions with a strong gravitational field. As an immediate mathematical consequence, using the property (<ref>), Eq. (<ref>) is constrained by imposing the following condition: μ_e^+≠[m_e - T/1-q(T/q-1 + 2m_e)]^1/2 . This is a constraint to the electron and positron chemical potentials, since μ_e^+=-μ_e^-. When q→ 1, Eq. (<ref>) takes the form C(α,1)/C(0,1) = (1+α x)^-3exp(-2βα xm_e) , as can one verifies in Ref. <cit.>. § CONCLUSIONS In this study, we have investigated the non-Gaussian effects of the Saha equation in Rindler space via Tsallis statistics. Our analysis has provided insight into the photoionization of hydrogen atoms and the electron-positron pair production at high temperatures in the presence of a strong gravitational field. We have shown that the effective binding energy exhibits a quadratic dependence on the gravitational field, in contrast to the linear dependence predicted by Boltzmann-Gibbs statistics. Our findings demonstrate that both photoionization and pair production are more intensely suppressed in regions with a strong gravitational field in a non-Gaussian context than in the Boltzmann-Gibbs framework. We have discussed the implications of the effective binding energy at low temperatures. Future investigations are warranted in order to better understand the implications of this theoretical prediction. Furthermore, we have derived constraints on the gravitational field and the electron and positron chemical potentials. Given that Tsallis statistics have been successfully used for over 30 years to describe long-range correlations and interactions, including astrophysical self-gravitating systems, non-Gaussian effects must be considered when studying the behavior of partially ionized hydrogen plasmas under strong gravitational fields. Our results have the potential to contribute to the description of extreme astrophysical scenarios. § ACKNOWLEDGMENTS The authors sincerely thank Carlos Alexandre Wuensche for his valuable comments and suggestions. LLS is thankful to the Brazilian agency CAPES for financial support. FCC was supported by CNPq/FAPERN/PRONEM. § REFERENCES
http://arxiv.org/abs/2307.05023v1
20230711055621
Best Arm Identification Based Beam Acquisition in Stationary and Abruptly Changing Environments
[ "Gourab Ghatak" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Best Arm Identification Based Beam Acquisition in Stationary and Abruptly Changing Environments Gourab Ghatak, Member, IEEE The author is with the Department of Electrical Engineering at the Indian Institute of Technology (IIT) Delhi, New Delhi, India 110016. Email: [email protected]. We study the initial beam acquisition problem in millimeter wave (mm-wave) networks from the perspective of best arm identification in multi-armed bandits (MABs). For the stationary environment, we propose a novel algorithm called concurrent beam exploration (CBE), in which multiple beams are grouped based on the beam indices and are simultaneously activated to detect the presence of the user. The best beam is then identified using a Hamming decoding strategy. For the case of orthogonal and highly directional thin beams, we characterize the performance of CBE in terms of the probability of missed detection and false alarm in a beam group (BG). Leveraging this, we derive the probability of beam selection error and prove that CBE outperforms the state-of-the-art strategies in this metric. Then, for abruptly changing environments, e.g., in the case of moving blockages, we characterize the performance of the classical sequential halving (SH) algorithm. In particular, we derive the conditions on the distribution of the change for which the beam selection error is exponentially bounded. In case the change is restricted to a subset of the beams, we devise a strategy called K-sequential halving and exhaustive search (K-SHES) that leads to an improved bound for the beam selection error as compared to SH. This policy is particularly useful when a near-optimal beam becomes optimal during the beam-selection procedure due to abruptly changing channel conditions. Finally, we demonstrate the efficacy of the proposed scheme by employing it in a tandem beam refinement and data transmission scheme. § INTRODUCTION §.§ Context and Background The mm-wave spectrum offers large bandwidths, enabling high data rates for future wireless applications <cit.>. However, it is highly susceptible to path loss and blockage due to the shorter wavelength <cit.>. To overcome such detrimental issues, mm-wave transceivers employ beamforming using large antenna arrays <cit.>. Consequently, in mm-wave communication systems, initial beam selection plays a crucial role in establishing a reliable and high-quality link between the BS and the UE <cit.> <cit.>. In the case of a beam-selection error or beam misalignment, the received signal quality at the UE deteriorates significantly, thereby rendering communication infeasible <cit.>. The number of beams per synchronization signal block (SSB) can vary depending on the specific deployment and configuration of the network <cit.>. The exact number of beams per SSB is determined by the network operator and can be adjusted based on factors such as coverage requirements, network capacity, and radio resource management strategies. 3GPP specifies that multiple beams can be formed and transmitted by the base station (gNB) to cover different areas or sectors <cit.>, which we also assume in the first part of our work. The number of beams can be dynamically configured and can vary from one gNB to another.
To perform initial beam acquisition in mm-wave systems, researchers have investigated several technologies such as beam sweeping <cit.>, channel estimation <cit.>, compressed sensing <cit.>, hybrid beamforming <cit.>, and machine-learning <cit.>. Beam sweeping involves transmitting signals using different beamforming directions over a predefined set of beams. The receiver then measures the received signal quality for each beam and reports it back to the transmitter. Based on this feedback, the transmitter selects the beam with the highest received signal strength or quality. An exhaustive search of the beam space is associated with high overheads and leads to high initial access delays. To overcome this, researchers have proposed compressed sensing methods, where the BS sends a compressed version of the beam codebook to the UE <cit.>. Hybrid beamforming is often employed to strike a balance between performance and complexity <cit.>. In this approach, initially, analog beamforming is performed at the transmitter using a limited number of radio frequency (RF) chains, which reduces the complexity. The receiver measures the signal quality for each analog beam, and the selected beam index is fed back to the transmitter. Then, digital beamforming is applied on the selected beam at the transmitter to further refine the beamforming gain. On the contrary, several approaches for initial access involve the estimation of the channel characteristics between the transmitter and receiver <cit.>. By exploiting the estimated channel information, such as arrival and departure angles, the transmitter can make informed decisions regarding initial beam selection. This fall under the larger umbrella of localization-assisted initial access <cit.>. Recently, machine learning algorithms have been utilized to learn and predict the optimal beam selection based on various channel and environment parameters <cit.>. In particular, by training models with large datasets, the transmitter can predict the best beamforming parameters for a given set of conditions, reducing the need for exhaustive beam search procedures <cit.>. The specific method chosen for initial beam selection depends on the system requirements, available resources, and implementation constraints. Beam training and selection are iterative processes, and continuous adaptation may be necessary to maintain an optimal link in dynamic mm-wave environments. However, the issue of a changing environment during the beam selection procedure is largely unaddressed. §.§ Related Work Authors in <cit.> proposed antenna architectures that generate a collection of well-defined, high-gain, orthogonal beams. Due to the orthogonal nature of each beam and their minimal coupling with other beams, these transmit beams possess inherent independence and exhibit relatively low correlation. In our work, we consider a similar beamforming scheme characterized by thin, highly directional beams which are independent of each other. Alkhateeb el al. <cit.> designed an initial beam association method based on beam sweeping and downlink control pilot reuse. Typically, hierarchical and multi-resolution codebooks result in reduced initial access delay. In this regard, Wang et al. <cit.> devised an efficient multi-resolution beam search technique that initiates with wide beams and progressively narrows them down until identifying the optimal beam. Nevertheless, the beam resolution requires adjustment at each stage. Wu et al. 
<cit.> presented a technique for rapid and precise beam alignment in multi-path channels within a point-to-point mm-wave system. The method capitalizes on the correlation structure among beams, extracting information from neighboring beams to identify the optimal beam efficiently, rather than searching through the entire beam space. There exists extensive research literature on beam selection strategies for 5G and beyond systems, an exhaustive discussion of which is out of scope of the current discussion. Instead, we enlist below the MAB based approaches for the beam selection problem and refer the reader to the work by Giordani <cit.> and the references therein for a thorough discussion on the other techniques. Recently, MAB frameworks have been employed to study the problem of efficient initial access. It is interesting to note that the two settings of the best arm identification problem in MAB - the fixed budget setting and the fixed confidence setting correspond to two beam acquisition requirements - fixed beam selection deadline, and fixed beam selection error. Hashemi et al. <cit.> have studied contextual bandits for beam alignment. Specifically, they address an online stochastic optimization scenario where the objective is to maximize the directivity gain of the beam alignment policy over a specific time frame. By leveraging the inherent correlation and unimodality properties of the model, the authors illustrate that the inclusion of contextual information enhances performance. The work by Va et al. <cit.> utilized a UCB-based framework to create an online learning algorithm for selecting and refining beam pairs. The algorithm initially learns coarse beam directions from a predefined beam codebook and subsequently refines the identified directions to align with the power angular spectrum's peak at that specific position. Hussain et al. <cit.> developed an innovative scheme for beam pair alignment utilizing Bayesian MAB. The primary objective of this scheme was to maximize both the alignment probability and the throughput of data communication. More recently, Wei et al. developed a bandit-based initial beam selection algorithm named two-phase heteroscedastic track-and-stop () <cit.>. The authors formulated the beam selection as a fixed-confidence pure exploration problem. The authors assumed a correlation structure among beams, considering that the information from nearby beams is similar. Additionally, the algorithm takes exploits the heteroscedastic property that the variance of the reward of an arm is related to its mean. groups all beams into several beam sets such that the optimal beam set is first selected and the optimal beam is identified in this set. §.§ Motivation In almost all of the research above, the authors did not provide any insight into the performance of their algorithms in a non-stationary environment. Our formulation also considers the heteroscedastic Gaussian distribution and for the stationary environment, we demonstrate that for highly directional thin beams, outperforms . Additionally, we investigate an algorithm tuned to a changing environment. §.§ Contributions and Organization The main contributions in this work are as follows. * For the stationary environment we propose and characterize a novel initial beam acquisition algorithm, concurrent beam-exploration (). The main innovation in is the formation of BG based on the beam indices, followed by concurrent multi-beam detection to identify the BG in which the UE is present. 
Then, the index of the best beam is decoded for service. * We prove that in the case of highly directional beams that are characterized by negligible side-lobe gains, the detection statistic reduces to a generalized Chi-square distributed random variable. We derive the probability of missed detection and the probability of false alarms for the BG. Leveraging this, we prove that reduces the probability of beam selection error as compared to the state-of-the-art hierarchical beam selection procedures. * For the case of intermittent blockages, i.e., when a previously sub-optimal beam becomes optimal during the beam selection procedure, the performance of deteriorates significantly. For this piece-wise stationary environment, we characterize the performance of the sequential halving () algorithm, which is popular for best arm identification in bandit environments. To the best of our knowledge, ours is the first work that rigorously characterizes the performance of in an abruptly-changing environment. We show that the upper bound of consists of an exponential term and a term dependent on the distribution of the location of the change. Accordingly, we derive conditions on the distribution of the change in order to guarantee an exponential bound for in the presence of a single change. * For the case when the change occurs in one of the best K arms, we propose a novel algorithm called K-sequential halving followed by exhaustive search () and demonstrate that it outperforms not only but also other state-of-the-art algorithms for initial beam acquisition. To the best of our knowledge, this is the first attempt at algorithm design for beam acquisition in a changing environment. We also highlight its limitations, specifically for the cases of early change. * Finally, as a case study to test the efficacy of , we employ it in a tandem beam refinement and data communication system. Based on that, we derive the system design rules for selecting an optimal beam dictionary size and the optimal fractional resources allotted to the beam refinement phase of the system. The rest of the paper is organized as follows. We introduce the system model and define the problem statement for both the stationary and the non-stationary case in Section <ref>. We focus on the stationary environment in Section <ref> and propose and characterize the algorithm. The abruptly changing environment is considered in Section <ref>. The heuristic hybrid policy is proposed in Section <ref>. Some numerical results and the case study are discussed in Section <ref>. Finally, the paper concludes in Section <ref>. § SYSTEM MODEL AND PROBLEM STATEMENT We consider the propagation environment with limited scattering (typical for mm-wave channels) and adopt the commonly-used geometric channel model <cit.>. Let us consider a ULA, however, it will shortly be apparent that the framework can be applied to a UPA since the analysis follows only from the beam directions. The beamforming codebook 𝒩 of size N is C ≜{ f_i = a(-1 + 2i/K) | i = 0, 1, …, N-1}, where a(·) denotes the array response vector. The structure of a(·) for ULA can be found in <cit.> and is skipped here for brevity. The received signal in case only f_i is activated is y_i = √(P) h_i^H f_i + n, where h is the channel vector. Thus, the received power is R_i = P| h_i^H f_i|^2 + |n|^2 + 2√(P)ℝ( h_i^H f_i n^1), where ℝ(·) denotes the real part of the argument. 
Since the noise power is negligible, the received power is Gaussian distributed with mean μ_i = P| h_i^H f_i|^2 and variance σ_i^2 = 2P| h_i^H f_i|^2σ^2 = 2σ^2 μ_i <cit.>. In what follows, we formulate two problem statements P_1 and P_2, for the stationary and the abruptly-changing case respectively. In addition, for the stationary environment, we make and additional assumption that the best beam, i.e., the beam in which the user is aligned has a gain G, while all the other beams, i.e., the ones not aligned towards the user have a gain g. Note that this assumption is only for the stationary environment, while for the abruptly changing environment, the model is more general as described later. §.§ Stationary Environment Problem The problem of the best beam identification is the same as selecting the beam with the highest μ_i within a beam selection deadline T. P_1: Find _i μ_i, st R(t) ∼𝒩(μ_i, σ_i), ∀ i, ∀ t ∈ [T] within T. The critical challenge is the fact that the beam with the highest μ_i is also the same with the highest σ_i and accordingly, a higher number of samples is needed to estimate μ_i. At the end of T, let the selected beam by an algorithm/policy 𝒵 be f_𝒵. Then the probability of beam selection error is given as 𝒫^𝒵_ e = ℙ(f_𝒵≠max{ f_i}), where the subscript e∈{ NC, C} stands for either a stationary (NC: no-change) or non-stationary (C: changing) environment. The typical benchmark used for comparing proposed beam selection algorithms is the exhaustive search <cit.>, where each beam is activated sequentially and based on multiple measurements for each beam, the best beam is selected. Other popular algorithms with which we compare our results are hierarchical search <cit.> for the stationary and non-stationary case and a variant of track and stop <cit.> for the non-stationary case. First, let us note that the beam selection error for the exhaustive search approach is obvious from the Chernoff bound <cit.> as follows. For single beam activation, with T_i∈ℕ observations, the estimate μ̂_i of μ_i is ϵ close to μ_i is given by ℙ(|μ̂_i - μ_i| ≥ϵ) ≤exp(- T_i ϵ^2/4σ_i^2 μ_i). In the case of fixed-budget exploration, equal temporal resources are allotted to each beam, i.e., T_i = T/N. Thus, in case of an exhaustive search in a stationary environment, the probability of beam-selection error is upper bounded by 𝒫^ ES_ NC(T) ≤∑_i ≠ 1ℙ(|μ̂_i - μ̂_1| ≥ϵ) ≤ Nexp(- T Δ^2_min/8Nσ^2 μ_max). In the first part of the paper, i.e., in Section <ref>, we propose a grouped exploration strategy that outperforms not only this benchmark but also the popular hierarchical search algorithm for highly directional beams. §.§ Abruptly Changing Environment Problem In the second part of this paper, i.e., in Section <ref> we consider the scenario in which the environment changes abruptly. At a time slot t within the beam-selection deadline T, the mean of a sub-optimal beam f_j changes from μ_j^- to μ_j^+ so that it becomes optimal for all time slots beyond t. This is typical in cases where the optimal beam is blocked during the initial parts of the beam selection process and the blockage shifts during the beam selection process. The challenge here is to still identify the best beam at the beam selection deadline T as described below. P_2: _i μ_i(T), st R(t) ∼𝒩(μ_i(t), σ_i(t)), ∀ i, ∀ t ∈ [T] μ_i(t) = μ_i, ∀ i≠ j, ∀ t, μ_j(t) = μ_j^-, 0 ≤ t ≤ t_ c, μ_j(t) = μ_j^+, t_ c < t ≤ T, within T. Note that the parameters for all other beams except f_j remain constant. 
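To make the reward model and the exhaustive-search benchmark concrete, the following minimal simulation sketch (our own; all numerical values are placeholders) draws the received power of beam f_i as a Gaussian with mean μ_i and variance 2σ²μ_i, splits the budget T equally across the N beams, and reports the beam with the largest sample mean.

# Minimal simulation sketch (our own, placeholder values) of the reward model
# of P_1 and of the exhaustive-search baseline described above.
import numpy as np

rng = np.random.default_rng(0)

def exhaustive_search(mu, sigma2, T):
    N = len(mu)
    slots = T // N                       # equal measurement budget per beam
    est = np.empty(N)
    for i in range(N):
        samples = rng.normal(mu[i], np.sqrt(2.0 * sigma2 * mu[i]), size=slots)
        est[i] = samples.mean()
    return int(np.argmax(est))

# Placeholder means: one aligned beam with gain G, the rest with side-lobe gain g.
N, G, g = 16, 1.0, 0.05
mu = np.full(N, g)
mu[3] = G                                # beam f_3 is the (unknown) best beam
errors = sum(exhaustive_search(mu, sigma2=0.5, T=1600) != 3 for _ in range(200))
print("empirical beam-selection error:", errors / 200)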
§ INDEXED EXPLORATION FOR STATIONARY ENVIRONMENT Let us first analyze the stationary environment. The first step for CBE is to form the BGs as discussed below. §.§ Beam Grouping The i-th beam f_i is added to the BG B_k, k = 1, 2, …, d, if and only if the binary representation of i has a "1" in the k-th binary place. In other words, f_i is added to B_k if the bit-wise AND of the binary representation of i and a mask that consists of all zeros except a 1 at the k-th binary position is non-zero. This strategy for creating super-arms is inspired by the classical forward error correcting strategy due to Hamming, which enables detection and correction of single-bit errors <cit.>. Such a grouping strategy was explored in <cit.> for fast detection of changes in a classical bandit environment. However, here we leverage the same for quick identification of the best beam. Example: Let us elaborate this further with an illustrative example by considering N = 16. Fig. <ref> shows the following grouping for the beams - i) B_1 - Beams with '1' in the first binary place - f_1, f_3, f_5, f_7, f_9, f_11, f_13, and f_15, ii) B_2 - Beams with '1' in the second binary place - f_2, f_3, f_6, f_7, f_10, f_11, f_14, and f_15, iii) B_3 - Beams with '1' in the third binary place - f_4, f_5, f_6, f_7, f_12, f_13, f_14, and f_15, and iv) B_4 - Beams with '1' in the fourth binary place - f_8, f_9, f_10, f_11, f_12, f_13, f_14, and f_15. §.§ Rewards In case the BG B_k is employed to measure the downlink power, the received power is given as P_B_k = ∑_ f_i ∈ B_k2P_ t/N | h^H f_i|^2 + |n|^2 + ℝ(2∑_ f_i ∈ B_k√(P_ t/(N/2)) h^H f_i n^1 + 2∑_ f_i ∈ B_k∑_ f_j ∈ B_k, f_j ≠ f_iP_ t/(N/2) f_i^H h h^H f_j). Similar to <cit.>, we assume that the noise variance is much smaller than the transmit power. Additionally, since we assume highly directional beams <cit.>, we neglect the contribution of f_i^H h h^H f_j, as either f_i^H h or f_j^H h is low for i ≠ j. Accordingly, the variable P_B_k is approximately a Gaussian random variable with mean μ_B_k = ∑_ f_i ∈ B_kP_ t/(N/2) | h^H f_i|^2 = ∑_ f_i ∈ B_k2μ_i P_t/N and variance σ^2_B_k = 2∑_ f_i ∈ B_kP_ t/(N/2) | h^H f_i|^2 σ^2 = ∑_ f_i ∈ B_k2μ_i P_t σ^2/ N. §.§ Beam Selection Strategy The beam selection strategy for CBE is summarized in Algorithm <ref>. We divide the total initial access time into log N rounds and allot each round to one BG[All logarithms in this paper unless otherwise stated have a base 2.]. Then, all the beams of a BG are activated to detect the presence of the user in that particular BG. Assume that 1(B_k) indicates the presence of the user in BG B_k. Then, the user detection is based on the classical likelihood ratio test. Note that the conditional PDFs of R_B_k, given that the user is respectively present and absent in the BG B_k, are f_ R_B_k|1(B_k)( y| 1 ) = ∏_j = 1^T_B_k1/(σ_1√(2π))exp[ -(y_j - μ_1)^2/2σ_1^2], f_ R_B_k|1(B_k)( y| 0 ) = ∏_j = 1^T_B_k1/(σ_0√(2π))exp[ -(y_j - μ_0)^2/2σ_0^2]. Here, μ_0 = g, μ_1 = (2/N)((N/2 - 1)g + G), σ^2_0 = 2gσ^2, σ^2_1 = (4σ^2/N)((N/2 - 1)g + G). Accordingly, the LLR is evaluated as LLR( R_B_k) = σ_0/σ_1 + [∑_j = 1^T_B_k -(y_j - μ_1)^2/2σ_1^2 + (y_j - μ_0)^2/2σ_0^2] = σ_0/σ_1 + [∑_j = 1^T_B_k1/σ_0^2 σ_1^2( y_j^2(σ_1^2 - σ_0^2) + 2y_j(μ_1σ_0^2 - μ_0 σ_1^2) + (σ_1^2μ_0^2 - μ_1^2σ_0^2) )], where, interestingly, substituting (<ref>), we get μ_1σ_0^2 - μ_0 σ_1^2 = 0.
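Before simplifying the LLR further, the grouping rule above can be summarized in a short sketch (our own illustration; the decoding step anticipates the beam-identification rule given in the next subsection): beam f_i joins B_k exactly when the k-th bit of i is 1, so the set of BGs in which the user is detected spells out the index of the best beam.

# Minimal sketch (our own) of the beam-grouping rule and of the index decoding.
def beam_groups(N):
    d = N.bit_length() - 1               # number of BGs, log2(N) for N a power of two
    return {k: [i for i in range(N) if (i >> (k - 1)) & 1] for k in range(1, d + 1)}

def decode_beam(detections):
    """detections[k] = True if the user was detected in BG B_k."""
    return sum(1 << (k - 1) for k, hit in detections.items() if hit)

groups = beam_groups(16)
print(groups[1])                                              # [1, 3, 5, 7, 9, 11, 13, 15]
print(decode_beam({1: True, 2: True, 3: True, 4: True}))      # 15 (detected in all BGs)
print(decode_beam({1: False, 2: False, 3: False, 4: False}))  # 0 (detected in no BG)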
Thus, LLR( R_B_k) = σ_0/σ_1 + T_B_k(μ_0^2/σ_0^2 - μ_1^2/σ_1^2) +∑_j = 1^T_B_k y_j^2 σ_1^2 - σ_0^2/σ_1^2 σ_0^2 = √(Ng/2g') - T_B_kG - g/Nσ^2 + 2/Ng' - g/4g/Nσ^2g'∑_j = 1^T_B_k y_j^2, where g' = (N/2 - 1)g + G . Conveniently, our test statistic and the decision rule for BG B_k is R_B_k^2 = ∑_j = 1^T_B_k y_j^2 user not detecteduser detected⋛γ, where γ = 4gg'σ^2/2/Ng' - g[1 - √(Ng/2g') - T_B_k(g - G/Nσ^2)]. Finally, based on the detection of the user in different BG, the best beam is identified as the one that belongs to all the BG in which the user is detected. For this let us define a new sequence of sets as C_k = B_k; If the user is detected in B_k, B_k^ C; If the user is not detected in B_k. Then, the optimal beam is identified as f_j, where f_j = ⋂_k = 1^log N C_k. In case the user is not detected in any of the BG, the optimal beam is identified as f_0. Let us recall the illustration in Fig. <ref>. Corresponding to this case of 16 beams and 4 BG, Table <ref> exhaustively enlists the cases of beam identification. As an example, if the user is detected in B_1 but not in any other BG, then the beam f_0 is selected for it. Similarly, if the user is detected in all the BG, then the beam f_15 is identified as the best beam. §.§ Characterization of the Test Statistic We note that for the BG B_k, R_B_k^2/σ_l^2 has a non-central Chi-squared distribution with T_B_k degrees of freedom and a non-centrality parameter T_B_kμ_l/σ_l^2, where l ∈{0, 1}, respectively denoting the presence and the absence of the user in the BG B_k. Mathematically, if y_i ∼𝒩(μ_l, σ_l^2), we have R_B_k^2/σ_l^2∼χ_ NC^2(T_B_k, T_B_kμ_l^2/σ_l^2), where χ_NC^2(a,b) is the non-central Chi-squared distribution with a degrees of freedom and non-centrality parameter b. Accordingly, the conditional CDF of R_B_k^2/σ_l^2 is F_ R_B_k^2/σ_l^2|1(B_k) = l(x) = ℙ( R_B_k^2/σ_l^2≤ x |1(B_k) = l) = 1 - 𝒬_T_B_k/2(√(T_B_k)μ_l/σ_l, √(x)), where 𝒬_T_B_k/2(√(T_B_k)μ_l/σ_l, √(x)) = 1/(√(T_B_k)μ_l/σ_l)^T_B_k/2 - 1∫_√(x)^∞ x^T_B_k/2· exp(- x^2 + (√(T_B_k)μ_l/σ_l)^2/2) ℐ_T_B_k/2 - 1(√(T_B_k)μ_l/σ_l x) dx, is the Marcum Q-function <cit.> and ℐ_ν(·) is the modified Bessel function of first kind of order ν <cit.>. §.§ Probability of Missed Detection Missed detection occurs when R_B_k^2 = ∑_j = 1^T_B_k y_j^2 ≤γ, given that the user is present in the BG B_k. The probability of missed detection is evaluated as p_ m = ℙ( R_B_k^2 ≤γ|1(B_k) = 1 ) =ℙ( R_B_k^2/σ_1^2≤γ/σ_1^2|1(B_k) = 1 ) = 1 - 𝒬_T_B_k/2(√(T_B_k)μ_1/σ_1, √(γ)/σ_1). Next, consider the arguments of the Marcum Q-function above as a_1 = √(T_B_k)μ_1/σ_1 and b_1 = √(γ)/σ_1^2, respectively. Thus, we have a_1^2 = g'T_B_k/Nσ^2, b_1^2 = 4gg'σ^2/2/Ng' - g[1 - √(Ng/2g') - T_B_k(g - G/Nσ^2)]/4σ^2g'/N = Ng/2/Ng' -g[1 - Ng/2g' + T_B_kG -g/σ^2 N]. Clearly, b_1^2 < T_B_k/2(a_1^2 + 2) and hence, for 0 < λ < 1/2 we can derive a Chernoff-type bound for the probability of missed detection as p_ m≤(1 - 2λ)^-T_B_k/2exp(-λ b_1^2 + λ T_B_k a_1^2/2 (1 - 2λ)). The detailed steps to derive the above bound can be found in <cit.> and is being skipped here for brevity. The optimal value for the Chernoff parameter λ is found by differentiation as λ_1 = 1/2(1 - T_B_k/2b_1^2 - T_B_k/2b_1^2√(1 - 2a_1^2b_1^2/T_B_k)). However, in such cases, p_ m is trivially upper-bounded by 1. In order to derive a more meaningful bound, we note that lim_g → 0 a_1^2 = GT_B_k/2σ^2 N, lim_g → 0 b_1^2 = 0. Following this observation, we derive the following bound on the probability of missed detection. 
For some C_1≥ 0, p_ m≤ C_1exp(-GT/(2Nσ^2log N)), where lim_g → 0 C_1 = 0 ∀ T. Please see Appendix <ref>. §.§ Probability of False Alarm False alarm is raised when R_B_k^2 = ∑_j = 1^T_B_k y_j^2 > γ, given that the user is not present in the BG B_k. The probability of false alarm is evaluated as p_ f = ℙ( R_B_k^2 > γ|1(B_k) = 0 ) = ℙ( R_B_k^2/σ_0^2 > γ/σ_0^2|1(B_k) = 0 ) = 𝒬_T_B_k/2(√(T_B_k)μ_0/σ_0, √(γ)/σ_0). Letting a_0 = √(T_B_k)μ_0/σ_0 and b_0 = √(γ)/σ_0^2, respectively, we have a_0^2 = T_B_kg/2σ^2, b_0^2 = 2g'/((2/N)g' - g)[1 - √(Ng/2g') + T_B_k(G- g)/σ^2 N]. Thus, we have lim_g → 0 b_0^2 = N + GT/(σ^2 N log N), lim_g → 0 a_0^2 = 0. Following this observation, we derive the following bound on the probability of false alarm. For some C_2≥ 0, p_ f≤ C_2exp(-GT/(σ^2 Nlog N)), where lim_g → 0 C_2 = 0, ∀ T. Please see Appendix <ref>. Interestingly, for p_ f, the upper bound is tighter than the one for p_ m due to the extra exp(-N) term in the former. Now we are in a position to state the main result for CBE. With CBE, the probability of beam-selection error is upper bounded by 𝒫^ CBE_ NC(T) ≤ L_1 log N exp(-GT/(2Nσ^2log N)), where L_1 = max{C_1, C_2}. The proof follows from Lemma <ref> and Lemma <ref>. Note that each beam f_i belongs to v_i BGs, where v_i ∈{1, 2, …, log N}. As an example, for N = 16, f_1 belongs to only {B_1} and hence v_1 = 1, while f_15 belongs to {B_1, B_2, B_3, B_4} and hence v_15 = 4. The number of BGs a beam f_i belongs to depends on the number of '1's in the binary representation of i. Accordingly, for the event that the user is present in beam f_i, for each i the probability of beam selection error is upper bounded by 𝒫^ CBE_ NC(T) ≤ v_i p_ m + (log N - v_i) p_ f ≤log N max{p_ m, p_ f} ≤log N max{C_1, C_2}exp(-GT/(2Nσ^2log N)). §.§ Comparison with Other Bounds The bound derived in the work by Karnin et al. <cit.> for the probability of best arm selection error is given as 3 log N exp(- T/(8H_2 log N)), where H_2 := max_i ≠ 1 i/Δ_i^2, which in our case is N/(G - g), and hence the bound is 3 log N exp(- T(G-g)/(8N log N)). We note that, similar to hierarchical search, CBE also discards half the possible beams from the codebook at each stage. Due to the result that lim_g → 0 L_1 = 0, the bound derived by us for this case of thin and highly directional orthogonal beams is a much tighter one as compared to <cit.>. Of course, the additional assumption is that we allow for multi-beam concurrent transmission. This is confirmed in Fig. <ref>, where we plot the bounds for the hierarchical search and for CBE with respect to the distance of the UE from the BS. For comparison, we also plot the actual error evaluated using extensive Monte-Carlo simulations. § BEST BEAM SELECTION IN AN ABRUPTLY CHANGING ENVIRONMENT §.§ The SH Algorithm The SH algorithm is a popular approach for identifying the best arm in MAB problems. It evolves as a sequence of rounds. The total number of rounds is log N. In our context, in each round, each beam is allocated an equal number of measurement time slots for transmission. Within each round, the reward of each beam is evaluated based on the allocated slots, i.e., the sum of the received power is measured. Then, the top half of the beams (i.e., those with the highest observed rewards) are identified and the remaining beams are eliminated. In the next round, the framework allocates an equal number of slots to each of the surviving beams. These steps are repeated until only one beam remains, which is considered the best beam based on the observed rewards.
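The round structure described above translates directly into a few lines of code. The following is a minimal sketch (our own, with placeholder parameters) of SH for the heteroscedastic Gaussian beam rewards of P_1; the budget of T/(|S_r| log N) slots per surviving beam per round follows the description above.

# Minimal sketch (our own, placeholder values) of sequential halving for the
# Gaussian beam rewards: log2(N) rounds, equal per-beam budget per round, and
# elimination of the lower half of the surviving beams after every round.
import math
import numpy as np

rng = np.random.default_rng(1)

def sequential_halving(mu, sigma2, T):
    N = len(mu)
    survivors = list(range(N))
    rounds = int(math.log2(N))
    for _ in range(rounds):
        n_r = max(1, T // (len(survivors) * rounds))   # slots per beam this round
        est = {i: rng.normal(mu[i], math.sqrt(2 * sigma2 * mu[i]), n_r).mean()
               for i in survivors}
        survivors = sorted(survivors, key=lambda i: est[i], reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]
    return survivors[0]

N, G, g = 16, 1.0, 0.05
mu = np.full(N, g)
mu[5] = G                                              # beam f_5 is the best beam
print(sequential_halving(mu, sigma2=0.5, T=1600))      # ideally returns 5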
Thus, balances exploration and exploitation by gradually eliminating weaker beams and reallocating samples to the stronger beams. By allocating more measurement slots to beams with potentially higher rewards, it focuses exploration on the most promising options. §.§ with a Single Abrupt Change For this analysis, we make a minor change in the algorithm as compared to <cit.> - in each episode, instead of consecutive sampling from the same beam, we sample the beams in a round-robin manner. Naturally, this increases the possibility of sampling the beams post a change event. Let us consider the sample mean of the i-th beam at the end of the r-th round in case it does not experience a change - μ̂_i(r) = 1/n_r∑_t= 1^n_r R_i(∑_v =1^r-1n_v + (i - |S_r|) + |S_r| t), where n_r = 2^r-1T/Nlog N is the number of times each beam is sampled in the round r. We simplify the time indices since the reward values for beam f_i are i.i.d. as R_i in case of no changes. Thus the estimate of the mean of the beam f_i at the end of episode r is given as μ̂_i(r) = 1/n_r∑_k= 1^n_r R_i(k) = 1/n_r R_i(r)^2. Let the change occur in the reward distribution of the arm j in round r_c and consider that the reward values for arm j are i.i.d. as R_j^- ∼𝒩(μ_j^-, σ_j^-) before t_c and as R_j^+ ∼𝒩(μ_j^+, σ_j^+) after t_c. Consider the case that the change results in the arm j being the best arm for t > t_c, i.e., μ_i(t) = j for t > t_c. If j ∈ S_r_ c, its mean estimate is μ̂_j(r_c) = 1/n_r_c[∑_l = 1^n' R_j^-(l) + ∑_m = 1^n_r_c-n' R_j^+(m)], where n' is the slot in round r_ c after which the change occurs. Let the conditional CDF that the change occurs in any slot n given r_ c be given by F_n'(n | n_r_ c). §.§ Analysis for Round r_ c Recall that S_r_ c = N/2^r_ c - 1 beams enter the round r_ c and each beam is played n_r_ c = T/|S_r_ c|log N times. The probability, p_i,j(r_ c) that the arm j has a lower empirical mean than the arm i ≠ j after round r_c is calculated in the following lemma. Given that the beams f_i and f_j survive until the round r_ c in which f_j undergoes a change, the probability that the estimate of the mean of f_j is lower than that of f_i after round r_ c is p_i,j(r_ c) ≤ 1 - F_n'(n_i^* | n_r_ c) (1 - exp(- Δ^2_minT/2Nlog Nσ_max^2)), where n_i^* = -n_r_ cΔ_i,j^+/Δ_ c and F_n'(·| n_r_ c) is the conditional CDF of the change time slot given that the change occurs in the round r_ c. Additionally, σ^2_max = 2σ^2 μ_max. Please see Appendix <ref>. We note that the bound derived above has two parts - 1 - F_n_i'(n_i^* | n_r_ c) and F_n'(n^* | n_r_ c)exp(- Δ^2_minT/2Nlog Nσ_max^2). Since we are interested in the event that f_j survives the round, the sum of these terms needs to be less than or equal to 1 for the bound to be meaningful. Based on the difference of the means between the arms f_i and f_j before and after the change, the following four cases arise. * Δ_i,j^+ > 0 and Δ_ c > 0, i.e., the beam f_i is always superior to the beam f_j. In this case, p_i,j(r_ c) is trivially upper bounded by 1. * Δ_i,j^+ < 0 < Δ_ c and |Δ_i,j^+| > |Δ_ c|, i.e., the beam f_i is always inferior to the beam f_j. In this case, F_n'(n_i^* | n_r_ c) = 1 and thus, p_i,j(r_ c) is exponentially bounded. * Δ_i,j^+ < 0 < Δ_ c and |Δ_i,j^+| < |Δ_ c|, i.e., the beam f_i is superior to f_j before the change and it becomes inferior to the beam f_j after the change. Here for a change at slot n' ≤ n_i^*, p_i,j(r_ c) is exponentially bound, while for n' > n_i^* it is trivially bounded by 1. 
Hence, in this case, the earlier the change, the higher the change that f_j survives with respect to f_i. * Δ_ c< 0 < Δ_i,j^+, i.e., f_i is inferior to f_j before the change and it becomes superior to f_j after the change. Contrary to the previous case, here, for a change at slot n' ≤ n_i^*, p_i,j(r_ c) is bounded by 1, while for n' > n_i^* it is exponentially bounded. Hence, in this case, the later the change, the higher the chance that f_j survives with respect to f_i. Out of the above, only cases 2 and 3 are of interest to us since we assume that after the change f_j becomes the best beam. Let the change occur in the K-th best beam. Then, the following result bound its probability of elimination in the round r_ c. The probability that the K-th arm is eliminated in round r_ c is upper bounded by p_K(r_ c) ≤ 2[1 - F_n'(n_max) (1 - exp(- Δ_min^2/2σ_max^2))], where n_max = n_r_ cΔ_min/Δ_ c. Let N_r_ c denote the number of arms that have a higher estimated mean than the K-th arm in the round r_ c. Then, 𝔼[N_r_ c] = ∑_ f_i ∈𝒮_r_ cℙ(μ̂_i(r_ c) > μ̂_K(r_ c)) ≤∑_ f_i ∈𝒮_r_ c 1 - F_n'(n_i^* | n_r_ c) (1 - exp(- Δ_min^2/2σ_max^2)) ≤ |𝒮_r_ c| [1 - F_n'(n_max) (1 - exp(- Δ_min^2/2σ_max^2))]. Now, from Markov's inequality, we have ℙ(N_r_ c≥|𝒮_r_ c|/2) ≤2𝔼[N_r_ c]/|𝒮_r_ c|. Substituting (<ref>) in the above completes the proof. In case the exact change slot is uniformly distributed in the round r_ c, then we have F_n'(n_max) = n_max/n_r_ c = Δ_min/Δ_ C. Accordingly, p_K(r_ c) is upper bounded as p_K(r_c) ≤ 2(1 - Δ_min/Δ_ c(1 - exp(- Δ_min^2/2σ_max^2))). For a given value of r_ c (equivalently n_r_ c) the exact location of the change is governed by its distribution. In this work we do not make any assumptions on the same, and hence, a beta distribution is appropriate to model its location <cit.>. First we note from Fig. <ref> that higher the magnitude of change Δ_ min, the lower will be the bound on p_K(r_ c). More importantly, in case the changes occur earlier in the change round, i.e., the beta distribution is skewed to the left, the probability of elimination of f_K is limited. In particular, we have the following important result. For r_ c≤ r^*, p_K(r_c) ≤ 2Kexp(- Δ_min^2/2σ_max^2). This follows from Lemma <ref> by recognizing that for all beams f_i which are inferior to f_K, we have Δ_i,K^+ < 0 < Δ_ c and |Δ_i,K^+| > |Δ_ c|. Hence, F_n'(n_i^*) = 1 ∀ i > K. Now, for f_K to be eliminated, it has to be in the bottom half of the estimated beams in the r_ c-th round, at least |𝒮_r_ c|/2 - K inferior beams should have a higher estimate than f_K. Recall that the number of beams in the r_ c-th round which are inferior to f_K is |𝒮_r_ c| - K. Hence, ℙ(N_r_ c≥|𝒮_r_ c|/2| r_ c≤ r^*) ≤|𝒮_r_ c| - K/|𝒮_r_ c|/2 - Kexp(- Δ_min^2/2σ_max^2) ≤ 2Kexp(- Δ_min^2/2σ_max^2). Next, we characterize the probability of eliminating the K-th arm (1 ≤ K ≤ N) in two distinct segments. Early change - r_ c≤ r^* Conditioned on the change occurring within the first r^* = logN/2K + 1 rounds, the probability that the best arm is eliminated is upper bounded as 𝒫^ SH_ C(T | r_ c≤ r^* ) ≤ 2 (log N + K - 1)exp(-1/2Δ^2_minT/Nlog N ). Please see Appendix <ref>. Late change - r_ c > logN/2K If the change after the first logN/2K rounds, the probability that the best arm is eliminated is upper bounded as 𝒫^ SH_ C(T | r_ c > r^* ) ≤ 𝒯_1(r_ c) + 2log 2NK exp(-1/2Δ_min^2 T/N log N). where 𝒯_1(r_ c) = 𝔼[r_ c - r^* | r^* ≤ r_ c≤log N]. 
Please see Appendix <ref>. Thus, in case of a late change, the bound has an exponential term and a term that depends on the distribution of the change slot location. Thus, in case a late change occurs in any round, SH does not achieve an exponential upper bound. In case of a single abrupt change in the mean of f_K at time 0 ≤ t_ c≤ T, the bound on the beam selection error is given by 𝒫^ SH_ C = 𝒫^ SH_ C(T | r_ c≤ r^* ) ℙ( r_ c≤ r^*) + 𝒫^ SH_ C(T | r_ c > r^* )ℙ( r_ c > r^*) ≤𝒯_1(r_ c) + 2 (2log N + K - 1)exp(-1/2Δ^2_minT/Nlog N ). [No Change] In case of no change, the performance of SH is exponentially bounded as log N exp(-1/2Δ^2_minT/Nlog N ), which is of the form given in <cit.>. § HYBRID POLICY FOR KNOWN K Next, consider the case when the change is restricted to the top K beams of the system. This is typical for cases when the optimal beam is blocked initially. The beam-selection procedure recognizes an adjacent beam to the optimal beam as the best one for service initially. This is mainly due to the correlation among the beams, directional transmissions, and limited multipath in mm-wave. However, in case the optimal beam abruptly transitions into a line-of-sight state during the beam-selection procedure, the algorithm must adapt and report only the optimal beam. In this regard, we propose K-SHES, which exploits the knowledge of K to tune the SH procedure appropriately. The steps of K-SHES are presented in Algorithm <ref>. For a given value of K, we calculate r^* = log(N/2K). Until the round r^*, K-SHES employs the classical SH algorithm, i.e., until 2K arms are left. Once 2K beams are left, the algorithm does not further eliminate beams. After r^*, the remaining 2K beams are sampled in a round-robin manner, and the best beam is determined at T based on the received power in the slots after r^*. The beam selection error for the algorithm is given by 𝒫^ K-SHES_C(T) ≤𝒯_ K-SHESexp(-1/2Δ^2_ minT/2Nlog N σ^2_max) + ∑_i = 1^K-1 1 - F_t_ c (t_i | r^*)(1 - exp(-1/2Δ^2_ min/Nlog N σ^2_ max)), where 𝒯_K-SHES = logN^2/2K + K (2log (2K) + 1). If the change occurs in the first T[log(N/2K)/log N(1 - 1/2K) + 1/2K] time slots with probability 1, then the beam selection error is exponentially bounded as 𝒫^ K-SHES_C(T) ≤ 2(2log N + 2K - 1) ·exp(-1/2Δ^2_ minT/2Nlog N σ^2_max). Similar to the SH case, the upper bound on the error with K-SHES can be derived as a sum for the early and the late change cases. For both the early change and the late change cases, the analysis remains the same until the round r^*. Beyond r^*, due to a single round, the probability of beam-selection error is given by the union bound over the remaining 2K arms similar to the exhaustive search case, ℙ(ℰ_K([r^*,log N]) | r_ C≤ r^*) ≤ 2Kexp(-1/2Δ^2_ min/Nlog N σ^2_ max), while for the late change case, the analysis follows similarly to Lemma <ref>, ℙ(ℰ_K([r^*,log N]) | r_ C > r^*) ≤∑_i = 1^K-1 1 - F_t_ c (t_i | r^*) (1 - exp(-1/2Δ^2_ min/Nlog N σ^2_ max)). For the late change case, if the change occurs early enough so as to safeguard against the elimination of the best arm prior to the change, K-SHES results in an exponential bound. § NUMERICAL RESULTS AND DISCUSSION §.§ Performance Comparison Fig. <ref> shows that K-SHES outperforms SH and the exhaustive search algorithms. However, due to the absence of elimination beyond r^* in the case of K-SHES, it suffers from a higher probability of error as compared to SH in case the changes occur early. In case of an early change, SH and K-SHES perform equally until r^*. However, beyond r^*, due to no further changes, SH performs better by sequentially eliminating suboptimal beams.
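Before turning to the communication-sensing trade-off, the K-SHES procedure analyzed above can be summarized in the following minimal sketch (our own; the budget split between the SH phase and the round-robin phase, the sampling interface, and all numerical values are assumptions made only for illustration). An abrupt change is injected through the time index to mimic a near-optimal beam becoming optimal during beam selection.

# Minimal sketch (our own, placeholder values) of K-SHES: sequential-halving
# rounds until 2K beams survive, then round-robin sampling of the survivors for
# the remaining budget, reporting the beam with the largest post-r* received power.
import math
import numpy as np

rng = np.random.default_rng(2)

def k_shes(sample, N, K, T):
    """`sample(i, t)` returns one received-power sample of beam i at slot t,
    so an abrupt change can be injected through the time index."""
    rounds = int(math.log2(N))
    r_star = int(math.log2(N / (2 * K)))          # rounds until 2K beams remain
    survivors, t = list(range(N)), 0
    budget_sh = T * r_star // rounds              # share used by the SH phase (assumption)
    for _ in range(r_star):
        n_r = max(1, budget_sh // (len(survivors) * r_star))
        est = {}
        for i in survivors:
            vals = []
            for _ in range(n_r):
                vals.append(sample(i, t))
                t += 1
            est[i] = np.mean(vals)
        survivors = sorted(survivors, key=lambda i: est[i], reverse=True)[: len(survivors) // 2]
    totals = {i: 0.0 for i in survivors}          # exhaustive round-robin phase
    while t < T:
        for i in survivors:
            if t >= T:
                break
            totals[i] += sample(i, t)
            t += 1
    return max(totals, key=totals.get)

N, K, T = 16, 2, 3200
G, g, sigma2 = 1.0, 0.05, 0.5

def sample(i, t):
    if i == 3:
        mu = G                                    # initially the best beam
    elif i == 7:
        mu = 0.7 * G if t <= T // 2 else 1.5 * G  # near-optimal beam becomes optimal mid-way
    else:
        mu = g
    return rng.normal(mu, math.sqrt(2 * sigma2 * mu))

print(k_shes(sample, N, K, T))                    # ideally returns 7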
§ NUMERICAL RESULTS AND DISCUSSION §.§ Performance Comparison Fig. <ref> shows that K-SHES outperforms SH and the exhaustive search algorithms. However, since K-SHES performs no elimination beyond r^*, it suffers from a higher probability of error as compared to SH in case the changes occur early. In case of an early change, SH and K-SHES perform equally well until r^*. Beyond r^*, however, as no further changes occur, SH performs better because it keeps sequentially eliminating suboptimal beams, whereas K-SHES, which does not eliminate beams beyond r^*, incurs a higher beam-selection error. This is elaborated in Fig. <ref>. §.§ Communication-Sensing Trade-off Next, let us study the efficacy of K-SHES from the perspective of a wireless communication system and the trade-offs arising from the same. Let the communication scheme be partitioned into beam refinement and downlink data transmission phases, as shown in Fig. <ref>. The beam-alignment phase of duration T is mapped to the K-SHES procedure developed in this paper. The data transmission phase is of duration T_D. For a larger N, each beam can be made highly directional, which leads to a higher radiated power. However, a larger N results in a higher beam selection error and a reduced communication performance. In addition, for a fixed frame length T_tot, if a higher number of time slots is allotted to beam refinement, then fewer slots remain for data communication, which may degrade the communication performance. On the contrary, reserving fewer slots for beam refinement leads to a higher beam selection error and, accordingly, poor communication even with a large number of data transmission slots. We assume that the user is stationary and is located 100 m from the access point. The blockage condition can change intermittently, uniformly within a frame. We assume a bandwidth of 1 GHz and a transmit power of 40 dBm. The impact of interference is ignored. The frame duration consists of 35072 slots. For a given number of beams N, we assume that the directivity gain per beam is 2π/N and, accordingly, the downlink data rate is given by (1 - 𝒫_e) [T_D/(T + T_D)] W log_2(1 + ξ_0), where ξ_0 is the reference SNR without the directivity gain. Here we have assumed that the side lobes have negligible power and, hence, the received power in case of a beam misalignment is 0. Fig. <ref> shows that for a chosen T, there exists an optimal N. When 1% of the temporal resources is allotted to beam alignment, the optimal beam number is 64, while for a higher share of resources allotted to beam alignment (10%), the optimal N increases to 128. Thus, for a larger beam alignment budget, a larger N can be employed to maximize the data rate. However, for a low beam dictionary size, e.g., N = 16, T/T_tot = 0.1 is sufficient to achieve the best possible beam selection efficacy, and increasing the resources for beam alignment further simply reduces the data rate. Similarly, Fig. <ref> shows that for a given N, there exists an optimal partitioning of the temporal resources between the beam alignment and data communication phases. For a very stringent beam alignment deadline, e.g., T/T_tot = 0.1, a lower N is a better choice due to the low beam selection error. However, as the beam alignment budget increases, a higher N can be chosen for the optimal data rate. § CONCLUSION For the stationary environment, our proposed beam selection scheme outperforms the state-of-the-art bandit algorithms in terms of the probability of error. For the non-stationary environment, we showed that the popular SH algorithm does not achieve an exponential error bound. For a known range of the index of change, we proposed K-SHES, which achieves an exponential bound and can thus be employed in beam selection procedures where the state of the beams changes during initial access. We employed K-SHES in a tandem beam refinement and data transmission scheme and highlighted key system design insights in terms of the selection of the beam codebook and the partitioning of temporal resources.
A detailed analysis of the type of allowable change distributions as well as handling multiple changes are indeed interesting directions of research and we are currently investigating the same. This will be reported in future work. § PROOF OF LEMMA <REF> Let ζ_1 = a_1/b_1. Following (<ref>) we have ζ_1 > 1 for diminishing g and accordingly p_ m = 1 - 𝒬_T_B_k/2(a_1, b_1) (a)≤exp(-1/2(a_1^2 + b_1^2))√(ℐ_0(2a_1b_1))√(ζ_1^2(1 - T_B_k/2)/2(ζ_1^2 - 1)) (b)≤exp(-1/2(a_1^2 + b_1^2))exp(a_1b_1)√(ζ_1^2(1 - T_B_k/2)/2(ζ_1^2 - 1)) ≤exp(-a_1^2/2)exp(a_1b_1)√(ζ_1^2(1 - T_B_k/2)/2(ζ_1^2 - 1)) = C_1exp(-g'/2σ^2T/Nlog N) (c)≤ C_1exp(-GT/2Nσ^2log N). ℐ_0(·) is the modified Bessel function of the first kind with order 0. Step (a) follows from the Cauchy-Schwarz inequality for the Marcum-Q function <cit.>. The step (b) follows from the following <cit.> I_ν(x) < cosh x/Γ(ν+1)(2/x)^ν≤x^ν/2^νν!exp(x) ℐ_0(x) ≤exp(x). The step (c) follows from the definition of g'. Now consider C_1 = exp(a_1b_1) √(ζ_1^2(1 - T_B_k/2)/2(ζ_1^2 - 1)). Due to (<ref>) we have lim_g → 0ζ_1 = ∞ and thus, lim_ζ_1 →∞√(ζ_1^2(1 - T_B_k/2)/2(ζ_1^2 - 1)) = 0, ∀ T, and lim_g → 0exp(a_1b_1) = 1. Thus, from the limit rule of product, we have lim_g → 0 C_1 = 0. § PROOF OF LEMMA <REF> The proof follows by considering ζ_0 = a_0/b_0 < 1 and applying the corresponding Cauchy-Schwarz bound for Q_T_B_k/2(a_0,b_0) - p_ f = 𝒬_T_B_k/2(a_0, b_0) (a)≤exp(-1/2(a_0^2 + b_0^2))√(ℐ_0(2a_0b_0))√(ζ_0^2(1 - T_B_k/2)/2(ζ_0^2 - 1)) ≤exp(-1/2(a_0^2 + b_0^2))exp(a_0b_0)√(ζ_0^2(1 - T_B_k/2)/2(ζ_0^2 - 1)) ≤exp(-b_0^2/2)exp(a_0b_0)√(ζ_0^2(1 - T_B_k/2)/2(ζ_0^2 - 1)) = C_0exp(-GT/2Nσ^2log N). Unlike p_ m, the step (a) follows since ζ_0 < 0. Now consider C_0 = exp(a_0b_0) exp(-N) √(ζ_0^2(1 - T_B_k/2)/2(ζ_0^2 - 1)). Due to (<ref>) we have lim_g → 0ζ_0 = 0 and accordingly, lim_ζ_0 → 0√(ζ_0^2(1 - T_B_k/2)/2(ζ_0^2 - 1)) = 0, ∀ T, and lim_g → 0exp(a_0b_0) = 1. Thus, from the limit rule of product, we have lim_g → 0 C_1 = 0. § PROOF OF LEMMA <REF> We have from the definition of p_i,j(r_ c), p_i,j(r_ c) = ℙ(μ̂_i(r_ c) - μ̂_j(r_ c) > 0 | f_i, f_j ∈ S_r_ c) =ℙ(1/n_r_ c∑_k= 1^n_r_ c R_i(k) - 1/n_r_ c[∑_l = 1^n' - 1 R_j^-(l) + .. .. ∑_m = 1^n_r_ c-n' + 1 R_j^+(m)] > 0 | f_i, f_j ∈ S_r_ c) = ℙ(n' - 1/n_r_ c[μ̂_i, n' - 1 - μ̂_j, n' - 1^-] + n_r_ c - n' + 1/n_r_ c·. . [μ̂_i, n_r_ c - n' + 1 - μ̂_j, n_r_ c - n' + 1^+] > 0 | f_i, f_j ∈ S_r_ c) = ℙ([μ̂_i, n_r_ c - μ_i] + n' - 1/n_r_ c[μ_j^- - μ̂_j, n' - 1^-] +. . n_r_ c - n' + 1/n_r_ c[μ_j^+ - μ̂^+_j, n_r_ c-n' + 1] + n' - 1/n_r_ cΔ^-_i,j + . . n_r_ c - n' + 1/n_r_ cΔ^+_i,j > 0 | f_i, f_j ∈ S_r_ c). Now, let us note that Z = [μ̂_i, n_r_ c - μ_i] + n' - 1/n_r_ c[μ_j^- - μ̂_j, n' - 1^-] + n_r_ c - n' + 1/n_r_ c[μ_j^+ - μ̂^+_j, n_r_ c-n' + 1] ∼𝒩(0, σ'^2_ij). Accordingly, p_i,j(r_ c) ≤1/∑_n' = 0^n_r_ c q_n(n')∑_0^n^*_iexp(-Δ'^2_i,j/2σ'^2_ij) q_n(n') + ∑_n^*_i^n_r_ c q_n(n') ≤exp(-Δ^2_min/2σ'^2_ij) F_n'(n_i^* ) +1 - F_n'(n_i^* ) ≤exp(-Δ^2_minT/4N log Nσ^2μ_max) F_n'(n_i^* ) +1 - F_n'(n_i^* ). where in step (a), Δ'_i,j = n' - 1/n_r_ cΔ^-_i,j + n_r_ c - n' + 1/n_r_ cΔ^+_i,j, σ'^2_ij = σ_i^2/n_r_c + n'σ_j^-^2/n^2_r_c + (n_r_c - n')σ_j^+^2/n^2_r_c. § PROOF OF LEMMA <REF> The analysis considers three phases - i) rounds before r_ c, ii) the r_ c-th round, and iii) the rounds after r_ c. For the rounds before r_ c, let N'_r be the number of arms from the bottom |𝒮_r| - K arms that have the estimate of their means larger than the estimate of the K-th arm. 
We have ∀ r < r_ C 𝔼[N'_r | r < r_ c≤ r^*] ≤∑_q = K+1^|𝒮_r|exp(- 1/2Δ_Kq^2 n_r) ≤ (|𝒮_r| - K - 1) exp(- 1/2Δ_min^2 T/Nlog N) . Consequently, the probability that the K-th arm is eliminated in the r-th round (r ≤ r_ c≤ r^*) is upper bounded by ℙ(N'_r > |S_r|/2| r ≤ r_ c≤ r^*) ≤2/|S_r|[(|𝒮_r| - K - 1)exp(-1/2Δ_min^2 T/N log N)] ≤ 2exp(-1/2Δ_min^2 T/N log N). Accordingly, the probability that the K-th arm is eliminated until the round r_ c is upper bounded as p_ e_1(r_ c) = ℙ(ℰ_K([r_ c - 1]) | r_ c≤ r^*) ≤𝔼_r_ c≤ r^*[r_ c - 1] [2exp(-1/2Δ_min^2 T/N log N)]. Next, if the K-th arm survives until the r_ c-th round, the probability that it is eliminated in the r_ c-th round is evaluated as (following Lemma <ref>) p_ e_2(r_ c) = ℙ(ℰ_K(r) |1(ℰ_K([r_ c - 1])) = 0 , r ≤ r_ c≤ r^*) ≤ 2 𝔼_r_ c≤ r^*[1 - F_n'(n_max) (1 - .. ..exp(- TΔ^2_min/2Nlog Nσ^2_max))] (a)≤ K exp(- TΔ^2_min/2Nlog Nσ^2_max). Step (a) is because before r^* only 1 arm among the set of beams inferior to the K-th beams needs a higher estimate than the K-th arm for it to be eliminated. Finally, given that the K-th arm has survived until the end of r_ c the probability it is eliminated at the end of the play is upper bounded as p_ e_3(r_ c) = ℙ(ℰ_K([r_ c, log N]) |1(e_K(r^*)) = 0 , r ≤ r_ c≤ r^*) ≤𝔼_r_ c≤ r^*[log N - r_ c_1] 2exp(-1/2Δ_min^2 T/N log N). Thus, the total probability that the arm K is eliminated given that the change occurs in the first logN/2K rounds is upper bounded using the union bound as ℙ(ℰ_K([log N]) | r_ c≤ r^* ) ≤𝔼[p_ e_1(r_ c) + p_ e_2(r_ c) + p_ e_3(r_ c) | r_ c≤ r^*] ≤𝔼_r_ c, n'[ (r_ c - 1 ) [2exp(-1/2Δ^2_minT/Nlog N) ] .+ . [Kexp(-1/2σ^2_maxΔ^2_minT/Nlog N )] + . .2(log N - r_ c) [exp(-1/2Δ^2_min T/N log N)] | r_ c≤ r^* ]. § PROOF OF LEMMA <REF> We have ∀ r < r^* 𝔼[N'_r | r ≤ r^* ≤ r_ c] ≤∑_q = K+1^|𝒮_r|exp(- 1/2Δ_Kq^2 n_r) ≤ (|𝒮_r| - K - 1) exp(- 1/2Δ_min^2 T/Nlog N) . Consequently, the probability that the K-th arm is eliminated in the r-th round (r ≤ r^* ≤ r_ c) is upper bounded by ℙ(N'_r > |S_r|/2| r ≤ r_ c≤ r^*) ≤2/|S_r|[(|𝒮_r| - K - 1)exp(-1/2Δ_min^2 T/N log N)] ≤ 2exp(-1/2Δ_min^2 T/N log N). Accordingly, the probability that the K-th arm is eliminated until the round r_* is upper bounded as p_ l_1(r_ c) = ℙ(ℰ_K([r^*]) | r_ c > r^*) ≤ 2r^*exp(-1/2Δ_min^2 T/N log N). Given that the K-th arm has survived until the end of r_ c_2 the probability it is eliminated at the end of the play is upper bounded as p_ l_2(r_ c_2) = 2(log N - r_ c) exp(-1/2Δ_min^2 T/N log N). Thus, the total probability that the arm K is eliminated given that the change occurs in the first logN/2K rounds is given by p_ l ≤𝒯_2(r_ c) + 2log N exp(-1/2Δ_min^2 T/N log N). IEEEtran
http://arxiv.org/abs/2307.04805v1
20230710180052
The Dragon-II simulations -- I. Evolution of single and binary compact objects in star clusters with up to 1 million stars
[ "Manuel Arca Sedda", "Albrecht W. H. Kamlah", "Rainer Spurzem", "Mirek Giersz", "Peter Berczik", "Sara Rastello", "Giuliano Iorio", "Michela Mapelli", "Massimiliano Gatto", "Eva K. Grebel" ]
astro-ph.GA
[ "astro-ph.GA" ]
firstpage–lastpage Autonomous feedback stabilization of a cavity-coupled spin oscillator Dan M. Stamper-Kurn August 12, 2023 ===================================================================== We present the first results of the Dragon-II simulations, a suite of 19 N-body simulations of star clusters with up to 10^6 stars, with up to 33% of them initially paired in binaries. In this work, we describe the main evolution of the clusters and their compact objects (COs). All Dragon-II clusters form in their centre a black hole (BH) subsystem with a density 10-100 times larger than the stellar density, with the cluster core containing 50-80% of the whole BH population. In all models, the BH average mass steeply decreases as a consequence of BH burning, reaching values ⟨ m_ BH⟩ < 15 M_⊙ within 10-30 relaxation times. Generally, our clusters retain only BHs lighter than 30 M_⊙ over 30 relaxation times. Looser clusters retain a higher binary fraction, because in such environments binaries are less likely disrupted by dynamical encounters. We find that BH-main sequence star binaries have properties similar to recently observed systems. Double CO binaries (DCOBs) ejected from the cluster exhibit larger mass ratios and heavier primary masses than ejected binaries hosting a single CO (SCOBs). Ejected SCOBs have BH masses m_ BH = 3-20 M_⊙, definitely lower than those in DCOBs (m_ BH = 10-100 M_⊙). methods: numerical – galaxies: star clusters: general – stars: general, black holes § INTRODUCTION Massive star clusters in the range (10^4-10^6), like globular clusters or young massive clusters, represent galactic repositories of stellar compact objects, and are ideal laboratories to study the interplay of stellar evolution and dynamics. Several hundreds of stellar black holes (BHs), neutron stars (NSs), and white dwarfs (WDs) are expected to form in a typical massive cluster. In the last decade, it became clear that the fraction of BHs that massive clusters can retain is much larger than previously thought, as suggested by numerous theoretical and numerical works <cit.>, providing support to the crescent number of observations of stellar BH candidates in Galactic clusters <cit.>. The progress in stellar evolution of massive stars <cit.>, partly triggered by the discovery of gravitational-wave (GW) emission by merging BH and NS binaries <cit.>, has completely changed our understanding of BHs. Stellar models demonstrated that the evolution of single massive stars is significantly influenced by the possible development of so-called pair instability supernovae (PISN), which causes the complete disruption of stars that develop an He core with a mass of M_ He = 64-135, and pulsational pair instability supernovae (PPISN), a mechanism that leads to an enhanced mass-loss in stars with a He core mass of M_ He = 32-64. This leads to a maximum stellar BH mass in the range m_ BH, max = (40-60), depending on the theoretical model adopted and the stellar metallicity. Direct consequence of these two processes is the well known upper-mass gap of BHs, a region of the mass-spectrum where no remnants are expected <cit.>. The boundaries of the upper-mass gap are highly uncertain and depend on the adopted stellar evolution model and metallicity <cit.>. Only stars with a zero age main sequence mass beyond M_ ZAMS > (200-250) can avoid PISN and, depending on their metallicity, directly collapse to an intermediate-mass BH with little mass loss in the process <cit.>. 
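As a schematic illustration of how these instability windows carve out the upper mass gap, the snippet below maps a helium-core mass onto the qualitative outcome quoted above; the window edges follow the values in the text, while the detailed PPISN mass-loss treatment and the exact remnant-mass cap used by the stellar-evolution routines are not reproduced here.

```python
def pair_instability_outcome(m_he_core):
    """Qualitative remnant outcome as a function of the He-core mass (solar masses),
    following the windows quoted in the text: PPISN for 32-64 Msun, PISN (no remnant)
    for 64-135 Msun, and (near-)direct collapse above 135 Msun."""
    if m_he_core < 32.0:
        return "core collapse (standard BH/NS formation)"
    if m_he_core < 64.0:
        return "pulsational pair instability: enhanced mass loss, capped BH mass"
    if m_he_core <= 135.0:
        return "pair instability supernova: star disrupted, no remnant"
    return "direct collapse: possible intermediate-mass black hole"

if __name__ == "__main__":
    for m_he in (20.0, 40.0, 70.0, 150.0):
        print(f"M_He = {m_he:6.1f} Msun -> {pair_instability_outcome(m_he)}")
```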
Stellar collisions might lead to the formation of BHs in the upper-mass gap <cit.>, thus suggesting that star clusters could be perfect laboratories to form mass-gap BHs <cit.>, but it is unclear how the stellar merger frequency depends on the cluster initial properties <cit.> or the stellar conditions at merger <cit.>. More in general, the formation of a population of compact objects can significantly affect star cluster dynamics. Massive stars and BHs rapidly sink into the cluster centre via mass-segregation, possibly forming a massive subsystem on a core-collapse timescale <cit.> which can contract and determine the onset of runaway stellar collisions if the time does not exceed the stellar evolution timescale <cit.>. The runaway growth of a massive star can be hampered by the formation of tight binaries that supply energy to the cluster core, cause BH ejection, deplete the cluster's BH reservoir, and eventually kick each other out via super-elastic encounters <cit.>. The competing effect of binary energy supply and stellar collisions likely depends on the cluster mass, density, metallicity, the fraction of primordial binaries, the initial mass function and its boundaries, the natal kicks of BHs and NSs, and the compact object mass spectrum. Typically, the exploration of a tiny part of such parameter space is performed with numerical models capable of simultaneously accounting for stellar dynamics and evolution, either via direct N-body <cit.> or Monte Carlo techniques <cit.>. Direct N-body simulations offer most likely the highest level of accuracy in terms of stellar dynamics modelling, but their computational cost forced the vast majority of works in the literature to focus on star clusters with less than a few × 10^5 stars and/or with a relatively small fraction of primordial binaries <cit.>, with a few notable exceptions. For example, several works have explored the impact of a large primordial binary fraction, up to 100%, on the dynamics of isotropic <cit.> and anisotropic <cit.> low-mass star cluster models, i.e. with N < 20,000, with equal-mass stars, and recently in intermediate-mass GCs, i.e. N∼ 10^5 <cit.>. With regards to simulations tailored to represent massive globular clusters, the DRAGON simulations remain the only one that exploited 10^6 particles <cit.>. Since the development of such pioneering simulations, and especially after the discovery of GWs, numerical tools underwent major upgrades in terms of stellar evolution and treatment of relativistic binaries. In this work, we present the simulation database, a suite of 19 direct N-body simulations performed with the code[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>] representing star clusters with N=(0.12-1)× 10^6 stars, half-mass radius densities in the ρ_h = 1.3× 10^4 - 6.9 × 10^6 M_⊙ pc^-3 range, and a fraction f_ 2b = 0.10-0.33 of stars initially paired in primordial binaries. This work, which is the first one of a series, focuses on the evolution of single and binary BHs and compact objects in massive and dense star clusters, paying particular attention to the relation between the BH population (mass, average BH mass, density) and the cluster properties (mass, radius). Our models explore a portion of the parameter space still uncharted by direct N-body simulations, thus complementing previous works that either rely on Monte Carlo simulations or exploit star cluster models with old stellar evolution recipes or a significantly smaller number of stars. 
The paper is organised as follows: Section <ref> describes the main properties of the clusters and the improvements integrated in the code; Section <ref> presents our main results in terms of overall star cluster evolution (Section <ref>), main properties of single and binary compact objects (Sections <ref> - <ref>), and the possible implementation of N-body outputs into semi-analytic tools (Section <ref>); whilst Section <ref> is devoted to summarise the main outcomes of our work. § NUMERICAL METHODS All the models are carried out exploiting the code <cit.>, which represents the current state-of-the-art of direct N-body codes optimised to exploit GPU-accelerated high-performance supercomputing <cit.> altogether with several recently developed codes, like Petar <cit.> or Bifrost <cit.>. belongs to a long-standing family of direct N-body integrators initiated by Sverre Aarseth and developed for almost 50 years <cit.>. implements a 4th-order Hermite integrator with individual block-time steps <cit.> and sophisticated algorithms for close encounters and few-body dynamics, namely the Kustaanheimo-Stiefel (KS) regularisation <cit.>, the Ahmad-Cohen (AC) scheme for neighbours <cit.>, and algorithmic chain regularisation <cit.>, which enables us to closely follow the evolution of binaries with periods 10^-10 times smaller than the dynamical timescales of star clusters, which typically exceed O(10) Myr. In the last few years, the code underwent a series of major upgrades related to the treatment of relativistic compact objects <cit.>, the implementation of flexible stellar evolution recipes <cit.>, and the inclusion of a dedicated treatment for spins <cit.>. Here, we expand the possible choices for BH natal spin distribution and implement relativistic recoil for post-merger remnants. In the following, we briefly summarize the features of the code that are most relevant for this work, and discuss the newest upgrades that we implemented into the code and use here for the first time. §.§ Stellar evolution implements stellar evolution for single and binary stars via the and routines <cit.>, which we heavily updated to include up-to-date prescriptions for the evolution of massive stars. We refer the reader to <cit.> for a comprehensive discussion about the updated stellar evolution encoded in . In this work, we adopt the level-B of stellar evolution as defined in <cit.>. This implies that our models take into account the formation of electron-capture supernovae (ECSNe, following ), the delayed SN scheme <cit.>, and the development of pair-instability (PISN) and pulsational pair instability supernovae (PPISN) <cit.>. For the formation of compact objects, we adopt mass loss from <cit.> with additional metallicity-dependent correction factors taken from <cit.> and a dedicated treatment for mass loss of hot and massive H-rich O/B stars <cit.>. The adopted stellar evolution models imply that the maximum BH mass attainable by massive stars with zero-age main-sequence mass <150 is m_ BH, max = 40.5 <cit.>. The BHs falling in the so-called upper mass-gap can still form via stellar collisions, accretion of stellar material onto stellar BHs, and BH-BH mergers, as we discuss in our companion papers. Natal kicks for NSs forming via ECSNe, accretion induced collapse (AIC), and merger-induced collapse (MIC) are drawn from a Maxwellian distribution with dispersion 3 km/s <cit.>, whilst for all other NSs we adopt a Maxwellian distribution with dispersion 265 km/s <cit.>. 
This latter value is adopted also for BHs, but the kick amplitude is reduced by a factor that accounts for the amount of fallback material <cit.>. For binary stars, we model common envelope evolution via the parametrised α_ CE-λ_ CE scheme, according to which it is possible to regulate the fraction of orbital energy injected into the envelope (α_ CE) and to scale the binding energy of the envelope by a factor λ_ CE in a way similar, but not equal, to the one followed by <cit.> <cit.>. In this work, we adopt α_ CE = 3 <cit.>. §.§ Dynamics of compact objects In particularly dense clusters, stellar interactions can trigger collisions among stars and/or compact objects. The aftermath of such collisions is still a poorly understood process that can crucially affect the formation and evolution of stellar BHs. Whilst the outcome of stellar mergers is better understood, also thanks to recent detailed hydrodynamical simulations coupled with stellar evolution models <cit.>, it is still unclear how much mass a massive star can accrete onto a stellar BH. Several works have shown that in the case of a star with a mass ∼ (1-10) merging with a stellar BH, there is little accretion as most of the energy is radiated away via jets, although the mechanism is highly uncertain and likely depends on the star structure and evolutionary stage <cit.>. Hydrodynamical simulations of star–BH close encounters have shown that up to 70% of the star mass remains bound to the BH, but energy arguments suggest that even a tiny amount of accreted matter, O(10^-3-10^-2) would suffice to evaporate the accretion disk and halt the BH growth <cit.>. Nonetheless, recent simulations modelling the common envelope phase of a tight star–BH binary have shown that the BH accretes the stellar core and expels the envelope, a process accompanied by a SN-like transient and spin-up of the BH to nearly extreme values regardless of the initial spin <cit.>. In multiple main-sequence star collisions, the merger product is expected to have a compact core and a tenuous envelope with densities as low as 10^-10 g cm^-3 <cit.>. Therefore, if: a) most of the merger product mass is in the core <cit.>, and b) the core can efficiently feed the BH <cit.>, it is reasonable to assume that a BH would accrete a significant fraction of it. Given the aforementioned uncertainties, in we parametrise the outcome of star-BH collisions via the fraction of star mass accreted onto the BH, f_c <cit.>. Throughout this paper we adopt f_c = 0.5. Natal spins are another poorly known property of stellar BHs. implements the so-called “Geneva”, “MESA”, and “Fuller” models <cit.>, and four additional choices implemented in this work, namely: zero-spins, uniform spin distribution, Gaussian spin distribution with mean value χ = 0.5 and dispersion σ_χ = 0.2, and a Maxwellian distribution with dispersion σ_χ = 0.2. also features a treatment for compact binary mergers based on an orbit-averaged formalism <cit.>, which enables us to follow the formation and evolution of in-cluster compact binary mergers, a feature implemented in a number of recent works modelling young star clusters <cit.>. In this work, we present the implementation of three new features of the code: mass and spin of the merger remnant, calculated via numerical relativity fitting formulas <cit.>, and the recoil kick imparted by asymmetric GW emission promptly after merging events <cit.>. We follow the implementation depicted in our previous works <cit.>. 
v⃗_ = v_mê_,1 + v_(cosξê_,1 + sinξê_,2) + v_∥ê_∥, v_m = Aη^2 √(1-4η) (1+Bη), v_ = Hη^2/1+q_(S_2,∥ - q_ S_1,∥), v_∥ = 16η^2/1+q_[ V_11 + V_A Ξ_∥ + V_B Ξ_∥^2 + V_C Ξ_∥^3 ] × ×| S⃗_2, - q_S⃗_1,| cos(ϕ_Δ - ϕ_1). Here η≡ q_/(1+q_)^2 is the symmetric mass ratio, Ξ⃗≡ 2(S⃗_2 + q_^2 S⃗_1) / (1 + q_)^2, and the subscripts and ∥ mark the perpendicular and parallel directions of the BH spin vector (S⃗) with respect to the direction of the binary angular momentum. We adopt A = 1.2 × 10^4 km s^-1, B = -0.93, H = 6.9× 10^3 km s^-1, and ξ = 145^∘ <cit.>, V_11 = 3677.76 km s^-1, and V_A,B,C = (2.481, 1.793, 1.507)× 10^3 km s^-1. The quantity ϕ_Δ represents the angle between the direction of the infall at merger (which we randomly draw in the binary orbital plane) and the in-plane component of the quantity Δ⃗≡ (M_a+M_b)^2 (S⃗_b - q_S⃗_a)/(1+q_), while ϕ_1 = 0-2π is the phase of the binary, extracted randomly between the two limiting values. §.§ Massive star cluster models with up to one million stars We generate the 19 star clusters with the updated software <cit.>, as described in <cit.> and <cit.>. All star clusters are modelled via <cit.> dynamical models with a central dimensionless potential well W_0 = 6, and are characterised by three values of the half-mass radius, R_ = 0.47, 0.80, 1.75 pc, four values of the initial number of stars, N = (1.2, 3, 6, 10)× 10^5, and two values of the primordial binary fraction, as described below. All clusters have the same metallicity Z = 0.0005, a value typical of several clusters proposed to host a dense subsystem of stellar BHs, like NGC3201 or a central intermediate-mass black hole (IMBH), like NGC6254 <cit.>. All simulations were conducted on the Juwels BOOSTER supercomputer and the GRACE HPC workstation over a ∼ 2 yr timespan. Eventually, the whole database consists of almost 35 Tb of data. Stellar masses are drawn from the <cit.> initial mass function limited between m_* = 0.08-150, which implies an initial average stellar mass is ⟨ m_* ⟩≃ 0.59. The corresponding initial mass and density scale in clusters are M_c = (0.7-5.9)× 10^5 and densities ρ_c ≃ 1.3× 10^4 - 6.9 × 10^6 pc^-3, respectively. All clusters move on a circular orbit at a distance of 13.3 kpc from the centre of a galaxy whose gravitational potential is modelled via a simple Keplerian potential assuming a total galaxy mass of M_g = 1.78× 10^11. As a consequence, our clusters have initially a tidal radius in the range R_ tid = 67-138 pc and they can all be considered as underfilling systems, thus the gravitational field has a smaller impact on the cluster evolution with respect to internal dynamics, at least at the beginning. clusters would underfill their Roche lobe even in the case of a rather extremely eccentric orbit, e.g. e = 0.9. We assume that a fraction of the total number of stars is initially paired in a primordial binary system. Following in , we define the binary fraction as the ratio between the number of binaries and the sum of single stars and binaries, f_b = n_b/(n_s+n_b). We set a f_b = 0.05-0.2 depending on the cluster model as summarized in Table <ref>. Our simulation grid contains two sets that differ only in f_b, thus their comparison could unveil some effects triggered by primordial binary dynamics. Note also our definition of f_b implies that the number of stars in binaries over the total is f_ 2b = 2f_b/(1+f_b)= 0.10-0.33. 
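For reference, the short sketch below turns these choices into numbers: it converts the binary fraction f_b into the fraction of stars in binaries f_2b through the relation above, and draws masses from a Kroupa-like broken power law between 0.08 and 150 M_⊙; the segment slopes (1.3 and 2.3 below/above 0.5 M_⊙) are the standard Kroupa values and are our assumption here, since only the IMF reference is quoted in the text.

```python
import random

def f2b_from_fb(f_b):
    """Fraction of stars initially in binaries, f_2b = 2 f_b / (1 + f_b)."""
    return 2.0 * f_b / (1.0 + f_b)

def sample_kroupa(n, m_min=0.08, m_max=150.0, seed=1):
    """Draw n masses from a two-segment Kroupa-like IMF, dN/dm ~ m^-1.3 (0.08-0.5 Msun)
    and ~ m^-2.3 (0.5-150 Msun), matched continuously at 0.5 Msun (slopes assumed)."""
    rng = random.Random(seed)
    a1, a2, m_b = 1.3, 2.3, 0.5

    def seg_integral(alpha, lo, hi):
        p = 1.0 - alpha
        return (hi**p - lo**p) / p

    # relative number of stars in each segment (continuity fixes the normalisation ratio)
    w1 = seg_integral(a1, m_min, m_b)
    w2 = m_b**(a2 - a1) * seg_integral(a2, m_b, m_max)
    p1 = w1 / (w1 + w2)

    def invert(alpha, lo, hi, u):
        p = 1.0 - alpha
        return (lo**p + u * (hi**p - lo**p)) ** (1.0 / p)

    masses = []
    for _ in range(n):
        if rng.random() < p1:
            masses.append(invert(a1, m_min, m_b, rng.random()))
        else:
            masses.append(invert(a2, m_b, m_max, rng.random()))
    return masses

if __name__ == "__main__":
    for f_b in (0.05, 0.10, 0.20):
        print(f"f_b = {f_b:.2f} -> f_2b = {f2b_from_fb(f_b):.2f}")
    m = sample_kroupa(200_000)
    print(f"mean stellar mass ~ {sum(m) / len(m):.2f} Msun (the text quotes ~0.59 Msun)")
```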
Binaries are initialised assuming the same mass function of single stars and a uniform mass ratio distribution in the range q=0.1-1 for stars heavier than m_*>5 or random pairing for the lighter ones <cit.>. Following previous works on the same topics, we adopt a thermal distribution of the eccentricity and a semi-major axis distribution flat in logarithmic values, with an upper limit set to 50 AU and a lower limit set by the sum of the stars' radii <cit.>. In the majority of the cases, for each value of R_ and N we run two simulations with different random seeds to explore possible dependencies on the randomness of the star distribution. The only exception is the case R_ = 0.47 pc and N = 300k stars, which was limited to only one model because of the available computational time. The simulations are performed until either the average mass of stellar BHs falls below ⟨ m_⟩≲ 15, no BHs with a mass above 30 are retained in the cluster, or the simulated time exceeds at least one relaxation time <cit.>, which can be expressed in the form <cit.> T_ rlx = 282 Myr1/m_* lnγ_n N√(M_c/10^5)(R_/1 pc)^3/2, where γ_n = 0.11-0.4 for a monochromatic mass spectrum <cit.> but it can be as low as γ_n=0.02 for a multi-mass mass spectrum <cit.>. These choices result in a physical simulated time ranging between T_ sim∼ 0.1-2.3 Gyr and lead to an optimal balance between the computational cost of the simulations and the portion of parameter space that can be explored. Table <ref> summarizes the main properties of models. As sketched in Figure <ref>, in comparison to the most recent studies based on N-body <cit.> and Monte Carlo simulations <cit.>, the clusters occupy a region of the N-ρ_h plane mostly populated by Monte Carlo simulation grids. This, coupled with the fact that simulations with N>10^5 stars usually adopt a binary fraction <20%, makes our simulations an unprecedented grid of models that complements, and further expands, the phase space accessible with direct N-body models. § RESULTS §.§ Star cluster evolution The clusters were originally devised to explore compact object dynamics, compact binary mergers, and intermediate-mass black hole build-up in dense star clusters, thus they are not meant to be representative of any observed cluster. Nonetheless, it is interesting to compare in Figure <ref> the time evolution of the modelled mass and half-mass radius with relatively young, i.e. typical ages 0.1-1 Gyr, massive star clusters in the Milky Way (MW), the Small (SMC) and Large Magellanic Cloud (LMC), M31 <cit.>, the Henize 2-10 starburst dwarf galaxy <cit.>, and the M83 galaxy <cit.>. Over the simulated time, our models overlap with observed clusters, thus indicating that the adopted initial conditions lead to numerical models that can represent one possible evolutionary pathway of some observed clusters. We find that the mass and half-mass radius evolution is well described by the following relations: M_ cl(t) = M_ cl,0[1 + α_M(t/T_ rlx)^-β_M], R_(t) = R_,0[1+t/α_R T_ rlx]^β_R. The values of the fitting parameters, which are summarised in Table <ref>, are independent of the initial cluster mass, and weakly depend on the initial value of the half-mass radius. This owes to the fact that the mass-segregation time scales with M_c^1/2 R_^3/2, thus it is mostly affected by the choice of the half-mass radius. Figure <ref> shows the ratio between the final and initial values of R_ as a function of the simulated time, normalised to the initial relaxation time. 
The plot clearly highlights how the cluster expansion depends only on the dynamical age of the cluster, regardless of the initial cluster mass. By the end of the simulations, our clusters have typically lost ∼ 25-50% of their initial mass and their radius has expanded by a factor of 1.5-10, thus implying a reduction of the density at the half-mass radius by up to four orders of magnitude and a reduction of the velocity dispersion of around 1-1.5 times. The drop in density and velocity dispersion crucially affects the rates at which dynamical interactions take place. A thorough comparison among simulations and the models discussed in the past literature is made hard by the many different assumptions of previous works, like the use of equal-mass stars to represent the cluster, the different binary fraction, the properties of the primordial binary population, the lack of a dedicated treatment to deal with compact binaries, and the use of outdated prescriptions for the evolution of massive stars (m_ ZAMS > 50). In order to test the new features of the code, we have carried out an extensive comparison of the evolution of star clusters with 110,000 stars in N-body and Monte Carlo simulations in our companion paper <cit.>, where we have shown, among other things, that N-body models of the same clusters seem to evolve toward sparser configurations compared to Monte Carlo models with large tidal radii simulated with the MOCCA code. This difference is likely due to the different criteria used to identify escapers in the two methods, which can lead to an early removal of escaping stars in MOCCA simulations compared to . §.§ Stellar and compact object binaries Mass-segregation of the most massive stars enhances strong dynamical interactions, which can trigger the ejection of the tightest binaries, the ionisation of the loosest ones, and the formation and hardening of new binaries. In the clusters, the processes responsible for the formation and disruption of binaries counterbalance efficiently, determining a slow variation of the overall binary fraction. As shown in Figure <ref>, the binary fraction decreases by a small fraction, down to f_b,fin∼ 0.16-0.18 in models starting with f_b=0.2 and to f_b,fin=0.04-0.05 in models with f_b = 0.05. Interestingly, this variation in the binary fraction is similar, within the simulation time, to results obtained for lower-N cluster simulations <cit.>. The decrease of the binary fraction is mostly due to the disruption of the softest binaries in the cluster and, for a small fraction (< 5%), to hard binaries that are ejected in strong dynamical interactions. These binaries have typical semi-major axes broadly distributed in the 10^-2-5× 10^2 AU. For the sake of comparison, Figure <ref> shows the initial period-mass distribution and mass-ratio of the population of primordial binaries in our models. Figure <ref> shows the distribution of the ratio between the semi-major axis of ejected binaries and the hard-binary separation, both measured at the moment of the ejection, and the ejection velocity distribution for two different simulations. The plot makes clear that the vast majority of ejected binaries are hard and that this population is dominated mostly by binaries with a mass m_ bin < 2. The velocities of the ejected binaries generally remain in the range of 1-100 km s^-1, too small compared to the circular velocity of the Galaxy to permit the identification of these escapers as former cluster members. 
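As a rough guide to the hard/soft classification used above, the snippet below evaluates a standard hard-binary separation, a_hard ≈ G m_1 m_2/(⟨m⟩ σ^2); this particular definition and the illustrative cluster values are our assumptions and are not quantities extracted from the simulations.

```python
G_PC_MSUN_KMS = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def hard_binary_separation_au(m1, m2, m_mean, sigma_kms):
    """Hard/soft boundary a_hard = G m1 m2 / (<m> sigma^2), returned in AU.
    Masses in Msun, velocity dispersion in km/s."""
    a_pc = G_PC_MSUN_KMS * m1 * m2 / (m_mean * sigma_kms**2)
    return a_pc * 206265.0  # 1 pc = 206265 AU

if __name__ == "__main__":
    # illustrative values: a 20+20 Msun BH pair and a 1+1 Msun stellar pair
    # in a cluster with <m> ~ 0.6 Msun and sigma ~ 10 km/s
    for m1, m2 in ((20.0, 20.0), (1.0, 1.0)):
        a_h = hard_binary_separation_au(m1, m2, m_mean=0.6, sigma_kms=10.0)
        print(f"m1={m1:5.1f} m2={m2:5.1f} Msun -> a_hard ~ {a_h:8.1f} AU")
```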
The upper panel of Figure <ref> shows the variation of the fraction of binaries normalised to the total number of stars in a given mass bin and at a given time. Initially, around 35-50% of all stars with a mass above 20 are initially binary members, with the maximum percentage achieved for stars heavier than 100. However, the population of heavy objects is rapidly depleted (note that t/T_ rlx = 0.22 corresponds in this case to t = 18.8 Myr) owing mostly to stellar/binary evolution, which causes a sharp drop in their number. The maximum stellar mass keeps decreasing over time, whilst a small population of binaries with components in the 5-100 develops – clearly owing to the formation of binaries with one or two BHs. The mass distribution of objects in binary systems, shown in the lower panel of Figure <ref>, highlights that the number of binaries with at least one component heavier than 10 is relatively small compared to the total number of objects in binaries. Assuming initially N=120,000 stars and f_b=0.2, we see that less than 1,000 binaries contain a component with a mass m_* > 10, most of them being former components of a primordial binary. The progenitors of compact objects, which are the most massive stars and stellar binaries in the cluster, have already sunk into the cluster centre when compact objects form. Therefore, to dissect the properties of compact binaries in clusters, we focus on binaries forming within the cluster half-mass radius, calculated along the cluster evolution. Figure <ref> shows the number of binaries with a WD, NS, or BH as a function of time for all models. The population of binaries containing at least one WD (dWDs), N_ dWD, depends on the half-mass radius and binary fraction. At fixed half-mass radius, the number of binaries with a WD significantly decreases at decreasing f_b, because most of these binaries are of a primordial origin. In fact, at fixed N stars and R_, the ratio between the number of dWDs is 4-5 times higher in models with f_b=0.2 compared to those with f_b=0.05, thus comparable to the ratio between the initial amount of primordial binaries in one case or the other. At fixed value of f_b, instead, the smaller the half-mass radius, the smaller is the number of dWDs. In general, by the end of the simulations we find N_ dWD≃ 200-700 dWDs per cluster. The amount of binaries with a WD monotonically increases over the simulated time, highlighting the competition between WD depletion via dynamical encounters and the formation of new WDs, mostly via binary stellar evolution <cit.>. The evolution of the number of binaries with a NS (dNS) shows two clear peaks at 20 and ∼ 100 Myr. These peaks correspond to the formation of NSs from stars in the high-end (the first) and low-end (the second) of the NS progenitor mass range. The drop after each of the peaks is due to NS natal kicks, which cause the ejection of a substantial fraction of NSs from the parent cluster. The width of the peaks is related to the time needed for the NS to leave the cluster, i.e. when their distance from the cluster centre exceeds twice the tidal radius. After the second peak, the number of binaries with a NS decreases in all simulations, regardless of the initial conditions. We find that the largest value of N_ dNS is reached in the case of R_=1.75 pc, f_b=0.2, and N=600k. At fixed value of R_ and N we find that a larger initial binary fraction leads to a more numerous population of binaries with a NS, around 50% more for models with f_b = 0.2. 
At fixed value of N and f_b the number of binaries with a NS increases at increasing values of R_ because in denser clusters it is more likely that massive stellar binaries either are ejected or merge before stellar evolution becomes dominant. The population of binaries with a BH (dBH), similarly to those with a NS, are characterised by two peaks of formation, one at around 10 Myr, driven by stellar evolution, and another at later times driven by dynamics. The number of binaries with a BH, N_ dBH, in the primary peak depends on the initial number of stars – the larger N_0 the larger N_ bBH, whilst the number in the secondary peak depends on both the half-mass radius and binary fraction, although it is hard to discern the effects of different initial conditions in this case. the clusters, §.§ Ejection of single and double compact objects Over the simulated time, all clusters lose around 20–70 single BHs, depending on the cluster initial conditions, and 10–70 binaries containing either one or two compact objects. Figure <ref> shows the mass distribution of ejected single BHs, which is characterised by two peaks, one at m_ BH∼ 3 and another at m_ BH∼ 25, and a tail that extends up to m_ BH∼ 10^2. The first peak is due to the natal kick of NSs and low-mass BHs, with masses in the range m_ BH = 2.5-6, and develops in the first 10–50 Myr, whilst the secondary peak is due to dynamical interactions[In our simulations the minimum mass allowed for BHs is m_ BH,min = 2.5]. The population of ejected binaries hardly depends on the cluster initial conditions. Therefore, for the sake of simplicity, we gather the ejected binaries from all simulations to have a statistically larger sample. In the following, we distinguish between binaries containing two compact objects, labelled as DCOB, and those containing one compact object and a star, labelled as SCOB. Figure <ref> shows the component mass, semi-major axis, and eccentricity distribution of the ejected binaries in all the clusters. Around 94% of the ejected binaries are primordial. A clear difference between double and single compact object binaries arises from these Figures. In total, we find 229 ejected DCOBs of both dynamical (144) and primordial (85) origin. The DCOBs exhibit a similar mass distribution for the primary and the companion, characterised by a plateau in the m_1,2 = 2-20 and a clear peak at m_1 ∼ 45 for the primary and m_2 ∼ 27 for the companion. The resulting mass ratio distribution is quite peculiar, with a clear dominance of DCOB with a mass ratio q>0.6, owing to the tendency of dynamical interactions to pair objects of comparable mass. The eccentricity distribution is dominated by a peak around 0, caused by a sub-population of primordial binaries that underwent the common envelope phase (64.7%), and a nearly flat distribution in the range e=0.5-1. Additionally, we find 375 ejected SCOBs, the vast majority of which coming from primordial binaries (353) with a small contribution from dynamically assembled systems (22). The mass distribution of the compact objects in SCOBs peaks at a value, m_ CO∼ 2-4, in the range of NSs and small BHs, definitely smaller compared to the mass distribution of the stellar companion, which peaks at 10, but with a secondary peak at ∼ 0.3-0.5. The binary mass-ratio distribution of SCOBs clearly differs from DCOBs, showing a peak at q∼ 0.2 and a decrease toward larger values. 
The compact object in the SCOBs is mostly a low-mass BH (200) – typically with a mass m_ BH<10 (173) – or a NS (173), and in only two cases a ONeWD (2). The stellar companion is a main-sequence star in the vast majority of the cases (353), followed by core He burning stars (20) (all with a primary weighing <5), and 2 naked He main-sequence (MS) star. Stellar companions in the MS phase are relatively massive: 18 of them have a mass m_ MS < 1, 245 have a mass in the range 1<m_ MS/<10, 74 in the range 10<m_ MS<20, and just one with a mass m_ MS = 29. All stars in the CHeB phase have a mass in the m_ CHeB = 5-16 range and are paired with an object lighter than m_ CO < 5, all of them come from primordial binaries. Focusing on DCOBs, we find a few peculiar and interesting systems. Among all ejected BBHs only 5 merge within a Hubble time, because most BBHs were ejected when the density and velocity dispersion of the cluster had already dropped due to its expansion and mass loss. In two cases, the ejected BBH contains an IMBH with mass either M_ IMBH = 120 or 350. In five cases, instead, we find an ejected BBH with a merging time smaller than a Hubble time. Table <ref> summarises the number of ejected single and binary BHs, and of BBHs and BH-IMBH binaries that merge within a Hubble time. §.§ Black hole – main sequence star binaries The sample of known BH–MS star systems has significantly grown over the last few years <cit.>. Some of the BHs observed in a BH–MS binary appear to reside in star clusters both in the Milky Way <cit.> and the Large Magellanic Cloud <cit.>, whilst others appear to be in the Galactic disc <cit.>. It is an open question whether these BH–MS systems come from primordial or dynamically assembled binaries. In the case of a dynamical origin it is also unknown whether the stellar companion captured the BH or its progenitor. In these regards, the models offer us a novel way to look for BH–MS binaries in simulated clusters and identify possible properties of BH–MS binaries formed through different channels. Since the cluster database is relatively small and limited to a single metallicity, we cannot perform a comprehensive comparison between observed and simulated BH–MS binaries. Nonetheless, it is illustrative to qualitatively compare the properties of BH–MS binaries formed in models and the observed one. For example, models permit us to dissect the population of BH–MS binaries into those forming inside the cluster, some of which have a lifetime much shorter than the cluster life and are disrupted via interactions with other cluster members, or that have been ejected from the cluster. Figure <ref> shows the component masses, period, and eccentricity of in-cluster and ejected BH–MS binaries. We assume that in-cluster binaries are those forming at any time over the simulated time, therefore the same binary or one or both components can appear multiple times in the plot. We see that in-cluster binaries are markedly different from ejected binaries. The latter can be divided in two sub-classes. The first sub-class exhibits a short period (P<0.1 day) and an almost null eccentricity, e ∼ 0. Binaries in this sub-class are characterised by a BH with mass m_ BH < 10 and a MS star with a mass in the 2-10 range. They originate from binary evolution, and, in particular, underwent a common envelope phase that shrank the semi-major axis and damped the eccentricity of the binary. The ejection engine of these binaries is a SN explosion. 
The second sub-class, instead, comprises heavier BHs (m_ BH = 10-100) and lighter MS stars (m_ MS < 1), and is characterised by eccentricities in the range e = 0.2-1, indicating that these binaries come from dynamical interactions sufficiently strong to eject the binary from the cluster. In-cluster BH–MS binaries can contain BHs and MS stars significantly heavier than the ejected binaries and are characterised by longer periods (P>10 d) compared to ejected binaries. Most in-cluster binaries with a period P≲ 10^3 d have zero eccentricity, whilst practically all those with a longer period have eccentricity >0.1 and up to extreme values. From Figures <ref>, it is evident that in-cluster binaries exhibit a peculiar distribution in the m_ BH-m_ MS, which suggests the existence of two sub-classes. We find that the first class is characterised by a companion with a mass m_ MS/m_ BH = k (m_ BH/1)^-1/2, with k=2-10. Most binaries falling in this class have a period shorter than 100 d, whilst the second class involves binaries with m_ BH>10 and m_ MS<5. An even more clear distinction is shown in Figure <ref>, where the MS-to-BH mass ratio is shown against the orbital period and eccentricity. This plot highlights four interesting peculiarities of in-cluster BH–MS binaries: * the vast majority of binaries with e<0.1 are primordial. Most of them are characterised by m_ MS/m_ BH > 0.3, heavy MS stars m_ MS > 1 M_⊙, and periods below P < 100 d; * primordial binaries with e > 0.1 have larger periods (P = 10^2-10^6 d), and similar mass ratio and MS mass as circular primordial binaries; * the vast majority of dynamically formed binaries have e>0.1 and periods in the range (P=10^2-10^9 d). They are generally characterised by a mass ratio m_ MS/m_ BH < 0.3, MS stars with a mass m_ MS < 10 and a BH with mass m_ BH = (10-100); * only a handful dynamically formed binaries have e < 0.1, and are all characterised by a period P=1-10 d. As shown in Figure <ref>, we find that the longer is the orbital period the larger the binary eccentricity, and almost all binaries with eccentricity e>0.1 have a period P>100 d, with a handful exceptions. Most binaries with a period shorter than P<100 d, instead, are primordial and involve a MS star heavier than m_ MS > 1. The difference between primordial and dynamical BH–MS binaries is further highlighted in Figure <ref>, which shows the component masses of these two classes of binaries. From the plot, it is apparent that dynamically assembled binaries dominate the region of the plane with m_ BH > 10 and m_ MS < 10. The observed BH–MS binaries have orbital properties quite different from our ejected binaries, especially if we consider the observed period and eccentricity. However, only the quiescent BH candidates in NGC3201 are still associated with a star cluster, whilst the origin of the other binaries is unknown. Two of the six observed binaries <cit.> have component masses compatible with our primordial binaries, one of them <cit.> falls in a range where only dynamically assembled binaries are present, and the three sources observed in the Galactic globular cluster NGC3201 have component masses compatible with both in-cluster and ejected binaries. In our models, the vast majority of ejected binaries have a primordial origin and their small period (P < 0.01 d) owes to mass transfer episodes. The few ejected binaries formed dynamically are characterised by a period P<1 d, still much shorter than observed values. 
Wider, and more numerous, ejected binaries could form in substantially looser or lighter star clusters. On the one hand, decreasing the cluster mass or density would enlarge the hard-binary separation and possibly increase the semi-major axis of ejected binaries <cit.>. On the other hand, a smaller cluster mass would correspond to a lower escape velocity and thus it is more likely for binaries to escape the parent cluster. In principle, MS–MS binaries ejected in the earliest phase of the cluster life could further contribute to the population of BH–MS binaries, but these binaries are removed from our simulations before they can further evolve. Nonetheless, we find that only two ejected MS–MS binaries have at least one component with mass above the threshold for BH formation, i.e. ∼ 18, thus ensuring that ejected MS–MS binaries do not contribute to the population of ejected BH–MS binaries. Among all observed data, the binaries observed in NGC3201 are probably the ones more suited for a comparison with our models, given the metallicity and mass of NGC3201. From the central and bottom panel of Figure <ref>, it is apparent that our in-cluster binaries have periods, eccentricities, and BH masses compatible with those observed in NGC3201. The fact that our models do not match well the companion mass may be due to NGC3201's age. In fact, this cluster is relatively old <cit.>, thus its population of binaries has likely been heavily processed over time, and most of its stellar population with super-solar mass already left the MS. Figure <ref> favours this interpretation. Note that both the mass of BHs and MS stars in dynamically formed BH–MS binaries tend to be smaller compared to primordial binaries. As the BH-burning process proceeds, the average BH mass will keep decreasing, while stellar evolution processes will deplete the high-end tail of the MS mass distribution, possibly favouring the formation of BH–MS binaries in the region populated by NGC3201 sources. §.§ Black hole subsystem In all clusters, the segregation time is generally shorter than the stellar evolution timescale of massive stars, therefore massive stars sink to the cluster centre before evolving to BHs. This implies a possible enhancement of the probability for stellar mergers and star–BH collisions. Given the short segregation times, BHs dominate the dynamics in the cluster core already after a time t=20-40 Myr, making up the 50-80% of the mass in the cluster core and around 10% of the mass within the half-mass radius, as shown in Figure <ref>. Given the amount of mass in BHs enclosed within the core radius, this length scale can be regarded as the BH sub-system scale radius <cit.>. A similar trend of the BH mass fraction inside R_ has been found also in recent simulations performed with the Monte Carlo code MOCCA <cit.> and the N-body code PeTar <cit.>, which both exploit similar stellar evolution recipes. Both the primordial binary evolution and the onset of three-body and multiple gravitational scattering favour the formation of binaries containing at least one BH. Figure <ref> shows the BH formation efficiency, defined as the ratio between the number of BHs inside the cluster core radius and the initial cluster mass, i.e. ϵ_ BH, BBH = N_ BH,BBH(<R_c)/M_ cl,0. We find that, regardless of the initial cluster mass, half-mass radius, or binary fraction, all models are characterised by ϵ_ BH≃ (0.8-2)× 10^-3^-1 for single BHs and ϵ_ BBH≃ (0.8-2)× 10^-4^-1 for binary BHs. 
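To translate these efficiencies into absolute numbers, a minimal estimate (with representative efficiencies taken near the lower end of the quoted ranges):

```python
def expected_bh_numbers(m_cluster_init, eps_bh=1.0e-3, eps_bbh=1.0e-4):
    """N_BH(<R_c) ~ eps * M_cl,0; the quoted ranges are (0.8-2)e-3 Msun^-1 for single BHs
    and (0.8-2)e-4 Msun^-1 for BHs in binaries, with 1e-3 / 1e-4 used as representative values."""
    return eps_bh * m_cluster_init, eps_bbh * m_cluster_init

if __name__ == "__main__":
    for m0 in (7e4, 5.9e5):  # cluster masses chosen to match the examples quoted in the text
        n_bh, n_bbh = expected_bh_numbers(m0)
        print(f"M_cl,0 = {m0:8.2e} Msun -> ~{n_bh:5.0f} single BHs, ~{n_bbh:4.0f} BHs in binaries inside R_c")
```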
As shown in the right panel of Figure <ref>, the BH formation efficiency slightly increases with the simulation time, although it is unclear whether this quantity saturates already at t_ sim/T_ rlx≳ 10. Note that our definition of ϵ_ BBH implies that a cluster with initial mass 7× 10^4(6× 10^5) contains around 7(60) BHs in a binary system after 10 relaxation times. It might seem trivial that ϵ is independent of the cluster initial conditions, as it suggests that it is just a consequence of the adopted mass function. However, the BH-burning mechanism <cit.>, by which the most massive BHs pair in binaries that first eject the lighter BHs from the cluster and then get themselves ejected via super-elastic binary-single and binary-binary scatterings, could significantly affect the population of BHs. This does not seem the case in the models. The small spread observed in the BH binary formation efficiency is related to the initial cluster half-mass radius and binary fraction, whilst the weak increase of ϵ_ over time is the result of dynamically formed binaries. Figures <ref>-<ref> show the cluster and BH subsystem density profiles at different times for three cluster models with N = (0.3-1)× 10^6 and R_ = 0.47-1.75 pc. The central density of BH subsystems attains values around ρ_ BHS≃ (10^4-10^5) pc^-3, i.e. values 10–100 times larger than the density of stars, whilst their scale radius is roughly R_ BHS≃ (0.5-1) pc in all models, corresponding to the radius at which the density contribution from the BHs and stars equal each other. Looking at the different panels it is possible to identify the signatures of the whole BH burning process as described in <cit.>. Firstly, BHs start forming and interacting, driving the formation of the BH subsystem and its subsequent expansion over a timescale t∼ T_ rlx. Secondly, dynamical BH interactions cause the steepening of the BHS density and the contraction of its structure, driven by BH ejections over a time 1<t/T_ rlx<5. Thirdly, the BH subsystem rebounces and expands again, reaching a seemingly stable structure, at least within the simulated time. Figure <ref> shows the BH mass distribution at different times for a model with N=1.2× 10^5 stars, R_ = 1.75 pc, and f_b=0.2. This plot shows all BHs inside the cluster at a given time, regardless whether they are components of a binary system or single BHs. For the sake of comparison, we added in the plots the overall BH mass distribution inferred by the LVC <cit.>. The plot highlights an initial phase in which the first BHs start to form, some of them falling in the upper-mass gap, but as the evolution proceeds new, lighter, BHs form while the most massive BHs are ejected via binary-single and binary-binary scatterings, as expected in the BH-burning scenario. Interestingly, our simulations suggest that the evolution of the cluster can naturally lead to the peak around 10 inferred from GW detections, mostly owing to stellar dynamics that crucially sculpts the BH population. Nonetheless, any comparison among our data, which show all BHs in the cluster, and LVC observations, which are representative of BH mergers, must be taken with a grain of salt. There are other potential explanations for the 10 peak, like isolated binary stellar evolution <cit.>, impact of primordial binary evolution in star clusters <cit.>, metal rich star clusters <cit.>. Hopefully, the new data acquired during the forthcoming four LVC observation run could help pinning down the impact of different processes on the BH mass distribution. 
We find that almost all BHs heavier than >30 are ejected from the simulated clusters reaching more than ∼ 15 relaxation times. To further highlight the BH burning process, we reconstruct the time evolution of the average BH mass, ⟨ m_ BH⟩, for all BHs enclosed within the half-mass radius. As shown in Figure <ref>, ⟨ m_ BH⟩ follows the same trend regardless of the cluster initial condition, namely: i) the most massive BHs form first and the average mass sets close to the peak allowed by the adopted stellar evolution model (35-40); ii) more numerous, lighter BHs start to form causing a rapid decrease of the average mass down to 15-20; iii) dynamical processes kick in and trigger BH ejection, leading to a secular decrease of the BH average mass down to ∼ 8-10 <cit.>. The similar ⟨ m_ BH⟩ time evolution observed in different models supports the idea that the BH burning process is substantially due to dynamics. This is further highlighted in Figure <ref>, which shows the BH average mass as a function of the time normalised to the cluster relaxation time. We find that at a time t > T_ rlx the average BH mass is well described by a simple relation: ⟨ m_(t) ⟩≃ m_,rlx - 4 Log(t/T_ rlx), where m_, rlx = 17.4±0.1. Although our models are not meant to be representative of any observed cluster, and although there are certainly many pathways leading to the same final cluster evolutionary stage, our results suggest that old Galactic globular clusters and massive clusters in the Small Magellanic Cloud could be harboring a population of relatively light BHs (see Figure <ref>). This would explain why observations of BHs in binary systems are generally characterised by masses m_ BH<20, relatively lighter than the typical value inferred for the population of merging BHs, i.e. m_ BH,GW≃ 30. §.§ Using scaling relations as input for semi-analytic codes It is well known that N-body simulations of star clusters require generous computational resources to enable an exploration of the phase space and to reach an appreciably long simulated time. The simulations make no exceptions, as they required in total approximately 2.2 million core hours. To overcome this problem, many works have proposed semi-analytic tools specifically devoted to study the evolution of compact objects in the last few years, and especially BH binary mergers <cit.>. One ingredient missing in some of these fast and accurate codes is a treatment of the co-evolution of the star cluster and the BH population, which may significantly affect the formation of merging compact objects <cit.>. The models could provide important fitting formulas to implement the evolution of under-filling cluster models in such semi-analytic tools. The overall evolution of star clusters can be described by simple expressions (Equations <ref> and <ref>). If the cluster initial mass and half-mass radius are known, the aforementioned relations enable an accurate description of its evolution, at least in the case of under-filling star cluster models. Moreover, our models offer also insights on the internal evolution of the cluster, providing, for example, details about the mass distribution of ejected single and double compact objects, and the properties of the central black hole subsystem. These ingredients can be easily implemented in semi-analytic tools to obtain a fast and accurate description of compact object dynamics in clusters too massive to be modelled with N-body models. 
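As an illustration of how these ingredients could enter such a semi-analytic tool, the sketch below wires together the relaxation time quoted earlier, the mass and radius evolution laws of Equations (<ref>)-(<ref>), and the late-time decline of the mean BH mass; the fit coefficients α_M, β_M, α_R, β_R must be read from Table <ref> and are left as inputs, γ_n is set to the multi-mass value quoted in the text, and a base-10 logarithm is assumed in the ⟨m_BH⟩ relation.

```python
import math

def t_rlx_myr(m_cl_msun, r_h_pc, m_mean=0.59, gamma_n=0.02):
    """Initial relaxation time as quoted in the text:
    T_rlx = 282 Myr / (<m_*> ln(gamma_n N)) * sqrt(M_cl / 1e5 Msun) * (R_h / 1 pc)^(3/2)."""
    n_stars = m_cl_msun / m_mean
    return 282.0 / (m_mean * math.log(gamma_n * n_stars)) * math.sqrt(m_cl_msun / 1e5) * r_h_pc**1.5

def cluster_mass(t_myr, m0, t_rlx, alpha_m, beta_m):
    """M_cl(t) = M_cl,0 [1 + alpha_M (t / T_rlx)^(-beta_M)]; alpha_M, beta_M from Table <ref>."""
    return m0 * (1.0 + alpha_m * (t_myr / t_rlx) ** (-beta_m))

def half_mass_radius(t_myr, r0, t_rlx, alpha_r, beta_r):
    """R_h(t) = R_h,0 [1 + t / (alpha_R T_rlx)]^beta_R; alpha_R, beta_R from Table <ref>."""
    return r0 * (1.0 + t_myr / (alpha_r * t_rlx)) ** beta_r

def mean_bh_mass(t_myr, t_rlx, m_rlx=17.4, slope=4.0):
    """<m_BH>(t) ~ m_BH,rlx - 4 Log(t / T_rlx) for t > T_rlx (base-10 logarithm assumed)."""
    return m_rlx if t_myr <= t_rlx else m_rlx - slope * math.log10(t_myr / t_rlx)

if __name__ == "__main__":
    # illustrative Dragon-II-like initial conditions (not a specific model of Table <ref>)
    m0, r0 = 3.5e5, 1.75
    trlx = t_rlx_myr(m0, r0)
    print(f"T_rlx ~ {trlx:.0f} Myr for M_cl,0 = {m0:.1e} Msun, R_h,0 = {r0} pc")
    for n_rlx in (1, 10, 30):
        print(f"t = {n_rlx:2d} T_rlx -> <m_BH> ~ {mean_bh_mass(n_rlx * trlx, trlx):.1f} Msun")
    # cluster_mass() and half_mass_radius() additionally need the fitted coefficients of
    # Table <ref>, which are not reproduced here.
```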
A simple implementation of the cluster evolution has been already developed by <cit.> in their B-POP code, showing that the inclusion of cluster mass loss and expansion causes a critical decrease of the probability of high-generation mergers in dense and massive star clusters <cit.>. § SUMMARY AND CONCLUSIONS In this work, we have presented the first results from the star cluster simulations: a suite of 19 direct N-body simulations, performed with the code, modelling the evolution of star clusters with up to 1 million stars and up to 33% of stars initially in a binary, over a timescale of ∼ 0.5-2 Gyr. These simulations contain up-to-date stellar evolution models, and for the first time a series of recipes to treat relativistic binaries in terms of merger remnant mass, spin, and post-merger recoil. Our models represent clusters initially under-filling their Roche lobe, and therefore their evolution can be considered quasi-isolated. The models considerably expand the portion of parameter space covered with full N-body simulations, opening the possibility to compare with large-N Monte Carlo models. Clearly, there is a vast number of parameters whose impact on the simulation results remains unclear. For example, adopting a sufficiently large value of the metallicity would imply the impossibility to form IMBHs from stellar collapse. However, we expect that our main conclusions about the properties of the BH population should not be severely affected by cluster metallicity, as they appear to be driven mostly by dynamics. We find that the amount of primordial binaries seems to poorly affect the overall evolution of the cluster and the evolution of the BH population; however the adopted initial orbital properties could become important when comparing our data with observations, like in the case of BH–MS binaries. For example, a different assumption on the initial mass-ratio distribution could lead to primordial binaries with final BH–MS component masses more similar to the observed one. However, discrepancies among observations and models could arise from a combination of different assumptions, making it hard to pinpoint the main source of uncertainty. Finally, our simulations model initially underfilling clusters, meaning that the impact of the Galactic field is almost negligible compared to clusters' internal dynamics. This choice enabled us to have a clean view at the impact of stellar interactions on the evolution of the whole cluster and its BH population, and incidentally lead to star cluster models that resemble observed clusters in term of mass and radius. Future simulations adopting filling or overfilling clusters may help understanding whether the evolution of BH subsystems is intrinsically linked to the overall evolution of the host cluster, for example in terms of mass-loss and expansion. The main outcomes of the models can be summarised as follows. * mass-loss and expansion of clusters is mostly determined by internal dynamics and can be described by simple analytical expressions, with parameters that weakly depend on the initial conditions. The binary fraction varies mildly over the simulated time, within 10-15% of its initial values. Nonetheless, stellar evolution and dynamics cause a progressive drop of the fraction of stars in binary systems for primary masses m_1>2 [Figures <ref>-<ref>]; * over a Gyr timescale, clusters contains around 200–700 binaries with at least one WD, whilst the number of binaries with a NS or a BH generally remains below 1–10 and 5–40, respectively. 
In general, binaries with at least one compact object are more numerous in clusters with a larger initial binary fraction, suggesting that most of these binaries have a primordial origin. Moreover, the denser the cluster is the smaller the number of binaries, owing to energetic dynamical interactions that disrupt binaries more efficiently [Figure <ref>]; * ejected binaries with one (SCOB) or two (DCOB) compact objects have different properties. DCOBs exhibit masses following a nearly flat distribution around 2-20 and a peak at m_ BH = 45, a peculiar mass-ratio distribution that peaks around q≳ 0.6, and a flat eccentricity distribution in the range e=0.5-1. SCOBs, most of which formed from primordial binaries, typically involve low-mass BHs (m_ BH = 3-10) and fairly massive MS stars (m_ ST = 1-10) [Figure <ref>]; * we find a substantial population of BH–MS binaries in models. Most BH–MS binaries forming inside the cluster have typical BH masses m_ BH>10, a companion star with mass m_ MS = 0.7-100, orbital periods >10 days, and span the entire eccentricity range. Ejected BH–MS binaries, instead, feature significantly smaller BH masses m_ BH < 10, shorter periods (<10 days), and are mostly contributed by primordial binaries. We find that the properties of the modelled binaries are compatible with some features of observed BH–MS binaries, especially those observed in the globular cluster NGC3201 [Figures <ref>-<ref>]; * in all models, BHs form a long-lived subsystem in the cluster centre already after 0.5 relaxation times, with a typical density 10-100 times higher than that of stars. The cluster core radius represents a good proxy of the BH subsystem size, as BHs make up 50-80% of the mass enclosed within this radius. We find that the ratio between the number of BHs inside the core radius and the bound cluster mass, which we refer to as formation efficiency, attains values of ϵ_ BH,BBH/ M_⊙=10^-3(10^-4), for single and binary BHs, respectively. This quantity is only mildly dependent on the initial conditions, suggesting that dynamical processes have a relatively minor effect on the overall BH population over the simulation time [Figures <ref> - <ref>]; * dynamics in the BH subsystem critically affects the BH mass spectrum, owing to the BH-burning process. The peak of the mass distribution generally shifts from initial values m_ BH,pk = 25 down to m_ BH,pk = 5-15, and the average mass steadily decreases after one relaxation time, following an identical evolution regardless of cluster properties [Figures <ref>-<ref>]. Our simulations suggest that dynamically old star clusters harbour in their centre a population of BHs whose amount scales linearly with the cluster bound mass. The older the cluster is, the smaller the peak of the BH mass spectrum and the average BH mass. § ACKNOWLEDGEMENTS The authors thank the referee for their insightful feedback, which helped us improving our analysis. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the code, and Vincenzo Ripepi for useful discussions and comments. This work benefited of the support from the Volkswagen Foundation Trilateral Partnership through project No. 97778 “Dynamical Mechanisms of Accretion in Galactic Nuclei” and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 138713538 – SFB 881 “The Milky Way System” (in particular subproject A08), and by the COST Action CA16104 “GWverse”. 
The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC). MAS acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda). AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9B870. PB acknowledges the support within the grant No. AP14869395 of the Science Committee of the Ministry of Science and Higher Education of Kazakhstan ("Triune model of Galactic center dynamical evolution on cosmological time scale"). The work of PB was also supported under the special program of the NRF of Ukraine Leading and Young Scientists Research Support - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346. RS thanks the Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits. MG was partially supported by the Polish National Science Center (NCN) through the grant UMO-2021/41/B/ST9/01191. GI, MM, and SR acknowledge financial support from the European Research Council for the ERC Consolidator grant DEMOBLACK, under contract no. 770017. § DATA AVAILABILITY The data from the runs of these simulations and their initial models will be made available upon reasonable request by the corresponding author. The Nbody6++GPU code is publicly available[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>]. The McLuster version used in this work will soon be available. A similar version is described in <cit.>.
http://arxiv.org/abs/2307.05443v1
20230711170926
Testing for Reviewer Anchoring in Peer Review: A Randomized Controlled Trial
[ "Ryan Liu", "Steven Jecmen", "Vincent Conitzer", "Fei Fang", "Nihar B. Shah" ]
cs.HC
[ "cs.HC", "cs.DL" ]
Testing for Reviewer Anchoring in Peer Review: A Randomized Controlled Trial ============================================================================= Objective: The peer review process is an important manifestation of human computation, with wide-ranging implications that are integral to the entirety of scientific research. Peer review frequently follows a process where reviewers first provide initial reviews, authors respond to these reviews, then reviewers update their reviews based on the authors' response. There is mixed evidence regarding whether this process is useful, including frequent anecdotal complaints that reviewers insufficiently update their scores. In this study, we aim to investigate whether reviewers anchor to their original scores when updating their reviews, which serves as a potential explanation for the lack of updates in reviewer scores. Design: We design a novel randomized controlled trial to test whether reviewers exhibit anchoring. In the experimental condition, participants initially see a flawed version of a paper that is later corrected, while in the control condition, participants only see the correct version. We take various measures to ensure that in the absence of anchoring, reviewers in the experimental group should revise their scores to be identically distributed to the scores from the control group. Furthermore, we construct the reviewed paper to maximize the difference between the flawed and corrected versions, and employ deception to hide the true experiment purpose. Results: Our randomized controlled trial consists of 108 researchers as participants. First, we find that our intervention was successful at creating a difference in perceived paper quality between the flawed and corrected versions: Using a permutation test with the Mann-Whitney U statistic, we find that the experimental group's initial scores are lower than the control group's scores in both the Evaluation category (Vargha-Delaney A=0.64, p=0.0096) and Overall score (A=0.59, p=0.058). Next, we test for anchoring by comparing the experimental group's revised scores with the control group's scores. We find no significant evidence of anchoring in either the Overall (A=0.50, p=0.61) or Evaluation category (A=0.49, p=0.61). The Mann-Whitney U represents the number of individual pairwise comparisons across groups in which the value from the specified group is stochastically greater, while the Vargha-Delaney A is the normalized version in [0,1]. § INTRODUCTION Peer review is a vital application of human computation, serving as the primary method of systematically evaluating scientific research. In order to make important decisions about research publication or funding, peer review systems rely on input from human reviewers that may be subject to various biases. Many peer-review processes involve reviewers submitting an initial review, following which they may be presented with additional information. This additional information frequently takes the form of a response from the authors. The reviewers are then requested to read the response and change their stated opinions and evaluations accordingly. In this work, we put this potential change under the microscope, investigating whether reviewers anchor to their original opinions. 
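As a small illustration of the two statistics defined above, the following toy computation evaluates U and A for two invented sets of review scores, using exactly the pairwise-comparison definition given in the abstract. The score values are made up for illustration and are not data from the study.

def mann_whitney_u(xs, ys):
    """Number of pairwise comparisons (x, y) in which x wins; ties count 1/2."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in xs for y in ys)

def vargha_delaney_a(xs, ys):
    """Normalised U in [0, 1]: probability that a random x beats a random y."""
    return mann_whitney_u(xs, ys) / (len(xs) * len(ys))

# Invented Overall scores on a 1-10 scale, for illustration only.
control_scores = [6, 7, 5, 8, 6]
initial_scores = [5, 6, 4, 6, 5]
print(mann_whitney_u(control_scores, initial_scores))   # 20.0
print(vargha_delaney_a(control_scores, initial_scores)) # 0.8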
For concreteness, we instantiate our study in the setting of conference peer review, a large human-centric system that has been widely adopted in computer science academia.[In computer science, leading conferences are typically rated at least on par with leading journals, with full paper submissions, competitive acceptance rates from 15-25%, and are often terminal venues for publication.] Across conference peer review, the author response mechanism is termed the “rebuttal stage”, placed between the initial reviews and final review score decisions and are an opportunity for the author(s) to provide additional information or arguments in response to the initial reviews.[Depending on the specific review setting, there may also be alternative forms of information made available to the reviewer, such as the evaluations of other reviewers. In this work, we focus on author rebuttals due to its widespread use and frequently-raised questions about its efficacy.] In computer science conferences, rebuttal stages are a widely adopted practice, with a large number of recent conferences having instituted such periods <cit.>. Despite its pervasiveness, there is so far mixed evidence regarding the usefulness of rebuttals. A program chair of the NAACL 2013 conference described the rebuttal phase as “useless, except insofar as it can be cathartic to authors and thereby provide some small psychological benefit” <cit.>. A study on the NeurIPS 2016 conference found that only 4180 of 12154 (34.4%) reviews had reviewers participate in the discussion after the rebuttal, and only 1193 (9.8%) of reviews subsequently changed in score <cit.>. Furthermore, adjustments in reviewer scores do not necessarily affect paper decisions—in the ACL 2018 conference, 13% of reviewer scores changed after rebuttals, but the amount of papers whose acceptances were likely affected was only 6.6% <cit.>. In addition, authors from various conferences have shared vast amounts of anecdotes on social media regarding the limited impact of their rebuttal statements on reviewer evaluations, including cases where they had written a strong rebuttal but reviewers did not respond to it in a fair and reasonable way <cit.>. <cit.> find that in the natural language processing community, Twitter posts drastically spike both during the rebuttal phase and at acceptance notifications (corresponding to when authors create their rebuttals and when they see the results after rebuttals), with these tweets often including bitter complaints and reform suggestions. One potential explanation behind the limited effect of the rebuttal stage on overall acceptances is that, due to anchoring, reviewers are simply not changing their scores as much as they should. Anchoring <cit.> is formally defined as the bias where people who make an estimate by starting from an initial value and then adjusting it to yield their answer typically make insufficiently small adjustments. Anchoring effects have been found in many applications, including responses to factual questions, probability estimates, legal judgments, purchasing decisions, future forecasting, negotiation resolutions, and judgements of self-efficacy <cit.>. However, despite the high stakes of peer review, anchoring has not yet been studied in the context of conferences and the rebuttal process. Research question In this paper, we test for the existence of anchoring in reviewers to verify whether reviewers are biased in a systematic manner. 
Our research question compares the following two scenarios in which a reviewer evaluates an academic paper. * Scenario A: The reviewer evaluates the paper's quality and provides a set of numeric scores (termed initial scores). The reviewer is then presented with additional evidence proving that their initial evaluation was mistaken. Subsequently, the reviewer optionally adjusts their previous scores to new values (termed revised scores). * Scenario B: The reviewer is simultaneously presented with the same paper and the additional evidence from the previous scenario. They then provide a numeric evaluation of the paper's quality (termed control scores). Here, scenario A is a situation that may occur in a typical rebuttal process. Scenario B is a counterfactual where the additional evidence of scenario A is incorporated into the paper and presented to the reviewer during their initial reading of the paper. If anchoring is present in the rebuttal process, reviewers' revised scores in scenario A would remain closer to their lower initial scores, and not be identical to the scores they would have given if they had been in scenario B. In aggregate, this would lead to a muted change in acceptances and a less effective rebuttal process. Altogether, we study the following research question: Are the revised scores given by reviewers when placed in scenario A lower than the control scores that those reviewers would have given if they had been placed in scenario B? We hypothesize that, in line with the existing literature on anchoring, reviewers in scenario A will anchor to their initial review scores, causing their revised scores to be lower than the control scores they would have given if they had been in scenario B. Our contributions To answer the research question, we designed and conducted a study to analyze the reviewer anchoring effect. * We recruited 108 participants who have recently published in a computer science-related field and are currently pursuing or have completed their PhD, and randomly assigned them to the control or experimental group. Each participant was placed in the role of a reviewer in a mock conference setting and was asked to review one paper. * We constructed a fake paper for participants to review, and showed different versions of the paper to the different groups. The control group was given a paper with an animated GIF graphic (shown in Figure <ref>) that contains the main evaluation results of the paper's proposed framework, while the experimental group was instead given a frozen frame of the GIF (Figure <ref>) that showed a much weaker result. After experimental group participants completed their review, they were deceived that the GIF was frozen as the result of a technical error, and were shown the proper animated GIF, upon which they were given the opportunity to revise their scores. Our experiment was carefully designed to avoid several confounders and challenges in simulating an anchoring effect under the rebuttal setting, which we detail in Section <ref>. * For the paper, each reviewer was asked to provide an overall score, five category scores, and text comments justifying each category score. We collected this data once from the control group (control scores) and twice from the experimental group (initial and revised scores). We also collected participant data such as self-reported confidence, PhD year and institution. The de-identified data and analysis code are available on GitHub at <https://github.com/theryanl/ReviewerAnchoring>. 
* In our analysis, we first checked whether our GIF manipulation created a difference in reviewer ratings. We compared the initial scores and control scores, in both the Overall rating and the Evaluation category (which directly corresponds to the aspect of the paper we manipulated). We conducted a one-sided permutation test with the Mann-Whitney U statistic and measured the effect size in terms of the Vargha-Delaney A <cit.>, representing the probability that a randomly-chosen control score is greater than a randomly-chosen experimental score (breaking ties uniformly at random). We found that the initial scores were lower than the control scores in both the Evaluation category (effect size =0.64, p=0.0096) and Overall scores (effect size =0.59, p=0.058), with moderate effect sizes. Thus, our experimental setup successfully introduced a difference in paper quality that enabled our test for anchoring. To test for the anchoring effect, we compared the revised scores with the control scores using a one-sided permutation test with the Mann-Whitney U statistic. We did not find significant evidence of reviewer anchoring in either the Overall scores (effect size =0.50, p=0.61) or Evaluation category scores (effect size =0.49, p=0.61). Although our experiment imitates a specific rebuttal process in conference peer review, we take the first step in extending the literature on anchoring bias to the academic peer review setting, where individual expertise and knowledge may interact differently with human biases. To our knowledge, this is the first randomized controlled trial on anchoring in peer review. Our work could potentially be informative for similar academic settings, such as anchoring in reviewer discussion phases and longer-term author feedback processes. In the following sections, we give a more comprehensive view on our work. In Section <ref>, we give context to how our work fits into the broader literature on conference peer review and human biases. In Section <ref>, we detail our experimental design, data collection, and analysis methods, and describe the various challenges that our design addresses. In Section <ref>, we report the results for our analyses. In Section <ref>, we present the takeaways and discuss the limitations for our current work, and propose directions for future research. § RELATED WORK In this section, we give a brief outline of the work done in several areas: Research done to improve the conference peer review process, studies on cognitive biases in academic reviewers, sources relating to the rebuttal process in particular, and psychology literature regarding the anchoring bias. §.§ Conference peer review Conference peer review has been an increasingly active area of research due to the need for automated and scalable solutions, especially in the field of computer science <cit.>. Work has focused on improving the quality of reviewer assignments <cit.>, providing robustness to malicious behavior <cit.>, and addressing issues of miscalibration <cit.> and subjectivity <cit.> between reviewers. Of particular relevance is the literature investigating cognitive biases in reviewers. These include studies on confirmation bias <cit.>, commensuration bias <cit.>, the effects of revealing author identities to reviewers <cit.>, reviewer herding <cit.>, resubmission bias <cit.>, citation bias <cit.>, and others <cit.>. Other works propose methodology for detecting such biases <cit.>. 
Research has also focused on the reviewer discussion phase of peer review, which has some similarities to the rebuttal process we study. Most peer review processes include a reviewer discussion phase after initial reviews are submitted, where reviewers can read and respond to each others' reviews. Similar to the author rebuttal process, reviewers are allowed to update their reviews after receiving this new information. Several studies <cit.> on reviewer discussions in grant proposal reviews have found that disagreement between reviewers greatly decreases after discussion, indicating that reviewers do update their scores to reach consensus. In one experiment <cit.>, 47% of reviewers updated their review scores after being shown scores from other fictitious reviewers. <cit.> conducted a randomized controlled trial in the ICML conference to investigate the existence of herding in reviewer discussions, but found no evidence for this effect. While these studies provide insights into how reviewers update their opinions, the present work focuses specifically on anchoring in the rebuttal process. §.§ Rebuttal processes Many conference organizers have analyzed the rebuttal process within their own conferences, and the common finding is that rebuttals only make a meaningful difference to a small fraction of submissions. Out of the 2273 rebuttals at CHI 2020, 931 (41%) did not result in a mean score change, 183 (8%) resulted in an absolute mean score change of 0.5 or more, and only 6 (0.3%) saw the mean score change by 1 or more <cit.>. In ICML 2020, only 43% of reviewers updated their review in response to author rebuttals <cit.>. In ACL 2018, 13% of review scores changed after rebuttals, affecting 26.9% of all papers, but only 6.6% of papers were likely impacted in terms of acceptance <cit.>. At the same venue, though author responses had a marginal but statistically significant influence on final scores, a reviewer's final score was largely determined by their initial score and distances to scores given by other reviewers <cit.>. Despite these statistics, there is overwhelming support for the rebuttal stage from the research community. A set of surveys from PLDI 2015 <cit.> found that authors strongly value the rebuttal process; 96% of authors agreed (with 88% strongly agreeing) that they should be provided the opportunity to rebut reviews. Meanwhile, only 44% of authors agreed to the statement that their reviews were constructive and professional, and only 41% of authors agreed that their reviewers had sufficient expertise. <cit.> found that both the rebuttal stage and the acceptance results after rebuttals yield large increases in the number of tweets in the NLP research community, often including bitter complaints and reform suggestions. In an author survey for IEEE S&P 2017 <cit.>, which did not have a rebuttal phase, approximately 30% of less experienced and 20% of experienced authors felt like they could have convinced their reviewers to accept their paper if they were given an opportunity for a rebuttal. Together, these results send the message that authors are often dissatisfied with their reviews, and that they strongly value the rebuttal mechanism as a method to address bad reviewing. §.§ Anchoring bias Anchoring (more specifically, the anchor-and-adjust hypothesis) was initially described by <cit.>, who defined it as the effect where people who make an estimate by starting from an initial value and then adjusting it to yield their answer typically make insufficiently small adjustments. 
The initial value can be irrelevant to the question asked, and can also be a partial computation by the person themselves. One basis to interpret this behavior  <cit.> is to view it as a cognitive shortcut: to reduce the mental strain of incorporating new evidence, individuals take their starting estimate and integrate new information in a naive, insufficient way. The anchoring effect has been shown to be present in a variety of domains and applications <cit.>. However, to our knowledge, our study is the first randomized controlled trial to analyze whether reviewers exhibit anchoring behaviors in peer review. § METHODS In this section, we describe the experiment we conducted and the analysis methods we employed to investigate the research question specified in Section <ref>. We first define the experimental procedure along with associated justifications, and then describe participant recruitment and data collection. Lastly, we describe the analysis we performed on the data. Our research question and study design were pre-registered at <https://aspredicted.org/W94_GD3>. This experiment was approved by the Carnegie Mellon University Institutional Review Board (Federalwide Assurance No: FWA00004206, IRB Registration No: IRB00000603). §.§ Experiment design In this subsection, we first describe the challenges inherent to this problem setting before concretely defining the experimental procedure. We then articulate how our key design choices allow us to surmount these challenges. §.§.§ Challenges for the design First and foremost, our hypothesis cannot be tested with an experiment in a real conference environment as it is impossible to control the quality of papers and the strength of rebuttals. Thus, we carefully designed an environment for our experiment that simulates a real conference. In designing our experiment and simulated environment, we address four main challenges: * Clarity and objectivity of the quality of rebuttal. In a real conference environment, the impact of a rebuttal argument on its paper's quality is often subjective. This makes it hard to distinguish between an anchoring effect and a genuine belief that the rebuttal was weak. In our experiment, the rebuttal must clearly and objectively improve the quality of the paper. Furthermore, the participants chosen need to be able to detect this improvement. Lastly, the rebuttal should be meaningful no matter what participants write in their initial review. * Addressing “” confounder. When reviewing, reviewers find and comment about mistakes in the submission that are important to the quality of the paper. Even when authors address these mistakes, if these mistakes were influential enough in the first place, reviewers may choose to take them into account and penalize the authors by giving a lower score. In this study, we explicitly choose to focus on anchoring with respect to reviewer opinions about the paper itself and not their opinions about the authors. As such, we label this phenomenon as the confounder, and consider it to be distinct from the anchoring effect in our research question. In our experiment, we want to account for this confounder, and separate its effects from the anchoring bias. * Equality of the experimental and control experiences. In the experiment, we want to compare between an experimental group, which sees a rebuttal and adjusts their scores, and a control group, which gives the ground truth scores that the experimental group should ideally adjust to. 
In order to make a meaningful comparison between groups, we want the control group's paper to be equivalent to the experimental group's initial paper combined with the rebuttal. In the traditional conference form, this is paradoxical to recreate; rebuttals are constructed to directly address initial reviews, but the control group cannot give initial reviews without being potentially subjected to anchoring bias themselves. * Participant obliviousness to true purpose of study. Since anchoring would usually be unnoticed by reviewers themselves, it is important to replicate this condition in the experiment. Informing participants of the true purpose of the study could potentially change their behavior according to the demand characteristics effect <cit.>. In our experiment, we need to conceal the purpose of the study and make it such that participants do not suspect that the study concerns reviewer anchoring. Addressing challenge 1 enables us to measure an anchoring effect if it exists, while addressing challenges 2-4 ensure that in the absence of an anchoring effect, the ratings received from the control and experimental groups should be equivalent. These challenges are very tricky to simultaneously address. For example, consider a simple experimental design in which reviewers are randomly assigned to either a high-quality or a low-quality version of a paper; then, after the reviews, experimenters construct a rebuttal to address the points raised in the review. The criticisms raised by the reviewers could concern naturally subjective topics such as its significance. In these cases, we would not be able to refute the reviewer with an objective response in the rebuttal and would struggle to distinguish reviewer anchoring from genuine subjective beliefs (challenge 1). Since the errors in the low-quality paper are due to mistakes by the authors, we would not be able to distinguish between reviewers exhibiting anchoring and reviewers penalizing the author mistakes (challenge 2). Even for the same version of the paper, the criticisms raised by reviewers will likely be widely varied in topic. Thus, if the same rebuttals are used for all reviews, the rebuttals may not match the concerns in each review (challenge 1). Alternatively, if the experimenter generates individualized rebuttals for each review, we cannot guarantee that the post-rebuttal version of the low-quality paper has equivalent quality to the high-quality paper (challenge 3). Finally, if the experiment places significant focus on the rebuttal, participants may suspect the true purpose of the study and modify their behavior accordingly (challenge 4). §.§.§ Experimental procedure In this subsection, we present our experimental procedure, which addresses each of the aforementioned challenges. Experimental setting The experiment procedure consists of a 30-minute, 1-on-1 Zoom meeting with each participant. Each participant takes the role of a reviewer for one paper within a simulated peer review process, and all participants review the same paper. A snapshot of the paper reviewed is provided in Figure <ref>. Participants are falsely told that the purpose of the study is to analyze the effect of new types of media (such as animations) on reviews, and are informed that the paper should be reviewed as a submission to an application-focused track of a large AI conference. Participants are given a reviewer form constructed based on the reviewer guidelines in the AAAI 2020 <cit.> and NeurIPS 2022 <cit.> conferences. 
The reviewer form contains scores in five sub-categories {Significance, Novelty, Soundness, Evaluation, Clarity}, one sentence justifications for these scores, as well as an Overall score and a confidence rating. Following the fictitious purpose of the study, the form also asked participants to “Please comment on the use of animated figures. (If you did not see this form of media, please answer `N/A')”. This question regarding animated figures plays an important part in our experimental intervention, which we detail in the following paragraph. After the review, we also record participants' institution, program, and year of study. Intervention The key difference between the conditions lies in the presentation of the main evaluation result of the paper. In the control group, this result is presented as an animated GIF graphic (shown in Figure <ref>), whereas the experimental group is initially presented a broken version of the GIF that is stuck on the first frame (Figure <ref>), which shows a significantly weaker result. Then, when experimental group participants are asked the aforementioned question to comment on animated figures, they would indicate that they had not seen any by answering `N/A'. After these participants submit their reviews, the experimenter deceives them by saying that their answer was unexpected and that they should have seen an animated figure. In parallel, the experimenter secretly changes the contents of the webpage displaying the paper such that all new visits see the animated GIF in the paper working properly. The experimenter then suggests the participants to refresh the website, upon which the animation loads and they are asked to revise their scores and comments accordingly. We performed a pilot study with 14 participants before full deployment to test for feasibility and practice the deception. For more details on the deception and score revision process, as well as how deviations from the procedure due to unexpected participant behavior were handled, we refer the reader to Appendix <ref>. All of the instructions, interfaces, and the paper contents are available at <https://github.com/theryanl/ReviewerAnchoring>. §.§.§ Design justification We now highlight some key aspects of our experimental design and how they address the aforementioned challenges. * Construction of the reviewed paper. In order to ensure that the change in quality between the initial and revised versions of the paper was clear and objective (challenge 1), we manually constructed a single paper for all participants to review. The initial and revised versions differed in the paper's numerical results, as this was an area where the paper's quality could be changed objectively. To make the change in quality clearer, the results between the initial and revised/control versions of the paper were very different, and the paper was constructed to emphasize this result. Additionally, we made the paper heavily application-focused and made its metrics easily interpretable such that our participants (who were at minimum computer science PhD students) would not need any specific technical background to interpret the results. * Technical error in displaying the GIF. In the experimental group, the issue in the initial version of the paper was presented as the result of a technical error (the frozen GIF). Since the error was clearly not attributable to the authors, reviewers could not reasonably justify reflecting the error in their scores, which allowed us to circumvent the confounder (challenge 2). 
Additionally, the frozen GIF issue in the initial paper could be corrected for all participants regardless of the specifics of their review. Thus, we were able to ensure that the change seen by the experimental group was both relevant and identical across participants (challenge 1), while the changed paper was also equal to the paper reviewed by the control group (challenge 3). * Deceptive experimental purpose. We created the alternate experimental purpose, “To study the effect of new types of media on reviews”, to accomplish three objectives. First, we were able to justify the perceived experimental procedure without mentioning anchoring to participants (challenge 4). Second, we enabled the natural use of animated GIFs in the paper, while not raising suspicion in the case where no GIF was seen. Third, we were able to naturally include the question asking for comments on the use of animated figures. On one hand, this enabled the experimenter to easily convince participants that there was a technical error by citing their answer. On the other hand, it allowed for the experimenter to naturally ask the participant to refresh the page, allowing the change in the paper to be shown immediately after. Participants were debriefed about the deception and true purpose of the experiment immediately after the study. §.§ Participation and data collection We recruited 108 participants, who were separated at random into control and experimental groups and were unaware of their assignment. Participants were either PhD students or PhDs with at least one publication in a computer science-related field in the last 5 years (see Table <ref>). Participants were recruited across nine research universities in the United States through various methods including physical posters, university mailing lists, and social media posts (see Appendix <ref>). We conducted a power analysis to determine the target number of participants (see Appendix <ref>). As a large fraction of reviewers in computer science conferences are PhD students (e.g., 33% in the NeurIPS 2016 conference; ), our participant pool is fairly representative of the conference reviewer population we aim to study. For each participant, we gathered the following data: * Overall scores on a 1-10 scale. * Category scores in {Significance, Novelty, Soundness, Evaluation, Clarity} on a 1-4 scale and 1-sentence comments justifying each. * Confidence in their evaluation on a 1-5 scale. * Comments on the hyperlinks and animated figures. * Participant-specific information: Institution, program and (if PhD student) year. The score categories and scales were modeled after those of NeurIPS and AAAI, two of the largest annual computer science conferences. In the experimental group, participants were given a chance to revise all review information after seeing the figure change. In this case, both initial and revised versions were recorded. This resulted in the collection of 3 different sets of data: scores from the control group, initial scores from the experimental group, and revised scores from the experimental group. After the study, we asked participants a few questions to determine the effectiveness of the deception and ensure that they were oblivious to the true study purpose (i.e., challenge 4 in Section <ref>). Before debriefing participants, we asked them if they suspected that the study featured deception; if they answered affirmatively, we asked them to describe what they believed the true study purpose was. 
If they were able to detect that we deceived them on the study purpose and specifically identify that the true purpose was about re-reviewing or rebuttals, we would exclude them from the study. Along with this, we also included two trivial exclusion criteria: (i) if participants do not consent to their data being collected for the true study purpose, and (ii) if participants do not finish the study. For reference, participants were compensated $20 for participation in the study, and were allowed to withdraw at any time for partial ($10-$15) compensation. No participants withdrew or were excluded due to these criteria (or for any other reason), demonstrating the effectiveness of the deception in our experiment design. §.§ Analysis We first performed a preliminary test of the validity of our experimental setup by comparing the initial scores provided by the experimental group with the scores provided by the control group. If our experimental setup was successful at inducing a perceived difference in paper quality, we should see that the initial experimental scores are generally lower than the control scores. To compare the distributions of these scores, we performed a non-parametric test of the null hypothesis that the control and initial scores have the same distribution. Specifically, we conducted a one-sided permutation test (with 100000 permutations) with the Mann-Whitney U statistic against the alternative hypothesis that the distribution of the control scores C is stochastically greater than the distribution of the initial scores I. The test statistic is U_CI = ∑_{C_i ∈ C}∑_{I_j ∈ I} S(C_i, I_j), where S(a, b) = 1 if a > b, 1/2 if a = b, and 0 if a < b. We performed two tests between these groups, comparing both the Overall scores and the Evaluation category scores. We chose to analyze the Evaluation category, defined as “a score for how its evidence supports its conclusions […]”, as we expected our experimental manipulation to have the greatest effect in this category. Across the two tests, we controlled the false discovery rate using the Benjamini-Hochberg correction under the assumption that the test statistics are positively dependent <cit.>, and the p-values we report are adjusted for this correction <cit.>. As effect sizes, we also report point estimates of the Vargha-Delaney A statistic <cit.>, computed as A = U_CI/(|C||I|), along with 95% bootstrapped confidence intervals (using 100000 samples). Our primary analysis aims to detect anchoring in reviewers. To test for the anchoring effect, we compared the revised scores R provided by the experimental group with the scores provided by the control group. For this, we performed a non-parametric test of the null hypothesis that the control and revised scores have the same distribution. We again used a one-sided permutation test with the Mann-Whitney U statistic against the alternative hypothesis that the distribution of the control scores is stochastically greater than the distribution of the revised experimental scores. The test statistic is U_CR = ∑_{C_i ∈ C}∑_{R_j ∈ R} S(C_i, R_j). We performed two tests to compare both the Overall scores and the Evaluation category scores, and again controlled the false discovery rate at α = 0.05 across the two tests using the Benjamini-Hochberg correction (again assuming positive dependence). We report the Vargha-Delaney A statistic as the effect size, with estimates computed as A = U_CR/(|C||R|). As stated in Section <ref>, our research question and study design were pre-registered. 
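A minimal sketch of this testing procedure is shown below in Python. It implements the one-sided permutation test with the Mann-Whitney U statistic, the Vargha-Delaney A point estimate, and the Benjamini-Hochberg adjustment over the two tests. The score vectors are placeholders rather than the collected data, and the released analysis code in the repository linked above remains the authoritative implementation; details such as the add-one smoothing of the permutation p-value are choices of this sketch, not necessarily those of the study.

import numpy as np

rng = np.random.default_rng(0)

def u_stat(xs, ys):
    """Mann-Whitney U: number of pairwise comparisons won by xs over ys,
    with ties counted as 1/2."""
    x = np.asarray(xs, dtype=float)[:, None]
    y = np.asarray(ys, dtype=float)[None, :]
    return float(np.sum((x > y) + 0.5 * (x == y)))

def one_sided_perm_test(control, experimental, n_perm=100_000):
    """Permutation p-value for H1: control scores are stochastically greater
    than experimental scores; also returns the Vargha-Delaney A estimate."""
    control = np.asarray(control, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    observed = u_stat(control, experimental)
    pooled = np.concatenate([control, experimental])
    n_c = len(control)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if u_stat(perm[:n_c], perm[n_c:]) >= observed:
            exceed += 1
    p_value = (exceed + 1) / (n_perm + 1)      # add-one smoothed p-value
    a_estimate = observed / (len(control) * len(experimental))
    return a_estimate, p_value

def benjamini_hochberg(p_values):
    """BH step-up adjusted p-values (valid under positive dependence)."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_min = 1.0
    for rank_from_top, idx in enumerate(order[::-1]):
        rank = m - rank_from_top                # 1-based rank of p[idx]
        running_min = min(running_min, p[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted

if __name__ == "__main__":
    # Placeholder scores (Overall on 1-10, Evaluation on 1-4); not study data.
    control_overall, revised_overall = [7, 6, 8, 6, 7], [6, 6, 7, 6, 7]
    control_eval, revised_eval = [3, 3, 4, 3, 3], [3, 2, 3, 3, 3]
    a_o, p_o = one_sided_perm_test(control_overall, revised_overall, n_perm=5000)
    a_e, p_e = one_sided_perm_test(control_eval, revised_eval, n_perm=5000)
    print("A (Overall, Evaluation):", round(a_o, 2), round(a_e, 2))
    print("BH-adjusted p-values:", benjamini_hochberg([p_o, p_e]))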
However, the analysis specified here differs from the analysis plan specified in the preregistration. In the preregistration, the test statistic was specified to be the difference between the mean scores of each group, and only the Overall scores were to be analyzed. However, as the scores are not necessarily on a linear scale (in fact, they were each given a description on the review form), the arithmetic means of the scores are not as meaningful. We also analyzed Evaluation category scores since our experimental design specifically manipulates the paper quality in this category. The tests of the validity of our experimental setup were also not preregistered. The preregistered original analysis is available at <https://aspredicted.org/W94_GD3>. Code for all analyses is provided at <https://github.com/theryanl/ReviewerAnchoring>. § RESULTS In this section, we describe the results of all analyses. §.§ Main results The results of our main hypothesis tests introduced in Section <ref> are reported in Table <ref>. Our comparisons between the initial scores and control scores to test the validity of our experimental setup resulted in effect sizes = 0.5857 with respect to the Overall scores (adjusted p=0.0575) and = 0.6375 with respect to the Evaluation category scores (adjusted p=0.0096). The effect sizes can be interpreted as the probability that a randomly chosen control score is greater than a randomly chosen initial score, breaking ties uniformly at random. An effect size of = 0.5 means that the two distributions are stochastically similar, and higher values of indicate the extent to which the distribution of control scores is stochastically greater. If our experimental setup successfully created a perceived difference in paper quality between the conditions, we expect the control scores to be higher than the initial scores (corresponding to effect sizes > 0.5). While both comparisons had moderate effect sizes, the comparison in the Evaluation category is significant at α=0.01, while the comparison in Overall scores is significant at α=0.1. This provides evidence that the paper quality was perceived as different between the two groups, although reviewers may not have reflected this difference as much in their Overall scores. Given that our experiment successfully constructed an environment where anchoring could occur, we turn to our analysis of whether anchoring did occur. Our comparisons between the revised scores and control scores, which test for the anchoring effect, resulted in effect sizes = 0.5048 with respect to the Overall scores (adjusted p=0.6064) and = 0.4863 with respect to the Evaluation category scores (adjusted p=0.6064). Recall that in the presence of an anchoring effect, we expect the control scores to be higher than the revised scores (corresponding to effect sizes > 0.5). Both statistics are insignificant at α=0.1 (and would have been insignificant even without Benjamini-Hochberg correction), indicating that our analysis failed to reject the null hypothesis that reviewers do not anchor. In other words, we did not find any evidence of anchoring bias. §.§ Supplemental results In addition to the main test statistic, we also performed the following informal supplemental analyses. As these analyses were exploratory and data-dependent, the observations we made in these analyses should be interpreted primarily as motivation for future work and not as support for statistically significant conclusions. 
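The effect sizes in these supplemental comparisons are reported together with 95% bootstrapped confidence intervals; the short sketch below shows one way such an interval can be computed (a simple percentile bootstrap over placeholder score vectors; the study's exact resampling scheme is not spelled out here, so treat this as illustrative only).

import numpy as np

rng = np.random.default_rng(1)

def vargha_delaney_a(xs, ys):
    """P(random x > random y), with ties counted as 1/2."""
    x = np.asarray(xs, dtype=float)[:, None]
    y = np.asarray(ys, dtype=float)[None, :]
    return float(np.mean((x > y) + 0.5 * (x == y)))

def bootstrap_ci_for_a(xs, ys, n_boot=100_000, alpha=0.05):
    """Point estimate and percentile bootstrap CI for the Vargha-Delaney A."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(xs, size=len(xs), replace=True)   # resample each group
        yb = rng.choice(ys, size=len(ys), replace=True)
        boot[b] = vargha_delaney_a(xb, yb)
    low, high = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return vargha_delaney_a(xs, ys), (low, high)

# Placeholder Overall scores for a control and a revised group; not study data.
control = [7, 6, 8, 6, 7, 5, 7, 6]
revised = [6, 6, 7, 6, 7, 5, 6, 7]
print(bootstrap_ci_for_a(control, revised, n_boot=10_000))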
Other category scores In Table <ref>, we show the results of additional comparisons conducted between revised scores and control scores. We compared scores for each of the categories on the review form apart from the Evaluation category analyzed earlier. We used the same methodology as in our main analysis to compute the effect sizes and 95% confidence intervals. Overall, these results do not indicate that other categories showed signs of anchoring. Confidence We additionally conducted comparisons to investigate whether anchoring was associated with the self-reported confidence of reviewers. In Table <ref>, we separate participants into two groups based on their self-reported confidence score, given on a scale of of 1-5: confident, where participants reported a score of 3 (“Fairly Confident”) or higher, and unconfident where they reported a score of 2 (“Willing to defend”) or lower. This threshold between confident and unconfident reviewers was chosen before the analysis based on the stated descriptions of the scores. In both the control and experimental groups, there were 41 confident reviewers and 13 unconfident reviewers. We conducted comparisons between the revised Overall scores from the experimental group and the Overall scores from the control group, and found that confident reviewers had the same mean revised and control scores, while unconfident reviewers had generally lower revised scores (indicated by the =0.63 effect size). This could indicate that unconfident reviewers are more likely to exhibit anchoring. However, since there were less unconfident reviewers, the uncertainty around this effect size is large. Seniority Next, we split participants into less experienced (“junior”) and more experienced (“senior”) reviewers, and conducted a comparison between the revised and control Overall scores for each subgroup in Table <ref>. Junior reviewers were PhD year 3 and under, whereas senior reviewers were PhD year 4 and over or beyond their PhD. This threshold was chosen before the analysis to produce the most equally-sized groups. We found similar results across the two subgroups, suggesting that our study results may not be dependent on the large amount of junior participants we have in comparison to real conference settings, though the uncertainty around the effect size is large. Counts of score changes Though our main analysis did not find evidence of anchoring (as shown in Section <ref>), we observe that, consistent with the findings from previous conference organizers in Section <ref>, a majority of the reviewers in the experimental group did not change their given scores (see Table <ref>). Out of 54 experimental group participants, 15 (28%) changed their Overall score, with nine participants raising their Overall scores by 1 and six raising their Overall scores by 2. Meanwhile, 25 (46%) participants changed one or more category scores, with 22 (41%) participants including a change in the Evaluation category. Other category scores were changed by only a few participants, which was expected as our manipulation primarily targeted the Evaluation category. In Table <ref>, we further break down the scores and comments updated by experimental group participants. § CONCLUSION AND DISCUSSION In this paper, we presented the design and results of a randomized controlled experiment to test for reviewer anchoring bias in conference peer review. 
Our design carefully addresses various challenges and confounders through the employment of animated media, deception, and an overarching cover story. Our main analysis did not find evidence of the existence of reviewer anchoring effects in peer review. In the absence of anchoring, the lack of change in scores and decisions observed in conference rebuttal phases may be due to other reasons, such as rebuttals having a relatively weak impact on the quality of the paper, or reviewers penalizing the paper for statements that were unclear or misunderstood in the initial submitted version. Another significant issue concerning the rebuttal process is the limited participation from reviewers <cit.>. Regardless of the prevalence of anchoring, it is essential for conferences to address this lack of active participation in the review processes. Our study had several limitations which we now discuss. One potential limitation was that our sample size could have resulted in insufficient statistical power to detect an effect. Although we estimated the sample size needed for our experiment using real conference data (see Appendix <ref>), the variance in the collected scores was higher than that of the data we used. This variance in scores could have been due to the lack of a unifying context or set of norms that conference reviewers in the same subfield would have. Thus, future studies can consider recruiting participants with expertise in one particular subfield to help increase the calibration between reviewers. Another possibility is that, even if anchoring is prevalent in real conference settings, the experimental conditions of our study failed to replicate the conference environment sufficiently to induce this same effect. For example, a common piece of feedback we received from participants in the study was that there was no context behind the result in the paper. Some participants expressed uncertainty in their review as to whether the weak initial result is significant, and retained this even for the larger corrected result. In contrast, reviewers in a real conference may have better knowledge to more accurately judge the significance of a paper's contributions. In future studies, the aspects of the paper that are updated during the rebuttal may need to be more clearly interpretable to the entire study population, which could also be resolved by recruiting participants with expertise in a particular subfield. Additionally, our experiment intentionally omits certain elements that are typically present in a real conference environment, some of which may be responsible for reviewer anchoring in the real setting. One such aspect is the social dynamic of reviewers. For example, if reviewers know that other reviewers and area chairs can observe their reviews, it is possible that they would choose to defend their initial position more due to concerns about their image in front of others. Similar social dynamics may be present when reviewers are asked to engage directly with authors in discussions. However, the social aspect may also introduce various confounding effects such as reviewers being influenced by the scores of other reviews <cit.>. We decided to forgo the capturing of these secondary social effects, instead leaving them to future work. Finally, there are other variations of our research question that future work could consider. Our supplemental analysis with respect to reviewer confidence suggests that the answer to our research question may not be homogeneous across the entire reviewer pool. 
Future work may want to design experiments that more carefully take this consideration into account by testing for effects within subpopulations. § ACKNOWLEDGEMENTS This work was supported in part by NSF CAREER Award 1942124 and NSF 2200410. apalike § APPENDIX §.§ Detailed deception and revision process In this section, we first outline the full deception and revision procedure starting from when experimental participants finish their initial review, and ending after they submit their revised review. Then, we list some common deviations to the expected procedure that happened in practice, as well as how we addressed them to return to the experimental procedure. §.§.§ Deception and Revision Process In the experimental procedure, when the participants view the paper on their browser, they see a frozen version of the GIF figure that contains the paper's main evaluation result. The GIF result (pictured in Figure <ref>) still fits into the context of the paper text due to its timeline structure, but its result is substantially weaker than the result depicted in the full GIF animation. Furthermore, there is no mention of the specific numerical values of the main result anywhere outside the GIF figure, and the text surrounding the figure is also intentionally vague to allow for both versions to avoid any inconsistencies between figure and text. Review questions are situated on a google form separate from the webpage. The first page consists of all the traditional review questions, while the second page contains background questions and comments, such as recording the institution and year of the participant. This second page also includes the question, “Please comment on the use of animated figures. (If you did not see this form of media, please answer `N/A'.)”. When the experimental group participants encounter this question, as they did not see any animated figures, they should answer “N/A”. Once they complete their review, the experimenter verbally announces that they are taking a quick look over their submitted review. Meanwhile, the experimenter is actually changing the contents of the webpage that hosts the paper to incorporate the working GIF instead. The experimenter then deceives the participant by acting confused about their “N/A” response to the animated figures question. They state that the participant should have seen an animated figure, and ask them if they could reload the page or try a different browser. Once the participant loads the paper again, they see the animated GIF and notify the experimenter that they had not previously seen the animation. The experimenter then asks them to edit their review based on the figure change, and provide them with the same google form link so that they can edit their response. Prior to their new response being submitted, the experimenter also downloads the participant's original response. We chose to have reviewers revise their initial responses because this parallels the situation in which reviewers revise their ratings after being given rebuttals. Often, conferences will have a reviewer's initial review available as either a reference or to directly edit over, which we mirrored in our experimental setup. We also ensured that reviewers are informed that they could edit any part of their review, not just the comment regarding animated media. §.§.§ Deviations to the expected procedure and prepared solutions In this section, we describe the responses we had in place in case any parts of the experiment did not go as according to the procedure. 
These originated from both our initial planning and our experiences in the pilot study. One common mistake that experimental participants made was that they mistakenly believed that the static figure shown initially was the “animated figure” referenced in the review form question, despite it not being animated. Consequently, they answered the animated figures question incorrectly by commenting on the static figure instead. This disrupted our attempts to notify them that they saw the wrong figure, which was normally done through this question. To address this, when we identified that participants were mistaken in this way, we instead asked them a follow-up question to clarify their answer to the animated figures question, such as “Could you elaborate a bit more on your answer?”. Then, when the participant explained their answer, the experimenter could act confused as the figure they described would not have an animated component, thereby transitioning back to the experimenter “noticing” that the participant had not seen an animated figure, and asking the participant to reload the page. Another somewhat frequent question from experimental group participants was whether they were supposed to see an animated figure. Here, we could not give them a yes or no answer, as “yes” would reveal that there was a mistake prematurely, while “no” would contradict ourselves later on. In this situation, we instead pretended that the experiment was double blind, stating that we also did not know if they were supposed to see an animated figure until they submitted their review. Then, after their reviews were submitted, we notified them that they were actually supposed to see an animated figure. To keep our control and experimental conditions consistent, we attempted to answer all questions from participants identically regardless of which group they were in. §.§ Power analysis To determine the target number of participants for our study, we performed a power analysis for our original pre-registered significance test. In the power analysis, we assumed that the control and revised Overall scores were distributed normally with two corresponding fixed variances. Since participants review the same paper, the variances were chosen by randomly sampling reviewer score variances across individual papers in ICLR 2022 <cit.>, with different values for each trial of the permutation test. We chose the ICLR 2022 conference due to its proximity to the fake paper's topic as a machine learning conference, as well as its open-source review score data that we could sample from. Overall scores in ICLR 2022 were also based on a 10-point scale, with an average of 3.85 reviewers per paper. We chose to sample two separate variance values for the control and revised scores as participants in different groups might have had different perceptions of the paper. Based on our analysis, we targeted a minimum of 100 participants, since this corresponded to an estimate that we would be able to detect a 0.25 difference in means between the control and revised scores (α=0.05, β=0.2). However, the variances we obtained during data collection were much higher than the estimate (see Table <ref>). In hindsight, we note two limitations of our initial variance estimate: * The scores we used were the post-rebuttal scores, as pre-rebuttal scores were not openly available. 
In reality, it may be the case that post-rebuttal scores are closer than pre-rebuttal scores due to rebuttals or reviewers being influenced by other reviewers' reviews (this behavior is discussed in Sections <ref> and <ref>). * The participants in our study may have had less homogeneous backgrounds than the typical set of reviewers for a paper. Though our participants were largely of the same age group and social environment, they came from many different subfields of computer science, and thus may have had differing impressions about the standards for an `accept' submission. This may also have contributed to an increased variance in scores. §.§ Participant recruitment We had four requirements for participants to join the study. First, participants were required to be either a current PhD student or have already obtained a PhD. Second, participants were required to have at least one publication in a computer science-related field within the last 5 years, up to the date of the study. Third, participants were required to over the age of 18. And finally, participants were required to be currently residing in the United States. Given these requirements, our participants were likely to be either current or future reviewers at computer science conferences: 33% of reviewers at the NeurIPS 2016 conference were PhD students <cit.>. We recruited participants through physical posters, emails to PhD-student mailing lists, social media posts, announcements to students in PhD-level courses, door-to-door recruitment at PhD offices, as well as word-of-mouth. These methods were performed to varying degrees (depending on physical limitations) at Carnegie Mellon University and 8 other research universities. Participants were given a QR code or link to a sign-up calendar, where they could select their own 30-minute meeting timeslot with the experimenter.
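As a rough illustration of the power analysis described above, the following sketch estimates detection power by simulation: control and revised Overall scores are drawn from normal distributions whose variances are sampled from a small pool, and a two-sided permutation test on the difference of means is applied. The variance pool, the number of Monte Carlo trials, and the test implementation are placeholder assumptions for illustration, not the values or code used in the study.

import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(x, y, n_perm=2000):
    # Two-sided permutation test on the difference of group means.
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

def estimated_power(n_per_arm, effect=0.25, variance_pool=(1.5, 2.0, 2.5),
                    n_trials=400, alpha=0.05):
    # Fraction of simulated experiments in which the 0.25 mean shift is detected.
    rejections = 0
    for _ in range(n_trials):
        # Mimic sampling per-paper reviewer score variances for each group.
        var_c, var_r = rng.choice(variance_pool, size=2)
        control = rng.normal(0.0, np.sqrt(var_c), n_per_arm)
        revised = rng.normal(effect, np.sqrt(var_r), n_per_arm)
        rejections += permutation_pvalue(control, revised) < alpha
    return rejections / n_trials

for n in (25, 50, 75):
    print(n, estimated_power(n))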
http://arxiv.org/abs/2307.04630v1
20230710151517
The NPU-MSXF Speech-to-Speech Translation System for IWSLT 2023 Speech-to-Speech Translation Task
[ "Kun Song", "Yi lei", "Peikun Chen", "Yiqing Cao", "Kun Wei", "Yongmao Zhang", "Lei Xie", "Ning Jiang", "Guoqing Zhao" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Kibble-Zurek Mechanism for Nonequilibrium Generation of Magnetic Monopoles in Spin Ices Gia-Wei Chern August 12, 2023 ========================================================================================== ^*Lei Xie is the corresponding author. This paper describes the NPU-MSXF system for the IWSLT 2023 speech-to-speech translation (S2ST) task which aims to translate from English speech of multi-source to Chinese speech. The system is built in a cascaded manner consisting of automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS). We make tremendous efforts to handle the challenging multi-source input. Specifically, to improve the robustness to multi-source speech input, we adopt various data augmentation strategies and a ROVER-based score fusion on multiple ASR model outputs. To better handle the noisy ASR transcripts, we introduce a three-stage fine-tuning strategy to improve translation accuracy. Finally, we build a TTS model with high naturalness and sound quality, which leverages a two-stage framework, using network bottleneck features as a robust intermediate representation for speaker timbre and linguistic content disentanglement. Based on the two-stage framework, pre-trained speaker embedding is leveraged as a condition to transfer the speaker timbre in the source English speech to the translated Chinese speech. Experimental results show that our system has high translation accuracy, speech naturalness, sound quality, and speaker similarity. Moreover, it shows good robustness to multi-source data. [Our submitted system ranks 1st in the S2ST task.] § INTRODUCTION In this paper, we describe NPU-MSXF team's cascaded speech-to-speech translation (S2ST) system submitted to the speech-to-speech (S2S) track[<https://iwslt.org/2023/s2s>] of the IWSLT 2023 evaluation campaign. The S2S track aims to build an offline system that realizes speech-to-speech translation from English to Chinese. Particularly, the track allows the use of large-scale data, including the data provided in this track as well as all training data from the offline track[<https://iwslt.org/2023/offline>] on speech-to-text translation task. Challengingly, the test set contains multi-source speech data, covering a variety of acoustic conditions and speaking styles, designed to examine the robustness of the S2ST system. Moreover, speaker identities conveyed in the diverse multi-source speech test data are unseen during training, which is called zero-shot S2ST and better meets the demands of real-world applications. Current mainstream S2ST models usually include cascaded and end-to-end systems. Cascaded S2ST systems, widely used in the speech-to-speech translation task <cit.>, usually contain three modules, i.e. automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS). Meanwhile, end-to-end (E2E) S2ST systems <cit.> have recently come to the stage by integrating the above modules into a unified model for directly synthesizing target language speech translated from the source language. E2E S2ST systems can effectively simplify the overall pipeline and alleviate possible error propagation. Cascaded S2ST systems may also alleviate the error propagation problem by leveraging the ASR outputs for MT model fine-tuning. Meanwhile, thanks to the individual training process of sub-modules, cascaded systems can make better use of large-scale text and speech data, which can significantly promote the performance of each module. 
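Concretely, the data flow of such a cascade can be written down in a few lines. The callable names below are placeholders rather than interfaces from our implementation, and the speaker-embedding argument anticipates the timbre-transfer design of the TTS module described later.

from typing import Callable
import numpy as np

def cascaded_s2st(audio_en: np.ndarray,
                  asr: Callable[[np.ndarray], str],
                  mt: Callable[[str], str],
                  speaker_encoder: Callable[[np.ndarray], np.ndarray],
                  tts: Callable[[str, np.ndarray], np.ndarray]) -> np.ndarray:
    # English speech -> Chinese speech, preserving the source speaker's timbre.
    # Each stage is trained separately, which is what lets a cascaded system
    # exploit large-scale text-only and speech-only corpora.
    text_en = asr(audio_en)              # robust ASR for multi-source input
    text_zh = mt(text_en)                # MT adapted to ASR-style transcripts
    spk_emb = speaker_encoder(audio_en)  # pre-trained speaker embedding
    return tts(text_zh, spk_emb)         # two-stage, BN-based TTS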
In this paper, we build a cascaded S2ST system aiming at English-to-Chinese speech translation with preserving the speaker timbre of the source English speech. The proposed system consists of Conformer-based <cit.> ASR models, a pretrain-finetune schema-based MT model <cit.>, and a VITS-based TTS model <cit.>. For ASR, model fusion and data augmentation strategies are adopted to improve the recognition accuracy and generalization ability of ASR with multi-source input. For MT, we use a three-stage fine-tuning process to adapt the translation model to better facilitate the output of ASR. Meanwhile, back translation and multi-fold verification strategies are adopted. Our TTS module is composed of a text-to-BN stage and a BN-to-speech stage, where speaker-independent neural bottleneck (BN) features are utilized as an intermediate representation bridging the two stages. Specifically, the BN-to-speech module, conditioned on speaker embedding extracted from the source speech, is to synthesize target language speech with preserving the speaker timbre. Combined with a pre-trained speaker encoder to provide speaker embeddings, the TTS model can be generalized to unseen speakers, who are not involved in the training process. Experimental results demonstrate the proposed S2ST system achieves good speech intelligibility, naturalness, sound quality, and speaker similarity. § AUTOMATIC SPEECH RECOGNITION Our ASR module employs multiple models for score fusion in the inference. Moreover, data augmentation is adopted during training to handle noisy multi-source speech. §.§ Model Structure Our system employs both Conformer <cit.> and E-Branchformer models <cit.> in our ASR module to address the diversity of the test set. Conformer sequentially combines convolution, self-attention, and feed-forward layers. The self-attention module serves to capture global contextual information from the input speech, while the convolution layer focuses on extracting local correlations. This model has demonstrated remarkable performance in ASR tasks with the ability to capture local and global information from input speech signals. E-Branchformer uses dedicated branches of convolution and self-attention based on the Conformer and applies efficient merging methods, in addition to stacking point-wise modules. E-Branchformer achieves state-of-the-art results in ASR. §.§ Data Augmentation Considering the diversity of the testing data, we leverage a variety of data augmentation strategies to expand the training data of our ASR system, including the following aspects. * Speed Perturbation: We notice that the testing set contains spontaneous speech such as conversations with various speaking speeds. So speed perturbation is adopted to improve the generalization ability of the proposed model. Speed perturbation is the process of changing the speed of an audio signal while preserving other information (including pitch) in the audio. We perturb the audio speech with a speed factor of 0.9, 1.0, and 1.1 to all the training data. Here speed factor refers to the ratio compared to the original speed of speech. * Pitch Shifting: Pitch shifting can effectively vary the speaker identities to increase data diversity. Specifically, we use SoX[<https://sox.sourceforge.net/>] audio manipulation tool to perturb the pitch in the range [-40, 40]. * Noise Augmentation: There are many cases with heavy background noise in the test set, including interfering speakers and music. 
However, the data set provided by the organizer is much cleaner than the test set, which makes it necessary to augment the training data by adding noises to improve the recognition performance. Since there is no noise set available, we create a noise set from the data provided. A statistical VAD <cit.> is used to cut the non-vocal and vocal segments from the data and the non-vocal segments with energy beyond a threshold comprise our noise set. We add the noise segments to the speech utterances with a signal-to-noise ratio ranging from 0 to 15 dB. * Audio Codec: Considering the test data come from multiple sources, we further adopt audio codec augmentation to the training data. Specifically, we use the FFmpeg[<https://ffmpeg.org/>] tool to convert the original audio to Opus format at [48, 96, 256] Kbps. * Spectrum Augmentation: To prevent the ASR model from over-fitting, we apply the SpecAugment method <cit.> to the input features during every mini-batch training. SpecAugment includes time warping, frequency channel masking, and time step masking, and we utilize all of these techniques during training. §.§ Model Fusion Since a single ASR model may overfit to a specific optimization direction during training, it cannot guarantee good recognition accuracy for the speech of various data distributions. To let the ASR model generalize better to the multi-source input, we adopt a model fusion strategy. Specifically, we train the Conformer and E-branchformer models introduced in Section 2.1 using the combination of the original and the augmented data. Each testing utterance is then transcribed by these different models, resulting in multiple outputs. Finally, ROVER <cit.> is adopted to align and vote with equal weights on the multiple outputs, resulting in the final ASR output. §.§ ASR Output Post-processing Given that the spontaneous speech in the test set contains frequent filler words such as "Uh" and "you know", it is necessary to address their impact on subsequent MT accuracy and TTS systems that rely on the ASR output. To mitigate this issue, we use a simple rule-based post-processing step to detect and eliminate these expressions from the ASR output. By doing so, we improve the accuracy of the downstream modules. § MACHINE TRANSLATION For the MT module, we first use a pre-trained language model as a basis for initialization and then employ various methods to further enhance translation accuracy. §.§ Pre-trained Language Model As pre-trained language models are considered part of the training data in the offline track and can be used in the S2ST track, we use the pre-trained mBART50 model for initializing our MT module. mBART50 <cit.> is a multilingual BART <cit.> model with 12 layers of encoder and decoder, which we believe will provide a solid basis for improving translation accuracy. §.§ Three-stage Fine-tuning Based on Curriculum Learning We perform fine-tuning on the pre-trained model to match the English-to-Chinese (En2Zh) translation task. There are substantial differences between the ASR outputs and the texts of MT data. First, ASR prediction results inevitably contain errors. Second, ASR outputs are normalized text without punctuation. Therefore, directly fine-tuning the pre-trained model with the MT data will cause a mismatch problem with the ASR output during inference. On the other hand, fine-tuning the model with the ASR outputs will cause difficulty in model coverage because of the difference between the ASR outputs and the MT data. 
Therefore, based on Curriculum Learning <cit.>, we adopt a three-stage fine-tuning strategy to mitigate such a mismatch. * Fine-tuning using the MT data: First, we use all the MT data to fine-tune the pre-trained model to improve the accuracy of the model in the En2Zh translation task. * Fine-tuning using the MT data in ASR transcription format: Second, we convert the English text in the MT data into the ASR transcription format. Then, we fine-tune the MT model using the converted data, which is closer to the actual text than the ASR recognition output. This approach can enhance the stability of the fine-tuning process, minimize the impact of ASR recognition issues on the translation model, and improve the model's ability to learn punctuation, thereby enhancing its robustness. * Fine-tuning using the ASR outputs: Third, we leverage GigaSpeech <cit.> to address the mismatch problem between the ASR outputs and the MT data. Specifically, we use the ASR module to transcribe the GigaSpeech training set and replace the corresponding transcriptions in GigaST <cit.> with the ASR transcriptions for translation model fine-tuning. This enables the MT model to adapt to ASR errors. §.§ Back Translation Following <cit.>, we adopt the back translation method to enhance the data and improve the robustness and generalization of the model. First, we train a Zh2En MT model to translate Chinese to English, using the same method employed for the En2Zh MT module. Next, we generate the corresponding English translations for the Chinese text of the translation data. Finally, we combine the back translation parallel corpus pairs with the real parallel pairs and train the MT model. §.§ Cross-validation We use 5-fold cross-validation <cit.> to improve the robustness of translation and reduce over-fitting. Firstly, we randomly divide the data into five equal parts and train five models on different datasets by using one of them as the validation set each time and combining the remaining four as the training set. After that, we integrate the predicted probability distributions from these five models to obtain the final predicted probability distribution for the next word during token generation for predicting the translation results. § TEXT-TO-SPEECH §.§ Overview Figure <ref> (a) shows the pipeline of the text-to-speech module in the proposed S2ST system. The TTS module is built on a BN-based two-stage architecture, which consists of a text-to-BN and a BN-to-speech procedure. The text-to-BN stage tends to generate BN features from the Chinese text translated by the MT module. The BN-to-speech stage produces 16KHz Chinese speech from the BN feature, conditioning on the speaker embedding of source speech. Given the translated Chinese speech which preserves the speaker timbre in the source English speech, an audio super-resolution model is further leveraged to convert the synthesized speech from 16KHz to 24KHz for higher speech fidelity. Building on the two-stage framework AdaVITS <cit.>, we employ bottleneck (BN) features as the intermediate representations in the two-stage TTS module. BN features, extracted from a multi-condition trained noise-robust ASR system, mainly represent the speaker-independent linguistic content. So BN can effectively disentangle the speaker timbre and the linguistic content information. In the text-to-BN stage, high-quality TTS data is adopted in the training phase to model the speaker-independent BN features with prosody information. 
In the BN-to-speech stage, both high-quality TTS data and low-quality ASR data should be involved during training to sufficiently model the speech of various speaker identities. Extracted from speech, BN features contain the duration and prosody information, which eliminates the need for text transcripts and prosody modeling. Instead, the BN-to-speech stage focuses on time-invariant information modeling, such as speaker timbre. As the goal of this work is to conduct zero-shot English-to-Chinese speech translation, we concentrate on the method to transfer the unseen speaker timbre of the source English speech to the synthesized Chinese speech through voice cloning <cit.>. To capture new speaker timbre during inference, the TTS module requires to model abundant various speakers during training, which relies on large-scale high-quality TTS data. Unfortunately, we are limited in the high-quality TTS data we can use in this task and must rely on additional data such as ASR to model the speaker timbre. However, this data is not suitable for TTS model training because the labels are inconsistent with TTS, and the prosody of the speakers is not as good as high-quality TTS data. Furthermore, we incorporate ASR data into the BN-to-speech training procedure by re-sampling all the training speech to 16kHz, which can not reach high-quality audio. Therefore, we utilize audio super-resolution techniques to upsample the synthesized 16KHz audio and convert it into higher sampling rate audio. §.§ Text-to-BN Our text-to-BN stage network in TTS is based on DelightfulTTS <cit.>, which employs a Conformer-based encoder, decoder, and a variance adapter for modeling duration and prosody. The model extends phoneme-level linguistic features to frame-level to guarantee the clarity and naturalness of speech in our system. §.§ BN-to-speech We build the BN-to-speech model based on VITS <cit.>, which is a mainstream end-to-end TTS model. VITS generates speech waveforms directly from the input textual information, rather than a conventional pipeline of using the combination of an acoustic model and a neural vocoder. The network of the BN-to-speech stage consists of a BN encoder, posterior encoder, decoder, flow, and speaker encoder. The monotonic alignment search (MAS) from the original VITS is removed since BN features contain the duration information. For achieving zero-shot voice cloning, an ECAPA-TDNN <cit.> speaker encoder is pre-trained to provide the speaker embedding as the condition of the synthesized speech. To avoid periodic signal prediction errors in the original HiFiGAN-based <cit.> decoder in VITS, which induces sound quality degradation, we follow VISinger2 <cit.> to adopt a decoder with the sine excitation signals. Since the VISinger2 decoder requires pitch information as input, we utilize a pitch predictor with a multi-layer Conv1D that predicts the speaker-dependent pitch from BN and speaker embedding. With the desired speaker embedding and corresponding BN features, the BN-to-speech module produces Chinese speech in the target timbre. §.§ Audio Super-resolution Following <cit.>, we use an upsampling network based vocoder to achieve audio super-resolution (16kHz→24kHz). During training, the 16KHz mel-spectrogram is used as the condition to predict the 24KHz audio in the audio super-resolution model. Specifically, we adopt the AISHELL-3 <cit.> dataset, composing the paired 16KHz and 24KHz speech data for model training. 
During inference, the high-quality 24kHz speech is produced for the mel-spectrogram of the 16KHz speech generated by the BN-to-speech model. Here DSPGAN <cit.> is adopted as our audio super-resolution model, which is a universal vocoder that ensures robustness and good sound quality without periodic signal errors. § DATA PREPARATION §.§ Datasets Following the constraint of data usage, the training dataset for the S2ST system is illustrated in Table <ref>. <https://github.com/SpeechTranslation/GigaS2S> §.§.§ ASR Data For the English ASR module in our proposed system, we use GigaSpeech, LibriSpeech, TED-LIUM v2&v3 as training data. For the ASR system used to extract BN features in TTS, we use text-to-speech data in AISHELL-3 and Chinese speech in GigaS2S, along with the corresponding Chinese text in GigaST, as the training set. Since the test set's MT output text is a mix of Chinese and English, including names of people and places, the TTS module needs to support both languages. Therefore, we also add the aforementioned English data to the training set. §.§.§ MT Data We use the text-parallel data including News Commentary and OpenSubtitles2018 as MT training set. Moreover, we also add the Chinese texts in GigaST and the English texts in GigaSpeech corresponding to the Chinese texts in GigaST to the training set. §.§.§ TTS Data We use AISHELL-3 as training data in Text-to-BN and audio super-resolution. For the pre-trained speaker encoder, we adopt LibriSpeech, which contains 1166 speakers, as the training data. For the BN-to-speech model, in addition to using AISHELL-3 which has 218 speakers, we also use LibriSpeech to meet the data amount and speaker number requirements of zero-shot TTS. §.§ Data Pre-processing §.§.§ ASR Data To prepare the ASR data, we pre-process all transcripts to remove audio-related tags. Next, we map the text to the corresponding byte-pair encoding (BPE) unit and count the number of BPE units in the ASR dictionary, which totals 5,000 units. For audio processing, we use a frame shift of 10ms and a frame length of 25ms and normalize all audio to 16KHz. §.§.§ MT Data For the MT data, we use the same tokenizer as mBART50 to perform sub-word segmentation for English and Chinese texts and to organize them into a format for neural network training. By doing so, we can maximize the benefits of initializing our translation model with mBART50 pre-trained model parameters. The mBART tokenizer mentioned above is a Unigram tokenizer. A Unigram model is a type of language model that considers each token to be independent of the tokens before it. What’s more, the tokenizer has a total of 250,054 word segmentations, supports word segmentation processing for English, Chinese, and other languages, and uses special tokens like <s>, </s>, and <unk>. §.§.§ TTS Data For AISHELL-3, we downsample it to 16KHz and 24KHz respectively as the TTS modeling target and the audio super-resolution modeling target. All other data is down-sampled to 16KHz. All data in TTS adopts 12.5ms frame shift and 50ms frame length. Speech Enhancement. Given the presence of substantial background noise in the test set, the discriminative power of speaker embeddings is significantly reduced, thereby impeding the performance of the TTS module. Furthermore, the ASR data incorporated during the training of the BN-to-speech model is also subject to background noise. Therefore, we employ a single-channel wiener filtering method  <cit.> to remove such noise from these data. 
Please note that we do not perform speech enhancement on the test set in the ASR module, because there is a mismatch between the denoised audio and which is used in ASR training, and denoising will reduce the speech recognition accuracy. §.§.§ Evaluation Data For all evaluations, we use the English-Chinese (En-Zh) development data divided by the organizer from GigaSpeech, GigaST and GigaS2S, including 5,715 parallel En-Zh audio segments, and their corresponding En-Zh texts. It is worth noting that the development data for evaluations has been removed from the training dataset. § EXPERIMENTS §.§ Experimental Setup All the models in our system are trained on 8 A100 GPUs and optimized with Adam <cit.>. ASR Module. All ASR models are implemented in ESPnet[<https://github.com/espnet/espnet>]. Both Conformer and E-Branchformer models employ an encoder with 17 layers and a feature dimension of 512, with 8 heads in the self-attention mechanism and an intermediate hidden dimension of 2048 for the FFN. In addition, we employ a 6-layer Transformer decoder with the same feature hidden dimension as the encoder. The E-Branchformer model uses a cgMLP with an intermediate hidden dimension of 3072. The total number of parameters for the Conformer and E-Branchformer model in Section 2.1 is 147.8M and 148.9M respectively. We train the models with batch size 32 sentences per GPU for 40 epochs, and set the learning rate to 0.0015, the warm-up step to 25K. For data augmentation, we conduct speed perturbation, pitch shifting, and audio codec on the original recordings. Spectrum augmentation and noise augmentation are used for on-the-fly model training. MT Module. All MT models are implemented in HuggingFace[<https://github.com/huggingface/transformers>]. Using MT data, we fine-tune the mBART-50 large model, which has 611M parameters, with a batch size of 32 sentences per GPU for 20 epochs. The learning rate is set to 3e-5 and warmed up for the first 10% of updates and linearly decayed for the following updates. For fine-tuning using the MT data in ASR transcription format and the ASR outputs, we also fine-tune the model with batch size 32 sentences per GPU for 5 epochs and set the learning rate to 3e-5, which is warmed up for the first 5% of updates and linearly decayed for the following updates. TTS Module. We complete our system based on VITS official code[<https://github.com/jaywalnut310/vits>]. The text-to-BN follows the configuration of DelightfulTTS and has about 64M parameters. To extract the duration required for text-to-BN, we train a Kaldi[<https://github.com/kaldi-asr/kaldi>] model using AISHELL-3. The ASR system used for extracting BN is the Chinese-English ASR model mentioned in Section 5.1.1. For BN-to-speech, we use a 6-layer FFT as the BN encoder and follow the other configuration in VIsinger2 with about 45M parameters in total. The pitch predictor has 4 layers of Conv1D with 256 channels. Pitch is extracted by Visinger2 decoder and DSPGAN from Harvest <cit.> with Stonemask. To predict pitch in DSPGAN, we use the method described in Section 4.3. Up-sampling factors in DSPGAN is set as [5, 5, 4, 3] and other configuration of DSPGAN-mm is preserved for audio super-resolution. The DSPGAN model has about 9M parameters in total. We train all the above models with a batch size of 64 sentences per GPU for 1M steps and set the learning rate to 2e-4. For the pre-trained speaker encoder, we follow the model configuration and training setup of ECAPA-TDNN (C=1024) with 14.7M parameters. 
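As a sketch of how the first fine-tuning stage of the MT module can be set up with the HuggingFace interfaces mentioned above: the checkpoint name, the toy parallel data, and the preprocessing are assumptions for illustration, while the batch size, epochs, learning rate, warm-up ratio and linear decay follow the numbers given for stage one.

from datasets import Dataset
from transformers import (DataCollatorForSeq2Seq, MBart50TokenizerFast,
                          MBartForConditionalGeneration, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "facebook/mbart-large-50"   # assumed name of the mBART-50 large model
tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint,
                                                 src_lang="en_XX", tgt_lang="zh_CN")
model = MBartForConditionalGeneration.from_pretrained(checkpoint)

# Toy stand-in for the En-Zh parallel corpus.
corpus = Dataset.from_dict({"en": ["machine translation is fun"],
                            "zh": ["机器翻译很有趣"]})

def preprocess(batch):
    return tokenizer(batch["en"], text_target=batch["zh"],
                     truncation=True, max_length=256)

args = Seq2SeqTrainingArguments(
    output_dir="mbart50_en2zh_stage1",
    per_device_train_batch_size=32,   # 32 sentences per GPU
    num_train_epochs=20,
    learning_rate=3e-5,
    warmup_ratio=0.10,                # warm up over the first 10% of updates
    lr_scheduler_type="linear",       # then decay linearly
)

trainer = Seq2SeqTrainer(model=model, args=args,
                         train_dataset=corpus.map(preprocess, batched=True),
                         data_collator=DataCollatorForSeq2Seq(tokenizer, model=model))
trainer.train()

The second and third stages would reuse the same setup, replacing the training text with the ASR-transcription-format MT data and the GigaSpeech ASR outputs respectively, with 5 epochs and 5% warm-up as described above.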
§.§ Evaluation Models Baseline. To evaluate the effectiveness of the proposed cascaded S2ST system, we adopt the original cascaded S2ST system as a baseline, including an E-Branchformer ASR model, a mBART50 MT model fine-tuned using the MT data, and an end-to-end TTS model based on VITS trained with AISHELL-3. Proposed system & Ablation study. We further conduct ablation studies to evaluate each component in the proposed system. Specifically, the ablation studies are designed to verify the effectiveness of model fusion and data augmentation in ASR, three-stage fine-tuning, back translation, cross-verification in MT, two-stage training with BN, pre-trained speaker embedding, and audio super-resolution in TTS. §.§ Results & Analysis We conduct experiments on the effectiveness of each sub-module and the performance of our proposed cascaded S2ST system. §.§.§ ASR Module We calculate the word error rate (WER) of each ASR module to evaluate the English speech recognition accuracy. As shown in Table <ref>, the WER of the proposed system has a significant drop compared with the baseline, which indicates that the proposed system greatly improves the recognition accuracy. Moreover, the results of the ablation study demonstrate the effectiveness of both model fusion and data augmentation in improving speech recognition accuracy. §.§.§ MT Module We evaluate our MT module in terms of the BLEU score, which measures the n-gram overlap between the predicted output and the reference sentence. As shown in Table <ref>, the proposed system with three-stage fine-tuning achieves a significantly better BLEU score than the baseline, demonstrating the effectiveness of curriculum learning in our scenario. Furthermore, by incorporating back translation and cross-validation, the translation performance can be further improved. §.§.§ TTS Module We calculate the character error rate (CER) to evaluate the clarity of speech for each TTS module. The ASR system used for calculating CER is the Chinese-English ASR model mentioned in Section 5.1.1. Additionally, we conduct mean opinion score (MOS) tests with ten listeners rating each sample on a scale of 1 (worst) to 5 (best) to evaluate naturalness, sound quality, and speaker similarity. In the ablation study without pre-trained speaker embedding, speaker ID is to control the speaker timbre of the synthesized speech. To eliminate the influence of ASR and MT results on TTS evaluation, we use the Chinese text in the evaluation data and its corresponding English source speech as the reference of speaker timbre as the test set for TTS evaluation. As shown in Table <ref>, our proposed system has achieved significant improvement in naturalness, sound quality, speaker similarity, and clarity of speech compared with the baseline. Interestingly, the system without pre-trained speaker embedding has better sound quality than both the proposed system and recording. We conjecture the reason is that the pre-trained speaker embedding greatly influences the sound quality in the zero-shot TTS setup. Therefore, the quality of the synthesized 24KHz audio is superior to the 16KHz recording, which can be demonstrated by the 3.64 MOS score of the system without audio super-resolution. Meanwhile, the speaker similarity MOS score is very low due to the lack of generalization ability to unseen speakers. Without using the BN-based two-stage model, the system decreases performance on all indicators, which shows the effectiveness of BN as an intermediate representation in our experimental scenario. 
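For reference, word and character error rates of the kind reported above can be computed with an off-the-shelf scorer; the use of the jiwer package and the example strings below are assumptions of this sketch, since the scoring tool is not specified in the paper.

import jiwer

# English ASR: word error rate against the reference transcript.
ref_en = "the quick brown fox jumps over the lazy dog"
hyp_en = "the quick brown fox jumped over a lazy dog"
print("WER:", jiwer.wer(ref_en, hyp_en))

# Chinese TTS intelligibility is scored at the character level.
ref_zh = "今天天气很好"
hyp_zh = "今天天气真好"
print("CER:", jiwer.cer(ref_zh, hyp_zh))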
§.§.§ System Evaluation Finally, we calculate the ASR-BLEU score for the baseline and the proposed system to evaluate the speech-to-speech translation performance. Specifically, we use the ASR system to transcribe the Chinese speech generated by TTS, and then compute the BLEU scores of the ASR-decoded text with respect to the reference English translations. The ASR system for transcribing Chinese speech is the same as that in Section 6.2.3. As shown in Table <ref>, our proposed system achieves a higher ASR-BLEU score than the baseline, which indicates that our proposed system has good speech-to-speech translation accuracy. § CONCLUSION This paper describes the NPU-MSXF speech-to-speech translation system, which we develop for the IWSLT 2023 speech-to-speech translation task. Our system is built as a cascaded system that includes ASR, MT, and TTS modules. To ensure good performance with multi-source data, we improved each module using various techniques such as model fusion and data augmentation in the ASR, three-stage fine-tuning, back translation, and cross-validation in the MT, and two-stage training, pre-trained speaker embedding, and audio super-resolution in the TTS. Through extensive experiments, we demonstrate that our system achieves high translation accuracy, naturalness, sound quality, and speaker similarity with multi-source input. § APPENDIX We present the official results, which include our submitted system and those of other teams. As shown in Table <ref>, our system ranks 1st in speech quality score and 2nd in translation quality score. By equally weighting translation quality and speech quality, our submitted system achieves the highest overall score in human evaluation. Although the organizers provide both automatic and human evaluation scores, the systems are ranked based on human evaluation. Consequently, our submitted system ranks 1st in the S2ST task of the IWSLT 2023 evaluation campaign. Additionally, as illustrated in Table <ref>, we rank 2nd and closely follow the 1st place in automatic evaluation, which evaluates translation accuracy. Our system employs zero-shot voice cloning, which may result in a slight loss of sound quality and speech clarity. We believe our automatic evaluation results could be better without using zero-shot voice cloning. However, this trade-off allows us to achieve a significant improvement in speaker timbre similarity and naturalness.
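A sketch of the ASR-BLEU computation used in the system evaluation above, assuming a transcribe callable that stands in for the Chinese ASR model of Section 6.2.3 and using sacrebleu with its built-in Chinese tokenisation; the function name and the data handling are placeholders.

import sacrebleu

def asr_bleu(generated_wavs, reference_texts, transcribe):
    # Transcribe the synthesized Chinese speech, then score BLEU against the
    # reference translations; tokenize="zh" applies sacrebleu's Chinese
    # tokenizer before n-gram counting.
    hypotheses = [transcribe(wav) for wav in generated_wavs]
    bleu = sacrebleu.corpus_bleu(hypotheses, [reference_texts], tokenize="zh")
    return bleu.score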
http://arxiv.org/abs/2307.03901v2
20230708044917
One-Loop Quantum Effects in Carroll Scalars
[ "Kinjal Banerjee", "Rudranil Basu", "Bhagya Krishnan", "Sabyasachi Maulik", "Aditya Mehra", "Augniva Ray" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2307.04909v1
20230710212643
Planar Curve Registration using Bayesian Inversion
[ "Andreas Bock", "Colin J. Cotter", "Robert C. Kirby" ]
cs.CV
[ "cs.CV", "cs.NA", "math.NA" ]
dtu]Andreas Bockcor1 [email protected] [dtu]organization=Department of Applied Mathematics and Computer Science, Technical University of Denmark, addressline=Richard Petersens Plads, Building 324, city=Kongens Lyngby, postcode=2800, country=Denmark [cor1]Corresponding author imperial]Colin J. Cotter [email protected] [imperial]organization=Department of Mathematics, Imperial College London, addressline=180 Queen's Gate, South Kensington, city=London, postcode=SW72RH, country=United Kingdom baylor]Robert C. Kirby [email protected] [baylor]organization=Department of Mathematics, Baylor University, addressline=1410 S.4th Street, Sid Richardson Science Building, city=Waco, postcode=76706, state=Texas, country=United States of America We study parameterisation-independent closed planar curve matching as a Bayesian inverse problem. The motion of the curve is modelled via a curve on the diffeomorphism group acting on the ambient space, leading to a large deformation diffeomorphic metric mapping (LDDMM) functional penalising the kinetic energy of the deformation. We solve Hamilton's equations for the curve matching problem using the Wu-Xu element [S. Wu, J. Xu, Nonconforming finite element spaces for 2m^th order partial differential equations on ℝ^n simplicial grids when m = n + 1, Mathematics of Computation 88 (316) (2019) 531–551] which provides mesh-independent Lipschitz constants for the forward motion of the curve, and solve the inverse problem for the momentum using Bayesian inversion. Since this element is not affine-equivalent we provide a pullback theory which expedites the implementation and efficiency of the forward map. We adopt ensemble Kalman inversion using a negative Sobolev norm mismatch penalty to measure the discrepancy between the target and the ensemble mean shape. We provide several numerical examples to validate the approach. Closed curve matching Nonconforming finite element method Bayesian inverse problem 87.57.N 65M60 65P10 65M32 § INTRODUCTION Closed curve matching is a central problem in shape analysis where the goal is to bring into alignment two closed curves in called the template and the target <cit.>. For unparameterised curves, the shape space for these objects is Q = ∖ <cit.>. This quotient space disassociates the curve from arbitrary reparameterisation since they do not affect the range of the curves in question. This gives rise to studying the commuting left and right actions of two Lie groups, G=Diff_+(ℝ^2) and H=Diff_+(S^1) as in <cit.>: GQ = Emb(S^1, G.ℝ^2), HQ = Emb(H.S^1,ℝ^2). In the context of developing algorithms for planar curve matching, these group actions must be explicitly discretised. In this paper we our shape space with the so-called outer metric inherited by G which acts on the ambient space. This is in contrast to inner metrics intrinsically defined on the embedded shape <cit.>, see <cit.> for a comparison. To treat the parameterisation, one can parameterise elements of H using its Lie algebra and exploit its vector space structure. In this paper we consider a mismatch penalty that eliminates the need to treat H explicitly. Instead we note that two closed curves c_1 and c_2 are similar when the difference between the indicator function 1 evaluated on their interiors is small. For some linear differential operator 𝒞 we therefore we define the mismatch, or misfit, between them as: 𝔈(c_1, c_2) = 1_c_1 - 1_c_2_𝒞^2, where f _𝒞^2 = ⟨𝒞^-1f,𝒞^-1f⟩_L^2 over some computational domain described later. 
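The operator 𝒞 is specified later; purely for illustration, the sketch below evaluates a mismatch of this form on two rasterised curve interiors, taking 𝒞 = I - αΔ and inverting it with the FFT on a periodic unit square. The choice of operator, the value of α, and the rasterisation into disk masks are assumptions of this sketch, not the discretisation used in the paper.

import numpy as np

def mismatch(mask1, mask2, alpha=1.0):
    # E(c1, c2) = || 1_{c1} - 1_{c2} ||_C^2 with C = I - alpha * Laplacian,
    # inverted spectrally on a periodic unit square.
    n = mask1.shape[0]
    f = mask1.astype(float) - mask2.astype(float)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    symbol = 1.0 + alpha * (kx**2 + ky**2)          # Fourier symbol of I - alpha*Laplacian
    g = np.fft.ifft2(np.fft.fft2(f) / symbol).real  # g = C^{-1} f
    h = 1.0 / n                                     # grid spacing
    return np.sum(g**2) * h * h                     # squared L^2 norm of C^{-1} f

def disk_mask(n, centre, radius):
    # Indicator of a disk, standing in for the interior of a closed curve.
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    xx, yy = np.meshgrid(x, x, indexing="ij")
    return (xx - centre[0])**2 + (yy - centre[1])**2 < radius**2

template = disk_mask(128, (0.50, 0.50), 0.20)
target = disk_mask(128, (0.55, 0.50), 0.22)
print(mismatch(template, target))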
For the outer metric we take the LDDMM approach <cit.> and consider a one-parameter family of velocities t↦ u_t encoding the motion of the ambient space (and therefore the shape) which simultaneously provides a distance measure. We discretise the velocity field using finite elements, specifically the Wu-Xu element <cit.>. This element provides a nonconforming discretisation for sixth order operators; sixth order is necessary for the diffeomorphism to be sufficiently smooth for the computations that we undertake. The implementation of this element in Firedrake <cit.> is made possible by applying the theory of <cit.> and techniques for code generation in <cit.>. Given certain assumptions on the structure of our problem we can identify this entire family of velocities with a single initial momentum defined as a function over the template. We eliminate its evolution equation by using the analytical solution, and restrict the initial conditions to only generate geodesics in the space of unparameterised curves. This results in a forward map, taking as input the momentum and providing the diffeomorphism whose action maps the template to the target curve. After obtaining a finite element discretisation of this map we apply massively parallel and derivative-free ensemble Kalman inversion which we use to invert the forward map for the initial momentum determining the geodesic motion of the curve. §.§ Previous work Diffeomorphic registration has enjoyed a rich literature since the seminal works <cit.>. For curves specifically, <cit.> present the first algorithms for modelling curve matching via gradient descent methods. <cit.> represents curves as measures onto which a Hilbert structure is endowed, and computations of both the outer metric and the curves are done via radial reproducing kernels producing C^∞ velocities. In particular, curves were represented as geometric currents. <cit.> studies such a varifold-based loss function for elastic metrics, see also <cit.> for numerical frameworks for H^2 metrics. <cit.> contains a review of methods related to elastic curves. In this paper we are concerned with higher-order metrics using finite elements. While there is typically a loss of regularity incurred by these methods, they offer more computationally efficient methods than e.g. kernel methods. Finite elements also benefit from spatial adaptivity allowing for local refinement e.g. close to embedded curves. Closest to our approach in terms of discretisation are <cit.> where a particle-mesh method is employed for curve matching where the curve was discretised into a finite set of particles, acted on by an outer metric. However, we consider instead an outer metric finite element discretisation (as opposed to the intrinsic metric in <cit.>). <cit.> presents an adaptive Eulerian FEM discretisation of the velocity field for LDDMM using C^1 cubic Hermite elements and compares the deformations generated using C^∞ fields to assess the effect of the loss of regularity. Smooth mesh deformations are also of interest in shape optimisation where the aim is to transform a mesh such that some functional is minimised. Finite element methods are also adopted here, with deformation fields being discretised using B-splines <cit.>, harmonic polynomials or Lagrange finite elements depending the desired resolution or order <cit.>. Using the finite element space introduced in <cit.> we can guarantee that the Lipschitz norm remains bounded under mesh refinement without resorting to spline or kernel discretisations. 
As mentioned, we use Firedrake <cit.> for all our numerical experiments, see also <cit.> for an extension of this package for shape optimisation. Our formulation eliminates the need to integrate the momentum equation via its analytical solution thereby improving on the typically larger cost of Hamiltonian shooting based methods <cit.> compared to an LDDMM formulation <cit.>. We only need to solve an elliptic equation to obtain the velocity and use a simple variational Euler scheme to evolve the diffeomorphism. Traditional approaches in numerical shape analysis often apply a shooting procedures to determine the initial momentum transporting the image or landmarks to the desiderata, see e.g. <cit.>. Bayesian approaches have been employed before in the context of shape analysis, see e.g. <cit.> where function space Markov Chain Monte Carlo is used to characterise the posterior density of momenta generating a given shape. Similar to our approach is <cit.> in which ensemble Kalman inversion <cit.> is applied to recover the momentum for landmark matching. §.§ Organisation Section <ref> contains an introduction to diffeomorphic curve matching and the associated Hamiltonian systems, We also discuss the application of the finite element approach using the Wu-Xu element from <cit.> and the discretisation of the velocity equation. Section <ref> contains the transformation theory for the Wu-Xu element, and Section <ref> contains details of the discretisation of the Hamiltonian equations. Next, Section <ref> discusses the Bayesian inverse problem, and Section <ref> contains numerical results. Section <ref> contains a summary. § DIFFEOMORPHIC REGISTRATION Let Ω be a connected convex subset of , d=2, with polygonal boundary ∂Ω. We study maps q∈ Q=H^1(S^1,) from a template curve Γ_0∈ to a target curve Γ_1∈ whose motion is restricted by the differential equation: q̇_t = u_t∘ q_t , where u_t, t ∈ [0,1] is a family of time-dependent vector fields on Ω with some prescribed spatial smoothness. A geodesic path between two such parameterised curves Γ_0 and Γ_1 is defined as a path minimising the associated kinetic energy in u: 1/2∫_0^1u_t^2 t, where · dominates the Lipschitz norm. In fact, since u_t is supported on Ω it generates a curve on <cit.> of the entire ambient space via: φ̇_t = u_t∘φ_t, φ_0 = , whose motion restricted to the curve q_0 ∘ S^1 equals the q_t ∘ S^1 at time t ∈ [0,1]. As the kinetic energy measures distances between two elements of via velocity defined over the entire field Ω, we refer to this associated distance measure as an outer metric on the shape space . §.§ Hamiltonian system Here we take a Hamiltonian approach <cit.> and introduce the momentum p_t∈ T^*Q occupying the linear cotangent space, which we assume has enough regularity so that it has a Fréchet-Riesz representer in L^2(S^1) (also denoted p_t, with some abuse of notation). We extremise the following the functional: S = ∫_0^1 1/2u_t^2 + ⟨ p_t, q̇_t- u_t∘ q_t⟩ t, where ⟨ h, g ⟩= ∫_S^1 h· g. Taking variations i.e. δ S = 0 leads to Hamilton's equations for curve matching for t∈ [0,1]: ∫_0^1⟨δ p, q̇_t - u_t∘ q_t⟩ t = 0, ∀δ p∈ L^2(S^1), ∫_0^1⟨ṗ_t - ∇ u_t∘ q_t p_t, δ q⟩ t = 0, ∀δ q ∈ Q, δ u_t^2/δ u - ⟨ p_t, δ u∘ q_t⟩ = 0. where δ p, δ u and δ q are space-time test functions. The following theorem shows that we can solve (<ref>) analytically: The solution p_t to (<ref>) is at all times t≥ 0 given by p_t = ∇φ_t∘ q_0 p_0. See <ref>. 
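To convey the structure of the forward map, the sketch below integrates the discrete Hamiltonian equations for a sampled curve, replacing the finite element solve for the velocity by a Gaussian reproducing kernel and using explicit Euler steps. The kernel, its width, the step count, and the direct integration of the momentum (rather than the analytical transport of the theorem above) are all simplifications for illustration, not the Wu-Xu discretisation developed below.

import numpy as np

def gaussian_kernel(x, y, sigma):
    # K(x, y) = exp(-|x - y|^2 / (2 sigma^2)) for (N, 2) arrays of points.
    d2 = np.sum((x[:, None, :] - y[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def shoot(q0, p0, n_steps=50, sigma=0.2):
    # Integrate q_dot_i = sum_j K(q_i, q_j) p_j and the matching momentum
    # equation with explicit Euler steps of size 1/n_steps.
    q, p = q0.copy(), p0.copy()
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        K = gaussian_kernel(q, q, sigma)
        dq = K @ p                                # velocity evaluated on the curve
        pp = p @ p.T                              # pairwise inner products p_i . p_j
        diff = q[:, None, :] - q[None, :, :]
        dp = np.sum((pp * K / sigma**2)[:, :, None] * diff, axis=1)
        q, p = q + dt * dq, p + dt * dp
    return q

# Template: points on a circle; momentum along the outward normal,
# anticipating the horizontality condition introduced next.
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
q0 = np.stack([np.cos(theta), np.sin(theta)], axis=1)
normals = q0 / np.linalg.norm(q0, axis=1, keepdims=True)
p_tilde = 0.1 * np.cos(2.0 * theta)               # one-dimensional signal on the curve
q1 = shoot(q0, p_tilde[:, None] * normals)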
To generate parameterisation-independent geodesics as in <cit.> we replace the initial condition q_0 by q_0∘η, where η∈Diff_+(S^1) in the case of planar curves is an arbitrary reparameterisation. As a result of this quotient representation ∖Diff_+(S^1) of curves we minimise over all η leading to the horizontality condition on the momentum. This means that the momentum p_0 has no tangential component and can therefore be described by a one-dimensional signal, p̃_0: S^1↦ℝ: p_0 = 𝐧_q_0p̃_0 where 𝐧_q_0:S^1→ℝ^2 is the outward normal of the template. Thus, along with Theorem <ref> we have the following characterisation, p_t = φ_t∘ q_0𝐧_q_0p̃_0. This generates trajectories of geodesics between unparameterised curves. The entire geodesic motion of the curve can therefore be determined by a one-dimensional signal along the initial curve q_0. To summarise this section we are concerned with integration of the following reduced Hamiltonian system for t∈ [0,1]: δ u_t^2/δ u = ⟨φ_t∘ q_0𝐧_q_0p̃_0, δ u∘ q_t⟩, q̇_t = u_t∘ q_t, with q_0 and p̃_0 fixed and boundary conditions u_t|_∂Ω=0 for all t∈ [0,1]. Next we discuss a discretisation of (<ref>). §.§ Outer metric via finite elements From Picard-Lindelhöf analysis it is clear that the Banach space ordinary differential equation (ODE) (<ref>) require a pointwise Lipschitz condition on u_t. As such, u_t must occupy at least when q_0 ∈ L^∞(S^1), see <cit.> (see also Corollary 7 in this reference for other host spaces). Dupuis <cit.> establishes sufficient conditions accomplishing the same in a Hilbertian setting. The Hilbertian setting is better suited to finite element methods. This is in contrast with which is only a Banach space and, to the best of the authors' ability, is not easy to approximate numerically[<cit.> approximates by means of a fixed point linearisation solutions to the nonlinear ∞-harmonic equation <cit.>.]. We therefore request a norm · such a way that a solution to (<ref>) ensures that this condition is met, which in turn implies global existence and uniqueness of (<ref>) by the references above. For d=2, 3, H_0^3(Ω) is contained in 𝖢^1(Ω̅) and so is Lipschitz on the interior <cit.>. As such, we want to describe a discretisation of (<ref>) ensuring a type of H^3 regularity as the follow theorem shows. Let O be a convex bounded Lipschitz domain in ℝ^d with polygonal boundary and O_h a shape-regular, quasi-uniform triangulation thereof <cit.> for some mesh size h>0. Suppose further that u is continuous on O̅, u|_K ∈ H^3(K)^d for K∈ O_h and that there exists an operator B inducing the norm u_B^2 = ∑_K∈ O_hu_B(K)^2, where we define u_B(K)^2 = ∫_K Bu· u x such that u_H^3(K)^d≲u_B(K). Then u∈ W^1,∞(O)^d. The embedding theorem for homogeneous Sobolev spaces (i.e. with zero traces) into the space 𝖢^j(O̅) are well-known. However, since the trace γ_K u of u on ∂ K, K∈ O_h may not be zero. By <cit.>, H^3(K) ↪𝖢_B^1(K), where: 𝖢_B^1(K) = { u ∈𝖢^1(K) | D^ u is bounded on K, ||≤ 1}. This means any H^3(K) function has a continuous representative with almost everywhere bounded first derivatives on K. Since u∈𝖢^0(O̅), u is a continuous function with its first derivative a.e. bounded, implying a Lipschitz condition. To summarise: u _W^1,∞(K)^d^2 ≲ u _H^3(K)^d^2 ≲ u _B(K)^2 Summing over the elements K∈Ω and squaring: u _W^1,∞(O)^d^2≲ u _B^2. where we have used that u is a continuous function with essentially bounded gradient. In light of this theorem we approximate the space of velocity fields by a nonconforming finite element space (see e.g. 
<cit.>) This way we can guarantee the necessary Lipschitz properties of our functions without having to impose higher-order global continuity of the finite-dimensional solution spaces. In Section <ref> we use the H^3-nonconforming finite element space presented in <cit.> in a discretisation of (<ref>). We choose the operator B=( - αΔ)^2m for a given positive constant α leading to the following bilinear form: a_Ω(u, v) = ∑_i=1^d∫_Ω∑_j=0^m α^j mj D^j u^i· D^j v^i x = ∫_Ω Bu· v x, where x· y is the Euclidean inner product, D^0 =, and D^j =∇ D^j-1 j is odd, ∇· D^j-1 j is even. § A PULLBACK THEORY FOR THE WU-XU ELEMENT The Wu-Xu element provides an opportunity to tackle this problem in a (nonconforming) H^3 setting, but it presents challenges for implementation. Although we can construct its basis on a reference element, say, using the FIAT package <cit.>, the Wu-Xu elements do not form an affine equivalent family <cit.> under pullback. Consequently, we apply the theory developed in <cit.>, which gives a generalization of techniques developed for the C^1 conforming Argyris element <cit.>. To fix ideas, put a reference triangle K with vertices by {_i }_i=1^3. For any nondegenerate triangle K with vertices {_i }_i=1^3, we let F:T →K denote the affine mapping sending each _i to the corresponding _i and J_T its Jacobian matrix. We adopt the ordering convention used in <cit.>, where edge e_i of any triangle connects the vertices other than i. We take the unit tangent _i = [ t_i^x t_i^y ]^T to from the vertex of lower number to the higher one. The normal to edge i is defined by counterclockwise rotation of the tangent, so that _i = R _i, where R = [ [ 0 1; -1 0 ]]. The normals, tangents, and edge midpoints for the reference element K will include hats: _i, _i, and _i. The pull-back of any function f defined on K is given by F^*(f) = f∘ F, and the push-forward of functionals n acting on functions defined over K is F_*(n) = n ∘ F^*, so that F_*(n)(f) = (n ∘ F^*)(f) = n(f∘ F) Finite element implementation requires local shape functions {ψ_i^K }_i=1^N that are restrictions of the global basis to cell K. These are taken dual to a set of nodes or degrees of freedom { n_i^K }_i=1^N in the sense that n_i^K(ψ_j^K) = δ_ij. In practice, one typically computes the basis {ψ̂_i }_i=1^N dual to some nodes {n̂_i}_i=1^N over the reference element K̂. For affine equivalent families (like the Lagrange basis), the physical basis functions are the pullbacks of reference element shape functions, so that ψ_i^K = F^*(ψ_i). Equivalently, the nodes are preserved under push-forward, with F_*(n^K_i) = n̂_i. We may express these relations in a kind of vector-notation. If Ψ̂ is a vector whose entries are ψ_i, then in the affine equivalent case, F^*(Ψ̂) contains the basis on cell K, and also F_*(𝒩) = 𝒩. For non-equivalent families, these relations fail, but we can hope to construct a matrix M such that Ψ = M F^*(Ψ̂) contains the correct vector of basis functions on T. The matrix M will depend on the particular geometry of each cell, but if it is sparse this amounts to a considerable savings over directly constructing the basis on each triangle. Our theory in <cit.> proceeds by transforming the actions of the functionals on the finite element space. The finite element functionals are defined on some infinite-dimensional space (e.g. twice-continuously differentiable functions), and we let π denote the restriction of functionals to the finite-element space and π̂ the corresponding restriction on the reference element. 
Then, we look for a matrix V such that V F_*(π𝒩) = π𝒩, and can prove <cit.> that M = V^T. For any triangle K and integer k≥ 0, we let ^k(K) denote the space of polynomials of degree no greater than k over K. Letting λ_i be the barycentric coordinates for K (equivalently, the Lagrange basis for ^1(K)), we let b_K = λ_1 λ_2 λ_3 be the standard cubic bubble function over K. We also need notation for the linear functionals defining degrees of freedom. We let δ_ denote pointwise evaluation of some (continuous) function: δ_(p) = p(). We let δ_^ denote the derivative in some direction at a point : δ^_(p) = ^T ∇ p() Repeated superscripts will indicate higher derivatives. We use block notation will for gradients and sets of second-order derivatives, such as ∇_ = [ δ_^ δ_^ ]^T for the gradient in Cartesian coordinates at a point , and △_ = [ δ_^ δ_^ δ_^ ]^T for the unique components of the Hessian matrix. We will use superscripts in the block notation to indicate the derivatives taken in other directions than the Cartesian ones, such as ∇^ containing the derivatives with respect to a normal vector and tangent vector for some part of the boundary. Similarly, △^ will contain the second partials in each direction and the mixed partial in both directions. The Wu-Xu elements also utilise integral moments of normal derivatives, and we shall also need averages tangential and mixed derivatives over edges to perform the transformations. Given any directional vector , we define the moment of the derivative in the direction over edge by: μ^_(f) = ∫_·∇ f ds, Similarly, we let μ^_1_2_ to denote the functionals computing moments of second (possibly mixed) directional derivatives over an edge. Now, we define the pair of H^3 nonconforming triangles considered in <cit.>. Note that there are two spaces given: a space compatible with sixth-order problems, and a robust space that is stable for second, fourth and sixth-order problems. We define function space (K) over some triangle K by (K) = ^3 + b_K ^1, and the function space for the robust element will be (K) = ^3 + b_K ^1 + b_K^2 ^1, where ^k is the standard space of polynomials of degree k. Note that we have (K)= 12 and (K) = 15 since b_K ∈^3 ∩ b_K ^1. The degrees of freedom for the two elements are quite similar. We can parametrise (K) by 𝒩 = [ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 μ^_1_1__1 μ^_2_2__2 μ^_3_3__3 ]^T. That is, the degrees of freedom consist of point values and gradients at each vertex, together with moments of the second normal derivative along edges. For the robust element, we also use the moments of the first normal derivatives, so that 𝒩_r = [ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 μ^_1__1 μ^_2__2 μ^_3__3 μ^_1_1__1 μ^_2_2__2 μ^_3_3__3 ]^T. Wu and Xu actually define the degrees of freedom as average of these moments over the relevant facets, although this does not affect unisolvence or other essential properties. For the reference element, it will be helpful to use their original definition. For some edge of K̂, define μ̂^_(f) = 1||∫_·∇̂ f dŝ, and similarly define moments second directional derivatives over reference element edges. The reference element nodes for (K̂) will be taken as 𝒩 = [ δ__1 ∇̂^T__1 δ__2 ∇̂^T__2 δ__3 ∇̂^T__3 μ̂^_1_1__1 μ̂^_2_2__2 μ̂^_3_3__3 ]^T, and for (K̂) we will use 𝒩 = [ δ__1 ∇̂^T__1 δ__2 ∇̂^T__2 δ__3 ∇̂^T__3 μ̂^_1__1 μ̂^_2__2 μ̂^_3__3 μ̂^_1_1__1 μ̂^_2_2__2 μ̂^_3_3__3 ]^T Note that this redefinition has no effect in the case of an equilateral reference triangle with unit edge length. 
For the more common case of a right isosceles reference triangle, however, this will eliminate the need for logic indicating to which reference element edges the edges of each triangle correspond. The derivative degrees of freedom in both Wu-Xu elements are not preserved under push-forward, and since we have only normal derivatives on the edges, we cannot immediately obtain the correct nodes by taking linear combinations. Consequently, we must develop a compatible nodal completion <cit.>. For the Wu-Xu elements, this contains all the original degrees of freedom plus the integrals of tangential and mixed normal/tangential derivatives. Such a completion is shown for the standard Wu-Xu element in Figure <ref>. A completion for the robust element includes the first normal moments and tangential moments as well, as showin in Figure <ref>. We define ℳ_1,i = [ μ^_i__i μ^_i__i ]^T to be the vector of the moments of the normal and tangential derivatives on a particular edge. We also let ℳ_1, i contain the corresponding reference element nodes. We only need ℳ_1,i and ℳ_1, i for the robust element. Both elements require ℳ_2,i = [ μ^_i_i__i μ^_i_i__i μ^_i_i__i ]^T containing the unique second derivative moments on each edge. We similarly define ℳ_2, i to contain the reference element integral averages. The compatible nodal completion for (K, (K), 𝒩) is 𝒩^C = [ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 ℳ_2,1^T ℳ_2,2^T ℳ_2,3^T ]^T, with the hatted equivalents comprising 𝒩̂^C on the reference cell. The completed set of nodes for the robust element is 𝒩_r^C = [ δ__1 ∇^T__1 δ__2 ∇^T__2 δ__3 ∇^T__3 ℳ_1,1^T ℳ_1,2^T ℳ_1,3^T ℳ_2,1^T ℳ_2,2^T ℳ_2,3^T ]^T, Now, the matrix V from (<ref>) will be obtained in factored form V = E V^c D, where each matrix plays a particular role. D is a rectangular matrix expressing the completed nodes in terms of the given physical nodes. V^c is a block diagonal matrix relating the push-forward of the reference nodal completion to the physical nodal completion, and E is a Boolean matrix selecting actual finite element nodes from the completion. For the Wu-Xu element, D is 18 × 12, V^c is 18 × 18, and E is 12 × 18. For the robust element, D is 24 × 15, V^c is 24 × 24, and E is 15 × 24. Now, we define the matrix D, which expresses the members of 𝒩^C as linear combinations of the members of 𝒩. Clearly, the rows corresponding to members of 𝒩^C also appearing in 𝒩 will just have a single nonzero in the appropriate column. For the Wu-Xu element, the remaining nodes are all integrals of quantities over edges, and we can use the Fundamental Theorem of Calculus to perform this task. Let be an edge running from vertex _a to _b with unit tangent and normal and , respectively. We have μ^_(f) = ∫_^T ∇ f ds = f(_b)-f(_a) = δ__b(f) - δ__a(f). 
In a similar way, the moments of the second tangential and mixed derivatives on can be expressed as differences between components of the gradients at endpoints by: μ^_(f) = ^T (∇__b f - ∇__a f), μ^_(f) = ^T (∇__b f - ∇__a f), and we have that 𝒩^C = D 𝒩, or [ δ__1; δ^__1; δ^__1; δ__2; δ^__2; δ^__2; δ__3; δ^__3; δ^__3; μ^_1_1__1; μ^_1_1__1; μ^_1_1__1; μ^_2_2__2; μ^_2_2__2; μ^_2_2__2; μ^_3_3__3; μ^_3_3__3; μ^_3_3__3 ] = [ 1 0 0 0 0 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 1 0 0 0 0 0 0 0; 0 0 0 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 0 0 0 1 0 0; 0 0 0 0 -n_1,x -n_1,y 0 n_1,x n_1,y 0 0 0; 0 0 0 0 -t_1,x -t_1,y 0 t_1,x t_1,y 0 0 0; 0 0 0 0 0 0 0 0 0 0 1 0; 0 -n_2,x -n_2,y 0 0 0 0 n_2,x n_2,y 0 0 0; 0 -t_2,x -t_2,y 0 0 0 0 t_2,x t_2,y 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 1; 0 -n_3,x -n_3,y 0 n_3,x n_3,y 0 0 0 0 0 0; 0 -t_3,x -t_3,y 0 t_3,x t_3,y 0 0 0 0 0 0; ][ δ__1; δ^__1; δ^__1; δ__2; δ^__2; δ^__2; δ__3; δ^__3; δ^__3; μ^_1_1__1; μ^_2_2__2; μ^_3_3__3; ]. The matrix V^C is obtained by relating the push-forwards of the nodal completion to their reference counterparts. We can convert between the Cartesian and other orthogonal coordinate systems (e.g. normal/tangential) representations as follows. Given a pair of orthogonal unit vectors and , we can define an orthogonal matrix G by: G = [ ]^T. In particular, we will use G_i to have the normal and tangential vectors to edge i of triangle K and G_i those for triangle K. The multivariate chain rule readily shows that ∇_x = G^T ∇^_. Similarly, letting = [ n_x n_y ]^T and = [ t_x t_y ]^T, we define the matrix Γ by Γ = [ n_x^2 2 n_x t_x t_x^2; n_x n_y n_x t_y + n_y t_x t_x t_y; n_y^2 2 n_y t_y t_y^2 ], and the chain rule gives △_x = Γ△_x^. Although G is an orthogonal matrix, Γ is not. A similar calculation also shows gives that: △^_ = Γ^-1△_, where Γ^-1 = [ n_x^2 2 n_x n_y n_y^2; n_x t_x n_x t_y + n_y t_x n_y t_y; t_x^2 2 t_x t_y t_y^2 ]. We will also need to transform derivatives under pull-back. Using the chain rule, ∇ (ψ̂∘ F) = J^T ∇̂ψ̂∘ F. Combining this with (<ref>) lets us relate the normal and tangential derivatives in physical space to the normal and tangential derivatives in reference space. ∇_^ = G J^T G^T ∇̂^_. We can perform a similar calculation for second derivatives. With the entries of the Jacobian matrix as: J = [ ∂ x∂x̂ ∂ x∂ŷ; ∂ y∂x̂ ∂ y∂ŷ ], we define the matrix Θ = [ ( ∂x̂∂ x)^2 2 ∂x̂∂ x∂ŷ∂ x ( ∂ŷ∂ x)^2; ∂x̂∂ y∂x̂∂ x ∂x̂∂ y∂ŷ∂ x + ∂x̂∂ x∂ŷ∂ y ∂ŷ∂ x∂ŷ∂ y; (∂x̂∂ y)^2 2 ∂x̂∂ y∂ŷ∂ y ( ∂ŷ∂ y)^2 ], so that for = F(), △_ = Θ△̂_. The inverse of Θ follows by reversing the roles of reference and physical variables: Θ^-1 = [ ( ∂ x∂x̂)^2 2 ∂ x∂x̂∂ y∂x̂ ( ∂ y∂x̂)^2; ∂ x∂ŷ∂ x∂x̂ ∂x∂ŷ∂ y∂x̂ + ∂ x∂x̂∂ y∂ŷ ∂ y∂x̂∂ y∂ŷ; (∂ x∂ŷ)^2 2 ∂ x∂ŷ∂ y∂ŷ ( ∂ y∂ŷ)^2 ] We can also relate the second-order derivatives in normal/tangential coordinates under pullback by △^_ = ΓΘΓ̂^-1△_^. From here, we will let G_i and Ĝ_i denote the matrices containing normal and tangent vectors to edge _i of a generic triangle T and the reference triangle T̂, respectively, with similar convention for the other geometric quantities Γ and Θ. For any vector , edge , and smooth function f = f ∘ F, we have ∫_^T ∇ f ds = ∫_^T ∇̂f ∘ F ds = ∫_^T ∇̂f J_, dŝ, where the Jacobian J_, is just the ratio of the length of to that of the corresponding reference element edge . 
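To make the assembly of D concrete, the following NumPy sketch builds the 18 x 12 matrix displayed above directly from the triangle vertices, using the stated conventions (edge i is opposite vertex i, the tangent runs from the lower- to the higher-numbered vertex, and n_i = R t_i). It is an illustration of the displayed formulas rather than the authors' implementation, and the node ordering is the one used in the matrix above.

import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])  # rotation taking tangents to normals

def edge_geometry(verts):
    """verts: (3, 2) array of triangle vertices.
    Returns unit tangents t_i, normals n_i = R t_i and lengths |e_i|,
    where edge i connects the two vertices other than i (low -> high)."""
    tangents, normals, lengths = [], [], []
    for i in range(3):
        a, b = sorted(set(range(3)) - {i})      # endpoints of edge i
        e = verts[b] - verts[a]
        L = np.linalg.norm(e)
        t = e / L
        tangents.append(t)
        normals.append(R @ t)
        lengths.append(L)
    return np.array(tangents), np.array(normals), np.array(lengths)

def build_D(verts):
    """18 x 12 matrix expressing the completed nodes N^C (point values, gradients,
    and the second-derivative edge moments mu^{nn}, mu^{nt}, mu^{tt}) in terms of
    the Wu-Xu nodes N, using the fundamental-theorem-of-calculus identities above."""
    t, n, _ = edge_geometry(verts)
    D = np.zeros((18, 12))
    D[:9, :9] = np.eye(9)                       # vertex values and gradients are kept as-is
    grad_cols = [(1, 2), (4, 5), (7, 8)]        # gradient columns of vertices 1..3
    for i in range(3):
        a, b = sorted(set(range(3)) - {i})      # edge i runs from vertex a to vertex b
        r = 9 + 3 * i
        D[r, 9 + i] = 1.0                       # mu^{n_i n_i} is itself a Wu-Xu node
        # mu^{n_i t_i}(f) = n_i . (grad f(v_b) - grad f(v_a))
        D[r + 1, grad_cols[b][0]], D[r + 1, grad_cols[b][1]] = n[i]
        D[r + 1, grad_cols[a][0]], D[r + 1, grad_cols[a][1]] = -n[i]
        # mu^{t_i t_i}(f) = t_i . (grad f(v_b) - grad f(v_a))
        D[r + 2, grad_cols[b][0]], D[r + 2, grad_cols[b][1]] = t[i]
        D[r + 2, grad_cols[a][0]], D[r + 2, grad_cols[a][1]] = -t[i]
    return D

The 24 x 15 matrix D of the robust element follows the same pattern, with the additional first-moment rows obtained from the endpoint differences given by the fundamental theorem of calculus above.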
Applying this to the normal and tangential moments and using (<ref>), we have that: ℳ_1,i = |_i| G_i J^T Ĝ_i^-1ℳ_1, i, where the factor of |_i| in the denominator of the Jacobian is merged with the reference element moments to produce ℳ_1,i. Hence, the slight modification of reference element nodes avoids extra data structures or logic in identifying reference element edge numbers. Then, we can use (<ref>) to express each ℳ_2, i in terms of the reference element nodes ℳ_2,i = || Γ_i ΘΓ̂_i^-1ℳ_2,i. We define vectors B^1,i = 1|_i|Ĝ_i J^-T G_i^T, B^2,i = 1|_i|Γ̂_i Θ^-1Γ_i^-1, and hence V^C is the block-diagonal matrix V^C = [ 1 ; J^-T ; 1 ; J^-T ; 1 ; J^-T ; B^2,1 ; B^2,2 ; B^2,3 ], with zeros of the appropriate shapes in the off-diagonal blocks. The extraction matrix E is just the 12 × 18 Boolean matrix selecting the members of N from N^C: E = [ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0; ] Multiplying EV^CD out and defining β^i,x = n^i_x B^2,i_12 + t^i_x B^2,i_13, β^i,y = n^i_y B^2,i_12 + t^i_y B^2,i_13, we obtain for V V = [ 1 0 0 0 0 0 0 0 0 0 0 0; 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0 0 0 0; 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0; 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0; 0 0 0 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0; 0 0 0 0 -β^1,x -β^1,y 0 β^1, x β^1,y B^2,1_11 0 0; 0 -β^2, x -β^2,y 0 0 0 0 β^2,x β^2,y 0 B^2,2_11 0; 0 -β^3,x -β^3,y 0 β^3,x β^3,y 0 0 0 0 0 B^2,3_11; ]. The same considerations lead to a similar derivation of E, V^c, and D for the robust element, resulting in V = [ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0 0 0 0 0 0 0; 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0 0 0 0; 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 ∂ x∂x̂ ∂ y∂x̂ 0 0 0 0 0 0; 0 0 0 0 0 0 0 ∂ x∂ŷ ∂ y∂ŷ 0 0 0 0 0 0; 0 0 0 -B^1,1_12 0 0 B^1,1_12 0 0 B^1,1_11 0 0 0 0 0; -B^1,2_12 0 0 0 0 0 B^1,2_12 0 0 0 B^1,2_11 0 0 0 0; -B^1,3_12 0 0 B^1,3_12 0 0 0 0 0 0 0 B^1,3_11 0 0 0; 0 0 0 0 -β^1,x -β^1,y 0 β^1,x β^1,y 0 0 0 B^2,1_11 0 0; 0 -β^2,x -β^2,y 0 0 0 0 β^2,x β^2,y 0 0 0 0 B^2,2_11 0; 0 -β^3,x -β^3,y 0 β^3,x β^3,y 0 0 0 0 0 0 0 0 B^2,3_11; ] for V, where β is as defined in (<ref>). § DISCRETISATION We now describe the discretisations of the Hamiltonian system (<ref>) using a function space introduced in the previous section. Smooth solutions to (<ref>) generates the following curve of diffeomorphisms: φ̇_t = u_t ∘φ_t, φ_0=, where the domain of φ_t is Ω_0. This subsumes the left action on the curve q_0 in (<ref>). Our approach is therefore to solve (<ref>) for an outer metric in tandem with integrating the diffeomorphism defined over the entire domain and moving the mesh, thereby automatically providing a solution to a discrete analogue of (<ref>). We denote by 𝒯_0 denote a shape-regular, quasi-uniform triangulation of the template domain Ω_0. Let denote the mesh skeleton of 𝒯_0 and the subset of whose elements do not intersect ∂Ω_0. 
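Continuing the sketch, the geometric factors G_i, Γ_i, Θ and the blocks B^{2,i} can be assembled from the Jacobian J of the affine map together with the physical and reference edge normals and tangents, after which V = E V^C D. The routine below assumes the reference-element data come from the same edge_geometry helper applied to the reference triangle, and it covers the standard (non-robust) element only; it illustrates the displayed formulas and is not the authors' code.

import numpy as np

def gamma(n, t):
    # Maps Cartesian second derivatives to (nn, nt, tt) derivatives (the matrix Gamma above).
    nx, ny = n
    tx, ty = t
    return np.array([[nx * nx, 2 * nx * tx, tx * tx],
                     [nx * ny, nx * ty + ny * tx, tx * ty],
                     [ny * ny, 2 * ny * ty, ty * ty]])

def gamma_inv(n, t):
    nx, ny = n
    tx, ty = t
    return np.array([[nx * nx, 2 * nx * ny, ny * ny],
                     [nx * tx, nx * ty + ny * tx, ny * ty],
                     [tx * tx, 2 * tx * ty, ty * ty]])

def theta_inv(J):
    # Theta^{-1} expressed through the entries of J = dx/dxhat, as displayed above.
    a, b = J[0, 0], J[0, 1]   # dx/dxhat, dx/dyhat
    c, d = J[1, 0], J[1, 1]   # dy/dxhat, dy/dyhat
    return np.array([[a * a, 2 * a * c, c * c],
                     [b * a, b * c + a * d, c * d],
                     [b * b, 2 * b * d, d * d]])

def build_V(J, n, t, lengths, n_hat, t_hat, D):
    """V = E V^C D for the standard Wu-Xu element.
    J: 2x2 Jacobian of the affine map; n, t, lengths: physical edge data;
    n_hat, t_hat: reference edge normals/tangents; D: the 18x12 matrix above."""
    Jinv_T = np.linalg.inv(J).T
    VC = np.zeros((18, 18))
    for i in range(3):                       # vertex blocks: value, then gradient
        VC[3 * i, 3 * i] = 1.0
        VC[3 * i + 1:3 * i + 3, 3 * i + 1:3 * i + 3] = Jinv_T
    for i in range(3):                       # second-moment blocks B^{2,i}
        B2 = (gamma(n_hat[i], t_hat[i]) @ theta_inv(J) @ gamma_inv(n[i], t[i])) / lengths[i]
        VC[9 + 3 * i:12 + 3 * i, 9 + 3 * i:12 + 3 * i] = B2
    E = np.zeros((12, 18))
    for row, col in enumerate(list(range(10)) + [12, 15]):
        E[row, col] = 1.0                    # keep vertex data and the mu^{nn} moments
    return E @ VC @ D

Multiplying E V^C D out for a concrete triangle reproduces the closed-form V with the β entries displayed above.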
We place the following assumption on the initial triangulation 𝒯_0: 𝒯_0 is constructed such that the range of q_0 is described by a subset of . Using the definition in (<ref>) we define the vector-valued Wu-Xu space defined over Ω_0: V(Ω_0) = { v ∈ L^2(Ω)^2 | v_i|_K ∈(K), K ∈𝒯_0, i=1,2}. Further, let 0=t_0, t_Δ T, … t_T-1=1 denote T uniformly distributed points and use an Euler discretisation of the time derivate in (<ref>), where we let φ_t_k∈ V(Ω_0), u_t_k∈ V(Ω_0): φ_t_k+1 = φ_t_k + u_t_k∘φ_t_kΔ T. For sufficiently small Δ T, φ_t_k is a diffeomorphism of Ω <cit.>. Using the notation Ω_t_k = φ_t_k∘Ω_0, V(Ω_t_k) := {f | f ∘φ_t_k∈ V(Ω_0)} and by noting that q_t_k = φ_t_k∘ q_0 we obtain a discrete analogue of (<ref>) where û_t_k∈ V(Ω_t_k): a_Ω_t_k(û_t_k, v̂) = ∫_S^1∇φ_t_k∘ q_0 𝐧_q_0p̃_0 ·v̂∘ q_0 s, ∀v̂∈ V(Ω_t_k), φ_t_k+1 = φ_t_k + û_t_kΔ T, for k=0,…, T-1, where φ_t_k∘∂Ω_0 = owing to the homogeneous Dirichlet boundary conditions implied by (<ref>). At each time step k after the solution of (<ref>), the mesh is moved according to (<ref>) upon which the equation (<ref>): q_t_k+1 = q_t_k + u_t_k∘ q_t_kΔ T, is automatically satisfied. The underlying coordinate field of the mesh itself is chosen to be a Lagrange subspace of V(Ω_0), so that the map q_0 ↦φ_t_k∘ q_0 is a diffeomorphism. At k=0, the assembly of the right-hand side is in practice done by integration over q_0∘ S^1, which means that we can supply an initial “momentum” signal 𝐩_0∈p̃_0∘ q_0∈ L^2(q_0∘ S^1) (now defined over initial curve) to encode the entire geodesic flow of φ, and thereby of the embedded curve. Figure <ref> show examples of forward integration of this system for various 𝐩_0 and q_0∘ S^1 (the initial meshes were generated using <cit.>). Note that the norm of the velocity present in (<ref>) is confined to certain energy levels determined by the initial momentum as the system is integrated. In the fully discrete analogue we can only hope to establish approximate conservation of the Hamiltonian. The importance of this nebulous since we only integrate over fixed time intervals, and is subject to future work. The computational cost of integrating (<ref>) is dominated by the inversion of the discrete bilinear form. Mesh-based methods readily facilitate parallel computations (e.g. matrix-vector products in a Krylov subspace method), which along with preconditioning strategies are competitive with fast multipole methods. They also offer flexibility in choosing bilinear form (which can be altered according to an informed modelling choice or application). Finally, mesh adaptivity is also an option. For the application at hand a graded mesh with a fine resolution in the vicinity of the curve and coarser elements closer to the boundary can both increase accuracy and the computational burden of the method. § INVERSE PROBLEM We now consider the matching problem using the data misfit functional in (<ref>). We wish to estimate the momentum _0 := p̃_0∘ q_0∈ that generates the curve t↦ q_t. That is, _0 is the momentum object defined on the computational domain q_0 ∘ S^1. We drop explicit dependence on the template as it remains fixed during computation as well as the time dimension of the initial momentum. To ease the notation we use boldface to represent the smoothed version of the indicator function on the interior of a curve q the i.e.: = 𝒞1_q. We define the forward operator: ↦ℱ() = := 𝒞1_q_1, where q_1 is the solution at t=1 given by solving (<ref>) using q_0 and as initial conditions, i.e. the time-1 flow map of the initial curve. 
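The forward map used below can be summarised as the following time-stepping loop. The sketch is schematic: solve_velocity stands for the finite element solve of the discrete weak problem above, and move_mesh and evaluate_at are hypothetical helpers for updating the coordinate field and evaluating the velocity at the curve points.

import numpy as np

def forward_flow(q0, p0, solve_velocity, move_mesh, evaluate_at, T=15):
    """Euler integration of the discrete flow (a schematic sketch).

    q0 : (M, 2) curve points of the template q_0,  p0 : (M, 2) momentum signal on q_0.
    solve_velocity(q, p0) : hypothetical wrapper around the finite element solve of the
        weak problem above (assembling the curve-supported right-hand side, which
        involves grad(phi_{t_k}) o q_0, and inverting the bilinear form a_{Omega_{t_k}}).
    move_mesh(u, dt)      : moves the mesh coordinate field by u * dt.
    evaluate_at(u, pts)   : point evaluation of the velocity field at the curve points.
    """
    dt = 1.0 / T
    q = q0.copy()
    for k in range(T):
        u = solve_velocity(q, p0)          # u_{t_k} in V(Omega_{t_k})
        move_mesh(u, dt)                   # phi_{k+1} = phi_k + u o phi_k * dt
        q = q + evaluate_at(u, q) * dt     # the curve rides along with the mesh
    return q                               # q_1: the time-1 flow of the initial curve

The forward operator ℱ then wraps this loop and applies the smoothing 𝒞 to the indicator function of the interior of the returned curve.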
Given a target shape the inverse problem of interest is therefore to recover the momentum ^* such that: ≈ℱ(^*) + ξ, where the noise ξ∈𝒩(0, 𝒞) is a Gaussian measure with mean zero and covariance operator 𝒞 defined later on. To tackle this inverse problem, we use an Ensemble Kalman iteration. We let N denote the number of ensemble members and let ^j, j=1,…,N denote the momenta corresponding to ensemble member j. The ensemble mean momentum and the mean predicted shape are: := N∑_j=1^N _j, := N∑_j=1^N ^j, where ^j = 𝒞1_ℱ(^j). The Kalman update operator is defined by: = _ Q [_QQ + ξ I]^-1, where ξ is a regularisation parameter described later and I is the identity operator. The actions above are given by: _QQ[·] = 1/N - 1∑_j=1^N (^j - ) ⟨^j - , ·⟩_L^2, _ Q[·] = 1/N - 1∑_j=1^N (^j - ) ⟨^j - , ·⟩_L^2. The data misfit at iteration k of the EKI is defined as: 𝔈^k = - _L^2(Ω)^2. The prediction and analysis steps are summarised below: * Prediction: For each ensemble member j, compute ^j = 𝒞1_ℱ(^j) and the average using (<ref>). * Analysis: Update each ensemble momentum: ^j+1 = ^j + ( - ^j). § NUMERICAL EXPERIMENTS We present numerical experiments showing that the ensemble Kalman inversion (EKI) algorithm is able to find suitable approximations of a target given a random initial ensemble. Section <ref> describes how we generate the synthetic target that we will use as matching targets. Section <ref> summarises the parameters that we have chosen to in our experiments to match the synthetic data, and section <ref> contains the numerical results. §.§ Synthetic data For simplicity we fix the template curve throughout our experiments and choose the unit circle. The initial mesh is that shown in the top left vignette of Figure <ref>. The computation domain Ω_0 is a triangulation of [-10,10]^2 with mesh resolution[This is the maximum diameter h_K of any triangle K in the triangulation.] h=1. We have taken α=0.5 in (<ref>), T=15 time steps and have used piecewise constant finite elements on the mesh skeleton to represent (although we compute only with functions supported over the submanifold q_0 ∘ S^1⊂). We use the forward operator described previously to generate synthetic targets for this set of parameters. Applying the forward operator ℱ to the momenta in (<ref>) below we produce the targets seen in Figure <ref>. ^†_contract = -1.38π, ^†_squeeze = 0.83π e^-y^2/5 x< -0.3 5/3πsin(x / 5)|y| otherwise, ^†_star = 2.6πcos(2π x/5), ^†_teardrop = -3π sign(y) y<0 3π e^-x^2/5 otherwise. With ^† we associate the following relative error at each iterate k: ℛ^k = ^k - ^†_L^2(q_0∘ S^1) / ^†_L^2(q_0∘ S^1). The consensus deviation 𝒮^k of an ensemble at iteration k in equation (<ref>) is defined below: 𝒮^k = N∑_j=1^N ^j,k - ^k_L^2(q_0∘ S^1), where ^j,k denotes the momentum of ensemble member j at iteration k. This quantity is a useful diagnostic which measures the information remaining in the ensemble after iteration k. Since EKI relies on estimates of the forecast covariance, consensus is reached when 𝒮^k approaches zero, at which point the algorithm can be stopped. In all our simulations we invert the system in (<ref>) using the direct solver MUMPS <cit.>; investigating a preconditioned iterative solver is subject to future work. For details on the validation of the implementation of the Wu-Xu element in Firedrake, see <cit.> and the Zenodo entry <cit.>. §.§ Experimental setup We now describe the setup we have used to test the EKI. 
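In a fully discrete setting, the prediction and analysis steps above reduce to a few lines of linear algebra. The NumPy sketch below assumes the momenta and the smoothed indicator fields have been flattened into vectors, forms the empirical covariances exactly as in the displayed equations, and regularises with ξ I; it is an illustration of the update rather than the implementation used for the experiments.

import numpy as np

def eki_update(momenta, forward, target, xi=1e-3):
    """One EKI iteration.
    momenta : (N, d_m) array, one row per ensemble member m^j
    forward : callable, m^j -> flattened smoothed indicator C 1_{F(m^j)}, shape (d_q,)
    target  : (d_q,) flattened smoothed indicator of the target shape
    """
    N = momenta.shape[0]
    preds = np.stack([forward(m) for m in momenta])         # prediction step, (N, d_q)
    m_bar = momenta.mean(axis=0)
    q_bar = preds.mean(axis=0)
    dM = momenta - m_bar                                     # (N, d_m) deviations
    dQ = preds - q_bar                                       # (N, d_q) deviations
    C_qq = dQ.T @ dQ / (N - 1)
    C_mq = dM.T @ dQ / (N - 1)
    K = C_mq @ np.linalg.inv(C_qq + xi * np.eye(C_qq.shape[0]))   # Kalman gain
    new_momenta = momenta + (target - preds) @ K.T           # analysis step
    misfit = np.linalg.norm(target - q_bar)                  # data misfit diagnostic
    consensus = np.mean(np.linalg.norm(momenta - m_bar, axis=1))  # consensus deviation
    return new_momenta, misfit, consensus

Forming C_qq explicitly is only affordable for modest d_q; larger problems would work in the N-dimensional ensemble subspace instead.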
Firstly, we have taken T=10 and α=1 so the parameters differ from those used to generate the synthetic targets. Recall that EKI requires an initial ensemble, in this case of momenta. The basis coefficients of the momenta was sampled from a uniform distribution over the interval [-25,25], with different realisations for each ensemble member. The parameter ξ in (<ref>) determines the ratio between the influence of the prediction covariance on the Kalman gain. We set ξ=10^-3 in (<ref>), although adaptive tuning of this parameter to avoid overfitting is possible; an early termination rule is suggested in <cit.>. We choose 𝒞 = ( - κΔ), κ = 10 in (<ref>), as this smoothes out the mismatch sufficiently for our computational domain. The quality of the matching is directly related to the size and variance of the ensemble as the solution is sought as a linear combination of its members. We conduct experiments for two ensemble sizes, N=20, N=40 and N=80. These were chosen since, with the parameter set as above, = 48 in order to develop an understanding of how EKI performs when the ensemble size is smaller than, similar to and larger than the dimension of the state, while still keeping the overall computational cost such that the experiments can be done in a reasonable amount of time. The case where N<< is the de facto situation for ensemble methods as the MC approximation allows for a computationally feasible method. In general, small ensemble sizes can lead to filter inbreeding (the forecast covariance is underestimated), filter divergence (the gain does not adequately correct the ensemble), or spurious correlations <cit.>. We comment on each of these later. §.§ Results We have run EKI 10 times for each value of N with different draws for the initial ensemble to assess the robustness of the algorithm with respect to the starting point. Figure <ref> shows examples of the numerical results we obtain for curve matching using EKI. Note that only five iterations of EKI were necessary to produce the results shown in this section to reach a relative tolerance below 3%. Qualitatively a larger ensemble size leads to a better match, which is to be expected. Ensemble methods such as the EKI offer a practical advantage to gradient methods given their inherent parallelisability. Indeed, the prediction step discussed in Section <ref> can be done in parallel for each ensemble member. We therefore start N processes corresponding to each member, and used a Message Passing Interface (MPI) <cit.> implementation to exchange information between them (the MPI reduce operation, specifically). Thanks to this parallelisation, five iterations of EKI takes less than two minutes for N=20, five minutes for N=40 and 14 minutes for N=80 on a 2021 MacBook Pro[Apple M1 Pro chip, 16 GB of memory.]. Figure <ref>, <ref> and <ref> show the relative errors, data misfits and consensus deviations for our experiments across the selected targets and ensemble sizes. These all decrease at various rates in the early iterations after which they stagnate. As the Kalman gain corrects the ensemble members, and therefore the motion of their respective curves, the data misfit decreases meaning that each member improves its prediction. This increases consensus in the ensemble, which explains what is seen in figure <ref>. We notice from the data misfits and the momentum consensus that higher values of N provides a more accurate approximation of the true momentum, which explains the accuracy of the matches seen in figure <ref>. 
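Since the prediction step is embarrassingly parallel, one forward solve per ensemble member can be dispatched to its own MPI rank and only the predicted fields gathered for the analysis. The mpi4py sketch below illustrates this pattern with one member per rank and rank 0 performing the Kalman update; the process layout and the forward callable are assumptions of the illustration, not the exact communication scheme used for the timings above.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def parallel_eki_step(my_momentum, forward, target, xi=1e-3):
    # Prediction: every rank integrates its own ensemble member.
    my_pred = forward(my_momentum)

    # Gather momenta and predictions on rank 0 for the analysis step.
    momenta = comm.gather(my_momentum, root=0)
    preds = comm.gather(my_pred, root=0)

    updated = None
    if rank == 0:
        M = np.stack(momenta)                      # (N, d_m)
        Q = np.stack(preds)                        # (N, d_q)
        dM, dQ = M - M.mean(0), Q - Q.mean(0)
        C_qq = dQ.T @ dQ / (size - 1)
        C_mq = dM.T @ dQ / (size - 1)
        K = C_mq @ np.linalg.inv(C_qq + xi * np.eye(C_qq.shape[0]))
        updated = list(M + (target - Q) @ K.T)     # one updated momentum per rank

    # Scatter the corrected momenta back to the ranks.
    return comm.scatter(updated, root=0)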
Note that the relative error, which is a surrogate for posterior consistency, also decreases (albeit not with a clear pattern across shapes and ensemble sizes). Since the forward operator in use is heavily nonlinear, theoretical convergence results are not readily obtained at this stage. We highlight that the Kalman gain is very efficient in correcting the ensemble momenta, with the consensus decreasing exponentially in the early stages of the algorithm. We comment on this later. The conclusion is therefore that even at a modest ensemble size, EKI performs well. It is not certain that the same behaviour that we see above (i.e. few iterations are needed) will scale with N and the size of the problem, but the results are promising for research in this direction. Higher values of ξ were found to slow the convergence of the algorithm compared to the selected value, which is consistent with the behaviour for landmark-based EKI <cit.>, and we do not comment on it further. We noticed that the value of κ also influences the convergence of the EKI; for small values the operator - κΔ approaches the identity, and since the mismatch X is computed from point evaluations of the finite element function, information can be lost if the grid is not sufficiently refined. A larger value of κ “spreads out” the mismatch which improves convergence for coarser grids. § SUMMARY AND OUTLOOK In this paper we have presented a parameterisation- and derivative-free method for matching closed planar curves. A moving mesh discretisation of Hamilton's equations for curves was described using the induced diffeomorphism of a vector field occupying the Wu-Xu finite element space. We also describe a transformation theory for this element facilitating a computationally performant forward model for use in the associated inverse problem. Finding the momentum encoding the forward motion of the template matches a desired curve was treated as a Bayesian inverse problem in section <ref> and EKI was used to approximate its solution. The numerical results presented in section <ref> suggests that the method shows great promise. Not only does is it easy to implement, the EKI is shown to quickly reach ensemble consensus meaning that it is efficient in exploring the subspace spanned by the initial ensemble. This is in part thanks to the momentum being a one-dimensional signal on the template. Treating the mismatch term in a negative Sobolev norm was shown to increase both accuracy of our results and robustness to mesh resolution. We also showed that the method is robust to the choice of initial ensemble even when the ensemble size is less than half the dimension of the forward problem. Further, assuming the forward operator is scalable as the mesh is refined (large-scale PDE solves are common in many areas of scientific computing, and the inverse needed in the Kalman gain scales cubically in N <cit.>). Future work includes proving convergence of the finite element discretisation for (<ref>) and subsequently using these error estimates to quantify error in a rigorous treatment of the Bayesian inverse problem <cit.>. As indicated in <cit.>, some challenges exist for nonconforming finite element methods with singular source terms. The template considered in this paper is a piece-wise linear curve. An obvious extension would be to apply isoparametric methods to cater for piece-wise higher-order polynomial curves. The effect of this would only affect the right-hand side and would not affect regularity results for the velocity. 
An advantage of the finite element method for curves is also that it allows for adaptivity, e.g. refinement of the mesh only in the vicinity of the embedded template. We considered problems of modest size to illustrate the discretisation and the EKI. As the mesh is refined, it is likely that the dimension of the forward operator dwarfs the size of the ensemble and the effects of the MC approximation become more pronounced. This is the case for ensemble methods in e.g. numerical weather prediction, and several techniques exist to counter these effects <cit.> (e.g. localisation or covariance inflation). In particular, localisation methods may be suitable to assume conditional independence between separated states (i.e. parts of the shape that are distant in physical space) so as to counter spurious correlations. § PROOF OF THEOREM <REF> The momentum satisfies ṗ_t + (∇ u_t∘ q_t)^T p_t = 0. Using the ansatz p_t = J_t^-T p_0, where J_t denotes the Jacobian of the flow along the curve and satisfies J̇_t = (∇ u_t∘ q_t) J_t, we verify: ṗ_t + (∇ u_t∘ q_t)^T p_t = d/dt(J_t^-T) p_0 + (∇ u_t∘ q_t)^T J_t^-T p_0 = - J_t^-T J̇_t^T J_t^-T p_0 + (∇ u_t∘ q_t)^T J_t^-T p_0 = - J_t^-T ((∇ u_t∘ q_t) J_t)^T J_t^-T p_0 + (∇ u_t∘ q_t)^T J_t^-T p_0 = - J_t^-T J_t^T (∇ u_t∘ q_t)^T J_t^-T p_0 + (∇ u_t∘ q_t)^T J_t^-T p_0 = - (∇ u_t∘ q_t)^T J_t^-T p_0 + (∇ u_t∘ q_t)^T J_t^-T p_0 = 0 .
http://arxiv.org/abs/2307.04100v1
20230709052546
Visible and infrared self-supervised fusion trained on a single example
[ "Nati Ofir" ]
cs.CV
[ "cs.CV" ]
Visible and infrared self-supervised fusion trained on a single example Nati Ofir August 12, 2023 ======================================================================= This paper addresses the problem of visible (RGB) to Near-Infrared (NIR) image fusion. Multispectral imaging is an important task relevant to image processing and computer vision, even more so since the development of the RGBT sensor. While the visible image sees color and suffers from noise, haze, and clouds, the NIR channel captures a clearer picture, and it is significantly required by applications such as dehazing or object detection. The proposed approach fuses these two aligned channels by training a Convolutional-Neural-Network (CNN) with Self-Supervised-Learning (SSL) on a single example. For each such pair of RGB and IR images, the network is trained for a matter of seconds to deduce the final fusion. The SSL is based on a Structure-of-Similarity (SSIM) loss combined with an Edge-Preservation (EP) loss. The labels for the SSL are the input channels themselves. This fusion preserves the relevant detail of each spectral channel while not relying on a heavy training process. In the experiments section, the proposed approach achieves better qualitative and quantitative multispectral fusion results with respect to other recent methods that are not based on large-dataset training. § INTRODUCTION The problem of visible-to-infrared image fusion is a well-studied area with a plethora of works. Even though many solutions have been developed, there is still a need for an Artificial-Intelligence (AI) approach that is based on Deep-Learning (DL) yet does not require heavy pre-training and the acquisition of a large dataset to carry out a single multispectral fusion. This paper introduces a DL method that works on a single example and produces a fusion result in an SSL way such that no manual human labeling is required. Given this solution, every multispectral camera can be extended with a fusion channel such that the observer is able to see the details captured by each spectrum without flickering between the different images. While the visible RGB (0.4-0.7μ m) sees color information, the NIR (0.8-2.5μ m) sees beyond haze and fog and suffers less from the noise of low-light imaging. Since each spectral channel captures different information about the scene, their fusion is informative and relevant for a person observing the camera. While most DL fusion approaches, such as attention-based ones <cit.>, require a time-consuming training phase, the proposed method trains the CNN weights for each input image for forty seconds on an Nvidia Geforce GTX 3060 GPU. In addition, while classic image fusion methods, such as <cit.>, are relatively fast to compute, the experiments of this paper show that they preserve less of the input detail according to several quantitative measurements. For example, Figure <ref> demonstrates the proposed method's results of RGB to NIR fusion on a country example of the dataset <cit.>. These results manage to combine the information of both inputs: it can be seen that the far mountains, seen only in infrared, are emphasized by the computed CNN in the final fusion. Moreover, the color information of the RGB sensor is preserved in the fusion. Even though this method is based on a learned CNN, the outcome looks natural and free of special artifacts. Often, the input channels are not aligned with each other, and multispectral image registration is required as a preprocessing step.
As the nature of the dataset <cit.> contains small misalignment, this paper proposes simple solutions for that problem. The first approach is to align the images in advance by methods tailored by multispectral imaging like DL based <cit.> and traditional computer vision bases <cit.>. The second solution, that can be integrated into the proposed CNN architecture is to learn a Spatial-Transformation-Network (STN) <cit.> in a holistically end-to-end method to compute the final aligned fusion results. As this example shows, the CNN output does not suffer from channel misregistrations. This manuscript is organized as follows. In Section <ref> the previous methods for image fusion are covered. Next, in Section <ref> the proposed approach is explained in detail including the CNN architecture, training algorithm, and loss functions. Then, Section <ref> illustrate the fusion performance with respect to other methods that are not dependent on the time-consuming training phase. Finally, this paper is concluded in Section <ref>. § PREVIOUS WORK Image fusion is a classic problem of computer vision. Early methods utilized signal characteristics for fusion such as Wavelets-based method <cit.>. Laplacian pyramid blending was used to overcome multi-focus image capturing <cit.> for example. Statistical features of the input images can contribute to their fusion such as Principal-Component-Analysis (PCA) <cit.>. Fusion can be carried out according to spectral analysis of the images as was introduced in <cit.>. A recent approach utilized superpixels <cit.> segmentation for a content-based multispectral fusion<cit.>. The DL revolution produced many related works with state-of-the-art (SOTA) blending performances like <cit.>. Visible and infrared fusion is using DL to enhance object detection <cit.>. The proposed method is utilizing DL techniques and lite-CNN-architecture, however, does not depend on heavy training processes and large datasets contrary to the most recent approaches. The idea of training a CNN on a single example has shown significant potential in super-resolution <cit.> and image-generation by Generative-Adverserial-Network (GAN)<cit.>. This work is the first to utilize single-image training for multispectral image fusion. If the input spectral channels are not geometrically aligned, an apriori step of multispectral registration is required. A single channel registration can be carried out by engineered feature descriptors like Scale-Invariant-Feature-Transform (SIFT) <cit.>. Unfortunately, regular alignment methods usually fail in the multispectral scenario, and therefore a tailored approach to this case is needed. A descriptor that is invariant to different spectra can be based on edge detection <cit.>, like Canny <cit.>, however, this method has limitations on the geometric transformation level. An additional method is to apply for a Mutual-Information based registration <cit.>. MI usually solves translation, or small optical flow fields. Recent methods utilize DL to compute a spectra-invariant descriptor like <cit.>, unfortunately, this method is also geometrically limited. Another DL method, learned a hybrid network for multispectral key points matching <cit.>, it shows better accuracy, however, depends on a training dataset that is manually labeled. The dataset that the proposed methods fuse <cit.> contains small misalignments that are usually solved holistically by the learned CNN. 
The geometric correction can also be trained using a Spatial-Transformation-Network (STN) <cit.>, which computes a geometric transformation by end-to-end learning. In conclusion, multispectral image alignment is a challenging problem that is rarely fully solved; however, it has become less critical since the development of RGBT cameras <cit.>. Self-Supervised-Learning (SSL) is a relevant field, enabling AI and DL to be independent of human labeling. A common SSL approach is utilizing contrastive learning <cit.>. In this paper, the proposed method uses the input spectral channels as labels for their fusion, based on the Structure-of-Similarity-Measure (SSIM) <cit.> and an Edge-Preservation (EP) loss <cit.>. As a whole, this study introduces a holistic solution for visible-to-infrared fusion and registration based on SSL. § THE PROPOSED MULTISPECTRAL FUSION This section introduces the proposed method to fuse visible and infrared multispectral images by training a fusion CNN on a single example for several seconds using self-supervised loss functions. §.§ Network architecture The proposed CNN architecture for image fusion takes two channels of any image dimension and outputs a single channel with the same height and width as the input. A typical image in the dataset used to evaluate the method <cit.> is 900x768 pixels. The compact fusion network contains four convolutions with 3x3 kernels; the first three are followed by a ReLU(x) = max(x,0) activation, and the final output convolution is followed by Sigmoid(x) = e^x/(1+e^x). The architecture contains two skip connections that are based on numeric addition. Before the feed-forward CNN, an STN is applied to align the spectral channels. In addition, a UNet <cit.> with a Resnet18 backbone <cit.> is applied in parallel to the feed-forward CNN to obtain a smooth fusion with semantic information. For more graphic details see Figure <ref>; for the whole CNN parameters see Table <ref>. The total number of parameters is ≈ 4M, such that the CNN is versatile and can be trained quickly. In the experiments Section <ref>, an ablation study is conducted on this architecture, and each part is assigned a contribution score to the final fusion result. Figure <ref> shows a compact version of the proposed architecture, which, according to the ablation study done in this paper, provides the main contribution to the final fusion results. §.§ Training algorithm To train the method's CNN, a training loop is introduced. See Algorithm <ref> for the whole fusion algorithm, containing mainly the self-supervised training loop. The RGB input image is converted to grayscale (GRAY), and the training then computes the CNN weights to fuse a specific pair of NIR and GRAY images. During training, the network weights are updated by a combination of SSIM <cit.> and Edge Preservation <cit.> losses. Finally, after the training loop, the fusion is computed and is used to modify the RGB channels to contain the fusion result. The number of epochs found to be required for high-quality fusion is three hundred. In addition, the CNN is initialized with random weights. §.§ Loss functions The loss functions used to train the CNN are SSIM and Edge Preservation, each self-labeled with the input images. Given two input images I_1, I_2, the SSIM, which correlates with the human visual system, is defined by: (2μ_1μ_2+c_1)(2σ_12+c_2)/((μ_1^2+μ_2^2+c_1)(σ_1^2+σ_2^2+c_2)), where μ is the mean of each image, σ is the standard deviation, and σ_12 is the joint covariance.
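As a concrete reference for the architecture described above, the PyTorch sketch below implements the compact fusion branch: four 3x3 convolutions, ReLU after the first three, a sigmoid after the last, and two additive skip connections. The channel width and the exact placement of the skips are not specified in the text and are assumptions here; the STN and the parallel Resnet18-based UNet branch are omitted for brevity.

import torch
import torch.nn as nn

class CompactFusionCNN(nn.Module):
    """Sketch of the compact fusion branch: four 3x3 convolutions, the first
    three followed by ReLU and the last by a sigmoid, with two additive skip
    connections. The channel width (64) and the placement of the skips are
    assumptions; the paper does not list them explicitly."""
    def __init__(self, width: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(2, width, 3, padding=1)        # NIR + GRAY input
        self.conv2 = nn.Conv2d(width, width, 3, padding=1)
        self.conv3 = nn.Conv2d(width, width, 3, padding=1)
        self.conv4 = nn.Conv2d(width, 1, 3, padding=1)        # single-channel fusion
        self.relu = nn.ReLU(inplace=True)

    def forward(self, nir: torch.Tensor, gray: torch.Tensor) -> torch.Tensor:
        x = torch.cat([nir, gray], dim=1)
        f1 = self.relu(self.conv1(x))
        f2 = self.relu(self.conv2(f1))
        f3 = self.relu(self.conv3(f2 + f1))          # first additive skip (assumed placement)
        return torch.sigmoid(self.conv4(f3 + f1))    # second additive skip (assumed placement)

# Example: fuse a 900x768 pair as in the evaluated dataset.
if __name__ == "__main__":
    net = CompactFusionCNN()
    nir = torch.rand(1, 1, 768, 900)
    gray = torch.rand(1, 1, 768, 900)
    print(net(nir, gray).shape)   # torch.Size([1, 1, 768, 900])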
This similarity function is widely used for understanding the perception of similar images, and it has its differentiable loss definition <cit.>. Regarding the Edge-Preservation loss (EP), it is a regular reconstruction loss, applied after image gradient detection. EP(I_1,I_2) = ||∇ I_1(x)-∇ I_2(x)||_2^2. In the experiment Section <ref> it is shown that using the EP loss in addition to SSIM improves the quantitative fusion results of the proposed method. §.§ Multispectral registration The dataset of <cit.> contains small misalignments between the spectral channels that are basically holistically aligned by the various convolution of the proposed CNN architecture. Even though, if the miss-registration is significant there are approaches to solve it and then fuse with the proposed self-supervised approach. The first solution is based on Spatial-Transformation-Networks (STN) <cit.>. The idea is to apply an STN to the NIR channel at the beginning of the CNN and to train the whole network by the proposed method. If the miss-registration is dramatically significant, then matching is required like the algorithm of <cit.>. § RESULTS The proposed method evaluation is done both quantitatively and qualitatively. For the evaluation, the multispectral dataset <cit.> contains 954 pairs of NIR and RGB, divided into different categories such as country, mountain, urban, and street. The following experiments show that the proposed method produces better results than alternative fast methods for image fusion, in terms of SSIM, Canny <cit.> edge preservation, and statistic correlation. The proposed approach is compared to the latest SuperPixel <cit.>, PCA Fusion <cit.>, and Spectral Fusion <cit.>. In addition, the contribution of the edge preservation loss itself is emphasized. Figure <ref> demonstrates the proposed method visual results, where fusing RGB and IR images from the dataset of <cit.>. It can be seen, that this approach manages to fuse smoothly images from different categories while maintaining the relevant information for each spectral channel. In addition, Figure <ref>, compares the proposed algorithm for fusion to the recent SuperPixel <cit.> method, it shows that the proposed approach picks the relevant information of each spectral channel even though it is holistic and trained in an end-to-end fashion. The SuperPixel method is based on classic computer vision and is engineered to produce such results, the proposed algorithm achieves similar quality of image fusion, while being based on compact short DL CNN training per example. Table <ref> compares the edge preservation of the method when training with and without EP loss. For input images I_1,I_2, their fusion F and their corresponding Canny <cit.> binary-edges C_1, C_2, C_F this loss is defined by: EP(I_1,I_2) = 0.5∑_i∑_x C_i(x) · C_F(x)/∑_x C_i(x). It is demonstrated in the table that the EP loss is crucial for preserving the edge maps in the proposed self-supervised fusion. In addition, Table <ref> shows that the self-supervised fusion achieves the highest SSIM fusion score, where: SSIM(I_1,I_2, F) = 0.5SSIM(I_1, F)+0.5SSIM(I_2,F). This is another proof of the quality of the proposed algorithm. Moreover, Table <ref> depicts similar result for the correlation metric: corr(I_1,I_2, F) = 0.5corr(I_1, F)+0.5coor(I_2,F). In addition, Table <ref> demonstrates in the ablation dataset of the proposed CNN architecture, it shows the fusion SSIM score for every CNN alternative: Compact, Compact+UNet, and Compact+Unet+STN. 
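The two training terms translate into a few lines of PyTorch, sketched below with a uniform 11x11 window for SSIM and forward differences for the image gradients; the window choice, the equal weighting of the NIR and GRAY terms, and the EP weight are assumptions, since the text does not fix them.

import torch
import torch.nn.functional as F

def ssim(x, y, window: int = 11, c1: float = 0.01 ** 2, c2: float = 0.03 ** 2):
    # Mean SSIM with a uniform window; the original may use a Gaussian window.
    mu_x = F.avg_pool2d(x, window, 1, window // 2)
    mu_y = F.avg_pool2d(y, window, 1, window // 2)
    sigma_x = F.avg_pool2d(x * x, window, 1, window // 2) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, window // 2) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, window // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()

def edge_preservation_loss(x, y):
    # || grad(x) - grad(y) ||_2^2 with forward finite differences.
    dx_x, dy_x = x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]
    dx_y, dy_y = y[..., :, 1:] - y[..., :, :-1], y[..., 1:, :] - y[..., :-1, :]
    return ((dx_x - dx_y) ** 2).mean() + ((dy_x - dy_y) ** 2).mean()

def fusion_loss(fused, nir, gray, w_ep: float = 1.0):
    # Self-supervised objective: the two input channels act as labels for their fusion.
    l_ssim = (1 - ssim(fused, nir)) + (1 - ssim(fused, gray))
    l_ep = edge_preservation_loss(fused, nir) + edge_preservation_loss(fused, gray)
    return l_ssim + w_ep * l_ep

Training then amounts to roughly three hundred Adam steps that minimize fusion_loss(net(nir, gray), nir, gray) for the given pair.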
The ablation shows that even the compact CNN can fuse the input images with high quality; however, adding the extra parts to the architecture improves the general performance of the self-supervised training. Overall, this experimental section shows that the self-supervised fusion method trained on a single example achieves high-quality image fusion with respect to competitive fusion alternatives. § CONCLUSIONS In conclusion, this paper introduces a novel approach for infrared and visible image fusion based on short self-supervised CNN training on a single example pair. The paper presented the method's technical details, including the CNN architecture, the training algorithm, and the relevant loss functions. In addition, the experiments of the paper show that the proposed method obtains the best results, both quantitatively and qualitatively, compared with competitive methods for fast multispectral fusion. Overall, this manuscript introduces a practical approach that can be incorporated easily into multi-sensor cameras and systems.
http://arxiv.org/abs/2307.04684v2
20230710163746
FreeDrag: Point Tracking is Not What You Need for Interactive Point-based Image Editing
[ "Pengyang Ling", "Lin Chen", "Pan Zhang", "Huaian Chen", "Yi Jin" ]
cs.CV
[ "cs.CV", "cs.HC", "cs.LG" ]
[ FreeDrag: Point Tracking is Not What You Need for Interactive Point-based Image Editing Pengyang Ling1* Lin Chen1,2* Pan Zhang2 Huaian Chen1† Yi Jin1† 1University of Science and Technology of China 2Shanghai AI Laboratory {lpyang27, chlin}@mail.ustc.edu.cn [email protected] {anchen, jinyi08}@ustc.edu.cn August 12, 2023 ============================================================================================================================================================================================================================================================== type=figure < g r a p h i c s > figure The comparison between DragGAN <cit.> and our proposed FreeDrag. Given an image input, users can assign handle points (red points) and target points (blue points) to force the semantic positions of the handle points to reach corresponding target points. The examples presented on the left and right columns show the cases without/with masks specifying the editable region (brighter area), respectively. Code will be available on https://github.com/LPengYang/FreeDraghttps://github.com/LPengYang/FreeDrag. ]  *Equal Contribution † Corresponding Author To serve the intricate and varied demands of image editing, precise and flexible manipulation of image content is indispensable. Recently, DragGAN <cit.> has achieved impressive editing results through point-based manipulation. However, we have observed that DragGAN struggles with miss tracking, where DragGAN encounters difficulty in effectively tracking the desired handle points, and ambiguous tracking, where the tracked points are situated within other regions that bear resemblance to the handle points. To deal with the above issues, we propose FreeDrag, which adopts a feature-oriented approach to free the burden on point tracking within the point-oriented methodology of DragGAN. The FreeDrag incorporates adaptive template features, line search, and fuzzy localization techniques to perform stable and efficient point-based image editing. Extensive experiments demonstrate that our method is superior to the DragGAN and enables stable point-based editing in challenging scenarios with similar structures, fine details, or under multi-point targets. § INTRODUCTION The domain of image editing utilizing generative models has gained substantial attention and witnessed remarkable advancements in recent years <cit.>. In order to effectively address the intricate and diverse demands of image editing in real-world applications, it becomes imperative to achieve precise and flexible manipulation of image content. Consequently, researchers have proposed two primary categories of methodologies in this domain: (1) harnessing prior 3D models <cit.> or manual annotations <cit.> to enhance control over generative models, and (2) employing textual guidance for conditional generative models <cit.>. Nevertheless, the former category of methodologies often faces challenges in generalizing to novel assets, while the latter category exhibits limitations in terms of precision and flexibility when it comes to spatial attribute editing. To overcome these aforementioned limitations, a recent pioneering study, known as DragGAN <cit.>, has emerged as a remarkable contribution in the realm of precise image editing. This work has garnered significant attention, primarily due to its interactive point-based editing capability, termed "drag" editing. 
The DragGAN framework addresses the challenge by introducing a two-step iterative process: (1) a motion supervision step, which directs the handle points to migrate towards their corresponding target positions, and (2) a point tracking step, which consistently tracks the relocated handle points' positions. This innovative approach enables users to exert precise control over the editing process by specifying pairs of handle and target points on the given image. Notwithstanding the praiseworthy achievements exhibited by DragGAN, there exist several issues, as shown in Figure <ref>. One issue is miss tracking, whereby DragGAN encounters difficulty in effectively tracking the desired handle points. This issue arises particularly in highly curved regions with a large perceptual path length, as observed in latent space <cit.>. In such cases, the optimized image undergoes drastic changes, leading to handle points in subsequent iterations being positioned outside the intended search region. Additionally, in certain scenarios, achieving satisfactory outputs necessitates the disappearance of handle points, as shown in Figure <ref>. It is important to note that during miss tracking, the cumulative error in the motion supervision step increases progressively as iterations proceed, owing to the misalignment of tracked features. Another issue that arises is ambiguous tracking, where the tracked points are situated within other regions that bear resemblance to the handle points. This predicament emerges when there are areas in the image that possess similar features to the intended handle points, leading to ambiguity in the tracking process (see Figure <ref>). This issue introduces a potential challenge as it can misguide the motion supervision process in subsequent iterations, leading to inaccurate or misleading directions. To remedy the aforementioned issues, we propose a solution called FreeDrag, which adopts a feature-oriented approach to free the burden on point tracking within the point-oriented methodology of DragGAN. To address the miss tracking issue, we introduce a methodology where a template feature is maintained for each handle point to supervise the movements during the iterative process. This template feature is implemented as an exponential moving average feature that dynamically adjusts its weights based on the errors encountered in each iteration. By utilizing this adaptive and stable template feature, we ensure reliable point-based editing. Even when miss tracking occurs in a specific iteration, the maintained template feature remains intact, preventing the optimized image from undergoing drastic changes. To handle the ambiguous tracking issue, we propose line search and fuzzy localization. Line search restricts the movements along a specific line connecting the original handle point and the corresponding target point. This constraint effectively reduces the presence of ambiguous points and minimizes the potential misguidance of the movement direction in subsequent iterations. On the other hand, fuzzy localization alleviates the burden of precise localization, thereby enhancing the optimization process. To summarize, our key contributions are as follows: * We have observed that the original DragGAN approach encounters challenges in effectively addressing miss tracking and ambiguous tracking scenarios. 
* We propose FreeDrag, a simple but effective interactive point-based image editing framework that incorporates adaptive template features, line search, and fuzzy localization techniques to free the burden on point tracking. * Extensive experiments demonstrate the superiority and stability of FreeDrag in point-based image editing, marking a significant advancement in the field of flexible and precise image editing. § FORMULATION Considering a latent code z drawn from the latent space 𝒵, the methodology employed by StyleGAN <cit.> involves mapping this code into the 𝒲 space utilizing a mapping network. The resulting intermediate latent code w is subsequently utilized by the synthetic network to generate the corresponding image I. The objective of this paper is to realize point-based image editing on I by optimizing the associated latent code w. Inspired by the previous work <cit.>, our approach exploits optimization techniques within the extended 𝒲^+ space. This choice is motivated by the heightened expressive potential offered by the 𝒲^+ space for conducting image editing tasks. To facilitate point-based manipulation, our framework incorporates a collection of handle points p_i along with their corresponding target points t_i, which are provided by users. Point-based editing, in this context, involves the transfer of semantic features from the handle points p_i to the target points t_i, effectively allowing users to visually "drag" these features to desired locations. § ANALYSIS OF DRAGGAN The DragGAN method achieves point-based image editing through an iterative process consisting of the following two steps: (1) Supervised motion: The method enforces the correspondence between the current handle point and its corresponding target point by ensuring the proximity of F(q_i + d_i) to F(q_i). Here, q_i represents the neighboring pixels of handle point p_i within a radius r_1 defined as Ω _1(p_i, r_1). The vector d_i is a normalized vector pointing from p_i to t_i, where t_i is the target point. The feature values F(q_i) at pixel q_i are derived using bi-linear interpolation. (2) Point tracking: The location of the moved handle point is updated using point tracking. This is achieved by performing the nearest search in the neighborhood of the handle point, i.e., p_i:= min || F^'(q_i) -f_i ||_1. Here, q_i belongs to the neighborhood defined by Ω _2(p_i, r_2) with a radius r_2. The feature f_i represents the initial handle point's feature on the original feature map F_0, and F^'(·) denotes the features obtained from the resized feature map of StyleGAN2 <cit.>. §.§ Instability of point-tracking While DragGAN offers a promising solution for point-based image editing, our observations reveal that it often experiences challenges such as handle point loss, inaccurate editing, and distorted image generation in certain scenarios. We attribute these issues to the intrinsic instability of the point tracking step, which can be understood from the following two aspects: ∙ Constant Value of f_i: Throughout the entire moving process, the value f_i remains constant and fails to adequately reflect the evolving state of the handle point during its motion. ∙ Implicit Assumption of Unique Points: Point tracking assumes that there is only one point within the searching areas that inherits the feature of the handle point during each motion. However, this assumption is not always reliable. 
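For concreteness, the nearest-neighbour tracking rule of step (2) can be written as the small search below; whenever several pixels inside the (2r_2+1) x (2r_2+1) window carry near-identical features, the argmin is ill-defined, which is exactly the failure mode elaborated next. The function illustrates the rule as described here and is not DragGAN's released code.

import torch

def point_tracking(feat: torch.Tensor, f_i: torch.Tensor, p: tuple, r2: int):
    """Nearest-neighbour tracking step as described for DragGAN: search the
    (2*r2+1)^2 patch around the current handle point p = (row, col) for the pixel
    whose feature is closest (L1) to the initial handle feature f_i.
    feat: (C, H, W) resized feature map, f_i: (C,)."""
    C, H, W = feat.shape
    r0, c0 = p
    rows = slice(max(r0 - r2, 0), min(r0 + r2 + 1, H))
    cols = slice(max(c0 - r2, 0), min(c0 + r2 + 1, W))
    patch = feat[:, rows, cols]                        # (C, h, w)
    dist = (patch - f_i[:, None, None]).abs().sum(0)   # (h, w) L1 distances
    idx = torch.argmin(dist)
    dr, dc = divmod(idx.item(), dist.shape[1])
    return rows.start + dr, cols.start + dc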
Firstly, the desired point may lie outside the searching areas due to drastic content changes, resulting in incorrect tracking (as shown in Figure <ref>). Secondly, misleading guidance can cause points to disappear, further complicating the tracking process (as seen in Figure <ref>). Additionally, the presence of points with similar features in similar or symmetrical structures (such as lips, eyes, and general contours) makes it challenging to discriminate the desired points from the searching areas, leading to ambiguous tracking (as illustrated in Figure <ref>). Moreover, choosing the radius size r_2 poses an internal conflict. On one hand, a larger r_2 enables searching the handle point from a broader image region, but on the other hand, it increases the likelihood of similar interfering pixels being included. These factors collectively contribute to the instability observed during the point tracking process in DragGAN for point-based image editing. §.§ Impact of unstable point tracking When point tracking fails, the resulting searched handle point is prone to errors. This flaw significantly undermines or disrupts the point-based manipulation from two aspects: (1) Incorrect Movement Direction and Optimization Constraint: Erroneous handle points provide inaccurate movement directions (i.e., d_i) and optimization constraints (i.e., F(q_i)) during motion supervision. As a result, the quality of the final image is compromised, leading to inaccurate or distorted editing results. (2) Lack of Timely Termination Sign: In cases where point tracking fails, the absence of a reliable termination sign hampers the timely completion of the entire manipulation process. This can result in unnecessary time consumption or necessitate additional intervention, causing inconvenience and potential frustration for users. § METHODOLOGY Considering the instability of point tracking, we propose a feature-oriented approach to free the burden of precise point tracking in “drag” editing, termed FreeDrag. Specifically, we introduce the concept of adaptive template features to enable the reliable recording of the handle point's feature during motion, without relying on the precise location of the handle point. By compelling the recorded feature to migrate toward an assigned point, the handle point is potentially encouraged to move to the assigned point. As a result, the handle point can progressively migrate to the corresponding target point by forcing the assigned point to approach the target point step by step. To identify suitable assigned points for stable feature migration, we propose a fuzzy localization strategy that incorporates a customized point assignment scheme, thereby reducing the reliance on precise location information. Additionally, to alleviate the potential misguidance caused by ambiguous points, we introduce a line search strategy that intentionally confines the assigned points to lie on the line connecting the original handle point and the corresponding target point. We elaborate on the above techniques in subsequent sub-sections. . §.§ Adaptive Template Features For a given original handle point p_i, we denote the corresponding features of its neighboring points within a radius r as F_r(p_i). By enforcing F_r(t_i^1) to approximate F_r(p_i), we can potentially encourage the handle point to move towards the first assigned location t_i^1. However, an immediate issue is how to obtain the features of the handle point without performing precise point tracking. 
It is not viable to directly adopt F_r(t_i^1) since there is no assurance that p_i will be precisely moved to t_i^1 within the limited number of steps. Therefore, we introduce the concept of adaptive template features to record the feature values of the handle point based on the quality of motion, i.e., F_ema^k = λ·F_r(t_i^k) + (1 - λ ) · F_ema^k-1 , where F_ema^0=F_r(p_i), t_i^k is the assigned location for k-th motion (t_i^0 = p_i), and λ is an adaptive coefficient that reflects the quality of motion to some extent. The purpose of Eq. (<ref>) is to determine the extent to update the recorded feature values F_ema according to the quality of each motion. Intuitively, if the handle point is successfully moved to t_i^k, we expect F_ema^k to inherit the values of F_r(t_i^k). Otherwise, we expect F_ema^k to maintain the values of F_ema^k-1. This selective updating strategy improves the smoothness of F_ema, making it more resilient to significant content distortion and facilitating stable point movement. Denote the value of F_ema^k-1 - F_r(t_i^k)_1 at the beginning/last optimization step (one sub-motion usually consists of multiple optimization steps) are L_ini^k and L_end^k, respectively, i,e, L_ini^k = F_ema^k-1 - F_r^ini(t_i^k)_1, L_end^k = F_ema^k-1 - F_r^end(t_i^k)_1, where F_r^ini(t_i^k) and F_r^end(t_i^k) denote the values of F_r(t_i^k) at the beginning/last optimization step in k-th sub-motion. For the motion towards t_i^k, we denote the expectant value of L_ini^k as l, i,e., l = E[ L_ini^k], where E[ ·] denotes the expectation function. A larger value of L_ini^k indicates a more difficult motion towards t_i^k, and a smaller value of L_end^k implies a higher quality of motion. Therefore, the adaptive coefficient λ in Eq. <ref> is defined as: λ = (1 + exp(α· (L_end^k - β )))^ - 1, where α and β are two positive constants, and exp(·) is the exponential function. To prevent irreversible deviation during a single sub-motion, we impose a constraint on the maximum value of λ. Considering the following scenarios: (1) the well-optimized case where L_end^k = 0.2 · l, and we set λ=0.5; (2) the ill-optimized case where L_end^k = 0.8 · l, we set λ=0.1, we can obtain the following equations. 0.5 = (1 + exp(α·(0.2 · l - β )))^ - 1, 0.1 = (1 + exp( α·(0.8 · l - β )))^ - 1. Thus, α = ln(9)/(0.6 · l) and β = 0.2 · l can be derived from the above equations. §.§ Fuzzy Localization via Line Search Given F_ema^k, which records the features of handle point after k-th sub-motion towards t_i^k, the motion supervision in the subsequent (k+1)-th sub-motion towards t_i^k+1 is formulated as follows: ℒ_motion = F_ema^k - F_r(t_i^k+1)_1. To find a suitable t_i^k+1 for smooth feature migration in Eq. <ref>, we perform localization based on both the motion distance and feature difference, i.e., t_i^k+1 = S(t_i^k,t_i, F_ema^k,d,l), where S(·) is the localization function, t_i^k+1 is the located position, t_i is the location of the final target point, d controls the maximum distance between t_i^k+1 and the last location t_i^k, i.e., ||t_i^k+1 -t_i^k||_2 ≤ d, and l is the expectant value of feature difference at the beginning of each motion ( see Eq. <ref>). To eliminate ambiguous localization caused by similar points, S(·) performs line search, i,e, the search range is from t_i^k to t_i^k + d·t_i - t_i^k/t_i - t_i^k_2. In addition, to satisfy Eq. <ref>, the searched t_i^k+1 is forced to own the smallest ||F_r(t_i^k+1) - F_ema^k||_1.. - l_1 in the decile points of the search range. 
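The template update and the line-search localization can be sketched as follows. Here sample_feat(point) is a hypothetical helper that bilinearly samples the local patch feature F_r(·) from the generator's feature map, and the rule for choosing among the decile points is read as picking the candidate whose L1 distance to F_ema is closest to the expected value l; everything else follows the equations above.

import math
import torch

def adaptive_lambda(L_end: float, l: float) -> float:
    # lambda = (1 + exp(alpha * (L_end - beta)))^-1 with alpha = ln(9)/(0.6 l), beta = 0.2 l,
    # so a well-optimised sub-motion (L_end = 0.2 l) gives 0.5 and an ill-optimised one (0.8 l) gives 0.1.
    alpha = math.log(9.0) / (0.6 * l)
    beta = 0.2 * l
    return 1.0 / (1.0 + math.exp(alpha * (L_end - beta)))

def update_template(F_ema: torch.Tensor, F_curr: torch.Tensor, L_end: float, l: float) -> torch.Tensor:
    # F_ema^k = lambda * F_r(t_i^k) + (1 - lambda) * F_ema^{k-1}
    lam = adaptive_lambda(L_end, l)
    return lam * F_curr + (1.0 - lam) * F_ema

def line_search(t_k: torch.Tensor, target: torch.Tensor, F_ema: torch.Tensor,
                sample_feat, d: float, l: float, n_div: int = 10) -> torch.Tensor:
    """Search the segment of length at most d from t_k towards the final target and
    return the decile point whose feature difference to F_ema is closest to l."""
    direction = target - t_k
    dist = torch.linalg.norm(direction)
    if dist < 1e-8:
        return t_k
    step = direction / dist * min(d, dist.item())
    candidates = [t_k + step * (i / n_div) for i in range(1, n_div + 1)]
    scores = [abs((sample_feat(c) - F_ema).abs().sum().item() - l) for c in candidates]
    return candidates[int(min(range(len(scores)), key=scores.__getitem__))]

Each sub-motion then optimises the latent code for a handful of steps against ℒ_motion before the template and the located point are updated.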
Furthermore, to handle the coupled movement of multiple points, we incorporate a fallback mechanism. The entire localization scheme can be expressed as follows: t_i^k+1 = S(t_i^k, t_i, F_ema^k, d, l) if L_end^k ≤ 0.5·l; t_i^k+1 = t_i^k if 0.5·l < L_end^k ≤ L_ini^k; and t_i^k+1 = S(t_i^k - d·(t_i - t_i^k)/||t_i - t_i^k||_2, t_i, F_ema^k, 2d, 0) otherwise. In the exceptional case where L_end^k > L_ini^k, we set l = 0 to immediately locate the point and ensure the seamless inheritance of the features F_ema^k. Unlike DragGAN, which relies on precise point tracking at each motion to determine the exact location of the handle point, the localization strategy described in Eq. <ref> is more flexible and fuzzy. It aims to bring the located point close to the handle point by ensuring a limited feature difference with respect to the adaptive template feature F_ema. This approach provides a suitable gradient for each sub-motion and reduces the dependence on precise point tracking. By breaking down the overall movement towards the final location t_i into multiple sub-motions towards customized locations t_i^k, we can control the difficulty of each sub-motion and gradually approach the target location t_i. §.§ Termination Signal For each customized sub-motion in Eq. <ref>, the maximum number of optimization steps is set to 5. To enhance efficiency, for each movement we pause the optimization process if the value of Eq. <ref> for all handle points falls below 0.5·l. The final termination signal is determined by the remaining distance ||t_i^k - t_i||_2. Furthermore, for each handle point, once its motion terminates, we fix its feature to F_ema so that it remains stationary. §.§ Directional Editing Given a binary mask assigned by the user, the mask loss is defined as: ℒ_mask = ||(F_0 - F_r(t_i^k)) ⊙ (1 - M)||_1, where F_0 denotes the initial feature of the local patch around p_i and ⊙ is element-wise multiplication. § EXPERIMENTS We evaluate the proposed FreeDrag on various images generated by StyleGAN2. We observe that the difficulty of point manipulation varies across models, so we set different hyperparameters for specific generative models. Generally, for precise editing areas such as the human face, smaller values of d and l are suggested, and vice versa. All optimization processes are performed on the feature map with a resolution of 128×128, and we adopt the Adam <cit.> optimizer with a learning rate of 0.002. As depicted in Fig. <ref>, the proposed FreeDrag successfully avoids the abnormal disappearance of the handle point (e.g., the vanished eyes, glasses, mouth, and vehicle wheel in examples (1)-(4)), while preserving structural integrity (e.g., avoiding the distortion of the animal legs and the building's roof in examples (5)-(7)), showcasing its superiority in fine-detail editing. Moreover, FreeDrag exhibits robustness in handling similar points and drastic content distortions, resulting in stable and precise point movement, as demonstrated in examples (8)-(10). Additionally, FreeDrag effectively mitigates potential misguidance during the optimization steps, leading to more natural and coherent editing results, as observed in examples (11)-(12) in Fig. <ref>. § DISCUSSION Although our method achieves remarkable image editing results, one may argue that the current pipeline, based on generative adversarial networks, is still inevitably limited by the capacity of GANs. Fortunately, our proposed FreeDrag framework is not limited to GAN-based methods.
In principle, as inspired by <cit.>, if we replace the generative model with a diffusion model <cit.> and optimize the diffusion latent instead of the latent code, we can still efficiently and robustly perform interactive point-based image editing using the proposed adaptive template feature mechanism and the fuzzy localization technique with line search. We will explore these possibilities in future work. § CONCLUSION In this study, we propose FreeDrag, an interactive point-based image editing framework that eliminates the need for unstable point tracking. By incorporating fuzzy localization equipped with line search, FreeDrag decomposes the total movement into numerous sub-motions whose movement distance and feature variation are controlled, facilitating more stable point movement. Meanwhile, the concept of adaptive template features is introduced to selectively update the recorded feature values, which improves robustness against point disappearance. Extensive experiments demonstrate the superiority and stability of FreeDrag in dealing with drastic content changes and similar structures, marking a significant advancement in flexible and precise image editing.
http://arxiv.org/abs/2307.04092v1
20230709042603
Coupled-channel $D^\ast K^\ast -D_s^\ast ρ$ interactions and the origin of $T_{c\bar{s}0}(2900)$
[ "Man-Yu Duan", "Meng-Lin Du", "Zhi-Hui Guo", "En Wang", "Dian-Yong Chen" ]
hep-ph
[ "hep-ph", "hep-ex" ]
[addref]
http://arxiv.org/abs/2307.04562v1
20230710135124
Full-F Turbulent Simulation in a Linear Device using a Gyro-Moment Approach
[ "B. J. Frei", "J. Mencke", "P. Ricci" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Full-F Turbulent Simulation in a Linear Device using a Gyro-Moment Approach. B. J. Frei, J. Mencke, and P. Ricci (Ecole Polytechnique Fédérale de Lausanne (EPFL), Swiss Plasma Center, CH-1015 Lausanne, Switzerland; Max-Planck-Institut für Plasmaphysik, D-85748 Garching, Germany). The first full-F and turbulent simulations based on the Gyro-Moment (GM) approach are presented by considering a linear device configuration with open and straight field lines. The simulations are based on a simplified version of the gyrokinetic (GK) model proposed by B. J. Frei et al. [J. Plasma Phys. 86, 905860205 (2020)]. By focusing on the electrostatic and long-wavelength limit, a full-F GM hierarchy equation is derived to evolve the ion dynamics, which includes a nonlinear Dougherty collision operator, localized sources, and Bohm sheath boundary conditions. An electron fluid Braginskii model is used to evolve the electron dynamics, coupled to the full-F ion GM hierarchy equation via a vorticity equation. A set of full-F turbulent simulations is performed using the parameters of the LAPD experiments with different numbers of GMs and regimes of collisionality. The GM results (time-averaged profiles and turbulent properties) are compared with those from two-fluid Braginskii simulations, finding good qualitative agreement. Furthermore, the ion distribution function is analyzed, showing the good convergence properties of the GM approach. § INTRODUCTION Despite recent progress in the development of gyrokinetic (GK) codes, such as <cit.>, <cit.>, <cit.> and <cit.>, extending the GK model from the core to the boundary remains challenging since it requires dealing with a wide range of collisionality, order-one fluctuations across various scales, complex magnetic field geometry, steep pressure gradients and the interaction of the plasma with the wall. As a consequence, less computationally demanding tools such as fluid simulations (see, e.g., Refs. Stegmeir2018,De2022,Giacomin2022) based on the drift-reduced Braginskii model <cit.>, are used to simulate the plasma dynamics in the boundary. However, the validity of a fluid approach remains limited to the collisional region of the boundary, namely the scrape-off layer (SOL), as the fluid modeling lacks kinetic effects. To tackle the challenges of the boundary region, an approach is formulated in Ref. Frei2020 based on the Hermite-Laguerre expansion of the full (full-F) distribution function, which is referred to as the gyro-moment (GM) approach. This approach features kinetic effects <cit.>, which are absent in Braginskii-like fluid models, and collisional effects modeled using advanced collision operators <cit.>. So far, investigations based on the GM approach are limited to the δ f regime, where only the a priori small deviation of the distribution function from thermal equilibrium is evolved <cit.>. To the knowledge of the authors, this work presents the first full-F turbulent results using a moment approach. In particular, we focus on simulations of plasma turbulence in a linear plasma device.
Linear plasma devices, such as LAPD <cit.>, HelCat <cit.>, and RAID <cit.>, are experiments that allow for the investigation of basic plasma phenomena in a simplified magnetic geometry characterized by the absence of magnetic gradients, curvature, and shear <cit.>. Despite their simplicity and the lack of kinetic effects such as trapped electrons, linear plasma devices share some of the most important physical processes that occur in the boundary of magnetic confinement devices. In fact, similar to the boundary, the turbulent dynamics in a linear plasma device result from the interplay of cross-field transport, parallel flows to the magnetic field, and plasma losses at the end plates where a sheath forms due to plasma-wall interactions. At the same time, the straight magnetic field lines in these devices facilitate the development of new modeling tools, compared to complex magnetic geometry characterizing the boundary of fusion devices. The modeling in these devices is also simplified by the perpendicular incidence of the magnetic field lines to the wall of the machine, which simplifies the sheath boundary model compared to an oblique incidence <cit.> and by the low plasma temperatures comparable to typical SOL values (e.g., T_i ≲ T_e ∼ 6 eV in typical LAPD discharges <cit.>), which are ideal for applying the full-F GM approach. Indeed, the low plasma temperature allows for a direct comparison of the GM approach with fluid simulations <cit.>, valid in the collisional conditions often met in, e.g., LAPD experiments. By focusing on the drift-kinetic (or long-wavelength) and electrostatic limit of the GK equations <cit.>, a linear plasma device configuration is chosen to perform the first full-F GK simulation in open field lines with the code that uses a discontinuous-Galerkin approach to discretize the velocity-space in Ref. Shi2017. LAPD turbulent simulations using the GK code are also reported in Ref. Pan2018, based on the same physical model. Linear plasma devices provide, therefore, an ideal testbed to perform the first full-F turbulent simulations using the GM approach. In this work, we consider a simplified version of the full-F GM model derived in Ref. Frei2020. In particular, we focus on the long-wavelength and electrostatic limit of the GK model to describe the ion dynamics, with ion-ion collisions modeled using a simple nonlinear Dougherty <cit.> collision operator (similar to the one used in Refs. Shi2017,Pan2018). On the other hand, electrons are assumed collisional, such that their dynamics can be approximated by the drift-reduced Braginskii model <cit.>. In contrast to previous GK simulations of linear devices <cit.>, the ion GK equation is solved within the GM approach where the full ion distribution function F_i is expanded on a Hermite and Laguerre polynomial basis. A parallel (to the magnetic field) velocity-space coordinate shifted by the local ion parallel fluid velocity and the adiabatic invariant are used to describe efficiently sonic ion parallel flows near the end plates where the sheath forms. A full-F ion GM hierarchy equation for the expansion coefficients is then derived. The ion full-F GM hierarchy equation and the fluid electron model are coupled through a vorticity equation. To incorporate the losses at the end plates, Bohm sheath boundary conditions <cit.> are implemented in the parallel direction, which are equivalent to the ones used in the previous Braginskii simulation of LAPD <cit.>. 
Nonlinear simulations of LAPD are then performed with various numbers of GMs. For comparison, a set of nonlinear turbulent simulations are also performed using the two-fluid drift-reduced Braginskii equations <cit.> (or simply Braginskii model), similarly to Refs. Rogers2010,Fisher2015, and using a reduced cold-ion model derived from the full-F ion GM hierarchy. The present results demonstrate that the full-F GM approach properly describes fluctuations in an open-field line geometry. A detailed analysis shows that turbulence, driven by a long perpendicular wavelength Kelvin-Helmoltz instability, is in qualitative agreement with the Braginskii model. Our results are weakly dependent on the number of GMs used in the simulations and on the collisional regime because of the absence of strong kinetic effects in LAPD. The analysis of the velocity-space representation of the ion distribution function demonstrates that the amplitude of the GMs decays rapidly with the order of the polynomial when collisions are considered. On the other hand, a larger number of GMs is necessary to describe deviations from thermal equilibrium at lower collisionality than LAPD. This investigation also reveals that a simple closure based on the truncation of the GM hierarchy is sufficient in our case and has little effect on turbulence. It is important to note that the purpose of these simulations is not to achieve a highly-fidelity and realistic description of LAPD turbulence, but rather to establish confidence in the applicability of the GM approach in full-F turbulent calculations. Furthermore, a direct comparison with LAPD experimental data <cit.> and with previous GK simulations <cit.> falls outside the scope of our study, but will be addressed in future work. The paper is structured as follows. In sec:linearplasmadevicemodel, we derive the ion full-F GM hierarchy equation in a straight magnetic field and introduce the electron fluid model, as well as the two-fluid drift-reduced Braginskii model. The numerical implementation of the full-F GM hierarchy equation is detailed in sec:numericalimplementation. The results of the first full-F GM turbulent simulations are presented in sec:turbulentsimulations, which includes a detailed comparison with the Braginskii simulations and an analysis of the ion distribution function. We conclude in sec:conclusion. § LINEAR PLASMA DEVICE MODEL In this section, we derive the full-F GM hierarchy equation for the ion dynamics by expanding the ion distribution function onto a Hermite and Laguerre polynomials basis. The hierarchy includes particle and energy sources and a simple nonlinear long-wavelength Dougherty collision operator. A reduced cold-ion model is also considered for comparison purposes, which is obtained analytically from the full-F GM hierarchy in the T_i≪ 1 limit. For the electron dynamics, the Braginskii fluid equations are used to evolve the electron density n_e, parallel velocity U_ e, and temperature T_e. A vorticity equation is derived for the electrostatic potential ϕ, which couples the ion and electron models. Finally, we present the two-fluid Braginskii model. The simple magnetic geometry in a linear plasma device allows us to introduce a simple coordinate system. In particular, assuming a rectangular shape of the linear device cross-section, we define the cartesian coordinate system (x,y,z), such that the (x,y) coordinates describe the plane perpendicular to , while z is the coordinate along the magnetic field lines. 
The height and width of the perpendicular cross-section are L_x and L_y, respectively, and the length of the linear plasma device is L_z. The magnetic field can simply be written as B = ∇× A = B e_z, where A is the constant magnetic vector potential and e_z is the unit vector pointing along the axis of the linear device. This section is structured as follows. We describe the ion full-F model in subsec:iongkmodel and we derive the ion full-F GM hierarchy equation in subsec:fullFhierarchy. A presentation of the reduced cold-ion model is then obtained from the GM hierarchy equation in subsec:coldion. The fluid electron model follows in subsec:electronbraginskii and the vorticity equation is derived in subsec:vorticityequation. subsec:braginskii describes the two-fluid Braginskii model used for comparison purposes and, finally, subsec:bc details the Bohm sheath boundary conditions we use in our simulations. §.§ Ion full-F model Focusing on the electrostatic and long-wavelength limits with constant and straight magnetic field lines, the ion one-form Γ_i <cit.> expressed in the gyrocenter coordinates Z = (R, μ, v_∥, θ), where R = x e_x + y e_y + z e_z is the gyrocenter position, μ = m_i v_⊥^2/(2 B) is the magnetic moment and v_∥ = b · v is the velocity parallel to the magnetic field with b = B / B, reduces to Γ_i(R, μ, v_∥, t) = q_i A_i^* · Ṙ - (μ B/Ω_i) θ̇ - m_i v_∥^2/2 - q_i Φ_i, with q_i Φ_i = q_i ϕ + μ B and q_i A_i^* = q_i A + m_i v_∥ b. In (<ref>), the electrostatic potential ϕ is evaluated at the gyrocenter position, i.e. ϕ = ϕ(R), such that ion FLR effects are neglected. From (<ref>), we deduce the ion equations of motion, Ṙ = b v_∥ + (E × b)/B, v̇_∥ = (q_i/m_i) b · E, and μ̇ = 0, with v_∥ = v · b and E = -∇ϕ the electric field, where ∇ = e_x ∂_x + e_y ∂_y + e_z ∂_z. (<ref>) describes the parallel streaming along the magnetic field lines and the perpendicular drift due to the E × B velocity, while (<ref>) represents the acceleration in the parallel direction associated with the electric field E. Using the equations of motion given in (<ref>), the evolution equation of the full-F (gyrophase-independent) ion distribution function, F_i = F_i(R, μ, v_∥, t), in the long-wavelength limit of the electrostatic GK ion Boltzmann equation <cit.> is given by ∂/∂ t(𝒥_i F_i) + ∇·(𝒥_i F_i Ṙ) + ∂/∂ v_∥(𝒥_i F_i v̇_∥) = 𝒥_i 𝒞_i + 𝒥_i S_i, where 𝒥_i = B/m_i is the gyrocenter phase-space Jacobian, which is a constant in the case of linear devices. On the right-hand side of (<ref>), S_i = S_i(R, μ, v_∥) = S_N + S_E models the particle (S_N) and energy (S_E) sources, which are defined by <cit.> S_N = 𝒜_N F_Mi, S_E = 𝒜_E (s_∥ i^2 + x_i - 3/2) F_Mi, respectively. In (<ref>), the functions 𝒜_N = 𝒜_N(x,y) and 𝒜_E = 𝒜_E(x,y) describe the spatial localization of the sources that mimic, for instance, the ionization processes due to fast electrons and fast ions <cit.>. We remark that in previous fluid investigations of LAPD, the low ion temperature assumption (T_i ≪ T_e) is used and the ion energy source is neglected. In the present work, we consider a finite ion energy source, S_E. We assume that these sources have a uniform, localized, and top-hat-like shape in the case of the LAPD experiment.
For instance, 𝒜_N (x,y) is given by <cit.> 𝒜_N (x,y) = 𝒜_N0 0.5 [ 1 - tanh( r -r_s/L_s) ] + 𝒜_N ∞, where r = √(x^2 + y^2) is the perpendicular distance from the center of the device (r =0), r_s is the radial extent of the plasma source, L_s > 0 is its typical source decay scale length, 𝒜_N0 is a positive and constant coefficient, which represents the particle fuelling rate near the center of the device, while 𝒜_N ∞ represents a small positive and constant particle source away from r ∼ r_s added for numerical reasons, in particular, to avoid regions of negative plasma density. Similar definitions for 𝒜_E, 𝒜_E0 and 𝒜_E∞ are used. We remark that the effects of neutral ionization, fast ions, and the presence of localized sources (not uniform in z) near the end plates are neglected in the present work. In (<ref>), we also introduce a shifted Maxwellian distribution function defined by F_Mi = N_i/π^3/2 v_Ti^3 e^- s_∥ i^2 e^- x_i, with the parallel and shifted normalized velocity-space coordinate, s_∥ i= (v_∥ - U_∥ i)/ v_Ti with v_Ti^2 = 2 T_i0 /m_i (T_i0 is the reference constant ion temperature) and U_∥ i = b · u_i = ∫ d v F_i v_∥ / N_i the ion parallel fluid velocity, and the perpendicular velocity-space coordinate x_i = μ B /T_i0. The choice of using the shifted parallel velocity-space coordinate, s_ i, is motivated in subsec:GMspectrum. Finally, the term _i in (<ref>) is a full-F and nonlinear collision operator model describing ion-ion collisions. In particular, we use a long-wavelength Dougherty collision operator <cit.>, given by _i = ν_i∂/∂ v·[ 2 T_i/m_i∂/∂ vF_i- ( v - u_i) F_i ], where T_i = ∫ d F_i m_i ( v - u_i)^2 /( 3N_i) and u_i = ∫ d F_i v / N_i are the ion temperature and mean fluid velocity, and ν_i = 4 √(π) N_i q_i^4 lnΛ/(3 m_i^1/2 T_i0^3/2) is the ion-ion collision frequency, which is constant in the present work. The effects of ion-electron collisions are neglected in (<ref>) since they occur on a time scale larger by, at least, a factor proportional to √(m_i / m_e) than ion-ion collisions. (<ref>) is equivalent to the ion GK model used in previous GK turbulent simulations of LAPD, implemented in the <cit.> and in the <cit.> codes. Both implementations use the same nonlinear Dougherty collision operator for ion-ion collisions (given in (<ref>)) and neglect ion-electron collisions. The code employs a discontinuous-Galerkin approach and uses a finite-volume method to discretize the velocity-space coordinates (v_∥, μ), while our work uses the GM approach to simulate the full-F ion distribution function F_i. To our knowledge, this is the first time such a moment approach is applied to perform nonlinear full-F turbulent simulations. §.§ Full-F ion GM Hierarchy Equation Following Ref. Frei2020, we perform the GM expansion of the full-F ion distribution function, F_i. More precisely, we expand F_i onto a set of Hermite (H_p) and Laguerre (L_j) velocity-space polynomials <cit.>, such that F_i = ∑_p=0^∞∑_j=0^∞^pjH_p(s_∥ i) L_j(x_i)/√(2^p p!)F_Mi/N_i, where ^pj are the ion GMs, evaluated by using the Hermite and Laguerre orthogonality relations <cit.> ^pj = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μB/m_i F_i H_p(s_ i) L_j(x_i)/√(2^p p!). By introducing the GM projector pjχ = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μχB/m_i F_i H_p(s_ i) L_j(x_i)/√(2^p p!). with χ = χ( R, μ, v_∥, t) being an arbitrary gyrocenter phase-space function, we find 𝒩^pj = pj1 from (<ref>). 
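The GM projection defined above can be approximated numerically with Gauss-Hermite and Gauss-Laguerre quadrature, whose weights carry exactly the e^(-s^2) and e^(-x) factors of F_Mi. The sketch below is an illustration of ours, not part of the authors' code: the function name and the normalization of fhat (the distribution with the 2π B/m_i velocity-space Jacobian absorbed, so that 𝒩^00 = N_i) are our own choices. For a Maxwellian written in the shifted coordinate s_∥ i, only 𝒩^00 is non-zero, which previews why the shifted coordinate is convenient.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from numpy.polynomial.laguerre import laggauss, lagval

def gyro_moment(fhat, p, j, n_s=32, n_x=32):
    """Approximate N^{pj} = int ds int dx fhat(s, x) H_p(s) L_j(x) / sqrt(2^p p!),
    where s is the shifted parallel coordinate, x = mu*B/T_i0, and fhat is the
    distribution function with the velocity-space Jacobian absorbed."""
    s_nodes, s_w = hermgauss(n_s)   # int e^{-s^2} g(s) ds ~ sum w_k g(s_k)
    x_nodes, x_w = laggauss(n_x)    # int_0^inf e^{-x} g(x) dx ~ sum w_k g(x_k)
    Hp = hermval(s_nodes, np.eye(p + 1)[p])   # physicists' Hermite H_p
    Lj = lagval(x_nodes, np.eye(j + 1)[j])    # Laguerre L_j
    S, X = np.meshgrid(s_nodes, x_nodes, indexing="ij")
    # the quadrature weights already contain e^{-s^2 - x}, so divide it out of fhat
    integrand = fhat(S, X) * np.exp(S**2 + X) * Hp[:, None] * Lj[None, :]
    return (s_w @ integrand @ x_w) / math.sqrt(2.0**p * math.factorial(p))

# Shifted Maxwellian F_Mi written in (s_par, x): fhat = N_i/sqrt(pi) e^{-s^2} e^{-x}
N_i = 1.0
fhat_maxw = lambda s, x: N_i / math.sqrt(math.pi) * np.exp(-s**2 - x)

print(gyro_moment(fhat_maxw, 0, 0))   # ~ N_i = 1
print(gyro_moment(fhat_maxw, 2, 0))   # ~ 0
print(gyro_moment(fhat_maxw, 0, 1))   # ~ 0
```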
We remark that, in (<ref>), the shift of the parallel velocity coordinate s_ i, appearing in F_Mi defined in (<ref>) and in the argument of the Hermite polynomial H_p, is necessary to ensure good convergence property of the GM approach with respect to the number of GMs in (<ref>), paticularly in the presence of sonic ion flows (see sec:vsp). These flows appear at the sheath entrance where ions are accelerated to the ion sound speed (see subsec:bc). Additionally, we note that F_Mi, defined in (<ref>), is assumed to have the same parallel and perpendicular temperature, T_∥ i = T_⊥ i = T_i0. The assumption of an isotropic Maxwellian distribution function in (<ref>) is justified by the large ion-ion collision frequency typically found in a linear plasma device (where T_i ≲ 1 eV) compared to the boundary region in fusion devices (where T_i ≳ 10 eV). The absence of strong external energy sources driving temperature anisotropy in LAPD experiments supports this assumption (see (<ref>)). The lowest-order GMs can be related to fluid ion gyrocenter quantities, such as the ion gyrocenter density N_i, the ion parallel velocity U_∥ i, and the ion parallel and perpendicular pressure and temperature P_∥ i = T_∥ i N_i and P_⊥ i = T_⊥ i N_i, respectively. Indeed, using (<ref>), we derive that N_i = ^00 , ^10 =0 , P_∥ i = N_i T_∥ i = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μ B F_i(v_∥ - U_∥ i)^2 = T_i0( √(2)^20 + N_i ), P_⊥ i = N_i T_⊥ i = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μB/m_iμ B F_i = T_i0 ( N_i - ^01), with the total ion temperature defined by T_i = ∫ d F_i m_i ( v - b U_∥ i)^2 /( 3N_i) = (T_∥ i + 2 T_⊥ i)/3 = T_i0 ( √(2)^20 + 3 N_i - 2 ^01)/( 3N_i ). We remark that (<ref>) is a direct consequence of our choice of using a shifted parallel velocity-space coordinate s_ i in (<ref>). We now derive the full-F GM hierarchy equation describing the evolution of an arbitrary number of GMs, ^pj. This is obtained by projecting the ion full-F equation given in (<ref>) onto the Hermite-Laguerre basis. In addition, we normalize time t to R / c_s0 (with c_s0 = √(T_e0 / m_i) the ion sound speed evaluated at the reference constant electron temperature T_e0 and R the radial extension of the plasma chamber in the direction perpendicular to B), the potential ϕ to T_e0 / e, the parallel and perpendicular spatial scales to R and ρ_s0 = c_s0 / Ω_i, respectively. We also normalize the ion and electron densities, N_i and N_e, to the constant reference density N_0, the parallel electron velocity U_∥ e to c_s0, and the electron temperature, T_e, to T_e0. In addition, we assume q_i = e, considering a hydrogen plasma. Hence, we derive the normalized ion GM hierarchy equation, which describes the evolution of the GMs ^pj, i.e. ∂/∂ t^pj + √(p/τ_i)^p-1j∂/∂ t U_ i + ·pj + √(p/τ_i )p-1 j· U_ i - √(p/τ_i)p-1jv̇_∥ = ^pj_i + S_N^pj + S_E^pj, where the GM projections are given by ·pj =√(τ_i)∂_z (√(p+1)^p+1 j+√(p)^p-1 j) + ∂_z ( U_ i^ pj) +1/ρ_*ϕ^ p j, √(p/τ_i)p-1jṘ· U_ i = ( p^pj + √(p(p-1))^p-2j. . + √(p/τ_i) U_ i^p-1j) ∂_z U_ i + √(p/τ_i)1/ρ_*^ p-1 jϕU_ i , √(p/τ_i)p-1jv̇_∥ = - ^p-1j√(p/τ_i)∂_z ϕ, with ρ^* = ρ_s0 / R and τ_i = T_i0 / T_e0. In (<ref>), we introduce the Poisson bracket operator that is fg = ∂_x f ∂_y g - ∂_y f ∂_x g. The GM expansions of the particle and energy sources, S_N^pj and S_E^pj, are given by S_N^pj = 𝒜_N δ_p^0 δ_j^0 = S_N, S_E^pj = 𝒜_E ( δ_p^2 δ_j^0/√(2) - δ_p^0 δ_j^1 ), respectively. Finally, we express the nonlinear Dougherty collision operator in terms of GMs. 
We first express (<ref>) in terms of the velocity-space coordinates (s_ i, x_i, ) and project it onto the Hermite-Laguerre basis. This yields ^pj_i = ν_i[ -(p+2j) ^pj + (T_i -1 ) . . ×(√(p(p-1))^p-2j - 2j ^pj-1) ], where T_i is expressed in terms of the GMs using (<ref>). The nonlinear Dougherty collision operator conserves particles (_i^00 =0), momentum (_i^10 =0) and energy (_i^20 = √(2)_i^01). While simpler in form compared to the GM expansion of the nonlinear Fokker-Planck Landau collision operator <cit.>, the Dougherty collision operator constitutes an initial step to incorporate advanced collisional effects in the nonlinear and full-F ion GM hierarchy equation. The numerical implementation of the nonlinear Fokker-Planck Landau collision operator <cit.> will be considered in future work. To obtain the time evolution of the GMs 𝒩^pj, it is necessary to derive an explicit expression for the time derivative of the ion parallel velocity, ∂_t U_ i which appears in (<ref>) and resulting from the use of the shifted parallel velocity-space coordinate s_ i. By setting (p,j) = (1,0) in (<ref>) and using the fact that ^10 vanishes exactly (see (<ref>)), we derive the desired expression for ∂_t U_ i given by N_i ∂_t U_ i + N_i/ρ_* ϕU_ i +τ_i ∂_z P_∥ i+ N_i U_ i∂_z U_ i + N_i ∂_z ϕ =0, where the parallel ion pressure P_ i is expressed in terms of GMs according to (<ref>). We note that the full-F GM hierarchy equation, given in (<ref>), can also be derived from the electromagnetic full-F GM hierarchy equation described in Ref. Frei2020. This is achieved by considering the electrostatic limit, neglecting FLR effects, and assuming anisotropic ion temperature effects in F_Mi. Notably, the GMs with different p are coupled in (<ref>) due to the parallel streaming terms, associated with the ion Landau damping. On the other hand, the GMs with different j are only coupled through the collision operator (see (<ref>)), since our model neglects FLR effects and magnetic drifts responsible for kinetic effects leading to additional coupling in j <cit.>. As a result, a few Laguerre GMs are expected to be sufficient in our nonlinear turbulent simulations. To carry out the numerical turbulent simulations presented here, a simple closure by truncation is applied to the GM hierarchy equation. More precisely, we set ^pj =0 for all (p,j) > (P,J) with 0 ≤ P,J < ∞. The full-F GM hierarchy equation enables us to perform turbulent simulations of LAPD using an arbitrary number of GMs. Different values of (P,J) are considered in sec:turbulentsimulations where we demonstrate that the closure by truncation is sufficient to perform full-F turbulent simulations in our case. §.§ Cold-ion reduced model We consider here the cold-ion limit of the full-F GM hierarchy and derive a simplified model, similar to the one used in previous turbulent investigations of linear devices based on fluid models (see, e.g., Refs. Rogers2010,Popovich2010,Fisher2015) where the effects of finite ion temperature T_i are neglected. In the cold ion limit, only the GMs _i^00 and _i^10, associated with the ion gyrocenter density and the parallel ion velocity, need to be evolved and the contribution from the parallel ion pressure P_∥ i in (<ref>) can be neglected. As a consequence, the ion GM hierarchy equation given in (<ref>) reduces to the ion gyrocenter continuity equation for N_i and to the ion parallel momentum equation for U_ i, i.e. ∂/∂ t N_i + 1/ρ^*ϕ N_i + ∂_z ( U_ i N_i ) = S_N, ∂_t U_ i + 1/ρ_*ϕU_ i + U_ i∂_z U_ i + ∂_z ϕ =0, respectively. 
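To make the structure of the cold-ion model concrete, the snippet below sketches the evaluation of its right-hand sides, ∂_t N_i = -(1/ρ^*)[ϕ, N_i] - ∂_z(U_∥ i N_i) + S_N and ∂_t U_∥ i = -(1/ρ^*)[ϕ, U_∥ i] - U_∥ i ∂_z U_∥ i - ∂_z ϕ, on a uniform periodic grid with the Poisson bracket [f, g] = ∂_x f ∂_y g - ∂_y f ∂_x g discretized by simple centered differences. This is our own illustration under simplifying assumptions (periodic boundaries, second-order stencils, unstaggered grid); the actual implementation described later uses fourth-order stencils, an Arakawa bracket, a staggered grid, and sheath boundary conditions.

```python
import numpy as np

def cold_ion_rhs(N_i, U_par, phi, S_N, dx, dy, dz, rho_star):
    """Right-hand sides of the cold-ion continuity and parallel momentum
    equations on a periodic (x, y, z) grid (illustrative discretization only)."""
    def d(f, axis, h):
        # second-order centered derivative with periodic wrap-around
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)

    def bracket(f, g):
        # [f, g] = df/dx dg/dy - df/dy dg/dx in the perpendicular plane
        return d(f, 0, dx) * d(g, 1, dy) - d(f, 1, dy) * d(g, 0, dx)

    dN_dt = -bracket(phi, N_i) / rho_star - d(U_par * N_i, 2, dz) + S_N
    dU_dt = -bracket(phi, U_par) / rho_star - U_par * d(U_par, 2, dz) - d(phi, 2, dz)
    return dN_dt, dU_dt
```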
We remark that the particle and momentum conservation properties of the collision operator are used in deriving (<ref>). §.§ Electron fluid model We use the Braginskii model to evolve the electron dynamics, avoiding the evolution of their distribution function, in contrast to Refs. Shi2017,Pan2018. The fluid approach for the electrons is justified when the electron collision frequency is much larger than the ion collision frequency and electron FLR effects are negligible for modes developing at k_⊥ρ_s ∼ 1, which is the case of LAPD experiments. Hence, the time evolution of the electron density n_e, electron parallel velocity U_∥ e, and temperature T_e is determined by the continuity equation, the generalized Ohm's law, and the temperature equation, respectively. These equations are given by ∂_t n_e + (1/ρ^*)[ϕ, n_e] + ∂_z (U_∥ e n_e) = S_N, ∂_t U_∥ e + (1/ρ^*)[ϕ, U_∥ e] + U_∥ e ∂_z U_∥ e = (m_i/m_e)[ν_∥ J_∥ + ∂_z ϕ - (T_e/n_e) ∂_z n_e - 1.71 ∂_z T_e], ∂_t T_e + (1/ρ^*)[ϕ, T_e] + U_∥ e ∂_z T_e = (2/3) T_e ((0.71/n_e) ∂_z J_∥ - ∂_z U_∥ e) + ∂_z (χ_∥ e ∂_z T_e) + S_T_e, where the normalized parallel electrical resistivity and electron thermal conductivity are given by ν_∥ = ν_0 / T_e^3/2 and χ_∥ e = 1.075 T_e^5/2 / ν_0, respectively. Here, ν_0 = 4 √(2 π) e^4 n_e0 R √(m_e) lnΛ /[3 c_s0 m_i T_e0^3/2 1.96] is the normalized electron collisionality. On the right-hand side of (<ref>) and (<ref>), S_N and S_T_e are the normalized density and temperature sources. In (<ref>), the parallel electrical current is J_∥ = n_e (U_∥ i - U_∥ e). §.§ Vorticity equation We now obtain the vorticity equation that governs the evolution of the electrostatic potential ϕ. This equation imposes the charge conservation constraint on the time evolution of the plasma densities and electrical currents. To derive the vorticity equation, we consider the quasineutrality condition in the long-wavelength limit, given by <cit.> - e n_e + q_i N_i = - ∇·( (q_i^2 N_i)/(m_i Ω_i^2) ∇_⊥ϕ ). (<ref>) neglects the FLR effects, associated with the difference between the particle and gyrocenter position and proportional to the perpendicular ion pressure. We also notice that (<ref>) is equivalent to the quasineutrality condition used in previous GK turbulent simulations of LAPD <cit.> if the Boussinesq approximation is used, i.e. N_i ≃ N_0. This approximation is widely used in fluid codes <cit.> and we use it below to derive the vorticity equation. While (<ref>) can be solved to obtain ϕ given the electron and ion densities, n_e and N_i respectively, we use a vorticity equation instead, as is often done in turbulent fluid codes <cit.>. The vorticity equation is derived by taking the time derivative of the quasineutrality equation given in (<ref>) and by using the electron and ion continuity equations, given in Eqs. (<ref>) and (<ref>), respectively. It yields - ∂_t Ω - (1/ρ^*)[ϕ, Ω] - ∂_z(U_∥ i Ω) + (1/N_i) ∂_z J_∥ = 0, with Ω = ∇_⊥^2 ϕ the vorticity variable under the Boussinesq approximation. The effects of the Boussinesq approximation on plasma turbulence are the subject of previous studies <cit.>. While it might not be justified in LAPD when steep density gradients are present, it allows us to reduce the computational cost of our simulations when inverting the two-dimensional Laplacian to obtain ϕ from the vorticity variable Ω. We use the vorticity equation given in (<ref>) to evolve Ω when considering the full-F ion GM hierarchy equation and the cold-ion model, given in Eqs. (<ref>) and (<ref>) respectively, coupled to the fluid electron model in (<ref>). A minimal sketch of this Laplacian inversion is given below.
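The sketch below recovers ϕ from Ω = ∇_⊥^2 ϕ on a doubly periodic perpendicular plane using FFTs. Periodicity is an assumption made here only for brevity (the simulations impose Neumann conditions at the side walls), while the constant-coefficient operator is a direct consequence of the Boussinesq approximation.

```python
import numpy as np

def phi_from_vorticity(omega, dx, dy):
    """Invert Omega = Laplacian_perp(phi) on a doubly periodic (x, y) slice.
    The k = 0 mode of phi is set to zero (the potential is defined up to a constant)."""
    nx, ny = omega.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k2 = KX**2 + KY**2
    omega_hat = np.fft.fft2(omega)
    phi_hat = np.zeros_like(omega_hat)
    nonzero = k2 > 0.0
    phi_hat[nonzero] = -omega_hat[nonzero] / k2[nonzero]  # -k^2 phi_hat = omega_hat
    return np.real(np.fft.ifft2(phi_hat))
```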
§.§ Two-fluid Braginskii fluid model We finally introduce the two-fluid Braginskii fluid model <cit.>, valid in the high-collisional regime, for comparison with the full-F ion GM hierarchy equation and the cold-ion model. In addition to the fluid electron fluid equations for n_e, U_ e and T_e already described in subsec:electronbraginskii, the two-fluid Braginskii equations prescribe a parallel ion momentum equation to evolve the ion parallel velocity U_∥ i, an ion temperature equation to evolve the ion temperature T_i, and vorticity equations for Ω. These equations are given by ∂_t U_∥ i + 1/ρ^*ϕ U_∥ i + U_∥ i∂_z U_∥ i = - ∂_z T_e - τ_i ∂_z T_i - (T_e + τ_i T_i) ∂_z n_e/n_e , ∂_t T_i + 1/ρ^*ϕ T_i + U_∥ i∂_z T_i = +2/3 T_i [(U_∥ i- U_∥ e) ∂_z n_e/n_e- ∂_z U_∥ e] + ∂_z ( χ_∥ i∂_z T_i ) + 𝒜_E/n_e +(1 - T_i) S_N/n_e, ∂_t Ω + τ_i ∂_i _⊥^2 T_i = 1/n_e∂_z J_∥ - 1/ρ^*ϕΩ + τ_i _⊥^2 T_i -U_∥ i∂_z ( Ω + τ_i _⊥^2 T_i ). respectively. In (<ref>), χ_∥ i = 1.32 √(m_e / m_i) (τ _i T_i)^5/2 / ν_0 is the normalized parallel ion thermal conductivity. We remark that the two last terms in (<ref>) are the ion temperature sources associated with the energy source S_E (see (<ref>)), which appears on the right-hand side of (<ref>). In contrast to the cold-ion model given in (<ref>), the two-fluid Braginskii model considered here allows for finite ion temperature effects, but assumes quasineutrality, such that n_e ≃ N_i. In addition, the parallel electric field ∂_z ϕ, appearing in (<ref>), is approximated in (<ref>) by the electron parallel pressure gradient, such that ∂_z ϕ≃∂_z P_e with P_e = n_e T_e (see (<ref>)). We remark that the vorticity equation, (<ref>), corresponds to the one implemented in fluid codes used to study the plasma turbulence in the SOL region <cit.>, such as the GBS code <cit.>. We also remark that the terms proportional to the Laplacian of the ion temperature, i.e. τ_i _⊥^2 T_i, are absent in (<ref>). Indeed, these terms are associated with FLR effects, which are neglected in (<ref>). However, we note that, as the ion temperature in LAPD experiments is generally lower than the electron temperature (τ_i < 1), neglecting finite ion perpendicular pressure in the vorticity equation deduced from the quasineutrality condition in (<ref>) is expected not to significantly affect the plasma dynamics in the simulations described below. §.§ Boundary conditions Boundary conditions are required for the ion GMs, ^pj, the electron fluid quantities, N_e, U_∥ e, T_e, and the potential ϕ in the perpendicular (x,y) plane at x = ± L_x / 2 and y = ± L_y / 2 and at the end plates located in the z direction at z = ± L_z /2, where a sheath forms due to the plasma-wall interaction. At x = ± L_x / 2 and y = ± L_y / 2, homogenous Neumann boundary conditions are used for all quantities. These ad-hoc boundary conditions have a negligible effect on plasma turbulence near the center of the device as they are imposed at a distance sufficiently large from the center of the device. On the other hand, the boundary conditions in the z direction have an important impact since the formation of a Debye sheath is observed when the magnetic field lines intercept the end plates that control the plasma losses <cit.>. Since the sheath region cannot be modeled by the field equations derived in subsec:vorticityequation (the GK formalism is violated in this region), the sheath is modeled in our simulations by a set of appropriate boundary conditions imposed at the sheath entrance. 
In previous GK simulations of LAPD <cit.>, a conducting wall is considered. Accordingly, the fraction of electrons that cross the sheath and are lost being absorbed by the walls is determined by the value of the potential at the sheath entrance. This fraction is imposed by evaluating the cutoff velocity of the electron distribution function numerically. Leveraging the GM approach, we use the standard fluid Bohm boundary conditions <cit.> which sets the value of the parallel electron and ion velocities, U_∥ e and U_∥ i, at the sheath entrance. Therefore, we assume that <cit.> U_∥ e(x,y,z = { 0, L_z }) = ±√(T_e,s) e^Λ - ϕ_s / T_e,s, U_∥ i(x,y,z = { 0, L_z }) = ± c_s = ±√(T_e,s)√(1 + τ_i T_i,s / T_e,s), with Λ = log m_i/(2m_e) ≃ 3 for hydrogen plasmas. In (<ref>), T_e,s and T_i,s are the electron and ion temperatures evaluated at the sheath entrance, i.e. T_e,s = T_e(x,y, z = ± L_z / 2 and T_i,s = T_i(x,y, z = ± L_z / 2), and, similarly, ϕ_s = ϕ(x,y, z =± L_z / 2). We notice that the boundary conditions in (<ref>) reduce to the ones used in Ref. Rogers2010 when T_i ≪ T_e and correspond to the ones used in SOL turbulent simulations using the drift-reduced Braginskii model <cit.>. For the remaining quantities, we assume, for simplicity, that the gradients of electron density, n_e, electron temperature, T_e, ion GMs, ^pj, and electrostatic potentials, ϕ, vanish along the direction of the magnetic field at the sheath entrance, i.e. homogenous Neumann boundary conditions are imposed at z = ± L_z / 2. While the homogenous Neumann boundary conditions considered here are sufficient to ensure the numerical stability of the present simulations, further investigations are needed to develop first-principles sheath boundary conditions for the GM approach. In particular, the analytical procedure outlined in, e.g., Refs. Loizu2012,Mosetto2015, can be extended to an arbitrary number of GMs and kinetic sheath boundary conditions can also be developed <cit.>. Magnetic field lines intercept the machine wall with a small oblique angle in fusion devices, further complicating the treatment of the sheath boundary conditions <cit.>. § NUMERICAL IMPLEMENTATION To solve the full-F ion GM hierarchy in (<ref>) coupled with the electron fluid model in (<ref>), we have developed a new three-dimensional full-F code. This code solves the turbulent dynamics for an arbitrary number of GMs and also implements a two-fluid Braginskii model for comparison with the GM results. To evolve the plasma dynamics, we employ similar numerical algorithms as the two-fluid code <cit.>. More precisely, an explicit fourth-order Runge-Kutta time-stepping scheme is used. The perpendicular and parallel directions are discretized using a uniform cartesian grid in the (x,y,z) coordinates with the x, y, and z directions discretized using N_x, N_y and N_z points uniformly distributed between the intervals [- L_x /2, + L_x /2], [- L_y /2, + L_y /2] and [- L_z /2, L_z/2], respectively. The Poisson bracket operator, [ f,g] = b × f · g = ∂_x f ∂_y g - ∂_y f ∂_x g, with b = B / B = e_z, is evaluated by using a fourth-order Arakawa method <cit.>. The numerical evaluation of the other spatial operators appearing in the GM hierarchy equation is based on a fourth-order and centered finite difference scheme, resulting in a 5-points centered stencil <cit.>. 
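For reference, the fourth-order centered stencils mentioned above take the form sketched below. The periodic np.roll implementation is only an illustration (the actual code handles non-periodic parallel boundaries and a staggered grid), and the fourth-order Arakawa evaluation of the Poisson bracket is not reproduced here.

```python
import numpy as np

def ddx4(f, h, axis=0):
    # fourth-order centered first derivative, 5-point stencil:
    # (-f_{i+2} + 8 f_{i+1} - 8 f_{i-1} + f_{i-2}) / (12 h)
    return (-np.roll(f, -2, axis) + 8.0 * np.roll(f, -1, axis)
            - 8.0 * np.roll(f, 1, axis) + np.roll(f, 2, axis)) / (12.0 * h)

def d2dx4(f, h, axis=0):
    # fourth-order centered second derivative:
    # (-f_{i+2} + 16 f_{i+1} - 30 f_i + 16 f_{i-1} - f_{i-2}) / (12 h^2)
    return (-np.roll(f, -2, axis) + 16.0 * np.roll(f, -1, axis) - 30.0 * f
            + 16.0 * np.roll(f, 1, axis) - np.roll(f, 2, axis)) / (12.0 * h**2)

# quick consistency check on f(x) = sin(x) over a periodic domain
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
h = x[1] - x[0]
assert np.allclose(ddx4(np.sin(x), h), np.cos(x), atol=1e-6)
assert np.allclose(d2dx4(np.sin(x), h), -np.sin(x), atol=1e-5)
```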
To avoid checkerboard patterns <cit.>, the grid, referred to as the v-grid, used to evolve the parallel velocities, U_∥ e and U_ i, and the GMs ^pj with odd p, is staggered to the left along the z-direction by Δ z /2 (Δ z is the grid spacing) with respect to the grid, referred to as the n-grid, where the other fluid quantities, i.e. n_e, T_e, Ω (and thus ϕ), and the GMs ^pj with even p are evaluated. Fourth-order interpolation techniques are used between the n- and v- grids <cit.>. To improve the numerical stability of our numerical simulations, parallel and perpendicular numerical diffusions, such as D(f) = η_⊥(∂_xx^2 + ∂_yy^2) f + η_z ∂_zz^2 f, where f denotes one of the evolved quantities, are added to the right hand-side of all equations. We choose the perpendicular and parallel diffusion coefficients, η_⊥ and η_z, to be constant and sufficiently small not to affect significantly the results. The model is implemented in a Fortran code using a MPI domain decomposition in all directions. The initial conditions of the turbulent nonlinear simulations impose equal electron and ion densities and temperatures, such that n_e = ^00 and T_e = T_i with top-hat-like profiles in the perpendicular plane and uniform in z. In addition, we set ϕ = Λ T_e to avoid unphysical and large electron current into the sheath region. The initial values of ^20 and ^01, given the initial ion density and ion temperature T_i profiles, are obtained by inverting (<ref>), which yields ^20 = N_i ( T_i- 1) / √(2) and ^01 = N_i ( 1 - T_i), with T_∥ i = T_⊥ i = T_i. Finally, the parallel velocities, U_ i and U_ e, are initialized with smooth profiles along z, with values at the end plates fixed according to the boundary conditions given in (<ref>). Random noise is added to the initial profiles, with constant amplitude 0.01, to seed turbulence. Typically, a quasi-steady state is achieved after 100 c_s0 / R time unit (corresponding to t ∼ 4 ms), similarly to previous GK and Braginskii turbulent simulations of LAPD <cit.>, where the sources of particle and energy are compensated by the losses at the end plates. § FULL-F TURBULENT SIMULATION RESULTS In this section, we present the first turbulent and full-F simulations of the GM approach of a linear plasma device, focusing on the parameters of the LAPD experiment. We perform a comparison between the turbulent predictions of the full-F GM approach (see subsec:fullFhierarchy), with different numbers of GMs and values of collisionality and compare them with the Braginskii model introduced in subsec:braginskii. Our simulations parameters are similar to those used in Ref. Rogers2010, where a helium LAPD plasma is considered. These parameters are sumarized as follows: n_e0 = 2 × 10^12 cm^-3, T_e0 = 6 eV, T_i0 = 3 eV (τ_i = 0.5), Ω_i∼ 960 kHz, ρ_s0 = 1.4 cm, c_s0= 1.3 × 10^6 cm s^-1, m_i / m_e = 400 and ν_0 = 0.03. The LAPD vacuum chamber has a radius R ≃ 0.56 m (i.e., R ≃ 40 ρ_s0) and a parallel length of L_z ≃ 18 m, such that we use L_x = L_y = 100 ρ_s0 (or L_x ∼ L_y ∼ 1.4 m) and L_z = 36 R. The reference time is R / c_s0∼ 43 μs. We use a numerical resolution of N_x = N_y = 192 in the perpendicular plane and a coarser resolution in the parallel direction of N_z = 64 thanks to the dominant k_∥≃ 0 turbulent structures. We consider the following parameters for the density and temperature sources L_s = ρ_s0, r_s = 20 ρ_s0, 𝒜_N0 = 𝒜_T_e0 = 0.04 (with 𝒜_N∞ = 𝒜_T_e ∞ = 0.001) and 𝒜_E0 = 0.02 (with 𝒜_E∞ = 𝒜_N∞). 
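The top-hat-like source shape of Eq. (<ref>) with the parameters just listed can be evaluated directly. The short sketch below is illustrative only, with perpendicular lengths expressed in units of ρ_s0 and the 192×192 perpendicular grid of the simulations.

```python
import numpy as np

# source parameters quoted above (perpendicular lengths in units of rho_s0)
A_N0, A_Ninf = 0.04, 0.001
r_s, L_s = 20.0, 1.0

def source_profile(x, y, A0, Ainf, rs, Ls):
    """A_N(x, y) = A0 * 0.5 * (1 - tanh((r - rs)/Ls)) + Ainf, with r = sqrt(x^2 + y^2)."""
    r = np.sqrt(x**2 + y**2)
    return A0 * 0.5 * (1.0 - np.tanh((r - rs) / Ls)) + Ainf

# evaluate on the 192x192 perpendicular grid (L_x = L_y = 100 rho_s0)
x = np.linspace(-50.0, 50.0, 192)
y = np.linspace(-50.0, 50.0, 192)
X, Y = np.meshgrid(x, y, indexing="ij")
A_N = source_profile(X, Y, A_N0, A_Ninf, r_s, L_s)
print(A_N.max(), A_N.min())   # ~0.041 on the axis, ~0.001 far from the source region
```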
In order to investigate the impact of ion collisions, we conduct a set of nonlinear simulations in the high (HC) and low (LC) ion collisionality regime. For each set, we consider different numbers of GMs (P,J) to investigate the convergence of the GM approach. More precisely, we consider (P,J) = (2,1), (6,1), (12,1) in the LC regime and (P,J) = (2,1), (6,1) in the HC regime. We change the ion collisionality by varying the ion collision frequency ν_i as an independent parameter while keeping all other parameters constant. In the HC regime, the ion collision frequency is computed using the LAPD physical parameters, such that ν_i = 1.38 √(m_i / m_e)ν_0 /τ_i^3/2≃ 2.34. In this regime, the ion mean-free-path, λ_mpf, is considerably shorter than the total length L_z, i.e. λ_mpf / L_z ≃√(2 τ_i) R / L_z / ν_i ≪ 1, and the effects of the collision operator are expected to be important. On the other hand, we set the ion collision frequency to be small in the LC regime, such that ν_i ≃ 4 × 10^-3 yielding λ_mpf / L_z ∼ 6.9. In this regime, the effect of the collision operator on the GMs is expected to be negligible. We remark that using J=1 is sufficient to represent the ion distribution function F_i since fine structures in x_i are not present due to the absence of strong kinetic effects (e.g., trapped particles). This section is structured as follows. First, sec:simulationresults provides an analysis and comparison of simulations based on the full-F GM hierarchy, the cold-ion, and the Braginskii models. Second, the turbulence characteristics are analyzed and compared in more details in sec:turbulence. Finally, we investigate the ion distribution function in velocity-space in sec:vsp and the GM spectrum in quasi-steady state in subsec:GMspectrum as a function of the number of GMs and for the two collisionality regimes. §.§ Simulation results This section presents a set of nonlinear and turbulent simulations of the LAPD using the full-F GM hierarchy equation given in (<ref>), the cold-ion model in (<ref>), and the Braginskii model introduced in (<ref>). A typical nonlinear evolution of the electron density, n_e, obtained by using the GM hierarchy equation with (P,J) = (6,1) GMs in the HC regime is shown in fig:snapshotsne. For t ≲ 28 R / c_s0, the profiles build up because of the localized particle and energy sources present in the system. The steep density and temperature gradients near r ∼ r_s drive an unstable resistive drift-wave, with the most unstable mode occurring at k_⊥ρ_s0∼ 0.5 (k_⊥ is the perpendicular wavenumber) with finite parallel wavenumber and rotating in the ion diamagnetic direction. Large poloidal flows, with associated velocity typically larger than the phase-velocity of the resistive drift waves <cit.>, nonlinearly trigger a Kelvin-Helmholtz (KH) instability, characterized by a long perpendicular wavelength and k_∥≃ 0. The KH instability becomes clearly visible around t ≃ 33 R / c_s0. This instability, which has been shown to dominate the radial transport in LAPD <cit.>, saturates at t ≃ 43 R / c_s0, transporting the plasma to the r ≳ r_s region and yielding the broadening of the initial profiles. The role of the KH-dominated transport in our simulations is confirmed by the strong steepening of the profiles when the nonlinear term ϕΩ in (<ref>) is artificially suppressed. After t ∼ 91 R / c_s0, a quasi-steady state is reached, where the sources are compensated by the losses at the end plates. 
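Before turning to the turbulence analysis, the collisionality estimates used above to define the HC and LC regimes can be reproduced with a few lines. This is only an illustrative check of the quoted numbers, not part of the simulation code.

```python
import numpy as np

# parameters stated above
m_ratio = 400.0          # m_i / m_e
nu_0 = 0.03              # normalized electron collisionality
tau_i = 0.5              # T_i0 / T_e0
R_over_Lz = 1.0 / 36.0   # R / L_z

# HC regime: nu_i computed from the LAPD physical parameters
nu_i_HC = 1.38 * np.sqrt(m_ratio) * nu_0 / tau_i**1.5
print(nu_i_HC)                                      # ~ 2.34
print(np.sqrt(2.0 * tau_i) * R_over_Lz / nu_i_HC)   # lambda_mfp / L_z ~ 0.012 << 1

# LC regime: nu_i set to a small value
nu_i_LC = 4.0e-3
print(np.sqrt(2.0 * tau_i) * R_over_Lz / nu_i_LC)   # lambda_mfp / L_z ~ 6.9
```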
A similar qualitative evolution is observed with a higher number of GMs in the LC and HC regime, as well as in the cold-ion and Braginskii simulations. The dynamics in the direction parallel to the magnetic field is shown in fig:snapshotsxz during the quasi-steady state. Instantaneous snapshots of the parallel turbulent structures of the electrostatic potential ϕ, electron density n_e, and electron temperature T_e reveal elongated (k_∥≃ 0) structures. All quantities show larger values at the center (z =0) and decrease near the end plates (located at z= - 18 R and at z = 18 R) due to the particle and energy losses caused by the sheath boundary conditions. Similar parallel structures are obtained when using a larger number of GMs. The turbulent structures observed in fig:snapshotsxz are in good qualitative agreement with previous fluid <cit.> and GK <cit.> turbulent simulations of LAPD. We now examine the time-averaged radial profiles. These profiles are averaged over a time window of ∼ 2 ms during the quasi-steady state as well as over the central region of LAPD - 8 R ≤ z ≤ + 8R (or - 4 m≲ z ≲ 4 m), a region commonly considered to present experimental data <cit.> (a similar approach is used in previous GK simulations <cit.>). The results are shown in fig:profiles, which displays the averaged radial profiles of ϕ, n_e, and T_e obtained in the GM simulations, using different numbers of GMs, in the cold-ion and the Braginskii simulations. Instantaneous profiles are also included for comparison. We note, first, that the plasma profiles extend beyond r_s illustrating the broadening caused by the KH instability <cit.>. More precisely, the profiles are approximatively constant close to the center of the device (r < r_s) and far from the source region (r > r_s), showing a region of steep gradients near r ∼ r_s, where the fluctuation level is large and the radial transport is important (see sec:turbulence). Second, the time-averaged radial profiles from the GM simulations are very similar to the ones obtained from the Braginskii model. Third, no noticeable differences are found between the simulations in the LC and HC regimes and with different numbers of GMs. This suggests that ion kinetic effects may not significantly influence the predictions of the equilibrium (time-averaged) profiles in LAPD. On the other hand, the cold-ion model consistently predicts larger time-averaged radial profiles, while the gradients (not shown here) are of the same order as those obtained in the GM and Braginskii simulations. Fourth, the analysis of the instantaneous profiles (indicated by dotted lines in fig:profiles) shows the existence of large perpendicular turbulent structures associated with the KH instability. Finally, we note that the time-averaged profiles obtained in fig:profiles closely remind those obtained in previous fluid simulations <cit.> and GK simulations <cit.>. We remark that the electrostatic potential profile ϕ follows approximatively the electron temperature T_e, as shown in fig:profiles. Indeed, ϕ∼Λ T_e is required to have comparable electron and ion outflows in steady-state, such that U_i∼ U_ e near the end plates, according to (<ref>). To verify that ϕ∼Λ T_e in our simulations, we evaluate the radial profile of the instantaneous difference, ϕ - Λ T_e, taken at the center of the device (z = 0 R) for the GM (both the LC and HC regimes are considered), cold-ion and Braginskii simulations during the quasi-steady state. The results are shown in fig:philambdaTe. 
We first observe that the GM and Braginskii simulations yield similar ϕ - Λ T_e values. On the other hand, ϕ - Λ T_e is roughly constant and approximatively vanishes for all radii in the cold-ion model. Even if the deviations of ϕ from Λ T_e are larger in the GM and Braginskii simulations, the differences ϕ - Λ T_e remain smaller than the values of ϕ and Λ T_e (ϕ - Λ T_e ∼ 0.1 for r ≲ r_s compared to ϕ∼Λ T_e ∼ 2, see fig:snapshotsxz). §.§ Turbulence analysis We now delve into the analysis of the turbulence properties, comparing the GM predictions with Braginskii simulations. The instantaneous fluctuations are obtained by subtracting the time-averaged profiles from the full quantities, such that the fluctuations of, e.g., the electrostatic potential, ϕ, is defined by ϕ = ϕ - ϕ, where ϕ denotes the time-averaged potential. Similar definitions for the other quantities are used. The top panels of fig:phisnapshots show instantaneous snapshots of ϕ in the plane perpendicular to the magnetic field at the center of the device z=0 R, while the bottom panels illustrate ϕ snapshots. The Braginskii, cold-ion, and GM simulations with various (P,J) are considered. We first observe that the fluctuations in the Braginskii model closely remind those obtained in Ref. Fisher2015. In particular, the level of fluctuations is low at the center of the device and far from the source region, while it is large where the equilibrium gradient is steeper, in particular near r ∼ r_s (see fig:profiles). Notably, the ϕ snapshots reveal the presence of large amplitude structures propagating outwards. These observations hold for all the GM simulations, demonstrating a good qualitative agreement between the GM approach and the Braginskii model. While the fluctuations of the potential ϕ is not significantly affected by the number of GMs used in the simulation or by the collisionality regime, pointing out the fact that the KH instability (which drives turbulence) has a fluid nature, minor differences in the turbulent properties can still be observed. In fact, the use of a small number of GMs tends to produce slightly larger turbulent structures. This can be observed, for instance, by comparing the results of (P,J) = (2,1) with the (12,1) simulations in the LC regime. Finally, we observe that the cold-ion model produces the largest turbulent structures, which is consistent with the broad time-averaged profiles observed in fig:profiles. The same observations apply to the snapshots of ion gyrocenter density N_i and its associated fluctuations N_i, as shown in fig:nisnapshots. Similar plots are obtained for n_e and T_e, but not shown. We now proceed to analyze the root mean square (RMS) of the fluctuations, defined as √( n_e^2) in the case of electron density fluctuations n_e and similarly for the other quantities. fig:rms displays the RMS of the electron density, n_e, and the electrostatic potential, ϕ, fluctuations plotted as a function of the radius. The data are computed at z = 0 R and normalized to n_e(r) (and to ϕ(r)) <cit.>. We find that the RMS values of the density displayed in fig:rms closely recall those obtained in previous fluid <cit.> and GK <cit.> simulations. Consistent with the observations made in fig:nisnapshots, the RMS values reach their maximum when the gradients are most pronounced near r ∼ r_s. For r ≲ r_s and for r ≳ r_s (where the gradients are smaller), the RMS values decrease because of the absence of the instability drive. 
Using a low number of GMs or considering the LC regime results in slightly larger RMS values (in particular of ϕ). Overall, this indicates that the level of fluctuations in the steep gradient region is sensitive to the number of GMs used in the simulations. We demonstrate in sec:vsp that the large RMS values observed in fig:rms are associated with a lack of resolution (i.e. insufficient number of GMs) to describe the ion distribution function F_i. Finally, we remark that the best agreement with the Braginskii predictions is obtained by the GM simulation with (P,J) = (6,1) in the HC regime and the largest RMS values (in particular of N_i) are obtained in the cold-ion model. We compare the RMS of the parallel electrical current J_∥ measured at the sheath entrance located at z = - 18 R. The results are shown in fig:rmsjpar as a function of the radius and normalized to the maximum of J_∥(r). It is clearly observed that the boundary conditions imposed on the electron and ion parallel velocities allow for the parallel current to fluctuate. This is in contrast to the case of logical sheath boundary condition, where J_∥ = 0 is imposed everywhere <cit.>. We remark that larger fluctuations of J_∥ are obtained in the Braginskii simulations, while the largest RMS is observed in the case of the cold-ion model. We now turn our attention to the skewness of the ion density fluctuations, which is defined as the third normalized moment of the ion gyrocenter density fluctuation, that is N_i^3 / N_i^2^3/2. The skewness of the density is often used to characterize the presence of plasma holes and blobs, associated with negative and positive skewness respectively <cit.>. fig:skewness shows the skewness of the ion density N_i. In all cases, the skewness is negative for r ≲ r_s, indicating the presence of density holes in the region where the plasma source is present. On the other hand, in the region where r ≳ r_s, the skewness is positive. The sign and amplitude of the skewness shown in fig:skewness are consistent with previous fluid <cit.> and GK <cit.> simulations, with the values obtained in the GM simulations being similar to those observed in the Braginskii case, albeit slightly smaller. Overall, the present turbulent analysis demonstrates that the full-F GM approach is in qualitative agreement with the Braginskii model, employed in previous numerical investigations <cit.> and validated with experimental data <cit.>. §.§ Ion distribution function at quasi-steady state We now investigate the features of the ion distribution function F_i in velocity-space. To obtain the full-F ion distribution function, F_i, from the GM simulations, we use the expansion in (<ref>), truncated to a finite number of GMs, and we compute it as a function of x_i and the unshifted parallel coordinate v_∥ / v_Ti (v_∥ / v_Ti = s_ i + √(2 τ_i) U_ i). Also for this analysis, we consider the quasi-steady period. fig:vsp shows F_i obtained from the (P,J) = (6,1) simulations in the HC regime at the center of the machine (z=0R) and at the sheath entrances, z=- 18R and z=18R. At the two sheath entrances, F_i is centered around the ion parallel velocity, U_ i =± c_s respectively, a consequence of the Bohm sheath boundary conditions given in (<ref>). On the other hand, F_i is centered around v_∥≃ 0 at z = 0R, where U_ i≃ 0. 
The absence of fine velocity-space structures in fig:vsp is a consequence of the lack of strong kinetic effects such as trapped particles and FLR effects <cit.> in LAPD and explains the weak dependence of the turbulence properties on the number of GMs, reported in sec:turbulence. fig:vspslices shows the ion distribution function at the sheath entrance (z = 18 R and x = y = 0) for x_i =0, in the LC and HC regimes and for different values of (P,J). We first observe that the bulk region of F_i (near v_∥ / v_Ti∼ 1) is well approximated by a shifted Maxwellian. However, deviations from the Maxwellian distribution function are noticeable in the tails of F_i in the LC regime. These deviations become pronounced as (P,J) increases (e.g., from (6,1) to (12,1)) which indicates that F_i is not sufficiently resolved in the LC regime at low (P,J). Finally, we remark that collisional effects tend to widen F_i due to the collisional parallel velocity-space diffusion present in the nonlinear Dougherty operator. We note that the use of v_∥ / v_Ti as an argument in the Hermite polynomials, H_p, in (<ref>) would compromise the convergence properties of the GM approach, with respect to the use of v_∥ / v_Ti, leading to simulations that show unphysical distribution functions with negative values when the same number of GMs are considered as in the simulations presented here (see fig:vsp). In fact, if the unshifted GMs _v_∥^pj, defined with respect to v_∥ / v_Ti as the argument of H_p, i.e. 𝒩^pj_v_∥ = 2 π∫_- ∞^∞ d v_∥∫_0^∞ d μB/m_i F_i H_p ( v_∥/ v_Ti) L_j(x_i)/√(2^p p!), are used to expand F_i, it is found that _v_∥^pj≠ 0 for (p,j) > 0, even when F_i is a Maxwellian distribution function centered at U_ i≠ 0. Indeed, using (<ref>), one derives the analytical expression of the unshifted GMs for F_i = F_Mi, _v_∥^pj = δ_j^0/√(π)∫_- ∞^∞ d ( v_∥/v_Ti) e^- (v_∥ / v_Ti - √(2 τ_i ) U_ i)^2 H_p( v_∥/v_Ti) /√(2^p p!) = √(2^p/p!)(√(2 τ_i ) U_ i)^p δ_j^0, where U_ i is normalized to c_s0. While the amplitude of the unshifted GM decreases rapidly in the presence of subsonic ion flow, U_ i≪ 1, the decrease of the amplitude with p is slower in the presence of sonic flows, such that ^pj_v_∥∼√(2^p / p!). §.§ GM spectrum at quasi-steady sate To better assess the velocity-space representation of F_i in our simulations, we plot the amplitude of the GMs, ^p0, at the sheath entrance of the device, z=18 R and r =0, in fig:gmspectrum (a similar plot is obtained for ^p1 showing considerably smaller amplitudes). This is the amplitudes of the GMs associated with the distribution functions displayed in fig:vspslices. As it can be clearly observed, the amplitude of the GMs decays faster in the HC regime than in the LC regime. The results of the LC (P,J) = (12,1) simulation shows that P ≳ 12 ensures that F_i is well resolved since ^P0 provides a negligible contribution compared to ^00 to F_i. On the other hand, the contributions from ^p0 with p ≳ 4 are negligible in the HC regime, thereby justifying the closure by truncation for P ≳ 4. We also notice that ^10 =0 in all cases, as a consequence of (<ref>). Finally, we note that the amplitude of the low-order GMs is not sensitive to P, as shown in fig:gmspectrum. More precisely, the low-order GMs for (P,J) = (6,1) strongly resemble the ones of the (P,J) = (12,1) simulation in the LC case. This holds true also in the HC regime, for instance, by comparing the (P,J) = (6,1) and (P,J) = (2,1) simulations. 
This suggests (in addition to the similar results obtained in sec:turbulence with different (P,J)) that full-F turbulent calculations using the GM approach are less sensitive to the values of P and J than linear computations <cit.>, where applying a closure by truncation at low P and J can introduce spurious artifacts <cit.>. In addition, fig:gmspectrum reveals that the large RMS values depicted in fig:rms (e.g., (P,J) = (6,1) in the LC and (P,J) = (2,1) in the HC regime) correspond to cases where the GM representation of F_i is unresolved, while still yielding good turbulence predictions. Additional investigations are required to verify the effect of the closure in the presence of kinetic effects such as trapped particles and magnetic drifts, which are absent in LAPD. Finally, fig:npj presents snapshots of the GMs for different values of p in the perpendicular plane, obtained for the (P,J) = (6,1) simulations in the LC and HC regimes. It is clearly visible that the turbulent structures are dominated by a long-wavelength perpendicular KH instability for all values of p. The decay of the amplitude of the turbulent structures due to collisions and with increasing p is also evident. § CONCLUSIONS In this work, we present the first full-F turbulent simulations based on the GM approach in a linear plasma device configuration with open straight field lines, such as LAPD. We consider an electrostatic and long-wavelength ion GK model for the full ion distribution function F_i, coupled to the electron Braginskii fluid model for the electron density n_e, parallel velocity U_ e, and temperature T_e. The ion GK model is solved by deriving a full-F ion GM hierarchy equation, based on the Hermite-Laguerre polynomial expansion of F_i. In particular, a velocity-space coordinate centered at the local fluid ion parallel velocity is used to expand F_i, which ensures good convergence properties of the Hermite expansion in the presence of sonic ion flows. The GM hierarchy equation we consider is equivalent to the electrostatic and long-wavelength limit of the GK moment model for the boundary region derived in Ref. Frei2020. To account for the parallel losses at the end plates, Bohm sheath boundary conditions, equivalent to the ones previously used in LAPD fluid simulations <cit.>, are used. We also consider a nonlinear ion-ion Dougherty collision operator. The ion GM hierarchy equation is implemented in a numerical code, enabling us to perform the first full-F turbulent calculations based on a moment approach. We present simulations of a linear device using LAPD physical parameters based on a Helium plasma <cit.> and a first-of-its-kind comparison with the two-fluid Braginskii model. Several nonlinear simulations are performed using different numbers of Hermite and Laguerre GMs in a low and a high-collisional ion regime. Overall, a good qualitative agreement on the time-averaged radial profiles with the Braginskii model is observed with the GM approach. This is expected from our analysis, which shows that turbulence is dominated by the long perpendicular wavelength and k_∥≃ 0 Kelvin-Helmholtz instability of fluid nature. The RMS and skewness of the fluctuations in the GM simulations also agree with the ones previously obtained in fluid <cit.> and GK <cit.> simulations of LAPD. In particular, we find that the RMS values are often larger than the ones predicted by the Braginskii model, if the number of GMs is not sufficient to properly resolve the ion distribution function.
The largest RMS values are observed with the cold-ion reduced model (with a difference up to ∼ 20 % with respect to the Braginskii model), while the results closest to the one of the Braginskii model are obtained if collisions are introduced in the GM approach with a sufficient number of GMs, in this case (P,J) = (6,1). Overall, collisions reduce the turbulent fluctuations level, but they do not significantly alter the observed turbulent regimes and radial transport. At the same time, the analysis of the ion distribution function F_i reveals that collisions damp the amplitudes of the GMs, thereby allowing for a reduction in the number of GMs required in the simulations (typically from (P,J) ∼ (12,1) in the low collisional regime to (P,J) ∼ (6,1) in the high-collisional regime of LAPD). Overall, the present work constitutes a step toward the development of future full-F turbulent simulations of the boundary region of fusion devices using the GM approach, which offers an ideal flexible tool to capture kinetic and collisional effects at the desired level of accuracy. The simulations of the boundary of fusion devices require that the present model is extended to the full GM hierarchy of Ref. Frei2020 to include electron kinetic, electromagnetic, FLR, and geometry effects. In addition, a more accurate description of the role of ion-ion collisions involves the implementation of a nonlinear collision operator model with increasing physics fidelity, such as the nonlinear Coulomb operator <cit.>. Proper sheath boundary conditions for the GM hierarchy equation, which extend the simplified Bohm sheath boundary condition used here (see (<ref>)), can enhance the reliability of our simulations. These boundary conditions can be obtained by following a procedure similar to the one outlined in Ref. Loizu2012. Finally, we remark that the implementation of a kinetic electron description is essential also to perform high-fidelity LAPD simulations, as fast and less collisional electrons (with T_e ∼ 15 eV) are emitted by pulsed plasma discharges in experiments <cit.>. Furthermore, kinetic electrons are important in setting the sheath boundary conditions where electrons are reflected because of the potential drop, yielding strong velocity-space gradients in the electron distribution function <cit.>. § ACKNOWLEDGEMENT The authors acknowledge helpful discussions with Alessandro Geraldini and Stephan Brunner. This work has been carried out within the framework of the EUROfusion Consortium, via the Euratom Research and Training Programme (Grant Agreement No 101052200 — EUROfusion) and funded by the Swiss State Secretariat for Education, Research and Innovation (SERI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union, the European Commission, or SERI. Neither the European Union nor the European Commission nor SERI can be held responsible for them. The simulations presented herein were carried out in part on the CINECA Marconi supercomputer under the TSVVT422 project and in part at CSCS (Swiss National Supercomputing Center). This work was supported in part by the Swiss National Science Foundation.
http://arxiv.org/abs/2307.06205v1
20230712144933
Monotonicity formula and stratification of the singular set of perimeter minimizers in RCD spaces
[ "Francesco Fiorani", "Andrea Mondino", "Daniele Semola" ]
math.DG
[ "math.DG", "math.MG" ]
[ [ Received May 01, 2023 / Accepted May 31, 2023 ================================================= The goal of this paper is to establish a monotonicity formula for perimeter minimizing sets in (0,N) metric measure cones, together with the associated rigidity statement. The applications include sharp Hausdorff dimension estimates for the singular strata of perimeter minimizing sets in non collapsed spaces and the existence of blow-down cones for global perimeter minimizers in Riemannian manifolds with nonnegative Ricci curvature and Euclidean volume growth. § INTRODUCTION The main goal of this paper is to prove a monotonicity formula for perimeter minimizing sets in (0,N) metric measure cones, together with the associated rigidity statement. Among the applications, we establish sharp Hausdorff dimension estimates for the singular strata of perimeter minimizing sets in non collapsed spaces, and the existence of blow-down cones for global perimeter minimizers in Riemannian manifolds with nonnegative Ricci curvature and Euclidean volume growth. Below we briefly introduce the setting and then discuss more in detail the main results and their relevance. The class of RCD(K,N) spaces consists of infinitesimally Hilbertian metric measure spaces spaces with synthetic Ricci curvature lower bounds and dimension upper bounds. More specifically, N ∈ [1,∞) represents a synthetic upper bound on the dimension and K∈ represents a synthetic lower bound on the Ricci curvature. This class includes finite dimensional Alexandrov spaces with curvature bounded from below and (possibly pointed) measured Gromov Hausdorff limits of smooth Riemannian manifolds with Ricci curvature lower bounds and dimension upper bounds, the so-called Ricci limit spaces. Many of the results in this paper are new also in these settings, to the best of our knowledge. We address the reader to Section <ref> and to the references therein indicated for the relevant background on spaces. Sets of finite perimeter have been a very important tool in the developments of Geometric Measure Theory in Euclidean and Riemannian contexts in the last seventy years. In <cit.>, and the more recent <cit.>, most of the classical Euclidean theory of sets of finite perimeter has been generalized to (K,N) metric measure spaces. Moreover in <cit.> the second and the third author started a study of locally perimeter minimizing sets in the same setting (see also <cit.>). Due to the compactness of the class of (K,N) spaces with respect to the (pointed) measured Gromov-Hausdorff topology, these developments have been important to address some questions of Geometric Measure Theory on smooth Riemannian manifolds, e.g. see <cit.>. §.§ Monotonicity Formula The first main result of this work is a monotonicity formula for perimeter minimizers in cones over RCD(N-2,N-1) spaces, with the associated conical rigidity statement. We recall that, by <cit.>, the metric measure cone over a metric measure space (X,,) is an (0,N) metric measure space if and only if (X,,) is an (N-2,N-1) metric measure space. For the sake of clarity, we introduce below the relevant notion of perimeter minimizing set in an (K,N) space. Let (X,,) be an (K,N) space. A set of locally finite perimeter E ⊂ X is a * Global perimeter minimizer if it minimizes the perimeter for every compactly supported perturbation, i.e. 
(E; B_R(x)) ≤(F; B_R(x)) for all x ∈ X, R>0 and F ⊂ X with F=E outside B_R(x); * Local perimeter minimizer if for every x∈ X there exists r_x>0 such that E minimizes the perimeter in B_r_x(x), i.e. for all F ⊂ X with F=E outside B_r_x(x) it holds (E; B_r_x(x)) ≤(F; B_r_x(x)) . Our main result is the following: Let N≥ 2 and let (X,,) be an RCD(N-2,N-1) space (with diam(X)≤π, if N=2). Let C(X) be the metric measure cone over (X,,) and let O denote its tip. Let E ⊂ C(X) be a global perimeter minimizer. Then the function Φ : (0,∞) → defined by Φ(r) := (E;B_r(O))/r^N - 1 , is non-decreasing. Moreover, if there exist 0<r_1<r_2<∞ such that Φ(r_1)=Φ(r_2), then E∩(B_r_2(O)∖B_r_1(O)) is a conical annulus, in the sense that there exists A⊂ X such that E∩(B_r_2(O)∖B_r_1(O))= C(A) ∩(B_r_2(O)∖B_r_1(O)) , where C(A)={(t,x)∈ C(X) x∈ A} is the cone over A⊂ X. In particular, if Φ is constant on (0,∞), then E is a cone (in the sense that there exists A⊂ X such that E=C(A)). The above monotonicity formula with rigidity generalizes the analogous, celebrated result in the Euclidean setting, see for instance <cit.>. On smooth Riemannian manifolds, it is well known that an almost monotonicity formula holds, with error terms depending on two sided bounds on the Riemann curvature tensor and on lower bounds on the injectivity radius. For cones over smooth cross sections, the monotonicity formula is a folklore result, see for instance <cit.>. Some special cases of Theorem <ref> have been discussed recently in <cit.>. We also mention that in <cit.> an analogous monotonicity formula in metric measure cones has been obtained for solutions of free boundary problems, generalizing a well known Euclidean result. In the proof, we adapt one of the classical strategies in the Euclidean setting. The implementation is of course technically more demanding, in particular for the rigidity part, due to the low regularity of the present context. The relevance of Theorem <ref> for the applications, that we are going to discuss below, comes from the fact that tangent cones of non collapsed (K,N) metric measure spaces (X,,ℋ^N) and blow-downs of (0,N) spaces (X,,ℋ^N) with Euclidean volume growth are metric measure cones, see <cit.> for the present setting and the earlier <cit.> for previous results in the case of Ricci limit spaces and Alexandrov spaces. It is an open question whether an almost monotonicity formula holds for perimeter minimizers in general (K,N) spaces, possibly under the non collapsing assumption. In particular, we record the following: Open question: let (X,,ℋ^N) be an (K,N) space and let E⊂ X be a local perimeter minimizing set. Is it true that the limit lim_r→ 0(E; B_r(x))/r^N-1 exists for all x∈∂ E? §.§ Stratification of the singular set and other applications It is well known that monotonicity formulas are an extremely powerful tool in the analysis of singularities of several problems in Geometric Analysis. We just mention here, for the sake of illustration and because of the connection with the developments of the present work: * the Hausdorff dimension estimates for the singular strata of area minimizing currents in codimension one, originally obtained in <cit.>; * the Hausdorff dimension estimates for the singular strata of (K,N) spaces (X,,ℋ^N), obtained in <cit.> and earlier in <cit.> in the case of non collapsed Ricci limit spaces. 
The proofs of the aforementioned results are based on the so-called dimension reduction technique, which relies in turn on the validity of a monotonicity formula with associated conical rigidity statement. In the present work, building on the top of Theorem <ref> we establish analogous Hausdorff dimension estimates for the singular strata of perimeter minimizing sets in (K,N) spaces (X,,ℋ^N). Below we introduce the relevant terminology and state our main results. Let (X,,ℋ^N) be an RCD(K,N) space, E⊂ X a locally perimeter minimizing set and 0 ≤ k ≤ N-3 an integer. The k-singular stratum of E, 𝒮^E_k, is defined as 𝒮_k^E := {x ∈∂ E (Y,ρ, ℋ^N, F,y), (Y,ρ,y) (Z×^k+1,_Z ×_eucl,(z,0)) (Z,_Z,z) F=G×^k+1 G⊂ Z }. The above definition would make sense also in the cases when k≥ N-2. However, it seems more appropriate not to adopt the terminology singular strata in those instances. Let (X,,ℋ^N) be an RCD(K,N) space and let E⊂ X be a locally perimeter minimizing set. Given x∈∂ E, we say that x is an interior regularity point if Tan_x(X,,ℋ^N,E,x)={(^N,_eucl,ℋ^N, ^N_+,0)} . The set of interior regularity points of E will be denoted by ℛ^E. Given x∈∂ E, we say that x is a boundary regularity point if Tan_x(X,,ℋ^N,E,x)={(^N_+,_eucl,ℋ^N, {x_1≥ 0}, 0)} , where x_1 is one of the coordinates of the ^N-1 factor in ^N_+=^N-1×{x_N≥ 0}. The set of boundary regularity points of E will be denoted by ℛ^E_∂ X. It was proved in <cit.> that the interior regular set ℛ^E is topologically regular, in the sense that it is contained in a Hölder open manifold of dimension N-1. By a blow-up argument it is not hard to show that dim_ℋℛ^E_∂ X≤ N-2 (see Proposition <ref>). Our main results about the stratification of the singular set for perimeter minimizers are that the complement of 𝒮_N-3^E in ∂ E consists of either interior or boundary regularity points, and that the classical Hausdorff dimension estimate (𝒮^E_k)≤ k holds for any 0≤ k≤ N-3. Let (X,,ℋ^N) be an RCD(K,N) space and let E⊂ X be a locally perimeter minimizing set. Then ∂ E∖𝒮_N-3^E=ℛ^E∪ℛ^E_∂ X . Let (X,,ℋ^N) be an RCD(K,N) space and E⊂ X a locally perimeter minimizing set. Then, for any 0≤ k≤ N-3 it holds dim_ℋ𝒮_k^E ≤ k . We point out that the Hausdorff dimension estimate for the top dimensional singular stratum had already been established in <cit.> (for limits of sequences of codimension one area minimizing currents in smooth Riemannian manifolds with Ricci curvature and volume lower bounds) and independently by the second and the third author in <cit.> (in the same setting of the present paper, under the additional assumption that ∂ X=∅). Elementary examples illustrate that the Hausdorff dimension estimates above are sharp in the present setting. With respect to the classical <cit.> or <cit.>, in the proof of Theorem <ref> we need to handle the additional difficulty that a monotonicity formula does not hold directly on the ambient space. Another application of the monotonicity formula with the associated rigidity is that if an (0,N) space (X,,ℋ^N) with Euclidean volume growth contains a global perimeter minimizer, then any asymptotic cone contains a perimeter minimizing cone. Let (X,,ℋ^N) be an (0,N) metric measure space with Euclidean volume growth, i.e. satisfying for some (and thus for every) x∈ X: lim_r→∞ℋ^N(B_r(x))/r^N >0. Let E⊂ X be a global perimeter minimizer. Then for any blow-down (C(Z),_C(Z),ℋ^N) of (X,,ℋ^N) there exists a cone C(W)⊂ C(Z) global perimeter minimizer. 
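As a simple illustration of Theorem <ref>, consider the model case X=ℝ^N endowed with the Euclidean structure, which has Euclidean volume growth and coincides with its unique blow-down. By classical results on area minimizing boundaries in Euclidean spaces, for N≤ 7 every non-trivial global perimeter minimizer in ℝ^N is a half-space, so that the cone C(W) in the conclusion can be taken to be a half-space; for N=8 non-trivial examples appear, such as the Simons cone {x∈ℝ^8 : x_1^2+x_2^2+x_3^2+x_4^2 < x_5^2+x_6^2+x_7^2+x_8^2}, which is a global perimeter minimizer and, being itself a cone, verifies the conclusion trivially.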
The conclusion of Theorem <ref> above seems to be new also in the more classical case of smooth Riemannian manifolds with nonnegative sectional curvature, or nonnegative Ricci curvature. We refer to <cit.> for earlier progress in the case of smooth manifolds with nonnegative sectional curvature satisfying additional conditions on the rate of convergence to the tangent cone at infinity and on the regularity of the cross section, and to the more recent <cit.> for the case of smooth Riemannian manifolds with nonnegative Ricci curvature and quadratic curvature decay. Related to the open question that we raised above, to the best of our knowledge it is not currently known whether in the setting of Theorem <ref> any blow down of the perimeter minimizing set must actually be a cone. Acknowledgements. The first author is supported by the EPSRC-UKRI grant “Maths DTP 2021-22”, at the University of Oxford, with reference EP/W523781/1. The second author is supported by the European Research Council (ERC), under the European's Union Horizon 2020 research and innovation programme, via the ERC Starting Grant “CURVATURE”, grant agreement No. 802689. The last author was supported by the European Research Council (ERC), under the European's Union Horizon 2020 research and innovation programme, via the ERC Starting Grant “CURVATURE”, grant agreement No. 802689, while he was employed at the University of Oxford until August 2022. He was supported by the Fields Institute for Research in Mathematical Sciences with a Marsden Fellowship from September 2022 to December 2022. He is currently supported by the FIM-ETH Zürich with a Hermann Weyl Instructorship. He is grateful to these institutions for the excellent working conditions during the completion of this work. § PRELIMINARIES In this work, a metric measure space (m.m.s. for short) is a triple (X,,), where (X,) is a complete and separable metric space and is a non negative Borel measure on X of full support (i.e. supp =X), called the ambient or reference measure, which is finite on metric balls. We write B_r(x) for the open ball centered at x∈ X of radius r>0. Under our working conditions, the closed metric balls are compact, so we assume from the beginning the metric space (X,) to be proper. Let (X,,) be a m.m.s. and fix x ∈ X. The quadruple (X,,,x) is called pointed metric measure space. We will denote with L^p(X;) := { u: X → : ∫ |u|^p < ∞} the space of p-integrable functions; sometimes, if it is clear from the context which space and measure we are considering, we will simply write L^p or L^p(X). Given a function u:X, we define its local Lipschitz constant at x ∈ X by lip(u)(x) := lim sup_yx|u(x)-u(y)|/(x,y) x ∈ X , and lip(u)(x) = 0 otherwise. We indicate by LIP(X) and LIP_loc(X) the space of Lipschitz functions, and locally Lipschitz functions respectively. We also denote by _b(X) and _bs(X) the space of bounded continuous functions, and the space of continuous functions with bounded support, respectively. We assume that the reader is familiar with the notion and basic properties of spaces. Let us just briefly recall <cit.> that a (K,N) metric measure space (X,,) has Ricci curvature bounded below by K∈ and dimension bounded above by N∈ [1,+∞] in a synthetic sense, via optimal transport. The (K,N) condition is a refinement of the (K,N) one, obtained by adding the assumption that the heat flow is linear or, equivalently, that the Sobolev space W^1,2(X,,) is a Hilbert space or, equivalently, that the Laplacian is a linear operator. 
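For instance, any smooth weighted Riemannian manifold (M,g,e^-f vol_g) whose N-Bakry–Émery Ricci tensor is bounded below by Kg is an RCD(K,N) space, while non-Riemannian Finsler manifolds may satisfy the CD(K,N) condition without being infinitesimally Hilbertian, and are therefore excluded from the RCD class.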
The condition was first introduced in the N=∞ case in <cit.> and then proposed in the N<∞ case in <cit.>. We refer the reader to the original papers <cit.>, or to the survey <cit.> for more details. §.§ Metric-measure cones An N-metric measure cone over a measure metric space (X,_X, _X) is defined as the warped product C(X):= ([0,∞) ×_C X, _̣C, _C) obtained with C_ (r) = r^2 and C_=r^N - 1. See <cit.> for some background. We will denote by O∈ C(X) the tip of the cone, given by {O} : = π({0}× X) ⊂ C(X). In what follows we will use a slight abuse of notation and denote π(t,x) by (t,x). In particular, O = (0,x) for any x ∈ X. Moreover, we shall adopt the more intuitive notation _̣C, _C to denote the distance and the reference measure on C(X), when there is no risk of confusion. There is an explicit expression for the distance between two points on a cone: ^2_C ((r_1,x_1),(r_2,x_2)) = r_1^2 +r_2^2 - 2r_1 r_2 cos (_X(x_1,x_2)∧π). In particular, we have _C (O, (r,x))=r . The following result was obtained in <cit.>. Let (X,_X,_X) be a metric measure space and let N≥ 2. Then (C(X), _C, _C) is an RCD(0,N) m.m.s. if and only if (X,_X,_X) is RCD(N-2,N-1) and, in the case N=2, diam(X)≤π. If N>2, then the diameter bound diam(X)≤π follows already from the (N-2,N-1) condition, by the Bonnet-Meyers theorem for spaces. Next, we show a result relating radial derivatives of functions defined on cones and the distance from the tip of the cone. The gradient of such distance function, in some sense, corresponds to the position vector field in the Euclidean setting. Some of the identities of the proof, obtained using basic calculus in the metric setting, will be used later in this work. The following result uses the BL characterization found in <cit.>, section 3.2. Let (C(X), _C, _C) be an (0,N) cone over some (N-2,N-1) space (X,_̣X,_X) and f ∈ W^1,2(X). Let f^(t)(x) := f(t,x) and f^(x)(t) := f(t,x), for (t,x) ∈ C(X). Then |∇ f^(x)| (t) =1/2t| ∇ f (t,x) ·∇_C^2 (O, ·) (t,x)| for -a.e. x∈ X and ℒ^1-a.e. r∈(0,∞) . By <cit.>, section 3.2, f^(t)∈ W^1,2(X) and f^(x)∈ W^1,2([0,∞)) for ℒ^1-a.e. t ∈ (0,∞) and -a.e. x ∈ X, respectively. By using the BL characterization of W^1,2(C(X)) functions (cf. <cit.>, section 3.2) and the polarization identity, we have ∇ f (t,x) ·∇_C^2(O, ·) (t,x)= = 1/2 |∇ (f +_C^2(O,·))|^2(t,x) - 1/2 |∇(f-_C^2(O,·))|^2 (t,x) = 1/2 |∂_r (f^(x)(t) +_C,(x)^2(t))|^2 - 1/2 |∂_r (f^(x)(t)-_C,(x)^2(t))|^2 = 1/2 |∂_r f^(x) +2t|^2 - 1/2 |∂_r f^(x)(t)-2t|^2 = 2t ∂_r f^(x)(t)= 2t sign(∂_r f^(x)(t)) |∇ f^(x)|(t) ; where we have denoted _C(O,(t,x))^(x) by _C,(x)(t). Moreover, we have used the fact that |∇ (f^(t) +(_C^2(O,·))^(t))|^2(x)= |∇ f^(t)|(x) since (_C(O,·))^(t) is constant. We have also exploited the identification between different notions of derivatives (by, say, <cit.>) |∇ g|=|∂_rg| for smooth functions g:[0,∞)→ and the explicit formula for the radial sections of the distance function from the origin given by (<ref>). Let us also point out that (<ref>) shows that ∇ f (t,x)·∇ (_C^2(O,(t,x)) = ∇ f^(x)(t)·∇ (^2_C, (x))(t) . Lastly, we point out that the following equality |∇_C(O,·)|(t,x) = |∂_r _C,(x) (t)| = 1/2t |∇_C^2(O,·)|(t,x) , implies |∇ f^(x)| (t) =| ∇ f (t,x) ·∇ (_C (O, (t,x)))| . See <cit.>. §.§ Finite perimeter sets in RCD spaces Let (X,,) be a metric measure space, u ∈ L^1_loc(X) and Ω⊂ X an open set. 
The total variation norm of u evaluated on Ω is defined by |D u|(Ω) := inf{lim inf_j ∞∫_Ωlip(u)(y) } , where the infimum is taken over all sequences (u_j) ⊂Lip_loc(X) such that u_j u in L^1_loc(X). A function u ∈ L^1(X) is said to have bounded variation if its total variation |Du|(X) is finite. In this case one can prove that |Du| can be extended to a Borel measure on X. The space of functions of bounded variation is denoted by (X). A set E⊂ X is of locally finite perimeter if, for all x ∈ X and R>0 there holds Per(E; B_R(x)): = inf{lim inf_i →∞∫_B_r(x)lip (f_i) : {f_i}⊂Lip_loc(X), f_i L^1_loc⟶χ_E } < ∞ . We denote the perimeter measure of a locally finite perimeter set E by Per(E). Let us point out that this coincides with the variational measure |Dχ_E| defined in (<ref>). We adopt the following notation: given a Borel set A ⊂ X, Per(E;A):= |Dχ_E(A)|. An important tool for what follows is the coarea formula (cf. <cit.>, theorem 2.16): Let (X,,) be an RCD(K,N) space and v ∈(X). Then {v>r} has finite perimeter for ℒ^1-a.e. r and for any Borel function f :X → it holds ∫_X f |̣Dv| = ∫_ ( ∫_X f ({v>r}) ) ṛ . Given a function u∈(X), we define the set E_t := {x ∈ X: u(x) ≥ t, t ∈}. As a consequence of the coarea formula, it holds |Du|(C) = ∫_(E_t; C) ṭ . We now look at a particular type of locally finite perimeter sets: cones inside cones. Let (C(X), _C, _C) be as in Proposition <ref>, r>0, x ∈ X. An important family of sets we will use later, in particular in the characterization of cones (see Lemma <ref>), is the following C(B^X_r(x)):={(t,y) ∈ C(X): y ∈ B^X_r(x)} . The sets C(B^X_r(x)) ⊂ C(X), r>0, are of locally finite perimeter. We will prove the claim by exhibiting an explicit sequence of Lipschitz functions converging to χ_C(B^X_r(x)). Let f_n(t,y) = 1 y ∈ B^X_r(x) ; nr +1 - n(y,x) y ∈ B^X_r + 1/n(x)∖ B^X_r(x) ; 0 y ∈ X ∖ B^X_r+1/n(x)) . Clearly, f_n is Lipschitz with bounded support, |f_n| ≤ 1, hence f_n ∈ W^1,2_loc(C(X)). Moreover, f_n →χ_C(B^X_r(x)) in L^1_loc(C(X)). Indeed, for p=(s,z) ∈ C(X), R>0 ∫_B_R(p) |f_n - χ_C(B^X_r(x))| _C = ∫_B_R(p) |f_n| χ_C(B^X_r + 1/n(x)∖ B^X_r(x))_C ≤_C( B_R(p)∩ C(B^X_r + 1/n(x)∖ B^X_r(x))). Using the Bishop Gromov inequality <cit.>, we have (B^X_r + 1/n(x)∖ B^X_r(x)) = (B^X_r(x))((B^X_r + 1/n(x)/(B^X_r(x)) - 1) ≤(B^X_r(x))(N-1/rn + O(1/n^2) ) . Therefore, _C(B_R(p) ∩ C(B^X_r + 1/n(x)∖ B^X_r(x))) = O(1/n) , which implies f_n →χ_C(B^X_r(x)) in L^1_loc(C(X)). Let us now show that lim sup_n →∞∫_B_R(p)lipf_n _C < ∞ , for any p=(s,z) ∈ C(X), R>0, which directly implies that C(B^X_r(x)) is a set of locally finite perimeter. It is elementary to check that lipf_n(t,x) = n y ∈ B^X_r + 1/n(x)∖ B^X_r(x) ; 0 . Therefore, using (<ref>), we obtain ∫_B_R(p)lipf_n _C= n _C(B_R(p) ∩ C(B^X_r + 1/n(x)∖ B^X_r(x)))= O(1) , as n→∞ . Let us recall some useful notions of convergence, in order to study blow-ups of sets of (locally) finite perimeter. We refer to <cit.> for more details. Let {(X_i,_i,_i,x_i)}_i be a sequence of pointed metric measure spaces converging in the pointed measured Gromov Hausdorff sense to (Y,ρ,μ, y). Let (Z,_Z) be the ambient space realizing the convergence. Moreover, let E_i⊂ X_i be a sequence of Borel sets with _i(E_i)<∞ for every i∈. We say that {E_i}_i converges to a Borel set F⊂ Y in the strong L^1 sense if the measures χ_E_i_iχ_Fμ in duality with _bs(Z) and _i(E_i)→μ(F). We also say that the convergence of E_i is strong in L^1_loc if E_i∩ B_r(x_i) converges in the strong L^1 sense to F∩ B_r(y) for all r>0. 
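For instance, if X_i=Y=ℝ^N with the Euclidean distance and the Lebesgue measure for every i, x_i=y=0 and E_i=B_1+1/i(0), then E_i converges to F=B_1(0) in the strong L^1 sense: indeed χ_E_iℒ^N converge to χ_Fℒ^N in duality with _bs(ℝ^N) and ℒ^N(E_i)→ℒ^N(F).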
Such a convergence can be metrized, by a distance 𝒟 defined on (isomorphsim classes of) quintuples (X,,,x,E), see <cit.>. We use that notion of convergence to define the tangent to a set of locally finite perimeter contained in an RCD(K,N) metric measure space. Before, let us recall that given an RCD(K,N) and a point x∈ X, by Gromov pre-compactness theorem there is a not-empty set Tan_x(X) of tangent spaces at x obtained by considering the pmGH limits of blow-up rescalings of X centered at x; moreover, for -a.e. x∈ X, the tangent space is unique and Euclidean <cit.>. Let (X,,) be an RCD(K,N) m.m.s. and E⊂ X be a set of locally finite perimeter. We say that (Y, ρ, μ, y, F)∈Tan_x(X,,,E) if (Y,ρ,μ,y) ∈Tan_x(X) and F⊂ Y is a set of locally finite perimeter of positive measure such that χ_E converges in the L^1_loc sense of Definition <ref> to F along the blow up sequence associated to the tangent Y. An important tool for our analysis is the Gauss-Green formula for sets of finite perimeter in the RCD setting. We refer to <cit.> for the proof and for background material; here let us briefly mention that, given a set of finite perimeter E⊂ X, it is possible to define the space of L^2-vector fields with respect to the perimeter measure, denoted by L^2_E(TX). Let (X,,) be an RCD(K,N) m.m.s. and E⊂ X a set of finite perimeter with (E) < ∞. Then there exists a unique vector field ν_E ∈ L^2_E(TX) such that |ν_E| =1 |Dχ_E|-almost everywhere and ∫_E div(v) = - ∫tr_E (v) ·ν_E (E), for all v ∈ W^1,2_C(TX) ∩ D(div) with |v|∈ L^∞(|Dχ_E|). Let us also recall the following useful cut and paste result proved in <cit.>. We use the following notation: Dχ_E = ν_E |Dχ_E|. Moreover, we denote by ℋ^h the codimension one Hausdorff type measure induced by with gauge function h(B_r(x)):=(B_r(x))/r, see <cit.> for further details. Let (X,,) be an RCD(K,N) m.m.s. and E, F ⊂ X be sets of finite perimeter. Then E∩ F, E∪ F and E∖ F are sets of finite perimeter. Moreover, there holds D χ_E∩ F = Dχ_E |_F^(1) + Dχ_F |_E^(1) + ν_E ℋ^h|_{ν_E=ν_F} ; Dχ_E∪ F = Dχ_E |_F^(0) + Dχ_F |_E^(0) + ν_E ℋ^h|_{ν_E=ν_F} ; Dχ_E∖ F = Dχ_E |_F^(0) - Dχ_F |_E^(1) + ν_E ℋ^h|_{ν_E=-ν_F} . We now recall the notion of perimeter minimizing sets. To avoid discussing trivial cases, we will always assume (E)>0 and (X∖ E)≠ 0. A set of locally finite perimeter E ⊂ X, with (E)>0 and (X∖ E)>0, is a * Global perimeter minimizer if it minimizes the perimeter for every compactly supported perturbation, i.e. (E; B_R(x)) ≤(F; B_R(x)) for all x ∈ X, R>0 and F ⊂ X with F=E outside B_R(x); * Local perimeter minimizer if for every x∈ X there exists r_x>0 such that E minimizes the perimeter in B_r_x(x), i.e. for all F ⊂ X with F=E outside B_r_x(x) it holds (E; B_r_x(x)) ≤(F; B_r_x(x)) . For a proof of the following density result see <cit.>. Let (X,,̣) be an (K,N) m.m.s. and let E ⊂ X be a local perimeter minimizer set and let x∈∂ E. Then there exists constants r_0, C >0 such that C^-1(E ∩ B_r(x))/r≤Per(E; B_r(x)) ≤ C (E ∩ B_r(x))/r , for all 0<r<r_0. We report here a few results on BV functions and their associated vectorial variational measures, for their proof and background on notation see <cit.>. Let (X, , ) be an RCD(K,∞) space and f ∈(X). Then there exists a unique vector field ν_F ∈ L_|DF|(TX) such that ∫_X f ÷ v = - ∫_X v ·ν_f |̣Df| , for all v ∈ QC^∞(TX) ∩ D(÷). In what follows, for f ∈(X) we will denote Df := ν_f |Df|. If E is a set of locally finite perimeter, we denote ν_E := ν_χ_E. 
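In the model case where X=ℝ^N is endowed with the Euclidean distance and the Lebesgue measure and E={x_N>0}, one has Dχ_E = e_N ℋ^N-1|_{x_N=0}, so that ν_E=e_N is the inner unit normal (consistently with the sign convention in the integration by parts formula above) and |Dχ_E|=Per(E;·)=ℋ^N-1|_{x_N=0}.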
This definition of unit normal is consistent with the one introduced above via the Gauss-Green formula, see <cit.>. For a function f:X→, define f^∧ = ap lim inf_y→ x f(y) = sup{ t ∈: lim_r ↘ 0(B_r(x) ∩{f<t})/(B_r(x))=0 } f^∨ = ap lim sup_y→ x f(y) = inf{ t ∈: lim_r ↘ 0(B_r(x) ∩{f>t})/(B_r(x))=0 } , and, lastly, f = f^∧+f^∨/2 , with the convention ∞ -∞ = 0. Let (X, , ) be an RCD(K,∞) space and f, g ∈ (X) ∩ L^∞(X). Then fg ∈(X) and D(fg) = f Dg + g Df . In particular, |D(fg)| ≤ |f| |Dg| + |g| |Df|. Let (X, , ) be an RCD(K,∞) space, E a set of locally finite perimeter and f∈(X)∩ L^∞(E). Then f̃ (x) := f(x) if x ∈ E , 0 elsewhere belongs to (X) and Df̃ = f Dχ_E + Df|_E. The result immediately follows by applying (<ref>) with g = χ_E. Let (X, , ) be an RCD(K,∞) m.m.s. and E a set of locally finite perimeter. Let f ∈(E) and g ∈(X∖ E). Let h:X → be defined as h(x):= f(x) x ∈ E ; g(x) x ∈ X∖ E . Then, h ∈(X). Moreover, called f̅, g̅ the representatives given by (<ref>), it holds Dh = Df|_E +Dg|_X∖ E + (f - g)Dχ_E. Let f̃ and g̃ be the extensions by zero given by Proposition <ref>. Then h = f̃ + g̃. § MONOTONICITY FORMULA A classical and extremely powerful tool for studying sets which locally minimize the perimeter in Euclidean spaces is the monotonicity formula for the perimeter. The goal of this section is to generalize such monotonicity formula (with the associated rigidity statement) to perimeter minimizers in cones over RCD spaces. In the next section, we will draw some applications on the structure of the singular set of local perimeter minimizers. Recall that given an RCD(N-2,N-1) space (X,_X,_X) then the metric-measure cone over X, denoted by (C(X), _C, _C), is an RCD(0,N) space (if N=2, we also assume that diam(X)≤π). We denote by O=(0,x)∈ C(X) the tip of the cone (see Section <ref> for more details) and B_r(O) the open metric ball centered at O of radius r>0. When we consider a local perimeter minimizer E, we shall always assume that E=E^(1) is the open representative, given by the measure theoretic interior. See <cit.> for the relevant background. Let N≥ 2 and let (X,,) be an RCD(N-2,N-1) space (with diam(X)≤π, if N=2). Let C(X) be the metric measure cone over (X,d,). Let E ⊂ C(X) be a global perimeter minimizer in the sense of Definition <ref>. Then the function Φ : (0,∞) → defined by Φ(r) := (E;B_r(O))/r^N - 1 , is non-decreasing. Moreover, if there exist 0<r_1<r_2<∞ such that Φ(r_1)=Φ(r_2), then E∩(B_r_2(O)∖B_r_1(O)) is a conical annulus, in the sense that there exists A⊂ X such that E∩(B_r_2(O)∖B_r_1(O))= C(A) ∩(B_r_2(O)∖B_r_1(O)) , where C(A)={(t,x)∈ C(X) x∈ A} is the cone over A⊂ X. In particular, if Φ is constant on (0,∞), then E is a cone (in the sense that there exists A⊂ X such that E=C(A)). In the case where E ⊂ C(X) is a locally finite perimeter set, minimizing the perimeter for perturbations supported in B_R+1(O), then the monotonicity formula holds on (0,R), i.e. the function Φ defined in (<ref>) is non-decreasing on (0,R). Also the rigidity statement holds, for 0<r_1<r_2<R. The proofs are analogous. Let us first give an outline of the argument. The first two steps are inspired by the approach used in the lecture notes <cit.>, which provide a proof of the monotonicity formula for local perimeter minimizers in Euclidean spaces by-passing the first variation formula. Classical references for this approach are <cit.>. 
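Before entering the details, it may be useful to recall the Euclidean model of the estimate we are after: if E⊂ℝ^N is a perimeter minimizer in B_R with 0∈∂ E, then for all 0<r_1<r_2<R it holds Per(E;B_r_2)/r_2^N-1 - Per(E;B_r_1)/r_1^N-1≥∫_∂^* E∩(B_r_2∖ B_r_1) (x·ν_E(x))^2/|x|^N+1 dℋ^N-1(x) (in fact with equality), so that monotonicity holds and constancy of Φ forces x·ν_E=0, i.e. E is a cone. The estimate obtained in Step 4 below is the analogue of this inequality in C(X), with x·ν_E replaced by ν_E·∇(1/2^2_C (O, ·)).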
The main idea is to approximate the characteristic function of E by regular functions f_k and approximate Φ by the corresponding Φ_f_k; show an almost-monotonicity formula for Φ_f_k and finally pass to the limit and get the monotonicity of Φ. This will be achieved in steps 1-3. In step 4 we relate the derivative of Φ with a quantity characterizing cones as in Lemma <ref>. Throughout the proof, we will write B_r in place of B_r(O) for the ease of notation. Step 1: Approximation preliminaries. In this step we show that, up to error terms, regular functions approximating χ_E preserve the minimality condition. The argument requires an initial approximation. Let f ∈LIP(C(X))∩ D_ loc(Δ)(C(X)) be non-negative. We introduce two functions a, b: [0,∞) [0,∞) to quantify the errors in the approximation: a(r) := | |D f|(B_r) - Per(E;B_r) | , b(r) := ∫_∂ B_r |_∂ B_r^ extχ_E - _∂ B_r f| (B_r) , where _∂ B_r^ extχ_E is the trace of χ_E from the exterior of the ball B_r. We remark that the interior and exterior normal traces can be defined by considering the precise representative of χ_E·χ_B_r and χ_E·χ_X∖ B_r respectively. See <cit.> and <cit.> for the Euclidean theory. Notice that _∂ B_r^ ext f= _∂ B_r f = f|_∂ B_r, since f is continuous. Fix R>0. Let 0<r<R and g ∈_loc(C(X)) be any function such that _∂ B_r^ int g = _∂ B_r f and g = χ_E on C(X)∖ B_r , where _∂ B_r^ int g is the trace of g from the interior of the ball B_r. The minimality of E implies (E; B_R) ≤({q ∈ C(X) : g(q) > t }; B_R), for any 0<t<1. Integrating in t and using the coarea formula (<ref>) we obtain (E; B_R) ≤∫_0^1 ({q ∈ C(X) : g(q) > t }; B_R) ṭ≤ |D g| (B_R) . Therefore, using Lemma <ref> and the definition of g, we obtain (E;B_r) = (E;B_R)-(E;B_R ∖ B_r) ≤ |D g| (B_R) - (E;B_R ∖ B_r) = |D g| (B_r) + ∫_∂ B_r |_∂ B_r^ extχ_E - _∂ B_r f| (B_r) = |D g| (B_r) + b(r) . Finally, for any such g there holds |D f| (B_r) ≤(E;B_r) + a(r) ≤ |D g| (B_r) +a(r) + b(r) . Step 2. Main computation. In this step we show the monotonicity, up to error terms, of an approximation of Φ, denoted below by Φ_f, obtained by replacing χ_E with the regular approximation f of step 1. Fix f as in step 1 and r>0. By <cit.>, |∇ f |^2 (t,x) = |∇ f^(x)|^2(t) + t^-2|∇ f^(t)|^2(x) , for -a.e. x∈ X and ℒ^1-a.e. t>0 . Let h:C(X) be defined by h(t,x):=f^(r)(x) for all t> 0. Notice that h is locally Lipschitz away from the origin and it is elementary to check that it has locally bounded variation. By <cit.>, it holds |D h|(t,x) = r/t |∇ f^(r)|(x) , for -a.e. x and ℒ^1-a.e. t . By integrating over B_r and using the coarea formula, we obtain ∫_B_r |D h| (t,x) _C = ∫_0^r ∫_∂ B_t |D h| (t,x) ( B_t) dt = ∫_0^r t^N - 1∫_Xr/t |∇ f^(r)|(x) ṭ = ∫_0^r t^N - 2/r^N - 2∫_∂ B_r |∇ f^(r)|(x) (B_r) ṭ = r/N - 1∫_∂ B_r |∇ f^(r)|(x) (B_r) . Let us point out that the latter expression can be viewed as the integral on ∂ B_r of the analog of the tangential derivative of f in the smooth case, while h is the radial extension of the values of f on ∂ B_r to the whole of C(X). Given r>0, let us introduce the quantity J(r):= ∫_B_r |∇ f| (t,x) _C= ∫_0^r t^N - 1∫_X |∇ f| (t,x) ṭ , which will approximate r^N - 1Φ (r). Notice that J is a Lipschitz function, hence it is almost everywhere differentiable. Using the identity (<ref>), we obtain that for a.e. r it holds J'(r) = ∫_∂ B_r |∇ f| (r,x) (B_r) = N - 1/r∫_B_r |D h| (t,x) _C + ∫_∂ B_r ( |∇ f| (r,x) - |∇ f^(r)|(x) ) (B_r) . We notice that _∂ B_r h = _∂ B_r f. 
By defining h̃:C(X)→ to be equal to h inside B_r and χ_E outside, we observe that (<ref>) and (<ref>) still hold if we replace h by h̃ (here it is key that B_r is the open ball). Therefore, in step 1 we can choose g=h̃ and (<ref>) reads as ∫_B_r |D h̃|(t,x) _C + a(r) + b(r) ≥ J(r) . Substituting (<ref>) into (<ref>) and rearranging, yields J'(r) - N - 1/rJ(r) ≥∫_∂ B_r ( |∇ f| (r,x) - |∇ f^(r)|(x) ) (B_r) - N - 1/r(a(r)+b(r)) . With a slight abuse, in order to keep notation simple, below we will write ∇_C(O, (r,x)) to denote ∇(_C (O, ·))(r,x). Using Proposition <ref> together with the fact that 1-√(1-s)≥s/2, for 0≤ s≤ 1, the BL characterization of the norm <cit.> of |∇ f| and the indentification between minimal weak upper gradients for different exponents on spaces from <cit.>, we have |∇ f|(r,x) - |∇ f^(r)|(x)/|∇ f|(r,x) =1-√(1-(∇ f (r,x) ·∇_C (O, (r,x)) )^2/|∇ f|(r,x)^2) ≥( ∇ f (r,x) ·∇_C (O, (r,x)) )^2/2|∇ f|(r,x)^2 = ( ∇ f (r,x) ·∇ (1/2^2_C (O, (r,x))) )^2/2r^2(|∇ f|(r,x))^2 , for -a.e. x∈ X and a.e. r∈(0,+∞). Above, we understand that all the term vanish on the set where |∇ f|=0. Let us now define the function Φ_f (r) := ∫_B_r |∇ f|(t,x) _C/r^N - 1= J(r)/r^N - 1 which will approximate the function Φ in the statement of the theorem. Notice that Φ_f is Lipschitz and differentiable almost everywhere by the coarea formula (<ref>). Taking its derivative and using (<ref>) and (<ref>) we obtain that for a.e. r it holds Φ_f' (r) = J'(r) - N - 1/rJ(r)/r^N - 1 ≥∫_∂ B_r(∇ f (r,x) ·∇(1/2^2_C (O, (r,x))) )^2/2r^N+1|∇ f|(r,x) (B_r) -N - 1/r^N(a(r)+ b(r)) . Integrating (<ref>) from 0< r_1 <r_2 <∞, and using coarea formula, we get Φ_f(r_2) - Φ_f (r_1) ≥∫_B_r_2∖B_r_1(∇ f (r,x) ·∇(1/2^2_C (O, (r,x))) )^2/2r^N+1|∇ f|(r,x) _C - ∫_r_1^r_2N - 1/r^N(a(r)+b(r)) ṛ . Step 3. Approximation. In this step we carry out an approximating argument, using step 2 and in particular (<ref>). This allows us to conclude the monotonicity part of the theorem. Let {f_k}_k ∈⊂LIP (C(X))∩ D_ loc(Δ) be a sequence of non-negative functions converging in BV_loc(X) to χ_E. That is, |f_k - χ_E|_L^1(B_r)k →∞⟶ 0, |D f_k | (B_r) k →∞⟶ |D χ_E | (B_r), for all r>0 . Such sequence can be easily constructed by approximation via the heat flow, see for instance <cit.> for analogous arguments. Let us start by showing that the errors defined in (<ref>) relative to f_k go to zero as k tends to ∞. The term a_k(r) := | |D f_k|(B_r) - Per(E;B_r) | k →∞⟶ 0 by _loc-convergence of f_k to χ_E, i.e. (<ref>). To deal with the error term b_k(r) := ∫_∂ B_r |_∂ B_r^ extχ_E - _∂ B_r f_k| (B_r), we can use the coarea formula (<ref>) to show that ∫_B_r |f_k - χ_E| _C = ∫_0^r b_k(s) ṣ . Together with the L^1-convergence of f_k to χ_E, this shows that b_k(r) → 0 for ℒ^1-a.e. r>0. Lastly, let us show that Φ_f(r)→Φ(r) for ℒ^1-a.e. r>0. By _loc convergence of f_k to χ_E (<ref>), we have lim_k→∞Φ_f_k(r) = 1/r^N - 1lim_k→∞∫_B_r |D f_k| _C = 1/r^N - 1∫_B_r(E)= Φ(r) . Consequently, letting k→∞ in the estimate (<ref>) with f replaced by f_k, we obtain Φ(r_2) - Φ(r_1) ≥ 0 , ℒ^1 r_2 > r_1 >0 , thanks to the non-negativity of the term ∫_B_r_2∖B_r_1(∇ f_k (r,x) ·∇(1/2^2_C (O, (r,x))) )^2/2r^N+1|∇ f_k|(r,x) _C ≥ 0 . To conclude that Φ is monotone, we need to extend (<ref>) to every r_2>r_1 > 0. Let {r_k}_k ∈ be any sequence such that r_k ↑ r. Since B_r is open, B_r_k↑ B_r. Hence, by the inner regularity of measures Per(E; B_r_k) →Per(E;B_r) . Let r_2, r_1 > 0. 
Since the set of radii for which (<ref>) holds is dense, we can find {r_1,k}_k ∈ and {r_2,l}_l ∈ for which (<ref>) holds and such that r_1,k↑ r_1 and r_2,l↑ r_2. Then 0 ≤lim_l →∞Φ(r_2,l) - lim_k →∞Φ(r_1,k) = Φ(r_2) - Φ(r_1) . Step 4. Rigidity. In this step we focus on the rigidity part of the statement. We show that if there exist r_2>r_1>0 such that Φ(r_1)=Φ(r_2), then E∩(B_r_2∖B_r_1) is a cone. The first step is to prove the following claim: lim inf_k →∞ ∫_B_r_2∖B_r_1(∇ f_k (r,x) ·∇(1/2^2_C (O, (r,x))) )^2/2r^N+1|∇ f_k|(r,x) _C ≥∫_B_r_2∖B_r_1(ν_E (r,x) ·∇(1/2^2_C (O, (r,x))) )^2/2r^N+1 (E) , where ν_E is the unit normal to E, see Theorem <ref>. Subsequently, we will be able to conclude using the characterization of cones provided by Lemma <ref>. The plan is to apply Lemma <ref> to prove (<ref>). Using the notation in Lemma <ref>, we define the measures μ_k := |∇ f_k| _C ⌞_(B_r_2∖B_r_1) , μ := Per(E; ·) ⌞_(B_r_2∖B_r_1) . The _loc-convergence of f_k to χ_E (see (<ref>)) ensures that μ_k μ in duality with _ b(C(X)). The functions g_k : = ∇ f_k ·∇(1/2^2_C (O, ·)))/√(2)r^N+1/2|∇ f_k|·χ_{|∇ f_k| >0}∈ L^2(C(X); μ_k) satisfy (<ref>). Indeed, ∇ f_k (r,x)·∇(1/2^2_C (O, (r,x))) ≤1/2 |∇ f_k|(r,x) |∇^2_C(O, (r,x))| = r |∇ f_k| (r,x) _C-a.e. . Therefore, using (<ref>), g_k^2_(X; μ_k) = ∫_B_r_2∖B_r_1(∇ f_k ·∇ (1/2^2_C (O, ·))))^2/2r^N+1|∇ f_k|·χ_{|∇ f_k| >0}_C ≤∫_B_r_2∖B_r_11/2r^N-1 |∇ f_k| _C < C < +∞ , for some C>0 independent of k ∈ thanks to the _loc-convergence (<ref>). Consequently, Lemma <ref> provides the existence of g ∈ L^2(C(X); μ) and a subsequence k(l) such that ∇ f_k(l)·∇(1/2^2_C (O, ·))/√(2)r^N+1/2|∇ f_k(l)|·χ_{|∇ f_k(l)| >0} μ_k(l) g Per(E; ·) ⌞_(B_r_2∖B_r_1) , in duality with _ b(C(X)). Up to relabelling the approximating sequence f_k, we can suppose that the whole sequence satisfies (<ref>). We next determine the limit function g. Fix a test function φ∈LIP∩ D(Δ) (B_r_2∖B_r_1) with compact support contained in B_r_2∖B_r_1. We apply the Gauss-Green formula (Theorem <ref>) and use that φ has compact support in B_r_2∖B_r_1 to obtain ∫_B_r_2∖B_r_1 f_k ÷ (φ/√(2)r^N+1/2∇( 1/2^2_C(O,·))) _C = - ∫_B_r_2∖B_r_1φ/√(2)r^N+1/2( ∇ f_k·∇(1/2^2_C (O, ·)) ) _C = - ∫_B_r_2∖B_r_1φ∇ f_k·∇(1/2^2_C (O, ·))/√(2)r^N+1/2|∇ f_k|μ̣_k . Using the L^1-convergence of f_k to χ_E and that ÷ (φ∇ (1/2^2_C(O,·))/√(2)r^N+1/2_L^∞ (B_r_2∖B_r_1) < ∞ , we infer that lim_k→∞ ∫_B_r_2∖B_r_1 f_k ÷ (φ/√(2)r^N+1/2∇( 1/2^2_C(O,·) )) _C = ∫_E÷ (φ/√(2)r^N+1/2∇(1/2^2_C(O,·) )) _C = - ∫_∂^* Eφ/√(2)r^N+1/2 ∇(1/2^2_C (O, ·) ) ·ν_E (E) , where in the last equality we used the Gauss-Green formula (Theorem <ref>). Combining (<ref>) and (<ref>) we obtain lim_k→∞∫_C(X)φ∇ f_k·∇(1/2^2_C (O, ·))/√(2)r^N+1/2|∇ f_k|μ̣_k = ∫_∂^* Eφ/√(2)r^N+1/2∇(1/2^2_C (O, (·))) ·ν_E (E). That is, ∇ f_k ·∇(1/2^2_C (O, ·)))/√(2)r^N+1/2|∇ f_k|μ_k ∇(1/2^2_C (O, (·))) ·ν_E/√(2)r^N+1/2Per(E) in duality with _c(B_r_2∖B_r_1), by approximation. By the uniqueness of the weak limit and from (<ref>) we can conclude that g = ∇(1/2^2_C (O, ·))·ν_E/√(2)r^N+1/2 Per(E)|_B_r_2∖B_r_1 . From (<ref>) in Lemma <ref>, we have lim inf_k →∞g_k^2_L^2(C(X);μ_k)≥g^2_L^2(C(X); μ) . That is, we have shown the claim (<ref>). We are now in position to improve the estimate (<ref>) and use it to show the rigidity part of the theorem. By taking the inferior limit in (<ref>), recalling (<ref>) and that the error terms go to zero from step 3, we use (<ref>) to infer Φ(r_2) - Φ (r_1) ≥∫_B_r_2∖B_r_1(ν_E (r,x) ·∇(1/2^2_C (O, (r,x))) )^2/2r^N+1 (E) ≥ 0, for every r_2>r_1>0. 
Since we are assuming Φ(r_1)=Φ(r_2), if follows that ∇ ( _C (O, ·)) ·ν_E =0 Per(E) B_r_2∖B_r_1 . By applying Lemma <ref>, we can conclude that E∩(B_r_2∖B_r_1) is a conical annulus. Let us now prove a useful characterization of conical annuli contained in cones over RCD spaces. The characterization is based on the properties of the normal to the boundary of the subset: roughly the subset is conical if and only if its normal is orthogonal to the gradient of the distance function from the tip of the ambient conical space. In case the ambient space is Euclidean, the result is classical (see for instance <cit.>). Let (X,,) be an RCD(N-2,N-1) space and let C(X) be the cone over X. Let E ⊂ C(X) be a locally finite perimeter set and let 0<r_1<r_2<∞. Then the measure theoretic interior E^(1)∩(B_r_2(O)∖B_r_1(O)) is a conical annulus if and only if ∇ ( _C (O, ·)) ·ν_E =0 Per(E)-a.e. on B_r_2(O)∖B_r_1(O). As in the proof Theorem <ref>, to keep notation short we will write B_r to denote the open ball of radius r>0 and centered at the tip of the cone, i.e. B_r=B_r(O). Also, we will write B_r^X(x) for the open ball in X, of center x and radius r>0. For simplicity of presentation we will show the equivalence only in the case r_1=0, r_2=∞. The general case requires minor modifications. Moreover, in order to simplify the notation, we assume without loss of generality that E=E^(1), as the condition (<ref>) is clearly independent of the chosen representative. Step 1. We start with some preliminary computations aimed to establish the identity (<ref>) below, which will be key in showing the characterization of conical annuli in C(X). Using the Gauss-Green and the coarea formulas, we will express the derivative of the function u(s):= _C (E∩ C(B^X_r(x))∩ B_s) (suitably rescaled) with the product between the unit normal of E and the gradient of the distance function from the tip of C(X). Let x ∈ X, r,s >0. By Lemma <ref> and Theorem <ref> the set F:=E∩ C(B^X_r(x))∩ B_s is a set of finite perimeter with D χ_F = D χ_E ⌞_C(B^X_r(x))∩ B_s + ∇( _C (O, ·)) Per(B_s)⌞_E ∩ C(B^X_r(x)) + ν_C(B^X_r(x))Per(C(B^X_r(x)))⌞_E∩ B_s . Using the Gauss-Green formula (Theorem <ref>), the equality for the laplacian of the distance function from the tip on cones <cit.>, and cut and paste of sets of locally finite perimeter (Theorem <ref>), we obtain N· u(s) = ∫_F Δ(1/2^2(O,·)) _C = ∫_C(B^X_r(x))∩ B_s ∩∂^* E∇(1/2^2(O,·))·ν_E (E) + ∫_E ∩ C(B^X_r(x))∩∂ B_s∇(1/2^2(O,·))·ν_∂ B_s (B_s) + ∫_E ∩ B_s ∩∂ C(B^X_r(x))∇(1/2^2(O,·))·ν_C(B^X_r(x)) (C(B^X_r(x))) . We now study separately the three integrals in the right hand side of (<ref>), starting from the last one. Fix a function φ∈LIP(C(X)) ∩ D(Δ) with compact support. By applying the Gauss-Green Theorem <ref> on the set of locally finite perimeter C(B^X_r(x)), we obtain ∫_∂ C(B^X_r(x))φ ν_C(B^X_r(x))·∇ (1/2^2 (O,·) ) (C(B^X_r(x))) = - ∫_C(B^X_r(x))∇φ·∇ (1/2^2 (O,·) ) _C + ∫_C(B^X_r(x))φΔ (1/2^2 (O,·) ) _C = ∫_B^X_r(x)∫_0^∞∂_r φ^(y)(r) ·∂_r (1/2_(y)^2 (r) )r^N - 1 ṛ (y) + ∫_C(B^X_r(x))φ N _C = - ∫_B^X_r(x)∫_0^∞φ^(y)(r) N r^N - 1 ṛ (y) + ∫_C(B^X_r(x))φ N _C =0, where we have used (<ref>), the definition of _C and integration by parts on . Since φ was arbitrary, we infer that ν_C(B^X_r(x))·∇ (1/2^2 (O,·) )=0 Per (C(B^X_r(x))) -a.e. and thus ∫_E ∩ B_s ∩∂ C(B^X_r(x))∇(1/2^2(O,·))·ν_C(B^X_r(x)) (C(B^X_r(x)))=0 . Let us now deal with the second integral appearing the right hand side of (<ref>). By the chain rule, we have ∇ (1/2^2 (O, q) ) = (O, q) ∇(O,q). 
Therefore, we obtain ∫_E ∩ C(B^X_r(x))∩∂ B_s∇(1/2^2(O,·))·ν_∂ B_s (B_s) = ∫_E ∩ C(B^X_r(x))∩∂ B_s(O,·) ∇(O,·) ·ν_∂ B_s (B_s) = s Per (B_s; E ∩ C(B^X_r(x))) , where we used <cit.>. Inserting (<ref>) and (<ref>) into (<ref>), yields u(s) = 1/N∫_C(B^X_r(x))∩ B_s∇(1/2^2(O,·))·ν_E (E)+ s/NPer (B_s; E ∩ C(B^X_r(x))) . By the coarea formula (<ref>), u is Lipschitz and differentiable almost everywhere and it holds u(s)= ∫_0^s Per (B_t; E ∩ C(B^X_r(x))) ṭ . We now compute the derivative of u(s)/s^N. Combining (<ref>) and (<ref>), we obtain that for a.e. s it holds /ṣu(s)/s^N = u'(s)/s^N - N u(s)/s^N+1 = - ∫_C(B^X_r(x))∩ B_s∇(1/2^2(O,·))·ν_E (E) /s^N+1. Fix 0<r<s and consider the sets A((s,x), r):= C(B^X_r(x))∩ B_s(1+r)∖ B_s(1-r) . By Lemma <ref> below, the family of sets {A(q,r)| q∈ C(X), r>0} generates the Borel σ-algebra of C(X), since for any q ∈ C(X), r>0 there exist r_a, r_b>0 such that B(q,r_a) ⊂ A(q,r) ⊂ B(q, r_b) . Define v(s,r):= _C(E∩ A((s,x),r)) = u(s(1+r))-u(s(1-r)) . The identity (<ref>) yields that for a.e. s, /ṣv(s)/s^N = - ∫_A((s,x), r)∇(1/2^2(O,·))·ν_E (E) /s^N+1 . Step 2. In this step we show that if E is a cone, then (<ref>) holds with r_1=0, r_2=∞. We will first show that v(s)/s^N is constant, and then conclude using (<ref>). Since E is a cone, there exists a set F ⊂ X such that E = {(t,x) ∈ C(X): x ∈ F, t ≥ 0 }. Note that E ∩ C(B^X_r(x)) is a cone for any x ∈ X, r>0. Thus, for any s >0, it holds: _C(E ∩ C(B^X_r(x))∩ B_s ) = (F ∩ B^X_r(x))∫_0^s ρ^N-1 ρ̣= s^N/N(F ∩ B^X_r(x)) , yielding that s↦ s^-N u(s) is constant. By (<ref>), for all r>0, p ∈ C(X) we have ∫_A(p, r)∇(1/2^2(O,·))·ν_E (E) = 0. By the Lebesgue differentiation Theorem (see for instance <cit.>), for Per(E)-a.e. p∈ C(X) it holds lim_r → 0_B_r(p)|∇(1/2^2(O,q))·ν_E(q) - ∇(1/2^2(O,p))·ν_E(p)| (E)(q)= 0 . From (<ref>) and the asymptotic doubling property of the perimeter we infer lim_r → 01/(E,A(p,r))∫_A(p,r)|∇(1/2^2(O,q))·ν_E(q) - ∇(1/2^2(O,p))·ν_E(p)| (E)(q) ≤ C lim_r → 01/(E,B_r_b(p))∫_B_r_b(p)|∇(1/2^2(O,q))·ν_E(q) - ∇(1/2^2(O,p))·ν_E(p)| (E)(q) = 0 , where r_a and r_b are as in Lemma <ref>. We can now conclude recalling (<ref>): 0 = lim_r → 01/(E,A(p,r))∫_A(p,r)∇(1/2^2(O,·))·ν_E (E) = ∇(1/2^2(O,p))·ν_E(p) , for (E)-a.e. p . Step 3. In this last step we show that (<ref>) for r_1=0, r_2=∞ implies that E is a cone. We will show that given (s,x) ∈ E and λ>0, then (λ s,x) ∈ E. Using the assumption (<ref>) and (<ref>), we obtain that s↦ v(s)=_C(E∩ A((s,x),r))/s^N is constant. Therefore, for λ>0 _C(E∩ A((s,x),r))= _C(E∩ A((λ s,x),r))/λ^N. Moreover, _C (A((λ t, x),r)) = (B^X_r(x))∫_λ t (1-r)^λ t (1+r) s^N - 1 ṣ = λ^N(B^X_r(x))∫_ t (1-r)^ t (1+r) s^N - 1 ds = λ^N_C (A((t, x),r))), The combination of (<ref>) and (<ref>) gives _C(E∩ A((s,x),r))/_C (A((s,x),r)) =_C(E∩ A((λ s,x),r))/_C (A((λ s,x),r)) . We next show that if (s,x)∈ E then (λ s, x) ∈ E. Thanks to (<ref>), it is enough to show that q ∈ E if and only if lim_r → 0_C(E∩ A(q,r))/_C (A(q,r)) =1 . Assume by contradiction that (<ref>) holds but q ∉E. Then, lim inf_r→ 0_C ((X∖ E)∩ B_r(q))/_C (B_r(q))≥ε >0 . Using Lemma <ref>, we infer that lim inf_r→ 0_C((X∖ E)∩ A(q,r))/_C (A(q,r)) ≥lim inf_r→ 0_C ((X∖ E)∩ B_r_a(q,r)(q))/_C (B_r_a(q,r)(q))·_C (B_r_a(q,r)(q))/_C (B_r_b(q,r)(q)) . Since C(X) is an (0,N) space, the Bishop-Gromov monotonicity formula <cit.> gives _C (B_r_a(q)) ≥ (r_a/r_b)^N _C (B_r_b(q)) . 
Therefore, from (<ref>) we may conclude lim inf_r→ 0_C((X∖ E)∩ A(q,r))/_C (A(q,r)) ≥lim inf_r → 0 (r_a(q,r)/r_b(q,r))^N ·lim inf_r → 0_C ((X∖ E)∩ B_r_a(q,r)(q))/_C (B_r_a(q,r)(q))) ≥ C ε > 0 , where C := lim inf_r → 0 (r_a(q,r)/r_b(q,r))^N >0 thanks to (<ref>). Clearly, (<ref>) contradicts (<ref>). The proof that q∈ E implies (<ref>) is analogous. The following technical lemma was used in the proof of Lemma <ref> above. Let N≥ 2, let (X,,) be an RCD(N-2,N-1) space and let (C(X), _C,_C) be the cone over it. If N=2, assume also that diam(X)≤π. For x∈ X and 0<r<s, consider the sets A((s,x), r):= C(B^X_r(x))∩ B_s(1+r)(O)∖ B_s(1-r)(O). Then: * The family of sets {A(q,r)| q∈ C(X), r>0} generates the Borel σ-algebra of C(X); * For any q ∈ C(X), r>0 there exist r_a=r_a(q,r) and r_b=r_b(q,r)>0 such that B_r_a(q) ⊂ A(q,r) ⊂ B_r_b(q) and lim_r → 0r_a/r_b=1/4√(2). The first claim follows from the second one; thus let us determine r_a and r_b>0 that satisfy the second statement. To this aim, we compute the minimal and maximal distance of q = (t,x) from the set ∂ A(q,r). Let us start from the minimal distance. We deal with the shell part first: given (t(1+r),y) ∈∂ B_t(1+r)∩∂ A there holds, using (<ref>) ^2_C ((t,x),(t(1+r), y)) = t^2 + t^2(1+r)^2 -2t^2(1+r) cos((x,y)) ≥ t^2 + t^2(1+r)^2 -2t^2(1+r) = t^2 r^2 , where the equality is achieved at y=x. Let now (s,y) ∈∂ C(B^X_r(x)) ∩∂ A(q,r): ^2_C ((t,x),(s,y)) = t^2 + s^2 -2 st cos(r) . This defines a differentiable function of s ∈ [t(1-r),t(1+r)]. Its derivative ∂_s ^2_C ((t,x),(s,y)) = 2s -2tcos(r) is increasing and vanishes at s=tcos(r). Therefore, we have ^2_C(q, ∂ A(q,r)) = t^2 sin^2(r) . Therefore, we may pick r_a = r_a(q,r) := 1/2_C(q, ∂ A(q,r)) = 1/2 t sin(r) . Next, let us compute the maximal distance of q from ∂ A(q,r). Since the maximal distance is attained at the intersection of the shell with the side part of A (by monotonicity of both formulas (<ref>) and (<ref>) with respect to (x,y) and s, respectively), we can simply compute the maximum by looking at the distance from the top shell. We again compute, for (x,y) = r, ^2_C ((t,x),(t(1+r), y)) = t^2 + t^2(1+r)^2 -2t^2(1+r) cos(r) = t^2(2 + 2r +r^2 - 2(1+r)cos(r)) . Consequently, we may pick r_b = r_b(q,r) := 2 t √((2 + 2r +r^2 - 2(1+r)cos(r))) . It is easy to check that r_a, r_b>0 defined in (<ref>), (<ref>) satisfy (<ref>). A useful technical tool, used to prove the rigidity statement of the monotonicity formula, is the following lemma (see <cit.> for the proof). Let (X,) be a Polish space. Let μ, μ_k ∈ℳ_+(X) with μ_k μ in duality with _b(X). Let g_k ⊂ L^2(X;μ_k) be a sequence of functions such that sup_k ∈g_k_L^2(X;μ_k) < ∞. Then, there exists a function g ∈ L^2(X;μ) and a subsequence k(l) such that g_k(l) μ_k(l) g μ in duality with _b(X) and lim inf_l →∞g_k(l)_L^2(X;μ_k(l))≥g_L^2(X; μ) . § STRATIFICATION OF THE SINGULAR SET AND FURTHER APPLICATIONS The first goal of this section is to prove sharp Hausdorff dimension estimates for the singular strata of locally perimeter minimizing sets in (K,N) spaces (X,,ℋ^N). The statement is completely analogous to the classical one for singular strata of minimizing currents in the Euclidean setting, see <cit.>, and for the singular strata of non collapsed Ricci limits <cit.> and spaces <cit.>. Also the proof is based on the classical Federer's dimension reduction argument, and builds upon the monotonicity formula and associated rigidity for perimeter minimizing sets in (0,N) metric measure cones, Theorem <ref>. 
Though, a difference between the present work and the aforementioned papers is that the monotonicity formula is available only at the level of blow-ups and not in the space X; this creates some challenges that are addressed in the proof. The second main goal will be to present an application of the monotonicity formula and the associated rigidity for cones, to the existence of perimeter minimizing cones in any blow-down of an (0,N) space (X,,ℋ^N) with Euclidean volume growth. Below we introduce the relevant definition of singular strata and of interior or boundary regularity points for a locally perimeter minimizing set E⊂ X, when (X,,ℋ^N) is an (K,N) metric measure space. Let (X,,ℋ^N) be an RCD(K,N) space, E⊂ X a locally perimeter minimizing set in the sense of Definition <ref> and 0 ≤ k ≤ N-3 an integer. The k-singular stratum of E, 𝒮^E_k, is defined as S_k^E := {x ∈∂ E: (Y,ρ, ℋ^N, F,y), (Y,ρ,y) (Z×^k+1,_Z ×_eucl,(z,0)) (Z,_Z,z) F=G×^k+1 G⊂ Z }. The above definition would make sense also in the cases when k≥ N-2. However, it seems more appropriate not to adopt the terminology singular strata in those instances. Let (X,,ℋ^N) be an RCD(K,N) space and let E⊂ X be a locally perimeter minimizing set in the sense of Definition <ref>. Given x∈∂ E, we say that x is an interior regularity point if Tan_x(X,,ℋ^N,E,x)={(^N,_eucl,ℋ^N,^N_+,0)} . The set of interior regularity points of E will be denoted by ℛ^E. Given x∈∂ E, we say that x is a boundary regularity point if Tan_x(X,,ℋ^N,E,x)={(^N_+,_eucl,ℋ^N, {x_1≥ 0},0)} , where x_1 is one of the coordinates of the ^N-1 factor in ^N_+=^N-1×{x_N≥ 0}. The set of boundary regularity points of E will be denoted by ℛ^E_∂ X. It was proved in <cit.> that the interior regular set ℛ^E is topologically regular, in the sense that it is contained in a Hölder open manifold of dimension N-1. By a blow-up argument, in the next proposition, we show that dim_ℋℛ^E_∂ X≤ N-2. Let (X,,ℋ^N) be an RCD(K,N) space. Let E⊂ X be a locally perimeter minimizing set and let ℛ^E_∂ X be the set of boundary regularity points of E, in the sense of Definition <ref>. Then dim_ℋℛ^E_∂ X≤ N-2 . We argue by contradiction. Assume there exists k>N-2, k∈ such that ℋ^k( ℛ^E_∂ X)>0 . Let ε>0. We define the quantitative ε-singular set to be S^ε(E) := {x ∈ X: 𝒟((B^X_r(x), , ℋ^N, E,x), (B^^N_r, _eucl, ^N_+, 0)) ≥ε r, r∈(0,ε) } . Recall that the distance 𝒟 was introduced in <cit.>. Notice that S^ε_1(E) ⊂ S^ε_2(E) for 0<ε_1 ≤ε_2 and that ∂ E ∖ℛ^E = ⋃_n ∈ S^ε_n(E), for any sequence ε_n ↓ 0. It is also clear that ℛ^E_∂ X⊂∂ E ∖ℛ^E. The combination of (<ref>), (<ref>) and (<ref>) implies that there exists ε>0 such that ℋ^k( S^ε (E) ∩ℛ^E_∂ X)>0 . By <cit.>, there exists x ∈ S^ε(E)∩ℛ^E_∂ X such that lim sup_r → 0ℋ^k_∞(B_r(x) ∩ S^ε(E)∩ℛ^E_∂ X)/r^k≥ 2^kC_k , where we denoted by ℋ^k_∞ the k-dimensional ∞-pre-Hausdorff measure. By the very definition of ℛ^E_∂ X, for every sequence r_i ↘ 0, E ⊂ (X, /r_i, ℋ^N/r_i^N, x) converges in the sense of Definition <ref> to a quadrant {x_1≥ 0}, where x_1 is one of the coordinates of the ^N-1 factor in ^N_+=^N-1×{x_N≥ 0}. Embedding the sequence of rescaled spaces X_i and their limit ^N_+ into a proper realization of the pGH-convergence, by Blaschke’s theorem (cf. <cit.>) there exist a compact set A⊂^N_+ and a subsequence, which we do not relabel, such that S^ε(E)∩ℛ^E_∂ X∩ B^i_1(x) converges to A in the Hausdorff sense. Moreover, it is elementary to check that A ⊂ S^ε({x_1≥ 0}) in ^N_+. 
Therefore, we obtain ℋ^k_∞(S^ε({x_1≥ 0})) ≥ℋ^k_∞(A) ≥lim sup_i →∞ℋ^k_∞(S^ε(E)∩ℛ^E_∂ X∩ B^i_1(x)) = lim sup_i →∞ℋ^k_∞(B_r_i(x) ∩ S^ε(E)∩ℛ^E_∂ X)/r_i^k >0 , where we relied on the classical upper semicontinuity of the pre-Hausdorff measure with respect to Hausdorff convergence in the second inequality and on (<ref>) in the last one. However, it is easy to check that S^ε({x_1≥ 0})= {x_1=x_N=0} which has Hausdorff co-dimension 2, contradicting (<ref>). Our main results about the stratification of the singular set for perimeter minimizers are that the complement of 𝒮_N-3^E in ∂ E consists of either interior or boundary regularity points, and that the classical Hausdorff dimension estimate (𝒮^E_k)≤ k holds for any 0≤ k≤ N-3. Below are the precise statements. Let (X,,ℋ^N) be an RCD(K,N) space and let E⊂ X be a locally perimeter minimizing set in the sense of Definition <ref>. Then ∂ E∖𝒮_N-3^E=ℛ^E∪ℛ^E_∂ X . Let (X,,ℋ^N) be an RCD(K,N) space and E⊂ X a locally perimeter minimizing set. Then, for any 0≤ k≤ N-3 it holds dim_ℋ𝒮_k^E ≤ k . Another application of the monotonicity formula with the associated rigidity is that if an (0,N) space (X,,ℋ^N) with Euclidean volume growth contains a global perimeter minimizer, then any asymptotic cone contains a perimeter minimizing cone. Let (X,,ℋ^N) be an (0,N) metric measure space with Euclidean volume growth, i.e. satisfying for some (and thus for every) x∈ X: lim inf_r→∞ℋ^N(B_r(x))/r^N >0. Let E⊂ X be a global perimeter minimizer in the sense of Definition <ref>. Then for any blow-down (C(Z),_C(Z),ℋ^N) of (X,,ℋ^N) there exists a cone C(W)⊂ C(Z) global perimeter minimizer. The conclusion of Theorem <ref> above seems to be new also in the more classical case of smooth Riemannian manifolds with nonegative sectional curvature, or nonnegative Ricci curvature. We refer to <cit.> for earlier progress in the case of smooth manifolds with nonnegative sectional curvature satisfying additional conditions on the rate of convergence to the tangent cone at infinity and on the regularity of the cross section and to <cit.> for the case of smooth Riemannian manifolds with nonnegative Ricci curvature and quadratic curvature decay. Let us consider a point x∈∂ E∖𝒮_N-3^E. By the very definition of the singular stratum 𝒮_N-3^E, there exists a tangent space to (X,,ℋ^N,E, x) at x of the form (^N-2× Z,_eucl×_Z,ℋ^N,y,G×^N-2), where (Z,_Z,ℋ^2) is an (0,2) metric measure cone (because all tangent cones to any (K,N) space (X,,ℋ^N) are metric measure cones <cit.>) and G⊂ Z is a globally perimeter minimizing set (in the sense of Definition <ref>) thanks to <cit.>. By Lemma <ref> there are only two options. Either x is an interior point and a tangent space is (^N,_eucl,ℋ^N,^N_+,0), or x is a boundary point and a tangent space is (^N_+,_eucl,ℋ^N,{x_1≥ 0},0). In the first case, it was shown in <cit.> that the tangent space at x is unique and hence x∈ℛ^E. If the second possibility occurs, then by <cit.> we infer that the tangent cone to the ambient space (X,,ℋ^N) is unique. The uniqueness of the tangent cone to the set of finite perimeter can be obtained with an argument completely analogous to the one used for interior points in <cit.>, building on top of the classical boundary regularity theory (cf. for instance with <cit.>) instead of the classical interior regularity theory for perimeter minimizers in the Euclidean setting. Hence x∈ℛ^E_∂ X is a boundary regularity point. We argue by contradiction via Federer's dimension reduction argument. The proof is divided into four steps. 
In the first step we set up the contradiction argument and reduce to the case of entire perimeter minimizers inside (0,N) metric measure cones. In the second step we make a further reduction to the case when the perimeter minimizer is a cone itself, building on top of Theorem <ref>. Via additional blow-up arguments we gain a splitting for the ambient space and for the perimeter minimizing set in step three, thus performing a dimension reduction. The argument is completed in step four. A key subtlety with respect to more classical situations is that the monotonicity formula holds only for perimeter minimizers centered at vertexes of metric measure cones, resulting into the necessity of iterating the blow-ups. Step 1. We argue by contradiction. Suppose that the statement does not hold for some 0≤ k≤ N-3. Then, there exists k'>k, k'∈ such that ℋ^k'( 𝒮^E_k )>0 . Let ε>0. We define the quantitative (k,ε)-singular stratum to be S_k,ε^E := {x ∈ X: 𝒟((B^X_r(x), , ℋ^N, E,x),(B^^k+1× Z_r, _eucl×_Z, F, (0, z))) ≥ε r r∈(0,ε), (Z,_Z,z) F= ^k+1× G G⊂ Z } . Recall that the distance 𝒟 was introduced in <cit.>. Moreover, we notice that S_k,ε_1^E⊂ S_k,ε_2^E for 0<ε_1 ≤ε_2 and that S_k^E = ⋃_n ∈ S_k,ε_n^E, for any sequence ε_n ↓ 0. The contradiction assumption (<ref>) implies that there exists ε>0 such that ℋ^k'( S^E_k,ε)>0 . By <cit.>, there exists x ∈ S^E_k, ε such that lim sup_r → 0ℋ^k'_∞(B_r(x) ∩ S^E_k,ε)/r^k'≥ 2^k'C_k' , where we denoted by ℋ^k'_∞ the k'-dimensional ∞-pre-Hausdorff measure. Then there exists a sequence r_i ↘ 0 such that E ⊂ (X, /r_i, ℋ^N/r_i^N, x) converges in the sense of Definition <ref> to a global perimeter minimizer F ⊂ (C(Z), _C, ℋ^N), in the sense of Definition <ref>. Here we used <cit.> in combination with Lemma <ref> for the compactness, <cit.> for the perimeter minimality of F and <cit.> to infer that the ambient tangent space is a cone. Here (Z,_Z,ℋ^N-1) is an (N-2,N-1) metric measure space. Embedding the sequence of rescaled spaces X_i and their limit C(Z) into a proper realization of the pGH-convergence, by Blaschke’s theorem (cf. <cit.>) there exist a compact set A⊂ C(Z) and a subsequence, which we do not relabel, such that S^E_k,ε∩ B^i_1(x) converges to A in the Hausdorff sense. Moreover, it is elementary to check that A ⊂ S^F_k,ε. Therefore, we obtain ℋ^k'_∞(S^F_k, ε) ≥ℋ^k'_∞(A) ≥lim sup_i →∞ℋ^k'_∞(S^E_k,ε∩ B^i_1(x)) = lim sup_i →∞ℋ^k'_∞(B_r_i(x) ∩ S^E_k,ε)/r_i^k' >0 , where we relied on the classical upper semicontinuity of the pre-Hausdorff measure with respect to Hausdorff convergence in the second inequality and on (<ref>) in the last one. Lastly, (<ref>) implies that ℋ^k'(B_1^C(Z)∩ S^F_k,ε) >0 . Step 2. In this step, by performing a second blow up, we apply Theorem <ref> to show that we can also suppose that the global perimeter minimizer is a cone (with respect to a vertex of the ambient cone). For the sake of clarity, we recall that the set of vertexes of C(Z) is the collection of all points y∈ C(Z) such that C(Z) is a metric cone centered at y. Moreover, we remark that the set of vertexes is isometric to ^k for some 0≤ k≤ N. We claim that there is a point O∈ C(Z) such that O is a vertex of C(Z) and the following density estimate holds: lim sup_r → 0ℋ^k'_∞(B_r(O) ∩ S^F_k,ε)/r^k'≥ 2^k'C_k' . If the claim does not hold, then by (<ref>) there are points of density for ℋ^k'_∞ restricted to S^F_k,ε and none of them belongs to the set of vertexes of C(Z). 
Hence we can repeat the argument in step 1, blowing up at a density point for ℋ^k'_∞ restricted to S^F_k,ε which is not a vertex in the ambient cone. In this way, the dimension of the set of vertexes of the ambient space, which is isometric to a Euclidean space, increases at least by one. The procedure can be iterated until one of the following two possibilities occurs: the ambient is isometric to ^N, with standard structure, in which case (<ref>) contradicts the classical regularity theory, or there is a density point for ℋ^k'_∞ restricted to S^F_k,ε which is also a vertex of C(Z). Let now O denote any such vertex of C(Z). By Theorem <ref> and the density estimates in Lemma <ref>, the map r↦Per(F; B_r(O))/r^N-1 is monotone non-decreasing, bounded and bounded away from 0. Therefore, there exists the limit 0< a:= lim_r→ 0Per(F;B_r(O))/r^N-1 < ∞ . We perform a second blow up at the tip O∈ C(Z) and obtain a global perimeter minimizer G⊂ C(Z). By (<ref>) and Theorem <ref>, G is a cone. Moreover, by repeating the arguments in step 1, taking into account that O was chosen to be a density point for ℋ^k'_∞ restricted to S^F_k,ε, it holds ℋ^k'(S^G_k,ε) >0 . It follows from (<ref>) that there exists a point in S^G_k, ε∖{O}. Step 3. The goal of this step is to gain a splitting for the ambient and the perimeter minimizer set by considering a blow-up of G at a density point for ℋ^k'_∞ restricted to S^G_k,ε that is not a vertex. Roughly speaking, we will achieve this by showing that the unit normal of the blow-up is everywhere perpendicular to the gradient of a splitting function obtained with the help of Lemma <ref> below, cf. <cit.>. Our setup is that G⊂ C(Z) is a globally perimeter minimizing cone with vertex O, a vertex of the ambient cone. Moreover, ℋ^k'(S^G_k,ε)>0. In particular, by the very same arguments as in Step 1, there exist a point O'∈ C(Z), O'≠ O and a sequence r_i↓ 0 such that lim_i →∞ℋ^k'_∞(B_r_i(O') ∩ S^G_k,ε)/r_i^k'≥ 2^k'C_k' . Up to taking a subsequence that we do not relabel, we can assume that the sequence (C(Z),_C/r_i,ℋ^N,O',G) converges to (C(Z'),_C',ℋ^N,O”,H), where (C(Z'),_C',ℋ^N) is an (0,N) metric measure cone splitting an additional factor with respect to C(Z) and H⊂ C(Z') is a global perimeter minimizer. Moreover, ℋ^k'(B_1(O”) ∩ S^H_k,ε )>0 . Consider the sequence of functions f_i:C(Z)→ defined as f_i(z):=^2_C(O,z)-^2_C(O,O')/r_i , that we view as functions on the rescaled metric measure space (C(Z),_C/r_i,ℋ^N,O'). By Lemma <ref> below, the functions f_i converge to some splitting function g:C(Z')→ in H^1,2_loc, see <cit.> for the relevant background. Moreover Δ f_i converge to 0 uniformly. We claim that, for any function φ∈LIP(C(Z'))∩ W^1,2(C(Z')) it holds ∫_H ∇φ·∇ g H^N = 0 . To see this, let φ_i ∈LIP(X_i)∩ W^1,2(X_i) converging H^1,2-strongly to φ along the sequence (C(Z),_C/r_i,ℋ^N,O'), whose existence was shown in <cit.>. Then, using the Gauss-Green formula (Theorem <ref>) and the characterization of cones in Lemma <ref> we obtain 0 = ∫_∂^* Gφ_i ν^i_G·∇_i f_i _i(G) = - ∫_G∇_i φ_i ·∇_i f_i H^N - ∫_Gφ_i ·Δ_i f_i H^N , where the Hausdorff measure ℋ^N is computed with respect to the rescaled distance _C/r_i. By (<ref>) below and (<ref>) ∫_G∇_i φ_i ·∇_i f_i H^N = - ∫_Gφ_i ·Δ_i f_i H^N → 0 . On the other hand, by <cit.>, it follows that ∫_G∇_i φ_i ·∇_i f_i H^N →∫_H ∇φ·∇ g H^N . Combining (<ref>) and (<ref>) we obtain (<ref>); cf. <cit.> for analogous arguments. 
Our next goal is to use (<ref>) to show that the perimeter minimizer H splits a line in the direction of the ambient splitting induced by the splitting function g. Let us set Y:=C(Z')=× Y', and assume that is the splitting induced by g. Given any φ∈ W_loc^1,2(Y) let us also denote φ^(t)(y) := φ(t,y) and φ^(y)(t) := φ(t,y). If φ∈ W_ loc^1,2(Y), then φ^(t)∈ W^1,2_loc(Y') and φ^(y)∈ W_loc^1,2(), for ℒ^1-a.e. t and ℋ^N-1-a.e. y respectively (see <cit.>). Up to the isomorphism given by the splitting induced by g, there holds ∇φ·∇ g (t,y)= ∂_t φ^(y)(t) , for ℋ^N-a.e. (t,y)∈ Y . Let P_s denote the heat flow on Y. Then ∫_Y P_s χ_H (t,y) ∇φ·∇ g (t,y) H^N = ∫_Y P_s χ_H (t,y) ∂_t φ^(y) (t) H^N = ∫_H P_s ∂_t φ^(y) (t) H^N = ∫_H ∂_t (P_s φ)^(y) (t) , where in the second equality we have used the self-adjointess of the heat flow and in the last equality we have used Lemma <ref>. By (<ref>) and (<ref>) it follows that ∫_Y P_s χ_H (t,y) ∇φ·∇ g (t,y) H^N =∫_H ∇ (P_s φ) ·∇ g H^N = 0 . Since φ∈LIP(Y) ∩ W^1,2(Y) is arbitrary, an elementary computation using Fubini's theorem and the splitting Y=× Y' shows that ∂_t (P_s χ_H)^(y)(t) = 0 for ℒ^1-a.e. t∈, for ℋ^N-1-a.e. y∈ Y' . By the L^1_loc(Y) convergence of P_s χ_H to χ_H for s↓ 0 and the closure of H, we conclude that χ_H^(y) is constant in t for every y∈ Y'. That implies the existence of a set H' ⊂ Y' such that χ_H (t,y) = χ_H' (y) . By Lemma <ref>, H' ⊂ Y' is a set of locally finite perimeter. Let us show that H' is a global perimeter minimizer, by following the classical Euclidean argument, cf. <cit.>. Suppose not. Then there exist ε > 0 and a set H'_0 ⊂ Y' such that H' Δ H'_0 ⊂⊂ B_r(y) for some r>0 and y∈ Y', such that Per(H'_0;B_r(y)) + ε≤Per(H';B_r(y)) . Let t>0. We define the sets I_t : = ∖ (-t,t) H_0 : = (H'_0 × (-t,t) ) ∪ (H' × I_t) . At this stage, we can use the formulas for the cut and paste of sets of finite perimeter (Theorem <ref>), observe that HΔ H_0 ⊂ B_r(y) × (-t,t) := A and conclude by Lemma <ref> that Per(H_0;A) - Per(H;A) = 2t (Per (H_0';B_r(y)) - Per(H; B_r(y))) + 2ℋ^N-1(H_0' Δ H') ≤ -2tε + 2 ℋ^N-1(B_r(y))<0 , where we have chosen t>0 large enough so that ℋ^N-1(B_r(y)) < tε. Therefore, H' is a global perimeter minimizer, as we claimed. If k=0, the above argument leads to a contradiction. Indeed we found a point in 𝒮^H_0 such that some tangent space splits a line. Step 4. If k>0, then it is straightforward to see that (t,y)∈ S_k, ε^H if and only if y∈ S_k-1, ε^H'. In particular, from the assumption that ℋ^k'(B_1(O”) ∩ S^H_k,ε )>0 , we conclude that ℋ^k'-1(B_1(O”) ∩ S^H'_k-1,ε )>0 . Therefore the steps from 1 to 3 prove that if there exist an (K,N) metric measure space (X,,ℋ^N) and locally perimeter minimizing set E⊂ X such that for some 0≤ k≤ N-3 it holds _ℋ(𝒮^E_k)>k, then there exist an (0,N-1) space (X',',ℋ^N-1) and a locally perimeter minimizing set E'⊂ X' such that _ℋ(𝒮^E'_k-1)>k-1. The dimension reduction can be iterated a finite number of times until we reduce to the case k=0, that we already discussed above. First of all, up to modifying E on a set of measure zero if necessary, we can (and will) assume that E is open. Step 1. Fix a point x∈∂ E. We claim that there exists C>1 such that r^N/C≤ℋ^N(E∩ B_r(x)) ≤ C r^N, for all r>0, r^N-1/C≤(E; B_r(x)) ≤ C r^N-1, for all r>0 . Recall that an (0,N) space is globally doubling (thanks to the Bishop-Gromov inequality <cit.>) and satisfies a global Poincaré inequality <cit.>. 
Since, by assumption, E mimimizes the perimeter on every metric ball then, by <cit.>, there exists a constant γ_0>0 (depending only on the doubling and Poincaré constants of (X,,ℋ^N)) such that ℋ^N(E∩ B_r(x))/ℋ^N(B_r(x))≥γ_0 and ℋ^N(B_r(x)∖ E)/ℋ^N(B_r(x))≥γ_0 for all r>0 and x∈∂ E. Recall that the ratio ℋ^N(B_r(x))/r^N is monotone non-increasing by Bishop-Gromov inequality, it is bounded above by the value in ^N and it is bounded below by a positive constant thanks to the assumption (<ref>). Hence (<ref>) follows from (<ref>). The perimeter estimate (<ref>) follows from (<ref>) and <cit.>. Step 2. The argument is similar to those involved in the proof of Theorem <ref> above and therefore we only sketch it. Let r_i→∞ be any sequence such that (X,/r_i,ℋ^N,x) converges to a tangent cone at infinity (C(Z),_C(Z),ℋ^N,O) of (X,,ℋ^N). By the Ahlfors regularity estimates (<ref>)-(<ref>) and the compactness and stability <cit.>, the sequence (X,/r_i,ℋ^N,E,x) converges to (C(Z),_C(Z),ℋ^N,F,O) for some non-empty perimeter minimizer F⊂ C(Z). At this stage, we are in position to apply Theorem <ref> and obtain a perimeter minimizing cone in C(Z), up to possibly taking an additional blow-down. In the remainder of the section, we present some technical results that have been used in the proof of Theorem <ref>. Let (Z,_Z,ℋ^2) be an (0,2) metric measure cone and let G⊂ Z be a globally perimeter minimizing set, in the sense of Definition <ref>. Then one of the following two possibilities occur: i) (Z,_Z,ℋ^2) is isomorphic to (^2,_eucl,ℋ^2) and G is a half-plane; ii) (Z,_Z,ℋ^2) is isomorphic to the half-plane (^2_+,_eucl,ℋ^2) and G is a quadrant. We distinguish two cases: if Z has no boundary, then we prove that it is isometric to ^2 and i) must occur; if Z has non empty boundary, then we prove that it is isometric to ^2_+ and that ii) must occur. Let us assume that (Z,_Z,ℋ^2) has empty boundary. Then, by <cit.>, Z is isometric to a cone over S^1(r) for some 0<r≤1. Moreover, by Theorem <ref> there exists a blow-down of G which is a global perimeter minimizing cone C(A), with vertex in the origin and A⊂ S^1(r) connected. Indeed, it is elementary to check that if A is not connected, then C(A) is not locally perimeter minimizing. Let 2π r θ be the length of A, where 0<θ<1. Let G'⊂ Z be a set of finite perimeter such that G' = G outside B_1 and ∂ G ∩ B_1 is composed by the geodesic connecting the two points in ∂ G ∩∂ B_1 = {x_1,x_2}. Such geodesic is contained in B_1 as can be verified through the explicit form of the metric. Using (<ref>) Per(G'; B_1) = √(2(1-cos(_Z'(x_1,x_2)∧π))) ≤ 2 = Per(G; B_1). Equality in (<ref>) is achieved for 2π r θ = _S^1(r)(x_1,x_2) ≥π, that is for 1≥ rθ≥1/2. Let us notice, by symmetry of S^1(r), that we may suppose that θ≤1/2. Indeed, for every fixed θ, we may find a comparison set with perimeter equal to the one constructed above corresponding to 1-θ. Hence equality is only achieved at r=1, θ = 1/2, corresponding to the case where Z=^2 and C(A) is a half space. Notice that once we have established that Z is isometric to ^2, it is elementary that G must be a half-space. In the case where Z has non empty boundary, by <cit.> again, Z is isometric to a cone over a segment [0,l] for some 0< l≤π. The upper bound for the diameter is required in order for the cone to verify the (0,2) condition. We claim that it must hold l=π. 
As above, by Theorem <ref>, there exists a blow-down of G which is a global perimeter minimizing cone C(A), with vertex in the origin and A⊂ [0,l] some set of finite perimeter. If A is not connected, then it is elementary to check that C(A) is not globally perimeter minimizing. Notice also that the complement of a global perimeter minimizer is a global perimeter minimizer. By minimality and symmetry we can suppose G=C([0,l')], for some 0<l'≤l/2. By considering a suitably constructed competitor in B_1, let us show that the only possibility is that Z is a half-space and C(A) is a quadrant. Consider the set G' coinciding with G outside of B_1 and whose boundary inside B_1 is the geodesic minimizing the distance between ∂ B_1 ∩∂ G and ∂ Z. Then Per(G';B_1)≤Per(G;B_1), with equality achieved only if l=π and l'=π/2. As above, once established that Z is isometric to ^2_+, it is elementary to check that G must be a quadrant. It is a standard fact that any blow-up of a cone centered at a point different from the vertex splits a line. For our purposes it is important to observe that the blow-up of the squared distance function from the vertex is indeed a splitting function in the blow-up of the cone. Let (X,,) be an (N-2,N-1) space and let (C(X),_C(X),_C(X)) be the metric measure cone over X, with vertex O∈ C(X). Fix p∈ C(X) with p≠ O. Let r_i↓ 0 and consider the sequence of rescaled spaces Y_i:=(C(X),_C(X)/r_i,_C(X)/(B_r_i(p)),p) converging in the pmGH topology to a tangent space Y of C(X) at p. Then the functions f_i(·):=_C(X)^2(O,·)-_C(X)^2(O,P)/r_i , viewed as functions f_i:Y_i→, have Laplacians uniformly converging to 0 and converge in H^1,2_loc to a splitting function g:Y→, up to the extraction of a subsequence. Let us set f(·):=_C(X)^2(O,·)-_C(X)^2(O,P) , in order to ease the notation. On C(X) it holds (see <cit.>) Δ f =2N , |∇ f(x)|=2_C(X)(x,O) , for a.e. on x∈ C(X) . By scaling, we obtain that Δ f_i =2Nr_i , |∇ f(x)|=2_C(X)(x,O) , for a.e. x∈ Y_i , where it is understood that the Laplacian and the minimal relaxed gradient are computed with respect to the metric measure structure (C(X),_C(X)/r_i,_C(X)/(B_r_i(p)),p). Notice that x↦ 2_C(X)(x,O) is a 2r_i-Lipschitz function on Y_i, by scaling. Hence the functions f_i:Y_i→ are locally uniformly Lipschitz, they satisfy f_i(p)=0, and they have Laplacians uniformly converging to 0. Up to the extraction of a subsequence, thanks to a diagonal argument, we can assume that they converge locally uniformly and in H^1,2_loc to a function g:Y→ in the domain of the local Laplacian, and that Δ f_i converge to Δ g locally weakly in L^2, thanks to <cit.>. We claim that g is a splitting function on Y, which amounts to say that Δ g=0 and |∇ g| is constant almost everywhere and not 0. The fact that Δ g=0 follows from the weak convergence of the Laplacians and the identity Δ f_i=2Nr_i that we established above. Analogously, employing the identity |∇ f_i(x)|=2_C(X)(x,O) a.e. on Y_i, and the local W^1,2 convergence of f_i to g, it is immediate to check that |∇ g|=2(·,O) a.e. on Y. The next result relates the Heat flow on product spaces with one dimensional derivatives. Let (X,,) be an (K,∞) space and let X× be endowed with the standard product metric measure space structure. Let φ∈ W^1,2(X×). Then for every s>0 it holds P_s ∂_t φ(x,t)= ∂_t (P_sφ)(x,t) , for _X⊗ℒ^1-a.e. (x,t)∈ X×. 
The statement follows from the tensorization of the Cheeger energy and of the heat flow for products of (K,∞) metric measure spaces, see for instance <cit.>, and from the classical commutation between derivative and heat semi-group on endowed with the standard metric measure structure. It is a well known fact of the Euclidean theory (see for instance <cit.>) that the perimeter enjoys natural tensorization properties, when taking an isometric product by an factor. The next lemma establishes the counterpart of this useful property. Let (X,_X,_X) be an RCD(K,N) space and let F⊂ X be a Borel set. Under these assumptions, E := F ×⊂ X × is a set of locally finite perimeter (where the product X× is endowed with the standard product metric measure structure), if and only if F ⊂ X is a set of locally finite perimeter. Moreover, for any open set A⊂ X and for any R>0 it holds R Per(F;A) = Per(E;A×[0,R]) . By the very definition of perimeter it holds Per(E, A×[0,R]) = inf_(φ_i)_i{lim inf_i→∞∫_0^R ∫_A lip φ_i (t,x) _X ṭ} , where the infimum is taken over all sequences (φ_i)_i ⊂LIP_loc(A×[0,R]) such that φ_i →χ_E in L^1_loc(A×[0,R]). We are going to prove (<ref>) and the first part of the statement will follow immediately. Step 1. Let us start by showing the inequality R Per(F;A) ≥Per(E;A×[0,R]) . Let (ψ_i)_i ⊂LIP_loc(A) be a competitor for the perimeter of F in A, i.e. ψ_i →χ_F in L^1_loc (A,_X) and all the functions ψ_i are locally Lipschitz. Define ϕ(t,x) := ψ(x) for 0≤ t≤ R and x ∈ A. Then, by Fubini's Theorem, {ϕ_i}_i is a competitor for the perimeter of E in A× [0,R]. Therefore, Per(F;A) = inf_(ψ_i)_i{lim inf_i→∞∫_A lip ψ_i (x) _X} = 1/Rinf_(ψ_i)_i{lim inf_i→∞∫_0^R ∫_A lip ϕ^(t)_i (x) _X ṭ} ≥1/Rinf_(φ_i)_i{lim inf_i→∞∫_0^R ∫_A lip φ_i (t,x) _X ṭ} = 1/RPer (E; A× [0,R]) , where the inequality follows from the fact that, on the right hand side, we are taking the infimum over a larger class. Step 2. We prove the opposite inequality in (<ref>). Let us fix ε>0. There exists a sequence (φ_i)_i ⊂LIP_loc(A×[0,R]) with φ_i →χ_E in L^1_loc(A×[0,R]) such that lim inf_i→∞∫_0^R ∫_A lip φ_i (t,x) x̣ ṭ≤Per (E; A× [0,R]) + ε . It is straightforward to check that lip φ_i^(t) (x) ≤lip φ_i (t,x) for every (t,x) ∈× X. Moreover, the sequence (φ_i^(t))_i is a competitor for the variational definition of the perimeter of F in A for ℒ^1-almost every t, by the coarea formula. Therefore, by Fatou's lemma, R Per(F;A) ≤∫_0^R lim inf_i→∞∫_A lip φ^(t)_i (x) _X ṭ ≤lim inf_i→∞∫_0^R ∫_A lip φ_i^(t) (x) _X ṭ ≤lim inf_i→∞∫_0^R ∫_A lip φ_i(t,x) _X ṭ≤Per(E;A×[0,R]) + ε . Since ε > 0 was arbitrary, we conclude.
http://arxiv.org/abs/2307.07457v1
20230714163649
Structured Pruning of Neural Networks for Constraints Learning
[ "Matteo Cacciola", "Antonio Frangioni", "Andrea Lodi" ]
cs.LG
[ "cs.LG", "cs.AI", "math.OC" ]
Matteo Cacciola (CERC, Polytechnique Montréal, Montréal, Canada), Antonio Frangioni (University of Pisa, Pisa, Italy), Andrea Lodi (Cornell Tech and Technion – IIT, New York, USA)

In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity across diverse applications, including cancer treatment, algorithmic configuration, and chemical process optimization. In this domain, the combination of ML and OR often relies on representing the ML model output using Mixed Integer Programming (MIP) formulations. Numerous studies in the literature have developed such formulations for many ML predictors, with a particular emphasis on Artificial Neural Networks (ANNs) due to their significant interest in many applications. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve, thereby impeding scalability. In fact, the ML community has already introduced several techniques to reduce the parameter count of ANNs without compromising their performance, since the substantial size of modern ANNs presents challenges for ML applications as it significantly impacts computational efforts during training and necessitates significant memory resources for storage. In this paper, we showcase the effectiveness of pruning, one of these techniques, when applied to ANNs prior to their integration into MIPs. By pruning the ANN, we achieve significant improvements in the speed of the solution process. We discuss why pruning is more suitable in this context compared to other ML compression techniques, and we identify the most appropriate pruning strategies. To highlight the potential of this approach, we conduct experiments using feed-forward neural networks with multiple layers to construct adversarial examples. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances. Keywords: Artificial Neural Networks, Mixed Integer Programming, Model compression, Pruning

§ INTRODUCTION The concept of embedding learned functions inside Mixed Integer Programming (MIP) formulations, also known as “Learning-Symbolic Programming” or “Constraint Learning”, has gained attention in recent literature <cit.>. Furthermore, there has been an increase in the availability of tools that automatically embed commonly used predictive models into MIPs <cit.>. These techniques and tools are especially valuable when employing ML models for predictions and utilizing OR methods for decision making based on those predictions. Unlike the traditional two-stage approaches <cit.>, embedding the predictive model within the decision-making process in an end-to-end optimization framework has been shown to yield superior results. Examples of applications are automatic algorithmic configuration <cit.>, adversarial examples identification <cit.>, cancer treatments development <cit.>, and chemical process optimization <cit.>. A very relevant case is when the learned function is an ANN, since ANNs are the state-of-the-art models for numerous essential ML tasks in Computer Vision and Natural Language Processing. Consequently, there have been efforts in the literature to automate the embedding of ANNs <cit.>.
For instance, <cit.> makes it possible to incorporate feed-forward architectures with ReLU activation functions into MIPs, utilizing the output of the ANN in the objective function. The maturity of the field is demonstrated by the fact that one of the leading commercial MIP solvers, Gurobi, recently released a package (https://github.com/Gurobi/gurobi-machinelearning) that allows feed-forward ReLU networks to be part of MIP formulations, with compatibility for popular ML packages such as PyTorch, Keras, and scikit-learn. Unfortunately, even when we consider simple architectures that have only ReLU activation functions, the representation of an ANN in a MIP will introduce binary variables, due to the combinatorial nature of the ReLU function. Additionally, the number of binary variables and the associated constraints that need to be added to the MIP is proportional to the number of parameters in the ANN. Deep Learning has witnessed a clear trend towards developing architectures with a very large number of parameters, which contributes to ANNs' high predictive power and state-of-the-art performance in various applications. This, however, poses issues in terms of training costs, storage requirements, and prediction time. Consequently, numerous methods, known as model compression techniques, have been developed to reduce the size of ANNs without compromising their predictive capability. Yet, the large size of an ANN presents an even more significant scalability challenge when it is embedded into a MIP, due to the potentially exponential growth of the latter's computational cost with its size (and, in particular, with the number of binary variables). Using a state-of-the-art network in a MIP formulation may easily result in an overwhelming number of binary variables and constraints, rendering the models unsolvable within a reasonable time using any available solver. In this paper, we demonstrate that pruning methods, originally developed to address specific ML challenges, can be effectively applied in the context of embedding ANNs into MIPs. Specifically, we utilize a structured pruning technique that we previously developed to significantly accelerate the solution time for adversarial example identification problems using Gurobi. The remainder of the paper is organized as follows: Section <ref> provides a formal definition of the problem concerning the embedding of learned functions in MIP formulations. Additionally, it presents one of the existing formulations from the literature specifically designed for embedding ANNs. In Section <ref>, we introduce pruning techniques and we describe the specific pruning method employed in our experiments. Section <ref> focuses on the benefits of pruning when incorporating ANNs into MIPs. We discuss the reasons why pruning is advantageous in this context and provide insights on selecting appropriate pruning strategies. Finally, in Section <ref> we present numerical results to empirically validate that pruning can effectively speed up the solution process of MIPs with embedded ANNs. § EMBEDDING LEARNED FUNCTIONS IN MIXED INTEGER PROGRAMS We consider a general class of (Mixed-Integer) Nonlinear Programs with “learned constraints”. That is, the formulation of the problem would need to involve some functions g_i(x), i = 1, …, k, defined on the variable space of the optimization decisions, that are “hard” in the sense that no compact algebraic formulation, and not even an efficient computation oracle, is available.
Yet, (large) data sets are available, or can be constructed, of outputs y̅ = g_i(x̅) for given x̅. These datasets can be used in several existing ML paradigms (Support Vector Machines, Decision Trees, ANNs, …) to construct estimates g̅_i(x) of each g_i(x), i = 1, …, k, with a workable algebraic description that can then be inserted into an optimization model. Thus, we consider the class of Mathematical Programs with Learned Constraints (MPLC)

min  cx + by
s.t. y_i = g̅_i(x)   i = 1, …, k
     A x + B y ≤ d
     x ∈ X

Linearity in (<ref>) and (<ref>) is not strictly necessary in our development, but it is often satisfied in applications (see, e.g., <cit.>) and we assume it for notational simplicity. Indeed, when X in (<ref>) contains integrality restrictions on (some of) the x variables, the class already contains Mixed-Integer Linear Programs (MILP), whose huge expressive power does not need to be discussed. Of course, a significant factor in the complexity of (<ref>)–(<ref>) is the algebraic form of the g̅_i(x), which impacts the class of optimization problems it ultimately belongs to. A significant amount of research is already available on formulations for embedding feedforward ANNs, in particular with ReLU activations, in a MIP context <cit.>. In these formulations, the neural network is constructed layer by layer. Denoting the input vector at layer ℓ as o_ℓ, and the corresponding weight matrix and bias vector as W_ℓ and b_ℓ, respectively, one has o_ℓ+1 = max( 0 , W_ℓ o_ℓ + b_ℓ ), which can be expressed in MI(L)P form as

v^+_ℓ - v^-_ℓ = W_ℓ o_ℓ + b_ℓ
0 ≤ v^+_ℓ ≤ M^+ z_ℓ
0 ≤ v^-_ℓ ≤ M^- ( 1 - z_ℓ )
o_ℓ+1 = v^+_ℓ
z_ℓ ∈ { 0 , 1 }^m

Constraints (<ref>) and (<ref>) ensure that both v^+_ℓ and v^-_ℓ are (component-wise) nonnegative and, since the z_ℓ are (component-wise) binary, that at most one of them is positive. Consequently, constraint (<ref>) forces the relations v_ℓ^+ = max{ W_ℓ o_ℓ + b_ℓ , 0 } and v_ℓ^- = -min{ W_ℓ o_ℓ + b_ℓ , 0 } (of course, constraint (<ref>) is only there to make apparent what the output of the layer is). Denoting by n the number of neurons in layer ℓ and by m the number of neurons in layer ℓ + 1, system (<ref>)–(<ref>) contains m binary variables, n+2m continuous variables, and 3m constraints. A significant aspect of this model (fragment) is the use of the big-M constraints (<ref>) and (<ref>). It is well known that the choice of the values of the constants M can significantly impact the time required to solve an instance. Indeed, the Optimized Big-M Bounds Tightening (OBBT) method has been developed in <cit.> to find effective values for these constants. As previously mentioned, the state-of-the-art solver Gurobi now includes an open-source Python package that automatically embeds ANNs with ReLU activation into a Gurobi model. Additionally, starting from the 10.0.1 release, Gurobi is able to detect whether a model contains a block of constraints representing the relationship y = g(x), where g(·) is an ANN, and then apply the aforementioned OBBT techniques to enhance the solution process. Despite the substantial improvement with respect to the previous version, the capabilities of Gurobi to solve these MIPs are still limited: when embedding an ANN into a MIP, Gurobi is not able to solve the problem in a reasonable time unless the number of layers and neurons in the ANN is small.
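To make the layer-wise encoding above concrete, the following is a minimal gurobipy sketch of the big-M system for a single ReLU layer. It is only an illustration of the formulation, not the implementation used by the gurobi-machinelearning package; the function name add_relu_layer and the externally supplied bounds M_plus and M_minus are assumptions.

```python
import gurobipy as gp
from gurobipy import GRB

def add_relu_layer(model, o_in, W, b, M_plus, M_minus, name=""):
    """Encode o_out = max(0, W @ o_in + b) with the big-M system above; returns o_out."""
    m_out = W.shape[0]
    v_pos = model.addMVar(m_out, lb=0.0, name=f"vpos{name}")      # positive part of W o + b
    v_neg = model.addMVar(m_out, lb=0.0, name=f"vneg{name}")      # negative part of W o + b
    z = model.addMVar(m_out, vtype=GRB.BINARY, name=f"z{name}")   # ReLU active/inactive indicator
    model.addConstr(v_pos - v_neg == W @ o_in + b)                # affine pre-activation
    model.addConstr(v_pos <= M_plus * z)                          # v+ can be nonzero only if z = 1
    model.addConstr(v_neg <= M_minus * (1 - z))                   # v- can be nonzero only if z = 0
    return v_pos                                                  # o_{l+1} = v+
```

Chaining one such call per layer, with the numpy arrays W and b taken from the trained network and the input variables created separately with appropriate bounds, rebuilds the whole network inside the model; how tight M_plus and M_minus are is exactly what the OBBT procedure mentioned above tries to improve.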
§ ARTIFICIAL NEURAL NETWORKS PRUNING As mentioned in the introduction, the size of state-of-the-art ANNs has been growing exponentially over the years. While these models deliver remarkable performance, they come with high computational costs for training and inference, as well as substantial memory requirements for storage. To address this issue, various techniques have been developed to reduce these costs without significantly compromising the predictive power of the network. One such technique is pruning, which involves reducing the ANN size by eliminating unnecessary parameters. Consider for instance a linear layer with input x_inp, output x_out, and weight and bias tensors W and b, i.e., x_out = W x_inp + b. Thus, pruning entails removing certain entries from W or b. That is, pruning, say, the parameter W_1,1 results in the first coordinate of x_inp being ignored in the scalar product when computing the first coordinate of x_out. Pruning individual weight entries can offer some advantages, but it is generally suboptimal. Since most of the computation is performed on GPUs, there is little computational benefit unless entire blocks of computation, such as tensor multiplications, are removed. Removing entire structures of the ANN is known as structured pruning, in contrast to unstructured pruning, which eliminates single weights. In the example of the linear layer, structured pruning would aim to remove entire neurons by deleting rows from the W tensor (along with the corresponding b entry in most cases). Figures <ref>, <ref>, and <ref> illustrate the difference between these two pruning techniques. The literature on pruning techniques for neural networks is vast and encompasses a wide range of approaches. One simple and commonly used method is magnitude-based pruning, which involves removing parameters with small magnitudes. This was first introduced in <cit.> and has been widely adopted since. However, more sophisticated strategies have also been proposed, such as Bayesian methods <cit.>, combinations of pruning with other compression techniques <cit.>, and zero accuracy drop pruning <cit.>. A relevant subset of pruning techniques uses a regularization term to enforce sparsity in the weight tensors. It is common practice in Machine Learning to add a regularization term R(w) to the standard loss function L(X,Y,w), where w is the vector containing the ANN's parameters and (X,Y) is the training set. Usually, R(w) penalizes the magnitude of the parameters (e.g., R(w) = ||w||_2^2) and it is known to improve the generalization performance of the model. If the form of R(w) is chosen carefully, e.g., R(w) = ||w||_1, it can also lead to a sparse parameter vector w. When a network parameter is zero, it can typically be removed without changing the model output for any given input. Hence, if R(w) is chosen appropriately to induce all the weights of some neurons to be zero, then such neurons can be removed from the network. Many regularization terms have been proposed both for structured and unstructured pruning, including but not limited to the l_1 norm, the BerHu term <cit.>, group lasso, and l_p/l_q norms <cit.>.
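To make the structured/unstructured distinction for the linear-layer example above concrete (the referenced figures are not reproduced here), the following PyTorch sketch contrasts zeroing individual weights with removing whole neurons; the magnitude thresholds are purely illustrative.

```python
import torch
import torch.nn as nn

lin = nn.Linear(784, 100)

# Unstructured pruning: zero out individual entries of W (same shape, sparser tensor).
with torch.no_grad():
    lin.weight.mul_((lin.weight.abs() > 1e-2).float())

# Structured pruning: drop entire neurons, i.e. whole rows of W and entries of b.
keep = lin.weight.abs().sum(dim=1) > 1e-3          # neurons whose weights are not all ~0
smaller = nn.Linear(784, int(keep.sum()))
with torch.no_grad():
    smaller.weight.copy_(lin.weight[keep])
    smaller.bias.copy_(lin.bias[keep])
# The following layer must drop the corresponding input columns as well.
```

Only the second variant actually shrinks the MIP: each dropped neuron removes one binary variable, the associated continuous variables, and the big-M constraints of the layer-wise encoding sketched earlier.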
§.§ The Structured Perspective Regularization Term In the literature, the majority of pruning techniques rely on heuristics to determine the impact of removing a parameter or a structure from the ANN. This trend persists in recent works <cit.>, including methods that still utilize simple magnitude-based criteria <cit.>. Only a few techniques attempt to develop a theoretically-grounded methodology <cit.>, and these methods do not primarily focus on structured pruning. In light of this, a pruning technique was developed in <cit.> that is motivated by strong theoretical foundations and specifically addresses structured pruning. In <cit.>, the pruning problem is addressed by starting with a naïve exact MIP formulation and then deriving a stronger formulation by leveraging the Perspective Reformulation technique <cit.>. Analogously to what is done in <cit.> for individual variables rather than groups of them, an efficient way to solve the continuous relaxation of this problem is obtained by projecting away the binary variables, resulting in a problem equivalent to standard ANN training with the inclusion of the new Structured Perspective Regularization (SPR) term

z(W;α,M) =
  2√((1-α)α) ||W||_2,                           if ||W||_∞/M ≤ √(α/(1-α)) ||W||_2 ≤ 1 ;
  α (M/||W||_∞) ||W||_2^2 + (1-α) ||W||_∞/M,    if √(α/(1-α)) ||W||_2 ≤ ||W||_∞/M ≤ 1 ;
  α ||W||_2^2 + (1-α),                          otherwise ;

where M is a constant, α is a tunable hyper-parameter and W is the weight tensor corresponding to the structure we want to prune (e.g., the weight matrix of a neuron). That is, in order to prune the ANN one trains it using as loss function L(X,Y,W) + λ ∑_{j∈𝒩} z(W_j;α,M), where W_j is the weight matrix corresponding to neuron j and 𝒩 is the set of neurons of the ANN. Coupled with a final magnitude-based pruning step, this approach has been shown to provide state-of-the-art pruning performance thanks to the unique and interesting properties of the SPR term. This potentially comes at the expense of extra hyperparameter tuning effort for α and M, which is unlikely to be a major issue in this application since ANNs that can be embedded in a MILP, even after pruning, cannot possibly have the extremely large size common in applications like Computer Vision and Natural Language Processing, and therefore their training and tuning time is unlikely to be a major factor.
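As an illustration only, the piecewise definition of z(W;α,M) above can be transcribed directly into PyTorch, one neuron's weight vector at a time. This sketch assumes α ∈ (0,1) and M > 0, and it is not the authors' reference implementation.

```python
import torch

def spr_term(W, alpha, M):
    """Structured Perspective Regularization z(W; alpha, M) for one neuron's weights W."""
    l2 = W.norm(p=2)
    linf = W.abs().max()
    y_unc = (alpha / (1.0 - alpha)) ** 0.5 * l2      # unconstrained minimizer of the perspective
    low = linf / M                                   # lower bound ||W||_inf / M
    if low <= y_unc <= 1.0:                          # first branch
        return 2.0 * ((1.0 - alpha) * alpha) ** 0.5 * l2
    if y_unc <= low <= 1.0:                          # second branch
        return alpha * (M / linf) * l2 ** 2 + (1.0 - alpha) * linf / M
    return alpha * l2 ** 2 + (1.0 - alpha)           # otherwise
```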
§ PRUNING AS A SPEED-UP STRATEGY As previously mentioned, in the context of embedding ANNs in MIPs, scalability becomes a significant challenge: the number of (binary) variables and constraints grows proportionally with the number of parameters in the embedded ANN, while the cost of solving the MI(L)P may well grow exponentially in the number of (binary) variables. It therefore makes even more sense to employ the ML compression techniques that are used to reduce the computational resources required by ANNs. Many compression techniques other than pruning exist in the ML literature. However, not all of them are effective in the context of MIPs with embedded ANNs. For instance, quantization techniques aim to train networks that have weight values in a discrete (relatively small) subset of ℝ <cit.>. One possibility is to directly implement the ANN using a lower-bit number format than the standard Float-32 one <cit.>. Quantization is a very popular technique in ML since it can decrease both backward- and forward-pass computational effort, at the same time reducing the memory footprint of the resulting model. However, in the context of MIPs, quantization does not bring any advantage, since the problem resulting from embedding a quantized ANN is not significantly different, from an Operations Research point of view, from the one where a non-quantized model has been embedded. Indeed, weights are coefficients in (<ref>)–(<ref>), and having them in a small set of (integer) values may at most have a minor impact on the solution time. Other methods, like low-rank decomposition and parameter-sharing techniques <cit.>, modify the internal operations of layers; this means that they cannot directly be used in this context without the development of new, specific formulations and new algorithms that can automatically detect them in a MIP problem. By contrast, structured pruning techniques perfectly fit the needs of embedding an ANN in a MIP. Even unstructured pruning may have some impact, since when a weight is removed (i.e., set to zero) the corresponding entry in the MIP constraint matrix is also set to zero, leading to a sparser constraint matrix. However, entirely removing variables or constraints is more effective; in the case of a feed-forward ANN, this corresponds to performing structured pruning on neurons, as visualized in Figures <ref>, <ref>, <ref>, and <ref>. It is interesting to remark that, for ML purposes, pruning only brings advantages at inference time, and reducing the number of parameters only reduces the computational cost of the forward pass linearly. By contrast, removing neurons of a network brings an exponential speed-up in the time required to solve the resulting MIP formulations. Hence, pruning is arguably more relevant for OR than for ML, despite having been developed in the latter area. In particular, structured pruning (as opposed to unstructured pruning) is crucial in that it allows using existing automatic structure detection algorithms, such as the one implemented in Gurobi, while unstructured pruning is very likely to result in a different structure of the constraint matrix that would not be recognizable, thereby preventing the use of the OBBT techniques that are crucial in this context. Based on the considerations above, we propose to modify the existing pipeline for embedding ANNs in MIPs. After training the ANN (or during training, depending on the technique used), we prune the model before embedding it in the MIP formulation of the problem at hand. This approach either reduces the solution time of the MIP with the same generalisation performance, or, possibly, allows one to include larger, therefore more expressive ANNs, capable of achieving higher accuracy while still maintaining the ability to solve the resulting MIPs within a reasonable time. In particular, we will employ the Structured Perspective Regularization, i.e., we train the ANN by adding the SPR term to the loss, which leads to weight tensors with structured sparsity. After fixing to zero (i.e., removing) neurons whose weights are all below a fixed threshold, we fine-tune the network with a standard loss for a few more epochs (see <cit.> for details). The obtained ANN is then embedded in the MIP, and it will require the addition of fewer variables and constraints with respect to its unpruned counterpart.
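A compact sketch of this pipeline, reusing the spr_term function above, could look as follows. The hyper-parameters, the thresholding step, and the use of gurobi-machinelearning's add_predictor_constr entry point for the final embedding are illustrative assumptions rather than the exact choices made in the experiments below.

```python
import torch.nn as nn
import torch.nn.functional as F
# spr_term as defined in the previous sketch

def spr_loss(net, x, y, lam, alpha, M):
    """Task loss plus one SPR term per neuron of every linear layer of an nn.Sequential net."""
    reg = sum(spr_term(row, alpha, M)
              for layer in net if isinstance(layer, nn.Linear)
              for row in layer.weight)
    return F.cross_entropy(net(x), y) + lam * reg

# Proposed pipeline:
# 1) train the network minimizing spr_loss;
# 2) remove the neurons whose weight rows are (numerically) zero, as sketched earlier;
# 3) fine-tune the smaller network for a few epochs with the standard loss only;
# 4) embed the pruned network in the MIP of interest, e.g. via the
#    gurobi-machinelearning entry point:
#    from gurobi_ml import add_predictor_constr
#    add_predictor_constr(gurobi_model, pruned_net, input_vars, output_vars)
```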
Denoting with k this coordinate and with h the coordinate with the second highest value of g̅(x), the problem we want to solve is max y_h-y_k s.t. y =g̅(x̅) Δ≥ x-x̅ Δ≥x̅-x x̅∈ℝ^n where Δ is a given distance bound. Clearly, (<ref>)–(<ref>) is a special case of the MPLC class (<ref>)–(<ref>). In particular, the constraint (<ref>) encodes an ANN function, so it needs to be handled with the techniques we presented in Section <ref>. We selected this problem since it is of great interest to ML researchers. Furthermore, it can in principle be relevant to test the robustness of networks of any size, and therefore it allows to explore the boundaries of what MPLC approaches (with or without pruning) can achieve. §.§ General setup and notation To test the effectiveness of our pruning techniques, we ran some experiments on network robustness using the MNIST dataset. We used the same settings of the notebook available at <https://github.com/Gurobi/gurobi-machinelearning/blob/main/notebooks/adversarial/adversarial_pytorch.ipynb>, where formulation (<ref>)-(<ref>) is solved with Δ=5. We trained the ANNs using the Pytorch SGD optimizer with no weight decay and no momentum. We used 128 as batch size and we trained the network for 50 epochs with a constant learning rate equal to 0.1. All the networks are Pytorch sequential models containing only Linear and ReLU layers. For the pruned networks, we performed a (limited) 3 by 3 grid search to choose the λ factor that multiplies the SPR term and the α hyper-parameter needed in its definition (M is automatically set as in <cit.>). After 50 training epochs, the model is fine-tuned for 10 epochs without using any regularization. Note that the objective of the grid search is to find the smallest network that keeps basically the same out-of-sample accuracy of the original one, and better results could conceivably be obtained by employing end-to-end techniques that take into account the optimization process in the computation of the loss <cit.>. In tables <ref> and <ref>, the first column reports the network architecture of the used ANN and if pruning was used, while the Δ parameter value of (<ref>)-(<ref>) can be found in the first row. We compare the result of the baseline approach (i.e., without pruning) and the result obtained using the pruning method with the best hyper-parameters found. We report the validation accuracy (in percentage), the time needed by Gurobi to solve the obtained MIP (in seconds), and the number of branch-and-bound nodes explored during that time. Additionally, for the pruned networks, we report the value of λ and α and the architecture of the network after pruning. When referring to a network architecture, the terms LxN refer to a sequence of L layers each of them containing N neurons. When multiple terms follow each other, it indicates their order in the network. For example, 2x20-3x10 stands for a network that starts with 2 layers of 20 neurons and continues with 3 layers of 10 neurons. Each experiment is repeated 3 times and a time limit of 1800 seconds is given to Gurobi. §.§ Detailed results Table <ref> shows the results using Δ=5 on 4 different architectures with an increasing number of neurons and layers. When pruning small architectures, like the 2x50 and 2x100 networks, pruning the ANN results in at least halving the time used by Gurobi. Moreover, the accuracy of the pruned models is higher than the baseline, this is, likely, because pruning has also a regularization effect. 
The results on the 2x200 architecture show that the baseline is not able to solve the problems in the given time for two out of three runs. Instead, our method always leads to MIPs that are easily solved by Gurobi while maintaining the same accuracy as the baseline. Finally, we report the results using the 6x100 networks, significantly bigger than the previous ones. The baseline, once again, cannot solve two out of the three problems in the given time limit. Instead, our method succeeds in all cases, at the cost of losing a little bit of accuracy (0.3 percent in the best case). As a last remark, we notice that for all the MIPs we solved for the unpruned networks, no counterexample existed in the given neighborhood (i.e., the optimal value of (<ref>)-(<ref>) is negative). This remains true for the corresponding pruned counterparts, confirming that the pruned and unpruned versions of the MIPs are qualitatively very similar. §.§ Investigating the quality of the solutions To better validate the quality of our results, we solved the adversarial problem (<ref>)-(<ref>) again using Δ=20 and employing the same networks trained in the previous experiments. This was aimed at finding adversarial examples in the given region to better understand the effect of pruning on the resulting MIP. We report the results in Table <ref>, where the “accuracy" and “pruned architecture" columns have been removed since they are the same as in the previous table. For all the experiments, a counter-example existed in the given region, and in the last column of Table <ref>, named “Found", we report whether Gurobi was able to find one adversarial example in the given time limit. Unsurprisingly, for all the MIPs corresponding to pruned networks, Gurobi was able to find an adversarial example in a time considerably shorter than the 1800-second limit. Moreover, all the adversarial examples obtained using a pruned network were also adversarial for the unpruned counterpart with the same starting architecture. This empirically shows that, in our setting, pruning can even be used to solve the adversarial example problem for the unpruned counterpart, and it is again a good indication that pruning does not heavily affect the resulting MIP. This is in accordance with the ML literature, where there is a good consensus that not-too-aggressive pruning of ANNs does not significantly impact their robustness <cit.>, and therefore the existence (or not) of the counter-example in our application. Finally, the times reported in Table <ref> show that the speed-up is still very significant even with the new value of Δ and that in some cases the baseline is not able to find any adversarial example. We conclude this section by noting that additional experiments, which are not included in this paper for the sake of brevity, have shown that a high setting of the OBBT parameter <cit.> of Gurobi is crucial to obtain good performance both for pruned and unpruned instances, confirming the importance of structured pruning. § CONCLUSIONS AND FUTURE DIRECTIONS This paper has demonstrated the effectiveness of pruning artificial neural networks in accelerating the solution time of mixed-integer programming problems that incorporate ANNs. The choice of the sparsity structure for pruning plays a crucial role in achieving significant speed-up, and we argued that structured pruning is superior to unstructured pruning.
Further research in this area can focus on gaining a deeper understanding of which sparsity structures are most suitable for improving the solution time of MIPs. Exploring the trade-off between pruning-induced sparsity and solution quality is another interesting avenue for future investigations. By advancing our understanding of pruning techniques and their impact on MIPs, we can enhance the efficiency and scalability of embedding ANNs in optimization problems. § ACKNOWLEDGMENTS The authors are grateful to Pierre Bonami for his generous and insightful feedback. This work has been supported by the NSERC Alliance grant 544900-19 in collaboration with Huawei-Canada.
http://arxiv.org/abs/2307.04242v1
20230709184209
Reconstructing Air Shower Parameters with MGMR3D
[ "P. Mitra", "O. Scholten", "T. N. G. Trinh", "S. Buitink", "J. Bhavani", "A. Corstanje", "M. Desmet", "H. Falcke", "B. M. Hare", "J. R. Hörandel", "T. Huege", "N. Karastathis", "G. K. Krampah", "K. Mulrey", "A. Nelles", "H. Pandya", "S. Thoudam", "K. D. de Vries", "S. ter Veen" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.IM" ]
Kapteyn Institute, University of Groningen, Groningen, The Netherlands; Astrophysical Institute, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium; Vrije Universiteit Brussel, Dienst ELEM, Brussels, Belgium; Nikhef, Science Park Amsterdam, Amsterdam, The Netherlands; Department of Astrophysics/IMAPP, Radboud University Nijmegen, Nijmegen, The Netherlands; CWI, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands; TU/e, Eindhoven University of Technology, Eindhoven, The Netherlands; Netherlands Institute for Radio Astronomy (ASTRON), Dwingeloo, The Netherlands; Max-Planck-Institut für Radioastronomie, Bonn, Germany; Physics and Astronomy, University of California, Irvine, CA 92697-4575, U.S.A.; Institut für Astroteilchenphysik, KIT, P.O. Box 3640, 76021, Karlsruhe, Germany; Particles and Fundamental Interactions Division, Institute of Experimental Physics, University of Warsaw; Physics Education Department, School of Education, Can Tho University, Campus II, 3/2 Street, Ninh Kieu District, Can Tho City 94000, Vietnam; Erlangen Center for Astroparticle Physics (ECAP), Friedrich-Alexander-Universität Erlangen-Nürnberg, Nikolaus-Fiebiger-Straße 2, 91058 Erlangen, Germany; Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany; Khalifa University, P.O. Box 127788, Abu Dhabi, United Arab Emirates

Measuring the radio emission from cosmic ray particle cascades has proven to be a very efficient method to determine their properties such as the mass composition. Efficient modeling of the radio emission from air showers is crucial in order to extract the cosmic-ray physics parameters from the measured radio emission. MGMR3D is a fast semi-analytic code that calculates the complete radio footprint, i.e. intensity, polarization, and pulse shapes, for a parametrized shower-current density and can be used in a chi-square optimization to fit given radio data. It is many orders of magnitude faster than its Monte Carlo counterparts. We provide a detailed comparative study of MGMR3D against Monte Carlo simulations, where, with improved parametrizations, the reconstructed shower maximum X_max is found to be in very strong agreement, with a small dependency on the incoming zenith angle of the shower. Another interesting feature we observe with MGMR3D is sensitivity to the shape of the longitudinal profile in addition to X_max. This is achieved by probing the distinguishable radio footprint produced by a shower having a different longitudinal profile than usual. Furthermore, for the first time, we show the results of reconstructing shower parameters for LOFAR data using MGMR3D, obtaining an X_max resolution of 22 g/cm^2 and an energy resolution of 19%.

Reconstructing Air Shower Parameters with MGMR3D
S. ter Veen
August 12, 2023
================================================

§ INTRODUCTION When a high-energy cosmic particle impinges on the atmosphere of Earth, it creates an extensive air shower (EAS). The electrons and positrons in the plasma cloud at the shower front drift in opposite directions due to the Lorentz force caused by the geomagnetic field. Due to this acceleration by the Earth's magnetic field and deceleration in interactions with air molecules, a time-varying transverse current is created.
This varying current emits radio waves <cit.>, where the intensity pattern on the ground, the intensity footprint, depends on the variation of the current with height. There is another subdominant contribution to the radiation from the excess of negative charge accumulated at the shower front, known as the 'Askaryan effect' <cit.>. The penetration depth where the particle number reaches its maximum, X_max, strongly depends on the specifics of the first interaction, which strongly correlates with the mass of the cosmic-ray primary. Different values of X_max result in differences in the longitudinal variation of the currents, which are reflected in the intensity of the radio footprint. Thus X_max can be reconstructed on the basis of the footprint, which allows for a determination of the mass composition of cosmic rays <cit.>. The modeling of radio emission from EAS is generally performed with either microscopic or macroscopic formalisms. In a microscopic formalism the emission is calculated for each particle as obtained from a Monte Carlo simulation of the EAS. The coherence of the signals emerges naturally in this approach. ZHAires <cit.> and CoREAS <cit.> are the two most commonly used microscopic codes. MGMR <cit.>, EVA <cit.> and their latest successor MGMR3D <cit.> are examples of macroscopic codes. In this framework, the radiation field is derived from the Liénard-Wiechert potential <cit.>, where the four-current is parametrized. The amplitude of the four-current is explicitly split into the charge component driving the charge-excess emission and the transverse drift current generating the geomagnetic emission. One advantage of MGMR3D is that it is computationally inexpensive and produces radio profiles about four orders of magnitude faster than the Monte Carlo simulations. Another advantage is that it is fully deterministic, in the sense that one has control over the outputs by choosing exact shower parameters like the shower maximum and the shape parameters of the longitudinal profile, contrary to the inherent randomness in Monte Carlo simulations. For these reasons, MGMR3D can be used to fit a reference radio footprint and obtain the corresponding longitudinal shower parameters that best reproduce the given profile through minimization techniques. There are other, more phenomenological approaches emerging, like template synthesis <cit.> and radio morphing <cit.>, that also allow a fast calculation of the radio footprint. In MGMR3D the charge-current cloud of the air shower is parametrized, which necessarily approximates its full complexity. In particular, the dependence on the energy of the particles forming this cloud is ignored; however, as the important particles in this cloud are relativistic, this is thought to be a reasonable approximation. In a prior publication <cit.>, the parametrization and the foundation of the MGMR3D framework were introduced. In this follow-up work, we further investigate the performance of MGMR3D on ensembles of air showers and have refined the parametrization in an extensive comparative study with CoREAS. Most significantly, we have used MGMR3D to re-analyze measured data obtained with LOFAR. The MGMR3D-based analysis reproduces, within statistical significance, the results of an earlier analysis based on microscopic CoREAS calculations. MGMR3D thus offers a very CPU-efficient alternative to existing approaches for extracting shower parameters like X_max from the radio footprint, and thus the composition of the original cosmic rays.
Notably, MGMR3D is also a strong tool to map atmospheric electric fields under thunderstorms. In a separate publication <cit.> a detailed study is presented of using MGMR3D for reconstructing atmospheric electric fields during thunderstorm conditions from the radio footprint of air showers. This article is structured as follows- In Modeling we describe the improved modeling of the radiation profile. In <ref> and <ref>, comparisons between CoREAS with MGMR3D shower profiles are demonstrated, and the details of the results of fitting . We also present a correction formula to obtain the correct zenith angle dependency for as compared to CoREAS calculations. Such a correction is necessary since the penetration depth for which the coherent transverse current is maximal generally differs from the penetration depth for which the number of charged particles is maximal, . We also report a study suggesting a strong correlation between showers with nonstandard shapes of longitudinal profiles and the fit quality of MGMR3D. This indicates a novel future prospect of extracting shower parameters regarding the shape of the longitudinal profile, in addition to , with radio technique using MGMR3D. This will in the end help gather a better understanding of the mass composition and hadronic models. In Section lofardata we have shown the results of reconstructing using MGMR3D on measured LOFAR cosmic ray data and compare to the existing reconstructed with LOFAR analysis method, as well as the reconstruction of shower core and energy. § MODELING RADIO EMISSION FROM EAS Modeling The charge and current distributions that drive the radio emission from an EAS is expressed as a four-current j^μ(t,x,y,h) where μ=0 denotes the time (charge) components, and μ=x,y,z denote the space (current) components. The retarded Liénard-Wiechert potential for an observer at (t_o,x_o,y_o,z_o) in the shower plane with the retarded time t_r is A^μ(t_o,x⃗_⃗o⃗)=∫ d^3 x⃗' j^μ(t_r,x⃗')/ D , where the retarded distance is D= n√((-β t_o +h)^2 + (1-β^2 n^2)d^2) , where the distance between the observer and the point of impact of the core of the air shower is denoted by d, and the index of refraction is denoted by n. Since for a cosmic-ray air shower the particles are concentrated in a relatively flat pancake-like structure moving with relativistic speeds, the four current is parametrized as j^μ(t,x,y,h)=w(r)/r f(h,r) J^μ(t) . DefCloud The term w(r) / r in (<ref>) is the radial description of the plasma cloud, the second term f(h,r) is the current density of the shower front. These two are normalised such that J^μ(t) is the charge and current for a fixed time integrated over the complete plasma of the EAS. The radial dependence of the transverse current is parametrized as w(r)= N_w ζ(ζ + 1)^-2.5, with ζ=r/R_0. The function w(r) × r corresponds to the NKG function <cit.> for a fixed shower age s=2<cit.>. These parametrizations were studied and optimized by comparing to the results of CONEX-MC<cit.>. The definition of R_0 is similar to the Molière radius, but not the same as in this context it is a scaling parameter that describes the radial current profile and thus is referred to as radiation radius. In the original formulation of MGMR3D the radiation radius was taken to be a constant. We observed that the optimum value for R_0 depends on the distance from to the shower core (D_), while fitting R_0 for different showers. 
We find that for distances smaller than 5 km R_0 is proportional to distance and reaches saturation with R_0=50 m for larger distances, independent of zenith angle. This is shown in mol_radius. This linear dependency at smaller D_ is now included in MGMR3D as R_0= 10 D_. The current density at a distance h behind the shower front is parametrized as f(h,r)=N_f η/e^√(η)+1. where N_f is a normalisation constant. The parameter λ, folded in as η=h/ λ, accounts for the pancake thickness scaling and has a radial dependence. The radial dependence of the pancake thickness is described in a way that it is constant near the shower axis and increases linearly at distances away from the shower axis where particles tend to have less energy and thus lag behind. The parametrizations for the radial and pancake function were also studied and optimized with comparison to the results of CONEX-MC <cit.>. The functions w and f that depends on the distance to the shower axis are normalized according to ∫_0^∞ w(r) dr=1 and ∫_0^∞ f(h,r) dh=1 ∀ r. §.§ Parametrization of the currents ParamCurr The original parametrization of the charge cloud in MGMR3D, as described in <cit.>, was based on CONEX-MC simulations <cit.>. There were however important inconsistencies in the extracted shower parameters as well as in the observed radiation profile, when compared to CoREAS results. A reason behind these differences is that in the parametrization the energy distribution of the particles in the shower is not taken into account. Parameters like the drift velocity and charge excess are strongly dependent on the energy range of particles used to predict these averaged quantities. To mitigate these issues we revisit the parametrizations in this work by comparing the results of MGMR3D and CoREAS calculations for an ensemble of air showers. This leads to improved parametrizations, in particular for modeling the drift velocity (cf. ParamCurr) and for the longitudinal profile of the current, J^μ in (<ref>). The details of the comparison between MGMR3D and CoREAS are presented in Compare. §.§.§ Transverse current The transverse current is given by, J⃗_⊥(t_s)=N_c(X_z) u⃗_⊥(), where the transverse drift velocity is denoted as u⃗_⊥(),N_c is the number of charged particles at depth (X_z). It should be noted that the penetration depth for is only indirectly related to the penetration depth of maximum transverse current, since the factor between the two, the drift velocity, depends on air density as well as the mean energy of the particles in the shower. The drift velocity increases with increasing forces acting on the charges. This becomes particularly important for large electric fields in thunderstorm clouds, and special treatment is required so that the particles do not exceed the speed of light <cit.>. The transverse drift u⃗_⊥() is therefore expressed as u⃗_⊥()=cv⃗/√(1+v^2/v_0^2), where the parameter v_0 is adjusted to the value 0.2, and v is taken proportional to the Lorentz force. In the original parametrization used in <cit.> no dependence on air density was assumed in the parametrization of the drift velocity. We noted that a √(ρ) scaling was necessary to obtain agreement with the results of the CoREAS calculation. We thus updated the formula for the drift velocity to read v⃗(X)= c F⃗_⊥/F_t×a_t+1/-X_t/-X_t + a_t×√(ρ()/ρ()) Def-v with X_t=50 g/cm^2, F_β=250 keV/c, and a_t=3. 
F⃗_⊥ is the total transverse force acting on the particles, and for air showers when no thunderstorm is present it only consists of the Lorentz force, F⃗_⃗⊥⃗=ev⃗_s ×B⃗, where v⃗_⃗s⃗ is the velocity of the shower front, e is the elementary charge, and B⃗ is Earth's magnetic field. The second factor in (<ref>) takes into account the fact that the drift velocity depends on the penetration depth in the atmosphere, accounting for the changing mean energy of the shower particles. It should be noted that this parametrization becomes less accurate for the highest zenith angles, where an additional dependence on emission height is seen. This correction is not yet included in the code, which should therefore be used with caution when studying highly inclined showers above 60 degrees zenith angle. For the study reported in this article both simulated and recorded showers are well below this limit. The physical interpretation of the √(ρ) scaling is not trivial. Interestingly, the drift velocity has the same form as the terminal velocity due to the macroscopic drag force acting opposite to the relative motion of any object moving in a fluid. The drag force of air is proportional to the square of the speed of the object. For a falling object in air the terminal velocity is reached when the force due to gravity balances the drag force, mg=F_D= 1/2ρ C A v^2 , with C, A, v being the drag coefficient, the area of the object, and the terminal velocity, respectively. Solving for v yields v=√((2mg/ρ CA)). The result can be generalized to situations where the object is accelerated by other forces. In the case of the electron drift velocity that would be the Lorentz force. The equivalent of the drag force is actually due to the many elastic collisions of the relativistic electron in the shower front with neutral air molecules. A relativistic electron in the shower lives roughly a microsecond (300 meters) before being stopped in a hard inelastic collision. Within that microsecond, the electron actually undergoes more than a million elastic collisions with particles in the air. While this provides an intuitive understanding of the ρ^-1/2 scaling, the assumption that an electron plasma experiences the same drag as a macroscopic object is of course not easily justified. It is worth mentioning that a similar density dependence of the electric field amplitude of the radio pulse was reported in <cit.> in a study for the radio-morphing method. §.§.§ Charge excess The charge excess in the shower is given as J_Q(z)= e N_c() ρ_c(), where e is the charge of the electron and the proportionality factor ρ_c() is defined in its most recent form as ρ_c() = J^0_Q 3 - -X_c/ + -X_c × (1 - e^--X_c/2(-X_c)) ρ()/ρ_c√(ρ()/ρ_c)Def-chxcurr where J^0_Q is a normalisation constant, ρ_c=0.06 g/cm^3, and X_c=50 g/cm^2. The first two factors in (<ref>) are inspired by comparing to the results of CONEX-MC simulations, including simulations for highly inclined showers with zenith angle 65 degrees. The last term, including the square-root dependency on density, is inspired by the treatment of the transverse current in (<ref>).
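To make the shower-front parametrization above concrete, the following minimal Python sketch (ours, not the MGMR3D source code) evaluates the radial weight w(r), the pancake profile f(h), and the saturating drift velocity. The normalisation constants are obtained numerically so that each profile integrates to unity; the unit convention for R_0 (metres versus km for the distance to the shower maximum) and the example numbers at the bottom are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import quad

def radiation_radius(d_xmax_km):
    """R_0 in metres: assumed linear in the distance to the shower maximum
    (R_0 = 10 * D[km]) below ~5 km, saturating at 50 m beyond."""
    return min(10.0 * d_xmax_km, 50.0)

def w_radial(r, R0):
    """Radial weight w(r) = N_w * zeta * (zeta + 1)^-2.5 with zeta = r/R_0,
    normalised such that the integral of w over r is one."""
    norm = quad(lambda rr: (rr / R0) * (rr / R0 + 1.0) ** -2.5, 0.0, np.inf)[0]
    zeta = r / R0
    return zeta * (zeta + 1.0) ** -2.5 / norm

def f_pancake(h, lam):
    """Current density at distance h behind the front,
    f(h) = N_f * eta / (exp(sqrt(eta)) + 1) with eta = h/lambda, normalised over h."""
    norm = quad(lambda hh: (hh / lam) / (np.exp(np.sqrt(hh / lam)) + 1.0), 0.0, np.inf)[0]
    eta = h / lam
    return eta / (np.exp(np.sqrt(eta)) + 1.0) / norm

def drift_velocity(v, v0=0.2):
    """Saturating transverse drift u_perp/c = v / sqrt(1 + v^2/v0^2), which keeps
    the drift below the speed of light for large forcing."""
    return v / np.sqrt(1.0 + (v / v0) ** 2)

# Purely illustrative numbers: 50 m from the axis, 1 m behind the front,
# pancake scale 0.1 m, shower maximum 3 km away.
R0 = radiation_radius(3.0)
print(w_radial(50.0, R0), f_pancake(1.0, 0.1), drift_velocity(0.5))
```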
§.§ Parametrization of the longitudinal profile ParamLP There are two common ways to parametrize longitudinal profile, the number of charged particles at a depth , one is the Gaisser-Hillas formula <cit.>, the other is the R,L formula in <cit.>, N^R-L_c()= N_max×(1 - R/L ( -))^R^-2 e^ - / R LDef-Nc-RL N^G-H_c()= N_max×(-X_0/ -X_0)^ -X_0/Λ e^ - /Λ Def-Nc-GH where the number of particles at the shower maximum, N_max is taken proportional to the energy of the cosmic ray, N_max= N_E E_cr . norm The constant N_E is used as a norm factor when fitting the results of MGMR3D to data. The main difference between the two parametrizations is that the parameters in (<ref>) are related to the physics of the shower such as the depth of first interaction while R and L in (<ref>) relate more directly to the rise and fall of the distribution <cit.>. These more general parametrizations provide the option to study effects of the longitudinal shape parameters other than on the radio footprint <cit.>. In principle, either of these parametrizations can be used to describe the longitudinal profiles in MGMR3D. We have used (<ref>) throughout this analysis. The intensity of the radio pulse depends on the energy of the cosmic ray which is treated as a normalisation factor, a proxy for the air shower energy, in MGMR3D when a χ^2 fit to data is performed. This normalisation factor was introduced in (<ref>). Thus, when fitting the radio footprint as generated by CoREAS simulations for showers with a fixed energy, the normalisation factor should be constant, barring shower-to-shower fluctuations. In normvsdensity, we indeed show this is approximately constant, for showers at various zenith angles. These values also have a global normalisation which is constant for all showers. § STOKES PARAMETERS AS OBSERVABLES stokes_obs We investigate the radio footprint of an air shower using Stokes parameters since these capture the complete polarization structure of the radio pulse. Because the objective of the present work is to develop a scheme for data interpretation, we construct the Stokes parameters specific for the LOFAR frequency band, between 30 – 80 MHz band. The Stokes parameters can be expressed in terms of the complex observable E_i=E_i + iÊ_i, where E_i is the electric field component in ê_ and ê_ directions which are by construction perpendicular to the propagation direction of the shower, and Ê_i is its Hilbert transformation <cit.> (in arbitrary units), as I = 1/ N∑_0^n-1( | E|^2_i, + | E|^2_i,) Q = 1/ N∑_0^n-1( | E|^2_i, - | E|^2_i,) U +iV = 2/ N∑_0^n-1( E_i, E_i,^* ) . We sum over the entire signal trace while calculating the values from CoREAS simulations. The linear-polarization angle with the -axis, ψ, can be calculated directly from the Stokes parameters as ψ=1/2tan^-1 (U/Q). The relative amount of circular polarization is given by V/I and it can be interpreted due to a time lag between the peak of the charge excess and transverse current pulses <cit.>. §.§ Noise-error estimate on Stokes parameters MGMR3D performs a fit of the input radio profile through a Levenberg-Marquardt minimization procedure <cit.>, that is based on a steepest descent method. The reduced χ^2 of the fit is defined as χ^2=1/N_ndf ∑_a,f(f_c^a-f_m^a)^2/σ_f^a^2errdef where f_c^a, and f_m^a are the different Stokes parameters calculated with CoREAS and MGMR3D respectively for antenna a, N_ndf is the number of degrees of freedom, and σ_f^a is the error on the Stokes parameter. 
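As a concrete illustration of the Stokes observables and the chi-square defined above, a short sketch follows. It assumes two band-filtered electric-field traces sampled on a common time grid; the chi-square helper simply divides by the number of data points rather than by N_ndf, and all function names are ours.

```python
import numpy as np
from scipy.signal import hilbert

def stokes_parameters(E_vB, E_vvB):
    """Stokes I, Q, U, V from the two transverse field traces (arbitrary units),
    built from the analytic signal E + i*Hilbert(E) and summed over the trace."""
    a = hilbert(np.asarray(E_vB))    # complex observable along v x B
    b = hilbert(np.asarray(E_vvB))   # complex observable along v x (v x B)
    N = len(E_vB)
    I = np.sum(np.abs(a) ** 2 + np.abs(b) ** 2) / N
    Q = np.sum(np.abs(a) ** 2 - np.abs(b) ** 2) / N
    UV = 2.0 * np.sum(a * np.conj(b)) / N
    return I, Q, UV.real, UV.imag

def polarisation_angle(Q, U):
    """Linear polarisation angle psi = 0.5 * arctan(U / Q)."""
    return 0.5 * np.arctan2(U, Q)

def reduced_chi2(stokes_model, stokes_reference, sigma):
    """Chi-square between two sets of Stokes values (e.g. MGMR3D vs. CoREAS),
    here divided by the number of points instead of N_ndf for simplicity."""
    stokes_model, stokes_reference, sigma = map(np.asarray, (stokes_model, stokes_reference, sigma))
    return np.sum(((stokes_reference - stokes_model) / sigma) ** 2) / stokes_model.size
```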
It is important to note that when we are performing a model-to-model comparison here, the numerator in (<ref>) does not have a noise contribution and the χ^2 can be << 1. For the sake of clarity we refer this as χ̃^2 throughout this paper to distinguish from standard χ^2. In the present calculations, we calculated σ_f for the comparison with CoREAS as σ_I^2 = Δ t/2( c ϵ2/Nσ_n I + 2/Nσ_n^2) = Δ_t/2(2 σ_n/N(c ϵ I_0+σ_n)) σ_Q^2 = Δ t/2( c ϵ 2/Nσ_n I + 2/Nσ_n^2 ) σ_U^2 = σ_V^2= Δ t/2(c ϵ 2/Nσ_n I + 2/Nσ_n^2) , errormodel where N is the length of the trace and σ_n is the noise fluence per sample, c, ϵ are the natural constants - velocity of light and permittivity of air in S.I. units, and Δ t is the width of the time bins. For measured cosmic ray data the value of the noise level σ_n is obtained from measuring a time window of the recorded signal trace where no significant signal is present. In the case of MGMR3D, the value is chosen such that it is a close representation of the measurement. the value is shown in mytable1. § COMPARISON TO COREAS SIMULATIONS Compare With the improved parmetrizations of the current profile as given in ParamCurr we validate the performance of MGMR3D by fitting the radio footprint of showers simulated with CoREAS to that of MGMR3D. There is a range of parameters available in the framework of MGMR3D that can be tuned to achieve a good fit. We follow the approach where generic shower parameters, based on shower generality, are taken fixed, such as those given in mytable1, while others, in particular those describing the longitudinal profile of the shower (, the shower maximum, and E, the shower energy) are fitted for each shower. CoREAS simulations are performed on a star-shaped layout of antennas with the center on the shower axis and 8 arms. Each arm contains 20 antennas, with a spacing of 25 m in the shower plane. The radio pulses are filtered between 30 – 80  MHz. The results of each CoREAS simulation for the intensity I for all antennas of the grid, is fitted with MGMR3D using a steepest descent algorithm treating and E as free parameters. In these calculations, the core position is kept fixed to the center of the grid. In later applications to LOFAR data (lofardata) the core position is also treated as a free parameter §.§ Single shower comparisons The different panels in Stokes_lowzen and Stokes_highzen show the Stokes parameters for two showers coming in at a 26^∘ and 46^∘ zenith respectively. The top panels show the Stokes parameter as a function of antenna position for both MGMR3D and CoREAS and the bottom panels show the relative difference between the two models defined as ΔI= (I_c-I_m)/σ_I. The realistic error model described in (<ref>) is used. All the plots show a common feature that the magnitude of ΔI varies with antenna positions and has zero crossings. The magnitudes of the Stokes parameters depend on the azimuthal orientations of the antennas with respect to the core. For example, along the v×B direction there is full linear polarization resulting Q/I=1. It deviates from unity for other directions, due to a small contribution from the charge-excess emission. Similarly, the circular polarization, expressed by V/I, is small and azimuth angle dependent. The Stokes parameters U and V for the two calculations are shown to agree well within 250 meters, while the differences increase at larger distances. 
These differences seem to point to an underestimate of the difference in emission heights between charge excess and transverse current radiation in MGMR3D. Stokes_highxmax shows an example of a shower with a very large X_max ≈ 950 g/cm^2, which results in a poor agreement between CoREAS and MGMR3D; such cases can be expected when the shower develops closer to the ground. Further details for such cases are discussed in simu_fit. In the rest of this work, we concentrate on reconstructing the shower maximum using Stokes I. We restrict ourselves to I as it is the Stokes parameter that can most accurately be measured experimentally, and we have also noted that adding other Stokes parameters does not lead to any significant improvement in the reconstruction of air shower parameters. §.§ Fitting the shower maximum simu_fit In this section, we report the results of reconstructing X_max with MGMR3D by fitting an ensemble of CoREAS showers. This CoREAS library was produced for each detected shower in LOFAR, where at least 25 proton and 10 iron showers are simulated with the same energy and arrival direction obtained from a preliminary reconstruction for this shower <cit.>. We have excluded showers with X_max exceeding 750 g/cm^2 because for these showers the footprint becomes extremely small and MGMR3D does not provide a good agreement with CoREAS radio profiles. The radio footprints with MGMR3D are fitted to CoREAS with X_max as a free parameter for each shower, with the arrival direction and energy the same as in CoREAS. As mentioned earlier, for CoREAS simulations the shower core positions are known, hence we do not fit the core positions. For real data, however, the core positions become important fit parameters while obtaining the radio profile that best describes the data. This is discussed in detail in the next section. We refer to the values obtained from CORSIKA as X_max^true and the reconstructed values as X_max^fit. The results are shown in xmaxfit_coreas. This considers mixed primaries with proton and iron for various showers. The error calculated for the realistic noise model given in (<ref>) is used. We have applied a quality cut based on the distance to X_max from the ground. Details of this cut are explained in the following paragraphs. The black crosses are the points that are excluded by the cut. A straight line is fit through the selected points, shown by the blue points. It is evident that there is a very strong correlation between the reconstructed values and the CoREAS truth values. The slope and intercept of the fit are 0.98 and 19, respectively. The distribution of the deviation of X_max^fit from the fitted line, denoted by Δ X', is shown in the inset histogram of xmaxfit_coreas. This shows a resolution of 9.76 g/cm^2. It is also worth mentioning that we have studied the fits on proton and iron showers separately and found no bias on primary particle type. The fit results are found to be almost identical; we have thus used combined showers for the rest of the analysis.
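The straight-line comparison described above amounts to a simple linear regression; a minimal sketch, with xmax_true and xmax_fit assumed to hold the CoREAS truth and MGMR3D fit values in g/cm^2:

```python
import numpy as np

def xmax_regression(xmax_true, xmax_fit):
    """Straight-line fit of reconstructed vs. true Xmax and the scatter of the
    residuals around that line, used here as a proxy for the Xmax resolution."""
    slope, intercept = np.polyfit(xmax_true, xmax_fit, 1)
    residuals = np.asarray(xmax_fit) - (slope * np.asarray(xmax_true) + intercept)
    return slope, intercept, np.std(residuals)
```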
The shift in from the true value is defined as Δ= ^fit - ^true. For the majority of the showers, Δ is independent of to the first order, as suggested by the near unity slope. However, we have found a dependence on the shower zenith angle, as shown in zenith_correction, which includes the same showers as in xmaxfit_coreas. We see from the plot that there are a handful of outliers, a few in the positive direction of Δ and more in the negative direction. The positive ones will be discussed in the next section. The negative outliers appear to be from showers that are developed closer to the ground. In order to obtain a clean parametrization to capture the relationship between Δ and zenith, we have used a cut on the outliers. These outliers are excluded based on a cut on distance from the core of the shower on the ground to . We have chosen a conservative cut to accept showers with distance to > 3 km in the fit that captures the trend between Δ w.r.t. zenith, shown by the red curve. The excluded points are shown in black crosses. For showers that are developed closer to the observer there are systematic differences between MGMR3D and CoREAS (also shown in an radio LDF example in Stokes_highxmax), which could be attributed to the facts that for such showers more detailed parameters, like the dependence on the distance to the shower axis of the thickness and shape of the shower front, start to become important for the radio footprint, leaving room for more fine-tuning for specific showers with MGMR3D. Another important point is that, the general emission mechanism in MGMR3D involving coherence and farfield assumptions start to become less accurate when the emission is generated close to the antennas. However, for the majority of the showers the generic approximations hold and results with MGMR3D are in good agreement with COREAS. The coefficients of the fit from zenith_correction are given in mytable2. This parametrization can be used as a correction factor to estimate the expected value from ^fit in general and is used while fitting LOFAR data to MGMR3D in section lofardata. §.§ Sensitivity to shower shape parameters-R and L It appears from zenith_correction that there are a few showers, where the fit from MGMR3D is overestimated significantly from their CoREAS truth values that are not affected by the distance to cut described in the previous section. In this section, we take a closer look at some of these cases. It is found that these outliers have significantly larger χ̃^2 values than the other CoREAS simulations for the same shower angle and energy. We have ruled out the possibility of non-convergence of the fit, by studying the χ̃^2 surface for , which showed a clear global minimum for all cases. While probing other reasons for such differences, we have found that these showers have longitudinal profiles that differ considerably from the rest of the ensemble. These differences are observed in terms of the shape parameters - R and L as described in (<ref>). It appears that for these outliers the true R and L values, obtained from fitting the CORSIKA longitudinal profiles, are quite extreme compared to their central values. A zoom of the subset of showers containing the outliers are shown in LR_extreme, with their true R and L color coded. The trend demonstrates the correlation between the high Δ with high χ̃^2 (normalised between 0-1), and extreme R and L. The CORSIKA longitudinal profile for the extreme case is shown in LR_property. 
The shower shape is wider than usual and this could indicate the presence of an energetic secondary shower. In MGMR3D the R and L parameters are fixed to central values (see mytable1)and we fit only, this can explain the large shift in predicted which arises to compensate for the difference in longitudinal profile, however, the χ̃^2 for these outliers still remains higher than the ensemble. This example clearly shows two important results. Firstly, the radio profiles are influenced by other parameters of the longitudinal profile than only . Secondly, MGMR3D is sensitive to these parameters. To extract all three parameters- R, L, and , from MGMR3D calculation requires more dedicated efforts and currently is beyond the scope of this paper. However, the outliers are only a small fraction of the total number of showers, and this would have only a small effect on the zenith based correction proposed in mytable2. § APPLICATION TO LOFAR DATA lofardata In this section we discuss various steps of applying MGMR3D to experimental data and estimate . We have used LOFAR cosmic ray data for this purpose. Currently, LOFAR provides the highest precision for the determination of with the radio technique <cit.>. The dense core of LOFAR consists of 288 low-band dipole antennas within an area with a diameter of 320 meters, known as the Superterp. The radio emission from air showers in the frequency range 30 – 80 MHz is recorded by the LOFAR low-band antennas <cit.>. An array of particle detectors, LORA, installed on the Superterp provides the trigger for the detection of the air showers  <cit.>. The usual reconstruction technique used at LOFAR is based on the production of dedicated CoREAS simulation sets for each detected air shower. The number of simulations needed to reconstruct the shower maximum is optimized with CONEX<cit.>. A set of CORSIKA simulations with proton and iron primaries is produced for each detected cosmic ray. The radio emission is simulated in a star-shaped pattern for antenna positions in the shower plane using CoREAS. For each CoREAS simulation the value of as well as the χ^2 is determined when fitting the core position to data. for a measured shower is then reconstructed by fitting a parabola to the χ^2 vs Monte Carlo contour. The latest results on LOFAR cosmic ray analysis can be found in <cit.>. While such a Monte-Carlo based approach is precise, it is compute-intensive. Thus, fast alternatives such as MGMR3D are desired, where is reconstructed in a steepest descent optimization of the parametrized radio profile to given data. The details of applying MGMR3D to data are as followed- the quantity P_ data or P_ mgmr3d, is calculated as the time integrated voltage squared over a 55 ns window centered around the pulse maximum, and is used as the observable. The error, σ_P, is estimated from the measurement of the noise level from data. This is the same procedure as used in <cit.>. This implementation is different from the previous case of fitting only to simulations where the stokes parameters, integrated over the full trace, were used as observables. The reduced χ^2 to be minimized in MGMR3D is defined as χ ^2 =1/N∑_antennas(P_ data - P_ mgmr3d (x_ core, y_ core, ) /σ_P)^2 , eq_norm and the core positions (x_core, y_core) are the free parameters of the fit. The shower energy for the MGMR3D calculation is determined from the normalization constant, see (<ref>). 
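A minimal sketch of the observable used here, the time-integrated voltage squared in a 55 ns window centred on the pulse maximum (an illustrative implementation of the definition above; the actual pipeline works on calibrated, antenna-response-corrected traces):

```python
import numpy as np

def pulse_power(trace, dt_ns, window_ns=55.0):
    """Time-integrated voltage squared in a window centred on the pulse maximum,
    i.e. the observable P used when fitting MGMR3D to measured antenna traces."""
    trace = np.asarray(trace)
    i_max = np.argmax(np.abs(trace))
    half = int(round(0.5 * window_ns / dt_ns))
    lo, hi = max(0, i_max - half), min(len(trace), i_max + half + 1)
    return np.sum(trace[lo:hi] ** 2) * dt_ns
```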
In fitting to the data we have kept the longitudinal shape parameters R, and L as well as the charge excess parameter J_0Q fixed to the values given in mytable1. Including these parameters in the fit sometimes gave rise to a poor convergence without considerably improving the fit quality. The core reconstruction from a parametrization of the radio LDF  <cit.> is used as initial guesses for the core positions, the same as was also used in the CoREAS reconstruction method. In order to fit , it is seen that starting from a small value between 300-400 g/cm^2 leads to faster convergence. The reconstructed with MGMR3D are shown in comparison to the obtained using the LOFAR reconstruction technique in xmaxfit1. The values reconstructed with MGMR3D are corrected with the zenith correction formula described in mytable2. We have also implemented the distance to based quality cut as described in simu_fit. The red line is a linear fit to the data with a slope of 0.85 and intercept 91. The shaded area is the 1-σ error on the fit. The black line is the prediction from simulations only, as discussed (cf. xmaxfit_coreas). From the comparison shown in xmaxfit1 an estimate can be obtained for the accuracy for ^mgmr3d. The combined error on is calculated from the standard deviation of the gaussian fitted to the distribution of ^mgmr3d - ^CoREAS as shown in calc_err. Assuming the errors due to MGMR3D and CoREAS reconstruction are uncorrelated the total error σ_tot can be written as σ^2_tot = σ^2_coreas + σ^2_mgmr3d, σ_coreas is obtained from the mean of the distribution of errors on reconstructed with CoREAS for individual events, using a Monte-Carlo method <cit.>. With σ_coreas= 14.5 g/cm^2 we obtain σ_mgmr3d= 22.4 g/cm^2. This value is used as the resolution of the reconstruction with MGMR3D from LOFAR data and shown in the black cross in xmaxfit1. Since for CoREAS the shower is given by a microscopic CORSIKA calculation, it is possible to obtain the error on from the quality of the fit but for MGMR3D such a procedure is not possible. The reason is that in MGMR3D calculations, parameters entering in the longitudinal profile, can easily vary well outside the physical regime. An example of the radio profile of a reconstructed shower is shown in Appendix A for both CoREAS and MGMR3D. §.§ Reconstruction of shower core and energy: In deltaxmax_coreshift we show the correlation between the core positions reconstructed using MGMR3D and CoREAS reconstructed core positions. For the majority of the showers, the core positions show good agreement between COREAS and MGMR3D reconstructions. However, there are a few exceptions with large deviations between MGMR3D and CoREAS. This effect is not found to be correlated either with Δ nor χ^2. Some of these events are hard to reconstruct because the signal-to-noise ratio is relatively low, while others have a core that it is not well-contained by the LOFAR stations. In both cases, small differences between CoREAS and MGMR3D can have an impact that is larger than usual. In E_LOFAR the differences in cosmic ray energy reconstruction between MGMR3D, using (<ref>), and CoREAS are compared. The top panel of E_LOFAR shows that there is no clear correlation between the two. The bottom panel of the figure shows the relative difference, defined as, 2 (E_MGMR3D-E_CoREAS)/(E_MGMR3D+E_CoREAS) rel_en to make the differences more quantitative. This shows that there is no average offset between the two energy reconstructions. 
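Both comparisons above reduce to simple arithmetic; an illustrative sketch follows (the 26.7 g/cm^2 total spread in the usage comment is our back-calculation from the quoted 14.5 and 22.4 g/cm^2, not an independently reported number):

```python
import numpy as np

def mgmr3d_resolution(sigma_tot, sigma_coreas):
    """Unfold the CoREAS contribution in quadrature:
    sigma_mgmr3d = sqrt(sigma_tot^2 - sigma_coreas^2)."""
    return np.sqrt(sigma_tot ** 2 - sigma_coreas ** 2)

def relative_energy_difference(E_mgmr3d, E_coreas):
    """2 * (E_MGMR3D - E_CoREAS) / (E_MGMR3D + E_CoREAS), as defined above."""
    E_mgmr3d, E_coreas = np.asarray(E_mgmr3d), np.asarray(E_coreas)
    return 2.0 * (E_mgmr3d - E_coreas) / (E_mgmr3d + E_coreas)

# e.g. mgmr3d_resolution(26.7, 14.5) ~= 22.4 g/cm^2
```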
The spread of 19% in the distribution is comparable to the LOFAR energy resolution of 14% <cit.>. § SUMMARY AND CONCLUSIONS The MGMR3D code, which uses an analytic parameterization of the plasma cloud, provides a promising alternative to obtain the longitudinal structure of an air shower that best reproduces the measured radio footprint through minimization. It is computationally orders of magnitude faster than its microscopic counterparts that are customarily used for analyzing radio emission from cosmic rays. We have reported on a detailed comparison for a large ensemble of showers simulated with CoREAS and MGMR3D. This resulted in an optimized parameterization inside MGMR3D, in particular concerning the drift velocity, the charge excess, and the radial structure. With the optimized parametrization a strong agreement with microscopic CoREAS-simulations were obtained for the lateral distribution functions for radio emission with a relative difference in intensity up to 10%. As a follow-up step we have shown that MGMR3D can be used in a chi-square fit procedure to extract the shower maximum for a large ensemble of showers simulated by CoREAS. The results show a very good agreement with a small systematic zenith-angle dependency, which is upto 6-8 g/cm^2 for zenith angles not exceeding 50 degrees. We introduce a correction formula to compensate for this. However, MGMR3D is yet not fully optimized for highly inclined showers with zenith above 65 degrees. This is a prospect for a future effort and would be useful for simulation studies for experiments such as GRAND(The Giant Radio Array for Neutrino Detection) designed for detecting highly inclined air showers. We have also found that MGMR3D is sensitive to the effects of additional parameters corresponding to the shape of the longitudinal shower profile on the radio footprint- namely R and L. These parameters have the potential to provide further insight in mass composition, constraining hadronic model, as well as astrophysical interpretation of cosmic ray sources, in addition to  <cit.>. Probing these subtle parameter spaces require extremely dense antenna layouts such as The Square Kilometer Array(SKA) <cit.>, and the required simulations also multiply by many folds, which is exhaustive for present compute-intensive Monte Carlo frameworks. MGMR3D, thus, opens up the novel opportunity of making such multi-parameter study plausible by producing large simulation sets with very little compute resources. A detailed study along these lines will be investigated in a follow up work. As a final proof of the proposed procedure we have used MGMR3D to extract from LOFAR data that have been used in earlier studies. An average resolution of 22 g/cm^2 is found which is competitive to the average resolution of 14.5g/cm^2 obtained using the CoREAS based method. It shows that, the latest version of MGMR3D, for specific geometries discussed in this paper, can be used as a fast and efficient tool to reconstruct shower parameters, and for high-precision studies, it can be combined with Monte Carlo simulations as a preliminary estimator to help reduce the required simulation landscape and expedite the analysis. § EXAMPLE FROM LOFAR DATA A comparison of the radio profiles between CoREAS and MGMR3D for one measured shower is shown in ldf_fit1 and the corresponding reconstructed parameters are in table_reco. In this section we have a closer look at some specific events and study the LDF profile. 
First we show a couple of examples with very good agreement between the MGMR3D and LOFAR reconstructions, as seen from the LDF profiles in ldf_fit. The reconstructed X_max values are also very close, within a difference of 3-4 g/cm^2, and the reconstructed core positions are also reasonable, with a difference within 12 meters. To get some insight in the cause for the outliers shown in xmaxfit1 and deltaxmax_coreshift we show in ldf_fit the detailed radio footprint for some of these outlier events. These results show that for these cases, where there is a large discrepancy between the CoREAS and the MGMR3D results, the error bars on the data are large and either fit seems acceptable. § PROGRAMMING DETAILS The latest version of the program can be downloaded from <cit.>. This version contains the improved parametrizations and the realistic error model discussed in this paper, as well as the functionality to include antenna response functions, relevant for the application to measured data. § ACKNOWLEDGEMENT P. Mitra acknowledges financing by the Polish National Agency for Academic Exchange within Polish Returns Program no. PPN/PPO/2020/1/00024/U/00001. This research is also funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 103.01-2019.378. BMH is funded by ERC Grant agreement No. 101041097. N. Karastathis acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 445154105. ST acknowledges funding from the Khalifa University Startup grant, project code 8474000237-FSU-2020-13. LOFAR, the Low Frequency Array designed and constructed by ASTRON, has facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the International LOFAR Telescope foundation under a joint scientific policy.
http://arxiv.org/abs/2307.04911v2
20230710213002
CLASSY VII Lyα Profiles: The Structure and Kinematics of Neutral Gas and Implications for LyC Escape in Reionization-Era Analogs
[ "Weida Hu", "Crystal L. Martin", "Max Gronke", "Simon Gazagnes", "Matthew Hayes", "John Chisholm", "Timothy Heckman", "Matilde Mingozzi", "Namrata Roy", "Peter Senchyna", "Xinfeng Xu", "Danielle A. Berg", "Bethan L. James", "Daniel P. Stark", "Karla Z. Arellano-Córdova", "Alaina Henry", "Anne E. Jaskot", "Nimisha Kumari", "Kaelee S. Parker", "Claudia Scarlata", "Aida Wofford", "Ricardo O. Amorín", "Naunet Leonhardes-Barboza", "Jarle Brinchmann", "Cody Carr" ]
astro-ph.GA
[ "astro-ph.GA" ]
AASJournal ApJ Hu et al. CLASSY Profiles 0000-0003-3424-3230]Weida Hu Department of Physics, University of California, Santa Barbara, Santa Barbara, CA 93106, USA 0000-0001-9189-7818]Crystal L. Martin Department of Physics, University of California, Santa Barbara, Santa Barbara, CA 93106, USA 0000-0003-2491-060X]Max Gronke Max-Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany 0000-0002-5659-4974]Simon Gazagnes Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA 0000-0001-8587-218X]Matthew Hayes Stockholm University, Department of Astronomy and Oskar Klein Centre for Cosmoparticle Physics, AlbaNova University Centre, SE-10691, Stockholm, Sweden 0000-0002-0302-2577]John Chisholm Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA 0000-0003-1127-7497]Timothy Heckman Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA 0000-0003-2589-762X]Matilde Mingozzi Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0002-4430-8846]Namrata Roy Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA 0000-0002-9132-6561]Peter Senchyna Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101, USA 0000-0002-9217-7051]Xinfeng Xu Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA 0000-0002-4153-053X]Danielle A. Berg Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA 0000-0003-4372-2006]Bethan L. James AURA for ESA, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0001-6106-5172]Daniel P. Stark Steward Observatory, The University of Arizona, 933 N Cherry Ave, Tucson, AZ, 85721, USA 0000-0002-2644-3518]Karla Z. Arellano-Córdova Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA 0000-0002-6586-4446]Alaina Henry Center for Astrophysical Sciences, Department of Physics & Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0002-6790-5125]Anne E. Jaskot Department of Astronomy, Williams College, Williams town, MA 01267, USA 0000-0002-5320-2568]Nimisha Kumari AURA for ESA, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0002-8809-4608]Kaelee S. Parker Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA 0000-0002-9136-8876]Claudia Scarlata Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA 0000-0001-8289-3428]Aida Wofford Instituto de Astronomía, Universidad Nacional Autónoma de México, Unidad Académica en Ensenada, Km 103 Carr. Tijuana-Ensenada, Ensenada 22860, Mexico 0000-0001-5758-1000]Ricardo O. Amorín Instituto de Investigación Multidisciplinar en Ciencia y Tecnología, Universidad de La Serena, Raul Bitrán 1305, La Serena 2204000, Chile Departamento de Astronomía, Universidad de La Serena, Av. 
Juan Cisternas 1200 Norte, La Serena 1720236, Chile Wellesley College, 106 Central Street, Wellesley, MA 02481, USA 0000-0003-4359-8797]Jarle Brinchmann Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA Lyman-alpha line profiles are a powerful probe of ISM structure, outflow speed, and Lyman continuum escape fraction. In this paper, we present the line profiles of the COS Legacy Archive Spectroscopic SurveY, a sample rich in spectroscopic analogs of reionization-era galaxies. A large fraction of the spectra show a complex profile, consisting of a double-peaked emission profile in the bottom of a damped, absorption trough. Such profiles reveal an inhomogeneous interstellar medium (ISM). We successfully fit the damped absorption (DLA) and the emission profiles separately, but with complementary covering factors, a surprising result because this approach requires no exchange between high-N_HI and low-N_HI paths. The combined distribution of column densities is qualitatively similar to the bimodal distributions observed in numerical simulations. We find an inverse relation between peak separation and the [O iii]/[O ii] flux ratio, confirming that the covering fraction of Lyman-continuum-thin sightlines increases as the peak separation decreases. We combine measurements of peak separation and red peak asymmetry in a diagnostic diagram which identifies six Lyman continuum leakers in the CLASSY sample. We find a strong correlation between the trough velocity and the outflow velocity measured from interstellar absorption lines. We argue that greater vignetting of the blueshifted peak, relative to the redshifted peak, is the source of the well-known discrepancy between shell-model parameters and directly measured outflow properties. The CLASSY sample illustrates how scattering of photons outside the spectroscopic aperture reshapes profiles as the distances to these compact starbursts span a large range. § INTRODUCTION The Epoch of Reionization (EoR) marks a period in the history of the Universe when the emergence of galaxies ionized most of the neutral hydrogen in the intergalactic medium (IGM). Observations suggest that the first ionized pockets in the IGM grew around the largest overdensities of galaxies <cit.>. The massive stars in those galaxies are likely the source of the ionizing photons, the Lyman continuum (LyC) at wavelengths λ<912 Å <cit.>. How this ionizing radiation leaks out of the dense structures where early galaxies form, however, is not well understood. A small column density of neutral hydrogen, N_HI≈ 1.6 × 10^17 cm^-2, will absorb a LyC photon. Exactly how feedback from massive stars opens pathways for LyC escape <cit.> sets the timeline for cosmic reionization <cit.>. Direct observations of the escaping LyC photons are not possible during the EoR because of attenuation by the IGM <cit.>, so indirect tracers LyC escape and outflows are needed. Lyman-α is the most commonly detected emission line from high-redshift galaxies <cit.>. The channels through which photons emerge from galaxies appear to be tightly related to the pathways of LyC escape <cit.> because the origins of photons, H ii regions, are illuminated by the LyC photons arising from central massive stars. Even low column densities of neutral hydrogen in these channels scatter photons many times, altering their direction and frequency. 
Their random walk redistributes photons flux from the line core into the line wings, and this reshaping of the line profile imprints information about the outflow velocity, column density, and ISM structure on the emergent line profile <cit.>. In the absence of absorption by dust, all the photons eventually escape from the galaxy, and radiative transfer calculations demonstrate some general properties of the line profiles. For example, analytic solutions for static slabs and spheres yield emerging spectra with symmetric redshifted and blueshifted peaks <cit.>. Bulk motion requires Monte Carlo techniques, and these calculations demonstrate that outflowing gas produces an asymmetric profile which has a stronger redshifted component regardless of the outflow geometry and structure <cit.>. The most commonly applied radiative transfer model, the homogeneous shell model, assumes an expanding, spherical shell of neutral hydrogen <cit.>. Over a wide range of outflow properties, the emergent line profile has a P Cygni shape characterized by a redshifted emission line with a broad red wing plus a blueshifted absorption trough. For a very low H i column density, some emission from the near side of the thin shell is transmitted, producing blueshifted emission instead of absorption. Whereas a very high column density shell will trap a photon until it is eventually absorbed by a dust grain, and the emergent line profile has become that of a damped absorber (DLA), a completely-dark absorption trough with very broad wings. The shell model does a good job of reproducing the diversity of commonly observed profile shapes <cit.>. Statistically successful fits, however, do not guarantee accurate recovery of outflow properties. The structure of the shell model is much simpler than actual ISM and multi-phase outflows <cit.>. Low-ionization state (LIS) absorption lines in galaxy spectra unambiguously detect outflowing gas and have provided insight into how outflow properties vary with galaxy properties <cit.>. The outflow speeds derived from the blueshifts of these absorption lines offer an opportunity to test the shell model velocities, and the results reveal significant discrepancies both at high-redshift <cit.> and among nearby Green Pea galaxies <cit.>. Three major discrepancies are reported in those studies: (1) the best-fit redshifts are larger by 10–250 km s^-1 than the spectroscopic redshifts; (2) the best-fit outflow velocities of expanding shell are lower than the outflow velocities derived by LIS lines; (3) the intrinsic line widths of shell model are broader than those of Balmer lines. <cit.> proposed that those discrepancies might be caused by the degeneracies between model parameters, but no explanation for these puzzles based on observations has been found. We also draw attention to another limitation of the shell model. A large fraction of Green Peas and higher-redshift star-forming galaxies show emission line in the bottom of a DLA system <cit.>. These profiles cannot be produced by a homogeneous shell model. The low column density shells that produce double-peaked profiles contradict the presence of damped absorption which requires very high column density. Even larger peak separations are predicted by a clumpy shell because the fitted shell expansion speed lies between the outflow velocities of the neutral clouds and the hot interclump medium <cit.>. 
Comparing physical properties derived from shell modeling to those measured from other spectral lines can therefore provide new insight about the structure of the multi-phase gas. Because these properties determine the LyC escape fraction from galaxies, there is an urgent need to understand the puzzling properties of profiles in a sample of EoR analogs. To place the unexpected profile shapes, i.e. the double peaked emission lines in DLA systems, in the broader context of the full diversity of observed profile shapes, requires high-resolution and high S/N ratio UV spectroscopy of EoR analogs, including, but not limited to, Green Pea galaxies. The James Webb Space Telescope (JWST) observations reveal a diversity of galaxies in the EoR <cit.>, spanning much wider ranges of galaxy properties than the local Green Peas. In this paper, we analyze 45 line profiles obtained by the COS Legacy Archive Spectrocopy SurveY (CLASSY) <cit.>. This UV-surface brightness selected sample includes the lowest redshift Green Pea galaxies, local Lyman Break Galaxy Analogs (LBAs) <cit.>, and the two local galaxies that are the nearest spectral match to the emission-line spectra of GN-z11 <cit.>. Thus the range in metallicity and ionizing continuum properties include the extreme conditions that were common during galaxy assembly. We present a uniform analysis of the profiles. Outflow properties have been determined from the blueshifted components of the LIS resonance lines <cit.> and the excited fine-structure lines <cit.>. The results provide new insight into the clumpiness of the ISM, as described by the relative covering fractions of high-N_HI and low-N_HI gas, yet also strongly suggest that the discrepancies between shell model parameters and LIS absorption lines arise from aperture vignetting. Among CLASSY targets, the physical size of COS aperture ranges from the scale of star clusters (∼ 100 pc) to galaxies (∼ 10 kpc). The large variations in aperture losses make it possible to view individual profile shapes in a broader context. This paper is organized as follows. In Sec. 2, we introduce the CLASSY sample of profiles, describe how we remove the damped absorption and measure the properties of the high column density neutral gas, and discuss the large variation in the amount of aperture vignetting across the sample. In Sec. 3, we use the radiative transfer code to fit shell models to the net emission-line profiles, investigating different choices for the continuum level (and hence the line equivalent width). In Sec. 4, we discuss the H i column density distribution in EoR analogs, the size scale of the holes leaking LyC radiation, and argue that aperture vignetting biases shell model properties in the directions required to solve the discrepancies with independently measured outflow properties. Throughout this paper, we adopt a Flat ΛCDM cosmology with Ω_m=0.3, Ω_Λ=0.7, and H_0=70 km s^-1 Mpc^-1. We also adopt the Spearman rank method to quantify the correlation strengths r. The data used in this paper is available via the CLASSY high-level science products (HLSP) homepage[ Data will appear at <https://archive.stsci.edu/hlsp/classy> after acceptance by the ApJ. The data product can be found here (<https://drive.google.com/drive/folders/1NCUyr1vQ10z4BZuGBqsBuIjL0dWJnmZ1?usp=sharing>) during the review period.], including the best-fit DLA systems, the emission lines after subtracting the DLA and continuum, and the best-fit shell model spectra. 
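For reference, the adopted cosmology and correlation statistic can be set up in a few lines (a sketch using astropy and scipy; the variable and function names are ours):

```python
from astropy.cosmology import FlatLambdaCDM
from scipy.stats import spearmanr

# Conventions adopted above: flat LambdaCDM with Omega_m = 0.3, H0 = 70 km/s/Mpc,
# and Spearman rank coefficients to quantify correlation strengths.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def correlation_strength(x, y):
    """Spearman rank correlation coefficient r and its p-value."""
    r, p = spearmanr(x, y)
    return r, p
```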
§ SAMPLE OF PROFILES Here we present high-S/N spectra for the 45 CLASSY targets. Each of these nearby galaxies has a compact, far-UV bright star-forming region which was the target of the COS observation. The sample provides a diverse set of local analogs of high-redshift galaxies, including both Green Pea galaxies and LBAs. Physical conditions in the starburst range cover oxygen abundances from 12+log(O/H)∼ 7 to 8.8 and electron densities from n_e ∼ 10 to 1120 cm^-3. The stellar masses and star formation rates of their host galaxies sample the range log (M_⋆/M_⊙)∼ 6.2 to 10.1 and log (SFR/M_⊙ yr^-1) ∼ -2 – 1.6, respectively <cit.>. The raw spectroscopic data were reduced using the CALCOS pipeline (v3.3.10), including spectrum extraction, wavelength calibration, and vignetting correction, and then coadded using a custom pipeline <cit.>. The Galactic foreground extinction was corrected assuming a ratio of total-to-selective extinction R_V=3.1 and a Milky Way (MW) extinction curve <cit.>. Fig. <ref> shows an overview of the G130M and G160M spectra, ordered by redshift. The CLASSY spectra easily resolve the damping wings of the broad absorption trough imprinted by H i absorption from the Milky Way. A large fraction of the spectra show a second damped absorber at the redshift of the CLASSY galaxy. In the lowest redshift galaxies, the blueshifted damping wing of the target is blended with the redshifted damping wing of the Milky Way absorption. The yellow waterfall across Fig. <ref> highlights the redshifted emission. Surprisingly, the emission is frequently detected in the bottom of a damped absorption trough. Profiles of this type cannot be produced by a uniform shell of neutral hydrogen. In this paper, we adopt an approach that we have not seen used previously. We fit the damping wing profile, including a non-unity covering factor. We then extract the net emission-line profile relative to the damping trough, as others have done. The equivalent width of this net emission, however, has been previously neglected. We address this in Sec. <ref> below, where we demonstrate that the best normalization for the emission is the fraction of the stellar continuum not intercepted by the high column density neutral hydrogen. We use physical models to define the continuum level near , allowing us to accurately model the DLA system in Sec. <ref>. CLASSY provides two models for the continuum (Senchyna et al. in preparation). Both models assume the observed continuum can be reconstructed as a linear combination of a set of single-age, single-metallicity stellar populations <cit.>, and, thus, be fitted using the following relation: F_obs (λ) = 10^-0.4E(B-V)k(λ) Σ_i X_i M_i(λ), where F_obs (λ) is the observed spectrum, k(λ) is the attenuation law, M_i(λ) is the spectrum of the ith single stellar population (SSP), and X_i is its coefficient. The main difference between the two methods is the stellar population synthesis framework. The top panels of Fig. <ref> illustrate each best-fit continuum. The red dashed line represents the continuum built from STARBURST 99 synthesis models <cit.> and a <cit.> attenuation law, and the green dashed line uses the latest version of the <cit.> model <cit.> and an SMC extinction law <cit.>. These two continua both reproduce the prominent N v λ1240 stellar P-Cygni line well. The narrow dip visible at in both models is not physical (C. Leitherer, private communication), and we interpolate over it. 
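A minimal sketch of the continuum model in the equation above, i.e. an attenuated, non-negative linear combination of SSP templates. This is illustrative only: E(B-V) is held fixed here, and a simple non-negative least-squares solve stands in for the full CLASSY fitting machinery.

```python
import numpy as np
from scipy.optimize import nnls

def model_continuum(ssp_grid, coeffs, ebv, k_lambda):
    """F(lambda) = 10^(-0.4 E(B-V) k(lambda)) * sum_i X_i M_i(lambda).
    ssp_grid has shape (n_ssp, n_wave); k_lambda and the output are per wavelength."""
    attenuation = 10.0 ** (-0.4 * ebv * k_lambda)
    return attenuation * (ssp_grid.T @ coeffs)

def fit_ssp_coefficients(f_obs, ssp_grid, ebv, k_lambda):
    """Non-negative coefficients X_i for a fixed E(B-V) and attenuation curve."""
    attenuated_templates = (10.0 ** (-0.4 * ebv * k_lambda))[:, None] * ssp_grid.T
    coeffs, _ = nnls(attenuated_templates, np.asarray(f_obs))
    return coeffs
```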
We fit the DLA profiles using both the continuum models and found similar parameters. We adopt the first method for the analysis that follows because the STARBURST99 models have the higher spectral resolution. §.§ DLA fitting Fig. <ref> illustrates the diversity of CLASSY profiles: pure DLA systems, emission in the bottom of a damping trough (hereafter Abs+EM profile), P-Cygni-like profiles, and double-peaked emission. A large fraction of CLASSY spectra (31/45) have a DLA system, and 20 out of these 31 galaxies show double- or single-peaked emission lines. CLASSY spectra offer the high spectral resolution and S/N ratio required to remove the contribution of the DLA system <cit.> and extract the emission lines. Fig. <ref> shows that geocoronal emission lines intersect the broad damping wings at low redshift and at z ≈ 0.07. In addition, the LIS absorption lines from the MW and the target galaxy affect the wings of the DLA systems but intersect only a few emission lines (see Sec. <ref>). We mask these lines as indicated in the second row of Fig. <ref>. To uniquely describe the MW DLA system, we adopt the Galactic H i column density derived from 21 cm emission in the direction of the target <cit.>. The DLA line profile is described by a Voigt profile <cit.>, which is defined by a Doppler parameter (b) and a column density (N_HI). We assume a Doppler parameter of 30 km s^-1 <cit.>, no velocity shift, and complete covering of the continuum source. These steps define the Voigt profile, which we convolve with the instrumental resolution, and then subtract from the normalized continuum to uncover the profile of the CLASSY target. Because DLA systems are optically thick, the bottom of the Voigt profile is completely dark. However, we found significant residual intensity in the bottom of the damping troughs. Partial covering of the continuum source therefore turns out to be critical for fitting damping profiles. This partial covering was sometimes subtle, as in the top left panel (J1129+2034) of Fig. <ref>. In contrast, the top right panel (J1418+2102) of Fig. <ref> shows strong H i damping wings and prominent residual intensity in the trough. Here we adopt a modified Voigt profile which allows a velocity offset, v, and a velocity-independent covering fraction, f_C. We convolve each Voigt profile with a Gaussian line spread function whose width is determined by the spectral resolution <cit.>. Our fitting code then multiplies the normalized continuum by the optical depth of each Voigt profile. The error is measured using a Monte Carlo (MC) approach; we add random noise to the observed spectra and refit it 1000 times. Leaving all the parameters free provided statistically good fits; however, we noticed degeneracies between the fitted velocity v of the DLA and the wings of the damping profile, and also the overlaps between the wings of the DLA system and emission. We broke these degeneracies by using the O i λ1302.2 Å absorption line to constrain the parameters of the Voigt profile, an approach Section <ref> justifies below. The profile shape is not sensitive to b, and we fixed b to be 30 km s^-1. The second row of Fig. <ref> presents the continuum-normalized spectra, our model for the MW absorption, and the fitted damped absorption. Table <ref> summarizes the best-fit Voigt parameters for the DLAs. We extract the emission lines by subtracting the stellar continuum and DLA profile. Previous works have visually selected a local continuum close to the emission. 
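For concreteness, a simplified sketch of the modified Voigt-profile model described above: a partially covered Lyα damped absorber with a velocity offset, smoothed by a Gaussian line-spread function. This is an illustration, not the CLASSY fitting code; the convolution is applied to the final model rather than to the Voigt profile alone, the LSF width is given in pixels, and the Monte Carlo error estimation is omitted.

```python
import numpy as np
from scipy.special import wofz
from scipy.ndimage import gaussian_filter1d

LYA_LAM0, LYA_F, LYA_GAMMA = 1215.67, 0.4164, 6.265e8   # Angstrom, osc. strength, s^-1

def lya_voigt_tau(wave_A, logN_HI, b_kms=30.0, v_kms=0.0):
    """Lyman-alpha optical depth from a Voigt profile, H(a, x) = Re[wofz(x + i a)],
    for a column density 10^logN_HI cm^-2, Doppler parameter b, velocity offset v."""
    c_cgs, c_kms = 2.998e10, 2.998e5
    nu = c_cgs / (wave_A * 1e-8) * (1.0 + v_kms / c_kms)   # observed -> absorber frame
    nu0 = c_cgs / (LYA_LAM0 * 1e-8)
    dnu_D = (b_kms * 1e5) * nu0 / c_cgs
    a = LYA_GAMMA / (4.0 * np.pi * dnu_D)
    x = (nu - nu0) / dnu_D
    H = wofz(x + 1j * a).real
    sigma = 2.654e-2 * LYA_F * H / (np.sqrt(np.pi) * dnu_D)  # pi e^2/(m_e c) in cgs
    return 10.0 ** logN_HI * sigma

def dla_model(wave_A, continuum, logN_HI, f_C, v_kms, lsf_sigma_pix):
    """Continuum times a partially covered damped absorber,
    F = C * [1 - f_C + f_C * exp(-tau)], smoothed by a Gaussian LSF (sigma in pixels)."""
    tau = lya_voigt_tau(np.asarray(wave_A), logN_HI, v_kms=v_kms)
    absorbed = np.asarray(continuum) * (1.0 - f_C + f_C * np.exp(-tau))
    return gaussian_filter1d(absorbed, lsf_sigma_pix)
```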
A comparison of common targets shows that the resulting can be sensitive to the wavelength range used to define the local continuum. For example, the beginning of the wavelength range of J0938+5428 used in <cit.> is the blue peak of J0938+5428 in Fig. <ref>. For this same target, <cit.> determine the wavelength range from the intersection of the emission line with the DLA profile. This method recovers the blue peak; however, it underestimates the flux because the bottom of the DLA system is poorly estimated. Among 45 CLASSY galaxies, 24 galaxies show significant double-peak emission lines, 10 show single-peak emission. Fig. <ref> presents the emission line spectra of 34 CLASSY galaxies. The remaining 11 galaxies show pure DLA systems, and are therefore not included in Fig. <ref>. §.§.§ Constraining the DLA Properties with O i Absorption We use the narrow O i absorption lines to constrain the velocity of the DLA. Since the ionization potentials of O and H are very similar, we expect the O i to trace H i gas in the DLA absorber. Fig. <ref> validates this expectation; the DLA systems in CLASSY always associate with strong O i absorption. The only O i absorber without a DLA is J1112+5503, which shows a P-Cygni profile still suggesting substantial H i gas. For optically thick O i absorption, the residual flux at the bottom of the fitted Gaussian profile determines the covering fraction of O i gas. The O i optical depth can be measured following τ = 0.318( N_OI/10^14 cm^-2) (30 km s^-1/b), where N_OI is the O i column density and b the Doppler parameter <cit.>. Since the H i column densities of DLAs in the CLASSY sample are > 10^20 cm^-2, and the metallicities 12+log (O/H) are >7.5, we find that the O i optical depths are >10, and the line is saturated. We acknowledge that this argument relies on the assumption that O i is uniformly distributed in the neutral gas. If the intervening O i clouds have different velocities, the covering fraction derived from O i absorption would place a lower limit on the covering fraction of neutral gas <cit.>. Fig. <ref> shows that the O i covering fraction is approximately equal to the covering fraction of the DLA system. Table <ref> collects the best-fit velocities and covering fractions for the DLA and O i absorption. §.§.§ Notes on individual galaxies * The bottom of the DLA system is hidden under the emissions in J0938+5428, J1024+0524, J1416+1223, and J1521+0759 in Fig. <ref>, so the residual flux in the damping trough is not directly constrained. Since the blue wing of the damping profile is contaminated by metal absorption lines, the shape of the damping profile is poorly constrained. Therefore, for these four galaxies, we adopt the O i covering fractions to be their DLA covering fractions. * The covering fractions of four galaxies (J0337-0502, J0405-3648, J1132+1411, J1448-0110) are fixed to be a constant measured visually but also in agreement with their O i covering fractions. The Voigt profile fit for these four galaxies underestimates the covering fraction because the CLASSY error spectra do not account for the small counts at the trough bottom which produce an asymmetric error <cit.>. * The DLA systems of three galaxies (J0127-0619, J1044+0353, J1359+5726) were not fitted well by a single Voigt profile, and we noticed that their O i absorption lines show a second component. Thus, we adopted two Voigt profiles and matched their velocities and covering factors to those of the O i components. 
* In J1105+4444, the peak separation is exceptionally broad, ∼1000 . We suggest that the peaks are likely emitted by different regions within the COS aperture. To test this conjecture, we inspected HST/COS NUV acquisition image <cit.>. We found that J1105+4444 is not only an elongated object with multiple clumps, but the major axis of these clumps is along the dispersion direction of the COS observation. Thus, their spatial offset in the aperture may cause an apparent velocity shift which is not physical. This object is excluded in the following analysis. For completeness, we note that the DLA fit for J1105+4444 failed when constrained by two O i components, and we used a double-Voigt profile with free velocities. * The blue part of the J1525+0757 line is likely a P-Cygni profile, so the impact of the geocoronal O i emission should be negligible. * We also exclude J1448-0110 from the emission analysis due to the low S/N. §.§ DLA system and Aperture Loss The fraction of emitted photons captured by the 25 diameter COS aperture will vary dramatically among the targets because of their large range of distances. For a typical target, the physical diameter of the aperture is roughly 700 pc, which is larger than the half-light radius of the UV continuum emission core but smaller than the Strömgren radius of the nebula.[To estimate the volume ionized by the stars within the COS aperture we have used the extinction corrected the Hα luminosity in the SDSS fiber and assumed a volume-average electron density of 1 cm^-3 and case B recombination at 10^4 K.] The most distant CLASSY targets are LBAs at z ≈ 0.18. Here, the COS aperture subtends nearly 8 kpc, vignetting extended halo emission but likely capturing most of the luminosity. CLASSY also includes several very nearby galaxies, where the COS aperture subtends just a few hundred parsecs, and damped absorption troughs are prominent in their spectra (see Fig. <ref> and <ref>). We suggest that the DLA detections indicate the emission is scattered outside the COS aperture. In support of this claim, Fig. <ref> shows that 14 of 16 galaxies with UV half-light radii larger than the COS aperture <cit.> have a DLA in their COS spectrum[The sizes of compact galaxies with r_50<04 are measured using COS acquisition images and the sizes of extended galaxies are measured using SDSS u-band images.]. The frequency of DLA detections is reduced among the galaxies with half-light radii smaller than the COS aperture. The Lyman Break Analog sample has the fewest DLA detections. Although the physical size of the aperture grows with increasing redshift, we do not find a one-to-one correlation between DLA detections and redshift. For z>0.1 (yellow circles), the physical scale of COS aperture reaches ∼5 – 10 kpc and is larger than the UV sizes of those galaxies; however, a large fraction (4/9) of their spectra still show significant DLA system. The spectra of high-redshift galaxies observed with similar aperture size sometimes show DLA systems as well <cit.>. For example, <cit.> reveals a similar fraction (40/92) of galaxies at redshift ∼2.2 – 3.2 which shows the DLA system. We can gain some quantitative insight from the imaging studies of <cit.>. LARS 9 and LARS 14 correspond to the CLASSY galaxies J0823+2806 at z=0.04722 and J0926+4427 at z=0.18067, respectively. The closer galaxy shows a pure-DLA profile, whereas the more distant one has a double-peaked emission profile with no DLA. 
Inspection of Figure 1 in <cit.> shows the emission comes from a shell around J0823+2806, whereas the emission from J0926+4427 is centrally concentrated. In the latter example, the COS aperture includes roughly 60% of the total flux <cit.>, showing that the luminosity is significantly attenuated even in the case of no absorption. For the DLA, the growth curve shows net emission only when the aperture is enlarged to a diameter of 9.5 kpc, about four times larger than the COS aperture. Galaxy-by-galaxy aperture corrections for are not currently available, but these examples support our conjecture that the galaxies showing damped absorption would show net emission in spectra obtained through larger apertures. §.§ Measurements In this section, we present the measurements of the emission properties. We measure the continuum and DLA-subtracted profiles. We will demonstrate in Sec. <ref> that the emission emerges from holes between the DLA clouds. Since these parts of the line profile have different origins, they must be separated to obtain a meaningful analysis. §.§.§ Kinematics We measure the blue peak velocity, v^Lyα_blue, and red peak velocity, v^Lyα_red, as the position of the local maximum in the emission line at velocity v <0 and v > 0, respectively, relative to the systemic velocity. The minimum between the two emission peaks defines the trough velocity, v^Lyα_trough. We define the peak separation as Δ v_Lyα = v^Lyα_red - v^Lyα_blue. §.§.§ Fluxes and Escape Fraction For double-peaked profiles, we measure the fluxes of the blue and red components by integrating to the velocity of the trough between the components. We also measure the asymmetry parameter of the red peak of emission, defined as A_f = (∫^∞_λ^red_peak f_λ d λ)/(∫^λ^red_peak_λ_trough f_λ d λ), where λ^red_peak is the wavelength of red peak and λ_trough the wavelength of the trough <cit.>. The total fluxes are measured by integrating flux between the wavelengths where the profile meets zero flux, including the central dip in double peaked profiles and the negative flux in P Cygni profiles. We convert the total fluxes to luminosity using luminosity distance from Table <ref>, which is corrected for the cosmic flow. The rest-frame equivalent widths, EWs, are computed using the spectra and the total stellar continuum, EW(Lyα) = ∫ F_Lyα(λ)/F_cont(λ) d λ / (1+z). We estimate the escape fractions f^Lyα_esc based on intrinsic fluxes inferred through dust-corrected Hα (or Hβ) fluxes assuming a Case-B recombination <cit.>: f^Lyα_esc = F_Lyα/(8.7 × F_Hα)[We adopt the factor of 8.7 to be consistent with previous works. It corresponds to a temperature of 10,000 K and an electron density of ∼300 cm^-3. ]. <cit.> have measured the Hα and Hβ fluxes using optical spectra from SDSS, MUSE, KCWI, MMT, and VIMOS. Since the UV spectra and optical spectra are obtained via different instruments with different aperture sizes, a scaling factor between UV spectra and optical spectra is needed to correct the different aperture losses. <cit.> measured the scaling factor by matching the optical spectra to the extrapolation of the best-fit UV stellar continuum model (see their Appendix A). The scaling factors for most objects approximate the ratio between apertures of different instruments but are not exactly the same because some other effects may also cause the flux offsets such as the vignetting. For example, the median of the scaling factor for SDSS spectra is ∼0.79 and the aperture size ratio is (25)^2/(3)^2∼ 0.69. 
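To keep the bookkeeping explicit, the sketch below collects the measurements defined in this section for a net (continuum- and DLA-subtracted) double-peaked profile sampled on a velocity grid. It is a schematic outline rather than our measurement code; the input arrays are hypothetical, and the aperture scaling factor is treated as an externally supplied number.

import numpy as np

def lya_measurements(v, f, F_Ha, scale):
    """Schematic measurements on a net Lya profile.

    v, f  : velocity grid [km/s] and net Lya flux density per unit velocity
    F_Ha  : dust-corrected Halpha flux from the optical spectrum
    scale : aperture scaling factor matching the optical to the UV data
    """
    blue, red = v < 0, v > 0
    v_blue = v[blue][np.argmax(f[blue])]            # blue-peak velocity
    v_red  = v[red][np.argmax(f[red])]              # red-peak velocity
    mid = (v >= v_blue) & (v <= v_red)
    v_trough = v[mid][np.argmin(f[mid])]            # trough velocity
    dv_sep = v_red - v_blue                         # peak separation

    # red-peak asymmetry: flux redward of the red peak over the flux
    # between the trough and the red peak
    red_wing = v >= v_red
    inner = (v >= v_trough) & (v <= v_red)
    A_f = np.trapz(f[red_wing], v[red_wing]) / np.trapz(f[inner], v[inner])

    F_lya = np.trapz(f, v)                          # total Lya flux
    f_esc = F_lya / (8.7 * scale * F_Ha)            # Case-B escape fraction
    return dict(v_blue=v_blue, v_red=v_red, v_trough=v_trough,
                dv_sep=dv_sep, A_f=A_f, f_esc=f_esc)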
We refer readers to <cit.> for more details. In this work, we adopt the corrected Hα fluxes. Since the Hα for J0934+5514 and J1253-0312 are unavailable, we convert their Hβ fluxes to Hα fluxes using a factor of 2.86, by assuming Case-B recombination with a temperature of 10,000 K and electron density of 100 cm^-3. §.§.§ trough flux density The flux density at the trough velocity defines the trough flux density, f_trough. The f_trough of J0926+4427 and J1429+0643 have also been measured in <cit.> based on the spectra obtained by HST/COS G140L with a resolution of 1,500. <cit.> measured the trough flux density based on the continuum-unsubtracted spectra but our measurements are based on the continuum-subtracted spectrum. Thus, our F_trough/F_cont should be lower by 1 than those in <cit.>. Here F_cont is the flux density of total stellar continuum estimated from STARBURST99. However, accounting for this difference, the F_trough/F_cont of J0926+4427 and J1429+0643 in our measurements are still lower. Particularly for J0926+4427, we do not see the net residual trough flux density (i.e., F_trough<0). This is because the CLASSY spectra have much higher resolution, ranging from ∼ 2,200 to 15,000 with a median of 5,000 <cit.>. The high-resolution spectra resolve the small structures at the central trough, which were smoothed due to the lower resolution in <cit.>. We note that the resolution around emission line might be lower than the resolution for the continuum as emission often subtends a larger solid angle than the continuum. §.§.§ Aperture Effects on Measurements The COS aperture, therefore, attenuates the emission relatively more than the UV emission due to the scattering of photons. Thus, even though Hα, and the UV continuum are measured locally in the same aperture, we expect and the EW to be underestimated. In our example of J0823+2806, see discussion in Sec. 2.2, the attenuation is severe because most of the emission is scattered outside the COS aperture. If scattering outside the COS aperture produces the large fraction of DLA systems in CLASSY, then the and EW of these galaxies are significantly underestimated. A more subtle bias that we will examine is the possibility that this vignetting modifies the shape of the emission-line profile. <cit.> predicted that the blue-to-red peak ratio (hereafter B/R ratio) would increase with increasing impact parameters because the front-scattered photons (blue peak) are closer to the resonance center of the outflowing gas and thus, tend to be scattered to larger impact parameters, compared with the back-scattered photons (red peak). Integral field spectroscopy confirms this trend in a few halos <cit.>. Another possible interpretation is that the average projected outflow velocity decreases with the increasing radius <cit.>. § RADIATIVE TRANSFER MODELING The high fraction of DLA systems in CLASSY was not anticipated. More surprising, however, was the discovery of double-peaked emission in the bottom of the broad absorption profiles. We have drawn attention to an important property of these DLA systems; the high-column density gas only partially covers the continuum source (see Sec. <ref>). The residual intensity in the continuum-normalized spectra indicates the uncovered fraction of the continuum emission (within the COS aperture). In this section, we explore what continuum is linked to the net emission profile, the total continuum or the uncovered fraction. 
Specifically, we utilize the shell model to fit the profiles which are normalized by the STARBURST99 continuum or the DLA continuum (hereafter normalized profile[ To avoid confusion, we define the emergent profile as the observed emission line with the underlying continuum and DLA, the net profile as the profile after removing the underlying continuum and DLA, and the normalized profile as the net profile after being normalized by the underlying continuum. ]). The model line profile is computed using the Monte Carlo radiative transfer code <cit.>. This technique has been used to successfully reproduce the observed profiles of emission lines <cit.>. The shell model can produce emission when the dust optical depth is low, or a DLA system when there is a substantial neutral hydrogen column with a moderate dust optical depth <cit.>. However, the homogeneous shell model cannot produce a emission line in the DLA trough, i.e., the Abs+Em profiles seen in our CLASSY sample (see Fig .<ref>). The emission line requires low-N_HI channels (with low dust optical depth), which contradicts the presence of damped absorption which requires very high column density. This requires a non-uniform shell model to describe the multi-component ISM. Although radiative transfer through clumpy media has been explored <cit.>, a non-uniform shell is beyond the scope of this work. Here, we adopt an alternative method to fit the profiles of the CLASSY sample. We fit and remove the DLA system to extract the net profile, as described in Sec. <ref>, and then we fit the model to the normalized profile using two different approaches described in Sec. <ref>. The variant fitting results could reveal the physical links between the gasses probed by emission and DLA absorption, as discussed in Sec. <ref>. Some properties of the shell model have been mapped to those of more realistic outflows <cit.>. However, the shell-model parameters are found to have systematic discrepancies with independently measured outflow velocities and the velocity dispersion of the intrinsic line profile <cit.>. To understand the origin of the discrepancies, we perform more fittings with constrained redshift priors and compare it with the previous results in Sec. <ref>. §.§ Shell Model computes resonant scattering through a uniform, expanding shell, which is composed of dust and neutral hydrogen gas. The shell model used in has 6 free parameters, including 2 parameters for the central radiation source: intrinsic line width σ_i and intrinsic equivalent width EW_i, and 4 parameters for the expanding shell: neutral hydrogen column density N_Hi, dust optical depth τ_d, shell velocity v_exp, and temperature T. In addition to these six parameters, a redshift parameter z_tlac is also applied to shift the rest-frame of the profile relative to the systemic redshift of the galaxy. The photons and underlying continuum photons are generated from the central source with an intrinsic width of σ_i and intrinsic equivalent width of EW_i. The photon is then emitted into the H i shell with a random direction and travels a distance before being absorbed or resonantly scattered. The distance that a photon can travel is calculated using the total optical depth of dust τ_d and neutral hydrogen N_Hi in the expanding shell with velocity v_exp and temperature T. The probability that a photon is resonantly scattered or absorbed at a specific position is estimated by comparing the optical depth of neutral hydrogen with the total optical depth at that position. 
If the photon is resonantly scattered, a new direction and a new frequency are drawn from the proper phase function and the frequency redistribution function, respectively. The previous steps are repeated until the photon escapes from the simulation domain or is absorbed by the dust. If the photons escape from the simulation domain, their frequency, and other properties are recorded. This simulation has been run thousands of times over a discrete grid of (v_exp, N_Hi, T) and then been post-processed with a continuous grid of (σ_i, τ_d, EW_i) to generate the simulated spectra for different parameter values. To fit the observed spectrum, a likelihood function is constructed based on the noise and flux spectra. The best-fit spectrum is derived by maximizing the likelihood function using the Markov Chain Monte Carlo (MCMC) and nonlinear optimization methods. We highlight the importance of the intrinsic equivalent width (EW_i) in the shell model, a parameter excluded by studies that fit the continuum-subtracted line profiles, because the continuum photons are also involved in resonant scattering and can dominate the normalized profile for low-EW_i cases. §.§ Profile fitting Our profile-fitting approach draws attention to ambiguity about the appropriate continuum level for normalization. When a DLA system is present in the spectrum, the underlying continuum could be the total stellar continuum (red lines in panel a of Fig. <ref>), and thus, the normalized spectrum is: I^EW_λ = (f^Lyα_λ + f^cont_λ) / (f^cont_λ), or the residual stellar continuum in the bottom of DLA system (red line in panel b) and thus, the normalized spectrum is: I^EW_λ = (f^Lyα_λ + f^cont_λ× (1-f_C)) / (f^cont_λ× (1-f_C)), where f^Lyα_λ is the emission line, f^cont_λ the best-fit total stellar continuum (see Sec. <ref>), f_C the covering fraction of DLA system. The choice of the underlying continuum will change the equivalent width of the line and, thus, the contribution of continuum photons on the emergent profile. Here we perform profile fittings, assuming each continuum level in turn, and then discuss the results. We present the best-fit spectra in Append. <ref>. We also present the best-fit model parameters of the second profile fitting in Table. <ref>. The fitted parameters somewhat degenerate with each other. For example, in the case of outflowing shells, <cit.> demonstrate that various combinations of shell velocity, column density, temperature, and redshift can produce very similar line profiles, for example, (v_exp, log N_HI, log T, 0), and ∼ (2v_exp, log N_HI-0.5 dex, log T+1 dex, Δ v). Nonetheless, the spectra generated by these parameters show clear differences at the red peak and our high-S/N spectra should be able to distinguish between the degeneracies. §.§.§ First attempt: total stellar continuum Overall, the quality of the first fitting using the total stellar continuum (Eq. <ref>) is quite good, given the simplicity of the model. However, in a subset of spectra, the results are unsatisfactory, especially J0938+5428, J0944+3442, J1044+0353, J1119+5130, J1144+4012, J1416+1223, J1521+0759, as presented in Fig. <ref>, of which the best-fit spectra show a very sharp dip around zero velocity compared to the observed profile. Looking at their original spectra (see. Fig. <ref>), we find that all these poorly fitted profiles correspond to spectra that show significant DLA systems compared with the successful sample. 
This result motivated us to investigate whether the sharp dips might be caused by an inappropriate underlying continuum, which underestimated the EW spectra I^EW_λ. Thus, we performed a second profile fitting using the residual stellar continuum as described by Eq. <ref>. §.§.§ Second attempt: residual stellar continuum In Fig. <ref>, we present the best-fit models for profiles normalized by the residual stellar continua (1-f_C). For the unsatisfactory sample in the first attempt, normalizing the spectra by the residual stellar continuum significantly improved the best-fit results. The sharp dips seen in the models of Sec. <ref> no longer exist in the new model spectra. In Fig. <ref>, we compare the reduced χ^2 for the first and second attempts. Clearly, most results are significantly improved if adopting the normalization of the residual stellar continuum. Thus, we can conclude that the dip was caused by an inappropriate continuum level. In further analysis, the first attempt of fitting will not be considered. For the galaxies without DLA systems, f_C = 0, the residual continuum rises to the level of the total continuum. It is therefore not surprising that every profile is successfully fitted when the residual continuum is used. We conclude that the residual continuum, 1 - f_C, is the more physical normalization for the emergent emission line. In other words, the DLA covering fraction f_C gives a good indication of the fraction of the intrinsic emission that is blocked by the high-column density clouds. §.§.§ Implication: Scattering Outside COS Aperture Reveals Low-N_HI Channels We have shown that successful radiative transfer modeling of CLASSY spectra, in the context of the shell model, requires: (1) separating the emission profile from the DLA system, and (2) normalizing the emission by the leaked continuum, i.e. the residual flux in the DLA system. This approach divides the COS aperture into two groups of sightlines, hereafter channels, distinguished by their column density. In the schematic picture of a thin shell, these two channels represent clouds and the intercloud medium. More generally, for the targets with C_f > 0, the photons entering the high-N_HI channel do not emerge from the galaxy at radii within the COS aperture. If they did, then the best continuum normalization would be the total stellar continuum, which is inconsistent with our fitting results. The photons entering the high-N_HI channel must be scattered to radii larger than the COS aperture before they escape. The alternative is that they are absorbed by dust grains which seems less likely for two reasons. Most CLASSY galaxies whose COS spectra detect DLA systems have low metallicities and are relatively dust poor. In addition, substantial amounts of dust in the scattering clouds would boost the transmitted equivalent width <cit.>, but we do not measure unusually large EW. When the emission is separated from the DLA system, what do the shell model parameters fitted to the emission component represent? Perhaps the line photons entering low-N_HI channels scatter off both the low N_HI clouds and the walls of the DLA channels. In the limit of no intercloud medium, the kinematics of the dense clouds would determine the shape of the line profile <cit.>, so we might expect the kinematics of both the low- and high-N_HI channels to impact the profile. 
If photons entering DLA sightlines are scattered outside the spectroscopic aperture, then vignetted apertures may have one advantage, namely providing a direct view of the properties in low-N_HI channels. §.§ Discrepancies between the Shell Model and Observations We have presented that whether the shell model can well-fit the observed profile is critical to infer the ISM properties. However, the three discrepancies reported in <cit.> (see also Sec. <ref>) might suggest a limited physical meaning of the model parameters. These discrepancies are also observed in the CLASSY sample with high significance, as shown in Fig. <ref> (black circles). The best-fit redshifts are always larger by 0 – 200 than the spectroscopic redshift, consistent with <cit.>. One possible origin of the discrepancies is the degeneracies between the model parameters, suggested in <cit.>. To test this scenario and gain more insight into the discrepancies, we perform a third profile fitting following <cit.> which constrains the range of redshift parameters to break the degeneracies. §.§.§ Third attempt: constraining the redshift The CLASSY redshifts derived from UV nebular lines agree well with those derived from optical lines; the standard deviation of velocity difference is ∼22 km s^-1 <cit.>. A spatial offset between the scattered emission and the optically-thin emission lines would introduce an additional redshift error if, and only if, the offset were along the dispersion axis of the spectrograph. Based on the radius of the unvignetted aperture (04), non-perfect alignment could shift the wavelength scale by as much as ± 44 km s^-1. For a redshift-constrained fit, we adopted a narrow Tophat probability distribution of width ± 44 km s^-1 as the prior on redshift. The best-fit spectra are presented in Fig. <ref>. In contrast, in the second attempt at profile fitting (Sec. <ref>), we adopted a Gaussian prior on redshift, and this broad distribution with σ(z_tlac) = 120 serves as the unconstrained fit. §.§.§ Can constrained fitting alleviate the discrepancies? The redshift differences are apparently improved when adopting the constrained fitting (red circles in Fig. <ref>). This is because the constrained redshift prior sets a hard limit of the difference to be 44 . On the other hand, comparing the best-fit profiles of constrained fitting (see Fig. <ref>) with those of unconstrained fitting (see Fig. <ref>), it is hard to distinguish the difference between them by visual inspection. We compare the reduced χ^2 of two profile fittings in Fig. <ref>, which shows that the results of constrained profile fitting are slightly worse than those of unconstrained profile fitting, but still acceptable[ We notice two best-fit spectra (J1112+5503, J1323-0132) of constrained profile fitting are improved compared with the unconstrained profile fitting. This might be because the unconstrained profile fitting for these two objects is trapped in a local maximum of the likelihood.]. Thus, our test confirms that adopting a constrained redshift prior for profile fitting can somewhat alleviate the redshift discrepancy observed in previous works <cit.>. We present the best-fit parameters of the third profile fitting in Table <ref>. However, the best-fit redshift remains systematically larger than the spectroscopic redshift as most of the red circles are still below zero velocity. This indicates that the constrained fitting does not fully resolve the observed discrepancies. We return to this topic in Sec. 
<ref>, where we combine the comparison between the shell velocity and spectral measurements of the outflow velocity. We do not discuss the line width discrepancy in this work because a clumpy model is needed to resolve it. By comparing the profiles generated by the uniform shell model and a clumpy model, <cit.> and <cit.> find that a larger line width is always required for the shell model to produce a profile similar to that of the clumpy model. The intrinsic difference between the two models is that the clumpy model includes the turbulent velocity dispersion of the clumps while the shell model does not. Thus, the line width of the shell model needs to be artificially broadened to compensate for the omission of turbulent motion. § PROPERTIES OF THE NEUTRAL ISM In this section, we discuss the relationship between the H i column densities inferred from the absorption and emission components of the line profile. We then discuss indirect evidence for LyC leakage. Finally, we return to the problem of why the shell model systematically misrepresents outflow properties, finding that the problem lies in the spectroscopic aperture. §.§ Structure of the ISM in CLASSY galaxies In Sec. <ref>, we found evidence that the neutral ISM consists of several components with different column densities. The DLA system requires high-N_HI clouds with N_HI>10^20 cm^-2. In Sec. <ref>, the fitting revealed that the observed Lyα emission line requires low-N_HI holes with 10^18<N_HI<10^20 cm^-2. Combining these two results demonstrates the existence of sightlines with different H i column densities in individual galaxies. We have argued that scattering of a significant fraction of the Lyα photons out of the COS aperture makes the high-N_HI channels visible via absorption, whereas their damping profiles would be filled in by scattered Lyα emission in spectra obtained through larger apertures. Apparently, the halos of many CLASSY galaxies are much larger than the COS aperture, and the scattering of Lyα photons out of the COS aperture provides a unique opportunity to describe the structure of the neutral ISM, as we show here. New insight into how LyC radiation escapes from local analogs of EoR galaxies may be obtained by comparing the structure of the ISM in hydrodynamical simulations to the column density distribution we derive. Feedback from massive stars is widely believed to shape the pathways for LyC escape, but the mechanism is debated. For example, <cit.> argue that positive feedback, essentially propagating star formation triggered by the mechanical feedback from massive stars, is essential to shift the production of LyC radiation away from the densest region of a galaxy. In contrast, in H ii regions too young to have produced supernova explosions, turbulence driven by ionization fronts may open channels for LyC escape <cit.>. One difference between these two mechanisms is the size of the channels. Whereas the channels opened by turbulence are individually small, the low-N_HI bubbles driven by mechanical feedback have scales reaching hundreds of pc <cit.>. Thus, the size of the channels provides insight of particular interest for understanding the escape pathways. In this section, we adopt the column density estimates from the third profile fitting (see Sec. <ref>), because it incorporates more constraints from the observations. However, adopting the estimates from the second profile fitting does not change the conclusions of this section. §.§.§ Column Density Distribution of Neutral ISM Fig.
<ref> compares the distribution of low-N_HI channels returned by the shell model fits and the high-N_HI column densities measured from the damping wing absorption. In the top panel, the histograms are normalized by the total number of galaxies showing a DLA system or a Lyα emission line, respectively. Their combined distribution has two peaks: one at N_HI≈ 10^19 cm^-2, which represents the paths of the escaping Lyα photons[If adopting the N_HI from the second profile fitting, the peak shifts lower by 0.4 dex.], and a second peak representing the typical DLA system at N_HI≈ 10^21 cm^-2. We recognize that the DLA sightlines and the pathways of the scattered Lyα emission select specific channels through a turbulent, multiphase ISM. Nonetheless, their combined distribution may represent a large fraction of all sightlines because we found that these components cover complementary fractions of the UV continuum (see Sections <ref> and <ref>). We weight the column densities by the covering fraction of each system in the middle panel of Fig. <ref>. This normalization indicates how many sightlines are covered by the low-N_HI or high-N_HI paths. After accounting for the covering fraction, the peak of the distribution of low-N_HI paths shifts to lower column density; the lower column densities have higher weights, i.e., a larger covering fraction of low-N_HI paths. In other words, the galaxies with only a Lyα emission line observed have lower column densities than those showing Lyα emission in the bottom of a DLA system. In the bottom panel, we present the combined distribution of column densities in galaxies with only Lyα emission, only a DLA system, or Lyα emission in the bottom of a DLA system. Similar to the middle panel, the distributions are weighted by the covering fraction. Clearly, the column densities increase with the presence of a DLA system, consistent with the middle panel. Overall, however, the distribution remains bimodal, consistent with the argument that the distribution includes a large fraction of all sightlines.
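The weighting just described amounts to a covering-fraction-weighted histogram of the column densities. A minimal sketch, written for the galaxies that show both components and using argument names of our own, is:

import numpy as np

def weighted_logN_histogram(logN_low, logN_high, f_C, bins=None):
    """Covering-fraction-weighted distribution of H i column densities.

    logN_low  : log N_HI of the low-N_HI channels (shell-model fits)
    logN_high : log N_HI of the DLA systems (Voigt fits)
    f_C       : DLA covering fraction per galaxy; the low-N_HI channel is
                weighted by 1 - f_C and the DLA by f_C
    """
    if bins is None:
        bins = np.arange(17.0, 22.25, 0.25)
    h_low, _ = np.histogram(logN_low, bins=bins, weights=1.0 - np.asarray(f_C))
    h_high, _ = np.histogram(logN_high, bins=bins, weights=np.asarray(f_C))
    return bins, h_low, h_high, h_low + h_high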
At a qualitative level, the bimodal distributions in Figure <ref> confirm a structural similarity between the ISM in CLASSY galaxies and the ISM in hydrodynamical simulations focusing on the star – gas interplay <cit.>. In detail, however, we recognize several quantitative differences. §.§.§ Column Density Distribution in Simulations In the H ii region simulations of <cit.>, turbulence driven by ionization fronts creates a bimodal distribution of column densities. In their Figure 6, the higher column-density peak covers N_HI values similar to our Figure <ref>. The simulated column density distribution actually reaches a minimum around 10^19 cm^-2, however, right where Figure <ref> shows a maximum. The lower column-density peak is offset to 10^17 cm^-2 in the simulated distribution. These simulations zoom in on an individual H ii region, and it is possible that placing the H ii region in a more realistic galactic environment would shift the distribution. Comparing the histogram in Fig. <ref> to those in Fig. 11 of <cit.>, we find the high-N_HI gas spread over a similar range in column density. In those simulations, the fraction of high-N_HI sightlines is sensitive to galaxy mass; for their 10^7 – 10^8 M_⊙ sample, the fraction of sightlines with high N_HI relative to the total H i sightlines is about one-third as large as that seen in Fig. <ref>. Since their histograms exclude the gas within 0.2 R_vir of the starburst, it is possible that the addition of the starburst region would eliminate, or at least mitigate, the discrepancy. Another difference is the column density of the lower-density peak. This peak is seen at N_HI ≈ 10^18 – 10^20 cm^-2 in CLASSY, whereas <cit.> find the low-N_HI channels spread, primarily, over the N_HI ≈ 10^16 – 10^18 cm^-2 range. This result may indicate that the feedback in <cit.> is too efficient and removes too much neutral hydrogen. Integral-field spectroscopy is clearly needed to address two observational biases. The histograms in Figure <ref> combine measurements made on different physical scales because the physical size of the aperture changes with galaxy distance. It is not fully understood how the aperture affects the column density derived by shell-model fitting. In addition, we emphasize that the lowest column-density sightlines may be missing from Figure <ref>. The shell model returns a column density that represents the total column of clouds plus an intercloud medium <cit.>; it follows that the lowest (and highest) column density sightlines may not be represented in Figure <ref>. The low-N_HI channels may therefore include lower column-density pathways, and we aim to understand whether CLASSY galaxies have sightlines optically thin to LyC radiation. §.§ Pathways for LyC Leakage In this section, we investigate the LyC-thin sightlines[Column densities lower than 10^18 cm^-2 correspond to LyC escape fractions ≳1% <cit.>.] with N_HI<10^18 cm^-2 in the CLASSY sample by analyzing the positive residual trough fluxes and small peak separations. §.§.§ Peak Separation, Trough Flux, and Red Asymmetry Peak separation is a good, empirical tracer of LyC escape <cit.>, and the shell model provides a theoretical basis for this relation <cit.>. In galaxies where there are few holes through which LyC can escape (low LyC leakage), the scattered Lyα photons traverse optically thick channels, leading to a broad peak separation, whereas in galaxies with high LyC leakage, the density-bounded channels result in a small peak separation. However, the peak separation does not distinguish how the Lyα photons escape <cit.>, as many small holes in a turbulent medium can produce a narrow peak separation just like a large, wind-blown cavity. The asymmetry parameter A_f helps to quantify the multiphase nature of the turbulent H ii regions. It was originally introduced by <cit.> to measure the attenuation imprinted by the intergalactic medium at high redshift. Here, we apply it in a different context recently introduced by <cit.>. The two dominant types of escape (single flight or excursion) tend to produce a symmetric line. Thus, when the medium is dominated either by ionization- or density-bounded channels, as in the blue or gray region in Fig. <ref>, the asymmetry of the emergent line is low. However, when the two channels coexist, as in the red region in Fig. <ref>, the asymmetry is high. In Fig. <ref>, we plot peak separation against red peak asymmetry. We divide the diagram into three distinct regions: (gray) low LyC leakage, (red) significant leakage through low-N_HI channels (ionization-bounded, f^LyC_esc>10%), and (blue) significant leakage through large holes (density-bounded, f^LyC_esc>10%). The boundaries come from Figure 13 of <cit.>, which shows these regions in the Δ v_Lyα – f_esc^LyC and A_f – f_esc^LyC planes. We find that the strongest LyC leakers in CLASSY are the three galaxies – J0942+3547, J1323-0312, J1545+0858 – in the blue region. The Lyα profiles of these three galaxies also show residual fluxes at the trough: F_trough/F_cont= 1.36±0.07, 19.62±0.42, 0.17±0.12, respectively.
Their net trough flux supports the conclusion that these galaxies have LyC-thin sightlines <cit.>. Based solely on their profile properties then, these galaxies are likely strong LyC leakers. When we compare their location in Fig. <ref> to directly confirmed LyC leakers, we find that their peak separation is as small as the smallest values measured among directly confirmed LyC leakers <cit.>. Many CLASSY galaxies are located in the gray-shaded region of Fig. <ref>, suggesting they have lower LyC escape fractions than the three galaxies in the blue zone. Three known leakers from <cit.> also lie in the gray zone of Fig. <ref>, just 100 above the blue – gray boundary. Based on this comparison to the properties of the known leakers, we suggest that the CLASSY sample contains more LyC leakers than the (blue) shaded region indicates. The <cit.> simulations zoom in on individual H ii regions, so perhaps the boundary might shift 100 in more realistic environments, i.e. those composed of multiple H ii regions. To gain insight into the empirical boundary, we inspect the positions of the other three CLASSY galaxies with net trough flux. Non-zero trough flux in the emission-line profile requires a low-N_HI column at the systemic velocity. We find three more galaxies with net trough flux, and each has < 400 km s^-1. The galaxies are J0944-0038, J1253-0312, and J1418+2102; their trough fluxes are F_trough/F_cont=0.47±0.29, 0.22±0.07, 1.46±0.17, respectively. We acknowledge that trough fluxes are sensitive to the spectral resolution, which is not precisely known for the emission. We therefore compared the trough width to the width of the red peak which represents an upper limit on the unresolved linewidth. Four galaxies (0942+3547, J1323-0312, J1545+0858, J1418+2102) show broader trough widths than the peak widths, so these troughs are clearly resolved. For the other two objects, J0944-0038 and J1253-0312, their trough widths are similar to peak widths, so higher-resolution spectroscopy might find that we possibly over-estimate their residual trough flux. Consequently, we identified at least four CLASSY galaxies containing density-bounded channels. We conclude that the empirical boundary between the blue and gray zones lies closer to a peak separation of 400 , roughly 100 larger than the blue-gray boundary suggested by the simulations. Based solely on the properties of line profiles, we conclude that four to six of the CLASSY galaxies (highlighted by red squares in Fig. <ref>) are strong LyC leakers. Their red peaks have a low asymmetry, A_f < 3, which indicates they are best described as density-bounded galaxies. In contrast, even though they span the same range of peak separations, half of the directly confirmed leakers have A_f > 3, suggesting their leakage is through ionization-bounded channels in a multiphase medium. §.§.§ Combining Perspectives from and O32 In the previous section, we have shown that the trough flux, peak separation, and red peak asymmetry converge at the same selection of galaxies with density-bounded holes in their neutral ISM. Here we examine the ionization structure of these galaxies, as measured by optical nebular emission lines, to reveal the underlying relation between LyC leaking channels and ionization. 
We adopt [O III] λ5007/[O II] λ3727 (O32) ratio, one of the most important ionization diagnostics <cit.>, where a high O32 ratio can indicate a density-bounded galaxy[Two of our three best candidates for density-bounded galaxies, J1323-0312 and J1545+0858, have the largest O32 ratios among the CLASSY sample (37.8 and 8.6, respectively). On the other hand, J0942+3547 has a lower O32 ratio of 2.6.] <cit.>. and O32 Intuitively, we expect a high escape fraction of photons from density-bounded galaxies. Yet, in the top panel of Fig. <ref>, the O32 ratio shows no correlation with (Spearman coefficient ∼ 0.04), contradicting the correlation observed among high-redshift galaxies <cit.> and among local dwarf galaxies <cit.>. We argue here that the lack of correlation in our sample might result from the scattering of photons outside the COS aperture, an effect that we argued produces DLA systems in many CLASSY spectra (see Sec. <ref>). The slits used to observe high-redshift galaxies in <cit.> typically subtend 5 to 10 kpc, much larger than the physical scale subtended by the COS aperture for the lowest redshift targets. Although the Lyman alpha Spectral Database <cit.> includes some low-redshift galaxies, the CLASSY sample has a lower median redshift than LASD, so scattering outside the COS aperture plausibly introduces a more serious bias. To test this explanation, we restrict the analysis to the subsample with UV radius < 04, the radius of the unvignetted COS aperture and find a positive correlation; among the yellow points in Fig. <ref>, the Spearman coefficient of 0.22. However, the galaxy distance might not be the only factor influencing scattering outside the spectroscopic aperture. The escape fraction of higher redshift galaxies may also be significantly affected. In the top panel of Fig. <ref>, we overplot measurements for Green Pea galaxies at redshift 0.1 to 0.4 <cit.>. We add LyC leakers from <cit.> with extreme O32 ratios (ranging from 22 – 39). Although the joint sample has a similar redshift range as <cit.>, it also shows no correlation between and O32 ratio. A subset of the joint targets with a large O32 ratio has modest of ∼1%. Thus, using to probe the density-bounded channels should always be aware of those exceptions, not only the aperture loss. and O32 Consistent with previous studies <cit.>, the peak separation among CLASSY galaxies is anti-correlated with the O32 ratio, as shown in the bottom panel of Fig. <ref>. Excluding the galaxies with large UV radius (>04) does not change the correlation strength, and thus, we conclude that the peak velocity measurements are only weakly affected by the aperture loss. The profiles of LyC leakers with extreme O32 ratios of 22 – 39 from <cit.> show ≈ 250 , similar to the J1545+0858 and J0942+3547 in CLASSY sample, consistent with a minimum around 250 . The only data in Fig. <ref> with lower is our new data point for J1323-0312. The joint sample shows that high O32 galaxies always have narrow peak separations, while the low O32 galaxies spread a large range of . We argue that a high O32 ratio traces a large global covering fraction of LyC-thin sightlines, whereas the narrow peak separations appear when the covering fraction of LyC-thin sightlines in our direction is high. Variations in the direction of LyC-thin sightlines relative to our viewing angle, therefore, produce the scatter observed in the vs. O32 ratio diagram. 
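The correlation strengths quoted in this section are simple rank statistics. A minimal sketch of how they can be computed, with hypothetical input arrays and the 0.4 arcsec unvignetted COS radius used above as the compactness cut, is:

import numpy as np
from scipy.stats import spearmanr

def o32_rank_correlations(O32, fesc_lya, dv_sep, r50_arcsec, r_unvignetted=0.4):
    """Spearman correlations of O32 with the Lya escape fraction and the
    peak separation, for the full sample and for the compact subsample
    (UV half-light radius smaller than the unvignetted COS radius)."""
    out = {
        "fesc_all": spearmanr(O32, fesc_lya),
        "dvsep_all": spearmanr(O32, dv_sep),
    }
    compact = np.asarray(r50_arcsec) < r_unvignetted
    out["fesc_compact"] = spearmanr(np.asarray(O32)[compact],
                                    np.asarray(fesc_lya)[compact])
    return out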
The correlation between Δ v_Lyα and O32, and the non-correlation between f^Lyα_esc and O32, might hint that the different features probe Lyα photons from different channels. This speculation is in line with the simulations of <cit.>. When the pathways for LyC escape have a low covering fraction, the majority of Lyα photons still need to escape through low-N_HI (≈ 10^18 – 10^20 cm^-2) channels, and the Lyα emission line emerges with a broad width and a large peak separation. Meanwhile, a smaller fraction of Lyα photons will pass through the remaining columns which are optically thin to the LyC (<10^18 cm^-2), and these sightlines contribute Lyα emission with narrow lines and a small peak separation <cit.>. It follows that the transition from ionization-bounded leakers to density-bounded leakers is accompanied by a change in the shape of the Lyα profiles (namely, the relative strength of the narrow and broad lines). As the covering fraction of LyC-thin holes increases, more of the emergent flux is contributed by the component with narrow peaks. When the intensities of the two narrow peaks are larger than those of the broad peaks, these LyC-thin channels determine Δ v_Lyα, while the covering fraction of channels with N_HI > 10^18 cm^-2 remains significant and continues to produce broad peaks with a wider separation. Thus, in the case of a significant covering fraction of LyC-thin holes, the peak separation probes the H i column in the LyC-thin holes, and we expect O32 to increase as Δ v_Lyα decreases. However, in the case of no or small LyC leakage, the Lyα photons that pass through the columns with N_HI > 10^18 cm^-2 dominate the Lyα profile (peaks and wings). §.§ Outflow Velocity of Neutral ISM <cit.> described an adiabatic galactic wind that could reach speeds of roughly 1000 km s^-1. Theoretical models that explain the relation of this hot phase to the widely observed cool outflows have long been a subject of study <cit.>. Photoionization modeling of the LIS absorption lines in CLASSY spectra indicates the outflowing component traces gas in which hydrogen is mostly ionized <cit.>. Yet, the combined neutral and molecular phases transport as much (or more) mass than does the warm-ionized phase in the outflow from M82 <cit.>. Since Lyα probes the portion of the outflow where hydrogen is neutral, outflow detections using Lyα complement studies of the highly-ionized outflow. When Lyα photons scatter in outflowing gas, the resonance center will move blueward with respect to the rest-frame line center. Consequently, the outflow velocity is imprinted on the Lyα profile. Here, we suggest that the trough velocity v^Lyα_trough indicates the average outflow velocity v of the neutral clouds, where -v corresponds to the largest optical depth <cit.>. In this section, we first compare the trough velocity against the Doppler shift of the LIS lines. Then we compare the outflow speed of the neutral ISM in low-N_HI channels, the trough velocity v^Lyα_trough, to tracers of the high-N_HI clouds. Finally, adopting v^Lyα_trough as a direct measurement of the mean Doppler shift of the neutral gas, we revisit why radiative-transfer modeling is typically driven towards a shell velocity faster than v^Lyα_trough. §.§.§ Trough Velocity and LIS Velocity Resonance UV absorption lines, e.g., Si ii and C ii, have been extensively used to measure outflow speeds <cit.>. Here we focus on Si ii λ1260, which is well measured by the CLASSY collaboration. The Doppler shifts of Si ii in the CLASSY sample have been measured using two different methods.
<cit.> use a double-Gaussian profile to deblend the outflow component from the static ISM component of Si ii and find that the outflow component is mostly ionized. On the other hand, Parker et al. (in prep) fit a single-Voigt profile to determine the average velocity of all LIS absorbers. As the LIS lines can also arise from the neutral ISM, the Parker measurements should include the contribution of neutral ISM. Conceptually, if the LIS absorber is dominated by the static ISM, the Parker measurement, which is close to 0 , should be distinct from the Xu measurement. But if the LIS absorber is dominated by the outflow component, the Parker measurement should be similar to the Xu measurement. Fig. <ref> presents the comparisons of to both LIS outflow measurements derived by the two methods[J0808+3948 is excluded because its polycyclic aromatic hydrocarbon feature suggests it might be an AGN <cit.>.]. Directly comparing the two LIS outflow measurements in the top and bottom panels, we notice the positions of two objects (J1416+1223, J0938+5428) shift significantly. Parker et al. (in prep) derive a velocity close to 0 km s^-1, but the outflow velocities derived in <cit.> can reach several hundred km s^-1, suggesting the LIS absorber of these two galaxies are mainly static. Galaxies that shift between the two panels have substantial absorption at v=0, which we attribute to the static ism. Secondly, we see that the Si ii velocity measured by Parker et al. (in prep) shows a better agreement with , particularly the galaxies with both and DLAs (blue squares) in the top panel of Fig. <ref>. This suggests that in those galaxies, the Si ii absorbers (in both outflow and static) contain a significant fraction of neutral hydrogen, though <cit.> suggest that the Si ii in the outflow traces mostly ionized gas. However, looking at the galaxies with no DLAs (red circles in Fig. <ref>), their Si ii velocities disagree with in both two panels. The three most deviant circles (J0021+0052, J0926+4427, J1429+0643) show that their are close to 0 but the Si ii velocities are ≤ -200 . We find that their Si ii line profiles are dominated by the outflow component: they have very little absorption at the systemic velocity, so the velocity is not sensitive to the measurement method. This suggests that the Si ii absorption comes mostly from the ionized gas in these galaxies, similar to <cit.>. §.§.§ Trough Velocity and DLA Velocity In the top panel of Fig. <ref>, we compare the trough velocity and the velocity of high-N_HI clouds (i.e., DLA system velocity probed by O i absorption line). It is intriguing to see such a good agreement between these two independent measurements, suggesting that the low-N_HI channels have the same velocity as the high-N_HI channels. In the bottom panel of Fig. <ref>, we further find that DLA velocity agrees with Si ii velocity. Here, we include the galaxies, which do not have emission lines, as the circles. They are consistent with those galaxies which have both emission lines and DLA systems. This hints that, for the galaxies with DLA systems, the intrinsic reason for the correlation between Si ii velocity and is that the Si ii mainly traces the high-N_HI clouds and the high-N_HI clouds have similar velocity as the low-N_HI clouds. §.§.§ Revisiting Outflow Velocity Discrepancy In this section we discuss the outflow velocity discrepancy using the same profile fittings as Sec. <ref> and propose a new explanation of the discrepancies. 
Here we adopt the trough velocity v^Lyα_trough as the intrinsic outflow velocity since it traces the neutral ISM which scatters the Lyα photons. First, we directly compare the v^Lyα_trough measured based on the spectroscopic redshift with the outflow velocities v_tlac estimated by the shell model. In the top panel of Fig. <ref>, we present the comparison for both the second profile fitting (redshift-unconstrained) and the third profile fitting (redshift-constrained). Though a clear correlation between v^Lyα_trough and v_tlac can be seen, v_tlac is larger than v^Lyα_trough by 0 – 200 km s^-1 and 0 – 140 km s^-1 for the second and third fittings, respectively. We speculate that the reason for this discrepancy is a `redshift error' required by the model fitting. To test this idea, we shift our measurements to the fictitious reference frame chosen by the fitted redshift. The bottom panel of Fig. <ref> shows the measurements in the reference frames defined by the second and third fittings. We have shifted the measurements by v^Lyα_trough,z_tlac = v^Lyα_trough - (z_tlac - z_spec)× c, where c is the speed of light. The new correlations are significantly improved and close to the 1:1 relationship. Especially for the redshift-unconstrained fitting (second attempt), v^Lyα_trough,z_tlac and v_tlac agree well with each other. These results confirm that we should compare v^Lyα_trough and v_tlac in a common redshift frame. This also confirms that the outflow velocity and the redshift are coupled in the shell model: λ^Lyα_trough/[(1+z) λ_Lyα] - 1 = -v^outflow/c, where λ^Lyα_trough is the observed wavelength of the trough and λ_Lyα the rest-frame wavelength of Lyα. Once the redshift of the shell model is fixed, the model outflow velocity is also determined by the Doppler offset of the observed trough with respect to the model redshift. Thus, the redshift and outflow velocity discrepancies are "two sides of the same coin". The larger outflow velocity preferred by the shell model may hint that the observed B/R ratio is lower than the intrinsic B/R ratio. Moreover, as we discussed in Sec. <ref>, the observed B/R ratio can be biased by the aperture loss. This inspires us to connect the discrepancies to the aperture loss, and we therefore propose the following explanation. Since the aperture loss modifies the B/R ratio to a lower value and the B/R ratio is tightly anti-correlated with outflow velocity, to achieve the smaller observed B/R ratio, the shell model will suggest a larger outflow velocity. Meanwhile, a higher systemic redshift is required to match the trough velocity to the outflow velocity (Eq. <ref>). Thus, the best-fit redshift and outflow velocity from the shell model are larger than those observed from the spectra. The aperture loss has a non-negligible impact on the Lyα profile and should always be considered when interpreting Lyα profiles. §.§ A Schema of the Neutral ISM In this section, we summarize our interpretation of the ISM structure from the previous sections. We have demonstrated that the ISM in CLASSY galaxies is inhomogeneous, consisting of high-N_HI, low-N_HI, and even LyC-thin regions, based on the clear separation between the DLA and Lyα emission (see Sec. <ref>), the non-zero residual flux at the trough, and the small peak separations (see Sec. <ref>). In the left panel of Fig. <ref>, we plot a schema of the neutral ISM for illustration. For simplicity, we adopt a continuous shell model. The low-N_HI and high-N_HI paths are shown as light blue and dark blue, respectively. We also use two gray shades to indicate the halos missed due to the aperture effect. In the right panel, we zoom in to show the radiative transfer in a small slab.
The green lines indicate the photons and the gray lines indicate the continuum photons. Although the radiative process is highly non-linear and non-additive, the radiative transfer fitting results suggest that we can take the emission and DLA system apart. The DLA system can be well fitted by a partial-covering Voigt profile with a high-N_HI and the emission normalized by the uncovered continuum can be well fitted by the shell model with a low-N_HI. This clear separation between emission and DLA system indicates that the exchange between low-N_HI path and high-N_HI path should be negligible, as we discussed in Sec. <ref>. Only very few photons which are injected into one region can travel to another region and thus, the radiative processes in two different regions are independent. This is feasible because of two reasons: (1) the possibility of a photon traveling from low-N_HI path to high-N_HI path is very small, as most of which are just “reflected” by the surface between two channels <cit.>; (2) the photons including the underlying continuum photons which are injected into high-N_HI paths are mostly scattered to much larger impact parameters <cit.>, thus, most of which are missed due to the aperture effect and leave a DLA system. Thus, only photons escape through low-N_HI regions can be observed and the emergent profile is a combination of spectra from two regions, as illustrated by the green and gray lines in Fig. <ref>, and has a profile of emission in the bottom of DLA system. In the left panel of Fig. <ref>, we plot several low-N_HI channels in different directions. Although all of those low-N_HI channels can allow the escape of photons, only the channels exposed to the COS aperture (i.e., horizontal one in Fig. <ref>) can contribute to the observed emission line. Because for the photons which are initially injected into low-N_HI channels in other directions, they still need to penetrate the high-N_HI paths before reaching us. We have proposed a scenario that the aperture loss is responsible for those unexpected profiles of emission in the bottom of DLA system in the CLASSY sample. In this work, we also find that the DLA absorber (neutral gas in high-N_HI paths) has a similar systematic velocity as the neutral gas in the low-N_HI paths. However, the ionized gas, traced by the outflowing component of Si ii absorption line, has a generally larger velocity compared with the neutral gas in the low-N_HI paths. Using three LyC leakage diagnostics, we find that at least three galaxies in the CLASSY sample are LyC leaker candidates. Thus, in the right panel of Fig. <ref>, we use yellow to indicate the possible LyC-thin channels in the ISM, through which the photons can easily escape without much resonant scattering. By comparing the with O32 ratio, we conclude that the O32 ratio is tracing the covering fraction of LyC-thin channels, consistent with those known LyC leakers <cit.>. The covering fraction increases as the O32 ratio increases, and thus, the probability of observing small increases. § SUMMARY & CONCLUSIONS In this paper, we extracted high-resolution line profiles from CLASSY spectra of 45 EoR analogs. These HST COS/G130M spectra show a wide variety of profiles, including damped absorption, emission in damped absorption (DLA) profiles, P-Cygni profiles, and pure emission. 
We attribute the damped Lyα absorption to Lyα photons being scattered out of the spectroscopic aperture, and we argue that the especially large diversity among CLASSY Lyα profiles can be largely attributed to the large range of physical scales subtended by the COS aperture, from a little over 100 pc up to nearly 8 kpc. We separated the DLA and Lyα emission components of the profiles. Specifically, we adopted the precisely measured Doppler shifts of the O i absorption components as priors for the Doppler shift of each broad DLA profile, and we fitted the damped absorption with modified Voigt profiles. After subtracting the stellar continuum and the DLA profile, we modeled the Lyα emission profile and the appropriate underlying continuum using the shell model. For the first time, we measure the properties of the neutral shell traversed by the emergent Lyα emission, and the conditions in the high column density clouds, in the same sample of galaxies. For double-peaked Lyα emission line profiles, we defined the Doppler shift of the minimum between the two emission lines as the trough velocity, which we compared to the Doppler shifts of LIS absorption lines and the DLA. Our results are summarized below:
* The Lyα emission in the bottom of the DLA profile reveals the inhomogeneity of the ISM and the outflows. The DLA profile and Lyα emission line can be surprisingly well fitted by simply splitting a geometric covering factor between the high-column density sightlines and the lower-N_HI channels through which Lyα photons escape. This suggests little exchange between high- and low-N_HI paths. Combining the sightlines probed by Lyα emission with those producing damped absorption, the net distribution of column densities is bimodal and therefore qualitatively similar to the distributions predicted by numerical simulations of H i regions <cit.>. It is important to note, however, that this observed distribution is offset to higher N_HI compared with the simulations. This discrepancy could arise from gas on larger spatial scales than the simulations include, or from structural differences in the star-forming complexes; but, whatever its origin, an understanding of the offset will better inform our understanding of the channels through which not only Lyα but also LyC photons escape from galaxies.
* We find that the Lyα trough velocity matches the Si ii velocity in most galaxies with DLAs, suggesting that the Si ii absorbers in those galaxies are mainly in the neutral phase. However, for galaxies without DLA systems, the trough velocity is always smaller than the Si ii velocity, suggesting that Si ii traces a more ionized phase of the outflow, consistent with <cit.>. Thus, the Si ii absorbers are multi-phase, including neutral hydrogen in addition to the mostly-ionized phase. Combining the Lyα and Si ii measurements, we are able to identify the ionization of the Si ii absorbers. Our comparison also suggests that the trough velocity directly measures the average velocity of neutral gas in the static ISM and outflows.
* In spectra with a DLA, the trough velocity agrees well with the DLA velocity (O i velocity), suggesting that the high-N_HI clouds have kinematics similar to the low-N_HI clouds. Further, the Si ii velocity also agrees well with the DLA velocity, even for galaxies without Lyα emission. Thus, we conclude that Si ii mainly traces the neutral gas in high-N_HI columns if the galaxies show DLAs.
* Motivated by the numerical simulations of <cit.>, we combine the measurements of Lyα peak separation and red peak asymmetry in a diagnostic diagram that differentiates the type of channels for LyC leakage. Comparing the diagram with the known LyC leakers, we suggest that the boundary for distinguishing substantial leakage from small leakage is a peak separation less than ∼400 km s^-1. In the case of leakage, or equivalently a small peak separation, the red peak asymmetry parameter distinguishes holes, where A_f > 3, from the more symmetric profiles generated by full breaks. Six CLASSY galaxies are identified as density-bounded LyC leakers by this technique, agreeing with the selection based on net trough flux. The inferred properties of the LyC-thin sightlines depend on galaxy orientation, whereas the [O iii]/[O ii] ratio offers a sightline-independent perspective. We confirm the presence of an inverse relation between peak separation and the [O iii]/[O ii] ratio, as has been noted previously <cit.>.
* Similar to <cit.>, we find that the fitted redshift is always larger than the spectroscopic redshift and the fitted outflow velocity is larger by 10 – 200 km s^-1 than the Lyα trough velocity. The connection between the trough velocity and the outflow velocity offers new insight into the origin of those discrepancies, which we suggest are not adequately explained by parameter degeneracies <cit.>. We argue instead that aperture vignetting is the primary source of the discrepancies. The COS aperture vignettes the blue-shifted peak more than the red-shifted peak, resulting in a lower blue-to-red peak ratio. To match the lower blue-to-red peak ratio, the radiative transfer model requires a higher outflow velocity and thus a larger redshift to match the outflow velocity to the trough velocity.
Our results underline the sensitivity of Lyα profiles to aperture vignetting. The COS aperture not only excludes a large fraction of the Lyα photons, it modifies the Lyα profile. Like many CLASSY targets, the composite spectra of star-forming galaxies at z ∼ 1.8 – 3.5 show DLA systems as well <cit.>. An important difference, however, is that the typical slit width used in ground-based spectroscopy, 1.2 arcsec, corresponds to ∼10 kpc. The COS aperture subtends a comparable physical scale only for the most distant Lyman Break Analogs in CLASSY, and their COS spectra do not show DLAs. Nonetheless, our analysis suggests the DLAs appear in the z∼ 2 spectra because the Lyα escape occurs on spatial scales larger than the slit width. An important implication of this paper is that aperture vignetting could strongly affect recent JWST observations of EoR galaxies using the Near Infrared Spectrograph (NIRSpec) slit mode, whose slit width is just 0.2 arcsec, corresponding to only ∼ 1 kpc. In this paper, we leveraged these aperture effects, recognizing an opportunity to characterize the properties of the low-N_HI channels and high-N_HI clouds in the same set of galaxy sightlines. To fully understand the connection between the observed Lyα profile and LyC leakage, radiative transfer simulations will need to predict the spatial variations in the Lyα profile shape. The extracted Lyα profiles used in this work, including the DLA profiles and the best-fit shell model spectra, can be downloaded from the CLASSY High Level Science Products database, which is developed and maintained at STScI, Baltimore, USA.[ Data will appear at <https://archive.stsci.edu/hlsp/classy> after acceptance by the ApJ.
The data product can be found here (<https://drive.google.com/drive/folders/1NCUyr1vQ10z4BZuGBqsBuIjL0dWJnmZ1?usp=sharing>) during the review period.] § ACKNOWLEDGEMENTS The CLASSY team is grateful for the support that was provided by NASA through grant HST-GO-15840, from the Space Telescope Science Institute, which is operated by the Associations of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. CLM thanks the NSF for support through AST-1817125. BLJ thanks support from the European Space Agency (ESA). The CLASSY collaboration extends special gratitude to the Lorentz Center for useful discussions during the "Characterizing Galaxies with Spectroscopy with a view for JWST" 2017 workshop that led to the formation of the CLASSY collaboration and survey. HST (COS) astropy (The Astropy Collaboration 2013, 2018), CalCOS (STScI), python § BEST-FIT SPECTRA Fig. <ref> and <ref> present the best-fit spectra obtained using approaches described in Sec. <ref> and <ref>, respectively. aasjournal c c c c c c c c c c c c Measurements 900pt object f_Lyα log L_Lyα EW_Lyα f^Lyα_esc A_f Δ v_Lyα v^blue_Lyα v^red_Lyα v^trough_Lyα f^blue_Lyα f^red_Lyα 10^-15 erg s^-1 cm^-2 erg s^-1 Å % km s^-1 km s^-1 km s^-1 km s^-1 10^-15 erg s^-1 cm^-2 10^-15 erg s^-1 cm^-2 (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) 12cDouble Peaks J0021+0052 144.56±1.43 42.5 29.04±0.29 25±0.45 2.91±0.04 571±54 -419±53 152±12 -27±20 8.4±0.5 136.4±1.4 J0808+3948 64.31±0.52 42.1 15.02±0.12 27±0.22 1.29±0.12 507±28 -470±26 37±9 -312±6 3.2±0.2 61.2±0.5 J0926+4427 64.64±0.56 42.8 40.65±0.35 35±0.67 1.42±0.13 427±52 -203±45 224±25 -47±17 7.5±0.2 57.1±0.5 J0938+5428 21.14±0.52 41.7 4.06±0.10 3.2±0.083 1.93±0.19 669±52 -296±41 373±31 116±32 7.4±0.3 13.8±0.4 J0942+3547 97.61±0.31 40.7 17.95±0.06 18±0.093 1.53±0.15 267±16 -113±14 154±7 -16±7 14.6±0.3 82.6±0.4 J0944-0038 20.47±0.43 39.2 9.95±0.21 4.1±0.085 0.86±0.35 416±65 -150±61 267±23 34±82 3.4±0.7 17.1±0.8 J0944+3442 0.43±0.09 38.6 0.56±0.11 0.82±0.17 1.71±0.59 531±99 -273±67 257±76 -20±150 0.1±0.1 0.4±0.1 J1016+3754 146.06±1.74 39.8 15.42±0.18 12±0.16 1.51±0.21 404±45 -230±43 175±13 -34±22 12.2±0.9 133.9±1.4 J1024+0524 54.13±0.50 41.1 8.72±0.08 5±0.097 2.87±0.08 464±36 -338±35 126±8 -64±18 2.1±0.3 52.1±0.5 J1025+3622 53.28±0.62 42.3 21.95±0.25 17±0.25 1.70±0.15 469±43 -263±39 206±17 -100±25 4.5±0.2 48.8±0.6 J1044+0353 1.55±0.08 38.8 0.90±0.05 0.096±0.005 1.84±0.36 425±119 -293±111 132±45 -123±98 0.2±0.1 1.3±0.1 J1105+4444 2.52±0.18 39.4 0.53±0.04 0.072±0.0051 ... 
999±109 -517±76 482±76 -204±244 0.4±0.2 2.2±0.2 J1119+5130 2.30±0.16 38.1 0.71±0.05 0.78±0.053 4.20±0.27 649±76 -337±67 312±34 -67±108 1.2±0.2 1.1±0.1 J1148+2546 12.34±0.23 40.7 5.62±0.10 7.4±0.14 1.92±0.46 717±97 -450±73 268±63 -132±58 0.4±0.1 12.0±0.2 J1200+1343 80.54±0.44 41.9 56.87±0.31 9.3±0.11 2.68±0.05 530±21 -394±20 136±5 -48±20 9.0±0.2 71.6±0.4 J1253-0312 338.94±1.21 41.6 32.14±0.12 4.4±0.032 1.48±0.11 433±14 -214±11 219±8 -46±9 37.7±0.5 301.1±1.1 J1323-0132 190.14±0.56 41.3 81.19±0.24 21±0.1 1.78±0.06 168±12 -95±12 74±1 -35±7 45.0±1.5 142.7±1.5 J1416+1223 11.70±0.47 41.7 3.26±0.13 3±0.13 1.75±0.28 613±91 -315±80 298±43 35±51 6.0±0.4 5.7±0.3 J1418+2102 26.76±0.22 39.7 19.14±0.16 2.4±0.021 1.10±0.18 403±26 -122±25 281±9 25±9 7.6±0.2 19.0±0.2 J1428+1653 38.94±0.82 42.6 12.32±0.26 27±1 3.34±0.34 416±84 -328±76 89±38 -109±66 2.6±0.3 36.4±0.8 J1429+0643 67.53±0.90 42.8 33.22±0.44 11±0.18 3.56±0.15 545±72 -292±67 253±25 54±33 15.2±0.5 52.4±0.7 J1448-0110 0.43±0.13 38.8 0.09±0.03 0.027±0.0083 ... 395±153 -354±124 40±91 -283±134 0.1±0.1 0.3±0.1 J1521+0759 27.21±0.64 41.8 4.85±0.11 12±0.52 3.27±0.09 404±48 -247±45 157±16 -68±29 -1.0±0.4 28.4±0.7 J1545+0858 160.58±1.02 41.7 29.26±0.19 7.5±0.048 2.75±0.08 284±14 -122±10 163±10 -27±15 6.4±0.3 154.2±1.0 12cSingle Peak / P-Cygni J0036-3333 175.93±1.20 41.1 7.13±0.05 9.2±0.063 ... ... ... 100±5 ... ... 175.9±1.2 J0940+2935 0.58±0.06 36.8 0.34±0.03 0.46±0.044 ... ... ... 273±57 ... ... 0.6±0.1 J1112+5503 12.43±0.45 41.8 5.45±0.20 3.8±0.15 ... ... ... 143±47 ... ... 12.4±0.4 J1144+4012 2.50±0.21 41.0 1.75±0.15 1.6±0.14 ... ... ... 318±70 ... ... 2.5±0.2 J1157+3220 389.02±2.53 41.1 21.63±0.14 23±0.2 ... ... ... 50±13 ... ... 389.0±2.5 J1225+6109 0.50±0.21 36.8 0.04±0.02 0.016±0.0065 ... ... ... 52±63 ... ... 0.5±0.2 J1314+3452 0.85±0.06 37.2 0.21±0.01 0.027±0.0018 ... ... ... 210±47 ... ... 0.8±0.1 J1359+5726 72.71±0.68 41.2 8.71±0.08 6.7±0.081 ... ... ... 148±16 ... ... 72.7±0.7 J1525+0757 67.41±1.21 42.0 14.92±0.27 16±0.43 ... ... ... 82±7 ... ... 67.4±1.2 J1612+0817 36.97±0.83 42.3 12.71±0.29 7±0.18 ... ... ... 114±14 ... ... 37.0±0.8 (1) object name; (2) flux; (3) luminosity; (4) equivalent width; (5) escape fraction; (6) red peak asymmetry; (7) peak separation; (8) blue peak velocity offset; (9) red peak velocity offset; (10) trough velocity offset; (11) blue peak flux; (12) red peak flux. We note that the luminosity distances of some galaxies used in this work are different with those in <cit.> because of the correction of cosmic flow. The properties (e.g., stellar mass, star formation rate) of those galaxies which rely on the luminosities are scaled accordingly. c c c c c c c c c Ancillary data object f_1500 M_1500 Z_neb log M_⋆ E(B-V) O32 v^outflow_Si II r_50 10^-15 erg s^-1 cm^-2 M_⊙ km s^-1 (1) (2) (3) (4) (5) (6) (7) (8) (9) J0021+0052 3.94 -20.55 8.17±0.07 9.09^+0.18_-0.38 0.13±0.006 2.0±0.1 231^+77_-77 0.25 J0036-3333 16.60 -18.34 8.21±0.17 9.09^+0.26_-0.23 0.30±0.012 1.1±0.1 157^+22_-22 0.28 J0127-0619 4.04 -13.58 7.68±0.02 8.63^+0.18_-0.15 0.48±0.006 1.1±0.1 ... 0.15 J0144+0453 1.87 -12.63 7.76±0.02 7.52^+0.24_-0.29 0.04±0.030 2.1±0.1 48^+16_-16 3.54 J0337-0502 7.99 -16.60 7.46±0.04 7.01^+0.24_-0.21 0.05±0.006 6.2±0.2 ... 1.62 J0405-3648 0.96 -10.90 7.04±0.05 6.60^+0.28_-0.28 0.11±0.005 0.6±0.1 ... 
6.43 J0808+3948 3.42 -20.23 8.77±0.12 9.12^+0.30_-0.17 0.24±0.070 0.8±0.1 646^+65_-65 0.08 J0823+2806 3.85 -18.86 8.28±0.01 9.38^+0.33_-0.19 0.21±0.004 2.0±0.1 136^+45_-45 0.28 J0926+4427 1.14 -20.64 8.08±0.02 8.76^+0.30_-0.26 0.10±0.008 3.1±0.1 353^+52_-52 0.23 J0934+5514 15.10 -14.05 6.98±0.01 6.25^+0.15_-0.20 0.07±0.007 8.7±0.1 112^+37_-37 1.53 J0938+5428 3.56 -20.53 8.25±0.02 9.15^+0.18_-0.29 0.13±0.006 1.9±0.1 215^+72_-72 0.28 J0940+2935 1.45 -11.14 7.66±0.07 6.80^+0.23_-0.40 0.06±0.010 0.7±0.1 102^+34_-34 3.06 J0942+3547 3.80 -16.30 8.13±0.03 7.56^+0.21_-0.29 0.06±0.011 2.6±0.1 97^+26_-26 0.33 J0944-0038 1.40 -13.07 7.83±0.01 6.89^+0.44_-0.25 0.16±0.010 2.9±0.1 64^+21_-21 2.34 J0944+3442 0.69 -15.06 7.62±0.11 8.19^+0.40_-0.23 0.16±0.013 1.4±0.1 ... 3.74 J1016+3754 7.07 -14.43 7.56±0.01 6.77^+0.27_-0.22 0.07±0.012 4.6±0.2 116^+31_-31 1.52 J1024+0524 4.50 -18.20 7.84±0.03 7.88^+0.37_-0.24 0.10±0.016 2.1±0.1 94^+12_-12 0.40 J1025+3622 1.81 -20.30 8.13±0.01 8.87^+0.25_-0.27 0.09±0.006 2.4±0.1 155^+24_-24 0.35 J1044+0353 1.70 -15.25 7.45±0.03 6.84^+0.41_-0.26 0.08±0.007 6.8±0.1 52^+12_-12 0.38 J1105+4444 4.68 -17.28 8.23±0.01 8.98^+0.29_-0.24 0.17±0.005 2.0±0.1 115^+23_-23 4.11 J1112+5503 1.91 -20.45 8.45±0.06 9.59^+0.33_-0.19 0.23±0.016 0.9±0.1 349^+107_-107 0.20 J1119+5130 2.63 -13.54 7.57±0.04 6.81^+0.15_-0.28 0.10±0.008 2.0±0.1 65^+22_-22 2.18 J1129+2034 1.87 -13.62 8.28±0.04 8.20^+0.37_-0.27 0.23±0.011 1.8±0.1 51^+17_-17 0.38 J1132+5722 2.57 -13.69 7.58±0.08 7.32^+0.23_-0.26 0.10±0.008 0.8±0.1 ... 0.84 J1132+1411 1.75 -15.75 8.25±0.01 8.67^+0.28_-0.19 0.13±0.008 2.7±0.1 60^+10_-10 8.86 J1144+4012 1.20 -19.86 8.43±0.20 9.89^+0.18_-0.29 0.22±0.010 0.6±0.1 246^+33_-33 0.40 J1148+2546 2.07 -18.03 7.94±0.01 8.13^+0.34_-0.24 0.10±0.021 3.7±0.1 95^+19_-19 1.31 J1150+1501 12.60 -13.71 8.14±0.01 6.83^+0.28_-0.30 0.04±0.004 2.3±0.1 67^+22_-22 1.29 J1157+3220 14.40 -17.27 8.43±0.02 9.08^+0.32_-0.18 0.08±0.006 1.2±0.1 238^+49_-49 2.89 J1200+1343 1.38 -18.53 8.26±0.02 8.12^+0.47_-0.42 0.15±0.006 5.1±0.1 192^+13_-13 0.18 J1225+6109 9.50 -13.28 7.97±0.01 7.09^+0.34_-0.24 0.11±0.005 4.7±0.1 51^+17_-17 2.91 J1253-0312 9.11 -18.19 8.06±0.01 7.66^+0.51_-0.23 0.16±0.008 8.0±0.2 113^+38_-38 0.85 J1314+3452 3.72 -12.65 8.26±0.01 7.53^+0.30_-0.21 0.14±0.006 2.3±0.1 62^+21_-21 0.30 J1323-0132 1.33 -15.94 7.71±0.04 6.29^+0.26_-0.10 0.13±0.042 37.8±3.0 ... 
0.23 J1359+5726 6.34 -18.53 7.98±0.01 8.39^+0.31_-0.26 0.09±0.006 2.6±0.1 161^+23_-23 1.10 J1416+1223 2.62 -20.63 8.53±0.11 9.59^+0.32_-0.26 0.25±0.008 0.8±0.1 398^+68_-68 0.13 J1418+2102 1.17 -13.99 7.75±0.02 6.26^+0.49_-0.35 0.08±0.006 4.7±0.1 51^+7_-7 0.40 J1428+1653 1.25 -20.75 8.33±0.05 9.56^+0.15_-0.23 0.14±0.008 1.2±0.1 140^+25_-25 0.35 J1429+0643 1.62 -20.92 8.10±0.03 8.80^+0.35_-0.21 0.12±0.012 4.2±0.2 230^+51_-51 0.15 J1444+4237 2.08 -11.33 7.64±0.02 6.39^+0.17_-0.17 0.08±0.053 4.1±0.1 54^+18_-18 8.20 J1448-0110 4.08 -17.55 8.13±0.01 7.58^+0.41_-0.24 0.15±0.005 8.0±0.1 145^+43_-43 0.23 J1521+0759 3.52 -20.33 8.31±0.14 9.00^+0.29_-0.30 0.15±0.008 1.5±0.1 161^+54_-54 0.28 J1525+0757 3.52 -19.83 8.33±0.04 10.06^+0.28_-0.42 0.25±0.008 0.5±0.1 408^+28_-28 0.25 J1545+0858 4.37 -18.40 7.75±0.03 7.50^+0.43_-0.26 0.11±0.036 8.6±0.3 113^+33_-33 0.33 J1612+0817 2.70 -21.12 8.18±0.19 9.78^+0.28_-0.26 0.29±0.008 0.7±0.1 459^+63_-63 0.20 (1) object name; (2) UV flux at 1500 Å from <cit.>; (3) UV absolute magnitude at 1500 Å; (4) metallicity from <cit.>; (5) stellar mass; (6) dust extinction from <cit.>; (7) O32 ratio; (8) velocity of Si ii absorption line; (9) half light radius from <cit.>. c c c c c c c c Best-fit parameters of the second attempt object z_tlac v_exp log N_HI log T log τ σ_i EW_i km s^-1 K km s ^-1 Å (1) (2) (3) (4) (5) (6) (7) (8) J0021+0052 0.098902 214^+1_-2 18.79^+0.11_-0.08 3.8^+0.2_-0.1 -2.05^+1.01_-0.11 117^+1_-1 16.8^+0.8_-0.7 J0036-3333 0.020939 207^+2_-1 18.68^+0.09_-0.06 3.5^+0.2_-0.1 -2.10^+1.13_-0.06 93^+1_-1 6.6^+0.5_-0.1 J0808+3948 0.091384 365^+1_-1 16.76^+0.08_-0.04 3.4^+0.1_-0.1 -1.57^+0.22_-0.11 103^+1_-1 8.7^+0.1_-0.1 J0926+4427 0.180817 131^+2_-3 19.23^+0.05_-0.08 3.7^+0.1_-0.1 -1.74^+0.11_-0.21 248^+2_-3 31.8^+1.0_-1.0 J0938+5428 0.102513 21^+4_-2 19.79^+0.08_-0.08 4.2^+0.2_-0.1 -0.68^+0.74_-0.87 308^+5_-4 74.5^+5.3_-4.4 J0942+3547 0.015121 86^+1_-1 18.20^+0.06_-0.06 3.1^+0.1_-0.1 -3.31^+0.38_-0.09 167^+1_-1 18.3^+0.1_-0.1 J0944-0038 0.005187 119^+4_-4 18.60^+0.09_-0.07 3.4^+0.2_-0.2 -1.35^+0.20_-0.12 218^+3_-4 2130.2^+1851.2_-1159.2 J0944+3442 0.020226 72^+18_-17 19.44^+0.21_-0.23 4.1^+0.7_-0.8 0.44^+0.46_-0.43 182^+12_-11 39.6^+21.4_-14.0 J1016+3754 0.004131 96^+2_-1 18.85^+0.07_-0.11 4.3^+0.1_-0.2 0.10^+0.80_-0.75 142^+1_-2 26.2^+1.9_-1.9 J1024+0524 0.033425 176^+2_-1 19.01^+0.08_-0.09 5.1^+0.1_-0.2 -1.89^+0.92_-0.16 82^+1_-1 16.2^+0.6_-0.4 J1025+3622 0.126786 167^+3_-2 18.99^+0.09_-0.07 4.2^+0.2_-0.1 -0.88^+0.40_-0.55 245^+3_-3 41.3^+2.1_-1.9 J1044+0353 0.013070 175^+10_-15 18.18^+0.20_-0.19 3.5^+0.3_-0.4 -1.14^+0.25_-0.10 248^+14_-9 9.9^+0.9_-0.7 J1112+5503 0.131707 210^+3_-3 19.55^+0.10_-0.05 4.6^+0.3_-0.1 0.55^+1.10_-1.17 270^+3_-4 20.9^+1.5_-1.6 J1119+5130 0.004536 0^+2_-1 20.42^+0.24_-0.65 4.0^+0.4_-0.3 -1.80^+1.21_-0.03 316^+10_-8 6.5^+6.6_-2.4 J1144+4012 0.126832 108^+11_-8 20.20^+0.07_-0.08 4.9^+0.3_-0.7 0.40^+0.46_-0.52 259^+9_-11 281.6^+50.7_-41.7 J1148+2546 0.045710 279^+5_-4 19.19^+0.09_-0.07 3.8^+0.2_-0.2 0.53^+0.67_-0.56 281^+4_-4 46.3^+3.7_-3.6 J1157+3220 0.010230 163^+1_-2 20.06^+0.04_-0.09 5.0^+0.1_-0.2 0.69^+1.98_-1.60 191^+1_-2 200.1^+2.7_-3.5 J1200+1343 0.066942 173^+1_-2 16.56^+0.09_-0.05 5.4^+0.1_-0.1 -2.40^+0.65_-0.12 266^+1_-1 53.6^+1.7_-1.8 J1253-0312 0.023087 184^+0_-1 18.67^+0.03_-0.06 4.3^+0.1_-0.1 -2.70^+0.48_-0.00 256^+1_-1 31.2^+0.5_-0.4 J1323-0132 0.022534 37^+1_-1 17.93^+0.05_-0.02 3.1^+0.0_-0.1 -2.40^+0.24_-0.12 121^+0_-0 75.7^+1.1_-1.1 J1359+5726 0.034107 205^+2_-1 19.03^+0.13_-0.09 4.7^+0.1_-0.2 -1.11^+0.10_-0.47 
128^+2_-2 51.8^+2.5_-2.2 J1416+1223 0.123181 0^+2_-2 19.59^+0.09_-0.08 4.6^+0.2_-0.3 -0.22^+0.70_-0.73 289^+6_-6 49.5^+4.1_-3.9 J1418+2102 0.009016 98^+2_-2 18.18^+0.09_-0.06 3.0^+0.1_-0.1 0.68^+1.65_-1.61 230^+1_-1 101.8^+4.8_-4.9 J1428+1653 0.181780 121^+3_-4 18.80^+0.10_-0.07 3.5^+0.1_-0.2 -1.12^+0.08_-0.40 52^+1_-1 17.8^+1.9_-1.0 J1429+0643 0.173984 112^+2_-3 19.21^+0.08_-0.09 3.1^+0.2_-0.2 -0.38^+0.67_-0.82 323^+5_-5 46.8^+3.6_-3.1 J1521+0759 0.094771 215^+2_-2 19.08^+0.12_-0.11 3.0^+0.2_-0.1 -1.04^+0.16_-0.28 93^+2_-2 9.2^+1.8_-0.6 J1525+0757 0.075913 136^+2_-1 18.58^+0.09_-0.06 4.5^+0.2_-0.1 -1.39^+0.00_-0.44 52^+1_-1 19.2^+1.2_-1.0 J1545+0858 0.038336 194^+1_-2 18.41^+0.07_-0.10 3.0^+0.2_-0.1 0.69^+1.91_-1.58 156^+1_-1 55.3^+1.5_-1.8 J1612+0817 0.149267 246^+3_-2 19.61^+0.08_-0.10 5.3^+0.2_-0.2 0.09^+1.03_-1.11 112^+2_-2 35.6^+2.3_-2.0 (1) object name; (2) redshift estimated by the shell model; (3) outflow velocity of expanding shell; (4) H i column density; (5)temperature; (6) dust extinction; (7) intrinsic line width; (8) equivalent width. c c c c c c c c Best-fit parameters of the third attempt object z_tlac v_exp log N_HI log T log τ σ_i EW_i km s^-1 K km s ^-1 Å (1) (2) (3) (4) (5) (6) (7) (8) J0021+0052 0.098535 163^+1_-2 19.17^+0.16_-0.05 4.5^+0.2_-0.1 -0.67^+0.19_-0.07 225^+1_-1 26.7^+4.0_-1.1 J0036-3333 0.020553 120^+2_-2 19.17^+0.08_-0.05 3.0^+0.1_-0.1 -0.63^+0.04_-0.04 142^+2_-2 9.0^+0.3_-0.3 J0808+3948 0.091375 348^+2_-2 16.17^+0.08_-0.05 3.7^+0.1_-0.1 -1.67^+0.46_-0.73 102^+1_-1 8.7^+0.1_-0.1 J0926+4427 0.180816 133^+1_-1 19.19^+0.07_-0.06 3.8^+0.1_-0.2 -1.73^+0.24_-0.38 244^+3_-3 29.9^+1.3_-1.2 J0938+5428 0.102247 11^+1_-2 20.63^+0.05_-0.11 4.3^+0.1_-0.2 -2.44^+0.45_-0.56 291^+5_-4 31.8^+3.9_-1.7 J0942+3547 0.015010 69^+0_-0 18.58^+0.03_-0.03 3.3^+0.0_-0.0 -3.67^+0.58_-0.73 170^+1_-1 18.4^+0.2_-0.2 J0944-0038 0.004912 70^+3_-3 19.42^+0.06_-0.08 3.9^+0.3_-0.2 -2.21^+0.54_-0.71 217^+3_-4 302.0^+23.0_-24.3 J0944+3442 0.020138 59^+9_-10 19.61^+0.11_-0.10 3.3^+0.9_-0.4 0.34^+0.23_-0.35 179^+11_-11 55.8^+18.3_-18.1 J1016+3754 0.004024 81^+2_-2 19.04^+0.13_-0.11 3.8^+0.3_-0.1 -0.58^+0.09_-0.08 148^+2_-1 21.6^+2.4_-1.3 J1024+0524 0.033304 139^+3_-3 19.05^+0.12_-0.11 4.6^+0.1_-0.1 -0.69^+0.14_-0.08 192^+2_-2 20.0^+2.5_-0.8 J1025+3622 0.126552 131^+2_-4 19.43^+0.07_-0.08 3.9^+0.3_-0.2 -1.31^+0.34_-0.35 243^+3_-3 38.2^+3.6_-2.3 J1044+0353 0.012998 158^+9_-9 18.40^+0.11_-0.13 3.4^+0.3_-0.3 -1.16^+0.44_-0.63 253^+12_-8 9.9^+0.9_-0.8 J1112+5503 0.131497 153^+3_-4 19.83^+0.06_-0.09 3.8^+0.2_-0.2 0.25^+0.08_-0.07 257^+4_-4 26.8^+2.7_-2.4 J1119+5130 0.004532 1^+1_-2 20.18^+0.10_-0.12 4.6^+0.1_-0.2 -0.91^+0.31_-0.26 225^+9_-9 13.2^+6.2_-4.6 J1144+4012 0.126862 111^+12_-7 20.19^+0.08_-0.07 4.8^+0.3_-1.0 0.48^+0.13_-0.19 252^+8_-10 324.8^+52.3_-63.3 J1148+2546 0.045233 187^+6_-5 19.66^+0.15_-0.11 5.0^+0.1_-0.2 -1.50^+0.58_-0.60 265^+7_-8 28.3^+4.7_-2.5 J1157+3220 0.011074 392^+2_-2 19.46^+0.12_-0.11 5.0^+0.1_-0.2 0.56^+0.03_-0.02 118^+1_-1 33.6^+4.8_-0.7 J1200+1343 0.066828 131^+3_-3 19.23^+0.17_-0.08 4.5^+0.2_-0.1 -0.04^+0.04_-0.04 219^+1_-1 86.3^+4.6_-3.1 J1253-0312 0.022846 131^+1_-1 19.28^+0.02_-0.04 4.2^+0.1_-0.1 -3.19^+0.70_-0.69 246^+1_-1 30.2^+0.4_-0.4 J1323-0132 0.022511 26^+1_-1 18.22^+0.06_-0.08 3.2^+0.0_-0.1 -0.95^+0.04_-0.05 112^+1_-1 87.5^+1.3_-1.2 J1359+5726 0.033938 175^+3_-2 19.30^+0.13_-0.08 4.6^+0.1_-0.2 -0.82^+0.23_-0.18 125^+2_-2 55.7^+5.5_-2.9 J1416+1223 0.123174 -5^+4_-4 19.20^+0.08_-0.08 3.6^+0.6_-0.3 0.38^+0.10_-0.10 306^+7_-8 41.7^+2.9_-2.6 J1418+2102 0.008699 
21^+1_-1 19.73^+0.05_-0.02 4.1^+0.1_-0.1 -3.38^+0.58_-0.76 246^+1_-1 67.4^+2.9_-2.9 J1428+1653 0.181544 122^+2_-3 19.23^+0.05_-0.09 3.0^+0.2_-0.1 0.21^+0.07_-0.08 168^+2_-3 46.5^+3.3_-3.4 J1429+0643 0.173649 21^+2_-2 19.95^+0.09_-0.04 3.4^+0.2_-0.1 -1.30^+0.06_-0.05 321^+5_-6 41.6^+2.1_-2.1 J1521+0759 0.094335 141^+3_-4 19.40^+0.08_-0.08 5.0^+0.1_-0.1 -1.21^+0.22_-0.23 97^+4_-4 9.5^+1.1_-0.7 J1525+0757 0.075916 133^+1_-2 18.58^+0.09_-0.07 4.7^+0.1_-0.2 -1.09^+0.12_-0.11 44^+2_-2 20.1^+1.0_-1.0 J1545+0858 0.037834 92^+2_-3 19.43^+0.06_-0.06 4.2^+0.1_-0.2 -1.87^+0.34_-0.40 72^+3_-2 37.0^+1.8_-1.1 J1612+0817 0.149198 187^+4_-2 19.79^+0.08_-0.07 5.0^+0.2_-0.2 0.42^+0.03_-0.02 84^+2_-2 94.9^+5.1_-4.8 (1) object name; (2) redshift estimated by the shell model; (3) outflow velocity of expanding shell; (4) H i column density; (5)temperature; (6) dust extinction; (7) intrinsic line width; (8) equivalent width.
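For convenience, the two-parameter LyC-leakage diagnostic summarized in Sec. <ref> can be applied directly to the peak separations and red peak asymmetries listed in the table above. The following minimal sketch is illustrative only and not part of the original analysis: it encodes the thresholds quoted in the text (peak separation below ∼400 km s^-1 for substantial leakage, and A_f > 3 separating leakage through holes from density-bounded full breaks); the example values are copied from the double-peak table, and the printed labels follow from these two cuts alone.

```python
def classify_leakage(delta_v_lya, a_f):
    """Classify a sightline from its Lya peak separation [km/s] and red peak asymmetry A_f."""
    if delta_v_lya >= 400.0:
        return "small leakage"             # large peak separation: little LyC escape expected
    if a_f > 3.0:
        return "leakage through holes"     # small separation and asymmetric red peak
    return "density-bounded (full break)"  # small separation, more symmetric red peak

examples = {  # (peak separation [km/s], A_f), copied from the double-peak table above
    "J1323-0132": (168.0, 1.78),
    "J0938+5428": (669.0, 1.93),
    "J1429+0643": (545.0, 3.56),
}
for name, (dv, af) in examples.items():
    print(f"{name}: {classify_leakage(dv, af)}")
```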
http://arxiv.org/abs/2307.07292v1
20230714121133
Optimal Dirichlet Boundary Control by Fourier Neural Operators Applied to Nonlinear Optics
[ "Nils Margenberg", "Franz X. Kärtner", "Markus Bause" ]
math.NA
[ "math.NA", "cs.NA", "physics.comp-ph", "78M50, 78M10, 78A60, 65M60, 49M41" ]
We present an approach for solving optimal Dirichlet boundary control problems of nonlinear optics by using deep learning. For computing high resolution approximations of the solution to the nonlinear wave model, we propose higher order space-time finite element methods in combination with collocation techniques. Thereby, C^1-regularity in time of the global discrete solution is ensured. The resulting simulation data is used to train solution operators that effectively leverage the higher regularity of the training data. The solution operator is represented by Fourier Neural Operators and Gated Recurrent Units and can be used as the forward solver in the optimal Dirichlet boundary control problem. The proposed algorithm is implemented and tested on modern high-performance computing platforms, with a focus on efficiency and scalability. The effectiveness of the approach is demonstrated on the problem of generating terahertz radiation in periodically poled lithium niobate, where the neural network is used as the solver in the optimal control setting to optimize the parametrization of the optical input pulse and maximize the yield of 0.3 THz-frequency radiation. We exploit the periodic layering of the crystal to design the neural networks. The networks are trained to learn the propagation through one period of the layers. The recursive application of the network onto itself yields an approximation to the full problem. Our results indicate that the proposed method can achieve a significant speedup in computation time compared to classical methods. A comparison of our results to experimental data shows the potential to revolutionize the way we approach optimization problems in nonlinear optics.
MSC2020: 78M50, 78M10, 78A60, 65M60, 49M41
Keywords: Optimal Control, Neural Operators, Deep Neural Networks, Nonlinear Optics, Space-Time Finite Element Method
§ INTRODUCTION
§.§ Physical Problem and Machine Learning approach
Nonlinear optical phenomena play a fundamental role in many applications, including the development of innovative optical sources. As high-intensity lasers become more accessible and complexity increases, the simulation of nonlinear optical phenomena gains importance in order to achieve optimal performance and to reduce the cost and time of empirical studies. In this work we are concerned with the generation of terahertz (THz) radiation, which spans the frequency range of 0.1 to 30 THz. Thus, it is positioned between the microwave and infrared electromagnetic frequency bands. THz radiation offers great potential for a wide range of ultrafast spectroscopic, strong-field and imaging applications. However, a persistent challenge in current research lies in the limited availability of compact THz sources capable of delivering both high field strength and high repetition rates. We address this limitation through the development of machine learning techniques to elucidate and optimize THz generation in nonlinear crystals.
By leveraging these approaches, we aim to pave the way for the next generation of compact and efficient light sources for spectroscopic applications, thereby enabling significant advancements in the field krtnerAXSISExploringFrontiers2016. In this paper, we develop machine learning techniques to solve an optimal control problem that arises in the optimization of THz radiation generation in nonlinear crystals. Specifically, the problem can be formulated as an optimal Dirichlet boundary control problem, which requires the repeated solution of the forward problem. While we previously developed accurate simulation methods for a class of nonlinear dispersive wave equations in nonlinear optics margenbergAccurateSimulationTHz2023, each simulation entails a significant computational effort. Consequently, the solution of the forward problem with this method is impractical for the integration into the optimal control problem, which necessitates different approaches. The key ideas of our method are sketched in Fig. <ref>. The first aspect of the method we develop is the differentiability of a program that is implemented using established Artificial Neural Network (ANN) libraries. This paradigm is known as differentiable programming rackauckasUniversalDifferentialEquations2020. The second main idea in our algorithm builds on the periodicity of the material parameters. We consider a problem in nonlinear optics which involves a periodically poled nonlinear crystal. We learn a solution operator U to the forward problem in a single period of the crystal. By recursive application of U onto itself we approximate the solution operator for multiple layers. The resulting solution operator can then be integrated into the solution of an optimal control problem. In our work, we adopt a hybrid approach that combines classical and mature numerical methods, specifically finite element methods, with deep learning techniques. We use numerical methods where they have clear advantages over machine learning approaches, while we use ANNs where numerical methods are not feasible or efficient. Our approach to solving an optimal Dirichlet boundary control problem exemplifies this paradigm, which is the focus of this work. In particular, we extend our previous work margenbergAccurateSimulationTHz2023 on space-time finite element methods by using higher order variational time discretizations presented in anselmannNumericalStudyGalerkin2020,anselmannGalerkinCollocationApproximation2020. The resulting finite element solution has global C^1-regularity in time. We then use the resulting simulation data to train a solution operator that effectively leverages the higher regularity of the training data. The numerical and machine learning methods are implemented and tested on modern high-performance computing platforms, with a focus on efficiency and scalability. §.§ Related works Partial differential equations (PDEs) play a fundamental role in science and engineering. They describe natural phenomena and processes in a lot of scientific fields and provide a mathematical framework to model these phenomena. Despite the significant advances in recent decades, challenges persist, e. g.in the context of addressing the solution of large-scale systems of nonlinear equations. §.§.§ Machine Learning for partial differential equations Approaches for the solution of PDEs using ANNs trace back more than 20 years, e. g. lagarisArtificialNeuralNetworks1998. The idea is to directly parametrize the solution to the PDE as an ANN. 
The network is then trained by incorporating the differential equation, along with the boundary conditions, into the loss function. In Weinan Weinan and Yu minimize an energy functional, resembling the variational formulation used in FEM. On the other hand, DeepXDE luDeepXDEDeepLearning2021, PINN raissi2019physics, and the Deep Galerkin Method sirignano2018dgm use different approaches where the strong residual of the PDE is minimized. This is done through collocation methods on randomly selected points within the domain and on the boundary. Karniadakis and Zhang proposed VPINNs kharazmiHpVPINNsVariationalPhysicsinformed2021, where the cost function is the variational formulation, which is optimized by sampling test functions. A comprehensive review of PINN and related approaches in the field of Scientific Machine Learning can be found in cuomoScientificMachineLearning2022. The authors of mattheakisPhysicalSymmetriesEmbedded2019,chenSymplecticRecurrentNeural2020,jinSympNetsIntrinsicStructurepreserving2020,hernndezStructurepreservingNeuralNetworks2021 develop ANNs that ensure the symplectic structure of Hamiltonian mechanics, which improves generalization and accuracy. Based on Koopmann operator representation, the authors of ginDeepLearningModels2020,pan2020physics train an ANN to represent a coordinate transformation that linearizes a nonlinear PDE. A recent approach to solving PDEs involves learning solution operators using artificial neural networks. This technique involves approximating solution operators using ANNs, which can potentially enable the solution of complex problems. A significant advantage of this approach is that once the solution operator is trained, it can be applied to other scenarios. Training ANNs is computationally expensive, which makes PINNs and related approaches not competitive to classical simulation methods grossmannCanPhysicsInformedNeural2023. Evaluating an ANN on the other hand is computationally cheap, making neural operators appealing: Once a solution operator is trained, it can be generalized to other scenarios, which only requires the evaluation of the network. Various architectures exist for constructing these neural operators. In lu2021learning, Lu et al. construct an architecture called DeepONets by iterating a shallow network proposed in chen1995universal. This type of network consists of a trunk network which is applied to an input function and a branch net which is applied to an element from the domain of the operator. In lanthalerErrorEstimatesDeepONets2022 the authors prove an error estimate for the DeepONet architecture. Other approaches to learn solution operators are inspired by reduced basis methods bhattacharyaModelReductionNeural2021,nelsen2020random,opschoor2020deep,schwab2019deep,oleary-roseberryDerivativeinformedProjectedNeural2022,fresca2022poddlrom. Based on low rank decompositions the authors of khoo2019switchnet introduce an ANN with low-rank structure to approximate the inverse of differential operators. In kovachkiNeuralOperatorLearning2023, the authors construct such a network as the tensor product of two networks, which also carries similarities with the DeepONet architecture lu2019deeponet. Building on fan2019bcr, fan2019multiscale, kashinath2020enforcing, the Fourier Neural Operator (FNO) architecture are developed in liFourierNeuralOperator2020,kovachkiNeuralOperatorLearning2023. In kovachkiUniversalApproximationError2021 the authors prove a universal approximation property and error bounds. 
Based on FNOs, new architectures are developed, e. g. Neural Inverse Operators molinaroNeuralInverseOperators2023 which are used to solve inverse problems. The idea of designing and interpreting ANNs using continuity becomes increasingly popular. A notable example of this is the formulation of ResNet as a continuous time process with respect to the depth parameter haber2017stable,eProposalMachineLearning2017. See also antilOptimalTimeVariable2022 for an extension to adaptive timestepping, where the timestepsize is a parameter, which can be optimized. Similarly, works linking ANNs and dynamical systems observe that problems arising in deep learning can be recast into optimal contol problems on differential equations jiequnhanDynamicalSystemsAndOptimal2022,eMeanfieldOptimalControl2019,liuDeepLearningTheory2019,seidmanRobustDeepLearning2020,benningDeepLearningOptimal2019. Recent works employ deep learning techniques to address computational challenges encountered in solving optimal control problems; The works weinanEmpoweringOptimalControl2022,bensoussanChapter16Machine2022 and references therein serve as a good basis for a comprehensive survey. Existing work is mainly concerned with stochastic control ruthottoMachineLearningFramework2020,carmonaConvergenceAnalysisMachine2021,carmonaConvergenceAnalysisMachine2021a. This is in contrast to this work, where we are concerned with Dirichlet optimal control problems. §.§.§ Space-Time Finite Element Methods We describe the numerical simulations of nonlinear optical phenomena in the context of space-time finite element methods kcherVariationalSpaceTime2014. Specifically we use time discretization of higher order and regularity anselmannNumericalStudyGalerkin2020,anselmannGalerkinCollocationApproximation2020. Other investigations on space-time finite element methods were conducted in drflerSpaceTimeDiscontinuousGalerkin2016,drflerParallelAdaptiveDiscontinuous2019, where numerical results with an adaptive algorithm are presented. Further work relevant for electromagnetic problems is the PhD thesis findeisenParallelAdaptiveSpaceTime2016 and references therein. Various alternative methods for discretizing wave equations via space-time finite element methods exist, and they are discussed in depth in langerSpaceTimeMethodsApplications2019. Notable examples include the works of banjaiTrefftzPolynomialSpaceTime2017,gopalakrishnanMappedTentPitching2017, as well as more recent developments such as those presented in perugiaTentPitchingTrefftzDG2020,steinbachCoerciveSpacetimeFinite2020. These works and their references serve as a good basis for a comprehensive survey of recent developments in space-time discretization techniques for linear wave equations. The advantages of the variational time discretization include the natural integration with the variational discretization in space and that it naturally captures couplings and nonlinearities. These features facilitate the use of concepts such as duality and goal oriented adaptivity bauseFlexibleGoalorientedAdaptivity2021. The concepts of variational space-time discretization also offers a unified approach to stability and error analysis as shown in matthiesHigherOrderVariational2011. Furthermore, the use of space-time FEM allow us to solve the wave equation together with the arising ADEs in one holistic framework margenbergAccurateSimulationTHz2023. Once the formulation is established, the methods can be extended in a generic manner. 
For instance, we introduced the physical problem we are concerned with in this work in margenbergAccurateSimulationTHz2023 and extend it here to the family of Galerkin-collocation methods anselmannGalerkinCollocationApproximation2020. § NOTATION AND MATHEMATICAL PROBLEM Let 𝒟⊂^d with d∈1, 2, 3 be a bounded domain with boundary ∂𝒟=Γ_D and I=(0, T] a bounded time interval with final time T>0. By H^m(𝒟) we denote the Sobolev space of L^2(𝒟) functions with derivatives up to order m in L^2(𝒟). For the definition of these function spaces we refer to evansPartialDifferentialEquations2010. We let L L^2(𝒟), V=H^1(𝒟) and V_0=H^1_0(𝒟) be the space of all H^1-functions with vanishing trace on the Dirichlet part of the boundary Γ_D. We denote the L^2-inner product by ∙∙. For the norms we use ∙∙_L^2(𝒟) and ∙_m ∙_H^m(𝒟) for m∈ and m≥ 1. By L^2(0, T; B), C([0, T]; B) and C^q([0, T]; B), for q∈, we denote the standard Bochner spaces of B-valued functions for a Banach space B, equipped with their natural norms. Further, for a subinterval J⊆ [0, T], we will use the notations L^2(J; B), C^m(J; B) and C^0(J; B) C(J; B) for the corresponding Bochner spaces. Further, we define the function spaces, that we need below for the variational formulation of the model equations. Function spaces for the variational formulationsfnspace W(I) = w∈ L^2(I; V)∂_t w∈ L^2(I; L) , W_0(I) = w∈ L^2(I; V_0)∂_t w∈ L^2(I; L) , W_nl(I) = w∈ L^2(I; V)∂_t w∈ L^2(I; L), ∂_t ( w w)∈ L^2(I; L) . In (<ref>) we denote by w the contraction of w, i.e. w=∑_i=1^d w_i. §.§.§ Mathematical model problem from nonlinear optics In this work we study nonlinear dispersive wave propagation, that is modeled by the following coupled partial differential equation (cf. margenbergAccurateSimulationTHz2023,abrahamConvolutionFreeMixedFiniteElement2019). Its physical background and application is discussed further in Section <ref>. Nonlinear dispersive wave equationlor-ade ∂_tt P + Γ_0∂_t P + ν_t^2 P - (ε_Ω-ε_ω)ν_t^2 E =0 on 𝒟× I , -Δ E + ε_ω∂_tt E + (ε_Ω- ε_ω)ν_t^2 E- ν_t^2 P -Γ_0∂_t P+ χ^(2)∂_tt( E E) = f on 𝒟× I , E(0) = E_0, ∂_t E(0) = E_1, P(0) = P_0, ∂_t P(0) = P_1 on 𝒟 , E = g^ E on Γ_D . By E we denote the electric field, by P the polarization and Γ_0, ν_t, ε_ω, ε_Ω∈_+ are material parameters. We further define ε_Δ=ε_Ω- ε_ω. The boundary condition g^ E is a prescribed trace on Γ_D and f is an external force acting on the domain. To simplify the notation and enable better numerical treatment lateron, we have already expressed Problem <ref> in normalized quantities. Specifically, we have transformed the equations and quantities using the transformation t̃=c_0t, where c_0 is the speed of light in vacuum. This normalization is consistently applied throughout this work. Therefore, we omit the tilde notation, as we already did in (<ref>). For the numerical approximation we reformulate Problem <ref> as a first-order system in time; cf. Problem <ref>. To this end we introduce the auxiliary variables 2 U=∂_t P + Γ_0 P , A=ε_ω∂_t E - Γ_0 P+χ^(2)∂_t( E E) . We tacitly assume that Problem <ref> has a sufficiently regular, unique solution. The proof of existence and uniqueness for the nonlinear system (<ref>) extends beyond the scope of this work. However, it is crucial for our subsequent mathematical arguments and formulations that the solution to Problem <ref> is regular enough such that all the mathematical arguments and formulations used below are well-defined and the application of higher order discretization techniques becomes reasonable. 
This regularity, in turn, imposes certain conditions on the data, coefficients, and geometric properties of the domain; cf. evansPartialDifferentialEquations2010. Under the assumption of the existence of a unique and smooth solution to (<ref>), this solution satisfies the following weak formulation. Weak formulation of the nonlinear dispersive wave equation (<ref>)lor-stm For given data f∈ L^2(I; L), boundary conditions g^ e∈ L^2(I; H^ 1/2(Γ_D)) and initial conditions ( u_0, p_0, a_0, e_0) v_0∈ L^3× V find v ( u, p, a, e) ∈W(I), W(I), W(I), W_nl(I) W(I) such that eΓ_D=g^ e and for all Φ∈W_0(I)^4 A( v)(Φ)= F(Φ) is satisfied. The functional F W_0(I)^4→ and the semilinear form, which is linear in the second argument, A W(I)×W_0(I)^4→ are given by A( v)(Φ) ∫_0^T∂_t pϕ^0 + Γ_0 pϕ^0 - uϕ^0 t+ u(0)ϕ^0(0) +∫_0^Tν_t^2 pϕ^1 - ε_Δν_t^2 eϕ^1 + ∂_t uϕ^1 t + p(0)ϕ^1(0) +∫_0^Tε_ω∂_t eϕ^2 - Γ_0 pϕ^2 +χ^(2)∂_t( E E)ϕ^2 - aϕ^2 t + a(0)ϕ^2(0) +∫_0^T∇ e∇ϕ^3 + ε_Δν_t^2 eϕ^3 - ν_t^2 pϕ^3+∂_t aϕ^3 t+ e(0)ϕ^3(0) , F(ϕ) u_0ϕ^0(0) + p_0ϕ^1(0) + a_0ϕ^2(0) + e_0ϕ^3(0)+∫_0^T fϕ^3 t . We note that all integrals in (<ref>) are well-defined in the function space W(I), due to the Definition <ref>. To obtain higher regularity of the solution, stricter assumptions f and v_0 may have to be imposed. Weakly imposed initial conditionsweakinitial In (<ref>), the expressions w(0) for w∈ u, p, a, e are well-defined when we consider the continuous embedding W↪ C(I̅; V), cf. [Chapter XVIII, Theorem 1]dautrayMathematicalAnalysisNumerical1999. We further note that the test space W_0 is dense in L^2(I; V_0), as stated in [Chapter 2, Corollary 2.1]bruchhuserGoalOrientedSpaceTimeAdaptivity2022. Based on Remark <ref>, we comment on the variational problem (<ref>). 1pt 0pt 0pt * For convenience, the initial conditions of Problem <ref> are incorporated in the variational equation (<ref>) through the forms (<ref>). The Sobolev embedding W(I)↪C(I̅;,V)^4 ensures the well-defined pointwise evaluation of functions in W(I) within the forms (<ref>). * According to Remark <ref>, the test space W_0(I)^4 is densely embedded in the Hilbert space L^2(I;V_0)^4. This dense embedding is an indispensable requirement for the proper formulation of Problem <ref>. * The variables ( u, p, a, e) belong to the solution space W(I). Although weaker assumptions about u, p and a would have been sufficient for the existence of the space-time integrals in (<ref>). However, we adopt this stronger assumption since we use an H^1(𝒟)-conforming approximation for all variables in Section <ref>. This concept follows the lines of bangerthAdaptiveGalerkinFinite2010. Under the above-made assumptions we now define the solution operator that is associated with Problem <ref> and its weak formulation (<ref>). Solution Operatorabstract Consider the nonlinear Problem <ref>. The solution operator S D(S)⊂ L^2(I; H^ 1/2(Γ_D))× L^2(I; L)× (L^3× V) → W(I), (g^ e, f, v_0) ↦ v . is defined by the mapping of the data f and the initial conditions v_0 to the unique solution v of (<ref>), such that A(S(g^ e, f, v_0))(Φ) = F(Φ) ∀Φ∈(W_0(I))^4 . The domain D( S) is supposed to be a subset of sufficiently regular functions f, v_0 in L^2(I; H^ 1/2(Γ_D))× L^2(I; L)× (L^3× V) such that (<ref>) admits a unique solution with the regularity required for the numerical approximation scheme. 
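As described in the following paragraph, the operator S is approximated by an ANN that is trained on high-resolution finite element solutions. The sketch below outlines this operator-learning loop schematically: samples (g^ e, f, v_0) are mapped by the space-time finite element solver to reference trajectories, and a network is fitted to these pairs by minimizing a mean-squared misfit. It is a minimal illustration only; fem_solver, model and the optimizer settings are placeholders and do not reflect the implementation or hyperparameters used in this work.

```python
import torch

# Schematic operator-learning loop: the space-time FEM solver plays the role of the discrete
# solution operator, and a network is fitted to its input/output pairs. The objects fem_solver
# and model are placeholders, not the actual interfaces used in this work.

def train_surrogate(fem_solver, model, boundary_data, n_epochs=100, lr=1e-3):
    # boundary_data: list of (g_e, f, v0) tuples sampled from the admissible data
    dataset = []
    for g_e, f, v0 in boundary_data:
        v = fem_solver.solve(g_e, f, v0)        # high-resolution reference solution
        dataset.append((g_e, f, v0, v))

    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(n_epochs):
        for g_e, f, v0, v_ref in dataset:
            v_pred = model(g_e, f, v0)          # network approximation of the solution operator
            loss = torch.mean((v_pred - v_ref) ** 2)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```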
The goal of this work is to approximate the operator S by an ANN, which evaluation involves low computational costs and thereby lets an optimal control problem subject to the Dirichlet data of Problem <ref> become feasible. For the training and validation of the ANN approximate solutions to Problem <ref> with high resolution are necessary. They are computed by space-time finite element techniques of high accuracy which are presented in Section <ref>. §.§ Physical Background Based on margenbergAccurateSimulationTHz2023, we review the model (<ref>), with a focus on the applications and physics of nonlinear optics newIntroductionNonlinearOptics2011,boydChapterNonlinearOptical2020. Nonlinear and dispersive effects arise due to the interaction of waves with atoms or molecules in a medium. The polarization P of the medium captures these interactions at a macroscopic level. The polarization can be developed as a power series in terms of the electric field E. Based on the physical settings and materials considered in this work, it is deemed sufficient to include only the linear and quadratic terms to accurately model the phenomena of interest. The polarization is then given by P(x, t)=ε_0χ^(1)⊗ E(x, t)+χ^(2)⊗ E(x, t)⊗ E(x, t) , where the electric susceptibilities χ^(n)×𝒟→⊗_i=0^n^d are tensor-valued functions which depend on the frequency and spatial coordinate. We assume that χ^(1) and χ^(2) can be simplified to scalar functions such that χ^(1)→ and χ^(2)𝒟→ . We note that χ^(1) doesn't depend on spatial coordinates and the material is homogeneous w. r. t. to the linear susceptibility. Further, we only consider instantaneous nonlinearities, which means that the nonlinear susceptibilities are frequency independent. We formulate the dispersive electromagnetic wave equation -Δ E +∂_ttε_r* E +χ^(2)∂_tt( E E) = f. Here ε_r is the relative electric permittivity for which ε_r=n^2=1+χ^(1) holds, where n is the refractive index. A simple model introduced by Lorentz, which describes the electric permittivity ε_r as a function of the frequency ν is given by ε_r(ν)= ε_ω + (ε_Ω-ε_ω) ν_t^2/ν_t^2-ν^2+Γ_0ν . The physical model for (<ref>) is an electron bound to the nucleus by a force governed by Hooke's law with characteristic frequency ν_t. Γ_0 is the damping coefficient and ε_Ω and ε_ω are the low and high frequency limits of the relative electric permittivity. In the time domain (<ref>) gives rise to the convolution term [ε_r(ν) * E](t) in (<ref>). To avoid the computationally expensive evaluation of this convolution, we derive an auxiliary differential equation (ADE), as given by (<ref>). By substituting (<ref>) into (<ref>), we obtain (<ref>), which results in the formulation of Problem <ref>. § VARIATIONAL SPACE-TIME DISCRETIZATION FOR NONLINEAR DISPERSIVE WAVE EQUATIONS In this section we present the numerical approximation scheme that we use for highly resolved and accurate computations of solutions to the weak form (<ref>) of the nonlinear dispersive wave problem in Problem <ref>. The approach discretizes the continuous system (<ref>) by enforcing differentiability in time constraints on the trial space of piecewise polynomials in combination with variational conditions, based on the weak formulation (<ref>) and collocation conditions, deduced from the strong form (<ref>). The collocation conditions are imposed at the end point of the subintervals of the time mesh. 
Due to the differentiability in time, we will observe that the collocation conditions are also satisfied at the initial time points of the subintervals. These schemes are referred to as Galerkin-collocation methods, for short GCC^s(k) where s denotes the differentiability with respect to the time variable and k the order of the polynomials of the trial space. Galerkin-collocation schemes have been introduced and studied for acoustic waves in anselmannNumericalStudyGalerkin2020,anselmannGalerkinCollocationApproximation2020. For the choice r=k (r being the order of approximation in space), convergence of order k+1 in space and time is shown for the fully discrete approximation of the solution and its time derivative. In our simulations presented in Section <ref> we put k=3. In the numerical investigations of Section <ref>, we will see that Galerkin-collocation are strongly adapted to the accurate and efficient numerical simulation of nonlinear dispersive phenomena. The collocation conditions allow us to reduce the size of the discrete variational test space, which leads to increased efficiency compared to standard Galerkin-Petrov approaches, as presented in kcherVariationalSpaceTime2014 for example. Galerkin-collocation schemes lead to discrete solutions of higher order regularity in time. For instance, by employing the GCC^1(3) method, the simplest scheme from this family of time discretization techniques, we obtain solutions of C^1-regularity in time, which is particularly advantageous for wave problems. We also exploit the increased regularity in our optimal contral method by neural networks in Section <ref>. For the time discretization, we split the time interval I into a sequence of N disjoint subintervals I_n=(t_n-1, t_n], n=1,…, N. For a Banach space B and k∈ℕ_0 we define ℙ_k(I_n; B)=w_τ_n I_n→ B w_τ_n(t)=∑_j=0^kW^jt^j ∀ t∈ I_n, W^j∈ B ∀ j . For r∈ℕ we define the finite element space that is built on the spatial mesh as V_h=v_h∈ C(𝒟̅) v_hK∈𝒬_r(K) ∀ K ∈𝒯_h , V_h, 0=V_h∩ V_0 , where 𝒬_r(K) is the space defined by the reference mapping of polynomials on the reference element with maximum degree r in each variable. From now on we choose the piecewise polynomial degrees in (<ref>) and (<ref>) to k=3 and r=3. The trial and test space for our discrete problem are then defined by 1X_τ, h=w∈ C^1(I̅; V_h) wI_n∈ℙ_3(I_n; V_h) ∀ n=1,…, N , Y_τ, h=w∈ L^2(I; V_h, 0) wI_n∈ℙ_0(I_n; V_h, 0) ∀ n=1,…, N . We impose global C^1-regularity on X_τ, h, which corresponds to a spline-type discretization in time. We chose the global time-discrete space of piecewise constant functions as Y_τ, h. Thereby, we need to fix additional degrees of freedom in order to ensure solvability. To this end, we combine the C^1-regularity constraints with the strong form of the equations at the endpoints of each subinterval I_n. Then, collocation conditions are a result of the imposed global C^1-regularity. This is different from becherVariationalTimeDiscretizations2021, where the collocation conditions are imposed, which then imply the C^1-regularity. The different construction is due to the nonlinear character of the system. For simplicity regarding the prescription of inhomogeneous boundary conditions we make the following assumption. Inhomogeneous Dirichlet Boundary conditionsinhom We impose an implicit restriction on the set of admissible boundary conditions g^ e. We assume that there exists a function g_τ, h in C^1(I̅; V_h) such that g_τ, h^ eΓ_D=g^ e ∀ t ∈I̅ . 
For prescribing more general boundary conditions suitable interpolation operators applied to the boundary values are required. For brevity and since this is a standard technique, it is not considered here. We let ( e_0, h, a_0, h, p_0, h, u_0, h) v_0, h∈ V_h^4, which are appropriate finite element approximations of the initial values v_0. Here, we use interpolation in V_h. We introduce ∂_t^i w_n, h=∂_t^i w_τ, h(t_n) , and discretize Problem <ref> with the GCC^1(3) method. From the local problems <ref> we derive the following global in time fully discrete formulation. C^1-regular in time Galerkin-collocation scheme for (<ref>)lor-gcc For given data and boundary conditions f_τ, h, g_τ, h^ e∈ C^1(I̅; V_h, 0), find u_τ, h, p_τ, h, a_τ, h and e_τ, h such that e_τ, h=g_τ, h^ e on I̅×Γ_D and for all (ϕ_1, h^0,…,ϕ_1, h^11,…, ϕ_N, h^0,…, ϕ_N, h^11, ϕ_τ, h^0,…, ϕ_τ, h^3)ϕ_τ, h∈ V_h, 0^12N× Y_τ, h^4 A_τ, h( v_τ, h)(Φ_τ, h)= F_τ, h(ϕ_τ, h) , is satisfied, where F Y_τ, h^4→ and A X_τ, h^4× V_h^12N× Y_τ, h^4→ are given by A_τ, h( v_τ, h)(ϕ_τ, h) ∫_0^T∂_t p_τ, hϕ_τ, h^0 + Γ_0 p_τ, hϕ_τ, h^0 - u_τ, hϕ_τ, h^0 t +∫_0^Tν_t^2 p_τ, hϕ_τ, h^1 -ε_Δν_t^2 e_τ, hϕ_τ, h^1 + ∂_t u_τ, hϕ_τ, h^1 t +∫_0^Tε_ω∂_t e_τ, hϕ_τ, h^2 - Γ_0 p_τ, hϕ_τ, h^2+χ^(2)∂_t( e_τ, h e_τ, h)ϕ_τ, h^2 - a_τ, hϕ_τ, h^2 t +∫_0^T∇ e_τ, h∇ϕ_τ, h^3 + ε_Δν_t^2 e_τ, hϕ_τ, h^3 - ν_t^2 p_τ, hϕ_τ, h^3+∂_t a_τ, hϕ_τ, h^3 t + u_τ, h(0)ϕ_τ, h^0(0) + p_τ, h(0)ϕ_τ, h^1(0) + a_τ, h(0)ϕ_τ, h^2(0) + e_τ, h(0)ϕ_τ, h^3(0) +∂_t u_τ, h(0)ϕ_τ, h^4(0) +∂_t p_τ, h(0)ϕ_τ, h^5(0) +∂_t a_τ, h(0)ϕ_τ, h^6(0) +∂_t e_τ, h(0)ϕ_τ, h^7(0) +∑_n=1^N(∂_t p_n, hϕ_n, h^8 + Γ_0 p_n, hϕ_n, h^8 - u_n, hϕ_n, h^8 +ν_t^2 p_n, hϕ_n, h^9 -ε_Δν_t^2 e_n, hϕ_n, h^9 + ∂_t u_n, hϕ_n, h^9 +ε_ω∂_t e_n, hϕ_n, h^10 - Γ_0 p_n, hϕ_n, h^10+χ^(2)∂_t( e_n, h e_n, h)ϕ_n, h^10 - a_n, hϕ_n, h^10 +∇ e_n, h∇ϕ_n, h^11 + ε_Δν_t^2 e_n, hϕ_n, h^11 - ν_t^2 p_n, hϕ_n, h^11+∂_t a_n, hϕ_n, h^11) , F_τ, h (ϕ_τ, h) ∫_0^T f_τ, hϕ_τ, h^3 t+ ∑_n=1^N f_τ, h(t_n)ϕ_n, h^3 + u_0, hϕ_τ, h^0(0)+ p_0, hϕ_τ, h^1(0)+ a_0, hϕ_τ, h^2(0)+ e_0, hϕ_τ, h^3(0) +∂_t u_0, hϕ_τ, h^4(0)+∂_t p_0, hϕ_τ, h^5(0)+∂_t a_0, hϕ_τ, h^6(0)+∂_t e_0, hϕ_τ, h^7(0) . In our implementation, we use a local test basis supported on the subintervals I_n in Problem <ref>. This leads to a time marching scheme with the local Problem <ref> to be solved in each of the time steps. We comment on the fully discrete Problem <ref>. 1pt 0pt 0pt * In constrast to anselmannGalerkinCollocationApproximation2020, collocation conditions are a result of the imposed global C^1-regularity. As already mentioned in [Remark 3.4]anselmannGalerkinCollocationApproximation2020 the approach of imposing global C^1-regularity is also valid. We also show this in detail in Appendix <ref>. * After breaking Problem <ref> into local problems, we can put the equations of the proposed GCC^1(3) approach in their algebraic forms (cf. Appendix <ref>) and get a nonlinear system of equations. The common approach of handling the nonlinear problem is a linearization by means of Newton's method. In every Newton step we have to solve a linear system of equations, of which we give a detailed description in the Appendix <ref>. * Hermite polynomials are ideal for wave problems, particularly those with high frequencies. Moreover, they offer significant advantages for the numerical solution of the nonlinear wave equations (<ref>) by reducing the computational cost of assembling matrices and residuals in Newton's method. 
These advantages result from the sparse structure of the nonlinear term given by Hermite polynomials as trial functions: ∫_I_nχ^(2)∂_t( e e)ζ_1 t =χ^(2)(| e_0| | e_1| | e_2| | e_3|)^⊤ ∫_I_n(∂_t(ξ_iξ_j))_i, j=0,…, 3 t_= (-1, 0, 1, 0)∈^4× 4 ( e_0 e_1 e_2 e_3) =χ^(2) e_2 e_2-χ^(2) e_0 e_0 , where e_i_i=0^3 denote the coefficient functions of the i-th time basis function in V_h. This also applies to third order nonlinearities χ^(3) with two non-vanishing terms. ∫_I_nχ^(3)∂_t( e^2 e)ζ_1 t =χ^(3)(| e_0|^2 | e_1|^2 | e_2|^2 | e_3|^2)^⊤ ∫_I_n(∂_t(ξ_iξ_jξ_k))_i, j, k=0,…, 3 ( e_0 e_1 e_2 e_3) =χ^(3) e_2^2 e_2-χ^(3) e_0^2 e_0. We consider the abstract space-time discrete form (<ref>). The global formulation puts the work in this section in context to the abstract problem introduced in Definition <ref> and, together with an analogous formulation of the solution operator, becomes useful in the next section. Discrete Solution Operatorabstract-discrete Consider Problem <ref> given in variational formulation. Then the solution operator S_τ, h to (<ref>), which maps g_τ, h^ e, f_τ, h and the initial conditions v_0, h to the solution v_τ, h is defined through S_τ, h D(S_τ, h)⊂ C^1(I̅; V_h)× C^1(I̅; V_h, 0)×V_h^4 → X_τ, h^4, (g_τ, h^ e, f_τ, h, v_0, h) ↦ v . In order to ensure well-posedness we assume that S_τ, h is a bijection. Then, for given data f_τ, h we find a unique solution ( e_τ, h, a_τ, h, p_τ, h, u_τ, h) = v_τ, h∈ X_τ, h^4 which satisfies S_τ, h(g_τ, h^ e, f_τ, h, v_0, h) = v_τ, h. We note that S_τ, h, v_τ, h, g_τ, h^ e and f_τ, h approximate S, v and f in (<ref>). In the next section we introduce two types of ANNs, which we consider for training a discrete solution operator U≈S_τ, h . U is trained with accurate approximations v_τ, h obtained by numerical solutions and is subsequently used to solve an optimal control problem. § ARTIFICIAL NEURAL NETWORKS Neural networks exist in various types. In this section we briefly review the architecture of the neural networks that we use below to learn the discrete solution operator defined in Problem <ref>. In Section <ref>, the neural networks are then applied to accelerate optimization processes for Dirichlet boundary control of the pump pulse for terahertz generation. §.§ Fourier Neural Operators (FNO) FNO is a recently introduced type of ANN that proposes a novel method for combining neural networks with Fourier analysis, mainly to solve differential equations liFourierNeuralOperator2020. Within the framework of Neural Operators, a universal approximation theorem and error bounds have been developed for the FNO in kovachkiUniversalApproximationError2021. The key innovation of the FNO is a new type of layer, the Fourier layer (cf. Fig. <ref> and (<ref>)). In the Fourier layer the Fourier series is used to efficiently compute the convolution of the input function with a set of integration kernels, represented in the frequency domain. Here, we briefly introduce FNOs for complex-valued functions v∈ L^1(𝕋^d) on the unit torus 𝕋^d, in order to restrict ourselves to 1-periodic functions. For details we refer to [Section 3.1]grafakosClassicalFourierAnalysis2014. The Fourier transform of a function v𝕋^d →^n is denoted by L^2(𝕋^d;^n) →ℓ^2(^d;^n). Similarly, ^-1ℓ^2(^d;^n)→ L^2(𝕋^d;^n) denotes the Fourier inversion. More precisely, for a function v ∈ L^2(𝕋^d;) the Fourier transform is defined by (cf. [Definition 3.1.1]grafakosClassicalFourierAnalysis2014) ( v) (l) = ∫_𝕋^dv(x) exp(-2πlx) x , l ∈^d . For a function w ∈ℓ^2(^d;) is given by Fourier inversion (cf. 
[Proposition 3.2.5]grafakosClassicalFourierAnalysis2014) by(^-1 w) (x) = ∑_l ∈^d w (l) exp(2πlx) , x ∈𝕋^d . For vector-valued functions, the formulas (<ref>) and (<ref>) are applied componentwise. We note that, for an integrable function u on ^n with Fourier transform u, the Fourier series and Fourier inversion can be seen as the restriction of the classical Fourier transform to ^n. Together with the Poisson summation formula [Theorem 3.2.8]grafakosClassicalFourierAnalysis2014, we know that the Fourier expansion equals the periodization of the function v on ^n. This gives us a perspective on the extension of the Fourier series, and therefore FNOs, to non-periodic functions. Fourier Neural Operator (FNO)fno An FNO N L^2(𝕋^d,^d_i)→ L^2(𝕋^d, ^d_o) is a mapping consisting of a concatenation of functions such that N( v)=Q∘L_N∘⋯∘L_1∘R( v) , with a lifting operator R and a projection operator Q, represented by matrices R∈^n× d_i and Q∈^d_o× n, respectively 2 R L^2(𝕋^d; ^d_i) → L^2(𝕋^d; ^n) , v ↦R v , (R v)(x) = R (v(x)), R∈^n× d_i . Q L^2(𝕋^d; ^n) → L^2(𝕋^d; ^d_o) , v ↦Q v , (Q v)(x) = Q (v(x)), Q∈^d_o× n . A Fourier layer L_k is given by L_k( v) =σ(W_k v + b_k + ^-1P_n ( v)_K_n v) , where W_k∈^n× n is a weight matrix and b_k∈^n a bias vector and P_n^d→^n× n, l ↦ P_n(l)∈^n× n are the weights of the modes l∈^d and σ^n→^n is an activation function, for instance the tanh function is applied. Concerning Definition <ref> we note the following. 1pt 0pt 0pt * Let us consider ( v)(l) ∈^n. In order to ensure that K_n v in (<ref>) is real-valued for real-valued v conjugate symmetry in the parametrization is enforced by P_n(-l)_j, k = P_n^*(l)_j, k , j=1,…, m, k=1,…, n ∀ l ∈ Z_l_max; * In (<ref>) and (<ref>), R and Q are both locally acting operators. They are represented by the matrices R and Q, that are to be trained. * We restrict the domain of the FNO to 𝕋^d, in order to consider only 1-periodic functions. The Poisson summation formula lets us lift this restriction. Similarly, in [Lemma 41]kovachkiUniversalApproximationError2021 the authors show that FNOs can be generalized to domains with Lipschitz boundary. * The activation function σ^n→^n with σ v= (σ(v_1),…, σ(v_n))^⊤∈^n is a componentwise applied scalar- and real-valued, non-polynomial function σ∈ C^∞(), which is globally Lipschitz-continuous. We sketch the FNO in Fig. <ref>. The key feature of FNO architectures are the convolution-based integral kernels K_n, that are non-local. This enables learning operators with a global character, such as operators arising in the simulation of PDEs. Another major factor in the efficiency is that in the discrete case we are able to use the Fast Fourier Transform (FFT) to compute K_n v in (<ref>), if the computational mesh is uniform. This is sketched in the following. §.§.§ The Discrete Setting Let the D_J⊂𝕋^d be a set of J ∈ℕ uniformly distributed points with resolution s_1 ×⋯× s_d = J in the domain 𝕋^d, v ∈^J × n and (v) ∈^J × n. The multiplication by the weight tensor P ∈^J × m × n is defined by the operation ( P · ( v) )_k, l = ∑_j=1^n P_k, l, j ( v)_k, j , k=1,…, J, l=1,…, m. The Fourier transform can be replaced by the Fast Fourier Transform (FFT). For v ∈^J × n, k = (k_1, …, k_d) ∈ℤ_s_1×⋯×ℤ_s_d, and x=(x_1, …, x_d) ∈𝕋^d, the FFT and its inverse ^-1 are defined as ( v)_l(k) = ∑_x_1=0^s_1-1⋯∑_x_d=0^s_d-1 v_l(x_1, …, x_d) exp- 2π∑_j=1^dx_j k_j/s_j , for l=1,…, n, (^-1 v)_l(x) = ∑_k_1=0^s_1-1⋯∑_k_d=0^s_d-1 v_l(k_1, …, k_d) exp2π∑_j=1^dx_j k_j/s_j , for l=1,…, n. 
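A compact sketch of a single Fourier layer (<ref>) in one space dimension may help to fix ideas. It is written with PyTorch, assumes a channel-last layout (batch, grid, width) on a uniform grid, and relies on the real FFT to enforce the conjugate symmetry of the mode weights automatically; the names are illustrative and this is not the implementation used in this work.

import torch

class FourierLayer1d(torch.nn.Module):
    # One layer L_k(v) = sigma(W v + b + F^{-1}(P . F(v))), truncated to the lowest `modes` frequencies.
    def __init__(self, width, modes):
        super().__init__()
        self.modes = modes
        self.local = torch.nn.Linear(width, width)   # pointwise part W_k v + b_k
        scale = 1.0 / (width * width)
        self.P = torch.nn.Parameter(scale * torch.randn(modes, width, width, dtype=torch.cfloat))

    def forward(self, v):                            # v: (batch, grid, width), real-valued
        v_hat = torch.fft.rfft(v, dim=1)
        out_hat = torch.zeros_like(v_hat)
        out_hat[:, :self.modes] = torch.einsum('bkw,kwo->bko', v_hat[:, :self.modes], self.P)
        conv = torch.fft.irfft(out_hat, n=v.size(1), dim=1)   # kernel part K_n v
        return torch.tanh(self.local(v) + conv)

A full FNO then lifts the input with R, stacks a few such layers and projects back with Q, both realized as pointwise linear maps.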
The parameters W, b, P of the Fourier layers in Definition <ref> are learned in Fourier space, where they can be expressed in terms of the Fourier coefficients of the input functions. When the network is used to evaluate functions in physical space, it simply amounts to projecting onto the basis functions exp2π i ⟨ x, k ⟩, which are well-defined for all x ∈^d. This allows the network to evaluate functions at any desired resolution, without being tied to a specific discretization scheme. The implementation of the FNO using the FFT restricts the geometry and discretization to uniform mesh discretizations of 𝕋^d. In practice FNOs can be extended to other domains by padding the input with zeros. The loss is computed only on the original domain during training. The Fourier neural operator extends the output smoothly to the padded domain, as discussed in kovachkiNeuralOperatorLearning2023. §.§ Recurrent neural networks with memory Recurrent neural networks (RNNs) are an extension to Feed-forward neural networks that use an activation variable a_n^k∈ℝ^p_k to propagate information over discrete time steps, making them suitable for time series and sequential data. An extension of this model uses network nodes with memory. These neural networks are effective in modeling long-term dependencies and can overcome the vanishing gradient problem that recursive neural networks face hochreiterGradientFlowRecurrent2001. In our work, we employ Gated Recurrent Units (GRUs) choLearningPhraseRepresentations2014. Gated Recurrent Unit (GRU)gru A Gated Recurrent Neural Network 𝒩 X^N→ Y^N maps a sequence of elements of a finite dimensional inner product space X to a sequence of elements of a finite dimensional inner product space Y. It consists of a concatenation multiple GRUs, i.e. 𝒩 = G_1∘⋯∘G_L A GRU G_k^N×^q→^N× q×^q, k∈{1,…, L} is defined by the following equations: z_n^k = σ_z ( W_k^(z) h_n^k-1 + U_k^(z) h_n-1^k + b_k^(z)) , r_n^k = σ_r ( W_k^(r) h_n^k-1 + U_k^(r) h_n-1^k + b_k^(r)) , h_n^k = z_n^k ⊙ h_n-1^k + (1 - z_n^k) ⊙σ_h ( W_k^(h) h_n^k-1 + U_k^(h) ( r_n^k ⊙ h_n-1^k) + b_k^(h)) , where n∈{1,…, N}, ⊙ denotes the element-wise product and U_k^( · ) W_k^( · ) and b_k^( · ) are weight matrices and bias vectors determined by training. By h_n^0 we denote the input to the GRU. In (<ref>), the update gate vector z_n^k ∈^q defined in (<ref>) determines the contribution of the previous hidden output h_n-1^k to the current output h_n^k (cf. (<ref>)), while the reset gate vector r_n^k ∈^q defined by (cf. (<ref>)) controls the nonlinearity of the cell. Together, they control the memory of a GRU cell, determining to what extent information from the past is carried over to the present output. § OPTIMAL CONTROL WITH NEURAL OPERATORS Optimal control problems (OCP) are important in several branches of science and engineering. Finding efficient solutions to these problems remains a challenging task. Neural operators can represent the dynamics of complex systems efficiently. Their combination with OCPs has the potential to yield novel solutions by replacing the oftentimes costly solution of the forward problem. In this section, we investigate the use of neural operators for solving OCPs, with a focus on Dirichlet boundary conditions as constraints. We apply this technique to the problem of THz generation in a periodically poled crystal (cf. Section <ref>) and propose a novel approach to optimize the input pulse with the goal to maximize the efficiency of optical to THz generation. 
This can be formulated as an optimal boundary control problem, where we seek the Dirichlet boundary conditions that yield the maximum optical to THz conversion. §.§ Optimal Dirichlet boundary control First, we state a general optimal Dirichlet boundary control problem, which serves as the foundation for our proposed method. Function spaces for optimal control problemfn-ocp Define S=I×Γ_D and Q=I×𝒟 and the set of admissible controls U_ad=u∈ L^2(S) u_a≤ u≤ u_b a. e. in S, u_a, u_b∈ L^2(S) . Let 𝒥 be a Gateaux differentiable functional. For the state y∈ W(I) and the control u∈ U_ad we consider the following optimization problem. Optimal Dirichlet boundary controlopcon For 𝒥 W(I)× L^2(S)→, ( y, u)↦𝒥( y, u) solve min_u∈ U_ad 𝒥( y, u) = 𝒢( y) + α/2u^2 , α>0 , subject to y =S(u, f, y_0) . The operator S is the abstract solution operator of the PDE introduced for our application in (<ref>), by which the optimization problem is constrained. The control u enters through the Dirichlet boundary condition. The functional 𝒢 W(I) → is left to be defined for the application. We now derive an OCP similar to Problem <ref> for a setting where Problem <ref> provides the initial and boundary conditions and the PDE for the optimization problem. In practice, we define the functional 𝒢 in (<ref>) such that radiation at frequency f_Ω is optimized. Since max𝒥= - min (-𝒥) is satisfied, we restrict ourselves to the description of minimization problems. Cost function for optimizing generation of THz radiationcost-thz Let y_c W_nl(I) → L^2(I; ) and ψ→ be given by y_c( y)= ∫_B_ε(c) y(x, t) x , ψ(ν) = 1_(f_Ω-r, f_Ω-r)expr^2/(ν-r-f_Ω)(ν+r+f_Ω) , r>0 , where the ball B_ε(c) around the control point c∈𝒟 is chosen such that B_ε(c)∩Γ_D=∅ is satisfied. We define the cost functional 𝒢 as 𝒢_Ω( y)= ∫_f_Ω-r^f_Ω+r (y_c( y, t))(ν)^2ψ(ν)ν . We note that y_c∈ L^2(I; ) and therefore its Fourier transform exists. The parameter r is chosen such that ψ is sufficiently close to the indicator function at f_Ω. In the discrete case, we specify this more precisely. For the state E∈ W_nl(I) and the control g^ e∈ U_ad we study the following optimization problem. Optimal Dirichlet boundary control for THz generationopconTHz For the solution operator (<ref>) to Problem <ref>, solve the optimization problem max_u∈ U_ad 𝒥( E, g^ e) = 𝒢_Ω( E) + α/2g^ e^2 , subject to ( u, p, a, e)^⊤=S(g^ e, f , v_0) . We provide realistic parameters for this problem in Section <ref>. In the formulation of Problem <ref>, we can replace the solution operator S, the data f, and v_0 with their discrete counterparts as defined in Definition <ref> in a straightforward manner. We can evaluate the cost function in the discrete setting using an FFT. While different methods exist for the solution of OCPs similar to <ref>, to the best of our knowledge the nonlinear wave equation of Problem <ref> has not yet been investigated in theory or practice. Within this work we concentrate on the algorithmic and practical aspects of solving Problem <ref>. For an overview over optimal control theory and solution methods we refer to manzoniOptimalControlPartial2021,hinzeOptimizationPDEConstraints2009a and references therein and, more specifically for hyperbolic problems, to gugatOptimalBoundaryControl2015. 
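The discrete evaluation of the cost functional via an FFT, mentioned above, can be sketched as follows. The sketch assumes that y_c is real-valued, has already been averaged over B_ε(c) and is sampled at uniform points in time with f_Ω > 0, so that the one-sided spectrum suffices; ψ is read as the usual smooth bump supported on (f_Ω - r, f_Ω + r), and the names are illustrative.

import numpy as np

def g_omega(y_c, dt, f_omega, r):
    # Windowed spectral energy of t -> y_c(t) around the target frequency f_omega.
    y_hat = np.fft.rfft(y_c) * dt                        # approximate Fourier transform on [0, T]
    freqs = np.fft.rfftfreq(y_c.size, d=dt)
    psi = np.zeros_like(freqs)
    x = freqs - f_omega
    inside = np.abs(x) < r
    psi[inside] = np.exp(r**2 / (x[inside]**2 - r**2))   # smooth bump, vanishing at the band edges
    df = freqs[1] - freqs[0]
    return np.sum(np.abs(y_hat)**2 * psi) * df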
§.§ Optimal Control for THz generation with Neural operators Even in one space dimension solving Problem <ref> by using the variational space-time methods we presented so far is infeasible for scenarios of practical interest due to the substantial computational burden imposed by the solution of the forward problem; cf. margenbergAccurateSimulationTHz2023. In order to focus the presentation on the essential ideas, we restrict ourselves to the one-dimensional case for the remainder of this section. In margenbergAccurateSimulationTHz2023 we observed that this is a reasonable restriction to make from a practical point of view. We propose an algorithm that relies on ANNs to accelerate the solution of the PDE, allowing for a more efficient optimization of the control parameters. In Fig. <ref> we sketch its key idea. The goal is to train neural operators U based on accurate numerical simulations which generalize well to U_ad. Then they are used as the forward solver in the optimal Dirichlet boundary control problem. The first cornerstone of the method is to consider only controls that are feasible in practice. By this we can implement a differentiable sampler of Dirichlet boundary data in a deep learning library of our choice and concatenate it with the solution operator. Admissible controls for Problem <ref>diri-ocp The Dirichlet data, i. e. the control in Problem <ref>, is of the form g(t) =exp-(2log 2(tτ)^2)^p∑_i=1^n a_icosφ_i + 2 π1/2ζ_i t^2 + f_it . For some fixed n the set of parameters Ξ and a sampler P which maps these parameters to the pulses are given byΞ ={(τ, p, a_0, φ_0, ζ_0, f_0,…, a_n, φ_n, ζ_n, f_n)∈_+^2+4nτ≤τ_max, p≤ p_max, a_i≤ a_max, φ_i≤φ_max, ζ_i≤ζ_max, f_i≤ f_max ∀ i=1,…, n} , PΞ → L^2(Γ_in× I), (τ, p, a_0, φ_0, ζ_0, f_0,…, a_n, φ_n, ζ_n, f_n)↦ g(t) . In (<ref>), the parameter τ is the full width half maximum, p the order of the supergaussian, a_i the amplitude, ϕ_i the phaseshift, ζ_i the quadratic chirprate and f_i the center frequency. The upper bounds in (<ref>) are given through the limitations of the experimental setup. Note that the image of P[Ξ]⊂ L^2(S) is the set of admissible controls U_ad in Problem <ref>. The second idea of the method is the differentiability of a program written with an established ANN library: For most operations on the datastructures of these libraries a method for calculating its gradient is already implemented. The last idea in our algorithm builds on the periodicity of the material parameters (cf. Fig. <ref>). The example in this work is based on periodically poled crystals, where χ^(2) govern the nonlinear processes. The periodicity of χ^(2) can be used to learn a solution operator U to the forward problem in only one period of the χ^(2) parameter. We formalize this concept in the discrete setting: Consider the discrete solution operator given in (<ref>). The goal is to approximate S_τ, h(g(t)) by U∘⋯∘U(g(t))U≈S_τ, h(g(t)) . On the other hand, for the efficient solution of Problem <ref> we don't need the full space-time solution. We only need the solution at some collocation points J_𝒟 =⋃_i=1^m x_𝒟, i , x_𝒟, iΛ i∈𝒟 , where m is the number of periods in the crystal. For the generation of training data we evaluate and save the solution E(x, t) at the points in J_𝒟. Motivated by the fact that J_𝒟, i+1 is the set J_𝒟, i, shifted in positive x_1-direction, we construct an operator U which maps the time trajectory of the electric field E at period i to the time trajectory of E at period i+1. 
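The sampler P of Definition <ref> is straightforward to implement in a differentiable way. The following PyTorch sketch evaluates (<ref>) on a given time grid; it assumes one tensor per parameter group of ξ, leaves the box constraints of Ξ (for instance by clamping) to the caller, and uses illustrative names only.

import math
import torch

def sample_pulse(t, tau, p, a, phi, zeta, f):
    # g(t) = exp(-(2 ln2 (t/tau)^2)^p) * sum_i a_i cos(phi_i + 2 pi (0.5 zeta_i t^2 + f_i t))
    # t: 1d tensor of time points; a, phi, zeta, f: tensors of length n (one entry per sub-pulse).
    envelope = torch.exp(-(2.0 * math.log(2.0) * (t / tau) ** 2) ** p)
    phase = phi[:, None] + 2.0 * math.pi * (0.5 * zeta[:, None] * t[None, :] ** 2 + f[:, None] * t[None, :])
    return envelope * torch.sum(a[:, None] * torch.cos(phase), dim=0)

Declaring tau, p, a, phi, zeta and f with requires_grad=True makes the whole map ξ ↦ g(t) differentiable, which is what the optimization loop below relies on.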
In order to construct a suitable solution operator we define the space V_τ(I)= w∈ L^2(I; ) w(t)I_n∈ℙ_3(I_n, ) . Neural Operator for Problem <ref>no Let i∈1,…, m, x_a∈ J_𝒟, i, x_b=x_a+ e_1Λ∈ J_𝒟, i+1 and p(t; x_a)∈ V_τ(I). The neural operator U= I∘N∘T is constructed such that U V_τ(I) → V_τ(I) , p(t; x_a) ↦p̂ (t; x_b) , where N is one of the networks introduced in Section <ref>, T evaluates p∈ V_τ(I) at the endpoints of the subinterval and I is the Hermite-type interpolator (<ref>) applied on each subinterval I_n, 2 T V_τ →^N+1 , p(t; x_a) ↦ ( p(t_0; x_a),…, p(t_N; x_a)) , I^N+1 → V_τ , u ↦p̂(t; x_b) . For the evaluation of I we use automatic differentiation in order to obtain ∂_tN. Further, we note that the coefficients of the polynomials and the values of p̂ at the time endpoints of I_n coincide, which makes the evaluation computationally cheap. With these preparations, the computation of p(t;x_a)=U(g(t)), x_a∈ J_𝒟, 0=Γ_in is well-defined and p(t;x_a+Λ) is the time trajectory of the electric field E at period 1. We can iterate this to obtain the time trajectory at period i, p_i-1(t; x_a+Λ (i-1))=U^i-1(g(t))=p_i(t; x_a+Λ i) . With the solution operator defined, we can formulate the algorithm for the solution of the optimal Dirichlet boundary control problem. Through the differentiability of P∘U we can calculate the gradients of the parameters ξ∈Ξ with respect to the cost function in Problem <ref>. Then we can use the well-known gradient descent algorithm or Newton's method for the solution of Problem <ref>. In Algorithm <ref> we describe the steps using a simple gradient descent method, which can also be tracked in Fig. <ref>. The extension to Newton's method is straightforward. In Appendix <ref> we give an abstract formulation of how a solution operator for a full space-time approximation U can be obtained. Optimal control based on deep learningopcondl § NUMERICAL EXPERIMENTS We present numerical studies of the proposed neural operators for solving optimal control problems. First, we investigate and validate their ability to efficiently represent the dynamics of a simple test case. Then we extend the test to our method of optimal control via neural operators by adding a set of constraints and solving the resulting optimal control probem. Finally, we demonstrate the feasibility of the proposed approach by applying our methods to the physical problem of THz generation and compare the results to experimental data. In this section we only consider settings in one space dimension, since otherwise numerical simulations are too time-consuming. In margenbergAccurateSimulationTHz2023 we also restricted ourselves to 1D without notable limitations. §.§ Implementation aspects We implemented our numerical simulations using  arndtDealIILibrary2021, a finite element toolbox that offers efficient and scalable parallelization with MPI. To solve the nonlinear systems of equations, we employ a Newton-Krylov method. For the linear systems of equations that arise for each Newton iteration, we use the generalized minimal residual method (GMRES) with the algebraic multigrid solver MueLu MueLu. MueLu serves as a preconditioner with a single sweep for every GMRES iteration. We implemented the ANNs and the optimal control method proposed here with the interface of  paszkePyTorchImperativeStyle2019, . PyTorch also supports parallelization with MPI, which is used throughout this work. §.§ Domain truncation In numerical simuations, wave propagation and other physical processes have to be truncated to bounded regions. 
To this end, we extend 𝒟 by a Perfectly Matched Layer (PML) on the right-hand side 𝒟_F=𝒟∪𝒟_PML. We only consider the 1D case where 𝒟=[0, L]⊂ is a bounded and closed interval. The PML can be written as 𝒟_PML=(L, L_𝒟_F] with L_PML L_𝒟_F-L. Inside the PML-region we have the problem ∂_tt P + Γ_0∂_t P + ν_t^2 P - κ_xε_Δν_t^2 E =0 on 𝒟_PML× I , ∂_t R+α_x R -ε_ωσ_x E =0 on 𝒟_PML× I , ∂_t Q + α̃_x Q - σ̃_x∂_x E = 0 on 𝒟_PML× I , -∇·κ_x^-1∇ E + ∂_x Q + κ_xε_ω∂_tt E + κ_x(ε_Ω- ε_ω)ν_t^2 E - ν_t^2P -Γ_0∂_t P +∂_tε_ωσ_x E - α_x R =0 on 𝒟_PML× I , E(0) = 0 ∂_t E(0) = 0 on 𝒟_PML , E = 0 on Γ_D∩𝒟̅_̅P̅M̅L̅× I . A more in-depth presentation with further discussion and references for PML can be found in margenbergAccurateSimulationTHz2023. §.§ Numerical convergence test of the space-time finite element method Here we verify the numerical methods we developed for the forward problem. To this end we prescribe a function as the solution to the equations in Problem <ref>. We use the residual of this function as a source term, which in turn makes the prescribed function the solution. We use the Galerkin–collocation method proposed in Problem <ref> for the time discretization and the finite element space V_h defined in (<ref>) for the spatial discretization. Consequently, we expect fourth-order convergence. We choose a 1D test case in the domain 𝒟=[0, 0.001955] over the time interval I=[0, 1e-13]. As the electric field we choose E(x, t)=sin(2 πω_2(x-n_2 t))+ sin(2 πω_1(x-n_1 t)) . To compute the error in the physical domain and exclude error contributions from within the PML region, we introduce a weighting function l𝒟→ that is equal to one in the physical domain and zero in the PML region: l(x)= 0, x∈𝒟_PML , 1, x∈𝒟 . Furthermore we multiply l by the source term to restrict it to the physical domain. Thereby the solution inside 𝒟 is given by (<ref>). Then it propagates into 𝒟_PML where it is attenuated to the point of vanishing. We study the errors e_ Z= Z(x, t)- Z_τ, h(x, t) for Z∈{ E, A, P, U} in the norms e_ Z_L^∞(L^2)=max_t∈ I∫_𝒟| e_ Z|^2 x^1/2 and e_ Z_L^2(L^2)=∫_I∫_𝒟| e_ Z|^2 x t^1/2 . We abbreviate the error quantities e_ Z_L^∞(L^2) and e_ Z_L^2(L^2) by L^∞-L^2( Z) and L^2-L^2( Z) for z∈ E, A, P, U. The errors are calculated by simultaneous refinement in space and time. In Table <ref> we observe the fourth order convergence in the variables E and A. For the auxiliary variables U and P we observe the same convergence rates in Table <ref>, which highlights the advantage of modelling these auxiliary variables with differential equations; cf. margenbergAccurateSimulationTHz2023. §.§ Test of the solution operator and optimal control methodology In order to evaluate our algorithm, we construct an artificial test case similar to a numerical convergence test studied before. This test aims to provide empirical evidence of the algorithm's capability to solve complex problems. By this rigorous evaluation, we hope to gain insights into its strengths and weaknesses and identify optimal parametrizations that may be employed in the practical test case. §.§.§ Training and testing of the solution operator In a first step we construct a test for the solution operator, where we consider plane waves in vacuum. We generate the training data from plane waves at frequencies f∈{291.56, 290.56, 2·291.56, 2·290.56}. These frequencies are in the range of what we encounter in practice for the THz generation. 
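The errors reported here and in the operator tests below are measured in the norms (<ref>). On a uniform space-time grid they can be evaluated with a simple rectangle rule, as in the following sketch (array layout and names are illustrative):

import numpy as np

def error_norms(e, dx, dt):
    # e: array of shape (n_time, n_space) holding the pointwise error on a uniform space-time grid
    l2_space = np.sqrt(np.sum(np.abs(e)**2, axis=1) * dx)   # ||e(t_k)||_{L^2(D)} per time level
    linf_l2 = l2_space.max()                                # L^infty(L^2) error
    l2_l2 = np.sqrt(np.sum(l2_space**2) * dt)               # L^2(L^2) error
    return linf_l2, l2_l2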
For the test we choose 2 plane waves with frequencies f_1, f_2 drawn from a continuous uniform distribution with support (290.56, 2·291.56). Then we add two more plane waves by choosing the frequencies f_3=f_1+1 and f_4=f_2+1. We choose a 1D test case in the domain 𝒟=[0, 3.e-14c_0] over the time interval I=[0, 1e-14]. The spatial domain has 3 periods with Λ=1.e-14c_0. Therefore, the solution operator is always applied 3 times onto itself. This is done throughout this subsection. We test FNOs and GRUs. Each of these models is trained and evaluated with varying numbers of layers and layer widths to assess its performance. In Table <ref> we collect all configurations used here. For some of them, Fig. <ref> shows the loss curves. The legend entries are named according to the rows and columns in Table <ref>. It is evident that although both models achieve the same level of accuracy, the GRU exhibits significant instability and oscillation in its loss. We attempted to address this issue by using an annealing learning rate during training and conducted extensive tuning of the hyperparameters, but the instability persisted. We note that one batch of training data used for Fig. <ref> already contains 10000 timesteps, so the issue could be related to the long-term stability of the GRU. However, the prediction of 10000 timesteps is low compared to our practical example in the following section. The FNO on the other hand converges fast, especially for networks with 8 layers compared to the ones with 4 layers (cf. <ref> (a)). Due to the simplicity of the problem setting, the best models exhibit a similar loss across all architectures, even though the number of trainable parameters varies by multiple orders of magnitude; cf. Table <ref>. Although the GRUs are smaller in these scenarios, the average training time is eight times longer for the same number of layers l and width w. In order to show the advantage of the higher regularity time discretization, we use different loss functions during training. We consider the three loss functions l_C^1^2( E,Ê) =1/I𝒟∫_I∫_𝒟|Ê- E|^2 x t , l_∂ t^2( E,Ê) =1/N∑_i=1^NÊ_i- E_i^2+1/N∑_i=1^N∂_t Ê_i-∂_t E_i^2 , l^2( E,Ê) =1/N∑_i=1^NÊ_i- E_i^2 . Here l_C^1 is motivated by the higher order time discretization, and to evaluate the integrals we integrate the Hermite-type polynomials on the subintervals analytically. The second loss makes use of the data provided by the higher order time discretization but only considers the error in the collocation points (the subinterval endpoints). Since the losses themselves are difficult to compare, we study the errors e_ E=Ê(x, t)- E(x, t) in the norms given in (<ref>) with the abbreviations L^∞-L^2( E) and L^2-L^2( E). In Fig. <ref>, we evaluate the networks on successively refined time meshes in line with a numerical convergence test. For each refinement we use a new ANN which is trained as mentioned at the beginning of this section. The errors are then evaluated during the testing of the ANNs, which we also described above. For the GRU the test results are stable, despite the high oscillations observed during the training time. Furthermore, all three loss functions lead to similar results. The GRUs do not benefit from the two loss functions which include the time derivative. The FNO on the other hand profits from the added information and is otherwise stuck at high errors. Even at high time resolution, the network encounters difficulty in distinguishing frequencies that are just 1 apart.
Including the time derivative via the loss functions (<ref>) or (<ref>) is important for effectively training the network. However, the difference between them is negligible. Interestingly we are able to observe linear convergence for both networks. §.§.§ Computational efficiency of the solution operator For the training of the ANNs, we implemented a distributed training algorithm similar to the one in lianCanDecentralizedAlgorithms2017. In contrast to lianCanDecentralizedAlgorithms2017, we sychronize the network parameters by averaging them over all processes. For the available resources of 5 nodes, each with 2 GPUs, we are unable to determine any significant performance gains from considering only the neighboring MPI processes. Our distributed implementation is not equivalent to the sequential implementation due to the synchronization of the network parameters and not the gradients, which reduces the computational overhead. In the tests we run in this work, there is no disadvantage to this approach, yielding the same accuracies up to machine precision. In lianCanDecentralizedAlgorithms2017 the authors show that their decentralized algorithm, which is related to our approach, leads to the same convergence rate as the vanilla SGD. Fig. <ref> shows the strong scaling of the algorithm and the corresponding energy consumption for the two network architectures under consideration. The tests are run on an HPC cluster with 5 GPU nodes, each with 2 Nvidia A100 GPUs and 2 Intel Xeon Platinum 8360Y CPUs. The scaling tests are performed with 1 GPU as a baseline and with 1 to 5 nodes, always using both GPUs on the node. The number of MPI processes are equal to the number of GPUs such that one MPI process uses one GPU and CPU. We further note that, as shown in Table <ref>, the training and evaluation times are equal across one architecture, although the number of parameters differ significantly (cf. Table <ref>). However the implementation in PyTorch is optimized for larger networks and the ones we use are too small, to make a difference in computation times. Fig. <ref> and <ref> (a) illustrate the near-optimal scaling performance for up to 4 GPUs. For 6 GPUs, the impact of synchronization costs becomes noticeable, as shown in Fig. <ref> (a) and saturates afterwards. A further comparison of our implementation with the asynchronous implementation available only in PyTorch's Python interface would require significant effort since we have exclusively used PyTorch's interface. Such a comparison for the assessment of our implementation is beyond the scope of this work. Here, we concentrate on evaluating the strong scaling test by means of the speedup S and energy ratio R, 2 S=t_wall(1)/t_wall(n) , R=E(n)/E(1) , where n is the number of GPUs (which coincides with the number of MPI processes in this study), E(n) is the energy consumed by the CPU, memory and GPU and t_wall is the wall clock time. The energy consumption of the CPU and memory is almost constant, with the increase in energy consumption primarily attributable to additional GPUs. Furthermore, the costs for CPU and memory are high and the energy consumption of the GPUs only became larger when using 10 GPUs. Overall our implementation exhibits great performance for this small artificial problem. This is confirmed by the productivity metric P in Fig. <ref> (c), which is defined as the ratio of S and R, as per anselmannEnergyefficientGMRESMultigridSolver2023. 
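Before turning to the optimal control experiments, we sketch the optimization loop of Algorithm <ref> as it is used in the following subsections: gradient steps on the pulse parameters ξ through the differentiable composition of the sampler P and the learned operator U. The sketch assumes a trained operator U acting on one crystal period, a torch implementation of the discrete cost 𝒢_Ω (the NumPy sketch above carries over with torch.fft), and a penalty weight in the spirit of the α/2‖u‖^2 term; all names, defaults and the simple parameter handling are illustrative.

import torch

def optimize_pulse(U, cost, t, params, n_periods, steps=200, lr=1e-2, penalty=1e-13):
    # params: dict with tensors tau, p, a, phi, zeta, f, each created with requires_grad=True
    opt = torch.optim.AdamW(list(params.values()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        g = sample_pulse(t, **params)        # differentiable sampler P (see the sketch above)
        traj = g
        for _ in range(n_periods):           # U applied to itself, one period of the crystal at a time
            traj = U(traj)
        loss = -cost(traj) + 0.5 * penalty * torch.sum(g**2)   # maximize G_Omega, discrete Tikhonov term
        loss.backward()
        opt.step()
    return params

Replacing AdamW by torch.optim.LBFGS (which requires a closure) gives the quasi-Newton variant compared below; projecting the parameters back into the admissible box Ξ after each step enforces the constraints.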
The optimal productivity lies at 6 GPUs and the results are promising for scaling to larger problems. §.§.§ Optimal control through Deep Neural Networks In order to test the methodology we propose in Section <ref>, we construct a simple OCP based on the solution operator we obtained in the last section. Initially, we sample 4 super Gaussian pulses parametrized as in (<ref>) and define Ξ from (<ref>) accordingly (n=4). We choose a supergaussian pulse of order p=6, a full-width half maximum of τ=1.e-14 and the frequencies f_1=291.56, f_2=290.56, f_3=2·291.56, f_4=2·290.56. The time domain is chosen as I=[-8e-15, 8e-15] and the spatial domain D=[-8e-15c_0, 8e-15c_0]. In Table <ref> we list the initial parameters ξ_0∈Ξ of the pulse (cf. Algorithm <ref>, line <ref>) for all tests performed in this section, except for φ_1=φ_2=φ_3=φ_4=ζ_1=ζ_2=ζ_3=ζ_4=0, since they showed low sensitivity. The setting for the OCP is described in Problem <ref>. We choose the cost function such that the amplitudes of the two high frequencies f_3, f_4 are minimized. To this end, we put f_Ω=1/2(f_3+f_4) and r=10 in (<ref>). The solution to this minimization problem is trivial, as setting the amplitude of the two high frequencies to zero would be sufficient to solve the problem. For the ANNs exploiting the linearity is not straightforward: The solution operator has to generalize from plane waves to Gaussian pulses and is a nonlinear operator by construction. Therefore, linearity has to be learned from the training data, and we cannot assume that we always achieve that. We compare two different optimization methods: the AdamW optimizer kingmaAdamMethodStochastic2015, a modified version of stochastic gradient descent, and the L-BFGS liuLimitedMemoryBFGS1989, a quasi-Newton method. In Fig. <ref> we plot the development of the amplitudes and cost function over the epochs. The first row of plots contains the results for the FNOs. The L-BFGS method converges in 1 step and only changes slightly afterwards. However, the two optimization routines don't lead to the same result. The L-BFGS gets stuck in the first local minimum it finds, the AdamW optimizer does not due to the added momentum. We can rule out the penalty parameter as the reason. We set it to β=6.e-14 in all of our tests. Slight differences of other parameters introduced during the optimization and differences in the output of the ANN are further contributing factors. In Table <ref> we compare the final pulse parametrizations and the cost function of the OCP for different networks using the AdamW optimizer. All parameters of the pulse are subject to optimization and can be affected by updates during optimization. For the FNO, the amplitude a_1 is correctly used as the main quantity to control the optimization and other parameters show low sensitivity. The results for the FNO are independent of the parametrization and reach nearly the same values with respect to the parameters. The second row of plots in Fig. <ref> contains the results for the GRUs in solving the OCP. The L-BFGS optimizer stagnates and fails to improve the cost function, while the AdamW optimizer shows some improvements. However, the solution operator resulting from the GRUs can't distinguish between different frequencies. In Table <ref> we observe that the GRUs nearly remove all the lower frequencies. In the previous section the accuracy and convergence behavior of the GRU and FNO were almost the same. 
From the unsatisfactory results for the solution of the OCP with GRUs we conclude that the GRU architecture is not well-suited for this type of problem. Furthermore, the applicability of the ANNs to Problem <ref> is predicted by the ability to approximate the solution operator S, which is difficult to evaluate experimentally. Nevertheless, FNOs are promising tools for this task based on their performance in our experiments. Overall the FNOs solve the test problem and are good candidates for the deployment to the realistic problem from nonlinear optics. §.§.§ Computational efficiency of the optimal control algorithm For the evaluation of the ANN in our optimal control algorithm we only use one GPU on a single node, since the distributed algorithm does not pay off in that case due to the fast evaluation of the ANN. In Table <ref> we show the wall times for the training of the solution operator and the wall time for the solution of the OCP. The wall time for the OCP is significantly lower than that for the training of the solution operator. The low cost of solving the OCP is a considerable advantage, since we expect to reuse the trained solution operator multiple times in the optimal control setting. Overall, the approach has great potential for the solution of OCPs, since classical numerical solutions, even in this artificial setting, exhibit high computational cost. In the next section, we use highly accurate simulation data from a realistic physical setting to train the solution operator and apply it to Problem <ref>, the OCP of maximizing THz generation. §.§ THz Generation in a Periodically Poled Nonlinear Crystal The main goal in this section is to show the potential of our proposed method by applying it to a case where experimental results are available olgunHighlyEfficientGeneration2022, in order to verify that the algorithm solves an OCP in realistic settings. This potentially leads to an improvement of the experimental setup, especially for higher intensities where simplified models fail. As in margenbergAccurateSimulationTHz2023, we use 2 super Gaussian pulses parametrized as in (<ref>) and define Ξ from (<ref>) accordingly (n=2). We choose a supergaussian pulse of order p=6, a full-width half maximum of τ=250 and the frequencies f_1=291.56, f_2=291.26. The pulses are separated in center frequency by the THz frequency f_Ω=0.3. In this section we choose a pulse with average fluence of 200□. The average fluence F is defined as the mean of the optical intensity I_E over time. The detailed definitions are given in Appendix <ref>. In the simplified 1D case, they can be expressed as I_ E=1/2ε_0 c_0 √(ε_r)| E|^2 , F=1/T∫_0^T I_ E(t) t . The pulse is applied at the left-hand side of the crystal by a Dirichlet boundary condition on Γ_in (cf. Fig. <ref>), propagates through the domain and enters the PML where it is attenuated. The problem setting is already sketched in Fig. <ref>. The computational effort for these simulations is high: the simulations presented here took 15 days on an HPC cluster using 5 nodes, each with 2 Intel Xeon Platinum 8360Y CPUs. In this study, we limited our investigations and numerical simulations to one spatial dimension. This was necessitated by simulation times and the added complexity of using PMLs in 2D and 3D. In the settings investigated here, the simplification of reducing the simulations to one spatial dimension and neglecting the impacts of the remaining spatial directions is not expected to significantly perturb the results.
The simulation results presented here are based on a timestep size of k=5.e-17 and average cell-size of 5.175e-8, which leads to 5.e9 number of timesteps and 1703940 degrees of freedom in space. §.§.§ Training and Evaluation of the Solution Operator As in the case of artificial data in Section <ref>, the simulation data is used to train a solution operator. We test only FNOs, since GRUs did not show satisfactory results in the artificial test case. Fig. <ref> (a) shows the losses of some parametrizations given in Table <ref>. For the training we split the data set obtained in the setting described above into a training and validation set. We simulate 25 periods of the crystal, where the first 15 periods are used for training and the last 10 periods are used for the validation. In Fig. <ref> (a) we observe fast convergence of the FNOs, even with larger architectures, and the best models exhibit a similar loss across all architectures. We use the two loss functions (<ref>) and (<ref>), since the added information of the time derivative proved to be essential for good performance in the settings we investigate in this work. In Fig. <ref> (b) we plot the errors e=E_GCC^1(3)-E_FNO. For different timestep-sizes we use different FNOs, trained on simulation data obtained with the same step size. We observe linear convergence as before in the artificial test case. We test the FNO on pulses g(t)∈P[Ξ] with different average fluence 100;200;300;400;500;600□. Other parameters remain unchanged compared to the training scenario. We test the different parametrizations of the FNOs (cf. Table <ref>). We evaluate the accuracy of the FNOs based on the internal conversion efficiency (CE) and the errors (<ref>). In Fig. <ref> (a), we compare the CE obtained from the FNO simulations with numerical simulations obtained from the space-time finite element method presented here and experimental results from olgunHighlyEfficientGeneration2022. The numerical simulations and FNOs are in good agreement with the experimental data and are mostly close or within the standard deviation of the experimental data. The FNOs perform very well, but lose some accuracy as the fluence increases. This is expected, since we only trained it with data from simulations with a fluence of 200□. Nevertheless, they seem to learn the physical processes governing the THz generation accurately. We analyze the accuracy of the FNO further in Fig. <ref> (b), where we plot the errors e=E_GCC^1(3)-E_FNO, evaluated in the norms (<ref>) for different values of the average fluence. Although the error grows with increasing fluence, for that the network was not trained, the results are promising. The FNO is able to provide a good generalization to pulses with higher fluence. §.§.§ Optimal control through the Solution Operator As the final task, we consider the Problem <ref> in the realistic setting and compare the results to experimental results. In this case we want to maximize the radiation at the frequency f_THz=0.3. Therefore, we set f_Ω=f_THz and r=0.25 in (<ref>). We test the method on the FNOs we trained in Section <ref>. Fig. <ref> shows the losses on a subset of the parametrizations given in Table <ref>. The initial parameters correspond to the case we considered in Section <ref> for an average fluence of 100□. As observed in <ref> (a) the internal CE grows with increasing fluence. In order to maximize the 0.3-frequency radiation the simplest improvement is an increase of the amplitude of the pulse. 
We test here whether this gets picked up by the FNO and whether it successfully optimizes the internal CE. The internal CE is closely linked to the cost function of the OCP, which is proportional to the intensity at 0.3. Comparing the AdamW optimizer and L-BFGS as optimization methods, we plot the amplitudes and the development of the cost function over the epochs in Fig. <ref> (a), (b). Although the convergence is slower than in the artificial test case, the trajectories are overall similar. Again, L-BFGS converges significantly faster than AdamW. Both reach similar optima; further improvement may be limited by the regularization term. Lowering it beyond the value we used before led to instabilities and implausible results. The reason for the slower convergence of the L-BFGS method is the requirement of using a lower learning rate. Any attempt to use a higher learning rate resulted in stagnation. The second row of plots in Fig. <ref> contains the internal optical to THz CE and the internal CE of the second harmonic generation. The optical to THz CE in Fig. <ref> (c) grows proportionally to the cost function, which confirms that the FNO approximates at least a part of the solution operator and physical model. In order to improve the internal CE, the amplitude grows significantly, which confirms our expectation that the amplitude should be the main tuning parameter. We also observe this in Table <ref>. In good agreement with our previous observations in margenbergAccurateSimulationTHz2023, the CE of the second harmonic generation in Fig. <ref> (d) oscillates strongly. The reason for the oscillations is the phase mismatch, which leads to oscillating negative and positive interference. This leads to varying conversion efficiencies over the layers, depending on how close we are to a phase match. In Table <ref>, we compare the final pulse parametrizations and the cost function of the OCP for different networks using the AdamW optimizer. Overall, FNOs perform well for the optimization of optical to THz generation. Performing the numerical simulations for obtaining the training data is the main contributor to the computational costs. The subsequent training of the solution operator takes 1 day and the final solution of the optimal control algorithm takes 2 hours at most. Considering that a single numerical solution takes 15 days, the proposed approach offers great potential for optimal control problems involving complex physics, in particular nonlinear optics. These problems are still computationally challenging and oftentimes remain infeasible through classical methods. § CONCLUSION In this paper we developed methods to solve an optimal control problem arising in nonlinear optics. To this end, we extended the Galerkin-collocation time discretization to a nonlinear dispersive wave equation. We observed that the method is particularly well suited for problems arising in nonlinear optics. We confirmed the results found in anselmannGalerkinCollocationApproximation2020 by convergence tests. Although the implementation of the method is parallelized and able to run on HPC platforms, the solution time for using it within an optimal control loop is still too high. We devised an algorithm which uses the simulation data with discrete solutions of higher regularity in time to train an ANN, which is used for the forward solve. The algorithm is applicable to a general optimal Dirichlet boundary control problem and can be extended to other optimal control problems.
Our method allows for efficient solution of the optimal control problem, since we only require the solution at some collocation points, and don't need the full space-time solution. We compared GRUs and FNOs and tested their implementation on HPC platforms and verified it by a strong scaling test. We also evaluated the energy efficiency. We were able to observe first order convergence, which we only reached for the FNO with the added higher regularity. A thorough investigation of this phenomenon is subject to future work. The GRU architecture was not able to solve the optimal control problem satisfactorily despite its good accuracy during the initial tests. FNOs were successful in solving the optimal control problem. They clearly had an advantage over GRUs, since they are designed for solving PDEs. The optical to THz conversion efficiency achieved by the FNOs was found to be in good agreement with experimental data. Moreover, the FNOs were successful in optimizing the efficiency of this conversion process in an optimal boundary control setting. Solving the whole optimal control problem with the trained solution operator is 360 times faster than a single forward solve with the numerical methods. Through its computational efficiency, the FNO has the potential to enable breakthroughs in the development of high-field THz pulses, by efficiently solving the optimization problem of maximizing the optical to THz conversion efficiency. The rapid convergence of the training and fast evaluation of the FNOs make them a cost-effective solution for this purpose and potentially other complex physical problems. §.§ Acknowledgement NM acknowledges support by the Helmholtz-Gesellschaft grant number HIDSS-0002 DASHH. Computational resources (HPC-cluster HSUper) were provided by the project hpc.bw, funded by dtec.bw — Digitalization and Technology Research Center of the Bundeswehr. FXK acknowledges support througth ERC Synergy Grant (609920). § DERIVATION OF THE FULLY DISCRETE SYSTEM Here, we elaborate on the derivation of the fully discrete problems carried out in Section <ref>. We discretize Problem <ref> and particularly describe the steps necessary to obtain the fully discrete, global in time Problem <ref> with the equation (<ref>) from (<ref>). We derive the local fully discrete problem, which, as discussed in Section <ref>, leads to the global fully discrete problem <ref>. Finally, we describe the solution of the local, fully discrete problems by a Newton linearization in combination with the solvers for the arising linear systems of equations. Following anselmannNumericalStudyGalerkin2020, we define {ϕ_j}_j=1^J ⊂ V_h as a (global) nodal Lagrangian basis of V_h and the Hermite-type basis of ℙ_3(Î; ), where Î [0, 1]: ξ̂_0(t)=1-3t^2+2t^3 ξ̂_1(t)=t-2t^2+t^3 ξ̂_2(t)=3t^2-2t^3 ξ̂_3(t)=-t^2+t^3 . With the affine transformation T_n Î → I_n t̂ ↦ t_n-1 + (t_n-t_n-1) t̂ the basis {ξ_i}_i=0^3 on I_n is given by the composition of ξ̂_l∘T_n^-1ξ_l for l=0,…, 3. Functions w_τ, h∈ℙ_3(I_n; V_h) are thus represented as w_τ, h(x, t) =∑_i=0^3 w_n, i(x)ξ_i(t) = ∑_i=0^3∑_j=1^J w_n, i, jϕ_j(x)ξ_i(t), for (x, t)∈Ω×I̅_̅n̅. We adopt the representation (<ref>) for the variables U, P, E, A and choose test functions from ℙ_0(I_n; V_h). A test basis of ℙ_0 (I_n, V_h) is then given by B = ϕ_i 1_I_n_i=1^J . Let A_h^0: V_0 → V_h, 0 be the discrete operator that is defined by ⟨ A_h e_h , ϕ_h ⟩ = ⟨∇ e_h, ∇ϕ_h⟩ ∀ ϕ_h∈ V_h, 0 . We define V_g_h v ∈ V v = g_h on Γ_D and A_h V_g_h→V_h, w↦ A_h w. 
By the definition of V_g_h, w admits the representation w= w^0+g_h and we define A_h by A_h w= A_h^0w^0+g_h . For w ∈{ u, p, a, e}, we denote the right and left-hand limit by ∂_t^i w_n, h^-=lim_t↗ t_n∂_t^i w_τ, h(t), ∂_t^i w_n, h^+=lim_t↘ t_n∂_t^i w_τ, h(t), for i∈0, 1. Recall the fully discrete, global formulation of the GCC^1(3) method Problem <ref>. Now consider the local problem on the interval I_n where the trajectories e_τ, h(t), a_τ, h(t), p_τ, h(t), and u_τ, h(t) have already been computed for all t ∈ [0, t_n-1] with initial conditions e_τ, h(0)= e_0, h, a_τ, h(0)= a_0, h, p_τ, h(0)= p_0, h and u_τ, h(0)= u_0, h. Then we solve the following local problem: Local, fully discrete, GCC^1(3) method for (<ref>)lor-gcc-local Given ( e_τ, h(t_n-1), a_τ, h(t_n-1), p_τ, h(t_n-1), u_τ, h(t_n-1))∈V_h^4, find ( e_τ, h, a_τ, h, p_τ, h, u_τ, h) ∈ℙ_3(I_n; V_h)^4 such that e_τ, h=g_τ, h^ e on I̅_n×Γ_D and w_n-1, h^+= w_n-1, h^- ∀ w∈{ e, a, p, u} , ∂_t w_n-1, h^+=∂_t w_n-1, h^- ∀ w∈{ e, a, p, u} , - u_n, h^-+∂_t p_n, h^-+Γ_0 p_n, h^- =0 , ν_t^2 p_n, h^- - ν_t^2ε_Δ e_n, h^- + ∂_t u_n, h^- =0 , -Γ_0 p_n, h^- + χ^(2)∂_t e_n, h^- e_n, h^-+ ε_ω∂_t e_n, h^- - a_n, h^- =0 , ν_t^2ε_Δ e_n, h^- + A_h e_n, h^- -ν_t^2 p_n, h^- + ∂_t a_n, h^- = f_n, h^- , and for all (ϕ_τ, h^0, ϕ_τ, h^1, ϕ_τ, h^2, ϕ_τ, h^3) ∈ℙ_0(I_n; V_h, 0)^4, ∫_t_n-1^t_n∂_t p_τ, h^nϕ_τ, h^0 + Γ_0 p_τ, h^nϕ_τ, h^0 - u_τ, h^nϕ_τ, h^0 t =0 , ∫_t_n-1^t_nν_t^2 p_τ, h^nϕ_τ, h^1 - ε_Δν_t^2 e_τ, h^nϕ_τ, h^1 + ∂_t u_τ, h^nϕ_τ, h^1 t =0 , ∫_t_n-1^t_nε_ω∂_t e_τ, h^nϕ_τ, h^2 - Γ_0 p_τ, h^nϕ_τ, h^2 +χ^(2)∂_t( e_τ, h^n e_τ, h^n)ϕ_τ, h^2 - a_τ, h^nϕ_τ, h^2 t =0 , ∫_t_n-1^t_n∇ e_τ, h^n∇ϕ_τ, h^3 + (ε_Ω- ε_ω)ν_t^2 e_τ, h^nϕ_τ, h^3 - ν_t^2 p_τ, h^nϕ_τ, h^3+∂_t a_τ, h^nϕ_τ, h^3 t =∫_t_n-1^t_n f_τ, hϕ_τ, h^3 t . We comment on the local fully discrete problem <ref>: 1pt 0pt 0pt * Note that we evaluate the time integrals on the right-hand side of (<ref>) and the boundary conditions g_τ, h^ e∈ C^1(I̅; V_h) (cf. Assumption <ref>) using the Hermite-type interpolation operator I_τI_n, on I_n, defined by I_τI_ng(t) = ξ̂_0 (0) gI_n(t_n-1) + τ_n ξ̂_1 (0) ∂_t gI_n(t_n-1) + ξ̂_2 (1) gI_n(t_n-1) + τ_n ξ̂_3 (1) ∂_t gI_n(t_n-1) . * The collocation conditions (<ref>) need to be defined at the initial time t_0. From the initial conditions we can get the collocation conditions (<ref>) at the initial timepoints by setting ∂_t u_τ, h(t_0^-) =ν_t^2ε_Δ e_0, h-ν_t^2 p_0, h , ∂_t p_τ, h(t_0^-) = u_0, h-Γ_0 p_0, h , ∂_t a_τ, h(t_0^-) =ν_t^2 p_0, h- A_h e_0, h-ν_t^2ε_Δ e_0, h , ∂_t e_τ, h(t_0^-) =ε_Δ^-1( a_0, h - χ^(2)∂_t( e_0, h e_0, h)+Γ_0 p_0, h) . * Consider a time interval I_l, l=2,…, N. We previously solved the Problem <ref> on I_l-1. At t_l-1 the collocation conditions (<ref>)–(<ref>) are fulfilled. For (<ref>), we see that -Γ_0 p_l, h^+ + χ^(2)∂_t e_l, h^+ e_l, h^++ ε_ω∂_t e_l, h^+ - a_l, h^+ --Γ_0 p_l, h^- + χ^(2)∂_t e_l, h^- e_l, h^-+ ε_ω∂_t e_l, h^- - a_l, h^- =χ^(2) e_l, h^+∂_t e_l, h^+-χ^(2) e_l, h^-∂_t e_l, h^- +χ^(2) e_l, h^+∂_t e_l, h^+-χ^(2) e_l, h^-∂_t e_l, h^-=0 , by using (<ref>) and (<ref>) componentwise. The remaining conditions (<ref>)–(<ref>) follow immediately. Therefore, upon solving Problem <ref> on I_l the equations - u_l, h+∂_t p_l, h+Γ_0 p_l, h =0 , ν_t^2 p_l, h - ν_t^2ε_Δ e_l, h + ∂_t u_l, h =0 , -Γ_0 p_l, h + χ^(2)∂_t e_l, h e_l, h+ ε_ω∂_t e_l, h - a_l, h =0 , ν_t^2ε_Δ e_l, h + A_h e_l, h -ν_t^2 p_l, h + ∂_t a_l, h = f_l, h , hold. 
This justifies the notion of a collocation method and shows that, from the initial timepoint on, global C^1-regularity is achieved by enforcing it from time step to time step. We put the equations of the proposed GCC^1(3) approach in their algebraic forms. In the variational equations (<ref>), we use the representation (<ref>) for each component of ( e_τ, h, a_τ, h, p_τ, h, u_τ, h) ∈ (ℙ_3(I_n;V_h))^4 and choose the piecewise constant test functions. We interpolate the right-hand sides in (<ref>) by applying the Hermite interpolation and evaluate the arising time integrals analytically. The collocation conditions (<ref>) can be recovered in their algebraic forms by using the fact that the Hermite type polynomials and their first derivatives vanish at the locations x=0 and x=1, with the exceptions ξ_0(0)=1, ∂_tξ_1(0)=1, ξ_2(1)=1, ∂_t ξ_3(1)=1. Given the local Problem <ref> on the interval I_n and (<ref>), we introduce the abbreviations w_h, i= w_n, i(x)∈V_h and w_i= w_n, i, 0,…, w_n, i, J^⊤∈^J for w ∈ Q, R, U, P, E, A. Further we define v_h, r = ( e_0, h  e_1, h  a_0, h  a_1, h)^⊤ and v_h, l=( e_2, h  e_3, h  a_2, h  a_3, h)^⊤ . Then we condense the system of equations such that we solve for e_2, h, e_3, h, a_2, h, a_3, h. Solving for the unknowns p_2, h, p_3, h, u_2, h, u_3, h reduces to simple vector identities in their algebraic form. We write the nonlinear system of equations in variational form for each subinterval I_n as A_h, n(v_h, l)(Φ)=F_h, n(Φ; v_h, r) ∀Φ∈V_h^4 , where A_h, nV_h^4×V_h^4→ is a semilinear form and F_h, n(Φ; v_r) the right-hand side. Then A_h, n and the functional F_h, n in (<ref>) are defined through A_h, n(v_h, l)(ϕ)=A_h, n^1(v_h, l)(Φ)+A_h, n^2(v_h, l)(Φ)+A_h, n^3(v_h, l)(Φ)+A_h, n^4(v_h, l)(Φ) , with the components A_h, n^i, i=1,…, 3. The components represent the block structure of the system of equations in algebraic form. The components given as A_h, n^1(v_h, l)(ϕ) =-Γ_0/ν_t^2∇ e_h, 2∇ϕ- ε_ΔΓ_0 e_h, 2ϕ - a_h, 2ϕ + χ^(2) e_h, 3 e_h, 3ϕ +ε_ω/k e_h, 3ϕ -Γ_0/k ν_t^2 a_h, 3ϕ , A_h, n^2(v_h, l)(ϕ) =ε_Δ e_h, 2ϕ-k^2ν_t^2-12/12 ν_t^2∇ e_h, 2∇ϕ + k Γ_0+6/kν_t^2 a_h, 2ϕ - k Γ_0+6 /12ν_t^2 ∇ e_h, 3∇ϕ-ε_Δ( k Γ_0+6) /12 e_h, 3ϕ -k^2ν_t^2+6 k Γ_0+24/12 k ν_t^2 a_h, 3ϕ , A_h, n^3(v_h, l)(ϕ) = k ν_t^2+2 Γ_0 /2 ν_t^2∇ e_h, 2∇ϕ+ ε_ΔΓ_0 e_h, 2ϕ +k^2ν_t^2-12/k^2ν_t^2 a_h, 2ϕ +ε_Δ/k e_h, 3ϕ -k^2ν_t^2-12 /12 k ν_t^2∇ e_h, 3∇ϕ+ k Γ_0+6 /k^2ν_t^2 a_h, 3ϕ , A_h, n^4(v_h, l)(ϕ) =2ε_ω-k ε_ΔΓ_0/2 e_h, 2ϕ- k Γ_0/2 ν_t^2∇ e_h, 2∇ϕ - k ν_t^2+2 Γ_0 /2 ν_t^2 a_h, 2ϕ +χ^(2) e_2 e_2ϕ +k Γ_0/12 ν_t^2∇ e_h, 3∇ϕ +k ε_ΔΓ_0/12 e_h, 3ϕ + k/12 a_h, 3ϕ , and with an analogous splitting of F_h, n F_h, n^1(Φ; v_h, r) = 0 , F_h, n^2(Φ; v_h, r) =-k Γ_0 +6/12 ν_t^2(∇ e_h, 1∇ϕ+∇ e_h, 0∇ϕ) - ε_Δ( k Γ_0+6) /12( e_h, 1ϕ+ e_h, 0ϕ) = + k Γ_0+6/k ν_t^2 a_h, 0ϕ + k/12 u_h, 1ϕ +k/2 u_h, 0ϕ +1/2 p_h, 1ϕ +4 p_h, 0ϕ , F_h, n^3(Φ; v_h, r) = ε_Δ/k e_h, 1ϕ-k^2ν_t^2-12 /12 k ν_t^2∇ e_h, 1∇ϕ + 6 ε_Δ/k e_h, 0ϕ -k^2ν_t^2-12 /2 k ν_t^2∇ e_h, 0∇ϕ = + k^2ν_t^2-12 /k^2ν_t^2 a_h, 0ϕ + u_h, 0ϕ -1/k p_h, 1ϕ -6/k p_h, 0ϕ , F_h, n^4(Φ; v_h, r) = k ε_ΔΓ_0+2 ε_ω/2 e_h, 0ϕ+k Γ_0/2 ν_t^2∇ e_h, 0∇ϕ + k Γ_0/12 ν_t^2∇ e_h, 1∇ϕ+k ε_ΔΓ_0/12 e_h, 1ϕ =+χ^(2) e_0 e_0ϕ +k ν_t^2-2 Γ_0 /2 ν_t^2 a_h, 0ϕ +k/12 a_h, 1ϕ . 
As a result of the condensation we further get update equations for the variables p_2, u_2, p_3, u_3 u_2 =-6Γ_0/kν_t^2 u_0 - 6/k p_0 + Γ_0/kν_t^2 u_1 - 1/k p_1 + ε_Δ/k(6 e_0+ e_1 - Γ_0 k e_2 - e_3) , p_2 = Γ_0/ν_t^2 u_2 + 6/kν_t^2 u_0 - 6(kΓ_0 -2)/k^2ν_t^2 p_0 + 1/kν_t^2 u_1 - Γ_0/kν_t^2 p_1 - ε_Δ e_2 , u_3 =kν_t^2 p_2 - kε_Δν_t^2 e_2 , p_3 =kΓ_0 p_2-k u_2 . The common approach of handling the nonlinear problem is a linearization by means of Newton's method. Let x∈V_hA_h, n(x)(ϕ)=F(ϕ) ∀ϕ∈V_h be the variational equation related to (<ref>). Recall that A_h, n(∙)(∙) of (<ref>) is a semi-linear form which is linear in the second argument. We assume that it is sufficiently differentiable by means of the Gateaux derivative A_h, n' (x)(δx, ϕ)/ sA_h, n(x+εδx)(ϕ)ε=0. A_h, n' denotes the derivative of A_h, n at x∈V_h in direction δx∈V_h. The Newton iteration for solving (<ref>) with an initial guess x_0∈V_h iterates for m=0,… δx_mA_h, n'(x_m-1)(δx_m, ϕ) = F(ϕ)-A_h, n(x_m-1)(ϕ) ∀ϕ∈V_h , x_m x_m-1 + δx_m. Next we apply the Newton scheme to the system (<ref>). The Gateaux derivative A_h, n' (x)(δx, ϕ) is A_h, n^1' (x)(δx, ϕ) = -Γ_0/ν_t^2∇δ e_h, 2∇ϕ- ε_ΔΓ_0δ e_h, 2ϕ -δ a_h, 2ϕ + χ^(2) ( e_h, 3δ e_h, 3ϕ + δ e_h, 3 e_h, 3ϕ) +ε_ω/kδ e_h, 3ϕ -Γ_0/k ν_t^2δ a_h, 3ϕ , A_h, n^2' (x)(δx, ϕ) = ε_Δδ e_h, 2ϕ-k^2ν_t^2-12/12 ν_t^2∇δ e_h, 2∇ϕ + k Γ_0+6/kν_t^2δ a_h, 2ϕ - k Γ_0+6 /12ν_t^2 ∇δ e_h, 3∇ϕ-ε_Δ( k Γ_0+6) /12δ e_h, 3ϕ -k^2ν_t^2+6 k Γ_0+24/12 k ν_t^2δ a_h, 3ϕ , A_h, n^3' (x)(δx, ϕ) = k ν_t^2+2 Γ_0 /2 ν_t^2∇δ e_h, 2∇ϕ+ ε_ΔΓ_0δ e_h, 2ϕ +k^2ν_t^2-12/k^2ν_t^2δ a_h, 2ϕ + ε_Δ/kδ e_h, 3ϕ -k^2ν_t^2-12 /12 k ν_t^2∇δ e_h, 3∇ϕ+ k Γ_0+6 /k^2ν_t^2δ a_h, 3ϕ , A_h, n^4' (x)(δx, ϕ) = 2ε_ω-k ε_ΔΓ_0/2δ e_h, 2ϕ- k Γ_0/2 ν_t^2∇δ e_h, 2∇ϕ - k ν_t^2+2 Γ_0 /2 ν_t^2δ a_h, 2ϕ+k/12δ a_h, 3ϕ + χ^(2) (δ e_h, 2 e_h, 2+e_h, 2δ e_h, 2ϕ) +k Γ_0/12 ν_t^2(∇δ e_h, 3∇ϕ+δ e_h, 3ϕ) . In every Newton step we have to solve a linear system of equations, for which we use the GMRES method with an algebraic multigrid solver, which serves as a preconditioner with a single sweep for every GMRES iteration. This accelerates the convergence of the GMRES iterations. § EXTENSION OF THE NEURAL OPERATOR TO A FULL SPACE-TIME APPROXIMATION We sketch how the solution operator U can be extended in order to obtain an approximation to S_h. The idea is to construct an interpolation operator I_h which interpolates the solutions U^i( v) and U^i+1( v) in space. We need a finite-dimensional subspace W ⊂ V of dimension M = (W). A standard example would be W=V_h, a classical finite element space, neural networks are feasible as well. First note that from Eq. (<ref>) we see that J_𝒟, i+1 is the set J_𝒟, i, translated in positive x_1-direction. For i∈1,…, m we choose x_a∈ J_𝒟, i and let U^i-1(g(t))= p(t, x_a). Then U( p(t, x_a))=p̂ (t, x_b) where x_b∈ J_𝒟, i+1. We choose a basis ϕ_1,…, ϕ_J of W with support points J_𝒟. Let ϕ_j^a denote basis functions with support points in J_𝒟, i+1 and ϕ_j^b those with support points in J_𝒟, i. On each subinterval I_n we can then interpolate UI_n(x, t) =∑_k=0^3ξ_k(t) ∑_j=0^J_𝒟, i p_kϕ_j^a(x)+p̂_kϕ_j^b(x) . Finally the solution operator S_h(g(t)) can be approximated by recursive application of U to itself, i. e.U∘⋯∘U(g(t))U≈S_h(g(t)). § PHYSICAL QUANTITIES AND QUANTITES OF INTEREST We give some background on the quantities of interest in the simulations carried out in this work. To this end we first introduce the Poynting vector S and optical power P (cf. [Chapter 6, Section 6]jacksonClassicalElectrodynamics1999). 
S = 1/μ_0 E × B , P=∫_A S· n s . The flow of energy in an electromagnetic field, with the electric field E and the magnetic field B, is described by the Poynting vector (<ref>). The optical power P in (<ref>) is the flux of the Poynting vector through a surface A. Then, the intensity is the magnitude of the Poynting vector and the average fluence is the mean of the intensity over time, I_ E= | S| = P/A , F=1/T∫_0^T I_ E t . In the simplified 1D case, the optical intensity I_E and the fluence F can be described by equations (<ref>) and (<ref>), respectively. We note that the intensity is proportional to the power and the fluence is proportional to the energy.
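In the simplified 1D setting used in the experiments, these quantities reduce to a few lines of code. A sketch (SI units, ε_r treated as a constant; names are illustrative):

import numpy as np

def intensity_and_fluence(E, eps_r=1.0):
    # I_E = 0.5 * eps_0 * c_0 * sqrt(eps_r) * |E|^2 ;  F = mean of I_E over the time window
    eps_0, c_0 = 8.8541878128e-12, 299792458.0
    I = 0.5 * eps_0 * c_0 * np.sqrt(eps_r) * np.abs(E)**2
    return I, I.mean()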
http://arxiv.org/abs/2307.03979v1
20230708140755
Attacking (EC)DSA scheme with ephemeral keys sharing specific bits
[ "M. Adamoudis", "K. A. Draziotis", "D. Poulakis" ]
cs.CR
[ "cs.CR", "94A60" ]
Attacking (EC)DSA scheme with ephemeral keys sharing specific bits (running title: Computation of the private key). MSC 2010: 94A60. In this paper, we present a deterministic attack on the (EC)DSA signature scheme, provided that several signatures are known such that the corresponding ephemeral keys share a certain number of bits without their values being known. By eliminating the shared blocks of bits between the ephemeral keys, we get a lattice, of dimension equal to the number of signatures, which contains a vector encoding the private key. We compute an upper bound for the distance of this vector from a target vector, and next, using Kannan's enumeration algorithm, we determine it and hence the secret key. The attack can be made highly efficient by appropriately selecting the number of shared bits and the number of signatures. § INTRODUCTION - STATEMENT OF RESULTS In August 1991, the U.S. government's National Institute of Standards and Technology (NIST) proposed an algorithm for digital signatures. The algorithm is known as DSA, for Digital Signature Algorithm <cit.>. It is an efficient variant of the ElGamal digital signature scheme <cit.> intended for use in electronic mail, electronic funds transfer, electronic data interchange, software distribution, data storage, and other applications which require data integrity assurance and data authentication. In 1998, an elliptic curve analogue called Elliptic Curve Digital Signature Algorithm (ECDSA) was proposed and standardized <cit.>. §.§ The (EC)DSA Signature Scheme First, we recall the DSA scheme. The signer selects a prime p of size between 1024 and 3072 bits with increments of 1024, as recommended in FIPS 186-3 <cit.>. Also, he selects a prime q of size 160, 224 or 256 bits, with q|p-1, and a generator g of the unique order-q subgroup G of the multiplicative group 𝔽_p^* of the prime finite field 𝔽_p. Furthermore, he randomly selects a ∈{1,…,q-1} and computes R = g^a mod p. The public key of the signer is (p,q,g,R) and his private key a. He also publishes a hash function h : {0,1}^* →{0,…,q-1}. To sign a message m∈{0,1}^*, he selects randomly k ∈{1,…,q-1}, which is the ephemeral key, and computes r = (g^k mod p) mod q and s = k^-1(h(m)+ar) mod q. The signature of m is (r,s). The signature is accepted as valid if and only if the following holds: r = ((g^s^-1h(m) mod q R^s^-1r mod q mod p) mod q. Next, let us recall the ECDSA scheme. The signer selects an elliptic curve E over 𝔽_p and a point P∈ E(𝔽_p) with order a prime q of size at least 160 bits. Following FIPS 186-3, the size of the prime p must belong to the set {160,224,256,512}. Further, he chooses randomly a ∈{1,…,q-1} and computes Q = aP. Finally, he publishes a hash function h : {0,1}^* →{0,…,q-1}. The public key of the signer is (E,p,q,P,Q) and his private key a. To sign a message m, he selects randomly k ∈{1,…,q-1}, which is the ephemeral key, and computes kP = (x,y) (where x and y are regarded as integers between 0 and p-1). He computes r = x mod q and s = k^-1(h(m)+ar) mod q. The signature of m is (r,s). The verifier computes u_1 = s^-1h(m) mod q, u_2 = s^-1r mod q, and u_1P+u_2Q = (x_0,y_0). He accepts the signature if and only if r = x_0 mod q.
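To make the signing and verification equations concrete, the following Python sketch runs the DSA equations with artificially small, insecure toy parameters (p = 607, q = 101, q | p - 1). It only illustrates the role of the ephemeral key k; it is not the scheme attacked later, which uses parameters of cryptographic size.

import hashlib
import secrets

p, q = 607, 101                      # toy parameters with q | p - 1 (insecure, illustration only)
g = pow(2, (p - 1) // q, p)          # generator of the order-q subgroup
a = secrets.randbelow(q - 1) + 1     # private key
R = pow(g, a, p)                     # public key

def H(m):                            # hash into {0, ..., q-1}
    return int.from_bytes(hashlib.sha256(m).digest(), 'big') % q

def sign(m):
    while True:
        k = secrets.randbelow(q - 1) + 1                 # ephemeral key
        r = pow(g, k, p) % q
        s = pow(k, -1, q) * (H(m) + a * r) % q
        if r != 0 and s != 0:
            return r, s

def verify(m, r, s):
    u1, u2 = pow(s, -1, q) * H(m) % q, pow(s, -1, q) * r % q
    return r == (pow(g, u1, p) * pow(R, u2, p) % p) % q

m = b'message'
assert verify(m, *sign(m))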
§.§ Previous Results Researchers have explored various attacks on DSA schemes by analyzing the signature equation s= k^-1(h(m)+ar) mod q and using lattice reduction techniques such as LLL and CVP algorithms. One study focused on the use of a linear congruential pseudorandom number generator (LCG) for generating random numbers in DSA <cit.>, showing that combining the DSA signature equations with LCG generation equations can lead to a system of equations that provide the secret key. To recover the secret key, several heuristic attacks have been proposed <cit.> in another study, which assume the revelation of a small fraction of the corresponding nonce k. However, these attacks are based on heuristic assumptions, making it difficult to make precise statements on their theoretical behavior. The first rigorous lattice attack on (EC)DSA was presented in <cit.>. The authors successfully decreased the security of (EC)DSA to a Hidden Number Problem (HNP), which can then be further reduced to an approximation Closest Vector Problem (CVP) for a specific lattice. The signer's secret key a can be computed using this reduction in polynomial time. The attack was also adapted to the case of ECDSA, as described in <cit.>. The paper <cit.> describes an attack on DSA schemes that uses the LLL reduction method and requires one message. By computing two short vectors of a three-dimensional lattice, the attack derives two intersecting lines in (a, k), provided that a and k are sufficiently small, and the second shortest vector is sufficiently short. If two messages are available, the same attack can be applied to derive a linear congruence relating to the corresponding ephemeral keys. The papers <cit.> and <cit.> describe attacks on DSA schemes using the LLL algorithm and one or two messages. In <cit.>, the combination of LLL with algorithms for finding integral points of two classes of conics gives a, provided that at least one of the sets {a,k^-1 q}, {k,a^-1 q}, {a^-1 q,k^-1 q} is sufficiently small. In <cit.>, the Lagrange Reduction algorithm is applied on a 2-dimensional lattice defined by a signed message, and provides two straight lines intersecting at (a, k). Similar attacks can be applied to the pairs (k^-1 q, k^-1a q) and (a^-1 q,a^-1k q). If two signed messages are available, the above two attacks can be applied to the equation relating the two ephemeral keys. The article <cit.> presents an attack using Coppersmith's method to compute the secret key a. The attack works when a and k satisfy a specific inequality, and in this case, the secret key a can be efficiently computed. The article <cit.> describes an attack that involves constructing a system of linear congruences using signed messages. This system has at most one unique solution below a certain bound, which can be computed efficiently. Thus, if the length of a vector containing the secret and ephemeral keys of a signed message is quite small, the secret key can be computed using the above system. The article <cit.> presents an improved version of this attack. In <cit.>, the proposed attacks take advantage using of the bits in the ephemeral key and the Fast Fourier Transform. In <cit.>, it is shown that, using lattice reduction under some heuristic assumptions, that partial information about the nonces of multiple signatures can lead to recovery of the full private key. The original approach to doing so is based on discrete Fourier analysis techniques <cit.>. 
A very important issue is the attacks on cryptosystems based on the malicious modification of memory registers. These attacks may affect the randomness of the secret parameters, and so, to force certain bits of the ephemeral key to be equal, without their values being known. In <cit.>, it is discussed how such attacks could occur in a real-life scenario. Following the line of research from <cit.>, the authors of <cit.> focus on an attack scenario where ephemeral keys share specific bits, such as the least significant bits (LSB) and/or most significant bits (MSB), either within multiple blocks. By eliminating the shared blocks of bits between the ephemeral keys, a lattice of dimension equal to the number of signatures is provided, which contains a quite short vector with components that reveal the secret key. Then, the LLL algorithm is used for the computation of this vector. Note that these attacks are based on heuristic assumptions. Later, in <cit.>, the authors further improved upon the attack proposed in <cit.> by providing a probabilistic attack with a success probability approaching 1 when the pair (δ,n) is appropriately selected, where n represents the number of signatures, and δ represents the number of shared bits in the ephemeral keys. This attack relies on a mild assumption regarding the hash function used in (EC)DSA. §.§ Our Contribution Our study builds on the research presented in <cit.>, and we present a deterministic attack that, although not always polynomial in complexity, proves to be highly efficient in practical scenarios. Instead of using methods like LLL, approximate, or exact CVP, which were employed in previous attacks, we use enumeration on a suitable lattice to find lattice vectors that are close to a specific target vector. From these solutions, we can readily extract the secret key to the system. It is important to highlight that the attacks presented in <cit.> rely on heuristics assumptions that aim to force the presence of a vector containing the private key as a solution to the Shortest Vector Problem (SVP) in a relatively large lattice. In <cit.>, the authors provide a probabilistic approach to <cit.>, where an assumption for the hash function is made and the attack is modelled as a Closest Vector Problem (CVP). Due to the computational complexity of finding such a vector using a deterministic algorithm, an approximation algorithm can be used instead. Our approach takes a different path. We calculate a bound for the distance between the vector of the lattice containing the private key and a target vector. Then, we leverage Kannan's enumeration algorithm to determine this vector and, consequently, extract the secret key. Our experiments demonstrate that the attack can be made highly efficient by appropriately selecting values for δ and n. Finally, we improve the results provided in <cit.>. §.§ Our results In the subsequent Theorem, we apply the framework suggested by <cit.>, which presupposes that we have access to a collection of signed messages with ephemeral keys that are shorter than q. These messages have some of their most and least significant bits in common, with a total of δ bits shared. Suppose we have a (EC)DSA scheme with a binary length ℓ prime number q and secret key a. Let m_j (j=0,…,n) be messages signed with this scheme, (r_j,s_j) their signatures, and k_j = ∑_i=1^ℓ k_j,i 2^ℓ-i (where k_j,i∈{0,1}) are the corresponding ephemeral keys, respectively. Set A_j = -r_js_j^-1 q. 
Suppose that 0< k_j < q (j=0,…,n), and there are integers δ >0 and 0 ≤δ_L≤δ such that the following conditions hold: * k_0,i+1 = ⋯ = k_n,i+1 (i=1,…,δ-δ_L,ℓ-δ_L, …,ℓ-1). * For i = 0,…,n, set C_i,j = (A_j-1 -A_i) 2^-δ_L q, (j=1,…,i), and C_i,j = (A_j -A_i) 2^-δ_L q (j=i+1,…,n). The shortest vector of the lattice ℒ_i spanned by the vectors (2^δ+1q,0,…, 0),…, (0,…, 0, 2^δ+1q , 0), (2^δ+1C_i,1, …, 2^δ+1C_i,n, 1) has length > 1/2 (2^δ+1q)^n/n+1. Then, the secret key a can be computed in 𝒪(2^ℓ-δ n+2n n ( (nℓ)^c 2^𝒪(n) +ℓ^4 2^n (n+1)^n+1/2)) bit operations, for some c > 0. By the Gaussian heuristic <cit.> the length of the vectors of the lattice ℒ is > q^n/(n+1). Thus, the hypothesis (2) of Theorem <ref> will very often be satisfied. In the above complexity estimate, if ℓ≤δ n, then the time complexity is polynomial in ℓ. Roadmap. The paper is structured as follows: Section 2 presents an auxiliary lemma that will prove crucial in the proof of Theorem <ref>. Section 3 is dedicated to the proof of Theorem <ref>, providing a detailed explanation and justification. In Section 4, an attack on (EC)DSA, derived from Theorem <ref>, is presented. Additionally, several experiments are conducted to illustrate the effectiveness of the attack. Finally, Section 5 concludes the paper, summarizing the main findings and discussing potential avenues for future research. § LATTICES Let ℬ = { b_1, …, b_n}⊂^n be a basis of ^n. A n-dimensional lattice spanned by ℬ is the set ℒ = {z_1 b_1+⋯ +z_n b_n/ z_1,…,z_n ∈}. Recall that the scalar product of two vectors 𝐮 = (u_1,…,u_n) and 𝐯 = (v_1,…,v_n) in ℝ is the quantity ⟨𝐮,𝐯⟩ = u_1v_1+⋯ + u_nv_n, and the Euclidean norm of a vector v = (v_1,…,v_n) ∈^n the quantity 𝐯 = ⟨𝐯,𝐯⟩^1/2 = (v_1^2+⋯ +v_n^2)^1/2. The Gram-Schmidt orthogonalisation (GSO) of the basis ℬ is the orthogonal family {𝐛_1^⋆,…,𝐛_n^⋆} defined as follows: 𝐛_i^⋆ = 𝐛_i-∑_j=0^i-1μ_i,j𝐛_j^⋆, with μ_i,j = ⟨𝐛_i,𝐛_j^⋆⟩/𝐛_j^⋆^2 (j= 0,…,i-1). Let L be a lattice. If K is a convex body in ^n+1 symmetric about the origin, we denote by λ_i(K,L) (i=1,…,n+1) the ith successive minimum of K with respect to L which it is defined as follows λ_i(K, L) = inf{λ > 0/ (λ K) ∩ L contains i linearly independent points}. Further, we denote by s(L) the length of a shortest vector in L. Let B_𝐯(R) be the closest ball of center 𝐯 and radius R in ℝ^n+1 and L a lattice. Then,we have: |B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1. Set 𝒟_𝐯(R) = {𝐱-𝐲/ 𝐱,𝐲∈ B_𝐯(R)}. Then, 𝒟_𝐯(R) is a convex body, symmetric about the origin. Then <cit.> implies: |B_𝐯(R)∩ L | < ∏_i=1^n+1(1/λ_i(𝒟_𝐯(R),L)+1). Let 𝐱,𝐲∈ B_𝐯(R). Then, we have: 𝐱-𝐲≤𝐱-𝐯+ 𝐯-𝐲≤ 2R. It follows that 𝒟_𝐯(R)⊆ B_0(2R), and so we deduce λ_1(B_0(2R),L) ≤λ_i(𝒟_𝐯(R),L) (i=1,…,n). Further, we have λ_1(B_0(2R),L) ≥ s(L)/2R. Combining the inequalities (<ref>), (<ref>) and (<ref>), we obtain: |B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1. § PROOF OF THEOREM 1.1 Let a be the secret key and k_j, j = 0,…,n the ephemeral keys. We put A_j = -r_js_j^-1 q and B_j = -h(m_j) s_j^-1 q for j = 0,…,n. The signing equation for (EC)DSA provides that, k_j+A_j a +B_j ≡ 0 ( q) (j=0,…,n). Suppose first that k_0 = min{k_0,…,k_n}. We set δ_M=δ-δ_L. From the hypothesis of the Theorem we get z_j=k_j-k_0=ε 2^ℓ-δ_M-1+⋯+ε' 2^δ_L, for some ε, ε'∈{0,1}. Since z_j>0 we get 0<z_j<2^ℓ-δ_M and there exists positive integer z_j' such that z_j = 2^δ_Lz^'_j Furthermore, we set C_j = (A_j-A_0)2^-δ_L q and D_j = (B_j-B_0)2^-δ_L q. From (<ref>) we have the congruences: z_j^'+C_j a +D_j ≡ 0 ( q) (j=1,…,n). 
Since z_j^' is positive, there is a positive integer c_j such that -C_ja-D_j+c_jq= z_j^'. Thus, we obtain: 0 < c_jq-C_j a-D_j < 2^ℓ-δ. It follows that -2^ℓ-δ-1 < c_jq-C_j a-D_j-2^ℓ-δ-1 < 2^ℓ-δ-1, whence we get 0 < |c_jq-C_j a-D_j-2^ℓ-δ-1| < 2^ℓ-δ-1. Therefore, we have: 0 < |c_jq2^δ+1 -C_j2^δ+1 a-D_j2^δ+1-2^ℓ| < 2^ℓ. We consider the lattice ℒ spanned by the rows of the matrix 𝒥 = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1q 0 … 0 0; 0 0 2^δ+1q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1q 0; 2^δ+1C_1 2^δ+1C_2 2^δ+1C_3 … 2^δ+1C_n 1 ]). The vectors of the lattice ℒ are of the form (2^δ+1(qx_1+x_n+1C_1),2^δ+1(qx_2+x_n+1C_2),…,2^δ+1(qx_n+x_n+1C_n),x_n+1), for some integers x_1,…,x_n+1. By setting (x_1,…,x_n+1)=(c_1,…,c_n,-a), we get the lattice vector 𝐮 = (2^δ+1(c_1q-C_1a),…,2^δ+1(c_nq-C_na),-a). Further we consider the vector in the span of ℒ, 𝐯 = (D_12^δ+1+2^ℓ,…,2^δ+1D_n+2^ℓ,0). Now, we have u- v=(2^δ+1(qc_1-C_1a-D_1)-2^ℓ,…,2^δ+1(qc_n-C_na-D_n)-2^ℓ,-a), and inequalities (<ref>) yield: 𝐮-𝐯 < 2^ℓ√(n+1). Put R = 2^ℓ√(n+1). Then 𝐮∈ B_𝐯(R). Next, we compute a LLL-reduced basis for ℒ, say ℬ = {𝐛_1,…,𝐛_n+1}. This can be done in time 𝒪(n^6 (log q)^3). By hypothesis (2) of Theorem, we have: s(ℒ) > 1/2 (2^δ+1 q)^n/n+1. Let {𝐛_1^*,…,𝐛_n+1^*} the Gram-Schmidt orthogonalisation of ℬ. By <cit.>, we get: 4 b_i^*^2 ≥ 2 b_i-1^*^2 ≥ b_i-1^2 ≥ s(L)^2 Thus, we obtain: 1/4 (2^δ+1q)^n/n+1≤𝐛_i^* (i=1,…,n+1). Next, using Kannan's enumeration algorithm <cit.>, we compute all the elements of B_𝐯(R)∩ℒ. Combining <cit.> with the inequality (<ref>), we obtain that the bit complexity of the procedure is (nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/n+1)^n+1 , where c is a constant >0. Then we check whether the last coefficient of 𝐮∈ B_𝐯(R)∩ℒ is the minus of the secret key -aq. Every such operation needs 𝒪((log q)^4) bit operations <cit.>. If none of the elements of 𝐮∈ B_𝐯(R)∩ℒ gives the secret key, then we repeat the procedure assuming that k_1 = min{k_0,…,k_n}, and we continue until we found the secret key. By Lemma <ref>, we have: |B_𝐯(R)∩ℒ | < ( 2^ℓ+2√(n+1)/ (2^δ+1q)^n/n+1 +1)^n+1. Thus, the overall bit complexity of the computation of a is 𝒪(n(nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/n+1)^n+1 +n ( 2^ℓ+2√(n+1)/ (2^δ+1q)^n/n+1 +1)^n+1 (log q)^4), whence the result. § THE ATTACK The proof of Theorems 1.1 yields the following attack: ATTACK-DSA Input: Messages m_j (j=0,…,n) and (r_j,s_j) their (EC)DSA signatures and integers δ >0 and 0 ≤δ_L≤δ and the public key (p,q,g,R) (resp. (E,p,q,P,Q)). Output: The private key a. * For j=0,…, n compute A_j = -r_is_i^-1 q, B_j = -h(m_j) s_j^-1 q. * For i=0,…,n, * For j=1,…,i compute C_i,j = (A_j-1 -A_i) 2^-δ_L q, D_i,j = (B_j-1 -B_i) 2^-δ_L q, and for j= i+1,…,n compute C_i,j = (A_j -A_i) 2^-δ_L q, D_i,j = (B_j -B_i) 2^-δ_L q. * Consider the lattice ℒ_i spanned by the rows of the matrix J_i = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1 q 0 … 0 0; 0 0 2^δ+1 q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1 q 0; 2^δ+1C_i,1 2^δ+1C_i,2 2^δ+1C_i,3 … 2^δ+1C_i,n 1 ]) and compute a LLL-basis ℬ_i for ℒ_i. * Consider the vector 𝐯_i = (2^δ+1D_i,1+2^ℓ,…,2^δ+1D_i,n+2^ℓ,0), and using Kannan's enumeration algorithm with basis ℬ_i, compute all 𝐮∈ℒ_i satisfying 𝐮-𝐯_i < 2^ℓ√(n+1). * Check whether the last coordinate of 𝐮 say u_n+1 satisfies g^-u_n+1≡ Rq (resp. P(-u_n+1) = Q). If it is so, then return the secret key -u_n+1q=a. For the Pseudocode of Kannan's Enumeration Algorithm, one can see <cit.>. Supposing that condition (2) is satisfied, taking n quite small and nδ≥ℓ, Theorem <ref> implies that the attack is polynomial in ℓ. 
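The bookkeeping part of ATTACK-DSA, namely assembling the basis matrix J_i and the target vector v_i from the quantities A_j, B_j of Step 1, is summarised in the Python sketch below. This is our own illustrative helper; the function and variable names are not from the paper, and the reduction and enumeration steps themselves are left to an external lattice library (e.g. fpylll or SageMath).

def build_attack_lattice(A, B, q, i, delta, delta_L):
    """Basis rows of J_i and target vector v_i from ATTACK-DSA (Steps 2a-2c)."""
    n = len(A) - 1
    ell = q.bit_length()                              # binary length of q
    inv = pow(pow(2, delta_L, q), -1, q)              # 2^(-delta_L) mod q
    others = [m for m in range(n + 1) if m != i]      # A_{j-1} for j <= i, A_j for j > i
    C = [(A[m] - A[i]) * inv % q for m in others]
    D = [(B[m] - B[i]) * inv % q for m in others]
    sc = 2 ** (delta + 1)
    basis = [[sc * q if col == row else 0 for col in range(n)] + [0] for row in range(n)]
    basis.append([sc * cj for cj in C] + [1])
    target = [sc * dj + 2 ** ell for dj in D] + [0]
    radius = 2 ** ell * (n + 1) ** 0.5                # bound ||u - v_i|| < 2^ell sqrt(n+1)
    return basis, target, radius

# Reducing `basis` with LLL and running Kannan-style enumeration around `target`
# within `radius` yields candidate lattice vectors whose last coordinate is
# congruent to -a modulo q, which is then checked against the public key.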
Furthermore, if s(L) is closed to the Gauss heuristic, then the upper bound for the number of points of B_𝐯(R)∩ℒ will be the smaller possible, and so it is expect that the attack will be quite efficient. Experiments. We conducted a thorough analysis of our experiments, and we compared our results with those presented by Gomez et al. <cit.>. Our findings indicate a significant improvement in almost all cases. Our experiments were conducted on a Linux machine with an i5-12400 CPU, using Sagemath 9.8 <cit.>. We made the assumption that we already knew the minimum ephemeral key. However, in the general case, where the minimum key is unknown, we would need to perform n executions, where n+1 represents the number of signatures. This worst-case scenario would require multiplying the execution time of each experiment by n. Overall, our results demonstrate a notable improvement compared to the previous findings (see the Table below). Finally, we have successfully found the secret key even when the shared bits in the ephemeral keys are only 5. Remarkably, in this case, we only needed a minimum of 58 signatures. It is worth noting that in <cit.>, no successful attack was provided for the specific scenario where δ=5. § CONCLUSION Attacks based on the malicious modification of memory registers is a topic of high importance, since it may affect the randomness of the secret parameters by forcing a limited number of bits to a certain value, which can be unknown to the attacker. In this context, we developed a deterministic attack on the DSA schemes, providing that several signatures are such that the corresponding ephemeral keys share a number of bits without knowing their value. Our attack is deterministic, meaning that it always produces a result for a given input every time. However, it is important to note that while the attack is deterministic, it may not always be practical to execute. Deterministic attacks on the (EC)DSA are relatively rare, as they typically rely on heuristic assumptions. While deterministic attacks on (EC)DSA, are rare, our attack demonstrates practical feasibility in specific scenarios, surpassing previous results, (see Table <ref>). However, it is important to note that the practicality and effectiveness of our attack may vary depending on the specific choice of (δ,n). Acknowledgement The author, Marios Adamoudis is co-financed by Greece and the European Union (European Social Fund-ESF) through the Operational Programme ”Human Resources Development, Education and Lifelong Learning” in the context of the Act ”Enhancing Human Resources Research Potential by undertaking a Doctoral Research” Sub-action 2: IKY Scholarship Programme for PhD candidates in the Greek Universities. 99 marios M. Adamoudis, K. A. Draziotis and D. Poulakis, Enhancing a DSA attack, CAI 2019, p. 13-25. LNCS 11545, Springer 2019. Aranha D. F. Aranha, F. R. Novaes, Akira Takahashi, M. Tibouchi, and Y. Yarom. LadderLeak: Breaking ECDSA with less than one bit of nonce leakage. In Jay Ligatti, Xinming Ou, Jonathan Katz, and Giovanni Vigna, editors, ACM CCS 2020, pages 225-242. ACM Press, November 2020. Bellare M. Bellare, S. Goldwasser and Micciancio, “Pseudo-random" number generation within cryptographic algorithms: the DSS case. In Proc. of Crypto '97, LNCS 1294 IACR, Palo Alto, CA. Springer-Verlag, Berlin 1997. Blake I. F. Blake and T. Garefalakis, On the security of the digital signature algorithm. Des. Codes Cryptogr., 26, no. 1-3 (2002), 87-96. Bleichenbacher D. Bleichenbacher. 
On the generation of one-time keys in DL signature schemes. In Presentation at IEEE P1363 working group meeting, page 81, 2000. Draziotis K. A. Draziotis and D. Poulakis, Lattice attacks on DSA schemes based on Lagrange's algorithm. 5th international Conference on Algebraic Informatics, CAI 2013. Berlin: Springer. LNCS 8080, 119-131 (2013). Draziotis2 K. A. Draziotis, (EC)DSA lattice attacks based on Coppersmith's method, Information Processing Letters 116(8), Elsevier (2016), Pages 541-545. ElGamal T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithm, IEEE Transactions on Information Theory, 31 (1985), 469-472. fips FIPS PUB 186-3, Federal Information Processing Standards Publication, Digital Signature Standard (DSS). Faugere J. -L. Faugère, C. Goyet, and G. Renault, Attacking (EC)DSA Given Only an Implicit Hint, Selected Area of Cryptography, LNCS 7707, p. 252–274, Springer-Verlag, Berlin - Heidelberg 2013. Gomez Ana I. Gomez, D. Gomez-Perez, and G. Renault, A probabilistic analysis on a lattice attack against DSA. Des. Codes Cryptogr. 87, 2469-2488 (2019). Hanrot G. Hanrot and D. Stehlé, Improved analysis of kannan’s shortest lattice vector algorithm. In Proceedings of Crypto, LNCS 4622, 170-186. Springer, 2007. Hanrot2 G. Hanrot, X. Pujol and D. Stehlé, Algorithms for the shortest and closest lattice vector problems. Chee, Yeow Meng (ed.) et al., Coding and cryptology. Third international workshop, IWCC 2011, Qingdao, China, May 30 – June 3, 2011. Proceedings. Berlin: Springer. Lecture Notes in Computer Science 6639, 159-190 (2011). Hoffstein J. Hoffstein, J. Pipher, H. H. Silverman, An introduction to mathematical cryptography. 2nd ed. Undergraduate Texts in Mathematics. New York, NY: Springer 2014. Howgrave N. A. Howgrave-Graham and N. P. Smart, Lattice Attacks on Digital Signature Schemes, Des. Codes Cryptogr. 23 (2001) 283-290. Johnson D. Johnson, A. J. Menezes and S. A. Vastone, The elliptic curve digital signature algorithm (ECDSA), Intern. J. of Information Security, 1 (2001) 36-63. Koblitz N. Koblitz, A. J. Menezes and S. A. Vastone, The state of elliptic curve cryptography, Des. Codes Cryptogr. 19 (2000), 173-193. Koblitz2 N. Koblitz and A. J. Menezes, A survey of Public-Key Cryptosystems, SIAM REVIEW, 46, No. 4 (2004), 599-634. Leadbitter P.J. Leadbitter, D. Page, N.P. Smart. Attacking DSA Under a Repeated Bits Assumption. In: Joye, M., Quisquater, JJ. (eds) Cryptographic Hardware and Embedded Systems - CHES 2004. CHES 2004. Lecture Notes in Computer Science, vol 3156, (2004) 428-440. Springer, Berlin, Heidelberg. Lenstra A. K. Lenstra, H. W. Lenstra Jr., and L. Lovász, Factoring polynomials with rational coefficients, Math. Ann., 261 (1982), 513-534. Malikiosis R.-D. Malikiosis, Lattice-point enumerators of ellipsoids, Combinatorica 33, No. 6 (2013) 733-744. Menezes A. J. Menezes, P. C. van Oorschot and S. A. Vanstone, Handbook of Applied Cryptography, CRC Press, Boca Raton, Florida, 1997. Micciancio D. Micciancio and P. Voulgaris. A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. In Proc. of STOC, ACM, (2010) pages 351-358. Mulder1 E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. Using Bleichenbacher s solution to the Hidden Number Problem to attack nonce leaks in 384-bit ECDSA. In Cryptographic Hardware and Embedded Systems-CHES 2013, 435-452. Springer, 2013. Mulder2 E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. 
Using Bleichenbacher's solution to the hidden number problem to attack nonce leaks in 384-bit ecdsa: extended version. Journal of Cryptographic Engineering, 4(1):33-45, 2014. National National Institute of Standards and Technology (NIST). FIPS Publication 186: Digital Signature Standard. May 1994. Nguyen P. Nguyen and I. E. Shparlinski, The Insecurity of the Digital Signature Algorithm with Partially Known Nonces, J. Cryptology, 15 (2002), 151-176. Nguyen2 P. Nguyen and I. E. Shparlinski, The Insecurity of the Elliptic Curve Digital Signature Algorithm with Partially Known Nonces, Des. Codes Cryptogr. 30, (2003), 201-217. Poulakis D. Poulakis, Some Lattice Attacks on DSA and ECDSA, Applicable Algebra in Engineering, Communication and Computing, 22, (2011), 347-358. Poulakis1 D. Poulakis, New lattice attacks on DSA schemes, J. Math. Cryptol. 10 (2) (2016), 135–144. sage Sage Mathematics Software, The Sage Development Team. <http://www.sagemath.org>. Sun C. Sun, T. Espitau, M. Tibouchi, and M. Abe, Guessing Bits: Improved Lattice Attacks on (EC)DSA with Nonce Leakage, IACR Transactions on Cryptographic Hardware and Embedded Systems, ISSN 2569-2925, Vol. 2022, No. 1, pp. 391-413. Zheng Z. Zheng, Modern Cryptography, Volume 1, Springer 2021.
http://arxiv.org/abs/2307.04002v1
20230708160353
Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems
[ "Jiaqi Zou", "Songlin Sun", "Christos Masouros", "Yuanhao Cui", "Yafeng Liu", "Derrick Wing Kwan Ng" ]
eess.SP
[ "eess.SP" ]
Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems Jiaqi Zou, Graduate Student Member, IEEE, Songlin Sun, Senior Member, IEEE, Christos Masouros, Senior Member, IEEE, Yuanhao Cui, Member, IEEE, Ya-Feng Liu, Senior Member, IEEE, and Derrick Wing Kwan Ng, Fellow, IEEE Part of this work has been submitted to the IEEE Global Communications Conference (GLOBECOM 2023) for possible presentation <cit.>. Jiaqi Zou is with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China, and also with the Department of Electrical and Electronic Engineering, University College London, London WC1E 7JE, UK (e-mail: [email protected]). Songlin Sun and Yuanhao Cui are with Beijing University of Posts and Telecommunications (BUPT), Beijing, China (e-mail: [email protected], [email protected]). Christos Masouros is with the Department of Electrical and Electronic Engineering, University College London, WC1E 7JE, UK (e-mail: [email protected]). Ya-Feng Liu is with the State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]). Derrick Wing Kwan Ng is with the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia (e-mail: [email protected]). August 12, 2023 In this paper, we investigate the design of energy-efficient beamforming for an ISAC system, where the transmitted waveform is optimized for joint multi-user communication and target estimation simultaneously.
We aim to maximize the system energy efficiency (EE), taking into account the constraints of a maximum transmit power budget, a minimum required signal-to-interference-plus-noise ratio (SINR) for communication, and a maximum tolerable Cramér-Rao bound (CRB) for target estimation. We first consider communication-centric EE maximization. To handle the non-convex fractional objective function, we propose an iterative quadratic-transform-Dinkelbach method, where Schur complement and semi-definite relaxation (SDR) techniques are leveraged to solve the subproblem in each iteration. For the scenarios where sensing is critical, we propose a novel performance metric for characterizing the sensing-centric EE and optimize the metric adopted in the scenario of sensing a point-like target and an extended target. To handle the nonconvexity, we employ the successive convex approximation (SCA) technique to develop an efficient algorithm for approximating the nonconvex problem as a sequence of convex ones. Furthermore, we adopt a Pareto optimization mechanism to articulate the tradeoff between the communication-centric EE and sensing-centric EE. We formulate the search of the Pareto boundary as a constrained optimization problem and propose a computationally efficient algorithm to handle it. Numerical results validate the effectiveness of our proposed algorithms compared with the baseline schemes and the obtained approximate Pareto boundary shows that there is a non-trivial tradeoff between communication-centric EE and sensing-centric EE, where the number of communication users and EE requirements have serious effects on the achievable tradeoff. Integrated sensing and communication (ISAC), energy efficiency, fractional programming. § INTRODUCTION Integrated sensing and communications (ISAC) are anticipated as a viable enabling technology for unlocking the potential of next-generation wireless networks, as the two kinds of systems tend to share various common devices, signal processing techniques, and even the hardware circuitries. Rather than the conventional parallel development of the two systems, the joint designs advocating their coexistence and cooperation have attracted extensive research interest in recent years. For instance, the coexistence of communication and radar systems focuses on spectrum sharing or physical integration design, which mainly aims to mitigate the mutual interference and efficiently manage the limited wireless resources <cit.>. Indeed, since communication and radar systems may transmit independent signals superimposed in the time/frequency domains, the interference between each other should be minimized to facilitate their individual functionalities. In such cases, numerous approaches have been proposed, such as cooperative spectrum sharing <cit.> and beamforming design <cit.>. Nevertheless, the existence of inevitable mutual interference still causes certain limitations on spectral efficiency performance. Meanwhile, compared with the coexistence design approaches that generate communication and sensing signals separately, ISAC employs a common transmitted signal for realizing communication and sensing simultaneously. In such a case, the crux of ISAC is how to design a specialized waveform for effectively transmitting data and sensing potential targets. In particular, the waveform design can be categorized into the communication-centric, radar-centric, and joint design according to the design goals <cit.>. 
Specifically, the radar-centric design aims to modulate the communication data onto the radar pulses, where the radar probing signals can be regarded as an information carrier <cit.>. On the other hand, communication-centric approaches utilize existing communication signals to sense the environment, such as cellular signals <cit.> and Wi-Fi signals <cit.>. In particular, various environmental conditions can be extracted from the received echoes of the communication signals, as the target's existence or movement inevitably affects the signal's propagation. Nevertheless, the integration performance is limited in the above two approaches, as the communication/sensing functionality is often carried out as ancillary tasks. In contrast, the joint ISAC design studies the co-design of signaling methodologies enabling both communications and sensing, which is the research content of this work. §.§ Related Works Related works of joint waveform design focus on striking a balance between the tradeoff of communication and sensing. For example, <cit.> investigated the tradeoff between the multi-user interference minimization and the appropriate radar beampattern formulation. Besides, a recent work in <cit.> considered the Cramér-Rao bound (CRB) minimization with guaranteed signal-to-interference-plus-noise ratio (SINR) for each communication user. Furthermore, as widely-used performance metrics, the fundamental tradeoff between the CRB for target parameter estimation and the data rate for communication was also investigated in <cit.> under various system settings, to unveil the potential of ISAC. Despite the above approaches can achieve favorable performance tradeoffs between the estimation performance and spectral efficiency <cit.>, the energy efficiency (EE) optimization of the joint waveform has not been fully investigated. Currently, the energy consumption of the state-of-the-art fifth-generation (5G) wireless networks is extremely high, resulting in expensive operational costs <cit.>. It is anticipated that the upcoming ISAC will pave the way for developing a perceptive wireless network requiring a much higher energy consumption than the current one, since the wireless signals are expected to achieve the dual purposes of environment sensing and information transmission simultaneously. This could hinder the long-term development of sustainable and environmentally friendly wireless communication technologies. Hence, there is a pressing need to investigate the energy efficiency design of ISAC for establishing a perceptive-efficient and spectrally-efficient cellular network. Actually, energy-aware optimization has been a hot topic in the past decade for conventional cellular networks, e.g., <cit.>. Specifically, EE is defined as the ratio of the achieved data rate and the required power consumption, capturing the energy consumption per bit in communication, which has been widely studied for various communication networks <cit.>. However, these approaches for maximizing the communication EE cannot be directly applied to ISAC, as they do not take into consideration of sensing functionalities. Recently, the EE optimization for radar-communication spectrum sharing has been studied in  <cit.>, and the results cannot be applied to ISAC systems either due to the separated signal waveform design. On the other hand, a few works have studied ISAC beamforming for maximizing communication-centric EE. 
For instance, the work of <cit.> investigated the communication EE maximization under the required radar beampattern constraint. Yet, it does not consider the sensing EE and the performance of target parameter estimation. Besides, the work of <cit.> focused on energy minimization under the sensing and communication constraints. In particular, the algorithm designed in <cit.> cannot handle the EE optimization due to the intrinsic challenges brought by fractional programming in the resource allocation design. More importantly, to the best of our knowledge, the sensing-centric EE that characterizes the EE of target sensing has been rarely studied in the literature. In particular, to fulfill the increasing demand for sensing services, it is natural for the base station (BS) to transmit the waveforms with high power for improving the detection and estimation performance. However, this operation will inevitably bring unaffordable energy costs, which contradicts to the emerging requirements of carbon neutrality and environmental sustainability for future wireless networks <cit.>. Therefore, there is an urgent need for the design an energy-efficient sensing performance metric for ISAC. §.§ Contributions Against this background, this work considers the EE optimization for the waveform design of ISAC, where the communication-centric EE, sensing-centric EE, and their tradeoffs are investigated. Specifically, for the ISAC systems wherein communication serves as the primary objective, we study the ISAC waveform design for maximizing the communication-centric EE, i.e., the ratio of the achievable rate and the corresponding power consumption, while guaranteeing both the target estimation and communication performance in terms of the CRB and SINR, respectively. As for the sensing-centric ISAC systems, for the first time, we propose the performance metric to measure the sensing-centric EE for target parameter estimation. Then, we optimize the ISAC waveform to maximize the sensing-centric EE, considering the constraints of SINR, CRB, and the maximum transmission power budget. Then, we study the Pareto boundary of communication-centric EE and sensing-centric EE for characterizing their tradeoffs. The main contributions of this paper are summarized as follows. * We optimize the communication-centric EE considering the two scenarios having a point-like target estimation and an extended target estimation, respectively, under the constraints of CRB, SINR, and transmission power limitations. For the case of point-like target, the nonconvexity of the objective function and CRB constraint hinder the communication-centric EE optimization. For handling these challenges, we first adopt the quadratic-transform-Dinkelbach method to reformulate the nonconvex fractional objective function as a tractable formulation. Then, we adopt the semi-definite relaxation and linear matrix inequality to convert the nonconvex optimization problem into a sequence of convex optimization problems. Finally, we generalize the proposed algorithm to an extended target case. * We propose a performance metric for capturing the notion of sensing-centric EE for the first time, which adopts the ratio of the reciprocal of the CRB to the transmit energy for measuring “information-per-Joule’’. Then, based on the proposed metric, we consider the sensing-centric EE maximization for point-like/extended targets by optimizing the transmit beamforming. 
Although the considered problem is nonconvex, we adopt the Schur complement to reformulate the problem into a tractable formulation, facilitating the development of a successive convex approximation (SCA)-based algorithm to effectively acquire the solution to the design problem. * We adopt the Pareto optimization technique to characterize the tradeoff between the communication-centric EE and the sensing-centric EE. In particular, we formulate a constrained optimization problem that maximizes the communication-centric EE under the constraint of sensing-centric EE. To handle the nonconvexity of the considered optimization problem, we propose an SCA-based iterative algorithm for addressing the nonconvexity. Then, by varying the threshold of the sensing-centric EE, the approximate Pareto boundary can be obtained by solving a sequence of constrained problems. Simulation results present the Pareto boundary to demonstrate the tradeoff between the two EE metrics. The remainder of this paper is organized as follows. Section II introduces the system model, including the communication model and the sensing model. In Section III, we study the optimization of the communication-centric EE under the sensing and communication constraints. The sensing-centric EE is studied in Section IV. Section V investigates the tradeoff between the communication-centric and the sensing-centric EE. Simulation results are provided in Section VI. Finally, we conclude the paper in Section VII. Notations: The normal plain text (i.e., t), bold lowercase letters (i.e., 𝐰) and uppercase letters (i.e., 𝐖) represent scalars, vectors, and matrices, respectively. tr(·), rank(·), (·)^H, and (·)^T denote the trace operator, the rank operator, the Hermitian transpose, and the transpose operator, respectively. ℂ^n × n stands for an n × n complex-valued matrix. · represents the L_2 norm of a matrix. The inequality 𝐀≽0 means that 𝐀 is Hermitian positive semi-definite. Re(·) denotes the real part of the argument. We adopt 𝔼(·) for the stochastic expectation. ḟ(x) denotes the first derivative of function f(x). The notation ≜ is used for definitions. § SYSTEM MODEL As depicted in Fig. <ref>, we consider an ISAC multiple-input multiple-output (MIMO) system, where the BS equipped with M transmit antennas serves K single-antenna UEs for communication with K ≤ M. Let k ∈𝒦≜{1,2, ⋯,K} denote the communication user set. As for radar estimation, the environmental information is simultaneously extracted from the reflected echoes with N receiving antennas implemented at the BS. Without loss of generality, the number of transmit antennas is less than that of receive antennas, i.e., M ≤ N. As for target sensing, both the point-like target and the extended target cases are considered separately covering various practical scenarios. In particular, the former case denotes the unstructured point that is far away from the BS, such as unmanned aerial vehicles (UAVs). On the other hand, for the extended target, it acts as a reflecting surface with a large number of distributed scatterers, such as a vehicle or a pedestrian <cit.>. The detailed model is given as follows. §.§ Communication Model We denote the beamforming vector and the channel from the BS to the k-th user as 𝐰_k∈ℂ^M× 1 and 𝐡_k∈ℂ^M× 1, respectively. Then, the data symbol intended for the k-th user at time slot l is denoted as s_k[l], with unit power 𝔼( |s_k[l]|^2) =1. 
Left-multiplying 𝐬[l] = [s_1[l], s_2[l], ⋯, s_K[l]]^T ∈ℂ^K × 1 by the beamforming matrix 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_K] ∈ℂ^M × K, the transmitted signal vector of the BS is given by 𝐱[l]= 𝐖𝐬[l]. Then, the transmitted ISAC waveform over L time slots can be denoted as 𝐗 = [ 𝐱[1], 𝐱[2], ⋯, 𝐱[L] ] ∈ℂ^M × L. The received signal at the k-th user during the l-th time slot, l ∈{1, 2, ⋯, L}, is given as follows: y_k[l] = 𝐡_k^H 𝐰_k s_k[l] + ∑_j ∈𝒦, j ≠ k 𝐡_k^H 𝐰_j s_j[l] + z_c[l], where z_c[l] is the additive white Gaussian noise (AWGN) with zero mean and variance σ_c^2. The received SINR at the k-th user can be calculated as SINR_k(𝐖) = | 𝐡_k^H 𝐰_k |^2/( σ_c^2 + ∑_j ∈𝒦, j ≠ k| 𝐡_k^H 𝐰_j |^2), and the corresponding achievable rate is R_k(𝐖) = log_2(1+SINR_k(𝐖)). It is well known that communication-centric EE is defined as the ratio of the transmission sum rate ∑_k R_k(𝐖) to the total power consumption P. Following <cit.>, the power consumption can be calculated as P = 1/ϵ P_d + P_0, where the power amplifier efficiency ϵ∈ [0,1] and P_0 denotes the constant circuit power consumed by circuitries in RF chains, power supply, cooling system, etc. Besides, the total transmit power is given by P_d = ∑_k ‖𝐰_k‖_2^2. Hence, the communication-centric EE, measuring the required "bits-per-Joule", can be calculated as EE_C = ∑_k R_k(𝐖)/P = ∑_k log_2( 1+| 𝐡_k^H 𝐰_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| 𝐡_k^H 𝐰_j |^2) ) / ( 1/ϵ∑_k ‖𝐰_k‖_2^2 + P_0 ). §.§ Sensing Model For radar sensing, the BS exploits the echo signals collected in L time slots to estimate the target parameters. This work considers two cases, with either a point-like target or an extended target. For notational simplicity, we consider the same angle of departure (AOD) and angle of arrival (AOA) of the target, i.e., θ_t=θ_r=θ <cit.>. Then, for the point-like target located in the far field, the target response matrix can be denoted as 𝐀 = α𝐚_r(θ)𝐚^H_t(θ), where 𝐚_x(θ), x∈{t,r}, is the steering vector at angle θ. Following the existing works on ISAC, e.g., <cit.>, we assume that the BS employs a uniform linear array with half-wavelength spacing between adjacent antennas. Then, the transmit and receive steering vectors are given by 𝐚_t(θ) = [ 1, e^-j π cosθ, ⋯, e^-j π (M -1) cosθ]^T, 𝐚_r(θ) = [ 1, e^-j π cosθ, ⋯, e^-j π (N -1) cosθ]^T. For the extended target located in the near field, we follow <cit.> to model it as a reflecting surface with N_s point-like scatterers. Then, the target response matrix can be represented as 𝐀 = ∑_i=1^N_sα_i 𝐚_r(θ_i)𝐚_t^H(θ_i), where α_i is the reflection coefficient of the i-th scatterer. Therefore, the received target echoes 𝐘_R from either the point-like or the extended target can be denoted as 𝐘_R = 𝐀𝐗 + 𝐙_s, where 𝐙_s is the zero-mean AWGN with variance σ_s^2 in each element. Since the CRB is a lower bound on the variance of any unbiased estimator of an unknown parameter and thus characterizes the achievable sensing performance <cit.>, we adopt the CRB as the sensing metric for the energy-efficient ISAC design in the following. § COMMUNICATION-CENTRIC ENERGY-EFFICIENT DESIGN §.§ Point-Like Target Case Since the CRB of α has a similar form to that of θ, for conciseness, this work only considers the CRB of θ for the design of the ISAC beamforming. For the point-like target, the CRB of θ is given as follows <cit.>: CRB(θ)=σ_s^2/( 2L|α|^2 ( M𝐚̇^H(θ)𝐑_𝐱^T𝐚̇(θ)+ 𝐚^H(θ)𝐑_𝐱^T𝐚(θ)‖𝐚̇(θ)‖^2-M|𝐚^H(θ)𝐑_𝐱^T𝐚̇(θ)|^2/𝐚^H(θ)𝐑_𝐱^T𝐚 (θ) ) ), where 𝐑_𝐱 is the sample covariance matrix of 𝐗.
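As a quick numerical check of this expression, the Python sketch below is an illustrative aid; the antenna counts, frame length, α, σ_s, and the random beamformer are arbitrary choices, not values from the paper.

import numpy as np

def steer(n, theta):
    """Half-wavelength ULA steering vector of length n."""
    return np.exp(-1j * np.pi * np.arange(n) * np.cos(theta))

def steer_dot(n, theta):
    """Derivative of the steering vector with respect to theta."""
    return 1j * np.pi * np.arange(n) * np.sin(theta) * steer(n, theta)

def crb_theta(Rx, theta, L, alpha, sigma_s):
    M = Rx.shape[0]
    a, da = steer(M, theta), steer_dot(M, theta)
    RT = Rx.T
    num = (M * (da.conj() @ RT @ da) + (a.conj() @ RT @ a) * np.linalg.norm(da) ** 2
           - M * abs(a.conj() @ RT @ da) ** 2 / (a.conj() @ RT @ a)).real
    return sigma_s ** 2 / (2 * L * abs(alpha) ** 2 * num)

M, K, L = 16, 4, 30
rng = np.random.default_rng(0)
W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2 * M * K)
Rx = W @ W.conj().T                            # R_x ~ W W^H for a long frame
print(crb_theta(Rx, theta=np.pi / 2, L=L, alpha=1.0, sigma_s=1.0))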
Since 𝔼( |s_k[l]|^2) =1, for a large L, we have the asymptotic result R_𝐱 = 1/L X X^H ≈ W W^H = ∑_k=1^K w_k w_k^H  <cit.>. The communication-centric energy efficient design is to maximize the EE_C defined in (<ref>), under the constraints of multiple users’ required SINR and maximal CRB(θ), whose optimization problem can be formulated as follows max_{𝐰_k}_k=1^K   ∑_k=1^K log_2 ( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_k ∈𝒦 j ≠ k| h_k^H w_j |^2) ) /1/ϵ∑_k w_k_2^2 + P_0 s.t.   ∑_k=1^K w_k _2^2 ≤ P_max, CRB(θ) ≤ρ , | h_k^H w_k |^2/σ_c^2 + ∑_k ∈𝒦 j ≠ k| h_k^H w_j |^2≥γ_k, ∀ k, where P_max denotes the power budget of the BS and (<ref>) is the transmit power constraint. Besides, ρ and γ_k are the required CRB threshold for sensing and the required SINR for the k-th communication user, respectively. In general, it is challenging to solve problem (<ref>) directly, due to the nonconvexity of the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>). For addressing the nonconvex optimization problem, we first adopt the Dinkelbach's method <cit.> to reformulate the problem (<ref>) as max_{𝐰_k}_k=1^K   f_1(𝐰_k) - λ f_2(𝐰_k) s.t.    (<ref>), (<ref>), (<ref>), where f_1(𝐰_k) ≜∑_k=1^K log_2 ( 1+| h_k^H w_k |^2/σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2), f_2(𝐰_k) ≜1/ϵ∑_k=1^K w_k_2^2 + P_0, and λ≥ 0 is the auxiliary variable to be iteratively updated by λ = f_1(𝐰_k)/f_2(𝐰_k). With (<ref>) and (<ref>), an efficient solution to problem (<ref>) can be obtained by updating 𝐰_k and λ alternately. Nevertheless, problem (<ref>) is still difficult to handle due to the following issues: 1) the objective function (<ref>) is still non concave over {𝐰_k } due to the fractional function f_1(𝐰_k); 2) nonconvex constraints (<ref>) and (<ref>). Since the function log_2(·) is concave and non-decreasing, the nonconvexity of (<ref>) can be addressed if the term inside log_2(·) can be reformulated as an equivalent concave formulation. Bearing this in mind, since f_1(𝐰_k) belongs to the general multiple-ratio concave-convex fractional programming problem, we adopt the quadratic transform method <cit.> to reformulate f_1(𝐰_k) as f_1(𝐰_k) = t_kmax∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐰_k) ) , where B_k(𝐰_k) = σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2 and t_k is an introduced auxiliary variable that is iteratively updated by t_k = | h_k^H w_k |( σ_c^2 +∑_j=1,j ≠ k^K| h_k^H w_j |^2)^-1. Based on the above reformulations, problem (<ref>) can be recast as max_{𝐰_k, t_k}_k=1^K, λ   ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐰_k) ) - λ( 1/ϵ∑_k=1^K w_k_2^2 + P_0)    s.t. (<ref>), where {𝐰_k, t_k}_k=1^K and λ can be updated alternatively. In the following, we focus on handling the nonconvex constraints (<ref>) and (<ref>). Specifically, constraint (<ref>) can be reformulated as Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2 - M| a^H(θ) R_ x^Tȧ(θ)|^2/ a^H(θ) R_ x^T a(θ) - σ_s^2/2Lρ|α|^2 ≥ 0. Then, for notational conciseness, denoting ℱ( R_X) ≜ Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2, (<ref>) can be reformulated as the following linear matrix inequality by leveraging the Schur complement  <cit.>. [ ℱ( R_x) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ) R_ x^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0 . Next, for handling the nonconvex constraint (<ref>), we introduce an auxiliary optimization variable matrix 𝐖_k and reformulate constraint (<ref>) into tr(𝐐_k 𝐖_k) - γ_k ∑_k ∈𝒦 j ≠ ktr(𝐐_k 𝐖_j) ≥γ_k σ_c^2, W_k =w_k w_k^H, where 𝐐_k = h_k h_k^H. 
Then, problem (<ref>) can be equivalently reformulated as max_{𝐰_k,𝐖_k, t_k}_k=1^K   ∑_k=1^K log_2 ( 1+ 2 t_k ·Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0) s.t.    [ [ ℱ(∑_k=1^K𝐖_k) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ)∑_k=1^K W_k^Tȧ(θ); √(M)ȧ^H(θ)∑_k=1^K W_k^T a(θ) a^H(θ)∑_k=1^K W_k^T a(θ) ] ]≽0 , (<ref>), (<ref>), (<ref>), where B_k(𝐖_k) ≜∑_k ∈𝒦 j ≠ ktr(𝐐_k 𝐖_j) + σ_c^2. However, constraint (<ref>) is a nonconvex equality constraint which is difficult to handle. Therefore, we introduce the following lemma to transform constraint (<ref>) into equivalent inequality constraints. W_k =w_k w_k^H can be equivalently reformulated as [ 𝐖_k 𝐰_k; 𝐰_k^H 1 ]≽0 , 𝐖_k ≽0, ∀ k, tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤ 0, ∀ k. The proof is given in Appendix A. Although the equality constraint in (<ref>) has been reformulated as the equivalent inequality constraints, constraint (<ref>) is still nonconvex. For handling this, we adopt the SCA technique that establishes an inner convex approximation of constraint (<ref>) given as tr(𝐖_k) + (𝐰_k^(i-1))^H 𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ 0, ∀ k, where 𝐰^(i-1)_k is the solution obtained at the i-th iteration of the SCA. Therefore, at the i-th iteration, the convex approximation of problem (<ref>) can be reformulated as max_𝒲, t_k, λ   ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0) s.t.   (<ref>), (<ref>),(<ref>),(<ref>),(<ref>). Algorithm <ref> summarizes the iterative algorithm for handling problem (<ref>), where f̂_1(𝐰_k, 𝐖_k) = ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) and f̂_2(𝐖_k) =1/ϵ∑_k=1^K tr( W_k)+ P_0. Although we cannot guarantee that the optimal solution of problem (<ref>) can be obtained, the proposed Algorithm <ref> follows the inexact Dinkelbach-type algorithm adopted in <cit.>, whose convergence can be guaranteed by the following lemma. Let {𝐰_k^i,𝐖_k^i} be the solution sequence generated by solving problem (<ref>). The sequence {λ^(i)} generated by Algorithm 1 is non-decreasing and convergent. Since f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i)) =(λ^(i+1)-λ^(i))f̂_2(𝐖^(i)), we have λ^(i+1)≥λ^(i) if f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥ 0. Obviously, f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))=0. At the i-th iteration, we approximate problem (<ref>) as problem (<ref>) around 𝐰_k^(i-1). Since 𝐰_k^(i-1) is definitely a feasible solution of problem (<ref>), we have f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))= 0. Therefore, we can conclude that the sequence {λ^(i)} is non-decreasing and Algorithm 1 converges due to the finite power budget. Complexity Analysis: The computational complexity of Algorithm <ref> is dominated by solving problem (<ref>). Problem (<ref>) involves linear matrix inequality (LMI) constraints that dominate the computation complexity. We notice that the problem contains one LMI constraint of size 2M, K LMI constraints of size M+1, and K LMI constraints of size M. Given the required accuracy ϵ_0 > 0, the ϵ_0-optimal solution can be achieved after a sequence of iterations. Then, the computational complexity can be given as 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iterln(1/ϵ_0) ) by reserving the highest order term, where I_iter denotes the number of iterations <cit.>. Due to the stringent requirement introduced by (<ref>), it is generally non-trival to directly obtain a feasible solution as an initial point. 
Alternatively, we can adopt the penalty SCA <cit.> and introduce auxilary variables ρ̅_k to transform problem (<ref>) into max_𝒲, t_k, λ   ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0) - p̅∑_k=1^K ρ̅_k s.t.   𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ρ̅_k, ∀ k, (<ref>), (<ref>), (<ref>), (<ref>), where p̅ and ∑_k=1^K ρ̅_k denote the weight coefficient and the penalty term, respectively. To obtain the initial point of (<ref>), we can solve problem (<ref>) as an initial warm-up phase by gradually raising p̅ to induce a reduction in the penalty term to a smaller value. When the penalty term decreases to zero, problem (<ref>) reduces to problem (<ref>), whose solution serves as the feasible initial point of (<ref>). §.§ Extended Target Case For estimating the extended target, we follow <cit.> to consider the CRB of the target response matrix 𝐀 instead of the angle. Since K ≤ M, transmitting K signal streams is not always sufficient for recovering the rank-M matrix. To address this issue, the BS generates additional signals that are dedicated for target probing. As such, the augmented data matrix at the l-th time slot is 𝐱̃[l]≜[𝐖, 𝐖̃][𝐬[l];𝐬̃[l]], where 𝐬̃[l] ∈ℂ^(N_t-K) × 1 is the dedicated probing signal and 𝔼( 𝐬[l] 𝐬̃^H[l] ) = 0. Note that in the augmented signal, the beamforming 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_K] ∈ℂ^M × K broadcasts the information data to the K users and the beamforming 𝐖̃ = [𝐰_K+1, ⋯, 𝐰_K+M] ∈ℂ^M × M is employed to generate probing signals for enabling the estimation of the target response matrix. However, the introduced probing signals 𝐬̃[l] inevitably generate undesired interference to the served multiple users that introduces non-trivial tradeoff between sensing and communication. In particular, the SINR received at the k-th user is given by S̃ĨÑR̃_k = | 𝐡_k^H 𝐰_k|^2/∑ _i = 1,i k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2, where ‖𝐡_k^H𝐖̃‖^2_2 is the additional interference due to the probing signals. In such a case, the CRB for the extended target estimation can be derived as CRB_extended= σ_s^2 M/Ntr(𝐑_𝐱^ - 1), where 𝐑_𝐗 = 𝐖𝐖^H + 𝐖̃𝐖̃^H . Based on the discussions above, the problem of communication-centric EE optimization for estimating an extended target can be formulated as max_{𝐰_k}_k=1^K+M    ∑_k=1^K log_2(1+S̃ĨÑR̃ _k)/1/ϵ∑_k=1^K+M w_k _2^2 + P_0 s.t.    ∑_k=1^K+M w_k _2^2 ≤ P_max, CRB_extended= σ_s^2 M/Ltr(𝐑_𝐱^ - 1) ≤τ , S̃ĨÑR̃_̃k̃≥γ_k. Obviously, although constraints (<ref>) and (<ref>) are both convex, the fractional objective function (<ref>) is still nonconvex. Following Section <ref>, we first adopt Dinkelbach’s transformation to handle the nonconvex fractional programming and reformulate the problem as follows max_{𝐰_k}_k=1^K+M     ∑_k=1^K log_2 (1+S̃ĨÑR̃ _k) - λ( 1/ϵ∑_k=1^K+M w_k _2^2 + P_0) s.t.      (<ref>), (<ref>), (<ref>). Then, by exploiting the equality -log a = bmax (log b - ab) <cit.>, problem (<ref>) can be reformulated as max_{𝐰_k}_k=1^K+M, {b_k}_k=1^K, λ    ∑_k=1^K log_2 ( | 𝐡_k^H 𝐰_k|^2 + ∑_i = 1,i k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2) + ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H 𝐖̃‖_2^2 + σ _C^2 ) ) - λ( 1/ϵ∑_k=1^K+M w_k _2^2 + P_0) s.t.     (<ref>), (<ref>), (<ref>). 
For obtaining a tractable formulation, by introducing auxiliary variables 𝐖_k ≜𝐰_k 𝐰_k^H, k ∈ [1, 2, ⋯, K] and 𝐑_𝐖̃ = 𝐖̃𝐖̃^H, problem (<ref>) can be reformulated as max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖_2, λ   ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k ) + ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) ) - λ( 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0) , s.t.  tr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ≤ P_max, σ_s^2 M/Ntr( ( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ^-1) ≤τ , 𝐡_k 𝐖_k 𝐡^H_k - γ_k ( ∑_i = 1,i k^K𝐡_k^H 𝐖_i 𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k ) ≥γ_k σ_c^2, 𝐖_k ≽0, ∀ k, 𝐑_𝐖̃≽0, rank(𝐖_k) = 1, ∀ k. After inspecting problem (<ref>), we can find that all constraints are convex, except for constraint (<ref>). Besides, the objective function in (<ref>) includes three sets of optimization variables: {λ}, {b_k}, and {{𝐖_k}_k=1^K, 𝐑_𝐖̃}. Moreover, when fixing the other two sets, the objective function is convex with respect to the remaining one. Therefore, we first adopt the rank relaxation to remove constraint (<ref>) and then employ an alternating optimization (AO) algorithm to optimize three sets of optimization variables alternately. The detailed algorithm is summarized in Algorithm 2, where we denote f̃_1(𝐖_k, 𝐑_𝐖̃ ) = ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k ) + ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) ) f̃_2(𝐖_k, 𝐑_𝐖̃ ) = 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0. In the following theorem, we will show that the rank-1 solution of problem (<ref>) can be recovered from the solution generated by Algorithm 2. Given the optimal solution obtained by Algorithm <ref> as {𝐖_k^∗, 𝐑^∗_𝐖̃}. When K = 1, 𝐖̂^∗ = 𝐖^∗𝐡_k 𝐡_k^H 𝐖^∗/𝐡_k^H 𝐖^∗𝐡_k,   𝐑̂^∗_𝐖̃= 𝐑^∗_𝐖̃ is the optimal rank-1 solution that achieves identical performance as {𝐖_k^∗, 𝐑^∗_𝐖̃}. When K > 1, one can always construct the optimal solution that satisfies the rank-1 constraint acquiring the same performance. The proof is given in Appendix B. Complexity Analysis: We provide the computational complexity of Algorithm <ref> as follows. Similarly, the problem (<ref>) is a semidefinite program that can be solved by the standard interior-point algorithm. We note that the problem involves K+1 LMI constraints of size M. We consider the highest order term and express the computational complexity as 𝒪( √(MK+M+K+1) M^6 K^3 I_iterlog(1/ϵ_0) ) for an ϵ_0-optimal solution, where I_iter represents the number of iterations <cit.>. § SENSING-CENTRIC ENERGY-EFFICIENT DESIGN §.§ Performance Metric for Sensing-Centric EE It is well known that CRB is the inverse of Fisher information for the unbiased estimator <cit.>. In fact, Fisher information is the statistical expected value of the observed information about an observable random variable. Considering these, we adopt the reciprocal ratio of the CRB to the transmit power, further normalized by the total time slot length. In this context, we arrive at a novel sensing-centric EE metric that measures the average sensing information per Joule, defined as EE_s≜CRB^-1/L ( 1/ϵ∑_k=1^K w_k_2^2 + P_0 ) . In this manner, both the sensing-centric EE and communication-centric EE measure the “information” per Joule, but the “information” has different meanings. Based on the above metric, we study the waveform design to maximize the sensing-centric EE considering the point-like target and the extended target in Sections <ref> and <ref>, respectively. 
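To illustrate how the two figures of merit are computed for a candidate beamformer, the Python sketch below is an illustrative helper of our own; the channel, noise, and power values are placeholders, and the CRB is passed in as an argument (e.g. from the crb_theta helper sketched earlier), so the functions only evaluate the two metrics as defined above.

import numpy as np

def ee_comm(H, W, sigma_c2, eps, P0):
    """Communication-centric EE: sum rate over total power consumption."""
    K = H.shape[0]
    rate = 0.0
    for k in range(K):
        desired = abs(H[k].conj() @ W[:, k]) ** 2
        interference = sum(abs(H[k].conj() @ W[:, j]) ** 2 for j in range(K) if j != k)
        rate += np.log2(1.0 + desired / (sigma_c2 + interference))
    return rate / (np.linalg.norm(W) ** 2 / eps + P0)

def ee_sense(crb, W, L, eps, P0):
    """Proposed sensing-centric EE: 1/CRB per Joule over an L-slot frame."""
    return (1.0 / crb) / (L * (np.linalg.norm(W) ** 2 / eps + P0))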
§.§ Point-Like Target Case Considering the point-like target, with the CRB of estimating θ given in (<ref>), the sensing-centric EE optimization problem can be formulated as max_{𝐰_k}_k=1^K    CRB^-1(θ)/ L ( 1/ϵ∑_k=1^K w_k_2^2 + P_0 ) s.t.    ∑_k=1^K w_k _2^2 ≤ P_max, CRB(θ) ≤ρ , | h_k^H w_k |^2/σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2≥γ_k, ∀ k. Obviously, problem (<ref>) is also intractable due to the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>). For handling the fractional objective function (<ref>), with the introduced auxiliary optimization variables ω, t,ϕ, and ζ, problem (<ref>) can be reformulated as max_{𝐰_k}_k=1^K, ω, ϕ, ζ     ω s.t.     CRB^-1(θ) ≤1/t, 1/ϵ∑_k=1^K w_k_2^2 + P_0 ≤ϕ, t ≥ζ^2, ω≤ζ^2/ϕ, (<ref>), (<ref>), (<ref>). The equivalence between (<ref>) and (<ref>) is obvious, since constraints (<ref>), (<ref>), and (<ref>) should be active at the optimal solution. We note that (<ref>) share the same form with (<ref>). Therefore, with Schur complement, constraint (<ref>) can be reformulated as [ ℱ(∑_k=1^K𝐖_k) - t σ_s^2/2L |α|^2 √(M) a^H(θ)∑_k=1^K𝐖_k^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0, where ℱ(∑_k=1^K𝐖_k) ≜ Mȧ^H(θ)∑_k=1^K𝐖_k^Tȧ(θ)+ a^H(θ)∑_k=1^K𝐖_k^T a(θ)‖ȧ(θ)‖^2 and 𝐖_k = 𝐰_k 𝐰_k^H. Furthermore, Lemma <ref> presents an equivalent formulation of the equality 𝐖_k = 𝐰_k 𝐰_k^H whose convex approximation has been given in (<ref>) and (<ref>). Then, for handling the fractional constraint (<ref>), we introduce auxiliary variables {τ_k, ψ_k, ∀ k} to reformulate (<ref>) as τ^2_k / ψ_k ≥γ_k, τ_k = 𝐡_k^H 𝐰_k, ψ_k ≥σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2, where (<ref>) and (<ref>) are convex constraints. Then, problem (<ref>) can be reformulated as max_Θ    ω s.t.    ω≤ζ^2/ϕ , γ_k ≤τ^2_k/ψ_k , ∀ k (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),(<ref>), (<ref>), where Θ≜{{𝐖_k, 𝐰_k}_k=1^K, ω, t,ϕ, ζ, τ_k, ψ_k } denotes the set of optimization variables. Obviously constraint (<ref>) is convex. Therefore, the challenge for handling problem (<ref>) lies in the nonconvexity of constraint (<ref>). To deal with this, we adopt the SCA techniques to establish a convex approximation of constraint (<ref>). Since function ζ^2/ϕ is jointly convex with respect to ζ and ϕ, its convex lower approximation can be established as ζ^2/ϕ ≥(ζ^(n))^2/ϕ^(n) + 2 ζ^(n)/ϕ^(n) (ζ - ζ^(n) ) - ( ζ^(n)/ϕ^(n)) ^2 (ϕ - ϕ^(n) ) = 2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ , where ζ^(n) and ϕ^(n) are the feasible points obtained at the n-th iteration of the SCA. Consequently, the inner convex approximation of ω≤ζ^2/ϕ is ω≤2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ. Similarly, the inner convex approximation of γ_k ≤τ^2_k/ψ_k, ∀ k is γ_k ≤2 τ_k^(n)/ψ_k^(n)τ_k - ( τ_k^(n)/ψ_k^(n)) ^2 ψ_k , ∀ k , where τ_k^(n) and ψ_k^(n) are the feasible points obtained at the n-th iteration. Finally, a convex approximation of problem (<ref>) is formulated as max_Θ    ω s.t.    (<ref>), (<ref>), (<ref>). In this way, problem (<ref>) can be solved with off-the-shelf numerical convex program solvers such as CVX Toolbox <cit.>. We summarize the proposed iterative method in Algorithm <ref>, where its initial feasible solution can be obtained by following the penalty SCA method given in Remark 1. In the following, we analyze the convergence of Algorithm <ref>. We can note that in the iterative procedure of Algorithm <ref>, Θ^(n-1) is always feasible in problem (<ref>) at n-th iteration owing to the adopted first-order Taylor approximation. 
We note that (<ref>) can be optimally solved and the optimal value of its objective function serves as a lower bound on that of (<ref>). Therefore, it can be guaranteed that the optimal value of (<ref>) at the n-th iteration, denoted as p_∗^(n), always satisfies p_∗^(n) ≥ p_∗^(n-1). Hence, Algorithm <ref> produces a non-decreasing objective function of problem (<ref>). Similar to Algorithm <ref>, the computational complexity of Algorithm <ref> is 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iter ln(1/ϵ_0) ). §.§ Extended Target Case For the case of the extended target, following the discussion in Section <ref>, we choose 𝐀 as the parameter to be estimated and adopt the formulation of the CRB in (<ref>). Then, the sensing-centric EE for sensing an extended target is EE_S = ( σ_s^2 M/L tr(𝐑_𝐱^-1) )^-1/( L ( 1/ϵ tr(𝐑_𝐱) + P_0 ) ) = ( tr(𝐑_𝐱^-1) )^-1/( σ_s^2 M ( 1/ϵ tr(𝐑_𝐱) + P_0 ) ) , where 𝐑_𝐱 = 𝐖𝐖^H + 𝐖̃𝐖̃^H = ∑_k=1^K 𝐰_k 𝐰_k^H + 𝐑_𝐖̃. Then, we formulate the problem as max_{𝐰_k}_k=1^K, 𝐑_𝐖̃    ( tr(𝐑_𝐱^-1) )^-1/( σ_s^2 M ( 1/ϵ tr(𝐑_𝐱) + P_0 ) ) s.t.    tr(𝐑_𝐱) ≤ P_max, σ_s^2 M/N tr(𝐑_𝐱^-1) ≤ ϕ , SINR_k ≥ γ_k, where SINR_k is given in (<ref>) and can be recast in a convex form as in (<ref>) by letting 𝐖_k = 𝐰_k 𝐰_k^H. We notice that in (<ref>), the numerator is the reciprocal of a convex function and the denominator is strictly positive and convex. To handle its nonconvexity, we introduce auxiliary optimization variables p_e, q_e and equivalently transform the problem into max_{𝐰_k}_k=1^K, 𝐑_𝐖̃, q_e, p_e   1/(p_e q_e) s.t.       p_e ≥ σ_s^2 M ( 1/ϵ tr(𝐑_𝐱) + P_0 ), q_e ≥ tr(𝐑_𝐱^-1), (<ref>), (<ref>), (<ref>). Then, the problem can be further transformed into the equivalent form min_{𝐖_k}_k=1^K, 𝐑_𝐖̃, q_e, p_e   ln(p_e) + ln(q_e)     s.t.  (<ref>), (<ref>), where the objective function is still not convex, but can be approximated based on the first-order Taylor series expansion given by ln(p_e) + ln(q_e) ≤ ln( p^(n)_e ) + ln( q_e^(n) ) + 1/p_e^(n) ( p_e - p_e^(n) ) + 1/q^(n)_e ( q_e - q^(n)_e ) , where p_e^(n) and q_e^(n) are the feasible solutions obtained at the n-th iteration. Following the techniques detailed in Section <ref>, a convex approximation of problem (<ref>) at the n-th iteration can be established as min_{𝐖_k}_k=1^K, 𝐑_𝐖̃, q_e, p_e  ln(p^(n)_e) + ln(q_e^(n)) + 1/p_e^(n) (p_e - p_e^(n)) + 1/q^(n)_e (q_e - q^(n)_e) s.t.   (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). The computational complexity is 𝒪( √(MK+M+K+1) M^6 K^3 I_iter ln(1/ϵ_0) ) for an ϵ_0-optimal solution. Based on the optimal solution of (<ref>), denoted as {𝐖_k^∗, 𝐑^∗_𝐖̃}, an optimal rank-1 solution can always be reconstructed. The proof can be obtained by following the proof of Theorem 2 and the details are omitted for brevity. § APPROXIMATE PARETO BOUNDARY OF ENERGY-EFFICIENT ISAC SYSTEMS In this section, we aim to investigate the Pareto boundary of the achievable EE performance region built on the communication-centric EE and the sensing-centric EE. Considering the point-like target case, we follow <cit.> to formulate the search of the Pareto boundary as a constrained optimization problem that maximizes the communication-centric EE under a sensing-centric EE constraint. It is worth noting that the proposed algorithm can be adapted to the extended target case directly. Now, we aim to solve max_{𝐰_k}_k=1^K   ∑_k=1^K log_2 ( 1+ |𝐡_k^H 𝐰_k|^2 / ( σ_c^2 + ∑_j ∈ 𝒦, j ≠ k |𝐡_k^H 𝐰_j|^2 ) ) / ( 1/ϵ ∑_k ‖𝐰_k‖_2^2 + P_0 ) s.t.   CRB^-1(θ)/( L ( 1/ϵ ∑_k=1^K ‖𝐰_k‖_2^2 + P_0 ) ) ≥ ℰ, ∑_k ‖𝐰_k‖_2^2 ≤ P_max, where ℰ denotes the required minimum sensing-centric EE threshold.
Obviously, problem (<ref>) is a nonconvex fractional program, which is challenging to solve directly. To handle the fractional objective function (<ref>) and the nonconvex constraint (<ref>), we follow <cit.> to find the approximate optimal Pareto boundary for characterizing the tradeoff between the communication-centric EE and the sensing-centric EE. In particular, we first apply the Dinkelbach algorithm to reformulate the fractional function (<ref>) as max_λ ∑_k=1^K log_2 ( 1+ |𝐡_k^H 𝐰_k|^2 /B_k(𝐖_k) ) - λ ( 1/ϵ ∑_k=1^K tr( 𝐖_k ) + P_0 ) s.t. (<ref>), (<ref>), where B_k(𝐖_k) = ∑^K_j=1, j ≠ k tr(𝐐_k 𝐖_j) + σ_c^2. Furthermore, by introducing auxiliary variables b_k, k=1,…,K, the intractable fractional terms in (<ref>) can be equivalently formulated as ∑_k=1^K log_2 ( 1+ |𝐡_k^H 𝐰_k|^2 /B_k(𝐖_k) ) = max_b_k ( ∑_k=1^K log_2 (1+ b_k) - ∑_k=1^K b_k + ∑_k=1^K (1+b_k) |𝐡_k^H 𝐰_k|^2 /B_k(𝐖_k) ), which has the analytical solution b_k = |𝐡_k^H 𝐰_k|^2/B_k(𝐖_k). Finally, by applying the quadratic transform <cit.>, problem (<ref>) can be reformulated as max_{𝐰_k, 𝐖_k, b_k, t_k}_k=1^K, λ   ∑_k ( log_2 (1+ b_k) - b_k + 2 t_k √(1+b_k) Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ ( 1/ϵ ∑_k=1^K tr( 𝐖_k ) + P_0 ) s.t.     (<ref>), (<ref>), (<ref>), (<ref>). The convex approximation of the nonconvex constraint (<ref>) is constraint (<ref>), as mentioned in Section <ref>. For handling the nonconvex constraint (<ref>), we introduce an auxiliary variable ℰ̃ and employ the Schur complement to obtain the convex approximation of problem (<ref>) given by max_{𝐰_k, 𝐖_k, b_k, t_k}_k=1^K, λ   ∑_k ( log_2 (1+ b_k) - b_k + 2 t_k √(1+b_k) Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ ( 1/ϵ ∑_k=1^K tr( 𝐖_k ) + P_0 ) s.t.     [ ℱ(∑_k=1^K 𝐖_k) - ℰ̃ σ_s^2/2L |α|^2 √(M) a^H(θ) ∑_k=1^K 𝐖_k^T ȧ(θ); √(M) ȧ^H(θ) R_x^T a(θ) a^H(θ) R_x^T a(θ) ] ≽ 0 , ℰ̃ ≥ ℰ N ( 1/ϵ ∑_k=1^K tr( 𝐖_k ) + P_0 ), (<ref>), (<ref>), (<ref>). Problem (<ref>) is convex, and its optimum can be obtained by the interior-point method. Therefore, an efficient solution of problem (<ref>) can be obtained by solving a sequence of problems of the form (<ref>). Algorithm <ref> summarizes the iterative algorithm, where f̆_1(𝐰_k, 𝐖_k) = β/ℛ ∑_k=1^K ( log_2 (1+ b_k) - b_k + 2 t_k √(1+b_k) Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) + (1-β) ϕ̃/L 𝒞, f̆_2(𝐖_k) = λ ( 1/ϵ ∑_k=1^K tr( 𝐖_k ) + P_0 ). § NUMERICAL RESULTS In this section, we provide simulation results for the proposed energy-efficient waveform design. Numerical analysis is presented to evaluate the performance of the communication-centric EE (EE_C), the sensing-centric EE (EE_S), and their approximate Pareto boundary. Unless stated otherwise, we consider a dual-functional BS equipped with N = 20 receiving antennas, and the frame length is set to 30. The maximum transmission power P_max is set to 30 dBm with the power amplifier efficiency ϵ = 0.35. The circuit power consumption is set to P_0 = 33 dBm. For radar target estimation, the target angle is θ = 90^∘. §.§ EE_C Optimization We first examine the performance of Algorithm <ref> for maximizing EE_C considering the existence of a point-like target. The convergence behavior of Algorithm 1 is given in Fig. <ref>. Obviously, it enjoys a fast convergence rate, with the objective function value converging within 12 iterations on average. Furthermore, the convergence rate of Algorithm 1 is almost the same for different system parameters, e.g., different M and CRB constraints, which confirms the scalability of Algorithm 1. Fig. <ref> investigates the EE_C performance versus the root-CRB threshold for different M.
The EE_C increases with the increasing root-CRB threshold, indicating that EE_C can reach a higher level when the sensing performance requirement is less stringent. Indeed, increasing the number of antennas can improve EE_C, since more spatial degrees of freedom can be utilized for designing an efficient ISAC waveform. On the other hand, the baseline scheme only maximizes the communication sum rate under the same constraints as problem (<ref>). Obviously, the EE_C of the baseline scheme is unsatisfactory, since it only considers spectral efficiency maximization instead of EE_C maximization. In such a case, the baseline scheme encourages the ISAC BS to use as much power as possible to increase the communication sum rate. Fig. <ref> and Fig. <ref> plot the EE_C versus the increasing SINR constraint of the multiple users, γ_k, for the point-like target and the extended target, respectively. With increasing γ_k, EE_C first remains unchanged and then decreases due to the shrunken feasible region. Therefore, increasing the downlink communication rate does not necessarily improve EE_C. Furthermore, with the increasing root-CRB, the EE_C decreases, since more power is allocated to radar sensing due to the increasing sensing requirements. A similar trend can also be found in Fig. <ref> for the increasing CRB in the extended target case. §.§ EE_S Optimization In this subsection, we investigate the performance of EE_S optimization for both the point-like target and the extended target cases. In Fig. <ref>, we first consider the point-like target to show the EE_S versus the increasing power budget for different SINR levels. As expected, EE_S increases with the increasing P_T, since the additional power improves the estimation accuracy and thus increases EE_S. Besides, lowering the SINR requirement also improves EE_S, since relaxing the SINR constraint enlarges the feasible region. To demonstrate the performance gain obtained by the proposed Algorithm 3, we compare it with two baselines, namely BA_1 and BA_2. In particular, BA_1 aims to minimize the transmission power while BA_2 aims to maximize the communication sum rate under the same constraints as our proposed method (γ_k = 5 dB, the root-CRB threshold is set to 0.15 deg, P_max = 30 dBm). The results indicate that the EE_S of BA_1 is significantly lower due to the insufficient power for improving the CRB performance. Additionally, the EE_S of BA_2 is also inferior to that of the proposed method and exhibits a further decline as the transmission power increases, since most of the power is utilized for maximizing the sum rate instead of sensing the target. Fig. <ref> further demonstrates the EE_S versus the SINR requirement, where the root-CRB threshold is set to 0.15 deg. It can be observed that EE_S decreases with the increasing SINR requirement and the number of communication users, since the increasing communication requirements deteriorate the sensing performance. As for the scenario of sensing an extended target, Fig. <ref> shows the EE_S versus the communication SINR under different numbers of users and different CRB thresholds. It is worth noting that the performance metric for the extended target EE_S is different from that of the point-like target case. Similar to the scenario of sensing a point-like target, EE_S decreases with the increasing requirements of communication SINR, especially when the number of users is larger. Besides, increasing the CRB requirement improves EE_S, due to the improved estimation performance.
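The Pareto boundary examined in the next subsection can be traced by sweeping the minimum sensing-centric EE threshold ℰ and recording the achieved (EE_S, EE_C) pairs. A hypothetical driver loop is sketched below; solve_ee_c_subject_to_ee_s is a stand-in for the iterative algorithm of the previous section, not an actual implementation.

```python
# Hypothetical sketch of tracing the EE_C vs. EE_S boundary by sweeping the
# minimum sensing-centric EE threshold E. `solve_ee_c_subject_to_ee_s` is a
# placeholder for the constrained solver; it should return the achieved
# (EE_C, EE_S) pair, or None when the threshold is infeasible.
import numpy as np

def trace_pareto_boundary(solve_ee_c_subject_to_ee_s, e_min, e_max, num_points=20):
    boundary = []
    for E in np.linspace(e_min, e_max, num_points):
        result = solve_ee_c_subject_to_ee_s(E)   # maximize EE_C s.t. EE_S >= E
        if result is None:                       # threshold too stringent
            break
        ee_c, ee_s = result
        boundary.append((ee_s, ee_c))
    return boundary
```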
§.§ Approximate Pareto Boundary of Energy-Efficient ISAC. Fig. <ref> plots the approximate Pareto boundary of energy-efficient ISAC, which demonstrates the tradeoff between EE_C and EE_S. With a more stringent EE_S constraint, the EE_C decreases. In particular, when the required minimum sensing-centric EE threshold ℰ is small, strengthening the EE_S requirement only affects EE_C mildly. However, when the required EE_S goes beyond a certain threshold, tightening the EE_S constraint brings a sharp decline in EE_C. This phenomenon shows that there is a non-trivial tradeoff between EE_S and EE_C, which should be given serious consideration. Besides, we can find that the area spanned by the Pareto boundary is sensitive to the number of communication users, K, since serving more communication users consumes the available spatial degrees of freedom, which then cannot compensate for the performance loss caused by the increasingly stringent EE_S constraint. Therefore, it is more challenging to balance EE_S and EE_C for a large K. On the other hand, after the required EE_S surpasses some threshold, EE_C decreases sharply. This is because most of the available resources are allocated to satisfying the stringent EE_S constraint, such that the remaining resources are insufficient for guaranteeing the EE_C performance. § CONCLUSION In this paper, we addressed the problem of maximizing energy efficiency for MIMO ISAC systems. We first studied the communication-centric EE, adopting the conventional definition of EE, in both the point-like target and extended target cases. We reformulated the objective function using the quadratic-transform-Dinkelbach method and solved the sub-problem by leveraging the Schur complement and semi-relaxation techniques. In the second part, we introduced a novel performance metric for measuring the sensing-centric EE. To address this problem, we iteratively approximated the objective function as a convex program by exploiting SCA. Finally, we investigated the tradeoff between the two EE metrics and provided an effective solution. Numerical results showed an improvement over the benchmark on both communication-centric EE and sensing-centric EE performance, and we also demonstrated the tradeoff between the communication-centric and sensing-centric EE. § APPENDIX A First, we consider the matrix inequality 𝐖_k ≽ 𝐰_k 𝐰_k^H, which satisfies one of the following two cases: Case I: 𝐖_k ≻ 𝐰_k 𝐰_k^H. Then, we have tr(𝐖_k) > tr(𝐰_k 𝐰^H_k). Case II: 𝐖_k = 𝐰_k 𝐰_k^H. In this case, we have tr(𝐖_k) = tr(𝐰_k 𝐰^H_k). By combining 𝐖_k ≽ 𝐰_k 𝐰_k^H with the additional constraint tr(𝐖_k) ≤ tr(𝐰_k 𝐰^H_k), we can guarantee that Case II always holds. We remark that tr(𝐰_k 𝐰_k^H) = tr(𝐰^H_k 𝐰_k) = 𝐰^H_k 𝐰_k. Further applying the Schur complement, 𝐖_k = 𝐰_k 𝐰_k^H can be equivalently transformed into the following constraints, given as [ 𝐖_k 𝐰_k; 𝐰_k^H 1 ] ≽ 0 , ∀ k, tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤ 0, ∀ k, which completes the proof. § APPENDIX B For K = 1, we can derive that 𝐡_k^H 𝐖̂^∗ 𝐡_k = 𝐡_k^H 𝐖^∗ 𝐡_k. Hence, the received SNR and the transmission rate at the user do not decrease. Besides, we have 𝐖^∗ - 𝐖̂^∗ = ( 𝐖^∗ )^1/2 ( 𝐈 - (𝐖^∗)^1/2 𝐡_k 𝐡_k^H (𝐖^∗)^1/2/𝐡_k^H 𝐖^∗ 𝐡_k ) ( 𝐖^∗ )^1/2 ≽ 0, indicating that the power constraint is satisfied since 𝐖^∗ ≽ 𝐖̂^∗. Additionally, replacing 𝐖^∗ by 𝐖̂^∗ would neither decrease the transmission rate nor increase the total power, showing that 𝐖̂^∗ is optimal for the objective function. Then, we discuss the case of K > 1.
We introduce r = 𝐡_k^H ( 𝐖_k + ∑_i = 1, i ≠ k^K 𝐖_i + 𝐑_𝐖̃ + σ_C^2 ) 𝐡_k - 1 and equivalently reformulate (<ref>) as max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃, λ   ∑_k=1^K log( 1+r ) - λ ( 1/ϵ tr( ∑_k=1^K 𝐖_k + 𝐑_𝐖̃ ) + P_0 ) + ∑_k=1^K ( log b_k - b_k ( ∑_i = 1, i ≠ k^K 𝐡_k^H 𝐖_i 𝐡_k + 𝐡_k^H 𝐑_𝐖̃ 𝐡_k + σ_C^2 ) ) s.t.  r = 𝐡_k^H ( 𝐖_k + ∑_i = 1, i ≠ k^K 𝐖_i + 𝐑_𝐖̃ + σ_C^2 ) 𝐡_k - 1 , (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). We note that, with λ fixed, problem (<ref>) is jointly convex in the variables {𝐖_k, b_k}_k=1^K and 𝐑_𝐖̃. Thus, it can be shown that Slater's condition holds, and hence strong duality holds. By introducing the Lagrange multipliers ϖ_k,1 ≤ 0, ϖ_k,2 ≤ 0, μ ≤ 0, and Ψ_k ≽ 0, we write the Lagrangian function with respect to 𝐖_k as ℒ(𝐖_k) = - ϖ_k,1 𝐡_k^H 𝐖_k 𝐡_k + ∑_i = 1, i ≠ k^K ϖ_i,1 𝐡_i^H 𝐖_k 𝐡_i + ϖ_k,2 𝐡_k^H 𝐖_k 𝐡_k - ∑_i = 1, i ≠ k^K ϖ_i,2 γ_k 𝐡_i^H 𝐖_k 𝐡_i - tr(𝐖_k Ψ_k) + μ tr(𝐖_k) + ξ , where ξ represents the terms that do not involve 𝐖_k. Then, the KKT conditions of (<ref>) are given as ℒ̇(𝐖^∗_k) = 0 , 𝐖^∗_k Ψ^∗_k = 0. Then, we have Ψ^∗_k = 𝐀_k^∗ - ϖ_k,1 𝐡_k 𝐡_k^H and 𝐀_k^∗ = ∑_i = 1, i ≠ k^K ϖ_i,1 𝐡_i 𝐡_i^H + ϖ_k,2 𝐡_k 𝐡_k^H - ∑_i = 1, i ≠ k^K ϖ_i,2 γ_k 𝐡_i 𝐡_i^H + μ 𝐈_M. Next, we discuss the rank of 𝐀_k^∗ under the following cases. 1) Case I: rank( 𝐀_k^∗ ) = M. In this case, we have rank( Ψ^∗_k ) ≥ M-1 by the inequality rank( 𝐗 + 𝐘 ) ≥ rank( 𝐗 ) - rank( 𝐘 ) <cit.>. For rank(Ψ^∗_k) = M, the first condition in (<ref>) implies 𝐖^∗_k = 0. For rank(Ψ^∗_k) = M - 1, we have rank( 𝐖^∗_k ) = 1. 2) Case II: rank( 𝐀_k^∗ ) = r_a < M. In this case, we exploit <cit.> to construct a rank-1 solution 𝐖^∗_k. We use {𝐪_k,i^∗}_i=1^M-r_a to denote the columns of an orthonormal basis of Ω_k^∗, which represents the null space of 𝐀_k^∗. As Ψ^∗_k ≽ 0, we have (𝐪_k,i^∗)^H Ψ^∗_k 𝐪_k,i^∗ = - ϖ_k,1 |𝐡_k^H 𝐪_k,i^∗|^2 ≥ 0. Since (<ref>) should be active at the optimum, indicating ϖ_k,1 ≥ 0, we have 𝐡_k^H 𝐪_k,i^∗ = 0 and Ψ^∗_k Ω_k^∗ = 0. Thus, M - r_a dimensions of the null space of Ψ^∗_k can be represented by Ω_k^∗. Further denoting the null space of Ψ^∗_k by Ω̃_k^∗, we have rank(Ω̃_k^∗) ≥ M - r_a. Additionally, since rank( 𝐀_k^∗ ) = r_a, we have rank( Ψ^∗_k ) ≥ r_a - 1, which shows that rank(Ω̃_k^∗) ≤ M - r_a + 1. Then, it can be readily noted that rank(Ω̃_k^∗) = M - r_a or rank(Ω̃_k^∗) = M - r_a + 1. When rank(Ω̃_k^∗) = M - r_a, we have 𝐖^∗_k = ∑_i=1^M-r_a λ_k,i^∗ 𝐪_k,i^∗ (𝐪_k,i^∗)^H with λ_k,i^∗ ≥ 0. In such a case, 𝐡_k^H 𝐖_k^∗ 𝐡_k = 0, which contradicts the optimality. Hence, we conclude that rank(Ω̃_k^∗) = M - r_a + 1. Writing Ω̃_k^∗ as [Ω_k^∗, 𝐩_k^∗], the optimal solution 𝐖^∗_k can be given as 𝐖^∗_k = ∑_i=1^M-r_a λ_k,i^∗ 𝐪_k,i^∗ (𝐪_k,i^∗)^H + λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H with λ̃^∗_k ≥ 0. Therefore, a rank-1 solution can be constructed as 𝐖̂_k^∗ = 𝐖^∗_k - ∑_i=1^M-r_a λ_k,i^∗ 𝐪_k,i^∗ (𝐪_k,i^∗)^H = λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H , 𝐑̂^∗_𝐖̃ = 𝐑^∗_𝐖̃ + ∑_i=1^M-r_a λ_k,i^∗ 𝐪_k,i^∗ (𝐪_k,i^∗)^H. In the following, we show that the reconstructed solution {𝐖̂_k^∗, 𝐑̂^∗_𝐖̃} satisfies the constraints. Firstly, we have 𝐡_k^H 𝐖_k^∗ 𝐡_k = 𝐡_k^H 𝐖̂_k^∗ 𝐡_k, 𝐡_k^H (∑_i = 1, i ≠ k^K 𝐖^∗_i + 𝐑^∗_𝐖̃) 𝐡_k = 𝐡_k^H (∑_i = 1, i ≠ k^K 𝐖̂^∗_i + 𝐑̂^∗_𝐖̃) 𝐡_k. Therefore, the right-hand-side term in (<ref>) and the left-hand-side term in (<ref>) remain unchanged. Besides, it can be readily verified that constraints (<ref>) and (<ref>) hold, since 𝐖_k^∗ + 𝐑^∗_𝐖̃ = 𝐖̂^∗_k + 𝐑̂^∗_𝐖̃, which completes the proof.
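To complement the K = 1 argument above, the small numpy check below (illustrative, with randomly generated data) constructs 𝐖̂^∗ = 𝐖^∗𝐡𝐡^H𝐖^∗ / (𝐡^H𝐖^∗𝐡) from a generic PSD 𝐖^∗ and verifies that it is rank-1, preserves 𝐡^H𝐖𝐡, and satisfies 𝐖^∗ - 𝐖̂^∗ ≽ 0.

```python
# Numerical check of the rank-1 reconstruction used for K = 1:
#   W_hat = W h h^H W / (h^H W h)
# It should be rank-1, keep h^H W h unchanged, and satisfy W - W_hat >= 0.
import numpy as np

rng = np.random.default_rng(1)
M = 6
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
W = A @ A.conj().T                      # a generic PSD "optimal" covariance
h = rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))

W_hat = (W @ h @ h.conj().T @ W) / (h.conj().T @ W @ h).item()

assert np.linalg.matrix_rank(W_hat, tol=1e-9) == 1
assert np.isclose((h.conj().T @ W_hat @ h).item(), (h.conj().T @ W @ h).item())
assert np.linalg.eigvalsh(W - W_hat).min() > -1e-9   # W - W_hat is PSD
print("rank-1 reconstruction verified; tr(W_hat) <= tr(W):",
      np.real(np.trace(W_hat)) <= np.real(np.trace(W)) + 1e-9)
```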
http://arxiv.org/abs/2307.05228v1
20230711124855
Attribute Controlled Dialogue Prompting
[ "Runcheng Liu", "Ahmad Rashid", "Ivan Kobyzev", "Mehdi Rezagholizadeh", "Pascal Poupart" ]
cs.CL
[ "cs.CL", "cs.LG" ]
Prompt-tuning has become an increasingly popular parameter-efficient method for adapting large pretrained language models to downstream tasks. However, both discrete prompting and continuous prompting assume fixed prompts for all data samples within a task, neglecting the fact that inputs vary greatly in some tasks such as open-domain dialogue generation. In this paper, we present a novel, instance-specific prompt-tuning algorithm for dialogue generation. Specifically, we generate prompts based on instance-level control code, rather than the conversation history, to explore their impact on controlled dialogue generation. Experiments on popular open-domain dialogue datasets, evaluated on both automated metrics and human evaluation, demonstrate that our method is superior to prompting baselines and comparable to fine-tuning with only 5%-6% of total parameters. § INTRODUCTION Fine-tuning has been frequently used when deploying generative pretrained language models (PLMs) to downstream tasks since the advent of GPT <cit.> and BERT <cit.>. However, this requires storing a full copy of parameter states for every downstream task, which is memory-consuming and expensive to serve when working with large-scale models with billions of parameters like GPT-3 <cit.>. In this work, we design a lightweight prompting module for adapting pretrained language models for attribute controlled dialogue generation. More precisely, for each attribute such as persona, intention, emotion etc. we only save an additional prompt module. Since the prompting module is a fraction of the size of the pretrained dialogue model, this allows many controlled dialogue systems to be stored on a device without too much overhead. We present results on both intent and persona controlled dialogue. § RELATED WORK GPT-3 <cit.> introduces prompting, a method to steer a frozen PLM by transforming inputs into cloze-style phrases with task description and some task examples. Though it is memory-efficient since one single copy of the PLM can be shared across different tasks, the model's performance is largely restricted by the maximum conditional input length, the model size and manual guesswork for prompts <cit.>. Other works focus on automatically searching for better discrete prompts <cit.>. Recently, there has been an increased interest in continuous prompts / prompt-tuning, which bridges the gap between prompting and fine-tuning, while remaining efficient during training <cit.>. Continuous prompts extend prompt selection to the entire space of embeddings, including vector embeddings that do not correspond to any human-interpretable natural language tokens. Hence, soft prompts are more expressive than discrete prompts. However, both deep prompts and shallow prompts assume a static prompt / task-level prompt for all samples within a task, neglecting the fact that samples might vary greatly, especially in the field of conversation generation. There are recent papers exploring possible instance-specific prompts. For instance, Control-prefixes <cit.> generates attribute-level prompts for input labels, but its expressiveness is limited to four labels.
IPL <cit.> includes a look-up module to reweight prompt tokens before passing the updated embedding-only prompt into the transformer, but IPL updates all model parameters, which loses the efficiency benefits of prompting. IDPG <cit.> consumes inputs in a two-layer perceptron module to generate instance-dependent prompts in classification tasks rather than generation tasks. In addition, <cit.> proposes DialogPrompt which performs instance-specific prompting for dialogue generation by conditioning the prompt on the entire dialogue history. However, their prompting module consists of GPT-2, which is a full-fledged language model, and the approach is as costly as storing an entire fine-tuned base model. Recent works Contrastive prefixes <cit.> and Tailor <cit.> both propose attribute-based prompts, instead of instance-specific, to include either single-attribute or multi-attribute prompts into controlled text generation tasks, which reveal the powerful potential of controllability of continuous prompts. In contrast to previous work, we propose Controlled DialogPrompt for applying prompt-tuning in controlled dialogue generation, which optimizes prompts based on provided control codes rather than the previous conversation history and we further explore the controllability of prompts at the instance level. The size of the prompt encoder is strictly limited and we freeze the pretrained transformer during training in order to preserve memory efficiency. In addition, we would like to highlight that our work focuses more on open-ended text generation rather than natural language understanding, such as entailment, paraphrase detection, extractive QA, as seen in other parameter-efficient fine-tuning methods <cit.>. We posit that generating high-quality text is a more challenging task that requires a more nuanced approach to prompt tuning. § CONTROLLED DIALOGPROMPT In this section, we present Controlled DialogPrompt (Controlled DP) for dialogue generation, which is expected to provide attribute information such as the dialogue intention or the user’s persona within the prompt and steer the pretrained model efficiently. Soft Prompt-tuning <cit.> learns soft tokens for different tasks and then prepends them to the conversation context as well as control attributes. This approach yields a static shallow prompt since the soft tokens are static (i.e., fixed for a task) and shallow (only added as an input to the language model). In contrast, Prefix-tuning proposes a more effective technique that adds soft tokens in the form of key-value pairs at every attention block of the transformer <cit.>. This allows the soft tokens to influence each stage of the language model and therefore it is referred to as a static deep prompt. Figure <ref>(bottom right) shows our proposed controlled dialogue prompt (Deep version). Instead of training static soft tokens for the dialogue task, we train a lightweight prompt module that takes as input a control attribute, either an intention label or persona sentences, and outputs key-value pairs that are prepended to each layer of the language model. Since the soft token embeddings change depending on the control attribute, this corresponds to an instance-specific prompt. For the shallow prompt (Figure <ref> bottom left), we follow Soft Prompt-tuning which adds an additional trainable embedding layer to encode the attribute. 
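As a concrete, hypothetical illustration of the interface just described, the sketch below maps an encoded control attribute to per-layer key/value prefixes that a frozen decoder can consume; all dimensions are illustrative placeholders rather than the exact configuration used in the paper.

```python
# Hypothetical sketch of a controlled prompt module: it consumes an embedded
# control attribute (label or persona tokens) and emits key/value prefixes for
# every layer of a frozen transformer. Sizes are illustrative placeholders.
import torch
import torch.nn as nn

class ControlledPromptEncoder(nn.Module):
    def __init__(self, attr_dim=768, hidden=512, n_layers=36, n_heads=20, head_dim=64):
        super().__init__()
        self.n_layers, self.n_heads, self.head_dim = n_layers, n_heads, head_dim
        # one key and one value vector per frozen layer, per attribute token
        self.net = nn.Sequential(
            nn.Linear(attr_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_layers * 2 * n_heads * head_dim),
        )

    def forward(self, attr_emb):                    # (batch, prompt_len, attr_dim)
        b, p, _ = attr_emb.shape
        kv = self.net(attr_emb)                     # (b, p, n_layers*2*heads*dim)
        kv = kv.view(b, p, self.n_layers, 2, self.n_heads, self.head_dim)
        kv = kv.permute(2, 3, 0, 4, 1, 5)           # (layers, 2, b, heads, p, dim)
        # one (key, value) pair per frozen transformer layer
        return tuple((kv[l, 0], kv[l, 1]) for l in range(self.n_layers))
```

Such prefixes could then be consumed by a frozen DialoGPT-style decoder through its `past_key_values` argument (with the attention mask extended accordingly), so that only the prompt encoder receives gradient updates.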
For the deep prompt module, we consider two architectures: i) a simple multilayer perceptron (two fully connected layers of size 512 with tanh activation) applied to each token of the control attribute, and ii) a two-layer transformer decoder with embedding size of 256. The embedding size of each architecture was chosen to yield roughly the same number of parameters. This number of parameters is about 5%-6% of the number of parameters of the language model. For a given domain, training the prompt module is done as follows. An intention label or persona sentences are fed to the prompting module, which outputs key-value pairs added at each layer of the frozen pretrained dialogue system. Gradients to maximize the likelihood of response tokens are back-propagated through the dialogue system and prompting module, but only the weights of the prompting module are updated. § EXPERIMENTS §.§ Datasets and baseline models We evaluate the proposed method on two publicly available datasets: Dailydialog <cit.> for label control and FoCus <cit.> for document control. Dailydialog <cit.> is a widely used daily conversation dataset that provides a dialogue act for every sentence that indicates the communication function of each utterance. There are 4 types of dialogue acts in total. FoCus<cit.> is a new persona-grounded dataset that aims to provide informative answers based on the user’s persona about the geographical landmark. We provide the detailed dataset setups in Appendix <ref>. To demonstrate better performance of Controlled DialogPrompt, we compare our model with other competitive prompt-tuning techniques. The backbone model is DialoGPT-Large <cit.>. Details are provided in Appendix <ref>. §.§ Evaluation Methods We use both automatic evaluation metrics and human evaluation to measure the performance. Automated metrics For controllability, we follow <cit.> to evaluate whether models can customize responses based on specified control attributes. Details about controllability measures are provided in Appendix <ref> Regarding response quality, we use n-gram based metrics such as BLEU (B-2, B-4) <cit.>, NIST (N-2, N-4) <cit.>, ROUGE-L <cit.>, METEOR <cit.> to evaluate fluency and adequacy and distinct n-gram distribution metrics such as Dist (D-1, D-2) <cit.> and Entropy (E-4) <cit.> to measure the diversity of the response. Human Evaluation Human evaluation on the other hand is used to measure consistency between dialogue context and response and attribute controllability. We adopt single-turn pairwise evaluations to prevent annotator bias in numerical score evaluation. Details on question settings and annotators are provided in Appendix <ref> § RESULT AND ANALYSIS §.§ DialogAct / Intention Table <ref> summarizes the automatic evaluation results on the DialogAct label control task. Compared to static task prompts, instance-level controlled prompts achieve better performance consistently on both deep and shallow prompt levels. Since the controlled attribute is injected independently through the prompts, it does not affect the understanding and generation ability of the pretrained transformer. Both Controlled DP deep methods show higher controllability and response quality than Controlled DP embedding, in line with <cit.> indicating the expressiveness of deep prompts. Also, Controlled DP deep methods show performance close to fine-tuning and even outperform on some metrics such as NIST. 
This is because NIST is weighted BLEU with higher weights on rarer words; fine-tuning tends to generate from a more limited vocabulary, whereas Controlled DialogPrompt sometimes generates less frequent words and can attain a better NIST score. Human evaluation (Table <ref>) also shows that Controlled DP deep has a significantly higher winning rate than other prompting techniques on both control attribute relevancy and conversation consistency. §.§ User's Persona Table <ref> shows that our model displays advantages over other prompting methods in terms of response quality, which is a promising sign that Controlled DP can be adapted to more challenging document control scenarios. Note that the difference in BLEU-2 is more pronounced for Focus compared to DailyDialog, as Focus is more complicated and uses sentences as the attribute rather than labels. Although Controlled DP methods score slightly lower than Prefix-tuning on the similarity with the given user's persona and on Entropy-4, we find them to be highly consistent with the previous conversation history upon human evaluation (Table <ref>). Similar results are observed with FoCus <cit.> where models with high generation abilities do not always ensure high grounding abilities. In addition, the difference between static/instance-specific deep prompts and static/instance-specific shallow prompts emphasizes the direct impact of deep prompts in complex tasks. Fine-tuning performs the best, but with approximately 20X more tunable parameters. § CONCLUSION AND FUTURE WORK In summary, we presented a novel prompting technique, conditioned on a dialogue attribute (persona or intent), for controlled dialogue generation. The prompting module requires only 5%-6% of the total number of parameters, which allows the storage of several fine-tuned prompting modules for different dialogue generation tasks at a fraction of the cost of a full dialogue model. However, Controlled DialogPrompt currently conditions on simple control attribute sentences such as the user's persona; the work can be extended to longer and more complex inputs such as background knowledge documents to further evaluate the controlled prompt's encoding capabilities. Additionally, combining multiple Controlled DialogPrompts on several control attributes and automatically triggering various dialogue skills is an interesting and unexplored direction. § LIMITATIONS In our current experiments, prompt-based methods are primarily storage-efficient or parameter-efficient solutions. Since these methods all require backpropagation to the bottom layer, the training time of prompt-based methods closely resembles that of the traditional fine-tuning approach. § ACKNOWLEDGEMENTS This research was funded by Huawei Canada and the Natural Sciences and Engineering Research Council of Canada. Resources used in preparing this research at the University of Waterloo were provided by the province of Ontario and the government of Canada through CIFAR and companies sponsoring the Vector Institute. § EXPERIMENTAL SETUPS §.§ Datasets §.§.§ Label control Dailydialog <cit.> is a widely used daily conversation dataset that provides a dialogue act for every sentence. Dialogue acts indicate the communication function of each utterance and there are 4 types of dialogue acts: inform, questions, directives, and commissives.
We follow the standard split of the original Dailydialog dataset, limit the conversation context to a maximum of four sentences, and remove any sentence that has more than 25 words to maintain computation efficiency. As a result, we obtain 61,669 training samples, 5769 validation samples, and 5453 testing samples. We additionally use the Dailydialog multi-reference dataset from <cit.> during metrics computation to mitigate the one-to-many possible response problem. §.§.§ Document control FoCus<cit.> is a persona-grounded dataset. Unlike DailyDialog, FoCus aims to build a dialogue agent that provides informative answers based on the user’s persona about the geographical landmark; therefore, it is more content-rich and challenging. The selected knowledge candidate sentence is prepended to the conversation and regarded as part of the input. The input to the base model has the template: "Knowledge: [Selected knowledge sentence] Conversation: [Previous utterances]”. The persona sentences are given as the input to the prompt encoder. In fine-tuning (no prompt encoder) and static prompt methods (the prompt encoder does not take attribute information), the persona sentences are concatenated together with the knowledge and previous utterances and form the input to base model as “Knowledge: [Selected knowledge sentence] Persona: [User’s Personas] Conversation: [Previous utterances]” Since the grounded answer of the test set has not been released, we shuffle and split the original training set to construct our training samples and validation samples (70% training and 30% validation) and the original validation set as our testing samples. We further restrict conversation context to at most three sentences because the bot’s utterances are much longer than human’s utterances. In total, we have 49,198 samples for training, 21,134 samples for validation, and 5,639 samples for testing. §.§ Baseline models To demonstrate better performance of Controlled DialogPrompt, we compare our model with other competitive prompt-tuning techniques. * Pretrained DialoGPT <cit.>: DialoGPT-large has shown its superiority for a wide range of open-domain dialogue generation tasks by pretraining on a massive corpus. * Fine-tuning: Fine-tuning, though memory-consuming, is the most straightforward and prevalent adaptation technique to downstream tasks. Fine-tuning has been considered as the benchmark for all light-weight fine-tuning methods including prompt-tuning. * Soft Prompt-tuning (static shallow prompt) <cit.>: The method applies a static task prompt to the embedding of every input. We experiment with different lengths (length 10 and length 50) of the static shallow prompt and use the better length 50. * Prefix-tuning (static deep prompt) <cit.>: Prefix prompts are added to every layer during computation. We experiment with different lengths (length 10 and length 50) and we report the better prompt result with length 10. * Controlled DP - Embedding (instance-specific shallow prompt): The shallow version of our method with controlled prompts added only in the embedding layer. It is used to demonstrate the expressiveness of the deep Controlled DialogPrompt. * Controlled DP - MLP / 2-layer Transformer (instance-specific deep prompt): We explore different prompt encoder structures, among which MLP prompt encoder shares the frozen pretrained transformer embedding layer to reduce tunable parameters. During our experiments, we utilize DialoGPT-large as the frozen backbone model and train all models on two Nvidia V100 32G GPUs. 
We train models for 10 epochs with training batch size 2 per GPU and learning rate of 1e-4 except for fine-tuning, which is set to 5e-5 in the FoCus dataset and 1e-5 in the Dailydialog dataset. Models that achieve the lowest validation losses are saved during the training. We perform optimization with the AdamW optimizer with maximum gradient clipping set to 1. For decoding, we choose top-k sampling provided in Huggingface where k=10 and temperature T=0.9. The result is generated with random seed=42. § EVALUATION METHODS §.§ Automated metrics For controllability, we follow <cit.> to evaluate whether models can customize responses based on specified control attributes. (1) For label control, we fine tune an independent BERT classifier <cit.> which can take a sentence and predict its dialogue intention. We train the classifier on the same training set and achieve 83.23% accuracy on the test set. (2) For document control, we also compute the cosine similarity between the Glove embedding of the generated responses and grounded persona documents. As FoCus dataset contains human-annotated labels for used persona sentences, only those that are actually used are evaluated. Detailed training information is provided in <cit.>. Regarding response quality, we utilize different variants of n-gram based metrics such as BLEU (B-2, B-4) <cit.>, NIST (N-2, N-4) <cit.>, ROUGE-L <cit.>, METEOR <cit.> to evaluate fluency and adequacy and distinct n-gram distribution metrics such as Dist (D-1, D-2) <cit.> and Entropy (E-4) <cit.> to measure the diversity of the response. We follow the metrics setting in <cit.>. §.§ Human Evaluation Human evaluation on the other hand is used to measure consistency between dialogue context and response and attribute controllability. Similar to ACUTE-Eval in <cit.>, we adopt single-turn pairwise evaluations to prevent annotator bias in numerical score evaluation. We compare Controlled DialogPrompt with every other prompt-tuning methods, covering static shallow prompt, static deep prompt and instance-specific shallow prompt. In each comparison group, there are two questions designed separately to assess response’s dialogact/personality controllability as well as consistency to the previous conversation context. For dialogact controllability, we have the question: Which response do you think is more related to the given dialog act (intention)?. For personality controllability, we set the question as Which response do you think is more related to the personality?. For the consistency to the previous conversation context, we set the question as Which response do you think is more consistent to the above conversation context? We sample 15 conversations from each comparison group and there are 5 conversations overlapped across different groups. Annotators are industrial NLP researchers and NLP graduate students. We collected 900 annotations in total.
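For reference, the decoding configuration above (top-k sampling with k=10 and temperature T=0.9 in HuggingFace) corresponds to a standard sampled generation call. A minimal sketch with a plain DialoGPT-large checkpoint is shown below; the controlled prompt module and the conversation templates are omitted for brevity, and the example history is made up.

```python
# Sketch of the decoding setup described above (top-k sampling, k=10, T=0.9),
# shown for a plain DialoGPT-large checkpoint without the prompt module.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(42)
tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large").eval()

history = "Do you have any plans for the weekend?" + tok.eos_token
inputs = tok(history, return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        do_sample=True, top_k=10, temperature=0.9,
        max_new_tokens=40, pad_token_id=tok.eos_token_id,
    )
print(tok.decode(out[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```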
http://arxiv.org/abs/2307.04760v1
20230710175817
Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos
[ "Sagnik Majumder", "Ziad Al-Halah", "Kristen Grauman" ]
cs.CV
[ "cs.CV", "cs.SD", "eess.AS" ]
We propose a self-supervised method for learning representations based on spatial audio-visual correspondences in egocentric videos. In particular, our method leverages a masked auto-encoding framework to synthesize masked binaural audio through the synergy of audio and vision, thereby learning useful spatial relationships between the two modalities. We use our pretrained features to tackle two downstream video tasks requiring spatial understanding in social scenarios: active speaker detection and spatial audio denoising. We show through extensive experiments that our features are generic enough to improve over multiple state-of-the-art baselines on two public challenging egocentric video datasets, EgoCom and EasyCom. Project: <http://vision.cs.utexas.edu/projects/ego_av_corr>. § INTRODUCTION Egocentric videos provide a first-person view of how we perceive and interact with our surroundings in our daily lives, and they are pushing a new frontier in multi-modal learning research <cit.>. A key aspect of ego-video is that it can provide a rich stream of first-person spatial (multi-channel) audio alongside the visual frames, when the audio is captured with multiple microphones <cit.>. The coupling of such visual and spatial audio provides strong spatial information about the sound sources (where the sound sources are, if they are in motion or not) in the context of the surrounding physical space (how big or small the room is, if there is a large wall nearby), as well as the camera wearer's attention in the scene implicit in how they move their head. Such spatial cues are especially important for social settings of multiple people talking to each other, where it is valuable to be able to focus on the voice(s) of interest from among various competing sounds and understand where people are directing their attention and speech activity, for better comprehension and communication. In this way, future AR applications in conversational settings could allow a hearing-impaired person to determine who is speaking in order to redirect their attention, or enhance the received audio to make it more intelligible for any listener. We argue that this creates the need for human-centric spatially-grounded understanding of audio-visual events—to learn representations from video that capture audio-visual events in the context of the persistent physical space of the environment and the human speakers in it. Such representations are useful for answering questions like “who is speaking right now?" and “what would the voices sound like without the audio noise?". Whereas the former requires inferring the source location for a voice in the scene, the latter requires understanding how the perceived audio is a function of the source locations, the listener, and the surrounding environment. Despite being significant, the problem of human-centric spatially-grounded understanding of audio-visual events is underexplored. Current audio-visual representation learning models exclusively tackle exocentric (third-person) video <cit.>, which lacks the AR relevance and sidesteps challenges inherent to ego-video arising from the camera wearer's head motion and relatively limited field of view. Limited prior work has explored self-supervised objectives using multi-channel audio and video <cit.>, but outside of the egocentric and social contexts.
We propose to learn audio-visual representations via spatial correspondence between an egocentric video and its multi-channel audio. In particular, we design a novel pretext task where the goal is to inpaint binaural (two-channel) audio using both video and audio. Given an egocentric video clip with binaural audio, we mask segments of it and train a model based on a new form of masked autoencoding (MAE) <cit.> to predict the missing segments on the basis of the video and the unmasked segments in the audio. See Fig. <ref> (top). Additionally, we introduce a spatial audio masking strategy that combines random masking of discrete audio segments in the two channels with masking a full channel. This, in essence, helps combines the benefits of two tasks: synthesis of novel binaural audio segments, and binauralization of a full monaural waveform. While the binauralization task is more challenging and enables learning stronger spatial correspondences between vision and audio, random masking of segments leads to better learning stability in cases where binauralization only using vision is intractable. Once trained, our model's encoder provides a spatial audio-visual feature that can be used to address multiple downstream tasks using multiple backbones and egocentric video datasets. Motivated by the AR applications discussed above, we validate our feature learning method on two downstream social egocentric tasks that require strong audio-visual spatial reasoning: 1) active speaker detection: predicting which person in the field of view of an egocentric video is speaking, and 2) spatial audio denoising: separating audio noise (any sounds from non-speakers) from the input audio. See Figure <ref>(bottom). We test the generality of our method by evaluating on two egocentric video datasets, EgoCom <cit.> and EasyCom <cit.>. On both, our method significantly outperforms multiple state-of-the-art task-specific and audio-visual spatial feature learning models. § RELATED WORK Audio-visual self-supervised pretraining Past work <cit.> extensively studies the synergy of vision and audio for learning representations through self-supervision. They explore using both modalities to construct pretext tasks based on synthesis <cit.>, alignment <cit.>, and masked auto-encoding (MAE)<cit.>, with downstream tasks focused on audio-visual event classification and retrieval. However, none of these methods are designed to extract spatial cues from video and multi-channel audio, nor do they analyze the social egocentric setting. On the contrary, we tackle the challenging problem of self-supervised learning spatial audio-visual features from egocentric videos. Further, different from the existing MAE-style models <cit.>, we propose a specialized masking strategy that better learns spatial audio-visual cues. Audio-visual spatial correspondence learning Learning the spatial alignment between video and audio is important for self-supervision <cit.>, spatial audio generation <cit.>, audio-visual embodied learning <cit.> and 3D scene mapping <cit.>. However, these methods are either restricted to exocentric settings <cit.>, or else tackle egocentric settings <cit.> in simulated 3D environments that lack realism and diversity, both in terms of the audio-visual content of the videos and the continuous camera motion due to the camera-wearer's physical movements. On the contrary, we learn an audio-visual representation from real-world egocentric video. 
More closely related to our work are Telling Left from Right <cit.> and 2.5D Visual Sounds <cit.>, both of which learn spatial audio-visual features for improving source separation and localization, albeit for exocentric data only. The former predicts whether the left and right binaural channels are swapped, which provides only coarse spatial information about the scene; the latter learns to “lift" the mono input to binaural audio, which can be underconstrained from the single-channel audio and video alone. We design a novel pretext task using audio-visual inpainting of multi-channel audio, which is both fine-grained (requiring to capture subtleties about the arrangement of speakers in the environment) and, through our novel masking strategy, exposes better multi-modal constraints for stable training. Our results show our model's advantages over both prior methods <cit.>. Active speaker detection Active speaker detection (ASD) entails predicting the active speaker(s) from among all detected faces in a video, and can be seen as a special case of generic 2D sound localization <cit.>. While early ASD methods rely on lip movements and facial gestures <cit.>, recent methods employ ensemble networks <cit.> or 3D CNNs <cit.>, relation context modules <cit.>, attention  <cit.>, or graph neural networks <cit.>. Multi-channel audio improves ASD in <cit.>, but does so requiring privileged information of speaker pose for training. Unlike all these methods, our goal is to learn spatial audio-visual features purely from in-the-wild egocentric methods through self-supervision—features generic enough to benefit multiple ASD models, as we demonstrate for both TalkNet <cit.> and SPELL <cit.>. Spatial audio denoising Audio denoising, which requires separating a target sound from noise, has traditionally been studied with single-channel (non-spatial) audio, both in the audio-only setting <cit.> and audio-visual settings <cit.>. Using spatial audio captured with multiple microphones <cit.> naturally makes the task simpler. Different from the above, we learn task-agnostic audio-visual spatial features. That is, our contribution is the feature learning idea (which benefits both denoising and ASD), rather than a novel denoising approach. § LEARNING SPATIAL FEATURES FROM EGOCENTRIC AUDIO-VISUAL CORRESPONDENCE The spatial sound perceived in an egocentric setting is shaped by environment in which it is emitted and the sound source location relative to the camera-wearer. Based on this knowledge, we hypothesize that trying to solve the pretext task of audio-visual inpainting of binaural audio—synthesis of missing segments in a spatial audio clip by extracting information about the scene and the source location from the coupling of vision and audio—can lead to learning useful audio-visual spatial correspondences. To validate our hypothesis, we propose a novel feature-learning task for egocentric videos: learning spatial features from audio-visual correspondence through binaural audio inpainting. Formally, we consider an egocentric video clip C = (V, A), where V and A refer to the visual and binaural audio streams, respectively. The visual clip V comprises T frames, such that V = {V_1, …, V_T}. We generate a set of visual tokens V̂ by splitting V into P tubelets, such that V̂ = {V̂_1, …, V̂_P}, where V̂_k denotes the k^th tubelet consisting of a contiguous sequence of non-overlapping 16 × 16 dimensional patches spanning all T frames. 
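A minimal sketch of this visual tokenization is given below (shapes are illustrative and may differ from the exact implementation): each tubelet gathers the co-located 16 × 16 patches from all T frames into a single token.

```python
# Illustrative tubelet tokenization: every token gathers the co-located
# 16x16 patch from all T frames, giving P = (H/16)*(W/16) visual tokens.
import torch

def video_to_tubelets(video, patch=16):
    """video: (B, T, C, H, W) -> tokens: (B, P, T*C*patch*patch)"""
    B, T, C, H, W = video.shape
    assert H % patch == 0 and W % patch == 0
    x = video.reshape(B, T, C, H // patch, patch, W // patch, patch)
    x = x.permute(0, 3, 5, 1, 2, 4, 6)            # (B, Hp, Wp, T, C, patch, patch)
    return x.reshape(B, (H // patch) * (W // patch), T * C * patch * patch)

tokens = video_to_tubelets(torch.randn(2, 8, 3, 160, 256))
print(tokens.shape)   # torch.Size([2, 160, 6144])
```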
We represent the binaural audio A as Mel-spectrograms <cit.>, such that A = {A^L, A^R}, where A^L and A^R are the spectrograms for the left and right channels, respectively. We create a set of audio tokens  by splitting A into Q non-overlapping patches of size 2 × 16, such that  = {Â_1, …, Â_Q }. Next, we mask a portion of the audio tokens in  and obtain complementary subsets of masked and unmasked tokens, Â^M and Â^U, respectively, where Â^M = {Ä_1, …, Ä_S}, Â^U = {A̅_1, …, A̅_Q-S}, and S is the number of masked tokens. Given {V̂, Â^M, Â^U}, we aim to learn a self-supervised model ℱ comprising an encoder ℰ and decoder 𝒟, such that ℱ = 𝒟∘ℰ and ℱ(V̂, Â^U) = Ã^M, where Ã^M is an estimate of the masked audio tokens in Â^M. By training on this pretext task, our encoder ℰ can learn rich audio-visual spatial correspondences that can be leveraged for multiple downstream tasks that require the synergy of vision and spatial audio, as we show in results. § APPROACH To solve our pretext task of binaural audio inpainting in egocentric videos, we propose an approach based on the masked autoencoding framework <cit.>, which has been shown to learn meaningful semantic features from audio-visual data <cit.>. Our model ℱ has 2 main components (see Fig. <ref>): 1) an audio-visual (AV) spatial correspondence encoder, ℰ, and 2) an audio-visual decoder for binaural audio inpainting, 𝒟. The encoder ℰ (Sec. <ref>) learns an implicit representation of the spatial relationships between the visual and unmasked binaural audio tokens, while the decoder D (Sec. <ref>) uses this implicit representation to synthesize the masked audio tokens. We also devise a simple yet novel masking protocol (Sec. <ref>) specifically for our inpainting task, which mixes masking random audio tokens with masking a full audio channel, and helps the model learn stronger audio-visual spatial associations, which facilitate multiple downstream tasks. We train ℱ with a training objective that aims to minimize the prediction error in the masked audio tokens. Next, we describe our model architecture, training objective, audio masking protocol, and downstream tasks. §.§ Audio-visual spatial correspondence encoder The audio-visual spatial correspondence encoder ℰ (Fig. <ref> left) extracts features from the visual and unmasked audio tokens {V̂, Â^U}. It begins by embedding the visual and audio tokens using separate transformer encoders <cit.> for individually capturing the spatio-temporal features in the two modalities. Next, it uses a shared transformer encoder <cit.> to jointly encode the audio and visual features, and produces a multi-modal representation suitable for binaural audio inpainting. Audio and visual encoders. We first encode the visual tokens V̂ using a linear layer to generate visual features v, such that v = {v_1, …, v_P}. We encode the audio tokens Â^U with another linear layer to produce audio features a, such that a = {a_1, …, a_Q-S}, where S is the number of masked tokens out of a total of Q audio tokens (cf. Sec. <ref>). For each visual feature v_j, we add a sinusoidal positional embedding p^V_j <cit.> to it, where p^V_j captures cues about the 3D position of the j^th tubelet in the visual clip V. For an audio feature a_i, we add a sinusoidal positional embedding p^A_i and a learnable channel embedding c ∈{c_L, c_R} to it to convey information about the 2D location of the i^th unmasked audio token in the spectrogram and also the audio channel to which it belongs. 
Next, we feed the transformed visual and audio features to separate transformer encoders, ℰ^V and ℰ^A, respectively, and obtain visual features e^V = {e^V_1, …, e^V_P } and audio features e^A = {e^A_1, …, e^A_Q-S}. Shared audio-visual encoder. Given the visual features e^V and audio features e^A, we concatenate them into e^AV, such that e^AV = { e^V_1, …, e^V_P, e^A_1, …, e^A_Q-S}, and re-add the sinusoidal positional embeddings p^V and p^A to the features of the respective modalities in e^AV. Furthermore, we add the channel embeddings c to the audio features, and a learnable modality embeddings m ∈{m_A, m_V} to all features in e^AV to help the model distinguish between the visual and audio modalities. Next, a shared audio-visual transformer ℰ^AV encoder takes e^AV as input and outputs audio-visual features f^AV, which implicitly holds spatio-temporal information required for accurate inpainting of audio. §.§ Audio-visual decoder for binaural audio inpainting Our audio-visual decoder 𝒟 takes f^AV as input and attempts to synthesize the masked binaural audio tokens by leveraging spatio-temporal cues in f^AV. It first projects f^AV to a lower-dimensional feature set g^AV. It then appends a learnable embedding for the masked audio tokens to g^AV and passes it through a shared audio-visual transformer decoder <cit.>. Next, it feeds the audio feature outputs of the shared decoder to another transformer decoder and uses its outputs to predict an estimate of the masked binaural audio tokens. The decoders are light-weight compared to the encoders, ensuring that the encoders are primarily responsible for driving the inpainting task and producing good audio-visual features for strong downstream performance. We next describe each component of 𝒟 in detail. Shared audio-visual decoder. We first create a lower-dimensional projection g^AV of the audio-visual encodings f^AV by passing it through a linear layer, and append a learnable embedding ϕ corresponding to each of the S masked audio tokens to g^AV. Next, we add the positional embeddings p^V and p^A, the audio channel embeddings c, and the modality embeddings m to g^AV, and feed it to a shallow transformer decoder 𝒟^AV that outputs an audio-visual feature set h^AV. We then take take the audio features h^A from h^AV and pass them to the audio decoder for further processing. Audio decoder. The audio decoder 𝒟^A re-adds the positional embeddings p^A and channel embeddings c to g^A, and feeds it to a transformer decoder, which outputs audio features d^A. Prediction of masked audio tokens. Finally, we take the subset d^A_M of all audio features d^A, which correspond to the masked audio tokens Â^M, upsample them by passing through a linear layer, and reshape them to obtain an estimate Ã^M of the masked tokens Â^M, such that Ã^M = {ã_1, …, ã_S }. §.§ Model training We train our model to minimize the error in prediction of the masked audio tokens. In particular, we compute the mean-squared error ℒ averaged over all masked audio tokens, such that ℒ = 1/S∑_i=1… S ||ä_i - ã_i ||^2_2. §.§ Audio masking for inpainting We design an audio masking protocol that is customized to help our model better extract spatial audio-visual cues during self-supervised pretraining. In particular, we mix the strategy of randomly masking a full audio channel with that of randomly masking audio tokens in the ratio r% : (100-r%) during training, where r represents the relative frequency with which we randomly drop an audio channel. 
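A hedged sketch of the batch-level masking choice described here is given below; the random-masking ratio is a placeholder, since only the channel-masking frequency r is specified in the text.

```python
# Illustrative batch-level masking: with probability r, mask one full audio
# channel; otherwise mask a random subset of audio tokens. `mask_ratio` for the
# random case is a placeholder, not a value taken from the paper.
import torch

def sample_audio_mask(batch, n_tokens_per_channel, r=0.2, mask_ratio=0.5):
    """Returns a boolean mask of shape (batch, 2, n_tokens_per_channel); True = masked."""
    mask = torch.zeros(batch, 2, n_tokens_per_channel, dtype=torch.bool)
    if torch.rand(()) < r:                       # channel masking (binauralization-style)
        ch = torch.randint(0, 2, (1,)).item()
        mask[:, ch, :] = True
    else:                                        # random token masking
        n_mask = int(mask_ratio * 2 * n_tokens_per_channel)
        flat = mask.view(batch, -1)
        for b in range(batch):
            idx = torch.randperm(2 * n_tokens_per_channel)[:n_mask]
            flat[b, idx] = True
    return mask
```

The training loss is then the mean-squared error of the decoder predictions restricted to the masked positions, as in the objective above.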
On the one hand, token masking could lead to tokens from the same location in the two audio channels being present among the unmasked tokens, thereby providing additional spatial cues to the model and resulting in a simpler optimization objective for the inpainting task. On the other hand, channel masking forces the model to solve a more challenging binauralization task solely on the basis of vision, which could help it learn even stronger spatial features. Towards achieving high performance on the downstream tasks, we aim to strike a fine balance between these two strategies. In our setup, we choose in favor of a particular strategy at the level of a training batch, and set the value of r using validation on the downstream tasks. When finetuning on downstream tasks, we randomly mask a channel. §.§ Downstream tasks requiring spatial audio-visual understanding We explore two downstream tasks with our pretrained features: active speaker detection and spatial audio denoising. Active speaker detection (ASD) involves matching an audio clip with an appropriate face track from the corresponding video clip. While current state-of-the-art methods <cit.> rely on semantic similarities between monaural audio and vision to solve this task, leveraging spatial audio can additionally reveal the sound source location in the video. As we will see, however, our learned representation improves this task even compared to simpler ways to use the binaural input. In spatial audio denoising, also studied with spatial audio-visual pretraining in <cit.>, the goal is to separate the target audio from distractors. In particular, we aim to remove the audio from sources extraneous to the conversation (off-video sounds from other parts of the scene). § EXPERIMENTS Datasets. We evaluate our model on two challenging egocentric video datasets that contain binaural audio: 1) EgoCom <cit.>, and 2) EasyCom <cit.>, detailed in Supp. While both datasets contain egocentric videos captured by people having conversations, EgoCom is more unconstrained than EasyCom. Whereas EasyCom primarily shows participants sitting around a table and talking, EgoCom has videos of participants moving around a room, turning their face and body, standing up, etc. These datasets test the robustness of our method in diverse scenarios of varying difficulty. Model architecture and training The uni-modal encoders, ℰ^A and ℰ^V, have 8 layers, while the audio-visual encoder ℰ^AV has 6 layers. All encoders have 12 attention heads and use 768-dimensional hidden embeddings. The audio-visual decoder 𝒟^AV and audio-only decoder 𝒟^A have 1 and 3 layers, respectively. Both decoders have 6 attention heads and use 384-dimensional hidden embeddings. To pretrain our model, we set the relative frequency of dropping an audio channel in our masking protocol for training to r=20 %. We train our model for 200 epochs using the AdamW <cit.> optimizer with a weight decay of 10^-5, and a learning rate scheduler that reaches a peak learning rate of 2 × 10^-4 over 10 warmup epochs, and then decays it through half-cycle cosine annealing <cit.>. For data agumentation, we perform random flipping of video clips and audio channels along their width. During ASD training, we finetune the pretrained features with a lower learning rate than the rest of the model. See Supp. for further details on datasets, architecture, and training. §.§ Active speaker detection First we evaluate our model on active speaker detection (ASD). Backbone models. 
We consider two state-of-the-art ASD models as the backbones for leveraging our pretrained representations: 1) TalkNet <cit.>, and 2) SPELL <cit.>. TalkNet is an attention-based model that first encodes the face track and the audio clip using temporal encoders into feature sequences of the same length as the input clip. Next, it performs self- and cross-attention on the feature sequences to capture intra- and inter-modal semantic and temporal patterns. Finally, it fuses the two feature streams frame by frame, and uses a binary classifier to predict if the face in the track is active or not. SPELL first extracts audio-visual features for each face in a clip using a two-stream ResNet <cit.> encoder. It then treats these features as nodes in a graph and uses a graph neural network to learn both long- and short-term bidirectional semantic relationships. Finally, it does binary classification of every graph node to predict if its associated face is active or not. Pretrained feature fusion. To fuse our pretrained features with the ASD backbones, we first use a single-layer transformer decoder <cit.>. The decoder takes the feature outputs of our shared transformer encoder ℰ^AV as the keys and values, and a sinusoidal embedding sequence as queries, where each embedding denotes the index of a frame in the clip, and outputs an audio-visual feature sequence of the same length as the clip. Each output feature acts as a spatially aggregated representation of the features for the individual tokens from the corresponding frame, and implicitly holds rich information about the audio source location in the scene. Finally, we append these features to the cross-attention outputs in TalkNet, or the two-stream audio-visual encoder outputs in SPELL, on a per-frame basis. In essence, while the original audio-visual encoders leverage semantic correlations between vision and audio, our features can provide strong complementary spatial cues for better performance. Baselines. For both TalkNet and SPELL, we compare against multiple baselines comprising both the unmodified backbone and improved versions of it, in addition to some naive methods: * All-active: a naive model that predicts that all visible speakers are always active * All-inactive: a naive model that predicts that all visible faces are always inactive * Random: a naive model that emits a random ASD confidence score for every visible speaker * Backbone w/o audio: a vision-only version of the backbone with no audio input * Backbone: the originally-proposed backbone that processes only faces and monaural audio * Backbone-binaural: an improvement over the backbone, where we use binaural audio instead of monaural, alongside positional encodings for the faces, indicative of their relative position and depth, for better matching the face to the audio * Backbone-binaural w/ scene video: a further improvement over the backbone, where we additionally provide the scene images (uncropped video frames) to the backbone-binaural model * Backbone w/ TLR <cit.> features: we fuse features from the SOTA Telling Left from Right (TLR) model <cit.>, which learns audio-visual spatial correspondences by predicting the spatial alignment between vision and binaural audio. * Backbone w/ 2.5D-VS <cit.> features: we fuse features from the SOTA audio-visual binauralization model, 2.5D Visual Sounds (2.5D-VS) <cit.>. For both TLR <cit.> and 2.5D-VS <cit.>, we use a feature fusion method like ours to fuse their pretrained features with the backbone. We use the standard mean average precision (mAP) metric. Results.
Table <ref> (top) reports our ASD results on both the val and test splits. The three naive baselines achieve very low ASD performance on both EgoCom <cit.> and EasyCom <cit.>, emphasizing the difficulty of the task. For both TalkNet <cit.> and SPELL <cit.>, the unchanged backbone model generally performs better than the model without audio, showing that both vision and audio are required. Upgrading from monaural to binaural audio further boosts performance, as the model can now leverage both spatial and semantic information. Additionally using scene features lets the backbone explicitly match the scene area around the inferred source location with the face, and further improves ASD, especially for EgoCom, where the background scene changes more often. TLR <cit.> and 2.5D-VS <cit.> improve the original models on EasyCom, but fare worse on the more challenging EgoCom, demonstrating the limitations of their pretrained features. Furthermore, 2.5D-VS outperforms TLR, emphasizing that fine-grained spatial correspondences are necessary. Our model substantially outperforms all baselines for both models (TalkNet and SPELL) on both datasets. This shows that our method helps learn stronger spatial features for ASD, which are both backbone- and dataset-agnostic. Moreover, our improvement over the baselines that use alternate pretrained features indicates that merely predicting spatial alignment (TLR) or doing audio-visual binauralization (2.5D-VS) is not enough for ASD, especially on the more challenging EgoCom dataset. Model analysis. Table <ref> (bottom) shows an ablation of our pretraining method. Upon training for ASD from scratch, we see a sharp drop in performance[SPELL requires storing pretrained features in the graph nodes, therefore not allowing training from scratch], showing that our advantage is not solely due to our model design, but also to our self-supervised pretraining stage. §.§ Spatial audio denoising Next, we evaluate spatial audio denoising. To instantiate this task, we add the binaural audio of a target clip to the downscaled binaural audio from another randomly chosen clip, where the downscaling factor depends on the desired noise level, and attempt to extract the target from the mixture. We evaluate three noise levels, expressed using the signal-to-noise ratio (SNR): 1) 0 dB, 2) 2.5 dB, and 3) 5 dB. The different noise levels test our model's robustness to varying levels of task difficulty: the lower the SNR value, the higher the noise, and consequently, the higher the difficulty. For this task, we evaluate on EgoCom only. We find that for EasyCom, mixing audio from a different clip as noise usually leads to spatially overlapping sound sources, since the dataset is recorded in a fixed setting (people sitting around a table); this renders the denoising task on EasyCom intractable for all models. Backbone model. We adopt the commonly used U-Net <cit.> model for audio-visual source separation <cit.> as the backbone, which produces a binaural ratio mask for the target audio (see Supp. for details). We multiply the predicted ratio mask with the mixed magnitude spectrogram to get the predicted magnitude spectrogram, then convert it to a waveform using the inverse short-time Fourier transform with the mixed audio phase. Pretrained feature fusion. To use our features for denoising, we reshape the visual features f^V and unmasked audio features f^A, produced by our audio-visual encoder ℰ^AV, to form multi-channel 2D maps, where the features align with their corresponding tokens vis-à-vis the raster order.
Next, we pass the feature maps to separate convolutional layers, concatenate the outputs channel-wise, and use them to replace the visual features at the U-Net <cit.> bottleneck. Our fusion strategy helps the U-Net leverage fine-grained spatial cues at the level of audio patches and video tubelets. Baselines. We compare against the following baselines and existing methods: * U-Net w/o vision: an audio-only blind denoising model * U-Net: the original backbone without any alterations * U-Net w/ ImageNet features: pretrains the visual encoder on ImageNet <cit.> * U-Net w/ TLR <cit.> features: fuses the features from TLR <cit.> with the feature outputs of the audio encoder through channel-wise concatenation. * U-Net w/ 2.5D-VS <cit.> features: fuses the pretrained features from 2.5D-VS <cit.> similarly. Evaluation metric. For evaluating our denoising quality, we use standard metrics: 1) STFT distance, a spectrogram-level measure of the denoising error, reported in units of 10^-3, and 2) SI-SDRi, the improvement in SI-SDR <cit.>, a scale-invariant estimate of the level of distortion in the audio, over using the mixed audio as the prediction. Results. Table <ref> (top) shows spatial audio denoising results on the more challenging EgoCom dataset. The unmodified U-Net backbone performs better than the version that lacks vision, establishing that, similar to ASD, vision is crucial for better denoising. Using the pretrained features of TLR <cit.> or 2.5D-VS <cit.> further improves the performance, showing that learning spatial audio-visual features aids denoising. Our method outperforms all baselines (p ≤ 0.05) across both metrics for all noise levels. While the improvement over the baselines that do not use self-supervised pretraining emphasizes the utility of learning spatial audio-visual relationships through self-supervision, the performance boost over TLR and 2.5D-VS underlines the strengths of our self-supervised method design, which are consistently realized for both ASD and denoising. Further, our improvement margins over the baselines are larger for higher noise levels (0 and 2.5 dB), indicating that our features play a bigger role in the more difficult denoising settings. Model analysis. In Table <ref> (bottom), we ablate our pretraining method. Similar to ASD, training our model from scratch on the denoising task leads to a decline in performance. This disentangles the impact of our pretext task design from the model architecture and shows that our pretraining stage helps the backbone learn better audio-visual features, leading to superior denoising quality. §.§ Qualitative analysis. In Fig. <ref>, we analyze the visual attention maps of our shared audio-visual encoder ℰ^AV. Observe that the regions of high attention are usually centered around the active speakers (see center-left and center-right) and other sound sources (e.g., a loudspeaker for generating background noise in the examples at the center and top-left), or around objects that determine how the sound spatializes in the scene (e.g., large tables, cupboards). Interestingly, our model also attends to multiple people if they are speaking at the same time (see top-left), thereby facilitating the detection of multiple active speakers. § CONCLUSION We introduced a novel self-supervised approach for learning audio-visual representations in egocentric videos via spatial correspondence between the video and its binaural audio.
The spatial representations are learned via binaural audio inpainting, which involves masking segments or full channels of the binaural audio and predicting the masked parts on the basis of the video and the unmasked audio context. Through extensive evaluation, we show that our learned features are strong and generic enough to improve over multiple backbone methods and for multiple downstream tasks, including active speaker detection and source separation. In future work, we plan to explore alternate pretraining strategies involving spatial audio synthesis and leverage more large-scale conversational video datasets for learning stronger features. § SUPPLEMENTARY MATERIAL In this supplementary material, we provide additional details about: * Video (with audio) for qualitative illustration of our pretext task and qualitative evaluation of our model predictions on the downstream tasks (Sec. <ref>). * Evaluation of the impact of the channel masking frequency r (from Sec. <ref> in main) in our audio masking protocol on the downstream task performance (Sec. <ref>) * Evaluation of the impact of our model parameter initialization on the downstream performance (Sec. <ref>) * Additional dataset details (Sec. <ref>), as mentioned in Sec. <ref> in main * Additional model architecture and hyperparameter details for both self-supervised pretraining and downstream training (Sec. <ref>), as referenced in Sec. <ref> and <ref> in main §.§ Supplementary video The supplementary video provides a qualitative illustration of our pretraining task for learning spatial features from audio-visual correspondence in egocentric videos. Moreover, we provide video samples from both the EgoCom <cit.> and EasyCom <cit.> datasets to illustrate the unique challenges posed by egocentric videos. Additionally, we demonstrate our model's prediction quality for both active speaker detection and spatial audio denoising, and analyze common failure modes of our model on both tasks. Please see the video at <http://vision.cs.utexas.edu/projects/ego_av_corr> and use headphones to hear the binaural audio correctly. §.§ Channel masking frequency r Here, we analyze the effect of the channel masking frequency r in our audio masking protocol (Sec. <ref> in main) on the downstream task performance. Table <ref> reports the active speaker detection (ASD) results on the more challenging EgoCom <cit.> dataset, and Table <ref> reports the denoising results for different noise levels. We notice that the performance on both ASD and denoising, especially at the higher noise levels, declines upon increasing or decreasing the value of r from our choice of 20 %, which we set based on the downstream validation performance (Sec. <ref> in main) and which helps our model achieve a fine balance between the two complementary strategies of masking a complete channel and randomly masking audio tokens. Whereas randomly masking a channel of the binaural audio entails solving the more under-constrained and consequently complex binauralization task, thereby helping our model learn stronger spatial associations between vision and audio, randomly masking audio tokens helps improve training stability. §.§ Model parameter initialization To evaluate the effect of random parameter initialization on our model, we train our model on both tasks and datasets with 3 different random seeds.
Across all runs, our standard errors are less than 0.01 on all metrics, showing that our model is robust to different random parameter initializations, and that the improvements in performance are significantly larger than these small variations from randomness. §.§ Dataset details As discussed in Sec. <ref> in main, we use two public datasets containing egocentric videos with binaural audio, EgoCom <cit.> and EasyCom <cit.>, for our experiments. For EgoCom, we follow the authors and split the data into train/val/test splits comprising 30.3/2.4/5.8 hours of data. For EasyCom, we randomly generate train/val/test splits with 4.5/0.38/0.39 hours of data, such that there is no overlap in conversation participants between any two splits. Next, we extract 1-second-long clips from both datasets, where the video and binaural audio are sampled at 5 frames per second (fps) and 16 kHz, respectively. The frame resolution is 240 × 352 for EgoCom, and 198 × 352 for EasyCom. Furthermore, we choose audio channels 5 and 6 (corresponding to the in-ear microphones) as our binaural audio channels for EasyCom. §.§ Model architecture and training details In addition to the details provided in Sec. <ref> in main, we provide here extra model architecture and training details for both pretraining and finetuning on downstream tasks, for reproducibility. We perform all training using 8 NVIDIA Tesla V100 SXM2 GPUs. We will release all code and data. §.§.§ Pretraining We described our model architecture and pretraining details in Sec. <ref> in main. Here, we provide additional details about our model's input preparation, architecture, parameter initialization, and training. Input preparation. We sample the video clips at their original resolution, normalize them using the per-color means and standard deviations computed on ImageNet <cit.>, and generate a total of 330 and 286 visual tokens for EgoCom and EasyCom, respectively, by splitting the clips into non-overlapping tubelets containing a sequence of 5 patches, where each patch is 16 × 16 in size (Sec. <ref> in main). We represent the binaural audio as two-channel Kaldi-compliant <cit.> spectrograms with 98 temporal windows and 128 Mel-frequency bins, which we compute by using the binaural audio normalized to [-1, 1], a window length of 25 ms, and a hop length of 10 ms. We normalize the spectrograms by computing the mean and standard deviation of the Mel-spectrograms generated from all audio clips in each dataset. We next generate 392 audio tokens per spectrogram channel by splitting it into non-overlapping patches of size 2 × 16. Architecture. All hidden layers in each transformer block <cit.> emit features that are four times as long as the embedding size for the block. We always use LayerNorm <cit.> after every output of a transformer block unless it is a direct input to another transformer block. Parameter initialization. We use Xavier <cit.> uniform initialization for all network parameters. For the LayerNorm <cit.> layers, we initialize their weights to 1 and biases to 0. We use a truncated normal distribution with a standard deviation of 0.02 and a sampling range of [-2, 2] to initialize the learnable modality and channel embedding tokens, and initialize the mask tokens from a normal distribution with a standard deviation of 0.02. Training. We set the batch size to 104 during pretraining. §.§.§ Active speaker detection In Sec. <ref> in main, we outlined our feature fusion method for active speaker detection (ASD).
Here, we provide additional architectural details for feature fusion, and also describe our finetuning process. Pretrained feature fusion. Fig. <ref> and <ref> show our feature fusion methods for the TalkNet <cit.> and SPELL <cit.> ASD backbones, respectively. The single-layer transformer decoder (Sec. <ref> in main), which we use for fusing our pretrained features with the backbones (Sec. <ref> in main), generates 128- and 512-dimensional embeddings for TalkNet and SPELL, respectively. Since SPELL does not train any audio-visual features when training its graph neural network (GNN), we first pretrain the transformer decoder for SPELL by using it with the TalkNet backbone. Towards that goal, we feed the decoder features to a single linear layer that maps the 512-dimensional features to 128-dimensional features, and is followed by GELU <cit.> activations and LayerNorm <cit.>, before fusing the 128-dimensional features with the TalkNet backbone. After pretraining, we append the 512-dimensional outputs of the decoder to the outputs of the two-stream audio-visual encoder (Sec. <ref> in main) for training the GNN in SPELL. Training. For TalkNet, we train using the Adam optimizer <cit.> for 25 epochs with an initial learning rate (LR) of 10^-4 for the backbone and 10^-5 for the pretrained components, both of which we decay using a step LR scheduler by a factor of 0.95 after every epoch. We set the batch size to 400. For SPELL, we first train the two-stream audio-visual encoder for feature extraction for 100 epochs using the cross-entropy loss and Adam <cit.> with an initial learning rate of 5 × 10^-4, which we decay by 0.1 after every 40 epochs. We set the batch size to 320. For training the GNN of SPELL, we train for 70 epochs using a batch size of 320 again and a learning rate of 10^-3, while setting all other hyperparameters per the original paper. §.§.§ Spatial audio denoising Backbone architecture. Following <cit.>, our U-Net backbone for spatial audio denoising (Sec. <ref> in main) is an audio-visual model comprising an audio encoder, a visual encoder, and a decoder for predicting an estimate of the target audio. The audio encoder takes the log magnitude spectrogram of the mixed binaural audio as input, and uses a stack of 5 convolutional (conv.) layers to produce a multi-channel 2D audio feature map. Each conv. layer has a kernel size of 4, padding of 1, and stride of 2, and is followed by leaky ReLU <cit.> activations with a slope of 0.2 for negative inputs, and batch normalization <cit.>. The conv. layers have 64, 128, 256, 512 and 512 output channels, respectively. The visual encoder has a ResNet-18 <cit.> architecture that outputs a multi-channel 2D visual feature map without feeding it to the average pooling or any subsequent layers. We push the ResNet outputs through another conv. layer to match their height and width with those of the audio features. The conv. layer has a kernel size of (1, 4), a padding of (0, 0) for EgoCom <cit.> and (1, 0) for EasyCom <cit.>, and 128 output channels. Further, we remove the last feature column from the output of the conv. layer for all channels for EasyCom. We concatenate the per-frame features along the channel dimension and generate the visual features. We then concatenate the visual features with the audio features channel-wise, and feed the concatenated features to the audio decoder, which predicts an estimate of the ratio mask <cit.> for the target audio magnitude spectrogram. The audio decoder first uses a stack of 5 transpose convolutional (conv.)
layers, which are connected to the corresponding encoder layers through skip connections. The transpose conv. layers have a kernel size of 4, a stride of 2, and a padding of (1, 1), except for the last layer, which has a padding of (2, 1). The transpose conv. layers have 1152, 1024, 512, 256 and 128 output channels, respectively. Next, the audio decoder feeds the output of the transpose conv. layers to a conv. layer with 2 input and output channels, and a kernel size of (2, 1) to emit the predicted ratio mask. Input preparation. To transform the audio waveforms into magnitude spectrograms, we first normalize them to [-1, 1] and then compute the short-time Fourier transform with a window length of 128, a hop length of 64, and 512 frequency bins. Pretrained feature fusion. Fig. <ref> shows our feature fusion method for spatial audio denoising. We reshape the visual features from the outputs of our audio-visual encoder ℰ^AV to form multi-channel 2D visual feature maps (Sec. <ref> in main), such that the 2D raster order of the features matches that of the tubelets in the video clip, and feed the reshaped features to a convolutional (conv.) layer with a kernel size of (3, 4), stride of (2, 3), padding of (1, 2) and (2, 2) for EgoCom <cit.> and EasyCom <cit.>, respectively, and 128 input and 784 output channels. We similarly reshape the audio features, and feed them to another conv. layer with a kernel size of (1, 7), padding of 0, stride of (1, 6), and 128 input and 256 output channels. Both conv. layers are followed by leaky ReLU activations with a slope of 0.2 for the negative values, and batch normalization. Next, we concatenate the visual and audio features along the channel dimension, and further concatenate them with the audio encoder outputs channel-wise (Sec. <ref> in main). Training. We train using the Adam optimizer <cit.> for 200 epochs with a learning rate (LR) of 5 × 10^-4. We set the batch size to 80.
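As an illustration of the audio encoder specification above (five strided conv. blocks with kernel size 4, stride 2, padding 1, leaky ReLU with slope 0.2, batch normalization, and 64/128/256/512/512 output channels), a minimal PyTorch sketch could look as follows. The class name, the example spectrogram size, and the usage snippet are our own illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Sketch of the U-Net audio encoder: 5 strided conv. blocks applied to the
    two-channel log-magnitude spectrogram of the mixed binaural audio."""
    def __init__(self, in_channels=2, widths=(64, 128, 256, 512, 512)):
        super().__init__()
        blocks, prev = [], in_channels
        for w in widths:
            blocks += [
                nn.Conv2d(prev, w, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),   # slope 0.2 for negative inputs
                nn.BatchNorm2d(w),
            ]
            prev = w
        self.net = nn.Sequential(*blocks)

    def forward(self, log_mag_spec):               # (B, 2, F, T)
        return self.net(log_mag_spec)              # multi-channel 2D feature map

# Example with an assumed spectrogram of 256 frequency bins x 64 frames:
features = AudioEncoder()(torch.randn(4, 2, 256, 64))
print(features.shape)                              # torch.Size([4, 512, 8, 2])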
http://arxiv.org/abs/2307.04593v1
20230710143512
DWA: Differential Wavelet Amplifier for Image Super-Resolution
[ "Brian B. Moser", "Stanislav Frolov", "Federico Raue", "Sebastian Palacio", "Andreas Dengel" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Moser et al. German Research Center for Artificial Intelligence (DFKI), Germany RPTU Kaiserslautern-Landau, Germany [email protected] DWA: Differential Wavelet Amplifier for Image Super-Resolution Brian B. Moser1, 2 Stanislav Frolov1,2 Federico Raue1 Sebastian Palacio1 Andreas Dengel1, 2 February 2023 =================================================================================================== This work introduces Differential Wavelet Amplifier (DWA), a drop-in module for wavelet-based image Super-Resolution (SR). DWA invigorates an approach recently receiving less attention, namely Discrete Wavelet Transformation (DWT). DWT enables an efficient image representation for SR and reduces the spatial area of its input by a factor of 4, the overall model size, and computation cost, framing it as an attractive approach for sustainable ML. Our proposed DWA model improves wavelet-based SR models by leveraging the difference between two convolutional filters to refine relevant feature extraction in the wavelet domain, emphasizing local contrasts and suppressing common noise in the input signals. We show its effectiveness by integrating it into existing SR models, e.g., DWSR and MWCNN, and demonstrate a clear improvement in classical SR tasks. Moreover, DWA enables a direct application of DWSR and MWCNN to input image space, reducing the DWT representation channel-wise since it omits traditional DWT. § INTRODUCTION Image Super-Resolution (SR) has an impressive legacy in Computer Vision (CV) yet still presents an exhilarating challenge <cit.>. SR is a task of enhancing Low-Resolution (LR) images to High Resolution (HR). It is challenging because many High Resolution (HR) images can correspond to a given Low-Resolution (LR) image, rendering the task mathematically ill-posed. In recent years, deep learning has fueled rapid development in SR, leading to tremendous progress <cit.>. While many techniques have improved the overall quality of image reconstructions, there remains a pressing need for methods capable of producing high-frequency details, particularly when dealing with high magnification ratios <cit.>. Addressing this issue is crucial for the continued advancement of SR. Influenced by achievements on other CV tasks, recent research focused on trending approaches like Transformer-based networks <cit.>, Denoising Diffusion Probabilistic Models <cit.> or Generative Adversarial Networks <cit.>. Despite astonishing reconstruction capabilities, they often lack an explicit focus on generating high-frequency details, i.e., local variations. This work aims to advance the field of SR by exploring wavelet-based networks. Unfortunately, this technique has received less attention despite its significant potential <cit.>. We seek to provide a fresh perspective and revive research by re-evaluating these approaches. Discrete Wavelet Transformation (DWT) enables an efficient image representation without losing information compared to its naïve spatial representation, i.e., traditional RGB format. It does so by separating high-frequency details in distinct channels and reducing the spatial area of input image representation by a factor of 4. Therefore, a smaller receptive field is required to capture the input during feature extraction. Using DWT like in DWSR <cit.> and MWCNN <cit.> reduces the overall model size and computational costs while performing similarly to state-of-the-art image SR architectures. 
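To illustrate this representation change, the short sketch below (using PyWavelets as one possible implementation; the array sizes are illustrative) decomposes an RGB image into the four Haar sub-bands, each with a quarter of the spatial area, and stacks them into the 12-channel input used by wavelet-based SR models such as DWSR and MWCNN; the inverse transform shows that no information is lost.

import numpy as np
import pywt

# A toy H x W x 3 RGB "image" in [0, 1].
img = np.random.rand(64, 64, 3)

# One level of the 2D Haar DWT applied to the two spatial axes of each color channel.
LL, (LH, HL, HH) = pywt.dwt2(img, wavelet='haar', axes=(0, 1))

# Each sub-band is 32 x 32 x 3: the spatial area shrinks by a factor of 4, and
# concatenating the four bands channel-wise yields a 32 x 32 x 12 representation.
dwt_input = np.concatenate([LL, LH, HL, HH], axis=-1)
print(LL.shape, dwt_input.shape)

# The inverse transform recovers the original image up to floating-point error.
rec = pywt.idwt2((LL, (LH, HL, HH)), wavelet='haar', axes=(0, 1))
assert np.allclose(rec, img)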
This work introduces a new Differential Wavelet Amplifier (DWA) module inspired by differential amplifiers from electrical engineering <cit.>. Differential amplifiers increase the difference between two input signals and suppress the common voltage shared by the two inputs, a property called Common Mode Rejection (CMR) <cit.>. In other words, they mitigate the impact of noise (e.g., electromagnetic interference, vibrations, or thermal noise) affecting both source inputs while retaining valuable information and improving the integrity of the measured input signal. Our proposed DWA layer adapts this idea to deep learning and can be used as a drop-in module for existing SR models. This work shows its effectiveness using wavelet-based SR approaches as exemplary use cases. DWA leverages the difference between two convolutional filters with a stride difference to enhance relevant feature extraction in the wavelet domain, emphasizing local contrasts and suppressing common noise in the input signals. We demonstrate the effectiveness of DWA through extensive experiments and evaluations, showing improved performance compared to existing wavelet-based SR models without DWA: DWSR with DWA shows overall better performance w.r.t. PSNR and SSIM, and MWCNN with DWA achieves better SSIM scores with comparable PSNR values on the testing datasets Set5 <cit.>, Set14 <cit.>, and BSDS100 <cit.>. Taken together, our work makes the following key contributions: * Introduction of the Differential Wavelet Amplifier (DWA): a novel module that leverages the difference between two convolutional filters horizontally and vertically in a wavelet-based image representation, and which is applicable as a drop-in addition to existing network architectures. * Comprehensive evaluation demonstrating the improved performance obtained on popular SR datasets such as Set5 <cit.>, Set14 <cit.>, and BSDS100 <cit.> by adding DWA to existing wavelet-based SR models, namely DWSR <cit.> and MWCNN <cit.>. * Experimental analysis showing that DWA enables a direct application of DWSR and MWCNN to the input space by avoiding the DWT on the input image. This application reduces the input channel-wise to 3 instead of 12 channels for RGB images while keeping the spatial reduction benefit of DWT. * Visual examination of reconstructions showcasing that DWSR with the DWA module captures more distinct edges and finer details, which are also closer to the ground-truth residuals. § BACKGROUND This section provides comprehensive background information on the 2D Discrete Wavelet Transform (2D-DWT), how SR models (DWSR <cit.> and MWCNN <cit.>) use it, and work related to Differential Wavelet Amplifiers (DWA). Additionally, we introduce differential amplifiers from electrical engineering, which inspired our proposed method DWA. §.§ Discrete Wavelet Transform in SR The 2D Discrete Wavelet Transform (2D-DWT) decomposes an image into four unique sub-bands with distinct frequency components: a low-frequency approximation sub-band and three high-frequency detail sub-bands representing horizontal, vertical, and diagonal details. Let x [ n ] ∈ℝ^N be a signal. The 1D Discrete Wavelet Transformation (1D-DWT) with the Haar wavelet first passes the input signal through a half-band high-pass filter h [ n ] and a low-pass filter l [ n ]. Next, half of the samples are eliminated according to the Nyquist rule <cit.>. The wavelet coefficients are calculated by iteratively repeating the decomposition on each output coefficient <cit.>.
In the case of images, the 2D-DWT applies h [ n ] and l [ n ] in different combinations along the two spatial axes, resulting in four function applications. The DWSR <cit.> SR model exploits the wavelet domain and receives the DWT representation of the interpolated LR image as input. DWSR is composed of 10 convolution layers that are applied sequentially. It adds the interpolated LR input as a residual for the final reconstruction step, which results in learning only the sparse residual information between the LR and HR domains. MWCNN <cit.> exploits multi-level DWT (multiple applications of DWT) and utilizes a U-Net architecture <cit.>. DWT replaces all downsizing steps, and the inverse operation of DWT replaces all upsampling steps. Ultimately, it uses the interpolated LR image as a residual connection for the final prediction. The standard MWCNN setup consists of 24 convolution layers. One caveat of DWSR and MWCNN in learning the residual is that they must translate their information-rich input, e.g., the average band, into a sparse representation. To ease this burden, we present the Differential Wavelet Amplifier, which directly transforms the input into sparse representations, inspired by the differential amplifiers introduced next. §.§ Differential Amplifier An electronic amplifier is a standard electrical engineering device used to increase a signal's power <cit.>. One type of electronic amplifier is the differential amplifier, which increases the difference between two input signals and suppresses the common voltage shared by the two inputs <cit.>. Given two inputs V^-_in, V^+_in∈ℝ^N and the differential gain of the amplifier A_d ∈ℝ, the output V_out is calculated as V_out = A_d ( V^+_in - V^-_in). The purpose of differential amplifiers is to suppress common signals or noise sources that are present in multiple input channels while retaining valuable information. In the literature, this is called Common Mode Rejection (CMR) and is a critical property in many electrical engineering applications, particularly in systems that measure small signals in the presence of noise or interference, e.g., electromagnetic interference or thermal noise <cit.>. Hence, using CMR improves the signal-to-noise ratio, overall system performance, and signal integrity, since the system can focus on the relevant differential signals. §.§ Differential Convolutions Closest to our work is Sarıgül et al. <cit.>, which applies differential convolutions, i.e., the difference of two convolution layers, to emphasize contrasts for image classification, which is inherently different from image generation tasks such as image SR. Moreover, they do not consider a stride difference, which is vital for capturing local variations. Knutsson et al. <cit.> theoretically examine a normalized version of differential convolutions, also without a stride difference. Due to the time of publication, they did not try it in the case of deep learning-based image SR. Newer applications like Canh et al. <cit.> consider learnable parameters to turn the Difference of Gaussians (DoG) <cit.> into a learnable framework, but have the same caveat: as Knutsson concluded, their approaches can be interpreted as a standard convolution weighted with the local energy minus the “mean” operator acting on the “mean” data, i.e., a more elaborate convolution operation. A similar effect could also be seen in the residual connections of ResNets <cit.> when the kernel parameters have a negative sign.
However, residual connections are different since they force a convolution layer to learn to extract the sparse details that are not apparent in the input. In contrast, our proposed method with Differential Wavelet Amplifier (DWA) explicitly produces sparse details by design due to the subtraction operator. Therefore, DWA does not have to learn what input information should be removed for the residual information. It can focus on relevant features that persist when the stride convolution does not detect the same feature, thereby emphasizing local contrast. § DIFFERENTIAL WAVELET AMPLIFIER (DWA) This section presents our proposed Differential Wavelet Amplifier (DWA) module. Inspired by differential amplifiers in electrical engineering, DWA is designed to operate in the wavelet domain and exploits the difference between two input signals to improve the performance of image SR methods based on wavelet predictions. DWA is applied separately in the horizontal and vertical axis of the input image. In each direction, we perform two convolutions with a stride distance in one direction for both axis (from left to right, from top to bottom, as in MDLSTMs <cit.>), allowing a fine-grained feature extraction and emphasizing local contrasts while suppressing the common mode in the input, similar to CMR in electrical engineering. <ref> visualizes all processes involved in DWA. Let 𝐱∈ℝ^w × h × c_in be an input image or feature map with c_in channels. We define ψ(𝐱, (i, j) ) : ℝ^w × h × c_in×ℕ^2 →ℝ^k · k × c_in as a function that extracts k · k points around a spatial position (i, j). We can then express the resulting feature maps for the horizontal 𝐇( 𝐱) and vertical 𝐕( 𝐱) axis as 𝐇( 𝐱)_i,j = f ( ψ(𝐱, (i, j) ) ; θ_1 ) - f ( ψ(𝐱, (i+s, j) ) ; θ_2 ), 𝐕( 𝐱)_i,j = f ( ψ(𝐱, (i, j) ) ; θ_3 ) - f ( ψ(𝐱, (i, j+s) ) ; θ_4), where f : ℝ^k · k × c_in→ℝ^c_f is a convolution operation with parameters θ_n for 0 < n < 4 , k × k the kernel size and s ∈ℕ a pre-defined stride difference. As a result, the local variance is captured in one direction for both axes, similar to MDLSTMs <cit.>: from left to right with parameters θ_1 and θ_2 and from top to bottom with parameters θ_3 and θ_4. We obtain two distinct feature maps that capture complementary input image information and provide richer feature representations for the wavelet-based SR task. The input is directly translated to sparse representations, which reduces the distance to residual target objectives in networks that use residual connections for final prediction. We concatenate the resulting feature maps alongside the input to ensure no information is lost during the DWA processing. This combination creates a comprehensive set of feature maps that retains the original input information while incorporating the directional features obtained from both axes. More formally: g ( 𝐱) = 𝐱⊙σ( H ( 𝐱) ⊙ V ( 𝐱) ), where ⊙ is a channel-wise concatenation operator and σ is a non-linear function like sigmoid, tanh or ReLU <cit.>. The concatenated feature map is fed into an additional convolution layer f_final: ℝ^k · k × (c_in + 2 · c_f)→ℝ^c_final and parameters θ_final, which maps the channel size after concatenation to a desired target channel size c_final such that our module can easily be incorporated into existing models: DWA( 𝐱)_i,j = f_final( ψ(g (𝐱), (i, j) ) ; θ_final) A SR model utilizing this DWA module exploits the comprehensive feature map to learn the complex relationships between LR and HR images, ultimately reconstructing the HR image with reduced noise. 
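A minimal PyTorch sketch of the DWA computation defined above (the horizontal and vertical differences with stride offset s, the concatenation with the input after the non-linearity σ, and the final fusing convolution f_final) might look as follows. The realization of the stride offset via a circular shift, the choice of sigmoid for σ, and all default hyperparameters are our own illustrative assumptions.

import torch
import torch.nn as nn

class DWA(nn.Module):
    """Differential Wavelet Amplifier sketch: differences of two convolutions
    evaluated at positions offset by a stride s along the horizontal and the
    vertical axis, concatenated with the input and fused by a final convolution."""
    def __init__(self, c_in=3, c_f=32, c_final=64, k=3, s=1):
        super().__init__()
        self.s = s
        # theta_1 ... theta_4 of the horizontal/vertical branches
        self.h1 = nn.Conv2d(c_in, c_f, k, padding=k // 2)
        self.h2 = nn.Conv2d(c_in, c_f, k, padding=k // 2)
        self.v1 = nn.Conv2d(c_in, c_f, k, padding=k // 2)
        self.v2 = nn.Conv2d(c_in, c_f, k, padding=k // 2)
        # f_final with theta_final: (c_in + 2*c_f) -> c_final channels
        self.fuse = nn.Conv2d(c_in + 2 * c_f, c_final, k, padding=k // 2)

    def forward(self, x):                          # x: (B, c_in, H, W)
        # Convolution at (i, j) minus convolution at the position shifted by s
        # (circular shift used here as a simple boundary-handling choice).
        x_shift_h = torch.roll(x, shifts=-self.s, dims=3)   # shift along width
        x_shift_v = torch.roll(x, shifts=-self.s, dims=2)   # shift along height
        Hf = self.h1(x) - self.h2(x_shift_h)
        Vf = self.v1(x) - self.v2(x_shift_v)
        g = torch.cat([x, torch.sigmoid(torch.cat([Hf, Vf], dim=1))], dim=1)
        return self.fuse(g)

out = DWA()(torch.randn(2, 3, 48, 48))             # -> shape (2, 64, 48, 48)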
By employing the DWA, we aim to harness the benefits of wavelet domain processing and the difference between two convolutional filters. We demonstrate the effectiveness of our approach through extensive experiments and evaluations in the following sections. §.§ Direct Application of DWA (DWA Direct) One way to circumvent additional computation steps is to apply DWA directly on the image space, omitting DWT and learning the transition between image and frequency space implicitly via DWA. Thus, the interpolation of the input, which effectively adds no additional information since it generates only approximated values, can be reduced by half for networks like DWSR or MWCNN. Consequently, the network is better adapted to the given values of the LR input. In the experiments, we evaluate this alternative approach called DWA Direct and show that it further enhances the performances of DWSR and MWCNN. § EXPERIMENTS We evaluate our proposed DWA module by integrating it into the wavelet-based SR models DWSR and MWCNN. We begin this section by describing the experiments. Next, we discuss the results quantitatively and qualitatively. We show the effectiveness of DWA and that a direct application of wavelet-based SR models with DWA to image space is feasible without forfeiting reconstruction quality. §.§ Experimental Setup We applied widely-used SR datasets to evaluate our method. In addition, we utilized standard augmentation techniques such as rotation, horizontal and vertical flipping. For testing, we employed the datasets Set5 <cit.>, Set14 <cit.>, BSDS100 <cit.>. For training, we used different settings for DWSR and MWCNN to match the original works for a fair comparison, as dissected in the following. In all experiments, we train using the Adam optimizer <cit.> with a learning rate of 10^-4 with L2 regularization of 10^-8 on a single A100 GPU. Moreover, we use a learning rate decay schedule, which reduces the learning rate by 20 % every 20 epochs. Ablation Study: We use DIV2K <cit.> and follow the standard procedure by extracting sub-images of 192×192 for training. We iterate for 40 epochs over the training dataset. Since we compare with DWSR, we use L1-loss as the learning objective, as reported by the authors of DWSR. DWSR-Scenario: We use DIV2K <cit.> like in the ablation study, but we train for 100 epochs as reported in DWSR. MWCNN-Scenario: We collect 800 images from DIV2K <cit.>, 200 images from BSD <cit.> and 4,744 images from WED <cit.> and train for 100 epochs. Contrary to DWSR, we adapt the L2-loss like the authors of MWCNN. For sub-image extraction, we use a size of 240×240 to match the training settings of MWCNN. § RESULTS This section presents the quantitative and qualitative analysis of this work. It shows that incorporating the DWA module into DWSR improves the performance in every dataset and for all scaling factors. Moreover, we consistently improve the SSIM scores by implementing DWA into MWCNN and achieve similar PSNR results. This section starts with an ablation study to investigate different striding settings and the effect of combining DWA with DWSR for the direct application and the regular DWT case (see <ref>). Next, we examine the performance scores of our DWA module on classical SR datasets with DWSR and MWCNN. Finally, we visually compare the quality of the reconstructions. §.§.§ Ablation Study <ref> shows the impact of different striding settings for DWSR with DWA with 2x and 4x scaling. 
We observe an improvement for striding settings greater than 0, significantly for PSNR and slightly for SSIM. The differences between striding settings greater than 0 are minimal, with a slight decrease for larger striding sizes. Nonetheless, they outperform DWA with no stride difference consistently. Thus, having a stride difference to capture local variations more effectively benefits the overall performance of DWSR. We further investigate the impact of various model configurations, DWSR with or without the DWA module, in a direct application or without (see <ref>). <ref> presents the results, where two graphs display the PSNR and SSIM values <cit.>, respectively, for each method. We apply the ablation study with different model depths, ranging from 6 to 18, instead of using a standard depth of 10 for DWSR. As a result, DWSR with DWA or DWA Direct consistently outperforms the DWSR baseline across all model depths. This demonstrates the effectiveness of incorporating the DWA module as the first layer in the DWSR framework. Moreover, DWA Direct outperforms DWA applied to the DWT on the input with greater model depths. Furthermore, we observe a considerable performance drop in DWSR Direct without using the DWA module compared to all other evaluated methods. This indicates that the DWA module is crucial in enabling the Direct approach, as its absence significantly degrades performance. §.§.§ Performance <ref> summarizes PSNR and SSIM scores when applying the DWA module to DWSR and MWCNN for classical SR datasets on different scaling factors for a longer training span. We observe that incorporating the DWA module into DWSR improves the performance in every dataset and for all scaling factors. For MWCNN with DWA, a similar observation can be made, especially for the SSIM scores, which show overall the best performances. However, it has slightly decreased PSNR values for some cases, e.g., for scaling factor 3. Both applications, DWSR with DWA and MWCNN with DWA, are applied directly on the input image space, omitting a DWT of the input. §.§.§ Visual Comparison <ref> displays the ground truth HR image alongside the DWSR and DWA reconstructions. DWSR and DWA perform reasonably well in reconstructing the images. However, the DWA reconstructions exhibit more accurate and sharp details, particularly in the zoomed-in regions. Since the added bicubic interpolation of the LR image in the reconstruction process provides a robust base prediction, we also present the residual images, which are the differences between the bicubic interpolations and the ground truth images, to highlight the performance difference between both approaches. These residual images are the learning targets of the models to improve the reconstruction quality beyond interpolation. By comparing the residual images, we can see more clearly that the DWA model captures better distinct edges and finer details, which are also closer to the ground truth residuals, as opposed to the DWSR model. It has more substantial edges and finer points in the residual images, which are also closer in color (see red colored lines of DWSR reconstruction in <ref> as a comparison). This observation aligns with our quantitative results, where DWA outperforms DWSR regarding various performance metrics. To provide deeper insights into our proposed models, <ref> presents feature maps generated by the DWSR and DWA Direct models after the first layer. 
To ensure diversity, we selected the top five channels from each method based on the highest sum of distances between pairwise differences of all channels. Our analysis reveals that although DWSR operates on the frequency space, it still remains similar to the LR input and fails to capture the desired target residual. In contrast, DWA Direct extracts local contrasts and variations more effectively from the image space and performs better in mapping the target residual. § CONCLUSION AND FUTURE WORK In this work, we presented a novel Differential Wavelet Amplifier (DWA) module, which can be used as a drop-in module to existing wavelet-based SR models. We showed experimentally on Set5, Set14, and BSDS100 for scaling factors 2, 3, and 4 that it improves the reconstruction quality of the SR models DWSR and MWCNN while enabling an application of them to the input image space directly without harm to performance. This module captures more distinct edges and finer details, which are closer to the ground truth residuals, which wavelet-based SR models usually learn. This work is an opportunity to seek further advancements for SR based on frequency-based representations. For future work, an exciting research avenue would be to explore ways to incorporate DWA on different DWT levels in MWCNN instead of only applying it initially. § ACKNOWLEDGMENTS This work was supported by the BMBF projects SustainML (Grant 101070408) and by Carl Zeiss Foundation through the Sustainable Embedded AI project (P2021-02-009). splncs04
http://arxiv.org/abs/2307.06032v1
20230712092540
From Vlasov-Poisson to Schrödinger-Poisson: dark matter simulation with a quantum variational time evolution algorithm
[ "Luca Cappelli", "Francesco Tacchino", "Giuseppe Murante", "Stefano Borgani", "Ivano Tavernelli" ]
quant-ph
[ "quant-ph", "astro-ph.CO" ]
Dipartimento di Fisica dell'Università di Trieste IBM Quantum, IBM Research – Zurich, 8803 Rüschlikon, Switzerland INAF - Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131, Trieste, Italy ICSC - Italian Research Center on High Performance Computing, Big Data and Quantum Computing IBM Quantum, IBM Research – Zurich, 8803 Rüschlikon, Switzerland INAF - Osservatorio Astronomico di Trieste, via G.B. Tiepolo 11, 34143 Trieste, Italy ICSC - Italian Research Center on High Performance Computing, Big Data and Quantum Computing Dipartimento di Fisica dell'Università di Trieste, via Tiepolo 11, I-34131 Trieste, Italy INAF - Osservatorio Astronomico di Trieste, Trieste, Italy IFPU, Institute for Fundamental Physics of the Universe, Trieste, Italy ICSC - Italian Research Center on High Performance Computing, Big Data and Quantum Computing [email protected] IBM Quantum, IBM Research – Zurich, 8803 Rüschlikon, Switzerland Cosmological simulations describing the evolution of density perturbations of a self-gravitating collisionless Dark Matter (DM) fluid in an expanding background, provide a powerful tool to follow the formation of cosmic structures over wide dynamic ranges. The most widely adopted approach, based on the N-body discretization of the collisionless Vlasov-Poisson (VP) equations, is hampered by an unfavourable scaling when simulating the wide range of scales needed to cover at the same time the formation of single galaxies and of the largest cosmic structures. On the other hand, the dynamics described by the VP equations is limited by the rapid increase of the number of resolution elements (grid points and/or particles) which is required to simulate an ever growing range of scales. Recent studies showed an interesting mapping of the 6-dimensional+ 1 (6D+1) VP problem into a more amenable 3D+1 non-linear Schrödinger-Poisson (SP) problem for simulating the evolution of DM perturbations. This opens up the possibility of improving the scaling of time propagation simulations using quantum computing. In this paper, we develop a rigorous formulation of a variational-time evolution quantum algorithm for the simulation of the SP equations to follow DM perturbations, presenting a thorough analysis of the scaling of the algorithm as a function of spatial dimensions and resolution. Finally we investigate the transition of the SP dynamics towards the classical (ħ / m → 0) limit, which could become an efficient alternative to the solution of the VP equation. From Vlasov-Poisson to Schrödinger-Poisson: dark matter simulation with a quantum variational time evolution algorithm Ivano Tavernelli August 12, 2023 ====================================================================================================================== § INTRODUCTION A number of astrophysical and cosmological observations consistently point toward the definition of the so-called standard cosmological model <cit.>. In this model, the mass-energy content of the Universe is made by about 70% of an unknown form of Dark Energy (DE), which accounts for the accelerated cosmic expansion, by about 25% of an unknown form of collisioness non-baryonic Dark Matter (DM), while only the remaining ∼ 5% is made of ordinary baryonic matter. In addition, viable models of galaxy formation require DM particles to be cold (CDM), i.e. with negligible streaming velocities. 
With the further observational evidence that DE is consistent with a cosmological constant term (Λ) in the Einstein field equations, all this leads to the definition of the standard ΛCDM cosmological model <cit.>. While the exact nature of the cosmic dark constituents remains so far elusive, it is widely accepted that the gravitational instability of the tiny CDM density perturbations imprinted in the primordial Universe drives the formation of cosmic structures, from the kiloparsec (kpc) scales relevant for galaxy formation, to the Gigaparsec (Gpc) scales of the global cosmic web <cit.>. Describing in detail the evolution of such DM perturbations within a DE-dominated expanding background, and comparing the predictions to observational data, is crucial to shed light on the nature of DM and DE. The most widely adopted approach to the study of the gravitational instability of density perturbations in a collisionless fluid is the N-body discretization of the evolution of the fluid phase-space distribution function described by the Vlasov-Poisson (VP) system of equations <cit.>. In its most straightforward implementation, the N-body method explicitly computes the gravitational interaction between each pair of the N particles which discretize the fluid, thus implying an N^2 scaling with the number of resolution elements. While different methods, based on different levels of numerical approximation, have been introduced to speed up these computations, they are still hampered by the unfavorable scaling of the available classical algorithms with respect to system size. Furthermore, we should keep in mind that the N-body discretization of the phase-space structure of the fluid is also an approximation to reduce the dimensionality of the problem to a tractable level. A recent work by Mocz et al. <cit.>, showing a numerical correspondence between the 6D+1 Vlasov-Poisson (VP) and the 3D+1 Schrödinger-Poisson (SP) equations for cosmological simulation, revived the interest in simulating and studying various forms of dark matter that can be modelled by the SP equation <cit.>. In fact, the SP equation also has a direct physical interpretation in the so-called axion model, which postulates the presence of scalar particles as constituents of dark matter. In the ultra-light particle mass limit, this model is known as fuzzy dark matter (FDM) <cit.>. This correspondence opens up the possibility of using quantum algorithms (QA) for the investigation of dark matter dynamics, as it has already been demonstrated that QA can reduce the scaling complexity of the solution of quantum mechanical problems in many-body physics and quantum chemistry <cit.>. More generally, we propose a scalable quantum algorithm for the simulation of the time propagation of non-linear Schrödinger-like equations of the form i ∂/∂ t Ψ = H[Ψ] Ψ, where H[Ψ] indicates the functional dependence of the Hamiltonian on the system wavefunction. In this work, we explore the challenges arising in the implementation of cosmological simulations on quantum devices. The dynamics is governed by the SP equation, where a self-gravitating potential introduces nonlinearities in the problem. The mapping of the nonlinear problem onto a quantum device is solved using a hybrid classical-quantum variational algorithm similar to the one proposed in Lubasch et al. <cit.>.
The evolution of the wavefunction is carried out using a variational time evolution (VTE) approach, tailored for nonlinear self-consistent problems defined on a grid, which allows for an exponential saving in computational memory resources through the encoding of N grid points in log_2(N) qubits. Building on <cit.>, we adapt the VTE algorithm to the case where the potential is given by a variational ansatz, proposing quantum circuits for the evaluation of the required matrix elements whose depth scales polynomially with the number of qubits and for which the number of samples required for a desired accuracy does not depend on the system size. We investigate the possibility of recovering classical results by varying the scale of the problem and obtain an empirical logarithmic scaling law between the latter and the simulation's resolution. This work is structured as follows. In Section <ref> we describe the mapping of the cosmological SP equation onto a quantum computer, including a discussion of the strategies that must be adopted in the latter for the description of non-linear problems. Section <ref> is devoted to the description of the VTE algorithm for self-consistent nonlinear problems, including a discussion of the quantum circuit implementation. Numerical simulations for a one-dimensional 5-qubit (i.e., 32 grid points) system will be given in Section <ref>. The results include an analysis of the time evolution obtained with different choices of physical parameters, interpolating between the pure quantum regime and a classical, ħ/m → 0, limit. A study of the resolution convergence in this classical regime is also presented. Finally, we discuss the computational costs of our quantum algorithm and the conditions for potential quantum advantage. We draw our main conclusions in Section <ref>. § THEORY AND METHODS §.§ History of the Schrödinger-Poisson equation Under the fluid assumption, the phase-space distribution of massive CDM particles at time t is described by the distribution function f(𝐱, 𝐯, t), where 𝐱, 𝐯∈ℝ^3 are the positions and velocities of the particles, so that f d𝐱 d𝐯 describes the phase-space density within the 6D volume element d𝐱 d𝐯, while the density field in configuration space is given by ρ(𝐱,t)=∫ f(𝐱, 𝐯, t) d𝐯. Under the assumption of a collisionless fluid, the evolution of the distribution function obeys a continuity equation in phase space, df(𝐱, 𝐯, t)/dt=0. If the fluid is self-gravitating, then the Poisson equation, ∇^2U(𝐱,t)=4π G ρ(𝐱,t) (with G Newton's gravitational constant), provides the relationship between the density field and the gravitational potential U <cit.>. Simulations of cosmic structure formation within a ΛCDM model aim at solving this Vlasov-Poisson system of equations, once initial conditions on the positions and velocities of the particles, f(𝐱, 𝐯, t_0), are assigned to represent an ensemble realization of a given cosmological model <cit.>. As such, the VP equations must be solved in 6D+1 dimensions. The high dimensionality of this problem makes it very hard to tackle when a high spatial resolution is needed, as is usual in modern cosmological simulations. A widely used approach to reduce the dimensionality of the problem is to model the initial DM distribution as an ensemble of collisionless massive particles interacting only through self-gravity. Such a set of particles formally obeys the Euler-Poisson (EP) equations, a closure of the VP equations obtained by requiring that the distribution function be single-valued in space.
Classically the evolution is carried out using N-body <cit.> or fluid approaches <cit.>. The N-body approach <cit.> best approximates the analytic solution of the system (each DM particle has a single-valued velocity; at large scales however they can cross, as the VP equations require) and usually presents no singularities. However, it requires much more computational resources than the fluid one. On the other hand, the fluid method, that directly solves the EP equations, manages to reduce the dimensionality of the problem from 6D+1 to 3D+1, but present singularities and shell-crossing <cit.>. The potential limitations of both the N-body and the fluid methods clearly demonstrates that finding an alternative and efficient way to solve the VP equations would provide a significant conceptual and computational benefit for the numerical study of cosmic structure formation. Within this context, the Schrödinger-Poisson (SP) equations, i.e. the coupling of the Schrödinger equation with a self-interacting potential obeying the Poisson equation, have recently been proved to recover in the classical limit ħ/m → 0 the dynamics of the VP equations <cit.>. Such an approach was first introduced in ref. <cit.> as the non-relativistic limit of the Einstein field equations with a scalar boson field as source. The procedure known as the Schrödinger method (SM) maps the initial distribution f(𝐱, 𝐯, t_0) to the wavefunction Ψ(𝐱, t_0) through a Husimi transformation <cit.> with an accuracy of O(ħ / m), O((ħ / m)^2). The wavefunction then evolves according to the SP system of equations i ħ∂Ψ/∂ t=-ħ^2/2 m∇^2Ψ+m U Ψ ; ∇^2 U = 4 π G(ρ - ρ^*) . Here we have chosen to use a density contrast ρ - ρ^* as source of the gravitational potential, where ρ^* represents the average density over the volume considered. We note that in this approach Eq. (<ref>) describes a density field, not a particle's wavefunction. Note also that the constant ħ is not the Planck constant, since it is rather related to the resolution in momentum space, while the mass m defines the resolution in mass (see discussion below in Sect. <ref> and in S.I <ref>.). As an aside note, we remind that the SP equations has been already used in the numerical study of cosmic structure formation to study the dynamics of the Fuzzy Dark Matter (FDM) perturbations <cit.>. This class of DM candidates emerges as the ultra-light mass limit of a scalar bosonic field, whose particles are known as axions. In this case ħ represents in fact the actual Planck constant and m the mass of the axion-like particles. The characteristic scale of the problem is the ratio ħ/m: at smaller scales the dynamics is influenced by quantum effects as quantum pressure, while at larger scales, this effect becomes negligible and the classical Cold Dark Matter (CDM) limit is recovered. §.§ The nonlinear SP equation on quantum computers We consider a complex wavefunction Ψ(𝐱, t) (with 𝐱∈ℝ^3) defined in such a way that |Ψ|^2 = ρ / ρ^*. The following normalization emerges naturally from the definition of the volume-mean density ρ^* 1/𝒱∫ d𝒱 |Ψ|^2 = 1, The SP equation of interest (see diagram in Fig. <ref>) assumes the general form i ∂/∂ tΨ(𝐱, t) = ( -λ/2∇^2 + 1/λ V[Ψ(𝐱, t)] ) Ψ(𝐱, t) . with the self-interacting potential V[Ψ] defined as ∇^2 V[Ψ]= ∇^2 V(𝐱, t) = | Ψ(𝐱, t) |^2-1 . Here λ = ħ/m is the intrinsic scale of the problem <cit.> and V[Ψ] is a redefinition of the self interacting potential U[Ψ] that renders the Poisson equation dimensionless. 
We use square brackets, e.g., V[Ψ], to denote functional dependence. Details on how to recover Eqs. (<ref>), (<ref>) from Eq. (<ref>) are given in the S.I. <ref>. This set of equations, known as the Schrödinger-Poisson (SP) equations, can be seen as a time-dependent Schrödinger-like equation (TDSE), where the self-interacting nature of the potential in Eq. (<ref>) causes the dynamics of the system to be strongly nonlinear. It features two main processes, whose intensities are regulated by the magnitude of λ. We observe that if λ→∞ the potential term vanishes, leaving only the free Schrödinger equation, which leads to diffusion <cit.> (however, due to the imaginary coefficient iλ/2, the Schrödinger equation cannot strictly be classified as a diffusion equation). In this case we expect to see a spatial smoothing of the density distribution. In the opposite limit, when λ→ 0, the potential term dominates: this should cause the collapse of the distribution followed by a series of peaks and caustics. As such, this can be seen as the onset of the classical regime of gravitational instability <cit.>. While quantum computation has proven to be efficient in solving linear partial differential equations (PDEs) <cit.>, problems arise when dealing with nonlinear equations due to the intrinsic linearity of the quantum computation formalism <cit.>. Two main challenges are associated with the nonlinearity of Eq. (<ref>). The first one is related to the fact that quantum states are usually prepared and evolved through unitary operations. This preserves the well-known probability-like normalization of the quantum register: ⟨ψ|ψ⟩ = 1. Thus, the physical wavefunction |Ψ⟩, which solves Eq. (<ref>), and the generic quantum state on the quantum register |ψ⟩ live in two different Hilbert spaces. We will give more details on this subject in Section <ref>. The second complication is related to the self-consistency of the problem, which forces us to look at time-evolution algorithms other than Trotter-based expansions <cit.>. To address both problems, in this work we propose a variational time evolution algorithm specifically adapted to the nonlinearity of the problem. §.§ The quantum computing approach to the SP equation A first attempt to solve the nonlinear SP equation was given by Mocz & Szasz <cit.>. Such a solution is fully variational and makes use of a finite-difference optimization of the potential and of the system wavefunction evaluated at two subsequent time steps. The variational nature of this approach also allows one to bypass the costly solution of the Poisson equation in Fourier space in favour of a variational optimization of the potential as implemented in a separate qubit register. In this work, we propose a different strategy based on a variational time-dependent quantum algorithm for the propagation of the variational parameters defining the system wavefunction (see Section <ref>). This enables a more rigorous implementation of the wavefunction dynamics, avoiding the instabilities implicit in most VQE optimization procedures (e.g., slow convergence due to trapping in local minima and barren plateaus). On the other hand, the VTE algorithm comes at the cost of evaluating additional matrix elements for the solution of the equations of motion for the wavefunction parameters. §.§.§ Grid-based representation of the system wavefunction A typical space discretization associated with problems in first quantization <cit.> approximates a continuous space with a grid. 
In 1D, a line of length L is divided into an arbitrary number N of equidistant points. For each grid point x_j we have Ψ_j ≃Ψ(x_j), with j ∈{0, 1,..., N-1} and periodic boundary conditions Ψ_N = Ψ_0. With an n-qubit quantum register, one can generate a quantum state |ψ⟩ belonging to an N-dimensional Hilbert space, where N = 2^n. Making use of such a logarithmic encoding, only n = log_2N qubits are needed to describe an N-point grid. A generic state |ψ⟩ can hence be represented on a quantum register as a superposition of computational basis states, |ψ⟩ = ∑_j=0^N-1ψ_j |bin(j)⟩, where bin(j) is the binary representation of the grid position j and ψ_j ∈ℂ is the associated amplitude or weight, such that the probability distribution of measuring the different basis states (i.e., different positions on the grid) is normalized as ⟨ψ| ψ⟩ = ∑_j = 0^N-1 |ψ_j|^2 = 1. By combining this relation with the discretization of Eq. (<ref>), we can establish a correspondence between the approximated physical wavefunction on the grid point x_j and the corresponding coefficient of the j-th basis state |bin(j)⟩ in Eq. (<ref>), such that Ψ_j = √(N)ψ_j. The dynamics of the system wavefunction is described by means of a time-dependent variational approach <cit.>. To this end, we define a quantum trial state |ψ(θ(t))⟩, parametrized by a set of (time-dependent) variables θ(t) = {θ_1(t),...,θ_M_p(t) }, which evolve according to well-defined equations of motion <cit.>. The initial state is prepared through a suitable choice of a parameterized unitary (quantum circuit) U(θ(0)). An explicit circuit example is shown in Fig. <ref>. Using the previous relation between Ψ_j and ψ_j, we can describe the time evolution of the physical state |Ψ(θ(t))⟩ = √(N)|ψ(θ(t))⟩ using the updated parameters θ(t) (see <ref>). §.§.§ Variational time propagation with nonlinearities The trial wavefunction |ψ(θ(t)) ⟩ is evolved by adapting the VTE algorithm proposed in Ref. <cit.> to the case where the potential is self-consistent with the wavefunction and needs to be re-evaluated at each timestep. In VTE, the dynamics is tracked on the manifold spanned by the time-dependent parameters θ(t) used to describe the trial wavefunction. For a system evolving under the action of a Hamiltonian ℋ, we derive, from the McLachlan variational principle <cit.>, a set of equations of motion (EOM) of the form M θ̇=B, where M_kl = ℜ{⟨∂_θ_kΨ|∂_θ_lΨ⟩ - ⟨∂_θ_kΨ|Ψ⟩⟨Ψ|∂_θ_lΨ⟩} and B_k = ℑ{⟨∂_θ_kΨ|ℋ| Ψ⟩-⟨∂_θ_kΨ|Ψ⟩⟨Ψ|ℋ| Ψ⟩}, with ℋ[Ψ]=( -λ/2∇^2 + 1/λ V[Ψ(𝐱, t)] ) as defined in Eq. (<ref>). The dependence of Ψ on the parameters θ(t) is implicit. Note that, to capture the exact evolution including nonlinear effects, the terms in Eqs. (<ref>) and (<ref>) are rescaled according to Eq. (<ref>). §.§.§ Optimization of the potential As anticipated in Sec. <ref>, the functional dependence of the potential on the system wavefunction, Ψ, brings a further level of complexity into the dynamics of the system. While, classically, the solution of the Poisson equation (<ref>) for a generic wavefunction Ψ can easily be found using a spectral method in Fourier space <cit.>, such a strategy is not practical on near-term quantum computers, as it would require rather deep circuits. One way of implementing a quantum spectral method for the solution of the SP equation would require a quantum Fourier transform (QFT) to move to momentum space. Then a circuit able to reproduce |ψ|^2-1 would need to be followed by one able to divide by the squared momenta k^2. 
In the end, a QFT^-1 would return the exact potential on the quantum register. We instead resort to a variational approach. Hence, we introduce a second set of parameters ϕ(t) = {ϕ_V(t), ϕ̃_1(t),..., ϕ̃_L(t) } describing a quantum state |Φ_V(ϕ)⟩ = ϕ_V |Φ_Ṽ(ϕ̃)⟩, such that the potential can be obtained as |Φ_V(ϕ)⟩ = ∑_j=0^N-1 V_j(ϕ(t)) |j⟩ = ϕ_V ∑_j=0^N-1Ṽ_j(ϕ(t)) |j⟩ , where the index j in V_j(ϕ(t)) labels the grid position 𝐱_j associated with the bit string bin(j). In Eq. (<ref>) the parameter ϕ_V <cit.> ensures the normalization of the potential, ⟨Φ_Ṽ(ϕ̃) | Φ_Ṽ(ϕ̃)⟩ = ∑_j=0^N-1 |Ṽ_j(ϕ(t))|^2 = 1 , ∀ t . The potential can therefore be interpreted as a function of the circuit parameters, V_j(ϕ(t)). The parameters are iteratively updated to minimize the distance between the parameterized potential and the one arising from Eq. (<ref>): min_ϕ( ∑_j=0^N-1( ∇^2V_j(ϕ) - |Ψ_j(θ)|^2 + 1)^2 ) . When the optimization converges, the function V_j(ϕ(t)) approximates the exact potential V(𝐱, t), with 𝐱∈{𝐱_j }, corresponding to the parameterized wavefunction Ψ(θ(t)) at a specific time t. § THE ALGORITHM The self-consistency problem is solved, as anticipated in Section <ref>, by alternating the solution of the TDSE (VTE) and the optimization of the potential (Pot. Opt.). The intrinsic nonlinearity of the SP equation is reconciled with the requirements of a quantum circuit implementation by imposing the correct normalization of the physical wavefunction and potential, as given by Eq. (<ref>) and Eq. (<ref>), respectively. A scheme of this algorithm is reported in Alg. <ref>, where {θ_t_i} and {ϕ_t_i} refer to the parameter sets at time t_i, i ∈{ 0, 1, ..., N_t-1 }. For conciseness, in Alg. <ref> we use the notation Ψ_i ≡Ψ(θ_t_i). §.§ Circuit implementation The trial quantum states for both the wavefunction and the potential are implemented using a heuristic local ansatz <cit.> that alternates single-qubit rotation layers U^rot(θ) and entangling layers U^ent (see example in Fig. <ref>), U(θ) = U_0^rot(θ_0) ·∏_ξ = 1^D U_ξ^ent· U_ξ^rot (θ_ξ), where D is the number of entangling layers and θ_ξ a subgroup of parameters. In Fig. <ref>, we show the typical circuits used to encode the wavefunction |ψ(θ)⟩, while Fig. <ref> reports the one used for the potential |Φ_V(ϕ)⟩. The latter consists of just R_Y(θ) rotations and CX gates, since the target potential function is real-valued. The quantum part of the evolution algorithm resides in the measurement of the expectation values in Eqs. (<ref>) and (<ref>). In the following, we propose an efficient implementation of the circuits for the evaluation of the terms with derivatives in Eqs. (<ref>) and (<ref>). In particular, we provide a detailed procedure for the calculation of those matrix elements that have a functional dependence on the nonlinear potential, such as ⟨∂_θ_kψ|ℋ(V(ϕ)) |ψ⟩. Given the structure of the ansatz in Eq. (<ref>), with θ_k in the subset θ_ξ̃, the derivative ∂_θ_k leaves the unitary unchanged, with the exception of the target rotational layer: U_ξ^rot(θ_ξ) = ⊗_j=0^n-1exp{-i/2α_j θ_ξ̃,j}, where θ_ξ̃, j∈θ_ξ̃ and α_j ∈{X, Y, Z } is a Pauli matrix, the generator of single-qubit rotations. Combining Eqs. (<ref>), (<ref>) and |ψ(θ)⟩ = U(θ)|Ξ⟩, one gets for the partial derivative ∂_θ_k: ∂_θ_k U(θ)|Ξ⟩ = |∂_θ_kψ(θ)⟩ = -i/2 W_k(θ) |Ξ⟩ , for a generic quantum state |Ξ⟩. Here W_k(θ) is a modified version of U(θ) where the single-qubit rotation R_α(θ_k) is preceded by its own generator <cit.>. 
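Before turning to the measurement circuits built from W_k(θ), it may help to make the Pot. Opt. step of Eq. (<ref>) concrete with a purely classical sketch. Here a toy parameterization of the normalized potential vector stands in for the R_Y/CX circuit of Fig. <ref>, and the generic BFGS optimizer is an assumption, not the optimizer used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def periodic_laplacian(v, dx):
    # Second-order finite-difference Laplacian with periodic boundaries.
    return (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx ** 2

def potential_from_params(phi):
    # phi[0] plays the role of the norm phi_V; the remaining angles define a
    # normalized real vector, standing in for the R_Y/CX circuit amplitudes.
    v_tilde = np.cos(phi[1:])
    v_tilde /= np.linalg.norm(v_tilde)
    return phi[0] * v_tilde

def cost(phi, psi_phys, dx):
    # Squared residual of the Poisson equation: Lap V - (|Psi|^2 - 1).
    V = potential_from_params(phi)
    res = periodic_laplacian(V, dx) - (np.abs(psi_phys) ** 2 - 1.0)
    return np.sum(res ** 2)

def optimize_potential(psi_phys, dx):
    # One angle per grid point plus the overall norm (illustrative choice).
    phi0 = np.concatenate([[1.0], 0.1 * np.random.randn(len(psi_phys))])
    result = minimize(cost, phi0, args=(psi_phys, dx), method="BFGS")
    return potential_from_params(result.x)
```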
In the search for an efficient quantum circuit able to reproduce the matrix and vector elements of the McLachlan equation of motion, Eq. (<ref>), the main obstacle is to produce a quantum state with the following structure: |ψ⟩ = 1/√(2)( U_1(θ)|Ξ⟩|0⟩ + U_2(θ)|Ξ⟩|1⟩) , where U_1, U_2 are generic unitaries and the second quantum register (a single qubit) is used to evaluate the value of the matrix element. In the specific case under study, these unitaries should be expressive enough to enable a suitable parametrization of the wavefunction and its derivatives (Eq. (<ref>)). Given the structure of the circuit W_k, by controlling only the Pauli matrix that implements the derivative, it is possible to prepare the quantum states F_k (θ) |Ξ⟩|+⟩ = 1/√(2)( W_k(θ)|Ξ⟩|0⟩ + U(θ)|Ξ⟩|1⟩) = 1/√(2)( 2i |∂_θ_kψ(θ)⟩|0⟩ + |ψ(θ)⟩|1⟩), F_k, l (θ) |Ξ⟩|+⟩ = 1/√(2)( W_k(θ)|Ξ⟩|0⟩ + W_l(θ)|Ξ⟩|1⟩) = i√(2)( |∂_θ_kψ(θ)⟩|0⟩ + |∂_θ_lψ(θ)⟩|1⟩) , for a given reference state |Ξ⟩, where F_k, l(θ) and F_k(θ) refer to the unitaries for the different derivatives (see Fig. <ref>). Fig. <ref> summarizes all quantum circuits relevant for the evaluation of the terms in Eqs. (<ref>), (<ref>). A brief discussion on how to evaluate them on a QC follows, starting with the overlaps ⟨∂_θ_kψ|ψ⟩ and ⟨∂_θ_kψ|∂_θ_jψ⟩. One can notice from Eqs. (<ref>), (<ref>) that applying an H gate and then measuring ⟨σ_z⟩ on the ancillary qubit returns the desired quantities. Furthermore, there is no need to evaluate the real part to compute the product of the overlaps in Eq. (<ref>), since the term ⟨∂_θ_kψ|ψ⟩ is purely imaginary. The circuits used to do so are shown in Figs. <ref>, <ref>. The term ⟨∂_θ_kψ | V(ϕ̃) | ψ⟩ provides a link between the two parts of the algorithm: VTE and potential optimization. V(ϕ̃) is given in Eq. (<ref>) and is prepared using the parameters ϕ̃ resulting from the minimization of Eq. (<ref>). The circuit in Fig. <ref> is the one used for the evaluation of this linking term, where the series of n Toffoli gates provides a pointwise multiplication between the wavefunction and potential registers (i.e., ∑_k Ṽ_k ψ_k). Further details are presented in the S.I. <ref>. Concerning the term ℑ{ ⟨∂_θ_k ψ| ∇^2 |ψ⟩ }, a few considerations are needed. For systems of cosmological relevance, we expect accurate simulations to require a fine enough spatial resolution to resolve all spatial features. Therefore, using a finite-differences approach, as also proposed in Ref. <cit.>, can be justified, since the discretization error should be irrelevant at higher resolutions. In this framework, an approximation of the Laplace operator is given by ℑ{⟨∂_θ_kψ|∇^2 |ψ⟩} = 1/Δ x^2 ℑ{⟨∂_θ_kψ|ψ_+ ⟩ - 2 ⟨∂_θ_kψ|ψ⟩ + ⟨∂_θ_kψ|ψ_-⟩} , with the positively (and negatively) shifted wavefunctions |ψ_±⟩ = ∑_j=0^N-1 ψ_j±1 |bin(j)⟩. The adder circuit A <cit.>, whose action on the j-th basis state is |bin(j)⟩ ↦|bin(j-1)⟩, in combination with the unitary F_k(θ) of Eq. (<ref>) with a different control state, allows one to evaluate the shifted overlaps in Eq. (<ref>). A scheme of the circuits needed to perform these operations is presented in Fig. <ref>. More details about their functioning and on the implementation of the adder A are given in the S.I. <ref>. § RESULTS AND DISCUSSION Before addressing the setup used in our simulations, some considerations about the characteristic scales appearing in the SP equation and the corresponding units are needed. 
Given the invariance of the SP Eqs. (<ref>), (<ref>) under the scaling transformation { x, t, ψ, λ}↦{α x, β t, β^-1ψ, α^-2βλ}, λ emerges as an intrinsic scale of the problem <cit.>, as its scaling law combines changes in both the spatial and time domains (i.e., systems with different box dimensions or evolution times will display different dynamics). Concerning the dimensions of the physical quantities appearing in the problem, we use arbitrary units. The choice is mainly dictated by the arbitrary values chosen for the density normalization ρ^* and the constant G in the transition from Eq. (<ref>) to Eqs. (<ref>), (<ref>) (S.I. <ref>). §.§ Numerical simulations As a test case, we consider a one-dimensional system of length L=8 with periodic boundary conditions. As anticipated above, we use arbitrary units for both the spatial coordinates and the time variable. The choice of L and of the total simulation time is made in such a way that, once we fix λ = 1, the self-interacting potential of Eq. (<ref>) exactly balances the diffusion associated with the Schrödinger time evolution. In order to compare our results with those from Ref. <cit.>, we used as initial condition a sinusoidal distribution of the form Ψ(x, 0) = √( 1 + 0.6 sin(π/4 x )) , evolved according to Eqs. (<ref>) and (<ref>). For this proof-of-principle numerical implementation, the parameters θ_0 reproducing the initial quantum state are obtained by optimizing the state fidelity ℱ(ψ(θ), ψ̃) between the variational trial state |ψ(θ)⟩ and a target state |ψ̃⟩. In this work we refer to ℱ as the state fidelity between two quantum states <cit.> (i.e., the state normalization is 1). In the situation where |ψ_1⟩ and |ψ_2⟩ are pure states, we have ℱ(ψ_1, ψ_2) = |⟨ψ_1 | ψ_2⟩|^2 . This value will also be used to measure the convergence of the states obtained with the variational method to the ones obtained classically. We point out that this has nothing to do with the convergence to the actual solution of the physical problem (i.e., it does not take into account the grid discretization error). The classical optimization of the potential (Pot. Opt. in Algorithm <ref>) is performed using a combination of (to start the optimization) and (to find the best solution) algorithms as implemented in v1.9.0. All simulations were performed in Qiskit <cit.> within the framework, i.e., using a matrix representation of the quantum circuit and a vector representation of the quantum state. The equations of motion in Eq. (<ref>) are integrated using an explicit Euler method with a fixed timestep for a total of N_t steps. Here, it is important to mention that, in general, the inversion of the matrix M in Eq. (<ref>) may become ill-conditioned. To reduce the resulting instabilities of the dynamics, we used the least-squares solver <cit.> with a suitable choice of the corresponding hyperparameters: the cutoff r_c, used to determine the effective rank of the matrix in Eq. (<ref>) such that the singular values smaller than r_c ·Λ_max are set to zero (here Λ_max is the singular value of largest magnitude), and the regularization factor ϵ, applied to the diagonal of the matrix M in Eq. (<ref>). In order to determine the quality of the results, we should also consider the level of expressivity of the variational ansatz, which is used to encode the system wavefunction and the potential. In order to achieve accurate results, one would need, in principle, a number of circuit parameters θ(t) for the wavefunction that approaches the size of the Hilbert space. 
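The integration scheme just described (explicit Euler on M θ̇ = B, least-squares solve with singular-value cutoff r_c and diagonal regularization ϵ, fidelity monitoring) can be sketched classically as follows. The ansatz is a toy statevector parameterization rather than the hardware-efficient circuit of Fig. <ref>, parameter derivatives are taken by finite differences, the ℜ/ℑ structure of M and B assumes the standard McLachlan form, and the potential is obtained here with a classical spectral solve in place of the variational Pot. Opt. step sketched earlier.

```python
import numpy as np

N_QUBITS, L, LAM = 5, 8.0, 1.0
N = 2 ** N_QUBITS
DX = L / N

def ansatz_state(theta):
    # Toy parameterization of a normalized statevector; expects 2*N parameters.
    amps = np.cos(theta[:N]) + 1j * np.sin(theta[N:])
    return amps / np.linalg.norm(amps)

def spectral_potential(psi_phys):
    # Classical stand-in for the Pot. Opt. step: solve Lap V = |Psi|^2 - 1 via FFT.
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=DX)
    rhs_hat = np.fft.fft(np.abs(psi_phys) ** 2 - 1.0)
    V_hat = np.zeros_like(rhs_hat)
    V_hat[1:] = -rhs_hat[1:] / k[1:] ** 2
    return np.real(np.fft.ifft(V_hat))

def hamiltonian(V):
    # Dense -lam/2 * Laplacian + diag(V)/lam with periodic boundaries.
    lap = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    lap[0, -1] = lap[-1, 0] = 1.0
    return -LAM / 2.0 * lap / DX ** 2 + np.diag(V) / LAM

def m_and_b(theta, H, eps_fd=1e-6):
    psi = ansatz_state(theta)
    dpsi = np.array([(ansatz_state(theta + eps_fd * e) - ansatz_state(theta - eps_fd * e))
                     / (2 * eps_fd) for e in np.eye(len(theta))])
    ov = dpsi.conj() @ psi                                  # <d_k psi|psi>, purely imaginary
    M = np.real(dpsi.conj() @ dpsi.T - np.outer(ov, ov.conj()))
    B = np.imag(dpsi.conj() @ (H @ psi) - ov * (psi.conj() @ (H @ psi)))
    return M, B

def fidelity(a, b):
    return np.abs(np.vdot(a, b)) ** 2

def vte(theta0, dt=1e-3, n_steps=1000, r_c=1e-6, eps_reg=1e-8):
    theta = theta0.copy()
    for _ in range(n_steps):
        psi_phys = np.sqrt(N) * ansatz_state(theta)          # physical normalization Psi = sqrt(N) psi
        H = hamiltonian(spectral_potential(psi_phys))
        M, B = m_and_b(theta, H)
        dtheta, *_ = np.linalg.lstsq(M + eps_reg * np.eye(len(theta)), B, rcond=r_c)
        theta = theta + dt * dtheta                          # explicit Euler update
    return theta
```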
On the other hand, the number of terms in the matrices and vectors used in the equations of motion, Eqs. (<ref>) and (<ref>), scales as M_p^2 and M_p, respectively, as shown in Tab. <ref>, where M_p is the number of parameters. Reducing the number of parameters significantly reduces the total number of circuit evaluations. This, however, translates into a lower accuracy of the dynamics, as the ansatz may not enable a thorough description of the sector of interest of the full Hilbert space. Similarly, a large number of parameters will enable a more accurate description of the self-consistent potential, at the price of a more cumbersome (classical) optimization process and an increased circuit depth. To assess the quality of our implementation (including the adjustment of the hyperparameters), we performed two series of simulations. The first one uses a classical spectral method based on the FFT, as in <cit.>. Results obtained from this approach are used as a reference. The actual implementation of our proposed quantum algorithm consists, instead, of repeated cycles of circuit optimization and VTE steps (Algorithm <ref>). When comparing its outcomes with the exact ones (Fig. <ref> and Tab. <ref>), we observe that the quantum approach correctly captures the qualitative behaviour of the wavefunction, although the probability distribution obtained from the VTE is not as smooth as the exact one. §.§ Interpretation of the SP results Fig. <ref> shows the time evolution of the initial sinusoidal distribution, as given in Eq. (<ref>), over a time span of approximately 6 time units for two different choices of the parameter λ (left: λ=1, right: λ=0.25). The lower panels depict the same dynamics as a two-dimensional surface plot of the time-dependent wavefunction. The larger the value of λ, the more pronounced the quantum nature of the dynamics; in fact, in the limit λ→0, the SP dynamics converges towards the classical VP dynamics <cit.>. Physically, the collapse and splitting of the probability density (left panels in Fig. <ref>) is an effect of the self-interacting potential. This is regulated by the scale of the problem λ. However, as stated in the preamble of Sect. <ref>, what really matters is not the absolute value of λ, but its value relative to the box size and time (e.g., if instead of L=8 we had L=1, we would need to change λ to λ/64 accordingly). In the classical limit ħ/m →0, quantum effects are suppressed, the potential can no longer counter the diffusion, and secondary peaks arise, as in the classical VP solution. §.§ Scaling of required resources The largest cosmological simulations nowadays describe the evolution of boxes having a size of several Gigaparsecs, using of the order of a trillion resolution elements (particles) <cit.>. While simulations of this size are beyond the reach of what can be achieved on current quantum computers, the possibility of efficiently running large suites of simulations with ∼10^10 particles each is still highly valuable for carrying out a number of useful calibrations of observational quantities and for exploring the parameter space of cosmological models <cit.>. We thus consider a situation of possible cosmological interest to be a 3D simulation with a resolution of 2048 = 2^11 grid points per dimension. Thanks to the logarithmic encoding, a total of 2^33 grid points can be obtained with n_tot = 33 qubits. In Tab. <ref> we report the number of qubits needed for every term of Eq. (<ref>) and the corresponding number of different circuits used. 
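The bookkeeping behind these numbers is elementary and can be reproduced in a few lines; the parameter count M_p below is purely illustrative.

```python
import math

# Quick arithmetic for the 3D target resolution discussed above: 2048 grid points
# per dimension under logarithmic encoding, and the O(M_p^2) / O(M_p) counts of
# matrix and vector terms per timestep for M_p variational parameters.
points_per_dim, dims = 2048, 3
n_per_dim = int(math.log2(points_per_dim))           # 11 qubits per dimension
n_tot = dims * n_per_dim                             # 33 qubits for 2**33 grid points
M_p = 100                                            # illustrative parameter count
matrix_terms, vector_terms = M_p ** 2, M_p           # per-timestep term counts
print(n_per_dim, n_tot, matrix_terms, vector_terms)
```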
In this exploratory work we used a heuristic number of parameters M_p and timesteps N_t for our simulations. Thus, we are not in a position to provide an accurate estimate of the number of parameters, or timesteps, required for a relevant cosmological simulation. What we can say is that such a simulation would require a maximum of 2n+1 qubits, used in the evaluation of the potential term. As for the number of timesteps needed, we can gather some insights from Tab. <ref>. First of all, we recall that, to describe the whole Hilbert space exactly, the number of parameters M_p should increase by a factor of two when adding one qubit. In addition, we note that an increased spatial resolution (number of qubits) requires a larger number of time-steps to preserve the accuracy level at which the dynamics is described. This also happens in classical numerical integration problems. On the other hand, keeping the fidelity ℱ fixed, the required number of timesteps N_t is expected to decrease as the number of variational parameters M_p is increased. This could be motivated by the fact that the equation we are integrating (Eq. (<ref>)) is defined on the parameter space, while the original dynamics (i.e., the Hamiltonian in Eq. (<ref>)) appears only in the vector term (see Eq. (<ref>)). Moreover, the variational approach allows us to use a number of parameters smaller than the Hilbert space dimension. Hence, we want to capture the same dynamics on a sub-manifold that offers less versatility in terms of parameter evolution. This requires a finer timestep. Specifically, we would need M_p=2N variational parameters to span the whole Hilbert space. As M_p drifts away from this number, our ability to capture dynamical fluctuations is reduced, and more timesteps are required to track the correct evolution of the wavefunction. All this explains why the fidelity values reported in Table <ref> are smaller for the simulations with a larger number of qubits: in fact, for the 5-qubit cases, neither M_p nor the number of time-steps is increased according to the scaling required to keep the fidelity at a stable value. We point out that this is valid in the regime M_p > M_min, where M_min is the minimum number of variational parameters required to reproduce the target function within a given accuracy. This sets a lower bound on the number of parameters, which can change during the evolution according to the complexity of the wavefunction. §.§.§ Space resolution and classical limit As we approach the classical limit, however, the space resolution needed to capture the right dynamical behaviour increases. This is clear in the left panel of Fig. <ref>, where the convergence of the probability distribution is shown as a function of the spatial resolution for simulations with different scales λ. We observed that, with decreasing λ, accurate results require a finer representation of the space coordinate. This is mainly due to the appearance of peaked structures in the dynamics (see Fig. <ref>), which are harder to resolve than in the case of larger λ values. It is worth mentioning that the increase in space resolution also requires a corresponding decrease of the simulation time step (Table <ref>). In the right panel of Fig. <ref> the resolution is shown as a function of the scale λ for different convergence values. From an empirical fit, we find that the number of qubits necessary to resolve the dynamics of a system scales as 𝒪(log(λ)). 
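The empirical fit behind this scaling can be reproduced as sketched below, once the convergence metric 𝒞^(13)_n defined next has been evaluated for a set of (λ, n) pairs. The piecewise-constant upsampling onto the reference grid and the use of a natural logarithm in the fit are assumptions of this sketch.

```python
import numpy as np

def convergence_metric(f_n, f_13):
    # Map an n-qubit probability distribution onto the 13-qubit reference grid by
    # piecewise-constant repetition (an assumption), then take the L2 distance.
    factor = len(f_13) // len(f_n)
    f_up = np.repeat(f_n, factor) / factor       # keep the distribution normalized
    return np.linalg.norm(f_13 - f_up)

def fit_scaling_law(lambdas, n_required):
    # Least-squares fit of n = K*log(lambda) + q for the qubit counts that reach
    # a chosen reference convergence value.
    K, q = np.polyfit(np.log(lambdas), n_required, deg=1)
    return K, q
```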
To quantify convergence, we used the L_2 norm between the n-qubit probability distribution f_n, at a fixed time frame, mapped onto the 13-qubit grid, and the 13-qubit probability distribution f_13: 𝒞^(13)_n = || f_13 - f_n ||_L_2 . In detail, the scaling law is fitted with a logarithmic function n(λ, 𝒞̃^(13)) = K log(λ) + q(𝒞̃^(13)) , where K = -1.44 and q(𝒞̃^(13)) is the resolution needed to obtain the desired convergence factor 𝒞̃^(13) when λ = 1. Here 𝒞̃^(13) indicates a reference value of 𝒞^(13)_n, chosen a priori, and thus does not depend on n. To determine, from a qualitative standpoint, what value of 𝒞_13 is needed to obtain convergence in resolution, we plot in Fig. <ref> the probability distribution at a fixed timestep, for different resolutions and different λ. Comparing the images of this plot with the graphics in Fig. <ref> tells us what convergence level is associated with a numerical value of 𝒞_13. We observed that the right behaviour can be captured as soon as the various density distributions start overlapping. More precisely, this happens at 6 qubits when λ=0.5 and at 8 to 9 qubits when λ=0.0625. It is fair to assume that an L_2 distance of 𝒪(10^-1) is enough to resolve the dynamics. We hence gather from both the fit and the previous remarks that a one-dimensional resolution of 11 qubits can be enough to resolve a simulation approaching the classical limit, with λ down to 𝒪(10^-3). §.§.§ Sampling and system size Also important is the study of the convergence of the results as a function of the number of measurements (N_s) needed to accurately evaluate the elements in Eqs. (<ref>) and (<ref>). Measurements introduce statistical noise into the solution of the equation of motion for the propagation of the wavefunction parameters, which has an impact on the overall dynamics. Building on <cit.>, we investigate the aforementioned behaviour in the case of the newly introduced term ⟨∂_θ_kψ|ℋ|ψ⟩. The potential part is directly proportional to the measurement of the ancilla qubit ⟨σ_V^z ⟩; thus, the variance of the measurements can be estimated as ℰ_V = ϕ_V(n) L √((1 - ⟨σ_V^z ⟩^2)/N_s) , where the value of ⟨σ^z_V⟩ is intended in the limit N_s →∞ and the norm of the potential ϕ_V(n) scales with the number of qubits as 2^n/2 (this can easily be seen by applying the spectral method proposed in Ref. <cit.> to obtain the potential, where the wavefunction is normalized as in Eq. (<ref>)). The fact that the number of shots scales exponentially with the number of qubits is related to the nonlinear nature of the problem. Precisely, it is a consequence of the factorization of the physical wavefunction and the potential (recall Eqs. (<ref>), (<ref>)). The kinetic part is given by a linear combination of three different sets of measurements (see Eq. (<ref>)). The variance is estimated with a quadrature sum as ℰ_K = 2^2n/L √((4 - ⟨σ_k+^z ⟩^2 -⟨σ_k-^z⟩^2 + 2⟨σ_k^z⟩^2)/N_s) . Here the factor 2^2n emerges from the term 1/Δ x^2 required by the finite-differences method. We observe that in both situations the number of measurements required for a given accuracy increases with the number of qubits. § CONCLUSIONS In this paper, we tackled the problem of simulating a many-body system of collisionless self-gravitating particles interacting only through a potential. In a cosmological context, this describes, e.g., the case of gravitational instability of a cold dark matter fluid in an expanding background. 
Our analysis builds on the possibility of recovering the dynamics of the Vlasov-Poisson (VP) equations by mapping them to a framework more suited to quantum computing (QC), namely the Schrödinger-Poisson (SP) equations. We proposed a variational time-evolution (VTE) algorithm for the solution of the corresponding nonlinear time-dependent Schrödinger-like equation (TDSE) in which, at each time-step, the potential, which is a functional of the time-evolved system wavefunction, is obtained upon minimization of a suitable parameterized unitary in the quantum register. The proposed quantum algorithm was developed with the aim of scaling up to system sizes that are, in principle, much less favourable for classical computers than for quantum computers. To this end, we used a compact (i.e., logarithmic) encoding of the spatial grid (i.e., n qubits describing 2^n grid points), while enabling the representation of any self-consistent potential, which can be described by combining a parameterized unitary circuit and classical normalization factors. In particular, working with a circuit depth that scales polynomially with the number of qubits, we were able to reach a final state fidelity of approximately 0.96 in a 5-qubit simulation. Concerning the scaling of the VTE circuit, the number of terms required to evolve the wavefunction in a single timestep scales quadratically with the number of variational parameters. However, the number of timesteps required to achieve a given fidelity increases as the ratio between the number of variational parameters and the Hilbert space dimension decreases, as shown in Tab. <ref>. This behaviour might be related to the heuristic ansatz used in our implementation (e.g., Figs. <ref>, <ref>). We leave to future investigations the question of whether ansätze based on tensor networks (e.g., Matrix Product States), as proposed in <cit.>, can bring improvements. In addition, the number of measurements required to reach a desired accuracy shows a polynomial scaling with the number of grid points. We point out that this behaviour is not specifically related to our proposed VTE algorithm, but to the approach chosen to tackle the nonlinear nature of the problem, namely factorizing the potential and the wavefunction into unitary circuits followed by classical normalization. Moreover, using classical simulations we investigated how the required resolution changes as we approach the classical limit ħ/m →0 in a 1D scenario. The proposed empirical log-scaling law opens up new interesting perspectives for the use of QC in the propagation of the SP equation in more general settings, including the 3D case. In conclusion, we consider this work a first step towards the use of QC in the solution of the dynamics of a self-gravitating collisionless fluid. While scaling up the quantum approach to system sizes that may be relevant for cosmological predictions in 3D seems unlikely before the advent of fault-tolerant quantum computing, there may be interesting studies (e.g., of static and dynamic phase transitions) which already arise in low dimensions (1D) and can become classically hard because of the complexity of the quantum SP formulation (e.g., because of the growing entanglement). A similar strategy was recently implemented in the domain of lattice gauge theory (see <cit.>). 
It is also worth pointing out that, while this study was inspired by the cosmological problem of gravitational instability of a collisionless fluid, our results are general and can be applied to other domains, including the study of plasma dynamics in a Tokamak fusion reactor. At the current state of development, our QC algorithm is clearly not competitive, in terms of accessible dynamic range, with respect to classical methods, both in cosmology and in plasma physics, when using near-term, noisy QC with a number of qubits ∼100 <cit.>. On the other hand, developments that can make our approach more noise-resilient can still be foreseen, including more efficient integration methods and physically motivated variational ansätze. Of particular interest is also the possibility of designing hybrid quantum-classical algorithms, which can, for instance, combine the benefits of tensor-network expansion with the potential of variational quantum circuits to describe states with large entanglement. We therefore look with a good deal of optimism into the future developments of this very promising application domain for QC. We thank Guglielmo Mazzola for insightful discussions and feedback. This paper is supported by the Fondazione ICSC National Recovery and Resilience Plan (PNRR) Project ID CN-00000013 "Italian Research Center on High-Performance Computing, Big Data and Quantum Computing" funded by MUR Missione 4 Componente 2 Investimento 1.4: "Potenziamento strutture di ricerca e creazione di "campioni nazionali di R&S (M4C2-19 )" - Next Generation EU (NGEU). We acknowledge the use of IBM Quantum services for this work. IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. The current list of IBM trademarks is available at <https://www.ibm.com/legal/copytrade.>
http://arxiv.org/abs/2307.05330v1
20230708201724
The Value of Chess Squares
[ "Aditya Gupta", "Shiva Maharaj", "Nicholas Polson", "Vadim Sokolov" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Valuing chess squares and determining the placement of pieces on the board are the main objectives of our study. With the emergence of chess AI, it has become possible to accurately assess the worth of positions in a game of chess. The conventional approach assigns fixed values to pieces (K=∞, Q=9, R=5, B=3, N=3, P=1). We enhance this analysis by introducing marginal valuations for both pieces and squares. We demonstrate our method by examining the positioning of Knights and Bishops, and also provide valuable insights into the valuation of pawns. Notably, Nimzowitsch was among the pioneers in advocating for the significance of Pawn structure and valuation. Finally, we conclude by suggesting potential avenues for future research. Key Words: AI, AlphaZero, Bayes, Chess, Deep Learning, Neural Network, Chess Piece Values, Knights, Bishops, Pawns. Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory, there must be a solution, a right procedure in any position. —John von Neumann § INTRODUCTION Chess AI was pioneered by <cit.>, <cit.>, and <cit.>, who developed algorithms for solving chess. Shannon's approach was one of trial and error and “learning” the optimal policy. Turing (and Champernowne) valued the pieces marginally. They had the following positional evaluation functions: piece mobility, piece safety, king mobility, king safety, and castling. Modern-day methods are based on state-dependent objective function evaluation via learning (a.k.a. reinforcement learning) <cit.>. Solving chess is a daunting NP-hard computational problem, with the Shannon number, which measures the number of possible board states, being on the order of 10^120 (with roughly 10^40 legal positions). A major advance over pure look-ahead calculation engines is the use of deep neural networks, which interpolate the value and policy functions from empirical game playing. For example, AlphaZero uses self-play to allow quick solution paths to be calculated and “learns” chess in less than four hours without any prior knowledge; see <cit.> and <cit.> for further discussion. While much recent work has been done in Chess AI, the question of the value of a chess square has not yet been explored. In this work, we propose a system to measure the advantage/disadvantage offered by control of particular chess squares with different pieces. In particular, we propose a method for measuring the advantage/disadvantage of states of the form s ∈Color×Piece×Square. For example, the notion that certain state combinations, such as having a White Knight on f5, provide an advantage to White players is a widely held belief in the world of chess. We analyze these key combinations to see whether the games of high-level chess grandmasters lend support to this belief. Our investigation will shed light on the strategic nuances and patterns that emerge from such positions and contribute to the understanding of chess at the highest level of play. To value pieces on squares, we create a Neural Network to analyze a dataset of Grandmaster games and make predictions regarding winning probabilities. This uses Centipawn evaluations for specific subsets of chess states involving Knight and Bishop pieces. The results show that our model successfully generated predictions for White Knights and Bishops, as well as Black Knights and Bishops. 
The predictions provided valuable insights into the advantages and disadvantages associated with different states and positions on the chessboard. For example, the analysis revealed that Knights placed in the corners of the board had lower winning probabilities, likely due to their limited mobility and restricted influence. On the other hand, as Knights moved closer to the opponent's side, their positional value tended to increase, potentially allowing them to infiltrate enemy territory and exert greater control over the game. The study's results enhance the understanding of chess strategies and gameplay dynamics, aiding in strategic decision-making and the evaluation of different gameplay approaches. Several chess maxims are reflected in our neural network predictions. For example, Pawns are observed to gain in value as they cross the 4th rank, highlighting the significance of advancing pawns beyond this milestone. Pawns positioned on the h and a files on the 5th rank are particularly powerful, contributing to central control and potential attacking opportunities. Pawns on the 6th rank, especially when supported by a pawn on the 5th rank, become highly threatening. Edge pawns tend to be weaker compared to central pawns, emphasizing the importance of controlling central squares. Additionally, kingside pawns are often more dangerous when advanced than queenside pawns, influencing the dynamics of the game. Important squares for the white pawn are identified by examining the highest Centipawn evaluation c(s) values in each column. The squares e4, h4, c5, and h6 are highlighted as critical positions for white pawns. Occupying these squares provides advantages, such as central control, support for piece development, and potential attacking opportunities. Similarly, for black pawns, the squares f5, d5, c4, d3, and f3 emerge as key positions. Placing pawns on these squares enhances black's control of central areas, supports piece coordination, and enables counter-play against white's position. Understanding the significance of these key squares and applying the derived insights allows players to make informed decisions regarding pawn placement, pawn breaks, and strategic plans. This knowledge empowers players to optimize their pawn structures, control critical areas of the board, and leverage their pawns to gain a competitive advantage in the game. The rest of the paper is outlined as follows. Section <ref> provides connections with previous literature. Section <ref> goes over the methods we used. Section <ref> provides an application of the proposed methods to Grandmasters and Magnus Carlsen, the World Chess Champion. Section <ref> provides an application to Pawns. Finally, Section <ref> concludes. §.§ Connections with Previous Work In the field of Chess AI, previous research has primarily focused on predicting the probabilities of winning w(s) and Centipawn evaluations c(s) for more simplified states. <cit.> explored simpler states where s belongs to the set of Piece. In their work, they utilized Logistic Regression methods to determine the value of a chess piece by creating a model that predicts the outcome of a game based on existing piece imbalances in a given position. A recent lichess study also tried similar approaches <cit.> <cit.>. Building upon this previous work, our research extends the scope by proposing an augmented state representation s that encompasses Color×Piece×Square, thereby incorporating the square (location) information as an additional component of the state. 
This augmentation enables a more comprehensive understanding of the game dynamics by considering both the piece and its position on the board. Furthermore, we employ Neural Networks as our chosen methodology, allowing us to capture and model the intricate relationships between the state s and its corresponding Centipawn evaluation c(s). One crucial distinction between our proposed approach and previous methodologies lies in the predictive target. While prior research focused on predicting the binary outcome of the game (win or loss), our proposed model aims to predict the Centipawn evaluation c(s) instead. By doing so, we shift the focus towards assessing the advantage or disadvantage of a particular chess position, providing more granular information beyond a simple win/loss prediction. By using the augmented state representation and employing Neural Networks, our proposed model offers a more comprehensive and nuanced analysis of the chess game. This allows us to capture the intricate interplay between the color, piece type, square, and Centipawn evaluation, providing a deeper understanding of the factors influencing the game's outcome. In the realm of Chess AI research, <cit.> made significant strides by employing Q-learning methods, as discussed in Section <ref>, with a specific focus on chess gambits. Their work aimed to uncover key characteristics and insights associated with these strategic opening moves by calculating Q-values for various chess gambits. This initial exploration into the application of Q-learning in analyzing and understanding chess gambits laid a solid foundation for further research in this field. This paper extends the work of <cit.> and proposes novel architectures that can predict the probabilities of winning w(s) and Centipawn evaluations c(s) for all possible states s ∈Color×Piece×Square. While previous work focused on specific subsets of states, particularly those related to gambits, our approach seeks to encompass the entire chessboard by incorporating the color, piece type, and square information into a comprehensive state representation. By embracing a wider scope of analysis that covers all possible states, our research aims to provide a more comprehensive understanding of the game, surpassing the limitations imposed by narrow subsets. To achieve this, we employ advanced techniques, such as Neural Networks, to capture the intricate relationships between the components of a state and the corresponding probabilities of winning w(s) and Centipawn evaluations c(s). This allows us to offer valuable insights into the dynamics of chess gameplay across a vast array of states, thereby providing a more holistic and comprehensive analysis. Through our research, we strive to advance the field by developing robust and effective models capable of accurately predicting the probabilities of winning and assessing the Centipawn evaluations for any given state. By considering the full spectrum of states represented by Color×Piece×Square, our proposed architectures pave the way for a deeper understanding of chess strategies. They enable us to evaluate the efficacy of these strategies and unravel the intricacies of the game, ultimately contributing to the development of more sophisticated and intelligent Chess AI systems. § CHESS PIECE AND SQUARE VALUATION Our work will provide values for states consisting of a combination of pieces and squares For example, we make wish to assess the value of a fianchetto bishop of the queen's side ad that bishop controls a key diagonal. 
We denote this value by V ( , b2 ) or a white knight on a good outpost such as f5, wish is denoted V ( , f5). As valuation will be based on the probability of winning, as calculated by a chess engine, the law of probability gives us a key identity V ( ) = ∑_position V ( , position ), where the sum is taken over all future positions. Hence, we can see that the initial value of the knight (a.k.a V ( )=3 comes from its total use throughout the game. Once the pieces have moved, there's a different marginal values. Our goal is to be able to assess values such as V ( , f5). The commonly used chess piece valuations are given by ( , , , , , ) = ( ∞ , 9 , 5 , 3, 3 ,1 ) These were modified in <cit.> through the use of Machine Learning techniques to be ( , , , , , ) = ( ∞ , 8.9 , 4.6 , 3.3, 3 ,1 ) and in a recent lichess study on finding the value for pieces finds ( , , , , , ) = ( ∞ , 9.82 , 4.93 , 3.28, 3.16 , 1 ). We build on this line of research by adding square position to the state vector. §.§ Centipawn Evaluation and Optimal Play In our approach, we begin by formalizing the theoretical functions used in Q-learning. The value function, denoted as V(s), represents the probability of winning the game given a specific state s. This state s belongs to the set Color×Piece×Square, and it is worth emphasizing that V(s) is calculated with respect to the color parameter in any given state. To assess any legal chess position, we derive a Centipawn evaluation denoted as c(s). The Centipawn serves as a measurement unit for evaluating the advantage in chess, where one Centipawn is equal to 1/100 of a pawn. The win probability w(s) can be directly obtained from c(s) using the following equation: w(s) = ℙ(winning|s) = 1/1+10^-c(s)/4, and c(s) = 4log_10(w(s)/1-w(s)). For example, if White has a c(s) =0.2 advantage, then the win probability is w(s) = 0.526. To address the sequential decision problem, we employ the dynamic programming technique known as Q-learning. This methodology involves breaking down the decision problem into smaller sub-problems. A key principle utilized in Q-learning is Bellman's principle of optimality, which states: Bellman Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (Bellman, 1957) To solve this sequential decision problem, we employ Backwards Induction, which determines the most optimal action at the last node in the decision tree (i.e., the checkmate position). Utilizing this information, we can then determine the best action for the second-to-last decision point, and this process continues backward until we identify the optimal action for every possible situation, effectively solving the Bellman equation. In recent years, the field of artificial intelligence has witnessed significant advancements, particularly in the realm of AI algorithms like deep learning, alongside the development of remarkably powerful computer chess engines. These technological breakthroughs have revolutionized the way we evaluate and understand chess positions, enabling us to delve into the intricacies of the game with unparalleled precision. One notable achievement stemming from these advancements is the ability to accurately assess chess positions. 
By leveraging AI algorithms, particularly deep learning techniques, we can now analyze and comprehend chess moves and strategies in a manner that was previously unimaginable. These algorithms have been specifically designed to process vast amounts of data, learn from patterns, and make informed decisions, ultimately resulting in highly accurate evaluations of chess positions. Moreover, the advent of advanced computer chess engines, exemplified by the likes of Stockfish 15 <cit.>, has played a pivotal role in shaping the landscape of chess analysis and study. These engines, meticulously crafted through a combination of cutting-edge algorithms and extensive programming, have transformed the way chess is played and understood. Gone are the days when determining the optimality of specific chess lines of play relied solely on human intuition and analysis. The emergence of chess engines has effectively shifted the burden from human players and theorists to these intelligent systems. By leveraging their computational power and algorithmic prowess, chess engines have assumed the responsibility of assessing various lines of play, thus solving the Bellman equation. By adhering to Bellman's optimality condition, computer chess engines fulfill the requirements of possessing complete knowledge about the chess environment and evaluating all possible actions and their consequences. Through this rigorous analysis, they provide insights into the optimal move in a given position. §.§ Q-Values The corresponding Q-value represents the probability of winning, given a policy/move a in a given state s, by following the optimal Bellman path thereafter: Q(s, a) = ℙ(winning|s, a). To address the optimal sequential decision problem, we employ Q-learning, which calculates the Q-matrix (<cit.>, <cit.>), denoted as Q(s, a) for a given state s and action a. The Q-value matrix describes the value of performing action a and then acting optimally thereafter. The current optimal policy and value function can be expressed as follows: V(s) = max_a Q(s, a) = Q(s, a^*(s)), where a^*(s) = argmax_a Q(s, a). The policy function establishes the optimal mapping from states to actions, and by substituting the Q-values, we obtain the value function for a given state. In Section <ref>, we introduce a Neural Network architecture designed specifically for predicting the value of c(s) given the state s. By harnessing the predictive capability of this Neural Network, we can subsequently determine the probability of a player winning, denoted as w(s), based on their corresponding state s. The Neural Network model comprises interconnected layers, including an input layer that accepts the state s as input. Through a series of computations within the hidden layers, the model captures complex relationships and patterns inherent in the input data. Ultimately, the output layer produces the predicted value of c(s). By employing this trained Neural Network model, we can make predictions of c(s) for unseen states s. These predicted values can then be utilized to compute the probability of a player winning, denoted as w(s). The specific relationship between c(s) and w(s) is contingent upon the characteristics and dynamics of the chess game under analysis. With the ability to predict w(s), we gain valuable insights into the probability of a player winning based on their current state s. 
This information can be harnessed in various ways, including evaluating strategic moves, assessing the overall advantage or disadvantage of specific board configurations, and guiding decision-making during gameplay. The Neural Network's capacity to capture intricate patterns and relationships within the input data significantly contributes to more accurate predictions and a deeper understanding of the dynamics of the chess game. By incorporating the predicted values of c(s) and computing the corresponding probabilities of winning, we enhance our analytical capabilities and facilitate informed decision-making in the context of chess gameplay. §.§ Neural Network Architecture We design a specific 3-layer Neural Network aimed at predicting the value of a chess square and piece combination, denoted as c(s) for s ∈Color×Piece×Square, as shown in Figure <ref>. This model incorporates a hyperbolic tangent (tanh) activation function as a key component of its architecture. By applying the tanh activation function to the network layers, the model becomes capable of capturing and processing intricate patterns and relationships within the input data. To ensure effective training of the model, we curate a meticulously crafted dataset. This dataset consists of two essential elements: the state information, represented by s, and the corresponding critical power level (CPL) recorded for each state. The state information encompasses relevant factors, variables, or parameters that define the chessboard system or environment. Through supervised learning using this dataset, the model learns to associate the given state information with the corresponding CPL. Consequently, it acquires the ability to predict the CPL based on the provided state information as input. This training process involves iteratively adjusting the model's parameters to minimize the disparity between its predictions and the actual CPL values present in the training dataset. The selection of the tanh activation function holds particular significance for our chess square and piece prediction model. The tanh function introduces non-linearity into the model, enabling it to capture complex relationships specific to chessboard configurations. This non-linearity allows the model to interpret intricate patterns and dependencies between the input variables and the output, facilitating more accurate predictions. Furthermore, the tanh activation function maps the input values into the range [-1, 1], which is well-suited for our chess-related application. This bounded output range ensures that the model's predictions for critical power levels remain within a specific value range, aligning with the constraints and limitations inherent to chess strategies. By incorporating the tanh activation function and training the model on the state information and corresponding CPL data, our proposed model strives to provide a robust and dependable framework for predicting critical power levels in various chess scenarios. Its ability to capture the intricate relationships specific to chess squares and pieces makes it particularly valuable for tasks such as evaluating the relative strength of different board configurations, predicting advantageous moves, and assisting in strategic decision-making during chess gameplay. §.§ Data In order to train the Neural Network effectively, a training dataset is constructed, comprising two essential components. 
This dataset consists of elements that contain both the state information denoted by s, as well as the corresponding evaluation associated with that particular state. To gather the necessary chess game data for analysis, a vast mega database containing millions of previously played chess games is utilized. Within this database, each game is represented using the Portable Game Notation (PGN) notation, which allows for standardized representation and compatibility with various chess software and applications. The process of constructing the training dataset involves parsing and evaluating all positions p within each game. The Forsyth-Edwards Notation (FEN) is employed to determine the location of relevant chess pieces within each position p. As a result, all states s ∈ p are extracted and added to the training dataset. To navigate through the moves of each chess game systematically, the Python Chess library is utilized. This library provides a comprehensive set of functions and classes specifically designed for working with chess games and positions, enabling efficient traversal of the stored games in the database. For every position p within the dataset, an evaluation is obtained. To accomplish this, the research incorporates the Stockfish engine, a widely recognized and powerful chess engine. Stockfish employs advanced algorithms and evaluation functions to assess the strength of positions. By leveraging the capabilities of Stockfish, the training dataset can determine the evaluation of each position p on the chessboard accurately. Finally, this evaluation is associated with all states s ∈ p, resulting in a comprehensive dataset that encompasses both the state s and the evaluation associated with the position p from which s was derived. This dataset serves as the foundation for training the Neural Network, enabling it to learn and make informed decisions based on the provided state information. § KNIGHT AND BISHOP VALUATION In this study, our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Knight, Bishop}}. Although our focus is initially on the Knight and Bishop pieces, it is important to note that the model can be expanded to encompass all pieces, offering a broader analysis of the game. To provide a visual representation of the predicted values, heat maps are generated for both w(s) and c(s) corresponding to each valid combination within the specified subset. These heat maps offer a comprehensive overview of the probabilities of winning and Centipawn evaluations associated with the Knight and Bishop pieces in different states. To illustrate the efficacy of our model, we first employ it to predict the Centipawn evaluations c(s) specifically for states where the color c is White and the piece p is Knight or Bishop. The resulting predictions are showcased in Figure <ref> and Figure <ref>, providing valuable insights into the relative advantages or disadvantages of such states. Building upon this, we further use c(s) to derive the corresponding probabilities of winning w(s) for these specific states. The model-generated probabilities are visualized in Figure <ref> and Figure <ref>, offering a clear representation of the likelihood of White winning the game given the occurrence of the specified state s. 
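The dataset construction described above (parsing each game, evaluating every position with Stockfish, and attaching the evaluation to all states s in that position) can be sketched with the python-chess library as follows. The engine path, search depth, and mate-score cap are illustrative assumptions rather than the paper's exact settings.

```python
import chess
import chess.pgn
import chess.engine

# Walk each PGN game move by move, evaluate every position with Stockfish, and
# record one (color, piece, square, eval) row per piece on the board.
def extract_states(pgn_path, engine_path="stockfish", depth=12):
    rows = []
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    with open(pgn_path) as pgn_file:
        while (game := chess.pgn.read_game(pgn_file)) is not None:
            board = game.board()
            for move in game.mainline_moves():
                board.push(move)
                info = engine.analyse(board, chess.engine.Limit(depth=depth))
                cp = info["score"].white().score(mate_score=10000)  # centipawns, White POV
                if cp is None:
                    continue
                for square, piece in board.piece_map().items():
                    rows.append((piece.color, piece.piece_type,
                                 chess.square_name(square), cp / 100.0))
    engine.quit()
    return rows
```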
By leveraging our proposed model, we gain a deeper understanding of the dynamics of the game, specifically in relation to the Knight and Bishop pieces within the context of the White color. This analysis not only facilitates strategic decision-making but also provides a basis for evaluating the effectiveness of various gameplay approaches. Moreover, the model's expandability to encompass all pieces allows for a comprehensive examination of the game across different states, enabling us to uncover additional insights and enhance the overall understanding of chess strategies and gameplay dynamics. The model is then used to determine c(s) and w(s) for states { (c, p, sq) ∈ s : c = "Black", p = "Knight", "Bishop"}, as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref> respectively. Key squares for the Bishops can be seen in <ref>: The applications of the model on Grandmaster games provide valuable insights into the dynamics and strategies employed by top-level chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we gain a deeper understanding of the advantages and disadvantages associated with different chess positions. These insights have several practical applications in chess analysis and gameplay evaluation. The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions. By focusing on specific subsets of states, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to the overall gameplay strategies employed by Grandmasters. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations. Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game and evaluate the effectiveness of various gameplay approaches. This broader perspective enhances our overall understanding of chess strategies and gameplay dynamics. The predictions generated by the model can also be utilized for comparative analysis between different players or groups of players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states, we can identify patterns and trends in the strategies employed by Grandmasters. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities. For example, in Figure <ref>, where w(s) represents the evaluation of the knight-square state, we can observe that the lowest values of w(s) are found in the white corners of the chessboard, specifically squares a1 and h1. This observation aligns with the widely held belief that knights are generally considered being in their worst positions when confined to the corners of the board. The disadvantage of having a knight in the corner may stem from its limited mobility and restricted scope of influence. 
When placed in the corners, knights have fewer potential squares to reach and can easily become isolated from the central and more strategically significant areas of the board. On the other hand, as the knights move closer to the opponent's side of the board, their positional value tends to increase. This is most likely due to the knights' ability to infiltrate enemy territory, potentially attacking key squares, pieces, or pawns. The increasing value of knight-square states as the knights advance can be attributed to several factors. Firstly, the proximity to the opponent's pieces and pawns provides more targets for the knight's maneuvers and attacks. Secondly, knights positioned closer to the enemy's side can exert greater control over central squares and influence the dynamics of the game. This control can restrict the opponent's options and potentially create weaknesses in their position. Analyzing the values of knight-square states in different positions on the board, such as the corners and closer to the opponent's side, supports the claim that the placement of knights significantly affects their effectiveness. Understanding the strengths and weaknesses associated with different knight positions helps players make informed decisions about piece placement, strategic plans, and tactical considerations. Key squares for the knight to occupy are marked in Figure <ref>. The applications of our model on Grandmaster games provide valuable insights into the dynamics and strategies employed in high-level chess. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of the game across different states, facilitating a deeper understanding of chess strategies and enhancing the overall gameplay experience. §.§ Magnus Carlsen Our proposed model can be further applied to gain insights into the playing style and performance of specific players. In this section, we focus on the world-renowned chess player Magnus Carlsen, the reigning World Chess Champion. By applying our model to the games played by Carlsen, we aim to uncover unique patterns and characteristics that contribute to his success and distinguish his gameplay from other Grandmasters. Our proposed model is applied to a dataset consisting of 2000+ Carlsen games played in the last 5 years. Similar to the previous section, we begin by predicting the Centipawn evaluations c(s) for states where Carlsen plays as the “White" color and utilizes the “Knight" or “Bishop" piece. These predictions provide valuable insights into the relative advantages or disadvantages of Carlsen's chosen states, shedding light on his strategic decision-making process. The resulting heat maps, showcased in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, offer a visual representation of the predicted Centipawn evaluations for Carlsen's specific subset of states. Building upon this analysis, we further utilize the Centipawn evaluations c(s) to derive the corresponding probabilities of winning w(s) for Carlsen's selected states. The model-generated winning probabilities provide a clear representation of Carlsen's likelihood of winning the game given the occurrence of the specified state s. 
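As an illustration of how such per-square summaries can be produced from model outputs, the sketch below converts predicted Centipawn values c(s) into win probabilities w(s) and aggregates them into an 8x8 heat map. The logistic centipawn-to-probability mapping is a commonly used placeholder rather than the exact equation defined earlier in the paper, and the simple per-square averaging is likewise an assumption.

```python
# Sketch of turning per-state Centipawn predictions c(s) into win
# probabilities w(s) and an 8x8 per-square heat map. The logistic mapping is
# a commonly used placeholder, not necessarily the paper's exact equation,
# and the simple per-square averaging is likewise an assumption.
import numpy as np

def win_probability(cp):
    """Map a centipawn evaluation (White's point of view) to a win probability."""
    return 1.0 / (1.0 + 10.0 ** (-cp / 400.0))

def square_heatmap(states, values):
    """Average the value attached to each (color, piece, square) state per
    square and arrange the result as an 8x8 board (ranks x files)."""
    sums, counts = np.zeros(64), np.zeros(64)
    for (_, _, square), v in zip(states, values):
        sums[square] += v
        counts[square] += 1
    return np.divide(sums, np.maximum(counts, 1)).reshape(8, 8)

# e.g. w_map = square_heatmap(knight_states,
#                             [win_probability(c_hat[s]) for s in knight_states])
```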
By focusing on Carlsen's gameplay, we gain a deeper understanding of his preferred strategies and tendencies when employing the Knight piece as the “White" color. This analysis allows us to assess the effectiveness of Carlsen's gameplay choices, providing insights into his decision-making process and potential areas of strength or improvement. Additionally, comparing Carlsen's results to the general dataset of Grandmaster games helps us evaluate his performance against the broader chess community. The model is then used to determine c(s) and w(s) for states (c, p, sq) ∈ s : c = "Black", p = "Knight", "Bishop", as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively. The applications of the model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by one of the world's top chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we can gain a deeper understanding of the advantages and disadvantages associated with different chess positions in Carlsen's games. These insights have numerous practical applications in chess analysis and gameplay evaluation. The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states encountered by Magnus Carlsen. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops in Carlsen's games. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions as encountered by Carlsen. By focusing on specific subsets of states in Carlsen's games, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to Carlsen's overall gameplay strategies. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations based on Carlsen's approach. Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states in Carlsen's games. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game as played by Carlsen and evaluate the effectiveness of various gameplay approaches employed by him. This broader perspective enhances our overall understanding of Carlsen's strategies and gameplay dynamics. The predictions generated by the model can also be utilized for comparative analysis between Magnus Carlsen and other players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states in Carlsen's games, we can identify patterns and trends in his strategies. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities while considering Carlsen's approach. In Figure <ref>, we discover the solution to one of the questions raised in Section <ref>: the value of the white knight on f5. Figure <ref> illustrates the distribution of c(s) for the White Knight on f5 in Carlsen's games. It is evident that the c(s) values for the White Knight exhibit a positive skew, indicating that this particular state s is typically associated with favorable c(s) values. 
Therefore, having a white knight positioned on f5 often confers an advantage. By incorporating such insights into our analysis of Carlsen's games, we gain a more comprehensive understanding of the strengths, weaknesses, and strategic implications of the Knight and Bishop pieces as employed by Magnus Carlsen. In sum, the applications of our model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by this world-class chess player. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions encountered by Carlsen, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of Carlsen's games, facilitating a deeper understanding of his strategies and enhancing the overall gameplay experience. § PAWN VALUATION No pawn exchanges, no file-opening, no attack—Aron Nimzowitsch Our study is not complete until we apply the model to the mighty pawn. Our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Pawn}}. The results of the model when applied to the White Pawn are shown in Figure <ref> and Figure <ref>. We note a few chess maxims that are reflected in the model predictions. * Pawns gain in value as they cross the 4th rank: This point highlights an important principle in chess, where advancing pawns beyond the 4th rank often leads to increased positional strength and potential threats. As pawns move forward, they gain control over more squares, restrict the opponent's piece mobility, and open up lines for their own pieces. Crossing the 4th rank is a significant milestone that can significantly impact the dynamics of the game. * Pawns on the h and a files are very good on the 5th rank: This point emphasizes the strategic importance of pawns positioned on the h and a files when they reach the 5th rank. Pawns on these files can have a powerful influence on the game, particularly in the endgame. Placing pawns on the 5th rank provides support for the central pawns, helps control key central squares, and may facilitate piece activity and potential attacks on the opponent's position. * Pawns on the 6th rank are deadly, especially when supported by a pawn on the 5th rank: This point highlights the strength of pawns on the 6th rank, which is just two steps away from promotion. Pawns advanced to this rank become highly dangerous, as they pose a direct threat to promote to a more powerful piece. When supported by a pawn on the 5th rank, these pawns can create a formidable pawn duo, exerting significant pressure on the opponent's position and potentially leading to advantageous tactical opportunities. * Edge pawns tend to be weaker than central pawns: This point draws attention to the relative weakness of pawns placed on the edges of the board (such as the a and h files) compared to pawns in central positions. Edge pawns have fewer potential squares to advance or support other pieces, limiting their mobility and influence. In contrast, central pawns control more critical squares, contribute to a stronger pawn structure, and have a greater impact on the overall game dynamics. 
* Kingside pawns are more dangerous when advanced than queenside pawns: This point highlights a positional aspect where advancing pawns on the kingside (g and h files for White, g and h files for Black) can have a more immediate and aggressive impact compared to advancing pawns on the queenside (a and b files for White, a and b files for Black). Advanced kingside pawns can create open lines, potentially exposing the opponent's king to attacks or weakening their pawn structure. Understanding this distinction helps players assess the strategic implications of pawn advances on different sides of the board. Important squares for the white pawn can also be seen by examining the highest Centipawn evaluation c(s) values in each column. By analyzing the rows in the heatmap corresponding to the white pawns, we can identify squares that consistently have high Centipawn evaluations, indicating their significance for white pawns. Starting from the top row (from White's perspective), the squares with the highest c(s) values are e4, h4, c5, and h6. These squares represent critical positions for white pawns. The square e4, located in the fourth row, is a well-known central square in chess. Occupying e4 with a white pawn can provide several advantages, such as controlling important central squares, supporting piece development, and establishing a strong pawn presence in the center. Also in the fourth row, we find the square h4. Although it is on the edge of the board, it is an important square for white pawns. Placing a pawn on h4 can serve multiple purposes, including potentially supporting a kingside pawn storm, reinforcing control over the g5 square, or preparing to launch an attack on the opponent's position. In the fifth row, we encounter the square c5. Occupying c5 with a white pawn can contribute to a solid pawn structure and provide control over central squares. It may also support piece mobility and influence the game's dynamics, particularly in the context of pawn breaks or central pawn exchanges. Finally, in the sixth row, the square h6 stands out with the highest c(s) value. Placing a pawn on h6 can have strategic implications, such as potentially supporting kingside attacks or acting as a defensive shield for the king. By identifying these squares with high c(s) values, we gain valuable insights into the strategic positioning of white pawns. These squares offer opportunities for central control, piece activity, attacking potential, and overall pawn structure. Understanding the significance of these squares helps players make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their advantage in the game. We next apply this model to the black pawns. The results are shown in Figure <ref> and Figure <ref>. Similar conclusions can be drawn for the black pawns. By analyzing the highest Centipawn evaluation c(s) values in each column for the black pawns, we can identify the key squares that consistently have high evaluations, signifying their significance for black pawns. Just like for the white pawns, the rows in the heatmap corresponding to the black pawns reveal important squares. The squares with the highest c(s) values for black pawns are f5, d5, c4, d3, and f3. These squares play a crucial role in determining the strength and strategic positioning of the black pawns. The square f5, located in the fifth row, emerges as one of the critical squares for black pawns. 
Placing a pawn on f5 can provide black with control over central squares, potential support for piece development, and opportunities for counterplay. The square d5 stands out with a high c(s) value. Occupying d5 with a black pawn contributes to central control, potentially restricts white's pawn breaks, and provides a solid foundation for black's pawn structure. In the fourth row, the square c4 is identified as an important square for black pawns. Occupying c4 can offer black strategic advantages, such as central control, potential support for piece activity, and the creation of tactical opportunities. Furthermore, the square d3 in the third row holds significance for black pawns. Placing a pawn on d3 strengthens black's central presence, potentially restricts white's pawn advancements, and helps solidify black's position in the center. Lastly, the square f3 in the third row also demonstrates a high c(s) value. Occupying f3 with a black pawn can support kingside counterplay, potentially restrict white's piece mobility, and offer opportunities for tactical operations. Analyzing these key squares for black pawns, namely f5, d5, c4, d3, and f3, provides valuable insights into the strategic considerations and potential strengths of the black pawn structure. Occupying and controlling these squares strategically enhances black's control of central areas, supports piece coordination, and enables counterplay against white's position. By understanding the significance of these squares, players can make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their potential advantage and navigate the complexities of the game from the black perspective. § DISCUSSION In this paper, we presented a comprehensive methodology for evaluating chess positions and predicting the probabilities of winning w(s) and Centipawn evaluations c(s). Our approach utilized a combination of Centipawn evaluation, Q-learning, and Neural Networks to capture the complex dynamics of the game and facilitate informed decision-making. We began by formalizing the theoretical functions used in Q-learning, such as the value function V(s) and Centipawn evaluation c(s). The value function represented the probability of winning the game given a specific state s, while the Centipawn evaluation measured the advantage in chess. We derived the win probability w(s) from the Centipawn evaluation using a mathematical equation. To address the sequential decision problem, we employed the dynamic programming technique of Q-learning, which involved breaking down the problem into smaller sub-problems and solving the Bellman equation. The Q-value matrix represented the probability of winning given a policy/move in a specific state, and we determined the optimal policy and value function using the Q-values. To predict Centipawn evaluations c(s), we designed a Neural Network architecture specifically tailored for chess positions. This model incorporated the tanh activation function to capture intricate patterns and relationships within the input data. By training the Neural Network on a meticulously crafted dataset, we could make accurate predictions of Centipawn evaluations for unseen states. Our methodology expanded upon previous work by considering a comprehensive state representation that encompassed color, piece type, and square information. This allowed for a more nuanced analysis of the game dynamics and a deeper understanding of the factors influencing the outcome. 
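For readers who want the Q-learning component summarised in this discussion in executable form, a minimal tabular sketch is given below; the state and action indexing, the reward definition and the learning-rate/discount values are illustrative assumptions rather than the paper's actual formulation.

```python
# Minimal tabular sketch of the Q-learning / Bellman backup summarised above.
# The state/action indexing, the reward definition and the hyper-parameters
# are illustrative assumptions, not the paper's actual formulation.
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q

def greedy_policy(Q):
    """Optimal policy and value function implied by the Q-value matrix."""
    return np.argmax(Q, axis=1), np.max(Q, axis=1)
```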
We also showcased the applications of our model, focusing on specific subsets of states, such as the Knight and Bishop pieces, and visualizing the predicted probabilities of winning and Centipawn evaluations through heat maps. Further research in this area could explore the dynamic nature of square values, taking into account positional changes and the interaction between different pieces. By refining and expanding our methodology, we can continue to deepen our understanding of the intricate dynamics of chess positions and contribute to advancements in the field of chess AI. In conclusion, our methodology provides a robust framework for evaluating chess positions and making informed decisions during gameplay. By combining Centipawn evaluation, Q-learning, and Neural Networks, we achieved a comprehensive analysis of the game dynamics and enhanced our ability to assess strategic moves and guide decision-making. Our research contributes to the development of more sophisticated and intelligent chess AI systems, paving the way for deeper insights into the intricacies of the game. With our methodology, we strive to unravel the logical relations of chess and provide a comprehensive understanding of the game, empowering players and researchers alike to unlock new levels of strategic thinking and mastery.
http://arxiv.org/abs/2307.04394v3
20230710075623
Relieving the $S_8$ Tension: Exploring the Surface-type DBI Model as a Dark Matter Paradigm
[ "Xingpao Suo", "Xi Kang", "Huanyuan Shan" ]
astro-ph.CO
[ "astro-ph.CO" ]
[email protected], Institute for Astronomy, School of Physics, Zhejiang University, Hangzhou 310027, China; [email protected], Institute for Astronomy, School of Physics, Zhejiang University, Hangzhou 310027, China, and Purple Mountain Observatory, 10 Yuan Hua Road, Nanjing 210034, China; [email protected], Shanghai Astronomical Observatory (SHAO), Nandan Road 80, Shanghai 200030, China. Recent observations from weak gravitational lensing surveys indicate a smoother Universe compared to the predictions of the Cosmic Microwave Background (CMB). This is known as the σ_8, or S_8, tension, where σ_8 represents the present root-mean-square matter fluctuation averaged over a sphere of radius 8 h^-1Mpc and S_8 ≡σ_8√(Ω_m/0.3). In this Letter, we investigate a general Dirac-Born-Infeld (DBI) Lagrangian referred to as the surface-type DBI (s-DBI) model. We find that, up to linear order, the constraints on the s-DBI model from the Planck2018 CMB and from low-redshift probes (WL and GC) yield S_8 = 0.7685_-0.0066^+0.0077 and S_8 = 0.766_-0.0376^+0.0471, respectively, which are not only mutually consistent but also consistent with the values derived from most low-redshift probes. Furthermore, we provide an outlook on searching for the non-linear effects of this model, which could help resolve other small-scale issues faced by Cold Dark Matter. Relieving the S_8 Tension: Exploring the Surface-type DBI Model as a Dark Matter Paradigm. Huanyuan Shan. August 12, 2023. Introduction. –The ΛCDM model stands as the most widely accepted cosmological model, serving as the standard framework for Big Bang cosmology. It offers a simple yet effective description that agrees with most observations. However, as theoretical and observational studies have advanced, disagreements between different observations, or between theory and observations, have emerged, challenging the ΛCDM model and suggesting the need for extended models or new physics<cit.>. Among these challenges, the σ_8, or S_8, tension is one of the most significant<cit.>. Low-redshift probes such as weak gravitational lensing (WL) <cit.> and galaxy clustering (GC) <cit.>, as well as their combined analyses <cit.>, indicate a smoother Universe than the constraint from the cosmic microwave background (CMB)<cit.>. Quantitatively, the structure growth parameter S_8 ≡σ_8 √(Ω_m/0.3) derived from low-redshift probes is systematically 2-3σ lower than the value obtained from the CMB<cit.>. Recently, a joint cosmological analysis of cosmic shear + galaxy-galaxy lensing + GC yielded a constraint of (Ω_m, S_8) = (0.305^+0.010_-0.015, 0.766^+0.02_-0.014) (see <cit.>, hereafter referred to as K1K-3×2pt). This result deviates by 8.3 ± 2.6% relative to (Ω_m, S_8) = (0.3166±0.0084, 0.834±0.016) given by Planck2018<cit.>. In this Letter, we present a novel dark matter model which offers a solution to the S_8 tension. Referred to as the surface-type Dirac-Born-Infeld (s-DBI) model, it adopts an area-functional form for the dark matter Lagrangian, a special case within the broader class of general DBI models. Our study demonstrates that this model effectively addresses the S_8 tension by smoothing out low-redshift structure while preserving the perturbation evolution at high redshifts. The surface-type DBI as a dark matter model.
–Here we consider the Lagrangian ℒ ≡ R/2κ + Λ_I + Λ_II√(1 + ∂_μϕ∂^μϕ) + ℒ_m and its corresponding action S = ∫ d^4x √(-g)ℒ, where g ≡(g_μν) represents the determinant of the space-time metric g_μν with signature [-1,1,1,1], R denotes the scalar curvature of the Levi-Civita connection, κ≡ 8π G with gravitational constant G, Λ_I is the vacuum energy or, equivalently, the cosmological constant, ℒ_m is the Lagrangian of normal matter including radiation and baryons, and Λ_II√(1 + ∂_μϕ∂^μϕ), with a constant Λ_II and scalar field ϕ, is the Lagrangian that we introduce to represent dark matter, which we refer to as the surface-type Dirac-Born-Infeld (s-DBI) model. It is important to note that our consideration of the s-DBI model is primarily from a mathematical standpoint: the terms ∫ d^4x √(-g) and ∫ d^4 x √(-g)√(1 + ∂_μϕ∂^μϕ) can be viewed as formal volume or area functionals. At the same time, the s-DBI model also possesses strong physical motivations: it can be interpreted as a general DBI model with a constant warp factor <cit.>, or as a low-dimensional equivalent deduced in membrane theory <cit.>. For the Lagrangian given in Eq. (<ref>), applying the principle of least action leads to the Einstein field equation: R_μν - (1/2)R g_μν = -κ( T_μν^(Λ_I) + T_μν^(Λ_II) + T_μν^(m)), where R_μν is the Ricci tensor, and T_μν^(Λ_I) = - Λ_I g_μν and T_μν^(Λ_II) = Λ_II(∂_μϕ∂_νϕ/√(1+∂_ρϕ∂^ρϕ) - g_μν√(1+∂_ρϕ∂^ρϕ)) represent the energy-stress tensors of dark energy and dark matter in this model, respectively. Now our focus turns to the s-DBI field. In a homogeneous Universe, according to Eq. (<ref>), this field can be treated as a perfect fluid characterized by the Equation of State (EoS) w = -Λ_II^2/ρ^2 = -1/(1+(a_d/a)^6), where w ≡ P/ρ, with P and ρ denoting the pressure and mass density of the s-DBI field, respectively. Here, a is the scale factor normalized to unity at the present time, and a_d is a free parameter. As the Universe evolves from a=0 to a=∞, the s-DBI field transforms from the dark-matter phase (w=0) to the dark-energy phase (w=-1). The parameter a_d characterizes the scale at which this phase transition occurs and can be interpreted as the decay scale factor or decay parameter. Notably, this phase transition is rapid, with a power index of six. Using Eq. (<ref>), we can derive the density evolution of ρ with respect to a as follows: ρ(a) = [ρ_today/√(1+a_d^-6)] √(a_d^-6+a^-6) ≡ ρ_s √(a_d^-6+a^-6). Moreover, considering a linear perturbation in the homogeneous Universe, the sound speed of the s-DBI field is given by c_s^2 = c_a^2 = -w, where c_s and c_a are the rest-frame and adiabatic sound speeds, respectively. The EoS and sound speed provide sufficient information to complete the scalar linear evolution equations of the Universe <cit.>. Dark matter with this EoS and sound speed behaves as follows: during the early stages (a ≪ a_d) it acts like pressure-less standard cold dark matter, but at late stages (a close to a_d) it acquires a non-negligible sound speed and pressure, which smooths out the structures formed during the early stages. This may explain the observed smoother Universe compared to the predictions from the CMB. In Fig. <ref>, we present the linear matter spectra at different redshifts with a_d = 3.8 as a reference. It is evident that the suppression relative to ΛCDM increases with time. The value of the decay parameter greatly influences this process.
Fig. <ref> shows the power spectra for different a_d values at z=0, along with the matter power spectrum of the ΛCDM model for comparison. As a_d tends towards infinity, the s-DBI model reduces to ΛCDM. An initial estimate for a_d can be made from the following considerations: if a_d ≤ a_today = 1, the dark matter would already have decayed to the dark-energy phase, which is contrary to observations, so a_d should be larger than one. However, a_d should not be so large that the model becomes indistinguishable from standard cold dark matter. According to <cit.> and <cit.>, any solution to the S_8 tension must be effective after z ≈ 1; hence, a_d should not exceed approximately ten. In summary, if the constraint yields a value outside the range [1, 10], it should be considered as providing insufficient support for this model. Note that the non-relativistic approximation of the s-DBI field is equivalent to the Chaplygin gas<cit.>, which has the EoS P = -A/ρ with a constant A > 0. In the relativistic regime, however, we need to consider Eq. (<ref>) together with the evolution equation for ϕ, (1/2∂_μlog( -g ) + ∂_μ) ∂^μϕ/√(1+∂_νϕ∂^νϕ) = 0, which is a general minimal surface equation. Since the perturbation evolution of dark matter, particularly on large scales and in the early stages of our Universe, is dominated by the non-relativistic and linear parts, we can ignore the non-linear and relativistic aspects of the theory. Constraints by the observations. –To demonstrate that the s-DBI model can alleviate the S_8 tension, we perform a series of constraints using different observational datasets. We begin with the Planck2018 likelihood, which combines the TT, TE, EE and low-E angular power spectra of the CMB to constrain the cosmological parameters<cit.>. This baseline analysis is advantageous as it avoids model-dependent non-linear effects that may introduce uncertainties <cit.>. For the low-redshift probes, we employ the WL shear catalog from KiDS1000<cit.> and the GC data from SDSS-III BOSS<cit.>. In our analysis, we treat the high-redshift probe (CMB) and the low-redshift probes (WL and GC) separately instead of combining them, since consistent results from the two data sets provide a stronger test of the correctness of a model. Additionally, we use the same data sets to constrain the ΛCDM model in parallel, serving as a control group for comparison. We modified the Boltzmann code CLASS <cit.> [<https://lesgourg.github.io/class_public/class.html>] to perform the perturbation calculations. Based on it, the public Markov Chain Monte Carlo (MCMC) sampler MontePython <cit.>[<https://baudren.github.io/montepython.html>] was used; all MCMC samplings in our constraints are done with the Metropolis-Hastings algorithm implemented there. To constrain this model with the Planck2018 likelihood, we assume flat priors on some nuisance parameters of the Planck likelihood <cit.> and on the cosmological parameters {ω_b, Ω_s, h, A_s, n_s, τ_reio, a_d}, where Ω_s ≡ρ_s/ρ_cr≡ (8π G/3H_0^2)ρ_s is the reduced dark matter density in our model. The names and priors of the base cosmological parameters are listed in Table <ref>. For comparison, we also conducted a parallel ΛCDM constraint using a similar setup. Note that in all the analyses we assume zero spatial curvature (Ω_K=0), and our neutrino model is the same as in Planck2018, with two massless species and one massive species of 0.06eV. The posterior distributions obtained with the Planck2018 likelihood are presented in Table <ref>.
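For reference, the background quantities that enter these constraints, namely the EoS w(a), the density evolution ρ(a) and the sound speed c_s^2 = -w quoted above, can be evaluated with a few lines of code; the normalisation ρ_today = 1 and the sample value a_d = 3.8 (the reference value used for the linear spectra above) are illustrative.

```python
# Numerical sketch of the s-DBI background quantities defined above:
# w(a) = -1/(1+(a_d/a)^6), rho(a) = rho_s*sqrt(a_d^-6 + a^-6), c_s^2 = -w.
# The normalisation rho_today = 1 and the sample a_d are illustrative.
import numpy as np

def w_sdbi(a, a_d):
    """Equation of state of the s-DBI field."""
    return -1.0 / (1.0 + (a_d / a) ** 6)

def rho_sdbi(a, a_d, rho_today=1.0):
    """Density evolution, normalised so that rho(a=1) = rho_today."""
    rho_s = rho_today / np.sqrt(1.0 + a_d ** -6.0)
    return rho_s * np.sqrt(a_d ** -6.0 + a ** -6.0)

def cs2_sdbi(a, a_d):
    """Rest-frame (= adiabatic) sound speed squared."""
    return -w_sdbi(a, a_d)

# The field is matter-like (w ~ 0) for a << a_d and approaches w = -1 as a
# grows, with the transition controlled by the decay parameter a_d.
a = np.logspace(-3, 1, 200)
w = w_sdbi(a, a_d=3.8)
```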
The Markov chains used for the analysis satisfy the Gelman-Rubin convergence criterion with R-1 ≈ 10^-3, indicating good convergence. The posterior distributions of all parameters are approximately Gaussian, and the acceptance rate of the chains is around 0.22, indicating reliable sampling. Furthermore, our constraints on the ΛCDM model are consistent with the results reported by the Planck2018 collaboration <cit.>, validating the accuracy of our analysis. The results reveal slight differences in the mean values or best fits of the common cosmological parameters between the s-DBI and ΛCDM models. However, significant discrepancies are observed in the total matter density Ω_m and the structure growth parameter S_8. The s-DBI model yields (Ω_m, S_8) = (0.3072_-0.0055^+0.0071,0.7685_-0.0066^+0.0077), which is in strong agreement with the results from K1K-3×2pt and clearly deviates from the result given by Planck2018 <cit.>. To assess the goodness of fit, we present the CMB temperature power spectrum with the best-fit model in Fig. <ref>. It is evident that the discrepancy between the two models is significantly smaller than the discrepancy between the theoretical predictions and the observational data, giving χ^2_obs,LCDM=4.51× 10^-12, χ^2_obs,s-DBI= 4.38 × 10^-12 and χ^2_s-DBI,LCDM = 1.19× 10^-13, where χ^2_i,j is defined as χ^2_i,j≡∑_k=0^N-1(f_k^(i) - f_k^(j))^2/f_k^(j), with f_k^(i) the k-th entry of data set i of total length N. These results suggest that both the s-DBI and the ΛCDM model are strongly favored by the Planck2018 data. Because of the similarity of the results, we do not include plots of the other components of the power spectra. It is also worth noting that the s-DBI model does not exacerbate the Hubble tension<cit.>; on the contrary, it relieves it by slightly increasing the Hubble constant to h≈ 0.68, compared with h≈0.67 for ΛCDM. After constraining the model with the Planck2018 CMB power spectra, we proceed with the combined constraint using the low-redshift probes, i.e., WL and GC. We perform parallel constraints for both the s-DBI and ΛCDM models. Since the non-linear scale evolution of our model is not available, we reliably remove the scales affected by non-linear effects. For WL, we adopt the correlation function ξ_+(θ) and truncate the small-scale portion (θ<10) using the KiDS cosmology analysis pipeline <cit.>. The validity of this truncation is ensured through a comparison between the correlation-function data vectors ξ⃗_+^NL and ξ⃗_+^L, which include the non-linear and linear effects, respectively. By increasing the angular variable θ, we verify that the relative distance between the two vectors, ||Δξ⃗|| / ||ξ⃗_+^NL||, reaches the level of 10^-2, where Δξ⃗≡ξ⃗_+^L - ξ⃗_+^NL and ||· || ≡√(⟨·, ·⟩). Note that we discard the correlation function ξ_-, since the non-linear effect on ξ_- can hardly be removed. For GC, we focus only on the measurements of the baryon acoustic oscillations (BAO) and discard the redshift-space distortions. Due to the strict elimination of non-linear effects, the constraining power on the five common base parameters becomes weaker. Hence, for both the s-DBI and ΛCDM models, we fix these parameters to their respective best-fit values in Table <ref>. However, for the s-DBI model, we allow the decay parameter a_d to have a prior within the interval [2,6], as the constraining capability of WL + GC on a_d is unknown. For the ΛCDM model, the low-redshift data still prefer a lower value of S_8 compared to the Planck2018 constraint.
The constraint yields (Ω_m, S_8) = (0.299_-0.0105^+0.011, 0.770_-0.035^+0.0371), which is consistent with the results from K1K-3×2pt, with differences of about 0.6σ for Ω_m and 0.1σ for S_8. However, as shown in Fig. <ref>, the tension between the low-redshift probes and the CMB still persists. For the s-DBI model, on the other hand, the S_8 tension does not exist. As depicted in Fig. <ref>, the constraint on the WL+GC data gives (Ω_m, S_8) = (0.305_-0.0127^+0.0107, 0.766_-0.0376^+0.0471), which is highly consistent with our constraint using the Planck2018 likelihood. Note that the area of the credible region is larger than that of ΛCDM due to the degeneracy between Ω_s and a_d. In conclusion, our analysis reveals that the S_8 tension persists in the ΛCDM model even when considering non-linear-free data. This suggests that modifying the non-linear model, such as <cit.> or <cit.>, is unlikely to resolve the tension effectively. On the other hand, the s-DBI model, within the scope of the data sets we have considered, successfully alleviates the S_8 tension. Non-linear effect and outlook. –A key issue is whether small-scale structures such as dark matter halos can form in the s-DBI model. To answer this question, we note that the non-relativistic approximation of the s-DBI field, the Chaplygin gas, is barotropic, so we can introduce an effective potential h ≡ - ∫_ρ^∞dP(ρ')/ρ' = - (1/2)Λ_II^2/ρ^2 to represent the effect of pressure. We include this external potential in the N-body simulation software <cit.> by modifying the implementation of the PM algorithm. Setting the cosmological parameters Ω_m, Ω_vac and h to the best-fit values of the s-DBI model from Table <ref>, we carry out the simulation with 512^3 particles in a cubic box with an edge length of 100Mpc. A parallel simulation for ΛCDM is also performed. The simulations reveal that dark matter halos can indeed form in the s-DBI model. Furthermore, we find that the differences between the s-DBI and ΛCDM models are tiny at redshifts z>1. However, as the redshift z approaches zero, the s-DBI model predicts a lower non-linear power spectrum than ΛCDM. The "bias" between the power spectra of the two models, defined as b_M ≡√(P_s-DBI/P_Λ CDM), is shown in Fig. <ref>. Note that the leading-order bias between the observed and simulated power spectra, denoted b_1 ≡√(P_gg/P_mm ), can range from about 1.4 to 3.5<cit.>. In comparison, the bias b_M ≈ 0.9 is close enough to unity, suggesting that our model can fit the observed galaxy power spectrum with a minor adjustment of b_1. In addition, in the s-DBI simulation we found that some small dark matter halos are dissolved by the external pressure, suggesting that our model may hold promise in addressing other inconsistencies related to cold dark matter, such as galaxies lacking dark matter<cit.>, the cuspy halo problem<cit.> and the missing dwarf galaxy problem<cit.>. However, a rigorous numerical analysis is necessary to fully investigate these issues. Moreover, a more comprehensive understanding of the non-linear effects is crucial for further constraints using various cosmological probes. Given the complexity of this topic, we leave it to future work. Xingpao Suo and Xi Kang acknowledge the support from the National Key Research and Development Program of China (No.2022YFA1602903), the NSFC (No. 11825303, 11861131006), the science research grants from the China Manned Space project with No.
CMS-CSST-2021-A03, CMS-CSST-2021-A04, the Fundamental Research Funds for the Central Universities of China (226-2022-00216) and the start-up funding of Zhejiang University. Huanyuan Shan acknowledges the support from NSFC of China under grant 11973070, Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7013, Program of Shanghai Academic/Technology Research Leader, and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A01, CMS-CSST-2021-A04. We thank Joe Zuntz and Benjamin Stölzner for helpful discussions.
http://arxiv.org/abs/2307.06298v1
20230712165240
Improved Real-time Image Smoothing with Weak Structures Preserved and High-contrast Details Removed
[ "Shengchun Wang", "Wencheng Wang", "Fei Hou" ]
cs.CV
[ "cs.CV" ]
Improved Real-time Image Smoothing with Weak Structures Preserved and High-contrast Details Removed. Shengchun Wang^1,2, Wencheng Wang^1,2*, Fei Hou^1,2. ^1The State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences; ^2School of Computer Science and Technology, University of Chinese Academy of Sciences. {wangsc,whn,houfei}@ios.ac.cn. August 12, 2023. [Teaser figure; panels: (a) Input, (b) GFES, (c) DeepFSPIS, (d) CSGIS-Net, (e) ILS, (f) Ours.] Comparison between some of the latest methods and our improved method. For the input (a), GFES <cit.> produces block effects, as shown in the red box in (b); DeepFSPIS <cit.>, CSGIS-Net <cit.> and ILS <cit.> fail to remove some high-contrast details in white dots, as shown in the green boxes in (c), (d) and (e); and DeepFSPIS and ILS have weak structures smoothed out, as shown in the enlarged red boxes in (c) and (e). Our result (f) is of high quality, with high-contrast details removed and weak structures well preserved. Compared with ILS, the fastest non-learning method to our knowledge, we used 2 iterations while ILS used 10 to produce these results. (Zoom in for a better view.) ^*Corresponding Author. Image smoothing reduces pixel-wise gradients to smooth out details. Because existing methods rely on gradients to determine how pixels are smoothed, it is difficult for them to distinguish structures from details and handle the two differently, since the gradient ranges of structures and details overlap. Thus, it is still challenging to achieve high-quality results, especially in preserving weak structures and removing high-contrast details. In this paper, we address this challenge by improving the real-time optimization-based method via iterative least squares (called ILS). We observe that 1) ILS uses gradients as the independent variable in its penalty function for determining how pixels are smoothed, and 2) the framework of ILS still works for image smoothing when we use other values instead of gradients in the penalty function. Thus, according to whether a pixel lies on a structure or not, we compute values to use in the penalty function to determine how it is smoothed, so that we can handle structures and details differently, no matter whether their gradients are high or low. As a result, we can conveniently remove high-contrast details while preserving weak structures. Moreover, such values can be adjusted to accelerate the optimization, so that we can use fewer iterations than the original ILS method for efficiency. This also reduces the changes to structures, further helping structure preservation. Experimental results show our advantages over existing methods in efficiency and quality.
§ INTRODUCTION Image smoothing aims to smooth out details while preserving structures, so that the content of an image is presented concisely; this facilitates many subsequent image processing applications, such as saliency detection <cit.>, image abstraction <cit.>, pencil sketching <cit.>, and detail enhancement <cit.>. A large number of image smoothing methods have been proposed, which are either filtering-based <cit.> or optimization-based <cit.>. Though they take different strategies for smoothing out details, they all rely on gradients to determine how pixels are smoothed, and so have difficulty distinguishing structures from details to handle them differently, because the gradient ranges of structures and details may overlap. As a result, it remains very challenging to achieve high-quality results, especially in preserving weak structures while removing high-contrast details. For example, filtering-based methods compute the output pixel intensity as a weighted average of the input pixel intensities inside a window, so their results depend on the windows: smaller windows help preserve structures, while larger windows help smooth out details. As the windows are determined by some measurement of gradients, the structure-preserving ability and the smoothing ability of these methods are difficult to balance well, and, as discussed in <cit.>, they tend to produce artifacts such as halos and gradient reversals. Optimization-based methods formulate image smoothing as a global optimization problem, generally solved iteratively. They can outperform filtering-based methods in avoiding artifacts. Unfortunately, because they smooth by iteratively reducing pixelwise gradients, they face a dilemma between smoothing out high-contrast details and preserving weak structures, which have low contrasts: with fewer iterations, high-contrast details cannot be smoothed out, while with many more iterations, weak structures are smoothed away. Some learning-based methods have also been proposed recently for image smoothing <cit.>. With trained networks, they can output results quickly. However, they require an expensive training process, and the quality of their results suffers from the training data, which are always produced by existing non-learning methods or made by hand. Thus, their potential is limited, especially for real-world images. In this paper, we address the challenge of high-quality image smoothing by determining the smoothing manner of each pixel according to whether it lies on a structure or not, no matter whether its gradient is high or low. In this way, it is convenient to preserve weak structures while removing high-contrast details. Our proposed method improves the method in <cit.>, which is optimization-based and solves its objective function through iterative least squares, and is therefore called ILS. ILS exploits frequency-domain computation to solve the objective function and so can perform image smoothing in real time. However, as an optimization-based method, it is ineffective at preserving weak structures and removing high-contrast details, as discussed above. We observe that ILS prefers to smooth out pixels with lower gradients rather than pixels with higher gradients, and that this behavior depends largely on the computation of its penalty function, whose independent variable is the pixelwise gradient.
This suggests that the values of the penalty function largely determine how pixels are smoothed. Thus, we modify the computation of the penalty function according to whether pixels lie on structures or not, giving detail pixels lower penalty-function values and structure pixels higher penalty-function values, so that the smoothing depends on whether a pixel should be smoothed or preserved, no matter whether its gradient is high or low. As shown by our analysis, with such a modification of the penalty function, the framework of the ILS method can still be used for image smoothing. Thus, we can smooth out high-contrast details more effectively, and consequently we can use fewer iterations than the original ILS method to obtain quality results; this improves efficiency and, because structures undergo fewer changes, also helps preserve them, e.g., by avoiding halos and intensity shift. Experimental results show that we obtain better results than state-of-the-art methods, especially in removing high-contrast details and preserving weak structures, while reducing the number of iterations considerably compared with the original ILS method, as illustrated in Figure <ref>. § RELATED WORKS §.§ Filtering-based Methods Filtering-based methods calculate the output pixel intensity as a weighted average of input pixel intensities inside a window, as in the popular bilateral filter <cit.>. For improvement, some methods enhance the weight computation, e.g., using histograms <cit.> or prioritizing spatial scales <cit.>, while others improve efficiency, including the fast bilateral filter <cit.>, adaptive manifolds for real-time high-dimensional filtering <cit.> and fast high-dimensional filtering using the permutohedral lattice <cit.>. Since the results are largely determined by the scopes defined by the windows, many methods try to improve window determination, e.g., using non-local windows in the tree filter <cit.> and the graph-based filter <cit.>, using small windows near edges and large windows inside regions far away from edges <cit.>, and using edge-aware windows <cit.>. Some methods also investigate employing priors for improvement, such as guided image filtering <cit.>. These methods are generally very efficient, but they are prone to producing artifacts in their results, as discussed in <cit.>. §.§ Optimization-based Methods Optimization-based methods <cit.> treat image smoothing as an optimization problem to solve, by which details are smoothed out while structures are preserved. As they globally consider the characteristics of pixels and may embed a prior on the output image in the optimization procedure to guide smoothing, they are better at avoiding artifacts than filtering-based methods, e.g., the methods via total variation smoothing <cit.>, the weighted least squares filter <cit.> and L_0 smoothing <cit.>. Considering that textural details may contain significant local gradients, some methods have been studied particularly to improve texture filtering <cit.>. As global optimization-based methods are computationally expensive, many methods have been proposed to speed up the optimization, such as using preconditioning techniques <cit.>.
With regard to this, Liu et al. <cit.> proposed a method that uses iterative least squares to solve the optimization problem quickly via fast Fourier transforms and inverse fast Fourier transforms; it can perform image smoothing in real time while preserving salient edges. In general, these methods achieve smoothing by imposing a certain penalty on image gradients, so it is often difficult for them to smooth out high-contrast details while preserving weak structures, as discussed in Section <ref>. This prevents them from obtaining high-quality results. §.§ Learning-based Methods Recently, learning techniques have been studied to promote image smoothing. Some methods use deep neural network (DNN) architectures as a solver for the objective function or to predict parameters <cit.>; as they need to train the model separately for different inputs, they are not easy to use. Other methods design novel network architectures as image smoothing models that directly predict smoothing results from ground-truth datasets <cit.>. Though they can output results very conveniently once their networks are trained, the quality of their results is limited by the training data, because the training data are always produced by existing non-learning methods, whose shortcomings are transferred to the learning methods, as discussed in <cit.>. Though Fan et al. <cit.> proposed an unsupervised learning method, it relies on detected textures or structures to optimize the objective functions during training; since high-quality textures or structures are difficult to obtain, the potential of this method is limited. To date, it is still difficult for learning methods to produce high-quality results. § DETERMINING SMOOTHING MANNERS BY PROPERTIES OF PIXELS ON STRUCTURES Our method modifies the penalty function of ILS to decouple gradients from the determination of smoothing manners, and then employs the framework of ILS for image smoothing. In the following subsections, we first review ILS and then discuss our improvements. §.§ ILS Method Review The ILS method was proposed in <cit.> and minimizes the following energy function: E(u,f) = ∑_s( (u_s - f_s)^2 + λ∑_* ∈{x,y}ϕ_p(∇ u_*,s) ), where f is the input image, u is the smoothed output image, s denotes the pixel position and ∇ u_* (* ∈{x,y}) represents the gradient of u along the x-axis/y-axis. Here, the gradients are computed with the standard finite differences [1, -1] and [1, -1]^⊤ along the x-axis and y-axis, respectively. The penalty function ϕ_p(·) is defined as ϕ_p(d) = (d^2+ϵ)^(p/2), where d is the gradient value and ϵ is a small constant, fixed to ϵ = 0.0001 in their tests. The norm power p is usually set to 0 < p ≤ 1 for edge-preserving smoothing and fixed to p = 0.8 in our tests, as suggested in <cit.>. In <cit.>, it is demonstrated that Eq. (<ref>) can be solved by updating u iteratively, where the value of u in each iteration is obtained as u^(n+1) = arg min_u ∑_s( (u_s-f_s)^2 + λ∑_* ∈{x, y} (1/2)(√(c)∇ u_*,s - (1/√(c))μ_*,s^n )^2 ), with the constant c = p ϵ^(p/2-1) > 0. The value of μ_*,s^n in each iteration is computed as μ_*,s^n = c ∇ u_*,s^n - ϕ_p^'(∇ u_*,s^n) = c ∇ u_*,s^n - p ∇ u_*,s^n((∇ u_*,s^n)^2+ϵ)^(p/2-1), * ∈{x, y}, where ϕ_p^' is the derivative of ϕ_p. Because each iteration in Eq. (<ref>) is a least squares (LS) problem, this method is called ILS. As Eq. (<ref>) can be solved efficiently with the help of the fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT), as discussed in <cit.>, the ILS method is very efficient for image smoothing.
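To make the iteration concrete, the sketch below implements the μ update and the closed-form Fourier-domain solution of the least-squares problem for a single-channel image. The normal equations are derived directly from the objective stated above, so the normalization constants may differ from the paper's implementation by fixed factors, and periodic boundary handling is assumed for simplicity.

```python
# Schematic single ILS iteration for one image channel: the mu update and the
# closed-form Fourier-domain least-squares solve. Normalisation constants are
# derived from the objective above and may differ from the paper's code by
# fixed factors; periodic boundaries are assumed for simplicity.
import numpy as np

def ils_iteration(u, f, lam=1.0, p=0.8, eps=1e-4):
    c = p * eps ** (p / 2.0 - 1.0)
    n1, n2 = u.shape
    # frequency responses of the forward differences along x and y
    otf_x = np.exp(2j * np.pi * np.arange(n2) / n2)[None, :] - 1.0
    otf_y = np.exp(2j * np.pi * np.arange(n1) / n1)[:, None] - 1.0
    # spatial forward differences (periodic)
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    # mu = c * grad - phi_p'(grad)
    mu_x = c * gx - p * gx * (gx ** 2 + eps) ** (p / 2.0 - 1.0)
    mu_y = c * gy - p * gy * (gy ** 2 + eps) ** (p / 2.0 - 1.0)
    # least-squares solve: (2 + lam*c*D^T D) u = 2 f + lam * D^T mu
    num = 2.0 * np.fft.fft2(f) + lam * (np.conj(otf_x) * np.fft.fft2(mu_x)
                                        + np.conj(otf_y) * np.fft.fft2(mu_y))
    den = 2.0 + lam * c * (np.abs(otf_x) ** 2 + np.abs(otf_y) ** 2)
    return np.real(np.fft.ifft2(num / den))

# usage: u = f.copy(); then repeat u = ils_iteration(u, f) for a few iterations.
```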
Moreover, it is observed that only a few iterations of Eq. (<ref>) achieve most of the energy decrease, so the edges are not influenced very much; this helps edge preservation in image smoothing and also benefits from suitable parameter settings, as discussed in <cit.>. §.§ Improvements Though ILS is efficient and able to preserve edges, it cannot smooth out high-contrast details and tends to smooth out weak structures (represented by edges), as discussed in <cit.>. Here, we present improvements to address these shortcomings. From the plots in Figure <ref> of the relationship between gradients and the smoothing effects, represented by the values of the edge-stopping function, we observe that the penalties correspond to gradients monotonically, and that ILS tends to smooth out pixels with lower gradients preferentially over pixels with higher gradients. This suggests that the values of the penalty function at each pixel determine how that pixel is smoothed: pixels with lower penalty-function values are smoothed preferentially, while pixels with higher penalty-function values are not. Thus, if we compute the penalty function from some values other than gradients, called guidance values, these guidance values take effect in determining how the pixels are smoothed. By reasoning analogous to the discussion in the previous paragraph, pixels with lower guidance values will be smoothed much more, while pixels with higher guidance values will not. Therefore, when pixels are determined to lie in regions that should be smoothed, their intensities will be adjusted to become very smooth no matter whether their gradients are low or high. Similarly, pixels determined to be smoothed little will not change much, so their gradients are well kept, which helps preserve the related structures (edges), even weak ones. According to Eq. (<ref>), the energy function of the ILS method is related not only to the penalty function but also to its data term, which tries to retain the pixelwise intensities as much as possible to preserve the content of the image. Considering that gradients are related to intensities and that image smoothing reduces gradients between non-structure pixels, we compute our guidance values by weighting gradients. As a result, we modify the penalty function as follows: ϕ_p(id) = (id^2 + ϵ)^(p/2), with id = ω_*,s×∇ u_*,s, where ω_*,s is a weight that makes the value of id correspond to whether the pixel lies on a structure or not. It can give a pixel with a high gradient a very low id value, or a pixel with a low gradient a relatively high id value. In this way, structures can be preserved and details removed, irrespective of their gradients. As illustrated in Figure <ref>, when the red pixels are given very low weights, the red region can be smoothed very well, which cannot be achieved with the original ILS method. § WEIGHT COMPUTATION In Section <ref>, we discussed how the ILS method can be improved to smooth out high-contrast details and preserve weak structures by modifying the computation of its penalty function. In this section, we discuss how to compute the weights so as to achieve high-quality results.
(<ref>), it is expected that the guidance id values are very low for the pixels of details and very high for the pixels of structures. Thus, we need first to determine whether the pixels are on structures or not. Then, we compute the weights in [0.0, 1.0] for keeping guidance id values still in [-1.0, 1.0], by which we can satisfy the requirement to use the framework of ILS for image smoothing, that is, the independent variable of the penalty function must have their values in [-1.0, 1.0]. As discussed in Section <ref>, the ILS method prefers to smooth out the pixels with lower guidance values. Therefore, for fast smoothing out the details with high contrasts, it is expected the guidance values for the pixels of details are very near 0.0. As a result, our weight computation is by the following steps: 1) The interval gradient ∇_ΩI for pixel q is computed, trying to distinguish whether it is for details or on structures. Interval gradients are proposed in <cit.> to enhance the distinguishing of textured details from structures by enlarging the difference between structures and textures in terms of gradient computation, where the gradient at a pixel is not computed by the difference of intensity between its left and right adjacent pixels, but by the difference between the averaged intensity for a range of pixels to its left and that for a range of pixels to its right. We find interval gradients are also very effective to distinguish other details besides texture details, and so take it for detecting structures. 2) A γ(q) value in [0.0, 1.0] is computed by the interval gradient ∇_ΩI for pixel q, trying to allow γ(q) to have higher values for structure pixels and lower values for details pixels. 3) The weight ω(q) is computed by γ(q), trying to have its value very near 0.0 for the pixels of details, while not for the pixels of structures. In the following, these computations are discussed. * The interval gradient (∇_ΩI)_q in <cit.> is computed by using a local window Ω_q centered at a pixel q as follows, (∇_ΩI)_q = g_σ^r(I_q) - g_σ^l(I_q) where g_σ^r(I_q) and g_σ^l(I_q) are left and right clipped 1D Gaussian filter functions defined by g_σ^r(I_q) = 1/k_r∑_n ∈Ω(q) ω_σ (n-q-1)I_n g_σ^l(I_q) = 1/k_l∑_n ∈Ω(q) ω_σ (q-n)I_n where k_r and k_l are coefficients for normalization, defined as k_r = ∑_n ∈Ω(q) ω_σ (n-q-1) and k_l = ∑_n ∈Ω(q) ω_σ (q-n) and ω_σ is the clipped exponential weighting function with a scale parameter σ, which is the range of the neighboring pixels for interval gradient computation in the window Ω(q). ω_σ is defined as ω_σ(x)={ exp(- x^2/2σ^2) if x ≥ 0 0 otherwise . * γ(q) computation is by the following equation, γ(q) = min ( 1.0,| (∇_ΩI)_q| + ϵ_s/| (∇ I)_q| + ϵ_s) (∇ I)_q = I_q+1 - I_q where I is a 1D discrete signal, ϵ_s is a small constant to prevent numerical instability and we also fix ϵ_s = 0.0001 in all the experiments. In general, (∇_ΩI)_q has a smaller absolute value than (∇ I)_q for detailed pixels, so that their γ(q) will have a value lower than 1.0. As for a pixel of structures, its (∇_ΩI)_q always has a larger absolute value than its (∇ I)_q, so that its γ(q) will have a value 1.0, or much near 1.0. * ω(q) computation is by the following equation, ω(q) = 2(1/1+exp(- (2σ_s + 1) * (γ(q) - 1))) where σ_s controls the sharpness of weight transition from structures to detail regions, which is fixed as σ_s = σ in all our tests. With Eq. (<ref>), when γ(q) is 1.0 or much near 1.0, its ω(q) will be much near 1.0. 
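The three steps can be prototyped directly from the equations above. The sketch below is a simplified one-dimensional NumPy illustration of the guidance-weight computation (interval gradient, then γ, then ω); boundary handling is done by clamping indices and is only one possible choice, and the code is written for clarity rather than speed.

```python
import numpy as np

def guidance_weights(I, sigma=3, eps_s=1e-4):
    """Guidance weights for a 1D signal I: interval gradient -> gamma -> omega."""
    n = len(I)
    taps = np.arange(1, sigma + 1)
    w = np.exp(-(taps - 1) ** 2 / (2.0 * sigma ** 2))  # clipped Gaussian weights
    w /= w.sum()                                       # normalization (k_l, k_r)

    grad = np.empty(n)
    igrad = np.empty(n)
    for q in range(n):
        right = np.array([I[min(q + t, n - 1)] for t in taps])  # pixels to the right of q
        left = np.array([I[max(q - t + 1, 0)] for t in taps])   # q and pixels to its left
        igrad[q] = np.dot(w, right) - np.dot(w, left)           # interval gradient
        grad[q] = I[min(q + 1, n - 1)] - I[q]                   # plain forward difference

    gamma = np.minimum(1.0, (np.abs(igrad) + eps_s) / (np.abs(grad) + eps_s))
    sigma_s = sigma                                             # as fixed in the text
    omega = 2.0 / (1.0 + np.exp(-(2 * sigma_s + 1) * (gamma - 1.0)))
    return omega           # near 1.0 on structure pixels, near 0.0 on detail pixels
```

The resulting ω then multiplies the gradients to form the guidance values id in the modified penalty function.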
With γ(q) being smaller and smaller to approach 0.0, the denominator of Eq. (<ref>) will increase its value more and more rapidly, and so the ω(q) will approach 0.0 very much. As illustrated in Figure <ref>, our weight computation can effectively produce guidance values to well correspond to the properties of the pixels on structures or details, no matter whether the pixels have high or low gradients. Therefore, compared to ILS, our improved method can well preserve weak structures, such as sample D, while smoothing out high-contrast details, such as samples A and B. § RESULTS AND DISCUSSION We made tests to compare our method with existing methods, including 1) filtering-based methods via bilateral texture filtering (BTF) <cit.>, interval gradients (IG) <cit.> and edge guidance filtering for structure extraction (EGF) <cit.>; 2) optimization-based methods via real-time image smoothing via iterative least squares (ILS) <cit.>, erasing appearance preservation (EAP) <cit.>, structure and texture-aware image decomposition via training a neural network (STDN) <cit.> and a generalized framework for edge-preserving and structure-preserving image smoothing (GFES) <cit.>; and 3) learning-based methods via deep flexible structure preserving image smoothing (DeepFSPIS) <cit.>, learning to solve the intractable for structure preserving image smoothing (Easy2Hard) <cit.> and contrastive semantic-guided image smoothing network (CSGIS-Net) <cit.>. We have our method implemented in MATLAB (R2017b) and download the open codes for all the compared methods, which are provided by the authors, where the non-learning ones are also implemented in MATLAB. We made tests on a personal computer installed with an Intel Core i7-8700 CPU, 48GB RAM, an NVIDIA GeForce GTX 1080Ti GPU with 11 GB memory. The results are always collected with the Windows 10 operating system except the results for EGF, which are collected with Linux operating system (Ubuntu 20.04), and for DeepFSPIS, which are collected from API provided by the author. §.§ Parameters Our improved method has several parameters. For the parameters p, ϵ and c for ILS computation and ϵ_s for interval gradient computation, they are set as suggested in the references, as they are generally stable for image smoothing. As for our added parameter σ_s in Eq. (<ref>), we only set σ_s = σ, and can always obtain good results. For high performance, there are three parameters to be well investigated, iteration number N, λ, and σ. For N, when it is larger, the image would be smoothed much more but this will takes much more time, not helpful for preserving edges and tends to produce artifacts like halos and intensity shift. As we can fast smooth out high-contrast details, we can use a few iterations to produce good results. Thus, we used 2 ∼ 5 iterations. As discussed in <cit.>, λ controls the smoothing strength and a larger λ leads to stronger smoothing. In our tests, it is set λ ∈ [0.1, 1.0]. As for σ, it controls the scale of details including texture details to be smoothed out, as discussed in <cit.>. In our tests, we always set σ ∈ [2, 5] as they can well produce good results. As illustrated in Figure <ref>, we can use smaller values for λ, N and σ to achieve good results. In <cit.>, it is discussed that larger values for λ or N would lead to stronger smoothing on the high-contrast details, but this would produce the intensity shift effect, blur weak structures, and cause compartmentalization artifacts and halo artifacts. 
As we can use smaller values for λ or N to smooth out high-contrast details, this is helpful for suppressing these artifacts, as illustrated in Figure <ref>. §.§ Quality Figure <ref> shows the results by the methods in comparison. As can be seen, we can obtain better results than the others, especially on smoothing out high-contrast details while preserving weak structures, which are particularly shown in the enlarged boxes. More results are provided in the supplementary materials[https://www.aliyundrive.com/s/rmrAZW7JQF5]. For the filtering-based methods, BTF, IG and EGF over-smoothing structures and have the problem of preserving small weak structures, such as for the blue petals in Figure <ref>fig7b, fig7c and fig7d, especially in the green boxes. This is due to their limited potentials for structure detection. For the optimization-based methods, EAP and ILS have the drawback of blurring small structures, as shown in red boxes in Figure <ref>fig7e and fig7f. Besides, ILS cannot remove high-contrast details, as shown in the green box in Figure <ref>fig7f. GFES and STDN can obtain better smoothing results, but they produce patch-like appearances, as shown in the green boxes in Figure <ref>fig7g and fig7h. This is because they cannot well distinguish neighboring structures and details with similar intensities due to their using truncated Huber penalty function or learning techniques, and so have them handled similarly to cause block effects. For the learning-based methods, Easy2Hard and CSGIS-Net fail to remove high-contrast details, as shown in the green boxes in Figure <ref>fig7i and fig7k. DeepFSPIS cannot preserve weak structures, as shown in the red box in Figure <ref>fig7j. They are prevented by their training data, as discussed in Section <ref>. For quantitative evaluation of image smoothing results, it still lacks effective measures, as discussed in <cit.>. We will indirectly show our improvements by quantitative evaluation of tone mapping results based on the smoothed results with GFES, ILS and our method, to be discussed in Subsection <ref>. More comparisons with previous methods have been discussed in <cit.> to show the superiority of GFES and ILS over previous methods. §.§ Efficiency In comparison with ILS, our improved method only differs from it on the computation of the penalty function. Here, we add some weight computation. With a simple investigation, it is known that we don’t increase much time cost in each iteration. By the statistics on time cost in Table <ref>, it is known ours is much faster than ILS, no matter whether on CPUs or on GPUs. This is much benefited from our reduced iterations. As for other compared methods, ours is much faster than theirs. The statistics in Table <ref> show that we can perform real time image smoothing on GPUs in handling ordinary images. This is much superior to existing methods. §.§ Applications Our method has better performance on preserving structures and removing details than existing methods. Thus, we can improve many applications, as illustrated in Figure <ref> and Figure <ref> for detail enhancement, and in Figure <ref> for HDR tone mapping. More results are provided in the supplementary materials[https://www.aliyundrive.com/s/rmrAZW7JQF5]. For quantitative evaluation of our improvements, we made an investigation on the HDR tone mapping results based on the smoothed images produced by GFES, ILS and ours. Here, the 25 HDR images used in  <cit.> are used and the tone mapping quality index (TMQI) proposed by Yeganeh et al. 
<cit.> is used for quantitative evaluation. TMQI first evaluates the structural fidelity and the naturalness of the tone mapping images, and then combines these two measurements with a power function to give a final score ranging from 0.0 to 1.0. Larger values of TMQI indicate better quality, and vice versa. In Table  <ref>, it is listed the statistics about the average evaluation results for the 25 images, which show our superiority over ILS and GFES. Limitation. Our current implementation is by using interval gradients to distinguish details from structures, and so execute our weight computation. As the potentials of interval gradients are not very strong in distinguishing structures from the others, our results would suffer from this, as shown in Figure <ref>. As we know, texture filtering is required in many applications. For this, image smoothing methods cannot be directly used to texture filtering as textures can sometimes have very high contrast. Thus, for image smoothing methods to smooth out texture details, a preprocessing is often required to smooth the input image for reducing contrasts of textures, like done with ILS <cit.>. As for our method, we can take the same way to perform texture filtering. § CONCLUSIONS It is still challenging with existing methods to smooth out high-contrast details while preserving weak structures, as they are difficult to distinguish structures and details to handle them distinctively, due to the overlapped ranges of gradients for structures and details. In this paper, we address this challenge by developing novel measures to determine smoothing manners for pixels via their properties on structures or not, no matter whether they have high or low gradients. In this way, we can well distinguish structures and details to handle distinctively, and so convenient for preserving weak structures while smoothing out high-contrast details. Moreover, we can still use the framework of ILS for real time image smoothing while using fewer iterations than the original ILS for acceleration. In sum, we present a novel image smoothing method that can more efficiently produce better results than existing methods, especially on preserving weak structures while removing high-contrast details. ieee_fullname
http://arxiv.org/abs/2307.04187v1
20230709144054
Predictive Coding For Animation-Based Video Compression
[ "Goluck Konuko", "Stéphane Lathuilière", "Giuseppe Valenzise" ]
cs.CV
[ "cs.CV", "cs.MM" ]
𝐱 ŁL Goluck Konuko^†, Stéphane Lathuilière^, Giuseppe Valenzise^† ^† Université Paris-Saclay, CentraleSupélec, Laboratoire des signaux et systèmes ^ LTCI, Télécom Paris, Institut Polytechnique de Paris, France Predictive Coding for Animation-Based Video Compression Morgane Austern Received: date / Accepted: date ======================================================= We address the problem of efficiently compressing video for conferencing-type applications. We build on recent approaches based on image animation, which can achieve good reconstruction quality at very low bitrate by representing face motions with a compact set of sparse keypoints. However, these methods encode video in a frame-by-frame fashion, i.e., each frame is reconstructed from a reference frame, which limits the reconstruction quality when the bandwidth is larger. Instead, we propose a predictive coding scheme which uses image animation as a predictor, and codes the residual with respect to the actual target frame. The residuals can be in turn coded in a predictive manner, thus removing efficiently temporal dependencies. Our experiments indicate a significant bitrate gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC, on a dataset of talking-head videos. Video compression, image animation, generative models, video conferencing, predictive coding § INTRODUCTION Recent work on learning-based video coding for videoconferencing applications has shown that it is possible to compress videos of talking heads with extremely low bitrate, without significant losses in visual quality <cit.>. The basic tenet of these methods is that face motion can be represented through a compact set of sparse keypoints <cit.>, which can be transmitted and used at the decoder side to animate a reference video frame. However, despite the impressive coding performance of these methods at very low bitrates, existing animation-based codecs for videoconferencing still have several bottlenecks. Firstly, when the available bitrate increases, the reconstruction quality quickly reaches saturation, and conventional coding tools such as HEVC or VVC perform better. Secondly, bitrate variability in current schemes is complex, unlike conventional coding methods where a simple quantization parameter can be used to regulate bitrate. Finally, animation-based codecs operate on a frame-by-frame basis, which is inefficient for eliminating temporal redundancy in the video. This paper addresses these limitations by proposing a predictive coding scheme for videoconferencing applications. Specifically, we interpret the keypoint-based image animation used in previous codecs <cit.> as a spatial predictor of the current (target) frame, as depicted in Figure <ref>. The residual between the animated and the target frame is then coded and used at the decoder side to correct the animated target frame. Since animation residuals exhibit temporal correlation, we also encode them in a predictive manner, i.e., we predict the current animation residual based on the previously decoded residual and encode the prediction difference. It is worth noting that this approach is similar in principle to the classic video coding prediction loop, with the important distinction that residual coding and animation are jointly learned in an end-to-end fashion. We name our method RDAC, for Residual Deep Animation Codec. 
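Before detailing the architecture, the closed-loop idea can be summarized in a few lines. The sketch below is an illustrative outline of the prediction loop only; `animate` and `residual_codec` are hypothetical callables standing in for the keypoint-based animation model and the learned residual coder, and the learned reconstruction stage described later is omitted.

```python
def encode_gop(frames, animate, residual_codec):
    """Illustrative closed-loop coding of one GOP: frames[0] is the reference frame."""
    x_ref = frames[0]
    prev_res_hat = 0.0            # previously *decoded* residual (keeps encoder/decoder in sync)
    bitstream, decoded = [], [x_ref]

    for x_t in frames[1:]:
        x_pred = animate(x_ref, x_t)             # animation-based spatial prediction of the target
        res_t = x_t - x_pred                     # animation residual
        diff_t = res_t - prev_res_hat            # temporal prediction of the residual
        bits, diff_hat = residual_codec(diff_t)  # encode/decode the difference signal
        res_hat = diff_hat + prev_res_hat        # reconstructed residual
        decoded.append(x_pred + res_hat)         # corrected target frame
        bitstream.append(bits)
        prev_res_hat = res_hat
    return bitstream, decoded
```

The key point is that the temporal predictor uses the decoded residual rather than the original one, so the decoder can reproduce the loop exactly from the bitstream.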
Our results demonstrate significant rate-distortion improvements compared to standard codecs such as HEVC and VVC, as measured by several classical and learning-based perceptual quality metrics. Furthermore, the proposed technique has the additional advantage of reducing temporal drift compared to previous frame-by-frame approaches. § RELATED WORK Image animation models have been applied to compress talking head videos at ultra-low bitrates in conferencing-type applications <cit.>. Different from other learning-based compression frameworks <cit.>, the animation-based codecs in <cit.> and <cit.> propose architectures that use a variable number of motion keypoints to change the reconstruction quality within a small range of low bitrates. The deep animation codec (DAC) in our previous work <cit.> offers the possibility to vary the bitrate by creating a list of reference frames from which the best reconstruction is computed. Specifically, a new reference frame is added to the decoder buffer if all the available frames give reconstruction below a predefined threshold. However, this approach may introduce temporal jittering when adjacent animated frames are predicted from different reference frames. Using second-order motion coherence <cit.> introduces spatio-temporal stability in the decoded video, hence reducing the jittering. However, this architecture is still limited in terms of quality variability since it relies only on face animation. In our recent work <cit.>, we proposed a hybrid coding architecture (HDAC) that uses a low-quality HEVC bitstream as side information to enhance the final result of the animation codec. While improving on previous methods, the use of this low-quality auxiliary stream limits in practice the possibility to reconstruct high-frequency details. In this work, we propose a residual deep animation codec (RDAC) that learns a compact representation of the residual between a frame and its animation-based prediction, and encodes this residual using temporal prediction. § PROPOSED METHOD A general scheme of the proposed residual deep animation codec is depicted in Fig. <ref>. The components of the proposed system are detailed as follows: Section <ref> introduces the frame prediction and residual coding and Section <ref> presents temporal learning in the residual space. §.§ Deep Image Animation Prediction and Residual Coding We leverage the principles developed in the First Order Model <cit.> for image animation and our prior works <cit.> for animation-based prediction. The image animation process works by estimating a sparse set of motion landmarks using a keypoint detector (KPD) which is a UNet-like architecture from <cit.>. The keypoints are used by a motion transfer network (MTN) that generates the optical flow between a decoded reference image 𝐗̃_0 and the desired target 𝐗_t. Subsequently, the optical-flow map is applied to the feature space representation of the reference frame derived by the encoder of an autoencoder network. The deformed source features are assumed to be a close approximation of the target frame's feature representation and are used by a decoder network to produce the final animation 𝐗̂_t. We build on this animation framework by including an encoder network that learns a latent representation of 𝐑_t = 𝐗_t - 𝐗̂_t i.e. the residual after animation as illustrated in Fig. <ref>. We start with the architecture of the variational autoencoder network  <cit.> used for learned image compression frameworks. 
However, since the residual images have very sparse features we mitigate the potential encoding of a noisy latent representation by increasing the number of downsampling convolutional layers from 3 to 5 and symmetrically increase the number of upsampling layers. §.§ Using Temporal Correlation in the Residual Layer For a sequence of target frames 𝐗_1→𝐗_T animated from a single reference frame, 𝐗_0, we observe that the residual differences 𝐑_1→𝐑_T have a high temporal correlation. In this paper, we use a simple differential coding scheme to exploit this temporal correlation. Specifically, we compute the temporal difference signal between consecutive frame residuals, 𝐃_t = 𝐑_t-𝐑̂_t-1, as shown in Fig. <ref>. Note that, in general, more sophisticated prediction schemes are possible, that could bring additional temporal decorrelation, e.g., any dense or block-based motion compensated scheme. In this work, we demonstrated coding gains even with a suboptimal zero-motion temporal predictor, leaving the study of more advanced prediction schemes to future work. The difference signal 𝐃_t is coded using an additional autoencoder network, which is trained together with the animation-based predictor and the reconstruction network. The decoding process consists in reconstructing the residual 𝐑̃_t=𝐃̃_t + 𝐑̃_t-1. The reconstructed residual is then concatenated to the animation-based predictor 𝐗̂_t and passed as input to a reconstruction network that produces the final decoded frame 𝐗̃_t. The reconstruction network consists of 2 convolution layers and 3 ResNet blocks. §.§ Model Training We initialize the animation module with pre-trained models from <cit.>. The loss terms for image animation are the same as in <cit.>, while the rate-distortion loss ℒ_RD is derived as described in <cit.>: ℒ_RD = λ·MSE(𝐑_𝐭, 𝐑̂_𝐭) + Rate where the bitrate cost in bits-per-pixel (bpp) is computed from the entropy estimate of the residual latent representation. § EXPERIMENTS AND RESULTS §.§ Evaluation Protocol We randomly select 30 video sequences from the VoxCeleb test set with minimum lengths of 128 frames. We note that chaning the GOP size affects the average reconstruction quality of the video sequences. Therefore, we encode the sequences with GOP sizes 16, 32, 64, and 128 and select the best reconstruction point at each bitrate from a union of the computed metrics i.e. the convex hull of all the GOP configurations. The reference frame is encoded with QP 30 using the BPG codec (HEVC intra) and the motion keypoints as well as the compressed residuals are entropy coded using a context-adaptive arithmetic coder with a Prediction by Partial Match (PPM) model <cit.>. HEVC and VVC (VTM-11) metrics are computed under low-delay configurations with high QP values to minimize bitrate. We also compare against the LPIPS-VGG metrics reported for BeyondKP <cit.> and FaceVid2Vid <cit.> since they use comparable test conditions. Notice that for these last two methods, we only have a single bitrate point, since they do not support bitrate variability beyond 10 kbps. MSE loss is used at training time for residual learning. However, the other loss terms used in training the network optimize for perceptual quality. Therefore, we restrict our evaluation to use only perceptual metrics and multi-scale pixel fidelity metrics. §.§ RD Evaluation In Tab. <ref>, we note over 70% bitrate savings for perceptual-based metrics i.e. LPIPS <cit.>, msVGG <cit.> and DISTS <cit.> as well as over 40% bitrate savings for pixel-based metrics over HEVC. In Fig. 
<ref> we make a visual comparison of our proposed framework with HEVC and VVC in the low bitrate range. Fig. <ref> illustrates the rate-distortion performance using the LPIPS metric. RDAC significantly improves performance of conventional video codecs over a wide range of bitrates, and it outperforms previous animation-based codecs which do not employ predictive coding. §.§ Ablation study and temporal drift An advantage of using a closed-loop prediction scheme for temporal coding of residuals is that it avoids the temporal drifting affecting previous open-loop schemes such as DAC. This is supported by Fig. <ref>, where we show the temporal reconstruction quality (measured with MS-SSIM) of our framework and DAC. We also investigate to which extent the temporal prediction contributes to the RD gains, over a frame-by-frame scheme to code the prediction residuals 𝐑_t. To this end, we remove the temporal feedback loop in Fig. <ref>, encoding the residuals as all Intra. Tab. <ref> reports the gains of our proposed RDAC (with temporal prediction) over this simpler solution, demonstrating that reducing temporal correlation has a significant impact on coding performance. §.§ Computational complexity In Tab. <ref>, we make a complexity evaluation by comparing the coding or decoding time for a single interframe. The animation-based models DAC, HDAC, and our framework are evaluated on a CPU and GPU while the HEVC and VVC codecs are only evaluated on a CPU since they do not have GPU acceleration capability. We note that our proposal adds only a moderate level of complexity relative to HEVC. However since we achieve bitrate savings greater than VVC, we consider this additional complexity as an acceptable tradeoff for the target application. § CONCLUSIONS Animation-based compression offers the possibility to transmit videos with very low bitrate. However, it is often limited to reconstructing the outputs at a fixed quality level, cannot scale efficiently when higher bandwidth is available, and does not compress efficiently temporal redundancies in the signal. In this paper, we propose a coding scheme that integrates image animation (re-interpreted as a frame predictor) with classical predictive coding principles, where we exploit both spatial and temporal dependencies to achieve a coding gain. Our RDAC codec outperforms previous methods and standard codecs by a large margin on a dataset of talking head videos, despite the very simple temporal prediction approach employed. Acknowledgement: This work was funded by Labex DigiCosme - Université Paris-Saclay. This work was performed using HPC resources from GENCI-IDRIS utils/IEEEbib
http://arxiv.org/abs/2307.04429v1
20230710090926
Designing Novel Cognitive Diagnosis Models via Evolutionary Multi-Objective Neural Architecture Search
[ "Shangshang Yang", "Haiping Ma", "Cheng Zhen", "Ye Tian", "Limiao Zhang", "Yaochu Jin", "Xingyi Zhang" ]
cs.NE
[ "cs.NE", "cs.AI", "cs.LG" ]
IEEE TRANSACTIONS ON XXXX, VOL. X, NO. X, MM YYYY Yang et al.: No title Designing Novel Cognitive Diagnosis Models via Evolutionary Multi-Objective Neural Architecture Search Manuscript received –. This work was supported in part by the National Key Research and Development Project under Grant 2018AAA0100105 and 2018AAA0100100, in part by the National Natural Science Foundation of China under Grant 61822301, 61876123, 61906001, 62136008, U21A20512, and U1804262, in part by the Anhui Provincial Natural Science Foundation under Grant 1808085J06 and 1908085QF271, in part by the Collaborative Innovation Program of Universities in Anhui Province under Grant GXXT-2020-013, and in part by the State Key Laboratory of Synthetical Automation for Process Industries under Grant PAL-N201805 (Corresponding authors: Limiao Zhang and Xingyi Zhang). Shangshang Yang, Haiping Ma, Cheng Zhen, Ye Tian, Limiao Zhang, Yaochu Jin, Fellow, IEEE, and Xingyi Zhang, Senior Member, IEEE S. Yang and X. Zhang is with the Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Artificial Intelligence, Anhui University, Hefei 230039, China (email: [email protected]; [email protected]). C. Zhen, Y. Tian, and H. Ma are with the Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Institutes of Physical Science and Information Technology, Anhui University, Hefei 230601, China (email: [email protected];[email protected];[email protected]). L. Zhang is with Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, 230601, Anhui, China (email: [email protected]). Y. Jin is with the Faculty of Technology, Bielefeld Unversity, Bielefeld 33619, Germany (email:[email protected]). August 12, 2023 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Cognitive diagnosis plays a vital role in modern intelligent education platforms to reveal students' proficiency in knowledge concepts for subsequent adaptive tasks. However, due to the requirement of high model interpretability, existing manually designed cognitive diagnosis models hold too simple architectures to meet the demand of current intelligent education systems, where the bias of human design also limits the emergence of effective cognitive diagnosis models. In this paper, we propose to automatically design novel cognitive diagnosis models by evolutionary multi-objective neural architecture search (NAS). 
Specifically, we observe that existing models can be represented by a general model handling three given types of inputs, and we thus first design an expressive search space for the NAS task in cognitive diagnosis. Then, we propose multi-objective genetic programming (MOGP) to explore this search space by maximizing model performance and interpretability. In the MOGP design, each architecture is transformed into a tree architecture and encoded by a tree for easy optimization, and a tailored genetic operation based on four sub-genetic operations is devised to generate offspring effectively. Besides, an initialization strategy is also suggested to accelerate convergence by evolving half of the population from variants of existing models. Experiments on two real-world datasets demonstrate that the cognitive diagnosis models searched by the proposed approach exhibit significantly better performance than existing models and hold interpretability as good as that of human-designed models. Cognitive diagnosis models, neural architecture search, evolutionary algorithm, multi-objective optimization, genetic programming, model interpretability. § INTRODUCTION Cognitive diagnosis (CD) in the field of intelligent education <cit.> aims to reveal students' proficiency in specific knowledge concepts according to their historical response records of answering exercises and the exercise-concept relational matrix (termed Q-matrix) <cit.>. Fig. <ref> gives an illustrative example of CD, where students {A,B} have practiced a series of exercises (i.e., {e_1,e_3,e_4} and {e_1,e_2,e_3}) and obtained the corresponding responses. Based on the records and the Q-matrix, the students' knowledge proficiency in each concept can be obtained through CD. A wide range of intelligent education tasks, such as personalized exercise recommendation <cit.> and targeted training <cit.>, can then benefit from the students' diagnosis results. With the rising demand for cognitive diagnosis models (CDMs) in online education platforms, researchers have developed various CD approaches, which can generally be grouped into two types. The first genre of approaches is mainly proposed by researchers in educational psychology.
Their designed CDMs usually rely on simple handcrafted functions to model student-exercise interactions and portray student learning ability with a one-dimensional vector or in other manners. Item Response Theory (IRT) <cit.> and Deterministic Inputs, Noisy-And gate (DINA) <cit.> are two pioneering approaches, which utilize a unidimensional continuous vector and a binary vector, respectively, to denote student mastery for predicting the probability of a student correctly answering exercises. In addition, some CD approaches improve upon these two CDMs or adopt other techniques, such as Multidimensional IRT (MIRT) <cit.>, which extends IRT's unidimensional student and exercise latent traits into a multidimensional space, and Matrix Factorization (MF) <cit.>, which is based on the matrix factorization technique. The second genre of approaches <cit.> is based on neural networks (NNs), where student learning ability is portrayed by an inner latent vector. Representatives include Neural Cognitive Diagnosis (NCD) <cit.>, the Prerequisite Attention model for Knowledge Proficiency diagnosis (PAKP) <cit.>, and Relation map driven Cognitive Diagnosis (RCD) <cit.>. As the critical components of CDMs, diagnostic functions are mainly responsible for predicting student exercising scores by integrating three types of input vectors (i.e., the student-related, exercise-related, and concept-related latent vectors) in a highly interpretable manner. To pursue high model interpretability, existing CDMs' diagnostic functions are designed to hold simple architectures. For example, IRT <cit.> and MF <cit.> utilize a simple logistic function and the inner product, respectively, as their diagnostic functions. However, these simple handcrafted diagnostic functions suffer from two kinds of problems. Firstly, their simple architectures prevent CDMs from modeling complex relationships between students and exercises well <cit.>, failing to meet the demands of modern education systems that contain a large quantity of student exercising data. Secondly, the design of existing diagnostic functions heavily relies on researchers' knowledge of both educational psychology and NNs <cit.>, which is labor-intensive and needs a lot of trial-and-error, and the human design bias may limit the emergence of novel diagnostic functions. Although NCD <cit.> argues for finding an automatic way to learn the complex interactions between students and exercises, its simple diagnostic function architecture is still manually designed by summarizing the architectures of previous CDMs. Furthermore, recent CD approaches <cit.> put less focus on the architecture design of diagnostic functions and more on enhancing the input vectors for high performance, which hinders the development of CDMs to some extent. Therefore, it is necessary to design more effective novel diagnostic function architectures to meet the demands of current intelligent education systems. For the above reasons, this paper aims to develop novel CDMs by automatically designing effective diagnostic function architectures. Since Zoph and Le <cit.> proposed to search neural architectures for image tasks, neural architecture search (NAS) <cit.> has been widely applied to many research fields and has achieved significant success <cit.>. Among the various search strategies of NAS, including reinforcement learning <cit.> and gradient optimization <cit.>, evolutionary algorithms (EAs), especially multi-objective evolutionary algorithms (MOEAs), have shown a more powerful search ability <cit.>. Moreover, compared to other NAS approaches, MOEA-based NAS approaches <cit.> are superior in escaping local optima and presenting trade-offs among multiple objectives, where many architectures holding different attributes can be found in a single run. Representative approaches include Neural Architecture Search using Multi-Objective Genetic Algorithm (NSGA-Net) <cit.> and the Lamarckian Evolutionary algorithm for Multi-Objective Neural Architecture DEsign (LEMONADE) <cit.>. However, existing NAS approaches cannot be applied to CD due to the difference in search space between CD and other tasks; moreover, different search spaces generally need different MOEAs <cit.>, whose representations and genetic operations are task-tailored <cit.>, further hindering their application to CD. Therefore, this paper proposes an evolutionary multi-objective NAS approach to design novel CDMs (termed EMO-NAS-CD), where an expressive search space is first devised and multi-objective genetic programming (MOGP) is then employed to explore this search space to develop high-performance CDMs with good interpretability. Specifically, our main contributions are as follows: * This paper is the first NAS work to design CDMs, exploring both the search space design and the search strategy design of NAS. Regarding the search space, we first design an expressive search space for the NAS task of CD (NAS-CD) by summarizing existing diagnostic function architectures. Within this search space, each candidate architecture is denoted by a general model, which takes at most three given types of input vectors as input nodes.
Then, regarding the search strategy, we propose MOGP to explore the search space by solving a bi-objective problem of NAS-CD, which maximizes the objectives of model performance and model interpretability simultaneously, where the interpretability of an architecture is intuitively characterized by its depth, its breadth, and the number of computation nodes it contains. * In the MOGP design, we first transform architectures under the search space into tree architectures and then encode them by trees for easy optimization, which avoids the optimization difficulties of vector-based encoding (e.g., the problem of variable-length encoding). Based on four sub-genetic operations, a tailored genetic operation is devised for effective offspring generation in the MOGP. Besides, to accelerate the MOGP's convergence, we further design a prior knowledge-based initialization strategy to evolve part of the population from variants of existing CDMs. * To validate the effectiveness of the proposed EMO-NAS-CD, we compare it with some representative CDMs on two popular education datasets. Experimental results show that EMO-NAS-CD can find a set of architectures to build CDMs, which present trade-offs between interpretability and performance. The found architectures hold both significantly better prediction performance and good interpretability. Moreover, we verify the effectiveness of the suggested genetic operation as well as the initialization strategy, and we also demonstrate the superiority of the devised model interpretability objective over the common model complexity objective. The rest of this paper is organized as follows. Section II reviews existing CD approaches and presents the motivation for this work. Section III introduces the proposed search space. Section IV presents the details of the proposed approach. The experiments are shown in Section V, and conclusions and future work are given in Section VI. § PRELIMINARIES AND RELATED WORK §.§ Preliminaries of Cognitive Diagnosis Task Formally, there are N students, M exercises, and K knowledge concepts in an intelligent education platform for the cognitive diagnosis task, which can be represented by S = {s_1,s_2,⋯,s_N}, E= {e_1,e_2,⋯,e_M}, and C={c_1,c_2,⋯,c_K}, respectively. Besides, there is commonly an exercise-concept relation matrix Q= (Q_jk∈{0,1})^M× K, termed Q-matrix, to depict the relationship between exercises and knowledge concepts, where Q_jk=1 means that exercise e_j involves knowledge concept c_k and Q_jk=0 otherwise.
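As a concrete (and entirely made-up) toy instance of this notation, a platform with two students, three exercises, and four knowledge concepts could be described as follows; the sizes and the entries of the Q-matrix are illustrative only.

```python
import numpy as np

# Hypothetical toy setting: N = 2 students, M = 3 exercises, K = 4 concepts.
S = ["s_1", "s_2"]
E = ["e_1", "e_2", "e_3"]
C = ["c_1", "c_2", "c_3", "c_4"]

# Q-matrix of shape M x K: Q[j, k] = 1 iff exercise e_{j+1} involves concept c_{k+1}.
Q = np.array([
    [1, 0, 1, 0],   # e_1 involves c_1 and c_3
    [0, 1, 0, 0],   # e_2 involves c_2
    [1, 1, 0, 1],   # e_3 involves c_1, c_2 and c_4
])
```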
R_log is used to denote the students' exercising response logs and it can be represented by a set of triplets (s_i,e_j,r_ij), where s_j ∈ S, e_j ∈ E, and r_ij∈{0,1} refers to the response score of student s_i on exercise e_j. Here r_ij=1 indicates the answer of student s_i on e_j is correct and r_ij=0 otherwise. Based on the students' response logs R_log and Q-matrix, the cognitive diagnosis task mines the students' proficiency in knowledge concepts by building a model ℱ to predict the students' exercising score. To predict the score of student s_i on exercise e_j, the model ℱ can take three types of inputs, including the student-related feature vector 𝐡_S∈ R^1× D, the exercise-related feature vector 𝐡_E∈ R^1× D, and the knowledge concept-related feature vector 𝐡_C∈ R^1× K, which can be obtained by . 𝐡_S = 𝐱_i^S × W_S, W_S∈ R^N× D 𝐡_E = 𝐱_j^E × W_E, W_E∈ R^M× D 𝐡_C = 𝐱_j^E × Q = (Q_j1, Q_j2,⋯, Q_jK) ., where D is the embedding dimension (usually equal to K for consistency), 𝐱_i^S ∈{0,1}^1× N is the one-hot vector for student s_i, 𝐱_j^E ∈{0,1}^1× M is the one-hot vector for exercise e_j, and W_S and W_E are trainable matrices in the embedding layers. Then, the model ℱ outputs the predicted response r̂_ij as r̂_ij = ℱ(𝐡_S,𝐡_E,𝐡_C), where ℱ(·) is the diagnostic function to combine three types of inputs in different manners. Generally speaking, after training the model ℱ based on students' response logs, each bit value of 𝐡_S represents the student's proficiency in the corresponding knowledge concept. §.§ Related Work on Cognitive Diagnosis In the past decades, a series of CDMs have been developed based on researchers' experiences in educational psychology and deep neural networks (DNNs), mainly from two perspectives. §.§.§ Incorporating Richer Input Information As introduced above, there are three types of inputs that can be used for the diagnostic function in a CDM, including the student-related vector 𝐡_S, the exercise-related vector 𝐡_E, and the knowledge concept-related vector 𝐡_C. Therefore, the first type of approaches aims to incorporate richer context information or other information into these input vectors to boost the diagnostic function inputs for improving the prediction performance. To achieve this, Zhou et al. <cit.> proposed Educational context-aware Cognitive Diagnosis (ECD) <cit.> to model educational context-aware features in student learning. Specifically, the student's educational contexts (e.g., school information, student personal interests, parents' education) are incorporated into the student-related vector 𝐡_S by a hierarchical attention NN. Then, the integrated student-related vector 𝐡_S will be processed by a common diagnostic function. The incorporated educational context information can indeed improve the diagnosis performance of different diagnostic functions, including IRT, MIRT, and NCD. In <cit.>, Gao et al. proposed RCD to incorporate the model inputs with the prior relations between knowledge concepts. To be specific, students, exercises, and concepts are first built as a hierarchical graph. This graph contains a student-exercise interaction map, a concept-exercise correlation map, and a concept dependency map that is extracted from the prior relations between knowledge concepts. Then, a multi-level attention NN is used to achieve node aggregation of the hierarchical graph, and the aggregated node features are used as three input vectors, 𝐡_S, 𝐡_E, and 𝐡_C, to improve the model performance. Similarly, Wang et al. 
<cit.> proposed CDGK (i.e., Cognitive DiaGnosis by Knowledge concept aggregation) to incorporate the relations between knowledge concepts into input vectors. Different from RCD, CDGK only builds the graph structure of knowledge concepts according to the dependency among knowledge concepts. Only the leaf nodes in the constructed graph will be used to aggregate the target node's features. Finally, the aggregated knowledge concept features will be taken as the concept-related vector 𝐡_C used for subsequent diagnosis process. §.§.§ Designing Diagnostic Functions The above CD approaches only focus on incorporating extra information into input vectors, and directly employ existing diagnostic functions to handle the enhanced input vectors for diagnosis. In contrast, the second type of approaches focuses on designing powerful diagnostic functions, which are responsible for combining input vectors in highly interpretable manners. As the most typical CDM, the diagnostic function of DINA <cit.> is to first obtain two binary student and concept latent features (θ, β∈{0,1}^1× K) and two exercise latent features (guessing g∈ R^1 and slipping sl∈ R^1) from input vectors. Then, the score of student s_i on exercise e_j can be represented as r̂_ij = g^1-nt(1-sl)^nt, where nt = ∏_kθ_k^β_k. Despite the high interpretability of its diagnostic function, DINA suffers from poor prediction performance in current CD tasks due to its poor scalability on large-scale student exercising data. As another typical CDM, the diagnostic function of IRT <cit.> first takes student-related and exercise-related vectors 𝐡_S and 𝐡_E, and then transforms them into one student latent feature θ∈ R^1 and two exercise latent features (β∈ R^1 and a ∈ R^1), respectively. Next, a simple logistic function is applied to the linear transformation of θ, β, and a, e.g., a simple version is Sigmoid(a(θ -β)) as stated in <cit.>. Finally, the diagnostic function outputs the predicted scores of the student on exercises. Similarly, MIRT <cit.> applies the same logistic function as IRT to the linear transformation of the student latent feature θ∈ R^1× K, the exercise latent feature β∈ R^1, and the knowledge concept latent feature α∈ R^1× K. θ and α are equal to 𝐡_S and 𝐡_C, and β is transformed from 𝐡_E. Note that student and knowledge concept latent features in MIRT are multidimensional for the demands of multidimensional data <cit.>. Finally, its prediction process can be output as r̂_ij = Sigmoid(β+∑α⊙θ). Compared to IRT, MIRT exhibits better performance yet without losing interpretability. Differently, MF <cit.> is originally proposed for recommender systems but can be used for CD from the data mining perspective, where students and exercises in CD can correspond to users and items in recommender systems. As demonstrated in <cit.>, the diagnostic function of MF can be modeled as directly applying the inner-product to 𝐡_S and 𝐡_E. Finally, its prediction process can be represented by r̂_ij = ∑𝐡_S⊙𝐡_E, whose architecture is quite simple yet effective compared to other CDMs. The most representative approach NCD <cit.> builds a new diagnostic function with one shallow layer and three fully connected (FC) layers. Firstly, the student latent feature 𝐟_S∈ R^1× K and two exercise latent features 𝐟_diff∈ R^1× K and f_disc∈ R^1 are first obtained by { 𝐟_S = Sigmoid(𝐡_S) 𝐟_diff = Sigmoid(𝐡_E) f_disc = Sigmoid(𝐡_E× W_disc), W_disc∈ R^D× 1.. 
Then, a shallow layer inspired by MIRT is used to linearly combine the above features and the concept-related vector 𝐡_C as 𝐲 = 𝐡_C⊙(𝐟_S-𝐟_diff )× f_disc. Afterward, the hidden feature 𝐲 is fed into three FC layers with the monotonicity property to obtain the final prediction output. Ma et al. proposed Knowledge-Sensed Cognitive Diagnosis (KSCD) to diagnose students' proficiency. Similar to NCD, KSCD's diagnostic function <cit.> consists of two FC layers followed by one shallow layer. The two FC layers combine the learned knowledge concept features with 𝐡_S and 𝐡_E, respectively, to obtain enhanced student and exercise features. Then, the shallow layer further combines the enhanced features and 𝐡_C to get the prediction. §.§ Motivation of This Work Despite the competitive performance of the above CDMs, their diagnostic function architectures are too simple to model complex student-exercise interactions well <cit.>, especially for the large-scale student exercising data in current intelligent education systems. Moreover, the design of existing diagnostic function architectures heavily relies on researcher expertise in the domains of both education and NNs, which needs a lot of trial-and-error and thus is labor-intensive and costly <cit.>. Besides, the human design bias may cause some potentially effective architectures beyond human knowledge to be missed. Therefore, in contrast to current CD approaches focusing on improving model inputs, this paper aims to develop more effective diagnostic function architectures for CD. As an automated neural architecture design paradigm <cit.>, NAS has been widely used in many research domains <cit.> and has made significant progress since it was first proposed by Zoph and Le <cit.>. Existing NAS approaches have been used to search the best architectures of various prevailing DNNs, including convolutional neural networks (CNNs) for computer vision (CV) tasks <cit.>, recurrent neural networks (RNNs) for natural language processing (NLP) <cit.> and speech-related <cit.> tasks, graph neural networks (GNNs) for tasks with non-Euclidean data <cit.>, and Transformers for CV <cit.>, NLP <cit.>, and speech-related <cit.> tasks. However, due to the difference in search space among different domains, these NAS approaches cannot be applied to search for the optimal diagnostic function architecture. Besides, the architectures of existing diagnostic functions can be seen as a general model, which handles three given types of inputs and outputs a scalar or a vector. To this end, this paper proposes an evolutionary multi-objective optimization-based NAS approach for automatically designing effective diagnostic function architectures to build novel CDMs.
Here, we first design an expressive search space by summarizing existing architectures, and then we propose MOGP to explore the devised search space by optimizing the objectives of model performance and model interpretability simultaneously. To the best of our knowledge, our work is the first to apply the NAS technique to the CD task. § THE PROPOSED SEARCH SPACE FOR CD As stated above, the search space of existing NAS approaches <cit.> is task-specific, which cannot be applied to CD for searching diagnostic functions. To design the search space for CD, we first observe and summarize existing CD approaches that design novel diagnostic functions. Then we find that their diagnostic functions combine three types of input vectors in a linear or non-linear manner and finally output a scalar or a vector for the score prediction. In other words, the diagnostic function architecture can be seen as a general model that has three input nodes, some internal nodes, and one output node. Both its output node and its internal nodes are computation nodes to handle their inputs by their adopted operators. We can find that the general model is similar to models under the search space of RNN in NAS <cit.>. Fig. <ref>(a) plots the RNN cell found by Efficient Neural Architecture Search (ENAS) <cit.>, where x[t] and h[t-1] are two input nodes, avg is the output node, and others are computation nodes. By summarizing previous CD approaches, we collected some operators that be used for computation nodes of the general model. These operators are divided into two types, i.e., unary and binary operators, which are used to receive one input and two inputs, respectively. Here computation nodes (including the output node) in the general model can only handle at most two inputs, which is different from that of RNNs. As a result, we take the general model as the proposed search space for CD, where 15 candidate operators in Table <ref> can be adopted by each computation node and the following are their descriptions: * Unary operators. Each unary operator only takes one input x and returns its output. FFN_D returns the vectors, Sum, Mean, and FFN return the scalar outputs, while the other eight unary operators return the outputs having the same shape as their inputs, which contains five arithmetic operators (i.e., Neg, Abs, Inv, Square, and Sqrt) and three activation functions Tanh <cit.>, Sigmoid <cit.>, and Softplus <cit.>. * Binary operators. Three binary operators considered in the general model: in addition to addition Add and multiplication Mul, we further consider a Concat operator to aggregate two input vectors into one vector. Note that the output shapes of Add and Mul are determined by the maximal shape of two inputs. For example, when one input is a scalar x and another input is a vector 𝐲∈ R^1× D, the output shape is same as 𝐲 (equal to 1× D). Here FFN and Concat are NN-based operators containing learnable parameters, which make the proposed search space more expressive than that of RNN. Note that the general model may output a scalar y or a vector 𝐲, because the general model may adopt different operators while the output shapes of candidate operators are different. To make the prediction process successful, the general model has to execute the following process to get the prediction score of student s_i on exercise e_j: r̂_ij={ y, if y ∈ R^1 FC_3(FC_2(FC_1(𝐲))), if 𝐲∈ R^1× D ., where FC_1(·), FC_2(·), and FC_3(·) are three FC layers with output dimensions H_1, H_2, and H_3, respectively. 
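A minimal PyTorch-style sketch of this output stage is given below. The hidden sizes H_1, H_2, H_3 and the activation functions are illustrative placeholders rather than values prescribed by the paper, and no constraint is imposed on the FC weights in this sketch.

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Output stage of the general model, sketched from the rule above: a scalar
    CD-cell output is passed through unchanged, while a D-dimensional vector
    output is mapped to a score via FC_1, FC_2, and FC_3."""
    def __init__(self, D=64, H1=128, H2=64, H3=1):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(D, H1), nn.Sigmoid(),
            nn.Linear(H1, H2), nn.Sigmoid(),
            nn.Linear(H2, H3), nn.Sigmoid(),
        )

    def forward(self, y):
        # y: (batch, 1) for a scalar CD-cell output, or (batch, D) for a vector output.
        return y if y.shape[-1] == 1 else self.fc(y)
```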
The three FC layers are set to hold the monotonicity property according to the experiences in <cit.>. By doing so, the probability of a correct response to the exercise is monotonically increasing in any dimension of the student's knowledge proficiency, which enables the FC layers to hold the same interpretability as the identity operation. For better understanding, Fig. <ref>(b) presents the general diagnostic function architecture under the proposed search space. The general diagnostic function architecture (the general model) contains two parts. The first part (termed the CD cell) is similar to the RNN cell in NAS, and the second part is a three-layer FC NN or an identity operation, as shown in (<ref>). The CD cell has several computation nodes (represented by ovals) and at most three input nodes (𝐡_S, 𝐡_E, and 𝐡_C, represented by triangles). Different from the RNN cell, its output node is also a computation node, and computation nodes are selected from unary operators (denoted by green nodes) or binary operators (denoted by orange nodes). After obtaining the CD cell's output y, either the identity operation or the three-layer FC NN is applied to get the final prediction r̂_ij. As stated in <cit.>, a promising search space should contain not only a large number of expressive neural architectures but also as many existing handcrafted architectures as possible. To demonstrate the effectiveness of the proposed search space, we take four representative CDMs, including IRT, MIRT, MF, and NCD, as illustrative examples. Fig. <ref>(a) to Fig. <ref>(d) present their diagnostic function architectures under the proposed search space. As can be seen, these typical CDMs can be easily represented under the proposed search space by specific computation nodes and selected input nodes. § THE PROPOSED EMO-NAS-CD This section first presents the proposed EMO-NAS-CD framework, and then sequentially gives the individual representation, the objectives, and a tailored genetic operation. Finally, other details are introduced. §.§ Overall Framework of EMO-NAS-CD The main idea of the proposed EMO-NAS-CD is to search for high-performance diagnostic function architectures with high interpretability under the devised search space. To this end, we aim to solve the NAS-CD task by optimizing a multi-objective optimization problem (MOP), which has two objectives: model performance and model interpretability. To avoid the difficulties of using vector-based encoding for the devised search space (e.g., the variable-length encoding problem), we propose MOGP (a popular type of MOEA <cit.>) to solve the MOP by transforming architectures into tree architectures and encoding them as trees, because genetic programming (GP) <cit.> can solve tree-encoding-based problems well. The devised MOGP follows the framework of NSGA-II <cit.>, and we devise an effective genetic operation and a population initialization strategy for the MOGP. As can be seen, the proposed EMO-NAS-CD is a MOGP-based NAS approach for CD.
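The resulting search loop, whose five steps are described in detail below, can be outlined as follows; this is an illustrative skeleton under our own naming assumptions, not the authors' released implementation, with the concrete selection, variation, and evaluation routines supplied by the surrounding system.

```python
def emo_nas_cd_search(pop_size, num_gen, init_population, tournament_select,
                      genetic_operation, evaluate, environmental_selection):
    """Skeleton of the EMO-NAS-CD search loop (illustrative sketch).

    All arguments are callables: `evaluate` trains an architecture for Num_E
    epochs and returns (AUC, interpretability); the others implement the
    initialization, mating selection, variation, and NSGA-II-style survival."""
    population = init_population(pop_size)             # Step 1: initialization
    for ind in population:
        ind.objectives = evaluate(ind)
    for _ in range(num_gen):
        mating_pool = tournament_select(population)    # Step 2: binary tournament
        offspring = genetic_operation(mating_pool)     # Step 3: tailored genetic operation
        for ind in offspring:
            ind.objectives = evaluate(ind)             # Step 4: train and score offspring
        population = environmental_selection(population + offspring, pop_size)  # Step 5
    # Output the non-dominated individuals of the final population
    return [ind for ind in population
            if not any(dominates(other, ind) for other in population)]

def dominates(a, b):
    """a dominates b when both objectives (to be maximized) are no worse and one is better."""
    return (all(x >= y for x, y in zip(a.objectives, b.objectives))
            and any(x > y for x, y in zip(a.objectives, b.objectives)))
```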
The overall framework of the proposed EMO-NAS-CD is summarized in Fig. <ref>, which is mainly composed of five steps. Firstly, a population initialization strategy (in Section <ref>) is employed to randomly generate Pop individuals as population 𝐏. Second, the standard binary tournament selection is employed to select individuals to form the mating pool 𝐏'. Next, a novel genetic operation is applied to 𝐏' to generate offspring individuals and form the offspring population 𝐐. Fourth, the architecture encoded by each individual of 𝐐 is trained for a certain number of epochs (Num_E) to compute its objective values. Fifth, the environmental selection in NSGA-II <cit.> is employed to identify and maintain the individuals that hold better objective values from the union of population 𝐏 and offspring population 𝐐. The second to fifth steps are repeated until the maximal number of generations Gen is exceeded; then the non-dominated individuals are finally output. For details, Algorithm <ref> also summarizes the main procedures of the proposed EMO-NAS-CD. It is worth noting that, during the whole optimization process, some individuals' neural architectures achieve terrible performance, close to random guessing. The reason is that these architectures encounter the gradient explosion problem when they continuously use some operations (e.g., Square, Tanh, and Softplus), which makes it difficult for general training paradigms to train them well. To solve this problem, in the individual evaluation, we adopt a simple early-stopping strategy <cit.> to stop the training of a neural architecture if its performance does not improve for several epochs. §.§ Individual Representation To represent architectures in the proposed search space, vector-based encoding is naturally the first choice because of its high popularity in many real-world optimization problems. Suppose the vector-based encoding for the i-th computation node of an architecture is n_i={link_1, link_2, Op}, where link_1 and link_2 denote which nodes' outputs node n_i receives and Op denotes which operator is adopted; then each architecture is represented by a set of nodes {n_i| 1≤ i ≤ num_c} (num_c denotes the number of computation nodes). However, as shown in Fig. <ref>(b), the architectures in the proposed search space are variable. Thus it is difficult and unsuitable to represent architectures by vector-based encoding, due to two challenges. The first challenge is that num_c is not fixed but variable, and thus the vector-based encoding of each architecture is variable-length, which is difficult for general MOEAs to handle <cit.>. Secondly, different from the output node of the RNN cell, the output node in the proposed search space is a computation node and receives at most two inputs. This poses a decision constraint when using vector-based encoding as the individual representation and thus is also difficult to solve.
To avoid the above issues, we propose to utilize tree-based representation to encode architectures in the proposed search space, and we propose MOGP to solve the MOP to search for novel CDMs because of the superiority of GP in solving tree-encoding-based optimization problems <cit.>. For this aim, we have to transform the architectures under the proposed search space into their corresponding single-root tree architectures. Fig. <ref>(e) gives the transformation process by taking the general model as an illustrative example: the input nodes are seen as the leaf nodes of the tree architecture, the output node is equal to the root node, and the whole tree architecture can be seen as a single-root binary computation tree, where the obtained tree architecture is similar to the Koza-like tree in GP <cit.>. Based on the tree-based representation, the proposed MOGP can effectively search diagnostic function architectures, but it still needs the assistance of some tailored strategies, such as genetic operations and an initialization strategy. §.§ Objectives To make the searched architectures hold good performance and high interpretability, the proposed MOGP optimizes the following MOP: max_𝒜 F(𝒜)={ f_1(𝒜) = AUC(𝒜, D_val); f_2(𝒜) = model interpretability(𝒜) }, where 𝒜 denotes the candidate architecture to be optimized. f_1(𝒜) represents the AUC (Area Under the ROC Curve) value <cit.> of 𝒜 (i.e., model performance) on the validation dataset D_val. f_2(𝒜) represents the model interpretability of architecture 𝒜, since an architecture holding high model interpretability is preferred for CD. To obtain a reasonable f_2(𝒜), an intuitive idea is to compute the model complexity by counting how many computation nodes and leaf nodes are in 𝒜.
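The structural quantities used by this and the measure adopted later (tree depth, breadth, and number of computation nodes) can all be read directly off the tree encoding; a minimal sketch of that encoding and of the counting helpers is given below. The exact counting conventions (e.g., whether leaves contribute to depth) are our own assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """Tree-based encoding of a diagnostic-function architecture (sketch).
    Leaf nodes carry an input name ('h_S', 'h_E', or 'h_C'); internal nodes
    carry a unary or binary operator name and one or two children."""
    op: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    def is_leaf(self) -> bool:
        return self.left is None and self.right is None

def depth(node: Node) -> int:
    """Number of computation-node levels on the longest root-to-leaf path
    (leaves counted as 0; assumed convention)."""
    if node.is_leaf():
        return 0
    return 1 + max(depth(c) for c in (node.left, node.right) if c is not None)

def breadth(node: Node) -> int:
    """Number of leaf nodes, i.e., how many inputs the architecture uses."""
    if node.is_leaf():
        return 1
    return sum(breadth(c) for c in (node.left, node.right) if c is not None)

def num_computation_nodes(node: Node) -> int:
    if node.is_leaf():
        return 0
    return 1 + sum(num_computation_nodes(c)
                   for c in (node.left, node.right) if c is not None)
```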
However, this is not entirely reasonable <cit.>, since much research <cit.> indicates that the model depth plays the most important role in model interpretability. Besides, some research on interpretable trees <cit.> further indicates that binary operators commonly provide better interpretability than unary operators. More importantly, recent CDMs prefer introducing extra inputs and more feature fusions into the models because this makes the model behavior easier to interpret <cit.>. This implies that more inputs in CDMs represent higher interpretability, further indicating that binary operators are more important than unary ones since binary operators introduce more inputs. However, a model complexity measure that merely counts the number of nodes in 𝒜 cannot reflect the above facts. As shown in Fig. <ref>, despite having more nodes, we think 𝒜_2 holds better interpretability than 𝒜_1 due to its smaller depth. Due to their larger breadth (more inputs), 𝒜_4 and 𝒜_3 should be better than 𝒜_1 but worse than 𝒜_2. 𝒜_5 should be better than 𝒜_3 but worse than 𝒜_4 due to containing more nodes. Even compared to 𝒜_3, which has the same depth as 𝒜_1, 𝒜_1 is worse than 𝒜_3 because 𝒜_3 holds a larger breadth than 𝒜_1, where the tree breadth is equal to the number of leaf nodes. As can be seen from the comparisons among 𝒜_1, 𝒜_3, and 𝒜_4, a tree holding a larger breadth means more binary operators are contained in the tree, and thus indicates the tree holds higher interpretability. Besides, 𝒜_4 holds higher interpretability than 𝒜_5 since 𝒜_4 has fewer computation nodes. With the above considerations, we characterize the model interpretability of architecture 𝒜 by its tree's depth, breadth, and computation node number. The model interpretability of 𝒜 is first determined by the tree depth depth, then by the tree breadth breadth (equal to the number of leaf nodes), and finally by the number of computation nodes num_c. As a consequence, f_2(𝒜) can be computed by f_2(𝒜) = (1-(depth-1)/10)+breadth/200+(0.001-num_c/20000), where we make the depths of all architectures less than 10 in this paper to hold high model interpretability, and thus f_2(𝒜)∈ (0,1) has five decimal places. The first decimal place is determined by depth, the second and third decimal places are determined by breadth, and the remaining decimal places are determined by num_c. Note that the three parameters (10, 200, 20000) are empirically set and can be other choices, which will not affect the proposed approach's result as long as two criteria are met. Firstly, the decimal place(s) determined by depth, breadth, and num_c do not affect each other; secondly, the decimal place(s) determined by depth are the most important, followed by breadth, and finally num_c. In Fig. <ref>, the depths of the five architectures are 3, 2, 3, 3, and 3, their breadths are 1, 2, 3, 4, and 4, and their computation node numbers are 3, 3, 4, 4, and 5. According to (<ref>), their second objective values are 0.80585, 0.91085, 0.81580, 0.82080, and 0.82075, respectively, which is consistent with our considerations. §.§ Genetic Operation For effective offspring generation in the proposed MOGP, we propose an effective genetic operation based on four sub-genetic operations modified from and inspired by GP <cit.>. The following introduces the four modified sub-genetic operations: Exchange, Delete, Replace, and Insert. * Exchange.
Given two individuals, 𝐏'_1 and 𝐏'_2, randomly select two sub-trees, t_1 and t_2, from the trees corresponding to two individuals, respectively, and then exchange two sub-trees to generate two new trees and form two offspring individuals, 𝐎_1 and 𝐎_2. (The root nodes will not be selected.) * Delete. Given a parent individual 𝐏'_1, randomly select a computation node from the tree corresponding to 𝐏'_1. To delete this node, one of the left and right child trees of this node will be randomly connected to its parent node (if exists). The newly generated tree can form the offspring individual 𝐎_1. * Replace. For the tree corresponding to individual 𝐏'_1, randomly select a node to be replaced and replace the node's operator by a new operator randomly sampled from Table <ref>. If the original operator is unary but the sampled operator is binary, a new leaf node will be generated and connected to this node as its child tree, where the new leaf node is randomly sampled from {𝐡_S, 𝐡_E, 𝐡_C}. If the original operator is binary but the sampled operator is unary, only one of the left and right child trees of this node will be kept. As a result, offspring individual 𝐎_1 can be obtained based on the revised tree. * Insert. A new operator is first randomly sampled from the predefined operators, and a computation node is randomly selected from individual 𝐏'_1. Then, the sampled operator is inserted between this node and its parent node (if exists) as a new computation node. If the sampled operator is binary, an additional leaf node will be randomly sampled from {𝐡_S, 𝐡_E, 𝐡_C} and added to the new computation node as its child tree. Finally, offspring individual 𝐎_1 will be generated. Note that the root node will not be involved in Exchange since the Exchange operation will be meaningless or ineffective if root nodes are selected. For a better understanding of the above operations, Fig. <ref> gives some illustrative examples of generating offspring individuals. The pink area denotes the selected computation nodes (or corresponding sub-trees) needed to be handled, and the light purple area represents the executed changes. As can be seen, Exchange will lead to big modifications between generated individuals and corresponding parent individuals, while other operations commonly lead to small modifications. Therefore, the Exchange operation can be used for exploration, and others can be used for exploitation <cit.>. Equipped with four sub-genetic operations, we empirically combine them to form our proposed genetic operation, whose basic procedures are summarized in Algorithm <ref>. Four operations are called four sub-genetic operations because they can constitute many other genetic operations when adopting different combination manners. In Algorithm <ref>, two individuals 𝐏'_i (i-th individual in 𝐏') and 𝐏'_i+1 are first selected from the mating pool 𝐏', and the numbers of computation nodes in the two individuals are computed as num_c^i and num_c^i+1 (Lines 3-4). Second, randomly sample an integer rand from {1,2,3,4} if both num_c^i and num_c^i+1 are not smaller than 2, otherwise randomly sample rand from {3,4}. Numbers 1, 2, 3, and 4 correspond to Exchange, Delete, Replace, and Insert , respectively (Lines 5-9). This is because Exchange and Delete will be ineffective, even meaningless, if there is only one computation node in the individual. Third, the sub-genetic operation corresponding to rand will be applied to 𝐏'_i and 𝐏'_i+1 to generate offspring individuals 𝐎_1 and 𝐎_2 (Lines 10-14). 
Next, the obtained 𝐎_1 and 𝐎_2 will be added to the offspring population Off (Line 15). The first to the fourth step will be repeated until all offspring individuals are generated. After that, an individual repair strategy in Section <ref> is used to make offspring individuals feasible since there exist some constraints for some operators in computation nodes of trees (Line 17). For example, Sum, Mean, FFN, and Concat only receive vectors as their inputs. Finally, the obtained offspring population Off is output. §.§ Related Details In the mating pool selection of EMO-NAS-CD, two individuals are first randomly selected each time, and then their non-dominated front sizes and crowding distance values are compared to keep the better one. The computation of non-dominated front size and crowding distance for each individual is the same as for NSGA-II <cit.>. Due to the simple topologies of tree architectures, this will generate many duplicated individuals. To address this issue, a simple archive stores the individuals that have appeared and identifies whether a newly generated individual has already occurred. In addition, there are the population initialization strategy and the individual repairing strategy in the proposed approach. §.§.§ Population Initialization Instead of evolving architectures entirely from scratch <cit.>, we aim to introduce prior knowledge about existing CDMs' diagnostic functions into the search process. To this end, one half of the individuals in the population are generated from four existing CDMs (IRT, MIRT, MF, and NCD) by applying the proposed genetic operation. To maintain the diversity of the population and avoid getting trapped into local optima, another half of individuals are randomly generated from scratch. Here, we utilize a hyperparameter Node_range = {node_h1, node_h2} to limit the computation node number sampled in each randomly generated individual. Here node_h1 and node_h2 refer to the lower and upper bounds of the number of generated nodes. §.§.§ Individual Repair Most operators in Table <ref> can be applied to the input with any shape, except for Sum, Mean, FFN, and Concat, which can only receive one-dimensional vectors as their inputs. The first three operators are specially used to extract a high-level scalar feature from vectors, while Concat is specially used to concatenate and map two vectors to one vector. Therefore, one generated individual is infeasible and needs repairing if its contained nodes are equipped with the above four operators but take scalar inputs (termed infeasible nodes). To tackle this issue, we first execute the post-order traversal for each individual to check whether each node is feasible and then directly replace the operator of the infeasible node with other unary operators or other binary operators (e.g, replace Concat by Add, and replace Mean by Neg). §.§.§ Complexity Analysis The time complexity of the proposed EMO-NAS-CD is mainly determined by two components, i.e., the training of each architecture and the optimization process of NSGA-II. Suppose the size of a training dataset is |D_train|, the time complexity of training each architecture <cit.> is O(Num_E× |D_train| × D), and the time complexity of one generation of NSGA-II is O(Pop^2) <cit.>. Therefore, the overall time complexity of EMO-NAS-CD is O(Pop× Gen × Num_E× |D_train| × D) +O(Pop^2× Gen). Since Num_E× |D_train| × D ≫ Pop× Gen, the time complexity of EMO-NAS-CD can be regarded as O(Pop× Gen × Num_E× |D_train| × D). 
On the other hand, its space complexity is mainly determined by the population and the offspring population, each of which has Pop individuals encoded by trees. Suppose the average number of computation nodes in the trees is AvgNum; the space complexity of an individual is O(AvgNum×3) since each node needs three numbers to specify its operation and two subtrees. As a result, the whole space complexity of EMO-NAS-CD is O(AvgNum×3× Pop × 2), i.e., O(AvgNum× Pop × 6). § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets To verify the effectiveness of the proposed EMO-NAS-CD, we conducted experiments on two real-world education datasets, ASSISTments <cit.> and SLP <cit.>. We have summarized the statistics of the two datasets in Table <ref> and present their descriptions as follows: * ASSISTments (ASSISTments 2009-2010 skill builder) <cit.> is an openly available dataset created in 2009 by the ASSISTments online tutoring service system. Here we adopted the public corrected version that does not contain duplicate data. As can be seen, there are more than 4 thousand students, nearly 18 thousand exercises, and over 300 thousand response logs in the dataset. * SLP (Smart Learning Partner) <cit.> is another public education dataset, published in 2021. SLP collects the regularly captured academic performance data of learners during their three-year study on eight different subjects, including Chinese, mathematics, English, physics, chemistry, biology, history, and geography. The dataset contains nearly 58 thousand response logs of 1,499 students on 907 exercises. According to the experiences of previous work <cit.>, we filtered out students with fewer than 15 response logs for all datasets to ensure that there are sufficient data to model each student for diagnosis. §.§.§ Compared Approaches and Metrics To validate the effectiveness of the proposed approach, we compared the diagnostic function architectures found by the proposed EMO-NAS-CD with state-of-the-art CDMs, including DINA <cit.>, IRT <cit.>, MIRT <cit.>, MF <cit.>, NCD <cit.>, RCD <cit.>, CDGK <cit.>, and KSCD <cit.>. The detailed descriptions of these comparison CDMs can be found in Section <ref>. The source codes of most compared approaches are available at <https://github.com/orgs/bigdata-ustc/repositories>. Note that the results of RCD on SLP are not reported since RCD needs extra manually enhanced inputs that SLP does not have. To measure the performance of all CDMs, three evaluation metrics, including AUC, accuracy (ACC), and root mean square error (RMSE), are adopted. §.§.§ Parameter Settings * 1. Architecture Settings The dimension D is equal to the number of knowledge concepts K; H_1, H_2, and H_3 are set to 512, 256, and 1, respectively. * 2. Search Settings During the search process in the proposed EMO-NAS-CD, each student's response logs in each dataset are randomly split into 70%, 10%, and 20% as training, validation, and testing datasets, respectively.
To train the architecture encoded by each individual, the Adam optimizer with a learning rate of 0.001 is used to optimize the cross-entropy loss between the prediction results and the targets, where the batch size is set to 128 and the number of training epochs Num_E is set to 30. For the proposed EMO-NAS-CD, the population size Pop is set to 100, the maximal number of generations Gen is set to 100, and the initial node range Node_range is set to {2,4}. * 3. Training Settings For more convincing results, we adopted multiple different settings to split each dataset into training and test sets for evaluating model performance, namely 50%/50%, 60%/40%, 70%/30%, and 80%/20%, as suggested in <cit.>. Each found architecture is retrained from scratch for 50 epochs; the other settings are the same as those in the above search settings. For a fair comparison, the parameter settings of all comparison CDMs are the same as those in their original papers to hold their best performance. All experiments were conducted on an NVIDIA RTX 3090 GPU. Our source code is available at <https://github.com/DevilYangS/EMO-NAS-CD>. §.§ Effectiveness of The Proposed EMO-NAS-CD Table <ref> summarizes the prediction performance comparison between the proposed EMO-NAS-CD and the comparison CDMs in terms of ACC, RMSE, and AUC values, averaged over 30 independent runs on the two datasets, where five different splitting settings are considered. Here, seven architectures (with different degrees of model interpretability) found by EMO-NAS-CD in a single run are selected for comparison, where architectures A1 to A7 are found on the ASSISTments and architectures S1 to S7 are found on the SLP. To this end, among architectures that have similar model interpretability, the one with the best performance is selected for the final comparison. For more convincing explanations, Table <ref> further shows averaged counterparts of A1 (S1) to A7 (S7): the averaged result for A1 is computed over ten different runs of EMO-NAS-CD, where in each run the architecture with interpretability similar to A1 is used; the averaged results for A2 to A7 and S1 to S7 are obtained in the same way. Besides, the Friedman test with the Nemenyi procedure <cit.> (under significance level α=0.05) was conducted on the results of the comparison CDMs and A1 (S1) to A7 (S7); this is a nonparametric statistical procedure that checks whether a set of samples is statistically different. Table <ref> summarizes the statistical results, including the significance analysis and rank of each method, where '1' indicates a significant difference between two methods and '0' otherwise. As can be observed from these two tables, nearly all architectures found by EMO-NAS-CD (except for the simplest architectures A1 and S1) exhibit significantly better performance than all comparison CDMs. Taking the results under the 80%/20% splitting setting for analysis, the boxplots of AUC values (under this setting) of the comparison CDMs and the seven found architectures are further presented in Fig. <ref> for explicit observation. As can be seen, the most effective architecture, A7, outperforms the current best CDM (RCD) by over 0.07 on the ASSISTments dataset in terms of the AUC value. Even the simplest architecture, A1, still outperforms most CDMs; it is competitive with KSCD and only worse than RCD, but KSCD and RCD use extra input information to enhance their performance.
Therefore, compared to the CDMs that do not have such input information, the performance difference between our best-found architectures and these CDMs is more significant: the performance lead of A7 over the best of these CDMs is up to 0.08 in terms of AUC, and architecture A1 also outperforms these CDMs. It can be seen that the proposed approach achieves such a tremendous performance improvement only by designing more effective architectures, without extra input information. In addition, from the comparisons between A1 to A7 and their averaged counterparts, we can find that the standard deviation of the proposed approach is very small. We can make the same observations and conclusions based on the results on the SLP dataset. For deeper insight into the found architectures, we present all non-dominated individuals found by the proposed approach on the two datasets in Fig. <ref> and Fig. <ref>, where the architectures corresponding to these individuals are further plotted in the right parts of the two figures. As can be observed, A1 or S1 is the shallowest architecture, which holds the highest model interpretability but the worst prediction performance, while A7 or S7 is the deepest architecture, which holds the best performance but the worst interpretability among all selected architectures. In addition, we can obtain some interesting and insightful observations from these best-found architectures on the two datasets. Firstly, from the comparisons of S1 and S2, A2 and A3, as well as A4 and A5, we can find that adding a proper activation such as Sigmoid or Softplus can enhance the model performance without losing interpretability. Secondly, in most shallower architectures, the exercise-related input 𝐡_E tends to be directly combined with the student-related input 𝐡_S by some binary operators, while in most deeper architectures, 𝐡_E tends to be first combined with the knowledge concept-related input 𝐡_C and then combined with 𝐡_S. Finally, all shallower architectures prefer FC layers as their second parts to output the final prediction, while for the deeper architectures with better performance, the Identity operation seems to be a more effective second part. These deeper architectures commonly obtain the final prediction with the assistance of the Mean operator. The above observations provide some valuable guidelines for manually designing novel CDMs. §.§ Architecture Transferring Validation As can be seen from Figs. <ref> and <ref>, the two sets of selected best architectures on the two datasets differ somewhat from each other. Only architecture A1 is the same as architecture S1 and similar to S2 and S4, and architectures A6 and A7 are similar to architecture S5. To further investigate the transferability and generalization of the found architectures, Table <ref> presents the performance of architectures A2 to A7 and architectures S2 to S7 on the two datasets under the 80%/20% splitting setting, where the results of A1 and S1 are not included since they are the same architecture. As can be observed, the architectures found on the ASSISTments still hold competitive performance on the SLP; similarly, the architectures found on the SLP also hold comparable performance on the ASSISTments. Note that architectures A5 and S2 generalize best, holding the most promising performance on both datasets. §.§ Ablation Study This section validates the effectiveness of the devised strategies and analyzes the parameter sensitivity.
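The ablation below compares runs by hypervolume (HV); as a reference for how such a value can be obtained for the two maximization objectives used here, a minimal 2-D computation (with an assumed reference point; not the exact implementation used in the paper) might look as follows.

```python
def hypervolume_2d(points, ref=(0.0, 0.0)):
    """Hypervolume of a 2-D maximization front w.r.t. a reference point.

    `points` are (f1, f2) pairs, e.g. (AUC, interpretability); points that do
    not improve on the reference point are ignored.  Illustrative helper only.
    """
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    pts.sort(key=lambda p: p[0], reverse=True)  # sweep from largest f1 downward
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > prev_f2:                        # non-dominated during the sweep
            hv += (f1 - ref[0]) * (f2 - prev_f2)
            prev_f2 = f2
    return hv

# Example: hypervolume_2d([(0.85, 0.91), (0.88, 0.82), (0.80, 0.95)])
```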
In the following, only the results on the SLP dataset are presented, due to the higher search cost on the ASSISTments. To verify the effectiveness of the proposed initialization strategy, we equipped the proposed approach with two other initialization strategies to form two variant approaches, EMO-NAS-CD (random) and EMO-NAS-CD (existing). The initial population of the former is randomly generated, and the latter initializes its population purely from existing CDMs. Besides, we also established another variant called EMO-NAS-CD (crossover+mutation), which generates offspring by first applying the Exchange operation and then randomly applying one of the other three operations. As a result, Fig. <ref> presents the convergence profiles of hypervolume (HV) <cit.> obtained by the proposed approach and its variants. HV measures the convergence and diversity of a population, and a larger HV value indicates better convergence and diversity. The comparison between EMO-NAS-CD and the other two variants indicates that the suggested population initialization strategy can indeed speed up the convergence and lead to better final convergence. Besides, we can observe that the proposed genetic operation works significantly better for the proposed approach than the compared genetic operation. The reason is that successively employing two sub-genetic operations to generate offspring causes major modifications between the generated individuals and their parent individuals, which can promote the exploration of the algorithm but hinder its exploitation to some extent. To sum up, the effectiveness of the proposed initialization and genetic operation can be demonstrated. To validate the effectiveness of objective f_2(𝒜) of (<ref>) in assisting the proposed approach to search for interpretable CDMs, Fig. <ref> exhibits the non-dominated individuals found by two variants of the proposed approach and plots their six representative architectures for observation. Here, the first variant takes the model complexity as the second objective: f_2^com = 1-(num_c+breadth)/30, and the second variant computes the model complexity as f_2^com_dep = 1-(num_c+breadth+depth-1)/30. f_2^com is measured by the sum of the numbers of computation nodes and leaf nodes, and f_2^com_dep additionally considers the influence of the tree depth (30 is a parameter used for normalizing the objective value). As can be seen from Fig. <ref>(a), compared to the architectures in the right part, the architectures in the left part have better performance but at the expense of a much larger increase in depth. Besides, the architectures located in the upper left area are much deeper compared to the architectures with similar performance in Fig. <ref>. The reason is that the model complexity objective prefers adding a unary operator node, whereas adding a binary operator node would introduce an extra leaf node, leading to a worse objective value. The same observation and conclusion can be drawn from Fig. <ref>(b), where the found architectures are still very deep. This is because f_2^com_dep is basically the same as f_2^com yet implicitly assigns a smaller penalty to binary operator nodes than f_2^com does, although this penalty is still larger than the penalty it assigns to unary operator nodes. Finally, the effectiveness of the devised model interpretability objective can be validated. To analyze the sensitivity of the proposed approach to the framework of MOEAs and the hyperparameters Pop and Node_range, Fig.
<ref> compares HV values on the SLP obtained by EMO-NAS-CD under different hyperparameter combinations of Pop and Node_range. According to Taguchi method <cit.>, Pop is set from 10 to 120 with step equal to 10, while node_h2 in Node_range is set from 1 to 12 with step equal to 1 and node_h1 is fixed to 2. The original EMO-NAS-CD is under NSGA-II, but EMO-NAS-CD[NSGA-III] and EMO-NAS-CD[VAEA] are EMO-NAS-CD under NSGA-III <cit.> and VAEA <cit.>, respectively. As can be seen from Fig. <ref>, firstly, the proposed EMO-NAS-CD is robust to the framework of MOEAs; secondly, the proposed EMO-NAS-CD can obtain relatively good performance when the population size is greater than 80, and it is not necessary to set Pop to 120 for a slightly higher HV value at the expense of an extra 0.2 times of cost; thirdly, the setting of node_h2 has a big influence on the result of the EMO-NAS-CD, and EMO-NAS-CD can obtain relatively good performance when node_h2 lies from 3 to 5. Therefore, current hyperparameter settings for EMO-NAS-CD are good enough to some extent. §.§ Discussion This section will discuss three guidelines for researchers in various domains after the experiments. The first guideline is for researchers in NAS. To design a task-specific NAS approach, researchers should make the best of their domain knowledge to create a search space. By doing so, the search space can include existing models for the target task and many other potential models. In addition, the search strategy should also be based on the search space's characteristics and the target task's domain knowledge. The second guideline is for researchers in CD, inspiring them on how to design effective CDMs, where the detailed guideline can be found in the last paragraph of Section <ref>. The third guideline is for researchers interested in NAS and intelligent education. Considering the success made by our approach, it is promising for other tasks in intelligent education to employ the NAS technique to design effective neural architectures. Besides, researchers can borrow experiences from this paper to design the objectives of model interpretability, generalization, and robustness, formulate their multiple objectives as a MOP, and then employ a suitable MOEA to solve the MOP. § CONCLUSION AND FUTURE WORK In this paper, we proposed to design novel CDMs by leveraging evolutionary multi-objective NAS. Specifically, we first proposed an expressive search space for CD, which contains a large number of potential architectures and existing architectures. Then, we proposed an effective MOGP to search high-performance architectures with high interpretability in the search space by optimizing the MOP having two objectives. To avoid some optimization difficulties, each architecture is first transformed into its corresponding tree architecture and then encoded by tree-based representation for easy optimization. Besides, in the proposed MOGP, an effective genetic operation is designed for offspring generation, and a population initialization strategy is devised to accelerate the convergence. Experimental results demonstrate the superiority of the architectures found by the proposed approach to existing CDMs in terms of performance and model interpretability. This work has shown the promising prospect of leveraging NAS for CD, but there still exist some threats to the validity of the proposed approach, including internal, external, and construct threats. Firstly, the devised model interpretability objective is the primary internal threat. 
As seen from Fig. <ref>, Fig. <ref>, and Fig. <ref>, the proposed approach finds relatively different architectures when different model interpretability objectives are adopted. Besides, the proposed model interpretability objective is empirically designed based on some experiences from decision trees, which may limit the emergence of novel architectures as well as the real-world application of the found architectures, due to a somewhat imperfect definition of model interpretability. Therefore, we would like to design more reasonable model interpretability objectives in the future. Secondly, the dataset utilized in the proposed approach is the main external threat. We can find that the architectures found on different datasets are quite different, which indicates that the architectures found by the proposed approach on a single dataset are not general for the cognitive diagnosis task. Besides, the size of the utilized dataset affects the search efficiency of the proposed approach, which leads to an extremely high computation cost when a large-scale dataset is met; e.g., the search cost on ASSISTments is about 15 GPU days. Therefore, in the future, we would like to design generalized CDMs and explore surrogate models <cit.> to reduce the search cost. Finally, the proposed search space is the main construct threat since it is designed based on the summary of existing architectures and forces all architectures to be single-root trees. Despite its high effectiveness, the current search space may limit the emergence of more potential architectures since CDMs should not always be single-root trees. Therefore, it is interesting to devise other types of search spaces that contain more effective CDMs.
http://arxiv.org/abs/2307.04036v1
20230708195101
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
[ "Tong Steven Sun", "Yuyang Gao", "Shubham Khaladkar", "Sijia Liu", "Liang Zhao", "Young-Ho Kim", "Sungsoo Ray Hong" ]
cs.HC
[ "cs.HC", "cs.AI", "cs.CV", "cs.LG" ]
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations George Mason University USA [email protected] Emory University USA [email protected] George Mason University USA [email protected] Michigan State University USA [email protected] Emory University USA [email protected] NAVER AI Lab Republic of Korea [email protected] George Mason University USA [email protected] The local explanation provides heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has been one of the most popular explainable AI (XAI) methods for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalent perspective about the local explanation as a valuable and indispensable envision in building CNNs versus the process that exhausts them due to the heuristic nature of detecting vulnerability. Moreover, steering the CNNs based on the vulnerability learned from the diagnosis seemed highly challenging. To mitigate the gap, we designed DeepFuse, the first interactive design that realizes the direct feedback loop between a user and CNNs in diagnosing and revising CNN's vulnerability using local explanations. DeepFuse helps CNN engineers to systematically search “unreasonable” local explanations and annotate the new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotation such that the model doesn't introduce similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants made a more accurate and “reasonable” model than the current state-of-the-art. Also, participants found that the way DeepFuse guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable. [500]Computing methodologies Learning settings [500]Human-centered computing Human computer interaction (HCI) [500]Human-centered computing Interaction paradigms [500]Computing methodologies Machine learning § INTRODUCTION As the societal impact of Computer Vision (CV) models grows <cit.>, it has become crucial to find an effective way to steer Convolutional Neural Networks (CNNs) to align their behaviors with users' mental model <cit.>.
Using Explainable AI (XAI) techniques can be the first step to steering Machine Learning (ML) models, as spotting repeating cases that “surprise” ML engineers for a similar reason can help the engineers to generalize the cases to a bigger pattern that signals the vulnerability of their model <cit.>. While XAI techniques are increasingly becoming essential for revising ML models, there are relatively fewer options available for CNNs <cit.>. Among few, local explanation–the technique that overlays a saliency map on a single image to visualize the attentive areas that the model referred to–has been widely used by tremendous ML engineers due to its visual straightforwardness <cit.>. By seeing the attention of a model, a user can assess whether the rationale behind the prediction is reasonable <cit.>. Checking the reasonableness of CNN's “attention” through local explanation can improve CNN's performance in two ways. First, checking the attention can help ML engineers to identify the bias of a dataset used in training. In diagnosing a gender classifier, for example, if a model is attentive to contextual objects, such as “snowboard” to predict a man <cit.> or “shopping cart” to infer a women <cit.>, it means that these contextual objects often appear with a specific gender class in the training dataset. As a result, such an imbalanced distribution of contextual objects causes the model attention to be biased towards contextual objects rather than focusing on the person in the image to classify the gender <cit.>. Using a biased dataset can induce a model to reference contextual objects in prediction, which is defined to be unfair <cit.>. Therefore, diagnosing CNNs using local explanation can reduce bias ingrained in a training set, leading the forthcoming model to be fairer <cit.>. Second, detecting unfair predictions through local explanation can lead to a more robust and generalizable model with stable accuracy. The repeated occurrence of unfair predictions is related to the vulnerability of a CNN, which can be essential for defending against malicious attacks. For example, imagine that an attacker found a gender classifier that tends to classify images with snowboards as men. In that case, the attacker can prepare counter-contextual examples that show women riding snowboards in a backdoor attack to drop the model accuracy. Steering CNNs to fix the found vulnerable patterns can thus yield a model that provides stable accuracy performance regardless of object types appearing in future images. In summary, if the dataset used in training is biased <cit.>, the model fails at demonstrating reasonable attention for specific predictions, which we call to be unfair predictions <cit.>. Such unfair cases, in turn, make the CNN model vulnerable <cit.>. Collectively, the phenomenon of a CNN shifting attention in an unreasonable way due to biased data refers to the problem of contextual bias <cit.>. While contextual bias has become a highly crucial issue in ML and beyond <cit.>, spotting the vulnerability and steering the model is highly challenging or not even feasible <cit.> even for experienced ML engineers <cit.>. Detecting unreasonable attention through local explanation can be “just noticeable” from human eyes, but the current solutions are predominantly a machine-centric approach with limited human involvement <cit.>. 
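For readers unfamiliar with how such saliency maps are typically produced, the following is a minimal, hedged sketch of Grad-CAM (one widely used local explanation method, discussed further below) over a standard PyTorch classifier; the hook-based wiring and layer choice are illustrative assumptions rather than any specific tool's implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_class, conv_layer):
    """Minimal Grad-CAM sketch: heatmap of where `model` looks for `target_class`.
    `image` is a (1, 3, H, W) tensor; `conv_layer` is the convolutional layer to
    inspect (e.g., model.layer4[-1] for a ResNet). Illustrative only."""
    activations, gradients = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        model.zero_grad()
        score = model(image)[0, target_class]
        score.backward()
        acts, grads = activations[0], gradients[0]            # (1, C, h, w)
        weights = grads.mean(dim=(2, 3), keepdim=True)        # pooled gradients per channel
        cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return cam[0, 0]                                      # (H, W) saliency map in [0, 1]
    finally:
        h1.remove(); h2.remove()

# Hypothetical usage:
#   model = models.resnet50(weights=None).eval()
#   heatmap = grad_cam(model, torch.randn(1, 3, 224, 224), target_class=207,
#                      conv_layer=model.layer4[-1])
```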
In Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW), despite the rich body of research dedicated to better supporting ML engineers <cit.>, little effort has been made to design interfaces that can efficiently and effectively steer CNNs to mitigate contextual bias. Further, while there exists a breadth of empirical studies focused on understanding ML engineers' practice, challenges, and design opportunities (e.g., <cit.>), it is not well understood how ML engineers apply local explanation in steering CNNs to mitigate contextual bias or what the practical challenges are. Through this work, we aim to bridge the technical and empirical gaps we identified in the problem of contextual bias. Specifically, we aim to create a novel interactive system that can empower ML engineers to leverage local explanations in diagnosing the vulnerability of CNNs and steer them. To inform our design based on real practice, we conducted a formative study (S1) with five industry CNN experts who have more than 5 years of model development experience. We sought to understand how they use local explanations, what the limitations of existing tools are, and how a new design can practically help their practice. As a result, we identified 3 challenges and 3 desires that we were able to use to streamline their process in our new design. Based on the findings, we devised DeepFuse, the first interactive system that realizes a direct feedback loop that connects a user and a CNN using local explanations for model steering. First, DeepFuse enables a user to systematically categorize unreasonables—the images that have overlaps between the model attention and contextual objects—among images used in validation. Next, for the categorized unreasonables, DeepFuse suggests the “reasonable” attention boundary that excludes contextual objects to help a user effortlessly finish the annotation task required for steering. Third, using the user-confirmed boundary input, DeepFuse steers the target model by optimizing both the prediction loss and attention loss (minimizing prediction errors and shifting the model's attention towards confirmed “reasonable” areas). Finally, DeepFuse helps a user to see what has been changed before and after steering. In particular, DeepFuse provides the evaluation results regarding (1) how the attention quality has become reasonable and (2) how the improved model attention quality affected the model accuracy performance. In the summative study (S2), we evaluated DeepFuse with 12 experienced CNN builders, asking them to revise a gender classifier across two days. We found that using DeepFuse enabled every participant to achieve better model accuracy performance and model attention quality than applying the current state-of-the-art techniques. Meanwhile, after using DeepFuse, we also found that over 80% of the participants perceived that using DeepFuse would improve their capability regarding model vulnerability assessment and performance improvement. Based on the two studies, we provide implications for design on Beyond XAI—how future design can convert XAI-driven insights into actionable steering plans such that the AI's behavior can gradually be aligned to the human mental model. This work offers the following contributions: * S1: Understanding How Local Explanation Is Used in Improving CNNs: We extend our knowledge about how field practitioners apply local explanations when working on CNNs and what the challenges are. Based on the analysis, we suggest how new design can mitigate their difficulties in steering CNNs.
* Design Contribution: We devise and instantiate DeepFuse, a novel, end-to-end, and interactive design that enables ML engineers to practice a systematic case-based vulnerability diagnosis and model steering. * S2: Understanding the Effect of DeepFuse: Through the study with 12 experienced CNN developers, we understand how the new design can make a difference in building more accurate and robust CNNs. * Implications for Design for Steerable AI: Based on the results of S1 and S2, we describe how the HCI and CSCW communities can contribute to making XAI-driven insights more useful and actionable through steerable AI design. § RELATED WORK In this review, we first dive deeper into understanding the problem of contextual bias and explain how unreasonable model attention can detrimentally affect a CNN's model performance. Second, we review landmark XAI-driven systems in HCI devised for diagnosing Deep Neural Networks (DNNs) and discuss how the findings can be applied to resolve the problem of contextual bias through an interactive system. Next, we cover how recent advances in explanation-guided steering techniques can be applied to implement an interactive and integrated model steering environment. Then we highlight the remaining technical and empirical challenges in HCI. When CNNs are not trained properly with generalized and representative datasets, various kinds of bias can introduce several weaknesses in the model performance <cit.>. Imagine that one engineer is preparing a set of images for training a dog detection model. In preparing the data, 50% of the images would show a dog to balance positive and negative cases <cit.>. The problem can start when some contextual objects, such as a ball, appear more frequently in positive cases than negative ones <cit.>. Using such a biased dataset, a model would establish a “spurious” correlation between a dog and a ball <cit.>. In such a case, the model's attention visualized through local explanation is on the ball rather than a dog <cit.>. Consequently, when given an image that shows a ball, the model may likely say that it detected a dog by seeing the ball, regardless of whether a dog appears in the image <cit.>. As such, this phenomenon of “contextual bias” refers to the case where a model's attention shifts to contextual objects which are not directly relevant to the model's goal <cit.>. Consequently, using this potential vulnerability, an attacker may be able to drastically decrease model accuracy by showing ball images without dogs <cit.>. Furthermore, a CNN's shifting its focus to a contextual object incurs the fairness issue <cit.>; while model accuracy is accepted as a “golden standard” in modern ML research for evaluation, there is growing concern that putting insufficient emphasis on the quality of model explanation can lead us to have a technical debt <cit.>. This aspect of a CNN's blind decision made by referring to contextual objects has become crucial in the Fairness, Accountability, and Transparency (FAccT) community and beyond <cit.>. In handling contextual bias, several studies outside of HCI commonly apply mathematical approaches rather than incorporating human input <cit.>. For example, Singh et al. used Class Activation Maps as a “weak” automatic attention annotation <cit.>. Feature augmentation <cit.> is another technique proposed for de-biasing using disentangled representation. Hirota et al. provided a way to analyze skewed data distributions to attain unbiased human-like reasoning <cit.>.
While each method has its pros and cons, there has been no ideal breakthrough. In recent years, ML communities' approaches are gradually shifting towards involving more human inputs <cit.>. Aligning with this direction, local explanations, such as Grad-CAM <cit.>, started to catch attention as an XAI technique that can mitigate contextual bias. It enables a user to spot the unreasonable model attention at a glance, and perhaps this aspect makes the technique the most widely used XAI technique for investigating CNNs <cit.>. Meanwhile, in HCI and CSCW, despite the wide range of novel systems proposed for helping ML engineers <cit.>, we didn't recognize a system directly focusing on handling contextual bias. When we scope the approaches related to Deep Neural Networks, we found the two perspectives useful in handling contextual bias through local explanation. The first takeaway is that a bottom-up approach—the design that helps users understand the vulnerable patterns by exploring specific cases through local explanation <cit.>—can provide a more straightforward and intuitive flow than a top-down approach which aims at helping a user to understand global structure or rules to explain how DNNs make a prediction <cit.>. Prospector <cit.> and What-if tool <cit.> belong to the bottom-up design that can help ML engineers to see the instance-level of prediction cases to gradually realize a set of patterns for making prediction <cit.>. On the other hand, top-down approaches include XAI techniques and visual analytic components to help a user to understand the “landscape” of prediction rules, structure, and decision boundaries. For instance, Squares <cit.> and Blocks <cit.> are some of the earliest designs that explain how DNNs predict the multi-class problem. MLCube Explorer <cit.>, TwoRavens <cit.>, and Visus <cit.> present the model comparison feature, helping ML engineers more easily decide the model they would like to deploy. ActiVis <cit.>, RuleMatrix <cit.>, CNN explore <cit.>, ExplainExplorer <cit.>, DeepEyes <cit.>, RNNVis <cit.>, NeuroCartography <cit.>, and Dodrio <cit.> fall into visual analytic approaches. The second takeaway is that by including every feature required for assessing and steering in a single, end-to-end systems can reduce the cost of switching the context between the diagnosis to the refinement <cit.>. EnsembleMatrix <cit.>, ModelTracker <cit.>, Tenserflow Graph Visualizer <cit.>, and explAIner <cit.> present end-to-end environments that combine diagnosis and model refinement. This review concludes that local explanations can help a user to easily diagnose the model vulnerability for easing contextual bias in a bottom-up fashion. Meanwhile, including both diagnosis and steering in a single system can further help ML engineers. In realizing this design goal, the first technical challenge is understanding how to steer a CNN upon finding the unreasonable model attention. In recent years, new techniques have enabled steering the AI's behavior using human input through local explanation. For example, Attention Branch Network <cit.> is a pioneering method that allows humans to directly adjust the boundary of model attention. More advanced techniques, such as GRADIA <cit.>, RES <cit.>, and GNES <cit.> have been proposed. While they can be potentially effective, they have never surfaced or been used by ML engineers through interactive systems. 
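What these explanation-guided steering techniques have in common is a training objective that combines a standard prediction loss with an attention (explanation) loss grounded in human annotation; the sketch below illustrates that shared idea in a hedged, generic form — the concrete attention terms of ABN, GRADIA, RES, or GNES each differ from this.

```python
import torch.nn.functional as F

def steering_loss(logits, labels, saliency, human_mask, attention_weight=1.0):
    """Generic explanation-guided steering objective (illustrative sketch).

    logits:      (B, num_classes) model outputs
    labels:      (B,) ground-truth class indices
    saliency:    (B, H, W) differentiable attention maps (e.g., Grad-CAM), in [0, 1]
    human_mask:  (B, H, W) binary masks of human-confirmed 'reasonable' regions
    """
    prediction_loss = F.cross_entropy(logits, labels)
    # Pull the model's attention toward the annotated reasonable regions.
    attention_loss = F.binary_cross_entropy(saliency.clamp(0, 1), human_mask.float())
    return prediction_loss + attention_weight * attention_loss
```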
The second challenge is the lack of studies aimed at understanding how ML engineers use and perceive local explanations in their CNN-building workflows. There has been a series of empirical studies aimed at learning the workflows of ML engineers and data scientists. The directions include understanding how they use XAI tools <cit.>, how ML beginners learn XAI tools for their model building <cit.>, how ML experts view automated AI <cit.>, how ML experts collaborate in using XAI tools, and beyond <cit.>. Despite the popularity of local explanations, we did not identify work that specifically focuses on understanding ML engineers' current practices and challenges. We therefore believe that an interactive system is essential to bridge the gap between computational techniques and human-centered design to diagnose and resolve contextual bias. Since diagnosing and steering a CNN is a deep cognitive process that requires dense and repetitive interaction with a system, conducting a formative study in advance would increase the chance of yielding a practically useful design <cit.>.

§ STUDY 1: FORMATIVE STUDY

Through the reviews, we defined our specific goal of designing an interactive system that can mitigate contextual bias embedded in CNNs. In doing so, we learned that local explanations provided in a bottom-up fashion could help a user efficiently and effectively examine a CNN's vulnerable patterns and steer it. To situate our design considerations in real practice, we conducted a formative study with industry practitioners.

§.§ Method

We conducted open-ended, semi-structured interviews with professional CNN developers. In recruiting them, we first posted a flyer on a company bulletin board and communicated with industry acquaintances who use local explanations. As a result, we recruited five experts with an average of over 5 years of experience building state-of-the-art CNN solutions in their fields (see Table <ref>). In shaping the details of the interviews, we strictly followed interview methodology in HCI <cit.>. First, in scoping our directions of inquiry, we motivated participants to focus on sharing their lived experiences, specifically their practice and perception of local explanation, while not discouraging them from connecting their stories about local explanation with other experiences. Consequently, in designing our questions (shown in Appendix A), we started from their general background and workflow in the early phase as follows. In particular, we asked about (1) their roles and areas of expertise, (2) the CNNs they build, and (3) their development settings and tool belts. Then we moved to local-explanation-related questions aiming to learn their (4) workflows, (5) reasons for use, (6) challenges in using local explanation, and (7) their wish lists. Second, to construct an appropriate dialogue with our participants, two authors—who completed HCI-centered training in their PhDs and are currently working in the specialized domains of Human-AI Interaction and Deep Learning in academia and industry, respectively—participated in every interview. One author proceeded with the interview questions, while the second author asked follow-up questions to gain more specific insights. Across the interviews, we collected 4 hours and 31 minutes of video. On average, each interview lasted 54 minutes, ranging from 37 minutes to 67 minutes.
In our analysis, we used a qualitative coding process <cit.> that entails two authors' coding, diagramming, and consensus-based theme generation. First, the two authors each created initial sets of codes and memos from the interview records <cit.>. Second, they shared the codes and analyzed the emerging commonalities and discrepancies related to participants' perceived challenges and desires. For the discrepancies, the two authors discussed the reasons for the disagreement and decided whether each matter could be agreed upon or merged into existing commonalities. Finally, after considering each other's code choices, they reviewed all the coded text, quotes, and memos to refine and derive the final structure.

§.§ Results

From every participant, we heard strong reasons why they apply local explanations in their practice. The overarching reason is predominantly related to retaining the “generalizability” of their model. Generalizability here describes the degree to which the model would “shake” when it sees unexpected, different cases it did not see in the past. P5 mentioned: “we strongly believe that that's the way to go, those sorts of visualizations are clearly the path towards understanding how to improve the model. I think it's a required envision. If the mistake is turned out to be unreasonable, I'm going to explore my data and see why it's not robust enough.” P4 shared his observation that accurate prediction and reasonable attention might be somewhat correlated. He believed that making a model focus on the right gaze, so that it is robust to unexpected cases, was more crucial than optimizing performance on the test set, as one cannot prepare a perfect dataset that represents every case equally. All participants shared experiences of spotting unreasonable attention while checking for vulnerabilities to remove their model's weaknesses. P3 mentioned that he uses local explanation in model comparison tasks mainly because it can be a good indicator of how robust a model can be: “I see model behaves very differently task-by-task. ResNet works very well in one task, and VGG works well in a different task. I have no idea why. And the local explanation tells me why.” While attaining a CNN's generalizability has been discussed in previous literature, our findings extend it in two directions. First, we identified three practical challenges participants encounter when applying local explanation in their everyday workflow. Second, we identified three desires that current local explanation-driven techniques cannot realize but future solutions could.

§.§.§ Challenges

C1. Iterative and Exhaustive Diagnosis: In diagnosing their models through local explanation, participants described the process as one where “nothing is given”. In detecting vulnerable patterns using local explanation, participants proactively and iteratively shaped their assumptions and collected cases. Generally, participants went through several rounds of iterative target image selection and local explanation generation, driven by dense inductive and deductive reasoning. This iterative case-based reasoning seemed to entail nontrivial labor, which exhausts ML engineers. P1 mentioned: “I wish I could check the (saliency) maps for every case. But coding to layout multiple maps takes some effort and does not become feasible as the dataset gets bigger.
In the end, I normally have to compromise, just checking instances in an inaccurate category if I'm lucky, or even fewer.” P3 developed a multi-class classifier with 4,000 to 5,000 classes. He mentioned that the required mental effort for detecting vulnerable attention grows exponentially as the number of classes increases. In the end, he can only consider a few “major” classes. Many of our participants remarked that their model vulnerability analysis using local explanation is mostly a group effort, and sharing insights with colleagues adds even more time. In P2's case, his group built a web-based tool where team members can upload image groups and view the local explanation results for discussion, due to the complexity of coding and positioning results on a screen.

C2. Ad-Hoc Diagnosis Leads to Uncertainty: The next challenge our participants mentioned was the uncertainty they had to cope with in determining vulnerable patterns. They seemed to suffer from two sources of uncertainty. Since finding vulnerable patterns stems from their intuition, our participants mentioned that there is no guarantee that their selection covers every major and minor vulnerability type. In addition, upon spotting local explanations that gaze at unreasonable objects, they had to decide whether the cases show merely noise or a signal that leads to a vulnerable pattern. Often, our participants' vulnerability determination was done on “gut feeling”, which made them perceive the process as heuristic and ad hoc. P2 mentioned: “I feel like showing the pros and cons of model's attention using local explanation is cherry picking, in many cases. Even if someone says the quality of model attention is good or bad with some examples, there is no ground one can say the cases represent a real pattern or merely subtle noise that won't likely happen in the future.” P3 shared similar difficulties: more classes could result in more bad-attention cases, and even when these problematic cases were identified, they might recur in the future. P4 said that the hardship in verifying the severity of a vulnerability is closely related to the fact that there is no measure we can rely on to see the “impact of the detected cases” from the perspective of the whole dataset. A minority opinion was that the feeling of uncertainty in the process was connected to doubt about the diagnosis results. For instance, P1 mentioned that he does not believe he can completely remove the bias no matter how much effort he puts in or what tools he uses.

C3. Hard to Steer as Intended: Every participant agreed that changing the model's future behavior based on learned insights is challenging or often not feasible. P5 mentioned that the insights were not actually insightful as they are often unactionable: “Surprisingly, it wasn't really insightful when we looked at the mistakes our model made, and the saliency map was totally unreasonable. It was like it doesn't know what to do here, something is missing, architectural leap or something I don't know, we didn't quite solve a lot of the failure cases.” He also shared his “dream tool” idea for instant attention adjustment: a drawing application with which he could manually guide CNNs to focus on previously missed image features and retrain them through backpropagation. P1 mentioned his current struggle to fix a model by fortifying the training set, such as adding more data to counterbalance the failure class.
He was still looking for alternative methods, as the performance was not promising.

§.§.§ Desires

D1. The Way to Interact: Beyond Command Line: Some mentioned that local explanations could not fully realize their potential with command-line interfaces, as creating them requires some work. This aspect is connected to C1; participants felt that making multiple queries to select images and examine model attention can become arduous. From an interaction design perspective, shifting from a command line-based interface to a directly manipulable GUI can streamline the process. P1 remarked: “I feel like a complex task like this (vulnerability diagnosis), we would mostly benefit from GUI rather than a tool with a command line. It takes too long to create saliency maps. Showing the maps with different selection criteria and sorting can be super helpful.” By lowering the cost of creating local explanations, participants could examine a larger volume of model attention results more effectively than they currently can. Some also mentioned the need to reorganize results after each search, which was not easy with their current tools. P4 always looked for failure cases manually but struggled when there were too many cases. He suggested summarization or pre-filtering features that prioritize interesting cases. This finding indicates it is worth designing an interactive analytic system that enables a user to easily formulate queries and see the results.

D2. Evaluating Model: Model Accuracy and Beyond: We had multiple chances to hear participants' voices regarding what they care about when evaluating their models. In particular, we found that our participants shared a consensus that model accuracy is a gold-standard metric that should not be sacrificed, even when the purpose of revision is not to boost accuracy (e.g., mitigating contextual bias). For instance, P4 was very curious to see whether improving model attention could improve model accuracy, and if accuracy did not improve, he would care less about attention quality improvement. P5 also mentioned the tension between fairness and accuracy in model development: “I had much of a concern for fairness in my practice, it was more the kind of thing where prioritizing fairness connects to increasing failure case. This would result in my client making less money. If it was a courtroom, there's a much stronger debate here. But it's very serious in industrial cases that fairness is important, but the accuracy is still the king.” At the same time, they shared the concern that the way current tools report model accuracy is not enough to understand how accurate and how reasonable their models are. P2 found it very difficult to check the saliency maps for accurate cases, and he felt uncomfortable making decisions that simply overlook accurate cases, since doing so could penalize model generalizability. He focused less on test-set performance than on long-run generalizability. This internal tension helped us realize the delicate way ML experts see model accuracy: it is still the “King” that should not be compromised, but they may need more than that to make their models generalizable and trustworthy enough.

D3. A Balance between “Pain” and “Gain”: One aspect we learned from our participants is that ML engineers are generally more conservative than we thought about testing a new feature that uses a human-in-the-loop-driven approach, due to its high cost.
Regarding the idea of using human input for steering CNNs, some participants mentioned that the direction has potential but would only work if the workload is manageable. For instance, P3 mentioned that he would be unlikely to use a new tool if the expected effort exceeds what he currently invests in model diagnosis. Not surprisingly, many participants mentioned the difficulties of eliciting data from in-house annotators or workers on crowdsourcing platforms. P5 said: “The workflow of human-in-the-loop to adjust attention using human help, no one would say it's a bad idea that you could include humans and get more data and improve it. This is an obvious virtuous aspect, but it's not like you just sign up for data bricks, and you're done. Getting human labels would probably need a little bit of training. You don't want that to be an expense to ML engineers.” This helped us realize what makes a practical tool readily adoptable: it must automate the vast volume of work via intelligent automation and minimize the need for outsourcing to human annotators.

§.§ Design Considerations

While we found that local explanations serve as an indispensable tool for diagnosing the vulnerabilities of participants' data and models, participants suffered at each stage of C1: detecting cases that signal vulnerable patterns, C2: verifying them to be “real”, and C3: steering. Meanwhile, we also found they desire to D1: have an interactive and directly manipulable design that can cut down the effort of writing many queries and parameters, D2: use a tool that can improve model accuracy while also making model attention more reasonable, and D3: achieve the new capability with a reasonable amount of additional labor. As D1 suggests, we found that an interactive interface can be well appreciated by ML engineers, especially when completing their task requires deep thinking and iterative interaction with the tool. In designing the system, we further synthesized our findings and established the design considerations shown below. Table <ref> also shows how the participants (“PID”) support the identified challenges (“C”), desires (“D”), and design considerations (“DC”).

* DC1. Semantic local explanation browser: Seeing the results of local explanations to find cases that signal vulnerable patterns is the first stage of mitigating contextual bias. In this stage, providing a semantic browser—with which users can see, rank, and select the dominant semantic object types observed within the model's area of attention for every image—could reduce ML engineers' uncertainty and save them time. In building a dog detector, this feature may enable user queries such as “find every image attentive on treat” or “rank every object type by its occurrence in a dataset.” Descriptive statistics, such as how frequently the object types appear, can help users understand the degree to which an object grabs the model's attention. DC1 addresses C1, C2, and D2 (based on all 5 participants).
* DC2. Labor-efficient selection of “unreasonables” and adjustment of their attention boundaries: Using the browser, users can diagnose a CNN by finding the cases that show unreasonable attention (“unreasonables”, hereinafter). The users would then annotate the areas that would make the attention reasonable. The system would need to provide this annotation at a lightweight interaction cost. DC2 is related to D3 (based on 2 participants: P3 and P5).
* DC3. A fine-tuning mechanism that can boost both model accuracy and model attention quality: One of the most evident consensuses among the participants was their difficulty in steering CNNs. Therefore, the tool must help users clearly understand how the quality of the CNN's model attention, visualized through local explanation, has changed based on the input they provided. While doing so, the tool must not compromise the model's accuracy. DC3 is derived from C3 (based on 2 participants: P1 and P5).
* DC4. Evaluation results that show what has been changed: The last stage of the workflow is to help users understand how their attempts made a difference. In showing the differences, providing views of the change in prediction accuracy, the change in model attention quality, and a combined view that explains how the attention change relates to accuracy would facilitate users' understanding of the impact. DC4 is derived from C3 and D2 (based on 4 participants: P1, P2, P4, and P5).

§

Based on the DCs in S1, we designed . is the first interactive system designed and built to support a CNN engineer's contextual bias-related tasks based on their practical needs. The early part of 's workflow is defined based on what we learned from ML engineers: First, a user prepares the base CNN model and datasets to be used for diagnosis (the “loading model’’ and “loading dataset’’ tabs). Second, a user collects the cases where the model's gaze is on unreasonable objects by browsing local explanation results (i.e., the “assessing attention quality’’ tab in ). The rest of the stages follow the recent literature that proposes model steering through local explanation <cit.>. Third, for the collected “unreasonables”, a user corrects the attention boundary to shift the CNN's future gaze away from contextual objects and starts to fine-tune the base CNN model with the annotations (the “adjusting attention’’ tab in ). Finally, a user sees how these adjustments made the CNN different (the “evaluation’’ tab in ).

§.§ Interacting with

Consider a scenario for Sarah, an ML engineer who has trained a dog classifier built on a CNN architecture. She found the model's accuracy was not enough for deployment and found a few cases where she could not understand why it failed. She decided to examine her model using local explanations. First, she created local explanations for a few accurate and inaccurate cases over multiple rounds to reason about what could be wrong. After her search, she found that the model's focus sometimes moves to specific contextual objects, such as balls and treats. To study whether such cases would repeat, she decided to invest her time in generating local explanations for all the images and checking them serially. She put some effort into coding for loading and saving files (models, images, and statistics). For the dubious cases, she decided to collect similar datasets for further testing (C1). Along the way, she started to wonder if the contextual object types she identified were comprehensive. She decided to examine other object types (C2). Upon confirming every case and object type that signals the vulnerability of her model, she will need to find a way to steer the model's behavior (C3). Using , her workflow can make better progress with less effort. First, she uploads the base CNN and the image data she will use for diagnosis.
Leveraging the automatic local explanation object aggregation feature, will provide a list of object types that her CNN is gazing at, such as dogs, cats, balls, and treats, with examples. She asks to see every case that is attentive to objects other than “dogs”. Based on her specification, local explanation results are grouped by object type category (DC1). She can quickly skim through each category (e.g., dogs, balls, treats, and cats) and confirm dubious local explanations as “unreasonables” in a few clicks. will suggest an automatically drawn “reasonable” boundary for the unreasonables and ask Sarah to confirm or manually refine it (DC2). Upon her confirmation, will fine-tune the base model such that it won't make the same mistakes (DC3). After the fine-tuning, Sarah can check how the model's performance regarding accuracy and attention quality has changed (DC4).

§.§ Workflow and System Components

supports stage-based workflows to inspect the model. The global navigation bar (see Fig. <ref>) on top of the screen provides access to each stage.

§.§.§ Loading Model and Data

allows users to upload their base CNN models and datasets. In designing the model upload feature, we considered compatibility with one of the most widely used Python libraries for building CNNs, PyTorch <cit.>. Next, the “loading dataset” tab helps a user upload the image datasets for diagnosis (a validation set, hereinafter) and for a final evaluation after the fine-tuning (a test set, hereinafter). In particular, the validation set is used for diagnosing contextual bias in the next stage. Using the test set in the last stage, a user can evaluate the final model by comparing before and after treatment and more.

§.§.§ Attention Quality Assessment

This stage has two goals. First, helping a user understand which semantic object types are causing contextual bias and to what degree (DC1). Second, helping a user categorize every image as reasonable or unreasonable (i.e., whether its local explanation avoids or focuses on contextual objects) (DC2), which will be used in the next stage. For both goals, the core mission is to significantly cut down a user's labor compared to their current practice. For the first goal, provides a list of semantic object types that can be observed in the model's focused area, ordered by how frequently they appear. In detecting the semantic object types, adopts a pre-trained object detection model <cit.> that is capable of detecting the 80 object types defined in the Microsoft COCO dataset <cit.> (e.g., “person”, “bicycle”, “dog”, etc.). A user decides whether the detected object types are relevant or contextual to the CNN's goal. In a gender classification problem, for example, the relevant object type can be a human face, while other object types, such as neckties or bicycles, can be contextual. Second, based on the relevant object types specified by a user, intelligently suggests whether the local explanations of the images in a validation set are reasonable or unreasonable (see Fig. <ref>; green borders suggest the local explanations are reasonable while yellow borders suggest unreasonable ones). The suggestions can reduce the time a user spends assessing the quality of local explanations. In positioning the suggestion results, separates them into two sides: inaccurate images on the left and accurate ones on the right. This layout helps determine which semantic object contributes to accurate/inaccurate records and by how much.
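To make this grouping step concrete, below is a minimal sketch (not the system's actual implementation) of how images could be bucketed by the dominant COCO object type under the model's attention. It assumes torchvision 0.13 or later and a normalized saliency map per image (e.g., from Grad-CAM); the thresholds and function names are illustrative assumptions.

# Sketch: group images by the dominant COCO object under the model's attention.
from collections import defaultdict

import torch
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
detector = maskrcnn_resnet50_fpn(weights=weights).eval()
COCO_NAMES = weights.meta["categories"]            # COCO object type names


def dominant_attended_object(image, saliency, score_thr=0.7, sal_thr=0.5):
    """image: 3xHxW float tensor in [0, 1]; saliency: HxW map in [0, 1].
    Returns the detected object type that overlaps most with the attention."""
    with torch.no_grad():
        det = detector([image])[0]
    attn = saliency >= sal_thr                     # binarized attention area
    overlap = defaultdict(float)
    for label, score, mask in zip(det["labels"], det["scores"], det["masks"]):
        if score < score_thr:
            continue
        obj = mask[0] >= 0.5                       # instance mask -> boolean
        inter = (attn & obj).sum().item()
        if inter > 0:
            overlap[COCO_NAMES[int(label)]] += inter
    return max(overlap, key=overlap.get) if overlap else "background"


# Usage: build browsable groups, e.g., "every image attentive on ball".
# groups = defaultdict(list)
# for i, (img, sal) in enumerate(zip(images, saliencies)):
#     groups[dominant_attended_object(img, sal)].append(i)

Ranking the resulting group sizes then yields the descriptive statistics mentioned above, i.e., how frequently each object type appears inside the model's area of attention.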
When a user encounters a suggestion that is not right, (s)he can flip the suggestion by clicking the image, the semantic object group, or all of the accurate or inaccurate images at once. Finally, provides 3 options for visualizing local explanation results: color-scale, gray-scale, or polygon mask (see Fig. <ref>-C).

§.§.§ Adjusting Attention

To support the later part of DC2 (correcting the attention boundary of images categorized as unreasonables), needs an efficient annotation experience, especially because boundary drawing is an expensive annotation task. In doing so, shows a side-by-side comparison between the current model attention on the left and the suggested attention boundaries on the right-hand side (see Fig. <ref>). The suggested boundaries are generated by the Mask R-CNN model <cit.> we applied in 4.2.1. If the suggested boundaries are not enough, a user can redraw them manually (see the drawing panel in Fig. <ref>). In checking the boundary suggestions, a user can separately examine (1) unreasonables that are accurate (i.e., images that were accurately predicted based on the wrong reasons, or by “luck”) and (2) unreasonables that are inaccurate (i.e., images with inaccurate predictions, potentially because of seeing wrong contextual objects <cit.>). Upon finishing the corrections for the unreasonables, becomes ready for fine-tuning using the adjusted inputs.

§.§.§ Fine-Tuning

This stage is the key to maintaining an overall effective pipeline. Based on DC3, we implemented a fine-tuning mechanism that treats attention adjustments as new guidance for revising the model and makes the process of using boundary adjustment input straightforward. The existing approach to optimizing a CNN’s performance in the fine-tuning process is to minimize only the prediction loss—an error measure between model predictions and actual values. To boost both the performance and the interpretability of the black-box CNN model, we adopted the Explanation-guided Learning framework <cit.>, where model accuracy and local explanation quality are jointly optimized with a prediction loss and an attention loss. Our intention in adding the attention loss during model training is based on the assumption that the model can learn to pay attention to the right semantic object types for the prediction task, thus naturally enhancing both explainability and generalizability. While the techniques in Explanation-guided Learning are in their early stage, some studies have started to validate how applying both the explanation loss and the prediction loss can benefit DNN performance on text data <cit.>, image data <cit.>, and graph-structured data <cit.>. However, these techniques have not been tested with human participants in their workflow. Our aim in building is to understand how “real” human participants can interact with a system to leverage the techniques and whether using the techniques can practically help users mitigate contextual bias in their CNN revision workflow.
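As an illustration of this joint objective, the following is a schematic sketch of explanation-guided fine-tuning: a standard cross-entropy term is combined with an attention term that pulls a differentiable Grad-CAM-style map toward the user-adjusted mask. This is a simplified stand-in for the objective actually used, not its implementation; the L1 attention term and the weighting constant are our assumptions.

# Schematic sketch of jointly optimizing a prediction loss and an attention loss.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2)
feats = {}
model.layer4.register_forward_hook(                # last conv block of ResNet-18
    lambda module, inputs, output: feats.update(out=output))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
LAMBDA = 1.0                                       # weight of the attention term


def joint_loss(images, labels, human_masks):
    """images: Bx3xHxW, labels: B (long), human_masks: BxHxW binary masks."""
    logits = model(images)
    pred_loss = F.cross_entropy(logits, labels)

    # Differentiable Grad-CAM-style map for each image's ground-truth class.
    fmap = feats["out"]                                        # BxCxhxw
    score = logits.gather(1, labels[:, None]).sum()
    grads = torch.autograd.grad(score, fmap, create_graph=True)[0]
    cam = F.relu((grads.mean(dim=(2, 3), keepdim=True) * fmap).sum(1, keepdim=True))
    cam = F.interpolate(cam, size=human_masks.shape[-2:],
                        mode="bilinear", align_corners=False)[:, 0]
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)    # normalize to [0, 1]

    attn_loss = F.l1_loss(cam, human_masks.float())
    return pred_loss + LAMBDA * attn_loss


# One fine-tuning step:
# loss = joint_loss(images, labels, masks)
# optimizer.zero_grad(); loss.backward(); optimizer.step()

The RES formulation adopted in practice additionally accounts for noisy, incomplete, and inconsistent human boundaries, as described next.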
For the implementation of the explanation objective in , we adopted the most recent approach, RES <cit.>, which proposes a generic robust framework for learning from a user's boundary adjustments under the assumptions that the human annotation labels can be (1) inexact in the drawn boundary, (2) incomplete in the covered region, and (3) inconsistent with the distribution of the model explanation (i.e., binary annotation vs. a boundary with an alpha channel). In benchmark tests, RES outperformed GRADIA <cit.> and HAICS <cit.> in leveraging human annotation boundaries and was robust against the aforementioned annotation noise <cit.>. In our implementation, we utilized two methods from the RES GitHub codebase[Available at: https://github.com/YuyangGao/RES]: “Baseline”, the conventional state-of-the-art fine-tuning mechanism that applies a prediction loss but not an explanation loss. This is used as a baseline to help a user understand how using can make a difference in model accuracy and model explanation quality. Next, we implemented “RES-G” as the experimental attention steering mechanism that jointly optimizes the prediction loss and the explanation loss. Upon finishing their boundary adjustments in , a user clicks fine-tune to activate the fine-tuning process. Typically, our fine-tuning mechanism takes at least a few hours, so a real-time system is not yet possible. In the system's back end, we built a schedule queue that receives the boundary inputs one by one. The corresponding fine-tuning jobs are run in order by a system administrator.

§.§.§ Evaluation Dashboard

Model evaluation is the last stage, where a user can check how their input has changed the model's various performance measures. Based on DC4, we designed this stage to help a user understand not only how model accuracy has changed but also how the quality of local explanations has shifted. Most importantly, this stage attempts to facilitate a user's understanding of how accurate/inaccurate records and reasonable/unreasonable local explanations are related. In doing so, we adopted the Reasonability Matrix <cit.>, an evaluative matrix that explains the model's performance using the following four groups:

* Reasonable Accurate: The group of accurately predicted records with reasonable attention. The bigger the group is, the more generalizable the model is.
* Unreasonable Accurate: The group of accurate records based on unreasonable attention. Records in this group can be considered “lucky guesses”. Reducing this group can increase model generalizability.
* Reasonable Inaccurate: The group of inaccurate records whose attention is nonetheless on the right area.
* Unreasonable Inaccurate: The group of inaccurate records whose attention is also on unreasonable objects. This group can be considered an opportunity group, as shifting the gaze to reasonable objects can flip the prediction from inaccurate to accurate.

To generate a Reasonability Matrix, it is required to assess whether the local explanation results are reasonable or unreasonable. provides an automatic annotation feature to avoid relying on human annotation (as D3 suggests). In particular, a user can select from 3 options.
Strict: assess a local explanation as reasonable only if the attended area includes relevant objects and contains no irrelevant objects; Moderate: assess it as reasonable if the majority of the attended area covers relevant objects while a minor portion includes irrelevant objects; Relaxed: assess it as reasonable if the attended area has any overlap with relevant objects. After a user selects a Reasonability Matrix creation option, (s)he can start the evaluation. To help a user understand what has been changed, prepares the three conditions as follows:

* M: the initial model before fine-tuning.
* M_base: the state-of-the-art fine-tuned model using M without applying attention input.
* M_exp: the fine-tuned model using M that uses attention input.

Using the three conditions, provides two pairwise comparisons: (1) Before vs. After: comparing M and M_exp, and (2) State-of-the-art vs. our approach: comparing M_base and M_exp. In each pairwise model evaluation, there are 4 types of analytic views with which users can conduct in-depth evaluations. (1) Overall interpretation: to help a user directly understand how model accuracy and attention quality have changed, the view presents a Reasonability Matrix showing percentage changes in the 4 sub-groups (see the top-left sub-figure of Fig. <ref>). The view also shows numeric comparisons to track the overall model accuracy and attention quality changes (see the bottom-left sub-figure of Fig. <ref>). Finally, a user can see the generated performance report and an attention explorer module to derive insights about the effectiveness of the model conditions (e.g., whether the “unreasonable inaccurate” cases have been reduced by attention steering on the test image data). (2) Accuracy-related analysis: this view provides accurate/inaccurate record bar plots grouped by common objects, helping users understand which semantic object types contribute to accurate or inaccurate records. (3) Local explanation quality analysis: in this view, we present IoU distribution charts. IoU (Intersection over Union) captures the overlap between the model's focused gaze and the relevant objects. An IoU of 0% means the gaze is entirely located on contextual objects, whereas 100% means the gaze exactly matches the relevant objects. The higher the IoU score, the better an attention area aligns with the ground truth. In this view, we further help users browse cases based on IoU values (e.g., show images where IoU is between 40% and 60%). (4) Record-wise attention comparison: the right screen in Fig. <ref> contains a comprehensive comparison of models’ local explanations, side-by-side for all conditions. This design helps a user quickly recognize attention quality changes among different conditions.

§.§ Implementation

is a browser-based user interface with a lightweight back end built with Python Flask, fully compatible with widely used ML and visualization libraries in Python (e.g., PyTorch, Grad-CAM, OpenCV, Matplotlib, etc.). The front end was developed using HTML, CSS, JavaScript, and D3.js to create dynamic and interactive elements (such as the attention-drawing feature) that communicate between users and models. More detailed technical settings and a live demo of can be found in our GitHub repository[Available at: https://github.com/TongStevenSun/DeepFuse].
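As a reference for the evaluation measures described above, the following illustrative sketch (not the system's code) computes an IoU between the binarized attention map and the relevant-object mask, applies the Strict/Moderate/Relaxed reasonableness checks, and tallies the four cells of the Reasonability Matrix. The saliency threshold is an assumption.

# Sketch of IoU, the Strict/Moderate/Relaxed checks, and the Reasonability Matrix.
import numpy as np


def attention_iou(saliency, relevant_mask, sal_thr=0.5):
    """IoU between the thresholded saliency map (HxW, values in [0, 1]) and
    the boolean mask of relevant objects."""
    attn = saliency >= sal_thr
    inter = np.logical_and(attn, relevant_mask).sum()
    union = np.logical_or(attn, relevant_mask).sum()
    return float(inter) / union if union else 0.0


def is_reasonable(saliency, relevant_mask, mode="moderate", sal_thr=0.5):
    attn = saliency >= sal_thr
    on_relevant = np.logical_and(attn, relevant_mask).sum()
    if mode == "strict":       # no attention allowed outside relevant objects
        return attn.any() and on_relevant == attn.sum()
    if mode == "moderate":     # majority of the attended area is relevant
        return attn.any() and on_relevant / attn.sum() > 0.5
    return on_relevant > 0     # relaxed: any overlap counts


def reasonability_matrix(preds, labels, saliencies, masks, mode="moderate"):
    """Counts for the four groups: RA, UA, RIA, UIA."""
    cells = {"RA": 0, "UA": 0, "RIA": 0, "UIA": 0}
    for p, y, s, m in zip(preds, labels, saliencies, masks):
        r = "R" if is_reasonable(s, m, mode) else "U"
        a = "A" if p == y else "IA"
        cells[r + a] += 1
    return cells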
§ STUDY 2: SUMMATIVE STUDY

The core tasks integrated into —(1) diagnosing a CNN's vulnerable patterns through local explanation and (2) making the found patterns actionable through direct model attention adjustment—have not been introduced in previous work. Further, our “system” has multiple sub-pieces connected into a “single working whole” <cit.> to streamline the target task. Due to these characteristics, and in line with much previous HCI work <cit.>, we avoid a comparative or experimental study that assumes a clear baseline. Instead, we derive our directions of inquiry by defining research questions (RQs) and then triangulate our data collection in multiple ways to answer them. Our goal in S2 is to create reusable knowledge about which pieces integrated into our system can be useful and to understand how the system, as a whole, can be effective in supporting ML engineers who mitigate contextual bias. To achieve our goal, we first aimed at understanding the effect of the workflow—how our new workflow of model steering using local explanations, introduced through an interactive environment, can make a difference for ML engineers. The research questions (RQs) in this category are: RQ1a. How has a user’s viewpoint about using attention as a method for model revision changed after experiencing our workflow? and RQ1b. How has a user’s viewpoint about using attention as a method for evaluating their model performance changed after experiencing our workflow? Next, we were curious to learn the effect of using itself as a system—how using can change the outcomes of mitigating contextual bias. In particular, the RQs in this direction are: RQ2a. How did using in the input phase make participants’ model diagnosis process different? RQ2b. How did using impact the outcome of contextual bias in terms of model accuracy and attention quality?

§.§ Method

We recruited 12 participants by snowball sampling through our network in industry and academia and by advertising on social media. In defining the S2 sample size, we followed the most common sample size in past CHI publications, as reported in Caine's work <cit.>. The participants were selected via a screening survey that asked about their demographics, their degree of expertise in building vision-based models using CNNs, the task goals of their vision models if experienced, their professional position, their experience in using local explanation, and whether they have heard of and understand the importance of detecting “wrong” attention to handle contextual bias. We are aware of the potential Hawthorne and novelty effects, which can overestimate results when participants are being studied and are new to our system <cit.>. To reduce these effects, we hired experienced CNN developers who have established their own approaches to CNN fine-tuning. Later in the study, we asked them to compare the effectiveness of our approach and their current approaches and to explain their reasoning. We recruited 12 qualified participants (2 females and 10 males, aged between 20 and 43) out of 43 who submitted the screening survey. Six participants were academic researchers, and the other six were practitioners. Eight participants identified themselves as experienced, three as intermediate, and one as beginner developers in vision-based modeling.
Although the experience distribution was imbalanced due to our consideration of including all genders' perspectives, this distribution should not affect the study, since all participants were qualified, with a good understanding of handling contextual bias and of recognizing a model's wrong reasoning from its saliency maps. Eight of the 12 participants had experience using local explanation to improve model performance in the past (see Table <ref>). <ref> summarizes the S2 workflow. Participants joined two online sessions, the input and output sessions, on two consecutive days. Participants joined the sessions virtually on Zoom and shared their screens with us. In the input session, we onboarded participants by explaining the purposes of and presenting how model evaluation could be done differently using local explanations of a standard classifier. Then participants went through a tutorial where they practiced using the interface with a toy dataset. The onboarding and tutorial took 30 minutes. After the tutorial, participants performed the early phase of tasks using the features introduced in 4.2.1, 4.2.2, and 4.2.3. After the input session, we fine-tuned the initial model (M) into 2 model conditions: a state-of-the-art model without users' inputs (M_base) and a model using our users' attention inputs on the validation set (M_exp). The output session was scheduled one day after the input session since we could not make our participants wait until fine-tuning was done. On the following day, participants joined the output session, where they used the reviewing features of to assess the model performance using the features introduced in 4.2.5. After the review, we conducted semi-structured interviews with the participants. After they finished both sessions, we provided them with 60 USD as a token of appreciation. While the input session took 90 minutes and the output session lasted two hours, as shown in Table <ref>, participants used for about 25 minutes on average in the input session (Min=12, Max=47, SD=10.43) and about 20 minutes in the output session (Min=5, Max=33, SD=8.88). The average time spent on the system across both sessions was about 45 minutes (Min=17, Max=68, SD=16.83).

§.§.§ Task, Data, and Model

While can work with any classification task, we chose a binary gender classification problem for the study. We are aware of the limitations and negative aspects of framing gender recognition as a binary classification in S2, which cannot fully represent gender diversity. For instance, automatic gender recognition primarily classifies gender through physical characteristics, which can disadvantage gender minorities <cit.>. Also, while we believe a binary framing cannot represent the diversity in gender, we chose the task because it is one of the most widely adopted tasks in the study of contextual bias <cit.>. We note that our choice of the binary classification task is meant to demonstrate the system's capability of mitigating contextual bias in a relatively simple setting with the help of well-annotated datasets used for training CNN classifiers. We also note that we explained the possible concerns stemming from binary gender classification to our participants at the beginning of the study. The dataset used in the study was selected from the Microsoft COCO dataset <cit.>, one of the most widely used datasets in the ML and computer vision communities.
The dataset was chosen because of its well-structured label formats and its 80 object classes that frequently co-appear with humans, and it has been used in contextual bias studies <cit.>. The image selection process had three steps. First, the images were filtered by the segmentation labels of the “person” class to keep single-person images only. Second, the images were re-filtered by gender-related keywords in the caption labels (i.e., “male”, “man’’, “men’’, “female”, “woman’’, “women’’). Lastly, the filtered images were examined manually to keep the best-quality images for the gender classification task, excluding images with very small human figures that were unidentifiable for classification. In total, we extracted 2,000 images and split them into 1,000 for the training set, 500 for the validation set, and 500 for the test set. Since we wanted to test ’s capabilities of detecting and reducing contextual bias, we needed a model that had reasonable performance but was vulnerable to contextual bias. We first manually added contextual objects (i.e., green star markers) to the top-left corners of selected images. The distribution of the star-added images is shown in Fig. <ref>, bottom. For the training set, 1/3 of the “male” images (N = 167) had stars added. For both the validation and test sets, the star markers were added only to the “female” images (N = 250). Then, we trained a standard ResNet-18 classifier (denoted as “M’’) using the biased image data. In deciding on the ResNet architecture for S2, we tested several models built on ResNet-18 and ResNet-50. We found no significant accuracy improvement from adding more layers to the ResNet-18 architecture. Therefore, we chose the less complex architecture to keep lightweight. Since the majority of images in the training set were original images, the model could achieve a reasonable prediction accuracy of 74% on regular images without the star markers. Note that, during training, the model only saw star markers on “male” images. When we tested the model on the validation set, where only the “female” class has star markers, the accuracy dropped to 43.8%, and 77.6% of “female” images were mispredicted. This showed that the model used the star markers that commonly appeared on “male” images as a feature when predicting images containing the same contextual object, meaning the model (M) was vulnerable to contextual bias. In generating local explanations, applies Grad-CAM <cit.> on the last convolutional layer. Due to a CNN's hierarchical structure, and as comparisons of attention maps between layers show <cit.>, earlier layers' attention maps are scattered around objects' edges and corners, whereas the focus of the local explanation takes the shape of semantic objects in later layers (see Fig. 5 in <cit.>). Using the last layer, local explanations convey more semantic, object-level meaning, which a human user can easily leverage for adjusting boundaries.

§.§.§ Input Session

At the beginning of the input session, we discussed the idea of using local explanations for mitigating contextual bias in a binary gender classification task. After the discussion, we demonstrated how participants could upload their models and datasets using . Then we explained 's model vulnerability diagnosis features described in 4.2.1 and 4.2.2 and the attention adjustment feature described in 4.2.3. At the end of the tutorial, we gave participants time to mimic the whole process using the same toy dataset and to ask any questions.
Then, we asked participants to start the main session. We erased all prior input and asked users to start the process over using a larger dataset (specifically, assessing the local explanations of the validation set) and a base model we provided. During the main session, participants had to use the system without help. The main session was video-recorded. Once participants finished their input session, we asked them to fill out an input survey with 2 questions for “absolute” and “relative” evaluations as follows:

* Q1: “[RQ2a, Absolute] I found understanding the model’s vulnerable aspects using to be _____.” (A 7-level Likert scale of usefulness. “7” is “extremely useful”.)
* Q2: “[RQ2a, Relative] Using , understanding the model’s vulnerable aspects was _____ than my current practice.” (A 7-level Likert scale of difficulty. “7” is “much easier”.)

§.§.§ Output Session

In this session, participants evaluated the performance change of the improved model on the test set. In particular, provided two pairwise comparisons (between M and M_exp, and between M_base and M_exp) (see 4.2.5). After a short output session tutorial using a toy test set, participants started the main output session using the model they fine-tuned from their input session and the larger test set. Once users were finished with all the analysis and comfortable with their findings, we moved to the semi-structured exit interview. The interview had 9 question categories designed to understand (1) their general perception of , such as the pros and cons they felt throughout the two sessions, (2) their perception of specific perspectives, including (2-a) experiencing local explanation adjustment, (2-b) applying the reasonability matrix in assessing model performance, (2-c) the features they used on day 1, and (2-d) the features they used on day 2, and (3) their suggestions for a better in the future. As in S1, two researchers attended every interview. After the interview, participants completed an output survey with 6 questions (see Q3 to Q8 below). Lastly, to check the usability of , we asked participants to fill out the System Usability Scale (SUS) survey <cit.> (see Appendix B).

* Q3: “[RQ2b, Absolute] I found the capability of regarding improving the model performance using my input was _____.” (A 7-level Likert scale of effectiveness. “7’’ is “extremely effective’’.)
* Q4: “[RQ2b, Relative] I found the capability of regarding improving the model performance was _____ than my current practice.” (A 7-level Likert scale of effectiveness. “7’’ is “extremely effective’’.)
* Q5: “[RQ1a, Absolute] Adjusting the saliency maps (as guided) can be effective in building future models.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q6: “[RQ1a, Relative] Adjusting the saliency maps (as guided) can practically change my model-building practice to a better form in the future.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q7: “[RQ1b, Absolute] On top of a model accuracy performance, using saliency maps (as guided) can provide an effective measure for evaluating my future model performance.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q8: “[RQ1b, Relative] On top of a model accuracy performance, using saliency maps (as guided) can practically change the way I evaluate my future model performance to a better form.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
For the analysis of the exit interviews, we followed a process similar to the one applied in S1. The difference from S1 was the existence of the video recordings. The recordings were reviewed multiple times for transcription, code development, and analysis to synchronize with the notes. The codes and memos were developed gradually by the two authors as we conducted more interviews. After the final interview, each of the authors developed themes and shared them with the other, developing a consensus-based diagram that articulates the main insights we learned relevant to explaining the RQs.

§.§ Results

In this section, we aggregated all survey and interview responses from the participants for the RQs we developed. The S2 results suggest that (1) the workflow of local explanation-based attention steering provided a diverse perspective in diagnosing model vulnerability, (2) the direct steering design made the process of model revision straightforward, and (3) every participant enjoyed improved key model performance measures. The specific sub-tasks, how they were improved, and why the participants perceived them as improved are in Table <ref>. We believe these results are not merely due to the Hawthorne and novelty effects, since we have objective evidence of performance improvement and assessment efficiency. We also organized the aspects that need improvement in Table <ref>, which we share in detail in the Discussion section. The behavioral data we collected shows that every participant produced a model that improved on (1) model accuracy, (2) the overlap between the model's focus and the relevant object types (IoU), and (3) the proportion of images with reasonable attention in the test set. The average accuracy of the 12 users’ fine-tuned models (M_exp) was 82.95%, with an average IoU of 0.39 (“Intersection over Union” with respect to the attention ground truth of the user-defined gender-related object: “person”), and the average proportion of reasonable attention was 89.55% (see Fig. <ref>-A). All these performances outperformed both the initial model (model M: accuracy = 47.6%, IoU = 0.12, attention reasonability = 51.8%) and the model fine-tuned with the state-of-the-art method without attention input (model M_base: accuracy = 79.0%, IoU = 0.26, attention reasonability = 79.4%). Regarding the attitudinal survey data, every absolute and relative question's mean was over 4. In terms of absolute questions, 100% of ratings were above 4-“neutral” (M = 6.19, SD = 0.67). This indicates that participants were satisfied with the overall quality of the workflow and the system. Regarding the relative questions, 89.6% of ratings were above 4-“neutral” (M = 5.94, SD = 1.24), which indicates that they felt applying the workflow and the system could practically improve their current practice.

§.§.§ [RQ1-a] Workflow: Adjusting model attention as a CNN steering method

After completing the user studies, the majority of users strongly agreed that adjusting local explanations can effectively improve model performance (Q5 rating: M = 6.42 out of 7-“strongly agree”, SD = 0.64, as shown in Fig. <ref>-B). Also, people think their current modeling processes can be practically improved by adopting the attention adjustment method (Q6 rating: M = 6.17 out of 7-“strongly agree”, SD = 1.07). During interviews, all participants shared positive impressions about the effectiveness of attention adjustment in improving model accuracy, which is the primary objective of conducting model fine-tuning.
They also confirmed that the impact of contextual bias was reduced as attention quality increased through attention steering. By adding a new perspective from humans, a model also becomes fairer in making predictions for each target class (P2, P5, P10). Participants with experience in model attack and defense (P1, P2, P3, P4) shared the possibility of using our method to improve the robustness of models against backdoor attacks, letting the model ignore small perturbations on an image and focus on the right area. We learned that after trying our method, people became more open to considering human-in-the-loop and visual approaches to model steering; most ML researchers use algorithmic approaches for handling contextual bias, such as data augmentation, hyperparameter tuning, and ensemble methods, rather than extensively using visualization in the fine-tuning process.

§.§.§ [RQ1-b] Workflow: Adding quality of model attention in evaluating CNNs

Based on the feedback, users agree that using an attention evaluation method (e.g., the reasonability matrix as guided, based on Gao et al. <cit.>) is effective in diagnosing model vulnerabilities (Q7 rating: M = 6.33, SD = 0.47, see Fig. <ref>-B), and they are very likely to use this method to improve future practices (Q8 rating: M = 6.08, SD = 0.76). Participants think that the attention assessment features in provide more diverse and rigorous perspectives for assessing a model's vulnerabilities, especially the reasonability matrix, which can be seen as an expansion of the accuracy dimension toward understanding “why” a model underperforms (P1, P3, P5, P6, P8, P9, P10, P12). P1 and P4 endorsed the necessity of including a reasonability matrix assessment step when checking the model’s decision-making. The matrix interpretation was straightforward to most users, as it is related to the widely used confusion matrix concept in the data science domain. The dynamic shifts of model vulnerability were well captured by the reasonability matrix (3 vulnerable sub-groups: “UIA - unreasonable inaccurate’’, “UA - unreasonable accurate’’, and “RIA - reasonable inaccurate’’). One major task we designed for users to achieve was the recognition of a backdoor attack in the data (i.e., added green star markers that may trigger a false prediction by the model), and all participants were able to identify the impact of the attack by evaluating attention quality using the reasonability matrix.

§.§.§ [RQ2-a] System: How improved CNN diagnosis

Compared with people's current practices, was confirmed as a useful (Q1 rating: M = 5.92 out of 7-“extremely useful”, SD = 0.76, see Fig. <ref>-B) and easier (Q2 rating: M = 6.0, SD = 1.15) tool for understanding model vulnerability, benefiting from its labor-efficient mechanisms. The step-by-step nature of the assessment process in allows users to systematically detect both contextual and manipulated bias in the data, making it easier to reduce model vulnerability (P3, P9, P12). People believe this GUI design can significantly reduce human effort in coding and visualization management for comprehensively assessing a CNN (P2, P3, P5, P6, P7, P8, P9, P10, P12). ML engineers are well aware of the advantages of using visualization to compare metrics and surface bias, but it is a cumbersome task (e.g., repetitive file creation and loading, lack of visual explorers for local explanations, etc.). Instead, people mostly use command lines and unintuitive numeric comparisons for checking vulnerabilities.
One important feature that people liked was the grouping of local explanations by detected objects (e.g., “person”, “bicycle”, etc.), which allowed them to check attention quality and accuracy changes at the common-object level (P2, P3, P6, P9, P12). Some users pointed out that keeping consistent criteria for annotating attention quality for the classification task could be tricky given subjective uncertainty (P2, P4, P6, P9, P11). P6 mentioned that during the initial exploratory analysis of some models, users might not yet have good/bad attention criteria for annotating the attention. P10 shared an experience of exploring which objects cause contextual bias, where the biggest challenge was making a reasonable assumption at first and evaluating it over time. This challenge is critical if the annotation task is outsourced to multiple people.

§.§.§ [RQ2-b] System: How improved CNN revision outcomes

According to the survey responses, people found highly effective in the performance steering task (Q3 rating: M = 6.08 out of 7-“extremely effective”, SD = 0.64, see Fig. <ref>-B). For the same task, people found it slightly more effective than their current approaches (Q4 rating: M = 5.5, SD = 1.66), as 2 users preferred their own approaches and rated 2-“less effective”. Aligning model attention with human perceptions can effectively revise a model's performance, and with 's adjustment mechanisms (i.e., the attention drawing panel and boundary suggestions, as shown in Fig. <ref>), people can directly embed their intention and domain knowledge into the CNN (P2, P4, P9, P10). Regarding model performance comparison, people were able to reveal the overall context of the image data and the corresponding impact on the model (accuracy and attention quality) through the detected-object sub-grouping of (P1, P2, P3, P5, P6, P8, P9, P11, P12). An industry practitioner who worked primarily on model quality assurance mentioned that black-box models are not usually accessible to engineers outside the core ML team, and had features that could be practical for them to evaluate model performance in that situation (P11). In the last evaluation view of for record-wise attention comparison (as shown on the right of Fig. <ref>), P7 was curious about the opposite shift of attention quality (i.e., a change from “right’’ to “wrong’’ attention after model fine-tuning) and wanted to see some quantitative measures of it. The IoU distribution visualization was another measure in that could provide a rigorous comparison between model conditions (with/without attention adjustment), revealing the positive relationship between accuracy and attention quality improvement (P2, P8, P11). As people mentioned, IoU is not commonly used in classification evaluation, compared to segmentation tasks, and it is typically difficult to visualize.

§.§ Discussion

Overall, the system received acceptable usability <cit.> with an average SUS score of 76.88 (SD = 14.70, see the SUS box plot in Fig. <ref>-B; the rated scores (0-4) were converted to a 0-100 scale based on Brooke's SUS guide <cit.>), exceeding the average SUS level of 68. There were 10 out of 12 participants (all except P3 and P5) who gave above-average SUS scores. Although this study was not designed for system-level comparison, we wanted to understand the effect of our fine-tuning mechanism using the data collected from real users. We conducted Mann-Whitney U tests to confirm the significance of the performance improvement after using attention input.
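A minimal sketch of how such a test could be run with SciPy is shown below; the per-participant arrays are placeholders, not the study's actual numbers, and with a complete separation of the two samples the baseline-side U statistic reaches its minimum of 0.

# Sketch: one-sided Mann-Whitney U test on per-participant model accuracies.
from scipy.stats import mannwhitneyu

acc_base = [0.78, 0.79, 0.80, 0.77, 0.79, 0.81,    # placeholder M_base accuracies
            0.78, 0.80, 0.79, 0.78, 0.80, 0.79]
acc_exp = [0.82, 0.84, 0.83, 0.82, 0.83, 0.85,     # placeholder M_exp accuracies
           0.82, 0.84, 0.83, 0.82, 0.84, 0.83]

# Test whether the baseline accuracies are stochastically smaller than M_exp's.
stat, p_value = mannwhitneyu(acc_base, acc_exp, alternative="less")
print(f"U = {stat}, p = {p_value:.2g}")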
From each of the 12 participants' results, the accuracy of our fine-tuned model using attention was significantly greater than the baseline line condition (U = 0, n_base = n_exp = 12, p < 0.00001). The same results apply to the IoU and attention reasonability proportion comparisons. Through the studies, we also identified disadvantages of our system that need to be improved (as shown in Table <ref>). Regarding the interpretation of the reasonability matrix produced by users' annotation and model prediction, the guidelines can be more formally provided to be acceptable in the ML community (P4, P5, P11). The styles of attention visualization (i.e., color-scale, gray-scale, and polygon mask) need improvement, especially since the orange polygon mask was not visually clear for P3 and P10. It can be solved by having color and opacity adjustment features. People also raise the potential inconsistency issue in attention adjustment, where users may have subjective options and criteria about where the “right” attention should be. needs to further provide more deterministic guidelines in attention adjustment for more complex task types, especially for tasks that require domain expertise (e.g., TB diagnosis in chest X-ray images <cit.>). With this uncertainty in attention adjustment, P7 and P10 suggested an instant performance comparison feature to reflect the model improvement on the fly as people annotate, which can be a future direction in active learning to have simultaneous updates while labeling in progress <cit.>. About the attention adjustment module, people suggested that the drawing feature should be optimized for drawing curves and near image borders, as it was not easy to do so (P1, P3, P6). P5 suggested existing smart drawing features (e.g., image matting tool in Photoshop <cit.>) to be added. P7 thinks that binary mask drawings might not be enough for the best attention guidance used in fine-tuning the model. A solution could be giving higher weights toward the centroid of the attention areas. Item.5(b)With the current data size and task setting in S2, the trade-off between manual workload and model improvement may not be as significant since the overall workload was not overwhelming and considered labor-efficient compared with existing assessment methods. Though evaluating attention maps could be a labor-intensive step, diagnosing and optimizing the model's vulnerability were effective and easy to use based on users' feedback. The annotation steps were incorporated with AI-supported automation (bulk annotation, object detection, object relevance filtering, adjustment recommendation, etc.) to reduce both users' cognitive and labor workloads while gaining better performance. However, as data size increases, this labor-performance trade-off becomes essential, and more specifically, scalability solutions should be explored to reduce human labor while maintaining good fine-tuning performance. We further discussed scalability considerations regarding the trade-off in the next section (6.3). § IMPLICATIONS FOR DESIGN BEYOND XAI Through S1 and S2, we learned several insights from our participants. While listening to their voice and questions, and observing the way they perceive after their usage, we learned that at the heart of people's pursuit of grounding their models into their practice, one of the core challenges they encounter seems to understand how they can harmonize between the way they see the CNN should suppose to work and the way CNNs actually work. 
When they identify such a gap through XAI-driven tools, the upcoming challenge seemed to be to know how to reconcile such a gap efficiently and effectively. We reflect on this aspect of beyond XAI—how to help a user to shift their learned insights to actionable plans—and list up possible research directions that the HCI and CSCW communities can consider in designing future XAI or steerable AI tools to help practitioners “in the trench”. §.§ Correlating Model Attention and Model Accuracy One of the overarching questions we wanted to understand was how the model attention seen as reasonable by the human mind could also result in accurate prediction. Perhaps that was the reason we decided to use the reasonability matrix. If reasonable attention and accurate prediction are aligned together, the reasonable accurate instances (i.e., accurate for the right reason) and unreasonable inaccurate instances (i.e., inaccurate for the wrong reason) should increase while the unreasonable accurate and reasonable inaccurate instances should decrease. The tendency we saw was positive. We observed the reasonable accurate instances increased while the unreasonable accurate instances decreased from most participants. At least from our setting, adding more human reasoning to the model's way of thinking has increased the model's gaze toward intrinsic objects, resulting in an accuracy increment. However, one segment that didn't change was the reasonable inaccurate group. We think understanding the reason when and why the model makes inaccurate predictions despite the reasonable gaze should be closely related to improving model performance. Regarding research in Fairness, Accountability, and Transparency (FaccT), a dominant view is that human input or intervention may be required to realize a model that retains FaccT with the cost of model accuracy drop. We hope to understand the effective way to correlate the right reason, and accurate prediction can motivate the development of a fair, robust, and accurate model <cit.>. In general, we believe it is important to understand how to align human reasoning and model accuracy. Shao et al. argue that humans “arguing” against DNNs when explanations are not reasonable can benefit the model <cit.>. A railroad cannot be a train <cit.>, a snowboard is not a man <cit.>, and a shopping cart should not be a woman <cit.>. Lastly, while human-guided ML has a potential and good cause <cit.>, finding a way to cut down the human-side labor is another important perspective from the two studies. §.§ Generalizability Consideration: Beyond Binary Classification We started to test the idea of direct steering of model attention through local explanation from the binary classification problem for reasons—simplicity of the problem and well-annotated datasets. After using , several participants shared their feedback and curiosity on how our pipeline can be applied in more advanced vision-based tasks. The design we provided in binary classification can be relatively simpler than the aforementioned cases. As the model's task gets more complex and diverse, new designs customized to the particular task type and application area should be required to understand the generalizability of our findings. Item.5(a)Methodologically, local explanation-based attention steering is not limited to binary classification tasks. 
The future design can be explored to enhance CNN models for handling different tasks, such as multi-class classification, object detection, and segmentation tasks, which could possibly be expanded from processing images to videos. The core user flow beneath in CNN steering is as follows: First, the user flow allows human users to define reasonable and unreasonable types of attention depending on task goals. Next, the user flow motivates reasonable attention types and penalizes unreasonable attention types in a fine-tuning process suggested in Explanation-guided Learning <cit.>. Finally, the designer can provide a dashboard that helps users to understand how their indicated directions were reflected in the model revision process. While the flow can be generally applicable, the way a designer facilitates a user's definition of reasonable and unreasonable attention type should be carefully implemented depending on the type of problem. For example, in a multi-class classification or object detection task for different animals, users can employ attention logic that penalizes background and motivates foreground objects to build a more reasonable and high-performing model. As mentioned in 5.1.1, local explanation methods can be applied to different layers of a CNN to produce different levels of granularity. If the task goal requires a coarse granularity detection of a bounding box, applying local explanation visualization at the last layer of CNN can be suitable. However, if it needs more fine-grained granularity of closed curve for semantic segmentations, producing local explanations on both the first convolutional layer for edge-level of detail and the last convolutional layer for object-level detail can be considered, providing more depths of local explanation for users to evaluate. Finally, we noted P7's suggestion about extending this flow to a more advanced video level of object classification, detection, and segmentation model steering. Due to the data volume, special design considerations need to be applied in such a task. However, upon the efficient design for indicating reasonable and unreasonable attention types, we believe that it is possible to apply the suggested flow to the problem space. §.§ Scalability Consideration: Hundreds vs. Millions Despite the promising performance of the model steering method, scalability remains an essential concern raised by several participants (P2, P3, P4, P8, P11), as many real-world image classification tasks involve millions of images. Human scalability has been a crucial issue in HCI, CSCW, and beyond—while Misc.the data size can easily go up to millions and trillions in training state-of-the-art models, human cognition remains flat <cit.>. Even if we can surface millions of images to users, it may not be possible for them to scan images serially and achieve sensemaking. Generally, to successfully devise a scalable design, we believe that the number of images users have to go over should still not exceed thousands, and the amount of time they may spend should not exceed one hour, as recent data annotation literature suggests <cit.>. Herbert Simon remarked that “wealth of information creates a poverty of attention” <cit.>. As the trade-off between human labor and performance gain in human-in-the-loop applications is illustrated in Fig. <ref>, when users spend more effort as data size increases, the model will gain better performance until the workload hits the bottleneck of feasible human labor. 
We aim to make the curve of labor-performance trade-off steeper (from “curve 1” to “curve 2” shown in Fig. <ref>) through scalability optimization to improve the impact of human workload on performance gain. By devising “scalable” human-in-the-loop approaches, model performance could be further improved with the feasible amount of available human labor. Item.5(b)While every human-in-the-loop approach can suffer the bottleneck of limited information, labor source, session time, etc., ultimate breakthroughs in human-in-the-loop and interactive ML designs could come from scalability strategies. We introduce how some of the design strategies can be adopted in the design space of Beyond XAI. First, one can consider sampling from the whole dataset. Modern computer vision models can yield keywords of objects and context in the scene. Using such additional information extracted from the vast dataset, it is possible to define major and minor clusters of images. The new design may help users proceed with a small portion of sampled images derived from such clusters to reason the whole dataset and typify reasonable and unreasonable attention types accordingly. Second, one can consider examining images based on the sequence built from Active Learning, Misc.a technique that chooses the fewest unlabeled data possible that could maximize the model accuracy gain <cit.>. Applying active learning techniques is common in data annotation research, which can help reduce the required size of images to reason. Third, devising further intelligent features that can automate the current workflow can facilitate the process as well. Some features that need manual investigation can be automated in future designs. Finally, if there is a strong rationale for investing more human resources, one can consider crowdsourcing. §.§ Data Iteration and Continual Lifelong Learning 's capability of figuring out the vulnerability through local explanation is closely related to the capability of fortifying the dataset by adding more examples that can remove the contextual bias. Such “data iteration” is not uncommon in practice. To improve the model, the most fundamental way is to improve data. For instance, Chameleon lets users compare data features, training/testing splits, and performance across data versions <cit.>. When combining the data iteration with model steering using local explanations, one could derive some interesting design ideas that can help ML engineers to better find, search, and add the dataset. While improving the model with new data can be straightforward, a few issues need to be considered when steering models through local explanations. First, it is necessary to understand what learning strategy can be more effective between the case where stacking every dataset in one place and retraining the model and the case of iteratively adding the new dataset and making the model “evolve”, In general, the first case can yield a high-performing model than the second case due to the chance of catastrophic forgetting, which is a problematic and almost inevitable drawback  <cit.>. In recent years, the concept of continual lifelong learning has emerged <cit.> and provided a breakthrough. Understanding which strategy can yield what strengths and weaknesses in the scenario of data iteration with local explanation reasoning would be necessary. 
§.§ Improving Fine-Tuning This work is the first study that observes how ML engineers experience techniques in the Explanation-guided Learning framework in fine-tuning their model and perceiving the difference. While we saw participants satisfied with the progress they made with the RES framework, we introduced a few directions on how the RES framework can be evolved to design an improved model steering environment in the future. One important direction is how to design a better quantitative measurement to assess the quality of the steered attention during the fine-tuning process. Simple distance-based metrics such as Mean Squared Error (MSE) or Intersection over Union (IoU) scores that are calculated purely based on the alignment of each feature can hardly comprehensively reflect the quality of the adjusted attention, as they completely ignore the correlations among visual features. One potential remedy to this issue is also to leverage fidelity-based metrics, which aim at evaluating how faithful the model's attention is with respect to the model's prediction. The assumption behind this is that the `right' attention should contain sufficient information for the model also to make the `right' prediction <cit.>; while on the other hand, removing the attention should also lead to significant negative impact for the model to make the correct prediction <cit.>. However, it is still not clear and challenging to propose a single metric that can together measure the faithfulness and the degree of alignment with the human annotation to make a more comprehensive assessment of the attention quality. Another possible topic is how to leverage multiple annotations from different users for a single sample <cit.>. As obtaining more than one annotation can be helpful to boost the reliability of the human boundary for attention adjustment, it poses challenges on how to align model attention with multiple ground truth boundaries. While a simple way out can be using the 50% consensus or majority vote over all the available annotations, useful information can be lost during the aggregation. Thus, new techniques are in demand to leverage each annotation effectively. § CONCLUSION In this work, we examined our inquiry of how we can design a direct feedback loop between a human and a CNN through local explanations. In particular, we designed and developed the first interactive system to help a user adjust the local explanation results regarding the gaze of CNNs. We applied our interactive design in the problem space of contextual bias for CNN engineers. With the S1, we learned ML engineers' practical challenges and desires, converting the insights to design considerations that could improve how we use local explanations in model diagnosis and steering. With , we conducted S2 and found how can provide a better workflow and experience to CNN engineers. At the same time, we also found limitations and future research directions. In particular, we boiled down and shared in Implications for Design beyond XAI within the categories of (1) correlating model attention and model accuracy, (2) generalizability consideration, (3) scalability consideration, (4) data iteration and lifelong learning, and (5) improving fine-tuning. We hope this work can benefit researchers and practitioners who seek to understand how to make XAI-driven insights actionable in steering AI. ACM-Reference-Format § STUDY 1 INTERVIEW QUESTIONS Item.3 §.§ About you * Can you explain your role in your company? 
§.§ Your models and development settings * Can you explain the purpose, input, and output of your models for which you used model saliency/attention? * Can you walk us through your process of building your model? E.g., how to collect the training set, how to train your model, how to improve your model performance, how to debug? §.§ Use of saliency maps * Can you explain the way you use saliency maps in understanding your model’s behavior? * Can you explain the way you use saliency maps in supervising/improving your model’s behavior? §.§ Working on fair/robust/accurate models * Can you explain your experience/effort towards building more fair DNN models? * Can you explain if attention/saliency was useful or not? §.§ Your tools, challenge, and wish list in the future * Can you explain the types of tools that you use for understanding/improving your DNN models? * Can you explain the challenges you experience while interacting with your DNN? * What new tools/features do you wish to have in the near future to make your life better? § STUDY 2 SYSTEM USABILITY SCALE (SUS) SURVEY <CIT.> Item.5 §.§ Indicate your degree of agreement for each of the 10 statements (on a Likert scale from 1-“strongly disagree” to 5-“strongly agree”) * I think that I would like to use this system frequently. * I found the system unnecessarily complex. * I thought the system was easy to use. * I think that I would need the support of a technical person to be able to use this system. * I found the various functions in this system were well integrated. * I thought there was too much inconsistency in this system. * I would imagine that most people would learn to use this system very quickly. * I found the system very cumbersome to use. * I felt very confident using the system. * I needed to learn a lot of things before I could get going with this system.
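For reference, a small sketch of the standard SUS scoring rule from Brooke's guide, applied to the 1-5 responses collected with the statements above; the example response vector is purely illustrative.

```python
def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score (Brooke's guide).

    Odd-numbered items (positive statements) contribute (response - 1);
    even-numbered items (negative statements) contribute (5 - response);
    the summed contributions are scaled by 2.5.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# illustrative response vector from one hypothetical participant
print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # 85.0
```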
http://arxiv.org/abs/2307.04571v1
20230710140334
Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation
[ "Chongming Gao", "Kexin Huang", "Jiawei Chen", "Yuan Zhang", "Biao Li", "Peng Jiang", "Shiqi Wang", "Zhong Zhang", "Xiangnan He" ]
cs.IR
[ "cs.IR" ]
Chongming Gao ([email protected]) and Kexin Huang ([email protected]), University of Science and Technology of China; Jiawei Chen ([email protected]), Zhejiang University, Hangzhou, China; Yuan Zhang ([email protected]), Biao Li ([email protected]), and Peng Jiang ([email protected]), Kuaishou Technology Co., Ltd.; Shiqi Wang ([email protected]), Chongqing University, Chongqing, China; Zhong Zhang ([email protected]), University of Electronic Science and Technology of China; Xiangnan He ([email protected], corresponding author), University of Science and Technology of China.

Offline reinforcement learning (RL), a technique that learns a policy offline from logged data without the need to interact with online environments, has become a favorable choice for decision-making processes such as interactive recommendation. Offline RL faces the value overestimation problem. To address it, existing methods employ conservatism, e.g., by constraining the learned policy to be close to behavior policies or by punishing rarely visited state-action pairs. However, when such offline RL is applied to recommendation, it causes a severe Matthew effect, i.e., the rich get richer and the poor get poorer, by promoting popular items or categories while suppressing the less popular ones. It is a notorious issue that needs to be addressed in practical recommender systems. In this paper, we aim to alleviate the Matthew effect in offline RL-based recommendation. Through theoretical analyses, we find that the conservatism of existing methods fails in pursuing users' long-term satisfaction. This inspires us to add a penalty term that relaxes the pessimism on states with high entropy of the logging policy and indirectly penalizes actions leading to less diverse states. This leads to the main technical contribution of the work: the Debiased model-based Offline RL (DORL) method. Experiments show that DORL not only captures user interests well but also alleviates the Matthew effect. The implementation is available via <https://github.com/chongminggao/DORL-codes>.

Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation. SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, July 23-27, 2023, Taipei, Taiwan. DOI: 10.1145/3539618.3591636. CCS Concepts: Information systems, Recommender systems.

§ INTRODUCTION Recommender systems, a powerful tool for helping users select preferred items from massive catalogs, are continuously investigated by e-commerce companies. Previously, researchers tried to capture static user interests from historical data by developing supervised learning-based recommender models. With the recent development of deep learning and the rapid growth of available data, fitting user interests is no longer the bottleneck.
A desired recommendation policy should be able to satisfy users for a long time <cit.>. Therefore, it is natural to involve Reinforcement Learning (RL) which is a type of Machine Learning concerned with how an intelligent agent can take actions to pursue a long-term goal <cit.>. In this setting, the recommendation process is formulated as a sequential decision process where the recommender interacts with users and receives users' online feedback (i.e., rewards) to optimize users' long-term engagement, rather than fitting a model on a set of samples based on supervised learning <cit.>. However, it is expensive and impractical to learn a policy from scratch with real users, which becomes the main obstacle that impedes the deployment of RL to recommender systems. One remedy is to leverage historical interaction sequences, i.e., recommendation logs, to conduct offline RL (also called batch RL) <cit.>. The objective is to learn an online policy that makes counterfactual decisions to perform better than the behavior policies induced by the offline data. However, without real-time feedback, directly employing conventional online RL algorithms in offline scenarios will result in poor performance due to the value overestimation problem in offline RL. The problem is induced when the function approximator of the agent tries to extrapolate values (e.g., Q-values in Q-learning <cit.>) for the state-action pairs that are not well-covered by logged data. More specifically, since the RL model usually maximizes the expected value or trajectory reward, it will intrinsically prefer overestimated values induced by the extrapolation error, and the error will be compounded in the bootstrapping process when estimating Q-values, which results in unstable learning and divergence <cit.>. In recommendation, this may lead to an overestimation of user preferences for items that infrequently appear in the offline logs. This is the core challenge for offline RL algorithms because of the inevitable mismatch between the offline dataset and the learned policy <cit.>. To solve this problem, offline RL algorithms incorporate conservatism into the policy design. Model-free offline RL algorithms directly incorporate conservatism by constraining the learned policy to be close to the behavior policy <cit.>, or by penalizing the learned value functions from being over-optimistic upon out-of-distribution (OOD) decisions <cit.>. Model-based offline RL algorithms learn a pessimistic model as a proxy of the environment, which results in a conservative policy <cit.>. This philosophy guarantees that offline RL models can stick to offline data without making OOD actions, which has been proven to be effective in lots of domains, such as robotic control <cit.> or games <cit.>. However, applying conservatism to recommender systems gives rise to a severe Matthew effect <cit.>, which can be summarized as “the rich get richer and the poor get poorer”. In recommendation, it means that the popular items or categories in previous data will get larger opportunities to be recommended later, whereas the unpopular ones get neglected. This is catastrophic since users desire diverse recommendations and the repetition of certain contents will incur the filter bubble issue, which in turn hurts users' satisfaction even though users favored them before <cit.>. We will show the Matthew effect in the existing offline RL-based recommender (conservative), and analyze how users' satisfaction will be hurt (effect). 
In this paper, we embrace the model-based RL paradigm. The basic idea is to learn a user model (i.e., world model) that captures users' preferences, then use it as a pseudo-environment (i.e., simulated users) to produce rewards to train a recommendation policy. Compared to model-free RL, model-based RL has several advantages in recommendation. First and foremost, model-based RL is much more sample efficient <cit.>. That it needs significantly fewer samples makes it more suitable for the highly sparse recommendation data. Second, explicitly learning the user model simplifies the problem and makes it easier to incorporate expert knowledge. For example, the user model can be implemented as any state-of-the-art recommendation model (e.g., DeepFM <cit.> in this work) or sophisticated generative adversarial frameworks <cit.>. Although some works have adopted this paradigm in their recommender systems <cit.>, they did not explicitly consider the value overestimation problem in offline RL, not to mention the Matthew effect in the solutions. To address the value overestimation problem while reducing the Matthew effect, we propose a Debiased model-based Offline RL (DORL) method for recommendation. By theoretically analyzing the mismatch between real users' long-term satisfaction and the preferences estimated from the offline data, DORL adds a penalty term that relaxes the pessimism on states with high entropy of the logging policy and indirectly penalizes actions leading to less diverse states. By introducing such a counterfactual exploration mechanism, DORL can alleviate the Matthew effect in final recommendations. Our contributions are summarized as: * We point out that conservatism in offline RL can incur the Matthew effect in recommendation. We show this phenomenon in existing methods and how it hurts user satisfaction. * After theoretically analyzing how existing methods fail in recommendation, we propose the DORL model that introduces a counterfactual exploration in offline data. * We demonstrate the effectiveness of DORL in an interactive recommendation setting, where alleviating the Matthew effect increases users' long-term experience. § RELATED WORK Here, we briefly review the Matthew effect in recommendation. We introduce the interactive recommendation and offline RL. §.§ Matthew Effect in Recommendation <cit.> confirmed the existence of the Matthew effect in YouTube's recommendation system, and <cit.> gave a quantitative analysis of the Matthew effect in collaborative filtering-based recommenders. A common way of mitigating the Matthew effect in recommendation is to take into account diversity <cit.>. Another perspective on this problem is to remove popularity bias <cit.>. We consider the Matthew effect in offline RL-based recommendation systems. we will analyze why this problem occurs and provide a novel way to address it. §.§ Interactive Recommendation The interactive recommendation is a setting where a model interacts with a user online <cit.>. The model recommends items to the user and receives the user's real-time feedback. This process is repeated until the user quits. The model will update its policy with the goal to maximize the cumulative satisfaction over the whole interaction process (instead of learning on I.I.D. samples). This setting well reflects the real-world recommendation scenarios, for example, a user will continuously watch short videos and leave feedback (e.g. click, add to favorite) until he chooses to quit. 
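To make the interactive setting concrete, a toy sketch of the interaction loop assumed in this setting follows; all class and function names are illustrative stand-ins (a toy simulated user and a random policy), not the released implementation. The quantity of interest is the cumulative reward collected before the user quits, with the maximum number of rounds capped (30 in the main experiments reported later).

```python
import random

class ToyUser:
    """A toy simulated user: random interest per item, quits with small probability."""
    def __init__(self, n_items=100, quit_prob=0.1, seed=0):
        self.rng = random.Random(seed)
        self.interest = [self.rng.random() for _ in range(n_items)]
        self.quit_prob = quit_prob
    def reset(self):
        self.history = []
        return tuple(self.history)
    def step(self, item):
        reward = self.interest[item]            # real-time feedback for this item
        self.history.append((item, reward))
        done = self.rng.random() < self.quit_prob   # the user may quit the session
        return tuple(self.history), reward, done

class RandomPolicy:
    """Recommends a random item that has not been shown in this trajectory."""
    def __init__(self, n_items=100, seed=1):
        self.n_items, self.rng = n_items, random.Random(seed)
    def recommend(self, state):
        shown = {i for i, _ in state}
        candidates = [i for i in range(self.n_items) if i not in shown]
        return self.rng.choice(candidates)

def run_episode(env, policy, max_rounds=30):
    """Roll out one trajectory; return (cumulative reward, interaction length)."""
    state = env.reset()
    cum_reward = 0.0
    for t in range(max_rounds):
        item = policy.recommend(state)
        state, reward, done = env.step(item)
        cum_reward += reward
        if done:
            break
    return cum_reward, t + 1

env, policy = ToyUser(), RandomPolicy()
print(run_episode(env, policy))
```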
Here, we emphasize the most notable difference between the interactive recommendation setting and traditional sequential recommendation settings <cit.>. settings illustrates the learning and evaluation processes in sequential and interactive recommendation settings. Sequential recommendation uses the philosophy of supervised learning, i.e., evaluating the top-k results by comparing them with a set of “correct” answers in the test set and computing metrics such as Precision, Recall, NDCG, and Hit Rate. By contrast, interactive recommendation evaluates the results by accumulating the rewards along the interaction trajectories. There is no standard answer in interactive recommendation, which is challenging <cit.>. Interactive recommendation requires offline data of high quality, which hampers the development of this field for a long time. We overcome this problem by using the recently-proposed datasets that support interactive learning and off-policy evaluation <cit.>. §.§ Offline Reinforcement Learning Recently, many offline RL models have been proposed to overcome the value overestimation problem. For model-free methods, BCQ <cit.> uses a generative model to constrain probabilities of state-action pairs the policy utilizes, thus avoiding using rarely visited data to update the value network; CQL <cit.> contains a conservative strategy to penalize the overestimated Q-values for the state-action pairs that have not appeared in the offline data; GAIL <cit.> utilizes a discriminator network to distinguish between expert policies with others for imitation learning. IQL <cit.> enables the learned policy to improve substantially over the best behavior in the data through generalization, without ever directly querying a Q-function with unseen actions. For model-based methods, MOPO <cit.> learns a pessimistic dynamics model and use it to learn a conservative estimate of the value function; COMBO <cit.> learns the value function based on both the offline dataset and data generated via model rollouts, and it suppresses the value function on OOD data generated by the model. Almost all offline RL methods have a similar philosophy: to introduce conservatism or pessimism in the learned policy <cit.>. There are efforts to conduct offline RL in recommendation <cit.>. However, few works explicitly discuss the Matthew effect in recommendation. <cit.> mentioned this effect in their experiment section, but their method is not tailored to overcome this issue. § EMPIRICAL STUDY ON MATTHEW EFFECT We conduct empirical studies in recommendation to show how the Matthew effect affects user satisfaction. When the Matthew effect is amplified, the recommender will repeatedly recommend the items with dominant categories. To illustrate the long-term effect on user experience, we explore the logs of the KuaiRand-27K video dataset[<https://kuairand.com/>] <cit.> and the LFM-1b music dataset[<http://www.cp.jku.at/datasets/LFM-1b/>] <cit.>. KuaiRand-27K contains a 23 GB log recording 27,285 users’ 322,278,385 interactions on 32,038,725 videos with 62 categories, which are collected from April 8th, 2022 to May 8th, 2022. LFM-1b contains a 40GB log recording 120,322 users’ 1,088,161,692 listening events on 32,291,134 tracks with 3,190,371 artists, which are fetched from Last.FM in the range from January 2013 to August 2014. Both the two datasets provide the timestamp of each event, hence we can assess the long-term effect of overexposure by investigating the change of Day-1 Retention. 
Day-1 Retention is defined as the probability of a user who returns to the app tomorrow after finishing today's viewing/listening. This metric is more convincing than real-time signals (e.g., click, adding to favorite) in regard to reflecting the long-term effect on user satisfaction. We consider the item-level and category/artist-level repeat rates as the metrics to measure the Matthew effect. The item-level (or category-level) repeat rate of a user viewing videos on a certain day is defined as: the number of viewing events/the number of unique videos (or unique categories). For example, if a user views 5 unique videos (which belong to 3 unique categories) 20 times in a day, then the item-level repeat rate is 20/5=4.0 and the category-level repeat rate is 20/3=6.67. The item-level and artist-level repeat rates for music listening are defined in a similar way. Note that in KuaiRand, video-level overexposure rarely appears because of the rule of video recommendation, i.e., the same video will not be recommended twice. In this definition, user activity can become a confounder. For instance, a user who views 100 videos a day can be more active than a user who views only 10 videos a day, and thus is more likely to revisit the App the next day. Therefore, we control for this confounder by splitting the users w.r.t. the number of their daily viewing events. Groups with different user activity levels are marked by different colors and marker types. The results are shown in effect. In short, Day-1 Retention reduces when the repeat rate increases in each group with a user activity level. This phenomenon can be observed for both the item-level and category/artist-level repeat rates within the video dataset and music dataset. The results show that users' satisfaction will be hurt when the Matthew effect becomes severer. § PRELIMINARY ON MODEL-BASED RL We introduce the basics of RL and model-based offline RL. §.§ Basics of Reinforcement Learning Reinforcement learning (RL) is the science of decision making. We usually formulate the problem as a Markov decision process (MDP): M = (𝒮,𝒜,T,r,γ), where 𝒮 and 𝒜 represent the state space and action space, T(s,a,s')= P(s_t+1=s'|s_t=s,a_t=a) is the transition probability from (s,a) to s', r(s,a) is the reward of taking action a at state s, and γ is the discount factor. Accordingly, the offline MDP can be denoted as M=(𝒮,𝒜,T,r̂, γ), where T and r̂ are the transition probability and reward function predicted by an offline model. In offline RL, the policy is trained on an offline dataset 𝒟 which was collected by a behavior policy π_β running in online environment M. By modifying the offline MDP M to be conservative for overcoming the value overestimation issue, we will derive a modified MDP M=(𝒮,𝒜,T,r, γ), where the modified reward r is modified from the predicted reward r̂. Since RL considers long-term utility, we can define the value function as V_M^π(s) = 𝔼_π,T[∑_t=0^∞γ^t r(s_t,a_t)|s_0=s], denoting the cumulative reward gain by policy π after state s in MDP M. Let P_T,t^π be the probability of the agent's being in state s at time t, if the agent uses policy π and transits with T. Defining ρ_ T^π(s,a) = (1-γ)π(a|s)∑_t=0^∞γ^t P_ T,t^π(s) as the discounted distribution of state-action pair (s,a) for policy π over T, we can derive another form of the policy's accumulated reward as η_ M(π) = 𝔼_(s,a)∼ρ_ T^π[r(s,a)]. 
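As a quick numerical check of the two equivalent forms above, the following sketch estimates η_M(π) by Monte Carlo rollouts: averaging discounted returns approximates 𝔼[V_M^π(s_0)], and multiplying by (1-γ) recovers the expectation of r under ρ_T^π. The toy reward process is illustrative only.

```python
import random

def rollout_return(step_fn, horizon=200, gamma=0.9):
    """Discounted return of one trajectory; step_fn() yields the next reward."""
    return sum((gamma ** t) * step_fn() for t in range(horizon))

def estimate_eta(step_fn, gamma=0.9, n_episodes=2000, horizon=200):
    """eta_M(pi) = (1 - gamma) * E[ sum_t gamma^t r_t ] = E_{(s,a)~rho}[ r(s,a) ]."""
    v0 = sum(rollout_return(step_fn, horizon, gamma)
             for _ in range(n_episodes)) / n_episodes
    return (1 - gamma) * v0

# Toy process: every step yields a reward drawn uniformly from [0, 1],
# so the estimate should be close to the mean reward 0.5.
rng = random.Random(0)
print(estimate_eta(lambda: rng.random()))
```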
§.§ Model-Based Offline RL Framework In this paper, we follow a state-of-the-art general Model-based Offline Policy Optimization framework, MOPO[We consider an RL framework to be general if it doesn't require domain-related prior knowledge or any specific algorithms. Satisfying our demands, MOPO is shown to be one of the best-performing model-based offline RL frameworks.] <cit.>. The basic idea is to learn a dynamics model T which captures the state transition (s,a) → s' of the environment and estimates reward r̂(s,a) given state s and action a. For addressing the distributional shift problem where values V_M^π(s) are usually over-optimistically estimated, MOPO introduces a penalty function p(s,a) on the estimated reward r̂(s,a) as: r̃(s,a) = r̂(s,a)-λ p(s,a). On the modified reward r̃(s,a), the offline MDP M will be modified to be a conservative MDP: M = (𝒮,𝒜,T,r̃,γ). MOPO learns its policy in this MDP: M. By defining ϵ_p(π) = 𝔼_(s,a)∼ρ_T^π[p(s,a)], MOPO has the following theoretical guarantee: If the penalizer p(s,a) meets: λ𝔼_(s,a)∼ρ_T^π[p(s,a)] ≥ |η_M(π) - η_M(π)|, then the best offline policy π̂ trained in M satisfies: η_M(π̂)≥sup_π{η_M(π) - 2λϵ_p(π)}. The proof can be found in <cit.>. penalty_reg requires the penalty to be a measurement of offline and online mismatch, thus ϵ_p(π) can be interpreted as how much policy π will be affected by the offline extrapolation error. lower_bound is considered to be a theoretical guarantee for reward penalty in model-based offline RL. For example, with π^* denoting optimal policy in online MDP M, we have η_M(π̂)≥η_M(π^*) - 2λϵ_p(π^*). Remark: Through learning π̂ offline in the conservative MDP M with mopo_penalty, we can obtain the result that will not deviate too much from the result of learning an optimal policy π^* online in the ground-truth MDP M. The deviation will not exceed 2λϵ_p(π^*). However, there was no sufficient analysis on how to properly choose the penalty term p(s,a). Next, We introduce how to adapt this framework to recommendation and reformulate the p(s,a) according to the characteristic of the recommendation scenario. § METHOD We implement the model-based offline RL framework in recommendation. we redesign the penalty to alleviate the accompanied Matthew effect. Then, we introduce the proposed DORL model. §.§ Model-based RL in Recommendation In recommendation, we cannot directly obtain a state from the environment, we have to model the state by capturing the interaction context and the user's mood. Usually, a state s∈𝒮 is defined as the vector extracted from the user's previously interacted items and corresponding feedback. After the system recommends an item as action a∈𝒜, the user will give feedback as a scale reward signal r R(s,a). For instance, r∈{0,1} indicates whether the user clicks the item, or r∈ℝ^+ reflects a user's viewing time for a video. The state transition function (i.e., state encoder) T can be written as s' f_ω(s,a,r), where f_ω(s,a,r) autoregressively outputs the next state s' and can be implemented as any sequential models. When learning offline, we cannot obtain users' reactions to the items that are not covered by the offline dataset. we address this problem by using a user model (or reward model) R(s,a) to learn users' static interests. This model can be implemented as any state-of-the-art recommender such as DeepFM <cit.>. The user model will generate an estimated reward r̂=R(s,a) representing a user's intrinsic interest in an item. The transition function T will be written as s' f_ω(s,a,r̂). 
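Schematically, one step of the resulting learned environment can be sketched as below; the names are placeholders rather than the released code, and the penalty hook anticipates the reward modification introduced next.

```python
class SimulatedEnvStep:
    """One step of the learned environment used for offline policy training.

    user_model(state, action)       -> estimated reward r_hat (e.g., a DeepFM-style model)
    state_encoder(state, action, r) -> next state             (f_omega in the text)
    penalty(state, action)          -> p(s, a); used as r_tilde = r_hat - lam * p
    """
    def __init__(self, user_model, state_encoder, penalty, lam=0.0):
        self.user_model = user_model
        self.state_encoder = state_encoder
        self.penalty = penalty
        self.lam = lam

    def step(self, state, action):
        r_hat = self.user_model(state, action)
        r_tilde = r_hat - self.lam * self.penalty(state, action)
        next_state = self.state_encoder(state, action, r_hat)
        return next_state, r_tilde

# toy usage with stand-in components
env_step = SimulatedEnvStep(
    user_model=lambda s, a: 0.8,                 # pretend the user likes action a
    state_encoder=lambda s, a, r: s + [(a, r)],  # append the new interaction
    penalty=lambda s, a: 0.2,                    # placeholder uncertainty penalty
    lam=1.0,
)
print(env_step.step([], action=42))  # next state [(42, 0.8)], penalized reward ~0.6
```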
The offline MDP is defined as M = (𝒮,𝒜,T,r̂,γ). Since the estimated reward r̂ can deviate from the ground-truth value r, we follow MOPO to use mopo_penalty to get the modified reward r̃(s,a)= r̂(s,a)-λ p(s,a). Afterward, we can train the recommendation policy on the modified reward r̃ by treating the user model as simulated users. Now, the problem turns into designing the penalty term p(s,a). To begin with, we extend the mismatch function in <cit.>. We use R and R as the shorthand for R(s,a) and R(s,a), respectively. Define the mismatch function G_M^π(s,a) of a policy π on the ground truth MDP M and the estimated MDP M as: 3ex G_M^π(s,a) 𝔼_ŝ'∼T,r̂∼R[γ V_M^π(ŝ') + r̂] - 𝔼_s'∼ T,r∼ R[γ V_M^π(s')+r] = 𝔼_r̂∼R[γ V_M^π(f_ω(s,a,r̂)) + r̂] - 𝔼_r∼ R[γ V_M^π(f_ω(s,a,r))+r]. It satisfies: 𝔼_(s,a)∼ρ_T^π[G_M^π(s,a)] = η_M(π) - η_M(π). The mismatch function G_M^π(s,a) extends the definition presented in <cit.>, with the key distinction being the separation of state transition function into state s and reward r. This is due to the fact that, in the context of recommendation systems, the stochastic nature of state probabilities arises solely from the randomness associated with their reward signals r. Consequently, when integrating along the state transition, it is essential to explicitly express the impact of reward r. The proof of G_p can be adapted from the proof procedure of the telescoping lemma in <cit.>. Following the philosophy of conservatism, we add a penalty term p(s,a) according to the mismatch function G_M^π(s,a) by assuming: λ p(s,a) ≥ |G_M^π(s,a)|. By combining G_satisfy and G_penalty, the condition in penalty_reg is met, which provides the theoretical guarantee for the recommendation policy π learned in the conservative MDP: M. Remark: G_p provides a perspective for designing the penalty term p(s,a) that satisfies the theoretical guarantee in guarantee. According to G_penalty, the problem of defining p(s,a) turns into analyzing G_M^π(s,a), which will be described in remedy. The original MOPO model uses the uncertainty of the dynamics model P_U as the penalty, i.e., p(s,a)=P_U. However, penalizing uncertainty will encourage the model to pay more attention to items that are frequently recommended while neglecting the rarely recommended ones. This will accelerate the Matthew effect. §.§ Matthew Effect To quantify the Matthew effect in the results of recommendation, we use a metric: majority category domination (MCD), which is defined as the percentage of the recommended items that are labeled as the dominated categories in training data[The dominated categories are the most popular categories that cover 80% items in the training set. There are 13 (out of 46) dominated categories in KuaiRand, and 12 (out of 31) dominated categories in KuaiRec.]. We show the effect of conservatism of MOPO by varying the coefficient λ of mopo_penalty. The results on the KuaiRec dataset are shown in conservative. With increasing λ, the model receives a higher single-round reward (the blue line), which means the policy captures users' interest more accurately. On the other hand, MCD also increases (the red bars), which means the recommended items tend to be the most popular categories (those cover 80% items) in training data. I.e., the more conservative the policy is, the stronger the Matthew effect becomes. When the results narrow down to these categories, users' satisfaction will be hurt and the interaction process terminates early, which results in low cumulative rewards over the interaction sequence. 
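A small sketch of how MCD as defined above could be computed, assuming each item carries a single category label (items in the actual datasets may carry several tags).

```python
from collections import Counter

def dominated_categories(train_item_categories, coverage=0.8):
    """Most popular training categories that jointly cover `coverage` of the
    training items (the 'dominated categories' in the text)."""
    counts = Counter(train_item_categories)
    total, covered, dominated = sum(counts.values()), 0, set()
    for cat, cnt in counts.most_common():
        dominated.add(cat)
        covered += cnt
        if covered >= coverage * total:
            break
    return dominated

def mcd(recommended_categories, dominated):
    """Fraction of recommended items whose category is a dominated one."""
    hits = sum(c in dominated for c in recommended_categories)
    return hits / len(recommended_categories)

# toy usage
train = ["funny"] * 50 + ["music"] * 30 + ["sports"] * 15 + ["news"] * 5
dom = dominated_categories(train)                     # contains 'funny' and 'music'
print(mcd(["funny", "music", "news", "funny"], dom))  # 0.75
```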
More details will be described in exp. §.§ Solution: Re-design the Penalty To address this issue, we consider a more sophisticated manner to design the penalty term p(s,a) in mopo_penalty. We dissect the mismatch function in G_define as: 3ex |G_M^π(s,a)| ≤ γ|𝔼_r̂∼R[V_M^π(f_ω(s,a,r̂))] - 𝔼_r∼ R[ V_M^π(f_ω(s,a,r))]| + |𝔼_r̂∼Rr - 𝔼_r∼ Rr| γ d_V(R,R) + d_1(R,R), where d_1(R,R) represents the deviation of estimated reward R from true reward R, d_V(R,R) measures the difference between the value functions V_M^π of next state calculated offline (via R) and online (via R). Both of them can be seen as specific metrics measuring the distance between R and R. While d_1(R,R) is straightforward, d_V(R,R) considers the long-term effect on offline learning and is hard to estimate. Based on the aforementioned analysis, a pessimistic reward model R in MOPO will amplify the Matthew effect that reduces long-term satisfaction, thus resulting in a large d_V(R,R). An intuitive way to solve this dilemma is to introduce exploration on states with high entropy of the logging policy. Without access to online user feedback, we can only conduct the counterfactual exploration in the offline data. ∙ An illustrative example. To illustrate the idea, we give an example in distribution. The goal is to estimate a user's preferences given the logged data induced by a behavior policy. In reality, the distribution of the logged data is dependent on the policies of previous recommenders. For convenience, we use a Gaussian distribution as the behavior policy in distribution(a). Since previous recommenders cannot precisely reflect users' ground-truth preferences, there is always a deviation between the behavior policy (the red line) and users' ground-truth preferences (the blue line). Besides, as the items are not equally exposed in the behavior policy, there will be high uncertainty in estimating the rarely appeared items (as shown in the filled area). Offline RL methods emphasize conservatism in estimation and penalize the uncertain samples, which results in the distribution that narrows down the preferences to these dominated items (the green line). This is how the Matthew effect appears. By contrast, using a uniform distribution to collect data can prevent biases and reduce uncertainty (distribution(b)). Ideally, a policy learned on sufficient data collected uniformly can capture unbiased user preferences and produce recommendations without the Matthew effect, i.e., γ d_V(R,R) + d_1(R,R) can reduce to 0. Therefore, an intuitive way to design penalty term p(s,a) is to add a term: the discrepancy of behavior between the uniform distribution π_u(·|s) and the behavior policy π_β(·|s) given state s. We use the Kullback–Leibler divergence D_KL(π_β(·|s)||π_u(·|s)) to measure the distance, which can be written as: P_E := -D_KL(π_β(·|s)||π_u(·|s)) = -𝔼_a∼π_β(·|s)[log(π_β(a|s))-log(π_u(a|s))] =  ℋ(π_β(·|s)) - log(|𝒜|), where |𝒜| is a constant representing the number of items. Hence, the term P_E depends on the entropy of the behavior policy π_β(·|s) given state s. The modified penalty term can be written as p(s,a) = P_U + P_E, and the modified reward model will be formulated as: r̃(s,a) = r̂(s,a) - λ_1 P_U + λ_2 P_E. Except for penalizing high uncertainty areas, the new model also penalizes policies with a low entropy at state s. Intuitively, if the behavior policy π_β(·|s) recommended only a few items at state s, then the true user preferences at state s may be unrevealed. 
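The penalty just derived can be sketched as follows, assuming the behavior policy at a state is estimated from empirical action counts (how such counts are collected is detailed in the next subsection); note that P_E is non-positive and vanishes only when the logging policy is uniform. The λ values below mirror the KuaiRand setting reported in the experiments.

```python
import math
from collections import Counter

def entropy_penalty(action_counts, n_items):
    """P_E = H(pi_beta(.|s)) - log|A|, i.e., minus the KL divergence to uniform."""
    total = sum(action_counts.values())
    probs = [c / total for c in action_counts.values()]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy - math.log(n_items)   # always <= 0

def modified_reward(r_hat, p_u, p_e, lam1, lam2):
    """r_tilde = r_hat - lambda1 * P_U + lambda2 * P_E."""
    return r_hat - lam1 * p_u + lam2 * p_e

# toy usage: at this state the logging policy recommended only 2 of 100 items
counts = Counter({"item_3": 8, "item_7": 2})
p_e = entropy_penalty(counts, n_items=100)
print(round(p_e, 3))                                                   # about -4.105
print(modified_reward(r_hat=0.9, p_u=0.1, p_e=p_e, lam1=0.01, lam2=0.05))  # about 0.694
```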
Under such circumstances, the entropy term ℋ(π_β(·|s)) is low, hence we penalize the estimated reward r̂(s,a) by a large P_E. The entropy penalizer does not depend on the chosen action but only on the state in which the agent is. Which means the effect of this penalty will be indirect and penalize actions that lead to less diverse states, because of the long-term optimization. Hence, The learned policy achieves the counterfactual exploration in the offline data, which in turn counteracts the Matthew effect in offline RL. §.§ The DORL Method Now, we provide a practical implementation motivated by the analysis above. The proposed model is named Debiased model-based Offline RL (DORL) model, whose framework is illustrated in DORL. ∙ Penalty on entropy. We introduce how to compute P_E in the entropy penalizer module. entropy shows a trajectory of the interaction process, where the current action at time t is to recommend item 8. We define P_E in final_r to be the summation of k-order entropy (k=1,2,⋯). For example, when k=3, we search all users' recommendation logs to collect all continuous sub-sequences with pattern [{3,7,8},?], where “?” can match any item, and {3,7,8} is a sorted set that can cover all of its enumeration, e.g., [8,3,7] or [7,3,8]. On these sub-sequences, we can count the frequencies of action “?” to estimate the entropy of behavior policy π_β given the previous three recommended items. Without losing generality, we normalize the entropy to range (0,1]. ∙ Penalty on uncertainty. We penalize both the epistemic uncertainty of the reward model and the aleatoric uncertainty of offline data. We use the variance of K ensemble reward models {R_θ_k, k =1,2,⋯,K} to capture the epistemic uncertainty, which is commonly used to capture the uncertainty of the model in offline RL <cit.>. Aleatoric uncertainty is data-dependent <cit.>. By formulating the user model as a Gaussian probabilistic model (GPM), we can directly predict the variance of the reward and take this predicted variance as aleatoric uncertainty. For the k-th model R_θ_k, the loss function is: ℒ(θ_k) = 1/N∑_i=1^N1/2σ^2_θ_k(x_i)y_i-f_θ_k(x_i)^2 + 1/2logσ^2_θ_k(x_i), where N is the number of samples, f_θ_k(x_i) and σ^2_θ_k(x_i) are the predicted mean and variance of sample x_i, respectively. By combining epistemic uncertainty and aleatoric uncertainty, we formulate the uncertainty pernalizer P_U in final_r as: P_U := max_k∈{1,2,⋯,K}σ^2_θ_k. We define the fitted reward r̂ as the mean of K ensemble models: r̂(s,a)=1/K∑_kf_θ_k(s,a). The final modified reward r̃(s,a) will be computed by final_r. The framework of the proposed DORL model is illustrated in DORL. Without losing generality, we use DeepFM <cit.> as the backbone for the user model and implement the actor-critic method <cit.> as the RL policy. The state tracker f_ω(s,a,r) is a network modeling the transition function T(s,a,s')= P(s_t+1=s'|s_t=s,a_t=a). It can be implemented as any sequential model such as recurrent neural network (RNN)-based models <cit.>, Convolutional models <cit.>, Transformer-based methods <cit.>. <cit.> investigated the performances of different state encoders in RL-based recommenders. We use a naive average layer as the state tracker since it requires the least training time but nonetheless outperforms many complex encoders <cit.>. It can be written as: s⃗_t+11/N∑^t_n=t-N+1[e⃗_a_n⊕r̃_n], where ⊕ is the concatenation symbol, s⃗_t+1 is the vector representing the state at time t+1, e⃗_a_n is the embedding vector of action a_n. 
r̃_n is the reward value calculated by final_r and we normalize it to range (0,1] here. N is the window size reflecting how many previous item-reward pairs are calculated. § EXPERIMENTS We introduce how we evaluate the proposed DORL model in the interactive recommendation setting. We want to investigate the following questions: * (RQ1) How does DORL perform compared to state-of-the-art offline RL methods in the interactive recommendation setting? * (RQ2) To what extent can DORL alleviate the Matthew effect and pursue long-term user experience? * (RQ3) How does DORL perform in different environments with different user tolerance to repeated content? §.§ Experimental Setup We introduce the experimental settings with regard to environments and state-of-the-art offline RL methods. §.§.§ Recommendation Environments As mentioned in IRS, in the interactive recommendation setting, we are interested in users' long-term satisfaction rather than users' fitting capabilities <cit.>. Traditional recommendation datasets are too sparse or lack necessary information (e.g., timestamps, explicit feedback, item categories) to evaluate the interactive recommender systems. We create two recommendation environments on two recently-proposed datasets, KuaiRec and KuaiRand-Pure, which contain high-quality logs. KuaiRec <cit.> is a video dataset that contains a fully-observed user-item interaction matrix where 1,411 users have viewed all 3,327 videos and left feedback. By taking the fully-observed matrix as users' true interest, we can give a reward for the model's every recommendation (without missing entries like other datasets). We use the normalized viewing time (i.e., the ratio of viewing time to the video length) as the online reward. KuaiRand-Pure <cit.> is a video dataset that inserted 1,186,059 random recommendations involving 7,583 items into 27,285 users' standard recommendation streams. These randomly exposed data can reflect users' unbiased preferences, from which we can complete the matrix to emulate the fully-observed matrix in KuaiRec. This is an effective way to evaluate RL-based recommendation <cit.>. We use the “is_click” signal to indicate users' ground-truth interest, i.e., as the online reward. In mattew_hurt, we have shown that users' experience can be hurt by the Matthew effect. To let the environments reflect this phenomenon, we follow <cit.> to introduce a quit mechanism: when the model recommends more than M items with the same category in previous N rounds, the interaction terminates. Note that the same item will not be recommended twice in an interaction sequence. Since we evaluate the model via the cumulative rewards ∑_tr_t over the interaction trajectory, quitting early (due to the Matthew effect) will lead to inferior performances. For now, the two environments can play the same role as the online users. Therefore, we can evaluate the model as the process shown in settings (b). The evaluation environments are used for assessing models and they are not available in the training stage. For the training purpose, both KuaiRec and KuaiRand provide additional recommendation logs. The statistics of the training data are illustrated in data. §.§.§ Baselines We select two naive bandit-based algorithms, four model-free offline RL methods, and four model-based offline RL methods (including ours) in evaluation. We use the DeepFM model <cit.> as the backbone in the two bandit methods and four model-based methods. 
These baselines are: * ϵ-greedy, a naive bandit-based policy that outputs a random result with probability ϵ or outputs the deterministic results of DeepFM with probability 1-ϵ. * UCB, a naive bandit-based policy that maintains an upper confidence bound for each item and follows the principle of optimism in the face of uncertainty. * SQN, or Self-Supervised Q-learning <cit.>, contains two output layers (heads): one for the cross-entropy loss and the other for RL. We use the RL head to generate final recommendations. * BCQ, or Batch-Constrained deep Q-learning <cit.>, adapts the conventional deep Q-learning to batch RL. We use the discrete-action version <cit.>, whose core idea is to reject these uncertain data and update the policy using only the data of high confidence. * CQL, or Conservative Q-Learning <cit.>, is a model-free RL method that adds a Q-value regularizer on top of an actor-critic policy. * CRR, or Critic Regularized Regression <cit.>, is a model-free RL method that learns the policy by avoiding OOD actions. * MBPO, a vanilla model-based policy optimization method that uses DeepFM as the user model to train an actor-critic policy. * IPS <cit.> is a well-known statistical technique adjusting the target distribution by re-weighting each sample in the collected data. We implement IPS in a DeepFM-based user model, then learn the policy using an actor-critic method. * MOPO, a model-based offline policy optimization method <cit.> that penalizes the uncertainty of the DeepFM-based user model and then learns an actor-critic policy. §.§ Overall Performance Comparison (RQ1) We evaluate all methods in two environments. For the four model-based RL methods (MBPO, IPS, MOPO, and our DORL), we use the same DeepFM model as the user model and fixed its parameters to make sure the difference comes only from the policies. We use the grid search technique on the key parameters to tune all methods in the two environments. For DORL, we search the combination of two key parameters λ_1 and λ_2 in final_r. Both of them are searched in {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5, 10, 50, 100}. We report the results with λ_1=0.01,λ_2=0.05 for KuaiRand and λ_1=0.05,λ_2=5 for KuaiRec. All methods in two environments are evaluated with the quit parameters: M=0, N=4, and the maximum round is set to 30. The results are the average metrics of 100 interaction trajectories. The results are shown in main_result, where all policies are learned with 200 epochs. After learning in each epoch, we will evaluate all methods with 100 episodes (i.e., interaction trajectories) in the two interactive environments. The first row shows the cumulative reward, which directly reflects the long-term satisfaction in our interactive recommendation setting. The second row and third rows dissect the cumulative reward into two parts: the length of the interaction trajectory and the single-round reward, respectively. For a better comparison, we average the results in 200 epochs and show them in results. Besides the three metrics, we also report the majority category domination (MCD) in results. From the results, we observe that the four model-based RL methods (MBPO, IPS, MOPO, and DORL) significantly outperform the four model-free RL methods (SQN, CRR, CQL, and BCQ) with respect to trajectory length and cumulative reward. This is because model-based RL is much more sample efficient than model-free RL. In recommendation, the training data is highly sparse. 
From the results, we observe that the four model-based RL methods (MBPO, IPS, MOPO, and DORL) significantly outperform the four model-free RL methods (SQN, CRR, CQL, and BCQ) with respect to trajectory length and cumulative reward. This is because model-based RL is much more sample-efficient than model-free RL. In recommendation, the training data is highly sparse. Model-free RL learns directly from the recommendation logs, which we have split into separate sequences according to the exit rule described above, and it is extremely difficult to capture the exit mechanism from such sparse logs. By contrast, model-based RL can leverage the user model to construct as many interaction sequences as needed during training, which ensures that the policy can distill useful knowledge from the limited offline samples. This is why we embrace model-based RL for recommendation.

Among the model-based RL methods, MOPO shows a clear improvement over the vanilla MBPO in terms of single-round reward. This is because MBPO does not account for OOD actions in the offline data, which incur extrapolation errors in the policy. MOPO introduces an uncertainty penalizer that makes the policy focus on high-confidence samples, which in turn lets it capture users' interest more precisely. However, MOPO sacrifices many unpopular items because they appear less frequently and are treated as uncertain samples. Therefore, the average interaction length decreases, which in turn reduces the cumulative reward.

Our method DORL overcomes this problem. From main_result, we observe that DORL attains the highest average cumulative reward after several epochs in both KuaiRec and KuaiRand because it reaches the longest interaction length. Compared to MOPO and MBPO, DORL sacrifices a little single-round reward due to its counterfactual exploration philosophy; meanwhile, this exploration greatly improves diversity and lengthens the interactions. It thus achieves the goal of maximizing users' long-term experience.

After enhancing the vanilla MBPO with the IPS technique, the learned user model adjusts the distribution of the training data by re-weighting all items. IPS obtains satisfactory performance in KuaiRec but performs abysmally in KuaiRand, because its well-known high-variance issue can incur large estimation errors. Compared to IPS's hard debiasing mechanism, DORL's soft debiasing is more suitable for model-based RL in recommendation.

As discussed above, the four model-free methods fail on both datasets due to the limited offline samples. Although they can capture users' interest and return a high single-round reward (e.g., SQN and CRR in KuaiRec, and BCQ in KuaiRand), they cannot maintain a long interaction trajectory. For example, BCQ updates its policy only on samples with high confidence, which results in a severe Matthew effect in the recommendation results (reflected by a high MCD and short interactions). SQN's performance oscillates with the largest magnitude because its network is updated through two heads: the RL head serves as a regularizer for the self-supervised head, and when the objectives of the two heads conflict, the performance becomes unstable. Therefore, these methods are not suitable for recommendation, where offline data are sparse.

As for the naive bandit methods, UCB and ϵ-greedy, they are designed to explore and exploit the optimal actions for independent and identically distributed (IID) data and have no capability to optimize long-term rewards. Consequently, they tend to recommend the same items once they finish exploring the offline data, which leads to a high MCD and short interactions. These naive policies are not suitable for pursuing long-term user experience in recommendation.
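For reference in the following analysis, below is a minimal sketch of how the two coefficients enter the penalized reward of final_r. We assume the additive form r̃ = r̂ − λ_1·P_U + λ_2·P_E (uncertainty penalty P_U of the user model, entropy term P_E of the behavior policy); this matches the description in this paper but abstracts away the concrete estimators, so it should be read as an illustration rather than the exact implementation.

    def penalized_reward(r_hat, p_uncertainty, p_entropy, lambda_1, lambda_2):
        # r_hat: reward predicted by the DeepFM user model.
        # p_uncertainty: uncertainty penalty of the user model (conservatism, weighted by lambda_1).
        # p_entropy: entropy term of the behavior policy; a larger value makes the policy
        #            emphasize data induced by high-entropy behavior policies
        #            (the "counterfactual" exploration, weighted by lambda_2).
        return r_hat - lambda_1 * p_uncertainty + lambda_2 * p_entropy

    # Grid search over the coefficients as described above; the reported settings are
    # (lambda_1, lambda_2) = (0.01, 0.05) for KuaiRand and (0.05, 5) for KuaiRec.
    search_grid = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5, 10, 50, 100]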
§.§ Results on Alleviating the Matthew Effect (RQ2)

We have shown in conservative that penalizing uncertainty results in the Matthew effect in recommendation. More specifically, increasing λ_1 pushes the recommended items toward the most dominant ones in the training set, which results in a high MCD value. Here, we show how the introduced “counterfactual” exploration mechanism helps alleviate this effect. We run experiments for the different combinations of (λ_1, λ_2) described above and then average the results over λ_1 to isolate the influence of λ_2. The results are shown in res_entropy. Clearly, increasing λ_2 lengthens the interaction process and reduces majority category domination. That is, when the entropy term of the behavior policy is weighted heavily, (1) the recommender does not keep repeating items of the same category, and (2) the recommendations become diverse instead of concentrating on dominant items. These results demonstrate the effectiveness of the entropy penalty in DORL for alleviating the Matthew effect.

§.§ Results with Different Environments (RQ3)

To validate that DORL works robustly under different environment settings, we vary the window size N of the exit mechanism and fix M=10 during evaluation. The results are shown in leave; we visualize only the most important metric, the cumulative reward. When N is small (N=1), other model-based methods can surpass DORL. When N grows larger (N>3), users' tolerance for similar content (i.e., items with the same category) becomes lower, and the interaction terminates more easily. Under such circumstances, DORL outperforms all other policies, which demonstrates its robustness across environments.

§ CONCLUSION

We point out that conservatism in offline RL can induce the Matthew effect in recommendation. Our studies show that the Matthew effect hurts users' long-term experience in both the music and the video datasets. Through a theoretical analysis of the model-based RL framework, we show that the Matthew effect is amplified by the philosophy of suppressing uncertain samples. This inspires us to add a penalty term that makes the policy emphasize data induced by behavior policies with high entropy, which reintroduces the exploration that conservatism suppresses and thereby alleviates the Matthew effect.

In the future, when fitting user interests is no longer a bottleneck, researchers could consider higher-level goals, such as pursuing users' long-term satisfaction <cit.> or optimizing social utility <cit.>. With the increasing availability of high-quality offline data, we believe that offline RL can be better adapted to recommender systems to achieve these goals. In this process, many interesting yet challenging issues (such as the Matthew effect studied in this work) will arise; addressing them will allow us to build more intelligent recommender systems that benefit society.

§ ACKNOWLEDGEMENTS

This work is supported by the National Key Research and Development Program of China (2021YFF0901603), the National Natural Science Foundation of China (61972372, U19A2079, 62121002), and the CCCD Key Lab of Ministry of Culture and Tourism.